\section{Introduction}
Consider a scalar, autonomous ordinary differential equation (ODE) of the form
\begin{equation}\label{eq:ode}
\begin{split}
\frac{dX}{dt}(t) &= a(X(t)) \qquad \text{for } t > 0, \\
X(0) &= 0
\end{split}
\end{equation}
where \( a\from{\mathbb R} \rightarrow {\mathbb R} \) is Borel measurable. (The initial data $X(0)=0$ can be translated to an arbitrary point $x_0\in{\mathbb R}$, if needed.)
If the drift $a$ is non-smooth then uniqueness of solutions might fail --- this is the \emph{Peano phenomenon}. To distinguish physically reasonable solutions from non-physical ones, we add stochastic noise to the equation, with the aim of letting the noise go to zero. Thus, we consider a stochastic differential equation
\begin{equation}\label{eq:ode_pert} %
\begin{split}
dX_\varepsilon(t) &= a(X_\varepsilon(t)) dt + \varepsilon dW(t), \\
X_\varepsilon(0) &= 0,
\end{split}
\end{equation}
where \( W(t) \) is a one-dimensional Brownian motion on a given probability space \( (\Omega, {\mathcal F}, \Pr )\), and \( \varepsilon > 0 \). By the Zvonkin--Veretennikov theorem \cite{Veretennikov1981,Zvonkin1974}, equation \eqref{eq:ode_pert} has a unique strong solution.
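The selection mechanism can be explored numerically. The following Python sketch is an Euler--Maruyama discretization of \eqref{eq:ode_pert}; the piecewise-constant drift and all parameters are chosen purely for illustration and are not taken from the results below. For a strictly positive, discontinuous drift, the perturbed path tracks the unique deterministic solution as $\varepsilon\to0$.

```python
import math
import random

# Illustrative (hypothetical) discontinuous drift: a(x) = 1 for x < 1, a(x) = 2 for x >= 1.
def a(x):
    return 1.0 if x < 1.0 else 2.0

def euler_maruyama(eps, T=2.0, steps=2000, seed=1):
    # Euler-Maruyama discretization of dX = a(X) dt + eps dW, X(0) = 0
    rng = random.Random(seed)
    dt = T / steps
    sq = math.sqrt(dt)
    x = 0.0
    for _ in range(steps):
        x += a(x) * dt + eps * sq * rng.gauss(0.0, 1.0)
    return x

# The deterministic solution is psi(t) = t for t <= 1 and 1 + 2(t - 1) after,
# so psi(2) = 3; for small eps the perturbed endpoint should be close to it.
print(euler_maruyama(eps=0.01))  # close to 3 for small eps
```

Here the deterministic solution passes through the discontinuity at $x=1$ without ambiguity, which is the regime covered by our main results for positive drifts.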
In this paper we consider the following problem:
\begin{quotation}
\emph{Identify the limit $\lim_{\varepsilon\to0} X_\varepsilon$, and show that it satisfies \eqref{eq:ode}.}
\end{quotation}
Somewhat informally, the challenges are:
\begin{itemize}
\item determining whether the sequence $\{X_\varepsilon\}_\varepsilon$ (or a subsequence) converges, and in what sense;
\item identifying the limit(s), either by a closed form expression or some defining property;
\item proving that the limit solves \eqref{eq:ode} by passing to the limit in the (possibly discontinuous) term $a(X_\varepsilon)$.
\end{itemize}
The problem originated in the 1981 paper by Veretennikov \cite{Veretennikov1981b}, and was treated extensively in the 1982 paper by Bafico and Baldi \cite{BaficoBaldi1982}. Despite its great interest, little work has been done on this problem since then. The original work of Bafico and Baldi dealt with the Peano phenomenon for an autonomous ordinary differential equation. They considered continuous drifts which vanish at some point and are non-Lipschitz on at least one side of the origin. They showed that the $\varepsilon\to0$ limit of the probability measure representing the solution of the stochastic equation is concentrated on at most two trajectories, and they computed the limit measures explicitly for specific drifts. Unfortunately, since the result of Bafico and Baldi relies on the direct computation of the solution of an elliptic PDE, it only works in one dimension: there the elliptic PDE reduces to a second-order boundary value problem whose solution can be computed explicitly. There is therefore little hope that this approach extends to higher dimensions.
The only other work known to us from the previous century is the 1994 paper by Mathieu \cite{Mathieu1994}. In 2001, Gradinaru, Herrmann and Roynette \cite{GradinaruHerrmannRoynette2001} recovered some of the results of Bafico and Baldi using a large deviations approach. Herrmann later continued this line of work together with Tugaut \cite{HerrmannTugaut2010, HerrmannTugaut2012, HerrmannTugaut2014}.
Yet another approach to Bafico and Baldi's original problem was presented by Delarue and Flandoli in \cite{DelarueFlandoli2014}, based on a careful analysis of exit times. Notably, their argument also works in arbitrary dimension, but for a very specific right-hand side, in contrast to the original assumption of a general continuous drift; see also Trevisan \cite{Trevisian13}.
We also point out the recent paper by Delarue and Maurelli \cite{DelarueMaurelli2020}, where multidimensional gradient dynamics with H\"older type coefficients was perturbed by a small Wiener noise.
The 2008 paper by Buckdahn, Ouknine and Quincampoix \cite{BuckdahnOuknineQuincampoix2008} shows that the zero-noise limit is concentrated on the set of all Filippov solutions of \eqref{eq:ode}. Since this set is potentially very large, the result is of limited use for our purposes.
Even less work has been done on zero-noise limits for partial differential equations. To the best of our knowledge, the only paper published so far is Attanasio and Flandoli's note on the linear transport equation \cite{AttanasioFlandoli2009}.
A new approach was proposed by Pilipenko and Proske for the case where the drift in \eqref{eq:ode} has H\"older-type asymptotics in a neighborhood of $x=0$ and the perturbation is a self-similar noise \cite{PilipenkoProske2015}. They used space-time scaling to reduce the small-noise problem to the study of the long-time behaviour of a stochastic differential equation with a \emph{fixed} noise. This approach generalizes to the multidimensional case and to multiplicative L\'evy-noise perturbations \cite{PilipenkoProske2018, KulikPilipenko2020, PavlyukevichPilipenko2020, PilipenkoProske2021}.
\subsection{Uniqueness of classical solutions}
If the drift $a=a(x)$ is continuous then the question of existence and uniqueness of solutions of \eqref{eq:ode} is well understood. It has been known since Peano that a solution always exists, at least for small times. Binding \cite{Binding1979} showed that the solution is unique {if and only if} $a$ satisfies a so-called Osgood condition at every zero $x_0$ of $a$:
\begin{equation}\label{eq:osgood_cond}
\int_{x_0-\delta}^{x_0} \frac{1}{a(z)\wedge0}\,dz= -\infty,\qquad
\int_{x_0}^{x_0+\delta} \frac{1}{a(z)\vee0}\,dz = +\infty
\end{equation}
for all $\delta\in(0,\delta_0)$ for some $\delta_0>0$. (Here and in the remainder we denote \(\alpha \wedge \beta\coloneqq\min(\alpha,\beta)\) and $\alpha\vee\beta\coloneqq\max(\alpha,\beta)$.) The unique solution starting at $x$ is then given by
\begin{equation}\label{eq:deterministicsolution}
X(t;x) = \begin{cases}
x & \text{if } a(x)=0 \\
A^{-1}(t) & \text{if } a(x)\neq0
\end{cases}
\end{equation}
(at least for small $t$), where $A(y)\coloneqq\int_{x}^y 1/a(z)\, dz$ and $A^{-1}$ is its inverse function.
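As a quick sanity check of \eqref{eq:deterministicsolution}, the formula $X(t)=A^{-1}(t)$ can be evaluated numerically. The sketch below uses the hypothetical, Lipschitz drift $a(z)=1+z$ with starting point $x_0$, for which $A(y)=\log\bigl((1+y)/(1+x_0)\bigr)$ and hence $X(t)=(1+x_0)e^t-1$; the drift and parameters are illustrative only.

```python
import math

def A(y, x0, n=20_000):
    # midpoint quadrature of \int_{x0}^{y} dz / a(z) with a(z) = 1 + z
    h = (y - x0) / n
    return sum(h / (1.0 + x0 + (k + 0.5) * h) for k in range(n))

def X(t, x0):
    # invert the increasing function A by bisection: find y with A(y) = t
    lo, hi = x0, x0 + 10.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if A(mid, x0) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0, t = 0.5, 1.0
print(X(t, x0), (1.0 + x0) * math.exp(t) - 1.0)  # the two values should agree
```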
If $a$ is discontinuous --- say, $a\in L^\infty({\mathbb R})$ --- then the question of existence and uniqueness is much more delicate. The paper \cite{Fjordholm2018} gives necessary and sufficient conditions for the uniqueness of \emph{Filippov solutions} of \eqref{eq:ode}. We remark here that the extension to Filippov solutions might lead to non-uniqueness, even when the classical solution is unique. To see this, let $E\subset{\mathbb R}$ be measure-dense, i.e.~a set for which both $U\cap E$ and $U\setminus E$ have positive Lebesgue measure for any nonempty, open set $U\subset{\mathbb R}$ (see \cite{Rud83} for the construction of such a set), and let $a=1+\ind_E$. Then \eqref{eq:deterministicsolution} is the unique classical solution for any starting point $x\in{\mathbb R}$, whereas any function satisfying $\frac{d}{dt}X(t)\in[1,2]$ for a.e.~$t>0$ is a Filippov solution. We will show that even in cases such as this one, the stochastically perturbed solutions converge to the classical solution, and not merely to some Filippov solution, as the result of \cite{BuckdahnOuknineQuincampoix2008} would allow.
\subsection{Main result}
We aim to prove that the distribution of solutions $X_\varepsilon$ of \eqref{eq:ode_pert} converges to a distribution concentrated on either a single solution of the deterministic equation \eqref{eq:ode}, or two ``extremal'' solutions. Based on the discussion in the previous section, we can divide the argument into cases depending on whether $a$ is positive, negative or changes sign in a neighbourhood, and in each case, whether an Osgood-type condition such as \eqref{eq:osgood_cond} holds. The case of negative drift is clearly analogous to a positive drift, so we will merely state the results for negative drift, without proof.
Under the sole assumption $a\in L^\infty({\mathbb R})$, the sequence $\{X_\varepsilon\}_\varepsilon$ is weakly relatively compact in $C([0,T])$, for any $T>0$. (Indeed, by \eqref{eq:ode_pert}, $X_\varepsilon-\varepsilon W$ is uniformly Lipschitz, and $\varepsilon W\overset{P}{\to}0$ as $\varepsilon\to0$. See e.g.~\cite{Billingsley1999} for the full argument.) Hence, the problems are to characterize the distributional limit of any convergent subsequence, to determine whether the entire sequence converges (i.e., to determine whether the limit is unique), and to determine whether the sense of convergence can be strengthened.
Without loss of generality we will assume that the process starts at $x=0$. If $a(0)=0$ but $a$ does \textit{not} satisfy the Osgood condition \eqref{eq:osgood_cond} at $x=0$, then both
$\psi_-$ and $\psi_+$ are classical solutions
of \eqref{eq:ode} (along with infinitely many other solutions), where
\begin{equation}\label{eq:maximalsolutions}
\psi_\pm(t) \coloneqq A_\pm^{-1}(t), \qquad \text{where } A_\pm(x) \coloneqq \int_0^x \frac{1}{a(z)}\,dz \text{ for } x\in{\mathbb R}_\pm.
\end{equation}
In general, the functions $\psi_\pm$ are only defined in a neighborhood of $0$. Since we have assumed that $a$ is bounded,
$\psi_\pm$ cannot blow up in finite time, but they can reach singular points $R_\pm$ at which $A_\pm$ blows up. If $t_\pm\in(0,\infty]$ are the times at which $\psi_\pm(t_\pm)=R_\pm$, then we set $\psi_\pm(t)\equiv R_\pm$ for all $t\geq t_\pm$. We aim to prove that the distribution of $X_\varepsilon$ converges to a distribution concentrated on the two solutions $\psi_-,\ \psi_+$, and to determine the weighting of these two solutions.
\begin{theorem}\label{thm:ZeroNoisePositiveDrift111}
Let $a\in L^\infty({\mathbb R})$ {satisfy $a\geq 0$} a.e.~in $(-\delta_0, \delta_0)$ for some $\delta_0>0$, and
\begin{equation}\label{eq:osgoodOnesided}
\int_{0}^{\delta_0} \frac{1}{a(z)} dz<\infty.
\end{equation}
Then, for any $T>0$, $X_\varepsilon$ converges uniformly in probability to $\psi_+$:
\begin{equation}\label{eq:C2}
\big\|X_\varepsilon-\psi_+ \big\|_{C([0,T])} \overset{P} \to 0 \qquad\text{as } \varepsilon\to0.
\end{equation}
An analogous result holds for \emph{negative} drifts, with obvious modifications.
\end{theorem}
The proof of Theorem \ref{thm:ZeroNoisePositiveDrift111} for strictly positive drifts $a$ is given in Section \ref{sec:positive_drift}, while the general case is considered in Section \ref{section:finalOfTheorem1.1}. The final theorem applies also to signed drifts:
\begin{theorem}\label{thm:ZeroNoiseRepulsive}
Let $a\in L^\infty({\mathbb R})$ satisfy
\begin{equation}\label{eq:osgoodrepulsive}
-\int_{\alpha}^{0} \frac{1}{a(z)\wedge0}\, dz<\infty, \qquad \int_{0}^{\beta} \frac{1}{a(z)\vee 0}\, dz<\infty
\end{equation}
for some $\alpha<0<\beta$ (compare with \eqref{eq:osgood_cond}). Let $\{\varepsilon_k\}_k$ be some sequence satisfying $\varepsilon_k>0$ and $\lim_{k\to\infty}\varepsilon_k=0$, and define
\begin{equation}\label{eq:weights}
p_k \coloneqq
\frac{s_{\varepsilon_k}(0)-s_{\varepsilon_k}(\alpha)}{s_{\varepsilon_k}(\beta)- s_{\varepsilon_k}(\alpha)}
\in [0,1], \qquad
s_\varepsilon(r) \coloneqq \int_0^r \exp\Bigl(-\frac{2}{\varepsilon^2}\int_0^z a(u)\,du\Bigr)\,dz.
\end{equation}
Then $\{P_{\varepsilon_k}\}_k$ is weakly convergent whenever $\{p_k\}_k$ converges, where $P_\varepsilon$ denotes the distribution of $X_\varepsilon$ on $C([0,T])$. Defining $p\coloneqq \lim_{k}p_k$ and $P\coloneqq\wlim_k P_{\varepsilon_k}$, we have
\begin{equation}\label{eq:limitMeasure}
P = (1-p)\delta_{\psi_-} + p\delta_{\psi_+}.
\end{equation}
\end{theorem}
The proof is given in Section \ref{sec:repulsive}, where we also provide tools for computing $p$.
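The weights \eqref{eq:weights} can be evaluated numerically by quadrature. As a purely illustrative example, take the antisymmetric drift $a(x)=\operatorname{sign}(x)\sqrt{|x|}$ (which satisfies \eqref{eq:osgoodrepulsive}): here $\int_0^z a(u)\,du = \tfrac23|z|^{3/2}$, so $s_\varepsilon$ is symmetric and the weights should equal $1/2$ for every $\varepsilon$.

```python
import math

def s(r, eps, n=200_000):
    # midpoint quadrature of s_eps(r) = \int_0^r exp(-(2/eps^2) \int_0^z a(u) du) dz,
    # using \int_0^z a(u) du = (2/3) |z|^{3/2} for a(x) = sign(x) sqrt(|x|)
    h = r / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * h
        total += h * math.exp(-(2.0 / eps**2) * (2.0 / 3.0) * abs(z) ** 1.5)
    return total

def p(alpha, beta, eps):
    # the weight (s_eps(0) - s_eps(alpha)) / (s_eps(beta) - s_eps(alpha)), with s_eps(0) = 0
    sa, sb = s(alpha, eps), s(beta, eps)
    return (0.0 - sa) / (sb - sa)

print(p(-1.0, 1.0, eps=0.3))  # symmetric drift: weight 1/2
```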
\subsection{Outline of the paper}
We now give an outline of the rest of this manuscript. In Section \ref{sec:technical_results} we
give several technical results on convergence of SDEs with respect to perturbations of the drift;
the relation between the solution and its
exit time; and the distribution of the solution of an SDE.
The goal of Section \ref{sec:positive_drift} is to prove Theorem \ref{thm:ZeroNoisePositiveDrift111} in the case where $a>0$, and in Section \ref{section:finalOfTheorem1.1} we extend to the case $a\geq0$. In Section \ref{sec:repulsive} we prove Theorem \ref{thm:ZeroNoiseRepulsive} and provide several results on sufficient conditions for convergence. Finally, we give some examples in Section \ref{sec:examples}.
\section{Technical results}\label{sec:technical_results}
In this section we list a few technical results. The first two results are comparison principles. To prove them we use approximations by SDEs with smooth coefficients and the classical comparison results for such equations. Since we do not assume that the drift is smooth, or even continuous, the results are not standard.
\begin{theorem}\label{thm:convergenceSDE_Thm}
Let $\{a_n\from {\mathbb R} \rightarrow {\mathbb R} \}_{n\geq0}$ be uniformly bounded
measurable functions such that $a_n \to a_0$ pointwise a.e.~as $n\to\infty$. Let $X_n$ be a solution to the SDE
\[
X_n (t )= x_n + \int_0^t a_n (X_n (s )) ds + W(t),\qquad t\in[0,T].
\]
Then $\{X_n\}_n$ converges uniformly in probability:
\[
\bigl\|X_n-X_0\bigr\|_{C([0,T])} \overset{P}\to 0 \qquad \text{as } n\to\infty.
\]
\end{theorem}
For a proof, see e.g.~\cite[Theorem~2.1]{Pilipenko2013}.
\begin{theorem}\label{thm:comparisonThm}
Let \( a_1, a_2\from {\mathbb R} \rightarrow {\mathbb R} \) be locally bounded measurable functions satisfying \( a_1 \leq a_2\) and let $x_1\leq x_2$. Let \( X_1, X_2 \) be solutions to the equations
\begin{align*}
X_i (t )= x_i + \int_0^t a_i (X_i (s)) ds + W(t), \qquad i=1,2.
\end{align*}
Then
\[ X_1 (t )\leq X_2 (t )\qquad \forall\ t \geq 0 \]
with probability 1.
\end{theorem}
The proof is given in Appendix \ref{app:comparisonprinciple}.
\begin{lemma}\label{lem:timeinversion}
Let $\{f_n\}_{n\geq 1}\subset C([0,T])$ be a uniformly convergent sequence of non-random continuous functions and let $f_0\in C([0,T])$ be a strictly increasing function. Set $\tau^x_n\coloneqq\inf\bigl\{t\geq 0 : f_n(t)=x\bigr\}$ for every $n\geq 0$, and assume that
\[
\tau^x_n \to\tau^x_0 \qquad \text{for every } x\in \bigl(f_0(0), f_0(T)\bigr)\cap{\mathbb Q}.
\]
Then
\[
f_n\to f_0 \qquad \text{in } C([0,T]) \text{ as } n\to\infty.
\]
\end{lemma}
\begin{proof}
Let $\mathcal{T}\coloneqq f_0^{-1}({\mathbb Q})$, and note that this is a dense subset of $[0,T]$, since $f_0^{-1}$ is continuous. Let $t\in\mathcal{T}$ be arbitrary and let $x\coloneqq f_0(t)\in{\mathbb Q}$. By the assumptions of the lemma we have $t=\tau_0^x=\lim_{n\to\infty}\tau_n^x$.
Moreover, since $f_n(\tau^x_n)=x$ for sufficiently large $n$, we have
\begin{equation}\label{eq:240}
f_0(t)=x=\lim_{n\to\infty}f_n(\tau^x_n) = \lim_{n\to\infty} f_n(t),
\end{equation}
the last step following from the fact that $f_n$ converges uniformly and $\tau^x_n\to \tau^x_0=t$ as $n\to\infty$. Thus, $\{f_n\}_n$ converges pointwise to $f_0$ on a dense subset of $[0,T]$. But $\{f_n\}_n$ is uniformly convergent by assumption, so necessarily $f_n\to f_0$ uniformly.
\end{proof}
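The mechanism of Lemma \ref{lem:timeinversion} can be illustrated with a simple deterministic example; the sequence below is purely illustrative.

```python
# Illustration with the (purely illustrative) sequence f_n(t) = t + t(1-t)/n
# on [0,1] and f_0(t) = t: the hitting times converge to those of f_0, and
# correspondingly f_n -> f_0 uniformly.
def f(n, t):
    if n == 0:  # n = 0 encodes the limit function f_0
        return t
    return t + t * (1.0 - t) / n

def tau(n, x):
    # first t in [0,1] with f_n(t) = x, by bisection (each f_n is increasing)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(n, mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print([abs(tau(n, 0.5) - tau(0, 0.5)) for n in (10, 100, 1000)])  # decreasing
print(max(abs(f(1000, k / 1000.0) - k / 1000.0) for k in range(1001)))  # sup-norm error
```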
\begin{corollary}\label{cor:ConvergenceOfPaths}
Let $\{\xi_n\}_{n\geq 1} $ be a sequence of continuous stochastic processes $\xi_n\from[0,\infty)\to{\mathbb R}$
that is locally uniformly convergent with probability $1$. Let $\xi_0$ be a strictly increasing continuous process satisfying $\xi_0(0)=0$ and $\lim_{t\to\infty}\xi_0(t)=\infty$. Set $\tau_n^x\coloneqq\inf\{t\geq 0 : \xi_n(t)\geq x\}$ and assume that
\[
\tau_n^x \overset{P}\to\tau_0^x \qquad \text{for every } x\in[0,\infty)\cap{\mathbb Q}.
\]
Then
\[
\xi_n \to \xi_0 \qquad \text{locally uniformly with probability }1.
\]
\end{corollary}
\begin{proof}
Enumerate the positive rational numbers as ${\mathbb Q}\cap (0,\infty)=\{x_n\}_n$.
Select a sequence $\{n^1_k\}_k$ such that
\[
\lim_{k\to\infty}\tau^{x_1}_{n^1_k} = \tau^{x_1}_0 \qquad \text{$\Pr$-a.s.}
\]
Then select a sub-subsequence $\{n^2_k\}_k$ of $\{n^1_k\}_k$ such that
\[
\lim_{k\to\infty}\tau^{x_2}_{n^2_k} = \tau^{x_2}_0 \qquad \text{$\Pr$-a.s.,}
\]
and so on. Then
\[
\Pr\Bigl(\forall\ j\in{\mathbb N} \quad \lim_{k\to\infty}\tau^{x_j}_{n^k_k} = \tau^{x_j}_0 \Bigr) = 1.
\]
From Lemma \ref{lem:timeinversion} it follows that
\[
\Pr\Bigl(\lim_{k\to\infty}\xi_{n^k_k}=\xi_0 \quad \text{uniformly in }[0,T]\Bigr)=1
\]
for any $T>0$. This yields the result.
\end{proof}
Assume that $a, \sigma\from {\mathbb R}\to{\mathbb R}$ are bounded measurable functions and that $\sigma$
is bounded away from zero.
It is well known that the stochastic differential equation
\[
d\xi(t) = a(\xi(t))dt+ \sigma(\xi(t)) dW(t), \qquad t\geq 0,
\]
has a unique (weak) solution, which is a continuous strong Markov process, i.e., $\xi$ is a diffusion process.
Denote
$L\coloneqq a(x)\frac{d}{dx}+\frac{1}{2}\sigma^2(x) \frac{d^2}{dx^2}$
and
let $s$ and $m$ be a scale function and a speed measure of $\xi$; see \cite[Chapter VII]{RevuzYor1999} for details.
Define the hitting time of $\xi$ as $\tau^y\coloneqq\inf\{t\geq 0 : \xi(t) =y\}$.
Recall that $s$ and $m$ are well-defined up to constants,
and $s$ is a non-degenerate $L$-harmonic function, i.e.,
\begin{equation}\label{eq:Lharmonic}
L s=0,
\end{equation}
in particular
\begin{equation}\label{eq:eq_scale}
s(x)\coloneqq\int_{y_1}^x\exp\left(-\int_{y_2}^y\frac{2 a(z)}{\sigma(z)^2}dz\right) dy,
\end{equation}
and
\begin{equation}\label{eq:463}
m(dy)=\frac{2}{s'(y)\sigma(y)^2}dy
\end{equation}
for any choices of $y_1, y_2,$ see \cite[Chapter VII, Exercise 3.20]{RevuzYor1999}.
\begin{theorem}\label{thm:exit_time} Let $x_1<x_2$ be arbitrary.
\begin{enumerate}[leftmargin=*,label=(\roman*)]
\item \cite[Chapter VII, Proposition 3.2 and Exercise 3.20]{RevuzYor1999}
\label{thm:exit_time1}
\begin{align*}
\Pr^{x}\big(\tau^{x_1}\wedge \tau^{x_2}<\infty\big)=1 \qquad &\forall\ x\in[x_1,x_2] \\
\intertext{and}
\Pr^{x}\bigl(\tau^{x_1}< \tau^{x_2}\bigr)=\frac{s(x_2)-s(x)}{s(x_2)-s(x_1)} \qquad &\forall\ x\in[x_1,x_2],
\end{align*}
\item \label{thm:exit_time3}\cite[Chapter VII, Corollary 3.8]{RevuzYor1999}
For any $I=(x_1,x_2) $, $x\in I$ and for any non-negative measurable function $f$ we have
\begin{equation}\label{eq:194}
\Exp^x\biggl(\int_0^{\tau^{x_1}\wedge \tau^{x_2}} \!\!f(\xi(t)) dt\biggr) =
\int_{x_1}^{x_2} \!G(x,y) f(y) m(dy),
\end{equation}
where $G=G_I$ is a symmetric function such that
\[
G_I(x,y)=\frac{(s(x)-s(x_1))(s(x_2)-s(y))}{s(x_2)-s(x_1)}, \qquad x_1\leq x\leq y\leq x_2.
\]
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem:harmonic_functions}~
\begin{enumerate}[leftmargin=*,label=(\textit{\roman*})]
\item The function $\tilde u(x)\coloneqq\Exp^x\Bigl(\int_0^{\tau^{x_1}\wedge \tau^{x_2}} f(\xi(t)) dt\Bigr)$ from the left-hand side of \eqref{eq:194}
is a solution to
\[
\begin{cases}
L \tilde u(x) =-f(x), & x\in(x_1,x_2)\\
\tilde u(x_1)=\tilde u(x_2)=0.
\end{cases}
\]
The function $G$ from \eqref{eq:194} is the corresponding Green function, in the sense that $\tilde{u}(x)$ can be written as the right-hand side of \eqref{eq:194}.
\item \label{thm:exit_time2}
If we take $f(x)=1$ in \eqref{eq:194}, then we get a formula for
the expectation of the exit time $u(x)\coloneqq\Exp^x(\tau^{x_1}\wedge \tau^{x_2})$, $x\in[x_1,x_2]$.
In particular,
\[u(x)=-\int_{x_1}^x2\Phi(y)\int_{x_1}^y \frac{dz}{\sigma(z)^2\Phi(z)}dy+
\int_{x_1}^{x_2}2\Phi(y)\int_{x_1}^y \frac{dz}{\sigma(z)^2\Phi(z)}dy \frac{\int_{x_1}^{x}\Phi(y)dy}{\int_{x_1}^{x_2}\Phi(y)dy},\]
where $\Phi(x)=\exp\left(-\int_{x_1}^x\frac{2 a(z)}{\sigma(z)^2}dz\right).$
\end{enumerate}
\end{remark}
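The exit-time formula in the remark can be sanity-checked in the driftless case $a\equiv0$, $\sigma\equiv1$: there $\Phi\equiv1$ and the formula should reduce to the classical value $\Exp^x(\tau^{x_1}\wedge\tau^{x_2})=(x-x_1)(x_2-x)$. The following sketch evaluates the formula by midpoint quadrature (the parameter values are arbitrary).

```python
# Check of the exit-time formula for a = 0, sigma = 1, where Phi = 1 and the
# inner integral \int_{x1}^{y} dz = y - x1, so the formula reduces to
# u(x) = -\int_{x1}^x 2(y-x1) dy + \int_{x1}^{x2} 2(y-x1) dy * (x-x1)/(x2-x1).
def u(x, x1, x2, n=2000):
    def integral(b):
        # midpoint quadrature of \int_{x1}^{b} 2 (y - x1) dy
        h = (b - x1) / n
        return sum(h * 2.0 * (k + 0.5) * h for k in range(n))
    return -integral(x) + integral(x2) * (x - x1) / (x2 - x1)

x1, x2, x = -1.0, 2.0, 0.5
print(u(x, x1, x2), (x - x1) * (x2 - x))  # should agree
```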
Finally, the following result will be quite useful when taking the limit $\varepsilon\to0$ with $\sigma=\sigma_\varepsilon\coloneqq\varepsilon$ in expressions such as $s$ and $u$ above.
\begin{lemma}\label{lem:approxidentity}
Let $\alpha<\beta$ and $\varepsilon\neq0$, let $f,g\in L^1((\alpha,\beta))$ with $f>0$ almost everywhere, and let
\begin{equation*}
g_\varepsilon(y)\coloneqq\int_{y}^{\beta}\exp\left(-\int_{y}^z \frac{f(u)}{\varepsilon^2}\,du\right)\frac{f(z)}{\varepsilon^2}g(z)\,dz, \qquad y\in[\alpha,\beta].
\end{equation*}
Then $g_\varepsilon \to g$ as $\varepsilon\to 0$ in $L^1((\alpha,\beta))$ and pointwise a.e.~ in $y\in(\alpha,\beta)$. The same is true if
\begin{equation*}
g_\varepsilon(y)\coloneqq\int_{\alpha}^{y}\exp\left(-\int_z^y \frac{f(u)}{\varepsilon^2}\,du\right)\frac{f(z)}{\varepsilon^2}g(z)\,dz, \qquad y\in[\alpha,\beta].
\end{equation*}
\end{lemma}
The proof is given in Appendix \ref{app:comparisonprinciple}. Note that this lemma provides a positive answer to the
question raised by Bafico and Baldi in \cite[Remark~b~in~Section~6]{BaficoBaldi1982} on whether
\cite[Proposition 3.3]{BaficoBaldi1982} still holds under the sole assumption of \( \int_0^r 1/a(z)dz < + \infty \).
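Lemma \ref{lem:approxidentity} says that the exponential kernel acts as an approximate identity concentrated on a layer of width $\sim\varepsilon^2$ to the right of $y$. This can be seen numerically with the illustrative choices $f\equiv1$, $g(z)=z^2$ on $(0,1)$ (all parameters below are for demonstration only).

```python
import math

def g(z):
    return z * z

def g_eps(y, eps, beta=1.0, n=50_000):
    # midpoint quadrature of \int_y^beta exp(-(z-y)/eps^2) (1/eps^2) g(z) dz,
    # truncated to the effective support [y, y + 30 eps^2] of the kernel
    h2 = eps * eps
    top = min(beta, y + 30.0 * h2)
    h = (top - y) / n
    total = 0.0
    for k in range(n):
        z = y + (k + 0.5) * h
        total += h * math.exp(-(z - y) / h2) / h2 * g(z)
    return total

y = 0.5
print(g_eps(y, eps=0.05), g(y))  # g_eps(y) approaches g(y) as eps -> 0
```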
\section{Positive drifts}\label{sec:positive_drift}
This section is dedicated to the proof of Theorem \ref{thm:ZeroNoisePositiveDrift111}. In order to prove the theorem, we first prove the following:
\begin{theorem}\label{thm:ZeroNoiseUnifPositive}
Let $a\in L^\infty({\mathbb R})$ and assume that there exist constants $\delta_0,c_->0$
such that
\begin{equation}\label{eq:assumption_c_pm}
a(x)\geq c_- \quad \text{for a.e. } x\in(-\delta_0,\infty).
\end{equation}
Then we have the uniform convergence in probability
\begin{equation}\label{eq:result}
\|X_\varepsilon- \psi_+\|_{C([0,T])}\overset{P}\to 0 \quad \text{as } \varepsilon\to0 \text{ for all }T>0.
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:ZeroNoiseUnifPositive}]
The proof consists of these steps:
\begin{enumerate}[label=\arabic*.]
\item Show weak relative compactness of $\{X_\varepsilon\}_\varepsilon$.
\item Show that $\bar X_0$ is strictly increasing, where $\bar X_0$ is a limit point of $\{X_\varepsilon\}_\varepsilon$.
\item Reduce to proving convergence of the hitting
times $\tau^\varepsilon\to\tau$, see Lemma \ref{lem:timeinversion}.
\end{enumerate}
\medskip\noindent \textit{Step 1:}
For any $T>0$ the family $\{X_\varepsilon\}_{\varepsilon\in (0,1]}$ is weakly relatively compact in $C([0,T])$
(see e.g.~\cite{Billingsley1999}).
Since $\psi_+$ is non-random, the convergence statement \eqref{eq:result} is equivalent to the weak convergence
\[
X_\varepsilon\Rightarrow \psi_+ \qquad \text{in } C([0,T]) \text{ as } \varepsilon\to0
\]
for any $T>0$. To prove the latter, it suffices to verify that if $\{X_{\varepsilon_k}\}_k$ is any convergent subsequence, then its limit is $\psi_+$.
\medskip \noindent \textit{Step 2:}
Assume that $X_{\varepsilon_k}\Rightarrow \bar X_0$ as $k\to\infty$. Since
\[
X_{\varepsilon_k}(t)=\int_0^t a(X_{\varepsilon_k}(s))\, ds+\varepsilon_k W(t) \qquad \forall\ t\in[0,T],
\]
and $\varepsilon_k W \overset{P}{\to} 0$, Slutsky's theorem implies that also
\begin{equation}\label{eq:Lip}
\int_0^\cdot a(X_{\varepsilon_k}(s))\, ds \Rightarrow \bar X_0 \qquad \text{in }C([0,T]).
\end{equation}
By Skorokhod's representation theorem \cite[Theorem 1.6.7]{Billingsley1999}, we may assume that the convergence in \eqref{eq:Lip} happens almost surely. Since $c_-\leq a \leq c_+$ (for some $c_+>0$), we conclude that
\[
c_-\leq \frac{\bar X_0(t_2)-\bar X_0(t_1)}{t_2-t_1} \leq c_+ \qquad \forall\ t_1,t_2\in[0,T] \text{ with } t_1<t_2, \text{ almost surely.}
\]
In particular, $\bar{X}_0$ is strictly increasing.
\medbreak \noindent \textit{Step 3:}
Notice that assumption \eqref{eq:assumption_c_pm} implies that $\lim_{t\to\infty}\psi_+(t)=+\infty.$
Define
\[
\tau_\varepsilon^x\coloneqq\inf\{t\geq 0\,:\, X_\varepsilon(t)=x\}, \qquad \tau_0^x \coloneqq \inf\{t\geq 0 \,:\, \psi_+(t)=x\} = A(x)
\]
where $A(x)\coloneqq \int_0^x a(z)^{-1}\,dz$ (cf.~\eqref{eq:deterministicsolution}).
By Corollary \ref{cor:ConvergenceOfPaths} it is enough to show convergence in probability of $\tau_\varepsilon$:
\begin{equation}\label{eq:conv_hitting}
\tau_\varepsilon^x\overset{P}\to A(x) \qquad\text{as }\varepsilon\to0 \text{ for every } x\in{\mathbb Q}\cap [0,\infty).
\end{equation}
To check \eqref{eq:conv_hitting} it is sufficient to verify that
\begin{subequations}
\begin{alignat}{2}
&\lim_{\varepsilon\to0} \Exp(\tau_\varepsilon^x) = A(x) &\qquad&\text{for any } x\in{\mathbb Q}\cap [0,\infty), \label{eq:conv_hitting_expectation} \\
&\lim_{\varepsilon\to0}\Var(\tau_\varepsilon^x)= 0 &&\text{for any } x\in{\mathbb Q}\cap [0,\infty). \label{eq:conv_hitting_variance}
\end{alignat}
\end{subequations}
Indeed, by Chebyshev's inequality, $\Pr\bigl(|\tau_\varepsilon^x-A(x)|>\eta\bigr)\leq \eta^{-2}\bigl(\Var(\tau_\varepsilon^x)+(\Exp(\tau_\varepsilon^x)-A(x))^2\bigr)$ for every $\eta>0$, and the right-hand side vanishes under \eqref{eq:conv_hitting_expectation} and \eqref{eq:conv_hitting_variance}. We prove these properties under less restrictive conditions on $a$, given in the lemma below.
\begin{lemma}\label{lem:properties_of_time}
Let $R,\delta>0$ and let $a\in L^\infty({\mathbb R})$ satisfy $a > 0$ a.e.~in $(-\delta,R)$. Assume that the Osgood-type condition
\begin{equation}\label{eq:positivedriftcondition}
\int_{0}^R \frac{1}{a(z)}\, dz<\infty
\end{equation}
is satisfied. Denote $A(r)\coloneqq\int_0^r a(z)^{-1}\,dz$ for $r\in[0,R]$. Then
\begin{subequations}
\begin{alignat}{2}
&\lim_{\varepsilon\to0}\Pr^x\big(\tau^{-\delta}_\varepsilon>\tau^{R}_\varepsilon\big)=1 &&\forall \ 0\leq x\leq R, \label{eq:ProbabilityFirstExit} \\
&\lim_{\varepsilon\to0}\Exp^x\big(\tau^{-\delta}_\varepsilon\wedge \tau^r_\varepsilon\big) = A(r)
{-A(x)} &\qquad& \forall\ 0\leq x<r\leq R. \label{eq:ExpectedTrajectory} \\
\intertext{{Moreover, if $a(x)\geq c_-$ for $x\in(-\infty,-\delta)$ for some constant $c_->0$, then also}}
& {\lim_{\varepsilon\to0}\Exp^0 ( \tau^r_\varepsilon) =A(r)} &&\forall\ 0<r\leq R, \label{eq:ConvergenceOfExpectationsExits} \\
\intertext{ and if $a(x)\geq c_->0$ for all $ x\in{\mathbb R}$, then}
&{\lim_{\varepsilon\to0}\Var^0( \tau^r_\varepsilon) =0} &&\forall\ 0<r\leq R. \label{eq:VanishingVariance}
\end{alignat}
\end{subequations}
\end{lemma}
We finalize the proof of Theorem \ref{thm:ZeroNoiseUnifPositive} and then prove the claims of Lemma \ref{lem:properties_of_time} separately. Define the function
\[
\tilde a(x)\coloneqq\begin{cases} a(x) & \text{if } x>-\delta,\\ c_- & \text{if } x\leq -\delta,\end{cases}
\]
and denote the solution to the corresponding stochastic differential equation by $\tilde X_\varepsilon$. It follows from Lemma \ref{lem:properties_of_time} that
\[
\|\tilde X_\varepsilon- \psi_+\|_{C([0,T])}\overset{P}\to 0 \qquad \text{as } \varepsilon\to0 \text{ for all }T>0.
\]
Uniqueness of the solution yields $\Pr\bigl(\tilde X_\varepsilon(t)= X_\varepsilon(t) \text{ for } t\leq \tau_\varepsilon^{-\delta}\bigr)=1.$
It is easy to see that $\Pr(\tau_\varepsilon^{-\delta}=\infty)\to1 $ as $\varepsilon\to0.$ This completes the proof of Theorem \ref{thm:ZeroNoiseUnifPositive}.
\end{proof}
\begin{proof}[Proof of \eqref{eq:ProbabilityFirstExit} in Lemma \ref{lem:properties_of_time}]
By Theorem \ref{thm:exit_time}\ref{thm:exit_time1}, we can write
\[
\Pr^x(\tau^r_\varepsilon<\tau^{-\delta}_\varepsilon) = \frac{s_\varepsilon(x)}{s_\varepsilon(r)} \geq \frac{s_\varepsilon(0)}{s_\varepsilon(r)}
\]
for every $x\in[0,r]$, where (cf.~\eqref{eq:eq_scale})
\begin{equation}\label{eq:scalefunction}
s_\varepsilon(x)\coloneqq\int_{-\delta}^xe^{-B(y)/\varepsilon^2}\, dy, \qquad B(y) \coloneqq 2\int_{-\delta}^y a(z) dz.
\end{equation}
We have
\begin{equation}\label{eq:scale-function-estimate}
s_\varepsilon(0) = \int_{-\delta}^0 e^{-B(y)/\varepsilon^2}\,dy \geq \delta e^{-B(0)/\varepsilon^2}
\end{equation}
since $B$ is nondecreasing. For sufficiently small $\varepsilon>0$ we can find $y_\varepsilon>0$ such that $B(y_\varepsilon)=B(0)+\varepsilon$. Note that $y_\varepsilon\to0$ as $\varepsilon\to0$. Again using the fact that $B$ is nondecreasing, we can estimate
\begin{align*}
s_\varepsilon(r) &= s_\varepsilon(0)+\int_0^r e^{-B(y)/\varepsilon^2}\,dy
\leq s_\varepsilon(0) + y_\varepsilon e^{-B(0)/\varepsilon^2} + (r-y_\varepsilon)e^{-B(y_\varepsilon)/\varepsilon^2} \\
&\leq e^{-B(0)/\varepsilon^2}\Bigl(s_\varepsilon(0)e^{B(0)/\varepsilon^2} + y_\varepsilon + re^{-1/\varepsilon}\Bigr).
\end{align*}
Using \eqref{eq:scale-function-estimate}, we get
\[
\Pr^x(\tau^r_\varepsilon<\tau^{-\delta}_\varepsilon) \geq \frac{s_\varepsilon(0)e^{B(0)/\varepsilon^2}}{s_\varepsilon(0)e^{B(0)/\varepsilon^2} + y_\varepsilon+re^{-1/\varepsilon}}
\geq \frac{\delta}{\delta + y_\varepsilon+re^{-1/\varepsilon}}.
\]
Since $y_\varepsilon+re^{-1/\varepsilon}\to0$ as $\varepsilon\to0$, we conclude that $\Pr^x(\tau^r_\varepsilon<\tau^{-\delta}_\varepsilon)\to1$ as $\varepsilon\to0$.
\end{proof}
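The convergence $\Pr^x(\tau^r_\varepsilon<\tau^{-\delta}_\varepsilon)\to1$ can also be observed numerically via the scale-function representation above. The sketch below uses the illustrative degenerate drift $a(x)=\sqrt{|x|}$, which is positive a.e.\ but vanishes at $0$; here $B(y)=2\int_{-\delta}^y\sqrt{|z|}\,dz$ has a closed form, and the remaining integral is evaluated by midpoint quadrature (all parameters are for demonstration only).

```python
import math

def B(y, delta):
    # B(y) = 2 \int_{-delta}^{y} sqrt(|z|) dz for the drift a(x) = sqrt(|x|)
    if y <= 0.0:
        return (4.0 / 3.0) * (delta ** 1.5 - (-y) ** 1.5)
    return (4.0 / 3.0) * (delta ** 1.5 + y ** 1.5)

def s(x, eps, delta, n=100_000):
    # midpoint quadrature of s_eps(x) = \int_{-delta}^{x} exp(-B(y)/eps^2) dy
    h = (x + delta) / n
    return sum(
        h * math.exp(-B(-delta + (k + 0.5) * h, delta) / eps**2) for k in range(n)
    )

delta, r = 0.5, 0.5
for eps in (0.4, 0.2, 0.1):
    # P^0(tau^r < tau^{-delta}) = s_eps(0)/s_eps(r); should approach 1
    print(eps, s(0.0, eps, delta) / s(r, eps, delta))
```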
\begin{proof}[Proof of \eqref{eq:ExpectedTrajectory} in Lemma \ref{lem:properties_of_time}]
We will show that for any $r\in(0,R]$ and $x\in[0,r]$, we have
$\lim_{\varepsilon\to0} \Exp^x\big(\tau^{-\delta}_\varepsilon \wedge \tau^r_\varepsilon\big) = \int_x^ra(z)^{-1}dz.$
It follows from Theorem \ref{thm:exit_time}\ref{thm:exit_time3}\ with $x_1=-\delta$, $x_2=r$, $f\equiv1$,
$s= s_\varepsilon$ (cf.~\eqref{eq:scalefunction}) and $m=m_\varepsilon$ (cf.~\eqref{eq:463}) that for any $\delta>0$ and $x\in[0,r]$,
\begin{equation}\label{eq:668}
\begin{aligned}
&\Exp^x\big(\tau^{-\delta}_\varepsilon \wedge \tau^{r}_\varepsilon\big)
= \int_{-\delta}^r G_\varepsilon(x,y)\,m_\varepsilon(dy) \\
&= \int_{-\delta}^x G_\varepsilon(y,x)\,m_\varepsilon(dy)+\int_x^r G_\varepsilon(x,y)\,m_\varepsilon(dy) \\
&= \int_{-\delta}^x \frac{s_\varepsilon(y)(s_\varepsilon(r)-s_\varepsilon(x))}{s_\varepsilon(r)}\,
m_\varepsilon(dy)+ \int_x^r \frac{s_\varepsilon(x)(s_\varepsilon(r)-s_\varepsilon(y))}{s_\varepsilon(r)}\,m_\varepsilon(dy) \\
&= \int_{-\delta}^x \underbrace{\frac{s_\varepsilon(y)}{s_\varepsilon(r)}}_{\eqqcolon\, p_\varepsilon(y)} (s_\varepsilon(r)-s_\varepsilon(x))\,
m_\varepsilon(dy) + \underbrace{\frac{s_\varepsilon(x)}{s_\varepsilon(r)}}_{=\,p_\varepsilon(x)} \int_x^r (s_\varepsilon(r)-s_\varepsilon(y))\, m_\varepsilon(dy) \\
&= \int_{-\delta}^x p_\varepsilon(y)\left[
\int_x^r\exp\left(-\int_{-\delta}^z\frac{2 a(u)}{\varepsilon^2}du\right) dz\right]
\frac{2}{\varepsilon^2} \exp\left(\int_{-\delta}^y\frac{2 a(z)}{\varepsilon^2}dz\right) dy \\
&\quad + p_\varepsilon(x)\int_x^r\left[
\int_{y}^r\exp\left(-\int_{-\delta}^z\frac{2 a(u)}{\varepsilon^2}du\right) dz \right]
\frac{2}{\varepsilon^2} \exp\left(\int_{-\delta}^y\frac{2 a(z)}{\varepsilon^2}dz\right) dy \\
&= \int_{-\delta}^xp_\varepsilon(y) \int_x^r\exp\left(-\int_y^z\frac{2 a(u)}{\varepsilon^2}du\right)\frac{2}{\varepsilon^2} \,dz
dy \\
&\quad + p_\varepsilon(x)\int_x^r\int_{y}^r\exp\left(-\int_y^z\frac{2 a(u)}{\varepsilon^2}du\right)\frac{2}{\varepsilon^2} \,dzdy \\
&= {
\int_{-\delta}^xp_\varepsilon(y) \int_y^r\exp\left(-\int_y^z\frac{2 a(u)}{\varepsilon^2}du\right)
\frac{2 a(z)}{\varepsilon^2} \frac{\ind_{(x,r)}(z)}{ a(z)} \,dz
dy }\\
&\quad + p_\varepsilon(x)\int_x^r\int_{y}^r\exp\left(-\int_y^z\frac{2 a(u)}{\varepsilon^2}du\right)
\frac{2 a(z)}{\varepsilon^2} \frac{1}{ a(z)} \,dzdy \\
&= I_\varepsilon + \mathit{II}_\varepsilon.
\end{aligned}
\end{equation}
By Theorem \ref{thm:exit_time}\ref{thm:exit_time1} we have $p_\varepsilon(x) = \Pr^x(\tau_\varepsilon^{-\delta}>\tau_\varepsilon^r)$, and \eqref{eq:ProbabilityFirstExit} in Lemma \ref{lem:properties_of_time} implies that $\lim_{\varepsilon\to0}p_\varepsilon(x)=1$ for every $x\in[0,r]$. Letting $f(z)=2a(z)$ and $g(z) = \frac{1}{a(z)}\ind_{(x,r)}(z)$ for $z\in[0,r]$, we see that the $z$-integral in $\mathit{II}_\varepsilon$ can be written as
\[
{\int_y^r\exp\left(-\int_y^z\frac{f(u)}{\varepsilon^2}du\right)\frac{f(z)}{\varepsilon^2}g(z) \,dz.}
\]
Note that $f,g\in L^1([0,r])$, by \eqref{eq:positivedriftcondition}. Thus, we can apply Lemma \ref{lem:approxidentity} with $\alpha=0$, $\beta=r$ to get
\[
g_\varepsilon(y)\coloneqq\int_y^r\exp\left(-\int_y^u\frac{2 a(z)}{\varepsilon^2}dz\right)\frac{2}{\varepsilon^2} \,du \to g(y)
\]
in $L^1([0,r])$ and pointwise a.e.\ as $\varepsilon\to0$, so that
\[
\mathit{II}_\varepsilon \to \int_x^r g(y)\,dy = \int_x^r\frac{1}{a(y)}\,dy.
\]
A similar manipulation will hold for $I_\varepsilon$, with the same functions $f$ and $g$, yielding
\[
I_\varepsilon \to \int_{-\delta}^x \frac{1}{a(y)}\ind_{(x,r)}(y)\,dy = 0.
\]
Putting these together gives
\[
\lim_{\varepsilon\to0}\Exp^x\big(\tau^{-\delta}_\varepsilon \wedge \tau^{r}_\varepsilon\big)
= \lim_{\varepsilon\to0} I_\varepsilon+\mathit{II}_\varepsilon = \int_x^r \frac{1}{a(y)}\,dy.
\]
This concludes the proof.
\end{proof}
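As a numerical sanity check (not part of the proof), the convergence $g_\varepsilon\to g$ and $\mathit{II}_\varepsilon\to\int_x^r\frac{dy}{a(y)}$ can be observed by direct quadrature. The drift $a(z)=1+z$, the interval $[0,1]$ and the value $\varepsilon=0.2$ below are our own illustrative choices.

```python
import numpy as np

# Illustrative check (our own choice of parameters, not from the paper):
# drift a(z) = 1 + z on [x, r] = [0, 1], so the limiting expected exit
# time is ∫_0^1 dy/a(y) = ln 2, and g_ε(y) should approach 1/(1 + y).
eps = 0.2
N = 1500
y = np.linspace(0.0, 1.0, N)
h = y[1] - y[0]
A2 = 2*y + y**2                      # A2(z) = ∫_0^z 2a(s) ds = 2z + z²

def trap(F):                         # trapezoidal rule along the last axis
    return h*(F.sum(axis=-1) - 0.5*(F[..., 0] + F[..., -1]))

# kernel exp(-∫_y^u 2a/ε² dz)·2/ε² on {u ≥ y}, cf. the definition of g_ε
K = np.where(y[None, :] >= y[:, None],
             np.exp(-(A2[None, :] - A2[:, None])/eps**2)*2/eps**2, 0.0)
g_eps = trap(K)                      # ≈ 1/a(y), except near the endpoint y = 1
II = trap(g_eps)                     # ≈ ∫_0^1 dy/(1 + y) = ln 2
```

For smaller $\varepsilon$ the agreement improves, at the cost of a finer grid to resolve the kernel width, which scales like $\varepsilon^2$.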
\begin{proof}[Proof of \eqref{eq:ConvergenceOfExpectationsExits} in Lemma \ref{lem:properties_of_time}]
{For any $x\in[0,r)$, note that $\lim_{\delta\to+\infty} \Exp^x(\tau^{-\delta}_\varepsilon\wedge \tau^r_\varepsilon)=\Exp^x(\tau^r_\varepsilon)$.
Using \eqref{eq:668} and the assumption $a\geq c_->0$, it is straightforward to obtain estimates for these expectations, uniform in $\varepsilon$, and to conclude that $\lim_{\varepsilon\to0} \Exp^0(\tau^r_\varepsilon)= A(r).$}
\end{proof}
\begin{proof}[Proof of \eqref{eq:VanishingVariance} in Lemma \ref{lem:properties_of_time}]
Let $X_\varepsilon$ solve \eqref{eq:ode_pert} and define $Y_\varepsilon(t) = \varepsilon^{-2}X_\varepsilon(\varepsilon^2t)$. Substitution into \eqref{eq:ode_pert} then gives
\begin{equation}\label{eq:scaledSDE}
Y_\varepsilon(t) = \int_0^t a\big(\varepsilon^2 Y_\varepsilon(s)\big)\,ds + B(t)
\end{equation}
where $B(t)=\varepsilon^{-1}W(\varepsilon^2t)$ is another Brownian motion. Applying the same scaling to $\tau$, we see that if $\pi^n_\varepsilon$ is the exit time of $Y_\varepsilon$ from $(-\infty,n]$ then $\pi^n_\varepsilon = \varepsilon^{-2}\tau^{\varepsilon^2n}_\varepsilon$.
Now fix $x>0$, let $n=\varepsilon^{-2} x$ (assumed for simplicity to be an integer) and define the increments $\zeta^1_\varepsilon=\pi^1_\varepsilon$, $\zeta^2_\varepsilon=\pi^2_\varepsilon-\pi^1_\varepsilon$, $\dots$, $\zeta^n_\varepsilon = \pi^n_\varepsilon-\pi^{n-1}_\varepsilon$. The strong Markov property ensures that $\zeta^1_\varepsilon,\dots,\zeta^n_\varepsilon$ are independent random variables. Hence,
\begin{align*}
\Var(\tau^x_\varepsilon) &= \varepsilon^4\Var(\pi^n_\varepsilon) = \varepsilon^4\Var\Biggl(\sum_{k=1}^n\zeta^k_\varepsilon\Biggr) \\
&= \varepsilon^4\sum_{k=1}^n\Var(\zeta^k_\varepsilon).
\end{align*}
Hence, if we can bound $\Var(\zeta^k_\varepsilon)$ by a constant independent of $\varepsilon$, then $\Var(\tau^x_\varepsilon) \leq \varepsilon^4Cn = C x \varepsilon^2 \to 0$, and we are done. To this end, note first the naive estimate $\Var(\zeta^k_\varepsilon)\leq \Exp((\zeta^k_\varepsilon)^2)$. Next, we invoke the comparison principle Theorem \ref{thm:comparisonThm} between $Y_\varepsilon$ and
\[
Z_\varepsilon(t)\coloneqq\int_0^t c_-\,ds+B(t) = c_-t+B(t),
\]
yielding $Z_\varepsilon(t)\leq Y_\varepsilon(t)$ for all $t\geq0$, almost surely. Hence, $\pi^n_\varepsilon \leq \tilde{\pi}^n_\varepsilon$, where $\tilde{\pi}^n_\varepsilon$ is the exit time of $Z_\varepsilon$, and correspondingly, $\zeta^k_\varepsilon\leq \tilde{\zeta}^k_\varepsilon$ for $k=1,\dots,n$. Since $(\tilde{\zeta}^k_\varepsilon)_{k=1}^n$ are identically distributed, we get
\[
\Exp\big((\zeta^k_\varepsilon)^2\big) \leq \Exp\big((\tilde{\zeta}^k_\varepsilon)^2\big) = \Exp\big((\tilde{\zeta}^1_\varepsilon)^2\big) = \Exp\big((\tilde{\pi}^1_\varepsilon)^2\big).
\]
To estimate the latter, we have
\begin{align*}
\Pr\big(\tilde{\pi}^1_\varepsilon > t\big) &= \Pr\big(\tilde{\pi}^1_\varepsilon > t,\ c_-t+B_t<1\big) + \underbrace{\Pr\big(\tilde{\pi}^1_\varepsilon > t,\ c_-t+B_t \ge 1\big)}_{=\;0} \\
&\leq \Pr\big(c_-t+B_t<1\big) = \Pr\big(B_t<1-c_-t\big) \\
&= \int_{-\infty}^{1-c_-t} \frac{1}{\sqrt{2\pi t}}\exp\biggl(-\frac{|x|^2}{2t}\biggr)\,dx \\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{(1-c_-t)/\sqrt{t}} \exp\biggl(-\frac{|y|^2}{2}\biggr)\,dy.
\end{align*}
It follows that
\[ \Exp((\tilde{\pi}^1_\varepsilon)^2) = \int_0^\infty 2t \Pr(\tilde{\pi}^1_\varepsilon > t)\,dt
\leq \frac{1}{\sqrt{2\pi}}\int_0^\infty 2t\int_{-\infty}^{(1-c_-t)/\sqrt{t}} \exp\left(-\frac{|y|^2}{2}\right)\,dy\,dt < \infty, \]
and we are done.
\end{proof}
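The scaling argument above can be illustrated by a small Monte Carlo sketch (our own illustrative setup, not from the paper): for the constant drift $a\equiv c_-=1$, the hitting time of $x=1$ has mean $x/c_-=1$ and variance $\varepsilon^2x/c_-^3=\varepsilon^2$, consistent with the bound $\Var(\tau^x_\varepsilon)\leq Cx\varepsilon^2$.

```python
import numpy as np

# Euler–Maruyama sketch with illustrative parameters (our choice): a ≡ c_- = 1,
# target level x = 1, ε = 0.1.  The exact hitting time is inverse Gaussian with
# mean x/c_- = 1 and variance ε²x/c_-³ = 0.01, matching Var(τ) ≤ C x ε².
rng = np.random.default_rng(0)
eps, dt, npaths, level = 0.1, 1e-3, 2000, 1.0
X = np.zeros(npaths)
tau = np.zeros(npaths)
alive = np.ones(npaths, dtype=bool)
for k in range(int(5.0/dt)):         # horizon T = 5 is ample for unit drift
    X[alive] += dt + eps*np.sqrt(dt)*rng.standard_normal(alive.sum())
    crossed = alive & (X >= level)
    tau[crossed] = (k + 1)*dt
    alive &= ~crossed
    if not alive.any():
        break
mean_tau, var_tau = tau.mean(), tau.var()
```

The empirical variance is of order $\varepsilon^2$, so the hitting times concentrate around the deterministic value as $\varepsilon\to0$.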
Using the above theorem and standard comparison principles, we extend the result
to drifts satisfying an Osgood-type condition:
\begin{lemma}\label{lem:ZeroNoiseOsgood}
Let $a\in L^\infty({\mathbb R})$ satisfy $a>0$ a.e.~in $(-\delta_0,\infty)$ for some $\delta_0>0$. Assume that for all $R>0$,
\[
\int_{0}^R \frac{1}{a(z)} dz<\infty.
\]
Then $X_\varepsilon$ converges to $\psi_+$ in probability, uniformly on compact time intervals:
\begin{equation}\label{eq:C22}
\big\|X_\varepsilon-\psi_+\big\|_{C([0,T])} \overset{P} \to 0 \qquad\text{as } \varepsilon\to0 \text{ for all } T>0
\end{equation}
(where $\psi_+$ is the maximal solution \eqref{eq:maximalsolutions}).
\end{lemma}
\begin{proof}
As in the proof of Theorem \ref{thm:ZeroNoiseUnifPositive} we know that $\{X_\varepsilon\}_\varepsilon$
is weakly relatively compact, so it has some weakly convergent subsequence $\{X_{\varepsilon_k}\}_k$. Due to Skorokhod's representation theorem \cite[Theorem 1.6.7]{Billingsley1999} there exists a sequence of copies
$\tilde X_{\varepsilon_k}$ of $X_{\varepsilon_k}$ that satisfy the corresponding SDEs with Wiener processes $B_{\varepsilon_k}$ and such that
$\{\tilde X_{\varepsilon_k}\}_k$ converges almost surely to some continuous non-decreasing process $\tilde X$:
\begin{equation}\label{eq:conv_tilde}
\Pr\Bigl(\lim_{k\to\infty} \|\tilde X_{\varepsilon_k}-\tilde X\|_{C([0,T])}=0 \quad \forall\ T>0\Bigr)=1.
\end{equation}
{Since the limit process is non-decreasing, we may assume without loss of generality that the function
$a$ satisfies $a(x)=c_-$ for all $x\in(-\infty,-\delta_0),$ where $c_->0$ is a constant.}
Define \( a_n \coloneqq a + \nicefrac{1}{n} \),
{let $\tilde X_{n,\varepsilon}$ be the corresponding stochastic
process and let $X_n$ denote the solution of the corresponding deterministic problem}. It holds for all \( n \in {\mathbb N} \) that \( a_n \geq \nicefrac{1}{n} \),
thus the result above holds for \( a_n \).
Let $\pi^x$, $\pi^x_{\varepsilon_k}$, $\pi^x_{n,\varepsilon_k}$, $\tau^x_n$ and $\tau^x$ be the hitting
times of $\tilde X$, $\tilde X_{\varepsilon_k}$, $\tilde X_{n,\varepsilon_k}$, $X_n$ and $\psi_+$, respectively. By the comparison principle Theorem \ref{thm:comparisonThm}, we know that
\begin{equation}\label{eq:ineq_limits1}
\tilde X_{n,\varepsilon_k} \geq \tilde X_{\varepsilon_k}, \qquad \text{or equivalently,} \qquad \pi^x_{n,\varepsilon_k} \leq \pi^x_{\varepsilon_k}
\; \forall\ x
\end{equation}
{(cf.~Lemma~\ref{lem:timeinversion}).}
It follows
from Theorem \ref{thm:ZeroNoiseUnifPositive} that $\tilde X_{n,\varepsilon_k}\to X_n$ a.s.~as $k\to\infty$, which together with \eqref{eq:conv_tilde} and \eqref{eq:ineq_limits1} implies
\begin{equation}\label{eq:ineq_limits2}
X_n \geq \tilde X, \qquad\text{or equivalently,}\qquad \tau^x_{n} \leq \pi^x\;\forall\ x.
\end{equation}
The lower semi-continuity of a hitting time with respect to its process also implies that $\pi^x\leq \liminf_{k\to\infty} \pi^x_{\varepsilon_k}$
a.s. for any $x\geq 0$. Hence, for any $x\geq 0$,
\begin{align*}
A(x)&=\lim_{n\to\infty}A_n(x) = \lim_{n\to\infty} \tau_n^x \leq \Exp(\pi^x) \\
&\leq \Exp\Bigl(\liminf_{k\to\infty} \pi_{\varepsilon_k}^x\Bigr)
\leq \liminf_{k\to\infty} \Exp\bigl(\pi_{\varepsilon_k}^x\bigr)
= A(x),
\end{align*}
the last equality following from \eqref{eq:ExpectedTrajectory} in Lemma \ref{lem:properties_of_time}. Hence, $\Exp(\pi^x)=A(x)$ for all $x\geq0$, and since $\pi^x\geq\tau_n^x\to A(x)$ as $n\to\infty$, we conclude that $\pi^x=A(x)$ almost surely for every $x\geq0$,
so Corollary \ref{cor:ConvergenceOfPaths} implies that $\tilde X=A^{-1}=\psi_+$ almost surely. Since the limit $\psi_+$ is deterministic, weak convergence of the original subsequence upgrades to uniform convergence in probability:
\[
\|X_{\varepsilon_k}- \psi_+\|_{C([0,T])} \overset{P}\to 0 \qquad\text{as } k\to\infty \text{ for all } T>0.
\]
Finally, since the limit $\psi_+$ is the same for every weakly convergent subsequence, the entire sequence $\{X_\varepsilon\}_\varepsilon$ converges, proving \eqref{eq:C22}.
\end{proof}
We are now ready to prove Theorem \ref{thm:ZeroNoisePositiveDrift111} under the additional condition that $a>0$ a.e.~in $(-\delta_0,0)$:
\begin{proof}[Proof of Theorem \ref{thm:ZeroNoisePositiveDrift111} for positive $a$]
The case when $
\int_{0}^{R} \frac{dx}{a(x)\vee0}<\infty
$ for any $R>0$ (and hence, in particular, $a>0$ a.e.~in $(-\delta_0,\infty)$)
has been considered in Lemma \ref{lem:ZeroNoiseOsgood}. Thus, we can assume that there is some $R>0$ such that $a>0$ a.e.~on $(-\delta_0,R)$, and for any (small) $\delta>0$,
\begin{equation}\label{eq:osgoodblowup}
\int_0^{R-\delta} \frac{dx}{a(x)}<\infty \quad\text{but}\quad
\int_0^{R+\delta} \frac{dx}{a(x)\vee 0}=\infty.
\end{equation}
Recall that
\[
\psi_+(x)=
\begin{cases}
A^{-1}(x),& x\in[0,A(R)),\\
R, & x\geq A(R).
\end{cases}
\]
(Note that $A(R)$ may be equal to $\infty.$) The proof of the theorem consists of the following steps:
\begin{enumerate}[label=\arabic*.]
\item Prove the theorem for the stopped process $X_\varepsilon(\cdot\wedge\tau^R_\varepsilon)$.
\item Prove the theorem for nonnegative drifts.
\item Extend to possibly negative drifts.
\end{enumerate}
\noindent\textit{Step 1.}
Set $\widehat a_m(x)\coloneqq a(x)\ind_{x\leq R-\nicefrac{1}{m}}+\ind_{x>R-\nicefrac1m}$ for
$m\in{\mathbb N}$, and note that $\widehat a_m$ satisfies the conditions of
Lemma \ref{lem:ZeroNoiseOsgood}. Let $\widehat{X}_{m,\varepsilon} $ denote the solution to the
corresponding SDE, $\widehat{X}_{m} $ its limit, and $\widehat{\tau}_{m,\varepsilon}^x,\ \widehat{\tau}_{m }^x$
the corresponding
hitting times. It follows from the uniqueness of a solution that
\[
\Pr\Bigl( \widehat{\tau}_{m,\varepsilon}^{R-\nicefrac1m}=\widehat{\tau}_\varepsilon^{R-\nicefrac1m}\Bigr)=1 \quad\text{and}\quad
\Pr\Bigl(\widehat{X}_{m,\varepsilon}(t) = X_{\varepsilon}(t) \quad\forall\ t\leq
\widehat{\tau}_\varepsilon^{R-\nicefrac1m}\Bigr)=1.
\]
Thus, by Lemma \ref{lem:ZeroNoiseOsgood},
\begin{equation}\label{eq:605}
\begin{split}
\sup_{t\in[0,T]}\big|X_{\varepsilon}\bigl(t\wedge \widehat{\tau}_\varepsilon^{R-\nicefrac{1}{m}}\bigr)- A^{-1}\big(t\wedge \widehat{\tau}_\varepsilon^{R-\nicefrac{1}{m}}\big)\big|
&\overset{P} \to 0 \qquad\text{as } \varepsilon\to0 \text{ for all } T>0,
\\
\sup_{t\in[0,T]}\big|\widehat{X}_{m,\varepsilon}\bigl(t\wedge \widehat{\tau}_\varepsilon^{R-\nicefrac{1}{m}}\bigr)- A^{-1}\bigl(t\wedge \widehat{\tau}_\varepsilon^{R-\nicefrac1m}\bigr)\big|
&\overset{P} \to 0 \qquad\text{as } \varepsilon\to0 \text{ for all } T>0,
\end{split}
\end{equation}
for every $m\in{\mathbb N}$.
Let $\overline X_0$ be a limit point of $\{X_\varepsilon\}_\varepsilon$ and
$X_{\varepsilon_k}\Rightarrow \overline X_0$ as $k\to\infty.$
It follows from \eqref{eq:605} that $\overline X_0(\cdot\wedge \tau^{R-\nicefrac1m}_m) = A^{-1}(\cdot\wedge \tau^{R-\nicefrac1m}_m )$, and since $m$ is arbitrary, we have $\overline{X}_0(\cdot\wedge \tau^{R} ) = A^{-1}(\cdot\wedge \tau^{R} )$, that is, $\overline X_0(\cdot\wedge\tau^R) = \psi_+(\cdot\wedge\tau^R)$. In particular, the entire sequence of stopped processes converges, by uniqueness of the limit.
\medskip\noindent\textit{Step 2.}
Assume next, in addition to \eqref{eq:osgoodblowup}, that $a\geq0$ a.e.~in ${\mathbb R}$. Any limit point of $\{X_\varepsilon\}_\varepsilon$ is a non-decreasing process, so to prove the theorem it suffices to verify that for any $\delta>0$ and $M>0$
\[
\limsup_{k\to\infty}\Pr \bigl( \tau^{R+\delta}_{\varepsilon_k}<M\bigr)=0.
\]
Set $a_n\coloneqq a+\nicefrac{1}{n}$ and let $ X_{n,\varepsilon}$ denote
the solution to the corresponding SDE.
It follows from comparison Theorem \ref{thm:comparisonThm} that for any $M>0$
\[
\limsup_{k\to\infty}\Pr\bigl(\tau^{R+\delta}_{\varepsilon_k}<M\bigr)\leq
\liminf_{n\to\infty}\limsup_{k\to\infty}\Pr\bigl(\tau^{R+\delta}_{n,\varepsilon_k}<M\bigr).
\]
Theorem \ref{thm:ZeroNoiseUnifPositive} implies that $\lim_{\varepsilon\to0} X_{n,\varepsilon}=X_n=A^{-1}_n,$ so the right hand side of the above
inequality equals zero for any $M$. This concludes the proof if $a$ is non-negative everywhere.
\medskip\noindent\textit{Step 3.}
In the case that $a$ takes negative values, we consider the processes $X_\varepsilon^+$ satisfying the corresponding SDEs with drift $a^+(x)\coloneqq a(x)\vee 0$. We have already proved in Step 2 that
\begin{alignat*}{2}
\bigl\|X_\varepsilon^+-\psi_+\bigr\|_{C([0,T])} \overset{P}\to 0 && \text{as }\varepsilon\to0 \;\forall\ T>0 \\
\intertext{(since $a^+$ has the same deterministic solution $\psi_+$ as $a$ does), and in Step 1 that}
\bigl\|X_\varepsilon\big(\cdot\wedge \tau^R_0\big)-\psi_+\bigr\|_{C([0,T])} \overset{P}\to 0 &\qquad& \text{as }\varepsilon\to0\;\forall\ T>0.
\end{alignat*}
Theorem \ref{thm:comparisonThm} yields $X_\varepsilon^+(t)\geq X_\varepsilon(t)$. Therefore, any (subsequential) limit of $\{X_\varepsilon^+\}_\varepsilon$ is greater than or equal to a limit of $\{ X_\varepsilon\}_\varepsilon$, and if $\bar X_0$ is a limit point of $\{X_\varepsilon\}_\varepsilon$ then
\[
\Pr\Bigl(\bar X_0(t) = \psi_+(t) \ \forall\ t\leq\tau^R_0 \text{ and } \bar{X}_0(t) \leq R \ \forall\ t>\tau^R_0\Bigr) =1.
\]
On the other hand, it can be seen that any limit point $\bar X_0$ of $\{X_\varepsilon\}_\varepsilon$
satisfies
\[
\Pr\Bigl(\exists\ t\geq \tau^R_0 : \bar X_0(t)<R\Bigr)=0.
\]
Thus we have equality, $\bar X_0(t)=\psi_+(t)$ for all $t\geq 0 $ almost surely. This concludes the proof for the case $a(x)>0$ for $x\in(-\delta_0,0)$. The case $a(x)\geq 0$ for $x\in(-\delta_0,0)$ will be considered in \S\ref{section:finalOfTheorem1.1}.
\end{proof}
\section{Velocity with a change in sign}\label{sec:repulsive}
In this section we consider the repulsive case and prove Theorem \ref{thm:ZeroNoiseRepulsive}. We also provide several tools for computing the zero noise probability distribution.
\subsection{Convergence in the repulsive case}
\begin{lemma}\label{lem:osgoodrepulsive}
Let $\alpha<0<\beta$, assume that $a\in L^\infty({\mathbb R})$ satisfies the ``repulsive Osgood condition'' \eqref{eq:osgoodrepulsive}, and define $p_\varepsilon$ by
\begin{equation}\label{eq:weightdef}
p_\varepsilon \coloneqq \frac{- s_\varepsilon(\alpha)}{s_\varepsilon(\beta)- s_\varepsilon(\alpha)}, \qquad
s_\varepsilon(r) \coloneqq \int_0^r e^{-B(z)/\varepsilon^2} \,dz, \qquad B(z)\coloneqq 2\int_0^z a(u)\,du.
\end{equation}
Then
\[
\limsup_{\varepsilon\to0}\Exp^0\big(\tau_{\varepsilon}^\alpha\wedge \tau_{\varepsilon}^\beta\big) \leq \int_\alpha^\beta \frac{1}{|a(x)|}\,dx < \infty.
\]
If $p_{\varepsilon_k}\to p$ as $k\to\infty$, then
\[
\Exp^0\big(\tau_{\varepsilon_k}^\alpha\wedge \tau_{\varepsilon_k}^\beta\big) \to {(1-p)}\int_\alpha^0 \frac{-1}{a(z)}\,dz + {p}\int_0^\beta \frac{1}{a(z)}\,dz \qquad \text{as }k\to\infty.
\]
\end{lemma}
\begin{proof}
{By \eqref{eq:Lharmonic}, \eqref{eq:463}, and \eqref{eq:194}
with $f=1$} we can write
{\begin{align*}
&\Exp^0\big(\tau_{\varepsilon}^\alpha\wedge \tau_{\varepsilon}^\beta\big)
= \int_\alpha^0 \frac{(s_\varepsilon(y)-s_\varepsilon(\alpha))(s_\varepsilon(\beta)-s_\varepsilon(0))}{s_\varepsilon(\beta)-s_\varepsilon(\alpha)}\frac{2e^{B(y)/\varepsilon^2}}{\varepsilon^2}\,dy \\
&\qquad +\int_0^\beta \frac{(s_\varepsilon(0)-s_\varepsilon(\alpha))(s_\varepsilon(\beta)-s_\varepsilon(y))}{s_\varepsilon(\beta)-s_\varepsilon(\alpha)}\frac{2e^{B(y)/\varepsilon^2}}{\varepsilon^2}\,dy \\
&\quad= {(1-p_\varepsilon)} \int_\alpha^0 (s_\varepsilon(y)-s_\varepsilon(\alpha))\frac{2e^{B(y)/\varepsilon^2}}{\varepsilon^2}\,dy + {p_\varepsilon}\int_0^\beta (s_\varepsilon(\beta)-s_\varepsilon(y))\frac{2e^{B(y)/\varepsilon^2}}{\varepsilon^2}\,dy \\
&\quad= {(1-p_\varepsilon) \int_\alpha^0\int_\alpha^y \frac{2e^{(B(y)-B(z))/\varepsilon^2}}{\varepsilon^2}\,dz\,dy+
p_\varepsilon \int_0^\beta\int_y^\beta\frac{2e^{(B(y)-B(z))/\varepsilon^2}}{\varepsilon^2}\,dz\,dy}\\
&\quad= (1-p_\varepsilon) \int_\alpha^0\int_\alpha^y \frac{2\exp\Bigl({\textstyle -\int_z^y \frac{2a(u)}{\varepsilon^2} du}\Bigr)}{\varepsilon^2}\,dz\,dy \\
&\qquad +p_\varepsilon \int_0^\beta\int_y^\beta\frac{2\exp\Bigl({\textstyle -\int_z^y \frac{2a(u)}{\varepsilon^2} du}\Bigr)}{\varepsilon^2}\,dz\,dy\\
&\quad= (1-p_\varepsilon) \int_\alpha^0\int_\alpha^y \exp\Bigl({\textstyle-\int_z^y \frac{2a(u)}{\varepsilon^2} du}\Bigr) \frac{2 a(z)}{\varepsilon^2}\frac{1}{a(z)}\,dz\,dy \\
&\qquad +p_\varepsilon \int_0^\beta\int_y^\beta \exp\Bigl({\textstyle-\int_z^y \frac{2a(u)}{\varepsilon^2} du}\Bigr) \frac{2 a(z)}{\varepsilon^2}\frac{1}{a(z)}\,dz\,dy.
\end{align*}}
Setting $f(z)=2\mathop{\rm sign}(z)a(z)$ and $g(z)=\frac{1}{a(z)}$ in Lemma \ref{lem:approxidentity}, we find that the above two integrals with $\varepsilon=\varepsilon_k$ converge to
\[
\int_\alpha^0 \frac{-1}{a(z)}\,dz \qquad\text{and}\qquad \int_0^\beta\frac{1}{a(z)}\,dz
\]
respectively, as $k\to\infty$. This concludes the proof.
\end{proof}
We can now prove the main theorem in the repulsive case.
\begin{proof}[Proof of Theorem \ref{thm:ZeroNoiseRepulsive}]
Let $X_{\varepsilon_k'}$ be any weakly convergent subsequence of $\{X_{\varepsilon_k}\}_k$, and let $\tau_{\varepsilon_k'}$ and $\tau$ be the hitting times of $X_{\varepsilon_k'}$ and its limit, respectively.
By Lemma \ref{lem:osgoodrepulsive} we have for any $\alpha<0<\beta$
\[
\Exp^0(\tau^\alpha\wedge\tau^\beta)\leq \liminf_{k\to\infty}\Exp^0\bigl(\tau^\alpha_{\varepsilon_k'}\wedge\tau^\beta_{\varepsilon_k'}\bigr) = {(1-p)A(\alpha)+ pA(\beta)}.
\]
Consequently, $\Pr^0\bigl(\tau^\alpha\wedge\tau^\beta=\infty\bigr)=0$, so $\Pr^0(\tau^\alpha<\tau^\beta)=\lim_{k\to\infty}\Pr^0(\tau^\alpha_{\varepsilon_k'}<\tau^\beta_{\varepsilon_k'})={1-p}$ and $\Pr^0(\tau^\alpha>\tau^\beta)={p}$.
Using Theorem \ref{thm:ZeroNoisePositiveDrift111} and the strong Markov property, the paths which escape $(\alpha,\beta)$ at $x=\beta$ converge to the corresponding translate of $\psi_+$:
\[
\lim_{k\to\infty}\Pr^0\Bigl(\bigl\|X_{\varepsilon_k'}(\cdot-\tau_\beta)-\psi_+(\cdot-A(\beta))\bigr\|_{C([0,T])}>\delta \bigm| \tau^\alpha>\tau^\beta \Bigr) = 0,
\]
for any sufficiently small $\delta>0$, and likewise for those paths escaping at $x=\alpha$. Passing $\alpha,\beta\to0$ yields
\begin{align*}
&\lim_{\delta\to0}\lim_{k\to\infty}\Pr^0\Bigl(\|X_{\varepsilon_k'}-\psi_-\|_{C([0,T])}\leq\delta\Bigr) = {1-p}, \\ &\lim_{\delta\to0}\lim_{k\to\infty}\Pr^0\Bigl(\|X_{\varepsilon_k'}-\psi_+\|_{C([0,T])}\leq\delta\Bigr) = {p}.
\end{align*}
Since this is true for any weakly convergent subsequence $\{\varepsilon_k'\}$, and the limit distribution is the same for each, the entire sequence $\{X_{\varepsilon_k}\}_k$ must converge.
\end{proof}
\subsection{Probabilities in the repulsive case}
{Theorem \ref{thm:ZeroNoiseRepulsive} gives a concrete condition for convergence of the sequence
of perturbed solutions, as well as a characterization of the limit distribution. In this section we give
an explicit expression for the probabilities in the limit distribution, and an equivalent condition for
convergence.}
Recall the function
\[
B(x)\coloneqq 2\int_0^x a(y)\,dy
\]
from \eqref{eq:weightdef}, and denote $B_\pm = B\bigr|_{{\mathbb R}_\pm}$.
{Select any $\alpha<0<\beta$ such that the function $\mu\from[0,\beta)\to(\alpha,0]$
defined by $\mu=B_-^{-1}\circ B_+$ is well-defined --- that is,
\[
B_+(x) = B_-(\mu(x)), \quad \forall\ x\in [0,\beta).
\]
Clearly, $B_\pm$ are Lipschitz continuous. Since $a$ is strictly positive (negative) for $x>0$ ($x<0$), the inverses of $B_\pm$ are absolutely continuous (see e.g.~\cite[Exercise 5.8.52]{Bogachev2007}), so $\mu$ is also absolutely continuous.
We now rewrite the probability of selecting the extremal solutions $\psi_\pm$ in terms of $\mu$.}
\begin{theorem}\label{thm:limitprobs}
Let $a\in L^\infty({\mathbb R})$ satisfy \eqref{eq:osgoodrepulsive}
and let $\mu\from[0,\beta)\to(\alpha,0]$ be as above.
Then $\{p_\varepsilon\}_\varepsilon$ converges if either the derivative $\mu'(0)$ exists, or if $\mu'(0)=-\infty$; in the latter case the right-hand side of \eqref{eq:limit_prob1} is interpreted as $1$. In either case, we have
\begin{subequations}\label{eq:limit_prob}
\begin{equation}\label{eq:limit_prob1}
\lim_{\varepsilon\to0}p_\varepsilon = {\frac{-\mu'(0)}{1-\mu'(0)}}.
\end{equation}
Moreover, the derivative $\mu'(0)$ exists if and only if the limit $\lim_{u\downarrow0}\frac{B_-^{-1}(u)}{B_+^{-1}(u)}$ exists, and we have the equality:
\begin{equation}
\label{eq:limit_prob2}
\mu'(0)=\lim_{u\downarrow0}\frac{B_-^{-1}(u)}{B_+^{-1}(u)}.
\end{equation}
\end{subequations}
\end{theorem}
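To illustrate the theorem, consider the toy drift (our own choice) $a(x)=x$ for $x>0$ and $a(x)=4x$ for $x<0$. Then $B_+(x)=x^2$ and $B_-(x)=4x^2$, so $\mu(x)=B_-^{-1}(B_+(x))=-x/2$, $\mu'(0)=-\tfrac12$, and \eqref{eq:limit_prob1} predicts $p=\frac{1/2}{1+1/2}=\frac13$. The sketch below evaluates $p_\varepsilon$ from \eqref{eq:weightdef} by quadrature:

```python
import numpy as np

# Toy drift (our own choice): a(x) = x for x > 0, a(x) = 4x for x < 0, so that
# B_+(x) = x², B_-(x) = 4x², μ(x) = -x/2, μ'(0) = -1/2 and hence p = 1/3.
eps = 0.02
z = np.linspace(0.0, 1.0, 100001)        # endpoints α = -1, β = 1
h = z[1] - z[0]

def trap(F):                             # trapezoidal rule
    return h*(F.sum() - 0.5*(F[0] + F[-1]))

s_plus = trap(np.exp(-z**2/eps**2))      # s_ε(β),  with B_+(z) = z²
s_minus = trap(np.exp(-4*z**2/eps**2))   # -s_ε(α), with B_-(-z) = 4z²
p_eps = s_minus/(s_plus + s_minus)       # ≈ -μ'(0)/(1 - μ'(0)) = 1/3
```

Both Gaussian integrals equal $\frac{\sqrt\pi}{2}\varepsilon$ and $\frac{\sqrt\pi}{2}\frac{\varepsilon}{2}$ up to exponentially small corrections, so $p_\varepsilon$ is already very close to $\tfrac13$ at this $\varepsilon$.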
To prove the theorem we will need the following lemmas:
\begin{lemma}\label{lem:limits}
Let $\alpha<0<\beta$. Define $p_\varepsilon$ as in \eqref{eq:weightdef} and $p_\varepsilon'$ similarly, where
$\alpha,\beta$ are exchanged with any
$\alpha'<0<\beta'.$
Then $\lim_{\varepsilon\to0}p_\varepsilon'/p_\varepsilon = 1$. In particular,
$p_{\varepsilon_k}$ converges to some $p$ as $k\to\infty$ if and only if $p_{\varepsilon_k}'$ converges to $p$.
\end{lemma}
The proof follows from the following observation: Since $B$ is strictly increasing,
then for any positive $r_1< r_2$ or negative $r_1>r_2$,
\[
\lim_{\varepsilon\to0}\frac{\int_{r_1}^{r_2} e^{-B(z)/\varepsilon^2} \,dz}{\int_0^{r_1} e^{-B(z)/\varepsilon^2} \,dz}=0.
\]
{Next, we prove a technical lemma:}
\begin{lemma}\label{lem:approxidentity2}
Let $0<a \in L^\infty([0,\beta])$ and $f\in L^1({\mathbb R})$, and for $\varepsilon>0$ and $x\in[0,\beta)$ define
\begin{gather*}
B(x) = 2\int_0^x a(y)\,dy, \qquad \nu_\varepsilon(x) = e^{-B(x)/\varepsilon^2}\ind_{[0,\beta]}(x), \\
\bar{\nu}_\varepsilon = \int_0^\beta \nu_\varepsilon(y)\,dy, \qquad
f_\varepsilon(x) = \frac{1}{\bar{\nu}_\varepsilon}\int_0^\beta f(x+y)\nu_\varepsilon(y)\,dy.
\end{gather*}
Then $f_\varepsilon(x) \to f(x)$ as $\varepsilon\to0$ whenever $x$ is a (right-sided) Lebesgue point of $f$.
\end{lemma}
\begin{proof}
{Let $x\in[0,\beta)$. For $s\in(0,\beta-x)$, let
\[
F(s) = \int_0^s |f(x+y)-f(x)|\,dy, \qquad C_s = \sup_{y\in(0,s)}\tfrac{F(y)}{y}.
\]
Then $C_s \to 0$ as $s\to0$ if and only if $x$ is a Lebesgue point.} We estimate
\begin{align*}
|f_\varepsilon(x)-f(x)| &= \frac{1}{\bar\nu_\varepsilon}\biggl|\int_0^\beta (f(x+y)-f(x))\nu_\varepsilon(y)\,dy\biggr| \\
&\leq \underbrace{\frac{1}{\bar{\nu}_\varepsilon}\int_0^s |f(x+y)-f(x)|\nu_\varepsilon(y)\,dy}_{=\,I_1} + \underbrace{\frac{1}{\bar{\nu}_\varepsilon}\int_s^\beta |f(x+y)-f(x)|\nu_\varepsilon(y)\,dy}_{=\,I_2}.
\end{align*}
For the first term we integrate by parts several times to get
\begin{align*}
I_1 &= F(s)\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} - \frac{1}{\bar\nu_\varepsilon}\int_0^s F(y)\nu_\varepsilon'(y)\,dy
\leq F(s)\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} - \frac{C_s}{\bar\nu_\varepsilon}\int_0^s y\nu_\varepsilon'(y)\,dy \\
&= F(s)\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} - \frac{C_s}{\bar\nu_\varepsilon} s\nu_\varepsilon(s) + \frac{C_s}{\bar\nu_\varepsilon}\int_0^s \nu_\varepsilon(y)\,dy \\
&\leq F(s)\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} + \frac{C_s}{\bar\nu_\varepsilon}\int_0^\beta \nu_\varepsilon(y)\,dy\\
&= F(s)\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} + C_s.
\end{align*}
For the second term we estimate
\begin{align*}
I_2 &\leq 2\|f\|_{L^1} \frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon}.
\end{align*}
If we can find $s=s_\varepsilon$ such that both $s_\varepsilon\to0$ and $\frac{\nu_\varepsilon(s_\varepsilon)}{\bar\nu_\varepsilon} \to 0$
as $\varepsilon\to0$, then both $I_1$ and $I_2$ vanish in the $\varepsilon\to0$ limit,
and we can conclude the result. Below we explain the existence of such a choice.
Since $B$ is increasing and Lipschitz continuous, with $B(0)=0$ and $\|B\|_{\mathrm{Lip}} \leq 2\|a\|_{L^\infty}<\infty$, there is some $\kappa<s$ satisfying $B(\kappa)=\tfrac12 B(s)$, and $\kappa\geq \frac{1}{2\|B\|_{\mathrm{Lip}}}B(s)$. Moreover, since $\nu_\varepsilon$ is decreasing we have
\[
\bar\nu_\varepsilon = \int_0^\beta \nu_\varepsilon(y)\,dy \geq \kappa\nu_\varepsilon(\kappa) = \kappa e^{-B(\kappa)/\varepsilon^2} = \kappa e^{-B(s)/(2\varepsilon^2)},
\]
so
\[
\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} \leq \frac{1}{\kappa}e^{-B(s)/(2\varepsilon^2)}
\leq 2\|B\|_{\mathrm{Lip}} \frac{e^{-B(s)/(2\varepsilon^2)}}{B(s)}.
\]
Now choose $s=s_\varepsilon$ such that $B(s_\varepsilon) = \varepsilon$. (Such a number exists for sufficiently small $\varepsilon>0$.) Then $s_\varepsilon\to0$ as $\varepsilon\to0$, and
\[
\frac{\nu_\varepsilon(s)}{\bar\nu_\varepsilon} \leq 2\|B\|_{\mathrm{Lip}} \frac{e^{-1/(2\varepsilon)}}{\varepsilon} \to 0
\]
as $\varepsilon\to0$. This finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:limitprobs}]
We have
\[
p_\varepsilon = \frac{{-s_\varepsilon(\alpha)}}{s_\varepsilon(\beta)-s_\varepsilon(\alpha)} =
{\frac{-\frac{s_\varepsilon(\alpha)}{s_\varepsilon(\beta)}}{1-\frac{s_\varepsilon(\alpha)}{s_\varepsilon(\beta)}}}.
\]
By Lemma~\ref{lem:limits} we may assume $\mu(\beta)=\alpha$, so
\begin{align*}
s_\varepsilon(\alpha) &= \int_0^\alpha e^{-B(\mu^{-1}(x))/\varepsilon^2}\,dx = \int_0^{\beta}e^{-B(x)/\varepsilon^2}\mu'(x)\,dx.
\end{align*}
Thus,
\[
\frac{s_\varepsilon(\alpha)}{s_\varepsilon(\beta)} = \frac{1}{\bar\nu_\varepsilon}\int_0^\beta \nu_\varepsilon(x)\mu'(x)\,dx
\]
where
\[
\nu_\varepsilon(x) = e^{-B(x)/\varepsilon^2}, \qquad \bar\nu_\varepsilon = \int_0^\beta e^{-B(z)/\varepsilon^2}\,dz.
\]
From Lemma \ref{lem:approxidentity2} with $f(x)\coloneqq \mu'(x)$ it now follows that $p_\varepsilon$ converges if either $0$ is a Lebesgue point for $\mu'$, or $\lim_{x\to0}\mu'(x)={-\infty}$.
In the former case, we notice that $0$ is a Lebesgue point for $\mu'$ if
the following limit exists:
\[
{\lim_{h\downarrow 0}}\frac{\int_0^h \mu'(z) \,dz}{h}=
{\lim_{h\downarrow 0}}\frac{ \mu(h) -\mu(0)}{h}.
\]
The right hand side of the last equation is the usual definition of the derivative.
To prove \eqref{eq:limit_prob2} notice that
\[
\lim_{h\downarrow 0}\frac{ \mu(h) -\mu(0)}{h}= \lim_{h\downarrow 0}\frac{ \mu(h)}{h}=
\lim_{h\downarrow 0}\frac{B_-^{-1}\circ B_+(h)}{h}
=\lim_{u\downarrow 0}\frac{B_-^{-1}(u)}{B_+^{-1}(u)}.
\]
\end{proof}
\subsection{Repulsive, regularly varying drifts}
Although Theorem \ref{thm:ZeroNoiseRepulsive} provides an explicit expression \eqref{eq:limit_prob} of the limit probabilities, the limit \eqref{eq:limit_prob2} might be difficult to evaluate in practice. It is clearly easier to study existence of the limits
\begin{equation}\label{eq:equiv_a_B}
\lim_{x\downarrow0} \frac{a(-x)}{a(x)}
\end{equation}
or
\begin{equation}\label{eq:equiv_a_B1}
\lim_{x\downarrow0} \frac{B(-x)}{B(x)}
\end{equation}
than that for the inverse functions in \eqref{eq:limit_prob}. We will show that the limit in \eqref{eq:limit_prob} can easily be calculated using \eqref{eq:equiv_a_B} or \eqref{eq:equiv_a_B1} if $a$ or $B$ are regularly varying at $0$.
Recall that a positive, measurable function $f\colon [0,\infty)\to(0,\infty)$ is \emph{regularly varying} of index $\rho$ at $+\infty$ if $\lim_{x\to\infty}\frac{f(\lambda x)}{f(x)}=\lambda^\rho$ for all $\lambda>0$. It is regularly varying of index $\rho$ at $0$ if the function $x\mapsto f(1/x)$ is a regularly varying function of index $-\rho$ at $+\infty$. The set of regularly varying functions of index $\rho$
(at $+\infty$) is denoted by $R_\rho.$ It is well known that if $f\in R_\rho$, then
$f(x)=x^\rho \ell(x)$ for some slowly varying function $\ell$, i.e.~some $\ell\from[0,\infty)\to(0,\infty)$ for which $\lim_{x\to\infty}\frac{\ell(\lambda x)}{\ell(x)}=1$ for all $\lambda>0$.
We first consider the case when $B$ is regularly varying, and then the case when $a$ is. Note that the latter implies the former, but not {vice versa}.
\begin{proposition}\label{prop:B_regvar}
Assume that the functions $x\mapsto B_\pm(\pm x)$ are regularly varying of index
$\rho>0$ at 0, and that the limit $c\coloneqq \lim_{x\downarrow0} B_-(-x)/B_+(x)$ exists
(or equals $\infty$). Then $\{p_\varepsilon\}_{\varepsilon>0}$ converges, and
\begin{equation}\label{key}
p \coloneqq \lim_{\varepsilon\to0} p_{\varepsilon} = \frac{{c^{-1/\rho}}}{1+c^{-1/\rho}}.
\end{equation}
If the functions $x\mapsto B_\pm(\pm x)$ are regularly varying of different indices $\rho_\pm,$
then
\[
p \coloneqq \lim_{\varepsilon\to0} p_{\varepsilon} =
\begin{cases}
1 & \rho_+<\rho_-\\
0 & \rho_+>\rho_-.
\end{cases}
\]
\end{proposition}
\begin{proof}
It follows from \cite[Exercise~14, p.~190]{BTG} that if $f_1, f_2\colon(0,\infty)\to(0,\infty)$ are non-decreasing, regularly varying functions at $0$ of index $\rho>0$, then
\[
\lim_{x\to0}\frac{f_1(x)}{f_2(x)}=1 \qquad \text{if and only if} \qquad \lim_{x\to0}\frac{f_1^{-1}(x)}{f_2^{-1}(x)}=1,
\]
where $f_1^{-1}, f_2^{-1}$ are inverse functions. Write now $f_1(x)=B_-(-x)$, $f_2(x)=c B_+(x)$. Then
\begin{equation}\label{eq:ratiolimit}
\lim_{x\to 0}\frac{f_1(x)}{f_2(x)}=1.
\end{equation}
The inverse function for $x\mapsto c B_+(x)$ is $x\mapsto B_+^{-1}(x/c),$ and $B_+^{-1}$ is
regularly varying of index $1/\rho$ (see \cite[Theorem 1.5.12]{BTG}), so
\[
B_+^{-1}(x/c)= (x/c)^{1/\rho}\ell_1(x/c)\sim (x/c)^{1/\rho}\ell_1(x)= c^{-1/\rho} B_+^{-1}(x)\qquad \text{as } x\to 0
\]
(where equivalence is meant in the sense of slowly
varying functions). Since $f_1^{-1}(x)=-B_-^{-1}(x)$ and $f_2^{-1}(x)=B_+^{-1}(x/c)$, \eqref{eq:ratiolimit} yields
\begin{equation}\label{eq:ratiolimitinv}
\lim_{x\to0}\frac{B_-^{-1}(x)}{B_+^{-1}(x)}=-c^{-1/\rho}.
\end{equation}
The same computation can be performed in reverse, so
\eqref{eq:ratiolimit} and \eqref{eq:ratiolimitinv} are equivalent. If $B_\pm$ are of the same index,
the result now follows from Theorem \ref{thm:limitprobs}, since $\mu'(0)=-c^{-1/\rho}$ in \eqref{eq:limit_prob1} gives \eqref{key}.
If $x\mapsto B_\pm(\pm x)$ are regularly varying of different indices $ {\rho_\pm},$ then
the inverse functions are regularly varying functions of indices $\frac{1}{\rho_\pm},$ and the result is obvious.
\end{proof}
\begin{proposition}\label{prop:a_regvar}
Assume that both $x\mapsto a(\pm x)$ (for $x\geq0$) are regularly varying at $0$ with index $\rho>0$, and that the limit $c\coloneqq \lim_{x\downarrow0} \frac{-a(-x)}{a(x)}$ exists. Then $\{p_\varepsilon\}_{\varepsilon>0}$ converges, and
\[
p\coloneqq \lim_{\varepsilon\to0}p_\varepsilon = \frac{{c^{-1/(1+\rho)}}}{1+c^{-1/(1+\rho)}}.
\]
If the functions $x\mapsto a(\pm x)$ are regularly varying of different indices $\rho_\pm,$
then
\[
p \coloneqq \lim_{\varepsilon\to0} p_{\varepsilon} =
\begin{cases}
1 & \text{if } \rho_+<\rho_-\\
0 & \text{if } \rho_+>\rho_-.
\end{cases}
\]
\end{proposition}
\begin{proof}
It follows from the Karamata theorem, see \cite[Theorem 1.6.1]{BTG}, that for $x>0$,
\begin{align*}
B(x)&= 2\int_0^x a(y) \, dy= 2\int_{1/x}^\infty a(1/z)z^{-2} \, dz
= 2\int_{1/x}^\infty \ell(z)z^{-2-\rho} \, dz \\
&\sim \frac{2 \ell(1/x) x^{1+\rho}}{1+\rho} \sim \frac{2x\, a(x)}{1+\rho}
\end{align*}
as $x\to0$, and likewise for $x<0$. Thus, the functions $x\mapsto B(\pm x)$ are regularly varying of index $1+\rho$, and
\[
\lim_{x\downarrow0}\frac{B(-x)}{B(x)}=\lim_{x\downarrow0}\frac{-a(-x)}{a(x)}=c,
\]
so we can now apply Proposition \ref{prop:B_regvar} with $1+\rho$ in place of $\rho$ to get the desired result.
The case when $a(\pm x)$ are regularly varying with different indices can be considered similarly,
cf. Proposition \ref{prop:B_regvar}.
\end{proof}
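As a numerical check of Proposition \ref{prop:a_regvar} for an illustrative drift of our own choosing, take $a(x)=x^2$ for $x>0$ and $a(x)=-27x^2$ for $x<0$, so that $\rho=2$ and $c=27$; the proposition then predicts $p=27^{-1/3}/(1+27^{-1/3})=\tfrac14$:

```python
import numpy as np

# Illustrative drift (our own choice): a(x) = x² for x > 0, a(x) = -27x² for
# x < 0, i.e. ρ = 2 and c = lim -a(-x)/a(x) = 27, so the proposition predicts
# p = 27^{-1/3}/(1 + 27^{-1/3}) = 1/4.
eps = 0.05
u = np.linspace(0.0, 1.0, 200001)        # endpoints α = -1, β = 1
h = u[1] - u[0]

def trap(F):                             # trapezoidal rule
    return h*(F.sum() - 0.5*(F[0] + F[-1]))

s_plus = trap(np.exp(-(2*u**3/3)/eps**2))   # B_+(u) = 2u³/3
s_minus = trap(np.exp(-(18*u**3)/eps**2))   # B_-(-u) = 2·27·u³/3 = 18u³
p_eps = s_minus/(s_plus + s_minus)          # ≈ 1/4 for small ε
```

Here $\int_0^\infty e^{-\lambda u^3}\,du\propto\lambda^{-1/3}$, so the ratio $-s_\varepsilon(\alpha)/s_\varepsilon(\beta)$ is essentially $27^{-1/3}=\tfrac13$ already for moderate $\varepsilon$.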
Finally, we provide a result which simplifies the computation of the limit distribution for severely oscillating drifts.
\begin{proposition}\label{prop:oscillatingdrift}
Let $a\from{\mathbb R}\to{\mathbb R}$ satisfy $xa(x)\geq0$ for all $x\in{\mathbb R}$, and assume that it is of the form
\[
a(x) = b(x) + |x|^\gamma g(\tfrac1x),
\]
where $\gamma>0$, $b$ is regularly varying at $0$ of order $\rho<\gamma+1$, and $g\in L^\infty({\mathbb R})$ is such that its antiderivative $G(x)=\int_0^x g(y)\,dy$ also lies in $L^\infty({\mathbb R})$. Assume also that the limit $c\coloneqq \lim_{x\downarrow0} \frac{-b(-x)}{b(x)}$ exists. Then $\{p_\varepsilon\}_{\varepsilon>0}$ converges, and
\[
p\coloneqq \lim_{\varepsilon\to0}p_\varepsilon = \frac{{c^{-1/(1+\rho)}}}{1+c^{-1/(1+\rho)}}.
\]
\end{proposition}
\begin{proof}
We claim first that
\[
\int_0^x y^\gamma g(\tfrac1y)\,dy = O(x^{2+\gamma}) = o(x^{1+\rho}) \qquad \text{as } x\downarrow0
\]
(the last step uses the assumption $\rho<\gamma+1$).
Indeed,
\begin{align*}
\biggl|\int_0^x y^\gamma g(\tfrac1y)\, dy\biggr| &= \biggl|\int_{1/x}^\infty z^{-2-\gamma} g(z)\, dz\biggr| \\
&= \biggl|-x^{2+\gamma}G(\tfrac1x) + (\gamma+2)\int_{1/x}^\infty z^{-3-\gamma}G(z)\,dz\biggr| \\
&\leq x^{2+\gamma}\|G\|_{L^\infty} + (\gamma+2)\|G\|_{L^\infty}\int_{1/x}^\infty z^{-3-\gamma}\,dz \\
&= 2 x^{2+\gamma}\|G\|_{L^\infty}.
\end{align*}
It follows that the antiderivative $B(x)=\int_0^x a(y)\,dy$ equals a regularly
varying function of order $1+\rho$, plus a term of order $o(x^{1+\rho})$. Following the
same procedure as in the proof of Proposition \ref{prop:a_regvar} yields the desired result.
\end{proof}
\section{Proof of Theorem \ref{thm:ZeroNoisePositiveDrift111}}\label{section:finalOfTheorem1.1}
We have already proved the theorem in Section \ref{sec:positive_drift} if $a>0$ a.e.~in a small neighborhood of 0.
We will only prove the result for $a$ such that
$a\geq 0$ for negative $x$ and $\int_0^{R}\frac{dy}{a(y)\vee 0}<\infty$ for all $R>0$. The general
case, i.e., $a(x)\geq 0$ for a.e.~$x\in(-\delta_0,0)$ and $\int_0^{\delta_0}\frac{dy}{a(y)}<\infty$, is treated similarly to the reasoning in Section \ref{sec:positive_drift}.
{It follows from the comparison theorem that for any $x>0$ we have the inequality
$X_\varepsilon(t)\leq X_\varepsilon^x(t)$ for $t\geq 0$ with probability 1,
where $X_\varepsilon^x$ is a solution of \eqref{eq:ode_pert} that started from $x$, $X_\varepsilon^x(0)=x.$
Since $a$ is a.e.~positive on $(0,x)$, we have already seen that $\{X_\varepsilon^x(t)\}_{\varepsilon}$ converges to
$\psi_+(\psi_+^{-1}(x)+t)$ as $\varepsilon\to0.$ Thus, any limit point of $\{X_\varepsilon(t)\}_\varepsilon$ must be less than or equal to $\psi_+(\psi_+^{-1}(x)+t)$ for any $x>0$, almost surely.
Therefore, any limit point of $\{X_\varepsilon(t)\}_\varepsilon$ does not exceed $\psi_+(t).$}
Define the function
\[
a_n(x):=\begin{cases}
a(x) & \text{if } x\geq 0 \\
-\tfrac1n a(-\tfrac{x}{n}) & \text{if } x<0,
\end{cases}
\]
and denote the corresponding solutions to stochastic differential equations by $X_{n,\varepsilon}(t).$
Let us apply Theorem \ref{thm:limitprobs} to the sequence $\{X_{n,\varepsilon}\}_\varepsilon.$
Calculate the limit \eqref{eq:limit_prob2}:
\[
B_{n,+}(x)=\int_0^x a(y)\, dy,\qquad B_{n,-}(x)=\frac1n\int_0^x a(y/n)\, dy=B_+(x/n).
\]
Thus,
\[
(B_{n,-})^{-1}(u)=n (B_{n,+})^{-1}(u), \qquad \lim_{u\downarrow0}\frac{(B_{n,-})^{-1}(u)}{(B_{n,+})^{-1}(u)}=n,
\]
and we get convergence
\[
P_{X_{n,\varepsilon}}\Rightarrow \frac{1}{n+1}\delta_{-n\psi_+(n^{-2}t)}+\frac{n}{n+1}\delta_{\psi_+(t)} \qquad \text{as }\varepsilon\to0.
\]
By the comparison theorem we have the inequality $X_{n,\varepsilon}(t)\leq X_\varepsilon(t)$ for $t\geq 0$ with probability 1. Therefore, any limit point of $\{X_\varepsilon\}_\varepsilon$ equals $\psi_+$ with probability at least $\frac{n}{n+1}.$
We conclude that the limit of $\{X_\varepsilon\}_\varepsilon$ exists and equals $\psi_+$ almost surely.
The limit is non-random, so we have convergence in probability, as in \eqref{eq:C2}. This finishes the proof of Theorem \ref{thm:ZeroNoisePositiveDrift111}.
\section{Examples}\label{sec:examples}
\begin{example}\label{ex:1}
For some fixed $\rho\in (0,1)$ we consider the function
\[
a(x)\coloneqq \mathop{\rm sign}(x)|x|^\rho \bigl(1+ \tfrac{1}{2}\phi\bigl(\tfrac{1}{x}\bigr)\bigr) \qquad \text{where } \phi(y)\coloneqq\sum_{n\in{\mathbb Z}}\bigl(\ind_{[2n-1,2n)} - \ind_{[2n,2n+1)}\bigr),
\]
defined for all $x\neq0$. Using Proposition \ref{prop:oscillatingdrift} with $b(x)=\mathop{\rm sign}(x)|x|^\rho$, $\gamma=\rho$ and $g(y)=\tfrac12\mathop{\rm sign}(y)\phi(y)$, we get $c=1$, and that $p_\varepsilon \to \frac{1}{2}$. We also see that $a$ satisfies the repulsive condition \eqref{eq:osgoodrepulsive} of Theorem \ref{thm:ZeroNoiseRepulsive}, so we conclude that
\[
P_\varepsilon \Rightarrow \tfrac12\delta_{\psi_-} + \tfrac12\delta_{\psi_+} \qquad\text{as } \varepsilon\to0
\]
where $\psi_\pm$ are the maximal classical solutions.
Figure \ref{fig:Example51} shows an ensemble of approximate solutions for the above drift. We used noise sizes \(\varepsilon = \frac{3^{-i}}{e} ,\ i=-2, \dots, -9\), and computed 150 samples of the solution with the Euler--Maruyama scheme with a step size $\Delta t=2.5\times 10^{-3}$ up to time $t=0.5$. The left-hand figure shows all sample paths (vertical axis) as a function of time (horizontal axis), where bigger \(\varepsilon\) were given lighter shades of grey. The sample paths with the smallest \( \varepsilon \) are depicted in red. The right-hand figure shows the cumulative distribution function of the samples at the final time \( t = 0.5 \) using the smallest value of \( \varepsilon \). We can clearly see that the solution is concentrated on the extreme sample paths $\psi_-,\psi_+$, each with probability $\tfrac12$.
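The simulation just described can be sketched in a few lines of pure Python. This is an illustrative reimplementation, not the code behind the figure; the exponent $\rho=\tfrac12$, the sample count, and all function names are our own choices.

```python
import math
import random

def phi(y):
    # square wave from the definition of a: +1 on [2n-1, 2n), -1 on [2n, 2n+1)
    return 1.0 if math.floor(y) % 2 else -1.0

def drift(x, rho=0.5):
    # a(x) = sign(x) |x|^rho (1 + phi(1/x)/2), extended by a(0) = 0
    if x == 0.0:
        return 0.0
    s = 1.0 if x > 0.0 else -1.0
    return s * abs(x) ** rho * (1.0 + 0.5 * phi(1.0 / x))

def euler_maruyama(eps, T=0.5, dt=2.5e-3, seed=0):
    # one sample of X_eps(T) for dX = a(X) dt + eps dW, X(0) = 0
    rng = random.Random(seed)
    x = 0.0
    for _ in range(int(round(T / dt))):
        x += drift(x) * dt + eps * rng.gauss(0.0, math.sqrt(dt))
    return x

# an ensemble at one small noise level
samples = [euler_maruyama(eps=3.0 ** -6 / math.e, seed=k) for k in range(50)]
```

For small \(\varepsilon\) the empirical distribution of \texttt{samples} should concentrate near the two extreme values \(\psi_\pm(0.5)\), in line with the figure.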
\begin{figure}
\includegraphics[width=0.49\linewidth]{./Example_5_1_plot.png}
\includegraphics[width=0.49\linewidth]{./Example_5_1_histogram.png}
\caption{Sample paths (left) and cumulative distribution function (right) for Example \ref{ex:1}.}\label{fig:Example51}
\end{figure}
\end{example}
\begin{example}\label{ex:2}
Let $a(x)=x^\beta$, $x>0,$ where $\beta\in (0,1).$ We claim that we can continuously extend $a$ to the set $(-\infty,0]$ such that
\begin{enumerate}[label=(\alph*)]
\item $-a(-x)\leq a(x)<0$ for all $x<0$;
\item $\int_{-1}^0 \frac{1}{a(x)}dx=-\infty,$ i.e., the Osgood condition is not satisfied to the left of zero;
\item $P_{X_\varepsilon}\Rightarrow \tfrac12 \delta_{\psi_+}+\tfrac12 \delta_{0}$ as $\varepsilon\to0,$ i.e., the limit process with probability $\tfrac12$ moves like the maximal positive solution $\psi_+(t)=((1-\beta)t)^{\frac{1}{1-\beta}}, t\geq 0$
and stays at 0 forever with probability $\tfrac12$ too.
\end{enumerate}
This example is not covered by the theory in the previous sections, and should therefore be read as a demonstration of the complex behaviours that can occur in the zero noise limit. Note also that the zero-noise limit is \textit{not} only concentrated on the maximal solution $\psi_+$, but also on the trivial solution $\psi_-\equiv0$.
Before we construct the extension, let us provide some simple preliminary analysis.
If a function $a\from{\mathbb R}\to {\mathbb R}$ satisfies the linear growth condition, then the family $\{X_\varepsilon\}_\varepsilon$
is weakly relatively compact. If additionally the function $a$ is continuous, then
any limit point of $\{X_\varepsilon\}_\varepsilon$ satisfies \eqref{eq:ode}. Both conditions
(a) and (b) yield that any solution to \eqref{eq:ode}, and hence any limit point of
$\{X_\varepsilon\}_\varepsilon$, has the form
\begin{equation}\label{eq:limit_sol}
X_0(t)=
\begin{cases}
0, & t\leq \tau\\
((1-\beta)(t-\tau))^{\frac{1}{1-\beta}},& t> \tau,
\end{cases}
\end{equation}
where $\tau\in[0,\infty].$ Our aim is to find an extension of $a$ such that
\begin{equation}\label{eq:tau_probab}
\Pr(\tau=0)= \Pr(\tau=\infty)=\tfrac12
\end{equation}
for any limit point
$X_0$ having representation
\eqref{eq:limit_sol}.
Let $A=\cup_{k\geq 1}[-\frac{1}{2^k}, -\frac{1}{2^k}+\frac{1}{4^k}].$ Set
\begin{align*}
\tilde a(x) &\coloneqq \mathop{\rm sign}(x) a(|x|)\ind_{x\notin A}=
\begin{cases}
x^\beta, & x> 0\\
-|x|^\beta,& x\leq 0,\ x\notin A \\
0,& x\leq 0,\ x\in A,
\end{cases} \\
\bar a(x) &\coloneqq \mathop{\rm sign}(x) a(|x|)= \mathop{\rm sign}(x) |x|^\beta=
\begin{cases}
x^\beta, & x> 0\\
-|x|^\beta ,& x\leq 0.
\end{cases}
\end{align*}
Define $a$ on $(-\infty,0)$ to be any negative, continuous function such that $\int_{-\delta}^0 \frac{1}{a(x)}dx=-\infty$ for any $\delta>0$, and
\[
\bar a(x)\leq a(x) \leq \tilde a(x) \text{ for all } x\in(-\infty,0).
\]
It is clear that there exists a function $a$ satisfying these properties.
Introduce the transformed process
\[
Y_\varepsilon(t)\coloneqq \varepsilon^{\frac{-2}{1+\beta}} X_\varepsilon\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr).
\]
It can be seen (see \cite{PilipenkoProske2018} for a more general case) that
\begin{equation}\label{eq:1525}
d Y_\varepsilon(t)= a_\varepsilon( Y_\varepsilon(t))dt + d w_\varepsilon(t),
\end{equation}
where $w_\varepsilon(t)= \varepsilon^{\frac{-(1-\beta)}{1+\beta}} w\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr)$ is a Wiener process, and
\begin{equation}
\label{eq:1526}
a_\varepsilon(y)= \varepsilon^{\frac{-2\beta}{1+\beta}}a\bigl(\varepsilon^{\frac{ 2}{1+\beta}}y\bigr).
\end{equation}
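The scaling behind \eqref{eq:1525} and \eqref{eq:1526} can be checked directly:

```latex
\begin{align*}
dY_\varepsilon(t)
&= \varepsilon^{\frac{-2}{1+\beta}}\, dX_\varepsilon\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr)
 = \varepsilon^{\frac{-2}{1+\beta}}\Bigl[a\bigl(\varepsilon^{\frac{2}{1+\beta}}Y_\varepsilon(t)\bigr)\,
   \varepsilon^{\frac{2(1-\beta)}{1+\beta}}\,dt
   + \varepsilon\, dw\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr)\Bigr] \\
&= \varepsilon^{\frac{-2\beta}{1+\beta}}\, a\bigl(\varepsilon^{\frac{2}{1+\beta}}Y_\varepsilon(t)\bigr)\,dt
   + \varepsilon^{\frac{-(1-\beta)}{1+\beta}}\, dw\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr)
 = a_\varepsilon(Y_\varepsilon(t))\,dt + dw_\varepsilon(t),
\end{align*}
```

where $w_\varepsilon$ is again a Brownian motion by the scaling property, since $\bigl(\varepsilon^{\frac{-(1-\beta)}{1+\beta}}\bigr)^2\varepsilon^{\frac{2(1-\beta)}{1+\beta}}=1$.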
Notice that $a_\varepsilon(y)=a(y)$ for all $y\in (0,\infty)$ and for all $y<0$ such that $\varepsilon^{\frac{ 2}{1+\beta}}y\notin A$. For all other $y<0$ we have the inequality $-|y|^\beta\leq a(y)<0$, by the choice of the function $a$.
We have convergence $ a_\varepsilon(y)\to \bar a(y)=\mathop{\rm sign}(y) |y|^\beta$ in Lebesgue measure on any interval $y\in[-R,R]$.
Observe also that
\[
\int_0^x a_\varepsilon(y) dy \geq \int_0^x \hat a(y) dy \qquad \forall \varepsilon>0,\ \forall x<0
\]
where
\[
\hat a(x)=
\begin{cases}
0, & x\in [-\tfrac32\cdot 2^n, -2^n ] \text{ for some } n\in{\mathbb Z}, \\
-|x|^\beta, & \text{otherwise.}
\end{cases}
\]
In particular, the last estimate yields
\[
\sup_{\varepsilon\in(0,1]}\lim_{R\to+\infty}
\int_{-\infty}^{-R} \exp\biggl(-2\int_0^x a_\varepsilon(y)\,dy\biggr) dx =0.
\]
Set
\[
\sigma^X_{\varepsilon}(p)\coloneqq\inf\{t\geq 0 : X_\varepsilon(t)=p\},
\]
\[
\sigma^Y_{\varepsilon}(p)\coloneqq\inf\{t\geq 0 : Y_\varepsilon(t)=p\} .
\]
The observations above and formulas of Theorem \ref{thm:exit_time} yield that for any $R>0$, and for any sequences $\{R^\pm_\varepsilon\}$ such that $\lim_{\varepsilon\to0}R^\pm_\varepsilon=\pm\infty$ we have
\begin{align*}
&\hspace{-2em}\lim_{\varepsilon\to0}\Pr\Bigl(\sigma^Y_{\varepsilon}(R) < \sigma^Y_{\varepsilon}(-R) \mid Y_\varepsilon(0)=0\Bigr) =
\lim_{\varepsilon\to0}\Pr\Big( \sigma^Y_{\varepsilon}(-R)< \sigma^Y_{\varepsilon}(R) \mid Y_\varepsilon(0)=0\Big) \\
={}&\lim_{\varepsilon\to0}\Pr\Big( \sigma^Y_{\varepsilon}(R^+_\varepsilon)< \sigma^Y_{\varepsilon}(R^-_\varepsilon) \mid Y_\varepsilon(0)=0\Big)
= \lim_{\varepsilon\to0}\Pr\Big( \sigma^Y_{\varepsilon}(R^-_\varepsilon)< \sigma^Y_{\varepsilon}(R^+_\varepsilon) \mid Y_\varepsilon(0)=0\Big)\\
={}&\tfrac12.
\end{align*}
Hence, for any $\delta^\pm>0$ we have
\begin{equation}\begin{aligned}\label{eq:1566}
&\hspace{-2em}\lim_{\varepsilon\to0}\Pr\Big( \sigma^X_{\varepsilon}\big(R \varepsilon^{\frac{2}{1+\beta}}\big)
< \sigma^X_{\varepsilon}\bigl(-R \varepsilon^{\frac{2}{1+\beta}}\big) \mid X_\varepsilon(0)=0\Big)\\
={}&
\lim_{\varepsilon\to0}\Pr\Big( \sigma^X_{\varepsilon}\bigl(-R \varepsilon^{\frac{2}{1+\beta}}\big)<
\sigma^X_{\varepsilon}\big(R \varepsilon^{\frac{2}{1+\beta}}\big) \mid X_\varepsilon(0)=0\Big) \\
={}&\lim_{\varepsilon\to0}\Pr\Big(\sigma^X_{\varepsilon}(\delta^+)< \sigma^X_{\varepsilon}(\delta^-) \mid X_\varepsilon(0)=0\Big)
= \lim_{\varepsilon\to0}\Pr\Big(\sigma^X_{\varepsilon}(\delta^-)< \sigma^X_{\varepsilon}(\delta^+) \mid X_\varepsilon(0)=0\Big)\\
={}&\tfrac12.
\end{aligned}
\end{equation}
Hence, if $X_0$ is a limit point of $\{X_\varepsilon\}$ having representation
\eqref{eq:limit_sol}, then $\Pr(\tau=\infty)\geq \tfrac12.$
It also follows from Theorem \ref{thm:exit_time} that for any $R>0$
\[
\sup_{\varepsilon>0}\Exp \Big(\sigma^Y_{\varepsilon}(R)\wedge \sigma^Y_{\varepsilon}(-R) \mid Y_\varepsilon(0)=0\Big)<\infty.
\]
Thus,
\begin{equation}\label{eq:1575}
\sigma^X_{\varepsilon}\big(R \varepsilon^{\frac{2}{1+\beta}}\big)\wedge \sigma^X_{\varepsilon}\bigl(-R \varepsilon^{\frac{2}{1+\beta}}\big)
\overset{\Pr}\to 0 \qquad \text{as } \varepsilon\to0
\end{equation}
if $X_{\varepsilon}(0)=0.$
Let $\bar X_\varepsilon $ be a solution to
\[
d \bar X_\varepsilon(t) =\bar a\big(\bar X_\varepsilon(t)\big)dt +\varepsilon d w(t)
\]
and define
\[
\bar Y_\varepsilon(t)\coloneqq \varepsilon^{\frac{-2}{1+\beta}}\bar X_\varepsilon\bigl(\varepsilon^{\frac{2(1-\beta)}{1+\beta}}t\bigr).
\]
Then (cf.~\eqref{eq:1525}, \eqref{eq:1526})
\[
d \bar Y_\varepsilon(t)= \bar a(\bar Y_\varepsilon(t))dt + d w_\varepsilon(t).
\]
In particular, if $\bar X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}$ for all $\varepsilon>0,$ where $R$ is a constant, then all processes $\bar Y_\varepsilon$ have the same distribution independent of $\varepsilon.$
Notice that for any $R>0$,
\begin{equation}\label{eq:1597}
\Pr\Bigl(X_\varepsilon(t) = \bar X_\varepsilon(t), \ t\in [0, \sigma^X_{\varepsilon}(0) ] \mid X_\varepsilon(0)= \bar X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}\Bigr) =1
\end{equation}
and
\begin{equation}\begin{aligned}\label{eq:1600}
p_R &\coloneqq \Pr\Bigl(\sigma^X_{\varepsilon}(0)=\infty \mid X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}\Bigr) \\
&= \Pr\Bigl(X_\varepsilon(t)>0, t\geq 0 \mid X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}\Bigr)
\\
&=\Pr\Bigl(\bar X_\varepsilon(t)>0, t\geq 0 \mid \bar X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}\Bigr) \\
&= \Pr\Bigl(\bar Y_\varepsilon(t)>0,\ t\geq 0 \mid \bar Y_\varepsilon(0)= R\Bigr)\to 1 \qquad \text{as } R\to\infty.
\end{aligned}\end{equation}
It follows from \cite{PilipenkoProske2018} that if $\bar X_\varepsilon(0)= R \varepsilon^{\frac{2}{1+\beta}}, \varepsilon>0$,
then
\begin{equation}
\label{eq:1611}
P_{\bar X_\varepsilon} \Rightarrow p_R\delta_{\psi_+}+(1-p_R)\delta_{\psi_-} \qquad \text{as } \varepsilon\to0.
\end{equation}
Hence, \eqref{eq:1566}, \eqref{eq:1575}, \eqref{eq:1597}, \eqref{eq:1600},
and \eqref{eq:1611} yield that for any limit point $X_0$
of $\{X_\varepsilon\}$ we have $\Pr(\tau=0)\geq \tfrac12.$ This concludes the proof of the convergence $P_{X_\varepsilon}\Rightarrow \tfrac12 \delta_{\psi_+}+\tfrac12 \delta_{0}$ as $\varepsilon\to0.$
Figure \ref{fig:Example52} shows the same type of simulation as in Example \ref{ex:1}. From the figure it is clear that for small $\varepsilon$, the samples split in two groups of equal size, one moving along $\psi_+$ and the other remaining around the origin. As the noise decreases, the left-going samples concentrate around the trivial solution $X\equiv0$.
\begin{figure}
\includegraphics[width=0.49\linewidth]{./Example_5_2_plot.png}
\includegraphics[width=0.49\linewidth]{./Example_5_2_histogram.png}
\caption{Sample paths (left) and cumulative distribution function (right) for Example \ref{ex:2}.}\label{fig:Example52}
\end{figure}
\end{example}
% arXiv:2205.15013
\section{Definitions and Basic Identities}
Let the coefficient of a power series be defined as:
\begin{equation}
[q^n] \sum_{k=0}^{\infty} a_k q^k = a_n
\end{equation}
Let $P(n)$ be the number of integer partitions of $n$, and let $P(n,m)$ be the
number of integer partitions of $n$ into exactly $m$ parts.
Let $P(n,m,p)$ be the number of integer partitions of $n$ into exactly $m$ parts, each part at most $p$,
and let $P^*(n,m,p)$ be the number of integer partitions of $n$ into at most $m$ parts, each part at most $p$,
which is the number of Ferrer diagrams that fit in an $m$ by $p$ rectangle:
\begin{equation}\label{pnmpsum}
P^*(n,m,p) = \sum_{k=0}^m P(n,k,p)
\end{equation}
Let the following definition of the q-binomial coefficient,
also called the Gaussian polynomial, be given.
\begin{definition}
The q-binomial coefficient is defined by \cite{A84,AAR}:
\begin{equation}\label{gaussdef}
\gaussian{m+p}{m} = \prod_{j=1}^m \frac{1-q^{p+j}}{1-q^j}
\end{equation}
\end{definition}
The q-binomial coefficient is the generating function of $P^*(n,m,p)$ \cite{A84}:
\begin{equation}\label{pnmpgen}
P^*(n,m,p) = [q^n] \gaussian{m+p}{m}
\end{equation}
The q-binomial coefficient is a product of cyclotomic polynomials \cite{knuth}.
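Identity \eqref{pnmpgen} is easy to check numerically. The following sketch (Python used here as a neutral stand-in for the computer algebra program later in the paper; all function names are ours) expands the product \eqref{gaussdef} by exact integer polynomial arithmetic and compares the coefficients with a brute-force count of partitions fitting in an $m\times p$ box:

```python
from itertools import combinations_with_replacement

def polymul(a, b):
    # product of integer polynomials, coefficients in ascending order
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def polydiv_exact(a, b):
    # exact division of integer polynomials, coefficients in ascending order
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = a[i + len(b) - 1] // b[-1]
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] -= c * bj
    return q

def gaussian(m, p):
    # [m+p choose m]_q = prod_{j=1}^m (1 - q^{p+j}) / (1 - q^j)
    num, den = [1], [1]
    for j in range(1, m + 1):
        num = polymul(num, [1] + [0] * (p + j - 1) + [-1])
        den = polymul(den, [1] + [0] * (j - 1) + [-1])
    return polydiv_exact(num, den)

def box_partitions(n, m, p):
    # P*(n,m,p): partitions of n into at most m parts, each part at most p
    return sum(1 for k in range(m + 1)
               for c in combinations_with_replacement(range(1, p + 1), k)
               if sum(c) == n)
```

The coefficient list returned by \texttt{gaussian(m, p)} has length $mp+1$, matching the degree of the Gaussian polynomial.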
\section{Properties of q-Binomial Coefficients and $P(n,m,p)$}
Some identities of the q-binomial coefficient are proved from its definition,
and from these some properties of $P^*(n,m,p)$ and $P(n,m,p)$ are derived.
\begin{theorem}
\begin{equation}
\gaussian{m+p}{m} = \gaussian{m+p-1}{m-1} + q^m \gaussian{m+p-1}{m}
\end{equation}
\end{theorem}
\begin{proof}
\begin{equation}
\prod_{j=1}^m\frac{1-q^{p+j}}{1-q^j}
= \prod_{j=1}^{m-1}\frac{1-q^{p+j}}{1-q^j} + q^m \prod_{j=1}^m\frac{1-q^{p+j-1}}{1-q^j}
\end{equation}
\begin{equation}
\prod_{j=1}^m(1-q^{p+j}) = (1-q^m) \prod_{j=1}^{m-1}(1-q^{p+j}) + q^m \prod_{j=0}^{m-1}(1-q^{p+j})
\end{equation}
\begin{equation}
1 = \frac{1-q^m}{1-q^{m+p}} + q^m \frac{1-q^p}{1-q^{m+p}}
\end{equation}
\begin{equation}
1-q^{m+p} = 1-q^m+q^m(1-q^p)
\end{equation}
\end{proof}
\begin{theorem}
\begin{equation}
P^*(n,m,p) = P^*(n,m-1,p) + P^*(n-m,m,p-1)
\end{equation}
\end{theorem}
\begin{proof}
Using the previous theorem:
\begin{equation}
\begin{split}
P^*(n,m,p) & = [q^n]\gaussian{m+p}{m} = [q^n]\gaussian{m+p-1}{m-1} + [q^{n-m}]\gaussian{m+p-1}{m} \\
& = P^*(n,m-1,p) + P^*(n-m,m,p-1) \\
\end{split}
\end{equation}
\end{proof}
From this theorem and identity (\ref{pnmpsum}) follows:
\begin{equation}\label{pnmpdif}
P^*(n,m,p) - P^*(n,m-1,p) = P(n,m,p) = P^*(n-m,m,p-1)
\end{equation}
or equivalently:
\begin{equation}\label{pnmpdef}
P^*(n,m,p) = P(n+m,m,p+1)
\end{equation}
From this theorem and this identity follows:
\begin{equation}
P(n,m,p) = P(n-1,m-1,p) + P(n-m,m,p-1)
\end{equation}
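Together with the obvious base cases, this recurrence gives a direct memoized computation of $P(n,m,p)$; the following Python sketch (ours, for illustration) encodes it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, m, p):
    # number of partitions of n into exactly m parts, each part in 1..p
    if n == 0 and m == 0:
        return 1
    if m <= 0 or n < m or n > m * p:
        return 0
    # P(n,m,p) = P(n-1,m-1,p) + P(n-m,m,p-1): split on whether some part
    # equals 1 (remove it) or all parts are >= 2 (subtract 1 from each part)
    return P(n - 1, m - 1, p) + P(n - m, m, p - 1)
```

This also makes the identities of this section testable, e.g. $P(n)=P(2n,n,n+1)$ and the symmetry $P(n,m,p)=P(n-m+p-1,p-1,m+1)$.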
\begin{theorem}
\begin{equation}
\gaussian{m+p}{m} = \gaussian{m+p-1}{m} + q^p \gaussian{m+p-1}{m-1}
\end{equation}
\end{theorem}
\begin{proof}
\begin{equation}
\prod_{j=1}^m\frac{1-q^{p+j}}{1-q^j}
= \prod_{j=1}^m\frac{1-q^{p+j-1}}{1-q^j} + q^p \prod_{j=1}^{m-1}\frac{1-q^{p+j}}{1-q^j}
\end{equation}
\begin{equation}
\prod_{j=1}^m(1-q^{p+j}) = \prod_{j=0}^{m-1}(1-q^{p+j}) + q^p(1-q^m) \prod_{j=1}^{m-1}(1-q^{p+j})
\end{equation}
\begin{equation}
1 = \frac{1-q^p}{1-q^{m+p}} + q^p \frac{1-q^m}{1-q^{m+p}}
\end{equation}
\begin{equation}
1-q^{m+p} = 1-q^p+q^p(1-q^m)
\end{equation}
\end{proof}
\begin{theorem}
\begin{equation}
P^*(n,m,p) = P^*(n,m,p-1) + P^*(n-p,m-1,p)
\end{equation}
\end{theorem}
\begin{proof}
Using the previous theorem:
\begin{equation}
\begin{split}
P^*(n,m,p) & = [q^n]\gaussian{m+p}{m} = [q^n]\gaussian{m+p-1}{m} + [q^{n-p}]\gaussian{m+p-1}{m-1} \\
& = P^*(n,m,p-1) + P^*(n-p,m-1,p) \\
\end{split}
\end{equation}
\end{proof}
Using (\ref{pnmpdef}):
\begin{equation}
P(n,m,p) = P(n,m,p-1) + P(n-p,m-1,p)
\end{equation}
The following theorem is a symmetry identity:
\begin{theorem}\label{gaussym}
\begin{equation}
\gaussian{m+p}{m} = \gaussian{m+p}{p}
\end{equation}
\end{theorem}
\begin{proof}
\begin{equation}
\prod_{j=1}^m \frac{1-q^{p+j}}{1-q^j} = \prod_{j=1}^p \frac{1-q^{m+j}}{1-q^j}
\end{equation}
Using cross multiplication:
\begin{equation}
\prod_{j=1}^p(1-q^j)\prod_{j=1}^m(1-q^{p+j}) = \prod_{j=1}^m(1-q^j)\prod_{j=1}^p(1-q^{m+j})
= \prod_{j=1}^{m+p}(1-q^j)
\end{equation}
\end{proof}
From this theorem follows:
\begin{equation}
P^*(n,m,p) = P^*(n,p,m)
\end{equation}
and using (\ref{pnmpdef}):
\begin{equation}
P(n,m,p) = P(n-m+p-1,p-1,m+1)
\end{equation}
Using (\ref{pnmpsum}) and (\ref{pnmpdif}):
\begin{equation}
P^*(n,m,p) = \sum_{k=0}^m P^*(n-k,k,p-1)
\end{equation}
Combining this identity with (\ref{pnmpgen}) and using theorem \ref{gaussym}:
\begin{equation}
\gaussian{m+p}{p} = \sum_{k=0}^m q^k \gaussian{p+k-1}{p-1}
\end{equation}
which is identity (3.3.9) in \cite{A84}.
Taking $m=n$ in (\ref{pnmpsum}) and (\ref{pnmpdef}), and using conjugation of Ferrer diagrams:
\begin{equation}\label{pnnp}
P(2n,n,p+1) = P(n+p,p)
\end{equation}
and taking $p=n$:
\begin{equation}
P(n) = P(2n,n) = P(2n,n,n+1)
\end{equation}
The partitions counted by $P(n,m,p)-P(n,m,p-1)$ have at least one part equal to $p$,
and therefore by conjugation of Ferrer diagrams:
\begin{equation}
P(n,m,p) - P(n,m,p-1) = P(n,p,m) - P(n,p,m-1)
\end{equation}
This identity can also be derived from the other identities.
\section{The q-Multinomial Coefficient}
Let $(m_i)_{i=1}^s$ be a sequence of $s$ nonnegative integers, and let $n$ be given by:
\begin{equation}
n = \sum_{i=1}^s m_i
\end{equation}
The q-multinomial coefficient is a product of q-binomial coefficients \cite{CP,wiki2}:
\begin{equation}
\gaussian{n}{m_1\cdots m_s} = \prod_{i=1}^s \gaussian{\sum_{j=1}^i m_j}{m_i}
= \prod_{i=1}^s \gaussian{n-\sum_{j=1}^{i-1}m_j}{m_i}
\end{equation}
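Coefficient lists of q-multinomial coefficients can thus be computed by convolving q-binomial coefficient lists. The sketch below (Python, illustrative; the q-binomial part uses the Pascal-type recurrence proved in the previous section) does exactly that:

```python
def qbinom(n, m):
    # coefficient list of the Gaussian polynomial [n choose m]_q, via
    # [n,m]_q = [n-1,m-1]_q + q^m [n-1,m]_q
    if not 0 <= m <= n:
        return [0]
    C = [[None] * (i + 1) for i in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = [1]
        C[i][i] = [1]
    for i in range(2, n + 1):
        for j in range(1, i):
            a, b = C[i - 1][j - 1], C[i - 1][j]
            r = [0] * max(len(a), len(b) + j)
            for k, c in enumerate(a):
                r[k] += c
            for k, c in enumerate(b):
                r[k + j] += c          # shift by j implements the factor q^j
            C[i][j] = r
    return C[n][m]

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def qmultinom(ms):
    # [n choose m_1 ... m_s]_q = prod_i [m_1+...+m_i choose m_i]_q
    r, partial = [1], 0
    for m in ms:
        partial += m
        r = polymul(r, qbinom(partial, m))
    return r
```

Setting $q=1$ (summing the coefficients) recovers the ordinary multinomial coefficient.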
\section{Computation of $P(n,m,p)$ with $P(n,m)$}
Let the coefficient $a_k^{(m,p)}$ be defined as:
\begin{equation}
a_k^{(m,p)} = [q^k] \prod_{j=1}^m (1-q^{p+j})
\end{equation}
These coefficients can be computed by multiplying out the product,
which up to $k=n-m$ is $O(m(n-m))=O(n^2)$.
Using (\ref{pnmpgen}) and (\ref{pnmpdif}):
\begin{equation}
\begin{split}
P(n,m,p) & = P^*(n-m,m,p-1) = [q^{n-m}] \prod_{j=1}^m \frac{1-q^{p+j-1}}{1-q^j}
= [q^{n-m}] \sum_{k=0}^{n-m} a_k^{(m,p-1)} \frac{q^k}{\prod_{j=1}^m(1-q^j)} \\
& = \sum_{k=0}^{n-m} a_k^{(m,p-1)} [q^{n-k}] \frac{q^m}{\prod_{j=1}^m(1-q^j)}
= \sum_{k=0}^{n-m} a_k^{(m,p-1)} P(n-k,m) \\
\end{split}
\end{equation}
The list of the $n-m+1$ values of $P(m,m)$ to $P(n,m)$ can be computed
using the algorithm in \cite{MK} which is also $O(n^2)$,
and therefore this algorithm computes $P(n,m,p)$ in $O(n^2)$.
For computing $P^*(n,m,p)$ (\ref{pnmpdef}) can be used.
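As a cross-check, here is a direct Python transcription of this algorithm (our illustrative code; the paper's actual implementation is the Mathematica program in the final section, and \texttt{ListConvolve} is replaced by an explicit sum):

```python
def partitions_exact(n, m, p):
    # P(n,m,p) = sum_k a_k^{(m,p-1)} P(n-k, m), truncating the product at q^{n-m}
    if m == 0:
        return 1 if n == 0 else 0
    if n < m or n > m * p:
        return 0
    N = n - m
    # a_k^{(m,p-1)}: coefficients of prod_{j=1}^m (1 - q^{p-1+j}), degree <= N
    a = [0] * (N + 1)
    a[0] = 1
    for j in range(1, m + 1):
        exp = p - 1 + j
        if exp > N:
            break
        for k in range(N, exp - 1, -1):
            a[k] -= a[k - exp]
    # T[k][mm] = P(k, mm), via P(k,mm) = P(k-1,mm-1) + P(k-mm,mm)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for k in range(1, n + 1):
        for mm in range(1, m + 1):
            T[k][mm] = T[k - 1][mm - 1] + (T[k - mm][mm] if k >= mm else 0)
    return sum(a[k] * T[n - k][m] for k in range(N + 1))
```

Both loops are $O(n^2)$, matching the complexity claimed above.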
\section{Computation of q-Binomial Coefficients}
From definition (\ref{gaussdef}) the q-binomial coefficients are:
\begin{equation}
\begin{split}
[q^n] \prod_{j=1}^m \frac{1-q^{p+j}}{1-q^j}
& = [q^n] \sum_{k=0}^n a_k^{(m,p)} \frac{q^k}{\prod_{j=1}^m(1-q^j)}
= \sum_{k=0}^n a_k^{(m,p)} [q^{n+m-k}] \frac{q^m}{\prod_{j=1}^m(1-q^j)} \\
& = \sum_{k=0}^n a_k^{(m,p)} P(n+m-k,m) \\
\end{split}
\end{equation}
Because $P^*(n,m,p)=0$ when $n>mp$ and (\ref{pnmpgen}), the coefficients $[q^n]$
are nonzero if and only if $0\leq n\leq mp$.
The product coefficients $a_k^{(m,p)}$ can therefore be computed in $O(m^2p)$,
and the list of $mp+1$ values of $P(m,m)$ to $P(mp+m,m)$ can also be computed
in $O(m^2p)$ \cite{MK}.
The sums are convolutions which can be done with \texttt{ListConvolve},
and therefore this algorithm computes the q-binomial coefficients in $O(m^2p)$.
Because of symmetry theorem \ref{gaussym}, $m$ and $p$ can be interchanged when $m>p$,
which makes the algorithm $O(\textrm{min}(m^2p,p^2m))$.
Using a change of variables:
\begin{equation}
\gaussian{n}{m} = \gaussian{m+n-m}{m}
\end{equation}
The algorithm for computing this q-binomial coefficient is $O(\textrm{min}(m^2(n-m),(n-m)^2m))$.
From this follows that when $m$ or $n-m$ is constant, then the algorithm is $O(n)$, and when
$m=cn$ for some constant $c$, then the algorithm is $O(n^3)$. Because $P^*(n,m,p)=P^*(mp-n,m,p)$,
only $P^*(n,m,p)$ for $0\leq n\leq \lceil mp/2\rceil$ needs to be computed,
which makes the algorithm about two times faster.
For comparison of results with the computer algebra program below
an alternative algorithm using cyclotomic polynomials is given.
\begin{verbatim}
QBinomialAlternative[n_,m_]:=Block[{result={1},temp},
Do[Which[Floor[n/k]-Floor[m/k]-Floor[(n-m)/k]==1,
temp=CoefficientList[Cyclotomic[k,q],q];
result=ListConvolve[result,temp,{1,-1},0]],{k,n}];
result]
\end{verbatim}
Computations show that this alternative algorithm is $O(n^4)$.
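A Python analogue of this alternative algorithm reads as follows (illustrative; Mathematica's \texttt{Cyclotomic} is replaced by a recursive exact-division construction of $\Phi_k$, and the function names are ours):

```python
from functools import lru_cache

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def polydiv_exact(a, b):
    # exact division of integer polynomials, coefficients in ascending order
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = a[i + len(b) - 1] // b[-1]
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] -= c * bj
    return q

@lru_cache(maxsize=None)
def cyclotomic(k):
    # Phi_k(x) = (x^k - 1) / prod_{d | k, d < k} Phi_d(x)
    num = [-1] + [0] * (k - 1) + [1]
    for d in range(1, k):
        if k % d == 0:
            num = polydiv_exact(num, cyclotomic(d))
    return num

def qbinom_cyclotomic(n, m):
    # Phi_k divides [n choose m]_q exactly when
    # floor(n/k) - floor(m/k) - floor((n-m)/k) == 1
    r = [1]
    for k in range(1, n + 1):
        if n // k - m // k - (n - m) // k == 1:
            r = polymul(r, cyclotomic(k))
    return r
```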
\section{A Formula for $Q(n,m,p)$}
Let $Q(n,m,p)$ be the number of integer partitions of $n$ into exactly $m$
distinct parts with each part at most $p$.
\begin{theorem}
\begin{equation}
Q(n,m,p) = P(n-m(m-1)/2,m,p-m+1)
\end{equation}
\end{theorem}
\begin{proof}
The proof uses Ferrer diagrams and the ``staircase'' argument.
Let a normal partition be a partition into $m$ parts,
and let a distinct partition be a partition into $m$ distinct parts.
Let the parts of a Ferrer diagram with $m$ parts be indexed from small to large by $s=1\cdots m$.
Each distinct partition of $n$ contains a ``staircase'' partition
with parts $s-1$ and a total size of $m(m-1)/2$, and subtracting this from such a
partition gives a normal partition of $n-m(m-1)/2$, and the largest part
is decreased by $m-1$.
Conversely, adding the ``staircase'' partition to a normal partition of $n$
gives a distinct partition of $n+m(m-1)/2$,
and the largest part is increased by $m-1$.
When the parts of the distinct partition are at most $p$,
then the parts of the corresponding normal partition
are at most $p-(m-1)$.
Because of this $1-1$ correspondence between the Ferrer diagrams of these two types
of partitions the identity is valid.
\end{proof}
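The bijection can be checked by brute force for small parameters; the following Python sketch (ours) counts both sides of the identity directly:

```python
from itertools import combinations, combinations_with_replacement

def count_distinct(n, m, p):
    # Q(n,m,p): partitions of n into exactly m distinct parts, each in 1..p
    return sum(1 for c in combinations(range(1, p + 1), m) if sum(c) == n)

def count_plain(n, m, p):
    # P(n,m,p): partitions of n into exactly m parts, each in 1..p
    return sum(1 for c in combinations_with_replacement(range(1, p + 1), m)
               if sum(c) == n)
```

When $p-m+1\leq 0$ both sides vanish, since $m$ distinct parts require $p\geq m$.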
\section{Computer Algebra Program}
The following Mathematica\textsuperscript{\textregistered}
functions are listed in the computer algebra program below.\\
\texttt{PartitionsPList[n,p]}\\
Gives a list of the $n$ numbers $P(1,p)..P(n,p)$, where
$P(n,p)$ is the number of partitions of $n$ with each part at most $p$.
This algorithm is $O(pn)$.\\
\texttt{PartitionsQList[n,p]}\\
Gives a list of the $n$ numbers $Q(1,p)..Q(n,p)$, where
$Q(n,p)$ is the number of partitions of $n$ into distinct parts with each part at most $p$.
This algorithm is $O(pn)$.\\
\texttt{PartitionsInPartsP[n,m,p]}\\
Computes the number of integer partitions of $n$ into exactly $m$ parts
with each part at most $p$. This algorithm is $O(n^2)$.\\
\texttt{PartitionsInPartsQ[n,m,p]}\\
Gives the number of integer partitions of $n$ into exactly $m$ distinct parts
with each part at most $p$,
using the formula $Q(n,m,p)=P(n-m(m-1)/2,m,p-m+1)$. This algorithm is $O(n^2)$.\\
\texttt{PartitionsInPartsPList[n,m,p]}\\
Gives a list of $n-m+1$ numbers of $P(m,m,p)$..$P(n,m,p)$.
This algorithm is $O(n^2)$.\\
\texttt{PartitionsInPartsQList[n,m,p]}\\
Gives a list of the $n-m(m+1)/2+1$ numbers $Q(m(m+1)/2,m,p)..Q(n,m,p)$,
using the formula $Q(n,m,p)=P(n-m(m-1)/2,m,p-m+1)$.
This algorithm is $O(n^2)$.\\
\texttt{QBinomialCoefficients[n,m]}\\
Gives the coefficients of $q$ of the q-binomial coefficient $\binom{n}{m}_q$.
This algorithm is $O(n^3)$.\\
\texttt{QMultinomialCoefficients[mlist]}\\
Gives the coefficients of $q$ of the q-multinomial coefficient
$\binom{n}{m_1\cdots m_s}_q$ where $s$ is the length of the list \texttt{mlist}
containing $m_1\cdots m_s$, and where $n=\sum_{i=1}^sm_i$.\\
Below is the listing of a Mathematica\textsuperscript{\textregistered} program
that can be copied into a notebook, using the package
taken from at least version 3 of the earlier paper \cite{MK}. The notebook must be saved
in the directory of the package file.
\begin{verbatim}
SetDirectory[NotebookDirectory[]];
<< "PartitionsInParts.m"
PartitionsPList[n_Integer?Positive,p_Integer?NonNegative]:=
PartitionsInPartsPList[n+p,p][[Range[2,n+1]]]
PartitionsQList[n_Integer?Positive,p_Integer?NonNegative]:=
Block[{pprod=ConstantArray[0,n+1]},pprod[[1]]=1;
Do[pprod[[Range[k+1,n+1]]]+=pprod[[Range[1,n-k+1]]],{k,Min[p,n]}];
pprod[[Range[2,n+1]]]]
partprod[n_,m_,p_]:=
Block[{prod=ConstantArray[0,n+1]},prod[[1]]=1;
Do[prod[[Range[p+k+1,n+1]]]-=prod[[Range[1,n-p-k+1]]],
{k,Min[m,n-p]}];prod]
PartitionsInPartsP[n_Integer?NonNegative,m_Integer?NonNegative,
p_Integer?NonNegative]:=If[n<m,0,
Block[{prods,parts,result},prods=partprod[n-m,m,p-1];
parts=PartitionsInPartsPList[n,m];result=0;
Do[result+=prods[[k+1]]parts[[n-m-k+1]],{k,0,n-m}];result]]
PartitionsInPartsQ[n_Integer?NonNegative,m_Integer?NonNegative,
p_Integer?NonNegative]:=If[n-m(m-1)/2<m||p<m,0,
PartitionsInPartsP[n-m(m-1)/2,m,p-m+1]]
PartitionsInPartsPList[n_Integer?NonNegative,m_Integer?NonNegative,
p_Integer?NonNegative]:=If[n<m,{},
Block[{prods,parts},prods=partprod[n-m,m,p-1];
parts=PartitionsInPartsPList[n,m];
ListConvolve[prods,parts,{1,1},0]]]
PartitionsInPartsQList[n_Integer?NonNegative,m_Integer?NonNegative,
p_Integer?NonNegative]:=If[n-m(m-1)/2<m||p<m,{},
PartitionsInPartsPList[n-m(m-1)/2,m,p-m+1]]
QBinomialCoefficients[N_Integer,M_Integer]:=If[M<0||N<M,{},
Block[{m=M,p=N-M,result,prods,parts,ceil},
Which[m>p,m=p;p=M];ceil=Ceiling[(m p+1)/2];
prods=partprod[ceil-1,m,p];parts=PartitionsInPartsPList[ceil+m-1,m];
result=PadRight[ListConvolve[prods,parts,{1,1},0],m p+1];
result[[Range[ceil+1,m p+1]]]=result[[Range[m p-ceil+1,1,-1]]];
result]]
MListQ[alist_List]:=(alist!={}&&VectorQ[alist,IntegerQ])
QMultinomialCoefficients[mlist_List?MListQ]:=
If[!VectorQ[mlist,NonNegative],{},
Block[{length=Length[mlist],bprod={1},msum=0,qbin},
Do[msum+=mlist[[k]];qbin=QBinomialCoefficients[msum,mlist[[k]]];
bprod=ListConvolve[bprod,qbin,{1,-1},0],{k,length}];
bprod]]
\end{verbatim}
\pdfbookmark[0]{References}{}
% arXiv:2205.15024
\section{Counterexample}\label{sec:counterexample}
\begin{thm} \label{thm:mainTheorem}
Let $\textup{R}_8$ be the dihedral quandle of order $8$. Then
\begin{displaymath}
\left|\Delta^2\left(\textup{R}_8\right)/\Delta^3\left(\textup{R}_8\right)\right|= 16.
\end{displaymath}
\end{thm}
\noindent From \autoref{prop:basis}, we get that $\{e_i=a_i-a_0:i=1,2,\cdots, n-1\}$ is a basis for $\delrn{}{n}$. We will be using this notation in the subsequent computation.
\begin{lemma}\label{lemma:multiplictionWith_e4}
Let $\textup{R}_{2k}$ denote the dihedral quandle of order $2k~(k\ge 2)$. Then $e_i \cdot e_k=0$ for all $i=1,2,\cdots, 2k-1$.
\end{lemma}
\begin{proof}
Observe that
\begin{align*}
e_i \cdot e_k & = \left(a_i -a_0\right) \cdot \left(a_k-a_0\right) \\
& = a_{2k-i}-a_{2k-i}-a_0+a_0=0.
\end{align*}
\end{proof}
\begin{lemma}\label{lemma:multiplictionSymmetry}
Let $\textup{R}_{2k}$ denote the dihedral quandle of order $2k~(k\ge 2)$. Then $e_i\cdot e_j = e_i \cdot e_{k+j}$ for all $j=1,2,\cdots,k-1$ and for all $i=1,2,\cdots,2k-1$.
\end{lemma}
\begin{proof}
Note that
\begin{align*}
e_i \cdot e_{k+j} & = a_ia_{k+j}-a_ia_0-a_0a_{k+j}+a_0 \\
& = a_i a_j - a_i a_0 -a_0a_j+a_0 \\
& = e_i \cdot e_j.
\end{align*}
\end{proof}
\noindent We will use \autoref{lemma:multiplictionWith_e4} and \autoref{lemma:multiplictionSymmetry} to simplify the multiplication tables.
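Both lemmas, and the multiplication tables below, are mechanical computations in $\mathbb{Z}[\textup{R}_8]$, so they are easy to verify by machine. The following Python sketch (ours, purely illustrative) represents elements of $\mathbb{Z}[\textup{R}_n]$ as integer coefficient vectors:

```python
n = 8  # R_8: elements a_0,...,a_7 with a_i * a_j = a_{(2j - i) mod 8}

def q(i, j):
    # quandle operation of the dihedral quandle R_n
    return (2 * j - i) % n

def e(i):
    # e_i = a_i - a_0 as a coefficient vector in Z[R_n]
    v = [0] * n
    v[i] += 1
    v[0] -= 1
    return v

def mult(u, v):
    # multiplication in the quandle ring Z[R_n]
    w = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[q(i, j)] += ui * vj
    return w
```

The assertions below reproduce both lemmas with $k=4$, as well as the first entry $e_1\cdot e_1 = e_1-e_2-e_7$ of the multiplication table.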
\begin{proof}[Proof of \autoref{thm:mainTheorem}]
Recall that a basis of $\delr{}$ is given by $\mathcal{B}_1=\{e_1,e_2,\cdots,e_7\}$. The multiplication table for the $e_i\cdot e_j$ is given as follows:
\begin{center}
\begin{displaymath}
\begin{array}{|c|c|c|c|}
\hline
& e_1 & e_2 & e_3 \\ \hline
e_1 & e_1-e_2-e_7 & e_3-e_4-e_7 & e_5-e_6-e_7 \\
\hline
e_2 & -e_2-e_6 & e_2-e_4-e_6 & -2e_6 \\
\hline
e_3 & -e_2-e_5+e_7 & e_1-e_4-e_5& e_3-e_5-e_6 \\
\hline
e_4 & -e_2-e_4+e_6 & -2e_4 & e_2 - e_4- e_6 \\
\hline
e_5 & -e_2-e_3+e_5 & -e_3-e_4+e_7 & e_1-e_3-e_6 \\
\hline
e_6 & -2e_2 + e_4 & -e_2 - e_4 + e_6 & -e_2-e_6 \\
\hline
e_7 & -e_1-e_2 + e_3 & -e_1-e_4+e_5 & -e_1-e_6+e_7 \\
\hline
\end{array}
\end{displaymath}
\end{center}
Since $\delr{2}$ is generated by $e_i\cdot e_j$ as a $\mathbb{Z}$-module, using row reduction over $\mathbb{Z}$ one can show that a $\mathbb{Z}$-basis is given by
\begin{align*}
\mathcal{B}_2 = & \left\{u_1 = \e{1}-\e{2}-\e{7}, u_2 = \e{2}+\e{6}, u_3= \e{3}-\e{4}-\e{7},\right. \\
& \kern .5cm \left.u_4 = \e{4}+2\e{6}, u_5 = \e{5}-\e{6}-\e{7}, u_6 = 4\e{6} \right\}.
\end{align*}
We now want to express a $\mathbb{Z}$-basis of $\delr{3}$ in terms of $\mathcal{B}_2$. First we calculate the products $u_i\cdot e_j$. This is presented in the following table.
\begin{center}
\begin{displaymath}
\begin{array}{|c|c|c|c|}
\hline
& e_1 & e_2 & e_3 \\ \hline
u_1 & \makecell{2e_1 + e_2 -e_3 \\ +e_6 -e_7} & \makecell{e_1 -e_2 +e_3 \\+e_4 -e_5 +e_6 -e_7 }& \makecell{e_1 -e_4 +e_5 \\ +2e_6 -2e_7} \\
\hline
u_2 & -3e_2+e_4 -e_6 & -2e_4 & -e_2 +e_4 -3e_6 \\
\hline
u_3 & \makecell{e_1+e_2-e_3\\+e_4-e_5-e_6+e_7} & 2e_1+2e_4-2e_5& \makecell{e_1-e_2+e_3+e_4 \\-e_5 +e_6 -e_7} \\
\hline
u_4 & -5e_2-e_4+e_6 & -2e_2-4e_4+2e_6 & -e_2-e_4 -3e_6 \\
\hline
u_5 & \makecell{e_1+2e_2-2e_3\\-e_4+e_5} & \makecell{e_1+e_2-e_3+e_4\\-e_5-e_6+e_7} & 2e_1+e_2-e_3+e_6-e_7 \\
\hline
u_6 & -8e_2+4e_4 & -4e_2-4e_4+4e_6 & -4e_2-4e_6 \\
\hline
\end{array}
\end{displaymath}
\end{center}
\noindent Hence, a $\mathbb{Z}$-basis for $\delr{3}$ is given by
\begin{align*}
\mathcal{B}_3 & = \left\{v_1 = e_1-e_2+e_3+e_4-e_5+e_6-e_7, v_2 = e_2 - e_3 -2e_4+2e_5+e_6-e_7, \right. \\
& \kern 0.5cm \left. v_3 = -e_3-e_4+2e_5-2e_6-e_7, v_4 = -2e_4, v_5 = -4e_5-4e_6 + 4e_7, v_6 = 8e_6 \right\}.
\end{align*}
We now express the elements of $\mathcal{B}_3$ in terms of $\mathcal{B}_2$:
\begin{displaymath}
\begin{array}{c c c c c c c c}
v_1 & = & u_1 & & & + 2u_4 & -u_5 & -u_6 \\
v_2 & = & & u_2 & -u_3 & - u_4 & + 2u_5 & + u_6 \\
v_3 & = & & & -u_3 & -2u_4 & +2u_5 & +u_6 \\
v_4 & = & & & & 2u_4 & & -u_6\\
v_5 & = & & & & & -4u_5 \\
v_6 & = & & & & & & 2u_6.
\end{array}
\end{displaymath}
Note that we can alter the basis $\mathcal{B}_2$ of $\delr{2}$ as follows:
\begin{align*}
& \left\{u_1+2u_4-u_5-u_6, u_2-u_3-u_4+2u_5+u_6, u_3+2u_4-2u_5-u_6, u_4, u_5, u_6 \right\}.
\end{align*}
Hence,
\begin{align*}
\dfrac{\delr{2}}{\delr{3}} & \cong \dfrac{\mathbb{Z} v_1\oplus \mathbb{Z} v_2 \oplus \mathbb{Z} v_3 \oplus \mathbb{Z} u_4\oplus \mathbb{Z} u_5 \oplus \mathbb{Z} u_6}{\mathbb{Z} v_1\oplus \mathbb{Z} v_2 \oplus \mathbb{Z} v_3 \oplus \mathbb{Z} (2u_4-u_6)\oplus \mathbb{Z} (-4u_5) \oplus \mathbb{Z} (2u_6)} \\
& \cong \mathbb{Z}_4\oplus \dfrac{\mathbb{Z} u_4 \oplus \mathbb{Z} u_6}{\mathbb{Z} (2u_4-u_6) \oplus \mathbb{Z} (2u_6)} \\
& \cong \mathbb{Z}_4 \oplus \dfrac{\mathbb{Z} u_4 \oplus \mathbb{Z} u_6}{\mathbb{Z} u_4 \oplus \mathbb{Z} (4u_6)} \\
& \cong \mathbb{Z}_4 \oplus \mathbb{Z}_4.
\end{align*}
\end{proof}
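As an independent numerical check (our addition, not part of the proof), the isomorphism $\delr{2}/\delr{3}\cong \mathbb{Z}_4\oplus\mathbb{Z}_4$ can be recovered from the Smith normal form of the integer matrix expressing $v_1,\ldots,v_6$ in the basis $u_1,\ldots,u_6$; a sketch using SymPy:

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

# Rows: v_1,...,v_6 expressed in the basis u_1,...,u_6,
# read off from the triangular presentation above.
M = Matrix([
    [1, 0,  0,  2, -1, -1],  # v1 = u1 + 2u4 - u5 - u6
    [0, 1, -1, -1,  2,  1],  # v2 = u2 - u3 - u4 + 2u5 + u6
    [0, 0, -1, -2,  2,  1],  # v3 = -u3 - 2u4 + 2u5 + u6
    [0, 0,  0,  2,  0, -1],  # v4 = 2u4 - u6
    [0, 0,  0,  0, -4,  0],  # v5 = -4u5
    [0, 0,  0,  0,  0,  2],  # v6 = 2u6
])

snf = smith_normal_form(M)
invariant_factors = sorted(abs(snf[i, i]) for i in range(6))
print(invariant_factors)  # the nontrivial factors 4, 4 give Z_4 + Z_4
```

The invariant factors $1,1,1,1,4,4$ match the hand computation above.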
\section{Introduction} \label{sec:introduction}
A \textit{quandle} is a pair $(A,\cdot)$ such that `$\cdot$' is a binary operation satisfying
\begin{enumerate}
\item the map $S_a:A\longrightarrow A$, defined as $S_a(b)=b\cdot a$, is an automorphism for all $a\in A$,
\item for all $a\in A$, we have $S_a(a)=a$.
\end{enumerate}
\noindent To obtain a better understanding of this structure, a theory parallel to that of group rings was introduced by Bardakov, Passi and Singh in \cite{BaPaSi19}. Let $\mathbb{Z}_n$ denote the cyclic group of order $n$. Then the operation $a\cdot b=2b-a$ defines a quandle structure on $A=\mathbb{Z}_n$, known as the \textit{dihedral quandle}. For other examples see \cite{BaPaSi19}. The quandle ring of a quandle $A$ is defined as follows. Let $R$ be a commutative ring. Consider
\begin{displaymath}
R[A] \vcentcolon= \left\{\sum_{i}r_ia_i: r_i\in R,a_i\in A \right\}.
\end{displaymath}
This is an additive group in the usual way. Define multiplication as
\begin{displaymath}
\left(\sum_{i}r_ia_i\right)\cdot \left(\sum_{j}s_ja_j\right) \vcentcolon= \sum_{i,j}r_is_j(a_i\cdot a_j).
\end{displaymath}
The \textit{augmentation ideal} $\Delta_R(A)$ of $R[A]$ is defined as the kernel of the augmentation map
\begin{displaymath}
\varepsilon :R[A]\to R,~\sum_{i}r_ia_i \mapsto \sum_{i} r_i.
\end{displaymath}
The powers $\Delta^k_R(A)$ are defined as $\left(\Delta_R(A)\right)^k$. When $R=\mathbb{Z}$, we omit the subscript $R$. The following proposition gives a basis for $\Delta_R(A)$.
\begin{propositionX}\cite[Proposition 3.2, Page 6]{BaPaSi19} \label{prop:basis}
A basis of $\Delta_R(A)$ as an $R$-module is given by $\{a-a_0:a\in A\setminus\{a_0\}\}$, where $a_0\in A$ is a fixed element.
\end{propositionX}
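The two quandle axioms can be verified mechanically for the dihedral quandle $\textup{R}_n$; a minimal Python sketch (the function name is ours):

```python
# Check the quandle axioms for the dihedral quandle
# R_n = (Z_n, a*b = 2b - a mod n).

def check_dihedral_quandle(n):
    op = lambda a, b: (2 * b - a) % n
    elems = range(n)
    for a in elems:
        S_a = {b: op(b, a) for b in elems}
        # S_a is a bijection ...
        assert sorted(S_a.values()) == list(elems)
        # ... and a homomorphism: S_a(b*c) = S_a(b) * S_a(c)
        for b in elems:
            for c in elems:
                assert S_a[op(b, c)] == op(S_a[b], S_a[c])
        # second axiom: S_a(a) = a
        assert S_a[a] == a
    return True

print(all(check_dihedral_quandle(n) for n in range(1, 13)))
```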
The following has been conjectured in \cite[Conjecture 6.5, Page 20]{BaPaSi19}.
\begin{conj}
Let $\textup{R}_n=\{a_0,a_1,\cdots,a_{n-1}\}$ denote the dihedral quandle of order $n$. Then we have the following statements.
\begin{enumerate}
\item For an odd integer $n>1$, $\delrn{k}{n}/\delrn{k+1}{n}\cong \mathbb{Z}_n$ for all $k\ge 1$.
\item For an even integer $n> 2$, $\left|\delrn{k}{n}/\delrn{k+1}{n}\right|=n$ for $k\ge 2$.
\end{enumerate}
\end{conj}
The first statement has been confirmed by Elhamdadi, Fernando and Tsvelikhovskiy in \cite[Theorem 6.2, Page 182]{ElFeTs19}. The second statement holds true for $n=4$, see \cite{BaPaSi19}. In \autoref{thm:mainTheorem} we give a counterexample showing that the conjecture is not true in general.
\section{Introduction}
By a finite partially ordered set (poset) \(I\) of size \(n\) we mean a pair \(I=(\{1,\ldots,n\}, \preceq_I)\), where \(\preceq_I\) is a reflexive, antisymmetric and transitive binary relation. Every poset \(I\) is uniquely determined by its \textit{incidence matrix}
\[
C_{I} = [c_{ij}] \in\MM_{n}(\ZZ),\textnormal{ where } c_{ij} = 1 \textnormal{ if } i \preceq_I j\textnormal{ and } c_{ij} = 0\textnormal{ otherwise},
\]
i.e., a square $(0,1)$-matrix that encodes the relation \(\preceq_I\).
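For concreteness, the incidence matrix can be computed from a list of covering relations by taking the reflexive-transitive closure; a short sketch (helper names are ours; vertices are $0$-indexed):

```python
import numpy as np

def incidence_matrix(n, covers):
    """C[i, j] = 1 iff i <= j in the poset generated by the given
    covering relations; the reflexive-transitive closure is reached
    by repeated boolean matrix squaring."""
    C = np.eye(n, dtype=int)
    for i, j in covers:
        C[i, j] = 1
    for _ in range(n):
        C = ((C @ C) > 0).astype(int)
    return C

# The chain 0 < 1 < 2 yields an upper-triangular matrix of ones.
print(incidence_matrix(3, [(0, 1), (1, 2)]))
```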
It is known that various mathematical classification problems can be solved by a reduction to the classification of indecomposable $K$-linear representations ($K$ is a field) of finite digraphs or matrix representations of finite partially ordered sets, see~\cite{Si92}. Inspired by these results,
here we study posets that are non-negative.
A poset $I$\ is defined to be \textit{non-negative} of \textit{corank} $\crk_I \geq 0$ if its symmetric Gram matrix $G_I\eqdef\tfrac{1}{2}(C_I+C_I^{tr})\in\MM_n(\QQ)$ is positive semi-definite of rank $n-\crk_I$.
Non-negative posets are classified by means of signed graphs
\(\Delta_I=(V=\{1,\ldots,n\},
E=\{\{i,j\};\ i\preceq_I j \textnormal{ or } j \preceq_I i\}, v\overset{\sgn}{\mapsto} 1)\)
described by the adjacency matrix $\Ad_I\eqdef 2G_I-E$. Such signed graphs, i.e., with vertices connected by edges of the same sign, are called \textit{bigraphs}, see~\cite{simsonCoxeterGramClassificationPositive2013}. Analogously as in case of posets, a bigraph $\Delta$ is defined to be \textit{non-negative} of corank $\crk_\Delta\geq 0$ if its \textit{symmetric Gram matrix} $G_\Delta\eqdef \frac{1}{2}(\Ad_\Delta + E)$ is positive semi-definite of rank $n-\crk_\Delta$. Two bigraphs [posets] are said to \textit{be weakly Gram $\ZZ$-congruent} $\sim_\ZZ$ (or $\ZZ$-equivalent) if their symmetric Gram matrices are congruent and the matrix that defines this congruence is $\ZZ$-invertible, i.e., $G_1=B^{tr}G_{2}B$ and $B\in\Gl(n,\ZZ)\eqdef\{A\in\MM_n(\ZZ);\,\det A=\pm 1\}$ \cite{simsonCoxeterGramClassificationPositive2013}.
It is easy to check that this relation preserves poset (bigraph) definiteness and corank, see \cite{simsonCoxeterGramClassificationPositive2013}. Every \textit{positive} (i.e.,~corank~$0$) connected bigraph (poset) $\Delta$ is weakly Gram $\ZZ$-congruent with a unique simply-laced Dynkin diagram
$\Dyn_\Delta\in\{\AA_n,\ab \DD_n,\ab \EE_6,\ab \EE_7,\ab \EE_8\}$ of \cref{tbl:Dynkin_diagrams}
called the Dynkin type of $\Delta$, see \cite{simsonCoxeterGramClassificationPositive2013,barotQuadraticFormsCombinatorics2019}.\vspace*{-2ex}
{\newcommand{\mxs}{0.80}
\begin{longtable}{@{}r@{\,}l@{\,}l@{\quad}r@{\,}l@{}}
$\AA_n\colon$ & \grapheAn{\mxs}{1} & $\scriptstyle (n\geq 1);$\\[0.2cm]
$\DD_n\colon$ & \grapheDn{\mxs}{1} & $\scriptstyle (n\geq 1);$ & $\EE_6\colon$ & \grapheEsix{\mxs}{1}\\[0.2cm]
$\EE_7\colon$ & \grapheEseven{\mxs}{1} & & $\EE_8\colon$ & \grapheEeight{\mxs}{1}\\
\caption{Simply-laced Dynkin diagrams}\label{tbl:Dynkin_diagrams}
\end{longtable}
\begin{remark}
We view any graph $G=(V,E)$ as bigraph with a sign function
$\sgn\colon E \mapsto \{-1,1\}$ defined as $\sgn(e)=-1$ for every $e\in E$.
\end{remark}
\noindent Moreover, every non-negative bigraph of corank $r\geq 1$ is weakly Gram $\ZZ$-congruent with the canonical $r$-vertex extension
$\wh D_n^{(r)}$
of a simply-laced Dynkin diagram
$D_n \in \{\AA_n,\ab \DD_n,\ab \EE_6,\ab \EE_7,\ab \EE_8\}$,
see~\cite[Definition 2.2]{simsonSymbolicAlgorithmsComputing2016} and~\cite{zajacStructureLoopfreeNonnegative2019}. In this way, one associates a Dynkin diagram $\Dyn_I$ with an arbitrary non-negative poset~$I$.\smallskip
In the present work, we give a complete description of connected non-negative posets $I=(\{1,\ldots,n\},\preceq_I)$ of Dynkin type $\Dyn_I=\AA_n$ in terms
of their \textit{Hasse digraphs} $\CH(I)$.
We recall from \cite[Section 14.1]{Si92} that $\CH(I)$ is an acyclic digraph $\CH(I)=(\{1,\ldots,n\}, A)$, where $i\to j\in A$ iff $i\preceq_I j$, $i\neq j$, and there is no $k\in\{1,\ldots,n\}\setminus \{i,j\}$ such that $i\preceq_I k\preceq_I j$. The main result of the paper is the following theorem, which shows the correspondence between combinatorial and algebraic properties of non-negative posets of Dynkin type $\AA_n$.
\begin{theorem}\label{thm:a:main}
Assume that $I=(\{1,\ldots,n\},\preceq_I)$ is a finite connected poset and $\CH(I)$ is the Hasse digraph of~$I$.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{thm:a:main:posit} $\Dyn_I=\AA_n$ and $\crk_I=0$ if and only if
$\CH(I)$ is an oriented path.
\item\label{thm:a:main:princ} $\Dyn_I=\AA_{n-1}$ and $\crk_I=1$ if and only if the Hasse digraph $\CH(I)$ is $2$-regular with at least two sinks.
\item\label{thm:a:main:crkbiggeri} If $\Dyn_I=\AA_{n-\crk_I}$, then $\crk_I \leq 1$.
\end{enumerate}
\end{theorem}
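Parts \ref{thm:a:main:posit} and \ref{thm:a:main:princ} can be sanity-checked numerically on small instances by inspecting the eigenvalues of the symmetric Gram matrix; a sketch (helper names and tolerances are ours):

```python
import numpy as np

def sym_gram(n, relations):
    # Incidence matrix C_I from the strict relations i < j (0-indexed),
    # with the diagonal added for reflexivity; then G_I = (C + C^T)/2.
    C = np.eye(n)
    for i, j in relations:
        C[i, j] = 1
    return (C + C.T) / 2

# Part (a): the chain 0 < 1 < 2 < 3, whose Hasse digraph is an oriented path.
chain = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(np.all(np.linalg.eigvalsh(sym_gram(4, chain)) > 1e-9))

# Part (b): Hasse digraph a 4-cycle 0->1<-2->3<-0 with two sinks (1 and 3);
# here the full order relation coincides with the covering relations.
cyc = [(0, 1), (2, 1), (2, 3), (0, 3)]
print(np.sum(np.abs(np.linalg.eigvalsh(sym_gram(4, cyc))) < 1e-9))
```

The chain gives only positive eigenvalues (corank $0$), while the cycle with two sinks gives exactly one zero eigenvalue (corank $1$).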
In particular, we confirm Conjecture 6.4 stated in~\cite{gasiorekAlgorithmicCoxeterSpectral2020}, that is, we show that in the case of non-negative posets of Dynkin type $\AA_n$, there is a one-to-one correspondence between positive posets and connected digraphs whose underlying graph is a path. We give a similar description of corank $1$ (i.e., \textit{principal}) posets: there is a one-to-one correspondence between such posets and connected digraphs whose underlying graph is a cycle and which have at least two sinks. Moreover, we show that this description is complete: there are no connected non-negative posets of Dynkin type $\AA_n$ and corank $r\geq 2$.
Our description of positive posets has a direct application in the problem of Dynkin type recognition, see~\cite[Proposition 6.5]{gasiorekAlgorithmicCoxeterSpectral2020}.
\begin{corollary}
The Dynkin type $\Dyn_I$ of any connected positive poset $I$
can be calculated in $O(n)$, assuming that $I$ is encoded in the form of an adjacency list of its Hasse digraph~$\CH(I)$.
\end{corollary}
\noindent This improves upon the previous best $O(n^2)$ result of Makuracki-Mr\'oz~\cite{makurackiQuadraticAlgorithmCompute2020} and Zaj\k{a}c~\cite{zajacPolynomialTimeInflation2020}.
\section{Preliminaries}
Throughout, we mainly use the terminology and notation introduced in~\cite{gasiorekAlgorithmicStudyNonnegative2015,Si92} (in regard to partially ordered sets), \cite{barotQuadraticFormsCombinatorics2019,simsonCoxeterGramClassificationPositive2013} (edge-bipartite graphs and quadratic forms), and~\cite{diestelGraphTheory2017} (graph theory). In particular, by $\NN\subseteq\ZZ\subseteq\QQ\subseteq \RR$ we denote the set of non-negative integers, the ring of integers, the rational and the real number fields, respectively. We view $\ZZ^n$, with $n\geq 1$, as a free abelian group, and we denote by $e_1, \ldots, e_n$ the standard $\ZZ$-basis of $\ZZ^n$. We use a row notation for vectors $v=[v_1,\ldots,v_n]$ and we write $v^{tr}$ to denote column vectors.
Two (di)graphs $G=(V,E)$ and $G'=(V',E')$ are called \textbf{isomorphic}, $G\simeq G'$, if there exists a bijection $f\colon V\to V'$ that preserves edges (arcs), i.e., $(u,v)\in E \Leftrightarrow (f(u), f(v))\in E'$. We call a graph $G$ a \textit{path graph} (chain) if $V$ is empty (i.e., $G$ is a \textit{null graph}) or $G\simeq\,P_n(u,v)\eqdef \, u\scriptstyle \bullet\,\rule[1.5pt]{22pt}{0.4pt}\,\bullet\,\rule[1.5pt]{22pt}{0.4pt}\,\,\hdashrule[1.5pt]{12pt}{0.4pt}{1pt}\,\rule[1.5pt]{22pt}{0.4pt}\,\bullet \displaystyle v$ with $u\neq v$ (if $u=v$, we call $G$ a \textbf{cycle}). We say that a digraph $D$ is an \textit{oriented path} (oriented chain) if $D\simeq\,\vec P(a,b)\eqdef \, a\scriptstyle \bullet \raisebox{-1.5pt}{\parbox{25pt}{\rightarrowfill}} \bullet \raisebox{-1.5pt}{\parbox{25pt}{\rightarrowfill}} \,\hdashrule[1.5pt]{12pt}{0.4pt}{1pt} \raisebox{-1.5pt}{\parbox{25pt}{\rightarrowfill}} \bullet \displaystyle b$ with $a\neq b$ (if $a=b$, we call $D$ an \textbf{oriented cycle}). A digraph $D$ is called \textbf{acyclic} if it contains no oriented cycle, i.e., no induced subdigraph isomorphic to an oriented cycle. A graph $G=(V,E)$ is \textbf{connected} if $P(u, v)\subseteq G$ for every pair $u\neq v\in V$. By the \textbf{underlying graph} $\ov D$ we mean the graph obtained from a digraph $D$ (bigraph $\Delta$) by forgetting the orientation of its arcs (forgetting the signs of its edges). A digraph $D$ is connected if the graph $\ov D$ is connected. A connected (di)graph is called a \textbf{tree} if it does not contain any cycle. We call a vertex $v$ of a digraph $D=(V,A)$ a \textit{source} (minimum) if it is not a target of any arc $\alpha\in A$. Analogously, we call $v\in D$ a \textit{sink} (maximum) if it is not a source of any arc (arrow).
\begin{definition}\label{df:hasse}
The \textit{Hasse digraph} $\CH(I)$ of a finite partially ordered set $I=(\{1,\ldots,n\},\preceq_I)$ is an acyclic digraph with the set of vertices $\{1,\ldots,n\}$, where there is an arrow $i\to j$ if and only if $i\preceq_I j$, $i\neq j$, and there is no $k\in\{1,\ldots,n\}\setminus \{i,j\}$ such that $i\preceq_I k\preceq_I j$, see \cite[Section 14.1]{Si92}.
\end{definition}
\noindent We note that $\CH(I)$ is the \textit{transitive reduction} (in the sense of \cite{ahoTransitiveReductionDirected1972}) of the digraph $\overrightarrow{I}=(V,A)$ defined as follows:\ $V=\{1,\ldots,n\}$ and $i\to j\in A$ if and only if $i\preceq_I j$ and $i\neq j$.\smallskip
We call a finite poset $I$ \textit{connected} if $\CH(I)$ is connected and note that every minimal (maximal) element of $I$ corresponds to a source (sink) in the digraph $\CH(I)$. Every finite acyclic digraph $D=(V,A)$ defines the poset $I(D)=I_D\eqdef(\{1,\ldots, |V| \}, {\preceq_D)}$, where $a \preceq_D b$ if there is an oriented path $\vec P(a,b)\subseteq D$. We note that $\CH(I_D)\neq D$ in general.\smallskip
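The transitive reduction describing $\CH(I)$ can be sketched directly from the definition; a brute-force illustration (names are ours):

```python
# Brute-force transitive reduction: keep the arc i -> j iff i < j and
# no intermediate k satisfies i < k < j ('leq' encodes the order).

def hasse_arcs(elems, leq):
    def strictly(a, b):
        return leq(a, b) and a != b
    arcs = []
    for i in elems:
        for j in elems:
            if strictly(i, j) and not any(
                strictly(i, k) and strictly(k, j) for k in elems
            ):
                arcs.append((i, j))
    return arcs

# Divisibility order on the divisors of 12: each arc is a covering relation.
elems = [1, 2, 3, 4, 6, 12]
print(sorted(hasse_arcs(elems, lambda a, b: b % a == 0)))
```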
Following~\cite{simsonCoxeterGramClassificationPositive2013} we associate with a bigraph (a poset) $\Delta$:
\begin{itemize}
\item the unit quadratic form $q_\Delta\colon\ZZ^n\to\ZZ$ defined by the formula $q_\Delta(v):=v\cdot G_\Delta\cdot v^{tr}$,\inlineeqno{eq:quadratic_form}
\item kernel $\Ker q_\Delta \eqdef \{v \in\ZZ^n;\ q_\Delta(v)=0\}\subseteq\ZZ^n$,\inlineeqno{eq:kernel}
\end{itemize}
where $G_\Delta\in\MM_n(\QQ)$ is the symmetric Gram matrix of $\Delta$. It is known that a bigraph $\Delta$ (poset $I$) is non-negative of corank~$r$ if and only if the quadratic form $q_\Delta$ is positive semi-definite (i.e., $q_\Delta(v)\geq 0$ for every $v\in\ZZ^n$) and its kernel $\Ker q_\Delta$ is a free abelian subgroup of rank $r$, see~\cite{simsonCoxeterGramClassificationPositive2013}.
\begin{example}
To illustrate the definitions, consider the following example.
\begin{center}
\hfill
$Q=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.66, yscale=0.6]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 4$] (n4) at (1 , 0 ) {};
\foreach \x/\y in {1/3, 2/1, 2/4, 4/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-stealth, shorten <= 2.00pt, shorten >= 2.10pt] (n2) to (n3);
\end{tikzpicture}\hfill
$\CH(I_Q)=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.66, yscale=0.6]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 4$] (n4) at (1 , 0 ) {};
\foreach \x/\y in {1/3, 2/1, 2/4, 4/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\hfill
$C_I=\begin{bsmallmatrix*}[r]
1 & 0 & 1 & 0\\
1 & 1 & 1 & 1\\
0 & 0 & 1 & 0\\
0 & 0 & 1 & 1
\end{bsmallmatrix*}$
\hfill
$\Delta_I=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.66, yscale=0.6]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 4$] (n4) at (1 , 0 ) {};
\foreach \x/\y in {1/3, 2/1, 2/4, 4/3}
\draw [-, densely dashed, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\hfill
$G_I=G_\Delta=\begin{bsmallmatrix*}[r]
1 & \frac{1}{2} & \frac{1}{2} & 0\\\frac{1}{2} & 1 & \frac{1}{2} & \frac{1}{2}\\\frac{1}{2} & \frac{1}{2} & 1 & \frac{1}{2}\\0 & \frac{1}{2} & \frac{1}{2} & 1
\end{bsmallmatrix*}$\hfill\mbox{}
\end{center}
We have:
\begin{itemize}
\item
$I_Q=(\{1,2,3,4\}, \{2 \preceq 1, 2 \preceq 4, 1 \preceq 3, 4 \preceq 3 \})$,
\item $2$ is minimal and $3$ is maximal in $I_Q$; equivalently, $2$ is a source and $3$ is a sink in $\CH(I_Q)$,
\item $q_I(x)= \sum_{i}x_{i}^{2} +
x_{2} (x_{1} \!+\! x_{3} \!+\! x_{4}) \!+\! x_{3}(x_{1} \!+\! x_{4})=
\left(x_{1} \!+\! \tfrac{1}{2}x_{2} \!+\! \tfrac{1}{2}x_{3}\right)^{2} \!+\!
\tfrac{3}{4}\! \left(\tfrac{1}{3}x_{2} \!+\! x_{3} \!+\! \frac{2}{3} x_{4}\right)^{2} \!+\!
\tfrac{2}{3}\! \left(x_{2} \!+\! \tfrac{1}{2}x_{4}\right)^{2} \!+\! \tfrac{1}{2}x_{4}^{2}$,
\item $q_{\Delta_I}(x)=q_I(x)$ and $\Ker q_I=\Ker q_{\Delta_I}=\{0\}\subseteq\ZZ^4$,
\item poset $I_Q$ (bigraph $\Delta_I$) is non-negative of corank $0$, i.e., positive.
\end{itemize}
\end{example}
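The example can be verified numerically: building $G_I$ from the incidence matrix above and checking that all eigenvalues are positive (a sketch; the tolerance is ours):

```python
import numpy as np

# C_I of the example poset I_Q, copied from above (rows/cols 1..4).
C = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
])

G = (C + C.T) / 2          # symmetric Gram matrix G_I
ev = np.linalg.eigvalsh(G)
print(np.all(ev > 1e-9))   # positive definite, so corank 0 and Ker q_I = {0}
```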
We need the following fact to define the Dynkin type of a connected bigraph (poset) $\Delta$.
\begin{fact}\label{fact:specialzbasis} Assume that $\Delta$ is a connected non-negative bigraph
of corank $r \geq 1$ and let $q_\Delta\colon\ZZ^n\to\ZZ$~\eqref{eq:quadratic_form} be the quadratic form associated with $\Delta$.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{fact:specialzbasis:existance} There exist integers $1 \leq k_1 < \ldots < k_r \leq n$ such that the free abelian group $\Ker q_\Delta\subseteq \ZZ^n$ of rank $r \geq 1$ admits a
$(k_1,\ldots, k_r)$-special $\ZZ$-basis $h^{(k_1)},\ldots, h^{(k_r)}\in\Ker q_\Delta$, that is, $h^{(k_i)}_{k_i} = 1$ and $h^{(k_i)}_{k_j} = 0$ for $1 \leq i,j \leq r$ and $i \neq j$.
\item\label{fact:specialzbasis:subbigraph} $\Delta^{(k_i)}\eqdef\Delta\setminus\{k_i\}$ is a connected non-negative bigraph
of corank $r - 1\geq 0$.
\item\label{fact:specialzbasis:positsubbigraph} Bigraph $\Delta^{(k_1,\ldots,k_r)}\eqdef\Delta\setminus\{k_1,\ldots,k_r\}$ is of corank $0$ (i.e., positive) and connected.\inlineeqno{eq:positive_subbigraph}
\end{enumerate}
\end{fact}
\begin{proof}
Apply \cite[Lemma 2.7]{zajacPolynomialTimeInflation2020} and \cite[Theorem 2.1]{zajacStructureLoopfreeNonnegative2019}.
\end{proof}
\pagebreak
By a \textbf{Dynkin type} $\Dyn_\Delta$ of a connected non-negative bigraph $\Delta$ we mean a unique simply-laced Dynkin diagram $\Dyn_\Delta \in \{\AA_n,\DD_n,\EE_6,\EE_7,\EE_8\}$ that determines $\Delta$ uniquely, up to the weak Gram $\ZZ$-congruence. The following definition comes from~\cite{simsonSymbolicAlgorithmsComputing2016}
(see also~\cite{barotQuadraticFormsCombinatorics2019} for an alternative approach).
\begin{definition}\label{df:Dynkin_type}
Assume that $\Delta$ is a connected non-negative bigraph of corank $r\geq 0$.
The Dynkin type of $\Delta$ is defined to be the unique simply-laced Dynkin diagram of \Cref{tbl:Dynkin_diagrams}, viewed as a bigraph,
\[
\Dyn_\Delta \in \{\AA_n,\DD_n,\EE_6,\EE_7,\EE_8\}
\]
that is weakly Gram $\ZZ$-congruent with the bigraph $\check \Delta$,
where
\begin{itemize}
\item $\check \Delta\eqdef \Delta$ if $r=0$ (i.e., $\Delta$ is positive),
\item $\check \Delta\eqdef\Delta^{(k_1,\ldots,k_r)}=\Delta\setminus\{k_1,\ldots,k_r\}\subseteq \Delta$~\eqref{eq:positive_subbigraph} if $r>0$.
\end{itemize}
The bigraph $\Dyn_\Delta$ can be obtained by means of the inflation algorithm~\cite[Algorithm 3.1]{simsonCoxeterGramClassificationPositive2013}.
\end{definition}
By the Dynkin type $\Dyn_I$ of a connected non-negative poset $I$ we mean the Dynkin type $\Dyn_{\Delta_I}$ of the bigraph $\Delta_I$ defined by the symmetric Gram matrix $G_{\Delta_I}\eqdef G_I$. Following~\cite{simsonCoxeterGramClassificationPositive2013,barotQuadraticFormsCombinatorics2019} we call a poset $I$:
\begin{itemize}
\item \textit{positive}, if $\crk_I=0$;
\item \textit{principal}, if $\crk_I=1$; and
\item \textit{indefinite}, if its symmetric Gram matrix $G_I$ is not positive/negative semidefinite (i.e., $G_I$ is indefinite).
\end{itemize}
\begin{remark}\label{rmk:indef}
A poset $I$ is indefinite if and only if there exist vectors $v,w\in\ZZ^n$ such that $q_I(v)>0$ and $q_I(w)<0$, where $q_I\colon\ZZ^n\to\ZZ$ is the incidence quadratic form \eqref{eq:quadratic_form} of $I$. Since $q_I(e_1)=q_I([1,0,\ldots,0])=1>0$ for every poset $I$, to show that a given $I$ is indefinite it is sufficient to show that $q_I(w)<0$ for some $w\in\ZZ^n$.
\end{remark}
\section{Hanging path in a Hasse digraph}
We often use the fact that changing the orientation of arcs (arrows) on a ``hanging path'' in the Hasse digraph $\CH(I)$ changes neither the definiteness nor the corank of a poset $I$. We need one more definition from~\cite{simsonCoxeterGramClassificationPositive2013} to state this observation formally.
\begin{definition}
Two finite partially ordered sets $I$ and $J$ are called \textit{strongly Gram $\ZZ$-congruent} (or \textit{$\ZZ$-bilinear equivalent}), $I\approx_\ZZ J$, if there exists a $\ZZ$-invertible matrix $B\in\Gl(n,\ZZ)$ such that $C_I=B^{tr}\cdot C_J\cdot B.$
\end{definition}
It is straightforward to check that strong Gram $\ZZ$-congruence implies the weak one, but the converse implication is not true in general, see \cite{gasiorekAlgorithmicStudyNonnegative2015}. The converse is true in the case of one-peak positive posets, see~\cite{gasiorekCoxeterTypeClassification2019}.
Following~\cite[(16.7) and Prop. 16.15]{Si92} and~\cite{gasiorekOnepeakPosetsPositive2012}, we introduce the following definition.
\begin{definition}
Let $J_p\subseteq J$ be a subposet of a finite poset $J=(\{1,\ldots,n\}, \preceq_J)$ and $p\in \{1,\ldots,n\}$ be a point of $J_p$.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item The subposet $J_p\subseteq J$ is called a \textbf{$p$-anchored path}
if:
\begin{itemize}
\item the Hasse digraph $\CH(J)\setminus \{p\}$ is not connected,
\item the graph $\ov {\CH(J_p)}$ is isomorphic to a path graph and the degree of $p$ in $\CH(J_p)$ equals one.
\end{itemize}
\item The subposet $J_p\subseteq J$ is called an \textbf{inward} (\textbf{outward})
$p$-anchored path if
$J_p$ is a $p$-anchored path and $p\in J_p$ is the unique maximal (minimal) point in
the poset $J_p$.
\item Given an inward or outward $p$-anchored path $J_p$ of the poset $J$, the $J_p$-reflection of $J$ is defined to be the poset $J'\eqdef \delta_{J_p} J$ obtained from $J$ by reversing all arrows in the Hasse digraph of $J_p\subseteq J$, as illustrated below
\begin{center}
$\CH(J)=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.48, yscale=0.4]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n1) at (4 , 3 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n2) at (4 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n3) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle p$] (n4) at (5.50, 1.50) {};
\coordinate (n5) at (0 , 3 ) {};
\coordinate (n6) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle s_1$] (n7) at (7 , 1.50) {};
\node (n8) at (8.50, 1.50) {};
\node (n9) at (10 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle s_r$] (n10) at (11.50, 1.50) {};
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n1) -- (n5);
\draw (n5) -- (n6);
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n6) -- (n3);
\foreach \x/\y in {4/7, 7/8, 9/10}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/4, 2/4, 3/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n8) to (n9);
\end{tikzpicture}
\begin{tikzpicture}[baseline={([yshift=-5.75pt]current bounding box)},
label distance=-2pt,xscale=0.35, yscale=0.5]
\coordinate (n1) at (0 , 1.50);
\coordinate (n2) at (2.50, 1.50);
\coordinate (n3) at (0 , 0 );
\coordinate (n4) at (2.50, 0 );
\draw [|-stealth] (n1) to node[above=-2.0pt, pos=0.5] {$\scriptscriptstyle \delta_{J_p} J$} (n2);
\draw [|-stealth] (n4) to node[above=-2.0pt, pos=0.5] {$\scriptscriptstyle \delta_{J_p} J'$} (n3);
\end{tikzpicture}
$\CH(J')=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.48, yscale=0.4]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n1) at (4 , 3 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n2) at (4 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n3) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle p$] (n4) at (5.50, 1.50) {};
\coordinate (n5) at (0 , 3 ) {};
\coordinate (n6) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle s_1$] (n7) at (7 , 1.50) {};
\node (n8) at (8.50, 1.50) {};
\node (n9) at (10 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle s_r$] (n10) at (11.50, 1.50) {};
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n1) -- (n5);
\draw (n5) -- (n6);
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n6) -- (n3);
\foreach \x/\y in {1/4, 2/4, 3/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {4/7, 7/8, 9/10}
\draw [stealth-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n8) to (n9);
\end{tikzpicture}
\end{center}
where $J_p=\{p,s_1,\dots,s_r\}$.
\end{enumerate}
\end{definition}
\begin{lemma}\label{lemma:preflection}
Let $J_p\subseteq J$ be an inward or outward $p$-anchored path of a finite poset $J$. Then $J\approx_\ZZ \delta_{J_p} J$. In particular, $J$ is non-negative of corank $r\geq 0$ if and only if the poset $J'\eqdef \delta_{J_p} J$ is non-negative of corank $r\geq 0$.
\end{lemma}
\begin{proof}
Let $J^\shortleftarrow,J^\shortrightarrow$ be finite posets such that
$J^\shortleftarrow\setminus J^\shortleftarrow_p = J^\shortrightarrow\setminus J^\shortrightarrow_p$, where $J^\shortleftarrow_p \subseteq J^\shortleftarrow$ ($J^\shortrightarrow_p\subseteq J^\shortrightarrow$) is an inward (outward) $p$-anchored path of length $k$ and $\ov{\CH(J^\shortrightarrow_p)}=\ov{\CH(J^\shortleftarrow_p)}$. Without loss of generality, we may assume that $\CH(J^\shortleftarrow)$ and $\CH(J^\shortrightarrow)$ have the forms
\begin{center}
$\CH(J^\shortleftarrow)=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.48, yscale=0.4]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n1) at (4 , 3 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n2) at (4 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n3) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle \phantom{+}\!\!\!p$] (n4) at (5.50, 1.50) {};
\coordinate (n5) at (0 , 3 ) {};
\coordinate (n6) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle p+1$] (n7) at (7 , 1.50) {};
\node (n8) at (8.50, 1.50) {};
\node (n9) at (10 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={below:$\scriptstyle p+k-1=n$}] (n10) at (11.50, 1.50) {};
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n1) -- (n5);
\draw (n5) -- (n6);
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n6) -- (n3);
\path (n5) to node[pos=0.5] {{\scriptsize $J^\shortleftarrow\setminus J^\shortleftarrow_p$}} (n3);
\foreach \x/\y in {4/7, 7/8, 9/10}
\draw [stealth-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/4, 2/4, 3/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n8) to (n9);
\end{tikzpicture}
\quad
$\CH(J^\shortrightarrow)=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.48, yscale=0.4]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n1) at (4 , 3 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n2) at (4 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt] (n3) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle \phantom{+}\!\!\!p$] (n4) at (5.50, 1.50) {};
\coordinate (n5) at (0 , 3 ) {};
\coordinate (n6) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptstyle p+1$] (n7) at (7 , 1.50) {};
\node (n8) at (8.50, 1.50) {};
\node (n9) at (10 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={below:$\scriptstyle p+k-1=n$}] (n10) at (11.50, 1.50) {};
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n1) -- (n5);
\draw (n5) -- (n6);
\draw[decoration={complete sines,segment length=3mm, amplitude=1mm},decorate] (n6) -- (n3);
\path (n5) to node[pos=0.5] {{\scriptsize $J^\shortrightarrow\setminus J^\shortrightarrow_p$}} (n3);
\foreach \x/\y in {4/7, 7/8, 9/10}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/4, 2/4, 3/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n8) to (n9);
\end{tikzpicture}
\end{center}
and their incidence matrices are equal to
\begin{center}
$C_{J^\shortleftarrow}\eqdef$
\begin{tikzpicture} [baseline=(n6.base), every node/.style={outer sep=0pt,inner sep=0pt},every left delimiter/.style={xshift=1pt}, every right delimiter/.style={xshift=-1pt}]
\matrix (m1) [matrix of nodes, ampersand replacement=\&, nodes={minimum height=1.6em,minimum width=1.6em,
text depth=-.25ex,text height=0.6ex,inner xsep=0.0pt,inner ysep=0.0pt, execute at begin node=$\scriptscriptstyle, execute at end node=$}
, column sep={0pt,between borders}, row sep={0pt,between borders}, left delimiter=\lbrack, right delimiter=\rbrack]
{
|(n1)| \& \& |(n22)| \& |(n3)| c_{1,p} \& |(n14)| \& \& |(n23)| \\
\& |(n8)| \& \& \& \& \& \\
\& \& |(n4)| \& |(n2)| c_{p\mppms 1,p} \& \& \& |(n24)| \\
|(n6)| c_{p,1} \& \& |(n5)| c_{p, p\mppms 1} \& |(n7)| 1 \& |(n16)| 0 \& \& |(n15)| 0\\
|(n19)| c_{p,1} \& \& |(n18)| c_{p, p\mppms 1} \& |(n17)| 1 \& |(n9)| 1 \& \& |(n25)| \\
\& \& \& \& \& |(n10)| \& \\
|(n21)| c_{p,1} \& \& |(n20)| c_{p, p\mppms 1} \& |(n13)| 1 \& |(n12)| 1 \& \& |(n11)| 1\\
};
\path (n1) to node[pos=0.5, scale=1.5] {{\scriptsize $C_{J^\shortleftarrow\setminus J^\shortleftarrow_p}$}} (n4);
\draw[-] ([xshift=-2.4pt]n3.north west) to ([xshift=-2.4pt]n13.south west);
\draw[-] ([xshift=2.4pt]n13.south east) to ([xshift=2.4pt]n3.north east);
\draw[-] (n6.south west) to (n15.south east);
\draw[-] (n15.north east) to (n6.north west);
\path (n3.north east) to node[pos=0.5, scale=2] {{\scriptsize $0$}} (n15.north east);
\path (n12.south west) to node[pos=0.7, scale=1.5] {{\scriptsize $0$}} (n15.south east);
\foreach \x/\y in {6/5, 9/11, 12/11, 16/15, 19/18, 21/20}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth] (n\x) to (n\y);
\foreach \x/\y in {9/12, 17/13}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 2pt] (n\x) to (n\y);
\foreach \x/\y in {3/2, 18/20, 19/21}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 3pt] (n\x) to (n\y);
\foreach \i/\j in {1/1, 22/p-1, 3/\phantom{+}p\phantom{+}, 14/p+1, 23/n}
\node[gray, scale=0.7] at ([yshift=2mm]n\i.north) {$\scriptstyle \j$};
\foreach \i/\j in {23/1, 24/p-1, 15/p, 25/p+1, 11/n}
\node[gray, scale=0.7,text width=1.5em,anchor=west] at ([xshift=3mm]n\i.east) {$\scriptstyle \j$};
\end{tikzpicture}
\textnormal{ and }
$C_{J^\shortrightarrow}\eqdef$
\begin{tikzpicture} [baseline=(n6.base), every node/.style={outer sep=0pt,inner sep=0pt},every left delimiter/.style={xshift=1pt}, every right delimiter/.style={xshift=-1pt}]
\matrix (m1) [matrix of nodes, ampersand replacement=\&, nodes={minimum height=1.6em,minimum width=1.6em,
text depth=-.25ex,text height=0.6ex,inner xsep=0.0pt,inner ysep=0.0pt, execute at begin node=$\scriptscriptstyle, execute at end node=$}
, column sep={0pt,between borders}, row sep={0pt,between borders}, left delimiter=\lbrack, right delimiter=\rbrack]
{
|(n1)| \& \& |(n21)| \& |(n3)| c_{1,p} \& |(n18)| c_{1,p} \& \& |(n17)| c_{1,p} \\
\& |(n8)| \& \& \& \& \& \\
\& \& |(n4)| \& |(n2)| c_{p\mppms 1,p} \& |(n20)| c_{p\mppms 1,p} \& \& |(n19)| c_{p\mppms 1,p}\\
|(n6)| c_{p,1} \& \& |(n5)| c_{p, p\mppms 1} \& |(n7)| 1 \& |(n15)| 1 \& \& |(n14)| 1 \\
\& \& \& |(n16)| 0 \& |(n9)| 1 \& \& |(n12)| 1 \\
\& \& \& \& \& |(n10)| \& \\
\& \& \& |(n13)| 0 \& \& \& |(n11)| 1 \\
};
\path (n1) to node[pos=0.5, scale=1.5] {{\scriptsize $C_{J^\shortrightarrow\setminus J^\shortrightarrow_p}$}} (n4);
\draw[-] ([xshift=-2.4pt]n3.north west) to ([xshift=-2.4pt]n13.south west);
\draw[-] ([xshift=2.4pt]n13.south east) to ([xshift=2.4pt]n3.north east);
\draw[-] (n6.south west) to (n14.south east);
\draw[-] (n14.north east) to (n6.north west);
\path (n6.south west) to node[pos=0.5, scale=2] {{\scriptsize $0$}} (n13.south west);
\path (n13.south east) to node[pos=0.3, scale=1.5] {{\scriptsize $0$}} (n12.north east);
\foreach \x/\y in {6/5, 9/11, 12/11, 15/14, 18/17, 20/19}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth] (n\x) to (n\y);
\foreach \x/\y in {9/12, 16/13}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 2pt] (n\x) to (n\y);
\foreach \x/\y in {3/2, 17/19, 18/20}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 3pt] (n\x) to (n\y);
\foreach \i/\j in {1/1, 21/p-1, 3/\phantom{+}p\phantom{+}, 18/p+1, 17/n}
\node[gray, scale=0.7] at ([yshift=2mm]n\i.north) {$\scriptstyle \j$};
\foreach \i/\j in {17/1, 19/p-1, 14/p, 12/p+1, 11/n}
\node[gray, scale=0.7,text width=1.5em,anchor=west] at ([xshift=3mm]n\i.east) {$\scriptstyle \j$};
\end{tikzpicture}.
\end{center}
It is straightforward to check that for
\begin{equation}\label{lemma:preflection:bmat}
B\eqdef\!\!
\begin{tikzpicture} [baseline=(n6.base), every node/.style={outer sep=0pt,inner sep=0pt},every left delimiter/.style={xshift=1pt}, every right delimiter/.style={xshift=-1pt}]
\matrix (m1) [matrix of nodes, ampersand replacement=\&, nodes={minimum height=1.2em,minimum width=1.2em,
text depth=-.25ex,text height=0.6ex,inner xsep=0.0pt,inner ysep=0.0pt, execute at begin node=$\scriptscriptstyle, execute at end node=$}
, column sep={0pt,between borders}, row sep={0pt,between borders}, left delimiter=\lbrack, right delimiter=\rbrack]
{
|(n1)| 1 \& \& |(n17)| \& |(n3)| 0 \& |(n13)| \& \& |(n18)| \\
\& |(n8)| \& \& \& \& \& \\
|(n21)| \& \& |(n4)| 1 \& |(n2)| 0 \& \& \& |(n20)| \\
|(n6)| 0 \& \& |(n5)| 0 \& |(n7)| 1 \& |(n15)| \phantom{-}1 \& \& |(n14)| \phantom{-}1\\
\& \& \& |(n16)| 0 \& \& \& |(n9)| -1 \\
\& \& \& \& \& |(n10)| \& \\
\& \& \& |(n12)| 0 \& |(n11)| -1 \& \& |(n19)| \\
};
\draw[-] (n3.north west) to (n12.south west);
\draw[-] (n12.south east) to (n3.north east);
\draw[-] (n6.south west) to (n14.south east);
\draw[-] (n14.north east) to (n6.north west);
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 2pt, shorten >= -3pt] (n15) to (n14);
\path (n3.north east) to node[pos=0.5, scale=2] {{\scriptsize $0$}} (n14.north east);
\path (n6.south west) to node[pos=0.5, scale=2] {{\scriptsize $0$}} (n12.south west);
\path (n21.south west) to node[pos=0.7, scale=1.2] {{\scriptsize $0$}} (n17.north east);
\path (n17.north east) to node[pos=0.7, scale=1.2] {{\scriptsize $0$}} (n21.south west);
\foreach \x/\y in {1/4, 9/11}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= 2pt] (n\x) to (n\y);
\foreach \x/\y in {3/2, 6/5, 16/12}
\draw[line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, shorten <= -1pt, shorten >= -2pt] (n\x) to (n\y);
\foreach \x/\y in {7/19, 19/7}
\path (n\x.south east) to node[pos=0.7, scale=1.2] {{\scriptsize $0$}} (n\y.south east);
\foreach \i/\j in {1/1, 17/p-1, 3/\phantom{+}p\phantom{+}, 13/p+1, 18/n}
\node[gray, scale=0.7] at ([yshift=2mm]n\i.north) {$\scriptstyle \j$};
\foreach \i/\j in {18/1, 20/p-1, 14/p, 9/p+1, 19/n}
\node[gray, scale=0.7,text width=1.5em,anchor=west] at ([xshift=3mm]n\i.east) {$\scriptstyle \j$};
\end{tikzpicture}\in\MM_n(\ZZ)
\end{equation}
we have $\det B=\pm 1$ and $B^{tr}\cdot \left(C_{J^\shortleftarrow}\right)\cdot B = C_{J^\shortrightarrow}$, thus $J\approx_\ZZ J'$. Since, by definition, $J^\shortrightarrow = \delta_{J^\shortleftarrow_p}J^\shortleftarrow$ and $J^\shortleftarrow = \delta_{J^\shortrightarrow_p}J^\shortrightarrow$, the proof is finished.
\end{proof}
In other words, interchanging an inward $p$-anchored path with an outward one, an operation defined at the Hasse digraph level, yields a strong Gram $\ZZ$-congruence of posets (defined at the level of incidence matrices). This fact is easily generalized to all, not necessarily inward or outward, $p$-anchored paths.
\begin{corollary}\label{corr:ahchoredpath}
Let $J=J'\cup J_p$ be a finite poset and $J_p$ be a $p$-anchored path.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{corr:ahchoredpath:bilinear:out} $J\approx_\ZZ J^\shortrightarrow$,
where $J^\shortrightarrow\eqdef J'\cup J_p^\shortrightarrow$ is a poset with an outward $p$-anchored path $J_p^\shortrightarrow$ and $\ov{\CH(J_p^\shortrightarrow)}=\ov{\CH(J_p)}$.
\item\label{corr:ahchoredpath:bilinear:in} $J\approx_\ZZ J^\shortleftarrow$,
where $J^\shortleftarrow\eqdef J'\cup J_p^\shortleftarrow$ is a poset with an inward $p$-anchored path $J_p^\shortleftarrow$ and $\ov{\CH(J_p^\shortleftarrow)}=\ov{\CH(J_p)}$.
\item\label{corr:ahchoredpath:anyorient}
For every orientation of arrows of $\wt Q_p \eqdef \CH(J_p)$ the poset $\wt J\eqdef J'\cup \wt J_p$, where $\wt J_p\eqdef I_{\wt Q_p}$, is non-negative of corank $r\geq 0$ if and only if the poset $J$ is non-negative of corank $r\geq 0$.
\end{enumerate}
\end{corollary}
\begin{proof}
It is easy to see that for every orientation of arrows of the path $\ov {\CH(J_{p})}$ there exists a series of $J'_{p'}$-reflections, where $p'\in J'_{p'}\subseteq J_p$, that carries the $p$-anchored path $J_p$ into an outward $p$-anchored path $J^\shortrightarrow_p$; therefore~\ref{corr:ahchoredpath:bilinear:out} follows by \Cref{lemma:preflection}.\smallskip
The statement~\ref{corr:ahchoredpath:bilinear:in} follows from~\ref{corr:ahchoredpath:bilinear:out}, as $J\approx_\ZZ J^\shortrightarrow \approx_\ZZ J^\shortleftarrow$ by \Cref{lemma:preflection}. Since strongly Gram $\ZZ$-congruent posets are simultaneously non-negative and have equal coranks, \ref{corr:ahchoredpath:anyorient} follows from~\ref{corr:ahchoredpath:bilinear:out}, and the proof is finished.
\end{proof}
Summing up, changing the orientation of arrows in a $p$-anchored path changes neither the non-negativity nor the corank of a poset.
\begin{example}
Consider the following triple of partially ordered sets: $I$, $I^\shortrightarrow$ and $I^\shortleftarrow$.
\begin{center}
\hfill
$I=$
\begin{tikzpicture}[baseline=(n7.base),label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n3) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n4) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (2 , 0 ) {};
\node (n7) at (3 , 0 ) {$\mathclap{\phantom{7}}$};
\foreach \x/\y in {1/3, 4/2, 5/2, 5/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\hfill
$I^\shortrightarrow=$
\begin{tikzpicture}[baseline=(n7.base),label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n3) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n4) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (2 , 0 ) {};
\node (n7) at (3 , 0 ) {$\mathclap{\phantom{7}}$};
\foreach \x/\y in {2/5, 3/1, 4/2, 5/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\hfill
$I^\shortleftarrow=$
\begin{tikzpicture}[baseline=(n7.base),label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n3) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n4) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (2 , 0 ) {};
\node (n7) at (3 , 0 ) {$\mathclap{\phantom{7}}$};
\foreach \x/\y in {1/3, 2/4, 3/5, 5/2}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\hfill
\mbox{}
\end{center}
We have
$I\approx_\ZZ I^\shortrightarrow$ and $I\approx_\ZZ I^\shortleftarrow$, since
$I^\shortrightarrow = \delta_{\{2, 5,3,1\}} \delta_{\{5,3,1\}} \delta_{\{1,3\}} I$
and $I^\shortleftarrow = \delta_{\{4,2,5,3\}} \delta_{\{4,2,5\}} \delta_{\{4,2\}} I$. Moreover, using the description given in \Cref{lemma:preflection}, we get the equality $C_{I^\shortrightarrow}=B_1^{tr}C_IB_1$, where
\begin{equation*}
B_1=
\begin{bsmallmatrix*}[r]
0 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0
\end{bsmallmatrix*}
\begin{bsmallmatrix*}[r]
1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 1\\
0 & \phantom{\shortminus}0 & \phantom{\shortminus}0 &\phantom{\shortminus} 0 & \shortminus 1
\end{bsmallmatrix*}
\begin{bsmallmatrix*}[r]
1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & \shortminus 1\\
0 & \phantom{\shortminus}0 & \phantom{\shortminus}0 & \shortminus 1 & 0
\end{bsmallmatrix*}
\begin{bsmallmatrix*}[r]
1 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & \shortminus 1\\
0 & 0 & 0 & \shortminus1 & 0\\
0 & \phantom{\shortminus}0 & \shortminus 1 & 0 & 0
\end{bsmallmatrix*}
\begin{bsmallmatrix*}[r]
0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 0
\end{bsmallmatrix*}
=
\begin{bsmallmatrix*}[r]
0 & 0 & \shortminus 1 & 0 & 0\\
1 & 1 & 1 & 0 & 1\\
0 & 0 & 1 & 0 & 1\\
0 & 0 & 0 & 1 & 0\\
\shortminus1 & \phantom{\shortminus}0 & \shortminus 1 & \phantom{\shortminus}0 & \shortminus 1
\end{bsmallmatrix*}\in\Gl(5,\ZZ),
\end{equation*}
i.e., $B_1$ is a product of permutation matrices and \eqref{lemma:preflection:bmat}-shaped matrices. Analogously, one can calculate a matrix $B_2\in\Gl(5, \ZZ)$ such that $C_{I^\shortleftarrow}=B_2^{tr}C_IB_2$.
\end{example}
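The factorization in the preceding example can also be checked numerically. The following is a minimal sketch (not part of the paper's formal toolkit) that multiplies out the five displayed factors of $B_1$ and verifies the strong Gram $\ZZ$-congruence $C_{I^\shortrightarrow}=B_1^{tr}C_IB_1$; the incidence matrices are read off from the Hasse diagrams of $I$ and $I^\shortrightarrow$.

```python
# Numerical check of the example: B_1 = P1*M2*M3*M4*P5 and
# C_{I->} = B_1^tr * C_I * B_1 (matrices copied from the text).
import numpy as np

P1 = np.array([[0,0,0,0,1],[0,1,0,0,0],[0,0,0,1,0],[1,0,0,0,0],[0,0,1,0,0]])
M2 = np.array([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,1],[0,0,0,0,-1]])
M3 = np.array([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,1,1],[0,0,0,0,-1],[0,0,0,-1,0]])
M4 = np.array([[1,0,0,0,0],[0,1,1,1,1],[0,0,0,0,-1],[0,0,0,-1,0],[0,0,-1,0,0]])
P5 = np.array([[0,0,0,1,0],[0,1,0,0,0],[0,0,0,0,1],[0,0,1,0,0],[1,0,0,0,0]])
B1 = P1 @ M2 @ M3 @ M4 @ P5

# Incidence matrices c_{ab} = 1 iff a <= b, read off from the Hasse
# diagrams of I (arrows 1->3, 4->2, 5->2, 5->3) and I-> (4->2->5->3->1).
C_I  = np.array([[1,0,1,0,0],
                 [0,1,0,0,0],
                 [0,0,1,0,0],
                 [0,1,0,1,0],
                 [0,1,1,0,1]])
C_Ir = np.array([[1,0,0,0,0],
                 [1,1,1,0,1],
                 [1,0,1,0,0],
                 [1,1,1,1,1],
                 [1,0,1,0,1]])

assert round(abs(np.linalg.det(B1))) == 1   # B_1 lies in Gl(5, Z)
assert (B1.T @ C_I @ B1 == C_Ir).all()      # strong Gram Z-congruence
```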
\begin{remark}\label{remark:hassepath:anyorient}
In view of \Cref{corr:ahchoredpath}\ref{corr:ahchoredpath:anyorient}, we usually omit the orientation of the edges in $p$-anchored paths when presenting Hasse digraphs of finite posets.
\end{remark}
\section{Main results}
The main result of this work is a full description of connected non-negative posets $I$ of Dynkin type $\Dyn_I=\AA_n$ in terms of their directed Hasse digraphs $\CH(I)$. First, we show that \Cref{thm:a:main} holds for ``trees'', i.e., posets $I$ whose Hasse digraph $\CH(I)$ is a tree.
\begin{lemma}\label{lemma:trees}
If $I=(\{1,\ldots,n\},\preceq_I)$ is a connected poset whose Hasse digraph $Q\eqdef \CH(I)$ is a tree (i.e., the graph $\ov{Q}$ does not contain any cycle), then exactly one of the following conditions holds.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{lemma:trees:posit} The poset $I$ is non-negative, $\crk_I=0$ and $\ov Q$ is isomorphic with a Dynkin
graph $\Dyn_I=\ov Q\in\{\AA_n, \DD_n, \EE_6, \EE_7, \EE_8 \}$ of \Cref{tbl:Dynkin_diagrams}.
\item\label{lemma:trees:nneg} The poset $I$ is non-negative, $\crk_I>0$ and $\ov Q$ is isomorphic with an Euclidean graph (see \cite[VII.2(b)]{ASS}).
\item\label{lemma:trees:indef} The poset $I$ is indefinite in the sense that symmetric Gram matrix $G_I=\frac{1}{2}(C_I+C_I^{tr})$ is indefinite.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $I$ is a connected poset whose Hasse digraph $Q\eqdef \CH(I)=(V,A)$ is a tree, and let $C_I\in\MM_{|I|}(\ZZ)$ be its incidence matrix. By \cite[Proposition 2.12]{simsonIncidenceCoalgebrasIntervally2009}, the matrix
\begin{equation*}
C_{I}^{-1}=[c_{ab}]\in\MM_n(\ZZ) \textnormal{ has coefficients }
c_{ab}=\begin{cases}
\phantom{-}1, & \textnormal{if }a=b,\\
-1, & \textnormal{if } a\to b \in A,\\
\phantom{-}0, & \textnormal{otherwise},
\end{cases}
\end{equation*}
i.e., the matrix $C_I^{-1}$ uniquely encodes the quiver $\CH(I)$. It is straightforward to check that
\begin{equation}\label{eq:digraph_euler_q}
q_Q(x)\eqdef \tfrac{1}{2}x(C_{I}^{-1} +\ C_{I}^{-tr}) x^{tr} = \sum_{i\in V} x_i^2
- \sum_{a\to b\,\in A} x_{a}x_{b}
\end{equation}
is the \textit{Euler quadratic form} of the \textit{quiver} (digraph) $Q$ in the sense of~\cite[Section VII.4]{ASS}. Moreover, we have
\[
C_I^{tr}\cdot G_Q \cdot C_I=\tfrac{1}{2} C_I^{tr}\cdot (C_{I}^{-1} +\ C_{I}^{-tr}) \cdot C_I = \tfrac{1}{2}(C_I^{tr}+C_I) = G_I,
\]
where $G_Q\eqdef \tfrac{1}{2}(C_{I}^{-1} +\ C_{I}^{-tr})$ is the symmetric Gram matrix of the Euler quadratic form $q_Q$ \eqref{eq:digraph_euler_q}.
Hence the lemma follows from~\cite[Corollary 2.4]{gasiorekOnepeakPosetsPositive2012} and~\cite[Proposition VII.4.5]{ASS}.
\end{proof}
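The two facts used in the proof lend themselves to a quick sanity check. Below is a minimal sketch (assuming the encoding $c_{ab}=1$ iff $a\preceq_I b$) that verifies the displayed formula for $C_I^{-1}$ and the Gram identity $C_I^{tr}\cdot G_Q\cdot C_I=G_I$ on a small tree-shaped poset with Hasse arrows $1\to 2$, $3\to 2$, $4\to 2$.

```python
# Sanity check of C_I^{-1} (1 on the diagonal, -1 at Hasse arrows, 0 elsewhere)
# and of the identity C_I^tr * G_Q * C_I = G_I, on a D_4-shaped tree poset.
import numpy as np

n = 4
hasse = [(1, 2), (3, 2), (4, 2)]   # Hasse digraph of the poset: a tree

# Incidence matrix; here the order relation is exactly the Hasse arrows
# plus reflexivity (no transitive compositions arise in this example).
C = np.eye(n, dtype=int)
for a, b in hasse:
    C[a - 1, b - 1] = 1

# Predicted inverse: 1 on the diagonal, -1 exactly at the Hasse arrows.
C_inv = np.eye(n, dtype=int)
for a, b in hasse:
    C_inv[a - 1, b - 1] = -1
assert (C @ C_inv == np.eye(n, dtype=int)).all()

# Symmetric Gram matrices of the poset and of the Euler form of the quiver.
G_I = (C + C.T) / 2
G_Q = (C_inv + C_inv.T) / 2
assert np.allclose(C.T @ G_Q @ C, G_I)   # C_I^tr * G_Q * C_I = G_I
```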
One of the possible interpretations of \Cref{thm:a:main} is that the Hasse digraph $\CH(I)$ of a connected non-negative poset $I$ of Dynkin type $\Dyn_I=\AA_n$ has no vertices of degree larger than $2$. We now prove a more general classification of all connected Hasse digraphs whose vertices have degree at most $2$.
\begin{theorem}\label{thm:quiverdegmax2}
Assume that $Q=(V,A)$, where $V=\{1,\ldots,n\}$, is a connected acyclic quiver and $\deg(v)\leq 2$ for every $v\in V$. Then exactly
one of the following conditions holds.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{thm:quiverdegmax2:path} $Q$ is not $2$-regular, $\ov Q\simeq \AA_n$ and $Q$ is the Hasse digraph of the positive poset $I_Q$
of the Dynkin type $\Dyn_{I_Q}=\AA_n$.
\item\label{thm:quiverdegmax2:cycle} $Q$ is $2$-regular and $\ov Q$ is a cycle graph. Moreover:
\begin{enumerate}[label=\normalfont{(b\arabic*)}, leftmargin=4ex]
\item\label{thm:quiverdegmax2:cycle:onesink} $Q$ has exactly one sink and
\begin{enumerate}[label=\normalfont{(\roman*)}, leftmargin=1ex]
\item\label{thm:quiverdegmax2:cycle:onesink:posita}
$\Dyn_{I_Q}=\AA_n$, $I_Q$ is positive and $\CH(I_Q)\neq Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 0.50) {};
\node (n3) at (2 , 0.50) {$\scriptscriptstyle $};
\node (n4) at (3 , 0.50) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n\mppmss 2$] (n5) at (4 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n\mppmss 1$] (n6) at (5 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptstyle n$] (n7) at (6 , 0 ) {};
\foreach \x/\y in {1/2, 1/7, 5/6, 6/7}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-stealth, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -1.00pt, shorten >= -2.50pt] (n3) to (n4);
\draw [-stealth, shorten <= -2.50pt, shorten >= 2.50pt] (n4) to (n5);
\end{tikzpicture},
\item\label{thm:quiverdegmax2:cycle:onesink:positd}
$\Dyn_{I_Q}=\DD_n$, $I_Q$ is positive and $\CH(I_Q)= Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (0 , 0.25 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 0.50) {};
\node (n3) at (2 , 0.50) {$\scriptscriptstyle $};
\node (n4) at (3 , 0.50) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n\mppmss 3$] (n5) at (4 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n\mppmss 2$] (n6) at (5 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptstyle n$] (n7) at (6 , 0.25 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n8) at (3 , 0 ) {};
\foreach \x/\y in {1/2, 1/8, 5/6, 6/7, 8/7}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-stealth, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -1.00pt, shorten >= -2.50pt] (n3) to (n4);
\draw [-stealth, shorten <= -2.50pt, shorten >= 2.50pt] (n4) to (n5);
\end{tikzpicture},
\item\label{thm:quiverdegmax2:cycle:onesink:posite}
$\Dyn_{I_Q}=\EE_\star$, $\CH(I_Q)=Q$ and
\begin{itemize}
\item $I_Q$ is positive,
$Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},
label distance=-2pt,xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n4) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 6$] (n6) at (3 , 0.50) {};
\foreach \x/\y in {1/2, 1/4, 2/3, 3/6, 4/5, 5/6}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} or
$Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},
label distance=-2pt,xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (1.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 6$] (n6) at (2.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n7) at (4 , 0.50) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 4/7, 5/6, 6/7}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} or
$Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},
label distance=-2pt,xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 5$] (n5) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 6$] (n6) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n7) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 8$] (n8) at (5 , 0.50) {};
\foreach \x/\y in {1/2, 1/6, 2/3, 3/4, 4/5, 5/8, 6/7, 7/8}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} with
$\Dyn_{I_Q}=\EE_6$, $\EE_7$ and $\EE_8$, respectively;
\item $I_Q$ is principal,
$Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},
label distance=-2pt,xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n5) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 6$] (n6) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n7) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 8$] (n8) at (4 , 0.50) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 4/8, 5/6, 6/7, 7/8}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} or
$Q\simeq$\!\!\!
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},
label distance=-2pt,xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 5$] (n5) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 6$] (n6) at (5 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n7) at (2.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 8$] (n8) at (3.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 9$] (n9) at (6 , 0.50) {};
\foreach \x/\y in {1/2, 1/7, 2/3, 3/4, 4/5, 5/6, 6/9, 7/8, 8/9}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} with
$\Dyn_{I_Q}=\EE_7$ and $\EE_8$, respectively;
\end{itemize}
\item\label{thm:quiverdegmax2:cycle:onesink:indef} otherwise, the poset $I_Q$ is indefinite.
\end{enumerate}
\item\label{thm:quiverdegmax2:cycle:multsink} $Q$ has more than one sink, the poset $I_Q$ is principal with
$\Dyn_{I_Q}=\AA_{n-1}$ and $\CH(I_Q)=Q$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof} Assume that $Q$ is a connected acyclic digraph and
$\deg(v)\leq 2$ for every $v\in V$. Then $\ov Q$ is either $2$-regular (thus a cycle) or $\ov Q\simeq P(1,n)=\AA_n$ is a path graph.
\ref{thm:quiverdegmax2:path}
If $\ov Q$ is a path graph (i.e., $\ov Q\simeq \AA_n$), then clearly $Q$ is a tree and \ref{thm:quiverdegmax2:path} follows
by \Cref{lemma:trees}\ref{lemma:trees:posit}.\smallskip
\ref{thm:quiverdegmax2:cycle}
Assume that $\ov Q$ is a cycle. It is easy to see that $Q$ is composed of $2k$ oriented
chains and has exactly $k$ sources and $k$ sinks, where $k\in\NN$ is non-zero. First, we assume that $k=1$.
\ref{thm:quiverdegmax2:cycle:onesink} Since $Q$ has exactly one sink,
it is composed of two oriented chains, and we have
\begin{center}
$Q \simeq \bAA_{p,r}\ \ \eqdef$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle p\mpppss 1$] (n3) at (1.50, 0 ) {};
\node (n4) at (2 , 1 ) {$\scriptscriptstyle $};
\node (n5) at (2.50, 0 ) {$\scriptscriptstyle $};
\node (n6) at (3 , 1 ) {$\scriptscriptstyle $};
\node (n7) at (3.50, 0 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle p\mppmss 1$] (n8) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle p$] (n9) at (5 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle p\mpppss r\mppmss 1$] (n10) at (4.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:{$\scriptstyle p\mppps r=n$}] (n11) at (6 , 0.50) {};
\foreach \x/\y in {1/2, 1/3, 8/9, 9/11, 10/11}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/4, 3/5}
\draw [-stealth, shorten <= 2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {4/6, 5/7}
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -1.00pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {6/8, 7/10}
\draw [-stealth, shorten <= -2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\end{center}
where $1\leq r\leq p$ and $p+r=n\geq 3$. Now, the following cases are possible.
\ref{thm:quiverdegmax2:cycle:onesink:posita}
We have $r=1$, i.e., $Q \simeq \bAA_{n-1,1}$. Then $I_Q$ is a one-peak poset with
$\CH(I_Q)={}_{0}\AA^*_{n-1}\neq Q$ and the statement~\ref{thm:quiverdegmax2:cycle:onesink:posita} follows
by~\cite[Theorem 5.2]{gasiorekOnepeakPosetsPositive2012}.
Analogously, in case~\ref{thm:quiverdegmax2:cycle:onesink:positd},
for $r=2$ we have $Q \simeq \bAA_{n-2,2}$ and $I_Q$ is a one-peak poset with
$\CH(I_Q)=\wh\DD^*_{n-2}\diamond \AA_{1} = Q$; the claim follows by~\cite[Theorem 5.2]{gasiorekOnepeakPosetsPositive2012}.
\ref{thm:quiverdegmax2:cycle:onesink:posite}
Assume now that $r>2$. The results of~\cite{gasiorekOnepeakPosetsPositive2012} and \cite{gasiorekAlgorithmicStudyNonnegative2015}
yield:
\begin{itemize}
\item $\bAA_{3,3}$, $\bAA_{4,3}$ and $\bAA_{5,3}$ are Hasse digraphs of positive posets of the Dynkin types $\EE_6$, $\EE_7$ and $\EE_8$, respectively,
\item $\bAA_{4,4}$ and $\bAA_{6,3}$ are Hasse digraphs of principal posets
of the Dynkin types $\EE_7$ and $\EE_8$, respectively.
\end{itemize}
In each of the remaining cases,
the poset $I_{\bAA_{p,r}}$ contains one of the subposets $I_{\bAA_{7,3}}$ or $I_{\bAA_{5,4}}$.
\begin{center}
$\bAA_{7,3}=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 5$] (n5) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 6$] (n6) at (5 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 7$] (n7) at (6 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 8$] (n8) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 9$] (n9) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 10$] (n10) at (7 , 0.50) {};
\foreach \x/\y in {1/2, 1/8, 2/3, 3/4, 4/5, 5/6, 6/7, 7/10, 8/9, 9/10}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\qquad
$\bAA_{5,4}=$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 3$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 4$] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 5$] (n5) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 6$] (n6) at (1.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n7) at (2.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 8$] (n8) at (3.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 9$] (n9) at (5 , 0.50) {};
\foreach \x/\y in {1/2, 1/6, 2/3, 3/4, 4/5, 5/9, 6/7, 7/8, 8/9}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\end{center}
Since
$q_{\bAA_{7,3}}([11,\ab -3,\ab -3,\ab -3,\ab -3,\ab -3,\ab -3,\ab -7,\ab -7,\ab 10])=-5$ and
\mbox{$q_{\bAA_{5,4}}([11,\ab -4,\ab -4,\ab -4,\ab -4,\ab -5,\ab -5,\ab -5,\ab 9])=-9$},
these posets are indefinite (see \Cref{rmk:indef}) and \ref{thm:quiverdegmax2:cycle:onesink:indef} follows.\medskip
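Both evaluations can be verified mechanically. The following minimal sketch is not code from the paper; it assumes the reading $q_I(x)=\sum_i x_i^2+\sum_{i\prec_I j}x_ix_j$ of \eqref{eq:quadratic_form}, takes the poset to be the transitive closure of the Hasse arrows, and uses helper names of our own choosing:

```python
# Sketch (our reading of the incidence quadratic form, not the paper's code):
# q_I(x) = sum_i x_i^2 + sum over comparable pairs i < j of x_i * x_j,
# where the poset I_Q is the transitive closure of the Hasse arrows of Q.

def transitive_closure(arrows):
    """Strict relations i < j (in I) generated by the Hasse arrows."""
    rel = set(arrows)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(rel):
            for (k, l) in list(rel):
                if j == k and (i, l) not in rel:
                    rel.add((i, l))
                    changed = True
    return rel

def q(arrows, x):
    """Incidence quadratic form; x[i-1] is the i-th coordinate."""
    rel = transitive_closure(arrows)
    return sum(v * v for v in x) + sum(x[i-1] * x[j-1] for (i, j) in rel)

# bA_{7,3}: oriented chains 1->2->...->7->10 and 1->8->9->10.
a73 = [(1,2),(1,8),(2,3),(3,4),(4,5),(5,6),(6,7),(7,10),(8,9),(9,10)]
print(q(a73, [11,-3,-3,-3,-3,-3,-3,-7,-7,10]))  # -5

# bA_{5,4}: oriented chains 1->2->3->4->5->9 and 1->6->7->8->9.
a54 = [(1,2),(1,6),(2,3),(3,4),(4,5),(5,9),(6,7),(7,8),(8,9)]
print(q(a54, [11,-4,-4,-4,-4,-5,-5,-5,9]))  # -9
```

The negative values $-5$ and $-9$ agree with the computation above, confirming indefiniteness.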
\ref{thm:quiverdegmax2:cycle:multsink} Assume that $\ov Q$ is a cycle that
is composed of $s\eqdef 2k>2$ oriented chains. Without loss of generality,
we can assume that
\begin{center}
$\ov Q\ \ \simeq$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.64, yscale=0.64]
\draw[draw, fill=cyan!10](0,1) circle (17pt);
\draw[draw, fill=cyan!10](5,2) circle (17pt);
\draw[draw, fill=cyan!10](9,1) circle (17pt);
\draw[draw, fill=cyan!10](5,0) circle (17pt);
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:{$\scriptscriptstyle 1=r_1$}] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above left:$\scriptscriptstyle 2$] (n2) at (1 , 2 ) {};
\node (n3) at (2 , 2 ) {$\scriptscriptstyle $};
\node (n4) at (3 , 2 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle r_2\mppmss 1$] (n5) at (4 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle r_2$] (n6) at (5 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle r_2\mpppss 1$] (n7) at (6 , 2 ) {};
\node (n8) at (7 , 2 ) {$\scriptscriptstyle $};
\node (n9) at (8 , 2 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle r_t$] (n10) at (9 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below right:$\scriptscriptstyle r_t\mpppss 1$] (n11) at (8 , 0 ) {};
\node (n12) at (7 , 0 ) {$\scriptscriptstyle $};
\node (n13) at (6 , 0 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n14) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below left:$\scriptscriptstyle n$] (n15) at (1 , 0 ) {};
\node (n16) at (3 , 0 ) {};
\node (n17) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle r_s$] (n18) at (5 , 0 ) {};
\foreach \x/\y in {1/2, 5/6, 6/7, 10/11, 14/15, 15/1}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/3, 7/8, 11/12, 18/17}
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {3/4, 8/9, 12/13, 17/16}
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -1.00pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {4/5, 9/10, 13/18, 16/14}
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\end{center}
where each of the vertices $r_1,\ldots,r_s\in \{1,\ldots, n \}$ is either a source or a sink.
In particular, we have:
\begin{itemize}
\item
$1 = r_1 < r_2 < \cdots < r_t<\cdots < r_s$,
\item
$\{1,\ldots,r_2\}$, $\{r_2,\ldots,r_3\}$, $\ldots$\,, $\{r_{s-1},\ldots,r_s\}$,
$\{r_s,\ldots,n, 1\}$ are oriented chains.
\end{itemize}
Since the incidence quadratic form $q_{I_Q}\colon \ZZ^n\to \ZZ$ \eqref{eq:quadratic_form} is given by the formula:
{\allowdisplaybreaks\begin{align*}
q_{I_Q}(x) =&
\sum_{i} x_i^2 \,\,+ \sum_{1\leq t < s}
\Big(
\sum_{r_t\leq i<j \leq r_{t+1}} x_i x_j
\Big) +
\sum_{r_s\leq i<j \leq n} x_i x_j
+ x_1(x_{r_s}+\cdots+ x_n)\\
=&
\sum_{
i\not\in\{r_1,\ldots,r_s \}
} \frac{1}{2}x_i^2 + \frac{1}{2}
\sum_{1\leq t < s}
\Big(\sum_{r_t\leq i\leq r_{t+1}} x_i\Big)^2 +
\frac{1}{2}(x_{r_s} + \cdots + x_{n} + x_1)^2,
\end{align*}
the poset $I_Q$ is non-negative. Consider the vector $h_Q=[h_1,\ldots,h_n]\in\ZZ^n$,
where $h_i=0$ if $i\not\in\{r_1,\ldots,r_s \}$, $h_{r_i}=1$ if $i$ is odd
and $h_{r_i}=-1$ if $i$ is even. That is, $0\neq h_Q\in\ZZ^n$ has $s=2k$ non-zero coordinates
(equal to $1$ and $-1$ alternately).
It is straightforward to check that
\[
q_{I_Q}(h_Q) = \tfrac{1}{2}(h_{r_1}+h_{r_2})^2+\cdots+\tfrac{1}{2}(h_{r_{s-1}}+h_{r_s})^2 +\tfrac{1}{2}(h_{r_s}+h_1)^2=0,
\]
i.e., $I_Q$ is not positive and the first coordinate of $h_Q\in\Ker I_Q$ equals $h_1=1$.
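For illustration, in the smallest admissible case $n=s=4$ (the $4$-cycle in which every vertex is a source or a sink, so that $r_t=t$ for $1\leq t\leq 4$), the formula above reduces to
\[
q_{I_Q}(x)=\tfrac{1}{2}(x_1+x_2)^2+\tfrac{1}{2}(x_2+x_3)^2+\tfrac{1}{2}(x_3+x_4)^2+\tfrac{1}{2}(x_4+x_1)^2,
\]
and $h_Q=[1,-1,1,-1]$ is visibly a non-zero vector with $q_{I_Q}(h_Q)=0$.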
Since $\smash{\ov Q^{(1)}}\simeq \AA_{n-1}$, where $Q^{(1)}\eqdef Q\setminus\{1\}$,
by~\ref{thm:quiverdegmax2:path} the poset $I_{Q^{(1)}}\subset I_Q$
is positive and $\Dyn_{I_{Q^{(1)}}}\!=\AA_{n-1}$. Hence
we conclude that $I_Q$ is principal of Dynkin type $\Dyn_{I_Q}\!=\AA_{n-1}$, see \Cref{df:Dynkin_type}.
\end{proof}
One way to prove that a particular poset $I$ is indefinite is to show that it contains an indefinite subposet $J\subseteq I$ (we use this argument in the proof of \Cref{thm:quiverdegmax2}\ref{thm:quiverdegmax2:cycle:onesink:posite}). The following lemma collects a list of indefinite posets used later in the paper.
\begin{lemma}\label{lemma:posindef}
If $I$ is a finite partially ordered set whose Hasse quiver $\CH(I)$ is isomorphic
with one of the shapes $\CF_1,\ldots,\CF_7$
given in \Cref{tbl:indefposets},
then $I$ is indefinite.\begin{center}
$\CF_1\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.65, yscale=0.85]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle \mppmss 2$] (n1) at (1 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (0 , 0.50) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle \mppmss 2$] (n3) at (1 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label={[xshift=-0.2ex]above right:$\scriptscriptstyle 2$}] (n4) at (2 , 0.50) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 1$] (n5) at (3 , 0.50) {};
\foreach \x/\y in {2/1, 2/3, 4/1, 4/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n4) to (n5);
\end{tikzpicture}\ \ \ $\CF_2\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.65, yscale=0.85]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 4$] (n1) at (3 , 0.50) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 4$] (n2) at (2 , 0.50) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 6$] (n3) at (0 , 0.50) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 7$] (n4) at (1 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 5$] (n5) at (1 , 0 ) {};
\foreach \x/\y in {4/2, 4/3, 5/2, 5/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n1);
\end{tikzpicture}\ \ \ $\CF_3\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.65, yscale=0.5]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label={[xshift=0.2ex]below:$\scriptscriptstyle \mppmss 20$}] (n1) at (4 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n2) at (3 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n3) at (2 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n4) at (1 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n5) at (0 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n6) at (3 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n7) at (2 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n8) at (1 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 9$] (n9) at (3 , 0 ) {};
\foreach \x/\y in {1/2, 1/6, 1/9, 2/3, 3/4, 4/5, 6/7, 7/8}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\ \ \ $\CF_4\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.65, yscale=0.5]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label={[xshift=0.1ex]below:$\scriptscriptstyle \mppmss 21$}] (n1) at (6 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n2) at (5 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n3) at (4 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n4) at (3 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n5) at (2 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n6) at (1 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n7) at (0 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n8) at (5 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 7$] (n9) at (4 , 2 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 10$] (n10) at (5 , 0 ) {};
\foreach \x/\y in {1/2, 1/8, 1/10, 2/3, 3/4, 4/5, 5/6, 6/7, 8/9}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
$\CF_5\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.5]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 7$] (n1) at (4 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n2) at (0 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n3) at (1 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n4) at (2 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n5) at (3 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n6) at (4 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n7) at (5 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n8) at (6 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n9) at (7 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n10) at (8 , 0 ) {};
\foreach \x/\y in {1/9, 2/1}
\draw [bend left=15.0, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/3, 3/4, 4/5, 5/6, 6/7, 7/8, 8/9}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n9) to (n10);
\end{tikzpicture}\ $\CF_6\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.5]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 7$] (n1) at (4 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n2) at (0 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n3) at (1 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n4) at (2 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n5) at (3 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n6) at (4 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n7) at (5 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n8) at (6 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 2$] (n9) at (7 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n10) at (8 , 0 ) {};
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n3);
\foreach \x/\y in {3/4, 4/5, 5/6, 6/7, 7/8, 8/9, 9/10}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/10, 3/1}
\draw [bend left=15.0, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\ $\CF_7\colon\!\!\!\!$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.5]
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 7$] (n1) at (0 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n3) at (2 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n4) at (3 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n5) at (4 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n6) at (5 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n7) at (6 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \mppmss 7$] (n8) at (7 , 0 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 5$] (n9) at (3 , 1 ) {};
\node[circle, draw, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 4$] (n10) at (4 , 1 ) {};
\foreach \x/\y in {1/9, 10/8}
\draw [bend left=15.0, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/2, 2/3, 3/4, 4/5, 5/6, 6/7, 7/8, 9/10}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}
\captionof{table}{Indefinite posets}\label{tbl:indefposets}
\end{center}
\end{lemma}
\begin{proof}
First, we recall that an `unoriented edge' in a diagram means that both orientations of the arrow (arc) are permissible, see \Cref{remark:hassepath:anyorient}. In other words, the diagrams $\CF_1,\ldots,\CF_7$ describe $776$ different posets.
By \Cref{corr:ahchoredpath}\ref{corr:ahchoredpath:anyorient}, without loss of generality,
we may assume that all unoriented edges in diagrams presented in \Cref{tbl:indefposets}
are oriented ``from left to right''. That is, the arrows in the
Hasse digraph $\vec \CF\in\{\CF_1,\ldots,\CF_7\}$ are oriented in such a way that:
\begin{itemize}
\item $\vec \CF_1$ has three sinks,
\item $\vec \CF_2$ has two sinks and
\item $\vec \CF_3,\ldots,\vec \CF_7$ have exactly one sink.
\end{itemize}\pagebreak
It is straightforward to check that $q_{I({\vec \CF_i})}(v_{\CF_i})<0$, where
$\CH\big(I({\vec \CF_i})\big)\in\{\vec \CF_1, \ldots,\vec \CF_7 \}$ and $v_{\CF_i}\in\ZZ^n$ is the integral vector whose coordinates
are given by the labels of the vertices of the diagram $\CF_i$ in \Cref{tbl:indefposets}; hence the poset $I({\vec \CF_i})$ is indefinite (see \Cref{rmk:indef}). Since isomorphic posets are weakly Gram $\ZZ$-congruent (the $\ZZ$-congruence is defined by a permutation matrix), it follows that every poset isomorphic with $I({\vec \CF_i})$ is indefinite.
\end{proof}
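The inequality $q_{I({\vec \CF_i})}(v_{\CF_i})<0$ can also be checked mechanically. The following minimal sketch (not code from the paper) treats the case $\CF_1$; it assumes the reading $q_I(x)=\sum_i x_i^2+\sum_{i\prec_I j}x_ix_j$ of \eqref{eq:quadratic_form}, orients the unoriented edge left-to-right, and numbers the vertices $1,\ldots,5$ with labels $(-2,2,-2,2,-1)$ as in \Cref{tbl:indefposets}:

```python
# Sketch (our reading of the incidence quadratic form, not the paper's code):
# indefiniteness witness for F_1; vertex i carries coordinate x[i-1].

def transitive_closure(arrows):
    rel = set(arrows)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(rel):
            for (k, l) in list(rel):
                if j == k and (i, l) not in rel:
                    rel.add((i, l))
                    changed = True
    return rel

def q(arrows, x):
    rel = transitive_closure(arrows)
    return sum(v * v for v in x) + sum(x[i-1] * x[j-1] for (i, j) in rel)

# Hasse arrows of F_1: 2->1, 2->3, 4->1, 4->3, plus the edge 4--5 as 4->5.
print(q([(2,1),(2,3),(4,1),(4,3),(4,5)], [-2, 2, -2, 2, -1]))  # -1
```

The value is negative, so the corresponding poset is indefinite; the remaining diagrams $\CF_2,\ldots,\CF_7$ can be checked the same way.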
Assume that $I=(\{1,\ldots,n\}, \preceq_I)$ is a connected principal poset of Dynkin type $\Dyn_I=\AA_{n-1}$. That is, assume that $I$ is a connected non-negative poset of corank $\crk_I=1$ and there exists $p\in \{1,\ldots,n\}$ such that $I^{(p)}\eqdef I\setminus \{p\}$ is a positive connected poset of Dynkin type $\Dyn_{I^{(p)}}=\AA_{n-1}$, see \Cref{df:Dynkin_type}. By \Cref{thm:a:main}\ref{thm:a:main:posit}, the graph $\ov {\CH(I^{(p)})}$ is isomorphic with a path graph. In the following lemma we describe all possible shapes of the digraph $\CH(I)$.
\begin{lemma}\label{lemma:pathext}
Assume that $I=(\{1,\ldots,n\},\preceq_I)$ is a finite connected poset such that, for some $p\in \{1,\ldots,n\}$, the graph
$\ov {\CH(I\setminus\{p\})}$ is isomorphic with a path graph. Then exactly
one of the following conditions holds.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{lemma:pathext:posit} $I$ is positive and:
\begin{enumerate}[label=\normalfont{(a\arabic*)}, leftmargin=3ex]
\item\label{lemma:pathext:posit:a} $\Dyn_I=\AA_n$ with $\CH(I)\simeq \CA_n=$\begin{tikzpicture}[baseline=(n7.base),label distance=-2pt,xscale=0.64, yscale=0.64]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n4) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n5) at (5 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n6) at (6 , 0 ) {};
\node (n7) at (3 , 0 ) {$\mathclap{\phantom{7}}$};
\node (n8) at (4 , 0 ) {};
\foreach \x/\y in {1/2, 2/4, 5/6}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n4) to (n7);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -0.50pt, shorten >= -2.50pt] (n7) to (n8);
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n8) to (n5);
\end{tikzpicture};
\item\label{lemma:pathext:posit:d} $\Dyn_I=\DD_n$ and the Hasse digraph $\CH(I)\simeq Q_I$ has one of the shapes $Q_I\in\{\CD_n^{(1)},\CD_{n,s}^{(2)},\CD_{n,s}^{(3)}\}$.
\begin{center}
$\CD_n^{(1)}=$\!\!\!
\begin{tikzpicture}[baseline=(nb.base),label distance=-2pt,xscale=0.64, yscale=0.64]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle 3$] (n3) at (1 , 1 ) {};
\node (n4) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n5) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n6) at (5 , 0 ) {};
\node (n8) at (3 , 0 ) {};
\node (nb) at (0.50, 0.50) {\phantom{7}};
\foreach \x/\y in {1/2, 2/3, 5/6}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n4);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -0.50pt, shorten >= -2.50pt] (n4) to (n8);
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n8) to (n5);
\end{tikzpicture} \quad
$\CD_{n,s}^{(2)}=$\!\!\!
\begin{tikzpicture}[baseline=(n6.base),label distance=-2pt,
xscale=0.64, yscale=0.64]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle s$] (n1) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle s\mpppss 1$] (n2) at (4 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle s\mpppss 2$] (n3) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n5) at (1 , 1 ) {};
\node (n6) at (2 , 1 ) {$\mathclap{\phantom{7}}$};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below right:$\scriptscriptstyle s\mpppss 3$] (n4) at (5 , 1 ) {};
\node (n9) at (6 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n10) at (7, 1 ) {};
\foreach \x/\y in {1/2, 1/3, 2/4, 3/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n5) to (n6);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -1.5pt, shorten >= -.50pt] (n6) to (n1);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.pt, shorten >= -1.50pt] (n4) to (n9);
\foreach \x/\y in { 9/10}
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\quad
$\CD_{n,s}^{(3)}=$\!\!\!\!\!
\begin{tikzpicture}[baseline=(nb.base),label distance=-2pt,
xscale=0.64, yscale=0.64]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 1$] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle 2$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle p$] (n3) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n$] (n4) at (5 , 1 ) {};
\node (n5) at (2 , 1 ) {};
\node (n6) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle s\mpppss 1$] (n7) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle s\mpppss 2$] (n8) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n9) at (4 , 0 ) {};
\node (n10) at (2 , 0 ) {};
\node (n11) at (3 , 0 ) {};
\node (nb) at (0.50, .50) {\phantom{7}};
\foreach \x/\y in {1/2, 3/4, 9/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-stealth, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n5);
\foreach \x/\y in {5/6, 10/11}
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -0.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\draw [-stealth, shorten <= -2.50pt, shorten >= 2.50pt] (n6) to (n3);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n7) to (n8);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n8) to (n10);
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n11) to (n9);
\draw [-stealth, shorten <= 3.50pt, shorten >= 1.90pt] ([yshift=-0.5]n1.south east) to ([yshift=1.5]n9.north west);
\end{tikzpicture}
\end{center}
where $n\geq 4$ if $Q_{I}=\CD_n^{(1)}$; $n\geq 4$, $s\geq 1$ if $Q_{I}=\CD_{n,s}^{(2)}$ and $n\geq 5$, $s\geq 2$ if $Q_I=\CD_{n,s}^{(3)}$.
\item\label{lemma:pathext:posit:e} $\Dyn_I=\EE_n$, where $n\in\{6,7,8\}$, and the Hasse digraph $\CH(I)$, up to isomorphism, is one of $498$ \textnormal{digraphs [}$86$~up to orientation of hanging paths, see \Cref{corr:ahchoredpath}\ref{corr:ahchoredpath:anyorient}\textnormal{]}, i.e., there are $38, 145, 315$ \textnormal{[}$11, 30, 45$\textnormal{ up to orientation of hanging paths]} digraphs with $\Dyn_I=\EE_6, \EE_7$ and $\EE_8$, respectively. In particular, digraphs $\CH(I)$ of all posets $I$ with $\Dyn_I=\EE_6$ are depicted below.
\begin{center}
{\newcommand{\mxscale}{0.65}
\newcommand{\myscale}{0.55}
\hfil
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (2 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 1 ) {};
\foreach \x/\y in {1/2, 1/3, 1/4, 2/5, 3/6}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 5/3, 5/6}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n3) to (n4);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (0 , 2 ) {};
\foreach \x/\y in {1/2, 1/3, 2/5, 3/5, 6/3}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n1) to (n4);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (3 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 5/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n5) to (n6);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (2 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 0.50) {};
\foreach \x/\y in {1/2, 1/3, 1/4, 2/5, 3/5, 3/6, 4/6}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (0 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 1 ) {};
\foreach \x/\y in {1/3, 1/4, 2/4, 2/5, 3/6, 4/6, 5/6}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\hfill\mbox{} \\[0.34cm]
\hfil
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (2 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 0 ) {};
\foreach \x/\y in {1/2, 1/3, 2/5, 3/5}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/4, 3/6}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1.50, 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (1 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 5/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n1) to (n6);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1.50, 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 5/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to (n6);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1.50, 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (3 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 5/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n4) to (n6);
\end{tikzpicture} \hfill
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=\mxscale, yscale=\myscale]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n2) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n3) at (2 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n5) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n6) at (2 , 0 ) {};
\foreach \x/\y in {1/2, 1/5, 2/3, 3/4, 5/6, 6/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}\hfill\mbox{}
}\end{center}
\end{enumerate}
\item\label{lemma:pathext:princ} $I$ is principal and:
\begin{enumerate}[label=\normalfont{(b\arabic*)}, leftmargin=3ex]
\item\label{lemma:pathext:princ:a} $\Dyn_I=\AA_{n-1}$ and the Hasse digraph $\CH(I)$ is a $2$-regular digraph with at least two sinks.
\item\label{lemma:pathext:princ:e} $\Dyn_I=\EE_{n-1}$, where $n\in\{8,9\}$, and the Hasse digraph $\CH(I)$, up to isomorphism, is one of $850$ \textnormal{digraphs [}$98$~up to orientation of hanging paths\textnormal{]}, i.e., there are $185$ and $665$ \textnormal{[}$36$ and $62$ up to orientation of hanging paths\textnormal{]} digraphs with $\Dyn_I= \EE_7$ and $\EE_8$, respectively.
\end{enumerate}
\item\label{lemma:pathext:indef} $I$ is indefinite.
\end{enumerate}
\end{lemma}
\begin{proof}
First, we note that:
\begin{itemize}
\item by \Cref{corr:ahchoredpath}\ref{corr:ahchoredpath:anyorient} and
results of \cite{gasiorekOnepeakPosetsPositive2012}, for posets $I$ with $\CH(I)$ isomorphic with $\CA_n,\ab\CD_n^{(1)},\ab\CD_{n,s}^{(2)},\ab\CD_{n,s}^{(3)}$ we have that $I$ is positive, $\Dyn_I=\AA_n$ if $\CH(I)\simeq \CA_n$ and $\Dyn_I=\DD_n$ otherwise;
\item by \Cref{thm:quiverdegmax2}\ref{thm:quiverdegmax2:cycle:multsink} posets $I$ with $\CH(I)$ being $2$-regular with at least two sinks are principal and $\Dyn_I=\AA_{n-1}$.
\end{itemize}
\medskip
The proof is divided into two parts. First, we prove the thesis by analyzing all posets on at most $11$ points. Then, using induction, we prove it for posets $I$ of size $|I|>11$.\smallskip
\textbf{Part $1^\circ$} It is easy to verify that connected posets $I$ with $|I|\in\{1,2,3\}$ satisfy the assumptions, are positive, and their Hasse digraph $\CH(I)$ is isomorphic with a path graph. Therefore $\Dyn_I=\AA_{|I|}$ and the thesis follows. Now, using a computer algebra system (e.g., SageMath or Maple), we compute all (up to isomorphism) partially ordered sets $I$ of size at most $11$ using a suitably modified version of~\cite[Algorithm 7.1]{gasiorekOnepeakPosetsPositive2012} (see also~\cite{brinkmannPosets16Points2002} for a different approach). There are exactly $\num{49519383}$ posets $I$ of size $4\leq |I| \leq 11$, of which $\num{46485488}$ are connected. In $\num{58198}$ cases, for some $p\in I$, the graph $\ov{\CH(I\setminus\{p\})}$ is isomorphic with a path graph.
A more precise analysis is given in the following table.
{\sisetup{group-minimum-digits=4}
\begin{longtable}{lrrrrrrrrr}\toprule
& & \multicolumn{5}{c}{positive} & \multicolumn{2}{c}{principal} & indefinite
\\\cmidrule(lr){3-7}\cmidrule(lr){8-9}\cmidrule(lr){10-10}
$n$& $\# I$ & $\CA_n$ & $\CD_n^{(1)}$ & $\CD_{n,s}^{(2)}$ & $\CD_{n,s}^{(3)}$ & $\EE_n$ & $\AA_{n-1}$ & $\EE_{n-1}$ & $\# I$ \\\midrule
$4$ & $10$ & $4$ & $4$ & $1$ & & & $1$ & & \\
$5$ & $34$ & $10$ & $12$ & $4$ & $3$ & & $1$ & & $4$\\
$6$ & $129$ & $16$ & $24$ & $12$ & $7$ & $38$ & $5$ & & $27$\\
$7$ & $413$ & $36$ & $48$ & $32$ & $15$ & $145$ & $6$ & & $131$\\
$8$ & $\num{1369}$ & $64$ & $96$ & $80$ & $31$ & $315$ & $17$ & $185$ & $581$\\
$9$ & $\num{4184}$ & $136$ & $192$ & $192$ & $63$ & & $25$ & $665$ & $\num{2911}$\\
$10$ & $\num{12980}$ & $256$ & $384$ & $448$ & $127$ & & $56$ & & $\num{11709}$\\
$11$ & $\num{39079}$ & $528$ & $768$ & $\num{1024}$ & $255$ & & $88$ & & $\num{36416}$\\
\bottomrule
\end{longtable}}
In particular, there are $\num{46749427}$ non-isomorphic posets of size $11$,
of which $\num{43944974}$ are connected. In $\num{39079}$
cases, for some $p\in I$, the graph
$\ov{\CH(I\setminus\{p\})}$ is isomorphic to a path graph and,
up to isomorphism, there are exactly:
\begin{itemize}
\item $\num[group-minimum-digits = 4]{2575}$ positive posets $I$, of which
$528$, $768$, $1024$ and $255$ have their Hasse digraphs $\CH(I)$ isomorphic with
$\CA_n$, $\CD_n^{(1)}$, $\CD_{n,s}^{(2)}$, and $\CD_{n,s}^{(3)}$, respectively;
\item $88$ principal posets $I$ whose Hasse quiver $\CH(I)$ is $2$-regular and contains at least two sinks;
\item $\num{36416}$ indefinite posets.
\end{itemize}
This computer-assisted analysis completes the proof for posets $I$ of size $|I|\leq 11$.\smallskip
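The trichotomy positive/principal/indefinite used in the table and the lists above is a routine computation. The following Python fragment is only an illustrative sketch: it assumes that the incidence quadratic form is $q_I(x)=\sum_{i}x_i^2+\sum_{i\prec_I j}x_ix_j$ (the precise definition is \eqref{eq:quadratic_form}, not reproduced here) and classifies a poset, given by its covering pairs, via the eigenvalues of the symmetric Gram matrix of $q_I$.

```python
import itertools
import numpy as np

def sym_gram(n, covers):
    """Matrix 2*G_I of the (assumed) incidence form
    q_I(x) = sum_i x_i^2 + sum_{i<j in I} x_i*x_j,
    where the sum runs over all strictly comparable pairs;
    `covers` lists the covering pairs of the poset."""
    less = set(covers)
    changed = True
    while changed:  # transitive closure of the covering relation
        changed = False
        for (a, b), (c, d) in itertools.product(list(less), repeat=2):
            if b == c and (a, d) not in less:
                less.add((a, d))
                changed = True
    G = 2 * np.eye(n)
    for a, b in less:
        G[a, b] += 1
        G[b, a] += 1
    return G

def classify(n, covers, tol=1e-9):
    """positive <=> all eigenvalues > 0; principal <=> non-negative of corank 1."""
    eig = np.linalg.eigvalsh(sym_gram(n, covers))
    if eig[0] > tol:
        return "positive"
    if eig[0] < -tol:
        return "indefinite"
    corank = int(np.sum(np.abs(eig) <= tol))
    return "principal" if corank == 1 else f"non-negative, corank {corank}"

# A 3-chain is positive; the 4-point poset whose Hasse digraph is a
# 2-regular 4-cycle with two sinks (and two sources) is principal.
print(classify(3, [(0, 1), (1, 2)]))                  # positive
print(classify(4, [(0, 1), (2, 1), (2, 3), (0, 3)]))  # principal
```

The two test posets match the $n=4$ row of the table: the unique principal poset on $4$ points is the one whose Hasse digraph is a $2$-regular $4$-cycle with two sinks.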
\textbf{Part $2^\circ$} We proceed by induction. Assume that $n=|I|>11$ and that the thesis holds for posets $I'$ of size $|I'|=n-1$. To prove the inductive step, assume that $I$ is a finite connected poset such that
$\ov{\CH(I\setminus\{p\})}$ is isomorphic to a path graph for some $p\in \{1,\ldots,n\}$.
Without loss of generality, we may assume that
$p=1$ and
$\ov{\CH(I\setminus \{1\})}\simeq P(2,n)= 2 \,\rule[2.5pt]{22pt}{0.4pt}\,3\,\rule[2.5pt]{22pt}{0.4pt}\,
\,\hdashrule[2.5pt]{12pt}{0.4pt}{1pt}\,\rule[2.5pt]{22pt}{.4pt}\,n-1\,\rule[2.5pt]{22pt}{0.4pt}\,n$.
Consider the poset $J\eqdef I\setminus \{n\}$.\smallskip
(A) If $J$ is not connected, i.e.,
$n$ is an articulation point in the graph $\ov{\CH(I)}$,
then, by the assumptions, $J$ has two connected components, $J=\{2,\ldots,n-1\}\sqcup \{1\}$, as depicted below:
\begin{center}
$\ov\CH(I)=$\!\!\!\!\!
\begin{tikzpicture}[baseline=(n2.base),label distance=-2pt]
\matrix [matrix of nodes, ampersand replacement=\&, nodes={minimum height=1.3em,minimum width=1.3em,
text depth=0ex,text height=1ex, execute at begin node=$, execute at end node=$}
, column sep={15pt,between borders}, row sep={10pt,between borders}]
{
|(n2)|2 \& |(n3)|3 \& |(n4)| \& |(n5)| \& |(n6)|n-1 \& |(n7)|n \& |(n1)|1 \\
};
\foreach \x/\y in {2/3, 3/4, 5/6, 6/7, 7/1}
\draw [-, shorten <= -2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n4) to (n5);
\draw [thick,decoration={brace,raise=0.25cm},decorate] (n2.west) -- (n7.east) node [pos=0.5,anchor=north,yshift=0.85cm] {$\scriptstyle \ov \CH(I\setminus\{p\})$};
\draw[line cap= round,line width =12pt,opacity=0.2,green,shorten >=1ex] (n2.center) to (n6.east);
\draw[draw=black,line cap= round,line width =12pt,opacity=0.2,green,shorten >=1ex,shorten <=1ex] (n1.west) to (n1.east);
\end{tikzpicture}\!\!\!\!\!
$\simeq \CA_n,$
\end{center}
and the thesis follows.\medskip
\pagebreak[1]
(B) If $J$ is connected, then, by the inductive hypothesis, one of the following conditions holds:
(i) $J$ is positive and $\CH(J)\simeq Q_{J}$, where
$Q_{J}\in\{\CA_{n-1},\CD_{n-1}^{(1)},\CD_{n-1,s}^{(2)},\CD_{n-1,s}^{(3)}\}$;
(ii) $J$ is principal, and the Hasse quiver $\CH(J)$ is $2$-regular with at least two sinks;
(iii) $J$ is indefinite.
We analyze these cases one by one.\smallskip
(i) Assume that $J=I\setminus\{n\}$ is positive, i.e., $\CH(J)\simeq Q_{J}$, where
$Q_{J}\in\{\CA_{n-1},\CD_{n-1}^{(1)},\CD_{n-1,s}^{(2)},\CD_{n-1,s}^{(3)}\}$, and let $\CF_1,\ldots, \CF_7$ denote the indefinite posets presented in \Cref{tbl:indefposets}. First we note that
(by assumptions) the digraphs $\CH(I)$ and $\CH(J)$ are connected and
$\ov{\CH(I\setminus \{1\})}\simeq P(2,n)\simeq\CA_{n-1}$. Hence we conclude that
the degree of the vertex $n$ in the digraph $\CH(I)$ equals $1$ or $2$.
Depending on the shape of $Q_J$ we have:
\begin{enumerate}[label=\normalfont{($\arabic*^\circ$)},wide,labelindent=0pt]
\item\label{lemma:pathext:prf:b:i:i} \textbf{if $Q_J=\CA_{n-1}$}, then either $\CH(I)\simeq Q_I$,
where $Q_I\in\{\CA_n,\CD_n^{(1)},\CD_{n,s'}^{(3)}\}$ (for some $2\leq s'< n-1$) and $I$ is positive or $\CH(I)$ is $2$-regular (with at least two sinks) and $I$ is principal;
\item\label{lemma:pathext:prf:b:i:ii} \textbf{if $Q_J=\CD_{n-1}^{(1)}$}, then either $\CH(I)\simeq Q_I$,
where $Q_I\in\{\CD_n^{(1)},\CD_{n,1}^{(2)},\CD_{n,n-3}^{(2)}, \CD_{n,2}^{(3)}\}$
and $I$ is positive or $I$ is indefinite, as it contains (as a subposet)
$\CF\in\{\CF_1,\ldots, \CF_6\}$;
\item\label{lemma:pathext:prf:b:i:iii} \textbf{if $Q_J=\CD_{n-1,s}^{(2)}$}, then either $\CH(I)\simeq \CD_{n,s'}^{(2)}$
(for some $1\leq s'\leq n-3$) and $I$ is positive or $I$ is indefinite, as it contains (as a subposet)
$\CF\in\{\CF_3, \ldots, \CF_6\}$;
\item\label{lemma:pathext:prf:b:i:iiii} \textbf{if $Q_J=\CD_{n-1, s}^{(3)}$}, then either $\CH(I)\simeq \CD_{n,s'}^{(3)}$
(for some $2\leq s'< n-1$) and $I$ is positive or $I$ is indefinite, as it contains (as a subposet)
$\CF\in\{\CF_2, \ldots, \CF_7\}$.
\end{enumerate}
We describe the case \ref{lemma:pathext:prf:b:i:iii} in more detail.
Since $q_I(v)=q_{I^{op}}(v)$
for every $v\in\ZZ^n$ (where $I^{op}$ is the poset opposite to $I$), the poset
$I$ is indefinite
(non-negative of corank $r$) if and only if $I^{op}$ is indefinite
(non-negative of corank $r$). Therefore we analyse $I$ up to the ${}^{op}$-operation.
Assume that $I=(\{1,\ldots,n \}, \preceq_I)$ is a finite poset such that
$\ov{\CH(I\setminus \{1\})}\simeq P(2,n)$ and
$\CH(I\setminus \{n\})\simeq \CD_{n-1,s}^{(2)}$. The Hasse digraph
$\CH(I)$ has one of the following shapes:\medskip
\noindent($3^\circ a$)
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.65, yscale=0.55]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle 1$] (n1) at (3 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (0 , 1 ) {};
\node (n3) at (1 , 1 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle s'$] (n4) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n5) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (3 , 0 ) {};
\node (n7) at (5 , 1 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n8) at (6 , 1 ) {};
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n3);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= 2.50pt] (n3) to (n4);
\foreach \x/\y in {1/5, 4/1, 4/6, 6/5}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= -2.50pt] (n5) to (n7);
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n7) to (n8);
\end{tikzpicture}
\qquad\qquad
($3^\circ b$)
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.55]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\phantom{\mppmss }$] (n1) at (11 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[xshift=-1.5ex]below:$\scriptscriptstyle n\mppmss 1$}] (n2) at (10 , 2 ) {};
\node[circle, fill=, gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n3) at (9 , 2 ) {};
\node[circle, fill=, gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n4) at (8 , 1 ) {};
\node[circle, fill=, gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n5) at (9 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (9 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n7) at (10 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n8) at (11 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n9) at (12 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n10) at (13 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n11) at (14 , 0 ) {};
\node[circle, fill=, gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n12) at (15 , 0 ) {};
\node[circle, fill=, gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n13) at (16 , 0 ) {};
\foreach \x/\y in {2/3, 3/4, 5/4, 6/4}
\draw [-, gray!90, stealth-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, gray!90, shorten <= 1pt, shorten >= 1pt] (n13) to (n12);
\foreach \x/\y in {1/2, 1/5, 7/5, 7/6}
\draw [stealth-, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {8/7, 9/8, 10/9, 11/10}
\draw [-, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n12) to (n11);
\end{tikzpicture}
We have $\CH(I)\simeq \CD_{n,s'}^{(2)}$ in case ($3^\circ a$), and $I$ is positive; in case ($3^\circ b$), $I$ is indefinite as $\CF_3\subseteq I$.\medskip
\noindent ($3^\circ c$)
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n2) at (2 , 0 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label={[xshift=0.7ex]above left:$\scriptscriptstyle n\mppmss 1$}] (n3) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above:$\scriptscriptstyle n\mppmss 2$] (n4) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[xshift=-0.8ex]above right:$\scriptscriptstyle n\mppmss 3$}] (n5) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n7) at (4 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n8) at (5 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n9) at (6 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n10) at (7 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n11) at (8 , 1 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n12) at (9 , 1 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n13) at (10 , 1 ) {};
\foreach \x/\y in {3/1, 3/4}
\draw [-, gray!90, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/2, 1/5, 4/5}
\draw [-stealth, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {5/6, 6/7, 7/8, 8/9, 9/10, 10/11}
\draw [-, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [ -, gray!90, shorten <= 2.50pt, shorten >= 2.50pt] (n12) to (n13);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n12) to (n11);
\end{tikzpicture}
\quad or\quad
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.55]
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n1) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n2) at (1 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n3) at (2 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n4) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n5) at (3 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (4 , 2 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n7) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \phantom{\mppmss }n$] (n8) at (6 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n9) at (5 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n10) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n11) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n12) at (2 , 0 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n13) at (5 , 2 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n14) at (6 , 2 ) {};
\foreach \x/\y in {1/2, 1/4, 1/7}
\draw [-, gray!90, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/3, 4/3, 9/8, 10/9, 11/10, 12/11}
\draw [-stealth, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {3/5, 5/6}
\draw [-, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {6/13, 7/12}
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n\x) to (n\y);
\draw [gray!90, -, shorten <= 1.50pt, shorten >= 1.50pt] (n13) to (n14);
\draw [-stealth, line width=1.2pt, shorten <= 2.50pt, shorten >= 2.50pt] (n2) to ([yshift=2]n8.north west);
\end{tikzpicture}\medskip
In this case $I$ is indefinite as $\CF_4\subseteq I$.\medskip
\noindent ($3^\circ d$)\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.55]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (8 , 2 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 3$] (n2) at (6 , 1 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n22) at (5 , 1 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n3) at (7 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n4) at (8 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n5) at (9 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (8 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n7) at (7 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n8) at (6 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n9) at (5 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n10) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n11) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n12) at (2 , 0 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n13) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n14) at (0 , 1 ) {};
\foreach \x/\y in {3/1, 3/4, 14/13}
\draw [-, gray!90, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [ gray!90, -, shorten <= 2.50pt, shorten >= 2.50pt] (n22) to (n2);
\foreach \x/\y in {1/5, 4/5, 6/5, 7/6, 8/7, 9/8, 10/9, 11/10, 12/11, 14/1}
\draw [line width=1.2pt,-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n12) to (n13);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n3) to (n2);
\end{tikzpicture}
\quad or\quad
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.54]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 1$] (n1) at (1 , 1 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle 2$] (n2) at (4 , 2 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n3) at (3 , 2 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n4) at (2 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n5) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n6) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n7) at (1 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n8) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n9) at (3 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n10) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n11) at (5 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n12) at (6 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n13) at (7 , 0 ) {};
\node[circle, fill=gray!90, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle $] (n14) at (8 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n$] (n15) at (9 , 1 ) {};
\foreach \x/\y in {1/4, 5/4, 14/15}
\draw [-, gray!90, -stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {1/15, 6/1, 6/5, 6/7, 7/8, 8/9, 9/10, 10/11, 11/12, 12/13}
\draw [-stealth, line width=1.2pt,shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n3) to (n4);
\draw [line width=1.2pt, gray!90, line cap=round, dash pattern=on 0pt off 3\pgflinewidth, -, shorten <= 2.0pt, shorten >= 1.0pt] (n14) to (n13);
\draw [-, gray!90,shorten <= 2.0pt, shorten >= 2.0pt] (n3) to (n2);
\end{tikzpicture}\medskip
\noindent In case ($3^\circ d$) the poset $I$ is indefinite as $\CF_5\subseteq I$ or $\CF_6\subseteq I$, respectively. The cases~\ref{lemma:pathext:prf:b:i:i}, \ref{lemma:pathext:prf:b:i:ii} and \ref{lemma:pathext:prf:b:i:iiii} follow by analogous arguments. Details are left to the reader.\medskip
(ii) Assume that $J$ is principal, i.e., the Hasse digraph $\CH(J)$ is $2$-regular with at least two sinks. There are two possibilities: the degree of the vertex $n$ in the quiver $\CH(I)$ equals one or two. If $\deg(n)=1$, then $I$ is indefinite, as it contains (as a subposet) a poset of the shape $\CF_1$, $\CF_2$, $\CF_3$ or $\CF_4$ given in \Cref{tbl:indefposets}.
If $\deg(n)=2$, then the Hasse quiver $\CH(I)$ is $2$-regular and contains the same number of sinks as $\CH(J)$. Therefore, by \Cref{thm:quiverdegmax2}\ref{thm:quiverdegmax2:cycle:multsink}, $I$ is principal and statement \ref{lemma:pathext:princ} follows.\medskip
(iii) Since the poset $J\subset I$ is indefinite, the poset $I$ is indefinite as well, and the proof is finished.
\end{proof}
\begin{center}
\textbf{Proof of \Cref{thm:a:main}}
\end{center}
Now we have all the necessary tools to prove the main result of this work.
\begin{proof}[Proof of \Cref{thm:a:main}]
Assume that $I=(\{1,\ldots,n\},\preceq_I)$ is a finite connected non-negative poset and $\Dyn_I=\AA_{n-\crk_I}$.\smallskip
\ref{thm:a:main:posit} Our aim is to show that $\crk_I=0$ if and only if
$\ov{\CH(I)}\simeq \CA_n =
1 \,\rule[2.5pt]{22pt}{0.4pt}\,2\,\rule[2.5pt]{22pt}{0.4pt}\,
\hdashrule[2.5pt]{12pt}{0.4pt}{1pt}\,
\rule[2.5pt]{22pt}{.4pt}\,n$.\smallskip
Since ``$\Leftarrow$'' is a consequence of
\Cref{lemma:trees}\ref{lemma:trees:posit},
it is sufficient to prove ``$\Rightarrow$''.
First, we show that
$\deg(v)\leq 2$
for every vertex $v\in\{1,\ldots,n\}$. Assume, to the contrary, that there exists a vertex $v$ with $\deg(v)>2$. Then there exists a subposet $J\subseteq I$ whose Hasse quiver $\CH(J)$ has the form
\begin{center}
$\CH(J)\colon$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle v$] (n1) at (1 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle $] (n2) at (2 , 0.50) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle $] (n3) at (0 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle $] (n4) at (0 , 0 ) {};
\foreach \x/\y in {1/2, 3/1, 4/1}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\end{tikzpicture}.
\end{center}
By \Cref{lemma:trees}\ref{lemma:trees:posit}, $\Dyn_{J}=\DD_4$. Since $J\subseteq I$, \cite[Proposition 2.25]{barotQuadraticFormsCombinatorics2019} yields $\Dyn_I\in\{\DD_n, \EE_n\}$, contrary to our assumptions. Hence we conclude that $\deg(v)\leq 2$ for every vertex $v\in\{1,\ldots,n\}$ and, in view of \Cref{thm:quiverdegmax2}\ref{thm:quiverdegmax2:path}, statement \ref{thm:a:main:posit} follows.\medskip
\ref{thm:a:main:princ}
We show that $\crk_I=1$ if and only if $\CH(I)$ is $2$-regular with at least two sinks. To prove ``$\Rightarrow$'' assume that $I$ is a finite connected principal poset of the Dynkin type $\Dyn_I=\AA_{n-1}$. By definition, there exists $k\in \{1,\ldots,n\}$ such that the poset $J\eqdef I\setminus\{k\}$ is positive of the Dynkin type $\Dyn_J=\AA_{n-1}$. By~\ref{thm:a:main:posit} we have $\ov {\CH(J)}\simeq \CA_{n-1}$ (i.e., the graph $\ov{\CH(I\setminus\{k\})}$ is isomorphic to a path graph); thus, by \Cref{lemma:pathext}\ref{lemma:pathext:princ}, the Hasse digraph $\CH(I)$ is $2$-regular with at least two sinks and ``$\Rightarrow$'' follows.
``$\Leftarrow$'' By \Cref{thm:quiverdegmax2}\ref{thm:quiverdegmax2:cycle:multsink}, every poset $I$ with Hasse digraph $\CH(I)$ being $2$-regular with at least two sinks is principal of the Dynkin type $\Dyn_I=\AA_{n-1}$.\medskip
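The ``$\Leftarrow$'' criterion is purely combinatorial and easy to test mechanically. A small sketch (our assumed reading of ``$2$-regular'': every vertex of the underlying graph has degree $2$; a sink is a vertex with no outgoing arrows):

```python
def is_2regular_with_two_sinks(n, covers):
    """Check whether a Hasse digraph on vertices 0..n-1, given by its
    covering arrows (a, b) meaning a -> b, is 2-regular (every vertex
    has underlying degree 2) and has at least two sinks."""
    deg = [0] * n      # degree in the underlying (undirected) graph
    outdeg = [0] * n   # number of outgoing arrows
    for a, b in covers:
        deg[a] += 1
        deg[b] += 1
        outdeg[a] += 1
    sinks = sum(1 for v in range(n) if outdeg[v] == 0)
    return all(d == 2 for d in deg) and sinks >= 2

# The oriented 4-cycle with two sources and two sinks qualifies;
# a 3-element chain does not (its endpoints have degree 1).
print(is_2regular_with_two_sinks(4, [(0, 1), (2, 1), (2, 3), (0, 3)]))  # True
print(is_2regular_with_two_sinks(3, [(0, 1), (1, 2)]))                  # False
```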
\ref{thm:a:main:crkbiggeri} Our aim is to show that the assumption $\Dyn_I=\AA_{n-\crk_I}$ yields $\crk_I \leq 1$. Assume, by contradiction, that $I$ is a finite connected non-negative poset of corank $\crk_I=r>1$ and Dynkin type $\Dyn_I=\AA_{n-r}$. It follows that there exists a $(j_1,\ldots,j_r)$-special $\ZZ$-basis $\bh^{(j_1)},\ldots, \bh^{(j_r)}\in\ZZ^n$ of the incidence quadratic form $q_I\colon\ZZ^n\to\ZZ$~\eqref{eq:quadratic_form}, see \Cref{df:Dynkin_type}. Moreover, by \Cref{fact:specialzbasis}\ref{fact:specialzbasis:subbigraph}, $J\eqdef I\setminus\{j_3,\ldots,j_r\}$ is a finite connected non-negative poset of corank~$2$ and Dynkin type $\Dyn_J=\AA_{n-r}$. We show that no such connected non-negative corank~$2$ poset $J$ of Dynkin type $\Dyn_J=\AA_{n-r}$ exists, i.e., the assumption $\crk_I>1$ yields a contradiction. \smallskip
First, we note that~\cite[Corollary 4.4(b)]{gasiorekAlgorithmicStudyNonnegative2015} yields $|J|>16$. Moreover, $J_1\eqdef J\setminus\{j_1\}$ and $J_2\eqdef J\setminus\{j_2\}$ are finite connected principal posets of Dynkin type $\Dyn_{J_1}\!=\Dyn_{J_2}\!=\AA_{n-r}$. By~\ref{thm:a:main:princ}, $\CH(J_1)$ and $\CH(J_2)$ are $2$-regular digraphs with at least two sinks. That is, one of the following conditions holds for the poset $J$:
\begin{enumerate}[label=\normalfont{(\roman*)}]
\item\label{thm:a:main:crkbiggeri:prf:i}
$\CH(J)$ is a $2$-regular digraph that has at least two sinks (i.e., $j_1$ and $j_2$ lie along
oriented paths in $\CH(J)$);
\item\label{thm:a:main:crkbiggeri:prf:ii} $\CH(J)$ has at least two sinks and is of the shape:
$\CH(J)\colon$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.55, yscale=0.55]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n1) at (0 , 1 ) {};
\node (n2) at (1 , 1 ) {$ $};
\node (n3) at (2 , 1 ) {$ $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n4) at (3 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle j_2$] (n5) at (4 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle j_1$] (n6) at (4 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n7) at (5 , 1 ) {};
\node (n8) at (6 , 1 ) {$ $};
\node (n9) at (7 , 1 ) {$ $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt] (n10) at (8 , 1 ) {};
\foreach \x/\y in {1/2, 7/8}
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {2/3, 8/9}
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {3/4, 9/10}
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {4/5, 4/6, 5/7, 6/7}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [bend right=35.0, -, shorten <= 2.50pt, shorten >= 2.50pt] (n1) to (n10);
\end{tikzpicture}
\item\label{thm:a:main:crkbiggeri:prf:iii} $J$ contains (as a subposet) an indefinite poset of the shape
$\CF_1$, $\CF_2$, $\CF_3$ or $\CF_4$ given in \Cref{tbl:indefposets}.
\end{enumerate}
In the case~\ref{thm:a:main:crkbiggeri:prf:i} the poset $J$ is principal, i.e., $\crk_J=1$, which contradicts the assumption that $J$ is a corank $2$ poset.
Now, we show that the same holds in the case~\ref{thm:a:main:crkbiggeri:prf:ii}.
Without loss of generality,
we may assume that $\CH(J)$ has the shape
\begin{center}
$\CH(J)\colon$
\begin{tikzpicture}[baseline={([yshift=-2.75pt]current bounding box)},label distance=-2pt,
xscale=0.64, yscale=0.64]
\draw[draw, fill=cyan!10](0,1) circle (17pt);
\draw[draw, fill=cyan!10](5,2) circle (17pt);
\draw[draw, fill=cyan!10](9,1) circle (17pt);
\draw[draw, fill=cyan!10](5,0) circle (17pt);
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=above left:$\scriptscriptstyle j$] (n1) at (1 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:$\scriptscriptstyle j\mpppss 1$] (n2) at (2 , 3 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$\scriptscriptstyle j\mpppss 2$] (n3) at (2 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[xshift=-0.7ex]above right:$\scriptscriptstyle j\mpppss 3$}] (n4) at (3 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=left:{$\scriptscriptstyle 1=r_1$}] (n5) at (0 , 1 ) {};
\node (n6) at (4 , 2 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle \phantom{\mpppss }r_2\phantom{\mpppss }$] (n7) at (5 , 2 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle r_2\mpppss 1$] (n8) at (6 , 2 ) {};
\node (n9) at (7 , 2 ) {$\scriptscriptstyle $};
\node (n10) at (8 , 2 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle r_t$] (n11) at (9 , 1 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below right:$\scriptscriptstyle r_t\mpppss 1$] (n12) at (8 , 0 ) {};
\node (n13) at (7 , 0 ) {$\scriptscriptstyle $};
\node (n14) at (6 , 0 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle r_s$] (n15) at (5 , 0 ) {};
\node (n16) at (4 , 0 ) {$\scriptscriptstyle $};
\node (n17) at (3 , 0 ) {$\scriptscriptstyle $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\mppmss 1$] (n18) at (2 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=below:$\scriptscriptstyle n\phantom{\mppmss }$] (n19) at (1 , 0 ) {};
\foreach \x/\y in {1/2, 1/3, 2/4, 3/4}
\draw [-stealth, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= 2.50pt, shorten >= 2.50pt] (n5) to (n1);
\foreach \x/\y in {4/6, 9/10, 14/13, 17/16}
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n\x) to (n\y);
\foreach \x/\y in {6/7, 10/11, 16/15}
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\foreach \x/\y in {5/19, 7/8, 8/9, 12/11, 13/12, 15/14, 19/18}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n18) to (n17);
\end{tikzpicture}
\end{center}
where:
\begin{itemize}
\item each of the vertices $r_1,\ldots,r_s\in \{1,\ldots, n \}$ is either a source or a sink,
\item
$1 = r_1 < r_2 < \cdots < r_t<\cdots < r_s$ and $s\eqdef 2k>2$,
\item
subdigraphs $\{1,\ldots,j+1, j+3,\ldots, r_2\}$, $\{1,\ldots,j, j+2,\ldots, r_2\}$,
$\{r_2,\ldots,r_3\}$, $\ldots$\,, $\{r_{s-1},\ldots,r_s\}$,
$\{r_s,\ldots,n, 1\}$ are oriented chains.
\end{itemize}
Since the incidence quadratic form $q_{J}\colon \ZZ^n\to \ZZ$ \eqref{eq:quadratic_form} is given by the formula:
{\allowdisplaybreaks\begin{align*}
q_{J}(x) =&
\sum_{i} x_i^2 \,\,+ \sum_{{1\leq t < s}}
\left(
\sum_{r_t\leq i<j \leq r_{t+1}} x_i x_j
\right) +
\sum_{r_s\leq i<j \leq n} x_i x_j
+ x_1(x_{r_s}+\cdots+ x_n) - x_{j+1}x_{j+2}\\
=&
\sum_{
i\not\in\{r_1,\ldots,r_s,j+1,j+2\}
} \frac{1}{2}x_i^2 +
\frac{(x_{j+1}-x_{j+2})^2}{2}
+ \frac{1}{2}
\sum_{1\leq t < s}
\left(\sum_{r_t\leq i\leq r_{t+1}} x_i\right)^2 +
\frac{1}{2}(x_{r_s} + \cdots + x_{n} + x_1)^2,
\end{align*}
we conclude that $q_J(v)\geq 0$ for every $v\in\ZZ^n$, i.e., $J$ is non-negative. Consider the vector $0\neq \bh_J=[\bh_1,\ldots,\bh_n]\in\ZZ^n$, where $\bh_{r_i}=1$ if $i$ is odd, $\bh_{r_i}=-1$ if $i$ is even, and all remaining coordinates are zero. It is straightforward to check that
\[
q_{J}(\bh_J) = \tfrac{1}{2}(\bh_{r_1}+\bh_{r_2})^2+\cdots+\tfrac{1}{2}(\bh_{r_{s-1}}+\bh_{r_s})^2 +\tfrac{1}{2}(\bh_{r_s}+\bh_1)^2=0,
\]
i.e., the poset $J$ is not positive and $\bh_J\in\Ker J$ has first coordinate equal to $\bh_1=1$. It follows that
\begin{itemize}
\item $\{\bh_J\}$ is a $(1)$-special $\ZZ$-basis of $\Ker J=\ZZ\cdot \bh_J\subseteq\ZZ^{n-r+1}$,
\item $J^{(1)}\eqdef J\setminus \{1 \}\simeq\CD_{n-r,s}^{(2)}$ is a positive poset of Dynkin type $\Dyn_J=\Dyn_{J^{(1)}}=\DD_{n-r}$, see \Cref{lemma:pathext}\ref{lemma:pathext:posit:d} and \Cref{df:Dynkin_type}.
\end{itemize}
Similarly to~\ref{thm:a:main:crkbiggeri:prf:i}, we conclude that the poset $J$ is principal,
which contradicts the assumption that $J$ is of corank $2$.
\smallskip
To finish the proof, we note that every $J$ that is not described in~\ref{thm:a:main:crkbiggeri:prf:i} and~\ref{thm:a:main:crkbiggeri:prf:ii} contains (as a subposet) one of the posets $\CF_1$, $\CF_2$, $\CF_3$ or $\CF_4$ presented in \Cref{tbl:indefposets}. Hence $J$ is indefinite. This follows by a standard case-by-case inspection, as in the proof of \Cref{lemma:pathext}. Details are left to the reader.
\end{proof}
\section{Enumeration of \texorpdfstring{$\AA_n$}{An} Dynkin type non-negative posets}
We finish the paper by giving explicit formulae~\eqref{fact:digrphnum:path:eq} and~\eqref{fact:digrphnum:cycle:eq} for the number of all possible orientations of the path and cycle graphs, up to isomorphism of \textit{unlabeled} digraphs. We apply these results to devise the formula~\eqref{thm:typeanum:eq} for the number of non-negative posets of Dynkin type $\AA_n$.
\begin{fact}\label{fact:digrphnum:path}
Let $P_n\eqdef P(1,n) =
1 \,\rule[2.5pt]{22pt}{0.4pt}\,2\,\rule[2.5pt]{22pt}{0.4pt}\,
\hdashrule[2.5pt]{12pt}{0.4pt}{1pt}\,
\rule[2.5pt]{22pt}{.4pt}\,n$
be a path graph on $n\geq 1$ vertices. There are $2^{n-1}$ possible orientations of
the edges of $P_n$, which yield exactly
\begin{equation}\label{fact:digrphnum:path:eq}
N(P_n)=
\begin{cases}
2^{n-2}, & \textnormal{if $n\geq 2$ is even},\\[0.1cm]
2^{\frac{n - 3}{2}} + 2^{n - 2}, & \textnormal{if $n\geq 1$ is odd,}\\
\end{cases}
\end{equation}
directed graphs, up to the isomorphism of unlabeled digraphs.
\end{fact}
\begin{proof}
Here we follow arguments given in the proof of~\cite[Proposition 6.7]{gasiorekAlgorithmicCoxeterSpectral2020}. To calculate the number of non-isomorphic orientations of edges of $P_n$, we consider two cases.
\begin{enumerate}[label=\normalfont{(\roman*)},wide]
\item\label{fact:digrphnum:path:prf:i} First, assume that $|I|=n\geq 2$ is an even number. In this case, every digraph $I$ has exactly two representatives among the $2^{n-1}$ edge orientations: one drawn ``from the left'' and the other ``from the right'' (i.e., its mirror image along the path). Therefore, the number $N(P_n)$ of all such non-isomorphic digraphs equals $2^{n-2}$.
\item\label{fact:digrphnum:path:prf:ii} Now we assume that $|I|=n\geq 1$ is an odd number. If $n=1$, then, up to isomorphism, there exists exactly $1=2^{-1}+2^{-1}$ digraph. Otherwise, $n\geq 3$ and among all $2^{n-1}$ edge orientations:
\begin{itemize}
\item digraphs that are ``symmetric'' along the path have exactly one representation, and
\item the rest of the digraphs have exactly two representations, analogously as in~\ref{fact:digrphnum:path:prf:i}.
\end{itemize}
It is straightforward to check that there are $2^\frac{n-1}{2}$ ``symmetric'' path digraphs, hence in this case we obtain
\[
N(P_n)=\frac{2^{n-1}-2^\frac{n-1}{2}}{2}+2^\frac{n-1}{2}=2^{\frac{n - 3}{2}} + 2^{n - 2}.\qedhere
\]
\end{enumerate}
\end{proof}
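As a quick sanity check (not part of the original argument), the closed formula \eqref{fact:digrphnum:path:eq} can be compared against a brute-force enumeration. The bit-word encoding of orientations and the reversal symmetry follow the proof above; all function names are ours.

```python
def N_path_bruteforce(n):
    # Orientations of the path 1 - 2 - ... - n, encoded as (n-1)-bit
    # words (bit i = 1 iff the edge {i+1, i+2} points "to the right").
    # The only nontrivial symmetry reads the path "from the right",
    # i.e., reverses the word and flips every arc.
    if n == 1:
        return 1
    seen = set()
    for m in range(2 ** (n - 1)):
        w = tuple((m >> i) & 1 for i in range(n - 1))
        rev = tuple(1 - b for b in reversed(w))
        seen.add(min(w, rev))
    return len(seen)

def N_path_formula(n):
    # the formula of the Fact; for n = 1 it reads 2^{-1} + 2^{-1} = 1
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 ** (n - 2)
    return 2 ** ((n - 3) // 2) + 2 ** (n - 2)

assert all(N_path_bruteforce(n) == N_path_formula(n) for n in range(1, 13))
```

The agreement over $1\leq n\leq 12$ also reproduces the initial terms of OEIS A051437 mentioned in the remark below.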
\begin{remark}
The formula~\eqref{fact:digrphnum:path:eq} describes the number of various combinatorial objects. For example: the number of linear oriented trees with $n$ arcs or unique symmetrical triangle quilt patterns along the diagonal of an $n\times n$ square,
see~\cite[OEIS sequence A051437]{oeis_A051437}.
\end{remark}
By \Cref{thm:a:main}\ref{thm:a:main:posit}, the Hasse digraph $\CH(I)$ of every positive connected poset $I$ of Dynkin type $\Dyn_I=\AA_n$ is an oriented path graph, as suggested in~\cite[Conjecture 6.4]{gasiorekAlgorithmicCoxeterSpectral2020}. Hence, \Cref{fact:digrphnum:path} gives an exact formula for the number of all such connected posets $I$, up to poset isomorphism (see also \cite[Proposition 6.7]{gasiorekAlgorithmicCoxeterSpectral2020}).
\begin{corollary}\label{cor:posit:num:poset}
Given $n\geq 1$, the total number $N(n,\AA)$ of all finite non\hyp isomorphic connected positive posets $I=(\{1,\ldots,n\},\preceq_I)$ of Dynkin type $\AA_n$ equals $N(n,\AA)\eqdef N(P_n)$ \eqref{fact:digrphnum:path:eq}.
\end{corollary}
Similarly, the description given in \Cref{thm:a:main}\ref{thm:a:main:princ} makes it possible to count all connected principal posets $I$ of Dynkin type $\AA_n$. First,
we need to know the exact number of all, up to isomorphism, orientations of the cycle graph $C_n$.
\begin{fact}\label{fact:digrphnum:cycle}
Let $C_n\eqdef P_n(1,1) =$
\begin{tikzpicture}[baseline=(n11.base),label distance=-2pt,xscale=0.65, yscale=0.74]
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[name=n11]left:$1$}] (n1) at (0 , 0 ) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[yshift=-0.3ex]above:$\scriptscriptstyle 2$}] (n2) at (1 , 0 ) {};
\node (n3) at (2 , 0 ) {$ $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label=right:$n$] (n4) at (5 , 0 ) {};
\node (n5) at (3 , 0 ) {$ $};
\node[circle, fill=black, inner sep=0pt, minimum size=3.5pt, label={[yshift=-0.5ex]above:$\scriptscriptstyle n\mppmss 1$}] (n6) at (4 , 0 ) {};
\draw[shorten <= 2.50pt, shorten >= 2.50pt] (n1) .. controls (0.2,0.6) and (4.8,0.6) .. (n4);
\draw [line width=1.2pt, line cap=round, dash pattern=on 0pt off 5\pgflinewidth, -, shorten <= -2.50pt, shorten >= -2.50pt] (n3) to (n5);
\foreach \x/\y in {1/2, 6/4}
\draw [-, shorten <= 2.50pt, shorten >= 2.50pt] (n\x) to (n\y);
\draw [-, shorten <= 2.50pt, shorten >= -2.50pt] (n2) to (n3);
\draw [-, shorten <= -2.50pt, shorten >= 2.50pt] (n5) to (n6);
\end{tikzpicture}
be the cycle graph on $n\geq 3$ vertices.
The number $N(C_n)$ of directed graphs $D$ with $\ov D=C_n$, up to isomorphism of unlabeled digraphs,
is given by the formula
\begin{equation}\label{fact:digrphnum:cycle:eq}
N(C_n)=
\begin{cases}
\frac{1}{2n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right), & \textnormal{if $n\geq 3$ is odd,}\\[0.1cm]
\frac{1}{2n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right)+ 2^{\frac{n}{2}-2}, & \textnormal{if $n\geq 4$ is even},\\
\end{cases}
\end{equation}
where $\varphi$ is the Euler's totient function.
\end{fact}
\begin{proof}
Assume that $D$ is a digraph such that $\ov D=C_n$ is a connected $2$-regular simple graph. Without loss of generality, we may assume that the digraph $D$
is depicted in the circle layout (on the plane), and its arcs are labeled with two colors:
\begin{itemize}
\item \textit{black}: if the arc is clockwise oriented, and
\item \textit{white}: if the arc is counterclockwise oriented.
\end{itemize}
That is, every $D$ can be viewed as a binary combinatorial necklace $\CN_2(n)$.
For example, for $n=5$ there exist $32$ orientations of edges of the cycle $C_5$ that yield exactly $8$ different binary necklaces of length $5$ shown in \Cref{fig:cycle_nekl_5}.
\begin{center}
{\setlength{\fboxsep}{2pt}
\fbox{\!\!\!\!\ringv{a}{0}{0}{0}{0}{0}\ringv{b}{1}{1}{1}{1}{1}}
\fbox{\!\!\!\!\ringv{c}{0}{0}{0}{0}{1}\ringv{d}{0}{1}{1}{1}{1}}\\[0.1cm]
\fbox{\!\!\!\!\ringv{e}{0}{0}{0}{1}{1}\ringv{f}{0}{0}{1}{1}{1}}
\fbox{\!\!\!\!\ringv{g}{0}{0}{1}{0}{1}\ringv{h}{0}{1}{0}{1}{1}}}\captionof{figure}{Binary combinatorial necklaces of length $5$}\label{fig:cycle_nekl_5}
\end{center}
Moreover, if $|D|=n$ is odd, then, up to digraph isomorphism, every $D$ has
exactly two representations among the necklaces: a ``clockwise'' and an ``anticlockwise'' one, as shown in \Cref{fig:cycle_nekl_5} (isomorphic digraphs are gathered in boxes).
On the other hand, if $|D|=n\geq 4$ is an even number, certain digraphs have exactly one representation among necklaces. As an illustration, let us consider the $n=6$ case.
\begin{center}
{\setlength{\fboxsep}{2pt}\fbox{\!\!\!\!\ringvi{a}{0}{0}{0}{0}{0}{0}\ringvi{b}{1}{1}{1}{1}{1}{1}
}
\fbox{\!\!\!\!\ringvi{c}{0}{0}{0}{0}{0}{1}\ringvi{d}{0}{1}{1}{1}{1}{1}
}
\fbox{\!\!\!\!\ringvi{e}{0}{0}{0}{0}{1}{1}\ringvi{f}{0}{0}{1}{1}{1}{1}
}
\ringvi{i}{0}{0}{0}{1}{1}{1}
\hspace{1.5pt}\fbox{\!\!\!\!\ringvi{g}{0}{0}{0}{1}{0}{1}\ringvi{h}{0}{1}{0}{1}{1}{1}
}
\fbox{\!\!\!\!\ringvi{j}{0}{0}{1}{0}{0}{1}\ringvi{k}{0}{1}{1}{0}{1}{1}
}
\ringvi{l}{0}{0}{1}{0}{1}{1}
\ringvi{m}{0}{1}{1}{0}{1}{0}
\ringvi{n}{0}{1}{0}{1}{0}{1}}
\captionof{figure}{Binary combinatorial necklaces of length $6$}\label{fig:cycle_nekl_6}
\end{center}
Every such ``rotationally symmetric'' digraph is uniquely determined by a directed path graph of length $\frac{n}{2} + 1$ and, by \Cref{fact:digrphnum:path}, there are exactly
$2^{\frac{n}{2}-1}$ such digraphs.
Now we show that every isomorphism $f\colon\{1,\ldots,n\}\to\{1,\ldots,n\}$ of
digraphs $D_1$ and $D_2$ with $\ov D_1=\ov D_2=C_n$ has a form of a ``clockwise'' or ``anticlockwise'' \textit{rotation}.
Fix a vertex $v_1\in D_1$ and consider the sequence $v_1,v_2,\ldots,v_n$ of vertices, where $v_{i+1}$ is the ``clockwise'' neighbour of $v_i$ (i.e., we have either a \textit{black} arc $v_i\to v_{i+1}$ or a \textit{white} arc $v_i\gets v_{i+1}$ in the digraph $D_1$). One of two possibilities holds: $f(v_2)$ is either the ``clockwise'' neighbour of $f(v_{1})$ or the ``anticlockwise'' one. In the first case, $f(v_{i+1})$ is the ``clockwise'' neighbour of $f(v_i)$ for every $2\leq i < n$, hence the isomorphism $f$ encodes a ``clockwise'' rotation. On the other hand, the assumption that $f(v_2)$ is the ``anticlockwise'' neighbour of $f(v_{1})$ implies that $f(v_{i+1})$ is the ``anticlockwise'' neighbour of $f(v_i)$, i.e., $f$ encodes an ``anticlockwise'' rotation.\smallskip
Summing up, if $|D|=n\geq 3$ is an odd number, the digraph $D$ has exactly two representatives among binary necklaces $\CN_2(n)$: one ``clockwise'' and the other ``anticlockwise''. Since
$|\CN_2(n)|=\frac{1}{n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right)$, see~\cite{riordanCombinatorialSignificanceTheorem1957},
the first part of the equality~\eqref{fact:digrphnum:cycle:eq} follows.
Assume now that $|D|=n\geq 4$ is an even number.
In the formula $|\CN_2(n)|/2$ we count only half (i.e., $2^{\frac{n}{2}-2}$) of ``rotationally symmetric'' digraphs. Hence
$N(C_n)=\frac{1}{2n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right)+ 2^{\frac{n}{2}-2}$
in this case.
\end{proof}
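The necklace argument above can be checked by machine (a sketch of ours, not part of the paper): an orientation of $C_n$ is a binary word as in the proof, and digraph isomorphisms act as rotations, possibly composed with the ``anticlockwise'' reading (reverse the word and flip every bit).

```python
from math import gcd

def phi(n):
    # Euler's totient function
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def N_cycle_formula(n):
    # the formula of the Fact; the divisor sum equals n * |N_2(n)|
    s = sum(2 ** (n // d) * phi(d) for d in range(1, n + 1) if n % d == 0)
    return s // (2 * n) + (2 ** (n // 2 - 2) if n % 2 == 0 else 0)

def N_cycle_bruteforce(n):
    # bit i = 1 iff the edge {i, i+1 mod n} is oriented clockwise
    def canon(w):
        rc = tuple(1 - b for b in reversed(w))   # anticlockwise reading
        return min(v[s:] + v[:s] for v in (w, rc) for s in range(n))
    return len({canon(tuple((m >> i) & 1 for i in range(n)))
                for m in range(2 ** n)})

assert all(N_cycle_bruteforce(n) == N_cycle_formula(n) for n in range(3, 12))
```

For $n=5$ and $n=6$ the brute force reproduces the $4$ and $9$ isomorphism classes visible in \Cref{fig:cycle_nekl_5} and \Cref{fig:cycle_nekl_6}.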
In the proof of \Cref{fact:digrphnum:cycle} we show that the number of cycle graphs with oriented edges, counted up to the symmetry of the dihedral group, coincides with the number of such digraphs counted up to isomorphism of unlabeled digraphs.
\begin{remark}
The formula~\eqref{fact:digrphnum:cycle:eq} is described
in~\cite[OEIS sequence A053656]{oeis_A053656} and, among others, counts
the number of minimal fibrations of a bidirectional $n$-cycle
over the $2$-bouquet (up to precompositions with automorphisms of
the $n$-cycle), see~\cite{boldiFibrationsGraphs2002}.
\end{remark}
\begin{corollary}\label{cor:cycle_pos:dag_dyna:num}
Let $n\geq 3$ be an integer. Then, up to isomorphism, there exist exactly:
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item\label{cor:cycle_pos:dag_dyna:num:cycle} $N(C_n)-1$
directed acyclic graphs $D$ whose underlying graph $\ov D$ is a cycle graph $C_n$,
\item\label{cor:cycle_pos:dag_dyna:num:poset} $N(n, \wt \AA)=N(C_n)-\lceil\frac{n+1}{2}\rceil$
principal posets $I$ of Dynkin type $\Dyn_I=\AA_n$ \textnormal{(}equivalently, connected posets weakly $\ZZ$-congruent with the Euclidean diagram $\wt \AA_{n-1}=C_n$\textnormal{)},
\end{enumerate}
where $N(C_n)$ is given by the formula \eqref{fact:digrphnum:cycle:eq}.
\end{corollary}
\begin{proof}
Since, up to isomorphism, there exists exactly one cyclic orientation of the $C_n$
graph, \ref{cor:cycle_pos:dag_dyna:num:cycle} follows directly from \Cref{fact:digrphnum:cycle}.
To prove~\ref{cor:cycle_pos:dag_dyna:num:poset}, we note that by \Cref{thm:a:main}\ref{thm:a:main:princ} it is sufficient to count all orientations of the cycle that have at least two sinks. Since, among all possible orientations of a cycle, there are:
\begin{itemize}
\item $\lfloor\frac{n}{2}\rfloor$ cycles with exactly one sink,
\item $1$ cyclically oriented cycle (which has no sinks)
\end{itemize}
and $\lfloor\frac{n}{2}\rfloor+1=\lceil\frac{n+1}{2}\rceil$,
the statement~\ref{cor:cycle_pos:dag_dyna:num:poset} follows from \Cref{fact:digrphnum:cycle}.
\end{proof}
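Both counts of the corollary can be verified by brute force (our sketch; the word encoding and the isomorphism action are as in the proof of \Cref{fact:digrphnum:cycle}, and a vertex $i$ is a sink exactly when both incident arcs enter it).

```python
def cycle_classes(n):
    # orientations of C_n up to digraph isomorphism; bit i = 1 iff
    # the edge {i, i+1 mod n} is oriented clockwise
    def canon(w):
        rc = tuple(1 - b for b in reversed(w))
        return min(v[s:] + v[:s] for v in (w, rc) for s in range(n))
    return {canon(tuple((m >> i) & 1 for i in range(n)))
            for m in range(2 ** n)}

def sinks(w):
    # vertex i is a sink iff w[i-1] = 1 (arc i-1 -> i) and w[i] = 0
    return sum(1 for i in range(len(w)) if w[i - 1] == 1 and w[i] == 0)

for n in range(3, 10):
    classes = cycle_classes(n)
    # exactly one class (the directed cycle) fails to be acyclic
    acyclic = [w for w in classes if 0 < sum(w) < len(w)]
    assert len(acyclic) == len(classes) - 1
    # classes with at least two sinks: N(C_n) - ceil((n+1)/2)
    assert sum(1 for w in acyclic if sinks(w) >= 2) \
        == len(classes) - (n + 2) // 2
```

The number of sinks is an isomorphism invariant, so evaluating it on the canonical representative is legitimate.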
Summing up, we can devise an exact formula that describes the total number of non-negative partially ordered sets of Dynkin type $\AA_n$.
{\sisetup{group-minimum-digits=4}
\begin{theorem}\label{thm:typeanum}
Let $Nneg(\AA_n)$ be the number of all non-negative posets $I$ of size $n=|I|$ and Dynkin type $\Dyn_I=\AA_{n-\crk_I}$. Then
\begin{equation}\label{thm:typeanum:eq}
Nneg(\AA_n)=
\begin{cases}
1 & \textnormal{ if }n\in\{1,2\},\\
2^{n - 2} + \frac{1}{2n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right) + 2^{\frac{n - 3}{2}}-\lceil\frac{n+1}{2}\rceil, & \textnormal{ if } n\geq 3 \textnormal{ is odd},\\[0.1cm]
2^{n - 2} + \frac{1}{2n} \sum_{d\mid n}\left(2^{\frac{n}{d}}\varphi(d)\right) + 2^{\frac{n}{2}-2}-\lceil\frac{n+1}{2}\rceil, & \textnormal{ if } n\geq 4 \textnormal{ is even},\\
\end{cases}
\end{equation}
where $\varphi$ is Euler's totient function. In particular, there are exactly $1$, $1$, $3$, $5$, $11$, $21$, $42$, $81$, $161$, $312$, $616$, $\num{1209}$, $\num{2389}$, $\num{4711}$, $\num{9344}$, $\num{18497}$, $\num{36743}$, $\num{72955}$, $\num{145116}$, $\num{288633}$, $\num{574729}$
such posets of size $1,\ldots,21$.
\end{theorem}
\begin{proof}
Apply \Cref{cor:posit:num:poset}
and \Cref{cor:cycle_pos:dag_dyna:num}\ref{cor:cycle_pos:dag_dyna:num:poset}.
\end{proof}
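The closed formula above is easy to evaluate mechanically; the following sketch (ours, not part of the paper) reproduces the $21$ initial values listed in the theorem.

```python
from math import gcd

def phi(n):
    # Euler's totient function
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def Nneg(n):
    # the formula of the theorem for Nneg(A_n)
    if n in (1, 2):
        return 1
    s = sum(2 ** (n // d) * phi(d) for d in range(1, n + 1) if n % d == 0)
    extra = 2 ** ((n - 3) // 2) if n % 2 == 1 else 2 ** (n // 2 - 2)
    return 2 ** (n - 2) + s // (2 * n) + extra - (n + 2) // 2

assert [Nneg(n) for n in range(1, 22)] == [
    1, 1, 3, 5, 11, 21, 42, 81, 161, 312, 616, 1209, 2389, 4711,
    9344, 18497, 36743, 72955, 145116, 288633, 574729]
```

Here $\lceil\frac{n+1}{2}\rceil$ is computed as the integer quotient $\lfloor\frac{n+2}{2}\rfloor$, and the divisor sum is an exact integer multiple of $2n$ in every case.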
Finally, we get the following asymptotic description of connected non-negative posets $I$ of Dynkin type $\AA_n$.
\begin{corollary}
Let
$N(n, \wt \AA)$ be the number of
principal posets $I$ of Dynkin type $\Dyn_I=\AA_{n-1}$ and
$Nneg(\AA_n)$ be the number of all non-negative posets $I$ of size $n=|I|$ and Dynkin type $\Dyn_I=\AA_{n-\crk_I}$. Then
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item $\lim_{n\to \infty} \frac{Nneg(\AA_{n+1})}{Nneg(\AA_n)}=2$ and $Nneg(\AA_n)\approx 2^{n-2}$,
\item the number of connected non-negative posets $I$ of Dynkin type $\AA_{n-\crk_I}$ grows exponentially,
\item $\lim_{n\to\infty}\frac{N(n, \wt \AA)}{Nneg(\AA_n)}=0$, hence almost all such posets are positive.
\end{enumerate}
\end{corollary}
\begin{proof}
Apply \Cref{cor:cycle_pos:dag_dyna:num}\ref{cor:cycle_pos:dag_dyna:num:poset} and \Cref{thm:typeanum}.
\end{proof}
\section{Future work}\label{sec:conclusions}
In the present work, we give a complete description of connected non-negative posets $I$ of Dynkin type $\Dyn_I=\AA_{|I|-\crk_I}$ and, in particular, we show that $\crk_I\leq1$. Computations suggest that there is an upper bound for the corank of posets of Dynkin type $\Dyn_I=\EE_n$ as well.
\begin{conjecture}
If $I$ is a non-negative connected poset of Dynkin type $\Dyn_I=\EE_{|I|-\crk_I}$, then $\crk_I\leq 3$.
\end{conjecture}
The conjecture yields $|I|\leq 11$ and, consequently, we get the following.
\begin{conjecture}
If $I$ is a non-negative connected poset such that $\crk_I>1$ and $|I|>11$, then $\Dyn_I=\DD_{|I|-\crk_I}$.
\end{conjecture}
In other words, it seems that Dynkin type checking of connected posets $I$ that have at least $n=|I|\geq 12$ points is rather straightforward (compare with \cite{makurackiQuadraticAlgorithmCompute2020}). That is, for any non-negative poset $I$ we have the following:
\begin{enumerate}[label=\normalfont{(\roman*)},wide]
\item if $\deg v=2$ for all but two $v\in \CH(I)$ and $\deg v=1$ otherwise, then $\Dyn_I=\AA_{n}$ and $I$ is positive;
\item if $\deg v=2$ for all $v\in \CH(I)$, then $\Dyn_I=\AA_{n-1}$ and $I$ is principal;
\item otherwise, $\Dyn_I=\DD_{n-\crk_I}$.
\end{enumerate}
Nevertheless, this description does not give any insights into the structure of $\DD_n$ type posets.
\begin{oproblem}
Give a structural description of Hasse digraphs of $\DD_n$ type non-negative connected posets.
\end{oproblem}
\section{Introduction}
We know that the Macdonald polynomials and their particular cases can be
represented as one- or two-parameter analogs of the usual symmetric
functions. A large number of papers continue to be published on the subject,
giving rise to various analogs of symmetric functions. Most of these
analogs are broad constructions using powerful algebraic tools such as
representation theory. More modestly, this article is the first of a series
whose object is a $q$-analog (or $q$-deformation), defined in a rather
elementary way, of certain symmetric functions, namely the functions defined
for each pair of integers $\left( n,r\right) $ such that $n\geq r\geq 1$
by
\begin{equation}
p_{n}^{(r)}=\sum\limits_{\left| \lambda \right| =n,\;l\left( \lambda \right)
=r}m_{\lambda } \tag{1.1}
\end{equation}
where the $m_{\lambda }$ are the monomial symmetric functions, the
sum being over the integer partitions $\lambda $ of $n$ with length $l(\lambda )=r$. The functions $p_{n}^{(r)}$ are introduced with this
notation in exercise 19, p.~33 of [10], whose notations we will follow fairly
faithfully. Despite its simplicity, this approach turns out to be quite
fruitful: the $q$-analogs thus defined, which we will denote $\left[ p_{n}^{(r)}\right] $, possess attractive properties. We present in this article the
definition and some properties of this $q$-analog. We also give applications
by specialization to the formal series
\begin{equation}
E_{xp}(t)=\sum\limits_{n=0}^{\infty }q^{\binom{n}{2}}\dfrac{t^{n}}{n!}
\tag{1.2}
\end{equation}
sometimes called the $q$-deformation of the exponential. We were thus able to
revisit well-known polynomials such as the inversion enumerator of rooted
forests or, reciprocally, the sum enumerator of parking functions (see
Yan's survey article [17] on these notions). We deduce new identities
satisfied by these polynomials, from which we extract some combinatorial
consequences.
The organization of the article is as follows. In section 2 we set the
notations and recall the necessary prerequisites for symmetric functions,
integer partitions and the $q$-calculus.
Sections 3 and 4 are devoted to the general study of the $\left[ p_{n}^{(r)}%
\right] $. In section 3 we give the definition of $\left[ p_{n}^{(r)}\right]
$\ and its first properties, with the particular case $r=1$. Section 4
contains theorem 4.1 and its corollary 4.2 which is a major result of the
article. It links the $q$-analog to its classical form through $q$-Stirling
numbers of the second kind. We will also give the matrix and inverse forms
of this corollary with the $q$-Stirling numbers of the first kind. Section 5
begins the generalization to the $p,q$-analog.
The remaining sections 6 to 10 are devoted to the applications of $\left[ p_{n}^{(r)}\right] $, obtained by specializing to the above series $E_{xp}(t)$. In section 6, we prove lemma 6.1, which is crucial for the rest of the
article. In section 7, we use lemma 6.1 and corollary 4.2 to explore
polynomials introduced in [11] and [16], by a different route from
these authors. More precisely, it is the class of polynomials that we will
denote $J_{n,r}$, defined for $n\geq r\geq 1$, which correspond, in the
notations of [17], to the polynomials $I_{n-r}^{(r,1)}$.
An interesting result we arrive at is a ``positive'' linear recurrence
relation between these polynomials, given by Corollary 7.3. This recurrence
makes it possible to determine all the polynomials from the initial
conditions $J_{r,r}=1$. We give as an example the first rows and columns of
the table of the $J_{n,r}$.
In section 8 we study the reciprocal polynomials of the $J_{n,r}$, which we will
denote by $\overline{J_{n,r}}$ and which are the sum enumerators of parking
functions. Corollary 7.3 translates into a linear recurrence between the $\overline{J_{n,r}}$, which we compare with another linear recurrence between the $\overline{J_{n,r}}$, coming from the application of Goncharov polynomials to
parking functions [9].
In section 9, as an application of Corollary 7.3, we obtain an explicit
formula for the $J_{n,r}$ (Theorem 9.1), which is another major result of our
work. This formula leads us to introduce, in section 10, new statistics on
forests whose enumerator polynomial is $J_{n,r}\left( q\right) $.
Complementary articles will follow, presenting other properties of this $q$-analog and other applications, in particular combinatorial ones.
\section{Preliminaries}
\QTP{Body Math}
Let us first recall the definitions relating to integer partitions and
symmetric functions, referring for more details to ch.~1 of [10] or to ch.~7
of [13]. If $\lambda =\left( \lambda _{1},\lambda _{2},...,\lambda
_{r}\right) $ is a partition of the integer $n$, we set $\left| \lambda
\right| =\lambda _{1}+\lambda _{2}+...+\lambda _{r}=n$, $l(\lambda )=r$, and
\QTP{Body Math}
\begin{equation*}
n(\lambda )=\sum\limits_{i\geq 1}(i-1)\lambda _{i}=\sum\limits_{i\geq 1}%
\binom{\lambda _{i}^{\prime }}{2}
\end{equation*}
where $\lambda ^{\prime }=\left( \lambda _{1}^{\prime },\lambda _{2}^{\prime },\ldots \right) $ is the conjugate of the partition $\lambda $.
\QTP{Body Math}
If $A$ is a ring, $\Lambda _{A}$ is the set of symmetric functions in the
variables $X=\left( x_{i}\right) _{i\geq 1}$ with coefficients in $A$. As $\lambda $ runs over the set of partitions, $\left( m_{\lambda }\right)
,\left( e_{\lambda }\right) ,\left( h_{\lambda }\right) ,\left( p_{\lambda
}\right) ,\left( s_{\lambda }\right) $ and $\left( f_{\lambda }\right) $
are the six classical bases of $\Lambda _{A}$. Agreeing that $e_{0}=h_{0}=1$, we recall that
\QTP{Body Math}
\begin{equation*}
E(t)=\sum\limits_{i\geq 0}e_{i}t^{i}=\prod\limits_{i\geq
1}(1+x_{i}t)\;;\;\;H(t)=\sum\limits_{i\geq 0}h_{i}t^{i}=\left( E(-t)\right)
^{-1}=\dfrac{1}{\prod\limits_{i\geq 1}\left( 1-x_{i}t\right) }
\end{equation*}
\QTP{Body Math}
For the $q$-analogs we will use, with a few exceptions, the notations of
[7], to which we refer for more details. The index $q$ can be omitted if
there is no ambiguity. For $(n,k)\in \mathbf{N}^{2}$, we have
\QTP{Body Math}
\begin{equation*}
\left[ n\right] _{q}=1+q+q^{2}+...+q^{n-1}\text{ for }n\neq 0\;\text{and }%
\left[ 0\right] _{q}=0
\end{equation*}
\QTP{Body Math}
\begin{equation*}
\left[ n\right] _{q}!=\left[ 1\right] \left[ 2\right] ...\left[ n\right]
\text{ for }n\neq 0\;\text{and }\left[ 0\right] _{q}!=1
\end{equation*}
\QTP{Body Math}
\begin{equation*}
\QATOPD[ ] {n}{k}_{q}=\dfrac{\left[ n\right] !}{\left[ k\right] !\left[ n-k%
\right] !}\text{ for }n\geq k\geq 0\text{ \ \ and }\QATOPD[ ] {n}{k}_{q}=0\;%
\text{otherwise.}
\end{equation*}
\QTP{Body Math}
The $q$-derivative of a formal series $F(t)$ is given by
\begin{equation*}
D_{q}F(t)=\dfrac{F(qt)-F(t)}{(q-1)t}\text{ \ \ and for }r\geq
1\;\;D_{q}^{r}F(t)=D_{q}\left( D_{q}^{r-1}F(t)\right) \text{ \ with \ }%
D_{q}^{0}F(t)=F(t)
\end{equation*}
\QTP{Body Math}
In particular, $D_{q}^{r}t^{n}=\left[ r\right] !\QATOPD[ ] {n}{r}t^{n-r}$ for $n\geq r$, and $D_{q}t^{0}=0$.
\QTP{Body Math}
$\bigskip $
\QTP{Body Math}
We will also use, as an index instead of $q$, a rational fraction of $q$. Its
meaning in this case is fixed by the following definition.
\begin{definition}
Let $f(q)$ be a rational fraction of $q$ such that $f(1)=1$. We set for all $%
n\in \mathbb{N}$%
\begin{equation*}
\left[ n\right] _{f(q)}=1+f(q)+\left( f(q)\right) ^{2}+...+\left(
f(q)\right) ^{n-1}\ \ if\ n\geq 1\ \ and\ \ \ \left[ 0\right] _{f(q)}=0
\end{equation*}
Then $\left[ n\right] _{f(q)}!$, $\QATOPD[ ] {n}{k}_{f(q)}$, etc., are defined as above. In the same way we set
\begin{equation*}
D_{f(q)}F(t)=\dfrac{F(f(q)t)-F(t)}{(f(q)-1)t}
\end{equation*}
\end{definition}
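The $q$-operators just defined are easy to probe numerically; the following sketch (ours, with exact rational arithmetic and an arbitrary test value $q=3$) checks the rule $D_{q}^{r}t^{n}=\left[ r\right]!\QATOPD[ ]{n}{r}t^{n-r}$ at a sample point.

```python
from fractions import Fraction

def qint(n, q):
    # [n]_q = 1 + q + ... + q^(n-1), with [0]_q = 0
    return sum(q ** i for i in range(n))

def qfact(n, q):
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= qint(k, q)
    return out

def qbinom(n, k, q):
    if not n >= k >= 0:
        return Fraction(0)
    return qfact(n, q) / (qfact(k, q) * qfact(n - k, q))

def Dq(F, q):
    # q-derivative: (F(qt) - F(t)) / ((q - 1) t)
    return lambda t: (F(q * t) - F(t)) / ((q - 1) * t)

# check D_q^r t^n = [r]_q! * qbinom(n, r) * t^(n - r) at a sample point
q, t0, n, r = Fraction(3), Fraction(2), 7, 3
G = lambda t: t ** n
for _ in range(r):
    G = Dq(G, q)
assert G(t0) == qfact(r, q) * qbinom(n, r, q) * t0 ** (n - r)
```

Since both sides are rational functions of $q$ and $t$, exact agreement at a generic rational point is a meaningful consistency check.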
\QTP{Body Math}
Finally, we recall some usual notations used in the article: $D^{r}F(t)$ is
the usual derivative of order $r$ with respect to $t$ of the formal series $F(t)$; $\delta _{i}^{j}=\delta _{i,j}$ is the Kronecker symbol, which is $1$ if
$i=j$ and $0$ otherwise; if $A$ is a finite set, $\left| A\right| $ denotes
its cardinality. $\mathbb{N},\mathbb{Z}$ denote respectively the sets of
natural numbers and integers, and $\mathbb{Q}$ that of rational numbers.
For $n\in \mathbb{N}$, $n>0$, we denote $\mathbf{n}=\left\{ 1,2,\ldots ,n\right\} $.
\section{Definition of $\left[ p_{n}^{(r)}\right] $}
Let us first recall some properties of $p_{n}^{(r)}$ defined by $\left(
1.1\right) $, which appear in exercise 19, p.~33 of [10] or can easily be deduced
from it. Agreeing to set $p_{n}^{(0)}=\delta _{n}^{0}$, we have for all $r\in
\mathbf{N}$
\begin{equation}
\sum\limits_{n\geq r}p_{n}^{(r)}\left( -t\right) ^{n-r}=\dfrac{1}{r!}\dfrac{%
D^{r}E(t)}{E(t)} \tag{3.1}
\end{equation}
As we have
\begin{equation}
\dfrac{D^{r}E(t)}{r!}=\sum\limits_{n\geq r}\binom{n}{r}e_{n}t^{n-r}
\tag{3.2}
\end{equation}
we deduce
\begin{equation*}
\sum\limits_{n\geq 0}e_{n}t^{n}\sum\limits_{n\geq
r}p_{n}^{(r)}(-t)^{n-r}=\sum\limits_{n\geq r}\binom{n}{r}e_{n}t^{n-r}
\end{equation*}
By performing the Cauchy product and equating coefficients, we get, for any
pair of integers $\left( n,r\right) $ such that $n\geq r\geq 0$ and agreeing
to set $e_{i}=0$ if $i<0$, the system of $n+1-r$ equations
\begin{equation*}
\text{For }0\leq k\leq n-r\;\;\;\sum\limits_{i=0}^{n-r}e_{k-i}\left(
-1\right) ^{i}p_{r+i}^{(r)}=\binom{r+k}{r}e_{r+k}
\end{equation*}
whose solution is
\begin{equation}
p_{n}^{(r)}=\left|
\begin{array}{cccccccc}
\binom{r}{r}e_{r} & 1 & 0 & 0 & . & . & . & 0 \\
\binom{r+1}{r}e_{r+1} & e_{1} & 1 & 0 & . & . & . & 0 \\
\binom{r+2}{r}e_{r+2} & e_{2} & e_{1} & 1 & 0 & . & . & 0 \\
. & . & & . & . & . & . & . \\
. & . & & & . & . & . & . \\
. & . & & & & . & . & 0 \\
. & . & & & & & . & 1 \\
\binom{n}{r}e_{n} & e_{n-r} & e_{n-r-1} & . & . & . & e_{2} & e_{1_{{}}}
\end{array}
\right| \tag{3.3}
\end{equation}
\bigskip To define $\left[ p_{n}^{(r)}\right] _{q}$, we will use the
following $q$-analog of equation $\left( 3.1\right) $.
\begin{definition}
For any pair of integers $\left( n,r\right) $\ such that $n\geq r\geq 0$, we
set:
\begin{equation}
\sum\limits_{n\geq r}\left[ p_{n}^{(r)}\right] _{q}\left( -t\right) ^{n-r}=%
\dfrac{1}{\left[ r\right] _{q}!}\dfrac{D_{q}^{r}E(t)}{E(t)} \tag{3.1q}
\end{equation}
which implies in particular $\left[ p_{n}^{(0)}\right] _{q}=\delta _{n}^{0}$.
\end{definition}
The subscript $q$ is implied in what follows if there is no ambiguity. It is
easy to check that the previous calculations carry over without difficulty,
with in particular
\begin{equation}
\dfrac{D_{q}^{r}E(t)}{\left[ r\right] !}=\sum\limits_{n\geq r}\QATOPD[ ] {n}{%
r}e_{n}t^{n-r} \tag{3.2q}
\end{equation}
\bigskip and, for $n\geq r\geq 0$, the system of $n+1-r$ equations
\begin{equation*}
\text{For }0\leq k\leq n-r\;\;\;\sum\limits_{i=0}^{n-r}e_{k-i}\left(
-1\right) ^{i}\left[ p_{r+i}^{(r)}\right] =\QATOPD[ ] {r+k}{r}e_{r+k}
\end{equation*}
whose solution is the $q$-analog of $\left( 3.3\right) $, given by
\begin{proposition}
For $n\geq r\geq 1$ we have
\begin{equation}
\left[ p_{n}^{(r)}\right] =\left|
\begin{array}{cccccccc}
\QATOPD[ ] {r}{r}e_{r} & 1 & 0 & 0 & . & . & . & 0 \\
\QATOPD[ ] {r+1}{r}e_{r+1} & e_{1} & 1 & 0 & . & . & . & 0 \\
\QATOPD[ ] {r+2}{r}e_{r+2} & e_{2} & e_{1} & 1 & 0 & . & . & 0 \\
. & . & & . & . & . & . & . \\
. & . & & & . & . & . & . \\
. & . & & & & . & . & 0 \\
. & . & & & & & . & 1 \\
\QATOPD[ ] {n}{r}e_{n} & e_{n-r} & e_{n-r-1} & . & . & . & e_{2} & e_{1_{{}}}
\end{array}
\right| \tag{3.3q}
\end{equation}
\end{proposition}
This determinant can be taken as an alternative definition of $\left[
p_{n}^{(r)}\right] $.
\QTP{Body Math}
\textbf{Particular case} $r=1$.
Writing, as said in the introduction, $\left[ p_{n}^{(1)}\right] =\left[ p_{n}\right] $, we then get
\begin{equation*}
\sum\limits_{n\geq 1}\left[ p_{n}\right] _{q}\left( -t\right) ^{n-1}=\dfrac{%
D_{q}E(t)}{E(t)}
\end{equation*}
and, for all $n\geq 1$ and $0\leq k\leq n-1$, the system of equations
$\sum\nolimits_{i=0}^{n-1}e_{k-i}\left( -1\right) ^{i}\left[ p_{i+1}\right] =\left[ k+1\right] e_{k+1}$,
whose solution is
\begin{equation}
\left[ p_{n}\right] =\left|
\begin{array}{ccccccc}
\left[ 1\right] e_{1} & 1 & 0 & 0 & . & . & 0 \\
\left[ 2\right] e_{2} & e_{1} & 1 & 0 & . & . & 0 \\
. & . & . & . & . & . & . \\
. & . & . & . & . & . & . \\
. & . & . & & . & . & 0 \\
. & . & . & & & . & 1 \\
\left[ n\right] e_{n} & e_{n-1} & e_{n-2} & . & . & e_{2} & e_{1}
\end{array}
\right| \tag{3.4}
\end{equation}
The system above also allows $e_{n}$ to be expressed as a determinant
\begin{equation}
\left[ n\right] !e_{n}=\left|
\begin{array}{ccccccc}
\left[ p_{1}\right] & \left[ 1\right] & 0 & 0 & . & . & 0 \\
\left[ p_{2}\right] & \left[ p_{1}\right] & \left[ 2\right] & 0 & . & . & 0
\\
. & . & . & . & . & . & . \\
. & . & . & . & . & . & . \\
. & . & & . & . & . & 0 \\
. & . & & & & . & \left[ n-1\right] \\
\left[ p_{n}\right] & \left[ p_{n-1}\right] & \left[ p_{n-2}\right] & . & .
& \left[ p_{2}\right] & \left[ p_{1}\right]
\end{array}
\right| \tag{3.5}
\end{equation}
We notice that equations $\left( 3.4\right) $ and $\left( 3.5\right) $ are
q-analogs of the classical formulas (see the first two equations of Example 8,
p.28 of [10]). We will continue in later articles the study of $\left[
p_{n}\right] $ and, more generally, of
\begin{equation}
\left[ p_{\lambda }\right] =\left[ p_{\lambda _{1}}\right] \left[ p_{\lambda
_{2}}\right] ...\left[ p_{\lambda _{r}}\right] \tag{3.6}
\end{equation}
defined for any integer partition $\lambda =\left( \lambda _{1},\lambda
_{2},...,\lambda _{r}\right) $.
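As a concrete sanity check, the determinant formulas $\left( 3.4\right) $ and $\left( 3.5\right) $ can be verified against each other numerically. The following Python sketch is our own illustration, not part of the text: all helper names (\texttt{padd}, \texttt{pmul}, \texttt{p\_bracket}, ...) are ours. It represents polynomials in $q$ as coefficient lists, assigns arbitrary rational values to $e_1,e_2,e_3$, computes $[p_n]$ from the determinant $\left( 3.4\right) $, and checks that the determinant $\left( 3.5\right) $ then recovers $[n]!\,e_n$:

```python
from fractions import Fraction

# Polynomials in q are lists of Fraction coefficients (index = power of q).
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def qint(k):  # [k]_q = 1 + q + ... + q^{k-1}
    return [Fraction(1)] * k if k > 0 else [Fraction(0)]

def det(M):  # cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    total = [Fraction(0)]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = pmul(M[0][j], det(minor))
        if j % 2:
            term = [-c for c in term]
        total = padd(total, term)
    return total

# arbitrary constants for e_1, e_2, e_3 (e_0 = 1); the identities hold for any values
e = [[Fraction(1)], [Fraction(1)], [Fraction(1, 2)], [Fraction(1, 6)]]

def p_bracket(n):
    # [p_n] via the determinant formula (3.4)
    M = [[pmul(qint(i + 1), e[i + 1]) if j == 0
          else (e[i - j + 1] if i - j + 1 >= 0 else [Fraction(0)])
          for j in range(n)] for i in range(n)]
    return det(M)

# check (3.5): the determinant in [p_1],...,[p_n] recovers [n]! e_n, here for n = 3
n = 3
P = [None] + [p_bracket(k) for k in range(1, n + 1)]
N = [[None] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if j == 0:
            N[i][j] = P[i + 1]
        elif j <= i:
            N[i][j] = P[i - j + 1]
        elif j == i + 1:
            N[i][j] = qint(i + 1)
        else:
            N[i][j] = [Fraction(0)]
fact_n = pmul(pmul(qint(1), qint(2)), qint(3))  # [3]!
assert trim(det(N)) == trim(pmul(fact_n, e[n]))
print("(3.5) verified for n = 3")
```

The same check passes for any other choice of the constants $e_k$, as expected since $\left( 3.4\right) $ and $\left( 3.5\right) $ are Cramer-type solutions of the same triangular system.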
\bigskip
\section{\protect\bigskip Relation between $\left[ p_{n}^{(r)}\right] $ and $%
p_{n}^{(r)}$}
\begin{theorem}
\bigskip We have for all $n\geq r\geq 1$%
\begin{equation}
\left[ p_{n}^{(r)}\right] _{q}=\sum\limits_{j=r}^{n}p_{n}^{(j)}\sum%
\limits_{l=r}^{j}\left( -1\right) ^{l-r}\binom{j}{l}\QATOPD[ ] {l}{r}_{q}
\tag{4.1}
\end{equation}
\end{theorem}
\begin{proof}
We start from $\left( 3.1q\right) $, which with $E(t)H(-t)=1$ and $%
m=n-r$ becomes
\begin{equation*}
\sum\limits_{m\geq 0}\left[ p_{m+r}^{(r)}\right] \left( -1\right)
^{m}t^{m}=\left( \dfrac{1}{\left[ r\right] !}D_{q}^{r}E(t)\right) H(-t)
\end{equation*}
We replace the parenthesized factor on the right-hand side using $\left( 3.2q\right) $,
and since $H(-t)=\sum\nolimits_{l\geq 0}h_{l}\left( -1\right) ^{l}t^{l}$, we
obtain by equating coefficients
\begin{equation}
\left[ p_{m+r}^{(r)}\right] =\QATOPD[ ] {r}{r}e_{r}h_{m}-\QATOPD[ ] {r+1}{r}%
e_{r+1}h_{m-1}+...+\left( -1\right) ^{m-1}\QATOPD[ ] {r+m-1}{r}%
e_{r+m-1}h_{1}+\left( -1\right) ^{m}\QATOPD[ ] {r+m}{r}e_{r+m} \tag{4.2}
\end{equation}
We clearly have $h_{m}=\sum\nolimits_{l=1}^{m}p_{m}^{(l)}$ therefore
\begin{equation*}
e_{r}h_{m}=e_{r}\sum\limits_{l=1}^{m}p_{m}^{(l)}=\sum\limits_{j=r}^{m+r}%
\binom{j}{r}p_{m+r}^{(j)}
\end{equation*}
Let us explain the last equality above. The product of a monomial of $e_{r}$
and a monomial of $h_{m}$ gives a monomial of degree $m+r$ whose number $j$
of distinct variables is between $r$ and $r+m$ (for infinitely many
variables $x_{i}$, or at least if the number of variables is greater than or
equal to $m+r$). This is precisely the form of a monomial of the right-hand
side. Moreover, on the left-hand side, the number of identical monomials of
degree $m+r$ with $j$ distinct variables corresponds to the number of
monomials of $e_{r}$ whose $r$ distinct variables are taken among these $j$
letters, that is to say $\binom{j}{r}$.
We therefore have, by replacing all the products $e_{r+k}h_{m-k}$ in
equation $\left( 4.2\right) $,
\begin{equation*}
\left[ p_{m+r}^{(r)}\right] =\sum\limits_{k=0}^{m}\left( -1\right) ^{k}%
\QATOPD[ ] {r+k}{r}\sum\limits_{j=r+k}^{m+r}\binom{j}{r+k}p_{m+r}^{(j)}
\end{equation*}
By reversing the sums we get
\begin{equation*}
\left[ p_{m+r}^{(r)}\right] =\sum\limits_{j=r}^{m+r}p_{m+r}^{\left( j\right)
}\sum\limits_{k=0}^{j-r}\left( -1\right) ^{k}\binom{j}{r+k}\QATOPD[ ] {r+k}{r%
}
\end{equation*}
With the change of indices $l=r+k$ and $n=m+r$, we find $\left( 4.1\right) $.
\end{proof}
To state Corollary 4.2, which is one of the main results of the article, we
must recall some facts about the q-analogs of the Stirling numbers. The
``classical'' Stirling numbers are well known (see for example Chapter 5 of
[3]). For their q-analogs, which are still the subject of research, we refer
to [4], and to [12] for an updated bibliography. In [1] and [2]
Carlitz defined the q-Stirling numbers of the second kind, which we will
denote $S_{q}\left[ n,k\right] $, by the following identity (with our
notations)
\begin{equation}
\QATOPD[ ] {n}{k}_{q}=\sum\limits_{j=k}^{n}\binom{n}{j}\left( q-1\right)
^{j-k}S_{q}\left[ j,k\right] \tag{4.3}
\end{equation}
(we also sometimes omit the index q if there is no ambiguity). By
inversion of $\left( 4.3\right) $ he obtained
\begin{equation}
\left( 1-q\right) ^{n-k}S\left[ n,k\right] =\sum\limits_{l=k}^{n}\left(
-1\right) ^{l-k}\binom{n}{l}\QATOPD[ ] {l}{k} \tag{4.4}
\end{equation}
as well as the recurrence relation
\begin{equation}
S\left[ n,k\right] =S\left[ n-1,k-1\right] +\left[ k\right] S\left[ n-1,k%
\right] \tag{4.5}
\end{equation}
which makes it possible to find all the values of $S\left[ n,k\right] $ for $%
\left( n,k\right) \in \mathbf{N}^{2}$ by setting $S\left[ n,0\right] =\delta
_{n}^{0}\;\;\;$and$\;\;\;S\left[ n,k\right] =0$ if $k>n$. We have $S\left[
n,n\right] =S\left[ n,1\right] =1$ and\ \ $S_{q=1}\left[ n,k\right] =S(n,k)$
where $S(n,k)$ is the classical Stirling number of the second kind. Formula $%
\left( 4.5\right) $ is the q-analog of the recurrence formula for $S(n,k)$.
Note that $\left( 4.3\right) $ and $\left( 4.4\right) $ have no classical
counterpart.
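The recurrence $\left( 4.5\right) $ and Carlitz's identity $\left( 4.3\right) $ are easy to check by machine. The following Python sketch is our own illustration (all helper names are ours): it computes the $S_{q}\left[ n,k\right] $ from $\left( 4.5\right) $, the Gaussian binomials from the q-Pascal recurrence, and verifies $\left( 4.3\right) $ for $n=4$, $k=2$:

```python
from math import comb

# Polynomials in q as integer coefficient lists (index = power of q).
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def ppow(a, m):
    out = [1]
    for _ in range(m):
        out = pmul(out, a)
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def qint(k):  # [k]_q
    return [1] * k if k > 0 else [0]

def S(n, k):
    # q-Stirling numbers of the second kind via Carlitz's recurrence (4.5),
    # with S[n,0] = delta_{n,0} and S[n,k] = 0 for k > n
    if k == 0:
        return [1] if n == 0 else [0]
    if k > n:
        return [0]
    return padd(S(n - 1, k - 1), pmul(qint(k), S(n - 1, k)))

def qbinom(n, k):
    # Gaussian binomial via the q-Pascal recurrence
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return padd(qbinom(n - 1, k - 1), pmul([0] * k + [1], qbinom(n - 1, k)))

# Carlitz's identity (4.3): qbinom(n,k) = sum_j binom(n,j) (q-1)^{j-k} S[j,k]
n, k = 4, 2
rhs = [0]
for j in range(k, n + 1):
    term = pmul(ppow([-1, 1], j - k), S(j, k))       # (q-1)^{j-k} S[j,k]
    rhs = padd(rhs, [comb(n, j) * c for c in term])  # times binom(n,j)
assert trim(qbinom(n, k)) == trim(rhs)
print("S[4,2] =", trim(S(4, 2)))  # [3, 3, 1], i.e. 3 + 3q + q^2
```

At $q=1$ the coefficients of $S_{q}\left[ n,k\right] $ sum to the classical $S(n,k)$, as stated above.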
\begin{corollary}
\bigskip We have for all $n\geq r\geq 1$%
\begin{equation}
\left[ p_{n}^{(r)}\right] _{q}=\sum\limits_{j=r}^{n}\left( 1-q\right)
^{j-r}S_{q}\left[ j,r\right] \,p_{n}^{(j)} \tag{4.6}
\end{equation}
where the $S_{q}\left[ n,r\right] $ are the q-Stirling numbers of the second
kind of Carlitz.
\end{corollary}
\begin{proof}
The proof follows immediately from Theorem 4.1 together with $\left(
4.4\right) $.
\end{proof}
\textbf{Particular case} $r=1$. We get for all $n\geq 1$%
\begin{equation}
\left[ p_{n}\right] =\sum\limits_{j=1}^{n}\left( 1-q\right) ^{j-1}p_{n}^{(j)}
\tag{4.7}
\end{equation}
\bigskip \textbf{Matrix form and inverse}:
Since $S\left[ n,k\right] =0$ for $k>n$, equation $\left( 4.6\right) $
can also be written for $n\geq r\geq 1$
\begin{equation}
\left[ p_{n}^{(r)}\right] =\sum\limits_{j=1}^{n}\left( 1-q\right) ^{j-r}S%
\left[ j,r\right] \,p_{n}^{(j)} \tag{4.8}
\end{equation}
\bigskip Let the row matrices be defined by $\left[ P_{n}\right] =\left( %
\left[ p_{n}^{(r)}\right] \right) _{r=1}^{n}$ and $P_{n}=\left(
p_{n}^{\left( r\right) }\right) _{r=1}^{n}$. The system $\left( 4.8\right) $
is then written in matrix form $\left[ P_{n}\right] =P_{n}A_{n}$, where $%
A_{n}$ is the triangular matrix $\left( A_{i,j}\right) _{i,j=1}^{n}$ with $%
A_{i,j}=\left( 1-q\right) ^{i-j}S\left[ i,j\right] $.
It is easy to see that we have: \
\begin{equation}
A_{n}=U_{n}\left[ S_{n}\right] U_{n}^{-1} \tag{4.9}
\end{equation}
where $\left[ S_{n}\right] $ is the triangular matrix of q-Stirling numbers
of the second kind, $\left[ S_{n}\right] =\left( S_{q}\left[ i,j\right]
\right) _{i,j=1}^{n}$, and $U_{n}$ is the diagonal matrix:
\begin{equation*}
U_{n}=\left(
\begin{array}{cccc}
\left( 1-q\right) ^{0} & & & \\
& \left( 1-q\right) ^{1} & & \\
& & ... & \\
& & & \left( 1-q\right) ^{n-1}
\end{array}
\right)
\end{equation*}
The Stirling numbers of the first kind, denoted $s(n,k)$, have as q-analogs
the q-Stirling numbers of the first kind introduced in [6], which we will
denote by $s_{q}\left[ n,k\right] $. Let the matrix
\begin{equation*}
\;\;\;\;\ \left[ s_{n}\right] =\left( s_{q}\left[ i,j\right] \right)
_{i,j=1}^{n}
\end{equation*}
then we know (see for example [4] p.96) that $\left[ S_{n}\right] $ and $%
\left[ s_{n}\right] $ are inverses of each other. We therefore deduce from $%
\left( 4.9\right) $ that $A_{n}$ is invertible, that $A_{n}^{-1}=U_{n}%
\left[ s_{n}\right] U_{n}^{-1}$, and that $P_{n}=\left[ P_{n}\right]
A_{n}^{-1}$ with $A_{n}^{-1}=\left( B_{i,j}\right) _{i,j=1}^{n}$ and $%
B_{i,j}=\left( 1-q\right) ^{i-j}s\left[ i,j\right] $. Hence the second
corollary, equivalent to the previous one.
\begin{corollary}
We have for all $n\geq r\geq 1$%
\begin{equation*}
p_{n}^{(r)}=\sum\limits_{j=r}^{n}\left( 1-q\right) ^{j-r}s_{q}\left[ j,r%
\right] \left[ p_{n}^{(j)}\right] _{q}
\end{equation*}
where the $s_{q}\left[ n,r\right] $ are the q-Stirling numbers of the first
kind.
\end{corollary}
\bigskip
\textbf{Remark: }it is easy to see that $\left( 4.8\right) $ generalizes,
for $n\geq r\geq 0$, to $\left[ p_{n}^{(r)}\right] =\sum\nolimits_{j=0}^{n}%
\left( 1-q\right) ^{j-r}S\left[ j,r\right] \,p_{n}^{(j)}$. One could easily
deduce the generalizations of the matrix and inverse forms of this system,
with the augmented q-Stirling matrices $\widehat{\left[ S_{n}\right] }%
=\left( S\left[ i,j\right] \right) _{i,j=0}^{n}$ and $\widehat{\left[ s_{n}%
\right] }=\left( s\left[ i,j\right] \right) _{i,j=0}^{n}$.
\section{Extension to the $p,q$-analog}
There is a two-parameter calculus, the $p,q$-analog, whose origin dates
back at least to 1991 and which reduces to the $q$-analog when $p=1$. We
briefly recall the definitions that generalize those of the $q$-analog
given in Section 2. For more details one can consult [4] and [5], which
have recent bibliographies.
\begin{equation*}
\left[ n\right] _{p,q}=\dfrac{p^{n}-q^{n}}{p-q}%
=p^{n-1}+p^{n-2}q+...+pq^{n-2}+q^{n-1}
\end{equation*}
\begin{equation*}
\left[ n\right] _{p,q}!=\left[ 1\right] _{p,q}\left[ 2\right] _{p,q}...\left[
n\right] _{p,q}\;\;\;\;\QATOPD[ ] {n}{k}_{p,q}=\dfrac{\left[ n\right] _{p,q}!%
}{\left[ k\right] _{p,q}!\left[ n-k\right] _{p,q}!}
\end{equation*}
\begin{equation*}
D_{p,q}F(t)=\dfrac{F(pt)-F(qt)}{\left( p-q\right) t}\text{ \ then }%
D_{p,q}^{r}F(t)=D_{p,q}\left( D_{p,q}^{r-1}F(t)\right)
\end{equation*}
Specifically:
\begin{equation*}
D_{p,q}^{r}t^{n}=\left[ r\right] _{p,q}!\QATOPD[ ] {n}{r}_{p,q}t^{n-r}
\end{equation*}
\bigskip The definition of the p,q-analog of the $p_{n}^{(r)}$ follows that
of the $q$-analog.
\begin{definition}
We set for $n\geq r\geq 0$%
\begin{equation*}
\sum\limits_{n\geq r}\left[ p_{n}^{(r)}\right] _{p,q}\left( -t\right) ^{n-r}=%
\dfrac{1}{\left[ r\right] _{p,q}!}\dfrac{D_{p,q}^{r}E(t)}{E(t)}
\end{equation*}
which implies in particular $\left[ p_{n}^{(0)}\right] _{p,q}=\delta _{n}^{0}$.
\end{definition}
We verify that the calculations of Section 3 carry over to this $p$,$q$%
-analog, with in particular
\begin{equation*}
\dfrac{D_{p,q}^{r}E(t)}{\left[ r\right] _{p,q}!}=\sum\limits_{n\geq r}%
\QATOPD[ ] {n}{r}_{p,q}e_{n}t^{n-r}
\end{equation*}
\bigskip and
\begin{equation*}
\left[ p_{n}^{(r)}\right] _{p,q}=\left|
\begin{array}{cccccccc}
\QATOPD[ ] {r}{r}_{p,q}e_{r} & 1 & 0 & 0 & . & . & . & 0 \\
\QATOPD[ ] {r+1}{r}_{p,q}e_{r+1} & e_{1} & 1 & 0 & . & . & . & 0 \\
\QATOPD[ ] {r+2}{r}_{p,q}e_{r+2} & e_{2} & e_{1} & 1 & 0 & . & . & 0 \\
. & . & & . & . & . & . & . \\
. & . & & & . & . & . & . \\
. & . & & & & . & . & 0 \\
. & . & & & & & . & 1 \\
\QATOPD[ ] {n}{r}_{p,q}e_{n} & e_{n-r} & e_{n-r-1} & . & . & . & e_{2} &
e_{1_{{}}}
\end{array}
\right|
\end{equation*}
\textbf{Particular case }$r=1$:\ we also write $\left[ p_{n}^{(1)}\right]
_{p,q}=\left[ p_{n}\right] _{p,q}$,
and we get the following $p,q-$analogs of the formulas $\left( 3.4\right) $
and $\left( 3.5\right) $ for $\left[ p_{n}\right] _{p,q}$ and $\left[ n%
\right] _{p,q}!\;e_{n}$, where inside the determinants the square brackets
denote $p,q$-analogs.
\begin{equation*}
\left[ p_{n}\right] _{p,q}=\left|
\begin{array}{ccccccc}
\left[ 1\right] e_{1} & 1 & 0 & 0 & . & . & 0 \\
\left[ 2\right] e_{2} & e_{1} & 1 & 0 & . & . & 0 \\
. & . & . & . & . & . & . \\
. & . & . & . & . & . & . \\
. & . & . & & . & . & 0 \\
. & . & . & & & . & 1 \\
\left[ n\right] e_{n} & e_{n-1} & e_{n-2} & . & . & e_{2} & e_{1}
\end{array}
\right| \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ }\left[ n\right] _{p,q}!\;e_{n}=%
\left|
\begin{array}{ccccccc}
\left[ p_{1}\right] & \left[ 1\right] & 0 & 0 & . & . & 0 \\
\left[ p_{2}\right] & \left[ p_{1}\right] & \left[ 2\right] & 0 & . & . & 0
\\
. & . & . & . & . & . & . \\
. & . & . & . & . & . & . \\
. & . & & . & . & . & 0 \\
. & . & & & & . & \left[ n-1\right] \\
\left[ p_{n}\right] & \left[ p_{n-1}\right] & \left[ p_{n-2}\right] & . & .
& \left[ p_{2}\right] & \left[ p_{1}\right]
\end{array}
\right|
\end{equation*}
We leave it to the reader to verify that the proof of Theorem 4.1 extends
without difficulty to the $p,q$-analog and gives:
\begin{proposition}
\bigskip We have for all $n\geq r\geq 1$%
\begin{equation*}
\left[ p_{n}^{(r)}\right] _{p,q}=\sum\limits_{j=r}^{n}p_{n}^{(j)}\sum%
\limits_{l=r}^{j}\left( -1\right) ^{l-r}\binom{j}{l}\QATOPD[ ] {l}{r}_{p,q}
\end{equation*}
\end{proposition}
\bigskip
\textbf{Open problem}: things get more complicated if we want to extend
Corollary 4.2. There are indeed $p,q$-Stirling numbers of the first and
second kind (see [4], [15]). But as stated on p.104 of [4], there is no $p,q$%
-analog of the formula $\left( 4.4\right) $ which we used to go from Theorem
4.1 to Corollary 4.2. This apparently prevents us from obtaining a $p,q$%
-analog of this corollary. We have not had time to look further into the
question, and in particular to see the state of progress of the various
problems raised on p.104 of [4]. We therefore leave this problem open for
the moment.
\bigskip
\section{Relation between $p_{n}^{(r)}$ and its $q$-analog for the $q$%
-deformed exponential}
We now specialize to the case $e_{n}=q^{\binom{n}{2}}/n!$ which corresponds
to the $q$-deformed exponential defined by $\left( 1.2\right) $. In this
case we have the following result.
\begin{lemma}
For the $q$-deformed exponential defined by $\left( 1.2\right) $, we have
for $n\geq r+1\geq 2$%
\begin{equation}
p_{n}^{(r)}=\dfrac{\left( 1-q^{r}\right) }{r!}q^{\binom{r}{2}}\left[ p_{n-r}%
\right] _{q^{r}} \tag{6.1}
\end{equation}
where for all $n\geq 1$, $\left[ p_{n}\right] _{q^{r}}$ is deduced from $%
\left( 3.4\right) $ with, in the determinant, $e_{n}=q^{\binom{n}{2}}/n!$
and $\left[ n\right] =\left[ n\right] _{q^{r}}$ given by Definition 2.1.
\end{lemma}
\begin{proof}
Let us start from $\left( 3.3\right) $ with $e_{n}=q^{\binom{n}{2}}/n!$. We
factor out $q^{\binom{r}{2}}/r!$ from the first column, then we subtract the
first column from the second, which gives
\begin{equation*}
p_{n}^{(r)}=\dfrac{q^{\binom{r}{2}}}{r!}\left|
\begin{array}{cccccccc}
1 & 0 & 0 & 0 & . & . & . & 0 \\
\dfrac{q^{\binom{r+1}{2}-\binom{r}{2}}}{1!} & \dfrac{1-q^{r}}{1!} & 1 & 0 & .
& . & . & 0 \\
\dfrac{q^{\binom{r+2}{2}-\binom{r}{2}}}{2!} & q^{\binom{2}{2}}\dfrac{1-q^{2r}%
}{2!} & 1 & 1 & 0 & . & . & 0 \\
. & . & . & . & . & . & . & . \\
. & . & . & . & . & . & . & . \\
\dfrac{q^{\binom{r+j}{2}-\binom{r}{2}}}{j!} & q^{\binom{j}{2}}\dfrac{1-q^{jr}%
}{j!} & . & . & . & . & . & 0 \\
. & . & . & . & . & . & . & 1 \\
\dfrac{q^{\binom{n}{2}-\binom{r}{2}}}{\left( n-r\right) !} & q^{\binom{n-r}{2}}%
\dfrac{1-q^{\left( n-r\right) r}}{\left( n-r\right) !} & \dfrac{q^{\binom{%
n-r-1}{2}}}{\left( n-r-1\right) !} & . & . & . & \dfrac{q^{\binom{2}{2}}}{2!}
& 1
\end{array}
\right|
\end{equation*}
For $1\leq j\leq n-r$ , we have\ $1-q^{jr}=\left( 1-q^{r}\right) \left[ j%
\right] _{q^{r}}$. We therefore factor the second column by $\left(
1-q^{r}\right) $, then we expand the determinant with respect to the first
row, hence:
\begin{equation*}
p_{n}^{(r)}=\dfrac{q^{\binom{r}{2}}}{r!}\left( 1-q^{r}\right) \left|
\begin{array}{ccccccc}
1 & 1 & 0 & . & . & . & 0 \\
\left[ 2\right] _{q^{r}}\dfrac{q^{\binom{2}{2}}}{2!} & 1 & 1 & 0 & . & . & 0
\\
. & \dfrac{q^{\binom{2}{2}}}{2!} & . & . & . & . & . \\
. & & . & . & . & . & . \\
\left[ j\right] _{q^{r}}\dfrac{q^{\binom{j}{2}}}{j!} & & & . & . & . & 0
\\
. & & & & \;.\; & . & 1 \\
\left[ n-r\right] _{q^{r}}\dfrac{q^{\binom{n-r}{2}}}{\left( n-r\right) !} & \dfrac{q^{%
\binom{n-r-1}{2}}}{\left( n-r-1\right) !} & . & . & . & \dfrac{q^{\binom{2}{2%
}}}{2!} & 1
\end{array}
\right|
\end{equation*}
By comparing this determinant with $\left( 3.4\right) $, using Definition
2.1 with $f(q)=q^{r}$, we see that we have indeed obtained the right-hand
side of $\left( 6.1\right) $.
\end{proof}
\bigskip
\section{The polynomials $J_{n,r}$ and their linear recurrence relation}
\bigskip
\begin{lemma}
\bigskip For the formal series $E_{xp}(t)=\sum\limits_{n=0}^{\infty }q^{%
\binom{n}{2}}t^{n}/n!,$ we have
\begin{equation}
D^{r}E_{xp}(t)=q^{\binom{r}{2}}E_{xp}(q^{r}t) \tag{7.1}
\end{equation}
\end{lemma}
\begin{proof}
\bigskip On the left we have formally:\qquad \qquad
\begin{equation*}
D^{r}E_{xp}(t)=\sum\limits_{n=0}^{\infty }\dfrac{q^{\binom{n}{2}}}{n!}%
D^{r}t^{n}=\sum\limits_{n=r}^{\infty }\dfrac{q^{\binom{n}{2}}}{n!}\dfrac{n!}{%
\left( n-r\right) !}t^{n-r}=\sum\limits_{m=0}^{\infty }q^{\binom{m+r}{2}}%
\dfrac{t^{m}}{m!}
\end{equation*}
where we made the change of index $m=n-r$.
On the right we have\qquad \qquad
\begin{equation*}
q^{\binom{r}{2}}E_{xp}(q^{r}t)=q^{\binom{r}{2}}\sum\limits_{m=0}^{\infty }q^{%
\binom{m}{2}}\dfrac{\left( q^{r}t\right) ^{m}}{m!}=\sum\limits_{m=0}^{\infty
}q^{\binom{r}{2}+\binom{m}{2}+mr}\dfrac{t^{m}}{m!}
\end{equation*}
and we check that $\binom{m+r}{2}=\binom{m}{2}+\binom{r}{2}+mr$
\end{proof}
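Since both sides of $\left( 7.1\right) $ have explicit coefficients, the lemma can also be checked order by order on truncated series. The short Python sketch below is our own illustration (the name \texttt{Exp\_q} is ours); as in the proof, the comparison reduces to the identity $\binom{m+r}{2}=\binom{m}{2}+\binom{r}{2}+mr$:

```python
from fractions import Fraction
from math import comb, factorial

N = 10  # truncation order in t

def Exp_q(s):
    # E_xp(q^s t) truncated: the coefficient of t^n is q^{C(n,2) + n s} / n!,
    # stored as (q-exponent, rational coefficient) pairs
    return [(comb(n, 2) + n * s, Fraction(1, factorial(n))) for n in range(N)]

for r in range(1, 5):
    # D^r t^n = n!/(n-r)! t^{n-r}, so the coefficient of t^m in D^r E_xp(t)
    # is q^{C(m+r,2)} (m+r)!/((m+r)! m!) = q^{C(m+r,2)} / m!
    lhs = [(comb(m + r, 2), Fraction(1, factorial(m))) for m in range(N)]
    # right-hand side of (7.1): q^{C(r,2)} E_xp(q^r t)
    rhs = [(comb(r, 2) + e, c) for (e, c) in Exp_q(r)]
    assert lhs == rhs
print("(7.1) verified up to order", N - 1)
```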
It is now possible to introduce the polynomials $J_{n,r}$\ very naturally.
\begin{theorem}
In the case of specialization to $E_{xp}(t)=\sum\nolimits_{n=0}^{\infty }q^{%
\binom{n}{2}}t^{n}/n!$ \ \ we have for $n\geq r\geq 1$
\begin{equation}
p_{n}^{(r)}=\left( 1-q\right) ^{n-r}\dfrac{q^{\binom{r}{2}}}{r!\left(
n-r\right) !}J_{n,r}(q) \tag{7.2}
\end{equation}
where $J_{n,r}$ is a monic polynomial with positive integer coefficients,
with constant term equal to $\left( n-r\right) !$ and degree $\binom{%
n-1}{2}-\binom{r-1}{2}$. Moreover, for all $r\geq 1$, $J_{r,r}=1$.
\end{theorem}
\begin{proof}
a) Let us first prove the theorem for $n=r.$ From formula $\left( 3.1\right)
$ and lemma 7.1 we have
\begin{equation*}
\sum\limits_{n\geq r}p_{n}^{(r)}\left( -t\right) ^{n-r}=\dfrac{1}{r!}\dfrac{%
D^{r}E_{xp}(t)}{E_{xp}(t)}=\dfrac{q^{\binom{r}{2}}}{r!}\dfrac{E_{xp}(q^{r}t)%
}{E_{xp}(t)}
\end{equation*}
The constant term of the formal series quotient $E_{xp}(q^{r}t)/E_{xp}(t)$
is $E_{xp}(q^{r}\cdot 0)/E_{xp}(0)=1$. Therefore the constant term of the
series on the left-hand side is $p_{r}^{(r)}=q^{\binom{r}{2}}/r!$.
Substituting into relation $\left( 7.2\right) $ we get $J_{r,r}=1$, and this
polynomial satisfies all the assertions of the theorem.
b) For the general case $n\geq r\geq 1$ we reason by induction on $n$.
For $n=r=1$ it is a particular case of a) and is therefore proved.
Assume the theorem is true for all pairs of integers $\left( l,j\right) $
such that $1\leq l\leq n$ and $1\leq j\leq l$, and let us prove it for $n+1$
and all $r$ between $1$ and $n$ (it is already proved for $r=n+1$ by
a)). We have, by Lemma 6.1,
\begin{equation}
p_{n+1}^{(r)}=\left( 1-q^{r}\right) \dfrac{q^{\binom{r}{2}}}{r!}\left[
p_{n+1-r}\right] _{q^{r}} \tag{7.3}
\end{equation}
and the particular case $\left( 4.7\right) $ of corollary 4.2 gives
\begin{equation*}
\left[ p_{n+1-r}\right] _{q^{r}}=\sum\limits_{j=1}^{n+1-r}\left(
1-q^{r}\right) ^{j-1}p_{n+1-r}^{(j)}
\end{equation*}
But for $1\leq r\leq n$ we have $1\leq n+1-r\leq n$, so the induction
hypothesis applies to $p_{n+1-r}^{(j)}$, hence
\begin{equation*}
\left[ p_{n+1-r}\right] _{q^{r}}=\sum\limits_{j=1}^{n+1-r}\left(
1-q^{r}\right) ^{j-1}\left( 1-q\right) ^{n+1-r-j}\dfrac{q^{\binom{j}{2}}}{%
j!\left( n+1-r-j\right) !}J_{n+1-r,j}
\end{equation*}
Substituting into $\left( 7.3\right) $, we get after some easy transformations
\begin{equation*}
p_{n+1}^{(r)}=\left( 1-q\right) ^{n+1-r}\dfrac{q^{\binom{r}{2}}}{r!\left(
n+1-r\right) !}\sum\limits_{j=1}^{n+1-r}\left[ r\right] ^{j}\binom{n+1-r}{j}%
q^{\binom{j}{2}}J_{n+1-r,j}
\end{equation*}
Let us set
\begin{equation}
J_{n+1,r}(q)=\sum\limits_{j=1}^{n+1-r}\left[ r\right] ^{j}\binom{n+1-r}{j}q^{%
\binom{j}{2}}J_{n+1-r,j} \tag{7.4}
\end{equation}
then we have verified $\left( 7.2\right) $ of the theorem for $n+1$.
Moreover, since $\left( 7.4\right) $ expresses $J_{n+1,r}$ in terms of
polynomials which by the induction hypothesis have positive integer
coefficients, with coefficients which are themselves polynomials in $q$ with
positive integer coefficients, the same holds for $J_{n+1,r}$.
In the right-hand side of $\left( 7.4\right) $ the degree of each term of
the sum is, by the induction hypothesis:
\begin{equation*}
\deg \left( \left[ r\right] ^{j}\binom{n+1-r}{j}q^{\binom{j}{2}%
}J_{n+1-r,j}\right) =(r-1)j+\binom{j}{2}+\binom{n-r}{2}-\binom{j-1}{2}=%
\binom{n-r}{2}+rj-1
\end{equation*}
It is clearly maximal for $j=n+1-r$ and is then $\binom{n}{2}-\binom{r-1}{2%
}$. The coefficient of this monomial of maximal degree is $%
J_{n+1-r,n+1-r}$, which is equal to 1 by a). $J_{n+1,r}$ is therefore a
monic polynomial.
When $j=1$ the term of the sum is $\left[ r\right] \left( n+1-r\right)
J_{n+1-r,1}(q)$. This polynomial has valuation zero, and it is the only
term of the sum having this property. The valuation of $J_{n+1,r}$ is
therefore zero, and by the induction hypothesis its constant term is $\left(
n+1-r\right) \cdot \left( n-r\right) !=\left( n+1-r\right) !$, which
completes the proof.
\end{proof}
\bigskip
\begin{corollary}
The polynomials $J_{n,r}$ satisfy, when $n-1\geq r\geq 1$,
the linear recurrence whose coefficients are elements of $\mathbb{N}[q]$:
\begin{equation}
J_{n,r}(q)=\sum\limits_{j=1}^{n-r}\left[ r\right] _{q}^{j}\;q^{\binom{j}{2}}%
\binom{n-r}{j}J_{n-r,j}(q) \tag{7.5}
\end{equation}
This recurrence, together with the initial conditions $J_{r,r}=1$ for $r\geq
1$, suffices to calculate all the $J_{n,r}$ for $n\geq r+1.$
\end{corollary}
\bigskip
\begin{proof}
\bigskip This recurrence is none other than equation $\left( 7.4\right) $
defining $J_{n+1,r}$ in the proof of Theorem $7.2$; it is therefore proved
for all $n$ and $r$ such that $n-1\geq r\geq 1$. Arrange the
polynomials in a table as shown below, with $J_{n,r}=0$ if $n<r$ by
convention. We see that the horizontal linear recurrence $\left( 7.5\right) $
makes it possible to determine, row by row, all the polynomials with $n>r$.
\end{proof}
\bigskip
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\begin{tabular}{ll}
& $r$ \\
$n$ &
\end{tabular}
& $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline
$1$ & $1$ & 0 & 0 & 0 & 0 \\ \hline
$2$ & $1$ & $1$ & 0 & 0 & $0$ \\ \hline
$3$ & $2+q$ & $1+q$ & $1$ & 0 & 0 \\ \hline
$4$ & $6+6q+3q^{2}+q^{3}$ & $2+3q+2q^{2}+q^{3}$ & $1+q+q^{2}$ & $1$ & 0 \\
\hline
$5$ & $
\begin{tabular}{l}
$24+36q+30q^{2}+20q^{3}$ \\
$+10q^{4}+4q^{5}+q^{6}$%
\end{tabular}
$ & $
\begin{tabular}{l}
$6+12q+12q^{2}+10q^{3}$ \\
$+6q^{4}+3q^{5}+q^{6}$%
\end{tabular}
$ &
\begin{tabular}{l}
$2+3q+4q^{2}+$ \\
$3q^{3}+2q^{4}+q^{5}$%
\end{tabular}
& $1+q+q^{2}+q^{3}$ & $1$ \\ \hline
\end{tabular}
\bigskip \textbf{Table of polynomials }$J_{n,r}$\textbf{\ for }$5\geq
n,r\geq 1$
\textbf{Example of application of }$\left( \mathbf{7.5}\right) $\textbf{\ }:
$n=6,r=2\qquad $%
\begin{equation*}
J_{6,2}=4\left( 1+q\right) J_{4,1}+6\left( 1+q\right) ^{2}qJ_{4,2}+4\left(
1+q\right) ^{3}q^{3}J_{4,3}+\left( 1+q\right) ^{4}q^{6}
\end{equation*}
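The recurrence $\left( 7.5\right) $ is straightforward to implement. The following Python sketch is our own illustration (the names \texttt{padd}, \texttt{pmul}, \texttt{qint}, \texttt{J} are ours); it represents polynomials in $q$ as coefficient tuples and reproduces the table above:

```python
from math import comb
from functools import lru_cache

# Polynomials in q are coefficient tuples: index = power of q.
def padd(a, b):
    n = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n))

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return tuple(out)

def ppow(a, k):
    out = (1,)
    for _ in range(k):
        out = pmul(out, a)
    return out

def qint(r):
    # [r]_q = 1 + q + ... + q^{r-1}
    return (1,) * r if r > 0 else (0,)

@lru_cache(maxsize=None)
def J(n, r):
    # J_{n,r} computed from the recurrence (7.5), with J_{r,r} = 1
    if n == r:
        return (1,)
    res = (0,)
    for j in range(1, n - r + 1):
        term = pmul(ppow(qint(r), j), J(n - r, j))
        # multiply by q^{binom(j,2)} * binom(n-r, j)
        term = pmul(term, (0,) * comb(j, 2) + (comb(n - r, j),))
        res = padd(res, term)
    return res

# entries of the table: coefficients of 1, q, q^2, ...
print(J(4, 1))  # (6, 6, 3, 1), i.e. 6 + 6q + 3q^2 + q^3
print(J(5, 2))  # (6, 12, 12, 10, 6, 3, 1)
```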
\textbf{Particular case} $r=1$
The corollary then gives for all $n\geq 1$: $\qquad
J_{n+1,1}(q)=\sum\limits_{j=1}^{n}q^{\binom{j}{2}}\binom{n}{j}J_{n,j}(q)$
\bigskip Corollary $7.3$ can be generalized in the
following form, by setting for all $n\geq 0\;\;$ $J_{n,0}=\delta _{n,0}$ and
$\left[ 0\right] _{q}^{0}=1.$
\textbf{Corollary 7.3 generalized. }\textit{We have for every pair of
integers }$\left( n,r\right) $\textit{\ such that }$n\geq r\geq 0$\textit{: }
\begin{equation}
J_{n,r}(q)=\sum\limits_{j=0}^{n-r}\left[ r\right] _{q}^{j}\;q^{\binom{j}{2}}%
\binom{n-r}{j}J_{n-r,j}(q) \tag{7.6}
\end{equation}
\begin{proof}
If $n>r\geq 1$, $\left( 7.5\right) $ implies $\left( 7.6\right) $ since
the added term contains $J_{n-r,0}$, which is zero.
If $n=r\geq 1$, the left side of $\left( 7.6\right) $ is $1$ and the right
side is $\left[ r\right] ^{0}q^{0}\binom{0}{0}J_{0,0}=1.$
If $n\geq r=0$, the left side of $\left( 7.6\right) $ is $J_{n,0}=\delta
_{n,0}$, and on the right the sum reduces to the first term $\left[ 0%
\right] ^{0}q^{0}\binom{n}{0}J_{n,0}=\delta _{n,0}$.
\end{proof}
\bigskip
Now let us make the connection with the polynomials introduced previously,
whose genesis we briefly recall; we refer to [17] for more details.
In [11] Mallows and Riordan defined the inversion enumerator polynomial for
rooted trees with $n$ nodes, which they denote $J_{n}$. Then Stanley and Yan
successively generalized the definition of the inversion enumerator
polynomial to sequences of ``colored'' rooted forests, which correspond to
classical parking functions (in the terminology of [17], Section 1.4.4). These
polynomials are characterized by a pair of parameters $\left( \alpha ,\beta
\right) $ and are denoted $I_{m}^{(\alpha ,\beta )}$ in [17]. These authors
prove, by a combinatorial method, the formula (Corollary 5.1 of [16])
\begin{equation}
\sum\limits_{m\geq 0}\left( q-1\right) ^{m}I_{m}^{\left( \alpha ,\beta
\right) }\dfrac{t^{m}}{m!}=\dfrac{\sum\limits_{n\geq 0}q^{\alpha n+\beta
\binom{n}{2}}\dfrac{t^{n}}{n!}}{\sum\limits_{n\geq 0}q^{\beta \binom{n}{2}}%
\dfrac{t^{n}}{n!}} \tag{7.7}
\end{equation}
Note that these polynomials are still the subject of recent research (see
for example [14], which uses a slightly different notation). The comparison
of $\left( 7.7\right) $ with $\left( 3.1\right) $ and Theorem 7.2 shows that:
\begin{equation}
J_{n,r}=I_{n-r}^{\left( r,1\right) }\Leftrightarrow I_{m}^{\left( r,1\right)
}=J_{m+r,r} \tag{7.8}
\end{equation}
Note that another proof of $\left( 7.7\right) $ was obtained, partly
combinatorially, with the Goncarov polynomials in [9]. All these methods are
different from the one we used, which only applies to the class of
polynomials defined by (7.8). Very recently this class of polynomials has
been the subject of [8], with still different objectives and notations.
The case $r=1$ corresponds to the case of a tree [11], so we have $%
J_{n,1}=J_{n}$, which gives another notation for $J_{n,1}$.
\section{Reciprocal polynomials}
\begin{definition}
\bigskip The polynomials $\overline{J_{n,r}}$ are defined by
\begin{equation}
\overline{J_{n,r}}\left( q\right) =q^{\binom{n-1}{2}-\binom{r-1}{2}%
}J_{n,r}(1/q) \tag{8.1}
\end{equation}
These are the reciprocal polynomials of the $J_{n,r}$.
\end{definition}
Since $\overline{\overline{J_{n,r}}}=J_{n,r}$, note that $\left( 8.1\right) $
is equivalent to
\begin{equation}
J_{n,r}\left( q\right) =q^{\binom{n-1}{2}-\binom{r-1}{2}}\overline{J_{n,r}}%
(1/q) \tag{8.1bis}
\end{equation}
It is well known that these reciprocal polynomials are equal to the sum
enumerators of the corresponding parking functions, whose definition we
recall. For more details we refer to [17], whose notations we adopt, very
nearly, for the parking functions of our case, i.e. the case where the
parameters $\left( \alpha ,\beta \right) =(r,1)$. Let $m=n-r$ and denote by
$PK_{m}(r,1)$ the set of parking functions associated with the sequence $%
r,r+1,r+2,...,r+m-1$. $PK_{m}(r,1)$ is the set of sequences of integers $%
a=\left( a_{1},a_{2},...,a_{m}\right) $ such that
\begin{equation*}
a_{(i)}<r+i-1\text{ \ \ for \ \ }1\leq i\leq m
\end{equation*}
$\left( a_{(1)},a_{\left( 2\right) },...,a_{\left( m\right) }\right) $ being
the sequence of the $a_{i}$ ordered in a non-decreasing way.
We therefore have with $\left| a\right| =a_{1}+a_{2}+...+a_{m}$
\begin{equation*}
\overline{J_{m+r,r}}(q)=\sum\limits_{a\in PK_{m}(r,1)}q^{\left| a\right|
}=S_{m}\left( q;r\right)
\end{equation*}
where the right-hand side is the sum enumerator, which we write $S_{m}\left(
q;r\right) $ (instead of $S_{m}\left( q;r,r+1,r+2,...,r+m-1\right) $ as in
[17]). We verify that this relation is still true if $m=0$, provided
that we set $PK_{0}(r,1)=\emptyset $ and $\sum\limits_{a\in \emptyset
}q^{\left| a\right| }=1$.
Using $\left( 8.1bis\right) $, Corollary 7.3 becomes
\bigskip
\begin{corollary}
The reciprocal polynomials $\overline{J_{n,r}}$ satisfy, when $n-1\geq
r\geq 1$, the linear recurrence relation whose coefficients are elements of $%
\mathbb{N}[q]$:
\begin{equation}
\text{ \ \ \ }\overline{J_{n,r}}(q)=\sum\limits_{j=1}^{n-r}\left[ r\right]
_{q}^{j}\;q^{r\left( n-r-j\right) }\binom{n-r}{j}\overline{J_{n-r,j}}(q)
\tag{8.2}
\end{equation}
\end{corollary}
\bigskip Of course we can alternatively use this recurrence to build the
table of the $\overline{J_{n,r}}$, with $\overline{J_{r,r}}=1$ and $\overline{%
J_{n,r}}=0$ if $n<r$.
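Since $J_{n,r}$ is monic of degree $\binom{n-1}{2}-\binom{r-1}{2}$, the reciprocal polynomial $\left( 8.1\right) $ is obtained by simply reversing the coefficient list, and the recurrence $\left( 8.2\right) $ can then be checked directly. The Python sketch below is our own illustration (all names are ours):

```python
from math import comb
from functools import lru_cache

# q-polynomials as coefficient tuples (index = power of q)
def padd(a, b):
    n = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n))

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return tuple(out)

def ppow(a, k):
    out = (1,)
    for _ in range(k):
        out = pmul(out, a)
    return out

def qint(r):
    return (1,) * r if r > 0 else (0,)

def trim(p):
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return tuple(p)

@lru_cache(maxsize=None)
def J(n, r):  # the recurrence (7.5), J_{r,r} = 1
    if n == r:
        return (1,)
    res = (0,)
    for j in range(1, n - r + 1):
        term = pmul(ppow(qint(r), j), J(n - r, j))
        term = pmul(term, (0,) * comb(j, 2) + (comb(n - r, j),))
        res = padd(res, term)
    return res

def Jbar(n, r):
    # (8.1): J_{n,r} is monic of degree C(n-1,2) - C(r-1,2), so the reciprocal
    # polynomial is obtained by reversing the coefficients
    return tuple(reversed(J(n, r)))

# check the recurrence (8.2) on a few pairs (n, r)
for (n, r) in [(3, 1), (4, 1), (4, 2), (5, 2), (6, 2)]:
    rhs = (0,)
    for j in range(1, n - r + 1):
        term = pmul(ppow(qint(r), j), Jbar(n - r, j))
        # multiply by q^{r(n-r-j)} * binom(n-r, j)
        term = pmul(term, (0,) * (r * (n - r - j)) + (comb(n - r, j),))
        rhs = padd(rhs, term)
    assert trim(Jbar(n, r)) == trim(rhs), (n, r)
print("(8.2) verified")
```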
It is interesting to compare the linear recurrence $\left( 8.2\right) $ with
a linear recurrence that Kung and Yan found between the sum enumerators, in
their study of the application of Goncarov polynomials to parking functions.
It is relation $\left( 6.2\right) $ in [9] (or $\left( 1.27\right) $,
p.37 in [17]), which in our case and notation gives
\begin{equation*}
1=\sum\limits_{k=0}^{m}\binom{m}{k}q^{\left( r+k\right) \left( m-k\right)
}\left( 1-q\right) ^{k}\overline{J_{k+r,r}}
\end{equation*}
\bigskip which is equivalent, with $m+r=n$ and $l=k+r$, to
\begin{equation}
\left( 1-q\right) ^{n-r}\overline{J_{n,r}}(q)=1-\sum\limits_{l=r}^{n-1}%
\binom{n-r}{l-r}q^{l\left( n-l\right) }\left( 1-q\right) ^{l-r}\overline{%
J_{l,r}}\left( q\right) \tag{8.3}
\end{equation}
To see the difference, let's take the example of $n=6,r=2$
\begin{equation}
\overline{J_{6,2}}=4\left( 1+q\right) q^{6}\overline{J_{4,1}}+6\left(
1+q\right) ^{2}q^{4}\overline{J_{4,2}}+4\left( 1+q\right) ^{3}q^{2}\overline{%
J_{4,3}}+\left( 1+q\right) ^{4}\overline{J_{4,4}} \tag{with 8.2}
\end{equation}
\begin{equation}
\left( 1-q\right) ^{4}\overline{J_{6,2}}=1-q^{8}\overline{J_{2,2}}%
-4q^{9}\left( 1-q\right) \overline{J_{3,2}}-6q^{8}\left( 1-q\right) ^{2}%
\overline{J_{4,2}}-4q^{5}\left( 1-q\right) ^{3}\overline{J_{5,2}}
\tag{with 8.3}
\end{equation}
\bigskip We see that our recurrence is horizontal, here using row 4 of the
table of the $\overline{J_{n,r}}$, and the coefficients of the recurrence
are elements of $\mathbb{N}\left[ q\right] $.
On the other hand, recurrence $\left( 8.3\right) $ is vertical, here using
column 2 of the table of the $\overline{J_{n,r}}$, and the
coefficients of the recurrence are elements of $\mathbb{Z}\left[ q\right] $.
\section{An explicit formula for the $J_{n,r}$}
By applying the recurrence formula $\left( 7.5\right) $, we will prove the
following theorem, which comes in two versions corresponding to the two
versions of Corollary 7.3. First we generalize some notations used for
integer partitions.
\bigskip
\textbf{Notation}: Let $u=\left( u_{1},u_{2},...,u_{k}\right) $ be a
sequence of strictly positive integers, we set:
\begin{equation*}
\left| u\right| =u_{1}+u_{2}+...+u_{k}\text{ \ \ \ and \ \ }n(u^{\prime })=%
\binom{u_{1}}{2}+\binom{u_{2}}{2}+...+\binom{u_{k}}{2}
\end{equation*}
Note that $u^{\prime }$ itself is not defined in this case; only $n(u^{\prime
})$ has a meaning, given by the above formula.\bigskip\ More generally, if $U$
is the set of infinite sequences of integers $u=\left( u_{1},u_{2},...\right)
$ such that only a finite number of terms $u_{i}$ are non-zero, we can extend
the above notations to all $u\in U$ by: \ \
\begin{equation*}
\left| u\right| =\sum\limits_{i\geq 1}u_{i}\text{ \ \ and \ }n(u^{\prime
})=\sum\limits_{i\geq 1}\binom{u_{i}}{2}
\end{equation*}
\begin{theorem}
a) For any pair of integers $\left( n,r\right) $ satisfying $n-1\geq r\geq
1$ \ we have the explicit formula
\begin{equation}
J_{n,r}(q)=\sum \left[ r\right] _{q}^{u_{1}}\left[ u_{1}\right]
_{q}^{u_{2}}...\left[ u_{k-1}\right] _{q}^{u_{k}}q^{n(u^{\prime })}\binom{n-r%
}{u_{1},u_{2},...,u_{k}} \tag{9.1}
\end{equation}
where the sum is over the $k$-tuples of strictly positive integers $%
u=\left( u_{1},u_{2},...,u_{k}\right) $ with $k\geq 1$\ and $\left| u\right|
=n-r$, where $n(u^{\prime })=\sum\limits_{i=1}^{k}\binom{u_{i}}{2}$, and where $%
\binom{n-r}{u_{1},u_{2},...,u_{k}}$ is the multinomial coefficient.
b) For any couple of integers $\left( n,r\right) $ satisfying $n\geq r\geq 0$
\ we have:
\begin{equation}
J_{n,r}(q)=\left( n-r\right) !\sum \left[ r\right] _{q}^{u_{1}}q^{n(u^{%
\prime })}\prod\limits_{i\geq 1}\dfrac{\left[ u_{i}\right] _{q}^{u_{i+1}}}{%
u_{i}!} \tag{9.2}
\end{equation}
the sum is over the sequences $u=\left( u_{i}\right) _{i\geq 1}$ $\in $ $U$
defined above, with $\left| u\right| =\sum\nolimits_{i\geq 0}u_{i}=n-r$ and $%
n(u^{\prime })=\sum\nolimits_{i\geq 0}\binom{u_{i}}{2}$
\end{theorem}
As we will see below, the sum in $\left( 9.2\right) $ contains only a
finite number of non-zero terms. Note that although b) looks more
complicated than a), it is in fact easier to prove.
\begin{proof}
We first show the equivalence of a) and b) when $n-1\geq r\geq 1.$
In order for the general term of the sum in (9.2) to be non-zero, it is
necessary, taking into account $\left[ 0\right] ^{n}=\delta _{n}^{0}$, that
the finite number of non-zero terms of the sequence $u$ (say $k$ of them) occupy
the first $k$ places in this sequence. In other words, the summation can be
limited to these sequences, which we will call commencing sequences, and we
will denote by $\left( 9.2bis\right) $ the equation corresponding to these
sequences. The map which, to a tuple $\left(
u_{1},u_{2},...,u_{k}\right) $ of the sum in (9.1), with $u_{1}+u_{2}+...+u_{k}=n-r$ and $k\geq 1$, associates the commencing sequence $u=\left(
u_{1},u_{2},...,u_{k},0,0,...\right) $ with $\left| u\right| =n-r$, is
clearly a bijection between the two sets. Moreover, we check that the terms of the respective sums thus put in one-to-one correspondence are equal.
The two sums are therefore equal.
Let us now prove $\left( 9.2bis\right) $ for all $n\geq r\geq 0$.
1) If $n=r$ then the condition $\left| u\right| =\sum\nolimits_{i\geq
1}u_{i}=n-r=0$ implies $u_{i}=0$ for all $i\geq 1$. The sum in $\left(
9.2bis\right) $ therefore reduces to $0!\left[ r\right] ^{0}q^{0}\left[ 0%
\right] ^{0}.../0!...=1$, which is indeed equal to $J_{r,r}$ for all $r\geq 0$.

2) If $r=0$ and $n>0$, then for a commencing sequence $u$ verifying $\left|
u\right| =n-r>0$ the first term $u_{1}$ must be non-zero. Therefore the
term indexed by $u$ in the sum of ($9.2bis)$ is zero, since it contains
the factor $\left[ 0\right] ^{u_{1}}=0$. The sum in ($9.2bis$) is therefore
zero, which is indeed $J_{n,0}$ for $n>0$.
3) Let us now prove the general case $n\geq r\geq 0$ by induction on $n$.

When $n=r=0$ this has already been checked in 1).

Assume that relation ($9.2bis$) is true for any pair of integers $\left(
l,j\right) $ such that $0\leq l\leq n-1$ and $0\leq j\leq l$, and let us show that it
is true for $l=n$, $n>0$ and $0\leq j=r\leq n$.

If $r=0$ it is case 2), already proved since $n>0$.

If $r=n$ it is case 1), already proved.
So suppose $1\leq r\leq n-1$. The set of sequences verifying $\left|
u\right| =\sum\nolimits_{i\geq 1}u_{i}=n-r>0$ can be decomposed according to
the value of $u_{1}$: for $1\leq j\leq n-r$,

* $u_{1}=j$, whence $\sum\nolimits_{i\geq 2}u_{i}=n-r-j$.
The right-hand side of $\left( 9.2bis\right) $\ can therefore be written:
\begin{equation}
=\sum\limits_{j=1}^{n-r}\left[ r\right] ^{j}q^{\binom{j}{2}}\binom{n-r}{j}%
\left[ \left( n-r-j\right) !\sum\limits_{u_{2}+u_{3}+...=n-r-j}\left[ j%
\right] ^{u_{2}}q^{\binom{u_{2}}{2}+\binom{u_{3}}{2}+...}\prod\limits_{i\geq
2}\dfrac{\left[ u_{i}\right] ^{u_{i+1}}}{u_{i}!}\right] \tag{9.3}
\end{equation}
Let us set $v_{i}=u_{i+1}$ for $i\geq 1$; then $v=\left( v_{i}\right)
_{i\geq 1}$ is a sequence satisfying $\left| v\right| =(n-r)-j$, and the bracketed expression
in $\left( 9.3\right) $ can be rewritten:
\begin{equation*}
\left( (n-r)-j\right) !\sum \left[ j\right] _{q}^{v_{1}}q^{n(v^{\prime
})}\prod\limits_{i\geq 1}\dfrac{\left[ v_{i}\right] _{q}^{v_{i+1}}}{v_{i}!}
\end{equation*}
where the sum is over all sequences $v$ of integers such that $\left| v\right|
=n-r-j$. But this sum is $J_{n-r,j}$ according to the induction
hypothesis. The right-hand side of $\left( 9.3\right) $ is therefore
\begin{equation*}
\sum\limits_{j=1}^{n-r}\left[ r\right] _{q}^{j}\;q^{\binom{j}{2}}\binom{n-r}{%
j}J_{n-r,j}(q)
\end{equation*}
which is indeed equal to $J_{n,r}$ according to $\left( 7.5\right) $ .
\end{proof}
\bigskip \textbf{Case where }$q=1:$
From $\left( 9.1\right) $ we get
\begin{equation}
J_{n,r}(1)=\sum_{\substack{ u_{1}+u_{2}+...+u_{k}=n-r \\ u_{i}\geq
1,\;k\geq 1}}r^{u_{1}}u_{1}^{u_{2}}...u_{k-1}^{u_{k}}\binom{n-r}{%
u_{1},u_{2},...,u_{k}}=rn^{n-r-1} \tag{9.4}
\end{equation}
Let us explain the last equality in $\left( 9.4\right) $. It follows from the
generalization, made in [16], of tree inversions to the polynomials $%
I_{m}^{(\alpha ,\beta )}$ that $J_{n,r}\left( 1\right) $ is equal to the
number of forests on $n$ vertices (including the roots) comprising $r$
rooted trees whose roots are specified. The set of these forests is denoted by $%
\mathcal{F}_{m}\left( r,1\right) $ in [17], with here $m=n-r$. The number of these forests is $J_{n,r}(1)=rn^{n-r-1}$ (see Proposition 5.3.2
in [13]), which we can also easily verify by induction.
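Formula $\left( 9.1\right) $ and the evaluation $\left( 9.4\right) $ are easy to check by machine. The sketch below (Python; polynomials in $q$ stored as exponent-to-coefficient dictionaries, function names ours) computes $J_{n,r}(q)$ from $\left( 9.1\right) $ and verifies $J_{n,r}(1)=rn^{n-r-1}$ for small $n$.

```python
from math import comb, factorial

def pmul(a, b):
    """Multiply two polynomials in q stored as {exponent: coefficient} dicts."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0) + ca * cb
    return out

def qint_pow(m, e):
    """[m]_q^e = (1 + q + ... + q^(m-1))^e, with [0]_q^0 = 1 and [0]_q^e = 0."""
    poly = {0: 1}
    for _ in range(e):
        poly = pmul(poly, {i: 1 for i in range(m)})
    return poly

def compositions(m):
    """All tuples of strictly positive integers summing to m."""
    if m == 0:
        yield ()
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield (first,) + rest

def J(n, r):
    """J_{n,r}(q) via the explicit formula (9.1), for n-1 >= r >= 1."""
    total = {}
    for u in compositions(n - r):
        term = qint_pow(r, u[0])                         # [r]^{u_1}
        for i in range(1, len(u)):
            term = pmul(term, qint_pow(u[i - 1], u[i]))  # [u_{i-1}]^{u_i}
        shift = sum(comb(ui, 2) for ui in u)             # n(u')
        mult = factorial(n - r)                          # multinomial coefficient
        for ui in u:
            mult //= factorial(ui)
        for e, c in term.items():
            total[e + shift] = total.get(e + shift, 0) + mult * c
    return total

# J_{3,1}(q) = 2 + q, and the evaluation (9.4) at q = 1 for small n, r
assert J(3, 1) == {0: 2, 1: 1}
assert all(sum(J(n, r).values()) == r * n ** (n - r - 1)
           for n in range(2, 8) for r in range(1, n))
```

Evaluating at $q=1$ amounts to summing the coefficients, which is how the check of $\left( 9.4\right) $ above is performed.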
\bigskip
\section{\protect\bigskip Level statistics on forests}
We will denote by $\mathcal{F}_{n,R}$ the set of rooted forests whose vertex set
is $V=\mathbf{n}=\left\{ 1,2,...,n\right\} $, and whose roots form a
specified subset $R\subseteq \mathbf{n}$ with $\left| R\right| =r$. $\mathcal{F%
}_{n,R}$ is equipotent to the set $\mathcal{F}_{n-r}\left( r,1\right) $ of [17]. We
assume until further notice that $n-1\geq r\geq 1$, so that $R\subset
\mathbf{n}$. Let $F\in \mathcal{F}_{n,R}$; $k$ will designate the height of $F$, i.e. the greatest height of its trees. For $i$ an integer
between $0$ and $k$, we set $V_{i}=\left\{ v\in \mathbf{n};\;\text{distance of }v%
\text{ from the root}=i\right\} $ and $u_{i}=\left| V_{i}\right| $; in
particular $V_{0}=R$ and $u_{0}=r$. The map which to $v\in \mathbf{n}$
associates the set $V_{i}$ such that $v\in V_{i}$ is denoted $L_{F}$; $%
L_{F}\left( v\right) $ is the level (or generation) of $v$.
We necessarily have $u_{0}+u_{1}+...+u_{k}=n$. According to the previous
section, with these notations, and since $r\leq n-1$ implies $k\geq 1$, we have:
\begin{equation*}
\left| \mathcal{F}_{n,R}\right| =J_{n,r}\left( 1\right) =\sum_{\substack{ %
u_{0}+u_{1}+...+u_{k}=n \\ u_{0}=r,\;u_{i}\;\geq 1}}%
u_{0}^{u_{1}}u_{1}^{u_{2}}...u_{k-1}^{u_{k}}\binom{n-r}{u_{1},u_{2},...,u_{k}%
}
\end{equation*}
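The count $\left| \mathcal{F}_{n,R}\right| =rn^{n-r-1}$ can also be confirmed by brute force for small $n$: a forest with root set $R$ is just a choice of one parent for each non-root vertex such that iterating the parent map always reaches a root. A sketch (our function names):

```python
from itertools import product

def count_forests(n, roots):
    """Count rooted forests on {1,...,n} whose root set is `roots`:
    each non-root vertex gets a parent, and following parents must reach a root."""
    nonroots = [v for v in range(1, n + 1) if v not in roots]
    count = 0
    for parents in product(range(1, n + 1), repeat=len(nonroots)):
        pmap = dict(zip(nonroots, parents))

        def reaches_root(v):
            seen = set()
            while v not in roots:
                if v in seen:          # a cycle: not a forest
                    return False
                seen.add(v)
                v = pmap[v]
            return True

        if all(reaches_root(v) for v in nonroots):
            count += 1
    return count

# r * n^(n-r-1) with n = 4, r = 1 and with n = 5, r = 2
assert count_forests(4, {1}) == 1 * 4 ** 2
assert count_forests(5, {2, 5}) == 2 * 5 ** 2
```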
It is convenient to describe level statistics in a fairly general way, with
the following definitions.
\begin{definition}
Let $\mathcal{P}_{n}$ be the set of subsets of $\mathbf{n}$. A ranking $\rho
$ on $\mathcal{P}_{n}$ assigns to each $P\in \mathcal{P}_{n}$ a
bijection from $P$ onto $\mathbf{p}=\left\{ 1,...,p\right\} $, where $%
p=\left| P\right| $; this bijection is denoted $\rho \left( P\right) $.
\end{definition}
\textbf{Example of ranking on }$\mathcal{P}_{n}$: the increasing ranking $%
\rho _{+}$ is the one for which $\rho _{+}\left( P\right) $ is the
increasing bijection for all $P\in \mathcal{P}_{n}$; the decreasing ranking $%
\rho _{-}$ is the one for which $\rho _{-}\left( P\right) $ is the
decreasing bijection for all $P\in \mathcal{P}_{n}$. Any combination of $%
\rho _{+}$ and $\rho _{-}$ is still a ranking on $\mathcal{P}_{n}$, for
example the one which is equal to $\rho _{+}$ if $\left| P\right| $ is even and
to $\rho _{-}$ otherwise.
\begin{definition}
Let $\rho $ be a ranking on $\mathcal{P}_{n}$ and $\mathcal{F}_{n,R}$ the set of forests defined above, $n$ and $R$ being given.
The weight associated with $\rho $ of a vertex $v$ of $F\in \mathcal{F}_{n,R}
$ is $w_{\rho }(v)=\rho \left( L_{F}\left( v\right) \right) \left( v\right) $.
\end{definition}
\bigskip Note that this weight depends only on $\rho $ and on the generation of
$v$. We denote by $p\left( v\right) $ the (unique) parent of the vertex $v\in
\mathbf{n}-R$.
\textbf{Example with }$n=13$\textbf{, }$R=\left\{ 7,11,2\right\} $\textbf{\
and }$\rho _{+}$: For the forest $F\in \mathcal{F}_{13,R}$ represented below,
and for each of its vertices $v$ labeled in black, we have indicated in red
the weight associated with $\rho _{+}$. The cardinality of each generation is also indicated on the right.
\bigskip
\includegraphics[width=16cm]{foret.jpg}
\begin{definition}
Let $\rho $ be a ranking on $\mathcal{P}_{n}$. For all $F\in \
\mathcal{F}_{n,R}$ ($n$ and $R\subset \mathbf{n}$ given), the level
statistic $l_{\rho }$ associated with $\rho $ is defined by
\begin{equation*}
l_{\rho }\left( F\right) =n(u^{\prime })+\sum\nolimits_{v\in \mathbf{n}%
-R}(w_{\rho }\left( p\left( v\right) \right) -1)
\end{equation*}
where $u_{i}$ is the number of vertices at distance $i$ from the roots and $%
n(u^{\prime })=\binom{u_{1}}{2}+\binom{u_{2}}{2}+...+\binom{u_{k}}{2}$.
\end{definition}
For the forest $F$ represented above, we have
\begin{equation*}
l_{\rho _{+}}\left( F\right) =\binom{4}{2}+\binom{5}{2}+\binom{1}{2}+\left(
4-1\right) +2\left( 1-1\right) +3\left( 4-1\right) +\left( 2-1\right)
+\left( 3-1\right) +2\left( 1-1\right) =31
\end{equation*}
\begin{theorem}
Let $n\in \mathbb{N}^{\ast }$ and $R\subset \mathbf{n}$ with $\left| R\right|
=r$; then for any ranking $\rho $ on $\mathcal{P}_{n}$, we have
\begin{equation*}
J_{n,r}\left( q\right) =\sum\limits_{F\in \mathcal{F}_{n,R}}q^{l_{\rho
}\left( F\right) }
\end{equation*}
\end{theorem}
\begin{proof}
For each height $k$, between $1$ and $n-r$, and each $k$-multiplet $u=\left(
u_{1},u_{2},...,u_{k}\right) $ with $u_{i}\geq 1$ and $\left| u\right|
=u_{1}+u_{2}+...+u_{k}=n-r$, we have $\binom{n-r}{u_{1},u_{2},...u_{k}}$
possibilities of placing the $n-r$ non-root vertices on the $k$ levels from $%
1$ to $k$, i.e. of choosing the $k$ generations $V_{1},V_{2},...,V_{k}$. So we
have
\begin{equation}
\sum\limits_{F\in \mathcal{F}_{n,R}}q^{l_{\rho }\left( F\right) }=\sum\limits
_{\substack{ k\geq 1,u_{i}\geq 1 \\ u_{1}+u_{2}+...+u_{k}=n-r}}\binom{n-r}{%
u_{1},u_{2},...,u_{k}}\;q^{n(u^{\prime })}\sum_{F\in F\left(
V_{0},V_{1},...,V_{k}\right) }q^{\sum\nolimits_{v\in \mathbf{n}-R}\left(
w_{\rho }\left( p(v)\right) -1\right) } \tag{10.1}
\end{equation}
Here $F\left( V_{0},V_{1},...,V_{k}\right) $ is the set of forests\ of $%
\mathcal{F}_{n,R}$, with a specified height $k$, and whose generations $%
V_{0}=R,V_{1},V_{2},...V_{k}$ are all specified. This set of forests is
clearly in bijection with the Cartesian product $\prod\nolimits_{i=0}^{k-1}%
\mathcal{G}_{i}$ where, from $i=0$ to $k-1$, $\mathcal{G}_{i}\ $is the set
of maps from $V_{i+1}$ to $V_{i}$; the map $g\in \mathcal{G}_{i}$ associated
with a forest of $F\left( V_{0},V_{1},...,V_{k}\right) $ being defined by $%
g\left( v\right) =p(v)$. We then have
\begin{equation*}
\sum\limits_{v\in \mathbf{n}-R}\left( w_{\rho }\left( p(v)\right) -1\right)
=\sum\limits_{i=0}^{k-1}\sum\limits_{v\in V_{i+1}}\left( w_{\rho }\left(
g(v)\right) -1\right)
\end{equation*}
hence
\begin{equation}
\sum_{F\in F\left( V_{0},V_{1},...,V_{k}\right) }q^{\sum\limits_{v\in
\mathbf{n}-R}\left( w_{\rho }\left( p(v)\right) -1\right)
}=\prod\limits_{i=0}^{k-1}\Theta _{i} \tag{10.2}
\end{equation}
with
\begin{equation*}
\Theta _{i}=\sum\limits_{g\in \mathcal{G}_{i}}q^{\sum\nolimits_{v\in
V_{i+1}}\left( w_{\rho }(g(v))-1\right) }
\end{equation*}
Let us calculate $\Theta _{i}$ for a given $i$ between $0$ and $k-1$. By
definition of $\rho $, $w_{\rho }=\rho \left( V_{i}\right) $ is a bijection
from $V_{i}$ to $\mathbf{u}_{i}$ and $\rho \left( V_{i+1}\right) $ is a
bijection from $V_{i+1}$ to $\mathbf{u}_{i+1}$. So for each $g\in \mathcal{G}%
_{i}$, $h=\Phi \left( g\right) =\rho \left( V_{i}\right) \circ g\circ \left(
\rho \left( V_{i+1}\right) \right) ^{-1}$ is a map from $\mathbf{u}_{i+1}
$ to $\mathbf{u}_{i}$. By construction $\Phi $ is a bijection from $\mathcal{%
G}_{i}$ to $\mathbf{u}_{i}^{\mathbf{u}_{i+1}}$, so we can make the
bijective change of index $h=\Phi (g)$ and write
\begin{equation*}
\Theta _{i}=\sum\nolimits_{h\in \mathbf{u}_{i}^{\mathbf{u}%
_{i+1}}}\prod\limits_{j=1}^{u_{i+1}}q^{h(j)-1}=\left(
1+q+...+q^{u_{i}-1}\right) ^{u_{i+1}}
\end{equation*}
where the second equality follows from classical combinatorial results (see
for example Th. D p. 30 in [3]). Substituting these expressions into $%
\left( 10.1\right) $, we get
\begin{equation*}
\sum\limits_{F\in \mathcal{F}_{n,R}}q^{l_{\rho }\left( F\right) }=\sum\limits
_{\substack{ k\geq 1,u_{i}\geq 1 \\ u_{1}+u_{2}+...+u_{k}=n-r}}\binom{n-r}{%
u_{1},u_{2},...,u_{k}}\;q^{n(u^{\prime })}\prod\limits_{i=0}^{k-1}\left[
u_{i}\right] _{q}^{u_{i+1}}
\end{equation*}
which with $\left( 9.1\right) $ ends the proof.
\end{proof}
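Theorem 10.1 can be tested by brute force on small cases: the distribution of $l_{\rho }$ over $\mathcal{F}_{n,R}$ is the same for $\rho _{+}$ and $\rho _{-}$, and for $n=3$, $R=\left\{ 1\right\} $ it matches $J_{3,1}(q)=2+q$, as direct evaluation of $\left( 9.1\right) $ shows. A self-contained sketch (our function names):

```python
from itertools import product
from math import comb

def forests(n, roots):
    """Yield (parent map, level map) of rooted forests on {1,...,n}, roots given."""
    nonroots = [v for v in range(1, n + 1) if v not in roots]
    for parents in product(range(1, n + 1), repeat=len(nonroots)):
        pmap = dict(zip(nonroots, parents))
        lev = {v: 0 for v in roots}        # level (generation) of each vertex
        ok = True
        for v in nonroots:
            chain, w = [], v
            while w not in lev:
                if w in chain:             # cycle: not a forest
                    ok = False
                    break
                chain.append(w)
                w = pmap[w]
            if not ok:
                break
            for d, u in enumerate(reversed(chain), start=lev[w] + 1):
                lev[u] = d
        if ok:
            yield pmap, lev

def level_stat(pmap, lev, decreasing=False):
    """l_rho(F) for the increasing ranking rho_+ (rho_- if decreasing)."""
    gens = {}
    for v, d in lev.items():
        gens.setdefault(d, []).append(v)
    weight = {}
    for vs in gens.values():
        for rank, v in enumerate(sorted(vs, reverse=decreasing), start=1):
            weight[v] = rank               # w_rho(v): rank within its generation
    n_uprime = sum(comb(len(vs), 2) for d, vs in gens.items() if d >= 1)
    return n_uprime + sum(weight[pmap[v]] - 1 for v in pmap)

def distribution(n, roots, decreasing=False):
    """Coefficients of sum over F of q^{l_rho(F)} as {exponent: count}."""
    dist = {}
    for pmap, lev in forests(n, roots):
        l = level_stat(pmap, lev, decreasing)
        dist[l] = dist.get(l, 0) + 1
    return dist

# same q-distribution for rho_+ and rho_-, equal to J_{3,1}(q) = 2 + q
assert distribution(3, {1}) == distribution(3, {1}, decreasing=True) == {0: 2, 1: 1}
assert distribution(4, {1, 3}) == distribution(4, {1, 3}, decreasing=True)
```

The independence of the distribution from the ranking $\rho $ is exactly what the theorem predicts.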
\bigskip
\textbf{Remark}: we can include the case $R=V=\mathbf{n}$ in the previous
theorem. $\mathcal{F}_{n,V}$ reduces to the empty graph $E_{n}$ ($n$
vertices without edges) and it suffices to set, for every ranking $\rho $, $%
l_{\rho }\left( E_{n}\right) =0$. We then verify that $J_{n,n}=1=\sum\nolimits_{F%
\in \mathcal{F}_{n,V}}q^{l_{\rho }(F)}=q^{l_{\rho }\left( E_{n}\right) }$.
\bigskip
\bigskip \textbf{Case of reciprocal polynomials:}

By substituting formulas $\left( 8.1bis\right) $ into the formula of
Theorem 9.1, we can obtain explicit formulas for the $\overline{J_{n,r}}$. For
example, from $\left( 9.1\right) $ we get:
\begin{corollary}
\bigskip For any pair of integers $\left( n,r\right) $ satisfying $n-1\geq
r\geq 1$ we have
\begin{equation}
\overline{J_{n,r}}(q)=\sum \left[ r\right] _{q}^{u_{1}}\left[ u_{1}\right]
_{q}^{u_{2}}...\left[ u_{k-1}\right] _{q}^{u_{k}}q^{\sigma (u)+r\left(
n-r-u_{1}\right) }\binom{n-r}{u_{1},u_{2},...,u_{k}} \tag{10.3}
\end{equation}
where the sum is over the $k$-tuples of strictly positive integers $%
u=\left( u_{1},u_{2},...,u_{k}\right) $ with $k\geq 1$ and $\left| u\right| =n-r
$, and with
\begin{equation*}
\sigma \left( u\right) =\sum\limits_{1\leq i,j\leq k,\;j\geq
i+2}u_{i}u_{j}
\end{equation*}
\end{corollary}
\bigskip It is possible to define a level statistic from formula $\left(
10.3\right) $, which can also be written
\begin{equation}
\overline{J_{n,r}}(q)=\sum \left[ u_{0}\right] _{q}^{u_{1}}\left[ u_{1}%
\right] _{q}^{u_{2}}...\left[ u_{k-1}\right] _{q}^{u_{k}}q^{\sigma (\widehat{%
u})}\binom{n-r}{u_{1},u_{2},...,u_{k}} \tag{10.4}
\end{equation}
where the sum is over the tuples $\widehat{u}=\left( u_{0}=r,\text{ }%
u_{1},u_{2},...,u_{k}\right) $ such that $\left| \widehat{u}\right|
=u_{0}+u_{1}+...+u_{k}=n$, and setting
\begin{equation*}
\sigma \left( \widehat{u}\right) =\sum\limits_{0\leq i,j\leq k,\;j\geq
i+2}u_{i}u_{j}
\end{equation*}
We leave it to the reader to verify that for any ranking $\rho $ on $%
\mathcal{P}_{n}$, $\overline{l_{\rho }}$ defined below is a statistic for $%
\overline{J_{n,r}}\left( q\right) $.
\begin{equation*}
\overline{l_{\rho }}\left( F\right) =\sigma (\widehat{u})+\sum\limits_{v\in
\mathbf{n}-R}\left( w_{\rho }\left( p(v)\right) -1\right)
\end{equation*}
Let us point out, in conclusion, that the polynomials $J_{n,r}$ and the
formulas presented in this article, as well as their reciprocals, lend themselves to other combinatorial developments, which will be presented in
future articles. In particular, we will give a combinatorial proof of $\left(
8.2\right) $ using parking functions, which allows this formula to be extended to
all the generalized parking functions defined in \S\ 1.4.1 of $\left[ 17\right] $.
\textbf{References}
[1] L. Carlitz, On Abelian fields. \textit{Trans. Amer. Math. Soc.} \textbf{35} (1933) 122-136.

[2] L. Carlitz, $q$-Bernoulli numbers and polynomials. \textit{Duke Math. J.} \textbf{15} (1948) 987-1000.

[3] L. Comtet, \textit{Advanced Combinatorics}. Springer Netherlands, Dordrecht, 1974.

[4] A. de Medicis, P. Leroux, A unified combinatorial approach for $q$- (and $p,q$-) Stirling numbers. \textit{J. Stat. Planning and Inference} \textbf{34} (1993) 89-105.

[5] U. Duran, M. Acikgoz, Apostol type ($p,q$)-Bernoulli, ($p,q$)-Euler and ($p,q$)-Genocchi polynomials and numbers. \textit{Comm. in Math. and Appl.} \textbf{8}(1) (2017) 7-30.

[6] H. W. Gould, The $q$-Stirling numbers of the first and second kinds. \textit{Duke Math. J.} \textbf{28} (1961) 281-289.

[7] V. Kac, P. Cheung, \textit{Quantum Calculus}. Springer, New York, 2002.

[8] R. Kenyon, M. Yin, Parking functions: From combinatorics to probability, preprint, arXiv:2103.17180.

[9] J.P.S. Kung, C.H. Yan, Goncarov polynomials and parking functions. \textit{J. Comb. Theory Ser. A} \textbf{102}(1) (2003) 16-37.

[10] I. G. Macdonald, \textit{Symmetric Functions and Hall Polynomials}, second ed., Oxford University Press, New York, 1995.

[11] C. L. Mallows and J. Riordan, The inversion enumerator for labeled trees. \textit{Bull. Amer. Math. Soc.} \textbf{74} (1968) 92-94.

[12] B. E. Sagan, $q$-Stirling numbers in type B, preprint, arXiv:2205.14078.

[13] R. P. Stanley, \textit{Enumerative Combinatorics}, Vol. 2. Cambridge University Press, Cambridge, 1999.

[14] R. P. Stanley, Some aspects of $(r,k)$-parking functions. \textit{J. Comb. Theory Ser. A} \textbf{159} (2018) 54-78.

[15] M. Wachs and D. White, $p,q$-Stirling numbers and set partition statistics. \textit{J. Comb. Theory Ser. A} \textbf{56} (1991) 27-46.

[16] C. H. Yan, Generalized parking functions, tree inversions, and multicolored graphs. \textit{Adv. in Appl. Math.} \textbf{27}(2-3) (2001) 641-670.

[17] C.H. Yan, Parking functions, in M. Bona (ed.), \textit{Handbook of Enumerative Combinatorics}, CRC Press, Boca Raton, FL (2015) pp. 835-893.
\end{document}
2205.14924
\section{Introduction}
\subsection{Background}
Denote by $ \|\cdot\| $ the distance to the nearest integer. The famous Dirichlet Theorem asserts that for any real numbers $ x\in[0,1] $ and $ N\ge 1 $, there exists a positive integer $ n $ such that
\begin{equation}\label{eq:Dirichlet's theorem}
\|nx\|<\frac{1}{N}\quad\text{and}\quad n\le N.
\end{equation}
As a corollary,
for any $ x\in[0,1] $, there exist infinitely many positive integers $ n $ such that
\begin{equation}\label{eq:Dirichlet's corollary}
\|nx\|\le\frac{1}{n}.
\end{equation}
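Both statements are easy to probe numerically. A minimal sketch (Python; the choice $x=\sqrt2$ and the helper names are ours): the denominators of the continued fraction convergents of $\sqrt 2$ satisfy \eqref{eq:Dirichlet's corollary}, while e.g. $n=4$ does not, and \eqref{eq:Dirichlet's theorem} holds for every $N$ tested.

```python
import math

x = math.sqrt(2)  # an arbitrary irrational, for illustration

def dist_to_nearest_int(t):
    """||t||: the distance from t to the nearest integer."""
    return abs(t - round(t))

# the corollary: infinitely many n with ||n x|| <= 1/n; in particular
# every continued-fraction convergent denominator of sqrt(2) works
convergent_denoms = [1, 2, 5, 12, 29, 70, 169, 408, 985, 2378]
assert all(dist_to_nearest_int(n * x) <= 1 / n for n in convergent_denoms)

# ... while the inequality fails for many other n (e.g. n = 4)
assert dist_to_nearest_int(4 * x) > 1 / 4

# the uniform statement: for each N, some n <= N has ||n x|| < 1/N
assert all(any(dist_to_nearest_int(n * x) < 1 / N for n in range(1, N + 1))
           for N in range(1, 500))
```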
Let $ (\{nx\})_{n\ge 0} $ be the orbit of the irrational rotation by an irrational number $ x $, where $ \{nx\} $ is the fractional part of $ nx $. From the dynamical perspective, Dirichlet Theorem and its corollary describe the rate at which $ 0 $ is approximated by the orbit $ (\{nx\})_{n\ge 0} $ in a uniform and asymptotic way, respectively.
In general, one can study the Hausdorff dimension of the set of points which are approximated by the orbit $ (\{nx\})_{n\ge 0} $ with a faster speed.
For the asymptotic approximation, Bugeaud~\cite{Bug03} and, independently, Schmeling and Troubetzkoy~\cite{ScTr03} proved that
\[\hdim \{y\in[0,1]:\|nx-y\|<n^{-\kappa}\text{ for infinitely many }n\}=\frac{1}{\kappa},\]
where $ \hdim $ stands for the Hausdorff dimension. The corresponding uniform approximation problem was recently studied by Kim and Liao~\cite{KimLi19} who obtained the Hausdorff dimension of the set
\[\{y\in[0,1]:\forall N\gg1,~\exists n\le N, \text{ such that }\|nx-y\|<N^{-\kappa}\}.\]
Naturally, one wonders about the analog results when the orbit $ (\{nx\})_{n\ge 0} $ is replaced by an orbit $ (T^nx)_{n\ge 0} $ of a general dynamical system $ ([0,1], T) $. For any $ \kappa>0 $, Fan, Schmeling and Troubetzkoy~\cite{FaScTr13} considered the set
\[\mathcal L^{\kappa}(x):=\{y\in[0,1]:|T^nx-y|<n^{-\kappa}\text{ for infinitely many }n\}\]
of points that are asymptotically approximated by the orbit $ (T^nx)_{n\ge 0} $ with a given speed $ n^{-\kappa} $, where $ T $ is the doubling map. It seems difficult to investigate the size of $ \mathcal L^{\kappa}(x) $ when $ x $ is not a dyadic rational, as the distribution of $ (T^nx)_{n\ge 0} $ is not as well understood as that of $ (\{nx\})_{n\ge 0} $; see for example~\cite{AlBe98} for more details about the distribution of $ (\{nx\})_{n\ge 0} $.
However, from the viewpoint of ergodic theory, Fan, Schmeling and Troubetzkoy~\cite{FaScTr13} obtained the Hausdorff dimension of $ \mathcal L^{\kappa}(x) $ for $ \uph $-a.e.\,$ x $, where $ \uph $ is the Gibbs measure associated with a H\"older continuous potential $ \phi $. They found that the size of $ \mathcal L^{\kappa}(x) $ is closely related to the local dimension of $ \uph $ and to the first hitting time for shrinking targets.
In their paper~\cite{LiaoSe13}, Liao and Seuret extended the results of~\cite{FaScTr13} to Markov maps. Later, Persson and Rams~\cite{PerRams17} considered more general piecewise expanding interval maps, and proved results similar to those of~\cite{FaScTr13,LiaoSe13}. These studies are also closely related to the metric theory of random covering sets; see~\cite{BaFa05, Dvo56,Fan02,JoSt08,Seu18,She72,Tan15} and the references therein.
As a counterpart of the dynamically defined asymptotic approximation set $ \mathcal L^{\kappa}(x) $, we would like to study the corresponding uniform approximation set $ \uk $ defined as
\[\begin{split}
\uk:&=\{y\in[0,1]:\forall N\gg1,~\exists n\le N, \text{ such that }|T^nx-y|<N^{-\kappa}\}\\
&=\bigcup_{i=1}^\infty\bigcap_{N= i}^\infty\bigcup_{n=1}^N B(T^nx, N^{-\kappa}),
\end{split}\]
where $ B(x,r) $ is the open ball of center $ x $ and radius $ r $, and $ T $ is a Markov map (see Definition~\ref{d:Markov map} below).
By a simple argument, one can check that $ \uk\setminus\{T^nx\}_{n\ge 0}\subset \mathcal L^\kappa(x) $. Thus trivially one has $ \hdim \uk\le\hdim\mathcal L^\kappa(x) $.
As in the studies of $ \mathcal L^\kappa(x) $, we are interested in the sizes (Lebesgue measure and Hausdorff dimension) of $ \uk $. Our first result asserts that for any $ \kappa>0 $, the Hausdorff dimension of $ \uk $ is almost surely constant with respect to a $ T $-invariant ergodic measure.
\begin{thm}\label{t:sub}
Let $ T $ be a Markov map on $ [0,1] $ and $ \nu $ be a $ T $-invariant ergodic measure. Then for any $ \kappa>0 $, $ \hdim\uk $ is constant almost surely.
\end{thm}
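The set $ \uk $ lends itself to numerical experiments when $ T $ is the doubling map: $ T $ acts as the shift on binary digits, and $ \bigcup_{n=1}^N B(T^nx,N^{-\kappa}) $ covers $ [0,1] $ exactly when the sorted orbit points leave no gap larger than $ 2N^{-\kappa} $ and no endpoint gap larger than $ N^{-\kappa} $. A sketch under these assumptions (the digit-based representation and function names are ours); the eventually periodic point $ x=1/3 $ illustrates how the covering property degenerates on non-typical orbits.

```python
import random

def orbit_point(bits, n, prec=60):
    """T^n x for the doubling map, read from the binary digits of x (a shift)."""
    return sum(b / 2.0 ** (k + 1) for k, b in enumerate(bits[n:n + prec]))

def covers(bits, N, kappa):
    """Does the union of B(T^n x, N^-kappa), 1 <= n <= N, cover [0,1]?"""
    r = N ** (-kappa)
    pts = sorted(orbit_point(bits, n) for n in range(1, N + 1))
    if pts[0] > r or pts[-1] < 1 - r:
        return False
    return all(b - a < 2 * r for a, b in zip(pts, pts[1:]))

# x = 1/3 = 0.010101..._2 has the periodic orbit {1/3, 2/3}: the covering
# condition holds only while the radius N^-kappa exceeds 1/3
bits_third = [0, 1] * 600
assert covers(bits_third, 4, 0.5)          # radius 1/2 >= 1/3
assert not covers(bits_third, 100, 0.5)    # radius 1/10 < 1/3

# for a "typical" x (random digits, mimicking a generic point for Lebesgue
# measure), one can probe how the covering behavior depends on kappa
rng = random.Random(0)
digits = [rng.getrandbits(1) for _ in range(4000)]
for N in (10, 100, 1000):
    print(N, covers(digits, N, 0.5), covers(digits, N, 2.0))
```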
The above theorem gives no information on the value of the almost sure Hausdorff dimension of $ \uk $. In order to calculate $ \hdim\uk $ for $ \nu $-a.e.\,$ x $, we impose a stronger condition, the same as that of Fan, Schmeling and Troubetzkoy~\cite{FaScTr13} and Liao and Seuret~\cite{LiaoSe13}, namely that $ \nu $ is the Gibbs measure $ \uph $ of $ T $ associated with a H\"older continuous potential $ \phi $. Let $ \bk:=[0,1]\setminus \uk $. Motivated by the above theorem and the extensive studies on the asymptotic set $ \mathcal L^{\kappa}(x) $ and its variants, we are interested in the following questions.
\begin{enumerate}[(Q1)]
\item Does there exist $ \kappa>0 $ such that $ \uk $ is equal to the whole interval $ [0,1] $ for $ \uph $-a.e.\,$ x $? If so, how can one determine the critical value of $ \kappa $ such that $ \uk=[0,1] $ for $ \uph $-a.e.\,$ x $?
\item When $ \uk $ is not equal to $ [0,1] $ for $ \uph $-a.e.\,$ x $, is it possible that $ \uk $ has full Lebesgue measure for $ \uph $-a.e.\,$ x $? If so, can we compute the Hausdorff dimension of $ \bk $ for $ \uph $-a.e.\,$ x $?
\item When $ \uk $ is a Lebesgue-null set for $ \uph $-a.e.\,$ x $, what is the Hausdorff dimension of $ \uk $ for $ \uph $-a.e.\,$ x $?
\end{enumerate}
In this paper, we answer these questions when $ T $ is an expanding Markov map of the interval $ [0,1] $ with a finite partition---a Markov map, for short.
\begin{defn}[Markov map]\label{d:Markov map}
A transformation $ T:[0,1]\to[0,1] $ is an expanding Markov map with a finite partition provided that there is a partition of $ [0,1] $ into subintervals $ I(i)=(a_i, a_{i+1}) $ for $ i=0,\dots,Q-1 $ with endpoints $ 0=a_0<a_1<\cdots<a_Q=1 $ satisfying the following properties.
\begin{enumerate}[\upshape(1)]
\item There is an integer $ n_0 $ and a real number $ \rho $ such that $ |(T^{n_0})'|\ge\rho>1 $.
\item $ T $ is strictly monotonic and can be extended to a $ C^2 $ function on each $ \overline{I(i)} $.
\item If $ I(j)\cap T\big(I(k)\big)\ne\emptyset $, then $ I(j)\subset T\big(I(k)\big) $.
\item There is an integer $ R $ such that $ I(j)\subset \cup_{n=1}^R T^n\big(I(k)\big) $ for every $ k $, $ j $.
\item For every $ k\in\{0,1,\dots,Q-1\} $, $ \sup_{(x,y,z)\in I(k)^3}\big(|T''(x)|/|T'(y)||T'(z)|\big)<\infty $.
\end{enumerate}
\end{defn}
For the probability measure $ \nu $ and for $ y\in[0,1] $ we set
\[\ud_\nu(y):=\liminf_{r\to 0}\frac{\log\nu\big(B(y,r)\big)}{\log r}\quad\text{and}\quad\od_\nu(y):=\limsup_{r\to 0}\frac{\log\nu\big(B(y,r)\big)}{\log r},\]
which are called, respectively, the lower and upper local dimensions of $ \nu $. When $ \ud_\nu(y)=\od_\nu(y) $, their common value is denoted by $ d_\nu(y) $, and is simply called the local dimension of $ \nu $ at $ y $.
Let $ D_\nu $ be the multifractal spectrum of $ \nu $ defined by
\[D_\nu(s):= \hdim\{y\in[0,1]:d_\nu(y)=s\}.\]
Our answers to questions (Q1)-(Q3) are stated in the following theorem. For a Markov map $ T $ and a H\"older continuous potential $ \phi $, we first define three critical values by
\begin{align}
\alpha_-:&=\min_{\nu\in\mi}\frac{-\int\phi \,d\nu}{\int\log|T'|\,d\nu},\label{eq:alpha-}\\
\alpha_{\max}:&=\frac{-\int\phi \, d\mu_{\max}}{\int\log|T'|\, d\mu_{\max}},\label{eq:alphamax}\\
\alpha_+:&=\max_{\nu\in\mi}\frac{-\int\phi \, d\nu}{\int\log|T'|\, d\nu},\label{eq:alpha+}
\end{align}
where $ \mi $ is the set of $ T $-invariant probability measures on $ [0,1] $ and $ \mu_{\max} $ is the Gibbs measure associated with the potential $ -\log|T'| $. By definition, it holds that $ \alpha_-\le \alpha_{\max}\le\alpha_+ $. Indeed, the quantities $ \alpha_- $, $ \alpha_{\max} $ and $ \alpha_+ $ depend on $ T $ and $ \phi $. However, for simplicity, we leave out the dependence unless the context requires specification.
Denote by $ \lambda $ the Lebesgue measure on $ [0,1] $.
\begin{thm}\label{t:main}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure.
\begin{enumerate}[\upshape(1)]
\item If $ 1/\kappa>\alpha_+ $, then for $ \uph $-a.e.\,$ x $, $ \bk=\emptyset $ and hence $ \uk=[0,1] $.
\item For $ \mu_\phi $-a.e.\,$ x $,
\[\lambda\big(\uk\big)=1-\lambda\big(\bk\big)=\begin{cases}
0\quad&\text{if }1/\kappa\in(0,\alpha_{\max}),\\
1\quad&\text{if }1/\kappa\in(\alpha_{\max},+\infty).
\end{cases}\]
\item For $ \mu_\phi $-a.e.\,$ x $,
\[\hdim\uk=\begin{cases}
D_{\mu_\phi}(1/\kappa) &\text{if }1/\kappa\in(0,\alpha_{\max}]\setminus\{\alpha_-\},\\
1&\text{if }1/\kappa\in(\alpha_{\max},+\infty).
\end{cases}\]
\item For $ \mu_\phi $-a.e.\,$ x $,
\[\hdim\bk=\begin{cases}
1&\text{if }1/\kappa\in(0,\alpha_{\max}),\\
D_{\mu_\phi}(1/\kappa) &\text{if }1/\kappa\in[\alpha_{\max},+\infty)\setminus \{\alpha_+\}.
\end{cases}\]
\end{enumerate}
\end{thm}
\begin{rem}
It is worth noting that the multifractal spectrum $ D_{\uph}(s) $ vanishes if $ s\notin[\alpha_-,\alpha_+] $. So if $ 1/\kappa<\alpha_- $, then $ \hdim\uk=0 $ for $ \uph $-a.e.\,$ x $.
\end{rem}
\begin{rem}
The cases $ 1/\kappa=\alpha_- $ and $ 1/\kappa=\alpha_+ $ are not covered by the above theorem. However, if the multifractal spectrum $ D_{\uph} $ is continuous at $ \alpha_- $ (respectively $ \alpha_+ $), we get that $ \hdim\mathcal U^{\alpha_-}(x)=0 $ (respectively $ \hdim\mathcal B^{\alpha_+}(x)=0 $) for $ \uph $-a.e.\,$ x $. The situation becomes more subtle if $ D_{\uph}(\cdot) $ is discontinuous at $ \alpha_- $ (respectively $ \alpha_+ $). Our methods do not yield the value of $ \hdim\mathcal U^{\alpha_-}(x) $ (respectively $ \hdim\mathcal B^{\alpha_+}(x) $) for $ \uph $-a.e.\,$ x $.
\end{rem}
\begin{exmp}\label{ex:example 1}
Suppose that $ T $ is the doubling map and $ \uph:=\lambda $ is the Lebesgue measure. Applying Theorem~\ref{t:main}, we have for $ \lambda $-a.e.\,$ x $,
\[\hdim\uk=\begin{cases}
0 &\text{if }1/\kappa\in(0,1),\\
1&\text{if }1/\kappa\in(1,+\infty).
\end{cases}\]
The Lebesgue measure is monofractal and hence the corresponding multifractal spectrum $ D_\lambda $ is discontinuous at $ 1 $. Theorem~\ref{t:main} fails to provide any metric statement for the set $ \mathcal U^1(x) $ for $ \lambda $-a.e.\,$ x $. But we can still deduce from Theorem~\ref{t:sub} that $ \hdim\mathcal U^1(x) $ is constant for $ \lambda $-a.e.\,$ x $.
\end{exmp}
Sets similar to $ \mathcal U^1(x) $ in Example~\ref{ex:example 1} have recently been studied by Koivusalo, Liao and Persson~\cite{KoLiPe21}. In their paper, instead of the orbit $ (T^nx)_{n\ge 0} $, they investigated the sets of points uniformly approximated by an independent and identically distributed sequence $ (\omega_n)_{n\ge 1} $. Specifically, they showed that with probability one, the Hausdorff dimension of the set
\[\begin{split}
\mathcal U=\{y\in[0,1]:\forall N\gg1,~\exists n\le N, \text{ such that }|\omega_n-y|<1/N\}
\end{split}\]
is bounded below by $ 0.2177444298485995 $ (\cite[Theorem 5]{KoLiPe21}).
\begin{exmp}
Let $ p\in(1/2,1) $. Suppose that $ T $ is the doubling map on $ [0,1] $ and $ \mu_p $ is the $ (p,1-p) $ Bernoulli measure. It is known that the multifractal spectrum $ D_{\mu_p} $ is continuous on $ (0,+\infty) $ and attains its unique maximal value $ 1 $ at $ -\log_2 \big(p(1-p)\big)/2 $. Theorem~\ref{t:main} then gives that for $ \mu_p $-a.e.\,$ x $,
\[\hdim\uk=\begin{dcases}
D_{\mu_p}(1/\kappa) &\text{if }1/\kappa\in\bigg(0,\frac{-\log_2 \big(p(1-p)\big)}{2}\bigg),\\
1&\text{if }1/\kappa\in\bigg[\frac{-\log_2 \big(p(1-p)\big)}{2},+\infty\bigg).
\end{dcases}\]
\end{exmp}
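The spectrum $ D_{\mu_p} $ in this example can be computed parametrically: points whose binary expansion has frequency $ \theta $ of the digit carrying mass $ p $ have local dimension $ \alpha(\theta)=-\theta\log_2 p-(1-\theta)\log_2(1-p) $, and the corresponding level set has Hausdorff dimension the binary entropy of $ \theta $ (a classical fact; the sketch and names below are ours). The asserts check the critical values quoted above.

```python
import math

def alpha(theta, p):
    """Local dimension of mu_p at points with digit frequency theta."""
    return -(theta * math.log2(p) + (1 - theta) * math.log2(1 - p))

def D(theta):
    """Hausdorff dimension of that level set: the binary entropy of theta."""
    if theta in (0.0, 1.0):
        return 0.0
    return -(theta * math.log2(theta) + (1 - theta) * math.log2(1 - theta))

p = 0.7
alpha_max = -math.log2(p * (1 - p)) / 2       # where D_{mu_p} attains 1
alpha_minus, alpha_plus = alpha(1.0, p), alpha(0.0, p)

assert abs(alpha(0.5, p) - alpha_max) < 1e-12  # theta = 1/2 gives alpha_max
assert D(0.5) == 1.0                           # the spectrum peaks at 1
assert alpha_minus < alpha_max < alpha_plus
assert all(D(t) < 1.0 for t in (0.1, 0.3, 0.45, 0.6, 0.9))
```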
Let $ \hdim \nu $ be the dimension of the Borel probability measure $ \nu $ defined by
\[\hdim \nu=\inf\{\hdim E:E\text{ is a Borel subset of }[0,1]\text{ and }\nu(E)>0\}.\]
\begin{rem}
Since, as discussed above, $ \uk\setminus\{T^nx\}_{n\ge 0}\subset \mathcal L^\kappa(x) $, one may wonder whether the sets $ \uk $ and $ \mathcal L^\kappa(x) $ are essentially different. More precisely, is it possible that $ \hdim\uk $ is strictly less than $ \hdim\mathcal L^\kappa(x) $? Theorem~\ref{t:main} answers this question affirmatively. Compared with the asymptotic approximation set $ \mathcal L^{\kappa}(x) $, the structure of the uniform approximation set $ \uk $ has a notable feature. When $ 1/\kappa\in(0,\hdim\uph)\setminus\{\alpha_-\} $, the map $ 1/\kappa\mapsto\hdim\uk $ agrees with the multifractal spectrum $ D_{\uph}(1/\kappa) $, while the map $ 1/\kappa\mapsto\hdim\mathcal L^{\kappa}(x) $ is the linear function $ f(1/\kappa)=1/\kappa $, independent of the multifractal spectrum. Therefore, $ \hdim\uk<\hdim\mathcal L^\kappa(x) $. See Figure 1 for an illustration.
\end{rem}
\begin{figure}[H]
\centering
\includegraphics[height=11cm, width=16cm]{Graphic}
\caption*{\footnotesize Figure 1: The multifractal spectrum of $ \uph $ and the maps $ 1/\kappa\mapsto\hdim\bk $, $ 1/\kappa\mapsto\hdim\uk $ and $ 1/\kappa\mapsto\hdim\mathcal L^\kappa(x) $.}
\end{figure}
For the asymptotic approximation set $ \mathcal L^\kappa(x) $, the most difficult part lies in establishing the lower bound for $ \hdim\mathcal L^\kappa(x)$ when $ 1/\kappa<\hdim\uph $, for which a multifractal mass transference principle for Gibbs measures is applied; see \cite[\S 8]{FaScTr13}, \cite[\S 5.2]{LiaoSe13} and~\cite[\S 6]{PerRams17}. Specifically, if $ \uph(\mathcal L^\kappa(x))=1 $, then for any $ 1/\kappa_0<1/\kappa $, the multifractal mass transference principle implies that $ \hdim \mathcal L^{\kappa_0}(x)\ge 1/\kappa_0 $. However, recent progress in uniform approximation~\cite{BuLi16,KimLi19,KoLiPe21,ZhWu20} indicates that there is no mass transference principle for uniform approximation sets. Therefore we cannot expect $ \hdim\uk $ to decrease linearly with respect to $ 1/\kappa $ as $ \hdim\mathcal L^\kappa(x) $ does. The main new ingredient of this paper is the upper bound for $ \hdim\uk $ when $ 1/\kappa<\hdim\uph $, which is the difficult part. To overcome the difficulty, we fully develop and combine the methods of~\cite{FaScTr13} and~\cite{KoLiPe21}.
The paper is organized as follows. We start in Section 2 with some preparations on Markov maps, and then use ergodic theory to give a direct proof of Theorem~\ref{t:sub}. Section 3 recalls some facts on multifractal
analysis and a variational principle which are essential in the proof of
Theorem~\ref{t:main}. Section 4 describes some relations among hitting time, approximation rate and local dimension of $ \uph $. From these relations we derive items (1), (2) and (4) of Theorem~\ref{t:main} in Section 5.1, as well as the lower bound of $ \hdim\uk $ in Section 5.2. In the same Section 5.2, we establish the upper bound of $ \hdim\uk $, which is arguably the most substantial part of the paper.
\section{Basic definitions and the proof of Theorem~\ref{t:sub}}
\subsection{Covering of $ [0,1] $ by basic intervals}
Let $ T $ be a Markov map as defined in Definition~\ref{d:Markov map}. For each $ (i_1i_2\cdots i_n)\in\{0,1,\dots,Q-1\}^n $, we call
\[I(i_1i_2\cdots i_n):=I(i_1)\cap T^{-1}\big(I(i_2)\big)\cap\dots\cap T^{-n+1}\big(I(i_n)\big)\]
a basic interval of generation $ n $. It is possible that $ I(i_1i_2\cdots i_n) $ is empty for some $ (i_1i_2\cdots i_n)\in\{0,1,\dots,Q-1\}^n $.
The collection of non-empty basic intervals of a given generation $ n $ will be denoted by $ \Sigma_n $. For any $ x\in[0,1] $, we denote by $ I_n(x) $ the unique basic interval $ I\in\Sigma_n $ containing $ x $.
By the definition of a Markov map, we obtain the following bounded distortion property on basic intervals: there is a constant $ L>1 $ such that for any $ x\in[0,1] $,
\begin{equation}\label{eq:bdp}
\text{for any }n\ge 1,\quad L^{-1}|(T^n)'(x)|^{-1}\le |I_n(x)|\le L|(T^n)'(x)|^{-1},
\end{equation}
where $ |I| $ is the length of the interval $ I $.
Consequently,
we can find two constants $ 1<L_1<L_2 $ such that
\begin{equation}\label{eq:length of basic interval}
\text{for every } I\in \Sigma_n,\quad L_2^{-n}\le |I|\le L_1^{-n}.
\end{equation}
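As a simple illustration (a standard example, not used in the sequel), take the doubling map $ Tx=2x \bmod 1 $ with $ I(0)=[0,1/2) $ and $ I(1)=[1/2,1] $ (up to endpoint conventions). Every word $ (i_1\cdots i_n)\in\{0,1\}^n $ then gives a non-empty basic interval, namely the dyadic interval
\[I(i_1\cdots i_n)=\bigg[\sum_{j=1}^{n}\frac{i_j}{2^{j}},\ \sum_{j=1}^{n}\frac{i_j}{2^{j}}+\frac{1}{2^{n}}\bigg),\]
so that $ |I_n(x)|=2^{-n}=|(T^n)'(x)|^{-1} $. In this case~\eqref{eq:bdp} holds with any $ L>1 $, and~\eqref{eq:length of basic interval} holds with, e.g., $ L_1=2 $ and $ L_2=3 $.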
\subsection{Proof of Theorem~\ref{t:sub}}
Now we are able to prove that $ \hdim\uk $ is constant almost surely with respect to an ergodic measure $ \nu $. Recall that a $ T $-invariant measure $ \nu $ is ergodic if and only if any $ T $-invariant function is constant almost surely.
\begin{proof}[Proof of Theorem~\ref{t:sub}]
Fix $ \kappa>0 $ and define the function $ f_\kappa:x\mapsto\hdim\uk $. We will prove the following two claims.
\begin{enumerate}[{Claim} 1.]
\item For any $ x\in[0,1] $, $ f_\kappa(x)\le f_\kappa(Tx) $.
\item The function $ f_\kappa $ is measurable.
\end{enumerate}
Suppose that Claims 1 and 2 are true. Since $ f_\kappa $ is measurable and $ 0\le f_\kappa\le 1 $, the invariance of $ \nu $ gives $ \int f_\kappa\, d\nu=\int f_\kappa\circ T\, d\nu $; combined with Claim 1, this forces $ f_\kappa(x)=f_\kappa(Tx) $ for $ \nu $-a.e.\,$ x $, that is, $ f_\kappa $ is invariant with respect to $ \nu $. In the presence of ergodicity of $ \nu $, $ f_\kappa $ is constant almost surely, which proves the theorem.
Now, let us prove the two claims. For Claim 1, let $ y\in\uk\setminus\{Tx\} $ and let $ q $ be the smallest integer satisfying $ y\notin B(Tx, 2^{-q\kappa}) $. By the definition of $ \uk $, for any integer $ N> 2^q $ large enough, there exists $ 1\le n\le N $ such that $ y\in B(T^nx, N^{-\kappa}) $. Moreover, since $ N>2^q $ implies $ N^{-\kappa}<2^{-q\kappa} $, we must have $ n\ne1 $, hence $ y\in B\big(T^{n-1}(Tx),N^{-\kappa}\big) $ with $ 1\le n-1\le N $. This gives $ y\in\mathcal U^\kappa(Tx) $. Therefore $ \uk\setminus\{Tx\}\subset \mathcal U^{\kappa}(Tx) $, so $ \hdim\uk\le\hdim \mathcal U^\kappa(Tx) $, or equivalently, $ f_\kappa(x)\le f_\kappa(Tx) $. Hence Claim 1 follows.
For Claim 2, it suffices to prove that for any $ t>0 $, the set
\[\begin{split}
A(t):=\{x\in[0,1]:f_\kappa(x)<t\}=\{x\in[0,1]:\hdim\uk<t\}
\end{split}\]
is measurable. Recall that
\begin{equation}\label{eq:uk definition}
\uk=\bigcup_{i=1}^\infty\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa}).
\end{equation}
Throughout the proof of this theorem, we will assume that the balls $ B(T^nx, N^{-\kappa}) $ are closed. This makes the compactness argument below available, and it does not change the Hausdorff dimension of $ \uk $.
By the definition of Hausdorff dimension, a point $ x\in A(t) $ if and only if there exists $ h\in \N $ such that
\[\mathcal H^{t-\frac{1}{h}}\bigg(\bigcup_{i=1}^\infty\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)=0,\]
or equivalently, for all $ i\ge 1 $,
\begin{equation}\label{eq:ht-1/j=0}
\mathcal H^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)=0.
\end{equation}
By the definition of Hausdorff measure,~\eqref{eq:ht-1/j=0} holds if and only if for any $ j,k\in\N $,
\[\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)<\frac{1}{k}.\]
Hence, we see that
\[\begin{split}
A(t)=\bigcup_{h=1}^\infty\bigcap_{i=1}^\infty\bigcap_{j=1}^\infty\bigcap_{k=1}^\infty\bigg\{ x\in [0,1]:\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)<\frac{1}{k}\bigg\}=:\bigcup_{h=1}^\infty\bigcap_{i=1}^\infty\bigcap_{j=1}^\infty\bigcap_{k=1}^\infty B_{h,i,j,k}.
\end{split}\]
If $ x\in B_{h,i,j,k} $, then there is a countable open cover $ \{U_p\}_{p\ge 1} $ with $ 0<|U_p|<1/j $ satisfying
\begin{equation}\label{eq:subset Up}
\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\subset \bigcup_{p=1}^\infty U_p\quad \text{and}\quad\sum_{p=1}^\infty |U_p|^{t-\frac{1}{h}}<\frac{1}{k}.
\end{equation}
The set $ \bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa}) $ can be viewed as the intersection of a family of decreasing compact sets $ \big\{\bigcap_{N= i}^l\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\big\}_{l\ge i} $, hence there exists $ l_0\ge i $ satisfying
\[\bigcap_{N= i}^{l_0}\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\subset\bigcup_{p=1}^\infty U_p,\]
which implies
\[\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^{l_0}\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)<\frac1k.\]
We then deduce that
\begin{equation}\label{eq:Bijkl subset Cijklm}
B_{h,i,j,k}\subset\bigcup_{l=i}^{\infty}\bigg\{ x\in [0,1]:\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^l\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)<\frac1k\bigg\}=:\bigcup_{l=i}^{\infty}C_{h,i,j,k,l}.
\end{equation}
If $ x\in C_{h,i,j,k,l} $ for some $ l\ge 1 $, then
\[\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^\infty\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)\le\mathcal H_{\frac{1}{j}}^{t-\frac{1}{h}}\bigg(\bigcap_{N= i}^l\bigcup_{n=1}^NB(T^nx, N^{-\kappa})\bigg)<\frac{1}{k}.\]
Hence $ x\in B_{h,i,j,k} $ and the reverse inclusion of~\eqref{eq:Bijkl subset Cijklm} is proved.
Notice that $ T,T^2,\dots,T^l $ are continuous on every basic interval of generation greater than $ l $. For any $ x\in C_{h,i,j,k,l} $, denote $ \mathcal S(x):=\bigcap_{N= i}^l\bigcup_{n=1}^NB(T^nx, N^{-\kappa}) $. There is an open cover $ (V_p)_{p\ge 1} $ of $ \mathcal S(x) $ with $ 0<|V_p|<1/j $ and $ \sum_p |V_p|^{t-1/h}<1/k $. Since $ \mathcal S(x) $ is compact, the distance $ \delta $ between $ \mathcal S(x) $ and the complement of $ \bigcup_{p\ge 1} V_p $ is positive. By the continuity of $ T,T^2,\dots,T^l $, there is some $ l_1:=l_1(\delta)>l $ such that if $ y\in I_{l_1}(x) $, then $ \mathcal S(y) $ is contained in the $ \delta/2 $-neighborhood of $ \mathcal S(x) $. Thus $ \mathcal S(y) $ can also be covered by $ \bigcup_{p\ge 1} V_p $, and $ I_{l_1}(x)\subset C_{h,i,j,k,l} $. Finally, $ C_{h,i,j,k,l} $ is a union of basic intervals, hence measurable.
Now combining the equalities obtained above, we have
\[A(t)=\bigcup_{h=1}^\infty\bigcap_{i=1}^\infty\bigcap_{j=1}^\infty\bigcap_{k=1}^\infty\bigcup_{l=i}^{\infty}C_{h,i,j,k,l},\]
which is a Borel measurable set.
\end{proof}
\section{Multifractal properties of Gibbs measures}\label{s:Multifractal properties}
In this section, we review some of the standard facts on multifractal properties of Gibbs measures.
\begin{defn}\label{d:Gibbs measure}
A Gibbs measure $ \uph $ associated with a potential $ \phi $ is a probability measure satisfying the following: there exists a constant $ \gamma>0 $ such that
\[\text{for any basic interval }I\in\Sigma_n, \quad\gamma^{-1}\le\frac{\uph(I)}{e^{S_n\phi(x)-nP(\phi)}}\le\gamma,\quad\text{for every }x\in I,\]
where $ S_n\phi(x)=\phi(x)+\cdots+\phi(T^{n-1}x) $ is the $ n $th Birkhoff sum of $ \phi $ at $ x $, and $ P(\phi) $ is the topological pressure of $\phi$ defined by
\[P(\phi)=\lim_{n\to\infty}\frac{1}{n}\log\sum_{I\in\Sigma_n}\sup_{x\in I}e^{S_n\phi(x)}.\]
\end{defn}
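For example, on the doubling map $ Tx=2x\bmod 1 $ (a standard construction, stated here only as an illustration), fix $ p_0,p_1>0 $ with $ p_0+p_1=1 $ and let $ \phi $ take the constant value $ \log p_i $ on $ I(i) $, $ i\in\{0,1\} $; such a locally constant potential is H\"older continuous with respect to the natural symbolic metric. For $ x\in I(i_1\cdots i_n) $ we have $ e^{S_n\phi(x)}=p_{i_1}\cdots p_{i_n} $, hence
\[P(\phi)=\lim_{n\to\infty}\frac{1}{n}\log\sum_{(i_1\cdots i_n)\in\{0,1\}^n}p_{i_1}\cdots p_{i_n}=\log(p_0+p_1)=0,\]
and the Bernoulli measure determined by $ \uph\big(I(i_1\cdots i_n)\big)=p_{i_1}\cdots p_{i_n} $ satisfies the Gibbs inequality with $ \gamma=1 $.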
The following theorem ensures the existence and uniqueness of the invariant Gibbs measure.
\begin{thm}[\cite{Bal00,Wal78}]
Let $ T:[0,1]\to[0,1] $ be a Markov map. Then for any H\"older continuous function $ \phi $, there exists a unique $ T $-invariant Gibbs measure $ \uph $ associated with $ \phi $. Further, $ \uph $ is ergodic.
\end{thm}
The Gibbs measure $ \uph $ also satisfies the quasi-Bernoulli property (see~\cite[Lemma 4.1]{LiaoSe13}), i.e. for any $ n>k\ge 1 $ and any basic interval $ I(i_1\cdots i_n)\in\Sigma_{n} $, the following holds:
\begin{equation}\label{eq:quasi-Bernoulli}
\gamma^{-3}\uph(I')\uph(I'')\le\uph(I)=\uph(I'\cap T^{-k}I'')\le \gamma^3\uph(I')\uph(I''),
\end{equation}
where $ I'=I(i_1\cdots i_k)\in\Sigma_k $ and $ I''=I(i_{k+1}\cdots i_n)\in\Sigma_{n-k} $. It follows immediately that
\begin{equation}\label{eq:quasi-Bernoulli consequence}
\text{for any }m\ge k,\quad\uph(I'\cap T^{-m}U)\le \gamma^3\uph(I')\uph(U),
\end{equation}
where $ U $ is an open set in $ [0,1] $.
We adopt the convention that $ \phi $ is normalized, i.e. $ P(\phi)=0 $. If this is not the case, we replace $\phi$ by $ \phi-P(\phi) $.
Now, let us recall some standard facts on multifractal analysis, which aims at studying the multifractal spectrum $ D_{\uph} $. Several multifractal analysis results were summarised in~\cite{LiaoSe13}, and we present them as follows. The proofs can also be found in the references~\cite{BaPeS97,BrMiP92, CoLeP87, PeWe97,Ran89, Sim94}.
\begin{thm}[{\cite[Theorem 2.5]{LiaoSe13}}]\label{t:multifractal analysis}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure. Then, the following hold.
\begin{enumerate}[\upshape(1)]
\item The multifractal spectrum $ D_{\uph} $ of $ \uph $ is a concave real-analytic map on the interval $ (\alpha_-,\alpha_+) $, where $ \alpha_- $ and $ \alpha_+ $ are defined in~\eqref{eq:alpha-} and~\eqref{eq:alpha+}, respectively.
\item The spectrum $ D_{\uph} $ reaches its maximum value $ 1 $ at $ \alpha_{\max} $ defined in~\eqref{eq:alphamax}.
\item The graph of $ D_{\uph} $ and the first bisector intersect at a unique point which is $ (\hdim\uph,\hdim\uph) $. Moreover, $ \hdim\uph $ satisfies
\[\hdim\uph=\frac{-\int\phi\, d\uph}{\int\log|T'|\, d\uph}.\]
\end{enumerate}
\end{thm}
\begin{prop}[{\cite[\S 2.3]{LiaoSe13}}]\label{p:topological pressure equals to 0}
For every $ q\in\R $, there is a unique real number $ \eta_\phi(q) $ such that the topological pressure $ P\big(-\eta_\phi(q)\log |T'|+q\phi\big) $ equals $ 0 $. Further, $ \eta_\phi(q) $ is real-analytic and concave.
\end{prop}
\begin{rem}\label{r:Gibbs measure}
For simplicity, we denote by $ \mu_q $
the $ T $-invariant Gibbs measure associated with the potential $ -\eta_\phi(q)\log |T'|+q\phi $. Clearly, $ \eta_\phi(0)=1 $ and the corresponding measure $ \mu_0 $ is associated with the potential $ -\log|T'| $. By the bounded distortion property~\eqref{eq:bdp}, the Gibbs measure $ \mu_0 $, coinciding with $ \mu_{\max} $, is strongly equivalent to the Lebesgue measure $ \lambda $.
\end{rem}
For every $ q\in\R $, we introduce the exponent
\begin{equation}\label{eq:alpha(q)}
\alpha(q)=\frac{-\int\phi\, d\mu_q}{\int\log|T'|\, d\mu_q}.
\end{equation}
\begin{prop}[{\cite[\S 2.3]{LiaoSe13}}]\label{p:proposition of muq and alpha(q)}
Let $ \mu_q $ and $ \alpha(q) $ be as above. The following statements hold.
\begin{enumerate}[\upshape(1)]
\item The Gibbs measure $ \mu_q $ is supported by the level set $ \{y:d_{\uph}(y)=\alpha(q)\} $ and $ D_{\uph}\big(\alpha(q)\big)=\hdim\mu_q=\eta_\phi(q)+q\alpha(q) $.
\item The map $ \alpha(q) $ is decreasing, and
\begin{align*}
\lim_{q\to+\infty}\alpha(q)=\alpha_-,&\quad\lim_{q\to-\infty}\alpha(q)=\alpha_+,\notag\\
\alpha(1)=\hdim\uph,&\ \ \quad\alpha(0)=\alpha_{\max}.\label{eq:alpha(0)}
\end{align*}
\item The inverse of $ \alpha(q) $ exists, and is denoted by $ q(\alpha) $. Moreover, $ q(\alpha)<0 $ if $ \alpha\in (\alpha_{\max},\alpha_+) $, and $ q(\alpha)\ge 0 $ if $ \alpha\in (\alpha_-,\alpha_{\max}] $.
\end{enumerate}
\end{prop}
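To illustrate Propositions~\ref{p:topological pressure equals to 0} and~\ref{p:proposition of muq and alpha(q)} in a concrete setting (the standard Bernoulli computation on the doubling map, not needed in the sequel), let $ Tx=2x\bmod 1 $ and let $ \uph $ be the Bernoulli measure of weights $ (p_0,p_1) $, i.e. the Gibbs measure of the locally constant potential $ \phi\equiv\log p_i $ on $ I(i) $, with $ p_0+p_1=1 $ and $ p_0\ne p_1 $. Since $ |T'|\equiv 2 $, the pressure equation $ P\big(-\eta_\phi(q)\log|T'|+q\phi\big)=0 $ reads $ \log\big(2^{-\eta_\phi(q)}(p_0^q+p_1^q)\big)=0 $, whence
\[\eta_\phi(q)=\frac{\log(p_0^q+p_1^q)}{\log 2}.\]
The measure $ \mu_q $ is then the Bernoulli measure of weights $ w_i=p_i^q/(p_0^q+p_1^q) $, so that by~\eqref{eq:alpha(q)},
\[\alpha(q)=\frac{-w_0\log p_0-w_1\log p_1}{\log 2},\]
which decreases from $ \alpha_+=\max_i\frac{-\log p_i}{\log 2} $ (as $ q\to-\infty $) to $ \alpha_-=\min_i\frac{-\log p_i}{\log 2} $ (as $ q\to+\infty $), with $ \alpha(1)=\frac{-p_0\log p_0-p_1\log p_1}{\log 2}=\hdim\uph $ and $ \eta_\phi(0)=1 $.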
For a measure $ \nu $, define the lower and upper Markov pointwise dimensions respectively by
\[\underline{M}_{\nu}(y):=\liminf_{n\to\infty}\frac{\log\nu\big(I_n(y)\big)}{\log |I_n(y)|},\quad \overline{M}_{\nu}(y):=\limsup_{n\to\infty}\frac{\log\nu\big(I_n(y)\big)}{\log |I_n(y)|}.\]
When $ \underline{M}_{\nu}(y)=\overline{M}_{\nu}(y) $, their common value is denoted by $ M_\nu(y) $. By~\eqref{eq:bdp} and~\eqref{eq:length of basic interval}, we have
\[\od_{\nu}(y)\le \overline{M}_{\nu}(y),\]
which implies the inclusions
\begin{equation}\label{eq:local<upper<Markov dimension}
\{y:d_{\nu}(y)=s\}
\subset\{y:\ud_{\nu}(y)\ge s\}\subset\{y:\od_{\nu}(y)\ge s\}\subset\{y:\overline{M}_{\nu}(y)\ge s\}.
\end{equation}
By the Gibbs property of $ \uph $ and the bounded distortion property on basic intervals~\eqref{eq:bdp}, the definitions of Markov pointwise dimensions can be reformulated as
\begin{equation}
\overline{M}_{\uph}(y)=\limsup_{n\to \infty}\frac{S_n\phi(y)}{S_n(-\log |T'|)(y)}\quad\text{and}\quad M_{\uph}(y)=\lim_{n\to \infty}\frac{S_n\phi(y)}{S_n(-\log |T'|)(y)}.
\end{equation}
This allows us to derive the following lemma, which is an alternative version of a proposition due to Jenkinson~\cite[Proposition 2.1]{Jen06}. We omit its proof since the argument is similar.
\begin{lem}\label{l:>alpha+ empty}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure. Then,
\[\sup_{y\in [0,1]}\overline{M}_{\uph}(y)=\sup_{y\colon M_{\uph}(y)\text{ exists}}M_{\uph}(y)=\max_{\nu\in \mi }\frac{-\int\phi \, d\nu}{\int\log|T'|\, d\nu}=\alpha_+.\]
In particular, for any $ s>\alpha_+ $,
\[\{y:d_{\uph}(y)=s\}=\{y:\od_{\uph}(y)\ge s\}=\emptyset.\]
\end{lem}
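As a sanity check (again in the standard Bernoulli setting on the doubling map, with $ \phi\equiv\log p_i $ on $ I(i) $, $ p_0+p_1=1 $, $ p_0\ne p_1 $, included only as an illustration), the quantity $ \overline{M}_{\uph}(y) $ depends only on the digit statistics of the dyadic expansion of $ y $: if $ y\in I(i_1i_2\cdots) $, then
\[\overline{M}_{\uph}(y)=\limsup_{n\to\infty}\frac{-\frac{1}{n}\sum_{j=1}^{n}\log p_{i_j}}{\log 2}.\]
This is maximized at the fixed point whose digits all equal the index $ i $ minimizing $ p_i $, where it takes the value $ \frac{-\log\min(p_0,p_1)}{\log 2}=\alpha_+ $, in accordance with Lemma~\ref{l:>alpha+ empty}.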
We finish the section with a variational principle.
\begin{lem}\label{l:dimension spectrum}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure.
\begin{enumerate}[\upshape(1)]
\item For every $ s<\alpha_{\max} $, $ \hdim\{y:\ud_{\uph}(y)\le s\}=\hdim\{y:\od_{\uph}(y)\le s\}=D_{\uph}(s) $.
\item For every $ s\in(\alpha_{\max},+\infty)\setminus\{\alpha_+\} $,
$ \hdim\{y:\ud_{\uph}(y)\ge s\}=\hdim\{y:\od_{\uph}(y)\ge s\}=D_{\uph}(s) $.
\end{enumerate}
\end{lem}
\begin{proof}
(1) We point out that the following inclusions hold
\begin{equation*}
\{y:d_{\uph}(y)=s\}\subset\{y:\od_{\uph}(y)\le s\}\subset\{y:\ud_{\uph}(y)\le s\}.
\end{equation*}
In~\cite[Proposition 2.8]{LiaoSe13}, the leftmost set and the rightmost set were shown to have the same Hausdorff dimension. This, together with the above inclusions, completes the proof of the first point of the lemma.
(2) When $ T $ is the doubling map, the statement was formulated by Fan, Schmeling and Troubetzkoy~\cite[Theorem 3.3]{FaScTr13}. Our proof follows their idea closely; we include it for completeness.
By Lemma~\ref{l:>alpha+ empty}, we can assume without loss of generality that $ s<\alpha_+ $. The inclusions in~\eqref{eq:local<upper<Markov dimension} imply the following inequalities:
\[\hdim\{y:d_{\uph}(y)=s\}
\le \hdim\{y:\od_{\uph}(y)\ge s\}\le\hdim\{y:\overline{M}_{\uph}(y)\ge s\}.\]
We turn to prove the reverse inequalities.
By Proposition~\ref{p:proposition of muq and alpha(q)} and the condition $ s>\alpha_{\max} $, there exists a real number $ q_s:=q(s)<0 $ such that
\[s=\frac{-\int\phi\, d\mu_{q_s}}{\int\log|T'|\, d\mu_{q_s}} \quad\text{ and }\quad D_{\uph}(s)=\hdim\mu_{q_s}=\eta_\phi(q_s)+q_ss,\]
where $ \mu_{q_s} $ is the Gibbs measure associated with the potential $ -\eta_\phi(q_s)\log|T'|+q_s\phi $. Now let $ y $ be any point such that $ \overline{M}_{\uph}(y)\ge s $. By Proposition~\ref{p:topological pressure equals to 0}, the topological pressure $ P(-\eta_\phi(q_s)\log|T'|+q_s\phi) $ is $ 0 $. Then we can apply the Gibbs property of $ \mu_{q_s} $ and~\eqref{eq:bdp} to yield
\[\begin{split}
\underline{M}_{\mu_{q_s}}(y)&=\liminf_{n\to\infty}\frac{\log e^{S_n(-\eta_\phi(q_s)\log|T'|+q_s\phi)(y)}}{\log|I_n(y)|}\\
&=\liminf_{n\to\infty}\bigg(\frac{-\eta_\phi(q_s)\log|(T^n)'(y)|}{\log|I_n(y)|}+q_s\cdot\frac{\log e^{S_n\phi(y)}}{\log|I_n(y)|}\bigg)\\
&=\eta_\phi(q_s)+q_s\cdot\limsup_{n\to\infty}\frac{\log\uph \big(I_n(y)\big)}{\log|I_n(y)|}\\
&=\eta_\phi(q_s)+q_s \overline{M}_{\uph}(y)\\
&\le \eta_\phi(q_s)+q_ss=D_{\uph}(s),
\end{split}\]
where the inequality holds because $ q_s<0 $.
Finally, Billingsley's Lemma~\cite[Lemma 1.4.1]{BiPe17} gives
\[\hdim\{y:\overline{M}_{\uph}(y)\ge s\}\le\hdim\{y:\underline{M}_{\mu_{q_s}}(y)\le D_{\uph}(s)\}\le D_{\uph}(s)=\hdim\{y:d_{\uph}(y)=s\}.\qedhere \]
\end{proof}
\section{Covering questions related to hitting time and local dimension}
In Section~\ref{ss:covering hitting time} below, we reformulate the uniform approximation set $ \uk $ in terms of hitting time. Thereafter, we relate the first hitting time for shrinking balls to local dimension in Section~\ref{ss:hitting time local dimension}.
\subsection{Covering questions and hitting time}\label{ss:covering hitting time}
\
Denote $ \mathcal O^+(x):=\{T^nx:n\ge 1\} $.
\begin{defn}
For every $ x,y\in[0,1] $ and $ r>0 $, we define the first hitting time of the orbit of $ x $ into the ball $ B(y,r) $ by
\[\tau_r(x,y):=\inf\{n\ge 1:T^nx\in B(y,r)\}.\]
\end{defn}
Set
\[\ur(x,y):=\liminf_{r\to 0}\frac{\log\tau_r(x,y)}{-\log r}\quad\text{and}\quad\ovr(x,y):=\limsup_{r\to 0}\frac{\log\tau_r(x,y)}{-\log r}.\]
For convenience, when $ \mathcal O^+(x)\cap B(y,r)=\emptyset $, we set $ \tau_r(x,y)=\infty $ and $ \ur(x,y)=\ovr(x,y)=\infty $.
If $ \ur(x,y)=\ovr(x,y) $, we denote the common value by $ R(x,y) $.
For any ball $ B\subset [0,1] $, we define the first hitting time $ \tau(x, B) $ by
\[\tau(x, B):=\inf \{n\ge 1:T^nx\in B\}.\]
Similarly, we set $ \tau(x, B)=\infty $ when $ \mathcal O^+(x)\cap B=\emptyset $.
The following lemma exhibits a relation between $ \uk $ and hitting time.
\begin{lem}\label{l:described by hitting time}
For any $ \kappa>0 $, we have
\begin{align*}
\bigg\{ y\in[0,1]:\ovr(x,y)>\frac{1}{\kappa} \bigg\}&\subset\bk\subset\bigg\{ y\in[0,1]:\ovr(x,y)\ge\frac{1}{\kappa} \bigg\}\cup\mathcal O^+(x),\\
\bigg\{ y\in[0,1]:\ovr(x,y)<\frac{1}{\kappa} \bigg\}\setminus\mathcal O^+(x)&\subset\uk\subset\bigg\{ y\in[0,1]:\ovr(x,y)\le\frac{1}{\kappa} \bigg\}.
\end{align*}
\end{lem}
\begin{proof}
The top left and bottom right inclusions imply one another. Let us prove the bottom right inclusion. Suppose that $ y\in \uk $. Then for all large enough $ N $ there is an $ n\le N $ such that $ T^nx \in B(y,N^{-\kappa})$. Thus $ \tau_{N^{-\kappa}}(x,y)\le N $ for all $ N $ large enough, which implies $ \ovr(x,y)\le 1/\kappa $.
The top right and bottom left inclusions imply one another. So, it remains to prove the bottom left inclusion. Consider $ y $ such that $ \ovr(x,y)<1/\kappa $ and $ y\notin\mathcal O^+(x) $. By the definition of $ \ovr(x,y) $, there is a positive real number $ r_0<1 $ such that
\[\tau_r(x,y)<r^{-1/\kappa}, \quad\text{for all }0<r<r_0.\]
Denote $ n_r:=\tau_r(x,y) $, for all $ 0<r<r_0 $. Since $ y\notin \mathcal O^+(x) $, the family of positive integers $ \{n_r:0<r<r_0\} $ is unbounded. For each $ N> r_0^{-1/\kappa} $, denote $ t:=N^{-\kappa} $. The definition of $ n_{t} $ implies that
\[T^{n_{t}}x\in B(y,t)=B(y,N^{-\kappa}).\]
We conclude $ y\in\uk $ by noting that $ n_{t}<t^{-1/\kappa}=N $.
\end{proof}
\subsection{Relation between hitting time and local dimension}\label{ss:hitting time local dimension}
As Lemma~\ref{l:described by hitting time} shows, we need to study the hitting time $ \ovr(x,y) $ for the Gibbs measure $ \uph $. We will prove that the hitting time is related to the local dimension when the measure is exponential mixing.
\begin{defn}
A $ T $-invariant measure $ \nu $ is exponential mixing if there exist two constants $ C>0 $ and $ 0<\beta<1 $ such that for any ball $ A $ and any Borel measurable set $ B $,
\begin{equation}\label{eq:exponential mixing}
|\nu(A\cap T^{-n}B)-\nu(A)\nu(B)|\le C\beta^n\nu(B).
\end{equation}
\end{defn}
\begin{thm}[\cite{Bal00, LiSaV98,PaPo90, Rue04}]\label{t:exponential mixing}
The $ T $-invariant Gibbs measure $ \uph $ associated with a H\"older continuous potential $ \phi $ of a Markov map $ T $ is exponential mixing.
\end{thm}
The exponential mixing property allows us to apply the following theorem, which describes a relation between the hitting time and the local dimension of an invariant measure.
\begin{thm}[\cite{Gal07}]
Let $ (X,T,\nu) $ be a measure-theoretic dynamical system. If $ \nu $ is superpolynomial mixing and if $ d_\nu(y) $ exists, then for $ \nu $-a.e.\,$ x $, we have
\[R(x,y)=d_\nu(y).\]
\end{thm}
It should be noticed that the superpolynomial mixing property is much weaker than the exponential mixing property.
Now, we turn to the study of the Markov map $ T $ on the interval $ [0,1] $.
An application of Fubini's theorem yields the following corollary.
\begin{cor}[{\cite[Corollary 3.8]{LiaoSe13}}]\label{c:hitting time and local dimension}
Let $ T $ be a Markov map. Let $ \uph $ and $ \mu_\psi $ be two $ T $-invariant Gibbs probability measures on $ [0,1] $ associated with H\"older potentials $ \phi $ and $ \psi $, respectively. Then,
\[\text{for }\uph\times\mu_\psi\text{-a.e.\,}(x,y),\quad R(x,y)=d_{\uph}(y)=\frac{-\int\phi\, d\mu_\psi}{\int\log|T'|\, d\mu_\psi}.\]
\end{cor}
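For instance (in the standard Bernoulli setting on the doubling map, stated only as an illustration), take $ \phi=\psi\equiv\log p_i $ on $ I(i) $ with $ p_0+p_1=1 $, so that $ \uph=\mu_\psi $ is the Bernoulli measure of weights $ (p_0,p_1) $. Corollary~\ref{c:hitting time and local dimension} then gives, for $ \uph\times\uph $-a.e.\,$ (x,y) $,
\[R(x,y)=d_{\uph}(y)=\frac{-p_0\log p_0-p_1\log p_1}{\log 2}=\hdim\uph,\]
in agreement with item (3) of Theorem~\ref{t:multifractal analysis}.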
\section{The studies of $ \bk $ and $ \uk $}
\subsection{The study of $ \bk $}
In this subsection, we are going to prove Theorem~\ref{t:main} except item (3). Let us start with the lower bound for $ \hdim\bk $.
\begin{lem}\label{l:lower bound for hdimfk}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure. For any $ \kappa>0 $, the following hold.
\begin{enumerate}[\upshape(a)]
\item If $ 1/\kappa\in(0,\alpha_{\max}) $, then $ \lambda\big(\bk\big)=1 $ for $ \uph $-a.e.\,$ x $.
\item If $ 1/\kappa\in[\alpha_{\max},+\infty)\setminus\{\alpha_+\} $, then $ \hdim\bk\ge D_{\uph}(1/\kappa) $ for $ \uph $-a.e.\,$ x $.
\end{enumerate}
\end{lem}
\begin{proof}
(a) Let $ 1/\kappa\in(0,\alpha_{\max}) $. As already observed in Section~\ref{s:Multifractal properties}, the Gibbs measure $ \mu_0 $ associated with the potential $ -\log|T'| $ is strongly equivalent to the Lebesgue measure $ \lambda $. Thus, a set $ F $ has full $ \mu_0 $-measure if and only if $ F $ has full $ \lambda $-measure. Corollary~\ref{c:hitting time and local dimension} implies that
\[\text{for }\uph\times\mu_0\text{-a.e.\,}(x,y),\quad R(x,y)=d_{\uph}(y)=\frac{-\int\phi\, d\mu_0}{\int\log|T'|\, d\mu_0}=\alpha_{\max}.\]
By Fubini's theorem, for $ \uph $-a.e.\,$ x $, the set $ \{y:R(x,y)=d_{\uph}(y)=\alpha_{\max}\} $ has full $ \mu_0 $-measure. Then for $ \uph $-a.e.\,$ x $, we have
\[\mu_0\big(\{ y:\ovr(x,y)>1/\kappa \}\big)\ge \mu_0\big(\{y:R(x,y)=d_{\uph}(y)=\alpha_{\max}\}\big)=1.\]
By Lemma~\ref{l:described by hitting time}, we arrive at the conclusion.
(b) By Lemma~\ref{l:>alpha+ empty}, the level set $ \{y:d_{\uph}(y)=1/\kappa\} $ is empty if $ 1/\kappa>\alpha_+ $. Thus $ D_{\uph}(1/\kappa)=0 $, and therefore $ \hdim\bk\ge 0=D_{\uph}(1/\kappa) $ trivially holds for all $ 1/\kappa>\alpha_+ $.
Let $ 1/\kappa\in[\alpha_{\max},\alpha_+) $. We may suppose that $ \alpha_{\max}\ne \alpha_+ $, since otherwise $ [\alpha_{\max},\alpha_+)=\emptyset $ and there is nothing to prove.
For any $ s\in (1/\kappa,\alpha_+) $, by Proposition~\ref{p:proposition of muq and alpha(q)}, there exists a real number $ q_s:=q(s) $ such that
\[s=\frac{-\int\phi\, d\mu_{q_s}}{\int\log|T'|\, d\mu_{q_s}}\quad\text{and}\quad\mu_{q_s}(\{y:d_{\uph}(y)=s\})=1.\]
Applying Corollary~\ref{c:hitting time and local dimension}, we obtain
\[\text{for }\uph\times\mu_{q_s}\text{-a.e.\,}(x,y),\quad R(x,y)=d_{\uph}(y)=\frac{-\int\phi\, d\mu_{q_s}}{\int\log|T'|\, d\mu_{q_s}}=s.\]
It follows from Fubini's theorem that, for $ \uph $-a.e.\,$ x $, the set $ \{y:R(x,y)=d_{\uph}(y)=s\} $ has full $ \mu_{q_s} $-measure. Consequently, for $ \uph $-a.e.\,$ x $,
\[\begin{split}
\hdim\{y:\ovr(x,y)>1/\kappa\}&\ge \hdim\{y:R(x,y)=d_{\uph}(y)=s\}\\
&\ge \hdim \mu_{q_s}=D_{\uph}(s).
\end{split}\]
We conclude by noting that $ s\in (1/\kappa,\alpha_+) $ is arbitrary and $ D_{\uph} $ is continuous on $ [\alpha_{\max},\alpha_+) $.
\end{proof}
We are left to determine the upper bound of $ \hdim\bk $. The following four lemmas were initially proved by Fan, Schmeling and Troubetzkoy~\cite{FaScTr13} for the doubling map, and later by Liao and Seuret~\cite{LiaoSe13} in the context of Markov maps.
We follow their ideas and demonstrate more general results. In Lemmas~\ref{l:multi-relation}--\ref{l:hitting time subset local dimension}, we will not assume that $ T $ is a Markov map.
\begin{lem}\label{l:multi-relation}
Let $ T $ be a map on $ [0,1] $ and $ \nu $ be a $ T $-invariant exponential mixing measure. Let $ A_1,A_2,\dots,A_k $ be $ k $ subsets of $ [0,1] $ such that each $ A_i $ is a union of at most $ m $ disjoint balls. Then
\[\prod_{i=1}^{k}\bigg(1-\frac{mC\beta^d}{\nu(A_i)}\bigg)\le \frac{\nu(A_1\cap T^{-d}A_2\cap\cdots\cap T^{-d(k-1)}A_k)}{\nu(A_1)\nu(A_2)\cdots\nu(A_k)}\le \prod_{i=1}^{k}\bigg(1+\frac{mC\beta^d}{\nu(A_i)}\bigg),\]
where $ \beta $ is the constant appearing in~\eqref{eq:exponential mixing}.
\end{lem}
\begin{proof}
Since each $ A_i $ is a union of at most $ m $ disjoint balls, the exponential mixing property of $ \nu $ gives that, for every $ d\ge 1 $,
\begin{equation}\label{eq:exponential mixing multi}
|\nu(A_i\cap T^{-d}B)-\nu(A_i)\nu(B)|\le mC\beta^d\nu(B),
\end{equation}
where $ B $ is a Borel measurable set.
In particular, defining
\[B_i=A_i\cap T^{-d}A_{i+1}\cap\cdots\cap T^{-d(k-i)}A_k,\]
we get, for any $ i<k $,
\[|\nu(A_i\cap T^{-d}(B_{i+1}))-\nu(A_i)\nu(B_{i+1})|\le mC\beta^d\nu(B_{i+1}).\]
The above inequality can be written as
\begin{equation*}
1-\frac{mC\beta^d}{\nu(A_i)}\le \frac{\nu(A_i\cap T^{-d}B_{i+1})}{\nu(A_i)\nu(B_{i+1})}\le 1+\frac{mC\beta^d}{\nu(A_i)}.
\end{equation*}
Multiplying over all $ i<k $ and using the identity
\[B_{i+1}=A_{i+1}\cap T^{-d}B_{i+2},\]
we have
\[\prod_{i=1}^{k}\bigg(1-\frac{mC\beta^d}{\nu(A_i)}\bigg)\le \frac{\nu(A_1\cap T^{-d}A_2\cap\cdots\cap T^{-d(k-1)}A_k)}{\nu(A_1)\nu(A_2)\cdots\nu(A_k)}\le \prod_{i=1}^{k}\bigg(1+\frac{mC\beta^d}{\nu(A_i)}\bigg).\qedhere\]
\end{proof}
The following lemma illustrates that balls with small local dimension for exponential mixing measure are hit with big probability.
\begin{lem}\label{l:big hitting probability}
Let $ T $ be a map on $ [0,1] $ and $ \nu $ be a $ T $-invariant exponential mixing measure. Let $ h $ and $ \epsilon $ be two positive real numbers. For each $ n\in\N $, consider $ N\le 2^n $ distinct balls $ B_1,\dots,B_N $ satisfying $ |B_i|=2^{-n} $ and $ \nu(B_i)\ge 2^{-n(h-\epsilon)} $ for all $ 1\le i\le N $. Set
\[\mathcal C_{n,N,h}=\{x\in[0,1]:\exists 1\le i\le N\text{ such that }\tau(x,B_i)\ge 2^{nh}\}.\]
Then there exists an integer $ n_h\in\N $ independent of $ N $ such that
\[\text{for every }n\ge n_h,\quad \nu(\mathcal C_{n,N,h})\le 2^{-n}.\]
\end{lem}
\begin{proof}
For each $ i\le N $, let
\[\Delta_{i}:=\{x\in[0,1]:\forall\, 1\le k< 2^{nh},\ T^kx\notin B_i \}.\]
Obviously we have $ \mathcal C_{n,N,h}=\bigcup_{i=1}^N\Delta_{i} $, so it suffices to bound from above each $ \nu(\Delta_{i}) $. Pick an integer $ \omega $ such that $ \omega>\log_{\beta^{-1}}2^{h} $. Let $ k=[2^{nh}/(\omega n)] $ be the integer part of $ 2^{nh}/(\omega n) $. Then
\[\Delta_{i}\subset\bigcap_{j=1}^{k}\{x\in[0,1]: T^{j\omega n}x\notin B_i\}=\bigcap_{j=1}^{k}T^{-j\omega n}B_i^c.\]
Since $ \omega>\log_{\beta^{-1}}2^{h}\ $, there is an $ n_h $ large enough such that for any $ n\ge n_h $,
\begin{equation}\label{eq:condition on nh}
2C\beta^{\omega n}<2^{-nh-1}\le \nu(B_i)/2
\end{equation}
and
\begin{equation}\label{eq:condition 2 on nh}
2^{n+1} \exp\bigg(\frac{-2^{n\epsilon}}{2\omega n}\bigg)\le 2^{-n}.
\end{equation}
Now applying Lemma~\ref{l:multi-relation} to $ A_l= B_i^c $ for all $ l\le k $ (note that $ B_i^c $ is a union of at most two intervals) and to $ m=2 $, we conclude from~\eqref{eq:condition on nh} that
\begin{align*}
\nu(\Delta_{i})\le\nu(\cap_{j=1}^{k}T^{-j\omega n}B_i^c)
&\le (\nu(B_i^c)+2C\beta^{\omega n})^{k}\\
&\le (1-\nu(B_i)/2)^{k}\\
&\le (1-2^{-n(h-\epsilon)-1})^{ 2^{nh}/(\omega n)-1}\\
&=(1-2^{-n(h-\epsilon)-1})^{-1} \exp\bigg(\frac{2^{nh}\log(1-2^{-n(h-\epsilon)-1})}{\omega n}\bigg)\\
&\le 2\exp\bigg(\frac{-2^{n\epsilon}}{2\omega n}\bigg).
\end{align*}
By~\eqref{eq:condition 2 on nh},
\[\nu(\mathcal C_{n,N,h})\le \sum_{i=1}^{N}\nu(\Delta_{i})\le 2^{n+1} \exp\bigg(\frac{-2^{n\epsilon}}{2\omega n}\bigg)\le 2^{-n}.\qedhere\]
\end{proof}
Let us recall that $ \{y:\ovr(x,y)\ge s\} $ is random, depending on the element $ x $, while $ \{y:\od_{\uph}(y)\ge s\} $ is independent of $ x $. The following lemma reveals a connection between the random set $ \{y:\ovr(x,y)\ge s\} $ and the deterministic set $ \{y:\od_{\uph}(y)\ge s\} $.
\begin{lem}\label{l:hitting time subset local dimension}
Let $ T $ be a map on $ [0,1] $ and $ \nu $ be a $ T $-invariant exponential mixing measure. Let $ s\ge 0 $. Then for $ \nu $-a.e.\,$ x $,
\[\{y:\ovr(x,y)\ge s\}\subset\{y:\od_{\nu}(y)\ge s\}.\]
\end{lem}
\begin{proof}
The case $ s=0 $ is obvious. We therefore assume $ s>0 $. For any integer $ n\ge 1 $, let
\[\mathcal R_{n,s,\epsilon}(x)=\big\{y:\tau\big(x,B(y,2^{-n+1})\big)\ge 2^{n(s-\epsilon)}\big\},\quad\mathcal E_{n,s,\epsilon}=\big\{y:\nu\big(B(y,2^{-n})\big)\le 2^{-n(s-2\epsilon)}\big\}.\]
By definition, $ y\in \{y:\ovr(x,y)\ge s\} $ if and only if for any $ \epsilon>0 $, there exist infinitely many integers $ n $ such that
\[\frac{\log \tau(x,B(y,2^{-n+1}))}{\log 2^n}\ge s-\epsilon.\]
Hence, we have
\begin{equation}\label{eq:ovr>s=}
\{y:\ovr(x,y)\ge s\}=\bigcap_{\epsilon>0}\limsup_{n\to\infty}\mathcal R_{n,s,\epsilon}(x).
\end{equation}
Similarly,
\[\{y:\od_{\nu}(y)\ge s\}=\bigcap_{\epsilon>0}\limsup_{n\to\infty}\mathcal E_{n,s,\epsilon}.\]
Thus, it is sufficient to prove that, for $ \nu $-a.e.\,$ x $, there exists some integer $ n(x) $ such that
\begin{equation}\label{eq:Rnse subset Ense}
\text{for all }n\ge n(x),\quad\mathcal R_{n,s,\epsilon}(x)\subset \mathcal E_{n,s,\epsilon},
\end{equation}
or equivalently,
\begin{equation}\label{eq:Rnsec subset Ensec}
\text{for all }n\ge n(x),\quad\mathcal E^c_{n,s,\epsilon}\subset \mathcal R^c_{n,s,\epsilon}(x).
\end{equation}
Notice that $ \mathcal E_{n,s,\epsilon}^c $ can be covered by $ N\le 2^n $ balls with centers in $ \mathcal E_{n,s,\epsilon}^c $ and radius $ 2^{-n} $. Let $ \mathcal F_{n,s,\epsilon}:=\{B_1,B_2,\dots,B_N\} $ be the collection of these balls. By definition, we have $ \nu(B_i)\ge 2^{-n(s-2\epsilon)} $. Applying Lemma~\ref{l:big hitting probability} to the collection $ \mathcal F_{n,s,\epsilon} $ of balls and to $ h=s-\epsilon $, we see that
\[\sum_{n\ge n_h}\nu(\{x:\exists B\in\mathcal F_{n,s,\epsilon} \text{ such that }\tau(x,B)\ge 2^{n(s-\epsilon)}\})\le \sum_{n\ge n_h}2^{-n}<\infty.\]
By Borel-Cantelli lemma, for $ \nu $-a.e.\,$ x $, there exists an integer $ n(x) $ such that
\[\text{for all }n\ge n(x),\text{ for all }B\in \mathcal F_{n,s,\epsilon},\quad \tau(x, B)< 2^{n(s-\epsilon)}.\]
If $ y\in B $ for some $ B\in \mathcal F_{n,s,\epsilon} $ and $ n\ge n(x) $, then $ B\subset B(y,2^{-n+1}) $, which implies that $ \tau\big(x, B(y,2^{-n+1})\big)\le\tau(x,B)<2^{n(s-\epsilon)} $. We then deduce that $ B $ is included in $ \mathcal R_{n,s,\epsilon}^c(x) $. This yields $ \mathcal E_{n,s,\epsilon}^c\subset \mathcal R_{n,s,\epsilon}^c(x) $, which is what we want.
\end{proof}
\begin{rem}
With the notation in Lemma~\ref{l:hitting time subset local dimension}, proceeding with the same argument as~\eqref{eq:ovr>s=}, we have
\[ \{y:\ur(x,y)\ge s\}=\bigcap_{\epsilon>0}\liminf_{n\to\infty}\mathcal R_{n,s,\epsilon}(x)\quad \text{and}\quad\{y:\ud_{\uph}(y)\ge s\}=\bigcap_{\epsilon>0}\liminf_{n\to\infty}\mathcal E_{n,s,\epsilon}.\]
It then follows from~\eqref{eq:Rnse subset Ense} that for $ \nu $-a.e. $ x $,
\[\{y:\ur(x,y)\ge s\}\subset\{y:\ud_{\uph}(y)\ge s\}.\]
\end{rem}
Applying Lemma~\ref{l:hitting time subset local dimension} to the Gibbs measure $ \uph $, we obtain the following upper bound.
\begin{lem}\label{l:upper bound for hdimfk}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure. If $ 1/\kappa\le\alpha_{\max} $, then for $ \uph $-a.e.\,$ x $,
\[\hdim\bk\le D_{\uph}(1/\kappa).\]
Moreover, if $ 1/\kappa>\alpha_+ $, then for $ \uph $-a.e.\,$ x $,
\[\bk=\emptyset.\]
\end{lem}
\begin{proof}
Recall that Lemma~\ref{l:described by hitting time} asserts that
\[\bk\subset\{y:\ovr(x,y)\ge 1/\kappa\}\cup\mathcal O^+(x).\]
A direct application of Lemma~\ref{l:dimension spectrum} and Lemma~\ref{l:hitting time subset local dimension} yields the first conclusion.
The second conclusion follows from Lemmas~\ref{l:>alpha+ empty} and~\ref{l:described by hitting time}.
\end{proof}
Collecting the results obtained in this subsection, we can prove Theorem~\ref{t:main} except item (3).
\begin{proof}[Proof of the items (1), (2) and (4) of Theorem~\ref{t:main}]
Combining Lemmas~\ref{l:lower bound for hdimfk} and~\ref{l:upper bound for hdimfk} yields the desired result.
\end{proof}
\subsection{The study of $ \uk $}
In this subsection, we prove the remaining part of Theorem~\ref{t:main}, that is, item (3). We begin with the lower bound for $ \hdim\uk $, which may be proved in much the same way as Lemma~\ref{l:lower bound for hdimfk}.
\begin{lem}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure.
\begin{enumerate}[\upshape(a)]
\item If $ 1/\kappa\in(0,\alpha_{\max}]\setminus\{\alpha_-\} $, then $ \hdim\uk\ge D_{\uph}(1/\kappa) $ for $ \uph $-a.e.\,$ x $.
\item If $ 1/\kappa\in(\alpha_{\max},+\infty) $, then $ \hdim\uk=1 $ for $ \uph $-a.e.\,$ x $.
\end{enumerate}
\end{lem}
\begin{proof}
(a) By Lemma~\ref{l:dimension spectrum}, the Hausdorff dimension of the level set $ \{y:d_{\uph}(y)=1/\kappa\} $ is zero if $ 1/\kappa<\alpha_- $. Therefore $ \hdim\uk\ge 0=D_{\uph}(1/\kappa) $.
The remaining case $ 1/\kappa\in(\alpha_-,\alpha_{\max}] $ follows by the same argument as in Lemma~\ref{l:lower bound for hdimfk}\,(b).
(b) Since a full Lebesgue measure statement implies the corresponding full Hausdorff dimension statement, it follows from item (2) of Theorem~\ref{t:main} that $ \hdim\uk= 1 $ when $ 1/\kappa\in(\alpha_{\max},+\infty) $.
\end{proof}
It remains to show the upper bound for $ \hdim\uk $ when $ 1/\kappa\le\alpha_{\max} $. The proof combines the methods developed in~\cite[\S 7]{FaScTr13} and~\cite[Theorem 8]{KoLiPe21}.
Heuristically, the larger the local dimension of a point is, the less likely it is to be hit.
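As an illustration outside the formal development, the heuristic can be observed for the doubling map $T(x)=2x \bmod 1$: hitting the generation-$n$ dyadic interval around $y$ amounts to finding the first occurrence of the first $n$ binary digits of $y$ in the shifted digit string of $x$, and targets of small measure (large local dimension) are found later on average. A minimal Python sketch (the doubling map, the digit encoding and all names are our illustrative choices, not objects from this paper):

```python
def hitting_time(x_digits, y_digits, n):
    """First k >= 1 such that T^k x lies in the generation-n dyadic
    interval containing y, where T is the doubling map (the shift on
    binary digits); returns None if no hit occurs within the
    available digits of x."""
    prefix = y_digits[:n]
    for k in range(1, len(x_digits) - n + 1):
        if x_digits[k:k + n] == prefix:
            return k
    return None

# x = 0.00101... in binary, target y = 0.10...:
# the orbit first enters the interval [1/2, 3/4) at time k = 2.
print(hitting_time([0, 0, 1, 0, 1], [1, 0], 2))  # 2
```

Shrinking the target (increasing $n$) can only increase the hitting time; under a Bernoulli measure giving digit $1$ probability $p$ close to $1$, a target beginning with $n$ zeros has measure $(1-p)^n$ and is typically hit much later than one beginning with $n$ ones, in line with the heuristic.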
\begin{lem}\label{l:upper bound of hdimuk}
Let $ T $ be a Markov map. Let $ \phi $ be a H\"older continuous potential and let $ \uph $ be the corresponding Gibbs measure. If $ 1/\kappa\le\alpha_{\max} $, then for $ \uph $-a.e.\,$ x $,
\[\hdim\uk\le D_{\uph}(1/\kappa).\]
\end{lem}
\begin{proof}
The proof will be divided into two steps.
Step 1. Given any $ a>1/\kappa $, we are going to prove that
\begin{equation}\label{eq:ukcap}
\hdim \big(\uk\cap\{y:\ud_{\uph}(y)>a\}\big)=0\quad\text{for }\uph\text{-a.e.\,}x.
\end{equation}
Suppose now that~\eqref{eq:ukcap} is established. Let $ (a_m)_{m\ge 1} $ be a monotonically decreasing sequence of real numbers converging to $ 1/\kappa $. Applying~\eqref{eq:ukcap} to each $ a_m $ yields a full $ \uph $-measure set corresponding to $ a_m $. Taking the intersection of these countably many full $ \uph $-measure sets, we conclude from the countable stability of Hausdorff dimension that
\[\hdim \big(\uk\cap\{y:\ud_{\uph}(y)>1/\kappa\}\big)=0\quad\text{for }\uph\text{-a.e.\,}x.\]
As a result, by Lemma~\ref{l:dimension spectrum}, for $ \uph $-a.e.\,$ x $,
\begin{align*}
\hdim \uk&=\hdim \big(\uk\cap\big(\{y:\ud_{\uph}(y)\le 1/\kappa\}\cup \{y:\ud_{\uph}(y)>1/\kappa\}\big)\big)\\
&=\hdim \big(\uk\cap\{y:\ud_{\uph}(y)\le 1/\kappa\}\big)\\
&\le\hdim\{y:\ud_{\uph}(y)\le 1/\kappa\}=D_{\uph}(1/\kappa).
\end{align*}
This clearly yields the lemma.
We now turn to the proof of~\eqref{eq:ukcap}. Choose $ b\in(1/\kappa, a) $. Put $ A_n:=\{y:\uph(B(y, r))<r^b\text{ for all }r<2^{-n}\} $. By the definition of $ \ud_{\uph}(y) $, we have
\[\{y:\ud_{\uph}(y)>a\}\subset\bigcup_{n=1}^\infty A_n.\]
Thus, proving~\eqref{eq:ukcap} reduces to showing that for any $ n\ge 1 $,
\begin{equation}\label{eq:ukcapan}
\hdim (\uk\cap A_n)=0\quad\text{for }\uph\text{-a.e.\,}x.
\end{equation}
Step 2. The next objective is to prove~\eqref{eq:ukcapan}.
Fix $ n\ge 1 $. Let $ \epsilon>0 $ be arbitrary. Choose a large integer $ l\ge n $ with
\begin{equation}\label{eq:condition l}
12\times 2^{-\kappa l}<2^{-n}\quad\text{and}\quad \gamma^3\,12^b\,2^{(1-b\kappa)l}<\epsilon,
\end{equation}
where the constant $ \gamma $ is defined in~\eqref{eq:quasi-Bernoulli}.
Let $ \theta_j=[\kappa j\log_{L_1}2]+1 $, where $ L_1 $ is given in~\eqref{eq:length of basic interval}. Then, by~\eqref{eq:length of basic interval} the length of each basic interval of generation $ \theta_j $ is smaller than $ 2^{-\kappa j} $. Define
\[\mathcal I_{j}(x):=\bigcup_{J\in\Sigma_{\theta_j}\colon d(I_{\theta_j}(x),J)<2^{-\kappa j}}J,\]
where $ d(\cdot,\cdot) $ is the Euclidean metric. Clearly $ \mathcal I_j(x) $ covers the ball $ B(x, 2^{-\kappa j}) $ and is contained in $ B(x, 3\times2^{-\kappa j}) $.
Moreover, if $ I_{\theta_j}(x)=I_{\theta_j}(y) $, then $ \mathcal I_j(x)=\mathcal I_j(y) $. With the notation $ \mathcal I_j(x) $, we consider the set
\[G_{l,i}(x)= A_n\cap\bigg(\bigcap_{j=l}^i\bigcup_{k=1}^{2^j}\mathcal{I}_{j}(T^kx)\bigg).\]
The advantage of using $ \mathcal I_j(x) $ rather than $ B(x, 2^{-\kappa j}) $ is that the map $ x\mapsto G_{l,i}(x) $ is constant on each basic interval of generation $ 2^i+\theta_i $. We are going to construct inductively a cover of $ G_{l,i}(x) $ by the family $ \big\{B(T^kx, 3\times2^{-\kappa i}):k\in S_i(x)\big\} $ of balls, where $ S_i(x)\subset \{1,2,\dots,2^i\} $.
For $ i=l $, we let $ S_l(x)\subset\{1,2,\dots, 2^l\} $ consist of those $ k\le 2^l $ such that $ \mathcal I_l(T^kx) $ intersects $ A_n $. Suppose now that $ S_i(x) $ has been defined. We define $ S_{i+1}(x) $ to consist of those $ k\le 2^{i+1} $ such that $ \mathcal I_{i+1}(T^kx) $ intersects $ G_{l,i}(x) $. Then the family $ \big\{B(T^kx, 3\times2^{-\kappa (i+1)}):k\in S_{i+1}(x)\big\} $ of balls forms a cover of $ G_{l,i+1}(x) $, and the construction is completed.
With the aid of the notation $ \mathcal I_{i+1}(T^kx) $, one can verify that $ x\mapsto S_{i+1}(x) $ is constant on each basic interval of generation $ 2^{i+1}+\theta_{i+1} $. Let $ N_{i+1}(x):=\sharp S_{i+1}(x) $; then $ x\mapsto N_{i+1}(x) $ is also constant on each basic interval of generation $ 2^{i+1}+\theta_{i+1} $.
In order to establish~\eqref{eq:ukcapan}, we need to estimate $ N_{i+1}(x) $. For those $ k\in S_{i+1}(x)\cap\{1,2,\dots,2^i\} $, since $ \mathcal I_{i+1}(T^kx)\subset\mathcal I_i(T^kx) $ and $ G_{l,i}(x)\subset G_{l,i-1}(x) $, we must have that $ \mathcal I_i(T^kx) $ intersects $ G_{l,i-1}(x) $, hence $ k\in S_i(x) $. On the other hand, since $ \mathcal I_{i+1}(T^kx) $ is contained in $ B(T^kx,3\times2^{-\kappa (i+1)}) $, if $ \mathcal I_{i+1}(T^kx) $ has non-empty intersection with $ G_{l,i}(x) $, then the distance between $ T^kx $ and $ G_{l,i}(x) $ is less than $ 3\times2^{-\kappa (i+1)} $. In particular,
\begin{equation}\label{eq:Gi(x)}
T^kx \in\big\{y: d\big(y, G_{l,i}(x)\big)<3\times2^{-\kappa(i+1)}\big\}\subset \bigcup_{J\in\Sigma_{\theta_i}\colon d(J,G_{l,i}(x))<3\times2^{-\kappa (i+1)}}J.
\end{equation}
Denote the union on the right-hand side by $ \hat G_{l,i}(x) $. The set $ \hat G_{l,i}(x) $ is nothing but the union of the basic intervals of generation $ \theta_i $ whose distance from $ G_{l,i}(x) $ is less than $ 3\times 2^{-\kappa(i+1)} $. Thus, by the fact that $ x\mapsto G_{l,i}(x) $ is constant on each basic interval of generation $ 2^i+\theta_i $, we have
\begin{equation}\label{eq:Gi(x)=Gi(y)}
\hat G_{l,i}(x)=\hat G_{l,i}(y),\quad \text{whenever}\quad I_{2^i+\theta_i}(x)=I_{2^i+\theta_i}(y) .
\end{equation}
According to the above discussion, it holds that
\[N_{i+1}(x)\le N_i(x)+M_{i+1}(x),\]
where $ M_{i+1}(x) $ is the number of integers $ 2^i<k\le 2^{i+1} $ for which $ T^kx $ belongs to $ \hat G_{l,i}(x) $.
The function $ M_{i+1}(x) $ can further be written as a finite sum of characteristic functions:
\begin{equation*}
M_{i+1}(x)=\sum_{k=2^i+1}^{2^{i+1}}\chi_{\hat G_{l,i}(x)}(T^kx)=\sum_{k=2^i+1}^{2^i+\theta_i}\chi_{\hat G_{l,i}(x)}(T^kx)+\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\chi_{\hat G_{l,i}(x)}(T^kx).
\end{equation*}
It then follows from~\eqref{eq:Gi(x)=Gi(y)} that
\begin{align}
\notag\int M_{i+1}(x) d\uph(x)&=\sum_{k=2^i+1}^{2^i+\theta_i}\int \chi_{\hat G_{l,i}(x)}(T^kx)d\uph(x)+\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\int \chi_{\hat G_{l,i}(x)}(T^kx)d\uph(x)\\
&\le \theta_i+\sum_{J\in\Sigma_{2^i+\theta_i}}\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\int \chi_J(x) \chi_{\hat G_{l,i}(x)}(T^kx)d\uph(x)\notag\\
&=\theta_i+\sum_{J\in\Sigma_{2^i+\theta_i}}\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\int \chi_J(x) \chi_{\hat G_{l,i}(x_J)}(T^kx)d\uph(x),\label{eq:sumsum}
\end{align}
where $ x_J $ is any fixed point of $ J $.
Now the task is to deal with the right hand side summation.
We deduce from~\eqref{eq:quasi-Bernoulli consequence} that for each $ k>2^i+\theta_i $,
\begin{equation}\label{eq:int<CJG}
\int \chi_J(x) \chi_{\hat G_{l,i}(x_J)}(T^kx)d\uph(x)=\uph\big(J\cap T^{-k}\big(\hat G_{l,i}(x_J)\big)\big)\le \gamma^3\uph(J)\uph\big(\hat{G}_{l,i}(x_J)\big).
\end{equation}
Since $ G_{l,i}(x_J) $ can be covered by the family $ \big\{B(T^kx_J, 3\times2^{-\kappa i}):k\in S_i(x_J)\big\} $ of balls, it follows from~\eqref{eq:Gi(x)} that the family $ \mathcal F_i(x_J):=\big\{B(T^kx_J, 6\times2^{-\kappa i}):k\in S_i(x_J)\big\} $ of enlarged balls forms a cover of $ \hat{G}_{l,i}(x_J) $. Observe that each enlarged ball $ B\in \mathcal F_i(x_J) $ intersects $ A_n $, so $ B\subset B(y,12\times 2^{-\kappa i}) $ for some $ y\in A_n $. Then by the definition of $ A_n $ and~\eqref{eq:condition l},
\[\uph(B)\le \uph(B(y,12\times 2^{-\kappa i}))\le 12^b\,2^{-b\kappa i}.\]
Accordingly,
\begin{equation}\label{eq:uphhatG}
\uph\big(\hat{G}_{l,i}(x_J)\big)\le 12^b2^{-b\kappa i}N_i(x_J).
\end{equation}
Recall that $ x\mapsto N_i(x) $ is constant on each basic interval of generation $ 2^i+\theta_i $. Applying the upper bound~\eqref{eq:uphhatG} for $ \uph\big(\hat{G}_{l,i}(x_J)\big) $ in~\eqref{eq:int<CJG}, and then substituting~\eqref{eq:int<CJG} into~\eqref{eq:sumsum}, we have
\begin{align*}
\notag\int M_{i+1}(x) d\uph(x)
&\le \theta_i+\sum_{J\in\Sigma_{2^i+\theta_i}}\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\gamma^3\uph\big(\hat{G}_{l,i}(x_J)\big)\uph(J)\\
&\le \theta_i+\sum_{J\in\Sigma_{2^i+\theta_i}}\sum_{k=2^i+\theta_i+1}^{2^{i+1}}\gamma^3\,12^b\,2^{-b\kappa i}N_i(x_J)\uph(J)\\
&= \theta_i+\gamma^3\,12^b\,2^{-b\kappa i}(2^i-\theta_i)\int N_i(x)d\uph(x)\\
&\le \theta_i+\epsilon\int N_i(x)d\uph(x),
\end{align*}
where the last inequality follows from~\eqref{eq:condition l}.
Since $ N_{i+1}(x)\le N_i(x)+M_{i+1}(x) $, we have
\begin{equation}\label{eq:intNi+1x}
\int N_{i+1}(x)d\uph(x)\le \theta_i+(1+\epsilon)\int N_i(x)d\uph(x).
\end{equation}
Note that~\eqref{eq:intNi+1x} holds for all $ i\ge l $ and $ N_l(x)\le 2^l $. Then
\[\begin{split}
\int N_{i+1}(x)d\uph(x)&\le \sum_{k=l}^{i}(1+\epsilon)^{i-k}\theta_k+(1+\epsilon)^{i-l+1}\int N_l(x)d\uph(x)\\
&<(1+\epsilon)^i(i\theta_i+2^l).
\end{split}\]
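For completeness, the last inequality can be verified as follows. Since $ \theta_k $ is nondecreasing in $ k $, $ l\ge 1 $ and $ \int N_l(x)d\uph(x)\le 2^l $, we have
\[\sum_{k=l}^{i}(1+\epsilon)^{i-k}\theta_k\le (i-l+1)\,\theta_i\,(1+\epsilon)^{i-l}\le i\,\theta_i\,(1+\epsilon)^{i}
\quad\text{and}\quad
(1+\epsilon)^{i-l+1}\int N_l(x)d\uph(x)\le (1+\epsilon)^{i}\,2^{l},\]
where the first bound is strict because $ (1+\epsilon)^{i-l}<(1+\epsilon)^{i} $; adding the two right-hand sides gives $ (1+\epsilon)^i(i\theta_i+2^l) $.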
By Markov's inequality,
\[\begin{split}
\uph\big(\big\{x:N_{i+1}(x)\ge (1+\epsilon)^{2i}(i\theta_i+2^l)\big\}\big)&\le \uph\bigg(\bigg\{x:N_{i+1}(x)\ge (1+\epsilon)^i\int N_{i+1}(x)d\uph(x)\bigg\}\bigg)\\
&\le (1+\epsilon)^{-i},
\end{split}\]
which is summable over $ i $. Hence, for $ \uph $-a.e.\,$ x $, there is an $ i_0(x) $ such that
\begin{equation}\label{eq:Ni+1(x)le (1+epsilon)}
N_{i+1}(x)\le (1+\epsilon)^{2i}(i\theta_i+2^l)
\end{equation}
holds for all $ i\ge i_0(x) $.
Denote by $ F_{l,\epsilon} $ the full measure set on which~\eqref{eq:Ni+1(x)le (1+epsilon)} holds, and let $ x\in F_{l,\epsilon} $, so that such an $ i_0(x) $ exists. For $ i\ge i_0(x) $, we may cover the set
\[G_l(x)=A_n\cap\bigg(\bigcap_{j=l}^\infty\bigcup_{k=1}^{2^j}\mathcal I_j(T^kx)\bigg)\]
by $ N_i(x) $ balls of radius $ 3\times2^{-\kappa i} $. Since $ \theta_i=[\kappa i\log_{L_1}2]+1\le ci $ for some $ c>0 $, we have
\begin{equation}\label{eq:upper bound of Gl(x)}
\hdim G_l(x)\le\limsup_{i\to\infty}\frac{\log N_i(x)}{\log 2^{\kappa i}}\le\limsup_{i\to\infty}\frac{\log \big((1+\epsilon)^{2i}(i\theta_i+2^l)\big)}{\log 2^{\kappa i}}=\frac{2\log (1+\epsilon)}{\kappa\log2}.
\end{equation}
Let $ (\epsilon_m)_{m\ge 1} $ be a monotonically decreasing sequence of real numbers converging to $ 0 $. For each $ \epsilon_m $, choose an integer $ l_m $ satisfying~\eqref{eq:condition l}, but with $ \epsilon $ replaced by $ \epsilon_m $. For every $ m\ge 1 $, by the same reasoning as for~\eqref{eq:upper bound of Gl(x)}, there exists a full $ \uph $-measure set $ F_{l_m,\epsilon_m} $ such that
\[\text{for all }x\in F_{l_m,\epsilon_m},\quad\hdim G_{l_m}(x)\le\frac{2\log (1+\epsilon_m)}{\kappa\log2}.\]
By taking the intersection of the countable full $ \uph $-measure sets $ (F_{l_m,\epsilon_m})_{m\ge 1} $, and using the fact that $ G_l(x) $ is increasing in $ l $, we obtain that for $ \uph $-a.e.\,$ x $,
\[\text{for any }l\ge 1,\quad\hdim G_l(x)\le\lim_{m\to \infty}\hdim G_{l_m}(x)=0.\]
We conclude~\eqref{eq:ukcapan} by noting that
\[\uk\cap A_n\subset\bigcup_{l\ge 1}G_l(x).\qedhere\]
\end{proof}
\begin{rem}
Recall that Lemma~\ref{l:described by hitting time} exhibits a relation between $ \uk $ and the hitting time:
\[ \{ y:\ovr(x,y)<1/\kappa\}\setminus\mathcal O^+(x)\subset\uk\subset\{ y:\ovr(x,y)\le1/\kappa\}.\]
With this relation in mind, it is natural to investigate the size of the level set
\[\{y:R(x,y)=1/\kappa\}.\]
Lemmas~\ref{l:hitting time subset local dimension} and~\ref{l:upper bound of hdimuk} together with the inclusions
\[\{y:R(x,y)=1/\kappa\}\subset\{y:\ovr(x,y)\le 1/\kappa\}\quad\text{and}\quad \{y:R(x,y)=1/\kappa\}\subset\{y:\ovr(x,y)\ge 1/\kappa\}\]
give the upper bound for Hausdorff dimension:
\[\hdim\{y:R(x,y)=1/\kappa\}\le D_{\uph}(1/\kappa),\quad\text{for }\uph\text{-a.e.}\, x. \]
A matching lower bound can be proved by the same argument as Lemma~\ref{l:lower bound for hdimfk}. Thus for $ \uph $-a.e.\,$ x $,
\[\hdim\{y:R(x,y)=1/\kappa\}=D_{\uph}(1/\kappa),\quad\text{if }1/\kappa\notin \{\alpha_-,\alpha_+\}. \]
\end{rem}
\section{Introduction}
Fractional calculus is nowadays a well-consolidated field of research
with many applications in various areas of knowledge; it is interesting
and important because it yields more refined results that are in line with reality
\cite{Sousa,Sousa1,Sousa3,Sousa4,Sousa5,Oliveira,Podlubny,Kilbas,Lakshmikantham}.
In recent years, applications via fractional derivatives have
drawn the attention of numerous researchers, from different areas of knowledge,
particularly during the two years that the world has faced the COVID-19 pandemic
\cite{novo1}. On the other hand, mathematical models have been investigated
for years in order to provide a better description of the studied
phenomena. This is where fractional derivatives play their fundamental role,
namely in improving the results and providing results closer to reality
than in the classical case. In 2022, Khan et al. presented an interesting work on a fractional
mathematical model of tuberculosis in China, together with numerical simulations
that show the applicability of their schemes \cite{novo2}. In the same year,
Etemad et al. \cite{novo3}, using the Caputo-type fractional derivative,
investigated a model of the AH1N1/09 virus. The importance and the consequences
of fractional derivatives, when used to discuss theoretical and practical problems,
are remarkable. For other interesting works that discuss such
phenomena, see \cite{novo4,novo6} and the references therein.
Although fractional calculus is a well-consolidated area, there are still numerous open
problems and paths to be unraveled \cite{Aghayan1,Hassani,Lopes,Aghayan}.
One research path that has recently attracted the interest of mathematicians
is fractional calculus on time scales \cite{Zhu,Yan,Kumar,Malik,Benkhettou}.
In 2007, Atici and Eloe investigated the fractional $q$-calculus on the quantum time scale \cite{Atici}.
The authors presented a study on some properties of the fractional $q$-calculus and investigated
the $q$-Laplace transform. In 2011, Bastos et al. investigated a class of fractional
derivatives on time scales using the inverse Laplace transform \cite{Bastos}. Nowadays,
the subject of fractional calculus on time scales is very rich and under strong current research
\cite{Benkhettou,Ammi,Ahmadkhanlu,Torres,Kumar1,Belaid,Williams}.
Here, we make use of the idea behind a $\psi$-Hilfer fractional derivative,
that is, fractional differentiation of functions with respect to another function,
which is a remarkable and relevant idea with a big impact on fractional calculus and its applications,
in particular for problems described by fractional differential equations \cite{MR4420453}.
The study of $\psi$-Riemann--Liouville fractional integrals with respect to a function $\psi$
on time scales was initiated by Mekhalfi and Torres in 2017, where they introduced
some generalized fractional operators on time scales
of a function with respect to another function and carried out some applications
to dynamic equations \cite{Mekhalfi}. Later, in 2018, Harikrishnan et al.
proposed the $\psi$-Hilfer fractional operator on time scales as
\begin{eqnarray}
\label{901}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} g(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta(n-\alpha);\psi}
\, ^{\mathbb{T}}\Delta\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}g(\tau)
\end{eqnarray}
with $\gamma=\alpha+\beta(n-\alpha)$ and
$^{\mathbb{T}}\Delta=\dfrac{d}{d\tau}$ \cite{Harikrishnana}.
However, we note that (\ref{901}) does not comply with the standard
fractional calculus. In fact, instead of $^{\mathbb{T}}\Delta=\dfrac{d}{d\tau}$,
we should have $^{\mathbb{T}}\Delta_{\psi}=\bigg(\dfrac{\Delta}{\psi^{\Delta}(x)}\bigg)$.
To see that the term $^{\mathbb{T}}\Delta=\dfrac{d}{d\tau}$ is inconsistent,
one just needs to take $\mathbb{T}=\mathbb{R}$ and $\beta\rightarrow 0$ or $\beta\rightarrow 1$,
for which one does not obtain the $\psi$-Riemann--Liouville
or the $\psi$-Caputo fractional derivative, as desired.
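Indeed, taking $\mathbb{T}=\mathbb{R}$, $n=1$, and letting $\beta\rightarrow 1$ (so that $\gamma\rightarrow 1$) in (\ref{901}), one obtains
\[
{\mathds{I}}_{a+}^{1-\alpha;\psi}\,\frac{d}{dx}f(x)
\quad\text{instead of the $\psi$-Caputo derivative}\quad
{\mathds{I}}_{a+}^{1-\alpha;\psi}\bigg(\frac{1}{\psi'(x)}\frac{d}{dx}\bigg)f(x),
\]
and the two coincide only in the particular case $\psi(x)=x$.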
One of the problems when working with solutions of fractional differential equations
on time scales is that the area is still under construction and some fundamental tools
are still under discussion and investigation. Examples include the lack of existence,
uniqueness, stability and controllability results, which prevent attacking
more sophisticated problems. To fill the gap, and motivated by the above mentioned works,
we provide here new tools and an extension of the fractional calculus on time scales.
We claim that the results we now obtain contribute significantly to the field
of fractional differential equations on time scales.
Precisely, we consider the fractional initial value problem
\begin{eqnarray}
\label{I*}
\begin{dcases}
{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi} y(t)=g^{\mathbb{T}}(\alpha,\gamma-\alpha)\,f(t,y(t)), \qquad t\in[0,1]=J\subseteq\mathbb{T},\\
{^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi}y(0)=0,
\end{dcases}
\end{eqnarray}
where ${^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi}(\cdot)$ is the $\psi$-Hilfer fractional derivative on time scales
of order $\alpha$, $0<\alpha<1$, and of type $0\leq{\beta}\leq{1}$
and $f:J\times\mathbb{T}\to\mathbb{R}$ is a right-dense continuous function.
Moreover, we will add an operator $B$ and a control function $u$
into problem (\ref{I*}), which results in
\begin{eqnarray}
\label{eqI}
\begin{dcases}
{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi} y(t)=g^{\mathbb{T}}(\alpha,\gamma-\alpha)\,(f(t,y(t))+Bu(t)), \quad t\in[0,1]=J\subset\mathbb{T},\\
{^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi}y(0)=0,
\end{dcases}
\end{eqnarray}
where $B:\mathbb{R}\to\mathbb{R}$ is assumed to be a bounded linear operator and the control
function $u$ belongs to $L^{2}(J,\mathbb{R})$. Furthermore,
$g^{\mathbb{T}}(\alpha,\gamma-\alpha):=\dfrac{B^{\mathbb{T}}_{0,1}(\alpha,\gamma-\alpha)}{B(\alpha,\gamma-\alpha)}$,
where $B^{\mathbb{T}}_{0,1}(\alpha,\gamma-\alpha)$ and $B(\alpha,\gamma-\alpha)$
with $0<\alpha<1$, $\gamma=\alpha+\beta(1-\alpha)$, are, respectively, the time-scale Beta
function and the classical Beta function.
Our main contributions may be summarized in three axes.
Firstly, we present an extension to the $\psi$-Hilfer
fractional derivative in the sense of time scales
and discuss the essential properties in formulating
a fractional derivative. Furthermore, some properties for the
$\psi$-Riemann--Liouville fractional integral on time scales are also given.
In particular, we investigate Leibniz's rule, the Laplace transform
and integration by parts for both integral and fractional derivatives
on time scales, respectively in the sense of $\psi$-Riemann-Liouville
and $\psi$-Hilfer.
Our second direction of contribution consists in investigating the question
of existence and uniqueness of solution to the initial value
problem \eqref{I*}. Concretely, we prove the following two results.
\begin{theorem} \label{teorema32}
Suppose $J=[0,1]\subseteq\mathbb{T}$.
Then, the initial value problem \eqref{I*} has a unique solution on $J$,
provided $f$ is a right-dense continuous function
for which there exists $M>0$ such that $|f(t,y(t))|< M$ on $J$
and the Lipschitz condition
\begin{equation*}
\|f(t,x)-f(t,y)\|\leq {L}\|x-y\|
\end{equation*}
holds for some ${L}>0$ and for all $t\in J$ and $x,y\in\mathbb{R}$.
\end{theorem}
\begin{theorem} \label{th-3.4}
Suppose $f:J\times\mathbb{R}\to\mathbb{R}$ is a right-dense continuous function
such that there exists $M>0$ with $|f(t,y)|\leq M$
for all $t\in J,y\in\mathbb{R}.$ Then problem \eqref{I*} has a solution on $J$.
\end{theorem}
To better understand our third axis of novelty,
let us first present our hypotheses. They are:
\noindent \rm{($A_1$)} Let $f:J\times\mathbb{T}\to\mathbb{R}$ be
a right-dense continuous function.
\noindent \rm{($A_2$)} There exists $M>0$
such that $|f(t,y(t))|\leq M$
on $[0,1]=J\subset\mathbb{T}$.
\noindent \rm{($A_3$)} Let $D=\{x\in C_{1-\gamma}(J,\mathbb{R}):\|x\|_{C_{1-\gamma}}\leq\rho\}$.
\noindent \rm{($A_4$)} The linear operator $\mathscr{W}_{\alpha}:L^{2}(J,\mathbb{R})\to\mathbb{R}$ defined by
$$
\mathscr{W}_{\alpha}u
=\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
(Bu)(s)\Delta s
$$
has a bounded inverse operator $(\mathscr{W}_{\alpha})^{-1}$,
which takes values in $L^{2}(J,\mathbb{R})/{\rm Ker}\,\mathscr{W}_{\alpha}$,
and there exists a positive constant $M_{W}$ such that
$\|(\mathscr{W}_{\alpha})^{-1}\|\leq M_{W}$. Also,
$B$ is a continuous operator from $\mathbb{R}$ to $\mathbb{R}$ and there exists
a positive constant $M_{B}$ such that $\|B\|\leq M_{B}$. Furthermore, we have
$g^{\mathbb{T}}(\gamma-1,1-\gamma):=\dfrac{B^{\mathbb{T}}_{0,1}(\gamma-1,1-\gamma)}{B(\gamma-1,1-\gamma)}$
with $\gamma=\alpha+\beta(1-\alpha)$.
We investigate the controllability of solutions to \eqref{eqI}.
Precisely, under hypotheses $(A_i)$, $i = 1,\ldots, 4$, we prove the following controllability result.
\begin{theorem}
\label{th-4.2}
If all assumptions $(A_1)$--$(A_4)$ hold, then the control system \eqref{eqI}
is controllable on $J$ provided
\begin{eqnarray}
\label{4.1}
M\,\frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}<1,
\end{eqnarray}
where $g^{\mathbb{T}}(\gamma-1,1-\gamma):=\dfrac{B^{\mathbb{T}}_{0,1}(\gamma-1,1-\gamma)}{B(\gamma-1,1-\gamma)}$
with $\gamma=\alpha+\beta(1-\alpha)$.
\end{theorem}
It should be noted that our results are interesting and nontrivial
even in the particular cases of (i) the standard $\psi$-Hilfer fractional
derivative (obtained by choosing $\mathbb{T}=\mathbb{R}$);
(ii) the classical integer-order case (obtained with $\mathbb{T}=\mathbb{R}$ and $\alpha=1$);
and (iii) the standard time-scale calculus (obtained by considering $\alpha=1$).
Moreover, for concrete particular choices of function $\psi$, we obtain
new formulations involving fractional derivatives on time scales. This opens
new directions of investigation and discussion. In particular, an open problem
consists in computing the solutions to problems involving such operators
by developing suitable numerical methods.
The article is organized as follows. In Section~\ref{sec2}, we recall
the necessary concepts and results from the literature that help us
throughout the paper. In Section~\ref{sec3}, we present our new extension
involving the $\psi$-Hilfer fractional derivative on time scales. The
fundamental properties of the new operator are investigated and, in particular,
we obtain an integration by parts formula and the corresponding Laplace transform.
We proceed by attacking the second main purpose of our paper, that is,
to prove existence and uniqueness of a solution to problem \eqref{I*}.
This is the subject of Section~\ref{sec5}. Finally, we investigate the
controllability of (\ref{eqI}) in Section~\ref{sec6}.
We end the article with comments and future work.
\section{Preliminaries}
\label{sec2}
A time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of the real numbers.
For $\xi\in \mathbb{T}$, one defines the forward jump operator
$\sigma: \mathbb{T}\rightarrow \mathbb{T}$ by \cite{Agarwal,Bohner,Anastassiou}
\begin{eqnarray*}
\sigma(\xi)=\inf\left\{s\in\mathbb{T}: s>\xi \right\}
\end{eqnarray*}
while the backward jump operator $\rho: \mathbb{T}\rightarrow \mathbb{T}$ is defined by
\begin{equation*}
\rho(\xi)=\sup\left\{ s\in\mathbb{T}:\, s<\xi\right\}.
\end{equation*}
In addition, we put $\sigma(\max\,\mathbb{T})=\max\,\mathbb{T}$ if $\max\,\mathbb{T}$
is finite, and $\rho(\min\,\mathbb{T})=\min\,\mathbb{T}$ if $\min\,\mathbb{T}$ is finite.
Following \cite{novo}, for $0\leq\gamma\leq{1}$ we define the weighted space
$C_{1-\gamma,\psi}(J,\mathbb{R})$ of continuous functions $f$ on the finite interval
$J=[0,1]\subset\mathbb{T}$ by
\begin{equation*}
C_{1-\gamma,\psi}[0,1]
=\Big\{f:(0,1]\to\mathbb{R}:(\psi(t)-\psi(0))^{1-\gamma}f(t)\in C([0,1])\Big\}
\end{equation*}
with the norm
\begin{equation*}
\|f\|_{C_{1-\gamma,\psi}[0,1]}=\|(\psi(t)-\psi(0))^{1-\gamma}f(t)\|_{C([0,1])}.
\end{equation*}
The derivative makes use of the set $\mathbb{T}^\kappa$,
which is derived from the time scale $\mathbb{T}$ as follows:
if $\mathbb{T}$ has left-scattered maximum $M$, then
$\mathbb{T}^\kappa:=\mathbb{T} \setminus \{M\}$; otherwise, $\mathbb{T}^\kappa:=\mathbb{T}$.
\begin{definition}[Delta derivative \cite{Agarwal,Bohner,Anastassiou}]
Assume $f:\mathbb{T}\to\mathbb{R}$ and let $t\in\mathbb{T}^{\kappa}$. One defines
\begin{equation*}
f^{\Delta}(t)=\lim_{s\to t}\frac{f(\sigma(t))-f(s)}{\sigma(t)-s},
\qquad s\neq\sigma(t),
\end{equation*}
provided the limit exists. We will call $f^{\Delta}(t)$ the delta derivative
of $f$ at $t$. Moreover, we say that $f$ is delta differentiable on $\mathbb{T}^{\kappa}$ provided
$f^{\Delta}(t)$ exists for all $t\in\mathbb{T}^{\kappa}$. The function $f^{\Delta}:\mathbb{T}^{\kappa}\to\mathbb{R}$
is then called the (delta) derivative of $f$ on $\mathbb{T}^{\kappa}$.
\end{definition}
\begin{definition}[See \cite{Agarwal,Bohner,Anastassiou}]
Let $[a,b]$ denote a closed bounded interval in $\mathbb{T}$. A function $F:[a,b]\to\mathbb{R}$
is called a delta anti-derivative of function $f:[a,b)\to\mathbb{R}$ provided $F$
is continuous on $[a,b]$, delta differentiable on $[a,b)$, and $F^{\Delta}(t)=f(t)$
for all $t\in[a,b)$. Then, we define the $\Delta$-integral of $f$ from $a$ to $b$ by
$$
\int_{a}^{b}f(t)\Delta t:=F(b)-F(a).
$$
\end{definition}
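On the simplest discrete time scale $\mathbb{T}=\mathbb{Z}$ one has $\sigma(t)=t+1$, the delta derivative is the forward difference, and the $\Delta$-integral is a finite left sum, so the anti-derivative relation above can be checked directly. A short Python sketch (our own illustration; the paper works with a general time scale $\mathbb{T}$):

```python
def delta_derivative_Z(f, t):
    # On T = Z, sigma(t) = t + 1, so f^Delta(t) = f(t + 1) - f(t)
    # (the forward difference).
    return f(t + 1) - f(t)

def delta_integral_Z(f, a, b):
    # On T = Z, the Delta-integral of f from a to b is the left sum
    # over the points a, a + 1, ..., b - 1.
    return sum(f(t) for t in range(a, b))

# Fundamental relation: for F(t) = t^2 one has F^Delta(t) = 2t + 1, and
# integrating F^Delta from 0 to 5 recovers F(5) - F(0) = 25.
```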
\begin{proposition}[See \cite{Agarwal,Bohner,Anastassiou}]
\label{Proposition24}
Suppose $a,b\in\mathbb{T}$, $a<b$, and $f(t)$ is continuous on $[a,b]$. Then,
$$
\int_{a}^{b}f(t)\Delta t=[\sigma(a)-a]f(a)+\int_{\sigma(a)}^{b}f(t)\Delta t.
$$
\end{proposition}
\begin{proposition}[See \cite{Agarwal,Bohner,Anastassiou}]
\label{Prop-6}
Suppose $\mathbb{T}$ is a time scale and $f$ is an increasing continuous function on $[a,b]$.
If $F$ is the extension of $f$ to the real interval $[a,b]$ given by
$$
F(s):=
\begin{dcases}
f(s) &{\rm if} \quad s\in\mathbb{T},\\
f(t) &{\rm if} \quad s\in(t,\sigma(t))\not\subset\mathbb{T},
\end{dcases}
$$
then
\begin{equation*}
\int_{a}^{b}f(t)\Delta t\leq\int_{a}^{b}F(t){\rm d}t.
\end{equation*}
\end{proposition}
Let $n-1<\alpha<n$ with $n\in\mathbb{N}$, $I=[a,b]$ be an interval such that
$-\infty\leq{a}<b\leq\infty$ and $f,\psi\in C^{n}([a,b],\mathbb{R})$ be two
functions such that $\psi$ is increasing and $\psi'(x)\neq{0}$ for all $x\in{I}$.
The left-sided $\psi$-Hilfer fractional derivative
$^{{\rm H}}\mathds{D}_{a+}^{\alpha,\beta;\psi}(\cdot)$
of function $f$ of order $\alpha$ and type $0\leq\beta\leq{1}$ is given by
\begin{eqnarray}
\label{1}
^{{\rm H}}\mathds{D}_{a+}^{\alpha,\beta;\psi}f(x)
=\mathds{I}_{a+}^{\beta(n-\alpha);\psi} \bigg(\frac{1}{\psi'(x)}
\frac{{\rm d}}{{\rm d}x}\bigg)^{n}\mathds{I}_{a+}^{n-\gamma;\psi}f(x)
\end{eqnarray}
where $\mathds{I}_{a+}^{\delta,\psi}(\cdot)$ is the Riemann--Liouville fractional
integral with respect to another function $\psi$ with $\delta=\beta(n-\alpha)$
or $\delta=(1-\beta)(n-\alpha)$ and $\gamma=\alpha+\beta(n-\alpha)$.
\begin{lemma}[See \cite{Sousa}]
\label{lemma3.2}
Let $Q_1$ and $Q_2$ be two bounded sets in a Banach space $X$. Then,
\begin{enumerate}
\item $\mu(Q_1)=0$ if and only if $\overline{Q}_1$ is compact.
\item $\mu(Q_1)=\mu(\overline{Q}_1).$
\item $Q_1\subset Q_2$ implies $\mu(Q_1)\leq \mu(Q_2).$
\item $\mu(Q_1+Q_2)\leq\mu(Q_1)+\mu(Q_2)$.
\end{enumerate}
\end{lemma}
\section{The $ \psi$-Hilfer fractional derivative on time scales}
\label{sec3}
In this section we introduce the $\psi$-Hilfer fractional derivative
on time scales. For that we begin by recalling the notion
of $\psi$-Riemann--Liouville fractional integral
as introduced by Mekhalfi and Torres in \cite{Mekhalfi}.
\begin{definition}[See \cite{Mekhalfi}]
\label{B}
Suppose $\mathbb{T}$ is a time scale, $[a,b]$ is an interval of $\mathbb{T}$, $f$ is an
integrable function on $[a,b]$, and $\psi$ is monotone having a delta derivative
$\psi^{\Delta}$ with $\psi^{\Delta}(x)\neq{0}$ for any $x\in[a,b]$. Let $0<\alpha<1$.
Then, the $\psi$-Riemann--Liouville fractional integral on time scales of order $\alpha$
of function $f$ with respect to $\psi$ is defined by
\begin{eqnarray}
\label{2}
^{\mathbb{T}}\mathds{I}_{a+}^{\alpha;\psi}f(x)
=\frac{1}{\Gamma(\alpha)}\int_{a}^{x}\psi^{\Delta}(s)(\psi(x)-\psi(s))^{\alpha-1}f(s)\Delta s.
\end{eqnarray}
\end{definition}
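On the discrete time scale $\mathbb{T}=h\mathbb{Z}$ the $\Delta$-integral in \eqref{2} becomes a finite left sum with $\psi^{\Delta}(s)\Delta s=\psi(s+h)-\psi(s)$, so the operator can be evaluated directly. A minimal numerical sketch in Python (the discretization and the function names are our illustrative choices, not from the text):

```python
import math

def psi_rl_integral_hZ(f, psi, alpha, a, x, h):
    """psi-Riemann--Liouville fractional integral of order alpha on
    T = hZ: (1 / Gamma(alpha)) times the sum over s = a, a+h, ..., x-h
    of (psi(s+h) - psi(s)) * (psi(x) - psi(s))**(alpha - 1) * f(s)."""
    total = 0.0
    s = a
    while s < x - h / 2.0:  # s runs over a, a + h, ..., x - h
        total += (psi(s + h) - psi(s)) * (psi(x) - psi(s)) ** (alpha - 1) * f(s)
        s += h
    return total / math.gamma(alpha)
```

A quick sanity check: for $\alpha=1$ the kernel equals one and the sum telescopes, so for $f\equiv 1$ the operator returns $\psi(x)-\psi(a)$, the first-order $\Delta$-integral of $\psi^{\Delta}$.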
Here we make use of \eqref{2} to define a version
of \eqref{1} in the sense of time scales.
\begin{definition}
Let $n-1<\alpha<n$ with $n\in\mathbb{N}$. Suppose $\mathbb{T}$ is a time scale,
$[a,b]$ is an interval of $\mathbb{T}$, and $f,\psi\in C^{n}([a,b])$ are
two functions such that $\psi$ is increasing having a delta derivative
$\psi^{\Delta}$ with $\psi^{\Delta}(x)\neq{0}$ for all $x\in[a,b]$.
The $\psi$-Hilfer fractional derivative on time scales of order $\alpha$
and type $0\leq\beta\leq{1}$ is given by
\begin{eqnarray}
\label{3}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta(n-\alpha);\psi}
\bigg(\frac{\Delta}{\psi^{\Delta}(x)}\bigg)^{(n)}{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(x)
\end{eqnarray}
with $\gamma=\alpha+\beta(n-\alpha)$.
\end{definition}
\begin{remark}
\label{remark-1:1}
If $\mathbb{T}=\mathbb{R}$, then \eqref{3} reduces to the $\psi$-Hilfer fractional derivative \eqref{1}.
\end{remark}
\begin{remark}
\label{remark-1:2}
Taking the limit $\beta\to 0$ on both sides of \eqref{3}, we obtain the
$\psi$-Riemann--Liouville fractional derivative on time scales given by
\begin{eqnarray}
\label{4}
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\alpha;\psi} f(x)=\,\,\bigg(\frac{\Delta}{\psi^{\Delta}(x)}\bigg)^{(n)}{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\alpha;\psi}f(x).
\end{eqnarray}
\end{remark}
\begin{remark}
\label{remark-1:3}
Taking the limit $\beta\to 1$ on both sides of \eqref{3},
one has the $\psi$-Caputo fractional derivative on time scales given by
\begin{eqnarray}\label{5}
{^{\T}_{\rm C}}{\Delta}_{a+}^{\alpha;\psi} f(x)=\,\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\alpha;\psi}\left(
\frac{\Delta}{\psi^{\Delta}(x)}\right)^{(n)}f(x).
\end{eqnarray}
\end{remark}
\begin{remark}
\label{remark-1:4}
Let $\mathbb{T}=\mathbb{R}$. Taking $\alpha=1$ and $\psi(x)=x$, we get the classical derivative.
\end{remark}
\begin{remark}
\label{remark-1:5:0}
Note that \eqref{3} can be written as
\begin{eqnarray*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)=\,\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(x)
\end{eqnarray*}
and
\begin{eqnarray*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)=\,\,{^{\T}_{\rm C}}{\Delta}_{a+}^{\mu;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(x)
\end{eqnarray*}
with $\gamma=\alpha+\beta(n-\alpha)$ and $\mu=n(1-\beta)+\beta\alpha$,
where ${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}(\cdot)$
and ${^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi}(\cdot)$ are defined by \eqref{2} and \eqref{4}, respectively.
\end{remark}
\begin{remark}
\label{remark-1:5}
With particular choices of $\psi$
we obtain a wide class of fractional derivatives on time scales.
For example, let us consider the $\psi$-Hilfer fractional derivative on time scales
and set $g(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{(1-\beta)(n-\alpha);\psi}f(x)$. In this case, we have
\begin{equation*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\mu;\psi}\bigg(\frac{\Delta}{\psi^{\Delta}(x)}\bigg)^{(n)}g(x)
\end{equation*}
\end{equation*}
with $\mu=n(1-\beta)+\beta\alpha$.
\end{remark}
\begin{definition}{\rm\cite{Benaissa}} {\rm(Beta function on time scales)}
We define the Beta function on time scales by
$B^{\mathbb{T}}_{a,b}(p,q)=\displaystyle
\int_{a}^{b} (s-a)^{q-1}(b-s)^{p-1}\Delta s$, where $p,q>0$.
\end{definition}
\begin{proposition}{\rm\cite{Benaissa}}
The function $B^{\mathbb{T}}_{a,b}(p,q)$ satisfies the inequality
$B^{\mathbb{T}}_{a,b}(p,q)\geq B(p,q)(b-a)^{p+q-1}$ for all $p,q>0$,
where $B(p,q)$ is the classical Beta function.
\end{proposition}
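For $\mathbb{T}=\mathbb{R}$ the inequality of the proposition holds with equality, since the substitution $s=a+(b-a)r$ gives $B^{\mathbb{R}}_{a,b}(p,q)=B(p,q)(b-a)^{p+q-1}$. A small illustrative Python check (all names are ad hoc; the exponents $p,q>1$ are chosen so that the midpoint rule meets no endpoint singularity):

```python
import math

def beta_ts_real(a, b, p, q, n=100_000):
    # B^T_{a,b}(p,q) for T = R: midpoint rule for
    # int_a^b (s-a)^(q-1) (b-s)^(p-1) ds
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        s = a + (k + 0.5) * h
        total += (s - a) ** (q - 1) * (b - s) ** (p - 1)
    return total * h

a, b, p, q = 1.0, 4.0, 2.0, 3.0
lhs = beta_ts_real(a, b, p, q)
# classical Beta function expressed through Gamma
rhs = math.gamma(p) * math.gamma(q) / math.gamma(p + q) * (b - a) ** (p + q - 1)
```

For a genuine time scale with right-scattered points, $B^{\mathbb{T}}_{a,b}$ becomes a sum and the strict inequality can occur.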
We proceed by proving several basic but fundamental properties.
\begin{proposition}
\label{A}
For any nonnegative integrable function $f$ on $[a,b]$,
the $\psi$-Riemann--Liouville fractional integral on time scales satisfies
$$
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)
\geq {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+\beta;\psi}f(t)
$$
for any $\alpha,\beta>0$.
\end{proposition}
\begin{proof}
Using \eqref{2} of Definition~\ref{B} yields
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)
&=& \frac{1}{\Gamma(\alpha)}\int_{a}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Big(
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(s)\Big)\Delta s\\
&=&\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\bigg(
\frac{1}{\Gamma(\beta)}\int_{a}^{s}\psi^{\Delta}(\tau)(\psi(s)-\psi(\tau))^{\beta-1}
f(\tau)\Delta\tau\bigg)\Delta s\\
&=&\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{a}^{t}
\int_{a}^{s}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\psi^{\Delta}(\tau)
(\psi(s)-\psi(\tau))^{\beta-1}f(\tau)\Delta\tau\Delta s.
\end{eqnarray*}
Interchanging the order of integration by Fubini's theorem, we obtain
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)
=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{a}^{t}\bigg(
\int_{\tau}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
\psi^{\Delta}(\tau)(\psi(s)-\psi(\tau))^{\beta-1}\Delta s\bigg)f(\tau)\Delta\tau.
\end{eqnarray*}
Making the change of variable $r=\dfrac{\psi(s)-\psi(\tau)}{\psi(t)-\psi(\tau)}$,
we have ${\rm \Delta}r=\dfrac{\psi^{\Delta}(s)}{\psi(t)-\psi(\tau)}\Delta s$;
when $s\to\tau$ one has $r\to 0$, and when $s\to t$ one has $r\to 1$. Hence,
\begin{equation*}
\begin{split}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}&\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)\\
&=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{a}^{t}\bigg[\int_{\tau}^{t}\psi^{\Delta}(s)
\bigg(1-\frac{\psi(s)-\psi(\tau)}{\psi(t)-\psi(\tau)}\bigg)^{\alpha-1}(\psi(t)-\psi(\tau))^{\alpha-1}
\psi^{\Delta}(\tau)(\psi(s)-\psi(\tau))^{\beta-1}\Delta s\bigg]f(\tau)\Delta\tau\\
&=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{a}^{t}\bigg[\int_{0}^{1}(1-r)^{\alpha-1}
(\psi(t)-\psi(\tau))^{\alpha-1}\psi^{\Delta}(\tau)(\psi(t)-\psi(\tau))r^{\beta-1}
(\psi(t)-\psi(\tau))^{\beta-1}{\rm \Delta}r\bigg]f(\tau)\Delta\tau\\
&=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{1}(1-r)^{\alpha-1}r^{\beta-1}{\rm \Delta}r
\int_{a}^{t}(\psi(t)-\psi(\tau))^{\alpha+\beta-1}\psi^{\Delta}(\tau)f(\tau)\Delta\tau\\
&=\frac{B^{\mathbb{T}}_{0,1}(\alpha,\beta)}{\Gamma(\alpha)\Gamma(\beta)}
\int_{a}^{t}(\psi(t)-\psi(\tau))^{\alpha+\beta-1}\psi^{\Delta}(\tau)f(\tau)\Delta\tau\\
&\geq\frac{1}{\Gamma(\alpha+\beta)}\int_{a}^{t}
(\psi(t)-\psi(\tau))^{\alpha+\beta-1}\psi^{\Delta}(\tau)f(\tau)\Delta\tau.
\end{split}
\end{equation*}
The proof is complete.
\end{proof}
\begin{remark}
\label{remarknovo}
Is it possible to obtain a function $g^{\mathbb{T}}(\alpha,\beta)$ such that
${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi }\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)
= g^{\mathbb{T}}(\alpha,\beta) {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+\beta;\psi}f(t)$?
The answer is yes. Indeed, from the first part of the proof of Proposition~\ref{A}, we have
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\beta;\psi}f(t)
&=&\frac{B^{\mathbb{T}}_{0,1}(\alpha,\beta)}{\Gamma(\alpha)\Gamma(\beta)}
\int_{a}^{t}(\psi(t)-\psi(\tau))^{\alpha+\beta-1}\psi^{\Delta}(\tau)f(\tau)\Delta\tau\notag\\
&=&\frac{B^{\mathbb{T}}_{0,1}(\alpha,\beta)}{B(\alpha,\beta)}
\frac{1}{\Gamma(\alpha+\beta)}\int_{a}^{t}
(\psi(t)-\psi(\tau))^{\alpha+\beta-1}\psi^{\Delta}(\tau)f(\tau)\Delta\tau\notag\\
&=& g^{\mathbb{T}}(\alpha,\beta)\, {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+\beta;\psi}f(t),
\end{eqnarray*}
where $B(\alpha,\beta)$ is the classical Beta function and
$g^{\mathbb{T}}(\alpha,\beta):=\dfrac{B^{\mathbb{T}}_{0,1}(\alpha,\beta)}{B(\alpha,\beta)}$.
\end{remark}
As a consequence of Remark~\ref{remarknovo}, taking $\mathbb{T}=\mathbb{R}$
(so that $B^{\mathbb{R}}_{0,1}(\alpha,\beta)=B(\alpha,\beta)$ and $g^{\mathbb{R}}(\alpha,\beta)=1$),
we recover the semigroup property
${\mathds{I}}_{a+}^{\alpha;\psi}\,\,{\mathds{I}}_{a+}^{\beta;\psi}f(t)
= {\mathds{I}}_{a+}^{\alpha+\beta;\psi}f(t)$.
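The $\mathbb{T}=\mathbb{R}$ semigroup property can be checked numerically: the inner integral is evaluated in closed form by the power rule and the outer one by quadrature. An illustrative Python sketch (all names are ad hoc; $\mathbb{T}=\mathbb{R}$, $\psi=\exp$, and the endpoint singularity is removed by the substitution $v=(\psi(x)-\psi(s))^{\alpha}$):

```python
import math

def frac_int(f, psi, psi_inv, a, x, alpha, n=20_000):
    # I^{alpha;psi}_{a+} f(x) on T = R; substitution v = (psi(x)-psi(s))^alpha
    V = (psi(x) - psi(a)) ** alpha
    h = V / n
    total = sum(f(psi_inv(psi(x) - ((k + 0.5) * h) ** (1.0 / alpha)))
                for k in range(n))
    return total * h / (alpha * math.gamma(alpha))

alpha, beta, delta = 0.4, 0.7, 1.5
psi, psi_inv, a = math.exp, math.log, 0.0
# inner integral in closed form by the power rule:
# I^{beta;psi} (psi-psi(a))^(delta-1) = G(delta)/G(delta+beta) (psi-psi(a))^(delta+beta-1)
inner = lambda s: math.gamma(delta) / math.gamma(delta + beta) \
    * (psi(s) - psi(a)) ** (delta + beta - 1)
lhs = frac_int(inner, psi, psi_inv, a, 1.0, alpha)            # I^alpha I^beta f at 1
rhs = math.gamma(delta) / math.gamma(delta + alpha + beta) \
    * (psi(1.0) - psi(a)) ** (delta + alpha + beta - 1)       # I^{alpha+beta} f at 1
```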
\begin{lemma}
\label{lemma-4}
Let $n-1\leq\gamma<n$, $\gamma<\alpha$, and $f\in C_{\gamma}[a,b]$.
Then, $\displaystyle {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(a)
=\lim_{x\to a+}{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)=0$.
\end{lemma}
\begin{proof}
First, we show that ${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)$ is bounded
on $[a,b]$. Since $f\in C_{\gamma}[a,b]$, the function $(\psi(x)-\psi(a))^{\gamma}f(x)$
is continuous on $[a,b]$ and thus
\begin{eqnarray}
\label{*}
\Big|(\psi(x)-\psi(a))^{\gamma}f(x)\Big|< M
\quad \Rightarrow
\quad |f(x)|<\Big|(\psi(x)-\psi(a))^{-\gamma}\Big| M,
\end{eqnarray}
$x\in[a,b]$, for some positive constant $M$. Taking
${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(\cdot)$ on both sides of \eqref{*} yields
\begin{eqnarray*}
\Big|{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)\Big|
&<&\Big|{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}
(\psi(x)-\psi(a))^{-\gamma}\Big|M\\
&=&M\frac{\Gamma(n-\gamma)}{\Gamma(\alpha+n-\gamma)}
(\psi(x)-\psi(a))^{\alpha-\gamma}.
\end{eqnarray*}
Since $\gamma<\alpha$, the right-hand side tends to $0$ as $x$ tends to $a+$. Therefore, one has
$$
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(a)=\lim_{x\to a+}{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)=0
$$
and the result is proved.
\end{proof}
\begin{theorem}
Let $0<\alpha<1$ and $\psi$ be monotone having a delta derivative
$\psi^{\Delta}$ with $\psi^{\Delta}(x)\neq{0}$ for all $x\in[a,b]$.
The $\psi$-Riemann--Liouville fractional integral on time scales
is a bounded operator given by
\begin{equation*}
\Big\| {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}
f\Big\|_{C_{\gamma,\psi}}\leq {L}\big\|f\big\|_{C_{\gamma,\psi}}
\end{equation*}
with $\displaystyle {L}=\frac{(\psi(b)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}$.
\end{theorem}
\begin{proof}
From \eqref{2} of Definition~\ref{B} and using Proposition~\ref{Prop-6}, it follows that
\begin{eqnarray*}
\Big\|{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi} f\Big\|_{C_{\gamma,\psi}}
&=&\max_{x\in[a,b]}\Big|(\psi(x)-\psi(a))^{\gamma}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi} f(x)\Big|\\
&=&\max_{x\in[a,b]}\bigg|(\psi(x)-\psi(a))^{\gamma}\frac{1}{\Gamma(\alpha)}\int_{a}^{x}
\psi^{\Delta}(s)(\psi(x)-\psi(s))^{\alpha-1}f(s)\Delta s\bigg|\\
&\leq &\|f\|_{C_{\gamma,\psi}}\bigg|\frac{1}{\Gamma(\alpha)}\int_{a}^{x}
\psi^{\Delta}(s)(\psi(x)-\psi(s))^{\alpha-1}\Delta s\bigg|\\
&\leq &\frac{\|f\|_{C_{\gamma,\psi}}}{\Gamma(\alpha)}\frac{(\psi(b)-\psi(a))^{\alpha}}{\alpha}\\
&=&\frac{(\psi(b)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}\|f\|_{C_{\gamma,\psi}}\\
&=&{L}\|f\|_{C_{\gamma,\psi}},
\end{eqnarray*}
where $\displaystyle {L}=\frac{(\psi(b)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}$.
\end{proof}
\begin{lemma}
\label{lemma1}
Let $\mathbb{T}$ be a time scale, $(a,b]$ with $-\infty\leq{a}<b\leq\infty$
be an interval of $\mathbb{T}$, $\alpha>0$, and $\psi$ be a monotonically
increasing positive function whose delta derivative $\psi^{\Delta}$
is continuous on $(a,b]$. Then,
\begin{eqnarray}
\label{8}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)
=\sum_{n=0}^{\infty}\binom{-\alpha}{n}{f^{\Delta}}^{(n)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+n}}{\Gamma(\alpha+n+1)}
\end{eqnarray}
where ${f^{\Delta}}^{(n)}(\cdot)$ is the $n$th
derivative on the time scale $\mathbb{T}$ and $x>a$.
\end{lemma}
\begin{proof}
Assume that $f$ admits the expansion
\begin{eqnarray}
\label{9}
f(t)=\sum_{n=0}^{\infty}\frac{{f^{\Delta}}^{(n)}(x)}{n!}(\psi(t)-\psi(x))^{n}.
\end{eqnarray}
Taking ${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}$ on both sides of (\ref{9}) yields
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)
&=&\frac{1}{\Gamma(\alpha)}\int_{a}^{x} \psi^{\Delta}(t)(\psi(x)-\psi(t))^{\alpha-1}f(t)\Delta t\\
&=&\frac{1}{\Gamma(\alpha)}\int_{a}^{x}\psi^{\Delta}(t)(\psi(x)-\psi(t))^{\alpha-1}
\Bigg(\sum_{n=0}^{\infty}\frac{{f^{\Delta}}^{(n)}(x)}{n!}(\psi(t)-\psi(x))^{n}\Bigg)\Delta t\\
&=&\sum_{n=0}^{\infty}\frac{{f^{\Delta}}^{(n)}(x)}{n!}
\frac{(-1)^n}{\Gamma(\alpha)}\int_{a}^{x}\psi^{\Delta}(t)(\psi(x)-\psi(t))^{\alpha+n-1}\Delta t\\
&=&\sum_{n=0}^{\infty}\frac{{f^{\Delta}}^{(n)}(x)}{n!}\frac{(-1)^n}{\Gamma(\alpha)}
\frac{(\psi(x)-\psi(a))^{\alpha+n}}{\Gamma(\alpha+n+1)}\Gamma(\alpha+n).
\end{eqnarray*}
Taking into account the identity
\begin{equation*}
\binom{\alpha}{n}=\frac{(-1)^{n-1}\alpha\Gamma(n-\alpha)}{\Gamma(1-\alpha)\Gamma(n+1)},
\end{equation*}
we have
\begin{equation*}
\binom{-\alpha}{n}=\frac{(-1)^{n-1}(-\alpha)\Gamma(\alpha+n)}{\Gamma(1+\alpha)\Gamma(n+1)}
= \frac{(-1)^{n}\Gamma(\alpha+n)}{\Gamma(\alpha)\Gamma(n+1)}.
\end{equation*}
We conclude that
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)
=\sum_{n=0}^{\infty}\binom{-\alpha}{n}{f^{\Delta}}^{(n)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+n}}{\Gamma(\alpha+n+1)}
\end{equation*}
and the proof is complete.
\end{proof}
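As an illustrative sanity check of \eqref{8} on $\mathbb{T}=\mathbb{R}$ with $\psi(t)=t$, $a=0$, $f=\exp$ and $\alpha=\tfrac12$: all delta derivatives reduce to ordinary ones, $f^{(n)}(x)=e^{x}$, and the truncated series recovers the known value $\mathds{I}^{1/2}_{0+}e^{t}\big|_{t=1}=e\,\operatorname{erf}(1)$. A short Python computation (names ad hoc):

```python
import math

alpha, x, N = 0.5, 1.0, 40

def binom_neg_alpha(n):
    # binom(-alpha, n) = (-1)^n Gamma(alpha+n) / (Gamma(alpha) * n!)
    return (-1) ** n * math.gamma(alpha + n) \
        / (math.gamma(alpha) * math.factorial(n))

# truncated series (8): sum_n binom(-alpha,n) f^(n)(x) x^(alpha+n) / Gamma(alpha+n+1)
series = sum(binom_neg_alpha(n) * math.exp(x) * x ** (alpha + n)
             / math.gamma(alpha + n + 1) for n in range(N))
exact = math.e * math.erf(1.0)   # I^{1/2} exp evaluated at x = 1
```

The terms decay factorially, so forty of them are far more than enough here.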
Now, we present the Leibniz rule associated with the $\psi$-Riemann--Liouville
fractional integral on time scales.
\begin{theorem}
\label{A*}
Let $\mathbb{T}$ be a time scale, $(a,b]$ with $-\infty\leq{a}<b\leq\infty$ be an interval of $\mathbb{T}$,
$\alpha>0$, and $\psi$ be a monotonically increasing positive function whose delta
derivative $\psi^{\Delta}$ is continuous on $(a,b]$. The left fractional integral of the
product of two functions is given by
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(fh)(x)
=\sum_{k=0}^{\infty}\binom{-\alpha}{k}{f^{\Delta}}^{(k)}(x)\, {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+k;\psi} h(x),
\end{equation*}
where ${f^{\Delta}}^{(k)}$ is the $k$th derivative on the time scale $\mathbb{T}$ and $x>a$.
\end{theorem}
\begin{proof}
Let $f$ and $h$ satisfy the conditions of Lemma~\ref{lemma1}.
Then, from \eqref{8} it follows that
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(fh)(x)
=\sum_{m=0}^{\infty}\binom{-\alpha}{m} {(fh)^{\Delta}}^{(m)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+m}}{\Gamma(\alpha+m+1)}.
\end{equation*}
Using the Leibniz rule,
\begin{equation*}
{(fh)^{\Delta}}^{(m)}(x)=\sum_{k=0}^{m}\binom{m}{k}{f^{\Delta}}^{(k)}(x){h^{\Delta}}^{(m-k)}(x)
\end{equation*}
with $m\in\mathbb{N}$ and $f,h\in C^{m}([a,b])$, which yields that
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(fh)(x)
&=&\sum_{m=0}^{\infty}\binom{-\alpha}{m}
\sum_{k=0}^{m}\binom{m}{k}{f^{\Delta}}^{(k)}(x){h^{\Delta}}^{(m-k)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+m}}{\Gamma(\alpha+m+1)}\\
&=&\sum_{k=0}^{\infty}{f^{\Delta}}^{(k)}(x)\sum_{m=k}^{\infty}
\binom{-\alpha}{m}\binom{m}{k}{h^{\Delta}}^{(m-k)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+m}}{\Gamma(\alpha+m+1)}.
\end{eqnarray*}
Considering $n=m-k$ and using the identity
\begin{equation*}
\binom{-\alpha}{n+k}\binom{n+k}{k}=\binom{-\alpha}{k}\binom{-(\alpha+k)}{n}
\end{equation*}
we obtain that
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(fh)(x)
&=&\sum_{k=0}^{\infty}{f^{\Delta}}^{(k)}(x)
\sum_{n=0}^{\infty}\binom{-\alpha}{n+k}
\binom{n+k}{k}{h^{\Delta}}^{(n+k-k)}(x)
\frac{(\psi(x)-\psi(a))^{\alpha+n+k}}{\Gamma(\alpha+n+k+1)}\\
&=&\sum_{k=0}^{\infty}{f^{\Delta}}^{(k)}(x)\binom{-\alpha}{k}\sum_{n=0}^{\infty}
\binom{-(\alpha+k)}{n}{h^{\Delta}}^{(n)}(x)\frac{(\psi(x)-\psi(a))^{\alpha+n+k}}{\Gamma(\alpha+n+k+1)}\\
&=&\sum_{k=0}^{\infty}\binom{-\alpha}{k}{f^{\Delta}}^{(k)}(x)\,
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+k;\psi}h(x)
\end{eqnarray*}
and the proof is complete.
\end{proof}
\begin{proposition}
Let $0\leq \alpha \leq 1$ and $0\leq \beta \leq 1$. Then,
\begin{enumerate}[label={\normalfont(\arabic*)}]
\item ${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}\big(\lambda_1 f(t)+\lambda_2 h(t)\big)=\lambda_1{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(t)
+\lambda_2{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} h(t)$, where $\lambda_{1},\lambda_{2}\in\mathbb{R}$.
\item $\displaystyle {^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\psi(x)-\psi(a))^{\delta-1}
=\frac{\Gamma(\delta)}{\Gamma(\delta-\alpha)}
(\psi(x)-\psi(a))^{\delta-\alpha-1}$, $\delta>1$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Using the fact that
${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(t)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(t)$ and
because ${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}(\cdot)$ and ${^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi}(\cdot)$
are linear, we have that ${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\cdot)$ is also linear.
\noindent (2) Remembering that ${^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi}(\psi(x)-\psi(a))^{\delta-1}
=\dfrac{\Gamma(\delta)}{\Gamma(\delta-\gamma)} (\psi(x)-\psi(a))^{\delta-\gamma-1}$
and
$$
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(\psi(x)-\psi(a))^{\delta-1}
=\dfrac{\Gamma(\delta)}{\Gamma(\delta+\alpha)} (\psi(x)-\psi(a))^{\delta+\alpha-1},
$$
we obtain that
\begin{eqnarray*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\psi(x)-\psi(a))^{\delta-1}
&=&{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi}(\psi(x)-\psi(a))^{\delta-1}\\
&=&{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\bigg\{
\frac{\Gamma(\delta)}{\Gamma(\delta-\gamma)}
(\psi(x)-\psi(a))^{\delta-\gamma-1}\bigg\}\\
&=&\frac{\Gamma(\delta)}{\Gamma(\delta-\alpha)}(\psi(x)-\psi(a))^{\delta-\alpha-1}.
\end{eqnarray*}
The proof is complete.
\end{proof}
\begin{remark}
\label{rema1}
In particular, for $k\in\mathbb{N}$ with $k\geq 1$ (so that $\delta=k+1>1$),
we have $\displaystyle {^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\psi(x)-\psi(a))^{k}
=\frac{k!}{\Gamma(k+1-\alpha)} (\psi(x)-\psi(a))^{k-\alpha}$.
On the other hand, for $k=0$,
we have $\displaystyle {^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\psi(x)-\psi(a))^{k}=0$.
\end{remark}
\begin{theorem}
\label{th-5}
If $f\in C^{n}[a,b]$, $n-1<\alpha<n$, and $0\leq\beta\leq 1$, then
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)
= g^{\mathbb{T}}(\alpha,\gamma-\alpha)\, g^{\mathbb{T}}(\gamma-n,n-\gamma) f(x)- g^{\mathbb{T}}(\alpha,\gamma-\alpha)\sum_{k=1}^{n}\frac{(\psi(x)-\psi(a))^{\gamma-k}}{\Gamma(\gamma-k+1)}{
f_{\psi}^{\Delta}}^{(n-k)}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(a).
\end{equation*}
\end{theorem}
\begin{proof}
Using the identity
\begin{eqnarray}
\label{star}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma;\psi}f(x)
=g^{\mathbb{T}}(\alpha,\gamma)\, {^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha+\gamma;\psi}f(x),
\end{eqnarray}
we have
\begin{eqnarray}
\label{star-2}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)
=g^{\mathbb{T}}(\alpha,\gamma-\alpha){^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma;\psi}\, {^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(x)
\end{eqnarray}
with $\gamma=\alpha+\beta(n-\alpha)$. Integrating by parts $n$-times, we get
\begin{eqnarray*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(x)
=g^{\mathbb{T}} (\gamma-n,n-\gamma) f(x)-\sum_{k=1}^{n}
\frac{(\psi(x)-\psi(a))^{\gamma-k}}{\Gamma(\gamma-k+1)}{f_{\psi}^{\Delta}}^{(n-k)}
\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(a).
\end{eqnarray*}
From \eqref{star} and \eqref{star-2}, we conclude that
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)
= g^{\mathbb{T}}(\alpha,\gamma-\alpha) g^{\mathbb{T}}(\gamma-n,n-\gamma) f(x)- g^{\mathbb{T}}(\alpha,\gamma-\alpha)\sum_{k=1}^{n}\frac{(\psi(x)-\psi(a))^{\gamma-k}}{\Gamma(\gamma-k+1)}{
f_{\psi}^{\Delta}}^{(n-k)}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(a),
\end{equation*}
where
\begin{equation*}
{f_{\psi}^{\Delta}}^{(n)}
:=\l(\frac{\Delta}{\psi^{\Delta}(x)}\r)^{n}f(x)
\end{equation*}
and $g^{\mathbb{T}}(p,q)=\dfrac{B_{0,1}^{\mathbb{T}}(p,q)}{B(p,q)}$
is the quotient of the Beta function on time scales by the classical Beta function.
The result is proved.
\end{proof}
\begin{remark}
\begin{enumerate}
\item Taking $n=1$ in Theorem~\ref{th-5}, we have
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\,{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)
= g^{\mathbb{T}}(\alpha,\gamma-\alpha) g^{\mathbb{T}}(\gamma-1,1-\gamma) f(x)
- g^{\mathbb{T}}(\alpha,\gamma-\alpha)\frac{(\psi(x)-\psi(a))^{\gamma-1}}{\Gamma(\gamma)}
\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{1-\gamma;\psi}f(a).
\end{equation*}
\item Taking $\mathbb{T}=\mathbb{R}$ in Theorem~\ref{th-5}, we have
\begin{equation*}
{\mathds{I}}_{a+}^{\alpha;\psi}\, \Delta^{\alpha,\beta;\psi}_{a+}
f(x) = f(x)- \frac{(\psi(x)-\psi(a))^{\gamma-1}}{\Gamma(\gamma)}
\,{\mathds{I}}_{a+}^{1-\gamma;\psi}f(a).
\end{equation*}
\end{enumerate}
\end{remark}
\begin{proposition}
\label{prop-I}
For any integrable function $f$ on $[a,b]$ one has
\begin{equation*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)=g^{\mathbb{T}}(\gamma-n,n-\gamma)\,\, f(x).
\end{equation*}
\end{proposition}
\begin{proof}
By definition of ${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}(\cdot)$, and using Proposition~\ref{A},
Lemma~\ref{lemma-4} and Theorem~\ref{th-5}, we can write that
\begin{eqnarray*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}f(x)
&=&{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma-\alpha;\psi} f(x)\\
&=&g^{\mathbb{T}}(\gamma-n,n-\gamma) f(x)-\sum_{k=1}^{n}\frac{(\psi(x)-\psi(a))^{\gamma-k}}{\Gamma(\gamma
-k+1)}{f_{\psi}^{\Delta}}^{(n-k)}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{n-\gamma;\psi}f(a)\\
&=&g^{\mathbb{T}}(\gamma-n,n-\gamma) f(x).
\end{eqnarray*}
The proof is complete.
\end{proof}
\begin{theorem}
\label{th-6}
The $\psi$-Hilfer fractional derivative on time scales is a bounded operator:
for all $n-1<\alpha<n$ and $0\leq\beta\leq{1}$,
\begin{equation*}
\Big\|{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f\Big\|_{C_{\gamma,\psi}}
\leq {L} \Big\|{f^{\Delta}}^{(n)}\Big\|_{C_{\gamma,\psi}^{n}}
\end{equation*}
with $\displaystyle {L}=\frac{(\psi(b)-\psi(a))^{n-\alpha}}{(n-\gamma)(\gamma-\alpha)\Gamma(n-\gamma)\Gamma(\gamma-\alpha)}$.
\end{theorem}
\begin{proof}
Remembering that ${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(x)$ yields
\begin{eqnarray*}
\Big\|{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f\Big\|_{C_{\gamma,\psi}}
&=&\Big\|{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f\Big\|_{C_{\gamma,\psi}}\\
&\leq &\frac{\Big\|{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f\Big\|_{C_{\gamma,\psi}}}{\Gamma(\gamma-\alpha)}\max_{x\in[a,b]}
\bigg|\int_{a}^{x}\psi^{\Delta}(t)(\psi(x)-\psi(t))^{\gamma-\alpha-1}\Delta t\bigg|\\
&\leq &\frac{(\psi(b)-\psi(a))^{\gamma-\alpha}}{(\gamma-\alpha)\Gamma(\gamma-\alpha)}
\Big\|{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f\Big\|_{C_{\gamma,\psi}}\\
&\leq &\frac{(\psi(b)-\psi(a))^{\gamma-\alpha}}{(\gamma-\alpha)\Gamma(\gamma-\alpha)}
\frac{\Big\|{f^{\Delta}}^{(n)}\Big\|_{C_{\gamma,\psi}^{n}}}{\Gamma(n-\gamma)}
\max_{x\in[a,b]}\bigg|\int_{a}^{x}\psi^{\Delta}(t)(\psi(x)-\psi(t))^{n-\gamma-1}\Delta t\bigg|\\
&\leq &\frac{(\psi(b)-\psi(a))^{n-\alpha}}{(n-\gamma)(\gamma-\alpha)
\Gamma(n-\gamma)\Gamma(\gamma-\alpha)}\Big\|{f^{\Delta}}^{(n)}\Big\|_{C_{\gamma,\psi}^{n}},
\end{eqnarray*}
which proves the intended result.
\end{proof}
\begin{theorem}
\label{th-11}
Let $f\in C^{1}([a,b])$, $\alpha\geq\delta\geq{0}$,
and $0\leq{\beta}\leq{1}$. Then,
\begin{equation*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\delta;\psi}f(x)
=g^{\mathbb{T}}(1-\alpha,\delta)\,\,g^{\mathbb{T}}(\gamma-\alpha,
\gamma-\delta)\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{2\gamma-\alpha-\delta;\psi}f(x)
\end{equation*}
with $\gamma=\alpha+\beta(1-\alpha)$.
\end{theorem}
\begin{proof}
We begin by noting that
\begin{equation}
\label{I}
\begin{split}
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\delta;\psi}f(x)
&=\l(\frac{\Delta}{\psi^{\Delta}(x)}\r)
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{1-\alpha;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\delta;\psi}f(x)\\
&=g^{\mathbb{T}}(1-\alpha,\delta)\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha-\delta;\psi}f(x).
\end{split}
\end{equation}
Using the relation
\begin{equation*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi} f(x)={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi} f(x)
\end{equation*}
with $\gamma=\alpha+\beta(1-\alpha)$, together with \eqref{I}, we get that
\begin{eqnarray*}
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\delta;\psi}f(x)
&=&{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\gamma-\alpha;\psi}\,
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\gamma;\psi}\,{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\delta;\psi}f(x)\\
&=&g^{\mathbb{T}}(1-\alpha,\delta)\,\,g^{\mathbb{T}}(\gamma-\alpha,\gamma
-\delta){\mathds{I}}_{a+}^{2\gamma-\alpha-\delta;\psi}f(x),
\end{eqnarray*}
which proves the intended equality.
\end{proof}
We now turn to the $\Delta$-Laplace
transform on time scales and to integration by parts.
Let $p\in R(\mathbb{T},\mathbb{R})$ be regressive. We define the exponential function by
\begin{equation*}
e_{p}(t,s)=\exp\left( \int_{s}^{t} \xi_{\mu(\tau)} (p(\tau))\Delta\tau \right),
\end{equation*}
where $\xi_{h}$ denotes the cylinder transformation.
\begin{definition}
Let $f,\psi:[0,\infty)\rightarrow \mathbb{R}$ be real valued functions
such that $\psi$ is a nonnegative increasing function with $\psi(0)=0$.
Then the Laplace transform of $f$ with respect to $\psi$ is defined by
\begin{equation*}
\mathscr{L}_{\psi}(f(t))=F(s)=\int_{0}^{\infty} e^{-s \psi(t)} \psi'(t) f(t) dt
\end{equation*}
for all $s\in \mathbb{C}$ for which this integral converges.
Here, $\mathscr{L}_{\psi}(\cdot)$ denotes the Laplace transform with respect
to $\psi$, which we call the generalized Laplace transform.
\end{definition}
\begin{corollary}
\label{cor-20}
If $f(t)$ is a function whose classical Laplace transform is $F(s)$,
then the generalized Laplace transform of the composition
$(f\circ \psi)(t)=f(\psi(t))$ is also $F(s)$:
\begin{equation*}
\mathscr{L}[f(t)]=F(s) \quad \Rightarrow \quad \mathscr{L}_{\psi}[f(\psi(t))]=F(s).
\end{equation*}
\end{corollary}
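Corollary~\ref{cor-20} can be illustrated numerically. The Python sketch below uses the hypothetical choices $\psi(t)=t^{2}$ and $f(u)=\cos u$, whose classical transform is $F(s)=s/(s^{2}+1)$; the integral is truncated at $T=6$, where the Gaussian factor $e^{-s\psi(t)}$ makes the tail negligible:

```python
import math

# check L_psi[f(psi(t))] = F(s) for psi(t) = t^2, f(u) = cos(u),
# whose classical Laplace transform is F(s) = s / (s^2 + 1)
s, T, n = 2.0, 6.0, 200_000
h = T / n
num = 0.0
for k in range(n):
    t = (k + 0.5) * h
    # integrand: e^{-s psi(t)} psi'(t) f(psi(t))
    num += math.exp(-s * t * t) * (2.0 * t) * math.cos(t * t)
num *= h
exact = s / (s * s + 1.0)
```

The check is, in effect, the substitution $u=\psi(t)$ carried out by quadrature.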
\begin{definition}
For $f:\mathbb{T}\to\mathbb{R}$, the time-scale or generalized transform of $f$,
denoted by $^{\rm{\mathbb{T}}}\mathscr{L}[f]$ or $F(z)$, is given by
\begin{equation*}
^{\rm{\mathbb{T}}}\mathscr{L}[f](z)=F(z):=\int_{0}^{\infty}f(t)u^{\sigma}(t)\Delta t,
\end{equation*}
where $u(t)={\rm e}_{\theta z}(t,0)$
$\big(u^{\sigma}(t)={\rm e}_{\theta z}(\sigma(t),0)\big)$, that is,
\begin{equation*}
^{\rm{\mathbb{T}}}\mathscr{L}[f](z)=F(z)
:=\int_{0}^{\infty}f(t){\rm e}_{\theta z}(\sigma(t),0)\Delta t.
\end{equation*}
\end{definition}
\begin{theorem}[Inversion of the transform]
Suppose that $F(z)$ is analytic in the region
${\rm Re}_{\mu}(z)>{\rm Re}_{\mu}(c)$ and $F(z)\to 0$
as $|z|\to\infty$ in this region. Furthermore, suppose $F(z)$
has finitely many regressive poles of finite order $\{z_1,z_2,\ldots,z_n\}$
and $\tilde{F}_{\mathbb{R}}(z)$ is the transform of the function $\tilde{f}(t)$ on $\mathbb{R}$
that corresponds to the transform $F(z)=F_{\mathbb{T}}(z)$ of $f(t)$ on $\mathbb{T}$. If
\begin{equation*}
\int_{c-i\infty}^{c+i\infty}|\tilde{F}_{\mathbb{R}}(z)||{\rm d}z|<\infty,
\end{equation*}
then
\begin{equation*}
f(t)=\sum_{i=1}^{n}{\rm Res}_{z=z_i}{\rm e}_{z}(t,0)F(z)
\end{equation*}
has transform $F(z)$ for all $z$ with ${\rm Re}(z)>c$.
\end{theorem}
Our main purpose here is to propose an extension
of the Laplace transform on time scales using $\psi$.
The operators ${^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}$, ${^{\T}_{\rm RL}}{\Delta}_{a+}^{\alpha;\psi}$, ${^{\T}_{\rm C}}{\Delta}_{a+}^{\alpha;\psi}$ and ${^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}$
can be written as the conjugation of the standard fractional operators
with the operation of composition with $\psi$ or $\psi^{-1}$, given by
\begin{equation}
\label{90}
\begin{split}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}
&=Q_{\psi}\circ{^{\mathbb{T}}}{\mathds{I}}_{\psi(a)+}^{\alpha}
\circ(Q_{\psi})^{-1},\\
{^{\T}_{\rm RL}}{\Delta}_{a+}^{\alpha;\psi} &=Q_{\psi}\circ {^{\T}_{\rm RL}}{\Delta}_{\psi(a)+}^{\alpha}\circ(Q_{\psi})^{-1},\\
{^{\T}_{\rm C}}{\Delta}_{a+}^{\alpha;\psi} &=Q_{\psi}\circ {^{\T}_{\rm C}}{\Delta}_{\psi(a)+}^{\alpha}\circ(Q_{\psi})^{-1},
\end{split}
\end{equation}
and
$$
{^{\T}}{\Delta}_{a+}^{\alpha,\beta;\psi}=Q_{\psi}\circ {^{\T}}{\Delta}_{\psi(a)+}^{\alpha,\beta}\circ(Q_{\psi})^{-1},
$$
where the functional operator $Q_{\psi}$ is given by
\begin{equation*}
(Q_{\psi} f)(x)=f(\psi(x)).
\end{equation*}
\begin{definition}
Let $f,\psi:\mathbb{T}\to\mathbb{R}$ be such that $\psi$ is a nonnegative increasing function with $\psi(0)=0$.
Then, the time scale generalized transform of $f$ with respect to $\psi$ is defined by
\begin{equation*}
^{\rm{\mathbb{T}}}\mathscr{L}_{\psi}[f](z)=F(z)
:=\int_{0}^{\infty}f(t)\psi^{\Delta}(t)u^{\sigma}(\psi(t))\Delta t,
\end{equation*}
where $u(\tau)={\rm e}_{\theta z}(\tau,0)$
$\big(u^{\sigma}(\psi(t))={\rm e}_{\theta z}(\sigma(\psi(t)),0)\big)$, that is,
\begin{equation*}
^{\rm{\mathbb{T}}}\mathscr{L}_{\psi}[f](z)=F(z)
:=\int_{0}^{\infty}f(t)\psi^{\Delta}(t){\rm e}_{\theta z}(\sigma(\psi(t)),0)\Delta t.
\end{equation*}
\end{definition}
Next, we prove an integration by parts formula
for the $\psi$-Riemann--Liouville fractional integral on time scales.
\begin{theorem}
\label{th-4}
Let $\alpha>0$, $p,q\geq{1}$ and $\frac{1}{p}+\frac{1}{q}\leq 1+\alpha$,
where $p\neq{1}$ and $q\neq{1}$ in the case when $\frac{1}{p}+\frac{1}{q}=1+\alpha$.
Moreover, let
\begin{equation*}
{^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}(L_p)
=\Big\{f:f={^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}g,\,\,
g\in L_p(a,b)\Big\}.
\end{equation*}
The following integration by parts formulas hold:
if $\varphi\in L_p(a,b)$ and $\phi\in L_q(a,b)$, then
\begin{equation*}
\int_{a}^{b}\bigg({^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\phi(t)\bigg)\varphi(t)\Delta t
= \int_{a}^{b}\phi(t)\psi^{\Delta}(t)\,{^{\mathbb{T}}}{\mathds{I}}_{b-}^{\alpha;\psi}
\l(\frac{\varphi(t)}{\psi^{\Delta}(t)}\r) \Delta t.
\end{equation*}
\end{theorem}
\begin{proof}
If $\varphi\in L_{p}(a,b)$ and $\phi\in L_q(a,b)$,
then, from \eqref{2} of Definition~\ref{B}, it follows that
\begin{eqnarray*}
\int_{a}^{b}\bigg({^{\mathbb{T}}}{\mathds{I}}_{a+}^{\alpha;\psi}\phi(t)\bigg)\varphi(t)\Delta t
&=& \int_{a}^{b}\bigg(\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
\phi(s) \Delta s\bigg)\varphi(t)\Delta t\\
&=&\int_{a}^{b}\phi(t)\psi^{\Delta}(t)\bigg(\frac{1}{\Gamma(\alpha)}\int_{t}^{b}
(\psi(s)-\psi(t))^{\alpha-1}\varphi(s)\Delta s\bigg)\Delta t\\
&=&\int_{a}^{b}\phi(t)\psi^{\Delta}(t)\,{^{\mathbb{T}}}{\mathds{I}}_{b-}^{\alpha;\psi}
\l(\frac{\varphi(t)}{\psi^{\Delta}(t)}\r) \Delta t.
\end{eqnarray*}
\end{proof}
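For $\mathbb{T}=\mathbb{R}$ and the simplest weight $\psi(t)=t$, Theorem~\ref{th-4} reduces to the classical fractional integration by parts $\int_a^b(\mathds{I}^{\alpha}_{a+}\phi)\varphi = \int_a^b \phi\,(\mathds{I}^{\alpha}_{b-}\varphi)$, which can be checked by nested quadrature. An illustrative Python sketch (ad hoc names; the inner integrals use the singularity-removing substitution, the outer one a plain midpoint rule):

```python
import math

def I_left(g, a, t, alpha, n=800):
    # I^alpha_{a+} g(t) via the substitution v = (t-s)^alpha
    V = (t - a) ** alpha
    h = V / n
    total = sum(g(t - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / (alpha * math.gamma(alpha))

def I_right(g, t, b, alpha, n=800):
    # I^alpha_{b-} g(t) via the substitution v = (s-t)^alpha
    V = (b - t) ** alpha
    h = V / n
    total = sum(g(t + ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / (alpha * math.gamma(alpha))

a, b, alpha, m = 0.0, 1.0, 0.5, 800
phi, varphi = math.cos, math.sin
h = (b - a) / m
nodes = [a + (k + 0.5) * h for k in range(m)]
lhs = sum(I_left(phi, a, t, alpha) * varphi(t) for t in nodes) * h
rhs = sum(phi(t) * I_right(varphi, t, b, alpha) for t in nodes) * h
```

Both sides approximate the same double integral, which is exactly the Fubini step in the proof above.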
\section{Existence and uniqueness}
\label{sec5}
Now we investigate the question of existence
and uniqueness of solution to problem \eqref{I*}.
\begin{lemma}
Let $0<\alpha<1$, $J\subseteq\mathbb{T}$, and $f:J\times\mathbb{R}\to\mathbb{R}$.
A function $y$ is a solution of problem \eqref{I*}
if, and only if, it is a solution of the integral equation
\begin{eqnarray}
\label{II}
y(t)=\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)} \frac{1}{\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s,
\end{eqnarray}
where $g^{\mathbb{T}}(\gamma-1,1-\gamma):=\dfrac{B^{\mathbb{T}}_{0,1}(\gamma-1,
1-\gamma)}{B(\gamma-1,1-\gamma)} $ with $\gamma=\alpha+\beta(1-\alpha)$.
\end{lemma}
\begin{proof}
Applying the operator ${^{\mathbb{T}}}{\mathds{I}}_{0}^{\alpha;\psi}(\cdot)$
to both sides of problem \eqref{I*}, using the initial condition
and Theorem~\ref{th-5}, we have
\begin{eqnarray}
\label{2.5}
y(t)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)} \frac{1}{\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1} f(s,y(s))\Delta s.
\end{eqnarray}
On the other hand, applying the operator ${^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi}(\cdot)$ on both sides
of \eqref{2.5}, and using Proposition~\ref{prop-I}, we obtain
\begin{eqnarray*}
{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi} y(t)
&=&{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi}\left(\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)} \int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\,\Delta s\right)\notag\\
&=& f(t,y(t)).
\end{eqnarray*}
Now, taking ${^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi}(\cdot)$ on both sides
of \eqref{2.5} and using Lemma~\ref{lemma-4}, we get
${^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi} y(0)=0$. The proof is complete.
\end{proof}
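On $\mathbb{T}=\mathbb{R}$ with $\psi(t)=t$ (so that $g^{\mathbb{R}}(\gamma-1,1-\gamma)=1$), the integral equation \eqref{II} can be solved numerically by a product-rectangle scheme: on each cell the value $f(\cdot,y(\cdot))$ is frozen at the left node and the weakly singular kernel is integrated exactly. The Python sketch below is illustrative only (the scheme is a standard discretization idea, not taken from this paper, and all names are ad hoc); with $f(t,y)=1+y$ the exact solution is $y(t)=E_{\alpha}(t^{\alpha})-1$ in terms of the Mittag--Leffler function:

```python
import math

def solve_volterra(f, alpha, T, m):
    # explicit product-rectangle scheme for
    #   y(t) = 1/Gamma(alpha) * int_0^t (t - s)^(alpha-1) f(s, y(s)) ds
    # (T = R, psi(t) = t); f(s, y(s)) is frozen at the left node of each
    # cell and the kernel is integrated exactly over the cell
    h = T / m
    c = 1.0 / math.gamma(alpha + 1.0)
    y = [0.0] * (m + 1)
    z = [0.0] * m                      # z[j] = f(t_j, y_j)
    for i in range(1, m + 1):
        z[i - 1] = f((i - 1) * h, y[i - 1])
        acc = 0.0
        for j in range(i):
            w = ((i - j) * h) ** alpha - ((i - j - 1) * h) ** alpha
            acc += z[j] * w
        y[i] = c * acc
    return y

alpha, T, m = 0.5, 1.0, 2000
y = solve_volterra(lambda t, v: 1.0 + v, alpha, T, m)
# exact value at t = 1: Mittag-Leffler series E_alpha(1) - 1
exact = sum(1.0 / math.gamma(alpha * k + 1.0) for k in range(1, 80))
```

The scheme is first-order accurate, which is enough to illustrate the equivalence stated in the lemma.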
\begin{proof} (of Theorem~\ref{teorema32})
Let $S$ be the set of right-dense continuous functions
and $J\subseteq\mathbb{T}$. For $y\in S$, define
\begin{equation*}
\|y\|_{C_{1-\gamma,\psi}}=\sup_{t\in J}\|(\psi(t)-\psi(0))^{1-\gamma}y(t)\|_{C}.
\end{equation*}
Note that $S$ is a Banach space. Define the subset
$S_{\psi}(\rho)$ and the operator $\Theta$ by
\begin{equation*}
S_{\psi}(\rho)=\big\{x\in S:\|x\|_{C_{1-\gamma,\psi}}\leq\rho\big\}
\end{equation*}
and
\begin{equation*}
\Theta(y)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)} \frac{1}{\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s.
\end{equation*}
Using Proposition~\ref{Proposition24}, one has
\begin{eqnarray*}
|(\psi(t)-\psi(0))^{1-\gamma}\Theta(y(t))|
&=&\bigg|\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\bigg|\\
&\leq & \frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}|f(s,y(s))|\Delta s\\
&\leq & M\,
\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s.
\end{eqnarray*}
Since $\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}$
is an increasing function, by Proposition~\ref{Prop-6} it follows that
\begin{equation*}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s
\leq\int_{0}^{t} \psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s.
\end{equation*}
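This comparison between the $\Delta$-integral and the classical integral can be illustrated numerically. The following sketch (all parameters are hypothetical choices, with $\psi(s)=s$ and a uniform time scale of graininess $h$) checks that the left-endpoint $\Delta$-sum of the increasing integrand never exceeds the closed-form value $(\psi(t)-\psi(0))^{\alpha}/\alpha$ of the continuous integral:

```python
# Numerical illustration (hypothetical parameters): on the uniform time scale
# T = {0, h, 2h, ...} with psi(s) = s, the Delta-integral of the increasing
# integrand psi^Delta(s) (psi(t) - psi(s))^(alpha-1) is a left-endpoint sum,
# hence bounded above by the continuous integral (psi(t) - psi(0))^alpha / alpha.
alpha, t, h = 0.5, 1.0, 1e-3

def kernel(s):
    # psi^Delta(s) (psi(t) - psi(s))^(alpha - 1) = (t - s)^(alpha - 1) here
    return (t - s) ** (alpha - 1.0)

# Left-endpoint Delta-sum over the grid points of [0, t)
delta_integral = sum(kernel(i * h) * h for i in range(int(round(t / h))))

# Closed form of the continuous integral
continuous_integral = (t - 0.0) ** alpha / alpha

assert delta_integral <= continuous_integral
```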
Consequently, one has
\begin{eqnarray*}
|(\psi(t)-\psi(0))^{1-\gamma}\Theta(y(t))|
&\leq & M\,
\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\\
&=&\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\frac{(\psi(t)-\psi(0))^{\alpha}}{\alpha}\,(\psi(t)-\psi(0))^{1-\gamma}\\
&\leq &M\,\frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)},
\end{eqnarray*}
that is,
$$
\|\Theta y\|_{C_{1-\gamma,\psi}}\leq M\,\frac{(\psi(1)
-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}.
$$
Now, we consider
\begin{equation*}
\rho=M\,
\frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}.
\end{equation*}
Hence, $\Theta$ maps $S_{\psi}(\rho)$ into itself.
Moreover, for $x,y\in S_{\psi}(\rho)$, one has
\begin{equation}
\label{star*}
\begin{split}
\|(\psi(t)-\psi(0))^{1-\gamma}(\Theta x(t)- \Theta y(t))\|
&\leq \frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}|f(s,x(s))-f(s,y(s))|\Delta s\\
&\leq {L}\,\frac{\|x-y\|_{\infty}(\psi(t)-\psi(0))^{1
-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s\\
&\leq {L} \frac{\|x-y\|_{C_{1-\gamma,\psi}}(\psi(t)
-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\\
&= {L}\,\frac{\|x-y\|_{C_{1-\gamma,\psi}}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)
\Gamma(\alpha)}\frac{(\psi(t)-\psi(0))^{1-\beta(1-\alpha)}}{\alpha}\\
&\leq {L}\,\frac{(\psi(1)-\psi(0))^{\alpha}}{g^{\mathbb{T}}(\gamma-1,
1-\gamma)\Gamma(\alpha+1)}\|x-y\|_{C_{1-\gamma,\psi}}.
\end{split}
\end{equation}
It follows that
\begin{equation}
\label{A**}
\|\Theta x-\Theta y\|_{C_{1-\gamma,\psi}}
\leq {L}\,\frac{(\psi(1)-\psi(0))^{\alpha}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}
\|x-y\|_{C_{1-\gamma,\psi}}.
\end{equation}
Indeed, taking the supremum over $t\in[0,1]$ on both sides of \eqref{star*},
and using the definition of the norm in the weighted space,
we obtain \eqref{A**}. If $\displaystyle {L}\,\frac{(\psi(1)
-\psi(0))^{\alpha}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}<1$,
then $\Theta$ is a contraction and, by the Banach fixed point theorem,
we obtain the desired existence and uniqueness of solution to problem \eqref{I*}.
\end{proof}
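The contraction argument above is constructive: under the smallness condition, Picard iterates of $\Theta$ converge to the unique solution. A minimal numerical sketch, in the classical case $\mathbb{T}=\mathbb{R}$, $\psi(s)=s$, with the assumed normalization $g^{\mathbb{T}}=1$ and the toy right-hand side $f(s,y)=1$ (for which the fixed point is $y(t)=t^{\alpha}/\Gamma(\alpha+1)$):

```python
import math

# Minimal sketch (hypothetical setting): classical case T = R, psi(s) = s,
# normalization g^T = 1, and the toy right-hand side f(s, y) = 1, whose
# fixed point of Theta is y(t) = t^alpha / Gamma(alpha + 1).
alpha, n = 0.5, 1000
ts = [i / n for i in range(n + 1)]

def theta(y):
    # (Theta y)(t) = (1/Gamma(alpha)) int_0^t (t - s)^(alpha-1) f(s, y(s)) ds,
    # approximated by a midpoint rule on the grid ts (f does not use y here).
    out = [0.0]
    for k in range(1, n + 1):
        t, acc = ts[k], 0.0
        for i in range(k):
            s = 0.5 * (ts[i] + ts[i + 1])
            acc += (t - s) ** (alpha - 1.0) * (ts[i + 1] - ts[i])
        out.append(acc / math.gamma(alpha))
    return out

y = theta([0.0] * (n + 1))  # since f is independent of y, one step converges
exact = 1.0 / math.gamma(alpha + 1.0)  # y(1) = 1^alpha / Gamma(alpha + 1)
assert abs(y[-1] - exact) < 2e-2
```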
\begin{remark}
\label{remark-2}
(i) Taking the limit $\beta\to 0$ in \eqref{I*},
we obtain a $\psi$-Caputo fractional derivative problem on times scales.
Using Theorem~\ref{teorema32}, a solution to such problem exists and is unique.
(ii) Taking $\beta\to 1$ in \eqref{I*}, we get a corresponding problem on time scales in the
$\psi$-Riemann--Liouville fractional derivative sense. Under the conditions of Theorem~\ref{teorema32},
we conclude that such problem admits a unique solution. (iii) From the choice of $g(\cdot)$,
we obtain numerous particular cases for problem \eqref{I*}, for which our Theorem~\ref{teorema32}
provides a sufficient condition for existence of a unique solution.
\end{remark}
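For concreteness, the behaviour of the interpolation parameter $\gamma$ in the two limiting cases of the remark can be summarized as:

```latex
\gamma=\alpha+\beta(1-\alpha),\qquad
\lim_{\beta\to 0}\gamma=\alpha \quad (\psi\text{-Caputo}),\qquad
\lim_{\beta\to 1}\gamma=1 \quad (\psi\text{-Riemann--Liouville}).
```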
\begin{proof} (of Theorem~\ref{th-3.4})
We prove the result in four steps.
\noindent {\rm Step 1:} $\Theta$ is continuous.
Consider a sequence $\{y_n\}$ such that $y_n\to y$ in $C(J,\mathbb{R})$. Then, for each $t\in J$, one has
\begin{eqnarray*}
&&|(\psi(t)-\psi(0))^{1-\gamma}(\Theta(y_n)(t)
-\Theta(y)(t))|\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}|f(s,y_n(s))-f(s,y(s))|\Delta s\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\sup_{s\in J}|f(s,y_n(s))-f(s,y(s))|\Delta s\\
&\leq &\frac{\|f(\cdot,y_n(\cdot))-f(\cdot,y(\cdot))\|_{C_{1-\gamma,\psi}}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s\\
&\leq & \frac{\|f(\cdot,y_n(\cdot))-f(\cdot,y(\cdot))\|_{C_{1-\gamma,\psi}}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\\
&\leq& \frac{\|f(\cdot,y_n(\cdot))-f(\cdot,y(\cdot))\|_{C_{1-\gamma,\psi}}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,
(\psi(1)-\psi(0))^{\alpha}.
\end{eqnarray*}
Since $f$ is a continuous function, we obtain that
\begin{equation*}
\|\Theta y_n-\Theta y\|_{C_{1-\gamma,\psi}}
\leq\frac{(\psi(1)-\psi(0))^{\alpha}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}
\|f(\cdot,y_n(\cdot))-f(\cdot,y(\cdot))\|_{C_{1-\gamma,\psi}}\to 0
\end{equation*}
as $n\to\infty$.
\noindent{\rm Step 2:} The map $\Theta$ sends bounded sets
into bounded sets in $C(J,\mathbb{R})$. To see that, it is enough
to show that for any $\rho$, there exists a positive constant $\ell$
such that for each $y\in B_{\rho}=\{y\in C_{1-\gamma,\psi}(J,\mathbb{R})
:\|y\|_{C_{1-\gamma,\psi}}\leq\rho\}$ we have $\|\Theta y\|_{C_{1-\gamma,\psi}}\leq\ell$.
Indeed, by hypothesis, for each $t\in J$ one has
\begin{eqnarray*}
\|(\psi(t)-\psi(0))^{1-\gamma}\Theta y(t)\|&\leq &\frac{(\psi(t)
-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}|f(s,y(s))|\Delta s\\
&\leq & M\,\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s\\
&\leq & M\,\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t} \psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\\
&\leq &\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}=\ell.
\end{eqnarray*}
\noindent{\rm Step 3:} The map $\Theta$ sends bounded sets
into equicontinuous sets of $C(J,\mathbb{R})$. Let $t_1,t_2\in J$, $t_1<t_2$,
$B_{\rho}$ be a bounded set of $C(J,\mathbb{R})$ as in \rm{Step 2},
and $y\in B_{\rho}$. Then,
\begin{eqnarray*}
|(\Theta y)(t_2)-(\Theta y)(t_1)|&\leq
&\frac{(\psi(t_2)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t_1}\psi^{\Delta}(s)(\psi(t_1)-\psi(s))^{\alpha-1}|f(s,y(s))|\Delta s\\
&\quad&-\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t_2}\psi^{\Delta}(s)(\psi(t_2)-\psi(s))^{\alpha-1}|f(s,y(s))|\Delta s\\
&\leq &M\,\frac{(\psi(t_2)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t_1}\psi^{\Delta}(s)[(\psi(t_1)-\psi(s))^{\alpha-1}-(\psi(t_2)-\psi(s))^{\alpha-1}]\Delta s\\
&\quad & + M\,\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{t_1}^{t_2}\psi^{\Delta}(s)(\psi(t_2)-\psi(s))^{\alpha-1}\Delta s\\
&\leq & M\,\frac{(\psi(t_2)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t_1}\psi'(s)[(\psi(t_1)-\psi(s))^{\alpha-1}-(\psi(t_2)-\psi(s))^{\alpha-1}]{\rm d}s\\
&\quad& +M\,\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{t_1}^{t_2}
\psi'(s)(\psi(t_2)-\psi(s))^{\alpha-1}{\rm d}s\\
&\leq & M\,\frac{(\psi(t_2)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\big[(\psi(t_2)-\psi(t_1))^{\alpha}+(\psi(t_1)-\psi(0))^{\alpha}-(\psi(t_2)-\psi(0))^{\alpha}\big]\\
&\quad& + M\,\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\,(\psi(t_2)-\psi(t_1))^{\alpha}\\
&=&\frac{2 M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,(\psi(t_2)-\psi(0))^{\alpha}\big[(\psi(t_2)-\psi(0))^{1-\gamma}
-(\psi(t_1)-\psi(0))^{1-\gamma}\big]\\
&\quad&+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,(\psi(t_2)-\psi(0))^{1-\gamma}\big[\,(\psi(t_1)-\psi(0))^{\alpha}
-\,(\psi(t_2)-\psi(0))^{\alpha}\big]\rightarrow 0
\end{eqnarray*}
as $t_1\to t_2$. As a consequence of Steps 1--3 and the Arzel\`{a}-Ascoli theorem,
$\Theta:C_{1-\gamma,\psi}(J,\mathbb{R})\to C_{1-\gamma,\psi}(J,\mathbb{R})$
is continuous and completely continuous.
\noindent{\rm Step 4:} A priori bounds. Now it remains to show that
$\Omega=\{y\in C(J,\mathbb{R}):y=\lambda \Theta(y);0<\lambda<1\}$
is a bounded set. Let $y\in\Omega$. Then, $y=\lambda \Theta(y)$
for some $0<\lambda<1$. Thus, for each $t\in J$, it yields that
\begin{equation*}
y(t)=\lambda\Bigg[\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s) (\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\Bigg].
\end{equation*}
Thus, by the estimate of Step~2, $\Omega$ is bounded.
By Schauder's fixed point theorem, $\Theta$ has a fixed point,
which is a solution to problem \eqref{I*}.
\end{proof}
\section{Controllability}
\label{sec6}
In this section, we investigate the question of controllability
for \eqref{eqI}. For this, first, we present the concept of
controllability and the integral equation that is equivalent
to the problem to be discussed.
\begin{definition}
We say that \eqref{eqI} is controllable on $J$ if, for any given initial state $y_0$
and any given final state $\bar{y}$, there exists a piecewise right-dense continuous function
$u\in L^{2}(J,U)$ such that the solution $y$ of \eqref{eqI} satisfies $y(1)=\bar{y}$.
\end{definition}
\begin{theorem}
A function $y\in C(J,\mathbb{R})$ is a solution of \eqref{eqI} if, and only if,
this function is a solution of the following integral equation:
\begin{eqnarray*}
y(t)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)} \frac{1}{\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\big(f(s,y(s))+(Bu)(s)\big)\Delta s,
\end{eqnarray*}
$t\in[0,1]=J\subset\mathbb{T}$.
\end{theorem}
\begin{proof}
Applying the operator ${^{\mathbb{T}}}{\mathds{I}}_{0}^{\alpha;\psi}(\cdot)$
to both sides of problem \eqref{eqI}, using the initial condition
and Theorem~\ref{th-5}, we have
\begin{eqnarray}
\label{2.54}
y(t)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)} \frac{1}{\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\big(f(s,y(s))+(Bu)(s)\big)\Delta s.
\end{eqnarray}
On the other hand, applying the operator ${^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi}(\cdot)$ on both sides
of \eqref{2.54}, and using Proposition~\ref{prop-I}, we obtain
\begin{eqnarray*}
{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi} y(t)
&=&{^{\T}}{\Delta}_{0+}^{\alpha,\beta;\psi}\left(\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)} \int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}(f(s,y(s))+(Bu)(s))\Delta s\right)\notag\\
&=& f(t,y(t))+(Bu)(t).
\end{eqnarray*}
Now, taking ${^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi}(\cdot)$ on both sides of
the Eq.(\ref{2.54}) and using Lemma~\ref{lemma-4}, we get
${^{\mathbb{T}}}{\mathds{I}}_{0}^{1-\gamma;\psi} y(0)=0$.
The proof is complete.
\end{proof}
\begin{lemma}
\label{lemma 3.4}
Let the assumptions ${\rm (A_1)}$--${\rm (A_4)}$ be satisfied and
$y(0)\in\mathbb{R}$ be an arbitrary point. Then the solution $y(t)$
of system \eqref{eqI} on $[0,1]$ is defined by the control function
$$
u(t)=(\mathscr{W}_{\alpha})^{-1}\bigg[y_1- \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\bigg],
$$
$t\in[0,1]$. Moreover, the control function $u(t)$ has an estimate
$\|u(t)\|\leq M_{u}^{\circ}$ with
\begin{equation*}
M_{u}^{\circ}=M_{W}^{\circ}\bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}(\psi(1)-\psi(0))^{\alpha}\bigg).
\end{equation*}
\end{lemma}
\begin{proof}
Let $y(t)$ be a solution of system \eqref{eqI}
on $[0,1]\subset\mathbb{T}$ defined by \eqref{2.5}. Then,
\begin{eqnarray*}
y(t)&=&\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
\big[f(s,y(s))+(Bu)(s)\big]\Delta s\\
&=&\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\\
&\quad&+\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
B(\mathscr{W}_{\alpha})^{-1} \\&& \bigg[y_1
-\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(\xi)(\psi(t)-\psi(\xi))^{\alpha-1}
f(\xi,y(\xi))\Delta\xi\bigg]\Delta s\\
&=&\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\\
&\quad&+\mathscr{W}_{\alpha}(\mathscr{W}_{\alpha})^{-1}\bigg[y_1
-\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(\xi)(\psi(t)-\psi(\xi))^{\alpha-1}
f(\xi,y(\xi))\Delta\xi\bigg]\\
&=&y_1.
\end{eqnarray*}
In this sense, we have the estimate
\begin{eqnarray*}
|u(t)|&=&\bigg|(\mathscr{W}_{\alpha})^{-1}\bigg(y_1-\frac{1}{g^{\mathbb{T}}(\gamma-1,
1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\bigg)\bigg|\\
&\leq &\big|(\mathscr{W}_{\alpha})^{-1}\big|\bigg(|y_1|
+\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)
(\psi(t)-\psi(s))^{\alpha-1}|f(s,y(s))|\Delta s\bigg)\\
&\leq & M_{W}^{\circ}\bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\Delta s\bigg)\\
&\leq & M_{W}^{\circ}\bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\bigg)\\
&=&M_{W}^{\circ}\bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}
\,(\psi(t)-\psi(0))^{\alpha}\bigg)\\
&=& M_{u}^{\circ}.
\end{eqnarray*}
Therefore, the proof is complete.
\end{proof}
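The steering mechanism of the lemma can be sketched in the simplest scalar setting. The following toy computation (all choices hypothetical: $\mathbb{T}=\mathbb{R}$, $\psi(s)=s$, $B$ the identity, $f\equiv 0$, so that $\mathscr{W}_{\alpha}$ reduces to the scalar $1/\Gamma(\alpha+1)$) checks that the constant control $u=\mathscr{W}_{\alpha}^{-1}y_1$ drives the state to $y_1$ at $t=1$:

```python
import math

# Toy scalar steering (all choices hypothetical): T = R, psi(s) = s, B = 1,
# f = 0, so the Gramian-type constant reduces to
# W_alpha = (1/Gamma(alpha)) int_0^1 (1-s)^(alpha-1) ds = 1/Gamma(alpha+1).
alpha = 0.7
W = 1.0 / math.gamma(alpha + 1.0)
y1 = 3.0              # desired final state
u = y1 / W            # constant control u = W_alpha^{-1} y1 (since f = 0)

# State at t = 1: y(1) = (1/Gamma(alpha)) int_0^1 (1-s)^(alpha-1) * u ds,
# approximated with a midpoint rule
n = 10000
integral = sum((1.0 - (i + 0.5) / n) ** (alpha - 1.0) / n for i in range(n))
y_final = integral / math.gamma(alpha) * u
assert abs(y_final - y1) < 1e-2   # the control steers the state to y1
```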
To finalize the main results, next we present the proof
that guarantees that the system \eqref{eqI} is controllable.
\begin{proof} (of Theorem \ref{th-4.2})
Consider the subset ${D}_{\psi,\delta}\subseteq C_{1-\gamma,\psi}(J,\mathbb{R})$ as follows:
\begin{equation*}
{D}_{\psi,\delta}=\big\{x\in C_{1-\gamma,\psi}(J,\mathbb{R}):\|x\|_{C_{1-\gamma,\psi}}\leq\delta\big\}.
\end{equation*}
We define the operator ${K}:{D}_{\psi,\delta}\to {D}_{\psi,\delta}$ as
\begin{equation*}
({K}y)(t)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}
\big(f(s,y(s))+(Bu)(s)\big)\Delta s.
\end{equation*}
Note that the operator ${K}$ is well defined and the fixed points of ${K}$ are solutions to \eqref{eqI}.
Indeed, $x\in {D}_{\psi,\delta}$ is a solution of \eqref{eqI} if, and only if, it is a solution
of the operator equation $x={K}x$. Therefore, the existence of a solution of \eqref{eqI}
is equivalent to determine a positive constant $\delta$ such that ${K}$
has a fixed point on ${D}_{\psi,\delta}$. We decompose the operator ${K}$ into two operators
${K}_1$ and ${K}_2$, ${K}={K}_1+{K}_2$ on ${D}_{\psi,\delta}$, where
\begin{eqnarray*}
({K}_1 y)(t)=\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi^{\Delta}(s)
(\psi(t)-\psi(s))^{\alpha-1} Bu(s)\Delta s, \quad t\in[0,1]=J\subset\mathbb{T}
\end{eqnarray*}
and
\begin{eqnarray*}
({K}_2 y)(t)= \frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1} f(s,y(s))\Delta s.
\end{eqnarray*}
\noindent \rm{Step 1:} The operator ${K}_1$ maps ${D}_{\psi,\delta}$ into itself.
For each $t\in J$ and $x\in {D}_{\psi,\delta}$, it follows from Lemma~\ref{lemma 3.4} that
\begin{eqnarray*}
&&\big\|(\psi(t)-\psi(0))^{1-\gamma}(K_1 x)(t)\big\|\notag \\
&=&\Bigg\|\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s) (\psi(t)-\psi(s))^{\alpha-1}(Bu)(s)\Delta s\Bigg\|\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s) (\psi(t)-\psi(s))^{\alpha-1}\|B\| \|u(s)\|\Delta s\\
&\leq & \frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\, M_{B}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}M_{W}^{\circ}\\
&\quad & \times \Bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}(\psi(s)-\psi(0))^{\alpha}\Bigg)\Delta s\\
&\leq& \frac{(\psi(1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\,M_{W}^{\circ}\,M_{B}
\Bigg(|y_1| +\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}(\psi(1)-\psi(0))^{\alpha}\Bigg)\\
&\quad& \int_{0}^{t}
\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}{\rm d}s\\
&\leq& \frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,M_{W}^{\circ}
\, M_{B}\Bigg(|y_1|+\frac{M}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}(\psi(1)-\psi(0))^{\alpha}\Bigg)\\
&\leq & \delta,
\end{eqnarray*}
which implies that $\|{K}_1 y\|_{C_{1-\gamma,\psi}}\leq\delta$.
Thus, ${K}_1$ maps ${D}_{\psi,\delta}$ into itself.
\noindent \rm{Step 2:} The operator ${K}_2$ is continuous. Let $\{y_n\}$ be a sequence
in ${D}_{\psi,\delta}$ satisfying $y_n\to y$ as $n\to\infty.$ Then, for each $t\in J$, one has
\begin{eqnarray*}
&&\Big\|(\psi(t)-\psi(0))^{1-\gamma}\big(({K}_2 y_n)(t)-({K}_2 y)(t)\big)\Big\|_{C}\\
&\leq& \frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}
\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\|f(s,y_n(s))-f(s,y(s))\|\Delta s\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t}\psi'(s)
(\psi(t)-\psi(s))^{\alpha-1}\|f(s,y_n(s))-f(s,y(s))\|{\rm d}s\\
&\leq &\|f(\cdot\,,y_n(\cdot))-f(\cdot\,,y(\cdot))\|_{C_{1-\gamma,\psi}}
\frac{(\psi(t)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}.
\end{eqnarray*}
By the Lebesgue dominated convergence theorem, we know that
$\|{K}_2 y_n-{K}_2 y\|_{C_{1-\gamma,\psi}}\to 0$
as $n\to\infty$. This means that ${K}_2$ is continuous.
\noindent \rm{Step 3:} Now we show that ${K}_2({D}_{\psi,\delta})\subset {D}_{\psi,\delta}$.
We prove this by contradiction, supposing that there exists a function
$\eta(\cdot)\in {D}_{\psi,\delta}$ such that $\|{K}_2\eta\|_{C_{1-\gamma,\psi}}>\delta$.
Thus, under such assumption, for each $t\in J$ we get
\begin{eqnarray*}
\delta &<&\big\|(\psi(t)-\psi(0))^{1-\gamma}({K}_2\eta)(t)\big\|\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi^{\Delta}(s)(\psi(t)-\psi(s))^{\alpha-1}\|f(s,\eta(s))\|\Delta s\\
&\leq &\frac{(\psi(t)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t}\psi'(s)(\psi(t)-\psi(s))^{\alpha-1}\|f(s,\eta(s))\|{\rm d}s\\
&\leq &M\,
\frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,\delta.
\end{eqnarray*}
Dividing both sides by $\delta$, we get
\begin{equation*}
M\,\frac{(\psi(1)-\psi(0))^{1-\beta(1-\alpha)}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\geq{1},
\end{equation*}
which contradicts \eqref{4.1}. This shows that
${K}_2({D}_{\psi,\delta})\subset {D}_{\psi,\delta}$.
\noindent \rm{Step 4:} Now we show that ${K}_2({D}_{\psi,\delta})$ is bounded and equicontinous.
From Step 3, it is clear that ${K}_2({D}_{\psi,\delta})$ is bounded. It remains to show that
${K}_2({D}_{\psi,\delta})$ is equicontinuous. Indeed, we have
\begin{eqnarray*}
&&\big\|(\psi(t_2)-\psi(0))^{1-\gamma}({K}_2 y)(t_2)-(\psi(t_1)-\psi(0))^{1-\gamma}({K}_2 y)(t_1)\big\|\\
&=&\Bigg\|\frac{(\psi(t_2)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t_2}
\psi^{\Delta}(s)(\psi(t_2)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\\
&\qquad &-\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\int_{0}^{t_1}
\psi^{\Delta}(s)(\psi(t_1)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\Bigg\|\\
&\leq&\Bigg\|\frac{(\psi(t_2)-\psi(0))^{1-\gamma}
-(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
\int_{0}^{t_2}\psi^{\Delta}(s)(\psi(t_2)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\Bigg\|\\
&\quad& +\Bigg\|\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)
\Gamma(\alpha)}(\psi(t_1)-\psi(0))^{1-\gamma}\int_{t_1}^{t_2}\psi^{\Delta}(s)
(\psi(t_2)-\psi(s))^{\alpha-1}f(s,y(s))\Delta s\Bigg\|\\
&\quad& +\Bigg\|\frac{1}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}
(\psi(t_1)-\psi(0))^{1-\gamma}\int_{0}^{t_1}\psi^{\Delta}(s)
\big[(\psi(t_2)-\psi(s))^{\alpha-1}-(\psi(t_1)-\psi(s))^{\alpha-1}\big]f(s,y(s))\Delta s\Bigg\|\\
&\leq& \frac{(\psi(t_2)-\psi(0))^{1-\gamma}-(\psi(t_1)
-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\,M
\int_{0}^{t_2}\psi'(s)(\psi(t_2)-\psi(s))^{\alpha-1}{\rm d}s\\
&\quad& +\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\,M
\int_{t_1}^{t_2}\psi'(s)(\psi(t_2)-\psi(s))^{\alpha-1}{\rm d}s\\
&\quad& +\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha)}\,M
\int_{0}^{t_1}\psi'(s)\big[(\psi(t_2)-\psi(s))^{\alpha-1}-(\psi(t_1)-\psi(s))^{\alpha-1}\big]{\rm d}s\\
\end{eqnarray*}
\begin{eqnarray*}
&\leq& \frac{(\psi(t_2)-\psi(0))^{1-\gamma}-(\psi(t_1)
-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,1-\gamma)\Gamma(\alpha+1)}\,M
(\psi(t_2)-\psi(0))^{\alpha}\\
&\quad& +\frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,
1-\gamma)\Gamma(\alpha+1)}\,M(\psi(t_2)-\psi(t_1))^{\alpha}\\
&\quad& + \frac{(\psi(t_1)-\psi(0))^{1-\gamma}}{g^{\mathbb{T}}(\gamma-1,
1-\gamma)\Gamma(\alpha+1)}\,M\big[(\psi(t_2)-\psi(0))^{\alpha}
-(\psi(t_1)-\psi(0))^{\alpha}\big]\to 0
\end{eqnarray*}
as $t_2\to t_1$, uniformly in $y\in {D}_{\psi,\delta}$.
Hence, ${K}_2({D}_{\psi,\delta})$ is equicontinuous.
As a consequence of Steps 2--4, together with the Arzel\`{a}-Ascoli theorem,
one has that ${K}_2$ is compact. Hence, from Steps 1--4 and Lemma~\ref{lemma3.2},
we conclude that ${K}={K}_1+{K}_2$ is continuous and takes bounded sets into bounded sets.
Also, one can verify the validity of $\mu\big({K}_2({D}_{\psi,\delta})\big)=0$ since
${K}_2({D}_{\psi,\delta})$ is relatively compact. It follows from the inclusion
${K}_1({D}_{\psi,\delta})\subset {D}_{\psi,\delta}$ and the equality
$\mu\big({K}_2({D}_{\psi,\delta})\big)=0$ that
\begin{equation*}
\mu\big({K}({D}_{\psi,\delta})\big)\leq\mu\big({K}_1({D}_{\psi,\delta})\big)
+ \mu\big({K}_2({D}_{\psi,\delta})\big)\leq\mu({D}_{\psi,\delta})
\end{equation*}
for every bounded set ${D}_{\psi,\delta}$
of $C_{1-\gamma,\psi}(J,\mathbb{R})$ with $\mu({D}_{\psi,\delta})>0$.
Since ${K}({D}_{\psi,\delta})\subset {D}_{\psi,\delta}$ for the convex, closed and bounded set
${D}_{\psi,\delta}$ of $C_{1-\gamma,\psi}(J,\mathbb{R})$, all conditions of the Sadovskii fixed point theorem
are satisfied and we conclude that the operator ${K}$ has a fixed point $x\in {D}_{\psi,\delta}$
that is a solution of \eqref{eqI} with $y(1)=y_1$. Therefore, \eqref{eqI} is controllable on $J$.
\end{proof}
\section{Conclusion and future work}
We presented a new version for the $\psi$-Hilfer fractional derivative,
in the sense of time scales, proving the fundamental properties of this new derivative.
The respective proofs were presented in detail and discussed. On the other hand,
the theory of differential equations on time scales is of great relevance in several areas.
So, in this sense, in order to present an approach on dynamic equations on time scales
via $\psi$-Hilfer fractional derivatives, we investigated the existence,
uniqueness, and controllability of solution
for the problems (\ref{I*}) and (\ref{eqI}), respectively.
Although we were able to obtain some important results, questions arose
during the work for future work. These include:
\begin{enumerate}
\item The discussion of type I and type II generalized Leibniz rules
for the $\psi$-Hilfer fractional derivative in time scales.
\item If one presents a time-scale version of the $\psi$-Hilfer
fractional derivative of variable order, will the semigroup property also not hold?
What is lost and what is gained?
\item Is it possible to discuss issues of continuous
dependence on data and issues of reachability
for fractional dynamic equations on time scales?
\item Are $\psi$-Hilfer fractional derivatives in time scales
relevant to discuss some mathematical modeling problems
via Laplace transforms in time-scales?
\end{enumerate}
Numerous other issues, involving fractional derivatives
and differential equations on time scales, can be discussed from our work
and we trust they will emerge naturally over time.
\section*{Acknowledgements}
This work was supported by \emph{Funda\c{c}\~{a}o
Cearense de Apoio ao Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico} (FUNCAP)
of Brazil, project BP4-00172-00054.02.00/20; and by
\emph{Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia} (FCT) of Portugal,
project UIDB/04106/2020 (CIDMA). The authors would like to thank the referees
for their careful reading of the work and its many useful suggestions.
\section{Introduction}
Let $C$ be a germ of complex plane curve singularity with \(r\geq 1\) branches. Campillo, Delgado and Kiyek \cite{CDKmanuscr} attached a series
\[
P_C(\underline{t})=P_C(t_1,\ldots, t_r)=\frac{\displaystyle\prod_{i=1}^r (t_i-1) \cdot\bigg(\displaystyle\sum_{\underline{w} \in \mathbb{Z}^r_{\geq 0}} \dim_{\mathbb{C}}J(\underline{w})/J(\underline{w}+\underline{1})\cdot \underline{t}^{\underline{w}}\bigg)}{t_1\cdots t_r -1},
\]
where $\underline{w}=(w_1,\ldots, w_r)\in \mathbb{Z}^r$, \(J_C(\underline{w})=J(\underline{w}):=\{g\in\mathcal{O}\;:\;\underline{v}(g)\geq \underline{w}\}\) defines a multi-index filtration associated to the valuation \(\underline{v}=(v_1,\dots,v_r)\) at the local ring \(\mathcal{O}:=\mathcal{O}_C\) of \(C\), $\underline{1}=(1,\ldots , 1)$ and $\underline{t}^{\underline{w}}:=t_1^{w_1}\cdots t_r^{w_r}$. Observe that $P_C(\underline{t})$ is a formal power series if $C$ is irreducible, i.e. $r=1$, and a polynomial if $r>1$. The dimensions of the $\mathbb{C}$-vector spaces $J(\underline{w})/J(\underline{w}+\underline{1})$ are finite and depend on the value semigroup of $C$, which is $\Gamma(C)=\{\underline{v}(g): g\in \mathcal{O}, g \neq 0 \}$, see e.g. \cite[(3.5)]{MFjpaa}.
\medskip
The interest of this series ---called for brevity the Poincar\'e series of $C$--- became apparent when Campillo, Delgado and S. Gusein-Zade \cite{CDG99a} proved its coincidence with the zeta function of the monodromy transformation of the singularity in the irreducible case. They developed a research line to compute the Poincaré series through some invariants of the singularity, not only for plane curves \cite{CDG99a, CDG03a, CDGduke, CDG07} (even in a motivic setting \cite{CDG07}; see also \cite{Mams}) but also for rational surface singularities \cite{CDG04} and curves on them \cite{CDG05}. In these papers, they proposed alternative definitions of $P_C(\underline{t})$ involving techniques of integration with respect to the Euler characteristic, which led to an A'Campo type formula \cite[Theorems 3 and 4]{CDGduke} in terms of the dual graph \(G(C)\) of the minimal embedded resolution of the singularity, namely
\[
P_C(\underline{t})=
\prod_{Q \in G(C)} (\underline{t}^{\underline{v}^Q}-1)^{-\chi (E_Q^{\circ})}.
\]
This formula à la A'Campo seamlessly blends information which can be read either from the topological or the algebraic point of view: from the topological side, the Euler characteristic $\chi (E_Q^{\circ})$ of the smooth part of the irreducible component $E_Q$ of the exceptional divisor created in the resolution process, and from the algebraic side, the valuation $\underline{v}^Q$ of the points $Q$ of the dual graph \(G(C).\) Moreover, the well-established relation between $\underline{v}^Q$ and the linking invariants of the algebraic link \(L:=C\cap S^3_\varepsilon\) in the \(3\)--sphere \(S^3_\varepsilon\) with radius \(\varepsilon >0\) small enough, allowed them to apply a result by Eisenbud and Neumann \cite[Theorem 12.1]{EN} in order to deduce the connection between \(P_C(\underline{t})\) and the Alexander polynomial $\Delta_L(\underline{t})$ of $L$:
\[P_C(\underline{t})=\Delta_L(\underline{t})\quad\text{if}\quad r>1\quad\text{and}\quad (t-1)\cdot P_C(t)=\Delta_L(t)\quad \text{if}\quad r=1.\]
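As a quick sanity check of this coincidence in the simplest case, take the cusp $y^2=x^3$ ($r=1$), whose link is the trefoil knot with Alexander polynomial $t^2-t+1$; the A'Campo-type product with the standard resolution data ($\underline{v}^Q\in\{2,3,6\}$, $\chi(E_Q^{\circ})\in\{1,1,-1\}$) gives $P_C(t)=(t^6-1)/((t^2-1)(t^3-1))$, and the identity $(t-1)\,P_C(t)=\Delta_L(t)$ can be verified numerically:

```python
# Trefoil check: P_C(t) = (t^6 - 1)/((t^2 - 1)(t^3 - 1)) from the
# A'Campo-type product, and Delta_L(t) = t^2 - t + 1 for the trefoil knot.
# The identity (t - 1) P_C(t) = Delta_L(t) is rational, checked here at
# sample points away from the poles t in {-1, 1, roots of unity}.
for x in (0.3, 1.7, -2.5, 2.0):
    P = (x**6 - 1) / ((x**2 - 1) * (x**3 - 1))
    assert abs((x - 1) * P - (x**2 - x + 1)) < 1e-9
```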
\medskip
However, it seems as though this outcome is merely a fortuitous occurrence resulting from two a priori unrelated mathematical entities; in the authors' own words, ``up to now this coincidence has no conceptual explanation. It is obtained by direct computations of both objects in the same terms and comparison of the results'' \cite[p.~450]{CDG15}; see also \cite[pp.~271--272]{CDGDocumenta}. To date, this sentence is still valid and the aim of this paper is precisely to provide a conceptual proof of this coincidence.
\medskip
Besides that, the main contribution of this paper is to give a purely algebraic proof of some theorems of factorization of the Poincaré series of $C$ depending on some key values of the semigroup of values of $C$ which can be read off from the dual resolution graph associated to $C$. These theorems can be understood as an algebraic analogue of the topological results of Eisenbud-Neumann \cite{EN} and Sumners-Woods \cite{SW} on the decomposition theorems of the Alexander polynomial.
\subsection{Summary of our approach}
The value semigroup of an irreducible plane curve singularity is a complete intersection numerical semigroup, which means that it can be constructed from a process called gluing \cite{Delormegluing} (see also Section \ref{subsec:iterativepoincare}). On the other hand, in this case \(L\) is an iterated torus knot. The key point is to realize that the gluing construction of the value semigroup of an irreducible plane curve singularity mimics the construction leading to the description of \(L\) as an iterated torus knot. More concretely, the satellization process describing the iterated torus structure of \(L\) is in one-to-one correspondence with the gluing construction in the value semigroup. Thus, one can see the algebraic operation as a topological operation. This fusion between algebra and topology gives us the hint that a purely algebraic recursive computation of the Poincaré series may provide the conceptual explanation of its coincidence with the Alexander polynomial.
\medskip
Our starting point is then to set aside the application of the Eisenbud-Neumann Theorem \cite[Theorem 12.1]{EN} and to deepen the algebraic calculations. To do so, the \textbf{first step} (Theorem \ref{thm:p1p2p3}) is to write the Poincaré series as a product
\begin{equation}\tag{\(\ast\)}\label{eqn:star1}
P_C(\underline{t})=\frac{1}{\underline{t}^{\underline{v}^{\mathbf{1}}}-1} \cdot \prod_{i=1}^q \frac{\underline{t}^{\underline{v}^{\sigma_{i}}}-1}{\underline{t}^{\underline{v}^{\rho_{i}}}-1}\cdot (\underline{t}^{\underline{v}^{\sigma_0}}-1)\cdot \prod_{\rho \in \widetilde{\mathcal{E}}} \frac{\underline{t}^{(n_{\rho}+1)\underline{v}^{\rho}}-1}{\underline{t}^{\underline{v}^{\rho}}-1} \cdot \prod_{s(\alpha) >1} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1},
\end{equation}
where the factors depend on relevant vertices of the dual graph, which are the so-called star points and the first vertex of the dual graph (cf. Subsection~\ref{subsub:dualgraph}). This yields a purely algebraic, valuative expression for the Poincar\'e series.
\medskip
In view of the expression \eqref{eqn:star1}, the \textbf{second step} is to establish a suitable ordering of those relevant vertices of $G(C)$ which makes it possible to compute the Poincaré series in an iterative way. This ordering will be based on a one-to-one correspondence between the star points of the dual graph and the topologically relevant exponents of the Puiseux series of the branches of the curve. Since we are interested in the topological properties of the curve, we first define a \emph{topological Puiseux series} for each of the branches, which provides us with a simplified expression encoding the necessary information (Section \ref{subsec:topologicalpuiseux}). The ordering of the star points will only depend on the minimal generators of the value semigroups of the branches and on the contact and intersection multiplicities between the branches. This ordering in the dual graph induces an ordering in the topological Puiseux series of the branches, which allows us to define a sequence of plane curves \(C_{\alpha_1},C_{\alpha_2},\dots,C_{\alpha_l}=C\) depending on the ordered star points (cf.~Subsection \ref{subsubsec:truncationstar}) which approximate \(C\); the last curve in the sequence is \(C\) itself.
\medskip
Now, in a \textbf{third step}, we are able to compute the Poincar\'e series \(P_{\alpha_i}\) of each approximate curve $C_{\alpha_i}$ from the series of the previous approximate curve as products of the form
\begin{equation}\tag{\(\dagger\)}\label{eqn:dag1}
P_{\alpha_i}(\bullet)=P_{\alpha_{i-1}}(\bullet) \cdot Q(\bullet) \cdot \prod B(\bullet) \cdot Q(\bullet),
\end{equation}
where the polynomials \(Q,B\) are defined in \eqref{eqn:defkeypoly} and the ``\(\bullet\)'' depend on the contact between branches and the minimal generators of the individual semigroups of the branches (cf.~Section \ref{subsec:iterativepoincare}). This recursive expression yields a purely algebraic procedure to compute iteratively the Poincaré series \(P_C(\underline{t})\).
\medskip
In the \textbf{fourth} and last \textbf{step}, we show that this purely algebraic, iterative construction can be translated step by step to the iterated toric structure of the link, confirming the guess motivated by the construction in the irreducible case. To do so, we first observe that our ordering on the set of branches is equivalent to the study of the algebraic link \(L\) from its innermost to its outermost component; this contrasts with the customary description of the Waldhausen decomposition, which is usually formulated from the outermost to the innermost component (see also \cite{LeCarousels}). This is not at all surprising, as the study of the closed complement of the link ``from outer to inner'' is the natural point of view, appropriate to obtain such a decomposition. We recall here that the vertices of the plumbing diagram correspond to Seifert pieces in the Waldhausen decomposition of the link exterior \cite[Section 22]{EN}.
\medskip
The ordering from innermost to outermost is imposed by the algebraic construction; without it, the iterative procedure to compute the Poincaré series cannot be produced conveniently. This is reasonable, since we are studying the algebraic properties of the link, and not \emph{a priori} those of its exterior. The ultimate reason to avoid the theorem of Eisenbud-Neumann is the following: their splicing construction is perfectly suited to describing the Waldhausen decomposition, hence it is closer to the perspective of the link from its exterior. A posteriori, one might check with a bit of effort that both procedures encode the same information, but the connection between the algebra and the topology would be lost.
\medskip
A crucial point in the fourth step is the work of Sumners and Woods \cite{SW}. They indicate a recursive way to compute the Alexander polynomial of \(L\) once the components are ordered as in our case. Then, we can show that each step in our procedure coincides with the topological description. As a consequence, we can provide a recursive proof of the coincidence between \(P_C(\underline{t})\) and \(\Delta_L(\underline{t})\) without using the results of Eisenbud-Neumann \cite{EN}, hence without topological guidance. Moreover, thanks to our algebraic procedure we provide a more explicit expression of the Alexander polynomial in the case of more than three branches than the one given by Sumners and Woods \cite[Section VII]{SW}. Besides, this recursive argument has the advantage that it reveals an intrinsic topological nature of the algebraic operation, as the title of this paper aims at pointing out.
\subsection{Outline}
We will now indicate the parts of the manuscript where each of the above steps, as well as the necessary auxiliary results, are realized.
\medskip
Section 2 is devoted to introducing the main tools (and notation) which are needed to understand the remainder of the article: Puiseux series, embedded resolutions of the curve and their dual graphs in Subsection \ref{subsec:dualgraph}, the Noether formula \ref{noehterformula}, and both the value semigroup and the extended semigroup in Subsection \ref{sec:semigroupvalues}. The technical core of the proofs in Section 4 lies in Section 3. Section 4 contains the first and third steps explained above; the second step is settled in Section 3, and the last step is carried out in Section 5.
\medskip
Indeed, in Subsections 3.1 and 3.3 we recall the main results about maximal contact values and the values of the star points attached to the dual graph $G(C)$ of the minimal embedded resolution of $C$. Subsection 3.2 is devoted to defining the topological Puiseux series of the branches. The above mentioned second step is solved in Subsection 3.4: there we define the ordering on the set of star points in the dual graph and the sequence of approximating curves of $C$; in addition, this subsection gives a detailed description of how to construct the sequence of approximating curves, which will allow us to prove the iterative computation of their Poincar\'e series.
\medskip
Section 4 starts with the definition of the Poincaré series of the curve. Afterwards, we work out the first step, namely the proof of \eqref{eqn:star1} in Theorem \ref{thm:p1p2p3}. Subsection 4.2 addresses the step-by-step method of the recursive computation of the Poincaré series, cf.~(\ref{eqn:dag1}). First we recall the irreducible case in Proposition \ref{prop:poincareirreduciblecase}, and then we prove the two base cases, in which the only approximating curve is the curve itself (Proposition \ref{prop:poincarelemm1} and Proposition \ref{prop:poincarelemm2}). Eventually, in Subsection \ref{subsec:generalprocedure} we present the general process, which completes the third step.
\medskip
Section 5 focuses on topology: we review the topological counterpart of the algebraic constructions of the previous sections. First we describe the process of satellization for the construction of an algebraic link attached to a curve. Then we present the gluing operation as a topological feature in the irreducible case, and describe its generalization to reducible curves following the exposition by Sumners and Woods \cite{SW}; here we supply further details missing in their exposition.
\medskip
A closing, short section with historical remarks has been included for the convenience of the reader.
\medskip
\subsection{General assumptions and notation}
We will denote by $\mathbb{N}$ the set of nonnegative integers. The cardinality of a finite set $A$ will be denoted by $|A|$. For an element $x$ of a ring $R$, we will write $(x)$ for the principal $R$-ideal generated by $x$.
\medskip
By a \emph{curve} we understand a germ of holomorphic function $f:(\mathbb{C}^2,0)\to (\mathbb{C},0)$ with an isolated singular point at $0$. We will write $C:f=0$ and say that $C$ is a curve given by a power series $f\in \mathbb{C}\{x,y\}$, where $\mathbb{C}\{x,y\}$ stands for the ring of convergent power series (in two indeterminates $x$ and $y$). We will assume $f$ (hence $C$) to be reduced. If $f$ is not irreducible, the factorization $f=f_1\cdots f_r$ into irreducible germs, with $f_i\neq f_j$ for $i\neq j$, induces irreducible curves $C_1, \ldots , C_r$ called branches of $C$. We will write $C=\bigcup_{i=1}^{r} C_i$. We set $\mathtt{I}:=\{1,\ldots , r\}$.
\medskip
A parametrization of a branch $C:f=0$ at $0$ is given by power series $x(t), y(t) \in \mathbb{C}\{t\}$ such that $f(x(t), y(t)) = 0 \in \mathbb{C}\{t\}$ and, if $\tilde{x}(t), \tilde{y}(t)$ satisfy $f(\tilde{x}(t), \tilde{y}(t)) = 0$, then there is a unique unit $u \in \mathbb{C}\{t\}$ such that $\tilde{x}(t) = x(u \cdot t)$ and $\tilde{y}(t) = y(u \cdot t)$. This allows us to define the intersection multiplicity of two curves as the total order in $t$ of the equation of one curve evaluated on the parametrizations of the branches of the other curve, namely
\[
\big [C,\{g=0\}\big ]_0:= [f, g]_0: = \mathrm{ord}_t g(x(t), y(t)) = \mathrm{sup}\{m \in \mathbb{N} : t^m \ \mbox{divides} \ g(x(t), y(t))\};
\]
we will omit the dependence on $0$ in the notation if it is clear from the context.
\medskip
A branch of a curve can be parametrized by a Puiseux series, which we understand as a formal power series with rational exponents of the form
\[
y=\sum_{j>0} a_j x^{j/n},
\]
for $a_j \in \mathbb{C}$. We agree to have a sort of normal form for the Puiseux series by taking exponents with common denominator $n$ coprime to $\mathrm{gcd}\{j : a_j \neq 0\}$; this $n$ is called the polydromy order of the series.
\medskip
\noindent \textbf{Acknowledgements}. The authors wish to express their gratitude to Prof. F\'elix Delgado de la Mata for many stimulating conversations and helpful suggestions during the preparation of the paper.
\medskip
\input{resolution_of_singularities_v3}
\input{poincare_series_v3}
\input{topology_of_the_link_v4.tex}
\printbibliography
\end{document}
\section{Poincaré series in terms of the minimal resolution}\label{sec:Poincareseries}
In this section we will define the Poincar\'e series associated to a plane curve singularity $C=\bigcup_{i=1}^{r} C_i$, and describe it in terms of the dual graph of the minimal embedded resolution of $C$. For that, we need first to define the Poincar\'e series associated to the value semigroup of $C$, and then to consider an extension of the value semigroup of $C$.
\subsection{Poincaré series associated to the curve}\label{subsec:poincarebasic}
In the context of a discrete valuation, it is common to work with the Poincaré series associated to a filtration on the local ring. For \(\underline{v} \in\mathbb{Z}^r\), set \(J_C(\underline{v})=J(\underline{v}):=\{g\in\mathcal{O}\;|\;\underline{v}(g)\geq \underline{v}\}\). These are ideals which yield a multi-index filtration, and it makes sense to consider the quotients $J(\underline{v})/J(\underline{v} + \underline{1})$, which turn out to be finite-dimensional $\mathbb{C}$-vector spaces of dimension $c(\underline{v})$; this leads to the consideration of
$$
L_C(t_1,\dots,t_r)=\sum_{\underline{v}\in\mathbb{Z}^r}c(\underline{v})\cdot \underline{t}^{\underline{v}}.
$$
The dimensions $c(\underline{v})$ depend on $\Gamma(C)$ \cite[(3.5)]{MFjpaa}, hence $L_C(\underline{t})$ does so. We will abuse notation and write $L_C$ rather than $L_{\Gamma(C)}$; this will be consistently done with all the objects occurring in the sequel.
\medskip
In the case of \(r=1,\) the series \(L_C(t)\) is the generating series of the value semigroup of $C$; however, for \(r>1\), this is not a (formal) power series, but \(L_C(t_1,\dots,t_r)\in\mathbb{Z}[[t_1,\dots,t_r,t^{-1}_1,\dots,t^{-1}_r]]\); i.e., it is a Laurent series infinitely long in all directions, since \(c(\underline{v})\) can be positive for \(\underline{v}\) with some negative components \(v_i\) as well. As in \cite{CDGduke} one may check that
$$
P'_C(t_1,\dots,t_r)=L_C(t_1,\dots,t_r)\cdot \prod_{i=1}^{r}(t_i-1)
$$
is a polynomial (if $r>1$) and, moreover, is divisible by \((t_1\cdots t_r-1)\). This leads to the definition of the Poincar\'e series associated to the multi-index filtration given by the ideals $J(\underline{v})$, and thus to the curve $C$, as
\[
P_C(t_1,\dots,t_r)=P'_C(t_1,\dots,t_r)/(t_1\cdots t_r-1)
\]
which is in fact a polynomial if $r>1$.
\medskip
The univariate Poincar\'e series $P_C(t)$ is easily computed from the value semigroup $\Gamma(C)$, since $c(v)=1$ if and only if $v\in \Gamma(C)$. This is no longer true if $r>1$, and the computation of $P_C(t_1,\ldots , t_r)$ becomes complicated. There is a way to compute it in terms of the embedded resolution of $C$, due to Campillo, Delgado and Gusein-Zade \cite{CDG03a}, by a formula analogous to that of A'Campo \cite{ACampo1,ACampo2} for the zeta function of the monodromy transformation of $C$; it may also be computed using techniques of integration with respect to the Euler characteristic, again by Campillo, Delgado and Gusein-Zade, see e.g. \cite{CDG00, CDG02}. In fact, they show that the Poincar\'e series $P_{\widehat{\Gamma}_C}(t_1,\ldots , t_r)$ associated to the (projectivization of the) extended semigroup $\widehat{\Gamma}_C$, which is
$$
\chi(\mathbb{P}\widehat{\Gamma})=\sum\limits_{\underline{v}\in\mathbb{N}^r}\chi\big(\mathbb{P} F_{\underline{v}}\big)\cdot\underline{t}^{\underline{v}},
$$
(the projectivizations $\mathbb{P} F_{\underline{v}}=F_{\underline{v}}/\mathbb{C}^{\ast}$ of the fibres $F_{\underline{v}}$ can indeed be constructed, see \cite{CDGextended}),
coincides with the product, taken over all the irreducible components of the exceptional divisor of the minimal resolution of $C$, of powers of cyclotomic polynomials of the form $(\underline{t}^{\underline{m}}-1)$, where $\underline{m}$ collects the multiplicities along the irreducible components of the exceptional divisor of the liftings to the resolution space of the functions $f_i$ corresponding to the branches $C_i$; in other words, we have
\begin{equation}\label{eq:duke}
P_{\widehat{\Gamma}(C)}(t_1,\ldots , t_r) = \prod_{P \in G(C)} (t^{\underline{v}^{P}}-1)^{-\chi(E_{P}^{\circ})},
\end{equation}
where $\underline{v}^{P}$ stands for the value of the germ of a nonsingular curve which is transversal to $E_P$ in a smooth point of $E_P$. We further develop (\ref{eq:duke}) in terms of special points of the dual graph of $C$. First we need some notation.
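As an aside, the univariate case mentioned above is easy to make effective: since $c(v)=1$ exactly when $v\in\Gamma(C)$, the series $P_C(t)$ is the generating series of the semigroup, whose membership table can be computed by a simple dynamic program. A minimal Python sketch (the helper name is ours, not from the literature):

```python
def semigroup_membership(gens, N):
    """Membership table (index v -> v in semigroup) of the numerical
    semigroup generated by gens, for v = 0, 1, ..., N-1."""
    in_s = [False] * N
    in_s[0] = True
    for v in range(1, N):
        in_s[v] = any(v >= g and in_s[v - g] for g in gens)
    return in_s

# Gamma(C) = <2,3>: c(v) = 1 exactly when v lies in the semigroup, so
# P_C(t) = 1 + t^2 + t^3 + t^4 + ...
print([v for v, b in enumerate(semigroup_membership([2, 3], 8)) if b])
# -> [0, 2, 3, 4, 5, 6, 7]
```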
\medskip
Denote by $\mathcal{D}$ the set of dead arcs of $G(C)$. If $L \in \mathcal{D}$ then let $\rho_L$ (resp. $\sigma_L$) be its end (resp. star) point. Also, let $\widetilde{\mathcal{D}} := \{L \in
\mathcal{D} \mid \sigma_L > \sigma_0\}$ be the set of dead arcs occurring after the first separation point $\sigma_0$ of $G(C)$. In addition, write $\widetilde{\mathcal{E}}$ for the set of ends for the dead arcs in $\widetilde{\mathcal{D}}$.
\medskip
For any $L \in \widetilde{\mathcal{D}}$ we know that $\underline{v}^{\sigma_L} = (n_L + 1)\underline{v}^{\rho_L}$ for some integer $n_L \ge1$. We will denote also $n_{\rho} =n_L$ for $\rho=\rho_L \in \widetilde{\mathcal{E}}$.
\medskip
Let $L_0, \ldots ,L_q$ be the dead arcs of $G(C)$ with $\sigma_i =\sigma_{L_i} \le \sigma_0$ for $i \in \{1,\ldots , q\}$, ordered in such a way that $L_0$ has as end point the vertex corresponding to $\mathbf{1}$, and $\sigma_1 < \ldots < \sigma_q$. Note that, if $q \ge 1$, then $\sigma_1$ is also the star point of the dead arc $L_0$ starting with $\mathbf{1}$. We denote also $\rho_i =\rho_{L_i}$ for $1\le i \le q$. As in the above case, let us denote by $n_i =n_{\rho_i}$ (for $i \in \{1, \ldots , q\}$) the integers such that $\underline{v}^{\sigma_i}= (n_i + 1)\underline{v}^{\rho_i}$. For the sake of completeness we also set $n_0 = n_{\mathbf{1}} = -1$. Note to what extent the divisor $\mathbf{1}$ and the integer $n_{\mathbf{1}}$ play a special role: if $q \ge 1$, then $\underline{v}^{\sigma_1}$ is also a multiple of $\underline{v}^{\mathbf{1}}$; this is, of course, different from $(n_{\mathbf{1}} + 1)\underline{v}^{\mathbf{1}}$.
\medskip
We define:
\begin{eqnarray}
P_1(\underline{t}) & := & \frac{1}{\underline{t}^{\underline{v}^{\mathbf{1}}}-1} \cdot \prod_{i=1}^q \frac{\underline{t}^{\underline{v}^{\sigma_{i}}}-1}{\underline{t}^{\underline{v}^{\rho_{i}}}-1}\cdot (\underline{t}^{\underline{v}^{\sigma_0}}-1); \nonumber \\
P_2(\underline{t}) & := & \prod_{\rho \in \widetilde{\mathcal{E}}} \frac{\underline{t}^{(n_{\rho}+1)\underline{v}^{\rho}}-1}{\underline{t}^{\underline{v}^{\rho}}-1}; \nonumber \\
P_3(\underline{t}) & := & \prod_{s(\alpha) >1} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1}. \nonumber
\end{eqnarray}
\begin{theorem} \label{thm:p1p2p3}
Let $C$ be a plane curve singularity with dual graph $G(C)$, then
\[
P_C(t_1,\ldots , t_r)=P_1(t_1,\ldots , t_r)\cdot P_2(t_1,\ldots , t_r)\cdot P_3(t_1,\ldots , t_r).
\]
\end{theorem}
\begin{proof}
Campillo, Delgado and Gusein-Zade showed eq.~(\ref{eq:duke}). Since
\[
\chi(E_{P}^{\circ})=2-\big | \{\mbox{singular points of } E_P\}\big |=2-\nu (P),
\]
it only remains to collect those vertices $P\in G(C)$ with $\nu (P)\neq 2$. In view of the previous definitions, the statement follows straightforwardly.
\end{proof}
\begin{rem}
Theorem \ref{thm:p1p2p3} may be proven with a bit of effort from the explicit computation of the dimensions $c(\underline{v})$ of the vector spaces $J(\underline{v})/J(\underline{v}+\underline{1})$ from the semigroup of values. This circumvents the use of the extended semigroup.
\end{rem}
\subsection{Iterative construction of the Poincaré series}\label{subsec:iterativepoincare}
In this subsection, we will show how to compute the Poincaré series iteratively from the star points in the dual graph of the curve. This construction extends the irreducible case and is strongly inspired by it.
\medskip
The construction in the irreducible case is based on an algebraic operation called gluing, which is available for both numerical and affine semigroups; since the semigroup of values of a plane curve with more than one branch is neither of these, the gluing operation is not at our disposal. However, it is possible to reproduce the process by following the paths marked by the star points, which now correspond to maximal contact values and values at proper star points. In this way, we are going to show that the decomposition of the Poincar\'e series given by Theorem \ref{thm:p1p2p3} can be expressed in terms of products of the following polynomials:
\begin{equation}\label{eqn:defkeypoly}
\begin{array}{cc}
P(m,n,x)=&\displaystyle \frac{x^{mn}-1}{x^m-1}\cdot\frac{x-1}{x^n-1} \\[12pt]
Q(m,n,x,y)=&\displaystyle \frac{(yx^m)^n-1}{yx^m-1}\\[12pt]
B(m,n,x,y,z)=&(yx^m)^nz^m-1.
\end{array}
\end{equation}
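As a quick sanity check on these definitions (purely illustrative, with helper names of our own), note that $Q(m,n,x,y)$ is the geometric sum $1+(yx^m)+\cdots+(yx^m)^{n-1}$, hence a polynomial. The following Python sketch evaluates the three expressions exactly at rational sample points:

```python
from fractions import Fraction

def P(m, n, x):
    """P(m,n,x) from (defkeypoly), evaluated exactly on Fractions."""
    return (x**(m * n) - 1) * (x - 1) / ((x**m - 1) * (x**n - 1))

def Q(m, n, x, y):
    return ((y * x**m)**n - 1) / (y * x**m - 1)

def B(m, n, x, y, z):
    return (y * x**m)**n * z**m - 1

# Q(m,n,x,y) equals the geometric sum of (y x^m)^k for k = 0,...,n-1:
x, y = Fraction(3, 2), Fraction(5, 7)
assert Q(2, 3, x, y) == sum((y * x**2)**k for k in range(3))
print(P(3, 2, Fraction(2)))  # -> 3
```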
First we describe the case of one branch.
\subsubsection{The irreducible case}
In the case of a plane curve with a single branch, we have mentioned in Section \ref{sec:semigroupvalues} that its semigroup of values $\Gamma$ is a numerical semigroup minimally generated by \(\{\overline{\beta}_0,\dots,\overline{\beta}_g\}.\) This numerical semigroup is a complete intersection numerical semigroup (see \cite{rosbook}), therefore it can be constructed by a process defined by Delorme in \cite{Delormegluing} and called gluing; we explain it briefly.
\medskip
Let \(A=\{a_1, \dots,a_{g_1}\}\), \(B=\{b_1,\dots, b_{g_2}\}\) and \(C=\{c_1,\dots, c_{g_0}\}\) be three finite subsets of natural numbers. A semigroup \(S=\langle C \rangle\) in \(\mathbb{N}\) is said to be a gluing of \(S_1=\langle A\rangle\) and \(S_2=\langle B \rangle\) if its finite set of generators \(C\) splits into two parts, say \(C=k_1 A\sqcup k_2 B\) with \(k_1,k_2\geq 1\), and the defining ideals of the corresponding semigroup rings satisfy that \(I_C\) is generated by \(I_A+I_B\) and one extra element. We will denote the gluing of \(S_1\) and \(S_2\) via \((k_1,k_2)\) as \(k_1 S_1\bowtie k_2S_2.\)
\medskip
The point is that we can construct $\Gamma$ as an iterated gluing. In the notation of Section \ref{sec:semigroupvalues}, we write \(p_i=n_i\) and \(w_i=\overline{\beta}_i/(n_{i+1}\cdots n_g)=\overline{\beta}_i/e_i\). Since \(\gcd(p_i,w_i)=1,\) we start with the numerical semigroup \(\Gamma_1=\langle p_1,w_1 \rangle.\)
Now, we perform the gluing of \(\Gamma_1\) and the (trivial) semigroup \(\mathbb{N}\) via \((p_2,w_2)\) in order to obtain \(\Gamma_2=p_2 \Gamma_1\bowtie w_2\mathbb{N}.\) It is easily seen that \(\Gamma_2=\langle p_1p_2, p_2w_1,w_2 \rangle\). Recursively, we define \(\Gamma_i=p_{i}\Gamma_{i-1}\bowtie w_{i}\mathbb{N}\), and again it is a simple matter to check that
\[
\Gamma_i=\langle p_1\cdots p_i,w_1p_2\cdots p_i,\ldots , w_{i-1}p_i,w_i \rangle;
\]
in the end, we get that the semigroup of values is \(\Gamma=\Gamma_g.\) From this point of view, the knowledge of the minimal generators \(\overline{\beta}_0,\dots,\overline{\beta}_g\) is enough to provide the construction of \(\Gamma\) by gluing. Moreover, the gluing construction of the semigroup can be identified with the successive truncations of the topological Puiseux series of the branch: The key idea is that each star point in the dual graph of the branch defines a gluing operation in the semigroup in an ordered way.
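For illustration, consider the branch with semigroup $\Gamma=\langle 4,6,13\rangle$: here $(p_1,w_1)=(2,3)$ and $(p_2,w_2)=(2,13)$, so $\Gamma=2\Gamma_1\bowtie 13\,\mathbb{N}$ with $\Gamma_1=\langle 2,3\rangle$. A short Python sketch (helper names are ours) checks the gluing numerically on an initial segment:

```python
def semigroup_membership(gens, N):
    """Membership table of the numerical semigroup <gens> up to N-1."""
    in_s = [False] * N
    in_s[0] = True
    for v in range(1, N):
        in_s[v] = any(v >= g and in_s[v - g] for g in gens)
    return in_s

N = 80
# Gamma_1 = <2,3> and gluing data (p_2, w_2) = (2, 13); the gluing
# 2*Gamma_1 |><| 13*N should be the semigroup <4, 6, 13>.
Gamma1 = [v for v, b in enumerate(semigroup_membership([2, 3], N)) if b]
glued = {2 * s + 13 * t for s in Gamma1 for t in range(N) if 2 * s + 13 * t < N}
target = {v for v, b in enumerate(semigroup_membership([4, 6, 13], N)) if b}
assert glued == target
print(sorted(glued)[:8])  # -> [0, 4, 6, 8, 10, 12, 13, 14]
```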
\medskip
In the irreducible case, it is easily seen that any element of the semigroup \(\Gamma\) can be written in a canonical form as follows:
\[
\nu=a_0\overline{\beta}_0+a_1\overline{\beta}_1+\cdots + a_g\overline{\beta}_g,
\]
where \(0\leq a_i< n_i\) for \(i=1,\dots,g\) and \(a_0\geq 0.\) From this unique expression it is easily deduced that
\[
P_{\Gamma_{i}}(t)=\frac{\displaystyle\prod_{j=1}^{i}(t^{n_j\overline{\beta}_j/e_i}-1)}{\displaystyle\prod_{j=0}^{i}(t^{\overline{\beta}_j/e_i}-1)}, \quad\text{and}\quad P_{\mathbb{N}}(t)=\frac{1}{t-1},
\]
for $\Gamma_i=\langle \overline{\beta}_0,\ldots , \overline{\beta}_i \rangle$; the only operation we need to perform is a gluing operation of type \((n_i,\overline{\beta}_i/e_i)\), and this operation behaves very well with respect to the unique expression of an element of the semigroup. Moreover, it translates directly to the dual graph, since it follows the star points of the geodesic to the unique arrow in the graph. This fact allows us to construct the Poincaré series of the gluing as
\[
P_{\Gamma_{i+1}}(t)=(t^{n_{i+1}\overline{\beta}_{i+1}/e_{i+1}}-1)\cdot P_{\Gamma_{i}}(t^{n_{i+1}})\cdot P_{\mathbb{N}}(t^{\overline{\beta}_{i+1}/e_{i+1}}),
\]
which provides the well-known expression for the Poincaré series of the numerical semigroup \(\Gamma_{i+1}.\) If we write
\[P(\alpha,\beta,x)=\frac{x^{\alpha\beta}-1}{x^\alpha-1}\cdot\frac{x-1}{x^\beta-1},\]
as in \eqref{eqn:defkeypoly}, and \(b_{l,m}:=\prod_{j=l}^{m}n_j\) with the convention \(b_{l,m}=1\) if \(l>m,\) then the following is easily checked.
\begin{proposition}\label{prop:poincareirreduciblecase}
Let \(C\) be an irreducible plane curve singularity with semigroup \(\Gamma\). With the previous notation, we have
\[
P_C(t)=P_\Gamma(t)=\frac{1}{t-1}\cdot\prod^{g}_{j=1}P(\overline{\beta}_j/e_j,n_j,t^{b_{j+1,g}}).
\]
\end{proposition}
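The proposition can be checked numerically on examples. For $\Gamma=\langle 4,6,13\rangle$ (so $\overline{\beta}=(4,6,13)$, $e=(4,2,1)$, $n_1=n_2=2$) the product formula amounts to $\big(\sum_{v\in\Gamma}t^v\big)(1-t^4)(1-t^6)(1-t^{13})=(1-t^{12})(1-t^{26})$; we write the factors as $1-t^a$ rather than $t^a-1$ so that both sides have constant term $1$. A Python sketch (helper names ours) comparing truncated power series:

```python
def semigroup_membership(gens, N):
    """Membership table of the numerical semigroup <gens> up to N-1."""
    in_s = [False] * N
    in_s[0] = True
    for v in range(1, N):
        in_s[v] = any(v >= g and in_s[v - g] for g in gens)
    return in_s

def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def one_minus(k):
    """Coefficient list of 1 - t^k."""
    p = [0] * (k + 1)
    p[0], p[k] = 1, -1
    return p

N = 120
series = [int(b) for b in semigroup_membership([4, 6, 13], N)]
denom = poly_mul(poly_mul(one_minus(4), one_minus(6)), one_minus(13))
lhs = poly_mul(series, denom)[:N]          # coefficients valid up to degree N-1
rhs = poly_mul(one_minus(12), one_minus(26))
assert lhs == rhs + [0] * (N - len(rhs))
```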
\subsubsection{The base cases}
Before continuing, let us introduce some notation. Let \(C\) be a plane curve singularity with \(r\) branches and let \(\{(p_{i,j},m_{i,j})\;|\;1\leq i\leq k_j,\;1\leq j\leq r\}\) be the set of the Puiseux pairs of the topological Puiseux series of the branches. Recursively, we define
\begin{equation}\label{eqn:defw_j}
w_{1,j}=m_{1,j}\quad \text{and}\quad w_{i,j}=m_{i,j}-m_{i-1,j}p_{i,j}+w_{i-1,j}p_{i-1,j}p_{
i,j}.
\end{equation}
Observe that, for \(r=1\), we have \(w_{i,1}=\overline{\beta}_i/e_i\), and for \(r>1\) we have \(w_{i,j}\in\{\overline{\beta}^{j}_s/e^{j}_s,[f_j,f_l]/e^j_s\}\) for some \(1\leq s\leq g_j\) and \(1\leq l\leq r.\)
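For example (illustrative only), the branch $y=x^{3/2}+x^{7/4}$ has Puiseux pairs $(p_1,m_1)=(2,3)$ and $(p_2,m_2)=(2,7)$, characteristic exponents $(4;6,7)$ and semigroup $\langle 4,6,13\rangle$, so the recursion \eqref{eqn:defw_j} should return $w_1=\overline{\beta}_1/e_1=3$ and $w_2=\overline{\beta}_2/e_2=13$. A Python sketch of the recursion (the helper name is ours):

```python
def w_values(pairs):
    """w_i from the Puiseux pairs (p_i, m_i), following the recursion
    w_1 = m_1,  w_i = m_i - m_{i-1} p_i + w_{i-1} p_{i-1} p_i."""
    ws = []
    for i, (p, m) in enumerate(pairs):
        if i == 0:
            ws.append(m)
        else:
            p_prev, m_prev = pairs[i - 1]
            ws.append(m - m_prev * p + ws[-1] * p_prev * p)
    return ws

# Branch y = x^{3/2} + x^{7/4}: pairs (2,3), (2,7) give w = (3, 13).
print(w_values([(2, 3), (2, 7)]))  # -> [3, 13]
```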
\medskip
As we have seen, the irreducible case exhibits the computation of the Poincaré series along the sequence of approximating curves at the star points of the dual graph; in the irreducible case all of them are obviously non-proper. To mimic this process for non-irreducible plane curves, we first need to prove two base cases (those with a single approximating curve, which is the curve itself). The first one corresponds to a plane curve with \(r\) smooth branches, all of them with the same contact, as its dual graph has a unique star point, which is in fact a proper star point.
\begin{proposition}\label{prop:poincarelemm1}
Let \(C=\bigcup_{i=1}^{r} C_i\) such that \(C_i\) is smooth for all \(i\in \mathtt{I}\) and such that the contact pair is equal for all branches, i.e. \((q,c)=(q_{i,j},c_{i,j})\) for all \(i,j\in\mathtt{I}\). In particular, the topological Puiseux series of the branches are \(s_i(x)=a_ix^{c}\) with \(a_i\neq a_j\) for $i \neq j$. Then,
\[
P_C(\underline{t})=Q\big(c,1,t_1,\prod_{i=2}^{r}t_i^c\big)\cdot \Big(\prod_{i=2}^{r-1}B(1,c,t_i,\prod_{k<i}t_k,\prod_{k>i}t_k^c)\Big)\cdot Q\big(1,c,t_r,\prod_{i=1}^{r-1}t_i\big).
\]
\end{proposition}
\begin{proof}
By hypothesis, the dual graph has only one star point, \(\sigma_0.\) Moreover, the value at \(\sigma_0\) is \(v^{\sigma_0}=(c,\dots,c)\) and it has \(s(\sigma_0)=r-1.\) Also, \(v^{\mathbf{1}}=(1,\dots,1)\). Then,
\begin{equation*}
\begin{split}
& Q\big(c,1,t_1,\prod_{i=2}^{r}t_i^c\big)\cdot \Big(\prod_{i=2}^{r-1}B(1,c,t_i,\prod_{k<i}t_k,\prod_{k>i}t_k^c)\Big)\cdot Q\big(1,c,t_r,\prod_{i=1}^{r-1}t_i\big)\\
=& \frac{\displaystyle\prod_{i=1}^{r}t_i^c-1}{\displaystyle\prod_{i=1}^{r}t_i^c-1}\cdot\bigg(\displaystyle\prod_{i=2}^{r-1}\Big(\big(t_i \prod_{k<i}t_k\big)^c\prod_{k>i}t_k^c-1\Big)\bigg)\cdot\displaystyle\frac{\displaystyle\prod_{i=1}^{r}t_i^c-1}{\displaystyle\prod_{i=1}^{r}t_i-1}\\
=&\frac{\displaystyle\prod_{i=1}^{r-1}\big(\prod_{k=1}^{r}t_k^c-1\big)}{\displaystyle\prod_{k=1}^{r}t_k-1}=\frac{\underline{t}^{\underline{v}^{\sigma_0}}-1}{\underline{t}^{\underline{v}^{\mathbf{1}}}-1}\cdot (\underline{t}^{\underline{v}^{\sigma_0}}-1)^{r-2}=P_C(\underline{t}),
\end{split}
\end{equation*}
as desired.
\end{proof}
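The identity proved above can also be verified numerically: evaluating both sides at exact rational points, the product of the $Q$ and $B$ factors must equal $(\prod_i t_i^c-1)^{r-1}/(\prod_i t_i-1)$. A Python sketch for $r=4$, $c=3$ (the sample points are arbitrary):

```python
from fractions import Fraction
from math import prod

def Q(m, n, x, y):
    return ((y * x**m)**n - 1) / (y * x**m - 1)

def B(m, n, x, y, z):
    return (y * x**m)**n * z**m - 1

r, c = 4, 3
t = [Fraction(k, k + 1) for k in range(2, 2 + r)]   # exact sample point

# Left-hand side: Q * (product of B factors) * Q, branches 0-indexed.
lhs = Q(c, 1, t[0], prod(ti**c for ti in t[1:]))
for i in range(1, r - 1):
    lhs *= B(1, c, t[i], prod(t[:i]), prod(tk**c for tk in t[i + 1:]))
lhs *= Q(1, c, t[r - 1], prod(t[:r - 1]))

# Right-hand side: (t^{v^{sigma_0}} - 1)^{r-1} / (t^{v^{1}} - 1).
rhs = (prod(ti**c for ti in t) - 1)**(r - 1) / (prod(t) - 1)
assert lhs == rhs
```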
Observe that the polynomials \(Q,B\) appear in a somewhat natural way: we can think of \(Q\) as a ``shifted gluing'' along a single branch so that, if we have only two branches, then the Poincaré series of \(C\) can be thought of as the product of the Poincaré series of the branches, one of them with a ``gluing'' of type \((1,c).\) If we have more than two branches, then \(B\) is the ``gluing term'': we can identify three branches with \(x,y,z\), perform the gluing along \(x,y\) with the monomial \((yx^m)^n\), and then ``identify \(x\) and \(z\)'', which gives rise to \((yx^m)^nz^m.\) Following this idea, we can present the second base case, which is the case of a plane curve singularity with \(r\) branches, each with at most one characteristic exponent, all of them with contact \((q_{i,j},c_{i,j})\in\{(1,0),(0,l)\}.\) The condition on the contact implies that again the curve itself is the only approximating curve.
\begin{proposition}\label{prop:poincarelemm2}
Let \(C=\bigcup_{i=1}^{r} C_i\) be a curve with \(\Gamma^i=\langle\overline{\beta}_0^i,\overline{\beta}^i_1\rangle\) or \(\Gamma^i=\mathbb{N}.\) Assume that, for any two indices \(i\neq j\in \mathtt{I}\) such that \(C_i\) and \(C_j\) are singular branches, we have \(l:=\Big\lfloor\frac{\overline{\beta}^i_1}{\overline{\beta}^i_0}\Big\rfloor=\Big\lfloor\frac{\overline{\beta}^j_1}{\overline{\beta}^j_0}\Big\rfloor.\) Moreover, assume that the contact pairs are of the form \((q_{i,j},c_{i,j})\in\{(1,0),(0,l)\}.\) Then the Poincaré series $ P_C(\underline{t})$ is equal to the product
\[
Q\big(w_{1,1},p_{1,1},t_1,\prod_{i=2}^{r}t_i^{w_{1,i}}\big)\cdot\bigg(\prod_{i=2}^{r-1}B\big(p_{1,i},w_{1,i},t_i,\prod_{k<i}t_k^{p_{1,k}},\prod_{k>i}t_{k}^{w_{1,k}}\big )\bigg)\cdot Q\big(p_{1,r},w_{1,r},t_r,\prod_{i=1}^{r-1}t_i^{p_{1,i}}\big).
\]
\end{proposition}
\begin{proof}
We can assume that at least one branch is singular (otherwise we would be in the conditions of Proposition \ref{prop:poincarelemm1}). Without loss of generality, we assume further that the branches are ordered with the refinement of the good ordering introduced in Subsection \ref{subsec:totalorderstar}; this means in particular that the last branch \(f_r\) is singular, and that \(\sigma_0\) is the unique star point in \(G(C_r)\); this implies that \(\nu^{\sigma_0}=p_{1,r}\eta^{(r)}\) where
\[
\mathrm{pr}_i(\eta^{(r)})=\left\{\begin{array}{ll}
\ \overline{\beta}^r_1,& \text{if}\quad (C_i\mid C_r)=(1,0); \\
\ [C_i,C_r]/p_{1,r},& \text{otherwise}.
\end{array}\right.
\]
By the assumption that the contact pairs are of the form \((q_{i,j},c_{i,j})\in\{(1,0),(0,l)\}\), both the Noether formula \ref{noehterformula} and the ordering in the set of branches imply the equality \([C_i,C_j]=p_{1,i}w_{1,j}\) if \(i<j\). Therefore, we have
\[
Q\big(p_{1,r},w_{1,r},t_r,\prod_{i=1}^{r-1}t_i^{p_{1,i}}\big)=\frac{\underline{t}^{\nu^{\sigma_0}}-1}{\underline{t}^{\nu^{\mathbf{1}}}-1}.
\]
Assume first that all the branches are singular, as in Figure \ref{fig:prop2-case1}. In this case, there exists a unique dead end in the dual graph, which we denote by \(T\). The hypothesis on the contact pairs leads to the existence of a maximal contact value of the form \(\nu^{T}=(\overline{\beta}^1_1,\dots,\overline{\beta}^r_1).\) Moreover, the total order in the set of star points shows that \(\mathcal{S}=\mathcal{S}_1\) in this case, and the arrow corresponding to \(f_1\) goes through the maximal star point in \(\mathcal{S}.\) This means that \((n_T+1)\nu^{T}=p_{1,1}\nu^{T}.\)
We have
\[
P_1(\underline{t})\cdot P_2(\underline{t})=Q\big(w_{1,1},p_{1,1},t_1,\prod_{i=2}^{r}t_i^{w_{1,i}}\big)\cdot Q\big(p_{1,r},w_{1,r},t_r,\prod_{i=1}^{r-1}t_i^{p_{1,i}}\big).
\]
Assume now that there is at least one smooth branch, as in Figure \ref{fig:prop2-case2}. In this case, the hypothesis on the contact means that there are no dead ends in the dual graph of \(C\); this makes the factor $P_2(\underline{t})$ trivial, namely \(P_2(\underline{t})=1.\) On the other hand, the good order implies that \(f_1\) is smooth so \(Q(w_{1,1},p_{1,1},t_1,\prod_{i=2}^{r}t_i^{w_{1,i}})=1\) since \(p_{1,1}=1.\) Therefore, again we have
\[
P_1(\underline{t})\cdot P_2(\underline{t})=Q\big(w_{1,1},p_{1,1},t_1,\prod_{i=2}^{r}t_i^{w_{1,i}}\big)\cdot Q\big(p_{1,r},w_{1,r},t_r,\prod_{i=1}^{r-1}t_i^{p_{1,i}}\big).
\]
What is left is to show
\[
P_3(\underline{t})=\prod_{i=2}^{r-1}B\big(p_{1,i},w_{1,i},t_i,\prod_{k<i}t_k^{p_{1,k}},\prod_{k>i}t_{k}^{w_{1,k}}\big),
\]
independently of the existence of a smooth branch. To complete the proof, we proceed as in the case of \(\sigma_0.\) Let \(Q\) be the point in \(G(C)\) which is the unique end point of \(G(C_{\mathtt{J}_{sing}})\), where \(C_{\mathtt{J}_{sing}}\) is the package of singular branches. As explained in Subsection \ref{subsec:totalorderstar}, we can decompose \(\mathtt{J}_{sing}=\bigcup_{l=t+1}^{s}J_l\) into the packages defined in \((\star\star\star).\) Consider \(R\in\mathcal{R}\) and denote by \(\mathtt{J}\subset\mathtt{I}\) the set of indices such that the geodesic to an arrow associated to \(j\in\mathtt{J}\) finishes at \(R\); this means that there are \(s_R\) packages such that \(\mathtt{J}=J_{l_1}\cup\cdots\cup J_{l_1+s_R}.\) Then \(\nu^R=p_{1,j}\nu^Q\) for all \(j\in \mathtt{J}\) and
\[
\mathrm{pr}_i(\nu^Q)=\left\{\begin{array}{ll}
\ \overline{\beta}^\mathtt{J}_1,& \text{if}\quad i\in\mathtt{J} \\
\ [C_i,C_\mathtt{J}]/p_{1,\mathtt{J}},& \text{if}\quad i\notin\mathtt{J}
\end{array}\right.
\]
For each \(j\in\mathtt{J}\) we have a factor
\[
B\big (p_{1,j},w_{1,j},t_j,\prod_{k<j}t_k^{p_{1,k}},\prod_{k>j}t_{k}^{w_{1,k}}\big)=(\underline{t}^{\nu^R}-1).
\]
Therefore, the claim follows from Proposition \ref{prop:numbersq} taking into account that each of the previous factors appears \(s_R=s(R)-1\) times.
\end{proof}
\begin{figure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(-70,-30)
\thinlines
\put(-100,30){\line(1,0){38}}
\put(-100,30){\circle*{2}}
\put(-85,30){\circle*{2}}
\put(-70,30){\circle*{2}}
\put(-57,30){\circle*{2}}
\put(-62,30){\line(1,0){18}}
\put(-44,30){\circle*{2}}
\put(-44,30){\line(1,-1){30}}
\put(-44,30){\vector(1,1){15}}
\put(-44,30){\vector(1,0.6){20}}
\put(-44,30){\vector(1,0.2){17}}
\put(-35.5,21){\circle*{2}}
\put(-35.5,21){\vector(1,-0.2){9}}
\put(-35.5,21){\vector(1,0.5){9}}
\put(-35.5,21){\vector(1,1){6}}
\put(-24.5,10){\circle*{2}}
\put(-24.5,10){\vector(1,1.3){9}}
\put(-24.5,10){\vector(1,0.4){9}}
\put(-24.5,10){\vector(1,2){8}}
\put(-14,0){\circle*{2}}
\put(-14,0){\vector(1,1.7){7.5}}
\put(-14,0){\vector(1,-1.3){9}}
\put(-14,0){\vector(1,.7){14}}
\put(-14,0){\line(-1,-1){13}}
\put(-27,-13){\circle*{2}}
\end{picture}
$$
\caption{All the branches are singular.}
\label{fig:prop2-case1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(-70,-30)
\thinlines
\put(-100,30){\line(1,0){38}}
\put(-100,30){\circle*{2}}
\put(-85,30){\circle*{2}}
\put(-70,30){\circle*{2}}
\put(-57,30){\circle*{2}}
\put(-62,30){\line(1,0){18}}
\put(-44,30){\circle*{2}}
\put(-44,30){\line(1,-1){30}}
\put(-44,30){\vector(1,1){15}}
\put(-44,30){\vector(1,0.6){20}}
\put(-44,30){\vector(1,0.2){17}}
\put(-35.5,21){\circle*{2}}
\put(-35.5,21){\vector(1,-0.2){9}}
\put(-35.5,21){\vector(1,0.5){9}}
\put(-35.5,21){\vector(1,1){6}}
\put(-24.5,10){\circle*{2}}
\put(-24.5,10){\vector(1,1.3){9}}
\put(-24.5,10){\vector(1,0.4){9}}
\put(-24.5,10){\vector(1,2){8}}
\put(-14,0){\circle*{2}}
\put(-14,0){\vector(1,1.7){7.5}}
\put(-14,0){\vector(1,-0.6){9}}
\put(-14,0){\vector(1,.7){14}}
\put(-14,0){\line(-1,-1){13}}
\put(-27,-13){\circle*{2}}
\put(-27,-13){\vector(1,-1.3){9}}
\put(-27,-13){\vector(1,0){11}}
\end{picture}
$$
\caption{At least one smooth branch.}
\label{fig:prop2-case2}
\end{subfigure}
\caption{Some base cases.}
\label{fig:ddd}
\end{figure}
\subsubsection{The general procedure}\label{subsec:generalprocedure}
We are now ready to describe the iterative construction of the Poincaré series. Following the notation of Subsection \ref{subsec:totalorderstar}, let \(\mathcal{S}=\mathcal{E}\cup\mathcal{R}\) be the set of star points of the dual graph \(G(C)\) ordered by the total order. We provide an iterative procedure that computes the Poincaré series of \(C\) from the Poincaré series of the truncations at the star points defined in Subsection \ref{subsubsec:truncationstar}.
\medskip
Let \((q,c)=(f_1|\cdots|f_r)\) be the contact pair of \(C.\) Obviously, if \(C\) has only one approximating curve, i.e.~\(C\) is its unique approximating curve, then we are in the conditions of Proposition \ref{prop:poincarelemm1} or Proposition \ref{prop:poincarelemm2} and there is nothing to prove. Therefore, we can assume that \(C\) is a plane curve with at least one approximating curve \(C_{\alpha_1}\neq C.\)
\medskip
As in Subsection \ref{subsubsec:truncationstar} let \(\alpha_1\prec\cdots \prec \alpha_{q-1}\) be the first star points in \(\mathcal{S}_1\) which are known to be common to all the branches. For \(1\leq k\leq q-1\) we define the semigroup
\[
\Gamma^{1}_k=\Big \langle\frac{\overline{\beta}^i_0}{e^{i}_k},\dots ,\frac{\overline{\beta}^i_k}{e^{i}_k}\Big \rangle,
\]
which is independent of \(i\in\mathtt{I}\) since the contact pair of \(C\) is \((q,c).\) For \(i=1,\dots, q-1\) let \(C_{\alpha_i}\) be the irreducible curve associated to the star point \(\alpha_i\) defined in Subsection \ref{subsubsec:truncationstar} and write \(P_{\alpha_i}(t):=P_{C_{\alpha_i}}=P_{\Gamma^{1}_i}\) for the Poincaré series of the plane curve \(C_{\alpha_i}.\) By Proposition \ref{prop:poincareirreduciblecase}, we have
\[
P_{\alpha_i}(t)=\frac{\displaystyle\prod_{k=1}^{i}(t^{n^1_k\overline{\beta}^1_k/e^1_i}-1)}{\displaystyle\prod_{k=0}^{i}(t^{\overline{\beta}^1_k/e^1_i}-1)}.
\]
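To illustrate the closed formula in the simplest situation (a toy example, not needed in the sequel): if the truncation \(C_{\alpha_1}\) has semigroup \(\langle 2,3\rangle,\) so that \(\overline{\beta}^1_0=2,\) \(\overline{\beta}^1_1=3,\) \(n^1_1=2\) and \(e^1_1=1,\) then
\[
P_{\alpha_1}(t)=\frac{t^{6}-1}{(t^{2}-1)(t^{3}-1)}=\frac{t^{2}-t+1}{t-1},
\]
and \(t^{2}-t+1\) is, up to the factor \(t-1,\) the Alexander polynomial of the trefoil, as expected for the ordinary cusp.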
Using the gluing property of a numerical semigroup, for \(i=2,\dots,q-1\) we obtain
\[
P_{\alpha_i}(t)=P_{\alpha_{i-1}}(t^{p_{i,1}})\cdot \frac{(t^{p_{i,1}}-1)}{t-1}\cdot P(w_{i,1},p_{i,1},t).
\]
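Let us record why this recursion is consistent with the closed formula above; this is a routine check under the standing identifications \(p_{i,1}=n^1_i=e^1_{i-1}/e^1_i\) and \(w_{i,1}=\overline{\beta}^1_i/e^1_i.\) Substituting \(t^{p_{i,1}}\) in the closed expression for \(P_{\alpha_{i-1}}\) rescales every exponent \(\overline{\beta}^1_k/e^1_{i-1}\) to \(\overline{\beta}^1_k/e^1_i,\) whence
\[
\frac{P_{\alpha_i}(t)}{P_{\alpha_{i-1}}(t^{p_{i,1}})}=\frac{t^{n^1_i\overline{\beta}^1_i/e^1_i}-1}{t^{\overline{\beta}^1_i/e^1_i}-1}=\frac{t^{p_{i,1}w_{i,1}}-1}{t^{w_{i,1}}-1},
\]
so the factor \(\frac{(t^{p_{i,1}}-1)}{t-1}\cdot P(w_{i,1},p_{i,1},t)\) appearing in the recursion must equal \(\frac{t^{p_{i,1}w_{i,1}}-1}{t^{w_{i,1}}-1}.\)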
We now proceed as in Subsection \ref{subsubsec:truncationstar} and denote \(Q=\alpha_{q-1}\) and \(\sigma=\alpha_q.\) We will now show how to compute the Poincaré series of \(C_\sigma\) from the Poincaré series of \(C_Q.\) As in Subsection \ref{subsubsec:truncationstar}, we need to distinguish two cases:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item [\(\bullet\)] If \(c>0\), then \(C_{\alpha_q}\) is an irreducible plane curve and we can compute its Poincaré series as in the previous case
\[
P_{\sigma}(t)=P_{Q}(t^{p_{q,1}})\cdot \frac{(t^{p_{q,1}}-1)}{t-1}\cdot P(w_{q,1},p_{q,1},t).
\]
\item [\(\bullet\)] If \(c=0\), then \(\sigma=\alpha_q=\sigma_0\) and we need to consider the partition \(\displaystyle\mathtt{I}=( \bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\). Hence \(C_{\sigma}\) is a plane curve with \(s\) branches, and we have
\begin{lemma} \label{lem:aux1}
\[
\begin{split}
P_\sigma(t_1,\dots,t_s)=&P_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot Q\big(w_{q,I_1},p_{q,I_1},t_1,\prod_{k=2}^{s} t_k^{w_{q,I_k}}\big)\\
\cdot&\bigg(\prod_{j=2}^{s-1}B(p_{q,j},w_{q,j},t_j,\prod_{k<j}t_k^{p_{q,k}},\prod_{k>j}t_{k}^{w_{q,k}})\bigg)\cdot \Big((\prod_{k=1}^{s}t_k^{p_{q,k}})^{w_{q,s}}-1\Big).
\end{split}
\]
\end{lemma}
\begin{proof}
The proof goes in an analogous manner as that of Proposition \ref{prop:poincarelemm2}. By Theorem \ref{thm:p1p2p3} we have a decomposition \(P_\sigma(t_1,\dots,t_s)=P_1\cdot P_2\cdot P_3.\) First, since \(p_{q,I_k}=n_{q,I_k}=e^{I_k}_{q-1}/e^{I_k}_q\) and \(Q=\alpha_{q-1}\), we have that
\[
P_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})=\frac{\displaystyle\prod_{k=1}^{q-1}\big((t_1\cdots t_s)^{n^1_k\overline{\beta}^1_k/e^1_q}-1\big)}{\displaystyle\prod_{k=0}^{q-1}\big((t_1\cdots t_s)^{\overline{\beta}^1_k/e^1_q}-1\big)}.
\]
Analysis similar to that in the part of the proof of Proposition \ref{prop:poincarelemm2} where all the branches are singular shows that
\[
P_1(\underline{t})\cdot P_2(\underline{t})=P_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot Q\big(w_{q,I_1},p_{q,I_1},t_1,\prod_{k=2}^{s} t_k^{w_{q,I_k}}\big) \cdot \Big((\prod_{k=1}^{s}t_k^{p_{q,k}})^{w_{q,s}}-1\Big).
\]
The only difference with the proof of Proposition \ref{prop:poincarelemm2} lies in the fact that, in the current case, the factor \(\underline{t}^{v^{\mathbf{1}}}-1\) is already contained in \(P_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}}).\) Therefore, instead of adding a factor \(\displaystyle Q\Big(p_{q,s},w_{q,s},t_s,\prod_{k=1}^{s-1}t_k^{p_{q,k}}\Big),\) we only need to add the factor corresponding to \(\underline{t}^{v^{\sigma_0}}-1.\)
\medskip
It remains to prove that \(P_3(\underline{t})=\displaystyle \prod_{j=2}^{s-1}B\big (p_{q,j},w_{q,j},t_j,\prod_{k<j}t_k^{p_{q,k}},\prod_{k>j}t_{k}^{w_{q,k}}\big).\) To do that, we first observe that since \(c=0\) then \(p_{q,j}=e^{j}_{q-1}/e^{j}_q\) and \(w_{q,j}=\overline{\beta}^j_q/e^j_q\) are independent of \(j\) and hence the factor
\[
B\Big(p_{q,j},w_{q,j},t_j,\prod_{k<j}t_k^{p_{q,k}},\prod_{k>j}t_{k}^{w_{q,k}}\Big)=\big(t_j^{p_{q,j}}\prod_{k<j}t_k^{p_{q,j}}\big)^{\overline{\beta}^j_q/e^j_q}\big(\prod_{k>j}t_{k}^{\overline{\beta}^j_q/e^j_q}\big)^{p_{q,j}}-1=(t_1\cdots t_s)^{p_{q,j}\overline{\beta}^j_q/e^j_q}-1
\]
is repeated \(s-2\) times. By the definition of \(C_\sigma\) and the fact that \(c=0\), we have \(v^{\sigma}=v^{\sigma_0}=p_{q,j}w_{q,j}(1,\dots,1)\) and the valency of \(\sigma_0\) is \(s+2.\) Since \(\sigma_0\) is the only proper star point of \(G(C_\sigma)\) and \(s(\sigma_0)-1=v(\sigma_0)-4=s-2\) in this case, the claim follows.
\end{proof}
\end{enumerate}
If \(c=0,\) then after computing the Poincaré series \(P_\sigma\) we set \(Q=\sigma_0\) and \(\sigma\) is the next (with respect to the ordering \(<\) in \(\mathcal{S}\)) star point to be considered. If \(c\neq 0\), then we set \(Q=\alpha_q\) and \(\sigma=\sigma_0.\) In this case, we compute the Poincaré series similarly to the case \(c=0:\)
\begin{lemma} \label{lem:aux2}
\[\begin{split}
P_\sigma(t_1,\dots,t_s)=&P_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot Q\big(w_{q,I_1},p_{q,I_1},t_1,\prod_{k=2}^{s} t_k^{w_{q,I_k}}\big)\\
\cdot&\Big(\prod_{j=2}^{s-1}B(p_{q,j},w_{q,j},t_j,\prod_{k<j}t_k^{p_{q,k}},\prod_{k>j}t_{k}^{w_{q,k}})\Big)\cdot \big((\prod_{k=1}^{s}t_k^{p_{q,k}})^{w_{q,s}}-1\big).
\end{split}
\]
\end{lemma}
\begin{proof}
The proof is analogous to the proof of Lemma \ref{lem:aux1}, resp.\ Proposition \ref{prop:poincarelemm2}. The only difference with respect to the proof of Lemma \ref{lem:aux1} is that the \(p_{q,I_k}\) are now different, and we need to proceed as in the proof of Proposition \ref{prop:poincarelemm2} to explicitly compute the polynomial \(P_3.\)
\end{proof}
We continue by computing the Poincaré series of the approximations of \(C,\) following the procedure to construct the approximations described in Subsection \ref{subsubsec:truncationstar}. At this stage, the distinguished points are \(Q=\sigma_0\) and \(\sigma,\) the next star point to be considered for the computation of the Poincaré series.
Let \(\mathtt{I}=(\bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\) be the partition created at \(\sigma_0.\) Denote by \(\overline{\sigma}_1,\dots,\overline{\sigma}_\epsilon\) the star points between \(\sigma_0\) and the point through which the geodesics of \(I_1\) pass. At this point we have
\[
\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\preceq P.
\]
Then,
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item Assume \(|I_1|=1:\)
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(q\) minimal generators. This implies that \(\mathcal{S}_1=\{\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\}\) and \(y^{I_1}_{\sigma_0}\) is the topological Puiseux series of the branch \(C^1.\) Then, there is no need to perform any computation in this step and we have finished with \(\mathcal{S}_1\). We move to the package \(I_2,\) we let \(\sigma\) be the star point through which the geodesics of \(I_2\) pass, and we set \(Q=\sigma_0.\)
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(g_1>q\) minimal generators. Then,
\[\mathcal{S}_1=\{\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\prec \alpha_1^{I_1}\prec\cdots\prec \alpha_{g_1-q}^{I_1}\}\]
where \(\alpha_1^{I_1}\prec\cdots\prec \alpha_{g_1-q}^{I_1}\) are the non-proper star points defining the maximal contact values associated to the remaining generators of the semigroup \(\Gamma^1\). We are in the case \(Q=\sigma_0\) and \(\sigma=\alpha_1^{I_1};\) for \(i=2,\dots,g_1-q\) we will consider \(Q=\alpha^{I_1}_{i-1}\) and \(\sigma=\alpha^{I_1}_i.\) At each stage we need to compute the Poincaré series \(P_{C_{\alpha^{I_1}_i}}=P_{\alpha^{I_1}_i}=P_\sigma\) from the Poincaré series \(P_Q=P_{C_{\alpha^{I_1}_{i-1}}};\) to simplify notation, let us denote \(p_\sigma:=p_{q+i,I_1}\) and \(w_\sigma:=w_{q+i,I_1}.\) Now, for all \(j\notin I_1,\) i.e. \(j\in I_k\) for some \(k=2,\dots,s,\) we have by definition of \(C_\sigma\) that \(w_{q+i,I_k}:=w_{q+i,j}=[f_{I_1},f_j]/e^{I_1}_{q+i-1}.\) Then,
\begin{proposition} \label{prop:case1b}
\[
P_{\sigma}(\underline{t})=P_Q(t_1^{p_\sigma},t_2,\dots,t_s)\cdot Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t_k^{w_{q+i,I_k}}\big).
\]
\end{proposition}
\begin{proof}
First of all, we observe that the dual graph \(G(C_\sigma)\) can be obtained from \(G(C_Q)\) by adding a single dead arc corresponding to \(\sigma;\) thus \(G(C_Q)\) is a subgraph of \(G(C_\sigma).\) Therefore, the set of proper star points in \(G(C_\sigma)\) is equal to the set of proper star points in \(G(C_Q)\). Moreover, if we denote by \(\mathcal{S}_\sigma\) the set of star points of \(G(C_\sigma)\) and by \(\mathcal{S}_Q\) the set of star points of \(G(C_Q),\) then \(\mathcal{S}_\sigma\setminus\mathcal{S}_Q=\{\sigma\}\) with \(\sigma\in \widetilde{\mathcal{E}}_\sigma.\) Therefore, by Noether's formula \ref{noehterformula} and the definition of the curves \(C_\sigma\) and \(C_Q\) we have that
\begin{align*}
P_Q(&t_1^{p_\sigma},t_2,\dots,t_s)=\\
&= \frac{1}{\underline{t}^{\underline{v}^1}-1}\cdot \prod_{i=1}^q \frac{\underline{t}^{\underline{v}^{\sigma_{i}}}-1}{\underline{t}^{\underline{v}^{\rho_{i}}}-1}\cdot (\underline{t}^{\underline{v}^{\sigma_0}}-1) \cdot \prod_{\rho \in \widetilde{\mathcal{E}}\setminus\{\sigma\} } \frac{\underline{t}^{(n_{\rho}+1)\underline{v}^{\rho}}-1}{\underline{t}^{\underline{v}^{\rho}}-1} \cdot \prod_{s(\alpha) >1} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1}.
\end{align*}
Therefore, the proof is completed by showing that
\[
Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t_k^{w_{q+i,I_k}}\big)=\frac{\underline{t}^{(n_{\sigma}+1)\underline{v}^{\sigma}}-1}{\underline{t}^{\underline{v}^{\sigma}}-1}.
\]
Let \(L^1_{q+i}\) be the dead arc associated to the star point \(\sigma\) and \(P(L^1_{q+i})\) its end point. Then, by eq.~\eqref{eqn:maximalcontactvalues} in Subsection \ref{subsec:maximalcontactdualgrapha}, the Noether formula \ref{noehterformula}, and the definition of \(C_\sigma\) we have that
\[
\mathrm{pr}_j(\underline{v}^\sigma)=\left\{\begin{array}{cc}
\ \overline{\beta}^{1}_{q+i}/e^{1}_{q+i},
&\text{if}\;j=1; \\[.3cm]
\frac{[f_1,f_j]}{e^{1}_ {q+i-1}},& \text{if}\;j\notin I_1.
\end{array}\right.
\]
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(0,-50)
\thinlines
\put(-100,30){\line(1,0){24}}
\put(-100,30){\circle*{2}}
\put(-80,30){\circle*{2}}
\put(-80,30){\line(0,-1){15}}
\put(-80,15){\circle*{2}}
\put(-57,30){\circle*{2}}
\put(-57,30){\line(0,-1){15}}
\put(-57,15){\circle*{2}}
\put(-74,30){$\ldots$}
\put(-62,30){\line(1,0){18}}
\put(-63,35){\scriptsize{$Q=\sigma_0$}}
\put(-8,23){$\ddots$}
\put(-44,30){\circle*{3}}
\put(-44,30){\line(1,-1){30}}
\put(-44,30){\vector(1,1){15}}
\put(-44,30){\vector(1,0.6){20}}
\put(-44,30){\vector(1,0.2){17}}
\put(-35.5,21){\circle*{2}}
\put(-35.5,21){\vector(1,-0.2){9}}
\put(-35.5,21){\vector(1,0.5){9}}
\put(-35.5,21){\vector(1,1){6}}
\put(-24.5,10){\circle*{2}}
\put(-24.5,10){\vector(1,1.3){9}}
\put(-24.5,10){\vector(1,0.4){9}}
\put(-24.5,10){\vector(1,2){8}}
\put(-14,0){\circle*{2}}
\put(-14,0){\vector(1,1.9){7.5}}
\put(-14,0){\vector(1,1.3){9}}
\put(-14,0){\vector(1,.7){14}}
\put(-11,-32){\scriptsize{$I_{1}$}}
\put(-11,-39){\scriptsize{with $|I_{1}|=1$}}
\put(-14,0){\line(0,-1){25}}
\put(-14,-25){\circle*{2}}
\put(-14,-25){\vector(1,1.9){7.5}}
\put(-14,-25){\vector(1,1.3){9}}
\put(-14,-25){\vector(1,1){10}}
\put(-14,-25){\vector(1,.7){10}}
\put(-14,-25){\vector(1,0){13}}
\put(-107,28){{\scriptsize ${\bf 1}$}}
\put(-92,-15){$G(C_Q)$}
\put(25,28){$\rightsquigarrow$}
\put(55,28){{\scriptsize ${\bf 1}$}}
\put(70,-15){$G(C_{\sigma})$}
\put(62,30){\line(1,0){24}}
\put(62,30){\circle*{2}}
\put(82,30){\circle*{2}}
\put(82,30){\line(0,-1){15}}
\put(82,15){\circle*{2}}
\put(105,30){\circle*{2}}
\put(105,30){\line(0,-1){15}}
\put(105,15){\circle*{2}}
\put(88,30){$\ldots$}
\put(100,30){\line(1,0){18}}
\put(102,35){\scriptsize{$Q=\sigma_0$}}
\put(154,23){$\ddots$}
\put(118,30){\circle*{3}}
\put(118,30){\line(1,-1){30}}
\put(118,30){\vector(1,1){15}}
\put(118,30){\vector(1,0.6){20}}
\put(118,30){\vector(1,0.2){17}}
\put(126.5,21){\circle*{2}}
\put(126.5,21){\vector(1,-0.2){9}}
\put(126.5,21){\vector(1,0.5){9}}
\put(126.5,21){\vector(1,1){6}}
\put(137.5,10){\circle*{2}}
\put(137.5,10){\vector(1,1.3){9}}
\put(137.5,10){\vector(1,0.4){9}}
\put(137.5,10){\vector(1,2){8}}
\put(148,0){\circle*{2}}
\put(148,0){\vector(1,1.9){7.5}}
\put(148,0){\vector(1,1.3){9}}
\put(148,0){\vector(1,.7){14}}
\put(148,0){\line(0,-1){25}}
\put(148,-25){\circle*{2}}
\put(148,-25){\vector(1,1.9){7.5}}
\put(148,-25){\vector(1,1.3){9}}
\put(148,-25){\vector(1,1){10}}
\put(148,-25){\vector(1,.7){10}}
\put(148,-25){\line(1,0){20}}
\put(168,-25){\line(0,-1){15}}
\put(168,-25){\circle*{3}}
\put(162,-31){\scriptsize{$\sigma$}}
\put(168,-40){\circle*{2}}
\put(168,-25){\vector(1,1){10}}
\end{picture}
$$
\caption{Graphs \(G(C_Q)\) and \(G(C_\sigma)\) considered in Proposition \ref{prop:case1b}.}
\label{fig47}
\end{figure}
The claim follows by definition of \(\displaystyle Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t_k^{w_{q+i,I_k}}\big)\) and the fact that \((n_{\sigma}+1)\underline{v}^{\sigma}=p_\sigma \underline{v}^\sigma.\)
\end{proof}
\end{enumerate}
\item Assume \(|I_1|>1.\) There are two cases to be distinguished:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item For all \(j\in I_1\) we have \(g_j=q,\) i.e. the semigroups \(\Gamma^{j}\) have \(q\) minimal generators. In this case, \(\sigma=\sigma_0^{I_1}\) is the first separation point of the branches of \(I_1\); let \(I_1=\bigcup_{k=1}^{s_1} I_{1,k}\) be the induced index partition (see also Subsection \ref{subsubsec:truncationstar}). Since \(\sigma\) is an ordinary point, we have \(p_{\sigma,I_{1,k}}=1\) for all \(k=1,\dots,s_1,\) and \(w_{\sigma,I_{1,k}}=w_{q+1,I_{1,k}}=[f_{I_{1,j}},f_{I_{1,k}}]\) with \(j\neq k\) is independent of \(j;\) moreover, \(w_{\sigma,I_{1,k}}=w_{\sigma,I_{1,k'}}\) if \(k\neq k'.\) Also, for \(i=2,\dots,s,\) \(w_{\sigma,I_{i}}=[f_{I_{1,k}},f_{I_{i}}]\) is independent of \(k.\) Then,
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(0,-50)
\thinlines
\put(-100,30){\line(1,0){24}}
\put(-100,30){\circle*{2}}
\put(-80,30){\circle*{2}}
\put(-80,30){\line(0,-1){15}}
\put(-80,15){\circle*{2}}
\put(-57,30){\circle*{2}}
\put(-57,30){\line(0,-1){15}}
\put(-57,15){\circle*{2}}
\put(-74,30){$\ldots$}
\put(-62,30){\line(1,0){18}}
\put(-63,35){\scriptsize{$Q=\sigma_0$}}
\put(-8,23){$\ddots$}
\put(-44,30){\circle*{3}}
\put(-44,30){\line(1,-1){30}}
\put(-44,30){\vector(1,1){15}}
\put(-44,30){\vector(1,0.6){20}}
\put(-44,30){\vector(1,0.2){17}}
\put(-35.5,21){\circle*{2}}
\put(-35.5,21){\vector(1,-0.2){9}}
\put(-35.5,21){\vector(1,0.5){9}}
\put(-35.5,21){\vector(1,1){6}}
\put(-24.5,10){\circle*{2}}
\put(-24.5,10){\vector(1,1.3){9}}
\put(-24.5,10){\vector(1,0.4){9}}
\put(-24.5,10){\vector(1,2){8}}
\put(-14,0){\circle*{2}}
\put(-14,0){\vector(1,1.9){7.5}}
\put(-14,0){\vector(1,1.3){9}}
\put(-14,0){\vector(1,.7){14}}
\put(-11,-32){\scriptsize{$I_{1}$}}
\put(-11,-39){\scriptsize{with $|I_{1}|>1$}}
\put(-14,0){\line(0,-1){25}}
\put(-14,-25){\circle*{2}}
\put(-14,-25){\vector(1,1.9){7.5}}
\put(-14,-25){\vector(1,1.3){9}}
\put(-14,-25){\vector(1,1){10}}
\put(-14,-25){\vector(1,.7){10}}
\put(-14,-25){\vector(1,0){13}}
\put(-107,28){{\scriptsize ${\bf 1}$}}
\put(-92,-15){$G(C_Q)$}
\put(25,28){$\rightsquigarrow$}
\put(55,28){{\scriptsize ${\bf 1}$}}
\put(70,-15){$G(C_{\sigma})$}
\put(62,30){\line(1,0){24}}
\put(62,30){\circle*{2}}
\put(82,30){\circle*{2}}
\put(82,30){\line(0,-1){15}}
\put(82,15){\circle*{2}}
\put(105,30){\circle*{2}}
\put(105,30){\line(0,-1){15}}
\put(105,15){\circle*{2}}
\put(88,30){$\ldots$}
\put(100,30){\line(1,0){18}}
\put(102,35){\scriptsize{$Q=\sigma_0$}}
\put(154,23){$\ddots$}
\put(118,30){\circle*{3}}
\put(118,30){\line(1,-1){30}}
\put(118,30){\vector(1,1){15}}
\put(118,30){\vector(1,0.6){20}}
\put(118,30){\vector(1,0.2){17}}
\put(126.5,21){\circle*{2}}
\put(118,17){\scriptsize{$\overline{\sigma}_1$}}
\put(126.5,21){\vector(1,-0.2){9}}
\put(126.5,21){\vector(1,0.5){9}}
\put(126.5,21){\vector(1,1){6}}
\put(137.5,10){\circle*{2}}
\put(137.5,10){\vector(1,1.3){9}}
\put(137.5,10){\vector(1,0.4){9}}
\put(137.5,10){\vector(1,2){8}}
\put(148,0){\circle*{2}}
\put(148,0){\vector(1,1.9){7.5}}
\put(148,0){\vector(1,1.3){9}}
\put(148,0){\vector(1,.7){14}}
\put(148,0){\line(0,-1){25}}
\put(148,-25){\circle*{2}}
\put(148,-25){\vector(1,1.9){7.5}}
\put(148,-25){\vector(1,1.3){9}}
\put(148,-25){\vector(1,1){10}}
\put(148,-25){\vector(1,.7){10}}
\put(148,-25){\line(1,0){20}}
\put(168,-25){\circle*{3}}
\put(162,-31){\scriptsize{$\sigma$}}
\put(142,-31){\scriptsize{$\overline{\sigma}_{\varepsilon}$}}
\put(168,-25){\vector(1,1.4){12}}
\put(168,-25){\vector(1,0.5){14}}
\put(168,-25){\vector(1,-0.2){13}}
\put(168,-25){\vector(1,-0.7){14}}
\end{picture}
$$
\caption{Graphs \(G(C_Q)\) and \(G(C_\sigma)\) considered in Proposition \ref{prop:case2a}.}
\label{fig48}
\end{figure}
\begin{proposition} \label{prop:case2a}
\begin{align*}
P_\sigma(t_1,\dots,t_{s_1}&,t_{s_1+1},\dots,t_{s_1+s-1})=P_Q(t_1\cdots t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})\\
\cdot&\bigg(\prod_{j=2}^{s_1}B(w_{q+1,I_{1,j}},1,t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t_k)\bigg).
\end{align*}
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{prop:case1b}, \(G(C_Q)\) is a subgraph of \(G(C_\sigma)\) and \(\mathcal{S}_\sigma\setminus\mathcal{S}_Q=\{\sigma\},\) but in this case \(\sigma\) is a proper star point, so it only contributes to the factor \(P_3.\) It is then easy to see that
\begin{align*}
P_Q(t_1\cdots& t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})=\\
=& \frac{1}{\underline{t}^{\underline{v}^1}-1} \cdot \big(\prod_{i=1}^q \frac{\underline{t}^{\underline{v}^{\sigma_{i}}}-1}{\underline{t}^{\underline{v}^{\rho_{i}}}-1}\big)\cdot (\underline{t}^{\underline{v}^{\sigma_0}}-1) \cdot \prod_{\rho \in \widetilde{\mathcal{E}} } \frac{\underline{t}^{(n_{\rho}+1)\underline{v}^{\rho}}-1}{\underline{t}^{\underline{v}^{\rho}}-1} \cdot \prod_{\tiny\begin{array}{c}s(\alpha) >1\\ \alpha\neq\sigma\end{array}} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1}.
\end{align*}
To finish we need to check that
\[
\bigg(\prod_{j=2}^{s_1}B(w_{q+1,I_{1,j}},1,t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t_k)\bigg)=(\underline{t}^{\underline{v}^\sigma}-1)^{s(\sigma)-1}.
\]
To do that, first observe that since \(w_{\sigma,I_{1,k}}=w_{\sigma,I_{1,k'}}\) if \(k\neq k'\) we have
\[
(t_{I_{1,j}}\cdot \prod_{k<j}t_k)^{w_{q+1,I_{1,j}}}=\prod_{k\leq j}t_k^{w_{q+1,I_{1,k}}}.
\]
In this way,
\begin{align*}
&B\Big(w_{q+1,I_{1,j}},1,t_{I_{1,j}},\Big(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}}\Big)\cdot\Big(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}\Big),\prod_{k<j}t_k\Big) \\
& \hspace{20pt} =\Big(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}\Big)\cdot\Big(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}}\Big)\cdot\Big(\prod_{k\leq j}t_k^{w_{q+1,I_{1,k}}}\Big)-1
\end{align*}
is independent of \(j\) and hence it is repeated \(s_1-1\) times.
\medskip
On the other hand, since \(q=g_i\) for all \(i=1,\dots, s_1,\) by eq.~\eqref{eqn:valueproperordinary} we have
\begin{equation*}
\mathrm{pr}_j(\underline{v}^\sigma)=\left\{\begin{array}{ll}
[f_{I_{1,j}},f_{I_{1,k}}]& \text{if}\;j\neq k\;\text{and}\; j\leq s_1 \\
[f_{I_{1,k}},f_{I_{j-s_1+1}}]& \text{if}\; s_1+1\leq j\leq s_1+s-1
\end{array}\right.
\end{equation*}
Finally, by definition of \(C_\sigma,\) the valency of \(\sigma\) in \(G(C_\sigma)\) is \(s_1+1.\) Therefore, \(s(\sigma)=s_1,\) as there is no dead arc starting at \(\sigma.\) Thus, the claim follows.
\end{proof}
We then set \(Q=\sigma_0^{I_{1}}\) and \(\sigma=\sigma_0^{I_{1,1}}\) to continue the process.
\item We assume that \(g_j>q\) for some \(j\in I_1.\) We distinguish again two subcases:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item Assume \((f_1|\cdots|f_{|I_1|})\leq (q+1,0)\) and, for simplicity, \(g_1>q\). Let us denote by \(\sigma_{0}^{I_1}\) the first proper star point of the package \(I_1.\)
Let \(I_1=\bigcup_{k=1}^{s_1} I_{1,k}\) be the partition associated to the proper star point \(\sigma_{0}^{I_1}.\) In this case, the analysis of the first \(s_1\) branches of \(C_{\sigma_{0}^{I_1}}\) is more delicate, as some of them have at least \(q+1\) maximal contact values. For this reason, we subdivide the partition of \(I_1\) as in the proof of Proposition \ref{prop:poincarelemm2}. Let \(\mathtt{J}_{sing}=\bigcup_{k=l+1}^{s_1}I_{1,k}\) index the branches of \(C_\sigma\) with \(q+1\) maximal contact values. Observe that the good ordering of the branches implies that the first \(l\) branches, indexed by \(\mathtt{J}_{sm}=\bigcup_{k=1}^{l}I_{1,k},\) have \(q\) maximal contact values (cf. Lemma \ref{lem:orderbetaiscompatible}), where \(l\) may be \(0\). Thus, by definition, \(C_\sigma\) is a plane curve whose first \(l\) branches have \(q\) maximal contact values, whose next \(s_1-l\) branches have \(q+1\) maximal contact values, and whose last \(s-1\) branches have \(q\) maximal contact values. Then,
\begin{proposition}\label{prop:case2bi}
\begin{align*}
& P_\sigma(t_1,\dots,t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})= P_Q(t^{p_{q+1,I_{1,1}}}_1\cdots t^{p_{q+1,I_{1,s_1}}}_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})\\
& \hspace{20pt} \cdot Q\Big(w_{q+1,I_{1,1}},p_{q+1,I_{1,1}},t_1,(\prod_{k=2}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}})\Big)\\
&\hspace{20pt} \cdot \bigg(\prod_{j=2}^{s_1}B(w_{q+1,I_{1,j}},p_{q+1,I_{1,j}},t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t^{p_{q+1,I_{1,k}}}_k)\bigg).
\end{align*}
\end{proposition}
\begin{proof}
First of all, observe that if \(\mathtt{J}_{sm}=I_1\) then the proof is analogous to the proof of Proposition \ref{prop:case2a}. Therefore, we will assume \(\mathtt{J}_{sm}\neq I_1.\) As usual, the dual graph \(G(C_Q)\) is a subgraph of \(G(C_\sigma)\) and we need to analyze the new star points.
\medskip
As in the proof of Proposition \ref{prop:poincarelemm2} we start with the case \(\mathtt{J}_{sing}=I_1.\) In this case, there are \(\epsilon+1\) new star points, i.e. \[\mathcal{S}_\sigma\setminus\mathcal{S}_Q=\{\sigma_0^{I_1}\preceq \overline{\sigma}^{I_1}_1\preceq\cdots\preceq\overline{\sigma}^{I_1}_\epsilon\}:=\mathcal{NS}.\]
Since \(\mathtt{J}_{sing}=I_1,\) there exists a unique end point \(T:=P(L_{q+1}^{1}),\) which is in fact common to the dead arcs \(L_{q+1}^{I_{1,k}}\) associated to each star point of \(\mathcal{NS}\) as a star point of the corresponding \(G(C_\sigma^{k}).\) Therefore, by the ordering in the set of branches (cf. proof of Proposition \ref{prop:poincarelemm2}), \(\widetilde{\mathcal{E}}_\sigma\setminus\widetilde{\mathcal{E}}_Q=\{\overline{\sigma}_\epsilon^{I_1}\};\) observe that \((f_1|\cdots|f_{|I_1|})=(q+1,0)\) if and only if \(\overline{\sigma}_\epsilon^{I_1}=\sigma_0^{I_1}.\) Also, the star points of \(\mathcal{NS}\setminus\{\overline{\sigma}_\epsilon^{I_1}\}\) are all proper star points. The star point \(\overline{\sigma}_\epsilon^{I_1}\) may also be a proper star point, but it is distinguished since it also belongs to \(\widetilde{\mathcal{E}}_\sigma.\) Then, by Noether's formula \ref{noehterformula}, Theorem \ref{thm:p1p2p3} and the definition of the curves \(C_\sigma,C_Q,\) with a bit of effort one may check that
\[
\begin{split}
P_Q(t^{p_{q+1,I_{1,1}}}_1\cdots t^{p_{q+1,I_{1,s_1}}}_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})= &\frac{1}{\underline{t}^{\underline{v}^1}-1}\cdot \prod_{i=1}^q \frac{\underline{t}^{\underline{v}^{\sigma_{i}}}-1}{\underline{t}^{\underline{v}^{\rho_{i}}}-1}\cdot (\underline{t}^{\underline{v}^{\sigma^0}}-1)\\
\cdot& \prod_{\rho \in \widetilde{\mathcal{E}}\setminus\{\overline{\sigma}_\epsilon\} } \frac{\underline{t}^{(n_{\rho}+1)\underline{v}^{\rho}}-1}{\underline{t}^{\underline{v}^{\rho}}-1} \cdot \prod_{\tiny\begin{array}{c}s(\alpha) >1\\ \alpha\notin\mathcal{NS}\end{array}} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1}.
\end{split}
\]
By the good ordering in Subsection \ref{subsec:totalorderstar} we have that \((n_{T}+1)\underline{v}^{T}=p_{q+1,I_{1,1}}\underline{v}^{T}\), where \(p_{q+1,I_{1,1}}=e^{I_{1,1}}_{q}/e^{I_{1,1}}_{q+1}\) and by eq.~\eqref{eqn:maximalcontactvalues}
\[\mathrm{pr}_j(\underline{v}^T)=\left\{\begin{array}{cc}
\overline{\beta}^{I_{1,1}}_{q+1}/e^{I_{1,1}}_{q+1}
&\text{if}\;j=1 \\[.3cm]
\frac{[f_1,f_j]}{e^{I_{1,1}}_ {q}}& \text{if}\;j\neq 1
\end{array}\right.
\]
In this way, it is straightforward to check
\[
Q\Big(w_{q+1,I_{1,1}},p_{q+1,I_{1,1}},t_1,(\prod_{k=2}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}})\Big)=\frac{\underline{t}^{(n_{T}+1)\underline{v}^{T}}-1}{\underline{t}^{\underline{v}^{T}}-1}.
\]
As in the proof of Proposition \ref{prop:poincarelemm2}, let us now assume that \(\mathtt{J}_{sm}\neq \emptyset.\) In this case, \(T\) is no longer an end point for \(G(C_{\sigma})\) as the good order of the branches implies that the geodesics of \(I_{1,1}\) pass through \(T.\) In this case \(p_{q+1,I_{1,1}}=1\) and then
\[
Q\Big(w_{q+1,I_{1,1}},p_{q+1,I_{1,1}},t_1,(\prod_{k=2}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}})\Big)=1.
\]
Thus, both in the case \(\mathtt{J}_{sing}=I_1\) and in the case \(\mathtt{J}_{sing}\neq I_1,\) we have proven
\begin{align*}
& P_1(\underline{t})\cdot P_2(\underline{t})= P_Q\big(t^{p_{q+1,I_{1,1}}}_1\cdots t^{p_{q+1,I_{1,s_1}}}_{s_1},t_{s_1+1},\dots,t_{s_1+s-1}\big)\\
& \hspace{20pt} \cdot Q\Big (w_{q+1,I_{1,1}},p_{q+1,I_{1,1}},t_1,(\prod_{k=2}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}})\Big).
\end{align*}
We are left with the task of checking that
\begin{align*}
&\prod_{j=2}^{s_1}B\Big(w_{q+1,I_{1,j}},p_{q+1,I_{1,j}},t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t^{p_{q+1,I_{1,k}}}_k\Big)\\
&\hspace{20pt} =\prod_{\alpha\in\mathcal{NS}} (\underline{t}^{\underline{v}^{\alpha}}-1)^{s(\alpha)-1}.
\end{align*}
Indeed, we proceed as in the proof of Proposition \ref{prop:poincarelemm2}. Eq.~\eqref{eqn:valueproperspecial} leads to the fact that each factor is of the form
\[
B\Big(w_{q+1,I_{1,j}},p_{q+1,I_{1,j}},t_{I_{1,j}},\big(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}}\big)\big(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}\big),\prod_{k<j}t^{p_{q+1,I_{1,k}}}_k\Big)=(\underline{t}^{\underline{v}^{\alpha}}-1)
\]
for some \(\alpha \in \mathcal{NS}\) which depends on \(j.\) By definition, to each \(\alpha\in\mathcal{NS}\) there are \(s(\alpha)-1\) packages (i.e.~branches of \(C_\sigma\)) associated to it; this means that each of the factors is repeated \(s(\alpha)-1\) times, and the proof is complete.
\end{proof}
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(-20,-70)
\thinlines
\put(-120,30){\line(1,0){20}}
\put(-120,30){\circle*{2}}
\put(-104,30){\circle*{2}}
\put(-104,30){\line(0,-1){15}}
\put(-104,15){\circle*{2}}
\put(-77,30){\circle*{2}}
\put(-77,30){\line(0,-1){15}}
\put(-77,15){\circle*{2}}
\put(-94,30){$\ldots$}
\put(-82,30){\line(1,0){18}}
\put(-64,30){\circle*{3}}
\put(-64,30){\line(1,-1){30}}
\put(-64,30){\vector(1,1){15}}
\put(-64,30){\vector(1,0.6){20}}
\put(-64,30){\vector(1,0.2){17}}
\put(-34,0){\circle*{2}}
\put(-34,0){\line(0,-1){25}}
\put(-34,-25){\circle*{2}}
\put(-34,-25){\vector(1,1.9){7.5}}
\put(-34,-25){\vector(1,1.3){9}}
\put(-34,-25){\vector(1,1){10}}
\put(-34,-25){\vector(1,.7){10}}
\put(-34,-25){\vector(1,0){13}}
\put(-127,28){{\scriptsize ${\bf 1}$}}
\put(-112,-15){$G(C_Q)$}
\put(-15,28){$\rightsquigarrow$}
\put(15,28){{\scriptsize ${\bf 1}$}}
\put(38,-35){$G(C_{\sigma})$}
\put(22,30){\line(1,0){20}}
\put(22,30){\circle*{2}}
\put(38,30){\circle*{2}}
\put(38,30){\line(0,-1){15}}
\put(38,15){\circle*{2}}
\put(69,30){\circle*{2}}
\put(69,30){\line(0,-1){15}}
\put(69,15){\circle*{2}}
\put(48,30){$\ldots$}
\put(60,30){\line(1,0){18}}
\put(78,30){\circle*{3}}
\put(78,30){\line(1,-1){30}}
\put(78,30){\vector(1,1){15}}
\put(78,30){\vector(1,0.6){20}}
\put(78,30){\vector(1,0.2){17}}
\put(108,0){\circle*{2}}
\put(108,0){\line(0,-1){25}}
\put(108,-25){\circle*{2}}
\put(108,-25){\vector(1,1.9){7.5}}
\put(108,-25){\vector(1,1.3){9}}
\put(108,-25){\vector(1,1){10}}
\put(108,-25){\vector(1,.7){10}}
\put(108,-25){\line(1,0){20}}
\put(128,-25){\circle*{2}}
\put(128,-25){\vector(1,-0.2){10}}
\put(128,-25){\vector(1,1){13}}
\put(139,-36){\circle*{2}}
\put(139,-36){\vector(1,-0.2){10}}
\put(139,-36){\vector(1,1){13}}
\put(128,-25){\line(1,-1){20}}
\put(149,-46){\circle*{2}}
\put(149,-46){\vector(1,-0.5){10}}
\put(149,-46){\vector(1,-0.1){12}}
\put(149,-46){\vector(1,0.7){13}}
\put(149,-46){\line(-1,-1){15}}
\put(134,-61){\circle*{2}}
\put(134,-61){\vector(2,0.8){10}}
\put(134,-61){\vector(1,-0.4){12}}
\put(134,-61){\vector(1.2,-0.7){11}}
\put(134,-61){\vector(1,0){10}}
\end{picture}
$$
\caption{Graphs \(G(C_Q)\) and \(G(C_\sigma)\) considered in Proposition \ref{prop:case2bi}.}
\label{fig49}
\end{figure}
\item Assume \((f_1|\cdots|f_{|I_1|})> (q+1,0),\) i.e. \((f_1|\cdots|f_{|I_1|})\geq (q+1,c)\) with \(c\neq 0,\) and for simplicity \(g_1> q.\) Let us write \((q_{I_1},c_{I_1}):= (f_1|\cdots|f_{|I_1|}).\) Since \((q_{I_1},c_{I_1})> (q+1,0),\) there are \(q_{I_1}-q\) non-proper star points between \(\overline{\sigma}_\epsilon\) and \(\sigma_0^{I_1},\) i.e.
\[\overline{\sigma}_{\epsilon}\prec \alpha_{q+1}\prec \cdots \prec \alpha_{q_{I_1}}\preceq \sigma_0^{I_1}.\]
At this stage the situation is analogous to case \((1)(b)\), since we are dealing with a sequence of non-proper star points on one branch of \(C_\sigma\). Therefore, a slight modification of the proof of Proposition \ref{prop:case1b} shows the following.
\begin{proposition}\label{prop:case2bii}
\[
P_{\sigma}(\underline{t})=P_Q(t_1^{p_\sigma},t_2,\dots,t_s)\cdot Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t^{w_{q+i,I_k}}\big).
\]
\end{proposition}
\end{enumerate}
\end{enumerate}
\end{enumerate}
We run this process until \(\sigma=\max\{\alpha\in\mathcal{S}\}.\)
\begin{rem}
Observe that, in the algorithm we have developed, we only treated, to simplify the exposition, the case where the operations occur in the first package. In the general case, the indexing must be adjusted as follows:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item [$\bullet$] In the case of Proposition \ref{prop:case1b} if the operation is performed in the \(j\)--th branch one must use the expression \[
P_{\sigma}(\underline{t})=P_Q(t_1,\dots,t_j^{p_\sigma},t_{j+1},\dots,t_s)\cdot Q\big(w_\sigma,p_\sigma,t_j,\prod_{k\neq j}t^{w_{q+i,I_k}}\big).
\]
\item[$\bullet$] In the case of Proposition \ref{prop:case2bi} there is an index \(j\) at which the partition \(I_j=\bigcup_{k=1}^{s_j}I_{j,k}\) corresponding to the proper star point is produced. Then the indexing in Proposition \ref{prop:case2bi} is
\begin{align*}
& P_\sigma(t_1,\dots,t_{j-1},t_{j},\dots,t_{j+s_j},\dots,t_{s_j+s-1})= P_Q(t_1,\dots,t_{j-1},t^{p_{q+1,I_{j,1}}}_j\cdots t^{p_{q+1,I_{j,s_j}}}_{s_j+j},t_{j+s_j+1},\dots,t_{s_j+s-1})\\
& \hspace{20pt} \cdot Q\Big(w_{q+1,I_{j,1}},p_{q+1,I_{j,1}},t_j,(\prod_{k=2}^{s_j}t_{k+j}^{w_{q+1,I_{j,k}}})(\prod_{k\notin\{j,\dots,s_j\}}t_{k}^{w_{q+1,I_{k}}})\Big)\\
&\hspace{20pt} \cdot \Big(\prod_{l=2}^{s_j}B(w_{q+1,I_{j,l}},p_{q+1,I_{j,l}},t_{j+l},(\prod_{k=l+1}^{s_j}t_{j+k}^{w_{q+1,I_{j,k}}})(\prod_{k\notin\{j,\dots,s_j+j\}}t_{k}^{w_{q+1,I_{k-s_j+1}}}),\prod_{k=j}^{l-1}t^{p_{q+1,I_{j,k}}}_k)\Big).
\end{align*}
\end{enumerate}
The indexing of Proposition \ref{prop:case2a} and Proposition \ref{prop:case2bii} is adjusted analogously to these cases.
\end{rem}
Altogether, we have proven the following result.
\begin{theorem}
Let \(C\) be a plane curve singularity, and consider the set \(\mathcal{S}\) of star points of the dual graph \(G(C).\) Assume that the branches are ordered by the good order in their topological Puiseux series, and the points in \(\mathcal{S}\) are ordered by the total order \(<\) defined in Subsection \ref{subsec:totalorderstar}. Then, the Poincaré series \(P_C(\underline{t})\) can be computed recursively, via the previous process, from the Poincaré series \(P_\alpha(\underline{t}')\) of the approximations associated to the star points defined in Subsection \ref{subsubsec:truncationstar}.
\end{theorem}
\begin{ex}
Let us continue with Example \ref{ex:examplerefinedgoodorder}. We assume that the branches of \(C\) are good ordered, i.e. \(Y_1=2x^2+x^5,\) \(Y_2=2x^2+x^{14/3}+1/2x^5,\) \(Y_3=5x^2+x^3,\) \(Y_4=5x^2+x^{5/2}+3x^3\) and \(Y_5=x^2.\) Following our procedure, the first star point in the dual graph is \(\sigma_0\) (see Figure \ref{fig:ex_2}) and the Poincaré series of \(C_{\sigma_0}\) is, by Proposition \ref{prop:poincarelemm1},
\[
P_{\sigma_0}(t_1,t_2,t_3)=Q(2,1,t_1,t_2^2,t_3^2)\cdot B(1,2,t_2,t_1,t_3^2)\cdot Q(1,2,t_3,t_1t_2)=\frac{(t_1^2t_2^2t_3^2-1)^2}{t_1t_2t_3-1}.
\]
The next star point in the dual graph is \(\sigma_0^{I_1},\) we set \(\sigma=\sigma_0^{I_1}\) and \(Q:=\sigma_0.\) We are in the case \((2)(b)(i)\) as the package \(I_1\) has two branches and one of them has one Puiseux pair. Then, application of Proposition \ref{prop:case2bi} yields
\begin{align*}
P_\sigma(t_1,t_2,t_3,t_4)&=P_Q(t_1t_2^3,t_3,t_4)\cdot Q(14,1,t_1,t_2^{14}t_3^3t_4^2)\cdot B(14,3,t_2,t_3^2t_4^2,t_1)\\
&=\frac{(t_1^2t_2^6t_3^2t_4^2-1)^2(t_1^{14}t_2^{42}t_3^6t_4^6-1)}{t_1t_2^3t_3t_4-1}.
\end{align*}
Finally, the last star point in the dual graph is \(\sigma_0^{I_2}\) and
\begin{align*}
P_C(\underline{t})&=P_\sigma(t_1,t_2,t_3,t_4,t_5)=P_Q(t_1,t_2,t_3t_4^2,t_5)\cdot Q(5,1,t_3,t_4^{5}t_1^2t_2^6t_5^2)\cdot B(5,2,t_4,t_1^2t_2^6t_5^4,t_3)\\
&=\frac{(t_1^2t_2^6t_3^2t_4^4t_5^2-1)^2(t_1^{14}t_2^{42}t_3^6t_4^{12}t_5^6-1)(t_1^4t_2^{12}t_3^5t_4^{10}t_5^4-1)}{t_1t_2^3t_3t_4^2t_5-1},
\end{align*}
which completes the iterative computation of the Poincaré series.
\end{ex}
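As a consistency check of the example, one may verify computationally that the displayed rational functions fit together: each step evaluates the previously obtained series with the variables of the split package substituted, and multiplies by one new factor of the form \((t^{\underline{v}}-1)\). The following sympy sketch (variable names ours; the net contribution of the \(Q\) and \(B\) factors is read off from the displayed results, not recomputed from their definitions) confirms the two substitution steps.

```python
# Sanity check of the two substitution steps in the example above:
# substituting the variables of the split package into the previous
# Poincare series and multiplying by one factor (t^v - 1) must
# reproduce the next displayed series.
import sympy as sp

t1, t2, t3, t4, t5 = sp.symbols('t1:6')

# P_{sigma_0}(t1, t2, t3), the series computed in the first step
P0 = lambda u1, u2, u3: (u1**2*u2**2*u3**2 - 1)**2 / (u1*u2*u3 - 1)

# Second step: P_sigma = P_{sigma_0}(t1*t2^3, t3, t4) times one new factor
step2 = P0(t1*t2**3, t3, t4) * (t1**14*t2**42*t3**6*t4**6 - 1)
claimed_step2 = ((t1**2*t2**6*t3**2*t4**2 - 1)**2
                 * (t1**14*t2**42*t3**6*t4**6 - 1)
                 / (t1*t2**3*t3*t4 - 1))
assert sp.cancel(step2 - claimed_step2) == 0

# Third step: P_C = P_sigma(t1, t2, t3*t4^2, t5) times one new factor
Psig = lambda u1, u2, u3, u4: ((u1**2*u2**6*u3**2*u4**2 - 1)**2
                               * (u1**14*u2**42*u3**6*u4**6 - 1)
                               / (u1*u2**3*u3*u4 - 1))
PC = Psig(t1, t2, t3*t4**2, t5) * (t1**4*t2**12*t3**5*t4**10*t5**4 - 1)
claimed_PC = ((t1**2*t2**6*t3**2*t4**4*t5**2 - 1)**2
              * (t1**14*t2**42*t3**6*t4**12*t5**6 - 1)
              * (t1**4*t2**12*t3**5*t4**10*t5**4 - 1)
              / (t1*t2**3*t3*t4**2*t5 - 1))
assert sp.cancel(PC - claimed_PC) == 0
```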
\section{Invariants associated to a resolution of plane curve singularities}\label{sec:invariantsofresolution}
Let $f \in \mathbb{C}\{x,y\}$ be an irreducible power series defining a plane branch \(C.\) The origin of the Puiseux series goes back to Newton: he looked for solutions of a polynomial equation $f(x,y)=0$, solving for $y$ as a function of $x$ and approximating successively; the $j$-th step of his algorithm reads as
\[
y=x^{q_1/p_1}(a_1+x^{q_2/p_1p_2}(a_2 + \cdots +x^{q_{j-1}/p_1\cdots p_{j-1}}(a_{j-1}+a_jx^{q_{j}/p_1\cdots p_j})\cdots )),
\]
where $a_j\in \mathbb{C}$ and $\mathrm{gcd}(p_j,q_j)=1$ for every $j$. The couples $(p_j,q_j)$ are called the Newton pairs of $C$. Puiseux extended the Newton method to reducible curves and condensed Newton's writing into a formal power series with fractional exponents: the approximations turned out to be partial sums of a power series
\[
y=b_1 x^{m_1/p_1} +b_2 x^{m_2/p_1p_2} +\cdots + b_j x^{m_j/p_1\cdots p_j} +\cdots
\]
The couples $(p_i,m_i)$ are called the Puiseux pairs of $C_i$ for every branch $C_i$ of $C$. They are related to the Newton pairs by the recursion $q_1=m_1$, $q_i=m_{i}-m_{i-1}p_i$. It is well known that there are only finitely many topologically meaningful terms in the Puiseux development; therefore we will assume without loss of generality that a branch has a Puiseux expansion of the form
$$
y=b_1 x^{m_1/p_1} +b_2 x^{m_2/p_1p_2} +\cdots + b_k x^{m_k/p_1\cdots p_k}.
$$
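The recursion relating the two systems of pairs can be sketched as follows (a minimal illustration; the function names are ours, not from the paper):

```python
# Recursion between Newton pairs (p_i, q_i) and Puiseux pairs (p_i, m_i):
#   q_1 = m_1   and   q_i = m_i - m_{i-1} * p_i  for i > 1.
def puiseux_to_newton(puiseux):
    """puiseux: list of pairs (p_i, m_i); returns the Newton pairs (p_i, q_i)."""
    newton, prev_m = [], 0
    for i, (p, m) in enumerate(puiseux):
        q = m if i == 0 else m - prev_m * p
        newton.append((p, q))
        prev_m = m
    return newton

def newton_to_puiseux(newton):
    """Inverse recursion: m_1 = q_1 and m_i = q_i + m_{i-1} * p_i."""
    puiseux, prev_m = [], 0
    for i, (p, q) in enumerate(newton):
        m = q if i == 0 else q + prev_m * p
        puiseux.append((p, m))
        prev_m = m
    return puiseux

# Example: the branch y = x^{3/2} + x^{7/4} has Puiseux pairs (2,3), (2,7)
pairs = [(2, 3), (2, 7)]
assert puiseux_to_newton(pairs) == [(2, 3), (2, 1)]
assert newton_to_puiseux(puiseux_to_newton(pairs)) == pairs
```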
\medskip
If we now consider a non-irreducible reduced power series \(f\in\mathbb{C}\{x,y\}\) defining a plane curve \(C\) with branches \(C_1,\dots,C_r\), then we will assume that all the Puiseux developments are of the form
\[
y=s_i(x)=b_{1,i} x^{m_{1,i}/p_{1,i}} +b_{2,i} x^{m_{2,i}/p_{1,i}p_{2,i}} +\cdots + b_{k_i,i} x^{m_{k_i,i}/p_{1,i}\cdots p_{k_i,i}},
\]
for a sufficiently large \(k_i\). Obviously, this means that we may have some branches satisfying \(m_{1,i}/p_{1,i}<1\). Moreover, throughout this paper we will assume the following ordering on the (set of) branches of a plane curve.
\medskip
Suppose that all the Puiseux developments coincide up to the term \(\alpha,\) so there exists at least one branch whose Puiseux development is different at the (\(\alpha+1\))-th term. Then we have without loss of generality that
\[
\frac{m_{\alpha+1,1}}{p_{\alpha+1,1}}\geq \frac{m_{\alpha+1,2}}{p_{\alpha+1,2}}\geq\cdots \geq\frac{m_{\alpha+1,r}}{p_{\alpha+1,r}}.
\]
Recursively, if \(\{j_1<j_2<\dots<j_s\}=:\mathtt{J}\subsetneq \mathtt{I}\) with \(|\mathtt{J}|\geq 2\) and \(C_\mathtt{J}=\bigcup_{i\in\mathtt{J}}C_i\) is a subset of branches whose Puiseux developments coincide up to a term \(\alpha'>\alpha\) and at least one of them is different at \(\alpha'+1\), then
\[
\frac{m_{\alpha'+1,j_1}}{p_{\alpha'+1,j_1}}\geq \frac{m_{\alpha'+1,j_2}}{p_{\alpha'+1,j_2}}\geq\cdots\geq \frac{m_{\alpha'+1,j_s}}{p_{\alpha'+1,j_s}}.
\]
\begin{ex}\label{example:goodorder}
Observe that this ordering in the set of branches depends on the choice of the Puiseux series and may vary within the topological class. Consider the curve \(C=\cup_{i=1}^{5} C_i\), where the Puiseux series of the branches \(C_i\) are
\(y_1=x^4,\) \(y_2=x^{5/2},\) \(y_3=2x^2+x^{14/3},\) \(y_4=2x^2\) and \(y_5=x^2\) and \(C'=\cup_{i=1}^{5} C'_i\) with Puiseux series
\(y'_1=x^4,\) \(y'_2=x^{5/2},\) \(y'_3=2x^2+x^{14/3},\) \(y'_4=2x^2+x^5\) and \(y'_5=x^2\). It is easy to see that \(C\) and \(C'\) are topologically equivalent, but the branches of \(C\) are good ordered and the branches of \(C'\) are not.
\end{ex}
Since we are interested in the study of the topological class, we will introduce in Subsection \ref{subsec:totalorderstar} a refinement of the good order on the set of branches which is canonical in the equisingularity class of the curve and does not depend on the choice of the Puiseux series of the branches.
\subsection{Embedded resolution of plane curves}\label{subsec:dualgraph}
Let $C=\bigcup_{i=1}^{r} C_i \subseteq (\mathbb{C},0)$ be a (complex) plane curve singularity given by the equation $f=0$, where $f=f_1\cdots f_r$ and $f_i$ is the equation of the branch $C_i$ for every $i=1,\ldots , r$. Consider a simple sequence of blowing-ups of points whose first center is $0=:p_1$:
\begin{equation}\label{seq}
\pi : \;\;\; \cdots \longrightarrow X_n \longrightarrow X_{n-1} \longrightarrow \cdots \longrightarrow X_1 \longrightarrow X_0=(\mathbb{C}^2,0);
\end{equation}
here \emph{simple} means that we only blow up points $p_i$, $i >1$, belonging to the exceptional divisor created last. The \emph{cluster of centers} of $\pi$ will be denoted by $\mathcal{C}_{\pi}=\{p_1=0, p_2, \ldots \}.$ Recall that the pull-back \(\overline{C}=\pi^{\ast}(C)=(f\circ \pi)^{-1}(0)\) of \(C\) is called the total transform of \(C\) by \(\pi.\) The strict transform of \(C\) is defined to be \(\widetilde{C}:=\overline{\pi^\ast(C\setminus \{0\})}\), where the bar denotes the Zariski closure. Each center \(p\in\mathcal{C}_\pi\) is assigned an intersection multiplicity with the strict transform of \(C\) at \(p\). Recall also that the strict transform resp. the total transform of \(C\) at \(p\) is \(\widetilde{C}_p:=\overline{\pi_p^\ast(C\setminus \{0\})}\) resp. \(\overline{C}_p=\pi_p^{\ast}(C)\), where \(\pi_p\) denotes the blowing-up of \(p\). If we write \(E_0\) for the first neighborhood of \(0\), then the \(i\)-th neighborhood of \(0\) (for any $i>1$) is defined as the set of points on the first neighborhood of any point on the \((i-1)\)-th neighborhood of \(0\). The points in any neighborhood of \(0\) are called points infinitely near to \(0\), and we denote the set of them by \(\mathcal{N}_0\). Also, \(\mathcal{N}_0\) is endowed with a natural ordering: we write \(p<q\) whenever \(q\) is infinitely near to \(p\).
\medskip
For $i > j$ we say that a point $p_i$ is proximate to $p_j$, written $p_i \rightarrow p_j$, if $p_i$ belongs to the strict transform of the exceptional divisor $E_j$ obtained by blowing-up $p_j$. For every $i\geq 1$ we denote by $E_i$ the exceptional divisor on $X_i$ obtained by blowing-up $p_i$, and we say that $p_i$ and $E_i$ are \emph{satellite} if $p_i$ is proximate to two points of $\mathcal{C}_{\pi}$; otherwise we say that they are \emph{free}.
\medskip
Every point in \(\mathcal{N}_0\) is associated to two integer values corresponding to the intersection multiplicities of both the strict and the total transform at that point. The multiplicity of \(C\) at a point \(p\in\mathcal{C}_\pi\) is defined to be \(e_p:=e_p(\widetilde{C}_p)\). On the other hand, the value of \(C\) at \(p\) is defined as \(v_p(C):=e_p(\overline{C}_p)\). Observe that a curve goes through \(p\) if and only if \(e_p\geq 1\). We will denote by \(\mathcal{N}_0(C)\) the set of infinitely near points to \(0\) lying on \(C\). Obviously, the set \(\mathcal{N}_0(C)\) is an infinite set with only finitely many satellite points and finitely many points of multiplicity strictly bigger than 1 (see \cite[Chapter~3]{casas} for a more detailed treatment).
\medskip
Moreover, thanks to the proximity relations \cite[Theorem~3.5.3]{casas} one can compute the values (and multiplicities) recursively with the aid of Noether's formula (cf.~\cite[Theorem~3.3.1]{casas}).
\begin{proposition}\label{noehterformula}
(Noether's Formula) Let $C_1,C_2$ be germs of curve in $0$. The intersection multiplicity $[C_1,C_2]_0$ is finite if and only if $C_1$ and $C_2$ share finitely many points infinitely near to $0$, and in such a case
$$[C_1, C_2]_0=\sum_{p \in \mathcal{N}_0(C_1)\cap\mathcal{N}_{0}(C_2)}e_p(C_1)e_p(C_2).$$
\end{proposition}
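As a hedged numerical illustration of Noether's formula (an aside of ours, with the proximity data computed by hand, not taken from the paper), consider the cusp \(C_1\colon y^2=x^3\) and the line \(C_2\colon y=0\). On one side, \([C_1,C_2]_0\) is the \(t\)-order of the equation of \(C_2\) along the parametrization \(x=t^2\), \(y=t^3\) of \(C_1\); on the other side, \(C_1\) and \(C_2\) share exactly two infinitely near points: the origin, with multiplicities \((2,1)\), and one point in its first neighborhood, with multiplicities \((1,1)\).

```python
# Numerical check of Noether's formula for C1: y^2 = x^3 and C2: y = 0.
import sympy as sp

t, x, y = sp.symbols('t x y')
f2 = y                                        # equation of C2
restricted = sp.expand(f2.subs({x: t**2, y: t**3}))  # f2 along C1
lhs = min(m[0] for m in sp.Poly(restricted, t).monoms())  # the t-order

# (e_p(C1), e_p(C2)) at the two shared infinitely near points, by hand
shared_multiplicities = [(2, 1), (1, 1)]
rhs = sum(a * b for a, b in shared_multiplicities)

assert lhs == rhs == 3
```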
\subsubsection{The dual graph of the minimal embedded resolution of $C$}\label{subsub:dualgraph}
It is customary to provide the minimal embedded resolution in form of a weighted graph called the dual (resolution) graph \(G(C)\) of the embedded resolution. The vertices (or points) $P$ of $G(C)$ represent the components of the total transform $\overline{C}$ of the curve $C$ i.e. both the irreducible components $E_P$ of the exceptional divisor $E = \pi^{-1}(0)$ of $\pi$ and the strict transforms $\widetilde{C}_i$ of the branches $C_i$ of $C$; in the latter case they are depicted by arrows. Abusing notation, we will write the vertex corresponding to the branch $C_i$ as $C_i$. Two vertices (or a vertex and an arrow) of $G(C)$ are connected by an edge if the corresponding components intersect. The resulting graph is an oriented tree, with starting point (i.e. the starting divisor of the resolution) denoted by $\mathbf{1}$. For $i\in \mathtt{I}$, the geodesic in $G(C)$ joining $\mathbf{1}$ with the arrow corresponding to $f_i$ will be denoted by $\Gamma_i$. The set of vertices of the graph $G(C)$ can be endowed with a partial ordering: we set $P^{\prime} < P$ if and only if the geodesic in $G(C)$ from the vertex $\mathbf{1}$ to the vertex $P$ passes through the vertex $P^{\prime}$. Figure \ref{figA} shows what the dual graph of a reducible curve looks like.
\medskip
The dual graph is labeled as follows. For $P \in G(C)$ with $P \neq \mathbf{1}$, let $\nu (P)$ be the number of vertices (or arrows) in $G(C)$ connected with $P$. The valence $\nu (\mathbf{1})$ may be different from $1$ in a special situation explained in a paragraph below. The points $P\in G(C)$ with $\nu (P)=2$ are called ordinary, those with $\nu(P)=1$ are the end points of the graph, and those satisfying $\nu(P)\geq 3$ are said to be star points; star points will be denoted by $\alpha$. The set of end points of $G(C)$ will be denoted by $\mathcal{E}$ (we drop the dependence on $G(C)$ from the notation for the sake of simplicity).
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(120.00,110.00)(0,-30)
\thinlines
\put(-20,30){\line(1,0){40}}
\put(38,30){\line(1,0){22}}
\put(60,30){\circle*{2}}
\put(60,30){\line(1,1){40}}
\put(85,55){\line(1,-1){10}}
\put(85,55){\circle*{2}}
\put(100,70){\circle*{2}}
\put(100,70){\line(2,1){20}}
\put(120,80){\vector(0,1){10}}
\put(110,75){\line(1,-2){4}}
\put(100,70){\line(2,-1){20}}
\put(120,60){\vector(1,1){10}}
\put(110,65){\line(-1,-2){4}}
\put(60,30){\line(1,-1){40}}
\put(100,-10){\circle*{2}}
\put(85,5){\line(-1,-1){10}}
\put(85,5){\circle*{2}}
\put(86,6){{}}
\put(75,-5){\circle*{2}}
\put(73,-13){{}}
\put(100,-10){\line(2,1){20}}
\put(120,0){\vector(0,1){10}}
\put(110,-5){\line(1,-2){4}}
\put(100,-10){\line(2,-1){20}}
\put(120,-20){\vector(1,0){10}}
\put(134,-21){{\scriptsize$\tilde{C}_r$}}
\put(110,-15){\line(-1,-2){4}}
\put(102,-17){{}}
\put(110,-15){\circle*{2}}
\put(106,-23){\circle*{2}}
\put(104,-31){{}}
\put(60,30){\line(1,0){40}}
\put(85,30){\line(0,-1){13}}
\put(85,30){\circle*{2}}
\put(100,30){\circle*{2}}
\put(100,30){\line(2,1){20}}
\put(120,40){\vector(0,1){10}}
\put(110,35){\line(1,-2){4}}
\put(100,30){\line(2,-1){20}}
\put(120,20){\vector(1,1){7.5}}
\put(110,25){\line(-1,-2){4}}
\put(-20,30){\circle*{2}}
\put(-5,30){\line(0,-1){15}}
\put(-5,30){\circle*{2}}
\put(-5,15){\circle*{2}}
\put(15,30){\line(0,-1){20}}
\put(15,30){\circle*{2}}
\put(15,10){\circle*{2}}
\put(45,30){\line(0,-1){20}}
\put(45,30){\circle*{2}}
\put(45,10){\circle*{2}}
\put(60,30){\line(0,-1){15}}
\put(60,15){\circle*{2}}
\put(-27,28){{\scriptsize ${\bf 1}$}}
\put(-3.5,10){{\scriptsize$\rho_1$}}
\put(16.5,7){{\scriptsize$\rho_2$}}
\put(58,10){{\scriptsize$\rho_h$}}
\put(-6,33){{\scriptsize$\alpha_1$}}
\put(14,33){{\scriptsize$\alpha_2$}}
\put(25,30){$\ldots$}
\put(53,33){\scriptsize{$\sigma_0$}}
\end{picture}
$$
\caption{The dual graph $G(C)$ of a curve $C$.}
\label{figA}
\end{figure}
An \emph{arc} in the graph $G(C)$ is a sequence of consecutively connected vertices such that all its vertices except the extremes are ordinary, i.e. a geodesic joining two nonordinary points. A \emph{dead arc} (or \emph{tail}) of $G(C)$ is an arc with one end point. For every end point $L$ of $G(C)$ there is a nearest star point $\alpha_{L}$ such that $\alpha_{L} < L$; hence the tail of $G(C)$ corresponding to the end vertex $L$ consists of all vertices $L^{\prime}$ with $\alpha_{L}< L^{\prime} \le L$. The set of dead arcs is denoted by $\mathcal{D}$. The extreme of a dead arc $L$ will be denoted by $P(L)$, and called a dead end.
\medskip
A star point $P$ is said to be proper if it does not belong to any dead arc, or if it does but $\nu(P) \geq 4$. The set of proper star points will be denoted by $\mathcal{R}$, and its elements will be denoted by $\sigma$ (the letter $\alpha$ is kept for star points in general).
\medskip
A vertex $R$ is said to be a \emph{separation point} of the graph $G(C)$ if there exist two branches $C_i$ and $C_j$ of the curve $C$ such that $R < C_i$, $R < C_j$, and $R$ is the maximal vertex with these properties; we will also say that $R$ is the separation point between the branches $C_i$ and $C_j$. The first (i.e. minimal) separation point of $G(C)$ will be written $\sigma_0$; in other words, $\sigma_0$ is the last point in $\bigcap_{i=1}^{r} \Gamma_i$. Observe also that the separation points are proper star points. By convention, in the case $\mathbf{1}=\sigma_0$, $\nu (\mathbf{1})$ is the number of edges (or arrows) incident to $\mathbf{1}$ plus one, so that $\nu (\mathbf{1})\geq 2$.
\medskip
We can describe the set $\mathcal{R}$ of proper star points in an alternative way: consider the ``smooth part'' $E^{\circ}_P$ of the component $E_P$ i.e. $E_P$ minus intersection points with other components of the total transform of the curve $C$. The cardinality of the set of connected components of the complement $(f \circ \pi)^{-1}(0) \setminus E^{\circ}_P$ is denoted by $s(P)$. Observe that
\begin{align*}
s(P)\geq 1& \Longleftrightarrow \ P \in \{\sigma_0\} \cup \Big ( \bigcup_{i=1}^r \Gamma_i \setminus \bigcap_{i=1}^{r} \Gamma_i \Big) \\
s(P)=0 & \Longleftrightarrow \ P\in \bigcup_{L\in \mathcal{D}} \big (L\setminus \{\alpha_L\}\big ) \cup \big (\bigcap_{i=1}^r \Gamma_i \setminus \{\sigma_0\}\big).
\end{align*}
\medskip
\medskip
The number $s(P)$ is related to $\nu (P)$ in the following manner: if $P\neq \sigma_0$ with $s(P)\geq 1$, then
\[
s(P)=\left \{
\begin{array}{ll}
\nu(P)-1, & \mbox{if there is no dead arc starting with } P, \\
\nu(P)-2, & \mbox{otherwise}.
\end{array}
\right.
\]
Moreover, $s(\sigma_0)=\nu(\sigma_0)-2$ in the first situation, and $s(\sigma_0)=\nu(\sigma_0)-3$ in the second one.
\medskip
From this discussion it is easily deduced that the set of proper star points is
\[
\mathcal{R}=\{ P \in G(C) : s(P)>1\} \cup \{\sigma_0\}.
\]
\subsection{The semigroup of values of a plane curve}\label{sec:semigroupvalues}
Let \(C\) be a (reduced) germ of complex plane curve singularity with equation \(f=\prod_{i=1}^{r}f_i\in\mathbb{C}\{x,y\}\), such that \(f_i\) is irreducible for any \(i\in \mathtt{I}\) and \((f_i)\neq(f_j)\) if \(i\neq j\). The curve \(C\) has \(r\) irreducible components, denoted by \(C_{i}:=\{f_i=0\}\), which we call the branches of \(C\).
\medskip
For every branch \(C_i\) there is a discrete valuation \(v_i\) associated to the local ring \(\mathcal{O}_i:=\mathbb{C}\{x,y\}/(f_i)\) of the branch. This valuation can be defined as \(v_i(h):=[f_i,h]_0,\) the intersection multiplicity at the origin. Therefore, we have a multivaluation, say \(\underline{v},\) in the local ring of the plane curve \(\mathcal{O}:=\mathbb{C}\{x,y\}/(f)\) defined as \(\underline{v}(h)=(v_1(h),\dots,v_r(h))\) for \(h\in\mathbb{C}\{x,y\}.\) The semigroup of values of \(C\) (or \(f\)) is the additive submonoid of \(\mathbb{N}^r\) defined by
\[
\Gamma(C):=\{\underline{v}(h)=(v_1(h),\ldots,v_r(h))\in\mathbb{N}^r\; :\;h\in\mathcal{O},\,h\neq 0\};
\]
we write \(\Gamma=\Gamma(C)\) if no risk of confusion arises. The semigroup \(\Gamma\) has a conductor \(\varsigma=\varsigma(\Gamma),\) which is defined to be the minimal element of \(\Gamma\) such that $\gamma\in\Gamma$ whenever \(\gamma\geq \varsigma\).
\medskip
\subsubsection{The irreducible case.} In the case of \(r=1\) we have a single branch, and the semigroup of values is a numerical semigroup whose minimal generating set is finite and can be computed from the characteristic exponents of \(f\), \cite{ZariskiModuli}. First of all, let us recall the definition of the Puiseux characteristics of the branch. Write \(s(x)=\sum_j a_jx^{j/n}\) for a Puiseux series of the branch, where \(n\) is its multiplicity, and set \(\beta_0:=n\). The \emph{Puiseux characteristics} are a finite sequence of natural numbers defined as follows: let $(n)\subseteq \mathbb{Z}$ be the set of multiples of $n$, set $\beta_1:=\min \{j : a_j \neq 0, j \notin (n)\}$, and recursively
\begin{align*}
e_{i-1} &=\mathrm{gcd}\{ n, \beta_1, \ldots , \beta_{i-1}\}>1\\
\beta_i &= \min \{j : a_j \neq 0, j \notin (e_{i-1})\} \ \ \mbox{for}\;i=1,\dots, g\;\mbox{and}\;e_g=1.
\end{align*}
The finite sequence given by the $e_i$ is called the $e$-sequence. According to these numbers, the Puiseux series of the branch can be decomposed as
\[
s(x)=\sum_{\tiny\begin{array}{c}
j\in (\beta_0)\\1\leq j<\beta_1
\end{array}}a_jx^{j/\beta_0}+\cdots+\sum_{\tiny\begin{array}{c}
j\in (e_{i-1})\\\beta_{i-1}\leq j<\beta_i
\end{array}}a_jx^{j/\beta_0}+\cdots+\sum_{\tiny\begin{array}{c}
j\in \mathbb{Z}\\\beta_{g}\leq j
\end{array}}a_jx^{j/\beta_0}.
\]
Assume that \(C\) has characteristic exponents \(\{\beta_0,\dots,\beta_g\}\) and write \(n_i=e_{i-1}/e_{i}\) for \(i=1,\dots,g.\) Let us define
\begin{equation}\label{eq:definbetabarra}
\overline{\beta}_0=\beta_0,\;\overline{\beta}_1=\beta_1\quad\text{and}\quad \overline{\beta}_{i+1}=n_i\overline{\beta}_i+\beta_{i+1}-\beta_i\quad \mbox{for}\ 1\leq i\leq g-1.
\end{equation}
\begin{rem}
This recursion provides a relation between the elements \(\overline{\beta}_i\) and the Puiseux pairs.
\end{rem}
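The recursion above is easily implemented; the following minimal sketch (function name ours) computes the maximal contact values from the characteristic exponents, using \(e_i=\gcd(\beta_0,\dots,\beta_i)\) and \(n_i=e_{i-1}/e_i\).

```python
# Compute bbar_0, ..., bbar_g from the characteristic exponents via
#   bbar_0 = beta_0, bbar_1 = beta_1,
#   bbar_{i+1} = n_i * bbar_i + beta_{i+1} - beta_i,  n_i = e_{i-1} / e_i.
from math import gcd
from functools import reduce

def maximal_contact_values(beta):
    """beta = (beta_0, ..., beta_g); returns [bbar_0, ..., bbar_g]."""
    g = len(beta) - 1
    e = [reduce(gcd, beta[:i + 1]) for i in range(g + 1)]  # e_0 = beta_0, e_g = 1
    bbar = [beta[0], beta[1]]
    for i in range(1, g):
        n_i = e[i - 1] // e[i]
        bbar.append(n_i * bbar[i] + beta[i + 1] - beta[i])
    return bbar

# Example: characteristic exponents (4, 6, 7) yield the semigroup <4, 6, 13>
assert maximal_contact_values((4, 6, 7)) == [4, 6, 13]
```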
The semigroup \(\Gamma(C)\) is minimally generated by the elements \(\overline{\beta}_0,\dots,\overline{\beta}_g,\) i.e.
\[
\Gamma(C)=\langle\overline{\beta}_0,\dots,\overline{\beta}_g\rangle=\big \{\gamma\in\mathbb{N}\;: \;\gamma=a_0\overline{\beta}_0+\cdots+a_g\overline{\beta}_g\ \ \mbox{with} \ \ a_i\in\mathbb{N},\ \ \mbox{for}\; \ i=0,\dots,g\big \}.
\]
It is customary to call the elements \(\overline{\beta}_0,\dots,\overline{\beta}_g\) the maximal contact elements (or values); this terminology comes from the fact that they coincide with the intersection multiplicities of \(C\) with certain truncations of its Puiseux series, as we will see in Subsection \ref{subsec:maximalcontact}.
\medskip
The main combinatorial properties of the semigroup of values of a plane branch are the following (see e.g. \cite{ZariskiModuli}):
\begin{enumerate}
\item \label{prop1:irreducsemigroup} \(n_i\overline{\beta}_i<\overline{\beta}_{i+1}\) for \(i=1,\dots,g-1.\)
\item\label{prop2:irreducsemigroup} \(n_i\overline{\beta}_i\in\langle\overline{\beta}_0,\dots,\overline{\beta}_{i-1} \rangle\) for \(i=1,\dots,g.\)
\item\label{prop3:irreducsemigroup} If \(\gamma\in\Gamma(C),\) then \(\gamma\) can be written in a unique way as \(\gamma=\sum_{i=0}^{g}a_i\overline{\beta}_i\) with \(a_0\geq 0\) and \(0\leq a_i\leq n_i-1\) for \(i=1,\dots,g\).
\end{enumerate}
Finally, the value semigroup \(\Gamma\) is symmetric, i.e. \(\gamma\in\Gamma\) if and only if \(\varsigma-1-\gamma\notin \Gamma.\) The symmetry property is the combinatorial counterpart of the Gorenstein property of the local ring of the branch \cite{Kunz70}.
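These properties can be checked computationally on a small example; the following sketch (helper names ours, not from the paper) generates \(\Gamma=\langle 4,6,13\rangle\), finds its conductor \(\varsigma=16\), and verifies the symmetry \(\gamma\in\Gamma \Leftrightarrow \varsigma-1-\gamma\notin\Gamma\).

```python
# Generate a numerical semigroup up to a bound, locate its conductor,
# and test the symmetry property for Gamma = <4, 6, 13>.
def semigroup(gens, bound):
    """Elements of the numerical semigroup generated by gens, up to bound."""
    S = {0}
    for g in gens:
        S = {s + k * g for s in S for k in range(bound // g + 1)}
        S = {s for s in S if s <= bound}
    return S

def conductor(S, bound):
    """Smallest c such that every integer in [c, bound] lies in S."""
    c = bound
    while c > 0 and c - 1 in S:
        c -= 1
    return c

B = 40
S = semigroup([4, 6, 13], B)
c = conductor(S, B)
assert c == 16
# symmetry: gamma in Gamma  <=>  c - 1 - gamma not in Gamma
assert all((g in S) != (c - 1 - g in S) for g in range(c))
```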
\subsubsection{The case of several branches} If \(r>1,\) the semigroup of values is no longer finitely generated, but it is finitely determined (see \cite{Delgmanuscripta1,CDGlondon}); moreover, \(\Gamma\subsetneq \Gamma(C_1)\times\cdots\times\Gamma(C_r)\). Before continuing let us establish some notation:
\begin{notation}
For an index subset \(\mathtt{J}\subset \mathtt{I}=\{1,\dots,r\}\) we set \(f_\mathtt{J}:=\prod_{j\in \mathtt{J}}f_j\) and \(C_\mathtt{J}=\sum_{j\in \mathtt{J}}C_j\) for the plane curve with equation \(f_\mathtt{J}\).
We denote by \(\mathrm{pr}_{\mathtt{J}}:\mathbb{N}^r\rightarrow\mathbb{N}^{|\mathtt{J}|}\) the projection on the indices of \(\mathtt{J}\), and \(\alpha_\mathtt{J}:=\mathrm{pr}_\mathtt{J}(\alpha)\).
For \(i,j\in \mathtt{I}\) with \(i\neq j\) we denote by \(\Upsilon_{i,j}:=[C_i,C_j]\) the intersection multiplicity between the branches $C_i$ and $C_j$. Finally, for each \(i\in \mathtt{I}\) we write \(\Gamma^{(i)}:=\Gamma(C_i).\)
\end{notation}
We will assume that \(\mathbb{N}^r\) (and likewise \(\mathbb{Z}^r\)) is partially ordered coordinatewise: for $\alpha=(\alpha_1,\dots,\alpha_r), \beta=(\beta_1,\dots,\beta_r)\in \mathbb{Z}^r$,
\[
\alpha\leq \beta\Longleftrightarrow\;\alpha_i\leq\beta_i\;\,\mbox{for all}\,\ i \in \mathtt{I}.
\]
Some elementary properties of the semigroup of values \(\Gamma\) are the following (see \cite{Delgmanuscripta1}):
\begin{enumerate}
\item If \(\alpha,\beta\in\Gamma\), then \[\alpha\wedge\beta:=\min\{\alpha,\beta\}:=(\min\{\alpha_i,\beta_i\})_{i\in \mathtt{I}}\in\Gamma.\]
\item If \(\alpha,\beta\in \Gamma\) and \(j\in \mathtt{I}\) with \(\alpha_j=\beta_j\), then there exists an \(\epsilon\in \Gamma\) such that \(\epsilon_j>\alpha_j=\beta_j\) and \(\epsilon_i\geq\min\{\alpha_i,\beta_i\}\) for all \(i\in \mathtt{I}\setminus\{j\}\), with equality if \(\alpha_i\neq\beta_i\).
\item The semigroup $\Gamma$ has a conductor, i.e. there exists a minimal element $\varsigma\in \mathbb{N}^r$ such that $\varsigma+\mathbb{N}^r \subseteq \Gamma$.
\end{enumerate}
Now, for a given \(\alpha\in\mathbb{N}^r\) and an index subset $\mathtt{J}\subset \mathtt{I}$, we set
\[
\overline{\Delta}_\mathtt{J}(\alpha)=\big\{\beta\in\mathbb{N}^r\;:\;\beta_j=\alpha_j\quad\forall j\in \mathtt{J}\quad \text{and}\quad \beta_k>\alpha_k\quad\forall k\notin \mathtt{J} \big\},
\]
\[
\overline{\Delta}(\alpha)=\cup_{i=1}^{r}\overline{\Delta}_i(\alpha),\quad \Delta_\mathtt{J}(\alpha)=\overline{\Delta}_\mathtt{J}(\alpha)\cap \Gamma\quad\text{and}\quad \Delta(\alpha)=\overline{\Delta}(\alpha)\cap \Gamma.
\]
The sets \(\Delta(\alpha)\) are important in order to define those key elements of \(\Gamma\) which allow us to extend the symmetry property seen in the irreducible case. An element \(\gamma\in \Gamma\) is called a maximal element of \(\Gamma\) if \(\Delta(\gamma)=\emptyset.\) If, moreover, \(\Delta_\mathtt{J}(\gamma)=\emptyset\) for all \(\mathtt{J}\subset \mathtt{I}\) such that \(\emptyset\neq \mathtt{J}\neq \mathtt{I}\), then \(\gamma\) is said to be absolute maximal. On the other hand, if \(\gamma\) is maximal and \(\Delta_\mathtt{J}(\gamma)\neq \emptyset\) for all \(\mathtt{J}\subset \mathtt{I}\) such that \(|\mathtt{J}|\geq 2\), then \(\gamma\) will be called relative maximal. It is easily checked that the set of maximal elements of \(\Gamma\) is finite.
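As a toy illustration of these definitions (an aside of ours, not part of the text), consider the curve \(f=xy\) with two smooth transversal branches: its semigroup of values is \(\{(0,0)\}\) together with all \((a,b)\) with \(a,b\geq 1\). Working inside a finite box, which suffices for the elements tested below, one can compute the sets \(\Delta(\alpha)\) directly.

```python
# Delta(alpha) for the semigroup of f = xy, inside a finite box.
BOX = 5
Gamma = {(0, 0)} | {(a, b) for a in range(1, BOX) for b in range(1, BOX)}

def Delta(alpha, Gamma):
    """Elements of Gamma equal to alpha in the i-th coordinate (for some i)
    and strictly bigger in the other one."""
    out = set()
    for i in (0, 1):
        j = 1 - i
        out |= {b for b in Gamma if b[i] == alpha[i] and b[j] > alpha[j]}
    return out

assert Delta((0, 0), Gamma) == set()     # (0,0) is a maximal element
assert (1, 2) in Delta((1, 1), Gamma)    # (1,1) is not maximal
```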
\medskip
We would like to emphasize that, by definition, the element \(\gamma\in\Gamma\) is absolute maximal if and only if there exist absolute maximal elements \(\alpha\) and \(\beta\) such that \(\gamma=\alpha+\beta\). An absolute maximal element that cannot be decomposed as the sum of two nonzero elements of \(\Gamma\) is said to be an irreducible absolute maximal. As a consequence, any absolute maximal can be decomposed as a sum of irreducible absolute maximal elements.
\medskip
To finish, observe that the semigroup \(\Gamma\) is also a symmetric semigroup \cite{Delgmanuscripta2} in the following sense: \(\gamma\in\Gamma\) if and only if $\Delta(\varsigma-(1,\dots,1)-\gamma)=\emptyset$.
\subsection{The extended semigroup} The values $v_i(g)$ are the orders of the germs $g\circ \varphi_i$, where $\varphi_i$ denotes a parametrization of $C_i$ for every $i\in \mathtt{I}$ and $g$ is an element of the ring $\mathcal{O}_{\mathbb{C}^2,0}$ of germs of holomorphic functions at the origin of $\mathbb{C}^2$; this allows us to write
\[
g\circ \varphi_i(t_i)=a_i(g)t_i^{v_i(g)}+\mathrm{terms~of~higher~degree}.
\]
Campillo, Delgado and Gusein-Zade \cite{CDGextended} considered an extension of the value semigroup $\Gamma(C)$ which they called the extended semigroup $\widehat{\Gamma}(C)=\widehat{\Gamma}$ associated to $C$: this is the subsemigroup of $\mathbb{N}^r \times (\mathbb{C}^{\ast})^r$ consisting of all tuples
\[
(\underline{v}(g),\underline{a}(g)):=\big (v_1(g),\ldots , v_r(g), a_1(g),\ldots , a_r(g)\big)
\]
for every $g\in \mathcal{O}_{\mathbb{C}^2,0}$ with $v_i(g)<\infty$ for all $i\in \mathtt{I}$.
\medskip
They showed that the set $\mathbb{N}^r \times (\mathbb{C}^{\ast})^r$ may be endowed with the structure of a semigroup. Although $\widehat{\Gamma}$ depends on the choice of the parametrizations of the branches $C_i$ of $C$, this is not an issue if one considers isomorphism classes induced by invertible changes of coordinates, as described in \cite[Remark 2]{CDGextended}.
\medskip
The relation between $\Gamma$ and $\widehat{\Gamma}$ is given by the surjective homomorphism $\mathrm{pr}:~\widehat{\Gamma} \to \Gamma$ defined by $\mathrm{pr}\big ((\underline{v}(g),\underline{a}(g))\big )=\underline{v}(g)$. The sets $F_{\underline{v}}:=\mathrm{pr}^{-1}(\underline{v})\subseteq \{\underline{v}\}\times (\mathbb{C}^{\ast})^r$ for every $\underline{v}\in \Gamma(C)$ are called fibres of $\widehat{\Gamma}(C)$. Therefore
\[
\widehat{\Gamma}=\bigcup_{\underline{v} \in \Gamma} F_{\underline{v}}.
\]
The extended semigroup will help in understanding the proof of Theorem \ref{thm:p1p2p3}.
\section{Star points: the topological guides in the dual graph}\label{sec:descriptionofstar}
In this section we will present the foremost algebraic tools needed to set up the iterative construction of the Poincaré series in Section \ref{sec:Poincareseries}. The core of this section is to precisely describe the main topological terms in a distinguished Puiseux series associated to a plane curve $C$: the series of the branches will be expressed in terms of the star points of the dual graph of \(C.\) As we will show, the star points are the key to providing an ordered sequence of approximating curves to $C$; these will play a central role in Subsection \ref{subsec:iterativepoincare}.
\subsection{The maximal contact}\label{subsec:maximalcontact}
We first recall the interpretation of both the minimal generators of the value semigroup of a branch and the irreducible absolute maximal elements ---for curves with several branches--- in terms of intersection multiplicities. In both cases, the values associated to those elements will be called maximal contact values. We follow the exposition of Delgado \cite{Delgmanuscripta1, Delgadoari}.
\medskip
For an irreducible curve $C$, let \(C_q\) be the branch whose Puiseux series coincides with the following truncation of the Puiseux series of \(C\)
\[
\varphi_q(x)=\sum_{\tiny\begin{array}{c}
j\in (\beta_0)\\\ 1\leq j<\beta_1
\end{array}}a_jx^{j/\beta_0}+\cdots+\sum_{\tiny\begin{array}{c}
j\in (e_{q-1})\\\beta_{q-1}\leq j<\beta_q
\end{array}}a_jx^{j/\beta_0}.
\]
The germs \(C_q\) have the property that their intersection multiplicity with the curve \(C\) is exactly the \(q\)--th generator of the semigroup of values \mbox{\(\overline{\beta}_q:=[C_q,C]\)}. More generally, any \(\varphi\in\mathbb{C}\{x,y\}\) satisfying \([f,\varphi]=\overline{\beta}_q\) will be called a maximal contact element of genus \(q-1\) with \(f\).
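As a quick illustration, here is a small worked example of our own (with the notation above), for a branch of genus two:

```latex
% Branch C with Puiseux series \varphi(x) = x^{3/2} + x^{7/4}:
% \beta_0 = 4, \beta_1 = 6, \beta_2 = 7, hence e_1 = \gcd(4,6) = 2.
% Truncations: \varphi_1(x) = 0 (the only admissible term, j = 4, has zero
% coefficient) and \varphi_2(x) = x^{3/2}. Parametrize C by (t^4, t^6+t^7):
\[
\overline{\beta}_1=[C_1,C]=\operatorname{ord}_t\,(t^6+t^7)=6,
\qquad
\overline{\beta}_2=[C_2,C]=\operatorname{ord}_t\big((t^6+t^7)^2-t^{12}\big)=13,
\]
% together with \overline{\beta}_0 = \beta_0 = 4 this gives
% \Gamma(C) = \langle 4, 6, 13 \rangle.
```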
\medskip
Consider now a plane curve \(C\) with \(r\) branches; for each \(i\in\mathtt{I}\) let us denote by \(\{\beta^{i}_0,\dots,\beta^{i}_{g_i}\}\) the Puiseux exponents and \(\{\overline{\beta}^{i}_0,\dots,\overline{\beta}^{i}_{g_i}\}\) the maximal contact values of a branch \(C_i.\) Let us denote by \(\mathcal{B}^{i}_{n}\) the set of curves having maximal contact of genus \(n\) with \(f_i\). We will say that \(\varphi\in\mathbb{C}\{x,y\}\) has maximal contact of genus \(n\) with \(f\) if the following two assertions hold:
\begin{itemize}
\item[$\diamond$] The set \(J_\varphi=\{i\in\mathtt{I}\;:\;\varphi\in\mathcal{B}^{i}_n\}\) is non-empty.
\item[$\diamond$] \(J_\varphi\) is maximal for the inclusion ordering, i.e. there exists no branch \(\phi\) such that \(J_\varphi\subset J_\phi.\)
\end{itemize}
The maximal contact values of a plane curve with \(r\) branches can be explicitly computed from the maximal contact values of each of its branches and their intersection multiplicities. For each \(n\in\mathbb{N}\) we set \(\mathcal{A}_n=\{\mathcal{B}^{i}_{n}\;:\;i\in\mathtt{I}\}\) with the inclusion ordering. Define
\[
\mathcal{M}_n=\{\mathtt{J}\subseteq \mathtt{I}\;:\;\forall i,j\in\mathtt{J}\quad\mathcal{B}^{i}_{n}=\mathcal{B}^{j}_{n}\quad\text{and}\quad \mathcal{B}^{i}_{n}\quad\text{is minimal in}\;\mathcal{A}_n\};
\]
as in \cite[(3.7)]{Delgmanuscripta1}, given \(K\in\mathcal{M}_n\), for $\varphi\in \mathcal{B}^{K}_{n}$ we check that
\begin{equation}\label{eqn:maximalcontactvalues}
v_j(\varphi)=\left\{\begin{array}{cl}
\overline{\beta}^{j}_{n+1} &\text{if}\;j\in K \\[.3cm]
\frac{[f_i,f_j]}{e^{i}_n}& \text{if}\;j\notin K\ \ \text{and}\ \ i\in K.
\end{array}\right.
\end{equation}
Write \(B_0=(\overline{\beta}^{1}_{0},\dots,\overline{\beta}^{r}_{0})\); the set of maximal contact values is
\begin{equation}\label{eq:b2}
\overline{\mathcal{B}}=\big \{\underline{v}(\varphi)\;:\;\varphi\in\mathcal{B}^{\mathtt{J}}_n,\ \mbox{for} \ \mathtt{J}\in\mathcal{M}_n,\ \ n\in\mathbb{N}\big \}\cup \{B_0\}.
\end{equation}
With the above notation, the branches \(f_i\) of \(f\) appear as curves with maximal contact of genus \(g_i\) with \(f\), and obviously \(\underline{v}(f_i)\notin \Gamma\) since \(v_i(f_i)=\infty\). Usually, we will use the maximal contact values that are finite, i.e.
\begin{equation}\label{eq:b}
\mathcal{B}=\left(\{\underline{v}(\varphi)\;:\;\varphi\in\mathcal{B}^{\mathtt{J}}_n,\ \mbox{for} \ \mathtt{J}\in\mathcal{M}_n,\ \ n\in\mathbb{N}\}\cap \Gamma\right)\cup \{B_0\}.
\end{equation}
The following result (\cite[(3.18)]{Delgmanuscripta1}) provides the announced identification of the maximal contact values and the irreducible absolute maximals of the semigroup of values.
\begin{theorem}[Delgado]
\(\gamma\in\Gamma\) is irreducible absolute maximal if and only if \(\gamma\) is a maximal contact value.
\end{theorem}
If the curve has only one branch, an irreducible absolute maximal element is nothing but a minimal generator of the semigroup of values. However, the generation of the semigroup of a curve with several branches from the irreducible absolute maximal elements is more subtle and one needs to exploit all the combinatorial properties of the semigroup (see \cite{Delgmanuscripta1} and \cite{CDGlondon}).
\subsubsection{The contact pair}
According to \cite[(1.1.3),(1.1.4)]{Delgadoari}, the contact pair $(f\mid f')=(q,c)$ of two (irreducible) branches $f$ and $f'$ can be defined as follows. Let $\overline{\beta}'_1,\ldots , \overline{\beta}'_{g'}$ resp. $e'_0,\ldots , e'_{g'}$ be the maximal contact values resp. the $e$-sequence associated to the branch $f'$. Let $t$ be the minimum integer such that
\[
[f,f']\leq \mathrm{min}\big \{e'_t \overline{\beta}_{t+1},e_t \overline{\beta}'_{t+1} \big\}=p(t),
\]
setting $\overline{\beta}_{g+1}=\overline{\beta}'_{g'+1}=\infty$ and \(e_{-1}=e'_{-1}=0\) if necessary. Write $\ell_t$ resp. $\ell'_t$ for the integer part of $(\overline{\beta}_{t+1} - n_t\overline{\beta}_t)/e_t$ resp. of $(\overline{\beta}'_{t+1} - n'_t\overline{\beta}'_t)/e'_t$. We distinguish three complementary cases:
\begin{itemize}
\item[$\diamond$] If $[f,f'] < p(t)$, then there exists an integer $c$ with $0 < c \leq \mathrm{min}(\ell_t,\ell'_t)$ such that
$$
[f,f']= e'_{t-1}\overline{\beta}_t + ce_te'_{t} = e_{t-1}\overline{\beta}'_t + ce_te'_t.
$$
In this case $q = t$ and $(f\mid f') = (t, c)$.
\item[$\diamond$] If $[f,f']=p(t)$ and $e'_t\overline{\beta}_{t+1}\neq e_t\overline{\beta}'_{t+1}$, then $(q,c)=(t,\mathrm{min}\{\ell_t+1,\ell'_t+1\})$. Recall that, if $[f, f'] = e'_t\overline{\beta}_{t+1} < e_t\overline{\beta}'_{t+1}$, then $\ell_t < \ell'_t$ and, conversely, if $\ell_t < \ell'_t$ then $[f,f'] = e'_t\overline{\beta}_{t+1} < e_t\overline{\beta}'_{t+1}$.
\item[$\diamond$] Finally, if $[f,f']=p(t)$ and $e'_t\overline{\beta}_{t+1}= e_t\overline{\beta}'_{t+1}$, then $(q,c)=(t+1,0)$.
\end{itemize}
\medskip
Observe that in the first two cases \(c\neq 0\) and the contact pair can be seen as a measure of the coincidences between the Puiseux developments of the branches \(f,f';\) this means that, if two branches have contact pair \((q,c)\), then their Puiseux developments become different at the term with Puiseux pair \((1,\beta_q/e_q +c)\) if \(c\neq 0\), and at the term with Puiseux pair \((e_q/e_{q+1},\beta_{q+1}/e_{q+1})\) if \(c=0.\)
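A small computation of our own may help to see the cases at work; the following pair of branches falls in the first case.

```latex
% Take f: y = x^{3/2} (i.e. f = y^2 - x^3) and f': y = x^{3/2} + x^2, so that
% \overline{\beta}_0 = \overline{\beta}'_0 = 2, \overline{\beta}_1 = \overline{\beta}'_1 = 3,
% e_1 = e'_1 = 1 and \overline{\beta}_2 = \overline{\beta}'_2 = \infty.
% Parametrizing f' by (t^2, t^3 + t^4):
\[
[f,f']=\operatorname{ord}_t\big((t^3+t^4)^2-t^6\big)=7 .
\]
% Since p(0) = \min\{2\cdot 3,\, 2\cdot 3\} = 6 < 7 and p(1) = \infty, the
% minimal t is t = 1 and [f,f'] < p(1), so we are in the first case:
\[
7=[f,f']=e'_0\,\overline{\beta}_1+c\,e_1e'_1=6+c
\quad\Longrightarrow\quad (f\mid f')=(1,1).
\]
% Consistently, the Puiseux developments first differ at the exponent
% (\beta_1 + c\,e_1)/\beta_0 = (3+1)/2 = 2, i.e. at the term x^2.
```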
\medskip
Similarly, if we have $r$ branches $f_1,\ldots, f_r$, then the contact pair of these is defined to be
$$
(f_1\mid f_2 \mid \cdots \mid f_r)=\mathrm{min}\{(f_i\mid f_j) : i \neq j \ \mbox{with} \ i,j\in \mathtt{I}\},
$$
where the minimum is understood to be with respect to the lexicographic ordering in $\mathbb{N}^2$. For several branches this means that the contact pair detects the first term at which at least two of the Puiseux developments become different.
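For instance (an example of our own), for three branches sharing the same first characteristic term one finds:

```latex
% f_1: y = x^{3/2},  f_2: y = x^{3/2} + x^2,  f_3: y = x^{3/2} + x^2 + x^3.
% Pairwise: (f_1|f_2) = (f_1|f_3) = (1,1) (separation at x^2), while
% [f_2,f_3] = ord_t((t^3+t^6)^2 - t^6) = 9 = 6 + 3 gives (f_2|f_3) = (1,3)
% (separation at x^3). Hence
\[
(f_1\mid f_2\mid f_3)=\min\big\{(1,1),(1,1),(1,3)\big\}=(1,1),
\]
% i.e. the contact pair of the curve records the first exponent, x^2, at
% which at least two of the Puiseux developments separate.
```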
\begin{rem}
The contact pair measures the number of free infinitely near points shared by the branches. Recall that the coefficients of the Puiseux expansion can be seen as the projective coordinates of the free points of the branch (see \cite[Proposition 5.7.1]{casas}).
\end{rem}
\subsubsection{Maximal contact elements in terms of the dual graph}\label{subsec:maximalcontactdualgrapha}
For a point $P\in G(C)$, a curvette at $P$ is defined to be a smooth curve germ $\theta_P$ in the resolution space, transversal to the irreducible component \(E_P\) of the exceptional divisor at a regular point of the exceptional divisor $E$. If $\theta_P$ is given by an element $\varphi\in \mathbb{C}[\![X,Y]\!]$, and $\tilde{\varphi}$ stands for the strict transform of $\varphi$ by $\pi$, we say that $\varphi$ meets a subset $G$ of $G(C)$ if $\tilde{\varphi} \cap E_P$ is a regular point of $E$ for some $P\in G$; furthermore, if $\tilde{\varphi}$ meets $P\in G(C)$ and $\tilde{\varphi}$ is smooth, then we say that $\varphi$ becomes a curvette at $P$.
\medskip
Since the dual graph $G(C_i)$ of a branch $C_i$ is known to have $g_i$ dead arcs, we write
\[
\mathcal{D}(C_i)=\{L_1^i,\ldots , L_{g_i}^i\},
\]
where the dead arcs are supposed to be ordered as $w(P(L_1^i))<w(P(L_2^i))<\cdots < w(P(L_{g_i}^i))$, where $w(P)$ is the number of blowing-ups needed to build the divisor $E_P$, for any $P\in G(C)$. Then the maximal contact values of the branches $f_i$ can be interpreted as intersection multiplicities as
\[
[f_i,\pi (\theta_{P(L_{j_i}^i)})]=\overline{\beta}_{j_i}^i
\]
for any $i\in \mathtt{I}$ and $j_i\in \{1,\dots, g_i\}$. This means that $\varphi\in \mathbb{C}[\![X,Y]\!]$ has maximal contact of genus $j_i-1$ with $f_i$ if and only if $\varphi$ becomes a curvette at the end point $P(L^{i}_{j_i})$ of $L^{i}_{j_i}\in \mathcal{D}(C_i)$. Moreover, $\varphi$ becomes a curvette at the point $\mathbf{1}$ if and only if
\[
[f_i,\varphi]=\overline{\beta}_0^i,
\]
for any $i\in \mathtt{I}$. Following the exposition of Delgado \cite[(1.2.1)]{Delgadoari}, it can be shown that the maximal contact values can be realized by curvettes at the end points of $G(C)$ or at $\mathbf{1}$; indeed they can be read off in the dual graph as those elements in the set
\[
\mathcal{B}'=\{\underline{v}\big ( \pi (\theta_{P(L)}) \big ) \in \Gamma : L \in \mathcal{D} \} \cup \{\underline{v} (\pi(\theta_{P_0}))\}.
\]
Observe that the points of $G(C)$ whose values are elements in $\mathcal{B}'$ are those in $\mathcal{E}\cup \{\sigma_0\}$. Moreover, they are ---by definition--- elements in the value semigroup $\Gamma$, hence $\mathcal{B}'=\mathcal{B}$ for the set $\mathcal{B}$ of eq.~(\ref{eq:b}).
\subsection{Topological Puiseux series}\label{subsec:topologicalpuiseux}
In our context, we are interested in the topological equivalence of plane curves. This equivalence relation can be described in terms of the semigroups of the branches and their intersection multiplicities, as remarked by Zariski (see also \cite[Proposition 4.3.9]{wall}):
\begin{proposition}\cite{zarsaturationII}\label{prop:equisingularity}
Let \(C,C'\) be curve singularities. They are equisingular if and only if the following conditions hold:
\begin{enumerate}
\item There is a bijection \(C_i\leftrightarrow C_i'\) between the branches of \(C\) and \(C'\) such that \(\Gamma(C_i)=\Gamma(C_i')\), and
\item For all \(i,j\in \mathtt{I}\), we have \([C_i,C_j]=[C_i',C_j']\) (where $\mathtt{I}$ indexes the branches).
\end{enumerate}
\end{proposition}
Proposition \ref{prop:equisingularity} yields a model of the Puiseux series to work with in terms of topological equiva\-lence. In the case of (plane) branches, Proposition \ref{prop:equisingularity} implies that two branches are topologically equivalent if and only if their semigroups are equal; in particular, this means that they have the same Puiseux characteristics. Thus, if \(B\) is a branch with Puiseux characteristics \(\{\beta_0,\dots,\beta_g\}\), then it is topologically equivalent to a branch with Puiseux series
\begin{equation}\label{eq:toppuiseuxseriesirr}
s(x)=\sum_{i=1}^{g}a_i x^{\beta_i/\beta_0}.
\end{equation}
The Puiseux series of eq.~ \eqref{eq:toppuiseuxseriesirr} will be called the topological Puiseux series of the branch \(B\); this obviously provides a relation between the Puiseux pairs and the Puiseux characteristics:
\[(p_{1},m_{1})=\bigg(\frac{e_0}{e_1},\frac{\beta_1}{e_1}\bigg),\;(p_{2},m_{2})=\bigg(\frac{e_1}{e_2},\frac{\beta_2}{e_2}\bigg),\dots,(p_{g},m_{g})=(e_{g-1},\beta_g).
\]
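For example (an illustration of our own):

```latex
% A branch with semigroup \Gamma = \langle 4, 6, 13 \rangle has Puiseux
% characteristics \beta_0 = 4, \beta_1 = 6, \beta_2 = 7 (so e_1 = 2), and is
% topologically equivalent to the branch with topological Puiseux series
\[
s(x)=x^{6/4}+x^{7/4},
\]
% whose Puiseux pairs are
\[
(p_1,m_1)=\Big(\frac{e_0}{e_1},\frac{\beta_1}{e_1}\Big)=(2,3),
\qquad
(p_2,m_2)=(e_1,\beta_2)=(2,7).
\]
```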
If the curve is irreducible, the terms in the topological Puiseux series can be described in terms of the star points in the geodesic joining \(\mathbf{1}\) with the unique arrow in the graph, so that all the star points correspond to curves of maximal contact. For non-irreducible curves, we need to deal with the intersection multiplicities between branches in order to construct a topological Puiseux series associated to the equisingularity type of the curve.
\medskip
We finish this section showing how to attach the topological Puiseux series to a plane curve, following the idea of the irreducible case. First of all, we need to characterize the contact pair in terms of the Puiseux series of the branches.
\begin{proposition}\label{prop:contactPuiseux}
Let \(C_1,C_2\) be two branches with contact pair \((q,c).\) Denote by \([s_i(x)]_{<k}\) the truncation of the Puiseux series of a branch up to order \(k-1\), and for \(i=1,2\) write \(\ell^{i}\) for the integer part of \((\overline{\beta}^{i}_q-n^{i}_{q-1}\overline{\beta}_{q-1}^{i})/e^{i}_{q-1}\) and
\[k_i=\left\{\begin{array}{cc}
\frac{\beta_q^i+(c-1)e_q^i}{\beta_0^i} & \text{if}\;c\neq 0 \\[.3cm]
\frac{\beta_{q-1}^{i}+\ell^{i}e_{q-1}^{i}}{\beta_0^i} & \text{if}\;c=0
\end{array}\right.\]
Then,
\[
[s_1(x)]_{<k_1}=[s_2(x)]_{<k_2},
\]
and the series \(s_1,s_2\) differ at the term \((\beta_q^i+ce_q^i)/\beta_0^i\); in fact this is the first term in which they differ. Conversely, if two Puiseux series satisfy these conditions, then the corresponding branches have contact \((q,c)\).
\end{proposition}
\begin{proof}
By definition, if \(C_1,C_2\) have contact pair \((q,c)\), then their intersection multiplicity is
$[f_1,f_2]=e^{2}_{q-1}\overline{\beta}^{1}_{q}+ce^{1}_qe^{2}_q$. A straightforward application of Noether's formula \ref{noehterformula} (see also \cite[Sec. 5.7]{casas}) allows us to conclude.
\end{proof}
Proposition \ref{prop:contactPuiseux} yields a tool to interpret the contact pair in terms of the Puiseux series. Thus, we are ready to present the topological Puiseux series of the branches of a curve.
\begin{theorem}\label{thm:topologicalPuiseuxseries}
Let \(C=\cup_{i=1}^{r} C_i\) be a curve whose topological type is described by the Puiseux exponents \(\{\beta_0^{i},\dots,\beta_{g_i}^{i}\}\) of every branch together with their contact pairs \((q_{i,j},c_{i,j})=(f_i\mid f_j).\) Then, there exists a plane curve singularity \(C'=\cup_{i=1}^{r} C_i'\) which is topologically equivalent to \(C\) such that the Puiseux series of each branch is
\[
s'_i(x)=\sum_{k=1}^{g_i}a^{(i)}_kx^{\beta^i_k/\beta^i_0}+\sum_{j\in\mathtt{I}\setminus\{i\}}b^{(i)}_jx^{(\beta^i_{q_{i,j}}+c_{i,j}e^i_{q_{i,j}})/\beta^i_0},
\]
where \(b_j^{(i)}\neq b_i^{(j)}\) for all \(i,j.\)
\end{theorem}
\begin{proof}
According to Proposition \ref{prop:equisingularity}, it is enough to show that \(\Gamma(C_i)=\Gamma(C_i')\) and that \([f_i,f_j]=[f'_i,f'_j].\) From the expression of \(s'_i(x)\) it follows immediately that the Puiseux characteristics of \(C_i'\) are identical to those of \(C_i.\) Therefore the equality \(\Gamma(C_i)=\Gamma(C_i')\) follows from the relation between the Puiseux characteristics and the minimal generators of the semigroup (\ref{eq:definbetabarra}). On the other hand, by Proposition \ref{prop:contactPuiseux} we have \((q'_{i,j},c'_{i,j})=(q_{i,j},c_{i,j})\), from which we deduce the equality \([f_i,f_j]=[f'_i,f'_j]\).
\end{proof}
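In the simplest non-trivial case the statement reads as follows (an example of our own):

```latex
% Two branches C_1, C_2 with \beta^i_0 = 2, \beta^i_1 = 3 (two ordinary
% cusps) and contact pair (q_{1,2}, c_{1,2}) = (1,1) admit the topological
% model
\[
s'_1(x)=x^{3/2}+b^{(1)}_2\,x^{2},
\qquad
s'_2(x)=x^{3/2}+b^{(2)}_1\,x^{2},
\qquad
b^{(1)}_2\neq b^{(2)}_1,
\]
% since (\beta^i_{q_{1,2}} + c_{1,2}\,e^i_{q_{1,2}})/\beta^i_0 = (3+1)/2 = 2;
% any choice of distinct coefficients yields [f'_1,f'_2] = 6 + 1 = 7, as
% prescribed by the contact pair.
```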
We have a one-to-one correspondence between the terms in the topological Puiseux series and the star points of the dual graph of \(C.\) By the truncation of the topological Puiseux series at a star point we mean the truncation up to, and including, the term defining the star point; thus we will write the truncations as
\[
y^{i}_{k}=\sum_{l=1}^{k}a_l^{(i)}x^{\beta^{i}_{l}/\beta^i_0}+\sum_{\tiny \begin{array}{c}
j\in\mathtt{I}\setminus\{i\}\\
q_{i,j}\leq k\\
\end{array}}b^{(i)}_jx^{(\beta^i_{q_{i,j}}+c_{i,j}e^i_{q_{i,j}})/\beta^i_0}.
\]
In the topological Puiseux series of a branch \(C_j\) of a non--irreducible plane curve \(C\), two different types of Puiseux pairs appear: on the one hand, there are exactly \(g_j\) Puiseux pairs for which \(p_{i,j}\neq 1\), and for those we have that \[(p_{i_1,j},m_{i_1,j})=\bigg(\frac{e^j_0}{e^j_1},\frac{\beta^j_1}{e^j_1}\bigg),\;(p_{i_2,j},m_{i_2,j})=\bigg(\frac{e^j_1}{e^j_2},\frac{\beta^j_2}{e^j_2}\bigg),\dots,(p_{i_{g_j},j},m_{i_{g_j},j})=(e^j_{g_j-1},\beta^j_{g_j}).
\]
On the other hand, if two branches separate at a free point which is not a dead end in any of their dual graphs, then there are Puiseux pairs with \(p_{i,j}=1\) satisfying the following:
\begin{align*}
(p_{k,j},m_{k,j})= & (1,m_{k,j}) \ \mbox{with}\;m_{k,j}\in \Bigg[1,\bigg\lfloor\frac{\beta^j_1}{\beta^j_0}\bigg\rfloor \Bigg]\cap\mathbb{N}\; \ \mbox{for all} \ k<i_1.\\[5pt]
(p_{k,j},m_{k,j})=& \Big(1,\frac{\beta^j_{\ell-1}}{e^j_{\ell-1}}+c\Big)\;\text{with}\;c\in \Bigg[1,\bigg\lfloor\frac{\beta^j_{\ell}-\beta^j_{\ell-1}}{e^j_{\ell-1}}\bigg\rfloor\Bigg]\cap\mathbb{N}\\
&\mbox{for all} \ k<i_{\ell} \;\mbox{and}\ \ell=2,\dots,g_j+1.
\end{align*}
\begin{rem}
The topological Puiseux series of the branches is a canonical representative of the topological class of \(C\). Since we are interested only in topological aspects, we will assume from now on that the Puiseux series of the branches of \(C\) are topological Puiseux series.
\end{rem}
\begin{rem}
In \cite{LeCarousels} Lê provides a way to define characteristic exponents for a Puiseux development of a non-irreducible plane curve. This allows us to treat the Puiseux series of each branch as a ``single parametrization''. In that case, we need to consider a Puiseux parametrization of the form \((t^k,s(t^k))\) with \(k=\overline{\beta}_0^1\cdots\overline{\beta}_0^{r}.\) However, the ordering in the set of branches defined by Lê is inverse to our ordering. We will provide further explanation of this fact in Section \ref{sec:topology}.
\end{rem}
\subsection{Values at the proper star points}\label{subsec:valuesproperstar}
As we have seen in the previous section, the set of maximal contact curves is enough to determine the equisingularity class of a branch, as it determines the minimal generators of the semigroup. In contrast to this case, we need further information to determine the equisingularity class of a non--irreducible curve; more precisely, we need to consider the values at the proper star points of the dual graph.
\medskip
In the same spirit as with the maximal contact values, we would like to remark that the values of the proper star points also have a geometric interpretation. For \(i,j\in\mathtt{I}\), let $\Gamma_i$ resp.~$\Gamma_j$ be the geodesic in \(G(C)\) joining the origin with the arrow corresponding to \(f_i\) resp.~\(f_j.\) The point \(R\in\Gamma_i\cap\Gamma_j\) with maximal weight is called a \textit{separation point}; then any proper star point is in fact a separation point.
\medskip
According to the definition, we can distinguish three different types of proper star points:
\begin{enumerate}
\item\label{valuepropertype1} \(R\) is the star point associated to the \((q+1)\)--th dead arc of \(G(C_i)\); in this case \([f_i,f_j]=e^j_q\overline{\beta}^i_{q+1}\leq e^i_q\overline{\beta}^j_{q+1}.\)
\item\label{valuepropertype2} \(R\) is an ordinary point on \(G(C_i)\) and \(G(C_j)\); in this case \([f_i,f_j]=e^i_q\overline{\beta}^j_{q+1}+ce^i_qe^j_q\) with \(c>0.\)
\item \label{valuepropertype3}\(R\) is an ordinary point on \(G(C_i)\) and a dead end in \(G(C_j)\); in this case \([f_i,f_j]=e^j_q\overline{\beta}^i_{q+1},\) \(c_{i,j}=\min\{l_q+1,l'_q+1\}\).
\end{enumerate}
The analysis of each of these three cases will provide the values at the proper star points we are looking for. To begin, assume we are in case \eqref{valuepropertype1}, and let \(J_1\subset \mathtt{I}\) be such that for all \(i,i'\in J_1\) we have \((q_{i,i'},c_{i,i'})\geq (q+1,0)\) and for any \(j\notin J_1\) we have \((q_{i,j},c_{i,j})\leq (q+1,0).\) Then, Noether's formula \ref{noehterformula} (see also \cite[Sect.~2.2]{Delgadoari}) yields
\begin{equation}\label{eqn:valueproperspecial}
\mathrm{pr}_i(\underline{v}^R)=\left\{\begin{array}{cl}
n_{q+1}^{i}\overline{\beta}^{i}_{q+1}& \text{if}\;i\in J_1 \\[.3cm]
\frac{[f_i,f_j]}{e_{q+1}^{j}}& \text{if}\;i\notin J_1 \ \ \text{and}\ \ j\in J_1
\end{array}\right.
\end{equation}
Next we assume case \eqref{valuepropertype2}; let \(J_1\subset \mathtt{I}\) be such that for all \(i,i'\in J_1\) we have \((q_{i,i'},c_{i,i'})\geq (q,c)\) and for any \(j\notin J_1\) we have \((q_{i,j},c_{i,j})\leq (q,c).\) Again Noether's formula \ref{noehterformula} (see also \cite[Sect.~2.2]{Delgadoari}) yields
\begin{equation}\label{eqn:valueproperordinary}
\mathrm{pr}_i(\underline{v}^R)=\left\{\begin{array}{cl}
\frac{[f_i,f_j]}{e_{q}^{i}}& \text{if}\;i,j\in J_1 \\[.3cm]
\frac{[f_i,f_j]}{e_{q}^{j}}& \text{if}\;i\notin J_1 \ \ \text{and}\ \ j\in J_1
\end{array}\right.
\end{equation}
Finally, observe that the case \eqref{valuepropertype3} can be treated as the case \eqref{valuepropertype2} since, by Noether's formula, the intersection multiplicity for the branches in \(J_1\) will coincide with \(e^j_q\overline{\beta}^{i}_{q+1}.\)
\medskip
In this way, we can recursively define values \(D^1,\dots,D^{r-1}\in \Gamma\) such that \(\mathrm{pr}_j(D^i)=\frac{[f_i,f_j]}{e_{q}^{j}}\) for some $i,j,q$; these values are the values attained by curvettes at the proper star points of \(G(C)\) and, analogously to the case of maximal contact values, we have
\[
\{D^1,\dots,D^{r-1}\}=\{\underline{v}\big ( \pi (\theta_{P(L)}) \big ) \in \Gamma : L \in \mathcal{R}\}.
\]
The set \(\mathcal{B}\cup \{D^1,\dots,D^{r-1}\} \) is called the set of principal values (see \cite[Sect.~2.2]{Delgadoari}) and obviously it contains all the necessary information to recover the equisingularity type of the curve. Moreover, Proposition 2.2.6 in \cite{Delgadoari} shows the following.
\begin{proposition}\label{prop:numbersq}
For each proper star point \(Q\) the corresponding \(D_Q=\underline{v}(\pi(\theta_Q))\) appears \(s_Q\) times, where \(s_Q=\nu(Q)-3\) if \(Q\) belongs to a dead arc and \(s_Q=\nu(Q)-2\) otherwise.
\end{proposition}
\begin{rem}
The discussion at the end of Subsection \ref{subsec:dualgraph} shows that \(s_Q=s(Q)-1\) if \(Q\neq \sigma_0\) and \(s_{\sigma_0}=s(\sigma_0)-2\).
\end{rem}
\subsection{A guide tour through the star points}\label{subsec:totalorderstar}
Let us denote by \(\mathcal{S}:=\mathcal{E}\cup\mathcal{R}\) the set of star points of \(G(C).\) In this part, we will introduce a total ordering in \(\mathcal{S}\), which will induce a total ordering in the set of proper star points $\mathcal{R}$. Without loss of generality, we assume that the branches of \(C\) are ordered by the good order of the topological Puiseux series of branches. Since the proper star points mark those terms of the topological Puiseux series in which two series differ, the ordering of the proper star points will allow us to refine the good order in the set of branches by using the contact pair at each proper star point. This refined good order becomes now an invariant in the topological class of \(C\), since it is defined from the topological Puiseux series.
\medskip
Let us denote by \(\prec\) the natural order induced by a geodesic in the vertices of \(G(C)\). Let \(\Gamma_1\) be the geodesic joining \(\mathbf{1}\) with the arrow corresponding to \(f_1\), and define
\[
\mathcal{S}_1:=\Gamma_1\cap \mathcal{S}=\{\alpha_1\prec\alpha_2\prec\cdots\prec\alpha_{t_1}\}.
\]
Recursively, for \(2\leq i\leq r\) we consider the geodesic \(\Gamma_i\) joining \(\mathbf{1}\) with the arrow corresponding to \(f_i\) and define
\[
\mathcal{S}_i:=\Gamma_i\cap\bigg(\mathcal{S}\setminus \big (\bigcup_{k<i}\mathcal{S}_k\big)\bigg)=\{\alpha_{t_{i-1}+1}\prec \cdots \prec \alpha_{t_i}\}.
\]
From now on we will assume that the set of star points in \(G(C)\) is ordered by the relation \(<\) as follows: if \(\alpha_i,\alpha_j\in \mathcal{S}_k\) for some \(k\), then \(\alpha_i<\alpha_j\) if and only if \(\alpha_i\prec \alpha_j\); if \(\alpha_i\in \mathcal{S}_k\) and \(\alpha_j\in \mathcal{S}_{k'}\), then \(\alpha_i<\alpha_j\) if and only if \(k<k'.\) Obviously, \(<\) produces a total order in \(\mathcal{S}\) by the construction of the sets \(\mathcal{S}_i.\)
\medskip
As we will see, the total order in the set of star points allows us to provide an ordered way to compare the topological Puiseux series of the different branches. Thanks to Theorem \ref{thm:topologicalPuiseuxseries} the only terms where two topological Puiseux series can be different are those corresponding to proper star points. Now let us describe how this order translates into the dual graph of \(C.\)
\medskip
Let us start with \(\sigma_0,\) which is the first proper star point and obviously belongs to \(\mathcal{S}_1.\) By Theorem \ref{thm:topologicalPuiseuxseries} this is the first term in the topological Puiseux series of the branches where at least two of them differ. Since \(\sigma_0\) is the first proper star point, it cannot be a dead end for any branch; this means that we should distinguish two cases:
\medskip
\noindent \textbf{(A)} First, assume \(\sigma_0\) is the star point associated to the $(q+1)$--th dead arc of some \(G(C_i),\) as in case \eqref{valuepropertype1} of Subsection \ref{subsec:valuesproperstar}. Let \((q,c)=(f_1\mid\cdots\mid f_r)\) be the contact pair of the curve \(C=\cup_{i=1}^{r}C_i\) and define
\[
l_{\mathtt{I}}:=\min\left\{\left\lfloor\frac{\overline{\beta}^i_{q+1}-n^i_q\overline{\beta}^i_q}{e^i_q}\right\rfloor\;:\;i\in \mathtt{I}=\{1,\dots,r\}\right\}.
\]
Since \(\sigma_0\) is the star point associated to the \((q+1)\)--th dead arc of some \(G(C_i),\) which we denote by \(L_{q+1}^{i},\) we know that \((q,c)\in \{(q,l_\mathtt{I} +1),(q+1,0)\}.\) Let us denote by \(T\) the end point of \(L_{q+1}^{i}.\) Thus, we define a partition of \(\mathtt{I}=(\bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\) as follows:
\begin{enumerate}
\item [(\(\star\))] \(i\in\bigcup_{p=1}^{t}I_p\) if and only if
\(\left\lfloor\frac{\overline{\beta}^i_{q+1}-n^i_q\overline{\beta}^i_q}{e^i_q}\right\rfloor>l_\mathtt{I};\)
\item[(\(\star\star\))] \label{pack2}
\(i\in I_{p}\) and \(j\in I_{p'}\) with \(p\neq p'\) if and only if \(T\) is the separation point of \(f_i,f_j\);
\item[(\(\star\star\star\))] \label{pack3} setting \(\bigcup_{p=t+1}^{s}I_p:=\mathtt{I}\setminus(\bigcup_{p=1}^{t}I_p)\), for \(p\geq t+1\) we have \(i\in I_{p}\) and \(j\in I_{p'}\) with \(p\neq p'\) if and only if \(T\) is not the separation point of \(f_i,f_j\) and \((f_i\mid f_j)\in\{(q,c),(q+1,0)\}.\)
\end{enumerate}
The packages \(I_{t+1},\dots,I_s\) will be called ``singular packages'' and the packages \(I_1,\dots,I_t\) will be called ``smooth packages''; see Figure \ref{fig:partitionatsigma}.
\begin{rem}
Observe that if \(i,i'\in I_p\) with \(p\geq t+1\) then \((f_i\mid f_{i'})>(q+1,0)\); if \(i\in \bigcup_{p=1}^{t} I_{p} \) and \(j\in \bigcup_{p=t+1}^{s} I_{p}\), then \((f_i\mid f_j)=(q,l_\mathtt{I}+1)\); and for all \(j\in \bigcup_{p=t+1}^{s} I_{p}\) we have \(\left\lfloor\frac{\overline{\beta}^j_{q+1}-n^j_q\overline{\beta}^j_q}{e^j_q}\right\rfloor=l_\mathtt{I}.\)
\end{rem}
\begin{lemma}\label{lem:orderbetaiscompatible}
Under the previous notation, set \(\kappa:=\sum_{p=1}^{t}|I_p|\). Assume that the singular packages \(I_{t+1},\cdots,I_s\) are ordered as
\begin{equation}\label{eq:refinedorder}
\frac{\overline{\beta}^{j_{t+1}}_{q+1}}{e^{j_{t+1}}_q}\geq\cdots\geq \frac{\overline{\beta}^{j_s}_{q+1}}{e^{j_s}_q}.
\end{equation}
We have \(\big\{\kappa+1,\dots, \kappa+|I_{t+1}|\big\}=I_{t+1},\dots,\) \(\big\{\kappa+|I_{t+1}|+1,\dots,\kappa+|I_{t+1}|+|I_{t+2}|\big\}=I_{t+2}\) and \(\displaystyle\Big\{\kappa+\sum_{p=t+1}^{s-1}|I_p|+1,\dots,r\Big\}=I_{s}.\)
\end{lemma}
\begin{proof}
We only need to check that the good order in the topological Puiseux series is compatible with the order in the packages defined by eq.~\eqref{eq:refinedorder}. First, we show that the packages in \(J:=\displaystyle\bigcup_{p=t+1}^{s}I_p\) are ordered following the good order in the topological Puiseux series.
\medskip
As a consequence of Proposition \ref{prop:contactPuiseux}, the topological Puiseux series of the branches in \(J\) are exactly the same up to the terms which are strictly smaller than the term corresponding to \(\beta^j_{q+1}/\beta^j_0\) and all of them differ at that term. Then, the first Puiseux pair in which they become different is of the form \((e^j_q/e^j_{q+1},\beta^j_{q+1}/e^j_{q+1}).\) Therefore, if \(j_k\in I_k\subset J\), then we need to check that
\[
\frac{\beta^{j_k}_{q+1}/e^{j_k}_{q+1}}{e^{j_{k}}_q/e^{j_{k}}_{q+1}}\geq \frac{\beta^{j_{k+1}}_{q+1}/e^{j_{k+1}}_{q+1}}{e^{j_{k+1}}_q/e^{j_{k+1}}_{q+1}}.
\]
Combining eq.~\eqref{eq:definbetabarra} and eq.~ \eqref{eq:refinedorder} we have
\[\frac{\overline{\beta}^{j_k}_{q+1}}{e^{j_k}_{q}}=\frac{\beta^{j_k}_{q+1}}{e^{j_k}_{q}}-\frac{\beta^{j_k}_{q}}{e^{j_k}_{q}}+n^{j_k}_{q}\frac{\overline{\beta}^{j_k}_{q}}{e^{j_k}_{q}}\geq \frac{\beta^{j_{k+1}}_{q+1}}{e^{j_{k+1}}_{q}}-\frac{\beta^{j_{k+1}}_{q}}{e^{j_{k+1}}_{q}}+n^{j_{k+1}}_{q}\frac{\overline{\beta}^{j_{k+1}}_{q}}{e^{j_{k+1}}_{q}}.\]
Recall that \(\overline{\beta}^{I_p}_{q}/e_q^{I_p}\) is independent of \(p\) for \(p=t+1,\dots,s\). Then it is easily seen that \(n^{I_p}_j\) is independent of \(p\) for $j=1,\ldots , q$, and we have
\[
-\frac{\beta^{j_k}_{q}}{e^{j_k}_{q}}+n^{j_k}_{q}\frac{\overline{\beta}^{j_k}_{q}}{e^{j_k}_{q}}=-\frac{\beta^{j_{k+1}}_{q}}{e^{j_{k+1}}_{q}}+n^{j_{k+1}}_{q}\frac{\overline{\beta}^{j_{k+1}}_{q}}{e^{j_{k+1}}_{q}};
\]
this allows us to deduce the desired inequality \(\frac{\beta^{j_k}_{q+1}}{e^{j_k}_{q}}\geq \frac{\beta^{j_{k+1}}_{q+1}}{e^{j_{k+1}}_{q}}.\)
\medskip
On the other hand, by the definition of the packages in \(\mathtt{I}\setminus J,\) the first term which differs in the topological Puiseux series of a branch in \(I_p\subset \mathtt{I}\setminus J\) with respect to the topological Puiseux series of a branch in \(J\) is of the form \((1,\beta^{I_p}_q/e^{I_p}_q+c)\) with \(c=l_\mathtt{I}+1\), and the intersection multiplicity is \([f_{I_p},f_{I_{t+1}}]=e_{q-1}^{I_{t+1}}\overline{\beta}_q^{I_p}+(l_\mathtt{I}+1)e_q^{I_{t+1}}e_q^{I_p}\); combining eq.~\eqref{eq:definbetabarra} and eq.~\eqref{eq:refinedorder} as before we obtain
\[
\beta^{I_p}_q/e^{I_p}_q+c>\beta^{I_{t+1}}_{q+1}/e^{I_{t+1}}_q.
\]
Thus if \(i\in \mathtt{I}\setminus J\) and \(j\in J\), then we have \(i<j.\)
\end{proof}
Moreover, we can also set
\(\big \{1,\dots,|I_1|\big\}=I_1,\) \(\big\{| I_1|+1,\dots,|I_2|\big\}=I_2,\dots,\) \(\displaystyle\Big\{\sum_{i=1}^{t-1}|I_i|+1,\dots,\kappa\Big\}=I_t\); this holds because the term \(\beta_q^{I_p}/e_q^{I_p}+c\) is equal for all \(p=1,\dots,t.\) Therefore, they trivially satisfy the good ordering in the topological Puiseux series. Furthermore, the topological Puiseux series of two branches \(i\in I_p\) and \(j\in I_{p'}\) with \(p\neq p'\) and \(p,p'\leq t\) have different coefficients for the term \(\beta_q^{I_p}/e_q^{I_p}+c.\) Figure \ref{fig:partitionatsigma} describes the dual graph at this stage.
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(0,-50)
\thinlines
\put(-60,30){\line(1,0){24}}
\put(-60,30){\circle*{2}}
\put(-40,30){\circle*{2}}
\put(-40,30){\line(0,-1){15}}
\put(-40,15){\circle*{2}}
\put(-17,30){\circle*{2}}
\put(-17,30){\line(0,-1){15}}
\put(-17,15){\circle*{2}}
\put(-34,30){$\ldots$}
\put(-22,30){\line(1,0){18}}
\put(-4,30){\circle*{3}}
\put(-4,30){\line(1,-0.5){70}}
\put(-4,30){\line(-0.2,1){3}}
\put(-4,30){\line(0.2,0.5){5}}
\put(-4,30){\line(1,0.2){15}}
\put(4,43){\scriptsize{$I_{s-1}$}}
\put(-6,47){\scriptsize{$I_{s}$}}
\put(-6,23){\scriptsize{$\sigma_0$}}
\put(31,13){\circle*{2}}
\put(31,13){\line(0,1){14}}
\put(31,13){\line(1,1.3){9}}
\put(31,13){\line(1,0){14}}
\put(66,-5){\circle*{2}}
\put(66,-5){\line(0.1,1){1.5}}
\put(66,-5){\line(1,1.4){9}}
\put(66,-5){\line(1,-0.2){14}}
\put(83,-7){\scriptsize{$I_{t+1}$}}
\put(66,-5){\line(-1,-1){20}}
\put(46,-25){\circle*{2}}
\put(46,-25){\line(1,0.3){15}}
\put(46,-25){\line(1,-0.4){16}}
\put(46,-25){\line(1,-1.4){11}}
\put(64,-30){\scriptsize{$I_{2}$}}
\put(60,-42){\scriptsize{$I_{1}$}}
\end{picture}
$$
\caption{Case (A): Dual graph with \(\sigma_0\) being a star point of some branch.}
\label{fig:partitionatsigma}
\end{figure}
\noindent \textbf{(B)} Now, let us consider the case where \(\sigma_0\) is an ordinary point for all \(G(C_i).\) This case can be treated as the previous one if we consider \(\mathtt{I}=\bigcup_{p=1}^{t}I_p\). Observe that if \(\sigma_0\) is an ordinary point in the dual graph of all the branches, then \(\sigma_0\) is a separation point. Therefore, we can define the partition of \(\mathtt{I}\) by declaring \(i,j\in I_p\) if and only if \((f_i\mid f_j)>(q,c).\) Thus, the topological Puiseux series of all the branches coincide at every term of order strictly less than \(\beta_q^{I_p}/e_q^{I_p}+c\), and at the term \(\beta_q^{I_p}/e_q^{I_p}+c\), which we recall is independent of \(p,\) they have different coefficients if and only if they belong to different packages. The description of the dual graph is now easier:
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(80.00,40.00)(50,10)
\thinlines
\put(20,30){\line(1,0){30}}
\put(20,30){\circle*{2}}
\put(40,30){\circle*{2}}
\put(40,30){\line(0,-1){15}}
\put(40,15){\circle*{2}}
\put(54,30){$\ldots$}
\put(70,30){\circle*{2}}
\put(70,15){\circle*{2}}
\put(70,30){\line(0,-1){15}}
\put(67,30){\line(1,0){23}}
\put(90,30){\circle*{2}}
\put(83,34){{\scriptsize$\sigma_0$}}
\put(90,30){\line(1,0.5){12}}
\put(90,30){\line(1,1){10}}
\put(90,30){\line(1,-0.1){13}}
\put(90,30){\line(1,-0.4){12}}
\put(90,30){\line(1,-1){10}}
\end{picture}
$$
\caption{Case (B): $\sigma_0$ is an ordinary point of every $G(C_i)$.}
\label{fig4}
\end{figure}
Again, we can put \(\big\{1,\dots,|I_1|\big\}=I_1,\) \(\big\{|I_1|+1,\dots,|I_1|+|I_2|\big\}=I_2,\dots,\) \(\displaystyle\Big\{\sum_{i=1}^{t-1}|I_i|+1,\dots,r\Big\}=I_t\), since this is compatible with the good order.
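This consecutive re-indexing of the packages is purely combinatorial; as an illustrative sketch (the function name and data layout are ours, not from the text), in Python:

```python
def consecutive_relabelling(sizes):
    """Relabel branches so that the packages become consecutive blocks:
    {1,...,|I_1|}, {|I_1|+1,...,|I_1|+|I_2|}, and so on."""
    blocks, start = [], 1
    for n in sizes:
        blocks.append(list(range(start, start + n)))
        start += n
    return blocks
```

For instance, packages of sizes \(2,2,1\) are relabelled as \(\{1,2\},\{3,4\},\{5\}\).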
\medskip
Once we have ordered the packages at \(\sigma_0,\) we shall continue decomposing each package until we arrive at a decomposition into packages of cardinality exactly one. There are two options: either \(|I_1|=1\), and then we move to the first package of cardinality strictly bigger than one, or all the packages have cardinality exactly one, and then we are done. We first consider the case where there is a package in \(\bigcup_{p=1}^{t} I_p\) with \(|I_p|>1.\) Assume for simplicity that \(|I_1|>1\) and, slightly abusing notation, denote the contact pair of the package by \((q,c)=(f_1\mid\cdots\mid f_{|I_1|}).\) Let \(\sigma_0^{I_1}\) be the first proper star point of the dual graph of \(C^{I_1},\) where \(C^{I_1}\) is the curve defined by \(f_{I_1}=f_1\cdots f_{|I_1|}.\)
\medskip
We have that \(\sigma_0^{I_1}\in\mathcal{S}_1\); however, this is not (in general) the next proper star point in \(\mathcal{S}_1\) after \(\sigma_0\): more concretely, \(\sigma_0^{I_1}\) is not the next proper star point in \(\mathcal{S}_1\) after \(\sigma_0\) if and only if \(\bigcup_{p=1}^{t} I_p\neq \mathtt{I}.\) As we already mentioned, the proper star points are the guides for studying the terms where the topological Puiseux series of the branches become different. If \(\bigcup_{p=1}^{t} I_p\neq \mathtt{I}\), then each package \(I_{t+1},\dots,I_s\) is associated to a proper star point belonging to \(\mathcal{S}_1\) and we have \(\sigma_0\prec P_1\prec \cdots\prec P_{\epsilon}\) consecutive proper star points in \(\mathcal{S}_1.\) In fact, it is easy to see that \(\epsilon=s-t-\sum(s(P_i)-1),\) where \(s(P_i)\) is the number of times that the value of the proper star point appears in the semigroup. Observe that we have already analyzed the compatibility of the order in the packages with the good order in the branches in Lemma \ref{lem:orderbetaiscompatible}; thus those proper star points have also been investigated. After them, one may find star points which are non-proper: there is no need to analyze them, since they do not generate any new ramification in the dual graph. Hence we continue to the first proper star point in \(\mathcal{S}_1\) after \(P_{\epsilon},\) i.e. \(\sigma_0^{I_1}.\) For \(\sigma_0^{I_1}\) we now have two different situations to be considered:
\begin{enumerate}
\item Assume that \(\sigma_0^{I_1}\) is the star point associated to the \(q'\)--death arc of some \(G(C^i),\) \(i\in I_1\) and \(q'>q.\) In this case, \(\sigma_0^{I_1}\neq T,\) where \(T\) is the end point of \(L^r_{q+1}.\) We proceed in the same way as in the case of \(\sigma_0\) to define a partition of \(I_1=(\bigcup_{p=1}^{t_1}I_{1,p})\cup(\bigcup_{p=t_1+1}^{s_1} I_{1,p}).\) The partition is defined and ordered as in the case of \(\sigma_0;\) to do so, we take into account that \(\sigma_0^{I_1}\) plays the role of \(\sigma_0\) in the dual graph \(G(C^{I_1}).\) Again by Lemma \ref{lem:orderbetaiscompatible}, the ordering of the subpackages of \(I_1\) is compatible with the good order in the topological Puiseux series.
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(100.00,110.00)(0,-70)
\thinlines
\put(-60,30){\line(1,0){24}}
\put(-60,30){\circle*{2}}
\put(-40,30){\circle*{2}}
\put(-40,30){\line(0,-1){15}}
\put(-40,15){\circle*{2}}
\put(-17,30){\circle*{2}}
\put(-17,30){\line(0,-1){15}}
\put(-17,15){\circle*{2}}
\put(-34,30){$\ldots$}
\put(-22,30){\line(1,0){18}}
\put(-8,35){\scriptsize{$\sigma_0$}}
\put(13,39){\scriptsize{$I_s$}}
\put(18,33){\scriptsize{$I_{s-1}$}}
\put(32,23){$\ddots$}
\put(-4,30){\circle*{3}}
\put(-4,30){\line(1,-1){30}}
\put(-4,30){\line(1,0.5){15}}
\put(-4,30){\line(1,0.2){20}}
\put(-4,30){\line(1,0){15}}
\put(4.5,21){\circle*{2}}
\put(4.5,21){\line(1,-0.2){9}}
\put(4.5,21){\line(1,0.5){9}}
\put(4.5,21){\line(1,1){6}}
\put(15.5,10){\circle*{2}}
\put(15.5,10){\line(1,1.3){9}}
\put(15.5,10){\line(1,0.4){9}}
\put(15.5,10){\line(1,2){8}}
\put(26,0){\circle*{2}}
\put(26,0){\line(1,1.9){7.5}}
\put(26,0){\line(1,1.3){9}}
\put(26,0){\line(1,.7){14}}
\put(41,5){\scriptsize{$I_{t+1}$}}
\put(35,-10){\scriptsize{$I_{t}$}}
\put(29,-32){\scriptsize{$I_{1}$}}
\put(69,-34){\scriptsize{$\sigma_0^{I_1}$}}
\put(26,0){\line(0,-1){25}}
\put(26,-25){\circle*{2}}
\put(26,-25){\line(1,1.9){7.5}}
\put(26,-25){\line(1,1.3){9}}
\put(26,-25){\line(1,1){10}}
\put(26,-25){\line(1,.7){10}}
\put(26,-25){\line(1,0){25}}
\put(54,-25){$\ldots$}
\put(65,-25){\line(1,0){10}}
\put(38,-25){\circle*{2}}
\put(38,-25){\line(0,-1){15}}
\put(38,-40){\circle*{2}}
\put(46,-25){\circle*{2}}
\put(46,-25){\line(0,-1){15}}
\put(46,-40){\circle*{2}}
\put(75,-25){\circle*{3}}
\put(75,-25){\line(1,1.9){7.5}}
\put(75,-25){\line(1,1.3){9}}
\put(75,-25){\line(1,1){10}}
\put(75,-25){\line(1,.7){10}}
\put(75,-25){\line(1,-0.5){40}}
\put(100,-37.5){\circle*{2}}
\put(100,-37.5){\line(1,1){10}}
\put(100,-37.5){\line(1,1.3){9}}
\put(100,-37.5){\line(1,0.7){10}}
\put(100,-37.5){\line(1,1.7){8}}
\put(115,-45){\circle*{2}}
\put(115,-45){\line(1,0){20}}
\put(115,-45){\line(1,0.2){20}}
\put(115,-45){\line(1,.4){20}}
\put(115,-45){\line(-1,-0.5){20}}
\put(95,-55){\circle*{3}}
\put(89,-55){\scriptsize{$T$}}
\put(95,-55){\line(1,-.7){20}}
\put(95,-55){\line(1,-.5){20}}
\put(95,-55){\line(1,-.3){20}}
\put(95,-55){\line(1,0){20}}
\put(120,-55){\scriptsize{$I_{1,t_1}$}}
\put(120,-65){$\vdots$}
\put(120,-75){\scriptsize{$I_{1,1}$}}
\put(80,-8){\scriptsize{$I_{1,s_1}$}}
\put(140,-39){$\vdots$}
\put(140,-48){\scriptsize{$I_{1,t_1+1}$}}
\put(-67,28){{\scriptsize ${\bf 1}$}}
\end{picture}
$$
\caption{Situation (1): $\sigma_0^{I_1}$ is proper.}
\label{figC}
\end{figure}
\item Assume that \(\sigma_0^{I_1}\) is an ordinary point for all \(G(C^i)\), \(i\in I_1.\) Then again the partition of \(I_1\) is defined and ordered as in the case of \(\sigma_0.\) All the packages generated in this partition are smooth packages.
\end{enumerate}
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(70.00,110.00)(0,-50)
\thinlines
\put(-60,30){\line(1,0){24}}
\put(-60,30){\circle*{2}}
\put(-40,30){\circle*{2}}
\put(-40,30){\line(0,-1){15}}
\put(-40,15){\circle*{2}}
\put(-17,30){\circle*{2}}
\put(-17,30){\line(0,-1){15}}
\put(-17,15){\circle*{2}}
\put(-34,30){$\ldots$}
\put(-22,30){\line(1,0){18}}
\put(-8,35){\scriptsize{$\sigma_0$}}
\put(13,39){\scriptsize{$I_s$}}
\put(18,33){\scriptsize{$I_{s-1}$}}
\put(32,23){$\ddots$}
\put(-4,30){\circle*{3}}
\put(-4,30){\line(1,-1){30}}
\put(-4,30){\line(1,0.5){15}}
\put(-4,30){\line(1,0.2){20}}
\put(-4,30){\line(1,0){15}}
\put(4.5,21){\circle*{2}}
\put(4.5,21){\line(1,-0.2){9}}
\put(4.5,21){\line(1,0.5){9}}
\put(4.5,21){\line(1,1){6}}
\put(15.5,10){\circle*{2}}
\put(15.5,10){\line(1,1.3){9}}
\put(15.5,10){\line(1,0.4){9}}
\put(15.5,10){\line(1,2){8}}
\put(26,0){\circle*{2}}
\put(26,0){\line(1,1.9){7.5}}
\put(26,0){\line(1,1.3){9}}
\put(26,0){\line(1,.7){14}}
\put(41,5){\scriptsize{$I_{t+1}$}}
\put(35,-10){\scriptsize{$I_{t}$}}
\put(29,-32){\scriptsize{$I_{1}$}}
\put(69,-34){\scriptsize{$\sigma_0^{I_1}$}}
\put(26,0){\line(0,-1){25}}
\put(26,-25){\circle*{2}}
\put(26,-25){\line(1,1.9){7.5}}
\put(26,-25){\line(1,1.3){9}}
\put(26,-25){\line(1,1){10}}
\put(26,-25){\line(1,.7){10}}
\put(26,-25){\line(1,0){25}}
\put(54,-25){$\ldots$}
\put(65,-25){\line(1,0){10}}
\put(38,-25){\circle*{2}}
\put(38,-25){\line(0,-1){15}}
\put(38,-40){\circle*{2}}
\put(46,-25){\circle*{2}}
\put(46,-25){\line(0,-1){15}}
\put(46,-40){\circle*{2}}
\put(75,-25){\circle*{3}}
\put(75,-25){\line(1,1.9){7.5}}
\put(75,-25){\line(1,1){10}}
\put(75,-25){\line(1,0.3){12}}
\put(75,-25){\line(1,-.2){12}}
\put(75,-25){\line(1,-0.5){12}}
\put(80,-8){\scriptsize{$I_{1,s_1}$}}
\put(-67,28){{\scriptsize ${\bf 1}$}}
\end{picture}
$$
\caption{Situation (2): $\sigma_0^{I_1}$ is ordinary.}
\label{figD}
\end{figure}
Continuing with this process, we will finally obtain an ordering of the first \(|I_1|\) branches which is compatible with the good ordering of the set of branches induced by the topological Puiseux series. Once we finish with \(I_1\), we repeat this process with each package of \(\bigcup_{p=1}^t I_p\) with \(|I_p|>1.\) In this way we obtain an ordering of the first \(\kappa\) branches which is compatible with the good ordering of the set of branches induced by the topological Puiseux series. Now we continue with the package \(I_{t+1}.\) As in the case of \(I_1\), we only need to deal with the packages of cardinality strictly bigger than one; the procedure is the same as in the cases developed for \(I_1.\) After all the iterations, we obtain a partition of \(\mathtt{I}\) into packages of cardinality one such that the indexing of the packages is compatible with the good order induced by the topological Puiseux series on the set of branches.
\medskip
Summarizing, the set of star points \(\mathcal{S}\) is totally ordered and this order is compatible with the good order in the topological Puiseux series of \(C.\) In fact, given the equisingularity data, we have provided a way to compute this order. We will see in Example \ref{ex:examplerefinedgoodorder} that this ordering is more restrictive than just the good order in the topological Puiseux series. From now on, by good order we will mean this refined good order.
\subsubsection{Approximations associated to the star points}\label{subsubsec:truncationstar}
To finish this section, let us define a sequence of truncated plane curves which can be associated to the star points following their total ordering. This sequence will allow us to better understand the ordering \(<\) in \(\mathcal{S}.\) Let \((q,c)=(f_1\mid\cdots\mid f_r)\) be the contact pair of \(C,\) then for \(i=1,\dots,q-1\) we define \(C_{\alpha_i}\) as the irreducible plane curve given by the Puiseux parametrization
\[y_{\alpha_i}:=y^{1}_{i}=\sum_{k=1}^{i} a_k^{(1)}x^{\beta^1_k/\beta^1_0}.\]
Recall that, for \(i=1,\dots,q-1\), the quotient \(\beta^j_i/\beta^j_0\) is independent of \(j=1,\dots,r\), and then \(y_{\alpha_i}\) is a maximal contact curve which is common to all the branches. Now set \(Q=\alpha_{q-1}\) and \(\sigma=\sigma_0\); we must distinguish two cases:
\begin{enumerate}
\item [\(\diamond\)] If \(c>0\) then define \(C_{\alpha_q}\) as the irreducible plane curve with Puiseux series \(y_{\alpha_q}:=y^1_q.\)
\item [\(\diamond\)] If \(c=0\) then \(\alpha_q=\sigma_0\) and consider the partition \(\mathtt{I}=(\bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\) explained before. Then, we define \(C_{\alpha_q}=C_{\sigma_0}\) as the plane curve singularity with \(s\) branches defined by the Puiseux series:
\[y^{I_p}_{\alpha_q}:= \sum_{l=1}^{q}a_l^{(i)}x^{\beta^{i}_{l}/\beta^i_0}+\sum_{j\notin I_p}b^{(i)}_jx^{(\beta^i_{q_{i,j}}+c_{i,j}e^i_{q_{i,j}})/\beta^i_0}\quad\text{with}\quad i\in I_p.\]
\end{enumerate}
In the case \(c>0\) we have \(\sigma_0=\alpha_{q+1}\), and we define \(C_{\alpha_{q+1}}=C_{\sigma_0}\) analogously to the case \(c=0.\)
\medskip
Following the procedure described to order \(\mathcal{S},\) let \(\mathtt{I}=(\bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\) be the partition created at \(\sigma_0.\) By definition there are \(\epsilon\) proper star points (in fact, \(s-t-\sum (s(P_i)-1)\)) in \(\mathcal{S}_1\) between \(\sigma_0\) and the separation point of \(I_1\) with \(I_2.\) Let us denote them by \(\overline{\sigma}_1,\dots,\overline{\sigma}_\epsilon\) and let \(P\) be the separation point of \(I_1\) and \(I_2.\) We put \(C_{\overline{\sigma}_i}=C_{\sigma_0}\) for all \(i.\) At this point we have
\[
\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\preceq P.
\]
The consideration of further approximations of $C$ requires defining a recursive procedure which distinguishes several cases. To do so, we rename the distinguished points \(Q,\sigma\): let \(Q=\sigma_0\) be the star point where the process starts, and let \(\sigma\) be the next star point to be considered in order to define an approximating curve. Then,
\begin{enumerate}
\item Assume \(|I_1|=1:\)
\begin{enumerate}
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(q\) minimal generators. This implies that \(\mathcal{S}_1=\{\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\}\) and \(y^{I_1}_{\sigma_0}\) is the topological Puiseux series of the branch \(C^1.\) Then, we have finished with \(\mathcal{S}_1\) and we move to the package \(I_2.\) We set \(Q=\sigma_0\)
and write \(\sigma\) for the star point from which \(I_2\) emanates, i.e. \(\sigma=\overline{\sigma}_i\) for some \(i.\)
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(g_1>q\) minimal generators. Then,
\[\mathcal{S}_1=\{\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\prec \alpha_1^{I_1}\prec\cdots\prec \alpha_{g_1-q}^{I_1}\}\]
where \(\alpha_1^{I_1}\prec\cdots\prec \alpha_{g_1-q}^{I_1}\) are the non-proper star points defining the maximal contact values associated to the remaining generators of the semigroup \(\Gamma^1\). For each \(\alpha^{I_1}_i\) we define the plane curve \(C_{\alpha^{I_1}_i}\) with \(s\) branches, where the branches \(j=2,\dots,s\) have Puiseux series \(y_{\alpha^{I_1}_i}^{j}=y_{\sigma_0}^{I_j},\) i.e. the branches \(j=2,\dots,s\) are the same as those of \(C_{\sigma_0}\), and for \(j=1\) the Puiseux series is
\[
y_{\alpha^{I_1}_i}^{1}:=y_{\sigma_0}^{I_1}+\sum_{k=q+1}^{q+i} a_k^{(1)}x^{\beta_k^1/\beta_0^1}.
\]
Once we have completed \(\mathcal{S}_1\), we move to the package \(I_2.\) We let \(\sigma\) be the star point from which \(I_2\) emanates and set \(Q=\alpha_{g_1-q}^{I_1}.\)
\end{enumerate}
\item Assume \(|I_1|>1.\) There are two cases to be distinguished:
\begin{enumerate}
\item For all \(j\in I_1\) we have \(g_j=q,\) i.e. the semigroups \(\Gamma^{j}\) have \(q\) minimal generators. This implies that all the star points in \(\mathcal{S}_1\) after \(Q\) are proper star points of \(G(C)\) and ordinary points of the individual dual graphs \(G(C^j)\) of the branches. Let \(\sigma_0^{I_{1}}\prec\cdots\prec \sigma_{0}^{I_{1,\cdots,1}}\) be the \(l\leq |I_1|\) proper star points from \(Q\) to the arrow of \(C^1\) in \(G(C).\) We only need to analyze the situation at the first one, \(\sigma_0^{I_1}\); for the remaining ones it follows by the recursive process we are defining. For \(\sigma_0^{I_1},\) we have a partition into smooth packages of \(I_1=\bigcup_{k=1}^{s_1} I_{1,k}\) and we define the plane curve \(C_{\sigma_0^{I_1}}\) with \(s_1+s-1\) branches, where the first \(s_1\) branches have Puiseux series of the form
\[y^{I_{1,k}}_{\sigma_0^{I_1}}:=y^{I_1}_{\sigma_0}+\sum_{i\in I_1\setminus{I_{1,k}}}b^{(j)}_ix^{(\beta^j_{q_{i,j}}+c_{i,j}e^j_{q_{i,j}})/\beta^j_0}\quad\text{with} \quad j\in I_{1,k},\]
and the last \(s-1\) branches are equal to the branches of \(C_{\sigma_0}.\) We let \(\sigma\) be the star point from which \(I_2\) emanates and set \(Q=\sigma_0^{I_{1,1}}.\)
\item We assume that \(g_j>q\) for some \(j\in I_1.\) We distinguish again two subcases:
\begin{enumerate}
\item \(C^1\) has \(g_1\geq q\) and \((f_1|\cdots|f_{|I_1|})\leq (q+1,0)\). Denote by \(\sigma_{0}^{I_1}\) the first proper star point of the package \(I_1.\)
Let \(I_1=\bigcup_{k=1}^{s_1} I_{1,k}\) be the partition associated to the proper star \(\sigma_{0}^{I_1}.\) Then, we define \(C_{\sigma_{0}^{I_1}}\) as the plane curve with \(s_1+s-1\) branches, where the last \(s-1\) branches are equal to the last \(s-1\) branches of \(C_{\sigma_0}\), and the first \(s_1\) branches are defined in the same way we defined \(C_{\sigma_0}\) from \(C_{\alpha_q}.\) In this case \(\sigma_0^{I_1}\) plays the role of \(\sigma_0\) and \(Q\) plays the role of \(\alpha_q.\) Thus, we have again a sequence
\[
\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\prec \sigma_0^{I_1}\preceq \overline{\sigma}^{I_1}_1\preceq\cdots\preceq\overline{\sigma}^{I_1}_{\epsilon_1}
\]
and set \(\sigma=\overline{\sigma}_{\epsilon_1}^{I_1}\) and \(Q=\sigma_0^{I_1}\) to continue the process.
\item \(C^1\) has \(g_1\geq q\) and \((f_1|\cdots|f_{|I_1|})> (q+1,0),\) i.e. \((f_1|\cdots|f_{|I_1|})\geq (q+1,c)\) with \(c\neq 0.\) Write \((q_{I_1},c_{I_1}):= (f_1|\cdots|f_{|I_1|}).\) Since \((q_{I_1},c_{I_1})> (q+1,0),\) there are \(q_{I_1}-q\) non-proper star points between \(\overline{\sigma}_\epsilon\) and \(\sigma_0^{I_1},\) i.e.
\[\overline{\sigma}_{\epsilon}\prec \alpha_{q+1}\prec \cdots \prec \alpha_{q_{I_1}}\preceq \sigma_0^{I_1}.\]
Then for each \(\alpha_i\) with \(i=q+1,\dots,q_{I_1}\) we define \(C_{\alpha_i}\) as a plane curve with the same number \(s\) of branches as \(C_{\sigma_0}\), where the last \(s-1\) branches are the same as those of \(C_{\sigma_0}\) and the first branch is defined as
\[
y^{1}_{\alpha_i}:=y^{I_1}_{\sigma_0}+\sum_{k=q+1}^{i} a_k^{(1)}x^{\beta_k^1/\beta_0^1}
\]
similarly to the case \((1)(b).\)
Moreover, we define \(C_{\sigma_{0}^{I_1}}\) as the plane curve with \(s_1+s-1\) branches, where the last \(s-1\) branches are equal to the last \(s-1\) branches of \(C_{\sigma_0}\) and the first \(s_1\) branches are defined in the same way we defined \(C_{\sigma_0}\) from \(C_{\alpha_q}.\) In this case \(\sigma_0^{I_1}\) plays the role of \(\sigma_0\) and \(Q\) plays the role of \(\alpha_{q_{I_1}}.\) We set \(Q=\sigma_0^{I_1}\) and let \(\sigma\) be the next star point that must be considered to define an approximating curve, i.e. it is defined from the partition associated to \(\sigma_0^{I_1}\) in the same way as in the previous cases.
\end{enumerate}
\end{enumerate}
\end{enumerate}
We run this process until \(\sigma=\max\{\alpha\in\mathcal{S}\}\); in that case \(C_\sigma=C\), and thus we have recovered the given plane curve.
\medskip
Observe that the ordering we have introduced on the set of branches is now a canonical order for the branches in a fixed equisingularity class, and it can be described using only the dual graph of \(C.\) Moreover, we have shown that this ordering introduced in the dual graph implies the good ordering of the topological Puiseux series of the branches. As we will see in Section \ref{subsec:iterativepoincare}, the total order on the star points of the dual graph, together with the order on the set of branches, is crucial to provide an iterative construction of the Poincaré series.
\begin{ex}\label{ex:examplerefinedgoodorder}
Consider the plane curve defined in Example \ref{example:goodorder}. Assume first that the curve is given by the Puiseux series of the branches \(C_i\) ordered as \(y_1=2x^2+x^{14/3},\)
\(y_2=x^4,\) \(y_3=x^{5/2},\) \(y_4=2x^2\) and \(y_5=x^2.\) The equisingularity class is given by \(\Gamma^1=\langle 3,14\rangle,\) \(\Gamma^3=\langle2,5\rangle,\) \(\Gamma^2=\Gamma^4=\Gamma^5=\mathbb{N}\) and the intersection multiplicities: \([f_1,f_2]=[f_1,f_5]=6,\) \([f_1,f_3]=12,\) \([f_1,f_4]=14,\) \([f_2,f_3]=5,\) \([f_2,f_4]=[f_2,f_5]=[f_4,f_5]=2\) and \([f_3,f_4]=[f_3,f_5]=4.\)
The topological Puiseux series are
\begin{equation*}
\begin{array}{ccc}
y_1=2x^2+x^{14/3}+\tfrac{1}{2}x^5,& \quad y_2=5x^2+x^{3},&\quad y_3=5x^2+x^{5/2}+3x^3,\\
y_4=2x^2+x^5 &\quad\text{and} &\quad y_5=x^2.
\end{array}
\end{equation*}
The dual graph of \(C\) with the branches in this order is given in Figure \ref{fig:ex_0}.
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(80.00,40.00)(50,5)
\thinlines
\put(30,30){\line(1,0){20}}
\put(30,30){\circle*{2}}
\put(50,30){\circle*{3}}
\put(43,34){{\scriptsize$\sigma_0$}}
\put(50,30){\vector(1,1){10}}
\put(50,30){\line(1,0){80}}
\put(70,30){\circle*{2}}
\put(90,30){\circle*{2}}
\put(110,30){\circle*{2}}
\put(130,30){\circle*{2}}
\put(130,30){\vector(1,1){10}}
\put(130,30){\line(0,-1){20}}
\put(130,20){\circle*{2}}
\put(130,10){\circle*{2}}
\put(130,10){\vector(1,0){15}}
\put(50,30){\line(1,-1){15}}
\put(65,15){\circle*{2}}
\put(65,15){\vector(1,0){12}}
\put(65,15){\line(-1,-1){10}}
\put(55,5){\circle*{2}}
\put(55,5){\vector(1,0){12}}
\put(80,15){{\scriptsize$y_{1}$}}
\put(70,3){{\scriptsize$y_{4}$}}
\put(130,40){{\scriptsize$y_{3}$}}
\put(145,15){{\scriptsize$y_{2}$}}
\put(60,45){{\scriptsize$y_5$}}
\end{picture}
$$
\caption{Dual graph of \(C\).}
\label{fig:ex_0}
\end{figure}
The given ordering does not coincide with the good ordering we defined, so let us show how to order the topological Puiseux series according to the refined good ordering. The first separation point occurs at \(x^2\) and the partition at \(\sigma_0\) with this ordering is \(\mathtt{I}=I_1\cup I_2\cup I_3\), where \(I_1=\{1,4\},\) \(I_2=\{2,3\}\) and \(I_3=\{5\}\). As we mentioned, we can reorder the branches so that \(I'_1=\{1,2\},\) \(I'_2=\{3,4\}\) and \(I_3=\{5\}\). In doing so, our new ordering at this point is \(y_1'=y_1,\) \(y_2'=y_4,\) \(y_3'=y_3,\) \(y_4'=y_2\) and \(y_5'=y_5.\) Since the separation point is an ordinary point for all the branches, the dual graph of the truncation \(C_{\sigma_0}\) is
\begin{figure}[h]
$$
\unitlength=0.50mm
\begin{picture}(80.00,40.00)(115,5)
\thinlines
\put(70,30){\line(1,0){20}}
\put(70,30){\circle*{2}}
\put(90,30){\circle*{2}}
\put(83,34){{\scriptsize$\sigma_0$}}
\put(90,30){\vector(1,0.5){12}}
\put(90,30){\vector(1,-0.1){13}}
\put(90,30){\vector(1,-1){10}}
\put(105,36){{\scriptsize$I_3$=\{5\}}}
\put(105,26){{\scriptsize$I_2$=\{2,3\}}}
\put(103,16){{\scriptsize$I_1$=\{1,4\}}}
\put(140,28){$\rightsquigarrow$}
\put(165,30){\line(1,0){20}}
\put(165,30){\circle*{2}}
\put(185,30){\circle*{2}}
\put(178,34){{\scriptsize$\sigma_0$}}
\put(185,30){\vector(1,0.5){12}}
\put(185,30){\vector(1,-0.1){13}}
\put(185,30){\vector(1,-1){10}}
\put(200,36){{\scriptsize$I_3$=\{5\}}}
\put(200,26){{\scriptsize$I_2'$=\{3,4\}}}
\put(198,16){{\scriptsize$I_1'$=\{1,2\}}}
\end{picture}
$$
\caption{Dual graph of \(C_{\sigma_0}\) and reordering of the packages.}
\label{fig:ex_1}
\end{figure}
and \(C_{\sigma_0}\) has three branches defined by \(y_{I_1}=2x^2,\) \(y_{I_2}=5x^2\) and \(y_{I_3}=x^2.\)
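Both the partition at \(\sigma_0\) and the later separations inside the packages are governed by the first exponent at which two topological Puiseux series differ. A small Python sketch of this computation for the series of this example (the representation of the series as exponent-to-coefficient maps and the function names are ours, purely illustrative):

```python
from fractions import Fraction as F

def contact_exponent(y1, y2):
    """First exponent at which two topological Puiseux series differ,
    comparing coefficients exponent by exponent in increasing order."""
    for e in sorted(set(y1) | set(y2)):
        if y1.get(e, 0) != y2.get(e, 0):
            return e
    return None  # identical series

# Topological Puiseux series from the example, as exponent -> coefficient maps
y = {
    1: {F(2): 2, F(14, 3): 1, F(5): F(1, 2)},
    2: {F(2): 5, F(3): 1},
    3: {F(2): 5, F(5, 2): 1, F(3): 3},
    4: {F(2): 2, F(5): 1},
    5: {F(2): 1},
}

def partition_at(exponent, series):
    """Group branch indices by their coefficient at the given exponent."""
    groups = {}
    for i, s in series.items():
        groups.setdefault(s.get(exponent, 0), []).append(i)
    return sorted(groups.values())
```

With this data, `partition_at(F(2), y)` recovers the packages \(I_1=\{1,4\},\) \(I_2=\{2,3\},\) \(I_3=\{5\}\) of the example, and the branches of \(I_1\) and \(I_2\) separate at the exponents \(14/3\) and \(5/2\), respectively.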
We continue following the package \(I_1.\) The next term where the branches belonging to \(I_1\) separate gives \(5>14/3\); hence, to be good ordered, we must permute the indexing of both branches, namely \(y''_1=y_2'\) and \(y''_2=y_1'.\) Then the truncation \(C_{\sigma_0^{I_1}}\) has four branches ordered as \(y_{I_{1,1}}=y_4=2x^2+x^5,\) \(y_{I_{1,2}}=y_1=2x^2+x^{14/3}+\tfrac{1}{2}x^5,\) \(y_{I_2}=5x^2\) and \(y_{I_3}=x^2.\) The dual graph is given in Figure \ref{fig:ex_2}.
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(80.00,40.00)(85,4)
\thinlines
\put(70,30){\line(1,0){20}}
\put(70,30){\circle*{2}}
\put(90,30){\circle*{2}}
\put(83,34){{\scriptsize$\sigma_0$}}
\put(90,30){\vector(1,0.5){12}}
\put(90,30){\vector(1,-0.1){13}}
\put(90,30){\line(1,-1){15}}
\put(105,15){\circle*{2}}
\put(105,15){\vector(1,0){12}}
\put(105,15){\line(-1,-1){10}}
\put(95,5){\circle*{2}}
\put(95,5){\vector(1,0){12}}
\put(120,15){{\scriptsize$y_{I_{1,2}}=2x^2+x^{14/3}+\frac{1}{2}x^5$}}
\put(110,3){{\scriptsize$y_{I_{1,1}}=2x^2+x^5$}}
\put(105,36){{\scriptsize$I'_3$=\{5\}}}
\put(105,26){{\scriptsize$I'_2$=\{3,4\}}}
\end{picture}
$$
\caption{Dual graph of \(C_{\sigma_0^{I_1}}\).}
\label{fig:ex_2}
\end{figure}
As we have finished with the star points belonging to the geodesics of the branches of \(I_1,\) we move to the branches of \(I_2.\) At the separation point of these two branches we have \(3>5/2\), so again we need to permute the branches: \(y''_3=y'_4\) and \(y''_4=y'_3.\) Then the truncation \(C_{\sigma_0^{I_2}}=C\) is our original curve and the branches are good ordered as \(Y_1:=y_{I_{1,1}}=y_4,\) \(Y_2:=y_{I_{1,2}}=y_1,\) \(Y_3:=y_{I_{2,1}}=y_2,\) \(Y_4:=y_{I_{2,2}}=y_3\) and \(Y_5:=y_{I_3}=y_5.\) We thus obtain the dual graph of \(C\) with the branches good ordered.
\begin{figure}[H]
$$
\unitlength=0.50mm
\begin{picture}(80.00,40.00)(50,5)
\thinlines
\put(30,30){\line(1,0){20}}
\put(30,30){\circle*{2}}
\put(50,30){\circle*{3}}
\put(43,34){{\scriptsize$\sigma_0$}}
\put(50,30){\vector(1,1){10}}
\put(50,30){\line(1,0){80}}
\put(70,30){\circle*{2}}
\put(90,30){\circle*{2}}
\put(110,30){\circle*{2}}
\put(130,30){\circle*{2}}
\put(130,30){\vector(1,1){10}}
\put(130,30){\line(0,-1){20}}
\put(130,20){\circle*{2}}
\put(130,10){\circle*{2}}
\put(130,10){\vector(1,0){15}}
\put(50,30){\line(1,-1){15}}
\put(65,15){\circle*{2}}
\put(65,15){\vector(1,0){12}}
\put(65,15){\line(-1,-1){10}}
\put(55,5){\circle*{2}}
\put(55,5){\vector(1,0){12}}
\put(80,15){{\scriptsize$Y_2$}}
\put(70,3){{\scriptsize$Y_1$}}
\put(130,40){{\scriptsize$Y_{4}$}}
\put(145,15){{\scriptsize$Y_{3}$}}
\put(60,45){{\scriptsize$Y_5$}}
\end{picture}
$$
\caption{Dual graph of \(C\) with the good order in the branches.}
\label{fig:ex_3}
\end{figure}
\end{ex}
\section{The Alexander polynomial of the link} \label{sec:topology}
The Alexander polynomial (in $r$ variables) is an invariant of a link with $r$ (numbered) components in the sphere $S^3$. The precise definition can be found, for example, in \cite{EN}. For general topological properties of links and their numerical invariants we refer to \cite{EN,SW,Shinohara1,Shinohara2}; for an accurate description of the link complement of a plane curve we refer to \cite{LeCarousels}. To a plane curve singularity $C = \bigcup_{i=1}^r C_i \subset (\mathbb{C}^2, 0)$ we assign the link $L=C \cap S_{\varepsilon}^3$ in the $3$-sphere $S_{\varepsilon}^3$ of radius $\varepsilon$ centered at the origin of the complex plane $\mathbb{C}^2$, with $\varepsilon$ small enough. It is well known that the link complement \(X=S^{3}-L\) fibers over \(S^1\) and has an iterated toric structure that can be described via Lê's Carousels \cite{LeCarousels}. The definition of the multivariable Alexander polynomial is based on the notion of universal abelian covering \(\rho:\widetilde{X}\rightarrow X.\) The group of covering transformations \(H_1(X;\mathbb{Z})=\mathbb{Z}^r\) is a free abelian multiplicative group on the symbols \(\{t_1,\dots,t_r\}\), where each \(t_i\) is geometrically associated with an oriented meridian of an irreducible component of the link. In this way, if \(\widetilde{p}\) is a typical fiber of \(\rho\), then the group \(H_1(\widetilde{X},\widetilde{p};\mathbb{Z})\) becomes a module over \(\mathbb{Z}[t_1,t_1^{-1},\dots,t_r,t_r^{-1}].\) The multivariable Alexander polynomial \(\Delta_L(t_1,\dots,t_r)\) is then defined as the greatest common divisor of the elements of the first Fitting ideal \(F_1(H_1(\widetilde{X};\mathbb{Z})).\) Observe that \(\Delta_L(t_1,\dots,t_r)\) is then well-defined up to multiplication by a unit of \(\mathbb{Z}[t_1^{\pm 1},\dots, t_r^{\pm 1}].\)
\subsection{Description of the link: satellization}\label{subsec:satellization}
In this part we provide the topological counterpart of the algebraic constructions of Section \ref{sec:descriptionofstar}. For that we follow the beautiful exposition of Weber in \cite{Webertop}. One can also check that the satellization construction can be carried out explicitly in order to obtain the Waldhausen decomposition, following \cite{LeCarousels}. As we have already mentioned, the topology of the singularity is encoded in the topology of the algebraic link \(L=C\cap S^3\) in the \(3\)--sphere. It is well known that this link is oriented and that it can be described by a process called \emph{satellization} \cite[Sect. 6]{Webertop}.
\medskip
First, we define the notion of torus knot. Consider the map $f\colon S^1\to S^1 \times S^1$ given by $f(z)=(z^p, z^q)$; if we identify $S^1$ with the complex unit circle, the image of $f$ is a closed curve, say $K$, which turns out to be a knot in $\mathbb{R}^3$ obtained by winding $q$ times longitudinally and $p$ times transversally: this is a torus knot of type $(p,q)$.
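As a concrete illustration, the one-variable Alexander polynomial of the \((p,q)\) torus knot has the classical closed form \(\Delta(t)=\frac{(t^{pq}-1)(t-1)}{(t^p-1)(t^q-1)}\); this formula is standard in the literature and is not derived here. A minimal sketch using sympy (assumed available):

```python
import sympy as sp

t = sp.symbols('t')

def torus_knot_alexander(p, q):
    """Classical one-variable Alexander polynomial of the (p, q) torus knot:
    Delta(t) = (t^{pq} - 1)(t - 1) / ((t^p - 1)(t^q - 1))."""
    num = (t**(p * q) - 1) * (t - 1)
    den = (t**p - 1) * (t**q - 1)
    return sp.cancel(num / den)
```

For the trefoil, i.e. the \((2,3)\) torus knot, this gives \(t^2-t+1\).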
\medskip
In general, we can construct a link from a given knot by a process called satellization, whose input data are:
\begin{enumerate}
\item An oriented knot \(K\) in \(S^3\) together with a tubular neighborhood \(N\) around it.
\item An oriented link \(L\) in the interior of the tubular neighborhood \(V\) of the unknot $U$ such that $L$ is not contained in any ball inside $V$.
\item The choice of an orientation-preserving diffeomorphism \(\varphi:V\rightarrow N\) such that \(\varphi(U)=K\), carrying parallels to parallels; the diffeomorphism \(\varphi\) is determined (up to isotopy) by the parallel \(p\) on \(\partial N\) such that \(p=\varphi(p').\)
\end{enumerate}
The process of replacing $K$ by $\varphi (L)$ is called the \emph{satellization} of \(L\) around \(K\). Since the branches $C_i$ of $C$ correspond to iterated torus knots, the satellization yields an iterative method to construct the algebraic link associated to $C$ from the knots corresponding to the branches $C_i$ (as an iteration of torus links). \medskip
Once we know what a torus knot of type \((p,q)\) is, we may define a \((p,q)\)--satellization: let \(K\) be an oriented knot in \(S^3.\) An oriented knot \(K'\) is a \((p,q)\)--satellite of \(K\) if it has the same smooth type as a torus knot of type \((p,q)\) on the boundary of a tubular neighborhood of \(K\), on which meridians are chosen to be non-singular closed oriented curves which have linking number \(+1\) with \(K\), and parallels are non-singular closed oriented curves which do not link \(K\) and have intersection number \(+1\) with a meridian.
\medskip
The iterated torus structure of the knot \(K\) associated with an irreducible plane curve singularity with \(g\) Puiseux pairs is described by a theorem of K. Brauner \cite[Sect. III]{Brauner28} (see also \cite[Theorem 2.3.2]{LeCarousels}): the knot \(K_g\) is computed recursively by successive $(p_i,w_i)$-satellizations around $K_{i-1}$, where \(K_0\) is the unknot and the \(p_i,w_i\) are defined from the Puiseux pairs and the expressions in eq.~\eqref{eqn:defw_j}.
\medskip
With this construction, the idea is to extend the iterated torus structure to the case of links. Consider a concentric tubular neighborhood $V'$ inside the interior \(V^{\circ}\) of $V$. A torus link is the link formed by $s\geq 1$ torus knots of type $(\alpha, \beta)$ with $\alpha \geq 1$ placed on the boundary $\partial V'$, possibly together with $U$. We will write $\mathrm{TL}(s,\delta; (\alpha, \beta))$ for the torus link of $s$ torus knots of type $(\alpha, \beta)$; here $\delta$ equals 1 if $U$ is chosen as a component of the link, and 0 otherwise.
\medskip
It is certainly possible to build an iterated torus link from a set $L_1,\ldots, L_g$ of torus links as follows. First, we select a component $K_1$ of $L_1$ and satellizate $L_2$ around $K_1$. This yields a link, and we select again a component $K_2$ of this link and satellizate $L_3$ around $K_2$. We repeat this process until we satellizate $L_g$ around the chosen component of the link just obtained. Observe that it is possible to select the same component several times: in this case, each new satellization takes place inside a smaller tubular neighborhood.
\subsubsection{Iterative homology}\label{subsubsec:iterativehomology}
A key ingredient in our new proof of the coincidence of the Alexander polynomial and the Poincaré series is the results of Sumners and Woods \cite{SW}, which allow us to compute the Alexander polynomial iteratively. Those results are based on an iterative computation of the homology of the universal abelian cover \(\widetilde{X}\) of the link exterior. Let us briefly recall their construction.
\medskip
Given an algebraic link \(L:=L_1\cup\cdots\cup L_r\) with \(r\geq 1\) components, we must first understand the homology of the abelian cover of the exterior of a new link created by one of two operations (without loss of generality we assume that the operation occurs in the last branch). Sumners and Woods \cite[Sect. V]{SW} provided a method to compute the homology of the abelian cover of the exterior of the new link in terms of the old one. To do so, we need to build a nice Mayer-Vietoris decomposition of the link exterior which lifts to the abelian cover. Let us first consider a tubular neighborhood \(V_1\cup\cdots\cup V_r\) of \(L\) such that each \(V_i\) is a tubular neighborhood of the component \(L_i\) and the \(V_i\)'s are pairwise disjoint. Let \(V'\) be a tubular neighborhood of the unknot \(K_0\) and assume \(K'\) is a knot contained in \(V'\) such that \(K'\) is homologous to \(p\) times \(K_0.\) Let \(\varphi:V' \rightarrow V_r\) be an orientation preserving surjective diffeomorphism taking longitude to longitude. Define \(K:=\varphi(K')\simeq p L_r\); the two possible operations are the following:
\begin{enumerate}
\item Satellization along one component, giving rise to a new link \(L'=L_1\cup\cdots\cup L_{r-1}\cup K\) with the same number of components as the old link \(L\). If \(U'\) is a tubular neighborhood of \(K'\) which is contained in the interior of \(V'\) then the link exterior of \(L'\) is
\[X=S^3\setminus \operatorname{Int}(V_1\cup\cdots\cup V_{r-1}\cup \varphi(U')).\]
\item Adding one new branch, giving rise to a new link \(\widehat{L}=L_1\cup\cdots\cup L_{r}\cup L_{r+1}\) with one more component than the old link \(L\). If we denote by \(V^\ast\) a small tubular neighborhood of \(K_0\) which is contained in \(V'\) and misses \(U',\) in this case the link exterior of \(\widehat{L}\) has the form
\[X=S^3\setminus \operatorname{Int}(V_1\cup\cdots\cup V_{r-1}\cup \varphi(U')\cup\varphi(V^\ast)).\]
\end{enumerate}
Let us denote by \(Y:=S^3\setminus \operatorname{Int}(V_1\cup \cdots\cup V_{r})\) the link exterior of the old link \(L\) and define
\[W:=\left\{\begin{array}{lc}
V_r\setminus \operatorname{Int}(\varphi(U')), &\text{in the case \((1)\);} \\
V_r\setminus \operatorname{Int}(\varphi(U')\cup\varphi(V^\ast)), &\text{in the case \((2)\).}
\end{array}
\right.\]
If \(T=\partial V_r\) then we obtain the Mayer-Vietoris splitting \(X=Y\cup_T W.\) On account of Sumners and Woods \cite{SW}, this Mayer-Vietoris splitting induces a splitting on the abelian covering spaces of \(X\) with very nice homological properties. We denote by \(\rho:\widetilde{X}\rightarrow X\) the universal abelian cover and \(\Lambda:=\mathbb{Z}[t_1^{\pm 1},\dots,t_r^{\pm 1}].\) Since we will be interested in the computation of the Alexander polynomial, we must describe the \(\Lambda\)--module structure of \(H_1(\widetilde{X},\widetilde{p}).\) To do so, following Sumners and Woods \cite[Sect. V]{SW} it is convenient to use the Mayer-Vietoris splitting; write \(\widetilde{Y}=\rho^{-1}(Y),\) \(\widetilde{W}=\rho^{-1}(W)\) and \(\widetilde{T}=\rho^{-1}(T).\) Then, Sumners and Woods \cite[Sect. V]{SW} showed that \(H_1(\widetilde{X},\widetilde{p})\) decomposes as a \(\Lambda\)--module in the following forms:
\begin{enumerate}
\item [(a)] Assume \(r=1\) and we have performed an operation of type \((1)\) or \((2)\), then
\[H_1(\widetilde{X})\simeq_\Lambda H_1(\widetilde{Y})\oplus H_1(\widetilde{W})/H_1(\widetilde{T}).\]
\item [(b)] Assume \(r\geq 2\) and we have performed an operation of type \((1)\) or \((2)\), then
\[H_1(\widetilde{X})\simeq_\Lambda H_1(\widetilde{Y})\oplus H_1(\widetilde{W}).\]
\end{enumerate}
Those decompositions allow us to compute the Alexander polynomial iteratively, as we will see in the remainder of the section.
\subsection{Gluing as a topological operation: the irreducible case}\label{subsec:alexirreducible}
If the curve is irreducible with \(g\) Puiseux pairs, the knot \(K\) is an iterated torus knot of type \((n_1,w_1),\dots,(n_g,w_g)\); it is obtained by successive satellizations of knots of type \((n_i,w_i)\). Recall that if \(\overline{\beta}_0,\dots,\overline{\beta}_g\) are the generators of the semigroup of values of the curve and \(e_i=\gcd(\overline{\beta}_0,\dots,\overline{\beta}_i)\) then \(n_i=e_{i-1}/e_i\) and \(w_i=\overline{\beta}_i/e_i.\) The semigroup of values can be constructed by gluing as follows: we start with the numerical semigroup \(\Gamma_1=\langle n_1,w_1\rangle,\) which is in fact the semigroup of values of the first approximating curve \(C_{\alpha_1}.\) Then, we perform a gluing and get
\[
\Gamma_2=n_2\Gamma_1\bowtie w_2\mathbb{N}=\langle n_1n_2,n_2w_1,w_2\rangle,
\]
which is the semigroup of values of the second approximating curve. Recursively, we obtain the semigroups of the approximating curves as \(\Gamma_i=n_i\Gamma_{i-1}\bowtie w_i\mathbb{N}\) until the last iteration, which provides the semigroup of the curve:
\[
\Gamma=n_g\Gamma_{g-1}\bowtie w_g\mathbb{N}=\langle\overline{\beta}_0=n_1\cdots n_g,\overline{\beta}_1=n_2\cdots n_g w_1,\dots,\overline{\beta}_g\rangle.
\]
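To make the gluing recursion concrete, here is a small computational sketch (our own illustration, not taken from the text; the semigroup \(\langle 4,6,13\rangle\) with \(n=(2,2)\) and \(w=(3,13)\) is a hypothetical running example):

```python
# Illustration: building a value semigroup by iterated gluings
# Gamma_i = n_i * Gamma_{i-1} |><| w_i * N.

def semigroup_elements(generators, bound):
    """All elements of the numerical semigroup <generators> below `bound`."""
    reachable = {0}
    while True:
        new = {s + g for s in reachable for g in generators if s + g < bound}
        if new <= reachable:         # saturated below `bound`
            return sorted(reachable)
        reachable |= new

def glue(generators, n, w):
    """Gluing n*Gamma |><| w*N: rescale the old generators by n and adjoin w."""
    return [n * g for g in generators] + [w]

# Gamma_1 = <n_1, w_1> = <2, 3>, then Gamma_2 = 2*Gamma_1 |><| 13*N = <4, 6, 13>.
gens = glue([2, 3], 2, 13)
print(gens)                          # [4, 6, 13]
print(semigroup_elements(gens, 20))  # [0, 4, 6, 8, 10, 12, 13, 14, 16, 17, 18, 19]
```

The second printout exhibits the gaps \(1,2,3,5,7,9,11,15\) of \(\langle 4,6,13\rangle\); all integers from the conductor \(16\) on belong to the semigroup.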
The semigroups \(\Gamma_i\) are complete intersection numerical semigroups. Let us denote by \(a_{0,i},\dots,a_{i,i}\) the minimal generators of \(\Gamma_i.\) Being a complete intersection numerical semigroup means that the semigroup algebra \(\mathbb{C}[t^\nu\;:\;\nu\in \Gamma_i]\) is defined by a binomial ideal with \(i\) generators, namely \(\ker(\varphi)=I_{i}=(f_{1,i},\dots,f_{i,i})\), fitting in the exact sequence
\[\begin{array}{ccccccccc}
0&\rightarrow& \mathbb{C}[u_0,\dots,u_i]^{i}&\rightarrow& \mathbb{C}[u_0,\dots,u_i]&\xrightarrow{\varphi}& \mathbb{C}[t^\nu\;:\;\nu\in \Gamma_i]&\rightarrow& 0\\
& &&& u_k&\mapsto& t^{a_{k,i}}.& &
\end{array}
\]
The generators of \(I_i\) may be expressed in a more precise way. Each \(\Gamma_i=\langle a_{0,i},\dots,a_{i,i}\rangle\) has the property that \(n_{k,i}a_{k,i}\in\langle a_{0,i},\dots,a_{k-1,i}\rangle\) where \(n_{k,i}=\gcd(a_{0,i},\dots,a_{k-1,i})/\gcd(a_{0,i},\dots,a_{k,i}).\) Thus, the generators of \(I_i\) have the following form
\[f_{k,i}=u_k^{n_{k,i}}-\prod_{j=0}^{k-1}u_j^{l_j^{k}}\quad \text{where}\quad n_{k,i}a_{k,i}=l_0^ka_{0,i}+\cdots+l_{k-1}^{k}a_{k-1,i}.\]
There is an important fact about the expression of these generators: when performing the gluing, the first \(i\) generators remain ``unchanged'', i.e. \(I_{i+1}=I_i+(f_{i+1,i+1})\) since \(n_{k,i}=n_{k,i+1}\) for \(k<i+1\); the reason is that for each semigroup \(\Gamma_i\) the inequality \(n_{k,i}a_{k,i}<a_{k+1,i}\) is satisfied.
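For instance (an illustrative example of our own, not taken from the text), consider \(\Gamma_1=\langle 2,3\rangle\) glued with \(n_2=2,\) \(w_2=13,\) so that \(\Gamma_2=2\Gamma_1\bowtie 13\mathbb{N}=\langle 4,6,13\rangle.\) Then
\[
n_{1,2}=\frac{4}{\gcd(4,6)}=2,\qquad 2\cdot 6=3\cdot 4\quad\Longrightarrow\quad f_{1,2}=u_1^{2}-u_0^{3},
\]
\[
n_{2,2}=\frac{\gcd(4,6)}{\gcd(4,6,13)}=2,\qquad 2\cdot 13=2\cdot 4+3\cdot 6\quad\Longrightarrow\quad f_{2,2}=u_2^{2}-u_0^{2}u_1^{3},
\]
so \(I_1=(u_1^{2}-u_0^{3})\) and \(I_2=I_1+(f_{2,2}),\) the first generator being unchanged by the gluing.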
\medskip
On the other hand, as explained in the previous section, the cabling operation acts on the first homology group of the abelian cover by decomposing it into \(n_i\) copies of that of the previous link together with a new homology class. Hence the homological decomposition of the abelian cover and the algebraic decomposition of the semigroup algebra are affected in the same way; both operations are essentially the same. The coincidence between the Alexander polynomial and the Poincaré series in the irreducible case is then a natural consequence of this correspondence.
\medskip
This sequence of iterations is provided by the knots obtained from the approximations of \(C\) defined by the \(g\) star points of the dual graph of \(C\) associated with the maximal contact values. Indeed, if \(K_1,\dots,K_g\) is the sequence of approximating knots, then \cite[Theorem 5.1]{SW} (see also \cite[Theorem II]{Seifert50}) shows
\begin{proposition}\label{prop:alexirre}
\[
\Delta_{K_i}(t)=\Delta_{K_{i-1}}(t^{n_i})\cdot P(w_i,n_i,t).
\]
\end{proposition}
Therefore Proposition \ref{prop:poincareirreduciblecase} and Proposition \ref{prop:alexirre} imply the relation \((t-1)P_C(t)=\Delta_K(t).\) Moreover, this shows that the gluing construction corresponds exactly to the satellization process used to obtain \(K_i\). This identification is indeed very deep: its ultimate reason is the correspondence between the presentations of the semigroup algebra and of the first homology group of the link exterior.
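The relation can be checked numerically. The sketch below (our own illustration) assumes \(P(m,n,t)=\frac{(t^{mn}-1)(t-1)}{(t^m-1)(t^n-1)}\), the Alexander polynomial of the \((m,n)\) torus knot as in Theorem \ref{thm:SW1}, and uses the hypothetical semigroup \(\Gamma=\langle 4,6,13\rangle\) (so \(n=(2,2)\), \(w=(3,13)\)); with this normalization the identity reads \((1-t)P_C(t)=\Delta_K(t)\), the sign being immaterial since the Alexander polynomial is only defined up to a unit.

```python
# Illustrative check of (1 - t) P_C(t) = Delta_K(t) for Gamma = <4, 6, 13>.
import sympy as sp

t = sp.symbols('t')

def P(m, n, x):
    # Assumed closed form: Alexander polynomial of the (m, n) torus knot.
    return sp.cancel((x**(m * n) - 1) * (x - 1) / ((x**m - 1) * (x**n - 1)))

# Iterate Delta_{K_i}(t) = Delta_{K_{i-1}}(t^{n_i}) * P(w_i, n_i, t),
# starting from the unknot K_0 with Delta = 1.
delta = sp.Integer(1)
for n_i, w_i in [(2, 3), (2, 13)]:
    delta = sp.expand(delta.subs(t, t**n_i) * P(w_i, n_i, t))

# Poincare series of Gamma = <4, 6, 13>: the conductor is 16, the elements
# below it are 0, 4, 6, 8, 10, 12, 13, 14, and the tail sums to t^16 / (1 - t).
P_C = sum(t**s for s in [0, 4, 6, 8, 10, 12, 13, 14]) + t**16 / (1 - t)

assert sp.simplify((1 - t) * P_C - delta) == 0
print(sp.Poly(delta, t).degree())   # 16, twice the delta-invariant of the curve
```

The degree of \(\Delta_K\) equals the conductor \(16=2\delta\), as expected for an algebraic knot.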
\medskip
Observe that the definitions of \(P_C(t)\) (cf.~Section \ref{sec:Poincareseries}) in the irreducible and the non-irreducible cases differ by a factor \(1-t\); this distinction yields a series, and not a polynomial, in the irreducible case. Hence in the irreducible case the Poincaré series and the Alexander polynomial can only coincide modulo this factor.
\subsection{Iterative computation of the Alexander polynomial}
Our ordering of the set of branches induces an ordering of the link components from the innermost to the outermost component. This allows us to compute the Alexander polynomial by means of the algorithm provided by Sumners and Woods in \cite{SW}. In recalling their procedure we will exhibit its algebraic counterpart in our iterative computation of the Poincaré series explained in Section \ref{subsec:iterativepoincare}. Here we provide a more detailed description than the one in \cite{SW}: they only write down the explicit factorization in the case of a curve with two or three branches, whereas our explicit expressions for the factorizations of the Poincaré series are valid for any number of branches.
\medskip
Before starting with the topological interpretation of our algebraic procedure, we need to present the building blocks of Sumners and Woods' computation of the Alexander polynomials. Thanks to the iterative homology computations done in \cite[Sect.~V]{SW} (see also Subsection \ref{subsubsec:iterativehomology}), Sumners and Woods obtain a decomposition of the Alexander polynomial in each of the following cases:
\begin{theorem} \cite[Theorems 5.2, 5.3 and 5.4]{SW}\label{thm:SW5}
With the notation of Subsection \ref{subsubsec:iterativehomology}, let $\langle L,L'\rangle$ denote the homological linking number of \(L\) and \(L'\).
\begin{enumerate}
\item Let \(L\) be a link with \(r\geq 2\) components and let \(L'\) be the link obtained from an iteration of type \((1)\) via the knot \(K'\) with winding number \(p\neq 0.\) Then,
\[
\Delta_{L'}(t_1,\dots,t_r)=\Delta_L(t_1,\dots,t_r^p)\cdot\Delta_M\big(t_r,\prod_{i=1}^{r-1}t_i^{\langle L_i,L_r\rangle}\big),
\]
where \(M\) denotes the model link of two components formed by \(K'\) and the unknotted meridian curve on the boundary torus containing \(K'\).
\item Assume \(L\) is a knot and \(\widehat{L}\) is the link with two components obtained from an iteration of type \((2)\) via the knot \(K'\) with winding number \(p\neq 0.\) Then,
\[
\Delta_{\widehat{L}}(t_1,t_2)=\Delta_L(t_1,t_2^p)\cdot \Delta_N(t_1,t_2),
\]
where \(N\) denotes the model link of two components formed by \(K'\) and the unknotted core of the torus containing \(K'.\)
\item Let \(L\) be a link with \(r\geq 2\) components and let \(\widehat{L}\) be the link obtained from an iteration of type \((2)\) via the knot \(K'\) with winding number \(p\neq 0.\) Then,
\[
\Delta_{\widehat{L}}(t_1,\dots,t_r)=\Delta_L(t_1,\dots,t_rt_{r+1}^p)\cdot \Delta_P\big(t_r,t_{r+1},\prod_{i=1}^{r-1}t_i^{\langle L_i,L_r\rangle}\big),
\]
where \(P\) denotes the model link of three components formed by \(K',\) the unknotted meridian curve on the boundary of the torus containing \(K'\) and the unknotted core of the torus containing \(K'.\)
\end{enumerate}
\end{theorem}
Moreover, Sumners and Woods \cite[Sect.~VI]{SW} show that the Alexander polynomials of the model links can be computed as follows:
\begin{theorem}\cite[Theorems 6.1, 6.2, 6.3]{SW} \label{thm:SW1} Let \(L_1\) be a torus knot of type \((\alpha,\beta),\) let \(M\) be a link of two components formed by the torus knot \(L_1\) linked with its unknotted exterior core, let \(N\) be a link of two components formed by \(L_1\) and the unknotted core of the torus containing \(L_1,\) and let \(P\) be a link of three components formed by the torus knot \(L_1\) linked with both its exterior unknotted core and its interior unknotted core. Let \(P(m,n,x),Q(m,n,x,y),B(m,n,x,y,z)\) be the polynomials defined in \eqref{eqn:defkeypoly}. Then, the corresponding Alexander polynomials are
\[\begin{split}
\Delta_{L_1}(t)=P(\alpha,\beta,t),&\quad \Delta_{M}(t_1,t_2)=Q(\alpha,\beta,t_2,t_1),\\
\Delta_N(t_1,t_2)=Q(\beta,\alpha,t_2,t_1)&\quad\text{and}\quad \Delta_{P}(t_1,t_2,t_3)=B(\alpha,\beta,t_2,t_1,t_3).
\end{split} \]
\end{theorem}
Now, we will use these results to exhibit the topological counterpart of our algebraic iterative computation of the Poincaré series described in Section \ref{subsec:iterativepoincare}. We will see that, as in the irreducible case, the coincidence between the Poincaré series and the Alexander polynomial comes from the equivalence between the algebraic process and the topological one. As a consequence, we provide an alternative proof of the theorem of Campillo, Delgado and Gusein-Zade \cite{CDGduke}, and this new proof exhibits the intrinsic reason for the coincidence between both invariants. Moreover, we improve the computations of Sumners and Woods \cite{SW} for the algebraic link associated with a plane curve singularity: they provide closed formulas only for the cases of two and three branches, whereas we describe the process in full generality, for any number of branches, in terms of the value semigroup.
\subsubsection{The base cases}
As in Section \ref{subsec:iterativepoincare}, we start with the two base cases. In the first place, we consider the case where all the branches are smooth with the same contact.
\begin{proposition}\label{prop:alexlemm1}
Let \(C=\bigcup_{i=1}^{r} C_i\) such that \(C_i\) is smooth for all \(i=1,\dots,r\) and such that the contact pair is equal for all branches, i.e. \((q,c)=(q_{i,j},c_{i,j})\) for all \(i,j\in\mathtt{I}\). Then,
\[
\Delta_C(\underline{t})=P_C(\underline{t}).
\]
\end{proposition}
\begin{proof}
We start with the oriented unknot \(K_1\) and first link it with a knot of type \((c,1).\) After that, we consider the resulting link and continue linking knots of type \((c,1),\) proceeding by induction on the number of branches, i.e. on the number of components of the link. Combining \cite[Theorem 5.3 and Theorem 5.4]{SW}, Theorem \ref{thm:SW1} and Lemma \ref{prop:poincarelemm1}, and taking into account that we operate from the innermost to the outermost component, the claim follows.
\end{proof}
We continue with the second base case,
\begin{proposition}\label{prop:alexlemm2}
Let \(C=\bigcup_{i=1}^{r} C_i\) be such that \(\Gamma^i=\langle\overline{\beta}_0^i,\overline{\beta}^i_1\rangle\) or \(\Gamma^i=\mathbb{N}.\) Assume that for each \(i\in \mathtt{I}=\{1,\dots,r\}\) such that \(C_i\) is a singular branch we have \(l:=\Big\lfloor\frac{\overline{\beta}^i_1}{\overline{\beta}^i_0}\Big\rfloor=\Big\lfloor\frac{\overline{\beta}^j_1}{\overline{\beta}^j_0}\Big\rfloor\) if \(j\neq i\) and \(C_j\) is singular. Moreover, assume that the contact pairs are of the form \((q_{i,j},c_{i,j})\in\{(1,0),(0,l)\}.\) Then,
\[
\Delta_C(\underline{t})=P_C(\underline{t}).
\]
\end{proposition}
\begin{proof}
Since the branches of \(C\) are ordered in such a way that the components of the link go from the innermost to the outermost, the first components are those corresponding to the smooth branches, and hence their corresponding link components are of type \((k,1)\) where \(k\) depends on the contact between them. The last components are those corresponding to the singular branches, which have associated components of type \((\overline{\beta}_0,\overline{\beta}_1).\) Therefore, using \cite[Theorem 5.3 and Theorem 5.4]{SW}, Theorem \ref{thm:SW1} and Lemma \ref{prop:poincarelemm2}, and taking into account that we operate from the innermost to the outermost component, the claim follows.
\end{proof}
\begin{rem}
We have written out these base cases explicitly because in \cite{SW} the Alexander polynomial is not explicitly computed.
\end{rem}
\subsubsection{The general procedure}
We now proceed to show the general procedure for the coincidence between both invariants. To do so we will follow the structure of Section \ref{subsec:iterativepoincare}. Let \((q,c)\)
be the contact pair of \(C\) and let \(C_{\alpha_{q-1}}\) be the corresponding approximating curve as in Subsection \ref{subsec:generalprocedure}. The curve \(C_{\alpha_{q-1}}\) is irreducible, so we can use the results of Section \ref{subsec:alexirreducible} to check that \((t-1)P_{C_{\alpha_{q-1}}}(t)=\Delta_{K_{\alpha_{q-1}}}(t).\) We will simplify notation as in Section \ref{subsec:iterativepoincare} and write \(\Delta_\sigma(t)\) for \(\Delta_{L_\sigma}(t),\) where \(L_\sigma\) is the link of the curve \(C_\sigma.\) Now we distinguish the cases \(c>0\) and \(c=0,\) as done in Section \ref{subsec:iterativepoincare}. There is nothing to prove if \(c>0,\) as the approximating curve is still irreducible; let us then assume \(c=0.\) In this case the corresponding approximating curve \(C_{\sigma_{0}}\) has \(s\) branches, hence its associated link has \(s\) components, and we can compute its Alexander polynomial as follows:
\begin{proposition}\label{prop:aux1alex}
\[\begin{split}
\Delta_\sigma(t_1,\dots,t_s)=&\Delta_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot Q\Big(w_{q,I_1},p_{q,I_1},t_1,\prod_{k=2}^{s} t_k^{w_{q,I_k}}\Big)\\ \cdot & \bigg ( \prod_{j=2}^{s-1}B\big(p_{q,j},w_{q,j},t_j,\prod_{k<j}t_k^{p_{q,k}},\prod_{k>j}t_{k}^{w_{q,k}}\big)\bigg)\cdot Q\Big(p_{q,s},w_{q,s},t_s,\prod_{k=1}^{s-1}t_k^{p_{q,k}}\Big).
\end{split}\]
In particular,
\(P_\sigma(t_1,\dots,t_s)=\Delta_\sigma(t_1,\dots,t_s).\)
\end{proposition}
\begin{proof}
The formula for the Alexander polynomial follows from the application of \cite[Theorems 5.2, 5.3 and 5.4]{SW}. The equality between the Poincaré series and the Alexander polynomial can now be deduced from Lemma \ref{lem:aux1} as follows: we have seen that
\[
\Delta_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})=P_{C_{\alpha_{q-1}}}(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot (t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}}-1).\]
By Lemma \ref{lem:aux1}, we only need to check that
\[\Delta_Q(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot Q\big(p_{q,s},w_{q,s},t_s,\prod_{k=1}^{s-1}t_k^{p_{q,k}}\big)=P_{C_{\alpha_{q-1}}}(t_1^{p_{q,I_1}}\cdots t_s^{p_{q,I_s}})\cdot \big((\prod_{k=1}^{s}t_k^{p_{q,k}})^{w_{q,s}}-1\big),\]
which follows by the previous identity and the definition of the \(Q\)--polynomial.
\end{proof}
Even though we do not have the gluing operation at our disposal in the case of more than one branch, this equality shows that the algebraic construction of Section \ref{sec:Poincareseries} is the exact analogue of the topological construction. The main idea is that the polynomial \(Q\) is used each time a maximal contact value is added to the value semigroup of the plane curve; observe that if there is no maximal contact curve, i.e. in the case where there exist branches in the smooth packages, then \(Q=1\) since \(p=1.\) On the other hand, the polynomial \(B\) is used each time a proper star point is added to the dual graph. In appending each type of value to the semigroup we need to be careful with the order, as our process depends heavily on the total order of the star points, i.e. on the set of principal values of the semigroup. Completing the verification of the equivalence between the algebraic and topological constructions is now an easy routine.
\medskip
Let us continue with the case where \(c>0\). In this case, the same proof as in Proposition \ref{prop:aux1alex}, but using Lemma \ref{lem:aux2}, shows that the Poincaré series coincides with the Alexander polynomial for the approximating curve associated with the proper star point \(\sigma_0.\) Once \(\sigma_0\) has been treated, we continue with the procedure of Subsection \ref{subsec:generalprocedure}. The distinguished points \(Q,\sigma\) are now at the stage \(Q=\sigma_0\) and \(\sigma\) is the next star point to be considered to compute the Poincaré series.
\medskip
Let \(\mathtt{I}=(\bigcup_{p=1}^{t} I_{p})\cup(\bigcup_{p=t+1}^{s} I_p)\) be the partition created at \(\sigma_0.\) Denote by \(\overline{\sigma}_1,\dots,\overline{\sigma}_\epsilon\) the star points between \(\sigma_0\) and the point through which the geodesics of \(I_1\) pass. Recall that at this point we have
\[
\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\preceq P.
\]
Then,
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item Assume \(|I_1|=1:\)
\begin{enumerate}
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(q\) minimal generators. As in Subsection \ref{subsec:generalprocedure} in this case there is nothing to prove.
\item The semigroup \(\Gamma^1\) of the first branch \(C^1\) of \(C\) has \(g_1>q\) minimal generators. Then,
\[\mathcal{S}_1=\{\alpha_1\prec \cdots\prec\alpha_q\preceq \sigma_0\preceq \overline{\sigma}_1\preceq\cdots\preceq\overline{\sigma}_\epsilon\prec \alpha_1^{I_1}\prec\cdots \alpha_{g_1-q}^{I_1}\}.\]
We are in the case \(Q=\sigma_0\) and \(\sigma=\alpha_1^{I_1};\) for \(i=2,\dots,g_1-q\) we will consider \(Q=\alpha^{I_1}_{i-1}\) and \(\sigma=\alpha^{I_1}_i.\) At each stage, we need to show that the Poincaré series \(P_{C_{\alpha^{I_1}_i}}=P_{\alpha^{I_1}_i}=P_\sigma\) equals the Alexander polynomial. To simplify notation, at each stage let us denote \(p_\sigma:=p_{q+i,I_1}\) and \(w_\sigma:=w_{q+i,I_1}.\) By definition of \(C_\sigma,\) its associated link \(L_\sigma=L_1\cup\cdots\cup L_s\) is obtained from the link of \(C_Q\) by an operation of type \((1)\) about \(L_1\) via a torus knot of type \((p_\sigma,w_\sigma)\) with winding number \(p_\sigma.\) Then, applying Theorem \ref{thm:SW5}(1), Theorem \ref{thm:SW1}, Proposition \ref{prop:case1b} and Proposition \ref{prop:aux1alex} we have
\begin{proposition} \label{prop:case1balex}
\[P_{\sigma}(\underline{t})=\Delta_Q(t_1^{p_\sigma},t_2,\dots,t_s)\cdot Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t_k^{w_{q+i,I_k}}\big)=\Delta_\sigma(\underline{t}).\]
\end{proposition}
\end{enumerate}
\item Assume \(|I_1|>1.\) There are two cases to be distinguished:
\begin{enumerate}
\item For all \(j\in I_1\) we have \(g_j=q,\) i.e. the semigroups \(\Gamma^{j}\) have \(q\) minimal generators. In this case, \(\sigma=\sigma_0^{I_1}\) is the first separation point of the branches of \(I_1\); let \(I_1=\cup_{k=1}^{s_1} I_{1,k}\) be the induced index partition (see also Subsection \ref{subsubsec:truncationstar}). By definition of \(C_\sigma,\) its associated link \(L_\sigma=L_1\cup\cdots\cup L_{s_1}\cup L_{s_1+1}\cup\cdots\cup L_{s_1+s-1}\) is obtained from the link \(L_Q=L'_1\cup L_2\cup \cdots\cup L_s\) of \(C_Q\) by successive operations of type \((2)\) about \(L'_1\) with winding number \(p_{\sigma,I_{1,k}}=1,\) since \(\sigma\) is an ordinary point and thus the corresponding term in the topological Puiseux series is not a characteristic exponent. Then, the application of Theorem \ref{thm:SW5}(3), Theorem \ref{thm:SW1}, Proposition \ref{prop:case2a} and Proposition \ref{prop:aux1alex} yields
\begin{proposition} \label{prop:alexcase2a}
\[
\begin{split}
P_\sigma(t_1,\dots,t_{s_1},t_{s_1+1},\dots,&t_{s_1+s-1})=\Delta_Q(t_1\cdots t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})\\
\cdot&\Big(\prod_{j=2}^{s_1}B(w_{q+1,I_{1,j}},1,t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t_k)\Big)\\
&=\Delta_\sigma(t_1,\dots,t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1}).
\end{split}
\]
\end{proposition}
We then set \(Q=\sigma_0^{I_{1}}\) and \(\sigma=\sigma_0^{I_{1,1}}\) to continue the process.
\item We assume that \(g_j>q\) for some \(j\in I_1.\) We distinguish again two subcases:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item[\textbf{(i)}] Assume \((f_1|\cdots|f_{|I_1|})\leq (q+1,0)\) and for simplicity assume \(g_1>q\). Let us denote by \(\sigma_{0}^{I_1}\) the first proper star point of the package \(I_1.\)
Let \(I_1=\cup_{k=1}^{s_1} I_{1,k}\) be the partition associated with the proper star \(\sigma_{0}^{I_1}.\) In this case, the link of \(C_\sigma\) is obtained from the link of \(C_Q\) by successive operations of type \((2)\) together with one operation of type \((1)\) all of them performed along the first component at each step. Then, applying Theorem \ref{thm:SW5}(1), Theorem \ref{thm:SW5}(3), Theorem \ref{thm:SW1}, Proposition \ref{prop:case2bi} and Proposition \ref{prop:aux1alex} we have
\begin{proposition}\label{prop:alexcase2bi}
\begin{align*}
& P_\sigma(t_1,\dots,t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})= \Delta_Q(t^{p_{q+1,I_{1,1}}}_1\cdots t^{p_{q+1,I_{1,s_1}}}_{s_1},t_{s_1+1},\dots,t_{s_1+s-1})\\
& \hspace{20pt} \cdot Q\Big(w_{q+1,I_{1,1}},p_{q+1,I_{1,1}},t_1,(\prod_{k=2}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}})\Big)\\
&\hspace{20pt} \cdot \Big(\prod_{j=2}^{s_1}B(w_{q+1,I_{1,j}},p_{q+1,I_{1,j}},t_{I_{1,j}},(\prod_{k>j}^{s_1}t_{k}^{w_{q+1,I_{1,k}}})(\prod_{k=s_1+1}^{s_1+s-1}t_{k}^{w_{q+1,I_{k-s_1+1}}}),\prod_{k<j}t^{p_{q+1,I_{1,k}}}_k)\Big)\\
&=\Delta_\sigma(t_1,\dots,t_{s_1},t_{s_1+1},\dots,t_{s_1+s-1}).
\end{align*}
\end{proposition}
\item[\textbf{(ii)}] Assume \((f_1|\cdots|f_{|I_1|})> (q+1,0),\) i.e. \((f_1|\cdots|f_{|I_1|})\geq (q+1,c)\) with \(c\neq 0,\) and for simplicity assume \(g_1> q.\) Let us denote \((q_{I_1},c_{I_1}):= (f_1|\cdots|f_{|I_1|}).\) Since \((q_{I_1},c_{I_1})> (q+1,0),\) there are \(q_{I_1}-q\) non-proper star points between \(\overline{\sigma}_\epsilon\) and \(\sigma_0^{I_1},\) i.e.
\[\overline{\sigma}_{\epsilon}\prec \alpha_{q+1}\prec \cdots \prec \alpha_{q_{I_1}}\preceq \sigma_0^{I_1}.\]
The situation in this case is the same as in case \((1)(b),\) and analogous reasoning yields the following:
\begin{proposition}
\[P_{\sigma}(\underline{t})=\Delta_Q(t_1^{p_\sigma},t_2,\dots,t_s)\cdot Q\big(w_\sigma,p_\sigma,t_1,\prod_{k=2}^{s} t_k^{w_{q+i,I_k}}\big)=\Delta_{\sigma}(\underline{t}).\]
\end{proposition}
\end{enumerate}
\end{enumerate}
\end{enumerate}
Our method shows the coincidence between both invariants from a very explicit point of view; moreover, our proof makes no appeal to the results of Eisenbud and Neumann \cite{EN}. Thus, it constitutes an alternative to the proof given by Campillo, Delgado and Gusein-Zade \cite{CDGduke}. This new proof exhibits the intrinsic topological nature of the Poincaré series of the value semigroup: it shows that the algebraic operation of adding one maximal contact value corresponds to the topological operation of satellization along one component of the link, and that the algebraic operation of adding one value associated to a proper star point corresponds to the topological operation of adding one branch to the associated link. Moreover, these are the only operations needed to construct the value semigroup of a plane curve singularity. The associated Mayer-Vietoris splitting is translated into the algebraic setting via a generalization of the gluing construction valid in the irreducible case. From this perspective, it is reasonable to ask whether the property of being a complete intersection isolated singularity is merely an algebraic characteristic or whether it possesses a deeper, intrinsic topological significance.
\section{Historical remarks}
Connections between knot theory and plane curve singularities were apparently realized for the first time by Poul Heegaard at the end of the 19th century, who sought to develop topological tools for the investigation of algebraic surfaces \cite[\S76,\S 77]{Epple}. Wilhelm Wirtinger was aware of this, and proposed to his student Karl Brauner the following problem: to describe the singularities of an algebraic function of two variables by means of the intersection of its discriminant curve with the spherical boundary of a small neighborhood of a singular point, as well as the associated branched covering. Brauner solved the problem both for irreducible and reducible discriminant curves \cite{Brauner28}. He actually described the topology of the link in terms of repeated cabling, as well as an explicit presentation of the fundamental group of the complement of the link \cite{Neu03}. Erich K\"ahler reproved this using a more modern approach based on Puiseux pairs \cite{Kahler}. However, Brauner left an open question: is it possible that two plane curve singularities with different Puiseux pairs could have topologically equivalent neighborhoods? At this point, the fact that the knot associated to the singularity is an iterated torus knot characterized by the Puiseux pairs was already known.
\medskip
Werner Burau answered the above question in the positive, both in the irreducible case \cite{Burau33} and in the particular reducible case of two branches \cite{Burau34}; at the same time, Oskar Zariski settled the question in the irreducible case by computing the Alexander polynomial of the knot \cite{Zar32}. He had already investigated the connections between knots and plane curves in \cite{Zar29}. The Alexander polynomial is an important invariant in knot theory; it was introduced by J.W. Alexander in order to determine the knot type \cite{Alexander}. Indeed, a crucial question in the mathematical atmosphere of the time concerned the invariants which completely determine the topology of a plane curve singularity. As already mentioned, the results of Burau \cite{Burau33, Burau34} showed that the Alexander polynomial completely determines the topology in the irreducible case and in the reducible case of two branches. Surprisingly, it was not until 1980 that a full answer to this question was given by Yamamoto \cite{Yamamoto}, who proved that the Alexander polynomial classifies the topological type of plane curve singularities. It is interesting to point out that the key to Yamamoto's result is also the description by Sumners and Woods \cite{SW} of the iterative computation of the Alexander polynomial that we used for our purpose.
\medskip
A key fact for understanding the theory developed on this topic in the second half of the twentieth century is the connection between algebraic links and \(3\)--manifolds: the classical example of a \(3\)--manifold is the exterior of an algebraic link embedded in the \(3\)--sphere. In 1967, Friedhelm Waldhausen \cite{waldhausenII} first introduced the concept of graph manifold, of which link exteriors are the canonical example. Waldhausen \cite{waldhausenI,waldhausenII} in fact provided a decomposition of the \(3\)--manifold with very nice geometrical properties, further explored independently by Jaco--Shalen \cite{JacoShalen} and Johannson \cite{Johannson}. This decomposition is at the core of the theoretical approach to the study of link exteriors proposed by Eisenbud and Neumann \cite{EN} in 1985.
\medskip
Eisenbud and Neumann's theory provides a ``good'' setting in which to compute invariants of a link in an additive way, deepening from a modern perspective the pioneering results of Seifert \cite{Seifert50} and Torres \cite{Torres} on the Alexander polynomial. In fact, another advantage of the Eisenbud and Neumann theory is that their construction allows one to compute other invariants of the link besides the Alexander polynomial. The appearance of the Eisenbud and Neumann theory in the context of Thurston's developments on the geometry of \(3\)--manifolds \cite{Thurston1, Thurston2, Thurston3} and its connection with the Poincaré conjecture suggests why Eisenbud and Neumann had such an impact on the research perspective of the topic at the end of the twentieth century.
\medskip
From the algebraic side, the value semigroup of an irreducible plane curve singularity was first introduced by Roger Apéry \cite{apery} in 1946. However, the milestone in the study of singularities from a purely algebraic point of view was the series of foundational articles of Zariski \cite{zarequiI,zarequiII,zarequiIII,zarsaturationI,zarsaturationII,zarsaturationIII} on the notion of equisingularity and the saturation of local rings. Combining the new concept of equisingularity with his previous results \cite{Zar32}, Zariski showed \cite{zarequiII} the first connection between the two approaches, the topological and the algebraic, by proving that the semigroup of values is in fact a topological invariant of an irreducible plane curve; thus equisingularity becomes topological invariance for irreducible plane curves.
\medskip
As pointed out by Félix Delgado \cite{Delgmanuscripta1}, R. Waldi \cite{Waldi} proved that the value semigroup of a plane curve with several branches determines its equisingularity type. After that, F. Delgado \cite{Delgmanuscripta1,Delgmanuscripta2} (see also the works of A. Garc\'ia \cite{Garcia} and V. Bayer \cite{Bayer} for the bibranch case) provided the full combinatorial algebraic description of this semigroup, together with the natural generalization of some properties of the irreducible case. After Delgado's description of the value semigroup for a reducible plane curve singularity, A. Campillo, K. Kiyek and Delgado himself introduced a sort of generating function \cite{CDKmanuscr} which can be associated with the value semigroup, the so-called Poincar\'e series. This has been studied extensively by Campillo, Delgado and Gusein-Zade, e.g.\ in \cite{CDG99a, CDG99b, CDG00, CDG02, CDG03a, CDG04, CDG05, CDG07}; surprisingly, in one of these investigations they showed that the Poincar\'e series --which turns out to be a polynomial in the case of a plane curve singularity with more than one branch-- coincides with the Alexander polynomial associated with the link of the singularity \cite{CDGduke}. They left open, however, the ultimate reason that could explain this fortunate circumstance, whose answer is sketched---we believe---in our paper.
\section{Introduction}
A random matrix is a matrix whose entries are random variables. Since
the eigenvalues of a matrix are continuous functions of its entries,
the eigenvalues of a random matrix are themselves random variables.
A random $N\times N$ matrix has the Jacobi $\beta$-Ensemble (J$\betaup$E) distribution
if the joint probability density function of its eigenvalues is
\begin{equation}
\label{eq:2}
\frac1{S_N(\alpha_1+1, \alpha_2+1, \beta/2)} \prod_{i=1}^N
x_i^{\alpha_1} (1-x_i)^{\alpha_2}|\Delta(\vec{x})|^{\beta},\qquad
0\leq x_i \leq 1,
\end{equation}
where
\begin{equation}
\label{eq:16}
S_N(a ,b ,c) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \prod_{i=0}^{N-1} \frac{\Gamma(1+(i+1)
c) \Gamma(a +ic)\Gamma(b +ic)}{\Gamma(1+c)\Gamma(
a +b +(N+i-1)c)},
\end{equation}
and the Vandermonde determinant is defined by
\begin{equation}
\label{eq:1}
\Delta(\vec{x}) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \prod_{1\leq i < j \leq N} (x_j-x_i).
\end{equation}
That \eqref{eq:2} is a properly normalised probability density
is a consequence of Selberg's integral \cite{sel:boe}.
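As an illustration, the product formula \eqref{eq:16} can be checked numerically against a direct evaluation of the normalising integral in the simplest non-trivial case $N=2$, $\alpha_1=\alpha_2=0$, $\beta=2$, where it reduces to $\int_0^1\int_0^1(x_2-x_1)^2\,{\mathrm d} x_1{\mathrm d} x_2=1/6$. A minimal Python sketch (the function names are ours, not from any library):

```python
import math

def selberg(N, a, b, c):
    """Selberg's product formula S_N(a, b, c), eq. (16)."""
    val = 1.0
    for i in range(N):
        val *= (math.gamma(1 + (i + 1) * c) * math.gamma(a + i * c)
                * math.gamma(b + i * c)) \
            / (math.gamma(1 + c) * math.gamma(a + b + (N + i - 1) * c))
    return val

# For N = 2, alpha_1 = alpha_2 = 0, beta = 2 the normalising integral is
# that of (x2 - x1)^2 over [0,1]^2, approximated here by a midpoint rule:
n = 400
h = 1.0 / n
grid = [(j + 0.5) * h for j in range(n)]
direct = sum((x2 - x1) ** 2 for x1 in grid for x2 in grid) * h * h
print(direct, selberg(2, 1, 1, 1))  # both close to 1/6
```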
In many situations $\beta$ is a non-negative integer, and we
will mostly be assuming that one of $\alpha_1$ or $\alpha_2$ is
a non-negative integer, but \eqref{eq:2} makes sense for
arbitrary real values of these parameters subject to the constraints
\begin{equation}
\label{eq:3}
\alpha_1 > -1,\quad \alpha_2 > -1, \quad \beta > - \frac12\min\left\{
\frac1N, \frac{\alpha_1}{N-1}, \frac{\alpha_2}{N-1}\right\}.
\end{equation}
The naming of this ensemble reflects the presence in
\eqref{eq:2} of the factors $x_i^{\alpha_1}(1-x_i)^{\alpha_2}$
which are a density with respect to which (a certain version of) the
classical Jacobi polynomials form an orthogonal set.
We label by $\phi_i$ the sorted eigenvalues, so that
$0 \leq \phi_1 \leq \phi_2 \leq \cdots \leq \phi_N \leq 1$. This
article is concerned with the distribution of the extreme
eigenvalues $\phi_1$ and $\phi_N$. In fact, since the change of
variables $x_i \mapsto 1 - x_i$, for $i=1,\ldots,N$, in \eqref{eq:2}
leaves the joint probability density invariant, save for the exchange
$\alpha_1 \leftrightarrow \alpha_2$, and reverses the order of the eigenvalues,
it will not present a loss of generality to focus on the
\emph{smallest} eigenvalue $\phi_1$.
The limiting empirical eigenvalue density for Jacobi random matrices
was derived in \cite{wac:tle}. For fixed $\alpha_1,\alpha_2$, the large $N$
limiting density is
\begin{equation}
\label{eq:151}
\frac{N}\pi \frac1{\sqrt{x(1-x)}},\qquad 0<x<1.
\end{equation}
This means that for large $N$ the expected number of eigenvalues in the
interval $[0,N^{-2}]$ is approximately
$\frac N\pi\int_0^{N^{-2}}(x(1-x))^{-1/2}\,{\mathrm d} x = \frac2\pi + {\mathrm O}(N^{-2})$,
and it is natural to expect $N^2\phi_1$ to have a non-trivial limiting
distribution.
Our main objects of interest will be the (cumulative) probability
distribution function $F_{\phi_1}(\xi) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} {\mathbb P}(\phi_1 \leq \xi)$,
and the rescaled version
\begin{equation}
\label{eq:8}
F_{N^2\phi_1}(x) = {\mathbb P}( N^2 \phi_1 \leq x) =
{\mathbb P}\left( \phi_1 \leq \frac{x}{N^2}\right) =
F_{\phi_1}\left( \frac{x}{N^2} \right).
\end{equation}
Deferring to below a more comprehensive summary of previous work on
this problem, we mention a result \cite{mor:eed} of
Moreno-Pozas, Morales-Jimenez, McKay in the case $\beta=2$ (the
Jacobi Unitary Ensemble, JUE). They proved, for
$\alpha_1=0,1$ and $\alpha_2\in{\mathbb N}_0$; and for $\alpha_1=2,
\alpha_2\in\{0,1,2\}$, the two-term asymptotic result
\begin{equation}
\label{eq:9}
F_{N^2\phi_1}(x) = 1 - {\mathrm e}^{-x}\det(I_{j-i}(2\sqrt{x})) +
\frac{\alpha_1+\alpha_2}N x{\mathrm e}^{-x}
\det(I_{2+j-i}(2\sqrt x)) + {\mathrm O}\left(\frac1{N^2}\right)
\end{equation}
where the determinants appearing in~\eqref{eq:9} are of size
$\alpha_1\times\alpha_1$, and $I_n(z)$ is the $I$-Bessel
function
\begin{equation}
\label{eq:10}
I_\nu(z) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \frac{z^\nu}{2^\nu\sqrt{\pi}\Gamma(\nu+\frac12)}\int_{-1}^1
{\mathrm e}^{-zt}(1-t^2)^{\nu-1/2}\,{\mathrm d} t,\qquad\Re\{\nu\}>-\frac12,
\end{equation}
and $I_{-n}(z)=(-1)^n I_n(z)$ for $n\in{\mathbb Z}$.
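For $\nu=0$ the representation \eqref{eq:10} can be compared with the defining power series $I_0(z)=\sum_{k\geq0}(z/2)^{2k}/(k!)^2$; substituting $t=\cos\theta$ (our choice, not made in the text) turns the integral into $\frac1\pi\int_0^\pi{\mathrm e}^{-z\cos\theta}\,{\mathrm d}\theta$. A short numerical sketch:

```python
import math

def I0_series(z, terms=40):
    # defining series I_0(z) = sum_k (z/2)^{2k} / (k!)^2
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (z / 2) ** 2 / (k + 1) ** 2
    return s

def I0_integral(z, n=20000):
    # eq. (10) with nu = 0, after the substitution t = cos(theta):
    # I_0(z) = (1/pi) * int_0^pi exp(-z cos(theta)) d(theta)
    h = math.pi / n
    return sum(math.exp(-z * math.cos((j + 0.5) * h))
               for j in range(n)) * h / math.pi

z = 2 * math.sqrt(0.7)
print(I0_series(z), I0_integral(z))
```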
On the other hand, Borodin and Forrester have derived \cite{bor:isa}
the leading-order distribution of the smallest eigenvalue of the J$\betaup$E\
for any $\beta>0$ and $\alpha_1\in{\mathbb N}_0$:
\begin{equation}
\label{eq:150}
\lim_{N\to\infty} F_{N^2\phi_1}(x)
=
1 - {\mathrm e}^{-\beta x/2}
\fourIdx{}0{(\beta/2)}1{F}\left(;\frac{2\alpha_1}{\beta};x\vec{1}^{\alpha_1}
\right),
\end{equation}
where $\fourIdx{}{0}{(\sigma)}{1}{F}(;c;\vec{x})$ is a multivariate
hypergeometric function that will be defined precisely
in Section~\ref{sec:mult-hyperg-funct}, and
$\vec{1}^n\mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=}(1,1,\ldots,1)\in{\mathbb R}^n$. Our principal result is a
version of the two-term asymptotic \eqref{eq:9} valid for $\beta>0$.
\begin{theorem} \label{thm:main}
Let $\phi_1$ be the smallest eigenvalue of the $N\times N$ Jacobi
$\beta$-Ensemble, $\beta>0$, with
$\alpha_1\in{\mathbb N}_0$ and $\alpha_2>-1$. For $x>0$,
\begin{multline}
\label{eq:140}%
F_{N^2\phi_1}(x) = 1 - {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \\
+ \frac{x^{1+\alpha_1}}N\left((\alpha_1 + \alpha_2 +1)-\frac\beta2\right)
\left(\frac\beta2\right)^{2\alpha_1}
\frac{\Gamma(1+\beta/2)}{\Gamma(1+\alpha_1)\Gamma(1+\alpha_1+\beta/2)}
\\\times {\mathrm e}^{-\beta x/2}
\fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta+2; x\vec{1}^{\alpha_1} \right) +
{\mathrm O}\left(\frac1{N^2}\right).
\end{multline}
The error estimate can depend on $\alpha_1, \alpha_2, \beta$ but is
uniform for $x$ in a compact set.
\end{theorem}
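As a numerical sanity check on the theorem (not part of its proof), consider $\alpha_1=1$, $\alpha_2=0$, $\beta=2$: the finite-$N$ formula \eqref{eq:24} below then reduces to ${\mathbb P}(\phi_1>\xi)=(1-\xi)^{N^2}\sum_{k=0}^N\binom Nk^2\xi^k$, the hypergeometric factors become one-variable $\fourIdx{}0{}1F$ series, and the correction term simplifies to $(x^2/2N)\,{\mathrm e}^{-x}\,\fourIdx{}0{}1F(;3;x)$. A hedged Python sketch (all function names are ours):

```python
import math

def F_N(x, N):
    # distribution of N^2*phi_1 at finite N for beta=2, alpha1=1, alpha2=0,
    # specialising eq. (24): P(phi_1 > xi) = (1-xi)^{N^2} 2F1(-N,-N;1;xi),
    # with the terminating series 2F1(-N,-N;1;xi) = sum_k C(N,k)^2 xi^k
    xi = x / N**2
    s = sum(math.comb(N, k) ** 2 * xi**k for k in range(N + 1))
    return 1 - (1 - xi) ** (N**2) * s

def hyp0f1(c, x, terms=60):
    # one-variable 0F1(;c;x) by its power series
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= x / ((c + k) * (k + 1))
    return s

x, N = 0.5, 100
leading = 1 - math.exp(-x) * hyp0f1(1.0, x)
# the theorem's 1/N correction for these parameters: (x^2/2N) e^{-x} 0F1(;3;x)
two_term = leading + x**2 / (2 * N) * math.exp(-x) * hyp0f1(3.0, x)
print(abs(F_N(x, N) - leading), abs(F_N(x, N) - two_term))
```

The residual after subtracting the correction term is of order $1/N^2$, as the theorem predicts.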
All our results for the distribution of the smallest eigenvalue
can be re-cast to give an analogous result for the largest
eigenvalue, as indicated earlier. We will not write down these
analogues for every result, giving just the following Corollary
of Theorem \ref{thm:main}.
\begin{corollary} \label{cor:main}
Let $\phi_N$ be the largest eigenvalue of the $N\times N$ Jacobi
$\beta$-Ensemble, $\beta>0$, with
$\alpha_2\in{\mathbb N}_0$ and $\alpha_1>-1$. For $x>0$,
\begin{multline}
{\mathbb P}(\phi_N \leq 1 - x/N^2)
= {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_2}\beta; x\vec{1}^{\alpha_2} \right) \\
- \frac{x^{1+\alpha_2}}N\left((\alpha_1 + \alpha_2 +1)-\frac\beta2\right)
\left(\frac\beta2\right)^{2\alpha_2}
\frac{\Gamma(1+\beta/2)}{\Gamma(1+\alpha_2)\Gamma(1+\alpha_2+\beta/2)}
\\\times {\mathrm e}^{-\beta x/2}
\fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_2}\beta+2; x\vec{1}^{\alpha_2} \right) +
{\mathrm O}\left(\frac1{N^2}\right).
\end{multline}
\end{corollary}
There are several known random matrix models that lead to J$\betaup$E\
eigenvalue distributions. Most famous are perhaps the double-Wishart
(or \textsc{Manova}) models from Statistics:
set $M_1$, $M_2$ to be independent $n_1\times N$
and $n_2\times N$ matrices with independent standard normal real random
variable entries, $n_1, n_2\geq N$. If
$A=M_1^\dag M_1$, $B= M_2^\dag M_2$, then the
matrix $A(A+B)^{-1}$ has eigenvalues distributed according to the
J$\betaup$E\ with $\beta=1$, $\alpha_1=(n_1-N-1)/2$ and $\alpha_2=(n_2-N-1)/2$
\cite{fis:tsd,hsu:otd,roy:psa,gir:ots,moo:otd}.
Since our results rely on $\alpha_1$ being an integer, this requires
the difference $n_1-N$ to be \emph{odd}.
If we repeat the above construction, with complex normal random
variables, then the eigenvalue distribution of $A(A+B)^{-1}$ is
J$\betaup$E\ with $\beta=2$, $\alpha_1=n_1-N$ and $\alpha_2=n_2-N$
\cite[Section 8]{jam:dom}.
Another model leading to the joint probability density function
\eqref{eq:2} is the corners process of random matrices from classical
compact groups: if $U$ is a random $m\times m$ unitary or orthogonal matrix
chosen with respect to Haar measure, $m\geq 2N$, and $M$ is the
principal $N\times N$ submatrix of $U$ (the upper-left corner matrix),
then, letting $s_1,\ldots,s_N$
denote the $N$ eigenvalues of $M^\dag M$,
the points $x_1=s_1/\beta,\ldots,x_N=s_N/\beta$ are distributed
according to \eqref{eq:2} with $\alpha_1=\beta/2-1$,
$\alpha_2=(m-2N+1)\beta/2-1$, and $\beta=2$ (unitary case) or $\beta=1$
(orthogonal case) \cite[\S7.2]{col:por,eat:gia}.
In the latter case $\alpha_1=-1/2$ which is not
an integer, so Theorem \ref{thm:main} does not apply, but
Corollary \ref{cor:main} does apply
for the distribution of the largest eigenvalue
if $m$ is an odd number (whence $\alpha_2$ is an integer).
Random matrix models that allow full exploration of the parameter space,
including to arbitrary $\beta>0$, are also known
\cite{lip:amm,kil:mmf,ede:tbj}.
The J$\betaup$E\ exhibits two ``hard edges'' in the spectrum, at $x=0$ and $x=1$,
since the eigenvalues are strictly confined between these values, which
furthermore coincide with the endpoints of the support of the limiting
eigenvalue density \eqref{eq:151}.
This is in contrast to some other random matrix models such as the
Gaussian ensembles \cite{meh:rm} which have compactly supported
limiting eigenvalue density---the famous Wigner's semi-circle law
\cite{wig:cvo,wig:otd}---but without any intrinsic obstacle to
having individual eigenvalues appearing at any point on the real line.
Statistics such as the distribution of smallest eigenvalues are expected
to be ``universal'' in the limit $N\to\infty$, in the sense that they ought
not to depend on the precise features of the random matrix model in
question. In our present context it means that the limiting distribution
\eqref{eq:150} will be valid for other matrix models with a hard
spectral edge. Indeed, the same limiting distribution has been proven
for a different set of matrix models exhibiting a hard edge---the
Laguerre $\beta$-Ensembles (L$\betaup$E; sometimes called
Wishart random matrices) \cite{for:era}, as well
as modifications of the JUE that preserve the hard edge \cite{kui:ufe}.
The finite $N$ corrections to the leading order derived in Theorem
\ref{thm:main} are not expected to be universal---indeed the presence
of the parameter $\alpha_2$ seems to rule that out---but
they do exhibit an interesting feature that had already been conjectured
for the Laguerre Unitary Ensemble at the hard edge \cite{ede:bui} and
proved for that model in \cite{bor:ano,per:fNc,hac:lcc}:
the correction term is proportional to the \emph{derivative} of
the main term. This holds for our two-term asymptotic \eqref{eq:140},
although it may not seem immediately apparent: see \eqref{eq:106} below.
Forrester and Trinh \cite{for:fsc} have investigated the eigenvalue
density for the L$\betaup$E for $\beta>0$,
and found two-term asymptotics at the hard edge of the spectrum, and
that the correction term is also proportional to the derivative of the
leading term. It seems likely that the methods of the present work
could be adapted to study the hard edge of the L$\betaup$E as well.
J$\betaup$E\ random matrices have a number of known applications. The outage
probability of multiple-input/multiple-output (MIMO) systems subject
to interference, such as those used in cellular mobile radio networks,
can be modelled in terms of the largest eigenvalue of JUE matrices
\cite{kan:qfi}. The conductance eigenvalues in random matrix models
for mesoscopic disordered quantum systems are known to be governed by
the J$\betaup$E\ distribution with $\alpha_2=0$ \cite{bee:rmt,for:qcp}. In
this context, expressions for the average spectral density have been
derived in terms of multi-variable hypergeometric functions
\cite{viv:ted}, somewhat similar to expressions for the smallest
eigenvalue derived in Section \ref{sec:main-calculations}.
Finally, some tests in multivariate Statistics are based on the
distributions of extreme eigenvalues of J$\betaup$E\ (generally the
parameters $\beta=1$ and $\beta=2$ corresponding to the real and
complex underlying fields are most relevant), see Roy \cite{roy:oah}.
Some of these statistical applications are reviewed in
Section 2 of \cite{joh:maa}. Roy's test has practical applications
in signal analysis in the presence of coloured noise, for which the
distribution of the largest eigenvalue of the JUE is required
\cite{cha:ebd}.
Aside from the references \cite{mor:eed,bor:isa} mentioned above,
theoretical work on the distribution of extreme eigenvalues for Jacobi
ensembles goes back at least to \cite{kha:dot} for $\beta=2$ and
Constantine \cite{con:snd} for $\beta=1$, motivated by the
aforementioned applications in Statistics.
In \cite{dum:dot} expressions were derived for distribution functions
in terms of multivariate hypergeometric functions in $N$ variables,
and corresponding formulae for density functions in $N-1$ variables
given in \cite{dum:sed} and \cite{dre:cwb}. Algorithms for a numerical
evaluation of the distribution of the smallest eigenvalue in the
JUE were given in \cite{due:tle} with methods applicable to arbitrary
values of the parameters $\alpha_1, \alpha_2 > -1$, and furthermore
which extend even to non-integer values of $N$.
Johnstone \cite{joh:maa} and Jiang \cite{jia:ltf} have investigated
statistics of extreme eigenvalues, and other quantities, in a setting
where the parameter values $\alpha_1$ and $\alpha_2$ are not fixed,
but vary as $N\to\infty$, leading to a soft edge in the spectrum.
Scaling limits at the hard and soft-edge were treated together in
\cite{hol:eso}.
Forrester and Li \cite{for:roc} have studied eigenvalue correlations
for a broader class of unitary ensembles with a hard spectral edge
(which includes the JUE) and found $1/N$-correction terms
consistent with \cite{mor:eed}.
In Section \ref{sec:multi-vari-hyperg} we introduce some of the
analytic tools that will be used (multi-variable hypergeometric
functions and Jacobi polynomials). In Section \ref{sec:main-calculations}
we collect some exact formul\ae\ for finite $N$. In Section
\ref{sec:two-term-asymptotic} we prove a two-term asymptotic
formula for $\fourIdx{}2{(\sigma)}1{F}$
multivariate hypergeometric functions, that is then
used to give the proof of Theorem \ref{thm:main} in Section
\ref{sec:main-result}. A few special cases are treated in
Section \ref{sec:explicit-formulas}.
\section{Multi-variable hypergeometric functions and
Jacobi polynomials} \label{sec:multi-vari-hyperg}
Multi-variable analogues of classical hypergeometric functions and
orthogonal polynomials are a relatively recently developed area
of study that has nevertheless proved very useful in Random
Matrix Theory, see, e.g.\ \cite{dum:MOPS,bak:tcs,for:sds,%
win:dmf,mez:mot,jon:sft} as well as many other articles cited in the
present work. They can be defined as series of Jack
polynomials which we define first.
\subsection{Jack polynomials}
Let $\vec{x}=(x_1,\ldots,x_n)$ be a set of variables,
$\lambda=(\lambda_1,\ldots,\lambda_n)$ be an integer partition\footnote{%
We can assume that the number of parts of $\lambda$ is equal to the
number of variables, since if there are more parts than variables the
corresponding Jack polynomial is zero; on the other hand, any partition
can be padded with $0$s to increase the number of parts to $n$.}
of size $|\lambda|=\lambda_1+\cdots+\lambda_n$, and
let $\sigma>0$. The Jack polynomials \cite{jac:aco} are certain homogeneous,
symmetric polynomials $C_\lambda^{(\sigma)}(\vec{x})$ of degree $|\lambda|$.
We define the operators
\begin{equation}
\label{eq:diff1}
\mathrm{D}_k \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \sum_{i=1}^n x_i^k \frac{\partial^2}{\partial x_i^2} +
\frac2\sigma \sum_{i\neq j} \frac{x_i^k}{x_i-x_j}\frac\partial{\partial x_i}
\end{equation}
and
\begin{equation}
\label{eq:diff2}
\mathrm{E}_k \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \sum_{i=1}^n x_i^k \frac\partial{\partial x_i},
\end{equation}
for $k\in{\mathbb N}_0$.
Jack polynomials are joint eigenfunctions of $\mathrm{E}_1$ and $\mathrm{D}_2$
\cite{sta:scp}. In fact,
\begin{equation}
\label{eq:diff3}
\mathrm{E}_1 C^{(\sigma)}_\lambda(\vec{x}) = |\lambda|C^{(\sigma)}_\lambda(\vec{x})
\end{equation}
(a relation satisfied by \emph{any} homogeneous polynomial of degree
$|\lambda|$) and
\begin{equation}
\label{eq:diff4}
\mathrm{D}_2 C^{(\sigma)}_\lambda(\vec{x}) = \left(\rho_\lambda +
\frac2\sigma|\lambda|(n-1)\right)
C^{(\sigma)}_\lambda(\vec{x}),
\end{equation}
where
\begin{equation}
\label{eq:diff5}
\rho_\lambda \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \sum_{i=1}^n \lambda_i \left( \lambda_i - 1 -
\frac2\sigma (i-1) \right).
\end{equation}
The definition of $C_\lambda^{(\sigma)}(\vec{x})$ is completed by
triangularisation:
if
\begin{equation}
\label{eq:152}
C_\lambda^{(\sigma)}(\vec{x}) = \sum_{\mu} b_{\mu\lambda} m_\mu(\vec{x})
\end{equation}
is the expansion of $C_\lambda^{(\sigma)}$ in the basis of
monomial symmetric functions, then the coefficient $b_{\mu\lambda}=0$
unless $\mu\leq\lambda$ in terms of dominance ordering of partitions
\cite{sta:scp};
and normalisation:
\begin{equation}
\label{eq:147}
\sum_{|\lambda|=k} C_\lambda^{(\sigma)}(\vec{x}) = (x_1+\cdots+x_n)^k,\qquad
k\in{\mathbb N}.
\end{equation}
(That such a normalisation exists is proved in
\cite[Prop.~2.3]{sta:scp}, although a different normalisation for the
Jack polynomials is actually used throughout \cite{sta:scp}. The
normalisation leading to \eqref{eq:147} is commonly-used for applications
in Random Matrix Theory.)
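As a concrete illustration (worked out by hand, not taken from \cite{sta:scp}): in $n=2$ variables at degree $2$ the eigenvalue equation \eqref{eq:diff4} and the normalisation \eqref{eq:147} force $C^{(\sigma)}_{(2)}=m_{(2)}+\frac2{\sigma+1}m_{(1,1)}$ and $C^{(\sigma)}_{(1,1)}=\frac{2\sigma}{\sigma+1}m_{(1,1)}$, where $m_\mu$ are the monomial symmetric polynomials. The following sketch verifies both defining properties in exact rational arithmetic:

```python
from fractions import Fraction

def apply_D2(a, b, x1, x2, sigma):
    # D_2 of C = a*(x1^2 + x2^2) + b*x1*x2, using the explicit partial
    # derivatives of this quadratic (eq. (diff1) with k = 2, n = 2)
    d1 = 2 * a * x1 + b * x2          # dC/dx1
    d2 = 2 * a * x2 + b * x1          # dC/dx2
    second = 2 * a * (x1**2 + x2**2)  # sum of x_i^2 * d^2 C / dx_i^2
    cross = (Fraction(2) / sigma) * (x1**2 * d1 - x2**2 * d2) / (x1 - x2)
    return second + cross

sigma = Fraction(3, 2)
x1, x2 = Fraction(2, 5), Fraction(1, 10)

# coefficients (a, b) of the degree-2 Jack polynomials in the monomial
# basis m_(2) = x1^2 + x2^2, m_(1,1) = x1*x2
C2 = (Fraction(1), Fraction(2) / (sigma + 1))
C11 = (Fraction(0), 2 * sigma / (sigma + 1))

def value(coeffs):
    a, b = coeffs
    return a * (x1**2 + x2**2) + b * x1 * x2

norm_ok = value(C2) + value(C11) == (x1 + x2) ** 2   # eq. (147), k = 2
eig2_ok = apply_D2(*C2, x1, x2, sigma) == (2 + 4 / sigma) * value(C2)
eig11_ok = apply_D2(*C11, x1, x2, sigma) == (2 / sigma) * value(C11)
print(norm_ok, eig2_ok, eig11_ok)
```

The two eigenvalues $2+4/\sigma$ and $2/\sigma$ come from \eqref{eq:diff4} and \eqref{eq:diff5} with $\lambda=(2)$ and $\lambda=(1,1)$ respectively.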
\subsection{Multi-variable Jacobi polynomials}
As with multi-variable hypergeometric functions defined in the next
subsection, multi-variable generalisations of the classical Jacobi
polynomials were initially studied for Jack parameter $\sigma=2$
\cite{jam:gjp}
with applications in Statistics in mind. Later these were generalised
to other values of $\sigma$, with a variety of conventions for
normalisation and support of the orthogonality measure \cite{vre:ffe,
hec:rsaI, opd:sao, ols:mjp}. They sometimes go by the name ``Jacobi
polynomials associated with the root system $BC_n$'' \cite{koo:sfa}.
In our definitions, we follow \cite{las:pdj} with a difference in the
choice of normalisation.
For $a,b\in{\mathbb R}$ fixed, $J_\lambda^{\sigma,a,b}(\vec{x})$ is a symmetric
polynomial eigenfunction of the operator
\begin{equation}
\label{eq:139}
\mathrm{D}_2 - \mathrm{D}_1 + (a+b+2)\mathrm{E}_1 - (a+1)\mathrm{E}_0
\end{equation}
of the form
\begin{equation}
\label{eq:116}
J_\lambda^{\sigma,a,b}(\vec{x}) = \sum_{\mu\subseteq\lambda} c_{\mu,\lambda}
C_\mu^{(\sigma)}(\vec{x}),
\end{equation}
for constants $c_{\mu,\lambda}$ depending on $a, b$ and $\sigma$, and the
notation $\mu\subseteq\lambda$ means $\mu_i\leq\lambda_i$ for $i=1,\ldots,n$.
We normalise $J_\lambda^{\sigma,a,b}$ by requiring $c_{\lambda,\lambda}=1$
(the ``monic'' choice).
The multi-variable Jacobi polynomials $\{ J_\lambda^{2/\beta,\alpha_1,
\alpha_2}(\vec{x})\}$ are orthogonal with respect to the joint
probability density \eqref{eq:2} of the J$\betaup$E\
\cite[Th\'eor\`eme 2]{las:pdj}.
\subsection{Multivariate hypergeometric functions}
\label{sec:mult-hyperg-funct}
Multivariate hypergeometric functions were introduced
for general values of the Jack parameter $\sigma$ by
Kaneko \cite{kan:sia} and Kor\'anyi \cite{kor:hti},
generalising the definition relevant to the case $\sigma=2$
introduced by Herz \cite{her:bfo}, the Statistics applications
of which were studied in \cite{con:snd, jam:dom, mui:aom}.
Efficient numerical implementations of multi-variable hypergeometric
functions are available \cite{koe:tee}.
They are defined as a sum over partitions as
\begin{equation}
\label{eq:6}
\fourIdx{}{p}{(\sigma)}{q}{F}(a_1,\ldots, a_p; b_1,\ldots, b_q;\vec{x})
\mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \sum_{\lambda} \frac{[a_1]_\lambda^{(\sigma)}\cdots
[a_p]_\lambda^{(\sigma)}}{[b_1]_\lambda^{(\sigma)}\cdots
[b_q]_\lambda^{(\sigma)}|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}),
\end{equation}
where $[a]_\lambda^{(\sigma)}$ is the generalised Pochhammer symbol
defined by
\begin{equation}
\label{eq:7}
[a]_\lambda^{(\sigma)} \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \prod_{i=1}^n \left( a-\frac{i-1}\sigma
\right)_{\lambda_i},
\end{equation}
and the classical Pochhammer symbol $(a)_n$ is
\begin{equation}
\label{eq:148}
(a)_n \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \frac{\Gamma(a+n)}{\Gamma(a)} = \prod_{j=0}^{n-1} (a+j),
\qquad a\in{\mathbb C}, n\in {\mathbb N}.
\end{equation}
The reader familiar with hypergeometric functions of a single variable
(recapitulated in \eqref{eq:79} below) will recognise the generalisation
\eqref{eq:6}.
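In particular, for a one-part partition $\lambda=(k)$ the generalised symbol \eqref{eq:7} reduces to the classical $(a)_k$, while in a single variable $C^{(\sigma)}_{(k)}(x)=x^k$ by \eqref{eq:147}, so \eqref{eq:6} collapses to the classical ${}_pF_q$ for every $\sigma$. A quick numerical sketch of both facts, using the elementary identity ${}_2F_1(1,1;2;x)=-\log(1-x)/x$ (function names ours):

```python
import math

def poch(a, n):
    # classical Pochhammer symbol (a)_n, eq. (148)
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

def gen_poch(a, lam, sigma):
    # generalised Pochhammer symbol [a]_lambda^{(sigma)}, eq. (7)
    return math.prod(poch(a - i / sigma, li) for i, li in enumerate(lam))

# a one-part partition reduces to the classical symbol
reduction = abs(gen_poch(0.7, (5,), 2.0) - poch(0.7, 5))

# in one variable C_(k)(x) = x^k, so (6) becomes the Gauss series;
# compare 2F1(1,1;2;x) against -log(1-x)/x
x = 0.3
series = sum(poch(1, k) ** 2 / (poch(2, k) * math.factorial(k)) * x**k
             for k in range(80))
print(reduction, series, -math.log(1 - x) / x)
```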
For general values of the parameters $a_1,\ldots,a_p,b_1,\ldots,b_q$
the series in \eqref{eq:6} converges absolutely for all
$\vec{x}\in{\mathbb C}^n$ if $p\leq q$ and for $\vec{x}$ in some ball if
$p=q+1$ \cite{kan:sia}. However, if any of the ``upper'' parameters,
$a_1$ say, is equal to a negative integer $-m$, $m>0$, then the series
contains only finitely-many terms and defines a multi-variable
symmetric polynomial of degree $mn$.
\subsection{Some useful identities}
Yan undertook one of the first systematic studies of multi-variable
hypergeometric functions for arbitrary $\sigma>0$ and proved a number of
formul\ae\ and identities, including
the Pfaff-like formula \cite[eq.~(35)]{yan:aco}
\begin{equation}
\label{eq:11}
\twoFone{\sigma}(a,b;c;\vec{x})
=\prod_{i=1}^n(1-x_i)^{-a} \twoFone{\sigma}
\left( a, c-b; c; \frac{-x_1}{1-x_1},\ldots, \frac{-x_n}{1-x_n}\right).
\end{equation}
(The case $\sigma=2$ was derived in \cite[Theorem 7.4.3]{mui:aom}.)
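In one variable, \eqref{eq:11} is the classical Pfaff transformation, which is easy to verify numerically from the truncated Gauss series; a short sketch with arbitrarily chosen parameter values:

```python
import math

def hyp2f1(a, b, c, x, terms=120):
    # truncated Gauss series, the n = 1 instance of eq. (6); |x| < 1
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
    return s

# one-variable check of the Pfaff-type identity (11)
a, b, c, x = 0.3, 0.7, 1.2, 0.25
lhs = hyp2f1(a, b, c, x)
rhs = (1 - x) ** (-a) * hyp2f1(a, c - b, c, -x / (1 - x))
print(lhs, rhs)
```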
A number of integral representations are also available. We mention
here, and will use below, the formula due to
Kaneko \cite{kan:sia}:
\begin{multline}
\label{eq:22}
\int_0^1\cdots\int_0^1 \prod_{i=1}^n \biggl( x_i^{a-1}(1-x_i)^{b-1} \prod_{j=1}^m
(x_i-t_j) \biggr) |\Delta(\vec{x})|^{2/\sigma}{\mathrm d}^n\vec{x} \\
=S_n(a+m,b,1/\sigma) \fourIdx{}2{(1/\sigma)}1{F}\left( -n,
{\sigma}({a+b+m-1}) +n-1; \sigma({a+m-1}); \vec{t}\right),
\end{multline}
valid for $\Re\{a\}>0, \Re\{b\}>0$, $\Re\{1/\sigma\}>-\min\{1/n,
\Re\{a\}/(n-1), \Re\{b\}/(n-1)\}$, and $\vec{t}=(t_1,\ldots,t_m)$.
In \eqref{eq:22} and below we use ${\mathrm d}^{n}\vec{x}$ as a shorthand
for ${\mathrm d} x_1\cdots {\mathrm d} x_n$.
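For $n=m=1$, formula \eqref{eq:22} can be checked directly: the hypergeometric factor terminates after two terms and equals $1-(a+b)t/a$ (independently of $\sigma$, since one-part generalised Pochhammer symbols are classical), while $S_1(a+m,b,1/\sigma)=B(a+1,b)$. A numerical sketch with arbitrarily chosen parameter values:

```python
import math

def beta_fn(p, q):
    # Euler Beta function B(p, q)
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

a, b, t = 1.5, 2.0, 0.4

# left-hand side of (22) with n = m = 1, by a midpoint rule
n = 200000
h = 1.0 / n
lhs = 0.0
for j in range(n):
    x = (j + 0.5) * h
    lhs += x ** (a - 1) * (1 - x) ** (b - 1) * (x - t)
lhs *= h

# right-hand side: S_1(a+1, b, 1/sigma) = B(a+1, b), and the terminating
# hypergeometric equals 1 - (a+b) t / a for every Jack parameter
rhs = beta_fn(a + 1, b) * (1 - (a + b) * t / a)
print(lhs, rhs)
```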
\section{Calculations for finite-size matrices}\label{sec:main-calculations}
In this section we collect some formul\ae\ for the distribution and
density of the smallest eigenvalue of a J$\betaup$E\ matrix of fixed finite
size $N\times N$.
\subsection{Probability distribution of the smallest eigenvalue}
If $\phi_1$ is the smallest eigenvalue of the J$\betaup$E\ then, for any
constant $\xi\in{\mathbb R}$,
\begin{equation}
\label{eq:4}
F_{\phi_1}(\xi) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} {\mathbb P}(\phi_1 \leq \xi) = 1 - {\mathbb P}(\phi_1 > \xi).
\end{equation}
As all eigenvalues are between $0$ and $1$, we obviously have
\begin{equation}
\label{eq:128}
F_{\phi_1}(\xi) = \left\{ \begin{array}{cc}
0, & \xi\leq0, \\
1, & \xi\geq1,
\end{array}\right.
\end{equation}
so it will be sufficient to find expressions for the probability
${\mathbb P}(\phi_1 > \xi)$, $0<\xi<1$.
\begin{proposition}\label{prop:finite_N}
Let $\phi_1$ be the smallest eigenvalue of the joint distribution
\eqref{eq:2} with $\alpha_1\in{\mathbb N}$. Then, for $0<\xi<1$,
\begin{equation}
\label{eq:24}
{\mathbb P}(\phi_1 > \xi) = (1-\xi)^{N(1+\alpha_2+(N-1)\beta/2)}
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\xi \vec{1}^{\alpha_1}\right).
\end{equation}
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
Recalling that $x_1,\ldots,x_N$ are the un-ordered eigenvalues, we integrate
the joint probability density \eqref{eq:2}, to get
\begin{align}
{\mathbb P}(\phi_1 > \xi) &= {\mathbb P}(x_1 > \xi, x_2>\xi, \ldots, x_N>\xi)
\nonumber \\
&= \frac1{S_N(\alpha_1+1, \alpha_2+1, \beta/2)}
\int_\xi^1\cdots \int_\xi^1
\prod_{i=1}^N
x_i^{\alpha_1} (1-x_i)^{\alpha_2}|\Delta(\vec{x})|^{\beta}\,{\mathrm d}^N\vec{x}.
\label{eq:5}
\end{align}
If we make the substitution $y_i = (x_i-\xi)/(1-\xi)$, $1\leq i \leq N$,
this maps each of the integrals to an integral over $[0,1]$, and we have
\begin{multline}
\label{eq:113}
{\mathbb P}(\phi_1>\xi) = \frac{(1-\xi)^{N(1+\alpha_1+\alpha_2 + (N-1)\beta/2)}}%
{S_N(\alpha_1+1,\alpha_2+1,\beta/2)} \\\times
\int_0^1\cdots\int_0^1 \prod_{i=1}^N \left( y_i + \frac{\xi}{1-\xi}
\right)^{\alpha_1}(1-y_i)^{\alpha_2} |\Delta(\vec{y})|^\beta\,{\mathrm d}^N\vec{y}.
\end{multline}
For $\alpha_1\in{\mathbb N}$ this integral can be evaluated by means
of Kaneko's integral \eqref{eq:22} to get
\begin{multline}
\label{eq:23}
{\mathbb P}(\phi_1>\xi) =
(1-\xi)^{N(1+\alpha_1+\alpha_2 + (N-1)\beta/2)} \\ \times%
\fourIdx{}2{(\beta/2)}1{F}\left( -N, \frac{2}{\beta}(\alpha_1+\alpha_2+1)+
N-1; \frac{2\alpha_1}{\beta}; \frac{-\xi}{1-\xi}\vec{1}^{\alpha_1}\right).
\end{multline}
Up to this point we have followed Borodin and Forrester's paper \cite{bor:isa}
(our \eqref{eq:23} is equation (3.16) therein). The only novel step in
the proof is to simplify the argument of the multivariate
hypergeometric function in \eqref{eq:23} by applying the Pfaff-like
identity \eqref{eq:11} to give \eqref{eq:24}. \hspace*{\fill}~$\Box$
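In the smallest case $N=2$, $\beta=2$, $\alpha_1=1$, $\alpha_2=0$, formula \eqref{eq:24} reads ${\mathbb P}(\phi_1>\xi)=(1-\xi)^4(1+4\xi+\xi^2)$, since the terminating series is $\sum_{k=0}^2\binom2k^2\xi^k=1+4\xi+\xi^2$ and $S_2(2,1,1)=1/36$ by \eqref{eq:16}. This is easy to confirm against the defining integral \eqref{eq:5}; an illustrative numerical sketch:

```python
xi = 0.3
# direct evaluation of (5): 36 times the integral of x1*x2*(x2-x1)^2
# over [xi,1]^2, by a midpoint rule; 1/S_2(2,1,1) = 36 from eq. (16)
n = 500
h = (1 - xi) / n
grid = [xi + (j + 0.5) * h for j in range(n)]
direct = 36 * sum(x1 * x2 * (x2 - x1) ** 2
                  for x1 in grid for x2 in grid) * h * h
closed = (1 - xi) ** 4 * (1 + 4 * xi + xi**2)
print(direct, closed)
```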
Based on \eqref{eq:23}, Borodin and Forrester proved the asymptotic
scaling limit \eqref{eq:150} for the smallest eigenvalue.
We have also a formula for ${\mathbb P}(\phi_1>\xi)$ in terms of
multi-variable Jacobi polynomials.
\begin{corollary} \label{cor:Jacobi_polys}
With $\phi_1$ and $\alpha_1\in{\mathbb N}$ as above, an alternative
expression for the probability in Proposition \ref{prop:finite_N}
is, for $0<\xi<1$,
\begin{equation}
\label{eq:153}
{\mathbb P}(\phi_1>\xi) = (1-\xi)^{N(1+\alpha_1+\alpha_2+(N-1)\beta/2)}
\frac{P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
\left(\frac{-\xi}{1-\xi}\vec{1}^{\alpha_1}\right)}%
{P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
(\vec{0}^{\alpha_1})},
\end{equation}
where $P_\lambda^{\sigma,a,b}(\vec{x})$ is the multi-variable Jacobi
polynomial, and an explicit expression for the denominator in
\eqref{eq:153} is
\begin{multline}
P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
(\vec{0}^{\alpha_1}) = \frac{(-1)^{N\alpha_1} \alpha_1! (N\alpha_1)!
(1+2\alpha_1/\beta)_{N-1}}{(N-1)! (N\beta/2)_{\alpha_1}}
\\\times
\prod_{i=1}^{\alpha_1} \frac1{(N-1+2(\alpha_2+1+i)/\beta)_N}.
\end{multline}
In these expressions $\vec{0}^n$ is a shorthand for $(0,\ldots,0)\in
{\mathbb R}^n$.
\end{corollary}
Corollary \ref{cor:Jacobi_polys} will be proved in Section
\ref{sec:main-result:2}.
\subsection{Probability density of the smallest eigenvalue}
Our main interest is in the probability \emph{distribution} function
of the smallest eigenvalue $\phi_1$ of the J$\betaup$E. However
with little effort we can derive a formula for a probability density
in terms of a multi-variable hypergeometric function,
that will also be used to prove a key differentiation identity
(Corollary \ref{cor:derivative} below).
\begin{proposition}
If $\alpha_1\in{\mathbb N}$, a marginal probability density function
for the smallest eigenvalue $\phi_1$ of the Jacobi
$\beta$-Ensemble \eqref{eq:2} is given
by
\begin{multline}
\label{eq:13}
Z_N(\alpha_1, \alpha_2, \beta) \phi_1^{\alpha_1}
(1-\phi_1)^{\alpha_2 + (N-1)(1+\alpha_2 +N\beta/2)} \\
\times \twoFone{\beta/2} \left( 1-N, 2 - N - \frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta + 2 ; \phi_1\vec{1}^{\alpha_1} \right),
\end{multline}
for $0\leq\phi_1\leq1$, where the normalisation constant is
\begin{equation}
\label{eq:14}
Z_N(\alpha_1, \alpha_2, \beta) \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \frac{N S_{N-1}(\alpha_1 + 1
+ \beta, \alpha_2 +1, \beta/2)}%
{S_N(\alpha_1 + 1, \alpha_2 + 1, \beta/2)} .
\end{equation}
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
The joint probability density function of the \emph{ordered} eigenvalues
of the J$\betaup$E\ is
\begin{equation}
\label{eq:144}
\frac{N!}{S_N(\alpha_1+1, \alpha_2+1, \beta/2)} \prod_{i=1}^N
\phi_i^{\alpha_1} (1-\phi_i)^{\alpha_2}|\Delta({\pmb{\phiup}})|^{\beta},\qquad
0\leq \phi_1\leq\cdots\leq\phi_N \leq 1.
\end{equation}
This has the same functional form as \eqref{eq:2}, except for the
factor $N!$ in the numerator to account for the ordering of the
variables. To derive the marginal density function for $\phi_1$ we
integrate out all the other variables
\begin{align}
&\frac{N!}{S_N(\alpha_1+1, \alpha_2+1, \beta/2)}
\int_{\phi_1}^1 \int_{\phi_2}^1 \cdots\int_{\phi_{N-1}}^1 \prod_{i=1}^N
\phi_i^{\alpha_1} (1-\phi_i)^{\alpha_2}|\Delta({\pmb{\phiup}})|^{\beta}
\,{\mathrm d}\phi_2\cdots{\mathrm d}\phi_N \nonumber \\
&=\frac{N!/(N-1)!}{S_N(\alpha_1+1, \alpha_2+1, \beta/2)}
\int_{\phi_1}^1 \int_{\phi_1}^1 \cdots\int_{\phi_{1}}^1 \prod_{i=1}^N
\phi_i^{\alpha_1} (1-\phi_i)^{\alpha_2}|\Delta({\pmb{\phiup}})|^{\beta}
\,{\mathrm d}\phi_2\cdots{\mathrm d}\phi_N, \label{eq:145}
\end{align}
un-ordering the $N-1$ integrations. With the change of variables
$y_i=(\phi_i-\phi_1)/(1-\phi_1)$ for $i=2,\ldots,N$, this multiple
integral becomes
\begin{multline}
\label{eq:146}
\frac{N}{S_N(\alpha_1+1, \alpha_2+1, \beta/2)} \phi_1^{\alpha_1}
(1-\phi_1)^{\alpha_2} (1-\phi_1)^{(N-1)(1+\alpha_1+\alpha_2+\beta) +
(N-1)(N-2)\beta/2} \\
\times \int_{0}^1 \int_{0}^1 \cdots\int_{0}^1 \prod_{i=2}^N
\left(y_i + \frac{\phi_1}{1-\phi_1}\right)^{\alpha_1}
(1-y_i)^{\alpha_2}y_i^\beta |\Delta(\vec{y})|^{\beta}
\,{\mathrm d}^{N-1}\vec{y}
\end{multline}
where, in a slightly unusual notation, $\vec{y}=(y_2,\ldots,y_N)\in
{\mathbb R}^{N-1}$. The $(N-1)$-fold multiple integral may be evaluated by
means of Kaneko's integral \eqref{eq:22} to give
\begin{multline}
\label{eq:12}
\frac{N S_{N-1}(\alpha_1 + 1 + \beta, \alpha_2 +1, \beta/2)}%
{S_N(\alpha_1 + 1, \alpha_2 + 1, \beta/2)}
\phi_1^{\alpha_1} (1-\phi_1)^{\alpha_2 + (N-1)(1+\alpha_1 + \alpha_2
+N\beta/2)} \\
\times \twoFone{\beta/2} \left( 1-N, \frac2\beta(\alpha_1+\alpha_2+1)
+ N; \frac{2\alpha_1}\beta + 2 ; \frac{-\phi_1}{1-\phi_1}\vec{1}^{\alpha_1}
\right).
\end{multline}
By an application of the Pfaff-like identity \eqref{eq:11}, this may be
re-written as \eqref{eq:13}. \hspace*{\fill}~$\Box$
The formula \eqref{eq:13} for the probability density function was first
derived by Dumitriu \cite{dum:sed}, with a different method of proof.
Slightly different, but equivalent, multivariable hypergeometric function
representations for the probability density function have
been given in \cite{dre:cwb}.
\begin{corollary} \label{cor:derivative} For $\alpha_1\in{\mathbb N}$
we have the derivative identity
\begin{multline}
\label{eq:25}
\frac{{\mathrm d}}{{\mathrm d} \xi} \left( (1-\xi)^{N(1+\alpha_2+(N-1)\beta/2)}
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\xi \vec{1}^{\alpha_1}\right)\right)
\\ = - Z_N(\alpha_1,\alpha_2,\beta) \xi^{\alpha_1}
(1-\xi)^{N(1+\alpha_2+(N-1)\beta/2)-1} \\ \times
\twoFone{\beta/2} \left( 1-N, 2 - N - \frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta + 2 ; \xi\vec{1}^{\alpha_1} \right),
\end{multline}
for all $\xi\in{\mathbb C}$ except possibly $\xi=1$.
\end{corollary}
\noindent{\sl Proof.}\phantom{X}
From \eqref{eq:4} and \eqref{eq:24} above the probability distribution
function of $\phi_1$, the smallest eigenvalue, is
\begin{equation}
F_{\phi_1}(\xi) = 1 - (1-\xi)^{N(1+\alpha_2+(N-1)\beta/2)}
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\xi \vec{1}^{\alpha_1}\right),
\end{equation}
for $0\leq\xi\leq1$,
and a probability density function is given by \eqref{eq:13}.
The result \eqref{eq:25} follows because the density function agrees with
the derivative of the distribution function at points of continuity. By
analytic continuation the identity persists outside of the interval
$0<\xi<1$.
\hspace*{\fill}~$\Box$
We remark that the result \eqref{eq:25} does not seem easy to prove in a
direct way starting from the definition \eqref{eq:6} of the
multivariate hypergeometric functions. A similar observation was made
by Forrester \cite{for:era} who found the analogous identity at the
level of $\fourIdx{}1{(\sigma)}1{F}$ multivariate hypergeometric
functions.
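When $\alpha_1=1$ the multivariate hypergeometric functions in \eqref{eq:25} collapse to classical Gauss hypergeometric polynomials, so the identity can at least be spot-checked numerically. The following Python sketch is an illustration only (the choices $N=4$, $\alpha_2=2$, $\beta=2$ and the evaluation points are arbitrary); it computes $Z_N$ from a closed-form gamma-ratio evaluation (cf.\ \eqref{eq:20} below) and differentiates the terminating ${}_2F_1$ series exactly via ${\mathrm d}/{\mathrm d} z\, {}_2F_1(a,b;c;z)=(ab/c)\,{}_2F_1(a+1,b+1;c+1;z)$:

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k = a(a+1)...(a+k-1)."""
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

def hyp2f1_poly(a, b, c, z):
    """Terminating 2F1 series; assumes a is a non-positive integer."""
    n = int(round(-a))
    return sum(poch(a, k) * poch(b, k) / (poch(c, k) * math.factorial(k)) * z**k
               for k in range(n + 1))

# alpha_1 = 1 so that the Jack-polynomial series reduces to one variable
N, a1, a2, beta = 4, 1, 2, 2.0
E = N * (1 + a2 + (N - 1) * beta / 2)          # exponent of (1 - xi)
# Z_N(alpha_1, alpha_2, beta) as a ratio of gamma functions
Z = (N * math.gamma(1 + beta/2) * math.gamma(a1 + 1 + N*beta/2)
     * math.gamma(a1 + a2 + 2 + (N - 1)*beta/2)
     / (math.gamma(1 + N*beta/2) * math.gamma(a1 + 1)
        * math.gamma(a1 + 1 + beta/2) * math.gamma(a2 + 1 + (N - 1)*beta/2)))

a, b, c = -N, 1 - N - 2*(a2 + 1)/beta, 2*a1/beta

def lhs(xi):
    # d/dxi of (1-xi)^E * 2F1(a,b;c;xi), via the derivative formula
    F = hyp2f1_poly(a, b, c, xi)
    dF = a*b/c * hyp2f1_poly(a + 1, b + 1, c + 1, xi)
    return -E * (1 - xi)**(E - 1) * F + (1 - xi)**E * dF

def rhs(xi):
    return (-Z * xi**a1 * (1 - xi)**(E - 1)
            * hyp2f1_poly(1 - N, 2 - N - 2*(a2 + 1)/beta, c + 2, xi))

gap = max(abs(lhs(t) - rhs(t)) for t in (0.1, 0.3, 0.7))
# both sides agree to floating-point accuracy
```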
Later, we will want to take the limit $N\to\infty$, so we record here
the asymptotic behaviour of $Z_N$ in this limit.
\begin{lemma} \label{lem:Z}
As $N\to\infty$ we have
\begin{equation}
\label{eq:15}
Z_N(\alpha_1, \alpha_2, \beta) \sim \frac{ \Gamma(1+\beta/2)
(\beta/2)^{2\alpha_1+1}}%
{\Gamma(1+\alpha_1)\Gamma(1+\alpha_1+\beta/2)}
N^{2(\alpha_1+1)}.
\end{equation}
\end{lemma}
\noindent{\sl Proof.}\phantom{X}
Using the value \eqref{eq:16} for the Selberg integrals, and cancelling
common factors we get
\begin{multline}
\label{eq:17}
Z_N(\alpha_1, \alpha_2, \beta) = \frac{N\Gamma(1+\beta/2)}{\Gamma(1 +
N\beta/2)} \frac{\Gamma(\alpha_1 + \alpha_2 + 2 + (N-1)\beta) }%
{\Gamma(\alpha_1 + 1 + (N-1)\beta/2) \Gamma(\alpha_2+1+(N-1)\beta/2)}
\\
\times \prod_{k=0}^{N-2} \frac{\Gamma(\alpha_1 + 1 + \beta + k\beta/2)
\Gamma(\alpha_1+\alpha_2+2+(N+k-1)\beta/2)}%
{\Gamma(\alpha_1 + 1 + k\beta/2)\Gamma(\alpha_1+\alpha_2+\beta+2+
(N+k-2)\beta/2)}.
\end{multline}
Re-writing the factors appearing in the product in \eqref{eq:17} as
\begin{equation}
\label{eq:18}
\frac{\Gamma(\alpha_1 + 1 + (k+2)\beta/2)
\Gamma(\alpha_1+\alpha_2+2+(N+k-1)\beta/2)}%
{\Gamma(\alpha_1 + 1 + k\beta/2)\Gamma(\alpha_1+\alpha_2+2+
(N+k)\beta/2)}
\end{equation}
we realise that many factors cancel in the product over $k$ and we are
left with
\begin{multline}
\label{eq:19}
\prod_{k=0}^{N-2} \frac{\Gamma(\alpha_1 + 1 + \beta + k\beta/2)
\Gamma(\alpha_1+\alpha_2+2+(N+k-1)\beta/2)}%
{\Gamma(\alpha_1 + 1 + k\beta/2)\Gamma(\alpha_1+\alpha_2+\beta+2+
(N+k-2)\beta/2)} \\
= \frac{\Gamma(\alpha_1+1+(N-1)\beta/2)\Gamma(\alpha_1 + 1 + N\beta/2)
\Gamma(\alpha_1 + \alpha_2 + 2 + (N-1)\beta/2)}%
{\Gamma(\alpha_1+1)\Gamma(\alpha_1+1+\beta/2)
\Gamma(\alpha_1 + \alpha_2 + 2 + (N-1)\beta)}.
\end{multline}
Reuniting the product with the prefactors in \eqref{eq:17} and further
cancellation results in
\begin{equation}
\label{eq:20}
Z_N(\alpha_1, \alpha_2, \beta) = \frac{N \Gamma(1+\beta/2)\Gamma(\alpha_1
+1+N\beta/2)\Gamma(\alpha_1 + \alpha_2 + 2 + (N-1)\beta/2)}%
{\Gamma(1+N\beta/2)\Gamma(\alpha_1+1)\Gamma(\alpha_1 + 1 + \beta/2)
\Gamma(\alpha_2+1+(N-1)\beta/2)}.
\end{equation}
The asymptotic \eqref{eq:15} follows by applying the asymptotic
formula
\begin{equation}
\label{eq:21}
\frac{\Gamma(a + cN)}{\Gamma(b+cN)} \sim (cN)^{a-b}
\end{equation}
to the $N$-dependent factors.\hspace*{\fill}~$\Box$
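The asymptotic \eqref{eq:15} is easily tested against the exact expression \eqref{eq:20}. The Python sketch below (a numerical sanity check only; the parameter values are arbitrary) evaluates \eqref{eq:20} through log-gamma, which avoids overflow for large $N$:

```python
import math

def Z_exact(N, a1, a2, beta):
    """Z_N from the closed form (eq. (20)), computed via log-gamma."""
    log_z = (math.log(N) + math.lgamma(1 + beta/2)
             + math.lgamma(a1 + 1 + N*beta/2)
             + math.lgamma(a1 + a2 + 2 + (N - 1)*beta/2)
             - math.lgamma(1 + N*beta/2) - math.lgamma(a1 + 1)
             - math.lgamma(a1 + 1 + beta/2)
             - math.lgamma(a2 + 1 + (N - 1)*beta/2))
    return math.exp(log_z)

def Z_asym(N, a1, a2, beta):
    """Leading-order asymptotic of Z_N (eq. (15))."""
    return (math.gamma(1 + beta/2) * (beta/2)**(2*a1 + 1)
            / (math.gamma(1 + a1) * math.gamma(1 + a1 + beta/2))
            * N**(2*(a1 + 1)))

a1, a2, beta = 2, 1, 2.0
ratios = [Z_exact(N, a1, a2, beta) / Z_asym(N, a1, a2, beta)
          for N in (10**2, 10**3, 10**5)]
# the ratios should approach 1 as N grows
```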
\section{Two-term asymptotic formula}\label{sec:two-term-asymptotic}
Our main analytic tool is going to be a two-term asymptotic formula for the
$\fourIdx{}2{(\sigma)}1F$ multi-variable hypergeometric function,
stated below, and proved in the following subsections.
\begin{theorem}
\label{thm:two-term}
Let $a,b,c\in{\mathbb C}$ and $\sigma>0$ be fixed, such that $c-(i-1)/\sigma$ is not
a negative integer for $1\leq i\leq n$. Then with $p_1(\vec{x})\mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} x_1
+ \cdots +x_n$,
\begin{multline}
\label{eq:99}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x}) + \frac1N
(c - a - b ) \mathrm{E}_1
\left\{\fourIdx{}0{(\sigma)}1{F}(;c;\vec{x}) \right\} \\ -
\frac1Np_1(\vec{x}) \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})
+ {\mathrm O}\left(\frac1{N^2}\right),
\end{multline}
where the error estimate is uniform for $\vec{x}$ in compact subsets
of ${\mathbb C}^n$, but may depend on $a,b,c,\sigma, n$. The operator
$\mathrm{E}_1$ in \eqref{eq:99} is defined in \eqref{eq:diff2}.
\end{theorem}
Our strategy will be to split the sum defining the
$\fourIdx{}2{(\sigma)}1F$ multi-variable hypergeometric function as
\begin{multline}
\label{eq:76}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \sum_{|\lambda| < N^{1/3}} \frac{[a-N]_\lambda^{(\sigma)}
[b-N]_\lambda^{(\sigma)}}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \\
+ \sum_{|\lambda|\geq N^{1/3}} \frac{[a-N]_\lambda^{(\sigma)}
[b-N]_\lambda^{(\sigma)}}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}),
\end{multline}
recalling that the Jack polynomials $C_\lambda^{(\sigma)}$ are
homogeneous of order $|\lambda|$.
It will turn out that the tail terms (the second sum) do not
contribute significantly to the $N\to\infty$ limit.
\subsection{Preliminary results}
It is a consequence of Stirling's formula that
\begin{equation}
\label{eq:78}
\frac{\Gamma(z+\alpha)}{\Gamma(z+\beta)} = z^{\alpha-\beta}\left(
1 + \frac{(\alpha-\beta)(\alpha+\beta-1)}{2z} +
{\mathrm O}\left( \frac1{z^2}\right)\right),
\end{equation}
as $|z|\to\infty$. We will need a form of this result with control on
how the error depends on the parameters $\alpha,\beta$.
\begin{lemma} \label{lem:stirling}
Suppose $\alpha\in{\mathbb C}$ is a quantity such that $|\alpha|^2/|z|={\mathrm o}(1)$
as $z\to\infty$. Then
\begin{equation}
\label{eq:26}
\Gamma(z+\alpha) = \sqrt{2\pi} z^{z+\alpha-1/2}{\mathrm e}^{-z}\left(
1 + \frac{\alpha(\alpha-1)+1/6}{2z} + {\mathrm O}\left( \frac{|\alpha|^4+1}{z^2}
\right)\right),
\end{equation}
as $z\to\infty$ with $|\arg\{z\}|<\pi$.
\end{lemma}
\noindent{\sl Proof.}\phantom{X}
By the classical Stirling formula \cite[6.1.37]{abr:hmf}
\begin{equation}
\label{eq:27}
\Gamma(z) = \sqrt{2\pi} z^{z-1/2}{\mathrm e}^{-z} \left( 1 + \frac1{12z}
+{\mathrm O}\left( \frac1{z^2}\right) \right),
\end{equation}
as $z\to\infty$ with $|\arg\{z\}|<\pi$.
Therefore,
\begin{align}
\Gamma(z+\alpha) &= \sqrt{2\pi} (z+\alpha)^{z+\alpha-1/2}{\mathrm e}^{-z-\alpha}
\left( 1 + \frac1{12(z+\alpha)} \nonumber
+{\mathrm O}\left( \frac1{(z+\alpha)^2}\right) \right) \\
&= \sqrt{2\pi} (z+\alpha)^{z+\alpha-1/2}{\mathrm e}^{-z-\alpha}
\left( 1 + \frac1{12z}
+{\mathrm O}\left( \frac{|\alpha|+1}{z^2}\right) \right),\label{eq:31}
\end{align}
using
\begin{equation}
\label{eq:28}
\frac1{z+\alpha} = \frac1z + {\mathrm O}\left( \frac{|\alpha|}{z^2}\right)
\end{equation}
and
\begin{equation}
\label{eq:29}
\frac1{(z+\alpha)^2} = {\mathrm O}\left( \frac1{z^2}\right)
\end{equation}
provided $|\alpha/z|\leq1/2$.
Now,
\begin{align}
(z+\alpha)^{z+\alpha-1/2} &= z^{z+\alpha-1/2}\left( 1+\frac\alpha{z}
\right)^{z+\alpha-1/2} \nonumber \\
&= z^{z+\alpha-1/2} \exp\left( \left(
z+\alpha-\frac12\right)\log\left( 1 + \frac\alpha{z}\right)\right) \nonumber \\
&= z^{z+\alpha-1/2} \exp\left( \left(
z+\alpha-\frac12\right)\left( \frac\alpha{z} - \frac{\alpha^2}{2z^2}
+{\mathrm O}\left( \frac{|\alpha|^3}{z^3} \right) \right)\right) \nonumber \\
&= z^{z+\alpha-1/2} \exp\left( \alpha + \frac{\alpha(\alpha-1)}{2z}
+{\mathrm O} \left( \frac{|\alpha|^3+|\alpha|^2}{z^2}\right) \right) \nonumber \\
&= z^{z+\alpha-1/2} {\mathrm e}^\alpha \exp\left(\frac{\alpha(\alpha-1)}{2z}\right)
\left( 1 +{\mathrm O} \left( \frac{|\alpha|^3+|\alpha|^2}{z^2}\right) \right)
\nonumber \\
&= z^{z+\alpha-1/2} {\mathrm e}^\alpha \left(1 + \frac{\alpha(\alpha-1)}{2z}
+ {\mathrm O} \left( \frac{|\alpha|^4+|\alpha|^2}{z^2}\right) \right),
\label{eq:30}
\end{align}
provided $|\alpha|^2/|z|\leq 1/2$. Putting it together with \eqref{eq:31}
we get \eqref{eq:26}. \hspace*{\fill}~$\Box$
\begin{corollary} \label{cor:gamma}
Suppose $\alpha$ and $\beta$ satisfy $|\alpha|^2/|z|={\mathrm o}(1)$
and $|\beta|^2/|z|={\mathrm o}(1)$
as $z\to\infty$. Then
\begin{equation}
\label{eq:32}
\frac{\Gamma(z+\alpha)}{\Gamma(z+\beta)} = z^{\alpha-\beta}\left(
1 + \frac{(\alpha-\beta)(\alpha+\beta-1)}{2z} +
{\mathrm O}\left( \frac{|\alpha|^4+|\beta|^4+1}{z^2}\right)\right).
\end{equation}
\end{corollary}
\noindent{\sl Proof.}\phantom{X}
Applying Lemma \ref{lem:stirling} and cancelling common factors,
\begin{align}
\label{eq:33}
\frac{\Gamma(z+\alpha)}{\Gamma(z+\beta)} &= z^{\alpha-\beta}\left(
1 + \frac{\alpha(\alpha-1)+1/6}{2z} + {\mathrm O}\left( \frac{|\alpha|^4 + 1}{z^2}
\right)\right) \nonumber \\ &\qquad\qquad\times\left(
1 + \frac{\beta(\beta-1)+1/6}{2z} + {\mathrm O}\left( \frac{|\beta|^4 + 1}{z^2}
\right)\right)^{-1} \nonumber \\
&= z^{\alpha-\beta}\left(
1 + \frac{\alpha(\alpha-1)-\beta(\beta-1)}{2z} +
{\mathrm O}\left( \frac{|\alpha|^4 +|\beta|^4 + 1}{z^2}
\right)\right),
\end{align}
by the binomial theorem. Re-writing the second term gives \eqref{eq:32}.
\hspace*{\fill}~$\Box$
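For fixed $\alpha,\beta$ the corollary reduces to \eqref{eq:78}, which is easy to illustrate numerically; the following Python fragment is a sanity check only, with arbitrary parameter values:

```python
import math

def gamma_ratio(z, a, b):
    """Exact Gamma(z+a)/Gamma(z+b), computed via log-gamma."""
    return math.exp(math.lgamma(z + a) - math.lgamma(z + b))

def two_term(z, a, b):
    """Two-term expansion z^(a-b) * (1 + (a-b)(a+b-1)/(2z))."""
    return z**(a - b) * (1 + (a - b) * (a + b - 1) / (2*z))

a, b = 1.7, -0.4
errs = [abs(gamma_ratio(z, a, b) / two_term(z, a, b) - 1)
        for z in (10.0, 100.0, 1000.0)]
# the relative error should decay roughly like 1/z^2
```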
We will also require some bounds on Pochhammer symbols
\eqref{eq:148}.
\begin{lemma} \label{lem:pochhammer}
Let $a\in{\mathbb C}$ be fixed, and $n,N\in{\mathbb N}$.
\begin{enumerate}
\item If $n\leq N$ then $|(a-N)_n|\leq (|a|+N-n+1)_n\leq (N+|a|)^n$;
\item If $n>N$ then $|(a-N)_n|\leq (|a|)_n$.
\end{enumerate}
\end{lemma}
\noindent{\sl Proof.}\phantom{X}
We have that
\begin{align}
(a-N)_n &= (a-N)(a-N+1)\cdots(a-N+n-1) \nonumber \\
&=(-1)^n(N-a)(N-a-1)\cdots(N-a-(n-1)).\label{eq:62}
\end{align}
If $n\leq N$ then using the second line of \eqref{eq:62} and the
triangle inequality
\begin{align}
|(a-N)_n| &= |N-a||N-1 - a|\cdots|N-(n-1) -a| \nonumber \\
&\leq (N+|a|)(N-1+|a|)\cdots(N-(n-1)+|a|) = (|a|+N-n+1)_{n} \nonumber \\
&\leq (N+|a|)^n.\label{eq:65}
\end{align}
If $n>N$ then re-ordering the product,
\begin{equation}
\label{eq:63}
(a-N)_n = a(a+1)\cdots(a+n-N-1) \times (a-1)(a-2)\cdots(a-N).
\end{equation}
However,
\begin{align}
\label{eq:64}
\big| (a-1)(a-2)\cdots(a-N) \big| &\leq (|a|+1)(|a|+2)\cdots(|a|+N) \nonumber
\\
& \leq (|a|+n-N)(|a|+n-N+1) \cdots (|a|+n-1),
\end{align}
so from \eqref{eq:63} we end up with
\begin{align}
|(a-N)_n| &\leq |a|(|a|+1)\cdots(|a|+n-N-1) \nonumber \\
&\qquad \times (|a|+n-N)(|a|+n-N+1)
\cdots (|a|+n-1) \nonumber \\ &= (|a|)_n.
\label{eq:66}
\end{align}
\hspace*{\fill}~$\Box$
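Both parts of Lemma \ref{lem:pochhammer} can be verified directly for a sample complex $a$; the Python sketch below (an illustration, with arbitrary choices of $a$, $N$ and the values of $n$) covers both regimes $n\leq N$ and $n>N$:

```python
def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1); works for complex a."""
    p = 1
    for k in range(n):
        p *= a + k
    return p

a, N = 1.5 - 2.0j, 20
checks = []
for n in (5, 20, 35):          # n <= N for the first two, n > N for the last
    lhs = abs(poch(a - N, n))
    if n <= N:
        bound = poch(abs(a) + N - n + 1, n)   # part 1 of the lemma
    else:
        bound = poch(abs(a), n)               # part 2 of the lemma
    checks.append(lhs <= bound * (1 + 1e-12))
bounds_hold = all(checks)
```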
We shall test certain sums for convergence by comparison with the classical
hypergeometric series
\begin{equation}
\label{eq:79}
\fourIdx{}p{}q{F}(a_1,\ldots,a_p;b_1,\ldots,b_q;z) =
\sum_{k=0}^\infty \frac{(a_1)_k\cdots (a_p)_k}{(b_1)_k\cdots(b_q)_k}
\frac{z^k}{k!}.
\end{equation}
For generic choice of parameters $a_1,\ldots,a_p,b_1,\ldots,b_q\in{\mathbb C}$
the power series \eqref{eq:79} is known to have radius of convergence
$r=1$ if $p=q+1$ and infinite radius of convergence if $p\leq q$
\cite[\S 2.2]{sla:ghf}. The exceptions to these rules are when the series
has only a finite number of terms and reduces to a polynomial in
$z$. This can happen when one of the parameters $a_1,\ldots,a_p$
is a negative integer.
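To illustrate the terminating exception (with arbitrary sample parameters, not tied to anything in this paper), the following Python fragment shows that ${}_2F_1(-3,b;c;z)$ has only four non-zero terms, and is therefore well defined even for $|z|>1$:

```python
from math import factorial

def poch(a, k):
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

def hyp_series(a, b, c, z, terms):
    """Partial sum of the 2F1 series (p = 2, q = 1)."""
    return sum(poch(a, k) * poch(b, k) / (poch(c, k) * factorial(k)) * z**k
               for k in range(terms))

# (-3)_k vanishes for k >= 4, so the series is a cubic polynomial in z
val_4 = hyp_series(-3, 2.0, 1.5, 5.0, terms=4)
val_50 = hyp_series(-3, 2.0, 1.5, 5.0, terms=50)
# identical: every term past k = 3 is zero, despite |z| > 1
```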
\subsection{The main contribution}
\label{sec:main-contribution}
We start by analysing the first sum on the right-hand side of
\eqref{eq:76}.
\begin{proposition} \label{prop:main-contribution}
Let $a,b$ be fixed quantities. Then
\begin{align}
\label{eq:83}
&\sum_{|\lambda| < N^{1/3}} \frac{[a-N]_\lambda^{(\sigma)}
[b-N]_\lambda^{(\sigma)}}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \\&= \sum_{|\lambda|<N^{1/3}}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!}
\left( 1 + \frac1{N}\left(\left( \frac2\sigma
(n-1) - a - b\right)\mathrm{E}_1 - \mathrm{D}_2\right)\right) C_\lambda^{(\sigma)}(\vec{x})
+{\mathrm O}\left(\frac1{N^2}\right) \nonumber
\end{align}
where the implied constant may depend on $n,a,b,c$ and $\sigma$ but is uniform
for $\vec{x}$ in compact subsets of ${\mathbb C}^n$.
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
Making use of the representation
\begin{equation}
\label{eq:80}
[a-N]_\lambda^{(\sigma)} = \prod_{i=1}^n \left( a-N-\frac{i-1}\sigma
\right)_{\lambda_i} =
\prod_{i=1}^n (-1)^{\lambda_i} \frac{\Gamma(1+N+(i-1)/\sigma-a)}%
{\Gamma(1+N+(i-1)/\sigma-a-\lambda_i)}
\end{equation}
we can write, using Corollary \ref{cor:gamma} for the
ratios of gamma functions,
\begin{align}
[a-N]_\lambda^{(\sigma)} & [b-N]_\lambda^{(\sigma)} \nonumber \\ &=
\prod_{i=1}^{n} (-1)^{2\lambda_i} \frac{\Gamma(1+N+(i-1)/\sigma-a)
\Gamma(1+N+(i-1)/\sigma-b)}{\Gamma(1+N+(i-1)/\sigma-a-\lambda_i)
\Gamma(1+N+(i-1)/\sigma-b-\lambda_i)} \nonumber \\
&= \prod_{i=1}^{n} N^{2\lambda_i}\left( 1 + \frac{\lambda_i}{2N}
\left( \frac{2}\sigma(i-1) +1 - 2a - \lambda_i\right) +
{\mathrm O}\left( \frac{|\lambda_i|^4+(\sigma'i)^4}{N^2}\right)\right) \nonumber \\
&\qquad\quad\times
\left( 1 + \frac{\lambda_i}{2N}
\left( \frac{2}\sigma(i-1) +1 - 2b - \lambda_i\right) +
{\mathrm O}\left( \frac{|\lambda_i|^4+(\sigma'i)^4}{N^2}\right)\right) ,
\end{align}
where $\sigma'=\max\{1,\sigma^{-1}\}$. This leads to
\begin{align}
\label{eq:81}
[&a-N]_\lambda^{(\sigma)} [b-N]_\lambda^{(\sigma)} \nonumber \\ &=
\prod_{i=1}^{n} N^{2\lambda_i}\left( 1 + \frac{\lambda_i}N
\left( \frac2\sigma(i-1)-a-b-\lambda_i+1\right) +
{\mathrm O}\left( \frac{|\lambda_i|^4+(\sigma'i)^4}{N^2}\right)\right)\nonumber \\
&=N^{2|\lambda|} \Bigg( 1 + \frac{|\lambda|}N\left( \frac2{\sigma}(n-1)-a-b
\right) - \frac1{N}\sum_{i=1}^n \lambda_i\left( \lambda_i - 1 - \frac{2(i-1)}
\sigma + \frac{2(n-1)}\sigma\right) \nonumber \\ &\qquad\qquad
+ {\mathrm O}\left( \frac{\|\lambda\|_4^4 + {\sigma'}^4n^5}{N^2}\right)\Bigg).
\end{align}
We adopt here the convenient shorthand $\|\lambda\|_4^4=\lambda_1^4+\cdots
+\lambda_n^4$.
Recalling the actions \eqref{eq:diff3}, \eqref{eq:diff4} of the operators
$\mathrm{E}_1$, $\mathrm{D}_2$ on Jack polynomials,
\begin{multline}
\label{eq:82}
[a-N]_\lambda^{(\sigma)} [b-N]_\lambda^{(\sigma)} C_{\lambda}^{(\sigma)}
(\vec{x}) = N^{2|\lambda|} \Bigg(\left( 1 + \frac1{N}\left( \left( \frac2\sigma
(n-1) - a - b\right)\mathrm{E}_1 - \mathrm{D}_2\right)\right) C_\lambda^{(\sigma)}(\vec{x})
\\+ {\mathrm O}\left( \frac{\|\lambda\|_4^4 + {\sigma'}^4n^5}{N^2}
|C_{\lambda}^{(\sigma)}(\vec{x})|\right)\Bigg).
\end{multline}
Substituting \eqref{eq:82} for the numerator in \eqref{eq:83} leads
\textit{via} cancellation of the factors $N^{2|\lambda|}$ to the
sum on the right-hand side of \eqref{eq:83}.
To prove the error estimate it will be sufficient to demonstrate that
\begin{equation}
\label{eq:67}
\sum_{\lambda} \left|\frac{\|\lambda\|_4^4}{[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x})\right| < \infty
\end{equation}
uniformly for $\vec{x}$ in compact subsets of ${\mathbb C}^n$.
We do this following a method elaborated by Kaneko \cite{kan:sia}.
Namely: there
exists a constant $C$ depending only on $n$ such that
\begin{equation}
\label{eq:68}
|C_{\lambda}^{(\sigma)}(\vec{x})| \leq C |\lambda|^{n/2}\left(
\sigma(\sigma'n)^{3/2} \|\vec{x}\|_{\infty} \right)^{|\lambda|},
\end{equation}
where $\|\vec{x}\|_\infty = \max\{|x_1|,\ldots,|x_n|\}$ and
$\sigma'=\max\{1,\sigma^{-1}\}$ \cite[Lemma 1]{kan:sia}. Thus there
exists a constant $R>0$ depending only on $n, \sigma$ such that
$\|\lambda\|_4^4 |C_{\lambda}^{(\sigma)}(\vec{x})|
\leq C R^{|\lambda|} \|\vec{x}\|_\infty^{|\lambda|}$. We also
may observe that
\begin{equation}
\label{eq:69}
|\lambda|! = (\lambda_1+ \cdots+\lambda_n)! \geq \lambda_1!\cdots
\lambda_n!.
\end{equation}
So
\begin{align}
\sum_{\lambda}\left| \frac{\|\lambda\|_4^4}{[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \right| &\leq
C \sum_{\lambda} \frac{R^{|\lambda|}\|\vec{x}\|_\infty^{|\lambda|}}%
{|[c]_\lambda^{(\sigma)}||\lambda|!} \nonumber \\
&\leq C \prod_{i=1}^n \sum_{\lambda_i=0}^\infty
\frac{(R\|\vec{x}\|_\infty)^{\lambda_i}}{|(c-(i-1)/\sigma)_{\lambda_i}|
\lambda_i!},
\label{eq:70}
\end{align}
using the definition \eqref{eq:7} of the generalised Pochhammer symbol
together with \eqref{eq:69}. It can be seen that \eqref{eq:70} is
bounded by comparing each factor to a convergent $\fourIdx{}{0}{}{1}{F}$
hypergeometric series. \hspace*{\fill}~$\Box$
\subsection{Bounding the tail terms}
Using Kaneko's bound for Jack polynomials from the end of
subsection \ref{sec:main-contribution} we get
\begin{equation}
\bigg| \sum_{|\lambda| \geq N^{1/3}}
\frac{[a-N]_\lambda^{(\sigma)}[b-N]_\lambda^{(\sigma)}
}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \bigg| \leq
C \sum_{|\lambda| \geq N^{1/3}}
\frac{|[a-N]_\lambda^{(\sigma)}[b-N]_\lambda^{(\sigma)}|
(R\|\vec{x}\|_\infty)^{|\lambda|}}{N^{2|\lambda|}|[c]_{\lambda}^{(\sigma)}|
\lambda_1!\cdots\lambda_n!},
\label{eq:71}
\end{equation}
for some constant $C$ depending only on $n$ and $R$ depending only on
$n$ and $\sigma$. If $\lambda$ is a partition with $|\lambda|\geq N^{1/3}$
then, as $\lambda_1$ is the largest part, we must have
$\lambda_1\geq N^{1/3}/n$. Factorising the generalised Pochhammer
symbols according to \eqref{eq:7} we achieve the inequality
\begin{multline}
\label{eq:72}
\bigg| \sum_{|\lambda| \geq N^{1/3}}
\frac{[a-N]_\lambda^{(\sigma)}[b-N]_\lambda^{(\sigma)}
}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \bigg| \leq
C\bigg( \sum_{\lambda_1=N^{1/3}/n}^\infty
\frac{|(a-N)_{\lambda_1}||(b-N)_{\lambda_1}|(R\|\vec{x}\|_\infty)^{\lambda_1}}%
{N^{2\lambda_1}|(c)_{\lambda_1}| \lambda_1!} \bigg)\\
\times \prod_{i=2}^n\sum_{\lambda_i=0}^\infty
\frac{|(a-(i-1)/\sigma-N )_{\lambda_i}||(b-(i-1)/\sigma-N)_{\lambda_i}
|(R\|\vec{x}\|_\infty)^{\lambda_i}}%
{N^{2\lambda_i}|(c-(i-1)/\sigma)_{\lambda_i}| \lambda_i!}.
\end{multline}
\begin{proposition} \label{prop:tail}
For every $\rho>0$ and compact set $K\subseteq {\mathbb C}^n$, there exists a
constant $C_{\rho,K}$ (which may additionally depend on $a, b, c, \sigma, n$)
such that for every $\vec{x}\in K$, and all $N$ sufficiently large, we
have
\begin{equation}
\label{eq:777}
\bigg| \sum_{|\lambda| \geq N^{1/3}}
\frac{[a-N]_\lambda^{(\sigma)}[b-N]_\lambda^{(\sigma)}
}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x}) \bigg|\leq \frac{C_{\rho,K}}{N^\rho}.
\end{equation}
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
For brevity, let us define $a'=a-(i-1)/\sigma$, $b'=b-(i-1)/\sigma$,
$c'=c-(i-1)/\sigma$.
The following three estimates give us the bound we need:
\begin{enumerate}
\item Using part 2.\ of Lemma \ref{lem:pochhammer},
\begin{align}
\sum_{\lambda_i>N} \frac{|(a'-N )_{\lambda_i}||(b'-N)_{\lambda_i}
|(R\|\vec{x}\|_\infty)^{\lambda_i}}%
{N^{2\lambda_i}|(c')_{\lambda_i}| \lambda_i!} &\leq
\frac1{N^N}\sum_{\lambda_i>N} \frac{(|a'|)_{\lambda_i}(|b'|)_{\lambda_i}}%
{|(c')_{\lambda_i}|\lambda_i!} \left( \frac{R\|\vec{x}\|_\infty}{N}
\right)^{\lambda_i} \nonumber \\
&={\mathrm O}\left( \frac1{N^N}\right)
\label{eq:73}
\end{align}
provided we additionally choose $N$ large enough so that $R\|\vec{x}\|_\infty/
N<1$, using the absolute convergence of a $\fourIdx{}2{}1{F}$ hypergeometric
series in the unit disc.
\item Using part 1.\ of Lemma \ref{lem:pochhammer},
\begin{align}
\sum_{\lambda_i=0}^N \frac{|(a'-N )_{\lambda_i}||(b'-N)_{\lambda_i}
|(R\|\vec{x}\|_\infty)^{\lambda_i}}%
{N^{2\lambda_i}|(c')_{\lambda_i}| \lambda_i!} &\leq
\sum_{\lambda_i=0}^N \frac{(N+|a'|)^{\lambda_i} (N+|b'|)^{\lambda_i}
(R\|\vec{x}\|_\infty)^{\lambda_i}}{N^{2\lambda_i}|(c')_{\lambda_i}|
\lambda_i!} \nonumber \\
&=\sum_{\lambda_i=0}^N \frac{((1+|a'|/N)(1+|b'|/N)
R\|\vec{x}\|_\infty)^{\lambda_i}}{|(c')_{\lambda_i}|
\lambda_i!} \nonumber \\
&<\infty \label{eq:74}
\end{align}
by comparison with a $\fourIdx{}0{}1{F}$ hypergeometric series.
\item For the sum over $\lambda_1$ between $N^{1/3}/n$ and $N$ we
again use part 1.\ of Lemma \ref{lem:pochhammer}, as in the previous step,
leading to
\begin{align}
\label{eq:75}
\sum_{N^{1/3}/n<\lambda_1\leq N}
&\frac{|(a-N )_{\lambda_1}||(b-N)_{\lambda_1}
|(R\|\vec{x}\|_\infty)^{\lambda_1}}%
{N^{2\lambda_1}|(c)_{\lambda_1}| \lambda_1!} \nonumber \\ &\leq
\sum_{N^{1/3}/n<\lambda_1\leq N} \frac{((1+|a|/N)(1+|b|/N)
R\|\vec{x}\|_\infty)^{\lambda_1}}{|(c)_{\lambda_1}|
\lambda_1!} \nonumber \\
&\leq \frac1{\Gamma(N^{1/3}/n+1)}
\sum_{N^{1/3}/n<\lambda_1\leq N} \frac{((1+|a|/N)(1+|b|/N)
R\|\vec{x}\|_\infty)^{\lambda_1}}{|(c)_{\lambda_1}| }
&={\mathrm O}\left( \frac1{\Gamma(N^{1/3}/n+1)} \right),
\end{align}
where the series has been compared to a $\fourIdx{}1{}1{F}$
hypergeometric series.
\end{enumerate}
We use 1.\ and 2.\ to bound each factor for $i\geq2$ in \eqref{eq:72}
by a constant. We then use 1.\ and 3.\ to deduce the rapid decay
in $N$ of the remaining sum over $\lambda_1$.
\hspace*{\fill}~$\Box$
With essentially the same method and calculations we can bound
similarly the tail of two further series.
\begin{proposition} \label{prop:more-tails}
For every $\rho>0$ and compact set $K\subseteq {\mathbb C}^n$, there exists a
constant $C_{\rho,K}$ (which may additionally depend on $a,b,c, \sigma, n$)
such that for every $\vec{x}\in K$, and all $N$ sufficiently large, we
have
\begin{equation}
\label{eq:77}
\bigg| \sum_{|\lambda| \geq N^{1/3}}
\frac{1}{[c]_{\lambda}^{(\sigma)} |\lambda|!}
\left( \left( \frac2\sigma
(n-1) - a - b\right)\mathrm{E}_1 - \mathrm{D}_2\right)
C_\lambda^{(\sigma)}(\vec{x}) \bigg|\leq \frac{C_{\rho,K}}{N^\rho}
\end{equation}
and
\begin{equation}
\label{eq:77a}
\bigg| \sum_{|\lambda| \geq N^{1/3}}
\frac{1}{[c]_{\lambda}^{(\sigma)} |\lambda|!}
C_\lambda^{(\sigma)}(\vec{x}) \bigg|\leq \frac{C_{\rho,K}}{N^\rho}.
\end{equation}
\end{proposition}
\noindent{\sl Proof.}\phantom{X} Using the fact that $C_{\lambda}^{(\sigma)}(\vec{x})$
is an eigenfunction of $\mathrm{E}_1$ and $\mathrm{D}_2$ with eigenvalues that
depend only polynomially on the parts of $\lambda$, we may follow the
proof of the preceding Proposition \ref{prop:tail} making only trivial
changes. \hspace*{\fill}~$\Box$
\subsection{Asymptotic formula}
\begin{proposition} \label{prop:partial-result}
For fixed $a, b$, we have
\begin{multline}
\label{eq:87}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x}) \\ + \frac1N
\left(\left( \frac2\sigma (n-1) - a - b\right)
\mathrm{E}_1 - \mathrm{D}_2\right)\fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})
+ {\mathrm O}\left(\frac1{N^2}\right),
\end{multline}
where the implied constant may depend on $n, a, b, c, \sigma$ but is
uniform for $\vec{x}$ in compact subsets of ${\mathbb C}^n$.
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
Starting from \eqref{eq:76} and
as a consequence of the bound \eqref{eq:777} of Proposition
\ref{prop:tail}, we have
\begin{equation}
\label{eq:84}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \sum_{|\lambda| < N^{1/3}} \frac{[a-N]_\lambda^{(\sigma)}
[b-N]_\lambda^{(\sigma)}}{N^{2|\lambda|}[c]_{\lambda}^{(\sigma)}
|\lambda|!} C_\lambda^{(\sigma)}(\vec{x})
+ {\mathrm O}\left( \frac1{N^\rho}\right)
\end{equation}
for any $\rho>0$. By Proposition \ref{prop:main-contribution} this is
\begin{multline}
\label{eq:85}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \sum_{|\lambda|<N^{1/3}}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!} C_\lambda^{(\sigma)}(\vec{x})
\\+\frac1N
\sum_{|\lambda|<N^{1/3}}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!}
\left(\left( \frac2\sigma
(n-1) - a - b\right)\mathrm{E}_1 - \mathrm{D}_2\right) C_\lambda^{(\sigma)}(\vec{x})
+{\mathrm O}\left(\frac1{N^2}\right).
\end{multline}
By Proposition \ref{prop:more-tails} we can complete the sums in
\eqref{eq:85} without affecting the error estimate to get
\begin{multline}
\label{eq:86}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac1{N^2}\vec{x}\right)
= \sum_{\lambda}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!} C_\lambda^{(\sigma)}(\vec{x})
\\+\frac1N
\sum_{\lambda}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!}
\left(\left( \frac2\sigma
(n-1) - a - b\right)\mathrm{E}_1 - \mathrm{D}_2\right) C_\lambda^{(\sigma)}(\vec{x})
+{\mathrm O}\left(\frac1{N^2}\right).
\end{multline}
This is \eqref{eq:87}, recognising
\begin{equation}
\label{eq:88}
\fourIdx{}0{(\sigma)}1{F}(;c;\vec{x}) = \sum_{\lambda}
\frac{1}{[c]_\lambda^{(\sigma)} |\lambda|!} C_\lambda^{(\sigma)}(\vec{x}).
\end{equation}
\hspace*{\fill}~$\Box$
Given that $\fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})$ is continuous, this
already proves
\begin{equation}
\label{eq:149}
\lim_{N\to\infty} \fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c;
\frac1{N^2}\vec{x}\right)
= \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x}),
\end{equation}
locally uniformly in $\vec{x}\in{\mathbb C}^n$---the leading-order term of Theorem
\ref{thm:two-term}.
Our final task in this section will be to put the ``$1/N$'' term of
\eqref{eq:87} into a nicer form.
\mathversion{bold}
\subsection{Partial Differential Equation satisfied by $\fourIdx{}0{(\sigma)}1{F}$}
\mathversion{normal}
It is a theorem of Yan \cite[Theorem 2.1]{yan:aco} and Kaneko
\cite[Theorem 2]{kan:sia} that
if $c-(i-1)/\sigma$ is not a negative integer for any $1\leq i\leq n$ then
the unique solution to the system of equations
\begin{multline}
\label{eq:89}
x_i(1-x_i) \frac{\partial^2\Psi}{\partial x_i^2} +
\left( c - \frac{n-1}\sigma - \left( a + b + 1 - \frac{n-1}\sigma\right)
x_i\right) \frac{\partial\Psi}{\partial x_i} \\
+\frac1{\sigma} \left( \sum_{\substack{j=1\\j\neq i}}^n \frac{x_i(1-x_i)}%
{x_i-x_j} \frac{\partial\Psi}{\partial x_i} -
\sum_{\substack{j=1\\j\neq i}}^n \frac{x_j(1-x_j)}{x_i-x_j}
\frac{\partial\Psi}{\partial x_j}\right) = ab\Psi,
\end{multline}
$1\leq i\leq n$,
subject to $\Psi(\vec{x})$ being symmetric in its variables and analytic at
$\vec{x}=\vec{0}$, is
\begin{equation}
\label{eq:90}
\Psi(\vec{x}) = \Psi(\vec{0}) \fourIdx{}2{(\sigma)}1{F}(a,b; c; \vec{x}).
\end{equation}
This result for $\sigma=2$ was first proved by Muirhead \cite{mui:sop},
having been conjectured, apparently, by A.~G.~Constantine. Muirhead
also shows how the system \eqref{eq:89} can be degenerated to give
the holonomic system of equations for $\fourIdx{}0{(2)}{1}{F}$
multivariate hypergeometric functions, which can easily be generalised
for arbitrary $\sigma$ as follows.
\begin{proposition}
\label{prop:0f1system}
Provided that $c-(i-1)/\sigma$ is not a negative integer for any $1\leq
i\leq n$,
the multivariate hypergeometric function $\fourIdx{}0{(\sigma)}1{F}(;c;
\vec{x})$ is the unique solution of the $n$ differential equations
\begin{equation}
\label{eq:91}
x_i \frac{\partial^2\Psi}{\partial x_i^2} +
\left( c - \frac{n-1}\sigma \right) \frac{\partial\Psi}{\partial x_i}
+\frac1{\sigma} \left( \sum_{\substack{j=1\\j\neq i}}^n \frac{x_i}%
{x_i-x_j} \frac{\partial\Psi}{\partial x_i} -
\sum_{\substack{j=1\\j\neq i}}^n \frac{x_j}{x_i-x_j}
\frac{\partial\Psi}{\partial x_j}\right) = \Psi,
\end{equation}
$1\leq i\leq n$,
subject to the constraints that $\Psi(\vec{x})$ is symmetric in its variables,
is analytic at $\vec{x}=\vec{0}$ and satisfies $\Psi(\vec{0})=1$.
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
Since we now know from \eqref{eq:149} that
\begin{equation}
\label{eq:84b}
\fourIdx{}{0}{(\sigma)}1F (;c;\vec{x}) =
\lim_{N\to\infty} \fourIdx{}{2}{(\sigma)}1F\left( -N, -N; c; \frac1{N^2}
\vec{x}\right),
\end{equation}
we set $a=b=-N$ and make the change of variables $x_i\mapsto x_i/N^2$ in
\eqref{eq:89} to get
\begin{multline}
\label{eq:92}
N^2x_i\left(1-\frac{x_i}{N^2}\right) \frac{\partial^2\Psi}{\partial x_i^2} +
\left( \left(c - \frac{n-1}\sigma\right)N^2 - \left( 1 - 2N - \frac{n-1}\sigma
\right) x_i\right) \frac{\partial\Psi}{\partial x_i} \\
+\frac{N^2}{\sigma} \left( \sum_{\substack{j=1\\j\neq i}}^n
\frac{x_i(1-x_i/N^2)}%
{x_i-x_j} \frac{\partial\Psi}{\partial x_i} -
\sum_{\substack{j=1\\j\neq i}}^n \frac{x_j(1-x_j/N^2)}{x_i-x_j}
\frac{\partial\Psi}{\partial x_j}\right) = N^2\Psi.
\end{multline}
Dividing through by $N^2$ and letting $N\to\infty$ we recover
\eqref{eq:91}. \hspace*{\fill}~$\Box$
\begin{corollary} \label{cor:pde}
Under the same condition on $c$ as in Proposition
\ref{prop:0f1system},
the function $\Psi(\vec{x}) = \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})$ is
a solution to the partial differential equation
\begin{equation}
\label{eq:93}
\mathrm{D}_2 \Psi = p_1(\vec{x})\Psi - \left( c- \frac{2}{\sigma}(n-1)\right)
\mathrm{E}_1\Psi,
\end{equation}
where $p_1(\vec{x})=x_1+\cdots+x_n$.
\end{corollary}
\noindent{\sl Proof.}\phantom{X}
We multiply the $i$th equation \eqref{eq:91}, satisfied by
$\Psi(\vec{x}) = \fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})$, through by $x_i$:
\begin{equation}
\label{eq:94}
x_i^2 \frac{\partial^2\Psi}{\partial x_i^2} +
\left( c - \frac{n-1}\sigma \right)x_i \frac{\partial\Psi}{\partial x_i}
+\frac1{\sigma} \left( \sum_{\substack{j=1\\j\neq i}}^n \frac{x_i^2}%
{x_i-x_j} \frac{\partial\Psi}{\partial x_i} -
\sum_{\substack{j=1\\j\neq i}}^n \frac{x_ix_j}{x_i-x_j}
\frac{\partial\Psi}{\partial x_j}\right) = x_i\Psi.
\end{equation}
Since
\begin{equation}
\label{eq:95}
\frac{x_ix_j}{x_i-x_j} = x_j + \frac{x_j^2}{x_i-x_j},
\end{equation}
\eqref{eq:94} is equivalent to
\begin{multline}
\label{eq:96}
x_i^2 \frac{\partial^2\Psi}{\partial x_i^2} +
\left( c - \frac{n-1}\sigma \right)x_i \frac{\partial\Psi}{\partial x_i}
\\+\frac1{\sigma} \left( \sum_{\substack{j=1\\j\neq i}}^n \frac{x_i^2}%
{x_i-x_j} \frac{\partial\Psi}{\partial x_i} -
\sum_{\substack{j=1\\j\neq i}}^n x_j\frac{\partial\Psi}{\partial x_j} -
\sum_{\substack{j=1\\j\neq i}}^n \frac{x_j^2}{x_i-x_j}
\frac{\partial\Psi}{\partial x_j}\right) = x_i\Psi.
\end{multline}
Summing over \eqref{eq:96} for $i=1,\ldots,n$, we arrive at
\begin{equation}
\label{eq:97}
\mathrm{D}_2 \Psi + \left( c- \frac{n-1}\sigma\right) \mathrm{E}_1 \Psi
- \frac{n-1}{\sigma} \mathrm{E}_1 \Psi = p_1(\vec{x}) \Psi.
\end{equation}
Re-arranged, this is \eqref{eq:93}. \hspace*{\fill}~$\Box$
\dimostrazionea{Theorem \ref{thm:two-term}}
We use Corollary \ref{cor:pde} to replace the term $\mathrm{D}_2\left\{
\fourIdx{}{0}{(\sigma)}1{F}(;c;\vec{x})\right\}$ in \eqref{eq:87} by
\begin{equation}
\label{eq:98}
p_1(\vec{x}) \fourIdx{}{0}{(\sigma)}1{F}(;c;\vec{x}) +
\left(\frac{2}{\sigma}(n-1)-c\right) \mathrm{E}_1 \left\{
\fourIdx{}{0}{(\sigma)}1{F}(;c;\vec{x})\right\}.
\end{equation}
The resultant cancellation of terms involving $\sigma$ leads to
\eqref{eq:99}. \hspace*{\fill}~$\Box$
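For $n=1$, where $\mathrm{E}_1=x\,{\mathrm d}/{\mathrm d} x$, $p_1(\vec{x})=x$ and the Jack series reduce to classical hypergeometric series, Theorem \ref{thm:two-term} can be tested numerically. The Python sketch below is illustrative only (the parameter values are arbitrary); it uses the classical derivative formula $({\mathrm d}/{\mathrm d} x)\,{}_0F_1(;c;x)={}_0F_1(;c+1;x)/c$ and evaluates ${}_2F_1$ through the recursive term ratio, which avoids the huge Pochhammer products:

```python
def hyp2f1(a, b, c, z, terms=300):
    """2F1 series summed via the term-ratio recursion."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

def hyp0f1(c, z, terms=80):
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= z / ((c + k) * (k + 1))
    return s

x, a, b, c = 1.3, 0.4, -0.7, 1.6

def two_term(N):
    """Right-hand side of the theorem for n = 1: E_1 = x d/dx, p_1 = x."""
    F = hyp0f1(c, x)
    dF = hyp0f1(c + 1, x) / c          # derivative of 0F1(;c;x)
    return F + (c - a - b) * x * dF / N - x * F / N

gaps = [abs(hyp2f1(a - N, b - N, c, x / N**2) - two_term(N))
        for N in (50, 500)]
# the remainder should decay like 1/N^2
```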
\section{Main Results}\label{sec:main-result}
\subsection{Proof of Theorem \ref{thm:main}}
Looking back at \eqref{eq:24} we had
\begin{multline}
\label{eq:101}
{\mathbb P}(N^2\phi_1 > x) = \left(1-\frac{x}{N^2}\right)^{N(1+\alpha_2+
(N-1)\beta/2)}\\\times
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\frac{x}{N^2} \vec{1}^{\alpha_1}\right).
\end{multline}
Standard asymptotic arguments give
\begin{equation}
\label{eq:102}
\left(1-\frac{x}{N^2}\right)^{N(1+\alpha_2+
(N-1)\beta/2)} = {\mathrm e}^{-\beta x/2} \left( 1 -
\left( 1 + \alpha_2 - \frac{\beta}2\right)\frac{x}N +
{\mathrm O}\left( \frac1{N^2} \right)\right)
\end{equation}
uniformly for ${x}$ in compact sets.
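(The expansion \eqref{eq:102} comes from writing the left-hand side as $\exp\{N(1+\alpha_2+(N-1)\beta/2)\log(1-x/N^2)\}$ and expanding the logarithm.) A quick numerical illustration in Python, with arbitrary parameter values:

```python
import math

def lhs(N, x, a2, beta):
    return (1 - x/N**2) ** (N * (1 + a2 + (N - 1) * beta/2))

def rhs(N, x, a2, beta):
    """Two-term approximation e^(-beta x/2) (1 - (1 + a2 - beta/2) x/N)."""
    return math.exp(-beta*x/2) * (1 - (1 + a2 - beta/2) * x / N)

x, a2, beta = 1.5, 2, 4.0
errs = [abs(lhs(N, x, a2, beta) - rhs(N, x, a2, beta)) for N in (100, 1000)]
# the agreement should improve like 1/N^2
```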
In the result of Theorem \ref{thm:two-term},
taking $\vec{x}$ to be a constant multiple of $\vec{1}^n$ we have
\begin{multline}
\label{eq:100}
\fourIdx{}2{(\sigma)}1{F} \left( a-N, b-N; c; \frac{x}{N^2}\vec{1}^n
\right)
= \fourIdx{}0{(\sigma)}1{F}(;c;x\vec{1}^n) + \frac1N
(c - a - b ) x\frac{{\mathrm d}}{{\mathrm d} x}
\left\{\fourIdx{}0{(\sigma)}1{F}(;c;x\vec{1}^n) \right\} \\ -
\frac{n}N x\, \fourIdx{}0{(\sigma)}1{F}(;c;x\vec{1}^n)
+ {\mathrm O}\left(\frac1{N^2}\right),
\end{multline}
which may be applied in \eqref{eq:101} with $a=0$, $b=1-2(\alpha_2+1)/\beta$,
$c=2\alpha_1/\beta$, $\sigma=\beta/2$ and $n=\alpha_1$
(so that $c-(i-1)/\sigma = 2(\alpha_1+1-i)/\beta$ is not a negative
integer for $i=1,\ldots,\alpha_1$, justifying the use of Proposition
\ref{prop:0f1system}), to give
\begin{multline}
\label{eq:104}
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\frac{x}{N^2} \vec{1}^{\alpha_1}\right)
\\= \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right)
+\frac1N \left( \frac{2\alpha_1}\beta + \frac2\beta(\alpha_2 +1) -1\right)
x\frac{{\mathrm d}}{{\mathrm d} x}\left\{ \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \right\}
\\- \frac{\alpha_1}N x\, \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) + {\mathrm O}\left(\frac1{N^2}
\right).
\end{multline}
Combining with \eqref{eq:102} we get
\begin{align}
{\mathbb P}(N^2\phi_1 > x) &= {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \nonumber \\
&\qquad + \frac1N\left( \frac2\beta(\alpha_1 + \alpha_2 + 1) - 1 \right) x
{\mathrm e}^{-\beta x/2} \frac{{\mathrm d}}{{\mathrm d} x}\left\{ \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \right\}
\nonumber \\ & \qquad-\frac1N\left( 1 + \alpha_1 + \alpha_2 - \frac\beta2
\right) x{\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) + {\mathrm O}\left( \frac1{N^2}
\right) \nonumber \\
&= {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \nonumber \\
&\qquad + \frac{x}N\left(\frac{2}\beta(\alpha_1 + \alpha_2 +1)-1\right)
\frac{{\mathrm d}}{{\mathrm d} x} \left\{ {\mathrm e}^{-\beta x/2}\fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \right\} \nonumber \\
&\qquad + {\mathrm O}\left( \frac1{N^2} \right).
\label{eq:103}
\end{align}
Setting $\xi = x/N^2$ in \eqref{eq:25} of Corollary \ref{cor:derivative} gives
\begin{multline}
\label{eq:105}
N^2 \frac{{\mathrm d}}{{\mathrm d} x} \left( \left(1-\frac{x}{N^2}\right)^{N(1+\alpha_2
+(N-1)\beta/2)}
\fourIdx{}2{(\beta/2)}1{F}\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta;\frac{x}{N^2} \vec{1}^{\alpha_1}\right)\right)
\\ = - \frac{Z_N(\alpha_1,\alpha_2,\beta)}{N^{2\alpha_1}} x^{\alpha_1}
\left(1-\frac{x}{N^2}\right)^{N(1+\alpha_2+(N-1)\beta/2)-1} \\ \times
\twoFone{\beta/2} \left( 1-N, 2 - N - \frac2\beta(\alpha_2+1);
\frac{2\alpha_1}\beta + 2 ; \frac{x}{N^2}\vec{1}^{\alpha_1} \right),
\end{multline}
where $Z_N$ was defined in \eqref{eq:14}. A further application of
(the leading order of) Theorem \ref{thm:two-term} and equation \eqref{eq:102},
and Lemma \ref{lem:Z} for the asymptotics of $Z_N$, brings \eqref{eq:105}
to
\begin{multline}
\label{eq:106}
\frac{{\mathrm d}}{{\mathrm d} x}\left( {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \right) \\=
\frac{-\Gamma(1+\beta/2)}{\Gamma(1+\alpha_1)\Gamma(1+\alpha_1+\beta/2)}
\left( \frac\beta2 \right)^{2\alpha_1+1} x^{\alpha_1}
{\mathrm e}^{-\beta x/2}
\fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta+2; x\vec{1}^{\alpha_1} \right).
\end{multline}
We use \eqref{eq:106} to remove the derivative term from \eqref{eq:103},
giving
\begin{multline}
\label{eq:107}
{\mathbb P}(N^2\phi_1 > x) = {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right) \\
- \frac{x^{1+\alpha_1}}N\left((\alpha_1 + \alpha_2 +1)-\frac\beta2\right)
\left(\frac\beta2\right)^{2\alpha_1}
\frac{\Gamma(1+\beta/2)}{\Gamma(1+\alpha_1)\Gamma(1+\alpha_1+\beta/2)}
\\\times {\mathrm e}^{-\beta x/2}
\fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta+2; x\vec{1}^{\alpha_1} \right) +
{\mathrm O}\left(\frac1{N^2}\right).
\end{multline}
This completes the proof. \hspace*{\fill}~$\Box$
That the first-order correction term in \eqref{eq:107} is proportional
to the derivative of the leading term implies that the finite-$N$ behaviour
can be interpreted, up to an error of order ${\mathrm O}(N^{-2})$, as a correction
to the width: if we let
\begin{equation}
\label{eq:158}
F_\infty(x)\mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} 1 - {\mathrm e}^{-\beta x/2} \fourIdx{}0{(\beta/2)}1F\left(
;\frac{2\alpha_1}\beta; x\vec{1}^{\alpha_1} \right)
\end{equation}
then we may interpret Theorem \ref{thm:main} as saying
\begin{equation}
\label{eq:159}
F_{N^2\phi_1}(x) = F_\infty(x) + \frac{x}N\left(\frac2\beta(\alpha_1 +
\alpha_2 + 1)-1\right) F_\infty'(x) + {\mathrm O}\left(\frac1{N^2}\right).
\end{equation}
By Taylor's theorem, this is equivalent to
\begin{equation}
\label{eq:160}
F_{N^2\phi_1}(x) = F_\infty\left( x\left( 1 + \frac1N\left(
\frac2\beta(\alpha_1 +
\alpha_2 + 1)-1\right) \right)\right) + {\mathrm O}\left(\frac1{N^2}\right).
\end{equation}
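The equivalence of \eqref{eq:159} and \eqref{eq:160} is just Taylor's theorem applied to $F_\infty$. A minimal numerical illustration, with a hypothetical smooth distribution function standing in for $F_\infty$ (all names here are ours):

```python
import math

# a hypothetical smooth distribution function standing in for F_infty
F  = lambda x: 1.0 - math.exp(-x) * (1.0 + x)
dF = lambda x: x * math.exp(-x)  # its derivative

x, c = 2.0, 0.9  # c plays the role of 2(alpha1+alpha2+1)/beta - 1

def gap(N):
    # |F(x(1+c/N)) - [F(x) + (c x/N) F'(x)]|: the mismatch between (159) and (160)
    return abs(F(x * (1 + c / N)) - (F(x) + (c * x / N) * dF(x)))

print(gap(400) / gap(100))  # ~ (100/400)^2 = 1/16, i.e. the mismatch is O(1/N^2)
```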
\subsection{Connection with Jacobi polynomials} \label{sec:main-result:2}
We now prove the formula \eqref{eq:153} from
Corollary \ref{cor:Jacobi_polys} giving a formula for the distribution
of the smallest J$\betaup$E\ eigenvalue in terms of multi-variable Jacobi
polynomials.
\dimostrazionea{Corollary \ref{cor:Jacobi_polys}}
We have for $\vec{x}\in{\mathbb C}^n$
\begin{multline}
\label{eq:39}
\fourIdx{}2{(\sigma)}1{F}(-N, N+1+a+b+(n-1)/\sigma; a + 1 +(n-1)/\sigma;
\vec{x}) \\=
\frac{[-N]_{(N^n)}^{(\sigma)}[N+1+a+b+(n-1)/\sigma]_{(N^n)}^{(\sigma)}}{[a+1
+(n-1)/\sigma]_{(N^n)}^{(\sigma)} (nN)!}
P_{(N^n)}^{\sigma,a,b}(\vec{x}).
\end{multline}
This is essentially Th\'eor\`eme 5 of \cite{las:pdj}, incorporating our
different choice of normalisation of the Jacobi polynomials. It may be proved
by observing that both sides of \eqref{eq:39} are multivariate
symmetric polynomials that satisfy the same
partial differential equation (see \cite{yan:aco} or \cite{kan:sia} for
the PDE satisfied by $\fourIdx{}2{(\sigma)}1{F}$) and that the
leading term on both sides is proportional to $C_{(N^n)}^{(\sigma)}(\vec{x})$
with identical constant.
Comparing to \eqref{eq:23} we find the appropriate parameters for the
multivariate Jacobi polynomial are
\begin{equation}
\label{eq:40}
\sigma = \frac{\beta}2,\qquad a= -1 + \frac2\beta,\qquad
b = -1 + \frac{2(\alpha_2+1)}\beta.
\end{equation}
From \eqref{eq:23} and the identity \eqref{eq:39} with parameters as above
we have
\begin{align}
\label{eq:41}
{\mathbb P}(&\phi_1 > \xi) = (1-\xi)^{N(1+\alpha_1+\alpha_2+(N-1)\beta/2)}\\
&\times
\frac{[-N]_{(N^{\alpha_1})}^{(\beta/2)}[N-1+2(\alpha_1+\alpha_2+1)/\beta
]_{(N^{\alpha_1})}^{(\beta/2)}}{[2\alpha_1/\beta]_{(N^{\alpha_1})}^{(\beta/2)}
(N\alpha_1)!} P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
\left(\frac{-\xi}{1-\xi}\vec{1}^{\alpha_1}\right).\nonumber
\end{align}
Taking the limit $\xi\to0^+$ we need to have ${\mathbb P}(\phi_1>0)=1$ and so
\begin{equation}
\label{eq:42}
P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
(\vec{0}^{\alpha_1}) = \frac{[2\alpha_1/\beta]_{(N^{\alpha_1})}^{(\beta/2)}
(N\alpha_1)!}{[-N]_{(N^{\alpha_1})}^{(\beta/2)}[N-1+2(\alpha_1+\alpha_2+
1)/\beta]_{(N^{\alpha_1})}^{(\beta/2)}}.
\end{equation}
Some of the quantities in \eqref{eq:42} simplify: we have, starting
from \eqref{eq:7},
\begin{align}
[-N]_{(N^n)}^{(\sigma)} &= \prod_{i=1}^n\prod_{j=1}^N \left( -N + j -1
-\frac{i-1}\sigma \right) \nonumber \\
&=\prod_{i=1}^n\prod_{j=1}^N \left( -j -\frac{i-1}\sigma \right) \nonumber \\
&=(-1)^{Nn}\prod_{i=0}^{n-1}\prod_{j=1}^{N} \left( j + \frac{i}\sigma \right),
\label{eq:46}
\end{align}
and
\begin{align}
\left[ \frac{n}\sigma+\theta \right]_{(N^n)}^{(\sigma)} &=
\prod_{i=1}^n\prod_{j=1}^N \left( \theta + j - 1 + \frac{n-i+1}\sigma
\right) \nonumber \\
&=\prod_{i=1}^n\prod_{j=0}^{N-1} \left( \theta + j + \frac{i}\sigma
\right),
\label{eq:47}
\end{align}
so that
\begin{align}
\frac{[n/\sigma]_{(N^n)}^{(\sigma)}}{[-N]_{(N^n)}^{(\sigma)}} &=
(-1)^{nN} \frac{\prod_{i=1}^n \prod_{j=0}^{N-1}(j+i/\sigma)}%
{\prod_{i=0}^{n-1}\prod_{j=1}^N (j+i/\sigma)} \nonumber \\
&= (-1)^{Nn} \frac{\prod_{i=1}^n (i/\sigma) \prod_{j=1}^{N-1} (j+n/\sigma)}%
{\prod_{i=0}^{n-1} (N+i/\sigma)\prod_{j=1}^{N-1}j} \nonumber \\
&= \frac{(-1)^{nN}\, n!}{(N-1)!\sigma^n} \frac{\prod_{j=1}^{N-1} (j+n/\sigma)}%
{\prod_{i=0}^{n-1} \sigma^{-1}(N\sigma+i)} \nonumber \\
&= \frac{(-1)^{nN}\, n! (1+n/\sigma)_{N-1}}{(N-1)!(N\sigma)_n}.
\label{eq:48}
\end{align}
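The chain of manipulations \eqref{eq:46}--\eqref{eq:48} can be verified exactly in rational arithmetic, using the definition of the generalised Pochhammer symbol for the rectangular partition $(N^n)$ as used in \eqref{eq:46} and \eqref{eq:47}. A sketch (helper names are ours):

```python
from fractions import Fraction
from math import factorial

def gen_poch_rect(z, sigma, n, N):
    # [z]^{(sigma)}_{(N^n)} = prod_{i=1}^n prod_{j=1}^N (z + j - 1 - (i-1)/sigma)
    p = Fraction(1)
    for i in range(1, n + 1):
        for j in range(1, N + 1):
            p *= z + j - 1 - Fraction(i - 1) / sigma
    return p

def poch(z, k):
    # ordinary Pochhammer symbol (z)_k
    p = Fraction(1)
    for m in range(k):
        p *= z + m
    return p

sigma, n, N = Fraction(3, 2), 3, 4  # sigma = beta/2 with beta = 3
lhs = gen_poch_rect(Fraction(n) / sigma, sigma, n, N) \
      / gen_poch_rect(Fraction(-N), sigma, n, N)
rhs = Fraction((-1) ** (n * N) * factorial(n)) \
      * poch(1 + Fraction(n) / sigma, N - 1) \
      / (factorial(N - 1) * poch(N * sigma, n))
print(lhs == rhs)  # True: eq. (48) holds exactly
```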
In \eqref{eq:42} the ratio $[2\alpha_1/\beta]_{(N^{\alpha_1})}^{(\beta/2)}/
[-N]_{(N^{\alpha_1})}^{(\beta/2)}$ is of the form \eqref{eq:48} and
the factor $[N-1+2(\alpha_1+\alpha_2+1)/\beta]_{(N^n)}^{(\beta/2)}$ is
of the form \eqref{eq:47}. Thus we get
\begin{multline}
\label{eq:49}
P_{(N^{\alpha_1})}^{\beta/2,-1+2/\beta,-1+2(\alpha_2+1)/\beta}
(\vec{0}^{\alpha_1}) = \frac{(-1)^{N\alpha_1} \alpha_1! (N\alpha_1)!
(1+2\alpha_1/\beta)_{N-1}}{(N-1)! (N\beta/2)_{\alpha_1}}
\\\times
\prod_{i=1}^{\alpha_1} \frac1{(N-1+2(\alpha_2+1+i)/\beta)_N}.
\end{multline}
\hspace*{\fill}~$\Box$
\section{Explicit formul\ae}\label{sec:explicit-formulas}
In certain situations we are able to derive expressions for
$F_{\phi_1}$ and asymptotics for $F_{N^2\phi_1}$ that are more
explicit, and these are expounded in the present Section.
To begin with we focus on the case $\beta=2$ corresponding to the
Jacobi Unitary Ensemble. In this case we benefit from the fact that
the multi-variable Jacobi polynomials enjoy a determinantal
structure. In Section \ref{sec:small_alpha} we record some formul\ae\
(for arbitrary $\beta>0$) for the special cases $\alpha_1=0$ and
$\alpha_1=1$.
\subsection{Determinantal identities}
In order to state the determinantal identities
let us denote by $p_m^{a,b}(x)$ the $m$th monic Jacobi polynomial
orthogonal with respect to the measure $x^{a}(1-x)^{b}$
on the interval $[0,1]$. In terms of the definition $P_m^{(a,b)}$
of Jacobi polynomials given by Szeg\H{o} \cite[Ch.~IV]{sze:op}, our
definition satisfies
\begin{equation}
\label{eq:34}
p_m^{a,b}(x) = \frac{m!}{(m+a+b+1)_m} P_m^{(b,a)}(%
2x-1).
\end{equation}
As a hypergeometric function,
\begin{equation}
\label{eq:35}
p_m^{a,b}(x) = \frac{(-1)^mm!}{(m+a+b+1)_m}
\binom{m+a}m \fourIdx{}2{}1{F}(-m, m+a+b+1; a+1; x).
\end{equation}
We also need the hook-length $h_\lambda$ for a partition, defined by
\begin{equation}
\label{eq:154}
h_\lambda \mathbin{\hbox{\raise0.08ex\hbox{\rm :}}\!\!=} \frac{\prod_{i=1}^{\ell(\lambda)} (\lambda_i+
\ell(\lambda)-i)!}{\prod_{i<j}(\lambda_i - \lambda_j - i + j)},
\end{equation}
with $\ell(\lambda)$ the number of non-zero parts.
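The formula \eqref{eq:154} agrees with the classical definition of $h_\lambda$ as the product of the hook lengths of the cells of $\lambda$; a quick check (helper names are ours):

```python
from math import factorial

def h_formula(lam):
    # eq. (154): prod_i (lam_i + l - i)! / prod_{i<j} (lam_i - lam_j - i + j)
    l = len(lam)
    num = 1
    for i in range(1, l + 1):
        num *= factorial(lam[i - 1] + l - i)
    den = 1
    for i in range(1, l + 1):
        for j in range(i + 1, l + 1):
            den *= lam[i - 1] - lam[j - 1] - i + j
    return num // den

def h_hooks(lam):
    # classical product of hook lengths h(i,j) = lam_i - j + lam'_j - i + 1
    conj = [sum(1 for p in lam if p >= j) for j in range(1, max(lam) + 1)]
    h = 1
    for i, p in enumerate(lam, 1):
        for j in range(1, p + 1):
            h *= p - j + conj[j - 1] - i + 1
    return h

for lam in [(3, 1), (4, 4, 4), (5, 3, 2, 1)]:
    print(lam, h_formula(lam) == h_hooks(lam))  # all True
```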
If $\sigma=1$ the following Lemma gives alternative expressions for
the multivariate Jacobi polynomials.
\begin{lemma}\label{lem:2f1}
Let $\vec{x}\in{\mathbb C}^n$. Then
\begin{equation}
\label{eq:36}
P_\lambda^{1,a,b}(\vec{x}) =
\frac{(-1)^{\lfloor n/2\rfloor}|\lambda|!}{h_\lambda}
\frac{\det(p_{\lambda_i+n-i}^{a,b}(x_j))_{i,j=1}^n}{\Delta(\vec{x})},
\end{equation}
where $h_\lambda$ is the
hook-length of the partition $\lambda$ and $\Delta(\vec{x})$ is
the Vandermonde determinant \eqref{eq:1}. If,
furthermore $\vec{x}=x\vec{1}^n$, $x\in{\mathbb C}$, then we have
\begin{equation}
\label{eq:52}
P_\lambda^{1,a,b}(x\vec{1}^n) =
\frac{(-1)^{\lfloor n/2\rfloor}|\lambda|!}{h_\lambda \prod_{j=1}^{n-1}j!}
\det\left( \frac{(\lambda_i+n-i)!}{(\lambda_i+n-i-j+1)!}
p_{\lambda_i+n-i-j+1}^{a+j-1,b+j-1}(x) \right)_{i,j=1}^n.
\end{equation}
\end{lemma}
\noindent{\sl Proof.}\phantom{X}
That the multivariate Jacobi polynomial at $\sigma=1$ has a determinant
evaluation in terms of univariate
Jacobi polynomials is known since \cite[Th\'eor\`eme 10]{las:pdj}:
\begin{equation}\label{eq:43}
P_\lambda^{1,a,b}(\vec{x}) = \text{const}
\frac{\det(p_{\lambda_i+n-i}^{a,b}(x_j))_{i,j=1}^n}{\Delta(\vec{x})}.
\end{equation}
Lassalle uses a different normalisation of the Jacobi polynomials from
ours, which changes the numerical value of the constant in \eqref{eq:43}.
We can fix the constant
by observing that since our Jacobi polynomials are monic,
\begin{align}
\frac{\det(p_{\lambda_i+n-i}^{a,b}(x_j))}{\Delta(\vec{x})}
&=\frac{\det(x_j^{\lambda_i+n-i})}{\Delta(\vec{x})} +
\text{lower order terms} \nonumber \\
&= (-1)^{\lfloor n/2\rfloor}
\mathfrak{s}_\lambda(\vec{x}) + \text{lower order terms},
\label{eq:37}
\end{align}
where $\mathfrak{s}_\lambda$ is a Schur polynomial. The Jack polynomials
at $\sigma=1$ are proportional to Schur polynomials \cite{jac:aco}, and
in fact
\begin{equation}
\label{eq:38}
C_\lambda^{(1)}(\vec{x}) = \frac{|\lambda|!}%
{h_\lambda} \mathfrak{s}_\lambda(\vec{x}).
\end{equation}
The implication is
\begin{equation}
\label{eq:36bis}
P_\lambda^{1,a,b}(\vec{x}) =
\frac{(-1)^{\lfloor n/2\rfloor}|\lambda|!}{h_\lambda}
\frac{\det(p_{\lambda_i+n-i}^{a,b}(x_j))_{i,j=1}^n}{\Delta(\vec{x})}.
\end{equation}
In our applications $\vec{x}$ is a scalar multiple of $\vec{1}^n$.
To take the confluent limit $\vec{x}\to x\vec{1}^n$, we use the formula
\begin{equation}
\label{eq:44}
\lim_{\vec{x}\to x\vec{1}^n} \frac{\det(\varphi_i(x_j))}{\Delta(\vec{x})}
= \frac1{\prod_{j=1}^{n-1} j!} \mathcal{W}(\varphi_1,\ldots,\varphi_n)(x)
\end{equation}
proved in \cite[Lemma A.1]{ari:lew}, where $\mathcal{W}$ denotes the
Wronskian
\begin{equation}
\label{eq:45}
\mathcal{W}(\varphi_1,\ldots,\varphi_n)(x)
= \det\left( \frac{{\mathrm d}^{j-1}}{{\mathrm d} x^{j-1}}
\varphi_i(x)\right)_{i,j=1}^n,
\end{equation}
and the functions $\varphi_1,\ldots,\varphi_n$ must be regular at
$x$. (A version of \eqref{eq:44} valid for polynomials
$\varphi_1,\ldots,\varphi_n$ was proved in \cite[Theorem 1]{gra:eop},
which would suffice to handle \eqref{eq:36}, but later we will have
reason to apply \eqref{eq:44} to non-polynomial functions.)
In the application we presently have in mind, $\varphi_i$ is the Jacobi
polynomial $p_{\lambda_i+n-i}^{a,b}$ and using the fact that
\begin{equation}
\label{eq:50}
\frac{{\mathrm d}}{{\mathrm d} x} p_n^{a,b}(x) = n p_{n-1}^{a+1,b+1}(x),
\end{equation}
which extends to
\begin{align}
\frac{{\mathrm d}^{j-1}}{{\mathrm d} x^{j-1}} p_n^{a,b}(x) &= n(n-1)\cdots
(n-j+2) p_{n-j+1}^{a+j-1,b+j-1}(x) \nonumber \\
&= \frac{n!}{(n-j+1)!} p_{n-j+1}^{a+j-1,b+j-1}(x),
\label{eq:51}
\end{align}
we have
\begin{equation}
\label{eq:52bis}
\mathcal{W}(p_{\lambda_1+n-1}^{a,b},\ldots,p_{\lambda_n}^{a,b})(x)
= \det\left( \frac{(\lambda_i+n-i)!}{(\lambda_i+n-i-j+1)!}
p_{\lambda_i+n-i-j+1}^{a+j-1,b+j-1}(x)\right)_{i,j=1}^n.
\end{equation}
Combining \eqref{eq:36}, \eqref{eq:44} and \eqref{eq:52bis} we get
\eqref{eq:52}. \hspace*{\fill}~$\Box$
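The derivative identities \eqref{eq:50} and \eqref{eq:51} can be confirmed exactly from the hypergeometric representation \eqref{eq:35}; a sketch in rational arithmetic (helper names are ours):

```python
from fractions import Fraction

def poch(z, k):
    # Pochhammer symbol (z)_k
    p = Fraction(1)
    for m in range(k):
        p *= z + m
    return p

def monic_jacobi(m, a, b):
    # coefficient list (in x^0..x^m) of p_m^{a,b} via the 2F1 representation, eq. (35);
    # the prefactor (-1)^m m!/(m+a+b+1)_m * binom(m+a,m) simplifies to (-1)^m (a+1)_m/(m+a+b+1)_m
    pref = Fraction((-1) ** m) * poch(a + 1, m) / poch(m + a + b + 1, m)
    return [pref * poch(Fraction(-m), k) * poch(m + a + b + 1, k)
            / (poch(a + 1, k) * poch(Fraction(1), k)) for k in range(m + 1)]

def deriv(coeffs):
    return [k * c for k, c in enumerate(coeffs)][1:]

a, b = Fraction(1, 3), Fraction(5, 2)
for m in (2, 3, 5):
    # d/dx p_m^{a,b} = m p_{m-1}^{a+1,b+1}, eq. (50)
    assert deriv(monic_jacobi(m, a, b)) == [m * c for c in monic_jacobi(m - 1, a + 1, b + 1)]
print("eq. (50) verified for sample parameters")
```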
\begin{corollary}
For $\vec{x}\in{\mathbb C}^n$, and fixed $c$ such that $c-i$ is not a negative
integer for $i=0,\ldots,n-1$, we have
\begin{equation}
\label{eq:125}
\fourIdx{}0{(1)}1{F}(;c;\vec{x})
=(-1)^{\lfloor n/2\rfloor} \frac{\prod_{i=1}^n \Gamma(c+1-i) x_i^{n-c/2}}
{\Delta(\vec{x})}
\det\left( x_j^{-i/2} I_{c+i-2n}(2\sqrt{x_j})\right)_{i,j=1}^n,
\end{equation}
where $I_\nu(z)$ is the $I$-Bessel function.
If $\vec{x}=x\vec{1}^n$, $x\in{\mathbb C}$ we have a further
formula:
\begin{equation}
\fourIdx{}0{(1)}1F(;c;x\vec{1}^n)
= x^{n(n-c)/2} \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\prod_{j=1}^{n-1} j!}
\det\left( I_{c-n+j-i}(2\sqrt{x})\right)_{i,j=1}^n.
\label{eq:138}
\end{equation}
\end{corollary}
A special case (with $c=2n$) of \eqref{eq:138}
was given in \cite{for:bca}, where it was proved by using the
relationships with Painlev\'e functions \cite{for:dpe}. We further
remark that \eqref{eq:125} could be proved in a less direct way
through the use of tau functions of hypergeometric type
\cite{orl:nsm}.
\medskip
\noindent{\sl Proof.}\phantom{X}
We know that if $c-(i-1)/\sigma$ is not a negative integer for any $i=1,
\ldots,n$ then
\begin{equation}
\label{eq:60}
\fourIdx{}2{(\sigma)}1{F}\left(-N, -N; c; \frac1{N^2}\vec{x}\right) \to
\fourIdx{}0{(\sigma)}1{F}(;c;\vec{x})\qquad\text{as $N\to\infty$,}
\end{equation}
uniformly for $\vec{x}$ in compact subsets of $\mathbb{C}^n$.
From \eqref{eq:39}
\begin{equation}
\fourIdx{}2{(\sigma)}1{F}\left(-N, -N; c; \frac1{N^2}\vec{x}\right) =
\frac{\left( [-N]^{(\sigma)}_{(N^n)}\right)^2}{[c]_{(N^n)}^{(\sigma)} (Nn)!}
P_{(N^n)}^{\sigma,c-1-(n-1)/\sigma,-c-2N}\left( \frac1{N^2}\vec{x}\right).
\label{eq:61}
\end{equation}
Setting $\sigma=1$ and applying \eqref{eq:36},
\begin{align}
\fourIdx{}2{(1)}1{F}\left(-N, -N; c; \frac1{N^2}\vec{x}\right) &=
\frac{\left( [-N]^{(1)}_{(N^n)}\right)^2}{[c]_{(N^n)}^{(1)} (Nn)!}
P_{(N^n)}^{1,c-n,-c-2N}\left( \frac1{N^2}\vec{x}\right)
\label{eq:108} \\
&= (-1)^{\lfloor n/2\rfloor}
\frac{\left( [-N]^{(1)}_{(N^n)}\right)^2 N^{n(n-1)}}{[c]_{(N^n)}^{(1)}
h_{(N^n)}} \frac{\det(p_{N+n-i}^{c-n,-c-2N}
(x_j/N^2))_{i,j=1}^n}{\Delta(\vec{x})}.
\nonumber
\end{align}
It is problematic to take the limit $N\to\infty$ here directly. The
determinant of Jacobi polynomials tends to $0$ as $N\to\infty$, but to
find the rate and the leading term some further manipulations are necessary.
These involve repeated use of the contiguous identity
\begin{equation}
\label{eq:109}
p_m^{a,b}(x) + \frac{m(m+b)}{(2m+a+b-1)(2m+a+b)} p_{m-1}^{a,b}(x) =
p_m^{a-1,b}(x),
\end{equation}
to add
successively to each row a multiple of the row below, in a
recursive fashion, to get that
\begin{equation}
\label{eq:110}
\det\big(p_{N+n-i}^{c-n,-c-2N}
(x_j/N^2)\big)_{i,j=1}^n = \det\big(p_{N+n-i}^{c-2n+i,-c-2N}
(x_j/N^2)\big)_{i,j=1}^n,
\end{equation}
and we have
\begin{equation}
\label{eq:111}
\fourIdx{}2{(1)}1{F}\left(-N, -N; c; \frac1{N^2}\vec{x}\right) =
(-1)^{\lfloor n/2\rfloor}
\frac{\left( [-N]^{(1)}_{(N^n)}\right)^2 N^{n(n-1)}}{[c]_{(N^n)}^{(1)}
h_{(N^n)}} \frac{\det(p_{N+n-i}^{c-2n+i,-c-2N}
(x_j/N^2))_{i,j=1}^n}{\Delta(\vec{x})}.
\end{equation}
We return to the hypergeometric function representation of Jacobi polynomials
\eqref{eq:35},
\begin{multline}
\label{eq:112}
p_{N+n-i}^{c-2n+i,-c-2N}(x_j/N^2) =
\frac{(-1)^{N+n-i}(N+n-i)!}{(1-n-N)_{N+n-i}}
\binom{N+c-n}{N+n-i} \\
\times \fourIdx{}2{}1{F}(i-n-N, 1-n-N ;1+c-2n+i; x_j/N^2)
\end{multline}
and observe that
\begin{equation}
\label{eq:115}
(1-n-N)_{N+n-i} = (-1)^{N+n-i}\frac{(N+n-1)!}{(i-1)!}
\end{equation}
so
\begin{multline}
\label{eq:117}
p_{N+n-i}^{c-2n+i,-c-2N}(x_j/N^2) =\\
\frac{(i-1)!\Gamma(N+c+1-n)}{\Gamma(N+n)
\Gamma(1+c-2n+i)} \fourIdx{}2{}1{F}(i-n-N, 1-n-N ;1+c-2n+i; x_j/N^2).
\end{multline}
Using \eqref{eq:78} and the fact that
\begin{equation}
\label{eq:118}
\lim_{N\to\infty}
\fourIdx{}2{}1{F}(i-n-N, 1-n-N ;1+c-2n+i; x_j/N^2) =
\fourIdx{}0{}1{F}(;1+c-2n+i; x_j)
\end{equation}
we have the asymptotic behaviour
\begin{equation}
p_{N+n-i}^{c-2n+i,-c-2N}(x_j/N^2)
\sim \frac{(i-1)!N^{c+1-2n}}{\Gamma(1+c-2n+i)}
\fourIdx{}0{}1{F}(;1+c-2n+i; x_j)
\end{equation}
as $N\to\infty$. Putting this into the determinant from \eqref{eq:111},
\begin{equation}
\label{eq:121}
\det(p_{N+n-i}^{c-2n+i,-c-2N} (x_j/N^2))
\sim N^{cn+n-2n^2}\prod_{i=1}^n(i-1)! \det\left(
\frac{\fourIdx{}0{}1{F}(;1+c-2n+i; x_j)}{\Gamma(1+c-2n+i)}\right),
\end{equation}
as $N\to\infty$.
We already know (equation \eqref{eq:46}) that
\begin{equation}
\label{eq:119}
[-N]_{(N^n)}^{(1)} = (-1)^{nN} \prod_{i=1}^n \frac{\Gamma(N+
i)}{\Gamma(i)},
\end{equation}
and we have
\begin{equation}
\label{eq:54}
h_{(N^n)} = \prod_{i=1}^n\prod_{j=1}^N (i+j-1) = \prod_{i=1}^n
\frac{(N+i-1)!}{(i-1)!},
\end{equation}
so
\begin{align}
\frac{\left( [-N]^{(1)}_{(N^n)}\right)^2}{[c]_{(N^n)}^{(1)} h_{(N^n)}}
&= \prod_{i=1}^n \frac{\Gamma(N+i)\Gamma(c+1-i)}{\Gamma(i)\Gamma(c+1-i+N)}
\nonumber \\
&\sim \prod_{i=1}^n \frac{\Gamma(c+1-i)}{\Gamma(i)}N^{-c-1+2i} \nonumber \\
&= N^{-cn+n^2} \prod_{i=1}^n \frac{\Gamma(c+1-i)}{\Gamma(i)}\quad
\text{as $N\to\infty$.}
\label{eq:120}
\end{align}
Putting \eqref{eq:120} and \eqref{eq:121} together into the right-hand
side of \eqref{eq:111} we find that all the factors of $N$ cancel,
and we recover the $N\to\infty$ limit, which yields
\begin{align}
\fourIdx{}0{(1)}1{F}(;c;\vec{x}) &= \lim_{N\to\infty}
\fourIdx{}2{(1)}1{F}\left(-N, -N; c; \frac1{N^2}\vec{x}\right)
\nonumber \\
&= (-1)^{\lfloor n/2\rfloor} \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\Delta(\vec{x})}
\det\left(
\frac{\fourIdx{}0{}1{F}(;1+c-2n+i; x_j)}{\Gamma(1+c-2n+i)}\right)_{i,j=1}^n.
\label{eq:122}
\end{align}
We can derive an expression involving the more familiar Bessel functions
by means of the identity \cite[\S7.8, eq.~(1)]{erd:htfII}
\begin{equation}
\label{eq:124}
\fourIdx{}0{}1{F}(;\alpha+1;x) = \Gamma(\alpha+1)x^{-\alpha/2}I_\alpha(2
\sqrt{x}).
\end{equation}
Inserting this into
\eqref{eq:122}, we have
\begin{equation}
\fourIdx{}0{(1)}1{F}(;c;\vec{x}) =
(-1)^{\lfloor n/2\rfloor} \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\Delta(\vec{x})}
\det\left( x_j^{n-(c+i)/2} I_{c+i-2n}(2\sqrt{x_j})\right)_{i,j=1}^n,
\end{equation}
which is equivalent to \eqref{eq:125}.
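Identity \eqref{eq:124}, which underlies the passage from \eqref{eq:122} to \eqref{eq:125}, amounts to the fact that both sides have the same Taylor series in $x$. A numerical sketch with truncated series (function names are ours):

```python
import math

def hyp0f1(c, x, terms=60):
    # truncated series 0F1(; c; x) = sum_k x^k / ((c)_k k!)
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= x / ((c + k) * (k + 1))
    return s

def bessel_i(nu, z, terms=60):
    # truncated series I_nu(z) = sum_k (z/2)^{2k+nu} / (k! Gamma(k+nu+1))
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

alpha, x = 0.75, 2.4
lhs = hyp0f1(alpha + 1, x)
rhs = math.gamma(alpha + 1) * x ** (-alpha / 2) * bessel_i(alpha, 2 * math.sqrt(x))
print(abs(lhs - rhs) < 1e-9)  # True
```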
To take the confluent limit $\vec{x}\to x\vec{1}^n$, we prefer to work
with \eqref{eq:122}. Using, for a second time, identity \eqref{eq:44},
\begin{multline}
\label{eq:133}
\lim_{\vec{x}\to x\vec{1}^n} \fourIdx{}0{(1)}1F(;c ;\vec{x}) =
(-1)^{\lfloor n/2 \rfloor} \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\prod_{j=1}^{n-1}
j!}\\\times
\det\left( \frac1{\Gamma(1+c-2n+i)} \frac{{\mathrm d}^{j-1}}{{\mathrm d} x^{j-1}}
\left( \fourIdx{}0{}1F (;1+c-2n+i; x)\right)\right).
\end{multline}
Since
\begin{equation}
\label{eq:134}
\frac{{\mathrm d}}{{\mathrm d} x} \fourIdx{}0{}1{F}(;c;x) = \frac1c
\fourIdx{}0{}1{F}(;c+1;x),
\end{equation}
we have
\begin{equation}
\label{eq:135}
\frac{{\mathrm d}^{j-1}}{{\mathrm d} x^{j-1}} \fourIdx{}0{}1{F}(;c;x) =
\frac{\Gamma(c)}{\Gamma(c+j-1)}
\fourIdx{}0{}1{F}(;c+j-1;x),
\end{equation}
and
\begin{equation}
\label{eq:136}
\frac{{\mathrm d}^{j-1}}{{\mathrm d} x^{j-1}} \left( \fourIdx{}0{}1{F}(;1+c-2n+i;x)\right)
= \frac{\Gamma(1+c-2n+i)}{\Gamma(c-2n+i+j)}
\fourIdx{}0{}1{F}(;c-2n+i+j;x).
\end{equation}
Putting this into \eqref{eq:133},
\begin{align}
\fourIdx{}0{(1)}1F(;c;x\vec{1}^n) &= (-1)^{\lfloor n/2 \rfloor}
\frac{\prod_{i=1}^n \Gamma(c+1-i)}{\prod_{j=1}^{n-1} j!}
\det\left( \frac{\fourIdx{}0{}1F (;c-2n+i+j; x)}%
{\Gamma(c-2n+i+j)} \right)_{i,j=1}^n \nonumber \\
&= \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\prod_{j=1}^{n-1} j!}
\det\left( \frac{\fourIdx{}0{}1F (;c-n+1+j-i; x)}%
{\Gamma(c-n+1+j-i)} \right)_{i,j=1}^n,
\label{eq:137}
\end{align}
reversing the order of the rows in the determinant. Using \eqref{eq:124} this
becomes
\begin{equation}
\fourIdx{}0{(1)}1F(;c;x\vec{1}^n)
= \frac{\prod_{i=1}^n \Gamma(c+1-i)}{\prod_{j=1}^{n-1} j!}
\det\left( x^{(-c+n-j+i)/2} I_{c-n+j-i}(2\sqrt{x})\right)_{i,j=1}^n.
\end{equation}
As a final step we use the identity $\det(\theta^{i-j}a_{ij}) =
\det(a_{ij})$ to reduce this to \eqref{eq:138}.
\hspace*{\fill}~$\Box$
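The determinant identity $\det(\theta^{i-j}a_{ij}) = \det(a_{ij})$ used in the final step is just conjugation by $\mathrm{diag}(\theta,\theta^2,\ldots,\theta^n)$, so every Leibniz term is unchanged. A numerical check (helper names are ours):

```python
import random
from itertools import permutations

def det(M):
    # Leibniz expansion; adequate for the small matrices used here
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

random.seed(7)
n, theta = 4, 1.7
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# scaling entry (i,j) by theta^{i-j} conjugates A by diag(theta^1, ..., theta^n)
B = [[theta ** (i - j) * A[i][j] for j in range(n)] for i in range(n)]
print(abs(det(A) - det(B)) < 1e-10)  # True
```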
\subsection{Smallest eigenvalue of the Jacobi Unitary Ensemble}
\begin{proposition}
Let $F_{\phi_1}$ be the probability distribution function of
the smallest eigenvalue of the $N\times N$ Jacobi $\beta$-Ensemble
with $\beta=2$, $\alpha_1\in{\mathbb N}_0$ and $\alpha_2>-1$. For
$0<\xi<1$,
\begin{multline}
\label{eq:157}
F_{\phi_1}(\xi) = 1 - (-1)^{N\alpha_1}
\prod_{i=1}^{\alpha_1}(N+\alpha_2+i)_N
(1-\xi)^{N(\alpha_1+\alpha_2+N)}\\
\times \det \left( \frac1{(N+i-j)!}p_{N+i-j}^{j-1,\alpha_2+j-1}
\left( \frac{-\xi}{1-\xi}\right) \right)_{i,j=1}^{\alpha_1}.
\end{multline}
As $N\to\infty$ we have, for $x>0$,
\begin{equation}
\label{eq:143bis}
F_{N^2\phi_1}(x) = 1 - {\mathrm e}^{-x} \det(I_{j-i}(2\sqrt{x}))
+ \frac{x}N
(\alpha_1 + \alpha_2) {\mathrm e}^{-x}
\det(I_{2+j-i}(2\sqrt{x}))
+ {\mathrm O}\left(\frac1{N^2}\right),
\end{equation}
where the determinants in \eqref{eq:143bis} are of size $\alpha_1\times
\alpha_1$.
\end{proposition}
\noindent{\sl Proof.}\phantom{X}
In view of \eqref{eq:153} we need to specialise \eqref{eq:52} to the
rectangular partition $\lambda=(N^n)$, to get
\begin{equation}
\label{eq:53}
P_{(N^n)}^{1,a,b}(x\vec{1}^n) =
\frac{(-1)^{\lfloor n/2\rfloor}(nN)!}{h_{(N^n)} \prod_{j=1}^{n-1}j!}
\det\left( \frac{(N+n-i)!}{(N+n-i-j+1)!}
p_{N+n-i-j+1}^{a+j-1,b+j-1}(x) \right)_{i,j=1}^n.
\end{equation}
We have already seen (equation \eqref{eq:54}) that
\begin{equation}
\label{eq:156}
h_{(N^n)} = \prod_{i=1}^n \frac{\Gamma(N+i)}{\Gamma(i)},
\end{equation}
and, after cancellation, \eqref{eq:53} becomes
\begin{equation}
\label{eq:55}
P_{(N^n)}^{1,a,b}(x\vec{1}^n) =
\frac{(-1)^{\lfloor n/2\rfloor}(nN)!}{\prod_{i=1}^{n}(N+i-1)!}
\det\left( \frac{(N+n-i)!}{(N+n-i-j+1)!}
p_{N+n-i-j+1}^{a+j-1,b+j-1}(x) \right)_{i,j=1}^n.
\end{equation}
In the determinant in \eqref{eq:55} there is a factor $(N+n-i)!$ multiplying
the $i$th row. If we extract these factors, they cancel the factorials in
the denominator. Finally, we reverse the order of the rows producing
a factor $(-1)^{\lfloor n/2 \rfloor}$ and the end result
\begin{equation}
\label{eq:56}
P_{(N^n)}^{1,a,b}(x\vec{1}^n) = (nN)!
\det\left( \frac{1}{(N+i-j)!}
p_{N+i-j}^{a+j-1,b+j-1}(x) \right)_{i,j=1}^n.
\end{equation}
So, if $\beta=2$,
\begin{align}
\label{eq:57}
{\mathbb P}(\phi_1&>\xi) = \frac{(1-\xi)^{N(\alpha_1+\alpha_2+N)}}%
{P_{(N^{\alpha_1})}^{1,0,\alpha_2}(\vec{0}^{\alpha_1})}
P_{(N^{\alpha_1})}^{1,0,\alpha_2}\left(\frac{-\xi}{1-\xi}\vec{1}^{\alpha_1}
\right) \\
&=\frac{(N\alpha_1)!}{P_{(N^{\alpha_1})}^{1,0,\alpha_2}(\vec{0}^{\alpha_1})}
(1-\xi)^{N(\alpha_1+\alpha_2+N)}
\det \left( \frac1{(N+i-j)!}p_{N+i-j}^{j-1,\alpha_2+j-1}
\left( \frac{-\xi}{1-\xi}\right) \right)_{i,j=1}^{\alpha_1},
\nonumber
\end{align}
where, from \eqref{eq:49},
\begin{equation}
\label{eq:58}
P_{(N^{\alpha_1})}^{1,0,\alpha_2}
(\vec{0}^{\alpha_1}) = \frac{(-1)^{N\alpha_1} \alpha_1! (N\alpha_1)!
(1+\alpha_1)_{N-1}}{(N-1)! (N)_{\alpha_1}}
\prod_{i=1}^{\alpha_1} \frac1{(N+\alpha_2+i)_N}.
\end{equation}
Note that since $\alpha_1!(1+\alpha_1)_{N-1} = (N+\alpha_1-1)!$ and
$(N-1)!(N)_{\alpha_1}=(N+\alpha_1-1)!$ too we get a simplified formula
\begin{equation}
\label{eq:59}
P_{(N^{\alpha_1})}^{1,0,\alpha_2}
(\vec{0}^{\alpha_1}) = {(-1)^{N\alpha_1} (N\alpha_1)! }
\prod_{i=1}^{\alpha_1} \frac1{(N+\alpha_2+i)_N}.
\end{equation}
Equations \eqref{eq:57} and \eqref{eq:59} yield \eqref{eq:157}.
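The factorial manipulations leading from \eqref{eq:58} to \eqref{eq:59} are elementary to check (helper names are ours):

```python
from math import factorial

def poch(z, k):
    # ordinary Pochhammer symbol (z)_k, integer arguments here
    p = 1
    for m in range(k):
        p *= z + m
    return p

# alpha_1! (1+alpha_1)_{N-1} = (N+alpha_1-1)! = (N-1)! (N)_{alpha_1}
ok = all(
    factorial(a1) * poch(1 + a1, N - 1)
    == factorial(N + a1 - 1)
    == factorial(N - 1) * poch(N, a1)
    for a1 in range(6) for N in range(1, 8)
)
print(ok)  # True
```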
We now turn to the two-term asymptotic formula \eqref{eq:143bis}.
With $\beta=2$ in \eqref{eq:140},
\begin{align}
\label{eq:141}
F_{N^2\phi_1}(x) &= 1 - {\mathrm e}^{-x} \fourIdx{}0{(1)}1F\left(
;\alpha_1; x\vec{1}^{\alpha_1} \right) \\ \nonumber
&\qquad + \frac{x^{1+\alpha_1}}N(\alpha_1 + \alpha_2)
\frac{{\mathrm e}^{-x}}{\Gamma(1+\alpha_1)\Gamma(2+\alpha_1)}
\fourIdx{}0{(1)}1F\left(
;\alpha_1+2; x\vec{1}^{\alpha_1} \right) +
{\mathrm O}\left(\frac1{N^2}\right).
\end{align}
Using the representation \eqref{eq:138} for the multi-variable hypergeometric
functions this becomes
\begin{align}
\label{eq:142}
F_{N^2\phi_1}(x) &= 1 - {\mathrm e}^{-x} \prod_{i=1}^{\alpha_1}\frac{\Gamma(\alpha_1
+1 - i)}{\Gamma(i)} \det(I_{j-i}(2\sqrt{x}))
\\ \nonumber
&\qquad + \frac{x}N
\frac{(\alpha_1 + \alpha_2) {\mathrm e}^{-x}}{\Gamma(1+\alpha_1)\Gamma(2+\alpha_1)}
\prod_{i=1}^{\alpha_1} \frac{\Gamma(\alpha_1+3-i)}{\Gamma(i)}
\det(I_{2+j-i}(2\sqrt{x}))
+ {\mathrm O}\left(\frac1{N^2}\right),
\end{align}
and, upon cancellation of the gamma function factors,
\begin{equation}
\label{eq:143}
F_{N^2\phi_1}(x) = 1 - {\mathrm e}^{-x} \det(I_{j-i}(2\sqrt{x}))
+ \frac{x}N
(\alpha_1 + \alpha_2) {\mathrm e}^{-x}
\det(I_{2+j-i}(2\sqrt{x}))
+ {\mathrm O}\left(\frac1{N^2}\right).
\end{equation}
\hspace*{\fill}~$\Box$
The formula \eqref{eq:143} proves a conjecture made in \cite{mor:eed}.
(The result is also implicit in the recent work \cite{for:roc}.)
It seems likely that explicit formul\ae\ along the lines of
\eqref{eq:143} will also be available in the other privileged cases
$\beta=1$ and $\beta=4$. The details will appear elsewhere.
\mathversion{bold}
\subsection{Small values of $\alpha_1$}\label{sec:small_alpha}
\mathversion{normal}
If $\alpha_1=0$, then most of the analysis above is quite unnecessary
and we already recover from \eqref{eq:113}
\begin{align}
\nonumber
{\mathbb P}(\phi_1>\xi) &= \frac{(1-\xi)^{N(1+\alpha_2 + (N-1)\beta/2)}}%
{S_N(1,\alpha_2+1,\beta/2)}
\int_0^1\cdots\int_0^1 \prod_{i=1}^N
(1-y_i)^{\alpha_2} |\Delta(\vec{y})|^\beta\,{\mathrm d}^N\vec{y} \\
&=(1-\xi)^{N(1+\alpha_2 + (N-1)\beta/2)}, \qquad 0<\xi<1,
\label{eq:114}
\end{align}
recognising the value of the Selberg integral (or
observing that both sides must be unity as $\xi\to0^+$). So, for
$\alpha_1=0$ and $x>0$,
\begin{align}
\nonumber
{\mathbb P}(N^2\phi_1 \leq x) &= 1 - {\mathbb P}\left(\phi_1 > \frac{x}{N^2}\right) \\
\nonumber
&= 1 - \left( 1 - \frac{x}{N^2} \right)^{N(1+\alpha_2 + (N-1)\beta/2)} \\
&= 1 - {\mathrm e}^{-\beta x/2} +
\left( 1 + \alpha_2 - \frac{\beta}2\right)\frac{x{\mathrm e}^{-\beta x/2}}N +
{\mathrm O}\left( \frac1{N^2} \right),
\label{eq:123}
\end{align}
referring back to \eqref{eq:102}. This generalises, to arbitrary
$\beta>0$, Corollary 1 of \cite{mor:eed}.
If $\alpha_1=1$ then the multivariate hypergeometric functions of
$\alpha_1$ arguments become ordinary one-variable hypergeometric
functions. In this case, for $0\leq\xi\leq1$,
\begin{equation}
\label{eq:126}
{\mathbb P}(\phi_1\leq \xi) = 1- (1-\xi)^{N(1+\alpha_2+(N-1)\beta/2)}
\fourIdx{}2{}{1}F\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac2\beta; \xi\right),
\end{equation}
by \eqref{eq:24}. Using Corollary~\ref{cor:Jacobi_polys}, and the fact
that multi-variable Jacobi polynomials of a single variable coincide
with classical single-variable ones, this may be expressed further
as
\begin{multline}
\label{eq:161} {\mathbb P}(\phi_1\leq \xi)
= 1 - (-1)^N(1-\xi)^{N(2+\alpha_2+(N-1)\beta/2)}\\\times
\frac{(N+2(\alpha_2+2)/\beta-1)_N}{(2/\beta)_N} p_N^{2/\beta-1,
2(\alpha_2+1)/\beta-1}\left( \frac{-\xi}{1-\xi} \right), \qquad 0< \xi<1.
\end{multline}
We apply Theorem \ref{thm:two-term} with $n=1$ to \eqref{eq:126} to find that
\begin{align}
{\mathbb P}(N^2\phi_1\leq x) &= 1- \left(1-\frac{x}{N^2}\right)^{N(1+\alpha_2+
(N-1)\beta/2)}
\fourIdx{}2{}{1}F\left( -N, 1-N-\frac2\beta(\alpha_2+1);
\frac2\beta; \frac{x}{N^2} \right) \nonumber \\
&=1 - {\mathrm e}^{-\beta x/2} \left( 1 - \left( 1 + \alpha_2 -
\frac{\beta}2\right)\frac xN + {\mathrm O}\left( \frac1{N^2} \right)\right)
\nonumber \\
&\qquad \times \bigg( \fourIdx{}0{}1F \left(;\frac2\beta; x\right) +
\frac1N\left(\frac2\beta(\alpha_2+2)-1\right) x\frac{{\mathrm d}}{{\mathrm d} x}\left(
\fourIdx{}0{}1F \left(;\frac2\beta; x\right)\right) \nonumber \\
&\qquad\qquad - \frac1N x\, \fourIdx{}0{}1F \left(;\frac2\beta; x\right) +
{\mathrm O}\left(\frac1{N^2}\right) \bigg) \nonumber \\
&= 1 - {\mathrm e}^{-\beta x/2} \fourIdx{}0{}1F \left(;\frac2\beta; x\right)
+\frac{x}N{\mathrm e}^{-\beta x/2} \left( 2 + \alpha_2 - \frac\beta2\right)
\bigg(\fourIdx{}0{}1F \left(;\frac2\beta; x\right) \nonumber \\
&\qquad\qquad -
\fourIdx{}0{}1F \left(;\frac2\beta+1; x\right)\bigg) +
{\mathrm O}\left( \frac1{N^2}\right),
\label{eq:127}
\end{align}
using \eqref{eq:134} for the derivative of the hypergeometric function.
We may further use \eqref{eq:124} to replace hypergeometric functions
with Bessel functions, yielding
\begin{align}
{\mathbb P}(N^2\phi_1\leq x) &= 1 - {\mathrm e}^{-\beta x/2} \Gamma\left(\frac2\beta
\right)x^{1/2-1/\beta} I_{2/\beta-1}(2\sqrt{x}) \nonumber \\
&\qquad+\frac{x}N{\mathrm e}^{-\beta x/2} \left( 2 + \alpha_2 - \frac\beta2\right)
\bigg(\Gamma\left( \frac2{\beta}\right) x^{1/2-1/\beta}I_{2/\beta-1}(
2\sqrt{x}) \nonumber \\
&\qquad\qquad - \Gamma\left( \frac2\beta+1\right) x^{-1/\beta} I_{2/\beta}(
2\sqrt{x}) \bigg)
+ {\mathrm O}\left( \frac1{N^2}\right) \nonumber \\
&= 1 - {\mathrm e}^{-\beta x/2} \Gamma\left(\frac2\beta
\right)x^{1/2-1/\beta} I_{2/\beta-1}(2\sqrt{x}) \nonumber \\
&\qquad+\frac{x}N{\mathrm e}^{-\beta x/2} \left( 2 + \alpha_2 - \frac\beta2\right)
\Gamma\left( \frac2{\beta}\right) x^{-1/\beta} \bigg(x^{1/2}I_{2/\beta-1}(
2\sqrt{x}) \nonumber \\
&\qquad\qquad - \frac2\beta I_{2/\beta}(
2\sqrt{x}) \bigg)
+ {\mathrm O}\left( \frac1{N^2}\right).
\label{eq:129}
\end{align}
We note that
\begin{equation}
\label{eq:130}
\sqrt{x} I_{2/\beta-1}(2\sqrt{x}) = \frac2\beta I_{2/\beta}(2\sqrt{x})
+ \sqrt{x} I_{2/\beta+1}(2\sqrt{x}),
\end{equation}
which follows from the Bessel function identity \cite[\S7.11,
eq.~(23)]{erd:htfII}
\begin{equation}
\label{eq:131}
I_\nu(z) = \frac{z}{2\nu}\left( I_{\nu-1}(z) - I_{\nu+1}(z)\right).
\end{equation}
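As a numerical sanity check (not part of the derivation), \eqref{eq:131} can be verified directly from the power series $I_\nu(z)=\sum_{k\geq 0}(z/2)^{\nu+2k}/(k!\,\Gamma(\nu+k+1))$; the following Python sketch uses only the standard library:

```python
import math

def bessel_i(nu, z, terms=60):
    """Modified Bessel function I_nu(z) via its power series
    I_nu(z) = sum_k (z/2)^(nu+2k) / (k! * Gamma(nu+k+1))."""
    return sum((z / 2) ** (nu + 2 * k) / (math.factorial(k) * math.gamma(nu + k + 1))
               for k in range(terms))

# Check I_nu(z) = (z / (2 nu)) * (I_{nu-1}(z) - I_{nu+1}(z)), i.e. eq. (131)
for nu in (0.5, 1.0, 2.5):
    for z in (0.3, 1.0, 4.0):
        lhs = bessel_i(nu, z)
        rhs = (z / (2 * nu)) * (bessel_i(nu - 1, z) - bessel_i(nu + 1, z))
        assert abs(lhs - rhs) < 1e-9
```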
Putting \eqref{eq:130} into \eqref{eq:129} gives
\begin{multline}
\label{eq:132}
{\mathbb P}(N^2\phi_1\leq x) = 1 - {\mathrm e}^{-\beta x/2} \Gamma\left(\frac2\beta
\right)x^{1/2-1/\beta} I_{2/\beta-1}(2\sqrt{x})
\\ +\frac{x^{3/2-1/\beta}}N {\mathrm e}^{-\beta x/2} \left( 2 + \alpha_2 -
\frac\beta2\right)
\Gamma\left( \frac2{\beta}\right) I_{2/\beta+1}(2\sqrt{x})
+ {\mathrm O}\left( \frac1{N^2}\right).
\end{multline}
This result is consistent with \eqref{eq:143} when $\beta=2$.
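As a further sanity check, the leading ($N\to\infty$) term of \eqref{eq:132} should itself be a distribution function in $x$; the sketch below (illustrative only, with $I_\nu$ evaluated from its series) verifies numerically that it increases from $0$ to $1$ for several values of $\beta$:

```python
import math

def bessel_i(nu, z, terms=80):
    """Series expansion of the modified Bessel function I_nu(z)."""
    return sum((z / 2) ** (nu + 2 * k) / (math.factorial(k) * math.gamma(nu + k + 1))
               for k in range(terms))

def limit_cdf(x, beta):
    """Leading (N -> infinity) term of eq. (132)."""
    nu = 2.0 / beta - 1.0
    return (1.0 - math.exp(-beta * x / 2) * math.gamma(2.0 / beta)
            * x ** (0.5 - 1.0 / beta) * bessel_i(nu, 2 * math.sqrt(x)))

for beta in (1.0, 2.0, 4.0):
    xs = [0.01 * 1.5 ** i for i in range(30)]          # grid on (0, ~1300)
    vals = [limit_cdf(x, beta) for x in xs]
    assert vals[0] < 1e-2                              # F(0+) ~ 0
    assert vals[-1] > 1 - 1e-6                         # F(x) -> 1
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))  # monotone
```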
\subsection*{Acknowledgements}
The author wishes to acknowledge helpful conversations about this work
with J.~P.~Keating and D.~Savin.
\section{Introduction}
The quantum dilogarithm is a $q$-series with many remarkable properties \cite{Z}, including the famous five-term identity \cite{FK}. Cluster algebra theory and wall-crossing of motivic invariants of quivers have led to vast generalizations of such dilogarithm identities \cite{K}.\\[1ex]
In this note, we explore the outer limits of this circle of ideas, by investigating ``wild'' dilogarithm identities, those arising from wild quivers. For this we use again motivic wall-crossing, interpret the resulting series in terms of motivic Donaldson--Thomas invariants, and use the geometric interpretation of the latter in terms of intersection homology of quiver moduli spaces \cite{MR} to establish very strong positivity properties. In the rank two case, originating from generalized Kronecker quivers, the well-explored and very special symmetries of Kronecker moduli yield many additional explicit properties of the individual terms of our highly infinite dilogarithm identities; see Theorem \ref{mainkronecker}.\\[1ex]
To derive this identity, we collect in Section \ref{s3} the available material on wall-crossing of motivic invariants (see \cite{Moz} for an introduction), and adapt it to the present notation and special setting in Section \ref{s4}. Although similar approaches are used, for example, in the context of the tropical vertex \cite{GP,RW}, it is desirable to state the nature of such wild identities as explicitly as possible.\\[1ex]
{\bf Acknowledgments:} The author would like to thank Daping Weng for several discussions on these wild identities, Bernhard Keller for the opportunity to present them in his seminar, Vladimir Fock as well as Sergey Mozgovoy for valuable comments, and Timm Peerenboom for carefully reading a draft of this text and suggesting several improvements.
\section{Quantum dilogarithm identities}\label{s2}
We first define the quantum dilogarithm, and state the classical five-term identity. The coefficient ring $\mathbb{Q}(q^{1/2})[[x]]$, and in particular the twist by half-powers of $q$, will become natural in the context of motivic Donaldson--Thomas invariants.
\begin{definition} We define the quantum dilogarithm $\Phi(x)\in\mathbb{Q}(q^{1/2})[[x]]$ as
$$\Phi(x)=\sum_{n\geq 0}\frac{q^{n/2}x^n}{(1-q)\cdot\ldots\cdot(1-q^n)}=$$
$$=\exp\left(\sum_{n\geq 1}\frac{x^n}{n\cdot (q^{-n/2}-q^{n/2})}\right)=\prod_{n\geq 0}\frac{1}{1-q^{n+1/2}\cdot x}.$$\end{definition}
{\it Remark:} The translation from the present definition to the ones in the literature is straightforward. For example, \cite{FK,V} use the definition $(x;q)_\infty=\Phi(q^{-1/2}x)^{-1}$, and \cite{K} uses $\mathbb{E}(x)=\Phi(-qx)$. The classical Euler dilogarithm \cite{Z} arises as the following limit: $$(q^{-1/2}-q^{1/2})\log\Phi(x)=\sum_{n\geq 1}\frac{x^n\cdot(q^{-1/2}-q^{1/2})}{n\cdot(q^{-n/2}-q^{n/2})}\stackrel{q\rightarrow 1}{\longrightarrow}\sum_{n\geq 1}\frac{x^n}{n^2}= {\rm Li}_2(x)$$
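The equality of the three expressions in the definition can be spot-checked on truncated power series with a numerical value of $q$; the following Python sketch is purely illustrative and uses only the standard library:

```python
import math

N, q = 8, 0.37          # truncation order in x; any 0 < q < 1 works
s = math.sqrt(q)        # q^{1/2}

def mul(f, g):
    """Product of two truncated power series in x (coefficient lists)."""
    h = [0.0] * (N + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= N:
                h[i + j] += a * b
    return h

# sum formula: Phi(x) = sum_n q^{n/2} x^n / ((1-q)...(1-q^n))
pochhammer = [1.0]
for n in range(1, N + 1):
    pochhammer.append(pochhammer[-1] * (1 - q ** n))
phi_sum = [s ** n / pochhammer[n] for n in range(N + 1)]

# product formula: Phi(x) = prod_{n>=0} 1/(1 - q^{n+1/2} x)
phi_prod = [1.0] + [0.0] * N
for n in range(200):                       # factors with large n are ~1
    geom = [(q ** (n + 0.5)) ** k for k in range(N + 1)]
    phi_prod = mul(phi_prod, geom)

# exp formula: Phi(x) = exp(sum_{n>=1} x^n / (n (q^{-n/2} - q^{n/2})))
log_phi = [0.0] + [1.0 / (n * (q ** (-n / 2) - q ** (n / 2)))
                   for n in range(1, N + 1)]
phi_exp, power = [1.0] + [0.0] * N, [1.0] + [0.0] * N
for k in range(1, N + 1):                  # exp via its Taylor series
    power = mul(power, log_phi)
    phi_exp = [a + b / math.factorial(k) for a, b in zip(phi_exp, power)]

for a, b, c in zip(phi_sum, phi_prod, phi_exp):
    assert abs(a - b) < 1e-9 and abs(a - c) < 1e-9
```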
\begin{definition} For a positive integer $m$, we define $\mathbb{Q}(q^{1/2})_{q^m}[[x,y]]$ as the skew formal power series ring with skew commutativity relation $xy=q^myx$.
\end{definition}
We can now formulate the five-term quantum dilogarithm identity:
\begin{theorem}[Schützenberger, Faddeev, Kashaev, Volkov] In $\mathbb{Q}(q^{1/2})_q[[x,y]]$, we have
$$\Phi(x)\Phi(y)=\Phi(y)\Phi(-q^{-1/2}xy)\Phi(x)$$
\end{theorem}
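Such identities can be checked numerically on truncated skew power series: encode normally ordered monomials $x^ay^b$ as dictionary keys and implement $y^bx^c=q^{-mbc}x^cy^b$ (equivalent to $xy=q^myx$). The following Python sketch, with an arbitrary numerical value of $q$ and $m=1$, verifies the five-term identity up to total degree~$6$; it is illustrative only:

```python
import math

N, q = 6, 0.37                 # truncate at total degree N; any 0 < q < 1
s = math.sqrt(q)               # q^{1/2}

def mul(f, g, m=1):
    """Multiply dicts {(a,b): coeff} of normally ordered monomials x^a y^b,
    using y^b x^c = q^{-m b c} x^c y^b (i.e. x y = q^m y x)."""
    h = {}
    for (a, b), u in f.items():
        for (c, d), v in g.items():
            if a + b + c + d <= N:
                key = (a + c, b + d)
                h[key] = h.get(key, 0.0) + u * v * q ** (-m * b * c)
    return h

def phi(arg, m=1):
    """Phi evaluated on a truncated series arg with zero constant term."""
    result, power, poch = {(0, 0): 1.0}, {(0, 0): 1.0}, 1.0
    for n in range(1, N + 1):
        power = mul(power, arg, m)
        poch *= 1 - q ** n
        for key, v in power.items():
            result[key] = result.get(key, 0.0) + s ** n * v / poch
    return result

x, y = {(1, 0): 1.0}, {(0, 1): 1.0}
lhs = mul(phi(x), phi(y))
mid = {(1, 1): -1.0 / s}                     # the monomial -q^{-1/2} x y
rhs = mul(mul(phi(y), phi(mid)), phi(x))
for key in set(lhs) | set(rhs):
    assert abs(lhs.get(key, 0.0) - rhs.get(key, 0.0)) < 1e-9
```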
For $m=2$, we will obtain the following identity as a special case of Theorem \ref{mainkronecker}.
\begin{theorem} In $\mathbb{Q}(q^{1/2})_{q^2}[[x,y]]$, we have $$\Phi(x)\Phi(y)=\Phi(y)\Phi(q^{-2}xy^2)\Phi(q^{-6}x^2y^3)\Phi(q^{-12}x^3y^4)\ldots\cdot$$
$$\cdot \Phi(q^{-1/2}xy)^{-1}\Phi(q^{-3/2}xy)^{-1}\cdot$$
$$\cdot\ldots\Phi(q^{-12}x^4y^3)\Phi(q^{-6}x^3y^2)\Phi(q^{-2}x^2y)\Phi(x).$$
\end{theorem}
To make such identities more readable, we will now introduce a shorthand notation, which again will be motivated later by the framework of Donaldson--Thomas invariants:
\begin{definition} For $a,b\geq 0$, define
$$\Phi_{(a,b)}=\Phi({(-1)}^{mab}{q}^{(a^2+b^2-2mab-1)/2}x^ay^b)^{{(-1)}^{a^2+b^2-mab-1}}.$$
More generally, for a Laurent polynomial $P(q)=\sum_{k}c_k(-q^{1/2})^k\in\mathbb{Q}[q^{\pm 1/2}]$, define
$$\Phi_{(a,b)}^{{\circ P}}={\prod_{k}}\Phi((-1)^{mab}q^{(a^2+b^2-2mab-1{+k})/2}x^ay^b)^{(-1)^{a^2+b^2-mab-1{+k}}\cdot{c_k}}.$$
\end{definition}
Then the previous identities simplify to
$$\Phi_{(1,0)}\Phi_{(0,1)}=\Phi_{(0,1)}\Phi_{(1,1)}\Phi_{(1,0)}\mbox{ for }
m=1,$$
$$ \Phi_{(1,0)}\Phi_{(0,1)}=\Phi_{(0,1)}\Phi_{(1,2)}\Phi_{(2,3)}\ldots\cdot\Phi_{(1,1)}^{\circ(q+1)}\cdot\ldots\Phi_{(3,2)}\Phi_{(2,1)}\Phi_{(1,0)}$$
for $m=2$.\\[1ex]
In the case $m\geq 3$, we can no longer give explicit identities, but we will obtain a rather complete qualitative description.
To formulate this main result, we need two more definitions. We denote by $\sigma$ the operator on $\mathbb{Z}^2$ given by $\sigma(a,b)=(b,mb-a)$, which generates an infinite dihedral group together with the involution $(a,b)\mapsto (b,a)$. We also define $\mu_\pm={(m\pm\sqrt{m^2-4})}/{2}$, the two roots of the quadratic equation $x^2-mx+1=0$.
\begin{theorem}\label{mainkronecker} In $\mathbb{Q}(q^{1/2})_{q^m}[[x,y]]$ for $m\geq 3$, we have
$$\Phi_{(1,0)}\Phi_{(0,1)}=\Phi_{(0,1)}\Phi_{\sigma(0,1)}\Phi_{\sigma^2(0,1)}\Phi_{\sigma^3(0,1)}\cdot\ldots\cdot$$
$$ \cdot\prod^{\rightarrow}_{{\mu_-\leq a/b\leq \mu_+}\atop{\mbox{\tiny increasing}}}\Phi_{(a,b)}^{\circ P_{(a,b)}}\cdot$$
$$ \cdot\ldots\Phi_{\sigma^{-3}(1,0)}\Phi_{\sigma^{-2}(1,0)}\Phi_{\sigma^{-1}(1,0)}\Phi_{(1,0)},$$
where the $P_{(a,b)}$ satisfy the following properties:
\begin{enumerate}
\item {\bf ``Wildness''/Completeness:} We have the non-vanishing property $$P_{(a,b)}\not=0\mbox{ for all }\mu_-\leq a/b\leq \mu_+.$$
\item {\bf Dihedral symmetry:} We have the symmetries
$$P_{(b,a)}=P_{(a,b)},\;\;\;P_{\sigma(a,b)}=P_{(a,b)}.$$
\item {\bf Positivity:} We have
$$P_{(a,b)}\in\mathbb{N}[q],\mbox{ of degree } d=mab-a^2-b^2+1>0.$$
\item {\bf Unimodality:} The polynomial $P_{(a,b)}=\sum_kc_kq^k$ is palindromic and unimodal:
$$c_{d-k}=c_k\mbox{ and }1=c_0\leq c_1\leq \ldots \geq c_{d-1}\geq c_{d}=1.$$
\item {\bf Lowest order terms:}~ $$P_{(1,k)}=\left[{m\atop k}\right]_q.$$
\item {\bf Special value:} ~
$$P_{(k,k)}(1)=\frac{1}{(m-2)k^2}\sum_{d|k}\mu(\frac{k}{d})(-1)^{md+1}{{(m-1)^2d-1}\choose{d}}.$$
\end{enumerate}
\end{theorem}
As a consequence of the dihedral symmetry property, we see that all $P_{(a,b)}$ are determined by those for $a\leq b\leq\frac{m}{2}a$. All properties will follow from interpreting the $P_{(a,b)}$ as the Poincaré polynomials in intersection homology of Kronecker moduli \cite{D}, a class of projective varieties parametrizing certain tuples of matrices up to base change.
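Since the lowest-order terms $P_{(1,k)}$ are Gaussian binomial coefficients, the positivity, palindromicity, unimodality, and degree statements of the theorem can be spot-checked for them directly. The following Python sketch (illustrative only; the function name is ours) computes $\left[{m\atop k}\right]_q$ by the $q$-Pascal rule and verifies these properties, using that $d=mk-1-k^2+1=k(m-k)$ for $(a,b)=(1,k)$:

```python
def q_binomial(n, k):
    """Gaussian binomial [n choose k]_q as a list of coefficients,
    via the q-Pascal rule [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    table = {(0, 0): [1]}
    for nn in range(1, n + 1):
        for kk in range(nn + 1):
            left = table.get((nn - 1, kk - 1), [])
            right = table.get((nn - 1, kk), [])
            shifted = [0] * kk + right            # q^kk * [nn-1, kk]_q
            size = max(len(left), len(shifted), 1)
            entry = [(left[i] if i < len(left) else 0) +
                     (shifted[i] if i < len(shifted) else 0)
                     for i in range(size)]
            while len(entry) > 1 and entry[-1] == 0:
                entry.pop()
            table[(nn, kk)] = entry
    return table[(n, k)]

for m in range(3, 8):
    for k in range(1, m):
        p = q_binomial(m, k)
        d = m * 1 * k - 1 - k * k + 1             # degree formula at (a,b)=(1,k)
        assert len(p) - 1 == d == k * (m - k)
        assert all(c > 0 for c in p)              # positivity
        assert p == p[::-1]                       # palindromic
        assert all(p[i] <= p[i + 1] for i in range(d // 2))  # unimodal
```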
\section{Quiver setup}\label{s3}
In this section, we recall the necessary terminology on quiver representations and their moduli spaces, and we formulate the main ingredients for general quiver dilogarithm identities, namely the motivic wall-crossing formula, and the definition and geometricity of motivic Donaldson--Thomas invariants. The reader is referred to \cite{Moz} for a detailed introduction to motivic wall-crossing for quivers, and to \cite{RSmall} for a short summary.\\[1ex]
Let $Q$ be a finite acyclic quiver. We order the set of vertices $Q_0=\{i_1,\ldots,i_n\}$ such that $i_k\rightarrow i_l$ implies $k>l$. The Euler form of $Q$ is the (in general non-symmetric) bilinear form on $\mathbb{Z}Q_0$ given by $\langle\mathbf{d},\mathbf{e}\rangle=\sum_{i\in Q_0}d_ie_i-\sum_{\alpha:i\rightarrow j}d_ie_j$ for $\mathbf{d},\mathbf{e}\in\mathbb{Z}Q_0$.\\[1ex]
We define the formal quantum affine space $\mathbb{Q}(q^{1/2})_q[[Q_0]]$ as the skew formal power series ring with topological basis $t^\mathbf{d}$ for $\mathbf{d}\in\mathbb{N}Q_0$ and multiplication twisted by the antisymmetrized Euler form
$$t^{\bf d}\cdot t^{\bf e}=(-q^{1/2})^{\langle{\bf d},{\bf e}\rangle-\langle{\bf e},{\bf d}\rangle}t^{{\bf d}+{\bf e}}.$$
We fix linear functions $\Theta,\kappa\in(\mathbb{Z}Q_0)^*$ on $Q$ such that $\kappa({\bf d})>0$ for ${\bf d}\in\mathbb{N}Q_0\setminus 0$, and consider the associated slope function $\mu({\bf d})=\frac{\Theta({\bf d})}{\kappa({\bf d})}$ for $\mathbf{d}\in\mathbb{N}Q_0\setminus 0$. We denote by $\Lambda_a$ the set of all ${\bf d}\in\mathbb{N}Q_0\setminus 0$ of slope $a\in\mathbb{Q}$.\\[1ex]
We define the Grothendieck ring of varieties $K_0({\rm Var}_\mathbb{C})$ as the free abelian group in isomorphism classes of complex algebraic varieties $X$ modulo the ``cut-and-paste'' relation
${ }[X]=[A]+[U]\mbox{ if } A\subset X\mbox{ closed, } U=X\setminus A,$
with product given by $[X]\cdot[Y]=[X\times Y]$. We abbreviate the Lefschetz motive $[\mathbb{A}^1]=:q$.\\[1ex]
We consider the localization $R=K_0({\rm Var}_\mathbb{C})[q^{\pm 1/2}, (1-q^n)^{-1}: n\geq 1]$, and define the formal motivic affine space $R_q[[Q_0]]$ as above, with $R$ replacing the coefficient ring $\mathbb{Q}(q^{1/2})$. In fact, all our computations will happen in the smaller coefficient ring of motives which are rational functions in $q$. Note that the existence of motivic measures such as the virtual Hodge polynomial shows that this subring is isomorphic to a subring of $\mathbb{Q}(q^{1/2})$.\\[1ex]
Given a dimension vector ${\bf d}\in\mathbb{N}Q_0$, we fix $\mathbb{C}$-vector spaces $V_i$ of dimension $d_i$ for $i\in Q_0$. We consider the base change action
$$\prod_{i\in Q_0}{\rm GL}(V_i)=G_{\bf d}\curvearrowright R_{\bf d}(Q)=\bigoplus_{\alpha:i\rightarrow j}{\rm Hom}(V_i,V_j)$$
given by $$(g_i)_i\cdot(f_\alpha)_\alpha=(g_jf_\alpha g_i^{-1})_{\alpha:i\rightarrow j},$$
whose orbits, by definition, correspond bijectively to the isomorphism classes of complex representations of $Q$ of dimension vector $\mathbf{d}$.\\[1ex]
We denote by $R_{\bf d}^{\mu-{\rm sst}}(Q)\subset R_{\bf d}(Q)$ the open subset of $\mu$-semistable points. Using this notation, we can formulate the motivic wall-crossing formula, which is formally equivalent to the existence of the Harder--Narasimhan filtration \cite{RHN}:
\begin{theorem}[Motivic wall-crossing formula]\label{wcf} In $R_q[[Q_0]]$, we have the identity
$$\Phi(t_1)\cdot\ldots\cdot \Phi(t_n)= \sum_{{\bf d}}(-q^{1/2})^{\langle{\bf d},{\bf d}\rangle}\frac{[R_{\bf d}(Q)]}{[G_{\bf d}]}t^{\bf d}=$$
$$ =\prod^\rightarrow_{a\in\mathbb{Q}\mbox{ \tiny decreasing}}\left(1+\sum_{{\bf d}\in\Lambda_a}(-q^{1/2})^{\langle{\bf d},{\bf d}\rangle}\frac{[R_{\bf d}^{\mu-{\rm sst}}(Q)]}{[G_{\bf d}]}t^{\bf d}\right).$$
\end{theorem}
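The first equality of the theorem can be made fully explicit for the generalized Kronecker quiver $K_m$ and checked coefficientwise on truncated series with a numerical value of $q$. The Python sketch below does this; it assumes the orientation with arrows from vertex $2$ to vertex $1$, so that $[R_{\bf d}]=q^{md_1d_2}$, $[G_{\bf d}]=[{\rm GL}_{d_1}][{\rm GL}_{d_2}]$ and $\langle{\bf d},{\bf e}\rangle=d_1e_1+d_2e_2-md_2e_1$. It is a sanity check, not part of any proof:

```python
import math

N, q, m = 6, 0.41, 3          # truncation, numerical q, number of arrows
s = math.sqrt(q)

def euler(d, e):              # Euler form of K_m, arrows from vertex 2 to 1
    return d[0] * e[0] + d[1] * e[1] - m * d[1] * e[0]

def mul(f, g):
    """t^d t^e = (-q^{1/2})^{<d,e>-<e,d>} t^{d+e}, truncated at degree N."""
    h = {}
    for d, u in f.items():
        for e, v in g.items():
            if sum(d) + sum(e) <= N:
                key = (d[0] + e[0], d[1] + e[1])
                h[key] = h.get(key, 0.0) + u * v * (-s) ** (euler(d, e) - euler(e, d))
    return h

def phi(arg):
    res, power, poch = {(0, 0): 1.0}, {(0, 0): 1.0}, 1.0
    for n in range(1, N + 1):
        power = mul(power, arg)
        poch *= 1 - q ** n
        for k, v in power.items():
            res[k] = res.get(k, 0.0) + s ** n * v / poch
    return res

def gl(n):                    # motive of GL_n evaluated at the numerical q
    out = 1.0
    for k in range(n):
        out *= q ** n - q ** k
    return out

lhs = mul(phi({(1, 0): 1.0}), phi({(0, 1): 1.0}))
for d1 in range(N + 1):
    for d2 in range(N + 1 - d1):
        d = (d1, d2)
        rhs = (-s) ** euler(d, d) * q ** (m * d1 * d2) / (gl(d1) * gl(d2))
        assert abs(lhs.get(d, 0.0) - rhs) < 1e-8
```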
Next, we can define motivic Donaldson--Thomas invariants. We generalize the shorthand notation of the previous section to
$$\Phi(x)^{\circ P(q)}=\prod_k\Phi(q^{k/2}x)^{(-1)^kc_k}$$
for $P(q)=\sum_kc_k(-q^{1/2})^k$.
\begin{definition}\cite{KS}\label{dt} Assume that $\langle\_,\_\rangle$ is symmetric on $\Lambda_a$. Define ${\rm DT}^\mu_{\bf d}(q)\in\mathbb{Q}[q^{\pm 1/2}]$ by factorization in $R_q[[Q_0]]$:
$$1+\sum_{{\bf d}\in\Lambda_a}(-q^{1/2})^{\langle{\bf d},{\bf d}\rangle}\frac{[R_{\bf d}^{\mu-{\rm sst}}(Q)]}{[G_{\bf d}]}t^{\bf d}=\prod_{{\bf d}\in\Lambda_a}\Phi(t^{\bf d})^{\circ {\rm DT}^\mu_{\bf d}(q)}.$$
The ${\rm DT}^\mu_{\bf d}(q)$ are called the motivic Donaldson--Thomas invariants of the quiver $Q$ with stability $\mu$.
\end{definition}
We remark that the more common definition in terms of the plethystic exponential ${\rm Exp}$ is equivalent to this one since, by definition, $\Phi(x)={\rm Exp}(\frac{x}{q^{-1/2}-q^{1/2}})$.\\[1ex]
The motivic Donaldson--Thomas invariants admit a geometric interpretation in terms of intersection homology of moduli spaces. Namely, we consider the GIT quotient
$$R_{\bf d}^{\mu-{\rm sst}}(Q)//G_{\bf d}=M_{\bf d}^{\mu-{\rm sst}}(Q),$$
the moduli space of $\mu$-semistable representations of $Q$ of dimension vector ${\bf d}$.\\[1ex]
It is an irreducible projective normal (typically singular) complex algebraic variety. If ${\bf d}$ is $\mu$-stable, that is, if there exists a $\mu$-stable representation of dimension vector ${\bf d}$, then $\dim M_{\bf d}^{\mu-{\rm sst}}(Q)=1-\langle{\bf d},{\bf d}\rangle$. In terms of this moduli space, we have the following geometric interpretation of the motivic Donaldson--Thomas invariants \cite{MR}:
\begin{theorem}[Geometricity of DT invariants]\label{geom} We have $${\rm DT}_{\bf d}^\mu(q)=(-q^{1/2})^{\langle{\bf d},{\bf d}\rangle-1}\sum_{k\geq 0}\dim{{\rm IH}}^k(M_{\bf d}^{\mu-{\rm sst}}(Q),\mathbb{Q})(-q^{1/2})^k$$
if ${\bf d}$ is $\mu$-stable, and ${\rm DT}_{\bf d}^\mu(q)=0$ otherwise.
\end{theorem}
\section{Quantum dilogarithm identity for quivers}\label{s4}
To combine the methods prepared in the previous section, we consider quivers and stabilities such that the restriction of $\langle\_,\_\rangle$ is symmetric on all $\Lambda_a$. In particular, this holds if the antisymmetrized Euler form of $Q$ is determined by the stability function, in the sense that
$$\langle{\bf d},{\bf e}\rangle-\langle{\bf e},{\bf d}\rangle=\kappa({\bf d})\Theta({\bf e})-\kappa({\bf e})\Theta({\bf d})\mbox{ for all }{\bf d},{\bf e}\in\mathbb{Z}Q_0,$$ compare \cite[Proposition 5.2]{RSmall}. Besides generalized Kronecker quivers, this property holds, for example, for complete bipartite quivers.
\begin{theorem}\label{quiverdilog} Assume that $\langle\_,\_\rangle$ is symmetric on all $\Lambda_a$. Then we have a factorization
$$\Phi(t_1)\cdot\ldots\cdot\Phi(t_n)=\prod^\rightarrow_{\mu({\bf d})\mbox{ \tiny decreasing}}\Phi(t^{\bf d})^{\circ(-q^{1/2})^{\langle{\bf d},{\bf d}\rangle-1}\cdot P_{\bf d}(q)}$$
in $R_q[[Q_0]]$, for polynomials $P_{\bf d}(q)$ with the following properties:
\begin{enumerate}
\item {\bf Non-vanishing:} We have $P_{\bf d}(q)\not=0$ if and only if ${\bf d}$ is $\mu$-stable,
\item {\bf Positivity:} We have $P_{\bf d}\in\mathbb{N}[q]$, of degree $\deg P_{\bf d}(q)=1-\langle{\bf d},{\bf d}\rangle$,
\item {\bf Unimodality:} $P_{\bf d}(q)$ is palindromic and unimodal,
\item {\bf Simplicity:} We have $P_{\bf d}(q)=1$ if, additionally, $\langle{\bf d},{\bf d}\rangle=1$.
\end{enumerate}
\end{theorem}
This is now readily proved: combining Theorem \ref{wcf}, Definition \ref{dt} and Theorem \ref{geom}, we see that $P_{\bf d}(q)$ is precisely the Poincar\'e polynomial in intersection homology of $M_{\bf d}^{\mu-{\rm sst}}(Q)$ in case ${\bf d}$ is $\mu$-stable, proving the non-vanishing property. Positivity and unimodality are proven in \cite[Corollary 1.2]{MR}. The degree statement follows from the dimension formula for the moduli space, and the simplicity statement is an immediate consequence.\\[1ex]
To derive Theorem \ref{mainkronecker}, we consider the $m$-arrow Kronecker quiver $K_m=1\stackrel{(m)}{\leftarrow}2$, with stability function given by $\Theta({\bf d})=m\cdot(d_2-d_1)$ and $\kappa({\bf d})=d_1+d_2$.\\[1ex]
We identify the variables $t_1=x$ and $t_2=y$, leading to an identification of $\mathbb{Q}(q^{1/2})_q[[(K_m)_0]]$ with $\mathbb{Q}(q^{1/2})_{q^m}[[x,y]]$, such that $t^{\bf d}=(-q^{1/2})^{md_1d_2}x^{d_1}y^{d_2}$ and $$\Phi(t^{{\bf d}})^{\circ (-q^{1/2})^{\langle{\bf d},{\bf d}\rangle-1}\cdot P(q)}=\Phi_{(a,b)}^{\circ P(q)}$$
for ${\bf d}=(a,b)$.\\[1ex]
Theorem \ref{quiverdilog} then provides a factorization in $\mathbb{Q}(q^{1/2})_{q^m}[[x,y]]$ of the form
$$\Phi_{(1,0)}\Phi_{(0,1)}=\prod^\rightarrow_{a/b\mbox{ \tiny increasing}}\Phi_{(a,b)}^{\circ P_{(a,b)}},$$
such that
$$P_{(a,b)}(q)=\sum_{k\geq 0}\dim{\rm IH}^{2k}(K_{(a,b)},\mathbb{Q})q^k,$$
where
$$ K_{(a,b)}=M_{a\times b}(\mathbb{C})^m_{\rm sst}//{\rm GL}_a(\mathbb{C})\times{\rm GL}_b(\mathbb{C})$$
are the Kronecker moduli of \cite{D}.\\[1ex]
All remaining properties in Theorem \ref{mainkronecker} now follow from properties of these Kronecker moduli spaces. Using the analysis of \cite[Section 4]{Scho}, we see that a dimension vector $(a,b)$ is $\mu$-stable if and only if $\langle(a,b),(a,b)\rangle\leq 1$. In case $\langle(a,b),(a,b)\rangle=1$, the dimension vector is a real root, and the moduli space $K_{(a,b)}$ reduces to a point, thus $P_{(a,b)}=1$. Otherwise, we have $\langle(a,b),(a,b)\rangle\leq 0$, which translates to $\mu_-\leq a/b\leq \mu_+$. This proves the completeness statement of the theorem. By linear duality, we have $K_{(a,b)}\simeq K_{(b,a)}$. Moreover, reflection functors induce isomorphisms
$K_{(a,b)}\simeq K_{(b,mb-a)}$ by \cite[Proposition 4.3]{weist}. This establishes the dihedral symmetry.
The Kronecker moduli $K_{(1,k)}$ identify with the Grassmannians ${\rm Gr}_k^m$, determining the lowest order terms in the factorization. Finally, the special value $P_{(k,k)}(1)$ is computed in \cite[Theorem 5.2]{RFun}.
\section{$4$-term relations}\label{s1}
In 1990, V.~Vassiliev~\cite{V90} introduced the notion of finite type
knot invariant and associated to any knot invariant of order at most~$n$
a function on chord diagrams with~$n$ chords. He showed that any such
function satisfies $4$-term relations. In 1993, M.~Kontsevich~\cite{K93}
proved the converse: if a function on chord diagrams with~$n$ chords
taking values in a field of characteristic zero satisfies the $4$-term
relations, then it can be obtained by Vassiliev's construction from some
knot invariant of order at most~$n$. (In Vassiliev's theorem, as well as
in Kontsevich's one, the function is subject to an additional
requirement; namely, it must satisfy the so-called one-term relation.
This requirement, however, vanishes when knots are replaced by framed
knots; it does not affect the theory, and we will not mention it below.)
Functions on chord diagrams satisfying $4$-term relations are called
\emph{weight systems}.
The universal finite type knot invariant constructed by Kontsevich
(\emph{Kontsevich's integral})
allows one to reconstruct, in principle, a finite type invariant
from the corresponding weight system. Therefore, studying weight systems
plays a key role in understanding the nature of finite type knot
invariants. Note, however, that explicit computation of the Kontsevich
integral of a knot is a computationally complicated problem that
does not have a satisfactory solution to date.
\subsection{Chord diagrams and $4$-term relations}
A \emph{chord diagram of order~$n$} is an oriented circle
together with $2n$ pairwise distinct points on it split into $n$
pairs considered up to orientation preserving diffeomorphisms of the circle.
We call the circle carrying the diagram its \emph{Wilson loop}.
Everywhere in the pictures below we assume that the circle is
oriented counterclockwise. The points forming a pair are connected
by a line or arc segment.
A $4$-term relation is defined by a chord diagram and a pair of
chords having neighboring ends in it. We say that
\emph{a function~$f$ on chord diagrams satisfies Vassiliev's
$4$-term relations} if for any chord diagram~$C$ and any pair of chords
$a,b$ in~$C$ having neighboring ends the equation shown in
Fig.~\ref{fourtermrelation} holds.
\begin{figure}[ht]
\[
f\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[dashed] (0,0) circle (1);
\filldraw[black] (-.8,0) node[anchor=west]{$a$};
\filldraw[black] (.3,0) node[anchor=west]{$b$};
\draw[line width=1pt] ([shift=( 20:1cm)]0,0) arc [start angle= 20, end angle= 70, radius=1];
\draw[line width=1pt] ([shift=(110:1cm)]0,0) arc [start angle=110, end angle=160, radius=1];
\draw[line width=1pt] ([shift=(250:1cm)]0,0) arc [start angle=250, end angle=290, radius=1];
\draw[line width=1pt] (45:1) .. controls (5:0.3) and (-40:0.3) .. (280:1);
\draw[line width=1pt] (135:1) .. controls (175:0.3) and (220:0.3) .. (260:1);
\end{tikzpicture}\right) -
f\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\filldraw[black] (-.6,0) node[anchor=west]{$a$};
\filldraw[black] (.2,0) node[anchor=west]{$b$};
\draw[dashed] (0,0) circle (1);
\draw[line width=1pt] ([shift=( 20:1cm)]0,0) arc [start angle= 20, end angle= 70, radius=1];
\draw[line width=1pt] ([shift=(110:1cm)]0,0) arc [start angle=110, end angle=160, radius=1];
\draw[line width=1pt] ([shift=(250:1cm)]0,0) arc [start angle=250, end angle=290, radius=1];
\draw[line width=1pt] (45:1) .. controls (-5:0.1) and (-50:0.1) .. (260:1);
\draw[line width=1pt] (135:1) .. controls (185:0.1) and (225:0.1) .. (280:1);
\end{tikzpicture}\right) =
f\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\filldraw[black] (-.4,.4) node[anchor=west]{$a$};
\filldraw[black] (.4,0) node[anchor=west]{$b$};
\draw[dashed] (0,0) circle (1);
\draw[line width=1pt] ([shift=( 20:1cm)]0,0) arc [start angle= 20, end angle= 70, radius=1];
\draw[line width=1pt] ([shift=(110:1cm)]0,0) arc [start angle=110, end angle=160, radius=1];
\draw[line width=1pt] ([shift=(250:1cm)]0,0) arc [start angle=250, end angle=290, radius=1];
\draw[line width=1pt] (35:1) .. controls (0:0.3) and (-45:0.3) .. (280:1);
\draw[line width=1pt] (135:1) .. controls (105:0.5) and (85:0.5) .. (55:1);
\end{tikzpicture}\right) -
f\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\filldraw[black] (-.4,.3) node[anchor=west]{$a$};
\filldraw[black] (.2,0) node[anchor=west]{$b$};
\draw[dashed] (0,0) circle (1);
\draw[line width=1pt] ([shift=( 20:1cm)]0,0) arc [start angle= 20, end angle= 70, radius=1];
\draw[line width=1pt] ([shift=(110:1cm)]0,0) arc [start angle=110, end angle=160, radius=1];
\draw[line width=1pt] ([shift=(250:1cm)]0,0) arc [start angle=250, end angle=290, radius=1];
\draw[line width=1pt] (55:1) .. controls (5:0.1) and (-40:0.1) .. (270:1);
\draw[line width=1pt] (135:1) .. controls (105:0.4) and (65:0.4) .. (35:1);
\end{tikzpicture}\right)
\]
\caption{$4$-term relation for chord diagrams}
\label{fourtermrelation}
\end{figure}
The equation in the picture has the following meaning. All the four
chord diagrams entering it can have an additional tuple of chords
whose ends belong to the dashed arcs, which is one and the same
in all the four diagrams. Out of the four ends of the two distinguished
chords $a,b$, both ends of~$b$ as well as one end of~$a$ are fixed,
while the second end of~$a$ takes successively all the four positions
close to the ends of~$b$. The specific position of the second end of~$a$
determines the chord diagram in the relation.
Note that it does not matter whether the two chords~$a$ and~$b$
intersect one another or not: the case of intersecting chords reduces to
that of nonintersecting ones by multiplying the relation shown in
Fig.~\ref{fourtermrelation} by $-1$.
Below, we write down the $4$-term relation in the form
\begin{eqnarray}
f(C)-f(C'_{ab})=f(\widetilde{C}_{ab})-f(\widetilde{C}'_{ab})
\end{eqnarray}
and say that the chord diagram~$C'_{ab}$ is obtained from~$C$
by \emph{Vassiliev's first move} on the pair of chords $a,b$,
the chord diagram
$\widetilde{C}_{ab}$ is the result of \emph{Vassiliev's second move},
and $\widetilde{C}'_{ab}$ is the result of the composition of the
two moves; note that the first and the second moves, when performed
on one and the same pair of chords, commute with one another,
and the order in which they are performed is irrelevant.
The definition of the $4$-term relation uses subtraction, and
we are planning to multiply chord diagrams in what follows.
That is why we will usually consider weight systems with values
in a commutative ring, making the ring explicit in specific examples.
\subsection{Intersection graphs}
Invariants of intersection graphs of chord diagrams serve
as one of the main sources of weight systems.
The \emph{intersection graph $g(C)$ of a chord diagram~$C$}
is the simple graph whose vertices are in one-to-one
correspondence with the chords of~$C$, and two vertices
are connected by an edge iff the corresponding chords intersect.
(Two chords in a chord diagram intersect one another if their
ends follow the circle in alternating order; the intersection
point of chords in pictures is not a vertex of the chord diagram.)
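In computational terms, a chord diagram may be encoded by the cyclic word of its $2n$ endpoint labels read along the Wilson loop, and two chords intersect iff exactly one endpoint of one lies between the endpoints of the other. The following Python sketch (the encoding and function name are one possible convention, not taken from the text) computes the intersection graph:

```python
from itertools import combinations

def intersection_graph(word):
    """word: sequence of 2n labels, each appearing twice, read along the
    (counterclockwise) Wilson loop. Returns the edge set of g(C)."""
    pos = {}
    for i, c in enumerate(word):
        pos.setdefault(c, []).append(i)
    edges = set()
    for a, b in combinations(sorted(pos), 2):
        a1, a2 = pos[a]
        b1, b2 = pos[b]
        # chords intersect iff exactly one endpoint of b lies between
        # the endpoints of a, i.e. the endpoints alternate
        if (a1 < b1 < a2) != (a1 < b2 < a2):
            edges.add((a, b))
    return edges

assert intersection_graph("abab") == {("a", "b")}   # interleaved: chords cross
assert intersection_graph("aabb") == set()          # disjoint: no edge
```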
Fig.~\ref{f4tig} shows an example of a $4$-term relation for
a chord diagram and its intersection graph. Only the chord
diagrams and their intersection graphs are depicted; there is no
reference to the value of the function.
\begin{figure}[h]
\begin{center}
\begin{picture}(200,100)(60,0)
\thicklines
\multiput(12,72)(90,0){4}{\circle{40}}
\multiput(55,72)(180,0){2}{{\scriptsize $-$}}
\put(145,72){$=$}
\multiput(5,91)(90,0){4}{\line(-1,-3){10}}
\multiput(0,94)(90,0){4}{{\scriptsize E}}
\multiput(-3,84)(90,0){4}{\line(1,0){31}}
\multiput(-9,87)(90,0){4}{{\scriptsize D}}
\multiput(-8,72)(90,0){4}{\line(3,-1){36}}
\multiput(-14,73)(90,0){4}{{\scriptsize F}}
\multiput(32,72)(90,0){4}{\line(-5,-2){35}}
\multiput(34,73)(90,0){4}{{\scriptsize B}}
\multiput(12,92)(90,0){4}{\line(1,-5){7.5}}
\multiput(10,95)(90,0){4}{{\scriptsize C}}
\put(18,91){\line(-1,-5){7.6}}
\multiput(18,94)(90,0){4}{{\scriptsize A}}
\put(108,91){\line(-2,-3){20.9}}
\put(198,91){\line(1,-2){13}}
\put(288,91){\line(1,-1){12.3}}
\multiput(2,30)(90,0){4}{\circle*{3}}
\multiput(22,30)(90,0){4}{\circle*{3}}
\multiput(2,-10)(90,0){4}{\circle*{3}}
\multiput(22,-10)(90,0){4}{\circle*{3}}
\multiput(-8,10)(90,0){4}{\circle*{3}}
\multiput(32,10)(90,0){4}{\circle*{3}}
\multiput(55,10)(180,0){2}{{\scriptsize $-$}}
\put(145,10){$=$}
\multiput(-6,-16)(90,0){4}{{\scriptsize A}}
\multiput(24,-16)(90,0){4}{{\scriptsize B}}
\multiput(-6,32)(90,0){4}{{\scriptsize E}}
\multiput(22,32)(90,0){4}{{\scriptsize D}}
\multiput(-14,12)(90,0){4}{{\scriptsize F}}
\multiput(34,12)(90,0){4}{{\scriptsize C}}
\multiput(-8,10)(90,0){4}{\line(1,0){39}}
\multiput(-8,10)(90,0){4}{\line(1,2){10}}
\multiput(-8,10)(90,0){2}{\line(1,-2){10}}
\multiput(-8,10)(90,0){4}{\line(3,-2){30}}
\multiput(32,10)(90,0){2}{\line(-3,-2){30}}
\multiput(32,10)(90,0){4}{\line(-1,2){10}}
\multiput(32,10)(90,0){4}{\line(-1,-2){10}}
\multiput(2,30)(90,0){4}{\line(1,0){20}}
\multiput(2,-10)(180,0){2}{\line(1,0){20}}
\multiput(2,-10)(90,0){4}{\line(1,2){20}}
\end{picture}
\end{center}
\caption{A $4$-term relation for chord diagrams and
corresponding intersection graphs}
\label{f4tig}
\end{figure}
Any graph with at most~$5$ vertices is the intersection graph
of some chord diagram. There are two graphs with $6$ vertices that
are not intersection graphs; they are shown in Fig.~\ref{fR3}.
As~$n$ grows, the fraction of intersection graphs among simple graphs with~$n$ vertices decreases rapidly.
Several complete sets of obstructions for a graph to be an intersection graph are known.
\begin{figure}[h]
\begin{center}
\begin{picture}(300,60)(10,40)
\thicklines
\put(55,51){\circle*{4}}
\put(55,75){\circle*{4}}
\put(34,62){\circle*{4}}
\put(76,62){\circle*{4}}
\put(40,35){\circle*{4}}
\put(70,35){\circle*{4}}
\put(55,50){\line(0,1){24}}
\put(55,51){\line(2,1){21}}
\put(55,51){\line(-2,1){21}}
\put(55,50){\line(1,-1){16}}
\put(55,50){\line(-1,-1){16}}
\put(40,35){\line(1,0){32}}
\put(55,75){\line(2,-1){21}}
\put(55,75){\line(-2,-1){21}}
\put(40,35){\line(-1,4){7}}
\put(70,35){\line(1,4){7}}
\put(255,70){\circle*{4}}
\put(255,40){\circle*{4}}
\put(235,80){\circle*{4}}
\put(275,80){\circle*{4}}
\put(235,30){\circle*{4}}
\put(275,30){\circle*{4}}
\put(235,30){\line(0,1){50}}
\put(235,30){\line(1,0){40}}
\put(235,80){\line(1,0){40}}
\put(275,30){\line(0,1){50}}
\put(255,40){\line(0,1){30}}
\put(235,30){\line(2,1){20}}
\put(235,80){\line(2,-1){20}}
\put(275,80){\line(-2,-1){20}}
\put(275,30){\line(-2,1){20}}
\end{picture}
\end{center}
\caption{$5$-wheel and $3$-prism, the two graphs with~$6$
vertices that are not intersection graphs}
\label{fR3}
\end{figure}
On the other hand, certain intersection graphs can be realized
as intersection graphs of several pairwise non-isomorphic
chord diagrams. Consider, for example, the chain on $n$ vertices,
\begin{center}
\begin{picture}(80,40)(-50,-20)
\unitlength=20pt
\put(0,0){\makebox(0,0){$
\underbrace{
\makebox(5.4,0){
\begin{picture}(5,0.2)(0,-0.2)
\put(0,0){\circle*{0.15}}
\put(0,0){\line(1,0){1}}
\put(1,0){\circle*{0.15}}
\put(1,0){\line(1,0){1}}
\put(2,0){\circle*{0.15}}
\put(2,0){\line(1,0){0.5}}
\put(3.08,0){\makebox(0,0){\ldots}}
\put(3.5,0){\line(1,0){0.5}}
\put(4,0){\circle*{0.15}}
\put(4,0){\line(1,0){1}}
\put(5,0){\circle*{0.15}}
\end{picture}
}
}_{\scriptstyle n\ {\rm vertices}}$
}
}
\end{picture}
\end{center}
For $n=5$, there are three distinct chord diagrams with this intersection graph:
\def\celvn
{ \bezier{40}(-1,0)(-1,0.577)(-0.5,0.866)
\bezier{40}(-0.5,0.866)(0,1.155)(0.5,0.866)
\bezier{40}(0.5,0.866)(1,0.577)(1,0)
\bezier{40}(1,0)(1,-0.577)(0.5,-0.866)
\bezier{40}(0.5,-0.866)(0,-1.155)(-0.5,-0.866)
\bezier{40}(-0.5,-0.866)(-1,-0.577)(-1,0)
}
\def \put(0,1){\circle*{0.15}
{ \put(0,1){\circle*{0.15}}
\put(0,-1){\circle*{0.15}}
\put(0,1){\line(0,-1){2}}
}
\begin{center}
\begin{picture}(180,60)(0,-30)
\setlength{\unitlength}{20pt}
\put(0,0){
\begin{picture}(0,0)
\celvn \put(0,1){\circle*{0.15}
\put(0.866,0.5){\circle*{0.15}}
\put(-0.5,0.866){\circle*{0.15}}
\bezier{40}(-0.5,0.866)(0.086,0.322)(0.866,0.5)
\put(0.5,-0.866){\circle*{0.15}}
\put(-0.866,-0.5){\circle*{0.15}}
\bezier{40}(-0.866,-0.5)(-0.086,-0.322)(0.5,-0.866)
\put(0.5,0.866){\circle*{0.15}}
\put(1,0){\circle*{0.15}}
\bezier{40}(1,0)(0.433,0.25)(0.5,0.866)
\put(-0.5,-0.866){\circle*{0.15}}
\put(-1,0){\circle*{0.15}}
\bezier{40}(-1,0)(-0.433,-0.25)(-0.5,-0.866)
\end{picture}
}
\put(4,0){
\begin{picture}(0,0)
\celvn
\put(-0.5,0.866){\circle*{0.15}}
\put(-0.5,-0.866){\circle*{0.15}}
\put(-0.5,-0.866){\line(0,1){1.732}}
\put(-0.866,0.5){\circle*{0.15}}
\put(0.5,0.866){\circle*{0.15}}
\bezier{40}(-0.866,0.5)(-0.086,0.322)(0.5,0.866)
\put(0,1){\circle*{0.15}}
\put(0.866,0.5){\circle*{0.15}}
\bezier{40}(0.866,0.5)(0.25,0.433)(0,1)
\put(-0.866,-0.5){\circle*{0.15}}
\put(0.5,-0.866){\circle*{0.15}}
\bezier{40}(-0.866,-0.5)(-0.086,-0.322)(0.5,-0.866)
\put(0,-1){\circle*{0.15}}
\put(0.866,-0.5){\circle*{0.15}}
\bezier{40}(0.866,-0.5)(0.25,-0.433)(0,-1)
\end{picture}
}
\put(8,0){
\begin{picture}(0,0)
\celvn \put(0,1){\circle*{0.15}
\put(-0.866,0.5){\circle*{0.15}}
\put(0.5,0.866){\circle*{0.15}}
\bezier{40}(0.5,0.866)(-0.086,0.322)(-0.866,0.5)
\put(-0.5,-0.866){\circle*{0.15}}
\put(0.866,-0.5){\circle*{0.15}}
\bezier{40}(0.866,-0.5)(0.086,-0.322)(-0.5,-0.866)
\put(-0.5,0.866){\circle*{0.15}}
\put(-1,0){\circle*{0.15}}
\bezier{40}(-1,0)(-0.433,0.25)(-0.5,0.866)
\put(0.5,-0.866){\circle*{0.15}}
\put(1,0){\circle*{0.15}}
\bezier{40}(1,0)(0.433,-0.25)(0.5,-0.866)
\end{picture}
}
\end{picture}
\end{center}
\subsection{$4$-term relations for graphs}\label{ss4tg}
$4$-term relations for graphs were introduced in~\cite{L00}.
We say that a graph invariant~$f$
\emph{satisfies the $4$-term relation}
if for any graph~$G$ and any pair $a,b$ of its vertices we have
\begin{eqnarray}
f(G)-f(G'_{ab})=f(\widetilde{G}_{ab})-f(\widetilde{G}'_{ab});
\end{eqnarray}
here the graph~$G'_{ab}$ is obtained from~$G$
by switching adjacency between the vertices~$a$ and~$b$,
the graph $\widetilde{G}_{ab}$ is obtained from~$G$ by switching
adjacency of~$a$ to all the vertices in~$G$ adjacent to~$b$,
and the graph $\widetilde{G}'_{ab}$ is the result of composition of
these two operations.
\begin{remark}
The transformation $G\mapsto G'_{ab}$ is symmetric with
respect to~$a$ and~$b$,
$G'_{ab}=G'_{ba}$. In turn, the second Vassiliev move for graphs
$G\mapsto \widetilde{G}_{ab}$ is not symmetric, and
the graph $\widetilde{G}_{ab}$ is not, as a rule,
isomorphic to $\widetilde{G}_{ba}$.
\end{remark}
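The two operations entering the $4$-term relation are easy to experiment with on a computer. The following sketch (our own illustration, not part of the original text; graphs are represented as sets of $2$-element frozensets) implements $G\mapsto G'_{ab}$ and $G\mapsto\widetilde{G}_{ab}$ and shows, on the path $1$--$2$--$3$, that the first operation is symmetric in $a,b$ while the second is not.

```python
# Illustrative sketch (not from the paper): the two operations of the
# 4-term relation on simple graphs given as sets of 2-element frozensets.

def switch_edge(edges, a, b):
    """G'_{ab}: switch adjacency between a and b."""
    return edges ^ {frozenset((a, b))}

def second_vassiliev(edges, a, b):
    """G~_{ab}: switch a's adjacency to every neighbour of b other than a."""
    nbrs_b = {v for e in edges if b in e for v in e if v not in (a, b)}
    return edges ^ {frozenset((a, v)) for v in nbrs_b}

# Example: the path 1-2-3.
G = {frozenset((1, 2)), frozenset((2, 3))}
assert switch_edge(G, 1, 2) == switch_edge(G, 2, 1)        # G'_{ab} = G'_{ba}
assert second_vassiliev(G, 1, 2) != second_vassiliev(G, 2, 1)
```

On this example $\widetilde{G}_{12}$ is the triangle, while $\widetilde{G}_{21}=G$, since vertex $1$ has no neighbours other than $2$.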
Graph invariants satisfying $4$-term relations are called
\emph{$4$-invariants}. Similarly to the case of weight systems,
we assume that $4$-invariants take values in a commutative ring.
A direct study of the way the intersection graph of a chord diagram
changes under Vassiliev moves proves the following statement.
\begin{theorem}[\cite{L00}]
If~$f$ is a $4$-invariant of graphs, then
$f\circ g$ is a weight system on chord diagrams.
\end{theorem}
An example of a $4$-term relation is given in Fig.~\ref{f4tig}.
Note, however, that even if a graph~$G$ is an intersection graph,
this does not mean that the same is true for the other three
graphs in a $4$-term relation.
Therefore, any $4$-invariant determines a weight system,
and hence, by Kontsevich's theorem, a knot invariant.
Below, we give several examples of $4$-invariants.
The following question posed by S.~Lando remains open for
the last two decades:
\begin{problem}
Is it true that modulo $4$-term relations for graphs any graph
is a linear combination of intersection graphs?
\end{problem}
Computer computations (I.~A.~Dynnikov, private communication)
show that this is true for all graphs with at most~$8$ vertices.
\subsubsection{Chromatic polynomial}
One of the historically first examples of $4$-invariants
of simple graphs is the chromatic polynomial.
Let~$G$ be a graph, and let $V(G)$ be the set of its vertices.
The
\emph{chromatic polynomial} $\chi_G(c)$
of~$G$ is the number of regular colorings of the set
$V(G)$ in~$c$ colors, that is, the number of mappings
$V(G)\to\{1,2,\dots,c\}$ such that for any two adjacent
(connected by an edge) vertices the values of the mapping are different.
It is well known that the chromatic polynomial satisfies
the \emph{contraction--deletion relation}:
\begin{equation}
\chi_G(c)=\chi_{G'_e}(c)-\chi_{G''_e}(c)
\end{equation}
for any graph~$G$ and any edge~$e$ in it;
here the graph $G'_e$ is obtained from~$G$ by deleting the edge~$e$,
and $G''_e$ is the result of contracting this edge.
When an edge is contracted, it is removed, its two ends merge into
a single new vertex of the graph, and any multiple edge that may
appear as a result is replaced by a single edge.
The contraction--deletion relation can be proved easily:
it corresponds to splitting of the set of regular colorings of the
vertices of the graph $G'_e$ in~$c$ colors into two disjoint subsets,
namely, those colorings where the ends of~$e$ are colored differently
(these are exactly the regular colorings of the vertices of~$G$)
and those where these ends are colored with the same color
(the latter correspond one-to-one to regular colorings
of the vertices of $G''_e$). Since the chromatic polynomial
of a discrete (edge-free) graph on~$n$ vertices is~$c^n$,
the contraction--deletion relation proves that, for a graph~$G$
on~$n$ vertices, $\chi_G(c)$ indeed is a polynomial of degree~$n$
having the leading coefficient~$1$.
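The contraction--deletion recursion can be turned directly into a (very inefficient, but transparent) program. The sketch below, with names of our own choosing, evaluates $\chi_G$ at an integer value of~$c$.

```python
# Sketch (not from the paper): evaluate the chromatic polynomial at an
# integer c via the contraction-deletion relation chi_G = chi_{G'_e} - chi_{G''_e}.

def chromatic(vertices, edges, c):
    if not edges:
        return c ** len(vertices)           # edge-free graph on n vertices: c^n
    e = next(iter(edges))
    a, b = tuple(e)
    deleted = edges - {e}                   # G'_e: delete the edge e
    contracted = set()                      # G''_e: contract e, merging b into a
    for f in deleted:
        g = frozenset(a if v == b else v for v in f)
        if len(g) == 2:                     # drop loops; sets collapse multi-edges
            contracted.add(g)
    return (chromatic(vertices, deleted, c)
            - chromatic(vertices - {b}, contracted, c))

# The triangle admits c(c-1)(c-2) proper colorings:
K3 = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
assert chromatic({1, 2, 3}, K3, 4) == 4 * 3 * 2
```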
\begin{theorem}[\cite{CDL94}]
The chromatic polynomial is a $4$-invariant.
\end{theorem}
Indeed, applying the contraction--deletion relation
to a graph~$G$ and an edge~$e$ having the ends $a,b$
in it, we conclude that
$$
\chi_{G}(c)-\chi_{G'_{ab}}(c)=-\chi_{G''_{ab}}(c).
$$
In turn,
$$
\chi_{\widetilde{G}_{ab}}(c)-\chi_{\widetilde{G}'_{ab}}(c)
=-\chi_{\widetilde{G}''_{ab}}(c).
$$
Verifying that the natural identification of the sets of vertices
of the two graphs $G''_{ab}$ and $\widetilde{G}''_{ab}$
establishes their isomorphism completes the proof.
\subsubsection{Weighted chromatic polynomial
\\ (Stanley's symmetrized chromatic polynomial)}
The chromatic polynomial is an important graph invariant;
however, its distinguishing power is limited.
For example, the chromatic polynomial of any tree on~$n$
vertices is $c(c-1)^{n-1}$, while the number of pairwise nonisomorphic
trees on~$n$ vertices grows very rapidly as~$n$ grows.
The weighted chromatic polynomial, more widely known under the name
of Stanley's symmetrized chromatic polynomial, is a much finer
graph invariant.
In order to define weighted chromatic polynomial, we will need
the notion of weighted graph.
\begin{definition}
A \emph{weighted graph} is a simple graph~$G$ together with
a positive integer, a \emph{weight}, assigned to each of its vertices.
The \emph{weight} of a weighted graph is the sum of the weights of
its vertices.
\end{definition}
\begin{definition}[\cite{CDL94}]
The \emph{weighted chromatic polynomial}
is the weighted graph invariant $G\mapsto W_G(q_1,q_2,\dots)$
taking values in the ring of polynomials in infinitely many variables
$q_1,q_2,\dots$ and possessing the following properties:
\begin{itemize}
\item the weighted chromatic polynomial of the graph on one
vertex of weight~$n$ is~$q_n$;
\item the weighted chromatic polynomial is multiplicative, $W_{G_1\sqcup G_2}=W_{G_1}W_{G_2}$
for a disjoint union of arbitrary weighted graphs $G_1,G_2$;
\item the weighted chromatic polynomial satisfies the weighted
contraction--deletion relation
\begin{equation}
W_G=W_{G'_e}+W_{G''_e},
\end{equation}
where $e$ is an edge of the weighted graph~$G$,
$G'_e$ denotes the result of deleting an edge~$e$ from~$G$,
and $G''_e$ is the result of contracting~$e$;
contraction is defined in the same way as for simple graphs,
and the new vertex is assigned the weight equal to the sum of
the weights of the two ends of~$e$.
\end{itemize}
\end{definition}
\begin{theorem}[\cite{CDL94}]\label{tHawg}
There is a unique weighted graph invariant possessing the
above properties. The weighted chromatic polynomial $W_G$
of a weighted graph~$G$ of weight~$n$ is a quasihomogeneous
polynomial of the variables~$q_1,q_2,\dots$ of degree~$n$,
if we set the weight of variables~$q_k$ equal to~$k$, for $k=1,2,\dots$.
\end{theorem}
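For small weighted graphs the three defining properties compute $W_G$ directly. The sketch below (our own illustration; monomials in the variables $q_i$ are encoded as sorted tuples of indices) confirms, for instance, that $W_{K_2}=q_1^2+q_2$ for the one-edge graph with unit weights.

```python
from collections import Counter

# Sketch (our illustration): the weighted chromatic polynomial via
# W_G = W_{G'_e} + W_{G''_e}. A monomial q_{i1}...q_{ik} is stored as the
# sorted tuple (i1, ..., ik); a polynomial is a Counter of such tuples.

def weighted_chromatic(weights, edges):
    if not edges:
        return Counter({tuple(sorted(weights.values())): 1})
    e = next(iter(edges))
    a, b = tuple(e)
    deleted = edges - {e}
    cweights = {v: w for v, w in weights.items() if v != b}
    cweights[a] = weights[a] + weights[b]   # contracted vertex gets the sum
    cedges = set()
    for f in deleted:
        g = frozenset(a if v == b else v for v in f)
        if len(g) == 2:
            cedges.add(g)
    return weighted_chromatic(weights, deleted) + weighted_chromatic(cweights, cedges)

# One edge, both weights 1: W = q_1^2 + q_2.
W = weighted_chromatic({1: 1, 2: 1}, {frozenset((1, 2))})
assert W == Counter({(1, 1): 1, (2,): 1})
```

Every monomial produced for a graph of weight~$n$ has indices summing to~$n$, in line with the quasihomogeneity stated in Theorem~\ref{tHawg}.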
A simple graph can be considered as a weighted graph, with the
weight of each of its vertices set to~$1$. Any weighted
graph invariant therefore defines an invariant of simple graphs.
\begin{theorem}
The weighted chromatic polynomial is a $4$-invariant.
\end{theorem}
Indeed, similarly to the proof for chromatic polynomial,
we remark that the natural identification of the set
of vertices of the graphs $G''_{ab}$ and $\widetilde{G}''_{ab}$
establishes an isomorphism of these graphs as weighted graphs.
In 1995, R.~Stanley introduced the notion of symmetrized chromatic
polynomial.
\begin{definition}[\cite{S95}]
Let~$G$ be a simple graph. A \emph{coloring} of the set of vertices $V(G)$ of~$G$
into an infinite set of colors is a mapping
$\beta:V(G)\to X=\{x_1,x_2,\dots\}$. To each coloring~$\beta$,
one associates a monomial $x_\beta$ of degree $n=|V(G)|$ in the variables $x_1,x_2,\dots$, which is equal to the product of
the values of~$\beta$ on all the vertices of~$G$.
The \emph{symmetrized chromatic polynomial}
$S_G(x_1,x_2,\dots)$ of~$G$ is the infinite sum
$$
S_G(x_1,x_2,\dots)=
\sum_{\substack{\beta:V(G)\to X\\ \beta\text{ regular}}}
x_\beta,
$$
where the coloring~$\beta$ is said to be \emph{regular}
if it takes any two adjacent vertices to different elements of~$X$.
\end{definition}
For obvious reasons, the symmetrized chromatic polynomial is a
symmetric function in the variables~$X$. Stanley's conjecture
asserts that the symmetrized chromatic polynomial distinguishes
between any two trees. It is confirmed for trees with up to~$29$ vertices~\cite{HJ19}, indicating thus that the symmetrized chromatic
polynomial is a much finer graph invariant than the ordinary
chromatic polynomial $\chi_G$.
Being a symmetric polynomial in the variables~$X$,
any symmetrized chromatic polynomial can be expressed as a
polynomial in any basis in the space of symmetric polynomials.
In particular, one can choose the power sums
$$
p_k=\sum_{i=1}^\infty x_i^k
$$
for this basis. When expressed in this form, the symmetrized
chromatic polynomial $S_G$ becomes a finite quasihomogeneous
polynomial of degree~$n=|V(G)|$ in the variables $p_1,p_2,\dots$
if we set the degree of the variable~$p_k$ equal to~$k$, for
$k=1,2,\dots$. The following statement establishes a relationship
between Stanley's symmetrized chromatic polynomial and
the weighted chromatic polynomial.
\begin{theorem}[\cite{NW99}]
Stanley's symmetrized chromatic polynomial, when expressed
in terms of power sums, under the substitution
$p_k=(-1)^{n-k}q_k$, $k=1,2,\dots$, becomes the weighted chromatic
polynomial; here $n=|V(G)|$.
\end{theorem}
As a corollary, Stanley's symmetrized chromatic polynomial
is a $4$-invariant of graphs.
In turn, the ordinary chromatic polynomial is a specialization
of Stanley's symmetrized one: the latter transforms into
the former under the substitution $p_i=c$, for $i=1,2,3,\dots$.
\subsubsection{Interlace polynomial}
Initially, the interlace polynomial was defined
as a function on directed graphs with two incoming and two outgoing
edges at each vertex. The definition first appeared
in the paper~\cite{ABS04}, which is devoted to
DNA sequencing. Later, the definition was extended
to arbitrary simple graphs. We start by defining the
pivot operation.
Let $G$ be a simple graph. For any pair
$a,b$ of adjacent vertices of~$G$, all its other
vertices are split into four classes, namely:
\begin{enumerate}
\item the vertices adjacent to~$a$, and not to~$b$;
\item the vertices adjacent to~$b$, and not to~$a$;
\item the vertices adjacent to both~$a$, and~$b$;
\item the vertices adjacent neither to~$a$, nor to~$b$.
\end{enumerate}
The \emph{pivot} $G^{ab}$ of~$G$ along the edge~$ab$
is the graph obtained by switching the adjacency between
any two vertices in the first three classes iff they
belong to different classes.
The \emph{interlace polynomial} $L_G(x)$ is defined
by the following recurrence relations:
\begin{enumerate}
\item if~$G$ does not have edges, then $L_G(x)=x^n$,
where $n$ is the number of vertices in~$G$;
\item for any edge $ab$ in~$G$, we have
$$
L_G(x)=L_{G\setminus a}(x)+L_{G^{ab}\setminus b}(x),
$$
\end{enumerate}
where $G\setminus a$ denotes the graph obtained from~$G$
by deleting the vertex~$a$ and all the edges incident
to it.
In~\cite{ABS04}, it is proved that the interlace polynomial
is well defined: the result of its calculation is
independent of the order in which the pivots are
applied. The definition immediately implies that
$L_G(x)$ indeed is a polynomial in~$x$
whose degree is $n=|V(G)|$.
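The recursion above is easy to run on small graphs. In the sketch below (our own code) the polynomial is a dictionary mapping a degree to its coefficient; the run reproduces, for instance, $L_{K_3}(x)=4x$.

```python
# Sketch (our own code): the interlace polynomial as {degree: coefficient},
# computed by L_G = L_{G\a} + L_{G^{ab}\b}, with L = x^n on edge-free graphs.

def interlace(vertices, edges):
    if not edges:
        return {len(vertices): 1}            # edge-free graph: x^n
    a, b = tuple(next(iter(edges)))
    nbrs = lambda v: {u for e in edges if v in e for u in e if u != v}
    na, nb = nbrs(a), nbrs(b)
    cls = {v: (v in na, v in nb) for v in vertices - {a, b}}
    pivot = set(edges)
    touched = [v for v in vertices - {a, b} if cls[v] != (False, False)]
    for i, u in enumerate(touched):          # toggle between distinct classes
        for v in touched[i + 1:]:
            if cls[u] != cls[v]:
                pivot ^= {frozenset((u, v))}
    def restrict(V, E):
        return V, {e for e in E if e <= V}
    poly = {}
    for p in (interlace(*restrict(vertices - {a}, edges)),
              interlace(*restrict(vertices - {b}, pivot))):
        for d, c in p.items():
            poly[d] = poly.get(d, 0) + c
    return poly

K3 = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
assert interlace({1, 2, 3}, K3) == {1: 4}    # L_{K_3}(x) = 4x
```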
\begin{theorem}
The interlace polynomial satisfies $4$-term relations.
\end{theorem}
This theorem is proved in~\cite{NN} and, in a different way, in~\cite{K20}.
\subsubsection{Transition polynomial}
The transition polynomial for $4$-regular graphs endowed
with an oriented Eulerian circuit was intro\-duced in~\cite{J90}.
By contracting each chord of a chord diagram~$C$ we
make the latter into a $4$-regular graph in which
the Wilson loop of the chord diagram becomes an
oriented Eulerian circuit. The transition polynomial
of this graph defines, therefore, a mapping from the
set of chord diagrams to a space of polynomials.
Conversely, to a $4$-regular graph with a distinguished oriented Eulerian circuit a chord diagram is associated;
the number of chords in it coincides with the number of
vertices in the graph. The Eulerian circuit turns into
the Wilson loop of the chord diagram, and the vertices
of the graph become the chords.
Here we define the transition polynomial for a chord diagram.
In order to define it, we will need the notion of transition.
Let~$C$ be a chord diagram. Each chord in~$C$ can be replaced by
a ribbon in one of the following three ways; we encode these ways
by the Greek letters $\chi$, $\phi$, or $\psi$:
\begin{itemize}
\item $\phi$ if this is an ordinary ribbon;
\item $\chi$ if this is a half-twisted ribbon;
\item $\psi$ if this is no ribbon at all, that is, if we
simply erase the chord.
\end{itemize}
A choice of a transition for each ribbon, i.e.,
a mapping $V(C)\to\{\phi,\chi,\psi\}$ taking the set of chords
$V(C)$ to the set of transition types, is called
a \emph{state} of the chord diagram~$C$.
Choose a \emph{weight function}
$w:\{\phi,\chi,\psi\}\to K$ that associates to each of the three
Greek letters an element of a commutative ring~$K$.
The \emph{weighted transition polynomial} of a chord diagram~$C$
is the polynomial
$$
T_C(x)=\sum_{s:V(C)\to\{\phi,\chi,\psi\}}
\Bigl(\prod_{v\in V(C)} w(s(v))\Bigr)x^{c(s)-1},
$$
where the summation is carried over all states of~$C$,
and $c(s)$ denotes the number of connected components of
the boundary of the surface obtained from~$C$ by attaching
a disk to the Wilson loop with the chords replaced by
the corresponding ribbons.
\begin{theorem}[\cite{DZ22}]
By choosing $w(\chi)=t$,
$w(\phi)=-t$, $w(\psi)=s$, for the weight function, we make
the transition polynomial into a weight system taking values
in the ring of polynomials $\mathbb{C}[t,s,x]$.
\end{theorem}
\subsubsection{Skew characteristic polynomial}\label{sssSCP}
Let $G$ be a simple graph with $n=|V(G)|$ vertices.
Number the vertices by the numbers from~$1$ to~$n$
in an arbitrary way and associate to the chosen numbering the
adjacency matrix of~$G$.
The \emph{adjacency matrix} $A(G)$ is an
$n\times n$-matrix over the two-element field~$\mathbb{F}_2$,
containing~$1$ in the cell $(i,j)$
if the vertices numbered~$i$ and~$j$ are connected by an edge,
and containing~$0$ otherwise. In particular, the matrix $A(G)$
is symmetric and has zeroes on the diagonal. The isomorphism
class of~$G$ is reconstructed uniquely from its adjacency matrix.
Various characteristics of the adjacency matrix, for example, its characteristic polynomial, play a key role in the study of graphs.
\begin{definition}
A graph~$G$ is \emph{nondegenerate} (respectively,
\emph{degenerate}) if its adjacency matrix $A(G)$ is nondegenerate
(respectively, \emph{degenerate}) over~$\mathbb{F}_2$.
The \emph{nondegeneracy} of a graph is the graph invariant taking
values in the field~$\mathbb{C}$ and equal to~$1$ for a nondegenerate graph
and to~$0$ for a degenerate one.
\end{definition}
The adjacency matrix of any graph with an odd number of vertices
is degenerate since it is antisymmetric over the field $\mathbb{F}_2$,
whence the nondegeneracy of any graph with an odd number of vertices
is~$0$. A graph with an even number of vertices can be either
nondegenerate or degenerate. We denote the nondegeneracy of a
graph~$G$ by $\nu(G)$.
\begin{lemma}\label{lnd}
Nondegeneracy is invariant with respect to the second Vassiliev move.
\end{lemma}
Indeed, let $G$ be a graph, and let $a,b$ be its vertices.
Pick a numbering of the vertices of~$G$ such that~$a$ is assigned number~$1$
and~$b$ is assigned number~$2$. Then the adjacency matrix $A(\widetilde{G}_{ab})$ of
$\widetilde{G}_{ab}$ is
\begin{eqnarray}\label{eVsm}
A(\widetilde{G}_{ab})=I_{12}^tA(G)I_{12},
\end{eqnarray}
where $I_{12}$ is the square $|V(G)|\times |V(G)|$-matrix over~$\mathbb{F}_2$,
which coincides with the identity matrix everywhere with the exception
of the cell $(1,2)$ whose value equals~$1$,
and $I_{12}^t$ is its transpose matrix. The assertion of the lemma
follows from the fact that both matrices $I_{12}$ and $I_{12}^t$
are nondegenerate.
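The identity~(\ref{eVsm}) can also be checked numerically. In the sketch below (our own example graph, with $a,b$ numbered $0,1$) the off-diagonal $1$ is placed at position $(b,a)$, the convention that makes conjugation reproduce the move switching $a$'s adjacency to the neighbours of~$b$; indexing conventions may differ from the text by a transpose.

```python
# Sketch (our code): the second Vassiliev move on an example graph with
# a = 0, b = 1, compared with conjugation by an elementary matrix over F_2.

n = 5
edges = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (1, 4), (0, 3)]}
a, b = 0, 1

def adjacency(edge_set):
    A = [[0] * n for _ in range(n)]
    for e in edge_set:
        i, j = tuple(e)
        A[i][j] = A[j][i] = 1
    return A

def matmul(X, Y):                            # matrix product over F_2
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def transpose(X):
    return [list(r) for r in zip(*X)]

# Direct move: switch a's adjacency to every neighbour of b other than a.
nbrs_b = {v for e in edges if b in e for v in e if v not in (a, b)}
moved = edges ^ {frozenset((a, v)) for v in nbrs_b}

E = [[int(i == j) for j in range(n)] for i in range(n)]
E[b][a] = 1                                  # elementary matrix of the move
assert matmul(matmul(transpose(E), adjacency(edges)), E) == adjacency(moved)
```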
\begin{definition}[\cite{DL22}]
The \emph{skew characteristic polynomial} of a graph~$G$ is the polynomial
$$
Q_G(u)=\sum_{U\subset V(G)} \nu(G|_U)u^{|V(G)|-|U|},
$$
where the summation is carried over all subsets~$U$ of the set $V(G)$
of vertices of the graph, and $G|_U$ denotes the subgraph of~$G$
induced by a subset~$U$ of its vertices.
\end{definition}
The skew characteristic polynomial of a graph with an even number of vertices
is an even function of its argument, and if the number of vertices is
odd, then the skew characteristic polynomial is odd as well.
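A direct implementation of the definition takes only a few lines. The sketch below (our own code; by convention the empty $0\times0$ matrix counts as nondegenerate) reproduces $Q_{K_1}(u)=u$ and $Q_{K_2}(u)=u^2+1$, illustrating the parity statement above.

```python
from itertools import combinations

# Sketch (our code): Q_G(u) as a dictionary {exponent: coefficient}.

def nondegenerate_f2(A):
    """1 if the 0/1 matrix A is invertible over F_2, else 0 (0x0 matrix: 1)."""
    A = [row[:] for row in A]
    m = len(A)
    for c in range(m):                       # Gaussian elimination mod 2
        piv = next((r for r in range(c, m) if A[r][c]), None)
        if piv is None:
            return 0
        A[c], A[piv] = A[piv], A[c]
        for r in range(m):
            if r != c and A[r][c]:
                A[r] = [(x + y) % 2 for x, y in zip(A[r], A[c])]
    return 1

def skew_char(vertices, edges):
    verts = sorted(vertices)
    n = len(verts)
    Q = {}
    for k in range(n + 1):                   # sum over all induced subgraphs
        for U in combinations(verts, k):
            idx = {v: i for i, v in enumerate(U)}
            A = [[0] * k for _ in range(k)]
            for e in edges:
                i, j = tuple(e)
                if i in idx and j in idx:
                    A[idx[i]][idx[j]] = A[idx[j]][idx[i]] = 1
            d = n - k
            Q[d] = Q.get(d, 0) + nondegenerate_f2(A)
    return {d: c for d, c in Q.items() if c}

assert skew_char({1}, set()) == {1: 1}                        # Q_{K_1}(u) = u
assert skew_char({1, 2}, {frozenset((1, 2))}) == {2: 1, 0: 1} # u^2 + 1
```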
Lemma~\ref{lnd} implies
\begin{proposition}
The skew characteristic polynomial is a $4$-invariant of graphs.
\end{proposition}
Let us explain why the skew characteristic polynomial carries this name.
Let $C$ be a chord diagram, and let $g(C)$ be its intersection graph.
Pick the following orientation of the graph. Choosing a point
$\alpha$ on the Wilson loop that is not an end of any chord,
cut the circle at this point and develop it into a horizontal line
whose orientation inherits that of the circle. Under this transformation,
the chords become half circles (arcs), and two vertices in $g(C)$
are connected by an edge iff the corresponding arcs intersect
one another. Orient each edge of $g(C)$ from the vertex whose arc
has the earlier left end to the vertex whose arc has the later
left end. If we number the vertices of the graph
in the order of the left ends of the corresponding arcs,
then each edge is oriented from the vertex with a smaller number to that
with a greater one. The orientation of the intersection graph obtained
in this way depends on the cut point~$\alpha$ of the circle;
denote the corresponding directed graph by
$\vec{g}_\alpha(C)$.
The adjacency matrix of a directed graph with~$n$ numbered
vertices is the integer-valued $n\times n$-matrix whose
$(i,j)$-entry is $0$ if the vertices $i$ and $j$ are not connected
by an edge, it is $1$ if the arrow goes from~$i$ to~$j$, and
it is $-1$ if there is an arrow going from~$j$ to~$i$.
The adjacency matrix of a directed graph is skew symmetric.
\begin{proposition}[\cite{DL22}]
The characteristic polynomial of the adjacency matrix
of the directed graph $\vec{g}_\alpha(C)$ is independent
of the cut point~$\alpha$.
\end{proposition}
\begin{theorem}[\cite{DL22}]
The characteristic polynomial of the adjacency matrix of the directed
intersection graph of a chord diagram~$C$ coincides with the
skew characteristic polynomial $Q_{g(C)}$ of its intersection graph.
\end{theorem}
Therefore, the skew characteristic polynomial of an intersection
graph coincides with the characteristic polynomial of the directed
graph obtained by a certain admissible orientation of its edges.
This fact justifies the choice of the name for the invariant.
For a graph that is not an intersection graph, there may be no
such orientation.
\section{Hopf algebra of graphs}\label{s2}
Many natural graph invariants are multiplicative.
The value of such an invariant of a graph $G_1\sqcup G_2$,
which is a disjoint union of two graphs,
$G_1$ and $G_2$, is the product of its values on~$G_1$ and~$G_2$.
In particular, all the invariants defined in Sec.~\ref{ss4tg} are multiplicative.
Multiplicativity means that in order to calculate the value
of the invariant on a graph it suffices to know its values
on the connected components of the graph.
In this section we show that the behaviour of the invariants
with respect to comultiplication of graphs is no less important.
Multiplication and comultiplication of graphs together form
a Hopf algebra structure on the vector space spanned by simple graphs.
This Hopf algebra is a polynomial algebra in its primitive elements.
The primitive elements are linear combinations of graphs, and for many
graph invariants their values on primitive elements are much simpler
than their values on the graphs themselves. Thus, the value of the
chromatic polynomial on any primitive element is a linear polynomial,
while the chromatic polynomial of a simple graph on~$n$ vertices
is a polynomial of degree~$n$. Following~\cite{CKL20},
we call polynomial graph invariants whose values on primitive elements
are linear polynomials \emph{umbral invariants}. Integrability properties
of umbral invariants are discussed in Sec.~\ref{ssI}.
\subsection{Multiplication and comultiplication of graphs}
Denote by~$\mathcal{G}_n$ the vector space over~$\mathbb{C}$ freely spanned by
all graphs on~$n$ vertices, $n=0,1,2,\dots$, and let
$$
\mathcal{G}=\mathcal{G}_0\oplus\mathcal{G}_1\oplus\mathcal{G}_2\oplus\dots
$$
be the direct sum of these vector spaces. Note that the vector space
$\mathcal{G}_0$ is one-dimensional; it is spanned by the empty graph.
Introduce a multiplication $m:\mathcal{G}\otimes\mathcal{G}\to\mathcal{G}$
on~$\mathcal{G}$ by its values on the generators
$$
m:G_1\otimes G_2\mapsto G_1\sqcup G_2
$$
and extending it to linear combinations of graphs by linearity.
Obviously, this multiplication is commutative and respects the grading:
$$
m:\mathcal{G}_k\otimes \mathcal{G}_n\to \mathcal{G}_{k+n}
$$
for all~$k$ and~$n$. The empty graph~$e\in\mathcal{G}_0$
is the unity of this multiplication.
The action of the \emph{comultiplication} $\mu:\mathcal{G}\to\mathcal{G}\otimes\mathcal{G}$
on a graph~$G$ has the form~\footnote{Comultiplication is often
denoted by~$\Delta$; in the present paper, however, the symbol~$\Delta$
is heavily used in a different context, that of delta-matroids.}
$$
\mu:G\mapsto \sum_{V(G)=U\sqcup W} G|_U\otimes G|_W,
$$
where summation in the right-hand side is carried over all
ordered partitions of the set of vertices $V(G)$
of~$G$ into two disjoint subsets, and $G|_U$
denotes the subgraph of~$G$ induced by a subset~$U$ of its
vertices. Comultiplication is extended to linear combinations of graphs
by linearity. With respect to this comultiplication, the vector space
$\mathcal{G}$ is a
\emph{coalgebra}. The \emph{counit} with respect to~$\mu$
is the linear mapping $\epsilon:\mathcal{G}\to\mathbb{C}$ that takes
the empty graph~$e$ to~$1$ and all nonempty graphs to~$0$.
\subsection{Primitive elements, Milnor--Moore theorem,\\ and Hopf algebra
structure}
An element~$p$ of a coalgebra with comultiplication~$\mu$ is
\emph{primitive} if
$$
\mu:p\mapsto1\otimes p+p\otimes1.
$$
In the coalgebra $\mathcal{G}$, the graph~$K_1\in\mathcal{G}_1$
consisting of a single vertex is primitive. No other graph
is primitive; however, certain linear combinations of graphs
in which more than one graph participates are primitive.
Thus, it is easy to check that the difference $K_2-K_1^2$
is primitive. Here and below we denote by~$K_n$
the complete graph on~$n$ vertices.
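The primitivity of $K_2-K_1^2$ can be verified mechanically. In the sketch below (our own code; since every graph on at most two vertices is determined by its numbers of vertices and edges, an isomorphism class is encoded as such a pair), the mixed terms $K_1\otimes K_1$ cancel in the difference $\mu(K_2)-\mu(K_1^2)$, and only the terms with the empty graph on one side survive.

```python
from itertools import combinations

# Sketch (our code): comultiplication of a graph on labeled vertices; a
# tensor term is recorded as a pair of isomorphism classes
# (number of vertices, number of edges) -- enough for graphs on <= 2 vertices.

def comul(vertices, edges):
    out = {}
    vs = sorted(vertices)
    for k in range(len(vs) + 1):             # all ordered vertex partitions
        for U in combinations(vs, k):
            U, W = set(U), vertices - set(U)
            cls = lambda S: (len(S), sum(1 for e in edges if e <= S))
            key = (cls(U), cls(W))
            out[key] = out.get(key, 0) + 1
    return out

def sub(p, q):                               # difference of linear combinations
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v}

mu_K2 = comul({1, 2}, {frozenset((1, 2))})
mu_K1sq = comul({1, 2}, set())               # K_1^2: two isolated vertices
p = sub(mu_K2, mu_K1sq)                      # mu(K_2 - K_1^2)
empty = (0, 0)
assert all(empty in term for term in p)      # only 1 (x) p + p (x) 1 survives
```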
Any polynomial algebra $\mathbb{C}[y_1,y_2,\dots]$ in either finitely or
infinitely many variables is endowed with a natural coalgebra
structure if we declare the variables $y_i$ primitive,
$\mu(y_i)=1\otimes y_i+y_i\otimes1$.
Comultiplication is extended to monomials and their linear combinations
(polynomials) as a ring homomorphism,
i.e., $\mu(\prod_{k=1}^n y_{i_k})=\prod_{k=1}^n
(1\otimes y_{i_k}+y_{i_k}\otimes1)$.
\begin{definition}
A tuple $(H,m,\mu,e,\epsilon,S)$, where
\begin{itemize}
\item $H$ is a vector space;
\item $m:H\otimes H\to H$ is a multiplication with a unit~$e:\mathbb{C}\to H$;
\item $\mu:H\to H\otimes H$ is a comultiplication with a counit $\epsilon:H\to\mathbb{C}$;
\item $S:H\to H$ is a linear mapping;
\end{itemize}
is called a \emph{Hopf algebra} if
\begin{enumerate}
\item $m$ is a coalgebra homomorphism;
\item $\mu$ is an algebra homomorphism;
\item $S$ satisfies the relation
$m \circ (S \otimes {\rm Id}) \circ \mu
= e\circ\epsilon = m \circ ({\rm Id} \otimes S)
\circ \mu$.
\end{enumerate}
\end{definition}
The mapping~$S$ in a Hopf algebra is called an \emph{antipode}.
The algebra of polynomials $\mathbb{C}[y_1,y_2,\dots]$ becomes a Hopf algebra
if we define the antipode~$S$ as the mapping $S:y_i\mapsto-y_i$
for all~$i=1,2,\dots$, and extend it to the whole space~$H$ as an algebra
homomorphism.
A Hopf algebra~$H$ can be \emph{graded}. In this case, it is represented
as a direct sum of finite dimensional subspaces
$$
H=H_0\oplus H_1\oplus H_2\oplus\dots,
$$
and both multiplication and comultiplication must respect the grading, i.e.,
$$
m:H_i\otimes H_j\to H_{i+j},\qquad
\mu:H_n\to\bigoplus_{i+j=n} H_i\otimes H_j.
$$
A graded Hopf algebra is said to be \emph{connected}
if the vector space~$H_0$ is one-dimensional.
If a grading (weight) $w(y_i)\in \mathbb{N}$ of each variable~$y_i$
in a polynomial Hopf algebra is given such that for each~$n=1,2,3,\dots$
there are finitely many variables of weight at most~$n$, then
the polynomial Hopf algebra $\mathbb{C}[y_1,y_2,\dots]$
becomes graded. The subspace of grading~$n$ in it is spanned by
the monomials of weight~$n$, where the weight of a monomial is the sum
of the weights of the variables entering it.
The following Milnor--Moore theorem describes the structure of graded
commutative cocom\-mutative Hopf algebras.
\begin{theorem}[\cite{MM65}]\label{tMM}
Any connected graded commutative cocommutative Hopf algebra is a
poly\-nomial Hopf algebra in its primitive elements.
\end{theorem}
This theorem means that if we choose a basis in each subspace of
primitive elements
$P(H_n)\subset H_n\subset H$ of grading~$n$,
then $H$ is a graded polynomial Hopf algebra in the elements
of these bases if we set the weight of each element equal
to its grading.
\begin{remark}
The Milnor--Moore theorem remains true if one only assumes that the Hopf
algebra is cocommutative. However, we are going to use it only
in the commutative case, where the statement above is sufficient.
\end{remark}
A primitive element is naturally associated to any polynomial
in a Hopf algebra of polynomials, namely, its linear part.
This mapping from the vector space of polynomials to the vector space
of their linear parts is a projection, i.e., its square coincides
with itself. The Milnor--Moore theorem~\ref{tMM} implies that
the projection to the subspace of primitive elements is naturally
defined in any connected graded commutative cocommutative Hopf algebra~$H$.
We denote this projection by~$\pi$, $\pi:H\to P(H)$.
Each homogeneous subspace $H_n\subset H$
can be represented as a direct sum $H_n=P(H_n)\oplus D(H_n)$
of the subspace of primitive elements $P(H_n)$ and the kernel
$D(H_n)$ of the projection~$\pi$. The kernel $D(H_n)$
consists of \emph{decomposable elements}, that is, of polynomials
in primitive elements of grading smaller than~$n$.
The projection~$\pi$ can be given by the following formula.
Let $\phi,\psi:H\to H$ be linear mappings.
Define the \emph{convolution product}
$\phi *\psi:H\to H$ as the linear mapping
$(\phi*\psi)(a)=m((\phi\otimes\psi)(\mu(a)))$ for all $a\in H$.
The convolution product in graded Hopf algebras
can be used to define other operations on linear mappings
that are given by power series.
Thus, if $\phi:H\to H$ is a grading preserving linear mapping
of the Hopf algebra~$H$ to itself such that
$\phi(1)=1$, then the logarithm of~$\phi$ can be defined as
$$
\log~\phi=\phi_0-\frac12\phi_0*\phi_0+\frac13\phi_0*\phi_0*\phi_0-\dots,
$$
where $\phi_0:H\to H$ is the linear mapping given by $\phi_0:1\mapsto0$,
and $\phi_0$ coincides with~$\phi$ on all homogeneous subspaces~$H_k$
of positive grading $k>0$.
\begin{theorem}[\cite{S95}]
The projection~$\pi:H\to H$ is the logarithm of the identity
mapping, $\pi=\log~{\rm Id}$.
\end{theorem}
To prove this statement, it suffices to verify that
$\log~{\rm Id}(p)=\phi_0(p)=p$ for any primitive element~$p$,
and $\log~{\rm Id}(p_1p_2\dots)=0$ for any nonlinear monomial in
primitive elements.
In particular, in the Hopf algebra of graphs $\mathcal{G}$ the projection~$\pi$
to the subspace of primitive elements whose kernel is the subspace of
decomposable elements is given by~\cite{L97}
\begin{equation}\label{epp}
\pi(G)=G-1!\sum_{U_1\sqcup U_2=V(G)}G|_{U_1}G|_{U_2}
+2!\sum_{U_1\sqcup U_2\sqcup U_3=V(G)}G|_{U_1}G|_{U_2}G|_{U_3}
-\dots,
\end{equation}
where the summations are carried over all unordered partitions
of the set $V(G)$ of the vertices of~$G$ into $2,3,4,\dots$
nonempty disjoint subsets. For example,
$\pi(K_3)=K_3-3K_1K_2+2K_1^3$. It is easy to check that the linear combination
of graphs in the right hand side indeed is a primitive
element.
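Formula~(\ref{epp}) can be checked for $K_3$ by brute force. In the sketch below (our own code; since every induced subgraph of a complete graph is complete, a product $G|_{U_1}\cdots G|_{U_k}$ is recorded simply by the multiset of block sizes), the computation recovers $\pi(K_3)=K_3-3K_1K_2+2K_1^3$.

```python
from math import factorial

# Sketch (our code): pi(K_n) by formula (epp). A term is recorded by the
# sorted tuple of block sizes of the unordered partition.

def set_partitions(s):
    """All unordered partitions of the list s into nonempty blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]               # or into a new block of its own

def project_complete(n):
    out = {}
    for part in set_partitions(list(range(n))):
        k = len(part)                        # coefficient (-1)^{k-1} (k-1)!
        key = tuple(sorted(len(b) for b in part))
        out[key] = out.get(key, 0) + (-1) ** (k - 1) * factorial(k - 1)
    return out

# pi(K_3) = K_3 - 3 K_1 K_2 + 2 K_1^3:
assert project_complete(3) == {(3,): 1, (1, 2): -3, (1, 1, 1): 2}
```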
The projection $\pi:\mathcal{G}\to\mathcal{G}$ takes disconnected graphs to~$0$
since they are decomposable elements of the Hopf algebra~$\mathcal{G}$.
In turn, projections of connected graphs with~$n$ vertices
form a basis in the space $P(\mathcal{G}_n)$ of primitive elements in grading~$n$.
The identity mapping ${\rm Id}:\mathcal{G}\to \mathcal{G}$ preserves $4$-term relations,
whence the same is true for its logarithm
$\log~{\rm Id}=\pi$. As a corollary, the projection formula works
in the Hopf algebra~$\mathcal{F}$ as well.
In the Hopf algebra of chord diagrams~$\mathcal{C}$,
the projection formula looks similarly, with the set $V(C)$
of chords of the chord diagram~$C$ replacing the set $V(G)$
of vertices of~$G$.
\begin{remark}
There is another way to associate to a graph a primitive element
in the Hopf algebra of graphs, see e.g.~\cite{AM13}, which probably
looks simpler. Namely, take~$G$ to the sum
$$
G\mapsto
\sum_{E'\subset E(G)}
(-1)^{|E(G)|-|E'|}G|_{E'},
$$
where summation is carried over all spanning subgraphs of~$G$.
However, in contrast to the projection~$\pi$
this operation is specific to the Hopf algebra of graphs and cannot
be extended by linearity to a projection to the subspace of primitives.
\end{remark}
\subsection{Graph invariants and the Hopf algebra structure}
The behaviour of many graph invariants, as well as weight systems,
is closely related to the structure of the corresponding Hopf algebra.
The chromatic polynomial demonstrates a typical example.
Let us extend the chromatic polynomial to a mapping
$\chi:\mathcal{G}\to\mathbb{C}[c]$ of the whole space~$\mathcal{G}$
to the space of polynomials in one variable by linearity.
\begin{theorem}\label{tchrpr}
The value of a chromatic polynomial on any primitive element
of the Hopf algebra~$\mathcal{G}$ is a linear polynomial,
i.e., a monomial of degree~$1$.
\end{theorem}
In fact, this property is nothing but the well-known
binomiality property of the chromatic polynomial, which
asserts that~$\chi$ is a Hopf algebra homomorphism:
\begin{theorem}
The chromatic polynomial of a graph~$G$ satisfies the relation
$$
\chi_G(x+y)=\sum_{U\sqcup W=V(G)}
\chi_{G|_U}(x)\chi_{G|_W}(y),
$$
where summation on the right hand side is carried over all
ordered partitions of the set~$V(G)$ of vertices of~$G$
into two disjoint (possibly empty) subsets.
\end{theorem}
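The identity can be confirmed numerically for a small graph. The sketch below (our own code, using brute-force coloring counts) verifies it for the path on three vertices with $x=2$, $y=3$; the partitions with an empty part contribute the terms $\chi_G(x)$ and $\chi_G(y)$.

```python
from itertools import combinations, product

# Sketch (our code): brute-force check of chi_G(x+y) = sum over ordered
# vertex partitions of chi_{G|U}(x) * chi_{G|W}(y).

def chromatic_brute(vertices, edges, c):
    """chi_G(c) by counting proper colorings directly."""
    vs = sorted(vertices)
    total = 0
    for col in product(range(c), repeat=len(vs)):
        colour = dict(zip(vs, col))
        if all(colour[u] != colour[v] for e in edges for u, v in [tuple(e)]):
            total += 1
    return total

V = {1, 2, 3}
E = {frozenset((1, 2)), frozenset((2, 3))}   # the path on three vertices
x, y = 2, 3
lhs = chromatic_brute(V, E, x + y)
rhs = sum(chromatic_brute(U, {e for e in E if e <= U}, x)
          * chromatic_brute(V - U, {e for e in E if e <= V - U}, y)
          for k in range(len(V) + 1)
          for U in map(set, combinations(sorted(V), k)))
assert lhs == rhs                            # both sides equal chi_{P_3}(5) = 80
```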
Graded homomorphisms of the Hopf algebra of graphs~$\mathcal{G}$
to the Hopf algebra of polynomials $\mathbb{C}[p_1,p_2,\dots]$
are studied in more detail below, see Sec.~\ref{ssI}.
Theorem~\ref{tchrpr} means, in particular, that the value of
chromatic polynomial on the projection $\pi(G)$
of an arbitrary graph~$G$ to the subspace of primitives is a
linear polynomial. Equation~(\ref{epp}) and the fact that the constant
term of the chromatic polynomial of any (nonempty) graph is~$0$,
imply that the polynomial
$\chi_{\pi(G)}(c)$ coincides with the linear term of
$\chi_G(c)$ for any graph~$G$.
Certain other graph invariants behave similarly.
\begin{theorem}[\cite{DL22},~\cite{K20}]
The value of the skew characteristic polynomial on any
primitive element is a constant.
The interlace polynomial of a primitive element of order~$n$ is a polynomial of degree at most~$\left[\frac{n}2\right]$.
\end{theorem}
\subsection{Hopf algebra of graphs and integrability}\label{ssI}
In this section we describe a relation between graph invariants
and the theory of integrable systems of mathematical physics.
To a graph invariant, one can associate its averaging, namely,
the sum of the values of the invariant over all graphs, each taken
with the weight inverse to the order of its automorphism
group. If the invariant takes values in
the ring of polynomials in infinitely many variables, then the
result is a formal power series in these variables.
For graphs on surfaces (embedded graphs) similar constructions,
as is well known, lead to solutions of integrable hierarchies.
However, for abstract graphs a result of this type has been proved
only recently~\cite{CKL20}. It asserts that the averaging of an
umbral graph invariant becomes, after an appropriate rescaling
of the variables, a solution to the Kadomtsev--Petviashvili
hierarchy.
\subsubsection{KP integrable hierarchy}
The {\it\bfseries Kadomtsev--Petviashvili integrable hierarchy {\rm(}KP hierarchy{\rm)}} is an infinite system of nonlinear partial differential equations for an unknown function $F(p_1,p_2,\dots)$
depending on infinitely many variables. The equations of the hierarchy
are indexed by partitions of integers $n$, $n\ge4$, into two parts
none of which equals~$1$. The first two equations, which correspond
to partitions of~$4$ and~$5$, respectively, are
\begin{eqnarray*}\frac{\partial^2F}{\partial p_2^2} &=& \frac{\partial^2F}{\partial p_1\partial p_3}-
\frac12\Bigl(\frac{\partial^2F}{\partial p_1^2}\Bigr)^2 - \frac1{12}\frac{\partial^4F}{\partial p_1^4}\\
\frac{\partial^2F}{\partial p_2\partial p_3}&=& \frac{\partial^2F}{\partial p_1\partial p_4}-
\frac{\partial^2F}{\partial p_1^2}
\cdot\frac{\partial^2F}{\partial p_1\partial p_2}
- \frac16\frac{\partial^4F}{\partial p_1^3\partial p_2}.
\end{eqnarray*}
The left hand side of each equation corresponds to a partition into
two parts none of which equals~$1$, while the terms in the right hand side
correspond to partitions of the same number~$n$, which include
parts equal to~$1$. For $n=6$, there are two equations corresponding
to the partitions $2+4=6$ and
$3+3=6$, and so on. The exponentials $e^F$ of solutions~$F$ to the KP hierarchy are
called its $\tau$-functions.
\subsubsection{Umbral graph invariants}
\begin{definition}
A graph invariant with values in the ring of polynomials
$\mathbb{C}[q_1,q_2,\dots]$ in infinitely many variables is called
\emph{umbral} if its extension to a mapping
$\mathcal{G}\to\mathbb{C}[q_1,q_2,\dots]$ by linearity is a graded
Hopf algebra homomorphism; here the grading in the ring of polynomials
is defined by the weights of the variables $w(q_i)=i$, for $i=1,2,3\dots$.
\end{definition}
The definition immediately implies that a graph invariant is umbral
iff its value on any primitive element of order~$n$
is $cq_n$ for some constant~$c$.
\begin{example}
Stanley's symmetrized chromatic polynomial is an example of an umbral invariant.
Indeed, by Theorem~\ref{tHawg}, the Hopf algebra of weighted graphs
modulo the contraction--deletion relation is isomorphic, as a graded
Hopf algebra, to the Hopf algebra of polynomials.
\end{example}
Other umbral invariants, together with their various specializations
obtained by assigning concrete values to the variables~$q_i$,
include many well-known graph invariants.
\begin{example}
In~\cite{CKL20}, Abel polynomials of graphs are introduced.
Associate to any spanning forest in a graph~$G$ the monomial
in the variables~$q_i$ equal to the product of the factors
$iq_i$ over all the trees in the forest, where~$i$
denotes the number of vertices in the tree. The factor~$i$
here corresponds to the choice of a root in the tree,
that is, such a monomial encodes the number of rooted forests
corresponding to the chosen spanning forest. The Abel polynomial
$A_G(q_1,q_2,\dots)$ is equal to the sum of these monomials
over all spanning forests in~$G$.
If $G$ is a graph with~$n$ vertices, then $A_G$
is a quasihomogeneous polynomial of degree~$n$,
the coefficient of $q_n$ in it being the complexity of~$G$
times~$n$. (Recall that the
\emph{complexity} of a graph is the number of spanning trees in it.)
After the substitution $q_i=x$ for all $i=1,2,\dots$,
the Abel polynomial $A_{K_n}$ of the complete graph on~$n$ vertices becomes
the classical Abel polynomial~\cite{Ab826}
$A_n(x)=x(x+n)^{n-1}$.
\begin{theorem}[\cite{CKL20}]
The Abel polynomial is an umbral graph invariant, i.e., its
extension to the Hopf algebra of graphs~$\mathcal{G}$
is a graded Hopf algebra homomorphism.
\end{theorem}
\end{example}
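The definition of the Abel polynomial is straightforward to implement by enumerating acyclic edge subsets. A minimal sketch (the graph encoding and function names are ours) that reproduces $A_{K_3}=q_1^3+6q_1q_2+9q_3$ and checks the substitution $q_i=x$ against $A_3(x)=x(x+3)^2$:

```python
from itertools import combinations

def abel_polynomial(n, edges):
    """Abel polynomial of a graph on vertices 0..n-1, as a dict
    {monomial: coefficient}; a monomial is the sorted tuple of tree sizes i
    (each entry standing for a variable q_i), and each spanning forest
    contributes the product of i over its trees."""
    poly = {}
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            parent = list(range(n))           # union-find to reject cycles

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v

            acyclic = True
            for a, b in subset:
                ra, rb = find(a), find(b)
                if ra == rb:
                    acyclic = False
                    break
                parent[ra] = rb
            if not acyclic:
                continue
            sizes = {}                        # component (= tree) sizes
            for v in range(n):
                root = find(v)
                sizes[root] = sizes.get(root, 0) + 1
            mono = tuple(sorted(sizes.values()))
            coeff = 1
            for i in mono:
                coeff *= i
            poly[mono] = poly.get(mono, 0) + coeff
    return poly

# A_{K_3} = q_1^3 + 6 q_1 q_2 + 9 q_3
K3 = [(0, 1), (1, 2), (0, 2)]
A = abel_polynomial(3, K3)
assert A == {(1, 1, 1): 1, (1, 2): 6, (3,): 9}
# the substitution q_i = x turns A_{K_3} into the Abel polynomial x(x+3)^2
for x in (1, 2, 5):
    assert sum(c * x**len(m) for m, c in A.items()) == x * (x + 3)**2
```

Note that the coefficient of $q_3$ is $9 = 3\cdot 3$, the number of vertices times the complexity of $K_3$, as stated above.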
\subsubsection{Integrability of umbral invariants}
\begin{theorem}\label{tm}
Let~$I$ be an umbral graph invariant taking values in
the ring of polynomials in infinitely many variables
$q_1,q_2,\dots$. Define the generating function
\begin{equation}\label{eupgi}
\mathcal{I}^\circ(q_1,q_2,\dots)=\sum_{G}\frac{I_G(q_1,q_2,\dots)}{|\mathop{\rm Aut}(G)|},
\end{equation}
where summation is carried over all isomorphism classes of graphs,
and $|\mathop{\rm Aut}(G)|$ denotes the order of the automorphism group of~$G$.
Define the constants $i_n$ by
$$
i_n=n!\sum_{G\text{\rm\ connected}} \frac{[q_n]I_G(q_1,q_2,\dots)}{|\mathop{\rm Aut}(G)|},
$$
where $[q_n]I_G$ denotes the coefficient of the monomial~$q_n$
in~$I_G$, and the summation is carried over all isomorphism classes
of connected graphs.
Suppose the constant~$i_n$ is nonzero for all $n=1,2,\dots$.
Then after rescaling the variables~$\displaystyle q_n=\frac{2^{n(n-1)/2}(n-1)!}{i_n}\cdot p_n$
the generating function~$\mathcal{I}^\circ$ becomes the following linear
combination of the one-part Schur polynomials{\rm:}
$$
S(p_1,p_2,\dots)=
1+2^0s_1(p_1)+2^1s_2(p_1,p_2)+\dots+2^{n(n-1)/2}s_n(p_1,p_2,\dots,p_n)+\dots.
$$
\end{theorem}
We emphasize that after the rescaling in the theorem
{\it each\/} umbral graph invariant~$I$ becomes
{\it one and the same\/} generating function.
Summation over connected graphs in the expression for~$i_n$
can be replaced by summation over all graphs, since for any
disconnected graph~$G$ the coefficient of the monomial~$q_n$ in the polynomial $I_G(q_1,q_2,\dots)$ is~$0$.
Since any linear combination of one-part Schur polynomials
is a $\tau$-function of the KP hierarchy, we have
\begin{corollary}
After the rescaling of the variables given in Theorem~\ref{tm},
the generating function $\mathcal{I}^\circ$ becomes a
$\tau$-function for the KP hierarchy.
\end{corollary}
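The one-part Schur polynomials entering the theorem coincide with the complete homogeneous symmetric functions written in power sums, so they satisfy Newton's recurrence $n\,s_n=\sum_{k=1}^n p_k\,s_{n-k}$, which follows from the generating series $\sum_n s_n z^n=\exp\bigl(\sum_k p_k z^k/k\bigr)$. A small sketch (the evaluation point is ours) computing them and a truncation of the series $S$:

```python
from fractions import Fraction as Fr

def one_part_schur(p, N):
    """s_0, ..., s_N evaluated at the power sums p = [p_1, p_2, ...],
    via Newton's recurrence n s_n = sum_{k=1}^n p_k s_{n-k}."""
    s = [Fr(1)]
    for n in range(1, N + 1):
        s.append(sum(Fr(p[k - 1]) * s[n - k] for k in range(1, n + 1)) / n)
    return s

p = [2, 3, 5, 7]                 # arbitrary values of p_1, ..., p_4
s = one_part_schur(p, 4)
# cross-check against the closed formulas s_1 = p_1, s_2 = (p_1^2 + p_2)/2,
# s_3 = (p_1^3 + 3 p_1 p_2 + 2 p_3)/6
assert s[1] == p[0]
assert s[2] == Fr(p[0]**2 + p[1], 2)
assert s[3] == Fr(p[0]**3 + 3*p[0]*p[1] + 2*p[2], 6)

# truncation of the generating function S = sum_n 2^{n(n-1)/2} s_n
S4 = sum(2**(n*(n - 1)//2) * s[n] for n in range(5))
```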
\section{Hopf algebra of chord diagrams\\ and weight systems associated to Lie algebras}\label{s3}
Both the space of chord diagrams and the space of graphs are equipped with natural Hopf algebra structures. In this section we describe the relationship of this structure with the weight systems constructed from Lie algebras. This is a large and important class of weight systems related to quantum knot invariants. However, finding the values of weight systems in this class is computationally a rather complicated problem, since it requires computations in a noncommutative algebra. We recall known results and describe new ones that help to overcome these difficulties and obtain explicit answers in the case of the Lie algebras $\mathfrak{sl}(2)$ and $\mathfrak{gl}(N)$.
\subsection{Structure of the Hopf algebra of chord diagrams}
Similarly to graphs, chord diagrams generate a Hopf algebra. An essential distinction of this Hopf algebra from the Hopf algebra of graphs is that it is well defined only on the quotient space of chord diagrams modulo the subspace spanned by the $4$-term relations. Without taking this quotient there is no way to introduce the multiplication correctly (while the comultiplication is defined in a natural way).
Define the product $C_1C_2$ of chord diagrams $C_1,C_2$ as the chord diagram obtained by cutting their Wilson loops at arbitrary points distinct from the endpoints of the chords and gluing them together, preserving the orientations of the loops, see Fig.~\ref{fcdp}. This product is well defined modulo the $4$-term relations.
\begin{figure}[h]
\centerline{
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\draw (0,0) circle (1);
\draw (1,0) -- (-1,0);
\fill[black] (1,0) circle (1pt)
(-1,0) circle (1pt)
(60:1) circle (1pt)
(120:1) circle (1pt)
(240:1) circle (1pt)
(300:1) circle (1pt);
\draw (60:1) .. controls (0,0.2) and (0,-0.2) .. (300:1);
\draw (120:1) .. controls (0,0.2) and (0,-0.2) .. (240:1);
\end{tikzpicture} $\times$
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\draw (0,0) circle (1);
\draw (1,0) -- (-1,0);
\fill[black] (1,0) circle (1pt)
(-1,0) circle (1pt)
(60:1) circle (1pt)
(120:1) circle (1pt)
(240:1) circle (1pt)
(300:1) circle (1pt);
\draw (60:1) .. controls (0,0.2) and (0,-0.2) .. (300:1);
\draw (120:1) .. controls (0,0.2) and (0,-0.2) .. (240:1);
\end{tikzpicture} $=$
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\draw ([shift=( 20:1cm)]0,0) arc [start angle= 20, end angle= 340, radius=1];;
\draw (0,1) -- (0,-1);
\fill[black] (0,1) circle (1pt)
(0,-1) circle (1pt)
(30:1) circle (1pt)
(150:1) circle (1pt)
(210:1) circle (1pt)
(330:1) circle (1pt);
\draw (30:1) .. controls (0.2,0) and (-0.2,0) .. (150:1);
\draw (210:1) .. controls (-0.2,0) and (0.2,0) .. (330:1);
\draw[xshift=2.5cm] ([shift=( 200:1cm)]0,0) arc [start angle= 200, end angle=520, radius=1];
\draw[xshift=2.5cm] (30:1) -- (210:1);
\fill[xshift=2.5cm,black] (0,1) circle (1pt)
(0,-1) circle (1pt)
(30:1) circle (1pt)
(150:1) circle (1pt)
(210:1) circle (1pt)
(330:1) circle (1pt);
\draw[xshift=2.5cm] (150:1) .. controls (190:0.2) and (230:0.2) .. (0,-1);
\draw[xshift=2.5cm] (0,1) .. controls (50:0.2) and (10:0.2) .. (330:1);
\draw (20:1) -- ($(20:1)+(0.62,0)$);
\draw (-20:1) -- ($(-20:1)+(0.62,0)$);
\end{tikzpicture} $=$
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\draw (0,0) circle (1);
\draw ( 0:1) -- (-90:1)
( 30:1) -- (-30:1)
( 60:1) -- (-60:1)
( 90:1) -- (150:1)
(120:1) -- (210:1)
(180:1) -- (240:1);
\fill[black] ( 0:1) circle (1pt)
( 30:1) circle (1pt)
( 60:1) circle (1pt)
( 90:1) circle (1pt)
(120:1) circle (1pt)
(150:1) circle (1pt)
(180:1) circle (1pt)
(210:1) circle (1pt)
(240:1) circle (1pt)
(270:1) circle (1pt)
(300:1) circle (1pt)
(330:1) circle (1pt);
\end{tikzpicture}}
\caption{Multiplication of chord diagrams}\label{fcdp}
\end{figure}
The comultiplication takes any chord diagram $C$ to the sum of tensor products of chord diagrams obtained by splitting the set of chords of~$C$ into two disjoint subsets
$$
\mu:C\mapsto \sum_{I\sqcup J=V(C)} C|_I\otimes C|_J.
$$
These operations turn the vector space
$$
\mathcal{C}=\mathcal{C}_0\oplus\mathcal{C}_1\oplus\mathcal{C}_2\oplus\dots,
$$
where $\mathcal{C}_n$ is spanned by chord diagrams with~$n$ chords modulo the $4$-term relations, into a graded commutative cocommutative connected Hopf algebra~\cite{K93}.
The map taking a chord diagram to its intersection graph is extended by linearity to a homomorphism of the Hopf algebra of chord diagrams to the Hopf algebra of graphs modulo the $4$-term relations.
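The comultiplication above is easy to implement directly: a chord diagram with $n$ chords can be encoded by the pairs of positions of its chord endpoints on the circle, and the $2^n$ terms of $\mu(C)$ enumerated by subsets. A minimal sketch (the encoding is ours) that also verifies cocommutativity on a small example:

```python
from itertools import combinations
from collections import Counter

def restrict(subset):
    """Sub-diagram spanned by a subset of chords: keep those chords and
    renumber the surviving endpoints along the circle."""
    pts = sorted(p for ch in subset for p in ch)
    renum = {p: i for i, p in enumerate(pts)}
    return frozenset((renum[a], renum[b]) for a, b in subset)

def coproduct(chords):
    """The terms of mu(C) = sum over splittings C|_I (x) C|_J of the
    chord set into two disjoint subsets (as ordered pairs)."""
    chords = list(chords)
    terms = []
    for r in range(len(chords) + 1):
        for I in combinations(chords, r):
            J = [ch for ch in chords if ch not in I]
            terms.append((restrict(I), restrict(J)))
    return terms

# the diagram with two crossing chords, endpoints 0..3 along the circle
C = [(0, 2), (1, 3)]
terms = coproduct(C)
assert len(terms) == 2**len(C)
# each single chord restricts to the one-chord diagram {(0, 1)}
assert (frozenset({(0, 1)}), frozenset({(0, 1)})) in terms
# cocommutativity: the multiset of terms is symmetric under swapping factors
assert Counter(terms) == Counter((b, a) for a, b in terms)
```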
The Hopf algebra of chord diagrams is naturally isomorphic to the Hopf algebra of Jacobi diagrams. A \emph{Jacobi diagram} is an embedded $3$-regular graph with a distinguished oriented cycle called the Wilson loop. The grading of such a diagram is equal to half the number of its vertices (it is easy to see that the number of vertices in a $3$-regular graph is necessarily even). Denote by $\mathcal{J}_n$ the space spanned by Jacobi diagrams of grading~$n$ modulo
\begin{itemize}
\item the skew-symmetry relations;
\item the STU relations.
\end{itemize}
The skew-symmetry relation states that a change of orientation at any vertex of the diagram results in a change of the sign of the corresponding element; the STU relation is depicted in Fig.~\ref{7fig:40}. It allows one to represent any Jacobi diagram as a linear combination of chord diagrams. This correspondence establishes an isomorphism between~$\mathcal{J}$ and~$\mathcal{C}$~\cite{BN95}.
Similarly to the case of chord diagrams, the product of two Jacobi diagrams is defined by gluing their Wilson loops cut at arbitrary points distinct from the vertices of the factors. After removing the Wilson loop, a Jacobi diagram splits into a number of connected components (the connected components of a chord diagram are its chords). The comultiplication takes a Jacobi diagram to the sum of tensor products of its subdiagrams obtained by splitting the set of its connected components into two subsets in all possible ways.
\begin{figure}[htbp]
\begin{picture}(100,50)(-20,10)
\thicklines
\put(30,40){\circle{30}}
\thicklines
\put(30,40){\line(0,1){15}}
\bezier{80}(30,40)(30,40)(19,29)
\bezier{80}(30,40)(30,40)(41,29)
\put(18,50){\line(1,0){24}}
\bezier{80}(95,25)(110,20)(125,25)
\put(110,23){\line(0,1){15}}
\put(110,38){\line(1,2){10}}
\put(110,38){\line(-1,2){10}}
\bezier{80}(175,25)(195,20)(215,25)
\bezier{80}(275,25)(295,20)(315,25)
\bezier{80}(187,23)(187,40)(175,55)
\bezier{80}(203,23)(203,40)(215,55)
\bezier{80}(287,23)(287,40)(315,55)
\bezier{80}(303,23)(303,40)(275,55)
\put(150,40){\makebox(0,0){$=$}}
\put(250,40){\makebox(0,0){$-$}}
\put(33,10){\makebox(0,0){(a)}}
\put(197,10){\makebox(0,0){(b)}}
\end{picture}
\caption{(a) A Jacobi diagram; (b) the STU relation. The half-edges at each internal vertex are oriented counterclockwise; transversal intersections of edges are not vertices
}\label{7fig:40}
\end{figure}
\begin{figure}[htbp]
\begin{picture}(100,40)(6,20)
\thicklines
\put(35,40){\circle{30}}
\put(105,40){\circle{30}}
\put(150,40){\circle{30}}
\put(200,40){\circle{30}}
\put(260,40){\circle{30}}
\put(310,40){\circle{30}}
\put(355,40){\circle{30}}
\put(15,40){\makebox(0,0){$\mu\Big(\,$}}
\put(68,40){\makebox(0,0){$\Big)\,=1\otimes$}}
\put(128,40){\makebox(0,0){$+$}}
\put(175,40){\makebox(0,0){$\otimes$}}
\put(228,40){\makebox(0,0){$+$}}
\put(285,40){\makebox(0,0){$\otimes$}}
\put(333,40){\makebox(0,0){$+$}}
\put(380,40){\makebox(0,0){$\otimes1$}}
\put(35,40){\line(0,1){15}}
\bezier{80}(35,40)(35,40)(24,29)
\bezier{80}(35,40)(35,40)(46,29)
\put(24,50){\line(1,0){22}}
\put(105,40){\line(0,1){15}}
\bezier{80}(105,40)(105,40)(94,29)
\bezier{80}(105,40)(105,40)(116,29)
\put(94,50){\line(1,0){22}}
\put(355,40){\line(0,1){15}}
\bezier{80}(355,40)(355,40)(344,29)
\bezier{80}(355,40)(355,40)(366,29)
\put(344,50){\line(1,0){22}}
\put(150,40){\line(0,1){15}}
\bezier{80}(150,40)(150,40)(139,29)
\bezier{80}(150,40)(150,40)(161,29)
\put(189,50){\line(1,0){22}}
\put(310,40){\line(0,1){15}}
\bezier{80}(310,40)(310,40)(299,29)
\bezier{80}(310,40)(310,40)(321,29)
\put(249,50){\line(1,0){22}}
\end{picture}
\caption{Comultiplication of Jacobi diagrams}\label{7fig:40a}
\end{figure}
In particular, connected Jacobi diagrams are primitive elements in the Hopf algebra~$\mathcal{J}$. These primitive elements span each of the homogeneous subspaces $P(\mathcal{J}_n)$ of primitive elements
(though usually they are subject to certain linear relations).
As a corollary, each of the spaces $P(\mathcal{J}_n)$, as well as the space $P(\mathcal{C}_n)$ isomorphic to it, admits a natural filtration
$$
\{0\}\subset P^{(1)}(\mathcal{J}_n)\subset P^{(2)}(\mathcal{J}_n)\subset\dots \subset P^{(n+1)}(\mathcal{J}_n)
=P(\mathcal{J}_n),
$$
where the subspace $P^{(k)}(\mathcal{J}_n)$ is spanned by connected Jacobi diagrams with at most~$k$ vertices on the Wilson loop.
The universal enveloping algebra $U\mathfrak{g}$ of the Lie algebra~$\mathfrak{g}$ admits a filtration by vector subspaces
$$
U^{(0)}\mathfrak{g}\subset U^{(1)}\mathfrak{g}\subset U^{(2)}\mathfrak{g}\subset\dots,\qquad
U^{(0)}\mathfrak{g}\cup U^{(1)}\mathfrak{g}\cup U^{(2)}\mathfrak{g}\cup\dots=U\mathfrak{g},
$$
where $U^{(k)}\mathfrak{g}$ is spanned by monomials of degree at most~$k$ in the elements of~$\mathfrak{g}$. This filtration, in turn, induces a filtration in the center $ZU\mathfrak{g}$ of the universal enveloping algebra. It was shown in~\cite{CV97} that for the weight system corresponding to a given metrized Lie algebra~$\mathfrak{g}$, its value on a primitive element lying in the filtration term~$P^{(k)}$ belongs to $ZU^{(k)}$.
A conjecture in~\cite{ZY22} states that the projection $\pi(C)$ of a given chord diagram~$C$ with $n$ chords to the subspace of primitive elements lies in the subspace $P^{(k+1)}(\mathcal{J}_n)$, where $k$ is half the circumference of the intersection graph~$g(C)$. (The \emph{circumference} of a simple graph is the length of the longest cycle in it.) The results concerning the values of the weight systems associated to Lie algebras on the projections of chord diagrams to the subspace of primitive elements stated below confirm this conjecture.
\subsection{Universal construction of a weight system\\ associated to a Lie algebra}
To each metrized Lie algebra a weight system is associated which takes values in the center of the universal enveloping algebra of this Lie algebra. This center is typically a ring of polynomials in several generators which are the Casimir elements of the Lie algebra. We describe below the construction of this weight system and provide various algorithms for its computation leading to explicit formulas.
\subsubsection{Definition of the weight system}
The universal construction of the weight system associated to a metrized Lie algebra is as follows. Let $\mathfrak{g}$ be a Lie algebra equipped with an ${\rm ad}$-invariant scalar product (a non-degenerate symmetric bilinear form) $\langle\cdot,\cdot\rangle$; here
${\rm ad}$-invariance means that the equality $\langle [a,b],c\rangle=\langle a,[b,c]\rangle$
holds for any three elements $a,b,c\in\mathfrak{g}$. Pick an arbitrary basis $e_1,\dots,e_d$ in $\mathfrak{g}$, $d=\dim\mathfrak{g}$, and denote by $e_1^*,\dots,e_d^*$ the elements of the dual basis, $\langle e_i,e_j^*\rangle=\delta_{i,j}$, $i,j=1,\dots,d$.
Now let~$C$ be a chord diagram with~$n$ chords. Choose an arbitrary point on the circle distinct from the endpoints of the chords and call it the cut point. Then, for each chord $a$, pick an index $i_a$ from the range $1,\dots,d$, and assign the basis element $e_{i_a}$ to one of the endpoints of the chord, and the element $e_{i_a}^*$ of the dual basis to the other endpoint. Multiply the resulting $2n$ elements of the Lie algebra in the order prescribed by the orientation of the Wilson loop, starting from the cut point. The resulting monomial is considered as an element of the universal enveloping algebra $U\mathfrak{g}$ of the given Lie algebra. Sum up the monomials obtained over all the~$d^n$ possible ways to put indices on the chords, and denote the resulting sum by $w_\mathfrak{g}(C)\in U\mathfrak{g}$. For example, for the arc diagram
\begin{center}
\begin{picture}(95,40)(460,580)
\thinlines
\put(495,580){\oval(30,30)[t]}
\put(475,580){\oval(42,42)[t]}
\put(495,580){\oval(60,60)[t]}
\thicklines
\put(440,580){\vector( 1, 0){105}}
\end{picture}
\end{center}
obtained by cutting the corresponding chord diagram, the resulting element of the universal enveloping algebra is equal to
$$
\sum_{i,j,k=1}^de_ie_je_ke_i^*e_k^*e_j^*.
$$
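The summation above can be carried out numerically in the defining representation of $\mathfrak{sl}(2)$: a central element acts there as a scalar, so the computation detects both the centrality of $w_{\mathfrak{sl}(2)}(C)$ and its scalar value. This is a sketch under the assumption that the invariant form is $\langle x,y\rangle=\mathrm{Tr}(xy)$, for which the Casimir acts as $3/2$ in the defining representation; other normalizations of the form rescale the Casimir and hence the resulting values:

```python
from fractions import Fraction as Fr
from itertools import product

def M(a, b, c, d):                  # 2x2 matrix over the rationals
    return ((Fr(a), Fr(b)), (Fr(c), Fr(d)))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(X, Y):
    return tuple(tuple(X[i][j] + Y[i][j] for j in range(2)) for i in range(2))

# basis e, f, h of sl(2) in the defining representation; for the
# (assumed) form <x,y> = Tr(xy) the dual basis is f, e, h/2
E, F, H2 = M(0, 1, 0, 0), M(0, 0, 1, 0), M(Fr(1, 2), 0, 0, Fr(-1, 2))
BASIS = [E, F, M(1, 0, 0, -1)]
DUAL = [F, E, H2]
ID = M(1, 0, 0, 1)

def weight(word):
    """Evaluate w_{sl(2)} on a cut chord diagram in the defining rep.
    `word` lists chord labels along the Wilson loop, e.g. 'abab'; the first
    occurrence of a label gets a basis element, the second its dual."""
    chords = sorted(set(word))
    total = M(0, 0, 0, 0)
    for idx in product(range(3), repeat=len(chords)):
        m, seen = ID, set()
        for ch in word:
            k = idx[chords.index(ch)]
            m = mul(m, DUAL[k] if ch in seen else BASIS[k])
            seen.add(ch)
        total = add(total, m)
    return total

def scalar(X):  # the element is central, so it acts as a scalar
    assert X[0][1] == X[1][0] == 0 and X[0][0] == X[1][1]
    return X[0][0]

# one chord: the Casimir, acting as 3/2 in the defining representation
assert scalar(weight('aa')) == Fr(3, 2)
# two non-crossing chords ('aabb' and 'abba'): the square of the Casimir
assert scalar(weight('aabb')) == scalar(weight('abba')) == Fr(9, 4)
# two crossing chords: still central; with this form the value is -3/4
assert scalar(weight('abab')) == Fr(-3, 4)
```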
\begin{theorem}[\cite{K93}]
\begin{enumerate}
\item The element $w_\mathfrak{g}(C)$ is independent of the choice of the basis $e_1,\dots,e_d$ in the Lie algebra.
\item The element $w_\mathfrak{g}(C)$ is independent of the choice of a cut point in the chord diagram.
\item The element $w_\mathfrak{g}(C)$ belongs to the center $ZU\mathfrak{g}$ of $U\mathfrak{g}$.
\item The invariant $w_\mathfrak{g}$ satisfies the $4$-term relations, and thus is a weight system.
\item The constructed weight system taking values in the commutative ring $ZU\mathfrak{g}$ is multiplicative, i.e.\ $w_\mathfrak{g}(C_1C_2)=w_\mathfrak{g}(C_1)w_\mathfrak{g}(C_2)$ for any two chord diagrams $C_1$ and $C_2$.
\end{enumerate}
\end{theorem}
\subsubsection{Jacobi diagrams and the values of the weight system on primitive elements}
The definition of the weight system associated with a Lie algebra has a convenient reformulation in terms of Jacobi diagrams. For a given metrized Lie algebra~$\mathfrak{g}$, consider the linear $3$-form $\omega(a,b,c)=\langle [a,b],c\rangle$ treated as an element of the third tensor power $\mathfrak{g}^*\otimes\mathfrak{g}^*\otimes\mathfrak{g}^*$. Denote by $\omega^*\in\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}$ the corresponding dual tensor obtained by identifying~$\mathfrak{g}$ with the dual space~$\mathfrak{g}^*$ by means of the scalar product $\langle\cdot,\cdot\rangle$. Note that the tensor $\omega^*$ is invariant with respect to cyclic permutations of the tensor factors and changes sign under the transposition of any two factors. Now take a Jacobi diagram. We associate the tensor $\omega^*$ with each of its internal vertices, labelling the factors of the tensor product by the edges exiting this vertex. To each internal edge, i.e.\ an edge connecting two internal vertices of the diagram, we associate the convolution, by means of the scalar product, of the tensor factors corresponding to the ends of the edge. The result of these convolutions is an element of a tensor power of $\mathfrak{g}$ whose factors are labelled by the vertices lying on the Wilson loop. Finally, we apply to this tensor the homomorphism to the universal enveloping algebra sending the tensor product of a collection of elements of~$\mathfrak{g}$ to their product in~$U\mathfrak{g}$, taken in the order in which they follow along the Wilson loop counterclockwise, starting from the cut point. The resulting element of~$U\mathfrak{g}$ is nothing but the value of the weight system~$w_{\mathfrak{g}}$ on the given Jacobi diagram. This definition is particularly useful for the efficient computation of the values of the weight system~$w_\mathfrak{g}$ on primitive elements represented by connected Jacobi diagrams.
\subsection{$\mathfrak{sl}(2)$ weight system}
The Lie algebra~$\mathfrak{sl}(2)$ is the simplest noncommutative Lie algebra with a nondegenerate ${\rm ad}$-invariant scalar product. In contrast to the case of more complicated Lie algebras, the $\mathfrak{sl}(2)$ weight system admits the Chmutov--Varchenko recurrence relation, which dramatically simplifies its explicit computation. Nevertheless, even with this recurrence relation the computation remains quite laborious and leads to explicit answers in a limited number of cases only. In particular, no explicit formula for the values of the~$w_{\mathfrak{sl}(2)}$ weight system on chord diagrams all of whose chords intersect one another pairwise was known until recently. We describe below the corresponding result of P.~Zakorko, as well as the result of P.~Zinova and M.~Kazarian about the values of $w_{\mathfrak{sl}(2)}$ on chord diagrams whose intersection graph is a complete bipartite graph.
\subsubsection{The Chmutov--Varchenko recurrence relation}
The weight system $w_\mathfrak{g}$ associated with a given Lie algebra $\mathfrak{g}$ takes values in the commutative ring~$ZU\mathfrak{g}$. However, the summands of an expression for~$w_\mathfrak{g}(C)$ lie in the complicated noncommutative algebra~$U\mathfrak{g}$. Therefore, a direct application of the definition is not efficient in practice, and one needs to find methods for more efficient computation of such a weight system that would deal with commutative rings (say, rings of polynomials) at each step.
The simplest noncommutative Lie algebra is~$\mathfrak{sl}(2)$. An efficient algorithm for computing the corresponding weight system was suggested by Chmutov and Varchenko~\cite{CV97}. The center of the universal enveloping algebra $U\mathfrak{sl}(2)$ is the ring of polynomials in one variable, generated by the Casimir element; we denote this generator by~$c$. Thus, the weight system $w_{\mathfrak{sl}(2)}$ takes values in the algebra of polynomials in~$c$.
The value $w_{\mathfrak{sl}(2)}(C)$ on a chord diagram~$C$
with~$n$ chords is a polynomial in~$c$ of degree~$n$ with leading coefficient~$1$ and zero constant term (for $n>0$).
\begin{theorem}\label{th:Chmutov-Varchenko}
The weight system $w_{\mathfrak{sl}(2)}$ associated to the Lie algebra $\mathfrak{sl}(2)$ satisfies the following relations:
\begin{enumerate}
\item $w_{\mathfrak{sl}(2)}$ is a multiplicative weight system, $w_{\mathfrak{sl}(2)}(C_1C_2)=w_{\mathfrak{sl}(2)}(C_1)w_{\mathfrak{sl}(2)}(C_2)$ for arbitrary chord diagrams $C_1$ and $C_2$. As a corollary, for the empty chord diagram the value of the weight system is equal to~$1$;
\item If the diagram $C$ is obtained from a diagram $C'$ by adding one chord having no intersections with the chords of~$C'$, then we have
$$
w_{\mathfrak{sl}(2)}(C)=c\;w_{\mathfrak{sl}(2)}(C').
$$
\item If the diagram $C$ is obtained from a diagram $C'$ by adding one chord intersecting exactly one chord of~$C'$, then we have
$$
w_{\mathfrak{sl}(2)}(C)=(c-1)w_{\mathfrak{sl}(2)}(C').
$$
\item The $6$-term relations depicted in Fig.~\ref{fCVrr} hold.
\end{enumerate}
\end{theorem}
\begin{figure}[htbp]
\def\bepi#1{\makebox[33pt]{\unitlength=18pt
\begin{picture}(1.8,1.1)(-0.98,-0.2) #1
\end{picture}} }
\def\sctw#1#2#3#4{
\bezier{25}(-0.26,0.97)(0,1.035)(0.26,0.97)
\bezier{4}(0.26,0.97)(0.52,0.9)(0.71,0.71)
\bezier{#1}(0.71,0.71)(0.9,0.52)(0.97,0.26)
\bezier{4}(0.97,0.26)(1.035,0)(0.97,-0.26)
\bezier{#2}(0.97,-0.26)(0.9,-0.52)(0.71,-0.71)
\bezier{4}(0.71,-0.71)(0.52,-0.9)(0.26,-0.97)
\bezier{25}(0.26,-0.97)(0,-1.035)(-0.26,-0.97)
\bezier{4}(-0.26,-0.97)(-0.52,-0.9)(-0.71,-0.71)
\bezier{#3}(-0.71,-0.71)(-0.9,-0.52)(-0.97,-0.26)
\bezier{4}(-0.97,-0.26)(-1.035,0)(-0.97,0.26)
\bezier{#4}(-0.97,0.26)(-0.9,0.52)(-0.71,0.71)
\bezier{4}(-0.71,0.71)(-0.52,0.9)(-0.26,0.97)
}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(-0.174,0.985){\circle*{0.15} \put(0.866,0.5){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(-0.174,0.985){\circle*{0.15} \put(0.866,0.5){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{4}{25}{4}{25} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{4}{25}{4}{25} \put(-0.14,0.985){\circle*{0.15} \put(-0.866,0.5){\circle*{0.15}{\bepi{\sctw{4}{25}{4}{25} \put(-0.14,0.985){\circle*{0.15} \put(-0.866,0.5){\circle*{0.15}}
\put(0.866,-0.5){\circle*{0.15}} \put(0.866,-0.5){\line(-5,3){1.72}} }}
\def\bepi{\sctw{4}{25}{4}{25} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{4}{25}{4}{25} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,0.985){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{25}{4}{4} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}{\bepi{\sctw{25}{25}{4}{4} \put(-0.174,0.985){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(0.174,-0.985){\circle*{0.15} \put(0.174,0.985){\circle*{0.15}}}
\def\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}{\bepi{\sctw{25}{4}{25}{4} \put(0,1){\circle*{0.15}}\put(0,-1){\circle*{0.15} \put(-0.174,-0.985){\circle*{0.15} \put(-0.174,0.985){\circle*{0.15}}}
\begin{figure}
\centering
% The picture code for the six chord diagrams entering these relations
% was garbled in this copy and is omitted.
\caption{$6$-term relations for the weight system $\mathfrak{sl}(2)$}\label{fCVrr}
\end{figure}
As usual, it is assumed in these relations that, besides the chords shown, all diagrams participating in the relations contain other chords, the same for all six diagrams. The Chmutov--Varchenko relations suffice to compute the value of the weight system on an arbitrary chord diagram. Indeed, if relations 1--3 are not applicable to a given chord diagram, then the diagram necessarily contains a chord to which relation 4 can be applied. All diagrams obtained by applying the relations are simpler than the original one in the following sense: the two diagrams on the right-hand side have fewer chords than each of the diagrams on the left-hand side; all the diagrams on the left-hand side have the same number of chords, but the last three of them have fewer pairs of intersecting chords than the first one. This implies that the value of the weight system is computed in a unique way by applying these relations repeatedly, in finitely many steps.
The Chmutov--Varchenko relations can also be used for an axiomatic definition of the weight system $w_{\mathfrak{sl}(2)}$. With this approach, the equalities of the theorem are taken as the definition of the weight system. The assertion that the function is well defined then becomes a nontrivial statement, claiming that the result of the computation is independent of the order in which the Chmutov--Varchenko relations are applied. The $4$-term relation for the function so constructed is a corollary of the Chmutov--Varchenko relations, i.e., it is indeed a weight system.
\subsubsection{$\mathfrak{sl}(2)$-weight system for graphs}
The following result of Chmutov and Lando is specific to the Lie algebra $\mathfrak{sl}(2)$ and the weight system associated with it. Its analogue for, say, the Lie algebra $\mathfrak{sl}(3)$ does not hold.
\begin{theorem}[\cite{CL07}]\label{conj:int}
The value of the weight system $w_{\mathfrak{sl}(2)}$ on any chord diagram is uniquely determined by its intersection graph.
\end{theorem}
This theorem implies that the weight system $w_{\mathfrak{sl}(2)}$ defines a function on those graphs that are intersection graphs of chord diagrams. This function vanishes on the combinations of intersection graphs that are involved in $4$-term relations.
\begin{conjecture}[Lando]\label{csl2}
The weight system $w_{\mathfrak{sl}(2)}$ admits an extension to a function on graphs satisfying the $4$-term relation for graphs. This extension is unique.
\end{conjecture}
Existence of such an extension would mean that $w_{\mathfrak{sl}(2)}$ vanishes on any linear combination of intersection graphs that is a corollary of the $4$-term relations for graphs (such a linear combination is not necessarily a corollary of the $4$-term relations for chord diagrams). Uniqueness of such an extension is not specific to~$w_{\mathfrak{sl}(2)}$; it is related to the question of whether the mapping sending a chord diagram to its intersection graph is epimorphic, see Sect.~\ref{ss4tg}. Validity of the conjecture has been verified by computer for graphs with at most~$8$ vertices (E.~Krasilnikov,~\cite{K21}). Note that for the extension obtained by Krasilnikov the values of $w_{\mathfrak{sl}(2)}$ on some graphs with~$8$ vertices are not integers (none of these graphs is an intersection graph, since the Chmutov--Varchenko relations guarantee that the values of the weight system on intersection graphs are polynomials with integer coefficients).
There are also other pieces of evidence supporting Conjecture~\ref{csl2}. For instance, the required extension is known for the top coefficient of the value of~$w_{\mathfrak{sl}(2)}$ on the projection of a chord diagram to the subspace of primitive elements.
The value $w_{\mathfrak{sl}(2)}(\pi(D))$ of the weight system~$w_{\mathfrak{sl}(2)}$ on the projection of a chord diagram~$D$
with~$2n$ chords to the subspace of primitive elements is a polynomial of degree at most~$n$.
\begin{theorem}[\cite{KLMR14},~\cite{BNV15}]
The coefficient of $c^n$ in the value $w_{\mathfrak{sl}(2)}(\pi(D))$ of the weight system $w_{\mathfrak{sl}(2)}$ on the projection of a chord diagram~$D$
with~$2n$ chords to the subspace of primitive elements coincides with $2\log\nu(g(D))$, the doubled logarithm of the nondegeneracy of the intersection graph of the diagram (the logarithm is understood in the sense of convolution in the Hopf algebra).
\end{theorem}
Since the nondegeneracy of a graph is a $4$-invariant, we obtain a confirmation of the conjecture.
One of the possible approaches to the proof of Conjecture~\ref{csl2} consists in an attempt to construct a $4$-invariant of graphs taking values in the ring of polynomials in one variable whose values on intersection graphs coincide with those of $w_{\mathfrak{sl}(2)}$. In order to construct such an invariant, it is important to have a large stock of explicitly computed values of~$w_{\mathfrak{sl}(2)}$ on graphs: these values might indicate which characteristics of a graph are reflected in~$w_{\mathfrak{sl}(2)}$. In many ways, the results provided below are inspired by this problem.
In some respects, the weight system $w_{\mathfrak{sl}(2)}$ closely resembles the chromatic polynomial of a graph:
\begin{itemize}
\item it is multiplicative;
\item its value on a chord diagram is a polynomial in one variable whose degree equals the number of vertices of the intersection graph;
\item the leading coefficient of this polynomial is equal to~$1$;
\item the signs of the coefficients alternate;
\item the absolute value of the second coefficient of the polynomial is equal to the number of edges in the graph;
\item the free term of this polynomial is equal to zero if the diagram has at least one chord;
\item adding a leaf to a chord diagram multiplies the value of $w_{\mathfrak{sl}(2)}$ by $c-1$.
\end{itemize}
It is therefore worth knowing whether an analogue of a theorem of J.~Huh~\cite{H12}, proved by him for the chromatic polynomial, also holds for the weight system $w_{\mathfrak{sl}(2)}$:
\begin{problem}
Is it true that the sequence of absolute values of the coefficients of the value of the weight system $w_{\mathfrak{sl}(2)}$ is logarithmically concave for an arbitrary chord diagram? In particular, is it true that this sequence of coefficients is unimodal, that is, it first increases and then decreases?
\end{problem}
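For small diagrams the question is easy to examine directly. A quick numerical sketch in Python, checking log-concavity (which implies unimodality for positive sequences) and unimodality for the coefficient sequence of $w_{\mathfrak{sl}(2)}(K_4)=c^4-6c^3+13c^2-7c$ given below:

```python
# |coefficients| of w_sl2(K_4) = c^4 - 6c^3 + 13c^2 - 7c, highest degree first
coeffs = [1, 6, 13, 7]

# log-concavity: a_k^2 >= a_{k-1} * a_{k+1} for all interior entries
log_concave = all(coeffs[k]**2 >= coeffs[k - 1]*coeffs[k + 1]
                  for k in range(1, len(coeffs) - 1))

def is_unimodal(a):
    """True if the sequence first (weakly) increases and then decreases."""
    k = 0
    while k + 1 < len(a) and a[k] <= a[k + 1]:
        k += 1
    return all(a[j] >= a[j + 1] for j in range(k, len(a) - 1))

print(log_concave, is_unimodal(coeffs))  # True True
```

Both properties hold for this example; the general question, of course, remains open.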
Huh's proof relates the chromatic polynomial to the geometry of algebraic varieties. It would be interesting to find a similar relationship for the $\mathfrak{sl}(2)$ weight system.
\subsubsection{The values of the $\mathfrak{sl}(2)$ weight system on complete graphs}
The Chmutov--Varchenko relations allow one both to compute the values of the weight system~$w_{\mathfrak{sl}(2)}$ on specific chord diagrams and to obtain closed formulas for some infinite series of chord diagrams. In particular, consider the chord diagram~$K_n$ formed by~$n$ pairwise intersecting chords. The intersection graph of this diagram is the complete graph on~$n$ vertices. For complete graphs, the computation of one or another graph invariant usually causes no difficulty, owing to their high symmetry. However, for the weight system $w_{\mathfrak{sl}(2)}$ this problem proved to be surprisingly difficult, and the answer is rather unexpected.
The following statement was suggested as a conjecture by S.~Lando around 2015. It was partially proved (for the linear terms of the polynomials) in~\cite{B17}.
\begin{theorem}[\cite{Za22}]
The generating series for the values of the weight system associated with the Lie algebra~$\mathfrak{sl}(2)$ on complete graphs admits the following infinite continued fraction expansion:
\begin{eqnarray*}
\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}(K_n)t^n&=&1+ct+c(c-1)t^2+c(c-1)(c-2)t^3
+c(c^3-6c^2+13c-7)t^4+\dots\\
&=&\frac1{1+a_0t+\frac{b_1t^2}{1+a_1t+\frac{b_2t^2}{1+a_2t+\dots}}},
\end{eqnarray*}
where
$$
a_m=-c+m\,(m+1),\qquad b_m=m^2 c-\frac{m^2(m^2-1)}{4}.
$$
\end{theorem}
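The expansion can be checked symbolically by truncating the continued fraction at a finite depth; the truncation only affects orders in $t$ beyond those compared. A minimal sketch using sympy:

```python
import sympy as sp

c, t = sp.symbols('c t')

def cf_truncation(depth):
    """Finite truncation of the continued fraction with
    a_m = -c + m(m+1) and b_m = m^2 c - m^2 (m^2 - 1)/4 (tail set to 0)."""
    a = lambda m: -c + m*(m + 1)
    b = lambda m: m**2*c - sp.Rational(1, 4)*m**2*(m**2 - 1)
    tail = sp.Integer(0)
    for m in range(depth, 0, -1):
        tail = b(m)*t**2/(1 + a(m)*t + tail)
    return 1/(1 + a(0)*t + tail)

# Taylor coefficients of the truncated fraction at t = 0
series = sp.expand(sp.series(cf_truncation(4), t, 0, 5).removeO())
w_K = [series.coeff(t, n) for n in range(5)]
```

The coefficients `w_K` reproduce the values $1$, $c$, $c(c-1)$, $c(c-1)(c-2)$, $c(c^3-6c^2+13c-7)$ stated in the theorem.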
It is useful to compare this continued fraction expansion with a similar expansion of the generating function for the chromatic polynomials of complete graphs:
\begin{eqnarray*}
\sum_{n=0}^\infty \chi_{K_n}(c)t^n&=&1+ct+c(c-1)t^2+c(c-1)(c-2)t^3
+c(c-1)(c-2)(c-3)t^4+\dots\\
&=&\frac1{1+a_0t+\frac{b_1t^2}{1+a_1t+\frac{b_2t^2}{1+a_2t+\dots}}},
\end{eqnarray*}
where
$$
a_m=-c+2m,\qquad b_m=m\,c-m\,(m-1).
$$
The proof of this theorem given in~\cite{Za22} also contains the following efficient way of computing the values of the weight system from the theorem, which appears as an intermediate step of the computation. Consider the linear operator~$T$ acting on the space of polynomials in one variable~$x$, whose action is defined as follows:
$$
T(1)=x,\qquad T(x)=x^2-x,
$$
and also
$$
T(x^2f)=(2x-1)\,T(xf)+(2c-x-x^2)\,T(f)+(x-c)^2\,f
$$
for any polynomial~$f$.
\begin{theorem}[\cite{Za22}]
The value of the weight system~$w_{\mathfrak{sl}(2)}$ on the chord diagram $K_n$ with complete intersection graph is equal to the value of the polynomial $T^n(1)$ at the point $x=c$,
$$
w_{\mathfrak{sl}(2)}(K_n)=T^n(1)\bigm|_{x=c}.
$$
\end{theorem}
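This recursion is straightforward to implement symbolically. A minimal sketch in Python using sympy; the operator is extended linearly, with the inhomogeneous term of the recurrence applied as $(x-c)^2 f$, which reproduces the series coefficients listed above:

```python
import sympy as sp

x, c = sp.symbols('x c')

def T(f):
    """The operator T: T(1) = x, T(x) = x^2 - x, and for k >= 2
    T(x^k) = (2x-1) T(x^{k-1}) + (2c-x-x^2) T(x^{k-2}) + (x-c)^2 x^{k-2},
    extended linearly to polynomials in x (coefficients may involve c)."""
    p = sp.Poly(sp.expand(f), x)
    Tmono = [x, x**2 - x]                      # T(1), T(x)
    for k in range(2, p.degree() + 1):
        Tmono.append(sp.expand((2*x - 1)*Tmono[k - 1]
                               + (2*c - x - x**2)*Tmono[k - 2]
                               + (x - c)**2*x**(k - 2)))
    coeffs = p.all_coeffs()[::-1]              # coeffs[k] multiplies x^k
    return sp.expand(sum(a*Tmono[k] for k, a in enumerate(coeffs)))

def w_sl2_complete(n):
    """w_sl2(K_n) = T^n(1) evaluated at x = c."""
    f = sp.Integer(1)
    for _ in range(n):
        f = T(f)
    return sp.expand(f.subs(x, c))
```

For example, `w_sl2_complete(3)` expands to $c^3-3c^2+2c=c(c-1)(c-2)$, in agreement with the generating series for complete graphs.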
From the known values of the weight system $w_{\mathfrak{sl}(2)}$ on complete graphs it is easy to find its values on their projections to the subspace of primitive elements.
Indeed, polynomials in complete graphs form a Hopf subalgebra of the Hopf algebra of graphs~$\mathcal{G}$.
The order of the automorphism group of the complete graph~$K_n$ on~$n$ vertices is~$n!$,
which allows one to represent the exponential generating
function for the projections of complete graphs to the subspace of primitives as the logarithm of the exponential
generating function for complete graphs:
$$
\sum_{n=1}^\infty\pi(K_n)\frac{t^n}{n!}=
\log~\sum_{n=0}^\infty K_n\frac{t^n}{n!}.
$$
Applying $w_{\mathfrak{sl}(2)}$ to both sides of the identity and
substituting the values $w_{\mathfrak{sl}(2)}(K_n)$ we already know,
we obtain the first terms of the expansion:
\begin{eqnarray*}
\sum_{n=1}^\infty \bar w_{\mathfrak{sl}(2)}(K_n)\frac{t^n}{n!}&=&
c\frac{t}{1!}-c\frac{t^2}{2!}+2c\frac{t^3}{3!}+
(2c^2-7c)\frac{t^4}{4!}-(24c^2-38c)\frac{t^5}{5!}\\
&&-(16c^3-284c^2+295c)\frac{t^6}{6!}+\dots.
\end{eqnarray*}
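The first terms of this expansion can be reproduced symbolically from the values $w_{\mathfrak{sl}(2)}(K_n)$ listed earlier. A sketch with sympy; only orders up to $t^4$ are checked, since only the values for $K_0,\dots,K_4$ are hard-coded:

```python
import sympy as sp

c, t = sp.symbols('c t')

# w_sl2(K_n) for n = 0..4, as in the generating series for complete graphs
w = [1, c, c*(c - 1), c*(c - 1)*(c - 2), c*(c**3 - 6*c**2 + 13*c - 7)]

# Exponential generating function and its logarithm, truncated at order t^5
egf = sum(w[n]*t**n/sp.factorial(n) for n in range(5))
log_egf = sp.expand(sp.series(sp.log(egf), t, 0, 5).removeO())

# Coefficient of t^n/n! in the logarithm: value on the projection of K_n
primitive = [sp.expand(log_egf.coeff(t, n)*sp.factorial(n)) for n in range(1, 5)]
```

The list `primitive` gives $c$, $-c$, $2c$, $2c^2-7c$, matching the displayed expansion.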
\subsubsection{Algebra of shares}
\begin{definition}
A \emph{share} with $n$ chords is an ordered pair of
oriented intervals on which $2n$ pairwise distinct points
are given, split into~$n$ disjoint pairs, considered up to orientation-preserving
diffeomorphisms of each of the two intervals.
\end{definition}
For the intervals, we often take two arcs of the Wilson loop. In this case a share is the part of a chord diagram
formed by the chords both of whose ends belong to a chosen
pair of arcs, while none of the other chords has an end on these arcs.
If a pair of arcs of a chord diagram forms a share, then its complement is also a share.
In this case we call the chord diagram the
\emph{join} of the two shares and denote it by
$S_1\#S_2$, see Fig.~\ref{fig:sharejoin}.
In a join, the end of the first interval of the share~$S_1$
is attached to the beginning of the first interval in~$S_2$, the end of the first interval in~$S_2$ is
attached to the beginning of the second interval in~$S_1$,
the end of the second interval in~$S_1$
is attached to the beginning of the second interval in~$S_2$, and, finally, the end of the second interval in~$S_2$
is attached to the beginning of the first interval in~$S_1$. The join is a noncommutative operation.
\begin{figure}
\centering
\includegraphics[scale=.6]{pics/TwoShares.jpg}
\caption{Join of two shares}
\label{fig:sharejoin}
\end{figure}
To each share~$S$, one can associate a chord diagram
by attaching the end of each of the two intervals to the
beginning of the other interval, thus uniting the
intervals into a Wilson loop. We denote this chord diagram by~$\overline{S}$ and call it the \emph{closure}
of~$S$. We also call the intersection graph $g(\overline{S})$ the intersection graph of the share~$S$
and denote it by $g(S)$.
Two shares can be multiplied by attaching their intervals with coinciding numbers, see Fig.~\ref{fsp}. This
multiplication is noncommutative.
\begin{figure}[h]
\begin{center}
\begin{picture}(300,60)(10,40)
\thicklines
\put(40,90){\vector(-1,0){80}}
\put(-40,50){\vector(1,0){80}}
\put(-50,46){$1$}
\put(-50,86){$2$}
\put(0,50){\oval(30,30)[t]}
\put(0,50){\line(0,1){40}}
\put(-5,40){$S_1$}
\put(70,70){\circle*{4}}
\put(190,90){\vector(-1,0){80}}
\put(110,50){\vector(1,0){80}}
\put(100,46){$1$}
\put(100,86){$2$}
\put(160,90){\oval(30,30)[b]}
\put(160,90){\line(-1,-1){40}}
\put(130,90){\line(1,-1){40}}
\put(145,40){$S_2$}
\put(220,70){$=$}
\put(360,90){\vector(-1,0){100}}
\put(260,50){\vector(1,0){100}}
\put(250,46){$1$}
\put(250,86){$2$}
\put(280,50){\oval(30,30)[t]}
\put(280,50){\line(0,1){40}}
\put(340,90){\oval(30,30)[b]}
\put(340,90){\line(-1,-1){40}}
\put(300,90){\line(1,-1){40}}
\put(305,40){$S_1S_2$}
\end{picture}
\end{center}
\caption{Product of two shares}\label{fsp}
\end{figure}
The space spanned by isomorphism classes of shares
can be endowed with relations analogous to the Chmutov--Varchenko relations of Theorem~\ref{th:Chmutov-Varchenko}.
Namely, we can assume that all the diagrams taking part
in the relations are, in fact, shares. There is, however, a subtlety in defining relations~2 and~3 of the Theorem. Namely, suppose a share~$S$ is obtained from a share~$S'$
by adding a single chord both of whose ends belong to the same
interval, number~$i$, $i\in\{1,2\}$, and this chord intersects at most
one of the other chords. Then the relations for shares
have the form
\begin{equation}\label{e1t}
S=c_i\; S',\qquad S=(c_i-1)\,S',
\end{equation}
in the cases where the added chord intersects none
of the other chords, or exactly one of them, respectively.
Denote by $\mathcal{S}$ the quotient space of the vector space
spanned by all the shares modulo the subspace spanned
by the $6$-term relations and relations~(\ref{e1t}).
This quotient space is endowed with the algebra structure
induced by the share multiplication operation.
Note that since the $6$-term relations are not homogeneous, the algebra~$\mathcal{S}$ is filtered rather than graded.
\begin{proposition}
The algebra $\mathcal{S}$ is commutative and isomorphic to the
algebra of polynomials in the three generators $c_1,c_2,x$ shown below:
\begin{center}
\begin{picture}(300,60)(10,40)
\thicklines
\put(40,90){\vector(-1,0){80}}
\put(-40,50){\vector(1,0){80}}
\put(-50,46){$1$}
\put(-50,86){$2$}
\put(0,50){\oval(30,30)[t]}
\put(-5,40){$c_1$}
\put(190,90){\vector(-1,0){80}}
\put(110,50){\vector(1,0){80}}
\put(100,46){$1$}
\put(100,86){$2$}
\put(150,90){\oval(30,30)[b]}
\put(145,40){$c_2$}
\put(340,90){\vector(-1,0){80}}
\put(260,50){\vector(1,0){80}}
\put(250,46){$1$}
\put(250,86){$2$}
\put(300,90){\line(0,-1){40}}
\put(295,40){$x$}
\end{picture}
\end{center}
\end{proposition}
The proposition states that modulo $6$-term relations and relations~(\ref{e1t}), each share admits a unique representation
as a linear combination of the shares $1,x,x^2,x^3,\dots$,
where the share $x^n$ is formed by $n$ parallel chords
having ends on the two distinct intervals; the coefficients
in these linear combinations are polynomials in~$c_1$ and $c_2$.
One of the reasons for introducing the algebra $\mathcal{S}$
is the following remark.
\begin{proposition}
Suppose two shares (or two linear combinations of shares)
$S_1$ and $S_2$ represent the same element of $\mathcal{S}$.
Then, for an arbitrary share~$R$, the values of the
$\mathfrak{sl}(2)$ weight system on the chord diagrams obtained
by joining the shares $S_1$ and $S_2$,
respectively, with~$R$ coincide:
$$
w_{\mathfrak{sl}(2)}(S_1\#R)=w_{\mathfrak{sl}(2)}(S_2\#R).
$$
\end{proposition}
One can choose for the additional share~$R$, for example,
the tuple of~$n$ mutually parallel chords having ends on
distinct arcs, or the tuple of~$n$ pairwise intersecting
chords having ends on distinct arcs.
\subsubsection{Values of the $\mathfrak{sl}(2)$-weight system on
complete bipartite graphs}
Chord diagrams whose intersection graph is complete
bipartite form another wide class of chord diagrams,
for which the values of $w_{\mathfrak{sl}(2)}$ are given by explicit
formulas.
Denote by $K_{m,n}$ the chord diagram formed by~$m$
parallel horizontal chords and~$n$ parallel vertical
chords. The intersection graph of such a chord diagram
is the complete bipartite graph having~$m$ vertices in one part and~$n$ vertices in the other part.
Equivalently, $K_{m,n}$ is the join of two shares formed
by, respectively, $m$ and $n$ parallel chords, each having
ends on distinct arcs. For each~$m=0,1,2,3,\dots$,
consider the generating function for the values of the
weight system $w_{\mathfrak{sl}(2)}$ on the chord diagrams $K_{m,n}$,
$$
G_m=G_m(c,t)=\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}(K_{m,n})t^n.
$$
\begin{theorem}
The generating functions $G_{m}$ are subject to the relation
$$
\frac{G_{m}(t)-c^m}{t}=\sum_{i=0}^ms_{i,m}G_{i},\quad m=1,2,\dots,
$$
where the coefficients $s_{i,m}$ are given by the generating series
$$
\sum_{i,m=0}^\infty s_{i,m} x^i t^m=
\frac{1}{1-x\,t}
\Biggl(c+\frac{c^2t^2-x\,t}{(1-x\,t)^2+(1+x\,t)t-2\,c\,t^2} \Biggr)
$$
\end{theorem}
Note that $G_m$ enters the right-hand side of the equation
as well, with coefficient $s_{m,m}=c-\frac{m\,(m+1)}{2}$.
Moving this term to the left-hand side, we can make the recurrence more explicit:
$$
G_m=\frac{1}{1-s_{m,m}\,t}\Bigl(c^m+t\sum_{i=0}^{m-1}s_{i,m}\,G_i\Bigr).
$$
By induction, this implies
\begin{corollary}
The generating function $G_m$ can be represented as a finite linear combination of the geometric progressions
$\frac{1}{1-\left(c-\frac{i(i+1)}{2}\right)\,t}$, $i=0,1,\dots,m$. The coefficients of these geometric
progressions are polynomials in~$c$.
\end{corollary}
For small~$m$, the recurrence yields
\begin{align*}
G_0&= \frac{1}{1-c\,t},\\
G_1&=\frac{c}{1-(c-1)\,t},\\
G_2&=\frac{c^2}{3 (1-c\,t)}
+\frac{c}{2\,(1-(c-1)\,t)}
+\frac{c\,(4 c-3)}{6\,(1-(c-3)\,t)},\\
G_3&=\frac{c^2}{6\,(1-c\,t)}
+\frac{c\,(3\,c^2-2c+2)}{5\,(1-(c-1)\,t)}
+\frac{c\,(4 c-3)}{3\,(1-(c-3)\,t)}
+\frac{c\,(c-2)(4 c-3)}{10\,(1-(c-6)\,t)}.
\end{align*}
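The recurrence is easy to run symbolically. A minimal sketch with sympy that extracts the coefficients $s_{i,m}$ from their generating series and reproduces the expressions for $G_1$ and $G_2$ above:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')

# Generating series for the coefficients s_{i,m}
D = (1 - x*t)**2 + (1 + x*t)*t - 2*c*t**2
F = (c + (c**2*t**2 - x*t)/D)/(1 - x*t)

M = 2                                   # compute G_0, G_1, G_2
Fser = sp.expand(sp.series(F, t, 0, M + 1).removeO())
s = lambda i, m: Fser.coeff(t, m).coeff(x, i)   # s_{i,m}

G = {0: 1/(1 - c*t)}                    # w_sl2(K_{0,n}) = c^n
for m in range(1, M + 1):
    G[m] = sp.cancel((c**m + t*sum(s(i, m)*G[i] for i in range(m)))
                     / (1 - s(m, m)*t))
```

In particular, the extracted diagonal coefficient satisfies $s_{1,1}=c-1$, and the computed $G_1$, $G_2$ agree with the closed forms displayed above.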
The Corollary above admits the following generalization.
For an arbitrary share~$S$, denote by $(S,n)$
the chord diagram which is the join of~$S$ and the share~$x^n$ consisting of~$n$ parallel chords.
Introduce the generating function
$$
f_S=\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}((S,n))\, t^n.
$$
\begin{corollary}
The generating function $f_S$ can be represented as a finite
linear combination of geometric progressions of the form
$\frac{1}{1-\left(c-\frac{i(i+1)}{2}\right)\,t}$, $i=0,1,\dots,m$, where~$m$ is the number of those chords of~$S$ whose ends belong to distinct arcs. The coefficients
of these progressions are polynomials in~$c$.
\end{corollary}
Indeed, if a share $S$ can be represented, modulo $6$-term relations and relations~(\ref{e1t}), as a linear
combination of the basis shares $x^k$,
$$
S=\sum_{k=0}^m a_k x^k,
$$
then the series $f_S$ can be expressed as the linear combination of the series
$$
\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}((x^k,n))\, t^n
=
\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}(K_{k,n})\, t^n
=G_k,
$$
with the same coefficients, that is,
$$
f_S=\sum_{k=0}^m a_k G_k,
$$
and we can apply the Corollary above.
Similarly to the case of complete graphs,
polynomials in complete bipartite graphs form a Hopf
subalgebra in~$\mathcal{G}$. There are formulas for the values
of the weight system $w_{\mathfrak{sl}(2)}$ on the projections
of complete bipartite graphs to the subspace of primitives,
which generalize the formula for the logarithm of the
exponential generating function for complete graphs, see
details in~\cite{F22}.
\subsubsection{Values of the $\mathfrak{sl}(2)$-weight system
on graphs that are not intersection graphs}
Not every graph is the intersection graph of a chord diagram, but using the $4$-term relations for graphs one can
try to express it in terms of intersection graphs and thereby extend the $\mathfrak{sl}(2)$-weight system to it. The formulas from
the previous section allow one to obtain such an extension
not only for specific graphs, but also for infinite
sequences of graphs. For an arbitrary graph~$G$, denote by
$(G,n)$ the graph obtained by adding~$n$ new pairwise nonadjacent vertices to~$G$ and connecting each of the added vertices to each of the vertices of~$G$. Now, take for~$G$ the graph
$C_5=\raisebox{-0.2ex}{\includegraphics[scale=.35]{pics/C5.jpg}}$, the cycle on~$5$ vertices. For $n\ge1$, the graph $(C_5,n)$ is not an intersection graph.
\begin{proposition}[\cite{F20}]
Suppose Conjecture~\ref{csl2} about extendability of the
$\mathfrak{sl}(2)$-weight system to the space of graphs is valid.
Then the values of an extended weight system on the graphs
$(C_5,n)$ are given by the generating function
\begin{align*}
\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}((C_5,n))\,t^n\hskip-5em&\\
&=(c+5) c^2\,G_0-2 (c-1) (7 c+3)\,G_1+(5 c^2-6 c-26)\, G_2+29\,G_3-10\,G_4+G_5\\
&=\tfrac{(30 c^4-60 c^3-111 c^2+64 c+36) c}{70 (1-(c -1)t)}
+\tfrac{(c-2) (5 c^2-15 c+9) (4 c-3) c}{45 (1-(c -6) t)}
+\tfrac{(c-6) (c-2) (4 c-15) (4 c-3) c}{126 (1-(c-15) t)},
\end{align*}
where $G_m=\sum_{n=0}^\infty w_{\mathfrak{sl}(2)}(K_{m,n})\,t^n$ are the generating functions for the values of the
$\mathfrak{sl}(2)$-weight system on complete bipartite graphs.
\end{proposition}
\begin{figure}
\centering
\begin{picture}(350,80)
\put(0,10){\includegraphics[scale = 0.8]{pics/PentEquation2.pdf}}
\put(30,0){$(C_5,n)$}
\put(110,0){$G_{1,n}$}
\put(195,0){$G_{2,n}$}
\put(270,0){$G_{3,n}$}
\end{picture}
\caption{A $4$-term relation for the graphs $(C_5,n)$.
All the graphs in the right hand side are intersection graphs} \label{pic:4term5wheel}
\end{figure}
The proof is achieved by applying
to the graphs $(C_5,n)$ the $4$-term relation shown in Fig.~\ref{pic:4term5wheel},
$$
(C_5,n)=G_{1,n}-G_{2,n}+G_{3,n}.
$$
Each of the graphs in the right hand side is an intersection graph. Moreover, it is the intersection graph
of a chord diagram of the form $(S,n)$, for some share $S$,
$$
G_{1,n}=g\Biggl(\raisebox{-3ex}{\includegraphics[scale=0.3]{pics/S1.jpg}},n\Biggr),
\qquad G_{2,n}=g\Biggl(\raisebox{-3ex}{\includegraphics[scale=0.3]{pics/S2.jpg}},n\Biggr),
\qquad G_{3,n}=g\Biggl(\raisebox{-3ex}{\includegraphics[scale=0.3]{pics/S3.jpg}},n\Biggr).
$$
By applying the argument from the previous section, we
can express each of the shares obtained in terms of the
basic shares $x^m$, $m=0,1,\dots,5$, and express in this
way the generating function for the graphs
$(C_5,n)$ in terms of the corresponding generating functions for complete bipartite graphs.
\subsection{$\mathfrak{gl}(N)$-weight system}
Until recently, no recursion similar to the Chmutov--Varchenko one was known for Lie algebras
other than $\mathfrak{sl}(2)$, and essentially the only known way of computing the values of the weight system associated
to a given Lie algebra was the straightforward application of the definition, which is extremely laborious. Here we explain the results of the recent paper~\cite{ZY22} devoted to the computation of the weight
systems associated to the Lie algebras $\mathfrak{gl}(N)$.
For the basis in $\mathfrak{gl}(N)$, one can choose the matrix units
$E_{ij}$, which have~$1$ at the intersection of the
$i$th row and the $j$th column and zeroes elsewhere. The commutator of two matrix units has the form $[E_{ij},E_{kl}]=\delta_{jk}E_{il}-\delta_{il}E_{kj}$.
The standard invariant scalar product is given by
$\langle x,y\rangle=\mathop{\rm Tr}(xy)$,
and the dual basis with respect to this scalar product is
$E_{ij}^*=E_{ji}$. The center $ZU(\mathfrak{gl}(N))$ is
generated freely by the \emph{Casimir elements} $C_1,C_2,\dots,C_N$, where $C_{1}=\sum_{i=1}^NE_{ii}$, $C_{2}=\sum_{i,j=1}^NE_{ij}E_{ji}$,
and, more generally,
$$
C_{k}=\sum_{i_1,i_2,\dots,i_k=1}^NE_{i_1i_2}E_{i_2i_3}\dots E_{i_k i_1}.
$$
Casimir elements with numbers greater than~$N$ are
defined similarly. They also belong to the center, but they can be expressed as polynomials in the Casimir elements with numbers at most~$N$.
Thus, the value of the weight system associated to the Lie algebra $\mathfrak{gl}(N)$ is a polynomial in the Casimir elements.
For example, it is easy to see that for the chord diagram~$K_1$, which consists of a single chord, we have
$w_{\mathfrak{gl}(N)}(K_1)=C_2$.
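These matrix-unit identities are easy to verify numerically. The following sketch (using NumPy; the helper name \texttt{E} is ours, and $1$-based indices are translated to $0$-based array positions) checks the commutator relation $[E_{ij},E_{kl}]=\delta_{jk}E_{il}-\delta_{il}E_{kj}$ and the duality $\mathop{\rm Tr}(E_{ij}E_{kl})=\delta_{jk}\delta_{il}$ for a small~$N$.

```python
import numpy as np

def E(i, j, N):
    """Matrix unit E_{ij}: entry 1 in row i, column j (1-based), zeros elsewhere."""
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

N = 3
rng = range(1, N + 1)
d = lambda a, b: 1.0 if a == b else 0.0  # Kronecker delta

# [E_ij, E_kl] = delta_{jk} E_il - delta_{il} E_kj for all index choices
ok_comm = all(
    np.array_equal(E(i, j, N) @ E(k, l, N) - E(k, l, N) @ E(i, j, N),
                   d(j, k) * E(i, l, N) - d(i, l) * E(k, j, N))
    for i in rng for j in rng for k in rng for l in rng)

# Tr(E_ij E_kl) = delta_{jk} delta_{il}, so E_ji is dual to E_ij
ok_dual = all(
    np.trace(E(i, j, N) @ E(k, l, N)) == d(j, k) * d(i, l)
    for i in rng for j in rng for k in rng for l in rng)
```

Both checks pass for any~$N$; here $N=3$ keeps the loops small.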
\begin{theorem}[\cite{ZY22}]\label{th:wgl}
There is a universal weight system, which we denote by
$w_{\mathfrak{gl}}$, taking values in the ring of polynomials in
infinitely many variables $N,C_1,C_2,\dots$,
such that substituting any given positive integer for~$N$
yields the value of the $\mathfrak{gl}(N)$ weight system.
\end{theorem}
It is easy to see that for each chord diagram~$C$ with~$n$ chords the value of the weight system $w_{\mathfrak{gl}(N)}(C)$, for $N\ge n$, is a polynomial in $C_1,\dots,C_{n}$.
The theorem asserts that the coefficients of this polynomial are polynomials in~$N$. Moreover, the universal formula for $w_{\mathfrak{gl}(N)}(C)$ we obtain remains valid for $N< n$ as well.
\begin{example}
For the chord diagram $K_2$, which is formed by two intersecting chords, we have
$$
w_{\mathfrak{gl}(N)}(K_2)=\sum_{i,j,k,l=1}^NE_{ij}E_{kl}E_{ji}E_{lk}
=C_2^2+C_1^2-N\, C_2=w_{\mathfrak{gl}}(K_2).
$$
\end{example}
Below, we define the weight system $w_{\mathfrak{gl}}$ for arbitrary permutations and give a recurrence relation for its computation.
\subsubsection{$\mathfrak{gl}(N)$-weight system on permutations and recurrence relations}
Let $m$ be a positive integer, and let $S_m$ be the group of all permutations of the elements $\{1,2,\dots,m\}$;
for an arbitrary $\sigma\in S_m$, we set
$$
w_{\mathfrak{gl}(N)}(\sigma)=\sum_{i_1,\dots,i_m=1}^NE_{i_1i_{\sigma(1)}}E_{i_2i_{\sigma(2)}}\dots E_{i_mi_{\sigma(m)}}\in U(\mathfrak{gl}(N)).
$$
The element thus constructed belongs, in fact, to the center
$ZU(\mathfrak{gl}(N))$. In addition, it is invariant under conjugation by the cyclic permutation,
$$
w_{\mathfrak{gl}(N)}(\sigma)=\sum_{i_1,\dots,i_m=1}^N E_{i_2i_{\sigma(2)}}\dots E_{i_mi_{\sigma(m)}}E_{i_1i_{\sigma(1)}}.
$$
For example, the Casimir element $C_m$ corresponds to the cyclic permutation
$1\mapsto2\mapsto\dots\mapsto m\mapsto 1$.
On the other hand, any arc diagram with~$n$ arcs can be considered as an involution without fixed points on the
set of $m=2n$ elements. The value of $w_{\mathfrak{gl}(N)}$ on a
permutation given by this involution coincides with
the value of the $\mathfrak{gl}(N)$-weight system on the corresponding chord diagram. For example, for the chord
diagram~$K_n$ we have $w_{\mathfrak{gl}(N)}(K_n)=w_{\mathfrak{gl}(N)}((1~n{+}1)(2~n{+}2)\dots(n~2n))$.
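As a consistency check, one can evaluate the image of $w_{\mathfrak{gl}(N)}(\sigma)$ in the defining representation of $\mathfrak{gl}(N)$, where each $E_{ij}$ acts as the corresponding matrix unit. This tests only the image of the central element in one representation (distinct central elements may act identically there), but it is a quick sanity check of the definition. A sketch in Python with NumPy, encoding permutations of $\{0,\dots,m-1\}$ as lists (the function name is ours):

```python
import itertools
import numpy as np

def E(i, j, N):
    """Matrix unit with 0-based indices."""
    m = np.zeros((N, N))
    m[i, j] = 1.0
    return m

def w_gl_defining(sigma, N):
    """Image of w_{gl(N)}(sigma) in the defining representation:
    the sum over all index tuples of E_{i_1 i_sigma(1)} ... E_{i_m i_sigma(m)}."""
    m = len(sigma)
    total = np.zeros((N, N))
    for idx in itertools.product(range(N), repeat=m):
        prod = np.eye(N)
        for k in range(m):
            prod = prod @ E(idx[k], idx[sigma[k]], N)
        total += prod
    return total

N = 2
# The cyclic permutation 0 -> 1 -> 2 -> 0 represents the Casimir C_3,
# which acts as N^{m-1} times the identity in the defining representation.
c3 = w_gl_defining([1, 2, 0], N)       # 4 * identity for N = 2

# The fixed-point-free involution (1 3)(2 4) represents the chord diagram K_2,
# so the result must agree with C_2^2 + C_1^2 - N C_2 evaluated in the same
# representation (C_1 -> I, C_2 -> N I), i.e. with the identity matrix.
c_k2 = w_gl_defining([2, 3, 0, 1], N)
```

Agreement in the defining representation is necessary but not sufficient, so this is a spot check rather than a verification of the universal polynomial.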
Any permutation can be represented by a directed graph.
The~$m$ vertices of this graph correspond to the permuted elements. They are situated along the oriented circle and are ordered cyclically in the direction of the circle.
Directed edges of the graph show the action of the permutation (so that at each vertex there is one incoming and one outgoing edge). The directed graph
$G(\sigma)$ consists of these $m$ vertices and $m$ edges, for example,
\[
G(\left(1\ n+1)(2\ n+2)\cdots(n\ 2n)\right)=\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-2,0)--(2,0);
\draw[blue,postaction={decorate}] (-1.8,0) .. controls (-1,.5) ..(.2,0);
\draw[blue,postaction={decorate}] (-1.4,0) .. controls (-.6,.5) ..(.6,0);
\draw[blue,postaction={decorate}] (-.2,0) .. controls (1,.5) ..(1.8,0);
\draw[blue,postaction={decorate}] (.2,0) .. controls (-1,-.5) ..(-1.8,0);
\draw[blue,postaction={decorate}] (.6,0) .. controls (-.6,-.5) ..(-1.4,0);
\draw[blue,postaction={decorate}] (1.8,0) .. controls (1,-.5) ..(-.2,0);
\fill[black] (-1.8,0) circle (1pt) node[below] {\tiny 1};
\fill[black] (-1.4,0) circle (1pt) node[below] {\tiny 2};
\fill[black] (-0.2,0) circle (1pt) node[below] {\tiny n};
\fill[black] ( .15,0) circle (1pt) node[below] {\tiny n+1};
\fill[black] ( .6,0) circle (1pt) node[below]{\tiny n+2};
\fill[black] ( 1.8,0) circle (1pt) node[below] {\tiny 2n};
\node[below] at (-.8,0) {$\cdots$};
\node[below] at (1.2,0) {$\cdots$};
\end{tikzpicture}
\]
(the circle containing the vertices of the graph is shown as the horizontal line).
\begin{theorem}[\cite{ZY22}]
The value of the weight system $w_{\mathfrak{gl}(N)}$ on a permutation possesses the following properties:
\begin{itemize}
\item the weight system $w_{\mathfrak{gl}(N)}$
is multiplicative with respect to the connected sum
(concatenation) of permutations. In particular, the value of $w_{\mathfrak{gl}(N)}$ on the empty graph (having zero vertices)
is~$1$.
\item For the cyclic permutation whose cyclic order is consistent with the permutation the value of $w_{\mathfrak{gl}(N)}$ coincides with the corresponding Casimir element, $w_{\mathfrak{gl}(N)}(1\mapsto2\mapsto\dots\mapsto m\mapsto 1)=C_m$.
\item For an arbitrary permutation $\sigma\in S_m$,
and for any two neighboring elements $k$, $k+1$
in the set of vertices $\{1,2,\dots,m\}$,
the value of the invariant $w_{\mathfrak{gl}(N)}$ satisfies the identity
\begin{multline*}
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (-.3,0) circle (1pt) node[below] {\tiny k};
\fill[black] (.3,0) circle (1pt) node[below] {\tiny k+1};
\draw (-.5,.8) node[left] {a};
\draw (-.5,-.8) node[left] {b};
\draw (.5,.8) node[right] {c};
\draw (.5,-.8) node[right] {d};
\draw[blue,postaction={decorate}] (-.5,.8) -- (.3,0);
\draw[blue,postaction={decorate}] (-.3,0) -- (.5,.8);
\draw[blue,postaction={decorate}] (-.5,-.8) -- (-.3,0);
\draw[blue,postaction={decorate}] (.3,0) -- (.5,-.8);
\end{tikzpicture}\right)-
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (.3,0) circle (1pt) node[below] {\tiny k+1};
\fill[black] (-.3,0) circle (1pt) node[below] {\tiny k};
\draw (-.5,.8) node[left] {a};
\draw (-.5,-.8) node[left] {b};
\draw (.5,.8) node[right] {c};
\draw (.5,-.8) node[right] {d};
\draw[blue,postaction={decorate}] (-.5,.8) -- (-.3,0);
\draw[blue,postaction={decorate}] (.3,0) -- (.5,.8);
\draw[blue,postaction={decorate}] (-.5,-.8) -- (.3,0);
\draw[blue,postaction={decorate}] (-.3,0) -- (.5,-.8);
\end{tikzpicture}\right)\\=
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (0,0) circle (1pt) node[above] {\tiny k'};
\draw (-.5,.8) node[left] {a};
\draw (-.5,-.8) node[left] {b};
\draw (.5,.8) node[right] {c};
\draw (.5,-.8) node[right] {d};
\draw[blue,postaction={decorate}] (-.5,.8) ..controls (0,.4) .. (.5,.8);
\draw[blue,postaction={decorate}] (-.5,-.8) -- (0,0);
\draw[blue,postaction={decorate}] (0,0) -- (.5,-.8);
\end{tikzpicture}\right)-
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (0,0) circle (1pt) node[below] {\tiny k'};
\draw (-.5,.8) node[left] {a};
\draw (-.5,-.8) node[left] {b};
\draw (.5,.8) node[right] {c};
\draw (.5,-.8) node[right] {d};
\draw[blue,postaction={decorate}] (-.5,-.8) ..controls (0,-.4) .. (.5,-.8);
\draw[blue,postaction={decorate}] (-.5,.8) -- (0,0);
\draw[blue,postaction={decorate}] (0,0) -- (.5,.8);
\end{tikzpicture}\right)
\end{multline*}
The graphs on the left hand side of the identity show two neighboring vertices and the edges incident to them.
In the graphs on the right hand side, these two vertices are replaced with a single one. All the other vertices and the (semi)edges are the same for all the four graphs
participating in the identity.
In the exceptional case $\sigma(k+1)=k$, the identity acquires the form
\begin{multline*}
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (-.3,0) circle (1pt) node[below] {\tiny k};
\fill[black] (.3,0) circle (1pt) node[below] {\tiny k+1};
\draw (-.5,.8) node[left] {a};
\draw (.5,.8) node[right] {b};
\draw[blue,postaction={decorate}] (-.5,.8) -- (.3,0);
\draw[blue,postaction={decorate}] (-.3,0) -- (.5,.8);
\draw[blue,postaction={decorate}] (.3,0) ..controls(0,-.3).. (-.3,0);
\end{tikzpicture}\right)-
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (.3,0) circle (1pt) node[below] {\tiny k+1};
\fill[black] (-.3,0) circle (1pt) node[below] {\tiny k};
\draw (-.5,.8) node[left] {a};
\draw (.5,.8) node[right] {b};
\draw[blue,postaction={decorate}] (-.5,.8) -- (-.3,0);
\draw[blue,postaction={decorate}] (.3,0) -- (.5,.8);
\draw[blue,postaction={decorate}] (-.3,0) ..controls(0,-.3).. (.3,0);
\end{tikzpicture}\right)\\
=C_1\times
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\draw (-.5,.8) node[left] {a};
\draw (.5,.8) node[right] {b};
\draw[blue,postaction={decorate}] (-.5,.8) ..controls (0,.4) .. (.5,.8);
\end{tikzpicture}\right)
-N\times
w_{\mathfrak{gl}(N)}\left(\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},decoration={markings, mark= at position .55 with {\arrow{stealth}}}]
\draw[->,thick] (-1,0) -- (1,0);
\fill[black] (0,0) circle (1pt) node[above] {\tiny k'};
\draw (-.5,.8) node[left] {a};
\draw (.5,.8) node[right] {b};
\draw[blue,postaction={decorate}] (-.5,.8) -- (0,0);
\draw[blue,postaction={decorate}] (0,0) -- (.5,.8);
\end{tikzpicture}\right)
\end{multline*}
\end{itemize}
\end{theorem}
By applying the identity in the theorem, each graph can be reduced to a monomial in the generators
$C_k$ (that is, to a concatenation of independent cycles) modulo
graphs with a smaller number of vertices. This leads to an inductive computation of the values of the invariant $w_{\mathfrak{gl}(N)}$.
\begin{corollary}
The value of $w_{\mathfrak{gl}(N)}$ on an arbitrary permutation belongs to the center $ZU(\mathfrak{gl}(N))$ of the universal enveloping algebra and is a polynomial in $N,C_1,C_2,\dots$;
this polynomial is universal (one and the same for all
the Lie algebras $\mathfrak{gl}(N)$).
\end{corollary}
We denote the mapping taking a permutation to this universal polynomial by $w_{\mathfrak{gl}}$. Theorem~\ref{th:wgl}
is the special case of this Corollary in which the permutation is an involution without fixed points.
Below, we give a table of values of $w_{\mathfrak{gl}}$ on the chord diagrams $K_n$ for the first few values of~$n$.
\begin{align*}
w_{\mathfrak{gl}}(K_2)&=C_1^2+C_2^2-N\,C_2,\\
w_{\mathfrak{gl}}(K_3)&=C_2^3+3 C_1^2 C_2+2 C_2 N^2+(-2 C_1^2-3 C_2^2) N,\\
w_{\mathfrak{gl}}(K_4)&=
3 C_1^4-4 C_1^3+6 C_2^2 C_1^2+2 C_1^2-8 C_3 C_1+C_2^4+6 C_2^2+(-6 C_2^3-14 C_1^2 C_2+6 C_1 C_2-2 C_2+2 C_4) N
\\&\qquad
+(6 C_1^2+11 C_2^2-2 C_3) N^2-6 C_2 N^3,\\
w_{\mathfrak{gl}}(K_5)&=
C_2^5+10 C_1^2 C_2^3+30 C_2^3+15 C_1^4 C_2-20 C_1^3 C_2+10 C_1^2 C_2-40 C_1 C_3 C_2+(-20 C_1^4+48 C_1^3
\\&\qquad
-50 C_2^2 C_1^2-32 C_1^2+30 C_2^2 C_1+96 C_3 C_1-10 C_2^4-82 C_2^2+10 C_2 C_4) N+(35 C_2^3+70 C_1^2 C_2
\\&\qquad
-72 C_1 C_2-10 C_3 C_2+32 C_2-24 C_4) N^2+(-24 C_1^2-50 C_2^2+24 C_3) N^3+24 C_2 N^4
\end{align*}
Similarly to the case of $\mathfrak{sl}(2)$, the values of the weight system $w_{\mathfrak{gl}(N)}$ on projections of complete graphs
to the subspace of primitives can be computed by taking
the logarithm of the corresponding exponential generating function.
\subsubsection{Computing the $\mathfrak{gl}(N)$-weight system\\ by means of the Harish-Chandra isomorphism}
In this section we describe one more way, suggested in~\cite{ZY22}, to compute the $\mathfrak{gl}(N)$ weight system.
The algebra $U(\mathfrak{gl}(N))$ admits the decomposition into the direct sum
\begin{equation}\label{HCd}
U(\mathfrak{gl}(N))=(\mathfrak{n}_-U(\mathfrak{gl}(N))+U(\mathfrak{gl}(N))\mathfrak{n}_+)\oplus U(\mathfrak{h}),
\end{equation}
where $\mathfrak{n}_-$, $\mathfrak{h}$ and $\mathfrak{n}_+$ are, respectively, the subalgebras of lower-triangular, diagonal, and upper-triangular matrices in $\mathfrak{gl}(N)$.
\begin{definition}
The \emph{Harish-Chandra projection} is the linear projection to the second summand in~\eqref{HCd},
$$
\phi:U(\mathfrak{gl}_N)\to U(\mathfrak{h})=\mathbb{C}[E_{11},\cdots,E_{NN}].
$$
\end{definition}
\begin{theorem}[Harish-Chandra isomorphism, \cite{O91}]
The restriction of the Harish-Chandra projection to the center $ZU(\mathfrak{gl}_N)$ is an injective homomorphism of
commutative algebras, and its image consists of the
symmetric functions in the shifted matrix units $x_i=E_{ii}+N-i$, $i=1,\dots,N$.
\end{theorem}
The isomorphism of the theorem allows one to identify the codomain
$\mathbb{C}[C_1,C_2,\dots,C_N]$ of the weight system $w_{\mathfrak{gl}(N)}$
with the ring of symmetric functions in the generators
$x_1,\dots,x_N$. The generators $C_k$ under this isomorphism admit the following explicit expression:
$$
1-N\,u-\sum_{k=1}^\infty\phi(C_k)\,u^{k+1}=\prod_{i=1}^N\frac{1-(x_i+1)\,u}{1-x_iu}.
$$
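This generating series can be checked symbolically for a small~$N$. The sketch below (SymPy, for $N=2$; the variable names are ours) expands the right-hand side and extracts $\phi(C_1)$ and $\phi(C_2)$; in particular $\phi(C_1)=E_{11}+E_{22}$, as it must be, since $C_1$ already lies in $U(\mathfrak{h})$.

```python
import sympy as sp

u, E11, E22 = sp.symbols('u E11 E22')
N = 2
xs = [E11 + N - 1, E22 + N - 2]          # shifted matrix units x_i = E_ii + N - i

rhs = sp.prod((1 - (x + 1) * u) / (1 - x * u) for x in xs)
series = sp.expand(sp.series(rhs, u, 0, 4).removeO())

# Matching against 1 - N u - sum_k phi(C_k) u^{k+1}:
phi_C1 = sp.expand(-series.coeff(u, 2))   # equals E11 + E22
phi_C2 = sp.expand(-series.coeff(u, 3))
```

The coefficient of $u$ comes out as $-N$, confirming the normalization of the left-hand side.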
Computation of the values of the $\mathfrak{gl}(N)$ weight system on a given chord diagram by means of the Harish-Chandra isomorphism
consists in applying the projection $\phi$ to the monomials entering the definition of the weight system one by one and collecting their contributions to the final value. The total
contribution is obtained by summing up elements of a commutative ring. As a result, one can considerably decrease the required amount of random-access memory.
Nevertheless, the required time grows rather rapidly as~$N$ grows, since the number of monomials entering the definition, which is $N^{2n}$ for a chord diagram with~$n$ chords, also grows. Remark also that the Harish-Chandra
projection can only be applied to computing the
$w_{\mathfrak{gl}(N)}$ weight system for a specific~$N$; it does not lead to a universal (polynomial) dependence of its values on~$N$.
\subsection{$\mathfrak{sl}(N)$ weight system}
The Lie algebra $\mathfrak{gl}(N)$ is not simple: it is the direct sum of the simple Lie algebra
$\mathfrak{sl}(N)$ and a one-dimensional abelian Lie algebra.
Hence, the center of the universal enveloping algebra of
$\mathfrak{gl}(N)$ is the tensor product of the centers of the universal enveloping algebras of $\mathfrak{sl}(N)$ and of $\mathbb{C}$,
and it can be identified with the ring of polynomials
in $C_1$ with coefficients in $ZU(\mathfrak{sl}(N))$.
Therefore, the values of the weight system $w_{\mathfrak{sl}(N)}$
can be obtained from that of $w_{\mathfrak{gl}(N)}$ by setting $C_1=0$ and $C_k=\tilde C_k$, $k\ge2$, where $\tilde C_k$
is the projection of the corresponding Casimir element in
$ZU(\mathfrak{gl}(N))$ to $ZU(\mathfrak{sl}(N))$. The result of this substitution is a polynomial in $\tilde C_2,\tilde C_3,\dots$. More explicitly, set $\tilde E_{i,j}=E_{i,j}-\delta_{i,j}N^{-1}C_1\in\mathfrak{sl}(N)\subset\mathfrak{gl}(N)$. Then
$$
\tilde C_{k}=\sum_{i_1,i_2,\dots,i_k=1}^N\tilde E_{i_1i_2}\tilde E_{i_2i_3}\dots \tilde E_{i_k i_1}=\sum_{i=0}^k\binom{k}{i}(-1)^iC_{k-i}\left(\tfrac{C_1}{N}\right)^i,
$$
where we set $C_0=N$ and
$$
ZU(\mathfrak{sl}(N))=\mathbb{C}[\tilde C_2,\dots,\tilde C_N]\subset
ZU(\mathfrak{gl}(N))=\mathbb{C}[C_1,\dots,C_N].
$$
Alternatively, the weight system $w_{\mathfrak{sl}(N)}$ can be computed by extending it to permutations by means of recurrence relations similar to those for
$\mathfrak{gl}(N)$: just replace $w_{\mathfrak{gl}(N)}$ with $w_{\mathfrak{sl}(N)}$ and $C_k$ with $\tilde C_k$, and set $\tilde C_1=0$. Similarly to the $w_{\mathfrak{gl}}$ weight system, the result
is a universal polynomial in $N,\tilde C_2,\tilde C_3,\dots$, which we denote by $w_{\mathfrak{sl}}$. Since the number
of generators is smaller, computation of
$w_{\mathfrak{sl}(N)}$ is more efficient, and the answer is more compact.
For example, for the chord diagrams $K_n$ we obtain
\begin{align*}
w_{\mathfrak{sl}}(K_2)&=\tilde C_2^2-\tilde C_2 N,\\
w_{\mathfrak{sl}}(K_3)&=\tilde C_2^3-3 \tilde C_2^2 N+2 \tilde C_2 N^2,\\
w_{\mathfrak{sl}}(K_4)&=\tilde C_2^4+6 \tilde C_2^2-2 (3 \tilde C_2^3+\tilde C_2-\tilde C_4) N+(11 \tilde C_2^2-2 \tilde C_3) N^2-6 \tilde C_2 N^3,\\
w_{\mathfrak{sl}}(K_5)&=
\tilde C_2^5+30 \tilde C_2^3-2 (5 \tilde C_2^4+41 \tilde C_2^2-5 \tilde C_4 \tilde C_2) N+(35 \tilde C_2^3-10 \tilde C_3 \tilde C_2+32 \tilde C_2-24 \tilde C_4) N^2
\\&\qquad
-2 (25 \tilde C_2^2-12 \tilde C_3) N^3+24 \tilde C_2 N^4,\\
w_{\mathfrak{sl}}(K_6)&=
\tilde C_2^6+90 \tilde C_2^4+264 \tilde C_2^2-240 \tilde C_4 \tilde C_2+160 \tilde C_3^2+(-15 \tilde C_2^5-552 \tilde C_2^3+30 \tilde C_4 \tilde C_2^2+64 \tilde C_3 \tilde C_2-72 \tilde C_2
\\&\qquad
+88 \tilde C_4-16 \tilde C_6) N+(85 \tilde C_2^4-30 \tilde C_3 \tilde C_2^2+1014 \tilde C_2^2-174 \tilde C_4 \tilde C_2-88 \tilde C_3+32 \tilde C_5) N^2
\\&\qquad
+(-225 \tilde C_2^3+174 \tilde C_3 \tilde C_2-416 \tilde C_2+224 \tilde C_4) N^3+2 (137 \tilde C_2^2-120 \tilde C_3) N^4-120 \tilde C_2 N^5,\\
w_{\mathfrak{sl}}(K_7)&=
\tilde C_2^7+210 \tilde C_2^5+3192 \tilde C_2^3-1680 \tilde C_4 \tilde C_2^2+1120 \tilde C_3^2 \tilde C_2+(-21 \tilde C_2^6-2212 \tilde C_2^4+70 \tilde C_4 \tilde C_2^3+448 \tilde C_3 \tilde C_2^2
\\&\qquad
-10680 \tilde C_2^2+7432 \tilde C_4 \tilde C_2-112 \tilde C_6 \tilde C_2-4096 \tilde C_3^2) N+(175 \tilde C_2^5-70 \tilde C_3 \tilde C_2^3+8358 \tilde C_2^3-714 \tilde C_4 \tilde C_2^2
\\&\qquad
-2792 \tilde C_3 \tilde C_2+224 \tilde C_5 \tilde C_2+3456 \tilde C_2-3392 \tilde C_4+544 \tilde C_6) N^2+(-735 \tilde C_2^4+714 \tilde C_3 \tilde C_2^2-12892 \tilde C_2^2
\\&\qquad
+2212 \tilde C_4 \tilde C_2+3392 \tilde C_3-1088 \tilde C_5) N^3+4 (406 \tilde C_2^3-581 \tilde C_3 \tilde C_2+1316 \tilde C_2-464 \tilde C_4) N^4
\\&\qquad
-12 (147 \tilde C_2^2-200 \tilde C_3) N^5+720 \tilde C_2 N^6.
\end{align*}
The values of the weight system $w_{\mathfrak{gl}}$ can be reconstructed from those of $w_{\mathfrak{sl}}$ by using the Hopf
algebra structure of the space of chord diagrams. Namely,
for any \emph{primitive} element $P$ of degree greater than $1$ in the algebra of chord diagrams we have $w_{\mathfrak{gl}(N)}(P)=w_{\mathfrak{sl}(N)}(P)\in ZU(\mathfrak{sl}(N))$, i.e., $w_{\mathfrak{gl}(N)}(P)$ can be expressed as a polynomial in $\tilde C_2,\tilde C_3,\dots$. For $P=K_1$, we have
$$
w_{\mathfrak{gl}(N)}(K_1)=C_2=\tilde C_2+\frac{C_1^2}{N}=w_{\mathfrak{sl}(N)}(K_1)+\frac{C_1^2}{N}.
$$
This allows one to reconstruct the values of $w_{\mathfrak{gl}}$
from the known values of $w_{\mathfrak{sl}}$ by presenting a given chord diagram as a polynomial in primitive elements.
More explicitly, for a given chord diagram~$C$ we have
$$
w_{\mathfrak{gl}}(C)=\sum_{I\sqcup J=V(C)} \Bigl(\tfrac{C_1^2}{N}\Bigr)^{|I|} w_{\mathfrak{sl}}(C|_J).
$$
In particular, exponential generating functions for the
values of $w_{\mathfrak{gl}}$ and $w_{\mathfrak{sl}}$ on the chord diagrams $K_n$ are subject to the relation
$$
1+\sum_{n=1}^\infty w_{\mathfrak{gl}}(K_n)\frac{x^n}{n!}=e^{\frac{C_1^2}{N}}\Biggl(1+\sum_{n=1}^\infty w_{\mathfrak{sl}}(K_n)\frac{x^n}{n!}\Biggr).
$$
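Comparing coefficients of $x^n/n!$ on both sides gives $w_{\mathfrak{gl}}(K_n)=\sum_k\binom{n}{k}\bigl(\tfrac{C_1^2}{N}\bigr)^k w_{\mathfrak{sl}}(K_{n-k})$, which can be checked against the tables above. A SymPy sketch for $n=2,3$, using $\tilde C_2=C_2-C_1^2/N$ (the case $k=2$ of the substitution formula) and $w_{\mathfrak{sl}}(K_1)=\tilde C_2$:

```python
import sympy as sp

N, C1, C2 = sp.symbols('N C1 C2')
c = C1**2 / N                 # the correction term C_1^2 / N
t = C2 - c                    # tilde C_2 expressed through C_1, C_2

# values copied from the tables in the text
w_sl = {0: sp.Integer(1), 1: t,
        2: t**2 - N * t,
        3: t**3 - 3 * N * t**2 + 2 * N**2 * t}
w_gl = {2: C1**2 + C2**2 - N * C2,
        3: C2**3 + 3 * C1**2 * C2 + 2 * C2 * N**2 - (2 * C1**2 + 3 * C2**2) * N}

def rhs(n):
    # coefficient identity obtained from the exponential generating functions
    return sum(sp.binomial(n, k) * c**k * w_sl[n - k] for k in range(n + 1))

ok = all(sp.expand(w_gl[n] - rhs(n)) == 0 for n in (2, 3))
```

The same check extends to the higher $K_n$ in the tables at the cost of transcribing longer polynomials.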
\subsection{The weight system $\mathfrak{gl}(1|1)$}
In addition to metrized Lie algebras, weight systems can be constructed from metrized Lie superalgebras. Such a weight system takes values in the center of the universal enveloping algebra of the corresponding Lie superalgebra.
The general construction of such a weight system is very similar to that for Lie algebras, and we are not presenting it here.
Similarly to the case of a Lie algebra, computation of its values for a more or less complicated Lie superalgebra is rather laborious.
In~\cite{FKV97}, this construction is elaborated for the simplest Lie superalgebra $\mathfrak{gl}(1|1)$. The corresponding
weight system $w_{\mathfrak{gl}(1|1)}$ takes values in the ring of polynomials in two variables, the subring of
the center $ZU(\mathfrak{gl}(1|1))$ of the universal enveloping algebra
generated by the Casimir elements of gradings~$1$ and~$2$.
In~\cite{FKV97}, recurrence relations for computing
the values of this weight system, similar to the Chmutov--Varchenko relations for $w_{\mathfrak{sl}(2)}$, are deduced. In~\cite{CL07}, it is proved that the values of this weight system depend only on the intersection graph of the chord diagram. The skew characteristic polynomial of graphs,
see Sec.~\ref{sssSCP}, is nothing but the extension of the
weight system $w_{\mathfrak{gl}(1|1)}$ to a $4$-invariant of graphs.
\section{Embedded graphs and delta-matroids}\label{s4}
A chord diagram can be treated as an embedded graph
with a single vertex on an orientable surface.
One of the key problems of the theory of weight systems
is how one can extend a given weight system to embedded
graphs with arbitrarily many vertices. Such
generalized weight systems are associated to
finite type invariants of links in the same way as
ordinary weight systems are associated to finite type
invariants of knots.
There are various approaches to extending weight systems
and graph invariants to embedded graphs. The most
well developed approach consists in interpreting an
embedded graph as an ordinary graph to which information
about the embedding is added, see, e.g.~\cite{EMM13}.
Various extensions of the classical Tutte polynomial
have been constructed in this way~\cite{BR02,EMM15}.
In this section we describe another approach to constructing
extensions, which has been under development over the last few years.
This approach is based on the analysis of the
Hopf algebra structure
on the space of delta-matroids. It leads to invariants
that satisfy $4$-term relations and are therefore generalized weight systems. Delta-matroids (also
known under the name of Lagrangian matroids) were introduced by Bouchet around 1990~\cite{B89,B91}.
These combinatorial structures capture essential
features of both abstract and embedded graphs equally well.
The Hopf algebra structure on various spaces of delta-matroids and $4$-term relations for them
were introduced in~\cite{LZ17}.
\subsection{Embedded graphs and $4$-term relations}
Embedded graphs, also known under the name of ribbon graphs, are the subject of topological graph theory.
An \emph{orientable embedded graph}
can be defined as an abstract graph (possibly with
loops and multiple edges) endowed with a cyclic order
of the semiedges incident to each vertex.
Below, we consider only orientable connected embedded
graphs. Such a graph admits a presentation as an
orientable surface with boundary formed by disks
associated to the vertices and ribbons associated to
the edges; each ribbon is attached to one or two
disks by its shorter sides, see Fig.~\ref{feg}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{pics/RibbonGraph.png}
\caption{An embedded graph}
\label{feg}
\end{figure}
To each chord diagram, one can associate a ribbon graph
with a single vertex by making the Wilson loop into
the vertex by attaching a disk to it and replacing
each chord with a ribbon in such a way that the surface remains orientable.
The $4$-term relations~(\ref{fourtermrelation})
make sense for an arbitrary embedded
graph, not necessarily a single-vertex one.
In this more general case the ends of the two edges
taking part in the relation may be attached to
the boundaries of different vertices (two or even three).
We call these relations \emph{generalized $4$-term relations}, and we call the embedded graph invariants
satisfying them
\emph{generalized weight systems}.
Vassiliev's first move for embedded graphs exchanges
the two neighboring ends of two ribbons, and the second Vassiliev move consists in sliding an end of
a ribbon (a handle) along the second ribbon.
Below, we will be interested first of all in the question
how to extend a weight system to a generalized weight system.
The weight systems we are going to construct will be
based on the Hopf algebra structure on the space of delta-matroids.
\subsection{Delta-matroids of graphs and embedded graphs,\\
and $4$-term relations for them}
A \emph{set system} is a pair $(E;S)$
consisting of a finite set~$E$ and a subset $S\subset 2^E$ of the set of its subsets. A set system
$(E;S)$ is said to be
\emph{proper} if the set~$S$ is nonempty;
below, we consider only proper set systems.
\begin{definition}
A proper set system $(E;S)$ is called a
\emph{delta-matroid} if the following
\emph{symmetric exchange axiom} is valid for it:
for any pair of subsets
$X,Y\in S$ and any element
$a\in X\Delta Y$ there is an element
$b\in Y\Delta X$ such that
$(X\Delta \{a,b\})\in S$.
\end{definition}
Here~$\Delta$ denotes the symmetric set difference operation, $X\Delta Y=(X\setminus Y)\cup(Y\setminus X)$. The element~$b$ in the symmetric exchange axiom
can either coincide with~$a$ or be different from it.
The elements of~$S$ are called \emph{admissible
sets} for the delta-matroid $(E;S)$.
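The axiom is easy to test by brute force on small examples. A minimal Python sketch (the function name is ours; the ground set is passed only to mirror the pair $(E;S)$):

```python
def is_delta_matroid(ground, S):
    """Brute-force check of the symmetric exchange axiom for (E; S)."""
    S = {frozenset(X) for X in S}
    if not S:
        return False                      # the set system must be proper
    for X in S:
        for Y in S:
            for a in X ^ Y:               # a runs over X Delta Y
                # need b in Y Delta X with X Delta {a, b} admissible (b = a allowed)
                if not any(X ^ {a, b} in S for b in Y ^ X):
                    return False
    return True

# delta-matroid of a single edge: admissible sets are the empty set and both ends
ok = is_delta_matroid({1, 2}, [frozenset(), frozenset({1, 2})])
```

The set system $(\{1,2,3\};\{\{1\},\{2,3\}\})$ fails the axiom and is rejected by the same function.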
To each simple graph~$G$ and any embedded graph~$\Gamma$,
one can associate a delta-matroid. For a graph,
the construction proceeds as follows.
Recall that a graph~$G$ is said to be nondegenerate
if the determinant of its adjacency matrix over the two-element field~$\mathbb{F}_2$ is~$1$. The \emph{delta-matroid
$\delta(G)=(V(G);S(G))$ of a graph~$G$}
is the set system consisting of the set $V(G)$
of vertices of~$G$ and a set $S(G)\subset 2^{V(G)}$
of its subsets, where a subset $U\subset V(G)$
of the set of vertices of the graph is admissible
(i.e., $U\in S(G)$) iff the induced subgraph $G|_U$ is nondegenerate.
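This definition is directly computable. The sketch below (Python; the names are ours) lists the admissible sets of a small graph by computing determinants of induced adjacency matrices over $\mathbb{F}_2$; the empty set is always admissible, since the determinant of the empty matrix is~$1$.

```python
from itertools import combinations

def det_mod2(M):
    """Determinant over F_2 via Gaussian elimination; M is a list of 0/1 rows."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[col])]
    return 1                              # includes the empty matrix

def graph_delta_matroid(vertices, edges):
    """Admissible sets of delta(G): subsets U with G|_U nondegenerate over F_2."""
    S = set()
    for k in range(len(vertices) + 1):
        for U in combinations(sorted(vertices), k):
            A = [[1 if {u, v} in edges else 0 for v in U] for u in U]
            if det_mod2(A):
                S.add(frozenset(U))
    return S

# the path 1 - 2 - 3: admissible sets are {}, {1,2}, {2,3}, an even delta-matroid
S = graph_delta_matroid([1, 2, 3], [{1, 2}, {2, 3}])
```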
In turn, the \emph{delta-matroid
$\delta(\Gamma)=(E(\Gamma);S(\Gamma))$ of an
embedded graph~$\Gamma$} is the set system consisting
of the set $E(\Gamma)$ of the edges of~$\Gamma$
and a set $S(\Gamma)\subset 2^{E(\Gamma)}$
of its subsets, where a subset $U\subset E(\Gamma)$
of the set of edges is admissible (i.e.,
$U\in S(\Gamma)$) iff the spanning embedded subgraph
$\Gamma|_U$ has a connected boundary.
If the genus of an embedded graph~$\Gamma$
is~$0$ (that is, this graph is embedded into the sphere),
a set of its edges is admissible iff the corresponding
spanning subgraph $\Gamma|_U$ is a tree. For an arbitrary
genus, the admissible subsets in the delta-matroid
$\delta(\Gamma)$ are also called
\emph{quasitrees}.
A.~Bouchet proved that the set systems $\delta(G)$ and $\delta(\Gamma)$
are indeed delta-matroids. In addition, if $\Gamma$
is an embedded graph with a single vertex, that is, a chord diagram,
the corresponding definitions are compatible:
\begin{theorem}
If~$C$ is a chord diagram, that is, an embedded graph
with a single vertex, then the delta-matroid~$\delta(C)$
is naturally isomorphic to the delta-matroid $\delta(g(C))$
of the intersection graph of~$C$.
\end{theorem}
In other words, the boundary of a chord diagram~$C$ is
connected iff its intersection graph is nondegenerate.
The last assertion has been rediscovered many times,
see, e.g., \cite{S01}.
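The assertion can be tested mechanically. In the sketch below (our encoding), a chord diagram on the endpoints $0,\dots,2n-1$ in circle order is a list of pairs; the boundary walk crosses each ribbon to the other end of the chord and then proceeds to the next endpoint on the circle, so the boundary components are the cycles of $i\mapsto\pi(i)+1 \pmod{2n}$, $\pi$ being the chord involution. The counts agree with nondegeneracy: $K_2$ (intersection-graph determinant $1$ over $\mathbb{F}_2$) has connected boundary, while a single chord and $K_3$ (determinant $0$) do not.

```python
def boundary_components(chords):
    """Number of boundary components of the one-vertex ribbon graph of a
    chord diagram; chords pair up the endpoints 0, ..., 2n-1 in circle order."""
    m = 2 * len(chords)
    pi = {}
    for a, b in chords:
        pi[a], pi[b] = b, a
    seen, count = set(), 0
    for start in range(m):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:          # follow one boundary walk
                seen.add(i)
                i = (pi[i] + 1) % m
    return count

one_chord = boundary_components([(0, 1)])            # K_1: 2 components
k2 = boundary_components([(0, 2), (1, 3)])           # K_2: connected boundary
k3 = boundary_components([(0, 3), (1, 4), (2, 5)])   # K_3: 2 components
```

On two parallel chords the function returns $3$, in agreement with the Euler characteristic of the sphere ($V-E+F=1-2+3=2$).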
The delta-matroid of a graph, as well as that of an orientable embedded graph, is even.
A delta-matroid $(E;S)$ is said to be
\emph{even} if the numbers of elements of all its
admissible sets have the same parity,
$|X|\equiv|Y|~{\rm mod}~2$ for all $X,Y\in S$.
Since we are interested first of all in graphs
embedded into orientable surfaces, below, we talk
only about even delta-matroids, although most
of the constructions can be as well extended to delta-matroids
that are not even, see Sec.~\ref{ssno}.
Introduce the twist (or partial dual) of a delta-matroid
in the following way. Let $U\subset E$ be a subset
of the base set~$E$ of a delta-matroid
$D=(E;S)$. The \emph{twist} $D*U$ of the delta-matroid~$D$ in the
subset~$U$ is the set system
$D*U=(E;S*U)$, where $S*U=\{X\Delta U\mid X\in S\}$.
For delta-matroids of embedded graphs, the twist
operation corresponds to partial duality
introduced in~\cite{C09}. The twist of a delta-matroid
in an arbitrary subset of its base set is a delta-matroid~\cite{B91}.
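In code the twist is a one-liner; a quick check (Python, names ours) confirms, for instance, that twisting twice in the same subset returns the original delta-matroid, since $X\Delta U\Delta U=X$.

```python
def twist(D, U):
    """Twist (partial dual) of the delta-matroid D = (E; S) in the subset U."""
    ground, S = D
    U = frozenset(U)
    return (ground, {X ^ U for X in S})  # X ^ U is the symmetric difference

# delta-matroid of a single edge on the ground set {1, 2}
D = (frozenset({1, 2}), {frozenset(), frozenset({1, 2})})
D1 = twist(D, {1})                        # admissible sets become {1} and {2}
```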
\begin{definition}
The delta-matroid $\delta(G)$ of a graph~$G$ is called \emph{graphic}. The twist of a graphic delta-matroid
in a subset of its base set is called a
\emph{binary delta-matroid}.
\end{definition}
In particular, the delta-matroid $\delta(\Gamma)$
of any embedded graph~$\Gamma$ on an orientable surface is an even binary delta-matroid.
For delta-matroids, a $4$-term relation can be defined.
Its definition requires two operations on
delta-matroids, namely, the first and the second Vassiliev moves.
Each of these operations is specified by a chosen
ordered pair of elements of the base set of the delta-matroid.
The result $\widetilde{D_{ab}}$ of
\emph{sliding the handle~$a$ along the handle~$b$}
in a delta-matroid~$D=(E;S)$, $a,b\in E$,
is defined as $\widetilde{D_{ab}}=(E;\widetilde{S_{ab}})$,
where
$$
\widetilde{S_{ab}} =S\Delta\{X \sqcup \{a\}\mid X \sqcup \{b\} \in S
\text{ and } X \subseteq E \setminus \{a, b\}\}.
$$
The result ${D'_{ab}}$ of
\emph{exchange of the ends of the handles~$a$ and~$b$}
in a delta-matroid~$D=(E;S)$, $a,b\in E$,
is defined by ${D'_{ab}}=(E;{S'_{ab}})$,
where
$$
{S'_{ab}} =S\Delta\{X \sqcup \{a,b\}|X \in S
\text{ and } X \subseteq E \setminus \{a, b\}\}.
$$
Both the first and the second Vassiliev move
take a binary delta-matroid to a delta-matroid of the
same type.
A function~$f$ on even binary delta-matroids
\emph{satisfies $4$-term relations}
if for any such delta-matroid $D=(E;S)$
and any pair $a,b\in E$ of elements of the base set
we have
$$
f(D)-f(D'_{ab})=f(\widetilde{D_{ab}})-f(\widetilde{D'_{ab}}).
$$
If a function on even binary delta-matroids
satisfies $4$-term relations, then its restriction
to delta-matroids of embedded graphs on orientable
surfaces satisfies generalized $4$-term relations for them.
\subsection{Hopf algebras of delta-matroids}
Two set systems $(E_1;S_1)$ and $(E_2;S_2)$
are said to be \emph{isomorphic} if there is a one-to-one mapping of the set~$E_1$ to the set~$E_2$
that takes the set of subsets
$S_1$ one-to-one to the set of subsets $S_2$.
Denote by~$\mathcal{B}^e_n$ the vector space spanned by
isomorphism classes of even binary delta-matroids
on $n$-element sets and define
$$
\mathcal{B}^e=\mathcal{B}^e_0\oplus\mathcal{B}^e_1\oplus\mathcal{B}^e_2\oplus\dots.
$$
The vector space~$\mathcal{B}^e$ can be endowed with a
natural structure of graded commutative cocommuta\-tive
Hopf algebra. Multiplication in this algebra is
given by the disjoint union, while comultiplica\-tion
is given by splitting of the base set of a delta-matroid
into two disjoint subsets in all possible ways.
A delta-matroid is said to be \emph{connected}
if it cannot be represented as a product of two
delta-matroids with smaller base sets.
The quotient of the vector space $\mathcal{B}^e$ modulo
the subspace of $4$-term relations inherits a
Hopf algebra structure. We are not going to use
this Hopf algebra, and do not introduce a notation for it.
\begin{remark}
In contrast to graphs and delta-matroids, embedded
graphs, as far as we know, do not generate a natural
Hopf algebra.
\end{remark}
\subsection{Extending graph invariants to embedded
graphs and delta-matroids}
In this section, we explain how to extend to embedded
graphs the $4$-invariants of graphs described in Sec.~\ref{s2}.
The extensions obtained happen to be defined on
binary delta-matroids, and their definitions use
the Hopf algebra structure on the vector space of binary
delta-matroids.
\subsubsection{Interlace polynomial}
The interlace polynomial of graphs defined above
admits a natural extension to binary delta-matroids.
\begin{definition}
The \emph{distance} from a set system $D=(E;S)$
to a subset $U\subset E$ is the number
$$
d_D(U)=\min_{W\in S}|U\Delta W|.
$$
The \emph{interlace polynomial} $L_D(x)$
of a delta-matroid~$D=(E;S)$ is the polynomial
$$
L_D(x)=\sum_{U\subset E}x^{d_D(U)}.
$$
\end{definition}
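The definition translates directly into a brute-force computation. The following Python sketch (ours, not from the text) computes the coefficient list of $L_D$ for the delta-matroid of a single edge, whose admissible sets are $\emptyset$ and $\{1,2\}$.

```python
from itertools import chain, combinations

def powerset(elts):
    """All subsets of a finite set, as frozensets (illustrative helper)."""
    elts = list(elts)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(elts, r)
                                for r in range(len(elts) + 1))]

def interlace_coeffs(E, S):
    """Coefficients of L_D(x): coeffs[k] = number of U in E with d_D(U) = k."""
    coeffs = [0] * (len(E) + 1)
    for U in powerset(E):
        coeffs[min(len(U ^ W) for W in S)] += 1  # d_D(U) = min |U Delta W|
    return coeffs

# Delta-matroid of a single edge: admissible sets are the empty set and {1, 2}.
E = frozenset({1, 2})
S = {frozenset(), frozenset({1, 2})}
print(interlace_coeffs(E, S))  # [2, 2, 0], i.e. L_D(x) = 2 + 2x
```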
\begin{theorem}
If $D=D(G)$ is the delta-matroid of a graph~$G$,
then the interlace polynomial $L_D$ of~$D$ is
related to the interlace polynomial of~$G$ in
the following way:
$$
L_{D(G)}(x)=L_G(x-1).
$$
\end{theorem}
\begin{theorem}[\cite{K20}]
The interlace polynomial of even binary delta-matroids
satisfies $4$-term relations for them.
\end{theorem}
\begin{theorem}
The space of values of the interlace polynomial on
the subspace of primitive elements in
$\mathcal{B}^e_n$ has dimension~$n$.
\end{theorem}
This implies that the dimension of the space of
primitive elements in~$\mathcal{B}^e_n$ is at least~$n$.
\subsubsection{Stanley's symmetrized chromatic polynomial}
The notion of umbral invariant of graphs admits a
natural extension to invariants of delta-matroids.
To define such an invariant, it suffices to specify,
for each connected delta-matroid~$D=(E;S)$ the coefficient
of~$q_n$, where $n=|E|$, in the value of the umbral
invariant of this delta-matroid.
In~\cite{NZ21}, a way of extending umbral graph invariants to umbral invariants of delta-matroids
has been suggested. This way is based on the combinatorial Hopf algebra structure introduced in~\cite{ABS06}.
Below, we restrict ourselves to commutative cocommutative Hopf algebras.
A \emph{combinatorial Hopf algebra} is a pair $(\mathcal{H},\xi)$ consisting of a connected graded
Hopf algebra~$\mathcal{H}$ and a character (i.e.,
a multiplicative mapping) $\xi:\mathcal{H}\to\mathbb{C}$.
The polynomial Hopf algebra $\mathbb{C}[q_1,q_2,\dots]$
becomes a combinatorial Hopf algebra if we endow it
with the \emph{canonical character} $\zeta$
which takes each variable~$q_k$ to~$1$.
This combinatorial Hopf algebra is a terminal object
in the category of combinatorial Hopf algebras; that
is, to each combinatorial Hopf algebra $(\mathcal{H},\xi)$
there is associated a unique umbral invariant (a graded Hopf algebra
homomorphism) $\mathcal{H}\to\mathbb{C}[q_1,q_2,\dots]$
that takes~$\xi$ to~$\zeta$.
It is proved in~\cite{ABS06} that Stanley's symmetrized
chromatic polynomial is the graded Hopf algebra homomorphism
$\mathcal{G}\to\mathbb{C}[q_1,q_2,\dots]$ corresponding
to the character $\xi:\mathcal{G}\to\mathbb{C}$
that takes the graph with a single vertex to~$1$
and any connected graph with $n\ge2$ vertices to~$0$.
This structure allowed one to define~\cite{NZ21} \emph{Stanley's symmetrized chromatic polynomial
of delta-matroids} as the graded homomorphism
$W:\mathcal{B}^e\to\mathbb{C}[x;q_1,q_2,\dots]$
from the Hopf algebra of even binary delta-matroids
to the Hopf algebra of polynomials in the variables~$q$
with coefficients in the ring of polynomials in a
variable~$x$, which is associated to the character
taking the delta-matroid $(\{1\};\{\{\emptyset\}\})$
to~$1$, the delta-matroid $(\{1\};\{\{1\}\})$
to~$x$, and any connected delta-matroid of grading
$n\ge2$ to~$0$. These two delta-matroids on a
$1$-element set span
the homogeneous subspace $\mathcal{B}^e_1$.
\begin{theorem}
Stanley's symmetrized chromatic polynomial of
delta-matroids satisfies $4$-term relations for
delta-matroids.
\end{theorem}
\subsubsection{Transition polynomial}
The transition polynomial also admits an extension to
delta-matroids, which satisfies $4$-term relations for
them. In order to define the transition polynomial, we will need a few further definitions. Let $D=(E;S)$ be a
delta-matroid, and let $U\subset E$ be a subset of
its base set.
The polynomial
$$
Q_D(s,t,x)=\sum_{E=\Phi\sqcup X\sqcup\Psi}
s^{|\Phi|}t^{|X|}x^{d_{D+\Phi*X\bar*\Psi}(\emptyset)}
$$
is called the \emph{transition polynomial}
of the delta-matroid
$D=(E;S)$. Here the operation~$+$, for an element $u\in E$, is defined by
$$
D+u=(E,S\Delta\{F\cup\{u\}|F\in S, u\notin F\}).
$$
The operation $\bar*$ is given by
$$
D\bar*u=D+u*u+u=D*u+u*u.
$$
Both operations are extended to an arbitrary subset $A\subset E$ by applying them successively to the elements of~$A$.
This definition is a specialization of the definition of weighted transition polynomial in~\cite{BH14}.
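The two operations, as well as the identity $D+u*u+u=D*u+u*u$ (reading the compositions from left to right), can be checked on small examples. The following Python sketch (encoding and example are ours, not from the text) does exactly this.

```python
def plus(E, S, u):
    """D + u: S Delta {F u {u} : F in S, u not in F}."""
    return E, S ^ {F | {u} for F in S if u not in F}

def star(E, S, u):
    """Twist D * u in the one-element subset {u}."""
    return E, {X ^ frozenset({u}) for X in S}

def barstar(E, S, u):
    """D bar* u, read as the composition D + u * u + u (left to right)."""
    return plus(*star(*plus(E, S, u), u), u)

# Check the stated identity D + u * u + u = D * u + u * u on an example.
E = frozenset({1, 2})
S = {frozenset(), frozenset({1}), frozenset({1, 2})}
left = barstar(E, S, 1)[1]
right = star(*plus(*star(E, S, 1), 1), 1)[1]
assert left == right
```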
\begin{theorem}[\cite{DZ22}]
The transition polynomial of even binary delta-matroids
satisfies $4$-term relations for them.
\end{theorem}
\subsubsection{Skew characteristic polynomial}
Graph nondegeneracy naturally extends to embedded graphs
and delta-matroids. A delta-matroid $(E;S)$ is said to be
\emph{nondegenerate} if $E\in S$, i.e., if the whole
set~$E$ is admissible. For embedded graphs, this means
that an embedded graph is nondegenerate iff its
boundary is connected. For chord diagrams, this definition is consistent with the one we already know.
\emph{Nondegeneracy
$\nu(D)$ of a delta-matroid} is~$1$ if~$D$
is nondegenerate, and it is~$0$ otherwise.
We can define the \emph{skew characteristic
polynomial of a delta-matroid} $D=(E;S)$ by
$$
Q_D(x)=\sum_{U\subset E} x^{|U|}\nu(D|_U).
$$
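The sum can again be computed directly. The following Python sketch is ours and rests on an assumption not spelled out here: we take the restriction to be $D|_U=(U;\{X\in S\,|\,X\subseteq U\})$, so that $\nu(D|_U)=1$ exactly when $U\in S$, and the sum runs over the admissible sets only.

```python
def skew_char_coeffs(E, S):
    """Coefficients of Q_D(x) = sum over U of x^{|U|} nu(D|_U).

    Assumption (ours): D|_U = (U; {X in S : X subseteq U}), so that
    nu(D|_U) = 1 iff U itself is admissible, i.e. U in S.
    """
    coeffs = [0] * (len(E) + 1)
    for U in S:          # only admissible U contribute nu = 1
        coeffs[len(U)] += 1
    return coeffs

# Delta-matroid of a single edge: Q_D(x) = 1 + x^2 under this assumption.
E = frozenset({1, 2})
S = {frozenset(), frozenset({1, 2})}
print(skew_char_coeffs(E, S))  # [1, 0, 1]
```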
\begin{theorem}
The skew characteristic polynomial of delta-matroids
satisfies $4$-term relations.
\end{theorem}
Similarly to the case of graphs, this statement
follows from the fact that nondegeneracy is
preserved under the second Vassiliev move.
\subsection{Extending to nonorientable surfaces:\\
framed chord diagrams, framed graphs,\\
and not necessarily even delta-matroids}\label{ssno}
$4$-term relations for chord diagrams, graphs, and embedded graphs have a nonorientable version, which
we describe here briefly. It should not be separated
from the orientable one: all the constructions, and first
of all the Hopf algebra structures, require simultaneous
participation of both orientable and nonorientable objects.
In each case, orientable objects generate Hopf subalgebras
in the Hopf algebras of objects without restrictions to their orientability.
A \emph{framed chord diagram} is a pair $(C,\varphi)$
consisting of a chord diagram~$C$ and a framing
$\varphi:V(C)\to\mathbb{F}_2$ taking each chord of~$C$ to
an element of the field~$\mathbb{F}_2$.
An isomorphism of framed chord diagrams is an
isomorphism of the underlying chord diagrams taking
the framing of the first of them to that of the second one.
A \emph{framed graph} is a pair $(G,\varphi)$ consisting of a simple graph~$G$ and a framing
$\varphi:V(G)\to\mathbb{F}_2$, which associates to any
vertex of~$G$ an element of the field~$\mathbb{F}_2$.
An isomorphism of framed graphs is an
isomorphism of the underlying graphs taking
the framing of the first of them to that of the second one.
By numbering the vertices of~$G$ with the numbers
$1,\dots,n=|V(G)|$, one associates to the framed graph its adjacency matrix
$A(G,\varphi)$. Nondiagonal elements of this matrix
coincide with those of the adjacency matrix $A(G)$ of the simple graph~$G$, and its diagonal elements coincide
with the framings of the corresponding vertices.
Isomorphism classes of framed graphs coincide with
symmetric matrices over the field $\mathbb{F}_2$
considered up to simultaneous renumbering of the
columns and the rows.
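Nondegeneracy of a framed graph can then be tested by Gaussian elimination over $\mathbb{F}_2$. The following Python sketch is ours and assumes that, as for simple graphs in Sec.~\ref{s2}, a framed graph is nondegenerate exactly when its adjacency matrix $A(G,\varphi)$ is nonsingular over $\mathbb{F}_2$.

```python
def f2_rank(M):
    """Rank over F_2 of a 0/1 matrix, by Gaussian elimination."""
    M = [row[:] for row in M]          # work on a copy
    rank = 0
    ncols = len(M[0]) if M else 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

# Single edge, both framings 0: the usual adjacency matrix, nonsingular over F_2.
assert f2_rank([[0, 1],
                [1, 0]]) == 2
# Same edge, both framings 1: the framed adjacency matrix is singular over F_2.
assert f2_rank([[1, 1],
                [1, 1]]) == 1
```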
To any framed chord diagram, a framed graph is
associated, namely, its intersection graph.
A framed chord diagram can be considered as a ribbon
graph with a single vertex: an ordinary ribbon is
associated to a chord with framing~$0$, while a chord
with framing~$1$ is replaced with a half-twisted ribbon. If there is a chord with framing~$1$, then the
surface corresponding to the framed chord diagram is
nonorientable.
A framed graph invariant~$f$ is called a $4$-\emph{invariant} if for any pair of vertices
$a,b\in V(G)$ it satisfies the $4$-term relation
$$
f((G,\varphi))-f((G'_{ab},\varphi))=
f((\widetilde{G}_{ab},\widetilde{\varphi}_{ab}))-
f((\widetilde{G}'_{ab},\widetilde{\varphi}_{ab})).
$$
There is a natural one-to-one correspondence between
the sets of vertices of any two graphs
participating in the relation; therefore,
the framings $\varphi$ and $\widetilde{\varphi}_{ab}$ have a common
domain. Here $\widetilde{\varphi}_{ab}$ denotes the
framing of the graph $\widetilde{G}_{ab}$, which
coincides with $\varphi$ on all the vertices but~$a$,
and $\widetilde{\varphi}_{ab}(a)=\varphi(a)+\varphi(b)$.
In other words, the adjacency matrix
$A(\widetilde{G}_{ab})$ of the framed graph $\widetilde{G}_{ab}$ can be obtained from $A(G)$ by means of the same transformation~(\ref{eVsm})
as in the unframed case.
Nondegeneracy and skew characteristic polynomial
for framed graphs are defined in the same way as for
ordinary graphs. The nondegeneracy of a framed graph is
preserved under the second Vassiliev move.
This implies that the skew characteristic polynomial
is a $4$-invariant of framed graphs. In contrast to
the skew characteristic polynomial of simple graphs,
the skew characteristic polynomial of a framed graph
need be neither even nor odd.
Any $4$-invariant of framed graphs defines a framed
weight system, which takes a framed chord diagram
to the value of the invariant on its framed intersection graph.
In particular, nondegeneracy and the skew characteristic
polynomial both define framed weight systems.
The delta-matroid of a framed graph and the delta-matroid
of an arbitrary embedded graph, which may be nonorientable,
are defined in the same way as in the orientable case.
If a framed graph has a vertex with framing~$1$,
then the corresponding delta-matroid will not be even:
its admissible sets contain both even and odd
numbers of elements. The delta-matroid of an
embedded graph on a nonorientable surface is
also not even. Isomorphism classes of binary delta-matroids
generate a Hopf algebra containing the Hopf subalgebra
$\mathcal{B}^e$.
arXiv:2302.12166
\section{Introduction}
In porous media flow models in geophysical applications, the porosity of the medium is often treated as a static quantity. This assumption, however, need not always be justified due to the ability of rocks to deform by compaction.
In certain cases, the interaction of porosity and pressure can lead to the formation of \emph{porosity waves}.
These can take the form of solitary waves formed by travelling higher-porosity regions \cite{Yarushina2015b} or of chimney-like channels \cite{Raess2019}.
Such effects are important, for instance, in the modelling of rising magma \cite{McKenzie1984,Barcilon1986}, where porosity waves arise due to high temperatures. Such waves or channels can also form in soft sedimentary rocks, in salt formations or under the influence of chemical reactions; see for example \cite{Yarushina2015a,Raess2018,Raess2019}.
Quantifying uncertainties caused by the formation of preferential flow pathways can thus be important for safety analyses in geoengineering applications \cite{Yarushina2022}.
Here, we consider a poroviscoelastic model similar to the one introduced in \cite{Connolly1998,Vasilyev1998} for the interaction of porosity and pressure. The equations for porosity $\phi$ and effective pressure $u$ studied numerically in \cite{Vasilyev1998} are of the basic form
\begin{equation}\label{eq:cp}
\begin{aligned}
\partial_t \phi & = - \nabla \cdot \phi^n (\nabla u + f) ,\\
Q \partial_t u & = \nabla \cdot \phi^n (\nabla u + f) - \phi^m u ,
\end{aligned}
\end{equation}
with a constant $Q>0$, fixed exponents $m,n\geq 1$ and a constant vector $f$, subject to initial conditions on $\phi$ and $u$ as well as boundary conditions on $u$. The problem \eqref{eq:cp} can be regarded as a slightly restricted version of the more general case considered in this work, but \eqref{eq:cp} has very similar features and is obtained by minor additional simplifications.
Note that \eqref{eq:cp} can be regarded as the combination of a hyperbolic equation for $\phi$ and a parabolic equation for $u$. In the absence of strong smoothness assumptions on $u$, the advection coefficient in the first equation can be of low regularity. However, adding the two equations in \eqref{eq:cp} gives $\partial_t(\phi + Qu) = -\phi^m u$, so that \eqref{eq:cp} can also be rewritten as
\begin{equation}\label{eq:cp2}
\begin{aligned}
\partial_t \phi & = - \phi^m u - Q \partial_t u ,\\
Q \partial_t u & = \nabla \cdot \phi^n (\nabla u + f) - \phi^m u ,
\end{aligned}
\end{equation}
where the parabolic problem for $u$ is coupled to a pointwise ordinary differential equation for $\phi + Q u$. For this reason, solutions of this problem turn out to be more well-behaved than \eqref{eq:cp} would suggest. The limiting case $Q=0$ in \eqref{eq:cp2}, corresponding to a purely viscous flow, takes the form
\begin{equation}\label{eq:magma}
\begin{aligned}
\partial_t \phi & = - \phi^m u ,\\
0 & = \nabla \cdot \phi^n (\nabla u + f) - \phi^m u .
\end{aligned}
\end{equation}
This model has been studied in particular in the context of magma dynamics \cite{Wiggins1995,Simpson2006,Ambrose2018}.
For modelling sharp transitions between materials in \eqref{eq:cp2} and \eqref{eq:magma}, it is important to be able to treat porosities with \emph{jump discontinuities}. These turn out to be determined mainly by the initial condition of the form
\[ \phi|_{t=0} = \phi_0 \,, \]
which arises in both problems, with a given function $\phi_0$ on the spatial domain. The initial data for $u$ that are additionally required in \eqref{eq:cp} and \eqref{eq:cp2} can be assumed to be sufficiently smooth in practically relevant models. For $\phi_0$ of low regularity, existence and uniqueness of solutions to \eqref{eq:cp2} and \eqref{eq:magma} is not covered by existing results. While for \eqref{eq:magma}, existence and uniqueness of solutions have been obtained in \cite{Simpson2006} in one spatial dimension and in \cite{Ambrose2018} in the higher-dimensional case, these results require higher-order Sobolev regularity of $\phi_0$.
\subsection{Novel contributions}
In the present work, for more general versions of both \eqref{eq:cp2} and the viscous limiting case \eqref{eq:magma}, we consider existence and uniqueness of solutions under low regularity requirements on the initial data $\phi_0$. In particular, we focus on conditions on $\phi_0$ that permit jump discontinuities. We arrive at different conditions for \eqref{eq:cp2} and \eqref{eq:magma}, depending on the dimension $d$ of the spatial domain $\Omega$.
For the purely viscous case as in \eqref{eq:magma}, when $d\in\{1,2\}$, we obtain existence and uniqueness of solutions up to the maximal time of existence under the sole requirement $\phi_0 \in L^\infty(\Omega)$. Using a different technique, we still obtain existence of solutions (where uniqueness remains open) for arbitrary $d$ provided that $\phi_0 \in BV(\Omega) \cap L^\infty(\Omega)$, which includes functions with discontinuities. Moreover, for arbitrary $d$ we also obtain a new result on existence and uniqueness assuming $\phi_0 \in C^{k,1}$ for some $k\geq 0$.
For general poroviscoelastic problems, with \eqref{eq:cp2} as a special case, we obtain well-posedness for $\phi_0$ with jump discontinuities only under more restrictive assumptions:
with a partition of $\Omega$ into finitely many open subsets $\Omega^j$ with $C^{1,\mu}$-boundaries for a $\mu>0$, we assume that $\phi_0$ is H\"older continuous on $\overline{\Omega}^j$ for each $j$.
We then obtain existence and uniqueness of solutions where $\phi$ is also piecewise H\"older continuous in the spatial variables. These conditions cover typical model cases treated in applications.
We focus here on the case of bounded domains $\Omega$ with homogeneous Dirichlet conditions on $u$, which are typically used in application scenarios (see for example \cite{Raess2018,Raess2019}), and on the well-posedness for potentially small time intervals. The existence of solutions on $\R^d$ for arbitrarily long times under the regularity assumptions considered here is left for future investigation.
\subsection{Outline}
In Section \ref{sec:modeldescription}, we describe the full viscoelastic model and its viscous limiting case. Section \ref{sec:viscouslimit} is devoted to the local well-posedness (up to maximal times of existence) of solutions of the viscous model under different regularity assumptions. In Section \ref{sec:viscoelasticmodel}, we study the local well-posedness of the viscoelastic model.
\section{Description of the model}\label{sec:modeldescription}
Let a domain $\Omega\subseteq \R^d$ with $d \in \N$ be given. For ease of notation, for any $T>0$, we introduce the space-time cylinder $\Omega_T=(0,T) \times \Omega$.
The model for poroviscoelastic flow on which we focus in this work reads
\begin{subequations}\label{eq:model}
\begin{align}
\label{eq:modelphieq} \partial_t \phi&=-(1-\phi) \bl \frac{b(\phi)}{\sigma(u)}u +Q\partial_t u \br ,\\
\partial_t u&=\frac{1}{Q} \bl \nabla \cdot a(\phi)(\nabla u + (1-\phi) f)-\frac{b(\phi)}{\sigma(u)}u \br ,
\end{align}
\end{subequations}
with functions $a$, $b$ and $\sigma$ that are to be specified, and where $Q>0$ and $f\in \R^d$ are assumed to be given constants.
Physically meaningful solutions of this problem should satisfy $\phi \in (0,1)$ on $\Omega_T$.
The problem is supplemented with initial data
\begin{align}\label{eq:ic}
\phi(0,x)=\phi_0(x), \quad u(0,x)=u_0(x),\quad x\in\Omega,
\end{align}
for given functions $\phi_0\colon \Omega\to (0,1)$ and $u_0\colon \Omega\to \R$.
In addition, if $\partial \Omega \neq \emptyset$, we equip~\eqref{eq:model} with homogeneous Dirichlet boundary conditions for $u$.
The derivation of \eqref{eq:model} is sketched in Appendix \ref{sec:deriv}. The coefficient functions $a$ and $b$ of main interest are of the form
\begin{equation*}
a(\phi) = a_0\phi^n, \qquad b(\phi) = b_0\phi^m
\end{equation*}
with real constants $a_0, b_0 > 0$ and $n, m \geq 1$. This assumption on $a$ is motivated by the Carman-Kozeny relationship \cite{Costa2006} between the porosity $\phi$ and the permeability of the medium. The function $\sigma$ accounts for \emph{decompaction weakening} \cite{Raess2018,Raess2019} and $\sigma/\phi^m$ can be regarded as the effective viscosity.
\begin{ass}\label{ass:sigma}
We assume that $\sigma\in C^1(\R)$ satisfies
\begin{align*}
\sup_{v \in \R} \sigma(v) < \infty,\quad
\inf_{v\in \R} \sigma(v) > 0, \quad
\sigma' \geq 0 \text{ on $\R$},
\end{align*}
as well as
\begin{align*}
\inf_{v\in \R} \left\{\frac{1}{\sigma(v)}-\frac{v\sigma'(v)}{\sigma^2(v)}\right\}>0,\quad
c_L=\sup_{v\in \R} \left\{\frac{1}{\sigma(v)}-\frac{v\sigma'(v)}{\sigma^2(v)}\right\}<+\infty.
\end{align*}
\end{ass}
A trivial example for $\sigma$ in Assumptions~\ref{ass:sigma} is given by $\sigma(v) = c_0$ for all $v\in\R$ with a constant $c_0>0$, proposed in \cite{Vasilyev1998}. Another example for $\sigma$, suggested in \cite{Raess2018,Raess2019} and satisfying Assumptions~\ref{ass:sigma}, is an expression of the form
\begin{align}\label{eq:sigma}
\sigma(v)=c_0 \bl 1 - c_1 \bl 1 + \tanh \bl -\frac{v}{c_2} \br \br \br ,\quad v\in \R,
\end{align}
which provides a phenomenological model for decompaction weakening.
Here $c_0>0$ is a positive constant, $c_1\in [0,\frac12)$ and $c_2>0$, where $1 + \tanh$ can be regarded as a smooth approximation of a step function taking values in the interval $(0,2)$.
In the most well-studied case $c_1=0$, as considered in \cite{Vasilyev1998}, one observes the formation of porosity waves, whereas $c_1>0$ with appropriate problem parameters and initial conditions can lead to the formation of channels.
With parameters from the stated ranges, $\sigma$ as in \eqref{eq:sigma} satisfies Assumptions~\ref{ass:sigma}: clearly, $\sigma$ is bounded from below by $c_0(1-2c_1)$ and from above by $c_0$, and is non-decreasing with $\sigma'(v)= \tfrac{c_0 c_1}{c_2} (1-\tanh^2(-v/c_2))$, $\lim_{v\to \pm \infty} v\sigma'(v)=0$ and $\sigma(v)>v\sigma'(v)$ for all $v\in \mathbb{R}$.
In what follows, it will be convenient to write
\begin{equation}\label{eq:kappa}
\kappa(v) = \frac{v}{\sigma(v)}\,.
\end{equation}
Note that $\kappa$ is Lipschitz continuous with Lipschitz constant $c_L$.
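Indeed, under Assumptions~\ref{ass:sigma} a direct computation gives
\begin{align*}
\kappa'(v)=\frac{\sigma(v)-v\sigma'(v)}{\sigma(v)^2}
=\frac{1}{\sigma(v)}-\frac{v\sigma'(v)}{\sigma^2(v)}\in(0,c_L] ,
\end{align*}
so $\kappa$ is, in addition, strictly increasing.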
Starting from \eqref{eq:model}, we next consider the viscous limiting case, small-porosity approximations and reformulations that can become necessary for initial data of low regularity.
\subsection{Purely viscous model}
The purely viscous case corresponds to the limiting case $Q = 0$ in~\eqref{eq:model}.
Since the dynamics of~\eqref{eq:model} in the cases of interest are typically dominated by the viscous behaviour, this case can give some first insights. For $Q=0$, the equations \eqref{eq:model} become
\begin{subequations}\label{eq:viscousmodel}
\begin{align}
\partial_t \phi&=-(1-\phi){b(\phi)}{\kappa(u)},\label{eq:viscousphi}\\
0&=\nabla \cdot a(\phi) (\nabla u + (1-\phi)f)-{b(\phi)}{\kappa(u)},
\end{align}
\end{subequations}
again with Dirichlet boundary conditions for $u$.
In this case, we have an initial condition on $\phi$,
\[
\phi(0,x) = \phi_0(x), \quad x \in \Omega ,
\]
but no initial condition on $u$.
\subsection{Small-porosity approximations}
For initial data with $\phi_0(x)\in(0,1]$ for $x \in \Omega$ and bounded $u$, for a classical solution to~\eqref{eq:model} one has $\phi\leq 1$ due to the presence of the factor $1-\phi$ in~\eqref{eq:modelphieq}. Replacing the factor $1-\phi$ in~\eqref{eq:model} by 1, which is a commonly used approximation for small porosities, we obtain the modified model
\begin{subequations}\label{eq:modelmod}
\begin{align}
\partial_t \phi&=- \bigl( {b(\phi)}{\kappa(u)} +Q\partial_t u \bigr) ,\label{eq:modelmodphi}\\
\partial_t u&=\frac{1}{Q} \bigl( \nabla \cdot a(\phi)(\nabla u +f)-{b(\phi)}{\kappa(u)} \bigr) \label{eq:modelmodu}.
\end{align}
\end{subequations}
We again equip~\eqref{eq:modelmod} with Dirichlet boundary conditions for $u$ and supplement it with initial data
\eqref{eq:ic}.
The same approximation can be made in the viscous model~\eqref{eq:viscousmodel}, resulting in
\begin{subequations}\label{eq:viscousmodified}
\begin{align}
\partial_t \phi&=-{b(\phi)}{\kappa(u)},\\
0&=\nabla \cdot a(\phi) (\nabla u+f)-{b(\phi)}{\kappa(u)}\label{eq:viscouselliptic}.
\end{align}
\end{subequations}
For small $\phi$, it is typically assumed that the qualitative behaviour of solutions to~\eqref{eq:modelmod} and~\eqref{eq:viscousmodified} is similar to that of the original models~\eqref{eq:model} and~\eqref{eq:viscousmodel}, respectively.
\subsection{Reformulations}\label{sec:reformulations}
Let us note first that for the full coupled problem~\eqref{eq:model} with nonsmooth initial porosity $\phi_0$, the interpretation of the first equation~\eqref{eq:modelphieq} is not obvious, since it contains a term of the form $(1-\phi)\partial_t u$. When $\phi$ has jump discontinuities in the spatial variables, $\partial_t u$ in general only exists in the distributional sense, that is, as an element of $L^2(0,T; H^{-1}(\Omega))$. In this case, the product of the distribution $\partial_t u$ and $1-\phi$, which is not weakly differentiable, may not be defined.
However, the original problem~\eqref{eq:model} including the factor $1-\phi$ can be reduced to a form similar to~\eqref{eq:modelmod} by the following observation: equation~\eqref{eq:modelphieq} can formally be rewritten as
\begin{align*}
\partial_t \log (1 - \phi) = {b(\phi)}{\kappa(u)} +Q\partial_t u .
\end{align*}
Introducing the new variable $\lambda = - \log(1-\phi)$, so that $\phi = 1 - e^{-\lambda}$, the system~\eqref{eq:model} can be written in the form
\begin{subequations}\label{eq:logtransformed}
\begin{align}
\partial_t \lambda &= - \bl {b(1 - e^{-\lambda})}{\kappa(u)} +Q\partial_t u \br , \\
\partial_t u &= \frac{1}{Q} \bl \nabla \cdot a(1-e^{-\lambda})(\nabla u + e^{-\lambda} f)-{b(1 - e^{-\lambda})}{\kappa(u)} \br ,
\end{align}
\end{subequations}
which has the same structure as~\eqref{eq:modelmod}. Physically meaningful solutions with $0<\phi <1$ are obtained precisely when $\lambda > 0$. As we shall see, the reformulation~\eqref{eq:logtransformed} is also advantageous for obtaining a weak formulation, and we will thus consider~\eqref{eq:model} in this form.
Both~\eqref{eq:modelmod} and~\eqref{eq:logtransformed} are of the general form
\begin{subequations}\label{eq:generalstructure}
\begin{align}
\partial_t \varphi &= -{\beta(\varphi)}{\kappa(u)} - Q \partial_t u ,\label{eq:generalstructure_a}\\
\partial_t u &= \frac{1}{Q} \bl \nabla \cdot \alpha(\varphi) (\nabla u + \zeta(\varphi))-{\beta(\varphi)}{\kappa(u)} \br ,\label{eq:generalstructure_b}
\end{align}
\end{subequations}
where $\alpha, \beta$ and $\zeta$ are locally Lipschitz continuous functions; concerning their regularity, we make Assumptions~\ref{ass:alphabeta} below. Note that since $\varphi$ is in general bounded from above and below, on this range the functions $\alpha, \beta$ and $\zeta$ satisfy a uniform Lipschitz condition.
Similarly, the viscous models \eqref{eq:viscousmodel} or~\eqref{eq:viscousmodified} can be written as
\begin{subequations}\label{eq:generalstructureviscous}
\begin{align}
\partial_t \varphi &= -{\beta(\varphi)}{\kappa(u)} , \label{eq:generalstructureviscous1}\\
0 &= \nabla \cdot \alpha(\varphi) (\nabla u + \zeta(\varphi))-{\beta(\varphi)}{\kappa(u)}.\label{eq:generalstructureviscous2}
\end{align}
\end{subequations}
Unless stated otherwise, we assume in what follows that $\alpha,\beta$ in~\eqref{eq:generalstructure} and~\eqref{eq:generalstructureviscous} satisfy the following conditions.
\begin{ass}\label{ass:alphabeta}
We assume that $\alpha, \beta, \zeta \in C^{0,1}_\mathrm{loc}(\R^+)$
and that $\alpha$ is strictly positive on $\R^+$ in the sense that for each $\delta > 0$ there exists an $\epsilon > 0$ such that $\alpha(x) \geq \epsilon$ for all $x \in [\delta,\infty)$.
Furthermore, we assume that $\beta(x) \geq 0$ for each $x \in \R^+$.
\end{ass}
\begin{rem}
Note that our existence and uniqueness results on the viscous model \eqref{eq:generalstructureviscous} and the fully viscoelastic model \eqref{eq:generalstructure} in Sections~\ref{sec:viscouslimit} and~\ref{sec:viscoelasticmodel}, respectively, also hold under weaker assumptions on $f$. In general, $f$ can be a vector-valued function in $L^\infty(\Omega)$, and for some of our results, even $f\in L^p(\Omega)$ with some $p<\infty$ is sufficient. Only for the results in Section~\ref{sec:higherreg} and the main results of Section~\ref{sec:parabolic}, we need higher regularity assumptions on $f$, still without requiring $f$ to be constant.
\end{rem}
\section{Analysis of the viscous limiting case}\label{sec:viscouslimit}
In this section, we investigate the existence and uniqueness of solutions to the viscous limit model \eqref{eq:generalstructureviscous}.
The proof is based on a reformulation as a fixed point problem for $\varphi$ similarly to the one considered in \cite{Ambrose2018,Simpson2006}, where the main difficulties in the present case result from our substantially lower regularity assumptions on the problem data.
We write \eqref{eq:generalstructureviscous1} in integral form. This results in the viscous limit model in mild formulation for \eqref{eq:generalstructureviscous1} and weak formulation for \eqref{eq:generalstructureviscous2},
\begin{subequations}\label{eq:generalviscousmildweak}
\begin{align}
\varphi(t,\cdot) &= \varphi_0 - \int_0^t \beta(\varphi)\kappa(u) \D s, & & \text{for $t \in [0,T]$,} \label{eq:generalviscousmild} \\
0&=\nabla \cdot \alpha(\varphi) (\nabla u + \zeta(\varphi))-\beta(\varphi)\kappa(u) & & \text{in $W^{-1,2}(\Omega)$}, \label{eq:generalviscousweak}
\end{align}
\end{subequations}
where we assume in what follows that $\sigma$ satisfies Assumptions~\ref{ass:sigma} and $\alpha$, $\beta$ and $\zeta$ satisfy Assumptions~\ref{ass:alphabeta}.
As a first step, we consider the elliptic problem \eqref{eq:generalviscousweak} for fixed $\varphi$.
\subsection{Elliptic problem}
Let $T>0$ be given. As a first step, for any $t\in [0,T]$, we show the existence of a unique weak solution $u$ of the elliptic equation~\eqref{eq:generalviscousweak}, that is,
\begin{align}\label{eq:viscousellipticproof}
\nabla \cdot \alpha(\tilde\varphi) (\nabla u + \zeta(\tilde\varphi))-\beta(\tilde\varphi) \kappa(u) = 0
\end{align}
with given $\tilde\varphi \in L^\infty(\Omega)$ such that $\inf_\Omega \tilde\varphi > 0$, for any space dimension $d\geq 1$.
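The structure of~\eqref{eq:viscousellipticproof} can be illustrated numerically before turning to the general theory. The following Python sketch (ours, not from the paper; all parameter choices are illustrative) solves a one-dimensional instance with constant $\sigma\equiv c_0$, so that $\kappa(u)=u/c_0$ and the problem is linear, by a standard finite-difference discretisation; note that the porosity is allowed to have a jump discontinuity.

```python
import numpy as np

# Illustrative 1D finite-difference solve of
#   (alpha(phi) (u' + zeta(phi)))' = beta(phi) u / c0  on (0,1), u(0) = u(1) = 0,
# with sigma constant (kappa(u) = u/c0) and a discontinuous porosity.
# Parameter choices (n = 3, m = 2, c0 = 1, f = 1, the jump in phi) are illustrative.
N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
c0, f = 1.0, 1.0

phi = np.where(x < 0.5, 0.1, 0.2)      # porosity with a jump discontinuity
alpha = phi ** 3                       # a(phi) = phi^n
beta = phi ** 2                        # b(phi) = phi^m
zeta = (1.0 - phi) * f

am = 0.5 * (alpha[:-1] + alpha[1:])    # alpha at midpoints x_{i+1/2}
zm = 0.5 * (zeta[:-1] + zeta[1:])      # zeta at midpoints

A = np.zeros((N - 1, N - 1))           # system for the interior unknowns
rhs = np.zeros(N - 1)
for i in range(1, N):
    k = i - 1
    A[k, k] = -(am[i - 1] + am[i]) / h ** 2 - beta[i] / c0
    if k > 0:
        A[k, k - 1] = am[i - 1] / h ** 2
    if k < N - 2:
        A[k, k + 1] = am[i] / h ** 2
    # right-hand side from the divergence of the flux alpha * zeta
    rhs[k] = -(am[i] * zm[i] - am[i - 1] * zm[i - 1]) / h

u = np.zeros(N + 1)                    # homogeneous Dirichlet data included
u[1:N] = np.linalg.solve(A, rhs)
```

The discrete flux $F_{i+1/2}=\alpha_{i+1/2}\bigl((u_{i+1}-u_i)/h+\zeta_{i+1/2}\bigr)$ then satisfies the balance $(F_{i+1/2}-F_{i-1/2})/h=\beta_i u_i/c_0$ at every interior node, mirroring the weak formulation.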
\begin{thm}\label{th:existence_uniqueness_elliptic}
Let $\Omega\subset \R^d$ be a bounded domain. Let $t\in [0,T]$ be given and suppose that $\tilde\varphi \in L^\infty(\Omega)$
satisfies $\inf_\Omega \tilde\varphi >0$.
Then~\eqref{eq:viscousellipticproof} with homogeneous Dirichlet boundary conditions has a unique weak solution in $W^{1,2}_0( \Omega)$ satisfying
\begin{align}\label{eq:elliptic_W12bound}
\| u\|_{W^{1,2}(\Omega)}\leq C \Norm[L^{2}(\Omega)]{\alpha(\tilde\varphi) \, \zeta(\tilde\varphi)}
\end{align}
for a constant $C>0$ depending on $\Omega$, $\sigma$ and on the lower bound of $\tilde\varphi$, but independent of $u$.
\end{thm}
\begin{proof}
To prove the existence of a unique weak solution of~\eqref{eq:viscousellipticproof}, we consider the functional $\mathcal J\colon W^{1,2}_0(\Omega)\to \R$ defined by
\begin{align*}
\mathcal J(u)=\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) \nabla u \cdot \nabla u\di x+\int_\Omega \beta(\tilde\varphi)V(u)\di x+\int_\Omega gu \di x.
\end{align*}
Here, $g=-\nabla \cdot \alpha(\tilde\varphi) \, \zeta(\tilde\varphi) \in W^{-1,2}(\Omega)$ since $\alpha(\tilde\varphi) \, \zeta(\tilde\varphi) \in L^\infty(\Omega) \subseteq L^2(\Omega)$. We set
\begin{align*}
V(u)=\int_0^u \kappa(s)\di s
\end{align*}
for $\kappa$ from~\eqref{eq:kappa},
implying that
\begin{align*}
V''(u)=\kappa'(u)=\frac{1}{\sigma(u)}-\frac{u\sigma'(u)}{\sigma^2(u)}>0 ,
\end{align*}
and hence $V$ is convex. A minimiser $u$ of $\mathcal J$ is a weak solution of~\eqref{eq:viscousellipticproof}, i.e.
\begin{align*}
\int_\Omega g\psi \di x=-\int_\Omega \nabla\psi \cdot \alpha(\tilde\varphi) \nabla u\di x-\int_\Omega \beta(\tilde\varphi) \kappa(u) \psi\di x
\end{align*}
for all $\psi\in W^{1,2}_0(\Omega)$.
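Indeed, this identity is the Euler--Lagrange equation of $\mathcal J$: for any direction $\psi\in W^{1,2}_0(\Omega)$, using $V'=\kappa$,
\begin{align*}
0=\frac{\di}{\di \tau}\mathcal J(u+\tau\psi)\Big|_{\tau=0}
=\int_\Omega \alpha(\tilde\varphi) \nabla u \cdot \nabla \psi\di x+\int_\Omega \beta(\tilde\varphi)\kappa(u)\psi\di x+\int_\Omega g\psi \di x .
\end{align*}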
To show that $\mathcal J$ has a minimiser in $W^{1,2}_0(\Omega)$, it is sufficient to prove that $\mathcal J$ is coercive and weakly lower semi-continuous on $W^{1,2}_0(\Omega)$.
For the coercivity of $\mathcal J$, note that $V(s)\geq 0$ for all $s\in \R$ and that $\tilde\varphi$ is bounded by a positive constant from below. This implies that there exists a constant $C_1>0$ such that
\begin{align*}
\mathcal J(u)\geq C_1\| \nabla u \|_{L^2(\Omega)}^2+\mathcal F(u)
\end{align*}
where
\begin{align*}
\mathcal F(u)
= \int_\Omega gu \di x=\int_\Omega\nabla u \cdot \alpha(\tilde\varphi) \, \zeta(\tilde\varphi)\di x\geq -\Norm[W^{1,2}(\Omega)]{u} \Norm[L^{2}(\Omega)]{\alpha(\tilde\varphi) \, \zeta(\tilde\varphi)}
\end{align*}
by the Cauchy--Schwarz inequality.
By the Poincaré inequality, there exists a constant $C>0$ such that
\begin{align*}
\mathcal J(u)\geq C \| u \|_{W^{1,2}(\Omega)}^2-\| u\|_{W^{1,2}(\Omega)} \|\alpha(\tilde\varphi) \, \zeta(\tilde\varphi)\|_{L^{2}(\Omega)}
\end{align*}
and hence $\mathcal J$ is coercive.
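Indeed, by Young's inequality $ab\leq \tfrac{C}{2}a^2+\tfrac{1}{2C}b^2$, the last estimate gives
\begin{align*}
\mathcal J(u)\geq \frac{C}{2} \| u \|_{W^{1,2}(\Omega)}^2-\frac{1}{2C} \|\alpha(\tilde\varphi) \, \zeta(\tilde\varphi)\|_{L^{2}(\Omega)}^2,
\end{align*}
so that $\mathcal J(u)\to \infty$ as $\Norm[W^{1,2}(\Omega)]{u}\to \infty$.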
To show that $\mathcal J$ is weakly lower semi-continuous, let $\{u_n\}_{n\in \N}\subset W^{1,2}_0(\Omega)$ be a sequence converging weakly to $u$ in $W^{1,2}_0(\Omega)$.
Since $\alpha(\tilde\varphi) \nabla u\in L^2(\Omega)$ and $\nabla u_n$ converges weakly to $\nabla u$ in $L^2(\Omega)$, the inequality
\begin{align*}
\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) \nabla u_n \cdot \nabla u_n\di x\geq
\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) (2\nabla u \cdot \nabla u_n-\nabla u\cdot \nabla u)\di x,
\end{align*}
whose right-hand side converges to $\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) \nabla u \cdot \nabla u\di x$, yields the weak lower semi-continuity of the first term of $\mathcal{J}$.
Note that there exists a constant $C_2>0$ such that
\begin{align*}
\int_\Omega \beta(\tilde\varphi) (V(u_n)-V(u))\di x&=\int_\Omega \beta(\tilde\varphi) \int_u^{u_n} \frac{s}{\sigma(s)}\di s\di x\geq \frac{C_2}{2} \int_\Omega \beta(\tilde\varphi) (u_n^2-u^2)\di x\\&\geq C_2 \int_\Omega \beta(\tilde\varphi) ( uu_n-u^2)\di x,
\end{align*}
implying that the second term of $\mathcal J$ is weakly lower semi-continuous. Since $\alpha(\tilde\varphi) \, \zeta(\tilde\varphi) \in L^2(\Omega)$ and $\{u_n\}_{n\in \N}$ converges weakly to $u$ in $L^2(\Omega)$, we have $\lim_{n\to\infty} \mathcal F(u_n)=\mathcal F(u)$.
Hence, $\mathcal J$ is weakly lower semi-continuous and together with $\mathcal J$ being coercive, this guarantees the existence of a minimiser of $\mathcal J$.
To show the uniqueness of minimisers of $\mathcal J$, we consider minimisers $u,\tilde u\in W^{1,2}_0(\Omega)$ of $\mathcal J$. Then,
\begin{align*}
\mathcal J(u)\leq \mathcal J \bl \frac{u+\tilde u}{2} \br , \quad \mathcal J(\tilde u)\leq \mathcal J \bl \frac{u+\tilde u}{2} \br ,
\end{align*}
implying
\begin{align*}
\mathcal J(u)+ \mathcal J(\tilde u)\leq 2\mathcal J \bl \frac{u+\tilde u}{2} \br
\end{align*}
which yields
\begin{multline*}
\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) \nabla u \cdot \nabla u\di x+\int_\Omega \beta(\tilde\varphi)V(u)\di x
+\frac{1}{2}\int_\Omega \alpha(\tilde\varphi) \nabla \tilde u \cdot \nabla \tilde u\di x+\int_\Omega \beta(\tilde\varphi) V(\tilde u)\di x\\
\leq \frac{1}{4}\int_\Omega \alpha(\tilde\varphi) \nabla (u+\tilde u) \cdot \nabla (u+\tilde u) \di x+2\int_\Omega \beta(\tilde\varphi) V \bl \frac{u+\tilde u}{2} \br \di x.
\end{multline*}
The convexity of $V$ implies
\begin{align*}
V \bl \frac{u+\tilde u}{2} \br \leq \frac{1}{2} V(u)+\frac{1}{2}V(\tilde u)
\end{align*}
and we obtain
\begin{align*}
&\frac{1}{4}\int_\Omega \alpha(\tilde\varphi) \nabla (u-\tilde u) \cdot \nabla (u-\tilde u) \di x
\leq 0.
\end{align*}
Since $\tilde\varphi$ is bounded from below by a positive constant, so is $\alpha(\tilde\varphi)$, and we obtain $\| \nabla (u-\tilde u) \|_{L^2(\Omega)}=0$, implying $u=\tilde u$ in $W^{1,2}_0(\Omega)$.
That the weak solution $u\in W^{1,2}_0(\Omega)$ of~\eqref{eq:viscousellipticproof} satisfies~\eqref{eq:elliptic_W12bound} follows from
\begin{align*}
&\|u\|_{W^{1,2}(\Omega)} \|\alpha(\tilde\varphi) \, \zeta(\tilde\varphi)\|_{L^{2}(\Omega)} \\&\geq -\int_\Omega\nabla u \cdot \alpha(\tilde\varphi) \, \zeta(\tilde\varphi) \di x\\&=-\int_\Omega gu\di x=\int_\Omega \nabla u \cdot \alpha(\tilde\varphi) \nabla u\di x+\int_\Omega \frac{\beta(\tilde\varphi)}{\sigma(u)} u^2\di x\geq C \| u\|_{W^{1,2}(\Omega)}^2,
\end{align*}
where $C>0$ depends on $\sigma$ and the lower bound of $\tilde\varphi$.
\end{proof}
Since we already know by Theorem~\ref{th:existence_uniqueness_elliptic} that there exists a solution to the nonlinear elliptic problem \eqref{eq:viscousellipticproof}, we may regard the factor $1/\sigma(u) \in L^\infty(\Omega)$ in $\kappa(u)=u/\sigma(u)$ as part of the zeroth-order coefficient and apply linear elliptic regularity results to \eqref{eq:viscousellipticproof}.
\subsubsection{Elliptic regularity theory}
We next recall two additional regularity results for elliptic problems that we apply both to \eqref{eq:viscousellipticproof} and to its linearisation with respect to $\varphi$.
We thus state these results for general linear elliptic equations
\begin{align}\label{eq:ellgeneral}
-\nabla \cdot (a \nabla u) + b u
= \nabla \cdot f + g,
\end{align}
on a bounded domain $\Omega \subset \R^d$ with homogeneous Dirichlet boundary conditions,
where $a, b \in L^\infty(\Omega)$ and $\inf_\Omega a>0$.
Using \cite[Thm.~8.15]{Gilbarg2001}, we obtain uniform boundedness of the solution to \eqref{eq:ellgeneral} for $f$ and $g$ of suitable integrability.
\begin{thm}\label{th:unifbounded}
Let $\Omega\subset \R^d$ be a bounded domain, $q>\max\{d,2\}$, $f \in L^q(\Omega,\R^d)$ and $g \in L^{q/2}(\Omega)$. Then the weak solution $u \in W^{1,2}_0(\Omega)$ to \eqref{eq:ellgeneral} satisfies
\begin{align*}
\Norm[L^\infty(\Omega)]{u}
\leq C \bl \Norm[L^2(\Omega)]{u} + \Norm[L^q(\Omega)]{f} + \Norm[L^{q/2}(\Omega)]{g} \br,
\end{align*}
where $C>0$ depends on $d$, $q$, $\operatorname{meas}(\Omega)$, the upper bounds of $a, b$ and the lower bound of $a$.
\end{thm}
Under similar integrability assumptions on $f$ and $g$, we obtain higher integrability of the solution to \eqref{eq:ellgeneral} by the following result of Meyers in \cite[Thm.~1]{Meyers1963}. The stated assumptions on $\partial\Omega$ are sufficient by \cite[Thm.~1]{Auscher:02}.
\begin{thm}\label{th:meyerhigherint}
Let $\Omega\subset \R^d$ be a bounded domain with $C^1$-boundary.
Then for the weak solution $u \in W^{1,2}_0(\Omega)$ of \eqref{eq:ellgeneral}, we have $u \in W^{1,p}(\Omega)$ for some $p>2$ if $f \in L^p(\Omega,\R^d)$ and $g \in L^{r}(\Omega)$ with $r = \max\{2, (dp)/(d+p)\}$, as well as the estimate
\begin{align*}
\Norm[W^{1,p}(\Omega)]{u}
\leq C \bl \Norm[L^r(\Omega)]{u} + \Norm[L^p(\Omega)]{f} + \Norm[L^{r}(\Omega)]{g} \br.
\end{align*}
\end{thm}
Note that we can always choose $p > 2$ sufficiently small such that $r = 2$.
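To see this, note that
\begin{align*}
\frac{dp}{d+p}\leq 2 \iff dp\leq 2d+2p \iff p(d-2)\leq 2d,
\end{align*}
which holds for every $p>2$ if $d\leq 2$, and for $2<p\leq \frac{2d}{d-2}$ if $d\geq 3$, where $\frac{2d}{d-2}>2$.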
\subsubsection{Lipschitz estimate with respect to porosity}
Applying Theorem~\ref{th:unifbounded} and Theorem~\ref{th:meyerhigherint} to~\eqref{eq:viscousellipticproof}, we obtain $u \in L^\infty(\Omega) \cap W^{1,p}(\Omega)$ for some $p > 2$ provided that $\zeta(\tilde\varphi) \in L^q(\Omega)$ for $q > \max\{2,d\}$.
Next, we show that the solution $u(\tilde\varphi)\in W^{1,2}_0(\Omega)$ to \eqref{eq:viscousellipticproof} is Lipschitz continuous with respect to $\tilde\varphi\in L^\infty(\Omega)$ for $d\leq 2$. We will use this Lipschitz estimate to study the coupled problem~\eqref{eq:generalstructureviscous}.
\begin{cor}\label{th:lipschitzestimatelinf}
Let $d \leq 2$ and suppose that $\Omega\subset \R^d$ is a bounded domain with $C^{1}$-boundary. Let $\tilde\varphi_1, \tilde\varphi_2\in L^{\infty}( \Omega)$ with $\min\{ \inf_\Omega\tilde\varphi_1,\inf_\Omega\tilde\varphi_2\} > 0$, let $\alpha, \beta, \zeta \in C^1(\R^+)$ and let $\sigma \in C^{1,1}(\R)$. The unique weak solutions to~\eqref{eq:viscousellipticproof} in $W^{1,2}_0(\Omega)$ corresponding to $\tilde\varphi_1$ and $\tilde\varphi_2$ are denoted by $u(\tilde\varphi_1)$ and $u(\tilde\varphi_2)$, respectively. Then
\begin{align}\label{eq:lipschitzhoelder}
\Norm[L^\infty(\Omega)]{u(\tilde\varphi_1)-u(\tilde\varphi_2)} \leq C \Norm[L^\infty(\Omega)]{\tilde\varphi_1-\tilde\varphi_2}
\end{align}
and
\begin{align}\label{eq:lipschitzsigma}
\Norm[L^\infty(\Omega)]{\kappa(u(\tilde\varphi_1)) - \kappa(u(\tilde\varphi_2))}
\leq C \Norm[L^\infty(\Omega)]{\tilde\varphi_1-\tilde\varphi_2}
\end{align}
with constants $C>0$ independent of $\tilde\varphi_1$ and $\tilde\varphi_2$.
\end{cor}
\begin{proof}
As a first step, we show that the solution
$u \in L^\infty(\Omega)$ (by Theorem~\ref{th:unifbounded})
of \eqref{eq:viscousellipticproof} is Fr\'echet differentiable with respect to $\tilde\varphi \in L^\infty(\Omega)$ with $\inf_\Omega\tilde\varphi>0$.
For each $h \in L^\infty(\Omega)$, the candidate for the directional derivative $w_h = u'(\tilde\varphi)h \in W_0^{1,2}(\Omega)$ is given implicitly by the weak formulation of the equation
\begin{multline}\label{eq:elliptic_linear_wh}
\nabla \cdot ( \alpha(\tilde\varphi) \nabla w_h) - \beta(\tilde\varphi) \, \kappa'(u(\tilde\varphi)) \, w_h\\
= -\nabla \cdot \bl \alpha'(\tilde\varphi) h (\nabla u(\tilde\varphi)+\zeta(\tilde\varphi)) + \alpha(\tilde\varphi) \zeta'(\tilde\varphi) h \br + \beta'(\tilde\varphi) \, h \, \kappa(u(\tilde\varphi)).
\end{multline}
One obtains~\eqref{eq:elliptic_linear_wh} by formally differentiating
\begin{align*}
\frac{\di}{\di \dphi} \Bigl( L_{\tilde\varphi + \dphi h} (u(\tilde\varphi + \dphi h)) - g(\tilde\varphi + \dphi h) \Bigr)\bigg|_{\dphi=0} = 0,
\end{align*}
where
\begin{align*}
L_{\tilde\varphi}(u(\tilde\varphi))
&= \nabla \cdot \alpha(\tilde\varphi) \nabla u(\tilde\varphi)-\beta(\tilde\varphi) \kappa(u(\tilde\varphi)), \quad g(\tilde\varphi)=-\nabla \cdot \alpha(\tilde\varphi) \zeta(\tilde\varphi).
\end{align*}
We need to show that $w_h$ solving \eqref{eq:elliptic_linear_wh} is indeed a Fr\'echet derivative, that is, with $\udphi := u(\tilde\varphi + \dphi h)$ we have
\begin{align*}
\lim_{\dphi \to 0} \Norm[L^\infty(\Omega)]{\frac{\udphi - \uzero}{\dphi} - w_h} = 0
\end{align*}
where the convergence is uniform in $h$ with $\Norm[L^\infty]{h}=1$, and $u'(\tilde\varphi)$ given by $w_h = u'(\tilde\varphi) h$ is a bounded operator on $L^\infty$.
Considering \eqref{eq:viscousellipticproof} for $\uzero$ and $\udphi$ yields the difference equation
\begin{multline}\label{eq:elldiff}
\nabla \cdot (\alpha(\tilde\varphi) \nabla(\udphi - \uzero)) - \beta(\tilde\varphi) \Delta_{\kappa,\uzero}(\udphi) (\udphi - \uzero)\\
\begin{split}
=& - \nabla \cdot ((\alpha(\tilde\varphi + \dphi h) - \alpha(\tilde\varphi)) \nabla \udphi) + \kappa(\udphi) (\beta(\tilde\varphi + \dphi h) - \beta(\tilde\varphi))\\
&- \nabla \cdot (\alpha(\tilde\varphi + \dphi h) \zeta(\tilde\varphi + \dphi h) - \alpha(\tilde\varphi) \zeta(\tilde\varphi)).
\end{split}
\end{multline}
Here we define
\begin{align}\label{eq:Delta}
\Delta_{\kappa,y}(x) :=
\begin{cases}
\frac{\kappa(x)-\kappa(y)}{x-y} &\text{if } x \neq y,\\
\kappa'(y) &\text{else,}
\end{cases}
\end{align}
which is used pointwise in~\eqref{eq:elldiff} and~\eqref{eq:ellderiv}.
We have $\kappa(\udphi)-\kappa(\uzero)=\Delta_{\kappa,\uzero}(\udphi) (\udphi-\uzero)$ where $\Delta_{\kappa,\uzero}$ is continuous at $\uzero$ and $\Delta_{\kappa,\uzero}(\uzero)=\kappa'(\uzero)$.
By combining \eqref{eq:elldiff} with \eqref{eq:elliptic_linear_wh}, we obtain
\begin{multline}\label{eq:ellderiv}
\nabla \cdot \bl \alpha(\tilde\varphi) \, \nabla\!\bl \frac{\udphi - \uzero}{\dphi} - w_h \br \br - \beta(\tilde\varphi) \kappa'(\uzero) \bl \frac{\udphi - \uzero}{\dphi} - w_h \br\\
\begin{split}
=& - \nabla \cdot \bl \bl \frac{\alpha(\tilde\varphi + \dphi h) - \alpha(\tilde\varphi)}{\dphi} - \alpha'(\tilde\varphi) h \br \nabla \udphi \br - \nabla \cdot (\alpha'(\tilde\varphi) h (\nabla \udphi - \nabla \uzero))\\
&+ \bl \frac{\beta(\tilde\varphi + \dphi h) - \beta(\tilde\varphi)}{\dphi} - \beta'(\tilde\varphi) h \br \kappa(\udphi) + \beta'(\tilde\varphi) h (\kappa(\udphi) - \kappa(\uzero))\\
&- \nabla \cdot \bl \frac{\alpha(\tilde\varphi + \dphi h) \zeta(\tilde\varphi + \dphi h) - \alpha(\tilde\varphi) \zeta(\tilde\varphi)}{\dphi} - \alpha'(\tilde\varphi) \zeta(\tilde\varphi) h - \alpha(\tilde\varphi) \zeta'(\tilde\varphi) h \br \\
&+ \beta(\tilde\varphi) \frac{\udphi - \uzero}{\dphi} (\Delta_{\kappa,\uzero}(\udphi) - \kappa'(\uzero)).
\end{split}
\end{multline}
In the case $d=1$, we apply Theorem~\ref{th:existence_uniqueness_elliptic} to~\eqref{eq:elldiff} to obtain
$\Norm[W^{1,2}(\Omega)]{\udphi - \uzero} \leq C \dphi$, which by the embedding $W^{1,2}(\Omega)\subset L^\infty(\Omega)$ for $d=1$ implies $\udphi\to \uzero$ in $L^\infty(\Omega)$ uniformly in $h$ with $\Norm[L^\infty]{h}=1$.
Applying Theorem~\ref{th:existence_uniqueness_elliptic} to~\eqref{eq:ellderiv} and using the properties of $\Delta_{\kappa,\uzero}$ at $\uzero$ yield the desired result.
If $d=2$, we apply Theorem~\ref{th:unifbounded} and Theorem~\ref{th:meyerhigherint} to~\eqref{eq:elldiff} implying that
$\Norm[L^\infty(\Omega)]{\udphi - \uzero} \leq C \dphi$ and $\udphi \to \uzero$ in $W^{1,p}(\Omega)$.
Applying Theorem~\ref{th:unifbounded} to~\eqref{eq:ellderiv} yields the desired result also in this case with convergence uniform in $h$ with $\Norm[L^\infty]{h}=1$.
Note that all the arguments here make use of the Lipschitz continuity of $\alpha, \beta, \zeta, \kappa$ and $\kappa'$.
To show that $u'(\tilde\varphi)$ is indeed a linear bounded operator on $L^\infty$, we again distinguish the cases $d=1$ and $d=2$.
For $d=1$, the solution $w_h$ of the elliptic problem \eqref{eq:elliptic_linear_wh} satisfies $\| w_h\|_{W^{1,2}(\Omega)}\leq C\Norm[L^\infty(\Omega)]{h}$ for some constant $C>0$ independent of $h$ and together with the embedding $W^{1,2}(\Omega) \subset L^\infty(\Omega)$ for $d=1$, an upper bound of $\Norm[L^\infty\to L^\infty]{u'(\tilde\varphi)}$ immediately follows.
To obtain such a bound for $d=2$, we apply Theorem~\ref{th:meyerhigherint} to obtain $\nabla u \in L^p(\Omega)$ for some $p>d=2$. This allows us to apply Theorem~\ref{th:unifbounded}, which requires only $\nabla u \in L^p(\Omega)$ for some $p>d$, to the linearized problem \eqref{eq:elliptic_linear_wh}, and we obtain
\begin{align*}
\Norm[L^\infty(\Omega)]{w_h}
&\leq C_1 \Norm[L^\infty(\Omega)]{h} \bl \Norm[L^p(\Omega)]{\alpha'(\tilde\varphi) (\nabla u(\tilde\varphi) + \zeta(\tilde\varphi)) + \alpha(\tilde\varphi) \zeta'(\tilde\varphi)} + \Norm[L^p(\Omega)]{\beta'(\tilde\varphi) \kappa(u(\tilde\varphi))} \br\\
&\leq C_2 \Norm[L^\infty(\Omega)]{h}
\end{align*}
with a constant $C_2>0$ independent of $h$, which shows $\Norm[L^\infty\to L^\infty]{u'(\tilde\varphi)} \leq C_2$.
We now show the Lipschitz estimates \eqref{eq:lipschitzhoelder} and \eqref{eq:lipschitzsigma}.
For $\tilde\varphi_1,\tilde\varphi_2\in L^\infty(\Omega)$, we set
\begin{align*}
\theta(s) := u((1-s)\tilde\varphi_1 + s\tilde \varphi_2),
\end{align*}
and one can easily verify that its Fréchet derivative satisfies
\begin{align*}
\theta' (s) &= u'((1-s)\tilde\varphi_1 +s\tilde\varphi_2) (\tilde\varphi_2 -\tilde \varphi_1).
\end{align*}
This allows us to use the fundamental theorem of calculus for the associated Bochner integral
\cite[Proposition A.2.3]{Liu2015} and
we obtain the Bochner integral representation
\begin{align}\label{eq:integralrepell}
u(\tilde\varphi_2) - u(\tilde\varphi_1) = \theta(1) - \theta(0)
= \int_0^1 \theta'(s)\,\di s = \int_0^1 u'((1-s)\tilde\varphi_1 +s\tilde\varphi_2) (\tilde\varphi_2 - \tilde\varphi_1)\,\di s.
\end{align}
This leads to~\eqref{eq:lipschitzhoelder} by taking $L^\infty$-norms on both sides. To show~\eqref{eq:lipschitzsigma}, we note the Lipschitz continuity of
$\kappa$ by Assumptions~\ref{ass:sigma}, which immediately implies
\begin{align*}
\Norm[L^\infty(\Omega)]{\kappa(u(\tilde\varphi_1)) - \kappa(u(\tilde\varphi_2))}
\leq c_L \Norm[L^\infty(\Omega)]{u(\tilde\varphi_1)-u(\tilde\varphi_2)}
\end{align*}
and together with~\eqref{eq:lipschitzhoelder} yields~\eqref{eq:lipschitzsigma}.
\end{proof}
\subsection{Coupled problem with nonsmooth data}\label{sec:coupledviscous}
In this section, assuming $d\leq 2$, we investigate the existence and uniqueness of solutions to the viscous limiting problem~\eqref{eq:generalviscousmildweak}.
Provided that $\varphi(t,\cdot)\in L^\infty(\Omega)$ is uniformly positive, the existence and uniqueness of a weak solution $u(\varphi(t,\cdot))\in W^{1,2}(\Omega)$ is guaranteed by Theorem~\ref{th:existence_uniqueness_elliptic}, and Corollary~\ref{th:lipschitzestimatelinf} provides a Lipschitz estimate for the solutions $u(\varphi(t,\cdot))\in W^{1,2}(\Omega)$ with respect to such $\varphi(t,\cdot)\in L^\infty(\Omega)$.
For $T>0$ and $R>\epsilon>0$, let
\begin{align*}
\CHI_T=\Bigl\{ \varphi\in C([0,T], L^\infty( \Omega))\colon \sup_{t\in[0,T]} \|\varphi(t,\cdot) \|_{L^\infty( \Omega)}\leq R, \quad \inf_{t\in[0,T]} \varphi(t,x) \geq \epsilon \text{ for a.e.~} x\in \Omega \Bigr\}.
\end{align*}
We define $\mathcal{E}$ as the nonlinear operator that maps $\varphi(t,\cdot)$ to the corresponding solution $u$ of~\eqref{eq:viscousellipticproof}. Moreover, for all $t\in [0,T]$, we define
\begin{equation}\label{eq:lambdaoperatorLinfty}
\Lambda[\varphi](t,\cdot)
= \varphi_0 - \int_{0}^{t} \beta(\varphi(s,\cdot)) \, \kappa(\mathcal{E}(\varphi(s,\cdot))) \D s.
\end{equation}
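In particular, any fixed point $\varphi=\Lambda[\varphi]$ of this operator satisfies, upon differentiating~\eqref{eq:lambdaoperatorLinfty} in time (the integrand being continuous in $t$),
\begin{align*}
\partial_t \varphi(t,\cdot) = -\beta(\varphi(t,\cdot)) \, \kappa(\mathcal{E}(\varphi(t,\cdot))), \qquad \varphi(0,\cdot)=\varphi_0 .
\end{align*}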
Note that for $\varphi\in \CHI_T$, we only require continuity with respect to the time variable, but $\Lambda[\varphi]$ is continuously differentiable in time by construction. The operator $\Lambda$ satisfies a contraction property on $\CHI_T$.
\begin{prp}\label{prp:localtimeexistence}
Let $d \leq 2$ and $R>\epsilon>0$. Let $\Omega\subset \R^d$ be a bounded domain with $C^1$-boundary. Let $\alpha, \beta, \zeta \in C^1(\R^+)$. %
Assume that $\varphi_0\in L^\infty(\Omega)$ satisfies
$\|\varphi_0 \|_{L^\infty( \Omega)}< R$ and $ \inf_\Omega \varphi_0 > \epsilon$. Then the following statements hold.
\begin{enumerate}
\item[(i)] There exists a $T_1 = T_1(R, \epsilon, d, \varphi_0)$ such that $\Lambda[\varphi] \in \CHI_T$ for all $T \in(0, T_1)$ and $\varphi \in \CHI_T$ where $u(t,\cdot)=\mathcal{E}(\varphi(t,\cdot))\in W^{1,2}(\Omega)$ for all $t\in [0,T]$.
\item[(ii)] There exists $T_2 \in (0,T_1]$, $T_2 = T_2(R, \epsilon, d, \varphi_0)$ such that $\Lambda$ defines a contraction on $\CHI_T$ for $T \in (0, T_2)$ with respect to the norm of $C([0,T]; L^\infty(\Omega))$.
\end{enumerate}
\end{prp}
\begin{proof}
The proof follows a strategy similar to that of \cite[Proposition~2.9]{Simpson2006}.
To prove (i), we need to find $T_1 > 0$ such that for each fixed $T\in(0, T_1)$ we have
\begin{align}
\Lambda[\varphi]&\in C([0,T]; L^\infty(\Omega)),\label{equ8b}\\
\|\Lambda[\varphi](t,\cdot) \|_{L^\infty(\Omega)}&\leq R,\label{equ6b}\\ \Lambda[\varphi](t,\cdot) &\geq \epsilon\label{equ7b}
\end{align}
for all $t \in [0,T]$.
Let $T>0$ be given and let $\mathcal{E}(\varphi(t,\cdot)) \in W^{1,p}(\Omega)$ for $t \in [0,T]$ be the unique solution of
\begin{align*}
L_{\varphi(t,\cdot)}(\mathcal{E}(\varphi(t,\cdot))) = - \nabla \cdot \alpha(\varphi(t,\cdot)) \, \zeta(\varphi(t,\cdot))
\end{align*}
from Theorem~\ref{th:existence_uniqueness_elliptic} together with Theorem~\ref{th:meyerhigherint}.
Furthermore we can estimate
\begin{align*}
\Norm[L^\infty(\Omega)]{\Lambda[\varphi](t_1,\cdot) - \Lambda[\varphi](t_2,\cdot)}
&\leq \left| \int_{t_1}^{t_2} \Norm[L^\infty(\Omega)]{\beta(\varphi(s,\cdot))\, \kappa(\mathcal{E}(\varphi(s,\cdot)))} \D s\right|\\
&\leq C \AV{t_2 - t_1},
\end{align*}
for some constant $C>0$ independent of $T$, where we used Theorem~\ref{th:unifbounded} as well as the uniform bounds on $\varphi$ and $\sigma$. This implies that $\Lambda[\varphi] \in W^{1,\infty}(0,T;L^\infty(\Omega))$ and hence~\eqref{equ8b}.
For showing~\eqref{equ6b}, note that
\begin{align*}
\|\Lambda[\varphi](t,\cdot) \|_{L^\infty(\Omega)}&\leq \|\varphi_0 \|_{L^\infty(\Omega)}+ C T
\end{align*}
using the same arguments as above.
We may choose $T>0$ sufficiently small such that~\eqref{equ6b} holds, e.g.\ $T\leq (R-\|\varphi_0 \|_{L^\infty(\Omega)})/C$, because $\|\varphi_0 \|_{L^\infty(\Omega)}< R$ by assumption.
Next we show~\eqref{equ7b}. Since $|\Lambda[\varphi](t,\cdot)-\varphi_0|\leq CT$ for all $t\in [0,T]$, we have
\begin{align*}
\Lambda[\varphi](t,\cdot)\geq \varphi_0- CT\geq \inf_\Omega \varphi_0 - CT,
\end{align*}
and since $\inf_\Omega \varphi_0 > \epsilon$ by assumption, we may choose $T>0$ sufficiently small so that also~\eqref{equ7b} holds. Hence, $T_1>0$ can be chosen such that~\eqref{equ8b}--\eqref{equ7b} are satisfied, which concludes the proof of (i).
To establish (ii), let $\varphi_1,\varphi_2\in\CHI_T$. For $t\in[0,T]$, we have
\begin{align*}
\|\Lambda[\varphi_1]( t,\cdot) - \Lambda[\varphi_2](t,\cdot)\|_{L^\infty(\Omega)}
&\leq C \int_{0}^{t} \|\varphi_1(s,\cdot) - \varphi_2(s,\cdot)\|_{L^\infty(\Omega)} \D s\\&\leq C T\sup_{s\in[0,T]} \|\varphi_1(s,\cdot) - \varphi_2(s,\cdot)\|_{L^\infty(\Omega)}
\end{align*}
for some constant $C>0$ independent of $T$ due to the Lipschitz continuity of
$x\mapsto \kappa(x)$
and $\beta$, as well as the Lipschitz continuity of $u$ by \eqref{eq:lipschitzhoelder}. This implies that
\begin{align*}
\sup_{t\in[0,T]}\|\Lambda[\varphi_1]( t,\cdot) - \Lambda[\varphi_2](t,\cdot)\|_{L^\infty(\Omega)}
\leq C T\sup_{s\in[0,T]} \|\varphi_1(s,\cdot) - \varphi_2(s,\cdot)\|_{L^\infty(\Omega)}.
\end{align*}
By choosing $T_2 \in (0, T_1]$ such that $C T_2 < 1$ we obtain that $\Lambda$ is a contraction on $\CHI_T$ with respect to the $C([0,T];L^\infty)$-norm, which completes the proof of (ii).
\end{proof}
For bounded initial data $\varphi_0$ and $d \leq 2$, the contraction property in Proposition~\ref{prp:localtimeexistence} yields the local well-posedness of the solution to \eqref{eq:generalviscousmildweak}, which can be continued to its maximal time of existence.
\begin{thm}\label{th:wellposed}
Under the assumptions of Proposition~\ref{prp:localtimeexistence},
there exists $T>0$ depending on $R$, $\epsilon$, $d$ and $\varphi_0$ such that the fixed point $\varphi=\Lambda[\varphi]\in \CHI_T$ of~\eqref{eq:lambdaoperatorLinfty}, with $u(t,\cdot)=\mathcal{E}(\varphi(t,\cdot))\in W^{1,2}(\Omega)$ for $t\in[0,T]$, is the unique solution to~\eqref{eq:generalviscousmildweak} in $\CHI_T$.
If the maximal time of existence $T_{\mathrm{max}}>0$ satisfies $T_{\mathrm{max}}<\infty$, then
\begin{align*}
\lim_{T\to T_{\mathrm{max}}} \bl \|\varphi(T,\cdot) \|_{L^\infty( \Omega)}+\frac{1}{\inf_\Omega \varphi(T,\cdot)} \br =\infty.
\end{align*}
Moreover, the solution $\varphi\in \CHI_T$ depends continuously on the initial data $\varphi_0$.
\end{thm}
\begin{proof}
By Proposition~\ref{prp:localtimeexistence} there exists $T>0$ such that $\Lambda[\varphi]\in \CHI_T$ is a contraction on $\CHI_T$. The contraction mapping theorem implies that there exists a unique fixed point $\varphi \in \CHI_T$ which proves the existence and uniqueness of a local solution to~\eqref{eq:generalviscousmildweak} in $\CHI_T$.
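For later reference, we note the standard a priori estimate of the contraction mapping theorem: the fixed point is the limit of the Picard iteration $\varphi^{(0)}\equiv\varphi_0\in\CHI_T$, $\varphi^{(n+1)}=\Lambda[\varphi^{(n)}]$, and with contraction constant $q<1$ from Proposition~\ref{prp:localtimeexistence}(ii),
\begin{align*}
\sup_{t\in[0,T]}\Norm[L^\infty(\Omega)]{\varphi^{(n)}(t,\cdot)-\varphi(t,\cdot)}
\leq \frac{q^{n}}{1-q}\,\sup_{t\in[0,T]}\Norm[L^\infty(\Omega)]{\varphi^{(1)}(t,\cdot)-\varphi^{(0)}(t,\cdot)}.
\end{align*}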
For the statement on the maximal time of existence, suppose that $T_{\text{max}}<\infty$ with
\begin{align*}
\sup_{t\in[0, T_{\text{max}})} \bl \|\varphi(t,\cdot) \|_{L^\infty(\Omega)}+\frac{1}{\inf_\Omega \varphi(t,\cdot)} \br \leq K
\end{align*}
for some $K>0$. Then, the local existence theory for $R=K+1$ and $\epsilon=\tfrac{1}{2}K^{-1}$ yields the existence of a solution on $[0,T_{\text{max}}+\delta]$ for some $\delta>0$, contradicting the maximality of $T_{\text{max}}$.
Let $\varphi_0,\psi_0\in L^\infty(\Omega)$ be initial data satisfying the conditions of Proposition~\ref{prp:localtimeexistence}.
Then, there exists $T>0$ such that $\varphi,\psi \in \CHI_T$ satisfy~\eqref{eq:generalviscousmildweak} with initial data $\varphi_0,\psi_0$. For $t\in[0,T]$, we have
\begin{align*}
\|\varphi( t,\cdot) - \psi(t,\cdot)\|_{L^\infty(\Omega)}
&\leq \|\varphi_0 - \psi_0\|_{L^\infty(\Omega)}+ C \int_{0}^{t} \|\varphi(s,\cdot) - \psi(s,\cdot)\|_{L^\infty(\Omega)} \D s
\end{align*}
for some $C>0$, independent of $\varphi,\psi$,
which yields
\begin{align*}
\|\varphi( t,\cdot) - \psi(t,\cdot)\|_{L^\infty(\Omega)}
&\leq \|\varphi_0 - \psi_0\|_{L^\infty(\Omega)}\exp(Ct)
\end{align*}
by Gr\"onwall’s inequality.
\end{proof}
\subsection{Higher regularity}\label{sec:higherreg}
In this section, we show higher regularity for solutions of \eqref{eq:generalviscousmildweak} under additional smoothness assumptions on the problem data, in particular $\varphi_0 \in C^{k,1}(\overline{\Omega})$ for a $k \in \N_0$.
Again, we first consider auxiliary results for a general linear elliptic equation
\begin{align}\label{eq:ellgeneral2}
-\nabla \cdot (a \nabla u) + b u
= f,
\end{align}
on a bounded domain $\Omega \subset \R^d$ with homogeneous Dirichlet boundary conditions,
where $a \in C^{0,1}(\overline{\Omega})$, $b \in L^\infty(\Omega)$ and $\inf_\Omega a>0$.
Using \cite[Thm.~9.15,~9.17]{Gilbarg2001}, we obtain the following $W^{2,p}$-bound for the weak solution of~\eqref{eq:ellgeneral2}.
\begin{thm}\label{th:existence_p}
Let $d\geq 1$, let $\Omega \subset\R^d$ be a bounded domain with $C^{1,1}$-boundary. Suppose that $f \in L^p(\Omega)$ with $p \in (1,\infty)$. Then~\eqref{eq:ellgeneral2} has a unique solution $u\in W^{2,p}( \Omega) \cap W^{1,2}_0(\Omega)$ satisfying
\begin{align}\label{eq:elliptic_W2pbound}
\Norm[W^{2,p}(\Omega)]{u}
\leq C \Norm[L^p(\Omega)]{f}
\end{align}
for a constant $C>0$ depending on $\Omega$, but independent of $u$.
\end{thm}
Under appropriate smoothness assumptions, the regularity of the solution to~\eqref{eq:ellgeneral2} can be further improved.
\begin{thm}\label{th:existence_wkp}
Let $d\geq 1$ and suppose that $\Omega \subset\R^d$ is a bounded domain with $C^{k+1,1}$-boundary for a given $k\in\N_0$. Let $a \in C^{k,1}(\overline{\Omega})$ be bounded from below by a positive constant.
In addition, suppose that $b \in W^{k,\infty}(\Omega)$, $f \in W^{k,p}(\Omega)$ with $p \in (1,\infty)$. Then,~\eqref{eq:ellgeneral2} with homogeneous Dirichlet boundary conditions has a unique weak solution $u\in W^{k+2,p}( \Omega)$ satisfying
\begin{align}\label{eq:elliptic_Wkpbound}
\Norm[W^{k+2,p}(\Omega)]{u}
\leq C \Norm[W^{k,p}(\Omega)]{f}
\end{align}
for a constant $C>0$ depending on $\Omega$ and $k$, but independent of $u$.
\end{thm}
The proof is a standard bootstrap argument following the lines of~\cite[Theorems~6.3.2 and 6.3.5]{Evans2010} and is thus omitted.
\begin{rem}\label{rem:Wkp}
Applying Theorem~\ref{th:existence_wkp} to~\eqref{eq:generalviscousweak}, we obtain $u \in W^{k+2,p}(\Omega)$ assuming that $\varphi(t,\cdot) \in C^{k,1}(\overline{\Omega})$ is bounded from below by a positive constant, $\alpha, \zeta \in C^{k,1}_\mathrm{loc}(\R^+)$, $\beta \in C^k(\R^+)$ and $\sigma \in C^{k,1}(\R)$.
\end{rem}
By the Sobolev embedding $W^{k+2,p}(\Omega) \subset C^{k+1,\gamma}(\overline{\Omega})$ with $\gamma = 1 - \frac{d}{p}$ for $p>d$ and Theorem~\ref{th:existence_wkp}, the solution $u\in W^{k+2,p}(\Omega)$ to the original elliptic problem \eqref{eq:generalviscousweak} satisfies a Lipschitz estimate in $C^{k,1}(\overline\Omega)$ under slightly stronger assumptions on the coefficient functions.
\begin{prp}\label{prp:LipschitzEll}
Let $p \in (d,\infty)$ and let $\Omega\subset \R^d$ be a bounded domain with $C^{k+1,1}$-boundary for a given $k\in\N_0$. Let $\tilde\varphi_1, \tilde\varphi_2\in C^{k,1}(\overline{\Omega})$ be bounded from below by a positive constant.
Furthermore, let $\alpha, \zeta \in C^{k+1,1}_\text{loc}(\R^+)$, $\beta \in C^{k+1}(\R^+)$ and let $\sigma$ be such that $\kappa \in C^{k+1,1}(\R)$.
Then the corresponding solutions $u(\tilde\varphi_1), u(\tilde\varphi_2)\in W^{k+2,p}(\Omega) \cap W^{1,2}_0(\Omega)$ to \eqref{eq:viscousellipticproof} satisfy
\begin{align}\label{eq:lipschitzCk}
\Norm[C^{k,1}(\overline{\Omega})]{u(\tilde\varphi_1)-u(\tilde\varphi_2)}
\leq C_k \Norm[C^{k,1}(\overline{\Omega})]{\tilde\varphi_1-\tilde\varphi_2}
\end{align}
and
\begin{align}\label{eq:Lipschitzhighordersigma}
\Norm[C^{k,1}(\overline{\Omega})]{\kappa(u(\tilde\varphi_1))-\kappa(u(\tilde\varphi_2))}
\leq C_k \Norm[C^{k,1}(\overline{\Omega})]{\tilde\varphi_1-\tilde\varphi_2}
\end{align}
with constants $C_k>0$ depending on $k$.
\end{prp}
\begin{proof}
To show \eqref{eq:lipschitzCk}, we proceed similarly to Corollary~\ref{th:lipschitzestimatelinf}. For given uniformly positive $\tilde\varphi \in C^{k,1}(\overline{\Omega})$, we consider the linearized equation for $w_h$ with $h \in C^{k,1}(\overline{\Omega})$ which reads
\begin{multline*}
\nabla \cdot ( \alpha(\tilde\varphi) \nabla w_h) - \beta(\tilde\varphi) \, \kappa'(u(\tilde\varphi)) \, w_h\\
= -\nabla \cdot \bl \alpha'(\tilde\varphi) h (\nabla u(\tilde\varphi)+\zeta(\tilde\varphi)) + \alpha(\tilde\varphi) \zeta'(\tilde\varphi) h \br + \beta'(\tilde\varphi) \, h \, \kappa(u(\tilde\varphi))
\end{multline*}
as before, where $\nabla u(\tilde\varphi) \in W^{k+1,p}(\Omega)$ due to Remark~\ref{rem:Wkp}. Hence, the requirements for Theorem~\ref{th:existence_wkp} are fulfilled. Furthermore, we can use~\eqref{eq:elldiff} and~\eqref{eq:ellderiv} with right-hand sides in $W^{k,p}(\Omega)$ together with Theorem~\ref{th:existence_wkp} and the properties of $\Delta_{\kappa,\uzero}$ to show that
\begin{align*}
\lim_{\dphi \to 0} \Norm[C^{k,1}(\overline{\Omega})]{\frac{\udphi - \uzero}{\dphi} - w_h} = 0.
\end{align*}
The decisive step, in which the $W^{k,p}(\Omega)$-norm is estimated by the $C^{k}(\overline{\Omega})$-norm, is to show that $\Norm[C^k(\overline{\Omega})]{\Delta_{\kappa,\uzero}(\udphi) - \kappa'(\uzero)}$ converges to zero as $\dphi \to 0$, which is proved in Appendix~\ref{sec:Ckconv}.
Using the Sobolev embedding $W^{k+2,p}(\Omega) \subset C^{k+1,\gamma}(\overline{\Omega})$ with $\gamma = 1 - \frac{d}{p}$, this yields $\Norm[C^{k+1}]{w_h} \leq C_k \Norm[C^k]{h}$ for a constant $C_k>0$ depending on $k$.
This justifies the use of the Bochner integral representation of $u(\tilde\varphi_1) - u(\tilde\varphi_2)$ as in~\eqref{eq:integralrepell} and we can conclude that~\eqref{eq:lipschitzCk} holds.
Since the $C^{k,1}$-norm contains derivatives for $k > 0$, the Lipschitz estimate for $\kappa$ from Corollary~\ref{th:lipschitzestimatelinf} cannot be applied directly to obtain~\eqref{eq:Lipschitzhighordersigma}.
To show \eqref{eq:Lipschitzhighordersigma}, we introduce
$\Theta(s) = \kappa(u((1-s)\tilde\varphi_1 + s \tilde\varphi_2))$
with
\begin{align*}
\Theta'(s)
= \kappa'(u((1-s)\tilde\varphi_1 + s \tilde\varphi_2)) \, u'((1-s)\tilde\varphi_1 +s\tilde\varphi_2) \, (\tilde\varphi_2 - \tilde\varphi_1).
\end{align*}
We then obtain $\Norm[C^{k,1}]{\Theta'(s)} \leq C_k \Norm[C^{k,1}]{\tilde\varphi_1 - \tilde\varphi_2}$ since $\kappa \in C^{k+1,1}(\R)$, where $C_k$ only depends on $k$. Using the associated Bochner integral representation yields \eqref{eq:Lipschitzhighordersigma}.
\end{proof}
Similarly to the space $\CHI_T$ in Section~\ref{sec:coupledviscous}, for $T>0$, $R>\epsilon>0$ and $k \in \N_0$ we define
\begin{align*}
\CHI_T^k = \Bigl\{ \varphi\in C([0,T], C^{k,1}(\overline \Omega))\colon \sup_{t\in[0,T]} \|\varphi(t,\cdot) \|_{C^{k,1}(\overline \Omega)}\leq R, \quad \inf_{t\in[0,T]} \varphi(t,x) \geq \epsilon \text{ for all $x\in \Omega$} \Bigr\}.
\end{align*}
The local well-posedness of the solution to \eqref{eq:generalviscousmildweak} in $\CHI_T^k$ can be shown similar to Section~\ref{sec:coupledviscous} by establishing a contraction property with respect to the norm of $C([0,T], C^{k,1}(\overline \Omega))$ as in Proposition~\ref{prp:localtimeexistence} and proceeding analogously to Theorem~\ref{th:wellposed}.
\begin{cor}
Let $p \in (d,\infty)$ and suppose that $R>\epsilon>0$, and let $\Omega\subset \R^d$ be a bounded domain with $C^{k+1,1}$-boundary for a given $k\in\N_0$. Let $\alpha, \beta, \zeta \in C^{k+1,1}_\mathrm{loc}(\R^+)$ satisfy Assumptions \ref{ass:alphabeta}. %
Further, let $\sigma$ as in Assumptions~\ref{ass:sigma} be such that $\kappa \in C^{k+1,1}(\R)$, and assume that $\varphi_0\in C^{k,1}(\overline{\Omega})$ satisfies
$\Norm[C^{k,1}(\overline{\Omega})]{\varphi_0}< R$ and $\inf_\Omega \varphi_0 > \epsilon$. Then there exists $T>0$, depending on $R$, $\epsilon$, $k$, $d$ and $\varphi_0$, such that $\Lambda[\varphi]\in \CHI_T^k$ in~\eqref{eq:lambdaoperatorLinfty} with $u(t,\cdot)=\mathcal{E}(\varphi(t,\cdot))\in W^{k+2,p}(\Omega)$ for $t\in[0,T]$ is the unique solution to~\eqref{eq:generalviscousmildweak} in $\CHI_T^k$ when equipped with initial data $\varphi_0$ for $\varphi$.
If the maximal time of existence $T_{\mathrm{max}}>0$ satisfies $T_{\mathrm{max}}<\infty$, then
\begin{align*}
\lim_{T\to T_{\mathrm{max}}} \bl \|\varphi(T,\cdot) \|_{C^{k,1}(\overline \Omega)}+\frac{1}{| \varphi(T,\cdot)|} \br =\infty.
\end{align*}
Moreover, the solution $\varphi\in \CHI_T^k$ depends continuously on the initial data $\varphi_0$.
\end{cor}
\begin{proof}
Similarly as in Proposition~\ref{prp:localtimeexistence}, we prove that there exists some $\tilde T>0$ such that $\Lambda$ in \eqref{eq:lambdaoperatorLinfty} defines a contraction on $\CHI_T^k$ for all $T\in (0,\Tilde{T})$ with respect to the norm of $C([0,T], C^{k,1}(\overline \Omega))$. Proceeding as in the proof of Proposition~\ref{prp:localtimeexistence}, we obtain
\begin{align*}
\Norm[C^{k,1}(\overline{\Omega})]{\beta(\varphi(s,\cdot)) \, \kappa(\mathcal{E}(\varphi(s,\cdot)))}
\leq C
\end{align*}
uniformly by Theorem~\ref{th:existence_wkp} together with the boundedness of $\varphi$ and the submultiplicativity of $\Norm[C^{k,1}(\overline{\Omega})]{\cdot}$. The lower bound follows from the respective estimate of the $L^\infty$-norm in Proposition~\ref{prp:localtimeexistence}.
The contraction property for $\varphi_1,\varphi_2 \in \CHI_T^k$ can be shown by considering
\begin{align*}
&\Norm[C^{k,1}(\overline{\Omega})]{\beta(\varphi_1(s,\cdot)) \, \kappa(\mathcal{E}(\varphi_1(s,\cdot))) - \beta(\varphi_2(s,\cdot)) \, \kappa(\mathcal{E}(\varphi_2(s,\cdot)))}\\
&\leq C \Norm[C^{k,1}(\overline{\Omega})]{\beta(\varphi_1(s,\cdot)) - \beta(\varphi_2(s,\cdot))}
+ C \Norm[C^{k,1}(\overline{\Omega})]{\kappa(\mathcal{E}(\varphi_1(s,\cdot))) - \kappa(\mathcal{E}(\varphi_2(s,\cdot)))}\\
&\leq C \Norm[C^{k,1}(\overline{\Omega})]{\varphi_1(s,\cdot) - \varphi_2(s,\cdot)}
\end{align*}
which follows from Theorem~\ref{th:existence_wkp}, Proposition~\ref{prp:LipschitzEll} and the fact that $\beta \in C^{k+1,1}([\epsilon,R])$.
Using the contraction property of $\Lambda$ on $\CHI_T^k$, the statement now follows analogously to Theorem~\ref{th:wellposed}.
\end{proof}
\subsection{Existence for the coupled problem with nonsmooth initial data for general $d$}\label{sec:exEuler}
The results of Section \ref{sec:coupledviscous} yield existence and uniqueness of solutions under the minimal regularity requirement $\varphi_0 \in L^\infty(\Omega)$, but apply only to spatial dimensions $d \in \{1,2\}$.
To obtain existence of solutions with nonsmooth data for general $d$, we now use a different approach based on a time-discrete function-valued explicit Euler scheme for the viscous limit model~\eqref{eq:generalviscousmildweak}. Under similar assumptions as in Section \ref{sec:coupledviscous}, but under the slightly stronger regularity requirement $\varphi_0 \in L^\infty(\Omega) \cap BV(\Omega)$, we obtain existence of the solution to \eqref{eq:generalviscousmildweak} for arbitrary $d$, as well as uniqueness for $d\leq 2$ and sufficiently small $T$.
We assume $\varphi_0 \in L^\infty(\Omega) \cap BV(\Omega)$ with $\| \varphi_0\|_{L^\infty} < R$ and $\inf_\Omega \varphi_0 > \epsilon$ for given $R>\epsilon>0$, and define the space
\begin{align*}
\CHI
= \Bigl\{ \varphi \in L^\infty(\Omega) \colon\, \Norm[L^\infty(\Omega)]{\varphi} \leq R, \ \varphi(x) \geq \epsilon \text{ for a.e.\ } x \in \Omega \Bigr\}.
\end{align*}
We consider a time discretisation of $[0,T]$ with uniform grid points $t_k^N := k \tau$ for $k = 0,\ldots,N$, where $\tau = \frac{T}{N}$. We set
$\varphi_0^N = \varphi_0 $
and define
\begin{align}\label{eq:exeul}
\varphi_{k+1}^N
= \varphi_k^N - \tau \beta(\varphi_k^N) \, \kappa(u(\varphi_k^N)),
\end{align}
where $u_k^N = u(\varphi_k^N)$ denotes the solution of~\eqref{eq:generalviscousweak} corresponding to $\varphi_k^N$ at time $t_k^N$. We obtain the following estimate for the iterates $\varphi_{k}^N$ in \eqref{eq:exeul}.
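For orientation, \eqref{eq:exeul} is simply an explicit Euler step in function space. A minimal Python sketch (purely illustrative, not part of the analysis; \texttt{beta}, \texttt{kappa} and \texttt{solve\_u} are hypothetical callables standing in for $\beta$, $\kappa$ and the elliptic solve $\varphi_k^N \mapsto u(\varphi_k^N)$, and functions are represented by arrays of nodal values):

```python
import numpy as np

def explicit_euler(phi0, beta, kappa, solve_u, T, N):
    """Time-discrete explicit Euler scheme:
    phi_{k+1} = phi_k - tau * beta(phi_k) * kappa(u(phi_k)),  tau = T/N.

    `solve_u` is a hypothetical stand-in for the elliptic solve
    phi -> u(phi); here it is just a user-supplied callable.
    """
    tau = T / N
    phi = np.asarray(phi0, dtype=float).copy()
    iterates = [phi.copy()]
    for _ in range(N):
        u = solve_u(phi)                        # u_k^N = u(phi_k^N)
        phi = phi - tau * beta(phi) * kappa(u)  # Euler update
        iterates.append(phi.copy())
    return iterates

# Toy illustration with beta(phi) = phi, kappa(u) = u and the identity in
# place of the elliptic solve; the scheme then discretises phi' = -phi^2,
# phi(0) = 1, whose exact value at T = 1 is 1/2.
iterates = explicit_euler(np.ones(4), lambda p: p, lambda u: u,
                          lambda p: p, T=1.0, N=1000)
```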
\begin{lem}
Let $\Omega \subset \R^d$ be a domain and suppose that $\varphi_0 \in BV(\Omega)$ with $\| \varphi_0\|_{L^\infty} < R$ and $\inf_\Omega \varphi_0 > \epsilon$.
%
Then there exists $T > 0$ such that $\varphi_k^N \in \CHI$ for all $N \in \N$ and $k = 1,\ldots,N$, and there exists $C > 0$ such that the following estimate holds,
\begin{align}\label{eq:eulerestimate}
\max_{k=0,\ldots,N} \Norm[BV(\Omega)]{\varphi_{k}^N} \leq e^{C T} \Norm[BV(\Omega)]{\varphi_0}.
\end{align}
\end{lem}
\begin{proof}
First of all we observe that by Theorem~\ref{th:unifbounded}, there exists $\bar{C}$ depending only on $\epsilon$ and $R$ such that whenever $\varphi_k^N \in \CHI$, we have
\begin{equation}\label{eq:Cbar}
\Norm[L^\infty(\Omega)]{\beta(\varphi_k^N) \, \kappa(u(\varphi_k^N))}
\leq \bar{C} \,.
\end{equation}
By the Volpert chain rule for $BV$ functions (see for example~\cite{Volpert1967} as well as~\cite{Ambrosio1990}), for any $\psi \in BV(\Omega)$ and $F \in W^{1,\infty}(\R)$, we have $\Norm[BV(\Omega)]{F \circ \psi} \leq \Norm[L^\infty]{F'} \Norm[BV(\Omega)]{\psi}$. Applying this estimate, we make use of the Lipschitz continuity of $\kappa$ on $\R$ and of $\beta$ on $[\epsilon,R]$. Using in addition that $BV(\Omega)$ is a Banach algebra as well as~\eqref{eq:elliptic_W12bound}, assuming that $\varphi_k^N \in \CHI$ we obtain
\begin{equation}
\begin{aligned}\label{bvstepboundviscous}
\Norm[BV(\Omega)]{\beta(\varphi_k^N) \, \kappa(u(\varphi_k^N))}
&\leq \Norm[BV(\Omega)]{\beta(\varphi_k^N)} \Norm[BV(\Omega)]{\kappa(u(\varphi_k^N))}\\
&\leq C_1 \Norm[BV(\Omega)]{\varphi_k^N} \Norm[BV(\Omega)]{u(\varphi_k^N)}\\
&\leq C_1 \Norm[BV(\Omega)]{\varphi_k^N} \Norm[W^{1,2}(\Omega)]{u(\varphi_k^N)}
\leq C_2 \Norm[BV(\Omega)]{\varphi_k^N}.
\end{aligned}
\end{equation}
We now choose the largest $T$ such that
\begin{equation*}
\inf_\Omega \varphi_0 - T \bar C \geq \epsilon,\qquad
\Norm[L^\infty(\Omega)]{\varphi_0} + T \bar C \leq R \,,
\end{equation*}
which, by \eqref{eq:Cbar}, ensures that $\varphi_k^N \in \CHI$ for all $N \in \N$ and $k = 1,\ldots,N$.
Then as a consequence of \eqref{bvstepboundviscous},
\begin{equation*}
\Norm[BV(\Omega)]{\varphi_{k+1}^N} \leq \Norm[BV(\Omega)]{\varphi_k^N} + \tau C_2 \Norm[BV(\Omega)]{\varphi_k^N} = (1 + \tau C_2) \Norm[BV(\Omega)]{\varphi_k^N},
\end{equation*}
which implies \eqref{eq:eulerestimate} since $(1 + \tau C_2)^k \leq e^{C_2 k \tau} \leq e^{C_2 T}$ for all $k \leq N$.
\end{proof}
From~\eqref{eq:exeul}, we define two space-time functions
\begin{align*}
\varphi^N(t,\cdot)
&= \frac{1}{\tau} \left[ (t_{k+1}^N - t) \varphi_k^N + (t - t_k^N) \varphi_{k+1}^N \right]\\
\tilde{\varphi}^N(t,\cdot)
&= \varphi_k^N
\end{align*}
for $t \in [t_k^N,t_{k+1}^N)$ for each $k$. The first observation is that
\begin{align*}
\partial_t \varphi^N
= \frac{\varphi_{k+1}^N - \varphi_k^N}{\tau}
= - \beta(\varphi_k^N) \, \kappa(u(\varphi_k^N))
= - \beta(\tilde{\varphi}^N) \, \kappa(u(\tilde{\varphi}^N))
\end{align*}
on each $(t_k^N,t_{k+1}^N)$, and thus for a.e.\ $t \in [0,T]$. As a consequence,
\begin{align}\label{eq:timediscrete}
\varphi^N(t,\cdot)
= \varphi_0 - \int_0^t \beta(\tilde{\varphi}^N(s,\cdot)) \, \kappa(u(\tilde{\varphi}^N(s,\cdot))) \di s
\end{align}
a.e.\ in $\Omega$. The functions $\varphi^N, \tilde\varphi^N\colon [0,T]\to BV(\Omega)$ are Bochner measurable, since for each $N$ their range is contained in a finite-dimensional subspace of $BV(\Omega)$. Next, we prove the existence of solutions to \eqref{eq:generalviscousmildweak} by showing that there exists a strictly increasing sequence $(N_n)_{n\in\N}$ such that the subsequences $\varphi^{N_n}, \tilde\varphi^{N_n}$ converge as $n\to \infty$ to the same limit $\varphi$ solving the integral formulation \eqref{eq:generalviscousmild}, that is,
\begin{align*}
\varphi(t,\cdot)
= \varphi_0 - \int_0^t \beta(\varphi(s,\cdot)) \, \kappa(u(\varphi(s,\cdot))) \di s \,.
\end{align*}
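The piecewise linear interpolant $\varphi^N$ and the piecewise constant interpolant $\tilde{\varphi}^N$ introduced above can be sketched as follows (a purely illustrative Python fragment; \texttt{phis} stands for the list of Euler iterates $\varphi_0^N,\ldots,\varphi_N^N$, again as arrays of nodal values):

```python
import numpy as np

def interpolants(phis, T):
    """Build the piecewise linear interpolant phi^N and the piecewise
    constant interpolant tilde{phi}^N from the Euler iterates
    phis = [phi_0^N, ..., phi_N^N] on the uniform grid t_k^N = k*T/N."""
    N = len(phis) - 1
    tau = T / N

    def phi_lin(t):
        k = min(int(t / tau), N - 1)   # index with t in [t_k^N, t_{k+1}^N)
        s = (t - k * tau) / tau        # local coordinate in [0, 1]
        return (1 - s) * phis[k] + s * phis[k + 1]

    def phi_const(t):
        return phis[min(int(t / tau), N - 1)]

    return phi_lin, phi_const
```

Note that `phi_lin` is continuous on $[0,T]$, while `phi_const` freezes the value $\varphi_k^N$ on each subinterval, mirroring the two space-time functions used in the compactness argument.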
\begin{thm}
Let $\Omega \subset \R^d$ be a bounded domain. Suppose that $\varphi_0 \in BV(\Omega)$ with ${\| \varphi_0\|_{L^\infty} < R}$ and $\inf_\Omega \varphi_0 > \epsilon$.
Then, there exist $T>0$ and $\varphi \in W^{1,\infty}(0,T;BV(\Omega))$ solving~\eqref{eq:generalviscousmild}. Moreover, there exists a strictly increasing sequence $(N_n)_{n\in\N}$ such that the subsequences $\varphi^{N_n}, \tilde\varphi^{N_n}$ converge to $\varphi$ in $L^\infty(0,T;L^1(\Omega))$ as $n \to \infty$.
\end{thm}
\begin{proof}
From~\eqref{eq:eulerestimate} and \eqref{bvstepboundviscous}, we obtain
\begin{align*}
\Norm[W^{1,\infty}(0,T;BV(\Omega))]{\varphi^N}
&= \sup_{t \in [0,T]} \Norm[BV(\Omega)]{\varphi^N(t,\cdot)} + \sup_{t \in [0,T]} \Norm[BV(\Omega)]{\partial_t \varphi^N(t,\cdot)} \\
& \leq ( 1 + C ) e^{C T} \Norm[BV(\Omega)]{\varphi_0} .
\end{align*}
Note that $\varphi^N \in C([0,T];BV(\Omega))$ and $\partial_t \varphi^N \in L^\infty(0,T; BV(\Omega))$, and that $BV(\Omega)$ is compactly embedded in $L^1(\Omega)$. By the Aubin--Lions theorem, there exists a subsequence, again denoted $\varphi^N$, that converges strongly in $C([0,T];L^1(\Omega))$ to some $\varphi \in C([0,T];BV(\Omega))$.
Moreover,
\begin{align*}
\Norm[L^\infty(0,T;L^\infty(\Omega))]{\tilde{\varphi}^N - \varphi^N}
&= \sup_{k=0,\ldots,N-1} \Norm[L^\infty(\Omega)]{\varphi_{k+1}^N - \varphi_k^N}\\
&= \tau \sup_{k=0,\ldots,N-1} \Norm[L^\infty(\Omega)]{\beta(\varphi_k^N) \, \kappa(u(\varphi_k^N))}
\leq \frac{T \bar{C}}{N}
\end{align*}
with $\bar C$ as in \eqref{eq:Cbar},
which implies that $\tilde{\varphi}^N \to \varphi$ in $L^\infty(0,T;L^1(\Omega))$, and hence in every space into which $L^\infty(0,T;L^1(\Omega))$ embeds continuously.
Let us now consider the convergence of $u(\tilde{\varphi}^N)$ a.e.\ in time. We know that $\Norm[W^{1,2}(\Omega)]{u(\tilde{\varphi}^N)}$ is uniformly bounded. By Theorem \ref{th:unifbounded} we also obtain that $\Norm[L^\infty(\Omega)]{u(\tilde{\varphi}^N)}$ is uniformly bounded. Passing to a subsequence, $u(\tilde{\varphi}^N)$ converges weakly in $W^{1,2}(\Omega)$, strongly in $L^2(\Omega)$ and weak-* in $L^\infty(\Omega)$ to some $\tilde u \in W^{1,2}(\Omega)\cap L^\infty(\Omega)$. It remains to verify that $\tilde u = u(\varphi)$, that is,
\begin{align}\label{eq:weakformutilde}
\int_\Omega \alpha(\varphi) \nabla \tilde u \cdot\nabla v + \beta(\varphi) \, \kappa(\tilde{u}) \, v \di x = \int_\Omega \alpha(\varphi) \, \zeta(\varphi) \cdot \nabla v\di x,\quad \text{for all $v \in W^{1,2}_0(\Omega)$.}
\end{align}
Since $\alpha$ is Lipschitz, $\alpha(\tilde{\varphi}^N)$ converges in $L^1(\Omega)$ and has a subsequence (not renamed) that converges pointwise a.e.\ in $\Omega$. We consider
\begin{multline*}
\int_\Omega \alpha(\varphi) \nabla \tilde u \cdot\nabla v \di x
- \int_\Omega \alpha(\tilde{\varphi}^N) \nabla u(\tilde{\varphi}^N) \cdot\nabla v \di x \\
= \int_\Omega \alpha(\varphi) (\nabla \tilde u - \nabla u(\tilde{\varphi}^N))\cdot\nabla v\di x
+ \int_\Omega ( \alpha(\varphi) - \alpha(\tilde{\varphi}^N)) \nabla u(\tilde{\varphi}^N)\cdot\nabla v\di x.
\end{multline*}
Using $\nabla u(\tilde{\varphi}^N) \rightharpoonup \nabla \tilde u$, we obtain that
\begin{align*}
\int_\Omega \alpha(\varphi) (\nabla \tilde u - \nabla u(\tilde{\varphi}^N))\cdot\nabla v\di x \to 0.
\end{align*}
By the dominated convergence theorem, we have
\begin{align*}
\left| \int_\Omega ( \alpha(\varphi) - \alpha(\tilde{\varphi}^N)) \nabla u(\tilde{\varphi}^N)\cdot\nabla v\di x \right|
\leq \Norm[L^2]{(\alpha(\varphi)-\alpha(\tilde{\varphi}^N))\nabla v}\Norm[L^2]{\nabla u(\tilde{\varphi}^N)} \to 0,
\end{align*}
since $|\alpha(\varphi)-\alpha(\tilde{\varphi}^N)|^2|\nabla v|^2$ has an integrable majorant and goes to zero pointwise a.e.; the convergence of the other terms in \eqref{eq:weakformutilde} can be seen similarly, but the arguments are slightly easier because the involved sequences converge strongly. Altogether, we obtain $\tilde u = u(\varphi)$, and hence the convergence of $u(\tilde{\varphi}^N)$ to $u(\varphi)$ in the sense stated above.
Next we show that the right-hand side of~\eqref{eq:timediscrete} converges to the respective limit. The Lipschitz continuity of $\kappa$ yields strong convergence of $\kappa(u(\tilde{\varphi}^N))$ in $L^2(\Omega)$. Hence we have
\begin{align*}
\Norm[L^1(\Omega)]{\beta(\varphi) \, \kappa(u(\varphi)) - \beta(\tilde{\varphi}^N) \, \kappa(u(\tilde{\varphi}^N))}
\leq& \Norm[L^1(\Omega)]{\beta(\tilde{\varphi}^N) \bl \kappa(u(\varphi)) - \kappa(u(\tilde{\varphi}^N)) \br }\\
&+ \Norm[L^1(\Omega)]{\kappa(u(\varphi)) \bl \beta(\varphi) - \beta(\tilde{\varphi}^N) \br } \,.
\end{align*}
Both summands converge to zero due to the uniform $L^\infty$-bound on $\beta(\tilde{\varphi}^N)$ and the dominated convergence theorem together with the Lipschitz continuity of $\beta$ (since we can pass to a subsequence which converges a.e.\ in $\Omega$).
We use the dominated convergence theorem to show that the right-hand side of
\begin{multline*}
\Norm[L^1(\Omega)]{\int_0^t \beta(\tilde{\varphi}^N(s,\cdot)) \, \kappa(u(\tilde{\varphi}^N(s,\cdot))) \di s - \int_0^t \beta(\varphi(s,\cdot)) \, \kappa(u(\varphi(s,\cdot))) \di s}\\
\leq \int_0^t \Norm[L^1(\Omega)]{\beta(\tilde{\varphi}^N(s,\cdot)) \, \kappa(u(\tilde{\varphi}^N(s,\cdot))) - \beta(\varphi(s,\cdot)) \, \kappa(u(\varphi(s,\cdot)))} \di s
\end{multline*}
goes to zero for all $t \in [0,T]$ since its $L^\infty((0,T)\times \Omega)$-norm is bounded uniformly, which provides an integrable majorant. After passing to an appropriate subsequence, this implies the convergence of the time integral in \eqref{eq:timediscrete} a.e.\ in $\Omega$.
\end{proof}
For $d\leq 2$, we also have uniqueness of the solution to \eqref{eq:generalviscousmildweak}.
\begin{cor}
Let $d\leq 2$ and let $\Omega \subset \R^d$ be a bounded domain with $C^1$-boundary. Suppose that $\varphi_0 \in BV(\Omega)$ with $\| \varphi_0\|_{L^\infty} < R$ and $\inf_\Omega \varphi_0 > \epsilon$. Then, there exist $T>0$ and a unique $\varphi \in W^{1,\infty}(0,T;BV(\Omega))$ solving~\eqref{eq:generalviscousmild}.
\end{cor}
\begin{proof}
To show the uniqueness of $\varphi \in W^{1,\infty}(0,T;BV(\Omega))$ solving~\eqref{eq:generalviscousmild} for $T>0$ sufficiently small, suppose that for a given $\tilde T>0$ there exist $\varphi_1,\varphi_2 \in W^{1,\infty}(0,\tilde T;BV(\Omega))$ solving the integral formulation of~\eqref{eq:generalviscousmild}. We denote the associated solutions by $u(\varphi_1),u(\varphi_2)$, respectively.
Using the integral formulation of~\eqref{eq:generalviscousmild} together with the Lipschitz continuity of
$\kappa$ and $\beta$, as well as the Lipschitz continuity of $u$ for $d\leq 2$ by \eqref{eq:lipschitzhoelder} yields
\begin{align*}
\sup_{s\in [0,\tilde T]} \|\varphi_1(s,\cdot)-\varphi_2(s,\cdot)\|_{L^\infty(\Omega)}
%
%
\leq C \tilde T \sup_{s\in [0,\tilde T]} \left\|\varphi_1(s,\cdot) -\varphi_2(s,\cdot) \right\|_{L^\infty(\Omega)}.
\end{align*}
For $T\in (0,\min\{\tfrac{1}{C},\tilde T\})$, we have a contraction, implying that $\varphi_1(t,x)=\varphi_2(t,x)$ for a.e.\ $x\in \Omega$ and all $t\in [0,T]$.
Since $\varphi \in W^{1,\infty}(0,T;BV(\Omega))$, one can use the same argument repeatedly to get uniqueness up to the maximal time of existence.
\end{proof}
\section{Analysis of the full viscoelastic model}\label{sec:viscoelasticmodel}
In this section, we investigate the existence and uniqueness of solutions to the full viscoelastic model \eqref{eq:generalstructure}.
We assume that $\Omega \subset \R^d$ is a bounded domain and we set $\Omega_T= (0,T]\times \Omega$ for some fixed time $T>0$.
We write \eqref{eq:generalstructure_a} in integral form,
\begin{subequations}\label{eq:fullmildweak}
\begin{align}
\varphi(t,\cdot) &= \varphi_0 + Qu_0 - Qu(t,\cdot) - \int_0^t \beta(\varphi) \, \kappa(u) \D s , & & t \in [0,T], \label{eq:phi} \\
\partial_t u &= \frac{1}{Q} \bl \nabla \cdot \alpha(\varphi) (\nabla u + \zeta(\varphi))-\beta(\varphi) \, \kappa(u) \br & & \text{in $W^{-1,2}(\Omega)$ for a.e.\ $t \in [0,T]$,} \label{eq:u}
\end{align}
\end{subequations}
subject to Dirichlet boundary conditions for $u$ and initial data $u(0,\cdot)=u_0$ for some given $u_0 \in L^2(\Omega)$.
To still obtain well-posedness of this problem for initial data $\varphi_0$ with jump discontinuities, in the present setting we need more restrictive regularity assumptions.
As is typical in applications of models of the form \eqref{eq:fullmildweak}, we assume piecewise H\"older continuous initial data that may have jump discontinuities at the boundaries of a partition of $\Omega$, which correspond to material interfaces.
Specifically, we shall assume that $\varphi_0$ is H\"older continuous on pairwise disjoint open subsets
\begin{equation}\label{eq:partition}
\Omega^j\subset \Omega,\quad j = 1,\ldots,M,
\end{equation}
such that $\overline{\Omega} = \bigcup_{j=1}^M \overline{\Omega}^j$, $\Omega^j \Subset \Omega$ for $j = 1,\ldots,M-1$ and $\partial \Omega \subset \partial \Omega^M$. We set $\Omega_T^j = (0,T) \times \Omega^j$.
We first consider the parabolic problem \eqref{eq:u} separately for fixed $\varphi$.
We again assume that $\sigma$ satisfies Assumptions \ref{ass:sigma} and $\alpha, \beta$ and $\zeta$ satisfy Assumptions~\ref{ass:alphabeta} in what follows.
\subsection{Parabolic problem}\label{sec:parabolic}
In this section, we study the existence and uniqueness of solutions of the parabolic problem \eqref{eq:u}.
We set $\Qalpha=\tfrac{1}{Q}\alpha$ and $\Qbeta=\tfrac{1}{Q}\beta$.
Weak solutions of~\eqref{eq:u} are characterized as
$u\in L^2(0,T;W^{1,2}_0(\Omega))$ with weak derivatives $\partial_t u\in L^2(0,T; W^{-1,2}(\Omega))$ and $u(0)=u_0$ such that for every $\psi\in W^{1,2}_0(\Omega)$ and a.e.\ $t\in[0,T]$,
\begin{multline}\label{eq:weakpara}
\int_\Omega \partial_t u(t) \psi \di x +\int_\Omega \Qalpha(\varphi(t)) \, \nabla u(t) \cdot \nabla \psi\di x + \int_\Omega \Qbeta(\varphi(t)) \, \kappa(u(t))\psi \di x\\
= \int_\Omega \psi \nabla \cdot \Qalpha(\varphi(t)) \, \zeta(\varphi(t)) \di x\,.
\end{multline}
Note that
\begin{align*}
\Qalpha(\varphi) &\in L^\infty((0,T)\times \Omega),\\
\frac{\Qbeta(\varphi)}{\sigma(u)} &\in L^\infty((0,T)\times \Omega)\text{ for any } u \in L^2(0,T;W^{1,2}(\Omega)),\\
\nabla \cdot \Qalpha(\varphi) \zeta(\varphi) &\in L^2(0,T; W^{-1,2}(\Omega)).
\end{align*}
The existence and uniqueness of solutions to the semilinear problem~\eqref{eq:weakpara} can be shown by standard techniques, see for example \cite[Chapter 7]{Evans2010}, adapted to our particular assumptions with source term $\nabla \cdot \Qalpha(\varphi) \, \zeta(\varphi) \in L^2(0,T;W^{-1,2}(\Omega))$. The proof is given in Appendix~\ref{sec:appexistence}.
\begin{thm}\label{th:exparabolic}
Let $\Omega$ be a bounded domain and let $\varphi\in L^\infty((0,T)\times\Omega)$.
There exists a unique weak solution $u\in C([0,T]; L^2(\Omega))\cap L^2(0,T; W^{1,2}_0(\Omega))$ of \eqref{eq:weakpara}. Moreover, there exists a constant $C>0$, depending on $\Omega$, $T$, $Q$ and the coefficients of~\eqref{eq:weakpara}, such that
\begin{multline}\label{eq:bound_un}
\| u\|_{L^\infty(0,T;L^2(\Omega))} + \| u\|_{L^2(0,T;W^{1,2}_0(\Omega))} + \| \partial_t u \|_{L^2(0,T;W^{-1,2}(\Omega))}\\
\leq C \bl \| \Qalpha(\varphi) \, \zeta(\varphi)\|_{L^2(0,T;L^{2}(\Omega))} +\|u_0\|_{L^2(\Omega)} \br.
\end{multline}
\end{thm}
\subsubsection{Parabolic regularity theory}
We next consider auxiliary results, formulated for a general parabolic equation
\begin{align}\label{eq:pargeneral}
\partial_t u - \nabla \cdot (a \nabla u) + b u
= \nabla \cdot f + g
\end{align}
with homogeneous Dirichlet boundary conditions on $u$ with initial data $u_0$,
where $a, b \in L^\infty(\Omega)$ and $\inf_\Omega a>0$.
The boundedness of the solution to \eqref{eq:pargeneral} is ensured by the following theorem.
\begin{thm}\label{th:ubound}
Let $\Omega$ be a Lipschitz domain,
let $f, g \in L^\infty(\Omega_T)$ and $u_0 \in L^\infty(\Omega)$, and let $u$ be the weak solution of \eqref{eq:pargeneral}. Then
\begin{align}\label{eq:supbound}
\sup_\Omega |u(t, \cdot)| \leq \sup_\Omega |u_0| + C \bl \int_0^t \| u(s,\cdot)\|_{L^2(\Omega)}^2 \D s \br ^{1/2},
\end{align}
where $C>0$ is independent of $t$ and $T$.
\end{thm}
\begin{proof}
Specializing \cite[Theorem V.3.2]{DiBenedetto1993} to our setting, we obtain the desired result by choosing the parameters, in the notation used there, as follows:
\begin{center}
\begin{tabular}{lll}
$p = 2$, & $\delta = 2$, & $\kappa_0 = 1$, \\
$C_0 = \frac{1}{2} \mathrm{ess \, inf} \, a$, & $c_0 = 0$, & $\varphi_0 = \frac{1}{2 a} \, \AV{f}^2$, \\
$C_1 = \mathrm{ess \, sup} \, a$, & $c_1 = 0$, & $\varphi_1 = \AV{f}$, \\
$C_2 = 0$, & $c_2 = \mathrm{ess \, sup} \AV{b}$, & $\varphi_2 = \AV{g}$.
\end{tabular}
\end{center}
Hence, $C$ depends on $d$ and the essential bounds of $a$, $b$, $f$ and $g$.
%
\end{proof}
On the partition $\Omega^j$, $j=1,\ldots,M$, defined in \eqref{eq:partition}, we introduce parabolic H\"older spaces $C^{k,\gamma}_\text{par}(\overline{\Omega}_T^j)$ for $k \in \{0,1\}$ and $\gamma \in (0,1)$, endowed with the parabolic space-time norms defined in~\cite{Ladyzenskaja1968},
\begin{equation}\label{eq:holder}
\begin{aligned}
\Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_T)]{u}
= &\sup_{\overline{\Omega}^j_T} |u|+ \sup_{\substack{t,x_1,x_2\\x_1 \neq x_2}} \frac{\AV{ u(t,x_1) - u(t,x_2)}}{\AV{x_1 - x_2}^\gamma} + \sup_{\substack{t_1,t_2,x\\t_1 \neq t_2}} \frac{\AV{ u(t_1,x) - u(t_2,x)}}{\AV{t_1 - t_2}^{\frac{\gamma}{2}}} ,\\
\Norm[C^{1,\gamma}_\text{par}(\overline{\Omega}^j_T)]{u}
= &\sup_{\overline{\Omega}^j_T} |u| +
\sup_{\overline{\Omega}^j_T} |\nabla u| + \sup_{\substack{t,x_1,x_2\\x_1 \neq x_2}} \frac{\AV{\nabla u(t,x_1) - \nabla u(t,x_2)}}{\AV{x_1 - x_2}^\gamma}\\
&+ \sup_{\substack{t_1,t_2,x\\t_1 \neq t_2}} \frac{\AV{u(t_1,x) - u(t_2,x)}}{\AV{t_1 - t_2}^{\frac{1 + \gamma}{2}}}
+ \sup_{\substack{t_1,t_2,x\\t_1 \neq t_2}} \frac{\AV{\nabla u(t_1,x) - \nabla u(t_2,x)}}{\AV{t_1 - t_2}^{\frac{\gamma}{2}}} .
\end{aligned}
\end{equation}
Assuming that $a$ in \eqref{eq:pargeneral} satisfies $a \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$ for $j=1,\ldots,M$, we obtain the following bound for the weak solution $u\in C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)$ of \eqref{eq:pargeneral}.
\begin{thm}\label{th:parapiecewholder}
Let $\Omega^j$ for $j=1,\ldots,M$, and hence also $\Omega$, have $C^{1,\mu}$-boundaries for some $\mu>0$, and let $f \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$ for $j=1,\ldots,M$ with some $\gamma \in (0,\mu/(1+\mu)]$. If $u$ is a weak solution to \eqref{eq:pargeneral} with $u_0 \in C^{1,\gamma}(\overline{\Omega}^j)$ for $j=1,\ldots,M$ and homogeneous Dirichlet boundary conditions on $u$, then
\begin{align}\label{eq:piecewiseest}
\sum_{j=1}^{M} \Norm[C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)]{u}
&\leq C \Biggl( \Norm[L^\infty(\Omega_T)]{u} + \Norm[L^\infty(\Omega_T)]{g} + \sum_{j=1}^{M} \Bigl( \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{f} +
\Norm[C^{1,\gamma}(\overline{\Omega}^j)]{u_0} \Bigr) \Biggr) ,
\end{align}
where $C$ depends on $d$, $M$, $\gamma$, $\mu$, and on $\Omega^j$ and $\Norm[C^{0,\gamma}_\text{par}(\overline\Omega_T^j)]{a}$ for $j=1,\ldots,M$.
\end{thm}
\begin{proof}
We first assume $u_0 \equiv 0$.
Utilizing~\cite[Theorem~7.1]{Dong2021} and absorbing $b u$ into the right-hand side, we only have to apply the norm equivalences proved in Appendix~\ref{sec:equiv}, which hold for vanishing initial data. Note furthermore that $\Norm[L^\infty(\Omega_T)]{u}$ is bounded due to Theorem~\ref{th:ubound}.
For nonvanishing $u_0$, we extend it constantly in time and consider
\begin{align*}
\partial_t v - \nabla \cdot (a \nabla (v+u_0)) + b \, (v+u_0)
= \nabla \cdot f + g
\end{align*}
with vanishing initial data for $v = u - u_0$. The additional terms $\nabla \cdot (a \nabla u_0)$ and $b \, (v+u_0) \in L^\infty(\Omega_T)$ can be absorbed into the right-hand side as before.
\end{proof}
\subsubsection{A parabolic Lipschitz estimate}
By applying Theorems~\ref{th:ubound} and~\ref{th:parapiecewholder} to~\eqref{eq:weakpara}, we get that $u \in C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)$ if $\varphi \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$ for $j = 1,\ldots,M$.
Next, we show a Lipschitz bound for $u$.
\begin{lem}\label{lem:LipschitzParaPiecw}
Assume that $\Omega^j$ has a $C^{1,\mu}$-boundary for some $\mu>0$,
$\varphi_1,\varphi_2 \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$ and
$u(0,\cdot) = u_0 \in C^{1,\gamma}(\overline{\Omega}^j)$ for $j=1,\ldots,M$ with $\gamma \in (0,\mu/(1+\mu)]$, and that
$\alpha, \zeta \in C^2(\R^+)$,
$\beta \in C^1(\R^+)$ and $\sigma \in C^{1,1}(\R)$. Let the unique weak solutions to~\eqref{eq:weakpara} for $\varphi_1$ and $\varphi_2$ be denoted by $u(\varphi_1)$ and $u(\varphi_2)$, respectively.
Then
\begin{align}\label{eq:pwlipschitzparabolic}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{u(\varphi_1) - u(\varphi_2)}
\leq C T^{\gamma/2} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_1 - \varphi_2}
\end{align}
and
\begin{align}\label{eq:pwlipschitzparabolicsigma}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\kappa(u(\varphi_1)) - \kappa(u(\varphi_2))}
\leq C T^{\gamma/2} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_1 - \varphi_2}
\end{align}
with constants $C>0$ independent of $\varphi_1,\varphi_2$ and $T$.
\end{lem}
\begin{proof}
By considering the linearized equation
\begin{multline}\label{eq:parabolic_linear_wh}
\partial_t w_h - \nabla \cdot ( \Qalpha (\varphi) \nabla w_h) + \Qbeta(\varphi) \, \kappa'(u(\varphi)) w_h\\
= \nabla \cdot ( \Qalpha'(\varphi) h (\nabla u(\varphi) + \zeta(\varphi)) + \Qalpha(\varphi) \, \zeta'(\varphi) \, h ) - \Qbeta'(\varphi) h \, \kappa(u(\varphi))
\end{multline}
for the candidate for the directional derivative $w_h = u'(\varphi)h$
with $h \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$, we can apply Theorem~\ref{th:parapiecewholder} (resp.~\cite[Theorem~7.1]{Dong2021}), using that a uniformly bounded solution exists by Theorem~\ref{th:ubound} and that $\nabla u(\varphi) \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$.
To show that our candidate $w_h$ is actually a Fr\'echet derivative, that is, with $\udphi := u(\varphi + \dphi h)$ we have
\begin{align*}
\lim_{\dphi \to 0} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\frac{\udphi - \uzero}{\dphi} - w_h} = 0
\end{align*}
where the convergence is uniform with respect to $h$ in the unit ball of $C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$. Subtracting the respective equations, we obtain
\begin{multline}\label{eq:pardiff}
\partial_t (\udphi - \uzero) - \nabla \cdot (\Qalpha(\varphi) \nabla(\udphi - \uzero)) + \Qbeta(\varphi) \Delta_{\kappa, \uzero}(\udphi) (\udphi - \uzero)\\
\begin{split}
=& \nabla \cdot ((\Qalpha(\varphi + \dphi h) - \Qalpha(\varphi)) \nabla \udphi) - \kappa(\udphi) (\Qbeta(\varphi + \dphi h) - \Qbeta(\varphi))\\
&+ \nabla \cdot (\Qalpha(\varphi + \dphi h) \zeta(\varphi + \dphi h) - \Qalpha(\varphi) \zeta(\varphi))
\end{split}
\end{multline}
and
\begin{multline}\label{eq:parderiv}
\partial_t \bl \frac{\udphi - \uzero}{\dphi} - w_h \br - \nabla \cdot \bl \Qalpha(\varphi) \, \nabla\!\bl \frac{\udphi - \uzero}{\dphi} - w_h \br \br + \Qbeta(\varphi) \kappa'(\uzero) \bl \frac{\udphi - \uzero}{\dphi} - w_h \br\\
\begin{split}
=& \nabla \cdot \bl \bl \frac{\Qalpha(\varphi + \dphi h) - \Qalpha(\varphi)}{\dphi} - \Qalpha'(\varphi) h \br \nabla \udphi \br + \nabla \cdot (\Qalpha'(\varphi) h (\nabla \udphi - \nabla \uzero))\\
&- \bl \frac{\Qbeta(\varphi + \dphi h) - \Qbeta(\varphi)}{\dphi} - \Qbeta'(\varphi) h \br \kappa(\udphi) - \Qbeta'(\varphi) h (\kappa(\udphi) - \kappa(\uzero))\\
&+ \nabla \cdot \bl \frac{\Qalpha(\varphi + \dphi h) \zeta(\varphi + \dphi h) - \Qalpha(\varphi) \zeta(\varphi)}{\dphi} - \Qalpha'(\varphi) \zeta(\varphi) h - \Qalpha(\varphi) \zeta'(\varphi) h \br \\
&- \Qbeta(\varphi) \frac{\udphi - \uzero}{\dphi} (\Delta_{\kappa, \uzero}(\udphi) - \kappa'(\uzero)).
\end{split}
\end{multline}
Here, $\Delta_{\kappa,\uzero}(\udphi)$ is defined as in~\eqref{eq:Delta} and is used pointwise in~\eqref{eq:pardiff} and~\eqref{eq:parderiv}.
As before we have $\kappa(\udphi)-\kappa(\uzero)=\Delta_{\kappa,\uzero}(\udphi) (\udphi-\uzero)$ where $\Delta_{\kappa,\uzero}$ is continuous at $\uzero$ and $\Delta_{\kappa,\uzero}(\uzero)=\kappa'(\uzero)$.
First, we apply Theorem~\ref{th:ubound} and Theorem~\ref{th:parapiecewholder} to~\eqref{eq:pardiff} to show that $\Norm[L^\infty(\Omega_T)]{\udphi - \uzero} \leq C \dphi$ and $\udphi \to \uzero$ in the $C^{1,\gamma}_{\text{par}}(\overline{\Omega}^j_T)$-norm, uniformly with respect to $h$ in the unit ball of $C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$.
Note that on the right-hand side of~\eqref{eq:piecewiseest}, $\Norm[L^\infty]{\Qbeta(\varphi + \dphi h) - \Qbeta(\varphi)}\to 0$ follows from the Lipschitz continuity of $\Qbeta$ and Theorem~\ref{th:ubound}.
In order to show
\begin{align*}
\lim_{\dphi \to 0} \sum_{j=1}^M \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\Qalpha(\varphi + \dphi h) - \Qalpha(\varphi)} = 0,
\end{align*}
we use~\cite[Theorem~3]{Goebel1992} twice, once in space and once in time. This yields (local) Lipschitz continuity of $\Qalpha$ with respect to $\Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\cdot}$ and hence proves the desired result; the convergence of the other divergence term follows analogously, this time using the regularity of $\zeta$ as well.
Next we apply Theorem~\ref{th:parapiecewholder} to~\eqref{eq:parderiv} where, as before, only the terms in divergence form pose problems. In order to prove
\begin{align*}
\lim_{\dphi \to 0} \sum_{j=1}^M \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\frac{\Qalpha(\varphi + \dphi h) - \Qalpha(\varphi)}{\dphi} - \Qalpha'(\varphi) h} = 0,
\end{align*}
we apply~\cite[Theorem~4]{Goebel1992}, again twice, once in space and once in time, to prove Fr\'echet differentiability of $\Qalpha$ with respect to $\Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\cdot}$. This proves the desired result since the convergence of the other divergence term follows analogously using the regularity of $\zeta$.
Since we obtain
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)]{w_h}
&\leq C \Bigg( \Norm[L^\infty(\Omega_T)]{\Qbeta(\varphi) \kappa'(u(\varphi)) w_h + \Qbeta'(\varphi) \, h \, \kappa(u(\varphi))}\\
&\qquad+ \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\Qalpha'(\varphi) h (\nabla u(\varphi) + \zeta(\varphi)) + \Qalpha(\varphi) \, \zeta'(\varphi) \, h} \Bigg)\\
&\leq C \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{h},
\end{align*}
by applying Theorems~\ref{th:ubound} and~\ref{th:parapiecewholder} to~\eqref{eq:parabolic_linear_wh}, this yields an upper bound on $\Norm{u'(\varphi)}$ in the associated piecewise operator norm. Since $w_h(0,\cdot)$ vanishes identically,
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{w_h}
\leq \max\{2, C\} \, T^{\gamma/2} \sum_{j=1}^{M} \Norm[C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)]{w_h}
\end{align*}
by estimate~\eqref{eq:paraest} for a constant $C$ depending on the diameter of $\Omega$.
Considering the integral representation~\eqref{eq:integralrepell}, we obtain
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{u(\varphi_1) - u(\varphi_2)}
&= \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\int_0^1 u'((1-s) \varphi_2 + s \varphi_1) (\varphi_1 - \varphi_2) \D s}\\
&\leq \int_0^1 \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{u'((1-s) \varphi_2 + s \varphi_1) (\varphi_1 - \varphi_2)} \D s\\
&\leq C T^{\gamma/2} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_1 - \varphi_2}
\end{align*}
for some constant $C>0$ independent of $\varphi_1,\varphi_2$ and $T$. With
$\Theta(s) = \kappa(u((1-s)\varphi_2 + s \varphi_1))$,
we can thus again estimate $\Theta'$ as in Proposition~\ref{prp:LipschitzEll}.
Since $\kappa \in C^{1,1}(\R)$, this proves \eqref{eq:pwlipschitzparabolicsigma}.
\end{proof}
\subsection{Coupled problem}\label{sec:coupledfull}
In this section we investigate existence and uniqueness of the full viscoelastic model~\eqref{eq:fullmildweak} by a similar approach as in Sections~\ref{sec:coupledviscous} and~\ref{sec:higherreg}, that is, by establishing a contraction property on an appropriately chosen set.
Note that existence and uniqueness of the weak solution to the parabolic problem \eqref{eq:u} are provided by Theorem~\ref{th:exparabolic}, and a pointwise bound of the solution follows from Theorem~\ref{th:ubound}.
Let us denote by $\mathcal{P}$ the nonlinear operator which maps $\varphi \in L^\infty((0,T)\times\Omega)$ to the unique weak solution $u\in C([0,T]; L^2(\Omega))\cap L^2(0,T; W^{1,2}_0(\Omega))$ of the parabolic problem~\eqref{eq:u} according to Theorem~\ref{th:exparabolic}, that is, $\mathcal{P}(\varphi) = u$. For all $t \in [0,T]$, we define
\begin{align*}
\Xi[\varphi](t,\cdot)
&= \varphi_0 + Qu_0 - Q\mathcal{P}(\varphi)(t,\cdot) - \int_0^t \beta(\varphi(s,\cdot)) \, \kappa(\mathcal{P}(\varphi)(s,\cdot)) \D s \,.
\end{align*}
Here, $\varphi_0\in C^{0,\gamma}(\overline{\Omega}^j), u_0 \in C^{1,\gamma}(\overline{\Omega}^j)$ denote the initial conditions of $\varphi,u$, respectively.
Given $R>\epsilon>0$, we aim to find a fixed point of $\Xi$ on the set $\CHI_T^\text{pw}$, defined by
\begin{align*}
\CHI_T^\text{pw} =\Biggl\{ \varphi\in C(0,T;L^\infty(\Omega))\colon \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi} \leq R, \quad \inf_{t\in[0,T]} \varphi(t,x) \geq \epsilon \text{ for a.e.~} x\in \Omega\Biggr\}
\end{align*}
endowed with the metric induced by the norm
\begin{align*}
\Norm[\mathrm{pw}]{\varphi} = \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi}
\end{align*}
for $\varphi\in \CHI_T^\text{pw}$.
By the following proposition, there exists a $\tilde{T}>0$ such that $\Xi$ defines a contraction on $\CHI_T^\text{pw}$ for all $T\in (0,\tilde{T})$.
\begin{prp}\label{prp:contr}
Suppose that $\Omega^j$ has a $C^{1,\mu}$-boundary with $\mu>0$, $\varphi_0 \in C^{0,\gamma}(\overline{\Omega}^j)$, %
$u(0,\cdot) = u_0 \in C^{1,\gamma}(\overline{\Omega}^j)$ for $j=1,\ldots,M$, $\gamma \in (0,\mu/(1+\mu)]$, $\alpha, \zeta \in C^2(\R^+)$, $\beta \in C^1(\R^+)$ and $\kappa \in C^{1,1}$. If in addition
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0} < R
\end{align*}
and $\inf_\Omega \varphi_0 > \epsilon$,
there exists a $\tilde{T}>0$ such that $\Xi$ defines a contraction on $\CHI_T^\mathrm{pw}$ with respect to $\Norm[\mathrm{pw}]{\cdot}$ for all $T\in (0,\tilde{T})$.
\end{prp}
\begin{proof}
We proceed as in the proof of Proposition~\ref{prp:localtimeexistence}. Note that $\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_0} < R$ for the constant continuation of $\varphi_0$ in time.
Furthermore, for the constant continuation of $u_0$ in time, note that
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{u_0 - \mathcal{P}(\varphi)}
\leq C T^{\gamma/2}
\end{align*}
by~\eqref{eq:paraest} since $\mathcal{P}(\varphi) \in C^{1,\gamma}_\text{par}(\overline{\Omega}_T^j)$, and
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\beta(\varphi) \, \kappa(\mathcal{P}(\varphi))}
\leq C
\end{align*}
uniformly by Theorem~\ref{th:parapiecewholder}. Hence, using~\eqref{eq:intholder}, we can choose $T>0$ sufficiently small so that
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\Xi[\varphi]}\leq R
\end{align*}
holds. The lower bound of $\Xi[\varphi]$ follows
by repeating the previous estimates with the weaker $C(0,T;L^\infty(\Omega))$-norm, which yields
\begin{align*}
\Norm[C(0,T;L^\infty(\Omega))]{\Xi[\varphi] - \varphi_0}
\leq C T^{\gamma/2}
\end{align*}
and hence
\begin{align*}
\Xi[\varphi] \geq \varphi_0 - C T^{\gamma/2}
\end{align*}
for all $t \in [0,T]$ a.e.\ in $\Omega$, as in Proposition~\ref{prp:localtimeexistence}.
The contraction property can be shown by considering
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\mathcal{P}(\varphi_1) - \mathcal{P}(\varphi_2)}
\leq C T^{\gamma/2} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_1 - \varphi_2}
\end{align*}
as well as
\begin{align}\label{eq:parabolic_lipschitz_help}
\begin{split}
\sum_{j=1}^{M} &\Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\int_0^t \beta(\varphi_1(s,\cdot)) \, \kappa(\mathcal{P}(\varphi_1)(s,\cdot)) - \beta(\varphi_2(s,\cdot)) \, \kappa(\mathcal{P}(\varphi_2)(s,\cdot)) \D s}\\
&\leq C T^{1-\gamma/2} \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\beta(\varphi_1) \, \kappa(\mathcal{P}(\varphi_1)) - \beta(\varphi_2) \, \kappa(\mathcal{P}(\varphi_2))}\\
&\leq T^{1-\gamma/2} \sum_{j=1}^{M} \Bigl( C_1 \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\beta(\varphi_1) - \beta(\varphi_2)}
+ C_2 \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\kappa(\mathcal{P}(\varphi_1)) - \kappa(\mathcal{P}(\varphi_2))} \Bigr)\\
&\leq T^{1-\gamma/2} (C_1 + C_2 T^{\gamma/2}) \sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)]{\varphi_1 - \varphi_2}
\end{split}
\end{align}
which follows from~\eqref{eq:intholder}, Theorem~\ref{th:parapiecewholder}, Lemma~\ref{lem:LipschitzParaPiecw} and the fact that $\beta \in C^{0,1}([\epsilon,R])$, $\varphi_i \in C^{0,\gamma}_\text{par}(\overline{\Omega}_T^j)$ for $i=1,2$ as well as uniform positivity of $\sigma$.
Hence we obtain that $\Xi$ defines a contraction on $\CHI_T^\text{pw}$ if we choose $\tilde{T}$ small enough.
\end{proof}
Using Proposition~\ref{prp:contr}, we can show the existence of a unique solution to~\eqref{eq:fullmildweak} in $\CHI_T^\text{pw}$ for sufficiently small $T>0$.
\begin{thm}\label{th:wellposedPara}
Under the same assumptions as in Proposition~\ref{prp:contr}, there exists $T>0$, depending on $R,\epsilon,d,\varphi_0$, such that the fixed point $\varphi = \Xi[\varphi] \in \CHI_T^\mathrm{pw}$, coupled with $u(t,\cdot)=\mathcal{P}(\varphi)(t,\cdot)$ for $t\in[0,T]$, is the unique local solution to~\eqref{eq:fullmildweak} in $\CHI_T^\mathrm{pw}$ when equipped with initial data $\varphi_0$ for $\varphi$. If the maximal time of existence $T_{\mathrm{max}}>0$ satisfies $T_{\mathrm{max}}<\infty$, then
\begin{align}\label{eq:Tmaxgeneral}
\lim_{T\to T_{\mathrm{max}}} \bl \sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi(T,\cdot)} + \frac{1}{| \varphi(T,\cdot)|} \br =\infty.
\end{align}
Moreover, the solution $\varphi\in \CHI_T^\mathrm{pw}$ depends continuously on the initial data $\varphi_0$.
\end{thm}
\begin{proof}
Note that we fixed some $T>0$ in order to get a time-uniform constant for the bound of the solution $u$ in Theorem~\ref{th:exparabolic}.
By Proposition~\ref{prp:contr} there exists $T_2 \in (0,T]$ such that $\Xi$ maps $\CHI_{T_2}^\mathrm{pw}$ into itself and defines a contraction on $\CHI_{T_2}^\mathrm{pw}$. The upper bound $T$ is needed here since the operator $\mathcal{P}$ may otherwise not be defined. The contraction mapping theorem implies that there exists a unique fixed point $\varphi \in \CHI_{T_2}^\mathrm{pw}$, which proves the existence and uniqueness of a local solution to~\eqref{eq:fullmildweak} in $\CHI_{T_2}^\text{pw}$.
In order to show~\eqref{eq:Tmaxgeneral} suppose that $T_{\mathrm{max}}<\infty$ with
\begin{align*}
\sup_{t\in[0, T_{\text{max}})} \bl \sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi(t,\cdot)} + \frac{1}{| \varphi(t,\cdot)|} \br \leq K
\end{align*}
for some $K>0$.
By Theorem~\ref{th:parapiecewholder} and Proposition~\ref{prp:contr}, $\varphi(T_{\text{max}},\cdot) \in C^{0,\gamma}(\overline{\Omega}^j)$ and $u(T_{\text{max}},\cdot) \in C^{1,\gamma}(\overline{\Omega}^j)$ for $j = 1,\ldots,M$. Hence our local existence theory for $\tilde{\varphi}_0 = \varphi(T_{\text{max}},\cdot)$, $\tilde{u}_0 = u(T_{\text{max}},\cdot)$, $\tilde{R}=K+1$ and $\tilde{\epsilon}=\tfrac{1}{2}K^{-1}$ yields the existence of a solution on $[0,T_{\text{max}}+\delta]$ for some $\delta > 0$, contradicting the maximality of $T_{\text{max}}$ and verifying~\eqref{eq:Tmaxgeneral}.
To show the continuous dependence on the initial data, we consider $\varphi_0,\psi_0 \in C^{0,\gamma}(\overline{\Omega}^j)$ satisfying
$\sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0} < R$, $\inf_{\Omega}\varphi_0 > \epsilon$ and $\sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\psi_0} < R$, $ \inf_{\Omega} \psi_0 > \epsilon$.
Then, there exists a $T>0$ such that $\varphi,\psi \in \CHI_T^\text{pw}$ satisfy~\eqref{eq:fullmildweak} with initial data $\varphi_0,\psi_0$.
For $t\in[0,T]$, we have
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\varphi - \psi}
\leq \sum_{j=1}^{M} &\Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0 - \psi_0} + Q \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\mathcal{P}(\varphi)(t,\cdot) - \mathcal{P}(\psi)(t,\cdot)}\\
&+ \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\int_{0}^{t} \beta(\varphi(s,\cdot)) \, \kappa(\mathcal{P}(\varphi)(s,\cdot)) - \beta(\psi(s,\cdot)) \, \kappa(\mathcal{P}(\psi)(s,\cdot)) \D s}\\
\leq \sum_{j=1}^{M} &\Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0 - \psi_0} + (C_1 t^{\gamma/2} + C_2 t^{1-\gamma/2}) \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\varphi - \psi},
\end{align*}
where the estimate of the second term follows as in \eqref{eq:pwlipschitzparabolic}
with $C_1$ from~\eqref{eq:pwlipschitzparabolic}, independent of $\varphi,\psi$, and the estimate of the third term is obtained as in \eqref{eq:parabolic_lipschitz_help}. This yields
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\varphi - \psi}
\leq \frac{1}{1 - (C_1 t^{\gamma/2} + C_2 t^{1-\gamma/2})} \sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0 - \psi_0},
\end{align*}
where one should note that we chose $T$ such that $C_1 t^{\gamma/2} + C_2 t^{1-\gamma/2} < 1$ for all $t \leq T$ in Proposition~\ref{prp:contr}. Hence we obtain
\begin{align*}
\sum_{j=1}^{M} \Norm[C^{0,\gamma}_\text{par}(\overline{\Omega}^j_t)]{\varphi - \psi}
\leq C \sum_{j=1}^{M} \Norm[C^{0,\gamma}(\overline{\Omega}^j)]{\varphi_0 - \psi_0}
\end{align*}
for some constant $C>0$.
\end{proof}
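The contraction argument behind Proposition~\ref{prp:contr} and Theorem~\ref{th:wellposedPara} can be illustrated on a scalar toy analogue of the map $\Xi$. The following sketch (the choice of $\beta$, the grid, and all numerical values are hypothetical and not taken from the model) shows Picard iterates of a simplified map $\Xi[\varphi](t) = \varphi_0 - \int_0^t \beta(\varphi(s))\,ds$ contracting geometrically in the sup norm for small $T$.

```python
import math

# Toy scalar analogue of the fixed-point map Xi (all choices hypothetical):
# Xi[phi](t) = phi0 - integral_0^t beta(phi(s)) ds, discretized on a grid.
# For small T, Xi is a contraction in the sup norm and Picard iteration
# converges to the solution of phi'(t) = -beta(phi(t)), phi(0) = phi0.

def beta(p):
    return 0.5 * p  # Lipschitz on bounded sets

def apply_Xi(phi, dt, phi0):
    out = [phi0]
    acc = 0.0
    for k in range(1, len(phi)):
        acc += 0.5 * (beta(phi[k - 1]) + beta(phi[k])) * dt  # trapezoid rule
        out.append(phi0 - acc)
    return out

T, n, phi0 = 0.5, 200, 1.0
dt = T / n
phi = [phi0] * (n + 1)          # constant continuation of the initial datum
gaps = []
for _ in range(8):
    nxt = apply_Xi(phi, dt, phi0)
    gaps.append(max(abs(a - b) for a, b in zip(nxt, phi)))
    phi = nxt

# Successive iterates contract geometrically; here the limit solves
# phi' = -phi/2, so phi(T) should be close to exp(-T/2).
exact = math.exp(-0.5 * T)
print(gaps[-1] < 1e-6, abs(phi[-1] - exact) < 1e-3)
```

The geometric decay of the successive gaps mirrors the contraction factor $C_1 T^{\gamma/2} + C_2 T^{1-\gamma/2} < 1$ appearing in the proof.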
\begin{rem}
Using a result on piecewise H\"older regularity of solutions of elliptic problems obtained in~\cite{Li2000} analogous to Theorem \ref{th:parapiecewholder}, one can also deduce existence and uniqueness of solutions to the viscous problem \eqref{eq:generalviscousmildweak} with piecewise H\"older regularity for any spatial dimension~$d$.
\end{rem}
\subsection*{Acknowledgements}
The authors would like to thank Evangelos Moulas for introducing them to the models discussed in this work and for helpful discussions.
\bibliographystyle{plain}
\section{Introduction}
We consider the one-dimensional Schr\"{o}dinger equation with a finite number
of $\delta$-interactions
\begin{equation}
-y^{\prime\prime}+\left( q(x)+\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k})\right)
y=\lambda y,\quad0<x<b,\;\lambda\in\mathbb{C},\label{Schrwithdelta}%
\end{equation}
where $q\in L_{2}(0,b)$ is a complex valued function, $\delta(x)$ is the Dirac
delta distribution, $0<x_{1}<x_{2}<\dots<x_{N}<b$ and $\alpha_{1}%
,\dots,\alpha_{N}\in\mathbb{C}\setminus\{0\}$. Schr\"{o}dinger equations with
distributional coefficients supported on a set of measure zero naturally
appear in various problems of mathematical physics
\cite{alveberio1,alveberio2,atkinson,barrera1,coutinio,uncu} and have been
studied in a considerable number of publications and from different
perspectives. In general terms, Eq. (\ref{Schrwithdelta}) can be interpreted
as a regular equation, i.e., with the regular potential $q\in L_{2}(0,b)$,
whose solutions are continuous and such that their first derivatives satisfy
the jump condition $y^{\prime}(x_{k}+)-y^{\prime}(x_{k}-)=\alpha_{k}y(x_{k})$
at special points \cite{kochubei, kostenko}. Another approach consists in
considering the interval $[0,b]$ as a quantum graph whose edges are the
segments $[x_{k},x_{k+1}]$, $k=0,\dots,N$, (setting $x_{0}=0$, $x_{N+1}=b$),
and the Schr\"{o}dinger operator with the regular potential $q$ as an
unbounded operator on the direct sum $\bigoplus_{k=0}^{N}H^{2}(x_{k},x_{k+1}%
)$, with the domain given by the families $(y_{k})_{k=0}^{N}$ that satisfy the
condition of continuity $y_{k}(x_{k}-)=y_{k+1}(x_{k}+)$ and the jump condition
for the derivative $y_{k+1}^{\prime}(x_{k}+)-y_{k}^{\prime}(x_{k}-)=\alpha
_{k}y_{k}(x_{k})$ for $k=1,\dots,N$ (see, e.g.,
\cite{gesteszy1,kurasov1,kurasov2}). This condition for the derivative is
known in the bibliography of quantum graphs as the $\delta$-type condition
\cite{berkolaikokuchment}. Yet another approach involves a regularization of
the Schr\"{o}dinger operator with point interactions, that is, finding a subdomain
of the Hilbert space $L_{2}(0,b)$ on which the operator defines a function in
$L_{2}(0,b)$. For this, note that the potential $q(x)+\sum_{k=1}^{N}\alpha
_{k}\delta(x-x_{k})$ defines a functional that belongs to the Sobolev space
$H^{-1}(0,b)$. In \cite{bondarenko1,gulyev,hryniv,schkalikov} these forms of
regularization have been studied, rewriting the operator by means of a
factorization that involves a primitive $\sigma$ of the potential.
The theory of transmutation operators, also called transformation operators, is a
widely used tool in studying differential equations and spectral problems
(see, e.g., \cite{BegehrGilbert, directandinverse, levitan,
marchenko,SitnikShishkina Elsevier}), and it is especially well developed for
Schr\"{o}dinger equations with regular potentials. It is known that under
certain general conditions on the potential $q$ the transmutation operator
transmuting the second derivative into the Schr\"{o}dinger operator can be
realized in the form of a Volterra integral operator of the second kind, whose
kernel can be obtained by solving a Goursat problem for the Klein-Gordon
equation with a variable coefficient \cite{hugo2,levitan, marchenko}.
Furthermore, functional series representations of the transmutation kernel
have been constructed and used for solving direct and inverse Sturm-Liouville
problems \cite{directandinverse,neumann}. For Schr\"{o}dinger equations with
$\delta$-point interactions, there exist results about equations with a single
point interaction and discontinuous conditions $y(x_{1}+)=ay(x_{1}-)$,
$y^{\prime}(x_{1}+)=\frac{1}{a}y^{\prime}(x_{1}-)+dy(x_{1}-)$, $a,b>0$ (see
\cite{hald,yurkoart1}), and for equations in which the spectral parameter is
also present in the jump condition (see \cite{akcay,mammadova,manafuv}).
Transmutation operators have also been studied for equations with
distributional coefficients belonging to the $H^{-1}$-Sobolev space in
\cite{bondarenko1,hryniv,schkalikov}. In \cite{hugo2}, the possibility of
extending the action of the transmutation operator for an $L_{1}$-potential to
the space of generalized functions $\mathscr{D}^{\prime}$ was studied.
The aim of this work is to present a construction of a transmutation operator
for the Schr\"{o}dinger equation with a finite number of point interactions.
The transmutation operator appears in the form of a Volterra integral
operator, and with its aid we derive analytical series representations for
solutions of (\ref{Schrwithdelta}). For this purpose, we obtain a closed form
of the general solution of (\ref{Schrwithdelta}). From it, the construction of
the transmutation operator is deduced, where the transmutation kernel is
assembled from the convolutions of the kernels of certain solutions of the
regular equation (with the potential $q$), in a finite number of steps. Next,
the spectral parameter power series (SPPS) method is developed for Eq.
(\ref{Schrwithdelta}). The SPPS method was developed for continuous
(\cite{kravchenkospps1,sppsoriginal}) and $L_{1}$-potentials (\cite{blancarte}%
), and it has been used in a piecewise manner for solving spectral problems
for equations with a finite number of point interactions in
\cite{barrera1,barrera2,rabinovich1}. Following \cite{hugo}, we use the SPPS
method to obtain an explicit construction of the image of the transmutation
operator acting on polynomials. Similarly to the case of a regular potential
\cite{neumann}, we obtain a representation of the transmutation kernel as a
Fourier series in terms of Legendre polynomials and as a corollary, a
representation for the solutions of equation (\ref{Schrwithdelta}) in terms of
a Neumann series of Bessel functions. Similar representations are obtained for
the derivatives of the solutions. It is worth mentioning that the methods
based on Fourier-Legendre representations and Neumann series of Bessel
functions have shown to be an effective tool in solving direct and inverse
spectral problems for equations with regular potentials, see, e.g.,
\cite{directandinverse,neumann, KravTorbadirect}.
In Section 2, basic properties of the solutions of (\ref{Schrwithdelta}) are
compiled, studying the equation in the distributional sense in
$\mathscr{D}^{\prime}(0,b)$ and deducing properties of its regular solutions.
Section 3 presents the construction of the closed form solution of
(\ref{Schrwithdelta}). In Section 4, the construction of the transmutation
operator and the main properties of the transmutation kernel are developed. In
Section 5, the SPPS method is presented, with the mapping and transmutation
properties of the transmutation operator. Section 6 presents the
Fourier-Legendre series representations for the transmutation kernels and the
Neumann series of Bessel functions representations for solutions of
(\ref{Schrwithdelta}), and a recursive integral relation for the Fourier-Legendre coefficients is obtained. Finally, in Section 7, integral and Neumann series of
Bessel functions representations for the derivatives of the solutions are presented.
\section{Problem setting and properties of the solutions}
We use the standard notation $W^{k,p}(0,b)$ ($b>0$) for the Sobolev space of
functions in $L_{p}(0,b)$ that have their first $k$ weak derivatives in
$L_{p}(0,b)$, $1\leqslant p\leqslant\infty$ and $k\in\mathbb{N}$. When $p=2$,
we denote $W^{k,2}(0,b)=H^{k}(0,b)$. We have that $W^{1,1}(0,b)=AC[0,b]$, and
$W^{1,\infty}(0,b)$ is precisely the class of Lipschitz continuous functions
in $[0,b]$ (see \cite[Ch. 8]{brezis}). The class of smooth functions with
compact support in $(0,b)$ is denoted by $C_{0}^{\infty}(0,b)$; we then define
$W_{0}^{1,p}(0,b)=\overline{C_{0}^{\infty}(0,b)}^{W^{1,p}}$ and $H_{0}%
^{1}(0,b)=W_{0}^{1,2}(0,b)$. Denote the dual space of $H^{1}_{0}(0,b)$ by
$H^{-1}(0,b)$. By $L_{2,loc}(0,b)$ we denote the class of measurable functions
$f:(0,b)\rightarrow\mathbb{C}$ such that $\int_{\alpha}^{\beta}|f(x)|^{2}%
dx<\infty$ for all subintervals $[\alpha, \beta]\subset(0,b)$.
The characteristic function of an interval $[A,B]\subset\mathbb{R}$ is denoted
by $\chi_{[A,B]}(t)$. In order to simplify the notation, for the case of a
symmetric interval $[-A,A]$, we simply write $\chi_{A}$. The Heaviside
function is given by $H(t)=\chi_{(0,\infty)}(t)$. The lateral limits of the
function $f$ at the point $\xi$ are denoted by $f(\xi\pm)=\lim_{x\rightarrow
\xi{\pm}}f(x)$. We use the notation $\mathbb{N}_{0}=\mathbb{N}\cup\{0\}$. The
space of distributions (generalized functions) over $C_{0}^{\infty}(0,b)$ is
denoted by $\mathscr{D}^{\prime}(0,b)$, and the value of a distribution
$f\in\mathscr{D}^{\prime}(0,b)$ at $\phi\in C_{0}^{\infty}(0,b)$ is denoted by
$(f,\phi)_{C_{0}^{\infty}(0,b)}$.\newline
Let $N\in\mathbb{N}$ and consider a partition $0<x_{1}<\dots<x_{N}<b$ and the
numbers $\alpha_{1}, \dots, \alpha_{N}\in\mathbb{C}\setminus\{0\}$. The set
$\mathfrak{I}_{N}=\{(x_{j},\alpha_{j})\}_{j=1}^{N}$ contains the information
about the point interactions of Eq. (\ref{Schrwithdelta}). Denote
\[
q_{\delta,\mathfrak{I}_{N}}(x):= \sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k}%
),\quad\mathbf{L}_{q}:= -\frac{d^{2}}{dx^{2}}+q(x), \quad\mathbf{L}%
_{q,\mathfrak{I}_{N}}:= \mathbf{L}_{q}+q_{\delta,\mathfrak{I}_{N}}(x).
\]
For $u\in L_{2,loc}(0,b)$, $\mathbf{L}_{q,\mathfrak{I}_{N}}u$ defines a
distribution in $\mathscr{D}^{\prime}(0,b)$ as follows
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}:= \int_{0}^{b}
u(x)\mathbf{L}_{q}\phi(x)dx+ \sum_{k=1}^{N}\alpha_{k} u(x_{k})\phi(x_{k})
\quad\mbox{for } \, \phi\in C_{0}^{\infty}(0,b).
\]
Note that the function $u$ must be well defined at the points $x_{k}$, $k=1,
\dots, N$. Actually, for a function $u\in H^{1}(0,b)$, the distribution
$\mathbf{L}_{q,\mathfrak{I}_{N}}u$ can be extended to a functional in
$H^{-1}(0,b)$ as follows
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,v)_{H_{0}^{1}(0,b)}:= \int_{0}^{b}
\{u^{\prime}(x)v^{\prime}(x)+q(x)u(x)v(x) \}dx+ \sum_{k=1}^{N}\alpha_{k}
u(x_{k})v(x_{k})\quad\mbox{for }\, v\in H_{0}^{1}(0,b).
\]
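The delta terms enter this bilinear form exactly like point masses, which is easy to see in a discretization. The following sketch (purely illustrative, not part of the paper: $q=0$, one interaction, P1 finite elements on a uniform grid with $x_1$ placed on a node) assembles the form, solves $-u''+\alpha_1\delta(x-x_1)u=1$ with homogeneous Dirichlet conditions, and checks the derivative jump of the discrete solution.

```python
# P1 finite elements for the bilinear form above with q = 0: the interaction
# contributes alpha1 * u(x1) v(x1), i.e. alpha1 on the diagonal entry of the
# node at x1 (hat functions vanish at x1 except the one centred there).
# We solve -u'' + alpha1 * delta(x - x1) u = 1, u(0) = u(b) = 0, and check
# the jump condition u'(x1+) - u'(x1-) = alpha1 * u(x1) for the discrete
# solution. All concrete values are illustrative.
b, alpha1 = 1.0, 3.0
n = 1000                      # number of subintervals; x1 placed on the grid
h = b / n
j1 = n // 2                   # node index of x1 = 0.5

# tridiagonal stiffness matrix on interior nodes 1..n-1, plus the delta term
main = [2.0 / h] * (n - 1)
off = [-1.0 / h] * (n - 2)
main[j1 - 1] += alpha1        # node j1 corresponds to interior index j1 - 1
rhs = [h] * (n - 1)           # load vector for f = 1 (int of a hat is h)

# Thomas algorithm (forward elimination, then back substitution)
for i in range(1, n - 1):
    m = off[i - 1] / main[i - 1]
    main[i] -= m * off[i - 1]
    rhs[i] -= m * rhs[i - 1]
u = [0.0] * (n - 1)
u[-1] = rhs[-1] / main[-1]
for i in range(n - 3, -1, -1):
    u[i] = (rhs[i] - off[i] * u[i + 1]) / main[i]

uh = [0.0] + u + [0.0]        # append the Dirichlet boundary values
jump = (uh[j1 + 1] - uh[j1]) / h - (uh[j1] - uh[j1 - 1]) / h
print(abs(jump - alpha1 * uh[j1]) < 5 * h)
```

The discrete jump reproduces $\alpha_1 u(x_1)$ up to consistency error $O(h)$, in line with the jump condition built into the pairing.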
We say that a distribution $F\in\mathscr{D}^{\prime}(0,b)$ is $L_{2}$-regular,
if there exists a function $g\in L_{2}(0,b)$ such that $(F,\phi)_{C_{0}%
^{\infty}(0,b)}=(g,\phi)_{C_{0}^{\infty}(0,b)}:=\int_{0}^{b}g(x)\phi(x)dx$ for
all $\phi\in{C_{0}^{\infty}(0,b)}$.
Denote $x_{0}=0$, $x_{N+1}=b$. We recall the following characterization of
functions $u\in L_{2,loc}(0,b)$ for which $\mathbf{L}_{q,\mathfrak{I}_{N}}u$
is $L_{2}$-regular.
\begin{proposition}
\label{propregular} If $u\in L_{2,loc}(0,b)$, then the distribution
$\mathbf{L}_{q,\mathfrak{I}_{N}}u$ is $L_{2}$-regular iff the following
conditions hold.
\begin{enumerate}
\item For each $k=0, \dots, N$, $u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1}%
)$.
\item $u\in AC[0,b]$.
\item The discontinuities of the derivative $u^{\prime}$ are located at the
points $x_{k}$, $k=1, \dots, N$, and the jumps are given by
\begin{equation}
\label{jumpderivative}u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-)=\alpha_{k}
u(x_{k}) \quad\mbox{for } k=1, \dots, N.
\end{equation}
\end{enumerate}
In such case,
\begin{equation}
\label{regulardist}(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty
}(0,b)}=(\mathbf{L}_{q}u,\phi)_{C_{0}^{\infty}(0,b)} \quad\mbox{for all }
\phi\in{C_{0}^{\infty}(0,b)}.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $\mathbf{L}_{q,\mathfrak{I}_{N}}u$ is $L_{2}$-regular. Then there
exists $g\in L_{2}(0,b)$ such that
\begin{equation}
\label{aux0prop1}(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}%
(0,b)}=(g,\phi)_{C_{0}^{\infty}(0,b)} \quad\mbox{for all } \phi\in
{C_{0}^{\infty}(0,b)}.
\end{equation}
\begin{enumerate}
\item Fix $k\in\{1, \dots, N-1\}$. Take a test function $\phi\in C_{0}%
^{\infty}(0,b)$ with $\operatorname{Supp}(\phi) \subset(x_{k},x_{k+1})$.
Hence
\begin{equation}
\label{auxprop1}\int_{x_{k}}^{x_{k+1}}g(x)\phi(x)dx=(\mathbf{L}%
_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}= \int_{x_{k}}^{x_{k+1}%
}u(x)\mathbf{L}_{q}\phi(x)dx,
\end{equation}
because $\phi(x_{j})=0$ for $j=1, \dots, N$. From (\ref{auxprop1}) we obtain
\[
\int_{x_{k}}^{x_{k+1}}u(x)\phi^{\prime\prime}(x)dx= \int_{x_{k}}^{x_{k+1}%
}\{q(x)u(x)-g(x)\}\phi(x)dx.
\]
Set $v(x)=\int_{0}^{x}\int_{0}^{t}\{q(s)u(s)-g(s)\}dsdt$. Hence $v\in
W^{2,1}(x_{k},x_{k+1})$, $v^{\prime\prime}(x)=q(x)u(x)-g(x)$ a.e. $x\in
(x_{k},x_{k+1})$, and we get the equality
\begin{equation}
\label{auxiliareq}\int_{x_{k}}^{x_{k+1}}(u(x)-v(x))\phi^{\prime\prime}(x)dx=0
\quad\forall\phi\in C_{0}^{\infty}(x_{k}, x_{k+1}).
\end{equation}
Equality (\ref{auxiliareq}) implies that $u(x)=v(x)+Ax+B$ a.e. $x\in
(x_{k},x_{k+1})$ for some constants $A$ and $B$ (\cite[pp. 85]{vladimirov}).
In consequence $u\in W^{2,1}(x_{k}, x_{k+1})$ and
\begin{equation}
\label{aux2prop1}-u^{\prime\prime}(x)+q(x)u(x)= g(x) \quad\mbox{a.e. }
x\in(x_{k},x_{k+1}).
\end{equation}
Furthermore, $u\in C[x_{k},x_{k+1}]$, hence $qu\in L_{2}(x_{k},x_{k+1})$ and
then $u^{\prime\prime}=qu-g\in L_{2}(x_{k},x_{k+1})$. In this way
$u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1})$.
Now take $\varepsilon>0$ and an arbitrary $\phi\in C_{0}^{\infty}%
(\varepsilon,x_{1})$. We have that
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}= \int
_{\varepsilon}^{x_{1}}\{-u(x)\phi^{\prime\prime}(x)+q(x)u(x)\phi
(x)\}dx=\int_{\varepsilon}^{x_{1}}g(x)\phi(x)dx.
\]
Applying the same procedure as in the previous case we obtain that $u\in
H^{2}(\varepsilon,x_{1})$ and satisfies Eq. (\ref{aux2prop1}) in the interval
$(\varepsilon,x_{1})$. Since $\varepsilon$ is arbitrary, we conclude that $u$
satisfies (\ref{aux2prop1}) for a.e. $x\in(0,x_{1})$. Since $q,g\in
L_{2}(0,x_{1})$, then $u|_{(0,x_{1})}\in H^{2}(0,x_{1})$ (see \cite[Th.
3.4]{zetl}). The proof for the interval $(x_{N},b)$ is analogous.
Since $u\in C^{1}[x_{k},x_{k+1}]$, $k=0,\dots, N$, the following equality is
valid (see formula (6) from \cite[pp. 100]{kanwal})
\begin{align}
\int_{0}^{b} u(x)\phi^{\prime\prime}(x)dx & = \sum_{k=1}^{N}\left\{
u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-) \right\} \phi(x_{k})\label{aux3prop1}\\
& \quad-\sum_{k=1}^{N}\left\{ u(x_{k}+)-u(x_{k}-) \right\} \phi^{\prime
}(x_{k}) +\int_{0}^{b} u^{\prime\prime}(x)\phi(x)dx, \qquad\forall\phi\in
C_{0}^{\infty}(0,b).\nonumber
\end{align}
Fix $k\in\{1, \cdots, N\}$ arbitrary and take $\varepsilon>0$ small enough
such that $(x_{k}-\varepsilon,x_{k}+\varepsilon)\subset(x_{k-1},x_{k+1})$.
Choose a cut-off function $\psi\in C_{0}^{\infty}(x_{k}-\varepsilon
,x_{k}+\varepsilon)$ satisfying $0\leqslant\psi\leqslant1$ on $(x_{k}%
-\varepsilon, x_{k}+\varepsilon)$ and $\psi(x)=1$ for $x\in(x_{k}%
-\frac{\varepsilon}{3}, x_{k}+\frac{\varepsilon}{3})$.
\item By statement 1, it is enough to show that $u(x_{k}+)=u(x_{k}-)$.
Set $\phi(x)=(x-x_{k})\psi(x)$, so that $\phi(x_{k})=0$ and
$\phi^{\prime}(x_{k})=1$. Hence
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u, \phi)_{C_{0}^{\infty}(0,b)}= \int
_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\phi(x)dx.
\]
By (\ref{aux3prop1}) we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\phi^{\prime\prime}(x)dx =
u(x_{k}-)-u(x_{k}+)+\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}%
u^{\prime\prime}(x)\phi(x)dx,
\]
because $\phi(x_{k})=0$ and $\phi^{\prime}(x_{k})=1$. Since $u$ satisfies
(\ref{aux0prop1}), we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\phi
(x)dx+u(x_{k}+)-u(x_{k}-)=0.
\]
By statement 1, $\mathbf{L}_{q}u=g$ on both intervals $(x_{k-1},x_{k})$ and
$(x_{k}, x_{k+1})$. Then we obtain that $u(x_{k}+)-u(x_{k}-)=0$.
\item Now take $\psi$ as the test function. Hence
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\psi)_{C_{0}^{\infty}(0,b)}= \int
_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\psi(x)dx+\alpha
_{k}u(x_{k}),
\]
because $\operatorname{Supp}(\psi)\subset(x_{k}-\varepsilon,x_{k}%
+\varepsilon)$ and $\psi\equiv1$ on $(x_{k}-\frac{\varepsilon}{3}, x_{k}%
+\frac{\varepsilon}{3})$. On the other hand, by (\ref{aux3prop1}) we obtain
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\psi^{\prime\prime}(x)dx =
u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-)+\int_{x_{k}-\varepsilon}^{x_{k}%
+\varepsilon}u^{\prime\prime}(x)\psi(x)dx,
\]
because $\psi^{\prime}(x_{k})=0$. Thus, by (\ref{aux0prop1}) we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\psi
(x)dx+ u^{\prime}(x_{k}-)-u^{\prime}(x_{k}+)+\alpha_{k}u(x_{k})=0.
\]
Again, by statement 1, we obtain (\ref{jumpderivative}).
\end{enumerate}
Conversely, if $u$ satisfies conditions 1, 2 and 3, equality (\ref{aux3prop1}%
) implies (\ref{regulardist}). By condition 1, $\mathbf{L}_{q,\mathfrak{I}%
_{N}}u$ is $L_{2}$-regular.
\end{proof}
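The integration-by-parts identity (\ref{aux3prop1}) underlying the proof can also be checked numerically. The sketch below (all concrete choices of $u$, the bump $\phi$, and the quadrature are illustrative) uses a continuous, piecewise linear $u$ whose derivative jumps by $\alpha_1 u(x_1)$ at $x_1$, so that $u''=0$ off $x_1$ and the identity reduces to $\int_0^b u\,\phi''\,dx = \alpha_1 u(x_1)\phi(x_1)$.

```python
import math

# Numerical check of the identity (aux3prop1): for a continuous, piecewise
# linear u with derivative jump alpha1*u(x1) at x1 (u'' = 0 elsewhere),
#   int_0^b u phi'' dx = alpha1 * u(x1) * phi(x1)
# for every test function phi. All concrete values are illustrative.
b, x1, alpha1 = 1.0, 0.4, 2.0

u_x1 = 1.0 + x1                      # u(x) = 1 + x on [0, x1]
slope_right = 1.0 + alpha1 * u_x1    # so u'(x1+) - u'(x1-) = alpha1 * u(x1)

def u(x):
    return 1.0 + x if x <= x1 else u_x1 + slope_right * (x - x1)

def phi(x):                          # smooth bump supported in (x1-0.3, x1+0.3)
    t = (x - x1) / 0.3
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

n = 20000
h = b / n
# quadrature of int u * phi'' using a central second difference for phi''
integral = h * sum(
    u(i * h) * (phi(i * h - h) - 2.0 * phi(i * h) + phi(i * h + h)) / h**2
    for i in range(1, n))

expected = alpha1 * u_x1 * phi(x1)   # the jump term [u'](x1) * phi(x1)
print(abs(integral - expected) < 1e-3)
```

The boundary terms vanish because $\phi$ is compactly supported in $(0,b)$, exactly as in the proof.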
\begin{definition}
The $L_{2}$-\textbf{regularization domain} of $\mathbf{L}_{q, \mathfrak{I}%
_{N}}$, denoted by $\mathcal{D}_{2}\left( \mathbf{L}_{q, \mathfrak{I}_{N}%
}\right) $, is the set of all functions $u\in L_{2,loc}(0,b)$ satisfying
conditions 1, 2 and 3 of Proposition~\ref{propregular}.
\end{definition}
If $u\in L_{2,loc}(0,b)$ is a solution of (\ref{Schrwithdelta}), then
$\mathbf{L}_{q-\lambda,\mathfrak{I}_{N}}u$ equals the regular distribution
zero. This leads to the following characterization.
\begin{corollary}
\label{proppropofsolutions} A function $u\in L_{2,loc}(0,b)$ is a solution of
Eq. (\ref{Schrwithdelta}) iff $u\in\mathcal{D}_{2}\left( \mathbf{L}_{q,
\mathfrak{I}_{N}}\right) $ and for each $k=0, \dots, N$, the restriction
$u|_{(x_{k},x_{k+1})}$ is a solution of the regular Schr\"odinger equation
\begin{equation}
\label{schrodingerregular}-y^{\prime\prime}(x)+q(x)y(x)=\lambda y(x)
\quad\mbox{for } x_{k}<x<x_{k+1}.
\end{equation}
\end{corollary}
\begin{remark}
\label{remarkidealdomain} Let $f\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $. Given $g\in C^{1}[0,b]$, we have
\begin{align*}
(fg)^{\prime}(x_{k}+)-(fg)^{\prime}(x_{k}-) & =f^{\prime}(x_{k}%
+)g(x_{k})+f(x_{k})g^{\prime}(x_{k}+)-f^{\prime}(x_{k}-)g(x_{k})-f(x_{k}%
)g^{\prime}(x_{k}-)\\
& = \left[ f^{\prime}(x_{k}+)-f^{\prime}(x_{k}-)\right] g(x_{k}) = \alpha_{k}
f(x_{k})g(x_{k})
\end{align*}
for $k=1, \dots, N$. In particular, $fg\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ for $g\in H^{2}(0,b)$.
\end{remark}
\begin{remark}
\label{remarkunicityofsol} Let $u_{0}, u_{1}\in\mathbb{C}$. Consider the
Cauchy problem
\begin{equation}
\label{Cauchyproblem1}%
\begin{cases}
\mathbf{L}_{q, \mathfrak{I}_{N}}u(x) = \lambda u(x), \quad0<x<b,\\
u(0)=u_{0}, \; u^{\prime}(0)= u_{1}.
\end{cases}
\end{equation}
If the solution of the problem exists, it must be unique. It is enough to show
the assertion for $u_{0}=u_{1}=0$. Indeed, if $w$ is a solution of such
problem, by Corollary \ref{proppropofsolutions}, $w$ is a solution of
(\ref{schrodingerregular}) on $(0, x_{1})$ satisfying $w(0)=w^{\prime}(0)=0$.
Hence $w\equiv0$ on $[0,x_{1}]$. By the continuity of $w$ and condition
(\ref{jumpderivative}), we have $w(x_{1})=w^{\prime}(x_{1}-)=0$. Hence $w$ is
a solution of (\ref{schrodingerregular}) satisfying these homogeneous
conditions. Thus, $w\equiv0$ on $[x_{1}, x_{2}]$. By continuing the process
until the points $x_{k}$ are exhausted, we arrive at the solution $w\equiv0$
on the whole segment $[0,b]$.
The uniqueness of the Cauchy problem with conditions $u(b)=u_{0}$, $u^{\prime
}(b)=u_{1}$ is proved in a similar way.
\end{remark}
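The propagation scheme used in the remark (solve the regular equation on each subinterval, then apply the jump condition (\ref{jumpderivative}) at each $x_k$) translates directly into a numerical method. The following sketch (with $q\equiv 0$ and illustrative data $\lambda$, $x_k$, $\alpha_k$) realizes one segment with a Runge--Kutta integrator and compares the result with exact trigonometric propagation, which is available since $q=0$.

```python
import math

# Piecewise solution of the Cauchy problem: integrate -y'' + q y = lam y on
# each subinterval and apply y'(xk+) = y'(xk-) + alpha_k y(xk) at each
# interaction point. Here q = 0, so each segment can also be propagated
# exactly with sines and cosines. All concrete values are illustrative.
b, lam = 1.0, 4.0
rho = math.sqrt(lam)
points = [(0.3, 1.5), (0.7, -0.8)]   # pairs (x_k, alpha_k)

def rk4_segment(y, yp, x0, x1, n=2000):
    """RK4 for the system (y, y')' = (y', -lam * y) on [x0, x1]."""
    h = (x1 - x0) / n
    for _ in range(n):
        def f(y_, yp_):
            return yp_, -lam * y_
        k1 = f(y, yp)
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y, yp

def exact_segment(y, yp, x0, x1):
    d = x1 - x0
    return (y * math.cos(rho * d) + yp * math.sin(rho * d) / rho,
            -y * rho * math.sin(rho * d) + yp * math.cos(rho * d))

def propagate(step):
    y, yp, x_prev = 0.0, 1.0, 0.0    # u(0) = 0, u'(0) = 1
    for xk, ak in points:
        y, yp = step(y, yp, x_prev, xk)
        yp += ak * y                 # delta-interaction jump in y'
        x_prev = xk
    return step(y, yp, x_prev, b)

y_num, _ = propagate(rk4_segment)
y_ref, _ = propagate(exact_segment)
print(abs(y_num - y_ref) < 1e-8)
```

The uniqueness argument of the remark is reflected in the fact that the forward sweep determines the solution on all of $[0,b]$ from the data at $0$.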
\begin{remark}
Suppose that $u_{0}=u_{0}(\lambda)$ and $u_{1}=u_{1}(\lambda)$ are entire
functions of $\lambda$ and denote by $u(\lambda,x)$ the corresponding unique
solution of (\ref{Cauchyproblem1}). Since $u$ is the solution of the Cauchy
problem $\mathbf{L}_{q}u=\lambda u$ on $(0,x_{1})$ with the initial conditions
$u(\lambda,0)=u_{0}(\lambda)$, $u^{\prime}(\lambda,0)=u_{1}(\lambda)$, both
$u(\lambda,x)$ and $u^{\prime}(\lambda,x+)$ are entire functions for any
$x\in[0,x_{1}]$ (this is a consequence of \cite[Th. 3.9]{zetl} and \cite[Th.
7]{blancarte}). Hence $u^{\prime}(\lambda,x_{1}-)=u^{\prime}(\lambda
,x_{1}+)-\alpha_{1}u(\lambda,x_{1})$ is entire in $\lambda$. Since $u$ is the
solution of the Cauchy problem $\mathbf{L}_{q}u=\lambda u$ on $(x_{1},x_{2})$
with initial conditions $u(\lambda,x_{1})$ and $u^{\prime}(\lambda,x_{1}+)$,
we have that $u(\lambda,x)$ and $u^{\prime}(\lambda,x+)$ are entire functions
for $x\in[x_{1},x_{2}]$. By continuing the process we prove this assertion for
all $x\in[0,b]$.
\end{remark}
\section{Closed form solution}
In what follows, denote the square root of $\lambda$ by $\rho$, so
$\lambda=\rho^{2}$, $\rho\in\mathbb{C}$. For each $k\in\{1, \cdots, N\}$ let
$\widehat{s}_{k}(\rho,x)$ be the unique solution of the Cauchy problem
\begin{equation}%
\begin{cases}
-\widehat{s}_{k}^{\prime\prime}(\rho,x)+q(x+x_{k})\widehat{s}_{k}(\rho
,x)=\rho^{2}\widehat{s}_{k}(\rho, x) \quad\mbox{ for } 0<x<b-x_{k},\\
\widehat{s}_{k}(\rho,0)=0, \; \widehat{s}_{k}^{\prime}(\rho, 0)=1.
\end{cases}
\end{equation}
In this way, $\widehat{s}_{k}(\rho, x-x_{k})$ is a solution of $\mathbf{L}%
_{q}u=\rho^{2} u$ on $(x_{k},b)$ with initial conditions $u(x_{k})=0$,
$u^{\prime}(x_{k})=1$. According to \cite[Ch. 3, Sec. 6.3]{vladimirov},
$(\mathbf{L}_{q}-\rho^{2})\left( H(x-x_{k})\widehat{s}_{k}(\rho,
x-x_{k})\right) =-\delta(x-x_{k})$ for $x_{k}<x<b$. \newline
We denote by $\mathcal{J}_{N}$ the set of finite sequences $J=(j_{1}, \dots,
j_{l})$ with $1<l\leqslant N$, $\{j_{1}, \dots, j_{l}\}\subset\{1, \dots, N\}$
and $j_{1}<\cdots<j_{l}$. Given $J\in\mathcal{J}_{N}$, the length of $J$ is
denoted by $|J|$ and we define $\alpha_{J}:= \alpha_{j_{1}}\cdots
\alpha_{j_{|J|}}$.
\begin{theorem}
\label{TheoremSolCauchy} Given $u_{0}, u_{1}\in\mathbb{C}$, the unique
solution $\displaystyle u_{\mathfrak{I}_{N}}\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ of the Cauchy problem
(\ref{Cauchyproblem1}) has the form
\begin{align}
u_{\mathfrak{I}_{N}}(\rho,x) & = \widetilde{u}(\rho,x)+ \sum_{k=1}^{N}%
\alpha_{k}\widetilde{u}(\rho,x_{k})H(x-x_{k})\widehat{s}_{k}(\rho
,x-x_{k})\nonumber\\
& \;\;\;\; +\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho, x-x_{j_{|J|}%
}),\label{generalsolcauchy}%
\end{align}
where $\widetilde{u}(\rho,x)$ is the unique solution of the regular
Schr\"odinger equation
\begin{equation}
\label{regularSch}\mathbf{L}_{q}\widetilde{u}(\rho,x)= \rho^{2}\widetilde
{u}(\rho,x), \quad0<x<b,
\end{equation}
satisfying the initial conditions $\widetilde{u}(\rho,0)=u_{0}, \;
\widetilde{u}^{\prime}(\rho,0)=u_{1}$.
\end{theorem}
\begin{proof}
The proof is by induction on $N$. For $N=1$, the proposed solution has the
form
\[
u_{\mathfrak{I}_{1}}(\rho,x)=\widetilde{u}(\rho,x)+\alpha_{1}H(x-x_{1}%
)\widetilde{u}(\rho,x_{1})\widehat{s}_{1}(\rho,x-x_{1}).
\]
Note that $u_{\mathfrak{I}_{1}}(\rho,x)$ is continuous, and $u_{\mathfrak{I}%
_{1}}(\rho,x_{1})=\widetilde{u}(\rho,x_{1})$. Hence
\[
(\mathbf{L}_{q}-\rho^{2})u_{\mathfrak{I}_{1}}(\rho,x)= \alpha_{1}\widetilde
{u}(\rho,x_{1})(\mathbf{L}_{q}-\rho^{2})\left( H(x-x_{1})\widehat{s}_{1}%
(\rho,x-x_{1})\right) = -\alpha_{1}\widetilde{u}(\rho,x_{1})\delta(x-x_{1}),
\]
that is, $u_{\mathfrak{I}_{1}}(\rho,x)$ is a solution of (\ref{Schrwithdelta})
with $N=1$. Suppose the result is valid for $N$. Let $u_{\mathfrak{I}_{N+1}%
}(\rho,x)$ be the proposed solution given by formula (\ref{generalsolcauchy}).
It is clear that $u_{\mathfrak{I}_{N+1}}(\rho, \cdot)|_{(x_{k},x_{k+1})}\in
H^{2}(x_{k},x_{k+1})$, $k=0, \dots, N+1$, $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is a
solution of (\ref{schrodingerregular}) on each interval $(x_{k},x_{k+1})$,
$k=0, \dots, N+1$, and $u_{\mathfrak{I}_{N+1}}^{(j)}(\rho,0)= \widetilde
{u}^{(j)}(\rho,0)=u_{j}$, $j=0,1$. Furthermore, we can write
\[
u_{\mathfrak{I}_{N+1}}(\rho,x)=u_{\mathfrak{I}_{N}}(\rho,x)+H(x-x_{N+1}%
)f_{N}(\rho,x),
\]
where $\mathfrak{I}_{N}= \mathfrak{I}_{N+1}\setminus\{(x_{N+1}, \alpha
_{N+1})\}$, $u_{\mathfrak{I}_{N}}(\rho,x)$ is the proposed solution for the
interactions $\mathfrak{I}_{N}$, and the function $f_{N}(\rho,x)$ is given by
\begin{align*}
f_{N}(\rho,x) & = \alpha_{N+1}\widetilde{u}(\rho,x_{N+1})\widehat{s}%
_{N+1}(\rho,x-x_{N+1})\\
& \quad+\sum_{\overset{J\in\mathcal{J}_{N+1}}{j_{|J|}=N+1}}\alpha_{J}
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{N+1}(\rho, x-x_{N+1}),
\end{align*}
where the sum is taken over all the sequences $J=(j_{1}, \dots, j_{|J|}%
)\in\mathcal{J}_{N+1}$ with $j_{|J|}=N+1$. From this representation we obtain
$u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1}\pm)=u_{\mathfrak{I}_{N}}(\rho,x_{N+1})$
and hence $u_{\mathfrak{I}_{N+1}}\in AC[0,b]$. By the induction hypothesis,
$u_{\mathfrak{I}_{N}}(\rho,x)$ is the solution of (\ref{Schrwithdelta}) for
$N$, then in order to show that $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is the
solution for $N+1$ it is enough to show that $(\mathbf{L}_{q}-\rho
^{2})\hat{f}_{N}(\rho,x)=-\alpha_{N+1}u_{\mathfrak{I}_{N+1}}(\rho
,x_{N+1})\delta(x-x_{N+1})$, where $\hat{f}_{N}(\rho,x)=H(x-x_{N+1}%
)f_{N}(\rho,x)$. Indeed, we have
\begin{align*}
(\rho^{2}-\mathbf{L}_{q})\hat{f}_{N}(\rho,x) & = \alpha_{N+1}\widetilde
{u}(\rho,x_{N+1})\delta(x-x_{N+1})\\
& \quad\; +\sum_{\overset{J\in\mathcal{J}_{N+1}}{j_{|J|}=N+1}}\alpha_{J}
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \delta(x-x_{N+1})\\
& = \alpha_{N+1}\delta(x-x_{N+1})\Bigg{[} \widetilde{u}(\rho,x_{N+1}%
)+\sum_{k=1}^{N}\alpha_{k}\widetilde{u}(\rho,x_{N+1})\widehat{s}_{k}(\rho,
x_{N+1}-x_{k})\\
& \quad\;\;\,+ \sum_{J\in\mathcal{J}_{N}}\alpha_{J}\widetilde{u}%
(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}%
}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho, x_{N+1}-x_{j_{|J|}})
\Bigg{]}\\
& = \alpha_{N+1}u_{\mathfrak{I}_{N}}(\rho,x_{N+1})\delta(x-x_{N+1}%
)=\alpha_{N+1}u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1})\delta(x-x_{N+1}),
\end{align*}
where the second equality is due to the fact that
\[
\{J\in\mathcal{J}_{N+1}\,|\, j_{|J|}=N+1 \} = \{(J^{\prime},N+1)\, |\,
J^{\prime}\in\mathcal{J}_{N} \}\cup\{(j,N+1) \}_{j=1}^{N}.
\]
Hence $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is the solution of the Cauchy problem.
\end{proof}
\begin{example}
Consider the case $q\equiv0$. Denote by $e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ the
unique solution of
\begin{equation}
\label{Deltadiracpotentialexample}-y^{\prime\prime}+\left( \sum_{k=1}%
^{N}\alpha_{k} \delta(x-x_{k})\right) y =\rho^{2} y, \quad0<x<b,
\end{equation}
satisfying $e_{\mathfrak{I}_{N}}^{0}(\rho, 0)=1$, $(e_{\mathfrak{I}_{N}}%
^{0})^{\prime}(\rho, 0)=i\rho$. In this case we have $\widehat{s}_{k}(\rho,x)=\frac
{\sin(\rho x)}{\rho}$ for $k=1,\dots, N$. Hence, according to Theorem
\ref{TheoremSolCauchy}, the solution $e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ has
the form
\begin{align}
e_{\mathfrak{I}_{N}}^{0}(\rho,x) & =e^{i\rho x}+\sum_{k=1}^{N}\alpha
_{k}e^{i\rho x_{k}}H(x-x_{k})\frac{\sin(\rho(x-x_{k}))}{\rho}\nonumber\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})e^{i\rho
x_{j_{1}}}\left( \prod_{l=1}^{|J|-1}\frac{\sin(\rho(x_{j_{l+1}}-x_{j_{l}}%
))}{\rho}\right) \frac{\sin(\rho(x-x_{j_{|J|}}))}{\rho}.\label{SolDiracDeltae}%
\end{align}
\end{example}
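The representation (\ref{SolDiracDeltae}) can be tested numerically: away from the interaction points the equation reduces to $-y''=\rho^{2}y$, and each $\delta$-term imposes the jump $y'(x_{k}+)=y'(x_{k}-)+\alpha_{k}y(x_{k})$, so the same solution is obtained by direct propagation between the points $x_k$. A Python sketch of this comparison (illustrative only; the function names are ours):

```python
import itertools
import numpy as np

def e0_closed(rho, x, xs, alphas):
    """Evaluate the closed-form solution (SolDiracDeltae) for q == 0."""
    s = lambda y: np.sin(rho * y) / rho
    val = np.exp(1j * rho * x)
    N = len(xs)
    for k in range(N):            # single-interaction terms
        if x >= xs[k]:
            val += alphas[k] * np.exp(1j * rho * xs[k]) * s(x - xs[k])
    for l in range(2, N + 1):     # terms indexed by J in J_N
        for J in itertools.combinations(range(N), l):
            if x >= xs[J[-1]]:
                term = np.prod([alphas[j] for j in J]) * np.exp(1j * rho * xs[J[0]])
                for a, b in zip(J, J[1:]):
                    term *= s(xs[b] - xs[a])
                val += term * s(x - xs[J[-1]])
    return val

def e0_direct(rho, x, xs, alphas):
    """Propagate y'' = -rho^2 y with jumps y'(x_k+) = y'(x_k-) + alpha_k y(x_k)."""
    y, dy, a = 1.0 + 0j, 1j * rho, 0.0
    for xk, ak in zip(xs, alphas):
        if xk >= x:
            break
        d = xk - a
        y, dy = (y * np.cos(rho * d) + dy * np.sin(rho * d) / rho,
                 -y * rho * np.sin(rho * d) + dy * np.cos(rho * d))
        dy += ak * y                # jump of the derivative at x_k
        a = xk
    d = x - a
    return y * np.cos(rho * d) + dy * np.sin(rho * d) / rho
```

Both evaluations agree to machine precision, in accordance with Theorem \ref{TheoremSolCauchy}.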
\section{Transmutation operators}
\subsection{Construction of the integral transmutation kernel}
Let $h\in\mathbb{C}$. Denote by $\widetilde{e}_{h}(\rho,x)$ the unique
solution of Eq. (\ref{regularSch}) satisfying $\widetilde{e}_{h}(\rho,0)=1$,
$\widetilde{e}^{\prime}_{h}(\rho, 0)=i\rho+h$. Hence the unique solution
$e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ of Eq. (\ref{Schrwithdelta}) satisfying
$e_{\mathfrak{I}_{N}}^{h}(\rho,0)=1$, $(e_{\mathfrak{I}_{N}}^{h})^{\prime
}(\rho,0)=i\rho+h$ is given by
\begin{align}
e_{\mathfrak{I}_{N}}^{h}(\rho,x) & =\widetilde{e}_{h}(\rho,x)+\sum_{k=1}%
^{N}\alpha_{k}\widetilde{e}_{h}(\rho, x_{k})H(x-x_{k})\widehat{s}_{k}(\rho,
x-x_{k})\label{SoleGral}\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})\widetilde
{e}_{h}(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}%
(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho,x-x_{j_{|J|}%
}).\nonumber
\end{align}
It is known that there exists a kernel $\widetilde{K}^{h}\in C(\overline
{\Omega})\cap H^{1}(\Omega)$, where $\Omega=\{(x,t)\in\mathbb{R}^{2}\, |\,
0<x<b, |t|<x\}$, such that $\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}%
\int_{0}^{x}q(s)ds$, $\widetilde{K}^{h}(x,-x)=\frac{h}{2}$ and
\begin{equation}
\label{transm1}\widetilde{e}_{h}(\rho,x)=e^{i\rho x}+\int_{-x}^{x}%
\widetilde{K}^{h}(x,t)e^{i\rho t}dt
\end{equation}
(see, e.g., \cite{levitan, marchenko}). Actually, $\widetilde{K}^{h}%
(x,\cdot)\in L_{2}(-x,x)$ and it can be extended (as a function of $t$) to a
function in $L_{2}(\mathbb{R})$ with support in $[-x,x]$. For each $k\in\{1,
\dots, N\}$ there exists a kernel $\widehat{H}_{k}\in C(\overline{\Omega_{k}%
})\cap H^{1}(\Omega_{k})$ with $\Omega_{k}=\{(x,t)\in\mathbb{R}^{2}\,|\,
0<x<b-x_{k}, \; |t|\leqslant x\}$, and $\widehat{H}_{k}(x,x)=\frac{1}{2}%
\int_{x_{k}}^{x+x_{k}}q(s)ds$, $\widehat{H}_{k}(x,-x)=0$, such that
\begin{equation}
\label{representationsinegeneral1}\widehat{s}_{k}(\rho,x)=\frac{\sin(\rho
x)}{\rho}+\int_{0}^{x}\widehat{H}_{k}(x,t)\frac{\sin(\rho t)}{\rho}dt
\end{equation}
(see \cite[Ch. 1]{yurko}). From this we obtain the representation
\begin{equation}
\label{representationsinegeneral2}\widehat{s}_{k}(\rho,x-x_{k})=\frac
{\sin(\rho(x-x_{k}))}{\rho}+\int_{0}^{x-x_{k}}\widehat{H}_{k}(x-x_{k}%
,t)\frac{\sin(\rho t)}{\rho}dt=\int_{-(x-x_{k})}^{x-x_{k}}\widetilde{K}%
_{k}(x,t)e^{i\rho t}dt,
\end{equation}
where
\begin{equation}
\label{kernelauxiliarsine}\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}%
}(t)+\displaystyle \frac{1}{2}\int_{|t|}^{x-x_{k}}\widehat{H}_{k}%
(x-x_{k},s)ds.
\end{equation}
We denote the Fourier transform of a function $f\in L_{1}(\mathbb{R})$ by
$\mathcal{F}f(\rho)=\int_{\mathbb{R}}f(t)e^{i\rho t}dt$ and the convolution of
$f$ with a function $g\in L_{1}(\mathbb{R})$ by $f\ast g(t)= \int_{\mathbb{R}%
}f(t-s)g(s)ds$. We recall that $\mathcal{F}(f\ast g)(\rho)=\mathcal{F}%
f(\rho)\cdot\mathcal{F}g(\rho)$. Given $f_{1}, \dots, f_{M} \in
L_{2}(\mathbb{R})$ with compact support, we denote their convolution product
by $\left( \prod_{l=1}^{M}\right) ^{\ast}f_{l}(t):= (f_{1}\ast\cdots\ast
f_{M})(t)$. For the kernels $\widetilde{K}^{h}(x,t), \widetilde{K}_{k}(x,t)$,
the operations $\mathcal{F}$ and $\ast$ will be applied with respect to the
variable $t$.
\begin{lemma}
\label{lemaconv} Let $A,B>0$. If $f\in C[-A,A]$ and $g\in C[-B,B]$, then
$(\chi_{A} f)\ast(\chi_{B} g)\in C(\mathbb{R})$ with $\operatorname{Supp}%
\left( (\chi_{A} f)\ast(\chi_{B} g)\right) \subset[-(A+B),A+B]$.
\end{lemma}
\begin{proof}
The assertion $\operatorname{Supp}\left( (\chi_{A} f)\ast(\chi_{B} g)\right)
\subset[-(A+B),A+B]$ is due to \cite[Prop. 4.18]{brezis}. Since $(\chi_{A}
f)\in L_{1}(\mathbb{R})$ and $(\chi_{B} g)\in L_{\infty}(\mathbb{R})$, it
follows from \cite[Prop. 8.8]{folland} that $(\chi_{A} f)\ast(\chi_{B} g)\in
C(\mathbb{R})$.
\end{proof}
\begin{theorem}
\label{thoremtransmoperator} There exists a kernel $K_{\mathfrak{I}_{N}}%
^{h}(x,t)$ defined on $\Omega$ such that
\begin{equation}
\label{transmutationgeneral}e_{\mathfrak{I}_{N}}^{h}(\rho,x)=e^{i\rho x}%
+\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)e^{i\rho t}dt.
\end{equation}
For any $0<x\leqslant b$, $K_{\mathfrak{I}_{N}}^{h}(x,t)$ is piecewise
absolutely continuous with respect to the variable $t\in[-x,x]$ and satisfies
$K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$. Furthermore,
$K_{\mathfrak{I}_{N}}^{h}\in L_{\infty}(\Omega)$.
\end{theorem}
\begin{proof}
Substitution of formulas (\ref{transm1}) and (\ref{representationsinegeneral2}%
) in (\ref{SoleGral}) leads to the equality
\begin{align*}
& e_{\mathfrak{I}_{N}}^{h}(\rho,x)= e^{i\rho x}+\int_{-x}^{x}\widetilde{K}%
^{h}(x,t)e^{i\rho t}dt\\
& +\sum_{k=1}^{N}\alpha_{k}H(x-x_{k})\left( e^{i\rho x_{k}}+\int
\limits_{-x_{k}}^{x_{k}}\widetilde{K}^{h}(x_{k},t)e^{i\rho t}dt\right) \left(
\int\limits_{-(x-x_{k})}^{x-x_{k}}\widetilde{K}_{k}(x,t)e^{i\rho t}dt\right)
\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\Bigg[\left( e^{i\rho
x_{j_{1}}}+\int\limits_{-x_{j_{1}}}^{x_{j_{1}}}\widetilde{K}^{h}(x_{j_{1}%
},t)e^{i\rho t}dt\right) \left( \prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}%
}-x_{j_{l}})}^{x_{j_{l+1}}-x_{j_{l}}}\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)e^{i\rho
t}dt\right) \\
& \qquad\qquad\qquad\qquad\cdot\int\limits_{-(x-x_{j_{|J|}})}^{x-x_{j_{|J|}}%
}\widetilde{K}_{j_{|J|}}(x,t)e^{i\rho t}dt\Bigg].
\end{align*}
Note that
\begin{align*}
\prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}}-x_{j_{l}})}^{x_{j_{l+1}%
}-x_{j_{l}}}\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)e^{i\rho t}dt & = \mathcal{F}%
\left\{ \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}%
}-x_{j_{l}}}(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \right\} .
\end{align*}
In a similar way, if we denote $I_{A,B}=\left( e^{i\rho A}+\int\limits_{-A}%
^{A}\widetilde{K}^{h}(A,t)e^{i\rho t}dt\right) \left( \int\limits_{-B}%
^{B}\widetilde{K}_{k}(B,t)e^{i\rho t}dt\right) $ with $A,B\in(0,b)$, then
\begin{align*}
I_{A,B} = & e^{i\rho A}\int\limits_{-B}^{B}\widetilde{K}_{k}(B,t)e^{i\rho
t}dt+ \mathcal{F}\left( \chi_{A}(t)\widetilde{K}^{h}(A,t)\ast\chi
_{B}(t)\widetilde{K}_{k}(B,t)\right) \\
= & \mathcal{F}\left( \chi_{[A-B,B+A]}(t)\widetilde{K}_{k}(B,t-A)+ \chi
_{A}(t)\widetilde{K}^{h}(A,t)\ast\chi_{B}(t)\widetilde{K}_{k}(B,t)\right) .
\end{align*}
Set $R_{N}(\rho,x)=e_{\mathfrak{I}_{N}}^{h}(\rho,x)-e^{i\rho x}$. Thus,
\begin{align*}
R_{N}(\rho,x) = & \mathcal{F}\Bigg[ \chi_{x}(t)\widetilde{K}^{h}(x,t)\\
& +\sum_{k=1}^{N}\alpha_{k}H(x-x_{k}) \left( \chi_{[2x_{k}-x,x]}%
(t)\widetilde{K}_{k}(x,t-x_{k})+ \chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k}%
,t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\right) \\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \\
& \ast\Big(\chi_{[x_{j_{|J|}}+x_{j_{1}}-x, x-(x_{j_{|J|}}-x_{j_{1}}%
)]}(t)\widetilde{K}_{j_{|J|}}(x,t-x_{j_{1}})\\
& \qquad\; + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big)\Bigg]
\end{align*}
According to Lemma \ref{lemaconv}, the support of $\left( \prod_{l=1}%
^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}%
_{j_{l}}(x_{j_{l+1}},t)\right) $ lies in \newline$[x_{j_{1}}-x_{j_{|J|}},
x_{j_{|J|}}-x_{j_{1}}]$ and $\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde
{K}_{j_{|J|}}(x,t-x_{j_{1}})+ \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}%
},t)\ast\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)$ has its support
in $[x_{j_{|J|}}+x_{j_{1}}-x, x-(x_{j_{|J|}}-x_{j_{1}})]$. Hence the
convolution in the second sum of $R_{N}(\rho,x)$ has its support in $[-x,x]$.
On the other hand, $\chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k},t)\ast\chi
_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)$ has its support in $[-x,x]$, and since
$[2x_{k}-x,x]\subset[-x,x]$, we conclude that $\operatorname{Supp}\left(
\mathcal{F}^{-1}R_{N}(\rho,x)\right) \subset[-x,x]$.
Thus, we obtain (\ref{transmutationgeneral}) with
\begin{align}
K_{\mathfrak{I}_{N}}^{h}(x,t) = & \chi_{x}(t)\widetilde{K}^{h}%
(x,t)\nonumber\\
& +\sum_{k=1}^{N}\alpha_{k}H(x-x_{k}) \left( \chi_{[2x_{k}-x,x]}%
(t)\widetilde{K}_{k}(x,t-x_{k})+ \chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k}%
,t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\right) \nonumber\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \label{transmkernelgeneral}\\
& \qquad\ast\Big(\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{|J|}%
}(x,t-x_{j_{1}}) + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big),\nonumber
\end{align}
and $K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$. By formula
(\ref{transmkernelgeneral}) and the definitions of $\widetilde{K}^{h}(x,t)$ and
$\widetilde{K}_{k}(x,t)$, $K_{\mathfrak{I}_{N}}^{h}(x,t)$ is piecewise absolutely
continuous for $t\in[-x,x]$. Since $\widetilde{K}^{h},\widetilde{K}_{k}\in
L_{\infty}(\Omega)$, it is clear that $K_{\mathfrak{I}_{N}}^{h}\in L_{\infty
}(\Omega)$.
\end{proof}
As a consequence of (\ref{transmutationgeneral}), $e_{\mathfrak{I}_{N}}%
^{h}(\rho,x)$ is an entire function of exponential type $x$ with respect to the
spectral parameter $\rho$.
\begin{example}
\label{beginexample1} Consider (\ref{SolDiracDeltae}) with $N=1$. In this
case the solution $e_{\mathfrak{I}_{1}}^{0}(\rho,x)$ is given by
\[
e_{\mathfrak{I}_{1}}^{0}(\rho,x)= e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}%
}H(x-x_{1})\frac{\sin(\rho(x-x_{1}))}{\rho}.
\]
We have
\[
e^{i\rho x_{1}}\frac{\sin(\rho(x-x_{1}))}{\rho}=\frac{1}{2}\int_{x_{1}%
-x}^{x-x_{1}}e^{i\rho(t+x_{1})}dt= \frac{1}{2}\int_{2x_{1}-x}^{x}e^{i\rho
t}dt.
\]
Hence
\[
e_{\mathfrak{I}_{1}}^{0}(\rho,x)= e^{i\rho x}+\int_{-x}^{x}K_{\mathfrak{I}%
_{1}}^{0}(x,t)e^{i\rho t}dt \quad\mbox{with }\, K_{\mathfrak{I}_{1}}%
^{0}(x,t)=\frac{\alpha_{1}}{2}H(x-x_{1})\chi_{[2x_{1}-x,x]}(t).
\]
\end{example}
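In this example the integral $\int_{-x}^{x}K_{\mathfrak{I}_{1}}^{0}(x,t)e^{i\rho t}dt$ can be evaluated in closed form, which gives an elementary check of the representation. A Python sketch (the helper names are ours, for illustration):

```python
import numpy as np

def e1_closed(rho, x, x1, a1):
    # closed form of e^0_{I_1}(rho, x), valid for x > x1
    return np.exp(1j*rho*x) + a1*np.exp(1j*rho*x1)*np.sin(rho*(x - x1))/rho

def e1_via_kernel(rho, x, x1, a1):
    # e^{i rho x} + int_{-x}^{x} K(x,t) e^{i rho t} dt with
    # K(x,t) = (a1/2) * chi_{[2 x1 - x, x]}(t), for x > x1;
    # the integral is computed exactly from the antiderivative of e^{i rho t}
    integral = (a1/2)*(np.exp(1j*rho*x) - np.exp(1j*rho*(2*x1 - x)))/(1j*rho)
    return np.exp(1j*rho*x) + integral
```

The two expressions coincide identically, since $e^{i\rho x_{1}}\sin(\rho(x-x_{1}))/\rho=(e^{i\rho x}-e^{i\rho(2x_{1}-x)})/(2i\rho)$.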
\begin{example}
\label{beginexample2} Consider again Eq. (\ref{SolDiracDeltae}) but now with
$N=2$. In this case the solution $e_{\mathfrak{I}_{2}}^{0}(\rho,x)$ is given
by
\begin{align*}
e_{\mathfrak{I}_{2}}^{0}(\rho,x) = & e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}%
}H(x-x_{1})\frac{\sin(\rho(x-x_{1}))}{\rho}+\alpha_{2}e^{i\rho x_{2}}%
H(x-x_{2})\frac{\sin(\rho(x-x_{2}))}{\rho}\\
& +\alpha_{1}\alpha_{2}e^{i\rho x_{1}}H(x-x_{2})\frac{\sin(\rho(x_{2}%
-x_{1}))}{\rho}\frac{\sin(\rho(x-x_{2}))}{\rho},
\end{align*}
and the transmutation kernel $K_{\mathfrak{I}_{2}}^{0}(x,t)$ has the form
\begin{align*}
K_{\mathfrak{I}_{2}}^{0}(x,t) & = \frac{\alpha_{1}H(x-x_{1})}{2}\chi
_{[2x_{1}-x,x]}(t)+\frac{\alpha_{2}H(x-x_{2})}{2}\chi_{[2x_{2}-x,x]}(t)\\
& \qquad+\frac{\alpha_{1}\alpha_{2}H(x-x_{2})}{4}\left( \chi_{x_{2}-x_{1}}%
\ast\chi_{x-x_{2}}\right) (t-x_{1}).
\end{align*}
Direct computation shows that
\begin{multline*}
\chi_{x_{2}-x_{1}}\ast\chi_{x-x_{2}}(t-x_{1})=\\%
\begin{cases}
0, & t\not \in [2x_{1}-x,x],\\
t+x-2x_{1}, & 2x_{1}-x< t< -|2x_{2}-x-x_{1}|+x_{1},\\
x-x_{1}-|2x_{2}-x-x_{1}|, & -|2x_{2}-x-x_{1}|+x_{1}< t<|2x_{2}-x-x_{1}%
|+x_{1},\\
x-t, & |2x_{2}-x-x_{1}|+x_{1}<t<x.
\end{cases}
\end{multline*}
In Figure \ref{levelcurves} we show the graphs of the kernel
$K_{\mathfrak{I}_{2}}^{0}(x,t)$ (as a function of $t$), with $\mathfrak{I_{2}%
}=\{(0.25,1), (0.75,2)\}$, for several values of $x$. \begin{figure}[h]
\centering
\includegraphics[width=11.cm]{ejemplo1kernel.pdf} \caption{The graphs of
$K_{\mathfrak{I}_{2}}^{0}(x,t)$, as a function of $t\in[-1,1]$, for some
points $x\in(0,1)$ and $\mathfrak{I_{2}}=\{(0.25,1), (0.75,2)\}$.}%
\label{levelcurves}%
\end{figure}
\end{example}
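The piecewise formula can be cross-checked by evaluating the convolution as an interval-overlap length (recall that $\chi_{A}:=\chi_{[-A,A]}$). A Python sketch (function names are ours):

```python
def conv_box(t, x, x1, x2):
    # (chi_{x2-x1} * chi_{x-x2})(t - x1) computed as the overlap length of
    # [-(x-x2), x-x2] with [t - x1 - (x2-x1), t - x1 + (x2-x1)]
    A, B, s = x2 - x1, x - x2, t - x1
    return max(0.0, min(B, s + A) - max(-B, s - A))

def conv_piecewise(t, x, x1, x2):
    # the piecewise (trapezoid-shaped) formula from the example
    d = abs(2*x2 - x - x1)
    if t <= 2*x1 - x or t >= x:
        return 0.0
    if t < x1 - d:
        return t + x - 2*x1
    if t <= x1 + d:
        return x - x1 - d
    return x - t
```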
For the general case we have the following representation for the kernel.
\begin{proposition}
The transmutation kernel $K_{\mathfrak{I}_{N}}^{0}(x,t)$ for the solution
$e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ of (\ref{SolDiracDeltae}) is given by
\begin{align}
K_{\mathfrak{I}_{N}}^{0}(x,t) & = \sum_{k=1}^{N}\frac{\alpha_{k}H(x-x_{k}%
)}{2}\chi_{[2x_{k}-x,x]}(t)\nonumber\\
& +\sum_{J\in\mathcal{J}_{N}}\frac{\alpha_{J}H(x-x_{j_{|J|}})}{2^{|J|}%
}\left( \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\chi_{x_{j_{l+1}}-x_{j_{l}}%
}(t) \right) \ast\chi_{x-x_{j_{|J|}}}(t-x_{j_{1}})\label{kerneldeltaN}%
\end{align}
\end{proposition}
\begin{proof}
In this case $\widetilde{e}_{0}(\rho,x)=e^{i\rho x}$, $\widehat{s}_{k}%
(\rho,x-x_{k})=\frac{\sin(\rho(x-x_{k}))}{\rho}$, hence $\widetilde{K}%
^{0}(x,t)\equiv0$, $\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}}(t)$.
Substituting these expressions into (\ref{transmkernelgeneral}) and taking
into account that $\chi_{[x_{j_{|J|}}+x_{j_{1}}-x,\, x-(x_{j_{|J|}}-x_{j_{1}}%
)]}(t)=\chi_{x-x_{j_{|J|}}}(t-x_{j_{1}})$ we obtain (\ref{kerneldeltaN}).
\end{proof}
Let
\begin{equation}
\label{transmutationoperator1}\mathbf{T}_{\mathfrak{I}_{N}}^{h} u(x):= u(x)+
\int_{-x}^{x} K_{\mathfrak{I}_{N}}^{h}(x,t)u(t)dt.
\end{equation}
By Theorem \ref{thoremtransmoperator}, $\mathbf{T}_{\mathfrak{I}_{N}}^{h}%
\in\mathcal{B}\left( L_{2}(-b,b)\right) $ and
\begin{equation}
\label{transmutationrelation1}e_{\mathfrak{I}_{N}}^{h}(\rho,x)= \mathbf{T}_{
\mathfrak{I}_{N}}^{h} \left[ e^{i\rho x}\right] .
\end{equation}
\subsection{Goursat conditions}
Let us define the function
\begin{equation}
\label{Sigmafunction}\sigma_{\mathfrak{I}_{N}}(x):= \sum_{k=1}^{N}\alpha
_{k}H(x-x_{k}).
\end{equation}
Hence $\sigma_{\mathfrak{I}_{N}}^{\prime}(x)=q_{\delta,\mathfrak{I}_{N}}(x)$
in the distributional sense ($(q_{\delta,\mathfrak{I}_{N}},\phi)_{C_{0}%
^{\infty}(0,b)}=-(\sigma_{\mathfrak{I}_{N}}, \phi^{\prime})_{C_{0}^{\infty
}(0,b)}$ for all $\phi\in C_{0}^{\infty}(0,b)$). Note that in Examples
\ref{beginexample1} and \ref{beginexample2} we have
\[
K_{\mathfrak{I}_{N}}^{0}(x,x)= \frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \;\,\mbox{ and }\;\, K_{\mathfrak{I}_{N}%
}^{0}(x,-x)=0 \;\, \mbox{ for } N=1,2.
\]
More generally, the following statement is true.
\begin{proposition}
\label{propGoursat} The integral transmutation kernel $K_{\mathfrak{I}_{N}%
}^{h}$ satisfies the following Goursat conditions for $x\in[0,b]$
\begin{equation}
K_{\mathfrak{I}_{N}}^{h}(x,x)= \frac{1}{2}\left( h+\int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \qquad\mbox{ and }\qquad K_{\mathfrak{I}%
_{N}}^{h}(x,-x)=\frac{h}{2}.
\end{equation}
\end{proposition}
\begin{proof}
Fix $x\in[0,b]$ and take $\xi\in\{-x,x\}$. By formula
(\ref{transmkernelgeneral}) we can write
\[
K_{\mathfrak{I}_{N}}^{h}(x,\xi)= \widetilde{K}^{h}(x,\xi)+\sum_{k=1}^{N}%
\alpha_{k}H(x-x_{k})\chi_{[2x_{k}-x,x]}(\xi)\widetilde{K}_{k}(x,\xi
-x_{k})+F(x,\xi),
\]
where
\begin{align*}
F(x,t) = & \sum_{k=1}^{N}\alpha_{k}H(x-x_{k}) \chi_{x_{k}}(t)\widetilde
{K}^{h}(x_{k},t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \\
& \qquad\ast\Big(\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{|J|}%
}(x,t-x_{j_{1}}) + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big).
\end{align*}
In the proof of Theorem \ref{thoremtransmoperator} we obtain that
$\operatorname{Supp}(F(x,t))\subset[-x,x]$. Since $\widetilde{K}^{h}(x_{j},t)$
and $\widetilde{K}_{k}(x_{j},t)$ are continuous with respect to $t$ in the
intervals $[-x_{j},x_{j}]$ and $[x_{k}-x_{j},x_{j}-x_{k}]$ respectively for
$j=1, \dots, N$, $k\leqslant j$, by Lemma \ref{lemaconv} the function $F(x,t)$
is continuous for all $t\in\mathbb{R}$. Being continuous with support in
$[-x,x]$, $F(x,\cdot)$ vanishes at $t=\pm x$, so $F(x,\xi)=0$. For the case
$\xi=x$, we have that $\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}\int
_{0}^{x}q(s)ds$, $\chi_{[2x_{k}-x,x]}(x)=1$ and
\[
\widetilde{K}_{k}(x,x-x_{k})=\frac{1}{2}\chi_{x-x_{k}}(x-x_{k}%
)+\displaystyle \frac{1}{2}\int_{|x-x_{k}|}^{x-x_{k}}\widehat{H}_{k}%
(x-x_{k},s)ds= \frac{1}{2}
\]
(we assume that $x\geqslant x_{k}$ in order to have $H(x-x_{k})=1$).
Thus\newline$K_{\mathfrak{I}_{N}}^{h}(x,x)=\frac{1}{2}\left( h+\int_{0}%
^{x}q(s)ds+\sigma_{\mathfrak{I}_{N}}(x)\right) $. For the case $\xi=-x$,
$\widetilde{K}^{h}(x,-x)=\frac{h}{2}$ and\newline$\chi_{[2x_{k}-x,x]}(-x)=0$.
Hence $K_{\mathfrak{I}_{N}}^{h}(x,-x)=\frac{h}{2}$.
\end{proof}
\begin{remark}
\label{remarkantiderivativegousart} According to Proposition \ref{propGoursat}%
, $2K_{\mathfrak{I}_{N}}^{h}(x,x)$ is a (distributional) antiderivative of the
potential $q(x)+q_{\delta,\mathfrak{I}_{N}}(x)$.
\end{remark}
\subsection{The transmuted Cosine and Sine solutions}
Let $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}}(\rho,x)$ be
the solutions of Eq. (\ref{Schrwithdelta}) satisfying the initial conditions
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho, 0)= 1, & (c_{\mathfrak{I}_{N}}^{h})^{\prime
}(\rho, 0)=h,\label{cosinegralinitialconds}\\
s_{\mathfrak{I}_{N}}(\rho, 0)=0, & s_{\mathfrak{I}_{N}}^{\prime}(\rho,0)=
1.\label{sinegralinitialconds}%
\end{align}
Note that $c_{\mathfrak{I}_{N}}^{h}(\rho,x)=\frac{e_{\mathfrak{I}_{N}}%
^{h}(\rho,x)+e_{\mathfrak{I}_{N}}^{h}(-\rho,x)}{2}$ and $s_{\mathfrak{I}_{N}%
}(\rho,x)= \frac{e_{\mathfrak{I}_{N}}^{h}(\rho,x)-e_{\mathfrak{I}_{N}}%
^{h}(-\rho,x)}{2i\rho}$.
\begin{remark}
\label{remarksinecosinefunda} By Corollary \ref{proppropofsolutions},
$c_{\mathfrak{I}_{N}}^{h}(\rho,\cdot), s_{\mathfrak{I}_{N}}(\rho,\cdot)\in
AC[0,b]$ and both functions are solutions of Eq. (\ref{schrodingerregular}) on
$[0,x_{1}]$, hence their Wronskian is constant for $x\in[0,x_{1}]$ and
\begin{align*}
1 & = W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}%
(\rho,x)\right] (0)=W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}%
_{N}}(\rho,x)\right] (x_{1}-) =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}-) & s^{\prime}_{\mathfrak{I}%
_{N}}(\rho,x_{1}-)
\end{vmatrix}
\\
& =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+)-\alpha_{1} c_{\mathfrak{I}%
_{N}}^{h}(\rho,x_{1}) & s^{\prime}_{\mathfrak{I}_{N}}(\rho,x_{1}+)-\alpha_{1}
s_{\mathfrak{I}_{N}}(\rho,x_{1})
\end{vmatrix}
\\
& =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+) & s^{\prime}_{\mathfrak{I}%
_{N}}(\rho,x_{1}+)
\end{vmatrix}
= W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}%
(\rho,x)\right] (x_{1}+)
\end{align*}
(the equality in the second line is due to (\ref{jumpderivative})). Since
$c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)$ are solutions
of (\ref{schrodingerregular}) on $[x_{1},x_{2}]$, then $W\left[
c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right] $ is
constant for $x\in[x_{1},x_{2}]$. Thus, \newline$W\left[ c_{\mathfrak{I}_{N}%
}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right] (x)=1$ for all
$x\in[0,x_{2}]$. Continuing the process we obtain that the Wronskian equals
one in the whole segment $[0,b]$. Thus, $c_{\mathfrak{I}_{N}}^{h}(\rho,x),
s_{\mathfrak{I}_{N}}(\rho,x)$ are linearly independent. Finally, if $u$ is a
solution of (\ref{Schrwithdelta}), by Remark \ref{remarkunicityofsol}, $u$ can
be written as $u(x)=u(0)c_{\mathfrak{I}_{N}}^{h}(\rho,x)+u^{\prime
}(0)s_{\mathfrak{I}_{N}}(\rho,x)$. In this way, $\left\{ c_{\mathfrak{I}_{N}%
}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right\} $ is a fundamental set of
solutions for (\ref{Schrwithdelta}).
\end{remark}
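For $q\equiv0$, $N=1$, $h=0$ the constancy of the Wronskian past the interaction point can be verified numerically from the closed form of Example \ref{beginexample1}. A Python sketch (helper names are ours):

```python
import numpy as np

def e_de(rho, x, x1, a1):
    # e^0_{I_1}(rho, x) and its x-derivative, for x > x1 (q == 0)
    e = np.exp(1j*rho*x) + a1*np.exp(1j*rho*x1)*np.sin(rho*(x - x1))/rho
    de = 1j*rho*np.exp(1j*rho*x) + a1*np.exp(1j*rho*x1)*np.cos(rho*(x - x1))
    return e, de

def wronskian_cs(rho, x, x1, a1):
    # c = (e(rho) + e(-rho))/2,  s = (e(rho) - e(-rho))/(2 i rho)
    ep, dep = e_de(rho, x, x1, a1)
    em, dem = e_de(-rho, x, x1, a1)
    c, dc = (ep + em)/2, (dep + dem)/2
    s, ds = (ep - em)/(2j*rho), (dep - dem)/(2j*rho)
    return c*ds - dc*s          # should equal 1 for every x in (x1, b)
```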
Similarly to the case of the regular Eq. (\ref{regularSch}) (see \cite[Ch.
1]{marchenko}), from (\ref{transmutationgeneral}) we obtain the following representations.
\begin{proposition}
The solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}%
}(\rho,x)$ admit the following integral representations
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho, x) & = \cos(\rho x)+\int_{0}^{x}
G_{\mathfrak{I}_{N}}^{h}(x,t)\cos(\rho t)dt,\label{transmcosinegral}\\
s_{\mathfrak{I}_{N}}(\rho,x) & = \frac{\sin(\rho x)}{\rho}+\int_{0}%
^{x}S_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho t)}{\rho}%
dt,\label{transmsinegral}%
\end{align}
where
\begin{align}
G_{\mathfrak{I}_{N}}^{h}(x,t) & = K_{\mathfrak{I}_{N}}^{h}%
(x,t)+K_{\mathfrak{I}_{N}}^{h}(x,-t),\label{cosineintegralkernelgral}\\
S_{\mathfrak{I}_{N}}(x,t) & = K_{\mathfrak{I}_{N}}^{h}(x,t)-K_{\mathfrak{I}%
_{N}}^{h}(x,-t).\label{sineintegralkernelgral}%
\end{align}
\end{proposition}
\begin{remark}
\label{remarkGoursatcosinesine} By Proposition \ref{propGoursat}, the cosine
and sine integral transmutation kernels satisfy the conditions
\begin{equation}
G_{\mathfrak{I}_{N}}^{h}(x,x)= h+\frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) ,
\end{equation}
\begin{equation}
S_{\mathfrak{I}_{N}}(x,x)= \frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \quad\mbox{and }\;\; S_{\mathfrak{I}_{N}%
}(x,0)=0.
\end{equation}
Introducing the cosine and sine transmutation operators
\begin{equation}
\label{cosineandsinetransop}\mathbf{T}_{\mathfrak{I}_{N},h}^{C}u(x)=
u(x)+\int_{0}^{x}G_{\mathfrak{I}_{N}}^{h}(x,t)u(t)dt, \quad\mathbf{T}%
_{\mathfrak{I}_{N}}^{S}u(x)= u(x)+\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)u(t)dt
\end{equation}
we obtain
\begin{equation}
\label{transmutedcosineandsinesol}c_{\mathfrak{I}_{N}}^{h}(\rho,x)=\mathbf{T}%
_{\mathfrak{I}_{N},h}^{C}\left[ \cos(\rho x)\right] , \quad s_{\mathfrak{I}%
_{N}}(\rho,x) = \mathbf{T}_{\mathfrak{I}_{N}}^{S}\left[ \frac{\sin(\rho
x)}{\rho}\right] .
\end{equation}
\end{remark}
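For $q\equiv0$, $N=1$, $h=0$ relation (\ref{transmutedcosineandsinesol}) can be checked numerically with the kernel of Example \ref{beginexample1}, for which $G_{\mathfrak{I}_{1}}^{0}(x,t)=\frac{\alpha_{1}}{2}H(x-x_{1})\left(\chi_{[2x_{1}-x,x]}(t)+\chi_{[2x_{1}-x,x]}(-t)\right)$. A Python sketch (trapezoid quadrature; illustrative only):

```python
import numpy as np

def c_closed(rho, x, x1, a1):
    # c^0_{I_1}(rho, x) = (e^0_{I_1}(rho, x) + e^0_{I_1}(-rho, x))/2, x > x1
    return np.cos(rho*x) + a1*np.cos(rho*x1)*np.sin(rho*(x - x1))/rho

def c_via_kernel(rho, x, x1, a1, n=100001):
    # cos(rho x) + int_0^x G(x,t) cos(rho t) dt, trapezoid rule;
    # note the two indicators may overlap, so they are summed as floats
    t = np.linspace(0.0, x, n)
    ind = ((t >= 2*x1 - x) & (t <= x)).astype(float) \
        + ((-t >= 2*x1 - x) & (-t <= x)).astype(float)
    y = (a1/2) * ind * np.cos(rho*t)
    integral = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))
    return np.cos(rho*x) + integral
```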
\begin{remark}
\label{remarkwronskian} According to Remark \ref{remarksinecosinefunda}, the
space of solutions of (\ref{Schrwithdelta}) has dimension 2, and given
$f,g\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $
solutions of (\ref{Schrwithdelta}), repeating the same procedure of Remark
\ref{remarksinecosinefunda}, $W[f,g]$ is constant in the whole segment
$[0,b]$. The solutions $f,g$ are a fundamental set of solutions iff
$W[f,g]\neq0$.
\end{remark}
\section{The SPPS method and the mapping property}
\subsection{Spectral parameter powers series}
As in the case of the regular Schr\"odinger equation
\cite{blancarte,sppsoriginal}, we obtain a representation for the solutions of
(\ref{Schrwithdelta}) as a power series in the spectral parameter (SPPS
series). Assume that there exists a solution $f\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ that does not vanish in the whole
segment $[0,b]$.
\begin{remark}
\label{remarknonhomeq} Given $g\in L_{2}(0,b)$, a solution $u\in
\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ of the
non-homogeneous Cauchy problem
\begin{equation}
\label{nonhomoCauchy}%
\begin{cases}
\mathbf{L}_{q,\mathfrak{I}_{N}}u(x)= g(x), \quad0<x<b,\\
u(0)=u_{0}, \; u^{\prime}(0)=u_{1}%
\end{cases}
\end{equation}
can be obtained by solving the regular equation $\mathbf{L}_{q}u(x)=g(x)$ a.e.
$x\in(0,b)$ as follows. Consider the Polya factorization $\mathbf{L}%
_{q}u=-\frac{1}{f}Df^{2}D\frac{u}{f}$, where $D=\frac{d}{dx}$. A direct
computation shows that $u$ given by
\begin{equation}
\label{solutionnonhomogeneouseq}u(x)= -f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}%
\int_{0}^{t} f(s)g(s)\,ds\,dt+\frac{u_{0}}{f(0)}f(x)+(f(0)u_{1}-f^{\prime}%
(0)u_{0})f(x)\int_{0}^{x}\frac{dt}{f^{2}(t)}%
\end{equation}
satisfies (\ref{nonhomoCauchy}) (actually, $f(x)\int_{0}^{x}\frac{1}{f^{2}%
(t)}dt$ is the second linearly independent solution of $\mathbf{L}_{q}u=0$
obtained from $f$ by Abel's formula). By Remark \ref{remarkidealdomain},
$u\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{J}_{N}}\right) $ and by
Proposition \ref{propregular} and Remark \ref{remarkunicityofsol}, formula
(\ref{solutionnonhomogeneouseq}) provides the unique solution of
(\ref{nonhomoCauchy}). Actually, if we denote $\mathcal{I}u(x):= \int_{0}%
^{x}u(t)dt$ and define $\mathbf{R}_{\mathfrak{I}_{N}}^{f}:= -f\,\mathcal{I}%
\,\frac{1}{f^{2}}\,\mathcal{I}\,f$, then $\mathbf{R}_{\mathfrak{I}_{N}}^{f}\in\mathcal{B}\left(
L_{2}(0,b)\right) $, $\mathbf{R}_{\mathfrak{I}_{N}}^{f}\left( L_{2}%
(0,b)\right) \subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $ and is a right-inverse for $\mathbf{L}_{q,\mathfrak{I}_{N}}$, i.e.,
$\mathbf{L}_{q,\mathfrak{I}_{N}}\mathbf{R}_{\mathfrak{I}_{N}}^{f} g=g$ for all
$g\in L_{2}(0,b)$.
\end{remark}
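Formula (\ref{solutionnonhomogeneouseq}) is easy to test numerically in a regular model: for $f(x)=\cosh x$ (so $q\equiv1$ and $\mathbf{L}_{q}f=0$), $g\equiv1$ and $u_{0}=u_{1}=0$ it gives $u(x)=1-\cosh x$, which satisfies $-u''+u=1$, $u(0)=u'(0)=0$. A Python sketch using the trapezoid rule (function names are ours):

```python
import numpy as np

def cumtrap(y, t):
    # cumulative trapezoid-rule integral of y on the grid t
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))])

def solve_nonhom(f, df0, g, x, u0, u1, n=4001):
    # evaluate formula (solutionnonhomogeneouseq) at the point x
    t = np.linspace(0.0, x, n)
    inner = cumtrap(f(t) * g(t), t)          # int_0^t f(s) g(s) ds
    I1 = cumtrap(inner / f(t)**2, t)[-1]     # outer iterated integral at x
    I2 = cumtrap(1.0 / f(t)**2, t)[-1]       # int_0^x dt / f(t)^2
    return -f(x)*I1 + (u0/f(0))*f(x) + (f(0)*u1 - df0*u0)*f(x)*I2
```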
Following \cite{sppsoriginal} we define the following recursive integrals:
$\widetilde{X}^{(0)}\equiv X^{(0)} \equiv1$, and for $k\in\mathbb{N}$
\begin{align}
\widetilde{X}^{(k)}(x) & := k\int_{0}^{x}\widetilde{X}^{(k-1)}(s)\left(
f^{2}(s)\right) ^{(-1)^{k-1}}ds,\\
X^{(k)}(x) & := k\int_{0}^{x}X^{(k-1)}(s)\left( f^{2}(s)\right) ^{(-1)^{k}%
}ds.
\end{align}
The functions $\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ defined by
\begin{equation}
\label{formalpowers}\varphi_{f}^{(k)}(x):=
\begin{cases}
f(x)\widetilde{X}^{(k)}(x),\quad\mbox{if } k \mbox{ even},\\
f(x)X^{(k)}(x),\quad\mbox{if } k \mbox{ odd},
\end{cases}
\end{equation}
for $k\in\mathbb{N}_{0}$ are called the \textit{formal powers} associated with
$f$. Additionally, we introduce the following auxiliary formal powers
$\{\psi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ given by
\begin{equation}
\label{auxiliaryformalpowers}\psi_{f}^{(k)}(x):=
\begin{cases}
\frac{\widetilde{X}^{(k)}(x)}{f(x)},\quad\mbox{if } k \mbox{ odd},\\
\frac{X^{(k)}(x)}{f(x)},\quad\mbox{if } k \mbox{ even}.
\end{cases}
\end{equation}
\begin{remark}
\label{remarkformalpowers} For each $k \in\mathbb{N}_{0}$, $\varphi_{f}%
^{(k)}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $.
Indeed, direct computations show that the following relations hold for all
$k\in\mathbb{N}_{0}$:
\begin{align}
D\varphi_{f}^{(k)} & = \frac{f^{\prime}}{f}\varphi_{f}^{(k)}+k\psi
_{f}^{(k-1)}\label{formalpowersderivative}\\
D^{2}\varphi_{f}^{(k)} & = \frac{f^{\prime\prime}}{f}\varphi_{f}%
^{(k)}+k(k-1)\varphi_{f}^{(k-2)}\label{Lbasisproperty}%
\end{align}
Since $\varphi_{f}^{(k)}, \psi_{f}^{(k)}\in C[0,b]$, using the procedure from
Remark \ref{remarkidealdomain} and (\ref{formalpowersderivative}) we obtain
$\varphi_{f}^{(k)}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $.
\end{remark}
\begin{theorem}
[SPPS method]\label{theoremspps} Suppose that $f\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ is a solution of (\ref{Schrwithdelta}%
) that does not vanish in the whole segment $[0,b]$. Then the functions
\begin{equation}
\label{SPPSseries}u_{0}(\rho,x)= \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\varphi_{f}^{(2k)}(x)}{(2k)!}, \quad u_{1}(\rho,x)= \sum_{k=0}^{\infty
}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k+1)}(x)}{(2k+1)!}%
\end{equation}
belong to $\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $, and
$\{u_{0}(\rho,x), u_{1}(\rho,x)\}$ is a fundamental set of solutions for
(\ref{Schrwithdelta}) satisfying the initial conditions
\begin{align}
u_{0}(\rho,0)=f(0), & u^{\prime}_{0}(\rho,0)=f^{\prime}%
(0),\label{initialspps0}\\
u_{1}(\rho,0)=0, & u^{\prime}_{1}(\rho,0)=\frac{1}{f(0)}%
.\label{initialspps1}%
\end{align}
The series in (\ref{SPPSseries}) converge absolutely and uniformly on
$x\in[0,b]$, the series of the derivatives converge in $L_{2}(0,b)$ and the
series of the second derivatives converge in $L_{2}(x_{j},x_{j+1})$, $j=0,
\cdots, N$. With respect to $\rho$ the series converge absolutely and
uniformly on any compact subset of the complex $\rho$-plane.
\end{theorem}
\begin{proof}
Since $f\in C[0,b]$, the following estimates for the recursive integrals
$\{\widetilde{X}^{(k)}(x)\}_{k=0}^{\infty}$ and $\{X^{(k)}(x)\}_{k=0}^{\infty
}$ are known:
\begin{equation}
\label{auxiliarestimatesspps}|\widetilde{X}^{(n)}(x)|\leqslant M_{1}^{n}
b^{n}, \; |X^{(n)}(x)|\leqslant M_{1}^{n} b^{n}\quad\mbox{for all } x\in[0,b],
\end{equation}
where $M_{1}=\|f^{2}\|_{C[0,b]}\cdot\left\| \frac{1}{f^{2}}\right\| _{C[0,b]}$
(see the proof of Theorem 1 of \cite{sppsoriginal}). Thus, by the Weierstrass
$M$-tests, the series in (\ref{SPPSseries}) converge absolutely and uniformly
on $x\in[0,b]$, and for $\rho$ on any compact subset of the complex $\rho
$-plane. We prove that $u_{0}(\rho,x)\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ and is a solution of (\ref{Schrwithdelta}) (the
proof for $u_{1}(\rho,x)$ is analogous). By Remark \ref{remarkformalpowers},
the series of the derivatives of $u_{0}(\rho,x)$ is given by $\frac{f^{\prime
}}{f}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}}{(2k)!}%
+\sum_{k=1}^{\infty}\frac{(-1)^{k}\rho^{2k}\psi_{f}^{(2k-1)}}{(2k-1)!}$. By
(\ref{auxiliarestimatesspps}), the series involving the formal powers
$\varphi_{f}^{(k)}$ and $\psi_{f}^{(k)}$ converge absolutely and uniformly on
$x\in[0,b]$. Hence, $\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}D\varphi
_{f}^{(2k)}(x)}{(2k)!}$ converges in $L_{2}(0,b)$. Due to \cite[Prop.
3]{blancarte}, $u_{0}(\rho,\cdot)\in AC[0,b]$ and $u_{0}^{\prime}%
(\rho,x)=\frac{f^{\prime}(x)}{f(x)}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\varphi_{f}^{(2k)}}{(2k)!}+\sum_{k=1}^{\infty}\frac{(-1)^{k}\rho^{2k}%
\psi_{f}^{(2k-1)}}{(2k-1)!}$ in $L_{2}(0,b)$. Since the series involving the
formal powers define continuous functions, $u_{0}(\rho,x)$ satisfies the
jump condition (\ref{jumpderivative}). Applying the same reasoning it is shown
that $u_{0}^{\prime\prime}(\rho,x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}D^{2}\varphi_{f}^{(2k)}}{(2k)!}$, the series converges in $L_{2}%
(x_{j},x_{j+1})$ and $u_{0}(\rho,\cdot)|_{(x_{j},x_{j+1})}\in H^{2}%
(x_{j},x_{j+1})$, $j=0, \dots, N$.
Since $\widetilde{X}^{(n)}(0)=0$ for $n\geqslant1$, we have
(\ref{initialspps0}). Finally, by (\ref{Lbasisproperty})
\begin{align*}
\mathbf{L}_{q}u_{0}(\rho,x) & = \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\mathbf{L}_{q}\varphi_{f}^{(2k)}(x)}{(2k)!}=\sum_{k=1}^{\infty}%
\frac{(-1)^{k+1}\rho^{2k}\varphi_{f}^{(2k-2)}(x)}{(2k-2)!}\\
& = \rho^{2}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}%
(x)}{(2k)!}=\rho^{2}u_{0}(\rho,x),
\end{align*}
for a.e. $x\in(x_{j},x_{j+1})$, $j=0, \dots, N$.
Using (\ref{initialspps0}) and (\ref{initialspps1}) we obtain $W[u_{0}%
(\rho,x),u_{1}(\rho,x)](0)=1$. Since the Wronskian is constant (Remark
\ref{remarkwronskian}), $\{u_{0}(\rho,x), u_{1}(\rho,x)\}$ is a fundamental
set of solutions.
\end{proof}
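In the unperturbed case ($q\equiv0$, no point interactions, $f\equiv1$) the formal powers are $\varphi_{f}^{(k)}(x)=x^{k}$ and the SPPS series reduce to the Maclaurin series of $\cos(\rho x)$ and $\sin(\rho x)/\rho$. A minimal numerical sketch of the truncated series (our illustration; the truncation level \texttt{K} is an arbitrary choice):

```python
import math

def u0(rho, x, K=25):
    # truncated sum (-1)^k rho^(2k) phi^(2k)(x) / (2k)!  with phi^(2k) = x^(2k)
    return sum((-1) ** k * rho ** (2 * k) * x ** (2 * k) / math.factorial(2 * k)
               for k in range(K + 1))

def u1(rho, x, K=25):
    # truncated sum (-1)^k rho^(2k) phi^(2k+1)(x) / (2k+1)!  with phi^(2k+1) = x^(2k+1)
    return sum((-1) ** k * rho ** (2 * k) * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(K + 1))
```

For $\rho=2$, $x=1$ the truncated series agree with $\cos2$ and $\frac{1}{2}\sin2$ to machine precision.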
\subsection{Existence and construction of the non-vanishing solution}
The existence of a non-vanishing solution is well known for the case of a
regular Schr\"odinger equation with continuous potential (see \cite[Remark
5]{sppsoriginal} and \cite[Cor. 2.3]{camporesi}). The following proof adapts
the one presented in \cite[Prop. 2.9]{nelsonspps} for the Dirac system.
\begin{proposition}
[Existence of non-vanishing solutions]\label{propnonvanishingsol} Let
$\{u,v\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ be a
fundamental set of solutions for (\ref{Schrwithdelta}). Then there exist
constants $c_{1},c_{2}\in\mathbb{C}$ such that the solution $f=c_{1}u+c_{2}v$
does not vanish in the whole segment $[0,b]$.
\end{proposition}
\begin{proof}
Let $\{u,v\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $
be a fundamental set of solutions for (\ref{Schrwithdelta}). Then $u$ and $v$
cannot have common zeros in $[0,b]$. Indeed, if $u(\xi)=v(\xi)=0$ for some
$\xi\in[0,b]$, then $W[u,v](\xi+)=u(\xi)v^{\prime}(\xi+)-v(\xi)u^{\prime}%
(\xi+)=0$. Since $W[u,v]$ is constant in $[0,b]$, this contradicts that
$\{u,v\}$ is a fundamental system.
This implies that in each interval $[x_{j},x_{j+1}]$, $j=0, \cdots, N$, the
map $F_{j}: [x_{j}, x_{j+1}]\rightarrow\mathbb{CP}^{1}$, $F_{j}(x):=\left[
u|_{[x_{j}, x_{j+1}]}(x) :v|_{[x_{j}, x_{j+1}]}(x)\right] $ (where
$\mathbb{CP}^{1}$ is the complex projective line, i.e., the quotient of
$\mathbb{C}^{2}\setminus\{(0,0)\}$ under the action of $\mathbb{C}^{*}$, and
$[a:b]$ denotes the equivalence class of the pair $(a,b)$) is well defined and
differentiable. In \cite[Prop. 2.2]{camporesi} it was established that a
differentiable function $f: I\rightarrow\mathbb{CP}^{1}$, where $I\subset
\mathbb{R}$ is an interval, is never surjective, using that Sard's theorem
implies that $f(I)$ has measure zero.
Suppose that $(\alpha,\beta)\in\mathbb{C}^{2}\setminus\{(0,0)\}$ is such that
$\alpha u(\xi)+\beta v(\xi)=0$ for some $\xi\in[0,b]$. Hence $%
\begin{vmatrix}
u(\xi) & -\beta\\
v(\xi) & \alpha
\end{vmatrix}
=0$, that is, $(u(\xi), v(\xi))$ and $(-\beta,\alpha)$ are proportional. Since
$\xi\in[x_{j},x_{j+1}]$ for some $j\in\{0,\cdots, N\}$, it follows that
$[-\beta:\alpha]\in F_{j}\left( [x_{j}, x_{j+1}]\right) $.
Thus, the set $C:= \left\{ [\alpha:\beta]\in\mathbb{CP}^{1}\, |\,\exists
\xi\in[0,b] \,:\, \alpha u(\xi)+\beta v(\xi)=0\right\} $ is the preimage of
\newline$\cup_{j=0}^{N}F_{j}\left( [x_{j}, x_{j+1}]\right) $ under the
diffeomorphism $[\alpha:\beta]\mapsto[-\beta:\alpha]$ of $\mathbb{CP}^{1}$,
and then $C$ has measure zero. Hence we can obtain a pair of constants
$(c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{ (0,0)\}$ with $[c_{1}:c_{2}%
]\in\mathbb{CP}^{1}\setminus C$, and $f=c_{1}u+c_{2}v$ does not vanish in the
whole segment $[0,b]$.
\end{proof}
\begin{remark}
If $q$ is real valued and $\alpha_{1}, \cdots, \alpha_{N}\in\mathbb{R}%
\setminus\{0\}$, taking a real-valued fundamental system of solutions for the
regular equation $\mathbf{L}_{q} y=0$ and using formula
(\ref{generalsolcauchy}), we can obtain a real-valued fundamental set of
solutions $\{u,v\}$ for $\mathbf{L}_{q,\mathfrak{I}_{N}}y=0$. In the proof of
Proposition \ref{propnonvanishingsol} we obtain that $u$ and $v$ have no
common zeros. Hence $f=u+iv$ is a non-vanishing solution.
In the complex case, we can choose a pair of constants $(c_{1}%
,c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}$ at random and check whether the
linear combination $c_{1}u+c_{2}v$ has a zero. If it does, we repeat the
process until a non-vanishing solution is found. Since the set $C$ (from the
proof of Proposition \ref{propnonvanishingsol}) has measure zero, suitable
coefficients $c_{1},c_{2}$ are almost surely found within the first few tries.
\end{remark}
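The random search can be sketched numerically. For illustration we take the fundamental system $u=\cos x$, $v=\sin x$ of $y''+y=0$ on $[0,\pi]$ (each of which vanishes somewhere on that segment); the grid minimum is a numerical proxy for non-vanishing, and the threshold and seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, np.pi, 2001)
u, v = np.cos(x), np.sin(x)          # each function vanishes on [0, pi]

found = False
for tries in range(1, 101):
    # random complex pair (c1, c2)
    c1, c2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    if np.abs(c1 * u + c2 * v).min() > 1e-3:   # no zero detected on the grid
        found = True
        break
```

In line with the measure-zero argument, the loop typically exits within the first few tries.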
By Proposition \ref{propnonvanishingsol}, there exists a pair of constants
$(c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}$ such that
\begin{align}
y_{0}(x) & = c_{1}+c_{2}x+\sum_{k=1}^{N}\alpha_{k}(c_{1}+c_{2}x_{k}%
)H(x-x_{k})(x-x_{k})\nonumber\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}(c_{1}+c_{2}x_{j_{1}%
})H(x-x_{j_{|J|}})\left( \prod_{l=1}^{|J|-1}(x_{j_{l+1}}-x_{j_{1}})\right)
(x-x_{j_{|J|}})\label{nonvanishingsolqzero}%
\end{align}
is a non-vanishing solution of (\ref{Schrwithdelta}) for $\rho=0$ (if
$\alpha_{1}, \dots,\alpha_{N} \in(0,\infty)$, it is enough to take $c_{1}%
=1$, $c_{2}=0$). Below we give a procedure based on the SPPS method
(\cite{blancarte,sppsoriginal}) to obtain the non-vanishing solution $f$ from
$y_{0}$.
\begin{theorem}
Define the recursive integrals $\{Y^{(k)}\}_{k=0}^{\infty}$ and $\{\tilde
{Y}^{(k)}\}_{k=0}^{\infty}$ as follows: $Y^{(0)}\equiv\tilde{Y}^{(0)}\equiv1$,
and for $k\geqslant1$
\begin{align}
Y^{(k)}(x) & =
\begin{cases}
\int_{0}^{x} Y^{(k-1)}(s)q(s)y_{0}^{2}(s)ds, & \mbox{ if } k \mbox{ is even},\\
\int_{0}^{x} \frac{Y^{(k-1)}(s)}{y_{0}^{2}(s)}ds, & \mbox{ if } k
\mbox{ is odd},
\end{cases}
\\
\tilde{Y}^{(k)}(x) & =
\begin{cases}
\int_{0}^{x} \tilde{Y}^{(k-1)}(s)q(s)y_{0}^{2}(s)ds, & \mbox{ if } k
\mbox{ is odd},\\
\int_{0}^{x} \frac{\tilde{Y}^{(k-1)}(s)}{y_{0}^{2}(s)}ds, & \mbox{ if } k
\mbox{ is even}.
\end{cases}
\end{align}
Define
\begin{equation}
\label{sppsforparticularsol}f_{0}(x)= y_{0}(x)\sum_{k=0}^{\infty}\tilde
{Y}^{(2k)}(x), \qquad f_{1}(x) = y_{0}(x)\sum_{k=0}^{\infty}Y^{(2k+1)}(x).
\end{equation}
Then $\{f_{0},f_{1}\}\subset\mathcal{D}_{2}\left( \mathbf{L}_{q,
\mathfrak{I}_{N}}\right) $ is a fundamental set of solutions for $\mathbf{L}%
_{q,\mathfrak{I}_{N}}u=0$ satisfying the initial conditions $f_{0}(0)=c_{1}$,
$f^{\prime}_{0}(0)=c_{2}$, $f_{1}(0)=0$, $f^{\prime}_{1}(0)=1$. Both series
converge uniformly and absolutely on $x\in[0,b]$. The series of the
derivatives converge in $L_{2}(0,b)$, and on each interval $[x_{j},x_{j+1}]$,
$j=0, \dots, N$, the series of the second derivatives converge in $L_{2}%
(x_{j}, x_{j+1})$. Hence there exist constants $C_{1},C_{2}\in\mathbb{C}$ such
that $f=C_{1}f_{0}+C_{2}f_{1}$ is a non-vanishing solution of $\mathbf{L}_{q,
\mathfrak{I}_{N}}u=0$ in $[0,b]$.
\end{theorem}
\begin{proof}
Using the estimates
\[
|\tilde{Y}^{(2k-j)}(x)|\leqslant\frac{M_{1}^{k-j}M_{2}^{k}}{(k-j)!k!},
\quad|Y^{(2k-j)}(x)|\leqslant\frac{M_{1}^{k}M_{2}^{k-j}}{k!(k-j)!}, \quad
x\in[0,b], \; j=0,1,\; k \in\mathbb{N},
\]
where $M_{1}= \left\| \frac{1}{y_{0}^{2}}\right\| _{L_{1}(0,b)}$ and $M_{2}=
\|qy_{0}^{2}\|_{L_{1}(0,b)}$, from \cite[Prop. 5]{blancarte}, the series in
(\ref{sppsforparticularsol}) converge absolutely and uniformly on $[0,b]$. The
proof of the convergence of the derivatives and that $\{f_{0},f_{1}%
\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q, \mathfrak{I}_{N}}\right) $ is a
fundamental set of solutions is analogous to that of Theorem \ref{theoremspps}
(see also \cite[Th. 1]{sppsoriginal} and \cite[Th. 7]{blancarte} for the
proof in the regular case).
\end{proof}
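As a sanity check of the recursion, in the regular case $N=0$ with the test choice $q\equiv1$ and seed solution $y_{0}\equiv1$ (both choices are ours, for illustration) the series produce $f_{0}=\cosh x$ and $f_{1}=\sinh x$. A grid-based sketch with trapezoidal quadrature:

```python
import numpy as np

def cumtrapz(y, x):
    # cumulative trapezoidal integral int_0^x y(s) ds on the grid x
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

x = np.linspace(0.0, 1.0, 4001)
q = np.ones_like(x)          # test potential (our choice)
y0 = np.ones_like(x)         # non-vanishing seed solution

Yt, Y = np.ones_like(x), np.ones_like(x)
f0, f1 = y0.copy(), np.zeros_like(x)
for k in range(1, 31):
    # Y~^(k): integrand q*y0^2 for odd k, 1/y0^2 for even k
    Yt = cumtrapz(Yt * q * y0 ** 2, x) if k % 2 == 1 else cumtrapz(Yt / y0 ** 2, x)
    # Y^(k): integrand 1/y0^2 for odd k, q*y0^2 for even k
    Y = cumtrapz(Y / y0 ** 2, x) if k % 2 == 1 else cumtrapz(Y * q * y0 ** 2, x)
    if k % 2 == 0:
        f0 += y0 * Yt        # f0 = y0 * (sum of Y~^(2k))
    else:
        f1 += y0 * Y         # f1 = y0 * (sum of Y^(2k+1))
```

Here the recursion is just repeated integration, so $f_{0}$ and $f_{1}$ converge to the hyperbolic cosine and sine.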
\subsection{The mapping property}
Take a non-vanishing solution $f\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ normalized at zero, i.e., $f(0)=1$, and set
$h=f^{\prime}(0)$. Then the corresponding transmutation operator and kernel
$\mathbf{T}^{h}_{\mathfrak{I}_{N}}$ and $K_{\mathfrak{I}_{N}}^{h}(x,t)$ will
be denoted by $\mathbf{T}^{f}_{\mathfrak{I}_{N}}$ and $K_{\mathfrak{I}_{N}%
}^{f}(x,t)$ and called the \textit{canonical} transmutation operator and
kernel associated to $f$, respectively (the same notation is used for the
cosine and sine transmutations).
\begin{theorem}
\label{theoretransprop} The canonical transmutation operator $\mathbf{T}%
^{f}_{\mathfrak{I}_{N}}$ satisfies the following relations
\begin{equation}
\label{transmproperty}\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ x^{k}\right] =
\varphi_{f}^{(k)}(x) \qquad\forall k\in\mathbb{N}_{0}.
\end{equation}
The canonical cosine and sine transmutation operators satisfy the relations
\begin{align}
\mathbf{T}_{\mathfrak{I}_{N},f}^{C}\left[ x^{2k}\right] & = \varphi
_{f}^{(2k)}(x) \qquad\forall k\in\mathbb{N}_{0},\label{transmpropertycos}\\
\mathbf{T}_{\mathfrak{I}_{N},f}^{S}\left[ x^{2k+1}\right] & = \varphi
_{f}^{(2k+1)}(x) \qquad\forall k\in\mathbb{N}_{0}.\label{transmpropertysin}%
\end{align}
\end{theorem}
\begin{proof}
Consider the solution $e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ with $h=f^{\prime
}(0)$. By the conditions (\ref{initialspps0}) and (\ref{initialspps1}), the
solution $e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ can be written in the form
\begin{align}
e_{\mathfrak{I}_{N}}^{h}(\rho,x) & = u_{0}(\rho,x)+i\rho u_{1}%
(\rho,x)\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}(x)}%
{(2k)!}+\sum_{k=0}^{\infty}\frac{i(-1)^{k}\rho^{2k+1}\varphi_{f}^{(2k+1)}%
(x)}{(2k+1)!}\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(i\rho)^{2k}\varphi_{f}^{(2k)}(x)}{(2k)!}%
+\sum_{k=0}^{\infty}\frac{(i\rho)^{2k+1}\varphi_{f}^{(2k+1)}(x)}%
{(2k+1)!}\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(i\rho)^{k}\varphi_{f}^{(k)}(x)}{k!}%
\label{auxseriesthmap}%
\end{align}
(The rearrangement of the series is due to absolute and uniform convergence,
Theorem \ref{theoremspps}). On the other hand
\[
e_{\mathfrak{I}_{N}}^{h}(\rho,x) = \mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[
e^{i\rho x}\right] = \mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ \sum
_{k=0}^{\infty}\frac{(i\rho)^{k}x^{k}}{k!} \right]
\]
Note that $\displaystyle \int_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)\left(
\sum_{k=0}^{\infty}\frac{(i\rho)^{k}t^{k}}{k!} \right) dt= \sum_{k=0}^{\infty
}\frac{(i\rho)^{k}}{k!}\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)t^{k}dt$, due
to the uniform convergence of the exponential series in the variable $t
\in[-x,x]$. Thus,
\begin{equation}
\label{auxiliarseriesthmap}e_{\mathfrak{I}_{N}}^{h}(\rho,x) = \sum
_{k=0}^{\infty}\frac{(i\rho)^{k}\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[
x^{k}\right] }{k!}.
\end{equation}
Comparing (\ref{auxiliarseriesthmap}) and (\ref{auxseriesthmap}) as Taylor
series in the complex variable $\rho$ we obtain (\ref{transmproperty}).
Relations (\ref{transmpropertycos}) and (\ref{transmpropertysin}) follow from
(\ref{transmproperty}), (\ref{cosineintegralkernelgral}),
(\ref{sineintegralkernelgral}) and the fact that $G_{\mathfrak{I}_{N}}%
^{f}(x,t)$ and $S_{\mathfrak{I}_{N}}(x,t)$ are even and odd in the variable
$t$, respectively.
\end{proof}
\begin{remark}
\label{remarkformalpowersasymp} The formal powers $\{\varphi_{f}%
^{(k)}(x)\}_{k=0}^{\infty}$ satisfy the asymptotic relation\newline%
$\varphi_{f}^{(k)}(x)=x^{k}(1+o(1))$, $x\rightarrow0^{+}$, $\forall
k\in\mathbb{N}$.
Indeed, by Theorem \ref{theoretransprop} and the Cauchy-Bunyakovsky-Schwarz
inequality we have
\begin{align*}
|\varphi_{f}^{(k)}(x)-x^{k}| & = \left| \int_{-x}^{x}K_{\mathfrak{I}_{N}%
}^{f}(x,t)t^{k}dt\right| \leqslant\left( \int_{-x}^{x}\left| K_{\mathfrak{I}%
_{N}}^{f}(x,t)\right| ^{2}dt\right) ^{\frac{1}{2}}\left( \int_{-x}^{x}%
|t|^{2k}dt\right) ^{\frac{1}{2}}\\
& \leqslant\sqrt{2b}\left\| K_{\mathfrak{I}_{N}}^{f}\right\| _{L_{\infty
}(\Omega)}\sqrt{\frac{2}{2k+1}}x^{k+\frac{1}{2}}%
\end{align*}
(because $K_{\mathfrak{I}_{N}}^{f}\in L_{\infty}(\Omega)$ by Theorem
\ref{thoremtransmoperator}). Hence
\[
\left| \frac{\varphi_{f}^{(k)}(x)}{x^{k}}-1\right| \leqslant\sqrt{2b}\left\|
K_{\mathfrak{I}_{N}}^{f}\right\| _{L_{\infty}(\Omega)}\sqrt{\frac{2}{2k+1}%
}x^{\frac{1}{2}} \rightarrow0, \qquad x\rightarrow0^{+}.
\]
\end{remark}
\begin{remark}
\label{remarktransmoperatorlbais} Denote $\mathcal{P}(\mathbb{R}%
)=\mbox{Span}\{x^{k}\}_{k=0}^{\infty}$. According to Remark
\ref{remarkformalpowers} and Proposition \ref{propregular} we have that
$\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( \mathcal{P}(\mathbb{R})\right)
=\mbox{Span}\left\{ \varphi_{f}^{(k)}(x)\right\} _{k=0}^{\infty}$, and by
(\ref{Lbasisproperty}) we have the relation%
\begin{equation}
\label{transmrelationdense}\mathbf{L}_{q,\mathfrak{I}_{N}}\mathbf{T}%
_{\mathfrak{I}_{N}}^{f} p = -\mathbf{T}_{\mathfrak{I}_{N}}^{f}D^{2}p
\qquad\forall p \in\mathcal{P}(\mathbb{R}).
\end{equation}
According to \cite{hugo2}, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ is a
transmutation operator for the pair $\mathbf{L}_{q,\mathfrak{I}_{N}}$,
$-D^{2}$ in the subspace $\mathcal{P}(\mathbb{R})$, and $\{\varphi_{f}%
^{(k)}(x)\}_{k=0}^{\infty}$ is an $\mathbf{L}_{q,\mathfrak{I}_{N}}$-basis.
Since $\varphi_{f}^{(k)}(0)=D\varphi_{f}^{(k)}(0)=0$ for $k\geqslant2$,
$\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ is called a \textbf{standard }
$\mathbf{L}_{q,\mathfrak{I}_{N}}$-basis, and $\mathbf{T}_{\mathfrak{I}_{N}%
}^{f}$ a standard transmutation operator. By Remark \ref{remarknonhomeq} we
can recover $\varphi_{f}^{(k)}$ for $k\geqslant2$ from $\varphi_{f}^{(0)}$ and
$\varphi_{f}^{(1)}$ by the formula
\begin{equation}
\label{recorverformalpowers}\varphi_{f}^{(k)} (x) =-k(k-1)\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\varphi_{f}^{(k-2)}(x) =k(k-1)f(x)\int_{0}^{x}\frac
{1}{f^{2}(t)}\int_{0}^{t}f(s)\varphi_{f}^{(k-2)}(s)\,ds\,dt
\end{equation}
(compare this formula with \cite[Formula (8), Remark 9]{hugo2}).
\end{remark}
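The double-integral recovery of $\varphi_{f}^{(k)}$ from $\varphi_{f}^{(k-2)}$ is easy to check numerically in the trivial case $f\equiv1$, where $\varphi_{f}^{(k)}(x)=x^{k}$ and the iterated integral of $x^{k-2}$ must reproduce $x^{k}$. A sketch under these assumptions (trapezoidal quadrature; grid size is arbitrary):

```python
import numpy as np

def cumtrapz(y, x):
    # cumulative trapezoidal integral int_0^x y(s) ds on the grid x
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

x = np.linspace(0.0, 1.0, 4001)
f = np.ones_like(x)          # trivial case: phi^(k)(x) = x^k
k = 4
inner = cumtrapz(f * x ** (k - 2), x)                  # int_0^t f(s) phi^(k-2)(s) ds
phi_k = k * (k - 1) * f * cumtrapz(inner / f ** 2, x)  # recovered phi^(k)
```

For $k=4$ the inner integral is $t^{3}/3$, the outer one $x^{4}/12$, and the prefactor $k(k-1)=12$ restores $x^{4}$.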
The following result adapts Theorem 10 from \cite{hugo2}, proved for the case
of an $L_{1}$-regular potential.
\begin{theorem}
The operator $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ is a transmutation operator
for the pair $\mathbf{L}_{q, \mathfrak{I}_{N}}$, $-D^{2}$ in $H^{2}(-b,b)$,
that is, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( H^{2}(-b,b)\right)
\subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ and
\begin{equation}
\label{transmpropertygeneral}\mathbf{L}_{q, \mathfrak{I}_{N}} \mathbf{T}%
_{\mathfrak{I}_{N}}^{f}u= -\mathbf{T}_{\mathfrak{I}_{N}}^{f}D^{2}u
\qquad\forall u\in H^{2}(-b,b).
\end{equation}
\end{theorem}
\begin{proof}
We show that
\begin{equation}
\label{auxiliareqtransmpropgen}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u(x) =
u(0)\varphi_{f}^{(0)}(x)+u^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u^{\prime\prime
}(x) \qquad\forall u\in H^{2}(-b,b).
\end{equation}
Let us first see that (\ref{auxiliareqtransmpropgen}) is valid for
$p\in\mathcal{P}(\mathbb{R})$. Indeed, set $p(x)=\sum_{k=0}^{M}c_{k} x^{k}$.
By the linearity of $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$, Theorem
\ref{theoretransprop} and (\ref{recorverformalpowers}) we have
\begin{align*}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}p(x) & = c_{0}\varphi_{f}^{(0)}%
+c_{1}\varphi_{f}^{(1)}(x)+\sum_{k=2}^{M}c_{k}\varphi_{f}^{(k)}(x)\\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\sum_{k=2}%
^{M}c_{k}k(k-1)\mathbf{R}_{\mathfrak{I}_{N}}^{f}\varphi_{f}^{(k-2)}(x)\\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\sum_{k=2}%
^{M}c_{k}k(k-1)\mathbf{R}_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}%
}^{f} \left[ x^{k-2}\right] \\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}p^{\prime\prime}(x)
\end{align*}
This establishes (\ref{auxiliareqtransmpropgen}) for $p\in\mathcal{P}%
(\mathbb{R})$. Take $u\in H^{2}(-b,b)$ arbitrary. There exists a sequence
$\{p_{n}\}\subset\mathcal{P}(\mathbb{R})$ such that $p_{n}^{(j)}%
\overset{[-b,b]}{\rightrightarrows} u^{(j)}$, $j=0,1$, and $p^{\prime\prime
}_{n}\rightarrow u^{\prime\prime}$ in $L_{2}(-b,b)$, when $n\rightarrow\infty$ (see
\cite[Prop. 4]{hugo2}). Since $\mathbf{R}_{\mathfrak{I}_{N}}^{f}%
\mathbf{T}_{\mathfrak{I}_{N}}^{f}\in\mathcal{B}\left( L_{2}(-b,b),
L_{2}(0,b)\right) $ we have
\begin{align*}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}u(x) & = \lim_{n\rightarrow\infty}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}p_{n}(x) = \lim_{n\rightarrow\infty}\left[
p_{n}(0)\varphi_{f}^{(0)}+p^{\prime}_{n}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}p^{\prime\prime}%
_{n}(x) \right] \\
& = u(0)\varphi_{f}^{(0)}(x)+u^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u^{\prime\prime}(x)
\end{align*}
and we obtain (\ref{auxiliareqtransmpropgen}). Hence, by Remark
\ref{remarknonhomeq}, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( H^{2}%
(-b,b)\right) \subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $, and since $\mathbf{L}_{q,\mathfrak{I}_{N}}\varphi_{f}^{(k)}=0$ for
$k=0,1$, applying $\mathbf{L}_{q,\mathfrak{I}_{N}}$ to both sides of
(\ref{auxiliareqtransmpropgen}) we obtain (\ref{transmpropertygeneral}).
\end{proof}
\section{Fourier-Legendre and Neumann series of Bessel functions expansions}
\subsection{Fourier-Legendre series expansion of the transmutation kernel}
Fix $x\in(0,b]$. Theorem \ref{thoremtransmoperator} establishes that
$K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$; hence $K_{\mathfrak{I}_{N}%
}^{h}(x,t)$ admits a Fourier series in terms of an orthogonal basis of
$L_{2}(-x,x)$. Following \cite{neumann}, we choose the orthogonal basis of
$L_{2}(-1,1)$ given by the Legendre polynomials $\{P_{n}(z)\}_{n=0}^{\infty}$.
Thus,
\begin{equation}
\label{FourierLegendreSeries1}K_{\mathfrak{I}_{N}}^{h}(x,t)= \sum
_{n=0}^{\infty}\frac{a_{n}(x)}{x}P_{n}\left( \frac{t}{x}\right)
\end{equation}
where
\begin{equation}
\label{FourierLegendreCoeff1}a_{n}(x)=\left( n+\frac{1}{2}\right) \int
_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)P_{n}\left( \frac{t}{x}\right)
dt\qquad\forall n\in\mathbb{N}_{0}.
\end{equation}
The series (\ref{FourierLegendreSeries1}) converges with respect to $t$ in the
norm of $L_{2}(-x,x)$. Formula (\ref{FourierLegendreCoeff1}) is obtained
multiplying (\ref{FourierLegendreSeries1}) by $P_{n}\left( \frac{t}{x}\right)
$, using the general Parseval's identity \cite[pp. 16]{akhiezer} and taking
into account that $\|P_{n}\|_{L_{2}(-1,1)}^{2}=\frac{2}{2n+1}$, $n\in
\mathbb{N}_{0}$.
\begin{example}
Consider the kernel $K_{\mathfrak{I}_{1}}^{0}(x,t)=\frac{\alpha_{1}}%
{2}H(x-x_{1})\chi_{[2x_{1}-x,x]}(t)$ from Example \ref{beginexample1}. In this
case, the Fourier-Legendre coefficients have the form
\[
a_{n}(x) = \frac{\alpha_{1}}{2}\left( n+\frac{1}{2}\right) H(x-x_{1}%
)\int_{2x_{1}-x}^{x}P_{n}\left( \frac{t}{x}\right) dt=\frac{\alpha_{1}}%
{2}\left( n+\frac{1}{2}\right) xH(x-x_{1})\int_{2\frac{x_{1}}{x}-1}^{1}%
P_{n}(t)dt.
\]
From this we obtain $a_{0}(x)=\frac{\alpha_{1}}{2}H(x-x_{1})(x-x_{1})$. Using
the formula $P_{n}(t)=\frac{1}{2n+1}\frac{d}{dt}\left( P_{n+1}(t)-P_{n-1}%
(t)\right) $ for $n\in\mathbb{N}$, and that $P_{n+1}(1)-P_{n-1}(1)=0$ for all
$n\in\mathbb{N}$, we have
\[
a_{n}(x)= \frac{\alpha_{1}}{4}xH(x-x_{1})\left[ P_{n-1}\left( \frac{2x_{1}}%
{x}-1\right) -P_{n+1}\left( \frac{2x_{1}}{x}-1\right) \right]
\]
\end{example}
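The Legendre identities used in this example can be verified numerically; a quick check with NumPy's Legendre evaluation (the values of $n$ and $z$ below are arbitrary, with $z$ playing the role of $2x_{1}/x-1$):

```python
import numpy as np
from numpy.polynomial import legendre

def P(n, t):
    # evaluate the Legendre polynomial P_n at t
    return legendre.legval(t, [0.0] * n + [1.0])

n, z = 3, -0.2
t = np.linspace(z, 1.0, 20001)
y = P(n, t)
# trapezoid approximation of int_z^1 P_n(t) dt
lhs = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
# antiderivative identity: (P_{n-1}(z) - P_{n+1}(z)) / (2n + 1)
rhs = (P(n - 1, z) - P(n + 1, z)) / (2 * n + 1)
```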
\begin{remark}
From (\ref{FourierLegendreCoeff1}) we obtain that the first coefficient
$a_{0}(x)$ is given by
\begin{align*}
a_{0}(x) & = \frac{1}{2}\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)P_{0}%
\left( \frac{t}{x}\right) dt =\frac{1}{2}\int_{-x}^{x}K_{\mathfrak{I}_{N}}%
^{h}(x,t)dt\\
& = \frac{1}{2}\mathbf{T}_{\mathfrak{I}_{N}}^{h}[1]-\frac{1}{2} = \frac{1}%
{2}(e_{\mathfrak{I}_{N}}^{h}(0,x)-1).
\end{align*}
Thus, we obtain the relations
\begin{equation}
\label{relationfirstcoeff1}a_{0}(x)= \frac{1}{2}(e_{\mathfrak{I}_{N}}%
^{h}(0,x)-1), \qquad e_{\mathfrak{I}_{N}}^{h}(0,x)= 2a_{0}(x)+1.
\end{equation}
\end{remark}
For the kernels $G_{\mathfrak{I}_{N}}^{h}(x,t)$ and $S_{\mathfrak{I}_{N}%
}(x,t)$ we obtain the series representations in terms of the even and odd
Legendre polynomials, respectively,%
\begin{align}
G_{\mathfrak{I}_{N}}^{h}(x,t) & = \sum_{n=0}^{\infty}\frac{g_{n}(x)}%
{x}P_{2n}\left( \frac{t}{x}\right) ,\label{Fourierexpansioncosine}\\
S_{\mathfrak{I}_{N}}(x,t) & = \sum_{n=0}^{\infty}\frac{s_{n}(x)}{x}%
P_{2n+1}\left( \frac{t}{x}\right) ,\label{Fourierexpansionsine}%
\end{align}
where the coefficients are given by
\begin{align}
g_{n}(x) & = 2a_{2n}(x)= (4n+1)\int_{0}^{x}G_{\mathfrak{I}_{N}}%
^{h}(x,t)P_{2n}\left( \frac{t}{x}\right)
dt,\label{Fourierexpansioncosinecoeff}\\
s_{n}(x) & = 2a_{2n+1}(x)= (4n+3)\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)P_{2n+1}%
\left( \frac{t}{x}\right) dt.\label{Fourierexpansionsinecoeff}%
\end{align}
The proof of these facts is analogous to that in the case of Eq.
(\ref{schrodingerregular}), see \cite{neumann} or \cite[Ch. 9]%
{directandinverse}.
\begin{remark}
\label{remarkfirstcoeffcosine} Since $g_{0}(x)=2a_{0}(x)$, then
$g_{0}(x)=e_{\mathfrak{I}_{N}}^{h}(0,x)-1$. Since $e_{\mathfrak{I}_{N}}%
^{h}(0,x)$ is the solution of (\ref{Schrwithdelta}) with $\rho=0$ satisfying
$e_{\mathfrak{I}_{N}}^{h}(0,0)=1$, $(e_{\mathfrak{I}_{N}}^{h})^{\prime
}(0,0)=h$, hence by Remark \ref{remarkunicityofsol}, $e_{\mathfrak{I}_{N}}%
^{h}(0,x)=c_{\mathfrak{I}_{N}}^{h}(0,x)$ and
\begin{equation}
\label{relationfirstcoeff}g_{0}(x)= c_{\mathfrak{I}_{N}}^{h}(0,x)-1.
\end{equation}
On the other hand, for the coefficient $s_{0}(x)$ we have the relation
\[
s_{0}(x)= 3\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)P_{1}\left( \frac{t}{x}\right)
dt= \frac{3}{x}\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)tdt.
\]
Since $\frac{\sin(\rho x)}{\rho}\big{|}_{\rho=0}=x$, from (\ref{transmsinegral})
we have
\begin{equation}
\label{relationcoeffs0}s_{0}(x)= 3\left( \frac{s_{\mathfrak{I}_{N}}(0,x)}%
{x}-1\right) .
\end{equation}
\end{remark}
For every $n\in\mathbb{N}_{0}$ we write the Legendre polynomial $P_{n}(z)$ in
the form $P_{n}(z)=\sum_{k=0}^{n}l_{k,n}z^{k}$. Note that if $n$ is even,
$l_{k,n}=0$ for odd $k$, and $P_{2n}(z)=\sum_{k=0}^{n}\tilde{l}_{k,n}z^{2k}$
with $\tilde{l}_{k,n}=l_{2k,2n}$. Similarly $P_{2n+1}(z)=\sum_{k=0}^{n}\hat
{l}_{k,n}z^{2k+1}$ with $\hat{l}_{k,n}=l_{2k+1,2n+1}$. With this notation we
write an explicit formula for the coefficients (\ref{FourierLegendreCoeff1})
of the canonical transmutation kernel $K_{\mathfrak{I}_{N}}^{f}(x,t)$.
\begin{proposition}
\label{propcoeffwithformalpowers} The coefficients $\{a_{n}(x)\}_{n=0}%
^{\infty}$ of the Fourier-Legendre expansion (\ref{FourierLegendreSeries1}) of
the canonical transmutation kernel $K_{\mathfrak{I}_{N}}^{f}(x,t)$ are given
by
\begin{equation}
\label{CoeffFourier1formalpowers}a_{n}(x)= \left( n+\frac{1}{2}\right) \left(
\sum_{k=0}^{n}l_{k,n}\frac{\varphi_{f}^{(k)}(x)}{x^{k}}-1\right) .
\end{equation}
The coefficients of the canonical cosine and sine kernels satisfy the
following relations for all $n\in\mathbb{N}_{0}$
\begin{align}
g_{n}(x) & = (4n+1)\left( \sum_{k=0}^{n}\tilde{l}_{k,n}\frac{\varphi
_{f}^{(2k)}(x)}{x^{2k}}-1\right) ,\label{CoeffFourierCosineformalpowers}\\
s_{n}(x) & = (4n+3)\left( \sum_{k=0}^{n}\hat{l}_{k,n}\frac{\varphi
_{f}^{(2k+1)}(x)}{x^{2k+1}}-1\right) ,\label{CoeffFourierSineformalpowers}%
\end{align}
\end{proposition}
\begin{proof}
From (\ref{FourierLegendreCoeff1}) we have
\begin{align*}
a_{n}(x) & = \left( n+\frac{1}{2}\right) \int_{-x}^{x}K_{\mathfrak{I}_{N}%
}^{f}(x,t)\left( \sum_{k=0}^{n}l_{k,n}\left( \frac{t}{x}\right) ^{k}\right)
dt\\
& = \left( n+\frac{1}{2}\right) \sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}}\int
_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)t^{k}dt\\
& = \left( n+\frac{1}{2}\right) \sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}}\left(
\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ x^{k}\right] -x^{k} \right) .
\end{align*}
Hence (\ref{CoeffFourier1formalpowers}) follows from Theorem
\ref{theoretransprop} and the fact that $P_{n}(1)=1$. Since $g_{n}%
(x)=2a_{2n}(x)$, $s_{n}(x)=2a_{2n+1}(x)$, $l_{2k+1,2n}=0$, $l_{2k,2n+1}=0$,
$l_{2k,2n}=\tilde{l}_{k,n}$ and $l_{2k+1,2n+1}=\hat{l}_{k,n}$, we obtain
(\ref{CoeffFourierCosineformalpowers}) and (\ref{CoeffFourierSineformalpowers}).
\end{proof}
\begin{remark}
\label{remarkcoefficientsaregood} By Remark \ref{remarkformalpowersasymp},
formula (\ref{CoeffFourier1formalpowers}) is well defined at $x=0$. Note that
$x^{n}a_{n}(x)$ belongs to $\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}%
_{N}}\right) $ for all $n\in\mathbb{N}_{0}$.
\end{remark}
\subsection{Representation of the solutions as Neumann series of Bessel
functions}
Similarly to the case of the regular Eq. (\ref{regularSch}) \cite{neumann}, we
obtain a representation for the solutions in terms of Neumann series of Bessel
functions (NSBF). For $M\in\mathbb{N}$ we define
\[
K_{\mathfrak{I}_{N},M}^{h}(x,t):= \sum_{n=0}^{M}\frac{a_{n}(x)}{x}P_{n}\left(
\frac{t}{x}\right) ,
\]
that is, the $M$-th partial sum of (\ref{FourierLegendreSeries1}).
\begin{theorem}
\label{ThNSBF1} The solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and
$s_{\mathfrak{I}_{N}}(\rho,x)$ admit the following NSBF representations
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho,x)= \cos(\rho x)+\sum_{n=0}^{\infty}%
(-1)^{n}g_{n}(x)j_{2n}(\rho x),\label{NSBFcosine}\\
s_{\mathfrak{I}_{N}}(\rho,x)= \frac{\sin(\rho x)}{\rho}+\frac{1}{\rho}%
\sum_{n=0}^{\infty}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x),\label{NSBFsine}%
\end{align}
where $j_{\nu}$ stands for the spherical Bessel function $j_{\nu}%
(z)=\sqrt{\frac{\pi}{2z}}J_{\nu+\frac{1}{2}}(z)$ (and $J_{\nu}$ stands for the
Bessel function of order $\nu$). The series converge pointwise with respect to
$x$ in $(0,b]$ and uniformly with respect to $\rho$ on any compact subset of
the complex $\rho$-plane. Moreover, for $M\in\mathbb{N}$ the functions
\begin{align}
c_{\mathfrak{I}_{N},M}^{h}(\rho,x)= \cos(\rho x)+\sum_{n=0}^{M}(-1)^{n}%
g_{n}(x)j_{2n}(\rho x),\label{NSBFcosineaprox}\\
s_{\mathfrak{I}_{N},M}(\rho,x)= \frac{\sin(\rho x)}{\rho}+\frac{1}{\rho}%
\sum_{n=0}^{M}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x),\label{NSBFsineaprox}%
\end{align}
obey the estimates
\begin{align}
|c_{\mathfrak{I}_{N}}^{h}(\rho,x)-c_{\mathfrak{I}_{N},M}^{h}(\rho,x)| &
\leqslant 2\epsilon_{2M}(x)\sqrt{\frac{\sinh(2bC)}{C}}%
,\label{estimatessinecosineNSBF}\\
|\rho s_{\mathfrak{I}_{N}}(\rho,x)-\rho s_{\mathfrak{I}_{N},M}(\rho,x)| &
\leqslant 2\epsilon_{2M+1}(x)\sqrt{\frac{\sinh(2bC)}{C}}%
,\label{estimatessinesineNSBF}%
\end{align}
for any $\rho\in\mathbb{C}$ belonging to the strip $|\operatorname{Im}%
\rho|\leqslant C$, $C>0$, and where\newline$\epsilon_{M}(x)=\|K_{\mathfrak{I}%
_{N}}^{h}(x,\cdot)-K_{\mathfrak{I}_{N},M}^{h}(x,\cdot)\|_{L_{2}(-x,x)}$.
\end{theorem}
\begin{proof}
We show the results for the solution $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ (the
proof for $s_{\mathfrak{I}_{N}}(\rho,x)$ is similar). Substitution of the
Fourier-Legendre series (\ref{Fourierexpansioncosine}) in
(\ref{transmcosinegral}) leads us to
\begin{align*}
c_{\mathfrak{I}_{N}}^{h}(\rho,x) & = \cos(\rho x)+\int_{0}^{x}\left(
\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}P_{2n}\left( \frac{t}{x}\right) \right)
\cos(\rho t)dt\\
& = \cos(\rho x)+\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}\int_{0}^{x}P_{2n}\left(
\frac{t}{x}\right) \cos(\rho t)dt
\end{align*}
(the exchange of the integral with the summation is due to the fact that the
integral is nothing but the inner product of the series with the function
$\overline{\cos(\rho t)}$ and the series converges in $L_{2}(0,x)$). Using
formula 2.17.7 in \cite[pp. 433]{prudnikov}
\[
\int_{0}^{a}\left\{
\begin{matrix}
P_{2n+1}\left( \frac{y}{a}\right) \cdot\sin(by)\\
P_{2n}\left( \frac{y}{a}\right) \cdot\cos(by)\\
\end{matrix}
\right\} dy= (-1)^{n}\sqrt{\frac{\pi a}{2b}}J_{2n+\delta+\frac{1}{2}}(ab),
\quad\delta=\left\{
\begin{matrix}
1\\
0
\end{matrix}
\right\} , \; a>0,
\]
we obtain the representation (\ref{NSBFcosine}). Take $C>0$ and $\rho
\in\mathbb{C}$ with $|\operatorname{Im}\rho|\leqslant C$. For $M\in\mathbb{N}$
define $G_{\mathfrak{I}_{N},M}^{h}(x,t):= K_{\mathfrak{I}_{N},2M}%
^{h}(x,t)+K_{\mathfrak{I}_{N},2M}^{h}(x,-t)= \sum_{n=0}^{M}\frac{g_{n}(x)}%
{x}P_{2n}\left( \frac{t}{x}\right) $, the $M$-th partial sum of
(\ref{Fourierexpansioncosine}). Then
\[
c_{\mathfrak{I}_{N},M}^{h}(\rho,x) = \cos(\rho x)+\int_{0}^{x}G_{\mathfrak{I}%
_{N},M}^{h}(x,t)\cos(\rho t)dt.
\]
Using the Cauchy-Bunyakovsky-Schwarz inequality we obtain
\begin{align*}
|c_{\mathfrak{I}_{N}}^{h}(\rho,x)-c_{\mathfrak{I}_{N},M}^{h}(\rho,x)| & =
\left| \int_{0}^{x}\left( G_{\mathfrak{I}_{N}}^{h}(x,t)-G_{\mathfrak{I}_{N}%
,M}^{h}(x,t)\right) \cos(\rho t)dt\right| \\
& = \left| \left\langle \overline{G_{\mathfrak{I}_{N}}^{h}%
(x,t)-G_{\mathfrak{I}_{N},M}^{h}(x,t)}, \cos(\rho t) \right\rangle
_{L_{2}(0,x)} \right| \\
& \leqslant\|G_{\mathfrak{I}_{N}}^{h}(x,\cdot)-G_{\mathfrak{I}_{N},M}%
^{h}(x,\cdot)\|_{L_{2}(0,x)}\|\cos(\rho t)\|_{L_{2}(0,x)}.
\end{align*}
Since $\|G_{\mathfrak{I}_{N}}^{h}(x,\cdot)-G_{\mathfrak{I}_{N},M}^{h}%
(x,\cdot)\|_{L_{2}(0,x)}\leqslant2\|K_{\mathfrak{I}_{N}}^{h}(x,\cdot
)-K_{\mathfrak{I}_{N},2M}^{h}(x,\cdot)\|_{L_{2}(-x,x)}$,
\begin{align*}
\int_{0}^{x}|\cos(\rho t)|^{2}dt & \leqslant\frac{1}{4}\int_{0}^{x}\left(
|e^{i\rho t}|+|e^{-i\rho t}|\right) ^{2}dt \leqslant\frac{1}{2}\int_{0}^{x}
\left( e^{-2t\operatorname{Im}\rho}+ e^{2t\operatorname{Im}\rho}\right) dt\\
& \leqslant \int_{-x}^{x}e^{-2\operatorname{Im}\rho\, t}dt = \frac{\sinh
(2x\operatorname{Im}\rho)}{\operatorname{Im}\rho}%
\end{align*}
and the function $\frac{\sinh(\xi x)}{\xi}$ is monotonically increasing in
both variables when $\xi,x\geqslant0$, we obtain
(\ref{estimatessinecosineNSBF}).
\end{proof}
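Since $j_{2n}(x)=\sqrt{\pi/(2x)}\,J_{2n+\frac{1}{2}}(x)$, the $\delta=0$ case of the Prudnikov formula used in the proof above reads $\int_{0}^{a}P_{2n}(y/a)\cos(by)\,dy=(-1)^{n}a\,j_{2n}(ab)$. The following sketch (a numerical illustration, not part of the proof) checks the cases $n=0,1$ by quadrature, using the closed forms of the spherical Bessel functions $j_0$ and $j_2$:

```python
import math

def p2(t):
    # Legendre polynomial P_2(t)
    return (3 * t * t - 1) / 2

def j0(x):
    # spherical Bessel function j_0
    return math.sin(x) / x

def j2(x):
    # spherical Bessel function j_2 (closed form)
    return (3 / x**3 - 1 / x) * math.sin(x) - 3 * math.cos(x) / x**2

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with n (even) panels
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

a, b = 1.3, 2.7
lhs0 = simpson(lambda y: math.cos(b * y), 0, a)              # n = 0: P_0 = 1
lhs1 = simpson(lambda y: p2(y / a) * math.cos(b * y), 0, a)  # n = 1
print(abs(lhs0 - a * j0(a * b)))   # ~ 0
print(abs(lhs1 + a * j2(a * b)))   # ~ 0, note the factor (-1)^1
```

The same expansion of $\cos(\rho t)$ in even Legendre polynomials is what produces the coefficients $(-1)^{n}g_{n}(x)$ of the NSBF representation.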
Given $H\in\mathbb{C}$, we look for a pair of solutions $\psi_{\mathfrak{I}%
_{N}}^{H}(\rho,x)$ and $\vartheta_{\mathfrak{I}_{N}}(\rho,x)$ of
(\ref{Schrwithdelta}) satisfying the conditions
\begin{align}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,b)=1, & (\psi_{\mathfrak{I}_{N}}%
^{H})^{\prime}(\rho,b)=-H,\label{conditionspsi}\\
\vartheta_{\mathfrak{I}_{N}}(\rho,b)=0, & \vartheta_{\mathfrak{I}_{N}%
}^{\prime}(\rho,b)=1.\label{conditionstheta}%
\end{align}
\begin{theorem}
The solutions $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ and $\vartheta
_{\mathfrak{I}_{N}}(\rho,x)$ admit the integral representations
\begin{align}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,x) & = \cos(\rho(b-x))+\int_{x}%
^{b}\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,t)\cos(\rho
(b-t))dt,\label{solpsiintrepresentation}\\
\vartheta_{\mathfrak{I}_{N}}(\rho,x) & = \frac{\sin(\rho(b-x))}{\rho}%
+\int_{x}^{b}\widetilde{S}_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho
(b-t))}{\rho}dt,\label{solthetaintrepresentation}%
\end{align}
where the kernels $\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,t)$ and
$\widetilde{S}_{\mathfrak{I}_{N}}(x,t)$ are defined in $\Omega$ and satisfy
$\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,\cdot), \widetilde{S}_{\mathfrak{I}%
_{N}}(x,\cdot) \in L_{2}(0,x)$ for all $x\in(0,b]$. In consequence, the
solutions $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ and $\vartheta_{\mathfrak{I}%
_{N}}(\rho,x)$ can be written as NSBF
\begin{equation}
\label{NSBFforpsi}\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)= \cos(\rho
(b-x))+\sum_{n=0}^{\infty}(-1)^{n}\tau_{n}(x)j_{2n}(\rho(b-x)),
\end{equation}
\begin{equation}
\label{NSBFfortheta}\vartheta_{\mathfrak{I}_{N}}(\rho,x)= \frac{\sin
(\rho(b-x))}{\rho}+\sum_{n=0}^{\infty}(-1)^{n}\zeta_{n}(x)j_{2n}(\rho(b-x)),
\end{equation}
with some coefficients $\{\tau_{n}(x)\}_{n=0}^{\infty}$ and $\{\zeta
_{n}(x)\}_{n=0}^{\infty}$.
\end{theorem}
\begin{proof}
We prove the results for $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ (the proof for
$\vartheta_{\mathfrak{I}_{N}}(\rho,x)$ is similar). Set $y(\rho,x)=
\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)$. Note that $y(\rho, 0)=1$, $y^{\prime
}(\rho,0)= H$ and for $\phi\in C_{0}^{\infty}(0,b)$ we have%
\begin{align*}
(y^{\prime\prime}(x)+\rho^{2}y(x), \phi(x))_{C_{0}^{\infty}(0,b)} & = (\psi
_{\mathfrak{I}_{N}}^{H}(\rho,x),\phi^{\prime\prime}(b-x)+\rho^{2}\phi(b-x) )_{C_{0}%
^{\infty}(0,b)}\\
& = (q(x)\psi_{\mathfrak{I}_{N}}^{H}(\rho,x), \phi(b-x))_{C_{0}^{\infty}%
(0,b)}+\sum_{k=0}^{N}\alpha_{k}\psi_{\mathfrak{I}_{N}}^{H}(\rho,x_{k}%
)\phi(b-x_{k})\\
& = (q(b-x)y(x),\phi(x))_{C_{0}^{\infty}(0,b)}+\sum_{k=0}^{N}\alpha
_{k}y(b-x_{k})\phi(b-x_{k}),
\end{align*}
that is, $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ is a solution of
(\ref{Schrwithdelta}) iff $y(x)=\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)$ is a
solution of
\begin{equation}
\label{reversedequation}-y^{\prime\prime}(x)+\left( q(b-x)+\sum_{k=0}%
^{N}\alpha_{k}\delta(x-(b-x_{k}))\right) y(x)=\rho^{2}y(x).
\end{equation}
Since $0<b-x_{N}<\cdots<b-x_{0}<b$, equation (\ref{reversedequation}) is of the
type (\ref{Schrwithdelta}) with the point interactions $\mathfrak{I}_{N}%
^{*}=\{(b-x_{N-j},\alpha_{N-j})\}_{j=0}^{N}$ and $\psi_{\mathfrak{I}_{N}}%
^{H}(\rho,b-x)$ is the corresponding solution $c_{\mathfrak{I}_{N}^{*}}%
^{H}(\rho,x)$ for (\ref{reversedequation}). Hence
\begin{equation}
\label{integralrepresentationpsi}\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)=
\cos(\rho x)+ \int_{0}^{x}G_{\mathfrak{I}_{N}^{*}}^{H}(x,t)\cos(\rho t)dt
\end{equation}
for some kernel $G_{\mathfrak{I}_{N}^{*}}^{H}(x,t)$ defined on $\Omega$ with
$G_{\mathfrak{I}_{N}^{*}}^{H}(x,\cdot)\in L_{2}(0,x)$ for $x\in(0,b]$.
Thus,
\begin{align*}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,x) & =\cos(\rho(b-x))+\int_{0}^{b-x}%
G_{\mathfrak{I}_{N}^{*}}^{H}(b-x,t)\cos(\rho t)dt\\
& =\cos(\rho(b-x))+\int_{x}^{b}G_{\mathfrak{I}%
_{N}^{*}}^{H}(b-x,b-t)\cos(\rho(b-t))dt,
\end{align*}
where the change of variables $t\mapsto b-t$ was used. Hence we obtain
(\ref{solpsiintrepresentation}) with $\widetilde{G}_{\mathfrak{I}_{N}}%
^{H}(x,t)=G_{\mathfrak{I}_{N}^{*}}^{H}(b-x,b-t)$. In consequence, by Theorem
\ref{ThNSBF1} we obtain (\ref{NSBFforpsi}).
\end{proof}
\begin{remark}
As in Remark \ref{remarkfirstcoeffcosine}
\begin{equation}
\label{firstcoeffpsi}\tau_{0}(x)= \psi_{\mathfrak{I}_{N}}^{H}(0,x)-1
\quad\mbox{and }\; \zeta_{0}(x)=3\left( \frac{\vartheta_{\mathfrak{I}_{N}%
}(0,x)}{b-x}-1\right) .
\end{equation}
\end{remark}
\begin{remark}
\label{remarkentirefunction} Let $\lambda\in\mathbb{C}$ and $\lambda=\rho
^{2}$.
\begin{itemize}
\item[(i)] The functions $\widehat{s}_{k}(\rho,x-x_{k})$ are entire with
respect to $\rho$. Then from (\ref{generalsolcauchy}) $c_{\mathfrak{I}_{N}%
}^{h}(\rho,x)$, $s_{\mathfrak{I}_{N}}(\rho,x)$ and $\psi_{\mathfrak{I}_{N}%
}^{H}(\rho, x)$ are entire as well.
\item[(ii)] Suppose that $q$ is real valued and $\alpha_{0}, \dots, \alpha
_{N}, u_{0}, u_{1}\in\mathbb{R}$. If $u(\lambda,x)$ is a solution of
(\ref{Schrwithdelta}) satisfying $u^{(k)}(\lambda,0)=u_{k}$, $k=0,1$, then by the uniqueness of the Cauchy
problem $\overline{u(\lambda,x)}=u(\overline{\lambda},x)$. In particular, for
$\rho, h, H\in\mathbb{R}$, the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$,
$s_{\mathfrak{I}_{N}}(\rho,x)$ and $\psi_{\mathfrak{I}_{N}}^{H}(\rho, x)$ are
real valued.
\end{itemize}
\end{remark}
\subsection{A recursive integration procedure for the coefficients $\{a_n(x)\}_{n=0}^{\infty}$}
Similarly to the case of the regular Schr\"odinger equation \cite{directandinverse,neumann,neumann2}, we derive formally a recursive integration procedure for computing the Fourier-Legendre coefficients $\{a_n(x)\}_{n=0}^{\infty}$ of the canonical transmutation kernel $K_{\mathfrak{J}_N}^f(x,t)$. Consider the sequence of functions $\sigma_n(x):=x^na_n(x)$ for $n\in \mathbb{N}_0$. According to Remark \ref{remarkcoefficientsaregood}, $\{\sigma_n(x)\}_{n=0}^{\infty}\subset \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$.
\begin{remark}\label{Remarksigmafunctionproperties}
\begin{itemize}
\item[(i)] By Remark \ref{remarkfirstcoeffcosine},
\begin{equation}\label{sigma0}
\sigma_0(x)=\frac{f(x)-1}{2}.
\end{equation}
\item[(ii)] By (\ref{CoeffFourier1formalpowers}), $a_1(x)=\frac{3}{2}\left(\frac{\varphi_f^{(1)}(x)}{x}-1\right)$. Thus, from (\ref{formalpowers}) and (\ref{auxiliaryformalpowers}) we have
\begin{equation}\label{sigma1}
\sigma_1(x)=\frac{3}{2}\left(f(x)\int_0^x\frac{dt}{f^2(t)}-x\right).
\end{equation}
\item[(iii)] For $n\geqslant 2$, $\sigma_n(0)=0$, and by (\ref{CoeffFourier1formalpowers}) we obtain
\begin{align*}
D\sigma_n(x) & = \left(n+\frac{1}{2}\right)\sum_{k=0}^{n}l_{k,n}D\left(x^{n-k}\varphi_f^{(k)}(x)\right)\\
& =\left(n+\frac{1}{2}\right)\left(\sum_{k=0}^{n-1}l_{k,n} (n-k)x^{n-k-1}\varphi_f^{(k)}(x)+\sum_{k=0}^{n}l_{k,n}x^{n-k}D\varphi_f^{(k)}(x)\right).
\end{align*}
By (\ref{formalpowersderivative}) and (\ref{auxiliaryformalpowers}), $D\varphi_f^{(k)}(0)=0$ for $k\geqslant 1$. Hence, $\sigma_n'(0)=0$.
\end{itemize}
\end{remark}
Denote by $c_{\mathfrak{J}_N}^f(\rho,x)$ the solution of (\ref{Schrwithdelta}) satisfying (\ref{cosinegralinitialconds}) with $h=f'(0)$. On each interval $[x_k,x_{k+1}]$, $k=0, \cdots, N$, $c_{\mathfrak{J}_N}^f(\rho,x)$ is a solution of the regular equation (\ref{schrodingerregular}). In \cite[Sec. 6]{neumann} by substituting the Neumann series (\ref{NSBFcosine}) of $c_{\mathfrak{J}_N}^f(\rho,x)$ into Eq. (\ref{schrodingerregular}) it was proved that the functions $\{\sigma_{2n}(x)\}_{n=0}^{\infty}$ must satisfy, at least formally, the recursive relations
\begin{equation}\label{firstrecursiverelation}
\mathbf{L}_q\sigma_{2n}(x) = \frac{4n+1}{4n-3}x^{4n-1}\mathbf{L}_q\left[\frac{\sigma_{2n-2}(x)}{x^{4n-3}}\right], \quad x_k<x<x_{k+1}
\end{equation}
for $k=0,\cdots, N$. Similarly, substitution of the Neumann series (\ref{NSBFsine}) of $s_{\mathfrak{J}_N}(\rho,x)$ into (\ref{schrodingerregular}) leads to the equalities
\begin{equation}\label{secondrecursiverelation}
\mathbf{L}_q\sigma_{2n+1}(x) = \frac{4n+3}{4n-1}x^{4n+1}\mathbf{L}_q\left[\frac{\sigma_{2n-1}(x)}{x^{4n-1}}\right], \quad x_k<x<x_{k+1}.
\end{equation}
Taking into account that $\sigma_n\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$ and combining (\ref{firstrecursiverelation}) and (\ref{secondrecursiverelation}) with Remark \ref{Remarksigmafunctionproperties}(iii), we obtain that the functions $\sigma_n(x)$, $n\geqslant 2$, must satisfy (at least formally) the following Cauchy problems
\begin{equation}\label{recurrencerelationsigma}
\begin{cases}
\displaystyle \mathbf{L}_{q,\mathfrak{J}_N}\sigma_n(x) = \frac{2n+1}{2n-3}x^{2n-1}\mathbf{L}_q\left[\frac{\sigma_{n-2}(x)}{x^{2n-3}}\right], \quad 0<x<b, \\
\sigma_n(0) =\sigma'_n(0)=0.
\end{cases}
\end{equation}
\begin{remark}\label{remarkcontinuousquotient}
If $g\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$, then $\frac{g}{f}\in H^2(0,b)$.
Indeed, $\frac{g}{f}\in C[0,b]$, and the jump of the derivative at $x_k$ is given by
\begin{align*}
\left(\frac{g}{f}\right)'(x_k+)- \left(\frac{g}{f}\right)'(x_k-) & = \frac{g'(x_k+)f(x_k)-f'(x_k+)g(x_k)}{f^2(x_k)}-\frac{g'(x_k-)f(x_k)-f'(x_k-)g(x_k)}{f^2(x_k)}\\
& = \frac{1}{f^2(x_k)}\left[\left(g'(x_k+)-g'(x_k-)\right)f(x_k)-g(x_k)\left(f'(x_k+)-f'(x_k-)\right) \right]\\
& = \frac{1}{f^2(x_k)}\left[ \alpha_kg(x_k)f(x_k)-\alpha_kg(x_k)f(x_k) \right]=0.
\end{align*}
Hence $\left(\frac{g}{f}\right)'\in AC[0,b]$, and then $\frac{g}{f}\in H^2(0,b)$.
\end{remark}
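The derivative-jump cancellation in the remark above can be illustrated with concrete numbers. In the sketch below, the one-sided data at a single interaction point ($\alpha_1=2$, $f(x_1)=2$, $g(x_1)=1$, and the left derivatives) are arbitrary choices, not values from the paper; both functions are only required to satisfy the point-interaction jump condition $u'(x_1+)-u'(x_1-)=\alpha_1 u(x_1)$:

```python
alpha = 2.0  # strength of the point interaction at x_1 (arbitrary choice)

# Hypothetical one-sided data of f and g at x_1, each satisfying
# the jump condition u'(x1+) - u'(x1-) = alpha * u(x1).
f = {'val': 2.0, 'dm': 2.0}          # f(x1) = 2, f'(x1-) = 2
g = {'val': 1.0, 'dm': 1.0}          # g(x1) = 1, g'(x1-) = 1
f['dp'] = f['dm'] + alpha * f['val']  # f'(x1+) = 6
g['dp'] = g['dm'] + alpha * g['val']  # g'(x1+) = 3

def quot_deriv(side):
    # one-sided derivative of g/f at x1 by the quotient rule
    df = f['dp'] if side == '+' else f['dm']
    dg = g['dp'] if side == '+' else g['dm']
    return (dg * f['val'] - df * g['val']) / f['val'] ** 2

print(quot_deriv('-'), quot_deriv('+'))  # equal: the jump cancels
```

The two one-sided derivatives coincide, exactly as the algebraic computation in the remark predicts.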
\begin{proposition}
The sequence $\{\sigma_n(x)\}_{n=0}^{\infty}$ satisfying the recurrence relation (\ref{recurrencerelationsigma}) for $n\geqslant 2$, with $\sigma_0(x)=\frac{f(x)-1}{2}$ and $\sigma_1(x)= \frac{3}{2}\left(f(x)\int_0^x\frac{dt}{f^2(t)}-x\right)$, is given by
\begin{equation}\label{recurrenceintegralsigma}
\sigma_n(x)= \frac{2n+1}{2n-3}\left( x^2\sigma_{n-2}(x)+2(2n-1)f(x)\theta_n(x)\right), \quad n\geqslant 2,
\end{equation}
where
\begin{equation}
\theta_n(x):= \int_0^x\left(\eta_n(t)-tf(t)\sigma_{n-2}(t)\right)\frac{dt}{f^2(t)}, \quad n\geqslant 2,
\end{equation}
and
\begin{equation}
\eta_n(x):= \int_0^x\left((n-1)f(t)+tf'(t)\right)\sigma_{n-2}(t)dt, \quad n\geqslant 2.
\end{equation}
\end{proposition}
\begin{proof}
Let $g\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$ and $n\geqslant 2$. Consider the Cauchy problem
\begin{equation}\label{auxcauchyproblem2}
\begin{cases}
\displaystyle \mathbf{L}_{q,\mathfrak{J}_N}u_n(x) = \frac{2n+1}{2n-3}x^{2n-1}\mathbf{L}_q\left[\frac{g(x)}{x^{2n-3}}\right], \quad 0<x<b, \\
u_n(0) =u'_n(0)=0.
\end{cases}
\end{equation}
By formula (\ref{solutionnonhomogeneouseq}) and the Polya factorization $\mathbf{L}_q= -\frac{1}{f}Df^2D\frac{1}{f}$ we obtain that the unique solution of the Cauchy problem (\ref{auxcauchyproblem2}) is given by
\[
u_n(x)= \frac{2n+1}{2n-3} f(x)\int_0^x \frac{1}{f^2(t)}\left(\int_0^t s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds \right)dt.
\]
Consider an antiderivative $\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds$. Integration by parts gives
\begin{align*}
\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds & = s^{2n-1}f^2(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right)-(2n-1)sf(s)g(s) \\
& \quad + \int \left((2n-1)(2n-2)f(s)+2(2n-1)sf'(s)\right)g(s)ds.
\end{align*}
Note that
\begin{align*}
s^{2n-1}f^2(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right) & = s^{2n-1}f^2(s)\frac{D\left(\frac{g(s)}{f(s)}\right)}{s^{2n-3}}-s^{2n-1}f^2(s)\frac{\frac{g(s)}{f(s)}}{s^{4n-6}}(2n-3)s^{2n-4}\\
& = s^2f^2(s)D\left(\frac{g(s)}{f(s)}\right)-(2n-3)sf(s)g(s).
\end{align*}
Since $g\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$, by Remark \ref{remarkcontinuousquotient}, $D\left(\frac{g(s)}{f(s)}\right)$ is continuous in $[0,b]$. Thus,
\begin{align*}
\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds
& = s^2f^2(s)D\left(\frac{g(s)}{f(s)}\right)-(4n-4)sf(s)g(s)\\
&\quad + 2(2n-1)\int\left((n-1)f(s)+sf'(s)\right)g(s)ds
\end{align*}
is well defined at $s=0$ and is continuous in $[0,b]$. Then we obtain that
\begin{align*}
\Phi(t) & := \int_0^t s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds \\
& = t^2f^2(t)D\left(\frac{g(t)}{f(t)}\right)-(4n-4)tf(t)g(t) + 2(2n-1)H_n[g](t),
\end{align*}
with $H_n[g](t):= \displaystyle \int_0^t \left((n-1)f(s)+sf'(s)\right)g(s)ds$,
is a continuous function in $[0,b]$. Now,
\begin{align*}
\int_0^x \Phi(t)\frac{dt}{f^2(t)} &= \int_0^x t^2D\left[\frac{g(t)}{f(t)}\right]dt-(4n-4)\int_{0}^{x} t\frac{g(t)}{f(t)}dt+2(2n-1)\int_0^x\frac{H_n[g](t)}{f^2(t)}dt\\
& = x^2\frac{g(x)}{f(x)}+2(2n-1)\int_0^x\left[H_n[g](t)-tf(t)g(t)\right]\frac{dt}{f^2(t)}.
\end{align*}
Hence
\begin{equation}\label{auxsol2}
u_n(x)= \frac{2n+1}{2n-3}\left(x^2g(x)+2(2n-1)f(x)\Theta_n[g](x)\right),
\end{equation}
with $\Theta_n[g](x):= \displaystyle \int_0^x\left[H_n[g](t)-tf(t)g(t)\right]\frac{dt}{f^2(t)}$.
Finally, since $\sigma_0, \sigma_1\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_{N}}\right)$, formula (\ref{recurrenceintegralsigma}) is obtained for all $n\geqslant 2$ by induction, taking $g=\sigma_{n-2}$ in (\ref{auxcauchyproblem2}) and $\eta_n(x)=H_n[\sigma_{n-2}](x)$, $\theta_n(x)=\Theta_n[\sigma_{n-2}](x)$ in (\ref{auxsol2}).
\end{proof}
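As a sanity check of the recursive procedure, the following sketch (a numerical experiment, not part of the paper) takes $q\equiv 1$ on $(0,b)$ with no point interactions, so that $f(x)=e^{x}$ and $h=1$, builds $\sigma_{2}(x)=5\left(x^{2}\sigma_{0}(x)+6f(x)\theta_{2}(x)\right)$ by quadrature, and verifies that it satisfies the differential relation $\mathbf{L}_{q}\sigma_{2}=5x^{3}\mathbf{L}_{q}[\sigma_{0}/x]$ of (\ref{recurrencerelationsigma}):

```python
import math

def simpson(f, a, b, n=400):
    # composite Simpson rule on [a, b]
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# q = 1, no point interactions: f(x) = e^x satisfies -f'' + f = 0,
# f(0) = 1, h = f'(0) = 1, and sigma_0 = (f - 1)/2.
sigma0 = lambda x: (math.exp(x) - 1) / 2

def eta2(x):
    # eta_2(x) = int_0^x ((n-1) f(t) + t f'(t)) sigma_0(t) dt, n = 2
    return simpson(lambda t: (1 + t) * math.exp(t) * sigma0(t), 0, x)

def theta2(x):
    # theta_2(x) = int_0^x (eta_2(t) - t f(t) sigma_0(t)) / f(t)^2 dt
    return simpson(lambda t: (eta2(t) - t * math.exp(t) * sigma0(t))
                   * math.exp(-2 * t), 0, x)

def sigma2(x):
    # n = 2 step of the recursion, with the factor f(x) on theta_2
    return 5 * (x * x * sigma0(x) + 6 * math.exp(x) * theta2(x))

def Lq(g, x, h=1e-3):
    # (L_q g)(x) = -g''(x) + g(x), by central differences
    return -(g(x + h) - 2 * g(x) + g(x - h)) / h ** 2 + g(x)

x = 0.7
lhs = Lq(sigma2, x)
rhs = 5 * x ** 3 * Lq(lambda t: sigma0(t) / t, x)
print(abs(lhs - rhs))  # small residual
```

Both sides are computed independently, so a small residual confirms that the integral recursion reproduces a solution of the Cauchy problem.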
Integral relations of type (\ref{recurrenceintegralsigma}) are effective for the numerical computation of the partial sums (\ref{NSBFcosineaprox}) and (\ref{NSBFsineaprox}), as seen in \cite{neumann,neumann2}.
\section{Integral representation for the derivative}
Since $e_{\mathfrak{I}_{N}}^{h}(\rho, \cdot)\in AC[0, b]$, it is worthwhile
looking for an integral representation of the derivative of $e_{\mathfrak{I}%
_{N}}^{h}(\rho,x)$. Differentiating (\ref{SoleGral}) we obtain
\begin{align*}
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =\widetilde{e}_{h}^{\prime
}(\rho,x)+\sum_{k=0}^{N}\alpha_{k}\widetilde{e}_{h}(\rho, x_{k})H(x-x_{k}%
)\widehat{s}^{\prime}_{k} (\rho, x-x_{k})\\
& \quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J} H(x-x_{j_{|J|}})\widetilde
{e}_{h}(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}%
(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}^{\prime}%
(\rho,x-x_{j_{|J|}}).
\end{align*}
Differentiating (\ref{representationsinegeneral1}) and using that $\widehat
{H}_{k}(x,x)=\frac{1}{2}\int_{0}^{x}q(t+x_{k})dt$, we obtain
\begin{align*}
\widehat{s}_{k}^{\prime}(\rho,x) & = \cos(\rho x)+\frac{1}{2}\frac{\sin(\rho
x)}{\rho}\int_{0}^{x}q(t+x_{k})dt+\int_{0}^{x}\partial_{x}\widehat{H}%
_{k}(x,t)\frac{\sin(\rho t)}{\rho}dt.
\end{align*}
Denote
\begin{equation}
\label{antiderivativeq}w(y,x):= \frac{1}{2}\int_{y}^{x}q(s)ds \quad
\mbox{ for }\; x,y\in[0,b].
\end{equation}
Hence, the derivative $\widehat{s}_{k}^{\prime}(\rho,x-x_{k})$ can be written
as
\begin{equation}
\widehat{s}_{k}^{\prime}(\rho,x-x_{k})= \cos(\rho(x-x_{k}))+\int_{-(x-x_{k}%
)}^{x-x_{k}}\widetilde{K}_{k}^{1}(x,t)e^{i\rho t}dt,
\end{equation}
where $\displaystyle \widetilde{K}_{k}^{1}(x,t)= w(x_{k},x)+\frac{1}{2}%
\int_{|t|}^{x-x_{k}}\partial_{x}\widehat{H}_{k}(x,t)dt$.
On the other hand, differentiation of (\ref{transm1}) and the Goursat
conditions for $\widetilde{K}^{h}(x,t)$ lead to the equality
\begin{equation}
\tilde{e}^{\prime}_{h}(\rho,x)= (i\rho+w(0,x))e^{i\rho x}+h\cos(\rho
x)+\int_{-x}^{x} \partial_{x}\widetilde{K}^{h}(x,t)e^{i\rho t}dt.
\end{equation}
Using the fact that
\[
\cos(\rho A)\int_{-B}^{B}f(t)e^{i\rho t}dt= \int_{-(B+A)}^{B+A}\frac{1}%
{2}\left( \chi_{[-(B+A),B-A]}(t)f(t-A)+\chi_{[A-B,B+A]}(t)f(t+A)\right)
e^{i\rho t}dt
\]
for $A,B>0$ and $f\in L_{2}(\mathbb{R})$ with $\operatorname{Supp}%
(f)\subset[-B,B]$, we obtain
\begin{align*}
\tilde{e}_{h}(\rho, x_{j})\widehat{s}_{k}^{\prime}(\rho,x-x_{k}) & =e^{i\rho
x_{j}}\cos(\rho(x-x_{k}))+\mathcal{F}\left[ \widehat{K}_{x_{j},x_{k}%
}(x,t)\right] ,
\end{align*}
where
\begin{align*}
\widehat{K}_{x_{j},x_{k}}(x,t) & = \chi_{[x_{k}-x-x_{j},x-x_{k}-x_{j}%
]}(t)\widetilde{K}_{k}^{1}(x,t-x_{j})+\chi_{x_{j}}(t)\widetilde{K}^{h}%
(x_{j},t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}^{1}(x,t)\\
& \qquad+\frac{1}{2}\chi_{[x_{k}-x_{j}-x,x_{j}-x+x_{k}]}(t)\widetilde{K}%
^{h}(x_{j},t-x+x_{k})\\
& \qquad+\frac{1}{2}\chi_{[x-x_{k}-x_{j},x-x_{k}+x_{j}]}(t)\widetilde{K}%
^{h}(x_{j},t+x-x_{k}).
\end{align*}
By Lemma \ref{lemaconv} the support of $\widehat{K}_{x_{j},x_{k}}(x,t)$
belongs to $[x_{k}-x-x_{j},x-x_{k}+x_{j}]$. Using the equality
\[
\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}}-x_{j_{l}}%
)=\mathcal{F}\left\{ \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\left(
\chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right)
\right\}
\]
we have
\[
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x)= (i\rho+w(0,x))e^{i\rho x}+h\cos(\rho x)+\sum_{k=0}^{N}%
\alpha_{k}H(x-x_{k})e^{i\rho x_{k}}\cos(\rho(x-x_{k}))+\mathcal{F}\left\{
E_{\mathfrak{I}_{N}}^{h}(x,t)\right\}
\]
where
\begin{align*}
E_{\mathfrak{I}_{N}}^{h}(x,t) & = \chi_{x}(t)\partial_{x}\widetilde{K}%
^{h}(x,t)+\sum_{k=0}^{N}\alpha_{k} H(x-x_{k})\widehat{K}_{x_{k},x_{k}}(x,t)\\
& \quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J} H(x-x_{j_{|J|}}) \widehat
{K}_{x_{j_{1}},x_{j_{|J|}}}(x,t)\ast\left( \prod_{l=1}^{|J|-1}\right) ^{\ast
}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right)
.
\end{align*}
Again, by Lemma \ref{lemaconv} the support of $E_{\mathfrak{I}_{N}}^{h}(x,t)$
belongs to $[-x,x]$. Since $e^{i\rho x_{k}}\cos(\rho(x-x_{k}))=\frac{1}%
{2}e^{i\rho x}\left( 1+e^{-2i\rho(x-x_{k})}\right) $, we obtain the following representation.
\begin{theorem}
The derivative $(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x)$ admits the
integral representation
\begin{align}
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & = \left( i\rho+w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) e^{i\rho x}+h\cos(\rho x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})e^{-2i\rho(x-x_{k})}%
+\int_{-x}^{x}E_{\mathfrak{I}_{N}}^{h}(x,t)e^{i\rho t}%
dt,\label{derivativetransme}%
\end{align}
where $E_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$ for all $x\in(0,b]$.
\end{theorem}
\begin{corollary}
The derivatives of the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and
$s_{\mathfrak{I}_{N}}(\rho,x)$ admit the integral representations
\begin{align}
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =-\rho\sin(\rho x)+ \left(
h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \cos(\rho
x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_{k}%
))+\int_{0}^{x}M_{\mathfrak{I}_{N}}^{h}(x,t)\cos(\rho
t)dt,\label{derivativetransmcosine}\\
s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) & =\cos(\rho x)+ \left( w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \frac{\sin(\rho x)}{\rho}\nonumber\\
& \quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}))}{2\rho
}+\int_{0}^{x}R_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho t)}{\rho}%
dt,\label{derivativetransmsine}%
\end{align}
where
\begin{align}
M_{\mathfrak{I}_{N}}^{h}(x,t) & = E_{\mathfrak{I}_{N}}^{h}%
(x,t)+E_{\mathfrak{I}_{N}}^{h}(x,-t),\label{kernelderivativecosine}\\
R_{\mathfrak{I}_{N}}(x,t) & = E_{\mathfrak{I}_{N}}^{h}%
(x,t)-E_{\mathfrak{I}_{N}}^{h}(x,-t),\label{kernelderivativesine}%
\end{align}
defined for $x\in[0,b]$ and $|t|\leqslant x$.
\end{corollary}
\begin{corollary}
\label{CorNSBFforderivatives} The derivatives of the solutions
$c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}}(\rho,x)$ admit
the NSBF representations
\begin{align}
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =-\rho\sin(\rho x)+ \left(
h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \cos(\rho
x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_{k}%
))+\sum_{n=0}^{\infty}(-1)^{n}l_{n}(x)j_{2n}(\rho
x),\label{NSBFderivativetransmcosine}\\
s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) & =\cos(\rho x)+ \left( w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \frac{\sin(\rho x)}{\rho}\nonumber\\
& \quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}))}{2\rho
}+\sum_{n=0}^{\infty}(-1)^{n}r_{n}(x)j_{2n+1}(\rho
x),\label{NSBFderivativetransmsine}%
\end{align}
where $\{l_{n}(x)\}_{n=0}^{\infty}$ and $\{r_{n}(x)\}_{n=0}^{\infty}$ are the
coefficients of the Fourier-Legendre expansions of $M_{\mathfrak{I}_{N}}%
^{h}(x,t)$ and $R_{\mathfrak{I}_{N}}(x,t)$ in terms of the even and odd
Legendre polynomials, respectively.
\end{corollary}
\section{Conclusions}
We presented the construction of a transmutation operator that transmutes solutions of
the equation $v^{\prime\prime }+\lambda v=0$ into solutions of (\ref{Schrwithdelta}).
The transmutation operator is obtained from the closed form of the
general solution of equation (\ref{Schrwithdelta}). It was shown how to
construct the image of the transmutation operator on the set of polynomials
with the aid of the SPPS method. A Fourier-Legendre series representation
for the integral transmutation kernel is obtained, together with a
representation of the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$,
$s_{\mathfrak{I}_{N}}(\rho,x)$ and their derivatives as Neumann series of
Bessel functions, and recursive integral relations for the construction of the Fourier-Legendre coefficients. The series (\ref{NSBFcosine}), (\ref{NSBFsine}),
(\ref{NSBFderivativetransmcosine}) and (\ref{NSBFderivativetransmsine}) are
useful for solving direct and inverse spectral problems for
(\ref{Schrwithdelta}), as shown for the regular case in
\cite{inverse1,directandinverse,neumann,neumann2}.
\section*{Acknowledgments}
Research was supported by CONACYT, Mexico via the project 284470.
\section{Introduction}
Task mapping in modern high performance parallel computers can be modeled as a graph embedding problem.
Let $G(V,E)$ be a simple and connected graph with vertex set $V(G)$ and edge set $E(G)$.
Graph embedding \cite{BCHRS1998,AS2015,ALDS2021} is an
ordered pair $\langle f,P_f\rangle$ of injective mappings between the guest graph $G$ and the host graph $H$ such that
\begin{itemize}
\item[(i)] $f:V(G)\rightarrow V(H)$, and
\item[(ii)] $P_f: E(G)\rightarrow$
$\{P_f(u,v):$ $P_f(u,v)$ is a path in $H$ between $f(u)$ and $f(v)$ for $\{u,v\}\in E(G)\}$.
\end{itemize}
It is known that the topology mapping problem is NP-complete\cite{HS2011}.
Beginning with Harper \cite{H1964} in 1964 and Bernstein \cite{Bernstein1967} in 1967, a series of embedding problems has been studied \cite{E1991,OS2000,DWNS04,FJ2007,LSAD2021}.
The quality of an embedding can be measured by certain cost criteria.
One of these criteria is the wirelength.
Let $WL(G,H;f)$ denote the wirelength of $G$ into $H$ under the embedding $f$.
Taking over all embeddings $f$, the minimum wirelength of $G$ into $H$ is defined as
$$WL(G,H)=\min\limits_{f} WL(G,H;f).$$
The hypercube is one of the most popular, versatile and efficient topological structures of interconnection networks \cite{Xu2001},
and many studies related to hypercubes have been performed \cite{Chen1988,PM2009,PM2011,RARM2012}.
Manuel et al.\cite{PM2011} computed the minimum wirelength of embedding the hypercube into a simple cylinder. In that paper, the wirelengths for the hypercube into a general cylinder and into a torus were given as conjectures.
Although Rajan et al.\cite{RRPR2014} and Arockiaraj et al.\cite{AS2015} studied those embedding problems, the two conjectures remained open.
We recently gave rigorous proofs for hypercubes into cycles \cite{LT2021} and into cylinders (the first conjecture) \cite{Tang2022}.
Using those techniques, in this paper we settle the last conjecture, for the torus,
and generalize the results to other Cartesian products of paths and/or cycles.
It is seen that the grid, the cylinder and the torus are Cartesian products of graphs. In earlier work, the vertices of those graphs were labeled by sequences of natural numbers \cite{PM2009,PM2011,AS2015,Tang2022},
which is not convenient for some higher dimensional graphs.
To describe a certain embedding efficiently,
we label the vertices by tuples in this paper.
With the tool of the Edge Isoperimetric Problem (EIP) \cite{H2004}, we compute and explain the minimum wirelength for the hypercube into the
torus and into other Cartesian products of graphs.
\noindent\textbf{Notation.}
For $n\ge 1$,
we define $Q_n$ to be the hypercube with vertex-set $\{0,1\}^n$,
where two $0-1$ vectors are adjacent if they differ in exactly one coordinate \cite{RR2022}.
\noindent\textbf{Notation.}
An $r_1\times r_2$ grid with $r_1$ rows and $r_2$ columns is represented by $P_{r_1}\times P_{r_2}$, where the rows are labeled $1,2,\ldots, r_1$ and
the columns are labeled $1,2,\ldots, r_2$ \cite{PM2009}.
The torus $C_{r_1}\times C_{r_2}$ is a $P_{r_1}\times P_{r_2}$ with a wraparound edge in each column and a wraparound edge in each row.
\textbf{Main Results}
\begin{thm}\label{ccthm}
For any $n_1,\ n_2\ge2$ with $n_1+ n_2=n$,
the minimum wirelength of hypercubes into the torus is
\begin{equation*}\label{cpwl}
WL(Q_n,C_{2^{n_1}}\times C_{2^{n_2}})=
2^{n_2}(3\cdot 2^{2n_1-3}-2^{n_1-1})+
2^{n_1}(3\cdot 2^{2n_2-3}-2^{n_2-1}).
\end{equation*}
Moreover, Gray code embedding is an optimal embedding.
\end{thm}
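Theorem \ref{ccthm} can be checked by brute force for small parameters. The sketch below (an illustration, assuming that $\xi_n$ is the position map of the reflected binary Gray code, as in \cite{LT2021,Tang2022}) computes the wirelength of the Gray code embedding with shortest-path routing on the torus and compares it with the closed formula:

```python
from itertools import product

def gray_index(bits):
    # position (1-indexed) of a 0-1 word in the reflected Gray code order
    acc, b = 0, 0
    for g in bits:
        acc ^= g
        b = (b << 1) | acc
    return b + 1

def formula(n1, n2):
    # right-hand side of the theorem
    return (2**n2 * (3 * 2**(2 * n1 - 3) - 2**(n1 - 1))
            + 2**n1 * (3 * 2**(2 * n2 - 3) - 2**(n2 - 1)))

def gray_wl_torus(n1, n2):
    # wirelength of the Gray code embedding of Q_{n1+n2} into
    # C_{2^n1} x C_{2^n2}, routing each hypercube edge along a shortest path
    n = n1 + n2
    def cyc(a, b, m):            # cyclic distance on C_m
        d = abs(a - b)
        return min(d, m - d)
    total = 0
    for v in product((0, 1), repeat=n):
        for i in range(n):
            if v[i] == 0:        # count each edge of Q_n once
                u = list(v); u[i] = 1; u = tuple(u)
                total += cyc(gray_index(v[:n1]), gray_index(u[:n1]), 2**n1)
                total += cyc(gray_index(v[n1:]), gray_index(u[n1:]), 2**n2)
    return total

for n1, n2 in [(2, 2), (2, 3), (3, 3)]:
    print(n1, n2, gray_wl_torus(n1, n2) == formula(n1, n2))  # True
```

For instance, $WL(Q_4, C_4\times C_4)=32$ and $WL(Q_5, C_4\times C_8)=112$, in agreement with the formula.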
\noindent\textbf{Notation.}
Cartesian product of paths and/or cycles is denoted by
$\mathscr{G}=\mathscr{G}_1\times \mathscr{G}_2\times \cdots \times \mathscr{G}_k,$
where $\mathscr{G}_i \in \{P_{2^{n_i}}, C_{2^{n_i}}\}, 1\le i \le k$.
\begin{thm}\label{carthm}
For any $k>0$ and $n_i\ge2$ with $\sum_{i=1}^{k}n_i=n$,
the minimum wirelength of hypercubes into the Cartesian product $\mathscr{G}$ is
\begin{equation*}
WL(Q_n,\mathscr{G})=\sum_{i=1}^{k}\mathscr{L}_i,
\end{equation*}
where \begin{equation*}
\mathscr{L}_i=\left\{
\begin{array}{rcl}
&2^{n-n_i}(3\cdot 2^{2n_i-3}-2^{n_i-1}),&\mbox{if}\ \ \mathscr{G}_i=C_{2^{n_i}},\\
&2^{n-n_i}(2^{2n_i-1}-2^{n_i-1}),&\mbox{if}\ \ \mathscr{G}_i=P_{2^{n_i}}.
\end{array}
\right.
\end{equation*}
Moreover, Gray code embedding is an optimal embedding.
\end{thm}
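Theorem \ref{carthm} can likewise be verified exhaustively for small factors. In the sketch below (assuming again the reflected binary Gray code for $\xi_{n_i}$), each factor is encoded as a pair $(n_i,\textit{cyclic})$, with cyclic distance for $C_{2^{n_i}}$ and linear distance for $P_{2^{n_i}}$:

```python
from itertools import product

def gray_index(bits):
    # position (1-indexed) of a 0-1 word in the reflected Gray code order
    acc, b = 0, 0
    for g in bits:
        acc ^= g
        b = (b << 1) | acc
    return b + 1

def dist(a, b, m, cyclic):
    # distance between labels a, b in C_m (cyclic) or P_m (linear)
    d = abs(a - b)
    return min(d, m - d) if cyclic else d

def gray_wl(factors):
    # factors: list of (n_i, cyclic) for C_{2^n_i} (True) or P_{2^n_i} (False)
    n = sum(ni for ni, _ in factors)
    total = 0
    for v in product((0, 1), repeat=n):
        for i in range(n):
            if v[i] == 0:        # count each edge of Q_n once
                u = list(v); u[i] = 1; u = tuple(u)
                pos = 0
                for ni, cyc in factors:
                    total += dist(gray_index(v[pos:pos + ni]),
                                  gray_index(u[pos:pos + ni]), 2**ni, cyc)
                    pos += ni
    return total

def formula(factors):
    # sum of the terms L_i of the theorem
    n = sum(ni for ni, _ in factors)
    out = 0
    for ni, cyc in factors:
        if cyc:   # cycle factor C_{2^n_i}
            out += 2**(n - ni) * (3 * 2**(2 * ni - 3) - 2**(ni - 1))
        else:     # path factor P_{2^n_i}
            out += 2**(n - ni) * (2**(2 * ni - 1) - 2**(ni - 1))
    return out

mix = [(2, True), (2, False), (2, True)]   # C_4 x P_4 x C_4, n = 6
print(gray_wl(mix), formula(mix))          # equal
```

The cylinder case $[(2,\mathrm{True}),(2,\mathrm{False})]$ gives $40$, and the mixed three-factor product above gives $224$, both matching the theorem.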
The paper is organized as follows.
In Section \ref{Preliminaries}, some definitions and elementary properties are introduced.
In Section \ref{sec:torus}, we explain the Gray code embedding is an optimal strategy for hypercube into torus.
Section \ref{sec:cartesian} is devoted to Cartesian products of paths and/or cycles.
\section{Preliminaries} \label{Preliminaries}
The EIP has been used as a powerful tool in the computation of the minimum wirelength of graph embeddings \cite{H2004}.
The EIP is to determine a subset $S$ of cardinality $k$ of the vertices of a graph $G$ such that the edge cut separating this subset from its complement has minimal size. Mathematically, Harper denotes
$$\Theta(S)=\{ \{u,v\}\in E(G)\ : u\in S, v\notin S \}.$$
For any $S\subset V(Q_n)$, use $\Theta(n,S)$ in place of $\Theta(S)$ and let $|\Theta(n,S)|$ be $\theta(n,S)$.
\begin{lem}\label{swap}
Take a subcube $Q_{n_1}$ of $Q_n$, and $S_1\subset V(Q_{n_1}), S_2\subset V(Q_{n-n_1})$, then
\begin{equation*}
\theta(n,S_1\times S_2)=\theta(n,S_2 \times S_1).
\end{equation*}
\end{lem}
\begin{proof}
By the definition of hypercube $Q_n$,
there is an edge connected in $S_1\times S_2$ if and only if
there is an edge connected in $S_2\times S_1$.
\end{proof}
The following lemma is an efficient technique for finding the exact wirelength.
\begin{lem}\cite{Tang2022}\label{wl}
Let $f$ be an embedding of $Q_n$ into $H$.
Let $(L_i)_{i=1}^{m}$ be a partition of $E(H)$.
For each $1\le i \le m$, $(L_i)_{i=1}^{m}$ satisfies:
\begin{itemize}
\item[\bf{\normalfont (A1)}]$L_i$ is an edge cut of $H$ such that $L_i$ disconnects $H$ into two components and one of the induced vertex sets is denoted by $l_i$;
\item[\bf{\normalfont (A2)}]$|P_f(u,v)\cap L_i|$ is one if $\{u,v\}\in \Theta(n,f^{-1}(l_i))$ and zero otherwise for any $\{u,v\}\in E(Q_n)$.
\end{itemize}
Then $$WL(Q_n,H;f)=\sum_{i=1}^{m}\theta(n,f^{-1}(l_i)).$$
\end{lem}
\noindent\textbf{Notation.}
$N_n=\{1,2,\cdots,n\}$, and $F_i^{n}=\{i,i+1,\cdots,i+2^{n-1}-1\}, \quad 1\le i \le 2^{n-1}$.
\noindent\textbf{Notation.}
Let $(i,j)$ denote a vertex in row $i$ and column $j$ of the cylinder $C_{2^{n_1}}\times P_{2^{n_2}}$, where $1\le i \le 2^{n_1}$ and $1\le j \le 2^{n_2}$.
Then $V(C_{2^{n_1}}\times P_{2^{n_2}})=N_{2^{n_1}}\times N_{2^{n_2}}.$
It is seen that the vertex sets
$F_i^{n_1}\times N_{2^{n_2}}=\{(x_1,x_2):x_1\in F_i^{n_1},\ x_2\in N_{2^{n_2}} \}$ and
$N_{2^{n_1}}\times N_{j}=\{(x_1,x_2): x_1\in N_{2^{n_1}},\ x_2\in N_{j} \}$ are equivalent to
$A_i$ and $B_j$ defined in \cite{Tang2022}, respectively.
See Fig.\ref{label_1} and Fig.\ref{label_2} for examples.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{A.jpg}
\caption{$A_2$ and $F_2^{3}\times N_{2^{2}}$ in cylinder $C_{2^{3}}\times P_{2^{2}}$ }
\label{label_1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{B.jpg}
\caption{$B_3$ and $N_{2^3}\times N_{3}$ in cylinder $C_{2^{3}}\times P_{2^{2}}$}
\label{label_2}
\end{figure}
Now we generalize the Gray code map $\xi_n:\{0,1\}^n\rightarrow \{1,2,\cdots,2^n\}$ defined in \cite{LT2021,Tang2022},
and define the $k$-order Gray code map $\xi_{n_1\ldots n_k}$ corresponding to $k$ components.
\begin{defn}
$k$-order Gray code map $\xi_{n_1\ldots n_k}$ is given by $\xi_{n_1\ldots n_k}:\{0,1\}^n\rightarrow N_{2^{n_1}}\times\cdots \times N_{2^{n_k}}$,\ i.e.,
$$\xi_{n_1\ldots n_k}(v)=\xi_{n_1\ldots n_k}(v_1\ldots v_k)=(\xi_{n_1}(v_1),\ldots,\xi_{n_k}(v_k)),$$
where $n_1+\ldots+n_k=n$, and $v=v_1\ldots v_k\in \{0,1\}^n, v_i\in \{0,1\}^{n_i}, 1\le i \le k$.
\end{defn}
For example, $\xi_{32}(11011)=(\xi_{3}(110),\xi_{2}(11))=(5,3)$.
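The $k$-order map can be sketched in a few lines; the sketch below assumes that $\xi_n$ is the position map of the reflected binary Gray code (as in \cite{LT2021,Tang2022}) and reproduces the worked example $\xi_{32}(11011)=(5,3)$:

```python
def gray_index(bits):
    # assumed: xi_n(v) = 1-indexed position of v in the reflected
    # binary Gray code ordering of {0,1}^n
    acc, b = 0, 0
    for g in bits:
        acc ^= g              # Gray-to-binary prefix XOR
        b = (b << 1) | acc
    return b + 1

def xi(partition, bits):
    # k-order Gray code map: apply xi_{n_i} to each block of `bits`
    out, i = [], 0
    for ni in partition:
        out.append(gray_index(bits[i:i + ni]))
        i += ni
    return tuple(out)

# worked example from the text: xi_{32}(11011) = (5, 3)
print(xi([3, 2], [1, 1, 0, 1, 1]))  # (5, 3)
```

The componentwise structure is what makes the edge cuts $F_i^{n_1}\times N_{2^{n_2}}$ and $N_{2^{n_1}}\times N_j$ easy to pull back through $\xi_{n_1 n_2}^{-1}$.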
According to the rule of Gray code map, we have that
\begin{equation*}\label{change}
\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}})=\xi_n^{-1}(A_i),\ \ \xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j})=\xi_n^{-1}(B_j).
\end{equation*}
Together with (12) and (13) in \cite{Tang2022}, we have that
\begin{subequations}\label{cpwl1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}))=
2^{n-n_1}(3\cdot 2^{2n_1-3}-2^{n_1-1}).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j}))
=2^{n-n_2}(2^{2n_2-1}-2^{n_2-1}).
\end{eqnarray}
\end{subequations}
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times N_{2^{n_2}}$ be an
embedding of $Q_n$ into $C_{2^{n_1}}\times P_{2^{n_2}}$.
Theorems 5.2 and 5.1 in \cite{Tang2022} are rewritten as
\begin{subequations}\label{cp1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,f^{-1}(F_i^{n_1}\times N_{2^{n_2}}))\ge
\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}})).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,f^{-1}(N_{2^{n_1}}\times N_{j}))\ge
\sum_{j=1}^{2^{n_2}-1} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j})).
\end{eqnarray}
\end{subequations}
The cylinder $C_{2^{n_1}}\times P_{2^{n_2}}$ can also be viewed as $P_{2^{n_2}}\times C_{2^{n_1}}$.
Let $f: \{0,1\}^n\rightarrow N_{2^{n_2}}\times N_{2^{n_1}}$ be an
embedding of $Q_n$ into $P_{2^{n_2}}\times C_{2^{n_1}}$, then \eqref{cp1+2}
is rewritten as
\begin{subequations}\label{change:cp1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,f^{-1}(N_{2^{n_2}}\times F_i^{n_1}))\ge
\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_2n_1}^{-1}(N_{2^{n_2}}\times F_i^{n_1})).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,f^{-1}(N_j\times N_{2^{n_1}}))\ge
\sum_{j=1}^{2^{n_2}-1} \theta(n,\xi_{n_2n_1}^{-1}(N_j\times N_{2^{n_1}})).
\end{eqnarray}
\end{subequations}
\begin{rem}
It is seen that
$\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}) =\xi_{n_1}^{-1}(F_i^{n_1})\times V(Q_{{n_2}})$. Then, by Lemma \ref{swap}, we get that $\theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}))=\theta(n,\xi_{n_2n_1}^{-1}(N_{2^{n_2}}\times F_i^{n_1}))$.
\end{rem}
\section{Hypercubes into torus}\label{sec:torus}
In this section, we prove Theorem \ref{ccthm} in the following procedures.
\noindent$\bullet$ \textbf{Labeling.}\ \
Label the vertex set of the torus $C_{2^{n_1}}\times C_{2^{n_2}}$ by tuples, that is,
$$V(C_{2^{n_1}}\times C_{2^{n_2}})=
\{x=(x_1,x_2): 1\le x_i\le 2^{n_i},\ i=1,2\}=N_{2^{n_1}}\times N_{2^{n_2}}.$$
The edge set
$E(C_{2^{n_1}}\times C_{2^{n_2}})$ is the union of $\mathscr{E}_1$ and $\mathscr{E}_2$,
where
$$\begin{array}{rcl}
\mathscr{E}_1&=&\{\{(x_1,x_2),(x_1',x_2)\}:\{x_1,x_1'\}\in E(C_{2^{n_1}}), x_2\in N_{2^{n_2}}\},\\
\mathscr{E}_2&=&\{\{(x_1,x_2),(x_1,x_2')\}: x_1\in N_{2^{n_1}}, \{x_2,x_2'\}\in E(C_{2^{n_2}})\}.
\end{array}$$
\noindent$\bullet$ \textbf{Partition.}\ \ Construct a partition of the edge set of torus.
\textbf{Step 1.}\
For each $i=1,2$, $j=1,\ldots,2^{n_i-1}$,
let $\mathscr{X}_{ij}$ be an edge cut of the cycle $C_{2^{n_i}}$ such that $\mathscr{X}_{ij}$ disconnects $C_{2^{n_i}}$ into two components where the induced vertex set
is $F_j^{n_i}$.
\textbf{Step 2.}\
For $i=1,2$, denote
\begin{equation*}
\mathscr{P}_{ij}=\bigcup_{\{x_i,x_i'\} \in \mathscr{X}_{ij}}\{\{x,x'\}\in \mathscr{E}_i\},
\end{equation*}
then $\{\mathscr{P}_{ij}:1\le i \le 2, 1\le j \le 2^{n_i-1}\}$ is a partition of $E(C_{2^{n_1}}\times C_{2^{n_2}})$.
\noindent$\bullet$ \textbf{Computation.}\ \
Notice that for each $i,j$,
$\mathscr{P}_{ij}$ is an edge cut of the torus $C_{2^{n_1}}\times C_{2^{n_2}}$.
$\mathscr{P}_{1j}$ disconnects the torus into two components where the induced vertex set is $F_j^{n_1}\times N_{2^{n_2}}$, and
$\mathscr{P}_{2j}$ induces vertex set $N_{2^{n_1}}\times F_j^{n_2}$.
See Fig.~\ref{label_3} for an example.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{torus.jpg}
\caption{(a) Edge cut $\mathscr{P}_{12}$ disconnects $C_{2^3}\times C_{2^3}$ into two components, where the induced vertex set is $F_2^{3}\times N_{2^{3}}$.
(b) Edge cut $\mathscr{P}_{23}$ disconnects $C_{2^3}\times C_{2^3}$ into two components, where the induced vertex set is $N_{2^{3}}\times F_3^{3}$.
}
\label{label_3}
\end{figure}
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times N_{2^{n_2}}$ be an
embedding of $Q_n$ into $C_{2^{n_1}}\times C_{2^{n_2}}$.
Under the partition $\{\mathscr{P}_{ij}:1\le i \le 2, 1\le j \le 2^{n_i-1}\}$ and Lemma \ref{wl},
the wirelength is written as a summation related to function $\theta$, i.e.,
\begin{equation}\label{ccsum}
WL(Q_n,C_{2^{n_1}}\times C_{2^{n_2}};f)=
\sum_{j=1}^{2^{n_1-1}}\theta(n,f^{-1}(F_j^{n_1}\times N_{2^{n_2}}))+
\sum_{j=1}^{2^{n_2-1}}\theta(n,f^{-1}(N_{2^{n_1}}\times F_j^{n_2})).
\end{equation}
According to Lemma \ref{swap} and (\ref{cpwl1+2}a), we have that
\begin{equation}\label{cc1}
\sum_{j=1}^{2^{n_2-1}} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times F_j^{n_2}))=
2^{n-n_2}(3\cdot 2^{2n_2-3}-2^{n_2-1}).
\end{equation}
According to Lemma \ref{swap} and (\ref{change:cp1+2}a), we have that
\begin{equation}\label{cc2}
\sum_{j=1}^{2^{n_2-1}} \theta(n,f^{-1}(N_{2^{n_1}}\times F_j^{n_2}))\ge
\sum_{j=1}^{2^{n_2-1}} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times F_j^{n_2})).
\end{equation}
Combining the above three formulas with (\ref{cpwl1+2}a) and (\ref{cp1+2}a),
Theorem \ref{ccthm} holds.
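The closed forms above can also be checked numerically. The sketch below is our own illustration; it assumes that $\xi_{n_1n_2}$ applies the standard reflected Gray code to each block of bits, and that the wirelength in the torus sums the cyclic distances coordinatewise over the hypercube edges (the helper names are ours, not the paper's):

```python
def gray_positions(n):
    """0-based position of each n-bit vertex under the reflected Gray code."""
    order = [g ^ (g >> 1) for g in range(2**n)]
    return {v: i for i, v in enumerate(order)}

def torus_wirelength(n1, n2):
    """Wirelength of the blockwise Gray embedding of Q_{n1+n2}
    into C_{2^n1} x C_{2^n2} (summing cyclic distances per coordinate)."""
    n = n1 + n2
    p1, p2 = gray_positions(n1), gray_positions(n2)
    place = lambda v: (p1[v >> n2], p2[v & (2**n2 - 1)])
    cyc = lambda d, m: min(d % m, -d % m)
    total = 0
    for v in range(2**n):
        for b in range(n):
            u = v ^ (1 << b)
            if u > v:                        # count each Q_n edge once
                (a1, a2), (b1, b2) = place(v), place(u)
                total += cyc(a1 - b1, 2**n1) + cyc(a2 - b2, 2**n2)
    return total

def closed_form(n, m):
    """The cycle-factor sum 2^{n-m}(3*2^{2m-3} - 2^{m-1})."""
    return 2**(n - m) * (3 * 2**(2*m - 3) - 2**(m - 1))

for n1 in range(2, 5):
    for n2 in range(2, 5):
        n = n1 + n2
        assert torus_wirelength(n1, n2) == closed_form(n, n1) + closed_form(n, n2)
```

For instance, `torus_wirelength(2, 2)` returns 32, matching the sum of the two closed forms for $n_1=n_2=2$.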
\section{hypercubes into Cartesian product of paths and/or cycles}
\label{sec:cartesian}
In this section, we prove Theorem \ref{carthm} in three parts.
The first part follows a process analogous to that of Section \ref{sec:torus}.
Then we obtain the wirelength under Gray code embedding.
In the end, we conclude that Gray code embedding is an optimal strategy.
\subsection{Computation of embedding wirelength}\label{sub1}\
\noindent$\bullet$ \textbf{Labeling.}\ \ Let $$V(\mathscr{G})=\{x=(x_1,\ldots,x_k):x_i\in N_{2^{n_i}}, 1\le i\le k \}
=N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$$
be the vertex set of Cartesian product $\mathscr{G}$ of $k$ paths and/or cycles.
The edge set $E(\mathscr{G})$ of Cartesian product $\mathscr{G}$ is composed of all edges $\mathscr{E}_i$ corresponding to the $k$ paths and/or cycles, denoted by
$E(\mathscr{G})=\bigcup_{i=1}^{k}\mathscr{E}_i$.
\noindent$\bullet$ \textbf{Partition.}\ \ Construct a partition of the edge set of Cartesian product $\mathscr{G}$.
\textbf{Step 1.}\
For each $i=1,\ldots,k$, $j=1,\ldots,2^{n_i-1}$, let $\mathscr{X}_{ij}$ be as described in Section \ref{sec:torus}.
For each $i=1,\ldots,k$, $j=1,\ldots,2^{n_i}-1$,
let $\mathscr{Y}_{ij}$ be an edge cut of the path $P_{2^{n_i}}$ such that $\mathscr{Y}_{ij}$ disconnects $P_{2^{n_i}}$ into two components where the induced vertex set is $N_j$.
\textbf{Notation.}\ For $1\le i\le k$, let $q_i$ be $2^{n_i-1}$ if $\mathscr{G}_i=C_{2^{n_i}}$ and $2^{n_i}-1$ if $\mathscr{G}_i=P_{2^{n_i}}$.
For $j=1,\ldots,q_i$, denote
\begin{equation*}\label{huaF}
\mathscr{F}_{ij}=\left\{
\begin{array}{cl}
\mathscr{X}_{ij},&\mbox{if}\quad \mathscr{G}_i=C_{2^{n_i}},\\
\mathscr{Y}_{ij},&\mbox{if}\quad \mathscr{G}_i=P_{2^{n_i}}.
\end{array}
\right.
\end{equation*}
\textbf{Step 2.}\
For $i=1,\ldots,k$, $j=1,\ldots,q_i$, denote
\begin{equation*}
\mathscr{P}_{ij}=\bigcup_{\{x_i,x_i'\} \in \mathscr{F}_{ij}}\{\{x,x'\}\in \mathscr{E}_i\},
\end{equation*}
then $\{\mathscr{P}_{ij}:1\le i \le k, 1\le j \le q_i\}$ is a partition of $E(\mathscr{G})$.
\noindent$\bullet$ \textbf{Computation.}\ \
Notice that for each $i,j$,
$\mathscr{P}_{ij}$ is an edge cut of Cartesian product $\mathscr{G}$.
Define a vertex set $\mathscr{A}_{ij}$ to be $F_j^{n_i}$ if $\mathscr{G}_i=C_{2^{n_i}}$ and $N_j$ if $\mathscr{G}_i=P_{2^{n_i}}$.
\textbf{Notation.}\
\begin{equation}\label{Bij}
\begin{array}{cl}
&\mathscr{B}_{1j}=\mathscr{A}_{1j}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}},\ \
\mathscr{B}_{kj}=N_{2^{n_1}}\times \cdots \times N_{2^{n_{k-1}}}\times \mathscr{A}_{kj},\\
&\mathscr{B}_{ij}=N_{2^{n_1}}\times \cdots \times \mathscr{A}_{ij}\times \cdots \times N_{2^{n_k}}, \ \ 1<i< k.
\end{array}
\end{equation}
It is seen that $\mathscr{P}_{ij}$ disconnects $\mathscr{G}$ into two components where the induced vertex set is $\mathscr{B}_{ij}$.
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$ be an embedding of $Q_n$ into $\mathscr{G}$.
Under the partition $\{\mathscr{P}_{ij}:1\le i \le k, 1\le j \le q_i\}$ and Lemma \ref{wl},
the wirelength is written as a summation related to function $\theta$, i.e.,
\begin{equation}\label{wlG}
WL(Q_n,\mathscr{G};f)=\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,f^{-1}(\mathscr{B}_{ij})).
\end{equation}
\subsection{The wirelength under Gray code embedding}\label{sub2}\ \
We deal with the wirelength under Gray code embedding in two cases:
one is that $\mathscr{G}_i$ is cycle $C_{2^{n_i}}$,
and the other is that $\mathscr{G}_i$ is path $P_{2^{n_i}}$.
In the following, set $1\le i \le k, 1\le j \le q_i$.
\begin{lem}\label{BWL1}\
If $\mathscr{G}_i$ is cycle $C_{2^{n_i}}$, then we have that
\begin{equation*}
\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij}))=
2^{n-n_i}(3\cdot 2^{2n_i-3}-2^{n_i-1}).
\end{equation*}
\end{lem}
\begin{proof} By the Notation \eqref{Bij}, we have that
\begin{equation*}
\begin{array}{rcl}
\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij})&=&
\xi_{n_1\ldots n_k}^{-1}(N_{2^{n_1}}\times \cdots \times F_j^{n_i}\times \ldots \times N_{2^{n_k}})\\
&=&V(Q_{n_1})\times \ldots\times\xi_{n_i}^{-1}(F_j^{n_i})
\times\ldots\times V(Q_{n_k})\\
&=&V(Q_{n_1+\ldots+n_{i-1}})\times
\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n_{i+1}+\ldots+n_k}).
\end{array}
\end{equation*}
Moreover, by Lemma \ref{swap}, we have that
\begin{equation*}
\begin{array}{rcl}
&&\theta(n,V(Q_{n_1+\ldots+n_{i-1}})\times
\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n_{i+1}+\ldots+n_k}))\\
&=&\theta(n,\xi_{n_i}^{-1}(F_j^{n_i})\times
V(Q_{n_{i+1}+\ldots+n_k})\times V(Q_{n_1+\ldots+n_{i-1}}))\\
&=&\theta(n,\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n-n_i})).
\end{array}
\end{equation*}
Therefore, Lemma \ref{BWL1} follows from (\ref{cpwl1+2}a).
\end{proof}
Similarly, we obtain the following lemma.
\begin{lem}\label{BWL2}
If $\mathscr{G}_i$ is path $P_{2^{n_i}}$, then we have that
\begin{equation*}
\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij}))=
2^{n-n_i}(2^{2n_i-1}-2^{n_i-1}).
\end{equation*}
\end{lem}
Combining \eqref{wlG}, Lemma \ref{BWL1} and Lemma \ref{BWL2}, we get the wirelength under the Gray code embedding of the hypercube into a Cartesian product of paths and/or cycles. That is,
\begin{equation*}\label{wlgray}
WL(Q_n,\mathscr{G};\xi_{n_1\ldots n_k})=\sum_{i=1}^{k}\mathscr{L}_i,
\end{equation*}
where $\mathscr{L}_i$ is defined in Theorem \ref{carthm}.
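In the single-factor case $k=1$, Lemmas \ref{BWL1} and \ref{BWL2} give $WL(Q_n,C_{2^n};\xi_n)=3\cdot 2^{2n-3}-2^{n-1}$ and $WL(Q_n,P_{2^n};\xi_n)=2^{2n-1}-2^{n-1}$. A short sketch of ours (assuming $\xi_n$ is the standard reflected Gray code) verifies these values by summing host distances directly over the hypercube edges:

```python
def gray_positions(n):
    """0-based position of each n-bit vertex under the reflected Gray code."""
    order = [g ^ (g >> 1) for g in range(2**n)]
    return {v: i for i, v in enumerate(order)}

def wirelength(n, cyclic):
    """Sum of host-graph distances over the n * 2^(n-1) edges of Q_n,
    with the vertices laid out on a path (cyclic=False) or cycle (True)."""
    pos, N = gray_positions(n), 2**n
    total = 0
    for v in range(N):
        for b in range(n):
            u = v ^ (1 << b)
            if u > v:                      # count each hypercube edge once
                d = abs(pos[u] - pos[v])
                total += min(d, N - d) if cyclic else d
    return total

for n in range(2, 7):
    assert wirelength(n, cyclic=False) == 2**(2*n - 1) - 2**(n - 1)    # path
    assert wirelength(n, cyclic=True) == 3 * 2**(2*n - 3) - 2**(n - 1)  # cycle
```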
\subsection{Minimum wirelength}\label{sub3}\
We show that
the Gray code embedding achieves the minimum wirelength for embedding the hypercube into a Cartesian product of paths and/or cycles.
According to \eqref{wlG}, it is sufficient to prove the following.
\begin{lem}\label{finalieq}
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$ be an embedding of $Q_n$ into $\mathscr{G}$, then
\begin{equation*}
\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,f^{-1}(\mathscr{B}_{ij}))\ge
\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\cdots n_k}^{-1}(\mathscr{B}_{ij})).
\end{equation*}
\end{lem}
\begin{proof}
To prove this lemma, we only consider the case $i=1$, since a
similar argument works for each $2\le i\le k$.
\noindent$\bullet$ \textbf{Case 1.}\ \
$\mathscr{G}_1=C_{2^{n_1}}$.
For $1\le j \le q_1=2^{n_1-1}$,
$f^{-1}(\mathscr{B}_{1j})=f^{-1}(F_j^{n_1}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}})$.
Define a bijective map $f_1$ from $N_{2^{n_1}}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}}$ to $N_{2^{n_1}}\times N_{2^{n-n_1}}$, where
\begin{equation*}\label{f1}
f_1(x_1,x_2,\cdots,x_k)=(x_1, x_k+\sum_{i=2}^{k-1}(x_i-1)2^{\sum_{j=i+1}^{k}n_j}).
\end{equation*}
It is clear that $f_1(F_j^{n_1}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}})=F_j^{n_1}\times N_{2^{n-n_1}}$.
Moreover, we have that
\begin{equation*}
f^{-1}(\mathscr{B}_{1j})=f^{-1}\circ f_1^{-1}(F_j^{n_1}\times N_{2^{n-n_1}})=
(f_1\circ f)^{-1}(F_j^{n_1}\times N_{2^{n-n_1}}).
\end{equation*}
Notice that $f_1\circ f$ is an embedding of $Q_n$ into $C_{2^{n_1}}\times P_{2^{n-n_1}}$, so, by (\ref{cp1+2}a), we have that
$\sum_{j=1}^{2^{n_1-1}} \theta(n,f^{-1}(\mathscr{B}_{1j}))\ge
\sum_{j=1}^{2^{n_1-1}} \theta(n,\xi_{n_1}^{-1}(F_j^{n_1})\times V(Q_{n-n_1}))$. Therefore, we conclude that
\begin{equation}\label{B1p}
\sum_{j=1}^{2^{n_1-1}} \theta(n,f^{-1}(\mathscr{B}_{1j}))\ge
\sum_{j=1}^{2^{n_1-1}} \theta(n,\xi_{n_1\cdots n_k}^{-1}(\mathscr{B}_{1j})).
\end{equation}
\noindent$\bullet$ \textbf{Case 2.}\ \
$\mathscr{G}_1=P_{2^{n_1}}$.
By a similar analysis, we also get \eqref{B1p}.
Combining \textbf{Case 1} and \textbf{Case 2}, the case for $i=1$ is proved. Thus the lemma holds.
\end{proof}
\noindent \textbf{Proof of Theorem \ref{carthm}.} Theorem \ref{carthm} follows from Subsections \ref{sub1} to \ref{sub3}.
\noindent\textbf{Acknowledgements}
The author is grateful to Prof. Qinghui Liu for his thorough review and suggestions.
This work is supported by the National Natural Science Foundation of China, No.11871098.
\begin{abstract}
\noindent We count the number of ways to build paths, stars, cycles, and complete graphs as a sequence of vertices and edges, where each edge follows both of its endpoints. The problem was considered 50 years ago by Stanley but the explicit sequences corresponding to graph families seem to have been little studied. A cost-based variant is introduced and applications are considered.
\end{abstract}
{\it Key Words:} Roller-coaster problem, up-down permutations, linearization of posets, construction number, Hasse diagrams, minimizing edge delay, ontogenies.
\section{Introduction}
The {\bf elements} of a graph $G = (V,E)$ are the set $V \cup E$ of vertices and edges.
A linear order on $V \cup E$ is a {\bf construction
sequence} (or {\bf c-sequence}) for $G$ if each edge appears after both of its endpoints. For instance, for the path $P_3$ with vertices $1,2,3$ and edges $12, 23$, one has construction sequences $(1,2,3,23,12)$ and $(1,2,12,3,23)$, while $(1,3,12,2,23)$ is not a c-sequence. There are a total of 16 c-sequences for $P_3$.
Let ${\cal C}(G)$ be the set of all c-sequences for $G=(V,E)$.
The {\bf construction number} of $G$ is $c(G) := \#{\cal C}(G)$, the number of distinct construction sequences. So $c(P_3)=16$.
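For small graphs, $c(G)$ can be computed by brute force; the following sketch is our own illustration, with edges represented as vertex pairs:

```python
from itertools import permutations

def c_number(vertices, edges):
    """Brute-force construction number: count permutations of V union E
    in which every edge appears after both of its endpoints."""
    elements = list(vertices) + list(edges)
    return sum(
        all(p.index(e) > p.index(e[0]) and p.index(e) > p.index(e[1])
            for e in edges)
        for p in permutations(elements))

# P_3 with vertices 1, 2, 3 and edges 12, 23
assert c_number([1, 2, 3], [(1, 2), (2, 3)]) == 16
```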
The problem of finding the construction numbers of graphs occurred to the author in November 2022, and he proposed, as a problem for the {\it American Math Monthly}, to determine $c(P_n)$. However, after some discussion, the problem to determine $c(K_n)$ for the complete graph $K_n$ was substituted. This was much harder to enumerate, and its solution involved the collaboration of Richard Stong, Jim Tilley, and Stan Wagon. The revised problem is to appear \cite{monthly-prob} with co-proposers PCK, RS, and JT; SW is the Problems Editor: {\it For $K_n = (V,E)$, how many ways are there to linearly order $V \cup E$ such that each edge appears after both vertices comprising the edge}?
After seeing how to extend construction sequences to more abstract objects, we found Stanley's work (e.g., \cite{stanley}), which includes two of our results. Stan Wagon pointed out a different approach to graph construction: {\bf assembly numbers}, due to Vince and B\'ona \cite{vb2012}, motivated by the goal of describing the self-assembly of macromolecules performed by virus capsids in the host cell. (See \S 4 below.)
In this paper, we further refine the notion of construction number to account for the {\bf cost} of having two adjacent vertices with no edge that joins them. A construction sequence for a graph $G$ is {\bf economical} if it has minimum total cost for its edges. This sharply reduces the set of feasible sequences and allows a greedy heuristic.
Section 2 has additional definitions and some lemmas.
In Section 3, we find $c(G)$ when $G$ is $K_{1,n}$ (the star with $n$ peripheral points), path $P_n$, cycle $C_n$, and complete graph $K_n$.
Section 4 describes earlier appearances of construction numbers and considers extensions of c-sequences to hypergraphs, simplicial complexes, CW-complexes, posets, and categories. Section 5 defines cost-functions, some types of c-sequences, and relative constructability of graphs, while the last section has open problems and applications.
\section{Basic definitions and lemmas}
Let $G = (V,E)$ be a labeled graph, where $V=[p]:=\{1,\ldots,p\}$. Let $q:=\#E := |E|$, and let $S := V \cup E$ be the set of {\bf elements} of $G$. Put $\ell := p+q = \#S$. By a {\bf permutation} on $S$, we mean a sequence $x$ of length $\ell$ taken from $S$ where each element appears exactly once. If $s \in S$, write $x(s)$ for the unique $j \in [\ell]$ s.t. $x_j = s$.
Let $P_n$ be the path with $n$ vertices, $K_{1,n}$ the star with a single degree-$n$ hub having $n$ neighbors, each of degree 1; $C_n$ and $K_n$ are the cycle graph and complete graph with $n$ vertices. See, e.g., Harary \cite{harary} for any undefined graph terminology.
A permutation $x$ on $S$ is a {\bf construction sequence} (c-sequence) for $G$ if each edge follows both of its endpoints; i.e., for $e = uw$, $x(u)<x(e)$ and $x(w)<x(e)$. Let ${\cal C}(G)$ be the set of all construction sequences and let $c(G):= \#{\cal C}(G)$ be the {\bf construction number} of $G$. Clearly, $p!q! \leq c(G) \leq (p+q)!$ for each graph $G$.
The {\bf graph sequence} associated with a c-sequence $x$ is the sequence $G_i$ of graphs, $1 \leq i \leq \ell$, where $G_i$ is the subgraph of $G$ determined by the set $\{x_1,\ldots, x_i\}$ of elements, which is indeed a graph.
Let $b_i$ be the number of connected components of $G_i$. Let $\beta(x) := \max_{i \in [\ell]} b_i$ and let $b(x):=(b_1,\ldots,b_\ell)$.
\begin{lemma} [S. Wagon]
If $G$ is connected and has minimum degree $k$, then the last $k$ entries in any $x \in {\cal C}(G)$ are edges. Moreover,
$x(v) \leq \ell - r$, where $r = \deg(v,G)$.
\end{lemma}
Given two element-wise disjoint finite sequences $s_1$ and $s_2$ of lengths $n$ and $m$, we define a {\bf shuffle} of the two sequences to be a sequence of length $n+m$ which contains both $s_1$ and $s_2$ as subsequences. The number of shuffle sequences of $s_1$ and $s_2$ is ${{n+m}\choose{n}}$, giving the construction number of a disjoint union in terms of its parts.
\begin{lemma}
If $x_1$ and $x_2$ are c-sequences for disjoint graphs $G_1$ and $G_2$, resp., then each shuffle of $x_1$ and $x_2$ is a c-sequence for $G_1 \cup G_2$, and we have
\begin{equation}
c(G_1 \cup G_2) = c(G_1) c(G_2) {{\ell_1+\ell_2}\choose{\ell_1}},
\end{equation}
where $\ell_1$ and $\ell_2$ are the lengths of the sequences $x_1$ and $x_2$, resp.
\label{lm:union}
\end{lemma}
The number of ways to extend a c-sequence for $G {-} v$ to a sequence for $G$ can depend on which c-sequence is chosen.
For example, take $P_2 = (\{1,2\},\{a\})$, where $a=12$. Then ${\cal C}(P_2)=\{x',y'\}$, where $x'=(1,2,a) \equiv 12a$ and $y'=21a$. Consider $P_3 = (\{1,2,3\}, \{a,b\})$, where $b=23$. As $P_2 \subset P_3$, each c-sequence for $P_3$ extends a c-sequence of $P_2$. One finds that
$x'$ has exactly 7 extensions (in short form)
\[ 312ab, 312ba,132ab,132ba,123ab,123ba,12a3b\]
to $x \in {\cal C}(P_3)$, while $y'$ has exactly 9 extensions (in short form)
\[321ab,321ba,32b1a,231ab,231ba,23b1a,213ab,213ba,21a3b. \]
This gives $c(P_3)=16$ as it should; see below.
The previous lemma extends to any finite disjoint union. If $G_1, \ldots, G_n$ have $\ell_1, \ldots, \ell_n$ elements, then for $\ell := \ell_1 + \cdots + \ell_n$,
\begin{equation}
c(G_1 \cup \cdots \cup G_n) = \prod_{i=1}^n c(G_i) {{\ell}\choose{\ell_1, \ldots, \ell_n}}.
\label{eq:disj-union}
\end{equation}
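Equation (\ref{eq:disj-union}) can be spot-checked by brute force, e.g., for two disjoint copies of $P_2$ (a sketch of ours, with edges as vertex pairs):

```python
from itertools import permutations
from math import comb

def c_number(vertices, edges):
    """Brute-force construction number (edges as vertex pairs)."""
    elements = list(vertices) + list(edges)
    return sum(
        all(p.index(e) > p.index(e[0]) and p.index(e) > p.index(e[1])
            for e in edges)
        for p in permutations(elements))

# two disjoint copies of P_2 (3 elements each): c = 2 * 2 * C(6, 3) = 80
assert c_number([1, 2, 3, 4], [(1, 2), (3, 4)]) == \
    c_number([1, 2], [(1, 2)])**2 * comb(6, 3)
```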
Let $G=([p],E)$ and let $v \in [p]$. The {\bf based construction number} of $(G,v)$ is the number of c-sequences for $G$ which start with the {\bf base point} $v$. We write ${\cal C}(G,v)$ for the set of suitably restricted c-sequences and $c(G,v)$ for its cardinality.
Every c-sequence starts with a vertex, so $c(G) = \sum_{v \in VG} c(G,v)$. Further, we have
\begin{lemma}
If $v, w \in VG$ and if $\exists \phi: G \to G$ an isomorphism such that $\phi(v)=w$, then $c(G,v) = c(G,w)$.
\label{lm:iso}
\end{lemma}
\begin{proof}
Let $x \in {\cal C}(G,v)$. Then $\tilde{x} := (\phi(x_1), \ldots, \phi(x_\ell))$ is a c-sequence for $G$ starting at $w$, and $x \mapsto \tilde{x}$ gives a bijection from ${\cal C}(G,v)$ to ${\cal C}(G,w)$.
\end{proof}
For $i=1,\ldots, n$, let $G_i$ be pairwise-disjoint graphs with $\ell_i$ elements, $v_i \in VG_i$, and suppose the vertices $v_i$ are identified to a single vertex $v$.
Let $G := \bigvee_{i=1}^n G_i$ be the resulting {\bf wedge-product} graph with {\bf base point} $v$. Then as in (\ref{eq:disj-union}), we have
\begin{equation}
c(G,v) = \prod_{i=1}^n c(G_i,v_i) {{\ell-1}\choose{\ell_1-1, \ldots, \ell_n-1}}.
\label{eq:wedge-prod}
\end{equation}
where $\ell := \ell_1 + \cdots + \ell_n - (n-1)$ is the number of elements of $G$.
\section{Construction numbers for some graph families}
In this section, we find $c(G)$ when $G$ is a star, path, cycle, or complete graph. The first result is also due to Stan Wagon.
\begin{theorem}
For $n \geq 0$, $c(K_{1,n}) = 2^n(n!)^2$.
\end{theorem}
\begin{proof}
For $n=0,1$, the result holds. Suppose $n \geq 2$ and let $x = (x_1, \ldots, x_{2n+1})$ be a construction sequence for $K_{1,n}$. There are $n$ edges $e_i = v_0 v_i$, where $v_0$ is the central node, and one of the edges, say, $e_i$, must be the last term in $x$. This leaves $2n$ coordinates in $x' := (x_1, \ldots, x_{2n})$ and one of them is $v_i$. The remaining $(2n-1)$ coordinates are a construction sequence for the $(n-1)$-star $K_{1,n-1}$. Hence, $c(K_{1,n}) = n (2n) c(K_{1,n} - v_i)= 2n^2 2^{n-1}(n-1)!^2 = 2^n (n!)^2$ by induction.
\end{proof}
The numbers 2, 16, 288, 9216, 460800 generated by the above formula count the number of c-sequences for $K_{1,n}$ for $n \in \{1,2,3,4,5\}$.
These numbers are the absolute values of sequence A055546 in the OEIS \cite{oeis} and describe the number of ways to seat $n$ men and $n$ women in a roller coaster with $n$ rows, where each row has two seats which must be occupied by a man and a woman.\\
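The closed form is easy to confirm by exhaustive search for small $n$ (a brute-force sketch of ours, hub labeled 0):

```python
from itertools import permutations
from math import factorial

def c_star(n):
    """Brute-force c(K_{1,n}): hub 0, leaves 1..n, edges (0, k)."""
    edges = [(0, k) for k in range(1, n + 1)]
    elements = list(range(n + 1)) + edges
    return sum(
        all(p.index(e) > p.index(e[0]) and p.index(e) > p.index(e[1])
            for e in edges)
        for p in permutations(elements))

for n in range(1, 4):
    assert c_star(n) == 2**n * factorial(n)**2
```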
Note that the star $K_{1,n}$ is the wedge-product of $n$ copies of $K_2$. There is a unique based construction sequence for $K_2$. Using (\ref{eq:wedge-prod}),
\begin{equation}
c(K_{1,n}, \star)= {{2n}\choose{2, \ldots, 2}}, \;\mbox{where the base-point $\star$ is the hub of the star}.
\end{equation}
Counting c-sequences for cycles and paths is essentially the same problem.
\begin{lemma}
If $n \geq 3$, then $c(C_n) = n \cdot c(P_n)$.
\label{lm:cycle-path}
\end{lemma}
\begin{proof}
If $x \in {\cal C}(C_n)$, then $x_\ell$ is an edge; the remainder is a c-sequence for $P_n$.
\end{proof}
Before determining $c(P_n)$, we give a Catalan-like recursion for these numbers.
\begin{lemma}
$c(P_n) = \sum_{k=1}^{n-1} c(P_k) \,c(P_{n-k}) \,{{2n-2}\choose{2k-1}}.$
\label{lm:cat}
\end{lemma}
\begin{proof}
Any construction sequence $x$ for $P_n$ has last entry an edge $e$, whose removal creates subpaths with $k$ and $n-k$ vertices, resp., for some $k$, $1 \leq k \leq n-1$. Now $x$ contains construction sequences for both subpaths which suffices by Lemma \ref{lm:union}.
\end{proof}
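The recursion of Lemma \ref{lm:cat} is straightforward to implement (our own sketch):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def c_path(n):
    """c(P_n) via the recursion: condition on the last edge placed,
    which splits the path into subpaths with k and n-k vertices."""
    if n == 1:
        return 1
    return sum(c_path(k) * c_path(n - k) * comb(2 * n - 2, 2 * k - 1)
               for k in range(1, n))

assert [c_path(n) for n in range(1, 7)] == [1, 2, 16, 272, 7936, 353792]
```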
Trivially, $c(P_1) = 1$ and recursion gives the sequence
$ 1, 2, 16, 272, 7936, 353792$ for $n=1,\ldots,6$, in the OEIS as A000182 \cite{oeis}, the sequence of {\bf tangent numbers}, $T_n$, which has a long and interesting history. For instance, its exponential generating function is $\tan(x)$, and it corresponds to the odd terms in the sequence of Euler numbers \cite[A000111]{oeis}; see, e.g., Kobayashi \cite{kobayashi}. Here are two proofs that $c(P_n) = T_n$.\\
{\bf Proof 1.}\\
Let $U(n) \subset S(n)$ be the {\bf up-down} permutations,
where consecutive differences switch sign, and the first is positive. It is well-known \cite{mathw2} that $\#U(2n-1) = T_n$.
\begin{proposition}[D. Ullman]
There is a bijection from ${\cal C}(P_n)$ to $U(2n-1)$.
\end{proposition}
\begin{proof}
A permutation $\pi$ of the consecutively labeled elements of a path is a construction sequence if and only if $\pi^{-1}$ is an up-down sequence. Indeed, $\pi^{-1}(2j)$ is the position in $\pi$ occupied by the $j$-th edge, while $\pi^{-1}(2j-1),\pi^{-1}(2j+1)$ correspond to the positions of the two vertices flanking the $j$-th edge and so are smaller iff $\pi$ is a construction sequence. Hence, $\pi^{-1}$ is an up-down sequence, and conversely.
\end{proof}
For instance, $P_5$ gives the sequence $(1,2,3,4,5,6,7,8,9)$, where odd-numbers correspond to vertices and even numbers to edges. An example c-sequence for $P_5$ is $\pi = (5,9,7,6,3,8,4,1,2)$; each even number (e.g., 4) is preceded by its odd neighbors (3 and 5) - i.e., each edge by its two endpoints.
The inverse sequence $\pi^{-1} = (8,9,5,7,1,4,3,6,2)$ is up-down, as required.\\
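The bijection can be checked mechanically on the example above (a sketch of ours):

```python
# P_5 has elements 1..9 in path order: odd numbers are vertices, even are edges.
pi = (5, 9, 7, 6, 3, 8, 4, 1, 2)          # a c-sequence for P_5
inv = [0] * (len(pi) + 1)
for position, element in enumerate(pi, start=1):
    inv[element] = position               # inverse permutation: element -> position
inv = inv[1:]
assert inv == [8, 9, 5, 7, 1, 4, 3, 6, 2]
# up-down: consecutive differences alternate in sign, the first being positive
diffs = [b - a for a, b in zip(inv, inv[1:])]
assert all((d > 0) == (i % 2 == 0) for i, d in enumerate(diffs))
```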
{\bf Proof 2.}\\
By \cite[A000182]{oeis}, $T_n = J_{2n-1}$, where for $r \geq 1$, $J_r$ denotes the number of permutations of $\{0,1, \ldots, r+1\}$ which begin with `$1$', end with `$0$', and have consecutive differences which alternate in sign.
Then $J_{2k}=0$ for $k \geq 1$ as the sequences counted by $J$ must begin with an {\it up} and end with a {\it down} and hence have an odd number of terms. These {\bf tremolo} sequences are in one-to-one correspondence with ``Joyce trees'' and were introduced by Street \cite{rs} who showed they satisfy the following recursion.
\begin{proposition} [R. Street]
For $r \geq 3$,
$J_r = \sum_{m=0}^{r-1} {{r-1}\choose{m}} J_m J_{r-1-m}$.
\label{pr:street}
\end{proposition}
Now we show $c(P_n) = J_{2n-1}$. Indeed,
$J_1 = c(P_1)$ and $J_3 = c(P_2)$. Replace $J_{2r-1}$ by $c(P_r)$ and
$J_{2r}$ by zero and re-index;
Street's recursion becomes Lemma \ref{lm:cat}, so $c(P_n)$ and $J_{2n-1}$ both satisfy the same recursion and initial conditions. But $J_{2n-1} = T_n$.\\
By \cite{mathw} and \cite[24.15.4]{dlmf}, we have for $n \geq 1$ with $B_{2n}$ the $2n$-th Bernoulli number,
\begin{equation}
c(P_n) = T_n = (1/n) {{2^{2n}}\choose{2}} |B_{2n}|.
\label{eq:path}
\end{equation}
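Formula (\ref{eq:path}) can be checked against the first few tangent numbers with exact rational arithmetic (a sketch of ours; the Bernoulli values are hard-coded known constants):

```python
from fractions import Fraction
from math import comb

# |B_2|, |B_4|, |B_6|, |B_8| (known values, hard-coded rather than computed)
absB = {1: Fraction(1, 6), 2: Fraction(1, 30), 3: Fraction(1, 42), 4: Fraction(1, 30)}
tangents = [1, 2, 16, 272]                 # T_1, ..., T_4 = c(P_1), ..., c(P_4)
for n in range(1, 5):
    # (1/n) * C(2^{2n}, 2) * |B_{2n}|
    assert Fraction(comb(2**(2 * n), 2), n) * absB[n] == tangents[n - 1]
```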
An asymptotic analysis \cite{random-constr} shows
$c(P_n)$ is exponentially small compared to $c(K_{1,n-1})$.
Using Lemma \ref{lm:cycle-path} and equation (\ref{eq:path}), we have for $n \geq 1$,
\begin{equation}
c(C_n) = {{2^{2n}}\choose{2}} |B_{2n}|
\label{eq:cycle}
\end{equation}
The first two cycles are CW-complexes: $C_1$ has one vertex and a loop, and $C_2$ has two vertices
and two parallel edges; the formula is correct for both (and for $n \geq 3$).
If $\star$ is one of the endpoints of $P_n$, we can calculate $c(P_n,\star)$ for the first few values, getting (with some care for the last term) $1,1,5,61$ for $n=1,2,3,4$. In fact,
\begin{equation}
c(P_n,\star) = S_n,
\label{eq:star-base}
\end{equation}
where $S_n$ is the $n$-th secant number (\cite[A000364]{oeis}), counting the ``zig'' permutations.
\subsection{Complete graphs}
This section is omitted until the Monthly problem \cite{monthly-prob} has appeared and its solutions are collected. The solution is due primarily to S. Wagon, R. Stong, and J. Tilley.
\section{Earlier appearances and extensions}
The concept of construction sequence for graphs was already known in a more general
context \cite[p 10]{stanley} but integer sequences were only briefly considered. Stanley studied the number of linear extensions of a partial order \cite[p 8]{stanley}, using them to define a polynomial \cite[p 130]{enum-comb1}. In \cite[p 7]{two} he showed the number of linear extensions of a partial order determined by a path is an Euler number, implying (\ref{eq:path}) and (\ref{eq:star-base}).
Now take the Hasse diagram of any partial order; define a construction sequence for the diagram to be a total order on the elements such that each element is preceded in the linear order by all elements which precede it in the partial order.
Simplicial and CW-complexes can be partially ordered by ``is a face of'', and the linearization of posets includes graphs and hypergraphs as 2-layer Hasse diagrams,
where for $r$-regular hypergraphs the elements in the top layer have degree $r$.
A notion which sounds similar to construction numbers is due to Vince and B\'ona: the number of {\it assembly trees} of a graph \cite{vb2012}. Assembly numbers count ways to build up a graph from subgraphs induced by various subsets of the vertices.
For $n$-stars, \cite{vb2012} gives $n!$, while for paths and cycles, a Catalan-type value is found. Thus, assembly-tree numbers and construction numbers can be quite different.
Construction sequences make sense for hypergraphs, multigraphs, and indeed for any CW-complex. In the latter case, one counts sequences of cells, where each cell must follow the cells to which it is attached. For simplicial complexes, one might start with simplexes, cubes, and hyperoctahedra (the standard Platonic polytopes), and the sporadic instances in 3 and 4 dimensions.
One could also consider construction sequences for topoi and for (co)limits of diagrams in categories, even beyond the finite realm \cite[p 77]{stanley}. Philosophically, emergent concepts follow the substrate from which they arise.
\section{Types of construction sequences}
In this section, we describe {\it economical, easy, {\rm and} greedy} construction sequences.
Let $G=(V,E)$ be a graph, let $x \in {\cal C}(G)$, and let $e \in E$. We define the {\bf cost} of edge $e = uw$ with respect to a construction sequence $x$ to be
$$\nu(e,x) := 2x(e) - x(u) - x(w)$$
where for all $s \in V \cup E$, we have
$x(s)=j$ if and only if $x_j = s$.
Let
$$\nu(x) := \sum_{e \in E} \nu(e,x)$$
be the cost of $x$, and let $\nu(G)$ be the least cost of any of its c-sequences. Thus, edge-cost is the delay between placement of its endpoints and placement of the edge.
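In code, edge cost and total cost can be sketched as follows (an illustration of ours, with edges represented as pairs of vertices):

```python
def nu(seq):
    """Total cost nu(x) = sum over edges e = (u, w) of 2 x(e) - x(u) - x(w),
    where x(s) is the 1-based position of element s in the sequence."""
    pos = {s: i for i, s in enumerate(seq, start=1)}
    return sum(2 * pos[e] - pos[e[0]] - pos[e[1]]
               for e in seq if isinstance(e, tuple))

# a c-sequence for P_3: edge (1,2) costs 3 and edge (2,3) costs 4
assert nu([1, 2, (1, 2), 3, (2, 3)]) == 7
```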
The {\bf greedy algorithm} $\mathbb{G}$ takes as input a graph $G =(V,E)$ and a linear order $\lambda$ on $V$, $\lambda := (v_1, \ldots, v_p)$, and outputs a c-sequence $x := x(\lambda) := \mathbb{G}(G,\lambda) \in {\cal C}(G)$. As soon as an edge is {\bf available} (meaning that both its endpoints have appeared), the greedy algorithm selects it. If several edges are available, some method of breaking ties is employed - e.g., using lexicographic order or avoiding cycles as long as possible. When no edges are available, the next vertex on the input list is selected, thereby increasing the number of connected components. Put $\mathbb{G}(G) := \{\mathbb{G}(G,\lambda) : \lambda \;\mbox{linear order on}\; V\}$.
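A minimal sketch of $\mathbb{G}$ (our own illustration; ties broken lexicographically, edges as vertex pairs):

```python
def greedy(vertices, edges):
    """Greedy c-sequence: place an available edge as soon as possible;
    otherwise take the next vertex from the input order."""
    placed, remaining, seq = set(), set(edges), []
    for v in vertices:
        seq.append(v)
        placed.add(v)
        while True:
            avail = sorted(e for e in remaining if set(e) <= placed)
            if not avail:
                break
            seq.append(avail[0])          # lexicographic tie-breaking
            remaining.remove(avail[0])
    return seq

# the natural order on the path 1-2-3-4 yields 1, 2, [12], 3, [23], 4, [34]
assert greedy([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]) == \
    [1, 2, (1, 2), 3, (2, 3), 4, (3, 4)]
```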
We call a c-sequence for $G$ with minimum cost {\bf economical} (an {\bf ec}-sequence). Let ${\cal C}'(G)$ be the set of ec-sequences for $G$ and let $c'(G)$ be its cardinality.
\begin{conjecture}
If $G$ is any connected graph, then $\mathbb{G}(G)$ contains ${\cal C}'(G)$.
\end{conjecture}
A good question is, for a specific graph, how to choose the vertex ordering $\lambda$ on which to run the greedy algorithm. For the path, the obvious two choices work.
\begin{lemma}
Let $n \geq 2$. Then $\nu(P_n)=4n-5$ and $c'(P_n) = 2$.
\end{lemma}
\begin{proof}
If $P_n$ is the $n$-path with $V = [n]$ and natural linear order, the greedy algorithm gives the c-sequence
$121'32'43'54'\cdots n (n-1)'$, where we write $k'$ for the edge $[k,k{+}1]$.
The first edge costs 3, while each subsequent edge costs 4. The unique nontrivial isomorphism of $P_n$ produces the other member of ${\cal C}'(P_n)$ by Lemma \ref{lm:iso}.
\end{proof}
For $K_{1,n}$, every vertex ordering that starts from the hub 0 causes the greedy algorithm to give an $x$ with cost $(n+1)^2 - 1$. In fact, it is better to take $\lfloor n/2 \rfloor$ of the peripheral vertices (in any order) followed by the hub vertex, then fill in the available edges (all orders produce the same cost), and then continue with each remaining peripheral vertex, in any order, followed by the unique newly available edge.
For $K_{1,5}$, one gets $x(012345) = (011'22'33'44'55')$ (dropping commas and letting $n'$ denote the edge $[0,n]$) and $\nu(x(012345))= 35 = (5+1)^2 - 1$. But $x(120345) = (1201'2'33'44'55')$ has cost $4+5+5+7+9=30$. Postponing 0 reduces later delay.
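The two costs in the $K_{1,5}$ example can be recomputed directly (a sketch of ours; hub 0, edge $k'$ written as the pair $(0,k)$):

```python
def nu(seq):
    """Total cost: sum over edges (u, w) of 2 x(e) - x(u) - x(w), 1-based."""
    pos = {s: i for i, s in enumerate(seq, start=1)}
    return sum(2 * pos[e] - pos[e[0]] - pos[e[1]]
               for e in seq if isinstance(e, tuple))

hub_first = [0, 1, (0, 1), 2, (0, 2), 3, (0, 3), 4, (0, 4), 5, (0, 5)]
postponed = [1, 2, 0, (0, 1), (0, 2), 3, (0, 3), 4, (0, 4), 5, (0, 5)]
assert nu(hub_first) == 35        # (5+1)^2 - 1
assert nu(postponed) == 30        # postponing the hub reduces later delay
```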
We note that for the path, if $x$ is an economical c-sequence, then $\beta(x) \leq 2$; but for the star, this does not hold.
The {\bf easy} sequences are the c-sequences obtained by listing all the vertices first and then all the edges. Once the vertices are listed, {\it cost is independent of the ordering of the edges}. It suffices to check this for the interchange of two adjacent edges. One of the edges moves left and so has its cost reduced by 2 while the edge that moves to the right has its cost increased by 2; hence, the sum remains constant.
\begin{lemma}
For $n \geq 2$, let $x \in {\cal C}(P_n)$ be any easy sequence which begins with $v_1, \ldots, v_n$ in the order they appear along the path. Then $\nu(x) = {{2n-1}\choose{2}}$.
\end{lemma}
\begin{proof}
Let $n \geq 2$ and put $x_0 := 1 \,2 \,3 \cdots n\, [n-1,n] \,[n-2,n-1] \cdots 23 \,12$. Then $x_0$ has cost $\nu(x_0) = 3 + 7 + \cdots + 4(n-1)-1 = {{2n-1}\choose{2}}$, so $\nu(x) = {{2n-1}\choose{2}}$.
\end{proof}
We have examples where the cost of an easy sequence for $P_n$, which begins with some random order of vertices, can be slightly higher or lower than ${{2n-1}\choose{2}}$.
\begin{lemma}
Let $n \geq 3$. Then $\nu(C_n)=6n-4$ and $c'(C_n) = n\cdot 2^{n-1}$.
\end{lemma}
\begin{proof}
Since every vertex of $C_n$ has degree 2, $\nu(x) = 2\sum_{e \in E} x(e) - 2\sum_{v \in V} x(v)$ depends only on the set of positions occupied by edges. A prefix of a c-sequence containing $k<n$ edges spans a forest and so contains at least $k+1$ vertices; hence the earliest possible edge positions are $3,5,\ldots,2n-1,2n$, and this pattern gives $\nu(x) = 6n-4$. The economical c-sequences are exactly those realizing this pattern: start with an ordered pair of adjacent vertices ($2n$ choices), repeatedly extend either end of the current path by a vertex followed immediately by its edge ($2$ choices at each of the $n-3$ intermediate steps), and finish with the last vertex and the two closing edges in either order ($2$ choices). Hence $c'(C_n)= 2n\cdot 2^{n-3}\cdot 2 = n\cdot 2^{n-1}$.
\end{proof}
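Both cost formulas can be confirmed by exhaustive search for small $n$ (a brute-force sketch of ours, checking only the minimum cost):

```python
from itertools import permutations

def min_cost(vertices, edges):
    """Minimum of nu over all c-sequences, by brute force."""
    elements = list(vertices) + list(edges)
    best = None
    for p in permutations(elements):
        pos = {s: i for i, s in enumerate(p, start=1)}
        if all(pos[e] > pos[e[0]] and pos[e] > pos[e[1]] for e in edges):
            c = sum(2 * pos[e] - pos[e[0]] - pos[e[1]] for e in edges)
            best = c if best is None else min(best, c)
    return best

for n in (3, 4):
    path = [(k, k + 1) for k in range(1, n)]
    cycle = path + [(1, n)]
    assert min_cost(range(1, n + 1), path) == 4 * n - 5
    assert min_cost(range(1, n + 1), cycle) == 6 * n - 4
```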
A different cost model could be formed by attributing cost to the {\it vertices}, rather than the edges. For $v \in V(G)$, let $E_v$ be the set of edges incident with $v$ and put
$$\kappa(v,x) := \Big( \sum_{e \in E_v} x(e) - x(v)\Big)\Big/\deg(v,G) \;\; \mbox{for}\;\; x \in {\cal C}(G).$$
Then $\kappa(x) := \sum_{v \in V} \kappa(v,x)$ and $\kappa(G) := \min_{x \in {\cal C}(G)} \kappa(x)$ are an alternative measure.
It would also be possible to explore using {\it maximum}, rather than summation, to get the cost of a graph from that of its edges (or vertices), as with $L_1$ vs $L_\infty$ norms.
Rather than follow a discrete model, it is natural to introduce time as a continuous variable.
Suppose $G = (V,E)$ is a graph and let $h: V \cup E \to [0,1]$, where for all $e = uw \in E$, $h(e) > \max(h(u),h(w))$.
One could then define a new edge cost $\tilde{\nu}(e, h)$
\begin{equation}
\tilde{\nu}(e, h) := 2h(e) - h(u) - h(w).
\end{equation}
One might allow an edge to exist just prior to the existence of one or both of its endpoints, if the process of implementing the edge has measurable temporal extent.
Choice of cost function may be influenced by application as we shall discuss. However, merely having a sharply curtailed repertoire of construction sequences may make it easier to find nice c-sequences. Perhaps $c'(G) < c'(H)$ implies $c(G) < c(H)$.
Currently, we don't know how much variation can occur in $c(G)$ among families of graphs with a fixed number of vertices and edges. Let ${\cal G}(p,q)$ be the set of all graphs $G=(V,E)$ with $V=[p]$ and $q = \#E$ and suppose $G \in {\cal F} \subseteq {\cal G}(p,q)$. Define the {\bf constructability} of $G$ in ${\cal F}$ to be $c(G)$ divided by the average over ${\cal F}$,
\begin{equation}
\xi(G, {\cal F}) := \frac{c(G)}{\alpha({\cal F})},\;\;\mbox{where} \;\alpha({\cal F}) := (\# {\cal F})^{-1}\sum_{H \in {\cal F}} c(H).
\end{equation}
The $n$-vertex trees with greatest and lowest constructability are stars and paths \cite{random-constr}. But the value of $\alpha({\cal F})$ for ${\cal F}$ the family of $n$-vertex trees is unknown. We think that diameter and maximum degree tend to be inversely related for maximal planar or maximal outerplanar graphs of fixed order $|V|$. Do these invariants affect $c(G)$ analogously with trees?
Are paths and stars available as spanning trees for the extremal examples of constructability?
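Small cases of constructability can be explored by brute force, assuming (as in \cite{random-constr}) that $c(G)$ counts the construction sequences of $G$, i.e., the linear extensions of its vertex--edge poset. The sketch below counts them by dynamic programming over subsets; the helper name \texttt{count\_csequences} is ours, and the star-versus-path comparison for $4$ vertices and $3$ edges is illustrative.

```python
from functools import lru_cache

def count_csequences(vertices, edges):
    """Number of c-sequences of G: linear orders of V(G) ∪ E(G) in which
    every edge appears after both of its endpoints (a linear-extension count)."""
    elems = list(vertices) + list(edges)
    index = {x: i for i, x in enumerate(elems)}
    nv = len(vertices)

    @lru_cache(maxsize=None)
    def f(placed):
        # placed is a bitmask of already-listed elements
        if placed == (1 << len(elems)) - 1:
            return 1
        total = 0
        for i, x in enumerate(elems):
            if placed >> i & 1:
                continue
            if i >= nv:  # an edge: both endpoints must already be placed
                u, w = x
                if not (placed >> index[u] & 1 and placed >> index[w] & 1):
                    continue
            total += f(placed | 1 << i)
        return total

    return f(0)

path4 = count_csequences("abcd", [("a", "b"), ("b", "c"), ("c", "d")])
star4 = count_csequences("zxyw", [("z", "x"), ("z", "y"), ("z", "w")])
print(path4, star4)  # → 272 288
```

So among graphs in ${\cal G}(4,3)$ the star has more construction sequences than the path, consistent with stars and paths being the extremal trees.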
\section{Discussion}
Aside from their combinatorial relevance for the structure of incidence algebras \cite{stanley} or for enumeration and integer sequences \cite{monthly-prob, random-constr}, construction numbers of graphs might have a deeper theoretical aspect. A natural idea is to think of construction sequences as the outcome of a constrained stochastic process, where a graph {\it evolves} through the addition of new vertices and edges subject to the condition that an edge cannot appear before its endpoints. Any given graph thus could be ``enriched'' by knowledge of its history, either the linear order on its elements or their actual time of appearance. The existence of such histories might enable some new methods of proof - e.g., for the graph reconstruction problem of Ulam and Harary.
Practical applications would include operations research, where directed hyperedges describe complex tasks such as ``build an airbase'' which depend on various supporting infrastructures.
If a link should occur at some moment, one would like the necessary endpoints to happen just-in-time.
Graphs occur in many scientific contexts.
It would be interesting to study the actual construction sequences for the complex biochemical networks found in the organic kingdom. How close are they to being economical?
Brightwell \& Winkler \cite{bw91} showed that counting the linear extensions of a poset is $\#P$-complete and contrast this with randomized polynomial-time algorithms which estimate this number. Their conjecture that $\#P$-completeness holds even for height-2 posets was proved by Dittmer \& Pak \cite{dp2020}, who further included incidence posets of graphs. (Note that their order is the reverse of ours.)
Applications of linear extensions of posets to equidistributed classes of permutations were given by Bj\"orner \& Wachs \cite{bw91}, and Burrow \cite{burrow} has studied using traversals of posets representing taxonomies and concept lattices to construct algorithms for information databases.
Computer calculation of construction numbers to get sample values can aid in finding correct formulas (via the OEIS \cite{oeis}) for inductive proofs, but such computation is difficult due to the large number of permutations.
This might be flipped into an asset by utilizing the theoretical calculations here as a ``teacher'' for neural network or machine learning methods (cf. Talvitie et al. \cite{talvitie}). More ambitiously, a mathematically oriented artificial intelligence could be challenged to discover the formulas above, along with some of the others we would like to have.
There are other ways to build graphs - one could attach stars by adding a vertex and all of its edges to any existing vertex or, for 2-connected graphs, one could attach {\it ears} (i.e., paths attached only by their endpoints). For random graphs, probabilities might depend on history. How are these various approaches related?
\section{Introduction}
Over the past two decades, 3d $\mathcal N=4$ mirror symmetry has attracted much
attention from both physicists and mathematicians (see, for example,
\cite{BFN,BDG,N} and references therein). It is also closely related
to the theory of \textit{symplectic duality} of Braden et al. \cite{BPW,BLPW}.
If two (possibly singular) varieties are symplectic dual to each
other, then there are highly nontrivial identities relating
their geometry and topology. One of the properties predicted
by 3d $\mathcal N=4$ mirror symmetry and
symplectic duality is Hikita's conjecture.
Given a pair of symplectic dual conical symplectic singularities,
Hikita's conjecture relates the coordinate ring of one conical
symplectic singularity to the cohomology ring of the symplectic
resolution of the other. It is stated as follows.
\begin{conjecture}[Hikita {\cite[Conjecture 1.3]{Hi}}]
Let $X$ and $X^{!}$ be a pair of symplectic dual conical
symplectic singularities over $\mathbb{C}$.
Suppose $X^{!}$ admits
a conical symplectic resolution $\tilde{X}^{!}\rightarrow X^{!}$,
and $T$ is a maximal torus of the Hamiltonian action
on $X$. Then there is an isomorphism of graded algebras
\begin{eqnarray*}
\mathrm{H}^\bullet(\tilde{X}^{!},\mathbb{C})\cong\mathbb{C}[X^T].
\end{eqnarray*}
\end{conjecture}
Hikita also proved this conjecture in several cases, such as
hypertoric varieties, finite type
A quiver varieties, and the Hilbert schemes of points in
the plane (which are self-dual). He then asked whether this phenomenon holds for
other examples of symplectic duality.
Later, Nakajima generalized Hikita's conjecture
to the equivariant case (see \cite[\S8]{KTW}), and Kamnitzer, McBreen and
Proudfoot further generalized it
to the quantum case in \cite{KMP}.
Recently Shlykov proved in \cite{Sh} that Hikita's
conjecture holds for the case of the minimal
nilpotent orbit closure $\overline{\mathcal O}_{min}$
in a simple Lie algebra
$\mathfrak{g}$ of ADE type and the dual conical
symplectic singularity, i.e., the Slodowy slice to the
subregular nilpotent orbit in the same Lie algebra.
This is closely related to the duality discovered by Spaltenstein
\cite{Spa} and Lusztig \cite{Lus} (see also \cite{CM} for more details).
By Slodowy \cite{S}, the Slodowy slice
is isomorphic to the Kleinian singularity
$\mathbb C^2/\Gamma$ of the same type.
If we denote by $\widetilde{\mathbb C^2/\Gamma}$
the minimal resolution of $\mathbb C^2/\Gamma$
and assume that the fixed point scheme
for $T$ is the same as the one for a generic $\mathbb C^\times$-action, then
Shlykov proved in \cite{Sh} that
$$\mathrm H^\bullet(\widetilde{\mathbb C^2/\Gamma})
\cong\mathbb C[\overline{\mathcal O}_{min}^{\mathbb C^\times}]
$$
as graded algebras.
The purpose of this paper is to
generalize his work to the equivariant case.
\begin{theorem}\label{maintheorem}
Let $\mathfrak g$ be a complex semisimple Lie algebra of ADE type.
Let $\widetilde{{\mathbb C^2}/\Gamma}$ be the minimal resolution
of the singularity of the same type, and let $\overline{\mathcal O}_{min}$
be the closure of the minimal nilpotent orbit in $\mathfrak g$. Then the equivariant
Hikita conjecture
holds for the pair $\widetilde{{\mathbb C^2}/\Gamma}$ and
$\overline{\mathcal O}_{min}$; that is,
we have isomorphisms of graded algebras:
$$
\begin{array}{ll}
\mathrm{H}^\bullet_{(\mathbb C^{\times})^2}
(\widetilde{{\mathbb C^2}/\Gamma})
\cong B(\mathscr A[\overline{\mathcal O}_{min}]),
&\mbox{if $\mathbb C^2/\Gamma$ is an $A_n$ singularity,}\\[2mm]
\mathrm{H}^\bullet_{\mathbb C^{\times}}
(\widetilde{{\mathbb C^2}/\Gamma})
\cong B(\mathscr A[\overline{\mathcal O}_{min}]),
&\mbox{otherwise,}
\end{array}
$$
where $\mathscr A[\overline{\mathcal O}_{min}]$
is the quantization of $\mathbb C[\overline{\mathcal O}_{min}]$
in the sense of Joseph (see \S\ref{sect:quantization} for the details), and
$B(-)$ is the associated $B$-algebra (see \S\ref{sect:B-algebra} for the
definition).
\end{theorem}
In the above theorem, the $A_n$ singularities and their resolutions
are toric varieties,
and hence we naturally consider the
$(\mathbb C^\times)^2$-equivariant cohomology for them.
For singularities of DE type, there is only a natural $\mathbb C^\times$ action
on them, and we can only consider
their $\mathbb C^\times$-equivariant cohomology, which
has been studied by Bryan and Gholampour in \cite{BG}.
(In fact, Bryan and Gholampour obtained more; they computed
the quantum $\mathbb C^\times$-equivariant cohomology of these varieties,
including the $A_n$ case. The quantum version of Hikita's conjecture
for these varieties will be studied elsewhere.)
On the other hand, Joseph gave in \cite{Jo}
the quantizations of the minimal orbit closures
in Lie algebras of ADE type.
They are the quotients of the corresponding universal enveloping
algebras by some specific two-sided ideals, which
are called the Joseph ideals.
Later, Garfinkle in her thesis \cite{Ga} constructed
the Joseph ideals explicitly.
Interestingly enough, the Joseph ideals in the type A case
are not unique, but are parameterized by the complex numbers $\mathbb C$.
For the other types of Lie algebras, the Joseph ideals are uniquely determined.
Thus in the type A case, if we view the number that parameterizes
the Joseph ideals as a formal parameter, then the quantizations
of the minimal orbits in this case are defined over a polynomial ring in two variables,
which exactly matches the base ring of the $(\mathbb C^\times)^2$-equivariant
cohomology on the dual side. For the other types of minimal orbits,
all the algebras involved are defined over a polynomial ring in one variable.
The proof of Theorem \ref{maintheorem} is then based on the explicit
computations of the algebras in the theorem.
The rest of this paper is organized as follows.
In \S\ref{sect:cohomologyofADE} we first recall some
basic facts on Kleinian singularities, and
then compute the equivariant cohomology of
the minimal resolutions of these singularities.
In \S\ref{sect:quantization}
we study the quantizations of the minimal nilpotent orbit closures
in Lie algebras of ADE type, which is due to Joseph \cite{Jo} and Garfinkle \cite{Ga}.
In \S\ref{sect:B-algebra} we study the corresponding $B$-algebra of these quantizations.
In \S\ref{sect:proofofmainthm} we prove Theorem \ref{maintheorem}.
Namely, for each type of Lie algebras, we give an explicit isomorphism
between the two types of algebras.
\begin{ack}
In the spring of 2021, Professor Yongbin Ruan gave a series of lectures
at Zhejiang University
on his project on the mirror symmetry of nilpotent orbits
of semi-simple Lie algebras.
This paper is also motivated
by our study of his lectures. We are extremely grateful to him as well as
IASM, Zhejiang University
for inviting us to attend the lectures and for offering excellent working conditions.
This work is supported by NSFC Nos. 11890660, 11890663 and 12271377.
\end{ack}
\section{Equivariant cohomology of ADE resolutions}\label{sect:cohomologyofADE}
In this section, we study the equivariant
cohomology of the minimal resolutions of Kleinian singularities.
The type A case is discussed
in \S\ref{subsect:typeA}
and the remaining cases are discussed in \S\ref{BG}.
\subsection{Kleinian singularities}\label{subsect:Kleiniansing}
Let $\Gamma$ be a finite subgroup of
$\mathrm{SL}_2(\mathbb C)$.
It naturally acts on $\mathbb C^2$ via the canonical
action of $\mathrm{SL}_2(\mathbb C)$.
The singularity
$\mathbb C^2/\Gamma$ is called a Kleinian singularity,
and has been widely studied. The following table summarizes
the classification of Kleinian singularities:
\begin{center}
\begin{tabular}{p{2cm}p{4cm}p{5cm}}
\hline
Type & $\Gamma$ &Defining equation\\
\hline\hline
$A_n$& Cyclic group $\mathbb{Z}_{n+1}$ & $x^{n+1}-yz=0$\\
$D_n$& Binary dihedral group &$x(y^2-x^{n-2})+z^2=0$\\
$E_6$& Binary tetrahedral group & $x^4+y^3+z^2=0$\\
$E_7$& Binary octahedral group & $x^3+xy^3+z^2=0$\\
$E_8$& Binary icosahedral group & $x^5+y^3+z^2=0$\\
\hline
\end{tabular}
\end{center}
The singularity $\mathbb C^2/\Gamma$ has a unique minimal
resolution, denoted by $\widetilde{\mathbb C^2/\Gamma}$,
whose exceptional fiber is given by a tree of $\mathbb{CP}^1$'s.
The corresponding tree has the $\mathbb{CP}^1$'s as its vertices,
with an edge between two given vertices for each intersection
point of the corresponding $\mathbb{CP}^1$'s.
It turns out that the trees so constructed are exactly the Dynkin diagrams
of the Lie algebras of the same type.
There is another direct relationship between the Kleinian singularities
and the Lie algebras; namely, the Kleinian singularities
are exactly the Slodowy slices to the subregular nilpotent orbits
in the Lie algebra of the same type.
Let $\mathfrak g$ be a Lie algebra. Recall that
the nilpotent cone of $\mathfrak g$, usually denoted
by $\mathcal N$, is the set
$$\mathcal N:=\left\{x\in\mathfrak g: (\mathrm{ad}_x)^n=0\,\,
\mbox{for some $n\in\mathbb N$}\right\}.$$
\begin{definition}[Slodowy slice \cite{S}]
Let $x \in\mathfrak{g}$ be a nilpotent element. Extend this to a
choice of $\mathfrak{sl}_2(\mathbb{C})$
triple $\langle x, h, y\rangle \subseteq \mathfrak{g}$. The
Slodowy slice associated to $(\mathfrak{g}, x)$ is the
affine sub-variety $S = x + \ker[y, -] \subseteq \mathfrak{g}$.
\end{definition}
It is a transverse slice to the nilpotent orbit $\mathcal{O}$ through the point $x$.
\begin{theorem}[Brieskorn \cite{Br} and Slodowy \cite{S}]
\label{Grothendieck-Brieskorn}
Let $\mathfrak{g}$ be simply-laced, $\mathcal{N} \subseteq \mathfrak{g}$
denote the nilpotent cone, and $S_x$ be the Slodowy slice to a subregular nilpotent
element $x\in\mathcal{O}_{sub}$. The intersection $S_x \cap \mathcal{N}$
is a Kleinian surface singularity with the same Dynkin diagram as
$\mathfrak{g}$. Moreover, the symplectic resolution
$\widetilde{S_x \cap \mathcal{N}}\rightarrow S_x \cap \mathcal{N}$
coincides with the minimal resolution of the Kleinian singularity $\widetilde{\mathbb{C}^2/\Gamma}\rightarrow\mathbb{C}^2/\Gamma$.
\end{theorem}
\subsection{Equivariant cohomology of resolutions of $A_n$ singularities}\label{subsect:typeA}
Let $\Gamma=\mathbb{Z}_{n+1}$ with $\xi$ being
the generator of $\Gamma$. The finite group $\Gamma$ acts on $\mathbb C^2$ as
$$\xi \cdot (z_1, z_2)=\left(e^{\frac{2\pi i}{n+1}}z_1, e^{-\frac{2\pi i}{n+1}}z_2\right).$$
The associated singularity, denoted by $A_n$, is given by $\mathbb C^2/\Gamma$.
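As a sanity check, the invariants of this $\mathbb Z_{n+1}$-action are generated by $x=z_1z_2$, $y=z_1^{n+1}$, $z=z_2^{n+1}$, which satisfy the relation $x^{n+1}=yz$ from the table above. A quick numerical verification (an illustrative sketch, not part of the argument):

```python
import cmath

n = 4
omega = cmath.exp(2j * cmath.pi / (n + 1))   # generator of Z_{n+1} acting on C^2

z1, z2 = 0.7 + 0.2j, -0.3 + 1.1j             # an arbitrary sample point
w1, w2 = omega * z1, z2 / omega              # the action (z1, z2) -> (w z1, w^{-1} z2)

# The three basic invariants of the action:
x, y, z = z1 * z2, z1 ** (n + 1), z2 ** (n + 1)
xg, yg, zg = w1 * w2, w1 ** (n + 1), w2 ** (n + 1)

# They are unchanged along the orbit, and satisfy the A_n equation x^{n+1} - yz = 0.
assert abs(x - xg) < 1e-9 and abs(y - yg) < 1e-9 and abs(z - zg) < 1e-9
assert abs(x ** (n + 1) - y * z) < 1e-9
```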
\subsubsection{Toric construction}
Let $\mathcal{A}_n$ be the minimal resolution of $A_n$,
which is also called the
Hirzebruch-Jung resolution. Then $\mathcal{A}_n$ is a toric variety,
whose fan in ${\mathbb R}^2$ has the $n+2$ rays generated by
\begin{equation*}
v_0=(1, 0), v_1=(0, 1), v_2=(-1,2), \cdots, v_{n+1}=(-n, n+1).
\end{equation*}
By Cox et al. \cite{Cox}, we can view $\mathcal{A}_n$ as a GIT quotient, namely,
$$\mathcal{A}_n\cong \big(\mathbb C^{n+2}-Z(\Sigma)\big)/(\mathbb C^\times)^n,$$
where $Z(\Sigma)=\{z_0\cdots\widehat{z_i}\cdots z_{n+1}=0|1\leq i\leq n+1\}$,
and the $(\mathbb C^\times)^n$ action on $\mathbb C^{n+2}$ is as follows:
for any $(\lambda_1,\cdots, \lambda_n)\in (\mathbb C^\times)^n$,
\begin{equation}\label{A_n GIT}
(\lambda_1,\cdots, \lambda_n)\cdot(z_0, \cdots,
z_{n+1}):=(\lambda_1z_0,\lambda_1^{-2}\lambda_2z_1,
\lambda_1\lambda_2^{-2}\lambda_3 z_2,
\cdots, \lambda_{n-1}\lambda_n^{-2}z_n, \lambda_n z_{n+1}).
\end{equation}
Here we use the homogeneous coordinate
$[z_0: z_1:\cdots: z_{n+1}]$ to parametrize the
$({\mathbb C}^\times)^n$-orbit of $(z_0, z_1,\cdots, z_{n+1})$, which is a point in $\mathcal{A}_n$.
The projection $\mathcal{A}_n\rightarrow {\mathbb C}^2/\Gamma$ is
$$[z_0: z_1:\cdots: z_{n+1}]\mapsto
\left((z_0^{n+1} z_1^n z_2^{n-1}\cdots z_n)^{\frac{1}{n+1}},
(z_1z_2^{2}\cdots z_n^n z_{n+1}^{n+1})^{\frac{1}{n+1}}\right).$$
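One can check that the two components of this projection are well defined on $(\mathbb C^\times)^n$-orbits: the exponent vector of each monomial pairs to zero with every weight vector of the action \eqref{A_n GIT}. A small integer verification (illustrative sketch):

```python
n = 6

# Weight of the j-th C^x factor (j = 1..n) on coordinate z_i (i = 0..n+1),
# read off from the GIT action: z_i picks up lambda_{i-1} lambda_i^{-2} lambda_{i+1}.
def weight(j, i):
    return {j - 1: 1, j: -2, j + 1: 1}.get(i, 0)

# Exponent vectors of the two monomials appearing in the projection map:
u = [n + 1 - i for i in range(n + 1)] + [0]   # z_0^{n+1} z_1^n ... z_n
v = [0] + [i for i in range(1, n + 2)]        # z_1 z_2^2 ... z_{n+1}^{n+1}

# Each monomial is invariant: its exponents are orthogonal to every weight row.
for j in range(1, n + 1):
    assert sum(weight(j, i) * u[i] for i in range(n + 2)) == 0
    assert sum(weight(j, i) * v[i] for i in range(n + 2)) == 0
```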
\subsubsection{Equivariant cohomology of $\mathcal{A}_n$}
There is a natural $(\mathbb C^\times)^2$-action on
$\mathcal{A}_n$: for $\eta\in \mathbb C^*$,
\begin{equation}\label{A_n action}
\eta\cdot [z_0: z_1: \cdots : z_{n+1}]=[\eta^{t_1}z_0: z_1:\cdots : z_n: \eta^{t_2}z_{n+1}],
\end{equation}
which has the following $n+1$ fixed points:
\begin{equation*}
p_0=[\underbrace{1:1:\cdots:1}_{n}:0:0], p_1=[\underbrace{1:1:\cdots:1}_{n-1}:0:0: 1],
\cdots, p_n=[0: 0: \underbrace{1:1:\cdots:1}_{n}].
\end{equation*}
The exceptional fiber of $\mathcal{A}_n$ is given by a
tree of $\mathbb{CP}^1$'s corresponding to the Dynkin diagram of $A_n$.
Using the GIT description, the $i$-th $\mathbb{CP}^1$ is the line connecting
$p_{i-1}$ and $p_i$. More precisely, it is the point set
$$\{[z_0: z_1:\cdots: z_{n+1}]\big|z_{n+1-i}=0\}\subseteq \mathcal{A}_n.$$
Recall the following Atiyah-Bott localization theorem.
\begin{proposition}\label{localization}
Suppose $X$ is a variety with a $T$-action on it, and
$X^T$ is the fixed locus of $T$.
Let $i: X^T\hookrightarrow X$ be the embedding.
Then after localization over $\mathrm{H}^\bullet_T(pt)$, the restriction map
$$i^*: \mathrm{H}^\bullet_T(X)\rightarrow \mathrm{H}^\bullet_T(X^T)$$
is an isomorphism of the localized $T$-equivariant cohomology groups.
Furthermore, if $X^T$ is proper, then for any $\alpha\in \mathrm{H}^\bullet_T(X)$,
\begin{equation*}
\int_{X}\alpha=\int_{X^T}\frac{i^*\alpha}{e_T(N_{X^T|X})},
\end{equation*}
where $e_T(N_{X^T|X})$ is the equivariant Euler class of the normal bundle $N_{X^T|X}$.
\end{proposition}
For the $({\mathbb C}^\times)^2$-action described above, since
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(pt)={\mathbb C}[t_1, t_2]$, we have:
\begin{corollary}\label{cor:equivofAn}
As a ${\mathbb C}[t_1, t_2]$-module,
$$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)
={\rm Span}_{{\mathbb C}[t_1, t_2]}\{1, e_1, \cdots, e_n\},$$
where $e_i$ is the cohomology class corresponding to the
$i$-th $\mathbb{CP}^1$ in the exceptional fiber.
\end{corollary}
\subsubsection{Equivariant Poincar\'e pairing}
Notice that $\mathcal{A}_n$ is not proper, but the $({\mathbb C}^\times)^2$-fixed point set is proper,
so we can define a Poincar\'e pairing on the equivariant cohomology
$\mathrm H^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$ via Proposition \ref{localization}.
\begin{definition}
For $\alpha$, $\beta\in \mathrm{H}^\bullet_{T}(X)$, define the equivariant Poincar\'e pairing to be
\begin{equation}\label{eq pa}
\<\alpha, \beta\>:=\int_{X^T}\frac{i^*(\alpha\cup \beta)}{e_T(N_{X^T|X})}.
\end{equation}
\end{definition}
Now we calculate the equivariant Poincar\'e pairing on
$\mathrm H^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$.
\begin{proposition}
On
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$,
with the notation of Corollary \ref{cor:equivofAn},
we have
$$
\<1, 1\>=\frac{1}{(n+1)t_1t_2}, \quad \<1, e_i\>=0,
$$
and
$$
\<e_i, e_j\>=\left\{
\begin{array}{cl}
-2, & \hbox{$i=j,$} \\
1, & \hbox{$|i-j|= 1,$}\\
0, & \hbox{$|i-j|\geq 2.$}
\end{array}
\right.
$$
\end{proposition}
\begin{proof}
Firstly we calculate $e_T(N_{X^T|X})$ in \eqref{eq pa}. In our case, $X^T=\sqcup_{i=0}^{n}p_i$,
and therefore $N_{p_i|\mathcal{A}_n}$ is the tangent space
$T_{p_i}\mathcal{A}_n$, which is a trivial bundle.
Now it suffices to figure out the $({\mathbb C}^\times)^2$-action on $T_{p_i}X$. By \eqref{A_n GIT}
and \eqref{A_n action}, near $p_0$, we have
\begin{align*}
&[\eta^{t_1}z_0:z_1:\cdots:z_n: \eta^{t_2}z_{n+1}]\\
=&[\underbrace{1:1:\cdots:1}_{n}:\eta^{(n+1)t_1}z_0^{n+1}z_1^{n}
\cdots z_{n-1}^2z_n : \eta^{t_2-nt_1}z_0^{-n}z_1^{-(n-1)}\cdots z_{n-1}^{-1}z_{n+1}],
\end{align*}
which means that $y_1=z_0^{n+1}z_1^{n}\cdots z_{n-1}^2z_n$ and
$y_2=z_0^{-n}z_1^{-(n-1)}\cdots z_{n-1}^{-1}z_{n+1}$ are local coordinates near $p_0$.
And the $({\mathbb C}^\times)^2$-action on $T_{p_0}\mathcal{A}_n$ is
$$\eta \cdot (y_1, y_2)=(\eta^{(n+1)t_1}y_1, \eta^{t_2-nt_1}y_2).$$
So $$e_T(T_{p_0}\mathcal{A}_n)=(n+1)t_1\cdot (t_2-nt_1)\in \mathrm H^\bullet_T(p_0).$$
Similarly,
\begin{equation}\label{weight on tangent}
e_T(T_{p_k}\mathcal{A}_n)=\big((n+1-k)t_1-kt_2\big)\cdot\big((k+1)t_2-(n-k)t_1\big)\in
\mathrm H^\bullet_T(p_k).
\end{equation}
Secondly, we lift $e_j$ to $\mathrm{H}^\bullet_T(\mathcal{A}_n)$,
and calculate $i_k^*(e_j)\in \mathrm{H}^\bullet_T(p_k)$,
where $i_k: p_k\rightarrow \mathcal{A}_n$ is the embedding.
Suppose $L_i$ is the line bundle on $\mathcal{A}_n$ defined as
$({\mathbb C}^{n+2}\times {\mathbb C}_{\chi_i})/({\mathbb C}^\times)^{n}$,
where $\chi_i$ is the character of $({\mathbb C}^\times)^{n}$ by projecting to the
$i$-th component. We can set the $({\mathbb C}^\times)^2$ action on $L_i$ as
\begin{equation}\label{A_n action on L_i}
\eta\cdot [z_0: z_1: \cdots : z_{n+1}: v]=[\eta^{t_1}z_0: z_1:\cdots : z_n: \eta^{t_2}z_{n+1}: v],
\end{equation}
which makes $L_i$ a $({\mathbb C}^\times)^2$-equivariant line bundle.
By \eqref{A_n GIT}, \eqref{A_n action} and a similar argument as for $e_T(T_{p_i}\mathcal{A}_n)$, we have
\begin{align}\label{L_i}
e_T(L_i\big|_{p_k})=\left\{
\begin{array}{ll}
-it_1, & \hbox{$0\leq k\leq n-i$,} \\
-(n+1-i)t_2, & \hbox{$n+1-i \leq k\leq n+1$.}
\end{array}
\right.
\end{align}
Notice that $e_j$ corresponds to the divisor $\{z_{n+1-j}=0\}$,
which is the zero locus of a section of line bundle
$\widetilde{L}_j= L_{n-j}\otimes L_{n-j+1}^{-2}\otimes L_{n-j+2}$
(we set $L_i=\mathcal{O}$ when $i\leq 0$ or $i\geq n+1$).
So $e_T(\widetilde{L}_j)$ is a lift of $e_j$ on $\mathrm H^\bullet_T(\mathcal{A}_n)$. By \eqref{L_i},
\begin{align}\label{e_i}
i_k^*(e_j)=e_T(L_{n-j}\otimes L_{n-j+1}^{-2}\otimes L_{n-j+2}\big|_{p_k})=\left\{
\begin{array}{lll}
(n-j+2)t_1-(j-1)t_2, & \hbox{$k=j-1$;} \\
-(n-j)t_1+(j+1)t_2, & \hbox{$k=j$.}\\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{align}
Finally, plugging \eqref{weight on tangent} and \eqref{e_i} into \eqref{eq pa}, we have
\begin{align*}
\<1, 1\>&=\sum_{k=0}^{n} \frac{1}{\big((n+1-k)t_1-kt_2\big)\cdot \big((k+1)t_2-(n-k)t_1\big)}\\
&=\sum_{k=0}^{n}\frac{1}{t_1+t_2}\cdot \left(\frac{1}{(n+1-k)t_1-kt_2}+\frac{1}{(k+1)t_2-(n-k)t_1}\right)\\
&=\frac{1}{(n+1)t_1t_2}
\end{align*}
and
\begin{align*}
\<e_j, e_j\>=&
\frac{\big((n-j+2)t_1-(j-1)t_2\big)^2}{\big((n+2-j)t_1-(j-1)t_2\big)\cdot \big(jt_2-(n-j+1)t_1\big)}\\
&+\frac{\big(-(n-j)t_1+(j+1)t_2\big)^2}{\big((n-j+1)t_1-jt_2\big)\cdot \big((j+1)t_2-(n-j)t_1\big)}\\
=& -\frac{(-n+j-2)t_1+(j-1)t_2}{jt_2-(n-j+1)t_1}-\frac{(n-j)t_1-(j+1)t_2}{(n-j+1)t_1-jt_2}\\
=&-2.
\end{align*}
The calculation of the other pairings is left to the reader.
\end{proof}
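The pairing formulas above can be double-checked by evaluating the localization sums \eqref{eq pa} with exact rational arithmetic, using the fixed-point data \eqref{weight on tangent} and \eqref{e_i}. A brute-force verification for a sample $n$ and generic rational $t_1, t_2$ (illustrative sketch):

```python
from fractions import Fraction as F

n = 4
t1, t2 = F(3, 7), F(5, 11)   # generic sample values of the equivariant parameters

def euler(k):
    """e_T(T_{p_k} A_n), the equivariant Euler class at the fixed point p_k."""
    return ((n + 1 - k) * t1 - k * t2) * ((k + 1) * t2 - (n - k) * t1)

def restrict(j, k):
    """i_k^*(e_j), the restriction of the class e_j to the fixed point p_k."""
    if k == j - 1:
        return (n - j + 2) * t1 - (j - 1) * t2
    if k == j:
        return -(n - j) * t1 + (j + 1) * t2
    return 0

def pair(f, g):
    """Equivariant Poincare pairing via Atiyah-Bott localization."""
    return sum(f(k) * g(k) / euler(k) for k in range(n + 1))

one = lambda k: 1
e = lambda j: (lambda k: restrict(j, k))

assert pair(one, one) == 1 / ((n + 1) * t1 * t2)
for j in range(1, n + 1):
    assert pair(one, e(j)) == 0
    for l in range(1, n + 1):
        expected = -2 if j == l else (1 if abs(j - l) == 1 else 0)
        assert pair(e(j), e(l)) == expected
```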
Now we calculate the product structure on
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$.
\begin{proposition}
Denote by $\mathcal{A}_n$ the minimal resolution of
the $A_n$ singularity. Then the equivariant cohomology of
$\mathcal A_n$ is
\begin{eqnarray*}
\mathrm H^{\bullet}_{(\mathbb{C}^\times)^2}(\mathcal{A}_n)
=\mathrm{Span}_{\mathbb{C}[t_1,t_2]}\{1,e_1,e_2,\cdots,e_n\},
\end{eqnarray*}
where
\begin{align}
e_j\cup e_{j+1}=&-t_2(e_1+2e_2+\cdots+je_j)-t_1[(n-j)e_{j+1}+\cdots\notag\\
&+2e_{n-1}+e_n]+(n+1)t_1t_2, \label{ejej+1}\\
e_j\cup e_l=&0 \quad\text{ for $|j-l|\geq2$,} \label{elej} \\
e_j\cup e_j=&2t_2(e_1+2e_2+\cdots+(j-1)e_{j-1})+\big((n+2-j)t_1+(j+1)t_2\big)e_j\notag\\
&+2t_1\big((n-j)e_{j+1}+\cdots+2e_{n-1}+e_n\big)-2(n+1)t_1t_2.\label{ej2}
\end{align}
\end{proposition}
\begin{proof}
Here we only verify \eqref{ej2}; the proofs of the other identities are similar and are left to the
reader.
By Proposition \ref{localization}, it suffices to check that, for any $k$,
\begin{align*}
i_k^*(e_j)\cup i_k^*(e_j)=&2t_2\big(i_k^*(e_1)+2i_k^*(e_2)+\cdots+(j-1)i_k^*(e_{j-1})\big)\\
&+\big((n+2-j)t_1+(j+1)t_2\big)i_k^*(e_j)\\
&+2t_1\big((n-j)i_k^*(e_{j+1})+\cdots+2i_k^*(e_{n-1})+i_k^*(e_n)\big)-2(n+1)t_1t_2.
\end{align*}
The above equality is easily checked by
plugging \eqref{e_i} into the two sides of the above equation. For example, for $k=j$,
the left hand side of the above equation equals
$$\big(-(n-j)t_1+(j+1)t_2\big)^2=(n-j)^2t_1^2+(j+1)^2t_2^2-2(n-j)(j+1)t_1t_2$$
while the right hand side equals
$$\big((n+2-j)t_1+(j+1)t_2\big)\cdot \big(-(n-j)t_1+(j+1)t_2\big)
+2(n-j)t_1\cdot \big((n-j+1)t_1-jt_2\big)-2(n+1)t_1t_2,
$$
and they are equal to each other.
\end{proof}
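The full set of product formulas \eqref{ejej+1}--\eqref{ej2} can likewise be verified at every fixed point with exact arithmetic, redoing the check of the proof for all $k$ at once; the sketch below reuses the restrictions \eqref{e_i}.

```python
from fractions import Fraction as F

n = 5
t1, t2 = F(2, 9), F(7, 13)   # generic rational values of t_1, t_2

def r(j, k):
    """i_k^*(e_j) from the localization data."""
    if k == j - 1:
        return (n - j + 2) * t1 - (j - 1) * t2
    if k == j:
        return -(n - j) * t1 + (j + 1) * t2
    return 0

def rhs_ej2(j, k):
    """Restriction to p_k of the claimed expansion of e_j cup e_j."""
    val = -2 * (n + 1) * t1 * t2
    val += 2 * t2 * sum(m * r(m, k) for m in range(1, j))
    val += ((n + 2 - j) * t1 + (j + 1) * t2) * r(j, k)
    val += 2 * t1 * sum((n - m + 1) * r(m, k) for m in range(j + 1, n + 1))
    return val

def rhs_ejej1(j, k):
    """Restriction to p_k of the claimed expansion of e_j cup e_{j+1}."""
    val = (n + 1) * t1 * t2
    val -= t2 * sum(m * r(m, k) for m in range(1, j + 1))
    val -= t1 * sum((n - m + 1) * r(m, k) for m in range(j + 1, n + 1))
    return val

# Two classes agree iff they agree at every fixed point p_0, ..., p_n.
for k in range(n + 1):
    for j in range(1, n + 1):
        assert r(j, k) * r(j, k) == rhs_ej2(j, k)
        if j < n:
            assert r(j, k) * r(j + 1, k) == rhs_ejej1(j, k)
        for l in range(j + 2, n + 1):
            assert r(j, k) * r(l, k) == 0
```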
\subsection{Equivariant cohomology of resolutions of DE singularities}\label{BG}
Let $\widetilde{\mathbb C^2/\Gamma}$ be the minimal resolution
of an ADE singularity. The scalar $\mathbb C^\times$-action
on $\mathbb C^2$ commutes with the action of $\Gamma$,
and hence descends to an action on $\mathbb C^2/\Gamma$.
It lifts to an action on $\widetilde{\mathbb C^2/\Gamma}$.
Let $\{e_1,e_2,\cdots, e_n\}$ be the set of irreducible components
in the exceptional fiber in
$\widetilde{\mathbb C^2/\Gamma}$, which gives a basis
of $\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
It is direct to see that they are invariant under the $\mathbb C^\times$-action,
and hence lift to a basis of the $\mathbb C^\times$-equivariant homology.
The intersection matrix $(e_i\cap e_j)$ defines a nondegenerate pairing on
$\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
With this pairing we may identify
$\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$
with
$\mathrm{H}^2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
By a slight abuse of notation, the image of $e_i$ in the above
isomorphism is also denoted by $e_i$.
Let $\Phi$ be the root system associated
to the Dynkin diagram given by this pairing. We can identify $e_1,\cdots,e_n$
with the simple roots $\alpha_1,\cdots, \alpha_n$
and the intersection matrix with minus the Cartan matrix:
$$e_i\cdot e_j=-\langle \alpha_i,\alpha_j\rangle.$$
Bryan and Gholampour computed the $\mathbb C^\times$-equivariant
cohomology ring
of $\widetilde{\mathbb C^2/\Gamma}$, which is given as follows.
\begin{theorem}[{\cite[Theorem 6]{BG}}]\label{thm:BryanGholam}
The $\mathbb C^\times$-equivariant cohomology ring
$\mathrm H_{\mathbb C^\times}^\bullet(\widetilde{\mathbb C^2/\Gamma})$
of the minimal resolution
of an ADE singularity has the equivariant cup product
\begin{eqnarray*}
e_i\cup e_j=-t^2|\Gamma|\langle\alpha_i,\alpha_j\rangle
+t\sum_{\alpha\in\Delta^+}\langle\alpha_i,\alpha\rangle\langle\alpha_j,\alpha\rangle e_\alpha,
\end{eqnarray*}
where $\Gamma$ is the corresponding finite subgroup of $\mathrm{SL}_2(\mathbb{C})$,
$e_\alpha=c_1e_1+\cdots+c_n e_n$ with $c_1,\cdots, c_n\in\mathbb{N}$
if the positive root $\alpha=c_1\alpha_1+\cdots+c_n \alpha_n$
with $\alpha_1,\cdots, \alpha_n$ being the simple roots, and
$\langle -,-\rangle$ is the inner product in the root system.
\end{theorem}
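Two quick consistency checks of this theorem: for $A_1$ (so $\Gamma=\mathbb Z_2$ and $\Delta^+=\{\alpha_1\}$) it gives $e_1\cup e_1=-4t^2+4te_1$, which agrees with the $t_1=t_2=t$ specialization of \eqref{ej2} for $n=1$; and the $t^2$-constants in the E-type corollaries below must equal $-|\Gamma|\langle\alpha_i,\alpha_j\rangle$. A sketch of the second check, with the constants copied from the corollaries:

```python
# |Gamma| for the binary tetrahedral, octahedral and icosahedral groups,
# together with the t^2-constants printed in the E_6, E_7, E_8 corollaries.
cases = {
    "E6": {"order": 24, "diag": -48, "adjacent": 24},
    "E7": {"order": 48, "diag": -96, "adjacent": 48},
    "E8": {"order": 120, "diag": -240, "adjacent": 120},
}

for name, d in cases.items():
    # <alpha_i, alpha_i> = 2 on the diagonal and <alpha_i, alpha_j> = -1 for
    # adjacent Dynkin nodes, so the theorem predicts these t^2-coefficients:
    assert -d["order"] * 2 == d["diag"]
    assert -d["order"] * (-1) == d["adjacent"]
```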
By the root data of Lie algebras of ADE type (see, for example,
Bourbaki \cite[PLATE I-VII]{Bo}), we may explicitly write down the cup product
in all the cases. In what follows, we only give the
$\mathbb C^\times$-equivariant cohomology
of $\widetilde{\mathbb C^2/\Gamma}$ where $\Gamma$ is of DE type.
\begin{corollary}[The $D_n$ case]\label{cohomology of Dn}
Denote by $\mathcal{D}_n$
the minimal resolution of
the $D_n$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal D_n$
is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{D}_n)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_n\},
\end{eqnarray*}
where
\begin{align}
e_k\cup e_{k+1}=&-2te_1-\cdots-2kte_k-(2n-4)te_{k+1}-\cdots-(2n-4)te_{n-2}\notag\\
&-(n-2)te_{n-1}-(n-2)te_n+4(n-2)t^2,\quad\quad\text{ for } k\leq n-2, \label{ekek+1Dn}\\
e_k\cup e_l=&0,\quad \text{ for $|k-l|\geq2$ with $\{k,l\}\neq\{n-2,n\}$, and } e_{n-1}\cup e_n=0, \label{eiejDn} \\
e_{n-2}\cup e_n=&e_{n-2}\cup e_{n-1}=-2te_1-4te_2-\cdots-(2n-4)te_{n-2}\notag \\
&-(n-2)te_{n-1}-(n-2)te_n+4(n-2)t^2,\label{en-2en} \\
e_k\cup e_k=&4te_1+\cdots+4(k-1)te_{k-1}+(2n+2k-2)te_k
+(4n-8)te_{k+1}+\cdots\notag\\
&+(4n-8)te_{n-2}+(2n-4)te_{n-1}+(2n-4)te_n-8(n-2)t^2,\quad
\text{ for } k\leq n-2,\label{ek2Dn}\\
e_{n-1}\cup e_{n-1}=&4te_1+\cdots+(4n-8)te_{n-2}+2nte_{n-1}+(2n-4)te_n-8(n-2)t^2,\label{en-1^2}\\
e_n\cup e_n=&4te_1+\cdots+(4n-8)te_{n-2}+(2n-4)te_{n-1}+2nte_n-8(n-2)t^2.\label{en^2}
\end{align}
\end{corollary}
\begin{corollary}[The $E_6$ case]
\label{cohomology of E6}
Denote by $\mathcal{E}_6$ the minimal resolution of
the $E_6$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_6$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}
(\mathcal{E}_6)=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_6\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-48t^2+t(14e_1+12e_2+20e_3+24e_4+16e_5+8e_6),\\
e_2\cup e_2=&-48t^2+t(8e_1+16e_2+16e_3+24e_4+16e_5+8e_6),\\
e_3\cup e_3=&-48t^2+t(8e_1+12e_2+20e_3+24e_4+16e_5+8e_6),\\
e_4\cup e_4=&-48t^2+t(8e_1+12e_2+16e_3+26e_4+16e_5+8e_6),\\
e_5\cup e_5=&-48t^2+t(8e_1+12e_2+16e_3+24e_4+20e_5+8e_6),\\
e_6\cup e_6=&-48t^2+t(8e_1+12e_2+16e_3+24e_4+20e_5+14e_6),\\
e_1\cup e_3=&24t^2-t(4e_1+6e_2+10e_3+12e_4+8e_5+4e_6),\\
e_2\cup e_4=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_3\cup e_4=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_4\cup e_5=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_5\cup e_6=&24t^2-t(4e_1+6e_2+8e_3+12e_4+10e_5+4e_6),
\end{align*}
and $e_i\cup e_j=0$ for all the remaining pairs $i, j$.
\end{corollary}
\begin{corollary}[The $E_7$ case]\label{cohomology of E7}
Denote by $\mathcal{E}_7$ the minimal resolution of
the $E_7$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_7$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{E}_7)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_7\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-96t^2+t(22e_1+24e_2+36e_3+48e_4+36e_5+24e_6+12e_7),\\
e_2\cup e_2=&-96t^2+t(16e_1+28e_2+32e_3+48e_4+36e_5+24e_6+12e_7),\\
e_3\cup e_3=&-96t^2+t(16e_1+24e_2+36e_3+48e_4+36e_5+24e_6+12e_7),\\
e_4\cup e_4=&-96t^2+t(16e_1+24e_2+32e_3+50e_4+36e_5+24e_6+12e_7),\\
e_5\cup e_5=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+24e_6+12e_7),\\
e_6\cup e_6=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+30e_6+12e_7),\\
e_7\cup e_7=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+32e_6+20e_7),\\
e_1\cup e_3=&48t^2-t(8e_1+12e_2+18e_3+24e_4+18e_5+12e_6+6e_7),\\
e_2\cup e_4=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_3\cup e_4=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_4\cup e_5=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_5\cup e_6=&48t^2-t(8e_1+12e_2+16e_3+24e_4+20e_5+12e_6+6e_7),\\
e_6\cup e_7=&48t^2-t(8e_1+12e_2+16e_3+24e_4+20e_5+16e_6+6e_7),
\end{align*}
and $e_i\cup e_j=0$ for all the remaining pairs $i, j$.
\end{corollary}
\begin{corollary}[The $E_8$ case]\label{cohomology of E8}
Denote by $\mathcal{E}_8$ the minimal resolution of
the $E_8$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_8$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{E}_8)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_8\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-240t^2+t(46e_1+60e_2+84e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_2\cup e_2=&-240t^2+t(40e_1+64e_2+80e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_3\cup e_3=&-240t^2+t(40e_1+60e_2+84e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_4\cup e_4=&-240t^2+t(40e_1+60e_2+80e_3+122e_4+96e_5+72e_6+48e_7+24e_8),\\
e_5\cup e_5=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+72e_6+48e_7+24e_8),\\
e_6\cup e_6=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+78e_6+48e_7+24e_8),\\
e_7\cup e_7=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+80e_6+56e_7+24e_8),\\
e_8\cup e_8=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+80e_6+60e_7+34e_8),\\
e_1\cup e_3=&120t^2-t(20e_1+30e_2+42e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_2\cup e_4=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_3\cup e_4=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_4\cup e_5=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_5\cup e_6=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+36e_6+24e_7+12e_8),\\
e_6\cup e_7=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+40e_6+24e_7+12e_8),\\
e_7\cup e_8=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+40e_6+30e_7+12e_8),
\end{align*}
and $e_i\cup e_j=0$ for all remaining pairs $(i, j)$.
\end{corollary}
\section{Quantization of the minimal orbits}\label{sect:quantization}
In this section, we study the quantization of the minimal nilpotent
orbits of Lie algebras of ADE type.
In \S\ref{subsect:coordofminimal} we briefly go over some properties
of the minimal orbits. In \S\ref{subsect:Joseph} we recall Joseph's result on the quantization
of the minimal orbits and then in \S\ref{subsect:Garfinkle} we go over Garfinkle's construction
of Joseph's ideals, with explicit formulas.
\subsection{The coordinate ring of minimal orbits}\label{subsect:coordofminimal}
In this subsection, we assume $\mathfrak g$ is a complex semisimple Lie algebra,
and $\mathcal O_{min}$ is the minimal nilpotent orbit of $\mathfrak g$.
Let us first recall the following.
\begin{proposition}[{cf. \cite[\S 8.3]{Jan}}]
Let $\mathfrak g$ be a complex semisimple Lie algebra
and $\mathcal{O}$ be a nilpotent orbit of $\mathfrak g$. Then
\[
\mathbb{C}[\mathcal{O}]=\mathbb{C}[\overline{\mathcal{O}}]
\]
if and only if $\overline{\mathcal{O}}$ is normal.
\end{proposition}
In particular, $\overline{\mathcal{O}}_{min}$ is normal with an isolated singularity (see \cite{VP}), and hence
\[
\mathbb{C}[\mathcal{O}_{min}]=\mathbb{C}[\overline{\mathcal{O}}_{min}].
\]
Due to this proposition, in what follows we shall not distinguish
$\mathbb{C}[\mathcal{O}_{min}]$ and $\mathbb{C}[\overline{\mathcal{O}}_{min}]$.
The following result was proved by Shlykov in \cite{Sh}.
\begin{theorem}[\cite{Sh}] \label{Sh2}
Let $I$ be the defining ideal of $\overline{\mathcal{O}}_{min}$ in $\mathrm{Sym}(\mathfrak{g})$, i.e.,
\[
I:=\{\mu\in \mathrm{Sym}(\mathfrak{g})|\mu(\overline{\mathcal{O}}_{min})=0\},
\]
then the image $f(I)$ of $I$ under the projection
$$
f\colon\mathrm{Sym}(\mathfrak{g})\rightarrow \mathrm{Sym}(\mathfrak{h})
$$
induced by the inclusion $\mathfrak{h}^*\hookrightarrow \mathfrak{g}^*$
is given by $\mathrm{Sym}^{\geq 2}\mathfrak{h}$.
\end{theorem}
The following is obtained by Hikita:
\begin{theorem}[{\cite[Theorem 1.1]{Hi}}]
If we choose a generic $\mathbb{C}^\times$-action whose fixed point scheme coincides with that of
the torus $T$, then
$\overline{\mathcal{O}}_{min}^{\,\mathbb{C}^\times}=\mathfrak{h}\cap\overline{\mathcal{O}}_{min}$
as schemes.
\end{theorem}
Since
$
\mathbb{C}[\overline{\mathcal{O}}_{min}]= \mathrm{Sym}(\mathfrak{g})/I,
$
combining the above two theorems,
we have (see Shlykov \cite{Sh})
\begin{equation*}
\mathbb{C}[\overline{\mathcal{O}}_{min}^{\ \mathbb{C}^\times}]=\mathbb{C}[\mathfrak{h}\cap
\overline{\mathcal{O}}_{min}]=\mathrm{Sym}(\mathfrak{h})/f(I)=\mathrm{Sym}(\mathfrak{h})/\mathrm{Sym}^{\geq 2}\mathfrak{h}.
\end{equation*}
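To make Theorem \ref{Sh2} concrete, we record the smallest case; this elementary example is included only as an illustration.
\begin{example}
Let $\mathfrak{g}=\mathfrak{sl}_2$, identified with $\mathfrak{g}^*$ via the Killing form, and write a point of $\mathfrak{g}$ as $\begin{pmatrix} a& b\\ c& -a\end{pmatrix}$, so that $\mathrm{Sym}(\mathfrak{g})=\mathbb{C}[a,b,c]$. Here $\overline{\mathcal{O}}_{min}$ is the nilpotent cone, cut out by the vanishing of the characteristic polynomial, so $I=(a^2+bc)$. Restricting to $\mathfrak{h}=\{b=c=0\}$ gives $f(I)=(a^2)=\mathrm{Sym}^{\geq 2}\mathfrak{h}$, and hence
\[
\mathbb{C}[\mathfrak{h}\cap\overline{\mathcal{O}}_{min}]=\mathbb{C}[a]/(a^2),
\]
in agreement with the displayed formula above.
\end{example}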
\subsection{Quantization of the minimal nilpotent orbits}\label{subsect:Joseph}
We now study the quantization of the minimal nilpotent orbits in Lie algebras of ADE type.
We start with some basic concepts on the quantization of Poisson algebras;
see, for example, Losev \cite{Lo} for more details.
\begin{definition}[Filtered and graded quantizations]
Suppose $A$ is a commutative $\mathbb Z_{\ge 0}$-graded $k$-algebra,
equipped with a Poisson
bracket whose degree is $-1$, where $k$ is a field of characteristic zero.
\begin{enumerate}
\item[$(1)$]
A {\it filtered quantization} of $A$ is a filtered $k$-algebra
$\mathcal A=\bigcup_{i\ge 0}\mathcal A_{i}$ such that
the associated graded algebra $\mathrm{gr}\,\mathcal A$
is isomorphic to $A$ as graded Poisson algebras.
\item[$(2)$]
A {\it graded quantization} of $A$ is a graded $k[\hbar]$-algebra
$A_\hbar$ ($\mathrm{deg}\,\hbar=1$)
which is free as a $k[\hbar]$-module, equipped with an isomorphism of $k$-algebras:
$f: A_\hbar/\hbar\cdot A_\hbar \to A$
such that for any
$a, b\in A_\hbar$, if we denote their images in
$A_\hbar/\hbar\cdot A_\hbar$ by $\overline a, \overline b$ respectively,
then
$$f\left(\overline{\frac{1}{\hbar}[a, b]}\right)=\{f(\overline a), f(\overline b)\}.$$
\end{enumerate}
\end{definition}
Let $A$ be a filtered associative algebra. Recall that the {\it Rees algebra} of $A$ is the
graded algebra $Rees(A):=\bigoplus_{i\in\mathbb{Z}}A_{\leq i}\cdot\hbar^i$,
equipped with the multiplication $(a\hbar^i)(b\hbar^j)=ab\hbar^{i+j}$ for $a,b\in A$.
If $\mathcal A$ is a filtered quantization of $A$, then
the associated Rees algebra $Rees(\mathcal A)$ is a graded quantization of $A$.
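Let us spell out this standard verification, which we include for completeness. Multiplication by $\hbar$ maps $\mathcal A_{\leq i}\hbar^i$ into $\mathcal A_{\leq i+1}\hbar^{i+1}$, so
\[
Rees(\mathcal A)/\hbar\cdot Rees(\mathcal A)\cong\bigoplus_{i\geq 0}\mathcal A_{\leq i}/\mathcal A_{\leq i-1}=\mathrm{gr}\,\mathcal A\cong A,
\]
and for $a\in\mathcal A_{\leq i}$, $b\in\mathcal A_{\leq j}$ we have $\frac{1}{\hbar}[a\hbar^i,b\hbar^j]=[a,b]\hbar^{i+j-1}$ with $[a,b]\in\mathcal A_{\leq i+j-1}$, which induces on $A$ precisely the Poisson bracket of degree $-1$.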
\begin{example}
The universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ is the filtered quantization of
$\mathbb{C}[\mathfrak{g}^*]=\mathrm{Sym}(\mathfrak{g})$, and the Rees algebra of
$\mathcal{U}(\mathfrak{g})$,
$Rees(\mathcal{U}(\mathfrak{g})):=
\bigoplus_{i\in\mathbb{Z}}\mathcal{U}(\mathfrak{g})_{\leq i}\cdot\hbar^i$
is the graded quantization of $\mathrm{Sym}(\mathfrak{g})$.
On the other hand, there is an isomorphism of
$\mathfrak{g}$-modules:
\begin{eqnarray*}
\beta: \mathrm{Sym}(\mathfrak{g})&\rightarrow & \mathcal{U}(\mathfrak{g}),\\
x_1\cdots x_k &\mapsto & \dfrac{1}{k!}\sum_{\pi\in S_k} x_{\pi(1)}\cdots x_{\pi(k)},
\end{eqnarray*}
which is called {\it symmetrization}.
\end{example}
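In low degrees the symmetrization map reads, for $x,y\in\mathfrak{g}$,
\[
\beta(x)=x,\qquad \beta(xy)=\tfrac{1}{2}(xy+yx)=xy-\tfrac{1}{2}[x,y]\in\mathcal{U}(\mathfrak{g});
\]
in particular $\beta(x^k)=x^k$, and $\beta$ is an isomorphism of $\mathfrak{g}$-modules but not of algebras.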
Since the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ is
the quantization of the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$,
we need to study the quantization of the ideal $I$ of $\mathrm{Sym}(\mathfrak{g})$.
Joseph in \cite{Jo} found a two-sided ideal of $\mathcal{U}(\mathfrak{g})$
which plays the role of the quantization of $I$.
\subsubsection{Joseph's quantization of the minimal orbits}
Let us first recall the result of Joseph \cite{Jo},
which is stated as follows.
\begin{definition-theorem}[Joseph \cite{Jo} and Garfinkle \cite{Ga}]
Let $\mathfrak{g}$ be a complex semisimple Lie algebra.
\begin{enumerate}
\item[$(1)$]
If $\mathfrak{g}$ is a type A Lie algebra, then there exists a family of
completely prime two-sided primitive ideals $J^z$, parametrized
by $z\in\mathbb{C}$, such that
$$
\mathrm{gr}J^z=I(\overline{\mathcal{O}}_{min}).
$$
\item[$(2)$]
If $\mathfrak{g}$ is not of type A,
then there exists a unique completely prime two-sided primitive ideal $J$ such that
$$
\mathrm{gr}J=I(\overline{\mathcal{O}}_{min}).
$$
\end{enumerate}
In both cases, the ideals $J^z$ and $J$ are called the Joseph ideals.
\end{definition-theorem}
In the above definition,
a two-sided ideal $J$ of $\mathcal{U}(\mathfrak{g})$ is called {\it primitive} if it is
the kernel of an irreducible representation $(\rho, V)$ of $\mathcal{U}(\mathfrak{g})$, i.e.,
$J$ is the annihilator of $V$,
\[
J=Ann(V)=\{u\in\mathcal{U}(\mathfrak{g})|\rho(u)\cdot V=0\}.
\]
An ideal $J$ of $\mathcal{U}(\mathfrak{g})$ is called
{\it completely prime} if for all $u,v\in\mathcal{U}(\mathfrak{g})$, $uv\in J$
implies $u\in J$ or $v\in J$.
In fact, in the original paper \cite{Jo}, Joseph only proved that the Joseph ideals in type A Lie algebras
are not unique; it was Garfinkle who
gave explicit constructions of the Joseph ideals in Lie algebras of all types,
and in particular formulated the Joseph ideals in type A Lie algebras in the form given in the above
theorem.
\begin{theorem}\label{thm:quantizationofminimalorbits}
Let $\mathfrak g$ be the Lie algebra of type ADE. Let $J$ be its Joseph ideal.
Then
$U(\mathfrak g)/J$ gives a filtered quantization of $\overline{\mathcal O}_{min}$.
\end{theorem}
\begin{proof}
Since
\[
\mathrm{gr} \big(\mathcal{U}(\mathfrak{g})/J\big)=\mathrm{gr}\big(\mathcal{U}(\mathfrak{g})\big)/\mathrm{gr}(J)
=\mathrm{Sym}(\mathfrak{g})/I(\overline{\mathcal{O}}_{min})=\mathbb{C}[\overline{\mathcal{O}}_{min}],
\]
the algebra $\mathcal{U}(\mathfrak{g})/J$ is a filtered quantization of the symplectic singularity
$\overline{\mathcal{O}}_{min}$.
\end{proof}
By this theorem, $Rees(\mathcal{U}(\mathfrak{g})/J)$ is the graded quantization of
$\overline{\mathcal{O}}_{min}$, and we sometimes write it
as $\mathscr A[\overline{\mathcal O}_{min}]$; that is,
$\mathscr A[\overline{\mathcal O}_{min}]=Rees\big(\mathcal U(\mathfrak g)/J\big)$.
\subsection{Garfinkle's construction of the Joseph ideals}\label{subsect:Garfinkle}
Garfinkle in her thesis \cite{Ga} gave an explicit construction of the Joseph ideals.
In this subsection, we go over her results with some details.
\begin{notation}
Let us fix some notation from the representation theory of Lie algebras.
Let $\mathfrak{g}$ be a complex semisimple Lie algebra, $\mathfrak{h}$ be a
Cartan subalgebra of $\mathfrak{g}$, $\Delta$ be the roots of $\mathfrak{h}$ in
$\mathfrak{g}$ and $\Delta^+$ be a fixed choice of positive roots.
Let $\Phi$ be the set of simple roots of $\Delta$; we also write
$\Phi^+$ for the set $\Delta^+$ of positive roots.
The Lie algebra $\mathfrak{g}$ has the root space decomposition
$\mathfrak{g}=\oplus_{\alpha\in\Delta}\mathfrak{g}^\alpha$, and let
\[
\mathfrak{n}^+=\oplus_{\alpha\in\Delta^+}\mathfrak{g}^\alpha,\qquad
\mathfrak{n}^-=\oplus_{\alpha\in\Delta^+}\mathfrak{g}^{-\alpha},\qquad
\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+
\]
be the associated subalgebras of $\mathfrak{g}$.
Let $(\pi, V)$ be a representation of $\mathfrak{g}$; for any weight
$\lambda\in\mathfrak{h}^*$, let
$V^\lambda=\{v\in V|\pi(h)(v)=\lambda(h) v\;\mbox{for any}\; h\in\mathfrak{h}\}$. Let
$V^{\mathfrak{n}^-}:= \{v\in V|\pi(x)v=0\;\mbox{for any}\; x\in\mathfrak{n}^-\}$.
Denote by $X_{\alpha_i}$ and $Y_{\alpha_i}$ the root vectors in $\mathfrak{g}^{\alpha_i}$ and
$\mathfrak{g}^{-\alpha_i}$ respectively, and denote by $h_i$ the element in
$\mathfrak{h}$ corresponding to $\alpha_i$ such that $\alpha_i(H)=B(H,h_i)$
for all $H\in\mathfrak{h}$, where $B$ denotes the Killing form. By the construction of the Chevalley basis,
$h_i=[X_{\alpha_i}, Y_{\alpha_i}]$, and $\{h_i\}$ together with the root vectors $\{X_{\alpha}, Y_{\alpha}\}_{\alpha\in\Delta^+}$
form a basis of $\mathfrak{g}$. Denote by $h_i^{\vee}$ the dual element of $h_i$
via the Killing form, i.e., $B(h_i^{\vee}, h_j)=\delta_{ij}$.
Let $\alpha_1,\cdots,\alpha_n\in\Phi^+$ (the subscripts of the positive roots are the same as in \cite[Plates I--VII]{Bo}); for convenience, we denote by $X_{12\cdots n}$
and $Y_{12\cdots n}$ the root vectors corresponding to the root
$\alpha_1+\alpha_2+\cdots+\alpha_n\in \Delta^+$ and the root
$-(\alpha_1+\alpha_2+\cdots+\alpha_n)$ respectively.
In the type D Lie algebra case, we denote by
$X_{1\cdots\overline{k}\cdots n}$ and $Y_{1\cdots\overline{k}\cdots n}$
the root vectors corresponding to the root
$\alpha_1+\cdots+ 2\alpha_k+\cdots+\alpha_n\in\Delta^+$
and $-(\alpha_1+\cdots+ 2\alpha_k+\cdots+\alpha_n)$ respectively.
Let $I_{\mathfrak{p},\lambda}$ be the left ideal of the universal enveloping algebra
$\mathcal{U}(\mathfrak{g})$ generated by $\{x-\lambda(x)|x\in\mathfrak{p}\}$.
\end{notation}
\subsubsection{Joseph ideal for type A Lie algebras}
As we stated in the previous subsection, Joseph's theorem
is an existence theorem; it was Garfinkle who gave the explicit construction
of the Joseph ideals.
The following several propositions are taken from Garfinkle \cite{Ga}.
\begin{proposition}[{\cite[Proposition 3.2]{Ga} and
\cite[\S4.4]{BJ}}]\label{structure of J}
For type A Lie algebras $\mathfrak{g}$, we have the following decomposition
into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong
V(2\theta)\oplus V(\theta+\alpha)\oplus V(\theta)\oplus V(0),
\end{eqnarray*}
where $\alpha$ is the highest root in the sub root system
$\Delta_\theta:=\{\beta\in\Delta|\beta \text{ is perpendicular to }
\theta\}$, and $\Delta$ is the root system of $\mathfrak{g}$.
The ideal $I(\overline{\mathcal{O}}_{min})$ is generated by the lowest weight
vectors in $V(\theta+\alpha)$ and $V(\theta)$ and elements in $V(0)$,
and $V(0)$ is spanned by the Casimir element.
\end{proposition}
By this proposition, we next determine the lowest weight
vectors in $V(\theta+\alpha)$ and $V(\theta)$, one at a time.
Recall that $X_{i\cdots j}$ and $Y_{i\cdots j}$ denote the root vectors of weights
$\alpha_i+\cdots+\alpha_j$ and $-(\alpha_i+\cdots+\alpha_j)$ respectively; their products below are regarded as elements of $Sym^2(\mathfrak{g})$.
\begin{proposition}[{\cite[\S IV.3 Definition 7]{Ga}}]\label{prop:lwofVthetaplusalpha}
The lowest weight vector of the subrepresentation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$ in
Proposition \ref{structure of J} is
\begin{eqnarray}\label{eq:lwofVthetaplusalpha}
Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n},
\end{eqnarray}
which corresponds to $-(\theta+\alpha_2+\cdots+\alpha_{n-1})$,
the lowest weight in $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$.
\end{proposition}
Next, let us recall the following.
\begin{lemma}[{\cite[\S IV.3 Remark 4]{Ga}}]\label{Vtheta2}
In the $A_n$ case,
$v$ is the lowest weight vector of the representation $V(\theta)$ if and only if
$v\in V(\theta)^{\mathfrak{n}^-}$.
\end{lemma}
By this lemma, we have the following.
\begin{proposition}\label{generalterm1}
The lowest weight vector of the subrepresentation $V(\theta)$ in Proposition \ref{structure of J} is
\begin{eqnarray}\label{vtheta}
-(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+\cdots+Y_{1\cdots n-1}Y_n)+
\sum_{k=1}^n Y_\theta(2k-1-n)h_k.
\end{eqnarray}
\end{proposition}
\begin{proof}
Since $V(\theta)\cong\mathfrak{g}$ as $\mathfrak g$-representations,
let $v$ be the lowest weight vector in $V(\theta)$, then $v$
corresponds to the weight $-\theta$ in $\mathfrak{g}$. By \cite[\S 25.3]{FH},
\[
v=(\Omega-2(\theta,\theta) Id\otimes Id)(v{'}),
\]
where $v'$ is the weight vector corresponding to the weight
$-\theta$ in the representation $Sym^2(\mathfrak{g})$.
On the other hand, by Lemma \ref{Vtheta2}, $v\in V(\theta)^{\mathfrak{n}^-}$, and therefore
\begin{align*}
v&=(\Omega-2(\theta,\theta) Id\otimes Id)(v{'})\\
&=(\Omega-4Id\otimes Id)\left(Y_\theta\sum_{k=1}^n(3-k)h_k\right)\\
&=-2(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+
\cdots+Y_{1\cdots n-1}Y_n)+2\sum_{k=1}^n Y_\theta(2k-1-n)h_k.
\end{align*}
Without affecting the result, we take half of the above vector and let
\[
v=-(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+\cdots+
Y_{1\cdots n-1}Y_n)+\sum_{k=1}^n Y_\theta(2k-1-n)h_k.\qedhere
\]
\end{proof}
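To illustrate the notation, consider the smallest case $n=2$, i.e.\ $\mathfrak{g}=\mathfrak{sl}_3$ with $\theta=\alpha_1+\alpha_2$: the coefficients $2k-1-n$ equal $-1$ and $1$, and \eqref{vtheta} becomes
\[
v=-3\,Y_1Y_2+Y_\theta(h_2-h_1).
\]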
Having found the generators in $I(\overline{\mathcal O}_{min})$,
our next goal is to find the corresponding generators in the Joseph ideal.
First, for the subrepresentation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$,
we have the following proposition:
\begin{proposition}[\cite{Ga} \S IV.3 Theorem 2 and \S 5]\label{Garfinkle lemma1}
Let $v_0$ be the lowest weight vector of the representation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$. Then
$\beta(v_0)$ is an
element of the Joseph ideal $J$ of $\mathcal{U}(\mathfrak{g})$.
\end{proposition}
Thus combining Propositions \ref{prop:lwofVthetaplusalpha}
and \ref{Garfinkle lemma1}, we obtain that
$\beta(v_0)$ is given by
\begin{eqnarray}\label{vtheta+alpha2}
\beta(Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n})
=Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n}\in\mathcal{U}(\mathfrak{g}).
\end{eqnarray}
Second, we find the generator of $J$ corresponding to \eqref{vtheta}.
Recall that a subalgebra $\mathfrak{p}\subseteq\mathfrak{g}$ with
$\mathfrak{p}\supseteq \mathfrak{b}$ is called a parabolic subalgebra.
Given $\Phi'\subset \Phi$, we define a parabolic subalgebra as follows.
Let $\Delta_\mathfrak{l}=\{\gamma\in\Delta|\gamma=\sum_{\alpha\in\Phi'}n_\alpha\alpha,
n_\alpha\in\mathbb{Z}\}$ and $\Delta_{\mathfrak{u}^+}=\{\alpha\in\Delta^+|\alpha\notin\Delta_{\mathfrak{l}} \}$.
Then let
$\mathfrak{l}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Delta_{\mathfrak{l}}}\mathfrak{g}^\alpha$ and $\mathfrak{u}
=\bigoplus_{\alpha\in\Delta_{\mathfrak{u}^+}}\mathfrak{g}^\alpha$. We call $\mathfrak{p}
=\mathfrak{l}\oplus\mathfrak{u}$
the parabolic subalgebra defined by $\Phi'$.
The following lemma is straightforward.
\begin{lemma}\label{prop:lambda}
Let $\mathfrak g$ be a complex semisimple Lie algebra, and let
$\mathfrak{p}$ be the parabolic subalgebra defined by $\Phi-\{\alpha_n\}$.
Suppose
$\lambda\in\mathfrak{h}^*$. Then the following two conditions are equivalent:
\begin{enumerate}
\item[$(1)$]
$\lambda$ can be extended to a character on $\mathfrak{p}$, i.e.,
$\lambda|_{[\mathfrak{p},\mathfrak{p}]}=0, \lambda|_\mathfrak{h}=\lambda$;
\item[$(2)$]
there exists a complex number
$z\in\mathbb C$ such that
$\lambda (h_n)=z$, while $\lambda (h_1)=\cdots=\lambda (h_{n-1})=0.$
\end{enumerate}
\end{lemma}
\begin{proposition}[{\cite[\S IV.3 Proposition 3, \S IV.6 Theorem 1 and \S V Theorem 1]{Ga}}]\label{Vtheta}
Let $v\in V(\theta)^{\mathfrak{n}^-}$, $\mathfrak{p}$ be the parabolic subalgebra of
$\mathfrak{g}$ defined by $\Phi-\{\alpha_n\}$, and $\lambda\in\mathfrak{h}^*$
satisfying the conditions in
Lemma \ref{prop:lambda}. Then, there exists a
$y\in\mathcal{U}_1(\mathfrak{g})^{\mathfrak{n}^-}$ depending on
$\lambda$ such that $\beta(v)-y\in I_{\mathfrak{p},\lambda}$. In this case, $\beta(v)-y\in J$.
\end{proposition}
Thus by combining Proposition \ref{generalterm1}, Lemma \ref{prop:lambda} and Proposition
\ref{Vtheta}, we have that
\begin{align}\label{vinU}
\beta(v)-y=&-(1+n)(Y_{2\cdots n}Y_1+Y_{3\cdots n}Y_{12}+\cdots +Y_nY_{1\cdots n-1})\notag \\
&+Y_\theta\left(\sum_{k=1}^n (2k-1-n)h_k-\lambda\Big(\sum_{k=1}^n (2k-1-n)h_k\Big)\right)\notag\\
=&-(1+n)(Y_{2\cdots n}Y_1+Y_{3\cdots n}Y_{12}+\cdots +Y_nY_{1\cdots n-1})\notag \\
&+Y_\theta\left(\sum_{k=1}^n (2k-1-n)h_k+(1-n)z\right)
\end{align}
is an element in the Joseph ideal $J$.
Third, we find the generator of the Joseph ideal that corresponds to
the Casimir element of $\mathfrak g$. Let us denote by $C$ the Casimir element.
We have the following.
\begin{proposition}[{\cite[\S IV.3]{Ga}}]\label{prop:vtheta0An}
Let $\mathfrak g$ be the $A_n$ Lie algebra. Then
\begin{align}\label{vtheta0An}
C-c_\lambda =&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+
\sum_{i=1}^n h_i\cdot\dfrac{1}{n+1}\Big((n-i+1)\big(h_1+2h_2+\cdots+(i-1)h_{i-1}\big)\notag\\
&+i\big((n-i+1)h_i+(n-i)h_{i+1}+\cdots+h_n\big)\Big)-n\left(\dfrac{z}{n+1}+1\right)z
\end{align}
is a generator of $J$, where $c_\lambda=(\lambda,\lambda)+(\lambda,2\rho)$ and
$\rho$ is half the sum of the positive roots.
\end{proposition}
\begin{proof}
The Casimir element is $C=\sum_{\alpha\in\Phi^+}X_\alpha Y_\alpha+
Y_\alpha X_\alpha+\sum_{i=1}^n h_ih_i^{\vee}$,
where $n$ is the rank of the corresponding Lie algebra.
For the Lie algebra of type $A_n$,
$2\rho=n\alpha_1+2(n-1)\alpha_2+\cdots+i(n-i+1)\alpha_i+\cdots+n\alpha_n$.
By Proposition \ref{prop:lambda}, we have $\lambda=z\lambda_n$. Thus
\[
c_\lambda=n\left(\dfrac{z}{n+1}+1\right)z.
\]
By \cite[\S IV.3 \S IV.6 Theorem 1 and \S V Theorem 1]{Ga}, $C-c_\lambda$
is an element of $J$.
\end{proof}
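As an elementary consistency check of the constant $c_\lambda$ (under the normalization $(\theta,\theta)=2$ used above), take $n=1$: then $\lambda=z\lambda_1$ with $\lambda_1=\tfrac{1}{2}\alpha_1$, so
\[
c_\lambda=(\lambda,\lambda)+(\lambda,2\rho)=\frac{z^2}{2}+z=1\cdot\left(\frac{z}{1+1}+1\right)z,
\]
which is the case $n=1$ of the formula in the proof.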
Garfinkle showed that the Joseph ideal $J$ in the type A case is generated by exactly the above
three types of elements. Observe that $J$ depends on the parameter $z\in\mathbb C$; to emphasize
this dependence, in what follows we shall write it as $J^z$.
Summarizing the above propositions, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephAn}Let $\mathfrak g$ be a type A Lie algebra.
For each $z\in\mathbb C$, there is a Joseph ideal in $U(\mathfrak g)$, denoted by $J^z$, which is
generated by \eqref{vtheta+alpha2},
\eqref{vinU} and \eqref{vtheta0An}.
\end{proposition}
\subsubsection{Joseph ideal for type D Lie algebras}
Let $\alpha$ be the simple root not orthogonal to the highest root $\theta$;
in the case of type D and $E_6$, $E_7$, $E_8$, such an $\alpha$
is unique.
\begin{lemma}[{see \cite{Ga}, \cite[\S 4.4]{BJ} and \cite{GS}}]\label{structureofDE}
Let $\mathfrak g$ be the complex semisimple Lie algebra of DE type.
Let $\alpha_i$ be the highest roots of the simple components of the semisimple Lie algebra obtained from
$\mathfrak{g}$ by deleting $\alpha$ from the Dynkin diagram of $\mathfrak{g}$.
Then we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})= V(2\theta)\oplus\bigoplus_i V(\theta+\alpha_i)\oplus V(0).
\end{eqnarray*}
\end{lemma}
For the type D Lie algebra, the unique simple root which is not
perpendicular to $\theta$ is precisely the simple root $\alpha_2$,
and thus we have the following corollary:
\begin{corollary}\label{structure of Dn}
For the $D_n$ Lie algebra $\mathfrak{g}$, we have the following decomposition
into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\theta{'})\oplus V(\theta+\alpha_1)\oplus V(0),
\end{eqnarray*}
where $\theta{'}=\alpha_3+2\alpha_4+\cdots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_n$ is the
highest root of the Lie algebra corresponding to the sub-Dynkin diagram $D_{n-2}$ of
$D_n$, which consists of the roots $\alpha_3,\cdots, \alpha_n$.
\end{corollary}
By Garfinkle \cite{Ga}, the ideal $I(\overline{\mathcal{O}}_{min})$ is generated by the lowest
weight vectors of
$V(\theta+\theta^{'})$ and $V(\theta+\alpha_1)$, together with elements of $V(0)$.
We have the following.
\begin{proposition}
The lowest weight vector of the subrepresentation $V(\theta+\alpha_1)$ in
Corollary \ref{structure of Dn} is
\begin{align}\label{vtheta+alpha1}
&Y_1Y_\theta+Y_{12}Y_{12\overline{3}\cdots\overline{n-2},n-1,n}+Y_{123}
Y_{123\overline{4}\cdots\overline{n-2},n-1,n}+\notag\\
&\cdots+Y_{123\cdots n-2}Y_{123\cdots n-2,n-1,n}+Y_{123\cdots n-1}Y_{123\cdots n-2,n},
\end{align}
and the lowest weight vector of the subrepresentation
$V(\theta+\alpha_3+2\alpha_4+\cdots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_n )$
in Corollary \ref{structure of Dn} is
\begin{eqnarray}\label{WofDn}
Y_{23\overline{4}\cdots\overline{n-2},n-1,n }Y_{12\overline{3}\cdots\overline{n-2},n-1,n}-
Y_{123\overline{4}\cdots \overline{n-2},n-1,n}Y_{2\overline{3}\cdots \overline{n-2},n-1,n}-
Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_\theta.
\end{eqnarray}
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition \ref{generalterm1}.
\end{proof}
We then have the following lemma parallel to
Proposition \ref{Garfinkle lemma1}, Lemma \ref{prop:lambda} and Proposition
\ref{prop:vtheta0An}.
\begin{proposition}[{\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}]\label{Garfinkle lemma1 Dn}
Let $v$ be the lowest weight vector of the representation $V(\theta+\alpha_1)$ or
$V(\theta+\theta')$. Then $\beta(v)\in J$ if and only if $\lambda(h_1)=-(n-2)$ and
$\lambda(h_2)=\cdots=\lambda(h_n)=0$.
\end{proposition}
From the above proposition, we also see that $J$ is unique (see also Joseph \cite{Jo}).
Next, we consider the generator of $J$ that corresponds to the Casimir element $C$.
We have the following.
\begin{proposition}[{see \cite[\S IV.3]{Ga}}]\label{proposition-CasimirofDn}
Let $\mathfrak g$ be the $D_n$ Lie algebra.
We have
\begin{align}\label{vtheta0Dn}
C-c_\lambda
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+
h_1\left(h_1+h_2+\cdots+h_{n-2}+\dfrac{1}{2}h_{n-1}+\dfrac{1}{2}h_n\right)\notag \\
&+h_2\big(h_1+2(h_2+\cdots+h_{n-2})+h_{n-1}+h_n\big)\notag \\
&+\cdots+h_{n-2}\left(h_1+2h_2+\cdots+(n-3)h_{n-3}+(n-2)h_{n-2}+
\dfrac{n-2}{2}(h_{n-1}+h_{n})\right)\notag \\
&+\dfrac{1}{2}h_{n-1}\left(h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n}{2}h_{n-1}+
\dfrac{n-2}{2}h_{n}\right)\notag \\
&+\dfrac{1}{2} h_n\left(h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n-2}{2}h_{n-1}+
\dfrac{n}{2}h_{n}\right)-2n+n^2
\end{align}
is an element of $J$.
\end{proposition}
\begin{proof}
The Casimir element is $C=\sum_{\alpha\in\Phi^+}X_\alpha Y_\alpha+ Y_\alpha X_\alpha+
\sum_{i=1}^n h_ih_i^{\vee}$, where $n$ is the rank of the corresponding Lie algebra.
By Proposition \ref{Garfinkle lemma1 Dn}, $\lambda=-(n-2)\lambda_1$; since for $D_n$
(with the normalization $(\theta,\theta)=2$) one has $(\lambda_1,\lambda_1)=1$ and
$(\lambda_1,2\rho)=2(n-1)$, it follows that
$c_\lambda=(n-2)^2-2(n-2)(n-1)=2n-n^2$.
\end{proof}
\begin{remark}
The symmetrization map takes the lowest weight vectors into $\mathcal{U}(\mathfrak{g})$
without changing the form of (\ref{vtheta+alpha1}) and (\ref{WofDn}).
Thus we identify (\ref{vtheta+alpha1}) and (\ref{WofDn}) with
$\beta(v)$ and $\beta(v_0)$ respectively, which are two elements of the Joseph ideal $J$.
\end{remark}
Similarly to Proposition \ref{Prop:GaJosephAn},
by summarizing the above propositions, we have:
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephDn}
The Joseph ideal $J$ of the $D_n$ Lie algebra is generated by
\eqref{vtheta+alpha1}, \eqref{WofDn} and \eqref{vtheta0Dn}.
\end{proposition}
\subsubsection{Joseph ideal for type E Lie algebras}
Now we turn to the Joseph ideals of type E Lie algebras.
Again, we do this one by one.
\subsubsubsection{The $E_6$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E6}
For the $E_6$ Lie algebra $\mathfrak g$,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\alpha_1+\alpha_3+
\alpha_4+\alpha_5+\alpha_6)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_6$, i.e.,
$\theta=\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6$.
\end{corollary}
For the subrepresentation
$V(\theta+\alpha_1+\alpha_3+\alpha_4+\alpha_5+\alpha_6 )$
in the above corollary, we have the following.
\begin{proposition}
The lowest weight vector of the subrepresentation
$V(\theta+\alpha_1+\alpha_3+\alpha_4+\alpha_5+\alpha_6 )$ in Corollary \ref{structure of E6} is
\begin{eqnarray}\label{highestrootvectorE6}
Y_{13456}Y_\theta-Y_{123456}Y_{12\bar{3}\bar{\bar{4}}\bar{5}6}-Y_{123\bar{4}56}
Y_{12\bar{3}\bar{4}\bar{5}6}-Y_{12\bar{3}\bar{4}56}Y_{123\bar{4}\bar{5}6}.
\end{eqnarray}
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn}, the subrepresentation $V(0)$ in
Corollary \ref{structure of E6} is spanned by $C$, and $C-c_\lambda$ is a generator of
the Joseph ideal $J$ if and only if $\lambda(h_6)=-3$ and $\lambda(h_i)=0$ for $i\neq 6$
(see {\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}); thus we have the following:
\begin{align}\label{CasimirE6}
C-c_\lambda =&C-(\lambda,\lambda)-(\lambda,2\rho)\notag\\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+\dfrac{1}{3}
h_1(4h_1+3h_2+5h_3+6h_4+4h_5+2h_6)\notag \\
&+h_2(h_1+2h_2+2h_3+3h_4+2h_5+h_6)\notag\\
&+\dfrac{1}{3} h_3(5h_1+6h_2+10h_3+12h_4+8h_5+4h_6)\notag\\
&+h_4(2h_1+3h_2+4h_3+6h_4+4h_5+2h_6)\notag\\
&+\dfrac{1}{3} h_5(4h_1+6h_2+8h_3+12h_4+10h_5+5h_6)\notag\\
&+\dfrac{1}{3}h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6)+36.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE6}
The Joseph ideal $J$ of type $E_6$ Lie algebra is generated by
\eqref{highestrootvectorE6} and \eqref{CasimirE6}.
\end{proposition}
\subsubsubsection{The $E_7$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E7}
For the $E_7$ Lie algebra $\mathfrak g$,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\alpha_2+
\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_7$, i.e.,
$\theta=2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7$.
\end{corollary}
For the subrepresentation
$V(\theta+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)$
in the above corollary, we have the following.
\begin{proposition}[{see \cite[\S 4.3 Definition 7]{Ga}}]
The lowest weight vector of the subrepresentation
$V(\theta+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)$ in
Corollary \ref{structure of E7} is
\begin{eqnarray}\label{highestrootvectorE7}
Y_{23\bar{4}\bar{5}\bar{6}7}Y_\theta-
Y_{123\bar{4}\bar{5}\bar{6}7}Y_{1\bar{2}
\bar{\bar{3}}\bar{\bar{\bar{4}}}\bar{\bar{5}}\bar{6}7}-
Y_{12\bar{3}\bar{4}\bar{5}\bar{6}7}Y_{1\bar{2}\bar{3}\bar{\bar{\bar{4}}}
\bar{\bar{5}}\bar{6}7}-Y_{12\bar{3}\bar{\bar{4}}\bar{5}\bar{6}7}
Y_{1\bar{2}\bar{3}\bar{\bar{4}}\bar{\bar{5}}\bar{6}7}-
Y_{12\bar{3}\bar{\bar{4}}\bar{\bar{5}}\bar{6}7}Y_{1\bar{2}\bar{3}\bar{\bar{4}}\bar{5}\bar{6}7}.
\end{eqnarray}
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn},
the subrepresentation $V(0)$ in Corollary \ref{structure of E7} is spanned by $C$,
and $C-c_\lambda$ is a generator of the Joseph ideal $J$ if and only if $\lambda(h_7)=-4$
and $\lambda(h_i)=0$ for $i\neq 7$
(see {\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}); thus we have:
\begin{align}\label{CasimirE7}
C-c_\lambda =& C-(\lambda,\lambda)-(\lambda,2\rho)\notag\\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha) +h_1(2h_1+2h_2+3h_3+4h_4+3h_5+2h_6+h_7)\notag \\
&+\dfrac{1}{2}h_2(4h_1+7h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\notag \\
&+ h_3(3h_1+4h_2+6h_3+8h_4+6h_5+4h_6+2h_7)\notag \\
&+h_4(4h_1+6h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\notag \\
&+\dfrac{1}{2} h_5(6h_1+9h_2+12h_3+18h_4+15h_5+10h_6+5h_7)\notag \\
&+h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+2h_7)\notag \\
&+\dfrac{1}{2}h_7(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7)+84.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE7}
The Joseph ideal $J$ of type $E_7$ Lie algebra is generated by
\eqref{highestrootvectorE7} and \eqref{CasimirE7}.
\end{proposition}
\subsubsubsection{The $E_8$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E8}
For the $E_8$ Lie algebra,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})= V(2\theta)\oplus
V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+
4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_8$, i.e.,
$\theta=2\alpha_1+3\alpha_2+4\alpha_3+6\alpha_4+5\alpha_5+4\alpha_6+
3\alpha_7+2\alpha_8$.
\end{corollary}
For the subrepresentation
$V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)$
in the above corollary, we have the following.
\begin{proposition}[{see \cite[\S IV.3 Definition 7]{Ga}}]
The lowest weight vector of the subrepresentation
$V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)$
in Corollary \ref{structure of E8} is
\begin{align}\label{highestrootvectorE8}
&Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^1}Y_\theta
-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^{1}8^{1}}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{4}7^{3}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{4}7^{2}8^1}
-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{3}7^{2}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{3}4^{4}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{3}6^{3}7^{2}8^1}
-Y_{1^{2}2^{2}3^{3}4^{5}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{5}5^{3}6^{3}7^{2}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{4}4^{5}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{3}4^{5}5^{3}6^{3}7^{2}8^1},
\end{align}
where, for simplicity, $Y_{1^{n_1}\cdots k^{n_k}\cdots 8^{n_8}}$
denotes the root vector corresponding to the root
$n_1\alpha_1+\cdots+n_k\alpha_k+\cdots+n_8\alpha_8$.
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn},
the subrepresentation $V(0)$ in Corollary \ref{structure of E8} is spanned by
$C$, and $C-c_\lambda$ is a generator of the Joseph ideal $J$ if and only if
$\lambda(h_8)=-5$ and $\lambda(h_i)=0$ for $i\neq 8$
(see {\cite[\S IV.4, \S IV.6 Theorem 1 and \S V]{Ga}}); thus we have:
\begin{align}\label{CasimirE8}
C-c_\lambda =& C-(\lambda,\lambda)-(\lambda,2\rho)\notag \\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+
Y_\alpha X_\alpha) +h_1(4h_1+5h_2+7h_3+10h_4+8h_5+6h_6+4h_7+2h_8)\notag \\
&+h_2(5h_1+8h_2+10h_3+15h_4+12h_5+9h_6+6h_7+3h_8)\notag \\
&+ h_3(7h_1+10h_2+14h_3+20h_4+16h_5+12h_6+8h_7+4h_8)\notag \\
&+h_4(10h_1+15h_2+20h_3+30h_4+24h_5+18h_6+12h_7+6h_8)\notag \\
&+ h_5(8h_1+12h_2+16h_3+24h_4+20h_5+15h_6+10h_7+5h_8)\notag \\
&+h_6(6h_1+9h_2+12h_3+18h_4+15h_5+12h_6+8h_7+4h_8)\notag \\
&+h_7(4h_1+6h_2+8h_3+12h_4+10h_5+8h_6+6h_7+3h_8)\notag \\
&+h_8(5h_1+8h_2+10h_3+15h_4+12h_5+9h_6+6h_7+3h_8)+240.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE8}
The Joseph ideal $J$ of the type $E_8$ Lie algebra is generated by
\eqref{highestrootvectorE8} and \eqref{CasimirE8}.
\end{proposition}
\section{The $B$-algebras}\label{sect:B-algebra}
The notion of the $B$-algebra of a graded associative algebra
was introduced by Braden et al. in \cite{BLPW}. It plays an essential
role in the equivariant Hikita conjecture (see \cite{KMP,KTW}).
In this section, we study the $B$-algebra of the quantizations
of the minimal orbits given in the previous section.
\subsection{$B$-algebra of the minimal nilpotent orbits}
Suppose ${\mathfrak g}$ is a simple Lie algebra and $Q$ is its root lattice.
Let $\mathcal{U}({\mathfrak g})$ be the universal enveloping algebra of ${\mathfrak g}$,
and let $J$ be the Joseph ideal. Recall that there is the PBW filtration of $\mathcal{U}({\mathfrak g})$:
$$\mathcal{U}^0\subseteq \mathcal{U}^1 \subseteq \mathcal{U}^2\subseteq \cdots $$
On the other hand, $\mathcal{U}({\mathfrak g})$ decomposes into weight spaces as
$$\mathcal{U}({\mathfrak g})=\bigoplus_{\mu\in Q}\mathcal{U}_\mu.$$
Furthermore, the Joseph ideal $J$ satisfies the following
\begin{equation}\label{J decomp}
J=\bigoplus_{\mu\in Q}J_\mu=\bigoplus_{\mu\in Q}J\cap \mathcal{U}_\mu.
\end{equation}
Denote ${\mathscr A}:=Rees(\mathcal{U}({\mathfrak g})/J)$; then there is a weight decomposition induced by that of $\mathcal{U}({\mathfrak g})$,
$${\mathscr A}=\bigoplus_{\mu\in Q}{\mathscr A}_\mu.$$
\begin{definition}
The {\it B-algebra} of ${\mathscr A}$ is defined to be
\begin{equation}\label{B-algebra}
B({\mathscr A}):={\mathscr A}_0\Big/\sum_{\mu>0}\{ab|a\in {\mathscr A}_\mu, b\in {\mathscr A}_{-\mu}\}.
\end{equation}
\end{definition}
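To make the definition concrete, here is a toy sanity check (our own illustration, not part of the paper's setting): take the graded algebra to be $M_2({\mathbb C})$, graded by conjugation by $\mathrm{diag}(t,t^{-1})$, so that $E_{12}$ and $E_{21}$ have weights $\pm 2$ and the diagonal matrices have weight $0$. The quotient \eqref{B-algebra} then divides the diagonal part by the span of products such as $E_{12}E_{21}=E_{11}$.

```python
import numpy as np

def E(i, j):
    """Matrix unit E_{ij} of M_2(C) (0-indexed)."""
    m = np.zeros((2, 2))
    m[i, j] = 1
    return m

# Weight grading under conjugation by diag(t, 1/t):
#   weight 0: E_{11}, E_{22};  weight +2: E_{12};  weight -2: E_{21}.
A0 = [E(0, 0), E(1, 1)]            # basis of the weight-0 part
prods = [E(0, 1) @ E(1, 0)]        # products ab with a of weight +2, b of weight -2
# dim B = dim A_0 minus the dimension of the span of such products
rank = np.linalg.matrix_rank(np.array([p.flatten() for p in prods]))
dim_B = len(A0) - rank
print(dim_B)  # expected 1
```

So for this toy grading the $B$-algebra collapses to a line; the substantial case studied below is ${\mathscr A}=Rees(\mathcal{U}({\mathfrak g})/J)$.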
The following lemma is immediate.
\begin{lemma}\label{BA}
Let $\mathcal{I}$ be the ideal of $\mathcal{U}_0$ given by
\begin{equation*}
\mathcal{I}:=\sum_{\mu>0}\{ab|a\in \mathcal{U}_\mu, b\in \mathcal{U}_{-\mu}\}.
\end{equation*}
Then
$$B({\mathscr A})=(\mathcal{U}_0/\mathcal{I})/
\big(J_0/(J_0\cap \mathcal{I})
\big).$$
\end{lemma}
Now we describe $\mathcal{U}_0/\mathcal{I}$.
\begin{lemma}\label{UI}
As a (commutative) algebra,
\begin{equation*}
\mathcal{U}_0/\mathcal{I}\simeq \mathcal{U}({\mathfrak h})[\hbar].
\end{equation*}
\end{lemma}
\begin{proof}
Notice that we have the following simple and important decomposition of $\mathcal{U}({\mathfrak g})$ (cf. \cite{Ga}),
\begin{equation}\label{key decomp}
\mathcal{U}({\mathfrak g})=(\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\oplus \mathcal{U}({\mathfrak h}).
\end{equation}
For any $a\in \mathcal{U}_0$, we denote by $a_{\mathfrak h}\in \mathcal{U}({\mathfrak h})$ the $\mathcal{U}({\mathfrak h})$-summand of $a$ in \eqref{key decomp}. We define a map
\begin{align}
\kappa: \mathcal{U}_0/\mathcal{I}\rightarrow \mathcal{U}({\mathfrak h}), \quad a+\mathcal{I}\mapsto a_{\mathfrak h}.
\end{align}
Firstly, we need to prove that $\kappa$ is well-defined. For any $a\in \mathcal{I}$, by the definition of $\mathcal{I}$, we may assume $a=a_+\cdot a_-$, where $a_+\in \mathcal{U}_{>0}$, $a_-\in \mathcal{U}_{<0}$. Without loss of generality, we assume $a_+\in \mathcal{U}_{>0}^k$ for some $k>0$, and we perform induction on $k$. For $k=1$, $a_+\in \mathfrak{n}^+$, so $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. Suppose that for $k \leq r$ we have $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. Now consider $a_+\in \mathcal{U}_{>0}^{r+1}$; then there exist $v\in \mathfrak{n}^+$ and $b, c\in \mathcal{U}$ such that $a_+=b\cdot v \cdot c$. Then
\begin{equation*}
a_+=b\cdot v \cdot c=v\cdot b \cdot c+\hbar [b, v]\cdot c.
\end{equation*}
Then $v\cdot b \cdot c\cdot a_-\in \mathfrak{n}^+\mathcal{U}({\mathfrak g})$. Since $[b, v]\cdot c\in \mathcal{U}_{>0}^{\leq r}$, by the induction hypothesis,
$[b, v]\cdot c\cdot a_-\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$.
So $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. This means that $\mathcal{I}\subseteq (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$, and hence $\kappa$ is well-defined.
The surjectivity of $\kappa$ is easy to check. The injectivity of $\kappa$ follows from
$$(\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap \mathcal{U}_0\subseteq \mathcal{I}.$$
This inclusion holds because $\mathfrak{n}^+\subseteq \mathcal{U}_{>0}$ and $\mathfrak{n}^-\subseteq \mathcal{U}_{<0}$.
\end{proof}
By Lemmas \ref{BA} and \ref{UI}, we have
\begin{equation}\label{U/J}
B(\mathscr{A})=\mathcal{U}({\mathfrak h})/J_{\mathfrak h},
\end{equation}
where $J_{\mathfrak h}=\kappa(J_0/(J_0\cap \mathcal{I}))$.
By the definition of $\kappa$, $J_{\mathfrak h}$ is just the projection of $J_0$ onto $\mathcal{U}({\mathfrak h})$ in \eqref{key decomp}.
Suppose $h_1, h_2, \cdots, h_n$ is a basis of ${\mathfrak h}$; then $\mathcal{U}({\mathfrak h})$ is the polynomial ring generated by $h_1, \cdots, h_n$. The polynomial degree gives a natural filtration of $\mathcal{U}({\mathfrak h})$,
$$\mathcal{U}^0({\mathfrak h})\subseteq \mathcal{U}^1({\mathfrak h})\subseteq \mathcal{U}^2({\mathfrak h})\subseteq \cdots .$$
\begin{lemma}\label{J_h generator}
$J_{\mathfrak h}$ is an ideal of $\mathcal{U}({\mathfrak h})$ generated by the set $S_J$, where $S_J$ is the projection of $J_0\cap \mathcal{U}^2$ onto $\mathcal{U}({\mathfrak h})$ via \eqref{key decomp}.
\end{lemma}
\begin{proof}
Starting from $a_{\mathfrak h}\in J_{\mathfrak h}$, by the surjectivity of $\kappa$, there exists $a\in J_0$ such that $\kappa(a+\mathcal{I})=a_{\mathfrak h}$. By Proposition III.3.2 in \cite{Ga}, we know there exists $w\in J\cap \mathcal{U}^2$, $b, c\in \mathcal{U}({\mathfrak g})$ such that
\begin{equation*}
a=b\cdot w\cdot c.
\end{equation*}
Without loss of generality, we assume $b\in \mathcal{U}^i$, $c\in \mathcal{U}^j$ for some $i, j\geq 0$.
Now we claim that $a=v+a_1\cdot w_0\cdot a_2$, where $w_0\in S_J$, $a_1$, $a_2\in \mathcal{U}({\mathfrak h})$
and $v\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. We will prove this claim by
induction on $k=i+j$.
\begin{enumerate}
\item[(1)] For $k=0$, we have $b, c\in {\mathbb C}$.
Therefore $a\in J_0\cap \mathcal{U}^2$, and it is easy to see the claim holds by \eqref{key decomp}.
\item[(2)] Suppose the claim holds for $k\leq r$. Now consider $k=r+1$.
\begin{enumerate}
\item[(2i)]
If $b, c\in \mathcal{U}({\mathfrak h})$, then $w\in J_0\cap \mathcal{U}^2$. By \eqref{key decomp},
we have $w=v+w_0$, where
$v\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap
\mathcal{U}_0$ and $w_0\in S_J$. Recall that in the proof of Lemma \ref{UI},
we showed that $(\mathfrak{n}^+\mathcal{U}({\mathfrak g})
+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap \mathcal{U}_0=\mathcal{I}$, which is an ideal.
Thus $a=b v c+b w_0 c$ and
$b v c\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)
\cap \mathcal{U}_0$. The claim holds.
\item[(2ii)] If $b\notin \mathcal{U}({\mathfrak h})$, there exists
$x\in \mathfrak{n}^+$ or $x\in \mathfrak{n}^-$ such that $b=b_1 x b_2$.
If $x\in \mathfrak{n}^+$, then
$$a=b_1 x b_2 w c=x b_1 b_2 w c +\hbar [b_1, x] b_2 w c.$$
Here $x b_1 b_2 w c\in \mathfrak{n}^+\mathcal{U}({\mathfrak g})$
and $[b_1, x] b_2\in \mathcal{U}^{< i}$.\\
If $x\in \mathfrak{n}^-$, then
\begin{align*}
a&=b_1 x b_2 w c= b_1 b_2 w c x+\hbar b_1[x, b_2 w c]\\
&= b_1 b_2 w c x+\hbar b_1([x, b_2]wc+b_2[x, w]c+b_2w[x, c]).
\end{align*}
Here $b_1 b_2 w c x\in \mathcal{U}({\mathfrak g})\mathfrak{n}^-$, and the remaining three terms in the last
expression are of the form $\mathcal{U}^{<i}\cdot (J\cap \mathcal{U}^2)\cdot \mathcal{U}^{j}$.
So the claim holds by the induction hypothesis. Since a similar argument
applies to $c$, the claim holds in the case $k=r+1$.
\end{enumerate}
\end{enumerate}
By the above claim, we have $a_{\mathfrak h}=a_1w_0a_2$, and the proof of the lemma is complete.
\end{proof}
Lemma \ref{J_h generator} and \eqref{U/J} tell us that, to calculate $B({\mathscr A})$, we only need to calculate $J_0\cap \mathcal{U}^2$. We first determine its dimension.
Suppose $I={\rm gr}J=\bigoplus_{k\in {\mathbb N}}I^k$,
where $I^k=(J\cap \mathcal{U}^k)/(J\cap \mathcal{U}^{k-1})$. Then $I$ is an ideal of the polynomial ring
${\rm gr}(\mathcal{U}({\mathfrak g}))= Sym({\mathfrak g})$, and the degree of elements in $I^k$ is $k$.
For $\mu\in Q$, set $I^k_{\mu}$ to be the component of $I^k$ with weight $\mu$.
\begin{lemma}\label{lemma:dimJU2}With the above notations, we have
\begin{equation}\label{dimJU2}
\dim(J_0\cap \mathcal{U}^2)=\dim I^2_0.
\end{equation}
\end{lemma}
\begin{proof}
Since $J\neq \mathcal{U}({\mathfrak g})$, we have $1\notin J$, so $J\cap \mathcal{U}^0=\{0\}$. Then, as a vector space,
\begin{align*}
J\cap \mathcal{U}^1 \simeq I^1=(J\cap \mathcal{U}^1)/(J\cap \mathcal{U}^0).
\end{align*}
By a theorem of Kostant (see \cite[Theorem III.2.1]{Ga}),
as an ideal of $ Sym({\mathfrak g})$, $I$ is generated by $I^2$,
which consists of homogeneous elements of degree $2$.
Therefore $I^1=\{0\}$, and $ J\cap \mathcal{U}^1 =\{0\}$.
Now we consider $ J\cap \mathcal{U}^2$, as a vector space,
\begin{align*}
J\cap \mathcal{U}^2\simeq I^2=(J\cap \mathcal{U}^2)/(J\cap \mathcal{U}^1).
\end{align*}
Since the projection $J\cap \mathcal{U}^k\rightarrow (J\cap \mathcal{U}^k)/(J\cap \mathcal{U}^{k-1})$ is compatible with the decomposition \eqref{J decomp}, we have $J_0\cap \mathcal{U}^2\simeq I^2_0$.
\end{proof}
\subsubsection{Some calculations on ${\mathfrak g}$-modules}
In this subsection, we calculate $\dim I^2_0$ for ADE Lie algebras.
We have the following decomposition of ${\mathfrak g}$-modules (see \cite{Ga} or \cite{Sh}).
\begin{theorem}[Kostant]
Suppose ${\mathfrak g}$ is a semisimple Lie algebra, and $\theta$ is the highest weight of the adjoint
representation ${\mathfrak g}$. As a ${\mathfrak g}$-module,
\begin{equation*}
{\rm Sym}^2 {\mathfrak g}\simeq V(2\theta)\oplus L_2,
\end{equation*}
where $V(2\theta)$ is the irreducible representation of highest weight $2\theta$,
and $L_2$ is a representation with underlying space $I^2$.
\end{theorem}
For a ${\mathfrak g}$-module $V$, we denote by $V_0$ the subspace of $V$ of weight $0$; then we have
the following.
\begin{lemma} With the notations as above, we have:
\begin{equation}\label{dimension of I}
\dim(I^2_0)=\dim({\rm Sym}^2 {\mathfrak g})_0- \dim V(2\theta)_0,
\end{equation}
where
\begin{equation}\label{dimSymh}
\dim({\rm Sym}^2 {\mathfrak g})_0=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}+\dim ({\rm Sym}^2{\mathfrak h}).
\end{equation}
\end{lemma}
\begin{proof}
Just notice that every element of $({\rm Sym}^2 {\mathfrak g})_0$ is a linear combination of
the $x_\mu x_{-\mu}$ with $x_\mu\in {\mathfrak g}_{\mu}$, and the $h_i h_j$ with $h_i, h_j\in {\mathfrak h}$.
\end{proof}
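The count in this proof can be cross-checked mechanically. The following short script is our own sanity check (not part of the paper), hard-coded for ${\mathfrak g}=\mathfrak{sl}_{n+1}$; it enumerates the weight-zero quadratic monomials directly and compares with \eqref{dimSymh}, where in type $A_n$ both sides equal $n(n+1)$:

```python
from itertools import combinations_with_replacement

def dim_sym2_weight0(n):
    """dim (Sym^2 g)_0 for g = sl_{n+1}, by enumerating weight-0 degree-2 monomials."""
    N = n + 1
    def wt(b):  # weight of a basis vector, as a vector in R^N
        if b[0] == 'h':
            return (0,) * N
        i, j = b[1]  # root vector e_{ij} has weight v_i - v_j
        return tuple(1 if k == i else -1 if k == j else 0 for k in range(N))
    # basis: root vectors e_{ij} (i != j) plus a Cartan basis h_1, ..., h_n
    basis = [('e', (i, j)) for i in range(N) for j in range(N) if i != j] \
          + [('h', i) for i in range(n)]
    return sum(1 for x, y in combinations_with_replacement(basis, 2)
               if all(a + b == 0 for a, b in zip(wt(x), wt(y))))

for n in range(1, 6):
    # formula from the lemma: (dim g - dim h)/2 + dim Sym^2 h = n(n+1)/2 + n(n+1)/2
    print(n, dim_sym2_weight0(n) == n * (n + 1))
```

The weight-zero pairs split exactly as in the proof: $\{x_\mu, x_{-\mu}\}$ pairs on the one hand, Cartan pairs on the other.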
The calculation of $\dim V(2\theta)_0$ is more difficult. The main tool we use is the following formula
(see \cite[Theorem 22.3]{Hum}).
\begin{lemma}[Freudenthal]
Let $V=V(\lambda)$ be an irreducible ${\mathfrak g}$-module of highest weight $\lambda$.
Let $\Lambda$ be the set of weights of $V$. If $\mu\in \Lambda$,
then the multiplicity $m(\mu)$ of $\mu$ in $V$ is given recursively as follows:
\begin{equation}\label{Freudenthal}
\big((\lambda+\delta, \lambda+\delta)-(\mu+\delta, \mu+\delta)
\big)m(\mu)
=2\sum_{\alpha\in \Delta^+}\sum_{i=1}^{+\infty}m(\mu+i\alpha)(\mu+i\alpha, \alpha),
\end{equation}
where $\delta={1\over 2}\sum_{\alpha\in \Delta^+}\alpha$.
\end{lemma}
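Freudenthal's recursion is easy to run by machine. The following sketch is our own verification (not part of the proof), hard-coded for the smallest relevant case ${\mathfrak g}=\mathfrak{sl}_3$ and $\lambda=2\theta$, with weights written in the standard $v_i$-coordinates; it recurses downward from the highest weight, using the $W$-invariance of $m(\mu)$, and reproduces $m(0)=3=n(n+1)/2$ for $n=2$:

```python
pos_roots = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]  # positive roots of sl_3
delta = (1, 0, -1)                                # half sum of positive roots
lam = (2, 0, -2)                                  # highest weight 2*theta

def add(u, v, c=1): return tuple(a + c * b for a, b in zip(u, v))
def ip(u, v): return sum(a * b for a, b in zip(u, v))
def dominant(mu): return tuple(sorted(mu, reverse=True))  # W = S_3 permutes coordinates

mult = {}
def m(mu):
    mu = dominant(mu)            # m is W-invariant
    if mu == lam:
        return 1
    if mu in mult:
        return mult[mu]
    num = 0
    for a in pos_roots:
        i = 1
        while True:
            nu = add(mu, a, i)
            if any(abs(x) > 2 for x in nu):  # outside the weight hull of V(2*theta)
                break
            num += m(nu) * ip(nu, a)
            i += 1
    den = ip(add(lam, delta), add(lam, delta)) - ip(add(mu, delta), add(mu, delta))
    mult[mu] = 2 * num // den
    return mult[mu]

print(m((1, 0, -1)), m((0, 0, 0)))  # expected: 2 3
```

This agrees with the $A_n$ computation in the next proof at $n=2$: $m(\theta)=n=2$ and $m(0)=n(n+1)/2=3$.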
\begin{lemma}
For the ADE Lie algebra ${\mathfrak g}$,
\begin{equation}\label{dimV0}
\dim V(2\theta)_0=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}.
\end{equation}
\end{lemma}
\begin{proof}
Notice that $\dim V(2\theta)_0$ is just the multiplicity $m(0)$ in $ V(2\theta)$.
We prove the lemma case by case.
\noindent\textbf{$A_n$ case:}
Firstly we list some data in $A_n$ case (see \cite{Hum} or \cite{Kac}).
\begin{align*}
Q&=\left\{\sum_{i=1}^{n+1}k_iv_i|k_i\in {\mathbb Z}, \sum_i k_i=0\right\},\\
\Delta &=\{v_i-v_j\}, \quad \Delta^+ =\{v_i-v_j|i<j \} ,\\
\Pi&=\{\alpha_1=v_1-v_2, \alpha_2=v_2-v_3, \cdots , \alpha_n=v_n-v_{n+1}\} \text{ (the simple roots)},\\
\theta &=v_1-v_{n+1},\quad \delta={1\over 2}(nv_1+(n-2)v_2+\cdots -(n-2)v_n-nv_{n+1}),\\
W &=\{\text{all permutations of the $v_i$}\}.
\end{align*}
Since $2\theta=2(v_1-v_{n+1})$ is the highest weight of $V$, $m(2\theta)=1$. Since $m(\mu)$ is invariant under the $W$-action (see \cite[Theorem 21.2]{Hum}), we have
\begin{equation}\label{117}
m(2(v_i-v_j))=1.
\end{equation}
Now we consider $m(2v_1-v_n-v_{n+1})$. By \eqref{Freudenthal}, we have
\begin{align*}
&\big((2\theta+\delta, 2\theta+\delta)-(2v_1-v_n-v_{n+1}+\delta, 2v_1-v_n-v_{n+1}+\delta)\big)
m(2v_1-v_n-v_{n+1})\\
=&2m(2\theta)(2\theta, v_{n}-v_{n+1}).
\end{align*}
One can check that
\begin{align*}
&(2\theta, v_{n}-v_{n+1})=2,\\
&(2\theta+\delta, 2\theta+\delta)-(2v_1-v_n-v_{n+1}+\delta, 2v_1-v_n-v_{n+1}+\delta)=4.
\end{align*}
Therefore
\begin{equation*}
m(2v_1-v_n-v_{n+1})=1.
\end{equation*}
By the $W$-invariance of $m(\mu)$, and $m(\mu)=m(-\mu)$, we have
\begin{equation}\label{118}
m(\pm(2v_i-v_j-v_k))=1.
\end{equation}
Now we consider $m(v_1+v_2-v_{n}-v_{n+1})$. By \eqref{Freudenthal}, we have
\begin{align*}
&\big(
(2\theta+\delta, 2\theta+\delta)-(v_1+v_2-v_{n}-v_{n+1}
+\delta, v_1+v_2-v_{n}-v_{n+1}+\delta)\big)\\
&\cdot m(v_1+v_2-v_{n}-v_{n+1})\\
=&2\big(m(2v_1-v_{n}-v_{n+1})(2v_1-v_{n}-v_{n+1} , v_1-v_2)\\
&+m(v_1+v_2-2v_{n+1})(v_1+v_2-2v_{n+1}, v_n-v_{n+1})\big).
\end{align*}
By \eqref{118}, we have $m(2v_1-v_{n}-v_{n+1})=m(v_1+v_2-2v_{n+1})=1$. Furthermore,
\begin{align*}
&(2v_1-v_{n}-v_{n+1} , v_1-v_2)=(v_1+v_2-2v_{n+1}, v_n-v_{n+1})=2,\\
&(2\theta+\delta, 2\theta+\delta)-(v_1+v_2-v_{n}-v_{n+1}+\delta, v_1+v_2-v_{n}-v_{n+1}+\delta)=8.
\end{align*}
Thus $m(v_1+v_2-v_{n}-v_{n+1})=m(2\theta-\alpha_1-\alpha_n)=1$, and by the $W$-invariance of $m(\mu)$ we have
\begin{equation}\label{119}
m(v_i+v_j-v_k-v_l)=1.
\end{equation}
Now we calculate $m(\theta)$. By \eqref{Freudenthal},
\begin{equation*}
\big((2\theta+\delta, 2\theta+\delta)-(\theta+\delta, \theta+\delta)\big)
m(\theta)=2\sum_{\alpha\in \Delta^+} m(\theta+\alpha)(\theta+\alpha, \alpha).
\end{equation*}
By \eqref{117}, \eqref{118} and \eqref{119}, $m(\theta+\alpha)=1$ for every $\alpha\in \Delta^+$. Furthermore, we have
\begin{align*}
&(2\theta+\delta, 2\theta+\delta)-(\theta+\delta, \theta+\delta)=6+2n,\\
& \sum_{\alpha\in \Delta^+}(\theta+\alpha, \alpha)=(\theta, 2\delta)+2|\Delta^+|=2n+2\cdot {n(n+1)\over 2} =n(n+3).
\end{align*}
Then $m(\theta)=n$ and by the $W$-invariance of $m(\mu)$,
\begin{equation}\label{120}
m(v_i-v_j)=n.
\end{equation}
Finally, by \eqref{Freudenthal},
\begin{align}\label{A-m(0)}
\big((2\theta+\delta, 2\theta+\delta)-(\delta, \delta)\big)m(0)
=2\sum_{\alpha\in \Delta^+} \big(m(\alpha)(\alpha , \alpha)+m(2\alpha)(2\alpha, \alpha)\big).
\end{align}
By \eqref{117} and \eqref{120}, we have $m(\alpha)=n$ and $m(2\alpha)=1$. Furthermore,
$$(2\theta+\delta, 2\theta+\delta)-(\delta, \delta)=4n+8.$$
Thus \eqref{A-m(0)} is equivalent to
$$(4n+8)m(0)=2(2n+4)|\Delta^+|=4(n+2)\cdot {n(n+1)\over 2},$$
which gives
\begin{equation*}
m(0)={n(n+1)\over 2}.
\end{equation*}
\noindent\textbf{$D_n$ case:} The data of $D_n$ is as follows:
\begin{align*}
Q&=\left\{\sum_{i=1}^{n}k_iv_i|k_i\in {\mathbb Z}, \sum_i k_i\in 2{\mathbb Z}\right\},\\
\Delta &=\{\pm v_i \pm v_j\},\quad \Delta^+=\{v_i \pm v_j| i<j\},\\
\Pi&=\{\alpha_1=v_1-v_2, \alpha_2=v_2-v_3, \cdots , \alpha_{n-1}=v_{n-1}-v_{n}, \alpha_n=v_{n-1}+v_n\} \text{ (the simple roots)},\\
\theta &=v_1+v_{2}, \quad \delta=(n-1)v_1+(n-2)v_2+\cdots+v_{n-1}, \\
W &=\{\text{all permutations and even number of sign changes of the $v_i$}\}.
\end{align*}
The argument is similar to the $A_n$ case, so we just list the results and omit the details:
\begin{align*}
&m(2\theta)=m(2(v_1+v_2))=m(\pm2(v_i\pm v_j))=1,\\
&m(2v_1+v_2+v_3)=m(\pm 2v_i\pm v_j \pm v_k)=1,\\
&m(v_1+v_2+v_3+v_4)=m(\pm v_i\pm v_j\pm v_k\pm v_l)=2,\\
&m(2v_1)=m(\pm 2v_i)=n-2,\\
&m(v_1+v_2)=m(\pm v_i\pm v_j)=2n-3,\\
&m(0)=n(n-1).
\end{align*}
\noindent\textbf{Type E case:} By \cite[\S4]{BM}, we know that
for $E_6$, $m(0)=36$, for $E_7$, $m(0)=63$ and for $E_8$, $m(0)=120$.
They are exactly $\displaystyle\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}$ in these cases.
In summary, in all the ADE cases, we have $\displaystyle m(0)=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}$.
\end{proof}
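The case-by-case outcome can also be checked against the uniform answer by pure arithmetic. The script below is our own sanity check (with the standard values of $\dim{\mathfrak g}$ hard-coded): in every ADE case the multiplicity $m(0)$ computed above equals $(\dim{\mathfrak g}-\dim{\mathfrak h})/2$, the number of positive roots.

```python
def matches(dim_g, rank, m0):
    # compare m(0) with (dim g - dim h)/2
    return (dim_g - rank) // 2 == m0

cases = [((n + 1) ** 2 - 1, n, n * (n + 1) // 2) for n in range(1, 10)]   # A_n
cases += [(n * (2 * n - 1), n, n * (n - 1)) for n in range(4, 10)]        # D_n
cases += [(78, 6, 36), (133, 7, 63), (248, 8, 120)]                       # E_6, E_7, E_8
print(all(matches(*c) for c in cases))  # expected True
```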
Combining \eqref{dimension of I}, \eqref{dimSymh} and \eqref{dimV0}, we have
the following.
\begin{proposition}\label{Prop:I^2_0}
If ${\mathfrak g}$ is of ADE type, then
\begin{equation}\label{dimI20}
\dim I^2_0=\dim({\rm Sym}^2{\mathfrak h}).
\end{equation}
\end{proposition}
\begin{proposition}\label{vectorspace of B-algebra}
If ${\mathfrak g}$ is of ADE type, then as ${\mathbb C}[\hbar]$-vector spaces
$$B({\mathscr A})\simeq{\mathfrak h}.$$
\end{proposition}
\begin{proof}
Suppose $\{h_1, h_2, \cdots, h_n\}$ is a basis of ${\mathfrak h}$. By \eqref{dimJU2} and \eqref{dimI20},
$d=\dim(J_0\cap \mathcal{U}^2)$ equals the number of monomials $h_ih_j$ with $1\leq i\leq j\leq n$.
Furthermore, in \S\ref{subsection type A}-\ref{subsection En case},
we will explicitly find $d$ linearly independent elements in $J_0\cap \mathcal{U}^2$.
Thus the elements we find form a basis of $J_0\cap \mathcal{U}^2$; moreover, they are of the form
$$h_ih_j+\sum_\mu {\mathfrak g}_\mu {\mathfrak g}_{-\mu}+\textup{linear term},\quad 1\leq i\leq j\leq n.$$
The proposition follows from \eqref{U/J} and Lemma \ref{J_h generator}.
\end{proof}
\subsection{$B$-algebra in the type A case}\label{subsection type A}
In this and the following two subsections, we
find the $B$-algebra of the
quantization of the minimal nilpotent
orbits in Lie algebras of type A, D and E respectively.
In this subsection, we study the type A Lie algebra case.
As we recalled in the previous section, the Joseph ideals
$J^z$ for type A Lie algebras are parameterized
by $z\in\mathbb C$.
We shall view $z$ as a formal parameter, and
accordingly, for type A Lie algebras,
the universal enveloping algebra $\mathcal U(\mathfrak g)$
is defined over the base ring $\mathbb C[z]$.
With this convention, we denote the whole family of Joseph
ideals by $J$. Therefore the filtered quantization
of the minimal nilpotent orbit $\overline{\mathcal O}_{min}$
is given by $\mathcal U(\mathfrak g)/J$, which
is an algebra over $\mathbb C[z]$,
and the associated graded quantization
$Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$ is an algebra over $\mathbb C[z,\hbar]$.
It is easy to see that
$B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)
=B\big(Rees(\mathcal{U})\big)/B\big(Rees(J)\big)$.
Now we find the relations in
the two-sided ideal $B\big(Rees(J)\big)$.
Since (\ref{vtheta+alpha2}) is an element of $J$ and $J$ is a two-sided ideal,
for any $X\in \mathcal U^2(\mathfrak{g})$,
applying $ad_X:=[X,-]$ to $Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n}$
again yields an element of $J$. In the following we use this approach
to obtain elements of $B\big(Rees(J)\big)$, i.e., relations in $B\big(Rees(\mathcal{U}/J)\big)$.
{\it Step 1}. We first prove the relation $h_ih_j=0$
in the $B$-algebra $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$, for $|i-j|\geq 2$.
By applying $ad_{X_2}\circ ad_{X_{3\cdots n}}$ on (\ref{vtheta+alpha2}),
we obtain
\begin{equation}\label{formula:Ytheta}
Y_\theta h_2-Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}.
\end{equation}
Then we have two methods to obtain the corresponding ``weight 0'' vector:
\begin{enumerate}
\item[(1)] by applying $ad_{X_\theta}$ on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{0.1}
(h_1+\cdots+h_n)h_2+X_{2\cdots n}Y_{2\cdots n}-Y_1X_1-X_{3\cdots n}Y_{3\cdots n}+Y_{12}X_{12} \in J,
\end{eqnarray}
which reduces to
\begin{eqnarray}\label{1.1}
(h_1+\cdots+h_n)h_2-h_2\hbar \in B\big(Rees(J)\big).
\end{eqnarray}
\item[(2)] by applying $ad_{X_{1\cdots n-1}}\circ ad_{X_n}$
on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{0.2}
(h_1+\cdots+h_{n-1})h_2+X_{2\cdots n-1}Y_{2\cdots n-1}-Y_1X_1+Y_{12}X_{12}-X_{3\cdots n-1}Y_{3\cdots n-1} \in J,
\end{eqnarray}
which reduces to
\begin{eqnarray}\label{1.2}
(h_1+\cdots +h_{n-1})h_2-h_2\hbar \in B\big(Rees(J)\big).
\end{eqnarray}
\end{enumerate}
By subtracting \eqref{1.2} from \eqref{1.1},
we obtain $h_2h_n=0$ in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, for $2<i<n-1$, by applying $ad_{ X_i}
\circ ad_{ X_{2\cdots i-1}}\circ ad_{ X_{i+1\cdots n-1}}$ on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{1.2.1}
-Y_\theta h_i-Y_{1,\cdots,i-1}Y_{i,\cdots,n}+Y_{1,\cdots,i}Y_{i+1,\cdots,n} \in J,
\end{eqnarray}
then applying $ad_{ X_\theta}$ to the above element, we obtain
\begin{eqnarray}\label{1.3}
-(h_1+\cdots+h_n)h_i+h_i\hbar=0
\end{eqnarray} in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
There is another choice of the action: applying
$ad_{X_{1\cdots n-1}}\circ ad_{X_n}$
on \eqref{1.2.1}, we obtain
\begin{eqnarray}\label{1.4}
-(h_1+\cdots+h_{n-1})h_i+h_i\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Subtracting \eqref{1.4} from \eqref{1.3}, we obtain
\[
h_ih_n=0.
\]
Similarly, applying $ad_{ X_{1\cdots j-1}}\circ ad_{ X_{j,\cdots,n}}$
on (\ref{vtheta+alpha2}), we have
\begin{align}\label{hihj1}
(h_1+\cdots+h_{j-1})h_i=&-X_{i\cdots j-1}Y_{i\cdots j-1}+Y_{1\cdots i-1}X_{1\cdots i-1}\notag\\
&-Y_{1\cdots i}X_{1\cdots i}+X_{i+1\cdots j-1}Y_{i+1\cdots j-1}
\end{align}
in $Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$.
And
applying $ ad_{ X_{1\cdots j}}\circ ad_{ X_{j+1,\cdots,n}}$ on (\ref{vtheta+alpha2}), we have
\begin{align}\label{hihj2}
(h_1+\cdots+h_j)h_i=&-X_{i\cdots j}Y_{i\cdots j}+Y_{1\cdots i-1}X_{1\cdots i-1}\notag\\
&-Y_{1\cdots i}X_{1\cdots i}+X_{i+1\cdots j}Y_{i+1\cdots j}
\end{align}
in $Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$. Subtracting \eqref{hihj1} from \eqref{hihj2},
we have
\begin{eqnarray}\label{hihj result}
h_ih_j=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$, for $|i-j|\geq 2$ and $1<i<j<n$.
As a by-product,
by plugging \eqref{hihj result} in \eqref{1.3} or \eqref{1.4},
we then have
\begin{eqnarray}\label{Vtheta+alpha}
h_k^2=-h_{k-1}h_k-h_kh_{k+1}+h_k\hbar,
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
{\it Step 2}. Next, we deal with (\ref{vinU}).
Since $J$ is a two-sided ideal, by applying
$ad_{ X_n}\circ ad_{ X_{1\cdots n-1}}$ on \eqref{vinU}
and then using \eqref{hihj result}, we obtain
\begin{eqnarray}\label{2.1}
(1-n)h_n^2-(2n-2)h_{n-1}h_n+(n+1)(n-1)h_n\hbar+(n-1)zh_n\hbar \in B(Rees(J)).
\end{eqnarray}
Similarly, by applying $ad_{ X_{n-1,n}}\circ ad_{ X_{1\cdots n-2}}$ on \eqref{vinU}, we obtain
\begin{align}\label{2.0}
&(1-n)h_n^2+(3-n)h_{n-1}^2-(2n-4)(h_{n-1}h_n+h_{n-2}h_{n-1})\notag\\
&+(n+1)(n-2)h_{n-1}\hbar+(n+1)(n-1)h_n\hbar+(n-1)z(h_{n-1}+h_n)\hbar.
\end{align}
Subtracting \eqref{2.1} from \eqref{2.0}, we obtain
\begin{eqnarray}\label{2.2}
(3-n)h_{n-1}^2+2h_nh_{n-1}-(2n-4)h_{n-2}h_{n-1}+\big((n+1)(n-2)h_{n-1}+(n-1)zh_{n-1}\big)\hbar
\end{eqnarray}
in $B(Rees(J))$. \eqref{2.1} and \eqref{2.2} being in $B(Rees(J))$
implies that we have the relations
\begin{align*}
(1-n)h_n^2=&(2n-2)h_{n-1}h_n-(n+1)(n-1)h_n\hbar-(n-1)zh_n\hbar,\\
(3-n)h_{n-1}^2=&-2h_nh_{n-1}+(2n-4)h_{n-2}h_{n-1}-\big((n+1)(n-2)h_{n-1}+(n-1)zh_{n-1}\big)\hbar
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, by applying $ad_{ X_{n-k\cdots n}}\circ ad_{ X_{1\cdots n-k-1}}$ and
$ad_{ X_{n-k-1\cdots n}}\circ ad_{ X_{1\cdots n-k-2}}$ on \eqref{vinU}
respectively and then subtracting them, we obtain the relation
\begin{eqnarray}\label{Vtheta result}
-(2k-1-n)h_k^2=2(k-n)h_{k+1}h_k+(2k-2)h_kh_{k-1}\notag\\-(n+1)(k-1)h_k\hbar -(n-1)zh_k\hbar
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$, for $k=1,\cdots, n$.
Plugging (\ref{Vtheta+alpha}) in (\ref{Vtheta result}), we have
\begin{eqnarray}\label{replace h_k}
h_kh_{k+1}=h_{k-1}h_k-zh_k\hbar-kh_k\hbar
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$,
for $k=2,\cdots, n-1$.
\begin{proposition}
The subrepresentation $V(0)$ in Proposition \ref{structure of J} is spanned by
\begin{eqnarray}\label{Casimir}
\sum_{k=1}^{n-1}\dfrac{2k(n-k)}{n+1}h_kh_{k+1}+\sum_{k=1}^n
\dfrac{k(n-k+1)}{n+1}h_k^2-\big(nh_1+2(n-1)h_2+\cdots \notag\\
+k(n-k+1)h_k+\cdots+nh_n\big)\hbar-nz\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{eqnarray}
in $B\big(Rees(J)\big)$.
\end{proposition}
\begin{proof}
By plugging \eqref{hihj result} in \eqref{vtheta0An} we get the result.
\end{proof}
Summarizing the above results, we obtain the following relations.
\begin{proposition}Let $J$ be the Joseph ideal in the $A_n$ Lie algebra.
Let $h_1,\cdots, h_n$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then we have
\begin{align}\label{hkhk+1 result}
h_kh_{k+1}=&-\dfrac{z+n+1}{n+1}(h_1+2h_2+\cdots+kh_k)\hbar\notag\\
&-\dfrac{z}{n+1}\big((n-k)h_{k+1}+\cdots+2h_{n-1}+h_n\big)
\hbar+z\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{align}
for $k=1,\cdots, n-1$, and
\begin{align}\label{hk2 result}
h_k^2=&\dfrac{2(z+n+1)}{n+1}\big(h_1+2h_2+\cdots+(k-1)h_{k-1}\big)\hbar\notag\\
&+\left((n+2-k)\dfrac{z}{n+1}+(k+1)\dfrac{z+n+1}{n+1}\right)h_k\hbar\notag
\\&+\dfrac{2z}{n+1}\big((n-k)h_{k+1}+\cdots+2h_{n-1}+h_n\big)\hbar
-2z\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{align}
for $k=1,\cdots, n$.
\end{proposition}
\begin{proof}
Plugging (\ref{Vtheta result}) and (\ref{hihj result}) in (\ref{Casimir}), we have
\begin{align}\label{hkhk+1}
&\dfrac{2(n-1)}{(n-3)(n-1)}h_1h_2+\dfrac{4(n-2)}{(n-5)(n-3)}h_2h_3+\cdots
+\dfrac{2k(n-k)}{(n-1-2k)(n+1-2k)}h_kh_{k+1}\notag\\
&+\cdots+\dfrac{2(n-1)}{(1-n)(3-n)}h_{n-1}h_n+\sum_{k=1}^{n}\left(\dfrac{k(n-k)(n-k+1)}{2k-1-n}
+\dfrac{k(n-k+1)(n-1)z}{(2k-1-n)(n+1)}\right)h_k\hbar\notag\\
&-\dfrac{nz(z+n+1)}{n+1}\hbar^2.
\end{align}
Then, plugging \eqref{replace h_k} successively in (\ref{hkhk+1}), we can express
every $h_ih_{i+1}$, $i=1,\cdots,n-1$, in terms of a fixed
$h_kh_{k+1}$, and thus obtain (\ref{hkhk+1 result}).
Plugging \eqref{Vtheta result} in \eqref{hkhk+1 result}, we obtain \eqref{hk2 result}.
\end{proof}
\subsection{$B$-algebra in the type D case}\label{subsection Dn case}
In this subsection, we compute the $B$-algebra of the quantization
of the minimal orbits for type D Lie algebras.
The task is now to find the generators of $B\big(Rees(J)\big)$.
{\it Step 1}. We first check that
$h_ih_j=0$ for $|i-j|\geq 2$ and $h_{i-1}h_i+h_i^2+h_ih_{i+1}-h_i\hbar=0$, for indices $i<j\leq n-2$.
Similarly to \S\ref{subsection type A}, by applying
$ad_{ X_{23\overline{4}\cdots\overline{n-2},n-1,n }}
\circ ad_{ X_{12\overline{3}\cdots\overline{n-2},n-1,n}}$
on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.1}
(h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_1+h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
-h_2\hbar-2h_3\hbar-4h_4\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)).
\end{eqnarray}
Analogously, by applying $ad_{ X_{2\overline{3}\cdots\overline{n-2},n-1,n }}\circ
ad_{ X_{123\overline{4}\cdots\overline{n-2},n-1,n}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.2}
(h_1+h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_2+2h_3+
\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\-h_2\hbar-2h_3\hbar-4h_4\hbar-
\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)),
\end{eqnarray}
and by applying $ad_{ X_{3\overline{4}\cdots\overline{n-2},n-1,n }}
\circ ad_{ X_\theta}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.3}
(h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_1+2h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\-2h_3\hbar-4h_4\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)).
\end{eqnarray}
By subtracting \eqref{3.3.1} from \eqref{3.3.2}, we obtain
\begin{eqnarray*}
h_1h_3=0
\end{eqnarray*}
and by subtracting \eqref{3.3.1} from \eqref{3.3.3}, we obtain
\begin{eqnarray*}
h_1h_2+h_2^2+h_2h_3-h_2\hbar=0,
\end{eqnarray*}
both in $B\big(Rees(\mathcal{U}/J)\big)$.
By applying $ad_{ X_{1234}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.4}
Y_{23\overline{4}\cdots\overline{n-2},n-1,n}Y_{34\overline{5}\cdots\overline{n-2},n-1,n}-Y_{4\overline{5}\cdots\overline{n-2},n-1,n}Y_{2\overline{3}\cdots\overline{n-2},n-1,n}\notag\\-Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_{234\overline{5}\cdots\overline{n-2},n-1,n} \in J.
\end{eqnarray}
Similarly to the above procedure obtaining \eqref{3.3.1}-\eqref{3.3.3}, we obtain the following
three relations in $B\big(Rees(\mathcal{U}/J)\big)$ from (\ref{3.4}),
\begin{eqnarray}\label{3.4.1}
&&(h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_3+h_4+2h_5+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-h_3\hbar-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar,\\
&&(h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_4+h_5+2h_6+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar,\label{3.4.2}
\end{eqnarray}
and
\begin{eqnarray}\label{3.4.3}
&&(h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_2+h_3+h_4+2h_5+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\&&-h_3\hbar-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar.
\end{eqnarray}
By subtracting \eqref{3.4.1} from \eqref{3.4.3}, we obtain
\begin{eqnarray*}
h_2h_4=0
\end{eqnarray*}
and by subtracting \eqref{3.4.1} from \eqref{3.4.2}
we obtain
\begin{eqnarray*}
h_2h_3+h_3h_4+h_3^2-h_3\hbar=0,
\end{eqnarray*}
both in $B\big(Rees(\mathcal{U}/J)\big)$.
Now, for $5\leq i\leq n$, by applying $ad_{ X_{1\cdots i}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray*}
h_2h_i=0 \quad\mbox{and}\quad
h_3h_i=0
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}/J)\big)$.
Next, by applying $ad_{ X_{234}}\circ ad_{ X_{12345}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.5}
Y_{4\overline{5}\cdots\overline{n-2},n-1,n}Y_{345\overline{6}\cdots\overline{n-2},n-1,n}-Y_{45\overline{6}\cdots\overline{n-2},n-1,n}Y_{34\overline{5}\cdots\overline{n-2},n-1,n}\notag\\-Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_{5\overline{6}\cdots\overline{n-2},n-1,n} \in J.
\end{eqnarray}
Using the same method as before, we obtain
\begin{eqnarray*}
h_3h_4+h_4^2+h_4h_5-h_4\hbar=0 \quad\mbox{and}\quad
h_4h_i=0 \text{ for } i\geq 6
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, for $k\leq n-4$, by appropriate iterated actions of $ad_{ X_{1\cdots k}},\ ad_{ X_{2\cdots k-1 }},\ ad_{ X_{3\cdots k-2 }}$, $\cdots$ on (\ref{WofDn}), we obtain
\begin{eqnarray}\label{3.6.1}\label{hihjDn}
h_ih_j=0, \text{ for } |i-j|\geq 2,\ i\leq n-3<j,
\end{eqnarray}
and
\begin{eqnarray}\label{3.6.2}
h_{i-1}h_i+h_i^2+h_ih_{i+1}-h_i\hbar=0, \text{ for } 2\leq i\leq n-3,
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
{\it Step 2}. Now, let us find the relations for
$h_{n-2}h_{n-1}$, $h_{n-2}h_n$ and $h_{n-1}h_n$.
By applying $ad_{ X_{4\cdots n-3}}\circ
ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots n}}$
on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.6.3.0}
Y_{n-4,n-3,\overline{n-2},n-1,n}Y_{n-3,n-2}-Y_{n-2}
Y_{n-4,\overline{n-3},\overline{n-2},n-1,n}-Y_{n-3,\overline{n-2},n-1,n}Y_{n-4,n-3,n-2}
\end{eqnarray}
in $J$.
Then by applying $ad_{X_{n-2}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$
on the above element, we obtain
\begin{eqnarray}\label{3.6.3}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )h_{n-2}-2h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
Similarly, by $ad_{ X_{4\cdots n-3}}\circ ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots,n-2, n}}$ on \eqref{WofDn}, then by applying $ad_{X_{n-2,n-1}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$, we obtain
\begin{eqnarray}\label{3.6.4}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )(h_{n-2}+h_{n-1})-2(h_{n-2}+h_{n-1})\hbar=0.
\end{eqnarray}
And by applying $ad_{ X_{4\cdots n-3}}\circ ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots n-2,n-1}} $ on \eqref{WofDn}, then by applying $ad_{X_{n-2,n}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$, we obtain
\begin{eqnarray}\label{3.6.5}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )(h_{n-2}+h_n)-2(h_{n-2}+h_n)\hbar=0.
\end{eqnarray}
By subtracting \eqref{3.6.4} from \eqref{3.6.3} and by subtracting \eqref{3.6.5} from \eqref{3.6.3}, we obtain
\begin{eqnarray*}
h_{n-1}^2+2h_{n-2}h_{n-1}-2h_{n-1}\hbar&=&0,
\\h_n^2+2h_{n-2}h_n-2h_n\hbar&=&0
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}/J)\big)$. From (\ref{3.6.3}), we directly obtain
\begin{eqnarray}\label{3.6.3.1}
2h_{n-2}^2+2h_{n-3}h_{n-2}+h_{n-2}h_{n-1}+h_{n-2}h_n-2h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
We continue to consider the action $ad_{ X_{n-2,n}}$ on (\ref{3.6.3.0}), and obtain
\begin{eqnarray}\label{3.6.6}
h_{n-4}h_{n-3}+h_{n-3}^2+2h_{n-3}h_{n-2}+h_{n-2}^2+h_{n-2}h_{n-1}-h_{n-3}\hbar-h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$. Combining (\ref{3.6.6}) and (\ref{3.6.2}), we obtain
\begin{eqnarray}\label{3.6.7}
h_{n-3}h_{n-2}+h_{n-2}^2+h_{n-2}h_{n-1}-h_{n-2}\hbar=0.
\end{eqnarray}

{\it Step 3}. Now, let us consider the lowest weight vector (\ref{vtheta+alpha1}) in the
subrepresentation $V(\theta+\alpha_1)$. By applying
$ad_{ X_1}\circ ad_{ X_{\theta}}$ on \eqref{vtheta+alpha1}, we obtain
\begin{eqnarray}\label{3.0}
h_1(h_1+2h_2+\cdots+2h_{n-2}+h_{n-1}+h_n)-(n-2)h_1\hbar \in B(Rees(J)).
\end{eqnarray}
And by applying $ad_{ X_{12}}\circ ad_{ X_{12\overline{3}\cdots\overline{n-2},n-1,n}}$
on \eqref{vtheta+alpha1} we obtain
\begin{eqnarray}\label{3.1}
&&(h_1+h_2)(h_1+h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-(n-2)h_1\hbar-(n-3)h_2\hbar
\in B(Rees(J)).
\end{eqnarray}
Similarly, for any integer $i\leq n-3$, by applying $ad_{ X_{1\cdots i }}\circ ad_{ X_{1\cdots i \overline{i+1} \cdots\overline{n-2},n-1,n}}$ on \eqref{vtheta+alpha1}, we obtain a family of elements
\begin{eqnarray}\label{3.2}
&&(h_1+h_2+\cdots+h_i)(h_1+\cdots+h_i+2h_{i+1}+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag \\
&&-(n-2)h_1\hbar-(n-3)h_2\hbar-\cdots-(n-i-1)h_i\hbar\in B(Rees(J)).
\end{eqnarray}
Expanding \eqref{3.0}, we obtain the element
\begin{eqnarray*}
h_1^2+2h_1h_2+2h_1h_3+\cdots+2h_1h_{n-2}+h_1h_{n-1}+h_1h_n-(n-2)h_1\hbar \in B(Rees(J)).
\end{eqnarray*}
Similarly, in (\ref{3.2}), take $i=k$ and $i=k-1$ respectively; subtracting the latter from the former and letting $k$ vary, we obtain a family of elements in $B(Rees(J))$
\begin{eqnarray}
&&h_2^2+2h_2h_3+2h_2h_4+\cdots+2h_2h_{n-2}+h_2h_{n-1}+h_2h_n-(n-3)h_2\hbar,\notag\\
&& \vdots \notag\\
&& h_{n-4}^2+2h_{n-4}h_{n-3}+2h_{n-4}h_{n-2}+h_{n-4}h_{n-1}+h_{n-4}h_n-3h_{n-4}\hbar,\notag\\
&& h_{n-3}^2+2h_{n-3}h_{n-2}+h_{n-3}h_{n-1}+h_{n-3}h_n-2h_{n-3}\hbar,\notag\\
&&h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_{n}-h_{n-2}\hbar,\notag\\
&&h_{n-1}h_n=0.
\end{eqnarray}
Then by plugging (\ref{3.6.1}) in the above elements, we have
\begin{eqnarray}\label{Vtheta+alpha1result}
&&h_1^2+2h_1h_2-(n-2)h_1\hbar=0,\notag \\
&&h_2^2+2h_2h_3-(n-3)h_2\hbar=0,\notag \\
&&\vdots\notag \\
&&h_k^2+2h_kh_{k+1}-(n-k-1)h_k\hbar=0,\notag \\
&&\vdots\notag \\
&&h_{n-3}^2+2h_{n-3}h_{n-2}-2h_{n-3}\hbar=0,\\
&&h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_{n}-h_{n-2}\hbar=0\label{h_n-2^2},\\
&&h_{n-1}h_n=0. \label{hn-1hnDn}
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.

{\it Step 4}. We now plug the above equations into $C-c_\lambda$ in
$B\big(Rees(J)\big)$. First, notice that (\ref{vtheta0Dn}) reduces to
\begin{align*}
&\sum_{i=1}^{n} h_ih_i^{\vee}-2\rho^{*}-c_\lambda\\
=&h_1(h_1+h_2+\cdots+h_{n-2}+\dfrac{1}{2}h_{n-1}+\dfrac{1}{2}h_n)\\
&+h_2\big(h_1+2(h_2+\cdots+h_{n-2})+h_{n-1}+h_n\big)\\
&+\cdots+h_{n-2}\left(
h_1+2h_2+\cdots+(n-3)h_{n-3}+(n-2)h_{n-2}+\dfrac{n-2}{2}(h_{n-1}+h_{n})\right)\\
&+\dfrac{1}{2}h_{n-1}\left(
h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n}{2}h_{n-1}+\dfrac{n-2}{2}h_{n}\right)\\
&+\dfrac{1}{2} h_n\left(
h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n-2}{2}h_{n-1}+\dfrac{n}{2}h_{n}\right)\\
&-\left(2(n-1)h_1+2(2n-3)h_2+\cdots+2(kn-\dfrac{k(k+1)}{2})h_k\right.\\
&\left.+\cdots+\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\right)\hbar-c_\lambda
\end{align*}
in $B\big(Rees(J)\big)$.
By (\ref{3.6.1}), the above element gives the following identity
\begin{align}
&h_1^2+2h_1h_2+ 2(h_2^2+2h_2h_3)+\cdots+(n-3)(h_{n-3}^2+2h_{n-3}h_{n-2})\notag\\
&+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)+\dfrac{n}{4}(h_{n-1}^2+h_n^2)\notag\\
&-\left(2(n-1)h_1 +\cdots
+2(kn-\dfrac{k(k+1)}{2})h_k+\cdots+\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\right)\hbar-c_\lambda\notag\\
=&0
\end{align}
and
\begin{align*}
&(n-2)h_1\hbar-2(n-1)h_1\hbar+2(n-3)h_2\hbar-2(2n-3)h_2\hbar+\cdots\\
&+k(n-k-1)h_k\hbar-2(kn-\dfrac{k(k+1)}{2})h_k\hbar+\cdots+2(n-3)h_{n-3}\hbar\\
&-[2(n-3)n-(n-3)(n-2)]h_{n-3}\hbar+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)\\
&+\dfrac{n}{4}(h_{n-1}^2+h_n^2)-\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\hbar-(n^2-n-2)h_{n-2}\hbar-c_\lambda \\
=&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar\\
&+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)\\
&+\dfrac{n}{4}(h_{n-1}^2+h_n^2)-\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\hbar
-(n^2-n-2)h_{n-2}\hbar-c_\lambda\\
=&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar+(n-2)h_{n-2}^2\\
&+(\dfrac{n}{2}-2)( h_{n-2}h_{n-1}+h_{n-2}h_n)
+(n-\dfrac{n^2}{2})(h_{n-1}+h_n)\hbar-(n^2-n-2)h_{n-2}\hbar-c_\lambda\\
=&0
\end{align*}
in $B\big(Rees(\mathcal{U}/J)\big)$.
By (\ref{h_n-2^2}), the above equation becomes
\begin{align*}
&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar+(n-2)h_{n-2}^2\\
&+(\dfrac{n}{2}-2)(h_{n-2}\hbar-h_{n-2}^2)+(n-\dfrac{n^2}{2})(h_{n-1}+h_n)\hbar
-(n^2-n-2)h_{n-2}\hbar-c_\lambda=0,
\end{align*}
and hence
\begin{align}\label{hn-2^2result}
h_{n-2}^2 =&2h_1\hbar +4h_2\hbar+\cdots+2kh_k\hbar+\cdots+ (2n-6)h_{n-3}\hbar+(2n-3)h_{n-2}\hbar\notag \\
&+(n-2)(h_{n-1}+h_n)\hbar-(2n-4)\hbar^2.
\end{align}
In general, plugging (\ref{hn-2^2result}) in (\ref{h_n-2^2}),
then combining it with (\ref{3.6.3.1}), we have
\begin{align}\label{hn-3hn-2result}
h_{n-3}h_{n-2}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{n-2}\hbar-
\dfrac{n-2}{2} h_{n-1}\hbar\notag\\
&-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2.
\end{align}
Plugging (\ref{hn-2^2result}) and (\ref{hn-3hn-2result}) in (\ref{3.6.7}),
we obtain
\begin{align*}
h_{n-2}h_{n-1}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{n-2}\hbar-\dfrac{n-2}{2} h_{n-1}\hbar\notag\\
&-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2.
\end{align*}
Similarly, by the equations in \eqref{3.6.2}, \eqref{Vtheta+alpha1result}
together with \eqref{hn-2^2result} and
\eqref{hn-3hn-2result}, we obtain
\begin{align}\label{h_k^2}
h_k^2=&2h_1\hbar+\cdots+2(k-i)h_{k-i}\hbar+\cdots+(n+k-1)h_k\hbar
+(2n-4)h_{k+1}\hbar+\cdots\notag\\
&+(2n-4)h_{n-2}\hbar+(n-2)h_{n-1}\hbar+(n-2)h_{n}\hbar-(2n-4)\hbar^2,
\end{align}
and therefore
\begin{align}
h_k h_{k+1}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{k+1}\hbar-\cdots-(n-2)h_{n-2}\hbar\notag\\
&-\dfrac{n-2}{2} h_{n-1}\hbar-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2 \label{hkhk+1Dn}
\end{align}
for $k\leq n-2$, and
\begin{align}
h_{n-2}h_n=&h_{n-2}h_{n-1}=-h_1\hbar-\cdots-kh_k\hbar-\cdots\notag \\
&-(n-2)h_{n-2}\hbar-\dfrac{n-2}{2}h_{n-1}\hbar-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2,\label{hn-2hn}\\
h_{n-1}^2=&2h_1\hbar+4h_2\hbar+\cdots+(2n-4)h_{n-2}\hbar+
nh_{n-1}\hbar+(n-2)h_n\hbar-(2n-4)\hbar^2,\label{hn-1^2}
\\
h_n^2=&2h_1\hbar+4h_2\hbar+\cdots+(2n-4)h_{n-2}\hbar
+(n-2)h_{n-1}\hbar+nh_n\hbar-(2n-4)\hbar^2.
\label{hn^2}
\end{align}
In summary, we have the following.
\begin{proposition}\label{prop:relinBDn}
Let $J$ be the Joseph ideal in the $D_n$ Lie algebra.
Let $h_1,\cdots, h_n$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_n$ satisfy the relations \eqref{hihjDn}, \eqref{hn-1hnDn} and \eqref{h_k^2}--\eqref{hn^2}.
\end{proposition}
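As a sanity check (ours, not part of the paper), the relations of Proposition \ref{prop:relinBDn} can be machine-verified for small $n$: encoding each product $h_ih_j$ and each square $h_k^2$ as its vector of coefficients in the basis $(h_1\hbar,\dots,h_n\hbar,\hbar^2)$, the quadratic relations of Steps 2--3 become linear identities. A Python sketch (all helper names are ours):

```python
from fractions import Fraction as F

def dn_vectors(n):
    """Coefficient vectors (h_1*hbar, ..., h_n*hbar, hbar^2) for the D_n relations."""
    def vec(coeffs, const):
        v = [F(0)] * (n + 1)
        for m, c in coeffs.items():
            v[m - 1] = F(c)
        v[n] = F(const)
        return v
    half = F(n - 2, 2)
    prod, sq = {}, {}
    for k in range(1, n - 1):                      # h_k h_{k+1}, 1 <= k <= n-2
        c = {m: -m for m in range(1, k + 1)}
        c.update({m: -(n - 2) for m in range(k + 1, n - 1)})
        c[n - 1], c[n] = -half, -half
        prod[(k, k + 1)] = vec(c, n - 2)
    prod[(n - 2, n)] = prod[(n - 2, n - 1)]        # h_{n-2} h_n = h_{n-2} h_{n-1}
    for k in range(1, n - 1):                      # h_k^2, 1 <= k <= n-2
        c = {m: 2 * m for m in range(1, k)}
        c[k] = n + k - 1
        c.update({m: 2 * n - 4 for m in range(k + 1, n - 1)})
        c[n - 1], c[n] = n - 2, n - 2
        sq[k] = vec(c, -(2 * n - 4))
    c = {m: 2 * m for m in range(1, n - 1)}; c[n - 1], c[n] = n, n - 2
    sq[n - 1] = vec(c, -(2 * n - 4))               # h_{n-1}^2
    c = {m: 2 * m for m in range(1, n - 1)}; c[n - 1], c[n] = n - 2, n
    sq[n] = vec(c, -(2 * n - 4))                   # h_n^2
    return prod, sq, vec

def check(n):
    prod, sq, vec = dn_vectors(n)
    add = lambda *vs: [sum(col) for col in zip(*vs)]
    rhs = lambda k, a: vec({k: a}, 0)
    for k in range(1, n - 2):   # h_k^2 + 2 h_k h_{k+1} = (n-k-1) h_k hbar
        assert add(sq[k], prod[(k, k + 1)], prod[(k, k + 1)]) == rhs(k, n - k - 1)
    # h_{n-2}^2 + h_{n-2}h_{n-1} + h_{n-2}h_n = h_{n-2} hbar
    assert add(sq[n - 2], prod[(n - 2, n - 1)], prod[(n - 2, n)]) == rhs(n - 2, 1)
    # h_{n-1}^2 + 2 h_{n-2}h_{n-1} = 2 h_{n-1} hbar, and likewise for h_n
    assert add(sq[n - 1], prod[(n - 2, n - 1)], prod[(n - 2, n - 1)]) == rhs(n - 1, 2)
    assert add(sq[n], prod[(n - 2, n)], prod[(n - 2, n)]) == rhs(n, 2)

for n in range(6, 11):
    check(n)
print("D_n relations consistent for n = 6..10")
```

The alias `prod[(n-2, n)] = prod[(n-2, n-1)]` encodes the equality $h_{n-2}h_n=h_{n-2}h_{n-1}$ forced by the diagram symmetry of $D_n$.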
\subsection{$B$-algebra in the type E case}\label{subsection En case}
In the previous two subsections, we constructed in detail the $B$-algebras
of the quantizations of the minimal orbits in types A and D.
For the type E case, the computations are analogous to the type D case,
and therefore we only sketch them.
\subsubsection{The Frenkel-Kac construction}
For Lie algebras of types A and D we have a canonical basis consisting of matrices,
but for the type E Lie algebras no such basis is available.
Thus, for the relation
$[X_\alpha,X_\beta]=\varepsilon(\alpha,\beta) X_{\alpha+\beta}$ in a Chevalley basis, where
$\alpha, \beta, \alpha+\beta\in\Delta$, the sign of
$\varepsilon(\alpha,\beta)$ cannot be determined via a canonical basis.
In \cite[\S7.8]{Kac}, Kac gave a construction, usually called
the {\it Frenkel-Kac construction} in the literature, to
determine the signs.
Let $\varepsilon:\Delta\times\Delta\rightarrow\{\pm 1\}$ be a function satisfying the
bimultiplicativity condition
\begin{align*}
\varepsilon(\alpha+\alpha{'},\beta)&=\varepsilon(\alpha,\beta)\varepsilon(\alpha{'},\beta),\\
\varepsilon(\alpha,\beta+\beta{'})&=\varepsilon(\alpha,\beta)\varepsilon(\alpha,\beta{'})
\end{align*}
for $\alpha,\alpha{'},\beta,\beta{'}\in\Delta$, and the condition
\begin{eqnarray*}
\varepsilon(\alpha,\alpha)=(-1)^{\frac{(\alpha|\alpha)}{2}},
\end{eqnarray*}
where $(\alpha|\alpha)$ is the normalized invariant form on the affine algebra
(see \cite[\S 6.2]{Kac}). We call such a function an {\it asymmetry function}.
An asymmetry function
$\varepsilon$ can be constructed as follows:
Choose an orientation of the Dynkin diagram, let
$$
\begin{array}{ll}
\varepsilon(\alpha_i,\alpha_j)=-1,&\text{ if } i=j \text{ or if }
\overset{\alpha_i}{\circ}\rightarrow \overset{\alpha_j}{\circ} \\
\varepsilon(\alpha_i,\alpha_j)=1,& \text{ otherwise, i.e.,}
\overset{\alpha_i}{\circ} \ \overset{\alpha_j}{\circ} \text{ or }
\overset{\alpha_i}{\circ}\leftarrow \overset{\alpha_j}{\circ}.
\end{array}
$$
For example, if we choose the following orientation of the Dynkin diagram of type $E_6$:
\begin{eqnarray*}
\overset{\alpha_1}{\circ}\leftarrow\overset{\alpha_3}{\circ}\leftarrow
&\overset{\alpha_4}{\circ}&\rightarrow\overset{\alpha_5}{\circ}\rightarrow
\overset{\alpha_6}{\circ} \\ &\downarrow &\\
&\overset{\alpha_2}{\circ},
\end{eqnarray*}
then we have
\begin{align*}
&\varepsilon(\alpha_1,\alpha_1)=-1, \quad\varepsilon(\alpha_1,\alpha_j)=1,\ j=2,3,4,5,6 \\
&\varepsilon(\alpha_2,\alpha_2)=-1, \quad \varepsilon(\alpha_2,\alpha_j)=1,\ j=1,3,4,5,6 \\
&\varepsilon(\alpha_3,\alpha_1)=\varepsilon(\alpha_3,\alpha_3)
=-1,\quad \varepsilon(\alpha_3,\alpha_j)=1, j=2,4,5,6\\
&\varepsilon(\alpha_4,\alpha_2)=\varepsilon(\alpha_4,\alpha_3)
=\varepsilon(\alpha_4,\alpha_4)=\varepsilon(\alpha_4,\alpha_5)=-1,
\quad\varepsilon(\alpha_4,\alpha_1)=\varepsilon(\alpha_4,\alpha_6)=1, \\
&\varepsilon(\alpha_5,\alpha_5)=\varepsilon(\alpha_5,\alpha_6)
=-1,\quad \varepsilon(\alpha_5,\alpha_j)=1, j=1,2,3,4,\\
&\varepsilon(\alpha_6,\alpha_6)=-1, \quad\varepsilon(\alpha_6,\alpha_j)=1, j=1,2,3,4,5.
\end{align*}
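The table above can be reproduced mechanically from the orientation by extending $\varepsilon$ bimultiplicatively. The short Python sketch below (ours, not the paper's) does this; it assumes the Bourbaki expression $\theta=\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6$ for the highest root of $E_6$ when evaluating signs on non-simple roots:

```python
# Arrows of the chosen orientation of the E6 diagram: (i, j) means alpha_i -> alpha_j.
arrows = [(3, 1), (4, 3), (4, 5), (5, 6), (4, 2)]
minus_pairs = {(m, m) for m in range(1, 7)} | set(arrows)   # eps(alpha_i, alpha_j) = -1

def eps(a, b):
    """Bimultiplicative extension of eps to root-lattice coordinates a, b."""
    s = sum(a[m - 1] * b[k - 1] for (m, k) in minus_pairs)
    return -1 if s % 2 else 1

alpha = [[1 if k == m else 0 for k in range(6)] for m in range(6)]
theta = [1, 2, 2, 3, 2, 1]   # highest root of E6, Bourbaki labelling (our assumption)
assert all(eps(alpha[m], alpha[m]) == -1 for m in range(6))
assert eps(alpha[2], alpha[0]) == -1 and eps(alpha[0], alpha[2]) == 1  # eps(a3,a1), eps(a1,a3)
print(eps([1, 1, 1, 1, 1, 0], theta))   # sign of eps(alpha_1+...+alpha_5, theta); prints 1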
With the Frenkel-Kac construction, we are able to write
the adjoint action of $\mathcal U(\mathfrak g)$ on itself explicitly,
where $\mathfrak g$ is a Lie algebra of type E.
For example, by applying $ad_{X_{12345}}$ on the first term of \eqref{highestrootvectorE6}, we obtain
\begin{eqnarray*}
Y_{13456}\cdot \varepsilon(\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5,\theta)Y_{23\bar{4}56}=Y_{13456}Y_{23\bar{4}56}.
\end{eqnarray*}
In this way we can find all the relations among elements of $B\big(Rees(J)\big)$,
completely analogously to the type D case.
In the rest of this section, we only list the necessary results
and leave the details
to the interested reader.
\subsubsection{The $E_6$ case}
Let us consider the subrepresentation $V(0)$; the generator (\ref{CasimirE6}) reduces to
\begin{align*}
&\dfrac{1}{3} h_1(4h_1+3h_2+5h_3+6h_4+4h_5+2h_6)\\
&+h_2(h_1+2h_2+2h_3+3h_4+2h_5+h_6)\\
&+\dfrac{1}{3} h_3(5h_1+6h_2+10h_3+12h_4+8h_5+4h_6)\\
&+h_4(2h_1+3h_2+4h_3+6h_4+4h_5+2h_6)\\
&+\dfrac{1}{3} h_5(4h_1+6h_2+8h_3+12h_4+10h_5+5h_6)\\
&+\dfrac{1}{3}h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6)\\
&-16h_1\hbar-22h_2\hbar-30h_3\hbar-42h_4\hbar-30h_5\hbar-16h_6\hbar+36\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
By the same method as in the type D case, we obtain the relations in
$B\big(Rees(\mathcal{U}/J)\big)$ induced by $(\ref{highestrootvectorE6})$ as follows:
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
& h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_4+h_3h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=0,\\
& h_2h_3=h_2h_5=h_2h_6=0,\quad h_3h_5=h_3h_6=0,\quad h_4h_6=0.
\end{align*}
From these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_6}
Let $J$ be the Joseph ideal of the $E_6$ Lie algebra. Let
$h_1,\cdots, h_6$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_6$ satisfy the following relations:
\begin{align*}
h_4^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+13h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_3^2&=4h_1\hbar+6h_2\hbar+10h_3\hbar+12h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_1^2&=7h_1\hbar+6h_2\hbar+10h_3\hbar+12h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_2^2&=4h_1\hbar+8h_2\hbar+8h_3\hbar+12h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_5^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_6^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+7h_6\hbar-12\hbar^2,\\
h_1h_3&=-2h_1\hbar-3h_2\hbar-5h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_2h_4&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_3h_4&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_4h_5&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_5h_6&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-5h_5\hbar-2h_6\hbar+6\hbar^2.
\end{align*}
\end{proposition}
\subsubsection{The $E_7$ case}
We obtain the relations in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$ induced by
$(\ref{highestrootvectorE7})$ as follows
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
&h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_4+h_3h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_7^2+2h_6h_7=4h_7\hbar,\quad h_6^2+h_5h_6+h_6h_7=h_6\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=h_1h_7=0,\\
& h_2h_3=h_2h_5=h_2h_6=h_2h_7=0,\quad h_3h_5=h_3h_6=h_3h_7=0,\\
& h_4h_6=h_4h_7=0,\quad h_5h_7=0
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Let us consider the subrepresentation $V(0)$; the generator (\ref{CasimirE7}) reduces to
\begin{align*}
& h_1(2h_1+2h_2+3h_3+4h_4+3h_5+2h_6+h_7)\\
&+\dfrac{1}{2}h_2(4h_1+7h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\\
&+ h_3(3h_1+4h_2+6h_3+8h_4+6h_5+4h_6+2h_7)\\
&+h_4(4h_1+6h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\\
&+\dfrac{1}{2} h_5(6h_1+9h_2+12h_3+18h_4+15h_5+10h_6+5h_7)\\
&+h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+2h_7)\\
&+\dfrac{1}{2}h_7(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7)\\
&-34h_1\hbar-49h_2\hbar-66h_3\hbar-96h_4\hbar
-75h_5\hbar-52h_6\hbar-27h_7\hbar+84\hbar^2 \\
=&2h_1^2+6h_1h_3+\dfrac{7}{2}h_2^2+12h_2h_4+6h_3^2+16h_3h_4+12h_4^2+18h_4h_5
+\dfrac{15}{2}h_5^2+10h_5h_6+4h_6^2\\
&+4h_6h_7+\dfrac{3}{2}h_7^2
-34h_1\hbar-49h_2\hbar-66h_3\hbar-96h_4\hbar-75h_5\hbar-52h_6\hbar-27h_7\hbar+84\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
By these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_7}
Let $J$ be the Joseph ideal of the $E_7$ Lie algebra. Let
$h_1,\cdots, h_7$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_7$ satisfy the following relations:
\begin{align*}
h_2h_4&=-4h_1\hbar-6h_2\hbar-8h_3\hbar-12h_4\hbar-9h_5\hbar-6h_6\hbar-3h_7\hbar+12\hbar^2,\\
h_4^2&=8h_1\hbar+12h_2\hbar+16h_3\hbar+25h_4\hbar+18h_5\hbar+12h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_3^2&=8h_1\hbar+12h_2\hbar+18h_3\hbar+24h_4\hbar+18h_5\hbar+12h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_2^2&=8h_1\hbar+14h_2\hbar+16h_3\hbar+24h_4\hbar+18h_5\hbar+12h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_1^2&=11h_1\hbar+12h_2\hbar+18h_3\hbar+24h_4\hbar+18h_5\hbar+12h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_5^2&=8h_1\hbar+12h_2\hbar+16h_3\hbar+24h_4\hbar+20h_5\hbar+12h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_6^2&=8h_1\hbar+12h_2\hbar+16h_3\hbar+24h_4\hbar+20h_5\hbar+15h_6\hbar+6h_7\hbar-24\hbar^2,\\
h_7^2&=8h_1\hbar+12h_2\hbar+16h_3\hbar+24h_4\hbar+20h_5\hbar+16h_6\hbar+10h_7\hbar-24\hbar^2,\\
h_3h_4&=-4h_1\hbar-6h_2\hbar-8h_3\hbar-12h_4\hbar-9h_5\hbar-6h_6\hbar-3h_7\hbar+12\hbar^2,\\
h_4h_5&=-4h_1\hbar-6h_2\hbar-8h_3\hbar-12h_4\hbar-9h_5\hbar-6h_6\hbar-3h_7\hbar+12\hbar^2,\\
h_1h_3&=-4h_1\hbar-6h_2\hbar-9h_3\hbar-12h_4\hbar-9h_5\hbar-6h_6\hbar-3h_7\hbar+12\hbar^2,\\
h_5h_6&=-4h_1\hbar-6h_2\hbar-8h_3\hbar-12h_4\hbar-10h_5\hbar-6h_6\hbar-3h_7\hbar+12\hbar^2,\\
h_6h_7&=-4h_1\hbar-6h_2\hbar-8h_3\hbar-12h_4\hbar-10h_5\hbar-8h_6\hbar-3h_7\hbar+12\hbar^2
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
\end{proposition}
\subsubsection{The $E_8$ case}
We obtain the relations in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$ induced by
$(\ref{highestrootvectorE8})$ as follows
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
& h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_4+h_3h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_7^2+2h_6h_7=4h_7\hbar,\quad h_6^2+h_5h_6+h_6h_7=h_6\hbar,\\
& h_8^2+2h_7h_8=5h_8\hbar,\quad h_7^2+h_6h_7+h_7h_8=h_7\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=h_1h_7=h_1h_8=0,\\
& h_2h_3=h_2h_5=h_2h_6=h_2h_7=h_2h_8=0,\quad h_3h_5=h_3h_6=h_3h_7=h_3h_8=0,\\
& h_4h_6=h_4h_7=h_4h_8=0,\quad h_5h_7=h_5h_8=0,\quad h_6h_8=0
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Let us consider the subrepresentation $V(0)$; the generator (\ref{CasimirE8}) reduces to
\begin{align*}
&h_1(4h_1+5h_2+7h_3+10h_4+8h_5+6h_6+4h_7+2h_8)\notag \\
&+h_2(5h_1+8h_2+10h_3+15h_4+12h_5+9h_6+6h_7+3h_8)\notag \\
&+h_3(7h_1+10h_2+14h_3+20h_4+16h_5+12h_6+8h_7+4h_8)\notag \\
&+h_4(10h_1+15h_2+20h_3+30h_4+24h_5+18h_6+12h_7+6h_8)\notag \\
&+h_5(8h_1+12h_2+16h_3+24h_4+20h_5+15h_6+10h_7+5h_8)\notag \\
&+h_6(6h_1+9h_2+12h_3+18h_4+15h_5+12h_6+8h_7+4h_8)\notag \\
&+h_7(4h_1+6h_2+8h_3+12h_4+10h_5+8h_6+6h_7+3h_8)\notag \\
&+h_8(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7+2h_8)\\
&-92h_1\hbar-136h_2\hbar-182h_3\hbar-270h_4\hbar-220h_5\hbar
-168h_6\hbar-114h_7\hbar-58h_8\hbar+240\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
From these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_8}
Let $J$ be the Joseph ideal of the $E_8$ Lie algebra. Let
$h_1,\cdots, h_8$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_8$ satisfy the following relations:
\begin{align*}
h_4^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+61h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_2^2&=20h_1\hbar+32h_2\hbar+40h_3\hbar+60h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_3^2&=20h_1\hbar+30h_2\hbar+42h_3\hbar+60h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_1^2&=23h_1\hbar+30h_2\hbar+42h_3\hbar+60h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_5^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_6^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+39h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_7^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+40h_6\hbar+28h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_8^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+40h_6\hbar+30h_7\hbar+17h_8\hbar-60\hbar^2,\\
h_2h_4&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_3h_4&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_4h_5&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_1h_3&=-10h_1\hbar-15h_2\hbar-21h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_5h_6&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_6h_7&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-20h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_7h_8&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-20h_6\hbar-15h_7\hbar-6h_8\hbar+30\hbar^2
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$. \end{proposition}
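These tables can be machine-checked against the quadratic relations induced by the highest root vectors: writing each product $h_ih_j$ and square $h_k^2$ as its coefficient vector in the basis $(h_1\hbar,\dots,h_r\hbar,\hbar^2)$, every relation becomes a linear identity. A Python sketch (ours; all names are hypothetical) for $E_7$ and $E_8$:

```python
def vadd(*vs):
    return tuple(sum(t) for t in zip(*vs))

def unit(rank, k, c):
    # c * h_k * hbar as a coefficient vector; the last slot is the hbar^2 coefficient
    return tuple(c if m == k else 0 for m in range(1, rank + 1)) + (0,)

# E7: vectors (c_1, ..., c_7, c_{hbar^2}) with h_i h_j = sum_m c_m h_m hbar + c_{hbar^2} hbar^2
p = (-4, -6, -8, -12, -9, -6, -3, 12)            # h2h4 = h3h4 = h4h5
p13 = (-4, -6, -9, -12, -9, -6, -3, 12)
p56 = (-4, -6, -8, -12, -10, -6, -3, 12)
p67 = (-4, -6, -8, -12, -10, -8, -3, 12)
sq = {1: (11, 12, 18, 24, 18, 12, 6, -24),
      3: (8, 12, 18, 24, 18, 12, 6, -24),
      4: (8, 12, 16, 25, 18, 12, 6, -24),
      5: (8, 12, 16, 24, 20, 12, 6, -24),
      6: (8, 12, 16, 24, 20, 15, 6, -24),
      7: (8, 12, 16, 24, 20, 16, 10, -24)}
u7 = lambda k, c: unit(7, k, c)
assert vadd(sq[1], p13, p13) == u7(1, 3)         # h1^2 + 2 h1h3 = 3 h1 hbar
assert vadd(sq[3], p, p) == u7(3, 2)             # h3^2 + 2 h3h4 = 2 h3 hbar
assert vadd(sq[3], p, p13) == u7(3, 1)           # h3^2 + h3h4 + h1h3 = h3 hbar
assert vadd(sq[4], p, p) == u7(4, 1)             # h4^2 + two neighbour products = h4 hbar
assert vadd(sq[5], p, p) == u7(5, 2)
assert vadd(sq[5], p, p56) == u7(5, 1)
assert vadd(sq[6], p56, p56) == u7(6, 3)
assert vadd(sq[6], p56, p67) == u7(6, 1)
assert vadd(sq[7], p67, p67) == u7(7, 4)         # h7^2 + 2 h6h7 = 4 h7 hbar

# E8: the same check in rank 8
q = (-10, -15, -20, -30, -24, -18, -12, -6, 30)  # h2h4 = h3h4 = h4h5
q13 = (-10, -15, -21, -30, -24, -18, -12, -6, 30)
q56 = (-10, -15, -20, -30, -25, -18, -12, -6, 30)
q67 = (-10, -15, -20, -30, -25, -20, -12, -6, 30)
q78 = (-10, -15, -20, -30, -25, -20, -15, -6, 30)
s = {1: (23, 30, 42, 60, 48, 36, 24, 12, -60),
     3: (20, 30, 42, 60, 48, 36, 24, 12, -60),
     4: (20, 30, 40, 61, 48, 36, 24, 12, -60),
     5: (20, 30, 40, 60, 50, 36, 24, 12, -60),
     6: (20, 30, 40, 60, 50, 39, 24, 12, -60),
     7: (20, 30, 40, 60, 50, 40, 28, 12, -60),
     8: (20, 30, 40, 60, 50, 40, 30, 17, -60)}
u8 = lambda k, c: unit(8, k, c)
assert vadd(s[1], q13, q13) == u8(1, 3)
assert vadd(s[3], q, q) == u8(3, 2)
assert vadd(s[3], q, q13) == u8(3, 1)
assert vadd(s[4], q, q) == u8(4, 1)
assert vadd(s[5], q, q) == u8(5, 2)
assert vadd(s[5], q, q56) == u8(5, 1)
assert vadd(s[6], q56, q56) == u8(6, 3)
assert vadd(s[6], q56, q67) == u8(6, 1)
assert vadd(s[7], q67, q67) == u8(7, 4)
assert vadd(s[7], q67, q78) == u8(7, 1)
assert vadd(s[8], q78, q78) == u8(8, 5)          # h8^2 + 2 h7h8 = 5 h8 hbar
print("E7 and E8 quadratic relations verified")
```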
\section{Proof of the main theorem}\label{sect:proofofmainthm}
We are now ready to prove the main theorem of our paper.
First, notice that by Lemma \ref{lemma:dimJU2} and
Proposition \ref{Prop:I^2_0}, the relations on $h_i$'s that we found
in the previous section are exactly all the relations of
$B(\mathscr{A})$.
\begin{proof}[Proof of Theorem \ref{maintheorem}]
First, by Proposition \ref{vectorspace of B-algebra},
it is direct to see that in all the cases, the isomorphisms to
be checked are isomorphisms of vector spaces.
We only need to prove that they preserve the algebra structures
on both sides.
In the $A_n$ case, let
\[
t_1\mapsto\dfrac{z\hbar}{n+1},\ t_2\mapsto
\dfrac{z+n+1}{n+1}\hbar,\ h_k\mapsto e_k, \text{ for } k\in\mathbb{N} \text{ and } 1\leq k\leq n.
\]
By comparing (\ref{hihj result}),
(\ref{hkhk+1 result}), (\ref{hk2 result}) with (\ref{elej}), (\ref{ejej+1}),
(\ref{ej2}) respectively, we get the theorem.
In the $D_n$ case, let
\begin{eqnarray*}
\hbar \mapsto 2t,\ h_k\mapsto e_k,\ k\in\mathbb{N} \text{ and }1\leq k\leq n.
\end{eqnarray*}
By comparing the relations in Proposition \ref{prop:relinBDn}, namely,
\eqref{hihjDn}, \eqref{hkhk+1Dn}, \eqref{h_k^2}, \eqref{hn-2hn}, (\ref{hn-1^2}), (\ref{hn^2}) with
the relations in Corollary \ref{cohomology of Dn}, namely, \eqref{eiejDn},
\eqref{ekek+1Dn}, \eqref{ek2Dn}, \eqref{en-2en}, \eqref{en-1^2}, \eqref{en^2} respectively, we get the theorem.
In the type E case, let
\begin{eqnarray*}
\hbar \mapsto 2t,\ h_k\mapsto e_k,
\end{eqnarray*}
and compare the relations in Propositions \ref{prop:relinBE_6}, \ref{prop:relinBE_7}
and \ref{prop:relinBE_8}
with the relations in Corollaries \ref{cohomology of E6},
\ref{cohomology of E7} and \ref{cohomology of E8} respectively;
we get the desired isomorphisms.
\end{proof}
In this paper, we prove the main theorem by explicitly computing
the product structures of the two types of algebras.
We expect that there exists a closed formula for the $B$-algebras
of the quantizations of the minimal nilpotent orbits,
like the one of Bryan and Gholampour given in Theorem \ref{thm:BryanGholam}.
|
2302.13260
|
\section{Introduction}
\subsection{Background}
In \cite{HNY}, He, Nie and Yu study affine Deligne-Lusztig varieties with finite Coxeter parts. They study such varieties using the Deligne-Lusztig reduction method from \cite{DL76} and a careful investigation of the reduction paths. In this approach, they establish a ``multiplicity one'' result which says, roughly speaking, that for any $\sigma$-conjugacy class $[b]\in B(G)$, there is at most one path in the reduction tree that corresponds to $[b]$. The proof of the ``multiplicity one'' result is obtained by showing that a certain combinatorial identity (of two $q$-polynomials, or more precisely, of the class polynomials) of the following form holds:
\[\sum_{[b]\in B(G,\mu)_\text{indec}} (q-1)^?q^{-??}=1.\]
They first reduce this to the case when $G$ is split and simply-laced and $\mu$ is a fundamental coweight (\cite{HNY} 6.5 and 6.6). Then, for type $A$, they check the identity using some geometric properties of affine Deligne-Lusztig varieties, such as dimension formulae and the injectivity of the projection map from the affine flag variety to the affine Grassmannian (\cite{HNY} 5.4). At the end (\cite{HNY} 6.9), they ask whether a combinatorial proof of this identity exists. Our goal is to give such a proof.
\subsection{The identity and the sketch of proof}
The following identity is of interest to us.
\begin{prop}\label{mainthm}
Fix natural numbers $i< n$ and let $j\colonequals n-i>0$ for the sake of simplicity. Then
\[\sum_{\substack{k\ge1\\ ((a_l,b_l))_{l\le k}\in D_k}} (q-1)^{k-1}q^{1-k+\frac{\sum_{1\le l_1<l_2\le k}(a_{l_1}b_{l_2}-a_{l_2}b_{l_1}) +\sum_{1\le l\le k}\gcd(a_l,b_l)}{2}}=q^{\frac{ij-n}{2}+1},\]where $D_k=\{((a_l,b_l))_{l\le k}: a_l,b_l\text{ are natural numbers and } a_1+\cdots+ a_k=i,~b_1+\cdots+b_k=n,\text{ and } 1>\frac{a_1}{b_1}>\cdots>\frac{a_k}{b_k}>0.\}$.
\end{prop}
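Before turning to the proof, the identity is easy to verify by brute force for small $(i,n)$: since the exponents are half-integers, one can evaluate both sides exactly at $q=s^2$ for integers $s$, so that $q^{1/2}=s$. A Python sketch (ours, not part of the paper):

```python
from fractions import Fraction as F
from math import gcd

def compositions(total, k):
    """All k-tuples of positive integers summing to total."""
    if k == 1:
        yield (total,)
        return
    for first in range(1, total - k + 2):
        for rest in compositions(total - first, k - 1):
            yield (first,) + rest

def lhs(i, n, s):
    """Left-hand side of the identity, evaluated exactly at q = s^2."""
    q, total = s * s, F(0)
    for k in range(1, i + 1):
        for a in compositions(i, k):
            for b in compositions(n, k):
                sl = [F(x, y) for x, y in zip(a, b)]   # the slopes a_l / b_l
                if not (sl[0] < 1 and all(sl[m] > sl[m + 1] for m in range(k - 1))):
                    continue
                cr = sum(a[l1] * b[l2] - a[l2] * b[l1]
                         for l1 in range(k) for l2 in range(l1 + 1, k))
                g = sum(gcd(x, y) for x, y in zip(a, b))
                # exponent of s = q^(1/2) is 2(1-k) + cross-term + gcd-term
                total += (q - 1) ** (k - 1) * F(s) ** (2 * (1 - k) + cr + g)
    return total

for i, n in [(1, 2), (1, 3), (2, 4), (2, 5), (3, 5), (2, 6), (3, 7), (4, 9)]:
    for s in (2, 3):
        assert lhs(i, n, s) == F(s) ** (i * (n - i) - n + 2)   # RHS: q^{(ij-n)/2+1}
print("identity verified for sample (i, n)")
```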
\subsubsection{Notations}
Our strategy is to use a coordinate plane to understand the (index set of the) identity. A polygon always means a polygon \textit{whose vertices are all lattice points (i.e., the coordinates are integers)} and we count segments as $2$-gons. We denote by $\area(P)$ the area of $P$, by $i(P)$ the number of lattice points interior to $P$, and by $b(P)$ the number of lattice points on the boundary of $P$.
We will use $\mathbb{N}$ to mean the set of natural numbers.
\subsubsection{A few lemmas and the proof of the identity}
We first observe that changing $b_l$ into $b_l-a_l$ does not affect the sum and allows us to remove the condition that the slopes are bounded by $1$.
\begin{lem}\label{btob-a}
Let $C_k$ be the set of tuples $((x_l,y_l))_{l\le k}$ with $(x_l,y_l)\in\mathbb{N}^2$ such that $x_1+\cdots+x_k=i$, $y_1+\cdots+y_k=j$, and $\frac{y_1}{x_1}<\cdots<\frac{y_k}{x_k}$. Then,
\[\sum_{\substack{k\ge1\\ ((a_l,b_l))_{l\le k}\in ?_k}} (q-1)^{k-1}q^{1-k+\frac{\sum_{1\le l_1<l_2\le k}(a_{l_1}b_{l_2}-a_{l_2}b_{l_1}) +\sum_{1\le l\le k}\gcd(a_l,b_l)}{2}}\] for $?=C$ and $?=D$ are equal.
\end{lem}
Next, the idea to interpret this summation is by making a one-to-one correspondence between an element $((x_l,y_l))_{l\le k}\in C_k$ and a convex polygon satisfying some condition. Let us denote by $\Delta$ the triangle whose vertices are $(0,0)$, $(i,0)$, and $(i,j)$. For simplicity, let $L$ denote the segment(=$2$-gon) connecting $(0,0)$ and $(i,j)$.
\begin{lem}\label{pick}There is a one-to-one correspondence between $C_k$ and the set of convex $(k+1)$-gons lying in $\Delta$ but not touching the horizontal and vertical edges of $\Delta$ such that $L$ is an edge. Under this correspondence, we have\begin{align*}
&\frac{1}{2}\left(\sum_{1\le l_1<l_2\le k }(x_{l_1}y_{l_2}-x_{l_2}y_{l_1})+\sum_{1\le l\le k}\gcd(x_l,y_l)\right)\\
&\qquad=i(P)+b(P)-1-\frac{1}{2}\gcd(i,j),
\end{align*}where $P$ is the corresponding $(k+1)$-gon.
\end{lem}
We will denote the set of such $(k+1)$-gons by $C_k$ abusing notation.\newpage
The main lemma in our proof is the following:
\begin{lem}\label{mainlem}
Let $C$ be the set of all convex polygons that lie in $\Delta$, do not touch the horizontal and vertical edges, and contain $L$ as an edge. Then, the following identity holds:
\[\sum_{P\in C}x^{u(P)} (1-x)^{v(P)-2}=1,\]where $u(P)$ is the number of lattice points interior to $\Delta\setminus P$ and $v(P)$ is the number of vertices of $P$.
\end{lem}
Finally, let us recall the well-known Pick's Theorem.
\begin{namedtheorem}[Theorem]
Let $P$ be a polygon in a coordinate plane. Then,\[\area(P)=i(P)+\frac{b(P)}{2}-1.\]
\end{namedtheorem}
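Pick's Theorem is also easy to test numerically; the sketch below (ours) compares the shoelace area with $i(P)+b(P)/2-1$ for a few convex lattice polygons listed counter-clockwise:

```python
from fractions import Fraction as F
from math import gcd

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def pick_holds(P):
    """Check Pick's theorem for a convex lattice polygon P listed counter-clockwise."""
    m = len(P)
    area = F(abs(sum(P[k][0] * P[(k + 1) % m][1] - P[(k + 1) % m][0] * P[k][1]
                     for k in range(m))), 2)                  # shoelace formula
    b = sum(gcd(abs(P[(k + 1) % m][0] - P[k][0]),
                abs(P[(k + 1) % m][1] - P[k][1])) for k in range(m))
    xs = [p[0] for p in P]; ys = [p[1] for p in P]
    i = sum(all(cross(P[k], P[(k + 1) % m], (x, y)) > 0 for k in range(m))
            for x in range(min(xs), max(xs) + 1)
            for y in range(min(ys), max(ys) + 1))             # strict interior count
    return area == i + F(b, 2) - 1

assert pick_holds([(0, 0), (4, 0), (4, 3)])
assert pick_holds([(0, 0), (3, 1), (4, 3)])
assert pick_holds([(0, 0), (3, 0), (5, 2), (2, 3)])
print("Pick's theorem verified on samples")
```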
We are now ready to prove \Cref{mainthm}.
\begin{proof}[Proof of \Cref{mainthm}]
By \Cref{btob-a,pick}, it is enough to show that
\[\sum_{k\ge1,~ P\in C_k} (q-1)^{k-1}q^{-(k-1)+i(P)+b(P)}=q^{\frac{ij-n+\gcd(i,j)}{2}+2}.\]
Now, we observe that the exponent part on the right-hand side can be written as $i(\Delta)+b(\Delta)-(n-1)$ simply using the facts that $\area(\Delta)=\frac{ij}{2}$ and $b(\Delta)=n+\gcd(i,j)$ and applying Pick's Theorem.
As $C=\cup_{k\ge1} C_k$, the index set of the summation is $C$. Now, observing that $u(P)=i(\Delta)+b(\Delta)-(n-1)-(i(P)+b(P))$, we are reduced to showing
\[\sum_C \left(\frac{q-1}{q}\right)^{k-1}q^{-u(P)}=1.\]
This is nothing but the resulting identity of \Cref{mainlem} by letting $x=\frac{1}{q}$ because, for any $P\in C_k$, we have $v(P)-2=k+1-2=k-1$.
\end{proof}
\begin{rem}
We do not know whether the identity of \Cref{mainlem} is well-known. It looks interesting to us because the left-hand side is not homogeneous, in the sense that $u(P)+v(P)-2$ is not constant, and yet it gives a way to generate $1$ using polynomials of the form $x^a(1-x)^b$.
We wonder if $\{(u(P),v(P)-2):P\in C\}$ parametrizes all such pairs $\{(a_i,b_i)\in\mathbb{N}^2:i\in I\}$ such that $\sum_{i\in I}x^{a_i}(1-x)^{b_i}=1$. More precisely, let $S=\{(a_i,b_i)\in\mathbb{N}^2: 1\le i\le k\}$ be a set satisfying \[\sum_{i=1}^k x^{a_i}(1-x)^{b_i}=1,\]and $(0,1)\in S$. Then we would like to ask if there exist $m,n\in\mathbb{N}$ such that $S=\{(u(P),v(P)-2):P\in C_{m,n}\}$ where $C_{m,n}$ is the set defined in \Cref{mainlem} corresponding to the triangle $\Delta_{m,n}$ whose vertices are $(0,0)$, $(m,0)$, and $(m,n)$.
\end{rem}
\section{Proofs of lemmas}
\begin{proof}[Proof of \Cref{btob-a}]
The one-to-one correspondence from $C_k$ to $D_k$ is given by $(x_l,y_l)\mapsto (x_l,x_l+y_l)$. It is easy to check that the conditions on the sums and the slopes are all equivalent. The one that needs justification is the exponent part. However, $x_{l_1}(x_{l_2}+y_{l_2})-x_{l_2}(x_{l_1}+y_{l_1})=x_{l_1}y_{l_2}-x_{l_2}y_{l_1}$ and $\gcd(x_l,x_l+y_l)=\gcd(x_l,y_l)$ obviously.
\end{proof}
\begin{proof}[Proof of \Cref{pick}]
Given $((x_l,y_l))_{l\le k}\in C_k$, consider the convex polygon $P$ with vertices $(0,0)$, $(x_1,y_1)$, $(x_1+x_2,y_1+y_2)$, $\cdots$, $(x_1+\cdots+x_k, y_1+\cdots+y_k)=(i,j)$. As $\big(\frac{y_l}{x_l}\big)_l$ is increasing, the polygon $P$ is convex. Now, $y_1,x_k>0$ implies that $P$ does not touch the horizontal and vertical edges of $\Delta$. The inverse map from the set of polygons to the set of tuples is the obvious one.
Regarding the formula, it is easy to see that, using induction,
\[\frac{1}{2}\sum_{1\le l_1<l_2\le k}(x_{l_1}y_{l_2}-x_{l_2}y_{l_1})=\area(P).\]
Noting that $\gcd(a,b)+1$ is the number of lattice points on the segment connecting $(m,n)$ and $(m+a,n+b)$ for any integers $m$ and $n$, we get\[\sum_{1\le l\le k}\gcd(x_l,y_l)=b(P)-\gcd(i,n-i).\]
Applying Pick's Theorem, we get the conclusion.
\end{proof}
\begin{proof}[Proof of \Cref{mainlem}]\label{mainlemproof}
Both sides are polynomials in $x$, so we only need to prove it for all $0<x<1$. Let us consider the following probabilistic process:
For each lattice point interior to $\Delta$, choose it with probability $x$ and abandon it with probability $1-x$, independently. Then we form the convex hull of $(0,0)$, $(i,j)$, and the chosen points. It is easy to see that the resulting convex hull is an element of $C$. For example, if all interior points are abandoned, we end up getting $L$, and so the probability of obtaining $L$ is $(1-x)^{u(L)}$.
For $P\in C$, let $\prob(P)$ be the probability of obtaining $P$ as a result of the aforementioned process. Obviously, $\sum_{P\in C}\prob(P)=1$. So, it is enough to show that $\prob(P)=(1-x)^{u(P)}x^{v(P)-2}$ for all $P\in C$; the identity of \Cref{mainlem} then follows upon substituting $x\mapsto 1-x$, which is harmless since both sides are polynomials. This holds because the resulting convex hull is $P$ exactly when

1) the vertices of $P$ (except $(0,0)$ and $(i,j)$) are chosen ($=x^{v(P)-2}$), and

2) the lattice points outside of $P$ are abandoned ($=(1-x)^{u(P)}$),

\noindent with no conditions on the other remaining points.
\end{proof}
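The identity of \Cref{mainlem} can also be confirmed by direct enumeration for a small triangle: enumerate all subsets of the interior lattice points, collect the distinct convex hulls (together with $(0,0)$ and $(i,j)$), and sum $x^{u(P)}(1-x)^{v(P)-2}$. A Python sketch (ours), for the triangle with $i=4$, $j=3$:

```python
from fractions import Fraction as F
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns the hull counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, hi = chain(pts), chain(pts[::-1])
    return lo[:-1] + hi[:-1]

i, j = 4, 3
interior = [(x, y) for x in range(1, i) for y in range(1, j)
            if y * i < j * x]              # lattice points strictly inside the triangle

def outside(P, p):                          # p strictly outside the polygon P
    if len(P) == 2:                         # degenerate hull = the segment L
        return cross(P[0], P[1], p) != 0
    return any(cross(P[k], P[(k + 1) % len(P)], p) < 0 for k in range(len(P)))

hulls = {tuple(convex_hull([(0, 0), (i, j)] + list(S)))
         for r in range(len(interior) + 1)
         for S in combinations(interior, r)}
x = F(1, 3)
total = sum(x ** sum(outside(P, p) for p in interior) * (1 - x) ** (len(P) - 2)
            for P in hulls)
assert total == 1
print(len(hulls), total)    # prints "4 1"
```

For this triangle the distinct hulls are $L$ and three triangles with $u=0,1,2$, so the sum is $x^3+(1-x)(1+x+x^2)=1$, as the lemma predicts.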
\bibliographystyle{alpha}
|
2302.13606
|
\providecommand{\minu}{-}
\providecommand{\hook}{\,\lrcorner\,}
\begin{document}
\title[Exceptional geometries]{Exceptional real Lie algebras $\mathfrak{f}_4$ and $\mathfrak{e}_6$ via contactifications}
\vskip 1.truecm
\author{Pawe\l~ Nurowski} \address{Center for Theoretical Physics,
Polish Academy of Sciences, Al. Lotnik\'ow 32/46, 02-668 Warszawa, Poland}
\email{nurowski@cft.edu.pl}
\thanks{The research was funded from the Norwegian Financial Mechanism 2014-2021 with project registration number 2019/34/H/ST1/00636.}
\date{\today}
\begin{abstract}
In Cartan's PhD thesis, there is a formula defining a certain rank 8 vector distribution in dimension 15, whose algebra of automorphisms is the split real form of the simple exceptional complex Lie algebra $\mathfrak{f}_4$. Cartan's formula is written in the standard Cartesian coordinates in $\mathbb{R}^{15}$. In the present paper we explain how to find an analogous formula for the flat models of any bracket generating distribution $\mathcal D$ whose symbol algebra $\mathfrak{n}({\mathcal D})$ is constant and 2-step graded, $\mathfrak{n}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$.
The formula is given in terms of a solution to a certain system of linear algebraic equations determined by two representations $(\rho,\mathfrak{n}_{-1})$ and $(\tau,\mathfrak{n}_{-2})$ of a Lie algebra $\mathfrak{n}_{00}$ contained in the $0$th order Tanaka prolongation $\mathfrak{n}_0$ of $\mathfrak{n}({\mathcal D})$.
Numerous examples are provided, with particular emphasis put on the distributions with symmetries being real forms of simple exceptional Lie algebras $\mathfrak{f}_4$ and $\mathfrak{e}_6$.
\end{abstract}
\maketitle
\tableofcontents
\newcommand{\invol}[2]{\draw[latex-latex] (root #1) to
[out=-30,in=-150] (root #2);}
\newcommand{\invok}[2]{\draw[latex-latex] (root #1) to
[out=-90,in=-90] (root #2);}
\section{Introduction: the notion of a contactification}\label{intr}
A \emph{contact structure} $(M,{\mathcal D})$ on a $(2n+1)$-dimensional real manifold $M$ is usually defined in terms of a 1-form $\lambda$ on $M$ such that
$$\underbrace{{\rm d}\lambda\wedge{\rm d}\lambda\wedge\dots\wedge{\rm d}\lambda}_{n\,\,\mathrm{times}}\wedge\lambda\neq 0$$
at each point $x\in M$. Given such a 1-form, the contact structure $(M,{\mathcal D})$ on $M$ is the rank $s=2n$ \emph{vector distribution}
$${\mathcal D}=\{X\in \mathrm{T}M\,\,\mathrm{s.t.}\,\, X\hook\lambda=0\}.$$
Note that any $\lambda'=a\lambda$, with $a$ being a nonvanishing function on $M$, defines the same contact structure $(M,{\mathcal D})$. We also note that given a contact structure $(M,{\mathcal D})$, we additionally have a family of 2-forms on $M$
$$\omega'=a\omega +\mu\wedge\lambda,\quad\mathrm{with}\quad \omega={\rm d}\lambda,$$
where $a\neq 0$ is a function, and $\mu$ is a 1-form on $M$. This, in particular, means that given a contact structure $(M,{\mathcal D})$, we have a rank $s=2n$ (bracket generating) distribution ${\mathcal D}$, and a \emph{line} of a closed 2-form $\omega$ \emph{in the distribution} ${\mathcal D}$, with
$${\rm d}\omega=0\quad\&\quad\underbrace{\omega\wedge\omega\wedge\dots\wedge\omega}_{n\,\,\mathrm{times}}\neq 0.$$
This can be compared with the notion of a \emph{symplectic} structure $(N,[\omega])$ on an $s=2n$-dimensional real manifold $N$. Such a structure is defined in terms of a line $\omega'=h\omega$ of a nowhere vanishing 2-form $\omega$ on $N$, such that
$${\rm d}\omega=0\quad\&\quad\underbrace{\omega\wedge\omega\wedge\dots\wedge\omega}_{n\,\,\mathrm{times}}\neq 0.$$
Here, contrary to the contact case, we have a \emph{line} of a closed 2-form $\omega$ \emph{in the tangent space} $\mathrm{T}N$, rather than in a proper vector subbundle ${\mathcal D}\subsetneq\mathrm{T}N$.
By the \emph{Poincar\'e lemma}, locally, in an open set ${\mathcal O}\subset N$, the form $\omega$ admits a 1-form $\Lambda$ such that ${\rm d}\Lambda=\omega$. Therefore, given a symplectic structure $(N,[\omega])$, we can locally \emph{contactify} it, by considering a $(2n+1)$-dimensional manifold $${\mathcal U}=\mathbb{R}\times{\mathcal O}\stackrel{\pi}{\to}\mathcal O,$$ with a 1-form $$\lambda={\rm d} u+\pi^*(\Lambda) $$
on $\mathcal U$; here the real variable $u$ is a coordinate along the $\mathbb{R}$ factor in $\mathcal U=\mathbb{R}\times\mathcal O$. As a result the structure $(M,{\mathcal D})=\big({\mathcal U},\ker(\lambda)\big)$ is a \emph{contact structure}, called a \emph{contact structure associated with the symplectic structure} $(N,[\omega])$.
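As a quick numerical illustration of this local contactification (our own sketch, not part of the original text): take $N=\mathbb{R}^2$ with $\omega={\rm d}x\wedge{\rm d}y$, the primitive $\Lambda=x\,{\rm d}y$, and $\lambda={\rm d}u+x\,{\rm d}y$ on ${\mathcal U}=\mathbb{R}^3$. The single component of the 3-form ${\rm d}\lambda\wedge\lambda$ is nonzero, confirming the contact condition. The component bookkeeping below is our own.

```python
import sympy as sp

u, x, y = sp.symbols('u x y')
coords = [u, x, y]

# lambda = du + x*dy on U = R^3, stored as the coefficient list
# (lam_u, lam_x, lam_y)
lam = [sp.Integer(1), sp.Integer(0), x]

# exterior derivative: (d lam)_{mu nu} = d_mu lam_nu - d_nu lam_mu
dlam = [[sp.diff(lam[nu], coords[mu]) - sp.diff(lam[mu], coords[nu])
         for nu in range(3)] for mu in range(3)]

# the single component of the 3-form (d lam) ^ lam on R^3
top = dlam[0][1]*lam[2] - dlam[0][2]*lam[1] + dlam[1][2]*lam[0]

print(sp.simplify(top))  # nonzero => lam is a contact form
```

The printed value is the coefficient of ${\rm d}u\wedge{\rm d}x\wedge{\rm d}y$; its nonvanishing is exactly the contact condition for $n=1$.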
\vspace{0.3cm}
We introduce the notion of a \emph{contactification} as a generalization of the above considerations.
\begin{definition}\label{def1}
Let $N$ be an $s$-dimensional manifold and let
${\rm d}{\mathcal D}^\perp:=\mathrm{Span}(\omega^1,\omega^2,$ $\dots,\omega^r)$ be a rank $r$ subbundle of $\bigwedge^2N$. Consider an $(s+r)$-dimensional fiber bundle $F\to M\stackrel{\pi}{\to}N$ over $N$. Let $(X_1,X_2,\dots, X_r)$ be a frame of \emph{vertical vectors} on $M$; in particular, $\pi_*(X_i)=0$ for all $i=1,2,\dots,r$.
Let us assume that on $M$ there exist $r$ one-forms $\lambda^i$, $i=1,2,\dots,r$, such that
$\det (X_i\hook\lambda^j)\neq 0$ on $M$,
and that
${\rm d}\lambda^i=\sum_{j=1}^r a^i{}_j\pi^*(\omega^j)+\sum_{j=1}^r\mu^i{}_j\wedge\lambda^j$ for all $i=1,2,\dots r$,
with some 1-forms $\mu^i{}_j$ and some functions $a^i{}_j$ on $M$ satisfying $\mathrm{det}(a^i{}_j)\neq 0$. Consider the corresponding rank $s$ distribution
${\mathcal D}=\{TM\ni X~|~ X\hook\lambda^i=0, i=1,2,\dots r\}$
on $M$.
Then the pair $(M,{\mathcal D})$ is called a \emph{contactification} of the pair $(N,{\rm d} {\mathcal D}^\perp)$.
\end{definition}
\begin{definition}
A real Lie algebra $\mathfrak{g}$ spanned over $\mathbb{R}$ by the vector fields $Y$ on $M$ of the contactification $(M,{\mathcal D})$ satisfying
\begin{equation}{\mathcal L}_Y\lambda^i\wedge\lambda^1\wedge\dots\wedge\lambda^r=0, \quad\forall i=1,2,\dots,r\label{ssymm}\end{equation}
is called the Lie algebra of infinitesimal symmetries of the contactification $(M,{\mathcal D})$. By definition, it is the same as the Lie algebra of infinitesimal symmetries of the distribution ${\mathcal D}$ on $M$. The vector fields $Y$ on $(M,{\mathcal D})$ satisfying \eqref{ssymm} are called infinitesimal symmetries of $(M,{\mathcal D})$, or of $\mathcal D$, for short.\label{def2}
\end{definition}
Below, we give a nontrivial example of the notions included in Definitions \ref{def1} and \ref{def2}.
\begin{example}\label{exa4}
Consider $N=\mathbb{R}^8$ with Cartesian coordinates $(x^1,x^2,$ $x^{3},x^{4},x^{5},x^{6},$ $x^{7},x^{8})$, and a space ${\rm d}{\mathcal D}^\perp=\mathrm{Span}(\omega^1,\omega^2,\omega^3,\omega^4,\omega^5,\omega^6,\omega^7)\subset\bigwedge^2N$, which is spanned by the following seven 2-forms on $N$:
$$
\begin{aligned}
\omega^1=\,\,&{\rm d} x^1\wedge{\rm d} x^{8}+{\rm d} x^2\wedge{\rm d} x^{5}+{\rm d} x^{3}\wedge{\rm d} x^{7}+{\rm d} x^{4}\wedge{\rm d} x^{6}\\
\omega^2=\,\,&-{\rm d} x^1\wedge{\rm d} x^{5}+{\rm d} x^2\wedge{\rm d} x^{8}+{\rm d} x^{3}\wedge{\rm d} x^{6}-{\rm d} x^{4}\wedge{\rm d} x^{7}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{7}-{\rm d} x^2\wedge{\rm d} x^{6}+{\rm d} x^{3}\wedge{\rm d} x^{8}+{\rm d} x^{4}\wedge{\rm d} x^{5}\\
\omega^4=\,\,&{\rm d} x^1\wedge{\rm d} x^{2}+{\rm d} x^{3}\wedge{\rm d} x^{4}+{\rm d} x^{5}\wedge{\rm d} x^{8}+{\rm d} x^{6}\wedge{\rm d} x^{7}\\
\omega^5=\,\,&-{\rm d} x^1\wedge{\rm d} x^{6}+{\rm d} x^2\wedge{\rm d} x^{7}-{\rm d} x^{3}\wedge{\rm d} x^{5}+{\rm d} x^{4}\wedge{\rm d} x^{8}\\
\omega^6=\,\,&{\rm d} x^1\wedge{\rm d} x^{4}+{\rm d} x^2\wedge{\rm d} x^{3}-{\rm d} x^{5}\wedge{\rm d} x^{7}+{\rm d} x^{6}\wedge{\rm d} x^{8}\\
\omega^7=\,\,&{\rm d} x^1\wedge{\rm d} x^{3}-{\rm d} x^2\wedge{\rm d} x^{4}+{\rm d} x^{5}\wedge{\rm d} x^{6}+{\rm d} x^{7}\wedge{\rm d} x^{8}.
\end{aligned}
$$
As the bundle over $N$ take $M=\mathbb{R}^{7}\times\mathbb{R}^8\to N$ with coordinates $(x^1,\dots,x^8,x^9,\dots,x^{15})$, and take the seven 1-forms
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} x^9+ x^1{\rm d} x^{8}+ x^2{\rm d} x^{5}+ x^{3}{\rm d} x^{7}+ x^{4}{\rm d} x^{6}\\
\lambda^2=\,\,&{\rm d} x^{10} - x^1{\rm d} x^{5}+ x^2{\rm d} x^{8}+ x^{3}{\rm d} x^{6}- x^{4}{\rm d} x^{7}\\
\lambda^3=\,\,&{\rm d} x^{11} - x^1{\rm d} x^{7}- x^2{\rm d} x^{6}+ x^{3}{\rm d} x^{8}+ x^{4}{\rm d} x^{5}\\
\lambda^4=\,\,&{\rm d} x^{12} + x^1{\rm d} x^{2}+ x^{3}{\rm d} x^{4}+ x^{5}{\rm d} x^{8}+ x^{6}{\rm d} x^{7}\\
\lambda^5=\,\,&{\rm d} x^{13}- x^1{\rm d} x^{6}+ x^2{\rm d} x^{7}- x^{3}{\rm d} x^{5}+ x^{4}{\rm d} x^{8}\\
\lambda^6=\,\,&{\rm d} x^{14}+ x^1{\rm d} x^{4}+ x^2{\rm d} x^{3}- x^{5}{\rm d} x^{7}+ x^{6}{\rm d} x^{8}\\
\lambda^7=\,\,&{\rm d} x^{15}+ x^1{\rm d} x^{3}- x^2{\rm d} x^{4}+ x^{5}{\rm d} x^{6}+ x^{7}{\rm d} x^{8}.
\end{aligned}
$$
This defines a rank 8 distribution ${\mathcal D}=\{\mathrm{T}M\ni X~|~ X\hook\lambda^i=0,\,\,i=1,2,\dots,7\}$ on $M$.
The pair $\big(M,{\mathcal D}\big)$ \emph{is a contactification of} $(N,{\rm d}{\mathcal D}^\perp)$, since
$X_i=\partial_{i+8}$, $\det(X_i\hook\lambda^j)=1$, and ${\rm d}\lambda^i=\omega^i$ for all $i=1,\dots,7$.
In particular, in this example the rank 8 distribution
${\mathcal D}$ gives a \emph{2-step filtration} ${\mathcal D}_{-1}\subset\mathcal{D}_{-2}=\mathrm{T}M$, where ${\mathcal D}_{-1}={\mathcal D}$ and $\mathcal{D}_{-2}=[{\mathcal D}_{-1},{\mathcal D}_{-1}]=\mathrm{T}M$.
\end{example}
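A small numerical check (ours, not from the source) makes the spinorial origin of this example visible: the seven coefficient matrices $J_i$ with $(J_i)_{\mu\nu}=\omega^i{}_{\mu\nu}$, read off from the 2-forms above, are antisymmetric and satisfy the Clifford-type relations $J_iJ_j+J_jJ_i=-2\delta_{ij}\,\mathrm{Id}_8$, which is consistent with generators of a real (Majorana-type) Clifford module structure on $\mathbb{R}^8$ in dimension 7; compare the spinor constructions recalled in the last section.

```python
# The seven 2-forms omega^i of the example, each given by its list of
# (mu, nu, coefficient) entries with mu < nu (1-based indices).
pairs = [
    [(1, 8, 1), (2, 5, 1), (3, 7, 1), (4, 6, 1)],      # omega^1
    [(1, 5, -1), (2, 8, 1), (3, 6, 1), (4, 7, -1)],    # omega^2
    [(1, 7, -1), (2, 6, -1), (3, 8, 1), (4, 5, 1)],    # omega^3
    [(1, 2, 1), (3, 4, 1), (5, 8, 1), (6, 7, 1)],      # omega^4
    [(1, 6, -1), (2, 7, 1), (3, 5, -1), (4, 8, 1)],    # omega^5
    [(1, 4, 1), (2, 3, 1), (5, 7, -1), (6, 8, 1)],     # omega^6
    [(1, 3, 1), (2, 4, -1), (5, 6, 1), (7, 8, 1)],     # omega^7
]

def mat(entries):
    # antisymmetric 8x8 coefficient matrix J with J[mu][nu] = omega_{mu nu}
    J = [[0] * 8 for _ in range(8)]
    for mu, nu, c in entries:
        J[mu - 1][nu - 1], J[nu - 1][mu - 1] = c, -c
    return J

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(8)) for j in range(8)]
            for i in range(8)]

Js = [mat(p) for p in pairs]

# Clifford-type relations: J_i J_j + J_j J_i = -2 delta_ij Id
for i in range(7):
    for j in range(7):
        anti = [[mul(Js[i], Js[j])[a][b] + mul(Js[j], Js[i])[a][b]
                 for b in range(8)] for a in range(8)]
        expected = [[-2 * (a == b) * (i == j) for b in range(8)]
                    for a in range(8)]
        assert anti == expected

print("J_i J_j + J_j J_i = -2 delta_ij Id for all i, j")
```

In particular each $J_i$ is an orthogonal complex structure on $\mathbb{R}^8$, and distinct $J_i$, $J_j$ anticommute.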
This example is essentially taken from \'Elie Cartan's PhD thesis \cite{CartanPhd}, actually its German version. We chose it as our example, inspired by the following quote from Sigurdur Helgason \cite{He}:
\begin{quote}
Cartan represented [the simple exceptional Lie group] ${\bf F}_4$ (...) by the Pfaffian system in $\mathbb{R}^{15}$ (...). Similar results for ${\bf E}_6$ in $\mathbb{R}^{16}$, ${\bf E}_7$ in $\mathbb{R}^{27}$ and ${\bf E}_8$ in $\mathbb{R}^{29}$ are indicated in \cite{CartanPhd}. Unfortunately, detailed proofs of these remarkable representations of the exceptional groups do not seem to be available.
\end{quote}
The 15-dimensional contactification $(M,{\mathcal D})$ from our Example \ref{exa4} is obtained in terms of the seven 1-forms $\lambda^i$, which are equivalent to the seven forms from the Cartan Pfaffian system in dimension 15 mentioned by Helgason. In particular, it follows that the \emph{distribution structure} $(M,{\mathcal D})$ has the simple exceptional Lie group ${\bf F}_4$, actually its \emph{real form} $F_I$ in the terminology of \cite{CS}, as a \emph{group of automorphisms}.
In this paper we will explain how one obtains this realization of the exceptional Lie group ${\bf F}_4$, a realization of its real form $F_{II}$, and realizations of two (out of the five) real forms, $E_I$ and $E_{IV}$, of the complex simple exceptional Lie group ${\bf E}_6$. For this explanation we need some preparations, consisting of recalling a few notions associated with vector distributions on manifolds and with spinorial representations of the orthogonal groups on spaces of \emph{real} spinors.
Finally we note that our approach in this paper is \emph{purely utilitarian}. We answer the question: \emph{How does one get explicit formulas, in Cartesian coordinates, for Pfaffian forms $(\lambda^1,\dots,\lambda^r)$ which have simple Lie algebras as symmetries?} One can study more general problems related to this on purely Lie theoretical grounds. For example, one can ask when a 2-step graded nilpotent Lie algebra $\mathfrak{n}_{\minu}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$ has a given Lie algebra $\mathfrak{n}_{00}$ as a part of its Lie algebra $\mathfrak{n}_0$ of derivations preserving the strata, or when the Tanaka prolongation of such an $\mathfrak{n}_{\minu}$ with $\mathfrak{n}_{00}\subset\mathfrak{n}_0$ is finite, or simple. This is beyond the scope of our paper. A reader interested in such problems may consult e.g. \cite{AC,Alt,Krug}.
\section{Magical equation for a contactification}
The purpose of this section is to prove the following crucial lemma about a certain algebraic equation, which we call a \emph{magical equation}. It is the boxed equation \eqref{maga} below.
\begin{lemma}\label{l21}
Let $(\mathfrak{n}_{00},[\cdot,\cdot]_0)$ be a finite dimensional Lie algebra, and let $\rho:\mathfrak{n}_{00}\stackrel{\mathrm{hom}}{\to} \mathrm{End}(S)$ be its finite dimensional representation in a real vector space $S$ of dimension $s$.
In addition, let $R$ be an $r$-dimensional real vector space, and $\tau:\mathfrak{n}_{00}\to \mathrm{End}(R)$, be a linear map.
Finally let $\omega$ be a linear map $\omega:\bigwedge^2S\to R$, or what is the same, let $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$.
Suppose now that the triple $(\rho,\omega,\tau)$ satisfies the following equation:
\begin{equation}\boxed{
\omega\big(\rho(A)X,Y\big)+\omega\big(X,\rho(A)Y\big)=\tau(A)\,\omega(X,Y),}
\label{maga}
\end{equation}
for all $A\in\mathfrak{n}_{00}$ and all $X,Y\in S$.
Then we have:
\begin{enumerate}
\item The map $\tau$ satisfies $$\big(\,\,\tau([A,B]_0)-[\tau(A),\tau(B)]_{\mathrm{End}(R)}\,\,\big)\omega\,\,=\,\,0\quad\quad \forall\,\,A,B\in\mathfrak{n}_{00}.$$
\item If the map $\tau:\mathfrak{n}_{00}\to \mathrm{End}(R)$ is a representation of $\mathfrak{n}_{00}$, i.e. if $$\tau([A,B]_0)=[\tau(A),\tau(B)]_{\mathrm{End}(R)},$$ then the real vector space $\mathfrak{g}_0:=R\oplus S\oplus\mathfrak{n}_{00}$ is a \emph{graded} Lie algebra
$$\mathfrak{g}_0=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_{00},$$
with the graded components $$\mathfrak{n}_{-2}=R,\quad \mathfrak{n}_{-1}=S,\quad\mathrm{with}\,\, \mathfrak{n}_{00}\,\,\mathrm{as\,\,the}\,\,0\,\,\mathrm{grade},$$ and with the Lie bracket $[\cdot,\cdot]$ given by:\label{ca2}
\begin{enumerate}
\item if $X,Y\in \mathfrak{n}_{00}$ then $[X,Y]=[X,Y]_0$,
\item if $A\in \mathfrak{n}_{00}$, $X\in \mathfrak{n}_{-1}$ then $[A,X]=\rho(A)X$,
\item if $A\in \mathfrak{n}_{00}$, $X\in \mathfrak{n}_{-2}$ then $[A,X]=\tau(A)X$,
\item $[\mathfrak{n}_{-1},\mathfrak{n}_{-2}]=[\mathfrak{n}_{-2},\mathfrak{n}_{-2}]=\{0\}$,
\item and, if $X,Y\in \mathfrak{n}_{-1}$ then $[X,Y]=\omega(X,Y)$.
\end{enumerate}
\item Moreover, in the case {\rm \eqref{ca2}} the Lie subalgebra
$$\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ of $\mathfrak{g}_0$
is a 2-step graded Lie algebra, and the algebra $\mathfrak{n}_{00}$ is a Lie subalgebra of the Lie algebra $$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n}_\minu)\ni D\,\,\mathrm{s.t.}\,\,D\mathfrak{n}_j\subset\mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\}$$ of all derivations of $\mathfrak{n}_\minu$ preserving its strata $\mathfrak{n}_{-1}$ and $\mathfrak{n}_{-2}$.
\end{enumerate}
\end{lemma}
\begin{remark}
Note that, in the respective bases $\{ f_\mu\}_{\mu=1}^s$ in $S$ and $\{e_i\}_{i=1}^r$ in $R$, the equation \eqref{maga} is:
\begin{equation}\boxed{
\rho(A)^\alpha{}_\mu\,\,\omega^i{}_{\alpha\nu}+\rho(A)^\alpha{}_\nu\,\,\omega^i{}_{\mu\alpha}\,\,=\,\,\tau(A)^i{}_j\,\,\omega^j{}_{\mu\nu}}\label{magb}\end{equation}
for all $A\in\mathfrak{n}_{00}$, all $i=1,2,\dots,r$ and all $\mu,\nu=1,2,\dots,s$.
In this basis the condition (1) is $$\big(\,\,\tau([A,B]_0)-[\tau(A),\tau(B)]_{\mathrm{End}(R)}\,\,\big)^i{}_j\,\,\omega^j{}_{\mu\nu}\,\,=\,\,0$$ for all $i=1,2,\dots,r,\,\mu,\nu=1,2,\dots s$, and $A,B\in\mathfrak{n}_{00}$.
\end{remark}
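A minimal concrete solution of \eqref{maga} (our illustration, not taken from the paper): take $\mathfrak{n}_{00}=\mathfrak{gl}(2,\mathbb{R})$ acting on $S=\mathbb{R}^2$ by $\rho(A)=A$, with $R=\mathbb{R}$, $\omega$ the area form ($\omega_{12}=-\omega_{21}=1$), and $\tau(A)=\mathrm{tr}(A)$. In matrix form the index equation \eqref{magb} then reads $A^T\Omega+\Omega A=\mathrm{tr}(A)\,\Omega$, which holds identically; the resulting $\mathfrak{n}_\minu$ is the 3-dimensional Heisenberg algebra.

```python
import random

# Magical equation (index form) for n00 = gl(2,R), S = R^2, R = R^1:
# rho(A) = A, omega = area form Omega, tau(A) = tr(A), i.e.
# (A^T Omega + Omega A)_{mu nu} = tr(A) * Omega_{mu nu}.
Omega = [[0, 1], [-1, 0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

random.seed(0)
for _ in range(100):
    A = [[random.randint(-5, 5) for _ in range(2)] for _ in range(2)]
    lhs = [[mul(transpose(A), Omega)[i][j] + mul(Omega, A)[i][j]
            for j in range(2)] for i in range(2)]
    rhs = [[(A[0][0] + A[1][1]) * Omega[i][j] for j in range(2)]
           for i in range(2)]
    assert lhs == rhs

print("magical equation holds for gl(2,R) with tau = trace")
```

This is the identity ${\rm d}x\wedge{\rm d}y(AX,Y)+{\rm d}x\wedge{\rm d}y(X,AY)=\mathrm{tr}(A)\,{\rm d}x\wedge{\rm d}y(X,Y)$ in coordinates.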
\noindent
\emph{Proof of the lemma.}
The proof of part (1) is a pure calculation using the equation \eqref{maga}. We first rewrite it in the shorthand notation as:
$$
\rho(A)\omega+\omega\rho(A)^T=\tau(A)\omega, \quad \forall A\in\mathfrak{n}_{00}.$$
Then we have:
$$\begin{aligned}
\tau([A,B]_0)\omega=&\rho([A,B]_0)\omega+\omega\rho([A,B]_0)^T=\\
&\rho(A)\rho(B)\omega-\rho(B)\rho(A)\omega+\omega\rho(B)^T\rho(A)^T-\omega\rho(A)^T\rho(B)^T=\\
&\rho(A)\Big(\tau(B)\omega-\omega\rho(B)^T\Big)-\rho(B)\Big(\tau(A)\omega-\omega\rho(A)^T\Big)+\\&\Big(\tau(B)\omega-\rho(B)\omega\Big)\rho(A)^T-\Big(\tau(A)\omega-\rho(A)\omega\Big)\rho(B)^T=\\
&\rho(A)\Big(\tau(B)\omega\Big)-\rho(B)\Big(\tau(A)\omega\Big)+\Big(\tau(B)\omega\Big)\rho(A)^T-\Big(\tau(A)\omega\Big)\rho(B)^T=\\
&\tau(A)\tau(B)\omega-\tau(B)\omega\rho(A)^T-\Big(\tau(B)\tau(A)\omega-\tau(A)\omega\rho(B)^T\Big)+\\
&\tau(B)\omega\rho(A)^T-\tau(A)\omega\rho(B)^T=\tau(A)\tau(B)\omega-\tau(B)\tau(A)\omega=\\
&([\tau(A),\tau(B)]_{\mathrm{End}(R)})\omega,
\end{aligned}
$$
which proves part (1).
The proof of parts (2) and (3) is as follows:\\
We need to check the Jacobi identity for the bracket $[\cdot,\cdot]$.
We first consider the representation
$$\sigma=\tau\oplus\rho\quad\mathrm{ of} \quad\mathfrak{n}_{00}\quad \mathrm{in}\quad \mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1},$$
defined by
$$\sigma(A)(Y\oplus X)=\tau(A)Y\oplus\rho(A)X, \quad \forall A\in\mathfrak{n}_{00},\,\, X\in\mathfrak{n}_{-1},\,\, Y\in\mathfrak{n}_{-2}.$$
We then prove that each $\sigma(A)$, $A\in\mathfrak{n}_{00}$, is a \emph{strata preserving derivation} of $\mathfrak{n}_\minu$. This is implied by the definitions (a)-(e) of the bracket and the fundamental equation \eqref{maga}, as follows:
The strata preserving property of $\sigma$, $\sigma(\mathfrak{n}_{-i})\subset\mathfrak{n}_{-i}$, $i=1,2$, is obvious by the definitions of $\rho$ and $\tau$. However, we need to check that $\sigma$ is a derivation, i.e. that
\begin{equation}\sigma (A)[X,Y] =[\sigma(A)X,Y]+[X,\sigma(A)Y]\label{der0}\end{equation}
for all $A\in\mathfrak{n}_{00}$ and for all $X,Y\in \mathfrak{n}_\minu$.
Because of the strata preserving property of $\sigma$, which we have just established, and because of point (d) of the definition of the bracket, the equation \eqref{der0} is satisfied when both $X$ and $Y$ are in $\mathfrak{n}_{-2}$, or when $X$ is in $\mathfrak{n}_{-1}$ and $Y$ is in $\mathfrak{n}_{-2}$. It remains to check that \eqref{der0} is also valid when both $X$ and $Y$ belong to $\mathfrak{n}_{-1}$.
But this just follows directly from \eqref{maga}, since if $ X,Y\in\mathfrak{n}_{-1}$ then
$$\begin{aligned}\sigma(A)[X,Y]&=\sigma(A)\omega(X,Y)=\tau(A)\omega(X,Y)=\\
&\omega(\rho(A)X,Y)+\omega(X,\rho(A)Y)=[\rho(A)X,Y]+[X,\rho(A)Y]=\\
&[\sigma(A)X,Y]+[X,\sigma(A)Y],\quad \forall A\in\mathfrak{n}_{00}.\end{aligned}$$
Now we return to checking the Jacobi identity for the bracket $[\cdot,\cdot]$ in $\mathfrak{g}_0$:
On elements of the form $A,B\in \mathfrak{n}_{00}$, $Z\in \mathfrak{n}_\minu$, by (b)-(c), we have
$$[[A,B],Z]+[[Z,A],B]+[[B,Z],A]=\Big(\sigma([A,B])-[\sigma(A),\sigma(B)]\Big)Z,$$
which vanishes due to the representation property of $\sigma$. On the other hand, on elements $A\in \mathfrak{n}_{00}$ and $Z_1,Z_2\in \mathfrak{n}_\minu$ we have
$$[[A,Z_1],Z_2]+[[Z_2,A],Z_1]+[[Z_1,Z_2],A]=[\sigma(A)Z_1,Z_2]+[Z_1,\sigma(A)Z_2]-\sigma(A)[Z_1,Z_2],$$
which is again zero, on account of the derivation property \eqref{der0} of $\sigma$. Obviously the bracket satisfies the Jacobi identity when it is restricted to $\mathfrak{n}_{00}$; there it is the Lie bracket $[\cdot,\cdot]_0$ of the Lie algebra $\mathfrak{n}_{00}$. Finally, property (d) implies that $[[Z_1,Z_2],Z_3]=0$ for all $Z_1,Z_2,Z_3$ in $\mathfrak{n}_\minu$, hence the Jacobi identity is trivially satisfied for $[\cdot,\cdot]$ when it is restricted to $\mathfrak{n}_\minu$. \hspace{9.cm}$\Box$
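The lemma can be sanity checked in the smallest nontrivial instance (our test case, not from the paper): $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})$ (on which $\tau=\mathrm{tr}$ vanishes, so it is trivially a representation), $S=\mathbb{R}^2$ with $\rho(A)=A$, $R=\mathbb{R}$, and $\omega$ the area form. The resulting 6-dimensional algebra $\mathfrak{g}_0=\mathbb{R}\oplus\mathbb{R}^2\oplus\mathfrak{sl}(2,\mathbb{R})$, with the bracket (a)-(e), satisfies the Jacobi identity on all basis triples:

```python
# Elements of g0 = R (+) S (+) sl2 are triples (r, s, A):
# r a number, s a vector in R^2, A a 2x2 matrix.

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(X, Y):
    # the bracket (a)-(e) of the lemma, with rho(A) = A, tau = trace,
    # omega(s1, s2) = s1^1 s2^2 - s1^2 s2^1 (area form)
    r1, s1, A1 = X
    r2, s2, A2 = Y
    tr1, tr2 = A1[0][0] + A1[1][1], A2[0][0] + A2[1][1]
    r = (s1[0] * s2[1] - s1[1] * s2[0]) + tr1 * r2 - tr2 * r1
    s = [sum(A1[i][k] * s2[k] - A2[i][k] * s1[k] for k in range(2))
         for i in range(2)]
    A = [[mmul(A1, A2)[i][j] - mmul(A2, A1)[i][j] for j in range(2)]
         for i in range(2)]
    return (r, s, A)

def plus(X, Y):
    return (X[0] + Y[0],
            [X[1][i] + Y[1][i] for i in range(2)],
            [[X[2][i][j] + Y[2][i][j] for j in range(2)] for i in range(2)])

Z = [[0, 0], [0, 0]]
basis = [
    (1, [0, 0], Z),                    # e spanning n_{-2} = R
    (0, [1, 0], Z), (0, [0, 1], Z),    # f1, f2 spanning n_{-1} = R^2
    (0, [0, 0], [[1, 0], [0, -1]]),    # H
    (0, [0, 0], [[0, 1], [0, 0]]),     # E
    (0, [0, 0], [[0, 0], [1, 0]]),     # F
]

zero = (0, [0, 0], [[0, 0], [0, 0]])
for X in basis:
    for Y in basis:
        for V in basis:
            jac = plus(plus(bracket(bracket(X, Y), V),
                            bracket(bracket(V, X), Y)),
                       bracket(bracket(Y, V), X))
            assert jac == zero

print("Jacobi identity holds on all 216 basis triples")
```

Here $\mathfrak{n}_\minu$ is the Heisenberg algebra $[f_1,f_2]=e$, and $\mathfrak{g}_0$ is its extension by $\mathfrak{sl}(2,\mathbb{R})$ acting as strata preserving derivations.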
\vspace{0.5cm}
In the following we will use the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ satisfying the \emph{magical equation} \eqref{maga} to construct contactifications with nontrivial symmetry algebras $\mathfrak{g}$. The setting will include Cartan's contactification with symmetry ${\bf F}_4$ mentioned in Helgason's quote. For this, however, we need a few preparations.
\section{Two-step filtered manifolds}
A \emph{2-step filtered structure} on an $(s+r)$-dimensional manifold $M$ is a pair $(M,{\mathcal D})$, in which $\mathcal D$ is a vector distribution of rank $s$ on $M$, such that it is \emph{bracket generating} in the quickest possible way. This means that its \emph{derived distribution} ${\mathcal D}_{-2}:=[{\mathcal D}_{-1},{\mathcal D}_{-1}]$, with ${\mathcal D}_{-1}={\mathcal D}$, is such that $${\mathcal D}_{-2}=\mathrm{T}M.$$
It provides the simplest nontrivial \emph{filtration}
$$\mathrm{T}M={\mathcal D}_{-2}\supset{\mathcal D}_{-1}$$
of the tangent bundle $\mathrm{T}M$.
A (local) \emph{automorphism} of a 2-step filtered manifold $(M,{\mathcal D})$ is a (local) diffeomorphism $\phi:M\to M$ such that $\phi_*{\mathcal D}\subset{\mathcal D}$. Since automorphisms can be composed and have inverses, they form a \emph{group} $G$ of (local) automorphisms of $(M,{\mathcal D})$, also called a \emph{group of (local) symmetries of} $\mathcal D$. Infinitesimally, the group of automorphisms defines the \emph{Lie algebra} $\mathfrak{aut}({\mathcal D})$ \emph{of symmetries}, which is the real span of all vector fields $X$ on $M$ such that $[X,Y]\in {\mathcal D}$ for all $Y\in{\mathcal D}$.
Among all the 2-step filtered manifolds $(M,{\mathcal D})$ particularly simple are those which can be realized on a group manifold of a \emph{2-step nilpotent} Lie group. These are related to the notion of the \emph{nilpotent approximation} of a pair $(M,{\mathcal D})$. This is defined as follows:
At every point $x\in M$ of a manifold equipped with a 2-step filtration ${\mathcal D}_{-2}\supset{\mathcal D}_{-1}$ we have well defined vector spaces
$\mathfrak{n}_{-1}(x)={\mathcal D}_{-1}(x)$ and $\mathfrak{n}_{-2}(x)={\mathcal D}_{-2}(x)/{\mathcal D}_{-1}(x)$, which define a vector space
$$\mathfrak{n}(x)=\mathfrak{n}_{-2}(x)\oplus\mathfrak{n}_{-1}(x).$$
This vector space is naturally a \emph{Lie algebra}, with a \emph{Lie bracket} induced from the Lie bracket of vector fields in $\mathrm{T}M$. Due to the 2-step property of the filtration defined by $\mathcal D$ this Lie algebra is \emph{2-step nilpotent},
$$[\mathfrak{n}_{-1}(x),\mathfrak{n}_{-1}(x)]=\mathfrak{n}_{-2}(x)\quad\&\quad[\mathfrak{n}_{-1}(x),\mathfrak{n}_{-2}(x)]=\{0\}.$$
This 2-step nilpotent Lie algebra is a \emph{local invariant} of the structure $(M,{\mathcal D})$, and it is called a \emph{nilpotent approximation of} the structure $(M,{\mathcal D})$ at $x\in M$.
This enables us to define a class of particularly simple examples of 2-step filtered structures:
Consider a \emph{2-step nilpotent Lie algebra} $\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$, and let $M$ be a Lie \emph{group} whose Lie algebra is $\mathfrak{n}$. The Lie algebra $\mathfrak{n}_M$ of left invariant vector fields on $M$ is isomorphic to $\mathfrak{n}$ and mirrors its gradation, $\mathfrak{n}_M={\mathfrak{n}_M}_{-2}\oplus{\mathfrak{n}_M}_{-1}$. Now, taking all linear combinations, with \emph{smooth function} coefficients, of the vector fields from the graded component ${\mathfrak{n}_M}_{-1}$ of $\mathfrak{n}_M$, one defines a \emph{vector distribution} ${\mathcal D}=\mathrm{Span}_{{\mathcal{F}}(M)}({\mathfrak{n}_M}_{-1})$ on $M$. The filtered structure $(M,{\mathcal D})$ constructed in this way is obviously 2-step graded, and it is the \emph{simplest} filtered structure whose nilpotent approximation equals $\mathfrak{n}$ everywhere. We call this structure $(M,{\mathcal D})$ the \emph{flat model} for all the 2-step filtered structures having the same constant nilpotent approximation $\mathfrak{n}$.
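For the simplest case, the 3-dimensional Heisenberg algebra $\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$ with $\dim\mathfrak{n}_{-1}=2$, $\dim\mathfrak{n}_{-2}=1$, a left invariant frame of ${\mathfrak{n}_M}_{-1}$ on the group $M=\mathbb{R}^3$ with coordinates $(x,y,u)$ can be taken as $X_1=\partial_x$, $X_2=\partial_y+x\,\partial_u$ (a standard realization; the computation below is our own sketch). The distribution ${\mathcal D}=\mathrm{Span}(X_1,X_2)$ is then bracket generating in one step:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
coords = (x, y, u)

def lie_bracket(V, W):
    # [V, W]^k = V(W^k) - W(V^k); vector fields as coefficient tuples
    return tuple(
        sp.simplify(sum(V[i] * sp.diff(W[k], coords[i]) -
                        W[i] * sp.diff(V[k], coords[i]) for i in range(3)))
        for k in range(3))

# left invariant frame of the flat Heisenberg model on R^3 = (x, y, u):
X1 = (1, 0, 0)        # d/dx
X2 = (0, 1, x)        # d/dy + x d/du

print(lie_bracket(X1, X2))   # the missing direction d/du
```

The bracket produces $\partial_u$, so $[{\mathcal D}_{-1},{\mathcal D}_{-1}]=\mathrm{T}M$, as required of a flat model.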
It is remarkable that the largest possible symmetry of all 2-step filtered structures $(M,{\mathcal D})$ is precisely the symmetry of the flat model. As such it is \emph{algebraically} determined by the nilpotent approximation $\mathfrak{n}$. This is the result of Noboru Tanaka \cite{tanaka}. To describe it we recall the notion of \emph{Tanaka prolongation}.
\begin{definition}
The \emph{Tanaka prolongation} of a 2-step nilpotent Lie algebra $\mathfrak{n}$ is a graded Lie algebra $\mathfrak{g}(\mathfrak{n})$ given by a direct sum
\begin{equation}
\mathfrak{g}(\mathfrak{n})=\mathfrak{n}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\dots\oplus\mathfrak{n}_j\oplus\cdots,\label{gt1}\end{equation} with
\begin{equation}\mathfrak{n}_k=\Big\{\bigoplus_{j<0}\mathfrak{n}_{k+j}\otimes\mathfrak{n}_j^*\ni A\,\,\mathrm{s.t.}\,\,A[X,Y]=[AX,Y]+[X,AY]\Big\}\label{gt}\end{equation}
for each $k\geq 0$.
Furthermore, for each $j\geq 0$, the Lie algebra $$\mathfrak{g}_j(\mathfrak{n})=\mathfrak{n}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\dots\oplus\mathfrak{n}_j$$
is called the Tanaka prolongation of $\mathfrak{n}$ up to $j^{th}$ order.
\end{definition}
Setting $[A,X]=AX$ for all $A\in \mathfrak{n}_k$ with $k\geq 0$ and for all $X\in\mathfrak{n}$ makes the condition in \eqref{gt} into the Jacobi identity. Moreover, if $A\in \mathfrak{n}_k$ and $B\in\mathfrak{n}_l$, $k,l\geq 0$, then their commutator $[A,B]\in\mathfrak{n}_{k+l}$ is defined on elements $X\in\mathfrak{n}$ inductively, according to the Jacobi identity. By this we mean that it should satisfy
$$[A,B]X=[A,BX]-[B,AX],$$
which suffices to define $[A,B]$.
\begin{remark}
Note, in particular, that $\mathfrak{n}_0$ is the Lie algebra of \emph{all derivations of} $\mathfrak{n}$ preserving the two strata $\mathfrak{n}_{-1}$ and $\mathfrak{n}_{-2}$ of the direct sum $\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$:
$$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n})\ni D\,\,\mathrm{s.t.}\,\,D\mathfrak{n}_j\subset\mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\}.$$
\end{remark}
Although the Tanaka prolongation of a nilpotent Lie algebra $\mathfrak{n}$ is in general infinite, in this paper we will be interested in \emph{situations when the Tanaka prolongation}
$$\mathfrak{g}=\mathfrak{g}(\mathfrak{n})$$
of the $2$-step nilpotent part $$\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ is \emph{finite} and \emph{symmetric}, in the sense $$\mathfrak{g}(\mathfrak{n})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with
$$\dim(\mathfrak{n}_{-k})=\dim(\mathfrak{n}_k), \quad k=1,2.$$
Such situations \emph{are possible}, and in them the Lie algebra $\mathfrak{g}(\mathfrak{n})$ so defined is \emph{simple}. In such a case the Tanaka prolongation $\mathfrak{g}(\mathfrak{n})$ is \emph{graded}, and the subalgebra $$\mathfrak{p}=\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$ in such $\mathfrak{g}(\mathfrak{n})$ is \emph{parabolic}. Moreover, the Lie algebra
$$\mathfrak{p}_{opp}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0,$$
is also a parabolic subalgebra of this simple $\mathfrak{g}(\mathfrak{n})$. It is isomorphic to $\mathfrak{p}$, $\mathfrak{p}\simeq\mathfrak{p}_{opp}$.
Regardless of whether $\mathfrak{g}(\mathfrak{n})$ is finite or not, we have the following general theorem, which is a specialization of a remarkable theorem of Noboru Tanaka \cite{tanaka}:
\begin{theorem}\label{tansym}
Consider 2-step filtered structures $(M,{\mathcal D})$, with distributions ${\mathcal D}$ having the same constant nilpotent approximation $\mathfrak{n}$. Then
\begin{itemize}
\item The most symmetric of all of these distribution structures is the flat model $(M,{\mathcal D})$, with $M$ being a nilpotent Lie group associated with the nilpotent approximation algebra $\mathfrak{n}$, and with $\mathcal D$ being the first component ${\mathcal D}_{-1}$ of the natural filtration on $M$ associated to the $2$-step grading in $\mathfrak{n}$.
\item The Lie algebra of automorphisms $\mathfrak{aut}({\mathcal D})$ of the flat model structure is isomorphic to the Tanaka prolongation $\mathfrak{g}(\mathfrak{n})$ of the nilpotent approximation $\mathfrak{n}$,
$\mathfrak{aut}({\mathcal D})\simeq \mathfrak{g}(\mathfrak{n}).$
\end{itemize}
\end{theorem}
\begin{remark}
This theorem is of fundamental importance for explaining Cartan's result about a realization of ${\bf F}_4$ in $\mathbb{R}^{15}$. As we will see, Cartan's $\mathbb{R}^{15}$ is actually a \emph{domain of a chart} $({\mathcal U},\varphi)$ on a certain 2-step nilpotent Lie group $M$, with a 2-step nilpotent Lie algebra $\mathfrak{n}$, and the equivalent description of ${\bf F}_4$ in terms of a symmetry group of the contactification $(M,{\mathcal D})$ from our Example \ref{exa4} is valid because this contactification is just the flat model for the 2-step filtration $(M,{\mathcal D})$ with the nilpotent approximation $\mathfrak{n}$.
\end{remark}
Using the information about the Tanaka prolongation of a nilpotent Lie algebra $\mathfrak{n}$ we can enlarge our Lemma \ref{l21} by changing its point (3) into the following more complete form:
\begin{lemma}\label{l213}
With all the assumptions of Lemma \ref{l21}, and with points {\rm(1)} and {\rm (2)} as in Lemma \ref{l21}, its point {\rm(3)} is equivalent to
\begin{itemize}
\item[] {\rm(3)} Moreover, in the case {\rm \eqref{ca2}} the Lie subalgebra
$$\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ of $$\mathfrak{g}_0=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus \mathfrak{n}_{00}$$
is a 2-step graded nilpotent Lie algebra, and the algebra $\mathfrak{n}_{00}$ is a Lie \emph{subalgebra} of the Tanaka prolongation up to $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of the Lie algebra $\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$.
\end{itemize}
\end{lemma}
\begin{remark} The phrase `... $\mathfrak{n}_{00}$ is a Lie \emph{subalgebra} of the Tanaka prolongation up to $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of the Lie algebra $\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$...' in the above lemma means that $\mathfrak{n}_{00}$, although nontrivial, is in general only a subalgebra of $$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n}_\minu)\ni D\,\,\mathrm{s.t.}\,\,D \mathfrak{n}_j\subset \mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\},\quad\quad \mathfrak{n}_{00}\subsetneq \mathfrak{n}_0,$$ which is the \emph{full} $0$-graded component of the Tanaka prolongation of $\mathfrak{n}_\minu$. So for applications it is reasonable to choose $\mathfrak{n}_{00}$ as large as possible.
\end{remark}
\section{Construction of contactifications with nice symmetries}
Consider a Lie algebra $(\mathfrak{n}_{00},[\cdot,\cdot]_0)$ and its two real representations $(\rho,S)$, $(\tau,R)$, in the respective real $s$- and $r$-dimensional vector spaces $S$ and $R$.
Let $S=\mathbb{R}^s$, $R=\mathbb{R}^r$, and let $\{ f_\mu\}_{\mu=1}^s$ and $\{e_i\}_{i=1}^r$ be respective bases in $S$ and in $R$. Let $\{f^\mu\}_{\mu=1}^s$ be the basis of the vector space $S^*$ dual to the basis $\{ f_\mu\}_{\mu=1}^s$, i.e. $f_\nu\hook f^\mu=\delta_\nu{}^\mu$.
To be in the situation of Lemma \ref{l21} we also assume that we have a homomorphism $\omega\in\mathrm{Hom}(\bigwedge^2 S,R)$ satisfying the magical equation \eqref{maga}.
Then the map $\omega$ is $$\omega=\tfrac12\omega^i{}_{\mu\nu}e_i\otimes f^\mu\wedge f^\nu,$$ and it defines the coefficients $\omega^i{}_{\mu\nu}$, $i=1,\dots,r$, $\mu,\nu=1,2,\dots,s$, which satisfy $\omega^i{}_{\mu\nu}=-\omega^i{}_{\nu\mu}$.
Now, consider an $s$-dimensional manifold, which is an open set $N$ of $\mathbb{R}^s$, $N\subset\mathbb{R}^s$, with coordinates $(x^\mu)_{\mu=1}^s$. Then, we have $r$ two-forms $(\omega^i)_{i=1}^r$ on $N$ defined by
$$\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu.$$
This produces an $(N,{\rm d}{\mathcal D}^\perp)$ structure on $N$, with
$${\rm d}{\mathcal D}^\perp=\mathrm{Span}_\mathbb{R}(\omega^1,\dots,\omega^r).$$
We contactify it. For this we take a local $M=\mathbb{R}^r\times N$, with coordinates $\big(u^i,x^\mu\big)$, $i=1,\dots,r$, $\mu=1,\dots,s$, and define the `contact forms' on $M$ by
$$\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu.$$
Because of Lemmas \ref{l21} and \ref{l213}, the distribution ${\mathcal D}$ on $M$ defined by this contactification as in Definition \ref{def1} equips $M$ with a \emph{2-step} filtered structure having ${\mathcal D}_{-1}={\mathcal D}$ of rank $s$. Now, using Lemmas \ref{l21} and \ref{l213} and Tanaka's Theorem \ref{tansym}, we get the following corollary.
\begin{corollary}
Let $M=\mathbb{R}^r\times\mathbb{R}^s$ and let $$\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu, \quad i=1,\dots, r,$$
with $\omega$ being a solution of the magical equation \eqref{maga} such that $\mathrm{Im}(\omega)=R$.
Consider the distribution structure $(M,{\mathcal D})$ with
the rank $s$ distribution $${\mathcal D}=\{\mathrm{T}M\ni X\,\,\mathrm{s.t.}\,\,X\hook\lambda^i=0,\,\,i=1,\dots,r\}$$
on $M$.
Then, the Lie algebra of automorphisms $\mathfrak{aut}({\mathcal D})$ of $(M,{\mathcal D})$ is isomorphic to the Tanaka prolongation of the 2-step nilpotent Lie algebra $\mathfrak{n}_\minu=R\oplus S$ defined in point {\rm (3)} of Lemma \ref{l21} or \ref{l213}. The Lie algebra $R\oplus S\oplus \mathfrak{n}_{00}$ is nontrivially contained in the Tanaka prolongation up to the $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of $\mathfrak{n}_\minu$, with $\{0\}\neq \mathfrak{n}_{00}\subset\mathfrak{n}_0$, and as such is a subalgebra of $\mathfrak{aut}({\mathcal D})$. \label{cruco}
\end{corollary}
\section{Majorana spinor representations of $\mathfrak{so}(p,q)$}\label{spintraut}
In this section we will explain how to construct the real spin representations of the Lie algebras $\mathfrak{so}(p,q)$, in the cases when $p=n$, $q=n-1$, or $p=q=n$, with $n=1,2,\dots$. We will also give a construction of these representations for $\mathfrak{so}(0,n)$. We emphasize that we are only interested in \emph{real} spin representations; they go under the general name of \emph{Majorana representations}. Our presentation of this material is adapted from \cite{traut}.
We will need the Pauli matrices
\begin{equation}
\sigma_x=\begin{pmatrix} 0&1\\1&0\end{pmatrix},\quad \epsilon=-i\sigma_y=\begin{pmatrix} 0&-1\\1&0\end{pmatrix},\quad \sigma_z=\begin{pmatrix} 1&0\\0&-1\end{pmatrix},\label{pauu}
\end{equation}
and the $2\times 2$ identity matrix
\begin{equation}
I=\begin{pmatrix} 1&0\\0&1\end{pmatrix}.\label{pauu1}\end{equation}
We have the following identities:
\begin{equation}\begin{aligned}
&\sigma_x^2=\sigma_z^2=-\epsilon^2=I\\
&\sigma_x\epsilon=-\epsilon\sigma_x=\sigma_z,\quad\sigma_z\sigma_x=-\sigma_x\sigma_z=-\epsilon,\quad \epsilon\sigma_z=-\sigma_z\epsilon=\sigma_x.
\end{aligned}\label{iden}
\end{equation}
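As a sanity check, the identities \eqref{iden} can be verified directly; the following NumPy sketch (our own illustration, not part of the original construction) does so:

```python
import numpy as np

# Pauli-type building blocks from \eqref{pauu}-\eqref{pauu1}
sx = np.array([[0, 1], [1, 0]])       # sigma_x
ep = np.array([[0, -1], [1, 0]])      # epsilon = -i sigma_y
sz = np.array([[1, 0], [0, -1]])      # sigma_z
I2 = np.eye(2, dtype=int)

# squares: sigma_x^2 = sigma_z^2 = -epsilon^2 = I
assert (sx @ sx == I2).all() and (sz @ sz == I2).all() and (ep @ ep == -I2).all()
# products: sigma_x epsilon = -epsilon sigma_x = sigma_z, and cyclic relatives
assert (sx @ ep == sz).all() and (ep @ sx == -sz).all()
assert (sz @ sx == -ep).all() and (sx @ sz == ep).all()
assert (ep @ sz == sx).all() and (sz @ ep == -sx).all()
print("identities verified")
```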
Now we quote \cite{traut}:
\begin{quote}
With this notation, \emph{restricting to low dimensions} $p+q=4,5,6$ \emph{and} $7$, the real representations of the Clifford algebra ${\mathcal C}\ell(0,p+q)$ are all in dimension $s=8$, and are generated by the $p+q$ matrices $\rho_1,\dots,\rho_{(p+q)}$ given by:
\begin{equation}\begin{aligned}
\rho_1&=\sigma_z\otimes I\otimes \epsilon\\
\rho_2&=\sigma_z\otimes \epsilon\otimes \sigma_x\\
\rho_3&=\sigma_z\otimes \epsilon\otimes \sigma_z\\
\rho_4&=\sigma_x\otimes \epsilon \otimes I\\
\rho_5&=\sigma_x\otimes \sigma_x\otimes \epsilon\\
\rho_6&=\sigma_x\otimes \sigma_z\otimes \epsilon\\
\rho_7&=\epsilon\otimes I\otimes I.
\end{aligned}\label{cl07}\end{equation}
The 8 matrices $\theta_\mu=\sigma_x\otimes\rho_\mu$, $\mu=1,\dots, 7$ and $\theta_8=\epsilon\otimes I \otimes I\otimes I$ give the real representation of ${\mathcal C}\ell(0,8)$ in $S=\mathbb{R}^{16}$. Dropping the first factor in $\rho_1,\rho_2,\rho_3$ one obtains the matrices generating a representation of ${\mathcal C}\ell(0,3)$ in $S=\mathbb{R}^4$, etc.
\end{quote}
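One can confirm numerically that the seven matrices $\rho_1,\dots,\rho_7$ of \eqref{cl07} indeed generate ${\mathcal C}\ell(0,7)$: they pairwise anticommute and square to minus the identity (the convention $g_{\mu\nu}=-\delta_{\mu\nu}$ for ${\mathcal C}\ell(0,n)$ is the one used for ${\mathcal C}\ell(0,3)$ later in the text). A NumPy sketch of ours:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]); ep = np.array([[0, -1], [1, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2, dtype=int)
kron = lambda *m: reduce(np.kron, m)

# rho_1, ..., rho_7 exactly as in the quoted formulas
rho = [kron(sz, I2, ep), kron(sz, ep, sx), kron(sz, ep, sz),
       kron(sx, ep, I2), kron(sx, sx, ep), kron(sx, sz, ep),
       kron(ep, I2, I2)]

I8 = np.eye(8, dtype=int)
for i in range(7):
    for j in range(7):
        anti = rho[i] @ rho[j] + rho[j] @ rho[i]
        # rho_i rho_j + rho_j rho_i = -2 delta_ij (all generators square to -1)
        assert (anti == -2 * (i == j) * I8).all()
print("Cl(0,7) relations hold")
```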
Majorana representations of $\mathfrak{so}(n-1,n)$ in dimension $s=2^{n-1}$ are called \emph{Pauli representations}, and Majorana representations of $\mathfrak{so}(n,n)$ in dimension $s=2^n$, are called \emph{Dirac representations}.
To construct them we need generalizations of the \emph{Pauli} $\sigma$ \emph{matrices} and the \emph{Dirac} $\gamma$ \emph{matrices}. The construction of those is \emph{inductive}.
It starts at $p+q=1$ with the single matrix $\sigma_1=1$, and for every $n=1,2,\dots$ it alternates between dimension $p+q=2n-1$, with Pauli matrices $\sigma_\mu$, $\mu=1,\dots, 2n-1$, and dimension $p+q=2n$, with Dirac matrices $\gamma_\mu$, $\mu=1,\dots, 2n$.
Again quoting Trautman \cite{traut} we have:
\begin{enumerate}
\item In dimension $p+q=1$ put $\sigma_1=1$.
\item Given $2^{n-1}\times 2^{n-1}$ matrices $\sigma_\mu$, $\mu=1,\dots,2n-1$, define
$$\gamma_\mu=\begin{pmatrix} 0&\sigma_\mu\\\sigma_\mu&0\end{pmatrix}\,\,\mathrm{for}\,\,\mu=1,\dots,2n-1,$$
and
$$\gamma_{2n}=\begin{pmatrix} 0&-I\\I&0\end{pmatrix},$$
where $I$ is the identity $2^{n-1}\times 2^{n-1}$ matrix.
\item Given $2^n\times 2^n$ matrices $\gamma_\mu$, $\mu=1,\dots,2n$, define $\sigma_\mu=\gamma_\mu$ for $\mu=1,\dots, 2n$, and $\sigma_{2n+1}=\gamma_1\dots\gamma_{2n}$, so that for $n>0$,
$$\sigma_{2n+1}=\begin{pmatrix} I&0\\0&-I\end{pmatrix}.$$
\end{enumerate}
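The inductive steps (1)-(3) are straightforward to implement; the sketch below (our illustration) builds the Pauli matrices up to $p+q=7$ and confirms the Clifford relations with alternating diagonal signature:

```python
import numpy as np

def dirac_from_pauli(sigmas):
    # step (2): 2n Dirac matrices from the 2n-1 Pauli matrices
    d = sigmas[0].shape[0]
    Z, Id = np.zeros((d, d), dtype=int), np.eye(d, dtype=int)
    return [np.block([[Z, s], [s, Z]]) for s in sigmas] + [np.block([[Z, -Id], [Id, Z]])]

def pauli_from_dirac(gammas):
    # step (3): keep the gammas and append their full product sigma_{2n+1}
    return gammas + [np.linalg.multi_dot(gammas)]

sigmas = [np.array([[1]])]                 # step (1): p+q = 1
for _ in range(3):                         # climb to p+q = 7 (8x8 matrices)
    sigmas = pauli_from_dirac(dirac_from_pauli(sigmas))

d = sigmas[0].shape[0]                     # = 8
g = [(-1) ** mu for mu in range(7)]        # alternating signature (1,-1,...,-1,1)
for mu in range(7):
    for nu in range(7):
        anti = sigmas[mu] @ sigmas[nu] + sigmas[nu] @ sigmas[mu]
        assert (anti == 2 * (mu == nu) * g[mu] * np.eye(d, dtype=int)).all()
print("Pauli matrices for p+q=7 satisfy the quoted Clifford relations")
```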
In every dimension $p+q=2n-1$, $n\geq 1$, the Pauli matrices $\sigma_\mu$, $\mu=1,\dots,2n-1$ satisfy
$$\sigma_\mu\sigma_\nu+\sigma_\nu\sigma_\mu=2g_{\mu\nu}\underbrace{\big(I\otimes\dots\otimes I\big)}_{n-1\,\, \mathrm{times}},$$
where the $(2n-1)\times (2n-1)$ symmetric matrix $(g_{\mu\nu})$ is \emph{diagonal}, and has the following diagonal elements:
$$(g_{\mu\nu})=\mathrm{diag}\underbrace{(1,-1,\dots,-1,1)}_{(2n-1)\,\, \mathrm{times}}.$$
Likewise, in every dimension $p+q=2n$, $n\geq 1$, the Dirac matrices $\gamma_\mu$, $\mu=1,\dots,2n$ satisfy
$$\gamma_\mu\gamma_\nu+\gamma_\nu\gamma_\mu=2g_{\mu\nu}\underbrace{\big(I\otimes\dots\otimes I\big)}_{n\,\, \mathrm{times}},$$
where the $(2n)\times (2n)$ symmetric matrix $(g_{\mu\nu})$ is \emph{diagonal}, and has the following diagonal elements:
$$(g_{\mu\nu})=\mathrm{diag}\underbrace{(1,-1,\dots,1,-1)}_{2n\,\, \mathrm{times}}.$$
Therefore, for each $n=1,2,\dots$ the set $\{\sigma_\mu\}_{\mu=1}^{2n-1}$ of Pauli matrices generates the elements of a real $2^{n-1}$-dimensional representation of the Clifford algebra ${\mathcal C}\ell(n-1,n)$, and the set $\{\gamma_\mu\}_{\mu=1}^{2n}$ of Dirac matrices generates the elements of a real $2^{n}$-dimensional representation of the Clifford algebra ${\mathcal C}\ell(n,n)$.
Then, in turn, these real Clifford algebra representations can be further used to define the real spin representations of the Lie algebras $\mathfrak{so}(p+q,0)$, $\mathfrak{so}(n-1,n)$ and $\mathfrak{so}(n,n)$ as follows. One obtains all the generators of the spin representation of $\mathfrak{so}(g)$ by spanning it by all the elements of the form
\begin{itemize}
\item $\tfrac12\rho_\mu\rho_\nu$, with $1\leq \mu<\nu\leq (p+q)$, in the case of $\mathfrak{so}(p+q,0)$, $p+q=3,5,6,7$;
\item $\tfrac12\theta_\mu\theta_\nu$, with $1\leq \mu<\nu\leq 8$, in the case of $\mathfrak{so}(8,0)$;
\item $\tfrac12\sigma_\mu\sigma_\nu$, with $1\leq \mu<\nu\leq (p+q)=2n-1$, in the case of $\mathfrak{so}(n-1,n)$;
\item $\tfrac12\gamma_\mu\gamma_\nu$, with $1\leq \mu<\nu\leq (p+q)=2n$, in the case of $\mathfrak{so}(n,n)$.
\end{itemize}
For further details consult \cite{traut}.
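As an illustration of the last recipe, one can check that the elements $\tfrac12\rho_\mu\rho_\nu$ built from the ${\mathcal C}\ell(0,3)$ generators (obtained by dropping the first factor in $\rho_1,\rho_2,\rho_3$ of \eqref{cl07}) close under the commutator, as generators of a representation of $\mathfrak{so}(3)$ must. A NumPy sketch with our own sign bookkeeping:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]); ep = np.array([[0, -1], [1, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2, dtype=int)

# Cl(0,3) generators in R^4: first factors of rho_1, rho_2, rho_3 dropped
r = [np.kron(I2, ep), np.kron(ep, sx), np.kron(ep, sz)]

# the spanning elements (1/2) rho_mu rho_nu, mu < nu
L = {(m, n): 0.5 * r[m] @ r[n] for m in range(3) for n in range(m + 1, 3)}
comm = lambda a, b: a @ b - b @ a

# the three elements close under the commutator (a copy of so(3))
assert np.allclose(comm(L[(0, 1)], L[(1, 2)]), -L[(0, 2)])
assert np.allclose(comm(L[(0, 1)], L[(0, 2)]), L[(1, 2)])
assert np.allclose(comm(L[(0, 2)], L[(1, 2)]), L[(0, 1)])
print("the elements (1/2) rho_mu rho_nu represent so(3)")
```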
We will use all this information in the next sections, when we create examples.
\section{Application: Obtaining the flat model for (3,6) distributions}
Let $(\rho,S)$ be the defining representation of $\mathfrak{so}(3)$ in $S=\mathbb{R}^3$. It can be generated by:
\begin{equation}\rho(A_1)=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\rho(A_2)=\begin{pmatrix} 0&1&0\\-1&0&0\\0&0&0\end{pmatrix},\quad\rho(A_3)=\begin{pmatrix} 0&0&0\\0&0&-1\\0&1&0\end{pmatrix}.\label{ro3a}\end{equation}
And let $(\tau,R)$ be an equivalent 3-dimensional representation of $\mathfrak{so}(3)$ given by
\begin{equation} \tau(A_1)=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\tau(A_2)=\begin{pmatrix} 0&-1&0\\1&0&0\\0&0&0\end{pmatrix},\quad\tau(A_3)=\begin{pmatrix} 0&0&0\\0&0&1\\0&-1&0\end{pmatrix}.\label{tau3a}\end{equation}
We claim that for these two representations of $\mathfrak{so}(3)$, in the standard bases in $S=\mathbb{R}^3$, $R=\mathbb{R}^3$, the magical equation \eqref{magb} has the following solution:
$$\omega^1_{\mu\nu}=\begin{pmatrix} 0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\quad\omega^2_{\mu\nu}=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\omega^3_{\mu\nu}=\begin{pmatrix} 0&-1&0\\1&0&0\\0&0&0\end{pmatrix}.$$
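The magical equation \eqref{maga} itself is stated earlier in the paper; in the bases used here it reduces, in our reading, to the equivariance condition $\rho(A_k)^T\omega^i+\omega^i\rho(A_k)=\tau(A_k)^i{}_j\,\omega^j$. The following NumPy sketch confirms this condition for the matrices above:

```python
import numpy as np

# rho from \eqref{ro3a}, tau from \eqref{tau3a} (third generator A_3), omega as claimed
rho = [np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]]),
       np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]]),
       np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])]
tau = [np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]]),
       np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]]),
       np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])]
omega = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
         np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]]),
         np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

# equivariance: rho(A_k)^T omega^i + omega^i rho(A_k) = tau(A_k)^i_j omega^j
for k in range(3):
    for i in range(3):
        lhs = rho[k].T @ omega[i] + omega[i] @ rho[k]
        rhs = sum(tau[k][i, j] * omega[j] for j in range(3))
        assert np.allclose(lhs, rhs)
print("omega intertwines rho and tau")
```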
Now, using this solution $(\rho,\tau,\omega)$ of the magical equation \eqref{maga}, we apply Corollary \ref{cruco} with $\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu$, and obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^6$ with coordinates $(u^1,u^2,u^3,x^1,x^2,x^3)$ and consider three 1-forms
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+x^2{\rm d} x^3\\
\lambda^2=&{\rm d} u^2+x^1{\rm d} x^3\\
\lambda^3=&{\rm d} u^3+x^1{\rm d} x^2
\end{aligned}$$
on $M$.
Then the rank 3 distribution ${\mathcal D}$ on $M$ defined by ${\mathcal D}=\{\mathrm{T}\mathbb{R}^6\ni X\,\,|\,\, X\hook \lambda^i=0,\,\,i=1,2,3\}$ has its Lie algebra of infinitesimal symmetries $\mathfrak{aut}{\mathcal D}$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_\minu=R\oplus S$ where $(\rho,S=\mathbb{R}^3)$ and $(\tau,R=\mathbb{R}^3)$ are the respective representations \eqref{ro3a}, \eqref{tau3a} of $\mathfrak{n}_{00}=\mathfrak{so}(3)$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple graded Lie algebra $\mathfrak{so}(4,3)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{so}(4,3),$$
with the following gradation:
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{gl}(3,\mathbb{R})\supset\mathfrak{n}_{00}$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{so}(4,3)$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $\big({\bf Spin}(4,3),P\big)$ related to the following \emph{crossed} Satake diagram: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{oot}
\end{dynkinDiagram}.
\end{theorem}
\begin{proof}
The proof is by calculating the Tanaka prolongation of $\mathfrak{n}_\minu=R\oplus S$, whose $0^{th}$ graded component is $\mathfrak{gl}(3,\mathbb{R})$; the prolongation comes naturally graded by the Tanaka algebraic procedure precisely as $\mathfrak{aut}({\mathcal D})$ in the statement of the theorem.
\end{proof}
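Independently of the Tanaka-theoretic proof, the 2-step property claimed in the theorem is immediate to check: the brackets of the horizontal vector fields spanning ${\mathcal D}$ already fill the three missing directions. A small sketch of ours (the coefficient array \texttt{A} encodes the linear coefficients of the contact forms):

```python
import numpy as np

# lambda^i = du^i + a^i_nu dx^nu with a^i_nu linear in x: a^i_nu = A[i, mu, nu] x^mu
A = np.zeros((3, 3, 3))
A[0, 1, 2] = 1          # lambda^1 = du^1 + x^2 dx^3
A[1, 0, 2] = 1          # lambda^2 = du^2 + x^1 dx^3
A[2, 0, 1] = 1          # lambda^3 = du^3 + x^1 dx^2

# D is spanned by X_nu = d/dx^nu - a^i_nu d/du^i, and
# [X_mu, X_nu] = -(A[i, mu, nu] - A[i, nu, mu]) d/du^i  (constant vector fields)
brackets = np.array([[A[i, mu, nu] - A[i, nu, mu] for i in range(3)]
                     for mu in range(3) for nu in range(mu + 1, 3)])
assert np.linalg.matrix_rank(brackets) == 3
print("[D, D] + D = TM: the distribution is 2-step with growth vector (3, 6)")
```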
\section{Application: Obtaining Biquard's 7-dimensional flat quaternionic contact manifold via contactification using spin representations of $\mathfrak{so}(1,2)$ and $\mathfrak{so}(3,0)$}
According to Trautman's procedure \cite{traut} there is a real representation of ${\mathcal C}\ell(0,3)$ in $\mathbb{R}^4$. There is also an analogous representation of ${\mathcal C}\ell(1,2)$. Both of them are generated by the $\sigma$ matrices
$$\sigma_1=\begin{pmatrix} 0&-1&0&0\\1&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix},\quad\sigma_2=\begin{pmatrix} 0&0&0&-\varepsilon\\0&0&-\varepsilon&0\\0&1&0&0\\1&0&0&0\end{pmatrix},\quad\sigma_3=\begin{pmatrix} 0&0&-\varepsilon&0\\0&0&0&\varepsilon\\1&0&0&0\\0&-1&0&0\end{pmatrix},$$
where $$\varepsilon=1\quad \mathrm{for}\quad {\mathcal C}\ell(0,3),$$ and $$\varepsilon=-1\quad\mathrm{for}\quad{\mathcal C}\ell(2,1).$$ One can check that these matrices\footnote{In Trautman's quote in the previous section these matrices were denoted by $\rho_1$, $\rho_2$, $\rho_3$, and they were only explicitly given for $\varepsilon=1$.} satisfy the (representation of) Clifford algebra relations:
$$\sigma_\mu\sigma_\nu+\sigma_\nu\sigma_\mu=2g_{\mu\nu}\big(I\otimes I)$$
with all $g_{\mu\nu}$ being zero, except $g_{11}=-1$, $g_{22}=g_{33}=-\varepsilon$.
This leads to the following spinorial
representation $\rho$ of $\mathfrak{so}(0,3)$ or $\mathfrak{so}(1,2)$
\begin{equation}\rho(A_1)=-\tfrac12\sigma_3,\quad \rho(A_2)=\tfrac12\sigma_2,\quad\rho(A_3)=-\tfrac12\varepsilon\sigma_1.\label{so31}\end{equation}
Here $(A_1,A_2,A_3)$ constitutes a basis for $\mathfrak{so}(0,3)$ when $\varepsilon=1$ and for $\mathfrak{so}(2,1)$ when $\varepsilon=-1$. This can be extended to the representation of
$$\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$$
in $S=\mathbb{R}^4$ by setting the value of $\rho$ on the generator $A_4=\mathrm{Id}$ as
\begin{equation}\rho(A_4)=\tfrac12(I\otimes I).\label{so32}\end{equation}
For this representation of $\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$, the magical equation \eqref{maga} has the following solution
$$\begin{aligned}
\omega^1=\big(\omega^1{}_{\mu\nu}\big)=&\begin{pmatrix} 0&0&1&0\\0&0&0&-1\\-1&0&0&0\\0&1&0&0\end{pmatrix},\quad \omega^2=\big(\omega^2{}_{\mu\nu}\big)=\begin{pmatrix} 0&0&0&-1\\0&0&-1&0\\0&1&0&0\\1&0&0&0\end{pmatrix},\\
&\\
&\omega^3=\big(\omega^3{}_{\mu\nu}\big)=\begin{pmatrix} 0&-\varepsilon&0&0\\\varepsilon&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix},
\end{aligned}$$
with
\begin{equation}\begin{aligned}
\tau(A_1)=\begin{pmatrix} 0&0&0\\0&0&\varepsilon\\0&-1&0\end{pmatrix},\quad &\tau(A_2)=\begin{pmatrix} 0&0&-\varepsilon\\0&0&0\\-1&0&0\end{pmatrix},\quad\tau(A_3)=\begin{pmatrix} 0&-\varepsilon&0\\\varepsilon&0&0\\0&0&0\end{pmatrix}\\
&\tau(A_4)=\begin{pmatrix} 1&0&0\\0&1&0\\0&0&1\end{pmatrix}.
\end{aligned}
\label{so33}\end{equation}
This in particular gives the vectorial representation $\tau$ of
$$\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$$
in $R=\mathbb{R}^3$.
Now, by using this solution for $(\rho,\omega,\tau)$ and applying our Corollary \ref{cruco}, we have an $(s=4)$-dimensional manifold $N=\mathbb{R}^4$, equipped with $r=3$ two-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$, $i=1,2,3$, which contactifies to an $(s+r)=7$-dimensional manifold $M=\mathbb{R}^7$ with a distribution structure $(M,{\mathcal D})$ defined as the annihilator of the $r=3$ one-forms $\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu$, $i=1,2,3$.
We have the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^7$ with coordinates $(u^1,u^2,u^3,x^1,x^2,x^3,x^4)$, and consider three 1-forms $\lambda^1,\lambda^2,\lambda^3$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+x^1{\rm d} x^3-x^2{\rm d} x^4,\\
\lambda^2=&{\rm d} u^2-x^1{\rm d} x^4-x^2{\rm d} x^3,\\
\lambda^3=&{\rm d} u^3-\varepsilon x^1{\rm d} x^2-x^3{\rm d} x^4,
\end{aligned}\quad\quad\mathrm{with}\quad\quad\varepsilon=\pm1.$$
The rank 4 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^7\ni X\,\,|\,\,X\hook\lambda^1=X\hook\lambda^2=X\hook\lambda^3=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^4)$ is the spinorial representation \eqref{so31}-\eqref{so32} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$, and $(\tau,R=\mathbb{R}^3)$ is the vectorial representation \eqref{so33} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$\begin{aligned}
\mathfrak{n}_0=&\mathfrak{n}_{00}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)=\\&\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),\end{aligned}$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $\Big(\mathbf{Sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),P\Big)$ related to the following \emph{crossed} Satake diagrams:
\begin{enumerate}
\item
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{C}{*t*}
\end{dynkinDiagram}
in the case of $\varepsilon=1$, and
\item \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{C}{oto}
\end{dynkinDiagram}
in the case of $\varepsilon=-1$.
\end{enumerate}
\end{theorem}
\begin{remark}
When $\varepsilon=1$ the flat parabolic geometry described in the above theorem is the lowest dimensional example of the \emph{quaternionic contact} geometry considered by Biquard \cite{bicquard}.
\end{remark}
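As for the previous example, the growth vector $(4,7)$ implicit in the theorem can be confirmed by a direct bracket computation, for both signs of $\varepsilon$ (our own sketch, same encoding of the linear coefficients as before):

```python
import numpy as np

for eps in (1, -1):
    # lambda^i = du^i + a^i_nu dx^nu with a^i_nu = A[i, mu, nu] x^mu, from the theorem
    A = np.zeros((3, 4, 4))
    A[0, 0, 2], A[0, 1, 3] = 1, -1       # lambda^1 = du^1 + x^1 dx^3 - x^2 dx^4
    A[1, 1, 2], A[1, 0, 3] = -1, -1      # lambda^2 = du^2 - x^1 dx^4 - x^2 dx^3
    A[2, 0, 1], A[2, 2, 3] = -eps, -1    # lambda^3 = du^3 - eps x^1 dx^2 - x^3 dx^4

    # [X_mu, X_nu] = -(A[i, mu, nu] - A[i, nu, mu]) d/du^i
    brackets = np.array([[A[i, mu, nu] - A[i, nu, mu] for i in range(3)]
                         for mu in range(4) for nu in range(mu + 1, 4)])
    assert np.linalg.matrix_rank(brackets) == 3
print("the rank 4 distribution is 2-step with growth vector (4, 7) for eps = +1 and -1")
```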
\section{Application: Obtaining the exceptionals from contactifications of spin representations; the $\mathfrak{f}_4$ case}
We will now explain the Cartan realization of the simple exceptional Lie algebra $\mathfrak{f}_4$ on $\mathbb{R}^{15}$ mentioned in the introduction.
The Satake diagrams for the real forms of the complex simple exceptional Lie algebra $\mathfrak{f}_4$ are as follows:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{F}{****}
\end{dynkinDiagram},\hspace{0.5cm}
\begin{dynkinDiagram}[edge length=.4cm]{F}{***o}
\end{dynkinDiagram},\hspace{0.5cm}
\begin{dynkinDiagram}[edge length=.4cm]{F}{oooo}\end{dynkinDiagram}.}
The first diagram corresponds to the \emph{compact} real form of $\mathfrak{f}_4$ and is not interesting for us. The other two diagrams are interesting:
\begin{enumerate}
\item The last, \begin{dynkinDiagram}[edge length=.4cm]{F}{oooo}\end{dynkinDiagram}, corresponds to the \emph{split} real form $\mathfrak{f}_I$, and
\item the middle one, \begin{dynkinDiagram}[edge length=.4cm]{F}{***o}
\end{dynkinDiagram}, denoted by $\mathfrak{f}_{II}$ in \cite{CS}, is also interesting, since similarly to $\mathfrak{f}_I$, it defines a \emph{parabolic geometry} in dimension 15.
\end{enumerate}
Crossing the last node on the right in the diagrams for $\mathfrak{f}_I$ or $\mathfrak{f}_{II}$, as in\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} \hspace{0.5cm} or \hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{F}{***x}\end{dynkinDiagram},} we see that in both algebras there exist \emph{parabolic subalgebras} $\mathfrak{p}_I$ and $\mathfrak{p}_{II}$, respectively, of dimension 37, $\dim(\mathfrak{p}_I)=\dim(\mathfrak{p}_{II})=37$. In both cases these choices of parabolics define similar gradations in the corresponding real forms $\mathfrak{f}_I$, $\mathfrak{f}_{II}$ of the simple exceptional Lie algebra $\mathfrak{f}_4$:
$$
\mathfrak{f}_A=\mathfrak{n}_{-2A}\oplus\mathfrak{n}_{-1A}\oplus\mathfrak{n}_{0A}\oplus\mathfrak{n}_{1A}\oplus\mathfrak{n}_{2A}\quad \mathrm{for}\quad A=I,II,
$$
with
$$\mathfrak{n}_{-A}=\mathfrak{n}_{-2A}\oplus\mathfrak{n}_{-1A}\quad \mathrm{for}\quad A=I,II,$$
being 2-step nilpotent and having grading components $\mathfrak{n}_{-2A}$ and $\mathfrak{n}_{-1A}$ of respective dimension $r_A=7$ and $s_A=8$,
$$r_A=\dim(\mathfrak{n}_{-2A})=7,\quad\quad s_A=\dim(\mathfrak{n}_{-1A})=8\quad \mathrm{for}\quad A=I,II.$$
The Lie algebra $\mathfrak{n}_{0A}$ in the Tanaka prolongation of $\mathfrak{n}_{-A}$ up to $0^{th}$ order is
\begin{enumerate}
\item $\mathfrak{n}_{0I}=\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the case of $\mathfrak{f}_I$, and
\item $\mathfrak{n}_{0II}=\mathbb{R}\oplus\mathfrak{so}(0,7)$ in the case of $\mathfrak{f}_{II}$.
\end{enumerate}
Thus, from the analysis performed here, we see that there exist two different 2-step filtered structures $(M_I,{\mathcal D}_I)$ and $(M_{II},{\mathcal D}_{II})$, both in dimension 15, with the respective $F_I$-symmetric, or $F_{II}$-symmetric flat models, realized on $M_I=F_I/P_I$ or $M_{II}=F_{II}/P_{II}$. Here $F_I$ and $F_{II}$ denote the real Lie groups whose Lie algebras are $\mathfrak{f}_I$ and $\mathfrak{f}_{II}$, respectively. Similarly, $P_I$ and $P_{II}$ are parabolic subgroups of the respective $F_I$ and $F_{II}$, whose Lie algebras are $\mathfrak{p}_I$ and $\mathfrak{p}_{II}$. Recalling that each of the real groups $\mathbf{SO}(4,3)$ and $\mathbf{SO}(0,7)$ has \emph{two} real irreducible representations $\rho$ in dimension $s=8$ and $\tau$ in dimension $r=7$, with the 8-dimensional representation $\rho$ being the \emph{spin} representation of either $\mathbf{SO}(4,3)$ or $\mathbf{SO}(0,7)$, we can now give the explicit realizations of the ${\bf F}_4$-symmetric structures $(M_A,{\mathcal D}_A)$ for $A=I,II$.
\subsection{Cartan's realization of $\mathfrak{f}_I$}\label{41}
The plan is to start with the Lie algebra $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$, as in the crossed Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} of $\mathfrak{f}_I$, and its two representations: \begin{itemize}
\item a representation $(\rho,S=\mathbb{R}^8)$, corresponding to the spin representation of $\mathbf{SO}(4,3)$ in the $(s=8)$-dimensional space $\mathfrak{n}_{-1}=S$ of real Pauli spinors, and
\item a representation $(\tau,R=\mathbb{R}^7)$, corresponding to the vectorial representation of $\mathbf{SO}(4,3)$ in $(r=7)$-dimensional space $\mathfrak{n}_{-2}=R$ of vectors in $\mathbb{R}^{(4,3)}$.
\end{itemize}
Having these two representations of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the same basis, we will then solve the equations \eqref{maga} for the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$, which will give us the commutators between elements in $\mathfrak{n}_{-1}$. Via Corollary \ref{cruco} this will provide the explicit realization of the 15-dimensional contactification $(M,{\mathcal D})$ with the exceptional simple Lie algebra $\mathfrak{f}_I$ as its symmetry.
Actually, the passage from $\rho$ to $\tau$ in the above plan is a bit tricky, since we need to have these representations expressed in the same basis. To handle this obstacle, we will start with the spin representation $\rho$ in the space of \emph{Pauli spinors} $S$, and then we will use the fact that the skew symmetric representation $\rho\wedge\rho$ in the space of bispinors $\bigwedge^2S$ decomposes as
$$\textstyle \bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_{7},$$
where $\bigwedge_{21}$ is the 21-dimensional \emph{adjoint representation} of $\mathbf{SO}(4,3)$ and $\bigwedge_7$ is its 7-dimensional \emph{vectorial representation} $\tau$. In this way we will have the two representations $(\rho,S)$ and $(\tau,R=\bigwedge_7)$, expressed in the same basis $\{A_I\}$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$, and will apply Corollary \ref{cruco} to get the desired $F_I$-symmetric contactification in dimension 15. In doing this we will use notation from Section \ref{spintraut}.
According to \cite{traut}, the real 8-dimensional \emph{representation of the Clifford algebra} ${\mathcal C}\ell(4,3)$ is generated by the seven \emph{8-dimensional Pauli matrices}:
$$\begin{aligned}
&\sigma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\sigma_2=\sigma_x\otimes\sigma_x\otimes\epsilon\\
&\sigma_3=\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\sigma_4=\sigma_x\otimes\epsilon\otimes I\\
&\sigma_5=\sigma_x\otimes\sigma_z\otimes I\\
&\sigma_6=\epsilon\otimes I\otimes I\\
&\sigma_7=\sigma_z\otimes I\otimes I.
\end{aligned}
$$
Using the identities \eqref{iden}, especially the one saying that $\epsilon^2=-I$, one easily finds that the seven Pauli matrices $\sigma_i$, $i=1,2,\dots,7$, satisfy the \emph{Clifford algebra identity}
$$\sigma_i\sigma_j+\sigma_j\sigma_i\,\,=\,\,2g_{ij}\,\,(I\otimes I \otimes I)\,,\quad\quad i,j=1,2,\dots,7,$$
with the coefficients $g_{ij}$ forming a diagonal $7\times7$ matrix
$$\Big(\,\,g_{ij}\,\,\Big)\,\,=\,\,\mathrm{diag}\Big(1,-1,1,-1,1,-1,1\Big),$$
of signature $(4,3)$. Thus, the 8-dimensional Pauli matrices $\sigma_i$, $i=1,\dots,7$, generate the Clifford algebra ${\mathcal C}\ell(4,3)$, and in turn, \emph{by the general theory}, as described in Section \ref{spintraut}, they define the \emph{spin representation} $\rho$ of $\mathfrak{so}(4,3)$ in an 8-dimensional real vector space $S=\mathbb{R}^8$ of Pauli(-\emph{Majorana}) spinors.
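The Clifford algebra identity just displayed can be confirmed numerically (a NumPy sketch of ours):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]); ep = np.array([[0, -1], [1, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2, dtype=int)
kron = lambda *m: reduce(np.kron, m)

# the seven 8-dimensional Pauli matrices sigma_1, ..., sigma_7
sigma = [kron(sx, sx, sx), kron(sx, sx, ep), kron(sx, sx, sz),
         kron(sx, ep, I2), kron(sx, sz, I2), kron(ep, I2, I2),
         kron(sz, I2, I2)]
g = np.diag([1, -1, 1, -1, 1, -1, 1])      # diagonal metric of signature (4,3)

I8 = np.eye(8, dtype=int)
for i in range(7):
    for j in range(7):
        # sigma_i sigma_j + sigma_j sigma_i = 2 g_ij (I x I x I)
        assert (sigma[i] @ sigma[j] + sigma[j] @ sigma[i] == 2 * g[i, j] * I8).all()
print("Clifford relations of signature (4,3) hold")
```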
\subsubsection{The spinorial representation of $\mathfrak{so}(4,3)$}
To be more explicit, let $(i,j)$ be such that $1\leq i<j\leq 7$, and let $I$ be a function
\begin{equation} I(i,j)=1+i+\tfrac12(j-3)j\label{Iij}\end{equation}
on such pairs. Note that the function $I$ is a bijection between the 21 pairs $(i,j)$ and the set of 21 natural numbers $I=1,2,\dots, 21$. Consider the twenty one $8\times 8$ real matrices $\sigma_i\sigma_j$ with $1\leq i<j\leq 7$, and a basis $\{A_I\}_{I=1}^{21}$ in the Lie algebra $\mathfrak{so}(4,3)$. Then the spin representation $\rho$ of $\mathfrak{so}(4,3)$ is given by
$$\rho(A_{I(i,j)})=\tfrac12\sigma_i\sigma_j\quad\mathrm{with}\quad 1\leq i<j\leq7.$$
Explicitly, we have:
\begin{equation}
\begin{array}{lll}
\rho(A_1)=\tfrac12 I\otimes I \otimes\sigma_z,&\quad \rho(A_8)=\tfrac12 I\otimes \epsilon\otimes\epsilon,&\quad \rho(A_{15})=\tfrac12 \sigma_z\otimes \sigma_z \otimes I,\\
\rho(A_2)=\tfrac12 I\otimes I \otimes\epsilon, &\quad\rho(A_9)=\tfrac12 I\otimes \epsilon \otimes\sigma_z, &\quad \rho(A_{16})=\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_x, \\
\rho(A_3)=\tfrac12 I\otimes I \otimes\sigma_x, &\quad\rho(A_{10})=\tfrac12 I\otimes \sigma_x \otimes I,&\quad \rho(A_{17})=\tfrac12 \epsilon\otimes \sigma_x \otimes\epsilon,\\
\rho(A_4)=\tfrac12 I\otimes \sigma_z \otimes\sigma_x,&\quad \rho(A_{11})=\tfrac12 \sigma_z\otimes \sigma_x \otimes\sigma_x,&\quad \rho(A_{18})=\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_z,\\
\rho(A_5)=\tfrac12 I\otimes \sigma_z \otimes\epsilon,&\quad \rho(A_{12})=\tfrac12 \sigma_z\otimes \sigma_x \otimes\epsilon,&\quad \rho(A_{19})=\tfrac12 \epsilon\otimes \epsilon \otimes I,\\
\rho(A_6)=\tfrac12 I\otimes \sigma_z \otimes\sigma_z,&\quad \rho(A_{13})=\tfrac12 \sigma_z\otimes \sigma_x\otimes\sigma_z,&\quad \rho(A_{20})=\tfrac12 \epsilon\otimes \sigma_z \otimes I,\\
\rho(A_7)=\tfrac12 I\otimes \epsilon \otimes\sigma_x, &\quad \rho(A_{14})=\tfrac12 \sigma_z\otimes \epsilon\otimes I, &\quad
\rho(A_{21})=\tfrac12 \sigma_x\otimes I \otimes I.
\end{array}
\label{f41}\end{equation}
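This table can be regenerated by multiplying out $\tfrac12\sigma_i\sigma_j$ with the identities \eqref{iden}; the sketch below (our verification) does this and also checks that $I(i,j)$ of \eqref{Iij} is a bijection onto $\{1,\dots,21\}$. Note that the product for $A_4$, i.e. $\tfrac12\sigma_1\sigma_4$, comes out as $\tfrac12\,I\otimes\sigma_z\otimes\sigma_x$.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]); ep = np.array([[0, -1], [1, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2, dtype=int)
kron = lambda *m: reduce(np.kron, m)

sigma = [kron(sx, sx, sx), kron(sx, sx, ep), kron(sx, sx, sz),
         kron(sx, ep, I2), kron(sx, sz, I2), kron(ep, I2, I2), kron(sz, I2, I2)]

def idx(i, j):
    # I(i,j) = 1 + i + (j-3)j/2 for 1 <= i < j <= 7, cf. \eqref{Iij}
    return 1 + i + (j - 3) * j // 2

pairs = [(i, j) for j in range(2, 8) for i in range(1, j)]
assert sorted(idx(i, j) for i, j in pairs) == list(range(1, 22))   # a bijection onto 1..21

# tensor factors of sigma_i sigma_j = 2 rho(A_{I(i,j)}); entry 4 is I x sigma_z x sigma_x
table = [(I2, I2, sz), (I2, I2, ep), (I2, I2, sx), (I2, sz, sx), (I2, sz, ep), (I2, sz, sz),
         (I2, ep, sx), (I2, ep, ep), (I2, ep, sz), (I2, sx, I2), (sz, sx, sx), (sz, sx, ep),
         (sz, sx, sz), (sz, ep, I2), (sz, sz, I2), (ep, sx, sx), (ep, sx, ep), (ep, sx, sz),
         (ep, ep, I2), (ep, sz, I2), (sx, I2, I2)]
for i, j in pairs:
    assert (sigma[i - 1] @ sigma[j - 1] == kron(*table[idx(i, j) - 1])).all()
print("the 21 matrices rho(A_I) = (1/2) sigma_i sigma_j reproduced")
```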
The spin representation $\rho$ of $V_0=\mathbb{R}\oplus\mathfrak{so}(4,3)$ needs one more generator. Let us call it $\rho(A_{22})$. We have
$$\rho(A_{22})=\tfrac12 I\otimes I\otimes I.$$
We determine the structure constants $c^K{}_{IJ}$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the basis $A_I$ from
\begin{equation} [\,\,\rho(A_I),\,\,\rho(A_J)\,\,]\,\,=\,\,c^K{}_{IJ}\,\,\rho(A_K).\label{strco}\end{equation}
\subsubsection{Obtaining the vectorial representation of $\mathfrak{so}(4,3)$}
Now, we take the space $\bigwedge^2S$ and consider the skew symmetric representation $${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$$ acting in it. We will write it in the standard basis $f_\mu$, $\mu=1,\dots, 8$, of $S=\mathbb{R}^8$. We have $\rho(A_I)f_\mu=\rho_I{}^\nu{}_\mu f_\nu$. Now, the components of the 28-dimensional representation ${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$ are
$${\stackrel{\scriptscriptstyle{a}}{\rho}}{}_{I}{}^{\mu\nu}{}_{\alpha\beta}\,\,=\,\,\,\,\rho_I{}^{\mu}{}_\alpha\delta^{\nu}{}_\beta+
\delta^{\mu}{}_\alpha\rho_I{}^{\nu}{}_\beta-\rho_I{}^{\nu}{}_\alpha\delta^{\mu}{}_\beta-\delta^{\nu}{}_\alpha\rho_I{}^{\mu}{}_\beta\,\,,$$
and we have
$$\big({\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_I)w\big){}^{\mu\nu}={\stackrel{\scriptscriptstyle{a}}{\rho}}{}_{I}{}^{\mu\nu}{}_{\alpha\beta}w^{\alpha\beta}, \quad \forall w^{\alpha\beta}=w^{[\alpha\beta]}.$$
The Casimir operator for this representation is
$${\mathcal C}\,\,=\,\,10\,\,K^{IJ}\,\,{\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_I)\,{\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_J),$$
where $K^{IJ}$ is the inverse of the Killing form matrix $K_{IJ}=c^L{}_{IM}c^M{}_{JL}$ in the basis $A_I$. Since for the Killing form to be nondegenerate we must restrict to the semisimple part of $\mathbb{R}\oplus\mathfrak{so}(4,3)$, here the indices $I,J,K,L,M$ run over $1,2,\dots,21$, and, as always, repeated indices are summed over. One can check that in this basis of $\mathfrak{so}(4,3)$ the Killing form matrix is diagonal and reads
$$
\big(\,\, K_{IJ}\,\,\big)=
10\,\mathrm{diag}\Big(\,1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,1,-1,1,-1,1,-1,1\,\Big).
$$
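Both the closure \eqref{strco} and the displayed Killing matrix can be verified numerically from the $8\times8$ matrices $\rho(A_I)=\tfrac12\sigma_i\sigma_j$ alone; in the sketch below (ours) the structure constants are extracted by least squares and $K_{IJ}=c^L{}_{IM}c^M{}_{JL}$ is assembled from them:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]); ep = np.array([[0, -1], [1, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2, dtype=int)
kron = lambda *m: reduce(np.kron, m)
sigma = [kron(sx, sx, sx), kron(sx, sx, ep), kron(sx, sx, sz),
         kron(sx, ep, I2), kron(sx, sz, I2), kron(ep, I2, I2), kron(sz, I2, I2)]

# the 21 generators rho(A_I) = (1/2) sigma_i sigma_j, ordered by I(i,j)
pairs = [(i, j) for j in range(7) for i in range(j)]
rho = [0.5 * sigma[i] @ sigma[j] for i, j in pairs]

B = np.stack([m.ravel() for m in rho], axis=1)        # 64 x 21, full column rank
c = np.zeros((21, 21, 21))                            # [rho_I, rho_J] = c[K,I,J] rho_K
for I in range(21):
    for J in range(21):
        comm = rho[I] @ rho[J] - rho[J] @ rho[I]
        sol = np.linalg.lstsq(B, comm.ravel(), rcond=None)[0]
        assert np.allclose(B @ sol, comm.ravel())     # the 21 generators close
        c[:, I, J] = sol

K = np.einsum('lim,mjl->ij', c, c)                    # K_IJ = c^L_IM c^M_JL
signs = [1, -1, 1, 1, -1, 1, -1, 1, -1, 1, 1,
         -1, 1, -1, 1, -1, 1, -1, 1, -1, 1]
assert np.allclose(K, 10 * np.diag(signs))
print("Killing form equals 10 diag(1,-1,1,1,-1,...), as displayed")
```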
The Casimir $\mathcal C$ defines the decomposition of the 28-dimensional reducible representation ${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$ into $$\textstyle \bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_7,$$ where
the 7-dimensional irreducible representation \emph{space} $\bigwedge_7$ \emph{is the eigenspace of the Casimir operator consisting of eigen-bispinors with eigenvalue equal to 6},
$$\textstyle {\mathcal C}\,\,\big(\bigwedge_7\big)\,=6\,\bigwedge_7.$$
Explicitly, in the same basis $A_I$, $I=1,2,\dots,21$, as before, this 7-dimensional representation $(\tau,R=\bigwedge_7)$ of the $\mathfrak{so}(4,3)$ Lie algebra is given by:
\begin{equation}\begin{aligned}
&\tau(A_1)=E_{66}-E_{22},\\
&\tau(A_2)=\tfrac12(E_{23}-E_{32}+E_{25}-E_{52}+E_{36}-E_{63}+E_{56}-E_{65}),\\
&\tau(A_3)=\tfrac12(E_{23}+E_{32}+E_{25}+E_{52}+E_{36}+E_{63}+E_{56}+E_{65}),\\
&\tau(A_4)=\tfrac12(E_{23}+E_{32}-E_{25}-E_{52}-E_{36}-E_{63}+E_{56}+E_{65}),\\
&\tau(A_5)=\tfrac12(E_{23}-E_{32}-E_{25}+E_{52}-E_{36}+E_{63}+E_{56}-E_{65}),\\
&\tau(A_6)=E_{33}-E_{55},\\
&\tau(A_7)=\tfrac12(E_{12}-E_{21}-E_{16}+E_{61}-E_{27}+E_{72}+E_{67}-E_{76}),\\
&\tau(A_8)=\tfrac12(-E_{12}-E_{21}-E_{16}-E_{61}-E_{27}-E_{72}-E_{67}-E_{76}),\\
&\tau(A_9)=\tfrac12(E_{13}-E_{31}+E_{15}-E_{51}-E_{37}+E_{73}-E_{57}+E_{75}),\\
&\tau(A_{10})=\tfrac12(E_{13}+E_{31}-E_{15}-E_{51}+E_{37}+E_{73}-E_{57}-E_{75}),\\
&\tau(A_{11})=\tfrac12(-E_{12}-E_{21}+E_{16}+E_{61}+E_{27}+E_{72}-E_{67}-E_{76}),\\
&\tau(A_{12})=\tfrac12(E_{12}-E_{21}+E_{16}-E_{61}+E_{27}-E_{72}+E_{67}-E_{76}),\\
&\tau(A_{13})=\tfrac12(-E_{13}-E_{31}-E_{15}-E_{51}+E_{37}+E_{73}+E_{57}+E_{75}),\\
&\tau(A_{14})=\tfrac12(-E_{13}+E_{31}+E_{15}-E_{51}-E_{37}+E_{73}+E_{57}-E_{75}),\\
&\tau(A_{15})=E_{11}-E_{77},\\
&\tau(A_{16})=\tfrac12(2E_{24}-E_{42}+E_{46}-2E_{64}),\\
&\tau(A_{17})=\tfrac12(2E_{24}+E_{42}+E_{46}+2E_{64}),\\
&\tau(A_{18})=\tfrac12(2E_{34}-E_{43}-E_{45}+2E_{54}),\\
&\tau(A_{19})=\tfrac12(-2E_{34}-E_{43}+E_{45}+2E_{54}),\\
&\tau(A_{20})=\tfrac12(-2E_{14}+E_{41}+E_{47}-2E_{74}),\\
&\tau(A_{21})=\tfrac12(2E_{14}+E_{41}-E_{47}-2E_{74}),\\
&\tau(A_{22})=E_{11}+E_{22}+E_{33}+E_{44}+E_{55}+E_{66}+E_{77},
\end{aligned}\label{f411}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,7$, denote the $7\times 7$ matrices with zeroes everywhere except the value 1 in the entry $(i,j)$, sitting at the intersection of the $i$th row and the $j$th column.
One can check that $$[\,\,\tau(A_I),\,\,\tau(A_J)\,\,]\,\,=\,\,c^K{}_{IJ}\,\,\tau(A_K),$$ with the same structure constants as in \eqref{strco}.
\subsubsection{A contactification with $\mathfrak{f}_I$ symmetry}
So now we are in the situation of having two representations $(\rho,S)$ and $(\tau,R=\bigwedge_7)$ of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$, and we can try to solve the equation \eqref{maga} for the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$. Of course, if we started with some arbitrary $\rho$ and $\tau$ this equation would have no solutions other than 0, but here we expect a solution, since we know of its existence from Cartan's PhD thesis \cite{CartanPhd} and the announcement in Helgason's paper \cite{He}. And indeed there is a solution with nonzero $\omega$, which, when written in the bases $\{f_\mu\}$ in $S$ and $\{e_i\}$ in $R$, gives the \emph{seven} 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$, $i=1,\dots,7$, on $N=\mathbb{R}^8$ given by:
\begin{equation}\begin{aligned}
\omega^1=&{\rm d} x^1\wedge{\rm d} x^2-{\rm d} x^7\wedge{\rm d} x^8,\\
\omega^2=&{\rm d} x^2\wedge{\rm d} x^4-{\rm d} x^6\wedge{\rm d} x^8,\\
\omega^3=&{\rm d} x^1\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^8,\\
\omega^4=&\tfrac12\,\big(\,{\rm d} x^1\wedge{\rm d} x^6-{\rm d} x^2\wedge{\rm d} x^5-{\rm d} x^3\wedge{\rm d} x^8+{\rm d} x^4\wedge{\rm d} x^7\,\big),\\
\omega^5=&{\rm d} x^2\wedge{\rm d} x^3-{\rm d} x^6\wedge{\rm d} x^7,\\
\omega^6=&{\rm d} x^1\wedge{\rm d} x^3-{\rm d} x^5\wedge{\rm d} x^7,\\
\omega^7=&{\rm d} x^3\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^6.
\end{aligned}\label{2forms}\end{equation}
These, via the contactification and the theory summarized in Corollary \ref{cruco} lead to the following theorem.
\begin{theorem}\label{distf1}
Let $M=\mathbb{R}^{15}$ with coordinates $(u^1,\dots,u^7,x^1,\dots ,x^8)$, and consider seven 1-forms $\lambda^1,\dots,\lambda^7$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+ x^1{\rm d} x^2- x^7{\rm d} x^8,\\
\lambda^2=&{\rm d} u^2+ x^2{\rm d} x^4- x^6{\rm d} x^8,\\
\lambda^3=&{\rm d} u^3+ x^1{\rm d} x^4- x^5{\rm d} x^8,\\
\lambda^4=&{\rm d} u^4+\tfrac12\,\big(\, x^1{\rm d} x^6- x^2{\rm d} x^5-x^3{\rm d} x^8+ x^4{\rm d} x^7\,\big),\\
\lambda^5=&{\rm d} u^5+x^2{\rm d} x^3- x^6{\rm d} x^7,\\
\lambda^6=&{\rm d} u^6+x^1{\rm d} x^3- x^5{\rm d} x^7,\\
\lambda^7=&{\rm d} u^7+x^3{\rm d} x^4- x^5{\rm d} x^6.
\end{aligned}$$
The rank 8 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{15}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^7=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^8)$ is the spinorial representation \eqref{f41} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}(4,3)$, and $(\tau,R=\mathbb{R}^7)$ is the vectorial representation \eqref{f411} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{f}_I$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{f}_I,$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3),$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{f}_I$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $(F_I,P_I)$ related to the following \emph{crossed} Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} of $\mathfrak{f}_I$.
\end{theorem}
\begin{remark}
Please note that this is an example of an application of the magical equation \eqref{maga} in which the starting algebra $\mathfrak{n}_{00}$ was big enough that its Tanaka prolongation counterpart $\mathfrak{n}_0$ is precisely equal to $\mathfrak{n}_{00}$. This was actually expected from the construction based on the crossed Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram}, which shows that the $\mathfrak{n}_0$ of this parabolic geometry is precisely our $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$.
\end{remark}
\begin{remark}
One sees that the distribution $\mathcal D$ in $\mathbb{R}^{15}$ with $\mathfrak{f}_I$ symmetry presented in Theorem \ref{distf1} looks different from the distribution of our Example \ref{exa4}. It follows, however, that both these distributions are locally equivalent, and both have the same simple exceptional Lie algebra $\mathfrak{f}_I$ as the algebra of their automorphisms.
\end{remark}
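Both the theorem's 1-forms and the 2-forms \eqref{2forms} are linear-coefficient data, so the contactification can be cross-checked by machine: writing $\lambda^i={\rm d} u^i+C^i{}_{\mu\nu}x^\mu{\rm d} x^\nu$, the exterior derivative ${\rm d}\lambda^i$ has the constant coefficient matrix $C^i-(C^i)^T$, which recovers $\omega^i$. A small numpy sketch of this check (our own illustration; the term lists are transcribed from \eqref{2forms}):

```python
import numpy as np

# terms (mu, nu, c): c * x^mu dx^nu in lambda^i, equivalently c * dx^mu ^ dx^nu in omega^i
terms = [
    [(1, 2, 1), (7, 8, -1)],                                   # lambda^1 / omega^1
    [(2, 4, 1), (6, 8, -1)],                                   # lambda^2 / omega^2
    [(1, 4, 1), (5, 8, -1)],                                   # lambda^3 / omega^3
    [(1, 6, 0.5), (2, 5, -0.5), (3, 8, -0.5), (4, 7, 0.5)],    # lambda^4 / omega^4
    [(2, 3, 1), (6, 7, -1)],                                   # lambda^5 / omega^5
    [(1, 3, 1), (5, 7, -1)],                                   # lambda^6 / omega^6
    [(3, 4, 1), (5, 6, -1)],                                   # lambda^7 / omega^7
]

Omega = []
for t in terms:
    C = np.zeros((8, 8))                  # lambda^i = du^i + C[mu, nu] x^mu dx^nu
    for mu, nu, c in t:
        C[mu - 1, nu - 1] = c
    Omega.append(C - C.T)                 # coefficient matrix of d(lambda^i) = omega^i

# each omega^i is a genuine 2-form (skew coefficient matrix) ...
for O in Omega:
    assert np.allclose(O, -O.T)
# ... and the seven of them are linearly independent, so [D, D] spans all of TM:
# the distribution of Theorem distf1 is bracket generating with growth (8, 15)
stack = np.stack([O.ravel() for O in Omega])
assert np.linalg.matrix_rank(stack) == 7
```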
\subsubsection{Contactification for $\mathfrak{f}_I$: more algebra about $\mathfrak{so}(4,3)$}\label{phiform}
In our construction of the $\mathfrak{f}_I$ symmetric distribution $\mathcal D$ in Theorem \ref{distf1} the crucial role was played by the 7-dimensional span of 2-forms $\omega^i$, $i=1,2,\dots,7$. If we were given these seven 2-forms, we would produce the $\mathfrak{f}_I$ symmetric distribution $\mathcal D$ by the procedure of contactification.
It turns out that in $S=\mathbb{R}^8$ there is a particular 4-form
$$\Phi=\tfrac{1}{4!}\Phi_{\mu\nu\rho\sigma}{\rm d} x^\mu\wedge{\rm d} x^\nu\wedge{\rm d} x^\rho\wedge {\rm d} x^\sigma$$
that is $\mathbb{R}\oplus \mathfrak{so}(4,3)$ invariant
$$\rho_I{}^\alpha{}_\mu \Phi_{\alpha\nu\rho\sigma}+\rho_I{}^\alpha{}_\nu \Phi_{\mu\alpha\rho\sigma}+\rho_I{}^\alpha{}_\rho \Phi_{\mu\nu\alpha\sigma}+\rho_I{}^\alpha{}_\sigma \Phi_{\mu\nu\rho\alpha}=S_I\Phi_{\mu\nu\rho\sigma}.$$
It may be represented by:
$$\Phi=h_{ij}\omega^i\wedge\omega^j,$$
where $\omega^i$ are given by \eqref{2forms} and
$$\big(\,\,h_{ij}\,\,\big)=
\begin{pmatrix}
0&0&0&0&0&0&1\\0&0&0&0&0&-1&0\\0&0&0&0&1&0&0\\0&0&0&1&0&0&0\\0&0&1&0&0&0&0\\0&-1&0&0&0&0&0\\1&0&0&0&0&0&0 \end{pmatrix},$$
or in words\footnote{Note that since $(h_{ij})$ is a symmetric matrix of signature $(4,3)$, this fact alone shows that the span of the seven 2-forms $\omega^i$ is a $7$-dimensional representation space of $\mathbf{SO}(4,3)$. Actually, this fact easily leads to the construction of the double cover ${\mathbb{Z}}_2\to{\bf Spin}(4,3)\to\mathbf{SO}(4,3)$. }: $h_{ij}$, $i,j=1,2,\dots,7$, \emph{are all zero except} $h_{17}=h_{71}=-h_{26}=-h_{62}=h_{35}=h_{53}=h_{44}=1$.
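The stated signature of $(h_{ij})$ is immediate to confirm numerically; a short numpy sketch (our own check):

```python
import numpy as np

h = np.zeros((7, 7))
h[0, 6] = h[6, 0] = 1.0      # h_17 = h_71 = 1
h[1, 5] = h[5, 1] = -1.0     # h_26 = h_62 = -1
h[2, 4] = h[4, 2] = 1.0      # h_35 = h_53 = 1
h[3, 3] = 1.0                # h_44 = 1

ev = np.linalg.eigvalsh(h)
# four positive and three negative eigenvalues: signature (4,3)
assert (ev > 0).sum() == 4 and (ev < 0).sum() == 3
```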
The form $\Phi$ in full beauty reads:
\begin{equation}\begin{aligned}
\tfrac23\Phi\,\,=\,\,2\,\,\big(\,\,&{\rm d} x^1\wedge{\rm d} x^2\wedge{\rm d} x^3\wedge{\rm d} x^4+{\rm d} x^5\wedge{\rm d} x^6\wedge{\rm d} x^7\wedge{\rm d} x^8\,\,\big)-\\
&{\rm d} x^1\wedge{\rm d} x^2\wedge{\rm d} x^5\wedge{\rm d} x^6+{\rm d} x^1\wedge{\rm d} x^3\wedge{\rm d} x^6\wedge{\rm d} x^8-\\&{\rm d} x^1\wedge{\rm d} x^4\wedge{\rm d} x^6\wedge{\rm d} x^7-{\rm d} x^2\wedge{\rm d} x^3\wedge{\rm d} x^5\wedge{\rm d} x^8+\\&{\rm d} x^2\wedge{\rm d} x^4\wedge{\rm d} x^5\wedge{\rm d} x^7-{\rm d} x^3\wedge{\rm d} x^4\wedge{\rm d} x^7\wedge{\rm d} x^8.
\end{aligned}
\label{4form}\end{equation}
It is remarkable that this 4-form alone captures all the features of the $\mathfrak{f}_I$ symmetric contactification discussed throughout Section \ref{41}. By this we mean the following:
\begin{enumerate}
\item Consider $N=\mathbb{R}^8$ with coordinates $(x^\mu)$, $\mu=1,2,\dots,8$, and the 4-form $$\Phi=\tfrac{1}{4!}\Phi_{\mu\nu\rho\sigma}{\rm d} x^\mu\wedge{\rm d} x^\nu\wedge{\rm d} x^\rho\wedge {\rm d} x^\sigma$$ given by \eqref{4form}.
\item Consider an equation
$$A^\alpha{}_\mu \Phi_{\alpha\nu\rho\sigma}+A^\alpha{}_\nu \Phi_{\mu\alpha\rho\sigma}+A^\alpha{}_\rho \Phi_{\mu\nu\alpha\sigma}+A^\alpha{}_\sigma \Phi_{\mu\nu\rho\alpha}=S\Phi_{\mu\nu\rho\sigma}$$
for the real $8\times 8$ matrix $A=(A^\mu{}_\nu)$.
\item For simplicity solve it in two steps: \begin{itemize}
\item First with $S=0$. You obtain a 21-dimensional solution space, which will be the \emph{spin representation} $\rho$ of $\mathfrak{so}(4,3)$, given by $\rho(A)=A$.
\item Then prove that the only solutions with $S\neq 0$ have $S=4$, and that, modulo the addition of linear combinations of solutions with $S=0$, such a solution is given by $A=\mathrm{Id}_{8\times 8}$. Extend the solution space obtained for $S=0$ by this solution $A=\mathrm{Id}_{8\times 8}$.
\end{itemize}
\item In this way you will show that the stabilizer in $\mathfrak{gl}(8,\mathbb{R})$ of the 4-form $\Phi$ is the Lie algebra $\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the \emph{spin representation} $\rho$ of Pauli spinors; $\rho(A)=A$.
\item Then search for a 7-dimensional space of
2-forms, spanned say by the 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ satisfying
$$A^\alpha{}_\mu\,\, \omega^i{}_{\alpha\nu}\,\,+\,\,A^\alpha{}_\nu\,\, \omega^i{}_{\mu\alpha}\,\,=\,\,s^i{}_j \,\,\omega^j{}_{\mu\nu}\,\,$$
for all $A$s from the spin representation $\rho(A)=A$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$. Here $s^i{}_j$ are auxiliary constants
\footnote{Note, however, that although you look for $\omega^i{}_{\mu\nu}$ with \emph{some} constants $s^i{}_j$, these constants have a geometric meaning: comparing with our magical equation \eqref{magb} we see that the $7\times 7$ matrices $(s^i{}_j)$ constitute the matrices of the defining representation $\tau$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$.}.
\item This space is uniquely defined by these equations, and after solving them you will get 7 linearly independent 2-forms $(\omega^1,\dots,\omega^7)$ in $N=\mathbb{R}^8$.
\item Contactifying the resulting structure $\big(N,\mathrm{Span}(\omega^1,\dots,\omega^7)\big)$, as we did e.g.\ in Theorem \ref{distf1}, you will get the $\mathfrak{f}_I$-symmetric distribution ${\mathcal D}$ in $\mathbb{R}^7\to \big(\,M=\mathbb{R}^{15}\,\big)\to \big(\,N=\mathbb{R}^8\,\big)$.
\end{enumerate}
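The steps above can be mechanized. The numpy sketch below is our own illustration ($\Phi$ is transcribed from \eqref{4form}): it assembles the linear system of step (2), verifies the always-valid solution $A=\mathrm{Id}_{8\times 8}$, $S=4$ of step (3), and computes the dimension of the full solution space, which the text asserts equals $22=\dim(\mathbb{R}\oplus\mathfrak{so}(4,3))$:

```python
import numpy as np
from itertools import permutations

def parity(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# components Phi_{mu nu rho sigma}, mu<nu<rho<sigma, from (4form) (display times 3/2)
coeffs = {(1, 2, 3, 4): 3.0, (5, 6, 7, 8): 3.0,
          (1, 2, 5, 6): -1.5, (1, 3, 6, 8): 1.5, (1, 4, 6, 7): -1.5,
          (2, 3, 5, 8): -1.5, (2, 4, 5, 7): 1.5, (3, 4, 7, 8): -1.5}
Phi = np.zeros((8, 8, 8, 8))
for idx, c in coeffs.items():
    for p in permutations(range(4)):
        Phi[tuple(idx[q] - 1 for q in p)] = parity(p) * c

def L(A, S):
    """Left side minus right side of the stabilizer equation of step (2)."""
    return (np.einsum('am,anrs->mnrs', A, Phi)
            + np.einsum('an,mars->mnrs', A, Phi)
            + np.einsum('ar,mnas->mnrs', A, Phi)
            + np.einsum('as,mnra->mnrs', A, Phi)
            - S * Phi)

# step (3): A = Id, S = 4 solves the equation for any 4-form Phi
assert np.allclose(L(np.eye(8), 4.0), 0)

# step (4): dimension of the solution space in the 65 unknowns (A, S);
# the text asserts this equals 22 = dim(R + so(4,3))
cols = []
for a in range(8):
    for b in range(8):
        Eab = np.zeros((8, 8))
        Eab[a, b] = 1.0
        cols.append(L(Eab, 0.0).ravel())
cols.append(L(np.zeros((8, 8)), 1.0).ravel())
M = np.stack(cols, axis=1)                     # 4096 x 65 coefficient matrix
print("stabilizer dimension:", 65 - np.linalg.matrix_rank(M))
```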
\subsection{Realization of $\mathfrak{f}_{II}$}\label{42}
It seems that Cartan was only interested in the explicit realization of $\mathfrak{f}_I$. The realization of $\mathfrak{f}_{II}$ can be obtained in the same spirit as described in Section \ref{41}. Since the arguments parallel those of Section \ref{41}, we display, without much explanation, only the main steps leading to this realization.
We start with the representation of the Clifford algebra ${\mathcal C}\ell(0,7)$ generated by the seven $\rho$-matrices from \eqref{cl07}. They satisfy
$$\rho_i\rho_j+\rho_j\rho_i=-2\delta_{ij}I_{8\times 8},\quad i,j=1,\dots,7.$$
They induce the 8-dimensional representation
$$\rho:\mathbb{R}\oplus\mathfrak{so}(0,7)\to \mathrm{End}(S)$$
of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(0,7)$ in the space $S=\mathbb{R}^8$ of real Pauli spinors, generated by the 22 real $8\times 8$ matrices:
$$\begin{aligned}
&\rho(A_{I(i,j)})=\tfrac12\rho_i\rho_j, \quad 1\leq i<j\leq 7,\\
&\rho(A_{22})=\tfrac12 (I\otimes I\otimes I),
\end{aligned}
$$
with the index $I=I(i,j)$ given by \eqref{Iij}, and with $I,\sigma_x,\epsilon,\sigma_z$ given by \eqref{pauu}-\eqref{pauu1}.
Explicitly, in terms of matrices $I,\sigma_x,\epsilon,\sigma_z$ the generators of this spinorial representation of $\mathfrak{so}(0,7)$ are:
\begin{equation}
\begin{array}{lll}
\rho(A_1)=-\tfrac12 I\otimes \epsilon \otimes\sigma_z,&\quad \rho(A_8)=\tfrac12 \epsilon\otimes \sigma_z\otimes\sigma_z,&\quad \rho(A_{15})=-\tfrac12 I\otimes \epsilon \otimes I,\\
\rho(A_2)=\tfrac12 I\otimes \epsilon \otimes\sigma_x, &\quad\rho(A_9)=-\tfrac12 \epsilon\otimes \sigma_z \otimes\sigma_x, &\quad \rho(A_{16})=-\tfrac12 \sigma_x\otimes I \otimes\epsilon, \\
\rho(A_3)=-\tfrac12 I\otimes I \otimes\epsilon, &\quad\rho(A_{10})=-\tfrac12 I\otimes \sigma_z \otimes \epsilon,&\quad \rho(A_{17})=-\tfrac12 \sigma_x\otimes\epsilon \otimes\sigma_x,\\
\rho(A_4)=-\tfrac12 \epsilon\otimes \epsilon\otimes\epsilon,&\quad \rho(A_{11})=\tfrac12 \epsilon\otimes \sigma_z \otimes I,&\quad \rho(A_{18})=-\tfrac12 \sigma_x\otimes \epsilon \otimes\sigma_z,\\
\rho(A_5)=\tfrac12 \epsilon\otimes I \otimes\sigma_x,&\quad \rho(A_{12})=-\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_z,&\quad \rho(A_{19})=\tfrac12 \sigma_z\otimes \epsilon \otimes I,\\
\rho(A_6)=\tfrac12 \epsilon\otimes I\otimes\sigma_z,&\quad \rho(A_{13})=\tfrac12 \epsilon\otimes \sigma_x\otimes\sigma_x,&\quad \rho(A_{20})=\tfrac12 \sigma_z\otimes \sigma_x \otimes \epsilon,\\
\rho(A_7)=\tfrac12 \epsilon\otimes \sigma_x \otimes I, &\quad \rho(A_{14})=\tfrac12 I \otimes \sigma_x\otimes \epsilon, &\quad
\rho(A_{21})=\tfrac12 \sigma_z\otimes \sigma_z \otimes \epsilon.
\end{array}
\label{f42}\end{equation}
We also write down the corresponding generators of the vectorial representation $\tau$, which is the 7-dimensional irreducible component $\bigwedge_7$ of the representation $\rho\wedge\rho$,
which decomposes as $\bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_7$. These generators read:
\begin{equation}
\begin{array}{lll}
\tau(A_1)=E_{31}-E_{13},&\quad \tau(A_8)=E_{37}-E_{73},&\quad \tau(A_{15})=E_{75}-E_{57},\\
\tau(A_2)=E_{12}-E_{21}, &\quad\tau(A_9)=E_{72}-E_{27}, &\quad \tau(A_{16})=E_{14}-E_{41}, \\
\tau(A_3)=E_{32}-E_{23}, &\quad\tau(A_{10})=E_{76}-E_{67},&\quad \tau(A_{17})=E_{34}-E_{43},\\
\tau(A_4)=E_{61}-E_{16},&\quad \tau(A_{11})=E_{51}-E_{15},&\quad \tau(A_{18})=E_{42}-E_{24},\\
\tau(A_5)=E_{36}-E_{63},&\quad \tau(A_{12})=E_{53}-E_{35},&\quad \tau(A_{19})=E_{46}-E_{64},\\
\tau(A_6)=E_{62}-E_{26},&\quad \tau(A_{13})=E_{25}-E_{52},&\quad \tau(A_{20})=E_{47}-E_{74},\\
\tau(A_7)=E_{17}-E_{71}, &\quad \tau(A_{14})= E_{65}-E_{56},&\quad
\tau(A_{21})=E_{54}-E_{45},
\end{array}
\label{f421}\end{equation}
where $E_{ij}$ are the $7\times 7$ matrices with all entries zero except the entry $(i,j)$, where 1 resides.
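A direct check (our numpy sketch below, with the index pairs transcribed from \eqref{f421}) confirms that the 21 displayed generators are antisymmetric and linearly independent, i.e. that $\tau$ maps $\mathfrak{so}(0,7)$ onto the full antisymmetric algebra $\mathfrak{so}(7)$ acting on $R=\mathbb{R}^7$:

```python
import numpy as np

def E(i, j):
    """Elementary 7 x 7 matrix E_ij, 1-indexed as in the text."""
    m = np.zeros((7, 7))
    m[i - 1, j - 1] = 1.0
    return m

# (i, j) with tau(A_k) = E_ij - E_ji, k = 1, ..., 21, read off from (f421)
pairs = [(3, 1), (1, 2), (3, 2), (6, 1), (3, 6), (6, 2), (1, 7),
         (3, 7), (7, 2), (7, 6), (5, 1), (5, 3), (2, 5), (6, 5),
         (7, 5), (1, 4), (3, 4), (4, 2), (4, 6), (4, 7), (5, 4)]
taus = [E(i, j) - E(j, i) for i, j in pairs]

for t in taus:
    assert np.allclose(t, -t.T)               # every generator is antisymmetric
# each of the 21 unordered index pairs occurs exactly once, so the generators
# are linearly independent and span so(7)
stack = np.stack([t.ravel() for t in taus])
assert np.linalg.matrix_rank(stack) == 21
```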
We are again in a position to apply Lemma \ref{l21}. Given the representations $(\rho,S=\mathbb{R}^8)$ and $(\tau,R=\bigwedge_7)$ of $\mathfrak{so}(0,7)$ we solve the magical equation \eqref{maga} for $\omega=\tfrac12\omega^i{}_{\mu\nu}e_i\otimes f^\mu\wedge f^\nu$. In this way we obtain the seven 2-forms $\omega^i=-\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on $N=\mathbb{R}^8$, with coordinates $(x^\mu)_{\mu=1}^8$, which read as follows:
\begin{equation}\begin{aligned}
\omega^1=&-{\rm d} x^1\wedge{\rm d} x^2-{\rm d} x^3\wedge{\rm d} x^4+{\rm d} x^5\wedge{\rm d} x^6+{\rm d} x^7\wedge{\rm d} x^8,\\
\omega^2=&{\rm d} x^1\wedge{\rm d} x^3-{\rm d} x^2\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^7+{\rm d} x^6\wedge{\rm d} x^8,\\
\omega^3=&-{\rm d} x^1\wedge{\rm d} x^4-{\rm d} x^2\wedge{\rm d} x^3+{\rm d} x^5\wedge{\rm d} x^8+{\rm d} x^6\wedge{\rm d} x^7,\\
\omega^4=&{\rm d} x^1\wedge{\rm d} x^5+{\rm d} x^2\wedge{\rm d} x^6+{\rm d} x^3\wedge{\rm d} x^7+{\rm d} x^4\wedge{\rm d} x^8,\\
\omega^5=&-{\rm d} x^1\wedge{\rm d} x^6+{\rm d} x^2\wedge{\rm d} x^5+{\rm d} x^3\wedge{\rm d} x^8-{\rm d} x^4\wedge{\rm d} x^7,\\
\omega^6=&{\rm d} x^1\wedge{\rm d} x^7+{\rm d} x^2\wedge{\rm d} x^8-{\rm d} x^3\wedge{\rm d} x^5-{\rm d} x^4\wedge{\rm d} x^6,\\
\omega^7=&{\rm d} x^1\wedge{\rm d} x^8-{\rm d} x^2\wedge{\rm d} x^7+{\rm d} x^3\wedge{\rm d} x^6-{\rm d} x^4\wedge{\rm d} x^5.
\end{aligned}\label{2forms2}\end{equation}
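Before contactifying, it is worth checking numerically that the coefficient matrices $\Omega^i$ of these 2-forms, $\omega^i=\tfrac12\Omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$, behave like generators of ${\mathcal C}\ell(0,7)$ acting on $\mathbb{R}^8$: they are skew, square to $-\mathrm{Id}$, and pairwise anticommute. A numpy sketch of this check (our own illustration; matrices transcribed from \eqref{2forms2}):

```python
import numpy as np

# terms (mu, nu, c): c * dx^mu ^ dx^nu in omega^i, from (2forms2)
terms = [
    [(1, 2, -1), (3, 4, -1), (5, 6, 1), (7, 8, 1)],    # omega^1
    [(1, 3, 1), (2, 4, -1), (5, 7, -1), (6, 8, 1)],    # omega^2
    [(1, 4, -1), (2, 3, -1), (5, 8, 1), (6, 7, 1)],    # omega^3
    [(1, 5, 1), (2, 6, 1), (3, 7, 1), (4, 8, 1)],      # omega^4
    [(1, 6, -1), (2, 5, 1), (3, 8, 1), (4, 7, -1)],    # omega^5
    [(1, 7, 1), (2, 8, 1), (3, 5, -1), (4, 6, -1)],    # omega^6
    [(1, 8, 1), (2, 7, -1), (3, 6, 1), (4, 5, -1)],    # omega^7
]
Omega = []
for t in terms:
    m = np.zeros((8, 8))
    for mu, nu, c in t:
        m[mu - 1, nu - 1] = c
        m[nu - 1, mu - 1] = -c
    Omega.append(m)

Id8 = np.eye(8)
for i in range(7):
    for j in range(7):
        anti = Omega[i] @ Omega[j] + Omega[j] @ Omega[i]
        assert np.allclose(anti, -2.0 * (i == j) * Id8)   # Clifford relations of Cl(0,7)
```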
These, via the contactification lead to the following theorem.
\begin{theorem}\label{distf2}
Let $M=\mathbb{R}^{15}$ with coordinates $(u^1,\dots,u^7,x^1,\dots ,x^8)$, and consider seven 1-forms $\lambda^1,\dots,\lambda^7$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1- x^1{\rm d} x^2- x^3{\rm d} x^4+ x^5{\rm d} x^6+ x^7{\rm d} x^8,\\
\lambda^2=&{\rm d} u^2+ x^1{\rm d} x^3- x^2{\rm d} x^4-x^5{\rm d} x^7+x^6{\rm d} x^8,\\
\lambda^3=&{\rm d} u^3- x^1{\rm d} x^4- x^2{\rm d} x^3+ x^5{\rm d} x^8+ x^6{\rm d} x^7,\\
\lambda^4=&{\rm d} u^4+ x^1{\rm d} x^5+ x^2{\rm d} x^6+ x^3{\rm d} x^7+ x^4{\rm d} x^8,\\
\lambda^5=&{\rm d} u^5- x^1{\rm d} x^6+x^2{\rm d} x^5+x^3{\rm d} x^8-x^4{\rm d} x^7,\\
\lambda^6=&{\rm d} u^6+ x^1{\rm d} x^7+ x^2{\rm d} x^8- x^3{\rm d} x^5-x^4{\rm d} x^6,\\
\lambda^7=&{\rm d} u^7+ x^1{\rm d} x^8- x^2{\rm d} x^7+ x^3{\rm d} x^6 - x^4{\rm d} x^5.
\end{aligned}$$
The rank 8 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{15}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^7=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^8)$ is the spinorial representation \eqref{f42} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}(0,7)$, and $(\tau,R=\mathbb{R}^7)$ is the vectorial representation \eqref{f421} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{f}_{II}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{f}_{II},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(0,7),$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{f}_{II}$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $(F_{II},P_{II})$ related to the following \emph{crossed} Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{***x}\end{dynkinDiagram} of $\mathfrak{f}_{II}$.
\end{theorem}
\begin{remark} In this way we realized the real form $\mathfrak{f}_{II}$ of the simple exceptional complex Lie algebra $\mathfrak{f}_4$ in $M=\mathbb{R}^{15}$ as a symmetry algebra of the Pfaffian system $(\lambda^1,\dots,\lambda^7)$.
This realization does not appear in Cartan's thesis.
\end{remark}
\begin{remark}
Our present case of $\mathfrak{f}_{II}$ also admits a description in terms of a certain $\mathbb{R}\oplus\mathfrak{so}(0,7)$ invariant 4-form $\Phi$ on $S=\mathbb{R}^8$, analogous to the 4-form $\Phi$ introduced in Section \ref{phiform}, where we discussed $\mathfrak{f}_I$. We omit the details here.
\end{remark}
\section{Spinorial representations in dimension 8}
Dimension \emph{eight} is quite exceptional: for example, 8 is the highest possible dimension in which Euclidean Hurwitz algebras exist, gifting us with the algebra of \emph{octonions}. From the perspective of our paper, which meanders through the realm of simple Lie algebras, eight is \emph{very} special: among all the complex simple Lie algebras, the Dynkin diagram of $\mathfrak{d}_4=\mathfrak{so}(8,\mathbb{C})$, the orthogonal algebra \emph{defined} in dimension \emph{eight}, is the most symmetric:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}
\end{dynkinDiagram}.}
\noindent
Visibly, it has the threefold symmetry group $S_3$.
The Lie algebra $\mathfrak{so}(8,\mathbb{C})$ has six real forms. These are: $\mathfrak{so}(8,0)$, $\mathfrak{so}(7,1)$, $\mathfrak{so}(6,2)$, $\mathfrak{so}^*(8)$, $\mathfrak{so}(5,3)$ and $\mathfrak{so}(4,4)$, with the following respective Satake diagrams:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{D}{****}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{o***}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{oo**}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{ooo*}
\end{dynkinDiagram},\hspace{0.5cm}\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}
\dynkinFold{3}{4}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}\end{dynkinDiagram}.}
We see that among these Satake diagrams the only ones that share the $S_3$ symmetry of the Dynkin diagram of the complex algebra $\mathfrak{d}_4$ are those of the \emph{compact real form} $\mathfrak{so}(8,0)$ and of the \emph{split real form} $\mathfrak{so}(4,4)$.
The $S_3$ symmetry of these two diagrams indicates that the lowest dimensional real representations of $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ may have additional features when compared with the spinorial representations of other $\mathfrak{so}(p,q)$s. In particular, for \emph{both} $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ we have:
\begin{itemize}
\item Their Dirac representation $(\rho,S)$ in the real vector space $S=\mathbb{R}^{16}$ is reducible over $\mathbb{R}$, and it splits into two real \emph{Weyl} representations $(\rho_+,S_+)$ and $(\rho_-,S_-)$ in the respective vector spaces of \emph{Weyl spinors} $S_+=\mathbb{R}^8$ and $S_-=\mathbb{R}^8$, both of the same real dimension \emph{eight},
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8.$$
\item The real Weyl representations $(\rho_\pm,S_\pm)$ are \emph{faithful}, \emph{irreducible} and \emph{nonequivalent}.
\item The defining representations $(\tau,R)$ of $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$, i.e. the representations by endomorphisms of the space $R=\mathbb{R}^8$ of vectors preserving the bilinear forms of respective signatures $(4,4)$ and $(8,0)$, have the same dimension \emph{eight} as the two Weyl representations $(\rho_\pm,S_\pm)$.
\item The real defining representations $(\tau,R)$ are \emph{irreducible} for both $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$.
\item All three real 8-dimensional irreducible representations $(\rho_+,S_+)$, $(\rho_-,S_-)$ and $(\tau,R)$ of each of the algebras $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ are \emph{pairwise nonequivalent}.
\end{itemize}
Thus the Lie algebras $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ have three real, irreducible and nonequivalent representations $(\rho_+,\rho_-,\tau)$ in the vector space $\mathbb{R}^8$ of the \emph{defining} dimension $p+q=8$. Among all $\mathfrak{so}(p,q)$ Lie algebras, this is the only dimension $p+q$ in which such a situation with the irreducible representations occurs.
Below, we provide the explicit description of the \emph{triality representations} $(\rho_+,\rho_-,\tau)$ separately for $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$.
\subsection{Triality representations of $\mathfrak{so}(4,4)$}
We recall from Section \ref{spintraut} that the Lie algebra $\mathfrak{so}(4,4)$ admits a representation $\rho$ in the 16-dimensional real vector space $S=\mathbb{R}^{16}$ of Dirac spinors. This is obtained by using the Dirac $\gamma$ matrices generating the representation of the Clifford algebra ${\mathcal C}\ell(4,4)$. In terms of the 2-dimensional Pauli matrices $(\sigma_x,\epsilon,\sigma_z,I)$ these look as follows:
\begin{equation} \begin{aligned}
&\gamma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\gamma_2=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\epsilon\\
&\gamma_3=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\gamma_4=\sigma_x\otimes\sigma_x\otimes\epsilon\otimes I\\
&\gamma_5=\sigma_x\otimes\sigma_x\otimes\sigma_z\otimes I\\
&\gamma_6=\sigma_x\otimes\epsilon\otimes I\otimes I\\
&\gamma_7=\sigma_x\otimes\sigma_z\otimes I\otimes I\\
&\gamma_8=\epsilon\otimes I\otimes I\otimes I.
\end{aligned}\label{dirga}\end{equation}
They satisfy the \emph{Dirac identity}
\begin{equation} \gamma_i\gamma_j+\gamma_j\gamma_i=2g_{ij} (I\otimes I\otimes I\otimes I), \quad i,j=1,\dots,8,\label{clifi}\end{equation}
with
$$\big( g_{ij} \big)=\mathrm{diag}(1,-1,1,-1,1,-1,1,-1).$$
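The Dirac identity \eqref{clifi} is mechanical to verify for the matrices \eqref{dirga}. The numpy sketch below is our own check; it assumes the real $2\times 2$ matrices $\sigma_x$, $\sigma_z$, $I$ and the rotation generator $\epsilon$ of \eqref{pauu} (the identity itself is insensitive to the sign convention chosen for $\epsilon$):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
eps = np.array([[0.0, -1.0], [1.0, 0.0]])   # assumed sign convention for epsilon

def kron(*ms):
    """Iterated Kronecker product, matching the tensor products in the text."""
    return reduce(np.kron, ms)

gammas = [kron(sx, sx, sx, sx), kron(sx, sx, sx, eps), kron(sx, sx, sx, sz),
          kron(sx, sx, eps, I2), kron(sx, sx, sz, I2), kron(sx, eps, I2, I2),
          kron(sx, sz, I2, I2), kron(eps, I2, I2, I2)]
g = np.diag([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
Id16 = np.eye(16)

# gamma_i gamma_j + gamma_j gamma_i = 2 g_ij Id, for all i, j
for i in range(8):
    for j in range(8):
        assert np.allclose(gammas[i] @ gammas[j] + gammas[j] @ gammas[i],
                           2.0 * g[i, j] * Id16)
```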
The 28 generators of $\mathfrak{so}(4,4)$ in the Majorana-Dirac spinor representation $\rho$ in the space of Dirac spinors $S=\mathbb{R}^{16}$ are given by
$$\rho(A_{I(i,j)})=\tfrac12\gamma_i\gamma_j, \quad 1\leq i<j\leq 8,$$
where we again have used the function $I=I(i,j)$ defined in \eqref{Iij}. Note that since now $i<j$ can run from 1 to 8, the function has a range from 1 to 28.
We add to these generators the scaling generator, $\rho(A_{29})$,
$$\rho(A_{29})=\tfrac12 I\otimes I\otimes I\otimes I.$$
This extends the Dirac representation $\rho$ of the Lie algebra $\mathfrak{so}(4,4)$ to the representation of the \emph{homothety} Lie algebra $\mathfrak{coa}(4,4)=\mathbb{R}\oplus\mathfrak{so}(4,4)$.
In terms of the 2-dimensional Pauli matrices these generators look like:
\begin{equation}\label{dir44}
\begin{array}{ll}
\rho(A_1)= \tfrac12 I \otimes I\otimes I\otimes \sigma_z,&\rho(A_{15})=\tfrac12 I \otimes \sigma_z\otimes \sigma_z\otimes I,\\
\rho(A_2)= \tfrac12 I \otimes I\otimes I\otimes \epsilon ,&\rho(A_{16})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho(A_3)= \tfrac12 I \otimes I\otimes I\otimes \sigma_x ,&\rho(A_{17})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_4)=\tfrac12 I \otimes I\otimes \sigma_z\otimes \sigma_x,&\rho(A_{18})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_5)= \tfrac12 I \otimes I\otimes \sigma_z\otimes \epsilon,&\rho(A_{19})=\tfrac12 I \otimes \epsilon\otimes \epsilon\otimes I ,\\
\rho(A_6)= \tfrac12 I \otimes I\otimes \sigma_z\otimes \sigma_z,&\rho(A_{20})=\tfrac12 I \otimes \epsilon\otimes \sigma_z\otimes I ,\\
\rho(A_7)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_x,&\rho(A_{21})=\tfrac12 I \otimes \sigma_x\otimes I\otimes I,\\
\rho(A_8)= \tfrac12 I \otimes I\otimes \epsilon\otimes \epsilon,&\rho(A_{22})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \sigma_x ,\\
\rho(A_9)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_z,&\rho(A_{23})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_{10})=\tfrac12 I \otimes I\otimes \sigma_x\otimes I,&\rho(A_{24})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_{11})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \sigma_x ,&\rho(A_{25})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes I ,\\
\rho(A_{12})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho(A_{26})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_z\otimes I ,\\
\rho(A_{13})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \sigma_z,&\rho(A_{27})=\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes I,\\
\rho(A_{14})= \tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes I,&\rho(A_{28})=\tfrac12 \sigma_z \otimes \sigma_z\otimes I\otimes I.
\end{array}
\end{equation}
Looking at the first factor in \emph{all} of these generators, we observe that it is either $I$ or $\sigma_z$, i.e. it is \emph{diagonal}. This means that this 16-dimensional representation of $\mathbb{R}\oplus\mathfrak{so}(4,4)$ is \emph{reducible}. It \emph{splits} into two real $8$-dimensional \emph{Weyl representations}
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8,$$
in the spaces $S_\pm$ of (Majorana)-Weyl spinors.
On generators of $\mathfrak{so}(4,4)$ these two 8-dimensional representations $\rho_\pm$, are given by:
\begin{equation}
\begin{array}{ll}
\rho_\pm(A_1)= \tfrac12 I\otimes I\otimes \sigma_z,&\rho_\pm(A_{15})=\tfrac12 \sigma_z\otimes \sigma_z\otimes I,\\
\rho_\pm(A_2)= \tfrac12 I\otimes I\otimes \epsilon ,&\rho_\pm(A_{16})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho_\pm(A_3)= \tfrac12 I\otimes I\otimes \sigma_x ,&\rho_\pm(A_{17})=\tfrac12 \epsilon\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_4)=\tfrac12 I\otimes \sigma_z\otimes \sigma_x,&\rho_\pm(A_{18})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_5)= \tfrac12 I\otimes \sigma_z\otimes \epsilon,&\rho_\pm(A_{19})=\tfrac12 \epsilon\otimes \epsilon\otimes I ,\\
\rho_\pm(A_6)= \tfrac12 I\otimes \sigma_z\otimes \sigma_z,&\rho_\pm(A_{20})=\tfrac12 \epsilon\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_7)= \tfrac12 I\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{21})=\tfrac12 \sigma_x\otimes I\otimes I,\\
\rho_\pm(A_8)= \tfrac12 I\otimes \epsilon\otimes \epsilon,&\rho_\pm(A_{22})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \sigma_x ,\\
\rho_\pm(A_9)= \tfrac12 I\otimes \epsilon\otimes \sigma_z,&\rho_\pm(A_{23})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_{10})=\tfrac12 I\otimes \sigma_x\otimes I,&\rho_\pm(A_{24})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_{11})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \sigma_x ,&\rho_\pm(A_{25})=\pm\tfrac12 \sigma_x\otimes \epsilon\otimes I ,\\
\rho_\pm(A_{12})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho_\pm(A_{26})=\pm\tfrac12 \sigma_x\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_{13})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \sigma_z,&\rho_\pm(A_{27})=\pm\tfrac12 \epsilon\otimes I\otimes I,\\
\rho_\pm(A_{14})= \tfrac12 \sigma_z\otimes \epsilon\otimes I,&\rho_\pm(A_{28})=\pm\tfrac12 \sigma_z\otimes I\otimes I.
\end{array}\label{rhopm}
\end{equation}
We extend them to $\mathbb{R}\oplus\mathfrak{so}(4,4)$ by adding
$$\rho_\pm(A_{29})=\tfrac12 I\otimes I\otimes I.$$
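The splitting can be confirmed numerically on sample generators; the numpy sketch below (our own check) compares two of the Dirac generators \eqref{dir44} with their Weyl blocks \eqref{rhopm}:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
kron = lambda *ms: reduce(np.kron, ms)

# sample 16-dimensional Dirac generators from (dir44) ...
rho1 = 0.5 * kron(I2, I2, I2, sz)        # rho(A_1)
rho22 = 0.5 * kron(sz, sx, sx, sx)       # rho(A_22)
# ... and the corresponding 8-dimensional Weyl blocks from (rhopm)
rp1, rm1 = 0.5 * kron(I2, I2, sz), 0.5 * kron(I2, I2, sz)
rp22, rm22 = 0.5 * kron(sx, sx, sx), -0.5 * kron(sx, sx, sx)

for big, plus, minus in [(rho1, rp1, rm1), (rho22, rp22, rm22)]:
    assert np.allclose(big[:8, :8], plus)      # the S_+ block
    assert np.allclose(big[8:, 8:], minus)     # the S_- block
    assert np.allclose(big[:8, 8:], 0) and np.allclose(big[8:, :8], 0)
```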
It follows that the Weyl representations $(\rho_\pm,S_\pm)$ of $\mathfrak{so}(4,4)$ are \emph{irreducible} and \emph{nonequivalent}.
They can be used to find yet another real 8-dimensional representation of $\mathfrak{so}(4,4)$. For this one considers the tensor product representation $$\rho_+\otimes\rho_-.$$ This 64-dimensional real representation of $\mathfrak{so}(4,4)$ is \emph{reducible}. It decomposes as:
$$\rho_+\otimes\rho_-=\alpha\oplus\tau\quad\mathrm{in}\quad S_+\otimes S_-=T_{56}\oplus R,\quad\mathrm{with}\quad \dim_\mathbb{R}(R)=8,\,\,\dim_\mathbb{R}(T_{56})=56,$$
having irreducible components $(\alpha,T_{56})$ and $(\tau,R)$ of respective dimensions 56 and 8. Explicitly, on generators of $\mathbb{R}\oplus\mathfrak{so}(4,4)$, the 8-dimensional representation $\tau$ reads:
\begin{equation}\begin{aligned}
&\tau(A_1)=E_{66}-E_{22},\\
&\tau(A_2)=\tfrac12(E_{23}-E_{32}+E_{25}-E_{52}+E_{36}-E_{63}+E_{56}-E_{65}),\\
&\tau(A_3)=\tfrac12(E_{23}+E_{32}+E_{25}+E_{52}+E_{36}+E_{63}+E_{56}+E_{65}),\\
&\tau(A_4)=\tfrac12(E_{23}+E_{32}-E_{25}-E_{52}-E_{36}-E_{63}+E_{56}+E_{65}),\\
&\tau(A_5)=\tfrac12(E_{23}-E_{32}-E_{25}+E_{52}-E_{36}+E_{63}+E_{56}-E_{65}),\\
&\tau(A_6)=E_{33}-E_{55},\\
&\tau(A_7)=\tfrac12(E_{12}-E_{21}-E_{16}+E_{61}-E_{27}+E_{72}+E_{67}-E_{76}),\\
&\tau(A_8)=\tfrac12(-E_{12}-E_{21}-E_{16}-E_{61}-E_{27}-E_{72}-E_{67}-E_{76}),\\
&\tau(A_9)=\tfrac12(E_{13}-E_{31}+E_{15}-E_{51}-E_{37}+E_{73}-E_{57}+E_{75}),\\
&\tau(A_{10})=\tfrac12(E_{13}+E_{31}-E_{15}-E_{51}+E_{37}+E_{73}-E_{57}-E_{75}),\\
&\tau(A_{11})=\tfrac12(-E_{12}-E_{21}+E_{16}+E_{61}+E_{27}+E_{72}-E_{67}-E_{76}),\\
&\tau(A_{12})=\tfrac12(E_{12}-E_{21}+E_{16}-E_{61}+E_{27}-E_{72}+E_{67}-E_{76}),\\
&\tau(A_{13})=\tfrac12(-E_{13}-E_{31}-E_{15}-E_{51}+E_{37}+E_{73}+E_{57}+E_{75}),\\
&\tau(A_{14})=\tfrac12(-E_{13}+E_{31}+E_{15}-E_{51}-E_{37}+E_{73}+E_{57}-E_{75}),\\
&\tau(A_{15})=E_{11}-E_{77},\\
&\tau(A_{16})=\tfrac12(E_{24}-E_{42}+E_{28}-E_{82}+E_{46}-E_{64}-E_{68}+E_{86}),\\
&\tau(A_{17})=\tfrac12(E_{24}+E_{42}+E_{28}+E_{82}+E_{46}+E_{64}+E_{68}+E_{86}),\\
&\tau(A_{18})=\tfrac12(E_{34}-E_{43}+E_{38}-E_{83}-E_{45}+E_{54}+E_{58}-E_{85}),\\
&\tau(A_{19})=\tfrac12(-E_{34}-E_{43}-E_{38}-E_{83}+E_{45}+E_{54}+E_{58}+E_{85}),\\
&\tau(A_{20})=\tfrac12(-E_{14}+E_{41}-E_{18}+E_{81}+E_{47}-E_{74}-E_{78}+E_{87}),\\
&\tau(A_{21})=\tfrac12(E_{14}+E_{41}+E_{18}+E_{81}-E_{47}-E_{74}-E_{78}-E_{87}),\\
&\tau(A_{22})=\tfrac12(-E_{24}-E_{42}+E_{28}+E_{82}+E_{46}+E_{64}-E_{68}-E_{86}),\\
&\tau(A_{23})=\tfrac12(-E_{24}+E_{42}+E_{28}-E_{82}+E_{46}-E_{64}+E_{68}-E_{86}),\\
&\tau(A_{24})=\tfrac12(-E_{34}-E_{43}+E_{38}+E_{83}-E_{45}-E_{54}+E_{58}+E_{85}),\\
&\tau(A_{25})=\tfrac12(E_{34}-E_{43}-E_{38}+E_{83}+E_{45}-E_{54}+E_{58}-E_{85}),\\
&\tau(A_{26})=\tfrac12(E_{14}+E_{41}-E_{18}-E_{81}+E_{47}+E_{74}-E_{78}-E_{87}),\\
&\tau(A_{27})=\tfrac12(-E_{14}+E_{41}+E_{18}-E_{81}-E_{47}+E_{74}-E_{78}+E_{87}),\\
&\tau(A_{28})=-E_{44}+E_{88},\\
&\tau(A_{29})=E_{11}+E_{22}+E_{33}+E_{44}+E_{55}+E_{66}+E_{77}+E_{88},
\end{aligned}\label{tauweyl}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,8$, denote the $8\times 8$ matrices with zeroes everywhere except for the value 1 in the entry $(i,j)$, at the intersection of the $i$th row and the $j$th column.
The three real, irreducible, pairwise nonequivalent representations $(\rho_+,\rho_-,\tau)$ of $\mathfrak{so}(4,4)$, given by the formulas \eqref{rhopm} and \eqref{tauweyl} constitute the set of the \emph{triality representations} for $\mathfrak{so}(4,4)$.
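Since $(\rho,S)$ and $(\tau,R)$ represent the same abstract generators $A_I$, their structure constants must agree. A numpy spot-check of this (our own illustration, with the $\epsilon$ sign convention assumed from \eqref{pauu}) on the relation $[A_1,A_2]=-A_3$, read off from \eqref{dir44} and \eqref{tauweyl}:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
eps = np.array([[0.0, -1.0], [1.0, 0.0]])   # assumed sign convention for epsilon
kron = lambda *ms: reduce(np.kron, ms)

def E(i, j):
    """Elementary 8 x 8 matrix E_ij, 1-indexed as in the text."""
    m = np.zeros((8, 8))
    m[i - 1, j - 1] = 1.0
    return m

# spinor side: rho(A_1), rho(A_2), rho(A_3) from (dir44)
r1, r2, r3 = (0.5 * kron(I2, I2, I2, m) for m in (sz, eps, sx))
# vector side: tau(A_1), tau(A_2), tau(A_3) from (tauweyl)
t1 = E(6, 6) - E(2, 2)
t2 = 0.5 * (E(2, 3) - E(3, 2) + E(2, 5) - E(5, 2) + E(3, 6) - E(6, 3) + E(5, 6) - E(6, 5))
t3 = 0.5 * (E(2, 3) + E(3, 2) + E(2, 5) + E(5, 2) + E(3, 6) + E(6, 3) + E(5, 6) + E(6, 5))

# both representations realize the same commutation relation [A_1, A_2] = -A_3
assert np.allclose(r1 @ r2 - r2 @ r1, -r3)
assert np.allclose(t1 @ t2 - t2 @ t1, -t3)
```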
\subsection{Triality representations of $\mathfrak{so}(8,0)$}
To get the real representation $(\rho,S)$ of $\mathfrak{so}(8,0)$ in the space $S=\mathbb{R}^{16}$ of Dirac spinors we need the real Dirac $\gamma$ matrices satisfying the \emph{Dirac identity} \eqref{clifi}, but now with $$g_{ij}=\delta_{ij},$$ where $\delta_{ij}$ is the \emph{Kronecker delta} in 8 dimensions.
Thus we need to modify the Dirac matrices $\gamma_i$ from \eqref{dirga} to obtain the proper signature of the metric. This is done in \emph{two} steps \cite{traut}. \emph{First}, one puts the \emph{imaginary unit} $i$ in front of some of the Dirac matrices $\gamma_i$ generating the Clifford algebra ${\mathcal C}\ell(4,4)$, to get the proper signature of $(g_{ij})$. Although this produces a few \emph{complex} generators, \emph{in step two} one modifies them all in an algorithmic fashion so that they become real and still satisfy the \emph{Dirac identity} with the proper signature of $(g_{ij})$. Explicitly, this is done as follows:
By placing the \emph{imaginary unit} $i$ in front of $\gamma_2$, $\gamma_4$, $\gamma_6$ and $\gamma_8$ in \eqref{dirga} we obtain 8 matrices
$$\tilde{\gamma}_{2j-1}=\gamma_{2j-1}, \quad \tilde{\gamma}_{2j}=i\gamma_{2j}, \quad j=1,2,3,4,$$
with $\gamma_i$, $i=1,\dots,8$, as in \eqref{dirga}. These constitute generators of the \emph{complex} 16-dimensional representation of the Clifford algebra ${\mathcal C}\ell(8,0)$. We will also need the representation of this Clifford algebra which is the \emph{complex conjugate} of $\tilde{\gamma}$. It is generated by
$$\overline{\tilde{\gamma}}_{2j-1}=\gamma_{2j-1}, \quad \overline{\tilde{\gamma}}_{2j}=-\gamma_{2j}, \quad j=1,2,3,4.$$
The Clifford algebra representations generated by the Dirac matrices $\tilde{\gamma}$ and $\overline{\tilde{\gamma}}$ are \emph{real equivalent}, i.e. there exists a real $16\times 16$ matrix $B$ such that
$$B\tilde{\gamma}_i=\overline{\tilde{\gamma}}_iB,\quad \forall i=1,\dots,8.$$
It can be chosen so that
$$B^2=\mathrm{Id},$$
where $\mathrm{Id}=I\otimes I\otimes I\otimes I$.
Explicitly,
$$B=\sigma_z\otimes\epsilon\otimes\sigma_z\otimes \epsilon.$$
Using this matrix we define a \emph{new} set of eight $\gamma$ matrices\footnote{The $\gamma$-matrices defined below are new symbols: they should not be confused with the $\mathfrak{so}(4,4)$ $\gamma$-matrices appearing in the definition of the $\tilde{\gamma}$-matrices at the beginning of this section.} by:
$$\gamma_i\,=\,(i B+\mathrm{Id})\,\,\tilde{\gamma}_i\,\,(i B+\mathrm{Id})^{-1}, \quad\quad\forall i=1,\dots,8.$$
One can check that these 8 matrices are \emph{all real} and that they satisfy the desired Dirac identity:
$$\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij} (I\otimes I\otimes I\otimes I), \quad i,j=1,\dots,8.$$
Explicitly we have:
$$\begin{aligned}
&\gamma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\gamma_2=-\epsilon\otimes\sigma_z\otimes\epsilon\otimes I\\
&\gamma_3=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\gamma_4=\epsilon\otimes\sigma_z\otimes\sigma_x\otimes \epsilon\\
&\gamma_5=\sigma_x\otimes\sigma_x\otimes\sigma_z\otimes I\\
&\gamma_6=-\epsilon\otimes I\otimes \sigma_z\otimes \epsilon\\
&\gamma_7=\sigma_x\otimes\sigma_z\otimes I\otimes I\\
&\gamma_8=\sigma_x\otimes \epsilon\otimes \sigma_z\otimes \epsilon.
\end{aligned}$$
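Since every tensor factor above is one of the real $2\times 2$ matrices $I$, $\sigma_x$, $\sigma_z$, $\epsilon$, the stated properties can be confirmed by a direct machine computation. The following NumPy sketch is our own verification aid, not part of the construction; the names `kron4` and `gammas` are ours, and we assume the convention $\epsilon=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ (the opposite sign convention does not affect any of the checks):

```python
import numpy as np
from functools import reduce

# Real 2x2 building blocks; we assume epsilon = [[0,1],[-1,0]] (the sign
# convention of epsilon does not affect the checks below).
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
ep = np.array([[0., 1.], [-1., 0.]])

def kron4(a, b, c, d):
    """Fourfold Kronecker product a (x) b (x) c (x) d."""
    return reduce(np.kron, (a, b, c, d))

# The eight real Dirac matrices of Cl(8,0) as listed in the text.
gammas = [
    kron4(sx, sx, sx, sx),
    -kron4(ep, sz, ep, I2),
    kron4(sx, sx, sx, sz),
    kron4(ep, sz, sx, ep),
    kron4(sx, sx, sz, I2),
    -kron4(ep, I2, sz, ep),
    kron4(sx, sz, I2, I2),
    kron4(sx, ep, sz, ep),
]

Id16 = np.eye(16)
for i, gi in enumerate(gammas):
    assert np.allclose(gi, gi.T)                       # each gamma_i is symmetric
    for j, gj in enumerate(gammas):
        target = 2. * Id16 if i == j else np.zeros((16, 16))
        assert np.allclose(gi @ gj + gj @ gi, target)  # Dirac identity, delta_ij
```

All 64 anticommutators check out, confirming that the eight matrices are real, symmetric, and satisfy $\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}\,\mathrm{Id}$.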
The 28 generators of $\mathfrak{so}(8,0)$ in the Majorana-Dirac spinor representation $\rho$ in the space of Dirac spinors $S=\mathbb{R}^{16}$ are given by
$$\rho(A_{I(i,j)})=\tfrac12\gamma_i\gamma_j, \quad 1\leq i<j\leq 8,$$
where again we have used the function $I=I(i,j)$ defined in \eqref{Iij}. Note that since now $i<j$ run from 1 to 8, the function $I$ takes values from 1 to 28.
We add to this the scaling generator, $\rho(A_{29})$, extending the Lie algebra $\mathfrak{so}(8,0)$ to $\mathfrak{co}(8,0)$, given by
$$\rho(A_{29})=\tfrac12 I\otimes I\otimes I\otimes I.$$
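One can likewise verify by machine (a sketch with the same assumed NumPy setup and our own helper names as before) that the 28 matrices $\tfrac12\gamma_i\gamma_j$, $i<j$, are antisymmetric, linearly independent, and closed under the commutator, so that they indeed span a 28-dimensional Lie subalgebra of $\mathfrak{so}(16)$, as claimed:

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
ep = np.array([[0., 1.], [-1., 0.]])
k4 = lambda *ms: reduce(np.kron, ms)

# The eight real gamma matrices of Cl(8,0) from the list above.
gammas = [k4(sx, sx, sx, sx), -k4(ep, sz, ep, I2), k4(sx, sx, sx, sz),
          k4(ep, sz, sx, ep), k4(sx, sx, sz, I2), -k4(ep, I2, sz, ep),
          k4(sx, sz, I2, I2), k4(sx, ep, sz, ep)]

# The 28 generators rho(A_I) = (1/2) gamma_i gamma_j, i < j.
gens = [0.5 * gammas[i] @ gammas[j] for i, j in combinations(range(8), 2)]

G = np.array([g.ravel() for g in gens]).T            # 256 x 28 matrix of generators
assert np.linalg.matrix_rank(G) == 28                # linear independence
for g in gens:
    assert np.allclose(g, -g.T)                      # each generator lies in so(16)

# Closure under commutators: [g_a, g_b] stays in span{g_1, ..., g_28}.
Gp = np.linalg.pinv(G)
for a, b in combinations(gens, 2):
    c = (a @ b - b @ a).ravel()
    assert np.allclose(G @ (Gp @ c), c)
```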
In terms of the 2-dimensional Pauli matrices these generators read:
\begin{equation}\label{dir80}
\begin{array}{ll}
\rho(A_1)= -\tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_z\otimes \sigma_x,&\rho(A_{15})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes I\otimes \epsilon,\\
\rho(A_2)= \tfrac12 I \otimes I\otimes I\otimes \epsilon ,&\rho(A_{16})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho(A_3)= \tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_z\otimes \sigma_z ,&\rho(A_{17})=\tfrac12 \sigma_z \otimes I \otimes \epsilon\otimes I ,\\
\rho(A_4)=\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes \sigma_z,&\rho(A_{18})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_5)= -\tfrac12 I \otimes I\otimes \sigma_z\otimes \epsilon,&\rho(A_{19})=-\tfrac12 \sigma_z \otimes I\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_6)= -\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes \sigma_x,&\rho(A_{20})=\tfrac12 I \otimes \epsilon\otimes \sigma_z\otimes I ,\\
\rho(A_7)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_x,&\rho(A_{21})=\tfrac12 \sigma_z \otimes \sigma_z\otimes \sigma_z\otimes \epsilon,\\
\rho(A_8)= -\tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_x\otimes I,&\rho(A_{22})=\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes \sigma_z ,\\
\rho(A_9)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_z,&\rho(A_{23})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_{10})=\tfrac12 \sigma_z \otimes \epsilon\otimes \epsilon\otimes \epsilon,&\rho(A_{24})=-\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes \sigma_x ,\\
\rho(A_{11})= -\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes \sigma_z ,&\rho(A_{25})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes I ,\\
\rho(A_{12})= -\tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho(A_{26})=\tfrac12 I \otimes \sigma_z\otimes I\otimes \epsilon ,\\
\rho(A_{13})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes \sigma_x,&\rho(A_{27})=-\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes I,\\
\rho(A_{14})= -\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes I,&\rho(A_{28})=-\tfrac12 I \otimes \sigma_x\otimes \sigma_z\otimes \epsilon.
\end{array}
\end{equation}
Similarly as in the case of $\mathfrak{so}(4,4)$, this 16-dimensional representation of $\mathbb{R}\oplus\mathfrak{so}(8,0)$ is \emph{reducible}, again due to the appearance of $I$ and $\sigma_z$ only as the first factors in the above formulas. It \emph{splits} into two real $8$-dimensional Weyl representations
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8.$$
On the generators of $\mathfrak{so}(8,0)$, these two 8-dimensional representations $\rho_\pm$ are given by:
\begin{equation}
\begin{array}{ll}
\rho_\pm(A_1)= \mp\tfrac12 \epsilon\otimes \sigma_z\otimes \sigma_x,&\rho_\pm(A_{15})=\mp\tfrac12 \sigma_x\otimes I\otimes \epsilon,\\
\rho_\pm(A_2)= \tfrac12 I\otimes I\otimes \epsilon ,&\rho_\pm(A_{16})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho_\pm(A_3)= \pm\tfrac12 \epsilon\otimes \sigma_z\otimes \sigma_z ,&\rho_\pm(A_{17})=\pm\tfrac12 I \otimes \epsilon\otimes I ,\\
\rho_\pm(A_4)=\pm\tfrac12 \epsilon\otimes I\otimes \sigma_z,&\rho_\pm(A_{18})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_5)= -\tfrac12 I\otimes \sigma_z\otimes \epsilon,&\rho_\pm(A_{19})=\mp\tfrac12 I\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_6)= \mp\tfrac12 \epsilon\otimes I\otimes \sigma_x,&\rho_\pm(A_{20})=\tfrac12 \epsilon\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_7)= \tfrac12 I\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{21})=\pm\tfrac12 \sigma_z\otimes \sigma_z\otimes \epsilon,\\
\rho_\pm(A_8)= \mp\tfrac12 \epsilon\otimes \sigma_x\otimes I,&\rho_\pm(A_{22})=\tfrac12 \sigma_z\otimes \epsilon\otimes \sigma_z ,\\
\rho_\pm(A_9)= \tfrac12 I\otimes \epsilon\otimes \sigma_z,&\rho_\pm(A_{23})=\mp\tfrac12 \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_{10})=\pm\tfrac12 \epsilon\otimes \epsilon\otimes \epsilon,&\rho_\pm(A_{24})=-\tfrac12 \sigma_z\otimes \epsilon\otimes \sigma_x ,\\
\rho_\pm(A_{11})= \mp\tfrac12 \sigma_x\otimes \epsilon\otimes \sigma_z ,&\rho_\pm(A_{25})=\mp\tfrac12 \sigma_x\otimes \epsilon\otimes I ,\\
\rho_\pm(A_{12})= -\tfrac12 \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho_\pm(A_{26})=\tfrac12 \sigma_z\otimes I\otimes \epsilon ,\\
\rho_\pm(A_{13})=\pm\tfrac12 \sigma_x\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{27})=\mp\tfrac12 \epsilon\otimes I\otimes I,\\
\rho_\pm(A_{14})= -\tfrac12 \sigma_z\otimes \epsilon\otimes I,&\rho_\pm(A_{28})=-\tfrac12 \sigma_x\otimes \sigma_z\otimes \epsilon.
\end{array}\label{weylso8}
\end{equation}
We extend them to $\mathbb{R}\oplus\mathfrak{so}(8,0)$ by adding
$$\rho_\pm(A_{29})=\tfrac12 I\otimes I\otimes I.$$
It follows that the Weyl representations $(\rho_\pm,S_\pm)$ of $\mathfrak{so}(8,0)$ are \emph{irreducible} and \emph{nonequivalent}.
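The splitting and the two properties just stated can also be checked numerically. Since the first tensor factor of every product $\gamma_i\gamma_j$ is diagonal ($\pm I$ or $\pm\sigma_z$), each $16\times16$ generator is block diagonal with two $8\times8$ blocks, which give $\rho_\pm$; computing spaces of intertwiners then confirms irreducibility (of real type) and nonequivalence. A sketch, with the same assumed NumPy setup and helper names as before:

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
ep = np.array([[0., 1.], [-1., 0.]])
k4 = lambda *ms: reduce(np.kron, ms)

gammas = [k4(sx, sx, sx, sx), -k4(ep, sz, ep, I2), k4(sx, sx, sx, sz),
          k4(ep, sz, sx, ep), k4(sx, sx, sz, I2), -k4(ep, I2, sz, ep),
          k4(sx, sz, I2, I2), k4(sx, ep, sz, ep)]
gens = [0.5 * gammas[i] @ gammas[j] for i, j in combinations(range(8), 2)]

# Block diagonality: every generator preserves S_+ and S_-.
for g in gens:
    assert np.allclose(g[:8, 8:], 0) and np.allclose(g[8:, :8], 0)
rho_p = [g[:8, :8] for g in gens]                     # Weyl representation rho_+
rho_m = [g[8:, 8:] for g in gens]                     # Weyl representation rho_-

def intertwiner_dim(As, Bs):
    """dim of {X : X A_k = B_k X for all k}, via vectorization and SVD."""
    M = np.vstack([np.kron(np.eye(8), A.T) - np.kron(B, np.eye(8))
                   for A, B in zip(As, Bs)])
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < 1e-8))

assert intertwiner_dim(rho_p, rho_p) == 1             # irreducible, real type
assert intertwiner_dim(rho_m, rho_m) == 1             # irreducible, real type
assert intertwiner_dim(rho_p, rho_m) == 0             # nonequivalent
```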
We use them to find the defining representation $(\tau,R)$ of $\mathfrak{so}(8,0)$ in the vector space $R=\mathbb{R}^8$ of vectors. We again consider the tensor product representation $\rho_+\otimes\rho_-$. It decomposes as:
$$\rho_+\otimes\rho_-=\alpha\oplus\tau\quad\mathrm{in}\quad S_+\otimes S_-=T_{56}\oplus R,\quad\mathrm{with}\quad \dim_\mathbb{R}(R)=8,\,\,\dim_\mathbb{R}(T_{56})=56,$$
having irreducible components $(\alpha,T_{56})$ and $(\tau,R)$ of respective dimensions 56 and 8. Explicitly, on generators $A_I$ of $\mathbb{R}\oplus\mathfrak{so}(8,0)$, the 8-dimensional representation $\tau$ reads:
\begin{equation}
\begin{array}{llll}
\tau(A_1)=E_{38}-E_{83},&\quad \tau(A_8)=E_{35}-E_{53},&\quad \tau(A_{15})=E_{52}-E_{25},&\quad \tau(A_{22})=E_{68}-E_{86},\\
\tau(A_2)=E_{78}-E_{87}, &\quad\tau(A_9)=E_{75}-E_{57}, &\quad \tau(A_{16})=E_{18}-E_{81},&\quad \tau(A_{23})=E_{36}-E_{63}, \\
\tau(A_3)=E_{37}-E_{73}, &\quad\tau(A_{10})=E_{54}-E_{45},&\quad \tau(A_{17})=E_{31}-E_{13},&\quad \tau(A_{24})=E_{76}-E_{67},\\
\tau(A_4)=E_{84}-E_{48},&\quad \tau(A_{11})=E_{28}-E_{82},&\quad \tau(A_{18})=E_{71}-E_{17},&\quad \tau(A_{25})=E_{64}-E_{46},\\
\tau(A_5)=E_{43}-E_{34},&\quad \tau(A_{12})=E_{32}-E_{23},&\quad \tau(A_{19})=E_{14}-E_{41},&\quad \tau(A_{26})=E_{56}-E_{65},\\
\tau(A_6)=E_{47}-E_{74},&\quad \tau(A_{13})=E_{72}-E_{27},&\quad \tau(A_{20})=E_{51}-E_{15},&\quad \tau(A_{27})=E_{26}-E_{62},\\
\tau(A_7)=E_{58}-E_{85}, &\quad \tau(A_{14})= E_{24}-E_{42},&\quad
\tau(A_{21})=E_{21}-E_{12},&\quad \tau(A_{28})=E_{16}-E_{61},
\end{array}
\label{so8}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,8$, denote the $8\times 8$ matrices with zeroes everywhere except the value 1 in the entry $(i,j)$, sitting at the intersection of the $i$th row and the $j$th column.
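As a quick sanity check (our own, with the helper `E` mirroring the $E_{ij}$ notation), one can confirm that the 28 matrices $\tau(A_1),\dots,\tau(A_{28})$ are antisymmetric and linearly independent, so they span the full $\binom{8}{2}=28$-dimensional algebra of antisymmetric $8\times 8$ matrices, i.e. the defining representation of $\mathfrak{so}(8)$:

```python
import numpy as np

def E(i, j):
    """8x8 elementary matrix with a single 1 in the entry (i, j); 1-based indices."""
    M = np.zeros((8, 8))
    M[i - 1, j - 1] = 1.0
    return M

# tau(A_1), ..., tau(A_28) as listed above, each of the form E_ij - E_ji.
pairs = [(3, 8), (7, 8), (3, 7), (8, 4), (4, 3), (4, 7), (5, 8),
         (3, 5), (7, 5), (5, 4), (2, 8), (3, 2), (7, 2), (2, 4),
         (5, 2), (1, 8), (3, 1), (7, 1), (1, 4), (5, 1), (2, 1),
         (6, 8), (3, 6), (7, 6), (6, 4), (5, 6), (2, 6), (1, 6)]
taus = [E(i, j) - E(j, i) for i, j in pairs]

for T in taus:
    assert np.allclose(T, -T.T)                       # all antisymmetric
assert np.linalg.matrix_rank(np.array([T.ravel() for T in taus])) == 28
```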
The three real, irreducible, pairwise nonequivalent representations $(\rho_+,\rho_-,\tau)$ of $\mathfrak{so}(8,0)$ given by the formulas \eqref{weylso8} and \eqref{so8} constitute the set of the \emph{triality representations} for $\mathfrak{so}(8,0)$.
\section{Application: 2-step graded realizations of real forms of the exceptional Lie algebra $\mathfrak{e}_6$ }
The simple exceptional complex Lie algebra $\mathfrak{e}_6$ has the following \emph{noncompact} real forms
\begin{enumerate}
\item $\mathfrak{e}_I$, with Satake diagram
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{oooooo}
\end{dynkinDiagram},
\item $\mathfrak{e}_{II}$, with Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}
\begin{dynkinDiagram}[edge length=.5cm]{E}{oooooo}\invol{1}{6}\invol{3}{5}
\end{dynkinDiagram},
\item $\mathfrak{e}_{III}$, with Satake diagram
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{oo***o}\invol{1}{6}
\end{dynkinDiagram}, and
\item $\mathfrak{e}_{IV}$, with Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{o****o}
\end{dynkinDiagram}.
\end{enumerate}
\'Elie Cartan in his thesis \cite{CartanPhd, CartanPhdF} mentioned a realization of the real form $\mathfrak{e}_I$ in $N=\mathbb{R}^{16}$. In the modern language, Cartan's realization is such that $\mathfrak{e}_I$ is the \emph{algebra of automorphisms} of the flat model of a \emph{parabolic geometry} of type $(E_I,P)$, where the choice of parabolic subgroup in the real form $E_I$ of the exceptional Lie group ${\bf E}_6$ is indicated by the following decoration of the Satake diagram for $\mathfrak{e}_I$: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{ooooot}
\end{dynkinDiagram}. The structure on the 16-dimensional manifold $N=E_I/P$ whose symmetry is $E_I$ is a Majorana-Weyl $\mathbb{R}{\bf Spin}(5,5)$ structure, i.e. the reduction of the structure group of the tangent bundle $\mathrm{T}N$ to $\mathbb{R}{\bf Spin}(5,5)\subset\mathbf{GL}(16,\mathbb{R})$ acting in the irreducible 16-dimensional representation of Majorana-Weyl spinors \cite{traut}. \emph{This geometry, being 1-step graded, is quite different from the 2-step graded geometries considered in our paper}. We also mention that if we wanted a realization of, say, $\mathfrak{e}_{II}$ or $\mathfrak{e}_{III}$ in the spirit of Cartan's realization of $\mathfrak{e}_I$, i.e. if we crossed one lateral node in the Satake diagram of $\mathfrak{e}_{II}$ or $\mathfrak{e}_{III}$, we would be forced to also cross the conjugate lateral node, resulting in the Satake diagrams \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}
\begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}\invol{1}{6}\invol{3}{5}
\end{dynkinDiagram} or \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{to***t}\invol{1}{6}
\end{dynkinDiagram},
which would give realizations of the respective $\mathfrak{e}_{II}$ and $\mathfrak{e}_{III}$ in dimension \emph{twenty four}. This we did in \cite{DJPZ}, providing realizations of $\mathfrak{e}_{II}$ and $\mathfrak{e}_{III}$ as Lie algebras of CR-automorphisms of certain 24-dimensional CR manifolds of CR dimension 16 and CR (real) codimension 8. The important point about these realizations of these two real forms of $\mathfrak{e}_6$ is that the underlying geometries are 2-step graded, as in the case of Cartan's realization of $\mathfrak{f}_I$, and they can also be thought of as realizations in terms of the symmetry algebras of structures $(M,{\mathcal D})$, where $M$ is a certain 24-dimensional real manifold and $\mathcal D$ is a real rank 16 distribution on $M$ with $[{\mathcal D},{\mathcal D}]=\mathrm{T}M$. Thus these two geometries described by us in \cite{DJPZ} are 2-step graded geometries of distributions, very much like Cartan's realization of $\mathfrak{f}_I$.
In this section we give similar realizations for the remaining, yet untreated, cases of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$.
\subsection{Realizations of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$: generalities}
To get realizations of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$ in dimension 24, we decorate the Satake diagrams of these two Lie algebras as follows: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram} and \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}.
These choices of parabolic subalgebras in the respective $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$ produce the following gradations in these algebras:
$$\mathfrak{e}_A=\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}\oplus\mathfrak{n}_{0A}\oplus\mathfrak{n}_{1A}\oplus\mathfrak{n}_{2A}\quad \mathrm{for}\quad A=I,IV,
$$
with
$$\mathfrak{n}_{\minu A}=\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}\quad \mathrm{for}\quad A=I,IV,$$
being 2-step nilpotent and having grading components $\mathfrak{n}_{\minu2A}$ and $\mathfrak{n}_{\minu1A}$ of respective dimensions $r_A=8$ and $s_A=16$,
$$r_A=\dim(\mathfrak{n}_{\minu2A})=8,\quad\quad s_A=\dim(\mathfrak{n}_{\minu1A})=16\quad \mathrm{for}\quad A=I,IV.$$
The Lie algebra $\mathfrak{n}_{0A}$ in the Tanaka prolongation of $\mathfrak{n}_{\minu A}$ up to $0^{th}$ order is
\begin{enumerate}
\item $\mathfrak{n}_{0I}=2\mathbb{R}\oplus\mathfrak{so}(4,4)=\mathbb{R}\oplus\mathfrak{co}(4,4)$ in the case of $\mathfrak{e}_I$, and
\item $\mathfrak{n}_{0IV}=2\mathbb{R}\oplus\mathfrak{so}(8,0)=\mathbb{R}\oplus\mathfrak{co}(8,0)$ in the case of $\mathfrak{e}_{IV}$.
\end{enumerate}
The last two statements, (1) and (2), become clear when one looks at the Satake diagrams we have just decorated. If we strip the crossed nodes off these diagrams we get \begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}\end{dynkinDiagram} and \begin{dynkinDiagram}[edge length=.4cm]{D}{****}
\end{dynkinDiagram}, which are clearly the diagrams of the simple parts of the $\mathfrak{n}_{0A}$'s above.
Because of the grading property $[\mathfrak{n}_{iA},\mathfrak{n}_{jA}]\subset\mathfrak{n}_{(i+j)A}$ in the Lie algebras $\mathfrak{e}_A$, restricting to the subalgebras $\mathfrak{n}_{\minu A}$ we see that we have representations $(\rho_A,\mathfrak{n}_{\minu1A})$ and $(\tau_A,\mathfrak{n}_{\minu 2A})$ given by the adjoint action of $\mathfrak{co}(4,4)$ or $\mathfrak{co}(8,0)$, respectively, which naturally sit in the $\mathfrak{n}_{0A}$'s.
It is no surprise that the representations $(\rho_A,\mathfrak{n}_{\minu1A})$ are the Dirac spinor representations \eqref{dir44} and \eqref{dir80} of the respective $\mathfrak{co}(4,4)$ and $\mathfrak{co}(8,0)$ parts of the $\mathfrak{n}_{0A}$'s in the 16-dimensional real vector spaces $\mathfrak{n}_{\minu1A}$. As such, these representations are \emph{reducible} and they split each $\mathfrak{n}_{\minu1A}$, $A=I,IV$, into two irreducible representations $(\rho_{A\pm},\mathfrak{n}_{\minu1A\pm})$ in real 8-dimensional spaces $\mathfrak{n}_{\minu1A\pm}$ of Weyl spinors. This shows that the \emph{2-step nilpotent Lie algebra} $\mathfrak{n}_{\minu A}$ \emph{is}, for each $A=I,IV$, \emph{a natural representation space for the action of the three triality representations} $(\rho_+,\rho_-,\tau)$. We have
$$\begin{aligned}\mathfrak{n}_{\minu A}=&\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}=\\
&\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A+}\oplus\mathfrak{n}_{\minu1A-},\end{aligned}$$
with the 8-dimensional real irreducible representations $(\tau_A,\rho_{A+},\rho_{A-})$ of $\mathfrak{co}(4,4)$ or $\mathfrak{co}(8,0)$ acting in the respective components $\mathfrak{n}_{\minu2A}$, $\mathfrak{n}_{\minu1A+}$ and $\mathfrak{n}_{\minu1A-}$.
We summarize the considerations of this section in the following theorem.
\begin{theorem} (Natural realization of the triality representations)
\begin{enumerate} \item The $\mathfrak{so}(4,4)$ triality:\\
The real form $\mathfrak{e}_I$ of the simple exceptional Lie algebra $\mathfrak{e}_6$, when graded according to the following decoration of its Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram}, has the $\mathfrak{n}_{\minu}$ part as a real 24-dimensional vector space, naturally split into the three real 8-dimensional components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$,
$$\mathfrak{n}_{\minu}=\mathfrak{n}_{\minu2}\oplus\mathfrak{n}_{\minu1+}\oplus\mathfrak{n}_{\minu1-}.$$
This decomposition is $\mathfrak{so}(4,4)$ invariant and consists of components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$, on which the triality representation
$$\tau\oplus\rho_+\oplus\rho_-$$
of $\mathfrak{so}(4,4)$ acts irreducibly.
\item The $\mathfrak{so}(8,0)$ triality:\\
Likewise, the real form $\mathfrak{e}_{IV}$ of the simple exceptional Lie algebra $\mathfrak{e}_6$, when graded according to the following decoration of its Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}, has the $\mathfrak{n}_{\minu}$ part as a real 24-dimensional vector space, naturally split into the three real 8-dimensional components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$,
$$\mathfrak{n}_{\minu}=\mathfrak{n}_{\minu2}\oplus\mathfrak{n}_{\minu1+}\oplus\mathfrak{n}_{\minu1-}.$$
This decomposition is $\mathfrak{so}(8,0)$ invariant and consists of components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$, on which the triality representation
$$\tau\oplus\rho_+\oplus\rho_-$$
of $\mathfrak{so}(8,0)$ acts irreducibly.
\end{enumerate}
\end{theorem}
\subsection{An explicit realization of $\mathfrak{e}_I$ in dimension 24}
Taking as $(\rho,S)$ the Dirac spinor representation \eqref{dir44} of $\mathfrak{co}(4,4)$ in dimension 16, and as $(\tau,R)$ the vectorial representation \eqref{tauweyl} of $\mathfrak{co}(4,4)$ in dimension 8, we are again in the situation where the component $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ of the triple $(\rho,\tau,\omega)$ described by the magical equation \eqref{maga} is missing. Solving this equation for $\omega$ we obtain $\omega^i{}_{\mu\nu}$, $i=1,\dots,8$, $\mu,\nu=1,\dots,16$, which leads to the eight 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on the 16-dimensional manifold $N=\mathbb{R}^{16}$, which read
$$\begin{aligned}
\omega^1=\,\,&-{\rm d} x^1\wedge{\rm d} x^{10}+{\rm d} x^2\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{15}\\
\omega^2=\,\,&-{\rm d} x^2\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{14}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{9}+{\rm d} x^5\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{13}\\
\omega^4=\,\,&-{\rm d} x^5\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{12}-{\rm d} x^8\wedge{\rm d} x^{11}\\
\omega^5=\,\,&-{\rm d} x^2\wedge{\rm d} x^{11}+{\rm d} x^3\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{15}-{\rm d} x^7\wedge{\rm d} x^{14}\\
\omega^6=\,\,&-{\rm d} x^1\wedge{\rm d} x^{11}+{\rm d} x^3\wedge{\rm d} x^{9}+{\rm d} x^5\wedge{\rm d} x^{15}-{\rm d} x^7\wedge{\rm d} x^{13}\\
\omega^7=\,\,&-{\rm d} x^3\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{11}+{\rm d} x^5\wedge{\rm d} x^{14}-{\rm d} x^6\wedge{\rm d} x^{13}\\
\omega^8=\,\,&-{\rm d} x^1\wedge{\rm d} x^{14}+{\rm d} x^2\wedge{\rm d} x^{13}+{\rm d} x^3\wedge{\rm d} x^{16}-{\rm d} x^4\wedge{\rm d} x^{15}.
\end{aligned}$$
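Before contactifying, one property of these eight 2-forms is worth verifying directly: they are linearly independent, which is precisely the condition ensuring that the distribution $\mathcal D$ in the theorem below satisfies $[{\mathcal D},{\mathcal D}]=\mathrm{T}M$, i.e. that $\mathfrak{n}_{\minu}$ has growth vector $(16,24)$. A NumPy sketch (the encoding of the 2-forms as index triples is our own):

```python
import numpy as np

# Each omega^i = sum of s * dx^a ^ dx^b, encoded as triples (a, b, s), 1-based.
terms = [
    [(1, 10, -1), (2,  9, 1), (7, 16, 1), (8, 15, -1)],
    [(2, 12, -1), (4, 10, 1), (6, 16, 1), (8, 14, -1)],
    [(1, 12, -1), (4,  9, 1), (5, 16, 1), (8, 13, -1)],
    [(5, 10, -1), (6,  9, 1), (7, 12, 1), (8, 11, -1)],
    [(2, 11, -1), (3, 10, 1), (6, 15, 1), (7, 14, -1)],
    [(1, 11, -1), (3,  9, 1), (5, 15, 1), (7, 13, -1)],
    [(3, 12, -1), (4, 11, 1), (5, 14, 1), (6, 13, -1)],
    [(1, 14, -1), (2, 13, 1), (3, 16, 1), (4, 15, -1)],
]

omegas = []
for t in terms:
    W = np.zeros((16, 16))
    for a, b, s in t:
        W[a - 1, b - 1], W[b - 1, a - 1] = s, -s   # antisymmetric matrix of the 2-form
    omegas.append(W)

# Linear independence of the 8 forms <=> [D, D] = TM after contactification.
rank = np.linalg.matrix_rank(np.array([W.ravel() for W in omegas]))
assert rank == 8
```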
The manifold $N=\mathbb{R}^{16}$ equipped with these 2-forms, after contactification, yields the following theorem.
\begin{theorem}\label{diste6}
Let $M=\mathbb{R}^{24}$ with coordinates $(u^1,\dots,u^8,x^1,\dots ,x^{16})$, and consider eight 1-forms $\lambda^1,\dots,\lambda^8$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1-x^1{\rm d} x^{10}+ x^2{\rm d} x^{9}+ x^7{\rm d} x^{16}- x^8{\rm d} x^{15}\\
\lambda^2=\,\,&{\rm d} u^2- x^2{\rm d} x^{12}+ x^4{\rm d} x^{10}+ x^6{\rm d} x^{16}- x^8{\rm d} x^{14}\\
\lambda^3=\,\,&{\rm d} u^3- x^1{\rm d} x^{12}+ x^4{\rm d} x^{9}+x^5{\rm d} x^{16}- x^8{\rm d} x^{13}\\
\lambda^4=\,\,&{\rm d} u^4- x^5{\rm d} x^{10}+ x^6{\rm d} x^{9}+ x^7{\rm d} x^{12}- x^8{\rm d} x^{11}\\
\lambda^5=\,\,&{\rm d} u^5-x^2{\rm d} x^{11}+ x^3{\rm d} x^{10}+ x^6{\rm d} x^{15}- x^7{\rm d} x^{14}\\
\lambda^6=\,\,&{\rm d} u^6- x^1{\rm d} x^{11}+ x^3{\rm d} x^{9}+ x^5{\rm d} x^{15}- x^7{\rm d} x^{13}\\
\lambda^7=\,\,&{\rm d} u^7- x^3{\rm d} x^{12}+ x^4{\rm d} x^{11}+ x^5{\rm d} x^{14}- x^6{\rm d} x^{13}\\
\lambda^8=\,\,&{\rm d} u^8- x^1{\rm d} x^{14}+ x^2{\rm d} x^{13}+ x^3{\rm d} x^{16}-x^4{\rm d} x^{15}.
\end{aligned}$$
The rank 16 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{24}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^8=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{16})$ is the Dirac spinors representation \eqref{dir44} of $\mathfrak{n}_{00}=\mathfrak{co}(4,4)$, and $(\tau,R=\mathbb{R}^8)$ is the vectorial representation \eqref{tauweyl} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{I}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{I},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S=S_+\oplus S_-$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{co}(4,4)\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
and with the spaces $S_\pm$ being the carrier spaces for the Weyl spinors representations $\rho_\pm$ of $\mathfrak{co}(4,4)$.
The gradation in $\mathfrak{e}_I$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{I}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{I},P_{I})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram}.
\end{theorem}
\subsection{An explicit realization of $\mathfrak{e}_{IV}$ in dimension 24}
Similarly as in the previous section, we take as $(\rho,S)$ the Dirac spinor representation \eqref{dir80} of $\mathfrak{co}(8,0)$ in dimension 16, and as $(\tau,R)$ the vectorial representation \eqref{so8} of $\mathfrak{co}(8,0)$ in dimension 8, and we search for $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ solving the magical equation \eqref{maga}. We obtain $\omega^i{}_{\mu\nu}$, $i=1,\dots,8$, $\mu,\nu=1,\dots,16$, which provides us with the eight 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on the 16-dimensional manifold $N=\mathbb{R}^{16}$, which read
$$\begin{aligned}
\omega^1=\,\,&{\rm d} x^1\wedge{\rm d} x^{9}+{\rm d} x^2\wedge{\rm d} x^{10}+{\rm d} x^3\wedge{\rm d} x^{11}+{\rm d} x^4\wedge{\rm d} x^{12}-\\&{\rm d} x^5\wedge{\rm d} x^{13}-{\rm d} x^6\wedge{\rm d} x^{14}-{\rm d} x^7\wedge{\rm d} x^{15}-{\rm d} x^8\wedge{\rm d} x^{16}\\
\omega^2=\,\,&-{\rm d} x^1\wedge{\rm d} x^{10}+{\rm d} x^2\wedge{\rm d} x^{9}+{\rm d} x^3\wedge{\rm d} x^{12}-{\rm d} x^4\wedge{\rm d} x^{11}-\\&{\rm d} x^5\wedge{\rm d} x^{14}+{\rm d} x^6\wedge{\rm d} x^{13}+{\rm d} x^7\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{15}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{11}-{\rm d} x^2\wedge{\rm d} x^{12}+{\rm d} x^3\wedge{\rm d} x^{9}+{\rm d} x^4\wedge{\rm d} x^{10}+\\&{\rm d} x^5\wedge{\rm d} x^{15}+{\rm d} x^6\wedge{\rm d} x^{16}-{\rm d} x^7\wedge{\rm d} x^{13}-{\rm d} x^8\wedge{\rm d} x^{14}\\
\omega^4=\,\,&-{\rm d} x^1\wedge{\rm d} x^{12}+{\rm d} x^2\wedge{\rm d} x^{11}-{\rm d} x^3\wedge{\rm d} x^{10}+{\rm d} x^4\wedge{\rm d} x^{9}+\\&{\rm d} x^5\wedge{\rm d} x^{16}-{\rm d} x^6\wedge{\rm d} x^{15}+{\rm d} x^7\wedge{\rm d} x^{14}-{\rm d} x^8\wedge{\rm d} x^{13}\\
\omega^5=\,\,&{\rm d} x^1\wedge{\rm d} x^{13}+{\rm d} x^2\wedge{\rm d} x^{14}-{\rm d} x^3\wedge{\rm d} x^{15}-{\rm d} x^4\wedge{\rm d} x^{16}+\\&{\rm d} x^5\wedge{\rm d} x^{9}+{\rm d} x^6\wedge{\rm d} x^{10}-{\rm d} x^7\wedge{\rm d} x^{11}-{\rm d} x^8\wedge{\rm d} x^{12}\\
\omega^6=\,\,&{\rm d} x^1\wedge{\rm d} x^{14}-{\rm d} x^2\wedge{\rm d} x^{13}-{\rm d} x^3\wedge{\rm d} x^{16}+{\rm d} x^4\wedge{\rm d} x^{15}-\\&{\rm d} x^5\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{12}-{\rm d} x^8\wedge{\rm d} x^{11}\\
\omega^7=\,\,&{\rm d} x^1\wedge{\rm d} x^{15}-{\rm d} x^2\wedge{\rm d} x^{16}+{\rm d} x^3\wedge{\rm d} x^{13}-{\rm d} x^4\wedge{\rm d} x^{14}+\\&{\rm d} x^5\wedge{\rm d} x^{11}-{\rm d} x^6\wedge{\rm d} x^{12}+{\rm d} x^7\wedge{\rm d} x^{9}-{\rm d} x^8\wedge{\rm d} x^{10}\\
\omega^8=\,\,&-{\rm d} x^1\wedge{\rm d} x^{16}-{\rm d} x^2\wedge{\rm d} x^{15}-{\rm d} x^3\wedge{\rm d} x^{14}-{\rm d} x^4\wedge{\rm d} x^{13}-\\&{\rm d} x^5\wedge{\rm d} x^{12}-{\rm d} x^6\wedge{\rm d} x^{11}-{\rm d} x^7\wedge{\rm d} x^{10}-{\rm d} x^8\wedge{\rm d} x^{9}.
\end{aligned}$$
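As in the $\mathfrak{e}_I$ case, one can check by machine (again with our own encoding of the forms as index triples) that these eight 2-forms are linearly independent, so the contactified distribution below is bracket generating; here, moreover, each individual $\omega^i$ couples $x^1,\dots,x^8$ with $x^9,\dots,x^{16}$ by a signed permutation and is therefore nondegenerate:

```python
import numpy as np

# Triples (a, b, s) meaning s * dx^a ^ dx^b, 1-based, transcribed from the list above.
terms = [
    [(1,  9,  1), (2, 10,  1), (3, 11,  1), (4, 12,  1),
     (5, 13, -1), (6, 14, -1), (7, 15, -1), (8, 16, -1)],
    [(1, 10, -1), (2,  9,  1), (3, 12,  1), (4, 11, -1),
     (5, 14, -1), (6, 13,  1), (7, 16,  1), (8, 15, -1)],
    [(1, 11, -1), (2, 12, -1), (3,  9,  1), (4, 10,  1),
     (5, 15,  1), (6, 16,  1), (7, 13, -1), (8, 14, -1)],
    [(1, 12, -1), (2, 11,  1), (3, 10, -1), (4,  9,  1),
     (5, 16,  1), (6, 15, -1), (7, 14,  1), (8, 13, -1)],
    [(1, 13,  1), (2, 14,  1), (3, 15, -1), (4, 16, -1),
     (5,  9,  1), (6, 10,  1), (7, 11, -1), (8, 12, -1)],
    [(1, 14,  1), (2, 13, -1), (3, 16, -1), (4, 15,  1),
     (5, 10, -1), (6,  9,  1), (7, 12,  1), (8, 11, -1)],
    [(1, 15,  1), (2, 16, -1), (3, 13,  1), (4, 14, -1),
     (5, 11,  1), (6, 12, -1), (7,  9,  1), (8, 10, -1)],
    [(1, 16, -1), (2, 15, -1), (3, 14, -1), (4, 13, -1),
     (5, 12, -1), (6, 11, -1), (7, 10, -1), (8,  9, -1)],
]

omegas = []
for t in terms:
    W = np.zeros((16, 16))
    for a, b, s in t:
        W[a - 1, b - 1], W[b - 1, a - 1] = s, -s
    omegas.append(W)

rank = np.linalg.matrix_rank(np.array([W.ravel() for W in omegas]))
assert rank == 8                                   # independence => [D, D] = TM
for W in omegas:
    assert abs(np.linalg.det(W)) > 0.5             # each omega^i is nondegenerate
```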
Contactifying, we obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^{24}$ with coordinates $(u^1,\dots,u^8,x^1,\dots ,x^{16})$, and consider eight 1-forms $\lambda^1,\dots,\lambda^8$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1+ x^1{\rm d} x^{9}+ x^2{\rm d} x^{10}+ x^3 {\rm d} x^{11}+ x^4 {\rm d} x^{12}- x^5 {\rm d} x^{13}- x^6 {\rm d} x^{14}- x^7 {\rm d} x^{15}- x^8 {\rm d} x^{16}\\
\lambda^2=\,\,&{\rm d} u^2- x^1 {\rm d} x^{10}+ x^2 {\rm d} x^{9}+ x^3 {\rm d} x^{12}- x^4 {\rm d} x^{11}- x^5 {\rm d} x^{14}+ x^6 {\rm d} x^{13}+ x^7 {\rm d} x^{16}- x^8 {\rm d} x^{15}\\
\lambda^3=\,\,&{\rm d} u^3- x^1 {\rm d} x^{11}- x^2 {\rm d} x^{12}+ x^3 {\rm d} x^{9}+ x^4 {\rm d} x^{10}+ x^5 {\rm d} x^{15}+ x^6 {\rm d} x^{16}- x^7 {\rm d} x^{13}- x^8 {\rm d} x^{14}\\
\lambda^4=\,\,&{\rm d} u^4- x^1 {\rm d} x^{12}+ x^2 {\rm d} x^{11}- x^3 {\rm d} x^{10}+ x^4 {\rm d} x^{9}+ x^5 {\rm d} x^{16}- x^6 {\rm d} x^{15}+ x^7 {\rm d} x^{14}- x^8 {\rm d} x^{13}\\
\lambda^5=\,\,&{\rm d} u^5+ x^1 {\rm d} x^{13}+ x^2 {\rm d} x^{14}- x^3 {\rm d} x^{15}- x^4 {\rm d} x^{16}+ x^5 {\rm d} x^{9}+ x^6 {\rm d} x^{10}- x^7 {\rm d} x^{11}- x^8 {\rm d} x^{12}\\
\lambda^6=\,\,&{\rm d} u^6+ x^1 {\rm d} x^{14}- x^2 {\rm d} x^{13}- x^3 {\rm d} x^{16}+ x^4 {\rm d} x^{15}- x^5 {\rm d} x^{10}+ x^6 {\rm d} x^{9}+ x^7 {\rm d} x^{12}- x^8 {\rm d} x^{11}\\
\lambda^7=\,\,&{\rm d} u^7+ x^1 {\rm d} x^{15}- x^2 {\rm d} x^{16}+ x^3 {\rm d} x^{13}- x^4 {\rm d} x^{14}+ x^5 {\rm d} x^{11}- x^6 {\rm d} x^{12}+ x^7 {\rm d} x^{9}- x^8 {\rm d} x^{10}\\
\lambda^8=\,\,&{\rm d} u^8 - x^1 {\rm d} x^{16}- x^2 {\rm d} x^{15}- x^3 {\rm d} x^{14}- x^4 {\rm d} x^{13}- x^5 {\rm d} x^{12}- x^6 {\rm d} x^{11}- x^7 {\rm d} x^{10}- x^8 {\rm d} x^{9}.
\end{aligned}$$
The rank 16 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{24}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^8=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{16})$ is the Dirac spinor representation \eqref{dir80} of $\mathfrak{n}_{00}=\mathfrak{co}(8,0)$, and $(\tau,R=\mathbb{R}^8)$ is the vectorial representation \eqref{so8} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{IV}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{IV},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S=S_+\oplus S_-$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{co}(8,0)\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
and with the spaces $S_\pm$ being the carrier spaces for the Weyl spinor representations $\rho_\pm$ of $\mathfrak{co}(8,0)$.
The gradation in $\mathfrak{e}_{IV}$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{IV}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{IV},P_{IV})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}.
\end{theorem}
\section{Application: one more realization of $\mathfrak{e}_6$ and a warning}
Between the 24-dimensional realizations of $\mathfrak{e}_6$ mentioned in this paper, and Cartan's 16-dimensional realization of $\mathfrak{e}_6$ associated with the grading \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooot}\end{dynkinDiagram}, there are 21-dimensional realizations of this algebra $\mathfrak{e}_6$ associated with the following Dynkin diagram crossing
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{otoooo}\end{dynkinDiagram}. These define contact $\mathfrak{e}_6$ geometries and are described in \cite{CS}, pp. 425--426.
\subsection{Realization of $\mathfrak{e}_I$ in dimension 25}
Here we will briefly discuss yet another realization, now in dimension 25, corresponding to the following Dynkin diagram crossing: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooto}\end{dynkinDiagram} of $\mathfrak{e}_6$. This is for example mentioned in \cite{weyman}. Looking at the Satake diagrams of real forms of $\mathfrak{e}_6$, we see that this realization is only possible for the real form $\mathfrak{e}_I$.
So we again use our Corollary \ref{cruco} with now $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$ and with representations $(\rho,S)$ and $(\tau,R)$, as indicated in \cite{weyman} Section 5.3,
$S=\mathbb{R}^2\otimes\bigwedge^2\mathbb{R}^5,\quad R=\bigwedge^2\mathbb{R}^2\otimes\bigwedge^4\mathbb{R}^5.$
To be more explicit, we obtain these representations as follows:
\begin{itemize}
\item We start with the defining representations $\tau_2$ of $\mathfrak{sl}(2,\mathbb{R})$ in $\mathbb{R}^2$ and $\tau_5$ of $\mathfrak{sl}(5,\mathbb{R})$ in $\mathbb{R}^5$, and define the representation $$\rho=\tau_2\otimes\big(\tau_5\wedge\tau_5\big)\quad\mathrm{of}\quad \mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})\quad\mathrm{ in}\quad S=\mathbb{R}^2\otimes\bigwedge^2\mathbb{R}^5=\mathbb{R}^{20}.$$
The representation $(\rho,S)$ is an irreducible real 20-dimensional representation of
$$\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R}).$$
\item Then we decompose the $190$-dimensional representation $\rho\wedge\rho$ into irreducibles:
$$\textstyle \rho\wedge\rho=\alpha\oplus\tau\oplus\beta\quad\mathrm{in}\quad\bigwedge_{50}\oplus R\oplus\bigwedge_{135},$$
with $(\alpha,\bigwedge_{50})$ being 50-dimensional, $(\tau,R)$ being 5-dimensional, and $(\beta,\bigwedge_{135})$ being $135$-dimensional.
\item We take the 20-dimensional representation $(\rho,S)$ and the $5$-dimensional representation $(\tau,R)$ of $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$ as above, and apply our Corollary \ref{cruco}.
\end{itemize}
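As a quick arithmetic sanity check of the dimension counts above (ours, not part of the source):

```python
from math import comb

# dim S = dim(R^2 (x) /\^2 R^5) = 2 * C(5,2) = 20
dim_S = 2 * comb(5, 2)

# dim(rho /\ rho) = C(20,2) = 190
dim_wedge2_S = comb(dim_S, 2)

# dim R = dim(/\^2 R^2 (x) /\^4 R^5) = 1 * C(5,4) = 5
dim_R = comb(2, 2) * comb(5, 4)

# the stated decomposition rho /\ rho = /\_50 (+) R (+) /\_135
assert dim_S == 20 and dim_R == 5
assert 50 + dim_R + 135 == dim_wedge2_S == 190
```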
We obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^{25}$ with coordinates $(u^1,\dots,u^5,x^1,\dots ,x^{20})$, and consider five 1-forms $\lambda^1,\dots,\lambda^5$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1- x^3{\rm d} x^{20}+ x^5{\rm d} x^{19}- x^6 {\rm d} x^{18}- x^8 {\rm d} x^{16}+ x^9 {\rm d} x^{15}- x^{10} {\rm d} x^{13}\\
\lambda^2=\,\,&{\rm d} u^2- x^2 {\rm d} x^{20}+ x^4 {\rm d} x^{19}- x^6 {\rm d} x^{17}- x^7 {\rm d} x^{16}+ x^9 {\rm d} x^{14}- x^{10} {\rm d} x^{12}\\
\lambda^3=\,\,&{\rm d} u^3- x^1 {\rm d} x^{20}+ x^4 {\rm d} x^{18}-x^5 {\rm d} x^{17}- x^7 {\rm d} x^{15}+ x^8 {\rm d} x^{14}- x^{10} {\rm d} x^{11}\\
\lambda^4=\,\,&{\rm d} u^4- x^1 {\rm d} x^{19}+ x^2 {\rm d} x^{18}- x^3 {\rm d} x^{17}- x^7 {\rm d} x^{13}+ x^8 {\rm d} x^{12}- x^9 {\rm d} x^{11}\\
\lambda^5=\,\,&{\rm d} u^5- x^1 {\rm d} x^{16}+ x^2 {\rm d} x^{15}- x^3 {\rm d} x^{14}- x^4 {\rm d} x^{13}+ x^5 {\rm d} x^{12}- x^6 {\rm d} x^{11}.
\end{aligned}$$
The rank 20 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{25}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^5=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{20})$ is the 20-dimensional irreducible representation of $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$, and $(\tau,R=\mathbb{R}^5)$ is the 5-dimensional irreducible subrepresentation $\tau\in(\rho\wedge\rho)$ of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{I}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{I},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$.
The gradation in $\mathfrak{e}_{I}$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{I}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{I},P_{I*})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooto}
\end{dynkinDiagram}.
\end{theorem}
\subsection{A realization of $\mathfrak{so}(7,6)$ in dimension 21} We know from \cite{CS} that the crossed Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{otoooo}\end{dynkinDiagram} corresponds to the $\mathfrak{e}_I$-symmetric contact geometry in dimension 21. It corresponds to the grading
$$\mathfrak{e}_I=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\dim(\mathfrak{n}_{\pm 1})=20$, $\dim(\mathfrak{n}_{\pm2})=1$ and $\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})$.
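A quick sanity check of this dimension count (ours): the graded pieces must add up to $\dim\mathfrak{e}_6=78$.

```python
# grading stated above: n_{-2} + n_{-1} + n_0 + n_1 + n_2
dim_n0 = 6 * 6               # n_0 = gl(6,R) has dimension 36
dims = [1, 20, dim_n0, 20, 1]
assert sum(dims) == 78       # = dim e_6
```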
Interestingly, $n=78$ is the dimension not only of the exceptional simple Lie algebra $\mathfrak{e}_6$, but also of the \emph{simple} Lie algebras $\mathfrak{b}_6$ and $\mathfrak{c}_6$. For example, if we take the crossed Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{ooooot}\end{dynkinDiagram} we obtain the following gradation
$$\mathfrak{so}(7,6)=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\dim(\mathfrak{n}_{\pm 1})=6$, $\dim(\mathfrak{n}_{\pm2})=15$ and $\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})$, in the simple Lie algebra $\mathfrak{so}(7,6)$. Here, taking $(\rho,S)$ as the defining representation $\rho(A)=A$ of $\mathbf{GL}(6,\mathbb{R})$ in $S=\mathbb{R}^6$, taking the representation $(\tau,R)$ to be $\tau=\rho\wedge\rho$ in $R=\bigwedge^2\mathbb{R}^6=\mathbb{R}^{15}$, and applying our Corollary \ref{cruco}, we get the following theorem\footnote{We invoke it just to show that we do not use only spin representations in this paper.}.
\begin{theorem}
Let $M=\mathbb{R}^{21}$ with coordinates $(u^1,\dots,u^{15},x^1,\dots ,x^{6})$, and consider fifteen 1-forms $\lambda^1,\dots,\lambda^{15}$ on $M$ given by
$$
\lambda^{I(i,j)}=\,\,{\rm d} u^{I(i,j)}- x^i{\rm d} x^{j},$$
with $$I(i,j)=1+i+\tfrac12(j-3)j,\quad 1\leq i<j\leq 6.$$
The rank 6 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{21}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^{15}=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{6})$ is the 6-dimensional defining representation of $\mathfrak{n}_{00}=\mathfrak{gl}(6,\mathbb{R})$, and $(\tau,R=\bigwedge^2\mathbb{R}^6)$ is the $15$-dimensional irreducible subrepresentation $\tau=\rho\wedge\rho$ of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{so}(7,6)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{so}(7,6),$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})= \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$.
The gradation in $\mathfrak{so}(7,6)$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{so}(7,6)$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(\mathfrak{so}(7,6),P)$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{ooooot}
\end{dynkinDiagram}.
\end{theorem}
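The index function $I(i,j)$ in the last theorem indeed enumerates the pairs $1\leq i<j\leq 6$ bijectively by $\{1,\dots,15\}$; a short script (ours) confirms this:

```python
def I(i, j):
    # I(i,j) = 1 + i + (j-3)j/2, as in the theorem above
    return 1 + i + ((j - 3) * j) // 2

indices = sorted(I(i, j) for j in range(2, 7) for i in range(1, j))
assert indices == list(range(1, 16))  # bijection onto {1,...,15}
```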
\begin{bibdiv}
\begin{biblist}
\bib{AC}{article}
{
author={Dmitrij V. Alekseevsky},
author={Vicente Cortes},
title={Classification of N-(super)-extended Poincaré algebras and bilinear invariants of the spinor representation of Spin(p, q)},
journal={Comm. Math. Phys.},
volume={183(3)},
pages={477–510},
year={1997}
}
\bib{Alt}{article}
{
author={Andrea Altomani},
author={Andrea Santi},
title={Tanaka structures modeled on extended Poincaré algebras},
journal={Indiana Univ. Math. Journ.},
volume={63(1)},
pages={91–117},
year={2014}
}
\bib{bicquard}{article}
{
author = { Olivier Biquard },
title = {Quaternionic contact structures},
booktitle = {Quaternionic Structures in Mathematics and Physics},
chapter = {},
pages = {23-30},
doi = {10.1142/9789812810038\_0003},
URL = {https://www.worldscientific.com/doi/abs/10.1142/9789812810038_0003},
eprint = {https://www.worldscientific.com/doi/pdf/10.1142/9789812810038_0003},
abstract = { Abstract This article is a survey on the notion of quaternionic contact structures, which I defined in [2]. Roughly speaking, quaternionic contact structures are quaternionic analogues of integrable CR structures. }
}
\bib{CartanPhd}{article}{
author={Cartan, \'Elie},
title={\"Uber die einfachen Transformationsgruppen},
journal={Ber. Verh. k. Sachs. Ges. d. Wiss. Leipzig},
date={1893},
pages={395--420},
}
\bib{CartanPhdF}{article}
{
author={Cartan, \'Elie},
title={Sur la structure des groupes de transformations finis et continus},
journal={Oeuvres, 1},
date={1894},
pages={137-287},
}
\bib{CS}{book}{
author={\v{C}ap, Andreas},
author={Slov\'{a}k, Jan},
title={Parabolic geometries. I},
series={Mathematical Surveys and Monographs},
volume={154},
note={Background and general theory},
publisher={American Mathematical Society, Providence, RI},
date={2009},
pages={x+628},
isbn={978-0-8218-2681-2},
review={\MR{2532439}},
doi={10.1090/surv/154},
}
\bib{He}{article}{
author={Helgason, Sigurdur},
title={Invariant differential equations on homogeneous manifolds},
journal={BAMS},
volume={83},
date={1977},
pages={751-756},
}
\bib{DJPZ}{article}{
author = {Hill, C. Denson},
author={Merker, Joël},
author={Nie, Zhaohu},
author={Nurowski, Paweł},
title = {Accidental CR structures},
publisher = {arXiv},
year = {2023},
doi = {10.48550/ARXIV.2302.03119},
url = {https://arxiv.org/abs/2302.03119},
}
\bib{Krug}{article}
{
author={ M.G. Molina},
author={ B. Kruglikov},
author={I. Markina},
author={A. Vasil’ev},
title={Rigidity of 2-Step Carnot Groups},
journal={Journ. Geom. Anal.},
volume={28},
pages={1477–1501},
year={2018}
}
\bib{weyman}{article}{
author = {Kraśkiewicz, Witold},
author={Weyman, Jerzy},
title = {Geometry of orbit closures for the representations associated to gradings of Lie algebras of types $E_6$, $F_4$ and $G_2$},
year = {2012},
publisher = {arXiv},
doi = {10.48550/ARXIV.1201.1102},
url = {https://arxiv.org/abs/1201.1102}
}
\bib{tanaka}{article}{
author={Tanaka, Noboru},
title={On differential systems, graded Lie algebras and pseudogroups},
journal={Journal of Mathematics of Kyoto University},
pages={1-82},
volume = {10},
year = {1970},
}
\bib{traut}{book}{
author = {Trautman, Andrzej},
title={Clifford algebras and their representations},
year= {2006},
note={http://trautman.fuw.edu.pl},
publisher = {in Encyclopedia of Mathematical Physics vol. 1, Elsevier},
editor= {J.-P. Françoise, G.L. Naber and Tsou S.T.},
address = {Oxford GB},
pages={518-530},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
Consider a non-elementary hyperbolic group $\Gamma$ endowed with an invariant metric $d$, satisfying some regularity assumptions, acting by measure class preserving transformations on its Gromov boundary $(\partial \Gamma,\nu_{d})$, equipped with the so-called Patterson-Sullivan measure $\nu_{d}$ associated with $d$. This yields a one-parameter family of isometric representations on $L^{p}$-spaces, denoted by $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ and defined for almost every $\xi\in \partial \Gamma$ as \begin{align}\label{representation}
[\pi_{t}(\gamma)v](\xi)=\bigg(\frac{d\gamma_{*}\nu_{d}}{d\nu_{d}}(\xi)\bigg)^{\frac{1}{2}+t}v(\gamma^{-1}\xi).
\end{align}
The above representation satisfies
$\|\pi_{t}(\gamma) v\|_{p}=\|v\|_{p}$ for all $v\in L^{p}(\partial \Gamma,\nu_{d})$ and for all $\gamma \in \Gamma$, where $p$ is such that $1/p=1/2+t$ with $-\frac{1}{2}<t<\frac{1}{2}$. We call these representations \emph{$L^{p}$-boundary representations} of hyperbolic groups.\\
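For the reader's convenience, the isometry property recorded above follows from a one-line change of variables, which we spell out (our computation):
\[
\int_{\partial \Gamma}|\pi_{t}(\gamma)v|^{p}\,{\rm d}\nu_{d}
=\int_{\partial \Gamma}\Big(\frac{d\gamma_{*}\nu_{d}}{d\nu_{d}}(\xi)\Big)^{p(\frac{1}{2}+t)}|v(\gamma^{-1}\xi)|^{p}\,{\rm d}\nu_{d}(\xi)
=\int_{\partial \Gamma}|v(\gamma^{-1}\xi)|^{p}\,{\rm d}(\gamma_{*}\nu_{d})(\xi)
=\|v\|_{p}^{p},
\]
since $p(\tfrac{1}{2}+t)=1$ and $\int f\,{\rm d}(\gamma_{*}\nu_{d})=\int f\circ \gamma\,{\rm d}\nu_{d}$.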
\emph{The boundary representation} of hyperbolic groups is nothing but $\pi_{0}$; it has been intensively studied over the last decade and might be seen, from a dynamical point of view, as a generalization of ergodicity; see \cite{BM}, \cite{BM2}, \cite{Ga}, \cite{Boy2}, \cite{BoyMa},
\cite{BoyP}, \cite{BG}, \cite{BLP}, \cite{Fink}, \cite{KS}, \cite{KS2} and \cite{Ca}.
In the papers \cite{Bou}, \cite{Boy} and \cite{BPi} the representations (\ref{representation}) have already been studied, but rather as representations on Hilbert spaces. In this paper we focus on $L^{p}$-spaces. We characterize the irreducibility of $L^{p}$-boundary representations $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ with $1/p=1/2+t$ (where $-1/2<t<1/2$) thanks to an intertwining operator associated with the metric $d$, denoted by $\mathcal{I}_{t}$ and satisfying $\mathcal{I}_{t}\pi_{t}=\pi_{-t}\mathcal{I}_{t}$. We prove that this is a bounded operator $\mathcal{I}_{t}:L^{p}(\partial \Gamma,\nu)\rightarrow L^{q}(\partial \Gamma,\nu)$ with $1/q=1/2-t$, defined only for $0<t<1/2$ (see Subsection \ref{intertwiner}). It already appears in the context of hyperbolic groups and Hilbert spaces in \cite{BPi}, \cite{GAG} and in CAT(-1) spaces \cite{Bou}.\\
Our main result is the following:
\begin{theo} \label{mainT}
For all $-1/2<t<1/2$ and $1/p=1/2+t$, the $L^{p}$-boundary representations $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ corresponding to $d$ are irreducible if and only if the intertwiner $\mathcal{I}_{|t|}$ is injective.
\end{theo}
\begin{remark}
In particular, we prove that the representation $(\pi_{t},L^{p}/\ker \mathcal{I}_{t})$ is irreducible for $1/p=1/2+t$ with $0<t<1/2.$
\end{remark}
We also deduce the following result in the context of rank one semisimple Lie groups. We do not know whether the following theorem is already present in the literature.
\begin{theo}\label{latt}
Let $G$ be a rank one semisimple Lie group of noncompact type.
Let $\Gamma$ be a lattice in $G$. The $L^{p}$-boundary representations of $\Gamma$ corresponding to $\nu$, the unique $K$-invariant probability measure on its Poisson-Furstenberg boundary $G/P$, are irreducible for all $1<p<+\infty$, where $K$ is the maximal compact subgroup and $P$ the minimal parabolic subgroup of $G$.
\end{theo}
Indeed, we derive the above theorem from a generalization of an ergodic theorem \`a la Bader-Muchnik for $L^{p}$-boundary representations; see Theorem \ref{BML2} below.
\subsection*{Notation}
Endow $\Gamma$ with the length function $|\cdot|: \Gamma \rightarrow \mathbb{R}^{+}$ corresponding to $d$, defined as $|\gamma |=d(1,\gamma )$, where $1$
is the identity element and $\gamma \in \Gamma.$\\
Let $S^{\Gamma}_{n,R}:=\{ \gamma \in \Gamma| nR\leq |\gamma|<(n+1)R\}$ for $R>0$ and let $|S^{\Gamma}_{n,R}|$ be the cardinality of $S^{\Gamma}_{n,R}$.\\
As in \cite{Boy}, we recall the definition of a spherical function associated with $\pi_{t}$. This is the matrix coefficient: \begin{align}
\phi_{t}:\gamma \in \Gamma \mapsto \langle \pi_{t}(\gamma)\textbf{1}_{\partial \Gamma}, \textbf{1}_{\partial \Gamma}\rangle \in \mathbb{R}^{+},
\end{align}
where $
\textbf{1}_{\partial \Gamma}$ stands for the characteristic function of $\partial \Gamma$. \\
It will be convenient to also introduce the \emph{continuous} function (see \cite[Lemma 3.2]{BPi})
\begin{align}\label{lafonction}
\sigma_{t}:\xi\in \partial \Gamma \mapsto \mathcal{I}_{t}(\textbf{1}_{\partial \Gamma})(\xi)\in \mathbb{R}^{+}\;\;\;\; (t>0).\end{align} This is also a strictly positive function.
Let $M_{\sigma^{-1}_{t}}$ be the corresponding multiplication operator, that is a bounded operator acting on any $L^{p}(\partial \Gamma,\nu_{d})$ for any $p>1$.\\
We denote by $\mathcal{R}_{t}=M_{\sigma^{-1}_{t}} \mathcal{I}_{t}$ the Riesz operator, as in \cite{BPi}.
\subsection*{Convergence results}
We deduce the above theorems from a theorem {\it \`a la Bader-Muchnik} for $L^{p}$-boundary representations of hyperbolic groups. Surprisingly, this kind of theorem, known in the Hilbertian context, also holds for $L^{p}$-spaces.
\begin{theorem}\label{BML2}
For $R>0$ large enough, there exists a sequence of measures $\mu_{n,R}:\Gamma \rightarrow \mathbb{R}^{+}$, supported on $S^{\Gamma}_{n,R}$, satisfying $\mu_{n,R}(\gamma)\leq C /|S^{\Gamma}_{n,R}|$ for some $C>0$ independent of $n$ such that
for all $0<t<1/2 $, for all $f,g\in C(\Gamma \cup \partial \Gamma)$, for all $v\in L^{p}(\partial \Gamma,\nu_{o})$ and $w\in L^{q}(\partial \Gamma,\nu_{o})$:
$$\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) f(\gamma ) g(\gamma^{-1} ) \frac{\langle \pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}\to \langle g_{|_{\partial \Gamma}}\mathcal{R}_{t}(v),\textbf{1}_{\partial \Gamma}\rangle \langle f_{|_{\partial \Gamma}},w \rangle, $$
as $n\to +\infty$.
\end{theorem}
\begin{remark}
If $\mu$ denotes a finitely supported random walk on a non-elementary hyperbolic group $\Gamma$ and $d:=d_{\mu}$ denotes the corresponding Green metric, then $(\Gamma,d_{\mu})$ satisfies the assumptions of Theorems \ref{mainT} and \ref{BML2}.
\end{remark}
\subsection*{Acknowledgement}
The first author would like to thank Ian Tice for useful discussions about Proposition \ref{weakschur} and Nigel Higson for comments on Theorem \ref{latt}.
\subsection*{Structure of the paper}
Section \ref{sec2} contains preliminaries on $\delta$-hyperbolic spaces and hyperbolic groups, Patterson-Sullivan measures and equidistribution results, $L^{p}$-boundary representations as well as the definition of the intertwiner $\mathcal{I}_{t}$ for $t>0.$\\
In Section \ref{interpolation}, we recall some basic facts in interpolation theory and some known results about spherical functions for hyperbolic groups. We prove that $\mathcal{I}_{t}$ is a bounded operator from $L^{p}(\partial \Gamma,\nu)$ to $L^{q}(\partial \Gamma,\nu)$ with $1/p=1/2+t$ and $1/q=1/2-t$ where $0<t<1/2$.\\
Section \ref{section4} is devoted to the proofs of Theorem \ref{BML2} and Theorem \ref{mainT}. In particular, a new tool we use is an $L^p$-version of Radial Property RD for $L^p$-boundary representations.\\
Section \ref{section5} discusses the case of rank one globally symmetric spaces of noncompact type and provides a proof of Theorem \ref{latt}.
\section {Preliminaries on geometrical setting}\label{sec2}
\subsection{The geometrical setting and the regularity assumptions of the metric} A nice reference is \cite{BH}.\\
A metric space \((X,d)\) is said to be \emph{hyperbolic} if there exists $\delta\geq 0$ and a\footnote{if the condition holds for some \(o\) and \(\delta\), then it holds for any \(o\) and \(2\delta\)} basepoint \(o\in X\) such that for any \(x,y,z\in X\) one has
\begin{equation}\label{hyp}
(x,y)_{o}\geq \min\{ (x,z)_{o},(z,y)_{o}\}-\delta,
\end{equation}
where \((x,y)_{o}\) stands for the \emph{Gromov product} of \(x\) and \(y\) from \(o\), that is
\begin{equation}
(x,y)_{o}=\frac{1}{2}(d(x,o)+d(y,o)-d(x,y)).
\end{equation}
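As a toy illustration (ours, not from the source): the real line with $d(x,y)=|x-y|$ is a tree, hence $0$-hyperbolic, and inequality (\ref{hyp}) with $\delta=0$ can be checked numerically:

```python
import itertools

def gromov(x, y, o):
    # Gromov product (x,y)_o = (d(x,o) + d(y,o) - d(x,y)) / 2 on the real line
    d = lambda a, b: abs(a - b)
    return (d(x, o) + d(y, o) - d(x, y)) / 2

# check (x,y)_o >= min{(x,z)_o, (z,y)_o} - delta with delta = 0
pts = [-5, -2, 0, 1, 3, 7]
for x, y, z in itertools.product(pts, repeat=3):
    assert gromov(x, y, 0) >= min(gromov(x, z, 0), gromov(z, y, 0))
```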
We consider \emph{proper} hyperbolic metric spaces (those in which closed balls are compact).
A sequence $(a_{n})_{n\in \mathbb{N}}$ in $X$ converges at infinity if $(a_{i},a_{j})_{o}\rightarrow +\infty$ as $i,j\to +\infty$. Setting $(a_n)\sim(b_n)\Leftrightarrow (a_i,b_j)_o\to \infty$ as $i,j\to \infty$ defines an equivalence relation, and the set of equivalence classes (which does not depend on the base point) is denoted by $\partial X$ and called the Gromov boundary of $X$. The topology on $X$ naturally extends to $\overline{X}:=X\cup \partial X$, so that $\overline{X}$ and $\partial X$ are compact sets. The formula
\begin{equation}\label{gromovextended}
(\xi,\eta)_{o}:= \sup \liminf_{i,j}(a_{i},b_{j})_{o}
\end{equation}
(where the supremum is taken over all sequences $(a_n), (b_n)$ which represent $\xi$ and $\eta$ respectively)
allows one to extend the Gromov product to $\overline{ X}\times \overline{ X}$, though in general in a \emph{non-continuous way}. Moreover, the boundary $\partial X$ carries a family of \emph{visual metrics}, depending on \(d\) and on a real parameter \(\epsilon > 0\), denoted from now on by $d_{o,\epsilon}$. The metric space $(\partial X,d_{o,\epsilon})$ is a compact subspace of the bordification $\overline{X}:=X \cup \partial X$ (also compact), and the open ball centered at $\xi$ of radius $r$ with respect to $d_{o,\epsilon}$ will be denoted by $B(\xi,r)$.\\
It turns out that in general the Gromov product does not extend continuously to the bordification; see for example \cite[Example 3.16]{BH}. Following the authors of \cite{NS}, we say that a hyperbolic space $X$ is $\epsilon$-good, where $\epsilon>0$, if the following
two properties hold for each base point $o\in X$:
\begin{itemize}
\item The Gromov product $(\cdot,\cdot)_{o}$ on $X$ extends continuously to the bordification $X\cup \partial X$.
\item The map $d_{o,\epsilon}:(\xi,\eta)\in \partial X\times \partial X \mapsto e^{-\epsilon(\xi,\eta)_{o}}$ is a metric on $\partial X$.
\end{itemize}
The classical theory of $\delta$-hyperbolic spaces works under the assumption that the spaces are geodesic, but to guarantee that the Gromov product extends continuously to the boundary (that is, if two sequences $a_{n},b_{m}\in X$ converge to $\xi,\eta\in \partial X$, then $(a_{n},b_{m})_{o}\to (\xi,\eta)_{o}$), we shall work under the assumption that the spaces are roughly geodesic. In particular, the conformal relation on the boundary holds: for all $x,y\in X$ and for all $\xi,\eta \in \partial X:$
\begin{equation}\label{conform}
d_{y,\epsilon}(\xi,\eta)=e^{\frac{\epsilon}{2} \big(\beta_{\xi}(x,y)+\beta_{\eta}(x,y)\big)}d_{x,\epsilon}(\xi,\eta),
\end{equation}
where the Busemann function $\beta_{\cdot}(\cdot,\cdot)$ is defined as $$(\xi,x,y)\in\partial X \times X \times X \mapsto \beta_{\xi}(x,y):= \lim_{n\to +\infty}d(x,a_{n}) -d(y,a_{n}),$$ where $(a_{n})_{n\in \mathbb{N}}$ represents $\xi.$
Recall that for all $\xi\in \partial X$ and $x,y\in X$
\begin{equation}\label{buse}
\beta_{\xi}(x,y)=-d(x,y)+2(\xi,y)_{x}.
\end{equation}
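For completeness, identity (\ref{buse}) is immediate from the definitions; we spell out the computation (valid whenever the extended Gromov product is a genuine limit, e.g.\ in an $\epsilon$-good space): if $(a_{n})_{n\in \mathbb{N}}$ represents $\xi$, then
\[
\beta_{\xi}(x,y)=\lim_{n\to +\infty}\big(d(x,a_{n})-d(y,a_{n})\big)
=\lim_{n\to +\infty}\big(2(y,a_{n})_{x}-d(x,y)\big)
=-d(x,y)+2(\xi,y)_{x},
\]
since $2(y,a_{n})_{x}=d(x,y)+d(x,a_{n})-d(y,a_{n})$ by definition of the Gromov product.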
A metric space $(X,d)$ is roughly geodesic if there exists $C=C_X>0$ such that for all $x,y\in X$ there exists a rough geodesic joining $x$ and $y$, that is, a map $r:[a,b]\subset \mathbb{R}\rightarrow X$ with $r(a)=x$ and $r(b)=y$ such that
\begin{equation}\label{roughgeo} |t-s|-C_X \leq d(r(t),r(s))\leq |t-s|+C_X
\end{equation}
for all $t,s\in [a,b]$.
We say that two rough geodesic rays
$r,r':[0,+\infty)\rightarrow X$ are equivalent if $\sup_{t}d(r(t),r'(t))<+\infty$. We write $\partial_{r} X$ for the set of equivalence classes of rough geodesic rays. When $(X,d)$ is a proper roughly geodesic space, $\partial X$ and $\partial_{r} X$ coincide.
\subsection{Hyperbolic groups}
For an introduction to the theory of hyperbolic groups we refer to \cite{Gr} and \cite{G}.\\
Recall that a group $\Gamma$ acts properly discontinuously on a proper metric space if for all compact sets $K,L\subset X$ the set $\{\gamma\in \Gamma\;;\; \gamma K\cap L\neq \emptyset\}$ is finite. A {\it group $\Gamma$ is said to be hyperbolic} if it acts by isometries on some proper hyperbolic metric space $(X,d)$ such that $X/\Gamma$ is compact. A hyperbolic group is necessarily finitely generated (by \v{S}varc-Milnor's lemma). For such $\Gamma$, any finite set of generators $\Sigma$ gives rise to a Cayley graph $({\mathcal G}(\Gamma, \Sigma),d_{\Sigma})$ whose set of vertices is the set of elements of $\Gamma$, linked by length-one edges if and only if they differ by an element of $\Sigma$. Every {\it geodesic} hyperbolic metric space $(X,d)$ on which $\Gamma$ acts by isometries properly discontinuously with compact quotient is quasi-isometric to a Cayley graph of a hyperbolic group. If $\Gamma$ is a hyperbolic group
endowed with a left invariant metric quasi-isometric to a word metric, it turns out that the metric space $(\Gamma,d)$ is a proper roughly geodesic $\delta$-hyperbolic metric space, see for example \cite[Section 3.1]{Ga}.\\
The limit set of $\Gamma$ denoted by $\Lambda_{\Gamma}$ is the set of accumulation points in $\partial X$ of an (actually any) orbit. Namely $\Lambda_{\Gamma}:=\overline{\Gamma . o}\cap \partial X$, with the closure in $\overline{X}$. We say that $\Gamma$ is non-elementary if $|\Lambda_{\Gamma}|>2$ (and in this case, $|\Lambda_{\Gamma}|=\infty$). If $\Gamma$ is non-elementary and if the action is cocompact then $\Lambda_{\Gamma}=\partial X$.
Finally, note that a combination of results due to Blach\`ere, Ha\"{i}ssinsky and Mathieu \cite{BHM} and of Nica and \v{S}pakula \cite{NS} provides
\begin{theorem}
A hyperbolic group acts by isometries, properly discontinuously and cocompactly on a proper roughly geodesic $\epsilon$-good $\delta$-hyperbolic space.
\end{theorem}
\subsection{To sum up}\label{class}
We assume that the metric space we are considering satisfies the following conditions:
\begin{itemize}
\item The metric space $(X,d)$ is $\delta$-hyperbolic.
\item The metric space $(X,d)$ is proper.
\item The metric space $(X,d)$ is roughly geodesic.
\item The metric space $(X,d)$ is $\epsilon$-good for some $\epsilon>0,$
\end{itemize}
and we let a non-elementary group $\Gamma$ act on $(X,d)$ under the following conditions:
\begin{itemize}
\item The action of $\Gamma$ is by isometries.
\item The action is properly discontinuous.
\item The action is cocompact.
\end{itemize}
In other words, the group $\Gamma$ is a non-elementary hyperbolic group and thus $\Gamma$ is infinite, discrete, countable and non-amenable.
\subsection{The Patterson-Sullivan measure}
Fix such $(X,d)$, pick an origin $o\in X$ and set $B(o,R)=\{ x\in X|d(o,x)<R\}$.
Consider a family of visual metrics $(d_{x,\epsilon})_{x\in X}$ associated with a parameter $\epsilon$. The compact metric space $(\partial X,d_{o,\epsilon})$ admits a Hausdorff measure of dimension \begin{equation}\label{DQ} D:={Q\over \epsilon} \end{equation}
where
\begin{equation}\label{volumegrowth}
Q=Q_{\Gamma,d}:=\limsup_{R\to +\infty}\frac{1}{R}\log |\Gamma.o\cap B(o,R)|,
\end{equation}
is the critical exponent of $\Gamma$ (w.r.t. its action on $(X,d)$).
This $D$-dimensional Hausdorff measure is nonzero, finite, unique up to a constant, and denoted by \(\nu_{o}\) when we normalize it to be a probability measure. The fundamental property we use is Ahlfors regularity: the support of $\nu_{o}$ lies in $\partial X$, and we say that $\nu_{o}$ is Ahlfors regular of dimension \(D\) if we have the following estimate for the volumes of balls: there exists $C>0$ such that for all $\xi \in \Lambda_{\Gamma}$ and all \(r \leq \mathrm{Diam}(\partial X)\)
\begin{equation}\label{Ahlfors}
C^{-1} r^{D}\leq \nu_{o}(B(\xi,r)) \leq C r^{D}.
\end{equation}
The \emph{class} of measures $\nu_o$ is invariant under the action of $\Gamma$ and independent of the choice of \(\epsilon\). We refer to \cite{Pa}, \cite{Su}, \cite{BMo} and \cite{Co} for the theory of Patterson-Sullivan measures.
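A concrete illustration of the critical exponent (our example, not needed in the sequel): for the free group $F_{2}$ with its word metric, the sphere of radius $n$ has $4\cdot 3^{n-1}$ elements, so $Q=\log 3$; this is easy to verify by enumerating reduced words:

```python
from math import log

# reduced words in F_2 = <a,b>; capital letters denote inverses
inverse = {"a": "A", "A": "a", "b": "B", "B": "b"}

def sphere_sizes(n_max):
    sizes = [1]        # |S_0| = 1 (the identity)
    frontier = [""]
    for _ in range(n_max):
        # append any letter that does not cancel the last one
        frontier = [w + l for w in frontier for l in inverse
                    if not (w and inverse[w[-1]] == l)]
        sizes.append(len(frontier))
    return sizes

sizes = sphere_sizes(6)
assert sizes[1:] == [4 * 3 ** (n - 1) for n in range(1, 7)]
# log|S_n| / n -> log 3 = Q, the critical exponent of F_2
assert abs(log(sizes[6]) / 6 - log(3)) < 0.25
```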
\subsection{Shadows and control of Busemann functions}
\subsubsection*{Upper Gromov bounded by above}
This assumption appears in the work of Connell and Muchnik \cite{CM} as well as in the work of Garncarek on boundary unitary representations \cite{Ga}. We say that a space $X$ is \emph{upper Gromov bounded by above} with respect to $o$ if there exists a constant $M>0$ such that for all $x\in X$ we have
\begin{equation} \sup_{\xi \in \partial X}(\xi,x)_{o}\geq d(o,x)-M.
\end{equation}
Morally, this definition allows us to choose a point in the boundary playing the role of the forward endpoint of a geodesic starting at $o$ and passing through $x$, as in the context of simply connected Riemannian manifolds of negative curvature. \\
We denote by $\hat{x}_{o}$ a point in the boundary satisfying
\begin{equation} \label{endpoint}
(\hat{x}_{o},x)_{o}\geq d(o,x)-M.
\end{equation}
In particular, every roughly geodesic metric space is upper Gromov bounded by above
(see for example \cite[Lemma 4.1]{Ga}).
\subsubsection{Definition of shadows}
Let $(X,d)$ be a roughly geodesic, $\epsilon$-good, $\delta$-hyperbolic space.
Let $r>0$ and a base point $o \in X$.
Define a shadow for any $x\in X$ denoted by $O_{r}(o,x)$ as
\begin{equation}
O_{r}(o,x):=\{ \xi\in \partial X | (\xi,x)_{o}\geq d(x,o)-r\}.
\end{equation}
\begin{lemma}\label{ombre} Let $r>M+\delta$. Then
$$B(\hat{x}_{o},e^{-\epsilon(d(o,x)-r+\delta)})\subset O_{r}(o,x) \subset B(\hat{x}_{o},e^{-\epsilon(d(x,o)-r-\delta)}). $$
\end{lemma}
\begin{proof}
Assume $r>M+\delta$.
For the left inclusion we have
\begin{align*}
(\xi,x)_{o}&\geq \min \{ (\xi,\hat{x}_{o})_{o},(\hat{x}_{o},x)_{o}\}-\delta\\
&\geq\min \{d(o,x)-r+\delta,d(o,x)-M \}-\delta \\
&=d(o,x)-r.
\end{align*}
For the other inclusion
\begin{align*}
(\xi,\hat{x}_{o})_{o}&\geq \min \{ (\xi,x)_{o},(\hat{x}_{o},x)_{o}\}-\delta\\
&\geq \min \{ d(x,o)-r,d(o,x)-M \}-\delta\\
&\geq d(o,x)-r -\delta.
\end{align*}
\end{proof}
The above lemma combined with Ahlfors regularity of $\nu_{o}$ provides
\begin{lemma}\label{shadow}
There exists $C>0$ such that for any $x \in X$, and for $r>M+\delta$ $$C^{-1}e^{-Qd(o,x)}\leq\nu_{o}(O_{r}(o,x))\leq C e^{-Qd(o,x)}.$$
\end{lemma}
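Indeed, Lemma \ref{shadow} follows by combining Lemma \ref{ombre} with the Ahlfors regularity (\ref{Ahlfors}) and the relation $\epsilon D=Q$ from (\ref{DQ}); for instance, for the upper bound,
\[
\nu_{o}(O_{r}(o,x))\leq \nu_{o}\big(B(\hat{x}_{o},e^{-\epsilon(d(x,o)-r-\delta)})\big)\leq C\,e^{-\epsilon D(d(o,x)-r-\delta)}=C\,e^{Q(r+\delta)}\,e^{-Qd(o,x)},
\]
and the lower bound is obtained in the same way from the left inclusion in Lemma \ref{ombre}.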
Here is a lemma dealing with a covering and the multiplicity of a covering by shadows of the boundary.
\begin{lemma}\label{multiplicity}
We have the two following properties:
\begin{enumerate}
\item \label{item1}For $R>0$ large enough, there exists $r>0$ such that $$\cup_{\gamma \in S^{\Gamma}_{n,R}} O_{r}(o,\gamma o)=\partial X.$$
\item \label{item2}For all $R,r>0$ large enough, there exists an integer $m$ such that for all $\xi \in \partial X$ we have for all $n\in \mathbb{N}$,
$\sum_{\gamma \in S^{\Gamma}_{n,R}} \textbf{1}_{O_{r}(o,\gamma o)}(\xi)\leq m.$
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\kappa$ be the diameter of a relatively compact fundamental domain of the action of $\Gamma$ on $X$ containing $o$. Set \begin{equation}\label{choice}
R>3(C_{X}+\kappa)
\end{equation} where $C_{X}$ is the constant coming from (\ref{roughgeo}).\\
We prove (\ref{item1}).
Let $\xi\in \partial X$ and consider $r_{o}$ a rough geodesic ray starting at $o$ representing $\xi$. Define $z_{\xi}:=r_{o}(nR+R/2)$. Hence, $nR\leq d(o,z_{\xi})< (n+1)R.$ Since the action is cocompact, there exists $\gamma \in \Gamma$ such that $d(\gamma o,z_{\xi})\leq \kappa.$
The choice of (\ref{choice}) ensures $ nR \leq d(\gamma o,o)<(n+1)R.$ Therefore,
\begin{align*}
(\xi, \gamma o)_{o}&\geq \min\{(\xi,z_{\xi})_{o} ,(z_{\xi},\gamma o)_{o}\}-\delta\\
&\geq \min\{(n+1)R-3C_{X},nR-\kappa\}-\delta\\
&\geq nR -\kappa-\delta\\
&\geq |\gamma|-R -\kappa-\delta,
\end{align*}
and thus $\xi \in O_{r}(o,\gamma o)$ with $r=R +\kappa+\delta$.\\
We now prove (\ref{item2}). Take $r>0$ and $R$ satisfying (\ref{choice}). For any $\gamma \in S^{\Gamma}_{n,R}$
and for all $\xi \in O_{r}(o,\gamma o)$
\begin{align*}
(\gamma o,z_{\xi})_{o}&\geq \min\{(\gamma o,\xi)_{o},(\xi,z_{\xi})_{o} \}-\delta\\
&\geq \min\{|\gamma|-r ,(n+1)R-3C_{X} \}-\delta\\
&\geq \min\{nR-r ,(n+1)R-3C_{X} \}-\delta\\
&\geq \min\{nR-r ,nR-C_{X} \}-\delta\\
&\geq nR-r-\delta.
\end{align*}
By definition of the Gromov product we deduce that $d(\gamma o,z_{\xi})\leq \rho$ where $\rho=2(R+r+\delta)$.
Thus the set $\{\gamma \in S^{\Gamma}_{n,R}\ |\ O_{r}(o,\gamma o)\ni \xi \}$ is contained in $B(z_{\xi},\rho)\cap \Gamma .o$. Since the action is cocompact, there exists $g\in \Gamma$ with $d(go,z_{\xi})\leq \kappa$, hence $B(z_{\xi},\rho)\cap \Gamma. o\subset g\big(B(o,R')\cap \Gamma. o\big)$ with $R'=\rho+\kappa$, and therefore $|B(z_{\xi},\rho)\cap \Gamma .o| \leq |B(o,R')\cap \Gamma. o| $. Set $m:=|B(o,R')\cap \Gamma .o| $ to conclude the proof.
\end{proof}
Recall that there exists $M>0$ such that for any $\gamma \in \Gamma$ one can choose a point $\hat{\gamma}_{o}\in \partial X$ satisfying (\ref{endpoint})
$$ (\hat{\gamma}_{o},\gamma o)_{o}\geq |\gamma|-M.$$
\begin{lemma}\label{crucial}
There exists $C>0$ such that for all $n\in \mathbb{N}$ and all $\xi \in \partial X$ there exists $g_{\xi}\in S^{\Gamma}_{n,R}$ such that for all $\gamma \in S^{\Gamma}_{n,R}$,
$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+C.$
\end{lemma}
\begin{proof}
Pick a point $\xi \in \partial X$. Consider a roughly geodesic $r_{o}$ starting at $o$ representing $\xi$ and choose a point $z_{\xi}$ on it such that $nR\leq d(z_{\xi},o)< (n+1)R$ (since $R$ is large enough).
We have for $\gamma \in S^{\Gamma}_{n,R}$ \begin{align*}
(\gamma o, z_{\xi})_{o}&\geq \min\{(\gamma o, \hat{\gamma}_{o})_{o}, (\hat{\gamma}_{o}, z_{\xi})_{o}\}-\delta\\
& \geq \min\{|\gamma|-M, (\hat{\gamma}_{o}, z_{\xi})_{o}\}-\delta,
\end{align*}
and therefore either $(\hat{\gamma}_{o}, z_{\xi})_{o}\leq (\gamma o, z_{\xi})_{o} +\delta $ or $ (\gamma o, z_{\xi})_{o}\geq |\gamma|-M-\delta \geq d(o,z_{\xi})-M-R-\delta\geq (\hat{\gamma}_{o}, z_{\xi})_{o}-M-R-\delta.$ In other words,
\begin{equation}\label{equ}
(\hat{\gamma}_{o}, z_{\xi})_{o}\leq (\gamma o, z_{\xi})_{o} +C_{\delta,M,R}
\end{equation}
with $C_{\delta,M,R}=M+R+\delta.$
It follows that
\begin{align*}
(\xi,\gamma o)_{o}&\geq \min\{ (\xi,z_{\xi})_{o}, (z_{\xi},\gamma o)_{o} \}-\delta \\
&\geq \min\{ nR-C_{X} , (z_{\xi},\gamma o)_{o} \}-\delta\\
&\geq \min\{ nR-C_{X} , (\hat{\gamma}_{o}, z_{\xi})_{o}-C_{\delta,M,R} \}-\delta\\
&\geq (\hat{\gamma}_{o}, z_{\xi})_{o}-C',
\end{align*}
with $C'=C_{X}+R+\delta+C_{\delta,M,R}>0$.
Equality (\ref{buse}) implies that
$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,z_{\xi})+2C'.$ Now, choose $R$ large enough so that there exists $g_{\xi}\in \Gamma$ with $d(g_{\xi}o,z_{\xi})\leq \kappa$ and $g_{\xi} \in S^{\Gamma}_{n,R}$, where $\kappa$ is the diameter of a fundamental domain of the action of $\Gamma$ on $X$ containing $o$. Then
$$\beta_{\hat{\gamma}_{o}}(o,z_{\xi})=\beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+\beta_{\hat{\gamma}_{o}}(g_{\xi} o,z_{\xi})\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+d(g_{\xi} o,z_{\xi})\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+\kappa.$$
To conclude the proof write
$$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi}o)+2C'+\kappa.$$
\end{proof}
\subsection{Equidistribution \`a la Roblin-Margulis}\label{equid}The following theorem appears in this form for the first time in \cite[Theorem 3.2]{BG} and was inspired by results in \cite{Ro} and \cite{Ma}.
The unit Dirac mass centered at $x\in X$ is denoted by $D_{x}$.
\begin{theorem}\label{equi}
For any $R>0$ large enough, there exists a sequence of measures $\mu_{n,R}:\Gamma \rightarrow \mathbb{R}^{+}$ such that
\begin{enumerate}
\item \label{growth}There exists $C>0$ satisfying for all $n \in \mathbb{N}$ and all $\gamma \in S^{\Gamma}_{n,R}$ that $$\mu_{n,R}(\gamma)\leq C / |S^{\Gamma}_{n,R}|.$$
\item
We have the following convergence: $$\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) D_{\gamma o} \otimes D_{\gamma^{-1} o} \rightharpoonup \nu_{o}\otimes \nu_{o},$$
as $n\to +\infty$, for the weak* convergence in $C(\overline{X}\times \overline{X})$.
\end{enumerate}
\end{theorem}
\subsection{$L^{p}$-representations}
The expression (\ref{representation}) of $\pi_{t}$ defines an isometric $L^{p}$-representation of $\Gamma$ for the exponent
\begin{equation}
p=\frac{2}{1+2t},
\end{equation}
with $0<t<1/2$.
Denote its conjugate exponent
\begin{equation}
q=\frac{2}{1-2t}.
\end{equation}
Observe that the contragredient representation of $(\pi_{t},L^{p})$ is $(\pi_{-t},\overline{L^{q}})$
with respect to the (non-degenerate) pairing $$\langle\cdot, \cdot\rangle:(v,w)\in L^{p}\times \overline{L^{q}} \rightarrow \int_{\partial X}v(\xi)\overline{w}(\xi) d\nu_{o}(\xi) \in \mathbb{C}.$$
In particular, the adjoint operator $\pi^{*}_{t}(\gamma)$ of $\pi_{t}(\gamma)$ is given for any $\gamma \in \Gamma$ by
\begin{equation}
\pi^{*}_{t}(\gamma)=\pi_{-t}(\gamma^{-1}).
\end{equation}
\subsection{An Intertwiner}\label{sec14}
Following \cite{BPi}, recall the definition of the operator $\mathcal{I}_{t}$ for $t>0$: for almost every $\xi \in \partial X$
\begin{equation}\label{intertwiner}
\mathcal{I}_{t}(v)(\xi):=\int_{\partial X} \frac{v(\eta)}{d^{(1-2t)D}_{o,\epsilon}(\xi,\eta)}d\nu_{o}(\eta).
\end{equation}
It has already been observed in \cite{BPi} that $\mathcal{I}_{t}$ is a self-adjoint compact operator on $L^{2}(\partial X,\nu_{o})$. We will show that for $0<t<1/2$, the formula (\ref{intertwiner}) defines $\mathcal{I}_{t}$ as a bounded operator from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$ with $1/p=1/2+t$ and $1/q=1/2-t$, see Proposition \ref{cont}. Moreover, working under the assumptions of $\epsilon$-good spaces guarantees that the operator $\mathcal{I}_{t}$ intertwines $\pi_{t}$ and $\pi_{-t}$ from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$, thanks to the relation (\ref{conform}), see \cite[Proposition 3.17]{BPi}. Namely, for all $\gamma \in \Gamma$ and for all $v\in L^{p}(\partial X,\nu_{o}):$
\begin{equation}\label{intert}
\mathcal{I}_{t}\pi_{t}(\gamma)v=\pi_{-t}(\gamma)\mathcal{I}_{t}v.
\end{equation}
It will also be useful to consider \begin{equation}\label{sigmat}
\tilde{\sigma_{t}}:x\in \overline{X} \mapsto \int_{\partial X} e^{(1-2t)Q( x,\eta)_{o}}d\nu_{o}(\eta)\in \mathbb{R}^{+}.
\end{equation}
Observe that $\tilde{\sigma_{t}}$ restricted to $\partial X$ is nothing but $\sigma_{t}$ defined in (\ref{lafonction}).
We recall that
the function $\tilde{\sigma_{t}}$ is continuous on $\overline{X},$ see \cite[Proposition 3.4]{BPi}.
\section{Interpolation theory: Strong inequality of type $(p,q)$ for the intertwining operator and application of the Riesz--Thorin theorem}\label{interpolation}
The aim of this section is to provide material from interpolation theory in order to prove the main result concerning the operator $\mathcal{I}_{t}$, with $0<t<1/2$, based on the \emph{weak-type Schur's test}. Connections between interpolation theory, Lorentz spaces and boundary representations already appear in \cite{Cow} and \cite[Chapter 6]{FiPi}. Note also the very recent work \cite{GAG}.
\subsection{Lorentz spaces, interpolation and applications}
We follow \cite{Cow}.
Let $(\Omega,\mu)$ be a measure space. If $f:\Omega \rightarrow \mathbb{C}$ is a measurable function then define the nonincreasing rearrangement of $f$ $$f^*(t):= \inf \bigg\{ s>0 |\mu\big( \{|f|>s \}\big)\leq t \bigg\}.$$
The function $f^*$ is a nonnegative, nonincreasing, right-continuous function, equimeasurable with $|f|$. For $1<p,q<\infty$, define the norm $$\|f\|_{L^{p,q}}=\bigg(\frac{p}{q} \int^{\infty}_{0}(t^{1/p}f^{*}(t))^{q}\frac{dt}{t}\bigg)^{1/q},$$ and for $1<p<\infty$ with $q=\infty$:
$$\|f\|_{L^{p,\infty}}=\sup\{ t^{1/p}f^{*}(t)\ |\ t>0\}.$$
Define the Lorentz spaces, for $1<p<+\infty$ and $1< q\leq +\infty$, by
$$L^{p,q}(\Omega):=\{ f:\Omega \rightarrow \mathbb{C} \mbox{ measurable }\ |\ \|f\|_{L^{p,q}}<+\infty\}.$$
Here are some useful facts:
\begin{enumerate}
\item $L^{p,p}=L^{p}.$
\item $(L^{p,q})^{*}=L^{p',q'}.$
\item \label{point3} $L^{p,q_{1}}\subset L^{p,q_{2}}$ with $q_{1}<q_{2}$, that is $$\|v\|_{L^{p,q_{2}}}\leq C_{p,q_{1},q_{2}} \|v\|_{L^{p,q_{1}}},$$for some $C_{p,q_{1},q_{2}}>0.$
\end{enumerate}
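As an illustration (our addition, not part of the original text), the nonincreasing rearrangement can be computed numerically in a model case: Lebesgue measure on $[0,1]$. The Python sketch below implements the definition directly; the helper name \texttt{rearrangement} and the grid size are our own choices. For $f(x)=x$ on $[0,1]$ one has $f^{*}(t)=1-t$.

```python
import numpy as np

def rearrangement(f_vals, t):
    """Approximate f*(t) = inf{s > 0 : mu({|f| > s}) <= t} for a function
    sampled on a uniform grid of [0, 1] with Lebesgue measure."""
    n = len(f_vals)
    for s in np.sort(np.abs(f_vals)):
        # empirical distribution function mu({|f| > s})
        if np.count_nonzero(np.abs(f_vals) > s) / n <= t:
            return s
    return 0.0

# Model case: f(x) = x on [0,1]; then mu({|f| > s}) = 1 - s and f*(t) = 1 - t.
x = np.linspace(0.0, 1.0, 2001)
print(rearrangement(x, 0.25))  # close to 0.75
```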
Here is the fundamental tool, called the \emph{weak-type Schur's test}, coming from interpolation theory.
\begin{prop}\label{weakschur}
Let $(X,\mathcal{M},\mu)$ and $(Y,\mathcal{N},\nu)$ be $\sigma$-finite measure spaces and let $1<p,q,r<\infty$ be such that $$\frac{1}{p}+\frac{1}{r}=\frac{1}{q}+1.$$
Let $k:X\times Y \rightarrow \mathbb{R}$ be a measurable function and suppose that there exists $A>0$ such that
\begin{align*}
\|k(\cdot,y)\|_{L^{r,\infty}}&\leq A \mbox{ for a.e. } y \in Y, \\
\|k(x,\cdot)\|_{L^{r,\infty}}&\leq A \mbox{ for a.e. } x \in X.
\end{align*}
Then the formula $Tv(x)=\int_{Y}k(x,y)v(y)d\nu(y)$ defines a.e. a function in $L^{q}(X,\mu)$ whenever $v$ is in $L^{p}(Y,\nu)$. Moreover, for all $1\leq s \leq \infty $ there exists a constant $C=C_{p,q,r,s}>0$ depending on $p,q,r,s$ such that
$$\|Tv\|_{L^{q,s}}\leq C A\|v\|_{L^{p,s}}.$$
\end{prop}
We refer to \cite[Proposition 6.1]{Tao} for a proof.
\subsubsection{Analogs of homogeneous functions on the boundary}
\begin{lemma} Let $0<p<1$ (so that $1<1/p<\infty$).
We have for all $\xi\in \partial X$ $$\frac{1}{d^{ pD}_{o,\epsilon} (\xi,\cdot)}\in L^{1/p,\infty}.$$
\end{lemma}
\begin{proof}
Let $s>0.$ We have for all $\xi \in \partial X$
$$\nu_{o}(\{ \eta |d^{-pD}_{o,\epsilon}(\xi,\eta)>s\})=\nu_{o}(\{ \eta |d_{o,\epsilon}(\xi,\eta)<s^{-1/pD}\})=\nu_{o}\big(B(\xi,s^{-1/pD})\big).$$
We obtain for all $\xi \in \partial X$ and for $t>0:$
\begin{align*}
\bigg(\frac{1}{d^{ pD}_{o,\epsilon} (\xi,\cdot)}\bigg)^{*}(t)&=\inf \{ s>0\ |\ \nu_{o}(\{ \eta \,|\,d^{-pD}_{o,\epsilon}(\xi,\eta)>s\})\leq t\}\\
&= \inf \{ s>0 | \nu_{o}\big(B(\xi,s^{-1/pD})\big) \leq t \} \\
&\leq \inf \{ s>0 | C s^{-1/p} \leq t \} \\
&= C^{p}t^{-p},
\end{align*}
for a constant $C>0$ coming from the Ahlfors regularity property \ref{Ahlfors}. It follows that $1/d^{ pD}_{o,\epsilon} (\xi,\cdot) \in L^{1/p,\infty}.$
\end{proof}
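To make the lemma concrete (our addition), consider the model case where the boundary is $[0,1]$ with Lebesgue measure, Ahlfors regular with $D=1$ and visual metric $|\xi-\eta|$; for $p=1/2$ the computation above predicts $\sup_{t>0}t^{p}f^{*}(t)\leq\sqrt{2}$ for the kernel $f(\eta)=|\xi-\eta|^{-1/2}$. The following Python sketch (the grid size and variable names are our own choices) confirms this numerically.

```python
import numpy as np

# Model case (our assumption, for illustration only): the boundary is [0,1]
# with Lebesgue measure, Ahlfors regular of dimension D = 1, visual metric
# |xi - eta|.  For p = 1/2 the kernel f(eta) = |xi - eta|^{-1/2} should lie
# in L^{1/p,infty} = L^{2,infty}, with sup_t t^{1/2} f*(t) <= sqrt(2) here.
p = 0.5
xi = 0.5
n_grid = 200_001
h = 1.0 / (n_grid - 1)
eta = np.linspace(0.0, 1.0, n_grid)
dist = np.abs(xi - eta)
f = dist[dist > h / 2.0] ** (-p)      # drop the singular point eta = xi

# Discrete rearrangement: sort decreasingly; f*(k/n) is the k-th largest value.
fstar = np.sort(f)[::-1]
n = len(fstar)
t = (np.arange(n) + 1.0) / n
weak_norm = np.max(t ** p * fstar)    # discretization of sup_t t^{1/2} f*(t)

print(weak_norm)
```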
In particular, for $0<t<1/2$ (apply the lemma with exponent $1-2t\in(0,1)$) we have for all $\xi\in \partial X$ $$\frac{1}{d^{(1-2t) D}_{o,\epsilon} (\xi,\cdot)}\in L^{r,\infty},$$
with $r=1/(1-2t).$ By symmetry, we have for all $\eta \in \partial X$ that $$\frac{1}{d^{(1-2t) D}_{o,\epsilon} (\cdot,\eta)}\in L^{r,\infty}$$ as well. We obtain the following result:
\begin{prop}\label{cont}
Let $0<t<1/2$ and let $p,q$ such that $1/p=1/2+t$ and $1/q=1/2-t$.
The operator $\mathcal{I}_{t}$ is bounded from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$.
\end{prop}
\begin{proof}
Note that $1/p+(1-2t)=1/2+t+(1-2t)=1+1/2-t=1+1/q.$ Proposition \ref{weakschur} implies that $$\|\mathcal{I}_{t}(v)\|_{L^{q,s}}\leq C_{t} \|v\|_{L^{p,s}} $$ for all $1\leq s\leq \infty$.
Pick $s=q=2/(1-2t)$, and note that $p\leq s$. Thus $\|\mathcal{I}_{t}(v)\|_{L^{q,s}}=\|\mathcal{I}_{t}(v)\|_{L^{q}}$ and $\|v\|_{L^{p,s}}=\|v\|_{L^{p,q}}\leq C_{p,q}\|v\|_{L^{p,p}}=C_{p,q}\|v\|_{L^{p}} $ by (\ref{point3}), which completes the proof.
\end{proof}
\subsection{Consequences of spectral gap estimates and Riesz-Thorin theorem}\label{section6}
\subsubsection{Spherical functions on hyperbolic groups }
As in \cite{Boy}, we recall the definition of a spherical function associated with $\pi_{t}$. This is the matrix coefficient: \begin{align}
\phi_{t}:\gamma \in \Gamma \mapsto \langle \pi_{t}(\gamma)\textbf{1}_{\partial X}, \textbf{1}_{\partial X}\rangle \in \mathbb{R}^{+},
\end{align}
and introduce the function $\omega_{t}(\cdot)$ for $t\in \mathbb{R}^{*} $ defined as
\begin{equation}
\omega_{t}(x) =\frac{2 \sinh\big( tQ x\big) }{e^{2tQ}-1}.
\end{equation}
Note that for every $t\in \mathbb{R}^{*}$ the function $\omega_{t}(\cdot)$ is positive on $(0,+\infty)$ and converges uniformly on compact sets of $\mathbb{R}$ to
$ \omega_{0}(x)=x $ as $t\to 0$, and $\omega_{-t}(x)=e^{2tQ}\omega_{t}(x)$.\\
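Both properties are elementary; the following Python snippet (our addition, with an arbitrary hypothetical value of $Q$) double-checks them numerically.

```python
import numpy as np

# Numerical double-check (our addition; Q = 0.7 is an arbitrary hypothetical
# value, any Q > 0 works) of the two stated properties of omega_t.
Q = 0.7

def omega(t, x):
    # omega_t(x) = 2 sinh(t Q x) / (e^{2 t Q} - 1), for t != 0
    return 2.0 * np.sinh(t * Q * x) / np.expm1(2.0 * t * Q)

x = np.linspace(0.0, 10.0, 101)
t = 0.3

# omega_{-t}(x) = e^{2tQ} * omega_t(x)
print(np.max(np.abs(omega(-t, x) - np.exp(2.0 * t * Q) * omega(t, x))))
# omega_t(x) -> x uniformly on compacts as t -> 0
print(np.max(np.abs(omega(1e-9, x) - x)))
```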
In \cite{Boy}, the following estimates, called \emph{Harish-Chandra--Anker estimates} (the name refers to \cite{Ank}), have been proved.
There exists $C>0$, such that for any $t\in \mathbb{R}$, we have for all $\gamma \in \Gamma$
\begin{align}\label{HCHestimates}
C^{-1}e^{-\frac{1}{2}Q|\gamma| }\big(1+\omega_{|t|}(|\gamma|)\big)
\leq \phi_{t}(\gamma)\leq
Ce^{-\frac{1}{2}Q|\gamma|}\big(1+\omega_{|t|}(|\gamma|)\big).
\end{align}
Set for all $x\in \mathbb{R}$ \begin{equation}
\widetilde{\phi_{t}}(x):= e^{-\frac{1}{2}Qx}\big(1+\omega_{|t|}(x)\big).
\end{equation}
\subsubsection{A $L^{p}$-spectral inequality} We briefly recall some facts.
In \cite{Boy} the following spectral inequality, generalizing the so-called ``Haagerup inequality'' (property RD), has been proved. Pick $R>0$ large enough. There exists $C>0$ such that for any $t \in \mathbb{R}$ and for all $f_{n}\in \mathbb{C}[\Gamma]$ supported in $S^{\Gamma}_{n,R}$, we have
\begin{equation}\label{RDgeneral}
\|\pi_{t}(f_{n})\|_{L^{2}\to L^{2}}\leq C \omega_{|t|}(nR)\|f_{n}\|_{\ell^{2}}.
\end{equation}
For $R>0$ large enough and for any $n\in \mathbb{N}^{*}$, consider $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$ supported in $S^{\Gamma}_{n,R}$. Note that (\ref{growth}) of Theorem \ref{equi} implies the existence of some positive constant $C>0$ such that for any $n\in \mathbb{N}^*$ $$ \|f_{n}\|_{\ell^{2}}=\|\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma }\|_{\ell^{2}}\leq C/|S^{\Gamma}_{n,R}|^{\frac{1}{2}}.$$
Lemma \ref{multiplicity} (\ref{item1}) combined with Lemma \ref{shadow} implies the existence of $C>0$ such that $|S^{\Gamma}_{n,R}|\geq C^{-1} e^{Q nR}$: indeed, $1=\nu_{o}(\partial X)\leq \sum_{\gamma \in S^{\Gamma}_{n,R}}\nu_{o}(O_{r}(o,\gamma o))\leq C |S^{\Gamma}_{n,R}| e^{-QnR}$.
From (\ref{RDgeneral}) and the above estimates, using the lower bound of (\ref{HCHestimates}), we deduce the following ``spectral gap'': there exists a constant $C>0$ such that for any $t\in \mathbb{R}$ and all nonnegative integers $n$,
\begin{equation}\label{radialestimates}
\|\pi_{t}(f_{n})\|_{L^{2}\to L^{2}}\leq C \widetilde{\phi_{t}}(nR),
\end{equation}
where $\widetilde{\phi_{t}}(nR)$ satisfies $C^{-1}\widetilde{\phi_{t}}(nR)\leq \phi_{t}(\gamma) \leq C\widetilde{\phi_{t}}(nR)$ for all $\gamma \in S^{\Gamma}_{n,R}$, with $C$ a constant independent of $n$.\\
The aim of this subsection is to prove an $L^{p}$-version of the above inequality (\ref{radialestimates}).\\
Although $\pi_{t}$ is an isometric action on $L^{p}$ with $1/p=1/2+t$, it also defines a representation $\pi_{t}:\Gamma \rightarrow \mathbb{GL}(L^{r})$, where $ \mathbb{GL}(L^{r})$ stands for the group of bounded invertible linear operators acting on $L^{r}$. More precisely:
\begin{prop}
For any $-1/2\leq t\leq 1/2$ and all $\gamma \in \Gamma$, the operator $\pi_{t}(\gamma)$ is a bounded invertible operator on $L^{r}$ for all $1\leq r\leq \infty$; moreover, $\gamma \mapsto \pi_{t}(\gamma) \in \mathbb{GL}(L^{r})$ is a group morphism.
\end{prop}
\begin{proof}
Pick $-1/2\leq t\leq 1/2$. Assume $1\leq r < \infty$. We have for all $\gamma \in \Gamma$ and for all $v\in L^{r}$
\begin{align*}
\|\pi_{t}(\gamma)v\|^{r}_{r}&=\int_{\partial X}e^{r(1/2+t)Q\beta_{\xi}(o,\gamma o)}|v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&=\int_{\partial X}e^{Q\beta_{\xi}(o,\gamma o) +(r(1/2+t)-1) Q\beta_{\xi}(o,\gamma o) }|v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&\leq e^{|r(1/2+t)-1| Q |\gamma| } \int_{\partial X} e^{Q\beta_{\xi}(o,\gamma o) } |v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&= e^{|r(1/2+t)-1| Q |\gamma| } \|v\|^{r}_{r},
\end{align*}
where the inequality follows from the fact $|\beta_{\xi}(x,y)|\leq d(x,y)$ for all $\xi\in \partial X$ and for all $x,y\in X$. \\
For the case $r=\infty$, we have for all $\gamma \in \Gamma$ and for all $v\in L^{\infty}$ that $$\|\pi_{t}(\gamma)v\|_{\infty}\leq e^{|1/2+t| Q |\gamma| } \|v\|_{\infty}.$$Hence, for all $\gamma \in \Gamma$ the operator $\pi_{t}(\gamma)$ is a bounded invertible operator on $L^{r}$ for all $1\leq r\leq \infty$. The cocycle property of the Radon-Nikodym derivative implies that $\pi_{t}$ is a morphism, and thus $\pi^{-1}_{t}(\gamma)=\pi_{t}(\gamma^{-1})$. \end{proof}
In order to prove an $L^{p}$-version of Inequality (\ref{radialestimates}) we need the following crucial lemma.
\begin{lemma}\label{BMtrick}
Let $R>0$ large enough. For any $n\in \mathbb{N}$, set $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$. Consider
$\pi_{t}(f_{n}) $ as an operator from $L^{\infty} \to L^{\infty}$ with $t\in [-1/2,1/2]$.
There exists $C_{\infty}>0$ such that for all $t\in [-1/2,1/2]$ and for all $n\in \mathbb{N}$ $$ \|\pi_{t}(f_{n})\|_{L^{\infty} \to L^{\infty}}\leq C_{\infty} \widetilde{\phi_{t}}(nR).$$
\end{lemma}
\begin{proof}
Let $t\in \mathbb{R}$ such that $-1/2\leq t\leq1/2$ and $R>0$. Consider the sequence of functions $(G_{n})_{n}$ defined for each $n$ as $$G_{n}:\xi \in \partial X \mapsto \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}.$$ Consider also $$\check{G}_{n}:\xi \in \partial X \mapsto \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma^{-1})e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}. $$
Let $F\in L^{\infty}(\partial X,\nu_{o})$. We have for every $\xi\in \partial X$ and for all $n\in \mathbb{N}:$
\begin{align*}
|\pi_{t}(f_{n})F(\xi)|&=\big|\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}F(\gamma^{-1}\xi)\big|\\
&\leq \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}|F(\gamma^{-1}\xi)| \\
&\leq \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}\|F\|_{\infty},
\end{align*}
and thus:
$$\|\pi_{t}(f_{n})F\|_{\infty}\leq \|G_{n}\|_{\infty}\|F\|_{\infty}.$$
In other words, it suffices to prove $$\sup_{n} \frac{ \|G_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<\infty.$$ Pick $\xi\in \partial X$. Lemma \ref{crucial} implies that one can choose $R>0$ large enough such that there exist $C>0$ and $g_{\xi} \in S^{\Gamma}_{n,R}$ satisfying for all $\gamma \in S^{\Gamma}_{n,R}$
$$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+C.$$ Furthermore, the right inclusion of Lemma \ref{ombre} implies that there exists $C'>0$ such that for all $\eta \in O_{r}(o,\gamma o)$
$$ \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)\leq \beta_{\eta}(o,g_{\xi} o)+C'.$$
This yields a ``quasi mean-value property'' that reads as follows:
$$ e^{(\frac{1}{2}+t)Q\beta_{\hat{\gamma}_{o}}(o,g_{\xi}o)} \leq \frac{C_{Q,t}}{\nu_{o}(O_{r}(o,\gamma o))} \int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta }(o,g_{\xi}o)}d\nu_{o},$$
where $C_{Q,t}=e^{(\frac{1}{2}+t)QC'}.$
Therefore, using an absorbing constant $C$ independent of $t\in [-1/2,1/2]$ we obtain
\begin{align*}
G_{n}(\xi)&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\hat{\gamma}_{o}}(o,g_{\xi}o)} \\
&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}} \frac{\mu_{n,R}(\gamma)}{\nu_{o}(O_{r}(o,\gamma o))}\int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta}(o,g_{\xi}o)}d\nu_{o}(\eta) \\
&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}} \int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta }(o,g_{\xi}o)}d\nu_{o}(\eta) \\
&\leq C \phi_{t}(g_{\xi}),
\end{align*}
where the first inequality follows from Lemma \ref{crucial}, the second inequality follows from Lemma \ref{shadow} combined with the growth of $|S^{\Gamma}_{n,R}|$, and the last inequality follows from the finite multiplicity of the covering $\cup_{ \gamma \in S^{\Gamma}_{n,R} }O_{r}(o,\gamma o)$ proved in Lemma \ref{multiplicity}. \\
The estimates of the spherical functions (\ref{HCHestimates}) applied to $g_{\xi}\in S^{\Gamma}_{n,R}$ together with the above inequality imply for almost every $\xi\in
\partial X$ and for all $n\in \mathbb{N}:$
$$G_{n}(\xi)\leq C \widetilde{\phi_{t}}(nR).$$ Thus
$$\sup_{n}\frac{\|G_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<+\infty,$$ as required.
The above method applied to $\check{G}_{n}$ implies $$\sup_{n}\frac{\|\check{G}_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<\infty.$$
\end{proof}
We finally obtain the following theorem.
\begin{theorem}\label{Lpspectral}
Let $R>0$ be large enough and let $r\in [1,\infty]$. There exists $C>0$ such that for any $-\frac{1}{2}\leq t \leq\frac{1}{2}$ and all $n\in \mathbb{N}$, with $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$ supported on $S^{\Gamma}_{n,R}$, we have:
\begin{align}
\|\pi_{t} (f_{n})\|_{L^{r}\to L^{r}}\leq C \widetilde{\phi_{t}}(nR).
\end{align}
Consequently,
\begin{align}
\sup_{\|v\|_{p},\|w\|_{q}\leq 1} |\langle \pi_{t} (f_{n})v,w\rangle |\leq C\widetilde{\phi_{t}}(nR).
\end{align}
\end{theorem}
\begin{remark}
It is worth noting that the above theorem can be viewed as an $L^{p}$-version of radial property RD for $L^p$-boundary representations of hyperbolic groups.
\end{remark}
\begin{proof}
The proof is based on the Riesz--Thorin theorem. We shall prove that for any $t\in [-1/2,1/2 ]$ and each $n\in \mathbb{N}$, the operator $\pi_{t}(f_{n})$, viewed both as an operator on $L^{1}$ and as an operator on $L^{\infty}$, has norm at most $C\widetilde{\phi_{t}}(nR)$ for some constant $C$ independent of $n$.\\
The bound on $L^{\infty}$ follows from Lemma \ref{BMtrick}. We now prove the bound on $L^{1}$. First, observe that $\pi_{t}(f_{n})$ preserves the cone of positive functions, since $f_{n}$ is positive and $\pi_{t}$ itself preserves the cone of positive functions. By decomposing a function $v$ into real and imaginary parts and then into positive and negative parts, it is enough to find a bound for positive functions. Assume $v \geq 0$.
\begin{align*}
\|\pi_{t}(f_{n})v\|_{1}&=\langle \pi_{t}(f_{n})v,\textbf{1}_{\partial X}\rangle \\
&=\langle v, \pi_{-t}(\check{f_{n}}) \textbf{1}_{\partial X}\rangle \\
&\leq \|\check{G}_{n}\|_{\infty} \langle v, \textbf{1}_{\partial X}\rangle \\
&\leq C_{\infty}\widetilde{\phi_{t}}(nR) \|v\|_{1},
\end{align*}
where the first inequality follows from the proof of Lemma \ref{BMtrick}.
Therefore the Riesz--Thorin theorem implies that
$\pi_{t}(f_{n})$ defines a bounded operator from $L^{r}$ to $L^{r}$, with norm at most $C\widetilde{\phi_{t}}(nR)$, for any $r$ such that $1/r\in [0,1]$.
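Explicitly (our addition, spelling out the interpolation step): writing $1/r=\theta\in[0,1]$, the Riesz--Thorin theorem interpolates the two endpoint bounds obtained above,

```latex
\|\pi_{t}(f_{n})\|_{L^{r}\to L^{r}}
\leq \|\pi_{t}(f_{n})\|_{L^{1}\to L^{1}}^{\theta}\,
     \|\pi_{t}(f_{n})\|_{L^{\infty}\to L^{\infty}}^{1-\theta}
\leq \big(C\widetilde{\phi_{t}}(nR)\big)^{\theta}\big(C\widetilde{\phi_{t}}(nR)\big)^{1-\theta}
= C\widetilde{\phi_{t}}(nR),
```

so the resulting bound is uniform in $r$.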
\end{proof}
\section{Proofs}\label{section4}
The proof of Theorem \ref{BML2} proceeds in three steps.
\begin{proof}
Let $0<t<\frac{1}{2}$. \\
\textbf{Step 1:} Uniform boundedness. Consider, for $R>0$ and for every non-negative integer $n$, the function $f_{n}$ supported on $S^{\Gamma}_{n,R}$ defined as: $$f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\frac{ D_{\gamma }}{\phi_{t}(\gamma)}.$$
Note that this is a weighted version of the function $\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) D_{\gamma }$, each term being divided by the spherical function $\phi_{t}(\gamma)$. The $L^{p}$-spectral inequality of Theorem \ref{Lpspectral}, together with the fact that there exists $C>0$ such that $C^{-1}\widetilde{\phi_{t}}(nR)\leq \phi_{t}(\gamma)$ for all $\gamma \in S^{\Gamma}_{n,R}$, implies
\begin{align}
\sup_{n}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} <+\infty.
\end{align}
Set $K:=\sup_{n}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} $. Given $f,g\in C(\overline{ X})$ we have for all $n\in \mathbb{N}$
\begin{align*}
| \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}|&\leq C \|f\|_{\infty}\|g\|_{\infty}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} \|v\|_{p} \|w\|_{q} \\
&\leq CK \|f\|_{\infty}\|g\|_{\infty} \|v\|_{p} \|w\|_{q}.
\end{align*}
By the Banach-Alaoglu-Bourbaki theorem, and since on reflexive spaces the weak topology and the weak* topology coincide, the limit $$\lim_{n\to +\infty} \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}$$ exists, up to extraction of a subsequence, for all $v\in L^{p}$, all $w\in L^{q}$ and all $f,g\in C(\overline{X})$.\\
\textbf{Step 2:} Computation of the limit.\\
We already know from \cite{BPi} that the desired result holds for $v,w$ in dense subspaces of $L^{p}$ and $L^{q}$; namely, for all $v,w \in Lip(\partial X)$ and $f,g\in C(\overline{X})$:
\begin{align*}
\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}=\langle g_{|_{\partial X}} \mathcal{R}_{t}(v), \textbf{1}_{\partial X}\rangle \langle f_{|_{\partial X}} ,w \rangle.
\end{align*}
\textbf{Step 3:} Conclusion.\\
Since $0<t<1/2$, the operator $\mathcal{I}_{t}$, and thus $\mathcal{R}_{t}$, is continuous on $L^{p}$ with $1/p=1/2+t$.
The limit above together with the uniform bound of \textbf{Step 1} imply that for all $f,g\in C(\overline{ X})$ and for
$(v,w)\in L^{p}\times L^{q}$:
\begin{align*}
\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}=\langle g_{|_{\partial X}} \mathcal{R}_{t}(v), \textbf{1}_{\partial X}\rangle \overline{ \langle w,f_{|_{\partial X}} \rangle}.
\end{align*}
\end{proof}
\subsection{Proof of irreducibility}
To prove irreducibility of representations our main tool is Theorem \ref{BML2}.
\begin{lemma}\label{cyclic}
Let $0<t<\frac{1}{2}$. The vector $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p})$ and the vector $\sigma_{t}$ is cyclic for $(\pi_{-t},\overline{Im(\mathcal{I}_{t})} ^{\|\cdot\|_{q}})$.
\end{lemma}
\begin{proof}
Theorem \ref{BML2} implies the following convergence, for all $w\in L^{q}$ and all continuous functions $f\in C(\overline{X})$:
$$\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)\frac{\langle\pi_{t}(\gamma)\textbf{1}_{\partial X},w\rangle}{\phi_{t}(\gamma)}= \langle f_{|_{\partial X}},w \rangle. $$
Now, given a function $f\in C(\partial X)$ consider $\tilde{f}\in C(\overline{X})$ such that $\tilde{f}_{|_{\partial X}}=f$. Set
\begin{equation}\label{fn}
f_{n}:=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\tilde{f}(\gamma o)\frac{\pi_{t}(\gamma)\textbf{1}_{\partial X}}{\phi_{t}(\gamma)}\in \pi_{t}(\mathbb{C}[\Gamma])\textbf{1}_{\partial X}.
\end{equation}
Hence, $f_{n}\to f$ with respect to the weak topology of $L^{p}$. Since $C(\partial X)$ is dense in $L^{p}$ and since, for convex sets, the weak closure and the $\|\cdot\|_{p}$-closure coincide, the vector $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p})$.\\
We now prove that $\sigma_{t}$ is cyclic for $\pi_{-t}$. Recall that for all $v\in L^{p}$ we have $\mathcal{I}_{t}(v)\in L^{q}$ by Proposition \ref{cont}. Hence for all $v\in L^{p}$, with the same notation as in (\ref{fn}), we have
$$\langle f_{n},\mathcal{I}_{t}(v) \rangle \to \langle f_{|_{\partial X}},\mathcal{I}_{t}(v)\rangle.$$ Since $\mathcal{I}_{t}$ is self adjoint, for all $v\in L^{p}$
\begin{align*}
\langle \mathcal{I}_{t}(f_{n}),v \rangle&=\langle \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\tilde{f}(\gamma o)\frac{\pi_{-t}(\gamma)\mathcal{I}_{t}(\textbf{1}_{\partial X})}{\phi_{t}(\gamma)},v\rangle\\
&\to \langle \mathcal{I}_{t}(f_{|_{\partial X}}),v \rangle,
\end{align*}
where $ \mathcal{I}_{t}(f_{n}) \in \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}.$
Hence $$\overline{ \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}} ^{\|\cdot\|_{q}}=\overline{\mathcal{I}_{t}(C(\partial X))}^{\|\cdot\|_{q}}.$$
Eventually, using the density of $C(\partial X)$ in $L^{p}$ and the continuity of $\mathcal{I}_{t}$ we deduce $$\overline{ \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}} ^{\|\cdot\|_{q}}= \overline{\mathcal{I}_{t}(L^{p})}^{\|\cdot\|_{q}}.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainT}.]
First of all, since $\mathcal{I}_{t}$ is continuous from $L^{p}$ to $L^{q}$, the subspace $\ker \mathcal{I}_{t}$ is a closed invariant subspace of $(\pi_{t},L^{p})$. Thus, if $\mathcal{I}_{t}$ is non injective, then $(\pi_{t},L^{p})$ is not irreducible.\\
We shall now prove that if $\mathcal{I}_{t}$ is injective then $(\pi_{t},L^{p})$ is irreducible for $1/p=1/2+t.$ Since $\mathcal{I}_{t}$ is a continuous operator from $L^{p}$ to $L^{q}$, a standard result in Banach space theory asserts that the dual space of $L^{p}/ \ker \mathcal{I}_{t}$ is the space $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$, where the weak closure in $L^{q}$ is the same as the $\|\cdot\|_{q}$-closure (see \cite[Chapter 3]{Me}). Hence, the dual representation of $(\pi_{t}, L^{p}/ \ker \mathcal{I}_{t} )$ is $(\pi_{-t}, \overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}})$. We shall prove that $(\pi_{-t},\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}})$ is irreducible to obtain irreducibility of $(\pi_{t}, L^{p}/ \ker \mathcal{I}_{t} )$.\\ Recall that
$ \pi_{-t}(f) \sigma_{t}=\mathcal{I}_{t}(\pi_{t}(f)\textbf{1}_{\partial X} )$, for all $f\in \mathbb{C}[\Gamma]$ and the function $\sigma_{t}$ is cyclic in $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$ for $\pi_{-t}$ by Lemma \ref{cyclic}.\\
Now, let $K\neq \{0\}$ be a $\|\cdot\|_{q}$-closed subspace of $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}} \subset L^{q}(\partial X,\nu_{o})$ invariant under $\pi_{-t}$. Let $R>0$ be large enough. For any $w\in K \subset L^{q}$ define for all $n\in \mathbb{N}$ the vector:
\begin{equation}\label{zzz}
w_{n}:=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) \tilde{ \sigma_{t}}(\gamma^{-1} o) \frac{\pi_{-t}(\gamma^{-1}) w }{\phi_{t}(\gamma)} \in K,
\end{equation}
where $\tilde{\sigma_{t}}$ has been defined in (\ref{sigmat}).
Theorem \ref{BML2} implies for all $v\in L^{p}(\partial X,\nu_{o})$ that as $n\to +\infty$:
$$ \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) \tilde{\sigma_{t}}(\gamma^{-1} o) \frac{\langle v, \pi_{-t}(\gamma^{-1}) w\rangle }{\phi_{t}(\gamma)}\to \langle \mathcal{I}_{t}(v),\textbf{1}_{\partial X}\rangle \langle \textbf{1}_{\partial X} ,w \rangle=\langle v,\mathcal{I}_{t}(\textbf{1}_{\partial X})\rangle \langle\textbf{1}_{\partial X}, w \rangle,$$
and the above convergence reads as follows with respect to the weak topology on $L^{q}:$
$$w_{n}\to \langle \textbf{1}_{\partial X} , w \rangle \sigma_{t}.$$
Since $K$ is closed we have that $ \langle \textbf{1}_{\partial X} , w \rangle \sigma_{t} \in K.$ So, since $\sigma_{t}$ is cyclic, it is sufficient to show that there exists $0\neq w \in K$ such that $$\langle \textbf{1}_{\partial X} , w \rangle\neq 0.$$
Assume this is not the case: for all $w\in K$ we have $\langle \textbf{1}_{\partial X},w\rangle =0$. Then for all $\gamma \in \Gamma$ we would have $\langle \textbf{1}_{\partial X},\pi_{-t}(\gamma^{-1})w \rangle=0= \langle \pi_{t}(\gamma) \textbf{1}_{\partial X},w \rangle.$ Therefore, since $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p}(\partial X,\nu_{o}))$, it follows that
$$ K \subset \{w\in L^{q}\ |\ \langle v,w\rangle=0\ \ \forall v\in L^{p} \}.$$
Since the pairing is non-degenerate, $K=\{0 \}$, a contradiction.
Hence, if $K\neq \{0 \}$ then $K$ contains $\sigma_{t}$, and thus it has to be $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$, and the proof is done.
\end{proof}
\section{Application to rank one semisimple Lie groups}\label{section5}
Let $G$ be a connected semisimple Lie group with finite center and let $\frak g$ be its Lie algebra.
Let $K$ be a maximal compact subgroup of $G$ and let $\frak k$ be its Lie algebra. Let $\frak p$ be the orthogonal complement of $\frak k$ in $\frak g$
relative to the Killing form $B$. Among the abelian sub-algebras of $\frak g$ contained in the subspace $\frak p$, let $\frak a$ be a maximal one. We assume $\dim \frak a=1$, i.e. the real rank of $G$ equals $1$ (in particular $G$ is not compact). Let $\Sigma\subset \frak{a}^{*}$ be the root system associated to $(\frak g,\frak a)$. Let
\[
\frak g_{\alpha}=\{X\in\frak g: \mathrm{ad}(H)X=\alpha(H)X\ \ \forall H\in\frak a \}
\]
be the root space of $\alpha\in\Sigma$. Recall that $\Sigma=\{-\alpha,\alpha\}$ or $\Sigma=\{-2\alpha,-\alpha,\alpha,2\alpha\}$, where $\alpha$ is a positive root ($\alpha\in\Sigma$ is positive if and only if $\alpha(H)>0$ for all $H\in \frak a^+$). Set $m_{1}=\dim \frak g_{\alpha}$ and $m_{2}=\dim \frak g_{2\alpha}$, and denote by $\rho=\frac{1}{2}(m_{1}+2m_{2})\alpha$ the half sum of the positive roots, counted with multiplicity. Let $H_{0}$
be the unique vector in $\frak{a}$ such that $\alpha(H_{0})=1$. Hence, $\frak{a}=\{ tH_{0},t\in\mathbb{R}\}$ and $\frak{a}_{+}=\{ H\in \frak{a}\ |\ \alpha(H)>0\}$ is identified with the open set of strictly positive real numbers.
Let $\frak n$ be the nilpotent Lie algebra defined as the direct sum of the root spaces of the positive roots:
\[
\frak n=\bigoplus_{\alpha\in\Sigma^+}\frak g_{\alpha}.
\]
Let $A=\exp(\frak a)$, $A^+=\exp(\frak a^+)$ and $N=\exp(\frak n)$.
Let $G=KAN$ be the Iwasawa decomposition and $K\overline{A^{+}}K$ the Cartan decomposition defined by $\frak a^+$, where $\overline{A^{+}}$ denotes the closure of $A^{+}$. Let $Z(A)$ be the centralizer of $A$ in $G$ and
$M=Z(A)\cap K$. The group $M$ normalizes $N$. Let $P=MAN$ be the minimal parabolic subgroup of $G$ associated to $\frak a^+$.
Let $\nu$ be the unique Borel regular $K$-invariant probability measure on the Furstenberg-Poisson boundary $G/P$; it is quasi-invariant under the action of $G$ (we refer to \cite[Appendix B]{BDV} for a general discussion).
Let
\[
\rho_{t}:G\to \mathcal{U}(L^p(G/P,\nu))
\]
be the associated $L^{p}$-boundary representation of $G$, and define the corresponding spherical function
\[
\phi_{t}(g)=\langle \rho_{t}(g)\textbf{1}_{G/P}, \textbf{1}_{G/P}\rangle.\]
The corresponding globally symmetric space of non compact type of $G$ is $G/K$, endowed with a $G$-invariant Riemannian metric, denoted by $d$, induced by the Killing form on $\frak{g}/ \frak{k}$ identified with the tangent space of $G/K$ at the point $o=eK$. A flat of dimension $k$ is defined as the image of a locally isometric map $\mathbb{R}^{k}\rightarrow G/K$. The rank of $G$ is the largest dimension of a flat subspace of $G/K$. The rank one globally symmetric spaces of non compact type are classified as follows: they are the real hyperbolic spaces $H^{n}(\mathbb{R})$, the complex hyperbolic spaces $H^{n}(\mathbb{C})$, the quaternionic hyperbolic spaces $H^{n}(\mathbb{H})$ for $n\geq 2$, and the exceptional hyperbolic space, namely the $2$-dimensional octonionic hyperbolic space $H^{2}(\mathbb{O})$. \\
If $(X,d)$ is one of the above hyperbolic spaces, it is a CAT(-1) space; in particular it is a proper geodesic $\delta$-hyperbolic space and fits of course in the class of spaces \ref{class} for $\epsilon=1$. One can therefore consider its Gromov boundary $\partial X$, or equivalently the geometric boundary of $X$. The group $G$ acts by isometries on $(X,d)$, and its discrete subgroups act properly discontinuously on $X$. Assume that $\Gamma$ is a lattice (uniform or non-uniform) and perform the Patterson-Sullivan construction associated to $(\Gamma,d)$ with the base point $o=eK \in X$ to obtain a measure supported on $\partial X$, denoted by $\nu_{o}$. The Hausdorff dimension of $\nu_{o}$ is the critical exponent $Q_{\Gamma}$ of $\Gamma$, which coincides with the volume growth of the corresponding hyperbolic space, $Q_{G}=m_{1}+2m_{2}$.\\
A geodesic ray starting at the origin can be represented using Cartan decomposition as $c(t)=ke^{tH_{0}}\cdot o=kMe^{tH_{0}}\cdot o$ where $t\in \mathbb{R}_{+}$ and $k\in K$. Then the Furstenberg-Poisson boundary $G/P$ can be identified with the geometric boundary $\partial X$ in the case of rank one symmetric space. Indeed, one can identify $G/P=K/M$ thanks to the Iwasawa decomposition $KAN$. It turns out that the Patterson-Sullivan measure $\nu_{o}$ associated with a lattice $\Gamma$ supported on $\partial X$ of dimension $Q_{\Gamma}$ coincides with the unique $K$-invariant measure $\nu$ on $G/P$. \\
Thus, the $L^{p}$-boundary representation of $\Gamma$ is nothing but the restriction of $\rho_{t}$ to $\Gamma$
\begin{align}
\pi_{t}:\gamma \in \Gamma\rightarrow \rho_{t_{|_{\Gamma}}} \in \mbox{Iso}(L^{p}(\partial X,\nu_{o}))
\end{align}
with $p$ such that $1/p=1/2+t$, where $0<t<1/2$. Since $\Gamma$ might be a non-uniform lattice, the results obtained above for hyperbolic groups do not apply to $\Gamma$. Nevertheless, we have the exact analog of Lemma \ref{BMtrick}.\\
In the following, for $R>0$ and any $n\in \mathbb{N}$, the spheres $S^{\Gamma}_{n,R}$ are defined with respect to the length function $|\gamma|:=d(o,\gamma o)$ corresponding to the Riemannian metric $d$ on the symmetric space $G/K$. Moreover, for $R>0$ and any $n\in \mathbb{N}$, one can take for $\mu_{n,R}$ the standard average
$$\frac{1}{|S^{\Gamma}_{n,R}|} \sum_{\gamma \in S^{\Gamma}_{n,R}} D_{\gamma }.$$
\begin{lemma}\label{BMtrick2}
Let $\Gamma$ be a lattice in $G$.
Let $R>0$. For all $t\in [-1/2,1/2]$ and $n\in \mathbb{N}$ set $f_{n}=\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}}D_{\gamma } \in \mathbb{C}[\Gamma]$. Consider
$\pi_{t}(f_{n}) $ as an operator from $L^{\infty} \to L^{\infty}.$
There exists $C_{\infty}>0$, depending on $R$, such that for all $t\in [-1/2,1/2]$ $$ \|\pi_{t}(f_{n})\|_{L^{\infty} \to L^{\infty}}\leq C_{\infty} \widetilde{\phi_{t}}(nR).$$
\end{lemma}
\begin{proof}
The proof follows the same ideas as \cite[Proposition 3.2]{BoyHCH} and \cite[Section 2.5]{BLP}.
\end{proof}
Therefore, following exactly the same method as in the proof of Theorem \ref{BML2}, we obtain
\begin{theorem}\label{lattspectral}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group with finite center $G$.
Let $R>0$ be large enough and let $r\in [1,\infty]$. There exists $C>0$ such that for all $-\frac{1}{2}\leq t \leq\frac{1}{2}$ and all $n\in \mathbb{N}$, with $f_{n}=\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}}D_{\gamma } \in \mathbb{C}[\Gamma]$ supported on $S^{\Gamma}_{n,R}$, we have:
\begin{align}
\|\pi_{t} (f_{n})\|_{L^{r}\to L^{r}}\leq C \widetilde{\phi_{t}}(nR).
\end{align}
Thus we have
\begin{align}
\sup_{\|v\|_{p},\|w\|_{q}\leq 1} |\langle \pi_{t} (f_{n})v,w\rangle |\leq C\widetilde{\phi_{t}}(nR).
\end{align}
\end{theorem}
The equidistribution theorem needed in (\ref{equid}) reads as follows in the context of lattices. We refer to \cite{Ro} for the next results in a more general setting.
\begin{theorem}\label{equi}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group with finite center $G$. For any $R>0$, we have the following convergence: $$\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}} D_{\gamma o} \otimes D_{\gamma^{-1} o} \rightharpoonup \nu_{o}\otimes \nu_{o},$$
as $n\to +\infty$, for the weak* convergence in $C(\overline{X}\times \overline{X})$.
\end{theorem}
Eventually we obtain
\begin{theorem}\label{BML3}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group with finite center $G$. For $R>0$ large enough, for all $0<t<1/2 $, for all $f,g\in C(\overline{X})$, for all $v\in L^{p}( \partial X,\nu_{o})$ and $w\in L^{q}(\partial X,\nu_{o})$:
$$\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}} f(\gamma ) g(\gamma^{-1} ) \frac{\langle \pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}\to \langle g_{|_{\partial X}}\mathcal{R}_{t}(v),\textbf{1}_{\partial X}\rangle \langle f_{|_{\partial X}},w \rangle, $$
as $n\to +\infty$.
\end{theorem}
\subsection{Intertwining operators}
The restriction of $\rho_{t}$ to $K$ on $L^{2}(K/M,\nu_{o})$ is given by
\begin{align*}
\rho_{t}(k)v(\xi)=v(k^{-1}\xi),
\end{align*}
since $\nu_{o}$ is $K$-invariant; note that the representation no longer depends on $t$ and defines a unitary representation of $K$. Therefore the intertwining relation (\ref{intert}) reads as follows: for all $k\in K$ and all $t>0$
$$ \mathcal{I}_{t}\rho_{t}(k)= \rho_{t}(k) \mathcal{I}_{t}.$$
Peter-Weyl Theorem implies that $$ L^{2}(K/M,\nu_{o})=\oplus_{n\geq 0} V_{n},$$ where the $V_{n}$ are finite dimensional irreducible unitary representations of $K$. Therefore, Schur's lemma implies that there exists a sequence of scalars $(\lambda_{n})_{n\geq 0}$ such that $\mathcal{I}_{t}$ restricted to $V_{n}$ is the scalar operator $ \mathcal{I}_{t |_{V_{n}}}=\lambda_{n} Id_{ |_{V_{n} } } $, with $\lambda_{n}\neq 0$ for all $n\geq 0$. We deduce that $\mathcal{I}_{t}$ is injective viewed as an operator acting on $L^{2}$, and therefore it is injective as an operator from $L^{p}$ to $L^{q}$. Apply Theorem \ref{BML3} to lattices in rank one semisimple Lie groups and use exactly the same arguments as in Theorem \ref{mainT} to complete the proof of Theorem \ref{latt}.
\section{Introduction} \label{sec:Intro}
Let $X$ be an $M\times N$ matrix; we assume that $\frac{M}{N}$ converges to a limit $\phi$ as both $M$ and $N$ tend to $\infty$ (although this may be relaxed). Let
\begin{equation}
\Sigma = VDV^*, \quad V = \begin{pmatrix} \mid & & \mid \\ \mathbf{v}_1 & \dots & \mathbf{v}_M \\ \mid & & \mid \end{pmatrix}, \quad D = \diag(\tau_1, \dots, \tau_M)
\end{equation}
with $\tau_1 \geq \cdots \geq \tau_M$, be an $M\times M$ real symmetric or complex Hermitian positive definite matrix together with its eigendecomposition.
We will make the following assumption about the ``training-data'' matrix $X$.
\begin{assumption} \label{assump:main}
$\Sigma$ is diagonal and the $M\times N$ matrix $X$ has i.i.d$.$ columns $\sim \mathcal{N}(0,\Sigma)$.
\end{assumption}
As is common in Random Matrix Theory, the dimensions $N$ and $M$ of $X$ and $\Sigma$, and of almost every other matrix we will study, are assumed to go to infinity. Thus practically every major quantity of interest is a sequence of quantities, and all properties that we desire to study are those which emerge in the large dimensional limit. We therefore always, except perhaps when special emphasis is needed, suppress the dependence of matrices and functions thereof on the dimensions $N$, $M$, etc. We consider the sample covariance matrix
\begin{equation}
S = \Sigma^{1/2} XX^* \Sigma^{1/2}
\end{equation}
We let
\begin{equation}
S = ULU^*, \quad U = \begin{pmatrix} \mid & & \mid \\ \mathbf{u}_1 & \dots & \mathbf{u}_M \\ \mid & & \mid \end{pmatrix}, \quad L = \diag(\lambda_1, \dots, \lambda_M)
\end{equation}
with $\lambda_1 \geq \cdots \geq \lambda_M$ be its spectral decomposition.
It is a problem of great theoretical and practical interest to understand how the properties of the sample covariance matrix relate to properties of the population covariance matrix $\Sigma$.
Random Matrix Theory has had great success in the last decade in getting very fine control of random matrices $H$ by way of their resolvent
\begin{equation}
(H-zI)^{-1}
\end{equation}
This was first done by Mar\v{c}enko and Pastur (\cite{MP67}). Their approach was to show that the trace of the resolvent of a random matrix approximately satisfies some self-consistent equation, and then to reason that the trace of the resolvent, which is also the Stieltjes transform of the empirical eigenvalue measure, must be close to the true solution of the self-consistent equation. This has remained a popular and powerful technique.
One finds that the resolvent $R_M := R_M(z):= (S-zI)^{-1}$ of $S$ can be written as
\begin{equation}
\sum_{i=1}^M \frac{1}{\lambda_i - z} \mathbf{u}_i\mathbf{u}_i^*
\end{equation}
Ledoit and P\'{e}ch\'{e}, in their paper \cite{LP}, consider functions of the form
\begin{equation}
\Theta(z):= \sum_{i=1}^M \sum_{j=1}^M \frac{1}{\lambda_i -z} \mathbf{u}_i\mathbf{u}_i^* g(\tau_j)\mathbf{v}_j\mathbf{v}_j^*
\end{equation}
for some function $g:\mathbb{R}\to \mathbb{R}$ with finitely many discontinuities, which amounts to a weighting of the spectral decomposition of the resolvent; components of the resolvent in different eigendirections of the population covariance matrix are weighted according to the value of $g$ applied to the associated eigenvalue of the population covariance matrix. $\Theta$ may be simplified as follows:
\begin{equation}
\Theta(z) = \Trace((S-zI)^{-1} g(\Sigma))
\end{equation}
Here we recall that for a function $g:\mathbb{R}\to \mathbb{R}$ and a diagonal matrix $D$, we define a matrix $g(D)$ by
\begin{equation}
[g(D)]_{ij} := g(D_{ij})\delta_{ij},
\end{equation}
and for a real symmetric or complex Hermitian matrix $A$ with spectral decomposition $A = U^*DU$, we define
\begin{equation}
g(A) := U^*g(D)U
\end{equation}
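As a quick sanity check, the equivalence between the weighted spectral double sum defining $\Theta$ and the trace formula above can be verified numerically in a small diagonal toy model (the dimensions, the $1/\sqrt{N}$ normalization of $X$, and the test function $g$ are our illustrative choices, not the paper's):

```python
import numpy as np

# Toy check of Theta(z) = Trace((S - zI)^{-1} g(Sigma)) against the
# spectral double sum; Sigma diagonal, so v_j = e_j.
rng = np.random.default_rng(0)
M, N = 8, 20
tau = np.linspace(1.0, 3.0, M)                 # population eigenvalues tau_j
Sigma = np.diag(tau)
X = rng.standard_normal((M, N)) / np.sqrt(N)   # illustrative normalization
S = np.sqrt(Sigma) @ X @ X.T @ np.sqrt(Sigma)  # sample covariance matrix
lam, U = np.linalg.eigh(S)                     # lambda_i and u_i
g = np.exp                                     # any test function g
z = 0.7 + 0.5j
theta_trace = np.trace(np.linalg.inv(S - z * np.eye(M)) @ np.diag(g(tau)))
# double sum: sum_{i,j} g(tau_j) |u_i^* v_j|^2 / (lambda_i - z), v_j = e_j
theta_sum = np.sum(((U**2).T @ g(tau)) / (lam - z))
```

The two expressions agree to machine precision, as the cyclic-trace computation predicts.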
The case $g \equiv 1$ is of course of interest. In this case we have
\begin{equation} \label{eq:CaseOfgEq1}
\Theta(z) = \Trace((S-zI)^{-1}) = \mathcal{S}\left(\mu\right), \quad \mu := \sum_{i=1}^M \delta_{\lambda_i}
\end{equation}
where for a measure $\sigma$ we define the Stieltjes transform $\mathcal{S}(\sigma): \mathbb{H} \to \mathbb{H}$ by
\begin{equation}
\mathcal{S}(\sigma)(z) := \int \frac{d\sigma(x)}{x-z}
\end{equation}
and where by $\mathbb{H}$ we denote the complex upper half-plane
\begin{equation}
\mathbb{H} := \set{z= E + i\eta \in \mathbb{C} \mid \eta > 0}
\end{equation}
We note that $E$ and $\eta$ are usually the way we will denote the real and imaginary parts of a complex argument to a Stieltjes transform.
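To illustrate the role of $\eta$: by the classical Stieltjes inversion formula (a standard fact we use for illustration only), $\frac{1}{\pi}\Imag \mathcal{S}(\sigma)(E+i\eta)$, integrated in $E$, recovers the density of $\sigma$ as $\eta \to 0$. A minimal numerical sketch, taking $\sigma$ to be Lebesgue measure on $[0,1]$ (density $1$ in the interior):

```python
import numpy as np

# Stieltjes inversion sketch: (1/pi) * integral of Im 1/(x - E - i*eta)
# over sigma approximates the density of sigma at E as eta -> 0.
E, eta = 0.5, 1e-4
x = np.linspace(0.0, 1.0, 2_000_001)        # grid fine enough to resolve eta
y = eta / ((x - E)**2 + eta**2)             # = Im 1/(x - E - i*eta)
dens_approx = y.sum() * (x[1] - x[0]) / np.pi   # Riemann sum, should be ~1
```

For small $\eta$ the approximation error is $O(\eta)$ away from the endpoints of the support.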
The quantity \eqref{eq:CaseOfgEq1} has been deeply understood for a fairly general class of matrices, which we will detail soon.
We will also be particularly interested in the case where $g$ is the identity, i.e., $g(x) \equiv x$. In this case, $\Theta$ admits another simplification:
\begin{equation} \label{eq:STofEstMeas}
\Theta = \mathcal{S}\left(\nu\right)
\end{equation}
where
\begin{equation}
\nu := \sum_{i=1}^M \mathbf{u}_i^*\Sigma \mathbf{u}_i \delta_{\lambda_i}
\end{equation}
The reason this case of $g$ is of particular interest to us is that the quantities
\begin{equation}
\mathbf{u}_i^*\Sigma\mathbf{u}_i
\end{equation}
are precisely the quantities which describe the Frobenius-norm optimal rotation equivariant shrinkage estimator for the population covariance matrix, as shown in \cite{LP}. Another result of the same paper was that $\nu$ is close to a deterministic measure, although no rate of convergence was provided. In this paper we improve their result by providing an essentially optimal rate of convergence.
\begin{definition} [Shrinkage Estimators and Loss Function]
Given a realization of the sample covariance matrix $S$, we define a (rotation equivariant) shrinkage estimator $\hat{\Sigma}$ for the population covariance matrix $\Sigma$ via
\begin{equation}
\hat{\Sigma} = U \hat{D} U^*
\end{equation}
for some diagonal matrix $\hat{D}$. That is, we estimate $\Sigma$ from $S$ by keeping the eigenvectors and changing the eigenvalues, presumably ``shrinking'' them, since $S$ has the tendency to ``spread out'' the eigenvalues of $\Sigma$.
To measure the success of $\hat{\Sigma}$ we define the loss function
\begin{equation}
\mathcal{L}^{MV} (\hat{\Sigma}, \Sigma) := \frac{\Trace(\hat{\Sigma}^{-1} \Sigma \hat{\Sigma}^{-1})/N}{\left[ \Trace \left(\hat{\Sigma}^{-1}\right)/N \right]^2} - \frac{1}{\Trace\left(\Sigma^{-1}\right)/N}
\end{equation}
\end{definition}
Here $MV$ stands for minimum variance, and ``$\mathcal{L}^{MV}$ represents the \emph{true} variance of the linear combination of the original variables that has the minimum \emph{estimated} variance.'' (See \cite{ANS} for the quote and for more discussion of the suitability of this loss function.)
\begin{lemma} \label{lem:optimalShrinkForm}
With respect to $\mathcal{L}^{MV}$, the optimal shrinkage estimator is given when $\hat{D}_{ii} = \beta \mathbf{u}_i^*\Sigma \mathbf{u}_i$, where $\beta \neq 0$ is a scaling constant that we will take to be 1.
\end{lemma}
\begin{definition}
We define
\begin{equation}
D^{\mathrm{or}} := \mathrm{diag}(\mathbf{u}_i^*\Sigma \mathbf{u}_i)_i, \quad \Sigma^{\mathrm{or}} := U D^{\mathrm{or}} U^*
\end{equation}
and we call $\Sigma^{\mathrm{or}}$ the \emph{shrinkage oracle}.
\end{definition}
\begin{remark}
As noted above, the same choice of $\hat{D}$ is optimal for a Frobenius norm loss function \cite{LP}.
\end{remark}
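When $\Sigma$ is known, the oracle is straightforward to compute in simulation. A sketch in a toy model (our choice of dimensions and of the $1/\sqrt{N}$ normalization of $X$):

```python
import numpy as np

# Compute the oracle shrinkage values D^or_{ii} = u_i^* Sigma u_i and
# assemble Sigma^or = U D^or U^* in a small toy model.
rng = np.random.default_rng(1)
M, N = 50, 200
tau = np.linspace(0.5, 2.5, M)
Sigma = np.diag(tau)
X = rng.standard_normal((M, N)) / np.sqrt(N)
S = np.sqrt(Sigma) @ X @ X.T @ np.sqrt(Sigma)
lam, U = np.linalg.eigh(S)
d_oracle = np.einsum('ji,jk,ki->i', U, Sigma, U)   # the values u_i^T Sigma u_i
Sigma_or = U @ np.diag(d_oracle) @ U.T
```

Note that the oracle preserves the trace: $\sum_i \mathbf{u}_i^*\Sigma\mathbf{u}_i = \Trace(U^*\Sigma U) = \Trace(\Sigma)$, so shrinkage redistributes spectral mass without changing the total.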
The optimal shrunken eigenvalues $\mathbf{u}_i^*\Sigma \mathbf{u}_i$ are experimentally unavailable to us, so for statistical purposes Lemma \ref{lem:optimalShrinkForm} is of limited use. However, just as $\mu$ was bounded optimally close to a deterministic limit in \cite{ALLRM}, we will do the same for $\nu$.
Given our random matrix with independent entries $X$ and our population covariance matrix $\Sigma$, we define the resolvent, or Green function, introduced in \cite{ALLRM}:
\begin{equation}
G(z) := \begin{pmatrix} -\Sigma^{-1} & X \\ X^* & -zI \end{pmatrix}^{-1}
\end{equation}
\end{equation}
The reader accustomed to Random Matrix Theory will note that this is not the usual definition of the resolvent. It is, however, an important realization made in \cite{ALLRM} that the more familiar resolvent
\begin{equation}
R_M := (S-zI)^{-1}
\end{equation}
can be neatly gotten from $G$, as well as the related resolvent
\begin{equation}
R_N := (X^* \Sigma X-zI)^{-1}
\end{equation}
and that this single matrix containing both resolvents unlocks powerful tools for studying resolvent estimates developed in the context of Wigner matrices. In fact, $G$ may be decomposed in block form as
\begin{equation}
\begin{pmatrix} z\Sigma^{1/2}R_M\Sigma^{1/2} & \cdot \\ \cdot & R_N \end{pmatrix}
\end{equation}
where the blocks labeled $\cdot$ are not of interest to us currently. The conjugation of $R_M$ by $\Sigma^{1/2}$ is not of great importance to the authors of \cite{ALLRM}, but it is very fortunate for us, the reason being that $\Theta$, our object of greatest interest, is precisely
\begin{equation}
\Theta = \Trace(R_M\Sigma) = \Trace (\Sigma^{1/2}R_M\Sigma^{1/2})
\end{equation}
by the invariance of $\Trace$ under cyclical permutation. Let us make a few more definitions and then quote a result of \cite{ALLRM}.
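Before doing so, the block decomposition above — with $G(z)$ the inverse of the two-by-two block matrix built from $-\Sigma^{-1}$, $X$, $X^*$ and $-zI$ — can be verified numerically in small dimensions (a sketch with real Gaussian entries and our own normalization):

```python
import numpy as np

# Check the Schur-complement block structure of G(z): top-left block
# z Sigma^{1/2} R_M Sigma^{1/2}, bottom-right block R_N.
rng = np.random.default_rng(3)
M, N = 5, 8
tau = np.linspace(1.0, 2.0, M)
Sigma = np.diag(tau)
X = rng.standard_normal((M, N)) / np.sqrt(N)
z = 0.3 + 0.4j
A = np.block([[-np.linalg.inv(Sigma), X], [X.T, -z * np.eye(N)]])
G = np.linalg.inv(A)
S = np.sqrt(Sigma) @ X @ X.T @ np.sqrt(Sigma)
R_M = np.linalg.inv(S - z * np.eye(M))
R_N = np.linalg.inv(X.T @ Sigma @ X - z * np.eye(N))
top_left = z * np.sqrt(Sigma) @ R_M @ np.sqrt(Sigma)
theta = np.trace(R_M @ Sigma)   # = Trace(Sigma^{1/2} R_M Sigma^{1/2})
```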
We define the population spectral measure, or \emph{PSM}, of $\Sigma$ by
\begin{equation}
\pi = \frac{1}{M}\sum_{j=1}^M \delta_{\tau_j}
\end{equation}
This is of course just the probability measure which places equal weight at each of $\Sigma$'s eigenvalues, counted with multiplicity.
We also define the following notion of size for random variables, introduced in \cite{ErKnYa13}, which has proven very helpful for formulating results in RMT.
\begin{definition}[Stochastic Domination]
Given two sequences of random variables $X:= \{X_N\}_{N\in\mathbb{N}}$ and $Y:= \{Y_N\}_{N\in\mathbb{N}}$ (note that we again suppress that quantities of interest are sequences in $N$), we say that $Y$ \emph{stochastically dominates} $X$, or that $X \prec Y$, if for any (small) $\epsilon > 0$, (large) $D > 0$, and sufficiently large $N$, we have
\begin{equation}
P(X > N^\epsilon Y) < N^{-D}
\end{equation}
\end{definition}
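A Monte Carlo sketch of the flavor of this definition (entirely our illustration; a finite simulation can suggest, but of course not certify, the $N^{-D}$ decay): the mean of $N$ i.i.d.\ signs is stochastically dominated by $N^{-1/2}$.

```python
import numpy as np

# Estimate P(|X_N| > N^eps * N^{-1/2}) for X_N the mean of N i.i.d. signs.
rng = np.random.default_rng(2)
N, trials, eps = 2_500, 2_000, 0.1
k = rng.binomial(N, 0.5, size=trials)      # number of +1 signs per trial
means = 2.0 * k / N - 1.0                  # the sample means X_N
frac = np.mean(np.abs(means) > N**eps * N**(-0.5))   # small already here
```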
A little more notation: we define the important function $m:\mathbb{H}\to \mathbb{H}$ as the unique value in $\mathbb{H}$ solving
\begin{equation} \label{eq:mDef}
\frac{1}{m} = -z + \phi \int \frac{x}{1 + mx} \mathrm{d} \pi(x)
\end{equation}
for $z \in \mathbb{H}$.
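In the simplest case $\pi = \delta_1$ (i.e.\ $\Sigma = I$), equation \eqref{eq:mDef} becomes $1/m = -z + \phi/(1+m)$ and can be solved numerically by fixed-point iteration (the choice of $z$ and the convergence of the naive iteration are assumptions of this sketch, not claims of the text):

```python
import numpy as np

# Solve 1/m = -z + phi/(1+m) by fixed-point iteration for pi = delta_1.
phi, z = 0.5, 1.0 + 1.0j
m = 1j                                     # start in the upper half-plane
for _ in range(10_000):
    m_new = 1.0 / (-z + phi / (1.0 + m))   # the self-consistent map
    if abs(m_new - m) < 1e-14:
        m = m_new
        break
    m = m_new
residual = abs(1.0 / m - (-z + phi / (1.0 + m)))   # defect in eq. (mDef)
```

The solution indeed lies in $\mathbb{H}$, consistent with $m:\mathbb{H}\to\mathbb{H}$.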
We will list some things that we know about $m$.
\begin{itemize}
\item The equation \eqref{eq:mDef}, which we have taken as the definition of $m$, is the one which allows us to make the connection between the different definitions of $\delta$ in the papers \cite{LP} and \cite{ALLRM}.
\item $m$ is also the unique solution to $f(m) = z$, where
\begin{equation}
f(x) = -\frac{1}{x} + \phi\sum_{\tau_i\neq 0} \frac{1}{x + \tau_i^{-1}}.
\end{equation}
\item The limit $\check{m}(E) := \lim_{\mathbb{H} \ni z \to E} m(z)$ exists and is given by
\begin{equation}
\check{m}(E) = \pi\left(\mathcal{H}w(E) + iw(E)\right)
\end{equation}
where $w:=\frac{d\varrho}{d\lambda}$ is the Radon-Nikodym derivative of $\varrho$ with respect to $\lambda$, and where $\mathcal{H}$ is the Hilbert transform (see \cite{ANS}). We mention Hilbert transforms because \cite{ANS} presents it this way and explains how the presence of the Hilbert transform provides a theoretical explanation for the phenomenon of eigenvalue shrinkage, but we will not heavily use the Hilbert transform in our treatment; we will use an equation from \cite{ALLRM} which $m$ satisfies to get control of $m$'s real and imaginary parts directly.
\end{itemize}
At this time, let us also define the shrinkage function
\begin{equation} \label{eq:deltaDef}
\delta(x) := \frac{x}{[\pi cx w(x)]^2 + [1-c-\pi cx\mathcal{H}w(x)]^2}
\end{equation}
This function appeared first in a slightly different form in \cite{LP} and then in its stated form in \cite{ANS}; in both cases it is useful to us as an approximator to the values $\mathbf{u}_i^* \Sigma \mathbf{u}_i$ which describe the optimal shrinkage estimator.
\begin{remark}
We note that there is a small discrepancy between our definition of $m$ and the definition of $m$ in the context of \cite{LP} for $ M/N \gtrsim 1+\epsilon$. However, this discrepancy only amounts to how the limiting empirical spectral measure weights 0, and thus can be easily accounted for.
\end{remark}
\begin{theorem}[Informal Statement of \cite{ALLRM}'s main result] \label{thm:ALLRMInformal}
Define the matrix
\begin{equation}
\Pi := \Pi(z) := \begin{pmatrix} -\Sigma(I + m(z)\Sigma)^{-1} & 0 \\ 0 & m(z)I \end{pmatrix}
\end{equation}
Then element by element, $G$ is very close to $\Pi$.
\end{theorem}
\begin{theorem}[Slightly more Formal Statement of part of Theorem \ref{thm:ALLRMInformal}] \label{thm:bottomCornerTrace}
If $\Sigma$'s population spectral measure $\pi$ satisfies some mild regularity constraints, then
\begin{equation}
\frac{1}{N}\Trace(R_N(z)) - m(z) = O_\prec\left((N\eta)^{-1}\right)
\end{equation}
\end{theorem}
This result is essentially optimal (up to the definitions in $\prec$) and cannot be gotten naively. The paper also provides essentially optimal bounds on individual resolvent elements; individual diagonal entries of $R_N$ are themselves close to $m$, but the difference is of an order $(N\eta)^{-1/2}$ in the bulk spectrum; this means that in averaging the diagonal elements to get the normalized trace, there is a fair bit of cancellation between different diagonal elements, as there is between independent random variables. The task of finding the ``parts'' of the random variables $(R_N)_{ii}$ which are independent of one another and thus provide this cancellation is the content of a ``Fluctuating Averaging Lemma'' in random matrix theory.
A corollary of this result is the ``Marchenko-Pastur law on small scales''
\begin{corollary} \label{cor:bottomCornerMeasures}
For any interval $I \subseteq \mathbb{R}$, we have
\begin{equation}
\mu(I) = \varrho(I) + O_\prec\left(N^{-1}\right)
\end{equation}
\end{corollary}
This corollary is important in that it shows that statements about Stieltjes transforms of measures, which we have in great strength thanks to the techniques of \cite{ALLRM}, can be translated into statements about the measures themselves. It captures the fact that the empirical eigenvalue distribution is given very accurately by a certain deterministic distribution, even on very fine scales: even in intervals which are predicted to contain only $N^\epsilon$ eigenvalues of $S$ according to the deterministic measure $\varrho$, the prediction is correct to leading order with very high probability.
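In the simplest setting $\Sigma = I$, this kind of prediction can be checked in simulation against the classical Marchenko-Pastur density (quoted here from the literature, not derived; the dimensions and normalization are our own):

```python
import numpy as np

# Compare the empirical fraction of eigenvalues in [0.5, 1.5] with the
# Marchenko-Pastur prediction for Sigma = I, phi = M/N = 1/2.
rng = np.random.default_rng(4)
M, N = 1000, 2000
phi = M / N
X = rng.standard_normal((M, N)) / np.sqrt(N)
lam = np.linalg.eigvalsh(X @ X.T)
a, b = (1 - np.sqrt(phi))**2, (1 + np.sqrt(phi))**2   # MP support edges
x = np.linspace(0.5, 1.5, 20_001)
density = np.sqrt(np.clip((b - x) * (x - a), 0.0, None)) / (2 * np.pi * phi * x)
predicted = density.sum() * (x[1] - x[0])             # integral of MP density
observed = np.mean((lam >= 0.5) & (lam <= 1.5))       # empirical fraction
```

Already at $M = 1000$ the two fractions agree to a few parts in a thousand.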
What we will first do in this note is adapt the proof of Theorem \ref{thm:bottomCornerTrace} to deal also with the quantity
\begin{equation}
\frac{1}{M} \Trace\left(zR_M\Sigma + \Sigma(I + m(z)\Sigma)^{-1}\right)
\end{equation}
which leads us to our first main result:
\begin{theorem} \label{thm:topCornerTrace}
If $\Sigma$ satisfies the same regularity conditions as required for Theorem \ref{thm:bottomCornerTrace}, then we have
\begin{equation}
\frac{1}{M} \Trace\left(zR_M\Sigma + \Sigma(I + m(z)\Sigma)^{-1}\right) = O_\prec \left((N\eta)^{-1}\right)
\end{equation}
Using equation \eqref{eq:mDef}, we may rewrite the limit $M^{-1}\Trace\left(-\Sigma(I + m(z)\Sigma)^{-1}\right)$ as
\begin{equation}
-\phi^{-1} \left( \frac{1}{zm}+1\right)
\end{equation}
\end{theorem}
Just as the Marchenko-Pastur law at small scales followed from Theorem \ref{thm:bottomCornerTrace}, so does our second main result. First we observe that \cite[Theorem~4]{LP} is equivalent to saying that the function $\delta$ from \eqref{eq:deltaDef} is the Radon-Nikodym derivative of the limiting measure for $\nu$ with respect to the deformed Marchenko-Pastur law. Our second main result adds a rate of convergence to this limiting behavior, as follows.
\begin{corollary} \label{cor:topCornerMeasures}
We have for any interval $I \subseteq \mathbb{R}$ that
\begin{equation}
\mathrm{d}\nu(I) - \delta\mathrm{d}\varrho(I) = O_\prec\left(N^{-1}\right).
\end{equation}
\end{corollary}
If $\tilde{\Sigma}$ is the shrinkage estimator $\sum_{i=1}^M \delta(\lambda_i)\mathbf{u}_i\mathbf{u}_i^*$, then the above implies an order of error between $\tilde{\Sigma}$ and $\Sigma^{\text{or}}$:
\begin{equation}
\left|\mathrm{tr}\left(\Sigma^{\text{or}} - \tilde{\Sigma}\right)\right| \prec N^{-1}.
\end{equation}
Boundedness and continuity of $1/\delta$, together with the Portmanteau theorem, can then be used to show:
\begin{equation}
\mathcal{L}^{\text{MV}}(\Sigma^{\text{or}}, \Sigma) - \mathcal{L}^{\text{MV}}(\tilde{\Sigma}, \Sigma) = O_\prec( N^{-1}).
\end{equation}
Further, we hypothesize that no \emph{bona fide} shrinkage estimator can make this error asymptotically smaller, in which case this would be the smallest possible excess MV loss within the shrinkage class, as claimed in the abstract.
\section{Relation to Previous Works}
This paper is, firstly, a direct successor to the papers \cite{LP} and \cite{ANS} which advances a program established therein using recent advances in RMT. Another important connection is to the papers \cite{Bai2007} and \cite{VESD}, which discuss a measure which is related to ours: for a fixed unit vector $\mathbf{x}$, they study
\begin{equation}
F_{Q_1,\mathbf{x}} := \sum_{i=1}^M \left|\mathbf{u}_i^*\mathbf{x}\right|^2 \delta_{\lambda_i}
\end{equation}
As in our context, they prove that this measure is close to a deterministic limit, which \cite{VESD} calls $F_{1c, \mathbf{x}}$ (up to adjusting from their context to ours). In particular, they prove
\begin{equation}
F_{Q_1, \mathbf{x}}(I) - F_{1c, \mathbf{x}}(I) \prec M^{-1/2}
\end{equation}
The earlier paper \cite{Bai2007} establishes the optimality of the factor $M^{-1/2}$ by remarkably establishing the joint asymptotic Gaussian distribution of any $k$ different analytic functions integrated against $F_{Q_1, \mathbf{x}}$ (one way that \cite{VESD} differs from or improves on \cite{Bai2007} is in the very high probability with which the error bounds hold).
We can recover our measure $\nu$ from $F_{Q_1, \mathbf{x}}$: indeed, if $\{\mathbf{v}_1, \dots, \mathbf{v}_M\}$ are the eigenvectors of $\Sigma$, then
\begin{equation}
\mu = M^{-1} \sum_{j=1}^M \pi_j F_{Q_1, \mathbf{v}_j}
\end{equation}
Similarly, the limiting deterministic measures satisfy
\begin{equation}
\delta \mathrm{d}\varrho = M^{-1} \sum_{j=1}^M \pi_j F_{1c, \mathbf{v}_j}
\end{equation}
So our main result, with the error weakened from $O_\prec\left(N^{-1}\right)$ to $O_\prec \left(N^{-1/2}\right)$, is a consequence of the results of \cite{VESD}.
The improvement by a factor of $M^{-1/2}$ that occurs after averaging is exactly reminiscent of the central limit theorem, which hints at a sort of independence between the measures $F_{Q_1, \mathbf{v}_j}$ and $F_{Q_1, \mathbf{v}_{j'}}$. Also note that this improvement of $M^{-1/2}$ does not reflect an improvement of our work over theirs; the error bound of $O_\prec\left(N^{-1/2}\right)$ obtained in \cite{VESD} is optimal, and it is only after averaging over many different deterministic vectors $\mathbf{x}$ that one sees the improvement. Thus, our work is to their work as an averaged local law is to an entry-wise local law.
Lastly, one should note that an ultimate goal of the program investigated in \cite{VESD} is to establish the convergence of the CDF of $F_{Q_1, \mathbf{x}}$ to a Brownian bridge, which would amount to finding some ``internal'' independence inside $F_{Q_1, \mathbf{x}}$ in the form of independence between the quantities $\langle \mathbf{u}_i, \mathbf{x}\rangle$. The ``external'' independence that we have hinted at between the measures $F_{Q_1, \mathbf{v}_j}$ and $F_{Q_1, \mathbf{v}_{j'}}$ is not unrelated to this.
The connections between the papers \cite{Bai2007}, \cite{VESD} and ours are perhaps deeper than we have realized; future work will hopefully bring further connections to light.
\section{Tools}
First, we extend the definition of matrix multiplication to matrices indexed by arbitrary sets.
\begin{definition}
Let $\mathcal{I}_i$ be a finite set for $i \in \{1, 2, 3, 4\}$. Let $A$ be a $\mathcal{I}_1 \times \mathcal{I}_2$ matrix, and let $B$ be a $\mathcal{I}_3 \times \mathcal{I}_4$ matrix. We define the matrix product $AB$ to be a $\mathcal{I}_1 \times \mathcal{I}_4$ matrix satisfying
\begin{equation}
(AB)_{ij} = \sum_{k \in \mathcal{I}_2 \cap \mathcal{I}_3} A_{ik}B_{kj}
\end{equation}
\end{definition}
\begin{definition}
Let $A$ be an invertible $\mathcal{J} \times \mathcal{J}$ matrix and let $S$ be its inverse. We define the minor $S^{(i)}$ via
\begin{equation}
S^{(i)} = \left( (A_{jk} : j,k \in \mathcal{J} \setminus \{i\})\right)^{-1}
\end{equation}
\end{definition}
\begin{lemma}[Resolvent Identities] \label{lem:resolventIdentities}
Let $A$ be an invertible $\mathcal{J} \times \mathcal{J}$ matrix and $S = A^{-1}$.
\begin{enumerate}
\item If $j,k \in \mathcal{J} \setminus \{i\}$,
\begin{equation}
S^{(i)}_{jk} = S_{jk} - \frac{S_{ji} S_{ik}}{S_{ii}}
\end{equation}
\item If $i \neq j$,
\begin{equation}
S_{ij} = -S_{ii} \left(AS^{(i)}\right)_{ij}
\end{equation}
\item We have
\begin{equation}
\frac{1}{S_{ii}} = A_{ii} - (A^*S^{(i)}A)_{ii}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item By the definition of the inverse, it suffices to show that
\begin{equation}
\sum_{l\in \mathcal{J} \setminus\{i\}}\left(S_{jl} - \frac{S_{ji} S_{il}}{S_{ii}}\right) A_{lk} = \delta_{jk}
\end{equation}
But the left-hand side indeed yields
\begin{equation}
\sum_{l\in \mathcal{J} \setminus\{i\}} S_{jl}A_{lk} - \frac{S_{ji}}{S_{ii}} \sum_{l\in \mathcal{J} \setminus\{i\}} S_{il}A_{lk} = (\delta_{jk} -S_{ji}A_{ik}) - \frac{S_{ji}}{S_{ii}} (\delta_{ik} - S_{ii} A_{ik}) = \delta_{jk}
\end{equation}
since we have assumed $k \neq i$.
\item This proof is taken from \cite{SSERG2}. Using part $(1)$, we get
\begin{equation}
\begin{split}
\left(AS^{(i)}\right)_{ij} = \sum_{k\in \mathcal{J} \setminus \{i\}} A_{ik} S^{(i)}_{kj} = \sum_{k\in \mathcal{J} \setminus \{i\}} A_{ik} \left(S_{kj} - \frac{S_{ki} S_{ij}}{S_{ii}}\right) \\
= - A_{ii}S_{ij} - \frac{S_{ij}}{S_{ii}}(1-A_{ii}S_{ii}) = - \frac{S_{ij}}{S_{ii}}
\end{split}
\end{equation}
as desired.
\item This is an immediate consequence of Schur's complement formula, wherein if
\begin{equation}
M = \begin{pmatrix} B & C \\ D & E \end{pmatrix}
\end{equation}
then
\begin{equation}
M^{-1} = \begin{pmatrix} (B - CE^{-1}D)^{-1} & * \\ * & * \end{pmatrix}
\end{equation}
provided all the inverses exist. To see how Schur's complement formula applies, it is helpful to write
\begin{equation}
(A^*S^{(i)}A)_{ii} = (e_i^*A^*)S^{(i)}(Ae_i)
\end{equation}
It is of course crucial that we use the correct definition of matrix multiplication here.
\end{enumerate}
\end{proof}
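The three identities can be checked numerically on a small random symmetric matrix (a sketch with 0-based indices; we use the Schur-complement form $1/S_{ii} = A_{ii} - (A^*S^{(i)}A)_{ii}$ of identity (3)):

```python
import numpy as np

# Verify the resolvent identities for a random real symmetric invertible A.
rng = np.random.default_rng(5)
n, i = 6, 2
B = rng.standard_normal((n, n))
A = B + B.T + 10.0 * np.eye(n)                 # symmetric, safely invertible
S = np.linalg.inv(A)
keep = [k for k in range(n) if k != i]
S_i = np.linalg.inv(A[np.ix_(keep, keep)])     # the minor S^{(i)}
# (1): S^{(i)}_{jk} = S_{jk} - S_{ji} S_{ik} / S_{ii}
rhs1 = S[np.ix_(keep, keep)] - np.outer(S[keep, i], S[i, keep]) / S[i, i]
# (2): S_{ij} = -S_{ii} (A S^{(i)})_{ij}, the sum running over k != i
rhs2 = -S[i, i] * (A[i, keep] @ S_i)
# (3): 1/S_{ii} = A_{ii} - (A^* S^{(i)} A)_{ii}
rhs3 = A[i, i] - A[i, keep] @ S_i @ A[keep, i]
```

Note how the restricted index sets `keep` implement the extended matrix multiplication defined above.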
Next we apply these general matrix algebra facts to our specific resolvent $G$.
\begin{corollary} [Resolvent Identities for $G$] \label{cor:ResIds}
\begin{enumerate}
\item For $s, t \neq r$ we have
\begin{equation}
G^{(r)}_{st} = G_{st} - \frac{G_{sr} G_{rt}}{G_{rr}}
\end{equation}
\item If $\mu\neq \nu \in \mathcal{I}_N$, then
\begin{equation}
G_{\mu\nu} = -G_{\mu\mu} (X^*G^{(\mu)})_{\mu \nu} = -G_{\nu\nu} (G^{(\nu)}X)_{\mu \nu}
\end{equation}
Similarly if $i \neq j \in \mathcal{I}_M$, then
\begin{equation}
G_{ij} = -G_{ii}(XG^{(i)})_{ij} = -G_{jj} (G^{(j)}X^*)_{ij}
\end{equation}
Lastly if $i \in \mathcal{I}_M$ and $\mu \in \mathcal{I}_N$, then
\begin{equation}
G_{i\mu} = -G_{\mu\mu} (G^{(\mu)}X)_{i\mu}, \quad G_{\mu i} = -G_{\mu\mu} (X^*G^{(\mu)})_{\mu i}
\end{equation}
\item We have
\begin{equation}
\frac{1}{G_{\mu\mu}} = -z - (X^*G^{(\mu)}X)_{\mu \mu}
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
These all follow from the lemma; one need only use the correct definition of matrix multiplication, and one should note that many of the lemma's conclusions are insensitive to the diagonal entries of $A$, so that one may see a
$$ \begin{pmatrix} -\Sigma^{-1} & X \\ X^* & 0 \end{pmatrix} $$
when one expects to see a
$$ \begin{pmatrix} -\Sigma^{-1} & X \\ X^* & -zI \end{pmatrix}. $$
\end{proof}
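The first identity of the corollary is an instance of a general fact about inverses of minors: if $S = H^{-1}$ and $H^{(r)}$ denotes $H$ with row and column $r$ removed, then $((H^{(r)})^{-1})_{st} = S_{st} - S_{sr}S_{rt}/S_{rr}$. An illustrative numerical check (the matrix size, seed, and diagonal shift are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 2
H = rng.standard_normal((n, n)) + n * np.eye(n)  # generically invertible
S = np.linalg.inv(H)

keep = [k for k in range(n) if k != r]          # delete row and column r
S_minor = np.linalg.inv(H[np.ix_(keep, keep)])  # inverse of the minor

# Predicted entries S_{st} - S_{sr} S_{rt} / S_{rr} on the kept indices.
predicted = np.array([[S[s, t] - S[s, r] * S[r, t] / S[r, r]
                       for t in keep] for s in keep])
assert np.allclose(S_minor, predicted)
```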
Let us cite one of the main results of \cite{ALLRM}.
\begin{theorem}[Entrywise Local Law] \label{thm:EWLocalLaw}
For any deterministic unit vectors $\mathbf{v},\mathbf{w}\in \mathbb{R}^{\mathcal{I}}$, one has
\begin{equation}
\left\langle \mathbf{v}, (G - \Pi)\mathbf{w} \right\rangle = O_\prec (\Psi)
\end{equation}
where
\begin{equation}
\Psi:= \Psi(z) = \sqrt{\frac{\Imag m(z)}{N\eta}} + \frac{1}{N\eta}
\end{equation}
\end{theorem}
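To get a feeling for the error scale in the local law, one can evaluate $\Psi$ directly. The values of $\Im m(z)$, $N$, and $\eta$ below are purely illustrative and not taken from the text:

```python
import math

def psi(im_m, N, eta):
    """Control parameter Psi(z) = sqrt(Im m(z) / (N eta)) + 1 / (N eta)."""
    return math.sqrt(im_m / (N * eta)) + 1.0 / (N * eta)

# On scales with N * eta large, both terms are small, so the entries of G
# concentrate around those of Pi (hypothetical values).
N = 10**6
for eta in (1.0, 1e-2, 1e-4):
    assert psi(0.5, N, eta) < 0.1
```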
\section{Proof of Theorem \ref{thm:topCornerTrace}} \label{sec:topCornerTrace}
First, we prove a partial result. We omit most details and only show how Section 5 of \cite{ALLRM} may be quickly adapted to our setting.
\begin{lemma} \label{lem:topCornerTraceDiag}
If $\Sigma$ is diagonal, then Theorem \ref{thm:topCornerTrace} holds.
\end{lemma}
\begin{proof}
This proof amounts to adapting Section 5 of \cite{ALLRM} to address the top-left corner of the matrix $G$. We use the notation of \cite{ALLRM} without further comment.
Equation (5.15) of \cite{ALLRM} reads
\begin{equation}
1(\Xi) G_{ii} = 1(\Xi)\frac{-\sigma_i}{1 + m_N \sigma_i + \sigma_i Z_i + O_\prec\left(\sigma_i \Psi_\Theta^2\right)}
\end{equation}
A Taylor expansion of this quantity, justified because $Z_i = O_\prec \left( \Psi_\Theta\right)$ by Lemma 5.2 of \cite{ALLRM}, yields
\begin{equation}
1(\Xi) G_{ii} = 1(\Xi)\left[\frac{-\sigma_i}{1 + m_N \sigma_i } + \frac{\sigma_i}{(1 + m_N \sigma_i)^2}Z_i + O_\prec\left( \sigma_i \Psi_\Theta^2\right) \right]
\end{equation}
Averaging now over $i$ yields
\begin{equation} \label{eq:afterAveraging}
1(\Xi)\frac{1}{M} \sum_i G_{ii} = 1(\Xi) \frac{1}{M} \sum_{i} \frac{-\sigma_i}{1 + m_N \sigma_i} + [Z]_M + O_\prec \left(\Psi_\Theta^2\right)
\end{equation}
Using that $(1 - \mathbb{E}_i) \frac{1}{G_{ii}} = Z_i$, we have by Lemma 5.6 of \cite{ALLRM} that
\begin{equation} \label{eq:ZsubMBound}
[Z]_M = O_\prec\left(\Psi_\Theta^2\right)
\end{equation}
Using equation (3.10) of \cite{ALLRM} to bound $\Theta$, we have $\Psi_\Theta \prec \frac{1}{\sqrt{N\eta}}$, so that our equation \eqref{eq:ZsubMBound} becomes
\begin{equation}
[Z]_M = O_\prec\left((N\eta)^{-1}\right)
\end{equation}
Equation (3.1) of \cite{ALLRM} also yields that $\Xi$ holds with high probability, so that equation \eqref{eq:afterAveraging} yields
\begin{equation}
\frac{1}{M} \sum_i G_{ii} = \frac{1}{M} \sum_{i} \frac{-\sigma_i}{1 + m_N \sigma_i} + O_\prec\left((N\eta)^{-1}\right)
\end{equation}
Furthermore, it is a result of Section 5 of \cite{ALLRM} that
\begin{equation}
\left| m_N - m \right| = O_\prec\left( (N\eta)^{-1}\right)
\end{equation}
so that $m_N$ may be replaced with $m$ (note that $\left| 1 + m(z) \sigma_i \right| \geq \tau$ under our regularity assumptions). Finally, $\frac{1}{M} \sum_{i} \frac{-\sigma_i}{1 + m_N \sigma_i}$ is precisely $\frac{1}{M} \sum_i \Pi_{ii}$, and
$$\sum_i G_{ii} = \Trace z\Sigma^{1/2} R_M\Sigma^{1/2} = z\Trace R_M \Sigma$$
so that we are done.
\end{proof}
\begin{remark}
In the recent paper \cite{VESD}, a result very similar to our Lemma \ref{lem:topCornerTraceDiag} is proven in their equation (3.13), under much more general moment assumptions on $X$; however, the resolvent used in that paper differs slightly from the one used in this paper. We hope to use some of their techniques to weaken some of the moment assumptions in our work.
\end{remark}
\section*{Acknowledgements}
This work was supported by the US Air Force Office of Scientific Research, Lab Task number 19RYCOR036. The views and opinions of this paper do not necessarily reflect the official positions of the Air Force. The public affairs approval number of this document is AFRL-2021-2586.
\bibliographystyle{amsplain}
\section{Introduction}
An important problem in modern number theory is to construct $p$-adic $L$-functions of automorphic representations, in particular due to the arithmetic applications they provide. In \cite{pollack2011}, Pollack and Stevens gave a construction of $p$-adic $L$-functions of modular forms using the theory of overconvergent modular symbols. There are several results for cuspidal automorphic representations using their methods, which on the other hand, have not been widely used in the non-cuspidal cases.
In \cite{bellaiche2015p}, Bellaïche and Dasgupta introduced the notions of \textit{$C$-cuspidality} and \textit{partial modular symbols} to construct the $p$-adic $L$-function of Eisenstein series. In \cite{chris2017}, Williams used \textit{Bianchi
modular symbols} to construct the $p$-adic $L$-function of cuspidal Bianchi modular forms, that is, automorphic forms for $\mathrm{GL}_2$ over an imaginary quadratic field $K$. In this article, we construct the $p$-adic $L$-function of non-cuspidal Bianchi modular forms by adapting and combining the methods of these two works. Moreover, we show a connection between our construction in the case of non-cuspidal base change Bianchi modular forms and Katz $p$-adic $L$-functions.
Non-cuspidal Bianchi modular forms present two important differences from cuspidal Bianchi modular forms and from non-cuspidal elliptic modular forms:
a) the weight $(k,\ell)$ of a non-cuspidal Bianchi modular form is not necessarily parallel (in the cuspidal case the weight is parallel, i.e., $k=\ell$);
b) Bianchi modular forms take values in finite-dimensional vector spaces, thus non-cuspidal forms have several constant terms, unlike non-cuspidal elliptic modular forms.
\subsection{$C$-cuspidal Bianchi modular forms and partial symbols}{\label{s1 intro}}
For our study of non-cuspidal Bianchi modular forms, we introduce in section \ref{Fourier} the notion of \textit{quasi-cuspidality} and also generalize to the Bianchi setting the notion of \textit{$C$-cuspidality} given in \cite{bellaiche2015p}. These properties are related to the vanishing of constant terms of Fourier expansions at suitable cusps and satisfy the following relation for Bianchi modular forms with level at $p$:
\begin{equation*}
\{\text{cuspidal}\}\subset\{\text{quasi-cuspidal}\}\subset\{C\textrm{-cuspidal}\}\subset\{\text{Bianchi modular forms}\}.
\end{equation*}
For a cuspidal Bianchi modular form $\mathcal{F}$, there exists a formula for the complex $L$-function in terms of integrals of $\mathcal{F}$. For $C$-cuspidal Bianchi modular forms, we can prove an analogous formula (see Theorem \ref{T4.6} for the precise statement).
\begin{theorem}
Let $\mathcal{F}$ be a $C$-cuspidal Bianchi modular form and $\psi$ be a Hecke character with suitable conductor and infinity type. Then there is an integral formula for the $L$-function of $\mathcal{F}$ twisted by $\psi$.
\end{theorem}
In section \ref{S3}, we develop the theory of \textit{partial Bianchi modular symbols}. These are algebraic analogues of $C$-cuspidal Bianchi modular forms that are easier to study $p$-adically. To define such symbols, we generalize the classical partial modular symbols introduced in \cite{bellaiche2015p} and adapt the Bianchi modular symbols of parallel weight used in \cite{chris2017}. In Proposition \ref{attach partial}, we prove that we can attach, in a Hecke-equivariant way, a partial Bianchi modular symbol to a $C$-cuspidal Bianchi modular form.
To link partial Bianchi modular symbols with spaces of $p$-adic distributions, we introduce the \textit{overconvergent partial Bianchi modular symbols} in section \ref{section overconvergent} and generalizing \cite{chris2017}, we obtain a control theorem (see Theorem \ref{partialcontrol}).
\begin{theorem}[Partial Bianchi control theorem]{\label{partialcontrolintro}}
For each prime $\mathfrak{p}$ above $p$, let $\lambda_{\mathfrak{p}}\in L^\times$ and suppose that the $p$-adic valuation of each $\lambda_{\mathfrak{p}}$ is \textit{sufficiently small}. Then the specialization map from overconvergent partial modular symbols to partial modular symbols restricted to the simultaneous $\lambda_{\mathfrak{p}}$-eigenspaces of the $U_{\mathfrak{p}}$ operators is an isomorphism.
\end{theorem}
In section \ref{S4.4}, we construct the $p$-adic $L$-function of a $C$-cuspidal Bianchi eigenform $\mathcal{F}$ of weight $(k,\ell)$ and level $\Omega_0(\mathfrak{n})$, with $U_\mathfrak{p}$-eigenvalues $\lambda_\mathfrak{p}$ whose $p$-adic valuations are as in Theorem \ref{partialcontrolintro}. For this, we first attach to $\mathcal{F}$ a complex-valued partial Bianchi modular eigensymbol $\phi_\mathcal{F}$ using Proposition \ref{attach partial}. Then, by fixing an isomorphism $\iota$ between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$, we view the values of $\phi_\mathcal{F}$ $p$-adically and lift it uniquely to an overconvergent partial Bianchi modular eigensymbol $\Psi_\mathcal{F}$ using the control Theorem \ref{partialcontrolintro}. After taking the Mellin transform of $\Psi_\mathcal{F}$, we obtain the following theorem (see Theorem \ref{C-cuspidal p-adic} for the precise statement):
\begin{theorem}\label{C-cuspidal p-adic intro}
There exists a locally analytic distribution $L_p^{\iota}(\mathcal{F},-)$ on the ray class group $\mathrm{Cl}_K(p^{\infty})$, such that for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}|(p^\infty)$ and infinity type $0 \leqslant (q,r) \leqslant (k,\ell)$, we have
\begin{equation*}
L_p^{\iota}(\mathcal{F},\psi_{p-\mathrm{fin}})=(*) \Lambda(\mathcal{F},\psi),
\end{equation*}
where $\psi_{p-\mathrm{fin}}$ is the $p$-adic avatar of $\psi$, $(*)$ is an explicit factor depending on $\psi$ and $\Lambda(\mathcal{F},-)$ is the normalized $L$-function of $\mathcal{F}$.
The distribution $L_p^{\iota}(\mathcal{F},-)$ satisfies suitable growth conditions (see section \ref{admissible}) and is hence unique.
\end{theorem}
By Remark \ref{turn-C-cuspi}, we can construct the $p$-adic $L$-function of certain non-cuspidal Bianchi modular forms by turning them into $C$-cuspidal forms and applying the previous theorem.
\subsection{Non-cuspidal base change and Katz $p$-adic $L$-functions}
A case of historical interest that allows us to use the methods in section \ref{s1 intro} for the construction of its $p$-adic $L$-function is the non-cuspidal base change situation.
Suppose that $p$ splits in $K$ and let $\varphi$ be a Hecke character of $K$ with conductor $\mathfrak{m}$ coprime to $p$ and infinity type $(-k-1,0)$ with $k\geqslant0$. Denote by $f_{\varphi}$ the elliptic CM modular form induced by $\varphi$ and let $f_{\varphi/K}$ be the base change to $K$ of $f_{\varphi}$, which is known to be a non-cuspidal Bianchi modular form of weight $(k,k)$. In Proposition \ref{basechangequasi}, we prove that $f_{\varphi/K}$ is quasi-cuspidal. Let $f_{\varphi/K}^p$ be the ordinary $p$-stabilization of $f_{\varphi/K}$, which is a quasi-cuspidal Bianchi modular form by Remark \ref{basechangepstabilization}. In particular, since $f_{\varphi/K}^p$ is a small slope $C$-cuspidal Bianchi modular form, by Theorem \ref{C-cuspidal p-adic intro}, we can obtain its $p$-adic $L$-function $L_p^{\iota}(f_{\varphi/K}^p,-)$.
\begin{remark}
Our interest in the Bianchi modular form $f_{\varphi/K}^p$ stems from the fact that we can go further in the construction of its $p$-adic $L$-function. More precisely, we can avoid the use of an isomorphism $\iota$ between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$ and, additionally, we can factorize its $p$-adic $L$-function as the product of two Katz $p$-adic $L$-functions.
\end{remark}
In section \ref{basechangeLfunction}, we study the $L$-function of $f_{\varphi/K}$ and prove that it factors as the product of two $L$-functions of Hecke characters in Lemma \ref{Prop 9.1}. Then, we combine such factorization with an algebraicity result of $L$-functions of Hecke characters, to show the existence of a \textit{complex period} $\Omega_{f_{\varphi/K}}$, which allows us to prove algebraicity of critical $L$-values of $f_{\varphi/K}$ in Proposition \ref{period base change}.
Let $\phi_{f_{\varphi/K}}$ be the complex-valued Bianchi modular symbol attached to $f_{\varphi/K}^p$ in Proposition \ref{attach partial}. By defining $\phi'_{f_{\varphi/K}}:=\phi_{f_{\varphi/K}}/\Omega_{f_{\varphi/K}}$, we prove in Proposition \ref{Prop6.7} that $\phi'_{f_{\varphi/K}}$ has algebraic values and consequently we can view it as having $p$-adic values. Finally, using the control Theorem \ref{partialcontrolintro}, we can lift $\phi'_{f_{\varphi/K}}$ to an overconvergent Bianchi modular symbol and, taking its Mellin transform, we obtain the following (see Theorem \ref{Cor8.3}):
\begin{theorem}{\label{C1.2}}
There exists a unique measure $L_p(f_{\varphi/K}^p,-)$ on the ray class group $\mathrm{Cl}_K(p^{\infty})$ such that for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}|p^\infty$ and infinity type $0 \leqslant (q,r) \leqslant (k,k)$, we have
\begin{equation*}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=(*)\Lambda(f_{\varphi/K}^p,\psi).
\end{equation*}
\end{theorem}
\begin{remark}
The measures $L_p(f_{\varphi/K}^p,-)$ and $L_p^\iota(f_{\varphi/K}^p,-)$ obtained from Theorems \ref{C1.2} and \ref{C-cuspidal p-adic intro}, respectively, can be related: in Proposition \ref{equality of measures} we prove \begin{equation*}
L_p(f_{\varphi/K}^p,-)=\frac{L_p^\iota(f_{\varphi/K}^p,-)}{\iota(\Omega_{f_{\varphi/K}})}.
\end{equation*}
\end{remark}
The factorization of the $L$-function of $f_{\varphi/K}$ in Lemma \ref{Prop 9.1} translates to the $p$-adic side. In fact, consider the $p$-adic $L$-function of $f_{\varphi/K}$ of Theorem \ref{C1.2} and the $p$-adic $L$-function constructed by Katz in \cite{katz1978p} and generalized by Hida and Tilouine in \cite{hida1993anti}. We obtain the following (see Theorem \ref{T9.4}):
\begin{theorem}
Under the hypotheses of Theorem \ref{C1.2}, for all $p$-adic characters $\kappa$ we have
\begin{equation*}
L_p(f_{\varphi/K}^p,\kappa)=\frac{1}{\Omega_p(A)^{2k+2}}L_{p,\rm{Katz}}(\varphi_{p-\rm{fin}}^c\kappa \chi_p) L_{p,\rm{Katz}}(\varphi_{p-\rm{fin}}^c\kappa^c \chi_p)
\end{equation*}
where $\chi_p$ is the $p$-adic avatar of the adelic norm character and $\Omega_p(A)$ is a $p$-adic period.
\end{theorem}
\section*{Acknowledgements}
I would like to thank my PhD advisor Daniel Barrera for suggesting this topic to me, as well as for the many conversations we have had on the subject. Special thanks to Chris Williams for kindly answering my questions. I also thank Antonio Cauchi, Guhan Venkat and Eduardo Friedman for helpful comments that led to an improvement of this article. This work was funded by the National Agency for Research and Development (ANID)/Scholarship Program/BECA DOCTORADO NACIONAL/2018 - 21180506.
\section{$C$-cuspidal Bianchi modular forms}
In this section, we recall basic properties of Bianchi modular forms and define suitable vanishing conditions on their constant terms, called $C$-cuspidality. In section \ref{L-C-cuspi} we study the $L$-functions of $C$-cuspidal forms.
\subsection{Notations} \label{notations}
Let $p$ be a rational prime and fix throughout the paper embeddings $\iota_\infty:\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}$ and $\iota_p:\overline{\mathbb{Q}}\hookrightarrow\overline{\mathbb{Q}}_p$; note that the latter fixes a $p$-adic valuation $v_p$ on $\overline{\mathbb{Q}}_p$. Let $K$ be an imaginary quadratic field with discriminant $-D$, ring of integers $\mathcal{O}_K$ and different ideal $\mathcal{D}$ generated by $\delta=\sqrt{-D}$. Let $\mathfrak{n}=(p)\mathfrak{m}$ be an ideal of $\mathcal{O}_K$ with $\mathfrak{m}$ coprime to $(p)$. Denote the two embeddings of $K$ into $\mathbb{C}$ by $id$ and $c$ and henceforth write $(k,\ell)$ for $k \cdot id+ \ell\cdot c\in\mathbb{Z}[{id,c}]$, with $k,\ell\geqslant0$. Note that we are implicitly viewing $K\hookrightarrow\overline{\mathbb{Q}}$ under $\iota_\infty^{-1}\circ id$. Denote by $K_{\mathfrak{q}}$ the completion of $K$ with respect to the prime $\mathfrak{q}$ of $K$, by $\mathcal{O}_{\mathfrak{q}}$ the ring of integers of $K_{\mathfrak{q}}$, and fix a uniformizer $\pi_\mathfrak{q}$ at $\mathfrak{q}$. Denote the adele ring of $K$ by $\mathbb{A}_K=\mathbb{C}\times\mathbb{A}_K^f$, where $\mathbb{A}_K^f$ are the finite adeles. Furthermore, denote the class group of $K$ by $\mathrm{Cl}(K)$ and the class number of $K$ by $h$, and, once and for all, fix a set of representatives $I_1,..., I_h$ for $\mathrm{Cl}(K)$, with $I_1=\mathcal{O}_K$ and each $I_i$ for $2\leqslant i \leqslant h$ integral and prime, with each $I_i$ coprime to $\mathfrak{n}$ and $\mathcal{D}$. (We will assume that for any Hecke character of conductor $\mathfrak{f}$ considered in the sequel, these representatives are also coprime to $\mathfrak{f}$.)
Let $V_{n}(R)$ denote the space of homogeneous polynomials over a ring $R$ in two variables of degree $n\geqslant0$. Note that $V_{n}(\mathbb{C})$ is an irreducible complex right representation of $\mathrm{SU}_2(\mathbb{C})$, denote it by $\rho_{n}$.
For a general Hecke character $\psi$ of $K$ we denote by $\psi_{\infty}$, $\psi_f$ and $\psi_{\mathfrak{q}}$ the restriction of $\psi$ to $\mathbb{C}$, $\mathbb{A}_K^f$ and $K_{\mathfrak{q}}^{\times}$ respectively.
\subsection{Bianchi modular forms} \label{background}
Let $\Omega_0(\mathfrak{n})$ be the maximal compact subgroup of $\mathrm{GL_2}(\mathcal{O}_K\otimes_\mathbb{Z}\widehat{\mathbb{Z}})$ of matrices $\smallmatrixx{*}{*}{0}{*}$ modulo $\mathfrak{n}$. Let $\varphi$ be a Hecke character, with infinity type $(-k,-\ell)$ and conductor dividing $\mathfrak{n}$. For $u_f=\smallmatrixx{a}{b}{c}{d}\in\Omega_0(\mathfrak{n})$ we set $\varphi_{\mathfrak{n}}(u_f)=\varphi_{\mathfrak{n}}(d)=\prod_{\mathfrak{q}|\mathfrak{n}}\varphi_{\mathfrak{q}}(d_{\mathfrak{q}})$.
\begin{definition}
We say a function $\mathcal{F}:\mathrm{GL_2}(\mathbb{A}_K) \rightarrow V_{k+\ell+2}(\mathbb{C})$ is a \textit{Bianchi modular form} of weight $(k,\ell)$, level $\Omega_0(\mathfrak{n})$ and central action $\varphi$ if it satisfies:
(i) $\mathcal{F}$ is left-invariant under $\mathrm{GL_2}(K)$;
(ii) $\mathcal{F}(zg)=\varphi(z)\mathcal{F}(g)$ for $z\in \mathbb{A}_K^\times \cong Z(\mathrm{GL_2}(\mathbb{A}_K))$, where $Z(G)$ denote the center of the group $G$;
(iii) $\mathcal{F}(gu)=\varphi_\mathfrak{n}(u_f)\mathcal{F}(g)\rho_{k+\ell+2}(u_\infty)$ for $u=u_f\cdot u_\infty \in \Omega_0(\mathfrak{n})\times\mathrm{SU_2}(\mathbb{C})$;
(iv) $\mathcal{F}$ is an eigenfunction of the operators $D_\sigma$, for $\sigma=id,c$ (the two embeddings of $K$ into $\mathbb{C}$), where $D_\sigma /4$ denotes a component of the Casimir operator in the Lie algebra $\mathfrak{sl}_2(\mathbb{C})\otimes_{\mathbb{R}}\mathbb{C}$, and where we consider $\mathcal{F}(g_{\infty}g_f)$ as a function of $g_\infty \in \mathrm{GL_2}(\mathbb{C}).$
The space of such functions will be denoted by $\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)$. We say $\mathcal{F}$ is a \textit{cuspidal} Bianchi modular form if also satisfies:
(v) for all $g \in \mathrm{GL_2}(\mathbb{A}_K)$
\begin{equation*}
\int_{K\backslash\mathbb{A}_K}\mathcal{F}\left(\matrixx{1}{u}{0}{1}g\right)du=0,
\end{equation*}
where $du$ is the Lebesgue measure on $\mathbb{A}_K$. The space of such functions will be denoted by $S_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)$.
\label{D1.2}
\end{definition}
\begin{remark}
From \cite[\S2.5, Cor 2.2]{hida1994critical} we have $S_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)=0$ if $k\neq \ell$, i.e., all non-trivial cuspidal Bianchi modular forms have parallel weight $(k,k)$.
\end{remark}
Recall the set of representatives $I_1,..., I_h$ for $\mathrm{Cl}(K)$ fixed in section \ref{notations} and denote by $\pi_i$ their corresponding fixed uniformizers. Set $g_i=\smallmatrixx{1}{0}{0}{t_i}$, where $t_1=1$ and for each $i\geqslant2$, define $t_i=(1,...,1,\pi_i,1,...)\in\mathbb{A}_K^{\times}$. Since $\mathrm{GL_2}(\mathbb{A}_K)=\coprod_{i=1}^{h}\mathrm{GL_2}(K)\cdot g_i\cdot [\mathrm{GL_2}(\mathbb{C})\times\Omega_0(\mathfrak{n})]$, a Bianchi modular form $\mathcal{F}\in\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)$ descends to a collection of $h$ functions
\begin{align}\label{descendGL2C}
F^i: \mathrm{GL_2}(\mathbb{C})&\longrightarrow V_{k+\ell+2}(\mathbb{C}) \nonumber \\
g &\longmapsto \mathcal{F}(g_ig).
\end{align}
Let $\mathcal{H}_3:=\mathbb{C}\times\mathbb{R}_{>0}$ be the hyperbolic space and $\mathrm{B}=\left\{ \smallmatrixx{t}{z}{0}{1}: z\in\mathbb{C}, t\in\mathbb{R}_{>0} \right\}$. Since $\mathrm {GL_2}(\mathbb{C})=Z(\mathrm{GL_2}(\mathbb{C}))\cdot \mathrm{B} \cdot \mathrm{SU_2}(\mathbb{C})$ and $\mathrm{B}\cong \mathcal{H}_3$, we can descend further using (ii) and (iii) in Definition \ref{D1.2} to obtain $h$ functions
\begin{align}
f^i&: \mathcal{H}_3\longrightarrow V_{k+\ell+2}(\mathbb{C}) \nonumber \\
& (z,t) \longmapsto t^{-1}F^i\smallmatrixx{t}{z}{0}{1}.
\label{e.3.1}
\end{align}
Let $\gamma=\smallmatrixx{a}{b}{c}{d}\in
\Gamma_i(\mathfrak{n}):=\mathrm{SL_2}(K)\cap g_i \Omega_0(\mathfrak{n}) g_i^{-1}\mathrm{GL_2}(\mathbb{C})$, then each $f^i$ satisfies the automorphy condition
\begin{equation}
f^i(\gamma\cdot(z,t))=\varphi_\mathfrak{n}(d)^{-1}f^i(z,t)\rho_{k+\ell+2}(J(\gamma;(z,t))),
\label{e2.4}
\end{equation}
where $\gamma\cdot(z,t)=\left(\frac{(az+b)\overline{(cz+d)}+a\overline{c}|t|^2}{|cz+d|^2+|ct|^2}, \frac{|ad-bc|t}{|cz+d|^2+|ct|^2} \right)$ and $J(\gamma;(z,t)):=\smallmatrixx{cz+d}{\overline{ct}}{-ct}{\overline{cz+d}}$. Thus $f^i\in \mathcal{M}_{(k,\ell)}(\Gamma_i(\mathfrak{n}),\varphi_\mathfrak{n}^{-1})$, the space of \textit{Bianchi modular forms} on $\mathcal{H}_3$ satisfying the automorphy condition (\ref{e2.4}). If $\mathcal{F}\in S_{(k,k)}(\Omega_0(\mathfrak{n}),\varphi)$ is cuspidal then we say $f^i$ is a cuspidal Bianchi modular form and the space of such forms is denoted by $S_{(k,k)}(\Gamma_i(\mathfrak{n}),\varphi_\mathfrak{n}^{-1})$.
\begin{definition}\label{D2.3}
Let $\gamma\in \mathrm{GL_2}(\mathbb{C})$, for $f^i\in \mathcal{M}_{(k,\ell)}(\Gamma_i(\mathfrak{n}),\varphi_\mathfrak{n}^{-1})$, define the function $f^i|_\gamma:\mathcal{H}_3\rightarrow V_{k+\ell+2}(\mathbb{C})$ by
\begin{equation}
(f^i|_\gamma)(z,t)=det(\gamma)^{-k/2}\overline{det(\gamma)}^{-\ell/2}f^i(\gamma \cdot(z,t)) \rho_{k+\ell+2}^{-1}\left(J\left(\frac{\gamma}{\sqrt{det(\gamma)}} ;(z,t)\right)\right).
\end{equation}
\end{definition}
\begin{remark}
Note $f^i\in \mathcal{M}_{(k,\ell)}(\Gamma_i(\mathfrak{n}),\varphi_\mathfrak{n}^{-1})$ satisfies:
i) $f^i|_g(0,1)=F^i(g)$ for $g\in\mathrm{GL}_2(\mathbb{C})$, and for $g=\smallmatrixx{t}{z}{0}{1}\in\mathrm{B}$ we obtain (\ref{e.3.1}).
ii) $\left(f^i|_\gamma\right)(z,t)=\varphi_\mathfrak{n}(d)^{-1}f^i(z,t)$ for $\gamma=\smallmatrixx{a}{b}{c}{d} \in \Gamma_i(\mathfrak{n})$.
\end{remark}
\subsection{Fourier expansion and cuspidal conditions}{\label{Fourier}}
Consider the set $J_{(k,\ell)}=\{(\pm(k+1),\pm(\ell+1))\}$ and for $j=(j_1,j_2)\in J_{(k,\ell)}$ define
\begin{equation*}
h(j)=\frac{k+\ell+2}{2}+\frac{j_2-j_1}{2},\;\;\text{and}\;\;
\iota(j)=\left(\frac{1}{2}[j_1-(k+1)],\frac{1}{2}[j_2-(\ell+1)]\right).
\end{equation*}
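As a quick combinatorial check (an illustrative script, not part of the text), the set of values $\{h(j): j\in J_{(k,\ell)}\}$ is exactly $\{0,\,k+1,\,\ell+1,\,k+\ell+2\}$:

```python
from itertools import product

def h(j, k, l):
    """h(j) = (k+l+2)/2 + (j2-j1)/2 for j = (j1, j2); always an integer here."""
    j1, j2 = j
    return (k + l + 2 + j2 - j1) // 2

# Verify {h(j) : j in J_{(k,l)}} = {0, k+1, l+1, k+l+2} for small weights.
for k, l in product(range(4), repeat=2):
    J = {(s1 * (k + 1), s2 * (l + 1)) for s1, s2 in product((1, -1), repeat=2)}
    assert {h(j, k, l) for j in J} == {0, k + 1, l + 1, k + l + 2}
```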
If $\mathcal{F}:\mathrm{GL_2}(\mathbb{A}_K) \rightarrow V_{k+\ell+2}(\mathbb{C})$ is a Bianchi modular form of weight $(k,\ell)$, level $\Omega_0(\mathfrak{n})$ and central action $\varphi$, then $\mathcal{F}$ has a Fourier expansion given by (see \cite[Thm 6.7]{hida1994critical}):
\begin{align}\label{e.2.1}
\mathcal{F}\left[\matrixx{t}{z}{0}{1}\right]=|t|_{\mathbb{A}_K}\bigg[&\sum_{j\in J_{(k,\ell)}}t_\infty^{\iota(j)}\binom{k+\ell+2}{h(j)}c_j(t\mathcal{D},\mathcal{F})X^{k+\ell+2-h(j)}Y^{h(j)}\\
&+ \sum_{\alpha \in K^\times} c(\alpha t \mathcal{D},\mathcal{F})W(\alpha t_\infty) e_K(\alpha z)\bigg], \nonumber
\end{align}
where
\begin{itemize}
\item[i)] The Fourier coefficients $c_j(\cdot,\mathcal{F})$ and $c(\cdot,\mathcal{F})$ are functions on the fractional ideals of $K$, with $c_j(I,\mathcal{F})=c(I,\mathcal{F})=0$ for $I$ not integral,
\item[ii)] $e_K$ is an additive character of $K\backslash \mathbb{A}_K$ defined by
\begin{equation*}
e_K=\left( \prod_{\mathfrak{q}\text{ prime}} (e_q \circ \mathrm{Tr}_{K_\mathfrak{q}/\mathbb{Q}_q})\right)\cdot (e_\infty \circ \mathrm{Tr}_{\mathbb{C}/\mathbb{R}}),
\end{equation*}
for
\begin{equation*}
e_q\left( \sum_j d_jq^j \right)=e^{-2\pi i \sum_{j<0} d_jq^j} \;\;\; \mathrm{and} \;\;\; e_\infty(r)=e^{2\pi ir};
\end{equation*}
and
\item[iii)] $W:\mathbb{C}^\times \rightarrow V_{k+\ell+2}(\mathbb{C})$ is the Whittaker function
\begin{equation*}
W(s):= \sum_{n=0}^{k+\ell+2} \binom{k+\ell+2}{n}\left( \frac{s}{i|s|} \right)^{\ell+1-n} K_{n-\ell-1}(4 \pi |s|)X^{k+\ell+2-n}Y^n,
\end{equation*}
where $K_n(x)$ is a Bessel function.
\end{itemize}
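The Whittaker function can be evaluated numerically from the displayed formula. The sketch below implements $K_\nu$ via the integral representation $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$; all parameter values are hypothetical and the code is illustrative only.

```python
import numpy as np
from math import comb, pi

def bessel_k(nu, x, tmax=20.0, steps=20000):
    """K_nu(x) via its integral representation (valid for x > 0; K_{-nu} = K_nu)."""
    t = np.linspace(0.0, tmax, steps)
    f = np.exp(-x * np.cosh(t)) * np.cosh(nu * t)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule

def whittaker_coeffs(s, k, l):
    """Coefficients of X^{k+l+2-n} Y^n, n = 0..k+l+2, in W(s) per the formula."""
    out = []
    for n in range(k + l + 3):
        phase = (s / (1j * abs(s))) ** (l + 1 - n)
        out.append(comb(k + l + 2, n) * phase * bessel_k(n - l - 1, 4 * pi * abs(s)))
    return np.array(out)

# Hypothetical argument and weight: s with |s| = 0.5, weight (k, l) = (1, 2).
w = whittaker_coeffs(0.3 + 0.4j, k=1, l=2)
assert w.shape == (6,) and np.all(np.isfinite(w))
```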
\begin{remark}\label{Remark1.8}
i) Note that $W(s)$ is not symmetric in $k$ and $\ell$; this comes from the definition of the Whittaker function in \cite[(6.1)]{hida1994critical}.
\noindent ii) Let $\mathcal{F}=\sum_{n=0}^{k+\ell+2}\mathcal{F}_nX^{k+\ell+2-n}Y^n$ be a Bianchi modular form, then by (\ref{e.2.1}), the constant term in the Fourier expansion of $\mathcal{F}_n$ is trivial if $n\notin\{h(j)|j\in J_{(k,\ell)}\}=\{0,k+1,\ell+1,k+\ell+2\}$.
\end{remark}
The Fourier expansion of $\mathcal{F}$ descends to $\mathcal{H}_3$ by
\begin{equation}\label{e.3.2}
f^i\left((z,t); \matrixxx{X}{Y} \right)=\sum_{n=0}^{k+\ell+2}f_n^i(z,t) X^{k+\ell+2-n}Y^{n},
\end{equation}
where
\begin{align*}
f_n^i(z,t)&=\sum_{j\in J_{(k,\ell)}}\left[t^{\frac{j_1+j_2-k-\ell}{2}}\binom{k+\ell+2}{n}c_j(t_i\mathcal{D})\right]\delta_{n,h(j)}\\
&+|t_i|_{\mathbb{A}_K} t\binom{k+\ell+2}{n}\sum_{\alpha\in K^\times} \left[ c(\alpha t_i\mathcal{D})\left(\frac{\alpha}{i|\alpha|}\right)^{\ell+1-n} K_{n-\ell-1}(4\pi|\alpha|t)e^{2\pi i (\alpha z + \overline{\alpha z})}\right]
\end{align*}
with $j=(j_1,j_2)\in J_{(k,\ell)}$ as above and $\delta_{n,h(j)}=1$ if $n=h(j)$ and $\delta_{n,h(j)}=0$ otherwise.
Note that to ease notation, we have written $c_j(t_i\mathcal{D})$ and $c(\alpha t_i \mathcal{D})$ instead of $c_j(t_i\mathcal{D},\mathcal{F})$ and $c(\alpha t_i\mathcal{D},\mathcal{F})$.
For each $i=1,...,h$, equation (\ref{e.3.2}) may be thought of as the Fourier expansion of $f^i$ at the cusp at infinity; by Remark \ref{Remark1.8}, the constant term in the Fourier expansion of $f^i_n$ is trivial if $n\notin\{0,k+1,\ell+1,k+\ell+2\}$.
We must consider Fourier expansions at all the ``$K$-rational'' cusps $\mathbb{P}^1(K)=K\cup\{\infty\}$. For this, let $\sigma\in\mathrm{GL}_2(K)$ send $\infty$ to the cusp $s$. For each $i=1,..., h$, since $f^i\in \mathcal{M}_{(k,\ell)}(\Gamma_i(\mathfrak{n}),\varphi_\mathfrak{n}^{-1})$, we have $f^i|_{\sigma}\in \mathcal{M}_{(k,\ell)}(\sigma^{-1}\Gamma_i(\mathfrak{n})\sigma,\varphi_{\mathfrak{n}}^{-1})$, and hence $f^i|_{\sigma}$ has a Fourier expansion as in (\ref{e.3.2}).
\begin{definition}
We say that $f^i$ \textit{vanishes} at the cusp $s$ if $f^i|_{\sigma}$ has trivial constant term, and \textit{quasi-vanishes} at the cusp $s$ if $(f^i|_{\sigma})_n$ has trivial constant term for $1\leqslant n\leqslant k+\ell+1$.
\end{definition}
\begin{remark}
i) The properties of vanishing and quasi-vanishing at the cusp $s$ are well-defined, i.e., independent of the choice of $\sigma$; for the vanishing case see \cite[\S 6.2.2]{bygott1998modular}, and note that the same argument works for quasi-vanishing.
ii) Let $\mathcal{F}\in S_{(k,k)}(\Omega_0(\mathfrak{n}),\varphi)$ be a cuspidal Bianchi modular form; then the cuspidal condition (v) in Definition \ref{D1.2} is equivalent to the vanishing of $f^i$ at all cusps for each $1\leqslant i \leqslant h$ (see \cite[Prop 3.2]{zhao1993certain}).
\end{remark}
\begin{definition}
We say that a Bianchi modular form $\mathcal{F}$ is \textit{quasi-cuspidal} if $f^i$ quasi-vanishes at all cusps for $1\leqslant i \leqslant h$.
\end{definition}
Recall that $\mathfrak{n}=(p)\mathfrak{m}$ with $\mathfrak{m}$ coprime to $(p)$ and define for each $i=1,..., h$ the set of cusps
\begin{equation*}
C_i:=\Gamma_i(\mathfrak{m})\infty\cup\Gamma_i(\mathfrak{m})0.
\end{equation*}
Since $\Gamma_i(\mathfrak{m})=\{\smallmatrixx{a}{b}{c}{d}\in\mathrm{SL}_2(K): b\in I_i,\;c\in\mathfrak{m} I_i^{-1}\}$, each
$C_i\subset\mathbb{P}^1(K)$ contains $\infty$ and elements $\frac{x}{y}\in K$ with $x\in I_i$ and either $y\in\mathfrak{m}$ or $y\in(\mathcal{O}_K/\mathfrak{m})^\times$.
\begin{definition}{\label{Def 2.9}}
We say that $f^{i}$ is \textit{$C_i$-cuspidal} if it quasi-vanishes at all cusps in $C_i$.
\end{definition}
The previous definition of $C_i$-cuspidality differs from the one given in \cite{bellaiche2015p} for modular forms. We are not asking for the vanishing of $f^i$ at the cusps in $C_i$; instead, we only require quasi-vanishing, i.e., we do not care about the vanishing of the functions $f^i_0$ and $f^i_{k+\ell+2}$. The motivation for considering quasi-vanishing instead of vanishing will become clear in Theorem \ref{T4.6}.
To state $C_i$-cuspidality for all $i$ as a property of $\mathcal{F}$, we write $C:=(C_1,...,C_h)$ and introduce the notion of $C$-cuspidality.
\begin{definition}{\label{C-cusp}}
We say that $\mathcal{F}$ is \textit{$C$-cuspidal} if $f^i$ is $C_i$-cuspidal for $i=1,..., h$.
\end{definition}
\begin{remark}\label{rem1.11}
Note that for Bianchi modular forms with level at $p$ we have
\begin{equation*}
\{\text{cuspidal}\}\subset\{\text{quasi-cuspidal}\}\subset\{C\text{-cuspidal}\}\subset\{\text{Bianchi modular forms}\}.
\end{equation*}
\end{remark}
For $\mathfrak{q}\subset\mathcal{O}_K$ prime, consider our fixed uniformizer $\pi_\mathfrak{q}$ of $K_\mathfrak{q}$ as an element in $\mathbb{A}_K^{f,\times}$ trivial at every place $\neq\mathfrak{q}$. For each $\mathfrak{q}$ there is a Hecke operator acting on $\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)$ given by the double coset $[\Omega_0(\mathfrak{n})\smallmatrixx{1}{0}{0}{\pi_\mathfrak{q}}\Omega_0(\mathfrak{n})]$. These operators are denoted by $T_\mathfrak{q}$ if $\mathfrak{q}\nmid\mathfrak{n}$ and $U_\mathfrak{q}$ otherwise, and are all independent of choices of representatives. An eigenform is a Bianchi modular form that is a simultaneous eigenvector for all those operators.
We can describe the action of Hecke operators on each Bianchi modular form $f^i$ for principal ideals, but when the ideals are not principal, we need to use the whole collection $(f^1,..., f^h)$.
Recall our fixed representatives $I_1,...,I_h$ for the class group and consider a prime ideal $\mathfrak{q}\nmid\mathfrak{n}$; then for each $i \in \{1,...,h\}$ there is a unique $j_i\in\{1,...,h\}$ such that $\mathfrak{q} I_i =(\alpha_i)I_{j_i}$, for some $\alpha_i\in K$. Then $T_\mathfrak{q}$ acts on the components of $(f^1,...,f^h)$ by double cosets via
\begin{equation}{\label{e.1.19}}
\mathcal{F}|_{T_\mathfrak{q}}=(f^1,...,f^h)|_{T_\mathfrak{q}}=\left(f^{j_1}|_{\left[\Gamma_{j_1}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_1}\Gamma_1(\mathfrak{n})\right]},...,f^{j_h}|_{\left[\Gamma_{j_h}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_h}\Gamma_h(\mathfrak{n})\right]}\right).
\end{equation}
Note that if $\mathcal{F}$ is quasi-cuspidal then $\mathcal{F}|_{T_\mathfrak{q}}$ is quasi-cuspidal for all Hecke operators $T_\mathfrak{q}$, but for $C$-cuspidal forms we have a more subtle result.
\begin{definition}
Let $\mathcal{H}_{\mathfrak{n},p}$ denote the $\mathbb{Q}$-algebra generated by the Hecke operators $\{T_\mathfrak{q} : (\mathfrak{q},\mathfrak{n})=1\}$ and $\{U_\mathfrak{p} :\mathfrak{p}|p\}$.
\end{definition}
\begin{proposition}{\label{proposition1.3}}
Let $\mathfrak{n}=(p)\mathfrak{m}$ with $(\mathfrak{m},(p))=1$. Then the algebra $\mathcal{H}_{\mathfrak{n},p}$ preserves the space of $C$-cuspidal forms in $\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}),\varphi)$.
\end{proposition}
\begin{proof}
By (\ref{e.1.19}) we have to show that for every prime $\mathfrak{q}\nmid\mathfrak{m}$ with $\mathfrak{q} I_i =(\alpha_i)I_{j_i}$, the function $f^{j_i}|_{\left[\Gamma_{j_i}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_i}\Gamma_i(\mathfrak{n})\right]}$ is $C_{i}$-cuspidal. For this, take $s_i\in C_i$, $\sigma_{s_i}\in\mathrm{GL}_2(K)$ such that $\sigma_{s_i}\cdot\infty=s_i$, and $\gamma\in\Gamma_{j_i}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_i}\Gamma_i(\mathfrak{n})$. We have to show that the constant term of $(f^{j_i}|_{\gamma\sigma_{s_i}})_n$ vanishes for $1\leqslant n\leqslant k+\ell+1$.
We first observe $2$ facts:
1) $\Gamma_i(\mathfrak{n})$ stabilizes $C_i$: indeed, $\Gamma_i(\mathfrak{m})$ clearly stabilizes $C_i:=\Gamma_i(\mathfrak{m})\infty\cup\Gamma_i(\mathfrak{m})0$, and hence so does its subgroup $\Gamma_i(\mathfrak{n})$.
2) $\smallmatrixx{1}{0}{0}{\alpha_i}\cdot c_i\in C_{j_i}$ for all $c_i\in C_i$: since $(\mathfrak{q},\mathfrak{m})=1$ there exists $y_\mathfrak{q}\in\mathfrak{q}$ such that $y_\mathfrak{q}\in(\mathcal{O}_K/\mathfrak{m})^\times$; analogously, since $(I_i,\mathfrak{m})=1$ there exists $y_i\in I_i$ such that $y_i\in(\mathcal{O}_K/\mathfrak{m})^\times$. By the identity $\mathfrak{q} I_i =(\alpha_i)I_{j_i}$ there exists an element $t_{j_i}\in I_{j_i}$ such that $y_\mathfrak{q} y_i=\alpha_i t_{j_i}$, and we have
\begin{equation}{\label{equat1.20}}
\alpha_i=\frac{y_\mathfrak{q} y_i}{t_{j_i}}.
\end{equation}
Write $c_i=x/y$ with $x\in I_i$ and either $y\in\mathfrak{m}$ or $y\in(\mathcal{O}_K/\mathfrak{m})^\times$; then we have
\begin{equation*}
\smallmatrixx{1}{0}{0}{\alpha_i}\cdot \frac{x}{y}= \frac{x}{\alpha_iy}=\frac{t_{j_i}x} {y_\mathfrak{q} y_iy}
\end{equation*}
with $t_{j_i}x\in I_{j_i}$ and either $y_\mathfrak{q} y_iy\in\mathfrak{m}$ or $y_\mathfrak{q} y_iy\in(\mathcal{O}_K/\mathfrak{m})^\times$; hence $\smallmatrixx{1}{0}{0}{\alpha_i}\cdot c_i \in C_{j_i}$.
Now, going back to the proof, write $\gamma=\gamma_{j_i}\smallmatrixx{1}{0}{0}{\alpha_i}\gamma_i$ with $\gamma_{j_i}\in\Gamma_{j_i}(\mathfrak{n})$ and $\gamma_i\in\Gamma_i(\mathfrak{n})$. Then $\gamma_i\cdot s_i=s_i'\in C_i$ by 1); $\smallmatrixx{1}{0}{0}{\alpha_i}\cdot s_i'=s_{j_i}\in C_{j_i}$ by 2); and $\gamma_{j_i}\cdot s_{j_i}=s_{j_i}'\in C_{j_i}$ by 1). Then
\begin{equation*}
\gamma\sigma_{s_i}\cdot\infty=\gamma_{j_i}\smallmatrixx{1}{0}{0}{\alpha_i}\gamma_i\sigma_{s_i}\cdot\infty=\gamma_{j_i}\smallmatrixx{1}{0}{0}{\alpha_i}\gamma_i\cdot s_i=\gamma_{j_i}\smallmatrixx{1}{0}{0}{\alpha_i}\cdot s_i'=\gamma_{j_i}\cdot s_{j_i}=s_{j_i}'\in C_{j_i}.
\end{equation*}
Since $f^{j_i}$ is $C_{j_i}$-cuspidal and $\gamma\sigma_{s_i}\cdot\infty=s_{j_i}'\in C_{j_i}$, we obtain that the constant term of $(f^{j_i}|_{\gamma\sigma_{s_i}})_n$ is trivial for $1\leqslant n\leqslant k+\ell+1$.
\end{proof}
\subsection{$L$-function of $C$-cuspidal Bianchi modular forms}{\label{L-C-cuspi}}
Henceforth, let $\psi$ be a Hecke character over $K$ of conductor $\mathfrak{f}$ with $(\mathfrak{f},I_i)=1$ for each $i$ (see section \ref{notations}). For each ideal $\mathfrak{m}=\prod_{\mathfrak{q}|\mathfrak{m}}\mathfrak{q}^{n_{\mathfrak{q}}}$ coprime to $\mathfrak{f}$, we define $\psi(\mathfrak{m})=\prod_{\mathfrak{q}|\mathfrak{m}}\psi_{\mathfrak{q}}(\pi_{\mathfrak{q}})^{n_{\mathfrak{q}}}$, and $\psi(\mathfrak{m})=0$ if $\mathfrak{m}$ is not coprime to $\mathfrak{f}$. In an abuse of notation, we write $\psi$ for both the idelic Hecke character and the function it determines on ideals.
\begin{definition}
We define the $L$-function of a Bianchi modular form $\mathcal{F}$ twisted by $\psi$ as
\begin{equation*}
L(\mathcal{F},\psi,s):=\sum_{\substack{0\neq \mathfrak{a}\subset\mathcal{O}_K\\ (\mathfrak{a},\mathfrak{f})=1}} c(\mathfrak{a},\mathcal{F})\psi(\mathfrak{a})N(\mathfrak{a})^{-s} \;\;\;(s\in\mathbb{C});
\end{equation*}
and assign a ``part'' of $L(\mathcal{F},\psi,s)$ to every Bianchi modular form $f^i$ corresponding to $\mathcal{F}$ with respect to $I_i$ by
\begin{equation*}
L^i(\mathcal{F},\psi,s)=L(f^i,\psi,s):=w^{-1}\sum_{\alpha\in K^\times} c(\alpha\delta I_i,\mathcal{F})\psi(\alpha\delta I_i)N(\alpha\delta I_i)^{-s} \end{equation*}
with $w=|\mathcal{O}_K^\times|$.
\end{definition}
\begin{remark}
In \cite[Chap. II]{weil1971dirichlet} it is proved that the $L$-function and the partial $L$-functions converge absolutely on some right half-plane. Moreover, since every nonzero ideal of $\mathcal{O}_K$ lies in the class of exactly one $I_i$, and is then of the form $\alpha\delta I_i$ for precisely $w$ elements $\alpha\in K^\times$, we have
\begin{equation}{\label{sumLfunctions}}
L(\mathcal{F},\psi,s)= L^1(\mathcal{F},\psi,s)+\cdots+ L^h(\mathcal{F},\psi,s).
\end{equation}
\end{remark}
The $L$-function of a $C$-cuspidal Bianchi modular form $\mathcal{F}$ can be written as a finite sum of Mellin transforms by generalizing \cite[Thm 1.8]{chris2017}. First, we need some facts about Gauss sums, following the conventions in \cite[\S 3.3]{miyake2006modular} and noting that if a Hecke character $\psi$ has infinity type $(q,r)$, then in the notation of op.\ cit.\ it has infinity type $(-q,-r)$.
\begin{definition}{\label{d1.5}}
The Gauss sum for $\psi$ is defined to be
\begin{equation*}
W(\psi)=\psi_\infty(\delta)^{-1}\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1}}\psi_\mathfrak{f}(a)^{-1} e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(a/\delta)}.
\end{equation*}
\end{definition}
\begin{remark}{\label{Gauss}}
i) For all $c\in\mathcal{O}_K$, we have the standard property
\begin{equation*}
\psi_\infty(\delta)^{-1}\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1}}\psi_\mathfrak{f}(a)^{-1} e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(ac/\delta)}=\begin{cases}
\psi_\mathfrak{f}(c)W(\psi)&: ((c)\mathfrak{f},\mathfrak{f})=1,\\
0 &: \text{otherwise}.
\end{cases}
\end{equation*}
ii) In the Gauss sum $W(\psi^{-1})$ we can choose, for each $1\leqslant i \leqslant h$, a representative $a\in C_i$ for each class $[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K$. For this, we put $a=d_b/\alpha_i$ with $d_b\in I_i$ and $d_b\equiv b$ (mod $\mathfrak{f}$) for each $b$ (mod $\mathfrak{f}$) (such a $d_b$ exists because $(I_i,\mathfrak{f})=1$), and $\alpha_i\in K$ such that $\mathfrak{f} I_i =(\alpha_i)I_{j_i}$. Note that $\alpha_i^{-1}\in\mathfrak{f}^{-1}I_i^{-1}I_{j_i}\subset\mathfrak{f}^{-1}I_i^{-1}$, so in particular, as $b$ ranges over all classes of $(\mathcal{O}_K/\mathfrak{f})^\times$ and $d_b \in I_i$, we see that $d_b/\alpha_i$ ranges over a full set of coset representatives $[a]$ for $\mathfrak{f}^{-1}/\mathcal{O}_K$ with
$(a)\mathfrak{f}$ coprime to $\mathfrak{f}$. By (\ref{equat1.20}) we have
$\alpha_i=y_\mathfrak{f} y_i/t_{j_i}$ with
$y_\mathfrak{f},y_i\in(\mathcal{O}_K/\mathfrak{m})^\times$ and $t_{j_i}\in I_{j_i}$. Since $y_\mathfrak{f} y_i \in(\mathcal{O}_K/\mathfrak{m})^\times$ and $d_b\in I_i$, we obtain
$a=d_b/\alpha_i=d_b t_{j_i}/y_\mathfrak{f} y_i\in C_i$.
\end{remark}
\begin{proposition}\label{propo.1.1}
Let $\mathcal{F}\in\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}))$ be a $C$-cuspidal Bianchi modular form with $\mathfrak{n}=(p)\mathfrak{m}$ and $(\mathfrak{m},(p))=1$. Then for a Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}$ such that $(\mathfrak{f},\mathfrak{m})=1$ and infinity type $(-\frac{\ell+1-n}{2},\frac{\ell+1-n}{2})$ with $1\leqslant n \leqslant k+\ell+1$, we have
\begin{equation}\label{equa4.1}
L^i(\mathcal{F},\psi,s)=A(i,n,\psi,s)\left[\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1,\;a\in C_i}}\psi_\mathfrak{f}(a)\int_{0}^{\infty}t^{2s-2}f_n^i(a,t)dt\right],
\end{equation}
where
\begin{equation*}
A(i,n,\psi,s)=\psi(t_i)|t_i|_f^{s-1}\frac{4(2\pi)^{2s}\sqrt{-1}^{\ell+1-n}\binom{k+\ell+2}{n}^{-1}}{|\delta|^{2s}\Gamma(s+\frac{n-\ell-1}{2})\Gamma(s-\frac{n-\ell-1}{2})wW(\psi^{-1})}.
\end{equation*}
\end{proposition}
\begin{proof}
First, note that we can write
\begin{equation*}
L^i(\mathcal{F},\psi,s)=w^{-1}\psi(t_i)|t_i|_f^s\sum_{\substack{\alpha\in K^\times\\ (\alpha\delta I_i,\mathfrak{f})=1}} c(\alpha\delta I_i,\mathcal{F})\psi_\infty(\alpha\delta)^{-1}\psi_\mathfrak{f}(\alpha\delta)^{-1}|\alpha\delta|^{-2s}.
\end{equation*}
Using i) and ii) in Remark \ref{Gauss} to replace $\psi_\mathfrak{f}(\alpha\delta)^{-1}$ in the expression above, noting that $\psi_\infty(\alpha)^{-1}=\left(\frac{\alpha}{|\alpha|}\right)^{\ell+1-n}$ and rearranging we have
\begin{align}{\label{equa}}
L^i(\mathcal{F},&\psi,s)=\frac{\psi(t_i)|t_i|_f^s}{wW(\psi^{-1})|\delta|^{2s}}\\
&\times\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1,\;a\in C_i}}\psi_\mathfrak{f}(a)\left[\sum_{\alpha\in K^\times} c(\alpha\delta I_i,\mathcal{F})\left(\frac{\alpha}{|\alpha|}\right)^{\ell+1-n}|\alpha|^{-2s}e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(a\alpha)}\right].\nonumber
\end{align}
Now using the integral in \cite[\S 7]{hida1994critical} we have that
\begin{equation*}
|\alpha|^{-2s}=\frac{4(2\pi)^{2s}}{\Gamma(s+\frac{n-\ell-1}{2})\Gamma(s-\frac{n-\ell-1}{2})}\int_0^\infty t^{2s-1}K_{n-\ell-1}(4\pi|\alpha|t)dt.
\end{equation*}
Substituting in (\ref{equa}) the integral above and rearranging, we obtain the Fourier expansion of $f^i$ inside the integral. Since $f^i$ is $C_i$-cuspidal, then the integrals in (\ref{equa4.1})
converge and the result follows.
\end{proof}
Similarly to \cite[\S 2.6]{chris2017}, we want an integral formula for $L(\mathcal{F},\psi',1)$, with $\psi'$ a Hecke character of conductor $\mathfrak{f}$ coprime to $\mathfrak{m}$ and infinity type $0\leqslant (q,r) \leqslant (k,\ell)$.
For $q,r$ as above we put $n=\ell+q-r+1$ in Proposition \ref{propo.1.1}, then $1\leqslant n\leqslant k+\ell+1$ and $\psi$ has infinity type $(\frac{q-r}{2},\frac{-q+r}{2})$. Furthermore, by setting $s=\frac{q+r+2}{2}$ we have $L^i\left(\mathcal{F},\psi,\frac{q+r+2}{2}\right)=L^i(\mathcal{F},\psi',1)$ for $\psi'=\psi|\cdot|_{\mathbb{A}_K}^\frac{q+r}{2}$ which has infinity type $(q,r)$.
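For the reader's convenience, we record the routine bookkeeping behind these substitutions. With $n=\ell+q-r+1$ we have $\ell+1-n=r-q$, so the infinity type of $\psi$ is $\left(-\tfrac{r-q}{2},\tfrac{r-q}{2}\right)=\left(\tfrac{q-r}{2},\tfrac{-q+r}{2}\right)$. Moreover, since the adelic norm takes the value $N(\mathfrak{a})^{-1}$ on a finite idele representing an ideal $\mathfrak{a}$, on ideals coprime to $\mathfrak{f}$ we have
\begin{equation*}
\psi'(\mathfrak{a})N(\mathfrak{a})^{-1}=\psi(\mathfrak{a})N(\mathfrak{a})^{-\frac{q+r}{2}}N(\mathfrak{a})^{-1}=\psi(\mathfrak{a})N(\mathfrak{a})^{-\frac{q+r+2}{2}},
\end{equation*}
which, term by term, gives $L^i(\mathcal{F},\psi',1)=L^i\left(\mathcal{F},\psi,\frac{q+r+2}{2}\right)$.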
Applying the above changes to (\ref{equa4.1}) and noting that $W((\psi')^{-1})=|\delta|^{q+r}W(\psi^{-1})$, we obtain
\begin{align}{\label{subsnys2}}
L^i(\mathcal{F},\psi',1)&=\frac{\psi'(t_i)4(2\pi)^{q+r+2}\sqrt{-1}^{-q+r}\binom{k+\ell+2}{\ell+q-r+1}^{-1}}{|\delta|^2\Gamma(q+1)\Gamma(r+1)wW((\psi')^{-1})}\\
&\times \sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1,\;a\in C_i}}\psi'_\mathfrak{f}(a)\int_{0}^{\infty}t^{q+r}f_{\ell+q-r+1}^{i}(a,t)dt.\nonumber
\end{align}
In order to simplify the above formula and to link modular symbols with $L$-values in section \ref{partialsymbolsattached}, we make the following definition.
\begin{definition}{\label{factor}}
Let $\mathcal{F}$ be a $C$-cuspidal form of weight $(k,\ell)$, for each $1 \leqslant i \leqslant h$ and $0\leqslant (s,u) \leqslant (k,\ell)$ we define
\begin{equation*}
c_{s,u}^{i}(a)=2\binom{k+\ell+2}{\ell+s-u+1}^{-1} (-1)^{\ell-u+1}\int_{0}^{\infty}t^{s+u}f_{\ell+s-u+1}^{i}(a,t)dt.
\end{equation*}
\end{definition}
\begin{remark}
The integral in the above definition converges since $1\leqslant \ell+s-u+1 \leqslant k+\ell+1$ and $f^i$ is $C_i$-cuspidal. Note also that $c_{s,u}^{i}$ is not symmetric in $k$ and $\ell$; this comes from part i) of Remark \ref{Remark1.8}.
\end{remark}
Applying Definition \ref{factor} to (\ref{subsnys2}) we have
\begin{equation*}
L^i(\mathcal{F},\psi',1)=\frac{\psi'(t_i)2(2\pi i)^{q+r+2}(-1)^{\ell+q+r}}{D\Gamma(q+1)\Gamma(r+1)wW((\psi')^{-1})}\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1,\;a\in C_i}}\psi'_\mathfrak{f}(a)c^i_{q,r}(a).
\end{equation*}
\begin{definition}{\label{gammafactors}}
Let $\psi$ be a Hecke character of infinity type $(q,r)$. Define the \textit{completed} $L$-function of $\mathcal{F}$ by
\begin{equation*}
\Lambda(\mathcal{F},\psi)=\frac{\Gamma(q+1)\Gamma(r+1)}{(2\pi i)^{q+1}(2\pi i)^{r+1}}L(\mathcal{F},\psi,1),
\end{equation*}
where $L(\mathcal{F},\psi,1)=\sum_{i=1}^hL^i(\mathcal{F},\psi,1)$.
\end{definition}
By the work done above, we obtain the following:
\begin{theorem}\label{T4.6}
Let $\mathcal{F}\in\mathcal{M}_{(k,\ell)}(\Omega_0(\mathfrak{n}))$ be a $C$-cuspidal Bianchi modular form with $\mathfrak{n}=(p)\mathfrak{m}$ and $\mathfrak{m}$ coprime to $(p)$. Then for a Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}$ coprime to $\mathfrak{m}$ and infinity type $0\leqslant (q,r) \leqslant (k,\ell)$, we have
\begin{equation*}
\Lambda(\mathcal{F},\psi)=\frac{(-1)^{\ell+q+r}2}{DwW(\psi^{-1})}\sum_{i=1}^{h}\left[\psi(t_i)\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1,\;a\in C_i}}\psi_\mathfrak{f}(a)c_{q,r}^{i}(a)\right].
\end{equation*}
\end{theorem}
\begin{remark}
For cuspidal Bianchi modular forms with level at $p$ and weight $(k,k)$ (which in particular are $C$-cuspidal) we recover \cite[Thm 2.11]{chris2017}. For this, note that $\psi_\mathfrak{f}(x_\mathfrak{f})\psi(x_\mathfrak{f})^{-1}\tau(\psi^{-1})^{-1}$ in op.\ cit.\ is equal to $W(\psi^{-1})^{-1}$.
\end{remark}
We finish this section with some remarks about algebraicity of $L$-values of $C$-cuspidal Bianchi modular forms.
For cuspidal Bianchi modular forms, the ``critical'' values of this $L$-function can be controlled; we have the following result (see \cite[Thm 8.1]{hida1994critical}):
\begin{proposition}{\label{period}}
Let $\mathcal{F}$ be a cuspidal Bianchi modular form of weight $(k,k)$. There exist a period $\Omega_\mathcal{F}\in\mathbb{C}^\times$ and a number field $E$ such that, if $\psi$ is a Hecke character of infinity type $0\leqslant(q,r)\leqslant (k,k)$ with $q,r\in\mathbb{Z}$, we have
\begin{equation*}
\frac{\Lambda(\mathcal{F},\psi)}{\Omega_\mathcal{F}}\in E(\psi),
\end{equation*}
where $E(\psi)\subset\overline{\mathbb{Q}}$ is the extension of $E$ generated by the values of $\psi$.
\end{proposition}
\begin{remark}{\label{isomorphism}}
In the non-cuspidal case, depending on the Bianchi modular forms we are interested in, we can prove algebraicity of critical $L$-values (see Proposition \ref{period base change} for an example). For the construction of the $p$-adic $L$-function, we need to view the critical $L$-values $p$-adically. For this, in section \ref{S4} we will use an isomorphism between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$.
\end{remark}
\section{Partial Bianchi modular symbols}{\label{S3}}
In this section, we introduce partial Bianchi modular symbols. These are algebraic analogues of $C$-cuspidal Bianchi modular forms that are easier to study $p$-adically. In section \ref{partialsymbolsattached}, we attach a partial symbol to $C$-cuspidal forms and link it with $L$-values.
\subsection{Partial modular symbols}
Let $\Gamma$ be a discrete subgroup of $\mathrm{SL}_2(K)$ and let $\mathcal{C}$ be a non-empty $\Gamma$-invariant subset of $\mathbb{P}^1(K)$.
We denote by $\Delta_{\mathcal{C}}$ the abelian group of divisors on $\mathcal{C}$, i.e.
\begin{equation*}
\Delta_{\mathcal{C}}=\mathbb{Z}[\mathcal{C}]=\left\{ \sum_{c\in \mathcal{C}}n_c\{c\}: n_c\in\mathbb{Z}, \; n_c=0 \text{ for almost all } c \right\}
\end{equation*}
and by $\Delta_{\mathcal{C}}^0$ the subgroup of divisors of degree 0 (i.e., such that $\sum_{c\in\mathcal{C}}n_c=0$). Note that $\Delta_{\mathcal{C}}^0$ has a left action by the group $\Gamma$ (and indeed, of $\mathrm{SL}_2(K)$) by fractional linear transformations on $\mathcal{C}$.
Let $V$ be a right $\Gamma$-module. We endow the space $\mathrm{Hom}(\Delta_{\mathcal{C}}^0,V)$ with a right $\Gamma$-action by setting
\begin{equation*}
\phi_{|\gamma}(D)=\phi(\gamma \cdot D)_{|\gamma}.
\end{equation*}
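A short computation confirms that this is indeed a right action: for $\gamma,\delta\in\Gamma$ and $D\in\Delta_{\mathcal{C}}^0$,
\begin{equation*}
(\phi_{|\gamma})_{|\delta}(D)=\big(\phi_{|\gamma}(\delta\cdot D)\big)_{|\delta}=\big(\phi(\gamma\delta\cdot D)_{|\gamma}\big)_{|\delta}=\phi(\gamma\delta\cdot D)_{|\gamma\delta}=\phi_{|\gamma\delta}(D),
\end{equation*}
using that $\Gamma$ acts on $\mathcal{C}$ on the left and on $V$ on the right.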
\begin{definition}{\label{abstract partial}}
We define the space of partial modular symbols on $\mathcal{C}$ for $\Gamma$ with values in $V$, to be the space $\mathrm{Symb}_{\Gamma,\mathcal{C}}(V):= \mathrm{Hom}_\Gamma(\Delta_{\mathcal{C}}^0,V)$ of $\Gamma$-invariant maps from $\Delta_{\mathcal{C}}^0$ to $V$.
\end{definition}
\begin{remark}
When $\mathcal{C}=\mathbb{P}^1(K)$ we drop $\mathcal{C}$ from the notation and call $\mathrm{Symb}_{\Gamma}(V)$ the space of \textit{modular symbols} for $\Gamma$ with values in $V$, recovering \cite[Def 2.3]{chris2017}.
\end{remark}
Recall from section \ref{background} the group $\Omega_0(\mathfrak{n})$ and its twist $\Gamma_i(\mathfrak{n})$ for each $i=1,...,h$. Setting $\Gamma=\Gamma_i(\mathfrak{n})$ in Definition \ref{abstract partial} and taking suitable modules $V$ to be defined below, we can obtain more concrete partial modular symbols.
For a commutative ring $R$ recall that $V_k(R)$ denotes the space of homogeneous polynomials over $R$ in two variables of degree $k$. Furthermore, for integers $k,\ell\geqslant0$ we define $V_{k,\ell}(R):=V_k(R)\otimes_R V_{\ell}(R)$.
We identify $V_{k,\ell}(R)$ with the space of polynomials that are homogeneous of degree $k$ in two variables $X,Y$ and homogeneous of degree $\ell$ in two further variables $\overline{X}, \overline{Y}$.
\begin{definition}
Let $R$ be a $K$-algebra. We have a left $\Gamma_i(\mathfrak{n})$-action on $V_k(R)$ defined by
$\gamma\cdot P\binom{X}{Y}= P\binom{dX+bY}{cX+aY}$, for $\gamma=\smallmatrixx{a}{b}{c}{d}$. We then obtain a left $\Gamma_i(\mathfrak{n})$-action on $V_{k,\ell}(R)$ given by
\begin{equation*}
\gamma\cdot P\left[\matrixxx{X}{Y},\matrixxx{\overline{X}}{\overline{Y}}\right]=P\left[\matrixxx{dX+bY}{cX+aY},\matrixxx{\overline{d}\overline{X}+\overline{b}\overline{Y}}{\overline{c}\overline{X}+\overline{a}\overline{Y}}\right]. \end{equation*}
This induces a right $\Gamma_i(\mathfrak{n})$-action on the dual space $V_{k,\ell}^*(R)$ by setting
\begin{equation*}
\mu|_\gamma(P)=\mu(\gamma\cdot P).
\end{equation*}
\end{definition}
Let $\mathbf{C}=(\mathbf{C}_1,...,\mathbf{C}_h)$ with $\mathbf{C}_i$ a non-empty $\Gamma_i(\mathfrak{n})$-invariant subset of $\mathbb{P}^1(K)$.
\begin{definition}\label{Def5.2}
(i) Define the space of \textit{partial Bianchi modular symbols} on $\mathbf{C}_i$ of weight $(k,\ell)$ and level $\Gamma_i(\mathfrak{n})$ to be the space $\mathrm{Symb}_{\Gamma_i(\mathfrak{n}),\mathbf{C}_i}(V_{k,\ell}^*(\mathbb{C}))$.\\
(ii) Define the space of \textit{partial Bianchi modular symbols} on $\mathbf{C}$ of weight $(k,\ell)$ and level $\Omega_{0}(\mathfrak{n})$ to be the space
\begin{equation*}
\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),\mathbf{C}}(V_{k,\ell}^*(\mathbb{C})):=\bigoplus_{i=1}^{h} \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),\mathbf{C}_i}(V_{k,\ell}^*(\mathbb{C})).
\end{equation*}
\end{definition}
\begin{remark}{\label{fullBianchi}}
When $\mathbf{C}_i=\mathbb{P}^1(K)$ for all $i$, we drop $\mathbf{C}$ from the notation and recover the space $\mathrm{Symb}_{\Omega_{0}(\mathfrak{n})}(V_{k,\ell}^*(\mathbb{C}))$ of Bianchi modular symbols in \cite[Def 2.4]{chris2017}.
\end{remark}
\subsection{Partial modular symbols and $C$-cuspidal forms}{\label{partialsymbolsattached}}
To relate partial modular symbols with $C$-cuspidal Bianchi modular forms we henceforth take $\mathbf{C}=C=(C_1,...,C_h)$ with $C_i=\Gamma_i(\mathfrak{m})\infty\cup\Gamma_i(\mathfrak{m})0$ as in section \ref{Fourier} and consider the space of partial Bianchi modular symbols on $C$ of weight $(k,\ell)$ and level $\Omega_{0}(\mathfrak{n})$.
In section \ref{Fourier} we defined Hecke operators on Bianchi modular forms. Similarly, we can define Hecke operators on the space of Bianchi modular symbols.
\begin{definition}{\label{definitionoperator}}
Let $\mathfrak{q}$ be a prime ideal, then the Hecke operator $T_\mathfrak{q}$ is defined on the space of Bianchi modular symbols $\mathrm{Symb}_{\Omega_{0}(\mathfrak{n})}(V_{k,\ell}^*(\mathbb{C}))$ by
\begin{equation*}
(\phi_1,...,\phi_h)|_{T_\mathfrak{q}}=\left(\phi_{j_1}\bigg|\left[\Gamma_{j_1}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_1}\Gamma_1(\mathfrak{n})\right],...,\phi_{j_h}\bigg|\left[\Gamma_{j_h}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_h}\Gamma_h(\mathfrak{n})\right]\right).
\end{equation*}
If $\mathfrak{q}|\mathfrak{n}$ we denote the Hecke operator by $U_\mathfrak{q}$.
\end{definition}
As with $C$-cuspidal Bianchi modular forms, not all Hecke operators act on $\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(\mathbb{C}))$.
\begin{proposition}{\label{actionHeckeoperator}}
The Hecke algebra $\mathcal{H}_{\mathfrak{n},p}$ acts on $\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(\mathbb{C}))$.
\end{proposition}
\begin{proof}
Let $(\phi_1,...,\phi_h)\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(\mathbb{C}))$. We have to show that for every prime $\mathfrak{q}\nmid\mathfrak{m}$ with $\mathfrak{q} I_i =(\alpha_i)I_{j_i}$ and for each $i=1,...,h$, we have
\begin{equation*}
\phi_{j_i}\big|\left[\Gamma_{j_i}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_i}\Gamma_i(\mathfrak{n})\right]\in \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(V_{k,\ell}^*(\mathbb{C})).
\end{equation*}
It suffices to prove $\gamma\cdot s_i\in C_{j_i}$ for all $\gamma=\gamma_{j_i}\smallmatrixx{1}{0}{0}{\alpha_i}\gamma_i \in \left[\Gamma_{j_i}(\mathfrak{n})\smallmatrixx{1}{0}{0}{\alpha_i}\Gamma_i(\mathfrak{n})\right]$ and $s_i\in C_i$.
Following the proof of Proposition \ref{proposition1.3}, for all $s_i\in C_{i}$ we have $\gamma_i\cdot s_i=s_i'\in C_i$ by fact 1); $\smallmatrixx{1}{0}{0}{\alpha_i}\cdot s_i'=s_{j_i}\in C_{j_i}$ by fact 2); and $\gamma_{j_i}\cdot s_{j_i}=s_{j_i}'\in C_{j_i}$ again by fact 1). We obtain the result since $\phi_{j_i}\in\mathrm{Symb}_{\Gamma_{j_i}(\mathfrak{n}),C_{j_i}}(V_{k,\ell}^*(\mathbb{C}))$.
\end{proof}
Let $\mathcal{F}$ be a $C$-cuspidal Bianchi eigenform of weight $(k,\ell)$ and level $\Omega_0(\mathfrak{n})$. For $1\leqslant i \leqslant h$, recall the factor $c_{s,u}^{i}$ from Definition \ref{factor}. Let $\mathcal{X}^{k-q}\mathcal{Y}^q\overline{\mathcal{X}}^{\ell-r}\overline{\mathcal{Y}}^r$ be the element of the dual basis of $V_{k,\ell}^*(\mathbb{C})$ defined by
\begin{equation*}
\mathcal{X}^{k-q}\mathcal{Y}^q\overline{\mathcal{X}}^{\ell-r}\overline{\mathcal{Y}}^r(X^{k-m}Y^m\overline{X}^{\ell-n}\overline{Y}^n)=
\begin{cases}
1 & : q=m\;\text{and}\;r=n, \\
0 & : \text{otherwise}.
\end{cases}
\end{equation*}
\begin{definition}{\label{pbmsattach}}
For each Bianchi modular form $f^i$ descended from $\mathcal{F}$, we define
\begin{equation*}
\phi_{f^{i}}\in\mathrm{Hom}(\Delta_{C_i}^0,V_{k,\ell}^*(\mathbb{C}))
\end{equation*}
by setting
\begin{equation}
\phi_{f^{i}}(\{a\}-\{\infty\})= \sum_{s=0}^{k}\sum_{u=0}^\ell c_{s,u}^{i}(a)(\mathcal{Y}-a\mathcal{X})^{k-s}\mathcal{X}^s(\overline{\mathcal{Y}}-\overline{a}\overline{\mathcal{X}})^{\ell-u}\overline{\mathcal{X}}^u
\end{equation}
for each $a\in C_i$.
\end{definition}
\begin{proposition}{\label{attach partial}}
We have $\phi_{\mathcal{F}}:=(\phi_{f^{1}},...,\phi_{f^{h}})\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(\mathbb{C}))$. Moreover, the map $\mathcal{F}\mapsto\phi_{\mathcal{F}}$ is $\mathcal{H}_{\mathfrak{n},p}$-equivariant.
\end{proposition}
\begin{proof}
The fact that $\phi_{f^i}\in \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(V_{k,\ell}^*(\mathbb{C}))$ is proven in the same way as in \cite[Prop 2.9]{chris2017} and \cite[\S 5.2]{ghate1999critical}, replacing $(k,k)$ with $(k,\ell)$ where needed and noting that all the integrals appearing converge by $C$-cuspidality of $\mathcal{F}$.
The $\mathcal{H}_{\mathfrak{n},p}$-equivariance is an easy check using the definition of each $\phi_{f^i}$ and recalling that, for non-principal ideals, the corresponding Hecke operators permute the $h$ partial symbols.
\end{proof}
\begin{remark}{\label{Remarkevaluarpartial}}
For each $i$ and $0\leqslant (q,r) \leqslant (k,\ell)$ we have
\begin{equation*}
\phi_{f^i}(\{a\}-\{\infty\})\left[(X+aY)^qY^{k-q}(\overline{X}+\overline{a}\overline{Y})^{r}\overline{Y}^{\ell-r}\right]=c_{q,r}^i(a),
\end{equation*}
so we can link $\phi_{\mathcal{F}}$ with the $L$-values of $\mathcal{F}$ using Theorem \ref{T4.6}.
\end{remark}
\section{$p$-adic $L$-function of $C$-cuspidal Bianchi modular forms}{\label{S4}}
In this section, we define \textit{overconvergent partial Bianchi modular symbols} to link partial symbols with $p$-adic distributions and prove a control theorem. In section \ref{S4.4} we construct $p$-adic $L$-functions of $C$-cuspidal Bianchi modular forms.
Henceforth, we write $\mathcal{O}_{K,p}:=\mathcal{O}_K\otimes_\mathbb{Z}\mathbb{Z}_p$ to ease notation.
\subsection{Locally analytic distributions}{\label{sectionlocally}}
Suppose $p\mathcal{O}_K=\prod_{\mathfrak{p}|p}\mathfrak{p}^{e_\mathfrak{p}}$ and let $f_\mathfrak{p}$ denote the residue degree of $\mathfrak{p}$; note that $\sum_{\mathfrak{p}|p} f_\mathfrak{p} e_\mathfrak{p}=2$. Using the embedding $\iota_p:\overline{\mathbb{Q}}\hookrightarrow\overline{\mathbb{Q}}_p$ from section \ref{notations}, for each prime $\mathfrak{p}|p$ we have $f_\mathfrak{p} e_\mathfrak{p}$ embeddings $K_\mathfrak{p}\hookrightarrow\overline{\mathbb{Q}}_p$, and combining these for each prime, we get an embedding
\begin{align*}
\sigma: K\otimes\mathbb{Q}_p &\hookrightarrow \overline{\mathbb{Q}}_p\times \overline{\mathbb{Q}}_p \\ a & \longmapsto (\sigma_1(a),\sigma_2(a)).
\end{align*}
For $r,s\in\mathbb{R}_{>0}$, define the rigid analytic $(r,s)$-neighborhood of $\roi_{K,p}$ in $\mathbb{C}_p^2$ to be
\begin{equation*}
B(\roi_{K,p},r,s):=\{(x,y)\in \mathbb{C}_p^2:\exists u\in\roi_{K,p} \; \text{such that}\; |x-\sigma_1(u)|\leqslant r, |y-\sigma_2(u)|\leqslant s\}.
\end{equation*}
Let $L$ be a finite extension of $\mathbb{Q}_p$ containing the image of $\sigma$. For $(r,s)$ as above, we write $\mathbb{A}[L,r,s]$ for the $L$-Banach space of rigid analytic functions on $B(\roi_{K,p},r,s)$ and $\mathbb{D}[L,r,s]$ for its Banach dual (see \cite[\S 5.1]{chris2017}).
Define the space of $L$-valued locally analytic distributions to be the projective limit
\begin{equation*}
\mathcal{D}(L)=\lim_{\longleftarrow}\mathbb{D}[L,r,s]=\bigcap_{r,s}\mathbb{D}[L,r,s].
\end{equation*}
We endow $\mathbb{A}[L,r,s]$ with a weight $(k,\ell)$-action of the semigroup
\begin{equation*}
\Sigma_0(p):=\left\{ \smallmatrixx{a}{b}{c}{d}\in M_2(\roi_{K,p}): p|c,\; a\in\roi_{K,p}^\times,\; ad-bc\neq0 \right\}
\end{equation*}
by setting
\begin{equation*}
\gamma \cdot_{k,\ell} \zeta(x,y)= (a_1+c_1x)^k(a_2+c_2y)^\ell \zeta\left(\frac{b_1+d_1x}{a_1+c_1x},\frac{b_2+d_2y}{a_2+c_2y}\right),\quad \text{where } \sigma_i(\gamma)=\smallmatrixx{a_i}{b_i}{c_i}{d_i}.
\end{equation*}
These actions are compatible for various $(r,s)$. By duality they induce actions on $\mathbb{D}[L,r,s]$, and hence on $\mathcal{D}(L)$. We write $\mathbb{D}_{k,\ell}[L,r,s]$ and $\mathcal{D}_{k,\ell}(L)$ for the spaces $\mathbb{D}[L,r,s]$ and $\mathcal{D}(L)$ together with the weight $(k,\ell)$-action of $\Sigma_0(p)$.
\subsection{Overconvergent partial Bianchi modular symbols}{\label{section overconvergent}}
Recall the level $\Omega_0(\mathfrak{n})$ with $(p)|\mathfrak{n}$ from section \ref{background} and for each $1\leqslant i \leqslant h$, the $\Gamma_i(\mathfrak{n})$-invariant subset $C_i$ of $\mathbb{P}^1(K)$ from section \ref{Fourier}. Since the lower left entry of a matrix in $\Gamma_i(\mathfrak{n})$ is in $\mathfrak{p}$ for all $\mathfrak{p}|p$ (because $(p)|\mathfrak{n}$), using the embedding $\iota_p$, we have that $\Gamma_i(\mathfrak{n})\subset\Sigma_0(p)$. Then we can equip $\mathcal{D}_{k,\ell}(L)$ with an action of $\Gamma_i(\mathfrak{n})$ and define partial modular symbols with values in $\mathcal{D}_{k,\ell}(L)$.
\begin{definition}
(i) Define the space of \textit{overconvergent partial Bianchi modular symbols} on $C_i$ of weight $(k,\ell)$ and level $\Gamma_i(\mathfrak{n})$ with coefficients in $L$ to be the space $\mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(\mathcal{D}_{k,\ell}(L))$.\\% of partial Bianchi modular symbols with values in $\mathcal{D}_{k,\ell}(L)$.\\
(ii) Define the space of \textit{overconvergent partial Bianchi modular symbols} on $C$ of weight $(k,\ell)$ and level $\Omega_{0}(\mathfrak{n})$ with coefficients in $L$ to be the space
\begin{equation*}
\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L)):=\bigoplus_{i=1}^{h} \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(\mathcal{D}_{k,\ell}(L)).
\end{equation*}
\end{definition}
Note that the matrices appearing in Definition \ref{definitionoperator} of the Hecke operators can be seen inside $\Sigma_0(p)$; hence the Hecke algebra $\mathcal{H}_{\mathfrak{n},p}$ acts on $\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L))$.
There is a natural map $\mathcal{D}_{k,\ell}(L)\rightarrow V_{k,\ell}^*(L)$ given by dualizing the inclusion of $V_{k,\ell}(L)$ into $\mathcal{A}_{k,\ell}(L)$. This induces an $\mathcal{H}_{\mathfrak{n},p}$-equivariant \textit{specialization map}
\begin{equation*}
\rho: \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L)) \longrightarrow \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(L)).
\end{equation*}
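To see that the weight-$(k,\ell)$ action restricts to the polynomial subspace (so that dualizing makes sense), identify $V_{k,\ell}(L)$ with the polynomials of degree at most $k$ in $x$ and at most $\ell$ in $y$; then for a monomial $\zeta(x,y)=x^my^n$ with $0\leqslant m\leqslant k$ and $0\leqslant n\leqslant\ell$, the action of $\gamma\in\Sigma_0(p)$ from section \ref{sectionlocally} gives
\begin{equation*}
\gamma\cdot_{k,\ell}\zeta(x,y)=(a_1+c_1x)^{k-m}(b_1+d_1x)^{m}(a_2+c_2y)^{\ell-n}(b_2+d_2y)^{n},
\end{equation*}
which again has degree at most $k$ in $x$ and at most $\ell$ in $y$.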
\begin{theorem}{\label{partialcontrol}}
(Partial Bianchi control theorem). For each prime $\mathfrak{p}$ above $p$, let $\lambda_{\mathfrak{p}}\in L^\times$. Suppose that $v_p(\lambda_{\mathfrak{p}})<(\min\{k,\ell\}+1)/e_{\mathfrak{p}}$ when $p=\mathfrak{p}^{e_\mathfrak{p}}$ is inert or ramified, or that $v_p(\lambda_{\mathfrak{p}})<k+1$ and $v_p(\lambda_{\overline{\mathfrak{p}}})<\ell+1$ when $p$ splits as $\mathfrak{p}\overline{\mathfrak{p}}$. Then the restriction of the specialization map
\begin{equation*}
\rho: \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L))^{\{U_{\mathfrak{p}}=\lambda_{\mathfrak{p}}:\mathfrak{p}|p\}} \longrightarrow\mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(L))^{\{U_{\mathfrak{p}}=\lambda_{\mathfrak{p}}:\mathfrak{p}|p\}}
\end{equation*}
to the simultaneous $\lambda_{\mathfrak{p}}$-eigenspaces of the $U_{\mathfrak{p}}$ operators is an isomorphism.
\end{theorem}
\begin{proof}
This result is proved in the same way as its counterpart for Bianchi modular symbols of parallel weight $(k,k)$ in \cite{chris2017}, making the corresponding adaptations for weight $(k,\ell)$. Here we just give a brief sketch of how to follow Williams' proof.
First, the control theorem is proved for the space of rigid analytic distributions $\mathbb{D}_{k,\ell}[L,1,1]$ with the operator $U_p$, using \cite[\S 4]{chris2017} and noting that the condition $v_p(\lambda)<k+1$ in \cite[Lem 3.15]{chris2017} becomes $v_p(\lambda)<\min\{k,\ell\}+1$. Then, by the results of \cite[\S 5.2]{chris2017}, we can extend the result to $\mathcal{D}_{k,\ell}(L)$.
When $p$ is inert, the process described above yields the result. For $p$ ramified as $\mathfrak{p}^2$, we note that $U_p=U_{\mathfrak{p}^2}=U_\mathfrak{p}^2$ and use \cite[Lem 6.9]{chris2017} to obtain the result. Finally, when $p$ splits, we adapt the methods in \cite[\S 6]{chris2017} to lift simultaneous eigensymbols of $U_\mathfrak{p}$ and $U_{\overline{\mathfrak{p}}}$, obtaining the more subtle condition on the slopes.
\end{proof}
\begin{remark}
The partial control theorem can also be proved using a cohomological interpretation of partial Bianchi modular symbols as in \cite[\S 2.4]{bellaiche2015p}, following \cite{urban}, \cite{hansen}, \cite{canadian}. In this paper we do not use such an interpretation; however, in future work we will need to construct families of partial Bianchi modular symbols, where cohomology plays a key role.
\end{remark}
\begin{definition}
Let $\mathcal{F}$ be an eigenform with eigenvalues $\lambda_I$. We say $\mathcal{F}$ has \textit{small slope} if $v_p(\lambda_{\mathfrak{p}})<(\min\{k,\ell\}+1)/e_{\mathfrak{p}}$ when $p=\mathfrak{p}^{e_\mathfrak{p}}$ is inert or ramified, or if $v_p(\lambda_{\mathfrak{p}})<k+1$ and $v_p(\lambda_{\overline{\mathfrak{p}}})<\ell+1$ when $p$ splits as $\mathfrak{p}\overline{\mathfrak{p}}$. We say $\mathcal{F}$ has \textit{critical slope} if it does not have small slope.
\label{D10.4}
\end{definition}
\subsection{Admissible distributions}{\label{admissible}}
For each pair $r$, $s$, the space $\mathbb{D}_{k,\ell}[L,r,s]$ from section \ref{sectionlocally} admits an operator norm $||\cdot||_{r,s}$ via
\begin{equation*}
||\mu||_{r,s}= \sup_{0\neq f \in \mathcal{A}_{k,\ell}[L,r,s]} \frac{|\mu(f)|_p}{|f|_{r,s}},
\end{equation*}
where $|\cdot|_p$ is the usual $p$-adic absolute value on $L$ and $|\cdot|_{r,s}$ is the sup norm on $\mathcal{A}_{k,\ell}[L,r,s]$. Note that if $r \leqslant r'$ and $s\leqslant s'$, then $||\mu||_{r,s}\geqslant||\mu||_{r',s'}$ for $\mu\in \mathbb{D}_{k,\ell}[L,r',s']$.
These norms give rise to a family of norms on the space of locally analytic functions that allow us to classify locally analytic distributions by growth properties as we vary in this family.
\begin{definition}
Let $\mu\in\mathcal{D}_{k,\ell}(L)$ be a locally analytic distribution.
(i) Suppose $p$ is inert or ramified in $K$. We say $\mu$ is $h$-admissible if $||\mu||_{r,r}=O(r^{-h})$ as $r\rightarrow 0^{+}$.
(ii) Suppose $p$ splits in $K$. We say $\mu$ is $(h_1,h_2)$-admissible if $||\mu||_{r,s}=O(r^{-h_1})$ uniformly in $s$ as $r\rightarrow 0^{+}$, and $||\mu||_{r,s}=O(s^{-h_2})$ uniformly in $r$ as $s\rightarrow 0^{+}$.
\end{definition}
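To orient the reader, we record a standard consequence of this definition (used later to recognize certain $p$-adic $L$-functions as measures): a distribution with uniformly bounded norms,
\begin{equation*}
\sup_{0<r,s\leqslant 1}||\mu||_{r,s}<\infty,
\end{equation*}
is $0$-admissible (resp. $(0,0)$-admissible); conversely, by the monotonicity of the norms noted above, an admissible distribution of slope zero is bounded, and bounded distributions are precisely the measures.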
\begin{proposition}
Let $\Psi\in \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(\mathcal{D}_{k,\ell}(L))$, and $r,s\leqslant1$. Defining
\begin{equation*}
||\Psi||_{r,s}:= \sup_{D\in\Delta_{C_i}^0}||\Psi(D)||_{r,s}
\end{equation*}
gives a well-defined norm on $ \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(\mathcal{D}_{k,\ell}(L))$.
\end{proposition}
\begin{proof}
First, since $\Delta_{C_i}^0$ is finitely generated as a $\mathbb{Z}[\Gamma_i(\mathfrak{n})]$-module (which follows from the fact that $\Gamma_i(\mathfrak{n})$ is a finitely generated group with finitely many cusps), we can write $D\in \Delta_{C_i}^0$ as $D=\alpha_1 D_1+\cdots+\alpha_n D_n$ for a finite set of generators $D_j$ and $\alpha_j\in \mathbb{Z}[\Gamma_i(\mathfrak{n})]$.
Now, note that for every $\gamma\in\Gamma_i(\mathfrak{n})$ and $f\in\mathcal{A}_{k,\ell}[L,r,s]$ we can prove (in the same way as \cite[Lem 5.11]{chris2017}) that there exist positive constants $A$ and $A'$ such that
\begin{equation*}
A|\gamma\cdot_{k,\ell} f|_{r,s} \leqslant |f|_{r,s} \leqslant A'|\gamma\cdot_{k,\ell} f|_{r,s}.
\end{equation*}
Finally, by the above there exists a constant $B$ such that (without loss of generality, up to reordering the generators)
\begin{equation*}
||\Psi(D)||_{r,s}\leqslant B||\Psi(D_1)||_{r,s}.
\end{equation*}
In particular, the supremum is finite and hence gives a well-defined norm as required.
\end{proof}
The following proposition is the adaptation to overconvergent partial Bianchi modular symbols of \cite[Props 5.12, 6.15]{chris2017}.
\begin{proposition}{\label{admissibility}}
Let $\Psi \in \mathrm{Symb}_{\Gamma_i(\mathfrak{n}),C_i}(\mathcal{D}_{k,\ell}(L))$.
(i) Suppose $p$ is inert in $K$ and $\Psi$ is a $U_p$-eigensymbol with eigenvalue $\lambda$ and slope $h = v_p(\lambda)$. Then, for every $D\in\Delta_{C_i}^0$, the distribution $\Psi(D)$ is $h$-admissible.
(ii) Suppose $p$ splits in $K$ as $\mathfrak{p}\overline{\mathfrak{p}}$ and $\Psi$ is simultaneously a $U_{\mathfrak{p}}^n$- and $U_{\overline{\mathfrak{p}}}^n$-eigensymbol for some $n$, with non-zero eigenvalues $\lambda_1^n$ and $\lambda_2^n$ with slopes $h_1=v_p(\lambda_1)$ and $h_2=v_p(\lambda_2)$. Then, for every $D\in\Delta_{C_i}^0$, the distribution $\Psi(D)$ is $(h_1,h_2)$-admissible.
\end{proposition}
\begin{proof}
To prove part (i) we follow the proof of \cite[Prop 5.13]{chris2017} line by line and conclude that for any $r$ and any positive integer $m$, we have
\begin{equation*}
||\Psi(D)||_{r/p^m,r/p^m}\leqslant |\lambda|^{-m}||\Psi||_{r,r}.
\end{equation*}
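The deduction of $h$-admissibility from this estimate is short; as a sketch (with the convention that the slope $h=v_p(\lambda)$ is non-negative), for $0<\rho\leqslant r$ pick $m\geqslant0$ with $r/p^{m}\leqslant\rho<r/p^{m-1}$, so that
\begin{equation*}
||\Psi(D)||_{\rho,\rho}\leqslant||\Psi(D)||_{r/p^m,r/p^m}\leqslant|\lambda|^{-m}||\Psi||_{r,r}=p^{mh}||\Psi||_{r,r}\leqslant(pr)^{h}\rho^{-h}||\Psi||_{r,r}=O(\rho^{-h}),
\end{equation*}
using that the norms increase as the radii shrink.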
To prove part (ii) we use the same idea as in part (i): applying $m$-th powers of the operators $U_\mathfrak{p}^n$ and $U_{\overline{\mathfrak{p}}}^n$, we conclude that for any $r,s$ we have
\begin{equation*}
||\Psi(D)||_{r/p^{mn},s}\leqslant |\lambda_1|^{-mn}||\Psi||_{r,s}\;\;\text{and}\;\;||\Psi(D)||_{r,s/p^{mn}}\leqslant |\lambda_2|^{-mn}||\Psi||_{r,s},
\end{equation*}
and combining both, we obtain the result.
\end{proof}
\subsection{Construction of the $p$-adic $L$-function}{\label{S4.4}}
Throughout this section, we fix an isomorphism $\iota:\mathbb{C}\rightarrow\overline{\mathbb{Q}}_p$ compatible with the embeddings of section \ref{notations}. We recall ray class groups and define the Mellin transform of an overconvergent partial Bianchi modular symbol.
\begin{definition}
Define the ray class group of level $p^\infty$ to be
\begin{equation*}
\mathrm{Cl}_K(p^{\infty})=K^\times\backslash \mathbb{A}_K^\times/\mathbb{C}^\times\prod_{v\nmid p}\mathcal{O}_v^\times
\end{equation*}
and denote by $\mathfrak{X}(\mathrm{Cl}_K(p^{\infty}))$ the two-dimensional rigid space of $p$-adic characters on $\mathrm{Cl}_K(p^{\infty})$.
\end{definition}
Note that $\mathrm{Cl}_K(p^\infty)$ can be written as $\mathrm{Cl}_K(p^\infty)=\bigcup_{i\in\mathrm{Cl}_K}\mathrm{Cl}_K^i(p^\infty)$, where $\mathrm{Cl}_K^i(p^\infty)$ is the fiber of $i$ under the canonical surjection $\mathrm{Cl}_K(p^\infty)\twoheadrightarrow\mathrm{Cl}_K$ to the class group of $K$. Note also that the choice of $t_i\in \mathbb{A}_K^{f ,\times}$ in section \ref{background} identifies $\mathrm{Cl}_K^i(p^\infty)$ non-canonically with $\roi_{K,p}^\times/\mathcal{O}_K^\times$.
Let $\Psi=(\Psi_1,...,\Psi_h)\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L))$ be an overconvergent partial Bianchi modular symbol. For $i,j\in\mathrm{Cl}_K$ we define a distribution $\mu_i(\Psi_j)\in\mathcal{D}(\mathrm{Cl}_K^i(p^\infty),L)$ as follows.
Since $\{0\}-\{\infty\}\in \Delta_{C_i}^0 $ for all $i$, we have a distribution $\Psi_j(\{0\}-\{\infty\})|_{\roi_{K,p}^\times}$ on $\roi_{K,p}^\times$. This restricts to a distribution on $\roi_{K,p}^\times/\mathcal{O}_K^\times$, which gives the distribution $\mu_i(\Psi_j)$ on $\mathrm{Cl}_K^i(p^\infty)$ under the identification above.
\begin{definition}{\label{mellin}}
The \textit{Mellin transform} of $\Psi\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L))$ is the ($L$-valued) locally analytic distribution on $\mathrm{Cl}_K(p^\infty)$ given by
\begin{equation*}
\mathrm{Mel}(\Psi):=\sum_{i\in\mathrm{Cl}_K}\mu_i(\Psi_i)\in \mathcal{D}(\mathrm{Cl}_K(p^\infty),L).
\end{equation*}
\end{definition}
\begin{remark}
The distribution $\mathrm{Mel}(\Psi)$ is independent of the choice of class group representatives.
\end{remark}
The theory of partial Bianchi modular symbols developed in sections \ref{S3} and \ref{S4} allows us to construct the $p$-adic $L$-function of small slope $C$-cuspidal Bianchi modular forms.
\begin{theorem}\label{C-cuspidal p-adic}
Let $\mathcal{F}$ be a $C$-cuspidal Bianchi eigenform of weight $(k,\ell)$ and level $\Omega_0(\mathfrak{n})$, with $U_\mathfrak{p}$-eigenvalue $\lambda_\mathfrak{p}$ for each $\mathfrak{p}|p$, where $v_p(\lambda_{\mathfrak{p}})<(\mathrm{min}\{{k,\ell\}}+1)/e_{\mathfrak{p}}$ when $p=\mathfrak{p}^{e_\mathfrak{p}}$ is inert or ramified, or $v_p(\lambda_{\mathfrak{p}})<k+1$ and $v_p(\lambda_{\overline{\mathfrak{p}}})<\ell+1$ when $p$ splits as $\mathfrak{p}\overline{\mathfrak{p}}$. Let $\iota$ be the fixed isomorphism between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$. Then there exists a locally analytic distribution $L_p^{\iota}(\mathcal{F},-)$ on $\mathrm{Cl}_K(p^{\infty})$ such that for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}|(p^\infty)$ and infinity type $0 \leqslant (q,r) \leqslant (k,\ell)$, we have
\begin{equation}{\label{interpolationprop}}
L_p^{\iota}(\mathcal{F},\psi_{p-\mathrm{fin}})=\left( \prod_{\mathfrak{p}|p} Z_{\mathfrak{p}}(\psi) \right) \left[ \frac{DwW(\psi^{-1})}{(-1)^{\ell+q+r}2\lambda_\mathfrak{f}} \right] \Lambda(\mathcal{F},\psi),
\end{equation}
where $\psi_{p-\mathrm{fin}}$ is the $p$-adic avatar of $\psi$ as in \cite[\S 7.3]{chris2017} and
\begin{equation*}
Z_\mathfrak{p}(\psi):=\begin{cases} 1-[\lambda_\mathfrak{p}\psi(\mathfrak{p})]^{-1} & : \mathfrak{p}\nmid\mathfrak{f}, \\ 1 & :\mbox{else}. \end{cases}
\end{equation*}
The distribution $L_p^{\iota}(\mathcal{F},-)$ is $(h_\mathfrak{p})_{\mathfrak{p}|p}$-admissible, where $h_\mathfrak{p}=v_p(\lambda_\mathfrak{p})$, and hence is unique.
We call $L_p^{\iota}(\mathcal{F},-)$ the $p$-adic $L$-function of $\mathcal{F}$.
\end{theorem}
\begin{proof}
The small slope $C$-cuspidal Bianchi modular form $\mathcal{F}$ corresponds to a collection of $h$ $C_i$-cuspidal forms $f^{1},...,f^{h}$ on $\mathcal{H}_3$.
We first attach to $\mathcal{F}$ a complex-valued partial Bianchi modular eigensymbol $\phi_\mathcal{F}=(\phi_{f^{1}},...,\phi_{f^{h}})$ using Proposition \ref{attach partial}.
By applying $\iota$ we obtain from $\phi_{\mathcal{F}}$ a symbol $\phi^{\iota}_{\mathcal{F}}=(\phi^{\iota}_{f^{1}},...,\phi^{\iota}_{f^{h}})$ with values in $V_{k,\ell}^*(\overline{\mathbb{Q}}_p)$. For each $i$, since $\Delta_{C_i}^0$ is finitely generated as a $\mathbb{Z}[\Gamma_i(\mathfrak{n})]$-module it follows that $\phi^{\iota}_{f^i}$ has values in $V_{k,\ell}^*(L_i)$ with $L_i/\mathbb{Q}_p$ finite, thus $\phi^{\iota}_{\mathcal{F}}\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(V_{k,\ell}^*(L))$ for $L/\mathbb{Q}_p$ sufficiently large containing all $L_i$ and the eigenvalues of $\mathcal{F}$.
Since $\mathcal{F}$ has small slope, we can lift $\phi^{\iota}_{\mathcal{F}}$ to a unique $\Psi^{\iota}_{\mathcal{F}}=(\Psi^{\iota}_{f^{1}},...,\Psi^{\iota}_{f^{h}})\in \mathrm{Symb}_{\Omega_{0}(\mathfrak{n}),C}(\mathcal{D}_{k,\ell}(L))$ using the partial Bianchi control theorem (Theorem \ref{partialcontrol}).
Finally, we define the $p$-adic $L$-function of $\mathcal{F}$ as the Mellin transform of $\Psi^{\iota}_{\mathcal{F}}$
\begin{equation*}
L_p^{\iota}(\mathcal{F},-):=\mathrm{Mel}(\Psi^{\iota}_\mathcal{F}).
\end{equation*}
The interpolation property in (\ref{interpolationprop}) comes from the link between partial Bianchi modular symbols and $\Lambda(\mathcal{F},-)$ in Remark \ref{Remarkevaluarpartial}. Additionally, by Proposition \ref{admissibility} the distribution $L_p^{\iota}(\mathcal{F},-)$ is $(h_\mathfrak{p})_{\mathfrak{p}|p}$-admissible, where $h_\mathfrak{p}=v_p(\lambda_\mathfrak{p})$. Both interpolation and growth properties give the uniqueness of $L_p^{\iota}(\mathcal{F},-)$.
\end{proof}
\begin{remark}{\label{turn-C-cuspi}}
In general, non-cuspidal Bianchi modular forms need not be $C$-cuspidal. Whilst it is not carried out in this article, the author believes that, at least in the parallel weight case, one can take a linear combination as in \cite[\S 6.1]{bellaiche2015p} to turn a non-cuspidal Bianchi modular form into a $C$-cuspidal one and then construct its $p$-adic $L$-function. This relies on the fact that in the parallel weight case, in the Fourier expansion of a Bianchi modular form $\mathcal{F}=\sum_{n=0}^{2k+2}\mathcal{F}_nX^{2k+2-n}Y^n$, only the constant terms of $\mathcal{F}_0$, $\mathcal{F}_{k+1}$ and $\mathcal{F}_{2k+2}$ can be non-trivial (see part ii in Remark \ref{Remark1.8}), and to achieve $C$-cuspidality we just need to control the constant term of $\mathcal{F}_{k+1}$ at the cusps $0$ and $\infty$. Note this is not the case in non-parallel weight $(k,\ell)$, where we need to control the constant terms of both $\mathcal{F}_{k+1}$ and $\mathcal{F}_{\ell+1}$ at the cusps $0$ and $\infty$.
\end{remark}
\section{$p$-adic $L$-function of non-cuspidal base change Bianchi modular forms}
Throughout this section, suppose $p$ splits in $K$ as $\mathfrak{p}\overline{\mathfrak{p}}$ with $\mathfrak{p}$ being the prime corresponding to the embedding $\iota_p$ from section \ref{notations}.
For a non-cuspidal base change Bianchi modular form $\mathcal{F}$, we can go further in the construction of its $p$-adic $L$-function. More precisely, we can show the existence of a complex period that allows us to prove algebraicity of the critical $L$-values of $\mathcal{F}$ and to view the Bianchi modular symbol attached to $\mathcal{F}$ $p$-adically without using an isomorphism between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$. Additionally, we can factor the $p$-adic $L$-function of $\mathcal{F}$ as a product of two Katz $p$-adic $L$-functions.
\subsection{CM modular forms}
A cuspidal newform $f=\sum_{n\geqslant1} a_nq^n$ is said to have complex multiplication (CM) by $K$ if it admits a self twist by the Kronecker character $\chi_K$ of $K$, that is, if $a_q=\chi_K(q)a_q$ for all but finitely many primes $q$.
There is a connection between modular forms with CM by $K$ and Hecke characters on $K$.
Let $\varphi$ be a Hecke character of $K$ of conductor $\mathfrak{m}$ coprime to $p$ and infinity type $(-k-1,0)$ with $k\geqslant0$. Then the theta series given by
\begin{equation*}
f_{\varphi}(z)= \sum_{n=1}^{\infty}a_n q^n= \sum_{\mathfrak{a}\; \text{integral}}\varphi(\mathfrak{a})q^{N(\mathfrak{a})}, \text{where $q=e^{2\pi i z}$, $z\in\mathbb{C}$, $\mathrm{Im}(z)>0$},
\end{equation*}
is known to be an eigenform.
Recall that $\chi_K$ has conductor $D$ and let $\varphi_{\mathbb{Z}}$ be the Dirichlet character modulo $M=N(\mathfrak{m})$ given by
\begin{equation}
\varphi_{\mathbb{Z}}:a\mapsto\frac{\varphi(a\mathcal{O}_K)}{a^{k+1}}, \text{with $a\in\mathbb{Z},(a,M)=1$}.
\end{equation}
Then we have:
\begin{theorem}\label{Theorem6.1}\textit{(Hecke, Shimura)}
$f_{\varphi}$ is a newform of weight $k+2$, level $DM$ and nebentypus character $\chi_K\varphi_{\mathbb{Z}}$, i.e.,
\begin{equation*}
f_{\varphi}\in S_{k+2}(\Gamma_0(DM),\chi_K\varphi_{\mathbb{Z}}).
\end{equation*}
\end{theorem}
On the other hand, Ribet in \cite[Prop 4.4, Thm 4.5]{ribet1977galois} showed that any newform with CM comes from a Hecke character.
\subsection{Base change Bianchi modular forms}\label{subsection basechange}
Let $f$ be a classical cuspidal newform of weight $k+2$, level $\Gamma_0(N)$ and nebentypus $\epsilon_{f}$. There is a process of \textit{base change lift} from $\mathbb{Q}$ to $K$ which constructs from $f$ a Bianchi modular form $f_{/K}$. The lifting may be described in the language of automorphic representations or in the language of automorphic forms.
Let $\pi$ be the automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{\mathbb{Q}})$ generated by $f$ and let BC($\pi$) be the base change of $\pi$ to $\mathrm{GL}_2(\mathbb{A}_K)$ (see \cite{langlands1980base}). The \textit{base change} of $f$ to $K$ is the normalized new vector $f_{/K}$ in BC($\pi$) which is a Bianchi modular form of weight $(k, k)$ and level $\mathfrak{n}\subset\mathcal{O}_K$ with $\frac{N}{(N,D)}\mathcal{O}_K|\mathfrak{n}|N\mathcal{O}_K$ (see \cite[\S 2.3]{friedberg1983imaginary}).
\begin{remark}
The Hecke eigenvalues of $f_{/K}$ are determined from those of $f$. For every prime $q$ not dividing the level of $f$, if the eigenvalue of $T_q$ on $f$ is $a_q$ then the eigenvalues $a_{\mathfrak{q}}$ for ${\mathfrak{q}}\mid q$ of its base change are given by: $a_{\mathfrak{q}}=a_{\overline{\mathfrak{q}}}=a_q$ if $q$ splits as $\mathfrak{q}\overline{\mathfrak{q}}$, $a_{\mathfrak{q}}=a_q$ if $q$ ramifies as $\mathfrak{q}^2$ and $a_{\mathfrak{q}}=a_q^2-2q^{k+1}$ if $q$ is inert as $q=\mathfrak{q}$.
\label{remark6.3}
\end{remark}
When $f$ does not have CM by $K$, its base change to $K$ is cuspidal; on the other hand, when $f=f_\varphi$ has CM by $K$, then the base change to $K$ is known to be non-cuspidal.
Using results in \cite{friedberg1983imaginary}, we can prove a nice property of vanishing for the non-cuspidal base change Bianchi modular forms.
\begin{proposition}{\label{basechangequasi}}
The Bianchi modular form $f_{\varphi/K}$ is quasi-cuspidal.
\end{proposition}
\begin{proof}
The quasi-cuspidality of $f_{\varphi/K}$ follows from a combination of the following observations and results in \cite{friedberg1983imaginary}.
Suppose first that $K$ has class number 1. Note that by (\ref{descendGL2C}), $f_{\varphi/K}$ descends to a form $F:\mathrm{GL}_2(\mathbb{C})\rightarrow V_{2k+2}(\mathbb{C})$, which can be seen as a function on $\mathrm{SL}_2(\mathbb{C})$ and which we denote by $F=(F_0,...,F_{2k+2})$. Then, the result follows from the following two observations:
i) Each lift of $f_\varphi$ quasi-vanishes at $\infty$: In general, \cite[Thm 3.1]{friedberg1983imaginary} gives the Fourier expansion of a lift $F'$ of $f_\varphi$ at the cusp $\infty$. We note that in such an expansion only the constant terms of $F'_0$ and $F'_{2k+2}$ can be non-trivial, hence $F'$ quasi-vanishes at $\infty$.
ii) The Fourier expansion of $F$ at any cusp is related to the Fourier expansion of another lift of $f_\varphi$ at $\infty$: The paragraph before \cite[Cor 3.3]{friedberg1983imaginary} tells us how to calculate the Fourier expansion of $F$ at any cusp $s=\gamma\cdot \infty$ for $\gamma\in\mathrm{GL}_2(\mathcal{O}_K)$. Write $F$ as $F(\gamma g,V,\mathcal{L}_1)$ (using the notation in op.\ cit.); then we have $F(\gamma g,V,\mathcal{L}_1)=F(g,V^\gamma,\mathcal{L}_1^\gamma)$ by the paragraph before \cite[Cor 3.3]{friedberg1983imaginary}, and $F(g,V^\gamma,\mathcal{L}_1^\gamma)$ is just another lift of $f_\varphi$, associated to the datum $V^\gamma$ and $\mathcal{L}_1^\gamma$. This last equation means that the Fourier expansion of $F$ at the cusp $s$ is the same as the Fourier expansion at $\infty$ of another lift $F'=F(g,V^\gamma,\mathcal{L}_1^\gamma)$ of $f_\varphi$.
For class number greater than 1, by (\ref{descendGL2C}) the Bianchi modular form $f_{\varphi/K}$ gives us $h$ forms $F^i:\mathrm{GL}_2(\mathbb{C})\rightarrow V_{2k+2}(\mathbb{C})$, which can be seen as functions on $\mathrm{SL}_2(\mathbb{C})$. Then, for each $F^i$ we proceed as above, taking into account the observations above \cite[Cor 3.3]{friedberg1983imaginary} for class number greater than 1.
\end{proof}
The natural objects to which one attaches $p$-adic $L$-functions are $p$-stabilizations. We are interested in the $p$-stabilization of $f_{\varphi/K}$ satisfying the small slope condition of Definition \ref{D10.4}, i.e., such that the $p$-adic valuations of the eigenvalues of $U_\mathfrak{p}$ and $U_{\overline{\mathfrak{p}}}$ are both less than $k+1$.
The $p$-stabilizations of $f_{\varphi/K}$ can be explicitly described from the $p$-stabilizations of $f_\varphi$ by considering its Hecke polynomial at $p$ given by $x^2-a_p(f_{\varphi})x+ \epsilon_{f_\varphi}(p)p^{k+1}$. Since $p=\mathfrak{p}\overline{\mathfrak{p}}$, we have that
$a_p(f_{\varphi})=\varphi(\mathfrak{p})+\varphi(\overline{\mathfrak{p}})$ and $\epsilon_{f_{\varphi}}(p)=\chi_K(p)\varphi_{\mathbb{Z}}(p)=\varphi(p\mathcal{O}_K)/p^{k+1}=\varphi(\mathfrak{p}\overline{\mathfrak{p}})/p^{k+1}$,
thus the roots of the Hecke polynomial of $f_\varphi$ at $p$ are $\alpha_p=\varphi(\mathfrak{p})$ and $\beta_p=\varphi(\overline{\mathfrak{p}})$.
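As a quick sanity check, these are indeed the roots: by the displayed identities,
\begin{equation*}
\varphi(\mathfrak{p})+\varphi(\overline{\mathfrak{p}})=a_p(f_{\varphi})
\qquad\text{and}\qquad
\varphi(\mathfrak{p})\varphi(\overline{\mathfrak{p}})=\varphi(\mathfrak{p}\overline{\mathfrak{p}})=\epsilon_{f_\varphi}(p)p^{k+1},
\end{equation*}
so $x^2-a_p(f_{\varphi})x+\epsilon_{f_\varphi}(p)p^{k+1}=(x-\varphi(\mathfrak{p}))(x-\varphi(\overline{\mathfrak{p}}))$.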
Now consider $x^2-a_{\mathfrak{p}}(f_{\varphi/K})x+ \epsilon_{f_{\varphi/K}}(\mathfrak{p})N(\mathfrak{p})^{k+1}$, the Hecke polynomial of $f_{\varphi/K}$ at $\mathfrak{p}$. By Remark \ref{remark6.3} and the definition of $\epsilon_{f_{\varphi/K}}$, we have
$a_\mathfrak{p}({f_{\varphi/K}})=a_{p}(f_{\varphi})$ and $\epsilon_{f_{\varphi/K}}(\mathfrak{p})=\chi_K(N(\mathfrak{p}))\varphi_{\mathbb{Z}}(N(\mathfrak{p}))=\chi_K(p)\varphi_{\mathbb{Z}}(p)=\epsilon_{f_{\varphi}}(p)$,
noting that the same occurs for $\overline{\mathfrak{p}}$. Then the Hecke polynomials of $f_{\varphi/K}$ at $\mathfrak{p}$ and $\overline{\mathfrak{p}}$ and the Hecke polynomial of $f_\varphi$ at $p$ are equal.
By the above, we can take the roots of the Hecke polynomial of $f_{\varphi/K}$ at $\mathfrak{p}$ to be $\alpha_\mathfrak{p}=\alpha_p=\varphi(\mathfrak{p})$ and $\beta_\mathfrak{p}=\beta_p=\varphi(\overline{\mathfrak{p}})$ and the roots of the Hecke polynomial of $f_{\varphi/K}$ at $\overline{\mathfrak{p}}$ to be $\alpha_{\overline{\mathfrak{p}}}=\varphi(\mathfrak{p})$ and $\beta_{\overline{\mathfrak{p}}}=\varphi(\overline{\mathfrak{p}})$.
If $f_\varphi^{\alpha}$ (resp. $f_\varphi^{\beta}$) is the $p$-stabilization of $f_\varphi$ corresponding to $\alpha_p$ (resp. $\beta_p$), we define its base change to $K$ to be the $p$-stabilization $f_{\varphi/K}^{\alpha\alpha}$ (resp. $f_{\varphi/K}^{\beta\beta}$) of $f_{\varphi/K}$ corresponding to $\alpha_\mathfrak{p}$, $\alpha_{\overline{\mathfrak{p}}}$ (resp. $\beta_\mathfrak{p}$, $\beta_{\overline{\mathfrak{p}}}$).
\begin{remark}{\label{basechangepstabilization}}
i) Note that $v_p(\alpha_\mathfrak{p})=v_p(\alpha_{\overline{\mathfrak{p}}})=v_p(\varphi(\mathfrak{p}))=k+1$ and $v_p(\beta_\mathfrak{p})=v_p(\beta_{\overline{\mathfrak{p}}})=v_p(\varphi(\overline{\mathfrak{p}}))=0 $. Since we are interested in the small slope $p$-stabilization of $f_{\varphi/K}$, henceforth we will work with $f_{\varphi/K}^p:=f_{\varphi/K}^{\beta\beta}$.
ii) The $p$-stabilization of a quasi-cuspidal form is again quasi-cuspidal, because the main matrix used in the $p$-stabilization process does not permute the components of the Bianchi modular form. In particular, $f_{\varphi/K}^p$ is quasi-cuspidal.
\end{remark}
\subsection{Non-cuspidal base change $L$-function}{\label{basechangeLfunction}}
The fact that $f_{\varphi/K}$ is constructed from a Hecke character allows us to relate its complex $L$-function to $L$-functions of Hecke characters and to prove algebraicity of the critical $L$-values of $f_{\varphi/K}^p$.
\begin{lemma}\label{Prop 9.1}
Let $\psi$ be a Hecke character of $K$ and let $\psi^c(\mathfrak{q}):=\psi(\overline{\mathfrak{q}})$ where $\mathfrak{q}$ is an ideal of $K$ and $\overline{\mathfrak{q}}$ is its conjugate ideal. Then
\begin{equation*}
L(f_{\varphi/K},\psi,s)=L(\varphi^c\psi,s)L(\varphi^c\psi^c\lambda_K,s)=L(\varphi^c\psi\lambda_K,s)L(\varphi^c\psi^c,s)
\end{equation*}
where $\lambda_K=\chi_{K}\circ N$.
\end{lemma}
\begin{proof}
The Fourier coefficients of $f_{\varphi/K}$ at prime ideals $\mathfrak{q}|q$ are:
(i) $a_{\mathfrak{q}}=a_{\overline{\mathfrak{q}}}=a_q=\varphi(\mathfrak{q})+\varphi(\overline{\mathfrak{q}})$, if $q=\mathfrak{q}\overline{\mathfrak{q}}$;
(ii) $a_{\mathfrak{q}}=a_q=\varphi(\mathfrak{q})$ if $q=\mathfrak{q}^2$;
(iii) $a_{\mathfrak{q}}=a_q^2-2\chi_K(q)\varphi_{\mathbb{Z}}(q)q^{k+1}=2\varphi(\mathfrak{q})$ if $q=\mathfrak{q}$.
Then comparing the Euler factors at each prime $\mathfrak{q}\subset\mathcal{O}_K$, we obtain the result.
\end{proof}
\begin{remark}
There are six more ways to factor $L(f_{\varphi/K},\psi,s)$ as a product of two Hecke $L$-functions; this comes from the fact that for a Hecke character $\nu$ we have $L(\nu,s)=L(\nu^c,s)$. Similar factorizations in the $p$-adic setting do not necessarily hold, because if $\nu$ has infinity type $(q,r)$ the involution $\nu \mapsto \nu^c$ corresponds to the map $(q,r)\mapsto(r,q)$ on weight space and therefore does not preserve the lower right quadrant of weights of Hecke characters that lie in the range of classical interpolation of the Katz $p$-adic $L$-functions (see Theorem \ref{Katztheorem}).
\end{remark}
To prove algebraicity of the critical $L$-values of $f_{\varphi/K}$, we first recall some algebraicity results from \cite{bertolini2012p} about $L$-functions of Hecke characters and use the factorization in Lemma \ref{Prop 9.1}.
Let $A$ be an elliptic curve with complex multiplication by $\mathcal{O}_K$, defined over a finite extension $F$ of $K$ as in \cite[\S 2C]{bertolini2012p}. Let $\Omega(A)$ be the complex period attached to $A$.
For a Hecke character $\psi$, let $\Omega(\psi^c)$ be the complex period attached to $\psi^c$ in the discussion preceding \cite[Prop 2.11]{bertolini2012p}.
\begin{lemma}{\label{Lemma1.1}}
Let $\psi$ be a Hecke character of $K$ with infinity type $(a,b)$ such that $a>b$. Then for $m$ a critical value of $L(\psi,s)$ we have
\begin{equation*}
\frac{L(\psi,m)}{(2\pi i)^{m+b}\Omega(A)^{a-b}}\in\overline{\mathbb{Q}}.
\end{equation*}
\end{lemma}
\begin{proof}
Consider the character $\psi^{-1}$, which, following the definition of infinity type in \cite{bertolini2012p}, has infinity type $(a,b)$.
By \cite[Prop 2.11]{bertolini2012p} we have
\begin{equation}{\label{eq1.5}}
\frac{\Omega((\psi^{-1})^c)}{(2\pi i)^b\Omega(A)^{a-b}}\in\overline{\mathbb{Q}}.
\end{equation}
Since $a>b$, by \cite[Thm 2.12]{bertolini2012p} we obtain, for $m$ a critical value of $L(\psi,s)$, that
\begin{equation}{\label{eq1.6}}
\frac{L((\psi^{-1})^{-1},m)}{(2\pi i)^m\Omega((\psi^{-1})^c)}\in\overline{\mathbb{Q}}.
\end{equation}
Then putting together (\ref{eq1.5}) and (\ref{eq1.6}) the result follows.
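Explicitly, writing the quotient as a product of the two algebraic quantities above,
\begin{equation*}
\frac{L(\psi,m)}{(2\pi i)^{m+b}\Omega(A)^{a-b}}
=\frac{L((\psi^{-1})^{-1},m)}{(2\pi i)^{m}\Omega((\psi^{-1})^c)}\cdot\frac{\Omega((\psi^{-1})^c)}{(2\pi i)^{b}\Omega(A)^{a-b}}\in\overline{\mathbb{Q}},
\end{equation*}
since $(\psi^{-1})^{-1}=\psi$.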
\end{proof}
We now define a complex period attached to $f_{\varphi/K}$ that gives us algebraicity of $L$-values.
\begin{proposition}{\label{period base change}}
There exists a period $\Omega_{f_{\varphi/K}}\in\mathbb{C}^\times$
such that for all Hecke characters $\psi$ of $K$ with infinity type $0\leqslant(q,r)\leqslant(k,k)$, we have $\Lambda(f_{\varphi/K},\psi)/\Omega_{f_{\varphi/K}}\in \overline{\mathbb{Q}}$.
\end{proposition}
\begin{proof}
By Lemma \ref{Prop 9.1} we have
\begin{equation*}
L(f_{\varphi/K},\psi,1)=L(\varphi^c\psi,1)L(\varphi^c\psi^c\lambda_K,1)=L(\varphi^c\psi|\cdot|_{\mathbb{A}_K},0)L(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K},0).
\end{equation*}
Recall that $\varphi$ has infinity type $(-k-1,0)$, then $\varphi^c\psi|\cdot|_{\mathbb{A}_K}$ has infinity type $(q+1,r-k)$ and $\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K}$ has infinity type $(r+1,q-k)$. Noting that $q+1>r-k$ and $r+1>q-k$, by Lemma \ref{Lemma1.1} we have
\begin{equation*}
\frac{L(\varphi^c\psi|\cdot|_{\mathbb{A}_K},0)}{(2\pi i)^{r-k}\Omega(A)^{q+1-r+k}}\in\overline{\mathbb{Q}}\;\;\;\;\textrm{and}\;\;\;\;
\frac{L(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K},0)}{(2\pi i)^{q-k}\Omega(A)^{r+1-q+k}}\in\overline{\mathbb{Q}}.
\end{equation*}
We obtain that
\begin{equation*}
\frac{L(f_{\varphi/K},\psi,1)}{(2\pi i)^{q+r-2k}\Omega(A)^{2k+2}}\in\overline{\mathbb{Q}}.
\end{equation*}
Finally, since
\begin{equation*}
\Lambda(f_{\varphi/K},\psi)=\frac{\Gamma(q+1)\Gamma(r+1)}{(2\pi i)^{q+r+2}}L(f_{\varphi/K},\psi,1),
\end{equation*}
taking
\begin{equation*}
\Omega_{f_{\varphi/K}}:=\left(\frac{\Omega(A)}{2\pi i}\right)^{2k+2}
\end{equation*}
we obtain the result.
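Explicitly, with this choice of period,
\begin{equation*}
\frac{\Lambda(f_{\varphi/K},\psi)}{\Omega_{f_{\varphi/K}}}
=\Gamma(q+1)\Gamma(r+1)\,\frac{L(f_{\varphi/K},\psi,1)}{(2\pi i)^{q+r-2k}\Omega(A)^{2k+2}}\in\overline{\mathbb{Q}},
\end{equation*}
since $\Gamma(q+1)\Gamma(r+1)\in\mathbb{Q}$.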
\end{proof}
Suppose $f_{\varphi/K}$ does not have level at $p$ and consider its $p$-stabilization $f_{\varphi/K}^p$ as in the previous section.
\begin{corollary}{\label{lemma6.9}}
For all Hecke characters $\psi$ of $K$ with infinity type $0\leqslant(q,r)\leqslant(k,k)$, we have $\Lambda(f_{\varphi/K}^p,\psi)/\Omega_{f_{\varphi/K}}\in \overline{\mathbb{Q}}$.
\end{corollary}
\begin{proof}
The $L$-functions of $f_{\varphi/K}^p$ and $f_{\varphi/K}$ are related (see \cite[\S3.3]{palacios} for Bianchi modular forms with trivial nebentypus and $K$ with class number 1) by
\begin{equation*}
\Lambda(f_{\varphi/K}^p,\psi)=\prod_{\mathfrak{q}|p}\left(1-\frac{\varphi(\mathfrak{p})\psi(\mathfrak{q})}{N(\mathfrak{q})}\right)\Lambda(f_{\varphi/K},\psi).
\end{equation*}
By Proposition \ref{period base change} we obtain the result.
\end{proof}
\subsection{Non-cuspidal base change $p$-adic $L$-function}{\label{s5.4}}
Since $f_{\varphi/K}^p$ has level at $p$ and is quasi-cuspidal by part ii) of Remark \ref{basechangepstabilization}, it is $C$-cuspidal with $C=(C_1,...,C_h)$ and $C_i=\mathbb{P}^1(K)$ for all $i$.
Let $f^i$ for $i=1,\ldots,h$ be the collection of descents to $\mathcal{H}_3$ of $f_{\varphi/K}^p$. By Remark \ref{fullBianchi} and Proposition \ref{attach partial} we can attach to $f_{\varphi/K}^p$ a full Bianchi modular symbol $\phi_{f_{\varphi/K}^p}=(\phi_{f^1},...,\phi_{f^h})$ where
\begin{equation*}
\phi_{f^i}(\{a\}-\{\infty\})= \sum_{s,u=0}^{k}c_{s,u}^i(a)(\mathcal{Y}-a\mathcal{X})^{k-s}\mathcal{X}^s(\overline{\mathcal{Y}}-\overline{a}\overline{\mathcal{X}})^{k-u}\overline{\mathcal{X}}^u,
\end{equation*}
with $c_{s,u}^i(a)$ as in Definition \ref{factor} for $a\in K$.
\begin{remark}
Recall that for each $i$, the integrals inside $c_{s,u}^i(a)$ are convergent because $f^i$ quasi-vanishes at all cusps, since $f_{\varphi/K}^p$ is quasi-cuspidal.
\end{remark}
\begin{proposition}{\label{Prop6.7}}
Let $\Omega_{f_{\varphi/K}}$ be the period in Proposition \ref{period base change}, then the Bianchi modular symbol $\phi'_{f_{\varphi/K}^p}:=\phi_{f_{\varphi/K}^p}/\Omega_{f_{\varphi/K}}$ takes values in $V_{k,k}^*(E)$ for some number field $E$.
\end{proposition}
\begin{proof}
Let $\psi$ be a Hecke character of $K$ of infinity type $0\leqslant(q,r)\leqslant(k,k)$; then by Theorem \ref{T4.6} we have
\begin{equation*}
\Lambda(f_{\varphi/K}^p,\psi)=\left[\frac{(-1)^{k+q+r}2}{DwW(\psi^{-1})}\right]\sum_{i=1}^{h}\left[\psi(t_i)\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1}}\psi_\mathfrak{f}(a)c_{q,r}^i(a)\right].
\end{equation*}
Dividing both sides by $\Omega_{f_{\varphi/K}}$ we obtain
\begin{equation}{\label{e.6.2}}
\left[\frac{(-1)^{k+q+r}2}{DwW(\psi^{-1})}\right]^{-1}\frac{\Lambda(f_{\varphi/K}^p,\psi)}{\Omega_{f_{\varphi/K}}}=\sum_{i=1}^{h}\left[\psi(t_i)\sum_{\substack{[a]\in\mathfrak{f}^{-1}/\mathcal{O}_K \\ ((a)\mathfrak{f},\mathfrak{f})=1}}\psi_\mathfrak{f}(a)\frac{c_{q,r}^i(a)}{\Omega_{f_{\varphi/K}}}\right].
\end{equation}
By Corollary \ref{lemma6.9} we have $\Lambda(f_{\varphi/K}^p,\psi)/\Omega_{f_{\varphi/K}}\in\overline{\mathbb{Q}}$ for all $\psi$ with infinity type $0\leqslant(q,r)\leqslant(k,k)$; then the left-hand side of (\ref{e.6.2}) is an algebraic number and consequently, by linear independence of characters, we have $c_{q,r}^i(a)/\Omega_{f_{\varphi/K}}\in\overline{\mathbb{Q}}$ for each $i$.
Consider the polynomial $(X+aY)^qY^{k-q}(\overline{X}+\overline{a}\overline{Y})^{r}\overline{Y}^{k-r}$; we can check that
\begin{equation*}
\phi_{f^i}(\{a\}-\{\infty\})\left[(X+aY)^qY^{k-q}(\overline{X}+\overline{a}\overline{Y})^{r}\overline{Y}^{k-r}\right]=c_{q,r}^i(a).
\end{equation*}
Then, defining $\phi'_{f^i}:=\phi_{f^i}/\Omega_{f_{\varphi/K}}$ we have
\begin{equation}{\label{algebra}}
\phi'_{f^i}(\{a\}-\{\infty\})\left[(X+aY)^qY^{k-q}(\overline{X}+\overline{a}\overline{Y})^{r}\overline{Y}^{k-r}\right]=c_{q,r}^i(a)/\Omega_{f_{\varphi/K}}\in\overline{\mathbb{Q}}.
\end{equation}
Now, using (\ref{algebra}), we show that $\phi'_{f^i}\in \mathrm{Symb}_{\Gamma_i(\mathfrak{n})}(V_{k,k}^*(\overline{\mathbb{Q}}))$. For this, consider a divisor $D\in\Delta^0:=\mathrm{Div}^0(\mathbb{P}^1(K))$ and a polynomial $P\left[\binom{X}{Y}\binom{\overline{X}}{\overline{Y}}\right]\in V_{k,k}(\overline{\mathbb{Q}})$; we want to show $[\phi'_{f^i}(D)](P)\in\overline{\mathbb{Q}}$.
Since $\Delta^0$ is generated by the divisors $\{a\}-\{\infty\}$, with $a\in K$, it is enough to prove $[\phi'_{f^i}(\{a\}-\{\infty\})](P)\in\overline{\mathbb{Q}}$.
Let $P\left[\binom{X}{Y}\binom{\overline{X}}{\overline{Y}}\right]=\sum_{b,d=0}^{k}t_{b,d}X^bY^{k-b}\overline{X}^d\overline{Y}^{k-d}$ with $t_{b,d}\in\overline{\mathbb{Q}}$. For $0\leqslant b,d \leqslant k$, we can write each $X^bY^{k-b}\overline{X}^d\overline{Y}^{k-d}$ as a homogeneous polynomial $Q_{b,d}\left[\binom{X+aY}{Y}\binom{\overline{X}+\overline{a}\overline{Y}}{\overline{Y}}\right]$ after replacing $X$ by $(X+aY)-aY$ and $\overline{X}$ by $(\overline{X}+\overline{a}\overline{Y})-\overline{a}\overline{Y}$. Using (\ref{algebra}) on each $Q_{b,d}$, we obtain $[\phi'_{f^i}(\{a\}-\{\infty\})](Q_{b,d})\in\overline{\mathbb{Q}}$ and consequently $[\phi'_{f^i}(\{a\}-\{\infty\})](P)\in\overline{\mathbb{Q}}$.
Finally, using that $\Delta^0$ is finitely generated as a $\mathbb{Z}[\Gamma_{i}(\mathfrak{n})]$-module (see \cite[Lem 3.8]{chris2017}), we have $\phi'_{f^i}\in\mathrm{Symb}_{\Gamma_i(\mathfrak{n})}(V_{k,k}^*(E_i))$ for some number field $E_i$ for each $i=1,\ldots,h$; then there exists a number field $E$ such that $\phi'_{f_{\varphi/K}^p}=(\phi'_{f^1},...,\phi'_{f^h})\in\mathrm{Symb}_{\Omega_0(\mathfrak{n})}(V_{k,k}^*(E))$.
\end{proof}
The previous Proposition and the work done in section \ref{S4} allow us to obtain the $p$-adic $L$-function of $f_{\varphi/K}^p$ without using an isomorphism $\iota$ between $\mathbb{C}$ and $\overline{\mathbb{Q}}_p$.
\begin{theorem}{\label{Cor8.3}}
Let $\varphi$ be a Hecke character of $K$ with conductor $\mathfrak{m}$ coprime to $p$ and infinity type $(-k-1,0)$ with $k\geqslant0$, denote by $f_{\varphi}$ and $f_\varphi^p$ the CM modular form induced by $\varphi$ and its ordinary $p$-stabilization respectively. Let $f_{\varphi/K}^p$ be the base change to $K$ of $f_{\varphi}^p$ and $\Omega_{f_{\varphi/K}}$ be the complex period of Proposition \ref{period base change}. Then there exists a unique measure $L_p(f_{\varphi/K}^p,-)$ on $\mathrm{Cl}_K(p^{\infty})$ such that for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}=\mathfrak{p}^t\overline{\mathfrak{p}}^s$ and infinity type $0 \leqslant (q,r) \leqslant (k,k)$, we have
\begin{equation}\label{equation6.3}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=\left[\prod_{\mathfrak{q}|p}\left(1-\frac{1}{\varphi(\overline{\mathfrak{p}})\psi(\mathfrak{q})}\right)\right]\left[ \frac{DwW(\psi^{-1})}{(-1)^{k+q+r}2 \varphi(\overline{\mathfrak{p}})^{t+s}\Omega_{f_{\varphi/K}}} \right] \Lambda(f_{\varphi/K}^p,\psi).
\end{equation}
\end{theorem}
\begin{proof}
By Proposition \ref{attach partial} we can attach to $f_{\varphi/K}^p$ a complex-valued Bianchi modular symbol $\phi_{f_{\varphi/K}^p}$. By Proposition \ref{Prop6.7} the Bianchi modular symbol $\phi'_{f_{\varphi/K}^p}=\phi_{f_{\varphi/K}^p}/\Omega_{f_{\varphi/K}}$ has values in $V_{k,k}^*(L)$ for a sufficiently large $p$-adic field $L$. Since $f_{\varphi/K}^p$ has small slope, we can lift $\phi'_{f_{\varphi/K}^p}$ to its corresponding unique overconvergent Bianchi eigensymbol $\Psi=(\Psi_1,...,\Psi_h)$ using Theorem \ref{partialcontrol}. Taking the Mellin transform of $\Psi$ we obtain a locally analytic distribution $L_p(f_{\varphi/K}^p,-)=\mathrm{Mel}(\Psi)$ on $\mathrm{Cl}_K(p^{\infty})$ that is $(h_\mathfrak{p},h_{\overline{\mathfrak{p}}})$-admissible, where $h_\mathfrak{p}=v_p(\lambda_\mathfrak{p})=0$ and $h_{\overline{\mathfrak{p}}}=v_p(\lambda_{\overline{\mathfrak{p}}})=0$; it is therefore bounded, i.e., a measure.
Using the connection between Bianchi modular symbols and $L$-values, for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}=\mathfrak{p}^t\overline{\mathfrak{p}}^s$ and infinity type $0 \leqslant (q,r) \leqslant (k,k)$ we obtain an interpolation formula analogous to that of Theorem \ref{C-cuspidal p-adic}, given by
\begin{equation*}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=\left[ \prod_{\mathfrak{q}|p} \left(1-\frac{1}{a_\mathfrak{p}(f_{\varphi/K}^p)\psi(\mathfrak{p})}\right) \right]\left[ \frac{(-1)^{k+q+r}DwW(\psi^{-1})}{2a_\mathfrak{f}(f_{\varphi/K}^p)\Omega_{f_{\varphi/K}}} \right] \Lambda(f_{\varphi/K}^p,\psi).
\end{equation*}
Since $a_\mathfrak{p}(f_{\varphi/K}^p)=a_{\overline{\mathfrak{p}}}(f_{\varphi/K}^p)=\varphi(\overline{\mathfrak{p}})$ and $a_\mathfrak{f}(f_{\varphi/K}^p)=a_\mathfrak{p}(f_{\varphi/K}^p)^ta_{\overline{\mathfrak{p}}}(f_{\varphi/K}^p)^s=\varphi(\overline{{\mathfrak{p}}})^{t+s}$, we obtain the interpolation desired, which gives us the uniqueness of $L_p(f_{\varphi/K}^p,-)$.
\end{proof}
The functions $L_p(f_{\varphi/K}^p,-)$ from the previous Theorem and $L_p^\iota(f_{\varphi/K}^p,-)$ from Theorem \ref{C-cuspidal p-adic} can be related using the factor $\iota(\Omega_{f_{\varphi/K}})$.
\begin{proposition}{\label{equality of measures}}
We have the following equality of measures on $\mathrm{Cl}_K(p^\infty)$
\begin{equation*}
L_p(f_{\varphi/K}^p,-)=\frac{L_p^\iota(f_{\varphi/K}^p,-)}{\iota(\Omega_{f_{\varphi/K}})}.
\end{equation*}
\end{proposition}
\begin{proof}
Using the interpolation property satisfied by each measure with respect to the $L$-function of $f_{\varphi/K}^p$, for every Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}|(p^\infty)$ and infinity type $0 \leqslant (q,r) \leqslant (k,k)$, we have
\begin{equation*}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=\frac{L_p^\iota(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})}{\iota(\Omega_{f_{\varphi/K}})}.
\end{equation*}
Note that there are infinitely many characters $\psi$ as above and both sides of the equation are bounded functions. Then the result follows because a non-zero bounded analytic function on an open ball has at most finitely many zeros.
\end{proof}
\subsection{Non-cuspidal base change and Katz $p$-adic $L$-functions}{\label{s5.5}}
To relate the $p$-adic $L$-function of $f_{\varphi/K}^p$ with Katz $p$-adic $L$-functions, we first introduce some notation and state the interpolation property of Katz $p$-adic $L$-functions; for more details the reader may consult \cite{bertolini2012p} and \cite{hida1993anti}.
Let $\psi$ be a Hecke character of $K$ of suitable infinity type and conductor $\mathfrak{m}\mathfrak{f}$ with $\mathfrak{m}$ coprime to $p$ and $\mathfrak{f}|p^\infty$. Katz constructed in \cite{katz1978p} the $p$-adic $L$-function of $\psi$ when $\mathfrak{m}$ is trivial; later, Hida and Tilouine \cite{hida1993anti} extended Katz's construction to non-trivial $\mathfrak{m}$.
Recalling the Gauss sum of $\psi$ from Definition \ref{d1.5}, we now define the \textit{local} Gauss sum of $\psi$ at prime ideals $\mathfrak{q}$ dividing the conductor of $\psi$ by
\begin{equation*}
\tau_{\mathfrak{q}}(\psi):=\psi(\pi_{\mathfrak{q}}^{-t})\sum_{u\in(\mathcal{O}_{\mathfrak{q}}/{\mathfrak{q}}^t)^\times}\psi_{\mathfrak{q}}(u)e_{K}(u/\pi_{\mathfrak{q}}^{t}d_{\mathfrak{q}})
\end{equation*}
where $e_{K}$ is the character in section \ref{Fourier}, $\pi_\mathfrak{q}$ is a prime element in $\mathcal{O}_\mathfrak{q}$, $t=t(\mathfrak{q})$ is the exponent of $\mathfrak{q}$ in the conductor of $\psi$ and $d_\mathfrak{q}$ is the $\mathfrak{q}$ component of the idele $d$ associated to the different ideal of $K$. Outside the conductor of $\psi$, we simply put $\tau_{\mathfrak{q}}(\psi)=1$.
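As a sanity check on the shape of such sums, the following sketch (not from the paper; it treats only the simplest rational analogue, namely a multiplicative character of prime conductor $p$ rather than a Hecke character of $K$) computes a classical Gauss sum numerically and verifies the familiar absolute-value identity $|\tau(\chi)|^2 = p$ for a primitive character:

```python
import cmath

# Toy analogue of the local Gauss sum above: tau(chi) = sum_u chi(u) e^{2 pi i u/p}
# for a nontrivial multiplicative character chi mod a prime p, built from a
# primitive root g.  (p = 7, g = 3 are illustrative choices.)
p, g = 7, 3

# Discrete logarithm of each unit mod p with respect to g.
index = {}
v = 1
for k in range(p - 1):
    index[v] = k
    v = (v * g) % p

def chi(u):
    # chi(g^k) = e^{2 pi i k/(p-1)}: a character of exact order p-1, hence primitive.
    return cmath.exp(2j * cmath.pi * index[u % p] / (p - 1))

tau = sum(chi(u) * cmath.exp(2j * cmath.pi * u / p) for u in range(1, p))

# For a primitive character of conductor p, |tau(chi)|^2 = p.
assert abs(abs(tau) ** 2 - p) < 1e-9
```

The same normalization issues (the factor $\psi(\pi_{\mathfrak{q}}^{-t})$, the additive character twisted by $d_{\mathfrak{q}}$) are what the manipulations in the proof of Theorem \ref{T9.4} below keep track of.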
Recall the elliptic curve $A$ with complex multiplication by $\mathcal{O}_K$ of section \ref{basechangeLfunction}. In Lemma \ref{Lemma1.1}, we obtained algebraicity of critical $L$-values of $\psi$, using the periods $\Omega(A)$ and $\Omega(\psi^c)$. Analogously, there exists a $p$-adic period $\Omega_p(A)$, obtained by considering the base change $A_{\mathbb{C}_p}$ of the elliptic curve $A$ to the $p$-adic complex numbers $\mathbb{C}_p$, that gives us algebraicity of the $p$-adic $L$-function of $\psi$ (see \cite[\S 2D]{bertolini2012p} for more details).
\begin{theorem}{\label{Katztheorem}}(Katz, Hida-Tilouine)
Let $\mathfrak{m}$ be an ideal of $K$ coprime to $p$. There exists a unique measure $L_{p,\rm{Katz}}(-)$ on the ray class group $\mathrm{Cl}_K(\mathfrak{m} p^\infty)$ whose value on the $p$-adic avatar $\psi_{p-\text{fin}}$ of a Hecke character $\psi$ of $K$ of infinity type $(a,b)$ with $a\geqslant1$ and $b\leqslant0$ and conductor $\mathfrak{m}\mathfrak{f}$ with $\mathfrak{f}|(p^\infty)$ is given by:
\begin{equation*}
\frac{L_{p,\rm{Katz}}(\psi_{p-\text{fin}})}{\Omega_p(A)^{a-b}}=\frac{w}{2} W_{\mathfrak{p}}(\psi)\frac{(-1)^{a+b}\Gamma(a)\sqrt{D}^{b}}{(2\pi)^{b}\Omega(A)^{a-b}} (1-\psi(\overline{\mathfrak{p}}))\left(1-\frac{1}{\psi(\mathfrak{p})N(\mathfrak{p})^t}\right) L(\psi,0),
\end{equation*}
where $W_{\mathfrak{p}}(\psi)=N(\mathfrak{p})^{-t}\tau_{\mathfrak{p}}(\psi)$.
\end{theorem}
In order to link $L_p(f_{\varphi/K}^p,-)$ with Katz $p$-adic $L$-functions we use the factorization of $L(f_{\varphi/K},\psi,s)$ as the product of two $L$-functions of Hecke characters in section \ref{basechangeLfunction}. For this, we need to rewrite the interpolation property $L_p(f_{\varphi/K}^p,-)=(*)\Lambda(f_{\varphi/K}^p,-)$ in terms of $\Lambda(f_{\varphi/K},-)$. Considering the relation between $\Lambda(f_{\varphi/K}^p,-)$ and $\Lambda(f_{\varphi/K},-)$ in the proof of Corollary \ref{lemma6.9}, for any Hecke character $\psi$ of $K$ of conductor $\mathfrak{f}=\mathfrak{p}^t\overline{\mathfrak{p}}^s$ and infinity type $0 \leqslant (q,r) \leqslant (k,k)$, we have
\begin{equation}{\label{interpolation no p-estab}}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=E_p(f_{\varphi/K}^p)\left[ \frac{DwW(\psi^{-1})}{(-1)^{k+q+r}2 \varphi(\overline{\mathfrak{p}})^{t+s}\Omega_{f_{\varphi/K}}} \right] \Lambda(f_{\varphi/K},\psi),
\end{equation}
where
\begin{equation*}
E_p(f_{\varphi/K}^p)=\prod_{\mathfrak{q}|p} \left(1-\frac{\varphi(\mathfrak{p})\psi(\mathfrak{q})}{N(\mathfrak{p})}\right) \left(1-\frac{1}{\varphi(\overline{\mathfrak{p}})\psi(\mathfrak{q})}\right).
\end{equation*}
\begin{theorem}{\label{T9.4}}
Let $\varphi$ be a Hecke character of $K$ with conductor $\mathfrak{m}$ coprime to $p$ and infinity type $(-k-1,0)$ with $k\geqslant0$. Denote by $f_{\varphi}$ the CM modular form induced by $\varphi$ and $f_\varphi^p$ its ordinary $p$-stabilization. Let $f_{\varphi/K}^p$ be the base change to $K$ of $f_{\varphi}^p$, then for all $\kappa\in\mathfrak{X}(\mathrm{Cl}_K(p^\infty))$ we have
\begin{equation*}
L_p(f_{\varphi/K}^p,\kappa)=\frac{1}{\Omega_p(A)^{2k+2}}L_{p,\rm{Katz}}(\varphi_{p-\rm{fin}}^c\kappa \chi_p)L_{p,\rm{Katz}}(\varphi_{p-\rm{fin}}^c\kappa^c \chi_p),
\end{equation*}
where $\chi_p$ is the $p$-adic avatar of the adelic norm character and $\Omega_p(A)$ is the $p$-adic period in Theorem \ref{Katztheorem}.
\end{theorem}
\begin{remark}{\label{acomodandoKatz}}
Note that $L_p(f_{\varphi/K}^p,-)$ is a function on $\mathfrak{X}(\mathrm{Cl}_K(p^\infty))$ while the Katz $p$-adic $L$-functions in the above Theorem are functions on $\mathfrak{X}(\mathrm{Cl}_K(\overline{\mathfrak{m}} p^\infty))$. To relate them, we see the latter as functions on $\mathfrak{X}(\mathrm{Cl}_K(p^\infty))$ via the map $\mathrm{Cl}_K(\overline{\mathfrak{m}} p^\infty)\rightarrow\mathrm{Cl}_K(p^\infty)$.
\end{remark}
\begin{proof}
Recall that $L_p(f_{\varphi/K}^p,-)$ is $(h_\mathfrak{p},h_{\overline{\mathfrak{p}}})$-admissible, with $h_\mathfrak{p}=v_p(\lambda_\mathfrak{p})=0$ and $h_{\overline{\mathfrak{p}}}=v_p(\lambda_{\overline{\mathfrak{p}}})=0$, and hence is a measure. To obtain the equality of measures in the Theorem, it suffices to check the equality on $p$-adic characters $\psi_{p-\text{fin}}$ coming from finite order Hecke characters $\psi$ of conductor $\mathfrak{f}=\mathfrak{p}^t\overline{\mathfrak{p}}^s$.
For such characters, from (\ref{interpolation no p-estab}) and Definition \ref{gammafactors} we have
\begin{equation*}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=E_p(f_{\varphi/K}^p)\left[ \frac{DwW(\psi^{-1})}{(-1)^{k+1}2 \varphi(\overline{\mathfrak{p}})^{t+s}\Omega'_{f_{\varphi/K}}(2\pi)^2} \right] L(f_{\varphi/K},\psi,1),
\end{equation*}
and by Theorem \ref{Katztheorem} we have the following two interpolations
\begin{equation*}
\frac{L_{p,\rm{Katz}}((\varphi^c\psi|\cdot|_{\mathbb{A}_K})_{p-\text{fin}})}{\Omega_p(A)^{k+1}}=\frac{(-1)^{k-1}w W_{\mathfrak{p}}(\varphi^c\psi|\cdot|_{\mathbb{A}_K})(2\pi)^k}{2\sqrt{D}^k\Omega(A)^{k+1}E_p(\varphi^c\psi|\cdot|_{\mathbb{A}_K})^{-1}}L(\varphi^c\psi,1),
\end{equation*}
where
\begin{equation}
E_p(\varphi^c\psi|\cdot|_{\mathbb{A}_K})=\left(1-\frac{\varphi(\mathfrak{p})\psi(\overline{\mathfrak{p}})}{N(\mathfrak{p})}\right) \left(1-\frac{1}{\varphi(\overline{\mathfrak{p}})\psi(\mathfrak{p})}\right)
\label{e9.3}
\end{equation}
and
\begin{equation*}
\frac{L_{p,\rm{Katz}}((\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})_{p-\text{fin}})}{\Omega_p(A)^{k+1}}=\frac{(-1)^{k-1}w W_{\mathfrak{p}}(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})(2\pi)^k}{2\sqrt{D}^{k}\Omega(A)^{k+1}E_p(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})^{-1}}L(\varphi^c\psi^c\lambda_K,1),
\end{equation*}
where
\begin{equation}
E_p(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})=\left(1-\frac{\varphi(\mathfrak{p})\psi(\mathfrak{p})}{N(\mathfrak{p})}\right) \left(1-\frac{1}{\varphi(\overline{\mathfrak{p}})\psi(\overline{\mathfrak{p}})}\right).
\label{e9.4}
\end{equation}
In order to obtain the desired equality, we use the factorization in Lemma \ref{Prop 9.1}, simplify the Euler factors and the Gauss sums, and compare the periods in these interpolations as follows:
i) \textit{Euler factors}: The product of the Euler factors in equations (\ref{e9.3}) and (\ref{e9.4}) is equal to the Euler factor in the interpolation of $L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})$, i.e.,
\begin{equation*}
E_p(\varphi^c\psi|\cdot|_{\mathbb{A}_K})E_p(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})=E_p(f_{\varphi/K}^p).
\end{equation*}
ii) \textit{Gauss sums}: The Gauss sums in the interpolations of the Katz $p$-adic $L$-functions are
\begin{align*}
W_p(\varphi^c\psi|\cdot|_{\mathbb{A}_K})&=N(\mathfrak{p}^{-t})\tau_{\mathfrak{p}}(\varphi^c\psi|\cdot|_{\mathbb{A}_K})\\
&=(\varphi^c\psi)(\pi_{\mathfrak{p}}^{-t})\sum_{u\in(\mathcal{O}_{\mathfrak{p}}/{\mathfrak{p}}^t)^\times}\psi_{\mathfrak{p}}(u)e_{K}(\pi_{\mathfrak{p}}^{-t}u)\\
&=(\varphi^c\psi)(\pi_{\mathfrak{p}}^{-t})\sum_{u\in(\mathcal{O}_K/{\mathfrak{p}}^t)^\times}\psi_{\mathfrak{p}}(u)e^{-2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(u/\pi_{\mathfrak{p}}^{t})}\\
&=(\varphi^c\psi)(\pi_{\mathfrak{p}}^{-t})\psi_{\mathfrak{p}}(-\delta)^{-1}\sum_{u\in(\mathcal{O}_K/{\mathfrak{p}}^t)^\times}\psi_{\mathfrak{p}}(u)e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(u/\pi_{\mathfrak{p}}^{t}\delta)},
\end{align*}
\begin{align*}
W_p(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})&=N(\mathfrak{p}^{-s})\tau_{\mathfrak{p}}(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})\\
&=(\varphi^c\psi^c)(\pi_{\mathfrak{p}}^{-s})\sum_{u\in(\mathcal{O}_{\mathfrak{p}}/{\mathfrak{p}}^s)^\times}\psi_{\mathfrak{p}}^c(u)e_{K}(\pi_{\mathfrak{p}}^{-s}u)\\
&=(\varphi^c\psi^c)(\pi_{\mathfrak{p}}^{-s})\sum_{u\in(\mathcal{O}_\mathfrak{p}/\mathfrak{p}^s)^\times}\psi_{\mathfrak{p}}(\overline{u})e^{-2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(u/\pi_{\mathfrak{p}}^{s})}\\
&=(\varphi^c\psi^c)(\pi_{\mathfrak{p}}^{-s})\sum_{u\in(\mathcal{O}_\mathfrak{p}/\mathfrak{p}^s)^\times}\psi_{\mathfrak{p}}(\overline{u})e^{-2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(\overline{u}/\overline{\pi_{\mathfrak{p}}^{s}})}\\
&=(\varphi^c\psi^c)(\pi_{\mathfrak{p}}^{-s})\psi_{\overline{\mathfrak{p}}}(-\delta)^{-1}\sum_{v\in(\mathcal{O}_K/\overline{\mathfrak{p}}^s)^\times}\psi_{\overline{\mathfrak{p}}}(v)e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(v/\overline{\pi_{\mathfrak{p}}}^{s}\delta)}.
\end{align*}
Then it can be shown, using for example ii) in \cite[Prop 2.14]{narki}, that the product of the two Gauss sums is related to $W(\psi^{-1})$ by
\begin{align*}
&W_p(\varphi^c\psi|\cdot|_{\mathbb{A}_K})W_p(\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})\\
&=\varphi^c(\pi_{\mathfrak{p}}^{-t-s})\psi^{-1}(\pi_\mathfrak{f})\psi_\mathfrak{f}(-\delta)^{-1}\sum_{b\in(\mathcal{O}_K/\mathfrak{f})^\times}\psi_\mathfrak{f}(b)e^{2\pi i \mathrm{Tr}_{K/\mathbb{Q}}(b/\pi_\mathfrak{f}\delta)}=\varphi(\overline{\mathfrak{p}})^{-t-s}W(\psi^{-1}),
\end{align*}
where in the last equality, since $\psi$ has finite order, we have $\psi^{-1}(\pi_\mathfrak{f})=\psi_\infty(\pi_\mathfrak{f})=1$, and $\psi_\mathfrak{f}(-\delta)^{-1}=1$ because $-\delta$ is a $\mathfrak{p}$-adic unit for every $\mathfrak{p}$ above $p$.
Putting together i), ii) and recalling that $\Omega_{f_{\varphi/K}}:=\left(\frac{\Omega(A)}{2\pi i}\right)^{2k+2}$ in the proof of Proposition \ref{period base change}, we obtain
\begin{equation*}
L_p(f_{\varphi/K}^p,\psi_{p-\mathrm{fin}})=\frac{2D^{k+1}w^{-1}}{\Omega_p(A)^{2k+2}}L_{p,\rm{Katz}}((\varphi^c\psi|\cdot|_{\mathbb{A}_K})_{p-\text{fin}})L_{p,\rm{Katz}}((\varphi^c\psi^c\lambda_K|\cdot|_{\mathbb{A}_K})_{p-\text{fin}})
\end{equation*}
for all Hecke characters $\psi$ of finite order and conductor dividing $(p^\infty)$. By the same argument as in the proof of Proposition \ref{equality of measures}, we have an equality of measures on $\mathfrak{X}(\mathrm{Cl}_K(p^\infty))$.
Finally, to obtain the result, we can define the period $\Omega_{f_{\varphi/K}}$ in Proposition \ref{period base change} to be $\frac{2D^{k+1}\Omega(A)^{2k+2}}{w(2\pi i)^{2k+2}}$, noting that such a normalization does not affect the algebraicity result on $\Lambda(f_{\varphi/K},-)$ in Proposition \ref{period base change}.
\end{proof}
\begin{remark}
Instead of viewing the Katz $p$-adic $L$-functions in Theorem \ref{T9.4} as functions on $\mathfrak{X}(\mathrm{Cl}_K(p^\infty))$ using Remark \ref{acomodandoKatz}, we can view the $p$-adic $L$-function of $f_{\varphi/K}^p$ as a function on $\mathfrak{X}(\mathrm{Cl}_K(\overline{\mathfrak{m}} p^\infty))$ (see \cite[\S 3.4]{exceptional}) and then compare it directly with the Katz $p$-adic $L$-functions.
\end{remark}
\bibliographystyle{amsalpha}
\section{Introduction}
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
Let $A = \Bbbk[x_1, \cdots, x_n]$ over an algebraically closed field $\Bbbk$ of characteristic 0, and $G \subseteq \text{Aut}_{\text{gr}}(A)$ be a finite subgroup. It is natural to ask: what properties, and particularly what homological properties, does the invariant subalgebra $A^G$ satisfy? Two of the earliest answers are the Shephard-Todd-Chevalley theorem (1950s) and the Watanabe theorem (1970s), stated as follows:
\begin{thm}
(Shephard-Todd-Chevalley theorem, \cite{ST}, \cite{C2}) Let $A = \Bbbk[x_1, \cdots, x_n]$ over an algebraically closed field $\Bbbk$ of characteristic 0, and $G \subseteq \text{Aut}_{\text{gr}}(A)$ be a finite subgroup. Then $A^G$ is regular if and only if $G$ is generated by (pseudo-)reflections.
\end{thm}
\begin{thm}
(Watanabe theorem, \cite{Watanabe}) Let $A = \Bbbk[x_1, \cdots, x_n]$ over an algebraically closed field $\Bbbk$ of characteristic 0, and $G \subseteq \text{Aut}_{\text{gr}}(A)$ be a finite subgroup containing no (pseudo-)reflections. Then $A^G$ is Gorenstein if and only if $\det g\bigg\vert_{A_1} = 1$ for all $g \in G$.
\end{thm}
In the following decades, the Shephard-Todd-Chevalley theorem and the Watanabe theorem became two of the most important motivations for invariant theory: if $A$ is a connected-graded, not necessarily commutative, $\Bbbk$-algebra, under what conditions on $G$ is the invariant subalgebra $A^G$ Artin-Schelter regular or Artin-Schelter Gorenstein? There are some known results, for example, \cite{EKZ3} for skew polynomial rings and quantum $2 \times 2$ matrix algebras, and \cite{EKZ2} for down-up algebras. Recently, \cite{GVW} proposed to study these questions for Poisson algebras: commutative $\Bbbk$-algebras together with a noncommutative bracket. They provided a partial answer to the Shephard-Todd-Chevalley question (regularity) by investigating specific Poisson structures arising from semiclassical limits of Artin-Schelter regular algebras, and they also provided insights into the Watanabe question (Gorensteinness) for Poisson enveloping algebras. Inspired by their work, we continue the study of these questions for more Poisson structures, starting in this paper with the unimodular quadratic polynomial Poisson algebras of dimension 3.
\medskip
In section 2, we develop a method for computing graded Poisson automorphisms of a Poisson algebra, and in Proposition 2.2 and Proposition 2.3, we apply this method to compute the graded Poisson automorphisms and Poisson reflections of all unimodular quadratic polynomial Poisson algebras of dimension 3. For clarity, we will only provide the computation of two particular cases that range in difficulty in the proof of Proposition 2.2 and Proposition 2.3. The complete computation can be found in the appendix attached at the end of this paper.
In section 3, we prove one of the main results of this paper, a graded rigidity theorem answering the Shephard-Todd-Chevalley question for unimodular quadratic polynomial Poisson algebras of dimension 3:
\begin{thm}
(Theorem 3.1) Let $P = \Bbbk[x_1,x_2,x_3]$ be a unimodular quadratic Poisson algebra, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup. Then $P^G \cong P$ as Poisson algebras if and only if $G$ is trivial.
\end{thm}
For Poisson algebras without any Poisson reflections, Theorem 3.1 is an immediate consequence of the classical Shephard-Todd-Chevalley theorem. For unimodular quadratic polynomial Poisson algebras of dimension 3 with Poisson reflections, we provide a case-by-case proof, building upon the computations in Proposition 2.2 and Proposition 2.3. This result provides more examples of graded rigid Poisson algebras in addition to the ones studied in \cite{GVW}. In fact, most invariant subalgebras of unimodular quadratic polynomial Poisson algebras of dimension 3 under a Poisson reflection group do not even preserve unimodularity. There is, however, one invariant subalgebra that is unimodular; that being said, it is not isomorphic to the original Poisson algebra. So far, we have not found any example of a Poisson algebra that is not graded rigid, including the ones studied in \cite{GVW}.
In section 4, we prove the remaining two main results of this paper, an answer to the Shephard-Todd-Chevalley question and a formula for computing the homological determinant of an induced automorphism for Poisson enveloping algebras of quadratic polynomial Poisson algebras:
\begin{thm}
(Theorem 4.3) Let $P = \Bbbk[x_1,\cdots,x_n]$ be a quadratic Poisson algebra, and $\mathcal{U}(P)$ be its Poisson enveloping algebra. Let $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a nontrivial finite subgroup, and $\widetilde{G} \subseteq \text{Aut}_{\text{gr}}(\mathcal{U}(P))$ be the corresponding subgroup. Then $\mathcal{U}(P^{\widetilde{G}})$ is not Artin-Schelter regular.
\end{thm}
\begin{thm}
(Theorem 4.6) Let $P = \Bbbk[x_1, \cdots, x_n]$ be a quadratic Poisson algebra, and $\mathcal{U}(P)$ be its Poisson enveloping algebra. Let $g$ be a graded Poisson automorphism of $P$ of finite order, and $\tilde{g}$ be the corresponding automorphism of $\mathcal{U}(P)$. Then $\text{hdet} \tilde{g} = (\det g\bigg\vert_{P_1})^2$.
\end{thm}
Both Theorem 4.3 and Theorem 4.6 are based on the observation made in Lemma 4.2: if a graded Poisson automorphism $g$ of a quadratic polynomial Poisson algebra has eigenvalues $\lambda_1, \cdots, \lambda_m$ with multiplicities $c_1, \cdots, c_m$, then the induced automorphism $\tilde{g}$ of the Poisson enveloping algebra has eigenvalues $\lambda_1, \cdots, \lambda_m$ with multiplicities $2c_1, \cdots, 2c_m$. This observation generalizes (\cite{GVW}, Theorem 5.6). For Theorem 4.3, we compare the eigenvalues of $\tilde{g}$ with the eigenvalues of quasi-reflections of $\mathcal{U}(P)$ described in (\cite{EKZ4}, Theorem 3.1), and conclude that $\tilde{g}$ cannot be a quasi-reflection. For Theorem 4.6, we provide a formula for computing the trace series of $\tilde{g}$ in Lemma 4.4, and relate the trace series to the homological determinant of $\tilde{g}$ as in (\cite{JZ2}, Lemma 2.6). In addition, we demonstrate the significance of Theorem 4.6 in Proposition 4.7, answering the Watanabe question for Poisson enveloping algebras of unimodular quadratic polynomial Poisson algebras of dimension 3.
\medskip
In section 5, we remark on several interesting discoveries made when writing this paper, and propose a list of possible future research directions.
\bigskip
I would like to acknowledge and express my deepest gratitude to my advisor James Zhang for his invaluable advice and guidance throughout this research project. In addition, I would also like to thank Professor Xingting Wang for his reviews and his remarks/comments on this paper, and my office mate Andrew Tawfeek for his proofreading of this paper.
\clearpage
\section{Preliminaries}
Let $\Bbbk$ be an algebraically closed field of characteristic 0.
\smallskip
A $\textit{Poisson algebra}$ is a commutative $\Bbbk$-algebra $P$ together with a bracket:
\[
\{\underline{\hspace{.1in}}, \underline{\hspace{.1in}}\}: P \times P \to P
\]
such that
\begin{enumerate}
\item $(P, \{\underline{\hspace{.1in}}, \underline{\hspace{.1in}}\})$ is a Lie algebra,
\item $\{a,bc\} = b\{a,c\} + \{a,b\}c$ for all $a, b, c \in P$.
\end{enumerate}
\smallskip
Let $P, Q$ be Poisson algebras. A map $g: P \to Q$ is called a \textit{Poisson homomorphism} if $g$ is simultaneously an algebra homomorphism and a Lie algebra homomorphism.
\smallskip
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial Poisson algebra under the standard grading. A \textit{graded Poisson automorphism} of $P$ is a bijective Poisson homomorphism $g: P \to P$ such that $g(P_i) = P_i$ for all $i \geq 0$. The graded Poisson automorphism group of $P$ will be denoted as $\text{PAut}_{\text{gr}}(P)$. A \textit{Poisson reflection} of $P$ is a finite-order graded Poisson automorphism $g: P \to P$ such that $g\bigg\vert_{P_1}$ has the following eigenvalues: $\underbrace{1,\cdots,1}_{n-1}, \xi$, for some primitive $m$th root of unity $\xi$. The set consisting of all Poisson reflections of $P$ will be denoted as $\text{PR}(P)$.
\smallskip
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial Poisson algebra under the standard grading. $P$ is called $\textit{quadratic}$ if $\{P_1, P_1\} \subseteq P_2$.
\smallskip
\begin{exmp}
Let $P = \Bbbk[x_1,x_2,x_3]$ be a quadratic Poisson algebra. (\cite{DH}, Theorem 2) has classified all quadratic Poisson structures into 13+1 classes. In this paper, we are primarily interested in the +1 class: a class consisting of Poisson structures of the following form:
\[
\{x_1,x_2\} = \frac{\partial \Omega}{\partial x_3}, \{x_2,x_3\} = \frac{\partial \Omega}{\partial x_1}, \{x_3,x_1\} = \frac{\partial \Omega}{\partial x_2},
\]
where $\Omega$ is a homogeneous polynomial in $x_1, x_2, x_3$ of degree 3. Such Poisson structures are called \textit{Jacobian}. In some of the literature, such Poisson structures are also called \textit{unimodular}, because (\cite{LWW}, Proposition 2.6) proves that these two notions coincide in the case $P = \Bbbk[x_1,x_2,x_3]$. (\cite{Ribeiro}, page 5) has classified $\Omega$ into 9 subclasses up to scalars:
\begin{center}
\begin{tabular}{ |p{4cm}||p{3.75cm}|p{3.75cm}|p{3.75cm}| }
\hline
\centering $\boldsymbol{\Omega}$ & \centering $\boldsymbol{\{x_1,x_2\}}$ & \centering $\boldsymbol{\{x_2,x_3\}}$ & \centering $\boldsymbol{\{x_3,x_1\}}$ \tabularnewline
\hline
\hline
\centering $x_1^3$ & \centering 0 & \centering $3x_1^2$ & \centering 0 \tabularnewline
\hline
\centering $x_1^2x_2$ & \centering 0 & \centering $2x_1x_2$ & \centering $x_1^2$ \tabularnewline
\hline
\centering $2x_1x_2x_3$ & \centering $2x_1x_2$ & \centering $2x_2x_3$ & \centering $2x_1x_3$\tabularnewline
\hline
\centering $x_1^2x_2+x_1x_2^2$ & \centering $0$ & \centering $2x_1x_2+x_2^2$ & \centering $x_1^2+2x_1x_2$ \tabularnewline
\hline
\centering $x_1^3 + x_2^2x_3$ & \centering $x_2^2$ & \centering $3x_1^2$ & \centering $2x_2x_3$ \tabularnewline
\hline
\centering $x_1^3 + x_1^2x_3 + x_2^2x_3$ & \centering $x_1^2+x_2^2$ & \centering $3x_1^2+2x_1x_3$ & \centering $2x_2x_3$ \tabularnewline
\hline
\centering $\frac{1}{3}(x_1^3+x_2^3+x_3^3)+\lambda x_1x_2x_3, \lambda^3 \neq -1$ & \centering $x_3^2+\lambda x_1x_2$ & \centering $x_1^2+\lambda x_2x_3$ & \centering $x_2^2+\lambda x_1x_3$ \tabularnewline
\hline
\centering $x_1^3+x_1^2x_2+x_1x_2x_3$ & \centering $x_1x_2$ & \centering $3x_1^2+2x_1x_2+x_2x_3$ & \centering $x_1^2+x_1x_3$ \tabularnewline
\hline
\centering $x_1^2x_3 + x_1x_2^2$ & \centering $x_1^2$ & \centering $2x_1x_3+x_2^2$ & \centering $2x_1x_2$ \tabularnewline
\hline
\end{tabular}
\end{center}
\end{exmp}
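The rows of the table above can be recomputed mechanically from the defining formulas $\{x_1,x_2\} = \partial\Omega/\partial x_3$, $\{x_2,x_3\} = \partial\Omega/\partial x_1$, $\{x_3,x_1\} = \partial\Omega/\partial x_2$. The following sketch (an illustration, not code from the paper) encodes polynomials in $x_1,x_2,x_3$ as dictionaries from exponent triples to coefficients and verifies the row $\Omega = x_1^2x_2$:

```python
# Polynomials in x1, x2, x3 are dicts: exponent triple (e1, e2, e3) -> coefficient.

def partial(poly, i):
    """Partial derivative with respect to x_{i+1}."""
    out = {}
    for exps, c in poly.items():
        if exps[i] > 0:
            e = list(exps)
            coeff = c * e[i]
            e[i] -= 1
            key = tuple(e)
            out[key] = out.get(key, 0) + coeff
    return {k: v for k, v in out.items() if v != 0}

Omega = {(2, 1, 0): 1}            # Omega = x1^2 * x2

bracket_12 = partial(Omega, 2)    # {x1, x2} = dOmega/dx3
bracket_23 = partial(Omega, 0)    # {x2, x3} = dOmega/dx1
bracket_31 = partial(Omega, 1)    # {x3, x1} = dOmega/dx2

assert bracket_12 == {}                 # 0, as in the table
assert bracket_23 == {(1, 1, 0): 2}     # 2*x1*x2
assert bracket_31 == {(2, 0, 0): 1}     # x1^2
```

Any other row of the table can be checked the same way by changing the dictionary for $\Omega$.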
\medskip
\medskip
Let $P$ be a Poisson algebra. A $\textit{Poisson enveloping algebra}$ is a triple $(\mathcal{U},\alpha,\beta)$:
\begin{itemize}
\item $\mathcal{U}$ is a $\Bbbk$-algebra,
\item $\alpha: (P,\cdot) \to \mathcal{U}$ is an algebra homomorphism,
\item $\beta: (P,\{\underline{\hspace{.1in}},\underline{\hspace{.1in}}\}) \to \mathcal{U}_L$ is a Lie homomorphism,
\end{itemize}
subject to the following conditions:
\begin{enumerate}
\item $\alpha(\{a,b\}) = \beta(a)\alpha(b) - \alpha(b)\beta(a)$ for all $a, b \in P$,
\item $\beta(ab) = \alpha(a)\beta(b) + \alpha(b)\beta(a)$ for all $a, b \in P$,
\item if $(\mathcal{U}', \alpha', \beta')$ is another triple satisfying (1) and (2), then there exists a unique algebra homomorphism $h: \mathcal{U} \to \mathcal{U}'$ such that the following diagram is commutative:
\[
\begin{tikzcd}
\mathcal{U} \arrow{rrdd}{h}\\
\\
P \arrow{rr}[swap]{\alpha', \beta'} \arrow{uu}{\alpha,\beta} && \mathcal{U}'
\end{tikzcd}
\]
\end{enumerate}
A Poisson algebra has a unique Poisson enveloping algebra. In fact, we can describe its Poisson enveloping algebra by an explicit set of generators and relations as follows:
\begin{thm}
(\cite{B}, Theorem 2.2) Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial Poisson algebra. Then $\mathcal{U}(P)$ is the free $\Bbbk$-algebra generated by $x_1, \cdots, x_n, y_1, \cdots, y_n$, subject to the following relations:
\begin{enumerate}
\item $[x_i, x_j] = 0$,
\item $[y_i, y_j] = \displaystyle{\sum_{k=1}^{n}\frac{\partial \{x_i,x_j\}}{\partial x_k}y_k}$,
\item $[y_i, x_j] = \{x_i, x_j\}$,
\end{enumerate}
for all $1 \leq i, j \leq n$, and $\alpha, \beta$ are defined as follows:
\begin{alignat*}{3}
\alpha: P \to& \mathcal{U}(P), \hspace{.3in} \beta: P \to \mathcal{U}(P)\\
f \mapsto& f, \hspace{.76in} f \mapsto \sum_{k=1}^{n}\frac{\partial f}{\partial x_k}y_k.
\end{alignat*}
\end{thm}
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial Poisson algebra. Its Poisson enveloping algebra $\mathcal{U}(P)$ admits a Poincaré-Birkhoff-Witt basis $\{x_1^{i_1}\cdots x_n^{i_n}y_1^{j_1} \cdots y_n^{j_n}: i_r, j_s \geq 0\}$, and in particular, $\mathcal{U}(P)$ is locally finite: $\dim_\Bbbk \mathcal{U}(P)_n < \infty$. In general, $\mathcal{U}(P)$ admits a filtration but not a grading: $\mathcal{U}(P) = \displaystyle{\bigcup_{i \in \mathbb{N}}\mathcal{U}_i(P)}$, where $\mathcal{U}_i(P)$ is the subspace of $\mathcal{U}(P)$ generated by at most $i$ elements of $\beta(P)$. Nevertheless, if $P$ has a quadratic Poisson structure, then $\mathcal{U}(P)$ is graded because all the relations are homogeneous of degree 2. In this case, $\mathcal{U}(P)$ also satisfies a range of desirable properties:
\begin{enumerate}
\item $\mathcal{U}(P)$ is Noetherian (\cite{O}, Proposition 9).
\item $\mathcal{U}(P)$ has global dimension $2n$ (\cite{BZ}, Proposition 2.1).
\item $\mathcal{U}(P)$ is Artin-Schelter regular (\cite{LWZ2}, Corollary 1.5).
\item $\mathcal{U}(P)$ is Auslander regular (\cite{LWZ2}, Corollary 1.5).
\item $\mathcal{U}(P)$ is a quantum polynomial ring with Hilbert series $h_{\mathcal{U}(P)}(t) = \displaystyle{\frac{1}{(1-t)^{2n}}}$ (\cite{GVW}, Lemma 5.4).
\end{enumerate}
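The Hilbert series in (5) is consistent with the Poincaré-Birkhoff-Witt basis recalled above: in the quadratic case, $\dim_\Bbbk \mathcal{U}(P)_d$ is the number of monomials of degree $d$ in the $2n$ generators, i.e. $\binom{d+2n-1}{2n-1}$, which is exactly the coefficient of $t^d$ in $1/(1-t)^{2n}$. A minimal numerical check of this count (an illustration, not from the paper):

```python
from math import comb

n = 3           # e.g. P = k[x1,x2,x3], so U(P) has 2n = 6 generators
N = 10          # compare coefficients up to degree N

def mul(a, b):
    # Product of two power series truncated at degree N.
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                out[i + j] += ai * bj
    return out

geom = [1] * (N + 1)        # 1/(1-t) truncated at degree N
series = [1] + [0] * N
for _ in range(2 * n):      # expand 1/(1-t)^{2n}
    series = mul(series, geom)

for d in range(N + 1):
    # PBW count: monomials of degree d in 2n commuting-for-counting variables.
    assert series[d] == comb(d + 2 * n - 1, 2 * n - 1)
```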
$\mathcal{U}(P)$ is noncommutative provided that $P$ has a nontrivial Poisson structure. Despite the noncommutativity, notions such as ``reflections" and ``determinant" can be carried over.
\smallskip
Let $A$ be an Artin-Schelter regular algebra, and $g$ be a graded automorphism of $A$. The \textit{trace series} of $g$ is $\text{Tr}_{A}(g,t) = \displaystyle{\sum_{i = 0}^{\infty}\text{tr}(g\bigg\vert_{A_i})t^i}$. In particular, if $g = \text{id}_{A}$, we recover the Hilbert series: $\text{Tr}_{A}(\text{id}_{A},t) = h_{A}(t)$.
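For a graded automorphism acting diagonally in degree 1, the trace series has the closed form $\mathrm{Tr}_A(g,t) = 1/\prod_i(1-\lambda_i t)$ when $A$ is a commutative polynomial ring. The following sketch (an illustration, not from the paper; it uses integer "eigenvalues" so the check is exact) verifies this for $A = \Bbbk[x,y]$:

```python
# For g = diag(l1, l2) on A_1, the trace on A_d is the sum of l1^i * l2^j
# over the monomial basis x^i y^j with i + j = d; the generating function
# of these traces is 1/((1 - l1*t)(1 - l2*t)).
l1, l2 = 2, 3

def trace_on_degree(d):
    return sum(l1**i * l2**(d - i) for i in range(d + 1))

for d in range(8):
    # Coefficient of t^d in 1/((1-l1 t)(1-l2 t)) is (l2^{d+1}-l1^{d+1})/(l2-l1).
    assert trace_on_degree(d) == (l2**(d + 1) - l1**(d + 1)) // (l2 - l1)
```

Taking $l_1 = l_2 = 1$ recovers the Hilbert series $1/(1-t)^2$, as noted above.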
\smallskip
Let $A$ be an Artin-Schelter regular algebra with Hilbert series $h_{A}(t) = \displaystyle{\frac{1}{(1-t)^{n}f(t)}}$ for some $f(t)$ satisfying $f(1) \neq 0$. A \textit{quasi-reflection} of $A$ is a finite-order graded automorphism $g: A \to A$ such that $\text{Tr}_{A}(g,t) = \displaystyle{\frac{1}{(1-t)^{n-1}q(t)}}$ for some $q(t)$ satisfying $q(1) \neq 0$. In particular, if $A$ is a commutative polynomial ring, we recover the notion of a classical reflection: $g\bigg\vert_{A_1}$ has eigenvalues $\underbrace{1, \cdots, 1}_{n-1}, \xi$ for some primitive $m$th root of unity $\xi$. If $A$ is a noncommutative quantum polynomial ring generated in degree 1, such as the Poisson enveloping algebra of a quadratic Poisson algebra, (\cite{EKZ4}, Theorem 3.1) proves that a quasi-reflection $g$ of $A$ necessarily has one of the following form:
\begin{itemize}
\item $g\bigg\vert_{A_1}$ has eigenvalues $\underbrace{1, \cdots, 1}_{n-1}, \xi$ for some primitive $m$th root of unity $\xi$,
\item $g$ has order 4 and $g\bigg\vert_{A_1}$ has eigenvalues $\underbrace{1, \cdots, 1}_{n-2}, i, -i$.
\end{itemize}
\smallskip
Let $A$ be an Artin-Schelter Gorenstein algebra of injective dimension $n$, and $g$ be a graded automorphism of $A$. (\cite{JZ2}, Lemma 2.2) proved that the $g$-linear map $H_{{A}_{\geq 1}}^n(A) \to \hspace{0.01in} ^{\sigma}H_{{A}_{\geq 1}}^n(A)$ necessarily equals the product of a nonzero scalar $c$ and the Matlis dual map $(g^{-1})'$. \cite{JZ2} defines the \textit{homological determinant} of $g$ to be $\text{hdet} g = \displaystyle{\frac{1}{c}}$. In particular, if $A$ is a commutative polynomial ring, (\cite{JZ2}, Page 322) showed that $\text{hdet} g$ coincides with $\det g\bigg\vert_{A_1}$.
\
\section{Graded Poisson Automorphisms and Poisson Reflections}
In this section, we compute the graded Poisson automorphisms and the Poisson reflections of each unimodular quadratic polynomial Poisson algebra of dimension 3, as groundwork for proving a ``Shephard-Todd-Chevalley theorem" (in section 3) and a ``Watanabe theorem" (in section 4) for such Poisson algebras.
\smallskip
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial ring, not necessarily with a Poisson structure, and $g \in \text{Aut}_{\text{gr}}(P)$. Since $g$ is a homomorphism, for all $f \in P$, $g(f(x_1, \cdots, x_n)) = f(g(x_1), \cdots, g(x_n))$. In other words, $g$ is determined once $g(x_1), \cdots, g(x_n)$ are fixed. Since $g$ is graded, $g(x_i) = \displaystyle{\sum_{j=1}^{n}}a_{ij}x_j$ for some $a_{ij} \in \Bbbk$, for all $1 \leq i \leq n$. It follows that $g$ can be recorded as the following $n \times n$ matrix over $\Bbbk$:
\[
g =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}.
\]
If, in addition, $P$ has a Poisson structure, then for all $f_1, f_2 \in P$,
\begin{align*}
g(\{f_1, f_2\}) =& g(\displaystyle{\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial f_1}{\partial x_i}\frac{\partial f_2}{\partial x_j}}\{x_i,x_j\})\\
=& \displaystyle{\sum_{i=1}^{n}\sum_{j=1}^{n}g(\frac{\partial f_1}{\partial x_i})g(\frac{\partial f_2}{\partial x_j})}g(\{x_i,x_j\}),\\
\{g(f_1),g(f_2)\} =& \displaystyle{\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial g(f_1)}{\partial g(x_i)}\frac{\partial g(f_2)}{\partial g(x_j)}}\{g(x_i),g(x_j)\}.
\end{align*}
By commutativity of the following diagram:
\[
\begin{tikzcd}
\Bbbk[x_1, \cdots, x_n] \arrow[rr, "\frac{\partial}{\partial x_i}"] \arrow[dd,"g",swap] & &\Bbbk[x_1, \cdots, x_n] \arrow[dd,"g"]\\
\\
\Bbbk[x_1, \cdots, x_n] \arrow[rr, "\frac{\partial}{\partial g(x_i)}",swap] & &\Bbbk[x_1, \cdots, x_n]
\end{tikzcd}
\]
we have $\displaystyle{g(\frac{\partial f}{\partial x_i}) = \frac{\partial g(f)}{\partial g(x_i)}}$ and $\displaystyle{g(\frac{\partial f}{\partial x_j}) = \frac{\partial g(f)}{\partial g(x_j)}}$. It follows that a graded algebra automorphism $g$ is a graded Poisson automorphism if and only if $g(\{x_i,x_j\}) = \{g(x_i),g(x_j)\}$ for all $1 \leq i,j \leq n$.
In particular, if $n = 3$, the above argument can be summarized into the following lemma:
\begin{lem}
Let $P = \Bbbk[x_1, x_2, x_3]$ be one of the unimodular quadratic Poisson algebras in Example 1.1, and $g \in \text{PAut}_{\text{gr}}(P)$. Then $g$ is an invertible $3 \times 3$ matrix over $\Bbbk$:
\[
g = \begin{pmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\]
satisfying the following equations:
\begin{enumerate}
\item
\begin{align*}
g(\{x_1,x_2\}) =& \{g(x_1),g(x_2)\}\\
=& (a_{11}a_{22}-a_{12}a_{21})\{x_1,x_2\} + (a_{12}a_{23}-a_{13}a_{22})\{x_2,x_3\} + (a_{13}a_{21}-a_{11}a_{23})\{x_3,x_1\},
\end{align*}
\item
\begin{align*}
g(\{x_2,x_3\}) =& \{g(x_2), g(x_3)\}\\
=& (a_{21}a_{32}-a_{22}a_{31})\{x_1,x_2\} + (a_{22}a_{33}-a_{23}a_{32})\{x_2,x_3\} + (a_{23}a_{31}-a_{21}a_{33})\{x_3,x_1\},
\end{align*}
\item
\begin{align*}
g(\{x_3,x_1\}) =& \{g(x_3),g(x_1)\}\\
=& (a_{12}a_{31}-a_{11}a_{32})\{x_1,x_2\} + (a_{13}a_{32}-a_{12}a_{33})\{x_2,x_3\} + (a_{11}a_{33}-a_{13}a_{31})\{x_3,x_1\}.
\end{align*}
\end{enumerate}
\end{lem}
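The minor expansion in the lemma follows from bilinearity and the Leibniz rule alone, so it can be sanity-checked symbolically. The following sympy sketch is an editorial aid, not part of the paper; \texttt{bracket} is our own helper extending a bracket on generators to polynomials, and the bracket of \textit{Unimodular 3} from Example 1.1 is used as a sample:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

# Sample bracket (Unimodular 3): {x1,x2}=x1*x2, {x2,x3}=x2*x3, {x3,x1}=x1*x3
B = {(0, 1): x1*x2, (1, 2): x2*x3, (2, 0): x1*x3}

def bracket(f, h):
    """Extend the bracket on generators to polynomials by the Leibniz rule."""
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

# a generic linear map g(x_i) = sum_j a_ij x_j
a = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i+1}{j+1}'))
g = [sum(a[i, j]*X[j] for j in range(3)) for i in range(3)]

# Lemma, identity (1): {g(x1), g(x2)} expands through the 2x2 minors of a
lhs = bracket(g[0], g[1])
rhs = sp.expand((a[0,0]*a[1,1] - a[0,1]*a[1,0])*B[(0, 1)]
                + (a[0,1]*a[1,2] - a[0,2]*a[1,1])*B[(1, 2)]
                + (a[0,2]*a[1,0] - a[0,0]*a[1,2])*B[(2, 0)])
assert sp.simplify(lhs - rhs) == 0
```

Identities (2) and (3) follow from the same loop after cyclically permuting the indices.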
\smallskip
In the following two propositions, we provide a complete classification of the graded Poisson automorphisms and the Poisson reflections of the unimodular quadratic polynomial Poisson algebras of dimension 3.
\begin{prop}
Retain the above notations.
\[
\begin{tabular}{ |p{4cm}||p{11.5cm}| }
\hline
\centering $\boldsymbol{\Omega}$ & \centering $\textbf{PAut}_{\textbf{gr}}\boldsymbol{(P)}$
\tabularnewline
\hline
\hline
\centering $x_1^3$ & \centering
$\begin{pmatrix}
\pm \sqrt{a_{22}a_{33}-a_{23}a_{32}} & 0 & 0\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$ \tabularnewline
\hspace{.1in} & \centering condition: $a_{22}a_{33} - a_{23}a_{32} \neq 0$.
\tabularnewline
\hline
\centering $x_1^2x_2$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
a_{31} & a_{32} & a_{11}
\end{pmatrix}$
\tabularnewline
& \centering condition: $a_{11}, a_{22} \neq 0$.
\tabularnewline
\hline
\end{tabular}
\]
\begin{center}
\begin{tabular}{ |p{4cm}||p{11.5cm}| }
\hline
\centering $\boldsymbol{\Omega}$ & \centering $\textbf{PAut}_{\textbf{gr}}\boldsymbol{(P)}$
\tabularnewline
\hline
\hline
\centering $2x_1x_2x_3$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
0 & 0 & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
0 & a_{12} & 0\\
0 & 0 & a_{23}\\
a_{31} & 0 & 0
\end{pmatrix}$,
$\begin{pmatrix}
0 & 0 & a_{13}\\
a_{21} & 0 & 0\\
0 & a_{32} & 0
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $a_{ij} \neq 0, 1 \leq i, j \leq 3$.
\tabularnewline
\hline
\centering \hspace{.1in}\\\hspace{.1in}\\ $x_1^2x_2+x_1x_2^2$ & \centering
$\begin{pmatrix}
0 & a_{33} & 0\\
-a_{33} & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
0 & -a_{33} & 0\\
-a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
-a_{33} & -a_{33} & 0\\
a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$, $\begin{pmatrix}
-a_{33} & 0 & 0\\
a_{33} & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
a_{33} & a_{33} & 0\\
0 & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$\tabularnewline
\hspace{.1in} & \centering condition: $a_{33} \neq 0$.
\tabularnewline
\hline
\centering $x_1^3 + x_2^2x_3$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $a_{11} \neq 0$.
\tabularnewline
\hline
\centering $x_1^3 + x_1^2x_3 + x_2^2x_3$ & \centering
$\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
0 & 0 & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
-\frac{1}{2}a_{33} & \pm\frac{\sqrt{3}}{2}a_{33} & 0\\
\mp\frac{\sqrt{3}}{2}a_{33} & -\frac{1}{2}a_{33} & 0\\
\frac{9}{8}a_{33} & \mp\frac{3\sqrt{3}}{8}a_{33} & a_{33}
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $a_{33} \neq 0$.
\tabularnewline
\hline
\centering $\frac{1}{3}(x_1^3+x_2^3+x_3^3)+\lambda x_1x_2x_3, \lambda^3 \neq -1$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix}$,
$\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{pmatrix}$,
$\begin{pmatrix}
1 & 0 & 0\\
0 & b & 0\\
0 & 0 & b^2
\end{pmatrix}$,
\tabularnewline
\hspace{.1in} & \centering condition: $a_{11} \neq 0$, $b$ is a primitive 3rd root of unity.
\tabularnewline
\hline
\centering $x_1^3+x_1^2x_2+x_1x_2x_3$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & \frac{a_{11}^2}{a_{33}} & 0\\
a_{33}-a_{11} & 0 & a_{33}
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $a_{11}, a_{33} \neq 0$.
\tabularnewline
\hline
\centering $x_1^2x_3 + x_1x_2^2$ & \centering
$\begin{pmatrix}
a_{11} & 0 & 0\\
a_{21} & a_{11} & 0\\
-\frac{a_{21}^2}{a_{11}} & -a_{21} & a_{11}
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $a_{11} \neq 0$.
\tabularnewline
\hline
\end{tabular}
\end{center}
\end{prop}
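As a spot check of the table (an editorial verification sketch; \texttt{bracket} and \texttt{apply\_g} are our own helpers), one can confirm in sympy that the cyclic family listed for $\Omega = 2x_1x_2x_3$ preserves the bracket:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
# Unimodular 3 normalized as Omega = 2*x1*x2*x3:
# {x1,x2}=2*x1*x2, {x2,x3}=2*x2*x3, {x3,x1}=2*x1*x3
B = {(0, 1): 2*x1*x2, (1, 2): 2*x2*x3, (2, 0): 2*x1*x3}

def bracket(f, h):
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

a12, a23, a31 = sp.symbols('a12 a23 a31')
g = [a12*x2, a23*x3, a31*x1]     # the cyclic family from the table

def apply_g(f):
    return sp.expand(f.subs(list(zip(X, g)), simultaneous=True))

# g({xi,xj}) = {g(xi),g(xj)} for all pairs, for arbitrary nonzero entries
for (i, j) in [(0, 1), (1, 2), (2, 0)]:
    assert sp.simplify(apply_g(B[(i, j)]) - bracket(g[i], g[j])) == 0
```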
\medskip
\begin{prop}
Retain the above notations.
\[
\begin{tabular}{ |p{4cm}||p{11.5cm}| }
\hline
\centering $\boldsymbol{\Omega}$ & \centering $\textbf{PR}\boldsymbol{(P)}$
\tabularnewline
\hline
\hline
\centering $x_1^3$ & \centering $\emptyset$
\tabularnewline
\hline
\centering $x_1^2x_2$ & \centering $\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & a_{32} & 1
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $\xi$ is a primitive $m$th root of unity for some $m \geq 2$.
\tabularnewline
\hline
\centering $2x_1x_2x_3$ & \centering
$\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \xi
\end{pmatrix}$,
$\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & 0 & 1
\end{pmatrix}$,
$\begin{pmatrix}
\xi & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}$
\tabularnewline
\hspace{.1in} & \centering condition: $\xi$ is a primitive $m$th root of unity for some $m \geq 2$.
\tabularnewline
\hline
\end{tabular}
\]
\begin{tabular}{ |p{4cm}||p{11.5cm}| }
\hline
\centering $\boldsymbol{\Omega}$ & \centering $\textbf{PR}\boldsymbol{(P)}$
\tabularnewline
\hline
\hline
\centering $x_1^2x_2+x_1x_2^2$ & \centering
$\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_{31} & a_{31} & 1
\end{pmatrix}$,
$\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
a_{31} & 0 & 1
\end{pmatrix}$,
$\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & a_{32} & 1
\end{pmatrix}$
\tabularnewline
\hline
\centering $x_1^3 + x_2^2x_3$ & \centering $\emptyset$
\tabularnewline
\hline
\centering $x_1^3 + x_1^2x_3 + x_2^2x_3$ & \centering $\emptyset$
\tabularnewline
\hline
\centering $\frac{1}{3}(x_1^3+x_2^3+x_3^3)+\lambda x_1x_2x_3, \lambda^3 \neq -1$ & \centering $\emptyset$
\tabularnewline
\hline
\centering $x_1^3+x_1^2x_2+x_1x_2x_3$ & \centering
$\begin{pmatrix}
-1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 &1
\end{pmatrix}$
\tabularnewline
\hline
\centering $x_1^2x_3 + x_1x_2^2$ & \centering
$\emptyset$
\tabularnewline
\hline
\end{tabular}
\end{prop}
\smallskip
\begin{proof}
We provide the computation for one relatively easy case (\textit{Unimodular 1}) and one relatively hard case (\textit{Unimodular 4}). The computations for the remaining cases are highly similar and are recorded in the appendix.
\medskip
\textit{Unimodular 1}. $\Omega = x_1^3$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 3x_1^2$, $\{x_3,x_1\} = 0$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{12}a_{23} = a_{13}a_{22}$.
\item $a_{11}^2 = a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{12}^2 = 0$.
\item $a_{13}^2 = 0$.
\item $a_{11}a_{12} = 0$.
\item $a_{11}a_{13} = 0$.
\item $a_{12}a_{13} = 0$.
\item $a_{13}a_{32} = a_{12}a_{33}$.
\end{enumerate}
By (3), (4), $a_{12} = a_{13} = 0$, and (1), (5), (6), (7), (8) are redundant. By (2), $a_{11} = \pm \sqrt{a_{22}a_{33}-a_{23}a_{32}}$. In addition, since $g$ is invertible, $\det g = a_{11}(a_{22}a_{33}-a_{23}a_{32}) = \pm \sqrt{a_{22}a_{33}-a_{23}a_{32}} (a_{22}a_{33} - a_{23}a_{32}) \neq 0$, or equivalently, $a_{22}a_{33} \neq a_{23}a_{32}$. Altogether:
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
\pm \sqrt{a_{22}a_{33}-a_{23}a_{32}} & 0 & 0\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}: a_{22}a_{33} \neq a_{23}a_{32}\}.
\]
\smallskip
Let $g$ be a graded Poisson automorphism of $P$. The characteristic polynomial of $g$ is
$\det(g - \lambda I_3) = \det$
$\begin{pmatrix}
\pm \sqrt{a_{22}a_{33}-a_{23}a_{32}} - \lambda & 0 & 0\\
a_{21} & a_{22} - \lambda & a_{23}\\
a_{31} & a_{32} & a_{33} - \lambda
\end{pmatrix} = (\pm \sqrt{a_{22}a_{33}-a_{23}a_{32}}-\lambda)\bigg((a_{22}-\lambda)(a_{33}-\lambda)-a_{23}a_{32}\bigg)$. There are 3 eigenvalues: $\lambda_1 = \pm \sqrt{a_{22}a_{33}-a_{23}a_{32}}$, $\lambda_2, \lambda_3 = \displaystyle{\frac{a_{22}+a_{33}\pm \sqrt{(a_{22}-a_{33})^2+4a_{23}a_{32}}}{2}}$. Notice that $\lambda_2\lambda_3 = \displaystyle{\frac{(a_{22}+a_{33})^2 - (a_{22}-a_{33})^2 - 4a_{23}a_{32}}{4}} = a_{22}a_{33}-a_{23}a_{32} = \big(\pm \sqrt{a_{22}a_{33}-a_{23}a_{32}}\big)^2 = \lambda_1^2$. If $g$ is a Poisson reflection, $\{\lambda_1, \lambda_2, \lambda_3\} = \{1,1,\xi\}$ for some primitive $m$th root of unity $\xi$. If $\lambda_1 = 1$, then $\lambda_2\lambda_3 = 1$, contradicting $\{\lambda_2, \lambda_3\} = \{1, \xi\}$. If $\lambda_1 = \xi$, then $\lambda_2\lambda_3 = \xi^2$, contradicting $\{\lambda_2, \lambda_3\} = \{1,1\}$. In conclusion,
\[
\text{PR}(P) = \emptyset.
\]
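As an editorial sanity check of the derived form of $\text{PAut}_{\text{gr}}(P)$ (sample numeric entries; \texttt{bracket} and \texttt{apply\_g} are our own helpers), sympy confirms that such a matrix is indeed a graded Poisson automorphism of \textit{Unimodular 1}:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
# Unimodular 1: Omega = x1**3, so {x1,x2}=0, {x2,x3}=3*x1**2, {x3,x1}=0
B = {(0, 1): sp.Integer(0), (1, 2): 3*x1**2, (2, 0): sp.Integer(0)}

def bracket(f, h):
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

# sample matrix of the derived form, with a11 = sqrt(a22*a33 - a23*a32):
# a22=2, a23=1, a32=3, a33=5 gives minor 7
g_mat = sp.Matrix([[sp.sqrt(7), 0, 0], [4, 2, 1], [-1, 3, 5]])
g = [sum(g_mat[i, j]*X[j] for j in range(3)) for i in range(3)]

def apply_g(f):
    return sp.expand(f.subs(list(zip(X, g)), simultaneous=True))

# g is a Poisson automorphism: g({xi,xj}) = {g(xi),g(xj)} for all pairs
for (i, j) in [(0, 1), (1, 2), (2, 0)]:
    assert sp.simplify(apply_g(B[(i, j)]) - bracket(g[i], g[j])) == 0
```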
\medskip
\textit{Unimodular 4}. $\Omega = x_1x_2(x_1+x_2)$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 2x_1x_2+x_2^2$, $\{x_3,x_1\} = x_1^2+2x_1x_2$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{13}a_{21} = a_{11}a_{23}$.
\item $a_{12}a_{23} = a_{13}a_{22}$.
\item $a_{12}a_{23}+a_{13}a_{21} = a_{13}a_{22}+a_{11}a_{23}$.
\item $2a_{11}a_{21}+a_{21}^2 = a_{23}a_{31}-a_{21}a_{33}$.
\item $2a_{12}a_{22}+a_{22}^2 = a_{22}a_{33}-a_{23}a_{32}$.
\item $2a_{13}a_{23}+a_{23}^2 = 0$.
\item $a_{11}a_{22}+a_{12}a_{21}+a_{21}a_{22} = a_{22}a_{33} - a_{23}a_{32} + a_{23}a_{31} - a_{21}a_{33}$.
\item $a_{11}a_{23}+a_{13}a_{21}+a_{21}a_{23} = 0$.
\item $a_{12}a_{23}+a_{13}a_{22}+a_{22}a_{23} = 0$.
\item $a_{11}^2+2a_{11}a_{21} = a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}^2+2a_{12}a_{22} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{13}^2+2a_{13}a_{23} = 0$.
\item $a_{11}a_{12} + a_{11}a_{22}+a_{12}a_{21} = a_{13}a_{32}-a_{12}a_{33}+a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{11}a_{13} + a_{11}a_{23}+a_{13}a_{21} = 0$.
\item $a_{12}a_{13} + a_{12}a_{23}+a_{13}a_{22} = 0$.
\end{enumerate}
By (6), (12), $a_{13} = a_{23} = 0$. By invertibility, $a_{33} \neq 0$. These conditions nullify (1), (2), (3), (6), (8), (9), (12), (14), (15). The remaining equations, after simplification, are:
\begin{enumerate}[label = (\arabic*)]
\setcounter{enumi}{3}
\item $2a_{11}a_{21}+a_{21}^2=-a_{21}a_{33}$.
\item $2a_{12}a_{22}+a_{22}^2=a_{22}a_{33}$.
\setcounter{enumi}{6}
\item $a_{11}a_{22}+a_{12}a_{21}+a_{21}a_{22} = a_{22}a_{33}-a_{21}a_{33}$.
\setcounter{enumi}{9}
\item $a_{11}^2+2a_{11}a_{21} = a_{11}a_{33}$.
\item $a_{12}^2+2a_{12}a_{22} = -a_{12}a_{33}$.
\setcounter{enumi}{12}
\item $a_{11}a_{12}+a_{11}a_{22}+a_{12}a_{21} = -a_{12}a_{33} + a_{11}a_{33}$.
\end{enumerate}
\begin{itemize}
\item Suppose that $a_{21} \neq 0$ and $a_{11} = 0$. By invertibility, $a_{12} \neq 0$. These conditions nullify (10) and simplify the remaining equations to:
\begin{enumerate}[label = (\arabic*)]
\setcounter{enumi}{3}
\item $a_{21} = -a_{33}$.
\item $2a_{12}a_{22}+a_{22}^2=a_{22}a_{33}$.
\setcounter{enumi}{6}
\item $a_{12}a_{21}+a_{21}a_{22} = a_{22}a_{33}-a_{21}a_{33}$.
\setcounter{enumi}{10}
\item $a_{12}+2a_{22} = -a_{33}$.
\setcounter{enumi}{12}
\item $a_{21} = -a_{33}$.
\end{enumerate}
If $a_{22} \neq 0$, (5) states that $2a_{12}+a_{22}=a_{33}$. Combined with (11): $a_{12} = -a_{22}$ and $a_{22} = -a_{33}$. These solutions are compatible with (7). This gives one possible form:
$\begin{pmatrix}
0 & a_{33} & 0\\
-a_{33} & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$. If $a_{22} = 0$, (11) states $a_{12} = -a_{33}$. This solution is compatible with (5), (7). This gives one possible form:
$\begin{pmatrix}
0 & -a_{33} & 0\\
-a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$.
\item Suppose that $a_{21} \neq 0$ and $a_{11} \neq 0$. These conditions simplify the equations to:
\begin{enumerate}[label = (\arabic*)]
\setcounter{enumi}{3}
\item $2a_{11}+a_{21}=-a_{33}$.
\item $2a_{12}a_{22}+a_{22}^2=a_{22}a_{33}$.
\setcounter{enumi}{6}
\item $a_{11}a_{22}+a_{12}a_{21}+a_{21}a_{22} = a_{22}a_{33}-a_{21}a_{33}$.
\setcounter{enumi}{9}
\item $a_{11}+2a_{21} = a_{33}$.
\item $a_{12}^2+2a_{12}a_{22} = -a_{12}a_{33}$.
\setcounter{enumi}{12}
\item $a_{11}a_{12}+a_{11}a_{22}+a_{12}a_{21} = -a_{12}a_{33} + a_{11}a_{33}$.
\end{enumerate}
(4) and (10) imply $a_{21} = -a_{11}$ and $a_{11} = -a_{33}$. If $a_{22} \neq 0$, (5) and (7) imply $a_{12} = 0$ and $a_{22} = a_{33}$. These solutions are compatible with (11) and (13). This gives one possible form: $\begin{pmatrix}
-a_{33} & 0 & 0\\
a_{33} & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$. If $a_{22} = 0$, (7) states $a_{12}=-a_{33}$. This solution is compatible with (5), (11), (13). This gives one possible form: $\begin{pmatrix}
-a_{33} & -a_{33} & 0\\
a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$.
\item Suppose that $a_{21} = 0$. By invertibility, $a_{22} \neq 0$. These conditions nullify (4) and simplify the remaining equations to:
\begin{enumerate}[label = (\arabic*)]
\setcounter{enumi}{4}
\item $2a_{12}+a_{22}=a_{33}$.
\setcounter{enumi}{6}
\item $a_{11} = a_{33}$.
\setcounter{enumi}{9}
\item $a_{11}^2 = a_{11}a_{33}$.
\item $a_{12}^2+2a_{12}a_{22} = -a_{12}a_{33}$.
\setcounter{enumi}{12}
\item $a_{11}a_{12}+a_{11}a_{22} = -a_{12}a_{33} + a_{11}a_{33}$.
\end{enumerate}
If $a_{12} = 0$, (5) states $a_{22} = a_{33}$, and (7) states $a_{11} = a_{33}$. These solutions are compatible with (10), (11), (13). This gives one possible form: $\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$. If $a_{12} \neq 0$, (5) and (11) imply $a_{12} = -a_{22}$ and $a_{22} = -a_{33}$, and (7) states $a_{11} = a_{33}$. These solutions are compatible with (10), (13). This gives one possible form:
$\begin{pmatrix}
a_{33} & a_{33} & 0\\
0 & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$.
\end{itemize}
Altogether,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
0 & a_{33} & 0\\
-a_{33} & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\begin{pmatrix}
0 & -a_{33} & 0\\
-a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\begin{pmatrix}
-a_{33} & -a_{33} & 0\\
a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\]
\[
\hspace{1.05in}
\begin{pmatrix}
-a_{33} & 0 & 0\\
a_{33} & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix},
\begin{pmatrix}
a_{33} & a_{33} & 0\\
0 & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}: a_{33} \neq 0\}.
\]
\smallskip
The characteristic polynomial of the first matrix is $(\lambda^2+a_{33}\lambda+a_{33}^2)(a_{33}-\lambda)$. The eigenvalues are $\lambda_1 = a_{33}$, $\lambda_2, \lambda_3 = \displaystyle{\frac{(-1 \pm \sqrt{-3})a_{33}}{2}}$. Notice that $\lambda_2 + \lambda_3 = -a_{33} = -\lambda_1$. If $g$ is a Poisson reflection, $\{\lambda_1, \lambda_2, \lambda_3\} = \{1,1,\xi\}$ for some primitive $m$th root of unity $\xi$. In either case, $\lambda_2+\lambda_3 = -\lambda_1$ would force $\xi = -2$, which is not a root of unity. Hence there is no Poisson reflection.
\smallskip
The characteristic polynomial of the second matrix is $(\lambda + a_{33})(\lambda - a_{33})(a_{33} - \lambda)$. The eigenvalues are $\lambda_1 = -a_{33}$, $\lambda_2, \lambda_3 = a_{33}$. This forces $a_{33} = 1$. Hence a candidate matrix has the form $g = \begin{pmatrix}0 & -1 & 0\\-1 & 0 & 0\\a_{31} & a_{32} & 1\end{pmatrix}$. By calculation, $g^m = \begin{cases}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
\frac{m}{2}(a_{31}-a_{32}) & \frac{m}{2}(a_{32}-a_{31}) & 1
\end{pmatrix} & m \text{ is even},\\
\\
\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
\frac{m+1}{2}a_{31}-\frac{m-1}{2}a_{32} & \frac{m+1}{2}a_{32}-\frac{m-1}{2}a_{31} & 1
\end{pmatrix} & m \text{ is odd}.
\end{cases}$ This forces $a_{31} = a_{32}$. Hence a Poisson reflection has the form $\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_{31} & a_{31} & 1
\end{pmatrix}$.
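The order computation for this candidate can be double-checked symbolically. A small sympy sketch (editorial, using the even power $m = 4$ as a sample) confirms that the $(3,1)$- and $(3,2)$-entries grow linearly in $m$, and that $a_{31} = a_{32}$ yields an involution:

```python
import sympy as sp

a31, a32 = sp.symbols('a31 a32')
g = sp.Matrix([[0, -1, 0], [-1, 0, 0], [a31, a32, 1]])

# even powers: identity block on top, bottom entries grow linearly with m
g4 = g**4
assert sp.simplify(g4[2, 0] - 2*(a31 - a32)) == 0   # (m/2)(a31-a32), m=4
assert sp.simplify(g4[2, 1] - 2*(a32 - a31)) == 0   # (m/2)(a32-a31), m=4

# finite order forces a31 = a32, and then g has order 2
h = g.subs(a32, a31)
assert h**2 == sp.eye(3)
```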
\smallskip
The characteristic polynomial of the third matrix is $(\lambda^2 + a_{33}\lambda + a_{33}^2)(a_{33} - \lambda)$. By the same argument for the first matrix, there is no Poisson reflection.
\smallskip
The characteristic polynomial of the fourth matrix is $(\lambda+a_{33})(\lambda-a_{33})^2$. The eigenvalues are $\lambda_1 = -a_{33}, \lambda_2, \lambda_3 = a_{33}$. This forces $a_{33} = 1$. Hence a candidate matrix has the form $g = \begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
a_{31} & a_{32} & 1
\end{pmatrix}$. By calculation, $g^m =
\begin{cases}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
\frac{m}{2}a_{32} & ma_{32} & 1
\end{pmatrix} & m \text{ is even},\\
\\
\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
\frac{m-1}{2}a_{32}+a_{31} & ma_{32} & 1
\end{pmatrix} & m \text{ is odd}.
\end{cases}$ This forces $a_{32} = 0$. Hence a Poisson reflection has the form $\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
a_{31} & 0 & 1
\end{pmatrix}$.
\smallskip
The characteristic polynomial of the fifth matrix is $(\lambda-a_{33})^3$. The eigenvalues are $\lambda_1, \lambda_2, \lambda_3 = a_{33}$. Hence there is no Poisson reflection.
\smallskip
The characteristic polynomial of the sixth matrix is $(-a_{33}-\lambda)(a_{33}-\lambda)^2$. The eigenvalues are $\lambda_1 = -a_{33}$, $\lambda_2, \lambda_3 = a_{33}$. This forces $a_{33} = 1$. Hence a candidate matrix has the form $g = \begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
a_{31} & a_{32} & 1
\end{pmatrix}$. By calculation, $g^m =
\begin{cases}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
ma_{31} & \frac{m}{2}a_{31} & 1
\end{pmatrix} & m \text{ is even},\\
\\
\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
ma_{31} & \frac{m-1}{2}a_{31}+a_{32} & 1
\end{pmatrix} & m \text{ is odd.}\end{cases}$ This forces $a_{31} = 0$. Hence a Poisson reflection has the form $\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & a_{32} & 1
\end{pmatrix}$.
In conclusion,
\[
\text{PR}(P) = \{
\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_{31} & a_{31} & 1
\end{pmatrix},
\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
a_{31} & 0 & 1
\end{pmatrix},
\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & a_{32} & 1
\end{pmatrix}
\}.
\]
\end{proof}
\
\section{A Variant of Shephard-Todd-Chevalley Theorem}
In this section, we prove a variant of the Shephard-Todd-Chevalley theorem for unimodular quadratic polynomial Poisson algebras of dimension 3, stated as follows:
\begin{thm}
Let $P = \Bbbk[x_1,x_2,x_3]$ be a unimodular quadratic Poisson algebra, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup. Then $P^G \cong P$ as Poisson algebras if and only if $G$ is trivial.
\end{thm}
\begin{proof}
It suffices to prove $\Rightarrow$. The classical Shephard-Todd-Chevalley theorem states that if $P^G \cong P$ as algebras, then $G$ is generated by reflections. Since $G \subseteq \text{PAut}_{\text{gr}}(P)$, $G$ is generated by Poisson reflections. By Proposition 2.3, \textit{Unimodular 1}, \textit{5}, \textit{6}, \textit{7}, \textit{9} have no Poisson reflections, so the statement holds trivially in those cases. For the remaining cases \textit{Unimodular 2}, \textit{3}, \textit{4}, \textit{8}:
\smallskip
\textit{Unimodular 2}. $\Omega = x_1^2x_2$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 2x_1x_2$, $\{x_3,x_1\} = x_1^2$.
\smallskip
In this case, $P$ has the following Poisson reflections:
\[
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & a_{32} & 1
\end{pmatrix}.
\]
Let $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup generated by Poisson reflections. Let $A = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_n & 0\\
0 & a & 1
\end{pmatrix}$, $B = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_m & 0\\
0 & b & 1
\end{pmatrix}$ be two reflections in $G$. By calculation,
\[
AB-BA = \begin{pmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
0 & a(\xi_m-1)+b(1-\xi_n) & 0
\end{pmatrix},
\]
hence $A$ commutes with $B$ if and only if $a = \displaystyle{\frac{\xi_n-1}{\xi_m-1}b}$. Suppose that $A$ and $B$ do not commute. Consider $ABA^{n-1}B^{m-1}$. First, since $AB \neq BA$, $ABA^{n-1}B^{m-1} \neq I_3$. Furthermore, notice that $ABA^{n-1}B^{m-1}$ has the form $\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & c & 1
\end{pmatrix}$ for some constant $c$. This forces $c \neq 0$. However, such a matrix has infinite order, a contradiction. In conclusion, if $G \subseteq \text{PAut}_{\text{gr}}(P)$ is a reflection group, then $G$ is generated by the following commuting matrices:
\[
g_1 = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_{n_1} & 0\\
0 & a_1 & 1
\end{pmatrix},
g_2 = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_{n_2} & 0\\
0 & \frac{\xi_{n_2}-1}{\xi_{n_1}-1}a_1 & 1
\end{pmatrix},
\cdots,
g_m = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_{n_m} & 0\\
0 & \frac{\xi_{n_m}-1}{\xi_{n_1}-1}a_1 & 1
\end{pmatrix}
\]
for some primitive $n_i$th root of unity $\xi_{n_i}$. We claim $G$ is principally generated by $g_1g_2 \cdots g_m$. We prove this claim by induction:
Consider $\langle g_1, g_2 \rangle \subseteq G$. $g_1g_2 = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_{n_1}\xi_{n_2} & 0\\
0 & \frac{\xi_{n_1}\xi_{n_2}-1}{\xi_{n_1}-1}a_1 & 1
\end{pmatrix}$. We may write $\xi_{n_1} = e^{\frac{2k_1\pi i}{n_1}}$ and $\xi_{n_2} = e^{\frac{2k_2\pi i}{n_2}}$ for some $k_1, k_2$ relatively prime to $n_1, n_2$, respectively. By calculation, $g_1 = (g_1g_2)^{\frac{k_1\text{lcm}(n_1,n_2)}{n_1}}$. Once $g_1$ is contained in $\langle g_1g_2 \rangle$, $g_2 = g_1^{-1}g_1g_2$ is contained in $\langle g_1g_2 \rangle$. Hence we may replace $g_1, g_2$ by a single $g_1g_2$. This replacement is compatible with the remaining generators: $\frac{\xi_{n_i}-1}{\xi_{n_1}-1}a_1 = \frac{\xi_{n_i}-1}{\xi_{n_1}\xi_{n_2}-1} \cdot (\frac{\xi_{n_1}\xi_{n_2}-1}{\xi_{n_1}-1}a_1)$. Applying this argument iteratively, $G$ is principally generated by:
\[
g_1g_2 \cdots g_m = \begin{pmatrix}
1 & 0 & 0\\
0 & \xi_l & 0\\
0 & a & 1
\end{pmatrix},
\]
for some primitive $l$th root of unity $\xi_l$. In fact, $G \cong \mathbb{Z}_l$. By Molien's theorem (\cite{JZ}, page 369),
\begin{align*}
h_{P^{G}}(t) =& \frac{1}{l}\sum_{i=1}^{l}\frac{1}{(1-t)^2(1-\xi_l^{i}t)}\\
=& \frac{1}{(1-t)^2(1-t^l)}.
\end{align*}
In the meantime, set $y_1 = x_1$, $y_2 = x_2 + \frac{1-\xi_l}{a}x_3$, $y_3 = x_2^l$. Then $y_1, y_2, y_3$ are 3 algebraically independent polynomials invariant under $G$. Since $\Bbbk[y_1,y_2,y_3] \subseteq P^G$ has Hilbert series $\displaystyle{\frac{1}{(1-t)^2(1-t^l)}}$, $P^G = \Bbbk[y_1,y_2,y_3]$ together with the following Poisson bracket:
\[
\{y_1, y_2\} = \frac{\xi_l-1}{a}y_1^2, \hspace{.1in} \{y_2,y_3\} = \frac{2l(\xi_l-1)}{a}y_1y_3, \hspace{.1in} \{y_3, y_1\} = 0.
\]
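These invariants and their brackets can be verified in sympy for sample values, say $l = 3$ and $a = 1$ (an editorial check; note that the second bracket is proportional to $y_1y_3$, and \texttt{bracket} is our own Leibniz-rule helper):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
# Unimodular 2: {x1,x2}=0, {x2,x3}=2*x1*x2, {x3,x1}=x1**2
B = {(0, 1): sp.Integer(0), (1, 2): 2*x1*x2, (2, 0): x1**2}

def bracket(f, h):
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

l, a = 3, sp.Integer(1)
xi = sp.exp(2*sp.pi*sp.I/l)              # primitive l-th root of unity
y1, y2, y3 = x1, x2 + (1 - xi)/a*x3, x2**l

assert sp.simplify(bracket(y1, y2) - (xi - 1)/a*y1**2) == 0
assert sp.simplify(bracket(y2, y3) - 2*l*(xi - 1)/a*y1*y3) == 0
assert sp.simplify(bracket(y3, y1)) == 0
```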
Suppose that $P^G \cong P$. In particular, $P^G$ is unimodular. By (\cite{GVW}, Example 1.2), its Poisson bracket has the following form:
\[
\{y_1,y_2\} = \frac{\partial f}{\partial y_3}, \hspace{.05in}
\{y_2,y_3\} = \frac{\partial f}{\partial y_1}, \hspace{.05in}
\{y_3,y_1\} = \frac{\partial f}{\partial y_2},
\]
for some nonzero $f \in\Bbbk[y_1,y_2,y_3]$. In this case, $f$ is forced to satisfy:
\begin{itemize}
\item $f = \frac{\xi_l-1}{a}y_1^2y_3 + a(y_1,y_2)$,
\item $f = \frac{l(\xi_l-1)}{a}y_1^2y_3 + b(y_2,y_3)$,
\item $f = c(y_1,y_3)$,
\end{itemize}
for some $a(y_1,y_2), b(y_2,y_3), c(y_1,y_3) \in \Bbbk[y_1,y_2,y_3]$. It is clear that no such $f$ exists unless $l = 1$, contradicting the nontriviality of $G$.
\medskip
\textit{Unimodular 3}. $\Omega = x_1x_2x_3$.
$\{x_1,x_2\} = x_1x_2$, $\{x_2,x_3\} = x_2x_3$, $\{x_3,x_1\} = x_1x_3$.
\smallskip
In this case, $P$ has the following Poisson reflections:
\[
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \xi
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
\xi & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}
.
\]
This case is handled in the proof of (\cite{GVW}, Theorem 4.5) or more generally (\cite{GVW}, Theorem 3.8). The argument in \cite{GVW} is more or less the following:
Let $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup generated by Poisson reflections. $G$ can be realized as an internal direct product: $G \cong G_1 \times G_2 \times G_3$, where $G_i$ is the subgroup of $G$ consisting of Poisson automorphisms acting trivially on $\{x_1,x_2,x_3\} \backslash \{x_i\}$, $1 \leq i \leq 3$. Then $P^G \cong \Bbbk[x_1^m, x_2^n, x_3^l]$ together with the following Poisson bracket:
\[
\{x_1^m,x_2^n\} = mnx_1^mx_2^n, \hspace{0.05in} \{x_2^n,x_3^l\} = nlx_2^nx_3^l, \hspace{.05in} \{x_3^l, x_1^m\} = mlx_1^mx_3^l,
\]
where $m = \text{exp}(G_1), n = \text{exp}(G_2), l = \text{exp}(G_3)$. By (\cite{Gaddis2020TheZC}, Theorem 4.6), $P^G \cong P$ if and only if $mn = nl = ml = 1$. If so, $m = n = l = 1$, hence $G$ is trivial.
\medskip
\textit{Unimodular 4}. $\Omega = x_1x_2(x_1+x_2)$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 2x_1x_2+x_2^2$, $\{x_3,x_1\} = x_1^2+2x_1x_2$.
\smallskip
In this case, $P$ has the following Poisson reflections:
\[
\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_{31} & a_{31} & 1
\end{pmatrix},
\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
a_{31} & 0 & 1
\end{pmatrix},
\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & a_{32} & 1
\end{pmatrix}.
\]
Let $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite Poisson reflection group. If $G$ contains 2 distinct Poisson reflections of the first type: $A_1 = \begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_1 & a_1 & 1
\end{pmatrix}$, $A_2 = \begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a_2 & a_2 & 1
\end{pmatrix}$, then $A_1A_2 = \begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
a_2-a_1 & a_2-a_1 & 1
\end{pmatrix}$, a matrix of infinite order, a contradiction. Similarly, $G$ does not contain 2 distinct Poisson reflections of the second type or the third type. Consequently, $G$ belongs to one of the following 3 families:
\begin{enumerate}
\item $G$ is generated by a single Poisson reflection: $A = \begin{pmatrix}0 & -1 & 0\\-1 & 0 & 0\\a & a & 1\end{pmatrix}$ of the first type, $B$ of the second type, or $C$ of the third type. First, consider $G = \langle A \rangle$. By Molien's theorem (\cite{JZ}, page 369),
\begin{align*}
h_{P^{G}}(t) =& \frac{1}{2}\bigg(\frac{1}{(1-t)^3} + \frac{1}{(1-t)^2(1+t)}\bigg)\\
=& \frac{1}{(1-t)^2(1-t^2)}.
\end{align*}
Set $y_1 = x_1-x_2$, $y_2 = ax_1+ax_2 + 2x_3$, $y_3 = x_1x_2$. $y_1, y_2, y_3$ are 3 algebraically independent polynomials invariant under $G$. Since $\Bbbk[y_1, y_2, y_3] \subseteq P^G$ has Hilbert series $\displaystyle{\frac{1}{(1-t)^2(1-t^2)}}$, $P^G = \Bbbk[y_1, y_2, y_3]$ together with the following Poisson bracket:
\[
\{y_1, y_2\} = -2y_1^2 - 12y_3, \hspace{.1in} \{y_2,y_3\} = -2y_1y_3, \hspace{.1in} \{y_3,y_1\} = 0.
\]
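An editorial sympy check of the invariance of $y_1, y_2, y_3$ under $A$ and of the three brackets (\texttt{bracket} is our own Leibniz-rule helper, not part of the paper):

```python
import sympy as sp

x1, x2, x3, a = sp.symbols('x1 x2 x3 a')
X = [x1, x2, x3]
# Unimodular 4: {x1,x2}=0, {x2,x3}=2*x1*x2+x2**2, {x3,x1}=x1**2+2*x1*x2
B = {(0, 1): sp.Integer(0), (1, 2): 2*x1*x2 + x2**2, (2, 0): x1**2 + 2*x1*x2}

def bracket(f, h):
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

y1, y2, y3 = x1 - x2, a*x1 + a*x2 + 2*x3, x1*x2

# invariance under A: x1 -> -x2, x2 -> -x1, x3 -> a*x1 + a*x2 + x3
subA = {x1: -x2, x2: -x1, x3: a*x1 + a*x2 + x3}
for y in (y1, y2, y3):
    assert sp.expand(y.subs(subA, simultaneous=True) - y) == 0

assert sp.simplify(bracket(y1, y2) + 2*y1**2 + 12*y3) == 0
assert sp.simplify(bracket(y2, y3) + 2*y1*y3) == 0
assert sp.simplify(bracket(y3, y1)) == 0
```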
Suppose that $P^G \cong P$. In particular, $P^G$ is unimodular. By (\cite{GVW}, Example 1.2), $P^G = \Bbbk[y_1,y_2,y_3]$ is unimodular if and only if its Poisson bracket has the following form:
\[
\{y_1,y_2\} = \frac{\partial f}{\partial y_3}, \hspace{.05in}
\{y_2,y_3\} = \frac{\partial f}{\partial y_1}, \hspace{.05in}
\{y_3,y_1\} = \frac{\partial f}{\partial y_2},
\]
for some nonzero $f \in \Bbbk[y_1,y_2,y_3]$. In this case, $f$ is forced to satisfy:
\begin{itemize}
\item $f = -2y_1^2y_3 - 6y_3^2 + a(y_1,y_2)$,
\item $f = -y_1^2y_3 + b(y_2,y_3)$,
\item $f = c(y_1,y_3)$,
\end{itemize}
for some $a(y_1,y_2), b(y_2,y_3), c(y_1,y_3) \in \Bbbk[y_1,y_2,y_3]$. It is clear that no such $f$ exists. If $G = \langle B \rangle$ or $G = \langle C \rangle$, a similar argument shows that no such $f$ exists either. Hence $P^G$ is unimodular only if $G$ is trivial, and consequently $P^G \cong P$ if and only if $G$ is trivial.
\item $G$ is generated by Poisson reflections of the first type and the second type: $A = \begin{pmatrix}0 & -1 & 0\\-1 & 0 & 0\\a & a & 1\end{pmatrix}, B = \begin{pmatrix}-1 & 0 & 0\\1 & 1 & 0\\b & 0 & 1\end{pmatrix}$. Consider $G = \langle A, B \rangle$. By calculation, $A^2 = B^2 = (AB)^3 = 1$. This implies $G$ is isomorphic to a quotient of $S_3$. Furthermore, $G$ is non-abelian: $AB \neq BA$. This implies $G$ has order $\geq 6$, hence $G \cong S_3$. By Molien's theorem (\cite{JZ}, page 369),
\begin{align*}
h_{P^G}(t) =& \frac{1}{6}\bigg(\frac{1}{(1-t)^3} + \frac{3}{(1-t)^2(1+t)} + \frac{2}{(1-t)(1+t+t^2)}\bigg)\\
=& \frac{1}{(1-t)(1-t^2)(1-t^3)}.
\end{align*}
After calculating $\displaystyle{\int}f = \frac{1}{|G|}\displaystyle{\sum_{g \in G}g(f)}$ for all monomials $f \in \Bbbk[x_1, x_2, x_3]$ of degree less than or equal to 6, we discover that $y_1 = \frac{1}{3}(a+b)x_1+\frac{1}{3}(2a-b)x_2+x_3$, $y_2 = x_1^2+x_2^2+x_1x_2$, $y_3 = 2x_1^3+3x_1^2x_2-3x_1x_2^2-2x_2^3$ are 3 algebraically independent polynomials invariant under $G$. Since $\Bbbk[y_1,y_2,y_3] \subseteq P^G$ has Hilbert series $\displaystyle{\frac{1}{(1-t)(1-t^2)(1-t^3)}}$, $P^G = \Bbbk[y_1, y_2, y_3]$ together with the following Poisson bracket:
\[
\{y_1,y_2\} = y_3, \hspace{.1in} \{y_2, y_3\} = 0, \hspace{.1in} \{y_3, y_1\} = -6y_2^2.
\]
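The $S_3$-invariants and this bracket can be verified symbolically; the following sympy sketch (an editorial aid, with \texttt{bracket} our own Leibniz-rule helper) checks invariance under both generators and all three brackets:

```python
import sympy as sp

x1, x2, x3, a, b = sp.symbols('x1 x2 x3 a b')
X = [x1, x2, x3]
# Unimodular 4: {x1,x2}=0, {x2,x3}=2*x1*x2+x2**2, {x3,x1}=x1**2+2*x1*x2
B = {(0, 1): sp.Integer(0), (1, 2): 2*x1*x2 + x2**2, (2, 0): x1**2 + 2*x1*x2}

def bracket(f, h):
    return sp.expand(sum((sp.diff(f, X[i])*sp.diff(h, X[j])
                          - sp.diff(f, X[j])*sp.diff(h, X[i]))*v
                         for (i, j), v in B.items()))

y1 = (a + b)/3*x1 + (2*a - b)/3*x2 + x3
y2 = x1**2 + x2**2 + x1*x2
y3 = 2*x1**3 + 3*x1**2*x2 - 3*x1*x2**2 - 2*x2**3

# invariance under the generators A and B of G ~ S3
subA = {x1: -x2, x2: -x1, x3: a*x1 + a*x2 + x3}
subB = {x1: -x1, x2: x1 + x2, x3: b*x1 + x3}
for y in (y1, y2, y3):
    for sub in (subA, subB):
        assert sp.expand(y.subs(sub, simultaneous=True) - y) == 0

assert sp.simplify(bracket(y1, y2) - y3) == 0
assert sp.simplify(bracket(y2, y3)) == 0
assert sp.simplify(bracket(y3, y1) + 6*y2**2) == 0
```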
In this case, $P^{G}$ is unimodular with potential $f = -2y_2^3 + \frac{1}{2}y_3^2$. However, $P^G \not\cong P$:
\begin{align*}
P/(\{P,P\}) \cong& \Bbbk[x_1,x_2,x_3]/(x_1^2+2x_1x_2,x_2^2+2x_1x_2),\\
P^G/(\{P^G,P^G\}) \cong& \Bbbk[y_1, y_2, y_3]/(y_2^2,y_3).
\end{align*}
Notice that the former algebra has a unique minimal prime ideal, $(x_1, x_2)$, which is not principal, while the latter has a unique minimal prime ideal, $(y_2)$, which is principal. Hence $P^G \not\cong P$ unless $G$ is trivial.
\item $G$ is generated by Poisson reflections of
\begin{itemize}
\item the first type and the third type: $A = \begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a & a & 1
\end{pmatrix}$, $C = \begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & c & 1
\end{pmatrix}$,
\item the second type and the third type: $B = \begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
b & 0 & 1
\end{pmatrix}$, $C = \begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & c & 1
\end{pmatrix}$,
\item the first type, the second type, and the third type: $A = \begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
a & a & 1
\end{pmatrix}$, $B = \begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
b & 0 & 1
\end{pmatrix}$, $C = \begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & c & 1
\end{pmatrix}$.
\end{itemize}
\smallskip
First, consider $G = \langle A, B, C \rangle$. $ABC = \begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
b & b + c - a & 1
\end{pmatrix}$, and the $(3,2)$-entry of $(ABC)^n$ is $n(b+c-a)$. Since $G$ is finite, $a = b+c$. Under this relation, $ABA = \begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
b+c & b+c & 1
\end{pmatrix}\begin{pmatrix}
-1 & 0 & 0\\
1 & 1 & 0\\
b & 0 & 1
\end{pmatrix}$
$\begin{pmatrix}
0 & -1 & 0\\
-1 & 0 & 0\\
b+c & b+c & 1
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 0\\
0 & -1 & 0\\
0 & c & 1
\end{pmatrix} = C$. This proves $\langle A, B, C \rangle = \langle A, B \rangle$, allowing us to return to the previous family.
In addition, if $G = \langle A, C \rangle$ or $G = \langle B, C \rangle$, we may reduce $G$ to $G = \langle A, B \rangle$ for some appropriate choices of $A, B$. Again, this allows us to return to the previous family.
\end{enumerate}
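The matrix identity $ABA = C$ under the relation $a = b + c$ used above is routine to verify; a small symbolic check (a sketch, assuming sympy):

```python
import sympy as sp

b, c = sp.symbols('b c')
a = b + c  # the relation forced by the finiteness of G

A = sp.Matrix([[0, -1, 0], [-1, 0, 0], [a, a, 1]])
B = sp.Matrix([[-1, 0, 0], [1, 1, 0], [b, 0, 1]])
C = sp.Matrix([[1, 1, 0], [0, -1, 0], [0, c, 1]])

# A*B*A collapses to C once a = b + c, so <A, B, C> = <A, B>.
assert (A * B * A - C).applyfunc(sp.expand) == sp.zeros(3, 3)
```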
\end{proof}
\medskip
\textit{Unimodular 8}. $\Omega = x_1^3+x_1x_2x_3+x_1^2x_2$.
$\{x_1,x_2\} = x_1x_2$, $\{x_2,x_3\} = 3x_1^2+x_2x_3+2x_1x_2$, $\{x_3,x_1\} = x_1x_3+x_1^2$.
\smallskip
In this case, $P$ has the following Poisson reflections:
\[
\begin{pmatrix}
-1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 & 1
\end{pmatrix}.
\]
Let $G = \langle \begin{pmatrix}
-1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 & 1
\end{pmatrix} \rangle$. By Molien's theorem (\cite{JZ}, page 369),
\begin{align*}
h_{P^G}(t) =& \frac{1}{2}\bigg(\frac{1}{(1-t)^3} + \frac{1}{(1-t)^2(1+t)}\bigg)\\
=& \frac{1}{(1-t)^2(1-t^2)}.
\end{align*}
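Here $|G| = 2$, and the Molien sum averages the identity (eigenvalues $1,1,1$) against the reflection (eigenvalues $-1,1,1$); the closed form can be confirmed symbolically (a sketch, assuming sympy):

```python
import sympy as sp

t = sp.symbols('t')
# Molien sum for the order-2 group: identity plus the reflection
# with eigenvalues -1, 1, 1 on the degree-1 part.
molien = sp.Rational(1, 2) * (1/(1 - t)**3 + 1/((1 - t)**2 * (1 + t)))
closed = 1/((1 - t)**2 * (1 - t**2))
assert sp.simplify(molien - closed) == 0
```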
Set $y_1 = x_2$, $y_2 = x_1+x_3$, $y_3 = x_1^2$. Then $y_1, y_2, y_3$ are three algebraically independent polynomials in $P$ invariant under $G$. Since $\Bbbk[y_1, y_2, y_3] \subseteq P^G$ has Hilbert series $\displaystyle{\frac{1}{(1-t)^2(1-t^2)}}$, which equals that of $P^G$, we conclude $P^G = \Bbbk[y_1, y_2, y_3]$, together with the following Poisson brackets:
\[
\{y_1,y_2\} = y_1y_2 + 3y_3, \hspace{.1in} \{y_2,y_3\} = 2y_2y_3, \hspace{.1in} \{y_3,y_1\} = 2y_1y_3.
\]
Suppose that $P^G \cong P$. In particular, $P^G$ is unimodular. By (\cite{GVW}, Example 1.2), $P^G =\Bbbk[y_1,y_2,y_3]$ is unimodular if and only if its Poisson bracket has the following form:
\[
\{y_1,y_2\} = \frac{\partial f}{\partial y_3}, \hspace{.05in}
\{y_2,y_3\} = \frac{\partial f}{\partial y_1}, \hspace{.05in}
\{y_3,y_1\} = \frac{\partial f}{\partial y_2},
\]
for some nonzero $f \in\Bbbk[y_1,y_2,y_3]$. In this case, $f$ is forced to satisfy:
\begin{itemize}
\item $f =y_1y_2y_3 + \frac{3}{2}y_3^2 + a(y_1,y_2)$,
\item $f = 2y_1y_2y_3 + b(y_2,y_3)$,
\item $f = 2y_1y_2y_3 + c(y_1,y_3)$,
\end{itemize}
for some $a(y_1,y_2), b(y_2,y_3), c(y_1,y_3) \in\Bbbk[y_1,y_2,y_3]$. It is clear that no such $f$ exists: the first and second expressions already disagree in the coefficient of $y_1y_2y_3$. Hence $P^G$ is unimodular if and only if $G$ is trivial.
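The non-existence of $f$ is visible already from mixed partials: if one $f$ satisfied all three conditions, then $\partial_{y_1}\{y_1,y_2\}$ and $\partial_{y_3}\{y_2,y_3\}$ would both equal $\partial^2 f/\partial y_1\partial y_3$. A quick symbolic check (a sketch, assuming sympy):

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
b12 = y1*y2 + 3*y3   # {y1, y2}, i.e. the would-be df/dy3
b23 = 2*y2*y3        # {y2, y3}, i.e. the would-be df/dy1
b31 = 2*y1*y3        # {y3, y1}, i.e. the would-be df/dy2

# Mixed partials of a single f would have to agree; they do not.
assert sp.diff(b12, y1) != sp.diff(b23, y3)  # y2 vs 2*y2
assert sp.diff(b12, y2) != sp.diff(b31, y3)  # y1 vs 2*y1
```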
\medskip
In conclusion, in each case where $P$ has Poisson reflections, $P^G \not\cong P$ unless $G$ is trivial. In each case where $P$ has no Poisson reflections, the classical Shephard-Todd-Chevalley theorem forces $P^G \cong P$ if and only if $G$ is trivial.
\
\section{A Variant of Watanabe Theorem}
In this section, we prove a variant of the Shephard-Todd-Chevalley theorem for Poisson enveloping algebras of quadratic polynomial Poisson algebras, and a variant of the Watanabe theorem for Poisson enveloping algebras of unimodular quadratic polynomial Poisson algebras of dimension 3, under the action induced by $\text{PAut}_{\text{gr}}(P) \to \text{Aut}_{\text{gr}}(\mathcal{U}(P))$.
\smallskip
Let $P = \Bbbk[x_1,\cdots,x_n]$ be a polynomial quadratic Poisson algebra, and $\mathcal{U}(P)$ be its Poisson enveloping algebra. Let $g \in \text{PAut}_{\text{gr}}(P)$. (\cite{GVW}, Lemma 5.1) constructs an induced graded algebra automorphism $\tilde{g}$ of $\mathcal{U}(P)$ as follows:
\begin{align*}
\tilde{g}(x_i) = g(x_i), \hspace{.5in} \tilde{g}(y_i) = \sum_{j=1}^{n}\frac{\partial g(x_i)}{\partial x_j}y_j,
\end{align*}
for all $1 \leq i \leq n$. In addition:
\begin{lem}
Let $P = \Bbbk[x_1,\cdots,x_n]$ be a polynomial quadratic Poisson algebra, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a subgroup. Then $\widetilde{G} = \{\tilde{g}: g \in G\} \subseteq \text{Aut}_{\text{gr}}(\mathcal{U}(P))$ is isomorphic to $G$.
\end{lem}
\begin{proof}
Define $G \to \widetilde{G}: g \mapsto \tilde{g}$. Let $g_1, g_2 \in G$. It is clear that $\widetilde{g_1}\widetilde{g_2}$ and $\widetilde{g_1g_2}$ agree on $x_1, \cdots, x_n$. In addition, for all $1 \leq i \leq n$,
\begin{align*}
(\widetilde{g_1}\widetilde{g_2})(y_i) =& \widetilde{g_1}(\widetilde{g_2}(y_i))\\
=& \widetilde{g_1}\bigg(\sum_{j=1}^{n}\frac{\partial g_2(x_i)}{\partial x_j}y_j\bigg)\\
=& \sum_{j=1}^{n}g_1\bigg(\frac{\partial g_2(x_i)}{\partial x_j}\bigg)\bigg(\sum_{k=1}^{n}\frac{\partial g_1(x_j)}{\partial x_k}y_k\bigg)\\
=& \sum_{j=1}^{n}\sum_{k=1}^{n}g_1\bigg(\frac{\partial g_2(x_i)}{\partial x_j}\bigg)\bigg(\frac{\partial g_1(x_j)}{\partial x_k}y_k\bigg)\\
=& \sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\partial g_1(g_2(x_i))}{\partial g_1(x_j)}\frac{\partial g_1(x_j)}{\partial x_k}y_k\\
=& \sum_{k=1}^{n}\frac{\partial g_1(g_2(x_i))}{\partial x_k}y_k\\
=& \widetilde{g_1g_2}(y_i),
\end{align*}
where the fifth equality follows from the commutativity of the following diagram:
\[
\begin{tikzcd}
\Bbbk[x_1, \cdots, x_n] \arrow[rr, "\frac{\partial}{\partial x_j}"] \arrow[dd,"g_1",swap] & &\Bbbk[x_1, \cdots, x_n] \arrow[dd,"g_1"]\\
\\
\Bbbk[x_1, \cdots, x_n] \arrow[rr, "\frac{\partial}{\partial g_1(x_j)}",swap] & &\Bbbk[x_1, \cdots, x_n]
\end{tikzcd}
\]
Suppose that $\widetilde{g_1} = \widetilde{g_2}$. In particular, $g_1(x_i) = \widetilde{g_1}(x_i) = \widetilde{g_2}(x_i) = g_2(x_i)$ for all $1 \leq i \leq n$, so $g_1 = g_2$ and the map is injective. Meanwhile, by construction, $g \mapsto \tilde{g}$ is surjective, hence a group isomorphism.
\end{proof}
\medskip
Lemma 4.1 allows us to formulate the following questions, as generalizations of the Shephard-Todd-Chevalley theorem and the Watanabe theorem:
\begin{enumerate}[label = (\arabic*)]
\item Under what conditions on $G$ is $\mathcal{U}(P)^{\widetilde{G}}$ Artin-Schelter regular?
\item Under what conditions on $G$ is $\mathcal{U}(P)^{\widetilde{G}}$ Artin-Schelter Gorenstein?
\end{enumerate}
\smallskip
To answer Question (1), we first observe the following:
\begin{lem}
Let $P = \Bbbk[x_1,\cdots,x_n]$ be a polynomial quadratic Poisson algebra, and $g$ be a graded Poisson automorphism of $P$ of finite order. Suppose that $g\bigg\vert_{P_1}$ has eigenvalues $\lambda_1, \cdots, \lambda_m$, with multiplicity $c_1, \cdots, c_m$, respectively. Then $\tilde{g}\bigg\vert_{\mathcal{U}(P)_1}$ has eigenvalues $\lambda_1, \cdots, \lambda_m$, with multiplicity $2c_1, \cdots, 2c_m$, respectively.
\end{lem}
\begin{proof}
Let $\{v_1, \cdots, v_{c_1}\}$ be a basis for the eigenspace of $\lambda_1$ in $P_1$. By calculation, for all $1 \leq i \leq c_1$,
\begin{align*}
\tilde{g}(v_i) =& g(v_i) = \lambda_1 v_i,\\
\tilde{g}\bigg(\sum_{j=1}^{n}\frac{\partial g(v_i)}{\partial x_j}y_j\bigg)
=& \sum_{j=1}^{n}g\bigg(\frac{\partial (\lambda_1v_i)}{\partial x_j}\bigg)\tilde{g}(y_j)\\
=& \lambda_1\sum_{j=1}^{n}g\bigg(\frac{\partial v_i}{\partial x_j}\bigg)\tilde{g}(y_j)\\
=& \lambda_1\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\partial g(v_i)}{\partial g(x_j)}\frac{\partial g(x_j)}{\partial x_k}y_k\\
=& \lambda_1 \sum_{k=1}^{n}\frac{\partial g(v_i)}{\partial x_k}y_k.
\end{align*}
Since $g$ is a graded automorphism, the $2c_1$ vectors $v_1, \cdots, v_{c_1}, \displaystyle{\sum_{j=1}^{n}\frac{\partial g(v_1)}{\partial x_j}y_j},$
$\displaystyle{\cdots, \sum_{j=1}^{n}\frac{\partial g(v_{c_1})}{\partial x_j}y_j}$ are linearly independent eigenvectors of $\tilde{g}\bigg\vert_{\mathcal{U}(P)_1}$ with eigenvalue $\lambda_1$. By a similar argument, we may find $2c_i$ linearly independent eigenvectors of $\lambda_i$ of $\tilde{g}\bigg\vert_{\mathcal{U}(P)_1}$ in $\mathcal{U}(P)_1$ for all $1 \leq i \leq m$. Since $\displaystyle{\sum_{i=1}^{m}c_i} = n$, $\displaystyle{\sum_{i=1}^{m}2c_i} = 2n$, so these vectors form an eigenbasis for $\mathcal{U}(P)_1$. In particular, the multiplicity of $\lambda_i$ in $\mathcal{U}(P)_1$ equals twice the multiplicity of $\lambda_i$ in $P_1$.
\end{proof}
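Concretely, since each $g(x_i)$ is linear, the matrix $\big(\partial g(x_i)/\partial x_j\big)$ is just $M = g\vert_{P_1}$, so $\tilde{g}\vert_{\mathcal{U}(P)_1}$ acts on $x_1,\cdots,x_n,y_1,\cdots,y_n$ as the block-diagonal matrix $M \oplus M$, and the doubling of multiplicities can be seen on any example (a sketch, assuming sympy; the matrix below is an arbitrary illustration, not taken from the paper):

```python
import sympy as sp

# An example linear part g|_{P_1} with eigenvalues 1, 1, -1.
M = sp.Matrix([[1, 0, 0], [0, 1, 0], [2, 0, -1]])
# g~ acts on U(P)_1 = span(x_i) + span(y_i) as the block matrix M (+) M.
G = sp.diag(M, M)

assert M.eigenvals() == {1: 2, -1: 1}
assert G.eigenvals() == {1: 4, -1: 2}  # each multiplicity doubled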
\smallskip
\begin{thm}
Let $P = \Bbbk[x_1,\cdots,x_n]$ be a polynomial quadratic Poisson algebra, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a nontrivial finite subgroup. Then $\mathcal{U}(P)^{\widetilde{G}}$ is not Artin-Schelter regular.
\end{thm}
\begin{proof}
By (\cite{GVW}, Lemma 5.4), $\mathcal{U}(P)$ is a quantum polynomial ring. Let $g$ be a quasi-reflection of $\mathcal{U}(P)$. By (\cite{EKZ4}, Theorem 3.1), $g$ satisfies one of the following:
\begin{itemize}
\item The eigenvalues of $g$ are $\underbrace{1,\cdots,1}_{n-1}, \xi$, for some primitive $m$th root of unity $\xi$.
\item The order of $g$ is 4. The eigenvalues are $\underbrace{1,\cdots,1}_{n-2}$, $i$, $-i$.
\end{itemize}
Comparing with the eigenvalues in Lemma 4.2 — every eigenvalue of $\tilde{g}\bigg\vert_{\mathcal{U}(P)_1}$ has even multiplicity — $\widetilde{G}$ contains no quasi-reflections. Now, (\cite{EKZ4}, Lemma 6.1) states that for a Noetherian regular algebra $A$ and a finite subgroup $H \subseteq \text{Aut}_{\text{gr}}(A)$, if $H$ contains no quasi-reflections, then $A^H$ has infinite global dimension. In this case, $\mathcal{U}(P)^{\widetilde{G}}$ has infinite global dimension, hence is not Artin-Schelter regular.
\end{proof}
\smallskip
Artin-Schelter regularity, as Theorem 4.3 states, is not achievable for any nontrivial $G$. Artin-Schelter Gorensteinness, on the other hand, can be achieved in some cases. In general, it is extremely hard to verify the Artin-Schelter Gorensteinness of $\mathcal{U}(P)^{\widetilde{G}}$ by constructing a minimal free resolution of the trivial module, due to the difficulty of describing the relations of $\mathcal{U}(P)^{\widetilde{G}}$ systematically. Instead, we turn to the following result:
\begin{thm}
(\cite{JZ2}, Theorem 3.3) Let $A$ be a Noetherian Artin-Schelter Gorenstein algebra, and $H \subseteq \text{GrAut}(A)$ be a finite subgroup. If $\text{hdet} h = 1$ for all $h \in H$, then $A^H$ is Artin-Schelter Gorenstein.
\end{thm}
\smallskip
To apply this result, we need some knowledge about the homological determinant of each $\tilde{g} \in \widetilde{G}$. The following lemma, a generalization of (\cite{GVW}, Theorem 5.6), provides a crucial step in its computation.
\begin{lem}
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial quadratic Poisson algebra, and $g$ be a graded Poisson automorphism of $P$ of finite order. Suppose that $g\bigg\vert_{P_1}$ has eigenvalues $\lambda_1, \cdots, \lambda_m$, with multiplicity $c_1, \cdots, c_m$, respectively. Then
\[
\text{Tr}_{\mathcal{U}(P)}(\tilde{g},t) = \frac{1}{(1-\lambda_1t)^{2c_1} \cdots (1-\lambda_mt)^{2c_m}}.
\]
\end{lem}
\begin{proof}
Since $P$ is a commutative polynomial ring,
\[
\text{Tr}_{P}(g,t) = \frac{1}{(1-\lambda_1t)^{c_1} \cdots (1-\lambda_mt)^{c_m}}.
\]
Let $1 + \displaystyle{\sum_{i \geq 1}a_it^i}$ be the Taylor expansion of $\text{Tr}_{P}(g,t)$. It suffices to show that $(1 + \displaystyle{\sum_{i \geq 1}a_it^i})^2 = \text{Tr}_{\mathcal{U}(P)}(\tilde{g},t)$. Fix $d \geq 1$. By (\cite{OPS}, Theorem 3.7), $\mathcal{U}(P)_d$ has a $\Bbbk$-linear basis $\{x_1^{p_1}\cdots x_n^{p_n}y_1^{q_1} \cdots y_n^{q_n}: \displaystyle{\sum_{j=1}^{n} (p_j+q_j)} = d\}$. Let $r_1, \cdots, r_n \geq 0$, and $b_{r_1,\cdots,r_n}$ be the coefficient of $x_1^{r_1} \cdots x_n^{r_n}$ in $g(x_1^{r_1} \cdots x_n^{r_n})$. Consider the coefficient of $x_1^{p_1} \cdots x_n^{p_n}y_1^{q_1} \cdots y_n^{q_n}$ in $\tilde{g}(x_1^{p_1} \cdots x_n^{p_n}y_1^{q_1} \cdots y_n^{q_n})$:
\begin{enumerate}[label = (\arabic*)]
\item Since $[x_i,x_j] = 0$ in $\mathcal{U}(P)$, the coefficient of $x_1^{p_1} \cdots x_n^{p_n}$ in $\tilde{g}(x_1^{p_1} \cdots x_n^{p_n})$ is $b_{p_1, \cdots, p_n}$.
\item Since $[y_i,y_j] = \displaystyle{\sum_{l=1}^{n}\frac{\partial \{x_i,x_j\}}{\partial x_l}y_l}$ and $[y_i,x_j] = \{x_i,x_j\}$, the coefficient of $y_1^{q_1} \cdots y_n^{q_n}$ in $\tilde{g}(y_1^{q_1} \cdots y_n^{q_n})$ is $b_{q_1, \cdots, q_n}$.
\item Since $\tilde{g}(x_j) \in \displaystyle{\bigoplus_{l=1}^{n}\Bbbk x_l}$ for all $1 \leq j \leq n$, the coefficient of $x_1^{p_1} \cdots x_n^{p_n}y_1^{q_1} \cdots y_n^{q_n}$ in $\tilde{g}(x_1^{p_1} \cdots x_n^{p_n}y_1^{q_1} \cdots y_n^{q_n})$ is $b_{p_1, \cdots, p_n}b_{q_1, \cdots, q_n}$.
\end{enumerate}
Let $1 + \displaystyle{\sum_{i \geq 1}e_it^i}$ be the Taylor expansion of $\text{Tr}_{\mathcal{U}(P)}(\tilde{g},t)$. By definition, $e_d$ equals the sum of all $b_{p_1, \cdots, p_n}b_{q_1, \cdots, q_n}$ ranging over $p_1 + \cdots + p_n + q_1 + \cdots + q_n = d$. Equivalently, $e_d = \displaystyle{\sum_{i+j=d}a_ia_j}$. Consequently, $(1 + \displaystyle{\sum_{i \geq 1}a_it^i})^2 = 1 + \displaystyle{\sum_{i \geq 1}e_it^i}$, and
\begin{align*}
\text{Tr}_{\mathcal{U}(P)}(\tilde{g},t) =& \text{Tr}_{P}(g,t)^2\\
=& \bigg(\frac{1}{(1-\lambda_1t)^{c_1} \cdots (1-\lambda_mt)^{c_m}}\bigg)^2\\
=& \frac{1}{(1-\lambda_1t)^{2c_1} \cdots (1-\lambda_mt)^{2c_m}}.
\end{align*}
\end{proof}
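The Cauchy-product step $e_d = \sum_{i+j=d} a_ia_j$ can be sanity-checked on a small diagonal example by enumerating the monomial basis of $\mathcal{U}(P)_d$ directly (a sketch, assuming sympy; $n = 2$ and the eigenvalues are arbitrary sample values, not from the paper):

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')
l1, l2 = sp.Rational(1, 2), sp.Rational(1, 3)  # sample eigenvalues of g|_{P_1}

# Tr_P(g,t) for g = diag(l1, l2) on k[x1, x2]; the lemma asserts
# that Tr_{U(P)}(g~, t) is its square.
tr_U = (1/((1 - l1*t)*(1 - l2*t)))**2

d = 4
# g~ scales the basis monomial x1^p1 x2^p2 y1^q1 y2^q2 of U(P)_d
# by l1^(p1+q1) * l2^(p2+q2); summing these gives the trace in degree d.
trace_d = sum(l1**(p1 + q1) * l2**(p2 + q2)
              for p1, p2, q1, q2 in product(range(d + 1), repeat=4)
              if p1 + p2 + q1 + q2 == d)
assert sp.series(tr_U, t, 0, d + 1).removeO().coeff(t, d) == trace_d
```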
\smallskip
The following theorem states an explicit formula for computing the homological determinant of $\tilde{g}$.
\begin{thm}
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a polynomial quadratic Poisson algebra, and $g$ be a graded Poisson automorphism of $P$ of finite order. Then $\text{hdet} \tilde{g} = (\det g\bigg\vert_{P_1})^2$.
\end{thm}
\begin{proof}
By (\cite{JZ2}, Proposition 4.2), $\tilde{g}$ is rational over $\Bbbk$. By (\cite{JZ2}, Lemma 2.6), $\text{Tr}_{\mathcal{U}(P)}(\tilde{g},t) = (\text{hdet}\tilde{g})^{-1}t^{-l} + \textit{lower terms}$, when written as a Laurent series in $t^{-1}$. By Lemma 4.5,
\[
\text{Tr}_{\mathcal{U}(P)}(\tilde{g},t) = \frac{1}{(1-\lambda_1t)^{2c_1} \cdots (1-\lambda_mt)^{2c_m}} = \frac{1}{\big(\det g\big\vert_{P_1}\big)^2t^{2n} + \textit{lower terms}} = \big(\det g\big\vert_{P_1}\big)^{-2}t^{-2n} + \textit{lower terms}.
\]
By definition, $\text{hdet}\tilde{g} = (\det g\big\vert_{P_1})^2$.
\end{proof}
\bigskip
Combined with Proposition 2.2, Theorem 4.6 provides an answer to Question (2) for unimodular quadratic polynomial Poisson algebras of dimension 3:
\begin{prop}
Let $P = \Bbbk[x_1, x_2, x_3]$ be one of the following Poisson algebras, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup generated by one or more of the following generators. Then $\mathcal{U}(P)^{\widetilde{G}}$ is Artin-Schelter Gorenstein.
\[
\begin{tabular}{ |p{2.75cm}||p{13cm}|}
\hline
\centering $\boldsymbol{\Omega}$ &
\centering \textbf{Generators}\tabularnewline
\hline
\hline
\centering $x_1^3$ &
\hspace{.1in}
$\begin{pmatrix}
\pm\sqrt{a_{22}a_{33}-a_{23}a_{32}} & 0 & 0\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$, $a_{22}a_{33}-a_{23}a_{32}$ is a 3rd root of unity.
\\
\hline
\centering $x_1^2x_2$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & \pm \displaystyle{\frac{1}{a_{11}^2}} & 0\\
a_{31} & a_{32} & a_{11}
\end{pmatrix}$, $a_{11} \neq 0$.
\\
\hline
\centering $x_1x_2x_3$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
0 & 0 & \pm \displaystyle{\frac{1}{a_{11}a_{22}}}
\end{pmatrix}$,
$\begin{pmatrix}
0 & a_{12} & 0\\
0 & 0 & a_{23}\\
\pm \displaystyle{\frac{1}{a_{12}a_{23}}} & 0 & 0
\end{pmatrix}$,
$\begin{pmatrix}
0 & 0 & a_{13}\\
a_{21} & 0 & 0\\
0 & \pm \displaystyle{\frac{1}{a_{13}a_{21}}} & 0
\end{pmatrix}$
\\
\hline
\end{tabular}
\]
\[
\begin{tabular}{ |p{2.75cm}||p{13cm}|}
\hline
\centering $\boldsymbol{\Omega}$ &
\centering \textbf{Generators}\tabularnewline
\hline
\hline
\centering $x_1x_2(x_1+x_2)$ &
\hspace{.1in} $\begin{pmatrix}
0 & a_{33} & 0\\
-a_{33} & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
0 & -a_{33} & 0\\
-a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
-a_{33} & -a_{33} & 0\\
a_{33} & 0 & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,\\
& \hspace{.1in}
$\begin{pmatrix}
-a_{33} & 0 & 0\\
a_{33} & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
a_{33} & a_{33} & 0\\
0 & -a_{33} & 0\\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$,
\\
\hspace{.1in} & \hspace{.14in} $a_{33}$ is a 6th root of unity.\\
\hline
\centering $x_1^3+x_2^2x_3$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix}$, $a_{11}$ is a 6th root of unity.\\
\hline
\centering $x_1^3+x_1^2x_3+x_2^2x_3$ &
\hspace{.1in}
$\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
0 & 0 & a_{33}
\end{pmatrix}$,
$\begin{pmatrix}
-\frac{1}{2}a_{33} & \pm\frac{\sqrt{3}}{2}a_{33} & 0\\
\mp\frac{\sqrt{3}}{2}a_{33} & -\frac{1}{2}a_{33} & 0\\
\frac{9}{8}a_{33} & \mp\frac{3\sqrt{3}}{8}a_{33} & a_{33}
\end{pmatrix}$,
$a_{33}$ is a 6th root of unity.
\\
\hline
\centering $\frac{1}{3}(x_1^3+x_2^3+x_3^3) + \lambda x_1x_2x_3$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix}$,
$\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{pmatrix}$,
$\begin{pmatrix}
1 & 0 & 0\\
0 & b & 0\\
0 & 0 & b^2
\end{pmatrix}$,
$a_{11}$ is a 6th root of unity, $b$ is a \\
& \hspace{2.95in} primitive 3rd root of unity.\\
\hline
\centering $x_1^3+x_1x_2x_3+x_1^2x_2$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
0 & \displaystyle{\frac{a_{11}^2}{a_{33}}} & 0\\
a_{33}-a_{11} & 0 & a_{33}
\end{pmatrix}$,
$a_{11}, a_{33} \neq 0$.\\
\hline
\centering $x_1^3+x_1x_2x_3+x_1^2x_2$ &
\hspace{.1in}
$\begin{pmatrix}
a_{11} & 0 & 0\\
a_{21} & a_{11} & 0\\
-\displaystyle{\frac{a_{21}^2}{a_{11}}} & -2a_{21} & a_{11}
\end{pmatrix}$,
$a_{11}$ is a 6th root of unity.\\
\hline
\end{tabular}
\]
\end{prop}
\smallskip
\begin{proof}
This proposition is a re-statement of (\cite{JZ2}, Theorem 3.3) in the case of Poisson enveloping algebras of unimodular quadratic polynomial Poisson algebras of dimension 3: we identify $g \in \text{PAut}_{\text{gr}}(P)$ such that $\text{hdet}\tilde{g} = 1$. By Theorem 4.6, this amounts to identifying $g \in \text{PAut}_{\text{gr}}(P)$ such that $\det g\bigg\vert_{P_1} = \pm 1$. In Proposition 2.2, we have classified all such $g$. The rest is a routine determinant computation.
\end{proof}
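As an illustration of that determinant computation, the non-diagonal generator listed for $\Omega = x_1^3+x_1^2x_3+x_2^2x_3$ has determinant $a_{33}^3$, so $\text{hdet}\,\tilde{g} = a_{33}^6 = 1$ whenever $a_{33}$ is a 6th root of unity (a sketch, assuming sympy):

```python
import sympy as sp

a33 = sp.symbols('a33')
r3 = sp.sqrt(3)
# The second generator family for Omega = x1^3 + x1^2*x3 + x2^2*x3.
g = sp.Matrix([
    [-a33/2,      r3*a33/2,     0],
    [-r3*a33/2,   -a33/2,       0],
    [9*a33/8,     -3*r3*a33/8,  a33],
])
assert sp.simplify(g.det() - a33**3) == 0

# If a33 is a 6th root of unity, hdet(g~) = (det g)^2 = a33^6 = 1.
zeta = sp.exp(sp.pi*sp.I/3)  # a primitive 6th root of unity
assert sp.simplify(zeta**6) == 1
```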
\
\section{Future Work}
In this section, we remark on some interesting discoveries made when proving the main results of this paper, and propose some possible questions for future research.
\begin{quest}
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a quadratic Poisson algebra, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be a finite subgroup. For all quadratic Poisson structures that have been studied (Theorem 3.1; \cite{GVW}, Theorems 4.4, 4.11, 4.17, 4.19), $P^G \cong P$ if and only if $G$ is trivial. Is this statement true for all quadratic Poisson structures? In other words, is this the Shephard-Todd-Chevalley theorem for quadratic Poisson algebras? A good starting place is the 13 non-unimodular quadratic Poisson structures classified in (\cite{DH}, Theorem 2).
\end{quest}
\smallskip
\begin{quest}
Let $P = \Bbbk[x_1, x_2, x_3]$ be the unimodular Poisson algebra defined by $\Omega = x_1x_2(x_1+x_2)$, and $G \subseteq \text{PAut}_{\text{gr}}(P)$ be the finite subgroup generated by $\begin{pmatrix}0 & -1 & 0\\-1 & 0 & 0\\a_1 & a_1 & 1\end{pmatrix}$ and $\begin{pmatrix}0 & -1 & 0\\-1 & 0 & 0\\a_2 & a_2 & 1\end{pmatrix}$. In Theorem 3.1, we proved that the invariant subalgebra $P^G$ remains unimodular (despite not being isomorphic to $P$). This is a rare occurrence. Indeed, the invariant subalgebras of the remaining Poisson structures in Theorem 3.1 and the Poisson Sklyanin structures in (\cite{GVW}, Theorem 4.4) are not unimodular, even when $G$ is generated by Poisson reflections. Are there more unimodular Poisson algebras $P$ and more finite subgroups $G \subseteq \text{PAut}_{\text{gr}}(P)$ such that the invariant subalgebra $P^G$ is unimodular? Can we find a set of conditions on $P$ and $G$ to guarantee the unimodularity of $P^G$?
\end{quest}
\smallskip
\begin{quest}
Let $P = \Bbbk[x_1, \cdots, x_n]$ be a Poisson algebra with its Poisson structure derived from the semiclassical limit of an Artin-Schelter regular algebra $A$. It appears that the invariant subalgebras of $P$ and the invariant subalgebras of $A$ bear a striking resemblance. One example is (\cite{EKZ3}, Theorem 4.5) and (\cite{GVW}, Theorem 3.8): these two papers state an identical Shephard-Todd-Chevalley theorem for skew polynomial rings and Poisson algebras arising from them. Another example is (\cite{EKZ3}, Proposition 5.8) and (\cite{GVW}, 4.6-4.11): these two papers capture some significant similarities between quantum matrix algebras and Poisson algebras arising from them. Naturally, we ask if there is a general theory connecting the invariants of the Poisson algebra and its corresponding Artin-Schelter regular algebra. As of February 2023, we have proved some preliminary results regarding this question, and we will upload a paper presenting our discoveries to arXiv shortly.
\end{quest}
\
\section*{Appendix}
In this appendix, we provide the complete computations for Proposition 2.2 and Proposition 2.3 that were omitted from Section 2.
\medskip
\textit{Unimodular 1}. $\Omega = x_1^3$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 3x_1^2$, $\{x_3,x_1\} = 0$.
\smallskip
See the proof of Proposition 2.2.
\
\textit{Unimodular 2}. $\Omega = x_1^2x_2$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 2x_1x_2$, $\{x_3,x_1\} = x_1^2$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{13}a_{21} = a_{11}a_{23}$.
\item $a_{12}a_{23} = a_{13}a_{22}$.
\item $2a_{11}a_{21} = a_{23}a_{31}-a_{21}a_{33}$.
\item $a_{12}a_{22} = 0$.
\item $a_{13}a_{23} = 0$.
\item $a_{11}a_{22}+a_{12}a_{21}=a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{11}a_{23}+a_{13}a_{21} = 0$.
\item $a_{12}a_{23}+a_{13}a_{22} = 0$
\item $a_{11}^2 = a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}^2 = 0$.
\item $a_{13}^2 = 0$.
\item $a_{11}a_{12} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{11}a_{13} = 0$.
\item $a_{12}a_{13} = 0$.
\end{enumerate}
By (10), $a_{12} = 0$. By (11), $a_{13} = 0$. Since $g$ is invertible, $a_{11} \neq 0$. Plugging these into (1) gives $a_{23} = 0$. Plugging these into the remaining equations:
\begin{enumerate}[label = (\arabic*)]
\item $0 = 0$.
\item $0 = 0$.
\item $2a_{11}a_{21} = -a_{21}a_{33}$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{11}a_{22}=a_{22}a_{33}$.
\item $0 = 0$.
\item $0 = 0$
\item $a_{11}^2 = a_{11}a_{33}$.
\item $a_{12} = 0$.
\item $a_{13} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\end{enumerate}
Since $a_{11} \neq 0$, (9) implies $a_{11} = a_{33}$. Plug this into (3): $2a_{21} = -a_{21}$, implying $a_{21} = 0$. Then
\[
g =
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
a_{31} & a_{32} & a_{11}
\end{pmatrix}.
\]
Since $g$ is invertible, $a_{11}, a_{22} \neq 0$. In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
a_{31} & a_{32} & a_{11}
\end{pmatrix}: a_{11}, a_{22} \neq 0\}.
\]
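These brackets are the Jacobian brackets of $\Omega = x_1^2x_2$, i.e. $\{f,g\} = \det \operatorname{Jac}(f,g,\Omega)$, and membership of the matrices above in $\text{PAut}_{\text{gr}}(P)$ can be confirmed directly (a sketch, assuming sympy):

```python
import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')
a11, a22, a31, a32 = sp.symbols('a11 a22 a31 a32')
Omega = x1**2 * x2

def bracket(f, g):
    # Jacobian bracket: {f, g} = det Jac(f, g, Omega).
    return sp.Matrix([[sp.diff(h, v) for v in X] for h in (f, g, Omega)]).det()

# The classified graded Poisson automorphisms.
img = {x1: a11*x1, x2: a22*x2, x3: a31*x1 + a32*x2 + a11*x3}
phi = lambda f: f.subs(img, simultaneous=True)

# Check {phi(xi), phi(xj)} = phi({xi, xj}) on all generator pairs.
for u, v in [(x1, x2), (x2, x3), (x3, x1)]:
    assert sp.expand(bracket(phi(u), phi(v)) - phi(bracket(u, v))) == 0
```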
\medskip
The eigenvalues of $g$ are $a_{11}, a_{11}, a_{22}$. If $g$ is a reflection, we must have $a_{11} = 1$ and $a_{22} = \xi$ for some primitive $n$th root of unity $\xi$. Hence $g$ has the form
\[
g =
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
a_{31} & a_{32} & 1
\end{pmatrix}.
\]
Suppose that $g$ has finite order $m$. Notice that the $(3,1)$-entry of $g^m$ is $ma_{31}$ and the $(3,2)$-entry is
\[
a_{32}(\xi^{m-1}+\xi^{m-2} + \cdots + 1).
\]
Immediately $a_{31} = 0$. If $a_{32} \neq 0$, then $\xi^{m-1}+\xi^{m-2} + \cdots + 1 = 0$, so $\xi^{m}-1 = (\xi-1)(\xi^{m-1}+\xi^{m-2} + \cdots + 1) = 0$. In other words, $\xi$ is an $m$th root of unity, hence $n \mid m$; since $g^n = I$ as well, $n = m$. In conclusion,
\[
\text{PR}(P) = \{
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & a_{32} & 1
\end{pmatrix}\}.
\]
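The claim that such a reflection has order exactly $n$ can be illustrated concretely, e.g. for $n = 3$ (a sketch, assuming sympy):

```python
import sympy as sp

a32 = sp.symbols('a32')
xi = sp.Rational(-1, 2) + sp.sqrt(3)*sp.I/2  # a primitive 3rd root of unity
g = sp.Matrix([[1, 0, 0], [0, xi, 0], [0, a32, 1]])

# The (3,2)-entry of g^m is a32*(xi^(m-1) + ... + xi + 1), which first
# vanishes at m = 3, so g has order exactly 3.
assert (g**3 - sp.eye(3)).applyfunc(sp.expand) == sp.zeros(3, 3)
assert sp.expand((g**2)[2, 1]) != 0  # so g^2 is not the identity
```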
\
\textit{Unimodular 3}. $\Omega = x_1x_2x_3$.
$\{x_1,x_2\} = x_1x_2$, $\{x_2,x_3\} = x_2x_3$, $\{x_3,x_1\} = x_1x_3$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}a_{21} = 0$.
\item $a_{12}a_{22} = 0$.
\item $a_{13}a_{23} = 0$.
\item $a_{11}a_{22} + a_{12}a_{21} = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{11}a_{23} + a_{13}a_{21} = a_{13}a_{21}-a_{11}a_{23}$.
\item $a_{12}a_{23} + a_{13}a_{22} = a_{12}a_{23}-a_{13}a_{22}$.
\item $a_{21}a_{31} = 0$.
\item $a_{22}a_{32} = 0$.
\item $a_{23}a_{33} = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = a_{21}a_{32}-a_{22}a_{31}$.
\item $a_{21}a_{33}+a_{23}a_{31} = a_{23}a_{31}-a_{21}a_{33}$.
\item $a_{22}a_{33}+a_{23}a_{32} = a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{11}a_{31} = 0$.
\item $a_{12}a_{32} = 0$.
\item $a_{13}a_{33} = 0$.
\item $a_{11}a_{32}+a_{12}a_{31} = a_{12}a_{31}-a_{11}a_{32}$.
\item $a_{11}a_{33}+a_{13}a_{31} = a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}a_{33}+a_{13}a_{32} = a_{13}a_{32}-a_{12}a_{33}$.
\end{enumerate}
Cancelling the common terms and rearranging the order of the equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}a_{21} = 0$.
\item $a_{11}a_{23} = 0$.
\item $a_{11}a_{31} = 0$.
\item $a_{11}a_{32} = 0$.
\item $a_{12}a_{21} = 0$.
\item $a_{12}a_{22} = 0$.
\item $a_{12}a_{32} = 0$.
\item $a_{12}a_{33} = 0$.
\item $a_{13}a_{22} = 0$.
\item $a_{13}a_{23} = 0$.
\item $a_{13}a_{31} = 0$.
\item $a_{13}a_{33} = 0$.
\item $a_{21}a_{31} = 0$.
\item $a_{21}a_{33} = 0$.
\item $a_{22}a_{31} = 0$.
\item $a_{22}a_{32} = 0$.
\item $a_{23}a_{32} = 0$.
\item $a_{23}a_{33} = 0$.
\end{enumerate}
Suppose that $a_{11} \neq 0$. Immediately, (1), (2), (3), (4) imply $a_{21}, a_{23}, a_{31}, a_{32} = 0$. Since $g$ is invertible, $a_{22}, a_{33} \neq 0$. Plug these into the above equations: $a_{12}, a_{13} = 0$.
Suppose that $a_{11} = 0$. Suppose further that $a_{12} \neq 0$. Immediately, (5), (6), (7), (8) imply $a_{21}, a_{22}, a_{32}, a_{33} = 0$. Since $g$ is invertible, $a_{23}, a_{31} \neq 0$. Plug these into the above equations: $a_{13} = 0$.
Suppose that $a_{11} = 0$. The remaining case is $a_{12} = 0$. Since $g$ is invertible, $a_{13} \neq 0$. Immediately, (9), (10), (11), (12) imply $a_{22}, a_{23}, a_{31}, a_{33} = 0$. Since $g$ is invertible, $a_{21}, a_{32} \neq 0$, and all other equations become redundant.
In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{22} & 0\\
0 & 0 & a_{33}
\end{pmatrix},
\begin{pmatrix}
0 & a_{12} & 0\\
0 & 0 & a_{23}\\
a_{31} & 0 & 0
\end{pmatrix},
\begin{pmatrix}
0 & 0 & a_{13}\\
a_{21} & 0 & 0\\
0 & a_{32 }& 0
\end{pmatrix}
: a_{ij} \neq 0\}.
\]
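As with the other cases, one can confirm that each family preserves the Jacobian bracket of $\Omega = x_1x_2x_3$; e.g. for the cyclic family (a sketch, assuming sympy):

```python
import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')
a12, a23, a31 = sp.symbols('a12 a23 a31')
Omega = x1*x2*x3

def bracket(f, g):
    # Jacobian bracket: {f, g} = det Jac(f, g, Omega).
    return sp.Matrix([[sp.diff(h, v) for v in X] for h in (f, g, Omega)]).det()

# The cyclic family of graded Poisson automorphisms.
img = {x1: a12*x2, x2: a23*x3, x3: a31*x1}
phi = lambda f: f.subs(img, simultaneous=True)

for u, v in [(x1, x2), (x2, x3), (x3, x1)]:
    assert sp.expand(bracket(phi(u), phi(v)) - phi(bracket(u, v))) == 0
```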
\medskip
\begin{itemize}
\item The eigenvalues of the first matrix are $a_{11}, a_{22}, a_{33}$. If $g$ is a reflection, then $g$ has one of the following forms:
\[
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \xi
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
\xi & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix},
\]
for some primitive $n$th root of unity $\xi$.
\item The characteristic equation of the second matrix is $\lambda^3 = a_{12}a_{23}a_{31}$. Hence the eigenvalues are $r, r\xi_3, r\xi_3^2$, where $r = \sqrt[3]{a_{12}a_{23}a_{31}} \neq 0$ and $\xi_3$ is a primitive 3rd root of unity. Since $\xi_3 \neq 1$, these three eigenvalues are pairwise distinct, so no eigenvalue of $g$ has multiplicity 2. In particular, the eigenvalues cannot be $1, 1, \xi$, so $g$ is not a reflection.
\item The characteristic equation of the third matrix is $\lambda^3 = a_{13}a_{21}a_{32}$. Hence the eigenvalues are $\sqrt[3]{a_{13}a_{21}a_{32}}, \sqrt[3]{a_{13}a_{21}a_{32}}\xi_3, \sqrt[3]{a_{13}a_{21}a_{32}}\xi_3^2$, where $\xi_3$ is a primitive 3rd root of unity. By the same argument, $g$ is not a reflection in this case.
\end{itemize}
In conclusion,
\[
\text{PR}(P) = \{
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \xi
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0\\
0 & \xi & 0\\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
\xi & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}\}.
\]
\
\textit{Unimodular 4}. $\Omega = x_1x_2(x_1+x_2)$.
$\{x_1,x_2\} = 0$, $\{x_2,x_3\} = 2x_1x_2+x_2^2$, $\{x_3,x_1\} = x_1^2+2x_1x_2$.
\smallskip
See the proof of Proposition 2.2.
\
\textit{Unimodular 5}. $\Omega = x_1^3+x_2^2x_3$.
$\{x_1,x_2\} = x_2^2$, $\{x_2,x_3\} = 3x_1^2$, $\{x_3,x_1\} = 2x_2x_3$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{21}^2 = 3a_{12}a_{23}-3a_{13}a_{22}$.
\item $a_{22}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{23}^2 = 0$.
\item $a_{21}a_{22} = 0$.
\item $a_{21}a_{23} = 0$.
\item $a_{22}a_{23} = a_{13}a_{21}-a_{11}a_{23}$.
\item $a_{11}^2 = a_{22}a_{33}-a_{23}a_{32}$.
\item $3a_{12}^2 = a_{21}a_{32}-a_{22}a_{31}$.
\item $a_{13}^2 = 0$.
\item $a_{11}a_{12} = 0$.
\item $a_{11}a_{13} = 0$.
\item $3a_{12}a_{13} = a_{23}a_{31}-a_{21}a_{33}$.
\item $2a_{21}a_{31} = 3a_{13}a_{32}-3a_{12}a_{33}$.
\item $2a_{22}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $a_{23}a_{33} = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = 0$.
\item $a_{21}a_{33} + a_{23}a_{31} = 0$.
\item $a_{22}a_{33}+a_{23}a_{32} = a_{11}a_{33}-a_{13}a_{31}$.
\end{enumerate}
From (3), (9), $a_{13}, a_{23} = 0$. Since $g$ is invertible, $a_{33} \neq 0$. Plug these into the above equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{21}^2 = 0$.
\item $a_{22}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{23} = 0$.
\item $a_{21}a_{22} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{11}^2 = a_{22}a_{33}$.
\item $3a_{12}^2 = -a_{22}a_{31}$.
\item $a_{13} = 0$.
\item $a_{11}a_{12} = 0$.
\item $0 = 0$.
\item $0 = -a_{21}a_{33}$.
\item $2a_{21}a_{31} = -3a_{12}a_{33}$.
\item $2a_{22}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $0 = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = 0$.
\item $a_{21}a_{33} = 0$.
\item $a_{22} = a_{11}$.
\end{enumerate}
Further simplify the equations with (1), (18), and $a_{22} \neq 0$:
\begin{enumerate}[label = (\arabic*)]
\item $a_{21} = 0$.
\item $0 = 0$.
\item $a_{23} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{11} = a_{33}$.
\item $0 = 0$.
\item $a_{13} = 0$.
\item $a_{12} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{32} = 0$.
\item $0 = 0$.
\item $a_{31} = 0$.
\item $0 = 0$.
\item $a_{22} = a_{11}$.
\end{enumerate}
In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix}: a_{11} \neq 0
\}.
\]
It is clear that
\[
\text{PR}(P) = \emptyset.
\]
\
\
\textit{Unimodular 6}. $\Omega = x_1^3+x_1^2x_3+x_2^2x_3$.
$\{x_1,x_2\} = x_1^2+x_2^2$, $\{x_2,x_3\} = 3x_1^2+2x_1x_3$, $\{x_3,x_1\} = 2x_2x_3$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}^2+a_{21}^2 = a_{11}a_{22}-a_{12}a_{21}+3a_{12}a_{23}-3a_{13}a_{22}$.
\item $a_{12}^2+a_{22}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{13}^2 + a_{23}^2 = 0$.
\item $a_{11}a_{12}+a_{21}a_{22} = 0$.
\item $a_{11}a_{13}+a_{21}a_{23} = a_{12}a_{23}-a_{13}a_{22}$.
\item $a_{12}a_{13}+a_{22}a_{23} = a_{13}a_{21}-a_{11}a_{23}$.
\item $3a_{11}^2+2a_{11}a_{31} = a_{21}a_{32}-a_{22}a_{31}+3a_{22}a_{33}-3a_{23}a_{32}$.
\item $3a_{12}^2+2a_{12}a_{32} = a_{21}a_{32}-a_{22}a_{31}$.
\item $3a_{13}^2+2a_{13}a_{33} = 0$.
\item $3a_{11}a_{12}+a_{11}a_{32}+a_{12}a_{31} = 0$.
\item $3a_{11}a_{13}+a_{11}a_{33}+a_{13}a_{31} = a_{22}a_{33}-a_{23}a_{32}$.
\item $3a_{12}a_{13}+a_{12}a_{33}+a_{13}a_{32} = a_{23}a_{31}-a_{21}a_{33}$.
\item $2a_{21}a_{31} = a_{12}a_{31}-a_{11}a_{32} + 3a_{13}a_{32}-3a_{12}a_{33}$.
\item $2a_{22}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $2a_{23}a_{33} = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = 0$.
\item $a_{21}a_{33}+a_{23}a_{31} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{22}a_{33}+a_{23}a_{32} = a_{11}a_{33}-a_{13}a_{31}$.
\end{enumerate}
If $a_{33} = 0$, (9) implies $a_{13} = 0$, and (3) implies $a_{23} = 0$, contradicting the invertibility of $g$. Hence $a_{33} \neq 0$. By (15), $a_{23} = 0$. By (3), $a_{13} = 0$. Plug these into the above equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}^2+a_{21}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{12}^2+a_{22}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $0 = 0$.
\item $a_{11}a_{12}+a_{21}a_{22} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $3a_{11}^2+2a_{11}a_{31} = a_{21}a_{32}-a_{22}a_{31}+3a_{22}a_{33}$.
\item $3a_{12}^2+2a_{12}a_{32} = a_{21}a_{32}-a_{22}a_{31}$.
\item $0 = 0$.
\item $3a_{11}a_{12}+a_{11}a_{32}+a_{12}a_{31} = 0$.
\item $a_{11} = a_{22}$.
\item $a_{12} = -a_{21}$.
\item $2a_{21}a_{31} = a_{12}a_{31}-a_{11}a_{32}-3a_{12}a_{33}$.
\item $2a_{22}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $0 = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = 0$.
\item $a_{21} = -a_{12}$.
\item $a_{22} = a_{11}$.
\end{enumerate}
Further simplify the equations by (11) and (12):
\begin{enumerate}[label = (\arabic*)]
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $3a_{11}^2+2a_{11}a_{31} = a_{21}a_{32}-a_{22}a_{31}+3a_{22}a_{33}$.
\item $3a_{12}^2+2a_{12}a_{32} = a_{21}a_{32}-a_{22}a_{31}$.
\item $0 = 0$.
\item $3a_{11}a_{12}+a_{11}a_{32}+a_{12}a_{31} = 0$.
\item $a_{11} = a_{22}$.
\item $a_{12} = -a_{21}$.
\item $2a_{21}a_{31} = a_{12}a_{31}-a_{11}a_{32}-3a_{12}a_{33}$.
\item $2a_{22}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $0 = 0$.
\item $a_{21}a_{32}+a_{22}a_{31} = 0$.
\item $a_{21} = -a_{12}$.
\item $a_{22} = a_{11}$.
\end{enumerate}
If $a_{12} = 0$, it is easy to derive that $a_{21} = a_{31} = a_{32} = 0$ and $a_{11} = a_{33}$. Then
\[
g = \begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
0 & 0 & a_{33}
\end{pmatrix}.
\]
Suppose that $a_{12} \neq 0$. We need to solve the following equations:
\begin{enumerate}[label = (\alph*)]
\item $a_{11} = a_{22}$.
\item $a_{12} = -a_{21}$.
\item $3a_{11}^2+2a_{11}a_{31} = -a_{12}a_{32}-a_{11}a_{31}+3a_{11}a_{33}$.
\item $3a_{12}^2+2a_{12}a_{32} = -a_{12}a_{32}-a_{11}a_{31}$.
\item $3a_{11}a_{12}+a_{11}a_{32}+a_{12}a_{31} = 0$.
\item $-2a_{12}a_{31} = a_{12}a_{31}-a_{11}a_{32}-3a_{12}a_{33}$.
\item $2a_{11}a_{32} = a_{12}a_{31}-a_{11}a_{32}$.
\item $a_{11}a_{31} = a_{12}a_{32}$.
\end{enumerate}
Use (h) to simplify (d): $3a_{12}^2 + 4a_{12}a_{32} = 0$. Since $a_{12} \neq 0$, this implies $a_{32} = -\frac{3}{4}a_{12}$. Plugging this into (e) gives $a_{31} = -\frac{9}{4}a_{11}$. Combined, (f) now implies $a_{11} = -\frac{1}{2}a_{33}$, and then (h) implies $a_{12} = \pm \frac{\sqrt{3}}{2}a_{33}$. Finally, notice that
\[
g =
\begin{pmatrix}
-\frac{1}{2}a_{33} & \pm \frac{\sqrt{3}}{2}a_{33} & 0\\
\mp \frac{\sqrt{3}}{2}a_{33} & -\frac{1}{2}a_{33} & 0\\
\frac{9}{8}a_{33} & \mp \frac{3\sqrt{3}}{8}a_{33} & a_{33}
\end{pmatrix}
\]
satisfies (a)-(h) simultaneously. In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{33} & 0 & 0\\
0 & a_{33} & 0\\
0 & 0 & a_{33}
\end{pmatrix},
\begin{pmatrix}
-\frac{1}{2}a_{33} & \pm \frac{\sqrt{3}}{2}a_{33} & 0\\
\mp \frac{\sqrt{3}}{2}a_{33} & -\frac{1}{2}a_{33} & 0\\
\frac{9}{8}a_{33} & \mp \frac{3\sqrt{3}}{8}a_{33} & a_{33}
\end{pmatrix}: a_{33} \neq 0
\}.
\]
\medskip
The first matrix cannot be a reflection, as its only eigenvalue is $a_{33}$, with multiplicity $3$. The second matrix has characteristic polynomial
\[
(\lambda^2 + a_{33}\lambda + a_{33}^2)(a_{33}-\lambda).
\]
It has eigenvalues $\displaystyle{\frac{(-1 \pm \sqrt{-3})a_{33}}{2}}, a_{33}$. Notice that this eigenvalue configuration is identical to that of the first matrix in \textit{Unimodular 4}. By the same argument, there is no reflection. In conclusion,
\[
\text{PR}(P) = \emptyset.
\]
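As an independent sanity check of this case (illustrative only, not part of the argument), one can verify symbolically, e.g.~with \texttt{sympy}, that the second matrix above satisfies (a)--(h) and has the stated characteristic polynomial. Here $a_{33}$ is kept symbolic and the upper sign is taken; note that \texttt{sympy} uses the monic convention $\det(\lambda I - g)$, which differs from $\det(g - \lambda I)$ by a sign.

```python
import sympy as sp

a33 = sp.symbols('a33')
s3 = sp.sqrt(3)
# the second matrix of PAut_gr(P), upper sign choice
a11, a12, a21, a22 = -a33/2, s3*a33/2, -s3*a33/2, -a33/2
a31, a32 = sp.Rational(9, 8)*a33, -3*s3*a33/8

eqs = [
    a11 - a22,                                                # (a)
    a12 + a21,                                                # (b)
    3*a11**2 + 2*a11*a31 - (-a12*a32 - a11*a31 + 3*a11*a33),  # (c)
    3*a12**2 + 2*a12*a32 - (-a12*a32 - a11*a31),              # (d)
    3*a11*a12 + a11*a32 + a12*a31,                            # (e)
    -2*a12*a31 - (a12*a31 - a11*a32 - 3*a12*a33),             # (f)
    2*a11*a32 - (a12*a31 - a11*a32),                          # (g)
    a11*a31 - a12*a32,                                        # (h)
]
assert all(sp.simplify(e) == 0 for e in eqs)

g = sp.Matrix([[a11, a12, 0], [a21, a22, 0], [a31, a32, a33]])
lam = sp.symbols('lambda')
char = g.charpoly(lam).as_expr()        # det(lambda*I - g)
assert sp.expand(char - (lam**2 + a33*lam + a33**2)*(lam - a33)) == 0
```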
\
\textit{Unimodular 7}. $\Omega = \frac{1}{3}(x_1^3+x_2^3+x_3^2) + \lambda x_1x_2x_3$, $\lambda^3 \neq -1$.
$\{x_1,x_2\} = x_3^2+\lambda x_1x_2$, $\{x_2,x_3\} = x_1^2+\lambda x_2x_3$, $\{x_3,x_1\} = x_2^2+\lambda x_1x_3$.
\smallskip
This case was already computed in \cite{MLTU}:
\[
\text{PAut}_{\text{gr}}(P) =
\{
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & a_{11} & 0\\
0 & 0 & a_{11}
\end{pmatrix},
\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0\\
0 & b & 0\\
0 & 0 & b^2
\end{pmatrix}: a_{11} \neq 0,\]
\[
\hspace{4.59in}b\text{ is a root of } x^2+x+1 \text{ in } k
\}.
\]
It is clear that
\[
\text{PR}(P) = \emptyset.
\]
\
\textit{Unimodular 8}. $\Omega = x_1^3+x_1x_2x_3+x_1^2x_2$.
$\{x_1,x_2\} = x_1x_2$, $\{x_2,x_3\} = 3x_1^2+x_2x_3+2x_1x_2$, $\{x_3,x_1\} = x_1x_3+x_1^2$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}a_{21} = 3a_{12}a_{23}-3a_{13}a_{22}+a_{13}a_{21}-a_{11}a_{23}$.
\item $a_{12}a_{22} = 0$.
\item $a_{13}a_{23} = 0$.
\item $a_{11}a_{22}+a_{12}a_{21} = a_{11}a_{22}-a_{12}a_{21}+2a_{12}a_{23}-2a_{13}a_{22}$.
\item $a_{11}a_{23}+a_{13}a_{21} = a_{13}a_{21}-a_{11}a_{23}$.
\item $a_{12}a_{23}+a_{13}a_{22} = a_{12}a_{23}-a_{13}a_{22}$.
\item $3a_{11}^2+a_{21}a_{31}+2a_{11}a_{21} = 3a_{22}a_{33}-3a_{23}a_{32}+a_{23}a_{31}-a_{21}a_{33}$.
\item $3a_{12}^2+a_{22}a_{32}+2a_{12}a_{22} = 0$.
\item $3a_{13}^2+a_{23}a_{33}+2a_{13}a_{23} = 0$.
\item $6a_{11}a_{12}+a_{21}a_{32}+a_{22}a_{31}+2a_{11}a_{22}+2a_{12}a_{21} = a_{21}a_{32}-a_{22}a_{31}+2a_{22}a_{33}-2a_{23}a_{32}$.
\item $6a_{11}a_{13}+a_{21}a_{33}+a_{23}a_{31}+2a_{11}a_{23}+2a_{13}a_{21} = a_{23}a_{31}-a_{21}a_{33}$.
\item $6a_{12}a_{13}+a_{22}a_{33}+a_{23}a_{32}+2a_{12}a_{23}+2a_{13}a_{22} = a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{11}a_{31}+a_{11}^2 = 3a_{13}a_{32}-3a_{12}a_{33}+a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}a_{32}+a_{12}^2 = 0$.
\item $a_{13}a_{33}+a_{13}^2 = 0$.
\item $a_{11}a_{32}+a_{12}a_{31}+2a_{11}a_{12} = a_{12}a_{31}-a_{11}a_{32}+2a_{13}a_{32}-2a_{12}a_{33}$.
\item $a_{11}a_{33}+a_{13}a_{31}+2a_{11}a_{13} = a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}a_{33}+a_{13}a_{32}+2a_{12}a_{13} = a_{13}a_{32}-a_{12}a_{33}$.
\end{enumerate}
After initial cancellations, we have
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}a_{21} = 3a_{12}a_{23}+a_{13}a_{21}$.
\item $a_{12}a_{22} = 0$.
\item $a_{13}a_{23} = 0$.
\item $a_{12}a_{21} = a_{12}a_{23}$.
\item $a_{11}a_{23} = 0$.
\item $a_{13}a_{22} = 0$.
\item $3a_{11}^2+a_{21}a_{31}+2a_{11}a_{21} = 3a_{22}a_{33}-3a_{23}a_{32}+a_{23}a_{31}-a_{21}a_{33}$.
\item $3a_{12}^2+a_{22}a_{32} = 0$.
\item $3a_{13}^2+a_{23}a_{33} = 0$.
\item $3a_{11}a_{12}+a_{22}a_{31}+a_{11}a_{22}+a_{12}a_{21} = a_{22}a_{33}-a_{23}a_{32}$.
\item $3a_{11}a_{13}+a_{21}a_{33}+a_{13}a_{21} = 0$.
\item $3a_{12}a_{13}+a_{23}a_{32}+a_{12}a_{23}+a_{13}a_{22} = 0$.
\item $a_{11}a_{31}+a_{11}^2 = 3a_{13}a_{32}-3a_{12}a_{33}+a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{12}(a_{12}+a_{32}) = 0$.
\item $a_{13}(a_{13}+a_{33}) = 0$.
\item $a_{11}a_{32}+a_{11}a_{12} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{13}a_{31}+a_{11}a_{13} = 0$.
\item $a_{12}a_{33}+a_{12}a_{13} = 0$.
\end{enumerate}
Suppose that $a_{12} \neq 0$. By (2), $a_{22} = 0$. Plug this into (8): $3a_{12}^2 = 0$, a contradiction. This forces $a_{12} = 0$. Plug this into the above equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}a_{21} = a_{13}a_{21}$.
\item $0 = 0$.
\item $a_{13}a_{23} = 0$.
\item $0 = 0$.
\item $a_{11}a_{23} = 0$.
\item $a_{13}a_{22} = 0$.
\item $3a_{11}^2+a_{21}a_{31}+2a_{11}a_{21} = 3a_{22}a_{33}-3a_{23}a_{32}+a_{23}a_{31}-a_{21}a_{33}$.
\item $a_{22}a_{32} = 0$.
\item $3a_{13}^2+a_{23}a_{33} = 0$.
\item $a_{22}a_{31}+a_{11}a_{22} = a_{22}a_{33}-a_{23}a_{32}$.
\item $3a_{11}a_{13}+a_{21}a_{33}+a_{13}a_{21} = 0$.
\item $a_{23}a_{32}+a_{13}a_{22} = 0$.
\item $a_{11}a_{31}+a_{11}^2 = 3a_{13}a_{32}+a_{11}a_{33}-a_{13}a_{31}$.
\item $0 = 0$.
\item $a_{13}(a_{13}+a_{33}) = 0$.
\item $a_{11}a_{32} = a_{13}a_{32}$.
\item $a_{13}a_{31}+a_{11}a_{13} = 0$.
\item $0 = 0$.
\end{enumerate}
Notice that (3) and (5) together imply $a_{23} = 0$: otherwise $a_{13} = a_{11} = 0$, and since $a_{12} = 0$ the first row of $g$ would vanish, contradicting invertibility. Then (9) implies $a_{13} = 0$. Plug these into the above equations and further simplify:
\begin{enumerate}[label = (\arabic*)]
\item $a_{21} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{11}^2 = a_{22}a_{33}$.
\item $a_{32} = 0$.
\item $0 = 0$.
\item $a_{31}+a_{11} = a_{33}$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{31}+a_{11} = a_{33}$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\end{enumerate}
In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{11} & 0 & 0\\
0 & \frac{a_{11}^2}{a_{33}} & 0\\
a_{33}-a_{11} & 0 & a_{33}
\end{pmatrix}: a_{11}, a_{33} \neq 0
\}.
\]
\medskip
The eigenvalues are $\lambda_1 = a_{11}, \lambda_2 = \frac{a_{11}^2}{a_{33}}, \lambda_3 = a_{33}$. Notice that $\lambda_1^2 = \lambda_2\lambda_3$. Suppose that $g$ is a reflection: $\{\lambda_1, \lambda_2, \lambda_3\} = \{1,1,\xi\}$ for some primitive $n$th root of unity $\xi$. If $\lambda_1 = 1$, then $\lambda_2\lambda_3 = \xi = 1$, a contradiction. If $\lambda_1 = \xi$, then $\xi^2 = 1$, that is, $\xi = -1$. In addition, $g = \begin{pmatrix}
-1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 & 1
\end{pmatrix}$ indeed has order $2$. In conclusion,
\[
\text{PR}(P) =
\{\begin{pmatrix}
-1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 & 1
\end{pmatrix}\}.
\]
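As a quick illustrative check (not needed for the argument), one verifies with \texttt{sympy} that this matrix is an involution with eigenvalues $\{1,1,-1\}$:

```python
import sympy as sp

# The projective reflection found above: the family member with
# a11 = -1, a33 = 1, so a22 = a11^2/a33 = 1 and the (3,1) entry is a33 - a11 = 2.
r = sp.Matrix([[-1, 0, 0],
               [0, 1, 0],
               [2, 0, 1]])
assert r * r == sp.eye(3)                # finite order 2
assert r.eigenvals() == {-1: 1, 1: 2}    # eigenvalues {1, 1, -1}
```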
\
\textit{Unimodular 9}. $\Omega = x_1x_2^2+x_1^2x_3$.
$\{x_1,x_2\} = x_1^2$, $\{x_2,x_3\} = x_2^2+2x_1x_3$, $\{x_3,x_1\} = 2x_1x_2$.
\smallskip
In this case, we have the following system of equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}^2 = a_{11}a_{22}-a_{12}a_{21}$.
\item $a_{12}^2 = a_{12}a_{23}-a_{13}a_{22}$.
\item $a_{13}^2 = 0$.
\item $a_{11}a_{12} = a_{13}a_{21}-a_{11}a_{23}$.
\item $a_{11}a_{13} = a_{12}a_{23}-a_{13}a_{22}$.
\item $a_{12}a_{13} = 0$.
\item $a_{21}^2+2a_{11}a_{31} = a_{21}a_{32}-a_{22}a_{31}$.
\item $a_{22}^2+2a_{12}a_{32} = a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{23}^2+2a_{13}a_{33} = 0$.
\item $a_{21}a_{22}+a_{11}a_{32}+a_{12}a_{31} = a_{23}a_{31}-a_{21}a_{33}$.
\item $a_{21}a_{23}+a_{11}a_{33}+a_{13}a_{31} = a_{22}a_{33}-a_{23}a_{32}$.
\item $a_{22}a_{23}+a_{12}a_{33}+a_{13}a_{32} = 0$.
\item $2a_{11}a_{21} = a_{12}a_{31}-a_{11}a_{32}$.
\item $2a_{12}a_{22} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{13}a_{23} = 0$.
\item $a_{11}a_{22}+a_{12}a_{21} = a_{11}a_{33}-a_{13}a_{31}$.
\item $a_{11}a_{23}+a_{13}a_{21} = a_{13}a_{32}-a_{12}a_{33}$.
\item $a_{12}a_{23}+a_{13}a_{22} = 0$.
\end{enumerate}
By (3), $a_{13} = 0$. By (9), $a_{23} = 0$. By (2), $a_{12} = 0$. Plug these into the above equations:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11}^2 = a_{11}a_{22}$.
\item $a_{12} = 0$.
\item $a_{13} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{21}^2+2a_{11}a_{31} = a_{21}a_{32}-a_{22}a_{31}$.
\item $a_{22}^2 = a_{22}a_{33}$.
\item $a_{23} = 0$.
\item $a_{21}a_{22}+a_{11}a_{32} = -a_{21}a_{33}$.
\item $a_{11}a_{33} = a_{22}a_{33}$.
\item $0 = 0$.
\item $2a_{11}a_{21} = -a_{11}a_{32}$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{11}a_{22} = a_{11}a_{33}$.
\item $0 = 0$.
\item $0 = 0$.
\end{enumerate}
Since $g$ is invertible and at this point lower triangular, $a_{11}, a_{22}, a_{33} \neq 0$, and the equations simplify further:
\begin{enumerate}[label = (\arabic*)]
\item $a_{11} = a_{22}$.
\item $a_{12} = 0$.
\item $a_{13} = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{21}^2+3a_{11}a_{31} = a_{21}a_{32}$.
\item $a_{22} = a_{33}$.
\item $a_{23} = 0$.
\item $2a_{21}+a_{32} = 0$.
\item $a_{11} = a_{22}$.
\item $0 = 0$.
\item $2a_{21} = -a_{32}$.
\item $0 = 0$.
\item $0 = 0$.
\item $a_{22} = a_{33}$.
\item $0 = 0$.
\item $0 = 0$.
\end{enumerate}
Plug (10) into (7): $a_{31} = -\frac{a_{21}^2}{a_{11}}$. In conclusion,
\[
\text{PAut}_{\text{gr}}(P) = \{
\begin{pmatrix}
a_{11} & 0 & 0\\
a_{21} & a_{11} & 0\\
-\frac{a_{21}^2}{a_{11}} & -2a_{21} & a_{11}
\end{pmatrix}: a_{11} \neq 0
\}.
\]
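As a computational aside (illustrative only, not needed for the classification), one can check symbolically that this family is closed under matrix multiplication, with parameters composing as $(a_{11},a_{21})\cdot(b_{11},b_{21}) = (a_{11}b_{11},\, a_{21}b_{11}+a_{11}b_{21})$, so that $\text{PAut}_{\text{gr}}(P)$ is indeed a group:

```python
import sympy as sp

a11, a21, b11, b21 = sp.symbols('a11 a21 b11 b21', nonzero=True)

def g(t11, t21):
    """The family member of PAut_gr(P) with parameters (t11, t21)."""
    return sp.Matrix([[t11, 0, 0],
                      [t21, t11, 0],
                      [-t21**2/t11, -2*t21, t11]])

# closure: g(a)*g(b) is again of the same shape, with parameters
# (a11*b11, a21*b11 + a11*b21)
prod = g(a11, a21) * g(b11, b21)
assert sp.simplify(prod - g(a11*b11, a21*b11 + a11*b21)) == sp.zeros(3, 3)
```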
It is clear that $g$ cannot be a reflection, as its only eigenvalue is $a_{11}$, with multiplicity $3$:
\[
\text{PR}(P) = \emptyset.
\]
\bibliographystyle{alpha}
% arXiv:2206.14088
\section{Introduction}
This article considers range theorems for the Poisson transform on Riemannian symmetric spaces $Z$ in the context of horospherical complex geometry.
We assume that $Z$ is of non-compact type and let $G$ be the semisimple Lie group of isometries of $Z$. Then $Z$ is homogeneous for $G$ and identified
as $Z=G/K$, where $K\subset G$ is a maximal compact subgroup and stabilizer of a fixed base point $z_0\in Z$. Classical examples are the real hyperbolic spaces which will
receive special explicit attention at the end of the article.
\par The Poisson transform maps sections of line bundles over the compact boundary
$\partial Z$ to eigenfunctions of the commutative algebra of $G$-invariant
differential operators $\mathbb{D}(Z)$ on $Z$. Recall that $\partial Z = G/P$ is a real flag manifold
for $P=MAN$ a minimal parabolic subgroup originating from an Iwasawa decomposition
$G=KAN$ of $G$. The line bundles we consider are parametrized by the complex characters $\lambda$ of the abelian group
$A$, and we write $\mathcal{P}_\lambda$ for the corresponding Poisson transform.
\par The present paper initiates the study of the Poisson transform in terms of the $N$-geometry of both $Z$ and
$\partial Z$. Identifying the contractible group $N$ with its open dense orbit in $\partial Z$, functions on $N$ correspond to sections of the line bundle via extension by zero. On the other hand
$N\backslash Z\simeq A$.
Hence, given a function $f\in L^2(N)$ with Poisson transform $\phi= \mathcal{P}_\lambda(f)$, it is natural to consider the family $\phi_a$, $a\in A \simeq N\backslash Z$, of restrictions of $\phi$ to the $N$-orbits $Na\cdot z_0\subset Z$.
A basic observation then is that the functions $\phi_a$ extend holomorphically to $N$-invariant tubular neighborhoods $\mathcal{T}_a\subset N_\mathbb{C}$ of $N$. Our main result, Theorem \ref{maintheorem}, identifies for positive parameters $\lambda$ the image $\mathcal{P}_\lambda(L^2(N))$ with a class of families $\phi_a$ in
weighted Bergman spaces $\mathcal{B}(\mathcal{T}_a, {\bf w}_{\lambda, a})$ on these tubes $\mathcal{T}_a$.
\par Range theorems for the Poisson transform in terms of the $K$-geometry of both
$\partial Z$ and $Z$ were investigated in \cite{I} for spaces of rank one. Note that $\partial Z\simeq K/M$ and that every line bundle over $K/M$ is trivial, so that sections can be identified with functions on $K/M$. On the other hand
$K\backslash Z\simeq A/W$ with $W$ the little Weyl group, a finite reflection group. Given
a function $f \in L^2(K/M)$ the image $\phi=\mathcal{P}_\lambda(f)$ therefore induces a family of partial functions $\phi_a: K\to \mathbb{C}$ with $\phi_a(k):=\phi(ka\cdot z_0)$ on the $K$-orbits in $Z$ parametrized by
$a\in A$.
As $\phi$ is continuous, we have $\phi_a\in L^2(K)$, and \cite{I} characterizes the image $\mathcal{P}_\lambda(L^2(K/M))$
in terms of the growth of $\|\phi_a\|_{L^2(K)}$ and suitable maximal functions. Interesting follow-up work includes
\cite{BOS} and \cite{Ka}.
\bigskip To explain our results in more detail, we first describe our perspective on eigenfunctions of the algebra $\mathbb{D}(Z)$. The Iwasawa decomposition $G=KAN$ allows us to identify $Z=G/K$ with the solvable group
$S=NA$. Inside $\mathbb{D}(Z)$ one finds a distinguished element, the Laplace--Beltrami
operator $\Delta_Z$. Upon identifying $Z$ with $S$ we use the symbol
$\Delta_S$ instead of $\Delta_Z$. Now it is a remarkable fact that all $\Delta_S$-eigenfunctions extend to a universal $S$-invariant domain $\Xi_S\subset S_\mathbb{C}$. In fact, $\Xi_S$ is closely related to the crown domain
$\Xi\subset Z_\mathbb{C}=G_\mathbb{C}/K_\mathbb{C}$ of $Z$, and we refer to Section~\ref{section crown} for details. In particular, there exists a maximal domain $0\in \Lambda\subset \mathfrak{n} = \rm{Lie}(N)$ such that
\begin{equation} \label{XiS}\Xi_S \supset S \exp(i\Lambda)\,. \end{equation}
The domain $\Lambda$ has its origin in the unipotent model of the crown domain \cite[Sect.~8]{KO} and, except in the rank one cases, its geometry is not known. Proposition \ref{prop bounded} implies that $\Lambda$ is bounded for a class of classical groups, including $G=\operatorname{GL}(n,\mathbb{R})$. It is an interesting open problem whether $\Lambda$ is bounded or convex in general.
\par Now let $\phi: S\to \mathbb{C}$ be an eigenfunction of $\Delta_S$. For each $a\in A$ we define the
partial function
$$\phi_a: N \to \mathbb{C}, \quad n\mapsto \phi(na)\, .$$
Because eigenfunctions extend to $\Xi_S$, we see from \eqref{XiS}
that $\phi_a$ extends to a holomorphic function on the tube domain
$$\mathcal{T}_a:= N\exp(i\Lambda_a)\subset N_\mathbb{C}$$
with $\Lambda_a= \operatorname{Ad}(a)\Lambda$. The general perspective of this paper is to view
an eigenfunction $\phi$ as a family of holomorphic functions $(\phi_a)_{a\in A}$
with $\phi_a$ belonging to $\mathcal{O}(\mathcal{T}_a)$, the space of all holomorphic functions on $\mathcal{T}_a$.
\par We now explain the Poisson transform and how eigenfunctions of the algebra $\mathbb{D}(Z)$ can be characterized
by their boundary values on $\partial Z$. Fix a minimal parabolic subgroup
$P=MAN$ with $M=Z_K(A)$. If $\theta: G\to G$ denotes the Cartan involution
with fixed point group $K$, we consider $\overline N=\theta(N)$ and the parabolic subgroup $\overline P= M A \overline N$ opposite to $P$. Because $N\overline P\subset G$
is open dense by the Bruhat decomposition, it proves convenient to identify $\partial Z$ with $G/\overline P$. In the sequel we view $N\subset \partial Z=G/\overline P$ as an open dense subset.
\par For each $\lambda\in \mathfrak{a}_\mathbb{C}^*$ one defines the Poisson transform (in the $N$-picture) as
$$
\mathcal{P}_\lambda: C_c^\infty(N) \to C^\infty(S)\ ,
$$
\begin{equation}
\label{Poisson0} \mathcal{P}_\lambda f(s)= \int_N f(x) {\bf a} (s^{-1} x)^{\lambda + \rho} \ dx\
\qquad (s\in S)\ ,
\end{equation}
where ${\bf a}: KA\overline N \to A$ is the middle projection with respect to the opposite Iwasawa decomposition and $\rho\in \mathfrak{a}^*$ is the Weyl half sum with respect to $P$.
In this article we restrict to parameters $\lambda$ with $\operatorname{Re} \lambda (\alpha^\vee)>0$ for all
positive co-roots $\alpha^\vee\in \mathfrak{a}$, denoted in the following as $\operatorname{Re}\lambda>0$. This condition ensures that the integral defining the Harish-Chandra ${\bf c}$-function
$${\bf c}(\lambda):=\int_N {\bf a}(n)^{\lambda+\rho} \ dn$$
converges absolutely.
\par Recall the Harish-Chandra isomorphism between
$\mathbb{D}(Z)$ and the $W$-invariant polynomials on $\mathfrak{a}_\mathbb{C}^*$, where $W$ is
the Weyl group of the pair $(\mathfrak{g}, \mathfrak{a})$. In particular, $\operatorname{spec} \mathbb{D}(Z)=\mathfrak{a}_\mathbb{C}^*/ W$, and for each $[\lambda]=W\cdot \lambda$ we denote by $\mathcal{E}_{[\lambda]}(S)$ the corresponding eigenspace on $S\simeq Z$. The image of the Poisson transform consists of eigenfunctions: $\mathcal{P}_\lambda(C_c^\infty(N))\subset
\mathcal{E}_{[\lambda]}(S)$. Because ${\bf a}(\cdot)^{\lambda+\rho}$ belongs to $L^1(N)$ for $\operatorname{Re}\lambda>0$, $\mathcal{P}_\lambda$ extends from $C_c^\infty(N)$ to $L^2(N)$.
The goal of this article is to characterize $\mathcal{P}_\lambda(L^2(N))$. As a first step towards this goal, for $f\in L^2(N)$ and $\phi=\mathcal{P}_\lambda(f)$ note the straightforward estimate
$$\|\phi_a\|_{L^2(N)} \leq a^{\rho -\operatorname{Re} \lambda}{\bf c}(\operatorname{Re} \lambda) \|f\|_{L^2(N)}$$
for all $a\in A$.
The basic observation in this paper is that the kernel $n\mapsto {\bf a}(n)^{\lambda+\rho}$ underlying
the Poisson transform \eqref{Poisson0} extends holomorphically to $\mathcal{T}^{-1}:=\exp(i\Lambda)N$ and remains
$N$-integrable along every fiber, i.e.~for any fixed $y\in \exp(i\Lambda)$ the kernel $n\mapsto {\bf a}(yn)^{\lambda+\rho}$
is integrable over $N$.
This allows us to formulate a sufficient condition for positive left $N$-invariant continuous weight functions ${\bf w}_\lambda$ on the tubes $\mathcal{T}=N\exp(i\Lambda)$, namely (see also \eqref{request w})
\begin{equation} \label{request intro w}\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \|\delta_{\lambda, y}\|_1^2 \ dy <\infty\, .\end{equation}
In the sequel we assume that ${\bf w}_\lambda$ satisfies condition \eqref{request intro w} and define rescaled weight functions
$${\bf w}_{\lambda,a}: \mathcal{T}_a\to \mathbb{R}_{>0}, \ \ ny\mapsto {\bf w}_\lambda(\operatorname{Ad}(a^{-1})y)\qquad (y\in\exp(i\Lambda_a))$$
on the scaled tubes $\mathcal{T}_a$.
The upshot then is that $\phi_a\in \mathcal{O}(\mathcal{T}_a)$ lies in the weighted Bergman
space
$$\mathcal{B}(\mathcal{T}_a, {\bf w}_{\lambda,a}):=\{ \psi\in \mathcal{O}(\mathcal{T}_a)\mid \|\psi\|^2_{\mathcal{B}_{a, \lambda}}:=
\int_{\mathcal{T}_a} |\psi(z)|^2 {\bf w}_{\lambda,a}(z) dz <\infty\}$$
where $dz$ is the Haar measure on $N_\mathbb{C}$ restricted to $\mathcal{T}_a$.
This motivates the definition of the following Banach subspace of $\mathcal{E}_{[\lambda]}(Z)\subset \mathcal{O}(\Xi_S)$:
$$\mathcal{B}(\Xi_S, \lambda):=\{ \phi \in \mathcal{E}_{[\lambda]}(Z)\mid
\|\phi\|:=\sup_{a\in A} a^{\operatorname{Re}\lambda -2\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}}<\infty\}\, .$$
It will be a consequence of Theorem \ref{maintheorem} below that $\mathcal{B}(\Xi_S, \lambda)$ as a vector space does not depend on the particular choice of the positive left $N$-invariant weight function ${\bf w}_\lambda$ satisfying \eqref{request intro w}.
The main result of this article (see Theorem \ref{main theorem} in the main text) now reads:
\begin{theorem}\label{maintheorem}Let $Z=G/K$ be a Riemannian symmetric space and $\lambda\in \mathfrak{a}_\mathbb{C}^*$ be a
parameter such that $\operatorname{Re} \lambda>0$. Then
$$\mathcal{P}_\lambda: L^2(N) \to \mathcal{B}(\Xi_S, \lambda)$$
is an isomorphism of Banach spaces, i.e. there exist $c,C>0$ depending on ${\bf w}_\lambda$ such that
$$c \|\mathcal{P}_\lambda(f)\|\leq \|f\|_{L^2(N)} \leq C \|\mathcal{P}_\lambda(f)\|\qquad (f\in L^2(N))\, .$$
\end{theorem}
Let us mention that the surjectivity of $\mathcal{P}_\lambda$ relies on the established Helgason conjecture (see \cite{K6,GKKS}) and the Bergman inequality.
We now recall that $\mathcal{P}_\lambda$ is inverted by the boundary value map, that is
$${1\over {\bf c}(\lambda)} \lim_{a\to \infty\atop a\in A^-} a^{\lambda-\rho} \mathcal{P}_\lambda f(na) = f(n)\qquad (n\in N)$$
where the limit is taken along a fixed ray in the interior of the negative Weyl chamber.
Define the positive constant
\begin{equation} \label{def w const} w(\lambda):=\left[\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \ dy\right]^{1\over 2}.
\end{equation}
We observe that this constant is indeed finite, see Subsection \ref{sub:norm}.
In Theorem \ref{norm limit} of the main text we obtain a corresponding norm limit formula:
\begin{theorem}\label{norm limit intro}
For any $f\in L^2(N)$ and $\phi=\mathcal{P}_\lambda(f)$ we have
\begin{equation} {1\over w(\lambda) |{\bf c}(\lambda)|} a^{\operatorname{Re} \lambda - 2\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}} \to \|f\|_{L^2(N)}\end{equation}
as $a\to \infty$ on a ray in $A^-$.
\end{theorem}
Let us emphasize that the weight functions ${\bf w}_\lambda$ are not unique
and it is natural to ask about the existence of optimal choices, i.e.~choices for which $\mathcal{P}_\lambda$ establishes an isometry
between $L^2(N)$ and $\mathcal{B}(\Xi_S, \lambda)$, in other words whether a norm-sup identity holds:
\begin{equation} \label{norm sup} \sup_{a\in A} {1\over w(\lambda) |{\bf c}(\lambda)|} a^{\operatorname{Re} \lambda - 2\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}} =\|f\|_{L^2(N)} \qquad (f\in L^2(N))\, .\end{equation}
The answer is quite interesting in the classical example of the real hyperbolic space
$$Z=\operatorname{SO}_e(n+1,1)/\operatorname{SO}(n+1)\simeq \mathbb{R}^n \times \mathbb{R}_{>0} = N\times A$$
where the study was initiated in \cite{RT} and is now completed in Section \ref{sect hyp}. Here $N=\mathfrak{n}=\mathbb{R}^n$ is abelian and we recall the classical formulas for the
Poisson kernel and ${\bf c}$-function
$${\bf a}(x)^{\lambda+\rho} = ( 1 +\|x\|_2^2)^{-(\lambda +n/2)}\qquad (x\in N=\mathbb{R}^n)\, ,$$
$${\bf c}(\lambda)= \pi^{n/2} \frac{\Gamma(2\lambda)}{\Gamma(\lambda+n/2)}\, . $$
It is now easily seen that $\Lambda=\{ y \in \mathbb{R}^n \mid \|y\|_2<1\}$ is the open unit ball.
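Indeed, the holomorphically continued kernel has base $1+\sum_j z_j^2$, and for $z=x+iy$ one has $\operatorname{Re}\bigl(1+\sum_j z_j^2\bigr) = (1-\|y\|_2^2) + \|x\|_2^2 > 0$ whenever $\|y\|_2<1$, while $z=ie_1$ with $\|y\|_2=1$ is a zero of the base. A small numerical illustration (not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def base(z):
    """The quadratic 1 + z.z whose power gives the continued Poisson kernel."""
    return 1 + np.sum(z * z)

# For z = x + iy with ||y|| < 1 the base never vanishes, since
# Re(1 + z.z) = (1 - ||y||^2) + ||x||^2 > 0.
for _ in range(10_000):
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    y *= rng.uniform(0, 1) / np.linalg.norm(y)   # rescale so that ||y|| < 1
    assert abs(base(x + 1j * y)) > 1e-8
# On the boundary ||y|| = 1 the base can vanish: z = i e_1.
assert base(1j * np.eye(n)[0]) == 0
```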
A natural family of weights to consider consists of powers of the Poisson kernel, parametrized by $\alpha>0$:
\begin{equation} \label{special weight 1} {\bf w}_{\lambda}^\alpha(z) =
(2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-\|y\|_2^2\right)_+^{\alpha -1} \qquad (z=x+iy\in \mathcal{T} = \mathbb{R}^n +i\Lambda)\, .\end{equation}
These weights satisfy condition \eqref{request intro w} exactly for
$ \alpha > \max\{2s-1,0\} $ where $s=\operatorname{Re} \lambda$, see Lemma \ref{deltabound}.
Moreover, in Theorem \ref{thm hyp} we establish the following:
\begin{enumerate}
\item \label{one} Condition \eqref{request intro w} is only sufficient and Theorems
\ref{maintheorem}, \ref{norm limit intro} hold even for $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$.
\item For $\alpha$ as in \eqref{one} and $\lambda=s>0$ real the norm-sup identity \eqref{norm sup} holds.
\end{enumerate}
Let us stress that \eqref{norm sup} is a new feature and is not recorded (and perhaps not even true) for the range investigations with respect to the $K$-geometry in the rank one case: there one verifies lim-sup identities, which are weaker than the norm-limit
formula in Theorem \ref{norm limit intro}; see
\cite{I}.
\section{Notation}
\label{sec:notation}
Most of the notation used in this paper is standard for semisimple Lie groups and symmetric spaces and can be found for instance in \cite{H3}.
Let $G$ be the real points of a connected algebraic reductive group defined over $\mathbb{R}$ and let $\mathfrak{g}$ be its Lie algebra. Subgroups of $G$ are denoted by capitals. The corresponding subalgebras are denoted by the corresponding fraktur letter, i.e.~$\mathfrak{g}$ is the Lie algebra of $G$ etc.
\par We denote by $\mathfrak{g}_\mathbb{C}=\mathfrak{g}\otimes_\mathbb{R} \mathbb{C}$ the complexification of $\mathfrak{g}$ and by $G_{\mathbb{C}}$ the group of complex points. We fix a Cartan involution $\theta$ and write $K$ for the maximal compact subgroup that is fixed by $\theta$. We also write $\theta$ for the derived automorphism of $\mathfrak{g}$. We write $K_{\mathbb{C}}$ for the complexification of $K$, i.e.~$K_{\mathbb{C}}$ is the subgroup of $G_{\mathbb{C}}$ consisting of the fixed points for the analytic extension of $\theta$.
The Cartan involution induces the infinitesimal Cartan decomposition
$\mathfrak{g} =\mathfrak{k} \oplus\mathfrak{s}$. Let $\mathfrak{a}\subset\mathfrak{s}$ be a maximal abelian
subspace.
We denote the set of restricted roots of $\mathfrak{a}$ in $\mathfrak{g}$ by $\Sigma\subset \mathfrak{a}^*\backslash \{0\}$
and write $W$ for the Weyl group of $\Sigma$. We record the familiar root space decomposition
$$\mathfrak{g}=\mathfrak{a}\oplus\mathfrak{m}\oplus \bigoplus_{\alpha\in\Sigma} \mathfrak{g}^\alpha\ ,$$
with $\mathfrak{m}=\mathfrak{z}_\mathfrak{k}(\mathfrak{a})$.
Let $A$ be the connected subgroup of $G$ with Lie algebra $\mathfrak{a}$ and let $M=Z_{K}(\mathfrak{a})$.
We fix a choice of positive roots $\Sigma^+$ of $\mathfrak{a}$ in $\mathfrak{g}$
and write $\mathfrak{n}=\bigoplus_{\alpha\in\Sigma^+} \mathfrak{g}^\alpha$ with corresponding unipotent subgroup
$N=\exp\mathfrak{n}\subset G$. As is customary we set $\overline{\mathfrak{n}} =\theta(\mathfrak{n})$ and accordingly
$\overline N = \theta(N)$.
For the Iwasawa decomposition $G=KA\overline N$ of $G$ we define the projections $\mathbf{k}:G\to K$ and $\mathbf{a}:G\to A$ by
$$
g\in \mathbf{k}(g)\mathbf{a}(g)\overline N\qquad(g\in G).
$$
Let $\kappa$ be the Killing form on $\mathfrak{g}$ and let $\widetilde\kappa$ be a non-degenerate $\operatorname{Ad}(G)$-invariant symmetric bilinear form on $\mathfrak{g}$ such that its restriction to $[\mathfrak{g},\mathfrak{g}]$ coincides with the restriction of $\kappa$ and $-\widetilde\kappa(\,\cdot\,,\theta\,\cdot\,)$ is positive definite. We write $\|\cdot\|$ for the corresponding norm on $\mathfrak{g}$.
\section{The complex crown of a Riemannian symmetric space}\label{section crown}
The Riemannian symmetric space
$Z=G/K$ can be realized as a totally real subvariety of the Stein symmetric space
$Z_\mathbb{C}= G_\mathbb{C}/K_\mathbb{C}$:
$$ Z=G/K \hookrightarrow Z_\mathbb{C}, \ \ gK\mapsto gK_\mathbb{C}\, .$$
In the following we view $Z\subset Z_\mathbb{C}$ and write $z_0=K\in Z$ for the standard base point.
We define the subgroups $A_\mathbb{C}=\exp(\mathfrak{a}_\mathbb{C})$ and
$N_\mathbb{C}=\exp(\mathfrak{n}_\mathbb{C})$ of $G_\mathbb{C}$. We denote by $F:=[A_\mathbb{C}]_{2\text{-}\mathrm{tor}}$ the finite group
of $2$-torsion elements and note that $F=A_\mathbb{C} \cap K$. Our concern is also with the solvable group $S=AN$ and its complexification
$S_\mathbb{C}=A_\mathbb{C} N_\mathbb{C}$. Note that $S\simeq Z$ as transitive $S$-manifolds, but
the natural morphism $S_\mathbb{C}\to Z_\mathbb{C}$ is neither onto nor injective. Its image
$S_\mathbb{C} \cdot z_0$ is Zariski open in the affine variety $Z_\mathbb{C}$ and we have
$S_\mathbb{C}/F \simeq S_\mathbb{C}\cdot z_0$.
The maximal $G\times K_\mathbb{C}$-invariant domain in
$G_\mathbb{C}$ containing $e$ and contained in $ N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$ is given by
\begin{equation} \label{crown1}
\widetilde \Xi
= G\exp(i\Omega)K_\mathbb{C}\ ,
\end{equation}
where $\Omega=\{ Y\in \mathfrak{a}\mid (\forall \alpha\in\Sigma)
\alpha(Y)<\pi/2\}$.
Note in particular that
\begin{equation} \label{c-intersect} \widetilde \Xi=\left[\bigcap_{g\in G} g N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}\right]_0\end{equation}
with $[\ldots ]_0$ denoting the connected component of $[\ldots]$ containing $e$.
Taking right cosets by $K_\mathbb{C}$, we obtain
the $G$-domain
\begin{equation}\label{crown2} \Xi:=\widetilde \Xi/K_\mathbb{C} \subset Z_\mathbb{C}=G_\mathbb{C}/K_\mathbb{C}\ ,\end{equation}
commonly referred to as the {\it crown domain}. See
\cite{Gi} for the origin of the notion, \cite[Cor.~3.3]{KS} for the inclusion
$\widetilde \Xi\subset N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$ and \cite[Th.~4.3]{KO} for the maximality.
We recall that $\Xi$ is a contractible space. To be more precise, let $\widehat\Omega=\operatorname{Ad}(K)\Omega$ and note that $\widehat\Omega$ is an open convex subset of $\mathfrak{s}$. As a consequence of the Kostant convexity theorem it satisfies $\widehat\Omega\cap\mathfrak{a}=\Omega$ and $p_{\mathfrak{a}}(\widehat\Omega)=\Omega$, where $p_{\mathfrak{a}}$ is the orthogonal projection $\mathfrak{s}\to\mathfrak{a}$. The fiber map
$$
G\times_{K}\widehat\Omega\to\Xi; \quad [g,X]\mapsto g\exp(iX)\cdot K_{\mathbb{C}}\ ,
$$
is a diffeomorphism by \cite[Prop.~4, 5 and 7]{AG}. Since $G/K\simeq\mathfrak{s}$ and $\widehat\Omega$ are both contractible, also $\Xi$ is contractible. In particular, $\Xi$ is simply connected.
\par As $\Xi\subset S_\mathbb{C}\cdot z_0$ we also obtain a realization
of $\Xi$ in $S_\mathbb{C}/F$ which, by the contractibility of $\Xi$, lifts to an $S$-equivariant
embedding of $\Xi\hookrightarrow S_\mathbb{C}$. We denote the image by $\Xi_S$.
Let us remark that $\Xi_S$ is not known explicitly in appropriate coordinates
except when $Z$ has real rank one, in which case it was determined in \cite{CK}.
\par We recall the middle projection ${\bf a}: G \to A$ of the Iwasawa decomposition $G=KA\overline{N}$ and note that
${\bf a}$ extends holomorphically to
\begin{equation}\label{tilde Xi}
\widetilde \Xi^{-1}
:=\{g^{-1}:g\in\widetilde\Xi\}\ .
\end{equation}
Here we use that $\widetilde \Xi\subset \overline N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$ as a consequence of
$\Xi\subset N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$ and the $G$-invariance of $\Xi$. Moreover, the simple connectedness of $\Xi$ is needed to obtain ${\bf a}: \widetilde \Xi^{-1}\to A_\mathbb{C}$ uniquely: a priori ${\bf a}$ is only defined as a map to $A_\mathbb{C}/F$.
We denote the extension of ${\bf a}$ to $\widetilde \Xi^{-1}$ by the same symbol.
Likewise one remarks that $\mathbf{k}: G \to K$ extends holomorphically to $\widetilde \Xi^{-1}$ as well.
\subsection{Unipotent model for the crown}
Let us define a domain $\Lambda\subset \mathfrak{n}$ by
$$\Lambda:=\{ Y \in \mathfrak{n}\mid \exp(iY)\cdot z_0\subset \Xi\}_0$$
where the index $\{\cdot\}_0$ refers to the connected component of $\{\cdot\}$
containing $0$. Then we have
$$\Xi=G\exp(i\Lambda)\cdot z_0$$
by \cite[Th. 8.3]{KO}. In general the precise shape of $\Lambda$ is not known, except in a few
special cases, in particular when the real rank of $G$ is one
(see \cite[Sect. 8.1 and 8.2]{KO}).
\begin{prop} \label{prop bounded} For $G=\operatorname{GL}(n,\mathbb{R})$ the domain $\Lambda\subset \mathfrak{n}$ is bounded.
\end{prop}
\begin{rmk}\label{rmk bounded} A general real reductive group $G$ can be embedded into $\operatorname{GL}(n,\mathbb{R})$ with compatible Iwasawa
decompositions. In a variety of cases the crown domain $\Xi=\Xi(G)$ of $G$ then embeds
into that of $\operatorname{GL}(n,\mathbb{R})$. For example this is the case for $G=\operatorname{SL}(n,\mathbb{R}), \operatorname{Sp}(n,\mathbb{R}), \operatorname{Sp}(p,q), \operatorname{SU}(p,q)$; we refer to \cite[Prop. 2.6]{KrSt} for a complete list. In all these cases $\Lambda$ is bounded as a consequence of Proposition
\ref{prop bounded}. \end{rmk}
\begin{proof}[Proof of Proposition \ref{prop bounded}] Define
$$\Lambda'=\{ Y \in \mathfrak{n}\mid \exp(iY)N \subset K_\mathbb{C} A_\mathbb{C} \overline N_\mathbb{C}\}_0$$
and note that $\Lambda'=-\Lambda$. Now \eqref{c-intersect} with $N$ replaced by $\overline N$ implies
$\Lambda\subset \Lambda'$. We will prove the stronger statement with $\Lambda$ replaced by $\Lambda'$: we determine the largest tube domain
$T_{N,\Lambda'}:=\exp(i\Lambda') N$ contained in $K_\mathbb{C} A_\mathbb{C} \overline N_\mathbb{C}$ and show that its base is bounded.
As usual we let $K_\mathbb{C}=\operatorname{SO}(n,\mathbb{C})$, $A_\mathbb{C}=\operatorname{diag}(n, \mathbb{C}^*)$ the group of invertible diagonal matrices, and $\overline N_\mathbb{C}$ the unipotent lower
triangular matrices. We recall the construction of the basic $K_\mathbb{C}\times \overline N_\mathbb{C}$-invariant functions
on $G_\mathbb{C}$. With $e_1, \ldots, e_n$ the standard basis of $\mathbb{C}^n$ we let $v_i:= e_{n-i+1}$, $1\leq i\leq n$.
Now for $1\leq k\leq n-1$ we define a holomorphic function on $G_\mathbb{C} = \operatorname{GL}(n,\mathbb{C})$ by
$$f_k(g) = \det \left(\langle g(v_i), g(v_j)\rangle_{1\leq i,j\leq n-k}\right) \qquad (g\in G_\mathbb{C})$$
where $\langle z,w\rangle = z^t w$ is the standard pairing of $\mathbb{C}^n$. As the standard pairing is $K_\mathbb{C}$-invariant we obtain
that $f_k$ is left $K_\mathbb{C}$-invariant. Furthermore from
$$f_k(g) =\langle g(v_1)\wedge\ldots \wedge g(v_{n-k}), g(v_1)\wedge\ldots \wedge g(v_{n-k})\rangle_{\bigwedge^{n-k}\mathbb{C}^n}$$
we see that $f_k$ is right-$\overline N_\mathbb{C}$-invariant. In particular we have
$$f_k(\kappa a\overline n)= (a_{k+1} \cdot\ldots \cdot a_n)^2 \qquad (\kappa \in K_\mathbb{C} , \overline n\in \overline N_\mathbb{C})$$
for $a=\operatorname{diag}(a_1, \ldots, a_n)\in A_\mathbb{C}$.
Hence $f_k$ is not vanishing on $K_\mathbb{C} A_\mathbb{C} \overline N_\mathbb{C}$ and in particular not on the tube domain
$T_{N,\Lambda'}$ which is contained in $K_\mathbb{C} A_\mathbb{C} \overline N_\mathbb{C}$.
\par The functions $f_k$ are right semi-invariant under the maximal parabolic subgroup
$\overline P_k = L_k \overline U_k$ with $L_k=\operatorname{GL}(k,\mathbb{R})\times \operatorname{GL}(n-k,\mathbb{R})$ embedded block-diagonally
and $\overline U_k ={\bf1}_n+ \operatorname{Mat}_{(n-k)\times k}(\mathbb{R})$ with $\operatorname{Mat}_{(n-k)\times k }(\mathbb{R})$ sitting in the lower left corner.
With $U_k= {\bf1}_n+ \operatorname{Mat}_{k\times (n-k)}(\mathbb{R})$ we obtain an abelian subgroup of $N$ with Lie algebra $\mathfrak{u}_k = \operatorname{Mat}_{k \times (n-k)}(\mathbb{R})$
and record for $Z=X+iY\in \mathfrak{u}_{k,\mathbb{C}}$ that
$$f_k(\exp(Z))= \det ({\bf1}_{n-k} + Z^t Z)\, .$$
From this we see that the largest $U_k$-invariant tube domain in $U_{k,\mathbb{C}}=\operatorname{Mat}_{k\times (n-k)}(\mathbb{C})$ on which $f_k$ extends to a zero-free holomorphic function is given by
$$T_k = \operatorname{Mat}_{k\times(n-k)}(\mathbb{R}) + i \Upsilon_k$$
where
$$\Upsilon_k=\{ Y\in \operatorname{Mat}_{k\times(n-k)}(\mathbb{R})\mid {\bf1}_{n-k}- Y^tY\ \hbox{is positive definite} \}$$
is bounded and convex.
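The Gram-determinant definition of $f_k$ and the description of $\Upsilon_k$ are easy to check numerically. The following sketch (an illustration only; the dimensions and random data are arbitrary choices, not from the text) verifies $f_k(\exp(Z))=\det({\bf1}_{n-k}+Z^tZ)$ and the positivity of $\det({\bf1}_{n-k}-Y^tY)$ when all singular values of $Y$ are below $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2

# random Z in Mat_{k x (n-k)}(C), placed in the upper-right block of gl(n)
Z = rng.standard_normal((k, n - k)) + 1j * rng.standard_normal((k, n - k))
M = np.zeros((n, n), dtype=complex)
M[:k, k:] = Z
g = np.eye(n) + M                      # exp(M) = 1 + M since M^2 = 0

# f_k(g): Gram determinant of g(v_1), ..., g(v_{n-k}) with v_i = e_{n-i+1};
# <z, w> = z^t w is the bilinear pairing, so no complex conjugation enters
V = g[:, ::-1][:, : n - k]             # columns g(e_n), ..., g(e_{k+1})
f_k = np.linalg.det(V.T @ V)
assert np.allclose(f_k, np.linalg.det(np.eye(n - k) + Z.T @ Z))

# for Z = iY the formula gives det(1 - Y^t Y), which is positive precisely
# when all singular values of Y are < 1, i.e. when Y lies in Upsilon_k
Y = rng.standard_normal((k, n - k))
Y *= 0.9 / np.linalg.svd(Y, compute_uv=False).max()
assert np.linalg.det(np.eye(n - k) - Y.T @ Y) > 0
```

Note that forming the Gram matrix as `V.T @ V` rather than with a conjugate transpose is essential: $f_k$ is holomorphic precisely because the pairing is bilinear.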
\par With $\mathfrak{n}_k = \mathfrak{l}_k\cap \mathfrak{n}$ we obtain a subalgebra of $\mathfrak{n}$ such that $\mathfrak{n} = \mathfrak{n}_k \ltimes \mathfrak{u}_k$ is
a semi-direct product with abelian ideal $\mathfrak{u}_k$. Accordingly we have $N\simeq U_k \times N_k$ under the multiplication map and likewise we obtain, via Lemma \ref{lemma bipolar} below,
for each $k$ a diffeomorphic polar map
$$\Phi_k: \mathfrak{u}_k \times \mathfrak{n}_k \times N \to N_\mathbb{C}, \ \ (Y_1, Y_2, n)\mapsto \exp(iY_1)\exp(iY_2)n\, .$$
Note that
$$\Phi_k^{-1}(T_{N,\Lambda'})=\Lambda_k'\times N$$
with $\Lambda_k'\subset \mathfrak{u}_k\times \mathfrak{n}_k$ a domain containing $0$. Now let $\Lambda_{k,1}'$ be the projection of $\Lambda_k'$ to $\mathfrak{u}_k$ and likewise we define $\Lambda_{k,2}'\subset \mathfrak{n}_k$. Note that $\Lambda_k'\subset \Lambda_{k,1}'\times \Lambda_{k,2}'$. We now claim that $\Lambda_{k,1}'\subset \Upsilon_k$. In fact
let $Y=Y_1+Y_2 \in \Lambda_k'$. Then $\exp(iY_1)\exp(iY_2)\in T_{N,\Lambda'}$ and thus, as $f_k$ is right
$N_{k,\mathbb{C}}$-invariant,
$$ 0\neq f_k(\exp(iY_1)\exp(iY_2))=f_k(\exp(iY_1))\,.$$
Our claim follows.
\par To complete the proof we argue by contradiction and assume that $\Lambda'$ is unbounded. We will show that
$\Lambda_{k,1}'$ is then unbounded for a suitable choice of $k$, contradicting the claim above. Suppose now that
there is an unbounded sequence $(Y^m)_{m\in \mathbb{N}}\subset \Lambda'$. We write elements
$Y\in \mathfrak{n}$ in coordinates $Y=\sum_{1\leq i <j\leq n} Y_{i,j}$. Let now $1\leq k\leq n-1$
be maximal such that all $Y^{m}_{i,j}$ stay bounded for $j\leq k$. Our choice of parabolic subgroup then is
$\overline P_k$.
By assumption we have
that $Y^m_{i, k+1}$
becomes unbounded for some $1\leq i \leq k$. Let $1\leq l\leq k$ be maximal such that $Y^m_{l, k+1}$ is unbounded.
We write elements $Y\in \mathfrak{n}$ as $Y_1+Y_2$ with $Y_1 \in \mathfrak{u}_k$ and
$Y_2\in \mathfrak{n}_k$.
Now for any $Y=Y_1+Y_2\in \mathfrak{n}$ we find unique $\widetilde Y_1, X\in \mathfrak{u}_k$ such that
\begin{equation} \label{triple exp} \exp(iY)=\exp(i(Y_1 +Y_2))= \exp(i\widetilde Y_1) \exp(iY_2)\exp(X)\end{equation}
as a consequence of the fact that $\Phi_k$ is diffeomorphic and the identity
$$\exp(iY) U_{k, \mathbb{C}} = \exp(iY_2) U_{k,\mathbb{C}}$$
in the Lie group $N_\mathbb{C}/ U_{k,\mathbb{C}}$.
By Dynkin's formula and the fact that $\mathfrak{u}_k$ is abelian, we infer from \eqref{triple exp}
$$iY= ((i\widetilde Y_1*iY_2)*X)=i\widetilde Y_1 +iY_2+X+\sum_{j=1}^{n-1} c_j i^{j+1} (\operatorname{ad} Y_2)^j \widetilde Y_1
+\sum_{j=1}^{n-1} d_j i^j (\operatorname{ad} Y_2)^j X$$
for certain constants $c_j, d_j\in \mathbb{Q}$. In particular, comparing real and imaginary parts on both sides we obtain two equations:
\begin{equation} \label{matrix1} Y_1 = \widetilde Y_1 +\sum_{j=1}^{n_1} c_{2j}(-1)^j
(\operatorname{ad} Y_2)^{2j} \widetilde Y_1 +\sum_{j=0}^{n_2} d_{2j+1} (-1)^{j} (\operatorname{ad} Y_2)^{2j+1} X \end{equation}
\begin{equation} \label{matrix2} X= \sum_{j=0}^{n_2} c_{2j+1}(-1)^j
(\operatorname{ad} Y_2)^{2j+1} \widetilde Y_1 -\sum_{j=1}^{n_1} d_{2j} (-1)^{j} (\operatorname{ad} Y_2)^{2j} X, \end{equation}
where $n_1=\lfloor \frac{n-1}{2}\rfloor$ and $n_2=\lceil \frac{n-1}{2}-1\rceil$.
Our claim now is that $(\widetilde Y_1^m)_{l, k+1}$ is unbounded. If $l=k$, then we deduce from
\eqref{matrix1} that $(Y_1^m)_{k, k+1}= (\widetilde Y_1^m)_{k, k+1}$ is unbounded, i.e., our desired contradiction. Now suppose $l<k$.
We are interested in the entries of $\widetilde Y_1$ in the first column,
and for that we let $\pi_1: \mathfrak{u}_{k,\mathbb{C}}=\operatorname{Mat}_{k\times (n-k)} (\mathbb{C}) \to \mathbb{C}^k$ be the projection to the first column. We decompose $\mathfrak{l}_k=\mathfrak{l}_{k,1} +\mathfrak{l}_{k,2}$ with
$\mathfrak{l}_{k,1}= \mathfrak{gl}(k, \mathbb{R})$ and $\mathfrak{l}_{k,2}=\mathfrak{gl}(n-k,\mathbb{R})$.
Write $\mathfrak{u}_{k,j}=\mathbb{R}^k$ for the subalgebra of $\mathfrak{u}_k$ consisting of the $j$-th column and observe
\begin{align} \label{pi1}
\pi_1([\mathfrak{l}_{k,2}\cap \mathfrak{n}_k, \mathfrak{u}_k])&=\{0\}\\
\label{lfk1} [\mathfrak{l}_{k,1}, \mathfrak{u}_{k,j}]&\subset \mathfrak{u}_{k,j}.
\end{align}
Now write $Y_2 = Y_{2|1} + Y_{2|2}$ according to $\mathfrak{l}_{k}=\mathfrak{l}_{k,1}+\mathfrak{l}_{k,2}$.
From \eqref{matrix1}--\eqref{matrix2} together with \eqref{pi1}--\eqref{lfk1} we then derive that
\begin{align} \label{matrix3}\pi_1(Y_1) &= \pi_1(\widetilde Y_1) +\sum_{j=1}^{n_1} c_{2j}(-1)^j
(\operatorname{ad} Y_{2|1})^{ 2j} \pi_1(\widetilde Y_1)\\
\notag & \quad +\sum_{j=0}^{n_2} d_{2j+1} (-1)^{j} (\operatorname{ad} Y_{2|1})^{2j+1} \pi_1(X) \end{align}
and
\begin{equation} \label{matrix4} \pi_1(X)= \sum_{j=0}^{n_2} c_{2j+1}(-1)^j
(\operatorname{ad} Y_{2|1})^{2j+1} \pi_1(\widetilde Y_1) -\sum_{j=1}^{n_1} d_{2j} (-1)^{j} (\operatorname{ad} Y_{2|1})^{2j} \pi_1(X) \, .\end{equation}
We apply this now to $Y=Y^m$ and note that $Y_{2|1}^m$ is bounded by the construction of $\overline P_k$. From \eqref{matrix3} and \eqref{matrix4} we obtain that
$X^m_{k, k+1}=0$ and $(\widetilde Y_1^m )_{k, k+1}= (Y_1^m)_{k, k+1}$, and recursively
we obtain that $X_{i, k+1}^m$ and $(\widetilde Y_1^m)_{i, k+1}$ remain bounded for $i>l$. It then follows from \eqref{matrix3}, as $Y^m_{l, k+1}$ is unbounded, that $(\widetilde Y_1^m)_{l, k+1}$ is unbounded. This is the desired contradiction and completes the proof of the proposition.
\end{proof}
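The decomposition \eqref{triple exp} used in the proof can be made completely explicit in the smallest nonabelian case $n=3$, $k=1$, where $\mathfrak{u}_1=\operatorname{span}(E_{12},E_{13})$ is an abelian ideal and $\mathfrak{n}_1=\operatorname{span}(E_{23})$. A short computation with $Y_1=aE_{12}+bE_{13}$ and $Y_2=cE_{23}$ gives $\widetilde Y_1=Y_1$ and $X=\tfrac{ac}{2}E_{13}$; the sketch below (an illustration with arbitrary sample coefficients) confirms this numerically:

```python
import numpy as np
from scipy.linalg import expm

# n = u_1 + n_1 inside the strictly upper triangular 3x3 matrices, with
# u_1 = span(E12, E13) an abelian ideal and n_1 = span(E23)
E12 = np.zeros((3, 3)); E12[0, 1] = 1.0
E13 = np.zeros((3, 3)); E13[0, 2] = 1.0
E23 = np.zeros((3, 3)); E23[1, 2] = 1.0

a, b, c = 0.7, -1.3, 2.1                 # arbitrary sample coefficients
Y1 = a * E12 + b * E13                   # component in u_1
Y2 = c * E23                             # component in n_1

Y1_tilde = Y1                            # solved by hand for this small case
X = (a * c / 2) * E13                    # real, as the decomposition requires

lhs = expm(1j * (Y1 + Y2))
rhs = expm(1j * Y1_tilde) @ expm(1j * Y2) @ expm(X)
assert np.allclose(lhs, rhs)
```

In this case the Dynkin-formula corrections collapse to the single bracket $[Y_2,Y_1]=-acE_{13}$, consistent with \eqref{matrix1}--\eqref{matrix2}.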
\begin{lemma} \label{lemma bipolar}Let $\mathfrak{n}$ be a nilpotent Lie algebra, $N_\mathbb{C}$ a simply connected Lie group
with Lie algebra $\mathfrak{n}_\mathbb{C}$ and $N=\exp(\mathfrak{n})\subset N_\mathbb{C}$. Let further $\mathfrak{n}_1, \mathfrak{n}_2\subset \mathfrak{n}$ be subalgebras with $\mathfrak{n}=\mathfrak{n}_1 +\mathfrak{n}_2$ (not necessarily direct). Suppose that
$\mathfrak{n}_1$ is abelian. Then the 2-polar map
$$\Phi: \mathfrak{n}_1 \times\mathfrak{n}_2 \times N \to N_\mathbb{C}, \ \ (Y_1, Y_2, n) \mapsto \exp(iY_1) \exp(iY_2) n $$
is onto. If, moreover, the sum $\mathfrak{n}_1+\mathfrak{n}_2$ is direct and $\mathfrak{n}_1$ is an ideal, then
$\Phi$ is a diffeomorphism.
\end{lemma}
\begin{proof} We prove the statement by induction on $\dim N$. Let $Z(N_\mathbb{C})\subset N_\mathbb{C}$ be the center of $N_\mathbb{C}$. Note that $Z(N_\mathbb{C})$ is connected and of positive dimension if $\dim \mathfrak{n}>0$. Set $\widetilde \mathfrak{n}:=\mathfrak{n}/\mathfrak{z}(\mathfrak{n})$, $\widetilde \mathfrak{n}_i:= (\mathfrak{n}_i +\mathfrak{z}(\mathfrak{n}))/\mathfrak{z}(\mathfrak{n})$ and $\widetilde N_\mathbb{C} =
N_\mathbb{C}/ Z(N_\mathbb{C})$. Induction applies and we deduce that for every $n_\mathbb{C} \in N_\mathbb{C}$
we find elements $n\in N$, $Y_i\in \mathfrak{n}_i$ and $z_\mathbb{C}\in Z(N_\mathbb{C})$ such that
$$ n_\mathbb{C} = \exp(iY_1) \exp(iY_2) n z_\mathbb{C}.
$$
We write $z_\mathbb{C} = z y $ with $z\in Z(N)$ and $y=\exp(iY)$ with $Y\in \mathfrak{z}(\mathfrak{n})$. Write
$Y=Y_1' +Y_2'$ with $Y_i'\in \mathfrak{n}_i$. As $Y$ is central, $Y_1'$ commutes with
$Y_2'$ and so $y =\exp(iY_1')\exp(iY_2')$. Putting matters together we arrive at
$$ n_\mathbb{C} = \exp(iY_1)\exp(iY_1') \exp(iY_2') \exp(iY_2) nz.
$$
Now $nz\in N$ and $\exp(iY_1)\exp(iY_1')=\exp(i(Y_1+Y_1'))$. Finally,
$\exp(iY_2')\exp(iY_2)= \exp(iY_2'')n_2$ for some $Y_2''\in \mathfrak{n}_2$ and
$n_2\in N_2 =\exp(\mathfrak{n}_2)$. This proves that $\Phi$ is surjective.
\par For the second part let us assume the further requirements. We confine ourselves to showing that $\Phi$ is injective.
So suppose that
$$\exp(iY_1)\exp(iY_2) n= \exp(iY_1') \exp(iY_2') n'$$
and reduce both sides mod the normal subgroup $N_{1,\mathbb{C}}$. Hence $Y_2=Y_2'$.
Since we have $N\simeq N_1\times N_2$ under multiplication we may assume, by the same argument, that $n=n_1\in N_1$ and $n'=n_1'\in N_1$. Now injectivity is immediate.
\end{proof}
\section{The Poisson transform and the Helgason conjecture}
\subsection{Representations of the spherical principal series}
Recall the Weyl half sum $\rho:=\frac{1}{2}\sum_{\alpha\in \Sigma^+} (\dim \mathfrak{g}^\alpha)\cdot \alpha\in \mathfrak{a}^*$. For each $\lambda\in \mathfrak{a}_\mathbb{C}^*$ and $a\in A$ we define
$a^\lambda:= e^{\lambda(\log a)}$.
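For example, for $G=\operatorname{GL}(n,\mathbb{R})$ every root space $\mathfrak{g}^{e_i-e_j}$ is one-dimensional, so in the standard coordinates $\rho=\big(\tfrac{n-1}{2},\tfrac{n-3}{2},\ldots,-\tfrac{n-1}{2}\big)$; a minimal numerical sketch (illustration only):

```python
import numpy as np

def rho_gl(n):
    # half sum of the positive roots e_i - e_j (i < j) of gl(n, R),
    # each root space being one-dimensional
    rho = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            root = np.zeros(n)
            root[i], root[j] = 1.0, -1.0
            rho += root / 2
    return rho

assert np.allclose(rho_gl(4), [1.5, 0.5, -0.5, -1.5])
```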
Let $\overline P = M A\overline N$ and define for
$\lambda\in \mathfrak{a}_\mathbb{C}^*$ the normalized character
$$\chi_\lambda: \overline P \to \mathbb{C}^*,\quad \overline p = ma\overline n \mapsto a^{\lambda-\rho}\,. $$
Associated to this character is the line bundle $\mathcal{L}_\lambda= G\times_{\overline P} \mathbb{C}_\lambda\to G/\overline P$.
The sections of this line bundle form the representations of the spherical principal
series: We denote the $K$-finite sections by $V_\lambda$, the analytic sections
by $V_\lambda^\omega$ and the smooth sections by $V_\lambda^\infty$.
Note in particular,
$$V_\lambda^\infty=\{ f\in C^\infty(G)\mid f(g\overline p ) =
\chi_\lambda(\overline p)^{-1} f(g), \ \overline p\in \overline P, g \in G \}$$
and that $V_\lambda^\infty$ is a $G$-module under the left regular representation.
Now given $f_1\in V_\lambda^\infty$ and $f_2\in V_{-\lambda}^\infty$ we obtain that
$f:=f_1f_2$ is a smooth section of $\mathcal{L}_{-\rho}$ which identifies with the 1-density bundle of the compact flag variety $G/\overline P$. Hence we obtain a natural $G$-invariant non-degenerate pairing
\begin{equation} \label{dual}V_\lambda^{\infty}\times V_{-\lambda}^\infty\to \mathbb{C}, \quad (f_1, f_2)\mapsto \langle f_1, f_2\rangle:=\int_{G/\overline P} f_1f_2\, .\end{equation}
In particular, the Harish-Chandra module dual to $V_\lambda$ is isomorphic to $V_{-\lambda}$.
The advantage of using the pairing \eqref{dual} is that it easily gives formulas when trivializing
$\mathcal{L}_\lambda$, and one reliably obtains correct formulas in both the compact and the non-compact
picture. Using this pairing we define distribution vectors as the strong dual
$V_\lambda^{-\infty}=(V_{-\lambda}^\infty)'$. Likewise we obtain hyperfunction vectors
$V_\lambda^{-\omega}$. Altogether we have the natural chain
$$ V_\lambda\subset V_\lambda^\omega\subset V_\lambda^\infty\subset
V_\lambda^{-\infty} \subset V_{\lambda}^{-\omega}\, .$$
We denote by $f_{\lambda, K}\in V_\lambda$ the $K$-fixed vector with
$f_{\lambda, K}({\bf1})=1$ and normalize the identification of $\mathcal{L}_{-\rho}$ with the 1-density
bundle such that $\int f_{-\rho, K}=1$.
\subsection{Definition of the Poisson transform and Helgason's conjecture}
We move on to the Poisson transform and the Helgason conjecture on
$Z=G/K$, which was formulated in \cite{H1} and first established in \cite{K6}; see also \cite{GKKS} for a novel elementary treatment. We denote by ${\mathbb D}(Z)$ the commutative algebra of $G$-invariant differential
operators and recall that the Harish-Chandra homomorphism for $Z$ asserts that ${\mathbb D}(Z)\simeq \operatorname{Pol}(\mathfrak{a}_\mathbb{C}^*)^W$ with $W$ the Weyl group. In particular,
$\operatorname{spec}{\mathbb D}(Z)\simeq \mathfrak{a}_\mathbb{C}^*/W$. For $\lambda\in \mathfrak{a}_\mathbb{C}^*$
we denote by $\mathcal{E}_{[\lambda]}(Z)$ the ${\mathbb D}(Z)$-eigenspace attached to
$[\lambda]=W\cdot \lambda\in \mathfrak{a}_\mathbb{C}^*/W$. Note that all functions in $\mathcal{E}_{[\lambda]}(Z)$ are eigenfunctions of $\Delta_Z$ to eigenvalue $\lambda^2 - \rho^2$
with $\lambda^2$ abbreviating the Cartan--Killing pairing $\kappa(\lambda, \lambda)$.
In case $Z$ has real rank one, let us remark that this characterizes
$\mathcal{E}_{[\lambda]}(Z)$, i.e.
$$ \mathcal{E}_{[\lambda]}(Z)=\{ f \in C^\infty(Z)\mid \Delta_Z f = (\lambda^2 -\rho^2)f\}\, . $$
For $\lambda\in \mathfrak{a}_\mathbb{C}^*$ one defines the $G$-equivariant Poisson transform
$$\mathcal{P}_\lambda: V_\lambda^{-\omega}\to C^\infty(G/K),
\ \ f\mapsto (gK\mapsto \langle f, g\cdot f_{-\lambda, K}\rangle).
$$
The Helgason conjecture then asserts that $\mathcal{P}_\lambda$ is onto the
$\mathbb{D}(Z)$-eigenspace $\mathcal{E}_{[\lambda]}(Z)$
provided that $f_{-\lambda, K}$ is cyclic in $V_{-\lambda}$, i.e.
$\mathcal{U}(\mathfrak{g})f_{-\lambda, K}= V_{-\lambda}$. The latter condition is always satisfied
if Kostant's condition \cite[Th.~8]{Kos} holds: $\operatorname{Re} \lambda(\alpha^\vee)\geq 0$ for all positive roots $\alpha$.
In the sequel we abbreviate this condition as $\operatorname{Re} \lambda \geq 0$.
If $\operatorname{Re} \lambda >0$, then the Poisson transform is inverted
by the boundary value map
$$b_\lambda: \mathcal{E}_{[\lambda]}(Z) \to V_\lambda^{-\omega}, \ \
\phi\mapsto (g\mapsto {\bf c}(\lambda)^{-1}\lim_{a\to \infty\atop a\in A^-} a^{\lambda -\rho} \phi(ga))$$
where ${\bf c}(\lambda)$ is the Harish-Chandra ${\bf c}$-function:
$${\bf c}(\lambda):=\int_N{\bf a}(n)^{\lambda +\rho} \ dn $$
with ${\bf a}: KA\overline N \to A$ the middle projection.
In particular, we have \begin{equation} \label{boundary} b_\lambda(\mathcal{P}_\lambda(f)) = f \qquad (f \in V_\lambda^{-\omega}, \operatorname{Re} \lambda >0)\, .\end{equation}
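For $G=\operatorname{SL}(2,\mathbb{R})$ the ${\bf c}$-function can be computed directly from the definition. With $K=\operatorname{SO}(2)$, $A$ the positive diagonal matrices and $\overline N$ lower unipotent, one has $\overline n e_2=e_2$, so the $A$-part of $g=\kappa a\overline n$ satisfies $a_2=\|ge_2\|$; writing $\lambda=s\rho$ this gives ${\bf a}(n_x)^{\lambda+\rho}=(1+x^2)^{-(s+1)/2}$ and ${\bf c}(\lambda)=\sqrt{\pi}\,\Gamma(s/2)/\Gamma\big(\tfrac{s+1}{2}\big)$ for Lebesgue measure $dn=dx$. (The normalizations of $\rho$ and $dn$ are our choices for this sketch.) For $s=2$ the integral equals $2$ exactly:

```python
import numpy as np
from math import gamma, sqrt, pi
from scipy.integrate import quad

def a_pow(x, s):
    # a-part of n_x in G = K A Nbar: since nbar e2 = e2 and k is orthogonal,
    # a_2 = |n_x e2| = sqrt(1 + x^2); then a(n_x)^(lambda+rho) = a_2^{-(s+1)}
    n_x = np.array([[1.0, x], [0.0, 1.0]])
    a2 = np.linalg.norm(n_x[:, 1])
    return a2 ** (-(s + 1))

s = 2.0                                   # Re(lambda) > 0
c_lam, _ = quad(lambda x: a_pow(x, s), -np.inf, np.inf)

assert abs(c_lam - sqrt(pi) * gamma(s / 2) / gamma((s + 1) / 2)) < 1e-6
assert abs(c_lam - 2.0) < 1e-6            # the closed form equals 2 for s = 2
```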
\section{The Poisson transform in terms of $S$-geometry}
We fix a parameter
$\lambda$ such that $\operatorname{Re} \lambda >0$. The goal is to identify
subspaces of $V_\lambda^{-\omega}$ for which $\mathcal{P}_\lambda$ has a particularly
nice image in terms of $S$-models. From what we already explained we
have
$$ \operatorname{im} \mathcal{P}_\lambda\subset \mathcal{O}(\Xi_S)$$
and, in particular, for all $\phi \in \operatorname{im} \mathcal{P}_\lambda$ and $a\in A$
we have $\phi_a\in \mathcal{O}(\mathcal{T}_a)$.
The general problem here is that one wants to identify
$V_\lambda^{-\omega}$ with a certain subspace of $C^{-\omega}(N)$, which is tricky and
depends on the parameter $\lambda$. The compact models for the spherical principal series are much cleaner to handle, as the restriction maps
$$\res_{K,\lambda} : V_\lambda^\infty \to C^\infty(K/M)=C^\infty(K)^M, \quad f\mapsto f|_K$$
are isomorphisms. In this sense we obtain a natural identification
$V_\lambda^{-\omega} \simeq C^{-\omega}(K/M)$ as $K$-modules which is
parameter independent.
In contrast, the faithful restriction map
$$\res_{N,\lambda} : V_\lambda^\infty \to C^\infty(N), \quad f\mapsto f|_N$$
is not onto and the image depends on $\lambda$.
For a function $h\in C^\infty(N)$ we define a function on
the open Bruhat cell $NMA\overline N$ by
$$H_\lambda(n ma\overline n) = h(n) a^{-\lambda+\rho}\, .$$
Then the image of $\res_{N,\lambda}$ is by definition given by
$$ C_\lambda^\infty(N)=\{ h \in C^\infty(N)\mid H_\lambda\ \hbox{extends to a smooth function on $G$}\}\, .$$
In this sense $V_\lambda^{-\omega}$ corresponds in the non-compact model to
$$C_\lambda^{-\omega}(N)= \{ h \in C^{-\omega}(N)\mid H_\lambda|_{K\cap N \overline P}\ \hbox{extends to a hyperfunction on $K$}\}\, .$$
Having said this we take an element $f\in C_\lambda^{-\omega}(N)$ and observe that
the Poisson transform in terms of $S$ is given by
\begin{equation} \label{Poisson} \mathcal{P}_\lambda f(s)= \int_N f(x) {\bf a} (s^{-1} x)^{\lambda + \rho} \ dx\
\qquad (s\in S)\end{equation}
with ${\bf a}: KA\overline N \to A$ the middle projection. In accordance with
\eqref{boundary} we then have
$${1\over {\bf c}(\lambda)} \lim_{a\to \infty\atop a\in A^-} a^{\lambda-\rho} \mathcal{P}_\lambda f(na) = f(n)\qquad (n\in N)\,.$$
Let us note that
the Hilbert model $\mathcal{H}_\lambda=L^2(K/M)\subset C^{-\omega}(K/M)=V_\lambda^{-\omega}$ of $V_\lambda$ corresponds in the non-compact picture to $L^2(N, {\bf a}(n)^{2 \operatorname{Re} \lambda} dn)\supset L^2(N)$ and hence
$$L^2(N)\subset C^{-\omega}_\lambda(N)\qquad (\operatorname{Re} \lambda\geq 0)\, .$$
\par
The main objective now is to give a novel characterization of $\mathcal{P}_\lambda(L^2(N))$ for $\operatorname{Re} \lambda>0$.
For a function $\phi$ on $S=NA$ and $a\in A$ we recall the partial functions on $N$ defined by
$$\phi_a(n)= \phi(na)\qquad (n\in N)\, .$$
Now, given $f\in L^2(N)$ we let
$\phi:=\mathcal{P}_\lambda(f)$ and rewrite \eqref{Poisson} as
\begin{equation} \label{P rewrite} {1\over {\bf c}(\lambda)} a^{\lambda-\rho} \phi_a(n) = \int_N f(x)\delta_{\lambda, a}(n^{-1}x) \ dx \end{equation}
with
$$\delta_{\lambda, a}(x):= {1\over {\bf c}(\lambda)} a^{-2\rho} {\bf a} (a^{-1} x a)^{\lambda+\rho}
\qquad (x \in N)\, .$$
We first note that the condition $\operatorname{Re} \lambda>0$ then implies that $\delta_{\lambda, a}$ is a Dirac-sequence
on $N$ for $a\to \infty$ on a ray in the negative Weyl chamber.
\begin{lemma} Let $\phi=\mathcal{P}_\lambda(f)$ for $f\in L^2(N)$. Then the following assertions
hold:
\begin{enumerate}
\item $\phi_a\in L^2(N)$ for all $a\in A$.
\item $\|\phi_a\|_{L^2(N)} \leq a^{\rho -\operatorname{Re} \lambda}{\bf c}(\operatorname{Re} \lambda) \|f\|_{L^2(N)}$.
\end{enumerate}
\end{lemma}
\begin{proof} Both assertions are immediate from \eqref{P rewrite} and the fact that $\|\delta_{\lambda, a}\|_{L^1(N)} \leq
\frac {{\bf c}(\operatorname{Re} \lambda)}{|{\bf c}(\lambda)|}$.
\end{proof}
\subsection{Partial holomorphic extensions of eigenfunctions}
Recall that the Poisson transform $\phi = \mathcal{P}_\lambda(f)$ belongs to $\mathcal{O}(\Xi_S)$, with all partial functions $\phi_a$ extending to holomorphic functions on $\mathcal{T}_a$.
For $y\in\exp(i\Lambda_a)$ we can thus define
$$\phi_{a,y}(n):=\phi_a(n y)\qquad (n \in N)\, .$$
Let $\delta_\lambda:=\delta_{\lambda, 1}$ and put
$$\delta_{\lambda, y}: N \to \mathbb{C} , \quad x \mapsto \delta_\lambda(y^{-1} x )\ .$$
\begin{lemma} The following assertions hold:
\begin{enumerate} \item The function ${\bf v}_\lambda(y):=\sup_{k\in K} |{\bf a}(y^{-1} k)^{\lambda +\rho}|$ is finite for all $y \in \exp(i\Lambda)$.\\
\item The function $\delta_{\lambda, y}$ is integrable with $L^1(N)$-norm
\begin{equation} \label{delta bound2} v_\lambda(y):=\|\delta_{\lambda, y}\|_1\leq {\bf v}_\lambda(y)\frac{{\bf c}(\operatorname{Re} \lambda)}{|{\bf c}(\lambda)|}\, .\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Part (1) is a consequence of the fact that ${\bf a}: G\to A$, considered as a
map from $K\backslash G \to A$, extends holomorphically
to $\Xi^{-1}\to A_\mathbb{C}$ with $\Xi^{-1}$ considered as a subset of $K_\mathbb{C} \backslash G_\mathbb{C}$, see
\eqref{tilde Xi}. \\
For the proof of (2) we note the identity \begin{equation} \label{delta bound} \delta_\lambda(y^{-1} x )=\delta_\lambda(x) {\bf a}(y^{-1} {\bf k}(x))^{\lambda +\rho}
\qquad (x \in N, y\in \exp(i\Lambda)),\end{equation}
where ${\bf k}: G \to K$ is defined by the opposite Iwasawa decomposition $G=KA\overline N$.
Combined with part (1), \eqref{delta bound} implies that for all $y \in \exp(i\Lambda)$ the function
$\delta_{\lambda, y}$
is integrable on $N$, with the asserted estimate \eqref{delta bound2} for its norm.
\end{proof}
\begin{lemma}\label{lemma a bound} For $\operatorname{Re} \lambda>0$, $f\in L^2(N)$ and $\phi=\mathcal{P}_\lambda(f)$ we have
\begin{equation}\label{upper a-bound} \|\phi_{a, y}\|_{L^2(N)} \leq |{\bf c}(\lambda)| a^{\rho -\operatorname{Re} \lambda}\|\delta_{\lambda, y^{a^{-1}}}\|_{L^1(N)}
\|f\|_{L^2(N)}\qquad (y\in \exp(i\Lambda_a))\, .\end{equation}
\end{lemma}
\begin{proof} From \eqref{P rewrite} we obtain
$$ {1\over {\bf c}(\lambda)} a^{\lambda-\rho} \phi_{a,y}(n) = \int_N f(x)\delta_{\lambda, a}(y^{-1}n^{-1}x) \ dx $$
and thus
$$ {1\over |{\bf c}(\lambda)|} a^{\operatorname{Re} \lambda-\rho}\| \phi_{a,y}\|_{L^2(N)} \leq
\|\delta_{\lambda, a}(y^{-1}\cdot)\|_{L^1(N)} \|f\|_{L^2(N)}.
$$
The assertion \eqref{upper a-bound} follows. \end{proof}
\subsection{A class of weight functions}\label{subsection weight functions}
We now let ${\bf w}_\lambda: \exp(i\Lambda)\to \mathbb{R}_{>0}$ be any positive continuous function such that
\begin{equation} \label{request w} d(\lambda):=\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \|\delta_{\lambda, y}\|_1^2 \ dy <\infty\end{equation}
and define a left $N$-invariant function on the tube $\mathcal{T}_a$ by
$${\bf w}_{\lambda,a}: \mathcal{T}_a\to \mathbb{R}_{>0}, \quad n y\mapsto {\bf w}_\lambda (\operatorname{Ad}(a^{-1})y)\qquad (y\in \exp(i\Lambda_a))\, .$$
\begin{rmk} In general we expect that $\Lambda$ is bounded. In view
of \eqref{delta bound2} one may then take
$${\bf w}_\lambda \equiv 1\ ,$$
as ${\bf v}_\lambda^{-2}$ is bounded from below by a positive constant.
Optimal choices for ${\bf w}_\lambda$ in special cases will be presented at the end of the article.
\end{rmk}
We now show that $\phi_a\in \mathcal{O}(\mathcal{T}_a)$ belongs to the weighted Bergman
space
$$\mathcal{B}(\mathcal{T}_a, {\bf w}_{\lambda,a}):=\{ \psi\in \mathcal{O}(\mathcal{T}_a)\mid \|\psi\|^2_{\mathcal{B}_{a, \lambda}}:=
\int_{\mathcal{T}_a} |\psi(z)|^2 {\bf w}_{\lambda,a}(z) dz <\infty\},$$
where $dz$ is the Haar measure on $N_\mathbb{C}$ restricted to $\mathcal{T}_a$.
More precisely, with $d(\lambda)$ from \eqref{request w},
we record the following
\begin{lemma} Let $\operatorname{Re} \lambda>0$, $f\in L^2(N)$ and $\phi=\mathcal{P}_\lambda(f)$. Then we have the following inequality
\begin{equation}\label{normb1}
\|\phi_a\|_{\mathcal{B}_{a,\lambda}}
\leq |{\bf c}( \lambda)| \sqrt{d(\lambda)} a^{2\rho-\operatorname{Re} \lambda}\|f\|_{L^2(N)}\, .\end{equation}
\end{lemma}
\begin{proof} Starting with \eqref{upper a-bound} the assertion follows from the estimate
\begin{align*} \notag \|\phi_a\|_{\mathcal{B}_{a,\lambda}}&\leq a^{\rho- \operatorname{Re} \lambda} |{\bf c}(\lambda)| \left[\int_{\exp(i\Lambda_a)}
{\bf w}_\lambda(y^{a^{-1}}) \|\delta_{\lambda, y^{a^{-1}}}\|_1^2 \ dy\right]^{1\over 2}
\|f\|_{L^2(N)}\\
\notag&= |{\bf c}(\lambda)| \left[\int_{\exp(i\Lambda)}
{\bf w}_\lambda(y) \|\delta_{\lambda, y}\|_1^2 \ dy\right]^{1\over 2}
a^{2\rho-\operatorname{Re} \lambda}\|f\|_{L^2(N)}\\
&=
|{\bf c}(\lambda)| \sqrt{d(\lambda)} a^{2\rho-\operatorname{Re} \lambda}\|f\|_{L^2(N)}\,,
\end{align*}
as desired.
\end{proof}
The lemma motivates the definition of the following Banach subspace of $\mathcal{E}_{[\lambda]}(Z)\subset \mathcal{O}(\Xi_S)$:
$$\mathcal{B}(\Xi_S, \lambda):=\{ \phi \in \mathcal{E}_{[\lambda]}(Z)\mid
\|\phi\|:=\sup_{a\in A} a^{\operatorname{Re}\lambda -2\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}}<\infty\}\, .$$
Indeed, \eqref{normb1} implies
\begin{equation} \label{P cont} \|\mathcal{P}_\lambda(f)\|\leq C \|f\|_{L^2(N)} \qquad (f\in L^2(N))\end{equation}
with $C:={\bf c}(\operatorname{Re} \lambda) \sqrt{d(\lambda)}$ and therefore the first inequality
in the following theorem:
\begin{theorem}\label{main theorem}Let $Z=G/K$ be a Riemannian symmetric space and $\lambda\in \mathfrak{a}_\mathbb{C}^*$ be a
parameter such that $\operatorname{Re} \lambda>0$. Then $$\mathcal{P}_\lambda: L^2(N) \to \mathcal{B}(\Xi_S, \lambda)$$
is an isomorphism of Banach spaces. In particular, there exist $c_1,c_2>0$ such that
$$c_1 \|\mathcal{P}_\lambda(f)\|\leq \|f\|_{L^2(N)} \leq c_2 \|\mathcal{P}_\lambda(f)\|\qquad (f\in L^2(N))\, .$$
\end{theorem}
\begin{proof} Since $\operatorname{Re}\lambda>0$, the Poisson transform is injective. Further, \eqref{P cont}
shows that $\mathcal{P}_\lambda$ takes values in $\mathcal{B}(\Xi_S, \lambda)$ and is continuous.
In view of the open mapping theorem, it thus suffices to show that
$\mathcal{P}_\lambda$ is surjective. Note now that the weight ${\bf w}_\lambda$ is uniformly bounded above and below by positive constants when restricted to a compact subset $\exp(i\Lambda_c)\subset \exp(i\Lambda)$. Hence the Bergman inequality implies the bound
\begin{equation} \label{norm 1} \|\psi|_N\|_{L^2(N)} \leq C a^{-\rho} \|\psi\|_{\mathcal{B}_{a,\lambda}}\quad (\psi \in \mathcal{B}(\mathcal{T}_a, {\bf w}_{\lambda, a})).\end{equation}
We apply this to $\psi=\phi_a$ for some $\phi\in \mathcal{B}(\Xi_S,\lambda)$ and obtain that
$a^{\lambda -\rho} \phi_a|_N $ is bounded in $L^2(N)$. Hence there is a sequence $(a_n)_{n\in \mathbb{N}}$ tending to infinity on a ray in $A^-$ such that
$a_n^{\lambda-\rho} \phi_{a_n}|_N \to h$ weakly for some $h \in L^2(N)$.
By the Helgason conjecture we know that $\phi = \mathcal{P}_\lambda(f)$ for some $f\in C^{-\omega}_\lambda(N)$ and that
\begin{equation} \label{limit} {\bf c}(\lambda)^{-1} a^{\lambda -\rho} \phi_a|_N \to f\end{equation}
as appropriate hyperfunctions on $N$ for $a\to \infty$ in $A^-$ on a ray. Hence $h=f$ and we obtain
the second inequality of the theorem.
\end{proof}
\subsection{The norm limit formula}
\label{sub:norm}
Define a positive constant
\begin{equation} \label{def w const} w(\lambda):=\left[\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \ dy\right]^{1\over 2} .\end{equation}
Note that $w(\lambda)$ is indeed finite. This will follow from \eqref{request w} provided we can show that $\|\delta_{\lambda, y}\|_1\geq 1$. Now, using Cauchy's theorem
we see that
\begin{equation} \label{cy} \int_N {\bf a} (y^{-1} n)^{\lambda +\rho} \ dn = {\bf c}(\lambda)\end{equation}
does not depend on $y\in \exp(i\Lambda)$. The estimate $\|\delta_{\lambda, y}\|_1\geq 1$ follows.
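For $G=\operatorname{SL}(2,\mathbb{R})$ the independence statement behind \eqref{cy} is the classical contour shift: with $y=\exp(i\eta E_{12})$ and the normalization $\lambda=s\rho$ (our choice for this sketch), the integral becomes $\int_{\mathbb{R}}(1+(x-i\eta)^2)^{-(s+1)/2}\,dx$, and for $|\eta|<1$ the integrand is holomorphic on the relevant strip, so Cauchy's theorem makes the value independent of $\eta$:

```python
import numpy as np
from scipy.integrate import quad

s = 2.0

def cy_integral(eta):
    # int_R (1 + (x - i*eta)^2)^(-(s+1)/2) dx on the principal branch;
    # for |eta| < 1 the base 1 + (x - i*eta)^2 never meets (-inf, 0]
    f = lambda x: (1 + (x - 1j * eta) ** 2) ** (-(s + 1) / 2)
    re, _ = quad(lambda x: f(x).real, -np.inf, np.inf)
    im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf)
    return complex(re, im)

vals = [cy_integral(eta) for eta in (0.0, 0.3, 0.6, 0.9)]
assert all(abs(v - vals[0]) < 1e-6 for v in vals)    # independent of y
assert abs(vals[0] - 2.0) < 1e-6                     # = c(lambda) for s = 2
```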
The purpose of this section is to prove the norm limit formula as stated in the introduction
\begin{theorem} \label{norm limit} For any $f\in L^2(N)$ and $\phi = \mathcal{P}_\lambda(f)$ we have
\begin{equation} \label{norm limit2} {1\over w(\lambda) |{\bf c}(\lambda)|} a^{\operatorname{Re} \lambda - 2\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}} \to \|f\|_{L^2(N)} \qquad (f\in L^2(N))\end{equation}
for $a\to \infty$ on a ray in $A^-$.
\end{theorem}
\begin{proof} We first note that for any function $\psi$ on $\mathcal{T}_a$ with $|\psi|^2$ integrable we have
$$
\int_{\mathcal{T}_a} |\psi(z)|^2\ dz = \int_{\Lambda_a} \int_N |\psi(yn)|^2 \ dn \ dY
$$
with $y=\exp(iY)$ and $dY$ the Lebesgue measure on $\mathfrak{n}$. With that
we rewrite the square of the left hand side of \eqref{norm limit2} as
\begin{align*} &{1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\operatorname{Re} \lambda - 4\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}}^2= \\
&= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\operatorname{Re} \lambda - 4\rho} \int_{\Lambda_a}\int_N |\phi_a(ny)|^2 {\bf w}_{\lambda, a} (y) \ dn \ dY \\
&= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2}a^{2\operatorname{Re} \lambda - 2\rho} \int_{\Lambda}\int_N |\phi_a(ny^a)|^2 {\bf w}_{\lambda, {\bf1}} (y) \ dn \ dY \\
&= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2}a^{2\operatorname{Re} \lambda - 2\rho} \int_{\Lambda}\int_N \left|\int_N f(x) {\bf a} (y^{-1} a^{-1} n^{-1} x)^{\lambda +\rho} \ dx \right|^2 {\bf w}_{\lambda, {\bf1}} (y) \ dn \ dY \\
&={1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{-4\rho} \int_{\Lambda}\int_N \left|\int_N f(x) {\bf a} (y^{-1} a^{-1} n^{-1} xa)^{\lambda +\rho} \ dx \right|^2 {\bf w}_{\lambda, {\bf1}} (y) \ dn \ dY\, . \\
\end{align*}
Next we consider the function on $N$
$$\delta_{\lambda, y, a}(n):={1\over {\bf c}(\lambda)} a^{-2\rho} {\bf a} (y^{-1} a^{-1} na)^{\lambda +\rho} $$
and observe that this defines for any fixed $y\in \exp(i\Lambda)$ a Dirac-sequence when
$a\in A^-$ moves along a fixed ray to infinity; see \eqref{cy} for $\int_N \delta_{\lambda, y, a}= 1$.
We thus arrive at
\begin{multline*} {1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\operatorname{Re} \lambda - 4\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}}^2\\
= {1\over w(\lambda)^2} \int_{\Lambda}\int_N \left|\int_N f(x) \delta_{\lambda, y, a} (n^{-1}x) \ dx \right|^2 {\bf w}_{\lambda, {\bf1}} (y) dn \ dY \ .
\end{multline*}
We define a convolution type operator
$$T_{\lambda, y, a}: L^2(N) \to L^2(N), \quad f\mapsto \left(n \mapsto \int_N f(x) \delta_{\lambda, y, a}(n^{-1}x)\ dx\right) $$
and note that by Young's convolution inequality
$$\|T_{\lambda, y, a}(f)\|_2 \leq \|\delta_{\lambda, y, a}\|_1 \cdot \|f\|_2\ .$$
We continue with some standard estimates:
\begin{align*} &\left|\int_N \left|\int_N f(x) \delta_{\lambda, y, a} (n^{-1}x) \ dx \right|^2 \ dn \ - \|f\|_2^2\right|= \left|\| T_{\lambda, y, a}(f)\|_2^2 - \|f\|_2^2\right|\\
&\quad = \left| \|T_{\lambda, y, a}(f)\|_2 - \|f\|_2\right| \cdot (\|T_{\lambda, y, a}(f) \|_2 + \|f\|_2)\\
&\quad \leq \|T_{\lambda, y, a}(f) - f\|_2 \cdot \|f\|_2( 1+ \|\delta_{\lambda, y, a}\|_1)\\
&\quad =\left\|\int_N (f(\cdot x) - f(\cdot)) \delta_{\lambda, y, a} (x) \ dx \right\|_2 \cdot \|f\|_2( 1+ \|\delta_{\lambda, y, a}\|_1)\\
&\quad \leq \|f\|_2( 1+ \|\delta_{\lambda, y, a}\|_1)\int_N \|f(\cdot x) - f(\cdot)\|_2 |\delta_{\lambda, y, a}(x)| \ dx \, .
\end{align*}
Now note that $x\mapsto \|f(\cdot x) - f(\cdot)\|_2$ is a bounded continuous function and $\frac{|\delta_{\lambda, y, a}|}{\|\delta_{\lambda, y,a}\|_1}$ is a Dirac-sequence
for $a\to \infty$ in $A^-$ on a ray. Hence we obtain a positive function
$\kappa_f(a)$ with $\kappa_f(a) \to 0$ for $a\to \infty$ in $A^-$ on a ray such that
$$\int_N \|f(\cdot x) - f(\cdot)\|_2 |\delta_{\lambda, y, a}(x)| \ dx \leq \|\delta_{\lambda, y,a}\|_1 \kappa_f (a)\, .$$
Putting matters together we have shown that
\begin{align*}& \left|{1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\operatorname{Re} \lambda - 4\rho} \|\phi_a\|_{\mathcal{B}_{a,\lambda}}^2 -{1\over w(\lambda)^2}\Big(\int_\Lambda {\bf w}_{\lambda,{\bf1}}\Big)\cdot \|f\|_2^2\right|\\
&\quad \le {\kappa_f(a)\, \|f\|_2\over w(\lambda)^2} \int_{\Lambda} (1 +\|\delta_{\lambda, y, a}\|_1) \|\delta_{\lambda, y, a}\|_1 {\bf w}_{\lambda, {\bf1}}(y) \ dy\ . \end{align*}
Finally observe that $\|\delta_{\lambda, y, a}\|_1 =\|\delta_{\lambda, y}\|_1$
and hence
$$\int_{\Lambda} (1 +\|\delta_{\lambda, y, a}\|_1) \|\delta_{\lambda, y, a}\|_1 {\bf w}_{\lambda, {\bf1}}(y) \ dy <\infty\ ,$$
by the defining condition \eqref{request w} for ${\bf w}_\lambda$.
With that the proof of the norm limit formula \eqref{norm limit2} is complete.
\end{proof}
\section{The real hyperbolic space}\label{sect hyp}
In this section we investigate how the main results of this article take shape in the case of real hyperbolic spaces. After recalling the explicit formulas of the Poisson kernel we
provide essentially sharp estimates for $\|\delta_{\lambda, y}\|_1$ which allow us to perform the construction of a family of nice explicit weight functions ${\bf w}_\lambda$ satisfying \eqref{request w}.
These in turn have the property that for real parameters $\lambda=\operatorname{Re} \lambda$ the weighted Bergman space $\mathcal{B}(\Xi_S, \lambda)$ becomes isometric to $L^2(N)$. In particular,
the Banach space $\mathcal{B}(\Xi_S, \lambda)$ is in fact a Hilbert space for the exhibited family of weights.
\subsection{Notation} Our concern is with the real hyperbolic space $ \mathbf{H}_n(\mathbb{R}) = G/K $ where $ G = \operatorname{SO}_e(n+1,1)$ and $K = \operatorname{SO}(n+1)$ for $n\geq 1$. Here $\operatorname{SO}_e(n+1,1)$ is the identity component of the group $\operatorname{SO}(n+1,1)$. The Iwasawa decomposition $ G = KAN $ is given by $ N = \mathbb{R}^n$, $K = \operatorname{SO}(n+1) $ and $ A = \mathbb{R}_+$, and we can identify $ \mathbf{H}_n(\mathbb{R}) $ with the upper half-space $ \mathbb{R}^{n+1}_+ = \mathbb{R}^n \times \mathbb{R}_+ $ equipped with the Riemannian metric $ g = a^{-2} (|dx|^2+da^2 )$. We abbreviate and write $|\cdot|=\|\cdot\|_2$ for the Euclidean norm on $\mathbb{R}^n$.
For any $ \lambda \in \mathbb{C} $ which is not a pole of $\Gamma(\lambda+n/2)$ we consider the normalized kernels
$$ p_\lambda(x, a) = \pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} a^{\lambda+n/2}(a^2+|x|^2)^{-(\lambda+n/2)}, $$
which play the role of the normalized Poisson kernel when $ \mathbf{H}_n(\mathbb{R}) $ is identified with the group $ S = NA$, $N =\mathbb{R}^n$, $A = \mathbb{R}_+.$ In fact, with ${\bf a}: G \to A$ the Iwasawa projection
with regard to $G=KA\overline N$ as in the main text we record for $x\in N=\mathbb{R}^n$ that
$${\bf a}(x)^{\lambda+\rho} = ( 1 +|x|^2)^{-(\lambda +n/2)}\, .$$
Further we have
$${\bf c}(\lambda)= \pi^{n/2} \frac{\Gamma(2\lambda)}{\Gamma(\lambda+n/2)} $$
so that
$$ p_\lambda(x, a) = {1\over {\bf c}(\lambda)} {\bf a}(a^{-1} x)^{\lambda +\rho}.
$$
In the sequel we assume that $s:=\operatorname{Re} \lambda>0$ and note that $\rho=n/2$.
The classical Poisson transform (normalize \eqref{Poisson} by ${1\over {\bf c}(\lambda)}$) of a function $ f \in L^2(\mathbb{R}^n) $ is then given by
\begin{align*} \mathcal{P}_\lambda f(x,a) &= f*p_{\lambda}(\cdot, a)\\
&=\pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} a^{-(\lambda+n/2)} \int_{\mathbb{R}^n} f(u) (1+a^{-2}|x-u|^2)^{-\lambda-n/2} du\, \end{align*}
with $\ast$ the convolution on $N=\mathbb{R}^n$. It is easy to check that $ \mathcal{P}_\lambda f(x,a) $ is an eigenfunction of the Laplace--Beltrami operator $ \Delta $ with eigenvalue $\lambda^2- (n/2)^2.$
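The eigenfunction property is easy to confirm numerically. The following Python sketch applies a finite-difference Laplace--Beltrami operator $a^2(\partial_x^2+\partial_a^2)-(n-1)a\,\partial_a$ to the kernel $a^{\lambda+n/2}(a^2+|x|^2)^{-(\lambda+n/2)}$ for $n=1$; the real value of $\lambda$, the test point and the step size are ad-hoc choices.

```python
import math

# Finite-difference check (n = 1) that the Poisson kernel
# p(x,a) = a^(lam+n/2) (a^2+x^2)^(-(lam+n/2)) is an eigenfunction of
# Delta = a^2 (d_xx + d_aa) - (n-1) a d_a with eigenvalue lam^2 - (n/2)^2.
n = 1
lam = 0.7          # ad-hoc real sample value with Re(lam) > 0

def p(x, a):
    e = lam + n / 2
    return a**e * (a * a + x * x)**(-e)

def laplace_beltrami(f, x, a, h=1e-4):
    fxx = (f(x + h, a) - 2 * f(x, a) + f(x - h, a)) / h**2
    faa = (f(x, a + h) - 2 * f(x, a) + f(x, a - h)) / h**2
    fa = (f(x, a + h) - f(x, a - h)) / (2 * h)
    return a * a * (fxx + faa) - (n - 1) * a * fa

x0, a0 = 0.3, 0.8  # arbitrary test point
lhs = laplace_beltrami(p, x0, a0)
rhs = (lam**2 - (n / 2)**2) * p(x0, a0)
print(lhs, rhs)    # nearly equal
```

Up to discretization error the two printed values coincide, matching the eigenvalue $\lambda^2-(n/2)^2$.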
From the explicit formula for the Poisson kernel it is clear that for each $ a \in A $ fixed, $ \mathcal{P}_\lambda f(x, a) $ has a holomorphic extension to the tube domain
$$ \mathcal{T}_a:= \{ x+iy \in \mathbb{C}^n: |y| < a \} = N\exp(i\Lambda_a)\subset N_\mathbb{C}=\mathbb{C}^n, $$
where $ \Lambda_a = \{ y \in \mathbb{R}^n : |y| < a \}.$ Writing $ \phi_a(x) = \mathcal{P}_\lambda f(x,a) $ as in (\ref{P rewrite}) we see that
$$
\delta_{\lambda, y}(x)= {1\over {\bf c}(\lambda)} (1+(x+iy)^2)^{-(\lambda+n/2)}.
$$
A weight function $ {\bf w}_\lambda $ satisfying (\ref{request w}), namely
$$ d(\lambda) = \int_{|y| <1} {\bf w}_\lambda(y) \|\delta_{\lambda,y}\|_1^2 \, dy < \infty$$
can be easily found. Indeed, as
$$ (1+z^2)^{-(n/2+\lambda)} = \frac{2^{-n-2\lambda}}{\Gamma(\lambda+n/2)} \int_0^\infty e^{-\frac{1}{4t} (1+z^2)} t^{-n/2-\lambda-1} dt $$
where $ z^2 = z_1^2+z_2^2+...+z_n^2$ we have
$$ |\delta_{\lambda,y}(x)| \leq c_\lambda \int_0^\infty e^{-\frac{1}{4t} (1-|y|^2+|x|^2)} t^{-n/2-s-1} dt $$
valid for $ |y| <1.$ From this it is immediate that we have the estimate
$$ \|\delta_{\lambda,y}\|_1 \leq c_\lambda (1-|y|^2)_+^{-s}\, .$$
However this bound is not optimal and we can do better with slightly more effort. This will be part of the next subsection.
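The Gamma-integral identity used above can be confirmed numerically for real $z$; in the sketch below (the parameter values and the logarithmic-substitution quadrature are ad-hoc choices) the normalization works out to $4^{-(n/2+\lambda)}=2^{-n-2\lambda}$.

```python
import math

# Numerical confirmation of the subordination identity for real z:
# (1+z^2)^(-(n/2+lam)) = 4^(-(n/2+lam))/Gamma(n/2+lam)
#                        * int_0^oo exp(-(1+z^2)/(4t)) t^(-n/2-lam-1) dt.
n, lam, z = 2, 0.6, 0.9          # ad-hoc sample values
nu = n / 2 + lam
A = 1 + z * z

# integrate in the variable t = e^v; the integrand then decays rapidly
# at both ends of the v-axis
step, V = 1e-3, 40.0
total, v = 0.0, -V
while v <= V:
    t = math.exp(v)
    total += math.exp(-A / (4 * t)) * t**(-nu)   # t^(-nu-1) dt = t^(-nu) dv
    v += step
integral = total * step

lhs = A**(-nu)
rhs = 4**(-nu) / math.gamma(nu) * integral
print(lhs, rhs)   # the two sides agree
```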
\subsection{Bounding $\|\delta_{\lambda, y}\|_1$ and special weights.}
\begin{lemma}\label{deltabound} For $s=\operatorname{Re} \lambda>0$ we have for a constant $C=C(\lambda, n)>0$ that
$$\|\delta_{\lambda, y}\|_1 \asymp
\begin{cases*} C & if $0<s<\frac{1}{2}$,\\
C |\log(1-|y|^2)_+| & if $s=\frac{1}{2}$,\\
C (1-|y|^2)_+^{-s+\frac{1}{2}} & if $s>\frac{1}{2}$,
\end{cases*} \qquad \qquad (|y|<1).$$
\end{lemma}
\begin{proof}
To begin with we have
\begin{align*} \|\delta_{\lambda,y}\|_1&\asymp \int_{\mathbb{R}^n} |1+(x+iy)^2|^{-(n/2+s)}\ dx \\
&\asymp \int_{\mathbb{R}^n} (1-|y|^2 +|x|^2+ 2|\langle x, y\rangle|)^{-(n/2+s)}\ dx\ .
\end{align*}
With $\gamma=\sqrt{1-|y|^2}$ we find
\begin{align*} \|\delta_{\lambda,y}\|_1&\asymp \int_{\mathbb{R}^n} (\gamma^2 +|x|^2+ 2|\langle x, y\rangle|)^{-(n/2+s)}\ dx\\
&= \int_{\mathbb{R}^n} (\gamma^2 +\gamma^2|x|^2+ 2 \gamma |\langle x, y\rangle|)^{-(n/2+s)}\ \gamma^n dx\\
&= \gamma^{-2s}\int_{\mathbb{R}^n} (1 +|x|^2+ 2 |\langle x, \gamma^{-1}y \rangle|)^{-(n/2+s)}\ dx\ .
\end{align*}
Set
$$I_n(s,\gamma):=\int_{\mathbb{R}^n} (1 +|x|^2+ 2 |\langle x, \gamma^{-1}y \rangle|)^{-(n/2+s)}\ dx\,. $$
Then it remains to show that
\begin{equation} \label{Ins} I_n (s,\gamma) \asymp\begin{cases*} \gamma^{2s} & if $0<s<\frac{1}{2}$, \\
\gamma |\log \gamma| & if $s=\frac{1}{2}$, \\
\gamma & if $s>\frac{1}{2}$
\end{cases*} \, .\end{equation}
We first reduce the assertion to the case $n=1$ and assume $n\geq 2$.
By rotational symmetry we may assume that $y=y_1 e_1$ is a multiple of the first unit vector with $1/2<y_1 <1$. Further we write $x=(x_1,x')$ with $x'\in \mathbb{R}^{n-1}$. Introducing polar coordinates $r=|x'|$, we find
\begin{align*}
&I_n(s,\gamma) = \int_{\mathbb{R}^n} (1+ |x'|^2 +x_1^2+2 \gamma^{-1}|x_1|y_1 )^{-(n/2+s)}\ dx\\
& \asymp \int_0^\infty \int_0^\infty r^{n-2} (1+ r^2 +x_1^2+2 \gamma^{-1}x_1y_1 )^{-(n/2+s)} dx_1 \ dr \, .
\end{align*}
With $a^2:=1 + x_1^2 +2x_1 y_1 \gamma^{-1}$ this rewrites as
$$I_n(s, \gamma)\asymp \int_0^\infty \int_0^\infty r^{n-2} (r^2 +a^2)^{-(n/2+s)} \ dr \ dx_1 $$
and with the change of variable $r=at$ we arrive at a splitting of integrals
\begin{align*} I_n(s,\gamma) &\asymp \int_0^\infty \int_0^\infty t^{n-2} (1+t^2)^{-\frac{n}{2} -s} a^{- n - 2s} a^{n-2} a \ dt \ dx_1 \\
&= \underbrace{\left(\int_0^\infty t^{n-2} (1+t^2)^{-\frac{n}{2} -s} \ dt \right)}_{:=J_n(s)} \cdot \underbrace{\left (\int_0^\infty
( 1 + x_1^2 +2 \gamma^{-1} x _1 y_1)^{-s -\frac{1}{2}} \ dx_1\right)}_{=I_1(s,\gamma)}\, . \end{align*}
Now $J_n(s)$ remains finite as long as $n\geq 2$ and $s>0$. Thus we have reduced
the situation to the case of $n=1$ which we finally address.
\par It is easy to check that $ I_1(s,\gamma) \asymp \gamma^{2s}$ for $ 0 < s < 1/2 $ and $ I_1(s,\gamma) \asymp \gamma $ for $ s >1/2$. When $ s = 1/2 $ we can evaluate $ \gamma^{-1} I_1(1/2,\gamma) $ explicitly. Indeed, by a simple computation we see that $ \gamma^{-1} I_1(1/2,\gamma)$ is given by
$$ 2 \int_0^\infty \frac{1}{(x_1+y_1)^2- (y_1^2-\gamma^2)} dx_1 = \frac{-1}{ \sqrt{y_1^2-\gamma^2}}
\log \frac{y_1- \sqrt{y_1^2-\gamma^2}}{y_1 + \sqrt{y_1^2-\gamma^2}}. $$
This gives the claimed estimate.
\end{proof}
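The three regimes of the lemma are easy to observe numerically. In the following Python sketch (the value $y_1=0.9$ and the quadrature parameters are ad-hoc choices) the ratio of $I_1(s,\gamma)$ to the predicted model function remains of comparable size as $\gamma$ shrinks.

```python
import math

# Illustration of the three regimes of the lemma:
# I_1(s,gamma) ~ gamma^(2s), gamma*|log gamma|, gamma  for s <, =, > 1/2.

def I1(s, gamma, y1=0.9):
    # I_1 = 2 int_0^oo (1 + x^2 + 2 x y1/gamma)^(-(s+1/2)) dx, via x = e^u
    step, U = 1e-3, 40.0
    total, u = 0.0, -U
    while u <= U:
        x = math.exp(u)
        total += (1 + x * x + 2 * x * y1 / gamma)**(-(s + 0.5)) * x
        u += step
    return 2 * total * step

models = {0.25: lambda g: g**0.5,
          0.5: lambda g: g * abs(math.log(g)),
          0.75: lambda g: g}
ratios = {s: [I1(s, g) / m(g) for g in (1e-2, 1e-3)] for s, m in models.items()}
print(ratios)  # for each s the two ratios stay of comparable size
```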
For $\alpha>0$ we now define special weight functions by
\begin{equation} \label{special weight} {\bf w}_\lambda^\alpha(z) =
(2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-|y|^2\right)_+^{\alpha -1} \, \qquad (z=x+iy\in \mathcal{T})\, .\end{equation}
As a consequence of Lemma \ref{deltabound} we obtain
\begin{cor} The weight ${\bf w}_\lambda^\alpha$
satisfies the integrability condition \eqref{request w} precisely for
$$\alpha>\max\{2s-1, 0\}\, .$$
\end{cor}
\begin{rmk} Observe that ${\bf w}_{\lambda}^\alpha(z)$ is a power of the Iwasawa projection ${\bf a} (y)$. It would be interesting to explore this further in higher rank, i.e.
whether one can find suitable weights which are of the form
$${\bf w}_\lambda(ny)=|{\bf a}(y)^\alpha|\qquad ny \in \mathcal{T}$$
for some $\alpha=\alpha(\lambda)\in \mathfrak{a}^*$.
\end{rmk}
For later reference we also record the explicit expression
\begin{equation} {\bf w}_{\lambda,a}^\alpha(z) =
(2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-\frac{|y|^2}{a^2}\right)_+^{\alpha -1}\end{equation}
for the rescaled weights.
In the next subsection we will show that the general integrability
condition for the weight function \eqref{request w} is sufficient, but not sharp. By a direct use of the Plancherel theorem for the Fourier transform on $ \mathbb{R}^n $ we will show that one can do better for $\mathbf{H}_n(\mathbb{R})$.
\subsection{Isometric identities}
Let $K_\lambda$ be the Macdonald Bessel function
and $I_{\alpha+n/2-1} $ the modified Bessel function of the first kind with $\alpha>0$. For $s:=\operatorname{Re} \lambda >0$, we define non-negative weight functions
\begin{equation}\label{weigh}
w_\lambda^\alpha(\xi): = |\xi|^{2s} \left|K_{\lambda}( |\xi|)\right|^2 \frac{I_{\alpha+n/2-1}(2|\xi|)}{(2|\xi|)^{\alpha+n/2-1}}\qquad (\xi \in \mathbb{R}^n)\, .\end{equation}
\begin{theorem} \label{thm level isometry} Let $\alpha>0, \lambda\in \mathbb{C}$, and
$s=\operatorname{Re} \lambda>0$. There exists an explicit constant $c_{n,\alpha,\lambda} >0$ such that for all $f \in L^2(\mathbb{R}^n)$ and $\phi_a=\mathcal{P}_\lambda f(\cdot, a)$ we have the identity
\begin{equation} \label{level isometry} \int_{\mathcal{T}_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)\, dz =c_{n,\alpha,\lambda} \, a^{-2s+2n} \int_{\mathbb{R}^n} |\widehat{f}(\xi)|^2 \, w_{\lambda}^\alpha(a \xi) \, d\xi \qquad (a>0)\end{equation}
where $ {\bf w}_\lambda^\alpha$ is as in \eqref{special weight}.
\end{theorem}
\begin{proof} Let us set
$$ \varphi_{\lambda,a}(x) = \pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} (a^2+|x|^2)^{-(\lambda+n/2)} $$ so that we can write $ \phi_a(z) = \mathcal{P}_\lambda f(z,a) = a^{\lambda+n/2} f \ast \varphi_{\lambda,a}(z).$ In view of the Plancherel theorem for the Fourier transform we have
$$ \int_{\mathbb{R}^n} |\phi_a(x+iy)|^2 dx = a^{2s+n} \int_{\mathbb{R}^n} e^{-2 y \cdot \xi} |\widehat{f}(\xi)|^2 |\widehat{\varphi}_{\lambda,a}(\xi)|^2 \, d\xi\, . $$
Integrating both sides of the above against the weight function ${\bf w}_{\lambda,a}^\alpha(z)$ we obtain the identity
\begin{equation} \label{main id} \int_{\mathcal{T}_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz = a^{2 s+n} \int_{\mathbb{R}^n} |\widehat{f}(\xi)|^2 \, v_a^\alpha(\xi) \, |\widehat{\varphi}_{\lambda,a}(\xi)|^2 \, d\xi\end{equation}
where $ v_a^\alpha(\xi) $ is the function defined by
$$ v_a^\alpha(\xi) = (2\pi)^{-n/2} \, \frac{1}{\Gamma(\alpha)} \, \int_{|y| < a} e^{-2 y \cdot \xi}\, \left(1-\frac{|y|^2}{a^2}\right)_+^{\alpha-1}\ dy.$$
Both functions $ v_a^\alpha(\xi) $ and $\widehat{\varphi}_{\lambda,a}(\xi)$ can be evaluated explicitly in terms of Bessel and Macdonald functions. We begin with
$v_a^\alpha$ and recall that the Fourier transform of $(1-|y|^2)^{\alpha-1}_+$ is explicitly known in terms of $J$-Bessel functions:
$$
(2\pi)^{-n/2} \int_{\mathbb{R}^n} (1-|y|^2)^{\alpha-1}_+ e^{-i y\cdot \xi} dy = \Gamma(\alpha) 2^{\alpha-1} |\xi|^{-\alpha-n/2+1}J_{\alpha+n/2-1}(|\xi|).
$$
As the $J$-Bessel functions analytically extend to the imaginary axis, it follows that
\begin{equation}
\label{FTweight}
(2\pi)^{-n/2} \, a^{-n}\, \int_{\mathbb{R}^n} \left( 1-\frac{|y|^2}{a^2} \right)_+^{\alpha-1} e^{-2y\cdot \xi} dy = \Gamma(\alpha) 2^{\alpha-1} \, (2a |\xi|)^{-\alpha-n/2+1} I_{\alpha+n/2-1}(2 a |\xi|)
\end{equation}
where $ I_{\alpha+n/2-1}$ is the modified Bessel function of first kind. We arrive at
\begin{equation} \label{vsa}
v_a^\alpha(\xi)=2^{\alpha-1} a^n (2a |\xi|)^{-\alpha-n/2+1} I_{\alpha+n/2-1}(2 a |\xi|)\, .\end{equation}
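The Fourier transform formula for $(1-|y|^2)_+^{\alpha-1}$ quoted above can be spot-checked in dimension $n=1$ with $\alpha=2$, where $\alpha+n/2-1=3/2$ and $J_{3/2}(z)=\sqrt{2/(\pi z)}\,(\sin z/z-\cos z)$ is elementary; the sample value of $\xi$ below is an arbitrary choice.

```python
import math

# Spot check of the J-Bessel Fourier transform formula for n = 1, alpha = 2,
# where alpha + n/2 - 1 = 3/2 and J_{3/2}(z) = sqrt(2/(pi z))(sin z/z - cos z).
alpha, xi = 2, 1.7   # ad-hoc sample values

# lhs = (2 pi)^(-1/2) int_{-1}^{1} (1-y^2)^(alpha-1) e^(-i y xi) dy
step, total, y = 1e-5, 0.0, -1.0
while y <= 1.0:
    total += (1 - y * y)**(alpha - 1) * math.cos(y * xi)  # odd part cancels
    y += step
lhs = total * step / math.sqrt(2 * math.pi)

J32 = math.sqrt(2 / (math.pi * xi)) * (math.sin(xi) / xi - math.cos(xi))
rhs = math.gamma(alpha) * 2**(alpha - 1) * xi**(1 - alpha - 1 / 2) * J32
print(lhs, rhs)   # agree
```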
\par Moving on to $\widehat{\varphi}_{\lambda,a}(\xi)$ we use the integral representation
$$ \varphi_{\lambda,a}(x) = \frac{(4 \pi)^{-n/2} 2^{-2\lambda}}{\Gamma(2\lambda)} \int_0^\infty e^{-\frac{1}{4t}(a^2+|x|^2)} t^{-n/2-\lambda-1} \, dt $$
and calculate the Fourier transform as
$$ \widehat{\varphi}_{\lambda,a}(\xi) = \frac{(2 \pi)^{-n/2} 2^{-2\lambda}}{\Gamma(2\lambda)} \int_0^\infty e^{-\frac{1}{4t}a^2} \, e^{-t|\xi|^2} \,t^{-\lambda-1} \, dt\, . $$
The Macdonald function of type $ \nu $ is given by the integral representation
$$ r^\nu K_\nu(r) = 2^{\nu-1} \int_0^\infty e^{-t-\frac{r^2}{4t}} t^{\nu-1} dt,$$
for any $ r >0.$ In terms of this function we have
\begin{equation} \label{phiK} \widehat{\varphi}_{\lambda,a}(\xi) = \frac{(2 \pi)^{-n/2} 2^{1-\lambda}}{\Gamma(2\lambda)} a^{-2\lambda} (a|\xi|)^\lambda K_\lambda(a|\xi|)\, .\end{equation}
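The Macdonald integral representation can likewise be checked at $\nu=1/2$, where $K_{1/2}(r)=\sqrt{\pi/(2r)}\,e^{-r}$ is elementary; the sample values and quadrature parameters below are ad-hoc choices.

```python
import math

# Check of the Macdonald integral representation at nu = 1/2, where
# K_{1/2}(r) = sqrt(pi/(2r)) e^(-r) is elementary.
nu, r = 0.5, 1.3   # ad-hoc sample values
lhs = r**nu * math.sqrt(math.pi / (2 * r)) * math.exp(-r)

step, U = 1e-3, 40.0
total, u = 0.0, -U
while u <= U:                    # integrate in t = e^u
    t = math.exp(u)
    total += math.exp(-t - r * r / (4 * t)) * t**nu   # t^(nu-1) dt = t^nu du
    u += step
rhs = 2**(nu - 1) * total * step
print(lhs, rhs)   # agree
```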
Using these explicit formulas we obtain from \eqref{main id} that
$$ \int_{\mathcal{T}_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz =c_{n,\alpha,\lambda}\, a^{-2s+2n} \int_{\mathbb{R}^n} |\widehat{f}(\xi)|^2 \, w_{\lambda}^\alpha(a \xi) \, d\xi$$
for an explicit constant $c_{n,\alpha,\lambda} $ and
\begin{equation} \notag w_\lambda^\alpha(\xi)= |\xi|^{2s} \left|K_{\lambda}( |\xi|)\right|^2 \frac{I_{\alpha+n/2-1}(2|\xi|)}{(2|\xi|)^{\alpha+n/2-1}},
\end{equation}
by \eqref{vsa} and \eqref{phiK}.
\end{proof}
In the sequel we write $\mathcal{B}_\alpha(\Xi_S, \lambda)$ to indicate the dependence on $\alpha>0$.
Now we are ready to state the main theorem of this section:
\begin{theorem}\label{thm hyp} For $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$ the Poisson transform establishes
an isomorphism of Banach spaces
$$\mathcal{P}_\lambda: L^2(N)\to \mathcal{B}_\alpha(\Xi_S, \lambda)\,.$$
If moreover $\lambda=s$ is real, then $\mathcal{P}_\lambda$ is an isometry up to positive scalar.
\end{theorem}
\begin{proof} The behaviour of $ w_s^\alpha(\xi) $ can be read off from the well known asymptotic properties of the functions $ K_\nu $ and $ I_\alpha.$ Indeed we can show (see Proposition \ref{prop monotone} below) that there exist $c_1, c_2>0$ such that
$$ c_1 \, (1+|\xi|)^{-(\alpha+ \frac{n+1}{2}-2s)} \leq w_\lambda^\alpha(\xi) \leq c_2 \, (1+|\xi|)^{-(\alpha+\frac{n+1}{2}-2s)} \qquad (\xi \in \mathbb{R}^n)\, .$$
For $\alpha > \max\{2s - \frac{n+1}{2}, 0\}$ the exponent $\alpha+\frac{n+1}{2}-2s$ is positive, so the upper bound above
gives $ w_\lambda^\alpha(\xi) \leq c_3 $ and consequently we have
$$ \int_{\mathcal{T}_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz \leq c_{n,\alpha,\lambda}\, a^{-2s+2n} \int_{\mathbb{R}^n} |\widehat{f}(\xi)|^2 \, d\xi .$$
This inequality implies that $\mathcal{P}_\lambda$ is well defined. The surjectivity follows as in the proof of Theorem \ref{maintheorem} and with that the first assertion is established. The second part is a consequence of Theorem
\ref{thm level isometry} and the stated monotonicity in Proposition \ref{prop monotone}.
\end{proof}
\begin{proposition}\label{prop monotone}
Let $\lambda = s>0$ be real and $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$. Then $w_s^\alpha(r)$, $r >0 $, is positive and monotonically decreasing. Moreover, $w_s^\alpha(0) =2^{2s-\alpha-\frac{n}{2}-1} \frac{\Gamma(s)^2}{\Gamma(\alpha+\frac{n}{2})} $ and $w_s^\alpha(r) \sim c_\alpha \, r^{-(\alpha+\frac{n+1}{2}-2s)}$ for $ r \to \infty$.
\end{proposition}
\begin{proof} From the definition, we have
$$ w_s^\alpha(r)= r^{2s} K_s(r)^2 \frac{I_{\alpha+n/2-1}(2r)}{(2r)^{\alpha+n/2-1}}.$$
Evaluating $ r^sK_s(r)$ and $ \frac{I_{\alpha+n/2-1}(2r)}{(2r)^{\alpha+n/2-1}}$ at $ r =0 $ by making use of the limiting forms close to zero (see \cite[10.30]{OlMax}) we obtain $ w_s^\alpha(0) = 2^{2s-\alpha-\frac{n}{2}-1} \frac{\Gamma(s)^2}{\Gamma(\alpha+\frac{n}{2})}$ as claimed in the proposition. The well known asymptotic properties of $ K_s(r) $ and $ I_\beta(r) $, see \cite[10.40]{OlMax}, prove the other claim. It therefore remains to show that $ w_s^\alpha(r) $ is monotonically decreasing, which will follow once we show that the derivative of $ w_s^\alpha(r)$ is negative at any $ r >0.$
Making use of the well known relations
$$
\frac{d}{dr}\big( r^s K_s(r)\big) = -r^s K_{s-1}(r),\qquad \frac{d}{dr}\Big(\frac{I_\beta(r)}{r^\beta}\Big) = \frac{I_{\beta+1}(r)}{r^\beta}$$
a simple calculation shows that
$$ \frac{d}{dr}\big( w_s^\alpha(r)\big) = 2 r^{2s} K_s(r) (2r)^{-(\alpha+n/2-1)} F(r) $$
where $ F(r) = \Big( K_s(r) I_{\alpha+n/2}(2r) - K_{s-1}(r) I_{\alpha+n/2-1}(2r) \Big).$ Thus we only need to check that $ F(r) < 0 $ or equivalently
$$ \frac{I_{\alpha+n/2}(2r)}{I_{\alpha+n/2-1}(2r)} < \frac{K_{s-1}(r)}{K_s(r)}.$$
In order to verify this we make use of the following inequality proved by J. Segura \cite[Theorem 1.1]{JS}: for any $ \nu\ge 0$ and $r > 0 $ one has
$$ \frac{I_{\nu+1/2}(r)}{I_{\nu-1/2}(r)} < \frac{r}{\nu+\sqrt{\nu^2+r^2}} \leq \frac{K_{\nu-1/2}(r)}{K_{\nu+1/2}(r)}.$$
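At half-integer orders both ratios in this inequality are elementary, so it can be observed directly: for $\nu=1$ one has $I_{3/2}(r)/I_{1/2}(r)=\coth r-1/r$ and $K_{1/2}(r)/K_{3/2}(r)=r/(1+r)$. A quick numerical sketch (the sample points are arbitrary):

```python
import math

# Sanity check of Segura's inequality at nu = 1, where the half-integer
# Bessel functions are elementary:
#   I_{3/2}(r)/I_{1/2}(r) = coth(r) - 1/r,  K_{1/2}(r)/K_{3/2}(r) = r/(1+r).
rows = []
for r in (0.1, 1.0, 5.0, 25.0):
    i_ratio = 1 / math.tanh(r) - 1 / r
    middle = r / (1 + math.sqrt(1 + r * r))
    k_ratio = r / (1 + r)
    rows.append((r, i_ratio, middle, k_ratio))
    print(r, i_ratio, middle, k_ratio)  # i_ratio < middle <= k_ratio
```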
Replacing $ r $ by $ 2r $ in the first inequality, from the above we deduce that
$$ \frac{I_{\nu+1/2}(2r)}{I_{\nu-1/2}(2r)} < \frac{r}{\nu/2+\sqrt{\nu^2/4+r^2}} \leq \frac{K_{(\nu-1)/2}(r)}{K_{(\nu+1)/2}(r)}.$$
In the above we choose $ \nu= 2s -1 $ so that $ (\nu-1)/2 = s-1 $ and $ (\nu+1)/2 = s.$ With $ \beta = \alpha+(n-1)/2 $ we have
$$ \frac{I_{\beta+1/2}(2r)}{I_{\beta-1/2}(2r)} < \frac{r}{\beta/2+\sqrt{\beta^2/4+r^2}} < \frac{r}{\nu/2+\sqrt{\nu^2/4+r^2}}\leq \frac{K_{(\nu-1)/2}(r)}{K_{(\nu+1)/2}(r)}$$
provided $ \beta > \nu$, i.e. $ \alpha+(n-1)/2 > 2s-1$ which is precisely the condition
on $\alpha$ in the statement of the proposition.
\end{proof}
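For $s=1/2$, $n=1$, $\alpha=1$ all special functions involved are elementary (here $\beta=\alpha+n/2-1=1/2$, and $w$ reduces to $\sqrt{\pi/8}\,(1-e^{-4r})/(2r)$), which gives a quick independent check of both the value at zero and the monotonicity; the sample points below are ad-hoc.

```python
import math

# Elementary check of the proposition for s = 1/2, n = 1, alpha = 1.
def w(r):
    K_half = math.sqrt(math.pi / (2 * r)) * math.exp(-r)          # K_{1/2}(r)
    I_half = math.sqrt(2 / (math.pi * 2 * r)) * math.sinh(2 * r)  # I_{1/2}(2r)
    return r * K_half**2 * I_half / math.sqrt(2 * r)

# value at 0: 2^(2s - alpha - n/2 - 1) Gamma(s)^2 / Gamma(alpha + n/2)
w0 = 2**(2 * 0.5 - 1 - 1 / 2 - 1) * math.gamma(0.5)**2 / math.gamma(1.5)
print(w0, w(1e-8))   # the limit at 0 matches
vals = [w(r) for r in (0.1, 0.5, 1.0, 2.0, 5.0)]
print(vals)          # monotonically decreasing
```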
\section{Remarks on the extension problem}
We recall that given a function $f \in L^2(N)$ the Poisson transform $\phi =\mathcal{P}_\lambda(f)$ for $\operatorname{Re} \lambda>0$, viewed as a function on $S$, gives us a solution
$$\Delta_S \phi =( |\lambda|^2 -|\rho|^2) \phi$$
such that we can retrieve $f$ through the boundary value map
$$b_\lambda(f)(n) = {\bf c}(\lambda)^{-1}\lim_{a\to \infty\atop a\in A^-} a^{\lambda -\rho} \phi(na)\, .$$
We normalize $\mathcal{P}_\lambda$ in the sequel by ${1\over {\bf c}(\lambda)} \mathcal{P}_\lambda$ and replace $\phi$ by $\psi(na) = a^{\lambda -\rho} \phi(na)$. Hence it is natural
to ask about the differential equation which $\psi$ satisfies. To begin with we derive a formula for $\Delta_S$ in $N\times A$-coordinates. Note that $\Delta_Z$ on $Z=G/K$ descends from the Casimir operator $\mathcal{C}$
on right $K$-invariant functions on $G$ and so we start with $\mathcal{C}$. We assume that $G$ is semisimple and let $\kappa:\mathfrak{g}\times \mathfrak{g}\to \mathbb{R}$ be the Cartan--Killing form.
For an orthonormal basis $H_1, \ldots, H_n$ of $\mathfrak{a}$ with respect to $\kappa|_{\mathfrak{a} \times \mathfrak{a}} $ we form the operator $\Delta_A= \sum_{j=1}^n H_j^2$ viewed
as a left $G$-invariant differential operator on $G$. Likewise we define $\Delta_M$ with respect to an orthonormal basis of $\mathfrak{m}$ with respect to
$-\kappa|_{\mathfrak{m} \times \mathfrak{m}}$. Now for each root space $\mathfrak{g}^\alpha$, $\alpha\in \Sigma^+$ we choose a basis $E_\alpha^j$, $1\leq j \leq m_\alpha=\dim \mathfrak{g}^\alpha$, such that with
$F_\alpha^j:=-\theta(E_\alpha^j)\in \mathfrak{g}^{-\alpha}$ we have $\kappa(E_\alpha^j, F_\alpha^k)=\delta_{jk}$.
Having said all that we obtain
$$\mathcal{C}=\Delta_A -\Delta_M+ \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} E_\alpha^j F_\alpha^j + F_\alpha^j E_\alpha^j\, .$$
Now $\kappa|_{\mathfrak{a} \times \mathfrak{a}}$ complex linearly extended identifies $\mathfrak{a}_\mathbb{C}$ with $\mathfrak{a}_\mathbb{C}^*$ and for $\lambda\in \mathfrak{a}_\mathbb{C}^*$ we let $H_\lambda\in \mathfrak{a}_\mathbb{C}$ such that
$\lambda = \kappa(\cdot, H_\lambda)$. We further recall that $[E_\alpha^j, F_\alpha^j]=H_\alpha$.
Thus we can rewrite $\mathcal{C}$ as
$$\mathcal{C}=\Delta_A -H_{2\rho} - \Delta_M+ \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2E_\alpha^j F_\alpha^j \, .$$
Now note that $2E_\alpha^j F_\alpha^j = 2E_\alpha^j E_\alpha^j+ 2E_\alpha^j( F_\alpha^j - E_\alpha^j)$ with
$F_\alpha^j -E_\alpha^j\in \mathfrak{k}$. Therefore $\mathcal{C}$ descends on right $K$-invariant smooth functions on $G$ to the operator
$$\Delta_Z = \Delta_A -H_{2\rho} + \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2E_\alpha^j E_\alpha^j \, .$$
Now if we identify $Z=G/K$ with $N\times A$ then we see that at a point $(n,a)$ the operator $\Delta_S$ is given in the separated form
$$\Delta_S = \Delta_A -H_{2\rho} + \underbrace{\sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2a^{2\alpha}E_\alpha^j E_\alpha^j}_{=: \Delta_N^a}$$
with $\Delta_N^a$ acting on the right of $N$. Hence
$$\Delta_S = \Delta_A -H_{2\rho} +\Delta_N^a\, .$$
Let now $\phi = \phi(n,a)$ be an eigenfunction of $\Delta_S$ to parameter $\lambda\in \mathfrak{a}_\mathbb{C}^*$, say
$$\Delta_S \phi =( |\lambda|^2 - |\rho|^2) \phi\, .$$
We think of $\phi=\mathcal{P}_\lambda(f)$ for some generalized function $f$ on $N$ so that we can retrieve $f$ from $\phi$ via the boundary value map
\eqref{boundary}. As motivated above we replace $\phi$ by $\psi(n,a)= a^{\lambda -\rho} \phi(n,a)$ and see what differential
equation $\psi$ satisfies. Since $\phi = a^{\rho-\lambda} \psi$ we have
$$(\Delta_A -H_{2\rho})a^{\rho-\lambda} \psi = a^{\rho -\lambda} (|\lambda|^2 -|\rho|^2 + \Delta_A - H_{2\rho} + \underbrace{ 2 H_{\rho -\lambda}}_{ =H_{2\rho} - H_{2\lambda}})\psi\, .$$
This means that $\psi$ satisfies the differential equation
\begin{equation} \label{ext1}(\Delta_A - H_{2 \lambda} + \Delta_N^a)\psi(n,a)=0\, .\end{equation}
If we assume now that $\operatorname{Re} \lambda>0$ and $\phi=\mathcal{P}_\lambda f$, then we have
\begin{equation} \label{ext2}\lim_{a\to \infty\atop a\in A^-} \psi(n,a)= f(n)\, .\end{equation}
We refer to the pair of equations \eqref{ext1} and \eqref{ext2} as the extension problem with parameter $\lambda$ for the operator $\Delta_S$. Given $f$ it has a unique solution provided
$\Delta_S$ generates all of $\mathbb{D}(Z)$, that is, the real rank of $G$ is one.
It is instructive to see what this becomes classically for the real hyperbolic spaces with $N=\mathbb{R}^n$. Here we use the classical (i.e.\ normalized) Poisson transform
and the extension problem becomes, for $(x,t)\in \mathbb{R}^n \times \mathbb{R}_{>0}$ (we use the notation from the previous section),
\begin{equation} \label{ext1a}
\big(\partial_t^2 + \frac{1-2\lambda}{t} \partial_t+ \Delta_{\mathbb{R}^n}\big)\psi(x,t)=0\qquad (x\in \mathbb{R}^n, t>0)\, .\end{equation}
If we assume now that $\operatorname{Re} \lambda>0$ and $\phi=\mathcal{P}_\lambda f$, then we have
\begin{equation} \label{ext2a}\lim_{t\to 0} \psi(x,t)= f(x)\, .\end{equation}
The general theory then tells us that there is a unique solution $\psi$ of the extension problem \eqref{ext1a} and \eqref{ext2a}; this is the extension problem considered by Caffarelli and Silvestre in \cite{CS}.
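For $n=1$ one can check by finite differences that the kernel $t^{2\lambda}(t^2+|x|^2)^{-(\lambda+n/2)}$ of the normalized transform solves the Caffarelli--Silvestre equation $\psi_{tt}+\frac{1-2\lambda}{t}\psi_t+\Delta_x\psi=0$ with $s=\lambda$, as in \cite{CS}; in the sketch below the value of $\lambda$, the test point and the step size are ad-hoc choices.

```python
import math

# Finite-difference check (n = 1) that t^(2 lam) (t^2 + x^2)^(-(lam + n/2))
# solves psi_tt + ((1 - 2 lam)/t) psi_t + psi_xx = 0.
n, lam = 1, 0.3        # ad-hoc, 0 < lam < 1

def psi(x, t):
    return t**(2 * lam) * (t * t + x * x)**(-(lam + n / 2))

x0, t0, h = 0.4, 0.7, 1e-4   # arbitrary test point and step
ptt = (psi(x0, t0 + h) - 2 * psi(x0, t0) + psi(x0, t0 - h)) / h**2
pt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
pxx = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2
residual = ptt + (1 - 2 * lam) / t0 * pt + pxx
print(residual)   # approximately zero
```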
\section*{Acknowledgement}
The third author (L.R.) is supported by the Basque Government through the BERC 2022-2025 program and by the Spanish Ministry of Science, Innovation and Universities through BCAM Severo Ochoa excellence accreditation SEV-2017-2018 and PID2020-113156GB-I00. She also acknowledges the RyC project RYC2018-025477-I and Ikerbasque.
2206.14174
\section{Introduction}
In this paper we continue our long running programme to identify (up to conjugacy) all the finitely many arithmetic lattices $\Gamma$ in the group of orientation preserving isometries of hyperbolic $3$-space $Isom^+(\mathbb H^3)\cong PSL(2,\mathbb C)$ generated by two elements of finite order $p$ and $q$; we also allow $p=\infty$ or $q=\infty$, meaning that a generator is parabolic. In \cite{MM} it is proved that there are only finitely many such lattices. In fact it is widely expected that there are only finitely many two generator arithmetic lattices in total in $PSL(2,\mathbb C)$, though this remains unknown. There are infinitely many distinct three generator arithmetic lattices.
It is well known that for such a lattice as $\Gamma$ above, $\mathbb H^3/\Gamma$ is geometrically the $3$-sphere with a marked trivalent graph of singular points -- the vertices of the graph corresponding to the finite spherical triangle groups and dihedral groups. The possible graphs for the cases at hand can be seen in Figure 1. These precisely follow the descriptions of all lattices in $PSL(2,\mathbb C)$ generated by two parabolic elements as proved by Aimi, Lee, Sakai, and Sakuma, and also Akiyoshi, Ohshika, Parker, Sakuma and Yoshida, \cite{ALSS,AOPSY}, who resolved a conjecture of Agol to this effect. Our results suggest that this conjecture (modified in the obvious way as per Figure 1) is valid for all groups generated by two elements of finite order which are not freely generated by those two elements.
\medskip
In two dimensions a lattice in $Isom^+(\mathbb H^2)$ generated by two elements of finite order is necessarily a triangle group. Takeuchi \cite[Theorem 3]{Take} has identified the $82$ arithmetic Fuchsian triangle groups. Here is a summary of what is known in dimension three.
\begin{itemize}
\item There are exactly $4$ arithmetic lattices generated by two parabolic elements. These are all knot and link complements and so torsion free. The explicit groups can be found in \cite{GMM}.
\item There are exactly $14$ arithmetic lattices generated by a parabolic element and an elliptic of order $2\leq p < \infty$. Necessarily $p\in \{2,3,4,6\}$. Six of these groups have $p=2$, three have $p=3$, and three have $p=4$. The explicit groups can be found in \cite{CMMO}.
\item There are exactly $16$ arithmetic lattices generated by two elements of finite orders $p$ and $q$ with $p,q \geq 6$. In each case $p=q$: twelve of these groups have $p=q=6$, two have $p=q=12$, and there is one each for $p=q=8$ and $p=q=10$. The explicit groups can be found in \cite{MM}.
\end{itemize}
Most of these groups are orbifold surgeries on two bridge knots and links of ``small'' slope. Here we will meet examples of Dehn surgeries on knots of more than 12 crossings.
\medskip
From this information the reader can see that there are basically four cases left to deal with in order to complete the identification we seek. Those are when one generator is elliptic of order $q=2,3,4,5$ and $2\leq p < \infty$. In fact a discrete subgroup of $Isom^+(\mathbb H^3)$ generated by two elements of the same finite order (say $p$) admits a $\mathbb Z_2$ extension to a group generated by an element of order two and an element of order $p$. As a finite extension remains an arithmetic lattice, there really are now only three cases to deal with, $q\in\{3,4,5\}$, and here we assume that $q=4$ and $2\leq p\leq \infty$. Below is a summary of our main result.
\begin{theorem}\label{mainthm} Let $\Gamma=\langle f,g \rangle$ be an arithmetic lattice generated by $f$ of order $4$ and $g$ of finite order $p\geq 2$. Then $p\in \{2,3,4,5,6\}$ and the degree of the invariant trace field
\[ k\Gamma=\mathbb Q(\{\mbox{\rm{tr\,}}^2(h):h\in\Gamma\}) \]
is at most $4$.
\begin{enumerate}
\item If $p=2$, then there are fifty-four groups.
\item If $p=3$, then there are ten groups.
\item If $p=4$, then there are twenty-seven groups.
\item If $p=5$, then there is one group.
\item If $p=6$, then there are five groups.
\end{enumerate}
\end{theorem}
\medskip
The groups appearing in Theorem \ref{mainthm} all fall into the pattern of Heckoid groups as described in \cite{ALSS,AOPSY}. The singular graph of $\mathbb H^3/\Gamma$ is one of the following.
\scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{HeckoidGroups}}\\
{\bf Figure 1.} {\it The Heckoid groups. When $p\neq 4$ the braid must have an even number of crossings in the first example. Here $m\geq 1$.}
\bigskip
In the tables that follow we set
\begin{equation}
\gamma =\gamma(f,g) = \mbox{\rm{tr\,}} [f,g]-2.
\end{equation}
and give the minimal polynomial for $\gamma$. With the orders of the generators, this completely determines the arithmetic structure of the group, the invariant trace field $k\Gamma=\mathbb Q(\gamma)$ and the associated quaternion algebra -- we discuss these things below.
Then $\gamma$ is an invariant of the Nielsen class of the generating pair $\{f,g\}$. With the order of the generators, which we shall always assume are primitive (that is their trace should be $\pm 2\cos \big(\frac{\pi}{p}\big)$), and $\gamma$ it is straightforward to construct a discrete and faithful representation of the group in $PSL(2,\mathbb C)$, \cite{GM1}. We give this representation below at (\ref{XY}). Note that the very difficult general problem of proving discreteness is overcome by the arithmetic criterion as described in \cite{GMMR} and \cite{MMpq}. We recall those results, and in particular the identification theorem, below at Theorem \ref{idthm}.
For each group $\Gamma$, as noted, we give the minimal polynomial for $\gamma$ and the discriminant of the invariant trace field $k\Gamma=\mathbb Q(\gamma)$. Next the column $(Farey,order)$ indicates a particular word in the group $\langle f,g \rangle$ has a particular order (or is the identity). Briefly this word is found as follows. The simple closed curves on the $4$ times punctured sphere $\IS_4$ separating one pair of points from another are enumerated by their slope, coming from the Farey expansion. The enumeration of these simple closed curves informs the deformation theory of Keen-Series-Maskit \cite{KS,KS2,KSM} which relates this slope (and the geodesic in the homotopy class) to a bending deformation along this geodesic of $\IS_4$ which terminates on the boundary of moduli space as the length of this curve shrinks to zero (and so the associated word becomes parabolic). This bending locus is called a pleating ray. Bending further creates cone manifolds some of which are discrete lattices when the cone angle becomes $2\pi/n$ for a positive integer $n$. The recent results of \cite{ALSS,AOPSY} show this process to describe all the discrete and faithful representations of groups generated by two parabolic elements which are not free. There is of course a strong connection with the Schubert normal form of a two-bridge knot or link, \cite{BZ}. If one takes a two-bridge link, say with Schubert normal form $r/s$, and performs orbifold $(p,0)$ and $(q,0)$ Dehn surgery on the components of the link, and if the result is hyperbolic, then the orbifold fundamental group of the space so formed is a discrete subgroup of $PSL(2,\mathbb C)$ generated by elliptic elements of order $p$ and $q$, say $f$ and $g$ respectively - the images of the two meridians around each link component (or the images of the two parabolic generators of the group). The word associated with the slope remains the identity. 
However the value $\gamma(f,g)$ may not always lie on the pleating ray of slope $\frac{r}{s}$ (we give more details below) but possibly on the pleating ray of slope $1-\frac{r}{s}$ - this is simply because of the way we chose to enumerate slopes. This then also provides the Conway notation for the knot or link complement via the continued fraction expansion of the slope $r/s$. There is one further issue here: the exact choice of generators in $PSL(2,\mathbb C)$ matters, since we classify all Nielsen pairs, and in some cases there may be different pairs generating the same group, which will obviously satisfy different relations. In the description of the group (in terms of the pleating ray $\gamma$ lies on) we will have the following setup:
If $X,Y\in PSL(2,\mathbb C)$, representing $f$ and $g$, are given by
\begin{equation}\label{XY} X= \left[\begin{array}{cc} \zeta & 1\\ 0 &\bar\zeta \end{array}\right], Y= \left[\begin{array}{cc} \eta & 0\\ \mu &\bar\eta \end{array}\right],\; \zeta = \cos\frac{\pi}{p}+i\sin\frac{\pi}{p},\;\; \eta = \cos\frac{\pi}{q}+i\sin\frac{\pi}{q},\end{equation}
we compute
\begin{equation}\label{mug}
\gamma(f,g) = \mu (\mu -4 \sin \frac{\pi }{p}\; \sin \frac{\pi }{q}).
\end{equation}
Thus two different choices for $\mu$ lead to the same group up to M\"obius conjugacy. One of these values of $\mu$ lies on the pleating ray and gives the ``presentation'' we choose - invariably we choose
\[ \mu = 1-\sqrt{1+\gamma},\quad \Im m(\mu)>0.\]
We make this choice so as to give the simplest presentation of the underlying orbifold, though often it does not matter and both choices of $\mu$ lead to conjugate groups.
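Formula (\ref{mug}) is straightforward to verify numerically from the matrices at (\ref{XY}); the following is an illustrative check (not part of the paper's method, and the sample value of $\mu$ is arbitrary):

```python
import numpy as np

p, q = 4, 3
zeta = np.exp(1j*np.pi/p)
eta = np.exp(1j*np.pi/q)
mu = 0.7 + 0.3j                     # arbitrary sample value of mu

X = np.array([[zeta, 1], [0, np.conj(zeta)]])
Y = np.array([[eta, 0], [mu, np.conj(eta)]])

# gamma(f,g) = tr [X,Y] - 2
comm = X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)
gamma_trace = np.trace(comm) - 2
gamma_formula = mu*(mu - 4*np.sin(np.pi/p)*np.sin(np.pi/q))

assert abs(gamma_trace - gamma_formula) < 1e-10
```

The agreement is exact in exact arithmetic: expanding $\mbox{\rm{tr\,}}[X,Y]-2$ gives $\mu^2+\mu(2\cos(\frac{\pi}{p}+\frac{\pi}{q})-2\cos(\frac{\pi}{p}-\frac{\pi}{q}))=\mu^2-4\mu\sin\frac{\pi}{p}\sin\frac{\pi}{q}$.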
There is a simple recipe for moving from a slope to a word and we give the algorithm for this in the appendix. For instance the first few Farey slopes (greater than or equal to $\frac{1}{2}$) are
\[ \left\{\frac{1}{2},\frac{4}{7},\frac{3}{5},\frac{5}{8},\frac{2}{3},\frac{5}{7},\frac{3}{4},\frac{4}{5},1\right\} \]
and in the same order the words are, with $x=X^{-1}$ and $y=Y^{-1}$,
\[ \{ XYxy ,\; XYxyXYxYXyxYXy ,\; XYxyXyxYXy, \; XYxyXyxYxyXYxYXy, \]
\[ XYxYXy,\; XYxYXyXyxYxyXy,\; XYxYxyXy,\; XYxYxYXyXy,\; Xy\}\]
Now let us describe how to read the tables by example. If we look at the $7^{th}$ entry of the $(4,3)$-arithmetic lattices table below, then $X=f$ has order $4$ and $Y=g$ has order $3$. We see $\gamma(f,g)\approx-2.55804+1.34707 i$, the complex root with positive imaginary part of its minimal polynomial $z^4+6z^3+13z^2+8z+1$ (there is only ever one conjugate pair of complex roots). The pair $(3/5,2)$ indicates that the word $w_{3/5} = XYxyXyxYXy$ is elliptic of order two when $X,Y$ have the form at (\ref{XY}) and $\mu$ has one of the two values solving (\ref{mug}), in this case
\begin{eqnarray*} \mbox{\rm{tr\,}} w_{3/5} &=& -\mu ^5+\sqrt{5 \sqrt{3}+38} \mu ^4-\left(2 \sqrt{3}+15\right) \mu ^3+\frac{\left(15 \sqrt{3}+7\right) \mu ^2}{\sqrt{2}}\\ && -\left(\sqrt{3}+10\right) \mu +\sqrt{\sqrt{3}+2}\end{eqnarray*}
which is $0$ when $\mu=1.79696 + 1.17706 i$. The other possibility for $\mu$ so that (\ref{mug}) holds is $0.652527 - 1.17706 i$, and for that choice $w_{3/5}$ is loxodromic. With minor additional work this gives us a presentation
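The numbers in this example can be reproduced with a few lines of numerical computation (an illustration only; tolerances reflect the five-figure values quoted above):

```python
import numpy as np

p, q = 4, 3
# gamma is the root with positive imaginary part of z^4+6z^3+13z^2+8z+1
roots = np.roots([1, 6, 13, 8, 1])
gamma = next(r for r in roots if r.imag > 1e-9)

# the two values of mu with mu*(mu - 4 sin(pi/p) sin(pi/q)) = gamma
s = 4*np.sin(np.pi/p)*np.sin(np.pi/q)        # = sqrt(6) for (p,q) = (4,3)
mu1, mu2 = sorted(np.roots([1, -s, -gamma]), key=lambda z: z.real)

assert abs(gamma - (-2.55804 + 1.34707j)) < 1e-3
assert abs(mu1 - (0.652527 - 1.17706j)) < 1e-3   # w_{3/5} loxodromic
assert abs(mu2 - (1.79696 + 1.17706j)) < 1e-3    # w_{3/5} elliptic of order 2
```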
\[ \langle f,g:f^4=g^3=(fgf^{-1}g^{-1}fg^{-1}f^{-1}gfg^{-1})^2=1 \rangle \]
This group is therefore an arithmetic generalised triangle group in the sense of \cite{HMR}; however, it is not identified in that paper, as the authors restrict themselves largely to the case where $w_{1/2}=[X,Y]$ is elliptic.
Next, we have also given a co-volume approximation in some cases. This is obtained by adapting the Poincar\'e subroutine in Weeks' programme Snap to our setting of groups generated by two elliptic elements. This was previously done by Cooper \cite{Cooper} in his PhD thesis. It gives an approximation to the volume of $\mathbb H^3/\langle f,g\rangle$. However, this approximation is enough to give the precise index of the group $\langle f,g\rangle$ in its maximal order, and the latter has an explicit volume formula due to Borel \cite{Bo} if further refinement is needed.
\medskip
We expect that the topological classification (the structure of the singular locus and the observation that $\gamma(f,g)$ lies on an extended pleating ray) suggested by our data from the arithmetic groups is also generally true for groups generated by two elements of finite order which are not freely generated.
\medskip
A major technical advance used in this article is provided by the development of the Keen-Series-Maskit deformation theory from the case of the Riley slice (specifically \cite{KS}) to the more general case of groups generated by two elements of finite order, \cite{EMS1,EMS2}. Using this we are able to give explicitly defined neighbourhoods of pleating rays which lie entirely in these quasiconformal deformation spaces. This allows us to set up a practical algorithm, which we describe below and implement here, to decide whether or not a two generator discrete group -- identified solely by arithmetic criteria -- is free on the given generating pair, and, if it is not, to identify a nontrivial word in the group. This process is guaranteed to work if the discrete group is geometrically finite and certain conjectures are true; of course the groups we seek fall into this category, though the groups we test may not {\it a priori} be geometrically finite. This gives us an effective computational description of certain moduli spaces akin to the Riley slice \cite{KS}. We expect that any lattice we might find comes from a bending deformation in which the word of some slope has become elliptic -- it is just a matter of finding them. Since we additionally expect (hope) that the slope is not too big (that is, has small denominator), the possibilities might be few and so we simply search for them. However, we will see that we in fact run into technical difficulties using this approach in a couple of cases where the slopes get up to $101/120$, and some guesses are needed to limit the search space: there are $4387$ slopes less than this, and we need to precisely generate a polynomial of degree depending on the numerator -- when the degree is large these span pages, of course. Here we are lucky because it turns out that the slopes we seek (as we found out with a lot of experimentation) do not have large integers in their continued fraction expansion. 
Note that $101/120$ has Conway notation $[6,3,5,1]$ and represents a link of about 15 crossings (we do not know what the crossing number is) and similarly with $7/102$ (Conway $[3,1,1,14]$) probably with about $19$ crossings. However we do know that $29/51$ below has Conway notation $[7,3,1,1]$ and does have exactly $12$ crossings ($\#762$ in Rolfsen's tables) - it is a surprising group discovered by Zhang \cite{Zhang} in her PhD thesis. It is quite remarkable that the invariant trace field is at most quartic for all of the above groups.
\medskip
We recall that we require $f$ and $g$ to be primitive elliptic elements, that is, $\mbox{\rm{tr\,}}^2(f)=4\cos^2 \frac{\pi}{4}$ and $\mbox{\rm{tr\,}}^2(g)=4\cos^2 \frac{\pi}{p}$. This now identifies the Heckoid group, from which a complete presentation can be determined if required. However some of these presentations can be quite long and so we have not given them.
\subsection{The five $(4,6)$ arithmetic hyperbolic lattices.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
No. & polynomial & (Farey,order) & $\gamma$ & Discr. $k\Gamma$ \\ \hline
1 & $z+1$ & $Tet(4,6;3)$ & $-1$ & $ -12 $ \\ \hline
2 & $z^2 +6$ & $(7/10,1)$ & $\sqrt{6} i$ & $-24$ \\ \hline
3 & $z^3 +3z^2 +6z+2$ & $(5/8,1)$ & $-1.2980+ 1.8073i$& $-216$ \\ \hline
4 & $ z^3 +5z^2 +9z+3 $ & $(7/12,1)$ & $-2.2873+ 1.3499i$ & $-204$ \\ \hline
5 & $ z^4+8z^2+6z+1$ & $(17/24,1)$ & $0.3620+2.8764i$& $-2412$ \\ \hline
\end{tabular} \\
\end{center}
\subsection{The (4,5) arithmetic hyperbolic lattice.} In the case $p=5$ there is one arithmetic lattice and it is cocompact, $Tet(4,5;3)$. It has $\gamma=-1$. This group has presentation
\[ \langle x,y:x^4=y^5 =[x,y]^3=(x[y,x^{-1}])^2=(x[y,x^{-1}]y)^2=([y,x^{-1}]y)^2=1 \rangle \]
and is the orientation preserving subgroup of the reflection group with Coxeter diagram $4-3-5$. The number field $k\Gamma$ has discriminant $-400$.
\subsection{The ten $(4,3)$ arithmetic hyperbolic lattices.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
No. & polynomial & (Farey , order)& $\gamma$ & Disc. $k\Gamma$ \\ \hline
1 & $z+2$ & $GT(4,3;2)$ & $ -2 $ & $-24 $ \\ \hline
2 & $z^2 +4$ & $ (3/4;2)$ & $2i$ & $ -4 $ \\ \hline
3 & $z^2+4 z+6$ & $ (3/4;1)$ & $-2+\sqrt{2}i$ & $ -8 $ \\ \hline
- & $z^3-z^2+z+1$ & $ (4/5;4)^*$ & $0.77184 + 1.11514 i$& $-44$ \\ \hline
4 & $z^3 +3z^2 +5z+1$ & $(2/3;2)$ & $-1.38546 + 1.56388 i$& $-140 $ \\ \hline
5& $ z^3+3z^2+9z+9$ & $ (17/24;2)$ & $-0.83626+2.46585 i$& $ -108 $ \\ \hline
6 & $ z^3+7z^2+17z+13$ & $(7/12;1)$ & $-2.77184 + 1.11514 i$& $-44 $ \\ \hline
- & $ z^4+2z^3+3z^2+4z+1$ & $(3/4;3)^*$ & $-0.10176+1.4711 i$& $ -976$ \\ \hline
- & $ z^4-2z^3-z^2+6z+1$ & $(6/7;3)^*$ & $1.78298 +1.08419 i$& $ -1424 $ \\ \hline
7 & $ z^4+6z^3+13z^2+8z+1$ & $(3/5;2)$ & $-2.55804+1.34707 i$& $ -3376 $ \\ \hline
8 & $ z^4+6z^3+13z^2+10z+1$ & $(3/5;4)$ \& $(8/13;3)$ & $-2.20711+0.97831 i $& $ -448$ \\ \hline
9 & $ z^4+4z^3+10z^2+12z+4$ & $(7/10;1)$ & $-1+2.05817 i $& $ -400$ \\ \hline
10 & $ z^4+8z^3+22z^2+22z+6$ & $(9/16;1)$ & $-3.11438+0.83097 i $ & $ -2096 $ \\ \hline
\end{tabular} \\
\end{center} The group $\#8$ above is in fact the index two subgroup of the tetrahedral reflection group $T_5[2,3,3;2,3,4]$, \cite[pg. 228 \& Table 1]{MR2}. The three groups marked with an asterisk are infinite covolume web-groups with a hyperideal vertex, or of finite index in the truncated tetrahedral reflection groups enumerated in \cite{CM}.
\subsection{The fifty-four $(4,2)$ arithmetic hyperbolic lattices.}
\subsubsection{Real, quadratic.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
No. & polynomial & (Farey , order)& $\gamma$ & Disc. $k\Gamma$ & covolume\\ \hline
{\bf Real} & $z+2$ & $GT(4,3;2)$ & $-2$ & $-24$ & \\ \hline
{\bf Quad.} & &&& \\ \hline
2 & $z^2 +2z+2$ & $ (3/4;2) $ & $-1+ i$ & $ -4$&$ 0.457 $ \\ \hline
3 & $z^2 +z+1$ & $ (4/5;2) $ & $-0.5 + 0.86602 i$ & $ -3$ & $ 0.253$ \\ \hline
4 & $z^2 +3z+3$ & $ (7/10;1)$ & $-1.5 + 0.86602 i$& $-3$ & $ 0.253$\\ \hline
5 & $z^2 +1$ & $ (5/6;2)$ & $i$ & $ -4$ & $ 0.915 $\\ \hline
6 & $z^2 +4z+5$ & $ (2/3;4)$ & $-2+i$ & $-4 $& $ 0.915 $\\ \hline
- & $z^2 +2z+3$ & $(3/4;3)$ & $-1+1.41421 i$& $ -8$& $ \infty $ \\ \hline
7 & $z^2 +z+2$ & $(19/24;1)$ & $-0.5 - 1.32288 i$& $ -7$ & $ 1.332$ \\ \hline
8 & $z^2 +3z+4$ & $(17/24;1)$ & $-1.5 +1.32288 i$& $ -7$& $ 1.332 $ \\ \hline
9 & $z^2 +5z+7$ & $(19/30;1) $ & $-2.5+ 0.86602 i$ & $-3 $ & $ 1.524$\\ \hline
10 & $z^2-z+1$ & $(13/15;2)$ & $0.5+ 0.86602 i$ & $-3 $ & $ 1.524$ \\ \hline
\end{tabular} \\
\end{center}
\newpage
\subsubsection{Cubic.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
No. & polynomial & (Farey , order)& $\gamma$ & Disc. $k\Gamma$ & covolume\\ \hline
11 & $z^3-2z+2$ & $(43/48;1)$ & $0.884646 + 0.58974 i$ & $-76 $ & $ 1.985$\\ \hline
12 & $z^3+6z^2+10z+2$ & $(39/40;4)$ & $-2.884646 + 0.58974 i$ & $-76 $ & $ 1.985 $\\ \hline
13 & $z^3-z+1$ & $(8/9;2)$ & $0.662359 - 0.56228 i$ & $-23 $ & $ 0.824$ \\ \hline
14 & $z^3+6z^2+11z+5$ & $(11/18;1)$ & $-2.66236 + 0.56228 i$ & $ -23 $ & $ 0.824$ \\ \hline
15 & $z^3+z^2-z+1$ & $(7/8;3)$ & $0.419643 - 0.60629 i$ & $ -44$ & $ 0.264$\\ \hline
16 & $z^3+5z^2+7z+1$ & $(5/8;3)$ & $-2.41964 - 0.60629 i$ & $-44 $ & $ 0.264$ \\ \hline
17 & $z^3+z^2+1$ & $(6/7;2)$ & $0.232786 + 0.79255 i$ & $-31 $ & $0.595$\\ \hline
18 & $z^3+5z^2+8z+3$ & $(9/14;1)$ & $-2.23279+ 0.79255 i$ & $ -31$ & $0.595$\\ \hline
19 & $z^3+z^2+2$ & $(17/24;1)$ & $0.34781+1.02885i$ & $-116$ & $ 2.232 $ \\ \hline
20 & $z^3+5z^2+8z+2$ & $(17/24;1)$ & $-2.34781 +1.02885 i$ & $-116 $ & $ 2.232 $ \\ \hline
21 & $z^3+2z^2+z+1$ & $(5/6;3)$ & $-0.122561 + 0.74486 i$ & $ -23$& $ 0.137$ \\ \hline
22 & $z^3+ 4 z^2 + 5 z+1$ & $(2/3;3)$ & $-1.87744 - 0.74486 i$ & $ -23$& $ 0.137$ \\ \hline
23 & $z^3+z^2+z+2$ & $(101/120;1)$ & $0.17660+1.20282 i $& $-83 $ & $ 3.308$\\ \hline
24 & $z^3+5z^2+9z+4$ & $(79/120;1)$ & $-2.1766+ 1.20282i$ & $-83 $ & $ 3.308$ \\ \hline
25 & $z^3+2z^2+2z+3$ & $(49/60;1)$ & $-0.09473 + 1.28374 i$ & $ -139$& $ 2.597 $ \\ \hline
26 & $z^3+4z^2+6z+1$ & $(41/60;1)$ & $-1.90527 + 1.28374 i$ & $-139 $ & $ 2.597 $ \\ \hline
27 & $z^3+2z^2+2z+2$ & $(13/16;1)$ & $-0.22815+1.11514 i$ & $ -44$ & $0.793$ \\ \hline
28 & $z^3+4z^2+6z+2$ & $(11/16;1)$ & $-1.77184 + 1.11514 i$ & $ -44$& $ 0.793 $ \\ \hline
29 & $z^3+z^2+2z+1$ & $(17/21;2)$ & $-0.21508 + 1.30714 i$& $-23 $ & $ 0.824$ \\ \hline
30 & $z^3+5z^2+10z+7$ & (29/42;1) & $-1.78492+ 1.30714 i$ & $ -23 $& $ 0.824 $\\ \hline
31 & $z^3+2z^2+3z+1$ & $(7/9;2)$, & $-0.78492 + 1.30714 i$ & $ -23$ & $ 0.824 $ \\ \hline
32 & $z^3+4z^2+7z+5$ & $(13/18;1)$, & $-1.21508 + 1.30714 i$ & $-23 $& $ 0.824 $ \\ \hline
33 & $z^3+3z^2+5z+4$ & $ (31/40;1)$ & $-0.773301 + 1.46771 i$ & $ -59$ & $ 1.927 $\\ \hline
34 & $z^3+3z^2+5z+2$ & $ (29/40;1)$ & $-1.2267 + 1.46771 i$ & $ -59$ & $ 1.927$ \\ \hline
35 & $z^3+3z^2+4z+3$ & $(11/14;1)$ & $-0.658836-1.16154 i$ & $-31 $ & $0.593$ \\ \hline
36 & $z^3+3z^2+4z+1$ & $(5/7;2)$ & $-1.34116 + 1.16154 i$ & $-31 $ & $0.593$ \\ \hline
\end{tabular} \\ \end{center}
\newpage
\subsubsection{Quartic.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
No. & polynomial & (Farey , order)& $\gamma$ & Disc. $k\Gamma$ & covol. \\ \hline
\\ \hline
37& $ z^4 - 4 z^2 + z + 3 $ & $(7/102;1)$ & $1.36778 + 0.23154 i $ & $ -731$ & $ 2.9003 $ \\ \hline
38 & $ 1 + 15 z + 20 z^2 + 8 z^3 + z^4 $ & $(29/51;2)$ & $-3.36778+ 0.23154 i $ & $ -731$ & $ 2.9003 $ \\ \hline
39& $1 + z - 2 z^2 + z^4$ & $(77/85;2)$ & $1.00755 + 0.51311i $ & $ -283$ & $ 3.6673 $ \\ \hline
40 & $ 7 + 23 z + 22 z^2 + 8 z^3 + z^4 $ & $(101/170;1)$ & $-3.00755 + 0.51311i $ & $ -283$ & $ 3.6673 $ \\ \hline
41 & $z^4+z^3-3z^2-z+3$ & $(31/34;1)$ & $1.06115 + 0.38829 i$ & $ -491 $ & $ 1.542$ \\ \hline
42 & $z^4+7z^3+15z^2+9z+1$ & $(10/17;2) $ & $-3.06115+ 0.38829 i$ & $ -491$ & $ 1.542$\\ \hline
- & $z^4+4z^3+7z^2+6z+1$ & $(3/4;5) $ & $-1+ 1.27202 i$ & $ -400 $ & $\infty$ \\ \hline
- & $z^4+4z^3+8z^2+8z+2$ & $(3/4;4) $ & $-1+ 1.55377 i$ & $ -1024 $ & $\infty$ \\ \hline
- & $z^4+4z^3+8z^2+8z+1$ & $(3/4;6) $ & $-1+1.65289 i$ & $ -4608 $ & $\infty$ \\ \hline
43 & $z^4+5z^3+11z^2+11z+3$ & $(40/51;2)$ & $-1.42057+ 1.45743 i$ & $ -731$ & $ 2.889 $\\ \hline
44 & $z^4+3z^3+5z^2+5z+1$ & $(40/51;2)$ & $-0.57943+ 1.45743 i$& $ -731$ & $ 2.889$ \\ \hline
45 & $z^4+2z^3+2z^2+3z+1$ & $(14/17;2)$ & $0.03640 +1.21238 i$ & $-491 $ & $ 1.540 $\\ \hline
46 & $z^4+6z^3+14z^2+13z+3$ & $(23/34;1)$ & $-2.03640 +1.21238 i$ & $ -491$& $ 1.540$ \\ \hline
47 & $z^4+6z^3+13z^2+10z+1$ & $(13/20;1)$ & $-2.20711 +0.97831 i$ & $ -448$ & $ 1.028 $ \\ \hline
48 & $z^4+2z^3+z^2+2z+1$ & $(17/20;1)$ & $0.20711 +0.97831 i$ & $ -448$ & $ 1.028 $ \\ \hline
49 & $z^4+z^3-z^2+z+1$ & $(15/17;2)$ & $0.65138 + 0.75874 i$ & $-507 $ & $ 1.624 $ \\ \hline
50 & $z^4+7z^3+17z^2+15z+3$ & $(21/34;1)$ & $-2.65139 + 0.75874 i$ & $ -507$& $ 1.624$ \\ \hline
51 & $ z^4+3 z^3+4 z^2+4 z+1 $ & (4/5;3) & $ -0.40630 + 1.19616 i $ &$-331$ & $ 0.447 $\\ \hline
52 & $ z^4+5 z^3+10 z^2+8 z+1 $ & (7/10;3) & $ -1.59369 + 1.19616 i $ & $-331$&$ 0.447 $ \\ \hline
53 & $ z^4 + z^3 - 2 z^2+ 1 $ & (9/10;3) & $ 0.78810 + 0.40136 i $ &$-283$ & $ 0.347$ \\ \hline
54 & $ z^4+7 z^3+16 z^2+12 z+1 $ & (3/5;3) & $ -2.78810 + 0.40135i$ & $-283$&$ 0.347 $\\ \hline
\end{tabular}
\end{center}
\newpage
\subsection{The twenty-seven $(4,4)$ arithmetic hyperbolic lattices.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
No. & polynomial & (Farey,order)& $\gamma$ & Disc. $k\Gamma$ & covol.\\ \hline
\\ \hline
1 & $z+2$ & $GT(4,4;2)$ & $-2$ & $-4$ &$0.91596$ \\ \hline
-& $z+3$ & $Tet(4,4;3)$ & $-3$ & $-8$ & $\infty$ \\ \hline
-& $1 + 3 z + z^2$ & $Tet(4,4;5)$ & $-2.61803$ & $ 5 $ & $\infty$ \\ \hline
-& $2 + 4 z + z^2$ & $GT(4,4;4)$ & $-3.41421$ & $ 8$ & $\infty$ \\ \hline
-& $1 + 4 z + z^2$ & $GT(4,4;6) $ & $-3.73205$ & $12$ & $\infty$ \\ \hline
\\ \hline
2 & $3 + 3 z + z^2$ & $ (3/5,1) $ & $-1.5 + 0.866025 i$ & $ -3$ & $0.5067$ \\ \hline
3& $5 + 2 z + z^2$ & $ (2/3;2) $ & $-1 + 2 i$ & $ -4$ & $1.8319 $ \\ \hline
4& $8 + 5 z + z^2$ & $(7/12;1)$ & $-2.5 + 1.32288 i$ & $ -7$ & $2.6648$ \\ \hline
5& $7 - z + z^2$ & $(11/15;1)$ & $0.5 + 2.59808 i$ & $-3 $ & $3.0491$ \\ \hline
\\ \hline
6& $3 + 8 z + 5 z^2 + z^3$ & $(4/7;1)$ & $-2.23278+0.79255 i$ & $ -31$ & $1.1878$ \\ \hline
7& $4 + 8 z - 4 z^2 + z^3$ & $(19/24;1)$ & $2.20409 + 2.22291 i$ & $-76$ & $3.9710$ \\ \hline
8& $5 + 3 z - 2 z^2 + z^3$ & $(7/9;1)$ & $1.44728 +1.86942 i$ & $-23 $ & $1.6481$ \\ \hline
9& $1 + 3 z - z^2 + z^3$ & $(3/4;3)$ & $0.647799 + 1.72143 i$ & $-44$ & $0.5295$ \\ \hline
10& $3 + 4 z + z^2 + z^3 $ & $(5/7;1)$ & $-0.108378 + 1.95409 i$ & $-31$ & $1.1900$ \\ \hline
11& $4 + 8 z + z^2 + z^3$ & $(17/24;1)$ & $-0.241944 + 2.7734 i $& $-116$ & $4.4654$ \\ \hline
12& $1 + 3 z + 2 z^2 + z^3$ & $(2/3,3)$ & $-0.78492 + 1.30714 i$ & $ -23$ & $0.2746$ \\ \hline
13& $8 + 11 z + 3 z^2 + z^3$ & $(41/60;1)$ & $-1.06238 + 2.83049 i$& $-83$ & $6.6173$ \\ \hline
14& $3 + 10 z + 4 z^2 + z^3$ & $(19/30;1)$ & $-1.82848 + 2.32426 i$ & $ -139$ & $5.1943$ \\ \hline
15& $4 + 8 z + 4 z^2 + z^3$ & $(5/8;1)$ & $-1.6478 + 1.72143 i $& $-44$ & $0.1322 $ \\ \hline
16& $7 + 12 z + 5 z^2 + z^3$ & $(5/9;1)$ & $-2.09252 + 2.052 i$& $-23 $ & $4.2441$ \\ \hline
17& $8 + 15 z + 7 z^2 + z^3$ & $(11/20;1)$ & $-3.10278 + 0.665457 i$ & $ -59$ & $3.8557$ \\ \hline
18& $3 + 8 z + 5 z^2 + z^3$ & $(4/7;1)$ & $-2.23278+0.79255 i$ & $ -31$ & $1.1878$ \\ \hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
No. & polynomial & (Farey,order)& $\gamma$ & Disc. $k\Gamma$ & covol.\\ \hline
\\ \hline
19& $z^4-8 z^3+12 z^2+23 z+3$ & $(44/51;1) $ & $4.55279 + 1.09648 i$ & $-731$ & $5.8007 $ \\ \hline
20& $7 + 15 z + 4 z^2 - 4 z^3 + z^4$ & $(69/85;1) $ & $2.76698 + 2.06021 i$ & $-283$ & $7.3346 $ \\ \hline
21& $3 + 13 z + 5 z^2 - 5 z^3 + z^4$ & $(14/17;1) $ & $3.09755 + 1.60068 i$ & $-491$ & $3.0852$ \\ \hline
22& $3 + 13 z + 17 z^2 + 7 z^3 + z^4$ & $(29/51;1)$ & $-2.94722 + 1.22589 i$ & $ -731$ & $5.7795$ \\ \hline
23& $3 + 11 z + 12 z^2 + 4 z^3 + z^4$ & $(11/17;1)$ & $-1.39573 + 2.51303 i$ & $-491 $ & $3.0815$ \\ \hline
24& $1 + 6 z + 7 z^2 + 2 z^3 + z^4$ & $(7/10;1)$ & $-0.5 + 2.36187 i$ & $-448$ & $2.0575$ \\ \hline
25& $3 + 9 z + 5 z^2 - z^3 + z^4$ & $(13/17;1)$ & $1.15139 - 2.50596 i$ & $-507$ & $3.2480$ \\ \hline
26& $z^4+5 z^3+10 z^2+6 z+1$ & $(3/5;3)$ & $-2.07833 + 1.4203 i$ & $-331$ & $0.8948$ \\ \hline
27& $z^4-3 z^3+2 z^2+6 z+1$ & $(4/5;3)$ & $2.03623 + 1.43534 i$ & $-283$ & $0.6951$ \\ \hline
\end{tabular}
\end{center}
For completeness we recall Takeuchi's result in this case.
\begin{theorem} Let $\Gamma=\langle f,g \rangle$, with $f$ and $g$ of finite orders $4$ and $p$, be an arithmetic Fuchsian triangle group. Then there are eleven such groups and $\{4,p,q\}$ forms one of the following triples.
\begin{itemize}
\item noncompact : $(2,4,\infty)$, $(4,4,\infty)$,
\item compact: $(2,4,12)$, $(2,4,18)$, $(3,4,4)$, $(3,4,6)$, $(3,4,12)$, $(4,4,4)$, $(4,4,5)$, $(4,4,6)$, $(4,4,9)$.
\end{itemize}
\end{theorem}
\section{Total degree bounds.}
In this section we first outline the arithmetic criteria on the complex number
\[ \gamma=\gamma(f,g) = \mbox{\rm{tr\,}}[f,g]-2 \]
that are necessary in order for the group $\Gamma=\langle f,g:f^4=g^p=
\cdots=1\rangle$ to be a discrete subgroup of an arithmetic group, following on from \cite{MM,MMpq}. This is encapsulated in the Identification Theorem \ref{idthm} below. It places rather stringent conditions on $\gamma=\gamma(f,g)$.
Next, in order for $\Gamma$ to be a lattice, it is necessary that it is not free on its generators. We then use the ``disjoint isometric circles test'' and the Klein ``ping-pong'' lemma to obtain a coarse bound on $|\gamma|$. The arithmetic criteria and this coarse bound then allow us to either directly give degree bounds or, in some cases, adapt the method of Flammang and Rhin \cite{FR} to obtain a total degree bound for the field $k\Gamma=\mathbb Q(\gamma)$. It will turn out that the degree is actually no more than $4$, as we have noted above; however, a total degree bound of $7$ or $8$ brings us down into the range of feasible search spaces for integral monic polynomials with the root bounds we obtain by exploiting arithmeticity and bounds on $|\gamma(f,g)|$, and so this is the first bound we will seek. Note that this gives first bounds on $p$ since $\mathbb Q(\cos\frac{2\pi}{p})\subset k\Gamma$ and
\[ [k\Gamma:\mathbb Q]=[k\Gamma:\mathbb Q(\cos\frac{2\pi}{p})]\,[\mathbb Q(\cos\frac{2\pi}{p}):\mathbb Q]\leq 8.\]
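Since $[\mathbb Q(\cos\frac{2\pi}{p}):\mathbb Q]=\phi(p)/2$ for $p\geq 3$ (a standard fact about real cyclotomic fields; small $p$ are admissible trivially), the candidate values of $p$ can be enumerated directly. A minimal sketch:

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count (adequate at this scale)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# [Q(cos(2 pi/p)) : Q] = phi(p)/2 for p >= 3, so a total degree
# bound of 8 forces phi(p) <= 16.
admissible = [p for p in range(2, 1000) if phi(p) <= 16]

assert len(admissible) == 31 and max(admissible) == 60
```

With the degree bound $8$ this leaves $31$ values of $p$, the largest being $60$.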
Then refining this degree bound down to $4$ or $5$ relies on us obtaining a coarse description of the moduli space ${\cal M}_{4,p}$ --- the deformation space of Riemann surfaces $\IS_{4,4,p,p}$ topologically homeomorphic to the sphere with two cone points of order $4$ and two cone points of order $p$, an analogue of the Riley slice for groups generated by two parabolics, where $\IS_{4,4,p,p}$ is replaced by the four times punctured sphere. Actually only the cases $(4,2)$, $(4,3)$ and $(4,4)$ have very large search spaces since (as we shall see below) there is not necessarily an intermediate real field for us to work with. Indeed the $(4,2)$ case is actually covered by the methods of Flammang and Rhin \cite{FR} after they achieved a degree $9$ bound (for the $p=q=3$ case), and their methods largely carry over, but are a bit more complicated since these moduli spaces when $q=4$ and $p\neq 2,4$ are not symmetric and do have significantly larger complements (where the groups we are looking for will lie). Next, the arithmetic criteria show that $\mathbb Q(\gamma)$ has one complex conjugate pair of embeddings and give good bounds on the real embeddings of the number $\gamma$ due to a ramification requirement. Thus the coefficients of the minimal polynomial for $\gamma$ are bounded using a (though very crude) bound on $|\gamma|$. Given the degree bound we can now look through the space of polynomials with integer coefficients within these ranges to produce a list of possibilities. Without good bounds this can take quite a long time (many days). Then a factorisation criterion significantly reduces the possibilities further. This leaves us with a moderate number of values for $\gamma$ (perhaps several thousand). At this point there is no issue about whether the group is discrete. That is guaranteed.
The only question is whether it is a lattice, and if so which one. This is where we need the refined descriptions of moduli spaces. Our earlier work implemented a version of the Poincar\'e polyhedral theorem to construct a fundamental domain for the group acting on hyperbolic $3$-space, and thereby sought to determine whether this action is cocompact or not \cite{Cooper}. The case of cofinite volume but not cocompact is rather easier in the arithmetic case, as then the invariant trace field $k\Gamma$ is quadratic \cite{MR}. This implementation worked well when there were no points too close to the boundary (where the geometrically infinite groups are dense). However it fell over for points close to the boundary, and it was a case by case analysis, which was fine when there were only a few tens of groups to look at. Here we replace the last few steps by a finer description - and so tighter bounds - on the set where $\gamma$ must lie, giving smaller search spaces (a matter of minutes). Then we ``guess'' that the group is a Heckoid group and set about finding which one it might be by enumerating the possible words and associated trace polynomials. This just happens to work --- and of course it is natural to conjecture that it always does.
\subsection{The standard representation.} Define the two matrices
\begin{equation}\label{AB} A = \begin{pmatrix} \cos \pi/p & i \sin \pi/p \\ i \sin \pi/p &
\cos \pi/p \end{pmatrix}, \quad B = \begin{pmatrix}
\cos \pi/q & i w \sin \pi/q \\ i w^{-1} \sin \pi/q & \cos \pi/q \end{pmatrix}.
\end{equation}
Then if $\Gamma = \langle f,g\rangle $ is a non-elementary Kleinian group with $o(f)=p$ (where $o(f)$ denotes the order of $f$) and
$o(g) = q$, where $p \geq q \geq 3$ then $\Gamma$ can be normalised so that
$f,g$ are represented by the matrices $A,B$ respectively. The parameter
$\gamma$ is related to $w$ by
\begin{equation}\label{2}
\gamma = \sin^2 \frac{\pi}{p} \, \sin^2 \frac{\pi}{q} \; (w - \frac{1}{w})^2.
\end{equation}
Given $\gamma$, we can further normalise and choose $w$ such that $| w |
\leq 1$ and ${\rm Re}(w) \geq 0$.
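Relation (\ref{2}) can likewise be checked numerically from (\ref{AB}); a small illustration with sample values (nothing here beyond the matrices above is assumed):

```python
import numpy as np

p, q, w = 4, 3, 0.4 + 0.25j      # sample values with |w| <= 1, Re(w) >= 0
cp, sp = np.cos(np.pi/p), np.sin(np.pi/p)
cq, sq = np.cos(np.pi/q), np.sin(np.pi/q)

A = np.array([[cp, 1j*sp], [1j*sp, cp]])
B = np.array([[cq, 1j*w*sq], [1j*sq/w, cq]])

# gamma = tr [A,B] - 2 agrees with sin^2(pi/p) sin^2(pi/q) (w - 1/w)^2
comm = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
gamma = np.trace(comm) - 2

assert abs(gamma - sp**2*sq**2*(w - 1/w)**2) < 1e-10
```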
\subsection{Isometric circles.}
The isometric circles of the M\"obius transformations determined by (\ref{AB}) are
\[ \{z:|i \sin(\frac{\pi}{p}) z \pm \cos(\frac{\pi}{p}) |^2=1\}, \;\;\; \{ z: |i w^{-1} \sin(\frac{\pi}{q}) z \pm \cos (\frac{\pi}{q}) |^2=1\}. \]
These circles are paired by the respective M\"obius transformations.
With our normalisation on $w$ as above, these two pairs of circles are disjoint precisely when
\begin{equation}\label{3}
| i w \cot \pi/q + i \cot \pi/p | + \frac{| w |}{\sin \pi/q} \leq \frac{1}{\sin \pi/p}.
\end{equation}
If $\gamma$ is real, the isometric circles are disjoint if and only if $\gamma<-4$ or $\gamma > 4\big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$. In the case of equality here the isometric circles are tangent.
\bigskip
With our normalisations, the well known Klein ``ping-pong'' argument quickly implies that, should the isometric circles be pairwise disjoint or at worst tangent, then the group generated by $A$ and $B$ is free on these two generators. It therefore cannot be an arithmetic lattice. In fact in these circumstances (with $p=4$ and $q\geq 3$) it is easy to see that
\[ \left[ {\Bbb D}(i,\sqrt{2})\cap{\Bbb D}(-i,\sqrt{2}) \right] \setminus \left[{\Bbb D}\Big(i w\cot (\frac{\pi}{q}),\frac{|w|}{\sin(\frac{\pi}{q})}\Big) \cup {\Bbb D}\Big(-i w\cot (\frac{\pi}{q}),\frac{|w|}{\sin(\frac{\pi}{q})}\Big)\right] \]
is a nonempty fundamental domain for the action of $\Gamma=\langle f,g\rangle$ on $\oC\setminus \Lambda(\Gamma)$.
\medskip
We now need to turn the inequality (\ref{3}) into a condition on $\gamma$. From (\ref{3}) we see with $q=4$ and $w=r e^{i\theta}$, $0\leq \theta\leq \pi/2$, that
\begin{eqnarray*} | r \cos(\theta) + \cot \pi/p +i r\sin(\theta)| + \sqrt{2}r & \leq &\frac{1}{\sin \pi/p} \\
\Big( r \cos(\theta) + \cot (\frac{\pi}{p})\Big)^2 + r^2\sin^2(\theta) & \leq &(\frac{1}{\sin\frac{\pi}{p}}- \sqrt{2}r )^2 \\
2 r \cos(\theta) \cot (\frac{\pi}{p}) & \leq &1+r^2 - \frac{2\sqrt{2}r}{\sin\frac{\pi}{p}} \\
\end{eqnarray*}
On the curve where equality holds we obtain a parametric equation for the boundary of a region $\Omega_p$:
\[\Omega_p\; :\;0=1+r^2 -2 r \; \frac{\sqrt{2}+ \cos(\theta) \cos (\frac{\pi}{p})}{\sin\frac{\pi}{p}}.\]
\scalebox{0.5}{\includegraphics[viewport=-50 250 600 750]{ParametricRegion}}\\
{\bf Figure 2.} {\it The smooth parametric regions $\Omega_p$ from smallest to largest, $p=3,6,9$ and $12$. If $\gamma \not\in \Omega_p$, then $\langle f,g\rangle$ is freely generated. Inset is the actual region (in the case $p=3$) where free generation occurs, illustrated by interior pleating rays landing on its boundary.}
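Solving the equality case of the last displayed inequality as a quadratic in $r$ produces points of the boundary curve; a quick numerical check (illustrative only, with a sample angle $\theta$) that such points give equality in (\ref{3}):

```python
import numpy as np

# equality case of (3) with q = 4 and w = r e^{i theta}:
#   2 r cos(theta) cot(pi/p) = 1 + r^2 - 2 sqrt(2) r / sin(pi/p)
p, q = 3, 4
theta = 1.0                              # sample angle, 0 <= theta <= pi/2

b = (np.sqrt(2) + np.cos(theta)*np.cos(np.pi/p)) / np.sin(np.pi/p)
r = b - np.sqrt(b*b - 1)                 # smaller root of r^2 - 2 b r + 1 = 0

# check that (3) holds with equality for this w
w = r*np.exp(1j*theta)
lhs = abs(1j*w/np.tan(np.pi/q) + 1j/np.tan(np.pi/p)) + abs(w)/np.sin(np.pi/q)
assert abs(lhs - 1/np.sin(np.pi/p)) < 1e-12
```

Note $b\geq\sqrt{2}$ always, so the quadratic in $r$ has real roots for every $\theta$.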
\medskip
The $\gamma$ values for the groups we seek lie inside the curves $\Omega_p$. However the region $\Omega_p$, given by this simplest criterion for disjoint isometric circles, is apparently much larger than that required to identify groups that are freely generated. These regions are nevertheless sufficient to provide initial bounds, in particular excluding $p$ large enough that $[\mathbb Q(\cos \frac{2\pi}{p}):\mathbb Q]>8$; this leaves $31$ possible values of $p$, none more than $60$.
\bigskip
\subsection{Real points.} At this point we would like to remove the cases $\gamma\in \mathbb R$ from further consideration. This result is primarily based on \cite{MM3}.
\subsubsection{If $\gamma>0$,} then the group $\Gamma$ is a Fuchsian triangle group, or is freely generated, and is therefore not an arithmetic lattice. However, the $(4,p,q)$-triangle groups $\langle a,b|a^4=b^p=(ab)^q=1 \rangle$ have parameters
\begin{eqnarray*}(\gamma,\mbox{\rm{tr\,}}^2(f)-4,\mbox{\rm{tr\,}}^2(g)-4)& = & (\gamma,-2,-4\sin^2(\frac{\pi}{p})), \\ \gamma &=& 4 \cos (\frac{\pi }{q})\big(\sqrt{2} \cos(\frac{\pi }{p})+\cos(\frac{\pi }{q})\big)+2\cos(\frac{2 \pi }{p})\end{eqnarray*}
\subsubsection{If $\gamma<0$,} and if $\langle f,g\rangle$ is not free on its generators, then $-4<\gamma <0$. It follows that $-2< \mbox{\rm{tr\,}}[f,g] < 2$ and $[f,g]$ is elliptic. All groups generated by two elements of finite order whose commutator also has finite order were identified in \cite{MM3}. As examples, the $(4,p,q)$-generalised triangle groups $\langle a,b|a^4=b^p=[a,b]^q=1 \rangle$ have parameters
\[ (\gamma,-2,-4\sin^2(\frac{\pi}{p})), \qquad \gamma =-2-2\cos(\frac{\pi}{q}). \]
\bigskip
From that classification we have the following result.
\begin{theorem} Let $\Gamma=\langle f,g\rangle$ be a non-elementary Kleinian group where $f,g$ are
elliptic elements with $o(f) =4$, $o(g) = q$ and $o([f,g]) = n$. Assume that $\Gamma$ does not have an invariant
hyperbolic plane. Then the generators give rise to one of the following sets of parameters.
\begin{enumerate}
\item {\it Generalised Triangle groups} $\langle x,y|x^4=y^p=[x,y]^n=1\rangle$.
\[ (\gamma,-2,\beta)= (-2-2 \cos(\pi/n),-2,-4 \sin^2(\pi/p)). \]
\item {\it Tetrahedral groups with $n$ odd}. \begin{eqnarray*}
\langle x,y|x^4 & = & y^p=[x,y]^n=1, (x[y, x^{-1}]^{(n-1)/2})^2 = 1, \\
&& (x[y, x^{-1}]^{(n-1)/2}y)^2 = ([y, x^{-1}]^{(n-1)/2}y)^2 =1\rangle
\end{eqnarray*}
\[ (\gamma,-2,\beta)= (-2-2 \cos(2\pi/n),-2,-4 \sin^2(\pi/p)). \]
\item {\it Tetrahedral groups with $n$ odd and $n\geq 7$}. In this case we have a group with the presentation as determined above, but additional possibilities for the value of $\gamma$.
\[ (\gamma,-2,\beta)= (-2-2 \cos(4\pi/n),-2,-4 \sin^2(\pi/p)). \]
\end{enumerate} \
The only cocompact group is the group $(4,p;n)=(4, 5; 3)$, which is arithmetic. The only non-cocompact groups of finite
covolume are $(4,p;n)=GT(4,3; 2), Tet(4, 6; 3), GT(4,4;2)$,
which are all arithmetic.
\end{theorem}
Further data on these groups can be found in \cite{MM3}.
\medskip
Notice here that if $n=2$, then $\mbox{\rm{tr\,}}[f,g]=0$ and $\gamma(f,g)=-2=\beta(f)$. Then $\gamma(f,gfg^{-1})=\gamma(f,g)(\gamma(f,g)-\beta(f))=0$ and $f$ and $gfg^{-1}$ share a fixed point on $\oC$. In this last case $(4,p;n)=(4, 4; 2)$ is the $\mathbb Z_2$ extension with parameters $(1+i,-2,-4)$. In summary, the possible values of $\gamma\in \mathbb R$ for arithmetic Kleinian groups are
\[
p=3:\; \gamma=-2,\qquad
p=4:\; \gamma=-2,\qquad
p=5:\; \gamma=-1,\qquad
p=6:\; \gamma=-1.
\]
\subsection{Arithmetic restrictions on the commutator $\gamma(f,g)$.}
Having dispensed with the case $\gamma=\gamma(f,g)\in \mathbb R$, in the rest of this paper we shall assume that $\gamma$ is not real.
We require the following preliminaries. Let
$\Gamma$ be any non-elementary finitely-generated subgroup of $\mbox{\rm{PSL}}(2,\mathbb C)$.
Let $\Gamma^{(2)} = \langle g^2 \mid g \in \Gamma \rangle $ so that $\Gamma^{(2)}$ is a subgroup of finite
index in $\Gamma$. Define
\begin{equation} \left.
\begin{array}{lll}
k\Gamma & = & \mathbb Q(\{ \mbox{\rm{tr\,}}(h) \mid h \in \Gamma^{(2)} \}) \\
A\Gamma & = & \{ \sum a_i h_i \mid a_i \in k\Gamma, h_i \in \Gamma^{(2)} \}
\end{array}\;\;\;\; \right\}
\end{equation}
where, with the usual abuse of notation, we regard elements of $\Gamma$ as matrices,
so that $A\Gamma \subset M_2(\mathbb C)$.
Then $A\Gamma$ is a quaternion algebra over $k\Gamma$ and the pair $(k\Gamma, A\Gamma)$ is
an invariant of the commensurability class of $\Gamma$. If, in addition, $\Gamma$
is a Kleinian group of finite co-volume, then $k\Gamma$ is a number field.
We state the identification theorem as follows:
\begin{theorem} \label{idthm} Let $\Gamma$ be a subgroup of
$\mbox{\rm{PSL}}(2,\mathbb C)$ which is finitely-generated and non-elementary. Then $\Gamma$
is an arithmetic Kleinian group if and only if the following conditions all hold:
\begin{enumerate}
\item $k\Gamma$ is a number field with exactly one complex place,
\item for every $g \in \Gamma$, $\mbox{\rm{tr\,}}(g)$ is an algebraic integer,
\item $A\Gamma$ is ramified at all real places of $k\Gamma$.
\item $\Gamma$ has finite co-volume.
\end{enumerate}
\end{theorem}
It should be noted that the first three conditions together imply that $\Gamma$ is
Kleinian, and without the fourth condition, are sufficient to imply that $\Gamma$ is a subgroup of an arithmetic Kleinian group.
The first two conditions clearly depend on the traces of the elements of $\Gamma$.
In addition, we may also find a Hilbert symbol for $A\Gamma$ in terms of the
traces of elements of $\Gamma$ so that the third condition also depends on the traces
(for all this, see \cite{MR1},\cite[Chap. 8]{MR}).
\subsection{The factorisation condition.}\label{factorisation}
For an arithmetic group generated by elliptic elements of order $4$ and $p$, it was observed in \cite{MMpq} that, as a consequence of the Fricke identity, the number $\lambda= \mbox{\rm{tr\,}} f \mbox{\rm{tr\,}} g \mbox{\rm{tr\,}} fg$ is an algebraic integer and satisfies the quadratic equation
\begin{equation}\label{eqn5}
x^2 - 8\cos^2\frac{\pi}{p}\, x
+ 8\cos^2\frac{\pi}{p}(-2 + 4\cos^2\frac{\pi}{p} -\gamma) = 0.
\end{equation}
Thus
\[ \lambda = 4 \cos ^2 \frac{\pi }{p} \pm 2 \sqrt{2} \cos \frac{\pi }{p} \sqrt{\gamma + 2\sin^2\frac{ \pi }{p}} \]
and
\begin{equation}\label{lp}
\lambda_p= 2 \sqrt{2} \cos \frac{\pi }{p} \sqrt{\gamma + 2\sin^2\frac{ \pi }{p}}
\end{equation}
is an algebraic integer.
Further, if $p$ is odd, then $2 \cos \frac{\pi }{p} $ is a unit and so
\[ \alpha_p = \sqrt{2} \sqrt{\gamma +2\sin^2\frac{ \pi }{p}} =\sqrt{2\gamma -\beta} \]
is an algebraic integer. Since the field $\mathbb Q(\gamma)$ has one complex place we must have $\mathbb Q(\gamma)=\mathbb Q(\lambda_p)$ and when $p$ is odd $\mathbb Q(\gamma)=\mathbb Q(\alpha_p)$.
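These formulas are straightforward to sanity-check numerically. A minimal sketch (the inputs $p=7$ and $\gamma=-1+2i$ are arbitrary test values, not data from the paper):

```python
import cmath
import math

def check_factorisation(p, gamma, tol=1e-9):
    """Verify that 4cos^2(pi/p) +/- lambda_p are the two roots of the quadratic (5):
    x^2 - 8cos^2(pi/p) x + 8cos^2(pi/p)(-2 + 4cos^2(pi/p) - gamma) = 0,
    where lambda_p = 2 sqrt(2) cos(pi/p) sqrt(gamma + 2 sin^2(pi/p))."""
    c2 = math.cos(math.pi / p) ** 2
    s2 = math.sin(math.pi / p) ** 2
    lam_p = 2 * math.sqrt(2) * math.cos(math.pi / p) * cmath.sqrt(gamma + 2 * s2)
    for x in (4 * c2 + lam_p, 4 * c2 - lam_p):
        residual = x ** 2 - 8 * c2 * x + 8 * c2 * (-2 + 4 * c2 - gamma)
        assert abs(residual) < tol
    return lam_p

# arbitrary sample inputs: order p = 7 and a complex commutator value
check_factorisation(7, -1 + 2j)
```

Both choices of sign are checked, matching the two values of $\lambda$.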
\medskip
This factorisation criterion yielding $\mathbb Q(\gamma)=\mathbb Q(\lambda_p)$ is a powerful obstruction for an algebraic integer $\gamma$ to satisfy when $p\neq 2$. In particular we will apply it in the following form, which is easily seen to be equivalent in the case at hand, in fields with one complex place.
\begin{theorem} \label{samedegree} Let $\Gamma=\langle f,g\rangle$ be an arithmetic Kleinian group generated by elliptic elements of order $4$ and $p$, with $p\neq 2$. Then the minimal polynomial for $\lambda_p$ as at (\ref{lp}) has the same degree as the minimal polynomial for $\gamma=\gamma(f,g)$.
\end{theorem}
For $p\not\in\{2,3,4,6\}$ there is an intermediate real field, as $\cos(2\pi/p)\not\in\mathbb Q$ lies in the invariant trace field. We set
\[ L = \mathbb Q(\cos(2\pi/p)),\quad\quad [\mathbb Q(\gamma):\mathbb Q]=[L:\mathbb Q] \times [\mathbb Q(\gamma):L] \]
Now if $\Gamma$ is arithmetic, $k\Gamma$ will be a number field with one complex place if and only if $L(\gamma)$
has one complex place and the quadratic at (\ref{eqn5}) splits into linear factors over
$L(\gamma)$. This implies that, if $\tau$ is any real embedding of
$L(\gamma)$, then the image of the discriminant of (\ref{eqn5}), which is
$32 \cos^2\frac{\pi}{p}(-2\sin^2\frac{\pi}{p} + \gamma)$,
under $\tau$ must be positive. Clearly this is equivalent to requiring that
\begin{equation}\label{eqn6}
\tau( 2\cos^2\frac{\pi}{p} + \gamma) > 0.
\end{equation}
Thus $k\Gamma$ has one complex place if and only if (i) $\mathbb Q(\gamma)$
has one complex
place, (ii) $L \subset \mathbb Q(\gamma)$, (iii) for all real embeddings $\tau$ of $\mathbb Q(\gamma)$,
(\ref{eqn6}) holds and (iv) the quadratic at (\ref{eqn5}) factorises over $\mathbb Q(\gamma)$.
Now, still in the cases where $p> 2$ (\cite[\S 3.6]{MR}),
\begin{equation}\label{eqn7}
A\Gamma = \HS{-1}{ 2\cos^2(\frac{\pi}{p}) \,\gamma}{k\Gamma}.
\end{equation}
The algebra $A\Gamma$ is ramified at
all real places of $k\Gamma$ if and only if, under any real embedding $\tau$
of $k\Gamma$,
\begin{equation}\label{eqn8}
\tau(\gamma) < 0.
\end{equation}
Thus, summarising, we have the following theorem which we will use to determine the possible $\gamma$ values for the groups we seek.
\begin{theorem}[The Identification Theorem] \label{2genthm}
Let $\Gamma = \langle f , g \rangle $ be a non-elementary
subgroup of $\mbox{\rm{PSL}}(2,\mathbb C)$ with $f$ of order $4$ and $g$ of order $p$, $p \geq 3$. Let $\gamma(f,g) = \gamma \in \mathbb C \setminus \mathbb R$. Then $\Gamma$ is an arithmetic Kleinian group if and only
if
\begin{enumerate}
\item $\gamma$ is an algebraic integer,
\item $\mathbb Q(\gamma) \supset L = \mathbb Q(\cos 2 \pi/p)$ and $\mathbb Q(\gamma)$ is a number field with exactly one complex place,
\item if $\tau : \mathbb Q(\gamma) \rightarrow \mathbb R$ such that $\tau |_L = \sigma$, then
\begin{equation}\label{eqn10}
- \sigma( 2\sin^2 \frac{\pi}{p}) < \tau(\gamma) < 0,
\end{equation}
\item the algebraic integer $\lambda_p$ defined at (\ref{lp}) has the same degree as $\gamma$,
\item $\Gamma$ has finite co-volume.
\end{enumerate}
\end{theorem}
\subsection{The possible values of $p$.}
If the generator $f$ has order $4$, then Theorem \ref{2genthm} implies first, that $\gamma$ is an algebraic integer,
secondly, that $\mathbb Q(\gamma)$ has exactly one complex place and thirdly,
that $\mathbb Q(\gamma)$ must contain $L = \mathbb Q(\cos 2 \pi/p)$. Let
\begin{equation} [\mathbb Q(\gamma) : L ] = r
\end{equation}
Since the minimal polynomial for $\gamma$ factors over $L$ we know $n=r\mu=[\mathbb Q(\gamma):\mathbb Q]$, where $\mu=[L:\mathbb Q]$. Further, all real embeddings of $\gamma$ lie in the interval $[-2,0]$.
Next, from the disjoint isometric circles criteria, we obtain the following information if the group $\Gamma=\langle f,g\rangle$ is not freely generated --- a necessary condition if $\Gamma$ is to be arithmetic.
\begin{lemma} \label{modlem} Let $\langle f,g\rangle$ be a discrete group with $o(f)=4$ and $o(g)=p$ and which is not free on these two generators. Let $\gamma=\gamma(f,g)$. Then
\begin{enumerate}
\item $\Re e(\gamma)>-4$ and this is sharp as for each $p\geq 2$, $GT(4,p;\infty)$ has $[f,g]$ parabolic and $\gamma=-4$. This group is free on generators. Further, $GT(4,p;n)$ has $\gamma=-2-2\cos(\frac{\pi}{n})$ which tends to $-4$ with $n$. None of these groups are free on generators, \cite{MM3}.
\item For each $p\geq 2$ we have
\begin{equation}\label{trivial bound} |\gamma|\leq 4 \big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p}).\end{equation}
This estimate is sharp and achieved by the $(4,p,\infty)$ triangle group. The $(4,p,n)$ triangle groups have
\[\gamma=2 \left(\cos(\frac{2 \pi }{n})+2\cos(\frac{\pi }{p}) \left(\sqrt{2} \cos(\frac{\pi }{n})+\cos(\frac{\pi }{p})\right)\right)\]
which tends to $4\big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$ as $n\to\infty$. The $(4,p,n)$-triangle groups are not free on generators.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.} To obtain the bound $|\gamma|\leq 4 \big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$ for each $p$ we simply find the point of maximum modulus on the region $\Omega_p$ using calculus. Next, there are points within $\Omega_p$ whose real part is less than $-4$. We have to show these give rise to freely generated groups. We do this by a cut and paste argument to produce a fundamental domain from the isometric circles, which are no longer disjoint. We can normalise so that the isometric circles of $f$ and of $g$ are
\[ I(f): |z\pm i|=\sqrt{2},\quad I(g): |z\pm i\omega\cot \frac{\pi}{p}|=\frac{1}{\sin\frac{\pi}{p}} \]
and that $\Im m(i\omega)\geq 0$. Let $D_1=\{z:|z+i\omega\cot \frac{\pi}{p}|\leq\frac{1}{\sin\frac{\pi}{p}}\}$ and $D_2=\{z:|z-i\omega\cot \frac{\pi}{p}|\leq\frac{1}{\sin\frac{\pi}{p}}\}$ be the disks bounded by the isometric circles of $g$.
\begin{lemma}\label{fog} Suppose that $f^{-1}(D_1)\cap (D_1\cup D_2)=f(D_2)\cap (D_1\cup D_2)=\emptyset$. Then $\langle f,g \rangle$ is free on generators.
\end{lemma}
\noindent{\bf Proof.} We replace the fundamental domain $D(i,\sqrt{2})\cap D(-i,\sqrt{2})$ for $f$ with the domain
\[ \big([D(i,\sqrt{2})\cap D(-i,\sqrt{2})]\cup D_1\cup D_2\big)\setminus (f^{-1}(D_1) \cup f(D_2)). \]
By construction the exterior of the isometric circles of $g$ lies in this region and this region is a fundamental domain for $f$. Thus the ``ping-pong'' hypotheses are satisfied.
\hfill $\Box$
\medskip
\scalebox{0.5}{\includegraphics[viewport=-100 300 500 730]{Nfog}}
\medskip
\noindent{\bf Figure 3.} {\it Isometric circles of $f$ and $g$ (in red). The blue disks are the images of the leftmost isometric disk of $g$ under $f^{-1}$, and the rightmost under $f$. The modified fundamental domain for $f$ consists of the intersection of the fundamental disks for $f$, together with the fundamental disks for $g$ (red), after deleting their images (the two blue disks).}
\medskip
This cut and paste technique for moving the fundamental domains around to a good configuration can be extended, but quickly becomes too complex to obtain useful information. It is apparent that this is only possible when the angle between the axes of $f$ and $g$ is small compared with the radius of the disk. We want to turn the hypotheses of Lemma \ref{fog} into a restriction on $\gamma$. Things have been set up so that
\[ f(z) = \frac{z+i}{iz+1}. \]
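With this normalisation $f$ lifts (up to scale) to the matrix with rows $(1,i)$ and $(i,1)$, and one can check directly that $f$ has order $4$ in $\mbox{\rm{PSL}}(2,\mathbb C)$. A quick numerical sketch:

```python
def mat_mul(A, B):
    """Product of 2x2 complex matrices given as nested lists."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

F = [[1, 1j], [1j, 1]]      # unnormalised matrix of f(z) = (z+i)/(iz+1)
F2 = mat_mul(F, F)          # [[0, 2i], [2i, 0]]: not scalar, so f^2 != id
F4 = mat_mul(F2, F2)        # [[-4, 0], [0, -4]]: scalar, so f^4 = id in PSL(2,C)

assert F2 == [[0, 2j], [2j, 0]]
assert F4 == [[-4, 0], [0, -4]]
```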
There is also an evident rotational symmetry. We therefore need to assert $f(D)$ is disjoint from the isometric circles of $g$ where $D$ is the left-most isometric circle of $g$. We write
\[\zeta=i\omega\cot \frac{\pi}{p}+\frac{e^{i\theta}}{\sin\frac{\pi}{p}}, \quad \mbox{that is $\zeta\in \partial D$} \]
and we need to show
\[ |f(\zeta)\pm i\omega\cot \frac{\pi}{p}|\geq \frac{1}{\sin\frac{\pi}{p}}. \]
These yield the two inequalities
\[ \left| -i+i w \cot \frac{\pi }{p} +\frac{2}{-i+i w \cot \frac{\pi }{p} +e^{i \theta } \csc \frac{\pi }{p} }\right|\sin \frac{\pi }{p} \geq 1\]
and
\[\left| -i -iw \cot \frac{\pi }{p} +\frac{2}{-i+i w \cot \frac{\pi }{p} +e^{i \theta } \csc \frac{\pi }{p} }\right| \sin \frac{\pi }{p} \geq 1\]
\subsection{First bounds.}
We now make use of these facts,
together with the inequalities that $\gamma$ and its conjugates must satisfy, to determine the
values of the triple $(4,p,r)$ for which there may exist a $\gamma$-parameter
corresponding to an arithmetic Kleinian group which is not free, using the criteria above. Our first estimate is straightforward and is simply used to cut down the set of triples we search through.
\begin{lemma}\label{easybound} Let $\langle f,g\rangle$ be a discrete group with $o(f)=4$ and $o(g)=p$ and which is not free on these two generators. Let $\gamma=\gamma(f,g)$. Then the degree $[\mathbb Q(\gamma):\mathbb Q]\leq 16$.
\end{lemma}
\noindent{\bf Proof.} Let $P(z)$ be the minimal polynomial for $1+\gamma$ of degree $n$. Then $P$ has complex roots $1+\gamma$ and $1+\bar\gamma$ and, as $\mathbb Q(\gamma)$ has one complex place, $n-2$ real roots $r_i$ in the interval $[-1,1]$. As $P$ is irreducible and integral
\begin{eqnarray*}
1 &\leq & |P(0)|^2|P(-1)||P(1)| = |1+\gamma|^4|\gamma|^2|\gamma+2|^2 \prod_{i=1}^{n-2} r_i^2(1-r_i^2) \\
&\leq & \left(7+4 \sqrt{2}\right)^4\left(6+4 \sqrt{2}\right)^2\left(8+4 \sqrt{2}\right)^24^{2-n} \end{eqnarray*}
where we have used the trivial bounds on $|\gamma|$ from Lemma \ref{modlem}. This is not possible if $n\geq 17$. \hfill $\Box$
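The final step is a one-line computation; a sketch finding the largest degree consistent with the displayed inequality:

```python
import math

# C bounds |1+gamma|^4 |gamma|^2 |gamma+2|^2 via the trivial bounds of Lemma (modlem)
C = ((7 + 4 * math.sqrt(2)) ** 4
     * (6 + 4 * math.sqrt(2)) ** 2
     * (8 + 4 * math.sqrt(2)) ** 2)

# the inequality 1 <= C * 4^(2-n) forces 4^(n-2) <= C
max_degree = max(n for n in range(2, 40) if 4 ** (n - 2) <= C)
assert max_degree == 16   # so the inequality fails for all n >= 17
```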
\medskip
There is far more information to be had here when we identify a specific $p$. For instance if $p=4$ and $P$ is the minimal polynomial for $\gamma$, following the above argument we see $r_i\in [-1,0]$ and
\begin{eqnarray*}
1 &\leq & |P(0)||P(-1)| = |\gamma|^2|1+\gamma|^2 \prod_{i=1}^{n-2} |r_i |(1-|r_i|)
\leq \left(10884 + 7696\sqrt{2}\right) 4^{2-n} \end{eqnarray*}
This gives us a total degree bound of at most $9$. In fact this (where the orders of $f$ and $g$ are the same) is typically the worst case, and degree $9$ yields a feasible search space (as per \cite{FR}).
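The $p=4$ arithmetic can be checked the same way; a sketch, using the bounds $|\gamma|\leq 6+4\sqrt{2}$ and $|1+\gamma|\leq 7+4\sqrt{2}$ read off from the previous proof, so that $|\gamma|^2|1+\gamma|^2\leq(6+4\sqrt2)^2(7+4\sqrt2)^2=10884+7696\sqrt2$:

```python
import math

C = (6 + 4 * math.sqrt(2)) ** 2 * (7 + 4 * math.sqrt(2)) ** 2
assert abs(C - (10884 + 7696 * math.sqrt(2))) < 1e-6   # the algebraic identity

# 1 <= C * 4^(2-n) forces 4^(n-2) <= C
max_degree = max(n for n in range(2, 40) if 4 ** (n - 2) <= C)
assert max_degree == 9   # total degree at most 9 when p = 4
```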
\medskip
The field $L$ is totally
real and the embeddings $\sigma : L \rightarrow \mathbb R$ are defined by
\[ \sigma( \cos \frac{2 \pi}{p}) = \cos \frac{2 \pi j}{p}, \hskip15pt (j,p)=1, \;\; j\leq [p/2].\]
Let us denote these embeddings by $\sigma_1, \sigma_2, \ldots , \sigma_{\mu}$,
with $\sigma_1 = {\rm Id}$. Here $\mu=[L:\mathbb Q]$ and from the Online Encyclopedia of Integer Sequences (A055034) we find that
\[ [L:\mathbb Q] = \Phi[p]/2, \]
where $\Phi$ is Euler's function. Since $\gamma$ is complex, our bound from Lemma \ref{easybound} implies that $\Phi[p] \leq 16$. Hence $p\leq 60$, and in fact only $30$ possibilities for $p$ remain.
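The admissible orders can be enumerated directly; a sketch of the count:

```python
def euler_phi(n):
    """Euler's totient function by trial division."""
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

# orders p >= 3 with phi(p) <= 16
admissible = [p for p in range(3, 1000) if euler_phi(p) <= 16]
assert max(admissible) == 60
assert len(admissible) == 30
```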
\subsection{Discriminant bounds.}
We now follow \cite[\S 4.2]{MM} to obtain an estimate on the relative discriminant. The discriminant of the minimal polynomial for $\gamma$ is
\[ disc(\gamma) = |\gamma-\bar\gamma|^2 \prod_{ i=1}^{n-2} |\gamma - r_i|^4 \prod_{1 \leq i < j \leq n-2} (r_i - r_j)^2, \]
where the $n-2$ real roots $r_i\in [-2,0]$. Let
\begin{equation}
\mu = [L:\mathbb Q]
\end{equation}
so that the total degree of the field $\mathbb Q(\gamma)$ is $n=r\mu$.
For $n \geq 2$, let $D_n$ denote the minimum absolute value of the discriminant of
any field of degree $n$ over $\mathbb Q$ with exactly one complex place. For small
values of $n$ the number $D_n$ has been widely investigated (\cite{CDO,Di,DO})
and lower bounds for $D_n$ for all $n$ can be
computed (\cite{Mull,Od,Rodgers,Stark}). In \cite{Od}, the bound
is given in the form $D_n > A^{n-2} B^2 \exp(-E)$ for varying values of
$A,B$ and $E$. Choosing, by experimentation, suitable values from this table
we obtain the bounds shown in Table 1. We will use more precise data later.
\begin{table}[h]
\begin{center}
\begin{tabular}{clcl}
Degree $n$ & Bound & Degree $n$ & Bound \\
2 & 3 &
3 & 27 \\
4 & 275 &
5 & 4511 \\
6 & 92779 &
7 & 2306599 \\
8 & 68856875* &
9 & $0.11063894 \times 10^{10} $ \\
10 & $0.31503776 \times 10^{11}$ &
11 & $0.90315026 \times 10^{12}$ \\
12 & $0.25891511 \times 10^{14}$ &
13 & $0.74225785 \times 10^{15}$ \\
14 & $0.21279048 \times 10^{17}$&
15 & $0.61002775 \times 10^{18}$ \\
16 & $0.17488275 \times 10^{20}$ &
17 & $0.50135388 \times 10^{21}$ \\
18 & $0.14372813 \times 10^{23} $ &
19 & $0.41203981 \times 10^{24}$ \\
20 & $0.11812357 \times 10^{26}$
\end{tabular}
\caption{Discriminant Bounds}
\end{center}
\end{table}
\subsection{Schur's bound.}
We will need to use Schur's bound \cite{S} which gives that,
if $-1 \leq x_1 < x_2 < \cdots < x_r \leq 1$ with $r \geq 3$ then
\begin{equation}\label{eqn31}
\prod_{1 \leq i < j \leq r} (x_i - x_j)^2 \leq M_r = \frac{2^2\,3^3\, \ldots r^r\, 2^2 \, 3^3 \, \ldots (r-2)^{r-2}}{3^3\, 5^5 \, \dots (2r-3)^{2r-3}}.
\end{equation}
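The constant $M_r$ is easy to evaluate exactly; a sketch in rational arithmetic, with a check that $M_3=4$ is attained at the points $-1,0,1$:

```python
from fractions import Fraction

def schur_M(r):
    """M_r = (2^2 3^3 ... r^r)(2^2 3^3 ... (r-2)^(r-2)) / (3^3 5^5 ... (2r-3)^(2r-3))."""
    num = Fraction(1)
    for k in range(2, r + 1):
        num *= Fraction(k) ** k
    for k in range(2, r - 1):
        num *= Fraction(k) ** k
    den = Fraction(1)
    for k in range(1, r - 1):
        den *= Fraction(2 * k + 1) ** (2 * k + 1)
    return num / den

def vandermonde_sq(xs):
    """prod_{i<j} (x_i - x_j)^2."""
    prod = 1
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            prod *= (xs[i] - xs[j]) ** 2
    return prod

assert schur_M(3) == 4
assert vandermonde_sq([-1, 0, 1]) == 4                       # extremal for r = 3
assert vandermonde_sq([-0.8, -0.2, 0.3, 0.9]) <= schur_M(4)  # a sample configuration
```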
The bounds we have at hand give
\begin{equation}\label{disc} D_n \leq disc(\gamma) \leq |\gamma-\bar\gamma|^2 (2+|\gamma|)^{2(n-2)} M_{n-2}, \quad\quad n \geq \Phi[p] \end{equation}
where $\Phi$ is the Euler phi function. However, this can be improved to capture the fact that the real roots of the minimal polynomial for $\gamma$ are not so well distributed as those of the sharp estimate for Schur's formula (roots of Chebyshev polynomials). Following \cite{MM} we
let $\Delta_1 = \delta_{\mathbb Q(\gamma) \mid L}$, the relative discriminant
of the field extension $\mathbb Q(\gamma) \mid L$, and let $\Delta$ denote the
discriminant of the basis $1, \gamma, \gamma^2, \ldots , \gamma^{r-1}$ over
$L$. Then
$$| N_{L \mid \mathbb Q}(\Delta) | \geq | N_{L \mid \mathbb Q}(\Delta_1) |$$ and
\begin{equation}\label{eqn33}
|N_{L \mid \mathbb Q}(\delta_{\mathbb Q(\gamma) \mid L})| = |\Delta_{\mathbb Q(\gamma)}|/\Delta_L^r.
\end{equation}
Next set
\begin{equation}\label{18}
K(p,r) = M_{r-2}[2(\sqrt{2}+ \cos\frac{\pi}{p})^2]^{4(r-2)}\left(\sin\frac{\pi}{p} \right)^{2(r-2)(r-3)}|\gamma-\bar\gamma|^2
\end{equation}
The discriminant $\Delta_p$ of the field $\mathbb Q(\cos \frac{2\pi}{p})$ is given in \cite[(30)]{MM}. Then (see \cite[(31)]{MM}) we have the inequality
\begin{equation}\label{19}
K(p,r) \left(\sin\frac{\pi}{p}\right)^{-2r(r-1)}\left( \frac{\delta_{p}}{8}\right)^{\mu r(r-1)} M_r^{\mu-1} \geq \max\{1, D_n/\Delta_L^r\}
\end{equation}
where here
\[ \delta_p = \left\{ \begin{array}{ll}
1 & {\rm if~}p \neq \pi^{\alpha}, \;\; \pi~{\rm a~prime} \\
\pi & {\rm if~}p = \pi^{\alpha}, \;\; \pi ~{\rm a~prime},
\end{array}
\right. \]
If (\ref{disc}) fails to eliminate a case, then we put together inequalities (\ref{18}) and (\ref{19}) to try to eliminate it. We obtain the following remaining possibilities.
\begin{center}
\begin{tabular}{|c|c|c|l|}
\hline
$p$ & $[\mathbb Q(\gamma):L]$ & $\Delta_p$ & total degree $n$ \\ \hline
3 & 1 & $1$& $n\leq 11$ \\ \hline
4 & 1 & $1$&$n\leq 8$\\ \hline
5 & 2 & $5$&$n=4,6,8$\\ \hline
6 & 1& $1$&$n=2,3,4$ \\ \hline
7 & 3 & $49$&$n=6,9$ \\ \hline
8 & 2 & $8$&$n=4,6,8$ \\ \hline
9 & 3 & $81$&$n=6,9$\\ \hline
10 & 2 & $5$&$n=4,6$\\ \hline
11 & 5 & $14641$&$n=10$ \\ \hline
12 & 2& $12$&$n=4,6,8$ \\ \hline
15 & 4 & $1125$&$n=8$ \\ \hline
18 & 3 & $81$&$n=6,9$ \\ \hline
20 & 4 & $2000$&$n=8$ \\ \hline
24 & 4 & $2304$&$n=8$ \\ \hline
30 & 4 & $1125$& $n=8$ \\ \hline
\end{tabular}
\end{center}
For all but small $p$ we find that the minimal polynomial for $\gamma$ is quadratic or cubic over the base field $\mathbb Q(\cos 2\pi/p)$. We will work through some explicit examples to find all these algebraic integers in a moment, but all these search spaces are feasible with a little work except $p=3$ for which we will find alternative arguments, and $p=4$ which will follow from \cite{FR} once we have better bounds on the shape of the moduli space. The case $p=5$ also looks troublesome. Typically we search for $1+\gamma$ as then all real roots are in $[-1,1]$ and it is easier to get bounds on the symmetric functions of the roots, and therefore the coefficients of the polynomials.
\subsection{$p=7$, $p=9$, $p=11$ \& $p=18$.} There are four cases not yet eliminated where the total degree exceeds $8$: $p=11$ with total degree $n=10$, and $p=7, 9,18$ with total degree $6$ or $9$. Let us work through these cases, first with $p=11$. The minimal polynomial $P$ is quadratic over $\mathbb Q(\cos(2\pi/11))$. It has the complex conjugate pair of roots $\gamma$ and $\bar\gamma$ and $8$ real roots which come in pairs, two in $[-2\sin(2\pi/11),0]$, two in $[-2\sin(3\pi/11),0]$, two in $[-2\sin(4\pi/11),0]$ and two in $[-2\sin(5\pi/11),0]$. Therefore the norm of the relative discriminant is bounded by
\begin{eqnarray*}
\lefteqn{|\gamma-\bar \gamma|^2 4^4 \sin^2(2\pi/11) \sin^2(3\pi/11) \sin^2(4\pi/11) \sin^2(5 \pi/11) \Delta_{11}^2}\quad\quad\\
& = & |\gamma-\bar \gamma|^2 \; \frac{11}{2\sin^2(\pi/11)} \; (14641)^2 \geq D_{10}= 0.31503776 \times 10^{11}.
\end{eqnarray*}
This implies
\[ |\gamma-\bar \gamma|=2\Im m(\gamma) \geq 25.6581, \]
and this of course is a contradiction.
\medskip
Next consider $p=18$. The minimal polynomial $P$ is cubic over $\mathbb Q(\cos(2\pi/18))$. It has the complex conjugate pair of roots $\gamma$ and $\bar\gamma$, a real root in $[-2\sin^2(\pi/18),0]$, and $6$ real roots which come in triples, three in $[-2\sin^2(5\pi/18),0]$ and three in $[-2\sin^2(7\pi/18),0]$. Therefore the discriminant is bounded by
\begin{eqnarray*}
\lefteqn{|\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\pi/18)^4 16^{-2} (2\sin^2(5\pi/18))^6(2\sin^2(7\pi/18))^6 \Delta_{18}^3}\quad\quad\\
& = & |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\pi/18)^4 \times164609 \geq D_{9}= 0.11063894 \times 10^{10} .
\end{eqnarray*}
This implies
\[ |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\pi/18)^4 \geq 6721.32, \]
which gives $|\gamma|\geq 14$ and this is a contradiction.
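The displayed constant $164609$ can be reproduced numerically; a sketch, assuming the interval lengths $2\sin^2(5\pi/18)$ and $2\sin^2(7\pi/18)$ coming from Theorem \ref{2genthm}, and $\Delta_{18}=81$:

```python
import math

D9 = 0.11063894e10   # discriminant bound in degree 9 from Table 1
const = ((2 * math.sin(5 * math.pi / 18) ** 2) ** 6
         * (2 * math.sin(7 * math.pi / 18) ** 2) ** 6
         / 16 ** 2 * 81 ** 3)
assert abs(const - 164609) < 5

lower = D9 / const   # bound on |gamma - bar(gamma)|^2 (|gamma| + 2 sin^2(pi/18))^4
assert 6700 < lower < 6740
```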
We next consider the degree $6$ case to show, in a simple setting, how we can use more refined information about the discriminant to eliminate a value. Following the above argument we would achieve the estimate
\begin{equation}\label{deg6}
{\cal N}(\mathbb Q(\gamma)|L) \times 81^2 \geq D_{6}= 92779.
\end{equation}
The norm ${\cal N}$ is a rational integer but $92779$ is prime. This shows the inequality can never be sharp. It gives the estimate $|\gamma-\bar \gamma|\geq 1.8$.
However, from the useful resource \texttt{https://hobbes.la.asu.edu/NFDB/}, \cite{JR}, we can identify all the polynomials giving number fields with one complex place, degree $6$ and discriminant less than $3\times 10^6$ --- there are $2352$ such fields. This list is proven complete. The one we seek for $\mathbb Q(\gamma)$ has $\mathbb Q(\cos \frac{2\pi}{18})$ as a subfield, the formula $(\ref{deg6})$ shows the discriminant has $3^8$ as a factor, and the Galois group is $A_4\times C_2$. This leaves $19$ candidates. These are identified in the table below with $\delta$ as the distance between the smallest and largest real roots. The polynomials presented are unique up to integer translation.
\medskip
\begin{tabular}{|c|c|c|l|c|}
\hline
& $-\Delta(\mathbb Q(\gamma))$ & $p(z)$ & $\delta$ \\
\hline
1 &$ 3^8 \times 19 $ & $x^6 - 3x^4 - 2x^3 + 3x^2 + 3x - 1.$ & $2.76$\\
\hline
2 &$ 3^9 \times 17$ & $ x^6 - 3x^4 - 5x^3 + 3x + 1.$ & $2.91$ \\
\hline
3 &$ 3^8 \times 107 $ & $ x^6 - 3x^5 + 3x^4 - 9x^2 + 6x - 1.$ & $3.53$ \\
\hline
4 &$ 3^8 \times 2^6 $ & $ x^6 - 3x^2 + 1.$ & $2.47$ \\
\hline
5 &$ 3^9 \times 2^6$ & $x^6 - 3x^4 + 3.$& $ 3.18$\\
\hline
6 &$ 3^9 \times 37 $ & $ x^6 - 3x^5 + 3x^4 - x^3 - 3x^2 + 3x + 1.$ & $2.66$\\
\hline
7 &$ 3^9\times 53 $ & $ x^6 - 3x^4 - 4x^3 + 6x + 1.$ & $3.47$ \\
\hline
8 &$ 3^8 \times 1271 $ & $ x^6 - 3x^4 - 2x^3 - 6x^2 - 6x - 1.$ & $4.00$ \\
\hline
9 &$ 3^8\times 163 $ & $ x^6 - 3x^5 + 3x^4 + 6x^3 - 15x^2 + 6x + 1.$ & $3.10$ \\
\hline
10 &$ 3^8 \times 179 $ & $ x^6 - 3x^5 + 5x^3 - 3x^2 + 3.$ & $3.33$ \\
\hline
11 &$ 3^9\times 73 $ & $x^6 - 8x^3 + 9x + 1$ & $2.59$\\
\hline
12 &$ 3^8 \times 251 $ & $ x^6 - 3x^5 + 6x^2 + 6x - 1.$ &$ 3.24$ \\
\hline
13 &$ 3^ 9\times 89 $ & $ x^6 - 3x^4 - 4x^3 - 9x^2 - 3x + 1.$ & $4.22$\\
\hline
14 &$ 3^ 8\times 271 $ & $ x^6 - 3x^5 - 3x^3 + 6x^2 + 6x + 1.$ & $3.54$ \\
\hline
15 &$ 3^ 8\times 17\times 19 $ & $x^6 - 3x^5 + 5x^3 - 12x^2 + 9x + 3 .$ & $4.47$ \\
\hline
16 &$ 3^ 8\times 17\times 19 $ & $x^6 - 7x^3 - 6x^2 + 18x - 3.$ & $3.32$\\
\hline
17 &$ 3^9\times 109 $ & $x^6 - 3x^5 - 3x^4 + 3x^3 + 9x^2 + 9x + 3.$ &$4.06$\\
\hline
18 &$ 3^ 8\times 359 $ & $ x^6 - 6x^4 - 4x^3 - 3x^2 + 3.$ & $4.94$ \\
\hline
19 &$ 3^ 8\times 431 $ & $x^6 - 3x^5 + x^3 + 18x - 9.$ & $3.93$\\
\hline
\end{tabular}
\medskip
None of these polynomials satisfy the root restrictions we require. In particular $\delta > 2$ implies no integral translate of any of these polynomials has all its real roots in an interval of length $2$. Therefore no integral translate can have the root distributions we have established as necessary. In fact in this case $\delta > 2 \sin^2(7\pi/18)\approx 1.766$ will suffice. We therefore have
\begin{equation}
|\gamma-\bar\gamma|^2\big(2\sin^2 \frac{5\pi}{18}\big)^2 \big(2\sin^2 \frac{7\pi}{18}\big)^2 81^2 \geq {\cal N}(\mathbb Q(\gamma)|L) \times \Delta_p^2 \geq 3\times 10^6.
\end{equation}
Hence
\[ \Im m(\gamma) \geq 5.15829 \]
and this easily implies the group is free on its generators.
\medskip
The remaining cases in this subsection are $p=7,9$. As with $p=18$ there are two cases, according as the extension over $L$ is quadratic or cubic. In the cubic case the Galois group in question is the wreath product $S(3)wr3$ (of order $2^3 3^4$ --- T28 in the notation of \cite{JR}). There are $285$ fields to consider with discriminant less than $3.84\times 10^{10}$. In the case $p=7$, we must have $7^6$ as a factor of the discriminant (the smallest discriminant here is $7^6 \times 22679$) and in the case $p=9$ we must have $3^{12}$ as a factor (the smallest discriminant here is $3^{12}\times53\times 163$). Then
\begin{eqnarray*}
|\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\pi/9)^4 16^{-2} (2\sin^2(2\pi/9))^6(2\sin^2(4\pi/9))^6 \Delta_{9}^3\geq 3.84\times 10^{10}\\
|\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\pi/7)^4 16^{-2} (2\sin^2(2\pi/7))^6(2\sin^2(4\pi/7))^6 \Delta_{7}^3 \geq 3.84\times 10^{10} .
\end{eqnarray*}
The case $p=7$ quickly gives either
\begin{itemize}
\item If $|\gamma-\bar\gamma|\leq 8$, then $|\gamma|\geq 9.16471$, or
\item if $|\gamma-\bar\gamma|\leq 5$, then $|\gamma|\geq 11.6923$.
\end{itemize}
and for $p=9$, simply $|\gamma|\geq 11.1919$. Together these inequalities imply that the associated group is free.
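These cubic eliminations can be checked numerically; a sketch, assuming (as in the $p=18$ case) interval lengths $2\sin^2(j\pi/p)$ for the real conjugates, with $\Delta_7=49$ and $\Delta_9=81$:

```python
import math

def modulus_bound(p, js, delta_p, comm_bound, disc_bound=3.84e10):
    """Lower bound on |gamma| when |gamma - bar(gamma)| <= comm_bound, from
    |g-gbar|^2 (|g| + 2sin^2(pi/p))^4 16^-2 prod_j (2 sin^2(j pi/p))^6 delta_p^3 >= disc_bound."""
    const = delta_p ** 3 / 16 ** 2
    for j in js:
        const *= (2 * math.sin(j * math.pi / p) ** 2) ** 6
    return (disc_bound / const / comm_bound ** 2) ** 0.25 - 2 * math.sin(math.pi / p) ** 2

assert abs(modulus_bound(7, (2, 4), 49, 8) - 9.16471) < 1e-2
assert abs(modulus_bound(7, (2, 4), 49, 5) - 11.6923) < 1e-2
assert abs(modulus_bound(9, (2, 4), 81, 8) - 11.1919) < 1e-2
```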
In the quadratic cases we have a field of degree $6$. We enumerate the degree $6$ fields (actually we are enumerating the polynomials) with one complex place and discriminant less than $1.7 \times 10^6$. For $p=7$ there is a factor of $7^4$ and for $p=9$ a factor of $3^8$. The Galois group is $A_4\times C_2$ and there are $40$ such fields (polynomials). Then
\begin{eqnarray*}
|\gamma-\bar \gamma|^2 (2\sin^2(2\pi/9))^2(2\sin^2(4\pi/9))^2 \Delta_{9}^2\geq 2.2\times 10^6.\\
|\gamma-\bar \gamma|^2 (2\sin^2(2\pi/7))^2(2\sin^2(4\pi/7))^2 \Delta_{7}^2 \geq 2.2\times 10^6 .
\end{eqnarray*}
Together these inequalities yield $|\gamma-\bar \gamma|>10$, and again this implies the group is discrete and free on generators.
\medskip
Apart from the cases $p=3,4$ and $5$ we have now established that the total degree is at most $8$. We will run through searches to verify these results independently, and we will discuss that later.
\subsection{$p=15,20,24,30$, degree $8$.}
These four cases arise from a quadratic polynomial over the base field $L$. In each case we can compute the relative discriminant as
\begin{equation}
|\gamma-\bar\gamma|^2 \; \delta_2^2\; \delta_3^2\; \delta_4^2 \; \Delta_p^2 \geq D_8 = 68856875
\end{equation}
Here $\delta_i=\sigma_i(2\sin^2\frac{\pi}{p})$ as $i$ runs through the three non-identity Galois automorphisms $\sigma_i$ of $L$, since a pair of roots lies in the interval $[-2\sigma_i( \sin^2\frac{\pi}{p}),0]$. For $p=15$ we find
\begin{eqnarray*}
p=15,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{2\pi}{15} \sin^4 \frac{4\pi}{15} \sin^4 \frac{7\pi}{15} \times 1125^2 \geq 68856875.\\
\end{eqnarray*}
Hence $\Im m(\gamma) \geq 5.10151$, and this implies the group is free. For the other cases the discriminant bound is not enough to eliminate them. Thus we seek a stronger bound.
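The $p=15$ computation can be reproduced numerically; a sketch, assuming each pair of real conjugates lying in $[-2\sigma(\sin^2\frac{\pi}{15}),0]$ contributes a factor $(2\sin^2\frac{j\pi}{15})^2=4\sin^4\frac{j\pi}{15}$, with $\Delta_{15}=1125$:

```python
import math

D8 = 68856875
const = 2 ** 6 * 1125 ** 2
for j in (2, 4, 7):
    const *= math.sin(j * math.pi / 15) ** 4
im_gamma = math.sqrt(D8 / const) / 2   # |gamma - bar(gamma)| = 2 Im(gamma)
assert abs(im_gamma - 5.10151) < 1e-3
```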
From the resource \texttt{https://hobbes.la.asu.edu/NFDB/}, \cite{JR}, once again we can identify all the polynomials giving number fields with one complex place, degree $8$ and discriminant less than $2\times 10^9$. When $p=20$, $2^8\times 5^6$ is a factor of the discriminant (the smallest such is $2^8\times 5^6\times 79$), for $p=24$, $2^{16} \times 3^4$ is a factor (the smallest such is $2^{16}\times 3^4 \times 47$), and for $p=30$, $3^4 \times 5^6$ is a factor (the smallest such is $3^4 \times 5^6 \times 59$). The Galois groups are either $[2^4]^4$ ($T27$ in \cite{JR}) when $p=20,30$ or $[2^4]E(4)$ ($T31$ in \cite{JR}) when $p=24$.
These lists are proven complete. When $p=20$ we have the following $12$ possibilities.
\medskip
\begin{tabular}{|c|c|l|c|}
\hline
& $-\Delta$ & $p(z)$ &$\delta$ \\
\hline
1 &$2^85^679$ & $ x^8 - 2x^7 + x^6 - 6x^5 - x^4 + 12x^3 - x^2 - 4x + 1$ & $3.29$\\
\hline
2 &$2^85^719$ & $ x^8 - 4x^7 + 2x^6 + 8x^5 - 10x^4 + 2x^3 + 7x^2 - 6x + 1.$ &$3.55$\\
\hline
3 &$ 2^85^6199 $ & $x^8 - 4x^7 + 3x^6 + 6x^4 - 8x^2 + 2x + 1.$ & $3.30$\\
\hline
4 &$ 2^85^6239 $& $x^8 - 4x^7 + 10x^6 - 16x^5 - x^4 + 24x^3 - 10x^2 - 4x + 1.$ & $2.97$ \\
\hline
5 &$ 2^85^759$ & $x^8 - 2x^7 - 7x^6 + 16x^5 + 5x^4 - 24x^3 + 8x^2 + 8x - 4.$ & $4.75$ \\
\hline
6 &$2^{12}5^619$ & $ x^8 - 2x^7 - 8x^6 + 10x^5 + 11x^4 - 10x^3 - 2x^2 + 4x + 1.$ &$5.53$\\
\hline
7 &$ 2^{12}5^619 $ & $ x^8 - 8x^6 - 4x^5 + 8x^4 - 8x^3 - 12x^2 + 4x + 1.$ & $4.82$\\
\hline
8 &$ 2^85^6359 $ & $x^8 - 9x^6 - 10x^5 + 11x^4 + 30x^3 + 11x^2 - 10x + 1.$ & $5.16$\\
\hline
9 &$ 2^83^45^7$ & $x^8 - 10x^4 + 15x^2 - 5.$ & $2.64$ \\
\hline
10 &$ 2^83^45^7$ & $x^8 - 5x^6 + 5x^4 + 5x^2 - 5.$ & $3.36$ \\
\hline
11&$2^85^6439$ & $x^8 - 2x^7 - 3x^6 + x^4 + 10x^3 + 3x^2 - 6x + 1.$ & $4.08$ \\
\hline
12&$2^85^6479$ & $x^8 - 3x^6 - 11x^4 - 30x^3 - 17x^2 + 1.$ & $4.13$\\
\hline
\end{tabular}
When $p=30$ there are $26$ possibilities (which we do not enumerate).
Finally when $p=24$, there are $7$ possibilities.
\begin{tabular}{|c|c|l|c|}
\hline
& $-\Delta$ & $p(z)$ & $\delta$ \\
\hline
1 &$2^{16}3^4 47 $ & $ x^8 - 8x^5 + x^4 + 12x^3 - 2x^2 - 4x + 1$ & $2.29$\\
\hline
2 &$2^{18}3^4 23 $ & $ x^8 - 6x^6 - 8x^5 + 4x^4 + 12x^3 - 2x^2 - 4x + 1.$ & $4.29$ \\
\hline
3 &$ 2^{16}3^4191 $ & $x^8 - 4x^6 - 11x^4 - 12x^3 + 30x^2 + 36x + 9.$ & $4.42$\\
\hline
4 &$ 2^{16}3^6 23 $& $ x^8 - 4x^7 + 2x^6 + 8x^5 - 14x^4 + 4x^3 + 14x^2 - 8x + 1.$ & $4.06$\\
\hline
5 &$ 2^{16}3^4 239$ & $x^8 - 6x^6 + 5x^4 - 12x^3 - 12x^2 + 1.$ & $4.49$ \\
\hline
6 &$ 2^{18}3^4 71 $ & $ x^8 - 4x^7 + 10x^6 - 12x^5 - 20x^4 + 40x^3 - 18x^2 + 1.$ & $3.84$\\
\hline
7 &$ 2^{20}3^4 23 $ & $ x^8 - 8x^6 - 4x^5 + 8x^4 - 8x^3 - 12x^2 + 4x + 1.$ & $4.82$\\
\hline
\end{tabular}
Hence we may use the bound $2\times 10^9$ to see
\begin{eqnarray*}
p=20,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{3\pi}{20} \sin^4 \frac{7\pi}{20} \sin^4 \frac{9\pi}{20} \times 2000^2 \geq 2\times 10^9.\\
p=24,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{5\pi}{24} \sin^4 \frac{7\pi}{24} \sin^4 \frac{11\pi}{24} \times 2304^2 \geq 2\times 10^9.\\
p=30,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{7\pi}{30} \sin^4 \frac{11\pi}{30} \sin^4 \frac{13\pi}{30} \times 1125^2 \geq 2\times 10^9.
\end{eqnarray*}
Then
\begin{eqnarray*}
p=20,& & \Im m(\gamma) \; \geq 8.75528\\
p=24,& & \Im m(\gamma) \; \geq 5.29112.\\
p=30,& & \Im m(\gamma) \; \geq 6.94947.
\end{eqnarray*}
Thus all these groups are free as well.
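The three bounds can be reproduced numerically; a sketch, assuming (as for $p=15$) that each pair of real conjugates contributes a factor $4\sin^4\frac{j\pi}{p}$, with the values of $\Delta_p$ from the table:

```python
import math

def im_bound(p, js, delta_p, disc_bound=2e9):
    """Lower bound on Im(gamma) from
    |g - gbar|^2 2^6 prod_j sin^4(j pi/p) delta_p^2 >= disc_bound."""
    const = 2 ** 6 * delta_p ** 2
    for j in js:
        const *= math.sin(j * math.pi / p) ** 4
    return math.sqrt(disc_bound / const) / 2

assert abs(im_bound(20, (3, 7, 9), 2000) - 8.75528) < 1e-3
assert abs(im_bound(24, (5, 7, 11), 2304) - 5.29112) < 1e-3
assert abs(im_bound(30, (7, 11, 13), 1125) - 6.94947) < 1e-3
```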
\begin{center}
\scalebox{0.7}{\includegraphics[viewport=80 340 800 800]{Modulispace4p}}
\end{center}
\noindent{\bf Figure 2.} {\it Rough descriptions of the moduli space of $\;\mathbb Z_4*\mathbb Z_p$ for $p=5,6,7,8,10$ and $12$. These moduli spaces contain the convex hull of the plotted points (which are roots of the Farey polynomials of degree up to $120$) and contain about $513$ boundary cusp groups.}
\medskip
From these rough descriptions of the moduli spaces one directly obtains concrete information such as the following. We will give a careful (and sharper) proof of this later in the $(4,5)$ case, and then again when we examine the $(3,4)$ and $(4,4)$ cases in considerable detail. The details for other $p$ are exactly the same except that, since we only give rough bounds, it is only necessary to examine pleating rays of small slope.
\begin{lemma} Let $\Gamma=\langle f,g \rangle$ be an arithmetic lattice generated by $f$ of order $4$ and $g$ of finite order $p\in \{5,6,7,8,10,12\}$. Then
\begin{equation}
|\Im m(\gamma(f,g))|\leq 4.
\end{equation}
This bound is also true if $\Gamma$ is not a lattice and is not free.
\end{lemma}
The sharp bound here is nearly 3.5.
\medskip
Next we consider the following remaining possibilities.
\begin{center}
\begin{tabular}{|c|c|c|l|}
\hline
$p$ & $[\mathbb Q(\gamma):L]$ & $\Delta_p$ & total degree $n$ \\ \hline
3 & 1 & $1$& $n\leq 11$ \\ \hline
4 & 1 & $1$&$n\leq 8$\\ \hline
5 & 2 & $5$&$n=4,6,8$\\ \hline
6 & 1& $1$&$n=2,3,4$ \\ \hline
8 & 2 & $8$&$n=4,6,8$ \\ \hline
10 & 2 & $5$&$n=4,6$\\ \hline
12 & 2& $12$&$n=4,6,8$ \\ \hline
\end{tabular}
\end{center}
\subsection{$p=5,8,10$ and $p=12$.}
These are quadratic, cubic and quartic extensions over the base field which is a degree two extension of $\mathbb Q$. We next examine the limits of the approach we have used so far.
\subsubsection{$p=12$ : quartic.}
As always, we estimate the discriminant, using the relative discriminant. There are $4$ real roots in $[-2\sin^2\frac{5\pi}{12},0]$ and two in $[-2\sin^2\frac{\pi}{12},0]$. We use Schur's bound to deal with the real roots in $[-2\sin^2\frac{5\pi}{12},0]$
\begin{equation}
|\gamma-\bar\gamma|^2 \big(|\gamma|+2\sin^2\frac{\pi}{12}\big)^8 \; \big(\sin^2\frac{5\pi}{12}\big)^{12} \; \frac{2^23^34^42^2}{3^3 5^5}\; 12^4 \geq D_8 \geq 68856875
\end{equation}
We quickly see that in order to get useful information on $\gamma$ we will need to bound the discriminant by something of the order $10^{12}$. The Galois group in question is $[2^4]E(4)$ of order $64$. However there are (provably) $15,648$ polynomials which meet our criteria.
\subsubsection{$p=12$ : cubic.} We have
\begin{equation}
|\gamma-\bar\gamma|^2 \big(|\gamma|+2\sin^2\frac{\pi}{12}\big)^4 \; 16^{-2} \big(2\sin^2\frac{5\pi}{12}\big)^{6} \; 12^3 \geq D_6
\end{equation}
To get useful information we need a discriminant bound of about $10^8$ with a factor of $2^63^3$. The Galois group is $A_4C_2$ ($T6$ in \cite{JR}). There are (provably) $2319$ such polynomials.
\subsubsection{$p=12$ : quadratic.} We have
\begin{equation}
|\gamma-\bar\gamma|^2 \big(2\sin^2\frac{5\pi}{12}\big)^{2} \; 12^2 \geq D_4
\end{equation}
To get useful information we need a discriminant bound of about $4\times 10^5$ with a factor of $2^43^2$. The Galois group is $D4$ ($T3$ in \cite{JR}). There are (provably) $3916$ such polynomials (the smallest discriminant is $2^43^223$ with polynomial $x^4 - 2x^3 - x^2 + 2x - 2$). With a bit of work this method will work here, but for quadratics it is far easier to search.
\medskip
The point of discussing these three cases is that the methods applied above produce too many polynomials to be really useful. It is oftentimes simpler to search for the polynomials directly. Most of the several thousand polynomials identified above will not have their real roots in the intervals we require, nor their complex roots bounded appropriately. Finally, the factorisation condition discussed in \S \ref{factorisation}, which we will come to rely on heavily to eliminate cases, doesn't seem to be directly applicable in any way.
\section{Searches.} In this section we give a few pertinent examples of the searches we performed to eliminate cases. First off is a case we have already dealt with, but which admits some useful features which are easier to follow.
\subsection{The case $p=4$ and $q=7$, degree $2$ over $\mathbb Q(2\cos\frac{\pi}{7})$, total degree $6$. }
The intermediate field is
\[ L=\mathbb Q(2\cos\frac{\pi}{7}) \]
of discriminant $49$. We suppose $\gamma$ is quadratic over $L$. The minimal polynomial for $\gamma$ is irreducible over $\mathbb Z$ and then factors as
\[ (z^2+a_1z+a_0)(z^2+b_1z+b_0)(z^2+c_1z+c_0) \]
where we may suppose that $\gamma$ and $\bar \gamma$ are roots of the first factor. In order for the arithmeticity conditions to be satisfied we must have
\begin{itemize}
\item $z^2+b_1z+b_0$ has two real roots, say $r_\pm= \frac{1}{2}\big(-b_1\pm \sqrt{b_1^2-4b_0}\big)$, in $-[2\sin^2\frac{2\pi}{7},0]$, thus $b_1> 0$
\begin{equation}\label{T1}
-4\sin^2\frac{2\pi}{7} \leq 2r_- = -b_1-\sqrt{b_1^2-4b_0},
\end{equation}
and note the implication $b_1^2\geq 4b_0>0$, strict inequality because of irreducibility. We will always take square roots with positive real part. Also
\item $z^2+c_1z+c_0$ has two real roots (say $s_\pm$) in $-[2\sin^2\frac{3\pi}{7},0]$, thus $c_1>0$ and
\begin{equation}\label{T2}
-4\sin^2\frac{3\pi}{7} \leq 2s_-=-c_1-\sqrt{c_1^2-4c_0},
\end{equation}
with $c_1^2\geq 4c_0>0$.
\end{itemize}
We also know from our criterion regarding free groups used above that
\begin{eqnarray}\label{T3}
0 < |\gamma|^2 & = & a_0 <\left[ 2 \left(2+2 \sqrt{2} \cos \frac{\pi }{7} +\sin \frac{3 \pi }{14} \right)\right]^2 \approx 106.991 \end{eqnarray}
It is efficient to improve this bound on $a_0$ and we can do this using the relative discriminant. The total degree of $\gamma$ is $6$ and $|\Delta(\mathbb Q(2\cos\frac{\pi}{7}))|=49$. As above there are $13$ polynomials yielding fields with one complex place and discriminant divisible by $7^4$ and less than $10^6$. None of these polynomials have the properties we seek. Hence
\[ |N(\gamma-\bar\gamma)| \times 49^2 \geq 10^6 \]
where $N$ is the relative norm,
\[ |N(\gamma-\bar\gamma)| =| \gamma-\bar\gamma|^2(r_+-r_-)^2(s_+-s_-)^2 \leq |\gamma-\bar\gamma|^2 16 \sin^4\frac{2\pi}{7} \sin^4\frac{3\pi}{7} \]
and hence
\begin{equation} \label{T4}
\Im m[\gamma]> 4.39079,
\end{equation}
and although we know this is enough to remove this case, we keep calculating as this bound further implies
\begin{equation} \label{T5}
19.279 \leq |\gamma|^2=a_0 < 106.7
\end{equation}
This is an improvement on (\ref{T3}) alone.
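The estimates (\ref{T4}) and (\ref{T5}) are easy to reproduce; a sketch in floating point (so the last digits may differ slightly from the quoted $4.39079$):

```python
import math

# |N(gamma - bar gamma)| * 49^2 >= 10^6, together with
# |N(gamma - bar gamma)| <= |gamma - bar gamma|^2 * 16 sin^4(2pi/7) sin^4(3pi/7)
# and |gamma - bar gamma|^2 = 4 Im(gamma)^2, gives a lower bound on Im(gamma).
s = 16 * math.sin(2 * math.pi / 7) ** 4 * math.sin(3 * math.pi / 7) ** 4
im_lower = math.sqrt(1e6 / (49**2 * 4 * s))
a0_lower = im_lower**2          # a_0 = |gamma|^2 >= Im(gamma)^2
print(im_lower, a0_lower)
```

This reproduces the lower bounds of (\ref{T4}) and (\ref{T5}) to about three decimal places.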
\medskip
An integral basis for $\mathbb Q(2\cos\frac{\pi}{7})$ can be found from $1$, $2\cos\frac{\pi}{7}$ and $2\cos\frac{3\pi}{7}$. We also use the remaining Galois conjugate $2\cos\frac{5\pi}{7}$. We can therefore write, for rational integers $p_i,q_i,r_i$, $i=0,1$,
\begin{eqnarray*}
a_0= p_0+ 2q_0\cos\frac{\pi}{7} +2r_0 \cos\frac{3\pi}{7},&& a_1= p_1+ 2q_1\cos\frac{\pi}{7} +2r_1 \cos\frac{3\pi}{7},\\
b_0=p_0+ 2q_0\cos\frac{3\pi}{7} +2r_0 \cos\frac{5\pi}{7}, && b_1=p_1+ 2q_1\cos\frac{3\pi}{7} +2r_1 \cos\frac{5\pi}{7} ,\\
c_0=p_0+ 2q_0\cos\frac{5\pi}{7} +2r_0 \cos\frac{\pi}{7}, && c_1=p_1+ 2q_1\cos\frac{5\pi}{7} +2r_1 \cos\frac{\pi}{7}.
\end{eqnarray*}
From (\ref{T1}) and (\ref{T2}) we also have the bounds
\begin{eqnarray*}
0 < b_0< \left(2\sin^2\frac{2\pi }{7}\right)^2, && 0< b_1<4 \sin^2\frac{2\pi }{7} ,\\
0 < c_0 < \left(2\sin^2\frac{3\pi }{7}\right)^2, &&0< c_1<4 \sin^2\frac{3\pi }{7}.
\end{eqnarray*}
We can solve $p_0,q_0,r_0$ in terms of $a_0,b_0,c_0$.
\begin{eqnarray*}
p_0&=& \frac{1}{14} (4 (b_0+c_0+(b_0-c_0) \cos\big(\frac{\pi }{7}\big))+c_0 \text{csc}\big(\frac{\pi }{14}\big)-a_0 (-4+\text{csc}\big(\frac{3 \pi }{14}\big)+4 \sin\big(\frac{\pi }{14}\big))),\\
q_0 & = & \frac{2}{7} (a_0 \cos\big(\frac{\pi }{7}\big)-b_0 \cos\big(\frac{\pi }{7}\big)+b_0 \sin\big(\frac{\pi }{14}\big)-c_0 \sin\big(\frac{\pi }{14}\big)+(a_0-c_0) \sin\big(\frac{3 \pi }{14}\big)),\\
r_0 &=&\frac{2}{7} (-b_0 \cos\big(\frac{\pi }{7}\big)+c_0 \cos\big(\frac{\pi }{7}\big)+a_0 \sin\big(\frac{\pi }{14}\big)-c_0 \sin\big(\frac{\pi }{14}\big)+(a_0-b_0) \sin\big(\frac{3 \pi }{14}\big))
\end{eqnarray*}
From this we deduce that
\begin{eqnarray*}
0\leq p_0 \leq 7,& 0\leq q_0\leq 21, &0\leq r_0\leq 12
\end{eqnarray*}
In exactly the same way we find the bounds
\begin{eqnarray*}
-1\leq p_1 \leq 3,& -7\leq q_1\leq 3, &-4 \leq r_1\leq 2.
\end{eqnarray*}
This gives us now $880880$ cases to consider. Of these there is only one case which passes all the above tests: $p_0=3$, $q_0= 11$, $r_0= 6$, $p_1= 1$, $q_1= -2$ and $r_1= -1$. The first pair of real roots are $-0.89457$ and $-0.462326$, both greater than $-2\sin^2\frac{2\pi}{7} = -1.22252$, while the second pair is $-1.63397$ and $-0.0580492$, both greater than $-2\sin^2\frac{3\pi}{7} =-1.90097$. However, the complex roots $\gamma = 1.52446 \pm 4.81327 i$ are easily seen to be in the space of groups freely generated, as the imaginary part of $\gamma$ is too large.
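This surviving case is easily checked numerically; a sketch in floating point (the hypothetical helper \texttt{embeddings} evaluates the three real embeddings from the integral basis above):

```python
import math

k1, k3, k5 = (2 * math.cos(j * math.pi / 7) for j in (1, 3, 5))

def embeddings(p, q, r):
    # the three real embeddings of p + q*2cos(pi/7) + r*2cos(3pi/7)
    return (p + q * k1 + r * k3, p + q * k3 + r * k5, p + q * k5 + r * k1)

a0, b0, c0 = embeddings(3, 11, 6)
a1, b1, c1 = embeddings(1, -2, -1)

# complex roots of z^2 + a1 z + a0, and the two pairs of real roots
gamma = complex(-a1 / 2, math.sqrt(4 * a0 - a1 * a1) / 2)
roots_b = [(-b1 + s * math.sqrt(b1 * b1 - 4 * b0)) / 2 for s in (1, -1)]
roots_c = [(-c1 + s * math.sqrt(c1 * c1 - 4 * c0)) / 2 for s in (1, -1)]
print(gamma, roots_b, roots_c)
```

The output agrees with the root values quoted above.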
\subsection{$p=12$}
The intermediate field is
\[ L=\mathbb Q(2\cos\frac{2\pi}{4},2\cos \frac{2\pi}{12})=\mathbb Q(\sqrt{3}) \]
\begin{center}
\scalebox{0.8}{\includegraphics[viewport=80 520 500 800]{MapDegree2}}
\end{center}
\noindent{\bf Figure 3.}
{\it The moduli space of discrete groups generated by elliptics of order $p=4$ and $q=12$ obtained from identifying the roots of Farey polynomials. The red points are those $\gamma$ satisfying the criteria:
\begin{enumerate}
\item the associated group is not obviously free, that is
\begin{itemize}
\item $0< \Im m(\gamma(f,g))< 4$,
\item $\Re e(\gamma)\geq -4$, (achieved in the $GT(4,12,\infty)$-generalised triangle group).
\item $\Re e(\gamma)\leq 3(2+\sqrt{3})$, (achieved in the $(4,12,\infty)$-triangle group).
\end{itemize}
\item $\gamma$ is the complex root of a quadratic over $L$.
\item the two real embeddings of $\gamma$ lie in the interval $[-2\sin^2\frac{5\pi}{12},0]$.
\end{enumerate}}
\subsubsection{Degree 2 over $L=\mathbb Q(\sqrt{3})$}
The minimal polynomial for $\gamma$ has the form
\[ p(z)=(z^2+a_1 z+ a_0)(z^2+b_1 z + b_0) \]
We always assume the first polynomial has a complex conjugate pair of roots. Here $a_0,a_1,b_0,b_1\in \mathbb Q(\sqrt{3})$ and $b_i$ is the Galois conjugate of $a_i$. We write
\begin{eqnarray*}
a_0=p_0+q_0\sqrt{3}, &&b_0=p_0-q_0\sqrt{3}\\
a_1=p_1+q_1\sqrt{3}, &&b_1=p_1-q_1\sqrt{3}
\end{eqnarray*}
The first polynomial factor has roots $\gamma,\bar\gamma$ and the second factor has roots $r,s \in [-2\sin^2\frac{5\pi}{12},0]$. Thus
\begin{eqnarray*}
a_0=|\gamma|^2, &&b_0=rs\\
a_1= - 2 \Re e[\gamma], &&b_1=-(r+s)
\end{eqnarray*}
From this we see that
\begin{eqnarray*}
0< a_0 < 9 (7+4 \sqrt{3} ), && 0< b_0 < 4 \sin^4\frac{5\pi}{12} =\frac{7}{4}+\sqrt{3} \\
-3(2+\sqrt{3}) < a_1< 8, && 0< b_1<2+\sqrt{3}.
\end{eqnarray*}
Adding these inequalities provides bounds on $p_i$ and $q_i$.
\begin{eqnarray*}
0 < a_0 + b_0 = 2 p_0 < \frac{37}{4} \left(7+4 \sqrt{3}\right) \\
-3(2+\sqrt{3}) < a_1 + b_1=2p_1 < 10+ \sqrt{3}
\end{eqnarray*}
Hence, as $q_0=(p_0-b_0)/\sqrt{3}$, we find the following bounds.
\begin{eqnarray*}
1\leq p_0 \leq 64, && -5\leq p_1 \leq 5 \\
\big[\frac{p_0-\frac{7}{4}}{\sqrt{3}}\big] \leq q_0 \leq \big[\frac{p_0}{\sqrt{3}}\big], && \big[\frac{p_1-2 }{\sqrt{3}}\big] \leq q_1 \leq \big[\frac{p_1}{\sqrt{3}}\big].
\end{eqnarray*}
This gives us $2,688$ cases to consider. In higher degree searches we will want to break this up, but this presents no computational problems. We have a few conditions to satisfy. First $\gamma$ is complex so $a_1^2<4 a_0$. Next $b_1^2>4 b_0$ for two real roots and additionally, for the two real roots to be in the correct interval,
\[ -1-\frac{\sqrt{3}}{2} < \frac{-b_1}{2} \pm \frac{1}{2}\sqrt{ b_1^2-4 b_0 } <0 \]
This gives us the condition
\[ 2+\sqrt{3} > b_1+ \sqrt{ b_1^2-4 b_0 } \]
These simple tests reduce the search space to $202$ candidates. We next realise that $|\Im m(\gamma)|<3.5$, giving the simple additional test $-49< a_1^2-4a_0 $. There are now only $18$ values for $\gamma$ satisfying our criteria and these are illustrated above in Figure 3. Of these, all but $7$ are well outside our region and can be shown to be free on generators, or alternatively one can check the factorisation as we shall now do. The seven points and their minimal polynomials are as follows.
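The enumeration just described can be sketched as follows (a sketch only; we do not attempt to reproduce the exact candidate counts, merely check that the value $\gamma_2=-1+i\sqrt{1+\sqrt 3}$ listed below, which corresponds to $(p_0,q_0,p_1,q_1)=(2,1,2,0)$, survives the simple tests):

```python
import math

S3 = math.sqrt(3)

survivors = []
for p0 in range(1, 65):
    for q0 in range(math.floor((p0 - 7 / 4) / S3), math.floor(p0 / S3) + 1):
        for p1 in range(-5, 6):
            for q1 in range(math.floor((p1 - 2) / S3), math.floor(p1 / S3) + 1):
                a0, b0 = p0 + q0 * S3, p0 - q0 * S3
                a1, b1 = p1 + q1 * S3, p1 - q1 * S3
                if b0 <= 0 or b1 <= 0:      # real roots lie in a negative interval
                    continue
                if a1 * a1 >= 4 * a0:       # gamma must be complex
                    continue
                d = b1 * b1 - 4 * b0
                if d <= 0:                  # two distinct real roots
                    continue
                if b1 + math.sqrt(d) >= 2 + S3:  # both roots above -2 sin^2(5 pi/12)
                    continue
                survivors.append((p0, q0, p1, q1))

print(len(survivors))
```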
\begin{eqnarray*}
\gamma_1 =& \frac{1}{2} \left(\sqrt{3}+i \sqrt{5+4 \sqrt{3}}\right) ,& 1 + 6 z + z^2 + z^4.\\
\gamma_2 =& -1+i \sqrt{1+\sqrt{3}} , & 1 + 8 z + 8 z^2 + 4 z^3 + z^4.\\
\gamma_3 =& \frac{1}{2} \left(\sqrt{3}+i \sqrt{13+8 \sqrt{3}}\right) ,&4 + 12 z + 5 z^2 + z^4. \\
\gamma_4 =& -1+i \sqrt{3+2 \sqrt{3}}, & 4 + 16 z + 12 z^2 + 4 z^3 + z^4. \\
\gamma_5 =& 1+\sqrt{3}+i \sqrt{3+2 \sqrt{3}} ,&1 + 20 z + 6 z^2 - 4 z^3 + z^4 .\\
\gamma_6 =& \frac{1}{2} +\sqrt{3}+i \frac{1}{2}\sqrt{19+12\sqrt{3}}, & 16 + 32 z + 5 z^2 - 2 z^3 + z^4.\\
\gamma_7 =& \frac{1}{2}\left(3 +\sqrt{3}+i \sqrt{8+6\sqrt{3}}\right), & 13 + 42 z + 4 z^2 - 6 z^3 + z^4.
\end{eqnarray*}
Then we compute the minimal polynomial for each corresponding $\lambda_i$.
\begin{eqnarray*}
\lambda_1: & & 33 - 240 z^2 + 238 z^4 - 16 z^6 + z^8.\\
\lambda_2 : & & 1 - 180 z^2 + 182 z^4 + 12 z^6 + z^8.\\
\lambda_3 : & & 97 - 464 z^2 + 446 z^4 - 16 z^6 + z^8. \\
\lambda_4 : & & 33 - 372 z^2 + 390 z^4 + 12 z^6 + z^8.\\
\lambda_5 : & & 193 - 1004 z^2 + 870 z^4 - 44 z^6 + z^8.\\
\lambda_6 : & & -11 - 588 z^2 + 890 z^4 - 36 z^6 + z^8.\\
\lambda_7 : & &-3 - 1032 z^2 + 1306 z^4 - 64 z^6 + z^8.
\end{eqnarray*}
Since these are all degree eight we conclude these groups are not arithmetic.
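One entry can be spot-checked numerically. Taking the normalisation $\lambda=(\sqrt{3}+1)\sqrt{\gamma-\frac{\sqrt{3}}{2}+1}$ (the expression displayed in the next subsection), the value $\gamma_1$ above satisfies the listed degree-eight polynomial; a sketch:

```python
import cmath, math

S3 = math.sqrt(3)
gamma1 = complex(S3 / 2, math.sqrt(5 + 4 * S3) / 2)   # gamma_1 from the list above
lam = (S3 + 1) * cmath.sqrt(gamma1 - S3 / 2 + 1)

# evaluate the stated minimal polynomial for lambda_1
val = 33 - 240 * lam**2 + 238 * lam**4 - 16 * lam**6 + lam**8
print(abs(val))   # numerically zero
```

Since the polynomial is even, the choice of square-root branch does not matter here.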
\medskip
\subsection{$p=10$} A similar situation occurs when $p=10$.
\[ \left(\sqrt{3}+1\right) \sqrt{\gamma -\frac{\sqrt{3}}{2}+1} \]
After a similar search there are three points we must consider, the complex roots of the three equations
\begin{center}
\begin{tabular}{|c|c|} \hline
$\gamma$ polynomial & $\lambda$ polynomial \\ \hline
$1+4 z+2 z^2+z^3+z^4$ & $-25 - 400 z^2 + 165 z^4 - 10 z^6 + z^8$ \\ \hline
$1+12 z-2 z^2-3 z^3+z^4$ &$ -1775 - 700 z^2 + 485 z^4 - 40 z^6 + z^8 $\\ \hline
$1+11 z+6 z^2+z^3+z^4$ & $-725 - 1000 z^2 + 385 z^4 - 10 z^6 + z^8$ \\ \hline
\end{tabular} \\
\end{center}
We again find there are no arithmetic lattices.
\subsection{$p=8$}
\[ \lambda_8=\sqrt{2 \left(\sqrt{2}+2\right) \gamma +2}\]
\begin{center}
\begin{tabular}{|c|c|} \hline
$\gamma$ polynomial & $\lambda$ polynomial \\ \hline
$2+8 z+8 z^2+4 z^3+z^4$ & $16 - 224 z^2 + 120 z^4 + 8 z^6 + z^8$ \\ \hline
$1+8 z+4 z^2+z^4$ & $ 272 - 704 z^2 + 328 z^4 - 16 z^6 + z^8$ \\ \hline
$1+6 z+7 z^2+2 z^3+z^4$ & $ 496 - 736 z^2 + 256 z^4 + z^8 $ \\ \hline
$1+10 z+13 z^2+6 z^3+z^4$ & $ 112 - 448 z^2 + 160 z^4 + 24 z^6 + z^8$ \\ \hline
$7+14 z+3 z^2-2 z^3+z^4$ &$ 368 - 928 z^2 + 544 z^4 - 32 z^6 + z^8$ \\ \hline
$7+20 z+14 z^2+4 z^3+z^4$ & $ 144 - 672 z^2 + 392 z^4 + 8 z^6 + z^8 $ \\ \hline
$4+20 z+5 z^2-2 z^3+z^4$ & $ 112 - 1120 z^2 + 656 z^4 - 32 z^6 + z^8 $ \\ \hline
$14+28 z+2 z^2-4 z^3+z^4$ & $ 16 - 1088 z^2 + 856 z^4 - 48 z^6 + z^8 $ \\ \hline
$7+40 z+10 z^2-8 z^3+z^4$ & $ 656 - 2784 z^2 + 1480 z^4 - 72 z^6 + z^8 $ \\ \hline
\end{tabular}
\end{center}
\subsection{$p=6$}
As noted earlier, this case is covered by the results of \cite{MM6}. The methods used here are similar and lead to the same results. The candidates we find are
\begin{center}
\begin{tabular}{|c|c|} \hline
$\gamma$ polynomial & $\lambda$ polynomial \\ \hline
$z^2 +6$ & $15 - 6 z + z^2$ \\ \hline
$z^3 +3z^2 +6z+2$ & $9 + 9 z - 3 z^2 + z^3$ \\ \hline
$ z^3 +5z^2 +9z+3 $ & $-9 + 15 z - 3 z^2 + z^3$ \\ \hline
$ z^4+8z^2+6z+1$ & $-9 + 18 z + 12 z^2 - 6 z^3 + z^4$ \\ \hline
\end{tabular}
\end{center}
\subsection{The case $p=4$ and $q=5$, degree $2,3$ and $4$ over $\mathbb Q(2\cos\frac{\pi}{5})$, total degree $4,6$ and $8$. }
In this subsection we meet the first really challenging search where it is likely that there may be some groups to find --- we have already found one with $\gamma$ real. We shall find that there are no more. The intermediate field is
\[ L=\mathbb Q(2\cos \frac{2\pi}{5})=\mathbb Q(\sqrt{5}) \]
We have the following absolute bounds on the modulus of $\gamma$ for groups which are not free;
\begin{equation}
|\gamma|<4 \big(\sqrt{2} \cos\frac{\pi}{5}+1 \big) + 2\cos \frac{2\pi}{5} = 9.19453.
\end{equation}
This is achieved in the $(4,5,\infty)$-triangle group. We also use the bounds
\begin{equation}
|\Im m(\gamma)|\leq 4, \;\;\; \Re e(\gamma) \geq -4.
\end{equation}
We have already established that the total degree is no more than $8$ using the discriminant method. The minimal polynomial for $\gamma$ now factors into two polynomials. Both of these polynomials are either degree $2$, $3$ or $4$ and we consider each case separately.
\subsubsection{Degree 2 over $L=\mathbb Q(\sqrt{5})$}
The minimal polynomial for $\gamma$ has the form
\[ p(z)=(z^2+a_1 z+ a_0)(z^2+b_1 z + b_0) \]
We always assume the first polynomial has a complex conjugate pair of roots. Here $a_0,a_1,b_0,b_1\in \mathbb Q(\sqrt{5})$ are algebraic integers and $b_i$ is the Galois conjugate of $a_i$.
\begin{eqnarray*}
a_0=\frac{p_0+q_0\sqrt{5}}{2}, && b_0=\frac{p_0-q_0\sqrt{5}}{2}\\
a_1=\frac{p_1+q_1\sqrt{5}}{2}, && b_1=\frac{p_1-q_1\sqrt{5}}{2}
\end{eqnarray*}
where $p_i,q_i\in\mathbb Z$ have the same parity.
We observe that
\begin{eqnarray*}
p_i = a_i+b_i,\quad
q_i = (p_i-2b_i)/\sqrt{5}
\end{eqnarray*}
Our number theoretic restrictions on $\gamma$ imply the first polynomial factor has roots $\gamma,\bar\gamma$ and the second factor has roots $r,s \in [-2\sin^2\frac{2\pi}{5},0]$. Thus
\begin{eqnarray*}
a_0=|\gamma|^2, &&b_0=rs\\
a_1= - 2 \Re e[\gamma], &&b_1=-(r+s)
\end{eqnarray*}
From this we see that
\begin{eqnarray*}
0< a_0 < 84.5393 , && 0< b_0 < 4 \sin^4\frac{2\pi}{5} =3.27254 \\
-18.3891 < a_1 < 8, && 0< b_1< 4 \sin^2\frac{2\pi}{5} =3.61803 .
\end{eqnarray*}
It is quite apparent that all these bounds can be improved with work. This is really only helpful when the searches are very large as we will see.
Hence we find the following bounds.
\begin{eqnarray*}
1\leq p_0=a_0+b_0 \leq 87, &&
\big[ \big(p_0-\frac{5}{4}(\sqrt{5}+3)\big)/\sqrt{5} \big] +1 \leq q_0 \leq \big[p_0/\sqrt{5} \big] \\ -18 \leq p_1 =a_1+b_1 \leq 11 ,&&\big[(p_1-\sqrt{5}-1)/\sqrt{5} \big] +1 \leq q_1 \leq \big[p_1/\sqrt{5} \big].
\end{eqnarray*}
We are only interested in complex values for $\gamma$, and $|\Im m(\gamma)|<4$, so $a_1^2-4a_0< -16$. Also the condition that the second polynomial have two real roots in $[-2\sin^2\frac{2\pi}{5},0]$ gives the inequality
\[ 0< b_1^2-4b_0 < (4\sin^2\frac{2\pi}{5} - b_1)^2 \]
It is this root condition that will be the most challenging test for our polynomials. After additionally checking parity conditions this gives $20$ polynomials whose complex roots are illustrated below. They well indicate the problems we face. Figure 4 shows a rough picture of the exterior of the closed space of faithful discrete and free representations of the group $\mathbb Z_4*\mathbb Z_5$ in $PSL(2,\mathbb C)$ and the points we have found.
\begin{center}
\scalebox{0.55}{\includegraphics[angle=-90,viewport=100 300 500 500]{map45}}\\
\end{center}
\noindent{\bf Figure 4.}
{\it The $20$ potential $\gamma$ values. Four identified as not free, or not discrete. One as a subgroup of an arithmetic group.}
\medskip
Of these $20$ points, all but $4$ are well outside our region where non-free groups must lie, and these $16$ are captured by neighbourhoods of pleating rays as per \cite{EMS2}. One of these $16$ is in fact a discrete free subgroup of an arithmetic group and is indicated by an arrow. The remaining $4$ points are also indicated with arrows.
Notice that we are faced with the apparently real possibility of a point lying very close to the boundary and trying to decide if the associated group is free or not. However at this point we cannot confirm any of these groups is discrete (other than those 16 we identified as free) as we have not used, or satisfied, all the arithmetic information needed for the identification theorem. The remaining necessary and sufficient condition we need to check is the factorisation condition. In each case we set (where we have removed the factor $2\cos \frac{\pi }{5}$ from (\ref{lp}) as it is a unit).
\begin{equation}\label{lp}
\lambda_i= \sqrt{2\gamma_i + 4\sin^2\frac{ \pi }{5}}
\end{equation}
We know that it must be the case that $\mathbb Q(\gamma_i)=\mathbb Q(\lambda_i)$ and in particular the minimal polynomials for $\gamma_i$ and $\lambda_i$ must have the same degree, in this case degree $4$. This is quite straightforward to check and we do it for {\it all} $20$ points. The four points that might be in question are listed below with their minimal polynomials.
\medskip
{\small
\noindent\begin{tabular}{|c|c|l|l|}
\hline
& $\gamma $ & min. polynomial $\gamma$ & min. polynomial $\lambda$ \\
\hline
1 & $1.618 + 2.058 i $ & $1 + 8 x + 3 x^2 - 2 x^3 + x^4$ & $181 - 226 x^2 + 87 x^4 - 14 x^6 + x^8$ \\
\hline
2 & $-0.5 + 2.569 i $ & $1 + 7 x + 8 x^2 + 2 x^3 + x^4$ & $171 - 144 x^2 + 37 x^4 - 6 x^6 + x^8$ \\
\hline
3 & $-1.809 + 1.892i$ &$1 + 10 x + 12 x^2 + 5 x^3 + x^4$ & $71 - 70 x^2 + 3 x^4 + x^8$ \\
\hline
4 & $3.736 + 1.996i $ & $1 + 26 x + 7 x^2 - 6 x^3 + x^4$ & $251 - 452 x^2 + 173 x^4 - 22 x^6 + x^8$\\
\hline
\end{tabular}
}
\medskip
Surprisingly there is the point $\gamma=1.61803 + 3.91487i$ (also indicated in Figure 4.) with minimal polynomial $x^4-2 x^3+14 x^2+22 x+1$ and with $\lambda= \sqrt{2\gamma + 4\sin^2\frac{ \pi }{5}}$ having minimal polynomial $x^4-6 x^3+11 x^2+4 x-19$. Both of degree $4$. It is clear that $\gamma\in \mathbb Q(\lambda)$ and hence the group generated by elliptics of order $4$ and $5$ with this value $\gamma$ for the commutator parameter is indeed a subgroup of an arithmetic Kleinian group. However, as noted, it is captured by a pleating ray neighbourhood. We have established the following.
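This factorisation can be confirmed numerically; a sketch (we refine the quoted approximate root by Newton's method and take the principal branch of the square root):

```python
import cmath, math

g = complex(1.61803, 3.91487)     # the quoted root of x^4 - 2x^3 + 14x^2 + 22x + 1
for _ in range(60):
    p = g**4 - 2 * g**3 + 14 * g**2 + 22 * g + 1
    dp = 4 * g**3 - 6 * g**2 + 28 * g + 22
    g -= p / dp                   # Newton step

lam = cmath.sqrt(2 * g + 4 * math.sin(math.pi / 5) ** 2)
val = lam**4 - 6 * lam**3 + 11 * lam**2 + 4 * lam - 19
print(abs(val))   # numerically zero, so lambda does have degree 4
```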
\begin{corollary} There are no arithmetic Kleinian groups generated by elements of order $4$ and $5$ with invariant trace field of degree $4$.
\end{corollary}
We now turn to the case of degree $6$, but offer fewer details. We do note that the moduli space description and the capturing pleating ray neighbourhoods are independent of the degree of $\mathbb Q(\gamma)$, so we may use the same picture to remove possible values of $\gamma$.
\subsubsection{Degree 3 over $L=\mathbb Q(\sqrt{5})$}
The minimal polynomial for $\gamma$ has the form
\begin{equation}\label{poly} p(z)=(z^3+a_2 z^2 +a_1 z+ a_0)(z^3+b_2 z^2+b_1 z + b_0)
\end{equation}
We continue to assume the first polynomial has a complex conjugate pair of roots. We write, for $p_i$ and $q_i$ of the same parity,
\begin{eqnarray*}
2a_0=p_0+q_0\sqrt{5}, &&2b_0=p_0-q_0\sqrt{5}\\
2a_1=p_1+q_1\sqrt{5}, &&2b_1=p_1-q_1\sqrt{5} \\
2a_2=p_2+q_2\sqrt{5}, &&2b_2=p_2-q_2\sqrt{5}
\end{eqnarray*}
The first polynomial factor has roots $\gamma,\bar\gamma,r$ with $r\in (-2\sin^2\frac{\pi}{5},0)$ and the second factor has real roots $r_1,r_2,r_3 \in [-2\sin^2\frac{2\pi}{5},0]$. Thus
\begin{eqnarray*}
a_0=-|\gamma|^2 r, && b_0=-r_1r_2r_3.\\
a_1= |\gamma|^2+2 r \Re e[\gamma], &&b_1=r_1r_2+r_2r_3+r_1r_3.\\
a_2= - 2 \Re e[\gamma]-r, &&b_2=-(r_1+r_2+r_3).
\end{eqnarray*}
From this, with some easy estimates, we see that
\begin{eqnarray*}
0< a_0 < 58.416, && 0< b_0 < 5.921\\
-1 < a_1< 97.246 , && 0< b_1<9.82 \\
-18.39 < a_2< 8.691, && 0< b_2<5.428
\end{eqnarray*}
As before, adding these inequalities provides bounds on $p_i$ and $q_i$.
\begin{eqnarray*}
1\leq p_0 \leq 65, \quad
0\leq p_1 \leq 107, \quad
-18 \leq p_2 \leq 14 .
\end{eqnarray*}
\begin{eqnarray*}
\left[ \frac{ p_0-12}{\sqrt{5}} \right]+1 \leq \frac{ p_0-2b_0}{\sqrt{5}}= q_0 \leq \left[ \frac{ p_0}{\sqrt{5}} \right] \\
\left[ \frac{ p_1-20}{\sqrt{5}} \right]+1 \leq \frac{p_1-2b_1}{\sqrt{5}}= q_1\leq \left[ \frac{ p_1}{\sqrt{5}} \right] \\
\left[ \frac{ p_2-11}{\sqrt{5}} \right]+1 \leq \frac{p_2-2b_2}{\sqrt{5}}= q_2 \leq \left[ \frac{ p_2}{\sqrt{5}} \right]
\end{eqnarray*}
Checking parity alone gives us $6793308$ cases to consider. Again we have a few conditions to satisfy.
\begin{enumerate}
\item The first factor must be negative at $-2\sin^2\frac{\pi}{5}$.
\[ a_0+\frac{1}{4} (\sqrt{5}-5) a_1-\frac{5}{8}\big((\sqrt{5}-3) a_2 -2 \sqrt{5}+5\big) <0 \]
\item The second factor must be negative at $-2\sin^2\frac{2\pi}{5}$ and have derivative with two roots in the interval $[-2\sin^2\frac{2\pi}{5},0]$
\begin{eqnarray*}
b_0-\frac{1}{4} (\sqrt{5}+5)b_1+\frac{5}{8} \left((\sqrt{5}+3)b_2-2 \sqrt{5}-5\right)<0\\
\frac{3}{4} (\sqrt{5}+5 )> b_2+ \sqrt{b_2^2-3b_1}
\end{eqnarray*}
\item The discriminant of the first factor is negative and the second factor positive.
\begin{eqnarray*}
0 > -27 a_0^2+a_1^2 \left(-4 a_1+a_2^2\right)+2a_0 \left(9 a_1a_2-2 a_2^3\right) \\
0 < -27 b_0^2+b_1^2 \left(-4 b_1+b_2^2\right)+2b_0 \left(9 b_1b_2-2 b_2^3\right)
\end{eqnarray*}
\end{enumerate}
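These tests translate directly into code; a sketch (the helper names are hypothetical, and \texttt{from\_roots} builds a factor $\prod_i(z+r_i)$ from prescribed roots $-r_i$ purely to sanity-check the conditions):

```python
import math

E = 2 * math.sin(2 * math.pi / 5) ** 2    # the endpoint 2 sin^2(2 pi/5)

def second_factor_ok(b2, b1, b0):
    """Necessary conditions for z^3 + b2 z^2 + b1 z + b0 to have
    three real roots in [-2 sin^2(2 pi/5), 0]."""
    if b0 <= 0 or b1 <= 0 or b2 <= 0:
        return False
    if (-E) ** 3 + b2 * E * E - b1 * E + b0 >= 0:   # negative at the left endpoint
        return False
    d = b2 * b2 - 3 * b1                # derivative has two roots in the interval
    if d < 0 or b2 + math.sqrt(d) >= 3 * E:
        return False
    disc = -27 * b0**2 + b1**2 * (-4 * b1 + b2**2) + 2 * b0 * (9 * b1 * b2 - 2 * b2**3)
    return disc > 0                     # three distinct real roots

def from_roots(r1, r2, r3):   # z^3 + b2 z^2 + b1 z + b0 with roots -r1, -r2, -r3
    return (r1 + r2 + r3, r1 * r2 + r1 * r3 + r2 * r3, r1 * r2 * r3)
```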
These tests were passed by $114$ candidates, for which we numerically computed the roots to identify $\gamma$ and checked that all requirements were satisfied. This time there were only three points clearly lying outside the discrete free space. However, we computed the minimal polynomial in $\lambda$ for all these cases and found it to be of degree $12$. This proves
\begin{corollary} There are no arithmetic Kleinian groups generated by elements of order $4$ and $5$ with invariant trace field of degree $6$.
\end{corollary}
\subsubsection{Degree 4 over $L=\mathbb Q(\sqrt{5})$}
The minimal polynomial for $\gamma$ has the form
\begin{equation}\label{poly4} p(z)=(z^4+a_3z^3+a_2 z^2 +a_1 z+ a_0)(z^4+b_3z^3+b_2 z^2+b_1 z + b_0)
\end{equation}
We continue to assume the first polynomial has a complex conjugate pair of roots. We write, for $p_i$ and $q_i$ of the same parity,
\begin{eqnarray*}
2a_0=p_0+q_0\sqrt{5}, &&2b_0=p_0-q_0\sqrt{5}\\
2a_1=p_1+q_1\sqrt{5}, &&2b_1=p_1-q_1\sqrt{5} \\
2a_2=p_2+q_2\sqrt{5}, &&2b_2=p_2-q_2\sqrt{5} \\
2a_3=p_3+q_3\sqrt{5}, &&2b_3=p_3-q_3\sqrt{5}
\end{eqnarray*}
Following our earlier notation, direct estimates yield
\begin{eqnarray*}
0< b_0 < 10.71, \quad
0< b_1<23.69, \quad 0< b_2 <19.64, \quad 0< b_3 < 7.24 \\
0< a_0< 40.364 , \quad -9.382<a_3<18.39 .
\end{eqnarray*}
It is worth a little time to improve the obvious bounds on $a_1$ and $a_2$. The polynomial
\[ q(x)=x^4+a_3x^3+a_2x^2+a_1x+a_0 \]
has a complex conjugate pair of roots and two roots in the interval $[-2\sin^2\frac{\pi}{5},0]$. Thus $q'(0)=a_1>0$ and $q'(-2\sin^2\frac{\pi}{5})<0$. These conditions are not sufficient. Using elementary calculus, which we leave the reader to consider, one also obtains
\[ 0 \leq a_1 <112.69, \quad\quad -1.76 < a_2 < 88.03 \]
Hence
\begin{eqnarray*}
1\leq p_0 \leq 40, \quad
0\leq p_1 \leq 114, \quad
-1 \leq p_2 \leq 90, \quad -9 \leq p_3 \leq 21.
\end{eqnarray*}
\begin{eqnarray*}
\left[ \frac{ p_0-21.5}{\sqrt{5}} \right]+1 \leq \frac{ p_0-2b_0}{\sqrt{5}}= q_0 \leq \left[ \frac{ p_0}{\sqrt{5}} \right] \\
\left[ \frac{ p_1-48}{\sqrt{5}} \right]+1 \leq \frac{p_1-2b_1}{\sqrt{5}}= q_1\leq \left[ \frac{ p_1}{\sqrt{5}} \right] \\
\left[ \frac{ p_2-40}{\sqrt{5}} \right]+1 \leq \frac{p_2-2b_2}{\sqrt{5}}= q_2 \leq \left[ \frac{ p_2}{\sqrt{5}} \right] \\
\left[ \frac{ p_3-15}{\sqrt{5}} \right]+1 \leq \frac{p_3-2b_3}{\sqrt{5}}= q_3 \leq \left[ \frac{ p_3}{\sqrt{5}} \right]
\end{eqnarray*}
This search is now about $10^{10}$ cases after checking parity. To make this search more efficient we need to find tests on coefficients as soon as possible in the process. The most obvious target is the fact that the polynomial
\[ p(z)=z^4+b_3z^3+b_2z^2+b_1z+b_0 \]
has four real roots in the interval $I_5=[-2\sin^2\frac{2\pi}{5},0]$. Then
\begin{eqnarray*}
p'(z) & = & 4z^3+3 b_3z^2+2 b_2z +b_1, \quad \mbox{has three real roots in $I_5$} \\
p''(z) & = & 12z^2+6 b_3z+2 b_2, \quad \mbox{has two real roots in $I_5$} \\
p'''(z) & = & 24z+6 b_3, \quad \mbox{has a real root in $I_5$}
\end{eqnarray*}
This last condition gives $-2\sin^2\frac{2\pi}{5}<-b_3/4<0$, which implies $0<b_3< 8\sin^2\frac{2\pi}{5}$ which we have by construction. The condition on $p''(z)$ implies
\[ 0< \frac{b_3}{4} \pm \frac{\sqrt{9 b_3^2-24 b_2}}{12} < 2 \sin^2\frac{2\pi}{5} \]
The content here is that
\begin{eqnarray*}
0 < & 3 b_3^2-8 b_2 & < 3\big(8 \sin^2\frac{2\pi}{5}-b_3\big)^2
\end{eqnarray*}
Now the cubic $p'(z)$ has three real roots in $I_5$. Its discriminant must satisfy
\[ 108 b_1^2+27b_1b_3^3+32 b_2^3<108 b_1b_2b_3+9b_2^2 b_3^2 \]
and $p'(-2\sin^2\frac{2\pi}{5})<0$,
\[ b_1-\frac{1}{2} (\sqrt{5}+5 )b_2+\frac{5}{8} (3 \sqrt{5} b_3+9b_3-8 \sqrt{5}-20 ) <0 \]
Finally, for the quartic itself we must have the discriminant positive and at least one real root.
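In code, the cascade of conditions on $p$, $p'$, $p''$, $p'''$ reads as follows (a sketch; the helper names are hypothetical, and \texttt{quartic\_cascade\_ok} collects only the necessary conditions derived above):

```python
import math

E = 2 * math.sin(2 * math.pi / 5) ** 2     # I_5 = [-E, 0]

def quartic_cascade_ok(b3, b2, b1, b0):
    """Necessary conditions for z^4 + b3 z^3 + b2 z^2 + b1 z + b0
    to have four real roots in I_5."""
    if min(b0, b1, b2, b3) <= 0:
        return False
    if b3 >= 4 * E:                        # p''' has its root in I_5
        return False
    d2 = 3 * b3 * b3 - 8 * b2              # p'' has two roots in I_5
    if not 0 < d2 < 3 * (4 * E - b3) ** 2:
        return False
    # p' has three real roots (its discriminant condition) ...
    if 108 * b1**2 + 27 * b1 * b3**3 + 32 * b2**3 >= 108 * b1 * b2 * b3 + 9 * b2**2 * b3**2:
        return False
    # ... and p' is negative at the left endpoint -E
    return -4 * E**3 + 3 * b3 * E * E - 2 * b2 * E + b1 < 0

def from_roots(rs):   # coefficients of prod(z + r) over r in rs, for sanity checks
    b3 = sum(rs)
    b2 = sum(rs[i] * rs[j] for i in range(4) for j in range(i + 1, 4))
    b1 = sum(math.prod(rs) / r for r in rs)
    return b3, b2, b1, math.prod(rs)
```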
\medskip
After this search we do not yet have candidate polynomials, just a much shorter list of possibilities. It is hard to assert that the two real roots of the first polynomial lie in the correct interval without actually computing them. Now that we have a shorter list of possibilities we do just that. We compute the roots of $q(x)=x^4+a_3x^3+a_2x^2+a_1x+a_0$ and check that the imaginary parts of the complex roots are smaller than $5$ in absolute value and that the real roots lie in the shorter interval. This left us with $9$ possibilities.
{\small \begin{eqnarray*}
\frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(45+19 \sqrt{5}\right) z+\frac{1}{2} \left(56+22 \sqrt{5}\right) z^2+\frac{1}{2} \left(-9-7 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(18+8 \sqrt{5}\right)+\frac{1}{2} \left(67+29 \sqrt{5}\right) z+\frac{1}{2} \left(56+22 \sqrt{5}\right) z^2+\frac{1}{2} \left(-9-7 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(37+15 \sqrt{5}\right) z+\frac{1}{2} \left(40+14 \sqrt{5}\right) z^2+\frac{1}{2} \left(-6-6 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(44+18 \sqrt{5}\right) z+\frac{1}{2} \left(47+17 \sqrt{5}\right) z^2+\frac{1}{2} \left(-6-6 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(3+\sqrt{5}\right)+\frac{1}{2} \left(28+10 \sqrt{5}\right) z+\frac{1}{2} \left(38+12 \sqrt{5}\right) z^2+\frac{1}{2} \left(-3-5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(18+8 \sqrt{5}\right)+\frac{1}{2} \left(67+29 \sqrt{5}\right) z+\frac{1}{2} \left(70+28 \sqrt{5}\right) z^2+\frac{1}{2} \left(16+4 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(3+\sqrt{5}\right)+\frac{1}{2} \left(43+17 \sqrt{5}\right) z+\frac{1}{2} \left(75+29 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(58+24 \sqrt{5}\right) z+\frac{1}{2} \left(86+34 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(55+23 \sqrt{5}\right) z+\frac{1}{2} \left(90+36 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4
\end{eqnarray*}
}
All of these cases are well within the space of free representations, but in fact none of them satisfied the factorisation criterion.
\begin{theorem} There are no arithmetic Kleinian groups generated by elements $f$ of order $4$ and $g$ of order $5$ with $\gamma(f,g)\not\in\mathbb R$.
\end{theorem}
In fact we have proven quite a bit more, as we have shown that every subgroup of an arithmetic Kleinian group generated by $f$ of order $4$ and $g$ of order $5$, if discrete and if $\gamma$ is complex, is free. We therefore have the following theorem.
\begin{theorem} Let $\Gamma$ be an arithmetic Kleinian group. Suppose $f$ of order $4$ and $g$ of order $5$ lie in $\Gamma$. Then either $\langle f,g\rangle$ is free, or $\Gamma$ contains $Tet(4,5;3)$ with finite index.
\end{theorem}
\section{$p=2,3$ and $4$.}
Apart from the case $p=6$ already covered, it is here that we will find many examples. Unfortunately there is not necessarily any intermediate field to help us in our searches. The fact that earlier we had a polynomial factor with all real roots was enormously helpful. Therefore the cases here are of a quite different nature. As we will see, the cases $p=2,4$ can be dealt with together as they will be commensurable, and the polynomials we seek are largely covered by the results of Flammang and Rhin \cite{FR}. The complexity of that case is in getting a very accurate description of the moduli space. That leaves the case $p=3$ which we now deal with.
\subsection{$p=3$.}
We first identify a useful total degree bound. This is obtained as a modification of the method of auxiliary functions introduced by Smyth and used in this context by Flammang and Rhin \cite{FR}. We will have more to say about this a bit later.
Let $\alpha = 1+\gamma$ and note that then
\begin{equation}\label{alphabounds} |\alpha-1|<3+2 \sqrt{2}, \quad |\alpha|<4+2 \sqrt{2} \end{equation}
\bigskip
Let $t_0$ be the unique solution to the equation
\[ \left|\frac{t}{1+t}\right|^t\frac{1}{1+t}=\frac{3}{2^{t+1}} \]
Then $t_0\approx 4.28291$. A little calculus reveals that $|x|^{t_0}|1-x|\leq 0.0770561$ on the interval $[-\frac{1}{2},1]$. The graph is illustrated below.
\scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{Graph1}}\\
\medskip
\noindent {\bf Figure 5.} {\it The graph of $|x|^t|1-x|$ with $t=4.28291$.}
\medskip
Let
\[ q_\alpha(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0 = (z-\alpha)(z-\bar\alpha)(z-r_1) \cdots(z-r_{n-2}) \]
be the minimal polynomial for $\alpha$. Then, as $r_i\in [-\frac{1}{2},1]$,
\begin{eqnarray} |q_\alpha(0)|^{t_0}|q_\alpha(1)| &=& |\alpha|^{2t_0}|\alpha-1|^2 \prod_{i=1}^{n-2} |r_i|^{t_0} |1-r_{i}|\nonumber \\
& \leq & (3+2 \sqrt{2})^2\Big(4+2 \sqrt{2}\Big)^{2t_0} (0.0770561)^{n-2}. \label{a0bound}
\end{eqnarray}
As the left hand side here is greater than one, we deduce that $n\leq 9$. Indeed if $q_\alpha(0)\neq \pm 1$, then $n\leq 8$. In fact we have the following lemma which will be useful in our high degree searches.
\begin{lemma}\label{*lemma} Let $\mathbb Q(\alpha)$ be a field with one complex place, $\sigma(\alpha)\in [-\frac{1}{2},1] $ for all real embeddings $\sigma$, and satisfying (\ref{alphabounds}). Then the degree of this field is no more than $9$, and if it is $9$, then $\alpha$ is a unit, and $1\leq q(1) \leq 7$, where $q$ is the minimal polynomial for $\alpha$. If the degree is $8$, then $1\leq |q(0)|\leq 2$, and $|q(0)|=2$ implies $|q(1)|\leq 5$. Further, in the degree $9$ case we also have $|\alpha|\geq 5.28769$, and in the degree $8$ case $|\alpha|\geq 4.1139$.
\end{lemma}
For the last two bounds, we also use the fact that $|\alpha-1|=|\gamma|$, and our bounds on the values $|\gamma|$ when $\gamma=\gamma(f,g)$ and $\langle f,g\rangle$ is not free.
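The constants here are reproduced by a few lines of code; a sketch (bisection for $t_0$, then the degree bound read off from (\ref{a0bound})):

```python
import math

# t0 solves (t/(1+t))^t / (1+t) = 3 / 2^(t+1); log form of the difference below
def h(t):
    return t * math.log(t / (1 + t)) - math.log(1 + t) - math.log(3) + (t + 1) * math.log(2)

lo, hi = 1.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
t0 = (lo + hi) / 2

M = 1.5 * 0.5**t0     # max of |x|^t0 |1-x| on [-1/2, 1], attained at x = -1/2
C = (3 + 2 * math.sqrt(2)) ** 2 * (4 + 2 * math.sqrt(2)) ** (2 * t0)
n_max = 2 + math.floor(math.log(C) / -math.log(M))   # largest n with C * M^(n-2) >= 1
print(t0, M, n_max)
```

This recovers $t_0\approx 4.28291$ and the degree bound $n\leq 9$.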
\bigskip
Having these degree bounds now allows us to search to find all the possibilities for the values $\gamma$ satisfying the criteria determined by the Identification Theorem \ref{2genthm}. Some of these searches are at first sight barely feasible, for instance the degree $9$ case. However, the factorisation condition described earlier significantly reduces the possibilities by imposing strong parity conditions on the coefficients of the minimal polynomial of
\begin{equation} \lambda=\sqrt{2\alpha+1}=\sqrt{2\gamma+3}, \quad \mathbb Q(\lambda)=\mathbb Q(\alpha)=\mathbb Q(\gamma)
\end{equation} as we now describe. We discuss briefly the low degree cases of degree $2$ and $3$ as these are straightforward. We then go into rather more detail in degrees $4$ and $5$.
\section{The case of degree 2.}
We have $\gamma^2+b\gamma+c=0$, and $\lambda=\sqrt{2\gamma+3}$ must also be a quadratic integer by the factorisation condition. Also, using an easier bound on the size of the moduli space, $\Im m(\gamma)<3$ and $-4<\Re e(\gamma) < 6$, and as $\gamma$ must be complex, $4c>b^2$. This gives us $\gamma=\frac{1}{2}(-b+\sqrt{b^2-4c})$ and hence
\[ 8> b > -12, \quad 0> b^2-4c >- 36 \]
This gives $162$ cases to consider, many of which are not in the moduli space. However, once we implement the factorisation test that the minimal polynomial of $\lambda$ has degree $2$, this quickly reduces to the three complex values
$-2+\sqrt{2}i$, $2i$ and $-3+2i$. The first, $-2+\sqrt{2}i$, is found from $(3,0)$, $(4,0)$ orbifold Dehn surgery on the Whitehead link. The second, $2i$, is the associated Heckoid group with $(W_{3/4})^2=1$, and the last group is free on its generators. One can establish this latter fact directly, or it is a consequence of the results described in Figure 6 below.
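The enumeration just described is easy to reproduce. In the sketch below (our own reconstruction, not the paper's code) the factorisation test is implemented numerically as the requirement that $\lambda=\sqrt{2\gamma+3}$ have integer trace and norm, i.e. that $\lambda$ be a quadratic integer:

```python
import cmath
import math

def is_quadratic_integer(z, tol=1e-9):
    # A non-real z is a quadratic integer iff it satisfies a monic
    # integer quadratic z^2 - t z + n = 0 with t = 2 Re z, n = |z|^2.
    t, n = round(2 * z.real), round(abs(z) ** 2)
    return abs(2 * z.real - t) < tol and abs(abs(z) ** 2 - n) < tol

values = set()
for b in range(-11, 8):              # -12 < b < 8
    for c in range(1, 50):
        disc = b * b - 4 * c
        if not -36 < disc < 0:       # gamma complex with Im(gamma) < 3
            continue
        gamma = complex(-b, math.sqrt(-disc)) / 2   # root with Im > 0
        lam = cmath.sqrt(2 * gamma + 3)
        if is_quadratic_integer(lam):
            values.add((b, c))

# (b, c) = (4, 6), (0, 4), (6, 13), i.e. gamma = -2 + sqrt(2) i, 2i, -3 + 2i
```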
\section{The case of degree 3.}
We consider the polynomial $p(z)=z^3+az^2+bz+c$ which must have exactly one real root $r\in [-3/2,0]$. Then
\[ p(z)= -\bar \gamma \gamma r + (\bar \gamma \gamma + \bar\gamma r + \gamma r )z -
(\bar\gamma + \gamma + r)z^2+ z^3 \]
With our bounds on $r$ and $\gamma$ we find
\[ -9 \leq a \leq 10, \quad 1\leq b \leq 25, \quad 1\leq c \leq 36,\]
As $p(0)=c>0$, $p'(0)=b>0$, $p(-3/2)<0$ and $p'(-3/2)>0$, and the discriminant must be negative, a simple loop with these tests finds about $5,500$ candidates. Once the factorisation condition is applied to these potential candidates, only $17$ remain. This is illustrated below with the $4$ points outside the moduli space indicated. Proving that these are the only such points in the complement of the moduli space of groups free on generators follows from the arguments we provide in the next case of degree $4$, which will give a provable computational description of these spaces. \\
\scalebox{0.5}{\includegraphics[angle=-90,viewport=10 80 550 550]{degree3map}}\\
\noindent{\bf Figure 6.} {\it The possible $5,500$ candidate $\gamma$ values in the case of the degree $3$ search. Of these only $17$ satisfy the factorisation condition and only $4$ are in the complement of the moduli space of groups free on generators of order $3$ and $4$.}
\bigskip
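For concreteness, the simple loop just described can be sketched as follows. This is a reconstruction, and the precise candidate count depends on the exact bounds and region tests used, so we only illustrate the ingredients here:

```python
def disc3(a, b, c):
    # discriminant of z^3 + a z^2 + b z + c; negative iff one real, two complex roots
    return 18 * a * b * c - 4 * a ** 3 * c + a * a * b * b - 4 * b ** 3 - 27 * c * c

def p(z, a, b, c):
    return z ** 3 + a * z * z + b * z + c

candidates = [
    (a, b, c)
    for a in range(-9, 11)
    for b in range(1, 26)          # p'(0) = b > 0 and p(0) = c > 0 built in
    for c in range(1, 37)
    if p(-1.5, a, b, c) < 0        # the real root lies above -3/2
    and 6.75 - 3 * a + b > 0       # p'(-3/2) > 0
    and disc3(a, b, c) < 0
]
```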
We now describe the degree $4$ and $5$ cases in some detail for two reasons. First, in degree $4$ we find several cases which do turn out to generate arithmetic lattices, and then in degree $5$ there are none, but we need to develop more ideas to limit the search spaces. Finally we carefully eliminate the highest degree search, degree $9$. We have searched the cases of degree $6$, $7$ and $8$; they all go the same way and there is nothing to report.
\section{The case of degree 4.}
The minimal polynomial for $\gamma$ and hence $\lambda$ has degree $4$,
\[\lambda^4+a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0 = 0 \]
Thus
\[(2\alpha+1)^2+a_2(2\alpha+1)+a_0 = -\sqrt{2\alpha+1}(a_1 +a_3(2\alpha+1)) \]
Squaring both sides and rearranging gives us a polynomial for $\alpha$.
\begin{eqnarray*}
0& = & 16 \alpha ^4+\left(-8 a_3^2+16 a_2+32\right) \alpha ^3+\left(4 a_2^2+24 a_2-12 a_3^2+8 a_0-8 a_1 a_3+24\right) \alpha ^2 \\ && +\left(-2 a_1^2-8 a_3 a_1+4 a_2^2-6 a_3^2+8 a_0+4 a_0 a_2+12 a_2+8\right) \alpha \\&& +\left(a_0^2+2 a_2 a_0+2 a_0-a_1^2+a_2^2-a_3^2+2 a_2-2 a_1 a_3+1\right)
\end{eqnarray*}
There is a unique monic polynomial for the algebraic integer $\alpha$ over $\mathbb Q$ and so all these coefficients are divisible by $16$.
\begin{enumerate}
\item degree 3 coefficient: $a_3^2$ is even, so \underline{$a_3$ is even}.
\item degree 2 coefficient: $ a_2^2+2a_2+2 a_0+2 $ is divisible by $4$. Thus \underline{$a_2$ is even}. Then
$2 a_0+2 $ is divisible by $4$, so \underline{$a_0$ is odd}.
\item degree 1 coefficient: $- a_1^2-3 a_3^2+4 a_0+2 a_0 a_2+6 a_2+4$ is divisible by $8$. We reduce mod $2$ to see \underline{$a_1$ is even}.
\item degree 0 coefficient: $\left(1+a_0-a_1+a_2-a_3\right) \left(1+a_0+a_1+a_2+a_3\right)$. We reduce mod $2$ to see again that \underline{$a_0$ is odd}.
\end{enumerate}
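The displayed expansion, and hence the deductions above, can be checked mechanically. The following sketch (plain integer polynomial arithmetic of our own, coefficients listed from the constant term upwards) reproduces the displayed coefficients:

```python
def pmul(p, q):
    # multiply integer polynomials given as coefficient lists (constant first)
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def padd(*ps):
    n = max(map(len, ps))
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def alpha_poly(a0, a1, a2, a3):
    # ((2A+1)^2 + a2 (2A+1) + a0)^2 - (2A+1)(a1 + a3 (2A+1))^2 as a polynomial in A
    w = [1, 2]                               # w = 2*alpha + 1
    A = padd([a0], [a2 * t for t in w], pmul(w, w))
    B = padd([a1], [a3 * t for t in w])
    return padd(pmul(A, A), [-t for t in pmul(w, pmul(B, B))])

for a0, a1, a2, a3 in [(1, 2, 2, 2), (-1, 0, 4, -2), (3, 6, 0, 2)]:
    assert alpha_poly(a0, a1, a2, a3) == [
        a0**2 + 2*a2*a0 + 2*a0 - a1**2 + a2**2 - a3**2 + 2*a2 - 2*a1*a3 + 1,
        -2*a1**2 - 8*a3*a1 + 4*a2**2 - 6*a3**2 + 8*a0 + 4*a0*a2 + 12*a2 + 8,
        4*a2**2 + 24*a2 - 12*a3**2 + 8*a0 - 8*a1*a3 + 24,
        -8*a3**2 + 16*a2 + 32,
        16,
    ]
```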
We now make the substitutions $a_i=2k_i$ or $2k_i+1$ as the case may be. We expand out the polynomial for $\alpha$ and check parities again. We find $k_2$ is even and $k_0$ is odd, and the sum $k_1+k_3$ is even. Again write $k_2=2m_2$, $k_0=2m_0-1$ and $k_3=2m_3-k_1$. Substitute back and expand out. We have
\begin{align*}
&16 \left(\left(m_0+m_2\right){}^2-m_3^2\right)\\
&+32 \left(\left(1+m_2\right) \left(m_0+m_2\right)+k_1 m_3-3 m_3^2\right) \alpha \\
&+16 \left(1+2 m_0+m_2 \left(4+m_2\right)-\left(k_1-6 m_3\right) \left(k_1-2 m_3\right)\right) \alpha ^2\\
& -32 \left(-1-m_2+\left(k_1-2 m_3\right){}^2\right) \alpha ^3 +16 \alpha ^4
\end{align*}
as $16$ times the minimal polynomial. Further, we have proved the following.
\begin{lemma} The minimal polynomial for $\lambda$ has the following coefficient structure. There are integers $n_0,n_1,n_2$ and $n_3$ such that
\begin{itemize}
\item $a_0=-1 + 4 n_0$.
\item $a_1= 2n_1$.
\item $a_2=4n_2$.
\item $a_3=-2n_1 + 4 n_3$.
\end{itemize}
Moreover the coefficient structure of the minimal polynomial for $\alpha$ has (with the same $n_i$)
\begin{itemize}
\item $b_0=\left(n_0+n_2\right){}^2-n_3^2$.
\item $b_1= 2 \left(\left(1+n_2\right) \left(n_0+n_2\right)+n_1 n_3-3 n_3^2\right)$.
\item $b_2=1+2 n_0+n_2 \left(4+n_2\right)-\left(n_1-6 n_3\right) \left(n_1-2 n_3\right)$.
\item $b_3=-2 \left(-1-n_2+\left(n_1-2 n_3\right){}^2\right)$.
\end{itemize}
Note that also $b_1$ and $b_3$ are even, $b_2$ has the same parity as $b_3/2$.
\end{lemma}
With this information we can now set up a simple search. To do so we use Vieta's formulas, first for $\alpha$, whose minimal polynomial has its real roots in $[-\frac{1}{2},1]$.
First note that once we fix $\alpha$, the coefficients of its minimal polynomial are monomials in the real roots $r_i$, and hence monotone, either increasing or decreasing, in each. Thus the extrema occur at $r_i=1$ or $r_i=-\frac{1}{2}$, so it is just a small finite problem to find these. We replace $\alpha+\bar\alpha$ by either $|\alpha|=4+2\sqrt{2}$ or $-4$. This is unchallenging in low degree, but a bit of work in higher degree.
\medskip
\begin{tabular}{|l|c|} \hline
$b_0=|\alpha|^2 r_1r_2$ & $-16 \leq b_0\leq 33$ \\ \hline
$b_1=-|\alpha|^2(r_1+r_2)-(\alpha+\bar\alpha)r_1r_2 =2x_1$& $-53 \leq x_1\leq 24$ \\ \hline
$b_2=|\alpha|^2+(\alpha+\bar\alpha)(r_1+r_2)+r_1r_2 $& $0 \leq b_2\leq 74$ \\ \hline
$b_3 =-( \alpha+\bar\alpha+r_1+r_2)=2x_3 $ & $-7 \leq x_3 \leq 4$ \\ \hline
\end{tabular}
\medskip
With the one other parity restriction we have found this is a search of about $1.5$ million cases.
For the $\lambda$ search we may assume additionally that $\lambda+\bar\lambda \geq 0$ with the choice of square root to find the search bounds (this time $r_1,r_2\in [-\sqrt{3},\sqrt{3}]$). Recall $\lambda=\sqrt{2\alpha+1}$ and $|\lambda|<\sqrt{9+4\sqrt{2}}\approx 3.828$.
\medskip
{\small
\noindent \begin{tabular}{|l|c|c|} \hline
$a_0=|\lambda|^2 r_1r_2$& $-43 \leq a_0\leq 43 $ & $ -10 \leq n_0 \leq 11 $ \\ \hline
$a_1=-|\lambda|^2(r_1+r_2)-(\lambda+\bar\lambda)r_1r_2$& $-73 \leq a_1\leq 27$& $ -36 \leq n_1 \leq 13$\\ \hline
$a_2=|\lambda|^2+(\lambda+\bar\lambda)(r_1+r_2)+r_1r_2 $& $-2 \leq a_2\leq 30$& $ -1 \leq n_2 \leq 7$ \\ \hline
$a_3=-( \lambda+\bar\lambda+r_1+r_2)$& $-11\leq a_3\leq 3 $& $\Big[\frac{2n_1-11}{4}\Big] \leq n_3\leq\Big[ \frac{3+2n_1}{4}\Big] $ \\ \hline
\end{tabular} }
\medskip
This is a search of about $22$ thousand cases, nearly two orders of magnitude smaller. So we of course choose to do the latter search. However, as the degree grows, the symmetric functions start to get quite big quite quickly (for the $b_i$ they are evaluated at $-\frac{1}{2}$ or $1$, while for the $a_i$ they are evaluated at $\pm \sqrt{3}$), and from about degree $7$ or $8$ it is best to search through the coefficients $b_i$. For these larger degrees it is worthwhile obtaining much better coefficient bounds as well.
\medskip
Within these bounds on the coefficients we tested the real root condition, that $|\lambda|<4$, that $|\gamma|<5.85$ and that $|\Im m(\gamma)|<3.75$, and found $55$ possibilities for $\gamma=\alpha-1$ up to complex conjugacy. These are illustrated below.
\scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{163}}\\
\medskip
\noindent{\bf Figure 7.} {\it The degree $4$ candidate values for $\gamma$ satisfying all arithmetic criteria}
\medskip
We now have to decide whether these groups are free. Each is definitely a discrete subgroup of an arithmetic Kleinian group at this point.
\scalebox{0.85}{\includegraphics[viewport=100 520 570 800]{mod163}}\\
\noindent{\bf Figure 8.} {\it The degree $4$ candidate values for $\gamma$ satisfying all arithmetic criteria mapped against the moduli space.}
\medskip
Now to eliminate these points we first examine the $17$ Farey polynomials and the pleating rays of low slopes where the denominator (which is the degree of the polynomial) is small.
{\small \begin{equation}\label{slp}\{1/2, 3/5,4/7,6/7,5/8,5/9,7/11,8/13,9/14,11/17,13/20,16/21,15/26\}.\end{equation}}
The inverse image of the half-space $H=\{z=x+i y: x\leq -2\}$ under the branch of the Farey polynomial $P_{r/s}$ which yields the rational pleating ray $r/s$ is proved in \cite{EMS2} to consist only of geometrically finite groups free on the generators of order $3$ and $4$. For these low slopes these preimages capture all but $4$ points easily. There are some computational issues in drawing the figure below for the last $4$ polynomial inverse images; it is really only for visual confirmation. In fact all we need to do is evaluate the associated polynomial with integer coefficients on the point in question, show that the image lies in $\{z=x+i y: x\leq -2\}$, and then check that we have the right branch; this is determined by the pleating ray and path lifting, which are direct to check as we can numerically identify the other roots and critical points.
\scalebox{0.5}{\includegraphics[angle=-90,viewport=-40 100 550 550]{PleatingCapture34}}\\
\noindent{\bf Figure 9.} {\it Neighbourhoods of the pleating rays with slopes given at (\ref{slp}), capturing all but $7$ points. }
\medskip
There are now $7$ points that remain and the associated groups are not going to be free on the two generators. These appear in the table below. We will explain how these groups are determined later.
\section{Degree $5$ search.}
As these searches get bigger it is efficient to look for further simple tests to eliminate possible candidates before we numerically calculate the roots. We first look at the parity considerations as above. Following the above strategy, we assume the minimal polynomial for $\lambda$ has degree $5$,
\[p(\lambda)=\lambda^5+a_4\lambda^4+a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0 = 0 \]
Thus
\[a_4(2\alpha+1)^2+a_2(2\alpha+1)+a_0 = -\sqrt{2\alpha+1}(a_1 +a_3(2\alpha+1)+(2\alpha+1)^2) \]
Squaring both sides and rearranging gives us a polynomial for $\alpha$.
{\small
\begin{eqnarray}\label{mpa}
p(\alpha)& = & 32 \alpha ^5+16 \left(-a_4^2+2 a_3+5\right) \alpha ^4+8 \left(a_3^2+8 a_3-4 a_4^2+2 a_1+2 a_2 a_4+10\right) \alpha ^3 \\ && \nonumber +4 \left(-a_2^2+2 a_4 a_2+3 a_3^2-6 a_4^2+12 a_3+2 a_1 \left(a_3+3\right)-2 a_0 a_4+10\right) \alpha ^2\\ && +\left(2 \left(a_1+a_3+1\right){}^2+4 \left(a_3+2\right) \left(a_1+a_3+1\right)+4 \left(a_0+a_2+a_4\right) \left(a_2-2 a_4\right)\right) \alpha \nonumber \\ && \nonumber+\left(\left(a_1+a_3+1\right){}^2-\left(a_0+a_2+a_4\right){}^2\right)
\end{eqnarray}}
There is a unique monic polynomial for the algebraic integer $\alpha$ over $\mathbb Q$ and so all these coefficients are divisible by $32$.
We reduce these coefficients modulo $2$ to determine the parity necessary.
\begin{itemize}
\item \underline{$a_4$ is odd}, $a_4=2k_4+1$.
\item $a_3^2+8 a_3-4 a_4^2+2 a_1+2 a_2 a_4+10$ is divisible by $4$, so \underline{$a_3$ is even}, $a_3=2k_3$.
Then $1+ a_1- a_2 a_4$ is even. Hence, as $a_4$ is odd, $a_1+ a_2+1$ is even.
\item $-a_2^2+2 a_4 a_2+3 a_3^2-6 a_4^2+12 a_3+2 a_1 \left(a_3+3\right)-2 a_0 a_4+10$ is divisible by $8$. So \underline{$a_2$ is even}, $a_2=2k_2$ and \underline{$a_1$ is odd}, $a_1=2k_1+1$. Then
\[-4 a_0 k_4-2 a_0-4 k_2^2+4 k_2+4 k_3^2+4 k_1+4 k_3+2\]
is divisible by $8$, so \underline{$a_0$ is odd}, $a_0=2k_0+1$. Then
\begin{equation}\label{*1} k_0 + k_1+ k_4 \end{equation}
is even.
\item Next, making the substitutions identified above, we have
\begin{equation}\label{*2} k_1^2+2 k_2^2+3 k_3^2-2 k_0+2 k_0 k_2+2 k_3-2 k_2 k_4-2 k_4+1 \end{equation}
divisible by $4$. Thus $k_1^2 +3 k_3^2$ is odd and so \underline{$k_1+k_3$ is odd}.
\item Next, expanding out the constant term gives
\[ k_0+k_1+k_2+k_3+k_4 \]
is even. Thus $k_0+k_2+k_4$ is odd. Hence (\ref{*1}) gives $k_1+k_2$ odd.
\end{itemize}
We now record this information.
\begin{align}
& \label{*5} a_0=2k_0+1, \quad a_1=2k_1+1,\quad a_2=2k_2, \quad a_3=2k_3,\quad a_4=2k_4+1. \\
& \label{*6} k_1+k_2 \mbox{ is odd}, \quad k_1+k_3 \mbox{ is odd},\quad k_0+k_2+k_4 \mbox{ is odd}.
\end{align}
Writing $k_2=2m_2+1-k_1, \; k_3=2m_3+1-k_1, \; k_4=2m_4-k_0-k_1$ now yields the polynomial
\begin{align*}
& 16 \Big((2-k_1+m_2+m_3+m_4) (k_1-m_2+m_3-m_4) \Big)\\
&+32 \Big(1-3 k_1^2+3 m_3 (2+m_3)-6 m_4+2 k_0 (1-k_1+m_2+m_4)\\ & \quad +k_1 (4+5 m_2-m_3+7 m_4)-2 (m_2^2+2 m_4^2+m_2 (2+3 m_4)) \Big) \alpha \\
&+32 \Big(4-2 k_0^2-6 k_1^2-5 m_2-2 m_2^2+13 m_3+6 m_3^2-(13+12 m_2) m_4-12 m_4^2 \\ & \quad +k_0 (6-8 k_1+6 m_2+10 m_4)+k_1 (5+8 m_2-4 m_3+18 m_4) \Big) \alpha ^2
\\&+32 \Big(-4 k_0^2-5 k_1^2+2 k_0 (3-5 k_1+2 m_2+8 m_4)+k_1 (2+4 m_2-4 m_3+20 m_4) \\ & \quad -2 (-3+m_2-2 m_3 (3+m_3)+6 m_4+4 m_2 m_4+8 m_4^2)\Big) \alpha ^3
\\ &-64 \Big(k_0^2+k_0 (-1+2 k_1-4 m_4)+(k_1-2 m_4){}^2-2 (1+m_3-m_4)\Big) \alpha ^4+32 \alpha ^5
\end{align*}
Only the constant term
\[ (2-k_1+m_2+m_3+m_4) (k_1-m_2+m_3-m_4) =(1+m_3)^2 -(1+m_2+m_4-k_1)^2 \]
is now an issue. As it is a difference of squares it is not congruent to $2$ mod $4$. It must also be even, and hence is divisible by $4$; thus the constant term in $q_\alpha(z)$ is an even integer.
Now (\ref{*5}) and (\ref{*6}), together with the evenness of the constant term just noted, give necessary and sufficient conditions for the factorisation condition to hold.
\medskip
With a little simplification, putting together what is above, we now have the following information.
\begin{eqnarray*}
a_0=2k_0+1,\quad a_1=2k_1+1,\quad a_2=4m_2+2-2k_1, \\a_3=4m_3+2-2k_1,\quad a_4=4m_4-2k_0-2k_1+1.
\end{eqnarray*}
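These conditions can be stress-tested numerically. The sketch below (our own helper code) builds the coefficients from randomly chosen parameters, additionally imposing the parity of $m_3$ that the evenness of the constant term dictates, and confirms that every coefficient of the resulting polynomial in $\alpha$ is divisible by $32$:

```python
import random

def pmul(p, q):
    # multiply integer polynomials given as coefficient lists (constant first)
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def padd(*ps):
    n = max(map(len, ps))
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def alpha_poly5(a0, a1, a2, a3, a4):
    # (a4 w^2 + a2 w + a0)^2 - w (w^2 + a3 w + a1)^2  with  w = 2*alpha + 1
    w = [1, 2]
    w2 = pmul(w, w)
    A = padd([a0], [a2 * t for t in w], [a4 * t for t in w2])
    B = padd([a1], [a3 * t for t in w], w2)
    return padd(pmul(A, A), [-t for t in pmul(w, pmul(B, B))])

random.seed(0)
for _ in range(500):
    k0, k1, m2, m4 = (random.randint(-30, 30) for _ in range(4))
    m3 = 2 * random.randint(-15, 15) + (k1 + m2 + m4) % 2  # constant-term parity
    a0, a1 = 2 * k0 + 1, 2 * k1 + 1
    a2, a3, a4 = 4 * m2 + 2 - 2 * k1, 4 * m3 + 2 - 2 * k1, 4 * m4 - 2 * k0 - 2 * k1 + 1
    assert all(t % 32 == 0 for t in alpha_poly5(a0, a1, a2, a3, a4))
```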
There are another couple of quick tests we can utilise to cut down the search space. The polynomial $p(z)$ is irreducible, has odd degree, and has all its real roots in the interval $[-\sqrt{3},\sqrt{3}]$. Thus
\begin{eqnarray*}
0 &< & p(\sqrt{3}) = \sqrt{3} k_1+3 \sqrt{3} k_3+k_0+3 k_2+9 k_4+5 \sqrt{3}+5.\\
0 &> & p(-\sqrt{3}) =-\sqrt{3} k_1-3 \sqrt{3} k_3+k_0+3 k_2+9 k_4-5 \sqrt{3}+5.
\end{eqnarray*}
Subtracting the second from the first and substituting, we find
\begin{equation}
k_1+3k_3+5=k_1+3(2m_3+1-k_1)+5>0
\end{equation}
and so we can implement $m_3\geq (k_1-4)/3$ in the search we set up.
\subsection{The search.} At this point we can begin our search. Let $\lambda,\bar\lambda, r,s,t$ be the roots of the minimal polynomial. We can choose $\lambda$ to have non-negative real part. Without any great attempt to get the best possible bounds we obtained
\medskip
{\small
\begin{tabular}{|l|c|} \hline
$a_0=|\lambda|^2 rst=2k_0+1$& $-33 \leq k_0\leq 32 $ \\ \hline
$a_1=|\lambda|^2(rs+rt+st)=2k_1+1$& $-66 \leq k_1\leq 74$ \\ \hline
$a_2=|\lambda|^2(r+s+t)+(\lambda+\bar\lambda)(rs+rt+st)+rst =2k_2$& $-32 \leq k_2\leq 67$ \\ \hline
$a_3=|\lambda|^2+rs+rt+st+(\lambda+\bar\lambda)(r+s+t) =2k_3$& $-21\leq k_3\leq 29$ \\ \hline
$a_4= \lambda+\bar\lambda+r+s+t =2k_4+1 $& $-3\leq k_4\leq 5$ \\ \hline
\end{tabular}
}
\medskip
With the parity checks and the simple tests suggested this loop has about $84$ million candidates. Many of these candidates will have $4$ complex roots.
We remark that if there is exactly one complex conjugate pair of roots for the irreducible minimal polynomial for $\lambda$ of degree $5$, then the same can be said of $\alpha$ if it is complex, since $\alpha\in \mathbb Q(\lambda)$, so $ \mathbb Q(\alpha)\subset \mathbb Q(\lambda)$, and since the only proper subfields of a field with one complex place are totally real, $ \mathbb Q(\alpha)= \mathbb Q(\lambda)$. However, $\alpha$ cannot be real, as then
\[ 5=[\mathbb Q(\lambda):\mathbb Q]= [\mathbb Q(\lambda):\mathbb Q(\alpha)]\,[\mathbb Q(\alpha):\mathbb Q] \]
with $[\mathbb Q(\lambda):\mathbb Q(\alpha)]\geq 2$ dividing $5$, so $[\mathbb Q(\alpha):\mathbb Q]=1$, $\alpha\in \mathbb Z$ and $\lambda$ is an integer in a quadratic field.
\medskip
We could implement a further test of the discriminant in these searches, but polynomial root finding algorithms are quick enough that we simply computed the roots and checked that $\lambda$ was a viable candidate, that is, all the real roots are in $[-\sqrt{3},\sqrt{3}]$. This gave us a list with about 5100 candidates. The first we found was $x^5-x^4+6x^3+22x^2-23x-51$ with real roots $-1.674, -1.490, 1.63$ and $\lambda=1.269+i\,3.308$. However, as we did not implement {\it all} the parity checks in the search, to keep the structure simple,
\[ \gamma=\frac{1}{2}(\lambda^2-3) = -6.168+i4.20 \]
has a minimal polynomial of degree $5$ with real roots in $[-\frac{1}{2},1]$, but it is not monic.
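The data quoted for this first candidate is easily confirmed with a few lines of elementary root finding (a sketch; the brackets below were chosen by inspecting sign changes):

```python
def p(x):
    return x**5 - x**4 + 6*x**3 + 22*x**2 - 23*x - 51

def bisect(f, lo, hi, steps=80):
    # simple bisection; assumes f changes sign on [lo, hi]
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r1 = bisect(p, -1.7, -1.6)     # ~ -1.674
r2 = bisect(p, -1.5, -1.4)     # ~ -1.490
r3 = bisect(p, 1.6, 1.7)       # ~ 1.63
# Vieta: the five roots sum to 1 and multiply to 51
re = (1 - (r1 + r2 + r3)) / 2
im = (51 / (r1 * r2 * r3) - re * re) ** 0.5
lam = complex(re, im)          # ~ 1.269 + 3.308i
gamma = (lam * lam - 3) / 2    # ~ -6.168 + 4.20i
```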
We then searched through these polynomials to find those minimal polynomials for $\gamma=(\lambda^2-3)/2$ which were monic, had $-4<\Re e(\gamma)$ and $|\Im(\gamma)|<4$. The values of $\gamma$ we found are illustrated below.
\scalebox{0.7}{\includegraphics[viewport=50 430 770 850]{Degree5Map34}}\\
\noindent{\bf Figure 10.} {\it The degree $5$ candidates with $11/13$ pleating ray neighbourhood illustrated.}
The most computationally challenging polynomial we found was
\[ 1+7 x+12 x^2-3 x^4+x^5 \]
with real roots $-1.29375, -0.378269, -0.240317$ and complex roots $2.45617 \pm i 1.57165$. This was captured by the $11/13$ pleating ray neighbourhood illustrated above in Figure 10.
\subsection{Additional searches} As the degree grows the search space grows rapidly, but the fact that all the real conjugates of $\alpha$ are in $[-\frac{1}{2},1]$ means the symmetric functions grow slowly, and we can obtain better bounds for our searches (although this does not happen in degree $5$). We now outline how this goes in this case and present data.
\medskip
Let $q_\alpha(z)=z^5+b_4z^4+b_3z^3+b_2z^2+b_1z+b_0$ be the minimal polynomial for $\alpha$.
Equations (\ref{*5}) and (\ref{*6}) together with the difference of squares condition we found yield the following information on the coefficients $b_i$.
\begin{itemize}
\item $b_4$ is even, $b_4=2n_4$ and $n_4$ has the same parity as $m_2+m_3+m_4$.
\item $b_3$ has the same parity as $m_2+m_3+m_4$.
\item $b_2$ is even, $b_2=2n_2$
\item $b_1$ has the same parity as $1+m_3$
\item $b_0$ is even, $b_0=2n_0$ and $n_0$ has the same parity as $1+m_3$.
\item $b_3$ has the same parity as $n_4$
\item $b_1$ has the same parity as $n_0$
\end{itemize}
We now go about finding bounds on these quantities.
\subsection{Loop bounds.}
We recall $-8\leq \alpha+\bar\alpha \leq 6+4\sqrt{2}$, $|\alpha|\leq 3+2\sqrt{2}$, and $r,s,t\in [-\frac{1}{2},1]$.
The following information follows from calculus. We set $x^2=|\alpha|^2$ and $2x=\alpha+\bar\alpha$ (treating the extremal $\alpha$ as real), with $-4<x<3+2\sqrt{2}$, calculate gradients to find that the extrema are on the boundary, and then minimise. So for instance, $-0.737 < rs+rt+st <3$.
\begin{eqnarray*}
-33.98 &< b_0 =&- |\alpha|^2 rst < |\alpha|^2/2 < 16.99 \\
-25.3 &< b_1=&|\alpha|^2(rs+rt+st)+(\alpha+\bar\alpha)rst < 136.9 \\
-108.2 &< b_2=&-|\alpha|^2(r+s+t)-(\alpha+\bar\alpha)(rs+rt+st) < 55.3 \\
-24 &< b_3=&(\alpha+\bar\alpha)(r+s+t) < 34.9 \\
-14.7 &< b_4=&-(\alpha+\bar\alpha + r+s+t) < 9.5
\end{eqnarray*}
Thus
\begin{eqnarray*}
-16 \leq n_0 \leq 8 &
-25\leq b_1 \leq 136 &
-54 \leq n_2\leq 26 \\
-23 \leq b_3 \leq 34 &&
-7 \leq n_4 \leq 4
\end{eqnarray*}
We actually ran this search (which took significantly longer) to confirm our results.
\section{The case of degree $6$ \& $7$}
\subsection{Degree $6$ parity considerations}
As before we obtain another expression for the minimal polynomial for $\alpha$ with the coefficients coming from the minimal polynomial for $\lambda$.
\begin{align}
&\left(-1-a_0+a_1-a_2+a_3-a_4+a_5\right) \left(1+a_0+a_1+a_2+a_3+a_4+a_5\right) \nonumber \\
&+2 \left(-2 a_0 \left(3+a_2+2 a_4\right)-2 \left(1+a_2+a_4\right) \left(3+a_2+2 a_4\right)+\left(a_1+a_3+a_5\right) \left(a_1+3 a_3+5 a_5\right)\right) \alpha \nonumber \\ &+\left(8 a_1 a_3+12 a_3^2-8 \left(3+a_4\right) \left(1+a_0+a_2+a_4\right)-4 \left(3+a_2+2 a_4\right){}^2+24 a_1 a_5+48 a_3 a_5+40 a_5^2\right) \alpha ^2\nonumber \\ &+8 \left(a_3^2-2 \left(a_0+a_2 \left(4+a_4\right)+2 \left(5+a_4 \left(5+a_4\right)\right)\right)+2 a_1 a_5+8 a_3 a_5+10 a_5^2\right) \alpha ^3\nonumber \\&-16 \left(15+2 a_2+a_4 \left(10+a_4\right)-2 a_3 a_5-5 a_5^2\right) \alpha ^4\nonumber\\ &+32 \left(-6-2 a_4+a_5^2\right) \alpha ^5-64 \alpha ^6 \label{afor}
\end{align}
The necessary parity conditions in degree 6 are
\begin{itemize}
\item $a_0=2m_0+1$
\item $a_1=2m_1$
\item $a_2=4m_2+3$
\item $a_3=4m_3$
\item $a_4=8m_4-4m_2-2m_0+3$
\item $a_5=8m_5-4m_3-2m_1 $
\end{itemize}
With the addition of another parity condition $m_4+m_5$ odd, these are sufficient as well.
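These parity conditions can be stress-tested numerically: the sketch below (our own helper code) samples the parametrised coefficients with $m_4+m_5$ odd and confirms that every coefficient of the resulting polynomial in $\alpha$ is divisible by $64$:

```python
import random

def pmul(p, q):
    # multiply integer polynomials given as coefficient lists (constant first)
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def padd(*ps):
    n = max(map(len, ps))
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def alpha_poly6(a0, a1, a2, a3, a4, a5):
    # (w^3 + a4 w^2 + a2 w + a0)^2 - w (a5 w^2 + a3 w + a1)^2,  w = 2*alpha + 1
    w = [1, 2]
    w2 = pmul(w, w)
    w3 = pmul(w2, w)
    A = padd([a0], [a2 * t for t in w], [a4 * t for t in w2], w3)
    B = padd([a1], [a3 * t for t in w], [a5 * t for t in w2])
    return padd(pmul(A, A), [-t for t in pmul(w, pmul(B, B))])

random.seed(0)
for _ in range(500):
    m0, m1, m2, m3, m4 = (random.randint(-20, 20) for _ in range(5))
    m5 = 2 * random.randint(-10, 10) + 1 - m4 % 2    # m4 + m5 odd
    a0, a1, a2, a3 = 2 * m0 + 1, 2 * m1, 4 * m2 + 3, 4 * m3
    a4 = 8 * m4 - 4 * m2 - 2 * m0 + 3
    a5 = 8 * m5 - 4 * m3 - 2 * m1
    assert all(t % 64 == 0 for t in alpha_poly6(a0, a1, a2, a3, a4, a5))
```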
Flammang and Rhin give a method for bounding values of symmetric functions and coefficients using various identities. We adopt those in degrees $7$ and $8$. However, here we simply adopt the following strategy. We replace $|\lambda|$ by $x\in [0,\sqrt{7+4\sqrt{2}}]$ and $\lambda+\bar\lambda$ by $2x$. Calculate the gradient of the function of $5$ variables to see there are no interior extrema in the region $x\in[0,\sqrt{7+4\sqrt{2}}]$, $r_i\in [-\sqrt{3},\sqrt{3}]$. Then symmetry allows us to move the $r_i$ in an increasing or decreasing manner to the value $\pm \sqrt{3}$. We could improve these estimates by realising that $r_i\approx \pm \sqrt{3}$ implies the corresponding real embedding gives $\sigma_i(\alpha)\approx 1$, and we have seen that the real roots of the minimal polynomial for $\alpha$ cannot pile up near $1$.
\[ -113 \leq a_0 = |\lambda|^2 r_1 r_2 r_3 r_4 \leq 113 \]
\[ a_1 =- |\lambda|^2 (r_1 r_2 r_3 +r_1r_2r_4+r_1r_3r_4+r_2r_3r_4)-(\lambda+\bar\lambda)(r_1r_2r_3r_4) \]
\[ -261 \leq a_1\leq 129 \]
\[ a_2 = |\lambda|^2 (r_1 r_2 +r_1r_3 +r_1r_4+r_2r_3+r_2r_4+r_3r_4)+(\lambda+\bar\lambda)(r_1 r_2 r_3 +r_1r_2r_4+r_1r_3r_4+r_2r_3r_4)+r_1r_2r_3r_4 \]
\[ -188 \leq a_2 \leq 338 \]
\[ a_3 = -|\lambda|^2 (r_1 +r_2+r_3 +r_4)-(\lambda+\bar\lambda)(r_1 r_2 +r_1r_3 +r_1r_4+r_2r_3+r_2r_4+r_3r_4)-(r_1r_2r_3+r_1r_2r_4+r_1r_3r_4+r_2r_3r_4) \]
\[ -215 \leq a_3 \leq 43 \]
\[ a_4=|\lambda|^2+(\lambda+\bar\lambda)(r_1+r_2+r_3+r_4)+(r_1r_2+r_1r_3+r_1r_4+ r_2r_3+ r_2r_4+r_3r_4) \]
\[-11 \leq a_4\leq 79 \]
\[ -14\leq a_5=-\lambda-\bar\lambda-r_1-r_2-r_3-r_4\leq 6 \]
At first sight this loop seems to allow $20\times 91 \times 258\times 526\times 390\times 226 \approx 21\times 10^{12}$ possibilities. However the parity considerations reduce this to only $5\times 10^9$ possibilities. For instance, $a_5$ lies in an interval of width $20$, and so after having determined $n_1,\ldots,n_4$ the value $m_5$ must lie in an interval of length $20/8$ and so admits at most three values. There is more that can be said here. If we review the formula (\ref{afor}) we see that the coefficient of $\alpha^5$ there is $32 \left(-6-2 a_4+a_5^2\right)$, which must be divisible by $64$, and is, since $a_5$ is even. In terms of the real roots $s_i$ of the minimal polynomial for $\alpha$ we find that there is an integer $m$ so that
\[ \alpha+\bar\alpha+s_1+s_2+s_3+s_4 = (-3- a_4+\frac{1}{2} a_5^2) m \]
In the loop described above we can calculate $m_1,m_3$ and hence $m_5$ before we calculate $m_4$, and hence $a_4$. Now
\[ -9\leq \alpha+\bar\alpha+s_1+s_2+s_3+s_4 \leq 17 \]
Hence, given $a_5$,
\[ -9\leq \big(-3- a_4+\frac{1}{2} a_5^2\big) m \leq 17 \]
Then $a_4$ lies in an interval of width $\frac{26}{|m|}$, and hence $m_4$ lies in an interval of width $\frac{26}{8|m|}$. If $|m|=1$, then that gives at most $4$ possibilities for $a_4$, decreasing the search space by a factor of at least $3$. If $|m|=2$, then there are only three possibilities, and as soon as $|m|\geq 4$ there is at most one possibility, with
\[ 2 \leq - 4m_4+2m_2+m_0+\frac{1}{4} a_5^2 \leq 10 \]
Other observations, coming from the additional parity condition, are that the constant term coefficient in the minimal polynomial for $\alpha$ is
\[ \frac{1}{64} \left(-1-a_0+a_1-a_2+a_3-a_4+a_5\right) \left(1+a_0+a_1+a_2+a_3+a_4+a_5\right) \]
\[ = \frac{1}{4} (3 + 4 m_4 - 4 m_5) (1 + m_4 + m_5) \]
so we must have $m_4+m_5$ odd, and this halves the search space again. These, and a few other simple observations, reduce the search space by another order of magnitude and additionally give simple tests to eliminate candidates before we use a root finding algorithm.
\scalebox{0.65}{\includegraphics[viewport=-10 350 670 850]{Degree6Map34}}\\
\noindent{\bf Figure 11.} {\it The degree $6$ candidates satisfying all arithmetic criteria.}
\medskip
We ran this search and found $32$ points, illustrated here, that we then had to consider further. None of these lay in the required region.
\subsection{Degree $7$ parity considerations}
\begin{align*}
&128 \alpha ^7+64 \alpha ^6 \left(2 a_5-a_6 ^2+7\right)+32 \alpha ^5 (2 a_3-2 a_6 (a_4+3 a_6 )+a_5 (a_5+12)+21)
\\
&+16 \alpha ^4 \left(2 a_1-2 a_2 a_6 +2 a_3 (a_5+5)-a_4^2-10 a_4 a_6 +5 a_5 (a_5+6)-15 a_6 ^2+35\right)\\
&+8 \alpha ^3 \left(-2 a_0a_6 +2 a_1 (a_5+4)-2 a_2 (a_4+4 a_6 )+a_3^2+4 a_3 (2 a_5+5) \right. \\
& \left. \quad -4 \left(a_4^2+5 a_4 a_6 +5 a_6 ^2\right)+10 a_5 (a_5+4)+35\right)\\
&+4 \alpha ^2 \left(-2 (a_4+3 a_6 ) (a_0+a_2+a_4+a_6 )+2 a_1 (a_3+3 a_5+6)-(a_2+2 a_4+3 a_6 )^2\right.\\
& \left. \quad+3 a_3^2+4 a_3 (3 a_5+5)+10 a_5^2+30 a_5+21\right)\\
&+\alpha \left(-4 (a_2+2 a_4+3 a_6 ) (a_0+a_2+a_4+a_6 )+2 (a_1+a_3+a_5+1)^2 \right. \\
& \left. \quad +4 (a_3+2 a_5+3) (a_1+a_3+a_5+1)\right) \\ & +\left((a_1+a_3+a_5+1)^2-(a_0+a_2+a_4+a_6 )^2\right)
\end{align*}
We continue to reduce these coefficients mod $2$, making substitutions as appropriate. Then \underline{$a_6$ is odd}, $a_6=2k_6+1$. Then \underline{$a_5$ is odd}, $a_5=2k_5+1$. Hence $a_3+a_4$ is even, and subsequently \underline{$a_4$ and hence $a_3$ are odd}, $a_4=2k_4+1$, $a_3=2k_3+1$. It follows that \underline{$a_0$ is odd}, $a_0=2k_0+1$. Subsequently \underline{$a_1$ and $a_2$ are odd}, $a_2=2k_2+1$, $a_1=2k_1+1$. Expanding out and looking at the coefficients in terms of the $k_i$ we find
\begin{itemize}
\item $k_1+k_2+k_4$ is even.
\item $k_0+k_1+k_2+k_4+k_5+k_6$ is odd.
\item $k_2+k_3+k_6$ is even.
\item $k_1+k_3+k_5$ is even.
\item $k_0+k_1+k_2+k_3+k_4+k_5+k_6$ is even.
\end{itemize}
Hence $k_3$ is odd.
\begin{itemize}
\item $k_0+k_1+k_2$ is odd. $k_2=2m_2+1-k_0-k_1$
\item $k_0+k_4$ is odd. $k_4=2m_4+1-k_0$
\item $k_2+k_6$ is odd. $k_6=2\tilde{m_6}+1-k_2=2m_6+k_0+k_1$
\item $k_1+k_5$ is odd. $k_5=2m_5+1-k_1$
\end{itemize}
At this point the coefficients of $\alpha^4,\alpha^5$ and $\alpha^6$ are divisible by $128$. We continue. Looking at the coefficient of $\alpha^2$ gives $k_3=2m_3+1$ odd and deals with the coefficient of $\alpha^3$. Next $m_3+m_5$ is even, as is $m_2+m_4+m_6$. Subsequently $m_4+m_5$ is even, and $m_3+m_4$ is even. Then we have
\begin{itemize}
\item $k_2=2m_2+1-k_0-k_1$
\item $k_3=2m_3+1$
\item $k_4=2(2n_4-m_3)+1-k_0$
\item $k_6=2(2n_6-(2n_4-m_3)-m_2)+k_0+k_1$
\item $k_5=2(2n_5-m_3)+1-k_1$
\end{itemize}
Relabelling, we now have an inductive structure to determine the coefficients.
\begin{itemize}
\item $a_0=2n_0+1$
\item $a_1=2n_1+1$
\item $a_2=4n_2+3-2(n_0+n_1) $
\item $a_3=4n_3+3$
\item $a_4=8n_4-4n_3-2n_0+3$
\item $a_5=8n_5-4n_3-2n_1+3$
\item $a_6=8n_6- 8n_4-4n_3-4n_2+2n_0+2n_1+1 $
\end{itemize}
\subsection{The case of degree $8$}
Let
\[ \lambda ^8+a_7\lambda ^7+a_6\lambda ^6+a_5\lambda ^5+a_4\lambda ^4+a_3 \lambda ^3+a_2\lambda ^2+a_1\lambda +a_0=0 \]
be the minimal polynomial for $\lambda$. Let
\[ \alpha ^8+b_7\alpha ^7+b_6\alpha ^6+b_5\alpha ^5+b_4\alpha ^4+b_3 \alpha ^3+b_2\alpha ^2+b_1\alpha +b_0 = 0 \]
be the minimal polynomial for $\alpha$. \\
\begin{lemma} The disk $\{|\alpha|<4.4960\}$ is excluded. Hence
\[ 4.49 \leq |\alpha| \leq 4+2\sqrt{2} \]
and
\[ 4\leq \Re e(\alpha) \leq 4+2\sqrt{2} \]
\end{lemma}
\scalebox{0.75}{\includegraphics[viewport=40 450 470 800]{ExcludedDisk}}\\
\noindent {\bf Figure 13.} {\it The excluded disk in the $\alpha$-plane.}
\medskip
\noindent{\bf Proof.} Let $\alpha_0$ be the root of the equation
\[ \alpha_0^{2 t_0} (\alpha_0 - 1)^2 (0.0770561)^6 = 1 \]
Then, noting $\gamma=\alpha-1$, if $|\alpha|< \alpha_0$, then (\ref{a0bound}) gives
\[ |p(0)|^{t_0}|p(1)|\leq |\alpha|^{2 t_0}|\alpha - 1|^2 (0.0770561)^6 < 1 \]
and this implies that $p(0)=0$ or $p(1)=0$ which is impossible. \hfill $\Box$
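The constant $\alpha_0\approx 4.4960$ is readily recovered numerically. The sketch below solves the defining equation by bisection on its logarithm, taking $t_0=4.28291$ from Figure 5 (an assumption read off that caption):

```python
import math

t0 = 4.28291   # the exponent t_0 from Figure 5 (assumption)

def f(a):
    # log of a^(2 t0) (a - 1)^2 (0.0770561)^6; increasing for a > 1, zero at alpha_0
    return 2 * t0 * math.log(a) + 2 * math.log(a - 1) + 6 * math.log(0.0770561)

lo, hi = 2.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
alpha0 = (lo + hi) / 2   # ~ 4.496
```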
\medskip
Thus the possible values of $\alpha$ that we seek are confined to a rather small region. It is rather unsurprising then that there are no degree $8$ cases. Against this however, we do know there are infinitely many lattices in this region.
We next recall Lemma \ref{*lemma}, which implies $|p(0)|=|b_0|\leq 2$ in this case. We calculate as before to find $2^8$ times the minimal polynomial for $\alpha$ to be
{\small \begin{align*}
&(-1 - a_0 + a_1 - a_2 + a_3 - a_4 + a_5 - a_6 + a_7) (1 + a_0 + a_1 + a_2 + a_3 +
a_4 + a_5 + a_6 + a_7) \\ &+
2 (-2 a_0 (4 + a_2 + 2 a_4 + 3 a_6) -
2 (1 + a_2 + a_4 + a_6) (4 + a_2 + 2 a_4 + 3 a_6) \\ & + (a_1 + a_3 + a_5 +
a_7) (a_1 + 3 a_3 + 5 a_5 + 7 a_7)) \alpha \\ & +
4 (2 a_1 a_3 + 3 a_3^2 + 6 a_1 a_5 + 12 a_3 a_5 + 10 a_5^2 -
2 (1 + a_0 + a_2 + a_4 + a_6) (6 + a_4 + 3 a_6) \\& - (4 + a_2 + 2 a_4 +
3 a_6)^2 + 12 a_1 a_7 + 20 a_3 a_7 + 30 a_5 a_7 +
21 a_7^2) \alpha^2 \\ & +
8 (a_3^2 + 2 a_1 a_5 + 8 a_3 a_5 + 10 a_5^2 -
2 (4 + a_6) (1 + a_0 + a_2 + a_4 + a_6)\\& -
2 (6 + a_4 + 3 a_6) (4 + a_2 + 2 a_4 + 3 a_6) + 8 a_1 a_7 + 20 a_3 a_7 +
40 a_5 a_7 + 35 a_7^2) \alpha^3 \\ &+ (-32 a_0 -
16 (70 + a_4^2 - a_5 (2 a_3 + 5 a_5) + 10 a_4 (3 + a_6) +
2 a_2 (5 + a_6) \\ & + 5 a_6 (14 + 3 a_6)) +
32 (a_1 + 5 (a_3 + 3 a_5)) a_7 + 560 a_7^2) \alpha^4 \\& +
32 (a_5^2 - 2 (28 + a_2 + a_4 (6 + a_6) + 3 a_6 (7 + a_6)) + 2 a_3 a_7 +
12 a_5 a_7 + 21 a_7^2) \alpha^5 \\ & -
64 (28 + 2 a_4 + a_6 (14 + a_6) - 2 a_5 a_7 - 7 a_7^2) \alpha^6 \\ &+
128 (-8 - 2 a_6 + a_7^2) \alpha^7 - 256 \alpha^8
\end{align*}}
We then inductively reduce modulo $2$ to obtain the parity of the coefficients, make the appropriate substitutions, and repeat until all the remaining coefficients are multiples of $2^8$ (we carry out this process more carefully in the next subsection for degree $9$). After this we find that there are integers $k_0,k_1,\ldots,k_7$ so that
\begin{eqnarray*}
a_0 & = & 1 + 2 k_0 \\
a_1 & = & -2+4 k_1 \\
a_2 & = & -2 k_0+4 k_2 \\
a_3 & = & 2-4 k_1+4 k_3 \\
a_4 & = & -2 \left(1+k_0+4 k_1-4 k_4\right) \\
a_5 & = & 2+4 k_1-8 k_4+8 k_5 \\
a_6 & = & -2 \left(7 k_0-4 k_1+2 k_2+4k_4-8 k_6\right) \\
a_7 & = & -2 \left(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\right)
\end{eqnarray*}
This gives us the minimal polynomial for $\alpha$ as
\[ x^8+b_7x^7+b_6x^6+b_5x^5+b_4x^4+b_3x^3+b_2x^2+b_1 x+b_0 \]
where
\begin{eqnarray*}
b_0 &= &\big(k_0-k_6\big){}^2-\big(k_4-k_7\big){}^2,\\
b_1 & = & 2\big(\big(6 k_0-k_1+k_2+k_4-6 k_6\big) \big(k_0-k_6\big)\\ && \quad -\big(k_1+k_3+5 k_4+k_5-6 k_7\big) \big(k_4-k_7\big)-\big(k_4-k_7\big){}^2\big),\\
b_2 & =& 2 \big(-1+11 k_0-4 k_1+3 k_2+4 k_4-12 k_6\big) \big(k_0-k_6\big)\\ && \quad -\big(k_1+k_3+5 k_4+k_5-6 k_7\big){}^2-2 \big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(k_4-k_7\big)\\ && \quad -4 \big(k_1+k_3+5 k_4+k_5-6 k_7\big) \big(k_4-k_7\big)\\ && \quad +\big(6 k_0-k_1+k_2+k_4-6 k_6\big){}^2,\\
b_3 & = & 2\big(73 k_0^2+k_1^2-k_2+3 k_2^2-k_3\\ && \quad-4 k_3^2-9 k_4+7 k_2 k_4-41 k_3 k_4-81 k_4^2-k_5\\ && \quad -9 k_3 k_5-50 k_4 k_5-5 k_5^2+k_0 \big(-8-39 k_1+31 k_2+39 k_4-153 k_6\big)+8 k_6\\ && \quad-32 k_2 k_6-40 k_4 k_6+80 k_6^2-k_1 \big(7 k_2+7 k_3+42 k_4+8 k_5-40 k_6-42 k_7\big)+9 k_7\\ && \quad+50 k_3 k_7+220 k_4 k_7+60 k_5 k_7-140 k_7^2\big),\\
b_4& = & -2 k_0+2 \big(-2+7 k_0-4 k_1+2 k_2+4 k_4-8 k_6\big) \big(6 k_0-k_1+k_2+k_4-6 k_6\big)+2 k_6\\ && \quad+\big(1-11 k_0+4 k_1-3 k_2-4 k_4+12 k_6\big){}^2\\ && \quad-4 \big(\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big)\\ && \quad+\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_4-k_7\big)\big)\\ && \quad-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big){}^2\\ && \quad-2 \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big)
,\\ b_5 & = & 2 \big(\big(-1+11 k_0-4 k_1+3 k_2+4 k_4-12 k_6\big) \big(-2+7 k_0-4 k_1+2 k_2+4 k_4-8 k_6\big) \\ && \quad -6 k_0+k_1-k_2-k_4+6 k_6-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big){}^2\\ && \quad-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big)\\ && \quad-2 \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big)\big),\\
b_6&=& 2-22 k_0+8 k_1-6 k_2-8 k_4+24 k_6+\big(2-7 k_0+4 k_1-2 k_2-4 k_4+8 k_6\big){}^2\\ && \quad-4 \big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big)\\ && \quad-\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big){}^2,\\
b_7 & = & 2 \big(2-7 k_0+4 k_1-2 k_2-4 k_4+8 k_6-\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big){}^2\big).
\end{eqnarray*}
Notice that $b_0$ is a difference of squares. As $\pm 2$ is not a difference of two squares, we see that $b_0\neq \pm 2$, and hence $b_0$ must be $\pm 1$, by Lemma \ref{*lemma}.
\begin{lemma} If $a^2-b^2=\pm 1$, then
\[ (a,b)\in \{(0,\pm1),(\pm 1,0) \}\]
\end{lemma}
\noindent{\bf Proof.} $a^2-b^2= (a+b)(a-b)=\pm 1$, so both factors are $\pm 1$. If $a-b=1$, then $a=1+b$ and $a+b=1+2b=\pm 1$, so either $b=0$ and $a=1$, or $b=-1$ and $a=0$.
If $a-b=-1$, then $a=-1+b$ and $a+b=-1+2b=\pm 1$, so either $b=0$ and $a=-1$, or $b=1$ and $a=0$. \hfill $\Box$
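Both facts used in this subsection, that $a^2-b^2=\pm 1$ forces the trivial solutions and that $\pm 2$ is never a difference of two squares, can be double-checked by brute force over a window of integers (a quick sanity sketch of ours, not part of the argument):

```python
# Brute force over a window of integers: a^2 - b^2 = ±1 has only the
# trivial solutions, and ±2 is never a difference of two squares.
sols = {(a, b)
        for a in range(-50, 51) for b in range(-50, 51)
        if abs(a * a - b * b) == 1}
print(sorted(sols))  # [(-1, 0), (0, -1), (0, 1), (1, 0)]

diff_two = [(a, b)
            for a in range(-50, 51) for b in range(-50, 51)
            if abs(a * a - b * b) == 2]
print(diff_two)  # [] : a^2 - b^2 = ±2 is impossible (work modulo 4)
```

The second list is empty because $a^2-b^2$ is odd or divisible by $4$, never $\equiv 2 \pmod 4$.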
\medskip
We can now make the following assertions about the coefficients for the minimal polynomial of $\alpha$.
\begin{lemma} Let
\[ \alpha ^8+b_7\alpha ^7+b_6\alpha ^6+b_5\alpha ^5+b_4\alpha ^4+b_3 \alpha ^3+b_2\alpha ^2+b_1\alpha +b_0 = 0 \]
be the minimal polynomial for $\alpha$. Then
\begin{itemize}
\item $b_0=\pm 1$.
\item $b_7$, $b_5$, $b_3$ and $b_1$ are even.
\item $b_6$ has the same parity as $\frac{1}{2} b_7$.
\item $b_4$ has the opposite parity to $b_6$.
\end{itemize}
\end{lemma}
\subsection{Degree $8$ coefficient bounds.}
We want to use the information in the above lemma to prove that there are no such polynomials, as in degree $7$. As earlier, the coefficients in the search are determined by the root structure, but this time the search for $\lambda$ is quite a bit larger. However, we can use the excluded disk to improve the search bounds, since $4.49 \leq |\alpha| \leq 4+2\sqrt{2}$ and $\alpha+\bar\alpha>8.9$, recalling that all the real roots of the minimal polynomial for $\alpha$ lie in $[-1,1]$. A few elementary estimates then brought the search runtime down to about a day. We found no candidates.
\subsection{The case of degree $9$}
In light of our first calculation, the following lemma suffices to deal with this case.
\begin{lemma} Let $\alpha$ be a complex root of a degree $9$ polynomial such that $\lambda=\sqrt{2\alpha+1}$ is an algebraic integer and $\mathbb Q(\lambda)=\mathbb Q(\alpha)$. Then $\alpha$ is not a unit.
\end{lemma}
The proof is simply a very long calculation. We repeatedly apply the process above to determine the parity of the coefficients for the minimal polynomial for $\lambda$,
\[\lambda ^9+a_8\lambda ^8+a_7\lambda ^7+a_6\lambda ^6+a_5\lambda ^5+a_4\lambda ^4+a_3 \lambda ^3+a_2\lambda ^2+a_1\lambda +a_0.\]
As before, we find a multiple of the minimal polynomial for $\alpha$:
\noindent {\tiny
\begin{tabular}{|l|} \hline
$ \left(1+a_1+a_3+a_5+a_7\right){}^2-\left(a_0+a_2+a_4+a_6+a_8\right){}^2 $ \\ \hline
$ \left(2 \left(1+a_1+a_3+a_5+a_7\right){}^2+4 \left(1+a_1+a_3+a_5+a_7\right) \left(4+a_3+2 a_5+3 a_7\right)\right. $\\ $\left.-4 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_2+2 a_4+3 a_6+4 a_8\right)\right) \alpha $ \\ \hline
$\left(8 \left(1+a_1+a_3+a_5+a_7\right) \left(6+a_5+3 a_7\right)+8 \left(1+a_1+a_3+a_5+a_7\right) \left(4+a_3+2 a_5+3 a_7\right) \right.$ \\ $\left. +4 \left(4+a_3+2 a_5+3 a_7\right){}^2-4 \left(a_2+2 a_4+3 a_6+4 a_8\right){}^2-8 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_4+3 a_6+6 a_8\right)\right) \alpha ^2$ \\ \hline
$8 \left(84+a_3^2+70 a_5+10 a_5^2+112 a_7+40 a_5 a_7+35 a_7^2+2 a_1 \left(10+a_5+4 a_7\right)+4 a_3 \left(2 a_5+5 \left(2+a_7\right)\right) \right. $ \\ $\left.-2 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_6+4 a_8\right)-2 \left(a_2+2 a_4+3 a_6+4 a_8\right) \left(a_4+3 a_6+6 a_8\right)\right) \alpha ^3$ \\ \hline
$16 \left(126-a_4^2+70 a_5+5 a_5^2-2 a_2 a_6-10 a_4 a_6-15 a_6^2+140 a_7+30 a_5 a_7+35 a_7^2+2 a_1 \left(5+a_7\right)\right.$\\ $\left. +2 a_3 \left(15+a_5+5 a_7\right)-2 a_0 a_8-10 a_2 a_8-30 a_4 a_8-70 a_6 a_8-70 a_8^2\right) \alpha ^4$\\ \hline
$32 \left(126+2 a_1+42 a_5+a_5^2-2 a_4 a_6-6 a_6^2+112 a_7+12 a_5 a_7+21 a_7^2+2 a_3 \left(6+a_7\right)-2 a_2 a_8-12 a_4 a_8-42 a_6 a_8-56 a_8^2\right) \alpha ^5$\\ \hline
$64 \left(84+2 a_3-a_6^2+56 a_7+7 a_7^2+2 a_5 \left(7+a_7\right)-2 a_4 a_8-14 a_6 a_8-28 a_8^2\right) \alpha ^6$\\ \hline
$128 \left(36+2 a_5+16 a_7+a_7^2-2 a_6 a_8-8 a_8^2\right) \alpha ^7$\\ \hline
$256 \left(9+2 a_7-a_8^2\right) \alpha ^8 + 512\alpha ^9$ \\ \hline
\end{tabular}
}
\bigskip
We see
\begin{itemize}
\item $ 9+2 a_7-a_8^2$ is even, so \underline{$a_8$ is odd.}
\item $2 a_5+a_7^2-2 a_6$ is divisible by $4$, so \underline{$a_7$ is even} and \underline{$a_5-a_6$ is even.}
\item $2 a_3-2 a_4+6 a_5-6 a_6-a_6^2+4 a_5 k_7+4 k_7^2-4 a_4 k_8-4 a_6 k_8$ is divisible by $8$, so \underline{$a_5$ and $a_6$ are even}. Further, $a_3-a_4+2 k_5-2 k_6-2 k_6^2+2 k_7^2-2 a_4 k_8$ is divisible by $4$.
\item $3+a_1-a_2+6 a_3-6 a_4+2 k_5+2 k_5^2-2 k_6-2 a_4 k_6-4 k_6^2+2 a_3 k_7+2 k_7^2-2 a_2 k_8-4 a_4 k_8-4 k_6 k_8$ is divisible by $8$. So $a_1-a_2$ is odd.
\item $24-2 a_0+10 a_1-10 a_2+30 a_3-30 a_4-a_4^2+12 k_5+4 a_3 k_5+20 k_5^2-12 k_6-4 a_2 k_6-20 a_4 k_6-28 k_6^2-8 k_7+4 a_1 k_7+20 a_3 k_7+24 k_5 k_7+12 k_7^2-24 k_8-4 a_0 k_8-20 a_2 k_8-28 a_4 k_8-24k_6 k_8-24 k_8^2$ is divisible by $32$, so \underline{$a_4$ is even}, and $a_0+a_3$ is odd.
\item $84+a_3^2+140 k_5+40 k_5^2+224 k_7+160 k_5 k_7+140 k_7^2+4 a_1 \left(5+k_5+4 k_7\right)+8 a_3 \left(5+2 k_5+5 k_7\right)-4 \left(1+a_0+a_2+2 k_4+2 k_6+2 k_8\right) \left(2+k_6+4 k_8\right)-4 \left(3+k_4+3 k_6+6 k_8\right) \left(4+a_2+4 k_4+6 k_6+8 k_8\right)$ is divisible by $64$ and so \underline{$a_3$ is even, and $a_0$ is odd.}
\item $-4+20 a_1-20 a_2-a_2^2-24 k_0+60 k_3+4 a_1 k_3+12 k_3^2-64 k_4-12 a_2 k_4-8 k_0 k_4-24 k_4^2+84 k_5+12 a_1 k_5+48 k_3 k_5+40 k_5^2-96 k_6-24 a_2 k_6-24 k_0 k_6-80 k_4 k_6-60 k_6^2+112 k_7+24 a_1 k_7+80 k_3 k_7+120 k_5 k_7+84 k_7^2-136 k_8-40 a_2 k_8-48 k_0 k_8-120 k_4 k_8-168 k_6 k_8-112 k_8^2$ is divisible by $128$, so \underline{$a_2$ is even and $a_1$ is odd}.
\end{itemize}
Subsequently we reduce the coefficients modulo $2$ to see that
\begin{itemize}
\item $k_1+k_2$ is even
\item $k_1+k_3$ is even
\item $k_1+k_4$ is odd
\item $k_1+k_5$ is odd
\item $k_1+k_6$ is even
\item $k_1+ k_7$ is even
\item $k_0+k_1+ k_8$ is even
\end{itemize}
Thus we substitute
\[ k_2=2m_2+k_1;k_3=2m_3+k_1;k_4=2m_4+1+k_1;k_5=2m_5+1+k_1; \]\[k_6=2m_6+k_1;k_7=2m_7+k_1;k_8=2m_8+1+k_1;k_0=2m_0+1\]
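As a quick sanity check (ours, not part of the proof), the displayed substitutions encode exactly the parity constraints in the list above, for any choice of the free integers:

```python
import random

def substitute(k1, m):
    # the substitutions displayed above, with free integers m[0], ..., m[8]
    k0 = 2 * m[0] + 1
    k2 = 2 * m[2] + k1
    k3 = 2 * m[3] + k1
    k4 = 2 * m[4] + 1 + k1
    k5 = 2 * m[5] + 1 + k1
    k6 = 2 * m[6] + k1
    k7 = 2 * m[7] + k1
    k8 = 2 * m[8] + 1 + k1
    return k0, k2, k3, k4, k5, k6, k7, k8

for _ in range(100):
    k1 = random.randrange(-50, 50)
    m = [random.randrange(-50, 50) for _ in range(9)]
    k0, k2, k3, k4, k5, k6, k7, k8 = substitute(k1, m)
    # the parity constraints listed above
    assert (k1 + k2) % 2 == 0 and (k1 + k3) % 2 == 0
    assert (k1 + k4) % 2 == 1 and (k1 + k5) % 2 == 1
    assert (k1 + k6) % 2 == 0 and (k1 + k7) % 2 == 0
    assert (k0 + k1 + k8) % 2 == 0
```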
We expand out the coefficients of the polynomials once again to see
\begin{itemize}
\item $1+m_2+m_5+ m_6$ is even
\item $1+ m_3+ m_5+ m_7$ is even
\item $k_1+m_0+m_4+ m_8$ is even
\item $k_1+m_3+m_7$ is even
\item $k_1+m_2+m_6$ is even
\end{itemize}
We then make substitutions once again, of the sort $m_8=2n_8-k_1-m_0-m_4$, $m_7=2n_7-m_3-k_1$, and $m_6=2n_6-m_2-k_1$, and simplify the coefficients once more, checking parity. We find
\begin{itemize}
\item $k_1+n_5+n_6+n_7+n_8$ is even
\item $1+n_6+n_8$ is even
\item $1+n_6+n_8+m_2+ m_3+ m_4$ is even
\end{itemize}
We then make the substitutions
\[ n_7= 2r_7-\left(1+k_1+ n_5\right); n_8= 2r_8-\left(1+ n_6\right); \] \[ m_4=2 n_4-\left(1+n_6+n_8+m_2+ m_3\right); n_6=\left(2r_6-\left(1+ 2r_8-\left(k_1+n_5\right)\right)\right) \]
At this point every coefficient is an integral multiple of $512$ except the constant term, which is $256 \left(r_7^2-r_8^2\right)$. However, $2$ is not a difference of two squares, so the constant term is at least $1024$ in absolute value, and $\alpha$ is not a unit.
\medskip
We remark that a similar situation seems to arise in all odd degrees; we also checked degrees $3$, $5$, and $7$.
\section{The cases $p=2$ and $p=4$.}
This case is quite different from our previous searches in that much of the work has already been done by Flammang and Rhin \cite{FR}. Motivated by a question of ours concerning the identification of all arithmetic Kleinian groups generated by an elliptic of order $2$ and an elliptic of order $3$, they proved the following.
\begin{theorem} There are $15909$ algebraic integers $\alpha$ whose minimal polynomial has exactly one complex conjugate pair of roots in the ellipse
\[ {\cal E} = \{z\in \mathbb C: |z+1|+|z-2|\leq 5 \} \]
and such that all the real roots of this polynomial lie in the interval $[-1,2]$. The degree of such an $\alpha$ is at most $10$, and there are
\begin{itemize}
\item $22$ polynomials in degree $2$,
\item $206$ polynomials in degree $3$,
\item $918$ polynomials in degree $4$,
\item $2524$ polynomials in degree $5$,
\item $4401$ polynomials in degree $6$,
\item $4260$ polynomials in degree $7$,
\item $2792$ polynomials in degree $8$,
\item $600$ polynomials in degree $9$, and
\item $186$ polynomials in degree $10$.
\end{itemize}
\end{theorem}
In order to use this list there are a few things we must establish. First, that the exterior of the moduli space of groups freely generated by elements of order $2$ and $4$, $\mathbb C\setminus {\cal M}_{2,4}$, lies within this ellipse. In fact it does not; however, this space has a $\mathbb Z_2$ symmetry in the line $\{\Re e(z)=-1\}$ which respects discreteness (though the symmetric pair of discrete groups are not necessarily isomorphic, only having a common index two subgroup). Thus we need only show that $\{z:\Re e(z)\geq -1\} \setminus {\cal M}_{2,4}$ lies in the ellipse. Again it does not, but an integer translate of it by $1$ does! This observation was made by Zhang in her PhD thesis \cite{Zhang}. At that time we did not have the technology to decide whether a group had finite co-volume, though Cooper's PhD thesis \cite{Cooper} implemented a modified version of Weeks' SnapPea code to decide this for many of them.
Actually we additionally searched these spaces, using Flammang and Rhin's degree bounds, just to check that we had found all the possibilities.
\medskip
A rough description of the space $\mathbb C\setminus {\cal M}_{2,4}$ can be obtained from the roots of the Farey polynomials, all of which lie in it. Such a description is illustrated below.
\scalebox{0.75}{\includegraphics[viewport=50 460 470 780]{First44}}\\
{\bf Figure 12.} {\it The space $\mathbb C\setminus {\cal M}_{2,4}$ found by roots of Farey polynomials. Notice the symmetry in $\Re e(z)=-1$ }
\medskip
We will now give a provable description of this space in order to get reasonable degree bounds for $\alpha=\gamma+1$ to limit the actual number of polynomials we have to consider. As previously we enumerate the ``first'' 129 Farey words. We compute the pleating rays corresponding to the following $14$ slopes.
\[ F_{rat}=\{\frac{1}{2},\frac{2}{3},\frac{3}{4},\frac{3}{5},\frac{4}{5},\frac{5}{6},\frac{4}{7},\frac{5}{7},\frac{6}{7},\frac{5}{8},\frac{7}{8},\frac{5}{9},\frac{7}{9},\frac{8}{9}\}.\]
The associated Farey polynomials $p_{r/s}(z)$ are monic, with integer coefficients, and have degree no more than $9$. The pleating ray is a particular branch of $p_{r/s}^{-1}((-\infty,-2])$, namely the branch making angle $r\pi/s$ at $\infty$ with the positive real axis. We then compute $D_{r/s}$, the branch of $p_{r/s}^{-1}(\{\Re e(z)\leq -2\})$ containing the pleating ray, called the $r/s$ pleating ray neighbourhood, and the results of \cite{EMS} show that for all $r/s$, $D_{r/s}\subset \overline{{\cal M}_{2,4}}$ touching $\partial {\cal M}_{2,4}$ at a single point -- the $r/s$-cusp group. This is illustrated below.
\scalebox{0.5}{\includegraphics[angle=-90,viewport=-40 50 600 620]{Region44}}\\
\noindent {\bf Figure 14.} {\it The space $\mathbb C\setminus {\cal M}_{2,4}$ (grey) and pleating ray neighbourhoods $D_{r/s}$. The grey region is found by a conjectural description of the space of groups free on two generators given by Bowditch, \cite{Bowditch}. The bounded paraboloid shaped regions are proved to lie completely in ${\cal M}_{2,4}$.}
\medskip
This gives us an approximation to the set ${\cal M}_{2,4}$. Should we find a value $\gamma\in \{z:\Re e(z)>-1,\Im m(z)>0\}$ which does not lie in the bounded region
\[ \mathbb C \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}, \]
then we know that the group generated by elements of order two and four with this particular commutator value is free on the two generators.
\medskip We remark that it is quite straightforward to test whether a point lies in some $D_{r/s}$. We simply evaluate the Farey polynomial $p_{r/s}$ at it to see whether the image has real part less than $-2$ (we can guess the particular $r/s$ by inspection). Then we check that this point lies in the same branch of $p_{r/s}^{-1}(\{\Re e(z)>-2\})$ as that determined by the pleating ray, by an elementary path lifting which we can also implement computationally.
\bigskip
\subsection{The symmetry of the space ${\cal M}_{2,4}$}
First we describe the symmetry of the space ${\cal M}_{2,4}$.
\begin{theorem} Let $\langle f,g\rangle$ be a Kleinian group generated by elements $g$ of order $2$ and $f$ of order $4$ and set $\gamma=\gamma(f,g)$. Then there is $\langle f,h\rangle$, a Kleinian group generated by elements of order $2$ and $4$ and such that $\gamma(f,h)=-2-\gamma$. The groups $\langle f,g\rangle$ and $\langle f,h\rangle$ have a common index two subgroup $\langle f,gfg^{-1}\rangle$ and so both are simultaneously lattices (or not).
\end{theorem}
\noindent{\bf Proof.} The axis of $g$ bisects (and is perpendicular to) the common perpendicular $\ell$ between the axes of $f$ and $gfg^{-1}$. Let $h$ be the elliptic of order two whose axis is perpendicular to both that of $g$ and $\ell$. Then evidently $hfh^{-1}$ is an element of the same order as $f$, it shares the axis of $gfg^{-1}$, and it has the same trace. Thus $hfh^{-1}=(gfg^{-1})^{\pm1}$. We calculate that $\gamma(f,h)(\gamma(f,h)+2)=\gamma(f,hfh^{-1})=\gamma(f,gfg^{-1})=\gamma(f,g)(\gamma(f,g)+2)$, and a moment's analysis shows $\gamma(f,g)\neq \gamma(f,h)$, so $\gamma(f,h)=-2-\gamma(f,g)$. The statement regarding index two follows immediately by looking at the coset decomposition, as $g$ and $h$ both have order two. \hfill $\Box$
The figure below clearly indicates, and it is not too difficult to prove just using the combinatorics of the isometric circles to construct a fundamental domain, that
\begin{equation}
\mathbb C \setminus {\cal M}_{2,4} \subset \{-2+{\cal E}\} \cup \{-1+{\cal E}\}
\end{equation}
\scalebox{0.73}{\includegraphics[viewport=40 420 770 750]{24Covered}}\\
Notice that the interval $[-2,0]$ lies in both $\{-2+{\cal E}\}$ and $\{-1+{\cal E}\}$. Thus we find that the values of $\gamma$ that we seek are those points in $-1+{\cal E}$ whose minimal polynomial has real roots which actually lie inside the interval $[-2,0]$ (recall they do lie in $[-2,1]$). Once we have this list of points we go through the process of deciding if they are in or out of $\mathbb C \setminus {\cal M}_{2,4}$.
\medskip
Now using this symmetry we may assume $0 \leq \Re e(\alpha) \leq 3$. We also have the elementary bounds $|\alpha|\leq 3$, $|\alpha+1|\leq 4$ and $|\alpha-1|\leq 2$. Now we obtain degree bounds as earlier following (\ref{alphabounds}).
Let
\[ q_\alpha(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0 = (z-\alpha)(z-\bar\alpha)(z-r_1) \cdots(z-r_{n-2}) \]
be the minimal polynomial for $\alpha$. Then, as $r_i\in [-1,1]$,
\begin{eqnarray} |q_\alpha(-1)||q_\alpha(0)||q_\alpha(1)| &=& |\alpha|^2|\alpha^2-1|^2\prod_{i=1}^{n-2} |r_i|\, |1-r_{i}^2|\nonumber \\
& \leq & 576 (0.3849)^{n-2}.
\end{eqnarray}
since $\max \{|x(x^2-1)|:x\in [-1,1]\} = 0.3849\ldots$.
As the left hand side here is a nonzero integer, its absolute value is at least one, and we deduce that $n\leq 8$. Indeed, if $q_\alpha(0)\neq \pm 1$, then $n\leq 7$.
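This degree bound is easy to confirm numerically (a sketch of ours, not from the paper; the grid maximum approximates $\max |x(x^2-1)| = 2/(3\sqrt{3}) \approx 0.3849$):

```python
import math

# M = max |x(x^2 - 1)| over [-1, 1], attained at x = ±1/sqrt(3)
M = max(abs(x * (x * x - 1))
        for x in (t / 1000 for t in range(-1000, 1001)))

# |q(-1) q(0) q(1)| is a nonzero integer, hence at least 1 in absolute
# value, and is at most 576 * M^(n-2); find the largest compatible degree n.
n = 2
while 576 * M ** (n - 1) >= 1:   # tests the bound for degree n + 1
    n += 1
print(n)  # degree bound: n <= 8
```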
We therefore run through the $12331$ polynomials in the list above, along with the degree $8$ polynomials with last coefficient $1$. We are only interested in those whose real roots lie in $[-1,1]$ and whose complex root has real part of absolute value no more than $\sqrt{3}$.
\subsection{Degree $2$} There are $12$ possible values, of which $6$ lie in the region $\mathbb C \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$.\\
\scalebox{0.5}{\includegraphics[angle=-90,viewport=40 50 600 660]{Degree2Points}}\\
\subsection{Degree $3$} There are $175$ possible values, of which $13$ lie in the region $\mathbb C \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$.\\
\scalebox{0.5}{\includegraphics[angle=-90,viewport=40 50 600 660]{Degree3Points}}\\
\subsection{Degree $4$} With rather coarse search bounds
\[ -8\leq a \leq 1, -12\leq b \leq 22, -23\leq c \leq 23, -8\leq d \leq 8 \]
we found $572$ possible values, of which $23$ lie in the region $\mathbb C \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$. \\
\scalebox{0.75}{\includegraphics[viewport=50 500 750 800]{24Degree4}}\\
We then checked irreducibility of the polynomial and this left $13$ possibilities in the region $\{\Re e(z)\geq -1\}$ and one further point was eliminated by adding another pleating ray neighbourhood. This leaves the list we have presented.
\subsection{The case $p=4$.} In this case we have a group generated by two elements of order $4$. The following theorem shows that if $\langle f,g\rangle$ is such a group, then there is a $\mathbb Z_2$ extension to a group $\langle f,\Phi \rangle$ generated by elliptics of orders $2$ and $4$. Of course if $\langle f,g\rangle$ is a discrete subgroup of an arithmetic Kleinian group, then so is $\langle f,\Phi \rangle$. The only issue is the calculation of the relevant commutators, the arithmetic data, and identifying the slope.
\begin{theorem} Let $\langle f,g\rangle$, generated by two elements of order $4$, be a discrete subgroup of an arithmetic Kleinian group and $\gamma=\gamma(f,g)$ with $\gamma$ complex. Suppose $\tilde{\gamma}(\tilde{\gamma}+2)=\gamma$. Then there is $\Phi$ of order two and such that $\langle f,\Phi\rangle$ is a discrete subgroup of an arithmetic Kleinian group and that $\tilde{\gamma}=\gamma(f,\Phi)$.
\end{theorem}
\noindent{\bf Proof.} One of the two elliptics of order two whose axis bisects the common perpendicular of the axes of $f$ and of $g$ and interchanges these axes is the element $
\Phi$ we seek. Then $\Phi f\Phi^{-1}=g^{\pm 1}$ and $\langle f,\Phi\rangle$ is discrete as it contains a discrete group of index two. Similarly these groups are commensurable, and have the same invariant trace field. Arithmeticity is preserved by finite extensions. To see this more concretely, the factorisation condition must hold for $\gamma$, that condition is precisely that $\mathbb Q(\tilde{\gamma})=\mathbb Q(\gamma)$. We have already seen the trace identity
\[ \gamma=\gamma(f,g)=\gamma(f,\Phi f\Phi)=\gamma(f,\Phi)(\gamma(f,\Phi)+2) =\tilde{\gamma}(\tilde{\gamma}+2) \]
This completes the proof. \hfill $\Box$
\medskip
This result now shows that we can find all the groups in the case $p=4$ from our analysis of the previous case $p=2$.
\section{Finding the groups.}
The recent solution of Agol's conjecture \cite{ALSS,AOPSY}, identifying all the Kleinian groups generated by two parabolic elements as two bridge knot and link groups and associated Heckoid groups, suggests a method for identifying our groups.
We conjecture that the same is true for groups generated by two elements of finite order. This suggests the following approach to identifying these groups. Let $\Gamma=\langle f,g\rangle$ with $o(f)=4$ and $o(g)=p$. We only sketch the ideas here.
Let $m/n$ and $r/s$ be two rational numbers in lowest terms. Define an operation $\oplus$ by $m/n\oplus r/s=(m + r)/(n + s)$; we call $\oplus$ Farey addition. If $m/n < r/s$, then $m/n < (m + r)/(n + s) < r/s$. These Farey fractions enumerate the simple closed curves on the four times punctured sphere, and hence words in the free group of rank two. These words correspond to pleating rays. As examples, we start with a pair of rationals and associated rational words (in generators $X$ and $Y$, with $x=X^{-1}$ and $y=Y^{-1}$):
\[ \{1/2, 1/1\} \mapsto \{\{X, Y, x, y\}, \{X, y\}\}\]
and inductively create Farey words by addition;
\begin{eqnarray*} \left\{\frac{1}{2},\frac{2}{3},1\right\} &\mapsto& \{\{X,Y,x,y\},\{X,Y,x,Y,X,y\},\{X,y\}\} \\
\left\{\frac{1}{2},\frac{3}{5},\frac{2}{3},\frac{3}{4},1\right\}&\mapsto&\{\{X,Y,x,y\},\{X,Y,x,y,X,y,x,Y,X,y\},\\ && \{X,Y,x,Y,X,y\},\{X,Y,x,Y,x,y,X,y\},\{X,y\}\} \end{eqnarray*}
Here is some Mathematica code which does this (from Zhang \cite{Z}).
\medskip
{\tiny \noindent L1 = {1/2, 1/1};\\
L2 = {{X, Y, x, y}, {X, y}} (*list of rational words*)\\
NF = 3 (*determines how many polynomials to make *)\\
For[j = 1, j$\leq$ NF, j++, (* determines the number of rational words made*)\\
d = Length[L1];\\
For[i=1, i$\leq$(2 d - 1), i=i + 2,\\
N1 = (Numerator[L1[[i]]] + Numerator[L1[[i + 1]]] )/(Denominator[L1[[i]]] + Denominator[L1[[i + 1]]]);\\
(* Farey addition*)\\
L1 = Insert[L1, N1, i + 1]; (* orders the fractions *)\\
T2 = Join[L2[[i]], L2[[i + 1]]]; d1 = Denominator[N1] + 1; \\
R = Switch[T2[[d1]], x, X, X, x, y, Y, Y, y];\\
T2 = ReplacePart[T2, R, d1]; L2 = Insert[L2, T2, i + 1]]]}
\medskip
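For readers without Mathematica, the same mediant-and-word construction can be transcribed in Python (a sketch of ours, not the thesis code); it reproduces the rational words listed above:

```python
from fractions import Fraction

def farey_words(levels):
    """Generate Farey fractions and words by repeated mediant insertion,
    starting from 1/2 -> XYxy and 1/1 -> Xy."""
    fracs = [Fraction(1, 2), Fraction(1, 1)]
    words = ["XYxy", "Xy"]
    for _ in range(levels):
        i = 0
        while i < len(fracs) - 1:
            a, b = fracs[i], fracs[i + 1]
            # mediant of Farey neighbours is already in lowest terms
            med = Fraction(a.numerator + b.numerator,
                           a.denominator + b.denominator)
            # concatenate the neighbouring words, then invert the letter at
            # 1-based position denominator(med) + 1, as in the code above
            w = list(words[i] + words[i + 1])
            k = med.denominator          # the same position, 0-based
            w[k] = w[k].swapcase()       # x <-> X, y <-> Y
            fracs.insert(i + 1, med)
            words.insert(i + 1, "".join(w))
            i += 2
    return dict(zip(map(str, fracs), words))

table = farey_words(2)
print(table["3/5"])  # XYxyXyxYXy
```

Two passes of mediant insertion reproduce the five words displayed earlier.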
Each entry in the second list corresponds to a word $W_{r/s}$. We take a representation of a group generated by elliptics of order $4$ and $p$ by assigning
\[ X=\left(\begin{array}{cc}\frac{1+i}{\sqrt{2}}& 1 \\ 0 &\frac{1-i}{\sqrt{2}} \end{array} \right), \;\;\;\;\;
Y=\left(\begin{array}{cc}\cos(\frac{\pi}{p})+i\sin(\frac{\pi}{p}) & 0 \\ -\mu &\cos(\frac{\pi}{p})-i\sin(\frac{\pi}{p}) \end{array} \right),\]
Then
\begin{equation} p_{r/s}(\mu)={\rm Trace}(W_{r/s})\end{equation}
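As a numerical sanity check (our sketch, not the paper's code), one can multiply out a word and read off the trace. Here we take the lower-right entry of $X$ to be $(1-i)/\sqrt{2}$, an assumption made so that both generators have determinant $1$, and verify two facts that are insensitive to the sign convention for $\mu$: ${\rm Trace}(W_{1/2})$ is a monic quadratic in $\mu$ with constant term $2$.

```python
import cmath, math

def generators(p, mu):
    # X elliptic of order 4 (lower-right entry (1-i)/sqrt(2) assumed, so
    # that det X = 1); Y elliptic of order p, with parameter mu
    a = cmath.exp(1j * math.pi / 4)
    c = cmath.exp(1j * math.pi / p)
    X = [[a, 1], [0, a.conjugate()]]
    Y = [[c, 0], [-mu, c.conjugate()]]
    return X, Y

def mul(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def inv(M):
    # inverse of a determinant-1 2x2 matrix
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

def trace_word(word, p, mu):
    X, Y = generators(p, mu)
    letter = {'X': X, 'Y': Y, 'x': inv(X), 'y': inv(Y)}
    W = [[1, 0], [0, 1]]
    for ch in word:
        W = mul(W, letter[ch])
    return W[0][0] + W[1][1]

# Trace(W_{1/2}) as a function of mu, with p = 2
t0, t1, t2 = (trace_word("XYxy", 2, m) for m in (0, 1, 2))
```

The constant term $t_0 = 2$ and the second difference $t_2 - 2t_1 + t_0 = 2$ (forced for any monic quadratic) match the table entry $p_{1/2}(\mu)=2-2\sqrt{2}\mu+\mu^2$ below, up to the choice of sign for $\mu$.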
Some examples (with $p=2$ and the first $9$ fractions):
\begin{center}
\begin{tabular}{|c|c|} \hline
Farey & polynomial \\ \hline
$\frac{1}{2}$ & $2-2 \sqrt{2} \mu +\mu ^2$ \\ \hline
$\frac{4}{7}$ & $ -\sqrt{2}+25 \mu -58 \sqrt{2} \mu ^2+118 \mu ^3-65 \sqrt{2} \mu ^4+41 \mu ^5-7 \sqrt{2} \mu ^6+\mu ^7 $\\ \hline
$\frac{3}{5}$ & $\sqrt{2}-13 \mu +17 \sqrt{2} \mu ^2-19 \mu ^3+5 \sqrt{2} \mu ^4-\mu ^5$ \\ \hline
$\frac{5}{8}$ & $2-16 \sqrt{2} \mu +104 \mu ^2-144 \sqrt{2} \mu ^3+220 \mu ^4-100 \sqrt{2} \mu ^5+54 \mu ^6-8 \sqrt{2} \mu ^7+\mu ^8$ \\ \hline
$\frac{2}{3}$ & $-\sqrt{2}+5 \mu -3 \sqrt{2} \mu ^2+\mu ^3$ \\ \hline
$\frac{5}{7}$ & $\sqrt{2}-17 \mu +36 \sqrt{2} \mu ^2-84 \mu ^3+55 \sqrt{2} \mu ^4-39 \mu ^5+7 \sqrt{2} \mu ^6-\mu ^7$ \\ \hline
$\frac{3}{4}$ & $2-4 \sqrt{2} \mu +10 \mu ^2-4 \sqrt{2} \mu ^3+\mu ^4$ \\ \hline
$\frac{4}{5}$ & $-\sqrt{2}+5 \mu -11 \sqrt{2} \mu ^2+17 \mu ^3-5 \sqrt{2} \mu ^4+\mu ^5$ \\ \hline
$\frac{1}{1}$ & $\sqrt{2}-\mu$ \\ \hline
\end{tabular} \\
\end{center}
Notice that the degree of the polynomial is the denominator of the fraction, and that $W_{1/2}=XYxy=[X,y]$, so $p_{1/2}(\mu)-2=\gamma(X,Y)$.
To identify our groups we used the following procedure. For each $p$ we construct a list of about $100$ fractions and polynomials $p_{r/s}(\mu)$. From a candidate monic polynomial $z^3 +5z^2 +7z+1$ we identify $\gamma=-2.41964+0.60629i$ and $\mu=\sqrt{2}+\sqrt{2+\gamma }=1.60761 + 1.5675 i$, from $-2 \sqrt{2} \mu +\mu ^2=\gamma$. We then evaluate all these polynomials at $\mu$.
\begin{center}
\begin{tabular}{|c|c|} \hline
Farey $r/s$& polynomial $P_{r/s}$\\ \hline
$\frac{1}{2}$ & $-0.419643 - 0.606291 i$ \\ \hline
$\frac{4}{7}$ & $-1.42553 + 1.59871 i$\\ \hline
$\frac{3}{5}$ & $0.540536 + 1.03152 i$ \\ \hline
$\frac{5}{8}$ & $1. - 2.84217 \times 10^{-14} i $ \\ \hline
$\frac{2}{3}$ & $1.02696 - 0.838121 i$ \\ \hline
$\frac{5}{7}$ & $-4.56052 + 1.21192 i$ \\ \hline
$\frac{3}{4}$ & $2.6478 + 1.72143 i$ \\ \hline
$\frac{4}{5}$ & $-3.39159 + 2.16591 i$ \\ \hline
$\frac{1}{1}$ & $0.398566 - 0.760591 i$ \\ \hline
\end{tabular} \\
\end{center}
This list suggests that ${\rm Trace}(W_{5/8})=1$, and that therefore $W_{5/8}^3=1$. We deduce that this point $\gamma$ gives us the Heckoid group $(5/8;3)$: the two bridge link with slope $5/8$, with one component surgered by $(4,0)$ Dehn surgery and the other by $(2,0)$ Dehn surgery, and with the tunnelling word $W_{5/8}$ elliptic of order three.
We presented this data with round-off error, and we must decide whether it is genuine. There are two fortunate things here. First, we know a priori from the Identification Theorem that the group generated by elements of order $2$ and $4$ with $\gamma$ as the commutator value {\it is} discrete; we are just trying to find out what it is. Secondly, $\gamma$ is presented as an algebraic integer, so we can use integer arithmetic to identify the minimal polynomial of $P_{r/s}(\mu)$ and its roots. We can then assure ourselves that $1$ is indeed a root and that no other root is close. Alternatively, we could go back, enter $\gamma$, and compute $W_{r/s}$ symbolically; these amount to the same thing, of course.
In the case at hand $\mu = \sqrt{2}+ \sqrt{2 + \gamma}$ and $\mu$ is a root of
\[ 1 - 298 z^2 + 251 z^4 - 200 z^6 + 71 z^8 - 14 z^{10} + z^{12} \]
The minimal polynomial of $P_{5/8}(\mu)$ is then found to be $z-1$.
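The first numerical step, recovering $\gamma$ from the candidate polynomial, is easy to reproduce; here is a sketch in Python using a simple Durand--Kerner root finder (ours, not the authors' code):

```python
def p(z):
    # the candidate polynomial from the text
    return z ** 3 + 5 * z ** 2 + 7 * z + 1

def durand_kerner(f, deg, iters=200):
    # simultaneous root iteration for a monic degree-`deg` polynomial
    zs = [(0.4 + 0.9j) ** (k + 1) for k in range(deg)]
    for _ in range(iters):
        new = []
        for i, z in enumerate(zs):
            d = 1
            for j, w in enumerate(zs):
                if j != i:
                    d *= z - w
            new.append(z - f(z) / d)
        zs = new
    return zs

roots = durand_kerner(p, 3)
# the complex root in the upper half plane
gamma = next(z for z in roots if z.imag > 0.1)
print(gamma)  # approx -2.41964 + 0.60629i, as in the text
```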
This procedure works well in almost all cases. However, some cases required searching through a few thousand slopes. Doing this using integer arithmetic takes a very long time, while using numerical approximation introduces round-off errors, as there are approximately four times as many matrices to multiply together as the denominator of the slope (so sometimes around $500$ matrices). We found that working with $40$ digit precision (and hoping for a result accurate to a couple of decimal places) gave us an answer in a reasonable time, a couple of hours. Of course, once we have a possible answer it is much easier and quicker to verify that it is in fact correct.
\section{Introduction}
Graph Ramsey theory is currently among the most active areas in combinatorics. Two of the main parameters in the theory are Ramsey number and size Ramsey number, which are defined as follows. Given two graphs $G_1$ and $G_2$, we write $G\to (G_1,G_2)$ if for any edge colouring of $G$ such that each edge is coloured either red or blue, the graph $G$ always contains either a red copy of $G_1$ or a blue copy of $G_2$. The \emph{Ramsey number} $r(G_1,G_2)$ is the smallest possible number of vertices in a graph $G$ satisfying $G\to (G_1,G_2)$. The \emph{size Ramsey number} $\hat{r}(G_1,G_2)$ is the smallest possible number of edges in a graph $G$ satisfying $G\to (G_1,G_2)$. That is to say, $r(G_1,G_2)=\min\{|V(G)|:G\to (G_1,G_2)\}$, and $\hat{r}(G_1,G_2)=\min\{|E(G)|:G\to (G_1,G_2)\}$.
The size Ramsey number was introduced by Erd\H os, Faudree, Rousseau, and Schelp \cite{erdos1978size} in 1978. Some variants have also been studied since then. In 2015, Rahadjeng, Baskoro, and Assiyatun \cite{rahadjeng2015connected} initiated the study of such a variant called connected size Ramsey number by adding the condition that $G$ is connected. Formally speaking, the \emph{connected size Ramsey number} ${\hat{r}}_c(G_1,G_2)$ is the smallest possible number of edges in a connected graph $G$ satisfying $G\to (G_1,G_2)$. It is easy to see that $\hat{r}(G_1,G_2)\le {\hat{r}}_c(G_1,G_2)$, and equality holds when both $G_1$ and $G_2$ are connected graphs. But the latter parameter seems more tricky when $G_1$ or $G_2$ is disconnected. The previous results are mainly concerned with the connected size Ramsey numbers of a matching versus a sparse graph such as a path, a star, and a cycle.
Let $nK_2$ be a matching with $n$ edges, and $P_m$ a path with $m$ vertices. Vito, Nabila, Safitri, and Silaban \cite{vito2021size} gave an upper bound of ${\hat{r}}_c(nK_2,P_m)$, and the exact values of ${\hat{r}}_c(nK_2,P_3)$ for $n=2,3,4$.
\begin{thm}\cite{vito2021size}
\label{thm:pm}
For $n\ge 1$, $m\ge 3$, ${\hat{r}}_c(nK_2,P_m)\le \begin{cases}n(m+2)/2-1, & \text{if}\ n\ \text{is even}; \\ (n+1)(m+2)/2-3, & \text{if}\ n\ \text{is odd}. \end{cases}$\\ Equality holds for $m=3$ and $1\le n\le 4$.
\end{thm}
If $m$ is much larger than $n$, this upper bound cannot be tight, because Erd\H os and Faudree \cite{erdos1984size} constructed a connected graph which implies ${\hat{r}}_c(nK_2,P_m)\le m+c\sqrt{m}$, where $c$ is a constant depending on $n$. But for small $m$, the above upper bound can be tight. Our first result determines the exact values of ${\hat{r}}_c(nK_2,P_3)$ for all positive integers $n$, which generalises the equality of \cref{thm:pm}.
\begin{thm}
\label{thm:P3}
For all positive integers $n$, we have ${\hat{r}}_c(nK_2,P_3)=\flo{(5n-1)/2}$.
\end{thm}
Rahadjeng, Baskoro, and Assiyatun \cite{rahadjeng2017connected} proved that ${\hat{r}}_c(nK_2,C_4)\le 5n-1$ for $n\ge 4$. This upper bound can be improved from $5n-1$ to $\lfloor (9n-1)/2 \rfloor$.
\begin{thm}
\label{thm:C4}
For all positive integers $n$, we have ${\hat{r}}_c(nK_2,C_4)\le \lfloor (9n-1)/2 \rfloor$.
\end{thm}
Now we prove the theorem by constructing a graph with $\lfloor (9n-1)/2 \rfloor$ edges. Let $K_{3,3}-e$ be the graph $K_{3,3}$ with one edge deleted. It is easy to check that $K_{3,3}-e\to (2K_2,C_4)$. We use $nG$ to denote $n$ disjoint copies of $G$. If $n$ is even, then $\frac{n}{2}(K_{3,3}-e)\to (nK_2,C_4)$. The graph $\frac{n}{2}(K_{3,3}-e)$ has $n/2$ components and can be connected by adding $n/2-1$ edges. If $n$ is odd, then $\frac{n-1}{2}(K_{3,3}-e)\cup C_4\to (nK_2,C_4)$. The graph $\frac{n-1}{2}(K_{3,3}-e)\cup C_4$ has $(n+1)/2$ components and can be connected by adding $(n-1)/2$ edges. In both cases, we obtain a connected graph with $\lfloor (9n-1)/2 \rfloor$ edges and hence the upper bound follows.
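The base fact $K_{3,3}-e\to (2K_2,C_4)$ used in this construction is finite to check: there are only $2^8$ colourings. A Python sketch (ours, not part of the paper):

```python
from itertools import combinations, product

# K_{3,3} minus one edge: parts {0,1,2} and {3,4,5}, edge (0,3) deleted
edges = [(u, v) for u in range(3) for v in range(3, 6) if (u, v) != (0, 3)]

def has_red_matching(red, n):
    # n pairwise vertex-disjoint red edges
    return any(all(set(e).isdisjoint(f) for e, f in combinations(m, 2))
               for m in combinations(red, n))

def has_blue_c4(blue):
    # a C4 exists iff two vertices share at least two blue neighbours
    adj = {v: set() for v in range(6)}
    for u, v in blue:
        adj[u].add(v)
        adj[v].add(u)
    return any(len(adj[u] & adj[v]) >= 2
               for u, v in combinations(range(6), 2))

def arrows(edge_list, n):
    # G -> (nK_2, C_4): every red/blue colouring yields one of the two
    for col in product((0, 1), repeat=len(edge_list)):
        red = [e for e, c in zip(edge_list, col) if c == 0]
        blue = [e for e, c in zip(edge_list, col) if c == 1]
        if not has_red_matching(red, n) and not has_blue_c4(blue):
            return False
    return True

print(arrows(edges, 2))  # True: K_{3,3}-e -> (2K_2, C_4)
```

By contrast, a single $C_4$ fails, since colouring two adjacent edges red and the rest blue avoids both a red $2K_2$ and a blue $C_4$.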
It seems likely that the determination of ${\hat{r}}_c(nK_2,C_4)$ for all $n$ is tricky. We believe the upper bound is tight and pose the following conjecture.
\begin{conj}
\label{conj:C4}
For all positive integers $n$, ${\hat{r}}_c(nK_2,C_4)=\lfloor (9n-1)/2 \rfloor$.
\end{conj}
Even though solving the above conjecture seems out of our reach, we show a result which has the same flavour and has exact values: ${\hat{r}}_c(nK_2,C_3)=4n-1$.
\begin{thm}
\label{thm:C3}
For all positive integers $n$, we have ${\hat{r}}_c(nK_2,C_3)=4n-1$.
\end{thm}
Proofs of \cref{thm:P3} and \cref{thm:C3} will be presented in Section \ref{section2} and Section \ref{section3}, respectively. To prove the lower bounds, we need to discuss the connectivity of a graph $G$. If $G$ is not 2-connected, the basic properties of blocks and end blocks are needed, which can be found in Bondy and Murty \cite[Chap.~5.2]{bondy2008graph}. Moreover, the following terminology is used frequently in the proofs. We say $G$ has a \emph{$(G_1,G_2)$-colouring} if there is a red-blue edge colouring of $G$ such that $G$ contains neither a red $G_1$ nor a blue $G_2$. Thus, the existence of such a colouring is equivalent to $G\not\to (G_1,G_2)$.
\section{A matching versus $P_3$}\label{section2}
For the upper bound, we know that $C_4\to (2K_2,P_3)$. If $n$ is even, then $\frac{n}{2}C_4\to (nK_2,P_3)$. The graph $\frac{n}{2}C_4$ has $n/2$ components and can be connected by adding $n/2-1$ edges. If $n$ is odd, then $\frac{n-1}{2}C_4\cup P_3\to (nK_2,P_3)$. The graph $\frac{n-1}{2}C_4\cup P_3$ has $(n+1)/2$ components and can be connected by adding $(n-1)/2$ edges. In both cases, we obtain a connected graph with $\flo{(5n-1)/2}$ edges and hence the upper bound follows.
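The base fact $C_4\to (2K_2,P_3)$ used here can likewise be confirmed by exhausting the $2^4$ colourings (a quick sketch of ours, not part of the proof):

```python
from itertools import combinations, product

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]

def arrows_matching_p3(edge_list, n):
    # G -> (nK_2, P_3): every colouring has n disjoint red edges
    # or two adjacent blue edges
    for col in product((0, 1), repeat=len(edge_list)):
        red = [e for e, c in zip(edge_list, col) if c == 0]
        blue = [e for e, c in zip(edge_list, col) if c == 1]
        red_ok = any(all(set(e).isdisjoint(f)
                         for e, f in combinations(m, 2))
                     for m in combinations(red, n))
        blue_ok = any(set(e) & set(f)
                      for e, f in combinations(blue, 2))
        if not red_ok and not blue_ok:
            return False
    return True

print(arrows_matching_p3(c4, 2))  # True: C_4 -> (2K_2, P_3)
```

A path $P_4$, for example, does not arrow $(2K_2,P_3)$: colour its two end... rather, colour the first two edges red and the last blue.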
For the lower bound, we use induction on $n$. The result is obvious for $n=1,2$. Assume that for $k<n$ and any connected graph $G$ with at most $\flo{(5k-3)/2}$ edges, we have $G\not\to (kK_2,P_3)$. Now consider $G$ to be a connected graph with the minimum number of edges such that $G\to (nK_2,P_3)$. Thus, for any proper connected subgraph $G'$ of $G$, we have $G'\not\to (nK_2,P_3)$. Since $n\ge 3$, $G$ has at least six edges. Suppose to the contrary that $G$ has at most $\flo{(5n-3)/2}$ edges. We will deduce a contradiction and hence ${\hat{r}}_c(nK_2,P_3)\ge \flo{(5n-1)/2}$.
An edge set $E_1$ of a connected graph $G$ is called \emph{deletable} if $E_1$ satisfies the following conditions:
\begin{lemenum}
\item $E_1$ can be partitioned into two edge sets $E_2$ and $E_3$, where $E_2$ forms a star and $E_3$ forms a matching;
\item any edge of $E(G)\setminus E_1$ is nonadjacent to $E_3$;
\item the graph induced by $E(G)\setminus E_1$ is still connected.
\end{lemenum}
Note that for a deletable edge set $E_1$, the graph $G-E_1$ may have some isolated vertices, but all edges of $G-E_1$ belong to the same connected component. We have the following property of a deletable edge set.
\begin{claim}\label{clm: deletableedge}
Every deletable edge set has size at most two.
\end{claim}
\begin{proof}
Let $E_1$ be a deletable edge set. If $|E_1|\ge 3$, then the graph induced by $E(G)\setminus E_1$ has at most $\flo{(5n-3)/2}-3$ edges and hence an $((n-1)K_2, P_3)$-colouring by induction. We then colour all edges of $E_2$ red and all edges of $E_3$ blue. This is an $(nK_2, P_3)$-colouring of $G$, a contradiction.
\end{proof}
A \emph{non-cut vertex} of a connected graph is a vertex whose deletion still results in a connected graph. Thus, every vertex of a nontrivial connected graph is either a cut vertex or a non-cut vertex. Since $E_3$ in the definition of a deletable edge set can be empty, the edges incident to a non-cut vertex form a deletable edge set. We have the following direct corollary.
\begin{claim}\label{clm: noncut}
Every non-cut vertex has degree at most two.
\end{claim}
If $G$ is a 2-connected graph, by \cref{clm: noncut}, $G$ is a cycle. Beginning from any edge of $G$, we may colour all edges consecutively along the cycle. We alternately colour two edges red and one edge blue, until all edges of $G$ have been coloured. Obviously $G$ contains no blue $P_3$. From $(5n-3)/2\le 3(n-1)$ and the colouring of $G$ we see that $G$ contains no red matching with $n$ edges. Thus, $G\not\to (nK_2,P_3)$.
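The cycle case can also be checked by machine. The following sketch is our own (the colouring pattern and function names are ours): it verifies that the repeating red-red-blue pattern on a cycle with $m$ edges leaves no blue $P_3$ and no red matching of size $n$, whenever $m\le \flo{(5n-3)/2}$.

```python
# Sketch: colour the m edges of a cycle with the repeating pattern
# red, red, blue, and check that (i) no two blue edges are adjacent
# (no blue P_3) and (ii) the red edges admit no matching of size n.
# A maximal red run of k edges on the cycle contributes ceil(k/2)
# edges to the maximum red matching.

def max_red_matching(colours):
    m = len(colours)
    if all(c == 'R' for c in colours):          # all-red cycle: floor(m/2)
        return m // 2
    i = next(j for j, c in enumerate(colours) if c == 'B')
    rot = colours[i:] + colours[:i]             # rotate so a blue edge is first
    total, run = 0, 0
    for c in rot + ['B']:                       # sentinel blue closes last run
        if c == 'R':
            run += 1
        else:
            total += (run + 1) // 2             # ceil(run/2)
            run = 0
    return total

for n in range(3, 40):
    for m in range(3, (5 * n - 3) // 2 + 1):
        colours = ['B' if i % 3 == 2 else 'R' for i in range(m)]
        assert all(not (colours[i] == 'B' and colours[(i + 1) % m] == 'B')
                   for i in range(m))           # no blue P_3
        assert max_red_matching(colours) < n    # no red matching of size n
```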
Now we assume that $G$ is connected but not 2-connected. Recall that a \emph{block} of a graph is a subgraph that is nonseparable and is maximal with respect to this property. An \emph{end block} is a block that contains exactly one cut vertex of $G$. We have the following observation.
\begin{claim}\label{clm: endblock}
Every end block is either a $K_2$ or a cycle.
\end{claim}
\begin{proof}
Let $B$ be an end block with at least three vertices, and let $v$ be the single cut vertex of $G$ that is contained in $B$. Since $B$ is 2-connected, the subgraph $B-v$ is still connected. By \cref{clm: noncut}, every non-cut vertex has degree at most two; since $B$ is 2-connected, every vertex of $B-v$ therefore has degree exactly two in $G$. It follows that $B-v$ is either a path or a cycle. We claim that $v$ has exactly two neighbours in $B$. If not, $v$ has at least three neighbours in $B$, each of which has degree one in $B-v$. Since a path has only two vertices of degree one and a cycle has none, $B-v$ is neither a path nor a cycle, a contradiction. Hence, every vertex of $B$ has exactly two neighbours in $B$, and $B$ must be a cycle.
\end{proof}
Since $G$ is not 2-connected, there is at least one cut vertex. Choose any cut vertex as a \emph{root}, denoted by $r$. For a vertex $u$ of $G$, if any path from $u$ to $r$ must pass through a cut vertex $v$, then $u$ is called a \emph{(vertex) descendant} of $v$. For any edge $e$ of $G$, if both ends of $e$ are descendants of $v$, then $e$ is called an \emph{edge descendant} of $v$. For a cut vertex $v$, the block containing $v$ but no other descendant of $v$ is called a \emph{parent block} of $v$. It is obvious that every cut vertex has a unique parent block, except that the root $r$ has no parent block. If $v$ is a cut vertex but every descendant of $v$ is not a cut vertex of $G$, we call $v$ an \emph{end-cut}. It is obvious that $G$ has at least one end-cut. We have the following property of end-cuts.
\begin{claim}\label{clm: end-cut}
Every end-cut is contained in a unique end block, which is $K_2$. Moreover, if an end-cut is not the root of $G$, its parent block is also $K_2$.
\end{claim}
\begin{proof}
Let $v$ be an end-cut. If $v$ is not the root $r$, then by the definition of an end-cut, every block containing $v$ is an end block, except for its parent block. If we delete $v$ and all descendants of $v$ from $G$, the resulting induced subgraph, denoted by $G'$, is still connected: no vertex of $G'$ is a descendant of $v$, so any two vertices of $G'$ are joined by a path in $G$ avoiding $v$, and this path survives in $G'$. In the following, regardless of whether $v$ is the root or not, we first colour all edges incident to $v$ red, then give a colouring of all edge descendants of $v$, and finally find a colouring of $G'$ by the inductive hypothesis. We prove that this edge colouring of $G$ is an $(nK_2,P_3)$-colouring under certain conditions.
By \cref{clm: endblock}, every end block is either a $K_2$ or a cycle. Assume that $v$ has $t_1$ neighbours in its parent block. Note that if $v$ is the root of $G$, then $t_1=0$. Assume $v$ is contained in $t_2$ blocks which are $K_2$, and in $t$ blocks which are cycles. Let $p_1+2, p_2+2, \dots, p_t+2$ be the cycle lengths of these $t$ cycles. If we remove $v$ from $G$, the cycles become $t$ disjoint paths with length $p_1, p_2, \dots, p_t$ respectively. We colour all edges incident to $v$ red. For each path with length $p_i$, where $1\le i\le t$, we colour all edges from one leaf to the other leaf consecutively along the path, alternately with one edge blue and two edges red. Now we have coloured $x:=t_1+t_2+2t+p_1+p_2+\dots+p_t$ edges, no blue $P_3$ appears, and the maximum red matching has $y:=1+\flo{(p_1+1)/3}+\dots+\flo{(p_t+1)/3}$ edges.
If $v$ is the root, then we have already coloured all edges of $G$. So $x\le \flo{(5n-3)/2}$, and we need to check that $y\le n-1$. Since $G$ has at least six edges, we have $6t_2+7t+p_1\ge 11$. Thus, $n\ge (2x+3)/5\ge (2t_2+4t+2p_1+\dots+2p_t+3)/5\ge (t+p_1+\dots+p_t+4)/3>y$. This implies that $G\not\to (nK_2,P_3)$.
If $v$ is not the root, recall that $G'$ is formed by the remaining edges and is connected, so we can use the inductive hypothesis. The graph $G'$ has at most $\flo{(5n-3)/2}-x$ edges, which is $\flo{(5(n-2x/5)-3)/2}\le \flo{(5(n-\flo{2x/5})-3)/2}$. So $G'$ has an $((n-\flo{2x/5})K_2,P_3)$-colouring. It is not difficult to check that $G$ has no blue $P_3$, and that the maximum red matching has at most $y+n-\flo{2x/5}-1$ edges. It remains to show under what conditions $y+n-\flo{2x/5}-1$ is less than $n$, which yields the desired contradiction.
Since $v$ has at least one neighbour in its parent block, it follows that $t_1\ge 1$. If $t\ge 1$, then $1+(t-1)/3\le (4t+1)/5$, and $\flo{(p_1+1)/3}\le (2p_1+1)/5$. Thus,
\begin{align*}
y&=1+\flo{(p_1+1)/3}+\flo{(p_2+1)/3}+\dots+\flo{(p_t+1)/3}\\
&\le 1+\flo{(p_1+1)/3}+(p_2+1)/3+\dots+(p_t+1)/3\\
&\le 1+\flo{(p_1+1)/3}+(t-1)/3+2(p_2+\dots+p_t)/5\\
&\le (4t+1)/5+(2p_1+1)/5+2(p_2+\dots+p_t)/5\\
&\le 2(t_1+t_2+2t+p_1+p_2+\dots+p_t)/5=2x/5.
\end{align*}
If $t=0$ and $t_1+t_2\ge 3$, then $y=1<6/5\le 2x/5$. In both cases, we have $y\le 2x/5$ and hence $y+n-\flo{2x/5}-1<n$.
Now we consider the remaining case when $t=0$ and $t_1+t_2\le 2$. Since $v$ is an end-cut, we have $t_2\ge 1$. Thus, $t_1=t_2=1$. It follows from $t=0$ and $t_2=1$ that $v$ is contained in a unique end block which is $K_2$. It follows from $t_1=1$ that $v$ has only one neighbour in its parent block. Since each block is either a $K_2$ or a 2-connected subgraph, the parent block of $v$ must be $K_2$.
\end{proof}
Let $v$ be an end-cut; as shown in the proof of \cref{clm: end-cut}, $v$ cannot be the root of $G$. Let $u$ be the other end of its parent block, and $v^+$ the descendant of $v$. If $u$ is contained in an end block which is an edge $uu^+$, then $uv, uu^+, vv^+$ form a deletable edge set. If $u$ is contained in an end block which is not an edge, by \cref{clm: endblock}, the end block is a cycle. Let $u^+$ be a neighbour of $u$ on the cycle. Then $uv, uu^+, vv^+$ form a deletable edge set. If $u$ has at least two end-cuts as its descendants, let $w$ be another end-cut and $uw, ww^+$ edge descendants of $u$. Then $uv, vv^+, uw, ww^+$ form a deletable edge set. If $u$ has only one end-cut as its descendant, which is $v$, then all edges incident to $u$ and the edge $vv^+$ form a deletable edge set. By \cref{clm: deletableedge}, each of the above cases leads to a contradiction. This completes the proof of the lower bound.
\section{A matching versus $C_3$}\label{section3}
Now we prove \cref{thm:C3}. The graph $nC_3$ has $n$ components and can be connected by adding $n-1$ edges. Denote this connected graph by $H$. It follows from $nC_3\to (nK_2,C_3)$ that $H\to (nK_2,C_3)$. Thus, ${\hat{r}}_c(nK_2,C_3)\le 4n-1$.
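The arrowing claim $nC_3\to (nK_2,C_3)$ underlying this construction can be verified exhaustively for small $n$; the following sketch is our own (the function name is ours):

```python
# Sketch: exhaustively verify nC_3 -> (nK_2, C_3) for small n.  Every
# red/blue colouring of the 3n edges of n vertex-disjoint triangles
# must contain a blue triangle or a red matching of size n.
from itertools import product

def arrows(n: int) -> bool:
    for colouring in product('RB', repeat=3 * n):
        triangles = [colouring[3 * i:3 * i + 3] for i in range(n)]
        blue_C3 = any(t == ('B', 'B', 'B') for t in triangles)
        # the triangles are vertex-disjoint, so picking one red edge from
        # each triangle that has one yields a red matching of that size
        red_matching = sum('R' in t for t in triangles)
        if not blue_C3 and red_matching < n:
            return False
    return True

for n in range(1, 4):
    assert arrows(n)
# connecting the n triangles with n-1 extra edges gives a connected
# graph with 3n + (n-1) = 4n - 1 edges, as in the upper bound above
```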
Set $\mathcal{G}=\{G: G \text{ is connected},\ |E(G)|\le 4n-2,\ \text{and}\ G\to (nK_2,C_3)\}$. We will prove that $\mathcal{G}$ is empty, and hence the lower bound follows. Suppose not, and choose a graph $G$ from $\mathcal{G}$ with minimum order and minimum size subject to its order. Thus, for any proper connected subgraph $G'$ of $G$ with at most $4k-2$ edges, we have $G'\not\to (kK_2,C_3)$. We present the proof through a sequence of claims.
\begin{claim}\label{clm:minimumdegree}
The minimum degree of $G$ is at least two.
\end{claim}
\begin{proof}
Suppose that $G$ has a vertex $v$ of degree one. Then $G-v$ has an $(nK_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring the edge incident to $v$ blue, which contradicts our assumption that $G\to (nK_2,C_3)$. Thus, $\delta(G)\ge 2$.
\end{proof}
\begin{claim}\label{clm:nocutedge}
The graph $G$ has no cut edge.
\end{claim}
\begin{proof}
Suppose that $G$ has a cut edge $e$. Then $G-e$ has two connected components $X$ and $Y$. Let $k_1,k_2$ be the integers such that $4k_1-5\le |E(X)|\le 4k_1-2$ and $4k_2-5\le |E(Y)|\le 4k_2-2$, respectively. Then $X$ has a $(k_1K_2,C_3)$-colouring and $Y$ has a $(k_2K_2,C_3)$-colouring. These colourings can be extended to an $((k_1+k_2-1)K_2,C_3)$-colouring of $G$ by colouring $e$ blue, so the maximum red matching has at most $k_1+k_2-2$ edges. From $(4k_1-5)+(4k_2-5)+1\le |E(X)|+|E(Y)|+1\le 4n-2$ we deduce that $k_1+k_2-2\le n-1/4$, and hence $k_1+k_2-2\le n-1$. Thus $G\not\to (nK_2,C_3)$, a contradiction, which proves the claim.
\end{proof}
\begin{claim}\label{clm:2-connected}
The graph $G$ is 2-connected.
\end{claim}
\begin{proof}
If $G$ is not 2-connected, let $v$ be a cut vertex of $G$, and $B_1,B_2,\dots,B_\ell$ the blocks containing $v$, where $\ell\ge 2$. If $v$ has only one neighbour in $B_i$ for some $i$ with $1\le i\le \ell$, then $B_i$ is not 2-connected. Since any block is either 2-connected or a $K_2$, $B_i$ has to be $K_2$. Hence, $B_i$ is a cut edge \cite[Chap.~4.1.18]{west2001introduction}, which contradicts \cref{clm:nocutedge}. Thus, for each $B_i$ with $1\le i\le \ell$, $v$ has at least two neighbours in $B_i$.
The vertex set $V(G)$ can be partitioned into two parts $X$ and $Y$ as follows: if every path from $u$ to $v$ has to pass through a vertex of $B_1$ other than $v$, then $u\in X$; otherwise $u\in Y$. Let $G_1$ and $G_2$ be the subgraphs induced by $X\cup \{v\}$ and $Y$ respectively. It is obvious that $G_1$ contains $B_1$ and $G_2$ contains $B_2\cup \dots\cup B_\ell$, and that they share only one vertex, namely $v$. Let $k_1,k_2$ be the integers such that $4k_1-5\le |E(G_1)|\le 4k_1-2$ and $4k_2-5\le |E(G_2)|\le 4k_2-2$, respectively. Then $G_1$ has a $(k_1K_2,C_3)$-colouring and $G_2$ has a $(k_2K_2,C_3)$-colouring. Combining the two colourings, we obtain an $((k_1+k_2-1)K_2,C_3)$-colouring of $G$, so the maximum red matching has at most $k_1+k_2-2$ edges. From $(4k_1-5)+(4k_2-5)\le |E(G_1)|+|E(G_2)|\le 4n-2$ we deduce that $k_1+k_2-2\le n$. If $k_1+k_2-2<n$, then this colouring is an $(nK_2,C_3)$-colouring of $G$, a contradiction. If $k_1+k_2-2=n$, then $|E(G_1)|=4k_1-5$ and $|E(G_2)|=4k_2-5$, and we obtain an $(nK_2,C_3)$-colouring of $G$ as follows. If $\ell=2$, then both $B_1-v$ and $B_2-v$ are connected. Hence, both $G_1-v$ and $G_2-v$ are connected, which implies that they have a $((k_1-1)K_2,C_3)$-colouring and a $((k_2-1)K_2,C_3)$-colouring, respectively. These can be extended to an $((k_1+k_2-2)K_2,C_3)$-colouring of $G$ by colouring all edges incident to $v$ red. Thus, $G\not\to (nK_2,C_3)$. If $\ell\ge 3$, then for each $i$ with $2\le i\le \ell$, we delete an edge $vv_i$ from $B_i$. Both $G_1-v$ and $G_2-\{vv_2,\dots,vv_\ell\}$ are connected, so they have a $((k_1-1)K_2,C_3)$-colouring and a $((k_2-1)K_2,C_3)$-colouring, respectively. These can be extended to an $((k_1+k_2-2)K_2,C_3)$-colouring of $G$ by colouring the remaining edges red. Again, $G\not\to (nK_2,C_3)$, a contradiction.
\end{proof}
\begin{claim}\label{clm:maximumdegree}
The maximum degree of $G$ is at most three.
\end{claim}
\begin{proof}
For any vertex $v$ of $G$, by \cref{clm:2-connected}, $G-v$ is still connected. If $d(v)\ge 4$, $G-v$ has at most $4(n-1)-2$ edges and hence an $((n-1)K_2,C_3)$-colouring by the choice of $G$. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring all edges incident to $v$ red, a contradiction. Thus, the maximum degree of $G$ is at most three.
\end{proof}
\begin{claim}\label{clm:3-regular}
The graph $G$ is 3-regular.
\end{claim}
\begin{proof}
By \cref{clm:minimumdegree} and \cref{clm:maximumdegree}, $2\le d(v)\le 3$ for any vertex $v$ of $G$. If $G$ is 2-regular, then by \cref{clm:2-connected}, $G$ is a cycle. If $G$ is a triangle, then $n\ge 2$, and we colour all edges of $G$ red, which is an $(nK_2,C_3)$-colouring. If $G$ is not a triangle, then we colour all edges of $G$ blue, which is a $(K_2,C_3)$-colouring. Thus, $G$ cannot be 2-regular.
If $G$ is not 3-regular, then $G$ has some vertices of degree two and some of degree three, so there exist two adjacent vertices with degrees two and three, denoted by $v_1$ and $v_2$ respectively. Since $G-v_2$ is connected and $v_1$ has only one neighbour in $G-v_2$, it follows that $G-\{v_1,v_2\}$ is connected. This graph has at most $4(n-1)-2$ edges and hence an $((n-1)K_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring all edges incident to $v_2$ red and the remaining edge incident to $v_1$ blue, a contradiction, which proves the claim.
\end{proof}
\begin{claim}\label{clm:triangle}
Each edge of $G$ is contained in at least one triangle.
\end{claim}
\begin{proof}
Suppose that $G$ has an edge $e$ which is not contained in any triangle. By \cref{clm:nocutedge}, $G-e$ is connected. It follows from the choice of $G$ that $G-e$ has an $(nK_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring $e$ blue, a contradiction.
\end{proof}
Consider a triangle $v_1v_2v_3$. By \cref{clm:3-regular}, each of $v_1,v_2,v_3$ has another neighbour, denoted by $v_4,v_5,v_6$ respectively. If $v_4,v_5,v_6$ are the same vertex, then $v_1,v_2,v_3,v_4$ form a $K_4$. Since $G$ is a 3-regular 2-connected graph, the whole graph $G$ is a $K_4$ and $n\ge 2$. We colour the triangle $v_1v_2v_3$ red, and the other three edges blue. This is a $(2K_2,C_3)$-colouring of $G$, a contradiction. Thus, at least two of $v_4,v_5,v_6$ are distinct; say $v_4$ and $v_5$ are two distinct vertices. The vertex $v_3$ cannot be adjacent to both $v_4$ and $v_5$, since otherwise $d(v_3)\ge 4$. Without loss of generality, assume that $v_3$ is not adjacent to $v_4$. Moreover, $v_4$ is not adjacent to $v_2$, since otherwise $d(v_2)\ge 4$. By \cref{clm:triangle}, $v_1v_4$ is contained in a triangle, denoted by $v_1v_4w$. Since $w$ is distinct from $v_2$ and $v_3$, we have $d(v_1)\ge 4$, a final contradiction.
\section*{References}}
\usepackage{lineno}
\usepackage[colorlinks,linkcolor={blue}]{hyperref}
\modulolinenumbers[5]
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb, nicefrac}
\usepackage{mathtools}
\DeclarePairedDelimiter{\setof}{\{}{\}}
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
\DeclarePairedDelimiter{\dotp}{\langle}{\rangle}
\DeclarePairedDelimiter{\paren}{(}{)}
\renewcommand{\iff}{\text{if and only if}}
\usepackage{xcolor}
\definecolor{lightblue}{rgb}{0.8,0.8,1}
\newcommand{\sk}[2]{{\color{lightblue}#1} {\color{blue}#2}}
\newtheorem{theorem}{Theorem}
\newtheorem{maintheorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{Example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{rem}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{remarks}[theorem]{Remarks}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newcommand{\comment}[1]{}
\newcommand{\cover}[1]{\stackrel{#1}{\Longrightarrow}}
\newcommand{\invcover}[1]{\stackrel{#1}{\Longleftarrow}}
\newcommand{\mathrm{dom}}{\mathrm{dom}}
\newcommand{\mathrm{int}}{\mathrm{int}}
\newcommand{\mathrm{Id}}{\mathrm{Id}}
\makeatletter
\usepackage{tikz}
\newcommand*\circled[2][1.6]{\tikz[baseline=(char.base)]{
\node[shape=circle, draw, inner sep=1pt,
minimum height={\f@size*#1},] (char) {\vphantom{WAH1g}#2};}}
\newcommand\NoStart{\circled[0.0]{$N_0$} }
\makeatother
\bibliographystyle{elsarticle-num}
\begin{document}
\begin{frontmatter}
\title{Computer assisted proofs for transverse collision \\
and near collision orbits in the restricted three body problem}
\author{Maciej J. Capi\'nski\footnote{M. C. was partially supported by the NCN grants 2019/35/B/ST1/00655 and 2021/41/B/ST1/00407.}}
\ead{maciej.capinski@agh.edu.pl}
\address{AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krak\'ow, Poland}
\author{Shane Kepley}
\ead{s.kepley@vu.nl}
\address{Vrije Universiteit Amsterdam,
De Boelelaan 1105, 1081 HV Amsterdam, Netherlands }
\author{J.D. Mireles James\footnote{J.D.M.J. was partially supported by
NSF Grant DMS 1813501}}
\ead{jmirelesjameds@fau.edu}
\address{Florida Atlantic University, 777 Glades Road, Boca Raton, Florida, 33431}
\begin{abstract}
This paper considers two point boundary value problems for conservative systems
defined in multiple coordinate systems, and develops a flexible a-posteriori framework
for computer assisted existence proofs.
Our framework is applied to the study of collision and near collision orbits
in the circular restricted three body problem. In this case the coordinate systems are the
standard rotating coordinates, and the two Levi-Civita coordinate systems
regularizing collisions with each of the massive primaries. The proposed framework is used
to prove the existence of a number of orbits which have long been studied numerically
in the celestial mechanics literature, but for which there are no existing analytical proofs
at the mass and energy values considered here. These include
transverse ejection/collisions from one primary body to the other,
Str\"{o}mgren's asymptotic periodic orbits
(transverse homoclinics for $L_{4,5}$), families of periodic orbits
passing through collision, and
orbits connecting $L_4$ to ejection or collision.
\end{abstract}
\begin{keyword} Celestial mechanics, collisions, transverse homoclinic, computer assisted proofs.
\MSC[2010]
37C29, 37J46, 70F07.
\end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:intro}
The present work develops computer assisted arguments for proving theorems about
collision and near collision orbits in conservative systems, and applies these arguments
to a number of questions involving the planar circular restricted three body problem
(PCRTBP). The PCRTBP, defined formally in Section \ref{sec:PCRTBP},
describes the motion of
an infinitesimal particle like a satellite, asteroid, or comet
moving in the field of two massive bodies called the primaries.
These primary bodies are assumed to orbit their center of mass
on Keplerian circles. Changing to a co-rotating frame
of reference results in autonomous equations of motion, and
choosing normalized units of distance, mass, and time
reduces the number of parameters in the problem to one: the mass ratio of
the primaries.
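As a concrete illustration (our own sketch, not the formulation used later in the paper), the rotating-frame equations of motion can be written down once a convention for the primaries is fixed. The code below assumes the common convention with masses $1-\mu$ at $(-\mu,0)$ and $\mu$ at $(1-\mu,0)$, and checks that $L_4$ is an equilibrium:

```python
# Sketch under common conventions (ours): rotating-frame PCRTBP with
# mass ratio mu, primaries fixed at (-mu, 0) and (1-mu, 0).  The state
# is (x, y, vx, vy); Omega is the effective potential, and the motion
# satisfies x'' - 2y' = Omega_x, y'' + 2x' = Omega_y.
import math

def pcrtbp(state, mu):
    x, y, vx, vy = state
    r1 = math.hypot(x + mu, y)          # distance to primary of mass 1-mu
    r2 = math.hypot(x - 1 + mu, y)      # distance to primary of mass mu
    Ox = x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    Oy = y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return (vx, vy, 2 * vy + Ox, -2 * vx + Oy)

mu = 0.5                                 # equal masses, as in Theorems 2, 3, 5
L4 = (0.5 - mu, math.sqrt(3) / 2, 0.0, 0.0)
assert all(abs(c) < 1e-12 for c in pcrtbp(L4, mu))   # L4 is an equilibrium
```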
We consider the following questions about
the dynamics of the infinitesimal body in the PCRTBP.
In each case we are interested in non-perturbative
values of the mass and energy parameters.
Recall that in systems which conserve energy,
periodic orbits occur in one parameter families
-- or tubes -- parameterized by energy.
We note also that the PCRTBP has an equilibrium solution, or
Lagrange point, called $L_4$ in the upper half plane
forming an equilateral triangle with the two
primaries. (Similarly, $L_5$ forms an equilateral triangle
in the lower half plane.)
We develop a methodology which allows us to address the following questions.
\begin{itemize}
\item \textbf{Q1:} \textit{Do there exist orbits of the
infinitesimal body, which collide with
one primary in forward time, and
the other primary in backward time?} We refer to such orbits as
primary-to-primary ejection-collisions.
\item \textbf{Q2:} \textit{Do there exist orbits of the
infinitesimal body
which are asymptotic to $L_4$ in backward time,
but which collide with a primary in forward time?}
(Or the reverse - from ejection to $L_4$). We refer to these as
$L_4$-to-collision orbits (or ejection-to-$L_4$ orbits).
\item \textbf{Q3:} \textit{Do there exist orbits
of the infinitesimal body which are
asymptotic in both forward and backward time to $L_4$? }
Such orbits are said to be homoclinic to $L_{4}$.
\item \textbf{Q4:} \textit{Do there exist
tubes of large amplitude periodic orbits
for the infinitesimal body, which accumulate to an ejection-collision orbit with
one of the primaries?} Such tubes are said to terminate
at an ejection-collision orbit.
\item \textbf{Q5:} \textit{Do there exist tubes of periodic orbits
for the infinitesimal body
which accumulate to a pair of ejection-collision orbits going
from one primary to the other and back?} Such
tubes are said to terminate at a consecutive
ejection-collision.
\end{itemize}
In response to the questions above we have the following theorems,
which constitute the main results of the present work.
\begin{theorem}
For the PCRTBP with mass ratio $1/3$
there exist ejection-collision orbits
from one primary to the other, in both directions. (See page \pageref{thm:ejectionCollision} for the precise statement.)
\end{theorem}
\begin{theorem}For the PCRTBP with equal masses,
there exist ejection-to-$L_4$ orbits, and
$L_4$-to-collision orbits. (See page \pageref{thm:CAP-L4-to-collision} for the precise statement.)
Analogous orbits exist for $L_5$ by symmetry considerations.
\end{theorem}
\begin{theorem}For the PCRTBP with equal masses,
there exist transverse homoclinic orbits for $L_4$. (See page \pageref{thm:CAP-connections} for the precise statement.) Analogous orbits exist for $L_5$ by symmetry considerations.
\end{theorem}
\begin{theorem}For the PCRTBP with Earth-Moon mass parameter,
there exists a family of periodic orbits which accumulate to
an ejection-collision orbit involving the Earth.
The ejection-collision orbit
has ``large amplitude'' in the sense that it passes
near collision with the Moon.
(See page \pageref{th:CAP-Lyap} for the precise statement.)
\end{theorem}
\begin{theorem}For the PCRTBP with equal masses,
there exists a family of periodic orbits which accumulates to a
consecutive ejection-collision orbit involving both primaries.
(See page \pageref{thm:doubleCollision} for the precise statement.)
\end{theorem}
\begin{remark}[Termination orbits] \label{rem:termination}
{\em
Theorems 3,4,5 involve the termination of
tubes of periodic orbits.
In the case of Theorem 3, the existence of
a transverse $L_4$ homoclinic implies the
further existence of a family
of periodic orbits which accumulates to the $L_4$ homoclinic
by a theorem of Henrard \cite{MR0365628}.
It is worth remarking further that the orbits of Theorem 3 imply also the
existence of chaotic dynamics in the $L_4$ energy level. This is due to
a theorem of Devaney \cite{MR0442990}.
In Theorems 4 and 5, we obtain families of periodic orbits
terminating at the ejection-collision orbit by a direct application of
the implicit function theorem.
Termination orbits have a long history in celestial mechanics,
and are of fundamental importance in equivariant bifurcation
theory. We refer the interested reader to the discussion of ``Str\"{o}mgren's
termination principle'' in Chapter 9 of
\cite{theoryOfOrbits}, and to the works of
\cite{MR1879221,MR2042173,MR2969866} on equivariant families
in the Hill three body and restricted three body problems. See also
the works of \cite{MR3007103,MR2821620} on global continuation families
in the restricted $N$-body problem.}
\end{remark}
\begin{remark}[Ballistic transport] \label{rem:ballisticTransport}
{\em
Theorem 1 establishes the existence of ballistic transport,
or a zero energy transfer, from one primary to the other in
finite time. The existence of
ballistic transport shows for example that debris
can diffuse between a planet and its moon, or between a
star and one of its planets, using only the natural dynamics
of the system.
This phenomenon is observed
for example when Earth rocks, ejected into space
after a meteor strike, are later found on the Moon
\cite{earthMoonRock} (or vice versa).
In a similar fashion, Theorem 2 shows the existence of orbits
whose velocity limits to zero in backward time, but to
infinity in finite forward time (or vice versa). Such orbits
describe ballistic transfer from $L_4$ to a primary.
}
\end{remark}
\begin{remark}[Moulton's $L_4$ periodic orbits] \label{rem:Moulton}
{\em
The family of periodic orbits whose existence
is established in Theorem 5 are of Moulton's $L_4$ type,
in the sense of \cite{moultonBook}.
That is, these are periodic orbits which when projected into the $(x,y)$
plane (i.e. the configuration space) have non-trivial winding about $L_4$.
See also Chapter 9 of \cite{theoryOfOrbits},
or the works of \cite{onMoulton_Szebehely,szebehelyTriangularPoints}
for a more complete discussion of the history
(and controversy) surrounding Moulton's orbits.
The present work provides the first mathematically
rigorous proof that Moulton type $L_4$ periodic orbits exist.
}
\end{remark}
Each of the five theorems above
are proven using a common analytical
set up for two point boundary value problems (BVPs)
in energy manifolds of systems defined in several different
coordinate systems.
Our setup for the BVPs is designed to allow for rigorous computer assisted validation of the needed assumptions using interval arithmetic. This is implemented using
freely available validated numerical tools
for computing mathematically rigorous enclosures of solutions of initial value
problems, variational equations, and invariant manifolds.
In particular, we make extensive use of the CAPD library for
validated numerical integration of ODEs \cite{CAPD_paper}.
(Additional details about these algorithms
are found in \cite{MR1930946,cnLohner}.
Similar methods for computing validated enclosures of
stable/unstable manifolds attached to equilibrium solutions
are discussed in \cite{MR3906230,MR3792792}.)
Collisions and near collision orbits are incorporated into this analytical
setup via the classical Levi-Civita regularization. In these
coordinates the set of all collisions appears as a
simple one dimensional manifold, which we refer to as the collision set \cite{MR1555161}.
Once we obtain the collision set analytically we formulate BVPs
for orbits beginning and ending at collision.
We review the Levi-Civita coordinates for
the PCRTBP in Section \ref{sec:PCRTBP}, and refer the interested reader
to Chapter 3 of \cite{theoryOfOrbits}, to
the notes of \cite{cellettiCollisions,MR633766}, and to the works of
\cite{MR562695,MR359459,MR3069058} for a much more complete
overview of regularization in celestial mechanics.
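Schematically, and with conventions that may differ from those fixed later in the paper, the Levi-Civita transformation centred at a primary substitutes a complex square for the position relative to that primary and rescales time:

```latex
% Levi-Civita substitution (schematic; conventions are ours).
% z = position relative to the chosen primary, w = regularized variable,
% s = regularized time absorbing the singularity at collision.
z = w^{2}, \qquad \frac{dt}{ds} = \lvert w \rvert^{2}.
```

After this substitution the collision $z=0$ corresponds to $w=0$, where the regularized vector field extends smoothly on a fixed energy level; this is what makes the set of collisions appear as a simple one dimensional manifold.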
\begin{remark}[Collisions in the literature]
{\em
Collisions are an essential and delicate topic in celestial mechanics.
While it has been shown that
the set of orbits which collide in finite time has measure zero
\cite{MR295648,MR321386},
it is also known that the embedding of the collision
set may be topologically complicated.
For example, recent results of \cite{MR3951693} show that
there exist open sets where collisions are
dense. Many mathematically rigorous theorems on the existence of
collisions exploit perturbative techniques,
taking one or more of the masses to be small
\cite{MR640127,MR638060,MR682839,MR993819,MR1342132},
or the energy to be large
\cite{MR967629,MR1849229,MR1919782,MR3417880,MR3417880,
MR1805879,MR2245344,MR2331205,MR2256650,MR2163532}.
These works depend on results from geometry/topology,
the calculus of variations, and KAM theory.
For parameter and energy regimes
where analytical results are unavailable,
numerical studies illuminate the dynamics
of the collision set
\cite{MR3693390,MR3880194,MR4110029,MR4162341,
MR4270813,tereOlleCollisions,MR3665263,MR4361879}. Our work goes towards providing a framework for computer assisted proofs for collision orbits, for the parameter regimes where the perturbative methods cannot be applied.
}
\end{remark}
\begin{remark}[CAPs in the literature]
{\em
Constructive computer assisted arguments have been used to prove
many theorems in celestial mechanics.
An overview of the literature on computer assisted proofs (CAPs)
in celestial mechanics is beyond the scope of the
present paper, and we refer the interested reader to the
works of \cite{MR2112702,MR2259202,MR2312391,MR3863686,MR3923486}
on periodic orbits, the works of
\cite{MR1947690,MR1961956,MR3032848,MR3906230,MR2824484}
on connecting orbits and chaos,
the works of \cite{oscillations,capinski_roldan,diffusionCRTBP,maciejMarianDiffusion}
on oscillations to infinity, center manifolds, and Arnold diffusion,
and the works of \cite{MR1101365,MR1101369,alexCAPKAM,MR2150352,MR4128817}
on quasi-periodic orbits and KAM phenomena.
By looking also to the references in the papers cited in this paragraph,
the interested reader will come away with a deeper appreciation of
the role of CAPs in celestial mechanics.
We remark that, until now, collisions have been viewed
largely as impediments to the implementation of CAPs. We demonstrate in the current paper that this is not the case.
}
\end{remark}
\bigskip
The remainder of the paper is organized as follows.
In Section \ref{sec:problem} we describe the problem setup
in terms of an appropriate multiple shooting problem, and
establish tools for solving the problem. In particular, we
define the unfolding parameters which we use to isolate
transverse solutions in energy level sets and use this notion to formulate Theorem \ref{th:single-shooting} and Lemma \ref{lem:multiple-shooting-2} which we later use for our computer assisted proofs.
In Section \ref{sec:PCRTBP}
we describe the PCRTBP and its Levi-Civita regularization.
Sections \ref{sec:ejectionToCollision}, \ref{sec:L4_to_collision}, and
\ref{sec:symmetric-orbits}
describe respectively the formulation of the multiple shooting
problem for primary-to-primary ejection-collision orbits,
$L_4$ to ejection/collision orbits, $L_4$ homoclinic orbits,
and periodic ejection-collision families.
Section \ref{sec:CAP} describes our computer assisted proof strategy and
illustrates how this strategy is used to prove our main theorems.
Some technical details are given in the appendices.
The codes implementing the computer assisted proofs discussed in
this paper are available at the homepage of the first author MC.
\section{Problem setup}
\label{sec:problem}
Consider an ODE with one or more first integrals or constants of motion.
For such systems, the level sets of the integrals
give rise to invariant sets. Indeed, the level sets are invariant
manifolds except at critical points of the conserved quantities.
In this section we describe a shooting method for two point
boundary value problems between submanifolds of the level set. To be more
precise, we consider two manifolds, parameterized (locally) by some
functions, which are contained in a level set. We present a method which
allows us to find points on these manifolds which are linked by a solution of
an ODE. This in particular implies that the two manifolds intersect. Our
method will allow us to establish transversality of the intersection within
the level set.
We consider an ODE%
\begin{equation}
x^{\prime}=f\left( x\right) , \label{eq:ode-1}
\end{equation}
where $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$. Assume that the flow $%
\phi\left( x,t\right) $ induced by (\ref{eq:ode-1}) has an integral of
motion expressed as%
\begin{equation*}
E:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k},
\end{equation*}
which means that
\begin{equation}
E\left( \phi\left( x,t\right) \right) =E\left( x\right) ,
\label{eq:E-integral}
\end{equation}
for every $x\in\mathbb{R}^{d}$ and $t\in \mathbb{R}$. Fix $c\in\mathbb{R%
}^{k}$ and define the level set
\begin{equation}
M :=\left\{x \in \mathbb{R}^d : E(x)=c\right\} , \label{eq:M-level-set}
\end{equation}
and assume that $M$ is (except possibly at some degenerate points) a smooth
manifold. Consider two open sets $D_{1}\subset\mathbb{R}^{d_{1}}$ and $%
D_{2}\subset\mathbb{R}^{d_{2}}$ and two chart maps
\begin{equation}
P_{i}: D_{i}\rightarrow M\subset\mathbb{R}^{d}\qquad\text{for }i=1,2,
\label{eq:Pi-intro}
\end{equation}
parameterizing submanifolds of $M$.
\begin{remark}
One can for example think of the $P_{1}$ and $P_{2}$ as parameterizations of
the exit or entrance sets on some
local unstable and stable manifolds, respectively, of some invariant object.
However in some of the applications to follow $P_{1,2}$ will parameterize
collision sets in regularized coordinates or some surfaces of symmetry for $%
f $.
\end{remark}
We seek points $\bar{x}_{i}\in D_{i}$ for $i=1,2$ and a time $\bar{\tau}\in%
\mathbb{R}$ such that%
\begin{equation}
\phi\left( P_{1}(\bar{x}_{1}),\bar{\tau}\right) =P_{2}\left( \bar{x}%
_{2}\right) . \label{eq:problem-1}
\end{equation}
Note that if $P_1$ and $P_2$ parameterize some $\phi$-invariant manifolds, then
Equation \eqref{eq:problem-1} implies that these manifolds intersect. The
setup is depicted in Figure \ref{fig:setup}.
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=4cm]{Fig-1.pdf}
\end{center}
\caption{The left and right plots are in $\mathbb{R}^{d}$ with a $d-k$
dimensional manifold $M$ depicted in gray. The manifolds $%
P_{i}(D_{i})\subset M$, for $i=1,2$, are represented by curves inside of $M$. We seek $\bar x_{1} \in D_1, \bar x_{2} \in D_2$ and $\bar{\tau} \in \mathbb{R}$ such
that $\protect\phi(P_{1}(\bar x_{1}),\bar{\tau})=P_{2}(\bar x_{2})$.
The two points $P_{i}(\bar x_{i})$, for $i=1,2$, are represented by dots.}
\label{fig:setup}
\end{figure}
\begin{remark}
We denote points of $\mathbb{R}^{d_{1}}$ and $\mathbb{R}^{d_{2}}$ by $x_{1}$ and $x_{2}$, respectively; this avoids confusion with $x\in\mathbb{R}^{d}$.
\end{remark}
We introduce a general scheme which allows us to:
\begin{enumerate}
\item Establish the intersection of the manifolds parameterized by $P_{1}$
and $P_{2}$ by means of a suitable Newton operator.
\item Establish that the intersection is transverse relative to the level
set $M$.
\item Provide a setup flexible enough for multiple shooting between charts in different coordinates.
\end{enumerate}
Our methodology is applied to establish connections between stable/unstable
and collision manifolds in the PCRTBP.
\subsection{Level set shooting}
We now provide a more detailed formulation of problem (\ref{eq:problem-1})
which allows us to describe connections between multiple level sets
in distinct coordinate systems (instead of a single
coordinate system, as in the setting of the level set \eqref{eq:M-level-set}).
This allows us to study applications to collision dynamics
as boundary value problems joining points in different coordinate systems.
Let $U_1, U_2 \subset \mathbb{R}^{d}$
be open sets and consider smooth
functions $E_{1},E_{2}$%
\begin{equation*}
E_{i}:U_{i}\rightarrow\mathbb{R}^{k}\qquad\text{for }i=1,2,
\end{equation*}
for which $DE_{i}\left( x\right) $ is of rank $k$ for every $x\in U_{i}$,
for $i=1,2.$ We fix $c_{1},c_{2}\in\mathbb{R}^{k}$ and define the
level sets
\begin{equation*}
M_{i}=\left\{ x\in U_{i}:E_{i}\left( x\right) =c_{i}\right\} \qquad\text{for
}i=1,2,
\end{equation*}
and assume that $M_{i}\neq\emptyset$ for $i=1,2$. Observe that the
$M_i$ are smooth $d-k$ dimensional
manifolds by the assumption that $DE_{i}$ are of rank $k$, for $i=1,2$.
Consider now a smooth function
$R:U_{1}\times\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}^{d}$.
We introduce the following notation for coordinates%
\begin{equation*}
\left( x,\tau,\alpha\right) \in\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}^{k},\qquad y\in\mathbb{R}^{d},
\end{equation*}
and define a parameter dependent family of maps $R_{\tau
,\alpha}:U_{1}\rightarrow\mathbb{R}^{d}$ by%
\begin{equation*}
R_{\tau,\alpha}\left( x\right) :=R\left( x,\tau,\alpha\right) ,
\end{equation*}
and assume that for each $(x, \tau, \alpha) \in U_{1}\times\mathbb{R}\times\mathbb{R}^{k}$, the
$d \times d$ matrix
\begin{equation*}
\frac{\partial}{\partial x} R(x, \tau, \alpha),
\end{equation*}
is invertible, so that $R_{\tau, \alpha}$ is a local diffeomorphism
on its domain $U_{1}$.
The following definition makes precise our assumptions about when
$R_{\tau, \alpha}(x)$ takes values in $M_2$.
\begin{definition}
\label{def:unfolding}We say that $\alpha $ is an unfolding parameter for $R$
if the following two conditions are satisfied for every $x \in M_1$ and every $\tau \in \mathbb{R}$.
\begin{enumerate}
\item If $R_{\tau, \alpha}(x) \in M_2$, then $\alpha = 0$.
\item If $R_{\tau, 0}(x) \in U_2$, then $R_{\tau, 0}(x) \in M_2$.
\end{enumerate}
\end{definition}
\medskip
To emphasize that we are interested in points mapped
from $M_{1}$ to $M_{2}$, we say that $\alpha$ is an unfolding
parameter for $R$\emph{\ from }$M_{1}$\emph{\ to }$M_{2}$.
Assume from now on that $\alpha$ is an unfolding parameter for $R$. We
consider two open sets $D_{1}\subset \mathbb{R}^{d_{1}}$ and $D_{2}\subset%
\mathbb{R}^{d_{2}}$ where $d_{1},d_{2}\in\mathbb{N}$ and two smooth
functions
\begin{equation*}
P_{i}:D_{i}\rightarrow M_{i},\qquad\text{for }i=1,2,
\end{equation*}
each of which is a diffeomorphism onto its image. Define%
\begin{equation*}
F: D_{1}\times D_{2}\times\mathbb{R}\times%
\mathbb{R}^{k}\rightarrow\mathbb{\mathbb{R}}^{d}
\end{equation*}
by the formula
\begin{equation}
F\left( x_{1},x_{2},\tau,\alpha\right) :=R_{\tau,\alpha}\left( P_{1}\left(
x_{1}\right) \right) -P_{2}\left( x_{2}\right) . \label{eq:F-def-1-shooting}
\end{equation}
We require that
\begin{equation}
d_{1}+d_{2}+1+k=d, \label{eq:dimensions}
\end{equation}
and seek $\bar{x}_{1},\bar{x}_{2},\bar{\tau}$ such that%
\begin{equation}
\label{eq:zero}
F\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau}, 0\right) = R_{\bar{\tau }%
, 0}\left( P_{1}\left( \bar{x}_{1}\right) \right) -P_{2}\left( \bar{x}
_{2}\right) =0,
\end{equation}
with $DF\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},0\right)$ an isomorphism.
In fact, we do more than simply solve (\ref{eq:zero}). For some open interval $I \subset\mathbb{R}$ containing $\bar{\tau}$ we establish a transverse
intersection between the smooth manifolds $R\left( P_{1}\left( D_{1}\right)
,I,0\right) $ and $P_{2}\left( D_{2}\right) $ at $\bar{y} := P_{2}\left( \bar{x}%
_{2}\right) \in M_{2}$.
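In practice, one first locates an approximate zero of $F$ satisfying \eqref{eq:zero} with a standard (non-rigorous) Newton iteration before any validation is attempted. The following minimal sketch is our illustration, not the authors' code: it implements a Newton iteration with a forward-difference Jacobian, and the toy map \texttt{G} merely stands in for $F$; in a computer assisted proof this step would be replaced by rigorous interval-arithmetic arguments.

```python
def fd_jacobian(F, x, h=1e-7):
    """Forward-difference Jacobian of F: R^d -> R^d at x."""
    d, Fx = len(x), F(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp = list(x)
        xp[j] += h
        Fp = F(xp)
        for i in range(d):
            J[i][j] = (Fp[i] - Fx[i]) / h
    return J

def solve(A, b):
    """Gaussian elimination with partial pivoting for A u = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (M[r][n] - sum(M[r][k] * u[k] for k in range(r + 1, n))) / M[r][r]
    return u

def newton(F, x, tol=1e-12, maxit=50):
    """Newton iteration: x <- x - DF(x)^{-1} F(x) until the residual is small."""
    for _ in range(maxit):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        du = solve(fd_jacobian(F, x), Fx)
        x = [xi - di for xi, di in zip(x, du)]
    return x

# Toy stand-in for F: zero at the transverse intersection of a circle and a line.
G = lambda u: [u[0] ** 2 + u[1] ** 2 - 1.0, u[0] - u[1]]
root = newton(G, [1.0, 0.5])
```

Once such an approximate zero is found, the isomorphism condition on $DF$ can be checked at the numerical root as a first (non-rigorous) indication that the hypotheses of the theorems below are satisfied.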
\medskip
The setup above, and in particular the roles of the
parameters $\alpha$ and $\tau$, might appear puzzling. We now
give an example which informs the intuition. In
the applications we have in mind, $\tau$ is the time associated with the flow map
of an ODE. The unfolding parameter $\alpha$ deals with the fact that we
solve a problem restricted to the level sets $M_i$
for $i=1,2$, though there are other practical methods to enforce this constraint.\footnote{%
Alternatives are to either fix the energy and use its formula to eliminate
one of the variables in the equations of motion, or to
work with coordinates in which we can write the $M_i$
as graphs of some functions and use these functions and
appropriate projections to enforce the constraints. We believe that the
approach with the unfolding parameter has the advantage that it
simplifies formulas and is easier to implement.}
Consider the following example.
\begin{example}
\label{ex:motivating}(Canonical unfolding.) Consider the ODE in Equation \eqref{eq:ode-1}%
and $E:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfying Equation \eqref{eq:E-integral}. Suppose $c\in\mathbb{R}$ is fixed
and denote its associated level set by $M := \left\{ E=c\right\}$. (In this example we have $k=1$ and $E_{1}=E_{2}=E$.)
Assume there are smooth functions $P_{1},P_{2}$ as in \eqref{eq:Pi-intro}
and that $d_{1}+d_{2}+2=d$.
We construct a shooting operator for Equation \eqref{eq:problem-1} by choosing $R$ as follows.
Consider the $\alpha$-parameterized family of ODEs
\begin{equation*}
x^{\prime}=f(x)+\alpha\nabla E\left( x\right).
\end{equation*}
Let $\phi_{\alpha}\left( x,t\right)$ denote the induced flow and note that
$\phi_{0}=\phi$ is the flow induced by Equation \eqref{eq:ode-1}. Defining the shooting
operator by the formula
\begin{equation}
R\left( x,\tau,\alpha\right) :=\phi_{\alpha}\left( x,\tau\right),
\label{eq:R-alpha}
\end{equation}
we see that solving Equation \eqref{eq:problem-1}
is equivalent to solving Equation \eqref{eq:zero}.
Observe that $\alpha$ is unfolding for $R$ because $E$ is an
integral of motion for $\phi$ from which it follows that
\begin{align*}
\frac{d}{dt}E\left( R_{\tau,\alpha}\left( x\right) \right) & = \frac{d}{dt}
E(\phi_{\alpha}(x,t)) \\
& =\nabla E\left( \phi_{\alpha}\left( x,t\right) \right) \cdot\left( f(\phi
_{\alpha}\left( x,t\right) )+\alpha\nabla E\left( \phi_{\alpha}\left(
x,t\right) \right) \right) \\
& =\alpha\left\Vert \nabla E\left( \phi_{\alpha}\left( x,t\right) \right)
\right\Vert ^{2},
\end{align*}
where $\cdot$ denotes the standard scalar product. Here we have used the fact that
Equation \eqref{eq:E-integral} implies $\nabla E\left( x\right) \cdot f(x)=0$. Note also that $\nabla E(\phi_\alpha(x,t)) \neq 0$, since $DE$ is assumed to have rank $1$ everywhere.
\end{example}
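The identity derived in Example \ref{ex:motivating} is easy to check numerically. The sketch below is our illustration, not code from the paper: it uses the toy planar rotation field $f(x)=(-x_{2},x_{1})$ with integral $E(x)=x_{1}^{2}+x_{2}^{2}$ and verifies that, along the canonically unfolded field $f+\alpha\nabla E$, the derivative of $E$ equals $\alpha\Vert\nabla E\Vert^{2}$.

```python
def f(x):                      # toy vector field with integral E
    return [-x[1], x[0]]

def grad_E(x):                 # gradient of E(x) = x1^2 + x2^2
    return [2.0 * x[0], 2.0 * x[1]]

def f_alpha(x, alpha):         # canonically unfolded field: f + alpha * grad E
    fx, gx = f(x), grad_E(x)
    return [fx[i] + alpha * gx[i] for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, alpha = [0.3, -1.2], 0.05
# dE/dt along f_alpha is grad E . f_alpha; since grad E . f = 0
# (E is an integral of motion for f), this equals alpha * ||grad E||^2.
dE_dt = dot(grad_E(x), f_alpha(x, alpha))
predicted = alpha * dot(grad_E(x), grad_E(x))
```

In particular $dE/dt$ vanishes only when $\alpha=0$, which is exactly the unfolding property of Definition \ref{def:unfolding} in this toy setting.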
Returning to the general setup we have the following theorem.
\begin{theorem}
\label{th:single-shooting}Assume that $\alpha$ is an unfolding parameter for
$R$ and $F$ is defined as in Equation \eqref {eq:F-def-1-shooting}. If
\begin{equation}
F\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},\bar{\alpha}\right) =0,
\label{eq:F-zero-in-thm1}
\end{equation}
then $\bar{\alpha}=0$. Moreover, if $DF\left( \bar{x}_{1},\bar{x}_{2},\bar{%
\tau},0\right) $ is an isomorphism, then there exists an open interval $I\subset%
\mathbb{R}$ containing $\bar{\tau}$ such that the manifolds $R\left( P_{1}\left( D_{1}\right)
,I,0\right) $ and $P_{2}\left( D_{2}\right) $ intersect transversally in $%
M_{2}$ at $\bar{y}:=P_{2}\left( \bar{x}_{2}\right) $. Specifically, we have the splitting
\begin{equation}
T_{\bar{y}}R\left( P_{1}\left( D_{1}\right) ,I,0\right) \oplus T_{\bar{y}%
}P_{2}\left( D_{2}\right) =T_{\bar{y}}M_{2},
\label{eq:th1-transversality}
\end{equation}
and moreover, $\bar{y}$ is an isolated transverse point.
\end{theorem}
\begin{proof}
Recalling the definition of $F$ in Equation \eqref{eq:F-def-1-shooting}
and the hypothesis of Equation \eqref{eq:F-zero-in-thm1},
we have that $\bar{x}=P_{1}\left(
\bar {x}_{1}\right) \in M_{1}$ and $\bar{y}=P_{2}\left( \bar{x}_{2}\right)
\in M_{2}$. The fact that $\alpha$ is an unfolding parameter for $R$,
combined with $R\left( \bar{x},\bar{\tau},\bar{\alpha}\right) =\bar{y}$,
implies that $\bar{\alpha}=0$.
Since $F(\bar x_1,\bar x_2, \bar \tau, 0)=0$, we see that $R(P_1(D_1),I,0)$
and $P_2(D_2)$ intersect at $\bar y$.%
Our hypotheses on $P_{1,2}$ and $R$ imply that
$R\left( P_{1}\left( D_{1}\right) ,I,0\right) $ and $P_{2}\left(
D_{2}\right) $ are submanifolds
of $M_{2}$, so evidently
\begin{equation*}
T_{\bar{y}}R\left( P_{1}\left( D_{1}\right) ,I,0\right) + T_{\bar{y}}P_{2}\left( D_{2}\right) \subset T_{\bar{y}}M_{2}.
\end{equation*}
However, from the assumption in Equation \eqref{eq:dimensions} we have $%
d-k=d_{1}+d_{2}+1$ and therefore it suffices to prove that $T_{\bar{y}}R\left( P_{1}\left(
D_{1}\right) ,I,0\right) \oplus T_{\bar{y}}P_{2}\left( D_{2}\right) $ is $d-k$ dimensional.
Suppose $\setof*{e_{1},\ldots ,e_{d_{1}}}$ is a basis for $\mathbb{R}^{d_{1}}$ and $\setof*{\tilde{e}_{1},\ldots ,\tilde{e}%
_{d_{2}}}$ is a basis for $\mathbb{R}^{d_{2}}$. Define%
\begin{align*}
v_{i}& :=\frac{\partial R}{\partial x_{1}}\left( \bar{x}_{1},\bar{\tau}%
,0\right) DP_{1}\left( \bar{x}_{1}\right) e_{i}\qquad \text{for }i=1,\ldots
,d_{1} \\
v_{i}& :=DP_{2}\left( \bar{x}_{2}\right) \tilde{e}_{i-d_{1}}\qquad \text{for
}i=d_{1}+1,\ldots ,d_{1}+d_{2} \\
v_{d_{1}+d_{2}+1}& :=\frac{\partial R}{\partial \tau }\left( \bar{x}_{1},%
\bar{\tau},0\right) .
\end{align*}%
After differentiating Equation \eqref{eq:F-def-1-shooting} we obtain the formula
\begin{equation*}
DF=\left(
\begin{array}{cccc}
\frac{\partial F}{\partial x_{1}} & \frac{\partial F}{\partial x_{2}} &
\frac{\partial F}{\partial \tau } & \frac{\partial F}{\partial \alpha }%
\end{array}%
\right) =\left(
\begin{array}{cccc}
\frac{\partial R}{\partial x_{1}}DP_{1} & -DP_{2} & \frac{\partial R}{%
\partial \tau } & \frac{\partial R}{\partial \alpha }%
\end{array}%
\right),
\end{equation*}%
and since $DF$ is an isomorphism at $\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},0\right) $, it follows that the vectors $v_{1},\ldots ,v_{d_{1}+d_{2}+1}$ span a $d_{1}+d_{2}+1=d-k$ dimensional space.
Observe that
\begin{align*}
T_{\bar{y}}R\left( P_{1}(D_{1}) , I, 0\right) & =\text{span}\left(
v_{1},\ldots,v_{d_{1}},v_{d_{1}+d_{2}+1}\right) , \\
T_{\bar{y}} P_{2} \left(D_2 \right) & =\text{span}\left(
v_{d_{1}+1},\ldots,v_{d_{1}+d_{2}}\right),
\end{align*}
proving the claim in Equation \eqref{eq:th1-transversality}.
Moreover, since
\begin{equation*}
\dim R\left( P_{1}\left( D_{1}\right) ,I,0\right) +\dim P_{2}\left(
D_{2}\right) =\left( d_{1}+1\right) +d_{2}=d-k=\dim M_{2},
\end{equation*}
it follows that $\bar{y}$ is an isolated transverse intersection point which concludes
the proof.
\end{proof}
We finish this section by defining an especially simple ``dissipative'' unfolding
parameter which works in the setting of the PCRTBP.
\begin{example}
\label{ex:dissipative-unfolding}(Dissipative unfolding.) Let $x,y\in \mathbb{%
R}^{2k}$, let $\Omega:\mathbb{R}^{2k}\rightarrow\mathbb{R}$ and $J\in\mathbb{%
R}^{2k\times2k}$ be of the form%
\begin{equation*}
J=\left(
\begin{array}{cc}
0 & \operatorname{Id}_{k} \\
-\operatorname{Id}_{k} & 0%
\end{array}
\right) ,
\end{equation*}
where $\operatorname{Id}_{k}$ is a $k\times k$ identity matrix. Let us consider an ODE of
the form
\begin{equation*}
\left(x', y'\right) = f\left( x,y\right) :=\left( y,2Jy+\frac {\partial%
}{\partial x}\Omega\left( x\right) \right) .
\end{equation*}
One can check that $E\left( x,y\right) =-\left\Vert y\right\Vert ^{2}+2\Omega\left(
x\right) $ is an integral of motion. Consider the parameterized family of ODEs%
\begin{equation}
\left(x',y'\right) = f_{\alpha}\left( x,y\right) :=f\left( x,y\right)
+\left( 0,\alpha y\right) , \label{eq:dissipative-vect-alpha}
\end{equation}
and let $\phi_{\alpha}\left( \left( x,y\right) ,t\right) $ denote the flow
induced by Equation \eqref{eq:dissipative-vect-alpha}. Define
the shooting operator by
\begin{equation}
R\left( \left( x,y\right) ,\tau,\alpha\right) :=\phi_{\alpha}\left( \left(
x,y\right) ,\tau\right). \label{eq:R-alpha-dissipative}
\end{equation}
As in Example \ref{ex:motivating}, one can check the
equivalence between Equations \eqref{eq:problem-1}
and \eqref{eq:zero}. The fact that $\alpha$ is unfolding for $R$ follows as%
\begin{equation*}
\frac{d}{dt}E\left( \phi_{\alpha}\left( \left( x,y\right) ,t\right) \right)
=-2\alpha\left\Vert y\right\Vert ^{2}.
\end{equation*}
\end{example}
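The dissipation identity above can also be verified numerically. The sketch below is our illustration, with $k=1$ (so $x,y\in\mathbb{R}^{2}$) and the toy potential $\Omega(x)=\Vert x\Vert^{2}/2$, neither of which comes from the paper: it evaluates $\frac{d}{dt}E$ along the dissipatively unfolded field $f_{\alpha}$ and compares it with $-2\alpha\Vert y\Vert^{2}$.

```python
def J_apply(y):                 # J = [[0, 1], [-1, 0]] acting on y in R^2 (k = 1)
    return [y[1], -y[0]]

def grad_Omega(x):              # toy potential Omega(x) = ||x||^2 / 2
    return [x[0], x[1]]

def f_alpha(x, y, alpha):
    # dissipatively unfolded field: (x', y') = (y, 2 J y + grad Omega(x) + alpha y)
    Jy, gO = J_apply(y), grad_Omega(x)
    return y, [2.0 * Jy[i] + gO[i] + alpha * y[i] for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y, alpha = [0.4, -0.7], [1.1, 0.6], 0.03
xdot, ydot = f_alpha(x, y, alpha)
# E(x, y) = -||y||^2 + 2 Omega(x), so along the flow:
dE_dt = -2.0 * dot(y, ydot) + 2.0 * dot(grad_Omega(x), xdot)
predicted = -2.0 * alpha * dot(y, y)
```

The agreement rests on the cancellation $y\cdot Jy=0$, which holds for any antisymmetric $J$; this is why the same computation works for the PCRTBP below.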
\subsection{Level set multiple shooting\label{sec:multiple-shooting}}
Consider a sequence of open sets $U_{1},\ldots,U_{n}\subset\mathbb{R}^{d}$
and a sequence of smooth maps
\begin{equation*}
E_{i}:U_{i}\rightarrow\mathbb{R}^{k}\qquad\text{for }i=1,\ldots,n
\end{equation*}
for which $DE_{i}\left( x\right) $ is of rank $k$ for every $x\in U_{i}$,
for $i=1,\ldots,n$. Let $c_{1},\ldots,c_{n}\in\mathbb{R}^{k}$ be a fixed
sequence with corresponding level sets
\begin{equation*}
M_{i}:=\left\{ x\in U_{i}:E_{i}\left( x\right) =c_{i}\right\} \qquad\text{%
for }i=1,\ldots,n.
\end{equation*}
Let%
\begin{equation*}
R^{i}:U_{i}\times\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}%
^{d}\qquad\text{for }i=1,\ldots,n-1
\end{equation*}
be a sequence of smooth functions which defines a sequence of parameter
dependent maps
\begin{align*}
R_{\tau,\alpha}^{i} & :U_{i}\rightarrow\mathbb{R}^{d}, \\
R_{\tau,\alpha}^{i}\left( x\right) & :=R^{i}\left( x,\tau,\alpha\right)
,\qquad\text{for }i=1,\ldots,n-1.
\end{align*}
We assume that for each fixed $\tau$ and $\alpha$, each of the maps $R_{\tau,\alpha}^{i}$ is a
local diffeomorphism on its domain.
Let $D_{0} \subset \mathbb{R}^{d_{0}}$ and $D_{n} \subset \mathbb{R}^{d_{n}}$ be
open sets, and let
\begin{equation*}
P_{0}:D_{0}\rightarrow M_{1}\subset \mathbb{R}^{d},\qquad\qquad
P_{n}:D_{n}\rightarrow M_{n}\subset \mathbb{R}^{d},
\end{equation*}
be diffeomorphisms onto their image. Assume that%
\begin{equation}
d_{0}+d_{n}+1+k=d \label{eq:dimensions-multiple-shooting}
\end{equation}%
and consider the function%
\begin{equation*}
\tilde{F}:\mathbb{R}^{nd}\supset D_{0}\times \underset{n-1}{\underbrace{%
\mathbb{R}^{d}\times \ldots \times \mathbb{R}^{d}}}\times D_{n}\times
\mathbb{R}\times \mathbb{R}^{k}\rightarrow \underset{n}{\underbrace{\mathbb{R%
}^{d}\times \ldots \times \mathbb{R}^{d}}},
\end{equation*}%
defined by the formula
\begin{equation}
\tilde{F}\left( x_{0},\ldots ,x_{n},\tau ,\alpha \right) =\left(
\begin{array}{r@{\,\,\,\,}l}
P_{0}\left( x_{0}\right) & -\,\,\,x_{1} \\
R_{\tau ,\alpha }^{1}\left( x_{1}\right) & -\,\,\,x_{2} \\
& \, \vdots \\
R_{\tau ,\alpha }^{n-2}\left( x_{n-2}\right) & -\,\,\,x_{n-1} \\
R_{\tau ,\alpha }^{n-1}\left( x_{n-1}\right) & -\,\,\,P_{n}\left(
x_{n}\right)
\end{array}%
\right) . \label{eq:multi-prob}
\end{equation}
We now define the following functions
\begin{align*}
R& :U_{1}\times \mathbb{R}\times \mathbb{R}%
^{k}\rightarrow \mathbb{R}^{d}, \\
F& : D_{0}\times
D_{n}\times \mathbb{R}\times \mathbb{R}^{k}\rightarrow \mathbb{R}^{d}
\end{align*}%
by the formulas
\begin{align}
R\left( x_{1},\tau ,\alpha \right) & =R_{\tau ,\alpha }\left( x_{1}\right)
:=R_{\tau ,\alpha }^{n-1}\circ \ldots \circ R_{\tau ,\alpha }^{1}\left(
x_{1}\right) , \notag \\
F\left( x_{0},x_{n},\tau ,\alpha \right) & :=R_{\tau ,\alpha }\left(
P_{0}\left( x_{0}\right) \right) -P_{n}\left( x_{n}\right) .
\label{eq:F-parallel}
\end{align}
\begin{definition}
We say that $\alpha $ is an unfolding parameter for the sequence $R_{\tau
,\alpha }^{i}$ if it is unfolding for $R_{\tau ,\alpha }=R_{\tau ,\alpha
}^{n-1}\circ \ldots \circ R_{\tau ,\alpha }^{1}.$
\end{definition}
We now formulate the following lemma.
\begin{lemma}
\label{lem:multiple-shooting-2}If $\tilde{F}\left( \bar{x}_{0},\ldots,\bar {x%
}_{n},\bar{\tau},\bar{\alpha}\right) =0$ and $D\tilde{F}\left( \bar{x}%
_{0},\ldots,\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) $ is an isomorphism,
then $F\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) =0$ and
$DF\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) $ is an
isomorphism.
\end{lemma}
\begin{proof}
The fact that $F\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha }%
\right) =0$ follows directly from the way $\tilde{F}$ and $F$ are defined in Equations
\eqref{eq:multi-prob} and \eqref{eq:F-parallel} respectively. Before
proving that $DF$ is an isomorphism, we set up some notation. We will write%
\begin{equation*}
dR^{i}:=\frac{\partial R^{i}}{\partial x_{i}}\left( \bar{x}_{i},\bar{\tau },%
\bar{\alpha}\right)\qquad\text{for }i=1,\ldots,n-1.
\end{equation*}
It will be convenient for us to swap the order of the coordinates, so we
define%
\begin{equation}
\hat{F}\left( x_{1},\ldots, x_{n},x_{0},\tau,\alpha\right) :=\tilde{F}\left(
x_{0},x_{1},\ldots, x_{n},\tau,\alpha\right), \label{eq:F-reordered}
\end{equation}
and write%
\begin{equation*}
\hat{F}=\left( \hat{F}_{1},\ldots,\hat{F}_{n}\right) \qquad\text{where\qquad
}\hat{F}_{i}:\mathbb{R}^{nd}\rightarrow\mathbb{R}^{d},\text{ for }%
i=1,\ldots, n.
\end{equation*}
The last piece of notation we introduce is $z\in\mathbb{R}^{d}$, which gathers
together the coordinates from the domain of $F$:
\begin{equation*}
z=\left( z_{1},\ldots,z_{d}\right) =\left( x_{n},x_{0},\tau,\alpha\right) \in%
\mathbb{R}^{d_{n}}\times\mathbb{R}^{d_{0}}\times\mathbb{R}\times \mathbb{R}%
^{k}=\mathbb{R}^{d}.
\end{equation*}
Note that $z$ is also the variable corresponding to the last $d$ coordinates from
the domain of $\hat F$ (see Equation \eqref{eq:F-reordered}). Finally, we remark that all derivatives considered in the argument below are computed at the point $(\bar{x}
_{0},\ldots,\bar{x}_{n},\bar{\tau},\bar{\alpha})$.
With the above notation we see that%
\begin{equation*}
D\hat{F}=\left(
\begin{array}{ccccc}
-\operatorname{Id} & 0 & \cdots & 0 & \frac{\partial\hat{F}_{1}}{\partial z} \\
dR^{1} & -\operatorname{Id} & \ddots & \vdots & \frac{\partial\hat{F}_{2}}{\partial z} \\
0 & \ddots & \ddots & 0 & \vdots \\
\vdots & \ddots & dR^{n-2} & -\operatorname{Id} & \frac{\partial\hat{F}_{n-1}}{\partial z}
\\
0 & \cdots & 0 & dR^{n-1} & \frac{\partial\hat{F}_{n}}{\partial z}%
\end{array}
\right),
\end{equation*}
and $D\hat{F}$ is an isomorphism since $D\tilde{F}$ is an isomorphism. To see that $DF$ is also an isomorphism, define a sequence of vectors $v^{1},\ldots,v^{d}\in\mathbb{R}%
^{nd}$ of the form%
\begin{equation*}
v^{i}=\left(
\begin{array}{c}
v_{1}^{i} \\
\vdots \\
v_{n}^{i}%
\end{array}
\right) \in\mathbb{R}^{d}\times\ldots\times\mathbb{R}^{d}=\mathbb{R}%
^{nd}\qquad\text{for }i=1,\ldots,d,
\end{equation*}
with $v_{1}^{i}$,$v_{n}^{i}\in\mathbb{R}^{d}$ chosen as
\begin{equation}
v_{1}^{i}=\frac{\partial\hat{F}_{1}}{\partial z_{i}},\qquad\qquad
v_{n}^{i}=\left(
\begin{array}{ccccccc}
0 & \cdots & 0 & \overset{i}{1} & 0 & \cdots & 0%
\end{array}
\right) ^{\top}, \label{eq:vin-choice}
\end{equation}
and $v_{2}^{i},\ldots,v_{n-1}^{i}\in\mathbb{R}^{d}$ defined inductively as%
\begin{equation}
v_{k}^{i}=dR^{k-1}v_{k-1}^{i}+\frac{\partial\hat{F}_{k}}{\partial z_{i}}\quad%
\text{for }k=2,\ldots,n-1. \label{eq:v-ik}
\end{equation}
Note that from the choice of $v_{n}^{i}$ in (\ref{eq:vin-choice}) the
vectors $v^{1},\ldots,v^{d}$ are linearly independent.
By direct computation\footnote{%
From (\ref{eq:multi-prob}) and (\ref{eq:v-ik}) follow the cancellations when
multiplying the vector $v^i$ by $D\hat F$.} it follows that%
\begin{equation}
D\hat{F}v^{i}=\left(
\begin{array}{c}
0 \\
dR^{n-1}v_{n-1}^{i}+\frac{\partial\hat{F}_{n}}{\partial z_{i}}%
\end{array}
\right) \qquad\text{for }i=1,\ldots,d, \label{eq:proof-shooting-0}
\end{equation}
where the zero is in $\mathbb{R}^{\left( n-1\right) d}$.
Looking at (\ref{eq:multi-prob}), since $\hat{F}_{1},\ldots ,\hat{F}_{n-1}$
do not depend on $x_{n}$, we see that for $i\in \left\{ 1,\ldots
,d_{n}\right\} $ we have $\frac{\partial \hat{F}_{1}}{\partial z_{i}}=\ldots
=\frac{\partial \hat{F}_{n-1}}{\partial z_{i}}=0,$ so
\begin{eqnarray}
dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}
&=&dR^{n-1}\left( dR^{n-2}v_{n-2}^{i}+\frac{\partial \hat{F}_{n-1}}{\partial
z_{i}}\right) -\frac{\partial P_{n}}{\partial x_{n,i}}
\label{eq:proof-shooting-1} \\
&=&dR^{n-1}\left( dR^{n-2}v_{n-2}^{i}+0\right) -\frac{\partial P_{n}}{%
\partial x_{n,i}} \notag \\
&=&\cdots \notag \\
&=&dR^{n-1}\ldots dR^{1}v_{1}^{i}-\frac{\partial P_{n}}{\partial x_{n,i}}
\notag \\
&=&dR^{n-1}\ldots dR^{1}\frac{\partial \hat{F}_{1}}{\partial z_{i}}-\frac{%
\partial P_{n}}{\partial x_{n,i}} \notag \\
&=&-\frac{\partial P_{n}}{\partial x_{n,i}}\qquad \text{for }i=1,\ldots
,d_{n}. \notag
\end{eqnarray}
Similarly, for $j=i-d_{n}\in \left\{ 1,\ldots ,d_{0}\right\} $ from (\ref%
{eq:multi-prob}) we see that $\frac{\partial \hat{F}_{1}}{\partial z_{i}}=%
\frac{\partial P_{0}}{\partial x_{0,j}}$ and $\frac{\partial \hat{F}_{2}}{%
\partial z_{i}}=\ldots =\frac{\partial \hat{F}_{n}}{\partial z_{i}}=0$, so
\begin{align}
dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}&
=dR^{n-1}dR^{n-2}\ldots dR^{1}\frac{\partial P_{0}}{\partial x_{0,j}}=\frac{%
\partial \left( R_{\bar{\tau},\bar{\alpha}}\circ P_{0}\right) }{\partial
x_{0,j}} \label{eq:proof-shooting-2} \\
& \qquad \qquad \qquad \qquad \qquad \left. \text{for }i=d_{n}+1,\ldots
,d_{n}+d_{0}.\right. \notag
\end{align}
The index $i=d_{n}+d_{0}+1$ corresponds to $\tau $. Similarly to (\ref%
{eq:proof-shooting-1}), by inductively applying the chain rule, it follows
that
\begin{equation}
dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}=\frac{%
\partial R}{\partial \tau }\qquad \text{for }i=d_{n}+d_{0}+1.
\label{eq:proof-shooting-3}
\end{equation}
Finally, for $j=i-d_{n}-d_{0}-1\in \left\{ 1,\ldots ,k\right\} $, the variable $z_i$ corresponds to $\alpha_j$, and also by
applying the chain rule we obtain that%
\begin{equation}
dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}=\frac{%
\partial R}{\partial \alpha _{j}}\qquad \text{for }i=d_{n}+d_{0}+2,\ldots ,d.
\label{eq:proof-shooting-4}
\end{equation}
Combining Equations \eqref{eq:proof-shooting-0}--\eqref{eq:proof-shooting-4} we see that%
\begin{equation}
\left(
\begin{array}{ccc}
D\hat{F}v^{1} & \cdots & D\hat{F}v^{d}%
\end{array}%
\right) =\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
-\frac{\partial P_{n}}{\partial x_{n}} & \frac{\partial \left( R_{\bar{\tau},%
\bar{\alpha}}\circ P_{0}\right) }{\partial x_{0}} & \frac{\partial R}{%
\partial \tau } & \frac{\partial R}{\partial \alpha }%
\end{array}%
\right) . \label{eq:DF-multi-final}
\end{equation}%
Since $v^{1},\ldots ,v^{d}$ are linearly independent and since $D\hat{F}$ is
an isomorphism, the rank of the above matrix is $d$. Looking at Equation \eqref{eq:multi-prob} we see that the lower part of the matrix in Equation \eqref{eq:DF-multi-final} corresponds to $DF$ which implies that $DF$ is of rank $d$, hence is an isomorphism.
\end{proof}
We see that we can validate the assumptions of Theorem \ref{th:single-shooting}
by setting up a multiple shooting problem (\ref{eq:multi-prob}) and applying
Lemma \ref{lem:multiple-shooting-2}. To do so, one needs to additionally
check whether $\alpha $ is an unfolding parameter for the sequence $R_{\tau
,\alpha }^{i}$.
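To make the bookkeeping in \eqref{eq:multi-prob} concrete, the sketch below assembles $\tilde F$ for $n=3$ with simple affine maps standing in for the charts and the maps $R^{i}_{\tau,\alpha}$. This is purely our illustration: the level-set structure and the dimension count \eqref{eq:dimensions-multiple-shooting} are ignored, and the point is only that a consistent chain of intermediate points zeroes every row of $\tilde F$ and simultaneously gives a zero of the composed map $F$ from \eqref{eq:F-parallel}.

```python
# n = 3 segments in R^2; toy stand-ins for the boundary charts and the maps R^i.
def P0(x0):
    return [x0[0], 0.0]

def Pn(xn):
    return [1.0, xn[0]]

def R1(x, tau, alpha):
    return [x[0] + tau, x[1] + alpha]

def R2(x, tau, alpha):
    return [2.0 * x[0] - 1.0, x[1]]

def F_tilde(x0, x1, x2, xn, tau, alpha):
    """Multiple shooting map: rows P0(x0)-x1, R1(x1)-x2, R2(x2)-Pn(xn)."""
    rows = [P0(x0), R1(x1, tau, alpha), R2(x2, tau, alpha)]
    targets = [x1, x2, Pn(xn)]
    return [r[i] - t[i] for r, t in zip(rows, targets) for i in range(2)]

def F(x0, xn, tau, alpha):
    """Composed single shooting map: (R2 o R1)(P0(x0)) - Pn(xn)."""
    y = R2(R1(P0(x0), tau, alpha), tau, alpha)
    z = Pn(xn)
    return [y[i] - z[i] for i in range(2)]

# A consistent chain of intermediate points zeroes every row of F tilde ...
x0, xn, tau, alpha = [0.0], [0.0], 1.0, 0.0
x1 = P0(x0)
x2 = R1(x1, tau, alpha)
residual_tilde = F_tilde(x0, x1, x2, xn, tau, alpha)
# ... and the same boundary data is a zero of the composed map F.
residual = F(x0, xn, tau, alpha)
```

In an actual application the intermediate points $x_{1},\ldots,x_{n-1}$ are unknowns of the Newton iteration rather than being computed by forward composition; this is what keeps the Jacobian well conditioned for long orbits.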
\section{Regularization of collisions in the PCRTBP} \label{sec:PCRTBP}
\label{sec:CAPS} In this section we formally introduce the equations
of motion for the PCRTBP as discussed in Section \ref{sec:intro}.
Recall that the problem describes a three body system, where two massive
primaries are on circular orbits about their center of mass, and
a third massless particle moves in their field.
The equations of motion for the massless particle
are expressed in a co-rotating
frame with the frequency of the primaries.
Writing Newton's laws in the co-rotating frame leads to
\begin{align}
x^{\prime \prime }& =2y^{\prime }+\partial _{x}\Omega (x,y),
\label{eq:NewtonPCRTBP} \\
y^{\prime \prime }& =-2x^{\prime }+\partial _{y}\Omega (x,y), \notag
\end{align}%
where
\begin{equation*}
\Omega (x,y)=(1-\mu )\left( \frac{r_{1}^{2}}{2}+\frac{1}{r_{1}}\right) +\mu
\left( \frac{r_{2}^{2}}{2}+\frac{1}{r_{2}}\right) ,
\end{equation*}%
\begin{equation*}
r_{1}^2=(x-\mu )^{2}+y^{2},\quad \quad \text{and}\quad \quad
r_{2}^2=(x+1-\mu )^{2}+y^{2}.
\end{equation*}%
Here $(x,y)$ is the position of the massless particle in the plane, and $\mu$
and $1-\mu $ are the masses of the primaries (normalized so that the
total mass of the system is $1$).
The rotating frame is oriented so that the
primaries lie on the $x$-axis, with the center of mass at the origin.
We take $\mu \in (0, \frac{1}{2}]$ so that the large body
is always to the right of the origin.
The larger primary has mass $m_{1}=1-\mu $ and is located at the position
$(\mu ,0)$. Similarly the smaller primary has mass $m_{2}=\mu $
and is located at position $(\mu -1,0)$.
The top frame of Figure \ref{fig:PCRTBP_coordinates} provides a schematic
for the positioning of the primaries and the massless particle.
\begin{figure}[t]
\begin{center}
\includegraphics[height=7.5cm]{Fig-2.pdf}
\end{center}
\caption{Three coordinate frames for the PCRTBP: the center top image
depicts the classical PCRTBP in the rotating frame. The bottom left and
right frames depict the restricted three body problem in Levi-Civita
coordinates: regularization of collisions with $m_{2}$ on the left and with $%
m_{1}$ on the right. Observe that in these coordinates the regularized body
has been moved to the origin. The Levi-Civita transformations $T_{1}$ and $%
T_{2}$ provide double covers of the original system, so that in the
regularized frames there are singularities at the two copies of the
remaining body. }
\label{fig:PCRTBP_coordinates}
\end{figure}
Let $U\subset \mathbb{R}^{4}$ denote the open set
\begin{equation*}
U:=\left\{ (x,p,y,q)\in \mathbb{R}^{4}\,|\,\left( x,y\right) \not\in \{\left(
\mu ,0\right) ,\left( \mu -1,0\right) \}\right\} .
\end{equation*}%
The vector field $f\colon U\rightarrow \mathbb{R}^{4}$ defined by
\begin{equation}
f(x,p,y,q):=\left(
\begin{array}{c}
p \\
2q+x-\frac{(1-\mu )\left( x-\mu \right) }{((x-\mu )^{2}+y^{2})^{3/2}}-\frac{%
\mu \left( x+1-\mu \right) }{((x+1-\mu )^{2}+y^{2})^{3/2}} \\
q \\
-2p+y-\frac{(1-\mu )y}{((x-\mu )^{2}+y^{2})^{3/2}}-\frac{\mu y}{((x+1-\mu
)^{2}+y^{2})^{3/2}}%
\end{array}%
\right) \label{eq:PCRTBP}
\end{equation}%
is equivalent to the second order system given in \eqref{eq:NewtonPCRTBP}.
Note that
\begin{equation*}
\Vert f(x,p,y,q)\Vert \rightarrow \infty \quad \quad \text{as either}\quad
\quad (x,y)\rightarrow (\mu ,0)\quad \text{or}\quad (x,y)\rightarrow (\mu
-1,0).
\end{equation*}
Let $\mathbf{x}=(x,p,y,q)$ denote the coordinates in $U$ and
denote by $\phi (\mathbf{x},t)$ the flow generated by $f$ on $U$.
The system (\ref{eq:PCRTBP}) has an integral of motion $E\colon U\rightarrow
\mathbb{R}$ given by
\begin{equation}
E\left( \mathbf{x}\right) =-p^{2}-q^{2}+2\Omega (x,y),
\label{eq:JacobiIntegral}
\end{equation}%
which is referred to as the Jacobi integral.
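The conservation of $E$ along orbits of \eqref{eq:PCRTBP} is easy to check numerically. The sketch below is our illustration (the mass ratio and initial condition are arbitrary choices, with the initial point taken away from both primaries): it implements the vector field and the Jacobi integral, integrates with a classical fourth-order Runge--Kutta step, and checks that $E$ drifts only at the level of the truncation error.

```python
import math

mu = 0.25  # illustrative mass ratio, mu in (0, 1/2]

def f(s):
    """PCRTBP vector field of Equation (eq:PCRTBP), s = (x, p, y, q)."""
    x, p, y, q = s
    r1 = math.sqrt((x - mu) ** 2 + y ** 2)
    r2 = math.sqrt((x + 1.0 - mu) ** 2 + y ** 2)
    ax = x - (1.0 - mu) * (x - mu) / r1 ** 3 - mu * (x + 1.0 - mu) / r2 ** 3
    ay = y - (1.0 - mu) * y / r1 ** 3 - mu * y / r2 ** 3
    return [p, 2.0 * q + ax, q, -2.0 * p + ay]

def E(s):
    """Jacobi integral E = -p^2 - q^2 + 2 Omega(x, y)."""
    x, p, y, q = s
    r1 = math.sqrt((x - mu) ** 2 + y ** 2)
    r2 = math.sqrt((x + 1.0 - mu) ** 2 + y ** 2)
    Om = (1.0 - mu) * (r1 ** 2 / 2.0 + 1.0 / r1) + mu * (r2 ** 2 / 2.0 + 1.0 / r2)
    return -p ** 2 - q ** 2 + 2.0 * Om

def rk4_step(s, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = f(s)
    k2 = f([s[i] + 0.5 * h * k1[i] for i in range(4)])
    k3 = f([s[i] + 0.5 * h * k2[i] for i in range(4)])
    k4 = f([s[i] + h * k3[i] for i in range(4)])
    return [s[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]

s = [0.5, 0.0, 0.8, 0.1]       # initial condition away from both primaries
E0 = E(s)
for _ in range(1000):          # integrate to t = 1 with step h = 1e-3
    s = rk4_step(s, 1e-3)
drift = abs(E(s) - E0)
```

Of course, such a floating-point check is not a proof; in the computer assisted arguments the integration and the evaluation of $E$ are carried out with rigorous interval enclosures.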
We are interested in orbits with initial conditions $\mathbf{x} \in U$
with the property that their positions limit to either $m_1 :=(\mu, 0)$
or $m_2 := (\mu -1, 0)$ in finite time. Such orbits, which reach a
singularity of the vector field
$f$ in finite time, are called collisions.
It has long been known that if we restrict attention to a specific
level set $\left\{ E=c\right\}$ of the Jacobi integral
for some fixed $c\in \mathbb{R}$, then it is possible to
make a change of coordinates which ``removes'' or regularizes the
singularities.
This idea is reviewed in the next sections.
\subsection{Regularization of collisions with $m_{1}$}
To regularize a collision with $m_{1}$, define the complex variable $z=x+iy$ and the new ``regularized''
variable $\hat{z}=\hat{x}+i\hat{y}$, related to $z$ by the transformation
\begin{equation*}
\hat{z}^{2}=z-\mu .
\end{equation*}%
One also rescales time in the
regularized coordinates: the rescaled time $\hat{t}$ is related to the original time $t$ by the formula
\begin{equation*}
\frac{dt}{d\hat{t}}=4|\hat{z}|^{2}.
\end{equation*}
Let $U_{1}\subset \mathbb{R}^{4}$ denote the open set
\begin{equation*}
U_{1}=\left\{ \mathbf{\hat{x}}=(\hat{x},\hat{p},\hat{y},\hat{q})\in \mathbb{R%
}^{4} : \left( \hat{x},\hat{y}\right) \notin \left\{ \left( 0,-1\right)
,\left( 0,1\right) \right\} \right\} .
\end{equation*}%
This set will be the domain of the regularized vector field which allows us to
``flow through'' collisions with $m_1$ but not
with $m_{2}$.
A lengthy calculation (see \cite{theoryOfOrbits}), applying the change of
coordinates and time rescaling just described to the vector field $f$
defined in Equation \eqref{eq:PCRTBP}, leads to the regularized Levi-Civita vector
field $f_{1}^{c}\colon U_{1}\rightarrow \mathbb{R}^{4}$ with the ODE $%
\mathbf{\hat{x}}^{\prime }=f_{1}^{c}\left( \mathbf{\hat{x}}\right) $ given by%
\begin{eqnarray}
\hat{x}^{\prime } &=&\hat{p}, \notag \\
\hat{p}^{\prime } &=&8\left( \hat{x}^{2}+\hat{y}^{2}\right) \hat{q}+12\hat{x}%
(\hat{x}^{2}+\hat{y}^{2})^{2}+16\mu \hat{x}^{3}+4(\mu -c)\hat{x} \notag\\
&&+\frac{8\mu (\hat{x}^{3}-3\hat{x}\hat{y}^{2}+\hat{x})}{((\hat{x}^{2}+\hat{y%
}^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2}))^{3/2}}, \notag\\
\hat{y}^{\prime } &=&\hat{q}, \label{eq:regularizedSystem_m1} \\
\hat{q}^{\prime } &=&-8\left( \hat{x}^{2}+\hat{y}^{2}\right) \hat{p}+12\hat{y%
}\left( \hat{x}^{2}+\hat{y}^{2}\right) ^{2}-16\mu \hat{y}^{3}+4\left( \mu
-c\right) \hat{y} \notag \\
&&+\frac{8\mu (-\hat{y}^{3}+3\hat{x}^{2}\hat{y}+\hat{y})}{((\hat{x}^{2}+\hat{%
y}^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2}))^{3/2}}, \notag
\end{eqnarray}
where the parameter $c$ in the above ODE is $c=E(x,p,y,q)$.
The main observation is that the regularized vector field is well defined at
the origin $\left( \hat{x},\hat{y}\right) =\left( 0,0\right) $, and that the
origin maps to the collision with $m_{1}$ when we invert the Levi-Civita
coordinate transformation.
Let $\psi _{1}^{c}(\mathbf{\hat{x}},\hat{t})$ denote the flow generated by $%
f_{1}^c$. The flow conserves the first integral $E_{1}^{c}\colon
U_{1}\rightarrow \mathbb{R}$ given by
\begin{eqnarray}
E_{1}^{c}(\mathbf{\hat{x}}) &=&-\hat{q}^{2}-\hat{p}^{2}+4(\hat{x}^{2}+\hat{y}%
^{2})^{3}+8\mu (\hat{x}^{4}-\hat{y}^{4})+4(\mu -c)(\hat{x}^{2}+\hat{y}^{2})
\notag \\
&&+8(1-\mu )+8\mu \frac{(\hat{x}^{2}+\hat{y}^{2})}{\sqrt{(\hat{x}^{2}+\hat{y}%
^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2})}}. \label{eq:reg_P_energy}
\end{eqnarray}
Note that the parameter $c$ appears both in the formulae for $f_{1}^{c}$ and
$E_{1}^{c}$. We write $\psi _{1}^{c}$ to stress that the flow depends
explicitly on the choice of $c$. We choose
$c \in \mathbb{R}$ and then, after regularization, have new coordinates
which allow us to study collisions only in the level set
\begin{equation}
M:=\left\{ \mathbf{x}\in U : E(\mathbf{x})=c\right\} .
\label{eq:M-level-set-c}
\end{equation}
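As a quick plausibility check of Equations \eqref{eq:regularizedSystem_m1} and \eqref{eq:reg_P_energy}, the sketch below (with an illustrative mass ratio and energy; both are our assumptions) evaluates the regularized field at a collision point, confirming it is finite there, and verifies numerically that $E_{1}^{c}$ is constant along a short integrated arc.

```python
import numpy as np

MU = 0.25  # illustrative mass ratio; an assumption, not a value from the text

def f1c(v, c, mu=MU):
    # Regularized Levi-Civita field at m1, Equation (eq:regularizedSystem_m1),
    # in the coordinates (xh, ph, yh, qh).
    x, p, y, q = v
    rho = x * x + y * y
    D = (rho * rho + 1 + 2 * (x * x - y * y)) ** 1.5
    return np.array([
        p,
        8 * rho * q + 12 * x * rho**2 + 16 * mu * x**3 + 4 * (mu - c) * x
        + 8 * mu * (x**3 - 3 * x * y * y + x) / D,
        q,
        -8 * rho * p + 12 * y * rho**2 - 16 * mu * y**3 + 4 * (mu - c) * y
        + 8 * mu * (-(y**3) + 3 * x * x * y + y) / D,
    ])

def E1c(v, c, mu=MU):
    # Regularized energy, Equation (eq:reg_P_energy).
    x, p, y, q = v
    rho = x * x + y * y
    D = np.sqrt(rho * rho + 1 + 2 * (x * x - y * y))
    return (-q * q - p * p + 4 * rho**3 + 8 * mu * (x**4 - y**4)
            + 4 * (mu - c) * rho + 8 * (1 - mu) + 8 * mu * rho / D)

def rk4_step(g, v, h):
    k1 = g(v); k2 = g(v + 0.5 * h * k1)
    k3 = g(v + 0.5 * h * k2); k4 = g(v + h * k3)
    return v + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Note that, in contrast to $f$, evaluating $f_{1}^{c}$ at a point with $\hat{x}=\hat{y}=0$ produces finite values: this is the whole point of the regularization.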
We define the linear subspace $\mathcal{C}_{1}\subset \mathbb{R}^{4}$ by
\begin{equation*}
\mathcal{C}_{1}=\left\{ (\hat{x},\hat{p},\hat{y},\hat{q})\in \mathbb{R}%
^{4}\,|\,\hat{x}=\hat{y}=0\right\} .
\end{equation*}%
The change of coordinates between the two coordinate systems is given by the
transform $T_{1}\colon U_{1}\backslash \mathcal{C}_{1}\rightarrow U$,
\begin{equation}
\mathbf{x}=T_{1}(\mathbf{\hat{x}}):=\left(
\begin{array}{c}
\hat{x}^{2}-\hat{y}^{2}+\mu \\
\frac{\hat{x}\hat{p}-\hat{y}\hat{q}}{2(\hat{x}^{2}+\hat{y}^{2})} \\
2\hat{x}\hat{y} \\
\frac{\hat{y}\hat{p}+\hat{x}\hat{q}}{2(\hat{x}^{2}+\hat{y}^{2})}%
\end{array}%
\right) , \label{eq:T1-def}
\end{equation}%
and is a local diffeomorphism on $U_{1}\backslash \mathcal{C}_{1}$. The
following theorem collects results from
\cite{theoryOfOrbits}, and relates the dynamics of the original and the regularized
systems.
\begin{theorem}
\label{thm:LeviCivitta}
Let $c$ be the fixed parameter
determining the level set $M$ in Equation \eqref{eq:M-level-set-c}. Assume that $%
\mathbf{x}_{0}\in U$ satisfies $E(\mathbf{x}_{0})=c,$ and assume that $%
\mathbf{\hat{x}}_{0}\in U_{1}\setminus \mathcal{C}_{1}$ is such that $%
\mathbf{x}_{0}=T_{1}\left( \mathbf{\hat{x}}_{0}\right) $. Then the curve%
\begin{equation*}
\gamma \left( s\right) :=T_{1}\left( \psi _{1}^{c}(\hat{\mathbf{x}}%
_{0},s)\right)
\end{equation*}%
parameterizes the following possible solutions of the PCRTBP in $M$:
\begin{enumerate}
\item If for every $\hat{t}\in \lbrack -\hat{T},\hat{T}]$ we have $\psi
_{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}_{1}$,
then $\gamma \left( s\right) $, for $s\in \lbrack -\hat{T},\hat{T}]$, lies on
a trajectory of the PCRTBP which avoids collisions. Moreover, the time $t$
in the original coordinates that corresponds to the time $\hat{t}\in \lbrack
-\hat{T},\hat{T}]$ in the regularized coordinates is recovered by the
integral
\begin{equation}
t=4\int_{0}^{\hat{t}}\left( \hat{x}(s)^{2}+\hat{y}(s)^{2}\right) ds,
\label{eq:time-recovery}
\end{equation}%
i.e.
\begin{equation*}
\phi \left( t,\mathbf{x}_{0}\right) =T_{1}\left( \psi _{1}^{c}(\hat{\mathbf{x%
}}_{0},\hat{t})\right) .
\end{equation*}
\item If for $\hat{T}>0$, for every $\hat{t}\in \lbrack 0,\hat{T})$ we have $%
\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}%
_{1} $ and $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{T})\in \mathcal{C}_{1}$,
then in the original coordinates the trajectory starting from $\mathbf{x}%
_{0} $ reaches the collision with $m_{1}$ at time $T>0$ given by%
\begin{equation}
T=4\int_{0}^{\hat{T}}\left( \hat{x}(s)^{2}+\hat{y}(s)^{2}\right) \,ds.
\label{eq:time-to-collision}
\end{equation}
\item If for $\hat{T}<0$, for every $\hat{t}\in (\hat{T},0]$ we have $\psi
_{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}_{1}$
and $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{T})\in \mathcal{C}_{1}$, then
in the original coordinates the backward trajectory starting from $\mathbf{x}%
_{0}$ reaches the collision with $m_{1}$ at time $T<0$ expressed in Equation \eqref{eq:time-to-collision}.
\end{enumerate}
\end{theorem}
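The conjugacy in item 1 of Theorem \ref{thm:LeviCivitta}, together with the time recovery formula \eqref{eq:time-recovery}, can be checked numerically. The sketch below integrates the regularized field augmented with $dt/d\hat{t}=4(\hat{x}^{2}+\hat{y}^{2})$, recovers the physical time, flows the original system for that time, and compares the two results through $T_{1}$. The mass ratio, step sizes, and initial point are illustrative assumptions.

```python
import numpy as np

MU = 0.25  # illustrative mass ratio; an assumption, not a value from the text

def f(v, mu=MU):
    # Original PCRTBP field, Equation (eq:PCRTBP).
    x, p, y, q = v
    r13 = np.hypot(x - mu, y) ** 3
    r23 = np.hypot(x + 1 - mu, y) ** 3
    return np.array([p,
                     2 * q + x - (1 - mu) * (x - mu) / r13 - mu * (x + 1 - mu) / r23,
                     q,
                     -2 * p + y - (1 - mu) * y / r13 - mu * y / r23])

def E(v, mu=MU):
    # Jacobi integral; Omega carries the constant mu*(1-mu)/2 (an assumed
    # convention, consistent with the regularized formulas).
    x, p, y, q = v
    r1 = np.hypot(x - mu, y)
    r2 = np.hypot(x + 1 - mu, y)
    return (-p * p - q * q + x * x + y * y
            + 2 * (1 - mu) / r1 + 2 * mu / r2 + mu * (1 - mu))

def T1(w, mu=MU):
    # Levi-Civita change of coordinates, Equation (eq:T1-def).
    x, p, y, q = w
    rho = x * x + y * y
    return np.array([x * x - y * y + mu, (x * p - y * q) / (2 * rho),
                     2 * x * y, (y * p + x * q) / (2 * rho)])

def f1c_aug(w, c, mu=MU):
    # Regularized field (eq:regularizedSystem_m1) augmented with the physical
    # time, dt/dthat = 4*(xh^2 + yh^2), Equation (eq:time-recovery).
    x, p, y, q, _ = w
    rho = x * x + y * y
    D = (rho * rho + 1 + 2 * (x * x - y * y)) ** 1.5
    return np.array([p,
                     8 * rho * q + 12 * x * rho**2 + 16 * mu * x**3
                     + 4 * (mu - c) * x + 8 * mu * (x**3 - 3 * x * y * y + x) / D,
                     q,
                     -8 * rho * p + 12 * y * rho**2 - 16 * mu * y**3
                     + 4 * (mu - c) * y + 8 * mu * (-(y**3) + 3 * x * x * y + y) / D,
                     4 * rho])

def rk4(g, v, h, n):
    for _ in range(n):
        k1 = g(v); k2 = g(v + 0.5 * h * k1)
        k3 = g(v + 0.5 * h * k2); k4 = g(v + h * k3)
        v = v + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v
```

With $c=E(T_{1}(\mathbf{\hat{x}}_{0}))$, the image under $T_{1}$ of the regularized trajectory agrees with the original trajectory at the recovered physical time, up to integrator accuracy.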
Orbits satisfying condition 2 of Theorem \ref%
{thm:LeviCivitta} are called collision orbits, while orbits satisfying
condition 3 are called ejection orbits. From Theorem \ref{thm:LeviCivitta} we see that for regularized orbits $\psi
_{1}^{c}\left( \mathbf{\hat{x}}_{0},\hat{t}\right) $ to have a physical
meaning in the original coordinates we need to choose $c=E\left( T_{1}\left(
\mathbf{\hat{x}}_{0}\right) \right)$ for the regularization energy. The following lemma, whose proof is a standard calculation (see \cite{theoryOfOrbits}), addresses this
choice.
\begin{lemma}
\label{lem:energies-cond}For every $\mathbf{\hat{x}}\in U_{1}$, we have
\begin{equation}
E\left( T_{1}\left( \mathbf{\hat{x}}\right) \right) =c\qquad \text{if and only if} \qquad
E_{1}^{c}\left( \mathbf{\hat{x}}\right) =0. \label{eq:energies-cond-m1}
\end{equation}
\end{lemma}
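For $\mathbf{\hat{x}}\in U_{1}\setminus \mathcal{C}_{1}$, the equivalence in Lemma \ref{lem:energies-cond} follows from the pointwise identity $E_{1}^{c}(\mathbf{\hat{x}})=4(\hat{x}^{2}+\hat{y}^{2})\left( E(T_{1}(\mathbf{\hat{x}}))-c\right)$, since $\hat{x}^{2}+\hat{y}^{2}\neq 0$ there. The sketch below checks this identity at random points; as before, it assumes that $\Omega$ carries the constant $\mu(1-\mu)/2$ and uses an illustrative mass ratio.

```python
import numpy as np

MU = 0.25  # illustrative mass ratio; an assumption

def E(v, mu=MU):
    # Jacobi integral; Omega carries the constant mu*(1-mu)/2, the assumed
    # convention under which the identity below is exact.
    x, p, y, q = v
    r1 = np.hypot(x - mu, y)
    r2 = np.hypot(x + 1 - mu, y)
    return (-p * p - q * q + x * x + y * y
            + 2 * (1 - mu) / r1 + 2 * mu / r2 + mu * (1 - mu))

def T1(w, mu=MU):
    # Levi-Civita change of coordinates, Equation (eq:T1-def).
    x, p, y, q = w
    rho = x * x + y * y
    return np.array([x * x - y * y + mu, (x * p - y * q) / (2 * rho),
                     2 * x * y, (y * p + x * q) / (2 * rho)])

def E1c(w, c, mu=MU):
    # Regularized energy, Equation (eq:reg_P_energy).
    x, p, y, q = w
    rho = x * x + y * y
    D = np.sqrt(rho * rho + 1 + 2 * (x * x - y * y))
    return (-q * q - p * p + 4 * rho**3 + 8 * mu * (x**4 - y**4)
            + 4 * (mu - c) * rho + 8 * (1 - mu) + 8 * mu * rho / D)
```

In particular, setting $c=E(T_{1}(\mathbf{\hat{x}}))$ makes $E_{1}^{c}(\mathbf{\hat{x}})$ vanish, which is the direction of the lemma used in the regularization of collision orbits.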
The following corollary of Lemma \ref{lem:energies-cond} follows by evaluating the
regularized energy $E_{1}^{c}$ at points whose position coordinates are zero.
\begin{corollary}
\label{cor:collisions-m1}If we consider $\mathbf{\hat{x}}=\left( \hat{x},%
\hat{p},\hat{y},\hat{q}\right) $ with $\hat{x}=\hat{y}=0$, which corresponds
to a collision with $m_{1}$, then from $E_{1}^{c}\left( \mathbf{\hat{x}}%
\right) =0$ we see that for a trajectory $\psi _{1}^{c}\left( \mathbf{\hat{x}%
},\hat{t}\right) $ starting from a collision point $\mathbf{\hat{x}}=\left(
0,\hat{p},0,\hat{q}\right) $ to have a physical meaning in the original
coordinates it is necessary and sufficient that
\begin{equation}
\hat{q}^{2}+\hat{p}^{2}=8(1-\mu ). \label{eq:collision-m1}
\end{equation}
\end{corollary}
\begin{definition}
\label{def:ejection-collision-manifolds}We refer to
\begin{equation*}
\left\{ \psi _{1}^{c}\left( \mathbf{\hat{x}},\hat{t}\right) :\hat{q}^{2}+%
\hat{p}^{2}=8(1-\mu ),\,\hat{t}\geq 0\text{ and }\psi _{1}^{c}(\mathbf{\hat{x%
}},[0,\hat{t}])\cap \mathcal{C}_{1}=\emptyset \right\}
\end{equation*}%
as the ejection manifold from $m_{1},$ and%
\begin{equation*}
\left\{ \psi _{1}^{c}\left( \mathbf{\hat{x}},\hat{t}\right) :\hat{q}^{2}+%
\hat{p}^{2}=8(1-\mu ),\,\hat{t}\leq 0\text{ and }\psi _{1}^{c}(\mathbf{\hat{x%
}},[\hat{t},0])\cap \mathcal{C}_{1}=\emptyset \right\}
\end{equation*}%
as the collision manifold to $m_{1}$.
\end{definition}
Note that both the collision and the ejection manifolds depend on the
choice of $c$. That is, we have a family of collision/ejection manifolds,
parameterized by the Jacobi constant $c$. For a
fixed $c$ the collision manifold, when viewed in the original coordinates,
consists of points with energy $c$, whose forward trajectory reaches the
collision with $m_{1}$. Similarly, for fixed $c$, the ejection manifold, in
the original coordinates, consists of points with energy $c$ whose backward
trajectory collides with $m_{1}$.
Thus, the circle defined in Corollary \ref{cor:collisions-m1} is a sort
of ``fundamental domain'' for ejections/collisions to $m_1$ with energy $c$.
\subsection{Regularization of collisions with $m_{2}$} \label{sec:regSecondPrimary}
To regularize at the second primary, we define the coordinates $\tilde{z}=%
\tilde{x}+i\tilde{y}$ through $\tilde{z}^{2}=z+1-\mu $ and consider the time
rescaling $dt/d\tilde{t}=4|\tilde{z}|^{2}$.
As in the previous section, define
\begin{eqnarray*}
U_{2}&:= &\left\{ \mathbf{\tilde{x}}=(\tilde{x},\tilde{p},\tilde{y},\tilde{q}%
)\in \mathbb{R}^{4}\,|\,\left( \tilde{x},\tilde{y}\right) \notin \left\{
\left( -1,0\right) ,\left( 1,0\right) \right\} \right\} , \\
\mathcal{C}_{2}&:= &\left\{ \mathbf{\tilde{x}}=(\tilde{x},\tilde{p},\tilde{y}%
,\tilde{q})\in \mathbb{R}^{4}\,|\,\tilde{x}=\tilde{y}=0\right\},
\end{eqnarray*}%
so that $U_{2}$ consists of points in the regularized coordinates which do
not collide with $m_{1}$, and $\mathcal{C}_{2}$ consists of
points which collide with $m_{2}$.
The regularized Levi-Civita vector field $f_{2}^{c}:U_{2}\rightarrow \mathbb{%
R}^{4}$ with the ODE $\mathbf{\tilde{x}}^{\prime }=f_{2}^{c}\left( \mathbf{%
\tilde{x}}\right) $ is of the form (see \cite{theoryOfOrbits})%
\begin{eqnarray}
\tilde{x}^{\prime } &=&\tilde{p}, \label{eq:reg_S_field} \notag \\
\tilde{p}^{\prime } &=&8\left( \tilde{x}^{2}+\tilde{y}^{2}\right) \tilde{q}%
+12\tilde{x}(\tilde{x}^{2}+\tilde{y}^{2})^{2}-16(1-\mu )\tilde{x}%
^{3}+4\left( (1-\mu )-c\right) \tilde{x} \notag \\
&&+\frac{8(1-\mu )\left( -\tilde{x}^{3}+3\tilde{x}\tilde{y}^{2}+\tilde{x}%
\right) }{((\tilde{x}^{2}+\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}%
^{2}))^{3/2}}, \notag \\
\tilde{y}^{\prime } &=&\tilde{q}, \label{eq:regularizedSystem_m2} \\
\tilde{q}^{\prime } &=&-8\left( \tilde{x}^{2}+\tilde{y}^{2}\right) \tilde{p}%
+12\tilde{y}(\tilde{x}^{2}+\tilde{y}^{2})^{2}+16(1-\mu )\tilde{y}%
^{3}+4\left( (1-\mu )-c\right) \tilde{y} \notag \\
&&+\frac{8(1-\mu )\left( \tilde{y}^{3}-3\tilde{x}^{2}\tilde{y}+\tilde{y}%
\right) }{((\tilde{x}^{2}+\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}%
^{2}))^{3/2}}, \notag
\end{eqnarray}%
with the integral of motion
\begin{align}
E_{2}^{c}\left( \mathbf{\tilde{x}}\right) & =-\tilde{p}^{2}-\tilde{q}^{2}+4(%
\tilde{x}^{2}+\tilde{y}^{2})^{3}+8(1-\mu )(\tilde{y}^{4}-\tilde{x}%
^{4})+4\left( (1-\mu )-c\right) (\tilde{x}^{2}+\tilde{y}^{2}) \notag \\
& \quad +8(1-\mu )\frac{\tilde{x}^{2}+\tilde{y}^{2}}{\sqrt{(\tilde{x}^{2}+%
\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}^{2})}}+8\mu . \label{eq:E2}
\end{align}
We write $\psi_2^c( \mathbf{\tilde x},\tilde t)$ for the flow induced
by (\ref{eq:regularizedSystem_m2}).
The change of coordinates from the regularized coordinates $\mathbf{\tilde{x}%
}$ to the original coordinates $\mathbf{x}$ is given by $T_{2}:U_{2}%
\setminus \mathcal{C}_{2}\rightarrow \mathbb{R}^{4}$ of the form%
\begin{equation}
\mathbf{x}=T_{2}\left( \mathbf{\tilde{x}}\right) =\left(
\begin{array}{c}
\tilde{x}^{2}-\tilde{y}^{2}+\mu-1 \\
\frac{\tilde{x}\tilde{p}-\tilde{y}\tilde{q}}{2(\tilde{x}^{2}+\tilde{y}^{2})}
\\
2\tilde{x}\tilde{y} \\
\frac{\tilde{y}\tilde{p}+\tilde{x}\tilde{q}}{2(\tilde{x}^{2}+\tilde{y}^{2})}%
\end{array}%
\right) . \label{eq:T2-def}
\end{equation}
A theorem analogous to Theorem \ref{thm:LeviCivitta} characterizes solution
curves in the two coordinate systems and the collisions with the second
primary $m_{2}$. Also, analogously to Lemma \ref{lem:energies-cond} and
Corollary \ref{cor:collisions-m1}, for every $\mathbf{\tilde{x}}\in U_{2}$
we have%
\begin{equation}
E\left( T_{2}\left( \mathbf{\tilde{x}}\right) \right) =c\qquad \text{if and only if} \qquad
E_{2}^{c}\left( \mathbf{\tilde{x}}\right) =0, \label{eq:energies-cond-m2}
\end{equation}%
and a trajectory $\psi _{2}^{c}\left( \mathbf{\tilde{x}},\tilde{t}\right) $
starting from a collision point $\mathbf{\tilde{x}}=\left( 0,\tilde{p},0,%
\tilde{q}\right) $ with $m_{2}$ has physical meaning in the original
coordinates if and only if
\begin{equation}
\tilde{q}^{2}+\tilde{p}^{2}=8\mu . \label{eq:collision-m2}
\end{equation}
We introduce the notions of the ejection and collision manifolds for $m_{2}$
analogously to Definition \ref{def:ejection-collision-manifolds}.
\begin{figure}[t!]
\begin{center}
\includegraphics[height=6.0cm]{ejectionCollisionPic} %
\end{center}
\caption{Ejection-collision orbits in the PCRTBP when $\mu = 1/4$
and $C = 3.2$. The grey curves at the top and bottom of the figure illustrate
the zero velocity curves, i.e. the boundaries of the energetically forbidden regions, for
this value of $C$. The black dots at $x = \mu$ and $x = -1+\mu$ depict
the locations of the primary bodies. The curves in the middle of the figure
represent two ejection-collision orbits: $m_2$ to $m_1$ (bottom) and
$m_1$ to $m_2$ (top). (Recall that $m_2$ is on the left and $m_1$ on the right;
compare with Figure \ref{fig:PCRTBP_coordinates}.) These orbits are computed by numerically
locating an approximate zero of the function defined in Equation \eqref{eq:collisionOperator}.
The blue portion of the orbit is in the
original coordinates, while green and red are on the ejection and
collision manifolds in regularized coordinates, respectively. The curves
are plotted by changing all points back to the original coordinates.}
\label{fig:ejectionCollisions}
\end{figure}
\section{Ejection-collision orbits} \label{sec:ejectionToCollision}
We now define a level set multiple shooting operator whose zeros
correspond to transverse ejection-collision orbits from the body $m_{k}$
to the body $m_{l}$ for $k,l\in\left\{ 1,2\right\}$ in the PCRTBP.
Two such orbits in the PCRTBP are illustrated in Figure \ref{fig:ejectionCollisions}.
Note that the PCRTBP has the form discussed in Example
\ref{ex:dissipative-unfolding}, so that a dissipative unfolding
is given by the one parameter family of ODEs
\begin{equation}
f_{\alpha}(x,p,y,q)=f(x,p,y,q)+\alpha\left( 0,p,0,q\right),
\label{eq:unfoldedPCRTBP}
\end{equation}
where $f$ is as defined in Equation \eqref{eq:PCRTBP}.
Let $\phi_{\alpha}(\mathbf{x},t)$ denote the flow generated by
the vector field of Equation \eqref{eq:unfoldedPCRTBP}.
For $c\in\mathbb{R}$ consider the fixed energy level
set $M$. Then $\alpha$ is an unfolding parameter for
the mapping
\begin{equation*}
R_{\tau,\alpha}\left( \mathbf{x}\right) =\phi_{\alpha}(\mathbf{x},\tau)
\end{equation*}
from $M$ to $M$. (Here
$R_{\tau,\alpha}:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ for fixed $\alpha,\tau\in\mathbb{R}$.)
Define the functions $P_{i} \colon \mathbb{R} \to \mathbb{R}^4$ for $i = 1,2$ by
\begin{equation}
P_{i}\left( \theta \right) :=\left\{
\begin{array}{lll}
(0,\sqrt{8\left( 1-\mu \right) }\cos \left( \theta \right) ,0,\sqrt{8\left(
1-\mu \right) }\sin \theta ) & & \text{for }i=1,\medskip \\
(0,\sqrt{8\mu }\cos \left( \theta \right) ,0,\sqrt{8\mu }\sin \theta ) & &
\text{for }i=2.%
\end{array}%
\right. \label{eq:collisions-par-Pi}
\end{equation}%
By Equations \eqref{eq:collision-m1} and \eqref{eq:collision-m2} the function $%
P_{i}\left( \theta \right) $ parameterizes the collision set for the primary $%
m_{i}$, with $i=1,2$.
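A minimal sketch (with an illustrative mass ratio, an assumption) confirming that the circles parameterized by $P_{1}$ and $P_{2}$ lie in the zero sets of the regularized energies for every value of $c$, which is the content of Equations \eqref{eq:collision-m1} and \eqref{eq:collision-m2}:

```python
import numpy as np

MU = 0.25  # illustrative mass ratio; an assumption

def P(i, theta, mu=MU):
    # Collision-circle parameterization, Equation (eq:collisions-par-Pi).
    r = np.sqrt(8 * (1 - mu)) if i == 1 else np.sqrt(8 * mu)
    return np.array([0.0, r * np.cos(theta), 0.0, r * np.sin(theta)])

def E1c(w, c, mu=MU):
    # Regularized energy at m1, Equation (eq:reg_P_energy).
    x, p, y, q = w
    rho = x * x + y * y
    D = np.sqrt(rho * rho + 1 + 2 * (x * x - y * y))
    return (-q * q - p * p + 4 * rho**3 + 8 * mu * (x**4 - y**4)
            + 4 * (mu - c) * rho + 8 * (1 - mu) + 8 * mu * rho / D)

def E2c(w, c, mu=MU):
    # Regularized energy at m2, Equation (eq:E2).
    x, p, y, q = w
    rho = x * x + y * y
    D = np.sqrt(rho * rho + 1 + 2 * (y * y - x * x))
    return (-p * p - q * q + 4 * rho**3 + 8 * (1 - mu) * (y**4 - x**4)
            + 4 * ((1 - mu) - c) * rho + 8 * (1 - mu) * rho / D + 8 * mu)
```

At points with zero position coordinates the $c$-dependent terms in $E_{i}^{c}$ vanish, so the checks are independent of the regularization energy.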
Fix $k,l\in \left\{ 1,2\right\} $ and consider level sets $%
M_{1},\ldots ,M_{6}\subset \mathbb{R}^{4}$ defined by %
\begin{align*}
M_{1}& =M_{2}=\left\{ E_{k}^{c}=0\right\} , \\
M_{3}& =M_{4}=\left\{ E=c\right\} , \\
M_{5}& =M_{6}=\left\{ E_{l}^{c}=0\right\}.
\end{align*}%
Choose $s>0$,
and for $i = 1,2$ recall the coordinate transformations
$T_{i} \colon U_i \backslash \mathcal{C}_i \to \mathbb{R}^4$ defined
in Equations \eqref{eq:T1-def} and \eqref{eq:T2-def}.
Taking the maps $R_{\tau ,\alpha }^{1},\ldots ,R_{\tau ,\alpha
}^{5}:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ as%
\begin{align*}
R_{\tau ,\alpha }^{1}\left( x_{1}\right) & =\psi _{k}^{c}\left(
x_{1},s\right) , \\
R_{\tau ,\alpha }^{2}\left( x_{2}\right) & =T_{k}\left( x_{2}\right) , \\
R_{\tau ,\alpha }^{3}\left( x_{3}\right) & =\phi _{\alpha }\left( x_{3},\tau
\right) , \\
R_{\tau ,\alpha }^{4}\left( x_{4}\right) & =T_{l}^{-1}\left( x_{4}\right) ,
\\
R_{\tau ,\alpha }^{5}\left( x_{5}\right) & =\psi _{l}^{c}\left(
x_{5},s\right),
\end{align*}
we let
\begin{equation*}
F:\mathbb{R}\times \underset{5 \ \text{copies}}{\underbrace{\mathbb{R}^{4}\times
\ldots \times \mathbb{R}^{4}}}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\rightarrow
\underset{6 \ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \ldots \times
\mathbb{R}^{4}}}
\end{equation*}%
be defined as %
\begin{equation}\label{eq:collisionOperator}
F\left( x_{0},x_{1},\ldots ,x_{5},x_{6},\tau ,\alpha \right):=
\left(
\begin{array}{r@{\,\,\,}l}
P_{k}\left( x_{0}\right) & -\,\,\,x_{1} \\
R_{\tau ,\alpha }^{1}\left(x_{1}\right) &- \,\,\, x_{2} \\
R_{\tau ,\alpha }^{2}\left(x_{2}\right) &- \,\,\, x_{3} \\
R_{\tau ,\alpha }^{3}\left(x_{3}\right) &- \,\,\, x_{4} \\
R_{\tau ,\alpha }^{4}\left( x_{4}\right) &- \,\,\, x_{5} \\
R_{\tau ,\alpha }^{5}\left( x_{5}\right) &- \,\,\, P_{l}\left( x_{6}\right)
\end{array}
\right),
\end{equation}
where $x_{0},x_{6},\tau ,\alpha \in \mathbb{R}$ and $%
x_{1},\ldots ,x_{5}\in \mathbb{R}^{4}$.
We also write $\left( x_{k},p_{k},y_{k},q_{k}\right) $ and $\left(
x_{l},p_{l},y_{l},q_{l}\right) $ to denote the regularized coordinates given by the
coordinate transformations $T_{k}$ and $T_{l}$, respectively.
\begin{lemma}\label{lem:collision-connections}
Let $\mathbf{x}^{\ast }=\left( x_{0}^{\ast },\ldots ,x_{6}^{\ast }\right)
$ and $\tau ^{\ast }>0$. If
\begin{equation*}
DF\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right)
\end{equation*}%
is an isomorphism and
\begin{equation*}
F\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0,
\end{equation*}%
then the orbit of the point $x_3^{\ast}$
is ejected from the primary body $m_{k}$ and collides with
the primary body $m_{l}.$ (The same is true of the orbit of
the point $x_4^{\ast}$.)
Moreover, the intersection of the ejection and collision manifolds is
transverse on the energy level $\left\{ E=c\right\} $ and the time
from the ejection to the collision is
\begin{equation}
\tau ^{\ast }+4\int_{0}^{s}\left\Vert \pi _{x_{k},y_{k}}\psi _{k}^{c}\left(
x_{1}^{\ast },u\right) \right\Vert ^{2}du+4\int_{0}^{s}\left\Vert \pi
_{x_{l},y_{l}}\psi _{l}^{c}\left( x_{5}^{\ast },u\right) \right\Vert ^{2}du.
\label{eq:time-between-collisions}
\end{equation}%
(Above we use the Euclidean norm.)
\end{lemma}
\begin{proof}
We have $d_{0}=d_{6}=k=1$ and $d=4$, so the condition in Equation \eqref
{eq:dimensions-multiple-shooting} is satisfied.
We now show that $\alpha $ is an unfolding parameter for $R_{\tau
,\alpha }=R_{\tau ,\alpha }^{5}\circ \ldots \circ R_{\tau ,\alpha }^{1}$.
Since $E_{i}^{c}$ is an integral of motion for the flow $\psi _{i}^{c}$, for
$i=1,2$, we see that%
\begin{equation*}
\begin{array}{rcl}
x_{1}\in M_{1}=\left\{ E_{k}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau
,\alpha }^{1}\left( x_{1}\right) =\psi _{k}^{c}\left( x_{1},s\right) \in
M_{2}=\left\{ E_{k}^{c}=0\right\} ,\medskip \\
x_{5}\in M_{5}=\left\{ E_{l}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau
,\alpha }^{5}\left( x_{5}\right) =\psi _{l}^{c}\left( x_{5},s\right) \in
M_{6}=\left\{ E_{l}^{c}=0\right\} .%
\end{array}%
\end{equation*}%
Also, by Equations \eqref{eq:energies-cond-m1} and \eqref{eq:energies-cond-m2} we see
that%
\begin{equation*}
\begin{array}{rcl}
x_{2}\in M_{2}=\left\{ E_{k}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau
,\alpha }^{2}\left( x_{2}\right) =T_{k}\left( x_{2}\right) \in M_{3}=\left\{
E=c\right\} ,\medskip \\
x_{4}\in M_{4}=\left\{ E=c\right\} & \qquad \iff \qquad & R_{\tau ,\alpha
}^{4}\left( x_{4}\right) =T_{l}^{-1}\left( x_{4}\right) \in M_{5}=\left\{
E_{l}^{c}=0\right\} .%
\end{array}%
\end{equation*}%
Moreover $\alpha $ is an unfolding parameter for the PCRTBP, and hence
for
\begin{equation*}
R_{\tau ,\alpha }^{3}\left( x_{3}\right) =\phi _{\alpha }\left( x_{3},\tau
\right).
\end{equation*}%
Note that for $i=1,2,4,5$, the map $R_{\tau ,\alpha }^{i}$ takes the level set $M_{i}$ into the level set $M_{i+1}$, and this does not depend on the choice of $\alpha$.
Then, since $\alpha $ is an unfolding parameter for $R_{\tau ,\alpha }^{3}$,
it follows directly from Definition \ref{def:unfolding} that $%
\alpha $ is an unfolding parameter for $R_{\tau ,\alpha }=R_{\tau ,\alpha
}^{5}\circ \ldots \circ R_{\tau ,\alpha }^{1}.$
By applying Lemma \ref{lem:multiple-shooting-2} to
\begin{equation*}
\tilde{F}\left( x_{0},x_{6},\tau ,\alpha \right) :=R_{\tau ,\alpha }\left(
P_{k}\left( x_{0}\right) \right) -P_{l}\left( x_{6}\right)
\end{equation*}%
we obtain that $D\tilde{F}\left( x_{0}^{\ast },x_{6}^{\ast },\tau ^{\ast
},0\right) $ is an isomorphism and that $\tilde{F}\left( x_{0}^{\ast
},x_{6}^{\ast },\tau ^{\ast },0\right) =0$. Since
\begin{equation*}
\tilde{F}\left( x_{0}^{\ast },x_{6}^{\ast },\tau ^{\ast },0\right) =\psi
_{l}^{c}\left( T_{l}^{-1}\left( \phi \left( T_{k}\left( \psi _{k}^{c}\left(
P_{k}(x_{0}^{\ast }),s\right) \right) ,\tau ^{\ast }\right) \right)
,s\right) -P_{l}\left( x_{6}^{\ast }\right) ,
\end{equation*}%
we see, by Theorem \ref{thm:LeviCivitta} (and its mirror counterpart for
the collision with $m_{2}$), that we have an orbit originating
at the point $P_{k}(x_{0}^{\ast })$ on the collision set for $m_k$, and
terminating at the point $P_{l}\left( x_{6}^{\ast }\right) $
on the collision set for $m_l$. The
transversality of the intersection between the
ejection manifold of $m_{k}$ and the collision manifold
of $m_{l}$ follows from Theorem \ref{th:single-shooting}.
The time between collisions in Equation \eqref{eq:time-between-collisions} follows
from Equation \eqref{eq:time-to-collision}.
\end{proof}
\begin{remark}[Additional shooting steps] \label{rem:additionalShooting}
{\em
We remark that in practice, computing accurate enclosures
of flow maps requires shortening the time step. Consider for example
the third and fourth component of $F$ as defined in
Equation \eqref{eq:collisionOperator}, and
suppose that a time step of length $\nicefrac{\tau}{N}$ is desired. By the properties
of the flow map, solving the sub-system of equations
\begin{equation}\label{eq:colOp_comp3}
\begin{aligned}
R_{\alpha, \tau}^3(x_3) - x_4 = \phi_\alpha(x_3, \tau) - x_4 &= 0 \\
R_{\alpha, \tau}^4(x_4) - x_5 = T^{-1}_l(x_4) - x_5 &= 0
\end{aligned}
\end{equation}
is equivalent to solving
\begin{align*}
\phi_\alpha(x_3, \nicefrac{\tau}{N}) - y_1 &= 0\\
\phi_\alpha(y_1, \nicefrac{\tau}{N}) - y_2 &= 0\\
&\vdots \\
\phi_\alpha(y_{N-2}, \nicefrac{\tau}{N}) - y_{N-1} &= 0 \\
\phi_{\alpha}(y_{N-1}, \nicefrac{\tau}{N}) - x_4 &= 0 \\
T_l^{-1}(x_4) - x_5 &= 0,
\end{align*}
and we can append these new variables and
components to the map $F$ defined in Equation \eqref{eq:collisionOperator}
without changing the zeros of the operator.
Moreover, by Lemma \ref{lem:multiple-shooting-2} the
transversality result for the operator is not changed by the added
steps. Indeed, by the same reasoning we can (and do) add intermediate
shooting steps in the regularized coordinates
to reduce the time steps to any desired tolerance.
}
\end{remark}
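The subdivision argument of Remark \ref{rem:additionalShooting} relies only on the group property of the flow map. A toy sketch, using the exact flow of a harmonic oscillator as a stand-in for $\phi_{\alpha}$ (our choice for illustration; any flow map behaves the same way), shows that a zero of the one-step equation induces a zero of the chained $N$-step system, while a non-zero stays a non-zero:

```python
import numpy as np

def flow(v, t):
    # Illustrative stand-in flow map: exact flow of (x, p)' = (p, -x).
    ct, st = np.cos(t), np.sin(t)
    return np.array([ct * v[0] + st * v[1], -st * v[0] + ct * v[1]])

def chained_residuals(x3, x4, tau, N):
    # Residuals of the N-step subdivision of phi(x3, tau) - x4 = 0,
    # with the intermediate points y_k generated by the flow itself.
    ys = [x3]
    for _ in range(N):
        ys.append(flow(ys[-1], tau / N))
    res = [ys[k + 1] - flow(ys[k], tau / N) for k in range(N - 1)]
    res.append(flow(ys[N - 1], tau / N) - x4)
    return ys[1:-1], np.concatenate(res)
```

Appending the intermediate points $y_{1},\ldots,y_{N-1}$ as unknowns enlarges the system without changing its zero set, exactly as in the remark.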
\section{Connections between collisions and libration points $L_{4}$, $L_{5}$%
}\label{sec:L4_to_collision}
For each value of $\mu \in (0, 1/2]$,
the PCRTBP has exactly five equilibrium solutions.
For traditional reasons, these are referred to as libration points
of the PCRTBP. Three of these are collinear with the primary bodies,
and lie on the $x$-axis. These are referred to as $L_1, L_2$ and
$L_3$, and they correspond to the collinear relative equilibrium
solutions discovered by Euler. The remaining two libration points
are located at the third vertices of the two equilateral triangles whose other two
vertices are the primary bodies. These are referred to as $L_4$
and $L_5$, and correspond to the equilateral triangle solutions of Lagrange.
Figure \ref{fig:PCRTBP_librations} illustrates the locations of the
libration points in the phase space.
\begin{figure}[!t]
\centering
\includegraphics[height=5cm]{Fig-3.pdf}
\caption{The three collinear libration points $L_{1,2,3}$ and the
equilateral triangle libration points $L_{4, 5}$, relative to the positions
of the primary masses $m_1$ and $m_2$.}
\label{fig:PCRTBP_librations}
\end{figure}
For all values of the mass ratio, the collinear libration points have saddle
$\times$ center stability. The center manifolds give rise to important
families of periodic orbits known as Lyapunov families. The stability of $%
L_4 $ and $L_5$ depends on the mass ratio $\mu$. For
\begin{equation*}
0 < \mu < \mu_* \approx 0.04,
\end{equation*}
where the exact value is $\mu_* = 2/(25 + \sqrt{621})$, the triangular
libration points have center $\times$ center stability. That is, they are
stable in the sense of Hamiltonian systems and exhibit the full ``zoo''
of nearby KAM objects.
When $\mu > \mu_*$, the triangular libration points $L_4$ and $L_5$ have
saddle-focus stability. That is, they have a complex conjugate pair of
stable and a complex conjugate pair of
unstable eigenvalues. The four
eigenvalues then have the form
\begin{equation*}
\lambda = \pm \alpha \pm i \beta,
\end{equation*}
for some $\alpha, \beta > 0$. In this case, each libration point has an
attached two dimensional stable and two dimensional unstable manifold. Since
these two dimensional manifolds live in the three dimensional energy level
set of $L_{4,5}$,
there exists the possibility
that they intersect the two dimensional collision or ejection
manifolds of the primaries transversely. It is also possible that the
stable/unstable manifolds of $L_{4,5}$ intersect one another transversely, giving rise to
homoclinic or heteroclinic connecting orbits.
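The saddle-focus claim is easy to confirm numerically. For the equal-mass case $\mu = 1/2 > \mu_*$, the sketch below builds the linearization of the PCRTBP field at $L_{4}=(\mu - 1/2,\,0,\,\sqrt{3}/2,\,0)$ by finite differences and checks that the eigenvalues come as $\pm\alpha\pm i\beta$ with $\alpha,\beta>0$. The finite-difference step is an illustrative choice.

```python
import numpy as np

MU = 0.5  # equal-mass case, as in the paper's examples

def f(v, mu=MU):
    # PCRTBP vector field in the coordinates (x, p, y, q).
    x, p, y, q = v
    r13 = np.hypot(x - mu, y) ** 3
    r23 = np.hypot(x + 1 - mu, y) ** 3
    return np.array([p,
                     2 * q + x - (1 - mu) * (x - mu) / r13 - mu * (x + 1 - mu) / r23,
                     q,
                     -2 * p + y - (1 - mu) * y / r13 - mu * y / r23])

def jacobian(v, h=1e-6, mu=MU):
    # Central finite differences; adequate here since only the qualitative
    # eigenvalue structure is needed.
    J = np.zeros((4, 4))
    for j in range(4):
        e = np.zeros(4); e[j] = h
        J[:, j] = (f(v + e, mu) - f(v - e, mu)) / (2 * h)
    return J

L4 = np.array([MU - 0.5, 0.0, np.sqrt(3) / 2, 0.0])  # equilateral point
```

The four eigenvalues obtained this way have nonzero real and imaginary parts, i.e. $L_{4}$ is of saddle-focus type for this mass ratio.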
In fact, in this paper we prove that both of these phenomena occur. In this section we discuss our method for proving the existence of intersections between a stable/unstable manifold of $L_{4,5}$ and
an ejection/collision manifold of a primary body. Any point of intersection between these
manifolds gives rise to an orbit which is asymptotic
to the libration point, but which collides with or is ejected from one of the
primary bodies.
Two such orbits are illustrated in Figure \ref{fig:EC_to_collision}.
\begin{figure}[!t]
\centering
\includegraphics[height=4.75cm]{L4_EC_pic1.pdf}\includegraphics[height=4.75cm]{L4_EC_pic2.pdf}
\caption{Libration-to-collision and ejection-to-libration orbits for
$\mu = 1/2$ and $c = 3$ (which is the $L_4$ value of the Jacobi constant
in the equal mass problem). The left
frame illustrates an ejection-to-$L_4$ orbit, and the right frame an $L_4$-to-collision
orbit. In each frame $m_1$ is depicted as a black dot and $L_4$
as a red dot. The boundary of a parameterized local unstable manifold for
$L_4$ is depicted as a red circle, and the stable boundary as a green circle. The
orbits are found by computing an approximate zero of the map
defined in Equation \eqref{eq:EC_to_L4_operator}.
The green portion of the left, and red portion of the right
curves are computed in regularized coordinates
for the body $m_1$. These points are transformed back to the original
coordinates for the plot.}
\label{fig:EC_to_collision}
\end{figure}
Let $\overline{B}\subset \mathbb{R}^{2}$ denote the closed unit ball centered at the origin.
%
Assume that%
\begin{equation*}
w_{j}^{\kappa }:\overline{B}\rightarrow \mathbb{R}^{4}\qquad \text{%
for }j\in \{4,5\}\text{ and }\kappa \in \left\{ u,s\right\},
\end{equation*}%
parameterize the two dimensional local stable/unstable manifolds of $L_{j}$.
We assume that the charts are normalized so
that $w_{j}^{\kappa }\left( 0\right) =L_{j}$. Then
\begin{equation*}
w_{j}^{\kappa }\left( \overline{B}\right) =W_{\text{loc}}^{\kappa
}\left( L_{j}\right) \qquad \text{for }j\in \{4,5\},\text{ }\kappa \in
\left\{ u,s\right\} .
\end{equation*}%
Define the functions%
\begin{equation*}
P_{j}^{\kappa }:\mathbb{R}\rightarrow \mathbb{R}^{4}\qquad \text{for }j\in
\{4,5\}\text{ and }\kappa \in \left\{ u,s\right\},
\end{equation*}%
by%
\begin{equation}
P_{j}^{\kappa }\left( \theta \right) :=w_{j}^{\kappa }\left( \cos
\theta ,\sin \theta \right) . \label{eq:Pj-lib}
\end{equation}
For $i\in \{1,2\}$ consider $P_{i}$ as defined in Equation \eqref{eq:collisions-par-Pi}.
For
\begin{equation*}
\mathbf{x}=\left( x_{0},x_{1},x_{2},x_{3},x_{4}\right) \in \mathbb{R}^{14},
\end{equation*}%
where $x_{0},x_{4}\in \mathbb{R}$, $x_{1},x_{2},x_{3}\in \mathbb{R}^{4}$, and $j \in \left\{ 4,5\right\} $, we
define
\begin{equation*}
F_{i,j}^{u},F_{i,j}^{s}:\mathbb{R}^{16}\rightarrow \mathbb{R}^{16},
\end{equation*}%
by the formulas
\begin{equation} \label{eq:EC_to_L4_operator}
F_{i,j}^{u}\left( \mathbf{x},\tau ,\alpha \right) =\left(
\begin{array}{r@{\,\,-\,\,}l}
P_{j}^{u}\left( x_{0}\right) & x_{1} \\
\phi _{\alpha }\left( x_{1},\tau \right) & x_{2} \\
T_{i}^{-1}(x_{2}) & x_{3} \\
\psi _{i}^{c_{j}}\left( x_{3},s\right) & P_{i}(x_{4})%
\end{array}%
\right) ,\quad F_{i,j}^{s}\left( \mathbf{x},\tau ,\alpha \right) =\left(
\begin{array}{r@{\,\,-\,\,}l}
P_{i}(x_{0}) & x_{1} \\
\psi _{i}^{c_{j}}\left( x_{1},s\right) & x_{2} \\
T_{i}(x_{2}) & x_{3} \\
\phi _{\alpha }\left( x_{3},\tau \right) & P_{j}^{s}\left( x_{4}\right)%
\end{array}%
\right).
\end{equation}%
Here $\tau ,\alpha \in \mathbb{R}$ and the constant $c_{j}$ in $\psi
_{i}^{c_{j}}$ is chosen as $c_{j}=E\left( L_{j}\right) $.
Zeros of the operator $F_{i,j}^{u}$ correspond to intersections of the
unstable manifold of $L_{j}$ with the collision manifold of mass $m_{i}.$
We also refer to this as a heteroclinic connection from $L_{j}$ to $m_{i}$.
Similarly, zeros of the operator $F_{i,j}^{s}$ correspond to intersections between
the stable manifold of $L_{j}$ with the ejection manifold of mass $m_{i}.$ In other
words, they lead to heteroclinic connections ejected from $%
m_{i}$ and limiting to the libration point $L_{j}$ in forward time. This is expressed formally in the following lemma.
\begin{lemma}\label{lem:Li-collisions}
Fix $i\in \left\{ 1,2\right\} ,$ $j\in \{4,5\}$, and $\kappa \in \left\{
u,s\right\} $. Suppose there exists
$\mathbf{x}^{\ast} = (x_0^{\ast}, x_1^{\ast}, x_2^{\ast}, x_3^{\ast}, x_4^{\ast}) \in \mathbb{R}^{14}$
and $\tau ^{\ast } > 0 $ satisfying
\begin{equation*}
F_{i,j}^{\kappa }\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0,
\end{equation*}
and such that
\begin{equation*}
DF_{i,j}^{\kappa }\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right)
\end{equation*}%
is an isomorphism. Then we have the following two cases.
\begin{enumerate}
\item If $\kappa = u$, then the orbit of $x_1^{\ast}$ is heteroclinic from the libration point $L_{j}$
to collision with $m_{i}$ and the intersection of $W^{u}\left( L_{j}\right)$ with the collision manifold of $m_{i}$ is transverse with respect to the energy level $\left\{ E=c_j\right\} $.
\item If $\kappa = s$, then the orbit of $x_3^{\ast}$ is heteroclinic from the libration point $L_{j}$
to ejection with $m_{i}$ and the intersection of $W^{s}\left( L_{j}\right)$ with the ejection manifold of $m_i$ is transverse with respect to the energy level $\left\{ E=c_j\right\} $.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof follows from an argument similar to the proof of Lemma \ref{lem:collision-connections}.
\end{proof}
\bigskip
By a small modification of the operator just defined, we can
study orbits homoclinic or heteroclinic to the libration points as well. Such orbits
arise as intersections of the stable/unstable manifolds of
the libration points, and lead naturally to two point BVPs.
Three such orbits, homoclinic to $L_4$ in the
PCRTBP, are illustrated in Figure \ref{fig:PCRTBP_L4_homoclinics}.
Note that homoclinic/heteroclinic connections between equilibrium solutions
do not require changing to regularized coordinates, as
such orbits exist for all forward and backward time and cannot
have any collisions. While this is mathematically correct,
any homoclinic/heteroclinic orbit which passes sufficiently close to a
collision with $m_{i}$ for $i\in \left\{ 1,2\right\} $ becomes
difficult to continue numerically, and consequently may still be difficult or impossible to validate via computer assisted proof.
In such cases regularization techniques are an asset even when studying orbits
which merely pass near a collision.
The left and center homoclinic orbits in
Figure \ref{fig:PCRTBP_L4_homoclinics} for example are computed
entirely in the usual PCRTBP coordinates, while the right orbit was
computed using both coordinate systems. With this in mind we express the homoclinic/heteroclinic problem in the framework set up in the previous sections.
\begin{figure}[!t]
\centering
\includegraphics[height=3.8cm]{L4_homoclinics1.pdf}\includegraphics[height=3.8cm]{L4_homoclinics2.pdf}\includegraphics[height=3.8cm]{L4_homoclinics3.pdf}
\caption{
Transverse homoclinic orbits at $L_4$
for $\mu = 1/2$ in the $C = 3$ energy level.
Each orbit traverses the illustrated curves in a clockwise fashion.
The left and center orbits were known to
Strömgren and Szebehely. The center and right
orbits possess no symmetry, and the orbit on the right
passes close to collision with $m_2$.
Each orbit is found by approximately computing a zero of the
map defined in Equation \eqref{eq:homoclinicOperator}.
The left and center orbits are computed in only the standard
coordinate system. The orbit on the right is computed by changing to regularized
coordinates for the middle third of the flight.
}
\label{fig:PCRTBP_L4_homoclinics}
\end{figure}
Let $P^{\kappa}_{j}:\mathbb{R}\rightarrow \mathbb{R}^{4}$, for $j\in \left\{
4,5\right\} $ be the functions defined in Equation \eqref{eq:Pj-lib} and consider
\begin{equation*}
\mathbf{x}=\left( x_{0},\ldots ,x_{6}\right) \in \mathbb{R}^{22},
\end{equation*}%
where $x_{0},x_{6}\in \mathbb{R}$ and $x_{1},\ldots ,x_{5}\in \mathbb{R}^{4}$%
, and fix $s_{1},s_{2}>0$. Let
\begin{equation*}
F_{i,j,k}:\mathbb{R}^{24}\rightarrow \mathbb{R}^{24},\qquad \text{for }%
j,k\in \{4,5\},i\in \left\{ 1,2\right\} ,
\end{equation*}%
be defined as%
\begin{equation} \label{eq:homoclinicOperator}
F_{i,j,k}\left( \mathbf{x},\tau ,\alpha \right) :=\left(
\begin{array}{r@{\,\,-\,\,}l}
P_{j}^{u}\left( x_{0}\right) & x_{1} \\
\phi _{\alpha }\left( x_{1},\tau \right) & x_{2} \\
T_{i}^{-1}(x_{2}) & x_{3} \\
\psi _{i}^{c_{j}}\left( x_{3},s_{1}\right) & x_{4} \\
T_{i}(x_{4}) & x_{5} \\
\phi _{\alpha }\left( x_{5},s_{2}\right) & P_{k}^{s}\left( x_{6}\right)
\end{array}%
\right) .
\end{equation}
One can formulate a result analogous to Lemmas
\ref{lem:collision-connections} and \ref{lem:Li-collisions},
showing that
\[
F_{i,j,k}\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0,
\]
together with $DF_{i,j,k}\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) $
an isomorphism implies that the manifolds $W^{u}\left( L_{j}\right) $ and
$W^{s}\left( L_{k}\right) $ intersect transversally.
Again, the advantage of
solving $F_{i,j,k}=0$ over parallel shooting in the original coordinates
is that one can establish the existence of connections which pass
arbitrarily close to a collision with $m_{1}$ and/or $m_2$.
Indeed, the operator defined in Equation \eqref{eq:homoclinicOperator}
can be generalized to study homoclinic orbits which make any
finite number of flybys of the primaries in any order before returning
to $L_{4,5}$ by making additional changes of variables to regularized
coordinates every time the orbit passes near collision.
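All of the shooting operators above share the same block structure: each row is the mismatch between the image of one unknown under a segment map (a flow, a coordinate change, or a boundary parameterization) and the next unknown, and zeros are found by Newton's method. The following Python sketch is our own illustration of that structure on a deliberately simple stand-in problem, a harmonic oscillator rather than the PCRTBP; the names and the finite-difference Jacobian are illustrative choices, not the validated IntLab/CAPD implementation.

```python
import numpy as np

def rk4_flow(f, s, t, n=200):
    """Fixed-step RK4 approximation of the time-t flow map of s' = f(s)."""
    h = t / n
    s = np.array(s, dtype=float)
    for _ in range(n):
        k1 = f(s); k2 = f(s + 0.5*h*k1); k3 = f(s + 0.5*h*k2); k4 = f(s + h*k3)
        s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return s

def oscillator(s):  # u' = v, v' = -u
    return np.array([s[1], -s[0]])

def F(z):
    """Toy shooting operator with unknowns z = (x1, tau).
    Rows: flow((1,0), tau) - x1   (segment mismatch),
          x1[0]                   (boundary condition u = 0)."""
    x1, tau = z[:2], z[2]
    return np.append(rk4_flow(oscillator, [1.0, 0.0], tau) - x1, x1[0])

def newton(F, z, steps=20, eps=1e-7):
    """Newton iteration with a finite-difference Jacobian."""
    for _ in range(steps):
        Fz = F(z)
        J = np.column_stack([(F(z + eps*e) - Fz)/eps for e in np.eye(len(z))])
        z = z - np.linalg.solve(J, Fz)
    return z
```

Starting from the guess $(x_1,\tau) = ((0.1,-0.9),\,1.4)$, the iteration converges to the exact solution $\tau = \pi/2$, $x_1 = (0,-1)$, mirroring how an approximate zero of the operators above is refined before validation.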
\section{Symmetric periodic orbits passing through collision\label%
{sec:symmetric-orbits}}
In this section we show that our method applies to the study of
families of periodic orbits which pass through a
collision. By this we mean the following. We will prove the existence of
a family of orbits parameterized by the value of the Jacobi constant on an interval.
As in the introduction, we refer to this as a tube of
periodic orbits. For all values in the interval except one, the intersection
of the energy level set with the tube is a periodic orbit. For a single
isolated value of the energy the intersection of the energy level set with
the tube is an ejection-collision orbit involving $m_{1}$. The situation
is depicted in Figure \ref{fig:Lyap}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[height=3.95cm]{Fig-4_0.pdf} %
\includegraphics[height=3.95cm]{Fig-4_1.pdf} %
\includegraphics[height=3.95cm]{Fig-4_2.pdf}
\par
\includegraphics[height=3.95cm]{Fig-4_0c.pdf} %
\includegraphics[height=3.95cm]{Fig-4_1c.pdf} %
\includegraphics[height=3.95cm]{Fig-4_2c.pdf}
\end{center}
\caption{A family of Lyapunov periodic orbits passing through a collision.
The left two figures are in the original coordinates, the middle two are in
the regularised coordinates at $m_{1}$ and the right two are in
regularised coordinates at $m_{2}$. (Compare with Figure \protect\ref%
{fig:PCRTBP_coordinates}.) The trajectories computed in the original
coordinates are in black, and the trajectories computed in the regularized
coordinates are in red. The collision with $m_1$ is indicated by a cross.
The mass $m_2$ is added in the closeup figures as a black dot. The operator (%
\protect\ref{eq:Fc-choice}) gives half of a periodic orbit in red and black.
The second half, which follows from the symmetry, is depicted in grey. The
plots are for the Earth-moon system.}
\label{fig:Lyap}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[height=3.95cm]{Fig-4_0_detail_1} %
\includegraphics[height=3.95cm]{Fig-4_0_detail_2} %
\end{center}
\caption{A closeup of a Lyapunov orbit before (left) and after (right) passing through the collision.
The plot is in the original coordinates.}
\label{fig:Lyap-closeup}
\end{figure}
To establish such a family of periodic orbits we make use of the time
reversing symmetry of the PCRTBP. Recall that for%
\begin{equation*}
S\left( x,p,y,q\right) :=\left( x,-p,-y,q\right)
\end{equation*}%
and for the flow $\phi \left( \mathbf{x},t\right) $ of the PCRTBP we have
that%
\begin{equation}
S\left( \phi \left( \mathbf{x},t\right) \right) =\phi \left( S\left( \mathbf{%
x}\right) ,-t\right) . \label{eq:symmetry-prop}
\end{equation}%
Let us introduce the notation $\mathcal{S}$ to stand for the set of self $S$%
-symmetric points%
\begin{equation*}
\mathcal{S}:=\left\{ \mathbf{x}\in \mathbb{R}^{4}:\mathbf{x}=S\left( \mathbf{%
x}\right) \right\} .
\end{equation*}
The property in Equation \eqref{eq:symmetry-prop} is used to find periodic orbits as
follows. Suppose $\mathbf{x},\mathbf{y}\in \mathcal{S}
$ satisfy $\mathbf{y}=\phi \left( \mathbf{x},t\right)$. Then by Equation \eqref{eq:symmetry-prop},
we have
\begin{equation}
\phi \left( \mathbf{x},2t\right) =\phi \left( \mathbf{y},t\right) =\phi
\left( S\left( \mathbf{y}\right) ,t\right) =S\left( \phi \left( \mathbf{y}%
,-t\right) \right) =S\left( \mathbf{x}\right) =\mathbf{x},
\label{eq:S-symm-periodic}
\end{equation}%
meaning that $\mathbf{x}$ lies on a periodic orbit. Our strategy is then
to set up a boundary value problem which shoots from $\mathcal{S%
}$ to itself.
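As a sanity check, the reversing symmetry in Equation \eqref{eq:symmetry-prop} can be verified numerically: a classical Runge--Kutta step commutes with $S$ exactly as the flow does, so the identity holds for the discretized flow up to round-off. The sketch below is our own Python illustration (the mass ratio and test point are arbitrary choices), not part of the validated computations.

```python
import numpy as np

MU = 0.25  # mass ratio, illustrative value only

def pcrtbp_field(s):
    """PCRTBP vector field in rotating (x, p, y, q) coordinates,
    with m1 at (MU, 0) and m2 at (MU - 1, 0)."""
    x, p, y, q = s
    r1 = np.hypot(x - MU, y)
    r2 = np.hypot(x - MU + 1.0, y)
    ox = x - (1 - MU)*(x - MU)/r1**3 - MU*(x - MU + 1.0)/r2**3
    oy = y - (1 - MU)*y/r1**3 - MU*y/r2**3
    return np.array([p, 2*q + ox, q, -2*p + oy])

def flow(s, t, n=2000):
    """Fixed-step RK4 approximation of phi(s, t); works for t < 0 as well."""
    h = t / n
    s = np.array(s, dtype=float)
    for _ in range(n):
        k1 = pcrtbp_field(s)
        k2 = pcrtbp_field(s + 0.5*h*k1)
        k3 = pcrtbp_field(s + 0.5*h*k2)
        k4 = pcrtbp_field(s + h*k3)
        s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return s

def S(s):
    """The time-reversing symmetry S(x, p, y, q) = (x, -p, -y, q)."""
    x, p, y, q = s
    return np.array([x, -p, -y, q])
```

For any test point one then observes `S(flow(s0, t))` agreeing with `flow(S(s0), -t)` to round-off, which is the discrete counterpart of Equation \eqref{eq:symmetry-prop}.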
The set $\mathcal{S}$ lies on the $x$-axis in the $\left( x,y\right)$
coordinate frame. From the left plot in Figure \ref{fig:Lyap} it is clear that
we are interested in points on $\mathcal{S}$ which will pass
through collision with $m_{1}$ and close to the collision with $m_{2}$.
We therefore consider the set $\mathcal{S}$ transformed to the
regularized coordinates of $m_1$ and $m_2$.
\begin{lemma}
Let $\mathcal{\hat{S}},\mathcal{\tilde{S}}\subset \mathbb{R}^{4}$ be defined
as%
\begin{eqnarray*}
\mathcal{\hat{S}} &=&\left\{ \left( 0,\hat{p},\hat{y},0\right) :\hat{p},\hat{%
y}\in \mathbb{R}\right\} , \\
\mathcal{\tilde{S}} &=&\left\{ \left( \tilde{x},0,0,\tilde{q}\right) :\tilde{%
x},\tilde{q}\in \mathbb{R}\right\} .
\end{eqnarray*}%
Then $T_{1}(\mathcal{\hat{S}})=\mathcal{S}$ and $T_{2}(\mathcal{\tilde{S}})=%
\mathcal{S}$.
\end{lemma}
\begin{proof}
The proof follows directly from the definition of $T_{1}$ and $T_{2}$. (See Equations \eqref{eq:T1-def} and \eqref{eq:T2-def}.)
\end{proof}
The intuition behind the choice of $\mathcal{\hat{S}},$ $\mathcal{\tilde{S}}$
is seen in Figure \ref{fig:PCRTBP_coordinates}. From the figure we see that
the set $\mathcal{\hat{S}}$ is the vertical axis $\{\hat{x}=0\}$ and $\mathcal{\tilde{S}}$ is
the horizontal axis $\left\{ \tilde{y}=0\right\} $, which join the primaries
in the regularized coordinates.
To find the desired symmetric periodic orbits we fix an energy level $c\in \mathbb{R}$
and introduce an appropriate shooting operator, whose zero implies the
existence of an orbit with energy $c$. Slightly abusing notation,
let us first define two functions $\hat{p},\tilde{q}:\mathbb{R}%
^{2}\rightarrow \mathbb{R}$ as%
\begin{eqnarray*}
\hat{p}\left( \hat{y},c\right) & := &\sqrt{4\hat{y}^{6}-8\mu \hat{y}%
^{4}+4(\mu -c)\hat{y}^{2}+\frac{8\mu \hat{y}^{2}}{\sqrt{\hat{y}^{4}+1-2\hat{y%
}^{2}}}+8(1-\mu )}, \\
\tilde{q}\left( \tilde{x},c\right) &:=&\sqrt{4\tilde{x}^{6}-8(1-\mu )\tilde{%
x}^{4}+4\left( (1-\mu )-c\right) \tilde{x}^{2}+\frac{8(1-\mu )\tilde{x}^{2}}{%
\sqrt{\tilde{x}^{4}+1-2\tilde{x}^{2}}}+8\mu }.
\end{eqnarray*}%
Observe that from Equations \eqref{eq:reg_P_energy} and \eqref{eq:E2}
we have
\begin{align}
E_{1}^{c}\left( 0,\hat{p}\left( \hat{y},c\right) ,\hat{y},0\right) & =0,
\label{eq:pc-implicit} \\
E_{2}^{c}\left( \tilde{x},0,0,\tilde{q}\left( \tilde{x},c\right) \right) &
=0. \label{eq:qc-implicit}
\end{align}%
Next, we define $\hat{P}_{1}^{c},\tilde{P}_{2}^{c}:\mathbb{R}\rightarrow
\mathbb{R}^{4}$ by
\begin{align*}
\hat{P}_{1}^{c}\left( \hat{y}\right) & :=\left( 0,\hat{p}\left( \hat{y}%
,c\right) ,\hat{y},0\right) , \\
\tilde{P}_{2}^{c}\left( \tilde{x}\right) & :=\left( \tilde{x},0,0,\tilde{q}%
\left( \tilde{x},c\right) \right),
\end{align*}%
and note that $\hat{P}_{1}^{c}\left( \mathbb{R}\right) \subset \mathcal{\hat{S}}$ and $%
\tilde{P}_{2}^{c}\left( \mathbb{R}\right) \subset \mathcal{\tilde{S}}$.
Fixing $s>0$ and taking
\begin{equation*}
\mathbf{x}=(x_{0}, x_{1},\ldots ,x_{5},x_{6})\in
\mathbb{R}\times \underset{5 \ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \ldots \times
\mathbb{R}^{4}}}\times \mathbb{R}=\mathbb{R}^{22}\mathbb{,}
\end{equation*}%
we define the shooting operator $F_{c}:\mathbb{R}^{24}\rightarrow
\mathbb{R}^{24}$ as
\begin{equation}
F_{c}\left( \mathbf{x},\tau ,\alpha \right) =\left(
\begin{array}{r@{\,\,-\,\,}l}
\hat{P}_{1}^{c}\left( x_{0}\right) & x_{1} \\
\psi _{1}^{c}\left( x_{1},s\right) & x_{2} \\
T_{1}\left( x_{2}\right) & x_{3} \\
\phi _{\alpha }\left( x_{3},\tau \right) & x_{4} \\
T_{2}^{-1}\left( x_{4}\right) & x_{5} \\
\psi _{2}^{c}\left( x_{5},s\right) & \tilde{P}_{2}^{c}\left( x_{6}\right)%
\end{array}%
\right) . \label{eq:Fc-choice}
\end{equation}%
We have the following result.
\begin{lemma}
\label{lem:Lyap-existence} Suppose that for $c\in \mathbb{R}$ there exist
$\mathbf{x}\left( c\right) \in\mathbb{R}^{22}$ and $\tau \left( c\right)
\in \mathbb{R}$ for which
\begin{equation*}
F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right) =0.
\end{equation*}
Then one of the following three cases holds:
\begin{enumerate}
\item If $x_{0}\left( c\right) \neq 0$ and $x_{6}\left( c\right) \neq 0$,
then the orbit through
$T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$
is periodic.
\item If $x_{0}\left( c\right) =0$ and $x_{6}\left( c\right) \neq 0$, then
the orbit through
$T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$ is an ejection-collision orbit with $m_1$.
\item If $x_{0}\left( c\right) \neq 0$ and $x_{6}\left( c\right) =0$, then
the orbit through
$T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$ is an ejection-collision orbit with $m_2$.
\end{enumerate}
\end{lemma}
\begin{proof}
The result follows immediately from the definition of $F_{c}$
in Equation \eqref{eq:Fc-choice} and from Theorem \ref{thm:LeviCivitta} (or the analogous theorem for $m_2$). We highlight the fact that due to Equations \eqref{eq:pc-implicit}--\eqref{eq:qc-implicit} we have $E_{1}^{c}(\hat{P}%
_{1}^{c}\left( x_{0}\right) )=0$ and $E_{2}^{c}( \tilde{P}%
_{2}^{c}\left( x_{6}\right) ) =0$, so the trajectories in the
regularized coordinates correspond to the physical trajectories in the
physical coordinates of the PCRTBP.
\end{proof}
We can use the implicit function theorem to compute the derivative of $\mathbf{x}%
\left( c\right) $ with respect to $c$. Let us write $\mathbf{y}\left(
c\right) :=\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,\alpha
\left( c\right) \right) $ and suppose $F_c(\mathbf{y}(c))=0$. (Note that in fact we must also have $\alpha \left( c\right) =0$, since $\alpha $ is an unfolding parameter.) Then $\frac{d}{dc}\mathbf{x}\left(
c\right) $ is computed from the first coordinates of the vector $\frac{d}{dc}%
\mathbf{y}\left( c\right) $ and is given by the formula
\begin{equation}
\frac{d}{dc}\mathbf{y}\left( c\right) =-\left( \frac{\partial F_{c}}{%
\partial \mathbf{y}}\right) ^{-1}\frac{\partial F_{c}}{\partial c}.
\label{eq:implicit-dx-dc}
\end{equation}
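The formula in Equation \eqref{eq:implicit-dx-dc} amounts to one linear solve per parameter value. A minimal Python illustration on a toy operator (our own stand-in for $F_{c}$, chosen so that the zero branch $y_{1}(c)=y_{2}(c)=\sqrt{c/2}$ is known in closed form):

```python
import numpy as np

def F(y, c):
    """Toy parameterized operator with zero branch y1(c) = y2(c) = sqrt(c/2)."""
    return np.array([y[0]**2 + y[1]**2 - c, y[0] - y[1]])

def dF_dy(y, c):
    return np.array([[2*y[0], 2*y[1]], [1.0, -1.0]])

def dF_dc(y, c):
    return np.array([-1.0, 0.0])

def branch_derivative(y, c):
    """dy/dc = -(dF/dy)^{-1} dF/dc: one linear solve per parameter value."""
    return -np.linalg.solve(dF_dy(y, c), dF_dc(y, c))
```

At $c=0.5$ the branch passes through $y=(0.5,0.5)$, and the formula returns $dy/dc=(0.5,0.5)$, matching the closed-form derivative $\frac{d}{dc}\sqrt{c/2}=1/(2\sqrt{2c})$.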
\begin{theorem}
\label{th:Lyap-through-collision}Assume that for $c\in \left[ c_{1},c_{2}%
\right] $ the functions $\mathbf{x}\left( c\right) $ and $\tau \left(
c\right) $ solve the implicit equation
\begin{equation*}
F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right) =0.
\end{equation*}
If%
\begin{eqnarray}
\label{eq:Bolzano-condition-Lyap}
x_{0}\left( c_{1}\right) >0>x_{0}\left( c_{2}\right) , \\
\label{eq:x6-nonzero}
x_{6}\left( c\right) \neq 0\qquad \text{for all }c\in \left[ c_{1},c_{2}\right],
\end{eqnarray}
and
\begin{equation}
\frac{d}{dc}x_{0}\left( c\right) <0\qquad \text{for all }c\in \left[
c_{1},c_{2}\right] , \label{eq:der-cond-Lyap}
\end{equation}%
then there exists a unique energy parameter $c^{\ast }\in \left(
c_{1},c_{2}\right) $ for which we have an intersection of the ejection
and collision manifolds of $m_{1}$. Moreover, for all remaining $c\in \left[
c_{1},c_{2}\right] \setminus \left\{ c^{\ast }\right\} $ the orbit of the point $%
T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) ) $
is periodic.
\end{theorem}
\begin{proof}
The result follows directly from the Bolzano theorem and Lemma \ref%
{lem:Lyap-existence}.
\end{proof}
Theorem \ref{th:Lyap-through-collision} is deliberately formulated so that its hypotheses
can be validated via computer assistance. Specifically, enclosures of the derivative in Equation
\eqref{eq:implicit-dx-dc} are computed rigorously, and Conditions
\eqref{eq:Bolzano-condition-Lyap}--\eqref{eq:der-cond-Lyap} are verified using interval arithmetic.
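The Bolzano step can also be mimicked in floating point: Conditions \eqref{eq:Bolzano-condition-Lyap} and \eqref{eq:der-cond-Lyap} say that $c\mapsto x_{0}(c)$ is strictly decreasing with a sign change, so the collision energy $c^{\ast}$ is its unique zero and can be enclosed by bisection. The Python sketch below uses a hypothetical stand-in for $x_{0}(c)$; the rigorous version performs the same dichotomy with interval enclosures instead of point evaluations.

```python
def bisect_zero(f, a, b, tol=1e-10):
    """Enclose the unique zero of a continuous, decreasing f with f(a) > 0 > f(b)."""
    assert f(a) > 0 > f(b)
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) > 0:
            a = m        # sign change is to the right of m
        else:
            b = m        # sign change is at or to the left of m
    return a, b

# hypothetical stand-in for c -> x0(c): strictly decreasing, one sign change at c = 1
x0 = lambda c: 1.0 - c**3
lo, hi = bisect_zero(x0, 0.0, 2.0)
```

The returned bracket `[lo, hi]` encloses the stand-in's unique zero $c^{\ast}=1$ to the requested tolerance.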
\medskip
We finish this section with an example of a similar approach, which can be
used for the proofs of double collisions in the case when $m_{1}=m_{2}=\frac{%
1}{2}$. That is, we establish the existence of a family of periodic orbits,
parameterized by energy (the Jacobi constant), which are symmetric with respect to
the $y$-axis, and such that for a single parameter from the family we have a
double collision as in Figure \ref{fig:eq}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[height=3.95cm]{Fig-5_1.pdf} %
\includegraphics[height=3.95cm]{Fig-5_2.pdf}
\end{center}
\caption{A family of periodic orbits passing through a double collision. The
left figure is in the original coordinates and the right figure is in the
regularised coordinates at $m_{1}$. The trajectories computed in the
original coordinates are in black, the trajectories computed in the
regularized coordinates are in red, and the collision orbit is in blue. The
second half of an orbit, which follows from the $R$-symmetry, is depicted in
grey. The plots are for the system with equal masses.}
\label{fig:eq}
\end{figure}
In this case consider $R:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ defined as%
\begin{equation*}
R\left( x,p,y,q\right) =\left( -x,p,y,-q\right) .
\end{equation*}%
For the case of two equal masses, we have the time reversing symmetry
\begin{equation}
R\left( \phi \left( \mathbf{x},t\right) \right) =\phi \left( R\left( \mathbf{%
x}\right) ,-t\right) . \label{eq:R-symmetry.}
\end{equation}%
We denote by $\mathcal{R}$ the set of all points which are $R$-self
symmetric, i.e. $\mathcal{R}=\{\mathbf{x}=R\left( \mathbf{x}\right) \}$. An
argument mirroring Equation \eqref{eq:S-symm-periodic} shows that if
two points $\mathbf{x},\mathbf{y}\in \mathcal{R}$ have $%
\mathbf{y}=\phi \left( \mathbf{x},t\right) ,$ then these points must lie on a
periodic orbit.
To obtain the existence of the family of orbits
depicted in Figure \ref{fig:eq}, define $p:%
\mathbb{R}^{2}\rightarrow \mathbb{R}$ and $P_{1}^{c},P_{2}^{c}:\mathbb{R}%
\rightarrow \mathbb{R}^{4}$ as%
\begin{eqnarray*}
p\left( y,c\right) &:=&\sqrt{2\Omega (0,y)-c}, \\
P_{1}^{c}\left( y\right) &:=&\left( 0,p\left( y,c\right) ,y,0\right) , \\
P_{2}^{c}\left( y\right) &:=&\left( 0,-p\left( y,c\right) ,y,0\right) .
\end{eqnarray*}%
Note that $P_{1}^{c}\left( y\right) ,P_{2}^{c}\left( y\right) \in \mathcal{R}
$ and $E\left( P_{1}^{c}\left( y\right) \right) =E\left( P_{2}^{c}\left(
y\right) \right) =c$ (see Equation \eqref{eq:JacobiIntegral}). Consider $%
x_{0},x_{7}\in \mathbb{R}$ and $x_{1},\ldots ,x_{6}\in \mathbb{R}^{4},$
where
\begin{equation}
x_{4}=\left( s_{4},\hat{p}_{4},\hat{y}_{4},\hat{q}_{4}\right) \in \mathbb{R}%
^{4}. \label{eq:s4}
\end{equation}%
We emphasize that the first coordinate in $x_{4}$ will be used here in a
slightly less standard way than in the previous examples. We also define
\begin{equation*}
\mathrm{\hat{x}}_{4}:=\left( 0,\hat{p}_{4},\hat{y}_{4},\hat{q}_{4}\right)
\in \mathbb{R}^{4}.
\end{equation*}%
We now choose some fixed $s_{2},s_{5}\in \mathbb{R}$, $s_{2},s_{5}>0$, and for
\begin{equation*}
\mathbf{x}=\left( x_{0},\ldots ,x_{7}\right) \in \mathbb{R}\times \underset{%
6 \ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \mathbb{\ldots }\times \mathbb{R}^{4}}}%
\times \mathbb{R}=\mathbb{R}^{26}
\end{equation*}%
define the operator $F_{c}:\mathbb{R}^{26}\times \mathbb{R}\times \mathbb{R%
}\rightarrow \mathbb{R}^{28}$ as
\begin{equation}
F_{c}\left( \mathbf{x},\tau ,\alpha \right) =\left(
\begin{array}{r@{\,\,-\,\,}l}
P_1^{c}\left( x_{0}\right) & x_{1} \\
\phi _{\alpha }\left( x_{1},s_{2}\right) & x_{2} \\
T_{1}^{-1}\left( x_{2}\right) & x_{3} \\
\psi _{1}^{c}\left( x_{3},s_{4}\right) & \mathrm{\hat{x}}_{4} \\
\psi _{1}^{c}\left( \mathrm{\hat{x}}_{4},s_{5}\right) & x_{5} \\
T_{1}\left( x_{5}\right) & x_{6} \\
\phi _{\alpha }\left( x_{6},\tau \right) & P_2^{c}\left( x_{7}\right)
\end{array}%
\right) . \label{eq:Fc-equal}
\end{equation}
Note that in Equation \eqref{eq:Fc-equal} the $s_{2},s_{5}$ are some fixed
parameters, and $s_{4}$ is one of the coordinates of $\mathbf{x}$. We claim that if $F_{c}\left( \mathbf{x},\tau ,0\right) =0$ and $\pi _{\hat{y}_{4}}%
\mathbf{x}=0$, then the orbit of $x_2$ passes through the collision with
$m_{1}$. Indeed, since $\mathrm{\hat{x}}_{4}=\left( 0,\hat{p}_{4},\hat{y}
_{4},\hat{q}_{4}\right) $, the equation $F_{c}=0$ ensures that the point
$\psi _{1}^{c}\left( x_{3},s_{4}\right)$ vanishes on its first, $\hat{x}$, coordinate.
So, if in addition $\pi _{\hat{y}_{4}}\mathbf{x}=0$,
then $\pi_{\hat x, \hat y}\psi _{1}^{c}\left( x_{3},s_{4}\right)=0$ and we arrive at the collision.
Moreover, by the $R$-symmetry of the system in this case we
also establish heteroclinic connections between collisions with
$m_{1}$ and $m_{2}$ (see Figure \ref{fig:eq}).
If on the other hand $F_{c}=0$ and $\pi _{\hat{y}_{4}}\mathbf{x}\neq 0,$
then we have a periodic orbit passing near
the collisions with $m_{1}$ and $m_{2}$.
One can prove a result analogous to Theorem \ref{th:Lyap-through-collision}
with the minor difference being that instead of using $x_{0}$ in
Equations \eqref{eq:Bolzano-condition-Lyap} and \eqref{eq:der-cond-Lyap} we take $
\hat{y}_{4}.$ We omit the details in order not to repeat the same argument.
\section{Computer assisted proofs for collision/near collision orbits}
\label{sec:CAP}
\subsection{Newton-Krawczyk method}
For a smooth mapping $F : \mathbb{R}^n \to \mathbb{R}^n$, the following
theorem provides sufficient conditions for the existence of a solution of $%
F(x)=0$ in the neighborhood of a \textquotedblleft good
enough\textquotedblright\ approximate solution. The hypotheses of the
theorem require measuring the defect associated with the approximate
solution, as well as the quality of a certain condition number for an
approximate inverse of the derivative. Theorems of this kind are used widely
in computer assisted proofs, and we refer the interested reader to the works
of \cite{MR0231516,MR1100928,MR1057685,MR2807595,MR2652784,
MR3971222,MR3822720,jpjbReview} for a more complete overview.
Let $\left\Vert \cdot \right\Vert $ be a norm in $\mathbb{R}^{n}$ and let $%
\overline{B}(x_{0},r)\subset \mathbb{R}^{n}$ denote a closed ball of radius $r \geq 0$ centered at $x_0$ in that norm.
\begin{theorem}[Newton-Krawczyk]
\label{thm:NK} \label{thm:aPosteriori} Let $U\subset \mathbb{R}^{n}$ be an
open set and $F\colon U\rightarrow \mathbb{R}^{n}$ be at least of class $C^2$. Suppose
that $x_{0}\in U$ and let $A$ be an $n\times n$ matrix. Suppose that $Y,Z,r>0$
are constants such that $\overline{B}(x_{0},r)\subset U$ and
\begin{eqnarray}
\Vert AF(x_{0})\Vert &\leq &Y, \label{eq:Krawczyk-Y} \\
\sup_{x\in \overline{B}(x_{0},r)}\Vert \mathrm{Id}-ADF(x)\Vert &\leq &Z.
\label{eq:Krawczyk-Z}
\end{eqnarray}%
If
\begin{equation}
Zr-r+Y\leq 0, \label{eq:Krawczyk-ineq}
\end{equation}%
then there is a unique $\hat{x}\in \overline{B}(x_{0},r)$ for which $F(\hat{x%
})=0.$ Moreover, $DF(\hat{x})$ is invertible.
\end{theorem}
\begin{proof}
The proof is included in \ref{sec:proof} for the sake of completeness.
\end{proof}
\bigskip
The theorem is well suited for applications to computer assisted proofs. To
validate the assumptions it is enough to compute interval enclosures of the
quantities $F(x_{0})$ and $DF(B)$, where $B$ is a suitable
ball. These enclosures are done using interval arithmetic,
and the results are returned as sets (cubes in $\mathbb{R}^{n}$ and $\mathbb{%
R}^{n\times n}$) enclosing the correct values. A good choice for the matrix $%
A$ is any floating point approximate inverse of the derivative of $F$ at $%
x_{0}$, computed with standard linear algebra packages. The advantage of
working with such an approximation is that there is no need to compute a rigorous interval enclosure of
a solution of a linear equation
(as in the interval Newton
method). In higher dimensional problems,
solving linear equations can lead to large overestimation (the so-called
``wrapping effect'').
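The $Y$, $Z$, $r$ bookkeeping of Theorem \ref{thm:NK} can be seen on a one-dimensional toy problem. The Python sketch below (our own illustration) verifies a zero of $F(x)=x^{2}-2$ near $x_{0}=1.41$; the tiny interval class has no outward rounding, so it demonstrates only the logic of the estimates, whereas the actual proofs use IntLab/CAPD.

```python
class Interval:
    """Toy interval arithmetic; no directed rounding, illustrative only."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def mag(self):
        return max(abs(self.lo), abs(self.hi))

def krawczyk_radius(x0, r_star):
    """Y, Z, r bounds for F(x) = x^2 - 2 on the ball [x0 - r*, x0 + r*]."""
    A = 1.0 / (2.0 * x0)                        # approximate inverse of DF(x0)
    X = Interval(x0 - r_star, x0 + r_star)      # candidate ball
    Y = (Interval(A) * (Interval(x0)*Interval(x0) - 2.0)).mag()   # |A F(x0)| <= Y
    Z = (Interval(1.0) - Interval(A) * (Interval(2.0)*X)).mag()   # sup |Id - A DF| <= Z
    assert Z < 1.0
    r = 1.001 * Y / (1.0 - Z)                   # slight inflation keeps Zr - r + Y < 0
    assert Z*r - r + Y < 0.0 and r <= r_star
    return r
```

Running `krawczyk_radius(1.41, 1e-2)` returns a radius $r$ with $Zr-r+Y<0$, so the theorem guarantees a unique zero of $F$ within distance $r$ of $x_{0}$; here that zero is $\sqrt{2}$.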
In our work the evaluation of $F$ and its derivative involves integrating
ODEs and variational equations. There are well known general purpose
algorithms for solving these problems, and we refer the
interested reader to \cite{c1Lohner,cnLohner,MR2807595}.
For parameterizing the invariant manifolds attached to
$L_4$ with interval enclosures, we exploit the techniques discussed in \cite{myNotes}
(validated integration is also discussed in this reference).
We remark that our implementations use the IntLab laboratory running under MatLab\footnote{%
https://www.tuhh.de/ti3/rump/intlab/} and/or the CAPD\footnote{%
Computer Assisted Proofs in Dynamics, http://capd.ii.uj.edu.pl} C++
library, and recall that the source codes are found at the homepage of MC.
See \cite{Ru99a} and \cite{CAPD_paper} as references for the usage
and the functionality of the libraries.
\subsection{Computer assisted existence proofs for ejection-collision orbits}
\label{sec:EC}
The methodology of Section \ref{sec:ejectionToCollision}, and
especially Lemma \ref{lem:collision-connections}, is combined with Theorem \ref{thm:NK}
to obtain the following.
\begin{maintheorem}\label{thm:ejectionCollision}
\label{thm:CAP-ejCol} Consider the
planar PCRTBP with $\mu = 1/4$
and $c = 3.2$. Let
\[
\overline{p} =
\left(
\begin{array}{c}
-0.564897282072410 \\
\phantom{-}0.978399619177283 \\
-0.099609551141525 \\
-0.751696444982537
\end{array}
\right),
\]
\[
r = 2.7 \times 10^{-13},
\]
and
\[
B_r = \left\{ x \in \mathbb{R}^4 \, : \|x - \overline{p}\| \leq r\right\},
\]
where the norm is the maximum norm on components. Then, there exists a unique $p_* \in B_r$ such that the orbit of $p_*$
is ejected from $m_2$ (at $x = -1 + \mu, y= 0$), collides with $m_1$ (at $x = \mu, y= 0$), and
the total time $T$ from ejection to collision satisfies the estimate
\begin{equation*}
2.42710599795 \leq T \leq 2.42710599796.
\end{equation*}
In addition, the ejection manifold of $m_2$
intersects the collision manifold of $m_1$ transversely
along the orbit of $p_*$, where transversality is
relative to the level set $\setof*{E = 3.2}$. Moreover, there exists a transverse $S$-symmetric counterpart ejected from $m_1$ and colliding with $m_2$.
\end{maintheorem}
\begin{proof}
The first step in the proof is to define an appropriate version of the map $%
F $ in Equation \eqref{eq:collisionOperator},
whose zeros correspond to ejection-collision
orbits from $m_2$ to $m_1$.
In particular we set $k = 2$
and $l = 1$, and choose (somewhat arbitrarily) the parameter $s = 0.35$ in
the definition of the component maps $R_{\tau, \alpha}^1$ and $R_{\tau,
\alpha}^5$. The parameter $s$ determines how long to integrate/flow in the
regularized coordinates.
Next we compute an approximate zero
$\overline{x} \in \mathbb{R}^{24}$
of $F$ using Newton's method. Note that interval arithmetic is not
required in this step. The resulting numerical data is recorded in Table
\ref{table:th1}, and we note that $\overline{x}_3$ in the table
corresponds to $\overline{p}$ in the hypothesis of the theorem.
Note also that we take $\bar\alpha$ in the approximate solution to be zero.
\begin{table}[tbp]
{\scriptsize
\begin{tabular}{cllll}
\hline
$\overline{x}_0 = $ & $\phantom{(-}2.945584780500716$ & & & \\
$\overline{x}_1 = $ & $(\phantom{-} 0.0,$ & $-1.387134030283961,$ & $\phantom{-}0.0,$ & $%
\phantom{-}0.275425456390970)$ \\
$\overline{x}_2 = $ & $(-0.444581369966432,$ & $-1.038375926396089,$
& $\phantom{-}0.112026231721142,$ & $\phantom{-}0.449167625710802)$ \\
$\overline{x}_3 = $ & $(-0.564897282072410,$ & $\phantom{-}0.978399619177283,$ &
$-0.099609551141525,$ & $-0.751696444982537)$ \\
$\overline{x}_4 = $ & $(-0.244097430449606,$ & $\phantom{-}0.878139982728136,$ & $-0.025435855606099,$ & $\phantom{-}0.543608549989376)$ \\
$\overline{x}_5 = $ & $( \phantom{-}0.018086991443589,$ & $-0.732714475912918,$
& $-0.703153304556756,$ & $\phantom{-}1.254598547822042)$ \\
$\overline{x}_6 = $ & $\phantom{(-}1.459760691418490$ & & & \\
$\overline{\tau} = $ & $\phantom{(-}2.051635871465197$ & & & \\
$\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\
\hline
\end{tabular}
}
\caption{ Numerical data used in the proof of Theorem \protect\ref%
{thm:CAP-ejCol}, giving the approximate solution of $F=0$ for the operator \eqref{eq:collisionOperator},
whose zeros correspond to the ejection-collision orbits from $m_2$ to $m_1$.
We set the mass ratio to $\mu = 1/4$
and Jacobi constant to $c = 3.2$.
The resulting orbit is illustrated in Figure \ref{fig:ejectionCollisions} (bottom curve).
\label{table:th1}\label{tab:ejColTab1} }
\end{table}
We define $A$ to be the numerically computed approximate inverse of $DF(%
\overline{x})$, and let
\begin{equation*}
B = \overline{B}(\overline{x}, r_*),
\end{equation*}
denote the closed ball of radius
\begin{equation*}
r_* = 2\times 10^{-12},
\end{equation*}
in the maximum norm about the numerical approximation.
(The reader interested in the numerical entries of the matrix can
run the accompanying computer program.) We note that the choice of
$r_*$ is somewhat arbitrary. (It should be small enough that there is not
too much ``wrapping'', but not so small that there is no $r \leq r_*$
satisfying the hypothesis of Theorem \ref{thm:NK}).
Using interval
arithmetic and validated numerical integration we compute a length-$24$
interval vector $\mathbf{F}$ such that
\begin{equation*}
F(\overline{x}) \in \mathbf{F},
\end{equation*}
and a $24 \times 24$ interval matrix $\mathbf{M}$
such that
\begin{equation*}
DF(x) \in \mathbf{M} \quad \quad \mbox{for all } x \in B.
\end{equation*}
We then check, again using interval arithmetic, that
\begin{equation*}
\|A \mathbf{F} \| \in 10^{-12} \times
[ 0.0, 0.26850976470521]
\end{equation*}
and that
\begin{equation*}
\|\mbox{Id} - A \mathbf{M} \| \in 10^{-7} \times
[ 0.0, 0.23119622467860].
\end{equation*}
From these we have
\begin{equation*}
\|A F(\overline{x}) \| \leq Y < 0.269 \times 10^{-12}
\end{equation*}
and
\begin{equation*}
\sup_{x \in B} \|\mbox{Id} - A DF(x)\|\leq Z < 0.232 \times 10^{-7},
\end{equation*}
though the actual bounds stored in the computer are tighter than those
just reported (hence the inequality).
We let
\begin{equation*}
r = \sup\left( \frac{Y}{1- Z}\right) \leq 2.7 \times 10^{-13},
\end{equation*}
and note again that the actual bound stored in the computer is
smaller than reported here.
We then check, using interval arithmetic, that
\begin{equation*}
Z r - r + Y \leq - 5.048 \times 10^{-29} < 0.
\end{equation*}
We also note that, since $r \leq r_*$, we have that $\overline{B}(\overline{%
x}, r) \subset B$, so that
\begin{equation*}
\sup_{x \in \overline{B}(\overline{x}, r)} \| \mbox{Id} - A DF(x)\|
\leq Z,
\end{equation*}
on the smaller ball as well.
From this we conclude, via Theorem \ref{thm:NK}, that there exists a unique $%
x_* \in \overline{B}(\overline{x}, r) \subset \mathbb{R}^{24}$
so that $F(x_*) =0$, and moreover that $DF(x_*)$ is invertible.
Hence, it now follows from Lemma \ref{lem:collision-connections} that there exists a transverse ejection-collision
from $m_2$ to $m_1$ in the PCRTBP.
Note that the integration time in the standard coordinates
\begin{equation*}
\bar \tau = 2.051635871465197,
\end{equation*}
is one of the variables of $F$ (we are simply reading this off the table).
The rescaled integration time in the regularized coordinates is fixed to be $%
s = 0.35$. Our programs compute validated bounds on the integrals in Equation %
\eqref{eq:time-between-collisions} and provide interval enclosures for the time each orbit spends in the regularized coordinate systems of $%
m_1$ and $m_2$ respectively. This interval enclosure is
\begin{equation*}
T_1 + T_2 \in [ 0.27116751585137, 0.27116751585615] + [ 0.10430261063473, 0.10430261063793].
\end{equation*}
Since the true integration time $\tau_*$ is in an $r$-neighborhood
of $\bar\tau$ it follows that
\begin{equation*}
\tau_* \in [ 2.05163587146492, 2.05163587146547].
\end{equation*}
Interval addition of the three time intervals
containing $T_1$, $T_2$ and $\tau_*$
provides the desired final bound on the total time of flight
given in the theorem.
The connection in the other direction follows from the $S$-symmetry of the system (see Equation \eqref{eq:symmetry-prop}). The computational part of the proof is implemented in IntLab running under MatLab, and took 21 minutes
to run on a standard desktop computer.
\end{proof}
\medskip
The orbit whose existence is proven in Theorem \ref{thm:ejectionCollision} is illustrated in Figure \ref{fig:ejectionCollisions} (lower orbit of the two orbits illustrated in the figure). The higher orbit follows from the $S$-symmetry of the PCRTBP.
We remark that our implementation actually subdivides each time step $s = 0.35$ in regularized
coordinates into 50 substeps, while the time step $\bar \tau$ is subdivided into 200 substeps.
This only enlarges the size of
the system of equations as discussed in Remark \ref{rem:additionalShooting}.
Validation of the $50 + 200 + 50 = 300$ steps of Taylor integration, along with the spatial and
parametric variational equations, takes most of the computational
time for the proof.
The choice of the mass $\mu = 1/4$ and the energy $c = 3.2$ was more or less arbitrary and the existence of many similar orbits could be proven using the same method.
\subsection{Connections between ejections/collisions and the libration
points $L_4$, $L_5$} \label{sec:EC_to_L4_proofs}
We apply the methodology of Section \ref{sec:L4_to_collision}, and
especially Lemma \ref{lem:Li-collisions}, in conjunction with Theorem \ref%
{thm:NK} to obtain the following result. The
local stable (or unstable) manifolds at $L_4$ are computed
using the methods and implementation of \cite{MR3906230}.
See \ref{sec:manifolds} for a few additional remarks concerning the parameterizations.
\begin{maintheorem}
\label{thm:CAP-L4-to-collision} Consider the planar PCRTBP with $\mu = 1/2$
and $c = 3$, the energy of $L_4$. Let
\[
\overline{p} = \left(
\begin{array}{c}
\phantom{-}0.003213450375413 \\
\phantom{-}0.197716496638868 \\
-0.404375730348827 \\
\phantom{-}0.696149210661807 \\
\end{array}
\right),
\]
\[
r = 8.2 \times 10^{-12},
\]
and
\[
B_r = \left\{ x \in \mathbb{R}^4 \, \colon \, \| x - \bar{p}\| \leq r \right\}.
\]
Then there exists a unique point
\[
p_* \in B_r
\]
such that the orbit of $p_*$ accumulates to $L_4$ as $t \to - \infty$, collides with
$m_1$ (located at $x = \mu, y= 0$) in finite forward time, and the unstable manifold of $L_4$ intersects
the collision set of $m_1$ transversely along the orbit of
$p_*$, where transversality is relative to level set
$\setof*{E = 3}$.
\end{maintheorem}
\begin{table}[t!]
{\scriptsize
\begin{tabular}{cllll}
\hline
$\overline{x}_0 = $ & $\phantom{(-}0.329444389425640$ & & & \\
$\overline{x}_1 = $
& $( -0.032305434322402,$ &
$-0.044152238388004,$ & $\phantom{-}0.843244687835647,$ & $0.005057045291404)$ \\
$\overline{x}_2 = $ & $( \phantom{-}0.003213450375413,$ &
$\phantom{-}0.197716496638868,$ &
$-0.404375730348827,$ &
$0.696149210661807)$ \\
$\overline{x}_3 = $ & $ ( \phantom{-}0.268116630482827,$ &
$-0.943915863314079,$ &
$-0.754104155383092,$ &
$0.671496024758153)$ \\
$\overline{x}_4 = $ & $\phantom{(-}1.696671399505923$ & & & \\
$\overline{\tau} = $ & $\phantom{(-}7.034349085576677$ & & & \\
$\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\
\hline
\end{tabular}}
\caption{ Numerical data providing an approximate zero of the
map $F^u_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator},
for $i = 1$, $j= 4$, $c=3$, $\mu=1/2$ and $s = 0.5$.
The data is used in the proof of Theorem \protect\ref{thm:CAP-L4-to-collision},
and results in the existence of the $L_4$ to collision orbit illustrated in the right frame of
Figure \ref{fig:EC_to_collision}. \label{tab:L4-to-collisionTab1} }
\end{table}
\begin{proof}
The proof is similar to the proof of Theorem \ref{thm:ejectionCollision},
and we only sketch the argument. Orbits accumulating
to $L_4$ in backward time and colliding with $m_1$ are equivalent to
zeros of the mapping
$F^u_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator} with $j = 4$
and $i = 1$. We also set the parameter $s = 0.5$, which is the integration time
in the regularized coordinates.
The first step is to compute a numerical zero
$\bar{x} = (\bar x_0,\bar x_1,\bar x_2,\bar x_3,\bar x_4,\bar \tau,\bar \alpha) \in \mathbb{R}^{16}$ of $F^u_{i,j}$.
This step exploits Newton's method (no interval arithmetic necessary), and
the resulting data is reported in Table \ref{tab:L4-to-collisionTab1}.
Note that $\bar x_2 \in \mathbb{R}^4$ from the table is the initial condition $\bar{p}$
in the statement of the theorem.
We take $A$ to be a numerically computed approximate
inverse of the $16 \times 16$ matrix $DF_{i,j}^u(\bar{x})$. Again, the
definition of $A$ does not require interval arithmetic.
For the next step we compute interval enclosures of $F_{i,j}^u(\bar{x})$
and of $DF_{i,j}^u(x)$ for $x$ in a cube of radius
$r_* = 5 \times 10^{-9}$ and obtain that
\[
\|A F_{i,j}^u(\bar{x})\| \in 10^{-11} \times
[ 0.0, 0.82147145471154],
\]
and that
\[
\sup_{x \in B_{r_*}(\bar{x})} \| \mbox{Id} - A D F_{i,j}^u(x) \|
\in [ 0.0, 0.00151459031904].
\]
Using interval arithmetic we compute
\[
r = \frac{Y}{1-Z} \leq
8.3 \times 10^{-12},
\]
where the actual value stored in the computer is smaller than reported
here (and hence the inequality). We then check, using interval arithmetic,
that $Z r - r + Y < 0$. Since $r < r_*$, we have that
there exists a unique $x_* \in B_r(\bar{x})$ so that
$F_{i,j}^u(x_*) = 0$.
Moreover, transversality follows from the non-degeneracy of the derivative
of $F_{i,j}^u$.
The proof is
implemented in IntLab running under MatLab, and took about 30 minutes
to run on a standard desktop computer.
\end{proof}
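The quantities in this proof follow the usual a posteriori pattern: an upper bound $Y$ on $\|A F_{i,j}^u(\bar x)\|$, a bound $Z < 1$ on $\|\mbox{Id} - A DF_{i,j}^u\|$ over the ball, a radius $r$ slightly larger than $Y/(1-Z)$, and the check $Zr - r + Y < 0$. A floating-point sketch with the bounds reported above (the function name and the inflation factor are ours; the actual verification is done in interval arithmetic):

```python
def validation_radius(Y, Z, inflate=1e-6):
    """Return a radius r slightly larger than Y/(1-Z) and verify the
    contraction inequality Z*r - r + Y < 0, which (together with r < r_*)
    yields a unique zero within distance r of the approximate solution.
    Sketch only: the proofs evaluate these expressions in interval
    arithmetic so that rounding errors are rigorously controlled."""
    assert 0.0 <= Z < 1.0, "need the defect bound Z strictly below 1"
    r = (Y / (1.0 - Z)) * (1.0 + inflate)
    assert Z * r - r + Y < 0, "contraction inequality fails"
    return r

# upper bounds reported in the proof above
Y = 0.82147145471154e-11
Z = 0.00151459031904
r = validation_radius(Y, Z)
print(r)  # consistent with the bound r <= 8.3e-12
```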
By replacing the operator $F_{i,j}^u$ with
the operator $F^s_{i,j}$
defined in Equation \eqref{eq:EC_to_L4_operator}, again with $j = 4$
and $i = 1$, we obtain a nonlinear map whose
zeros correspond to ejection-to-$L_4$ orbits.
We compute an approximate numerical
zero of the resulting operator (the numerical data is given in Table
\ref{tab:m1-to-L4_Tab3}) and repeat a nearly identical argument to
that above. This results in the existence of a transverse
ejection-to-$L_4$ orbit in the PCRTBP with $\mu = 1/4$ and $c=3$.
The validated error bound for the numerical data has
\[
r \leq 1.8 \times 10^{-11},
\]
so that the desired orbit passes within an $r$-neighborhood of the point
\[
\bar{p} =
\left(
\begin{array}{c}
-0.112449038686947 \\
-0.553321424594493 \\
\phantom{-} 0.308527098616200 \\
\phantom{-}0.727049637558896 \\
\end{array}
\right).
\]
In this way we prove the existence of both the orbits illustrated in Figure \ref%
{fig:EC_to_collision}. More precisely, the orbit whose existence is established in
Theorem \ref{thm:CAP-L4-to-collision} is illustrated in the right frame of the figure,
and the orbit discussed in the preceding remarks is illustrated in the left frame.
\begin{table}[t!]
{\scriptsize
\begin{tabular}{cllll}
\hline
$\overline{x}_0 = $ & $\phantom{(-}1.561515178070094$ & & & \\
$\overline{x}_1 = $
& $( \phantom{-}0.0,$ & $ \phantom{-}0.018562030958889,$ & $0.0,$ & $
1.999913860896684)$ \\
$\overline{x}_2 = $
& $( \phantom{-}0.191471460280817,$ & $\phantom{-}0.959639244531484,$ &
$0.805673853857139,$ & $1.170011720749615)$ \\
$\overline{x}_3 = $
& $(-0.112449038686946,$ & $-0.553321424594493,$ &
$0.308527098616200,$ & $0.727049637558895)$ \\
$\overline{x}_4 = $
& $\phantom{(-}5.229765599216696$ & & & \\
$\overline{\tau} = $ & $\phantom{(-}4.673109099822270$ & & & \\
$\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\
\hline
\end{tabular}
}
\caption{
Numerical data for an approximate zero of the
map $F^s_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator},
with $i = 1$, $j= 4$ and $s = 0.5$.
An argument similar to the proof of Theorem \ref{thm:CAP-L4-to-collision},
using the data in the table, leads to an existence proof for the
ejection-to-$L_4$ orbit illustrated in the left frame of Figure \ref{fig:EC_to_collision}.
\label{tab:m1-to-L4_Tab3}}
\end{table}
\subsection{Transverse homoclinics for $L_4$ and $L_5$} \label{sec:homoclinicProofs}
Combining the methodology of Section \ref{sec:L4_to_collision}, and
especially Lemma \ref{lem:Li-collisions}, with Theorem \ref{thm:NK} we
obtain the following result.
\begin{maintheorem}
\label{thm:CAP-connections} Consider the planar PCRTBP with $\mu = 1/2$ and
$c = 3$, the energy level of $L_4$.
Let
\[
\bar{p} =
\left(
\begin{array}{c}
-0.037058535628028 \\
-0.007623220519232 \\
\phantom{-}0.873641524369283 \\
\phantom{-}0.033084516464648
\end{array}
\right),
\]
and
\[
B_r = \left\{ x \in \mathbb{R}^4 \, : \, \| x - \bar{p} \| \leq r \right\},
\]
where
\[
r = 1.6 \times 10^{-9}.
\]
Then there exists a unique $p_* \in B_r$ so that the orbit of $p_*$ is homoclinic to $L_4$ and $W^s(L_4)$ intersects $W^u(L_4)$ transversely along the
orbit of $p_*$, where transversality is relative to the level set $\setof*{E = 3}$.
\end{maintheorem}
\begin{table}[t!]
{\scriptsize
\begin{tabular}{cllll}
\hline
$\overline{x}_0 = $ & $ \phantom{(-}1.411845524482813$ & & & \\
$\overline{x}_1 = $
& $(-0.037058535628028,$ & $-0.007623220519232,$ & $\phantom{-}0.873641524369283,$ & $\phantom{-}0.033084516464648)$ \\
$\overline{x}_2 = $ & $( -0.243792823114517,$ & $-1.231115802740768,$ & $\phantom{-}0.191555403283542,$ & $-0.508371511645513)$ \\
$\overline{x}_3 = $ & $(\phantom{-} 0.536705934592082,$ & $-1.502936895854406,$ & $\phantom{-}0.178454709494811,$ & $-0.106295188690239)$ \\
$\overline{x}_4 = $ & $( -0.504618223339967,$ & $-0.258236025635830,$ & $-0.463683951257916,$ & $-1.155517796520023)$ \\
$\overline{x}_5 = $ & $( -0.460363255327369,$ &$-0.431694933697799,$ & $\phantom{-}0.467966743350051,$ & $\phantom{-}0.748266448178995)$ \\
$\overline{x}_6 = $ & $\phantom{(-} 5.988827136344083$ & & & \\
$\overline{\tau} = $ & $\phantom{(-}4.753189987600258$ & & & \\
$\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\
\hline
\end{tabular}
}
\caption{
Numerical data for the proof of Theorem \protect\ref{thm:CAP-connections},
which provides an approximate zero of the
$L_4$ homoclinic map $F_{i,j,k}$ defined in Equation \eqref{eq:homoclinicOperator},
when $i = k = 4$, $j = 2$, $s_1 = 1.8635$, and $s_2 = 5$. The orbit is depicted on the right plot in Figure \ref{fig:PCRTBP_L4_homoclinics}.
\label{tab:L4-homoclinic}}
\end{table}
\begin{proof}
As in the earlier cases, the argument hinges on proving the existence of a
zero of a suitable nonlinear mapping, in this case the map
$F_{i,j,k}$ defined in Equation \eqref{eq:homoclinicOperator},
with $i = k = 4$ and $j = 2$. The integration time parameters
are set as $s_1 = 1.8635$ and $s_2 = 5$; these are the flow times
in the regularized coordinates and in the original coordinates,
respectively.
With these choices, a zero of $F_{4,2,4}$ corresponds
to an orbit homoclinic to $L_4$ which passes through the
Levi-Civita coordinates regularized at $m_2$.
The numerical data $\bar x \in \mathbb{R}^{24}$ providing an approximate zero of $F_{4,2,4}$ is
reported in Table \ref{tab:L4-homoclinic}.
Note that $\bar x_1$ corresponds to $\bar{p}$ in the hypothesis of the theorem.
We let $A$ be a numerically computed approximate
inverse of the matrix $DF_{4,2,4}(\bar{x})$. The table data and the
matrix $A$ are computed using a numerical Newton scheme, and
standard double precision floating point operations.
Using validated numerical integration schemes, validated bounds on
the local stable/unstable manifold parameterizations, and interval arithmetic,
we compute interval enclosures of $ F_{4,2,4}(\bar{x})$ and of
$D F_{4,2,4}(B_r(\bar{x}))$,
where $r = 1.659487745915747 \times 10^{-9}$. We then check that
\[
\| A F(\bar{x}) \| \in 10^{-8} \times
[ 0.0, 0.16432156145308],
\]
and that
\[
\sup_{x \in B_r(\bar{x})} \| \mbox{Id} - A D F_{4,2,4}(x) \| \in
[ 0.0, 0.00980551463848].
\]
Finally, we use interval arithmetic to verify that $Z r - r + Y < 0$; transversality
follows as in the earlier cases, which completes the proof.
\end{proof}
\bigskip
Note that, from a numerical perspective, this is the most difficult computer
assisted argument presented so far. This is seen in the fact that $Z \approx 10^{-2}$
and $r \approx 10^{-9}$; these constants are roughly three orders of magnitude
worse than in the previous theorems. On the other hand, the orbit itself is
more complicated than those in the previous theorems. We note that the accuracy
of the result could be improved by taking smaller integration steps and/or
using higher order Taylor approximation. However, this would also increase the required computational time.
Now, by symmetry, the result above gives a transverse homoclinic orbit for
$L_5$ which passes near $m_1$. We also observe that each of these transverse homoclinic orbits
satisfies the hypotheses of the theorems of Devaney and Henrard discussed in
Section \ref{sec:intro}. In particular, Theorem \ref{thm:CAP-connections} also proves the existence of a chaotic
subsystem in the $c = 3$ energy level of the PCRTBP near the orbit of $p_*$, and a tube of
periodic orbits parameterized by the Jacobi constant which accumulate to the homoclinic orbit through $p_*$.
We remark that, using similar arguments, we are able to prove also the
existence and transversality of the homoclinic orbits in the left and center frames of
Figure \ref{fig:PCRTBP_L4_homoclinics}.
More precisely, let
\[
\bar{p}_1 = \left(
\begin{array}{c}
-0.033854025583296 \\
-0.043110876471418 \\
\phantom{-}0.844639632487862 \\
\phantom{-}0.007320747846173
\end{array}
\right), \quad \quad \quad
\bar{p}_2 = \left(
\begin{array}{c}
\phantom{-}0.029871559148065 \\
-0.006337684774610 \\
\phantom{-} 0.850175365286339 \\
-0.034734413580682
\end{array}
\right),
\]
and
\[
r_1 = 2.03 \times 10^{-10}, \quad \quad \quad
r_2 = 1.84 \times 10^{-8}.
\]
Then there exist unique points $p^1_* \in B(\bar{p}_1, r_1)$ and
$p_*^2 \in B(\bar{p}_2, r_2)$ so that $W^{s,u}(L_4)$ intersect transversely
along the orbits through these points. It is also interesting to note that $r_2$ is two orders of magnitude larger than $r_1$. This is
caused by the fact that the time of flight (integration time)
is longer in this case and, more importantly, the fact that the second orbit passes very close to $m_1$. Indeed,
the error bounds for the second orbit would very likely be improved by changing to
regularized coordinates near $m_1$, and this may even be necessary to validate some homoclinics passing even closer to $m_1$ or $m_2$. Nevertheless, since we were able to validate these orbits
in standard coordinates, we have not done so here.
The orbit of $p_*^1$, illustrated in the left frame of Figure \ref{fig:PCRTBP_L4_homoclinics},
appears to have $y$-axis symmetry; however, we do not use this symmetry, nor do we rigorously prove its existence. The
orbit of $p_*^2$, illustrated in the center frame of Figure \ref{fig:PCRTBP_L4_homoclinics}, has no apparent symmetry.
The orbits illustrated in the left and center frames have appeared previously in the
literature, as remarked in Section \ref{sec:intro}. However, to the best of our knowledge this is the first mathematically rigorous proof of their existence.
\subsection{Periodic orbits passing through collision} \label{sec:PO_collisions}
We apply the methodology of Section \ref{sec:symmetric-orbits}, namely Lemma %
\ref{lem:Lyap-existence} and Theorem \ref{th:Lyap-through-collision}, with
Theorem \ref{thm:NK} to obtain the following result. We consider the
Earth-Moon mass ratio largely for the sake of variety.
\begin{maintheorem}
\label{th:CAP-Lyap}Consider the Earth-Moon system\footnote{%
So named because this is the approximate mass ratio of the Moon relative to the Earth.} where $m_2$ has mass $\mu =0.0123/1.0123$ and $%
m_{1}$ has mass $1-\mu$. Let\footnote{%
In fact, our numerical calculations suggest that a more accurate value of
the Jacobi constant for which we have the collision is $1.434045949300768$.
However, since in the theorem we obtain only interval results, we round $%
c_{0}$ so that digits smaller than the width of the interval are not used.}%
\begin{equation*}
c_{0}=1.4340459493,\qquad \text{and\qquad }\delta =10^{-11}.
\end{equation*}%
There exists a single value $c^{\ast }\in \left( c_{0}-\delta ,c_{0}+\delta
\right) $ of the Jacobi integral, for which we have an orbit along the
intersection of the ejection and collision manifolds of $m_{1}$. Moreover,
for every $c\in \left[ c_{0}-\delta ,c_{0}+\delta \right] \setminus \left\{
c^{\ast }\right\} $ we have an $S$-symmetric Lyapunov orbit that passes
close to the collision with $m_{1}$. In addition, for every $c\in \left\{ 1.2,1.25,1.3,\ldots ,1.65\right\} $
there exists a Lyapunov orbit which passes close to the collision with
$m_{1}$. (These orbits are depicted in Figure \ref{fig:Lyap}.)
\end{maintheorem}
\begin{table}
{\scriptsize
\begin{tabular}{r l l l l}
\hline
$\bar x_0 = $ & \phantom{-(}0.0 & & & \\
$\bar x_1 = $ & (\phantom{-}0.0, & \phantom{-}2.8111911379251, & \phantom{-}0.0, &\phantom{-}0.0) \\
$\bar x_2 = $ & (\phantom{-}0.96886794638213, & -0.3219837525934, & -0.52587590839627, & -2.8644348266831) \\
$\bar x_3 = $ & (\phantom{-}0.67431017475157, & -0.74811608844773, & -1.0190086228395, & -1.0721803622694) \\
$\bar x_4 = $ & (-1.0199016713004, & \phantom{-}0.72482377063238, & -0.062207790440189, & \phantom{-}1.1639536137604) \\
$\bar x_5 = $ & (\phantom{-}0.1377088390491,& -0.32616835939217, & -0.22586709346235, & \phantom{-}0.6480010784062) \\
$\bar x_6 = $ & \phantom{(-}0.070375791076957 \\
$\bar \tau = $ & \phantom{(-}2.0972398526268 \\
$\bar \alpha = $ & \phantom{(-}0.0 \\
\hline
\end{tabular}
}
\caption{Numerical data for the proof of Theorem \protect\ref{th:CAP-Lyap}, which gives an approximate solution to $F_c=0$ for the operator (\ref{eq:Fc-choice}), for which we have a collision of the family of Lyapunov orbits with $m_1$ for the Earth-Moon system (see Figure \ref{fig:Lyap}). This occurs for a unique value of the Jacobi constant $c^* \in \mathbf{c}$. \label{tabl:Lyap}}
\end{table}
\begin{proof}
The orbits for the Jacobi integral values in $\mathbf{c}:=\left[
c_{0}-\delta ,c_{0}+\delta \right] $ were established by means of Theorems %
\ref{th:Lyap-through-collision} and \ref{thm:aPosteriori}. We have first
pre-computed numerically (through a standard, non-interval, numerical
computation) an approximation $\mathbf{\bar{x}}\in \mathbb{R}^{22}$, $\bar{%
\tau}\in \mathbb{R}$ for the functions $\mathbf{x}\left( c\right) $ and $%
\tau \left( c\right) $, for $c\in \mathbf{c}$. (The $\mathbf{\bar{x}}$ and $%
\bar \tau $ are written out in Table \ref{tabl:Lyap}.) We then took $%
\bar x:=\left( \mathbf{\bar x},\bar \tau ,0\right) \in \mathbb{R}^{24},$ and
a ball $\overline{B}\left( \bar x,r\right) $, in the maximum norm, with $%
r=10^{-11}.$ We established using Theorem \ref%
{thm:aPosteriori} that $\mathbf{x}\left( c\right) $ and $\tau \left(
c\right) $ satisfying
\begin{equation*}
F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right)
=0,\qquad \text{for }c\in \mathbf{c},
\end{equation*}%
are $r$-close to $\mathbf{\bar{x}}$ and $\bar{\tau}.$ To apply Theorem \ref%
{thm:aPosteriori} we have taken the matrix $A$ to be an approximation of $%
\left( DF_{c}(\mathbf{\bar{x}},\bar{\tau},0)\right) ^{-1}$ (computed with
standard numerics, without interval arithmetic).
We also checked using interval arithmetic that%
\begin{eqnarray*}
x_{0}\left( c_0-\delta \right) &\in &[3.2261\cdot 10^{-12},5.2262\cdot
10^{-12}]>0, \\
x_{0}\left( c_0+\delta \right) &\in &[-4.6229\cdot 10^{-12},-2.6228\cdot
10^{-12}]<0.
\end{eqnarray*}
By using Equation \eqref{eq:implicit-dx-dc}, we have established the following
interval arithmetic bound for the derivative of $x_0$ with respect to the
parameter
\begin{equation*}
\frac{d}{dc}x_{0}\left( c\right) \in \left[ -0.53146,-0.25344\right]
<0\qquad \text{for }c\in \mathbf{c}.
\end{equation*}%
We also verified that%
\begin{equation*}
x_{6}\left( c\right) \in \left[ 0.07037579,0.07037580\right] ,\qquad \text{%
for }c\in \mathbf{c},
\end{equation*}%
so $x_{6}\left( c\right) \neq 0$. This proves that all necessary hypotheses of Theorem \ref%
{th:Lyap-through-collision} are satisfied for the interval $\mathbf{c}$,
which finishes the first part of the proof.
The Lyapunov orbits for $c\in \left\{ 1.2,1.25,1.3,\ldots ,1.65\right\} $
were established in a similar way. For each value of the Jacobi constant we
have non-rigorously computed an approximation of a point for which $F_{c}$
is close to zero, and validated that we have $F_{c}=0$ for a point in a
given neighbourhood of each approximation by means of Theorem \ref%
{th:Lyap-through-collision}. Then each Lyapunov orbit followed from Lemma %
\ref{lem:Lyap-existence}. The proof was conducted by using the CAPD library \cite{CAPD_paper} and took
under 4 seconds on a standard laptop.
\end{proof}
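The first part of the argument above is a monotone intermediate-value check: $x_0$ changes sign across $\mathbf{c}$ and its derivative is uniformly negative, so the zero $c^*$ is unique. Schematically, with intervals encoded as pairs and a helper name of our own choosing:

```python
def unique_zero_in_interval(f_at_left, f_at_right, df_range):
    """Certify a unique zero of a C^1 function on a parameter interval from
    interval data: f_at_left and f_at_right enclose the values at the two
    endpoints, df_range encloses the derivative on the whole interval.
    Mirrors the checks on x_0(c) in the proof; names are ours."""
    sign_change = f_at_left[0] > 0 and f_at_right[1] < 0
    monotone = df_range[1] < 0          # derivative strictly negative
    # strict monotone decrease + sign change => exactly one zero
    return sign_change and monotone

# interval bounds reported in the proof
left  = (3.2261e-12, 5.2262e-12)        # x_0(c_0 - delta)
right = (-4.6229e-12, -2.6228e-12)      # x_0(c_0 + delta)
deriv = (-0.53146, -0.25344)            # d x_0 / dc on the whole interval
print(unique_zero_in_interval(left, right, deriv))  # True
```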
In a similar way we have used the operator in Equation \eqref{eq:Fc-equal} to prove the
following result.
\begin{table}
{\scriptsize
\begin{tabular}{r l l l l}
\hline
$\bar x_0 = $ & \phantom{(-}2.1500812504263 & & & \\
$\bar x_1 = $ & (\phantom{-}0.0, &\phantom{-}1.9284591731628, & \phantom{-}2.1500812504263, &\phantom{-}0.0) \\
$\bar y_1 = $ & (\phantom{-}0.69048473611567, &\phantom{-}1.7931365837031, &\phantom{-}2.0235432631366, &-0.68131264815823) \\
$\bar y_2 = $ & (\phantom{-}1.2840491252838, &\phantom{-}1.4060903194974, &\phantom{-}1.6633024005717, &-1.2578372410208) \\
$\bar y_3 = $ &(\phantom{-}1.6975511373876, & \phantom{-}0.82331762641153, &\phantom{-}1.1255430505039, &-1.635312833307) \\
$\bar y_4 = $ &(\phantom{-}1.8749336204161, &\phantom{-}0.13626785074409, & \phantom{-}0.4974554541058, &-1.7408028751654) \\
$\bar y_5 = $ & (\phantom{-}1.7998279644685, &-0.53073278614628,& -0.11297480280335, & -1.5366473737295) \\
$\bar y_6 = $ &(\phantom{-}1.5061749347656, & -1.0305902992759,& -0.59342931060715, & -1.0405479042095) \\
$\bar y_7 = $ &(\phantom{-}1.0818972907729,& -1.2225719420862, &-0.85102013466618, &-0.34180581034401) \\
$\bar y_8 = $ & (\phantom{-}0.65897461363208,& -1.0129455565064, &-0.83705911740279, &\phantom{-}0.41484122714387) \\
$\bar x_2 = $ & (\phantom{-}0.39363679634804, &-0.35214129843918,& -0.55459777216455, &\phantom{-}1.118144276789) \\
$\bar x_3 = $ & (\phantom{-}0.47871801188109, &-1.6325298121847, &-0.5792530867862, &\phantom{-}0.66259374214967) \\
$\bar x_4 = $ & (\phantom{-}{\bf 0.40239981358785},& -1.0164469492932, &\phantom{-}0.0, &\phantom{-}1.7224504635177) \\
$\bar x_5 = $ & (-0.25865224139372,& -0.43561054122851,& \phantom{-}0.51876042853484, &\phantom{-}1.7861707478994) \\
$\bar x_6 = $ &(\phantom{-}0.29778859976434,& -1.2111468567795, &-0.2683570951738, &-1.0237309759288) \\
$\bar x_7 = $ & \phantom{(}-0.38367247647373 & & & \\
$\bar \tau = $ & \phantom{(-}0.24444305938687 & & & \\
$\bar \alpha = $ & \phantom{(-}0.0 & & & \\
\hline
\end{tabular}
}
\caption{Numerical data for the proof of Theorem \protect\ref{thm:doubleCollision} giving an approximate solution to $F_c=0$, for the operator (\ref{eq:Fc-equal}), for $c=2.05991609689$ for which we have a double collision of a family of $R$-symmetric periodic orbits for the equal masses system; see Figure \ref{fig:eq}. In the bold font we have singled out the first coefficient of $x_4$, which is the time $s_4$ and not the physical coordinate of the collision point, for which we have $\hat x=0$. (See Equations \eqref{eq:s4} and \eqref{eq:Fc-equal}.)\label{tabl:eq}}
\end{table}
\begin{maintheorem} \label{thm:doubleCollision}
Consider the equal masses system where $\mu =\frac{1}{2}$. Let%
\footnote{%
We believe that a more accurate value of the Jacobi constant for which we
have the double collision is $2.059916096889689$.}%
\begin{equation*}
c_{0}=2.05991609689,\qquad \text{and\qquad }\delta =10^{-11}.
\end{equation*}%
There exists a single value $c^{\ast }\in \left( c_{0}-\delta ,c_{0}+\delta
\right) $ of the Jacobi integral, for which we have two intersections of the
ejection and collision manifolds of $m_{1}$ and $m_{2}$ (a double
collision). Moreover, for every $c\in \left[ c_{0}-\delta ,c_{0}+\delta %
\right] \setminus \left\{ c^{\ast }\right\} $ we have an $R$-symmetric
periodic orbit that passes close to the collision with both $m_{1}$ and $%
m_{2}$.
In addition, for every $c\in \left\{ 2,2.05,2.1,2.15,2.2\right\} $ there
exists an $R$-symmetric periodic orbit which passes close to the collisions
with $m_{1}$ and $m_{2}$. (See Figure \ref{fig:eq}.)
\end{maintheorem}
\begin{proof}
The proof follows along the same lines as the proof of Theorem \ref%
{th:CAP-Lyap}. We do not write out the details of all the estimates since we
feel that this brings little added value\footnote{%
The code for the proof is made available on the personal web page of Maciej
Capi\'{n}ski.}. In the operator $F_{c}$ from Equation \eqref{eq:Fc-equal} we have
taken $s_{2}=3.3$ and $s_{5}=0.3$. The fact that $s_{2}$ involves a long
integration time caused a technical problem for us in obtaining an estimate
for $\frac{d}{dc}\pi _{\hat{y}}\mathbf{x}\left( c\right) $. To get a good
enough estimate to establish that $\frac{d}{dc}\pi _{\hat{y}}\mathbf{x}%
\left( c\right) >0$ we needed to include additional points $y_{1},\ldots
,y_{m}$ in the shooting scheme and extend $F_{c}$ to include
\begin{equation*}
\phi _{\alpha }\left( x_{1},s\right) -y_{1},\quad \phi _{\alpha }\left(
y_{1},s\right) -y_{2},\quad \ldots \quad \phi _{\alpha }\left(
y_{m-1},s\right) -y_{m},\quad \phi _{\alpha }\left( y_{m},s\right) -x_{2},
\end{equation*}%
where $s=s_{2}/\left( m+1\right) .$ We took $m=8$, and the point $X_{0}$
which serves as our approximation for $F_{c}=0$ is written out in Table \ref%
{tabl:eq}. The proof took under 10 seconds on a standard laptop.
\end{proof}
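The extra shooting points enter the extended map in a simple chain. The following sketch illustrates that structure with a generic flow map; here \texttt{phi} stands in for the flow $\phi_\alpha$ of the paper, and the toy exponential flow and helper name are our illustration:

```python
import math

def shooting_residuals(phi, x1, x2, ys, s_total):
    """Residuals of the multiple-shooting chain
       phi(x1, s) - y_1, phi(y_1, s) - y_2, ..., phi(y_m, s) - x2,
    with s = s_total/(m+1), as in the extension of F_c above."""
    m = len(ys)
    s = s_total / (m + 1)
    pts = [x1] + list(ys) + [x2]
    return [phi(pts[k], s) - pts[k + 1] for k in range(m + 1)]

# toy check with the scalar flow phi(x, t) = x * exp(t): on the true
# orbit every residual vanishes (up to rounding)
phi = lambda x, t: x * math.exp(t)
s_total, m = 3.3, 8              # s_2 = 3.3 and m = 8 as in the proof
s = s_total / (m + 1)
x1 = 1.0
ys = [math.exp(s * (k + 1)) for k in range(m)]   # exact intermediate points
x2 = math.exp(s_total)
res = shooting_residuals(phi, x1, x2, ys, s_total)
print(max(abs(v) for v in res))  # tiny: the chain closes on the true orbit
```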
\begin{remark}[MatLab with IntLab versus CAPD]
{\em
We note that the computer programs implemented in $C^{++}$ using the
CAPD library run much faster than the programs implemented in MatLab using
IntLab to manage the interval arithmetic. This is not surprising, as compiled
programs typically run several hundred times faster than MatLab programs,
and the use of interval arithmetic only complicates things. Moreover,
CAPD is a well tested, optimized, general purpose package, while our
IntLab codes were written specifically for this project; in particular, little time
has been spent on optimizing them. Thanks to its efficient integrators, the CAPD library allowed us to perform all of the proofs without subdividing the time steps (which was needed for the MatLab code; see Remark \ref{rem:additionalShooting} and the comments at the end of Section \ref{sec:EC}), the only exception being the proof of Theorem \ref{thm:doubleCollision} (see Table \ref{tabl:eq}). Nevertheless, it is nice to have
rigorous integrators implemented in multiple languages, and the codes for
validating the 2D stable/unstable manifolds at $L_4$ were written in IntLab
and have not been ported to $C^{++}$.
}
\end{remark}
\usepackage{amsmath,amsthm,amssymb,amscd}
\newcommand{\mathcal E}{\mathcal E}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{result}[theorem]{Result}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{xca}[theorem]{Exercise}
\newtheorem{problem}[theorem]{Problem}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{conj}[theorem]{Conjecture}
\numberwithin{equation}{section}
\allowdisplaybreaks
\begin{document}
\title[number of complete subgraphs of Peisert graphs]
{number of complete subgraphs of Peisert graphs and finite field hypergeometric functions}
\author{Anwita Bhowmik}
\address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA}
\email{anwita@iitg.ac.in}
\author{Rupam Barman}
\address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA}
\email{rupam@iitg.ac.in}
\subjclass[2020]{05C25; 05C30; 11T24; 11T30}
\date{9th May 2022}
\keywords{Peisert graphs; clique; finite fields; character sums; hypergeometric functions over finite fields}
\begin{abstract}
For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$.
The Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$.
We provide a formula, in terms of finite field hypergeometric functions, for the number of complete subgraphs of order four contained in $P^\ast(q)$.
We also give a new proof for the number of complete subgraphs of order three contained in $P^\ast(q)$ by evaluating certain character sums. The computations for the number of complete subgraphs of order four are quite tedious,
so we further give an asymptotic result for the number of complete subgraphs of any order $m$ in Peisert graphs.
\end{abstract}
\maketitle
\section{introduction and statements of results}
The arithmetic properties of Gauss and Jacobi sums have a very long history in number theory, with applications in Diophantine equations and the theory of $L$-functions. Recently, number theorists have obtained generalizations of classical hypergeometric functions that are assembled with these sums, and these functions have led to applications in graph theory. Here we make use of these functions, as developed by Greene, McCarthy, and Ono \cite{greene, greene2,mccarthy3, ono2}, to study substructures in Peisert graphs, which are relatives of the well-studied Paley graphs.
\par The Paley graphs are a well-known family of undirected graphs constructed from the elements of a finite field. Named after Raymond Paley, they were introduced as graphs independently by Sachs in 1962 and Erd\H{o}s \& R\'enyi in 1963,
inspired by the construction of Hadamard matrices in Paley's paper \cite{paleyp}. Let $q\equiv 1\pmod 4$ be a prime power. Then the Paley graph of order $q$ is the graph with vertex set the finite field $\mathbb{F}_q$, where
$ab$ is an edge if and only if $a-b$ is a non-zero square in $\mathbb{F}_q$.
\par
It is natural to study the extent to which a graph exhibits symmetry. A graph is called \textit{symmetric} if, given any two edges $xy$ and $x_1y_1$, there exists a graph automorphism sending $x$ to $x_1$ and $y$ to $y_1$.
Another kind of symmetry occurs if a graph is isomorphic to its complement, in which case the graph is called \textit{self-complementary}. While Sachs studied the self-complementarity properties of the Paley graphs, Erd\H{o}s \& R\'enyi were interested in their symmetries.
It turns out that the Paley graphs are both self-complementary and symmetric.
\par
It is a natural question to ask for a classification of all self-complementary and symmetric (SCS) graphs. In this direction, Chao's classification in \cite{chao} shows that the only such graphs of prime order are the Paley graphs.
Zhang \cite{zhang} gave an algebraic characterization of SCS graphs using the classification of finite simple groups, although it did not follow whether one could find such graphs other than the Paley graphs.
In 2001, Peisert gave a full description of SCS graphs as well as their automorphism groups in \cite{peisert}. He derived that there is another infinite family of SCS graphs apart from the Paley graphs, and, in addition, one more graph not belonging to any of the two former families.
He constructed the $P^\ast$-graphs (which are now known as \textit{Peisert graphs}) as follows. For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$,
that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$. Then the Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$.
It is shown in \cite{peisert} that the definition is independent of the choice of $g$. Moreover, the edge relation is well defined (that is, symmetric), since $q\equiv 1\pmod 8$ implies that $-1\in\langle g^4\rangle$.
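As a concrete illustration (ours, not part of the paper), the smallest case $q = 9$ can be built explicitly: representing $\mathbb{F}_9$ as $\mathbb{F}_3[x]/(x^2+1)$ with primitive element $g = 1 + x$, one can list the connection set $\langle g^4\rangle \cup g\langle g^4\rangle$ and count cliques by brute force. All helper names below are our own:

```python
from itertools import combinations

# F_9 = F_3[x]/(x^2 + 1); an element a + b*x is encoded as the pair (a, b)
def mul(u, v):
    a, b = u
    c, d = v
    # (a + b x)(c + d x) = (ac - bd) + (ad + bc) x, using x^2 = -1
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def sub(u, v):
    return ((u[0] - v[0]) % 3, (u[1] - v[1]) % 3)

g = (1, 1)                       # g = 1 + x is a primitive element of F_9
powers = [(1, 0)]                # g^0, g^1, ..., g^7 enumerate F_9^*
for _ in range(7):
    powers.append(mul(powers[-1], g))

# connection set <g^4> union g<g^4> = {g^0, g^4} union {g^1, g^5}
S = {powers[i] for i in (0, 1, 4, 5)}

verts = [(a, b) for a in range(3) for b in range(3)]
edges = {frozenset((u, v)) for u in verts for v in verts
         if u != v and sub(u, v) in S}

# P*(9) is (q-1)/2 = 4 regular, hence has 9*4/2 = 18 edges
assert all(sum(1 for e in edges if v in e) == 4 for v in verts)

# brute-force count of cliques of order 3
k3 = sum(1 for trio in combinations(verts, 3)
         if all(frozenset(pair) in edges for pair in combinations(trio, 2)))
print(k3)  # 6 triangles in P*(9)
```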
\par
Recall that a complete subgraph, or a clique, in an undirected graph is a set of vertices such that every two distinct vertices in the set are adjacent. The number of vertices in the clique is called the order of the clique.
Let $G^{(n)}$ denote a graph on $n$ vertices and let $\overline{G^{(n)}}$ be its complement.
Let $k_m(G)$ denote the number of cliques of order $m$ in a graph $G$. Let $T_m(n)=\text{min}\left(k_m(G^{(n)})+ k_m(\overline{G^{(n)}})\right) $ where the minimum is taken over all graphs on $n$ vertices.
Erd\H{o}s \cite{erdos}, Goodman \cite{goodman} and Thomason \cite{thomason} studied $T_m(n)$ for different values of $m$ and $n$. The study of $T_m(n)$ is linked to Ramsey theory,
because the diagonal Ramsey number $R(m,m)$ is the smallest positive integer $n$ such that $T_m(n)$ is positive. Also, for the function $k_m(G^{(n)})+ k_m(\overline{G^{(n)}})$ on graphs with $n=p$ vertices, $p$ being a prime,
Paley graphs are minimal in certain ways; for example, in showing that $R(4,4)$ is at least $18$, the Paley graph with $17$ vertices is the only graph (up to isomorphism) on $17$ vertices satisfying $k_4(G^{(17)})+ k_4(\overline{G^{(17)}})=0$.
What followed was a study of $k_m(G)$ for $G$ a Paley graph. Evans et al. \cite{evans1981number} and Atanasov et al. \cite{atanasov2014certain} gave formulae for $k_4(G)$, where $G$ is a Paley graph whose number of vertices is a prime and a prime power,
respectively. This led to generalizations of Paley graphs by Lim and Praeger \cite{lim2006generalised}, and to the computation of the number of cliques of orders $3$ and $4$ in those graphs by Dawsey and McCarthy \cite{dawsey}.
Very recently, we \cite{BB} defined \emph{Paley-type} graphs as follows. For a positive integer $n$, the Paley-type graph $G_n$ has the finite commutative ring $\mathbb{Z}_n$ as its vertex set, with $ab$ an edge if
and only if $a-b\equiv x^2\pmod{n}$ for some unit $x$ of $\mathbb{Z}_n$. For primes $p\equiv 1\pmod{4}$ and any positive integer $\alpha$, we also found the number of cliques of orders $3$ and $4$ in the Paley-type graphs $G_{p^{\alpha}}$.
\par
The Peisert graphs lie in the class of SCS graphs along with the Paley graphs, so it is natural to study the number of cliques in the former class as well. There is no known formula for the number of cliques of order $4$ in the
Peisert graph $P^{\ast}(q)$. The main purpose of this paper is to provide a general formula for $k_4(P^\ast(q))$. In \cite{jamesalex2}, Alexander found the number of cliques of order $3$ using the properties that
Peisert graphs are edge-transitive and that any pair of vertices connected by an edge has the same number of common neighbors (a graph is edge-transitive if, given any two edges in the graph, there exists a graph automorphism sending one edge to the other).
In this article, we follow a character-sum approach to compute the number of cliques of orders $3$ and $4$ in Peisert graphs. In the following theorem, we give a new proof for the number of cliques of order $3$ in Peisert graphs by evaluating certain character sums.
\begin{theorem}\label{thm1}
Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Then, the number of cliques of order $3$ in the Peisert graph $P^{\ast}(q)$ is given by $$k_3(P^\ast(q))=\dfrac{q(q-1)(q-5)}{48}.$$
\end{theorem}
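Theorem \ref{thm1} can be confirmed by exhaustive search for small $q$. The sketch below (our own model $\mathbb{F}_{p^2}=\mathbb{F}_p[i]$, with $x^2+1$ irreducible over $\mathbb{F}_p$ since $p\equiv 3\pmod 4$) counts all triangles in $P^\ast(9)$ and $P^\ast(49)$ and compares with the closed formula.

```python
# Brute-force check of k_3(P*(q)) = q(q-1)(q-5)/48 for q = 9 and q = 49.
from itertools import combinations, product

def peisert_triangles(p):
    # Model F_{p^2} = F_p[i] with i^2 = -1 (p ≡ 3 (mod 4) is assumed).
    q = p * p
    mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
    sub = lambda x, y: ((x[0]-y[0]) % p, (x[1]-y[1]) % p)
    nonzero = [x for x in product(range(p), repeat=2) if x != (0, 0)]
    def order(x):
        k, y = 1, x
        while y != (1, 0):
            y, k = mul(y, x), k + 1
        return k
    g = next(x for x in nonzero if order(x) == q - 1)   # primitive element
    dlog, y = {}, (1, 0)
    for k in range(q - 1):
        dlog[y] = k
        y = mul(y, g)
    H = {x for x in nonzero if dlog[x] % 4 in (0, 1)}   # <g^4> ∪ g<g^4>
    verts = [(0, 0)] + nonzero
    adj = lambda a, b: sub(a, b) in H
    return sum(1 for a, b, c in combinations(verts, 3)
               if adj(a, b) and adj(a, c) and adj(b, c))

for p in (3, 7):
    q = p * p
    assert peisert_triangles(p) == q * (q - 1) * (q - 5) // 48
```

For $q=9$ the formula gives $6$ triangles, and for $q=49$ it gives $2156$.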
Next, we find the number of cliques of order $4$ in Peisert graphs. In this case, the character sums are difficult to evaluate. We use finite field hypergeometric functions to evaluate some of the character sums. Before we state our result on $k_4(P^\ast(q))$,
we recall Greene's finite field hypergeometric functions from \cite{greene, greene2}. Let $p$ be an odd prime, and let $\mathbb{F}_q$ denote the finite field with $q$ elements, where $q=p^r, r\geq 1$.
Let $\widehat{\mathbb{F}_q^{\times}}$ be the group of all multiplicative
characters on $\mathbb{F}_q^{\times}$. We extend the domain of each $\chi\in \widehat{\mathbb{F}_q^{\times}}$ to $\mathbb{F}_q$ by
setting $\chi(0)=0$ for every character, including the trivial character $\varepsilon$. For multiplicative characters $A$ and $B$ on $\mathbb{F}_q$, the binomial coefficient ${A \choose B}$ is defined by
\begin{align*}
{A \choose B}:=\frac{B(-1)}{q}J(A,\overline{B}),
\end{align*}
where $J(A, B)=\displaystyle \sum_{x \in \mathbb{F}_q}A(x)B(1-x)$ denotes the Jacobi sum and $\overline{B}$ is the character inverse of $B$.
For a positive integer $n$, and $A_0,\ldots, A_n, B_1,\ldots, B_n\in \widehat{\mathbb{F}_q^{\times}}$, Greene \cite{greene, greene2} defined the ${_{n+1}}F_n$ finite field hypergeometric function over
$\mathbb{F}_q$ by
\begin{align*}
{_{n+1}}F_n\left(\begin{array}{cccc}
A_0, & A_1, & \ldots, & A_n\\
& B_1, & \ldots, & B_n
\end{array}\mid x \right)
:=\frac{q}{q-1}\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi}
\cdots {A_n\chi \choose B_n\chi}\chi(x).
\end{align*}
For $n=2$, we recall the following result from \cite[Corollary 3.14]{greene}:
$${_{3}}F_{2}\left(\begin{array}{ccc}
A, & B, & C \\
& D, & E
\end{array}| \lambda\right)=\sum\limits_{x,y\in\mathbb{F}_q}A\overline{E}(x)\overline{C}E(1-x)B(y)\overline{B}D(1-y)\overline{A}(x-\lambda y).$$
Some of the biggest motivations for studying finite field hypergeometric functions have been their connections with Fourier coefficients and eigenvalues of modular forms and with counting points on certain kinds of algebraic varieties. For example, Ono \cite{ono} gave formulae for the number of $\mathbb{F}_p$-points on elliptic curves in terms of special values of Greene's finite field hypergeometric functions. In \cite{ono2}, Ono wrote a beautiful chapter on finite field hypergeometric functions and mentioned several open problems on hypergeometric functions and their relations to modular forms and algebraic varieties. In recent times, many authors have studied and found solutions to some of the problems posed by Ono.
\par Finite field hypergeometric functions are useful in the study of Paley graphs, see for example \cite{dawsey, wage}. In the following theorem, we express the number of cliques of order $4$ in Peisert graphs in terms of finite field hypergeometric functions.
\begin{theorem}\label{thm2}
Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. If $\chi_4$ is a character of order $4$, then the number of cliques of order $4$ in the Peisert graph $P^{\ast}(q)$ is given by
\begin{align*}
k_4(P^\ast(q))=\frac{q(q-1)}{3072}\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc}
\hspace{-.12cm}\chi_4, &\hspace{-.14cm} \chi_4, &\hspace{-.14cm} \chi_4^3 \\
& \hspace{-.14cm}\varepsilon, &\hspace{-.14cm} \varepsilon
\end{array}| 1\right)
\right].
\end{align*}
\end{theorem}
Using Sage, we numerically verify Theorem \ref{thm2} for certain values of $q$. We list some of the values in Table \ref{Table-1}. We denote by ${_{3}}F_{2}(\cdot)$ the hypergeometric function appearing in Theorem \ref{thm2}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c |c | c | c | c | c | c|}
\hline
$p$ &$q$ & $k_4(P^\ast(q))$ & $u$ & $q^2 \cdot {_{3}}F_{2}(\cdot)$ & $k_4(P^\ast(q))$ &${_{3}}F_{2}(\cdot)$\\
&& (by Sage) & & (by Sage) & (by Theorem \ref{thm2}) &\\\hline
$3$ &$9$ & $0$ & $-1$ & $10$ & $0$& $0.1234\ldots$ \\
$7$ &$49$ & $2156$ & $7$ & $-30$ & $2156$& $-0.0123\ldots$\\
$3$ &$81$ & $21060$ & $7$ & $-62$ & $21060$& $-0.0094\ldots$\\
$11$ &$121$ & $116160$ & $7$ & $42$ & $116160$& $0.0028\ldots$\\
$19$ &$361$ & $10515930$ & $-17$ & $522$ & $10515930$& $0.0040\ldots$\\
$23$ &$529$ & $49135636$ & $23$ & $930$ & $49135636$& $0.0033\ldots$\\
\hline
\end{tabular}
\caption{Numerical data for Theorem \ref{thm2}}
\label{Table-1}
\end{center}
\end{table}
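Any row of Table \ref{Table-1} can be reproduced by exhaustive search directly from the definition of $P^\ast(q)$, with no character sums at all. A plain-Python sketch for $q=49$ (the model $\mathbb{F}_{49}=\mathbb{F}_7[i]$ and the search for a primitive element are our own choices):

```python
# Brute-force count of 4-order cliques in P*(49); Table 1 lists k_4 = 2156.
from itertools import combinations, product

p, q = 7, 49
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
sub = lambda x, y: ((x[0]-y[0]) % p, (x[1]-y[1]) % p)
nonzero = [x for x in product(range(p), repeat=2) if x != (0, 0)]
def order(x):
    k, y = 1, x
    while y != (1, 0):
        y, k = mul(y, x), k + 1
    return k
g = next(x for x in nonzero if order(x) == q - 1)   # primitive element of F_49
dlog, y = {}, (1, 0)
for k in range(q - 1):
    dlog[y] = k
    y = mul(y, g)
H = {x for x in nonzero if dlog[x] % 4 in (0, 1)}   # <g^4> ∪ g<g^4>
adj = lambda a, b: sub(a, b) in H
verts = [(0, 0)] + nonzero
k4 = sum(1 for t in combinations(verts, 4)
         if all(adj(a, b) for a, b in combinations(t, 2)))
print(k4)  # 2156, matching Table 1
```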
\par
We note that the number of cliques of order $3$ in the Peisert graph of order $q$ equals the number of cliques of order $3$ in the Paley graph of the same order.
The computations for the number of cliques of order $4$ are quite tedious, so in the following theorem we give an asymptotic result for the number of cliques of any order $m\geq 1$ in Peisert graphs.
\begin{theorem}\label{asym}
Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. For $m\geq 1$, let $k_m(P^\ast(q))$ denote the number of cliques of order $m$ in the Peisert graph $P^\ast(q)$.
Then $$\lim\limits_{q\to\infty}\dfrac{k_m(P^\ast(q))}{q^m}=\dfrac{1}{2^{\binom{m}{2}}\,m!}.$$
\end{theorem}
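For $m=3$, Theorem \ref{asym} is consistent with the exact count of Theorem \ref{thm1}: $k_3(P^\ast(q))/q^3=(q-1)(q-5)/(48q^2)\to 1/48=1/\bigl(2^{\binom{3}{2}}3!\bigr)$. A quick numeric check of this arithmetic (the sample values of $q=p^{2t}$ are arbitrary):

```python
# Consistency of Theorem thm1 (m = 3) with the limit 1/48 of Theorem asym.
from math import comb, factorial

def k3(q):
    return q * (q - 1) * (q - 5) // 48            # Theorem thm1
limit = 1 / (2 ** comb(3, 2) * factorial(3))      # 1/(2^C(3,2) * 3!) = 1/48
for q in (3 ** 10, 7 ** 8, 11 ** 6):              # q = p^{2t}, p ≡ 3 (mod 4)
    assert abs(k3(q) / q ** 3 - limit) < 1e-4     # error is O(1/q)
```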
\section{preliminaries and some lemmas}
We begin by fixing some notations. For a prime $p\equiv 3\pmod{4}$ and positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$, that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$.
Now, we fix a multiplicative character $\chi_4$ on $\mathbb{F}_q$ of order $4$ (which exists since $q\equiv 1\pmod 4$). Let $\varphi$ be the unique quadratic character on $\mathbb{F}_q$. Then, we have $\chi_4^2=\varphi$.
Let $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Since $H$ is the union of two cosets of $\langle g^4\rangle $ in $\langle g\rangle $, we see that $|H|=2\times \frac{q-1}{4}=\frac{q-1}{2}$.
We recall that a vertex-transitive graph is a graph in which, given any two vertices, there exists some graph automorphism sending one vertex to the other. Peisert graphs, being symmetric, are vertex-transitive. Also,
the subgraphs induced by $\langle g^4\rangle$ and $g\langle g^4\rangle$ are both vertex-transitive: if $s,t$ are two elements of $\langle g^4\rangle$ (or $g\langle g^4\rangle$), then the map on the vertex set of $\langle g^4\rangle$ (or $g\langle g^4\rangle$)
given by $x\longmapsto \frac{t}{s} x$ is an automorphism of the induced subgraph sending $s$ to $t$.
The subgraph of $P^\ast(q)$ induced by $H$ is denoted by $\langle H\rangle$.
\par
Throughout the article, we fix $h=1-\chi_4(g)$. For $x\in\mathbb{F}_q^\ast$, we have the following:
\begin{align}\label{qq}
\frac{2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)}{4} = \left\{
\begin{array}{lll}
1, & \hbox{if $\chi_4(x)\in\{1,\chi_4(g)\}$;} \\
0, & \hbox{\text{otherwise.}}
\end{array}
\right.
\end{align}
We note here that for $x\neq 0$, $x\in H$ if and only if $\chi_4(x)=1$ or $\chi_4(x)=\chi_4(g)$.
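The identity \eqref{qq} is elementary but easy to misread; here is a short plain-Python verification (the concrete choice $\chi_4(g)=i$, so that $h=1-i$, is ours, purely for illustration):

```python
# (2 + h·χ4(x) + conj(h)·conj(χ4(x)))/4 is the indicator of χ4(x) ∈ {1, χ4(g)},
# with the illustrative choice χ4(g) = i and hence h = 1 - χ4(g) = 1 - i.
I4 = [1, 1j, -1, -1j]          # the possible values i^k of χ4 at g^k
h = 1 - 1j
for chi in I4:
    val = (2 + h * chi + h.conjugate() * chi.conjugate()) / 4
    assert val == (1 if chi in (1, 1j) else 0)
```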
\par
We have the following lemma which will be used in proving the main results.
\begin{lemma}\label{rr}
Let $q=p^{2t}$ where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a multiplicative character of order $4$ on $\mathbb{F}_q$, and let $\varphi$ be the unique quadratic character. Then, we have $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=-(-p)^t$.
\end{lemma}
\begin{proof}
By \cite[Proposition 1]{katre}, we have $J(\chi_4,\chi_4)=-(-p)^t$. We also note that, by Theorem 2.1.4 and Theorem 3.2.1 of \cite{berndt} (whose proofs remain valid when the prime is replaced by a prime power),
$J(\chi_4,\varphi)=\chi_4(4)J(\chi_4,\chi_4)=a_4+ib_4$, where $a_4^2+b_4^2=q$ and $a_4\equiv -(\frac{q+1}{2})\pmod 4$. Since $q\equiv 1\pmod 8$, this gives $a_4\equiv 3\pmod 4$. As $p\equiv 3\pmod 4$, the only representation of $q=p^{2t}$ as a sum of two squares is $q=(\pm p^t)^2+0^2$; hence $b_4=0$, and the congruence forces $a_4=-(-p)^t$. Thus, we obtain $J(\chi_4,\varphi)=J(\chi_4,\chi_4)=-(-p)^t$.
\end{proof}
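Lemma \ref{rr} can be checked directly for the smallest case $q=9$, where $-(-p)^t=3$. The sketch below uses our own model $\mathbb{F}_9=\mathbb{F}_3[i]$ with the primitive element $g=1+i$ and the choice $\chi_4(g)=i$ (none of which is prescribed by the paper), and computes the Jacobi sum from its definition:

```python
# Numeric check of Lemma rr for q = 9 (p = 3, t = 1): J(χ4, χ4) = -(-3)^1 = 3.
from itertools import product

p = 3
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
sub = lambda x, y: ((x[0]-y[0]) % p, (x[1]-y[1]) % p)
elems = list(product(range(p), repeat=2))
g, dlog, y = (1, 1), {}, (1, 0)     # g = 1 + i is primitive in F_9 = F_3[i]
for k in range(p * p - 1):
    dlog[y] = k
    y = mul(y, g)
I4 = [1, 1j, -1, -1j]
chi4 = lambda x: 0 if x == (0, 0) else I4[dlog[x] % 4]   # χ4(g^k) = i^k
one = (1, 0)
J = sum(chi4(x) * chi4(sub(one, x)) for x in elems)      # J(χ4, χ4)
assert J == 3
```

The value is independent of the choice between $\chi_4$ and $\overline{\chi_4}$, since $J(\overline{\chi_4},\overline{\chi_4})=\overline{J(\chi_4,\chi_4)}$ and the value is real.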
Next, we evaluate certain character sums in the following lemmas.
\begin{lemma}\label{lem1}
Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$, and let $\varphi$ be the unique quadratic character. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$.
Then, $$\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\varphi(a-1)J(\chi_4,\chi_4).$$
\end{lemma}
\begin{proof} We have
\begin{align*}
&\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\sum_{y'\in\mathbb{F}_q}\chi_4(y'(y'+1-a))\\
&=\sum_{y''\in\mathbb{F}_q}\chi_4((1-a)y'')\chi_4((1-a)(y''+1))
=\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(y''(y''+1))\\
&=\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(-y''(-y''+1))\\
&=\varphi(1-a)J(\chi_4,\chi_4),
\end{align*}
where we used the substitutions $y-1=y'$, $y''=y'(1-a)^{-1}$, and replaced $y''$ by $-y''$.
\end{proof}
\begin{lemma}\label{lem2}
Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$. Then, $$\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=-1.$$
\end{lemma}
\begin{proof} We have
\begin{align*}
&\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=\sum_{y'\in\mathbb{F}_q}\chi_4(ay')\overline{\chi_4}(a-ay')\\
&=\sum_{y'\in\mathbb{F}_q}\chi_4(y')\overline{\chi_4}(1-y')\\
&=\sum_{y'\in\mathbb{F}_q}\chi_4\left(y'(1-y')^{-1}\right)\\
&=\sum_{y''\in\mathbb{F}_q, y''\neq -1}\chi_4(y'') =-1,
\end{align*}
where we used the substitutions $y' =y a^{-1}$ and $y'' =y'(1-y')^{-1}$, respectively.
\end{proof}
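Lemma \ref{lem2} is easy to test numerically. A sketch over the prime field $\mathbb{F}_{17}$ (so $q\equiv 1\pmod 8$ and $\chi_4(-1)=1$; the primitive root $3$ and the normalization $\chi_4(3)=i$ are our choices):

```python
# Check Lemma lem2 over F_17: Σ_y χ4(y) · conj(χ4)(a - y) = -1 for a ≠ 0, 1.
q, g = 17, 3                  # 3 is a primitive root mod 17 (our choice)
I4 = [1, 1j, -1, -1j]
dlog, y = {}, 1
for k in range(q - 1):
    dlog[y] = k
    y = y * g % q
chi4 = lambda x: 0 if x % q == 0 else I4[dlog[x % q] % 4]   # χ4(g^k) = i^k
assert chi4(q - 1) == 1       # hypothesis χ4(-1) = 1 holds since 17 ≡ 1 (mod 8)
for a in range(2, q):         # all a ≠ 0, 1
    s = sum(chi4(t) * chi4(a - t).conjugate() for t in range(q))
    assert s == -1
```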
\begin{lemma}\label{lem3}
Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character.
Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then,
\begin{align}\label{koro}
\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\chi_4(x-y)=-2\rho
\end{align} and
\begin{align}\label{koro1}
\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\overline{\chi_4}(x-y)=1-\rho.
\end{align}
\end{lemma}
\begin{proof}
By Lemma \ref{lem2}, we have
\begin{align*}
\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\chi_4(x-y)
&=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[-1-\chi_4(y-1) \right]\\
&=-\rho-\sum_y \chi_4(y)\varphi(1-y)=-2\rho,
\end{align*}
which proves \eqref{koro}. Next, using the substitution $x'=xy^{-1}$, we have
\begin{align}\label{sum-new}
\sum_x \overline{\chi_4}(x)\overline{\chi_4}(x-y)&=\sum_{x'} \overline{\chi_4}(x'y)\overline{\chi_4}(x'y-y)=\varphi(y)\sum_{x'} \overline{\chi_4}(x')\overline{\chi_4}(x'-1)\notag \\
&=\varphi(y)\overline{J(\chi_4,\chi_4)}=\varphi(y)\rho.
\end{align}
So, using \eqref{sum-new}, we find that
\begin{align*}
&\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\overline{\chi_4}(x-y)\\
&=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[\varphi(y)\rho-\overline{\chi_4}(y-1) \right]\\
&=\rho\sum_y \overline{\chi_4}(y)\chi_4(1-y)-\sum_y \chi_4(y)\\
&=-\rho+1.
\end{align*}
This completes the proof of the lemma.
\end{proof}
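Both character sums in Lemma \ref{lem3} can be verified by brute force for $q=9$, where $\rho=-(-3)^1=3$, so \eqref{koro} and \eqref{koro1} should equal $-6$ and $-2$ (the field model and the choice $\chi_4(g)=i$ below are ours):

```python
# Brute-force check of (koro) and (koro1) over F_9, where ρ = 3.
from itertools import product

p, q = 3, 9
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
sub = lambda x, y: ((x[0]-y[0]) % p, (x[1]-y[1]) % p)
elems = list(product(range(p), repeat=2))
g, dlog, y = (1, 1), {}, (1, 0)        # g = 1 + i is primitive in F_9 = F_3[i]
for k in range(q - 1):
    dlog[y] = k
    y = mul(y, g)
I4 = [1, 1j, -1, -1j]
chi4 = lambda x: 0 if x == (0, 0) else I4[dlog[x] % 4]

one = (1, 0)
S1 = sum(chi4(x).conjugate() * chi4(y) * chi4(sub(one, y)) * chi4(sub(x, y))
         for x in elems for y in elems if x != one)
S2 = sum(chi4(x).conjugate() * chi4(y) * chi4(sub(one, y))
         * chi4(sub(x, y)).conjugate()
         for x in elems for y in elems if x != one)
rho = 3                                 # ρ = -(-p)^t for p = 3, t = 1
assert S1 == -2 * rho and S2 == 1 - rho
```

Note that the terms with $x=0$, $y\in\{0,1\}$ or $y=x$ vanish automatically because $\chi_4(0)=0$.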
We need to evaluate several character sums analogous to those in Lemma \ref{lem3}. To this end, we have the following two lemmas, whose proofs involve only Lemmas \ref{lem1} and \ref{lem2} (as in Lemma \ref{lem3}).
\begin{lemma}\label{lema1}
Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$.
Then, we have
\begin{align*}
&\sum\limits_{x,y\in\mathbb{F}_q,\, x\neq 0,1} \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)\\
&=\left\{
\begin{array}{lll}
-2\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, 1), (-1, -1, -1)\};$} \\
2, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, -1), (-1, -1, 1)\};$} \\
1-\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1)\}$.}
\end{array}
\right.
\end{align*}
\end{lemma}
\begin{lemma}\label{corr}
Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$.
Then, for $i_1,i_2,i_3\in\{\pm 1\}$, we have the following tabulation of the values of the expression
\begin{align}\label{new-eqn1}
\sum\limits_{x,y\in\mathbb{F}_q,\, x\neq 0,1}A_x \cdot \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y).
\end{align}
For $w\in\{1,2,\ldots,8\}$ and $z\in\{4,\ldots,7\}$, the $(w,z)$-th entry in the table is the value of \eqref{new-eqn1},
where the row index $w$ determines the tuple $(i_1,i_2,i_3)$ and the column index $z$ determines whether $A_x$ equals $\chi_4(x),\overline{\chi_4}(x),\chi_4(1-x)$ or $\overline{\chi_4}(1-x)$.
\begin{align*}
\begin{array}{|l|l|l|l|l|l|l|}
\cline{4-7} \multicolumn{3}{c|}{} & \multicolumn{4}{|c|}{A_{x}} \\
\hline i_{1} & i_{2} & i_{3} & \chi_4(x) & \overline{\chi_4}(x) & \chi_4(1-x) & \overline{\chi_4}(1-x) \\
\hline
1 & 1 & 1 & -2 \rho & -2 \rho & -2 \rho & -2 \rho \\
1 & 1 & -1 & 1-\rho & 1-\rho & 1- \rho & 1-\rho \\
1 & -1 & 1 & {\rho}^2+1 & 2 & {\rho}^2-\rho & 1-\rho \\
1 & -1 & -1 & 1-\rho & {\rho}^2-\rho & 2 & {\rho}^2+1 \\
-1 & 1 & 1 &{\rho}^2-\rho & 1-\rho &{\rho}^2+1 & 2 \\
-1 & 1 & -1 &2 & {\rho}^2+1 &1-\rho & {\rho}^2-\rho \\
-1 & -1 & 1 &1-\rho & 1-\rho &1-\rho & 1-\rho \\
-1 & -1 & -1 &-2\rho & -2\rho &-2\rho & -2\rho\\
\hline
\end{array}
\end{align*}
For example, the $(3,6)$-th position contains the value ${\rho}^2-\rho$. Here $w=3$ corresponds to $i_1=1,i_2=-1,i_3=1$; $z=6$ corresponds to the column $A_x=\chi_4(1-x)$.
So, $$\sum\limits_{x,y\in\mathbb{F}_q,\, x\neq 0,1}\chi_4(1-x)\chi_4(y) \overline{\chi_4}(1-y)\chi_4(x-y)={\rho}^2-\rho.$$
\end{lemma}
\begin{proof}
The calculations follow along the lines of Lemma \ref{lem1} and Lemma \ref{lem2}. For example, in Lemma \ref{lem3}, one can take $A_x$ to be $\chi_4(x),~\chi_4(1-x)$ or $\overline{\chi_4}(1-x)$ in place of $\overline{\chi_4}(x)$ in $\eqref{koro}$ and $\eqref{koro1}$,
and evaluate the corresponding character sums in the same way (note that $\chi_4(1-x)=\chi_4(x-1)$, since $\chi_4(-1)=1$).
\end{proof}
\begin{lemma}\label{lem4}
Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character of order $4$. Let $\varphi$ and $\varepsilon$ be the quadratic and the trivial characters, respectively. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$.
Then,
\begin{align*}
&{_{3}}F_2\left(\begin{array}{ccc}
\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon
\end{array}\mid 1 \right)=
{_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon \end{array}\mid 1\right)\\
&={_{3}}F_2\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}\mid 1\right)
={_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}\mid 1\right)\\
&=\frac{1}{q^2}[-2u(-p)^t].
\end{align*}
\end{lemma}
\begin{proof}
Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Proposition 1 in \cite{katre} tells us that $J(\chi_4,\chi_4)=-(-p)^t$, and hence it is real. Again, by Theorem 3.3.3 and the paragraph preceding Theorem 3.3.1 in \cite{berndt}, $J(\chi_8,\chi_8^2)=\chi_8(-4)J(\chi_4,\chi_4)$,
where $\chi_8(-4)=\pm 1$, and thus $J(\chi_8,\chi_8^2)$ is also real. By \cite[Theorem 4.37]{greene}, we have
\begin{align}\label{doe}
{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\binom{\chi_8}{\chi_8^2}\binom{\chi_8}{\chi_8^3}+\binom{\chi_8^5}{\chi_8^2}\binom{\chi_8^5}{\overline{\chi_8}}\notag \\
&=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8^6)J(\chi_8,\chi_8^5)+J(\chi_8^5,\chi_8^6)J(\chi_8^5,\chi_8)].
\end{align}
Using Theorems 2.1.5 and 2.1.6 in \cite{berndt} we obtain
\begin{align*}
&J(\chi_8,\chi_8^6)=\chi_8(-1)J(\chi_8,\chi_8),\\
&J(\chi_8,\chi_8^5)=\chi_8(-1)J(\chi_8,\chi_8^2),\\
&J(\chi_8^5,\chi_8^6)=\chi_8(-1)\overline{J(\chi_8,\chi_8)}.
\end{align*}
Substituting these values in $\eqref{doe}$ and using \cite[Lemma 3.6 (2)]{dawsey}, we find that
\begin{align}\label{real}
{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8)J(\chi_8,\chi_8^2)+\overline{J(\chi_8,\chi_8)}J(\chi_8,\chi_8^2)]\notag \\
&=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\notag \\
&=\frac{1}{q^2}[-2u(-p)^t].
\end{align}
Since ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$ is the complex conjugate of
${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$, and the value given in \eqref{real} is real, the two are equal. Using \cite[Theorem 4.37]{greene} again, we have
\begin{align}\label{jack}
{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\binom{\overline{\chi_8}}{\chi_8^2}\binom{\overline{\chi_8}}{\chi_8}+\binom{\chi_8^3}{\chi_8^2}\binom{\chi_8^3}{\overline{\chi_8}^3}\notag \\
&=\frac{\chi_8(-1)}{q^2}[J(\overline{\chi_8},\overline{\chi_8}^2)J(\overline{\chi_8},\overline{\chi_8})+J(\chi_8^3,\overline{\chi_8}^2)J(\chi_8^3,\chi_8^3)].
\end{align}
Recalling Theorem 2.1.6 in \cite{berndt} gives $J(\chi_8,\chi_8)=J(\chi_8^3,\chi_8^3)$. Also, Theorem 2.1.5 in \cite{berndt} gives
$J(\chi_8^3,\overline{\chi_8}^2)=\overline{J(\chi_8^5,\chi_8^2)}=\overline{J(\chi_8,\chi_8^2)}=J(\chi_8,\chi_8^2)$. Hence, $\eqref{jack}$ yields
\begin{align*}
{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\\
&=\frac{1}{q^2}[-2u(-p)^t],
\end{align*}
which is the same real number we found in $\eqref{real}$. Hence, its complex conjugate, namely ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}| 1\right)$ is also real and has the same value. This completes the proof of the lemma.
\end{proof}
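As a check on Lemma \ref{lem4} (and on Greene's definition itself), the following plain-Python sketch evaluates ${_{3}}F_{2}(\chi_4,\chi_4,\chi_4;\varepsilon,\varepsilon\mid 1)$ over $\mathbb{F}_9$ directly from the definition via Jacobi sums. For $q=9$ we have $u=-1$, so $q^2\cdot{_{3}}F_{2}$ should equal $-2u(-p)^t=-6$. (The field model, primitive element and character indexing are our own conventions.)

```python
# Evaluate Greene's 3F2(χ4, χ4, χ4; ε, ε | 1) over F_9 from the definition.
# Lemma lem4 predicts (q = 9, p = 3, t = 1, u = -1): value = -2u(-p)^t / q^2 = -6/81.
import cmath
from itertools import product

p, q = 3, 9
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
sub = lambda x, y: ((x[0]-y[0]) % p, (x[1]-y[1]) % p)
elems = list(product(range(p), repeat=2))
g, dlog, y = (1, 1), {}, (1, 0)       # g = 1 + i is primitive in F_9 = F_3[i]
for k in range(q - 1):
    dlog[y] = k
    y = mul(y, g)

w = cmath.exp(2j * cmath.pi / (q - 1))
def chi(j, x):                        # χ_j(g^k) = ω^{jk}, with χ_j(0) := 0
    return 0 if x == (0, 0) else w ** (j * dlog[x])
one = (1, 0)
def jacobi(m, n):                     # J(χ_m, χ_n) = Σ_x χ_m(x) χ_n(1 - x)
    return sum(chi(m, x) * chi(n, sub(one, x)) for x in elems)
def binom(m, n):                      # {χ_m choose χ_n} = χ_n(-1)/q · J(χ_m, conj χ_n)
    return chi(n, (p - 1, 0)) / q * jacobi(m, (-n) % (q - 1))

# Here χ4 = χ_2; with B_1 = B_2 = ε every factor in the definition is {χ4·χ choose χ}.
F = q / (q - 1) * sum(binom((2 + j) % (q - 1), j) ** 3 for j in range(q - 1))
print(round((q ** 2 * F).real))       # -6
```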
Next, we note the following observations made at the beginning of the sixth section of \cite{dawsey}. We state them as a lemma, since we shall use them in proving Theorem \ref{thm2}. Greene \cite{greene, greene2} gave some transformation formulae, which we
list as follows. Let $A,B,C,D,E$ be characters on $\mathbb{F}_q$. Then, we have
\begin{align}
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & A\overline{D}, & C\overline{D}\\ & \overline{D}, & E\overline{D}\end{array}| 1\right),\label{1}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=ABCDE(-1)\cdot {_{3}}F_{2}\left(\begin{array}{ccc}A, & A\overline{D}, & A\overline{E}\\ & A\overline{B}, & A\overline{C}\end{array}| 1\right),\label{2}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}|1\right)=ABCDE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & B, & B\overline{E}\\ & B\overline{A}, & B\overline{C}\end{array}| 1\right),\label{3}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & E\overline{C}\\ & AB\overline{D}, & E\end{array}| 1\right),\label{4}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AD(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & D\overline{B}, & C\\ & D, & AC\overline{E} \end{array}| 1\right),\label{5}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=B(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & B, & C\\ & D, & BC\overline{E}\end{array}| 1\right),\label{6}\\
&{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=AB(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & \overline{B}D, & C\\ & D, & DE\overline{AB}\end{array}| 1\right).\label{7}
\end{align}
Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. To each of the transformations in $\eqref{1}$ to $\eqref{7}$, Dawsey and McCarthy in \cite{dawsey} associated a map on $X$; for example, the transformation in $\eqref{1}$ gives that
$${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_2-t_4}, &\chi_4^{t_1-t_4} , & \chi_4^{t_3-t_4}\\ & \chi_4^{-t_4}, &\chi_4^{t_5-t_4}\end{array}| 1\right),$$
so it induces a map $f_1: X\rightarrow X$
given by
$$f_1(t_1,t_2,t_3,t_4,t_5)=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4).$$
Similarly, the other transformations in $\eqref{2}$ to $\eqref{7}$ lead to the maps $f_2$ to $f_7$.
\begin{lemma}\label{dlemma}
Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. Define the functions $f_i:X\rightarrow X,~i\in\{1,2,\ldots,7\}$ in the following manner:
\begin{align*}
f_1(t_1,t_2,t_3,t_4,t_5)&=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4),\\
f_{2}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{1}-t_{4}, t_{1}-t_{5}, t_{1}-t_{2}, t_{1}-t_{3}\right),\\
f_{3}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{2}-t_{4}, t_{2}, t_{2}-t_{5}, t_{2}-t_1,t_2-t_3\right),\\
f_{4}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{2}, t_{5}-t_{3}, t_{1}+t_{2}-t_{4}, t_{5}\right),\\
f_{5}\left(t_1, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{4}-t_{2}, t_{3}, t_{4}, t_{1}+t_{3}-t_{5}\right),\\
f_{6}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{4}-t_{1}, t_{2}, t_{3}, t_{4}, t_{2}+t_{3}-t_{5}\right),\\
f_{7}\left(t_{1},t_{2}, t_{3},t_{4}, t_{5}\right)&=\left(t_{4}-t_{1},t_{4}-t_{2},t_{3}, t_{4}, t_{4}+t_{5}-t_{1}-t_{2}\right).
\end{align*}
Then the group generated by $f_1,\ldots,f_7$, with operation composition of functions, is the set
$$\mathcal{F}=\{f_0,f_i,f_j \circ f_l,f_4\circ f_1,f_6\circ f_2,f_5\circ f_3,f_1\circ f_4\circ f_1: 1\leq i\leq 7,~1\leq j\leq 3,~4\leq l\leq 7\},$$
where $f_0$ is the identity map. \\
Moreover, the group $\mathcal{F}$ acts on the set $X$. If we associate the $5$-tuple $(t_1,t_2,\ldots,t_5)\in X$ to the hypergeometric function ${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)$,
then each orbit of the group action consists of a number of $5$-tuples $(t_1,t_2,\ldots,t_5)$, and the corresponding ${}_3 F_{2}$ terms have the same value.
\end{lemma}
\begin{proof}
For a proof, see Section $6$ of \cite{dawsey}.
\end{proof}
In order to prove Theorem \ref{asym}, the following well-known theorem, due to Andr\'e Weil, serves as the key ingredient. We state it here.
\begin{theorem}[Weil's estimate]\label{weil}
Let $\mathbb{F}_q$ be the finite field of order $q$, and let $\chi$ be a multiplicative character of $\mathbb{F}_q$ of order $s$. Let $f(x)$ be a polynomial of degree $d$ over $\mathbb{F}_q$ that cannot be written in the form $c\cdot {h(x)}^s$ for a polynomial $h(x)$ over $\mathbb{F}_q$ and a constant $c\in\mathbb{F}_q$. Then
$$\Bigl\lvert\sum_{x\in\mathbb{F}_q}\chi(f(x))\Bigr\rvert\leq (d-1)\sqrt{q}.$$
\end{theorem}
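For a concrete instance of the bound, take $q=17$, $\chi=\varphi$ the quadratic character (so $s=2$), and the squarefree cubic $f(x)=x^3+x+1$ (our choice; its discriminant is nonzero mod $17$, so $f$ is not of the form $c\cdot h(x)^2$). Then the theorem gives $|\sum_x\chi(f(x))|\le 2\sqrt{17}\approx 8.25$:

```python
# Sanity check of Weil's estimate over F_17 with the quadratic character.
q, d = 17, 3
def legendre(a, p):
    # φ(a) = a^((p-1)/2) mod p, reported in {-1, 0, 1}
    r = pow(a % p, (p - 1) // 2, p)
    return r - p if r == p - 1 else r
f = lambda x: x ** 3 + x + 1
S = sum(legendre(f(x), q) for x in range(q))
assert abs(S) <= (d - 1) * q ** 0.5      # Weil bound (d - 1)·sqrt(q)
print(S)
```

(For this particular $f$ the sum happens to vanish, well inside the bound.)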
The rest of the article is organized as follows. In Section $3$, we prove Theorem \ref{thm1}. In Section $4$, we prove Theorem \ref{thm2}. Finally, in Section $5$, we prove the asymptotic formula for the number of cliques of any order in Peisert graphs.
To count cliques in Peisert graphs, we note that, since the graph is vertex-transitive, any two vertices are contained in the same number of cliques of a given order.
We will also use the following notation throughout the proofs. For an induced subgraph $S$ of a Peisert graph and a vertex $v\in S$, we denote by $k_3(S)$ and $k_3(S,v)$ the number of cliques of order $3$ in $S$ and the number of cliques of order $3$ in $S$ containing $v$, respectively.
\section{number of $3$-order cliques in $P^\ast(q)$}
In this section, we prove Theorem \ref{thm1}. Recall that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Also, $\langle H\rangle$ is the subgraph induced by $H$ and $h=1-\chi_4(g)$.
\begin{proof}[Proof of Theorem \ref{thm1}]
Using the vertex-transitivity of $P^\ast(q)$, we find that
\begin{align}\label{trian}
k_3(P^\ast(q))&=\frac{1}{3}\times q\times k_3(P^\ast(q),0)\notag \\
&=\frac{q}{3}\times \text{number of edges in }\langle H\rangle .
\end{align}
Now,
\begin{align}\label{ww-new}
\text{the number of edges in~} \langle H\rangle =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1,
\end{align}
where the first sum is taken over all $x$ such that $\chi_4(x)\in\{1,\chi_4(g)\}$ and the second sum is taken over all $y\neq x$ such that $\chi_4(y)\in\{1,\chi_4(g)\}$. Hence, using \eqref{qq} in \eqref{ww-new}, we find that
\begin{align}\label{ww}
&\text{the number of edges in~}\langle H\rangle \notag \\
&=\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))\notag\\
&\hspace{1.5cm}\times \sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))].
\end{align}
We expand the inner summation in $\eqref{ww}$ to obtain
\begin{align}\label{ee}
&\sum\limits_{y\neq 0,x}[4+2h\chi_4(y)+2\overline{h}\overline{\chi_4}(y)+2h\chi_4(x-y)+2\overline{h}\overline{\chi_4}(x-y)+2\chi_4(y)\overline{\chi_4}(x-y)\notag \\
& +2\overline{\chi_4}(y)\chi_4(x-y)-2\chi_4(g)\chi_4(y(x-y))+2\chi_4(g)\overline{\chi_4}(y(x-y))].
\end{align}
We have
\begin{align}\label{new-eqn3}
\sum\limits_{y\neq 0,x}\chi_4(y(x-y))=\sum\limits_{y\neq 0,1}\chi_4(xy)\chi_4(x-xy)=\varphi(x) J(\chi_4,\chi_4).
\end{align}
Using Lemma \ref{lem2} and \eqref{new-eqn3}, \eqref{ee} yields
\begin{align}\label{new-eqn2}
&\sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))]\notag \\
&=4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x)-2\chi_4(g)\varphi(x)J(\chi_4,\chi_4)+2\chi_4(g)\varphi(x)\overline{J(\chi_4,\chi_4)}.
\end{align}
Now, putting \eqref{new-eqn2} into \eqref{ww}, and then using Lemma \ref{rr}, we find that
\begin{align*}
&\text{the number of edges in }\langle H\rangle\\
=&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x))]\\
=&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[8(q-5)+(4h(q-3)-8h)\chi_4(x)+(4\overline{h}(q-3)-8\overline{h})\overline{\chi_4}(x)]\\
=&\frac{(q-1)(q-5)}{16}.
\end{align*}
Substituting this value in $\eqref{trian}$ gives us the required result.
\end{proof}
\section{number of $4$-order cliques in $P^\ast(q)$}
In this section, we prove Theorem \ref{thm2}. First, we recall again that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g \langle g^4\rangle$. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where the value of $\rho$ is given by Lemma \ref{rr}.
Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Note that in the proof we shall use the fact that $\chi_4(-1)=1$ multiple times. Recall that $h=1-\chi_4(g)$.
\begin{proof}[Proof of Theorem \ref{thm2}]
Noting again that $P^\ast(q)$ is vertex-transitive, we find that
\begin{align}\label{tt}
k_4(P^\ast(q))
&=\frac{q}{4}\times \text{ number of $4$-order cliques in $P^\ast(q)$ containing }0\notag \\
&=\frac{q}{4}\times k_3(\langle H\rangle).
\end{align}
Let $a, b\in H$ be such that $\chi_4(ab^{-1})=1$. We note that
\begin{align}\label{new-eqn4}
k_3(\langle H\rangle, a) =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1,
\end{align}
where the first sum is taken over all $x$ such that $\chi_4(x), \chi_4(a-x)\in\{1,\chi_4(g)\}$ and the second sum is taken over all $y\neq x$ such that $\chi_4(y), \chi_4(a-y)\in\{1,\chi_4(g)\}$. Hence, using \eqref{qq} in \eqref{new-eqn4}, we find that
\begin{align*}
&k_3(\langle H\rangle, a)\\
&=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{y\neq 0,a,x}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\
&\times (2+h\chi_4(a-y)+\overline{h}\overline{\chi_4}(a-y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))\\
&\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))].
\end{align*}
Applying the substitution $Y=ba^{-1}y$ to the sum indexed by $y$ (and using $\chi_4(ab^{-1})=1$), we obtain
\begin{align*}
&k_3(\langle H\rangle, a)\\
&=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{Y\neq 0,b,ba^{-1}x}
[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\
&\times (2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b))(2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\
&\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\
&=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{x\neq 0,a,ab^{-1}Y}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\
&\times
(2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b)) (2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\
&\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))].
\end{align*}
Again, using the substitution $X=ba^{-1}x$ yields
\begin{align*}
&k_3(\langle H\rangle, a)\\
&=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{X\neq 0,b,Y}[(2+h\chi_4(b-X)+\overline{h}\overline{\chi_4}(b-X))\\
&\times(2+h\chi_4(b-Y)+\overline{h}\overline{\chi_4}(b-Y))(2+h\chi_4(X-Y)+\overline{h}\overline{\chi_4}(X-Y))\\
&\times (2+h\chi_4(X)+\overline{h}\overline{\chi_4}(X))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\
&=k_3(\langle H\rangle,b).
\end{align*}
Thus, if $a, b\in H$ are such that $\chi_4(ab^{-1})=1$, then
\begin{align}\label{cond}
k_3(\langle H\rangle,a)=k_3(\langle H\rangle,b).
\end{align}
Let $\langle g^4\rangle =\{x_1,\ldots,x_{\frac{q-1}{4}}\}$ with $x_1=1$ and $g\langle g^4\rangle=\{y_1,\ldots, y_{\frac{q-1}{4}}\}$ with $y_1=g$. Then,
\begin{align}\label{pick}
\sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,x_i)+\sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,y_i)=3\times k_3(\langle H\rangle).
\end{align}
By $\eqref{cond}$, we have
$$k_3(\langle H\rangle,x_1)=k_3(\langle H\rangle,x_2)=\cdots=k_3(\langle H\rangle,x_{\frac{q-1}{4}})$$
and
$$k_3(\langle H\rangle,y_1)=k_3(\langle H\rangle,y_2)=\cdots=k_3(\langle H\rangle,y_{\frac{q-1}{4}}).$$
Hence, \eqref{pick} yields
\begin{align}\label{1g}
k_3(\langle H\rangle)=\frac{q-1}{12}[k_3(\langle H\rangle, 1)+ k_3(\langle H\rangle, g)].
\end{align}
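Relation \eqref{1g} lends itself to a direct brute-force check on a small instance (assumed illustrative choices, not from the text: $q=17$, $g=3$, so that $q\equiv 1\pmod 8$ and $\chi_4(-1)=1$):

```python
# Brute-force check of (1g): 12*k_3(<H>) = (q-1)*(k_3(<H>,1) + k_3(<H>,g)).
# Assumed illustrative choices (not from the text): q = 17, g = 3.
from itertools import combinations

q, g = 17, 3
powers = [pow(g, k, q) for k in range(q - 1)]
H = sorted(powers[k] for k in range(q - 1) if k % 4 in (0, 1))
Hset = set(H)
adj = lambda x, y: (x - y) % q in Hset

tris = [c for c in combinations(H, 3)
        if all(adj(a, b) for a, b in combinations(c, 2))]
k3_H = len(tris)
k3_1 = sum(1 for c in tris if 1 in c)   # k_3(<H>, 1): triangles through 1
k3_g = sum(1 for c in tris if g in c)   # k_3(<H>, g): triangles through g
assert 12 * k3_H == (q - 1) * (k3_1 + k3_g)
```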
Thus, we only need to find $k_3(\langle H\rangle, 1)$ and $k_3(\langle H\rangle, g)$. We first find $k_3(\langle H\rangle, 1)$.
\par We have
\begin{align}\label{xandy}
&k_3(\langle H\rangle,1)\notag \\
&=\frac{1}{2\times 4^5}\sum_{x\neq 0,1}[ (2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))]\notag\\
&\hspace{1.5cm} \sum_{y\neq 0,1,x}[(2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y))
(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)) \notag \\
&\hspace{2.5cm}\times (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))].
\end{align}
Let $i_1,i_2,i_3\in\{\pm 1\} $ and let $F_{i_1,i_2,i_3}$ denote the term $\chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)$. Using this notation, we expand and evaluate the inner summation in \eqref{xandy}. We have
\begin{align}\label{sun}
&\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag\\
&=\sum_{y\neq 0,1,x}[8+4h\chi_4(y)+4\overline{h}\overline{\chi_4}(y)+4h\chi_4(1-y)+4\overline{h}\overline{\chi_4}(1-y)+4h\chi_4(x-y)\notag\\&+4\overline{h}\overline{\chi_4}(x-y)
+4\chi_4(y)\overline{\chi_4}(1-y)+4\overline{\chi_4}(y)\chi_4(1-y)+4\chi_4(y)\overline{\chi_4}(x-y)\notag\\
&+4\overline{\chi_4}(y)\chi_4(x-y)
+4\chi_4(1-y)\overline{\chi_4}(x-y)+4\overline{\chi_4}(1-y)\chi_4(x-y)\notag\\
&+2h^2\chi_4(y)\chi_4(1-y)+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(1-y)+2h^2\chi_4(y)\chi_4(x-y)\notag\\
&+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(x-y)
+2h^2\chi_4(1-y)\chi_4(x-y)+2{\overline{h}}^2\overline{\chi_4}(1-y)\overline{\chi_4}(x-y)\notag\\
&+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1}
\notag\\
&+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}].
\end{align}
Now, referring to Lemmas \ref{lem1} and \ref{lem2}, we can easily check that any term of the form $\sum\limits_{y}\chi_4(\cdot)\overline{\chi_4}(\cdot)$ gives $-1$, $\sum\limits_y \chi_4((y-1)(y-x))$
gives $\varphi(x-1)\rho$ and $\sum\limits_y \chi_4(y(y-x))$ gives $\varphi(x)\rho$. Hence, $\eqref{sun}$ yields
\begin{align}\label{yonly}
&\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag \\
&=A+B\chi_4(x)+\overline{B}\overline{\chi_4}(x)+B\chi_4(x-1)+\overline{B}\overline{\chi_4}(x-1)-4\chi_4(x)\overline{\chi_4}(x-1)\notag \\
&-4\overline{\chi_4}(x)\chi_4(x-1)-2h^2\chi_4(x)\chi_4(x-1)-2{\overline{h}}^2\overline{\chi_4}(x)\overline{\chi_4}(x-1)\notag \\
&+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1}\notag
\\&+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}\notag\\
&=:\mathcal{I},
\end{align}
where $A=8(q-8)$ and $B=-12h$.
\par Next, we introduce some notation. Let
\begin{align*}
B_1&=16(q-9)+6B+\overline{B}h^2,\\
D_1&=2\overline{B}-8\overline{h}+Bh^2-4h^3,\\
E_1&=8(q-9)+4Bh,\\
F_1&=16(q-9)+4\,\mathrm{Re}(B\overline{h}).
\end{align*}
For $i\in\{1,2,3,4\}$ and $j\in\{1,2,\ldots,8\}$, we define the following character sums.
\begin{align*}
T_j&:=\sum_{x\neq 0,1}\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\
U_{ij}&:=\sum_{x\neq 0,1}\chi_4^l(m)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\
V_{ij}&:=\sum_x\chi_4^{l_1}(x)\chi_4^{l_2}(1-x)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),
\end{align*}
where
\begin{align*}
l = \left\{
\begin{array}{lll}
1, & \hbox{if $i$ is odd,} \\
-1, & \hbox{\text{otherwise};}
\end{array}
\right.
\end{align*}
\begin{align*}
m = \left\{
\begin{array}{lll}
x, & \hbox{if $i\in\{1,2\}$,} \\
1-x, & \hbox{\text{otherwise;}}
\end{array}
\right.
\end{align*}
and
\begin{align*}
(l_1,l_2) = \left\{
\begin{array}{lll}
(1,1), & \hbox{if $i=1$,} \\
(1,-1), & \hbox{if $i=2$,} \\
(-1,1), & \hbox{if $i=3$,} \\
(-1,-1), & \hbox{if $i=4$.}
\end{array}
\right.
\end{align*}
Also, corresponding to each $j$, let $(i_1,i_2,i_3)$ take values according to the following table:
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c| c| c| c| }
\hline
$j$ & $i_1$ & $i_2$ & $i_3$ \\
\hline
$1$ & $1$ & $1$ & $1$ \\
$2$ & $1$ & $1$ & $-1$ \\
$3$ & $1$ & $-1$ & $1$\\
$4$ & $1$ & $-1$ & $-1$\\
$5$ & $-1$ & $1$ & $1$\\
$6$ & $-1$ & $1$ & $-1$\\
$7$ & $-1$ & $-1$ & $1$\\
$8$ & $-1$ & $-1$ & $-1$\\
\hline
\end{tabular}
\end{center}
\end{table}
Then, using $\eqref{yonly}$ and the notations we just described, $\eqref{xandy}$ yields
\begin{align*}
&k_3(\langle H\rangle,1)=\frac{1}{2048}\sum_{x\neq 0,1}[2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)][2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x)]\times \mathcal{I}\\
=&\frac{1}{2048}\sum_{x\neq 0,1}\Big[ 32(q-15)+B_1\chi_4(x)+\overline{B_1}\overline{\chi_4}(x)+B_1\chi_4(x-1)+\overline{B_1}\overline{\chi_4}(x-1)\\
&+4\,\mathrm{Re}(Bh)\varphi(x)+4\,\mathrm{Re}(Bh)\varphi(x-1)+D_1\chi_4(x)\varphi(x-1)+\overline{D_1}\overline{\chi_4}(x)\varphi(x-1)\\
&+D_1\varphi(x)\chi_4(x-1)+\overline{D_1}\varphi(x)\overline{\chi_4}(x-1)+E_1\chi_4(x)\chi_4(x-1)+\overline{E_1}\overline{\chi_4}(x)\overline{\chi_4}(x-1) \\
&+F_1\chi_4(x)\overline{\chi_4}(1-x)+\overline{F_1}\overline{\chi_4}(x)\chi_4(x-1)\Big]\\
&+\frac{1}{2\times 4^5}\Big[ 4h^3T_1+8hT_2+8h T_3+8\overline{h}T_4+8h T_5+8\overline{h}T_6+8\overline{h}T_7+4{\overline{h}}^3 T_8\\
&+2h^4 U_{11}+4h^2 U_{12}+4h^2 U_{13}+8 U_{14}+4h^2 U_{15}+8 U_{16}+8 U_{17}+4{\overline{h}}^2 U_{18}\\
&+4h^2 U_{21} +8 U_{22}+8 U_{23}+4{\overline{h}}^2 U_{24}+8 U_{25}+4{\overline{h}}^2 U_{26}+4{\overline{h}}^2 U_{27}+2{\overline{h}}^4 U_{28}\\
&+2h^4 U_{31}+4h^2 U_{32}+4h^2 U_{33}+8 U_{34}+4h^2 U_{35}+8 U_{36}+8 U_{37}+4{\overline{h}}^2 U_{38}\\
&+4h^2 U_{41} +8 U_{42}+8 U_{43}+4{\overline{h}}^2 U_{44}+8 U_{45}+4{\overline{h}}^2 U_{46}+4{\overline{h}}^2 U_{47}+2{\overline{h}}^4 U_{48}\\
&+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\\
&+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\\
&+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\\
&+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48} \Big].
\end{align*}
Using Lemmas \ref{lem3}, \ref{lema1} and \ref{corr}, we find that
\begin{align}\label{bigex}
&k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81) \right.\notag \\
&+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\notag \\
&+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\notag \\
&+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\notag \\
&\left.+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48}\right].
\end{align}
Now, we convert each term of the form $V_{i j}$ $[i \in\{1,2,3,4\}, j\in\{1,2, \ldots, 8\}]$ into its equivalent $q^{2}\cdot {_{3}}F_{2}$ form. We use the notation $(t_{1}, t_{2}, \ldots, t_{5})\in \mathbb{Z}_4^5$
for the term $q^{2}\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_{1}}, & \chi_4^{t_{2}}, & \chi_4^{t_{3}}\\ & \chi_4^{t_{4}}, & \chi_4^{t_{5}}\end{array}| 1\right)$.
Then, $\eqref{bigex}$ yields
\begin{align}\label{bigexp}
&k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81)\notag \right. \\
&\hspace{.5cm}+h^{5}(3,1,1,2,2)+2 h^{3}(1,1,3,2,0)+2 h^{3}(3,1,1,0,2)+4 h(1,1,3,0,0)\notag \\
&\hspace{.5cm}+2h^{3}(3,3,1,0,2)+4 h(1,3,3,0,0)+4 h(3,3,1,2,2)+4 \overline{h}(1,3,3,2,0) \notag\\
&\hspace{.5cm}+2 h^{3}(3,1,3,2,2)+4 h(1,1,1,2,0)+4 h(3,1,3,0,2)+4 \overline{h}(1,1,1,0,0)\notag\\
&\hspace{.5cm}+4 h(3,3,3,0,2)+4 \overline{h}(1,3,1,0,0)+4 \overline{h}(3,3,3,2,2)+2 {\overline{h}}^{3}(1,3,1,2,0)\notag \\
&\hspace{.5cm}+2 h^{3}(3,1,3,2,0)+4 h(1,1,1,2,2)+4 h(3,1,3,0,0)+4 \overline{h}(1,1,1,0,2)\notag\\
&\hspace{.5cm}+4 h(3,3,3,0,0)+4 \overline{h}(1,3,1,0,2)+4 \overline{h}(3,3,3,2,0)+2 {\overline{h}}^{3}(1,3,1,2,2) \notag \\
&\hspace{.5cm}+4 h(3,1,1,2,0)+4 \overline{h}(1,1,3,2,2)+4 \overline{h}(3,1,1,0,0)+2{\overline{h}}^{3}(1,1,3,0,2)\notag\\
&\hspace{.5cm}\left. +4 \overline{h}(3,3,1,0,0)+2 \overline{h}^{3}(1,3,3,0,2)+ 2\overline{h}^{3}(3,3,1,2,0)+{\overline{h}}^{5}(1,3,3,2,2)\right].
\end{align}
Next, we use Lemma \ref{dlemma} along with the notation therein. We list the tuples $(t_1,t_2,\ldots, t_5)$ in each orbit of the action of the group $\mathcal{F}$ on $X$, and then group the corresponding terms in $\eqref{bigexp}$ together.
The orbit representatives $(1,1,1,0,0)$, $(3,3,3,0,0)$, $(1,3,3,2,0)$, $(3,1,1,2,0)$ and $(1,1,3,0,0)$ given in the proof of Corollary 2.7 in \cite{dawsey} are the ones whose orbits exhaust the hypergeometric terms in $\eqref{bigexp}$.
We denote the $q^2\cdot {_{3}}F_{2}$ terms corresponding to these orbit representatives as $M_1,M_2,\ldots,M_5$ respectively. Then, $\eqref{bigexp}$ yields
\begin{align}\label{mex}
&k_3(\langle H\rangle,1)=\frac{1}{2048}
\left[32(q^2-20q+81)\right. \notag \\
&\hspace{.5cm}+h^{5}M_4+2 h^{3}M_1+2 h^{3}M_1+4 hM_5 +2h^{3}M_1+4 hM_5+4 hM_1+4 \overline{h}M_3 \notag\\
&\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_2+4 \overline{h}M_1+4 hM_5+4 \overline{h}M_5+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3\notag \\
&\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_5+4 \overline{h}M_5
+4 hM_2+4 \overline{h}M_1+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3 \notag \\
&\hspace{.5cm}+4 hM_4+4 \overline{h}M_3+4 \overline{h}M_5+2{\overline{h}}^{3}M_2
+4 \overline{h}M_5+2 \overline{h}^{3}M_2+\left. 2\overline{h}^{3}M_2+{\overline{h}}^{5}M_3\right].
\end{align}
Using Lemma \ref{lem4} (note that we could not reduce $M_5$), $\eqref{mex}$ yields
\begin{align}\label{mexp}
k_3(\langle H\rangle,1)=\frac{1}{128}\left[2(q^2-20q+81)+2 u(-p)^t
+3q^2\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right) \right].
\end{align}
Returning to $\eqref{1g}$, it remains to calculate $k_3( \langle H\rangle,g)$. Again, we have
\begin{align}\label{gxandy}
&k_3(\langle H\rangle,g)\notag\\
&=\frac{1}{2048}\sum_{x\neq 0,g}\sum_{y\neq 0,g,x}\left[ (2+h\chi_4(g-x)+\overline{h}\overline{\chi_4}(g-x)) (2+h\chi_4(g-y)+\overline{h}\overline{\chi_4}(g-y))\notag \right.\\
&\times\left. (2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)) (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))\right].
\end{align}
Using the substitutions $Y=yg^{-1}$ and $X=xg^{-1}$, and then using the fact that $h\chi_4(g)=\overline{h}$, \eqref{gxandy} yields
\begin{align*}
&k_3(\langle H\rangle,g)\\
&=\frac{1}{2048}\sum_{x\neq 0,1}\sum_{y\neq 0,1,x}\left[ (2+\overline{h}\chi_4(1-x)+h\overline{\chi_4}(1-x)) (2+\overline{h}\chi_4(1-y)+h\overline{\chi_4}(1-y))\notag \right.\\
&\times\left. (2+\overline{h}\chi_4(x-y)+h\overline{\chi_4}(x-y)) (2+\overline{h}\chi_4(x)+h\overline{\chi_4}(x))(2+\overline{h}\chi_4(y)+h\overline{\chi_4}(y))\right].
\end{align*}
Comparing this with $\eqref{xandy}$, we see that the expansion of the expression inside this summation consists of the same summation terms as in $\eqref{xandy}$, except that the coefficient of each summation is replaced by its complex conjugate. Hence, to calculate the coefficient of each summation in the expansion of $\eqref{gxandy}$, we need only replace each corresponding coefficient in $\eqref{mex}$ by its complex conjugate.
Now, $\eqref{mexp}$ is the final expression obtained from $\eqref{mex}$, and each of its three summands has a real coefficient: two summands are real numbers, and the third is a ${}_3F_2$ term whose coefficient is also real. By the foregoing argument, $\eqref{gxandy}$ therefore yields the same value as $\eqref{mexp}$. Thus, $\eqref{1g}$ gives that
\begin{align*}
k_3(\langle H\rangle)=\frac{q-1}{768}&\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right)\right].
\end{align*}
Substituting the above value in $\eqref{tt}$, we complete the proof of the theorem.
\end{proof}
\section{Proof of Theorem \ref{asym}}
Let $m\geq 1$ be an integer. We have seen that the calculations required to count the cliques of order $4$ in $P^\ast(q)$ are already very tedious.
However, we can obtain an asymptotic result on the number of cliques of order $m$ in $P^\ast(q)$ as $q\rightarrow\infty$. The method follows along the lines of \cite{wage}, and we proceed by induction.
\begin{proof}[Proof of Theorem \ref{asym}]
Let $\mathbb{F}_q^\ast=\langle g\rangle$. We fix a formal ordering $a_1<\cdots<a_q$ of the elements of $\mathbb{F}_q$. Let $\chi_4$ be a fixed character of order $4$ on $\mathbb{F}_q$ and let $h=1-\chi_4(g)$.
First, we note that the result holds for $m=1,2$, so let $m\geq 3$ and assume that the result holds for $m-1$. We shall use the notation `$a_m\neq a_i$' to mean $a_m\neq a_1,\ldots,a_{m-1}$. Recalling \eqref{qq}, we see that
\begin{align}\label{ss}
k_m(P^\ast(q))&=\mathop{\sum\cdots\sum}_{a_1<\cdots<a_m}\prod_{1\leq i<j\leq m} \frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\notag \\
&=\frac{1}{m}\mathop{\sum\cdots\sum}_{a_1<\cdots<a_{m-1}}\left[ \prod_{1\leq i<j\leq m-1}\frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\right.\notag \\
&\left.\frac{1}{4^{m-1}}\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\right]
\end{align}
In order to use the induction hypothesis, we try to bound the expression $$\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}$$
in terms of $q$ and $m$. We find that
\begin{align}\label{dd}
\mathcal{J}&:=\sum\limits_{a_m\neq a_i} \prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\notag \\
&=2^{m-1}(q-m+1)\notag \\
&+\sum\limits_{a_m\neq a_i}[\text{the }(3^{m-1}-1)\text{ terms containing expressions in }\chi_4].
\end{align}
Each term in \eqref{dd} containing $\chi_4$ is of the form $$2^f h^{i'}\overline{h}^{j'}\chi_4((a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}),$$ where
\begin{equation}\label{asy}
\left.\begin{array}{l}
0\leq f\leq m-2,\\
0\leq i',j'\leq m-1,\\
i_1,\ldots,i_s \in \{1,2,\ldots,m-1\},\\
j_1,\ldots,j_s \in \{1,3\},\text{ and}\\
1\leq s\leq m-1.
\end{array}\right\}
\end{equation}
Let us consider such a term containing $\chi_4$. Excluding the constant factor $2^fh^{i'}\overline{h}^{j'}$, we obtain a polynomial in the variable $a_m$; since $g$ already denotes a generator of $\mathbb{F}_q^\ast$, we write $R(a_m)=(a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}$. Using Weil's estimate (Theorem \ref{weil}), we find that
\begin{align}\label{asy1}
\Big|\sum\limits_{a_m\in\mathbb{F}_q}\chi_4(R(a_m))\Big|\leq (j_1+\cdots+j_s-1)\sqrt{q}.
\end{align}
Then, using \eqref{asy1} we have
\begin{align}\label{asy2}
\Big|2^fh^{i'}\overline{h}^{j'} \sum\limits_{a_m}\chi_4(R(a_m))\Big|&\leq 2^{f+i'+j'}(j_1+\cdots+j_s-1)\sqrt{q}\notag \\
&\leq 2^{3m-4}(3m-4)\sqrt{q}\notag \\
&\leq 2^{3m}\cdot 3m\sqrt{q}.
\end{align}
Noting that the values of $\chi_4$ are roots of unity, and using \eqref{asy2} together with \eqref{asy} and the conditions therein, we obtain
\begin{align*}
&\Big| 2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(R(a_m))\Big|\\
&=\Big| 2^fh^{i'}\overline{h}^{j'}\left\lbrace \sum\limits_{a_m}\chi_4(R(a_m))-\chi_4(R(a_1))-\cdots-\chi_4(R(a_{m-1})) \right\rbrace \Big|\\
&\leq 2^{3m}\cdot 3m\sqrt{q}+2^{2m-3}\\
&\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}),
\end{align*}
that is,
$$-2^{2m}(1+2^m\cdot 3m\sqrt{q})\leq 2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(R(a_m))\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}).$$
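The Weil estimate \eqref{asy1} can be illustrated numerically. The sketch below uses assumed choices (not from the text): $q=17$, primitive root $3$, and polynomials of the shape $y(y-a)^3$, for which $j_1+j_2-1=3$, so the complete sum has modulus at most $3\sqrt{q}$.

```python
# Numerical illustration of the Weil bound (asy1): for a character chi4 of
# order 4 and the polynomial y*(y-a)^3 (so j_1 + j_2 - 1 = 3), the complete
# character sum has modulus at most 3*sqrt(q).  Assumed example: q = 17,
# primitive root 3.
import math

q, r = 17, 3
powers = [pow(r, k, q) for k in range(q - 1)]
log = {powers[k]: k for k in range(q - 1)}              # discrete logarithm
chi4 = lambda x: 0 if x % q == 0 else 1j ** (log[x % q] % 4)

for a in range(1, q):
    S = sum(chi4(y * pow(y - a, 3, q)) for y in range(q))
    assert abs(S) <= 3 * math.sqrt(q) + 1e-9
```

The bound $3\sqrt{17}\approx 12.4$ is genuinely smaller than the trivial bound of $q-2=15$ nonzero summands, so the assertion is not vacuous.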
Then, \eqref{dd} yields
\begin{align*}
&2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)\\
&\leq \mathcal{J}\\
&\leq 2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)
\end{align*}
and thus, \eqref{ss} yields
\begin{align}\label{asy3}
&[2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q))\notag\\
&\leq k_m(P^\ast(q))\notag \\
&\leq [2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q))
\end{align}
Dividing \eqref{asy3} throughout by $q^m$ and letting $q\rightarrow \infty$, we have
\begin{align}\label{ff}
&\lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}\notag \\
&\leq \lim_{q\rightarrow \infty}\frac{k_m(P^\ast(q))}{q^m}\notag \\
&\leq \lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}
\end{align}
Now, using the induction hypothesis and noting that
\begin{align*}
&\lim\limits_{q\to\infty}\frac{2^{m-1}(q-m+1)\pm 2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}q}\\
&=\frac{1}{m\times 4^{m-1}}2^{m-1}\\
&=\frac{1}{m\times 2^{m-1}} ,
\end{align*}
we find that the limits on the left-hand side and the right-hand side of \eqref{ff} are equal. This completes the proof of the result.
\end{proof}
Taking $m=3$ in Theorem \ref{asym}, we find that
$$\lim\limits_{q\to\infty}\dfrac{k_3(P^\ast(q))}{q^3}=\frac{1}{48}.$$
We obtain the same limiting value from Theorem \ref{thm1} as well.
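Unwinding the recursion underlying the proof, $\lim_{q\to\infty} k_m(P^\ast(q))/q^m = \frac{1}{m\cdot 2^{m-1}}\lim_{q\to\infty} k_{m-1}(P^\ast(q))/q^{m-1}$ with base value $1$ at $m=1$, gives the closed form $1/(m!\,2^{m(m-1)/2})$, consistent with both values quoted above; a short sketch in exact rational arithmetic (the function name is ours):

```python
# Closed form of the limit in Theorem (asym), obtained by unwinding the
# recursion lim k_m/q^m = lim k_{m-1}/q^{m-1} / (m * 2^(m-1)) from m = 1,
# where k_1(P*(q)) = q.  Exact rational arithmetic throughout.
from fractions import Fraction
from math import factorial

def clique_density_limit(m: int) -> Fraction:
    lim = Fraction(1)                 # m = 1: k_1(P*(q)) / q = 1
    for j in range(2, m + 1):
        lim /= j * 2 ** (j - 1)
    return lim

assert clique_density_limit(3) == Fraction(1, 48)     # value quoted above
assert clique_density_limit(4) == Fraction(1, 1536)   # value in Corollary
# general pattern: 1 / (m! * 2^(m(m-1)/2))
assert all(clique_density_limit(m)
           == Fraction(1, factorial(m) * 2 ** (m * (m - 1) // 2))
           for m in range(1, 9))
```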
\par Taking $m=4$ in Theorem \ref{thm2} and Theorem \ref{asym}, we obtain the following corollary which is also evident from Table \ref{Table-1}.
\begin{corollary}\label{cor1}
We have
\begin{align*}
\lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc}
\chi_4, & \chi_4, & \chi_4^3 \\
& \varepsilon, & \varepsilon
\end{array}| 1\right)=0.
\end{align*}
\end{corollary}
\begin{proof}
Putting $m=4$ in Theorem \ref{asym}, we have
\begin{align}\label{eqn1-cor1}
\lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}.
\end{align}
Putting $m=4$ in Theorem \ref{thm2}, we have
\begin{align}\label{eqn2-cor1}
\lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}+3\times \lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc}
\chi_4, & \chi_4, & \chi_4^3 \\
& \varepsilon, & \varepsilon
\end{array}| 1\right).
\end{align}
Combining \eqref{eqn1-cor1} and \eqref{eqn2-cor1}, we complete the proof.
\end{proof}
\section{Acknowledgements}
We are extremely grateful to Ken Ono for previewing a preliminary version of this paper and for his helpful comments.
% arXiv:2205.02634
\section{Introduction}
Let $G = (V,E)$ be a graph with vertex set $V$ and edge set $E$. Throughout this paper, we consider undirected graphs without loops.
For each vertex $v\in V$, the set $N(v)=\{u\in V \mid uv \in E\}$ is the open neighbourhood of $v$ and the set $N[v]=N(v)\cup \{v\}$ is the closed neighbourhood of $v$ in $G$. The degree of $v$, denoted by ${\rm deg}(v)$, is the cardinality of $N(v)$.
A set $S\subseteq V$ is a dominating set if every vertex in $\overline{S}= V- S$ is adjacent to at least one vertex in $S$.
The domination number $\gamma(G)$ is the minimum cardinality of a dominating set in $G$. There are various domination numbers in the literature.
For a detailed treatment of domination theory, the reader is referred to \cite{domination}.
\medskip
The concept of super domination number was introduced by Lema\'nska et al. in 2015 \cite{Lemans}. A dominating set $S$ is called a super dominating set of $G$, if for every vertex $u\in \overline{S}$, there exists $v\in S$ such that $N(v)\cap \overline{S}=\{u\}$. The cardinality of a smallest super dominating set of $G$, denoted by $\gamma_{sp}(G)$, is the super domination number of $G$. We refer the reader to \cite{Alf,Dett,Nima,Kri,Kle,Zhu} for more details on super dominating set of a graph.
\medskip
Let $G$ be a connected graph constructed from pairwise disjoint connected graphs
$G_1,\ldots ,G_n$ as follows. Select a vertex of $G_1$, a vertex of $G_2$, and identify these two vertices. Then continue in this manner inductively. Note that the graph $G$ constructed in this way has a tree-like structure, the $G_i$'s being its building stones (see Figure \ref{Figure1}).
\begin{figure}[!h]
\begin{center}
\psscalebox{0.6 0.6}
{
\begin{pspicture}(0,-4.819607)(13.664668,2.90118)
\pscircle[linecolor=black, linewidth=0.04, dimen=outer](5.0985146,1.0603933){1.6}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.898515,0.66039336)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.898515,0.26039338)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(12.698514,0.66039336)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(10.298514,1.0603933)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.098515,-0.9396066)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.098515,-0.9396066)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.898515,0.66039336)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.898515,-0.9396066)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(11.898515,-0.9396066)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(12.698514,-0.9396066)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(12.698514,0.26039338)
}
\pscustom[linecolor=black, linewidth=0.04]
{
\newpath
\moveto(14.298514,0.66039336)
\closepath}
\psbezier[linecolor=black, linewidth=0.04](11.598515,1.0203934)(12.220886,1.467607)(12.593457,1.262929)(13.268515,1.0203933715820312)(13.943572,0.7778577)(12.308265,0.90039337)(12.224765,0.10039337)(12.141264,-0.69960666)(10.976142,0.5731798)(11.598515,1.0203934)
\psbezier[linecolor=black, linewidth=0.04](4.8362556,-3.2521083)(4.063277,-2.2959895)(4.6714916,-1.9655427)(4.891483,-0.99004078729821)(5.111474,-0.014538889)(5.3979383,-0.84551746)(5.373531,-1.8452196)(5.349124,-2.8449216)(5.6092343,-4.208227)(4.8362556,-3.2521083)
\psbezier[linecolor=black, linewidth=0.04](8.198514,-2.0396066)(6.8114076,-1.3924998)(6.844908,-0.93520766)(5.8785143,-1.6996066284179687)(4.9121203,-2.4640057)(5.6385145,-3.4996066)(6.3385143,-2.8396065)(7.0385146,-2.1796067)(9.585621,-2.6867135)(8.198514,-2.0396066)
\pscircle[linecolor=black, linewidth=0.04, dimen=outer](7.5785146,-3.6396067){1.18}
\psdots[linecolor=black, dotsize=0.2](11.418514,0.7403934)
\psdots[linecolor=black, dotsize=0.2](9.618514,1.5003934)
\psdots[linecolor=black, dotsize=0.2](6.6585145,0.7403934)
\psdots[linecolor=black, dotsize=0.2](3.5185144,0.96039337)
\psdots[linecolor=black, dotsize=0.2](5.1185145,-0.51960665)
\psdots[linecolor=black, dotsize=0.2](5.3985143,-2.5796065)
\psdots[linecolor=black, dotsize=0.2](7.458514,-2.4596066)
\rput[bl](8.878514,0.42039338){$G_i$}
\rput[bl](7.478514,-4.1196065){$G_j$}
\psbezier[linecolor=black, linewidth=0.04](0.1985144,0.22039337)(0.93261385,0.89943534)(2.1385605,0.6900083)(3.0785143,0.9403933715820313)(4.0184684,1.1907784)(3.248657,0.442929)(2.2785144,0.20039338)(1.3083719,-0.042142253)(-0.53558505,-0.45864862)(0.1985144,0.22039337)
\psbezier[linecolor=black, linewidth=0.04](2.885918,1.4892112)(1.7389486,2.4304078)(-0.48852357,3.5744174)(0.5524718,2.1502930326916756)(1.5934672,0.7261687)(1.5427756,1.2830372)(2.5062277,1.2429687)(3.46968,1.2029002)(4.0328875,0.5480146)(2.885918,1.4892112)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](9.038514,0.7403934)(2.4,0.8)
\psbezier[linecolor=black, linewidth=0.04](9.399693,1.883719)(9.770389,2.812473)(12.016343,2.7533927)(13.011008,2.856550531577144)(14.005673,2.9597082)(13.727474,2.4925284)(12.761896,2.2324166)(11.796317,1.9723049)(9.028996,0.9549648)(9.399693,1.883719)
\pscircle[linecolor=black, linewidth=0.04, dimen=outer](9.898515,-3.3396065){1.2}
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.2985144,-1.1396066)(0.4,1.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.4985144,-3.3396065)(1.8,0.8)
\psdots[linecolor=black, dotsize=0.2](2.2985144,0.26039338)
\psdots[linecolor=black, dotsize=0.2](2.2985144,-2.5396066)
\psdots[linecolor=black, dotsize=0.2](8.698514,-3.3396065)
\psdots[linecolor=black, dotsize=0.2](9.898515,-2.1396067)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](10.298514,-1.5396066)(2.0,0.6)
\end{pspicture}
}
\end{center}
\caption{\label{Figure1} A graph with subgraph units $G_1,\ldots , G_n$.}
\end{figure}
We usually say that $G$ is obtained by point-attaching from $G_1,\ldots , G_n$ and that the $G_i$'s are the primary subgraphs of $G$. A particular case of this construction is the decomposition of a connected graph into blocks (see \cite{Deutsch}).
We refer the reader to \cite{Alikhani1,Nima0,Moster} for more details and results on graphs obtained from primary subgraphs.
\medskip
In this paper, we continue the study of the super domination number of a graph. In Section 2, we recall some previous results, give the definition of $G\odot v$, and find a sharp upper bound for its super domination number. In Section 3, we obtain some results on the chain of graphs, which is a special case of graphs obtained by point-attaching from primary subgraphs. Finally, in Section 4, we find some sharp bounds on the super domination number of the bouquet of graphs, which is another special case of graphs made by point-attaching.
\section{Super domination number of $G\odot v$}
$G\odot v$ is the graph obtained from $G$ by the removal of all edges between
any pair of neighbours of $v$ \cite{Alikhani}. Some results on this operation can be found in \cite{Nima1}. In this section, we study the super domination number of $G\odot v$.
First, we state some known results.
\begin{theorem}\cite{Lemans}\label{thm-1}
Let $G$ be a graph of order $n$ which is not an empty graph. Then,
$$1\leq \gamma(G) \leq \frac{n}{2} \leq \gamma_{sp}(G) \leq n-1.$$
\end{theorem}
\begin{theorem}\cite{Lemans}\label{thm-2}
\begin{itemize}
\item[(i)]
For a path graph $P_n$ with $n\geq 3$, $\gamma_{sp}(P_n)=\lceil \frac{n}{2} \rceil$.
\item[(ii)]
For a cycle graph $C_n$,
\begin{displaymath}
\gamma_{sp}(C_n)= \left\{ \begin{array}{ll}
\lceil\frac{n}{2}\rceil & \textrm{if $n \equiv 0, 3 \pmod 4$, }\\
\\
\lceil\frac{n+1}{2}\rceil & \textrm{otherwise.}
\end{array} \right.
\end{displaymath}
\item[(iii)]
For the complete graph $K_n$, $\gamma_{sp}(K_n)=n-1$.
\item[(iv)]
For the complete bipartite graph $K_{n,m}$, $\gamma_{sp}(K_{n,m})=n+m-2$, where $\min\{n,m\}\geq 2$.
\item[(v)]
For the star graph $K_{1,n}$, $\gamma_{sp}(K_{1,n})=n$.
\end{itemize}
\end{theorem}
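The small cases of Theorem \ref{thm-2} are easy to confirm by exhaustive search directly from the definition of a super dominating set; a minimal sketch (the graph constructors and function names are ours, for illustration only):

```python
# Exhaustive check of small gamma_sp values from Theorem (thm-2).
from itertools import combinations

def gamma_sp(vertices, nbr):
    """Smallest |S| such that every u outside S has some v in S with
    N(v) intersected with (V - S) equal to {u}."""
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            comp = set(vertices) - set(S)
            if all(any(set(nbr[v]) & comp == {u} for v in S) for u in comp):
                return size

path = lambda n: (list(range(n)),
                  {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)})
cycle = lambda n: (list(range(n)),
                   {i: [(i - 1) % n, (i + 1) % n] for i in range(n)})
complete = lambda n: (list(range(n)),
                      {i: [j for j in range(n) if j != i] for i in range(n)})

assert gamma_sp(*path(5)) == 3      # ceil(5/2)
assert gamma_sp(*cycle(5)) == 3     # 5 = 1 mod 4, so ceil((5+1)/2)
assert gamma_sp(*complete(4)) == 3  # n - 1
```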
\begin{theorem}\cite{Nima}\label{Firend-thm}
For the friendship graph $F_n$, $\gamma_{sp}(F_n)=n+1$.
\end{theorem}
\begin{theorem}\cite{Nima}\label{G/v}
Let $G=(V,E)$ be a graph and let $v\in V$ be a vertex which is not pendant. Then,
$$ \gamma_{sp}(G/v)\leq \gamma_{sp} (G)+\lfloor \frac{{\rm deg} (v)}{2} \rfloor -1,$$
where $G/v$ is the graph obtained by deleting $v$ and putting a clique on the open neighbourhood of $v$.
\end{theorem}
Here we consider $G\odot v$. First suppose that $v$ is a pendant vertex. Then, by the definition of $G\odot v$, we have $G\odot v=G$. So we have the following easy result:
\begin{proposition}\label{Godotvpendant}
Let $G=(V,E)$ be a graph and let $v\in V$ be a pendant vertex. Then,
$$ \gamma_{sp}(G\odot v)= \gamma_{sp} (G).$$
\end{proposition}
Hence, there is no reason to compute $\gamma_{sp}(G\odot v)$ when $v$ is a pendant vertex. Now we find a sharp upper bound for the super domination number of $G\odot v$ when $v$ is not a pendant vertex.
\begin{theorem}\label{Godotv}
Let $G=(V,E)$ be a graph and let $v\in V$ be a vertex which is not pendant. Then,
$$\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)+\lfloor \frac{{\rm deg} (v)}{2} \rfloor -1.$$
\end{theorem}
\begin{proof}
Suppose that $v\in V$ with ${\rm deg} (v)=n\geq2$ and $N(v)=\{v_1,v_2,\ldots,v_n\}$, and let $D$ be a super dominating set for $G$. We have the following cases:
\begin{itemize}
\item[(i)]
$v\notin D$. So, there exists $v_r\in N(v)\cap D$ such that $N(v_r)\cap \overline{D} = \{v\}$, which means that all other neighbours of $v_r$ are in $D$ too. There is no vertex $v_p\in N(v)$ that dominates some $v_q\in N(v)\cap \overline{D}$ and satisfies the condition of a super dominating set, because in that case we would have $\{v_q,v\}\subseteq N(v_p)\cap \overline{D}$, which is a contradiction. So all vertices in $N(v)\cap \overline{D}$ are dominated by vertices which are not in $N(v)$. Now, after removing all edges between
any pair of neighbours of $v$, $D$ is a super dominating set for $G\odot v$ too. So, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)$.
\item[(ii)]
$v\in D$ and there exists $v_i\in N(v)$, for some $1 \leq i \leq n$, such that $N(v)\cap \overline{D} = \{v_i\}$. So, all other neighbours of $v$ should be in $D$. Now, after removing all edges between
any pair of neighbours of $v$, $D$ is a super dominating set for $G\odot v$ too. So, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)$.
\item[(iii)]
$v\in D$ and there is no $v_i\in N(v)$, $1 \leq i \leq n$, such that $N(v)\cap \overline{D} = \{v_i\}$. If $v_i\in N(v)$ is dominated by some $v_i'\notin N(v)$, then after removing all edges between
any pair of neighbours of $v$, the vertex $v_i$ can still be dominated by $v_i'$ and $N(v_i')\cap \overline{D} = \{v_i\}$. So we keep all vertices of $D-N(v)$ in our dominating set. If $v_i\in N(v)$ is dominated by some $v_j\in N(v)$ with $N(v_j)\cap \overline{D} = \{v_i\}$, then we simply add $v_i$ to our dominating set after removing all edges between
any pair of neighbours of $v$. There are at most $\lfloor \frac{n}{2} \rfloor$ vertices with this condition. Without loss of generality, suppose that $v_1$ dominates $v_2$, $v_3$ dominates $v_4$, $v_5$ dominates $v_6$, and so on. Since all vertices in $N(v)-\{v_2\}$ are in $D\cup\{v_4,v_6,\ldots\}$, the vertex $v_2$ is now dominated by $v$, and by our argument, $D\cup\{v_4,v_6,\ldots\}$ is a super dominating set for $G\odot v$. Hence, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)+\lfloor \frac{n}{2} \rfloor -1$.
\end{itemize}
Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
\begin{remark}
The upper bound in Theorem \ref{Godotv} is sharp. It suffices to consider the friendship graph $G=F_n$ and the vertex $v$ with ${\rm deg}(v)=2n$. By Theorem \ref{Firend-thm}, $\gamma_{sp} (G)=n+1$. One can easily check that $G\odot v=K_{1,2n}$, and then by Theorem \ref{thm-2}, $\gamma_{sp}(G\odot v)=2n$. Therefore $\gamma_{sp}(G\odot v)= \gamma_{sp} (G)+\lfloor \frac{{\rm deg} (v)}{2} \rfloor -1$.
\end{remark}
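The small graphs appearing in this remark can also be checked by machine. The following sketch is a brute-force checker for the super domination number (the helper names \texttt{gamma\_sp}, \texttt{star} and \texttt{friendship} are ours, not from the paper); it verifies $\gamma_{sp}(K_{1,n})=n$ and $\gamma_{sp}(F_n)=n+1$ on small instances, directly from the definition of a super dominating set.

```python
from itertools import combinations

def gamma_sp(adj):
    """Brute-force super domination number.
    adj: dict mapping each vertex to its set of neighbours."""
    V = set(adj)
    for size in range(1, len(V) + 1):
        for D in map(set, combinations(list(V), size)):
            comp = V - D
            # every v outside D needs some u in D with N(u) & comp == {v}
            if all(any(adj[u] & comp == {v} for u in D) for v in comp):
                return size
    return len(V)

def star(n):
    """K_{1,n}: centre 0, leaves 1..n."""
    adj = {0: set(range(1, n + 1))}
    for i in range(1, n + 1):
        adj[i] = {0}
    return adj

def friendship(n):
    """F_n: n triangles sharing the centre vertex 0."""
    adj = {0: set(range(1, 2 * n + 1))}
    for i in range(1, 2 * n + 1):
        adj[i] = {0}
    for i in range(1, 2 * n + 1, 2):
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    return adj
```

This is exponential in the number of vertices, so it is only a sanity check for the small extremal examples used here, not a general algorithm.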
We end this section with an immediate consequence of Theorems \ref{G/v} and \ref{Godotv}.
\begin{corollary}
Let $G=(V,E)$ be a graph and let $v\in V$ be a non-pendant vertex. Then,
$$ \gamma _{sp}(G) \geq \frac{\gamma _{sp}(G\odot v)+\gamma _{sp}(G/v)}{2}-\lfloor \frac{{\rm deg} (v)}{2} \rfloor +1.$$
\end{corollary}
\section{Super domination number of chain of graphs}
In this section, we consider a special case of graphs obtained by point-attaching from primary subgraphs, called the chain of the graphs $G_1,\ldots , G_n$. Suppose that $x_i,y_i \in V(G_i)$. Let $C(G_1,...,G_n)$ be the chain of the graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i, y_i\}_{i=1}^n$, which is obtained by identifying the vertex $y_i$ with the vertex $x_{i+1}$ for $i=1,2,\ldots,n-1$ (see Figure \ref{chain-n}).
\begin{figure}[!h]
\begin{center}
\psscalebox{0.75 0.75}
{
\begin{pspicture}(0,-3.9483333)(12.236668,-2.8316667)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](1.2533334,-3.4416668)(1.0,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](3.2533333,-3.4416668)(1.0,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](5.2533336,-3.4416668)(1.0,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](8.853333,-3.4416668)(1.0,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](10.853333,-3.4416668)(1.0,0.4)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](2.2533333,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](0.25333345,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](2.2533333,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](4.2533336,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](4.2533336,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](9.853333,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](9.853333,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](11.853333,-3.4416666)
\rput[bl](0.0,-3.135){$x_1$}
\rput[bl](2.0400002,-3.2016668){$x_2$}
\rput[bl](3.9866667,-3.1216667){$x_3$}
\rput[bl](2.1733334,-3.9483335){$y_1$}
\rput[bl](4.12,-3.9483335){$y_2$}
\rput[bl](6.1733336,-3.8816667){$y_3$}
\rput[bl](0.9600001,-3.6283333){$G_1$}
\rput[bl](3.0,-3.5883334){$G_2$}
\rput[bl](5.04,-3.5616667){$G_3$}
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](6.2533336,-3.4416666)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](7.8533335,-3.4416666)
\psdots[linecolor=black, dotsize=0.1](6.6533337,-3.4416666)
\psdots[linecolor=black, dotsize=0.1](7.0533333,-3.4416666)
\psdots[linecolor=black, dotsize=0.1](7.4533334,-3.4416666)
\rput[bl](9.6,-3.0816667){$x_n$}
\rput[bl](11.826667,-3.8683333){$y_n$}
\rput[bl](9.586667,-3.9483335){$y_{n-1}$}
\rput[bl](8.533334,-3.6016667){$G_{n-1}$}
\rput[bl](7.4,-3.1616666){$x_{n-1}$}
\rput[bl](10.613334,-3.575){$G_n$}
\end{pspicture}
}
\end{center}
\caption{Chain of $n$ graphs $G_1,G_2, \ldots , G_n$.} \label{chain-n}
\end{figure}
Before we start the study of the super domination number of chains of graphs, we mention the following easy observation, which is a direct consequence of the definitions of super dominating set and super domination number:
\begin{proposition}\label{pro-disconnect}
Let $G$ be a disconnected graph with components $G_1$ and $G_2$. Then
$$\gamma _{sp}(G)=\gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\end{proposition}
Now we consider the chain of two graphs and find sharp upper and lower bounds for its super domination number.
\begin{theorem}\label{chain2-thm}
Let $G_1$ and $G_2$ be two disjoint connected graphs and let
$x_i,y_i \in V(G_i)$ for $i\in \{1,2\}$. Let $C(G_1,G_2)$ be the chain of the graphs $\{G_i\}_{i=1}^2$ with respect to the vertices $\{x_i, y_i\}_{i=1}^2$, which is obtained by identifying the vertex $y_1$ with the vertex $x_{2}$. Let this vertex of $V(C(G_1,G_2))$ be $z$ (see Figure \ref{chain-2}). Then,
$$\gamma _{sp}(G_1)+\gamma _{sp}(G_2) -1\leq \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\end{theorem}
\begin{figure}
\begin{center}
\psscalebox{0.6 0.6}
{
\begin{pspicture}(0,-4.8)(20.99,-0.8)
\pscircle[linecolor=black, linewidth=0.08, dimen=outer](2.26,-2.4){1.6}
\psellipse[linecolor=black, linewidth=0.08, dimen=outer](7.66,-2.4)(1.8,0.8)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.66,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.86,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.86,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.46,-2.4)
\rput[bl](0.0,-2.56){$x_1$}
\rput[bl](5.22,-2.5){$x_2$}
\rput[bl](4.16,-2.56){$y_1$}
\rput[bl](9.82,-2.5){$y_2$}
\rput[bl](2.06,-4.8){$G_1$}
\rput[bl](7.5,-4.76){$G_2$}
\pscircle[linecolor=black, linewidth=0.08, dimen=outer](15.06,-2.4){1.6}
\psellipse[linecolor=black, linewidth=0.08, dimen=outer](18.46,-2.4)(1.8,0.8)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.46,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.66,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.66,-2.4)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](20.26,-2.4)
\rput[bl](12.8,-2.56){$x_1$}
\rput[bl](20.62,-2.5){$y_2$}
\rput[bl](15.86,-4.8){$C(G_1,G_2)$}
\rput[bl](16.66,-1.96){$z$}
\end{pspicture}
}
\end{center}
\caption{Graphs $G_1,G_2$ and $C(G_1,G_2)$ with respect to vertices $y_1$ and $x_2$, respectively.} \label{chain-2}
\end{figure}
\begin{proof}
First, we find an upper bound for $\gamma _{sp}(C(G_1,G_2))$. Let $S_1$ be a super dominating set of $G_1$ with $\gamma _{sp}(G_1)=|S_1|$, and let $S_2$ be a super dominating set of $G_2$ with $\gamma _{sp}(G_2)=|S_2|$. We have the following cases:
\begin{itemize}
\item[(i)]
$y_1 \in S_1$ and $x_2 \in S_2$. In this case, $y_1$ and $x_2$ may or may not have influence on the vertices in $\overline{S_1}$ and $\overline{S_2}$, respectively. So we consider the following cases:
\begin{itemize}
\item[(i.1)]
There exist $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ and $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. So $N(y_1)-\{g_1\}\subseteq S_1$ and $N(x_2)-\{g_2\}\subseteq S_2$. Let
$$S=\left( S_1 \cup S_2 \cup \{z,g_1\} \right)-\{y_1,x_2\}. $$
$S$ is a super dominating set for $C(G_1,G_2)$, because $g_2$ is dominated by $z$, and since all neighbours of $y_1$ are now in $S$, we have $N(z)\cap \overline{S} = \{g_2\}$. The rest of the vertices in $\overline{S}$ are dominated by the same vertex as before, so the definition of super dominating set holds. Hence in this case,
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\item[(i.2)]
There exists $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$, but there is no $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. We know that $N(y_1)-\{g_1\}\subseteq S_1$, but $N(x_2)\cap \overline{S_2}$ may contain more than one vertex, or $N(x_2)$ may be entirely contained in $S_2$. Since we have no knowledge about $N(x_2)\cap \overline{S_2}$, let
$$S=\left( S_1 \cup S_2 \cup \{z,g_1\} \right)-\{y_1,x_2\}. $$
Clearly, $S$ is a super dominating set for $C(G_1,G_2)$, since all vertices in $\overline{S}$ are dominated by the same vertex as before. Hence
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\item[(i.3)]
There is no $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$, but there exists $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. This case is similar to part (i.2).
\item[(i.4)]
There is no $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ and no $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. Here $N(y_1)\cap \overline{S_1}$ may contain more than one vertex, or $N(y_1)$ may be entirely contained in $S_1$, and the same holds for $x_2$. Let
$$S=\left( S_1 \cup S_2 \cup \{z\} \right)-\{y_1,x_2\}. $$
Then all vertices in $\overline{S}$ are dominated by the same vertex as before, and the definition of super dominating set holds. So we have
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)-1.$$
\end{itemize}
\item[(ii)]
$y_1 \in S_1$ and $x_2 \notin S_2$. In this case, we only need to pay attention to $y_1$. We consider the following cases:
\begin{itemize}
\item[(ii.1)]
There exists $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$. So $N(y_1)-\{g_1\}\subseteq S_1$. Let
$$S=\left( S_1 \cup S_2 \cup \{g_1\} \right)-\{y_1\}. $$
We show that $S$ is a super dominating set for $C(G_1,G_2)$. By the definition of $S$ we have $g_1\in S$, so we do not need to consider it in the definition of super dominating set. Since $x_2 \notin S_2$, there exists $h\in S_2$ such that $N(h)\cap \overline{S_2} = \{x_2\}$. Now consider $z$: clearly $N(h)\cap \overline{S} = \{z\}$. The rest of the vertices in $\overline{S}$ are dominated by the same vertex as before, and the definition of super dominating set holds. So
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\item[(ii.2)]
There is no $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$. So simply let
$$S=\left( S_1 \cup S_2 \cup \{z\} \right)-\{y_1\}. $$
By an easy argument as before, we conclude that $S$ is a super dominating set for $C(G_1,G_2)$, and therefore
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\end{itemize}
\item[(iii)]
$y_1 \notin S_1$ and $x_2 \in S_2$. This case is similar to part (ii).
\item[(iv)]
$y_1 \notin S_1$ and $x_2 \notin S_2$. Let $S=S_1 \cup S_2$. Then, by a similar argument to part (ii.1), $S$ is a super dominating set for $C(G_1,G_2)$, and hence
$$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\end{itemize}
Therefore, in all cases we have $\gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)$. Now we find a lower bound for $\gamma _{sp}(C(G_1,G_2))$. First we take a super dominating set of $C(G_1,G_2)$; let this set be $D$ with $\gamma _{sp}(C(G_1,G_2))=|D|$. Using this set, we construct super dominating sets for $G_1$ and $G_2$. Consider the following cases:
\begin{itemize}
\item[(i)]
$z\in D$. In this case, $z$ may or may not have influence on the vertices in $\overline{D}$. So we consider the following cases:
\begin{itemize}
\item[(i.1)]
There exists $u\in N(z)$ such that $N(z)\cap \overline{D} = \{u\}$. So $N(z)-\{u\}\subseteq D$, and therefore all other neighbours of $z$ are in $D$. Without loss of generality, suppose that $u\in V(G_1)$. Now we separate the components $G_1$ and $G_2$ from $C(G_1,G_2)$ to form a disconnected graph with components $G_1$ and $G_2$, replacing the vertex $z$ by $y_1$ in $G_1$ and by $x_2$ in $G_2$ (see Figure \ref{chain-2}). Let
$$D_1=\left( D \cup \{y_1\}\right)-\left( V(G_2) \cup \{z\} \right).$$
We show that $D_1$ is a super dominating set for $G_1$. The vertex $u$ is now dominated by $y_1 \in D_1$, and since $N(z)-\{u\}\subseteq D$, we have $N(y_1)-\{u\}\subseteq D_1$. Hence $N(y_1)\cap \overline{D_1} = \{u\}$. The rest of the vertices in $\overline{D_1}$ are dominated by the same vertex as before, and the definition of super dominating set holds. So $\gamma _{sp}(G_1)\leq|D_1|$. Now consider $G_2$. Let
$$D_2=\left( D \cup \{x_2\}\right)-\left( V(G_1) \cup \{z\} \right).$$
Since $x_2 \in D_2$, clearly all vertices in $\overline{D_2}$ are dominated by the same vertex as before. So the definition of super dominating set holds and $\gamma _{sp}(G_2)\leq|D_2|$. By Proposition \ref{pro-disconnect}, the super domination number of a disconnected graph with components $G_1$ and $G_2$ is the sum of the super domination numbers of the components. Since
$$D_1\cup D_2=\left( D \cup \{y_1,x_2\}\right) - \{z\},$$
and $D_1\cap D_2= \emptyset$, then
$$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq|D_1|+|D_2|=|D_1\cup D_2|=\gamma _{sp}(C(G_1,G_2))+1.$$
\item[(i.2)]
There is no $u\in N(z)$ such that $N(z)\cap \overline{D} = \{u\}$. As in the previous case, we form $G_1$ and $G_2$. Let
$$D_1=\left( D \cup \{y_1\}\right)-\left( V(G_2) \cup \{z\} \right),$$
and
$$D_2=\left( D \cup \{x_2\}\right)-\left( V(G_1) \cup \{z\} \right).$$
All vertices in $\overline{D_1}$ and $\overline{D_2}$ are dominated by the same vertex as before. So, by a similar argument to the previous case, we have
$$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq \gamma _{sp}(C(G_1,G_2))+1.$$
\end{itemize}
\item[(ii)]
$z\notin D$. So there exists $v\in D$ such that $N(v)\cap \overline{D} = \{z\}$. We form $G_1$ and $G_2$ as in part (i.1). Without loss of generality, suppose that $v\in V(G_1)$. Let
$$D_1= D - V(G_2) ,$$
and
$$D_2=\left( D \cup \{x_2\}\right)- V(G_1).$$
$D_1$ is a super dominating set for $G_1$, because $y_1$ is dominated by $v$ with $N(v)\cap \overline{D_1} = \{y_1\}$, and the rest of the vertices in $\overline{D_1}$ are dominated by the same vertex as before. So $\gamma _{sp}(G_1)\leq|D_1|$. Since $x_2 \in D_2$, all vertices in $\overline{D_2}$ are dominated by the same vertex as before and the definition of super dominating set holds. So $D_2$ is a super dominating set for $G_2$, and hence $\gamma _{sp}(G_2)\leq|D_2|$. Since $D_1\cup D_2= D \cup \{x_2\}$ and $D_1\cap D_2= \emptyset$, then
$$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq\gamma _{sp}(C(G_1,G_2))+1.$$
\end{itemize}
Hence, in all cases, $ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq\gamma _{sp}(C(G_1,G_2))+1$, and therefore we have the result.
\hfill $\square$\medskip
\end{proof}
\begin{remark}
The bounds in Theorem \ref{chain2-thm} are sharp. For the upper bound, it suffices to consider $G_1=G_2=P_3$. Then, by Theorem \ref{thm-2}, $ \gamma _{sp}(G_1)=\gamma _{sp}(G_2)=2$. Now let $y_1$ and $x_2$ be the vertices of degree 2 in $G_1$ and $G_2$, respectively. One can easily check that $C(G_1,G_2)=K_{1,4}$, and by Theorem \ref{thm-2}, $\gamma _{sp}(C(G_1,G_2))=4=\gamma _{sp}(G_1)+\gamma _{sp}(G_2)$. For the lower bound, it suffices to consider $H_1=F_4$ and $H_2=F_5$, where $F_n$ is the friendship graph of order $n$. Then, by Theorem \ref{Firend-thm}, $ \gamma _{sp}(H_1)=5$ and $\gamma _{sp}(H_2)=6$. Now let $y_1$ be the vertex of degree 8 in $H_1$ and $x_2$ the vertex of degree 10 in $H_2$. One can easily check that $C(H_1,H_2)=F_9$, and by Theorem \ref{Firend-thm}, $\gamma _{sp}(C(H_1,H_2))=10=\gamma _{sp}(H_1)+\gamma _{sp}(H_2)-1$.
\end{remark}
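The upper-bound example in this remark is small enough to verify mechanically. The following sketch (helper names \texttt{gamma\_sp} and \texttt{chain2} are ours) builds $C(G_1,G_2)$ by identifying $y_1$ with $x_2$ into a glued vertex $z$ and checks that $C(P_3,P_3)=K_{1,4}$ attains the upper bound of Theorem \ref{chain2-thm}:

```python
from itertools import combinations

def gamma_sp(adj):
    """Brute-force super domination number; adj: vertex -> set of neighbours."""
    V = set(adj)
    for size in range(1, len(V) + 1):
        for D in map(set, combinations(list(V), size)):
            comp = V - D
            if all(any(adj[u] & comp == {v} for u in D) for v in comp):
                return size
    return len(V)

def chain2(adj1, y1, adj2, x2):
    """C(G1,G2): identify y1 in G1 with x2 in G2; the glued vertex is 'z'."""
    adj = {}
    for i, (a, glue) in enumerate(((adj1, y1), (adj2, x2)), start=1):
        tag = lambda v, i=i, glue=glue: 'z' if v == glue else (i, v)
        for v, nbrs in a.items():
            adj.setdefault(tag(v), set()).update(tag(w) for w in nbrs)
    return adj

p3 = {0: {1}, 1: {0, 2}, 2: {1}}   # path on 3 vertices
c = chain2(p3, 1, p3, 1)           # glue at the middle vertices -> K_{1,4}
```

The friendship-graph example $C(F_4,F_5)=F_9$ follows the same pattern but is already too large for this naive exponential search.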
We end this section with an immediate consequence of Theorem \ref{chain2-thm}.
\begin{corollary}
Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let
$x_i,y_i \in V(G_i)$. Let $C(G_1,...,G_n)$ be the chain of the graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i, y_i\}_{i=1}^n$, which is obtained by identifying the vertex $y_i$ with the vertex $x_{i+1}$ for $i=1,2,\ldots,n-1$ (Figure \ref{chain-n}). Then,
$$ \left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n \leq \gamma _{sp}(C(G_1,...,G_n)) \leq \sum_{i=1}^{n} \gamma _{sp}(G_i).$$
\end{corollary}
\section{Super domination number of bouquet of graphs}
In this section, we consider another special case of graphs obtained by point-attaching from primary subgraphs.
Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let
$x_i \in V(G_i)$. Let $B(G_1,...,G_n)$ be the bouquet of the graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i\}_{i=1}^n$, obtained by identifying the vertex $x_i$ of the graph $G_i$ with $x$ (see Figure \ref{bouquet-n}).
\begin{figure}[!h]
\begin{center}
\psscalebox{0.75 0.75}
{
\begin{pspicture}(0,-6.76)(5.6,-1.16)
\rput[bl](2.6133332,-3.64){$x_1$}
\rput[bl](3.0533333,-4.0933332){$x_2$}
\rput[bl](2.6533334,-4.5466666){$x_3$}
\rput[bl](2.5866666,-1.8133334){$G_1$}
\rput[bl](4.72,-4.1066666){$G_2$}
\rput[bl](2.56,-6.2){$G_3$}
\rput[bl](2.1333334,-4.04){$x_n$}
\rput[bl](0.21333334,-4.0933332){$G_n$}
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](1.4,-3.96)(1.4,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.8,-2.56)(0.4,1.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](4.2,-3.96)(1.4,0.4)
\psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.8,-5.36)(0.4,1.4)
\psdots[linecolor=black, dotsize=0.1](0.8,-4.76)
\psdots[linecolor=black, dotsize=0.1](1.2,-5.16)
\psdots[linecolor=black, dotsize=0.1](1.6,-5.56)
\psdots[linecolor=black, dotstyle=o, dotsize=0.5, fillcolor=white](2.8,-3.96)
\rput[bl](2.6533334,-4.04){$x$}
\end{pspicture}
}
\end{center}
\caption{Bouquet of $n$ graphs $G_1,G_2, \ldots , G_n$ and $x_1=x_2=\ldots=x_n=x$.} \label{bouquet-n}
\end{figure}
Clearly, the bouquet of two graphs $G_1$ and $G_2$ with respect to vertices $x_1\in V(G_1)$ and $x_2\in V(G_2)$ is the same as the chain of these two graphs. So, by Theorem \ref{chain2-thm}, we have:
\begin{proposition}\label{bouquet2-prop}
Let $G_1$ and $G_2$ be two disjoint connected graphs and let
$x_i \in V(G_i)$ for $i\in \{1,2\}$. Let $B(G_1,G_2)$ be the bouquet of the graphs $\{G_i\}_{i=1}^2$ with respect to the vertices $\{x_i\}_{i=1}^2$, which is obtained by identifying the vertex $x_1$ with the vertex $x_{2}$. Then,
$$\gamma _{sp}(G_1)+\gamma _{sp}(G_2) -1\leq \gamma _{sp}(B(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$
\end{proposition}
Here we consider the bouquet of three graphs and find upper and lower bounds for its super domination number.
\begin{theorem}\label{bouquet3-thm}
Let $G_1$, $G_2$ and $G_3$ be three pairwise disjoint connected graphs and let
$x_i \in V(G_i)$ for $i\in \{1,2,3\}$. Let $B(G_1,G_2,G_3)$ be the bouquet of the graphs $\{G_i\}_{i=1}^3$ with respect to the vertices $\{x_i\}_{i=1}^3$, which is obtained by identifying these vertices. Then,
$$\gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3) -2\leq \gamma _{sp}(B(G_1,G_2,G_3)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3).$$
\end{theorem}
\begin{proof}
First we consider $G_1$ and $G_2$. Let $H=B(G_1,G_2)$ be their bouquet with respect to the vertices $\{x_i\}_{i=1}^2$, obtained by identifying the vertex $x_1$ with the vertex $x_{2}$; let this vertex be $y$. Now consider the graphs $H$ and $G_3$, and let $B(H,G_3)$ be the bouquet of these graphs with respect to the vertices $y$ and $x_3$. Clearly, $B(G_1,G_2,G_3)=B(H,G_3)$. First we prove the lower bound. By Proposition \ref{bouquet2-prop}, we have:
\begin{align*}
\gamma _{sp}(B(G_1,G_2,G_3)) &= \gamma _{sp}(B(H,G_3)) \\
&\geq \gamma _{sp}(H)+\gamma _{sp}(G_3) -1 \\
&= \gamma _{sp}(B(G_1,G_2))+\gamma _{sp}(G_3) -1 \\
&\geq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3) -2.
\end{align*}
By the same argument, we have $\gamma _{sp}(B(G_1,G_2,G_3)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3)$, and therefore we have the result.
\hfill $\square$\medskip
\end{proof}
As an immediate result of Proposition \ref{bouquet2-prop} and Theorem \ref{bouquet3-thm}, by induction we have:
\begin{corollary}\label{Cor-bouq-n}
Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let
$x_i \in V(G_i)$. Let $B(G_1,...,G_n)$ be the bouquet of the graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i\}_{i=1}^n$, which is obtained by identifying these vertices. Then,
$$ \left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n +1 \leq \gamma _{sp}(B(G_1,...,G_n)) \leq \sum_{i=1}^{n} \gamma _{sp}(G_i).$$
\end{corollary}
We end this section by showing that the bounds for $\gamma _{sp}(B(G_1,...,G_n))$ are sharp.
\begin{remark}
The bounds in Corollary \ref{Cor-bouq-n} are sharp. For the lower bound, it suffices to consider $G_1=G_2= \ldots = G_n=F_2$, where $F_n$ is the friendship graph of order $n$, and let $x_i$, for $i=1,2,\ldots,n$, be the vertex of degree 4 in $F_2$. One can easily check that $B(G_1,...,G_n)=F_{2n}$. By Theorem \ref{Firend-thm}, we have $\gamma _{sp}(B(G_1,...,G_n))=2n+1$. Also $\gamma _{sp}(F_2)=3$. So $\gamma _{sp}(B(G_1,...,G_n))=\left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n +1$. For the upper bound, it suffices to consider $H_1=H_2= \ldots = H_n=P_2$, where $P_n$ is the path graph of order $n$. Clearly, $\gamma _{sp}(P_2)=1$ and $B(H_1,...,H_n)=K_{1,n}$. By Theorem \ref{thm-2}, $\gamma _{sp}(B(H_1,...,H_n))=n$. Hence, $\gamma _{sp}(B(H_1,...,H_n))= \sum_{i=1}^{n} \gamma _{sp}(H_i)$.
\end{remark}
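Both extremal families in this remark are within reach of the brute-force check used earlier. The sketch below (our helper names \texttt{gamma\_sp}, \texttt{bouquet}) glues $n$ rooted copies at a common vertex and tests the families $B(P_2,\ldots,P_2)=K_{1,n}$ and, since $F_2$ is itself a bouquet of two triangles, $B(K_3,\ldots,K_3)=F_n$:

```python
from itertools import combinations

def gamma_sp(adj):
    """Brute-force super domination number; adj: vertex -> set of neighbours."""
    V = set(adj)
    for size in range(1, len(V) + 1):
        for D in map(set, combinations(list(V), size)):
            comp = V - D
            if all(any(adj[u] & comp == {v} for u in D) for v in comp):
                return size
    return len(V)

def bouquet(rooted):
    """B(G1,...,Gn): identify the root of each (adj, root) pair into 'x'."""
    adj = {'x': set()}
    for i, (a, root) in enumerate(rooted):
        tag = lambda v, i=i, root=root: 'x' if v == root else (i, v)
        for v, nbrs in a.items():
            adj.setdefault(tag(v), set()).update(tag(w) for w in nbrs)
    return adj

p2 = {0: {1}, 1: {0}}                    # P_2, rooted at 0
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # K_3; n glued copies give F_n
```

Only very small $n$ are feasible here; the remark's general claim of course rests on Theorems \ref{thm-2} and \ref{Firend-thm}, not on this search.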
\section{Conclusions}
In this paper, we obtained a sharp upper bound for the super domination number of graphs modified by the operation $\odot$ on a vertex. We also
presented results on the super domination number of chains and bouquets of finite sequences of pairwise disjoint connected graphs. Topics of interest for future research include the following:
\begin{itemize}
\item[(i)]
Finding sharp lower bound for super domination number of $G\odot v$.
\item[(ii)]
Finding super domination number of link and circuit of graphs.
\item[(iii)]
Finding super domination number of subdivision and power of a graph.
\item[(iv)]
Counting the number of super dominating sets of graph $G$ with size $k\geq \gamma _{sp}(G)$.
\end{itemize}
\section{Acknowledgements}
The author would like to thank the Research Council of Norway and Department of Informatics, University of Bergen for their support.
\section{Introduction}\label{Section1}
Let $G$ be a finite group, and let $\chi$ be a character of $G$. We define the field of values of $\chi$ as
$$\mathbb{Q}(\chi)=\mathbb{Q}(\chi(g)|g \in G).$$
We also define
$$f(G)=\max_{F/\mathbb{Q}}|\{\chi \in \operatorname{Irr}(G)|\mathbb{Q}(\chi)=F\}|.$$
In \cite{Alex}, A. Moretó proved that the order of a group is bounded in terms of $f(G)$. That is, there exists $b : \mathbb{N} \rightarrow \mathbb{N}$ such that $|G|\leq b(f(G))$ for every finite group $G$. In that work, it was also proved that $f(G)=1$ if and only if $G=1$, and it was left as a problem to classify all groups with $f(G)=2$ and $f(G)=3$. Our main result provides this classification.
\begin{thmA}
Let $G$ be a finite group. Then
\begin{itemize}
\item[(i)] If $f(G)=2$, then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{F}_{21}\}$.
\item[(ii)] If $f(G)=3$, then $G \in \{\mathsf{S}_{3},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{52}, \mathsf{A}_{5},\operatorname{PSL}(2,8),\operatorname{Sz}(8)\}$.
\end{itemize}
where $\mathsf{F}_{n}$ and $\mathsf{D}_{n}$ are the Frobenius group and the dihedral group of order $n$, respectively. As a consequence, the best possible values for $b(2)$ and $b(3)$ are $21$ and $29120$, respectively.
\end{thmA}
We study the solvable case and the non-solvable case separately. In the non-solvable case, using a theorem of Navarro and Tiep \cite{Navarro-Tiep}, the condition $f(G)\leq 3$ implies that $G$ possesses $3$ rational characters. We then use the main results of \cite{Rossi} to restrict the structure of non-solvable groups with $f(G)\leq 3$. We divide the solvable case into two steps. In the first step we classify all metabelian groups with $f(G)\leq 3$. To do this, we use the condition $f(G)\leq 3$ to give an upper bound on the number of irreducible characters, or equivalently on the number of conjugacy classes. Once we have bounded the number of conjugacy classes, we use the classification given in \cite{VeraLopez} to finish our classification. In the second step we prove that if $G$ is a solvable group with $f(G)\leq 3$, then $G$ is metabelian.
We close this Introduction by remarking that, as expected, the bounds that are attainable from \cite{Alex} are far from best possible. Following the proof in \cite{Alex} we can see that if $f(G)=2$ and $G$ is solvable, then $G$ has at most $256$ conjugacy classes. It follows from Brauer's bound for the order of a group in terms of its number of conjugacy classes \cite{Brauer} that $|G|\leq 2^{2^{256}}$. We remark that even though there are asymptotically better more recent bounds, they depend on non-explicit constants and it is not clear if they are better for groups with at most $256$ conjugacy classes.
\section{Preliminaries}\label{Section2}
In this section we are going to present the basic results that will be used in this work, sometimes without citing them explicitly.
\begin{lem}
Let $G$ be a finite group. If $N$ is a normal subgroup of $G$ then $f(G/N)\leq f(G)$.
\end{lem}
\begin{lem}[Lemma 3.1 of \cite{Alex}]\label{cf}
Let $G$ be a finite group and $\chi \in \operatorname{Irr}(G)$. Then $|\mathbb{Q}(\chi):\mathbb{Q}|\leq f(G)$.
\end{lem}
As a consequence of this result, if $f(G)\leq 3$ then $|\mathbb{Q}(\chi):\mathbb{Q}|\leq 3$. Therefore, $\mathbb{Q}(\chi)$ will be $\mathbb{Q}$, a quadratic extension of $\mathbb{Q}$ or a cubic extension of $\mathbb{Q}$. We can also deduce that if $f(G)\leq 3$ and $\chi \in \operatorname{Irr}(G)$, then there exists $g \in G$ such that $\mathbb{Q}(\chi)=\mathbb{Q}(\chi(g))$.
\begin{lem}
Let $G$ be a group with $f(G)=3$ and $\chi \in \operatorname{Irr}(G)$ such that $|\mathbb{Q}(\chi):\mathbb{Q}|=2$. Then $\{\psi \in \operatorname{Irr}(G)|\mathbb{Q}(\psi)=\mathbb{Q}(\chi)\}=\{\chi,\chi^{\sigma}\}$, where $\operatorname{Gal}(\mathbb{Q}(\chi)/\mathbb{Q})=\{1,\sigma\}$.
\begin{proof}
Clearly $\{\chi,\chi^{\sigma}\} \subseteq \{\psi \in \operatorname{Irr}(G)|\mathbb{Q}(\psi)=\mathbb{Q}(\chi)\}$. Suppose that there exists $\psi \in \operatorname{Irr}(G)\setminus \{\chi,\chi^{\sigma}\}$ with $\mathbb{Q}(\psi)=\mathbb{Q}(\chi)$. Then, $\chi,\chi^{\sigma},\psi,\psi^{\sigma}$ are four irreducible characters with the same field of values, which contradicts that $f(G)\leq 3$.
\end{proof}
\end{lem}
As a consequence, if $f(G)\leq 3$, then for each quadratic extension $F$ of $\mathbb{Q}$ there are at most two irreducible characters of $G$ whose field of values is $F$.
Let $n$ be a positive integer. We define the cyclotomic extension of order $n$ as $\mathbb{Q}_{n}=\mathbb{Q}(e^{\frac{2i\pi }{n}})$. We recall that for every $\chi \in \operatorname{Irr}(G)$ and every $g\in G$, $\mathbb{Q}(\chi(g))\subseteq \mathbb{Q}_{o(g)}$. The following two lemmas will be useful to deal with $\mathbb{Q}_{o(g)}$ for $g \in G$.
\begin{lem}\label{order}
Assume that $G/G''=\mathsf{F}_{rq}$, where $q$ is a prime, $G/G'\cong \mathsf{C}_{r}$ is the Frobenius complement of $\mathsf{F}_{rq}$, and $G''$ is a $p$-elementary abelian group. Then, for every $g \in G\setminus G'$, we have that $o(g)$ divides $rp$.
\end{lem}
\begin{lem}
Let $n$ be a positive integer. Then the following hold.
\begin{itemize}
\item[(i)] If $n=p$, where $p$ is an odd prime then $\mathbb{Q}_{n}$ contains only one quadratic extension.
\item[(ii)] If $n=p$, where $p$ is an odd prime then $\mathbb{Q}_{n}$ contains only one cubic extension if $n\equiv 1 \pmod 3$ and contains no cubic extension if $n\not \equiv 1 \pmod 3$.
\item[(iii)] If $n=p^{k}$, where $p$ is an odd prime and $k\geq 2$ then $\mathbb{Q}_{n}$ contains only one quadratic extension.
\item[(iv)] If $n=p^{k}$, where $p$ is an odd prime and $k\geq 2$ then $\mathbb{Q}_{n}$ contains one cubic extension if $p\equiv 1 \pmod 3$ or $p=3$ and contains no cubic extension if $p\equiv -1 \pmod 3$.
\item[(v)] If $n=p^{k}q^{t}$ where $p$ and $q$ are odd primes and $k,t \geq 1$ then $\mathbb{Q}_{n}$ contains $3$ quadratic extensions.
\item[(vi)] If $n=p^{k}q^{t}$, where $p$ and $q$ are odd primes and $k,t \geq 1$, then $\mathbb{Q}_{n}$ contains $4$ cubic extensions if both $\mathbb{Q}_{p^k}$ and $\mathbb{Q}_{q^t}$ contain cubic extensions, contains one cubic extension if exactly one of $\mathbb{Q}_{p^k}$ and $\mathbb{Q}_{q^t}$ contains a cubic extension, and contains no cubic extension if neither of them does.
\item[(vii)] If $n$ is odd then $\mathbb{Q}_{n}=\mathbb{Q}_{2n}$.
\end{itemize}
\begin{proof}
This result follows from elementary Galois theory. As an example, we prove (iii) and (iv). We know that $\operatorname{Gal}(\mathbb{Q}_{p^k}/\mathbb{Q})\cong \mathsf{C}_{p^{k-1}(p-1)}$. Since $\mathbb{Q}_{p^k}$ has as many quadratic extensions as the number of subgroups of index 2 of $\operatorname{Gal}(\mathbb{Q}_{p^k}/\mathbb{Q})$, we deduce that $\mathbb{Q}_{p^k}$ has only one quadratic extension. Now, we observe that $\mathbb{Q}_{p^k}$ has cubic extensions if and only if $3$ divides $p^{k-1}(p-1)$, and this occurs if and only if $p=3$ or $3$ divides $p-1$. If $\mathbb{Q}_{p^k}$ has cubic extensions, we can reason as in the quadratic case to prove that $\mathbb{Q}_{p^k}$ has only one cubic extension. Thus, (iv) follows.
\end{proof}
\end{lem}
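The counts in this lemma can be cross-checked numerically. By the Galois correspondence, degree-$d$ subfields of $\mathbb{Q}_n$ correspond to index-$d$ subgroups of $(\mathbb{Z}/n\mathbb{Z})^{*}$; for prime $d$ these are the kernels of the nontrivial homomorphisms onto $\mathsf{C}_d$, and $d-1$ such homomorphisms share each kernel. The sketch below (our helper name \texttt{subfields}) counts them by counting $d$-torsion units:

```python
from math import gcd

def subfields(n, d):
    """Number of degree-d subfields of the cyclotomic field Q_n, for prime d.
    |Hom((Z/n)^*, C_d)| equals the number of units x with x^d = 1 (mod n);
    subtract the trivial homomorphism and divide by the d-1 generators
    of C_d that yield the same kernel."""
    torsion = sum(1 for x in range(1, n + 1)
                  if gcd(x, n) == 1 and pow(x, d, n) == 1)
    return (torsion - 1) // (d - 1)
```

For instance, $\mathbb{Q}_{15}$ has three quadratic subfields (part (v)) and $\mathbb{Q}_{91}=\mathbb{Q}_{7\cdot 13}$ has four cubic subfields (part (vi), since $7\equiv 13\equiv 1 \pmod 3$).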
The following is well known.
\begin{lem}\label{exten}
Let $N$ be a normal subgroup of $G$ and let $\theta \in \operatorname{Irr}(N)$ be invariant in $G$. If $(|G:N|,o(\theta)\theta(1))=1$, then there exists a unique $\chi \in \operatorname{Irr}(G)$ such that $\chi_{N}=\theta$ and $o(\chi)=o(\theta)$. Moreover, $\mathbb{Q}(\chi)=\mathbb{Q}(\theta)$. In particular, if $(|G:N|,|N|)=1$, then every invariant character of $N$ has a unique extension with the same order and the same field of values.
\begin{proof}
By Theorem 6.28 of \cite{Isaacscar}, there exists a unique extension $\chi$ such that $o(\chi)=o(\theta)$, and clearly $\mathbb{Q}(\theta) \subseteq \mathbb{Q}(\chi)$. Assume that $\mathbb{Q}(\theta) \not=\mathbb{Q}(\chi)$. Then there exists $\sigma \in \operatorname{Gal}(\mathbb{Q}(\chi)/\mathbb{Q}(\theta))\setminus\{1\}$, and $\chi^{\sigma}$ extends $\theta$ with $o(\chi^{\sigma})=o(\theta)=o(\chi)$, which is impossible by the uniqueness of $\chi$. Thus, $\mathbb{Q}(\theta) =\mathbb{Q}(\chi)$, as claimed.
\end{proof}
\end{lem}
We need to introduce some notation in order to state results deduced from \cite{VeraLopez}. If $G$ is a finite group, then we write $k(G)$ to denote the number of conjugacy classes in $G$, and we write $\alpha(G)$ to denote the number of $G$-conjugacy classes contained in $G\setminus S(G)$, where $S(G)$ is the socle of $G$.
\begin{thm}\label{Vera-Lopez}
Let $G$ be a group such that $k(G)\leq 11$. If $f(G)\leq 3$ then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{F}_{21},\mathsf{S}_{3},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{52},\mathsf{A}_{5}, \operatorname{PSL}(2,8),\operatorname{Sz}(8)\}$.
\begin{proof}
Using the classification in \cite{VeraLopez} of groups with $k(G)\leq 11$, one can check that these are the only groups with $f(G)\leq 3$ and $k(G)\leq 11$.
\end{proof}
\end{thm}
\begin{thm}\label{Vera-Lopez2}
Let $G$ be a group such that $S(G)$ is abelian, $4 \leq \alpha(G) \leq 9$ and $k(G/S(G))\leq 10$. Then, $f(G)>3$.
\begin{proof}
If $G$ is a non-nilpotent group such that $4 \leq \alpha(G) \leq 10$ and $k(G/S(G))\leq 10$, then $G$ must be one of the examples listed in Lemmas 4.2, 4.5, 4.8, 4.11, 4.14 of \cite{VeraLopez}. However, none of these examples satisfies $f(G)\leq 3$.
\end{proof}
\end{thm}
Now we classify all nilpotent groups with $f(G)\leq 3$.
\begin{thm}\label{nilpotent}
If $G$ is a nilpotent group with $f(G)\leq 3$ then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$.
\begin{proof}
Let $p$ be a prime dividing $|G|$. Then there exists $K$, a normal subgroup of $G$, such that $G/K=\mathsf{C}_{p}$. Therefore, $f(\mathsf{C}_{p})= f(G/K)\leq f(G)\leq3$, and hence $p \in \{2,3\}$. Thus, the set of prime divisors of $|G|$ is contained in $\{2,3\}$.
If $6$ divides $|G|$, then there exists a normal subgroup $K$ of $G$ such that $G/K=\mathsf{C}_{6}$. However, $f(\mathsf{C}_{6})=4> 3$, so $6$ does not divide $|G|$ and hence $G$ must be a $p$-group with $p\in\{2,3\}$. It follows that $G/\Phi(G)$ is an elementary abelian $2$-group or an elementary abelian $3$-group with $f(G/\Phi(G)) \leq 3$. Since $f(\mathsf{C}_{2}\times \mathsf{C}_{2})=4$ and $f(\mathsf{C}_{3}\times \mathsf{C}_{3})=8$, we have that $G/\Phi(G) \in \{\mathsf{C}_{2},\mathsf{C}_{3}\}$. Thus, $G$ is a cyclic $2$-group or a cyclic $3$-group. Since $f(\mathsf{C}_{8})>3$ and $f(\mathsf{C}_{9})>3$, it follows that $G\in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$.
\end{proof}
\end{thm}
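The values of $f$ quoted for cyclic groups can be double-checked numerically. The following sketch assumes that $f(G)$ denotes the maximum number of irreducible characters of $G$ sharing a common field of values (the definition given earlier in the paper, consistent with every computation in this section): for $G=\mathsf{C}_n$ the character $\lambda^k$ has field of values $\mathbb{Q}_{n/\gcd(n,k)}$, and two cyclotomic fields coincide exactly when they have the same conductor.

```python
from math import gcd

def conductor(d):
    # Q(zeta_d) = Q(zeta_{d'}) exactly when both have the same conductor;
    # for d == 2 (mod 4) the conductor is d/2 (in particular Q_1 = Q_2 = Q).
    return d // 2 if d % 4 == 2 else d

def f_cyclic(n):
    # For C_n, the character lambda^k has field Q(zeta_d), d = n/gcd(n, k);
    # f(C_n) is the largest number of characters sharing one field.
    counts = {}
    for k in range(n):
        c = conductor(n // gcd(n, k))
        counts[c] = counts.get(c, 0) + 1
    return max(counts.values())
```

This confirms $f(\mathsf{C}_{6})=4$, $f(\mathsf{C}_{8})=4$ and $f(\mathsf{C}_{9})=6$, as used in the proof, while $f(\mathsf{C}_{2})=f(\mathsf{C}_{3})=f(\mathsf{C}_{4})=2$.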
In the remainder we will assume that $G$ is not a nilpotent group. From the nilpotent case, we can also deduce the following result.
\begin{cor}\label{der}
If $G$ is a group with $f(G)\leq3$ then either $G=G'$ or $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$.
\begin{proof}
Suppose that $G'<G$. Then $G/G'$ is a non-trivial abelian group with $f(G/G')\leq 3$. Thus, by Theorem \ref{nilpotent}, $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$.
\end{proof}
\end{cor}
In the proof of the solvable case of Theorem A, we need to see that there are no groups $G$ with $f(G)\leq 3$ of certain orders. We collect them in the next result.
\begin{lem}\label{casos}
There exists no group $G$ with $f(G)\leq 3$ and $|G| \in \{30,42, 48,50,54,\\70,84,98,100,126,147,156,234,260,342,558,666,676,774,882,903,954,1098,1206,\\1314,1404,2756,4108,6812,8164\}$.
\begin{proof}
We observe that all numbers in the above list are smaller than $2000$, except those in $\{2756,4108,6812,8164\}$, and these four numbers are cube-free. Thus, all groups whose order is contained in the above list are available in the small-groups library of GAP \cite{gap}. Analysing these groups, we deduce the result.
\end{proof}
\end{lem}
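The feasibility of the GAP computation rests on two arithmetic facts about the listed orders: all but four are below $2000$, and the remaining four are cube-free (both families are covered by the small-groups library of \cite{gap}). A minimal check (the helper below is ours, not part of GAP):

```python
def is_cube_free(n):
    # n is cube-free iff no cube d^3 > 1 divides it; checking every
    # integer d suffices, since d^3 | n forces p^3 | n for each prime p | d.
    d = 2
    while d ** 3 <= n:
        if n % d ** 3 == 0:
            return False
        d += 1
    return True

small = [30, 42, 48, 50, 54, 70, 84, 98, 100, 126, 147, 156, 234, 260,
         342, 558, 666, 676, 774, 882, 903, 954, 1098, 1206, 1314, 1404]
large = [2756, 4108, 6812, 8164]
```

Note that cube-freeness genuinely matters only for the four large orders: $48=2^{4}\cdot 3$ and $54=2\cdot 3^{3}$, for instance, are not cube-free, but they are below $2000$.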
\section{Non-solvable case}\label{Section3}
In this section we classify the non-solvable groups with $f(G)\leq 3$.
\begin{thm}\label{nonsolvable}
Let $G$ be a non-solvable group with $f(G)\leq 3$. Then, $G \in \{\mathsf{A}_{5}, \operatorname{PSL}_{2}(8),\\ \operatorname{Sz}(8)\}$.
\end{thm}
If $G$ is a group with $f(G)\leq 3$, then, since all rational irreducible characters share the field $\mathbb{Q}$, $G$ possesses at most $3$ irreducible rational characters. We will use the following results from \cite{Navarro-Tiep} and \cite{Rossi}, which classify the non-solvable groups with exactly two or three rational characters, respectively.
\begin{thm}[Theorems B and C of \cite{Navarro-Tiep}]\label{Navarro-Tiep}
Let $G$ be a non-solvable group. Then $G$ has at least 2 irreducible rational characters. Moreover, $G$ has exactly two irreducible rational characters if and only if $M/N \cong \operatorname{PSL}_{2}(3^{2a+1})$, where $M=O^{2'}(G)$, $N=O_{2'}(M)$ and $a \geq 1$.
\end{thm}
\begin{thm}[Theorem B of \cite{Rossi}]\label{simplePrev2}
Let $G$ be a non-solvable group with exactly three rational characters. If $M:=O^{2'}(G)$, then there exists a solvable normal subgroup $N$ of $M$ such that $M/N$ is one of the following groups.
\begin{itemize}
\item[(i)] $\operatorname{PSL}_{2}(2^{n})$, where $n\geq2$.
\item[(ii)] $\operatorname{PSL}_{2}(q)$, where $q\equiv 5 \pmod{24}$ or $q\equiv-5 \pmod{24}$.
\item[(iii)] $\operatorname{Sz}(2^{2t+1})$, where $t \geq 1$.
\item[(iv)] $ \operatorname{PSL}_{2}(3^{2a+1})$, where $a \geq 1$.
\end{itemize}
Moreover, if $M/N$ has the form (i),(ii) or (iii) then $N=O_{2'}(M)$. In particular, if $S$ is a simple group with at most three rational characters then $S$ is one of the groups listed above.
\end{thm}
We begin by using Theorem \ref{simplePrev2} to determine the simple groups with $f(G)\leq 3$. Looking at the character tables of the groups $\operatorname{PSL}_{2}(q)$ (see \cite{Dornhoff}, Chapter 38) and $\operatorname{Sz}(q)$ (see \cite{Geck}), we see that there is always an entry of the form $e^{\frac{2\pi i}{q-1}}+e^{\frac{-2\pi i}{q-1}}$. For this reason, we study when $e^{\frac{2\pi i}{r}}+e^{\frac{-2\pi i}{r}}$ is rational, quadratic or cubic.
\begin{lem}\label{omega}
Let $r\geq 3$ be an integer, let $\nu=e^{\frac{2\pi i}{r}}$ and let $\omega=\nu+\nu^{-1}$. Then the following hold
\begin{itemize}
\item[(i)] $\omega$ is rational if and only if $r\in \{3,4,6\}$.
\item[(ii)] $\omega$ is quadratic if and only if $r\in \{5,8,10,12\}$.
\item[(iii)] If $\omega$ is cubic then $r\in \{7,9,14,18\}$.
\end{itemize}
\begin{proof}
Let $k< r-1$ such that $(r,k)=1$. Then, there exists $\sigma_{k} \in \operatorname{Gal}(\mathbb{Q}(\nu)/\mathbb{Q})$ such that $\sigma_{k}(\nu)=\nu^{k}$. Thus, $\sigma_{k}(\omega)=\nu^{k}+\nu^{-k}$.
Suppose that $\omega\in \mathbb{Q}$. Then, $\sigma_{k}(\omega)=\omega$ and hence $\nu^{k}=\nu^{-1}=\nu^{r-1}$ for every $k\in \{2,\ldots,r-1\}$ such that $(r,k)=1$. Thus, we deduce that $\phi(r)=2$ and hence $r\in \{3,4,6\}$.
Suppose now that $\omega$ is quadratic. Then, there exists $\sigma \in \operatorname{Gal}(\mathbb{Q}(\nu)/\mathbb{Q})$ such that $\sigma(\omega)\not=\omega$. We deduce that $\sigma(\nu)=\nu^{k_{0}}$, where $k_{0} \in \{2,\ldots,r-2\}$ and $(r,k_{0})=1$. Since $\omega$ is quadratic, it follows that $\sigma(\omega)$ is the only Galois conjugate of $\omega$ and hence $\{k \leq r|(r,k)=1\}=\{1,k_{0},r-k_{0},r-1\}$ (otherwise we would obtain another Galois conjugate). Thus, $\phi(r)=4$ and (ii) follows.
If $\omega$ is cubic then, reasoning as in the previous case, it follows that $\phi(r)= 6$ and hence (iii) follows.
\end{proof}
\end{lem}
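The lists in Lemma \ref{omega} can be rechecked numerically via the standard fact that $[\mathbb{Q}(\omega):\mathbb{Q}]=\phi(r)/2$ for $r\geq 3$; the sketch below simply enumerates totients. Note that $r=12$, where $\omega=\sqrt{3}$, also yields a quadratic value; this case is harmless in the sequel, since $q=13$ does not satisfy the congruence conditions of Theorem \ref{simplePrev2}.

```python
from math import gcd

def phi(r):
    # Euler's totient, by direct count
    return sum(1 for k in range(1, r + 1) if gcd(r, k) == 1)

def degree_omega(r):
    # [Q(zeta_r + zeta_r^{-1}) : Q] = phi(r)/2 for r >= 3
    return phi(r) // 2

# phi(r) <= 6 already forces r <= 18, so range(3, 200) is exhaustive
rational  = [r for r in range(3, 200) if degree_omega(r) == 1]
quadratic = [r for r in range(3, 200) if degree_omega(r) == 2]
cubic     = [r for r in range(3, 200) if degree_omega(r) == 3]
```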
\begin{thm}\label{simple}
Let $S$ be a non-abelian simple group with $f(S)\leq 3$. Then $S \in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8),\operatorname{Sz}(8)\}$.
\begin{proof}
Since $f(S)\leq 3$, $S$ has at most three rational characters. Thus, $S$ has the form described in Theorem \ref{simplePrev2}. We claim that the only groups in those families with $f(S)\leq3$ are $\mathsf{A}_{5}\,(\cong\operatorname{PSL}_{2}(4))$, $\operatorname{PSL}_{2}(8)$ and $\operatorname{Sz}(8)$.
Let $S=\operatorname{PSL}_{2}(q)$ where $q$ is a prime power, or let $S=\operatorname{Sz}(q)$ where $q=2^{2t+1}$ and $t\geq 1$. We know that there exist $\chi \in \operatorname{Irr}(S)$ and $a \in S$ such that $\chi(a)=e^{\frac{2\pi i}{q-1}}+e^{\frac{-2\pi i}{q-1}}$. The condition $f(S)\leq 3$ implies that $|\mathbb{Q}(\chi(a)):\mathbb{Q}|\leq 3$. By Lemma \ref{omega} and its proof, we deduce that $\phi(q-1)\leq 6$, that is, $q-1 \in \{3,4,5,6,7,8,9,10,12,14,18\}$. If $S=\operatorname{PSL}_{2}(q)$, we have that $q=2^n$, $q=3^{2a+1}$ or $q\equiv \pm 5 \pmod{24}$, so we only have to consider the cases $q \in \{4,5,8,19\}$ (note that $q=13$ is excluded by the congruence conditions). Since $\operatorname{PSL}_{2}(4)\cong\operatorname{PSL}_{2}(5)$, we compute $3=f(\operatorname{PSL}_{2}(5))=f(\operatorname{PSL}_{2}(8))$ and $f(\operatorname{PSL}_{2}(19))=4$. If $S=\operatorname{Sz}(q)$, then $q=2^{2t+1}$ and hence we only have to consider the case $q=8$; we have that $f(\operatorname{Sz}(8))=3$.
Thus, the only non-abelian simple groups with $f(S)\leq 3$ are $\mathsf{A}_{5}$, $\operatorname{PSL}_{2}(8)$ and $\operatorname{Sz}(8)$.
\end{proof}
\end{thm}
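The arithmetic filtering in the proof of Theorem \ref{simple} can be replayed by brute force. The sketch below encodes only the numerical conditions (the totient bound $\phi(q-1)\leq 6$ and the families of Theorem \ref{simplePrev2}), not the character theory; the ranges searched are exhaustive because $\phi$ grows without bound.

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def psl_family(q):
    # families (i), (ii), (iv) of Theorem simplePrev2 for PSL_2(q):
    # q = 2^n, q = 3^{2a+1} with a >= 1, or q = +-5 (mod 24)
    is_pow2 = (q & (q - 1)) == 0
    is_odd_pow3 = any(q == 3 ** e for e in range(3, 12, 2))
    return is_pow2 or is_odd_pow3 or q % 24 in (5, 19)

# |Q(zeta_{q-1} + zeta_{q-1}^{-1}) : Q| <= 3 forces phi(q-1) <= 6
psl = [q for q in range(4, 200) if phi(q - 1) <= 6 and psl_family(q)]
suzuki = [2 ** (2 * t + 1) for t in range(1, 6)
          if phi(2 ** (2 * t + 1) - 1) <= 6]
```

The surviving values are $q\in\{4,5,8,19\}$ for $\operatorname{PSL}_{2}(q)$ and $q=8$ for $\operatorname{Sz}(q)$; the character-theoretic computations in the proof then discard $q=19$.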
Using Theorem \ref{Navarro-Tiep}, we prove that a non-solvable group with $f(G)\leq 3$ has exactly three rational irreducible characters.
\begin{thm}\label{2racional}
Let $G$ be a non-solvable group with $f(G)\leq 3$. Then, $G$ has exactly three rational irreducible characters. In particular, $f(G)=3$.
\begin{proof}
By Theorem \ref{Navarro-Tiep}, $G$ has at least two rational irreducible characters. Suppose that $G$ has exactly two rational irreducible characters. Applying Theorem \ref{Navarro-Tiep} again, if $M=O^{2'}(G)$ and $N=O_{2'}(M)$ then $M/N \cong \operatorname{PSL}_{2}(3^{2a+1})$ for some $a\geq 1$. Taking the quotient by $N$, we may assume that $N=1$.
Since $f(M)=f(\operatorname{PSL}_{2}(3^{2a+1}))>3$ by Theorem \ref{simple}, we deduce that $M<G$. Now, we claim that there exists a rational character of $M$ that extends to a rational character of $G$. By Lemma 4.1 of \cite{Auto}, there exists a rational $\psi \in \operatorname{Irr}(M)$ that is extendible to a rational character $\varphi \in \operatorname{Irr}(\operatorname{Aut}(M))$. If $H=G/\mathbf{C}_{G}(M)$ then we can identify $H$ with a subgroup of $\operatorname{Aut}(M)$ that contains $M$. Then, $\varphi_{H}\in \operatorname{Irr}(H)\subseteq \operatorname{Irr}(G)$ is rational, as we wanted.
Let $\chi \in \operatorname{Irr}(G/M)\setminus\{1\}$. Since $|G/M|$ is odd, $\chi$ cannot be rational. Thus, there exists a Galois conjugate $\rho\not =\chi$ of $\chi$, and $\mathbb{Q}(\chi)=\mathbb{Q}(\rho)$. Since $\psi$ is extendible to the rational character $\varphi \in \operatorname{Irr}(G)$, applying Gallagher's Theorem we have that $\chi \varphi$ and $\rho \varphi$ are two distinct irreducible characters of $G$ with $\mathbb{Q}(\chi)=\mathbb{Q}(\rho)=\mathbb{Q}(\varphi\chi)=\mathbb{Q}(\varphi\rho)$. Therefore, we have $4$ irreducible characters with the same field of values, which contradicts $f(G)\leq 3$.
\end{proof}
\end{thm}
Now, we use Theorem \ref{simplePrev2} to determine $G/O_{2'}(G)$.
\begin{thm}\label{reduction}
Let $G$ be a finite non-solvable group with $f(G)=3$. Then $G/O_{2'}(G) \in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8),\operatorname{Sz}(8)\}$.
\begin{proof}
Let $M$ and $N$ be as in Theorem \ref{simplePrev2}. Taking the quotient by $N$, we may assume that $N=1$. Suppose first that $M<G$. As in Theorem \ref{2racional}, there exists $\psi \in \operatorname{Irr}(M)$ which is extendible to a rational character $\varphi \in \operatorname{Irr}(G)$. Again as in Theorem \ref{2racional}, if we take $\chi \in \operatorname{Irr}(G/M)\setminus\{1\}$ and a Galois conjugate $\rho$ of $\chi$, then $\chi$, $\rho$, $\varphi\chi$ and $\varphi\rho$ are four distinct irreducible characters with $\mathbb{Q}(\chi)=\mathbb{Q}(\rho)=\mathbb{Q}(\varphi\chi)=\mathbb{Q}(\varphi\rho)$, which contradicts $f(G)=3$.
Thus, $M=G$ and hence $G$ is a simple group with $f(G)=3$. By Theorem \ref{simple}, $G\in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8),\operatorname{Sz}(8)\}$.
Applying the previous reasoning to $G/N$ (without assuming $N=1$), we obtain that $G/N$ is one of the desired groups. In each case, $G/N$ has the form (i), (ii) or (iii) of Theorem \ref{simplePrev2} and hence $N=O_{2'}(M)=O_{2'}(G)$.
\end{proof}
\end{thm}
To complete our proof it only remains to show that $O_{2'}(G)=1$. However, we first need to study two special cases. First, we study the case where $O_{2'}(G)=Z(G)$.
\begin{thm}\label{quasisimple}
There is no quasisimple group $G$ such that $O_{2'}(G)=Z(G)$, $O_{2'}(G)>1$ and $G/Z(G) \in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8),\operatorname{Sz}(8)\}$.
\begin{proof}
Suppose that such a group exists. Then $|Z(G)|$ divides $|M(S)|$, the order of the Schur multiplier of $S=G/Z(G)$. Since the Schur multiplier of $\mathsf{A}_{5}$, $\operatorname{Sz}(8)$ and $\operatorname{PSL}_{2}(8)$ is $\mathsf{C}_{2}$, $\mathsf{C}_{2}\times \mathsf{C}_{2}$ and the trivial group, respectively, we have that $Z(G)$ is a $2$-group. However, $Z(G)=O_{2'}(G)$ has odd order. Thus, $Z(G)=1$, contradicting $O_{2'}(G)>1$.
\end{proof}
\end{thm}
We need to introduce more notation to deal with the remaining cases. Given a finite group $G$, we define $o(G)=\{o(g)\mid g \in G \setminus \{1\}\}$. Suppose that $f(G)\leq 3$ and let $\chi \in \operatorname{Irr}(G)$ be a non-rational character. Since the Galois conjugates of $\chi$ are distinct and share the field $\mathbb{Q}(\chi)$, we have $|\mathbb{Q}(\chi):\mathbb{Q}|\leq f(G)\leq 3$, so $\mathbb{Q}(\chi)$ is quadratic or cubic over $\mathbb{Q}$ and hence $\mathbb{Q}(\chi)=\mathbb{Q}(\chi(g))$ for any $g \in G$ with $\chi(g)\notin\mathbb{Q}$. Thus, $\mathbb{Q}(\chi)$ is a quadratic or cubic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{n}$, where $n = o(g)$. If $N$ is a normal subgroup of $G$, then we write $\operatorname{Irr}(G|N)$ to denote the set of irreducible constituents of $\theta^{G}$ for $\theta \in \operatorname{Irr}(N)\setminus \{1_{N}\}$. We also recall that if $N$ is a normal subgroup of $G$ and $\theta \in \operatorname{Irr}(N)$, then $I_{G}(\theta)=\{g \in G\mid\theta^{g}=\theta\}$ denotes the inertia subgroup of $\theta$ in $G$.
\begin{thm}\label{other}
There is no group $G$ with $f(G)\leq 3$ such that $G/O_{2'}(G) \in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8),\\ \operatorname{Sz}(8)\}$ and $O_{2'}(G)$ is elementary abelian and simple as a $G/O_{2'}(G)$-module.
\begin{proof}
Write $V=O_{2'}(G)$ and let $|V|=p^d$. Thus, if $\mathbb{F}_{p}$ is the field with $p$ elements, $V$ can be viewed as an irreducible $\mathbb{F}_{p}$-module for $G/V$ of dimension $d$. We can extend the associated representation to a representation of $G/V$ over an algebraically closed field of characteristic $p$, and express it as a sum of irreducible representations over that field. For a group $S$, let $m(S)$ be the smallest degree of a non-linear $p$-Brauer character of $S$. Then we have that $d \geq m(G/V)$. We have to distinguish two cases: when $p$ divides $|G/V|$ and when $p$ does not divide $|G/V|$.
\underline{Case $p$ does not divide $|G/V|$:} In this case the Brauer characters are the ordinary characters. Thus, $|V|=p^{d}$ where $d$ is at least the smallest degree of a non-trivial irreducible character of $G/V$. Now, let $\lambda \in \operatorname{Irr}(V)\setminus \{1\}$. Then, $\mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$. Since $(|G/V|,|V|)=1$, we have that $(|I_{G}(\lambda)/V|,|V|)=1$. Thus, by Lemma \ref{exten}, $\lambda$ has an extension $\psi \in \operatorname{Irr}(I_{G}(\lambda))$ with $\mathbb{Q}(\psi)=\mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$. By Clifford's correspondence, $\psi^{G} \in \operatorname{Irr}(G)$ and $\mathbb{Q}(\psi^{G})\subseteq \mathbb{Q}(\psi) \subseteq \mathbb{Q}_{p}$. Thus, for each orbit $\zeta$ of $G/V$ on $\operatorname{Irr}(V)\setminus \{1_{V}\}$, there exists $\chi_{\zeta} \in \operatorname{Irr}(G|V)$ such that $\mathbb{Q}(\chi_{\zeta})\subseteq \mathbb{Q}_{p}$. Let $F$ be the unique quadratic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{p}$ and let $T$ be the unique cubic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{p}$ (if such an extension exists). Since $\operatorname{Irr}(G/V)$ already contains three rational characters, we deduce that $\mathbb{Q}(\chi_{\zeta})\in \{T,F\}$, and since characters with field of values $F$ come in Galois-conjugate pairs, there are at most two characters whose field of values is $F$. Thus, $|\operatorname{Irr}(G|V)|\leq 3+2=5$ and therefore $|V|=|\operatorname{Irr}(V)|\leq |\operatorname{Irr}(G|V)||G/V|+1\leq 5|G/V|+1$.
\begin{itemize}
\item[(i)] Case $G/V=\mathsf{A}_{5}$: In this case $|V|\geq 7^3=343$ (because 7 is the smallest prime not dividing $|G/V|$ and 3 is the smallest degree of a non-linear character of $\mathsf{A}_{5}$). On the other hand, we have that $|V|=|\operatorname{Irr}(V)|\leq |\operatorname{Irr}(G|V)||G/V|+1\leq 5\cdot 60+1=301<343$, which is a contradiction.
\item[(ii)] Case $G/V=\operatorname{PSL}_{2}(8)$: In this case $|V|\geq 5^{7}=78125$ and $|V|\leq 5\cdot504+1=2521$, which is a contradiction.
\item[(iii)] Case $G/V=\operatorname{Sz}(8)$: In this case $|V|\geq 3^{14}=4782969$ and $|V|\leq 5\cdot 29120+1=145601$, which is a contradiction.
\end{itemize}
\underline{Case $p$ divides $|G/V|$:} From the Brauer character tables of $\mathsf{A}_{5}$, $\operatorname{PSL}_{2}(8)$ and $\operatorname{Sz}(8)$, we deduce that $m(\mathsf{A}_{5})=3$ for $p \in \{3,5\}$, $m(\operatorname{PSL}_{2}(8))=7$ for $p \in \{3,7\}$ and $m(\operatorname{Sz}(8))=14$ for $p \in \{5,7,13\}$.
\begin{itemize}
\item [(i)] Case $G/V=\operatorname{PSL}_{2}(8)$:
\begin{itemize}
\item [a)] $p=7$: In this case $|V|=7^{d}$ with $d\geq 7$ and $o(G)\subseteq\{2,3,7,9,2\cdot 7, 3\cdot 7, 7 \cdot 7, 9 \cdot 7\}$. On the one hand, the number of non-trivial conjugacy classes of $G$ contained in $V$ is at least $\frac{|V|-1}{|G/V|}\geq \frac{7^{7}-1}{504}\geq 1634$, so $|\operatorname{Irr}(G)|\geq 1634$. On the other hand, there are at most $3$ quadratic extensions and at most $4$ cubic extensions of $\mathbb{Q}$ contained in the fields $\mathbb{Q}_{n}$ with $n \in o(G)$. Applying again that $f(G)\leq 3$, the number of non-rational characters of $G$ is at most $2\cdot3+3\cdot 4=18$. Counting the rational characters, we have that $|\operatorname{Irr}(G)|\leq 21<1634$, which is a contradiction.
\item [b)] $p=3$: In this case $|V|=3^{d}$ with $d\geq 7$ and by calculation $k(G)=|\operatorname{Irr}(G)|\leq 3+2\cdot 3+3\cdot 2=15$. We know that $V=S(G)$, and hence if $\alpha(G)\leq 9$, then $f(G)>3$ by Theorem \ref{Vera-Lopez2} (clearly $\alpha(G)\geq 4$ because $k(G/S(G))=9$). Thus, $\alpha(G)\geq 10$. Since $V=S(G)$ and $k(G)\leq 15$, we deduce that $V$ contains at most $4$ non-trivial $G$-conjugacy classes. Thus, $|V|\leq 504\cdot 4+1=2017<3^{7}$ and hence we have a contradiction.
\end{itemize}
\item [(ii)] Case $G/V=\operatorname{Sz}(8)$: In this case $|V|\geq 5^{14}$ and as before $|\operatorname{Irr}(G)|\geq 209598$.
\begin{itemize}
\item [a)] $p=5$: By calculation, $|\operatorname{Irr}(G)|\leq 3 +2 \cdot 7+3\cdot 2=23<209598$, which is a contradiction.
\item [b)] $p\in \{7,13\}$: By calculation, $|\operatorname{Irr}(G)|\leq 3+2\cdot 7+3\cdot 4 =29<209598$, which is a contradiction.
\end{itemize}
\item [(iii)] Case $G/V=\mathsf{A}_{5}$:
\begin{itemize}
\item [a)] $p=3$: In this case $|V|=3^d$ with $d\geq 3$, and by calculation we have that $|\operatorname{Irr}(G)|\leq 3+ 2\cdot 3+3 \cdot 1 =12$. As before, applying Theorem \ref{Vera-Lopez2}, we deduce that $V$ contains at most one non-trivial $G$-conjugacy class. Thus, $|V|\leq 61$ and, since $V$ is a $3$-group with $d\geq 3$, we deduce that $|V|= 3^3$. But then $26$ would be the size of a $G$-conjugacy class, which is impossible since $26$ does not divide $|G/V|=60$.
\item [b)] $p=5$: In this case $k(G)\leq 9$ and by Theorem \ref{Vera-Lopez} there is no group with the desired properties.
\end{itemize}
\end{itemize}
We conclude that there is no group with the desired form and hence $V=1$.
\end{proof}
\end{thm}
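The three counting contradictions in the coprime case of Theorem \ref{other} are pure arithmetic and can be replayed directly; the group orders, smallest non-dividing primes and minimal character degrees below are copied from the proof.

```python
# (|G/V|, smallest prime p not dividing |G/V|, smallest degree m of a
#  non-trivial irreducible character of G/V)
data = {
    "A5":       (60,    7, 3),
    "PSL(2,8)": (504,   5, 7),
    "Sz(8)":    (29120, 3, 14),
}

bounds = {}
for name, (order, p_min, m) in data.items():
    lower = p_min ** m     # |V| >= p_min^m, the coprime lower bound
    upper = 5 * order + 1  # |V| <= 5|G/V| + 1, the orbit-counting upper bound
    bounds[name] = (lower, upper)
```

In each case the lower bound exceeds the upper bound, so the coprime case is impossible, exactly as in the proof.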
Now, we are prepared to prove Theorem \ref{nonsolvable}.
\begin{proof}[Proof of Theorem \ref{nonsolvable}]
By Theorem \ref{reduction}, we know that $G/O_{2'}(G) \in \{\mathsf{A}_{5},\operatorname{PSL}_{2}(8), \\\operatorname{Sz}(8)\}$. We want to prove that $O_{2'}(G)=1$. Suppose that $O_{2'}(G)>1$. Taking an appropriate quotient, we may assume that $O_{2'}(G)$ is a minimal normal subgroup of $G$. Since $O_{2'}(G)$ is solvable, we have that $O_{2'}(G)$ is a $p$-elementary abelian subgroup for some odd prime $p$. There are two possibilities: $O_{2'}(G)=Z(G)$ or $O_{2'}(G)$ is a simple $G/O_{2'}(G)$-module. The first one is impossible by Theorem \ref{quasisimple} and the second one is impossible by Theorem \ref{other}. Thus, $O_{2'}(G)=1$ and the result follows.
\end{proof}
Therefore, the only non-solvable groups with $f(G)\leq 3$ are $\mathsf{A}_{5}$, $\operatorname{PSL}_{2}(8)$ and $\operatorname{Sz}(8)$. In the remainder we will assume that $G$ is solvable.
\section{Metabelian case}\label{Section5}
Let $G$ be a finite metabelian group with $f(G)\leq 3$. By Corollary \ref{der}, we have that $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$ and hence we can divide this case into subcases.
\begin{lem}\label{caso23ab}
Let $G$ be a finite group such that $f(G)\leq 3$, $|G:G'|\in \{2,3\}$ and $G'$ is $p$-elementary abelian. Then $G \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{F}_{21}\}$.
\begin{proof}
Let $\psi \in \operatorname{Irr}(G')\setminus \{1_{G'}\}$ and let $I_{G}(\psi)$ be the inertia group of $\psi$ in $G$. Since $G/G'$ is cyclic, applying Theorem 11.22 of \cite{Isaacscar}, we have that any non-principal character of $G'$ extends to a character of its inertia group. Now, $\psi$ cannot be invariant in $G$: an extension of the linear character $\psi$ to $G$ would be a linear character of $G$, whose kernel contains $G'$, forcing $\psi=1_{G'}$. Hence $I_{G}(\psi)<G$ and, since $|G:G'|$ is prime, $I_{G}(\psi)=G'$.
Therefore, if $\chi \in \operatorname{Irr}(G|G')$ then $\chi=\psi^{G}$ for some $\psi \in \operatorname{Irr}(G')\setminus \{1_{G'}\}$. Since $\mathbb{Q}(\psi)\subseteq \mathbb{Q}_{p}$, we have that $\mathbb{Q}(\psi^{G})\subseteq \mathbb{Q}_{p}$. There exists at most one quadratic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{p}$ and at most one cubic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{p}$. Since $\operatorname{Irr}(G/G')$ contains at least one rational character and $f(G)\leq 3$, we have that $|\operatorname{Irr}(G|G')|\leq 2+1\cdot 2+ 1\cdot 3=7$. Since $|\operatorname{Irr}(G/G')|\leq 3$, we have that $k(G)=|\operatorname{Irr}(G)| = |\operatorname{Irr}(G|G')|+|\operatorname{Irr}(G/G')|\leq 7+3=10$. By Theorem \ref{Vera-Lopez}, the only possible options are $G \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{F}_{21}\}$.
\end{proof}
\end{lem}
\begin{thm}\label{caso2ab}
Let $G$ be a metabelian group with $f(G)\leq 3$ such that $|G:G'|=2$. Then $G \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{D}_{18}\}$.
\begin{proof}
Assume first that $G'$ is a $p$-group. We note that $F(G)=G'$. Therefore, $G'/\Phi(G)=F(G)/\Phi(G)$ is $p$-elementary abelian. Thus, by Lemma \ref{caso23ab}, we have that $G/\Phi(G) \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14}\}$ and hence $G'/\Phi(G)$ is cyclic. Therefore, $G'$ is a cyclic $p$-group and we have only three possibilities for $p$. We analyse the cases $p=3$, $p=5$ and $p=7$ separately.
If $p=3$, then $G'$ is a cyclic group of order $3^{l}$. If $l \geq 3$ then there exists a characteristic subgroup $K$ of $G'$ of order $3^{l-3}$. Thus $|G/K|=2\cdot3^{3}=54$ and $f(G/K)\leq 3$. However, by Lemma \ref{casos}, there is no group of order $54$ with $f(G)\leq 3$. Thus, $l\in\{1,2\}$. If $l=1$ then $G=\mathsf{S}_{3}$ and if $l=2$ then $G=\mathsf{D}_{18}$.
If $p \in \{5,7\}$, then $G'$ is a cyclic group of order $p^{l}$. If $l \geq 2$ then there exists a characteristic subgroup $K$ of $G'$ of order $p^{l-2}$. Thus $|G/K|=2\cdot p^{2}$ and $f(G/K)\leq 3$. For $p=5$ we have $|G/K|=2\cdot 5^{2}=50$, and for $p=7$ we have $|G/K|=2\cdot 7^{2}=98$. However, by Lemma \ref{casos} there is no group of order $50$ or $98$ with $f(G)\leq3$. Thus, $l=1$ and hence $G\in\{\mathsf{D}_{10},\mathsf{D}_{14}\}$.
Therefore, the prime divisors of $|G'|$ are contained in $\{3,5,7\}$, and if $G'$ is a $p$-group then $G \in \{\mathsf{S}_{3},\mathsf{D}_{18},\mathsf{D}_{10},\mathsf{D}_{14}\}$. It only remains to prove that $|G'|$ cannot be divisible by two different primes. Suppose that $3$ and $5$ divide $|G'|$. Taking a quotient by a Sylow $7$-subgroup of $G'$, we may assume that the only prime divisors of $|G'|$ are $3$ and $5$. By the case in which $G'$ is a $p$-group, we deduce that the Sylow $3$-subgroups and Sylow $5$-subgroups of $G'$ are both cyclic. Then $f(G/\Phi(G))\leq 3$ and $G'/\Phi(G)=\mathsf{C}_{3}\times \mathsf{C}_{5}$. Therefore, $G/\Phi(G)$ is a group of order $30$ with $f(G/\Phi(G))\leq 3$, which is impossible by Lemma \ref{casos}. Analogously, if either of the pairs $\{3,7\}$ or $\{5,7\}$ divides $|G'|$, then there exists a group $H$ with $f(H)\leq 3$ of order $42$ or $70$, respectively. Applying again Lemma \ref{casos}, we have a contradiction. Thus, $G'$ is a $p$-group and hence $G \in \{\mathsf{S}_{3},\mathsf{D}_{18},\mathsf{D}_{10},\mathsf{D}_{14}\}$.
\end{proof}
\end{thm}
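The dihedral values of $f$ used in this section can also be verified numerically. The sketch assumes the standard character table of $\mathsf{D}_{2n}$ for odd $n$ (two rational linear characters and $(n-1)/2$ two-dimensional characters $\chi_j$ with $\mathbb{Q}(\chi_j)=\mathbb{Q}(\zeta_d+\zeta_d^{-1})$, $d=n/\gcd(n,j)$) and, as before, that $f$ counts the maximum number of irreducible characters sharing a field of values.

```python
from math import gcd

def f_dihedral(n):
    # D_{2n} with n odd; chi_j has field Q(zeta_d + zeta_d^{-1}) with
    # d = n/gcd(n, j), which is Q itself exactly when d = 3.
    counts = {"Q": 2}  # the two rational linear characters
    for j in range(1, (n - 1) // 2 + 1):
        d = n // gcd(n, j)
        key = "Q" if d == 3 else d
        counts[key] = counts.get(key, 0) + 1
    return max(counts.values())
```

This gives $f(\mathsf{D}_{10})=2$ and $f(\mathsf{D}_{14})=f(\mathsf{D}_{18})=3$, while $f(\mathsf{D}_{50})=10$ and $f(\mathsf{D}_{54})=9$, matching the exclusion of the orders $50$ and $54$ via Lemma \ref{casos}.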
\begin{thm}\label{caso3ab}
Let $G$ be a metabelian group with $f(G)\leq 3$ such that $|G:G'|=3$. Then $G \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$.
\begin{proof}
As in Theorem \ref{caso2ab}, we assume first that $G'$ is a $p$-group. By Lemma \ref{caso23ab}, we have that $G/\Phi(G) \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$. Therefore, we have two possibilities for $p$.
If $p=7$, then $G'/\Phi(G)=\mathsf{C}_{7}$. Thus, $G'$ is a cyclic group of order $7^{l}$. If $l \geq 2$ then there exists a characteristic subgroup $K$ of $G'$ of order $7^{l-2}$. Thus $|G/K|=3\cdot7^{2}=147$ and $f(G/K)\leq 3$. However, by Lemma \ref{casos}, there is no group of order $147$ with $f(G)\leq 3$. Thus, $l=1$ and hence $G= \mathsf{F}_{21}$.
If $p=2$, then $G'/\Phi(G)=\mathsf{C}_{2}\times \mathsf{C}_{2}$. Thus $G'=A\times B$ with $A$ and $B$ cyclic, $|A|=2^{n}$ and $|B|=2^{m}$. Assume first that $n>m$, and let $H$ be the unique subgroup of $A$ of order $2^{m}$. Then $K=H\times B$ is characteristic in $G'$ (it consists of the elements of $G'$ of order at most $2^{m}$), hence normal in $G$, and $(G/K)'\cong A/H$ is a non-trivial cyclic $2$-group. Thus, $f(G/K)\leq 3$, $|G/K:(G/K)'|=3$ and $(G/K)'$ is a cyclic $2$-group, which is not possible by Lemma \ref{caso23ab}. It follows that $n=m$ and hence $G'$ is a direct product of two cyclic groups of order $2^{n}$. If $n \geq 2$ then there exists $K \operatorname{car} G'$ such that $G'/K$ is a direct product of two cyclic groups of order $4$. Thus, $f(G/K)\leq 3$ and $|G/K|=48$, which contradicts Lemma \ref{casos}. It follows that $n=1$ and hence $G=\mathsf{A}_{4}$.
Then, we have that the prime divisors of $|G'|$ are contained in $\{2,7\}$, and if $G'$ is a $p$-group then $G \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$. Assume now that $2$ and $7$ divide $|G'|$. Then, $G'/\Phi(G)=\mathsf{C}_{2}\times \mathsf{C}_{2}\times \mathsf{C}_{7}$. Thus, $|G/\Phi(G)|=84$ and $f(G/\Phi(G))\leq 3$, which is impossible by Lemma \ref{casos}. Then, $G'$ must be a $p$-group and hence $G \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$.
\end{proof}
\end{thm}
\begin{lem}\label{caso4abe}
Let $G$ be a group such that $G/G'=\mathsf{C}_{4}$, $f(G)\leq 3$ and $G'$ is $p$-elementary abelian. Then, $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$.
\begin{proof}
First, we observe that $p$ is odd. Otherwise $G$ would be a nilpotent group with $f(G)\leq 3$ such that $|G:G'|=4$. Thus, we would have that $G=\mathsf{C}_{4}$, which is impossible.
Let $\psi \in \operatorname{Irr}(G')\setminus \{1_{G'}\}$. As in Lemma \ref{caso23ab}, we can deduce that $I_{G}(\psi)<G$. Therefore, there are two possible options.
The first one is that $I_{G}(\psi)=G'$. In this case $\psi^{G}\in \operatorname{Irr}(G)$ and hence $\mathbb{Q}(\psi^{G})\subseteq \mathbb{Q}(\psi)\subseteq \mathbb{Q}_{p}$. The other one is that $|G:I_{G}(\psi)|=2$. Applying Lemma \ref{exten}, we have that $\psi$ extends to $\varphi \in \operatorname{Irr}(I_{G}(\psi))$ with $\mathbb{Q}(\varphi)=\mathbb{Q}(\psi)\subseteq \mathbb{Q}_{p}$. Let $\operatorname{Irr}(I_{G}(\psi)/G')=\{1,\rho\}$. By Gallagher's Theorem (see Corollary 6.17 of \cite{Isaacscar}), $\varphi$ and $\varphi\rho$ are all the extensions of $\psi$. Since $\mathbb{Q}(\rho)=\mathbb{Q}$, we have that $\mathbb{Q}(\varphi\rho)=\mathbb{Q}(\varphi)\subseteq \mathbb{Q}_{p}$. For $\tau \in \{\varphi,\varphi\rho\}$ we have $\tau^{G} \in \operatorname{Irr}(G)$, and hence $\mathbb{Q}(\tau^{G})\subseteq \mathbb{Q}(\tau)\subseteq \mathbb{Q}_{p}$. Therefore, if $\chi \in \operatorname{Irr}(G|G')$ then $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{p}$. Now, $\mathbb{Q}_{p}$ contains at most one quadratic and at most one cubic extension of $\mathbb{Q}$. Since $f(G)\leq 3$, we deduce that $\operatorname{Irr}(G|G')$ contains at most $5$ non-rational characters, and since $\operatorname{Irr}(G/G')$ contains two rational characters, $\operatorname{Irr}(G|G')$ contains at most one rational character. Therefore, $|\operatorname{Irr}(G|G')|\leq 6$ and hence $k(G)=|\operatorname{Irr}(G/G')|+|\operatorname{Irr}(G|G')|\leq 4+6=10$. By Theorem \ref{Vera-Lopez}, we deduce that the only groups such that $G/G'=\mathsf{C}_{4}$, $G'$ is elementary abelian, $f(G)\leq 3$ and $k(G)\leq 10$ are $\mathsf{F}_{20}$ and $\mathsf{F}_{52}$.
\end{proof}
\end{lem}
\begin{thm}\label{caso4ab}
Let $G$ be a metabelian group with $f(G)\leq 3$ such that $G'$ is abelian and $|G:G'|=4$. Then, $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$.
\begin{proof}
As in Theorem \ref{caso2ab}, we are going to assume first that $G'$ is a $p$-group. By Lemma \ref{caso4abe}, we have that $G/\Phi(G) \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$ and hence $G'$ is a cyclic $p$-group, where $p \in \{5,13\}$.
In both cases $G'$ is a cyclic group of order $p^{l}$. If $l \geq 2$ then there exists $K \operatorname{car} G'$ of order $p^{l-2}$. Thus $|G/K|=4\cdot p^{2}$ and $f(G/K)\leq 3$. For $p=5$ we have $|G/K|=4\cdot 5^{2}=100$, and for $p=13$ we have $|G/K|=4\cdot 13^{2}=676$. However, by Lemma \ref{casos} there is no group of order $100$ or $676$ with $f(G)\leq3$. Thus, $l=1$.
Then, we have that the prime divisors of $|G'|$ are contained in $\{5,13\}$, and if $G'$ is a $p$-group then $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$. Assume now that $5$ and $13$ divide $|G'|$. Then, $G'/\Phi(G)= \mathsf{C}_{5}\times \mathsf{C}_{13}$. Thus, $|G/\Phi(G)|=4\cdot 5 \cdot 13=260$, which contradicts Lemma \ref{casos}. Then, $G'$ must be a $p$-group and hence $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$.
\end{proof}
\end{thm}
\section{Solvable case}
In this section we classify all solvable groups with $f(G)\leq 3$. By the results of the previous sections, we have that $G/G'' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3}, \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. Therefore, the result will be complete if we prove that $G''=1$. We will begin by determining all possible fields $\mathbb{Q}(\chi)$ for $\chi \in \operatorname{Irr}(G|G'')$ and use this to bound $k(G)$. Then, the result will follow from Theorems \ref{Vera-Lopez} and \ref{Vera-Lopez2} and some calculations.
\begin{lem}\label{restocasos}
Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian, $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$ and $p$ does not divide $|G'/G''|$. If $r=|G:G'|$, then $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{rp}$ for every $\chi \in \operatorname{Irr}(G|G'')$.
\begin{proof}
Let $\lambda \in \operatorname{Irr}(G'')\setminus \{1_{G''}\}$. We know that $\mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$ and that $\lambda$ cannot be extended to an irreducible character of $G'$ (such an extension would be a linear character of $G'$, whose kernel contains $G''$, forcing $\lambda=1_{G''}$). Since $(|G''|,|G':G''|)=1$, $\lambda$ extends irreducibly to a character of its inertia subgroup in $G'$. Since $|G':G''|$ is prime, we deduce that $G''$ is the inertia subgroup of $\lambda$ in $G'$ and hence $\lambda^{G'}\in \operatorname{Irr}(G')$. We also observe that $\mathbb{Q}(\lambda^{G'})\subseteq \mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$.
By Lemma \ref{order}, we know that for every $g \in G \setminus G'$ and for every $\chi \in \operatorname{Irr}(G)$, $\chi(g) \in \mathbb{Q}_{rp}$. If $\chi \in \operatorname{Irr}(G|G'')$ then by the previous comments $\mathbb{Q}(\chi_{G'})\subseteq \mathbb{Q}_{p}\subseteq \mathbb{Q}_{rp}$ and therefore $\mathbb{Q}(\chi) \subseteq \mathbb{Q}_{rp}$.
\end{proof}
\end{lem}
\begin{lem}\label{casoD18}
Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian, $G/G''=\mathsf{D}_{18}$ and $p\not=3$. Then $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{3p}$ for every $\chi \in \operatorname{Irr}(G|G'')$. If $f(G)\leq 3$ then $k(G)\leq 15$. Moreover, if $p=2$ then $k(G)\leq 8$, and if $p$ is an odd prime such that $p\equiv -1 \pmod 3$ then $k(G)\leq 12$.
\begin{proof}
Let $\lambda \in \operatorname{Irr}(G'')\setminus \{1_{G''}\}$ and let $T=I_{G'}(\lambda)$. We know that $\mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$ and that $\lambda$ cannot be extended to an irreducible character of $G'$. Since $(|G''|,|G':G''|)=1$, by Lemma \ref{exten}, $\lambda$ extends to $\mu \in \operatorname{Irr}(T)$ with $\mathbb{Q}(\mu)=\mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}$, and hence $T<G'$. We have two different possibilities. The first one is that $T=G''$. In this case $\lambda^{G'}\in \operatorname{Irr}(G')$ and hence $\mathbb{Q}(\lambda^{G'})\subseteq \mathbb{Q}(\lambda)\subseteq \mathbb{Q}_{p}\subseteq \mathbb{Q}_{3p}$. The second one is that $|T:G''|=3$. In this case, $\operatorname{Irr}(T/G'')=\{1,\rho, \rho^2\}$. By Gallagher's Theorem, $\operatorname{Irr}(T|\lambda)=\{\mu, \rho\mu, \rho^2\mu\}$ and since $\mathbb{Q}(\rho)=\mathbb{Q}_{3}$ we deduce that $\mathbb{Q}(\psi)\subseteq \mathbb{Q}_{3p}$ for every $\psi \in \operatorname{Irr}(T|\lambda)$. For $\psi \in \operatorname{Irr}(T|\lambda)$ we have $\psi^{G'}\in \operatorname{Irr}(G')$ and hence $\mathbb{Q}(\psi^{G'})\subseteq \mathbb{Q}(\psi)\subseteq \mathbb{Q}_{3p}$.
By Lemma \ref{order}, we know that for every $g \in G \setminus G'$ and for every $\chi \in \operatorname{Irr}(G)$, $\chi(g) \in \mathbb{Q}_{2p}=\mathbb{Q}_{p}\subseteq \mathbb{Q}_{3p}$. If $\chi \in \operatorname{Irr}(G|G'')$ then by the previous comments $\mathbb{Q}(\chi_{G'})\subseteq \mathbb{Q}_{3p}$ and therefore $\mathbb{Q}(\chi) \subseteq \mathbb{Q}_{3p}$.
Since $\operatorname{Irr}(G/G'')$ contains 3 rational characters, we deduce that if $\chi \in \operatorname{Irr}(G|G'')$ then $\mathbb{Q}(\chi)$ is either a quadratic or a cubic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{3p}$.
If $p=2$, then $\mathbb{Q}_{3p}=\mathbb{Q}_{3}$, which contains a unique quadratic subfield and no cubic subfield. Thus, $|\operatorname{Irr}(G|G'')|\leq 2$ and hence $k(G)=|\operatorname{Irr}(G)|=|\operatorname{Irr}(G/G'')|+|\operatorname{Irr}(G|G'')|\leq 6+2=8$.
If $p$ is an odd prime, then $\mathbb{Q}_{3p}$ contains three quadratic subfields and at most one cubic subfield. Thus, $|\operatorname{Irr}(G|G'')|\leq 3\cdot 2+1\cdot 3=9$ and hence $k(G)\leq 6+9=15$. We also observe that $\mathbb{Q}_{3p}$ contains a cubic subfield if and only if $p\equiv 1 \pmod 3$. Thus, if $p\equiv -1 \pmod 3$ then $k(G)\leq 12$.
\end{proof}
\end{lem}
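The subfield counts used in the proof above can be checked computationally. Subfields of $\mathbb{Q}_n$ of degree $d$ over $\mathbb{Q}$ correspond to index-$d$ subgroups of the Galois group $(\mathbb{Z}/n\mathbb{Z})^{\times}$, which for an abelian group can be counted from the solutions of $x^d=1$. The following script is a sketch written for this note (not part of the proof):

```python
from math import gcd

def units(n):
    """The unit group (Z/nZ)^*, i.e. the Galois group of Q_n over Q."""
    return [x for x in range(1, n) if gcd(x, n) == 1]

def quadratic_subfields(n):
    # index-2 subgroups of a finite abelian group G are in bijection
    # with its elements of order 2, so the count is #{x : x^2 = 1} - 1
    return sum(1 for x in units(n) if x * x % n == 1) - 1

def cubic_subfields(n):
    # each index-3 subgroup is the kernel of exactly two surjections
    # onto C_3, so the count is (#{x : x^3 = 1} - 1) / 2
    return (sum(1 for x in units(n) if pow(x, 3, n) == 1) - 1) // 2
```

For $p=7$ one finds three quadratic subfields and one cubic subfield of $\mathbb{Q}_{21}$, while for $p=5$ (so $p\equiv -1 \pmod 3$) the field $\mathbb{Q}_{15}$ has no cubic subfield, in agreement with the counts above.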
\begin{lem}\label{casoA4}
Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian and $G/G''=\mathsf{A}_{4}$. Then $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{3p}$ for every $\chi \in \operatorname{Irr}(G)$. If $f(G)\leq 3$ then $k(G)\leq 12$. Moreover, if $p\not\equiv 1 \pmod 3$ then $k(G)\leq 9$.
\begin{proof}
First, we study the orders of the elements of $G$. If $g \in G''$ then $o(g)$ divides $p$. If $g \in G'\setminus G''$, then $o(g)$ divides $2p$. If $g \in G \setminus G'$, then $o(g)$ divides $3p$.
Let $\chi\in \operatorname{Irr}(G)$. Then, $\mathbb{Q}(\chi_{G''})\subseteq \mathbb{Q}_{p}$. If $g \in G \setminus G'$ then $\chi(g) \in \mathbb{Q}_{3p}$. Finally, if $g \in G'\setminus G''$, then $\chi(g)\in \mathbb{Q}_{2p}$. Thus, $\mathbb{Q}(\chi)$ is contained in $\mathbb{Q}_{2p}$ or in $\mathbb{Q}_{3p}$.
If $p=2$ then $\mathbb{Q}_{2p}=\mathbb{Q}(i)$ and $\mathbb{Q}_{3p}=\mathbb{Q}_{3}$. Since $\mathbb{Q}(i)$ and $\mathbb{Q}_{3}$ are both quadratic, we obtain $k(G)=|\operatorname{Irr}(G)|\leq 2\cdot 2+3=7<9$.
Assume now that $p\not=2$. Then $\mathbb{Q}_{2p}=\mathbb{Q}_{p}$ and it follows that $\mathbb{Q}(\chi) \subseteq \mathbb{Q}_{3p}$ for every $\chi \in \operatorname{Irr}(G)$. If $p=3$ then $\mathbb{Q}_{3p}=\mathbb{Q}_{9}$, which contains only one quadratic subfield and one cubic subfield. Therefore, $k(G)=|\operatorname{Irr}(G)|\leq 2\cdot 1+3\cdot 1+3=8<9$.
Assume that $p\not=3$ is an odd prime. Then $\mathbb{Q}_{3p}$ contains three quadratic subfields and at most one cubic subfield. It follows that $k(G)\leq 2\cdot 3+3\cdot 1+3=12$. We also have that if $p\equiv -1 \pmod 3$ then $\mathbb{Q}_{3p}$ contains no cubic subfield and hence $k(G)\leq 9$.
\end{proof}
\end{lem}
\begin{thm}\label{solvable}
Let $G$ be a solvable group with $f(G)\leq 3$. Then, $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3},\\ \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$.
\begin{proof}
If $G$ is metabelian, by Theorems \ref{caso2ab}, \ref{caso3ab} and \ref{caso4ab}, $G\in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\\ \mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. Therefore, we only have to prove that $G''=1$.
Assume that $G''>1$. Taking an appropriate quotient, we may assume that $G''$ is a minimal normal subgroup. Since $G$ is solvable, we have that $G''$ is $p$-elementary abelian for some prime $p$. We also have that $G/G''$ is a group such that $(G/G'')'$ is abelian and $f(G/G'')\leq 3$. Thus, $G/G'' \in \{\mathsf{S}_{3}, \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. We study first the case $G/G''=\mathsf{A}_{4}$.
\underline{Case $G/G''=\mathsf{A}_{4}$:} By Lemma \ref{casoA4}, if $p\not\equiv 1 \pmod 3$ then $k(G)\leq 9$. Thus, by Theorem \ref{Vera-Lopez}, the only possibility is that $G''=1$, which is a contradiction. Thus, we may assume that $p\equiv 1 \pmod 3$, $k(G)\leq 12$ and $G''$ is the unique minimal normal subgroup. Thus, $S(G)=G''$ and hence $k(G/S(G))=4\leq 6$. If $4\leq\alpha(G)\leq 9$ then, by Theorem \ref{Vera-Lopez2}, we would obtain a contradiction. Therefore, we may assume that $G''$ contains a unique $G$-conjugacy class of non-trivial elements. As a consequence, $|G''|\leq 12+1=13$. We also have that $|G''|$ is a power of a prime $p$ such that $p\equiv 1 \pmod 3$. Thus, the only possibilities are $|G''|\in \{7,13\}$ and hence $|G|\in \{84,156\}$. By Lemma \ref{casos}, there is no group of order $84$ or $156$ with $f(G)\leq 3$, and hence we have a contradiction.
Assume now that $G/G''\not=\mathsf{A}_{4}$. In this case $G'/G''$ is a cyclic group. If $p$ divides $|G':G''|$ then $G'$ is nilpotent and hence $G''\subseteq \Phi(G')$. Thus, $G'$ is cyclic and hence it is abelian, which is a contradiction. It follows that $(|G':G''|,p)=1$. Now, we are going to study separately the case $G/G''=\mathsf{D}_{18}$ and the case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$.
\underline{Case $G/G''=\mathsf{D}_{18}$:} Since $p\not=3$, we may apply Lemma \ref{casoD18}. If $p=2$, then $k(G)\leq 9$ and hence, reasoning as in the case $G/G''=\mathsf{A}_{4}$, we have a contradiction. Thus we may assume that $p$ is odd. Assume first that $p$ is an odd prime such that $p\not\equiv 1 \pmod 3$. In this case $k(G)\leq 12$. If $k(G)\leq 11$, using Theorem \ref{Vera-Lopez} we would have a contradiction, and hence we may assume that $k(G)=12$. If $4 \leq \alpha(G) \leq 9$ then, using Theorem \ref{Vera-Lopez2}, we would have another contradiction. Thus $\alpha(G)\geq 10$ and hence $G''$ has at most 2 $G$-conjugacy classes. It follows that $|G''|\leq 18+1=19$ and $|G''|=\frac{18}{|H|}+1$, where $H \leq \mathsf{D}_{18}$. Since $|G''|$ must be a power of a prime $p$ such that $p\not\equiv 1 \pmod 3$, we see that there is no integer with the desired properties. Assume now that $p\equiv 1 \pmod 3$. In this case $k(G)\leq 15$. As before, we may assume that $\alpha(G)\geq 10$ and hence $|G''|\leq 4 \cdot 18+1=73$. Therefore, $|G''|\in \{7, 13, 19, 31, 37, 43, 49, 53, 61, 67, 73 \}$ and hence $|G| \in \{126, 234, 342, 558, 666, 774, 882, 954, 1098, 1206, 1314\}$. Applying again Lemma \ref{casos}, we have a contradiction.
\underline{Case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$:} Since $(|G':G''|,p)=1$, we may apply Lemma \ref{restocasos}. Thus, if $r=|G:G'|$ and $\chi \in \operatorname{Irr}(G|G'')$, we have that $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{rp}$. We study the cases $r=2,3,4$ separately.
\begin{itemize}
\item [(i)] Case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14}\}$: In these cases $|G:G'|=2$ and hence for all $\chi \in \operatorname{Irr}(G|G'')$ we have that $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{2p}=\mathbb{Q}_{p}$. Thus, $\operatorname{Irr}(G|G'')$ contains at most 5 non-rational characters. We also observe that $\operatorname{Irr}(G/G'')$ possesses at most three non-rational characters. Counting rational characters, we have that $k(G)\leq 3+3+5=11$. Thus, $G$ must be one of the groups listed in Theorem \ref{Vera-Lopez}, which implies $G''=1$. That is a contradiction.
\item [(ii)] Case $G/G''=\mathsf{F}_{21}$: If $\chi \in \operatorname{Irr}(G|G'')$ then $\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{3p}$. Let $p\not\in\{2,3\}$ be a prime. Then, $\mathbb{Q}_{3p}$ contains three quadratic subfields and at most one cubic subfield, and one of these quadratic subfields is $\mathbb{Q}_{3}$. Since we have two characters in $\operatorname{Irr}(G/G'')$ whose field of values is $\mathbb{Q}_{3}$, there is no character in $\operatorname{Irr}(G|G'')$ whose field of values is $\mathbb{Q}_{3}$. Thus, $\operatorname{Irr}(G|G'')$ contains at most $2\cdot 2+3\cdot 1=7$ non-rational characters, and it contains at most $4$ non-rational characters if $p\equiv -1 \pmod 3$. Thus, $k(G)\leq 7+4+3=14$ and if $p\not\equiv 1 \pmod 3$ then $k(G)\leq 11$. Similarly, we may deduce that if $p=2$ then $k(G)\leq 7$ and if $p=3$ then $k(G)\leq 12$. If $p=2$ or $p\equiv -1 \pmod 3$ then $k(G)\leq 11$ and, applying again Theorem \ref{Vera-Lopez}, we would get a contradiction. Thus, we only have to consider the cases when $p=3$ or when $p\equiv 1 \pmod 3$. If $p=3$ then $k(G)\leq 12$. Reasoning as in the case $G/G''=\mathsf{A}_{4}$, we have that $G''$ contains a unique $G$-conjugacy class of non-trivial elements. Thus, $|G''|$ is a power of $3$ of the form $\frac{21}{|H|}+1$, where $H\leq \mathsf{F}_{21}$, which is impossible. Finally, assume that $p\not=3$ and $p\equiv 1 \pmod 3$. Then, $k(G)\leq 14$. Thus, reasoning as in the case $G/G''=\mathsf{D}_{18}$, we have that $G''$ contains at most $3$ non-trivial $G$-conjugacy classes. Therefore, $|G''|\leq 21\cdot3+1=64$ and $|G''|$ is a power of a prime $p$ such that $p\equiv 1 \pmod 3$. Thus, we deduce that $|G''|\in \{7,13,19,31,37,43,61\}$. Since $|G''|-1$ must be the sum of at most three divisors of $|G/G''|=21$, we have that $|G''|\in \{7,43\}$. Applying that $(|G':G''|,p)=1$, we have that $|G''|=43$ and hence $|G|=21\cdot 43=903$. By Lemma \ref{casos}, there is no group of order $903$ with $f(G)\leq 3$.
\item [(iii)] Case $G/G''\in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$: Then $G/G''=\mathsf{F}_{4q}$ for $q \in \{5,13\}$ and $(q,p)=1$. Thus, applying Lemma \ref{restocasos}, we have that $\mathbb{Q}(\chi)\subseteq \mathbb{Q}_{4p}$ for every $\chi \in \operatorname{Irr}(G|G'')$. Reasoning as in the case $G/G''=\mathsf{F}_{21}$, we have that if $p\not=2$ then $\operatorname{Irr}(G|G'')$ contains at most $7$ non-rational characters, and if $p=2$ then $\operatorname{Irr}(G|G'')$ cannot contain non-rational characters. Therefore, if $p=2$ then $k(G)\leq 8$ and, applying again Theorem \ref{Vera-Lopez}, we would have a contradiction. Therefore we may assume that $p$ is an odd prime. We claim now that $|G''|\equiv 1 \pmod q$. Since $(|G:G''|,p)=1$, applying the Schur--Zassenhaus Theorem, we have that $G''$ is complemented in $G$ by $\mathsf{C}_{4}\ltimes \mathsf{C}_{q}$. We claim that $\mathsf{C}_{q}$ cannot fix any non-trivial element of $G''$. The action of $\mathsf{C}_{q}$ on $G''$ is coprime and thus, by Theorem 4.34 of \cite{Isaacs}, $G''=[G'',\mathsf{C}_{q}]\times C_{G''}(\mathsf{C}_{q})$. Since $C_{G''}(\mathsf{C}_{q})\leq G''$ is normal in $G$ and $G''$ is minimal normal, we have that either $C_{G''}(\mathsf{C}_{q})=1$ or $C_{G''}(\mathsf{C}_{q})=G''$. If $C_{G''}(\mathsf{C}_{q})=G''$ then $G'$ is abelian, which is a contradiction. Thus, $C_{G''}(\mathsf{C}_{q})=1$ and hence $\mathsf{C}_{q}$ does not fix any non-trivial element in $G''$. Therefore, $|G''|\equiv 1 \pmod q$.
\begin{itemize}
\item [a)] Case $G/G''=\mathsf{F}_{20}$: It is easy to see that $k(G)\leq 12$ if $p\equiv 1 \pmod 3$ and $k(G)\leq 9$ if $p\not\equiv 1 \pmod 3$. As in the case $G/G''=\mathsf{A}_{4}$, we may assume that $p\equiv 1 \pmod 3$ and that $G''$ possesses a unique non-trivial $G$-conjugacy class. Therefore, $|G''|\leq 20+1=21$, $|G''|$ is congruent to $1$ modulo $5$ and it is a power of a prime $p$ such that $3$ divides $p-1$. We see that there is no integer with these properties.
\item [b)] Case $G/G''=\mathsf{F}_{52}$: It is easy to see that $k(G)\leq 15$. As in the case $G/G''=\mathsf{D}_{18}$, we may assume that $\alpha(G)\geq 10$ and hence $G''$ contains at most 5 $G$-conjugacy classes. Therefore, $|G''|\leq 4\cdot 52+1=209$. It follows that $|G''|\equiv 1 \pmod 13$, $|G''|\leq 209$ and it is a power of a prime. Thus, $|G''|\in \{27,53,79,131,157\}$ and hence $|G|\in \{1404,2756,4108,6812,8164\}$, which contradicts Lemma \ref{casos}.
\end{itemize}
\end{itemize}
We conclude that $G''=1$ and the result follows.
\end{proof}
\end{thm}
Now, Theorem A follows from Theorems \ref{nonsolvable} and \ref{solvable}.
\renewcommand{\abstractname}{Acknowledgements}
\begin{abstract}
This work will be part of the author's PhD thesis, written under the supervision of Alexander Moret\'o, whom the author would like to thank.
\end{abstract}
\section{Introduction}
The Graph Minor Theorem of Robertson and Seymour~\cite{RS} implies that
any minor closed graph property $\mathcal{P}$ is characterized by a finite
set of obstructions. For example, planarity is
determined by $K_5$ and $K_{3,3}$ \cite{K,W} while
linkless embeddability has
seven obstructions, known as the Petersen family~\cite{RST}.
However, Robertson and Seymour's proof is highly non-constructive and it remains frustratingly difficult to identify the obstruction set, even for a simple property such as apex (see~\cite{JK}).
Although we know that the obstruction set for a property $\mathcal{P}$ is finite, in practice it is often difficult to establish any useful bounds on its size.
In the absence of concrete bounds,
information about the shape or distribution of an
obstruction set would be welcome. Given that the number of obstructions is finite, one might anticipate a roughly normal distribution with a central maximum and numbers tailing off to either side.
Indeed, many of the known obstruction sets do follow this
pattern. In \cite[Table 2]{MW}, the authors present a listing of more than 17 thousand obstructions for torus embeddings. Although the list is likely incomplete, it does appear to follow a normal
distribution both with respect to graph order and graph size, see
Figure~\ref{fig:TorObs}.
\begin{figure}[htb]
\centering
\subfloat[By order]{\includegraphics[width=0.4\textwidth]{TLO.png}}
\hfill
\subfloat[By size]{\includegraphics[width=0.4\textwidth]{TLS.png}}
\caption{Distribution of torus embedding obstructions}
\label{fig:TorObs}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{l|ccccccccccc}
Size & 18 & 19 & 20 & 21 & 22 & 23 & $\cdots$ & 28 & 29 & 30 & 31 \\ \hline
Count & 6 & 19 & 8 & 123 & 517 & 2821 & $\cdots$ & 299 & 8 & 4 & 1 \end{tabular}
\caption{Count of torus embedding obstructions by size.}
\label{tab:TorSiz}
\end{table}
However, closer inspection (see Table~\ref{tab:TorSiz}) shows
that there is a {\em dip}, or local minimum, in the number of obstructions at size twenty. We will say that the dip occurs at
{\em small size}, meaning it is near the end of the left tail
of the size distribution. Some properties in fact have a {\em gap}
at small size, meaning there are sizes near the end of the left
tail for which there are no obstructions at all.
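In other words, a dip is an interior local minimum of the size-count sequence. As a small illustration (a sketch written for this survey, not part of any cited computation), applied to the left tail of Table~\ref{tab:TorSiz}:

```python
def dips(sizes, counts):
    """Sizes at which the obstruction count is a strict interior local minimum."""
    return [sizes[i] for i in range(1, len(counts) - 1)
            if counts[i] < counts[i - 1] and counts[i] < counts[i + 1]]

# Left tail of the torus obstruction counts from Table 2 of [MW]
sizes = [18, 19, 20, 21, 22, 23]
counts = [6, 19, 8, 123, 517, 2821]
print(dips(sizes, counts))  # reports the dip at size 20
```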
In the next section, we survey topological properties for which
we know something about the obstruction set. Almost all
exhibit a dip or gap at small size.
In Section 3, we prove the following.
\begin{theorem} \label{thm:3MM}
There are exactly three obstructions to knotless embedding of size 23.
\end{theorem}
A {\em knotless embedding} of a graph is an embedding in $\mathbb{R}^3$ such
that each cycle is a trivial knot. Since there are no obstructions of size 20 or less, 14 of size 21, 92 of size 22 and at least 156 of size 28 (see \cite{FMMNN, GMN}), the theorem shows
that the knotless embedding obstruction set also has a dip at small size, 23.
In that section, we also pose a question: if $G$ has a vertex of degree less than
three, is it true that no graph related to $G$ by a sequence of $\nabla\mathrm{Y}$ moves, and no
graph related to $G$ by a sequence of $\mathrm{Y}\nabla$ moves, can be an obstruction
for knotless embedding?
In Section 4, we prove the following.
\begin{theorem}
\label{thm:ord10}
There are exactly 35 obstructions to knotless embedding of order 10.
\end{theorem}
In contrast to graph size, distributions with respect to graph order
generally do not have dips or gaps. In particular, Theorem~\ref{thm:ord10} continues an increasing trend of
no obstructions of order 6 or less,
one obstruction of order 7~\cite{CG}, two of order 8~\cite{CMOPRW,BBFFHL}, and eight of order 9~\cite{MMR}.
\section{Dips at small size}
As mentioned in the introduction, it remains difficult to determine the
obstruction set even for simple graph properties. In this
section we survey the topological graph properties for which we
know something about the obstruction set.
Almost all demonstrate a gap or dip at small size. We will look
at properties with a gap, a dip, and neither, in turn. Note that, as in
Table~\ref{tab:TorSiz}, when we present data in a table, we begin
with the extreme left end of the distribution. There are no obstructions
of size smaller than that of the first column in our tables.
We begin with a listing of graph properties for which there is a gap at small size. The obstruction set for apex-outerplanar graphs was
determined by Ding and Dziobiak~\cite{DD}. A graph is {\em apex-outerplanar} if it is outerplanar, or becomes outerplanar on deletion of a single vertex. The 57 obstructions are
distributed by size as in Table~\ref{tab:AOPSiz}. There are gaps at
size 11, 16, and 17, and a dip at size 13. While this distribution has a gap at small size, 11, it is far from normal.
\begin{table}
\centering
\begin{tabular}{l|cccccccccc}
Size & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\ \hline
Count & 1 & 1 & 0 & 7 & 2 & 6 & 10 & 0 & 0 & 30
\end{tabular}
\caption{Count of apex-outerplanar obstructions by size.}
\label{tab:AOPSiz}
\end{table}
Various authors \cite{JK,LMMPRTW,P} have investigated the set of apex
obstructions. A graph is {\em apex} if it is planar, or becomes
planar on deletion of a single vertex.
As yet, we do not have a complete listing of the apex obstruction
set, but Jobson and K{\'e}zdy~\cite{JK} report that there
are at least 401 obstructions. Table~\ref{tab:MMNASiz} shows the
classification of obstructions through size 21 obtained by Pierce~\cite{P} in his senior thesis. Note the gaps at size 16 and 17.
\begin{table}
\centering
\begin{tabular}{l|ccccccc}
Size & 15 & 16 & 17 & 18 & 19 & 20 & 21 \\ \hline
Count & 7 & 0 & 0 & 4 & 5 & 22 & 33
\end{tabular}
\caption{Count of apex obstructions through size 21.}
\label{tab:MMNASiz}
\end{table}
We say that a graph is {\em 2-apex} if it is apex, or becomes apex on deletion of a single vertex. Table~\ref{tab:MMN2ASiz} shows the little that we know about the obstruction set for this family \cite{MP}.
Aside from the counts for sizes 21 and 22 and the gap at size 23, we know only that there are obstructions for each size from 24 through 30.
\begin{table}
\centering
\begin{tabular}{l|ccc}
Size & 21 & 22 & 23 \\ \hline
Count & 20 & 60 & 0
\end{tabular}
\caption{Count of 2-apex obstructions through size 23.}
\label{tab:MMN2ASiz}
\end{table}
Most of the remaining topological properties whose obstruction
set has been studied have a dip at small size. In the introduction,
we mentioned the dip at size 20 for the torus obstructions.
One of our main results, Theorem~\ref{thm:3MM},
demonstrates a dip at size 23 for knotless embedding.
Table~\ref{tab:PPSiz} shows the dip at size 17 for the 35 obstructions
to embedding in the projective plane~\cite{A,GHW,MT}.
\begin{table}
\centering
\begin{tabular}{l|cccccc}
Size & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline
Count & 4 & 8 & 7 & 12 & 2 & 2
\end{tabular}
\caption{Count of projective planar obstructions by size.}
\label{tab:PPSiz}
\end{table}
A final example of a dip at small size is the 38 obstructions for
outer-cylindrical graphs classified by Archdeacon
et al.~\cite{ABDHS}. A graph is
{\em outer-cylindrical} if it has a planar embedding so that all
vertices fall on two faces. Table~\ref{tab:OCSiz} illustrates
the dip at size 11 for this obstruction set.
\begin{table}
\centering
\begin{tabular}{l|cccccc}
Size & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline
Count & 1 & 2 & 1 & 27 & 6 & 1
\end{tabular}
\caption{Count of outer-cylindrical obstructions by size.}
\label{tab:OCSiz}
\end{table}
Some small obstruction sets have no
gap or dip simply because they consist of only one or two sizes. There are two obstructions to planarity, one each of size 9 ($K_{3,3}$) and 10 ($K_5$). The two obstructions, $K_4$ and $K_{3,2}$, to outerplanarity both have size six and the seven obstructions to linkless embedding in the
Petersen family~\cite{RST} are all of size 15.
We know of only one example
of a larger obstruction set for a topological property that
exhibits no dip or gap.
Archdeacon, Hartsfield, Little, and Mohar~\cite{AHLM}
determined the 32 obstructions to outer-projective-planar embedding
and Table~\ref{tab:OPPSiz} shows this set has no dips or gaps.
\begin{table}
\centering
\begin{tabular}{l|ccc}
Size & 10 & 11 & 12 \\ \hline
Count & 1 & 7 & 24
\end{tabular}
\caption{Count of outer-projective-planar obstructions by size.}
\label{tab:OPPSiz}
\end{table}
\section{Knotless embedding obstructions of size 23}
In this section we prove Theorem~\ref{thm:3MM}:
there are exactly three obstructions
to knotless embedding of size 23. In subsection~\ref{sec:Heawood} we
ask a question that may be of independent interest.
Question~\ref{ques:d3MMIK}:
If $\delta(G) < 3$, is it true that $G$ has no MMIK descendants
or ancestors? Definitions are given below.
We begin with some terminology.
A graph that admits no knotless embedding is {\em intrinsically knotted (IK)}. In contrast, we'll call the graphs that admit a knotless
embedding {\em not intrinsically knotted (nIK)}. If $G$ is
in the obstruction set for knotless embedding we'll say
$G$ is {\em minor minimal intrinsically knotted (MMIK)}.
This reflects that, while $G$ is IK, no proper minor
of $G$ has that property. Similarly, we will call 2-apex obstructions {\em minor minimal not 2-apex (MMN2A)}.
Our strategy for classifying MMIK graphs of size 23 is based on
the following observation.
\begin{lemma} \cite{BBFFHL,OT} \label{lem:2apex}
If $G$ is 2-apex, then $G$ is not IK.
\end{lemma}
Suppose $G$ is MMIK of size 23. By Lemma~\ref{lem:2apex}, $G$ is
not 2-apex and, therefore, $G$ has an MMN2A minor. The MMN2A graphs
through size 23 were classified in~\cite{MP}. All but eight of them
are also MMIK and none are of size 23. It follows that an MMIK graph
of size 23 has one of the eight exceptional MMN2A graphs as a minor.
Our strategy is to construct all size 23 expansions of the eight
exceptional graphs and determine which of those is in fact MMIK.
Before further describing our search, we remark that it does rely
on computer support. Indeed, the initial classification of MMN2A
graphs in \cite{MP} is itself based on a computer search.
We give a traditional proof that there are three size 23 MMIK graphs,
which is stated as Theorem~\ref{thm:TheThree} below.
We rely on computers only for the argument that there are
no other size 23 MMIK graphs. We remark that even though
we cannot provide a complete, traditional proof
that there are no more than three
size 23 MMIK graphs,
our argument does strongly suggest that there are
far fewer MMIK graphs of size 23 than the
known 92 MMIK graphs of size 22 and at least 156 of size 28~\cite{FMMNN, GMN}.
In other words, even without computers, we have compelling evidence that there is a dip at size 23 for the
obstructions to knotless embedding.
Below we give graph6 notation~\cite{sage} and edge lists for the three MMIK graphs of size 23.
\noindent%
$G_1$ \verb"J@yaig[gv@?"
$$[(0, 4), (0, 5), (0, 9), (0, 10), (1, 4), (1, 6), (1, 7), (1, 10), (2, 3), (2, 4), (2, 5), (2, 9),$$
$$ (2, 10), (3, 6), (3, 7), (3, 8), (4, 8), (5, 6), (5, 7), (5, 8), (6, 9), (7, 9), (8, 10)]$$
\noindent%
$G_2$ \verb"JObFF`wN?{?"
$$[(0, 2), (0, 4), (0, 5), (0, 6), (0, 7), (1, 5), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8),$$
$$(2, 9), (3, 7), (3, 8), (3, 9), (3, 10), (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10)]$$
\noindent%
$G_3$ \verb"K?bAF`wN?{SO"
$$[(0, 4), (0, 5), (0, 7), (0, 11), (1, 5), (1, 6), (1, 7), (1, 8), (2, 7), (2, 8), (2, 9), (2, 11),$$
$$(3, 7), (3, 8), (3, 9), (3, 10), (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10), (6, 11)]$$
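The graph6 strings and the edge lists above encode the same data, and the correspondence can be checked with a short decoder (a sketch for graphs on fewer than 63 vertices; graph6 stores the order $n$ followed by the upper triangle of the adjacency matrix, packed six bits per printable character):

```python
def graph6_edges(s):
    """Decode a graph6 string (order < 63) into (n, sorted edge list)."""
    data = [ord(c) - 63 for c in s]
    n = data[0]
    bits = []
    for v in data[1:]:
        # each character carries six bits, most significant first
        bits.extend((v >> k) & 1 for k in range(5, -1, -1))
    edges, idx = [], 0
    for j in range(1, n):          # columns of the upper triangle
        for i in range(j):
            if bits[idx]:
                edges.append((i, j))
            idx += 1
    return n, sorted(edges)

n, edges = graph6_edges("J@yaig[gv@?")   # the string for G_1
assert n == 11 and len(edges) == 23
```

Decoding $G_1$ recovers exactly the 23 edges listed above, and the same check applies to $G_2$ and $G_3$.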
The graph $G_1$ was discovered by Hannah Schwartz~\cite{N}. Graphs
$G_1$ and $G_2$ have order 11 while $G_3$ has order 12. We prove
the following in subsection~\ref{sec:3MMIKpf} below.
\begin{theorem} \label{thm:TheThree}
The graphs $G_1$, $G_2$, and $G_3$ are MMIK of size 23.
\end{theorem}
We next describe the computer search that shows there are no other size
23 MMIK graphs and completes the proof of
Theorem~\ref{thm:3MM}. There are
eight exceptional graphs of size at most 23
that are MMN2A and not MMIK. Six of them are in the Heawood family of size 21 graphs.
The other two, $H_1$ and $H_2$, are 4-regular graphs on 11 vertices with size 22 described in \cite{MP}, listed
in the appendix, and shown in Figure~\ref{fig:H1H2}.
It is straightforward to verify that, while
neither $H_1$ nor $H_2$ is $2$-apex, all
their proper minors are.
This shows these graphs are MMN2A.
In Figure~\ref{fig:H1H2} we give
knotless embeddings of these two graphs;
these graphs are not IK, let alone
MMIK. It turns out that the three graphs of Theorem~\ref{thm:TheThree} are
expansions of $H_1$ and $H_2$. In subsection~\ref{sec:1122graphs} below
we argue that no other size 23 expansion of $H_1$ or $H_2$ is MMIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{h1h2.eps}
\caption{Knotless embeddings of graphs $H_1$ (left) and $H_2$ (right).}
\label{fig:H1H2}
\end{figure}
The Heawood family consists of twenty graphs of size 21
related to one another by $\nabla\mathrm{Y}$ and $\mathrm{Y}\nabla$ moves, see
Figure~\ref{fig:TY}.
In \cite{GMN,HNTY} two groups, working independently,
verified that 14 of the graphs in the family are MMIK, and the remaining six are MMN2A and not MMIK.
In subsection~\ref{sec:Heawood}
below, we argue that no size 23 expansion of any of these six Heawood family
graphs is MMIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{DY.eps}
\caption{$\nabla\mathrm{Y}$ and $\mathrm{Y}\nabla$ moves.}
\label{fig:TY}
\end{figure}
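For concreteness, a $\nabla\mathrm{Y}$ move is easily described on edge lists: the three edges of a triangle are removed and a new degree-three vertex is joined to its corners. The implementation below is an illustrative sketch of ours, not code from any of the cited searches:

```python
def triangle_to_Y(edges, tri):
    """Apply a triangle-to-Y move at the triangle tri = (a, b, c)."""
    a, b, c = tri
    E = {frozenset(e) for e in edges}
    # the move only makes sense at an actual triangle of the graph
    assert all(frozenset(t) in E for t in [(a, b), (b, c), (a, c)])
    E -= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    y = max(max(e) for e in edges) + 1     # label for the new vertex
    E |= {frozenset((v, y)) for v in tri}
    return sorted(tuple(sorted(e)) for e in E)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(triangle_to_Y(K4, (0, 1, 2)))
```

Applied to $K_4$ at the triangle $\{0,1,2\}$, the move produces $K_{2,3}$: the old vertex $3$ and the new vertex are each joined to the three corners of the former triangle.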
Combining the arguments of the next three subsections give a proof of
Theorem~\ref{thm:3MM}.
Before diving into the details, we
state a few lemmas we'll use throughout. The first is about the
{\em minimal degree} $\delta(G)$, which is the least degree
among the vertices of graph $G$.
\begin{lemma} If $G$ is MMIK, then $\delta(G) \geq 3$.
\label{lem:delta3}
\end{lemma}
\begin{proof} Suppose $G$ is IK with $\delta(G) < 3$.
By deleting an isolated vertex, or by contracting an edge at a
vertex of degree one or two, we obtain a proper minor that
is also IK, so $G$ is not MMIK.
\end{proof}
\begin{lemma}
\label{lem:tyyt}
The $\nabla\mathrm{Y}$ move preserves IK: If $G$ is IK and $H$ is obtained from $G$ by a $\nabla\mathrm{Y}$ move, then $H$ is also IK.
Conversely, the $\mathrm{Y}\nabla$ move preserves
nIK: if $H$ is nIK and $G$ is obtained from $H$ by a $\mathrm{Y}\nabla$ move, then $G$ is also nIK.
\end{lemma}
\begin{proof} Sachs~\cite{S} showed that $\mathrm{Y}\nabla$ preserves linkless embedding and
essentially the same argument applies to
knotless embeddings.
\end{proof}
Finally, we note that the MMIK property can move backwards
along $\nabla\mathrm{Y}$ moves.
\begin{lemma} \cite{BDLST,OT}
\label{lem:MMIK}
Suppose $G$ is IK and
$H$ is obtained from $G$ by a $\nabla\mathrm{Y}$ move.
If $H$ is MMIK, then $G$ is also.
\end{lemma}
\subsection{Proof of Theorem~\ref{thm:TheThree}}
\label{sec:3MMIKpf}
In this subsection we prove Theorem~\ref{thm:TheThree}: the three graphs $G_1$, $G_2$, and $G_3$ are MMIK.
\subsubsection{$G_1$ is MMIK}
To show that $G_1$ is MMIK, we first argue that no proper minor is IK.
Up to isomorphism, there are 12 minors obtained by contracting or deleting
a single edge. Each of these is 2-apex, except for the MMN2A graph $H_1$.
By Lemma~\ref{lem:2apex}, a 2-apex graph is nIK and Figure~\ref{fig:H1H2}
gives a knotless embedding of $H_1$.
(Note that we use software to check that every cycle is a trivial knot
in our knotless embeddings. A Mathematica version of the program
is available at Ramin Naimi's homepage~\cite{NW}.)
This shows that all proper minors
of $G_1$ are nIK.
We next show that $G_1$ is IK. For this we use a lemma due, independently, to two groups~\cite{F,TY}. Let $\mathcal{D}_4$ denote the multigraph of
Figure~\ref{fig:D4} and, for $ i = 1,2,3,4$, let $C_i$ be the cycle
of edges $e_{2i-1}, e_{2i}$. For any given embedding of $\mathcal{D}_4$, let $\sigma$
denote the mod 2 sum of the Arf invariants of the 16 Hamiltonian cycles
in $\mathcal{D}_4$
and $\mbox{lk}(C,D)$ the mod 2 linking number
of cycles $C$ and $D$.
Since the Arf invariant of the unknot is zero,
an embedding of $\mathcal{D}_4$ with
$\sigma \neq 0$ must have a knotted cycle.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{D4.eps}
\caption{The $\mathcal{D}_4$ graph.}
\label{fig:D4}
\end{figure}
\begin{lemma}\cite{F,TY}
\label{lem:D4}
Given an embedding of $\mathcal{D}_4$, $\sigma \neq 0$ if and only if
$\mbox{lk}(C_1,C_3) \neq 0$ and $\mbox{lk}(C_2,C_4) \neq 0$.
\end{lemma}
\begin{figure}[htb]
\centering
\includegraphics[scale=1.25]{g1.eps}
\caption{The graph $G_1$.}
\label{fig:G1}
\end{figure}
To use Lemma~\ref{lem:D4} we need pairs of linked cycles.
We will find minors of $G_1$ that are members of the Petersen family of graphs
as these are the obstructions to linkless embedding~\cite{RST}. For example,
contracting edges $(3,6)$, $(5,7)$, and $(6,9)$
in Figure~\ref{fig:G1} yields a $K_{4,4}$ minor with
$\{3,4,5,10\}$ and $\{0,1,2,8\}$ as the two parts. For convenience, we
use the smallest vertex label to denote the new vertex obtained when contracting
edges. Thus, we denote by 3 the vertex obtained by identifying 3, 6, and 9
of $G_1$. Further deleting the edge $(1,3)$ we identify the Petersen
family graph $K_{4,4}^-$ as a minor of $G_1$.
There are nine pairs of disjoint cycles in
$K_{4,4}^-$ and we will denote these pairs as $A_1$ through $A_9$.
In Table~\ref{tab:An}, we first give the cycle pair in the $K_{4,4}^-$
and then the corresponding pair in $G_1$.
\begin{table}[htb]
\centering
\begin{tabular}{c|l|l}
$A_1$ & 0,4,1,5 -- 2,3,8,10 & 0,4,1,7,5 -- 2,3,8,10 \\
$A_2$ & 0,4,1,10 -- 2,3,8,5 & 0,4,1,10 -- 2,3,8,5 \\
$A_3$ & 1,4,2,5 -- 0,3,8,10 & 1,4,2,5,7 -- 0,9,6,3,8,10 \\
$A_4$ & 1,4,2,10 -- 0,3,8,5 & 1,4,2,10 -- 0,9,6,3,8,5 \\
$A_5$ & 1,4,8,5 -- 0,3,2,10 & 1,4,8,5,7 -- 0,9,6,3,2,10 \\
$A_6$ & 1,4,8,10 -- 0,3,2,5 & 1,4,8,10 -- 0,9,6,3,2,5 \\
$A_7$ & 1,5,2,10 -- 0,3,8,4 & 1,7,5,2,10 -- 0,9,6,3,8,4 \\
$A_8$ & 0,5,1,10 -- 2,3,8,4 & 0,5,7,1,10 -- 2,3,8,4 \\
$A_9$ & 1,5,8,10 -- 0,3,2,4 & 1,7,5,8,10 -- 0,9,6,3,2,4 \\
\end{tabular}
\caption{Nine pairs of cycles in $G_1$ called $A_1, \ldots, A_9$.}
\label{tab:An}
\end{table}
Similarly, we will describe a $K_{3,3,1}$ minor that gives pairs of cycles
$B_1$ through $B_9$. Contract edges $(0,9)$, $(1,7)$, $(2,3)$, $(3,7)$,
$(3,8)$, $(5,7)$, and $(7,9)$. Delete vertex $6$ and edge $(2,9)$.
The result is a $K_{3,3,1}$ minor with parts $\{0,2,8\}$, $\{4,5,10\}$, and
$\{1\}$. In Table~\ref{tab:Bn} we give the nine pairs of cycles, first
in $K_{3,3,1}$ and then in $G_1$.
\begin{table}[htb]
\centering
\begin{tabular}{c|l|l}
$B_1$ & 0,1,4 -- 2,5,8,10 & 0,4,1,7,9 -- 2,5,8,10 \\
$B_2$ & 0,1,5 -- 2,4,8,10 & 0,5,7,9 -- 2,4,8,10 \\
$B_3$ & 0,1,10 -- 2,4,8,5 & 0,9,7,1,10 -- 2,4,8,5 \\
$B_4$ & 1,2,4 -- 0,5,8,10 & 1,4,2,3,7 -- 0,5,8,10 \\
$B_5$ & 1,2,5 -- 0,4,8,10 & 2,3,7,5 -- 0,4,8,10 \\
$B_6$ & 1,2,10 -- 0,4,8,5 & 2,3,7,1,10 -- 0,4,8,5 \\
$B_7$ & 1,4,8 -- 0,5,2,10 & 1,4,8,3,7 -- 0,5,2,10 \\
$B_8$ & 1,5,8 -- 0,4,2,10 & 3,7,5,8 -- 0,4,2,10 \\
$B_9$ & 1,8,10 -- 0,4,2,5 & 1,7,3,8,10 -- 0,4,2,5 \\
\end{tabular}
\caption{Nine pairs of cycles in $G_1$ called $B_1, \ldots, B_9$.}
\label{tab:Bn}
\end{table}
Another $K_{4,4}^-$ minor of $G_1$ will give our last set of nine cycle pairs.
Contract edges $(0,9)$, $(2,5)$, and $(3,8)$ to obtain a $K_{4,4}$
with parts $\{0,1,2,8\}$ and $\{4,6,7,10\}$. Then delete
edge $(1,4)$ to make a $K_{4,4}^-$ minor. Table~\ref{tab:Cn} lists
the nine pairs of cycles, first in the $K_{4,4}^-$ minor and
then in $G_1$.
\begin{table}[htb]
\centering
\begin{tabular}{c|l|l}
$C_1$ & 0,6,1,7 -- 2,4,8,10 & 9,6,1,7 -- 2,4,8,10 \\
$C_2$ & 0,6,1,10 -- 2,4,8,7 & 0,9,6,1,10 -- 2,4,8,3,7,5 \\
$C_3$ & 1,6,2,7 -- 0,4,8,10 & 1,6,5,7 -- 0,4,8,10 \\
$C_4$ & 1,6,2,10 -- 0,4,8,7 & 1,6,5,2,10 -- 0,4,8,3,7,9 \\
$C_5$ & 1,6,8,7 -- 0,4,2,10 & 1,6,3,7 -- 0,4,2,10 \\
$C_6$ & 1,6,8,10 -- 0,4,2,7 & 1,6,3,8,10 -- 0,4,2,5,7,9 \\
$C_7$ & 0,7,1,10 -- 2,4,8,6 & 0,9,7,1,10 -- 2,4,8,3,6,5 \\
$C_8$ & 1,7,2,10 -- 0,4,8,6 & 1,7,5,2,10 -- 0,4,8,3,6,9 \\
$C_9$ & 1,7,8,10 -- 0,4,2,6 & 1,7,3,8,10 -- 0,4,2,5,6,9 \\
\end{tabular}
\caption{Nine pairs of cycles in $G_1$ called $C_1, \ldots, C_9$.}
\label{tab:Cn}
\end{table}
As shown by Sachs~\cite{S}, in any embedding of $K_{4,4}^-$
or $K_{3,3,1}$, at least one
pair of the nine disjoint cycles in each graph has odd linking number.
We will simply say the cycles are {\em linked} if the linking number is odd.
Fix an embedding of $G_1$. Our goal is to show that the embedding must
have a knotted cycle.
We will argue by contradiction. For a contradiction, assume that there
is no knotted cycle in the embedding of $G_1$. We leverage Lemma~\ref{lem:D4}
to deduce that certain pairs of cycles are not linked. Eventually,
we will conclude that none of $B_1, \ldots, B_9$ are linked.
This is a contradiction as these correspond to cycles in a $K_{3,3,1}$
and we know that every embedding of this Petersen family graph
must have a pair of linked cycles~\cite{S}.
The contradiction shows that there must in fact be a knotted
cycle in the embedding of $G_1$. As the embedding is arbitrary, this
shows that $G_1$ is IK.
We illustrate our strategy by first focusing on the pair
$A_2 = $ 0,4,1,10 -- 2,3,8,5.
Combine $A_2$ with each $B_i$. In each case we form a $\mathcal{D}_4$ graph
as in Figure~\ref{fig:D4}. Since the $B_i$ are pairs of cycles in $K_{3,3,1}$,
a Petersen family graph, at least one pair is linked~\cite{S}.
If $A_2$ is
also linked, then Lemma~\ref{lem:D4} implies that the embedding of
$G_1$ has a knotted cycle, in contradiction to our assumption.
Therefore, we conclude that $A_2$ is not linked (i.e., does not
have odd linking number).
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$B_1$ & $\{0,1,4\}$, $\{2,5,8\}$, $\{3,7\}$, $\{10\}$ \\
$B_2$ & $\{0\}$, $\{1,4,10\}$, $\{2,3,8\}$, $\{5\}$ \\
$B_3$ & $\{0,1,10\}$, $\{2,5,8\}$, $\{3,7\}$, $\{4\}$ \\
$B_4$ & $\{0,10\}$, $\{1,4\}$, $\{2,3\}$, $\{5,8\}$ \\
$B_5$ & $\{0,4,10\}$, $\{1\}$, $\{2,3,5\}$, $\{8\}$ \\
$B_6$ & $\{0,4\}$, $\{1,10\}$, $\{2,3\}$, $\{5,8\}$ \\
$B_7$ & $\{0,10\}$, $\{1,4\}$, $\{2,5\}$, $\{3,8\}$ \\
$B_8$ & $\{0,4,10\}$, $\{1,7\}$, $\{2\}$, $\{3,8,5\}$ \\
$B_9$ & $\{0,4\}$, $\{1,10\}$, $\{2,5\}$, $\{3,8\}$ \\
\end{tabular}
\caption{Pairing $A_2$ with $B_1, \ldots, B_9$.}
\label{tab:A2}
\end{table}
In Table~\ref{tab:A2}, we list the vertices in $G_1$ that are identified to give each of the four vertices of $\mathcal{D}_4$.
Let's examine the pairing with $B_2$ as an example
to see how this results in a $\mathcal{D}_4$.
We identify $\{1,4,10\}$ as a single vertex by contracting edges $(1,4)$ and
$(1,10)$. Similarly contract $(2,3)$ and $(3,8)$ to make a vertex of
the $\mathcal{D}_4$ from vertices $\{2,3,8\}$ of $G_1$. In this way, the cycle 0,4,1,10
of $A_2$ in $G_1$ becomes cycle $C_1$ of the $\mathcal{D}_4$
(see Figure~\ref{fig:D4})
between $\{1,4,10\}$ and $\{0\}$ and
the cycle 2,3,8,5 becomes cycle $C_3$ between $\{2,3,8\}$ and $\{5\}$.
Similarly 0,5,7,9 of $B_2$ becomes homeomorphic to the cycle $C_2$
between $\{0\}$ and $\{5\}$. For the final cycle of $B_2$, 2,4,8,10,
we observe that, in homology,
$\mbox{2,4,8,10 } = \mbox{ 1,4,2,10 } \cup \mbox{ 1,4,8,10}$.
Assuming $\mbox{lk}((\mbox{0,5,7,9}),(\mbox{ 2,4,8,10})) \neq 0$,
then one of
$\mbox{lk}((\mbox{0,5,7,9}),(\mbox{1,4,2,10}))$ and
$\mbox{lk}((\mbox{0,5,7,9}),(\mbox{1,4,8,10}))$ is
also nonzero. Whichever it is, 1,4,2,10 or 1,4,8,10, that will be our $C_4$ cycle in the $\mathcal{D}_4$ of Figure~\ref{fig:D4}.
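The step just made is an instance of a fact used repeatedly below: the mod 2 linking number is additive over a homological decomposition. If the edge set of a cycle $Z$ is the symmetric difference of the edge sets of cycles $Z_1$ and $Z_2$, then for any cycle $C$ disjoint from $Z$, $Z_1$, and $Z_2$,
\[
\mbox{lk}(C, Z) \equiv \mbox{lk}(C, Z_1) + \mbox{lk}(C, Z_2) \pmod{2},
\]
so, whenever $\mbox{lk}(C,Z)$ is odd, at least one of the two summands also has odd linking number with $C$.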
To summarize, we've argued that $A_2$ forms a $\mathcal{D}_4$
with each pair $B_1, \ldots, B_9$. Since at least one of the $B_i$'s is
linked, if $A_2$ were also linked, these two pairs would make a
$\mathcal{D}_4$ that has a knotted cycle. Therefore,
by way of contradiction, going forward, we may assume $A_2$ is
not linked.
We next argue that $A_6$ is not linked. Pairing with
the $B_i$'s again, the vertices for each $\mathcal{D}_4$ are
\begin{align*}
B_1 & \{0,9\}, \{1,4\}, \{2,5\}, \{8,10\} &
B_2 & \{0,5,9\}, \{1,7\}, \{2\}, \{4,8,10 \} \\
B_3 & \{0,9\}, \{1,10\}, \{2,5\}, \{4,8\} &
B_4 & \{0,5\}, \{1,4\}, \{2,3\}, \{8,10 \} \\
B_5 & \{0\}, \{1,7\}, \{2,3,5\}, \{4,8,10\} &
B_6 & \{0,5\}, \{1,10\}, \{2,3\}, \{4,8 \} \\
B_7 & \{0,2,5\}, \{1,4,8\}, \{3\}, \{10\} &
B_9 & \{0,2,5\}, \{1,8,10\}, \{3\}, \{4 \}.\\
\end{align*}
For $B_8 = $ 3,7,5,8 -- 0,4,2,10, we first split one of the $A_6$
cycles:
$\mbox{0,5,2,3,6,9 } = \mbox{ 0,5,2,9 } \cup \mbox{ 2,3,6,9}$.
One of the two summands must link with the other $A_6$ cycle
$1,4,8,10$. If
$\mbox{lk}((\mbox{0,5,2,9}),(\mbox{1,4,8,10})) \neq 0$,
then, by a symmetry of $G_1$,
$\mbox{lk}((\mbox{8,5,2,3}),(\mbox{1,4,0,10})) \neq 0$. But this last is
the pair $A_2$, and we have already argued that $A_2$ is not linked.
Therefore, it must be that
$\mbox{lk}((\mbox{2,3,6,9}),(\mbox{1,4,8,10})) \neq 0$.
Next, we split a cycle of $B_8$:
$\mbox{0,4,2,10 } = \mbox{ 1,4,2,10 } \cup \mbox{ 0,4,1,10}$.
If $\mbox{lk}((\mbox{3,7,5,8}),(\mbox{1,4,2,10})) \neq 0$,
form a $\mathcal{D}_4$ with vertices $\{1,4,10\}$, $\{2\}$, $\{3\}$, and $\{8\}$.
On the other hand,
If $\mbox{lk}((\mbox{3,7,5,8}),(\mbox{0,4,1,10})) \neq 0$,
form a $\mathcal{D}_4$ with vertices $\{0,9\}$, $\{1,4,10\}$, $\{3\}$,
and $\{8\}$.
For every choice of $B_i$ we can make a $\mathcal{D}_4$ with $A_6$. We know
that at least one $B_i$ is a linked pair. If $A_6$ is also linked,
then, by Lemma~\ref{lem:D4}, this embedding of $G_1$ has a knotted
cycle. Therefore, by way of contradiction, we may assume that
$A_6$ is not linked.
We next eliminate $B_2$ by pairing it with each $A_i$. As we're assuming
$A_2$ and $A_6$ are not linked, it must be some other $A_i$ pair that is
linked. Here are the vertices of the $\mathcal{D}_4$'s in each case:
\begin{align*}
A_1 & \{0,5,7\}, \{2,8,10\}, \{3,6,9\}, \{4\} &
A_3 & \{0,9\}, \{2,4\}, \{5,7\}, \{8,10 \} \\
A_4 & \{0,5,9\}, \{1,7\}, \{2,4,10\}, \{8\} &
A_5 & \{0,9\}, \{2,10\}, \{4,8\}, \{5,7 \} \\
A_7 & \{0,9\}, \{2,10\}, \{4,8\}, \{5,7\} &
A_8 & \{0,5,7\}, \{2,4,8\}, \{3,6,9\}, \{10\} \\
A_9 & \{0,9\}, \{2,4\}, \{5,7\}, \{8,10\}. \\
\end{align*}
This shows $B_2$ is not linked. Since $B_8$ is
the same as $B_2$ by a symmetry of $G_1$, $B_8$
is likewise not linked. Ultimately, we will show
that no $B_i$ is linked. So far, we have this for $B_2$ and $B_8$.
Our next step is to argue $C_1$ is not linked by pairing it
with the remaining $A_i$'s:
\begin{align*}
A_1 & \{1,7\}, \{2,8,10\}, \{3,6\}, \{4\} &
A_3 & \{1,7\}, \{2,4\}, \{6,9\}, \{8,10 \} \\
A_4 & \{1\}, \{2,4,10\}, \{6,9\}, \{8\} &
A_5 & \{1,7\}, \{2,10\}, \{4,8\}, \{6,9 \} \\
A_7 & \{0,4,8\}, \{1,5,7\}, \{6\}, \{10\} &
A_8 & \{1,7\}, \{2,4,8\}, \{3,6\}, \{10\} \\
A_9 & \{1,7\}, \{2,4\}, \{6,9\}, \{8,10\}. \\
\end{align*}
By a symmetry of $G_1$, $C_5$ is also not linked.
Now we argue that $C_3$ is not linked, again by pairing
with $A_i$'s:
\begin{align*}
A_1 & \{0,4\}, \{1,5,7\}, \{3,6\}, \{8,10\} &
A_3 & \{0,8,10\}, \{1,5,7\}, \{4\}, \{6 \} \\
A_5 & \{0,10\}, \{1,5,7\}, \{4,8\}, \{6\} &
A_7 & \{0,4,8\}, \{1,5,7\}, \{6\}, \{10\} \\
A_8 & \{0,10\}, \{1,5,7\}, \{3,6\}, \{4,8\} &
A_9 & \{0,4\}, \{1,5,7\}, \{6\}, \{8,10\}. \\
\end{align*}
For $A_4$ we split the second cycle:
$\mbox{0,9,6,3,8,5 } = \mbox{ 0,9,6,5 } \cup \mbox{ 6,3,8,5}$.
Suppose that it's 0,9,6,5 that is linked with 1,4,2,10. In this case
we also split that cycle:
$\mbox{1,4,2,10 } = \mbox{ 1,4,8,10 } \cup \mbox{ 2,4,8,10}$.
To get a $\mathcal{D}_4$ when pairing 0,9,6,5 -- 1,4,8,10 with $C_3$ we use
vertices $\{0\}$, $\{1\}$, $\{4,8,10\}$, and $\{5,6\}$ and
for 0,9,6,5 -- 2,4,8,10 with $C_3$, $\{0\}$, $\{2,3,7\}$, $\{4,8,10\}$,
and $\{5,6\}$.
On the other hand, if it's 6,3,8,5 that's linked with
1,4,2,10, we write
$\mbox{1,4,2,10 } = \mbox{ 0,4,1,10 } \cup \mbox{ 0,4,2,10}$.
Then when $C_3$ is paired with 6,3,8,5 -- 0,4,1,10, we have
a $\mathcal{D}_4$ using vertices $\{0,4,10\}$, $\{1\}$, $\{5,6\}$, and $\{8\}$
while if $C_3$ is paired with 6,3,8,5 -- 0,4,2,10 the vertices
are $\{0,4,10\}$, $\{2,7,9\}$, $\{5,6\}$, and $\{8\}$.
This completes the argument that $C_3$ is not
linked.
We will show that $A_8$ is not linked by pairing with the
remaining $C_i$'s. For $C_8$, the vertices would be $\{0\}$, $\{1,5,7,10\}$,
$\{2\}$, and $\{3,4,8\}$. The remaining cases involve splitting cycles.
For $C_2$ we write
$\mbox{2,4,8,3,7,5 } = \mbox{ 3,7,5,8 } \cup \mbox{ 2,4,8,5}$.
If it's 3,7,5,8 -- 0,9,6,1,10 that's linked, we use vertices
$\{0,1,10\}$, $\{2,9\}$, $\{3,8\}$, and $\{5,7\}$ and
if, instead, 2,4,8,5 -- 0,9,6,1,10 is linked, we have
$\{0,1,10\}$, $\{2,4,8\}$, $\{3,6\}$, and $\{5\}$.
For $C_4$, it's a cycle of $A_8$ that we rewrite:
$\mbox{0,5,7,1,10 } = \mbox{ 0,9,7,1,10 } \cup \mbox{ 0,9,7,5}$.
In either case, we use the same vertices: $\{0,7,9\}$, $\{1,5,6,10\}$,
$\{2\}$, and $\{3,4,8\}$.
For $C_6$,
$\mbox{0,4,2,5,7,9 } = \mbox{ 0,4,2,9 } \cup \mbox{ 2,5,7,9}$.
When 0,4,2,9 is linked with 1,6,3,8,10 the vertices are
$\{0,9\}$, $\{1,10\}$, $\{2,4\}$, and $\{3,8\}$
while if 2,5,7,9 links 1,6,3,8,10, we use
$\{1,10\}$, $\{2\}$, $\{3,8\}$, and $\{5,7\}$.
Continuing with $C_7$,
$\mbox{2,4,8,3,6,5 } = \mbox{ 3,6,5,8 } \cup \mbox{ 2,4,8,5}$.
The vertices for 3,6,5,8 -- 0,9,7,1,10 are $\{0,1,7,10\}$,
$\{2,9\}$, $\{3,8\}$, and $\{5\}$
and for 2,4,8,5 -- 0,9,7,1,10, use
$\{0,1,7,10\}$, $\{2,4,8\}$, $\{3,6,9\}$, and $\{5\}$.
Finally, in the case of $C_9$, write
$\mbox{0,4,2,5,6,9 } = \mbox{ 0,4,2,9 } \cup \mbox{ 2,5,6,9}$.
When 0,4,2,9 -- 1,7,3,8,10 is linked, the vertices are
$\{0\}$, $\{1,7,10\}$, $\{2,4\}$, and $\{3,8\}$
and for 2,5,6,9 -- 1,7,3,8,10 use $\{1,7,10\}$,
$\{2\}$, $\{3,8\}$, and $\{5\}$.
This completes the argument for $A_8$. By a symmetry of $G_1$,
$A_1$ is also not linked.
In other words, going forward, we will assume it is one of
$A_3$, $A_4$, $A_5$, $A_7$, or $A_9$ that is linked.
Next, we'll argue that $B_1$ is not linked by comparing
with the remaining $A_i$'s:
\begin{align*}
A_3 & \{0,9\}, \{1,4,7\}, \{2,5\}, \{8,10\} &
A_4 & \{0,9\}, \{1,4\}, \{2,10\}, \{5,8\} \\
A_5 & \{0,9\}, \{1,4,7\}, \{2,10\}, \{5,8\} &
A_7 & \{0,4,9\}, \{1,7\}, \{2,5,10\}, \{8\} \\
A_9 & \{0,4,9\}, \{1,7\}, \{2\}, \{5,8,10\}. \\
\end{align*}
By a symmetry of $G_1$ we also assume $B_3$ is not linked.
Now, by pairing with the remaining $A_i$'s, we show $B_5$ is not
linked:
\begin{align*}
A_3 & \{0,8,10\}, \{2,5,7\}, \{3\}, \{4\} &
A_5 & \{0,10\}, \{2,3\}, \{4,8\}, \{5,7\} \\
A_7 & \{0,4,8\}, \{2,5,7\}, \{3\}, \{10\} &
A_9 & \{0,4\}, \{2,3\}, \{5,7\}, \{8,10\}. \\
\end{align*}
For $A_4$ we employ several splits. First,
$\mbox{0,9,6,3,8,5 } = \mbox{ 0,9,6,5 } \cup \mbox{ 6,3,8,5}$.
In the case that 0,9,6,5 -- 1,4,2,10 is linked, write
$\mbox{1,4,2,10 } = \mbox{ 2,4,8,10 } \cup \mbox{ 1,4,8,10}$.
Pairing 0,9,6,5 -- 2,4,8,10 with $B_5$, the vertices are
$\{0\}$, $\{2\}$, $\{4,8,10\}$, and $\{5\}$.
If instead it's 0,9,6,5 -- 1,4,8,10 that's linked, we use
$\{0\}$, $\{1,7\}$, $\{4,8,10\}$, and $\{5\}$.
So we assume that 6,3,8,5 -- 1,4,2,10 is linked and
rewrite a $B_5$ cycle:
$\mbox{2,3,7,5 } = \mbox{ 2,3,6,5 } \cup \mbox{ 3,6,5,7}$.
In case 2,3,6,5 -- 0,4,8,10 is linked, we make a further
split:
$\mbox{1,4,2,10 } = \mbox{ 1,7,9,2,10 } \cup \mbox{ 1,4,9,2,10}$.
Thus, assuming 2,3,6,5 -- 0,4,8,10 and 1,7,9,2,10 -- 6,3,8,5 are
both linked, we have a $\mathcal{D}_4$ with vertices
$\{2\}$, $\{3,5,6\}$, $\{8\}$, and $\{10\}$.
If instead it's 2,3,6,5 -- 0,4,8,10 and 1,4,9,2,10 -- 6,3,8,5 that
are linked, use $\{2\}$, $\{3,5,6\}$, $\{4\}$, and $\{8\}$.
This leaves the case where 3,6,5,7 -- 0,4,8,10 is linked.
We must split a final time:
$\mbox{1,4,2,10 } = \mbox{ 0,4,1,10 } \cup \mbox{ 0,4,2,10}$.
If 3,6,5,7 -- 0,4,8,10 and 0,4,1,10 -- 6,3,8,5 are both linked,
the vertices are $\{0,4,10\}$, $\{1,7\}$, $\{3,5,6\}$, and $\{8\}$.
On the other hand, when 3,6,5,7 -- 0,4,8,10 and 0,4,2,10 -- 6,3,8,5
are linked, use $\{0,4,10\}$, $\{2,7,9\}$, $\{3,5,6\}$, and $\{8\}$.
This completes the argument for $B_5$, which we have shown is
not linked.
Next we turn to $B_9$ which we compare with the remaining $A_i$'s:
\begin{align*}
A_3 & \{0\}, \{1,7\}, \{2,4,5\}, \{3,8,10\} &
A_4 & \{0,5\}, \{1,10\}, \{2,4\}, \{3,8\} \\
A_7 & \{0,4\}, \{1,7,10\}, \{2,5\}, \{3,8\} &
A_9 & \{0,2,4\}, \{1,7,8,10\}, \{3\}, \{5\}. \\
\end{align*}
This leaves $A_5$ for which we rewrite:
$\mbox{0,9,6,3,2,10 } = \mbox{ 0,9,2,10 } \cup \mbox{ 2,3,6,9}$.
If 0,9,2,10 -- 1,4,8,5,7 is linked, then observe that, by a symmetry of
$G_1$, this is the same as $A_8$, which is not linked. Thus, we can assume
2,3,6,9 -- 1,4,8,5,7 is linked and rewrite:
$\mbox{1,4,8,5,7 } = \mbox{ 1,4,0,5,7 } \cup \mbox{ 0,4,8,5}$.
Pairing 2,3,6,9 -- 1,4,0,5,7 with $B_9$ gives a $\mathcal{D}_4$ on vertices
$\{0,4,5\}$, $\{1,7\}$, $\{2\}$, and $\{3\}$.
If it's 2,3,6,9 -- 0,4,8,5 that's linked, then, pairing with $B_9$,
use $\{0,4,5\}$, $\{2\}$, $\{3\}$, and $\{8\}$. This
completes the argument for $B_9$ and, by a symmetry of $G_1$, also
for $B_7$.
Recall that our goal is to argue, for a contradiction, that no
$B_i$ is linked. At this stage we are left only with $B_4$ and $B_6$
as pairs that could be linked.
Before providing the argument for the remaining
two $B_i$'s, we first eliminate a few more $A_i$'s, starting with
$A_4$, which we compare with the two remaining $B_i$'s:
\begin{align*}
B_4 & \{0,5,8\}, \{1,2,4\}, \{3\}, \{10\} &
B_6 & \{0,5,8\}, \{1,2,10\}, \{3\}, \{4\}. \\
\end{align*}
Next $A_5$, again by pairing with $B_4$ and $B_6$:
\begin{align*}
B_4 & \{0,10\}, \{1,4,7\}, \{2,3\}, \{5,8\} &
B_6 & \{0\}, \{1,7\}, \{2,3,10\}, \{4,8,5\}. \\
\end{align*}
Since $A_9$ agrees with $A_5$ under a symmetry of $G_1$,
this leaves only $A_3$ and $A_7$ as pairs that may yet be linked
among the $A_i$'s.
As a penultimate step, we show that $C_6$ is not linked
by comparing with these two remaining $A_i$'s:
\begin{align*}
A_3 & \{0,9\}, \{1\}, \{2,4,5,7\}, \{3,6,8,10\} &
A_7 & \{0,4,9\}, \{1,10\}, \{2,5,7\}, \{3,6,8\}. \\
\end{align*}
By a symmetry of $G_1$, we can also assume that $C_9$ is not linked.
This leaves only four $C_i$'s that may be linked: $C_2$, $C_4$, $C_7$,
and $C_8$.
Finally, compare $B_6$ with the remaining $C_i$'s:
\begin{align*}
C_4 & \{0,4,8\}, \{1,2,10\}, \{3,7\}, \{5\} &
C_8 & \{0,4,8\}, \{1,2,7,10\}, \{3\}, \{5\}. \\
\end{align*}
For $C_2$ we rewrite:
$\mbox{2,4,8,3,7,5 } = \mbox{ 2,3,8,4 } \cup \mbox{ 2,3,7,5}$.
If 2,3,8,4 -- 0,9,6,1,10 is linked, then pairing with $B_6$, we
have vertices $\{0\}$, $\{1,10\}$, $\{2,3\}$, and $\{4,8\}$.
On the other hand, pairing $B_6$ with 2,3,7,5 -- 0,9,6,1,10, we'll
have a $\mathcal{D}_4$ using the vertices $\{0\}$, $\{1,10\}$, $\{2,3,7\}$, and
$\{5\}$.
For $C_7$:
$\mbox{2,4,8,3,6,5 } = \mbox{ 2,3,8,4 } \cup \mbox{ 2,3,6,5}$.
Then 2,3,8,4 -- 0,9,7,1,10 with $B_6$ gives a $\mathcal{D}_4$ for
vertices $\{0\}$, $\{1,7,10\}$, $\{2,3\}$, and $\{4,8\}$.
On the other hand, 2,3,6,5 -- 0,9,7,1,10 pairs with $B_6$
using vertices $\{0\}$, $\{1,7,10\}$, $\{2,3\}$, and $\{5\}$.
This completes the argument for $B_6$.
By a symmetry of $G_1$ we see that $B_4$ is also not linked.
In this way, the assumption that there is no knotted cycle in
the given embedding of $G_1$ forces us to conclude that
no pair $B_1, \ldots, B_9$ is linked. However, these
correspond to the cycles of a $K_{3,3,1}$. As Sachs~\cite{S}
has shown, any embedding of $K_{3,3,1}$
must have a pair of cycles with odd linking number.
The contradiction shows that there can
be no such knotless embedding and $G_1$ is IK.
This completes the proof that $G_1$ is MMIK.
\subsubsection{$G_2$ is MMIK}
Again, we must argue that $G_2$ is IK and that every proper minor is nIK.
Up to isomorphism, there are 26 minors obtained by deleting or contracting
an edge of $G_2$. Each of these is 2-apex but for the graph $H_2$, for
which a knotless embedding is given in Figure~\ref{fig:H1H2}. This
shows that the proper minors of $G_2$ are nIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.25]{g2.eps}
\caption{The graph $G_2$.}
\label{fig:G2}
\end{figure}
We will use Lemma~\ref{lem:D4} to show that $G_2$ is IK. The argument is similar
to that for $G_1$ above. We begin by identifying four ways that
the Petersen family graph $P_9$, of order nine,
appears as a minor of graph $G_2$. Using the vertex labelling of
Figure~\ref{fig:G2}, on contracting edge $(0,4)$ and deleting vertex 8,
the resulting graph has $P_9$ as a subgraph. We denote the seven cycle pairs in this
minor $A_1, \ldots, A_7$ and list the corresponding links in $G_2$ in
Table~\ref{tab:G2An}.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$A_1$ & 0,4,10,5 -- 1,6,2,9,3,7 \\
$A_2$ & 0,2,6,10,4 -- 1,5,9,3,7 \\
$A_3$ & 0,2,9,5 -- 1,6,10,3,7 \\
$A_4$ & 0,5,1,7 -- 2,6,10,3,9 \\
$A_5$ & 0,4,10,3,7 -- 1,5,9,2,6 \\
$A_6$ & 1,5,10,6 -- 0,2,9,3,7 \\
$A_7$ & 3,9,5,10 -- 0,2,6,1,7 \\
\end{tabular}
\caption{Seven pairs of cycles in $G_2$ called $A_1, \ldots, A_7$.}
\label{tab:G2An}
\end{table}
Similarly, if we contract edge $(3,10)$ and delete vertex 7 in $G_2$,
the resulting graph has a $P_9$ subgraph. We'll call these seven cycle pairs
$B_1, \ldots, B_7$, as in Table~\ref{tab:G2Bn}.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$B_1$ & 0,2,6 -- 1,5,9,3,10,4,8 \\
$B_2$ & 0,2,8,4 -- 1,5,9,3,10,6 \\
$B_3$ & 0,2,9,5 -- 1,6,10,4,8 \\
$B_4$ & 0,4,10,6 -- 1,5,9,2,8 \\
$B_5$ & 0,5,1,6 -- 2,8,4,10,3,9 \\
$B_6$ & 1,6,2,8 -- 0,4,10,3,9,5 \\
$B_7$ & 2,6,10,3,9 -- 0,4,8,1,5 \\
\end{tabular}
\caption{Seven pairs of cycles in $G_2$ called $B_1, \ldots, B_7$.}
\label{tab:G2Bn}
\end{table}
Contracting edge $(4,10)$ and deleting vertex 6, we have the
seven cycle pairs of Table~\ref{tab:G2Cn}.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$C_1$ & 3,8,4,10 -- 0,2,9,5,1,7 \\
$C_2$ & 3,8,1,7 -- 0,2,9,5,10,4 \\
$C_3$ & 2,8,3,9 -- 0,4,10,5,1,7 \\
$C_4$ & 0,4,10,3,7 -- 1,5,9,2,8 \\
$C_5$ & 3,9,5,10 -- 0,2,8,1,7 \\
$C_6$ & 1,5,10,4,8 -- 0,2,9,3,7 \\
$C_7$ & 0,2,8,4 -- 1,5,9,3,7 \\
\end{tabular}
\caption{Seven pairs of cycles in $G_2$ called $C_1, \ldots, C_7$.}
\label{tab:G2Cn}
\end{table}
Finally, if we contract edge $(3,9)$ and delete vertex 7 in $G_2$,
the resulting graph has a $P_9$ subgraph. We'll call these seven cycle pairs
$D_1, \ldots, D_7$, as in Table~\ref{tab:G2Dn}.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$D_1$ & 2,8,3,9 -- 0,4,10,6,1,5 \\
$D_2$ & 0,2,9,5 -- 1,6,10,4,8 \\
$D_3$ & 0,2,8,4 -- 1,5,9,3,10,6 \\
$D_4$ & 1,5,9,3,8 -- 0,2,6,10,4 \\
$D_5$ & 1,6,2,8 -- 0,4,10,3,9,5 \\
$D_6$ & 3,8,4,10 -- 0,2,6,1,5 \\
$D_7$ & 2,6,10,3,9 -- 0,4,8,1,5 \\
\end{tabular}
\caption{Seven pairs of cycles in $G_2$ called $D_1, \ldots, D_7$.}
\label{tab:G2Dn}
\end{table}
We will need to introduce two more Petersen family graph minors later,
but let's begin by ruling out some of the pairs we already have.
As in our argument for $G_1$, we assume that we have a knotless
embedding of $G_2$ and step by step argue that various
cycle pairs are not linked (i.e. do not have odd linking number)
using Lemma~\ref{lem:D4}. Eventually, this will allow us
to deduce that all seven pairs $B_1, \ldots, B_7$ are
not linked. This is a contradiction since Sachs~\cite{S}
showed that in any embedding of $P_9$, there
must be a pair of cycles with odd linking number.
The contradiction shows that there is no such knotless embedding
and $G_2$ is IK.
We'll see that $A_5$ is not linked by showing it
results in a $\mathcal{D}_4$ with every pair $B_1, \ldots, B_7$.
Indeed, the vertices of each $\mathcal{D}_4$ are formed by identifying the
following sets of vertices:
\begin{align*}
B_1 & \{0\}, \{1,5,9\}, \{2,6\}, \{3,4,10\} &
B_2 & \{0,4\}, \{1,5,6,9\}, \{2\}, \{3,10\} \\
B_3 & \{0\}, \{1,6\}, \{2,5,9\}, \{4,10\} &
B_4 & \{0,4,10\}, \{1,2,5,9\}, \{3,8\}, \{6\} \\
B_5 & \{0\}, \{1,5,6\}, \{2,9\}, \{3,4,10\} &
B_7 & \{0,4\}, \{1,5\}, \{2,6,9\},\{3,10\}. \\
\end{align*}
For $B_6 = $ 1,6,2,8 -- 0,4,10,3,9,5, we first split one of the $A_5$
cycles:
$\mbox{0,4,10,3,7 } = \mbox{0,4,8,3,7 } \cup \mbox{ 3,8,4,10}$.
One of the two summands must link with the other $A_5$ cycle
$1,5,9,2,6$. If
$\mbox{lk}((\mbox{3,8,4,10}),(\mbox{1,5,9,2,6})) \neq 0$,
then, by contracting edges, we form a $\mathcal{D}_4$ with $A_5$
whose vertices are $\{1,2,6\}, \{3,4,10\}, \{5,9\}, \{8\}$.
On the other hand, if
$\mbox{lk}((\mbox{0,4,8,3,7}),(\mbox{1,5,9,2,6})) \neq 0$,
then we will split the $B_6$ cycle
$\mbox{0,4,10,3,9,5 } = \mbox{0,4,10,5 } \cup \mbox{ 3,9,5,10}$.
When
$\mbox{lk}((\mbox{1,6,2,8}),(\mbox{0,4,10,5})) \neq 0$,
we have a $\mathcal{D}_4$ with vertices $\{0,4\}, \{1,2,6\}, \{5\}, \{8\}$
and when
$\mbox{lk}((\mbox{1,6,2,8}),$ $(\mbox{3,9,5,10})) \neq 0$,
the $\mathcal{D}_4$ is on $\{1,2,6\}, \{3\}, \{5,9\}, \{8\}$.
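Decompositions like those above can be checked mechanically: a cycle splits correctly as a union of two cycles exactly when its edge set is the symmetric difference of the edge sets of the two summands. A minimal sketch of such a check (the helper names are ours, not from the text):

```python
def cycle_edges(verts):
    """Edge set of a cycle written as a cyclic vertex sequence."""
    n = len(verts)
    return {frozenset((verts[i], verts[(i + 1) % n])) for i in range(n)}

def is_split(whole, part1, part2):
    """True when `whole` is the mod 2 sum (symmetric difference of
    edge sets) of the cycles `part1` and `part2`."""
    return cycle_edges(whole) == cycle_edges(part1) ^ cycle_edges(part2)

# The split of the A_5 cycle 0,4,10,3,7 used against B_6:
assert is_split([0, 4, 10, 3, 7], [0, 4, 8, 3, 7], [3, 8, 4, 10])
# The split of the B_6 cycle 0,4,10,3,9,5:
assert is_split([0, 4, 10, 3, 9, 5], [0, 4, 10, 5], [3, 9, 5, 10])
```

This only verifies the bookkeeping behind "one of the two summands must link"; the linking conclusion itself comes from mod 2 additivity of the linking number.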
We have shown that, for each $B_i$, we must have a $\mathcal{D}_4$
with $A_5$. Sachs~\cite{S} showed that in every embedding
of $P_9$, there is a pair of linked cycles. Thus, in our
embedding of $G_2$, at least one $B_i$ is linked. If $A_5$
were also linked, that would result in a $\mathcal{D}_4$ with
a knotted cycle by Lemma~\ref{lem:D4}. This
contradicts our assumption that we have a knotless embedding
of $G_2$. Therefore, going forward, we can assume $A_5$ is not linked.
We next argue that $B_7$ is not linked by comparing with
$C_1, \ldots, C_7$. For $C_4$, $C_6$, and $C_7$, we
immediately form a $\mathcal{D}_4$ as follows
\begin{align*}
C_4 & \{0,4\}, \{1,5,8\}, \{2,9\}, \{3,10\} &
C_6 & \{0\}, \{1,4,5,8\}, \{2,3,9\}, \{10\} \\
C_7 & \{0,4,8\}, \{1,5\}, \{2\}, \{3,9\}. \\
\end{align*}
For the remaining pairs, we will split a cycle of the $C_i$.
For $C_1$, write
$\mbox{0,2,9,5,1,7 } = \mbox{0,2,7 } \cup \mbox{ 1,5,9,2,7}$.
In the first case,
where
$\mbox{lk}((\mbox{3,8,4,10}),$ $(\mbox{0,2,7})) \neq 0$,
the $\mathcal{D}_4$ is on $\{0\}, \{2\}, \{3,10\}, \{4,8\}$.
In the second case,
$\mbox{lk}((\mbox{3,8,4,10}),$ $(\mbox{1,5,9,2,7})) \neq 0$,
we have $\{1,5\}, \{2,9\}, \{3,10\}, \{4,8\}$.
For $C_2$, split
$\mbox{0,2,9,5,10,4 } = \mbox{0,2,9,5 } \cup \mbox{ 0,4,10,5}$
with, in the first case, a $\mathcal{D}_4$ on
$\{0,5\}, \{1,8\}, \{2,9\}, \{3\}$
and, in the second, on
$\{0,4,5\}, \{1,8\}, \{3\}, \{10\}$.
For $C_5$,
$\mbox{0,2,8,1,7 } = \mbox{0,2,7 } \cup \mbox{ 1,7,2,8}$,
the first case is
$\{0\}, \{2\}, \{3,9,10\}, \{5\}$
and the second has $\{1,8\}, \{2\}, \{3,9,10\}, \{5\}$.
Finally, for $C_3$
split
$\mbox{0,4,10,5,1,7 } = \mbox{1,6,7 } \cup \mbox{ 1,6,7,0,4,10,5}$.
In the first case, there's a $\mathcal{D}_4$ with vertices
$\{1\}, \{2,3,9\}, \{6\}, \{8\}$.
In the second case, we split the same cycle a second time:
$\mbox{1,6,7,0,4,10,5 } = \mbox{1,5,10,6 } \cup
\mbox{ 0,4,10,6,7}$.
In the first subcase, we have a $\mathcal{D}_4$ with vertices
$ \{1,5\}, \{2,3,9\},\{6,10\}, \{8\}$
and in the second subcase, $\{0,4\}, \{2,3,9\}, \{6,10\}, \{8\}$.
Going forward, we can assume that $B_7$ is not linked.
We next argue that $A_4$ is not linked by comparing with
$D_1, \ldots, D_7$. For four links, we immediately
give the vertices of the $\mathcal{D}_4$:
\begin{align*}
D_2 & \{0,5\}, \{1\}, \{2,9\}, \{6,10\} &
D_3 & \{0\}, \{1,5\}, \{2\}, \{3,6,9,10\} \\
D_4 & \{0\}, \{1,5\}, \{2,6,10\}, \{3,9\} &
D_5 & \{0,5\}, \{1\}, \{2,6\}, \{3,9,10\}. \\
\end{align*}
For $D_1$, split the second cycle of $A_4$:
$\mbox{2,6,10,3,9 } = \mbox{3,9,4,10 } \cup \mbox{ 2,9,4,10,6}$.
In the first case, the vertices of the $\mathcal{D}_4$ are
$\{0,1,5\}, \{2,7\}, \{3,9\}, \{4,10\}$ and
in the second, $\{0,1,5\}, \{2,9\}, \{3,7\}, \{4,6,10\}$.
For $D_6$, split the second cycle:
$\mbox{0,2,6,1,5 } = \mbox{0,2,9,5 } \cup \mbox{ 1,5,9,2,6}$.
In the first case, we have vertices
$\{0,5\}, \{1,8\}, \{2,9\}, \{3,10\}$ and in the second,
$\{0,4\}, \{1,5\}, \{2,6,9\}, \{3,10\}$.
Finally, $D_7$ is the same pair of cycles as $B_7$, which
we have assumed is not linked. Going forward, we will
assume $A_4$ is not linked.
We have already argued that we can assume $A_4$ and $A_5$ are
not linked. Our next steps will eliminate $A_2$ and $A_3$, leaving
only three $A_i$ that could be linked. For this we use another
Petersen family graph minor.
Using the labelling of Figure~\ref{fig:G2},
partition the vertices as $\{0,8,9,10\}$ and $\{2,3,4,5\}$.
Contract edges $(0,7)$ and $(2,6)$.
The resulting graph has a $K_{4,4}^-$
subgraph, where $(5,8)$ is the missing edge. As in
Table~\ref{tab:G2En}, we'll call the resulting
nine pairs of cycles $E_1, \ldots, E_9$.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$E_1$ & 2,8,4,9 -- 0,5,10,3,7 \\
$E_2$ & 0,4,10,5 -- 2,8,3,9 \\
$E_3$ & 0,2,6,10,5 -- 3,8,4,9 \\
$E_4$ & 0,2,4,8 -- 3,9,5,10 \\
$E_5$ & 4,9,5,10 -- 0,2,8,3,7 \\
$E_6$ & 0,4,8,3,7 -- 2,6,10,5,9 \\
$E_7$ & 0,5,9,3,7 -- 2,6,10,4,8 \\
$E_8$ & 0,4,9,5 -- 2,6,10,3,8 \\
$E_9$ & 0,2,9,5 -- 3,8,4,10 \\
\end{tabular}
\caption{Nine pairs of cycles in $G_2$ called $E_1, \ldots, E_9$.}
\label{tab:G2En}
\end{table}
We will use $E_1, \ldots, E_9$ to show that $A_2$ may be assumed
unlinked.
Except for $E_1$, we list the vertices of the $\mathcal{D}_4$:
\begin{align*}
E_2 & \{0,4,10\}, \{2\}, \{3,9\}, \{5\} &
E_3 & \{0,2,6,10\}, \{3,9\}, \{4\}, \{5\} \\
E_4 & \{0,2,4\}, \{1,8\}, \{3,5,9\}, \{10\} &
E_5 & \{0,2\}, \{3,7\}, \{4,10\}, \{5,9\} \\
E_6 & \{0,4\}, \{2,6,10\}, \{3,7\}, \{5,9\} &
E_7 & \{0\}, \{1,8\}, \{2,4,6,10\}, \{3,5,7,9\} \\
E_8 & \{0,4\}, \{2,6,10\}, \{3\}, \{5,9\} &
E_9 & \{0,2\}, \{3\}, \{4,10\}, \{5,9\}. \\
\end{align*}
The argument for $E_1$ is a little involved.
Let's split the second cycle
$\mbox{0,5,10,3,7 } = \mbox{0,5,10,6,1,7 } \cup \mbox{ 1,6,10,3,7}$.
\noindent%
Case 1: Suppose that
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{0,5,10,6,1,7})) \neq 0$.
Next split the first cycle of $A_2$:
$\mbox{0,2,6,10,4 } = \mbox{0,2,6 } \cup \mbox{ 0,4,10,6}$.
Case 1a):
Suppose that
$\mbox{lk}((\mbox{0,2,6}),(\mbox{1,5,9,3,7})) \neq 0$.
We split the second $E_1$ cycle again:
$\mbox{0,5,10,6,1,7 } = \mbox{0,5,10,6 } \cup \mbox{ 0,6,1,7}$.
In the first case, where
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{0,5,10,6})) \neq 0$,
we have a $\mathcal{D}_4$ with vertices
$\{0,6\}, \{2\}, \{5\}, \{9\}$.
In the second case,
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{0,6,1,7})) \neq 0$,
the $\mathcal{D}_4$ vertices are $\{0,6\}, \{1,7\},$ $ \{2\}, \{9\}$.
Case 1b):
Suppose that
$\mbox{lk}((\mbox{0,4,10,6}),(\mbox{1,5,9,3,7})) \neq 0$.
Split the second $E_1$ cycle again:
$\mbox{0,5,10,6,1,7 } = \mbox{0,5,10,6 } \cup \mbox{ 0,6,1,7}$.
In the first case, where
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{0,5,10,6})) \neq 0$,
the $\mathcal{D}_4$ vertices are $\{0,6,10\}, \{4\}, \{5\}, \{9\}$.
In the second case, where
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{0,6,1,7})) \neq 0$,
we have vertices $\{0,6\}, \{1,7\},$ $ \{4\}, \{9\}$.
\noindent%
Case 2:
Suppose that
$\mbox{lk}((\mbox{2,8,4,9}),(\mbox{1,6,10,3,7})) \neq 0$.
We split the first cycle of $A_2$:
$\mbox{0,2,6,10,4 } = \mbox{0,2,6 } \cup \mbox{ 0,4,10,6}$.
In the first case, the $\mathcal{D}_4$ has vertices
$\{1,3,7\}, \{2\}, \{6\}, \{9\}$ and in the second
$\{1,3,7\}, \{4\}, \{6,10\}, \{9\}$.
This completes the argument for $A_2$, which we henceforth assume
is not linked.
Next we again use $E_1, \ldots, E_9$ to see that $A_3$ is also
not linked. Except for $E_8$ and $E_9$ we immediately have
a $\mathcal{D}_4$:
\begin{align*}
E_1 & \{0,5\}, \{1,8\}, \{2,9\}, \{3,7,10\} &
E_2 & \{0,5\}, \{2,9\}, \{3\}, \{10\} \\
E_3 & \{0,2,5\}, \{3\}, \{6,10\}, \{9\} &
E_4 & \{0,2\}, \{1,8\}, \{3,10\}, \{5,9\} \\
E_5 & \{0,2\}, \{3,7\}, \{5,9\}, \{10\} &
E_6 & \{0\}, \{2,5,9\}, \{3,7\}, \{6,10\} \\
E_7 & \{0,5,9\}, \{2\}, \{3,7\}, \{6,10\} \\
\end{align*}
For $E_8$, split the first cycle
$\mbox{0,4,9,5 } = \mbox{0,5,1,7 } \cup \mbox{ 0,4,9,5,1,7}$.
In the first case, we have a $\mathcal{D}_4$ with vertices
$\{0,5\}, \{1,7\}, \{2\}, \{3,6,10\}$.
In the second case, we further split the first cycle of $A_3$:
$\mbox{0,2,9,5 } = \mbox{2,8,4,9 } \cup \mbox{ 0,2,8,4,9,5}$
leading to two subcases. In the first subcase the $\mathcal{D}_4$ is
$\{1,7\}, \{2,8\}, \{3,6,10\}, \{4,9\}$ and in the second
$\{0,4,5,9\}, \{1,7\}, \{2,8\}, \{3,6,10\}$.
For $E_9$ split the first cycle
$\mbox{0,2,9,5 } = \mbox{0,2,6,1,5 } \cup \mbox{ 2,6,1,5,9}$
with a $\mathcal{D}_4$ on first
$\{0,2,5\}, \{1,6\}, \{3,10\}, \{4,9\}$
and then
$\{0,4\}, \{1,6\}, \{2,5,9\}, \{3,10\}$.
We will not need $E_1, \ldots, E_9$ in the remainder of the argument.
At this stage, we can assume that it is one of $A_1$, $A_6$, and
$A_7$ that is linked.
Our next step is to argue that we can assume
$B_5$ is not linked by comparing with the three remaining $A_i$'s.
For the first two, we immediately recognize a $\mathcal{D}_4$:
\begin{align*}
A_1 & \{0,5\}, \{1,6\}, \{2,3,9\}, \{4,10\} &
A_6 & \{0\}, \{1,5,6\}, \{2,3,9\}, \{10\} \\
\end{align*}
For $A_7$, split the second cycle:
$\mbox{0,2,6,1,7 } = \mbox{0,2,7 } \cup \mbox{ 2,6,1,7}$
so that the $\mathcal{D}_4$ has vertices
$\{0\}, \{2\}, \{3,9,10\}, \{5\}$ in the first case
and then $\{1,6\}, \{2\},$ $ \{3,9,10\}, \{5\}$.
Going forward, we assume that $B_5$ is not linked.
Next we will eliminate $B_2$ and $B_3$. For $B_3$, we have a $\mathcal{D}_4$
with each of the remaining $A_i$'s:
\begin{align*}
A_1 & \{0,5\}, \{1,6\}, \{2,9\}, \{4,10\} &
A_6 & \{0,2,9\}, \{1,6,10\}, \{3,8\}, \{5\} \\
A_7 & \{0,2\}, \{1,6\}, \{5,9\}, \{10\}
\end{align*}
For $B_2$, notice first that, as we are assuming $A_3$ is not linked,
by a symmetry of $G_2$, we can assume that
0,2,8,4 -- 1,6,10,3,7 is also not linked.
Again, since $B_3$ is unlinked, the symmetric pair
0,2,8,4 -- 1,5,9,3,7 is also not linked.
Since $\mbox{1,5,9,3,10,6 } = \mbox{1,5,9,3,7 } \cup \mbox{ 1,6,10,3,7}$,
we conclude that $B_2$ is not linked. Having eliminated four
of the $B_i$'s, going forward, we can assume that it is one
of $B_1$, $B_4$, and $B_6$ that is linked. Recall that our ultimate
goal is to argue that none of the $B_i$ are linked and thereby force
a contradiction.
Our next step is to argue that $C_1$ is not linked by
comparing with the remaining three $A_i$'s.
For $A_1$ split the second cycle of $C_1$
$\mbox{0,2,9,5,1,7 } = \mbox{0,2,9,5 } \cup \mbox{ 0,5,1,7}$,
yielding first a $\mathcal{D}_4$ on
$\{0,5\}, \{2,9\}, \{3\}, \{4,10\}$ and then on
$\{0,5\}, \{1,7\}, \{3\}, \{4,10\}$.
For $A_6$, we have $\mathcal{D}_4$ with vertices
$\{0,2,7,9\}, \{1,5\},$ $ \{3\}, \{10\}$.
For $A_7$, split the second cycle
$\mbox{0,2,6,1,7 } = \mbox{0,2,8,1,7 } \cup \mbox{ 1,6,2,8}$.
In the first case, we have a $\mathcal{D}_4$ with vertices
$\{0,1,2,7\}, \{3,10\}, \{5,9\}, \{8\}$. In
the second case, split the second cycle of $C_1$
$\mbox{0,2,9,5,1,7 } = \mbox{0,2,9,5 } \cup \mbox{ 0,5,1,7}$,
resulting in a $\mathcal{D}_4$ either on
$\{2\}, \{3,10\}, \{5,9\}, \{8\}$ or
$\{1\}, \{3,10\}, \{5\}, \{8\}$.
Now we can eliminate $A_7$.
Using a symmetry of $G_2$, we will instead argue that the
pair $A_7'$ = 3,8,4,10 -- 0,2,6,1,7 is not linked.
Note that this resembles $C_1$, which we just proved unlinked.
Since $\mbox{0,2,6,1,7 } \cup \mbox{ 0,2,9,5,1,7 } = \mbox{1,5,9,2,6}$
it will be enough to show that 3,8,4,10 -- 1,5,9,2,6
is not linked by comparing with the remaining $A_i$'s:
\begin{align*}
A_1 & \{1,2,6,9\}, \{3\}, \{4,10\}, \{5\} &
A_6 & \{1,5,6\}, \{2,9\}, \{3\}, \{10\} \\
A_7 & \{0,4\}, \{1,2,6\}, \{3,10\}, \{5,9\} \\
\end{align*}
Thus we can assume that it is $A_1$ or $A_6$ that is the linked
pair in our embedding of $G_2$.
As for the $B_i$'s, only three candidates remain. We next eliminate
$B_1$ by comparing with the remaining two $A_i$'s.
For $A_1$, split the second cycle of $B_1$:
$\mbox{1,5,9,3,10,4,8 } = \mbox{1,5,10,4,8 } \cup \mbox{ 3,9,5,10}$
giving a $\mathcal{D}_4$ on either
$\{0\}, \{1\}, \{2,6\}, \{4,5,10\}$
or $\{0\}, \{2,6\}, \{3,9\}, \{5,10\}$.
For $A_6$, using the same split of the second cycle of $B_1$
the vertices are either
$\{0,2\}, \{1,5,10\},$ $ \{4,9\}, \{6\}$ or
$\{0,2\}, \{3,9\}, \{5,10\}, \{6\}$.
To proceed, we will argue that $C_5$ is unlinked. In fact, we will
show that it is $D_6 = C_5'$, the result of applying the symmetry
of $G_2$, that is not linked by comparing with the two remaining
$A_i$'s:
\begin{align*}
A_1& \{0,5\}, \{1,2,6\}, \{3\}, \{4,10\} &
A_6& \{0,2\}, \{1,5,6\}, \{3\}, \{10\}
\end{align*}
We can now eliminate $B_6$ by comparing with the $C_i$'s.
This will leave only $B_4$, which, therefore, must be linked.
We have already argued that $C_1$ and $C_5$ are not linked.
Also, by a symmetry of $G_2$, since $B_7$ is not linked, $C_4$ is also
not linked. For $C_3$ and $C_7$, we immediately see a $\mathcal{D}_4$:
\begin{align*}
C_3 & \{0,4,5,10\}, \{1\}, \{2,6\}, \{3,9\} &
C_7 & \{0,4\}, \{1\}, \{2,8\}, \{3,9,5\}.
\end{align*}
For $C_2$, split the second cycle:
$\mbox{0,2,9,5,10,4 } = \mbox{0,2,6,10,4 } \cup \mbox{ 2,6,10,5,9}$.
In the first case, we have a $\mathcal{D}_4$ on
$\{0,4,10\}, \{1,8\}, \{2,6\}, \{3\}$.
In the second case, split the second cycle of $B_6$:
$\mbox{0,4,10,3,9,5 } = \mbox{0,4,10,3,7 } \cup \mbox{ 0,5,9,3,7}$
giving a $\mathcal{D}_4$ with vertices
$\{1,8\}, \{2,6\}, \{3,7\}, \{10\}$ in the first subcase
and $\{1,8\}, \{2,6\}, \{3,7\}, \{5,9\}$ in the second.
For $C_6$, we split the second cycle of $B_6$:
$\mbox{0,4,10,3,9,5 } = \mbox{0,4,10,5} \cup \mbox{ 3,9,5,10}$
giving, first, a $\mathcal{D}_4$ on $\{0\}, \{1,8\}, \{2\}, \{4,5,10\}$
and, second, a $\mathcal{D}_4$ on $\{1,8\}, \{2\},$ $ \{3,9\}, \{5,10\}$.
Since $B_1, \ldots, B_7$ represent the pairs of cycles
in an embedding of $P_9$, we know by \cite{S} that at least
one pair must have odd linking number. We have just argued
that all but $B_4$ are not linked, so we can conclude that
it is $B_4$ that has odd linking number in our embedding
of $G_2$. We will now derive a contradiction by using a final
Petersen family graph minor.
\begin{table}[htb]
\centering
\begin{tabular}{c|l}
$F_1$ & 0,5,1,7 -- 2,6,10,3,8 \\
$F_2$ & 0,4,8,1,5 -- 2,6,10,3,7 \\
$F_3$ & 0,2,8,4 -- 1,6,10,3,7 \\
$F_4$ & 0,2,7 -- 1,6,10,3,8 \\
$F_5$ & 1,5,10,6 -- 2,7,3,8 \\
$F_6$ & 1,7,3,8 -- 0,2,6,10,5 \\
$F_7$ & 1,6,2,8 -- 0,5,10,3,7 \\
$F_8$ & 1,6,2,7 -- 0,4,8,3,10,5 \\
\end{tabular}
\caption{Eight pairs of cycles in $G_2$ called $F_1, \ldots, F_8$.}
\label{tab:G2Fn}
\end{table}
Our last set of cycles comes from a $P_8$ minor. This is the
Petersen family graph on eight vertices that is not $K_{4,4}^-$.
Using the labelling of $G_2$ in Figure~\ref{fig:G2},
contracting edges $(0,4)$ and $(0,5)$ and deleting vertex 9
results in a graph with a $P_8$ subgraph. This graph has
eight pairs of cycles shown in Table~\ref{tab:G2Fn}.
Using $B_4$, we will derive a $\mathcal{D}_4$ with each $F_i$.
For the first five $F_i$ we immediately find a $\mathcal{D}_4$:
\begin{align*}
F_1 & \{0\}, \{1,2,6\}, \{3\}, \{4,10\} &
F_2 & \{0,2\}, \{1,5,6\}, \{3\}, \{10\} \\
F_3 & \{0,5\}, \{1,2,6\}, \{3\}, \{4,10\} &
F_4 & \{0,2\}, \{1,5,6\}, \{3\}, \{10\} \\
F_5 & \{0,5\}, \{1,2,6\}, \{3\}, \{4,10\}.
\end{align*}
Since $B_4$ is linked, we deduce that the pair
$B_4'$ = 2,7,3,9 -- 0,4,8,1,5, obtained by the
symmetry of $G_2$, is also linked. Using $B_4'$
we have a $\mathcal{D}_4$ with the remaining cycles of $F$:
\begin{align*}
F_6 & \{0,5\}, \{1,8\}, \{2\}, \{3,7\} &
F_7 & \{0,5\}, \{1,8\}, \{2\}, \{3,7\} \\
F_8 & \{0,4,5,8\}, \{1\}, \{2,7\}, \{3\}.
\end{align*}
We have shown that there is a $\mathcal{D}_4$ with each $F_i$ using
the pairs $B_4$ or $B_4'$, both of which must be linked.
Since the $F_1, \ldots, F_8$
represent the cycle pairs of a $P_8$ minor, at least one
of them has odd linking number~\cite{S}. By
Lemma~\ref{lem:D4}, our embedding of $G_2$ has a
knotted cycle. This contradicts our assumption
that we were working with a knotless embedding.
The contradiction shows that there is no such knotless
embedding and $G_2$ is IK. This completes the proof
that $G_2$ is MMIK.
\subsubsection{$G_3$ is MMIK}
The graph $G_3$ is obtained from $G_2$ by a single $\nabla\mathrm{Y}$ move.
Specifically, using the edge list for $G_2$ given above:
$$[(0, 2), (0, 4), (0, 5), (0, 6), (0, 7), (1, 5), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8),$$
$$(2, 9), (3, 7), (3, 8), (3, 9), (3, 10), (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10)],$$
make the $\nabla\mathrm{Y}$ move on the triangle $(0,2,6)$.
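As a sanity check, the move can be carried out mechanically. The following minimal Python sketch (not taken from the text; the helper name is ours) applies the $\nabla\mathrm{Y}$ move with the new vertex labelled 11, matching the edge list for $G_3$ recorded in subsection~\ref{sec:1122graphs}:

```python
# G_2 edge list, as given above.
G2 = [(0, 2), (0, 4), (0, 5), (0, 6), (0, 7), (1, 5), (1, 6), (1, 7), (1, 8),
      (2, 6), (2, 7), (2, 8), (2, 9), (3, 7), (3, 8), (3, 9), (3, 10),
      (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10)]

def tri_to_y(edges, a, b, c, v):
    """Nabla-Y move: delete the triangle a,b,c and join a new vertex v to a, b, c."""
    tri = {frozenset(p) for p in ((a, b), (b, c), (a, c))}
    E = {frozenset(e) for e in edges}
    assert tri <= E, "a, b, c must span a triangle"
    return (E - tri) | {frozenset((v, x)) for x in (a, b, c)}

G3 = tri_to_y(G2, 0, 2, 6, 11)
assert len(G3) == 23  # cousins have the same size
```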
We have just shown that $G_2$ is IK, so Lemma~\ref{lem:tyyt}
implies $G_3$ is also. It remains to show that no proper minor is IK.
Up to isomorphism, there are 26 minors obtained by deleting or contracting
an edge of $G_3$. Each of these is 2-apex except for the graph $H_2$, for
which a knotless embedding is given in Figure~\ref{fig:H1H2}. This
shows that the proper minors of $G_3$ are nIK. This completes the argument
that $G_3$ is MMIK.
Together, the arguments of the last three subsubsections constitute a
proof of Theorem~\ref{thm:TheThree}.
\subsection{Expansions of nIK Heawood family graphs}
\label{sec:Heawood}
We will use the notation of \cite{HNTY}
to describe the twenty graphs in the Heawood family,
which we also recall in the appendix.
Kohara and Suzuki~\cite{KS} showed that 14
graphs in this family are MMIK. The remaining six,
$N_9$, $N_{10}$, $N_{11}$, $N'_{10}$, $N'_{11}$, and $N'_{12}$, are nIK~\cite{GMN,HNTY}. The
graph $N_9$ is called $E_9$ in \cite{GMN}.
In this subsection
we argue that no size 23 expansion of these six graphs is MMIK.
We begin by reviewing some terminology for graph families, see~\cite{GMN}.
The {\em family} of graph $G$ is
the set of graphs related to $G$ by a sequence of zero or
more $\nabla\mathrm{Y}$ and $\mathrm{Y}\nabla$
moves, see Figure~\ref{fig:TY}. The graphs
in $G$'s family are {\em cousins} of $G$. We do not allow $\mathrm{Y}\nabla$
moves that would result in doubled edges. Note that all cousins of $G$
have the same size. If a $\nabla\mathrm{Y}$ move on $G$ results in graph $H$, we
say $H$ is a {\em child} of $G$ and $G$ is a {\em parent} of $H$.
The set of graphs that can be obtained from $G$ by a sequence of $\nabla\mathrm{Y}$
moves is the set of $G$'s {\em descendants}. Similarly, the set of graphs
that can be obtained from $G$ by a sequence of $\mathrm{Y}\nabla$ moves
is the set of $G$'s {\em ancestors}. By Lemma~\ref{lem:tyyt},
if $G$ is IK, then every descendant of $G$ is also IK.
Conversely, if $G$ is nIK, every ancestor of $G$ is also nIK.
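The doubled-edge restriction on $\mathrm{Y}\nabla$ moves is easy to make concrete. A minimal Python sketch (the helper name is ours; `edges` is any collection of unordered pairs):

```python
def y_to_tri(edges, v):
    """Y-nabla move at a degree 3 vertex v; refuses to create doubled edges."""
    E = {frozenset(e) for e in edges}
    nbrs = sorted({x for e in E if v in e for x in e if x != v})
    assert len(nbrs) == 3, "v must have degree 3"
    a, b, c = nbrs
    tri = {frozenset(p) for p in ((a, b), (b, c), (a, c))}
    assert not (tri & E), "move would create a doubled edge"
    return (E - {frozenset((v, x)) for x in nbrs}) | tri

# Example: the claw with center 3 becomes a triangle on 0, 1, 2.
assert y_to_tri([(3, 0), (3, 1), (3, 2)], 3) == {frozenset(p) for p in ((0, 1), (1, 2), (0, 2))}
```

Note that the second assertion fails exactly when $v$ lies on a $3$-cycle, the disallowed case.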
The Heawood family graphs are the cousins of the Heawood graph,
which is denoted $C_{14}$ in \cite{HNTY}.
All have size 21. We can expand a graph to one
of larger size either by adding an edge or by splitting a vertex.
In {\em splitting a vertex} we replace a graph $G$ with
a graph $G'$ so that the order increases by one: $|G'| = |G|+1$.
This means we replace a vertex $v$ of $G$ with two vertices
$v_1$ and $v_2$ in $G'$ and identify the
remaining vertices of $G'$ with those of
$V(G) \setminus \{v\}$.
As for edges, $E(G')$ includes the
edge $v_1v_2$. In addition, we require that
the union of the neighborhoods of $v_1$ and $v_2$
in $G'$ otherwise agrees with the neighborhood of $v$:
$N(v) = (N(v_1) \cup N(v_2)) \setminus \{v_1,v_2 \}$. In other words,
$G$ is the result of contracting $v_1v_2$ in $G'$ where double
edges are suppressed: $G = G'/v_1v_2$.
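The definitions above translate directly into a minimal Python sketch (the helper names are ours; vertex labels are arbitrary hashable values), including the round trip $G = G'/v_1v_2$:

```python
def split_vertex(edges, v, nbrs1, nbrs2, v1, v2):
    """Split v into adjacent vertices v1 and v2, whose neighborhoods cover N(v)."""
    new = {frozenset(e) for e in edges if v not in e}
    new |= {frozenset((v1, u)) for u in nbrs1}
    new |= {frozenset((v2, u)) for u in nbrs2}
    new.add(frozenset((v1, v2)))
    return new

def contract(edges, x, y, merged):
    """Contract the edge xy to the single vertex `merged`, suppressing doubled edges."""
    out = set()
    for e in edges:
        a, b = tuple(e)
        a = merged if a in (x, y) else a
        b = merged if b in (x, y) else b
        if a != b:                      # the contracted edge becomes a loop; drop it
            out.add(frozenset((a, b)))
    return out

# Round trip on K_4: split vertex 0, then contract the new edge to recover K_4.
K4 = {frozenset(e) for e in ((0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3))}
Gp = split_vertex(K4, 0, [1, 2], [3], "a", "b")
assert contract(Gp, "a", "b", 0) == K4      # G = G'/v_1v_2
```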
Our goal
is to argue that there is no size 23 MMIK graph that
is an expansion of one of the six nIK Heawood family graphs, $N_9$, $N_{10}$, $N_{11}$, $N'_{10}$, $N'_{11}$, and $N'_{12}$.
As a first step, we will argue that, if there were
such a size 23 MMIK expansion, it would also be an
expansion of one of 29 nIK graphs of size 22.
Given a graph $G$, we will use $G+e$ to denote a graph obtained by
adding an edge $e \not\in E(G)$.
As we will show, if $G$ is a Heawood family graph, then $G+e$ will fall
in one of three families that we will call the $H_8+e$ family,
the $E_9+e$ family, and the $H_9+e$ family. The $E_9+e$ family is
discussed in \cite{GMN} where it is shown to consist of 110 graphs, all IK.
The $H_8 + e$ graph is formed by adding an edge to the
Heawood family graph $H_8$ between two of its degree 5 vertices. The $H_8+e$ family
consists of 125 graphs; as we will now argue, 29 of them are nIK and the
remaining 96 are IK. For this, we leverage
graphs in the Heawood family. In addition to $H_8+e$,
the family includes an $F_9+e$ graph formed by
adding an edge between the two degree 3 vertices of $F_9$.
Since $H_8$ and $F_9$ are both IK~\cite{KS},
the corresponding
graphs with an edge added are as well. By
Lemma~\ref{lem:tyyt}, $H_8+e$, $F_9+e$ and all their
descendants are IK. These are the 96 IK graphs in the family.
The remaining 29 graphs are all ancestors of six
graphs that we describe below. Once we establish that
these six are nIK, then Lemma~\ref{lem:tyyt} ensures
that all 29 are nIK.
We will denote the six graphs by
$T_i$, $i = 1,\ldots,6$. Five of them have a degree
two vertex.
After contracting an edge at the degree 2 vertex, we
recover one of the nIK Heawood family graphs, $N_{11}$ or $N'_{12}$. It follows that these five graphs
are also nIK.
The two graphs that become $N_{11}$ after
contracting an edge have the following graph6 notation~\cite{sage}:
$T_1$: \verb'KSrb`OTO?a`S' $T_2$: \verb'KOtA`_LWCMSS'
The three graphs that contract to $N'_{12}$ are:
$T_3$: \verb'LSb`@OLOASASCS' $T_4$: \verb'LSrbP?CO?dAIAW' $T_5$: \verb'L?tBP_SODGOS_T'
The five graphs we have described so far along with
their ancestors account for 26 of the nIK graphs in the
$H_8+e$ family. The remaining three are ancestors of
$T_6$: \verb'KSb``OMSQSAK'
Figure~\ref{fig:T6} shows a knotless embedding of $T_6$.
By Lemma~\ref{lem:tyyt}, its two ancestors
are also nIK and this completes the
count of 29 nIK graphs in the $H_8+e$ family.
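The graph6 strings above can be decoded without sage. The following minimal decoder is a sketch of the format for headerless strings of order at most 62 only (the function name is ours); for instance, it confirms that $T_6$ has order 12 and size 22, as every graph in the $H_8+e$ family must.

```python
def graph6_to_edges(s):
    """Decode a headerless graph6 string of order at most 62 into (order, edges)."""
    data = [ord(ch) - 63 for ch in s]
    n, rest = data[0], data[1:]
    # Six bits per character, most significant bit first.
    bits = [(d >> k) & 1 for d in rest for k in range(5, -1, -1)]
    edges, i = set(), 0
    for col in range(1, n):          # upper triangle, column by column
        for row in range(col):
            if bits[i]:
                edges.add((row, col))
            i += 1
    return n, edges

# Sanity check on a known string: "C~" is K_4.
assert graph6_to_edges("C~") == (4, {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)})
```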
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{t6.eps}
\caption{A knotless embedding of the $T_6$ graph.}
\label{fig:T6}
\end{figure}
The graph $H_9+e$ is formed by adding an edge to $H_9$
between the two degree 3 vertices. There are five graphs
in the $H_{9}+e$ family, four of which are $H_9+e$ and its
descendants. Since $H_9$ is IK~\cite{KS}, by
Lemma~\ref{lem:tyyt}, these four graphs
are all IK. The remaining graph in the family is
the MMIK graph denoted $G_{S}$ in \cite{GMN} and shown in Figure~\ref{fig:HS}.
Although the graph is credited to Schwartz in that
paper, it was a joint discovery of Schwartz
and Barylskiy~\cite{N}.
Thus, all five graphs in the $H_9+e$ family are IK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{gs.eps}
\caption{The MMIK graph $G_S$.}
\label{fig:HS}
\end{figure}
We remark that among these three families, the only instances of a graph with a degree 2 vertex occur in the family $H_8+e$, which also contains no MMIK graphs. This observation suggests the following question.
\begin{question}
\label{ques:d3MMIK}
If $G$ has minimal degree $\delta(G) < 3$,
is it true that $G$'s ancestors and descendants include no MMIK graphs?
\end{question}
Initially, we suspected that such a $G$ has no MMIK cousins at all.
However, we discovered that the MMIK graph of size 26, described in
Section~\ref{sec:ord10} below, includes graphs of minimal degree two
among its cousins.
Although we have not completely resolved the question, we have two partial results.
\begin{theorem} If $\delta(G) < 3$ and $H$ is a descendant of $G$, then $H$ is not MMIK.
\end{theorem}
\begin{proof} Since $\delta(G)$ is non-increasing under
the $\nabla\mathrm{Y}$ move, $\delta(H) \leq \delta(G) < 3$ and
$H$ is not MMIK by Lemma~\ref{lem:delta3}.
\end{proof}
As defined in \cite{GMN} a graph has a $\bar{Y}$ if
there is a degree 3 vertex that is also part of a $3$-cycle.
A $\mathrm{Y}\nabla$ move at such a vertex would result in doubled
edges.
\begin{lemma}\label{lem:ybar} A graph with a $\bar{Y}$ is not MMIK.
\end{lemma}
\begin{proof} Let $G$ have a vertex $v$ with
$N(v) = \{a,b,c\}$ and $ab \in E(G)$. We can assume
$G$ is IK. Make a $\nabla\mathrm{Y}$ move on triangle $v,a,b$ to
obtain the graph $H$. By Lemma~\ref{lem:tyyt}
$H$ is IK, as is the homeomorphic graph $H'$ obtained
by contracting an edge at the degree 2 vertex $c$.
But $H' = G - ab$ is obtained by deleting an
edge $ab$ from $G$. Since $G$ has a proper subgraph
$H'$ that is IK, $G$ is not MMIK.
\end{proof}
\begin{theorem} If $G$ has a child $H$ with $\delta(H) < 3$,
then $G$ is not MMIK.
\end{theorem}
\begin{proof} By Lemma~\ref{lem:delta3}, we can assume
$G$ is IK with $\delta(G) = 3$. Say $H$ is obtained from $G$ by a
$\nabla\mathrm{Y}$ move on the triangle $abc$. The move decreases the
degree of each of $a$, $b$, and $c$ by one, so $\delta(H) < 3$ means that
one of these three vertices has degree three in $G$. As that vertex lies
on a $3$-cycle, $G$ has a $\bar{Y}$ and is not MMIK by the previous lemma.
\end{proof}
Suppose $G$ is a size 23 MMIK expansion of one
of the six nIK Heawood family graphs.
We will argue that $G$ must be an expansion
of one of the graphs in the three families, $H_8+e$,
$E_9+e$, and $H_9+e$. However, as a MMIK graph,
$G$ can have no size 22 IK minor. Therefore, $G$ must
be an expansion of one of the 29 nIK graphs in the
$H_8+e$ family.
There are two ways to form a size 22 expansion of one of
the six nIK graphs, either add an edge or split a vertex.
We now show that if $H$ is in the Heawood family, then
$H+e$ is in one of the three families,
$H_8+e$, $E_9+e$, and $H_9 + e$. We begin with a
general observation about how adding an edge to
a graph $G$ interacts with the graph's family.
\begin{theorem}
If $G$ is a parent of $H$, then every $G+e$ has a cousin that is an $H+e$.
\end{theorem}
\begin{proof}
Let $H$ be obtained by a $\nabla\mathrm{Y}$ move that replaces the triangle $abc$ in $G$ with three edges on the new vertex $v$. That is,
$V(H) = V(G) \cup \{v\}$.
Form $G+e$ by adding the edge $e = xy$. Then $x,y \in V(H)$, and the graph $H+e$ is a
cousin of $G+e$, obtained by a $\nabla\mathrm{Y}$ move on the triangle $abc$.
\end{proof}
\begin{corollary}
\label{cor:Gpe}
If $G$ is an ancestor of $H$, then every $G+e$ has a cousin that is an $H+e$.
\end{corollary}
Since every graph in the Heawood family is an ancestor of either the
Heawood graph (called $C_{14}$ in \cite{HNTY}) or the graph $H_{12}$,
it will be important to understand the graphs that result from
adding an edge to these two graphs.
\begin{theorem}
\label{thm:Heawpe}
Let $H$ be the Heawood graph. Up to isomorphism, there are two $H+e$ graphs. One
is in the $H_8+e$ family, the other in the $E_9+e$ family.
\end{theorem}
\begin{proof}
The diameter of the Heawood graph is three. Up to isomorphism, we can either add an edge between vertices of distance two or three.
If we add an edge between vertices of distance two,
the result is a graph in the $H_8+e$ family. If the distance
is three, we are adding an edge between the different parts and the result is a bipartite graph of size 22.
As shown in~\cite{KMO}, this means it is cousin 89 of the
$E_9+e$ family.
\end{proof}
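The diameter computation is elementary. The sketch below builds the Heawood graph from its standard LCF description $[5,-5]^7$ (this construction is an outside assumption, not taken from the text) and computes eccentricities by breadth-first search:

```python
from collections import deque

# Heawood graph from its LCF description [5, -5]^7: a 14-cycle plus seven chords.
n = 14
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
edges |= {frozenset((i, (i + (5 if i % 2 == 0 else -5)) % n)) for i in range(n)}

adj = {v: set() for v in range(n)}
for e in edges:
    a, b = tuple(e)
    adj[a].add(b)
    adj[b].add(a)

def ecc(src):
    """Eccentricity of src, computed by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

assert len(edges) == 21                      # the Heawood graph has size 21
assert max(ecc(v) for v in range(n)) == 3    # and diameter three
```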
\begin{theorem}
\label{thm:H12pe}
Let $G$ be formed by adding an edge to $H_{12}$.
Then $G$ is in the $H_8+e$, $E_9+e$, or $H_9+e$ family.
\end{theorem}
\begin{proof}
We remark that $H_{12}$ consists of six degree 4 vertices and six degree 3 vertices.
Moreover, five of the degree 3 vertices are created by $\nabla\mathrm{Y}$ moves in the process of obtaining $H_{12}$ from $K_7$.
Let $a_i$ ($i = 1, \ldots, 5$) denote those five degree 3 vertices.
Further assume that $b_1$ is the remaining degree 3 vertex and $b_2$, $b_3$, $b_4$, $b_5$, $b_6$ and $b_7$ are
the remaining degree 4 vertices.
Then the $b_j$ vertices correspond to vertices of $K_7$ before applying the $\nabla\mathrm{Y}$ moves.
First suppose that $G$ is obtained from $H_{12}$ by adding an edge which connects two $b_j$ vertices.
Since these seven vertices are the vertices of $K_7$ before using $\nabla\mathrm{Y}$ moves, there is exactly one vertex among the $a_i$, say $a_1$, that is adjacent to the two endpoints of the added edge.
Let $G'$ be the graph obtained from $G$ by applying
$\mathrm{Y}\nabla$ moves at $a_2$, $a_3$, $a_4$ and $a_5$.
Then $G'$ is isomorphic to $H_8+e$.
Therefore $G$ is in the $H_8+e$ family.
Next suppose that $G$ is obtained from $H_{12}$ by adding an edge which connects two $a_i$ vertices.
Let $a_1$ and $a_2$ be the endpoints of the added edge.
We assume that $G'$ is obtained from $G$ by using
$\mathrm{Y}\nabla$ moves at $a_3$, $a_4$ and $a_5$.
Then there are two cases: $G'$ is obtained from either $H_9$ or $F_9$ by adding an edge which connects two degree 3 vertices.
In the first case, $G'$ is isomorphic to $H_9+e$.
Thus $G$ is in the $H_9+e$ family.
In the second case, $G'$ is in the $H_8+e$ or $E_9+e$ family by Corollary~\ref{cor:Gpe} and Theorem~\ref{thm:Heawpe}.
Thus $G$ is in the $H_8+e$ or $E_9+e$ family.
Finally suppose that $G$ is obtained from $H_{12}$ by adding an edge which connects an $a_i$ vertex and
a $b_j$ vertex.
Let $a_1$ be a vertex of the added edge.
We assume that $G'$ is the graph obtained from $G$ by using $\mathrm{Y}\nabla$ moves at $a_2$, $a_3$, $a_4$ and $a_5$.
Since $G'$ is obtained from $H_8$ by adding an edge, $G'$ is in the $H_8+e$ or $E_9+e$ family by Corollary~\ref{cor:Gpe} and Theorem~\ref{thm:Heawpe}.
Therefore $G$ is in the $H_8+e$ or $E_9+e$ family.
\end{proof}
\begin{corollary} If $H$ is in the Heawood family, then $H+e$ is in the $H_8+e$, $E_9+e$, or $H_9+e$ family.
\end{corollary}
\begin{proof} The graph $H$ is either an ancestor of the Heawood graph or $H_{12}$. Apply Corollary~\ref{cor:Gpe} and
Theorems~\ref{thm:Heawpe} and \ref{thm:H12pe}.
\end{proof}
\begin{corollary}
\label{cor:29nIK}
If $H$ is in the Heawood family and $H+e$ is nIK, then $H+e$ is one of the 29 nIK graphs in the
$H_8+e$ family.
\end{corollary}
\begin{lemma}
\label{lem:deg2}
Let $H$ be a nIK Heawood family graph and $G$ be an expansion obtained by splitting a vertex of $H$.
Then either $G$ has a vertex of degree at most two, or else it is in the $H_8+e$, $E_9+e$, or $H_{9}+e$ family.
\end{lemma}
\begin{proof}
Note that $\Delta(H) \leq 5$. As $G$ is a size 22 expansion, the split
partitions the neighborhood of the split vertex, so the degrees of the two
new vertices sum to at most $5 + 2 = 7$; in particular, one of them has
degree at most three. If $G$ has no vertex of degree at most two, that
vertex has degree exactly three, and
a $\mathrm{Y}\nabla$ move on it produces $G'$ which is of the form $H+e$.
\end{proof}
\begin{corollary}
\label{cor:deg2}
Suppose $G$ is nIK and a size 22 expansion of a nIK Heawood family graph.
Then either $G$ has a vertex of degree at most two or $G$ is in the $H_8 + e$ family.
\end{corollary}
\begin{theorem}
\label{thm:23to29}
Let $G$ be size 23 MMIK with a minor that is
a nIK Heawood family graph. Then $G$ is an expansion of
one of the 29 nIK graphs in the $H_8+e$ family.
\end{theorem}
\begin{proof}
There must be a size 22 graph $G'$ intermediate to
$G$ and the Heawood family graph $H$. That is,
$G$ is an expansion of $G'$, which is an expansion
of $H$. By Corollary~\ref{cor:deg2}, we can
assume $G'$ has a vertex $v$ of degree at most two.
By Lemma~\ref{lem:delta3},
a MMIK graph has minimal degree, $\delta(G) \geq 3$.
Since
$G'$ expands to $G$ by adding an edge or splitting
a vertex, we conclude $v$ has degree two exactly
and $G$ is $G'$ with an edge added at $v$.
Since $\delta(H) \geq 3$, this means $G'$ is obtained
from $H$ by a vertex split.
In $G'$, let $N(v) = \{a,b\}$ and let $cv$ be
the edge added to form $G$. Then $H = G'/av$ and
we recognize $H+ac$ as a minor of $G$.
We are assuming $G$ is
MMIK, so $H+ac$ is nIK and, by Corollary~\ref{cor:29nIK},
one of the 29 nIK graphs in the $H_8+e$ family. Thus,
$G$ is an expansion of $H+ac$, which is one of these 29
graphs, as required.
\end{proof}
It remains to study the expansions of the 29 nIK graphs
in the $H_8+e$ family. We will give an overview of the
argument, leaving many of the details to the appendix.
The size 23 expansions of the 29 size 22 nIK graphs fall
into one of eight families, which we identify by the
number of graphs in the family: $\mathcal{F}_9$, $\mathcal{F}_{55}$,
$\mathcal{F}_{174}$, $\mathcal{F}_{183}$, $\mathcal{F}_{547}$, $\mathcal{F}_{668}$,
$\mathcal{F}_{1229}$, and $\mathcal{F}_{1293}$. We list the graphs
in each family in the appendix.
\begin{theorem} If $G$ is a size 23 MMIK expansion of a nIK
Heawood family graph, then $G$ is in one of the
eight families, $\mathcal{F}_9$, $\mathcal{F}_{55}$,
$\mathcal{F}_{174}$, $\mathcal{F}_{183}$, $\mathcal{F}_{547}$, $\mathcal{F}_{668}$,
$\mathcal{F}_{1229}$, and $\mathcal{F}_{1293}$.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:23to29}, $G$ is an expansion
of $G'$, which is one of the 29 nIK graphs
in the $H_8+e$ family. As we have seen,
these 29 graphs are ancestors of the six graphs
$T_1, \ldots, T_6$. By Corollary~\ref{cor:Gpe},
we can find the $G'+e$ graphs by looking at the six
graphs. Given the family listings in the appendix,
it is straightforward to verify that each $T_i+e$ is in
one of the eight families. This accounts for
the graphs $G$ obtained by adding an edge to
one of the 29 nIK graphs in the $H_8+e$ family.
If instead $G$ is obtained by splitting a vertex
of $G'$, we use the strategy of Lemma~\ref{lem:deg2}.
By Lemma~\ref{lem:delta3}, $\delta(G) \geq 3$.
Since $\Delta(G') \leq 5$,
the vertex split must produce a vertex of degree three.
Then, a $\mathrm{Y}\nabla$ move on the degree three vertex produces
$G''$ which is of the form $G'+e$. Thus $G$ is a cousin
of $G'+e$ and must be in one of the eight families.
\end{proof}
To complete our argument that there is no size 23 MMIK
graph with a nIK Heawood family minor, we
argue that there are no MMIK graphs in the
eight families $\mathcal{F}_9, \mathcal{F}_{55}, \ldots, \mathcal{F}_{1293}$.
In large part our argument is based on two criteria
that immediately show a graph $G$ is not MMIK.
\begin{enumerate}
\item $\delta(G) < 3$, see Lemma~\ref{lem:delta3}.
\item By deleting an edge, there is a proper minor $G-e$ that is an IK graph in the $H_8+e$, $E_9+e$,
or $H_9+e$ families. In this case $G$ is IK, but not MMIK.
\end{enumerate}
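Criterion 1 amounts to a one-line degree check. A minimal Python sketch (the helper names are ours; `edges` is a list of pairs):

```python
def min_degree(edges):
    """Minimum degree of the graph on the endpoints of `edges`."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return min(deg.values())

def survives_criterion_one(edges):
    """An MMIK candidate must have minimum degree at least three (Lemma delta3)."""
    return min_degree(edges) >= 3

# A path fails the criterion; K_4 passes it.
assert not survives_criterion_one([(0, 1), (1, 2)])
assert survives_criterion_one([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
```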
By Lemma~\ref{lem:MMIK}, if $G$ has an ancestor
that satisfies criterion 2, then $G$ is also not MMIK.
By Lemma~\ref{lem:tyyt}, if $G$ has a nIK descendant, then
$G$ is also nIK, hence not MMIK.
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_9$ family.
\end{theorem}
\begin{proof}
Four of the nine graphs satisfy the first criterion, $\delta(G) = 2$, and these are
not MMIK by Lemma~\ref{lem:delta3}. The remaining
graphs are descendants of a graph $G$
that is IK but not MMIK.
Indeed, $G$ satisfies criterion 2: by deleting an edge, we recognize $G-e$ as
an IK graph in the $H_9+e$ family (see the appendix
for details). By
Lemma~\ref{lem:MMIK}, $G$ and its descendants are also
not MMIK.
\end{proof}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{55}$ family.
\end{theorem}
\begin{proof}
All graphs in this family have $\delta(G) \geq 3$,
so none satisfy the first criterion.
All but two of the graphs in this family are not MMIK
by the second criterion. The remaining two graphs
have a common parent that is IK but not MMIK. By
Lemma~\ref{lem:MMIK}, these last two graphs are also
not MMIK. See the appendix for details.
\end{proof}
We remark that $\mathcal{F}_{55}$ is the only one of the eight
families that has no graph with $\delta(G) < 3$.
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{174}$ family.
\end{theorem}
\begin{proof}
All but 51 graphs are not MMIK by the first criterion. Of
the remaining graphs, all but 17 are not MMIK by the second
criterion. Of these, 11 are descendants of a graph
$G$ that is IK but not MMIK by the second criterion.
By Lemma~\ref{lem:MMIK}, these 11 are also not MMIK.
This leaves six graphs, for which we find two nIK
descendants. Both descendants have a degree
2 vertex; on contracting an edge at the degree 2 vertex,
we obtain a homeomorphic graph that is one of the
29 nIK graphs in the $H_8+e$ family. By Lemma~\ref{lem:tyyt},
the remaining six graphs are also nIK, hence not MMIK.
\end{proof}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{183}$ family.
\end{theorem}
\begin{proof}
All graphs in this family have a vertex of degree two or
less and are not MMIK by the first criterion.
\end{proof}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{547}$ family.
\end{theorem}
\begin{proof}
All but 229 of the graphs in the family are not MMIK
by criterion one. Of those, all but 52 are not MMIK by
criterion two. Of those, 25 are ancestors of one
of the graphs meeting criterion two and are not MMIK
by Lemma~\ref{lem:MMIK}. For the remaining 27 graphs,
all but five have a nIK descendant and are not IK
by Lemma~\ref{lem:tyyt}. For the remaining five,
three are ancestors of one of the five. In
Figure~\ref{fig:547UK} we give knotless embeddings
of the other two graphs. Using Lemma~\ref{lem:tyyt},
all five graphs are nIK, hence not MMIK.
\end{proof}
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{f547.eps}
\caption{Knotless embeddings of two graphs in $\mathcal{F}_{547}$.}
\label{fig:547UK}
\end{figure}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{668}$ family.
\end{theorem}
\begin{proof}
All but 283 of the graphs in the family are not MMIK
by criterion one. Of those, all but 56 are not MMIK by
criterion two. Of those, 23 are ancestors of one
of the graphs meeting criterion two and are not MMIK
by Lemma~\ref{lem:MMIK}. For the remaining 33 graphs,
all but three have a nIK descendant and are not IK
by Lemma~\ref{lem:tyyt}. Of the remaining three,
two are ancestors of the third. Figure~\ref{fig:668UK}
is a knotless embedding of the common descendant. By
Lemma~\ref{lem:tyyt} all three of these graphs are nIK, hence not MMIK.
\end{proof}
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{f668.eps}
\caption{Knotless embedding of a graph in $\mathcal{F}_{668}$.}
\label{fig:668UK}
\end{figure}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{1229}$ family.
\end{theorem}
\begin{proof}
There are 268 graphs in the family that are not MMIK
by criterion one. Of the remaining 961 graphs, all but
140 are not MMIK by criterion two. Of those, all but three are
ancestors of one of the graphs meeting criterion two and are not MMIK
by Lemma~\ref{lem:MMIK}. The remaining three graphs have an IK minor
by contracting an edge and are, therefore, not MMIK.
\end{proof}
\begin{theorem}There is no MMIK graph in the $\mathcal{F}_{1293}$ family.
\end{theorem}
\begin{proof}
There are 570 graphs in the family that are not MMIK
by criterion one. Of the remaining 723 graphs, all but
99 are not MMIK by criterion two. Of those, all but 12 are
ancestors of one of the graphs meeting criterion two and are not MMIK
by Lemma~\ref{lem:MMIK}. The remaining 12 graphs have an IK minor
by contracting an edge and are, therefore, not MMIK.
\end{proof}
\subsection{Expansions of the size 22 graphs $H_1$ and $H_2$}
\label{sec:1122graphs}
We have argued that a size 23 MMIK graph must have a minor that
is either one of six nIK graphs in the Heawood family, or else
one of two $(11,22)$ graphs that we call
$H_1$: \verb'J?B@xzoyEo?' and $H_2$: \verb'J?bFF`wN?{?'
(see Figure~\ref{fig:H1H2}).
We treated expansions of the Heawood family
graphs in the previous subsection. In this subsection we show
that $G_1$, $G_2$, and $G_3$ are the only size 23 MMIK expansions
of $H_1$ and $H_2$.
By Lemma~\ref{lem:delta3}, if a vertex split of $H_1$ results in a vertex
of degree less than three, the resulting graph is not MMIK. Since
$H_1$ is $4$-regular, the only other way to make a vertex split
produces adjacent degree 3 vertices. Then, a $\mathrm{Y}\nabla$ move on one of the degree
three vertices yields an $H_1+e$. Thus, a size
23 MMIK expansion of $H_1$ must be in the family of an $H_1+e$ graph.
Up to isomorphism, there are six $H_1+e$ graphs formed by adding an edge to $H_1$.
These six graphs generate four families, of sizes 6, 2, 2, and 1. Three of the six
graphs are in the family of size 6 and there is one each in
the remaining three families.
All graphs in the family of size six are ancestors of three graphs.
In Figure~\ref{fig:sixfam} we provide knotless embeddings of those three
graphs. By Lemma~\ref{lem:tyyt}, all graphs in this family are nIK, hence not MMIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{6fh1.eps}
\caption{Knotless embeddings of three graphs in the size six family from $H_1$.}
\label{fig:sixfam}
\end{figure}
In a family of two graphs, there is a single $\nabla\mathrm{Y}$ move. In Figure~\ref{fig:two2s}
we give knotless embeddings of the children in these two families.
By Lemma~\ref{lem:tyyt}, all graphs in these two families are nIK, hence not MMIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{2fh1.eps}
\caption{Knotless embeddings of two graphs in the size two families from $H_1$.}
\label{fig:two2s}
\end{figure}
The unique graph in the family of size one is $G_1$.
In subsection~\ref{sec:3MMIKpf}
we show that this graph is MMIK.
Using the edge list of $G_1$ given above near the beginning of Section~3:
$$[(0, 4), (0, 5), (0, 9), (0, 10), (1, 4), (1, 6), (1, 7), (1, 10), (2, 3), (2, 4), (2, 5), (2, 9),$$
$$ (2, 10), (3, 6), (3, 7), (3, 8), (4, 8), (5, 6), (5, 7), (5, 8), (6, 9), (7, 9), (8, 10)],$$
we recover $H_1$ by deleting edge $(2,5)$.
Again, since $H_2$ is $4$-regular, MMIK expansions formed by vertex splits (if any)
will be in the families of $H_2+e$ graphs. Up to isomorphism,
there are three $H_2+e$ graphs. These produce a family of size four and another
of size two.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{4fh2.eps}
\caption{Knotless embeddings of two graphs in the size four family from $H_2$.}
\label{fig:fourfam}
\end{figure}
The family of size four includes two $H_2+e$ graphs. All graphs in the
family are ancestors of the two graphs that are each shown
to have a knotless embedding in
Figure~\ref{fig:fourfam}. By Lemma~\ref{lem:tyyt}, all graphs in this family
are nIK, hence not MMIK.
The family of size two consists of the graphs $G_2$ and $G_3$. In
subsection~\ref{sec:3MMIKpf} we show that these two graphs are MMIK.
Using the edge list for $G_2$ given above near the beginning of
Section~3:
$$[(0, 2), (0, 4), (0, 5), (0, 6), (0, 7), (1, 5), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8),$$
$$(2, 9), (3, 7), (3, 8), (3, 9), (3, 10), (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10)],$$
we recover $H_2$ by deleting edge $(0,2)$.
As for $G_3$:
$$[(0, 4), (0, 5), (0, 7), (0, 11), (1, 5), (1, 6), (1, 7), (1, 8), (2, 7), (2, 8), (2, 9), (2, 11),$$
$$(3, 7), (3, 8), (3, 9), (3, 10), (4, 8), (4, 9), (4, 10), (5, 9), (5, 10), (6, 10), (6, 11)],$$
contracting edge $(6,11)$ leads back to $H_2$.
\section{
\label{sec:ord10}%
Knotless embedding obstructions of order ten.}
In this section, we prove Theorem~\ref{thm:ord10}: there are exactly 35 obstructions to knotless embedding of order 10. As in the previous
section, we refer to knotless embedding obstructions as MMIK graphs.
We first describe the 26 graphs given in~\cite{FMMNN,MNPP}
and then list the 9 new graphs unearthed by our computer search.
\subsection{26 previously known order 10 MMIK graphs.}
In~\cite{FMMNN}, the authors describe 264 MMIK graphs. There are three
sporadic graphs (none of order 10), the rest falling into four graph families. Of the 264 graphs, 24 have order 10, and they appear in the families
as follows.
There are three MMIK graphs of order 10 in the Heawood family~\cite{KS, GMN, HNTY}: $H_{10}, F_{10}, $ and $E_{10}$.
In~\cite{GMN}, the authors study the other three families.
All 56 graphs in the $K_{3,3,1,1}$ family are MMIK.
Of these, 11 have order 10: Cousins 4, 5, 6, 7, 22, 25, 26, 27, 28, 48, and 51. There are 33 MMIK graphs in the family of $E_9+e$. Of these seven have order 10: Cousins 3, 28, 31, 41, 44, 47, and 50. Finally, the family of $G_{9,28}$ includes 156 MMIK graphs. Of these, there are three of
order 10: Cousins 2, 3, and 4.
The other two known MMIK graphs of order 10 are described in~\cite{MNPP},
one having size 26 and the other size 30.
We remark that the family for the graph of size 26 includes
both MMIK graphs and graphs with $\delta(G) = 2$. However,
no ancestor or descendant of a $\delta(G) = 2$ graph is MMIK.
This is part of our motivation for Question~\ref{ques:d3MMIK}.
\subsection{Nine new MMIK graphs of order 10}
In this subsection we list the nine additional MMIK graphs that
we found after an exhaustive computer search, conducted in sage~\cite{sage},
of the 11,716,571 connected graphs of order 10.
In each case, we use the program
of~\cite{MN} to verify that the graph we found is IK.
We use the Mathematica implementation of the program
available at Ramin Naimi's website~\cite{NW}.
To show that the graph is MMIK, we must in addition verify that
each minor formed by deleting or contracting an edge is nIK.
Many of these minors are $2$-apex and not IK by Lemma~\ref{lem:2apex}.
There remain 21 minors and below we discuss how we know that those are
also nIK.
First we list the nine new MMIK graphs of order 10,
including size, graph6 format~\cite{sage}, and an edge list.
\begin{enumerate}
\item Size: 25; graph6 format: \verb"ICrfbp{No"
$$[(0, 3), (0, 4), (0, 5), (0, 6), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),$$
$$(2, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 6), (3, 7), (3, 8),$$
$$(3, 9), (4, 7), (4, 8), (4, 9), (5, 8), (5, 9), (6, 9), (7, 9)]$$
\item Size: 25; graph6 format: \verb"ICrbrrqNg"
$$[(0, 3), (0, 4), (0, 5), (0, 8), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),$$
$$(2, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 6), (3, 7), (3, 8),$$
$$(3, 9), (4, 6), (4, 7), (4, 9), (5, 9), (6, 8), (6, 9), (8, 9)]$$
\item Size: 25; graph6 format: \verb"ICrbrriVg"
$$[(0, 3), (0, 4), (0, 5), (0, 8), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),$$
$$(1, 9), (2, 5), (2, 6), (2, 7), (2, 8), (3, 6), (3, 7), (3, 9),$$
$$(4, 6), (4, 7), (4, 8), (4, 9), (5, 9), (6, 8), (6, 9), (8, 9)]$$
\item Size: 25; graph6 format: \verb"ICrbrriNW"
$$[(0, 3), (0, 4), (0, 5), (0, 8), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),$$
$$(2, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 6), (3, 7), (3, 9),$$
$$(4, 6), (4, 7), (4, 8), (4, 9), (5, 9), (6, 8), (7, 9), (8, 9)]$$
\item Size: 27; graph6 format: \verb"ICfvRzwfo"
$$[(0, 3), (0, 4), (0, 5), (0, 6), (0, 8), (0, 9), (1, 5), (1, 6), (1, 7),$$
$$(1, 8), (2, 5), (2, 6), (2, 7), (2, 8), (3, 4), (3, 5), (3, 7), (3, 8),$$
$$(3, 9), (4, 6), (4, 7), (4, 8), (4, 9), (5, 7), (5, 9), (6, 9), (7, 9)]$$
\item Size: 29; graph6 format: \verb"ICfvRr^vo"
$$[(0, 3), (0, 4), (0, 5), (0, 6), (0, 8), (0, 9), (1, 5), (1, 6), (1, 7), (1, 8),$$
$$(1, 9), (2, 5), (2, 6), (2, 7), (3, 4), (3, 5), (3, 7), (3, 8), (3, 9), (4, 6),$$
$$(4, 7), (4, 8), (4, 9), (5, 8), (5, 9), (6, 8), (6, 9), (7, 8), (7, 9)]$$
\item Size: 30; graph6 format: \verb"IQjuvrm^o"
$$[(0, 2), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (1, 3), (1, 5), (1, 6), (1, 7),$$
$$(1, 8), (1, 9), (2, 4), (2, 5), (2, 7), (2, 8), (2, 9), (3, 5), (3, 6), (3, 7),$$
$$(3, 9), (4, 6), (4, 7), (4, 8), (4, 9), (5, 8), (5, 9), (6, 8), (6, 9), (7, 9)]$$
\item Size: 31; graph6 format: \verb"IQjur~m^o"
$$[(0, 2), (0, 4), (0, 5), (0, 6), (0, 8), (1, 3), (1, 5), (1, 6), (1, 7), (1, 8), (1, 9),$$
$$(2, 4), (2, 5), (2, 7), (2, 8), (2, 9), (3, 5), (3, 6), (3, 7), (3, 9), (4, 6),$$
$$(4, 7), (4, 8), (4, 9), (5, 7), (5, 8), (5, 9), (6, 7), (6, 8), (6, 9), (7, 9)]$$
\item Size: 32; graph6 format: \verb"IEznfvm|o"
$$[(0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 3), (1, 4), (1, 5), (1, 6),$$
$$(1, 7), (1, 8), (1, 9), (2, 4), (2, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 6), (3, 7),$$
$$(3, 9), (4, 5), (4, 7), (4, 8), (5, 8), (5, 9), (6, 7), (6, 8), (6, 9), (7, 9)]$$
\end{enumerate}
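The graph6 strings above can be decoded without Sage: for a graph of order $n < 63$, the first character encodes $n$ and the remaining characters store the upper triangle of the adjacency matrix, column by column, six bits per character. The following Python sketch (our own helper, independent of the programs of~\cite{MN,NW}) recovers the order and edge list and confirms the data given for the first graph.

```python
def decode_graph6(s):
    """Decode a graph6 string (order < 63) into (order, sorted edge list)."""
    vals = [ord(c) - 63 for c in s]
    n = vals[0]
    bits = []
    for v in vals[1:]:                      # six bits per character, high bit first
        bits.extend((v >> i) & 1 for i in range(5, -1, -1))
    edges, k = [], 0
    for j in range(1, n):                   # upper triangle, column by column
        for i in range(j):
            if bits[k]:
                edges.append((i, j))
            k += 1
    return n, sorted(edges)

# the first graph listed above: order 10, size 25
n, edges = decode_graph6("ICrfbp{No")
assert n == 10 and len(edges) == 25 and (0, 3) in edges
```

Decoding all nine strings this way reproduces the sizes $25, 25, 25, 25, 27, 29, 30, 31, 32$ claimed above.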
To complete our argument, it remains to argue that the 21 non-2-apex minors are
nIK. Each of these minors is formed by deleting or contracting an edge
in one of the nine graphs just listed. Of these minors, 19 have a
2-apex descendant and are nIK by Lemmas~\ref{lem:2apex} and \ref{lem:tyyt}. In Figure~\ref{fig:ord10}
we give knotless embeddings of the remaining two minors showing
that they are also nIK.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{order10.eps}
\caption{Knotless embeddings of two order 10 graphs.}
\label{fig:ord10}
\end{figure}
\section{Acknowledgements}
We thank Ramin Naimi for use of his programs which were
essential for this project.
\section{Introduction}\label{sec:intro}
For $n \ge 2$ let $\mathcal{I}_n$ denote the set of all irreducible degree $n$ polynomials in ${\mathbb F}_{2}[x]$, and let $\text{Tr}_n: {\mathbb F}_{2^n} \rightarrow {\mathbb F}_2: \alpha \mapsto \alpha + \alpha^2 + \alpha^{2^2} + \cdots + \alpha^{2^{n-1}}$ denote the absolute trace function.
For a polynomial $f = x^n + f_{n-1}x^{n-1} + \cdots + f_1x + 1 \in \mathcal{I}_n$, if $\alpha$ is a root of $f$ then $f_{n-1} = \text{Tr}_n(\alpha)$ and $f_{1} = \text{Tr}_n(\alpha^{-1})$: $f_{n-1}$ and $f_{1}$ are known as the trace and cotrace respectively.
We partition $\mathcal{I}_n$ into four sets $S_{i,j}(n)$ with $i,j \in {\mathbb F}_2$ by placing each $f \in \mathcal{I}_n$ into $S_{f_{n-1},f_1}(n)$.
Table~\ref{table1} contains the cardinality of these sets for $2 \le n \le 32$ (note that we do not define $S_{i,j}(1)$).
Elements of $S_{1,1}(n)$ are useful for practical applications since they give rise to representations of ${\mathbb F}_{2^{n \cdot 2^{l}}}$ for all $l \ge 1$ via the iteration of the so-called $Q$-transform~\cite{meyn} (cf. \S\ref{sec:parity}), provided that $n \ne 3$~\cite{niederreiter}.
It is clear that for any $n \ge 3$ the sets $S_{0,1}(n)$ and $S_{1,0}(n)$ have the same cardinality,
since any member $f$ of one set is mapped to a member of the other by the reciprocal transform $f^*(x) = x^n f(1/x)$, which reverses the coefficients of $f$. Since this transform is invertible (indeed it is its own inverse), it gives a natural bijection between the two sets, in the sense that it is simple and has explanatory power.
Ahmadi observed that for odd $n \ge 3$ the sets $S_{0,0}(n)$ and $S_{1,1}(n)$ also have the same cardinality~\cite{omran}, which raises the question of whether there exists a natural bijection between them, just as for $S_{0,1}(n)$ and $S_{1,0}(n)$.
There exist bijective proofs of numerous combinatorial identities: indeed, Stanley has exhibited $250$ `Bijective Proof Problems' of various levels of difficulty, including 27 open problems~\cite{Stanley}. Occasionally, a natural bijection can illuminate the relation between two sets of the same cardinality. One such example is Benjamin and Bennett's elegant solution~\cite{Dilcue} to a question posed by Corteel, Savage, Wilf and Zeilberger, which asked for a bijective explanation of the fact that among ordered pairs of polynomials of degree $n$ in ${\mathbb F}_{2}[x]$, there are as many coprime pairs as there are non-coprime pairs~\cite{CSWZ}. Benjamin and Bennett constructed such a bijection by applying Euclid's algorithm to any pair, flipping the final remainder bit, and then reversing Euclid's algorithm using the same quotients. The main purpose of the present work is to exhibit a natural bijection which explains Ahmadi's observation.
The present author further observed that for even $n$, the difference $|S_{1,1}(n)| - |S_{0,0}(n)|$ is equal to the number of trace $1$ irreducibles of degree $n/2$.
Before presenting our bijective proof of Ahmadi's observation in \S\ref{sec:bij}, for good measure we first prove his and our observations in two different ways, in \S\ref{sec:easy} and~\S\ref{sec:nied}, each proof having its own merits. We finish by presenting a proposition on the parity of $|S_{1,1}(n)|$ in \S\ref{sec:parity}, which arises from similar considerations. For reference and clarity we now state our two main theorems explicitly.
\begin{theorem}\label{mainthm}
For odd $n \ge 3$ the sets $S_{0,0}(n)$ and $S_{1,1}(n)$ have the same cardinality.
\end{theorem}
\begin{table}[h!]
\caption{Cardinality of $S_{i,j}(n)$ for $2 \le n \le 32$}
\begin{center}\label{table1}
\begin{tabular}{c|c|c|c|c}
\hline
$n$ & $|S_{0,0}(n)|$ & $|S_{0,1}(n)|$ & $|S_{1,0}(n)|$ & $|S_{1,1}(n)|$\\
\hline
2 & 0 & 0 & 0 & 1 \\
3 & 0 & 1 & 1 & 0 \\
4 & 0 & 1 & 1 & 1 \\
5 & 2 & 1 & 1 & 2 \\
6 & 1 & 3 & 3 & 2 \\
7 & 4 & 5 & 5 & 4 \\
8 & 7 & 7 & 7 & 9 \\
9 & 14 & 14 & 14 & 14\\
10 & 21 & 27 & 27 & 24 \\
11 & 48 & 45 & 45 & 48 \\
12 & 81 & 84 & 84 & 86 \\
13 & 154 & 161 & 161 & 154 \\
14 & 285 & 291 & 291 & 294 \\
15 & 550 & 541 & 541 & 550 \\
16 & 1001 & 1031 & 1031 & 1017 \\
17 & 1926 & 1929 & 1929 & 1926 \\
18 & 3626 & 3626 & 3626 & 3654 \\
19 & 6888 & 6909 & 6909 & 6888 \\
20 & 13041 & 13122 & 13122 & 13092 \\
21 & 24998 & 24931 & 24931 & 24998 \\
22 & 47565 & 47667 & 47667 & 47658 \\
23 & 91124 & 91237 & 91237 & 91124 \\
24 & 174652 & 174698 & 174698 & 174822 \\
25 & 335588 & 335500 & 335500 & 335588 \\
26 & 644805 & 645435 & 645435 & 645120 \\
27 & 1242822 & 1242682 & 1242682 & 1242822 \\
28 & 2396385 & 2396520 & 2396520 & 2396970 \\
29 & 4627850 & 4628545 & 4628545 & 4627850 \\
30 & 8946665 & 8947923 & 8947923 & 8947756 \\
31 & 17319148 & 17317685 & 17317685 & 17319148 \\
32 & 33551833 & 33554983 & 33554983 & 33553881 \\
\hline
\end{tabular}
\end{center}
\end{table}
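The small entries of Table~\ref{table1} are easy to reproduce by brute force. The following Python sketch (an independent check, not part of any cited software) represents a binary polynomial as an integer bitmask, sieves irreducibles by trial division in ${\mathbb F}_2[x]$, and tallies them by trace and cotrace $(f_{n-1}, f_1)$.

```python
def pmod(a, b):
    # remainder of a modulo b in F_2[x]; polynomials are integer bitmasks
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def irreducible(p):
    # trial division by every polynomial of degree 1 .. deg(p)/2
    n = p.bit_length() - 1
    return all(pmod(p, q) for q in range(2, 1 << (n // 2 + 1)))

def counts(n):
    c = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for m in range(1 << (n - 1)):           # middle coefficients f_1 .. f_{n-1}
        p = (1 << n) | (m << 1) | 1         # monic, constant term 1
        if irreducible(p):
            c[(p >> (n - 1)) & 1, (p >> 1) & 1] += 1
    return c

# rows n = 5 and n = 8 of Table 1
assert counts(5) == {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2}
assert counts(8) == {(0, 0): 7, (0, 1): 7, (1, 0): 7, (1, 1): 9}
```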
\begin{theorem}\label{evenn}
For even $n$, the difference $|S_{1,1}(n)| - |S_{0,0}(n)|$ is equal to the number of trace $1$ irreducibles of degree $n/2$. In particular, we have
\[
|S_{1,1}(n)| - |S_{0,0}(n)| = \frac{1}{n}\sum_{\substack{{d \mid n/2} \\
d \ \text{odd}}} \mu(d) 2^{n/2d},
\]
where $\mu(\cdot)$ is the M\"obius function, which is defined by:
\[
\mu(n) = \begin{cases}
1 & \text{if } n = 1,\\
(-1)^k & \text{if } n \text{ is a product of $k$ distinct primes},\\
0 & \text{otherwise}.
\end{cases}
\]
\end{theorem}
\section{First proofs of the observations}\label{sec:easy}
Our first proofs of Theorems~\ref{mainthm} and~\ref{evenn} are easy and the most direct, but are perhaps the least illuminating since they use two well-known theorems.
\vspace{3mm}
\newline
\noindent {\em First proof of Theorem~\ref{mainthm}.}
For $n \ge 1$ it is well known that
\[
|\mathcal{I}_n| = \frac{1}{n} \sum_{d \mid n} \mu(d) 2^{n/d}.
\]
It is also well known (see~\cite{carlitz}) that the number of binary irreducibles of degree $n \ge 1$ with trace $1$ is
\begin{equation}\label{trace1count}
\frac{1}{2n} \sum_{\substack{{d \mid n} \\
d \ \text{odd}}} \mu(d) 2^{n/d}.
\end{equation}
Assume now that $n \ge 2$. Then~(\ref{trace1count}) equals $|S_{1,1}(n)| + |S_{1,0}(n)|$. Furthermore, since
$|S_{1,0}(n)| = |S_{0,1}(n)|$ we have
\begin{eqnarray}
\nonumber |S_{1,1}(n)| - |S_{0,0}(n)| &=& (|S_{1,1}(n)| + |S_{1,0}(n)|) - (|S_{0,1}(n)| + |S_{0,0}(n)|)\\
\nonumber &=& (|S_{1,1}(n)| + |S_{1,0}(n)|) - (|\mathcal{I}_n| - |S_{1,1}(n)| - |S_{1,0}(n)|)\\
\nonumber &=& 2(|S_{1,1}(n)| + |S_{1,0}(n)|) - |\mathcal{I}_n|\\
\label{terms} &=& \frac{1}{n} \sum_{\substack{{d \mid n} \\
d \ \text{odd}}} \mu(d) 2^{n/d} - \frac{1}{n} \sum_{d \mid n} \mu(d) 2^{n/d} = -\frac{1}{n}\sum_{\substack{{d \mid n} \\
d \ \text{even}}} \mu(d) 2^{n/d}
\end{eqnarray}
If $n$ is odd then the final sum in expression~(\ref{terms}) is empty, which proves Theorem~\ref{mainthm}. \qed
\vspace{3mm}
\noindent {\em First proof of Theorem~\ref{evenn}.}
From~(\ref{terms}) we have
\begin{eqnarray}
\label{line3} |S_{1,1}(n)| - |S_{0,0}(n)| &=& -\frac{1}{n}\sum_{\substack{{d \mid n} \\
d \ \text{even}}} \mu(d) 2^{n/d} \\
\label{line4} &=& -\frac{1}{n}\sum_{\substack{{2d \mid n} \\
d \ \text{odd}}} \mu(2d) 2^{n/2d} \\
\label{line5} &=& \frac{1}{n}\sum_{\substack{{d \mid n/2} \\
d \ \text{odd}}} \mu(d) 2^{n/2d},
\end{eqnarray}
where expression~(\ref{line4}) discounts all those $d$ in the sum in equation~(\ref{line3}) which are divisible by $4$, since
$\mu$ of such $d$ is zero. The sum in~(\ref{line5}) is nothing but~(\ref{trace1count}) evaluated at $n/2$, as claimed. \qed
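Both closed forms are easy to evaluate numerically and compare against Table~\ref{table1}. A minimal Python sketch of this check (the helper names are ours):

```python
def mobius(n):
    # Moebius function by trial factorization
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        else:
            p += 1
    return -mu if n > 1 else mu

def odd_trace1(n):
    # eq. (trace1count): number of degree-n binary irreducibles with trace 1
    s = sum(mobius(d) * 2 ** (n // d) for d in range(1, n + 1)
            if n % d == 0 and d % 2 == 1)
    return s // (2 * n)

def irr_total(n):
    # total number of degree-n binary irreducibles
    s = sum(mobius(d) * 2 ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return s // n

# n = 8 row of Table 1 is (7, 7, 7, 9)
assert odd_trace1(8) == 9 + 7          # |S11| + |S10|
assert irr_total(8) == 7 + 7 + 7 + 9
# Theorem evenn: |S11(n)| - |S00(n)| equals the trace-1 count at n/2
assert odd_trace1(4) == 9 - 7          # n = 8
assert odd_trace1(5) == 24 - 21        # n = 10
```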
\section{Second proofs of the observations}\label{sec:nied}
Our second proofs of Theorems~\ref{mainthm} and~\ref{evenn} are based on Niederreiter's explicit count of $|S_{1,1}(n)|$~\cite{niederreiter} and arguably give more insight than our first proofs.
\vspace{3mm}
\newline
\noindent {\em Second proof of Theorem~\ref{mainthm}.}
Let $n \ge 1$. For $i \in {\mathbb F}_2$ let $N_i(n) = \#\{ \alpha \in {\mathbb F}_{2^n}^{\times} \mid \text{Tr}_n(\alpha) = \text{Tr}_n(\alpha^{-1}) = i\}$. Niederreiter expressed $N_1(n)$ as follows:
\begin{eqnarray}
\nonumber N_1(n) &=& \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} \bigg( \frac{1}{2} \sum_{a \in {\mathbb F}_2} (-1)^{a(\text{Tr}_n(\alpha) + 1)} \bigg)
\bigg( \frac{1}{2} \sum_{b \in {\mathbb F}_2} (-1)^{b(\text{Tr}_n(\alpha^{-1}) + 1)} \bigg)\\
\nonumber &=& \frac{1}{4} \sum_{a,b \in {\mathbb F}_2} (-1)^{a+b} \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} (-1)^{\text{Tr}_{n}(a \alpha + b \alpha^{-1})} \\
\label{kloostersum} &=& \frac{1}{4} \big((2^n - 1) +1 + 1 + \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} (-1)^{\text{Tr}_{n}(\alpha + \alpha^{-1})} \big),
\end{eqnarray}
where in~(\ref{kloostersum}) the $2^n - 1$ corresponds to $a = b = 0$, the two $+1$'s correspond to $a \ne b$, and the final sum
corresponds to $a = b = 1$. Note that the final sum is the well-known Kloosterman sum evaluated at $1$, but for our purposes we need not evaluate it. In particular, using a similar argument we have:
\begin{eqnarray}
\nonumber N_0(n) &=& \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} \bigg( \frac{1}{2} \sum_{a \in {\mathbb F}_2} (-1)^{a \text{Tr}_n(\alpha)} \bigg)
\bigg( \frac{1}{2} \sum_{b \in {\mathbb F}_2} (-1)^{b \text{Tr}_n(\alpha^{-1})} \bigg)\\
\nonumber &=& \frac{1}{4} \sum_{a,b \in {\mathbb F}_2} \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} (-1)^{\text{Tr}_{n}(a \alpha + b \alpha^{-1})} \\
\nonumber &=& \frac{1}{4} \big((2^n - 1) -1 - 1 + \sum_{\alpha \in {\mathbb F}_{2^n}^{\times}} (-1)^{\text{Tr}_{n}(\alpha + \alpha^{-1})} \big),
\end{eqnarray}
and therefore $N_0(n) = N_1(n) - 1$.
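The identity $N_0(n) = N_1(n) - 1$ can also be confirmed by direct enumeration of ${\mathbb F}_{2^n}$. The Python sketch below is our own check; the moduli $x^4+x+1$ and $x^5+x^2+1$ are standard irreducible choices. Traces are computed by repeated squaring and inverses as $\alpha^{2^n-2}$.

```python
def gf_mul(a, b, mod, n):
    # multiply in F_{2^n} = F_2[x]/(mod); elements are bitmasks of degree < n
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n & 1:
            a ^= mod
    return r

def trace(a, mod, n):
    # Tr_n(a) = a + a^2 + a^4 + ... + a^(2^(n-1)), an element of {0, 1}
    t, s = 0, a
    for _ in range(n):
        t ^= s
        s = gf_mul(s, s, mod, n)
    return t

def inverse(a, mod, n):
    # a^(-1) = a^(2^n - 2) for a != 0, by square and multiply
    r, e = 1, (1 << n) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a, mod, n)
        a = gf_mul(a, a, mod, n)
        e >>= 1
    return r

def N(i, mod, n):
    return sum(1 for a in range(1, 1 << n)
               if trace(a, mod, n) == i == trace(inverse(a, mod, n), mod, n))

# n = 4 with x^4 + x + 1 and n = 5 with x^5 + x^2 + 1
for n, mod in ((4, 0b10011), (5, 0b100101)):
    assert N(0, mod, n) == N(1, mod, n) - 1
```

For instance $N_1(4) = 4$ and $N_0(4) = 3$, matching $N_1(4) = G_1(4) = 4|S_{1,1}(4)|$ and $N_0(4) = |{\mathbb F}_4^{\times}| + G_0(4) = 3$.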
Now let $G_i(n) = \#\{ \alpha \in {\mathbb F}_{2^n}^{\times} \mid \text{Tr}_n(\alpha) = \text{Tr}_n(\alpha^{-1}) = i \ \text{and} \ {\mathbb F}_2(\alpha) = {\mathbb F}_{2^n} \}$,
i.e., $G_i(n)$ is the cardinality of the subset of elements counted by $N_i(n)$ which are roots of irreducible degree $n$ polynomials. Since any irreducible degree $n$ polynomial has precisely $n$ roots in ${\mathbb F}_{2^n}$ we see that $|S_{i,i}(n)| = \frac{1}{n} G_i(n)$ for $n \ge 2$. For each $\alpha \in {\mathbb F}_{2^n}^{\times}$ there is a uniquely determined irreducible polynomial in ${\mathbb F}_2[x]$ of degree $d \mid n$ for which $\alpha$ is a root, and so by transitivity of the trace, for this $d$ we have
\begin{equation}\label{transitivity}
\text{Tr}_n(\alpha) = \text{Tr}_d \big( \frac{n}{d} \alpha \big) = \frac{n}{d} \text{Tr}_d(\alpha).
\end{equation}
Thus $\text{Tr}_n(\alpha) = 1$ if and only if $n/d$ is odd and $\text{Tr}_d(\alpha) = 1$, and likewise for $\alpha^{-1}$.
Niederreiter therefore deduces that
\begin{equation}\label{N1}
N_1(n) = \sum_{\substack{{d \mid n} \\
n/d \ \text{odd}}} G_1(d).
\end{equation}
Also, by~(\ref{transitivity}), $\text{Tr}_n(\alpha) = 0$ if and only if either $n/d$ is even (in which case the value of $\text{Tr}_d(\alpha)$ is irrelevant), or
$n/d$ is odd and $\text{Tr}_d(\alpha) = 0$; likewise for $\alpha^{-1}$.
In the former case the contribution to $N_0(n)$ is simply the cardinality of the largest subfield of ${\mathbb F}_{2^n}$ such that $n/d$ is
even, minus $1$ since the zero element is not counted by $N_0(n)$. We therefore deduce that
\begin{equation}\label{N0}
N_0(n) = |{\mathbb F}_{2^{\text{max} \{ d \ \mid \ n/d \ \text{even} \} }}^{\times} | + \sum_{\substack{{d \mid n} \\
n/d \ \text{odd}}} G_0(d) .
\end{equation}
If $n$ is odd then by M\"obius inversion~(\ref{N1}) gives
\[
G_1(n) = \sum_{d \mid n} \mu(d) N_1(n/d),
\]
while the first term in~(\ref{N0}) is absent and M\"obius inversion gives
\begin{eqnarray}
\nonumber G_0(n) &=& \sum_{d \mid n} \mu(d) N_0(n/d) \\
\nonumber &=& \sum_{d \mid n} \mu(d) (N_1(n/d) - 1) \\
\nonumber &=& \sum_{d \mid n} \mu(d) N_1(n/d) - \sum_{d \mid n} \mu(d) \\
\nonumber &=& G_1(n) \ \text{for} \ n \ge 2.
\end{eqnarray}
Since $|S_{i,i}(n)| = \frac{1}{n} G_i(n)$ for $n \ge 2$ we have $|S_{0,0}(n)| = |S_{1,1}(n)|$ for odd $n \ge 3$ which reproves Theorem~\ref{mainthm}. \qed
\vspace{3mm}
\noindent {\em Second proof of Theorem~\ref{evenn}.}
Let $k \ge 1$ and let $t \ge 1$ be odd. Then for argument $2^k t$,
the $d$ occurring in the sums of~(\ref{N1}) and~(\ref{N0}) are of the form $d = 2^k e$ with $e$ a positive divisor of $t$. We thus have
\begin{equation}\label{N1even}
N_1(2^k t) = \sum_{e \mid t} G_1(2^k e),
\end{equation}
and
\begin{equation}\label{N0even}
N_0(2^k t) = | {\mathbb F}_{2^{2^{k-1} t}}^{\times} | + \sum_{e \mid t} G_0(2^k e).
\end{equation}
Since one cannot immediately apply M\"obius inversion to~(\ref{N1even}) and~(\ref{N0even}) to obtain $G_1(2^k t)$ and
$G_0(2^k t)$, for any integer $m \ge 1$ and $i = 0,1$ define
\[
H_i(m) = \sum_{d|m} G_i(2^k d).
\]
Then by M\"obius inversion we have
\begin{equation}\label{Gexp}
G_i(2^k m) = \sum_{d \mid m} \mu \bigg(\frac{m}{d} \bigg) H_i(d).
\end{equation}
If $m$ is odd then by the definition of $H_i$ and by~(\ref{N1even}) and~(\ref{N0even}) respectively, we have
$H_1(m) = N_1(2^k m)$ and $H_0(m) = N_0(2^k m) - | {\mathbb F}_{2^{2^{k-1} m}}^{\times} |$. Let $n = 2^k m$ with $k \ge 1$ and $m \ge 1$ odd. Then rewriting~(\ref{Gexp}) using these equations respectively, we obtain
\[
G_1(2^k m) = \sum_{d \mid m} \mu \bigg( \frac{m}{d} \bigg) N_1(2^k d),
\]
and
\begin{eqnarray}
\nonumber G_0(2^k m) &=& \sum_{d \mid m} \mu \bigg( \frac{m}{d} \bigg) (N_0(2^k d) - | {\mathbb F}_{2^{2^{k-1} d}}^{\times} |) \\
\nonumber &=& \sum_{d \mid m} \mu \bigg( \frac{m}{d} \bigg) N_0(2^k d) - \sum_{d \mid m} \mu \bigg( \frac{m}{d} \bigg) (2^{nd/2m} - 1) \\
\nonumber &=& \sum_{d \mid m} \mu \bigg( \frac{m}{d} \bigg) (N_1(2^k d) - 1) - \sum_{d \mid m} \mu ( d) (2^{n/2d}-1) \\
\label{final2} &=& G_1(2^k m) - \sum_{d \mid m} \mu ( d) 2^{n/2d}.
\end{eqnarray}
Dividing equation~(\ref{final2}) by $n$ reproves Theorem~\ref{evenn}. \qed
\section{An explicit bijection between $S_{0,0}(n)$ and $S_{1,1}(n)$ for odd $n$}\label{sec:bij}
We now present a bijective proof of Theorem~\ref{mainthm}. Crucial to our bijection are the following two transforms. Let
$\psi : \mathcal{I}_n \rightarrow \mathcal{I}_n : f \mapsto (x+1)^n f(\frac{1}{x+1})$, which has inverse
$\psi^{-1} : f \mapsto x^n f(\frac{x+1}{x})$, as is easily verified.
Since the arguments of $f$ in $\psi$ and $\psi^{-1}$ are invertible fractional linear transformations, they map irreducibles to irreducibles and are thus well-defined.
We observed that, under $\psi$ and $\psi^{-1}$, the set $S_{i',j'}(n)$ to which an element of $S_{i,j}(n)$ maps depends only on $i$ and $j$, the parity of $n$, and the parity of the number of monomials $x^k$ appearing in $f$ with $k$ odd in the range $2 \le k \le n-2$; the latter quantity motivates the following equivalent definition.
\begin{definition}
For a polynomial $f \in \mathcal{I}_n$ we define its signature $\sigma_f \in {\mathbb F}_2$ to be $\sum_{k = 2}^{n-2} k f_k \pmod{2}$.
\end{definition}
We have the following important lemma.
\begin{lemma}\label{lemma2.1}
Let $n \ge 3$ be odd, let $f \in S_{1,1}(n)$ and let $g \in S_{0,0}(n)$. Then
\begin{enumerate}[label={(\roman*)}]
\item \[
\psi(f) \in \begin{cases}
S_{0,0}(n) &\mbox{if } \ \sigma_f = 1 \\
S_{0,1}(n) & \mbox{if } \ \sigma_f = 0 \end{cases}
\]
\item \[
\psi^{-1}(f) \in \begin{cases}
S_{0,0}(n) &\mbox{if } \ \sigma_f = 0 \\
S_{1,0}(n) & \mbox{if } \ \sigma_f = 1 \end{cases}
\]
\item \[
\psi(g) \in \begin{cases}
S_{1,1}(n) &\mbox{if } \ \sigma_g = 1 \\
S_{1,0}(n) & \mbox{if } \ \sigma_g = 0 \end{cases}
\]
\item \[
\psi^{-1}(g) \in \begin{cases}
S_{1,1}(n) &\mbox{if } \ \sigma_g = 0 \\
S_{0,1}(n) & \mbox{if } \ \sigma_g = 1 \end{cases}
\]
\end{enumerate}
\end{lemma}
\begin{proof}
For part (i), observe that
\begin{equation}\label{eqn1}
\psi(f) = (x+1)^n + (x+1)^{n-1} + \sum_{k=2}^{n-2} f_k (x + 1)^{n-k} + (x+1) + 1
\end{equation}
The coefficient of $x^{n-1}$ in~(\ref{eqn1}) is ${n \choose n-1} + 1 = n + 1 \equiv 0 \pmod{2}$, since $n$ is odd. The coefficient of $x$ in~(\ref{eqn1}) is
\[
{n \choose 1} + {n-1 \choose 1} + \sum_{k = 2}^{n-2} f_k {n-k \choose 1} + 1 = n + (n - 1) +
\sum_{k = 2}^{n-2} f_k (n-k) +1 \equiv
\sum_{k=2}^{n-2} f_k + \sum_{k = 2}^{n-2} k f_k \pmod{2}.
\]
Since every irreducible polynomial in ${\mathbb F}_2[x]$ of degree $> 1$ necessarily has an odd number of terms (for otherwise $x+1$ would be a factor), we have $\sum_{k=2}^{n-2} f_k \equiv 1 \pmod{2}$, which completes the proof of part (i). For part (ii)
observe that
\begin{equation}\label{eqn2}
\psi^{-1}(f) = (x+1)^n + (x+1)^{n-1}x + \sum_{k=2}^{n-2} f_k (x + 1)^k x^{n-k} + (x+1)x^{n-1} + x^n.
\end{equation}
The coefficient of $x^{n-1}$ in~(\ref{eqn2}) is
\[
{n \choose n-1} + {n-1 \choose n-2} + \sum_{k = 2}^{n-2} f_k {k \choose k-1} + 1 = n + (n - 1) + \sum_{k = 2}^{n-2} f_k k + 1 \equiv
\sum_{k = 2}^{n-2} k f_k \pmod{2}.
\]
The coefficient of $x$ in~(\ref{eqn2}) is ${n \choose 1} + 1 = n + 1 \equiv 0 \pmod{2}$. This completes the proof of
part (ii). For part (iii) observe that
\begin{equation}\label{eqn3}
\psi(g) = (x+1)^n + \sum_{k=2}^{n-2} g_k (x + 1)^{n-k} + 1.
\end{equation}
The coefficient of $x^{n-1}$ in~(\ref{eqn3}) is ${n \choose n-1} = n \equiv 1 \pmod{2}$, since $n$ is odd. The coefficient of $x$ in~(\ref{eqn3}) is
\[
{n \choose 1} + \sum_{k = 2}^{n-2} g_k {n-k \choose 1} = n + \sum_{k = 2}^{n-2} g_k (n-k) \equiv
1 + \sum_{k=2}^{n-2} g_k + \sum_{k = 2}^{n-2} k g_k \equiv \sum_{k = 2}^{n-2} k g_k \pmod{2},
\]
which proves part (iii). For part (iv) observe that
\begin{equation}\label{eqn4}
\psi^{-1}(g) = (x+1)^n + \sum_{k=2}^{n-2} g_k (x + 1)^k x^{n-k} + x^n.
\end{equation}
The coefficient of $x^{n-1}$ in~(\ref{eqn4}) is
\[
{n \choose n-1} + \sum_{k = 2}^{n-2} g_k {k \choose k-1} = n + \sum_{k = 2}^{n-2} g_k k \equiv
1 + \sum_{k = 2}^{n-2} k g_k \pmod{2}.
\]
The coefficient of $x$ in~(\ref{eqn4}) is ${n \choose 1} = n \equiv 1 \pmod{2}$, which completes the proof of part (iv)
and the lemma. \qed
\end{proof}
We now reprove Theorem~\ref{mainthm} with an explicit bijection.
\vspace{3mm}
\newline
\noindent {\em Third proof of Theorem~\ref{mainthm}.}
Let $f \in S_{1,1}(n)$ and define a map
$\phi : S_{1,1}(n) \rightarrow S_{0,0}(n)$ by
\[
\phi(f) := \begin{cases}
\psi(f) &\mbox{if } \ \sigma_f = 1 \\
\psi^{-1}(f) & \mbox{if } \ \sigma_f = 0. \end{cases}
\]
Also, let $g \in S_{0,0}(n)$ and define a map
$\rho: S_{0,0}(n) \rightarrow S_{1,1}(n)$ by
\[
\rho(g) := \begin{cases}
\psi(g) &\mbox{if } \ \sigma_g = 1 \\
\psi^{-1}(g) & \mbox{if } \ \sigma_g = 0. \end{cases}
\]
We will show that $\phi$ and $\rho$ are inverse to one another.
Firstly, if $\sigma_f = 1$ then by Lemma~\ref{lemma2.1}(i) we have $\phi(f) = \psi(f) \in S_{0,0}(n)$.
Since $\psi^{-1}(\psi(f)) = f \in S_{1,1}(n)$, by Lemma~\ref{lemma2.1}(iv) we must have $\sigma_{\psi(f)} = 0$.
Hence $\rho(\phi(f)) = f$ in this case. Furthermore, if $\sigma_f = 0$ then by Lemma~\ref{lemma2.1}(ii) we have
$\phi(f) = \psi^{-1}(f) \in S_{0,0}(n)$. Since $\psi(\psi^{-1}(f)) = f \in S_{1,1}(n)$, by Lemma~\ref{lemma2.1}(iii) we must have $\sigma_{\psi^{-1}(f)} = 1$. Hence $\rho(\phi(f)) = f$ in this case too and $\rho$ is a left inverse for $\phi$.
Secondly, if $\sigma_g = 1$ then by Lemma~\ref{lemma2.1}(iii) we have $\rho(g) = \psi(g) \in S_{1,1}(n)$.
Since $\psi^{-1}(\psi(g)) = g \in S_{0,0}(n)$, by Lemma~\ref{lemma2.1}(ii) we must have $\sigma_{\psi(g)} = 0$.
Hence $\phi(\rho(g)) = g$ in this case. Furthermore, if $\sigma_g = 0$ then by Lemma~\ref{lemma2.1}(iv) we have
$\rho(g) = \psi^{-1}(g) \in S_{1,1}(n)$. Since $\psi(\psi^{-1}(g)) = g \in S_{0,0}(n)$, by Lemma~\ref{lemma2.1}(i) we must have
$\sigma_{\psi^{-1}(g)} = 1$. Hence $\phi(\rho(g)) = g$ in this case too and $\rho$ is a right inverse for $\phi$. Thus $\phi$ and $\rho$ are inverse to one another. \qed
\vspace{3mm}
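For small odd $n$ the bijection can be verified exhaustively. The Python sketch below (our own check) implements $\psi$, $\psi^{-1}$, the signature $\sigma$, and the common rule defining $\phi$ and $\rho$ on bitmask polynomials, and confirms that this map carries $S_{1,1}(5)$ onto $S_{0,0}(5)$ and is an involution on their union.

```python
def pmod(a, b):
    # remainder of a modulo b in F_2[x]; polynomials are integer bitmasks
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def irreducible(p):
    n = p.bit_length() - 1
    if n <= 1:
        return n == 1
    return all(pmod(p, q) for q in range(2, 1 << (n // 2 + 1)))

def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def ppow(a, e):
    r = 1
    for _ in range(e):
        r = pmul(r, a)
    return r

def psi(f, n):
    # psi(f) = (x+1)^n f(1/(x+1)) = sum_k f_k (x+1)^(n-k);  3 == x + 1
    r = 0
    for k in range(n + 1):
        if f >> k & 1:
            r ^= ppow(3, n - k)
    return r

def psi_inv(f, n):
    # psi^{-1}(f) = x^n f((x+1)/x) = sum_k f_k (x+1)^k x^(n-k)
    r = 0
    for k in range(n + 1):
        if f >> k & 1:
            r ^= ppow(3, k) << (n - k)
    return r

def sigma(f, n):
    # signature: sum over k = 2..n-2 of k f_k, mod 2
    return sum(k for k in range(2, n - 1) if f >> k & 1) % 2

def xfm(f, n):
    # phi on S_{1,1}(n) and rho on S_{0,0}(n) are given by the same rule
    return psi(f, n) if sigma(f, n) == 1 else psi_inv(f, n)

def verify(n):
    irr = [p for m in range(1 << (n - 1))
           for p in [(1 << n) | (m << 1) | 1] if irreducible(p)]
    s11 = [p for p in irr if p >> (n - 1) & 1 and p >> 1 & 1]
    s00 = [p for p in irr if not p >> (n - 1) & 1 and not p >> 1 & 1]
    assert sorted(xfm(f, n) for f in s11) == sorted(s00)
    assert all(xfm(xfm(f, n), n) == f for f in s11 + s00)
    return len(s11), len(s00)

assert verify(5) == (2, 2)
```

For example $x^5+x^2+1 \in S_{0,0}(5)$ has $\sigma = 0$, and $\psi^{-1}$ sends it to $x^5+x^4+x^3+x+1 \in S_{1,1}(5)$.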
\subsection{An open problem for even $n$}
To complement the above proof of Theorem~\ref{mainthm}, it would be desirable to have a bijective proof of
Theorem~\ref{evenn}, i.e., a natural map between $S_{1,1}(n)$ and $S_{0,0}(n)$ union the set of trace $1$ irreducibles of
degree $n/2$, when $n$ is even. One obstruction however is that the subset of $S_{0,0}(n)$ consisting of elements with signature $0$ maps to itself under the action on $\mathcal{I}_n$ of the group generated by the reciprocal transform and $\psi$, which is isomorphic to $GL_2({\mathbb F}_2)$ (see~\cite{michon} for a classification of this action). Similarly, the subset of
$S_{1,1}(n)$ consisting of elements with signature $1$ maps to itself under this action. Hence, if such a bijection exists, more sophisticated maps will be required. One possible approach consists of first factoring members of $S_{0,0}(n)$ and $S_{1,1}(n)$ over ${\mathbb F}_4$ into two degree $n/2$
(conjugate) irreducibles and acting on either factor by carefully chosen elements of $GL_2({\mathbb F}_4)$ according to some arithmetic characteristics, just as we did with $\psi$ and $\psi^{-1}$, since then all polynomials concerned are of the same degree. However, the details of this action are naturally more complicated than the one arising from $GL_2({\mathbb F}_2)$ and we leave its study and finding an explicit bijection as an open problem.
\section{The parity of $|S_{1,1}(n)|$}\label{sec:parity}
In this final short section we present an elementary result whose proof arises from simple transforms of polynomials and bijections.
We first recall some relevant definitions and supporting results.
A polynomial $f \in {\mathbb F}_2[x]$ is said to be {\em self-reciprocal} if $f^* = f$. Let the set of degree $n$ self-reciprocal irreducible (SRI) polynomials in ${\mathbb F}_2[x]$ with trace $1$ be denoted by $\text{SRI}_{1}(n)$. For a degree $n$ polynomial $f$ the $Q$-transform of $f$, denoted $f^Q$, is defined to be $x^n f(x + 1/x)$, which is self-reciprocal and of degree $2n$.
A useful and well-known result -- originally due to Varshamov and Garakov~\cite{varshamov} and later generalised by
Meyn~\cite{meyn} -- is that $f^Q$ is irreducible if and only if $f$ is irreducible and $f_1 = 1$. We have the following proposition.
\begin{proposition}
$|S_{1,1}(n)| \equiv 1 \pmod{2}$ if and only if $n = 2^k$ with $k \ge 1$.
\end{proposition}
\begin{proof}
The reciprocal transform acts on $S_{1,1}(n)$, partitioning it into pairs of distinct polynomials $(f,f^*)$ and a set of fixed points, namely
$\text{SRI}_{1}(n)$. Hence $|S_{1,1}(n)| \equiv |\text{SRI}_{1}(n)| \pmod{2}$ and we need only determine the parity of
$|\text{SRI}_{1}(n)|$. For odd $n > 1$ there are no SRIs, since if $\alpha$ is a root of an SRI then so is $1/\alpha$, and so
the number of roots must be even. Therefore let $n$ be even. For any $f \in \text{SRI}_{1}(n)$ there
exists a unique $f'$ of degree $n/2$ such that $f = f'^Q$ (see for instance~\cite[Lemma 6]{ahmadivega}).
One may thus partition $\text{SRI}_{1}(n)$ into pairs of distinct polynomials $(f, (f'^*)^Q)$ and a set of fixed points for
which $f' = f'^{*}$.
These fixed points are precisely $\text{SRI}_{1}(n/2)$, since by the Varshamov-Garakov criterion $f'$ is irreducible and
$f_{1}^{'} = 1$, and the trace $f_{n/2 - 1}^{'}$ equals $f_{1}^{'}$.
Hence $|\text{SRI}_{1}(n)| \equiv |\text{SRI}_{1}(n/2)| \pmod{2}$. If $n = 2^k m$ with odd $m > 1$, then applying this
descent step repeatedly gives
\[
|S_{1,1}(2^k m)| \equiv |\text{SRI}_{1}(2^k m)| \equiv |\text{SRI}_{1}(2^{k-1} m)| \equiv \cdots \equiv |\text{SRI}_{1}(m)|
\equiv 0 \pmod{2}.
\]
On the other hand, if $n = 2^k$ then descending as before gives $|S_{1,1}(2^k)| \equiv |\text{SRI}_{1}(1)| \pmod{2}$. Since $x + 1$ is the only element of $\text{SRI}_1(1)$, the result follows. \qed
\end{proof}
Note that one could in principle analyse Niederreiter's (complicated) explicit formulae~\cite{niederreiter} for $|S_{1,1}(n)|$ in order to obtain this result. However, the above approach is perhaps more enlightening.
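Both ingredients of the proof can be checked by machine for small degrees. The Python sketch below (our own verification) implements the $Q$-transform via $f^Q = \sum_k f_k (x^2+1)^k x^{n-k}$, confirms the Varshamov--Garakov criterion for irreducibles with nonzero constant term of degree at most $4$, and confirms that $|S_{1,1}(n)|$ is odd for $n \le 12$ exactly when $n \in \{2, 4, 8\}$.

```python
def pmod(a, b):
    # remainder of a modulo b in F_2[x]; polynomials are integer bitmasks
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def irreducible(p):
    n = p.bit_length() - 1
    if n <= 1:
        return n == 1
    return all(pmod(p, q) for q in range(2, 1 << (n // 2 + 1)))

def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def ppow(a, e):
    r = 1
    for _ in range(e):
        r = pmul(r, a)
    return r

def qtransform(f, n):
    # f^Q = x^n f(x + 1/x) = sum_k f_k (x^2+1)^k x^(n-k);  5 == x^2 + 1
    r = 0
    for k in range(n + 1):
        if f >> k & 1:
            r ^= ppow(5, k) << (n - k)
    return r

# Varshamov-Garakov: for irreducible f with f(0) = 1,
# f^Q is irreducible exactly when f_1 = 1
for n in range(1, 5):
    for m in range(1 << (n - 1)):
        p = (1 << n) | (m << 1) | 1
        if irreducible(p):
            assert irreducible(qtransform(p, n)) == bool(p >> 1 & 1)

def s11_count(n):
    # |S_{1,1}(n)|: trace bit is f_{n-1}, cotrace bit is f_1
    return sum(1 for m in range(1 << (n - 1))
               if m & 1 and (m >> (n - 2)) & 1
               and irreducible((1 << n) | (m << 1) | 1))

assert [n for n in range(2, 13) if s11_count(n) % 2 == 1] == [2, 4, 8]
```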
\section*{Acknowledgements}
This work was supported by the Engineering and Physical Sciences Research Council via grant number EP/W021633/1. I would like to thank Omran Ahmadi for informing me of his observation, which motivated the search for a bijective proof of Theorem~\ref{mainthm}.
\bibliographystyle{plain}
\section{Introduction}
Classical orthogonal polynomials (Hermite, Laguerre, Jacobi, and Bessel) have been characterized using different approaches. For instance, they can be characterized in terms of differential equations \cite{B29}, their derivatives (\cite{Hahn35,Krall36}), structure relations (\cite{ASC72,Ger40, MBP94}), and a Rodrigues formula (\cite{Tri70}), among others (see \cite{GMM21,MP94} and the references therein). More recently, an approach that uses linear functionals and duality was introduced by P. Maroni (\cite{Ma87}). In all of these approaches, the starting point is to use the basis of monomials to represent polynomials and state the results.
However, an interesting (and most recent) approach is to start from the theory of semi-infinite matrices. The bibliography on this subject has grown greatly in recent years, and it has become increasingly difficult to do a comprehensive review of all the references. Hence, we refer the reader to \cite{V13,V2021} (and the references therein) where the algebra of infinite triangular matrices and the algebra of infinite Hessenberg matrices are used to study some aspects of orthogonal polynomials, and to \cite{D20,M2021} (and the references therein) where the main tool is the Cholesky factorization of Gram matrices of bilinear forms. We remark that the Cholesky factorization proves to be quite fruitful in the study of nonstandard orthogonality such as multiple, matrix, Sobolev, and multivariate orthogonality as well as orthogonality on the unit circle of the complex plane, and has successfully found its way into applications in random matrices, Toda lattices, integrable systems, Riemann-Hilbert problems, Painlevé equations, and Darboux transformations, among other topics.
Our goal is to contribute to the link between matrix factorization and orthogonal polynomials. In particular, we deal with several characterizations of classical orthogonal polynomials. However, we shift our paradigm from infinite matrices to the \textit{finite} Gram matrix $G_n$ associated with a bilinear form defined on the linear space of polynomials of degree at most $n\geqslant 0$. For standard orthogonality, $G_n$ is a Hankel matrix (all of its antidiagonals are constant). Taking into account that the moments of a linear functional associated with a family of classical orthogonal polynomials satisfy a second-order linear recurrence relation (\cite{MBP94}), we can say that this paper deals with Hankel matrices with an additional structure: the entries of $G_n$ satisfy such a recurrence relation. In this way, we can extend the bilinear form to the linear space of polynomials of degree at most $n+1$ by constructing a new Gram matrix $G_{n+1}$, obtained by bordering $G_n$ with a new row and column whose entries are computed from the recurrence relation and the entries of $G_n$. The resulting matrix $G_{n+1}$ will also be a Hankel matrix with the additional structure mentioned above. Consequently, it will be possible to prove by induction that the properties satisfied by $G_n$ are also satisfied by $G_{n+1}$.
The change from infinite matrices to subsequently bordering finite matrices is motivated by an alternative proof of a classical result about the interlacing of zeros of orthogonal polynomials of consecutive degrees found in \cite{GMM21}. This classical result states that if the zeros of any two polynomials of consecutive degrees interlace, then these polynomials are elements of a sequence of orthogonal polynomials associated with a positive definite moment functional. This result can be proved using the Euclidean division algorithm for polynomials. However, this result can also be deduced using the following theorem about interlacing eigenvalues of Hermitian matrices found in \cite[p. 185]{HJ85}:
\begin{theorem}
Let $n$ be a given positive integer, and let $\{x_{n,k}\}_{k=1}^n$ and $\{x_{n-1,k}\}_{k=1}^{n-1}$ be two given sequences of real numbers such that
$$x_{n,1}<x_{n-1,1}<\cdots <x_{n,k}<x_{n-1,k}<x_{n,k+1}<\cdots< x_{n-1,n-1}<x_{n,n}.$$
Let $\Lambda=\operatorname{diag}(x_{n-1,1},x_{n-1,2},\ldots,x_{n-1,n-1})$. Then there exist a real number $b$ and a real vector $\bar{y}=(y_1,\ldots,y_{n-1})^{\top}\in\mathbb{R}^{n-1}$ such that $\{x_{n,k}\}_{k=1}^n$ is the set of eigenvalues of the real symmetric matrix
$$B=\left(\begin{array}{c|c}
\Lambda & \bar{y}\\
\hline
\\[-6pt]
\bar{y}^{\top}&b
\end{array}\right).$$
\end{theorem}
\noindent From this, it seems reasonable to think that the procedure of bordering matrices that encode information about polynomial sequences is well suited for presenting and deducing results about orthogonal polynomials, most likely because, for all $n\geqslant 0$, the linear space of polynomials of degree at most $n$ is a subspace of the space of polynomials of degree at most $n+1$. In this way, the Gram matrices of bilinear forms associated with classical orthogonal polynomials possess the adequate structure to start exploring our proposed paradigm.
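The construction in the theorem above can be tested numerically: the trace fixes $b$, and evaluating the characteristic polynomial of $B$ at each $x_{n-1,j}$ yields $y_j^2$. In the NumPy sketch below, the function name and the closed-form expressions for $b$ and $y_j$ are our own illustration, not part of the theorem statement; we rebuild $B$ from two strictly interlacing sequences and recover the prescribed spectrum.

```python
import numpy as np

def border_with_spectrum(lam, mu):
    """Build the bordered matrix B of the theorem: Lambda = diag(mu) bordered
    by a vector y and a scalar b so that B has eigenvalues lam.
    Requires strict interlacing lam[0] < mu[0] < lam[1] < ... < lam[-1]."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    b = lam.sum() - mu.sum()                 # matching traces fixes b
    y = np.empty(len(mu))
    for j, m in enumerate(mu):
        num = np.prod(lam - m)               # char. polynomial of B at mu_j
        den = np.prod(np.delete(mu, j) - m)
        y[j] = np.sqrt(-num / den)           # interlacing makes -num/den > 0
    return np.block([[np.diag(mu), y[:, None]],
                     [y[None, :], np.array([[b]])]])

B = border_with_spectrum([0.0, 1.0, 3.0], [0.5, 2.0])
eigs = np.sort(np.linalg.eigvalsh(B))        # recovers 0, 1, 3 up to rounding
```

Here `np.linalg.eigvalsh` confirms that the spectrum of the bordered symmetric matrix is the prescribed outer sequence.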
The paper is organized as follows. Section \ref{classicalops} presents basic background on classical orthogonal polynomials and their associated linear functionals, and we introduce classical sequences of real numbers in Section \ref{classicalsequences}. In Section \ref{orthogonality}, we discuss the Cholesky factorization of Hankel matrices obtained from given sequences of real numbers and its relation to orthogonal polynomials. We present several characterizations of classical sequences of real numbers in Section \ref{characterizations}.
\section{Orthogonal polynomials and linear functionals}\label{classicalops}
For $n\geqslant 0$, let $\Pi_n$ be the linear space of polynomials of degree at most $n$ of a real variable and real coefficients, and let $\Pi=\bigcup_{n\geqslant 0}\Pi_n$.
Let $\Pi^*$ denote the algebraic dual space of $\Pi$. That is, $\Pi^*$ is the linear space of linear functionals defined on $\Pi$,
$$
\Pi^*=\left\{\mathbf{u}:\Pi \rightarrow \mathbb{R} \ \ : \ \ \mathbf{u} \text{ is linear} \right\}.
$$
We denote by $\langle \mathbf{u}, p\rangle$ the image of the polynomial $p$ under the linear functional $\mathbf{u}$.
Any linear functional $\mathbf{u}$ is completely defined by the values
$$
\mu_n\, :=\,\langle \mathbf{u}, x^n\rangle,\quad n\geqslant 0,
$$
and extended by linearity to all polynomials, where $\mu_n$ is called the $n$-th moment of $\mathbf{u}$. Therefore, we refer to $\mathbf{u}$ as a moment functional.
A moment functional $\mathbf{u}$ is called positive definite if $\langle \mathbf{u}, p^2\rangle >0$ for every non zero polynomial $p\in \Pi$.
Let $\mathbf{u}$ be a moment functional. A sequence of polynomials $\{P_n(x)\}_{n\geqslant 0}$ is called an orthogonal polynomial sequence (OPS) with respect to $\mathbf{u}$ if
\begin{enumerate}
\item[(1)] $\deg\,P_n=n$,
\item[(2)] $\langle \mathbf{u}, P_n\,P_m\rangle = h_n\,\delta_{n,m}$, with $h_n\ne 0$.
\end{enumerate}
Here $\delta_{n,m}$ denotes the Kronecker delta defined as
$$
\delta_{n,m}=\left\{\begin{array}{ll}
1, & n=m,\\
0, & n\ne m.
\end{array}
\right.
$$
If there exists an OPS associated with $\mathbf{u}$, then $\mathbf{u}$ is called quasi-definite. Positive definite moment functionals are quasi-definite.
Observe that an OPS $\{P_n(x)\}_{n\geqslant 0}$ constitutes a basis for $\Pi$. If for all $n\geqslant 0$, the leading coefficient of $P_n(x)$ is 1, then $\{P_n(x)\}_{n\geqslant 0}$ is called a monic orthogonal polynomial sequence (MOPS).
Given a moment functional $\mathbf{u}$ and a polynomial $q(x)$, we define the left multiplication of $\mathbf{u}$ by $q(x)$ as the moment functional $q\,\mathbf{u}$ such that
$$
\langle q\,\mathbf{u}, p\rangle \,=\,\langle \mathbf{u}, q\,p\rangle, \quad \forall p\in \Pi,
$$
and we define the distributional derivative $D\mathbf{u}$ by
$$
\langle D\mathbf{u}, p\rangle \,=\, -\langle \mathbf{u}, p'\rangle, \quad \forall p\in \Pi.
$$
Moreover, the product rule is satisfied, that is,
$$
D(q\,\mathbf{u})\,=\,q'\,\mathbf{u}+q\,D\mathbf{u}.
$$
\begin{definition}
Let $\mathbf{u}$ be a quasi-definite moment functional, and let $\{P_n(x)\}_{n\geqslant 0}$ be an OPS with respect to $\mathbf{u}$. Then $\mathbf{u}$ is classical if there are nonzero polynomials $\phi(x)$ and $\psi(x)$ with $\deg \phi \leqslant 2$ and $\deg \psi =1$, such that $\mathbf{u}$ satisfies the distributional Pearson equation
\begin{equation}\label{pearson}
D(\phi\,\mathbf{u})\,=\,\psi\,\mathbf{u}.
\end{equation}
The sequence $\{P_n(x)\}_{n\geqslant 0}$ is called a classical OPS.
\end{definition}
The following characterizations of classical moment functionals and OPS will be of central importance in the sequel.
\begin{theorem}\label{th:classical-char}
Let $\mathbf{u}$ be a quasi-definite moment functional, and $\{P_n(x)\}_{n\geqslant 0}$ its associated MOPS. The following statements are equivalent:
\begin{enumerate}
\item[1.] $\mathbf{u}$ is a classical moment functional.
\item[2.] (Bochner, \cite{B29}) There are nonzero polynomials $\phi(x)$ and $\psi(x)$ with $\deg \phi\leqslant 2$ and $\deg \psi=1$ such that, for $n\geqslant 0$, $P_n(x)$ satisfies
\begin{equation}\label{bochner-diffeq}
\phi(x)\,P_n''(x)+\psi(x)\,P_n'(x)\,=\,\lambda_n\,P_n(x),
\end{equation}
where $\lambda_n=n\,(\frac{n-1}{2}\phi''+\psi')$.
\item[3.] (Hahn, \cite{Hahn35}) There is a nonzero polynomial $\phi(x)$ with $\deg \phi\leqslant 2$, such that $\left\{\frac{P_{n+1}'(x)}{n+1} \right\}_{n\geqslant 0}$ is the MOPS associated with the moment functional $\mathbf{v}=\phi(x)\,\mathbf{u}$.
\item[4.] (First structure relation, \cite{ASC72}) There is a nonzero polynomial $\phi(x)$ with $\deg \phi\leqslant 2$, and real numbers $a_n$, $b_n$, $c_n$, $n\geqslant 1$, with $c_n\ne 0$, such that
$$
\phi(x)\,P'_n(x)\,=\,a_n\,P_{n+1}(x)+b_n\,P_{n}(x)+c_n\,P_{n-1}(x), \quad n\geqslant 1.
$$
\item[5.] (Second structure relation, \cite{Ger40, MBP94}) There are real numbers $\alpha_n$ and $\beta_n$, $n\geqslant 2$, such that
\begin{equation}\label{structrel}
P_n(x)\,=\,\frac{P'_{n+1}(x)}{n+1}+\alpha_n\,\frac{P_n'(x)}{n}+\beta_n\,\frac{P'_{n-1}(x)}{n-1},\quad n\geqslant 2.
\end{equation}
\item[6.] (Rodrigues formula, \cite{Tri70}) There are a nonzero polynomial $\phi(x)$ with $\deg \phi\leqslant 2$ and nonzero real numbers $k_n$, $n\geqslant 0$, such that
$$
D^n(\phi^n(x)\,\mathbf{u})\,=\,k_n\,P_n(x)\,\mathbf{u}, \quad n\geqslant 0.
$$
\end{enumerate}
\end{theorem}
It is well known (see \cite{B29} as well as \cite{Krall41}) that, up to affine transformations of the independent variable, the only families of positive definite classical orthogonal polynomials are the Hermite, Laguerre, and Jacobi polynomials. The corresponding moment functionals admit an integral representation of the form
$$
\langle \mathbf{u}, p\rangle \,=\,\int_{I}p(x)\,w(x)\,dx, \quad p\in \Pi,
$$
where $I=\mathbb{R}$ and $w(x)=e^{-x^2}$ in the Hermite case, $I=(0,+\infty)$ and $w(x)=x^{\alpha}e^{-x}$ with $\alpha>-1$ in the Laguerre case, and $I=(-1,1)$ and $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$ with $\alpha,\beta>-1$ in the Jacobi case. We note that in each case, $w(x)>0$ on $I$ and, thus, we say that $w(x)$ is a weight function.
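For each of these weights, the distributional Pearson equation \eqref{pearson} holds in the classical sense as $(\phi\,w)'=\psi\,w$ on $I$. A short SymPy check of this identity, written in the equivalent logarithmic-derivative form $\phi\,w'/w+\phi'=\psi$; the particular normalizations of $\phi$ and $\psi$ below are the standard choices, but the dictionary layout is ours:

```python
import sympy as sp

x = sp.symbols('x')
alpha, beta = sp.symbols('alpha beta', positive=True)

# (weight w, phi, psi) for each classical family
cases = {
    'Hermite':  (sp.exp(-x**2),               sp.Integer(1), -2*x),
    'Laguerre': (x**alpha * sp.exp(-x),       x,             alpha + 1 - x),
    'Jacobi':   ((1-x)**alpha * (1+x)**beta,  1 - x**2,
                 beta - alpha - (alpha + beta + 2)*x),
}

for name, (w, phi, psi) in cases.items():
    # Pearson equation (phi*w)' = psi*w, rewritten as phi*(w'/w) + phi' = psi
    residual = sp.simplify(phi * sp.diff(w, x) / w + sp.diff(phi, x) - psi)
    assert residual == 0, name
```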
The definition of classical moment functionals in terms of the distributional Pearson equation not only encompasses positive definite moment functionals associated with weight functions, but includes the non positive definite case as well. Considering the non positive definite case gives rise to the Bessel classical moment functional, which satisfies the distributional Pearson equation \eqref{pearson} with $\phi(x)=x^2$ and $\psi(x)=ax+2$. The Bessel functional is quasi-definite when $a\ne -1,-2,\ldots$. Moreover, it has the following integral representation:
$$
\langle \mathbf{u}, p\rangle = \int_c p(z)\,w(z)\,dz, \quad p\in \Pi,
$$
where $w(z)=(2\,\pi\,i)^{-1}z^{a-2}e^{-2/z}$, and $c$ is the unit circle oriented in the counter-clockwise direction.
Observe that from Theorem \ref{th:classical-char}, if $\mathbf{u}$ is a classical moment functional satisfying~\eqref{pearson}, then $\mathbf{v}=\phi(x)\,\mathbf{u}$ is a classical moment functional satisfying the Pearson equation
$$
D(\phi\,\mathbf{v})\,=\,(\psi+\phi')\,\mathbf{v}.
$$
Iterating this idea, we get that the higher-order derivatives of classical orthogonal polynomials are again classical orthogonal polynomials of the same type.
\begin{theorem}[\cite{Hahn35,Krall41, Krall36}]\label{th:polynomials-Q}
Let $\mathbf{u}$ be a classical moment functional satisfying \eqref{pearson}, and $\{P_n(x)\}_{n\geqslant 0}$ its corresponding MOPS. For $k\geqslant 0$, let $\mathbf{v}_k=\phi^k(x)\,\mathbf{u}$ and $\{Q_{n,k}(x)\}_{n\geqslant 0}$ be the sequence of polynomials given by
\begin{equation}\label{eq-polynomials-Q}
Q_{n,k}(x):=\dfrac{1}{(n+1)_k}P_{n+k}^{(k)}(x), \quad n\geqslant 0,
\end{equation}
where $p^{(k)}$ is the $k$-th derivative of $p$, and $(\nu)_k=\nu\,(\nu+1)\cdots (\nu+k-1)$, $(\nu)_0=1$, denotes the Pochhammer symbol. Then, for each $k\geqslant 0$, $\{Q_{n,k}(x)\}_{n\geqslant 0}$ is a MOPS associated with the moment functional $\mathbf{v}_k$, satisfying
$$
D(\phi\,\mathbf{v}_k)\,=\,\psi_k\,\mathbf{v}_k,
$$
where $\psi_k(x)=\psi(x)+k\,\phi'(x)$. Hence, $\mathbf{v}_k$ is a classical moment functional.
\end{theorem}
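In the Hermite case $\phi(x)=1$, so every $\mathbf{v}_k$ equals $\mathbf{u}$ and \eqref{eq-polynomials-Q} with $k=1$ states that $P_{n+1}'(x)/(n+1)=P_n(x)$ for the monic Hermite polynomials. A NumPy sketch of this check; the generating recurrence $P_{n+1}=x\,P_n-\tfrac{n}{2}\,P_{n-1}$ is the standard one for the weight $e^{-x^2}$, and the helper name is ours:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def monic_hermite(N):
    """Monic Hermite polynomials for the weight e^{-x^2}, as coefficient
    arrays in increasing degree, via P_{n+1} = x*P_n - (n/2)*P_{n-1}."""
    polys = [np.array([1.0]), np.array([0.0, 1.0])]
    for n in range(1, N):
        polys.append(P.polysub(P.polymulx(polys[n]), 0.5 * n * polys[n - 1]))
    return polys

polys = monic_hermite(6)
for n in range(5):
    # Q_{n,1} = P_{n+1}'/(n+1) coincides with P_n, since phi = 1 here
    assert np.allclose(P.polyder(polys[n + 1]) / (n + 1), polys[n])
```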
\section{Classical sequences of numbers}\label{classicalsequences}
This section is devoted to presenting the definition of classical moment functionals from a different approach. We start by introducing sequences of real numbers that satisfy a second order recurrence relation and use them to construct linear functionals defined on $\Pi$.
\begin{definition}\label{def:classical sequence}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers with $\mu_0\ne 0$. Then $\{\mu_n\}_{n\geqslant 0}$ is a pre-classical sequence if there are real numbers $a,b,c,d,e$ satisfying
$$
|a|+|b|+|c|>0, \quad n\,a+d\ne 0, \quad n\geqslant 0,
$$
such that the following holds
\begin{equation}\label{eq:ttr-moments}
(n\,a+d)\,\mu_{n+1}+(n\,b+e)\,\mu_n+n\,c\,\mu_{n-1}=0, \quad n \geqslant 0.
\end{equation}
By convention, $\mu_n=0$ whenever $n<0$.
\end{definition}
Let $\{\mu_n\}_{n\geqslant 0}$ be a pre-classical sequence of real numbers. Then it is possible to define a functional $\mathbf{u}$ as
$$
\mu_n\, :=\,\langle \mathbf{u}, x^n\rangle,\quad n\geqslant 0,
$$
and extend it by linearity to all polynomials, where $\mu_n$ is called the $n$-th moment of $\mathbf{u}$. Therefore, we refer to $\mathbf{u}$ as a pre-classical moment functional. Observe that the condition $n\,a+d\ne 0$, $n\geqslant 0$, guarantees that $\mathbf{u}$ is completely defined since each moment
$$
\mu_{n+1}=-\frac{1}{n\,a+d}[(n\,b+e)\,\mu_n+n\,c\,\mu_{n-1}], \quad n\geqslant 0,
$$
is well-defined.
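As an example (the choice of data is ours), the Hermite weight $e^{-x^2}$ corresponds to $\phi(x)=1$ and $\psi(x)=-2x$, that is, $a=b=0$, $c=1$, $d=-2$, $e=0$ in \eqref{eq:ttr-moments}. The following Python sketch generates the moments from the recurrence and reproduces the Gaussian moments $\mu_{2k}=\int_{\mathbb{R}}x^{2k}e^{-x^2}\,dx$:

```python
from math import sqrt, pi

def moments(a, b, c, d, e, mu0, N):
    """First N+1 moments of a pre-classical sequence from the recurrence
    (n*a+d)*mu_{n+1} + (n*b+e)*mu_n + n*c*mu_{n-1} = 0  (mu_{-1} := 0)."""
    mu = [mu0]
    for n in range(N):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n*b + e) * mu[n] + n*c * prev) / (n*a + d))
    return mu

# Hermite: a = b = 0, c = 1, d = -2, e = 0, and mu_0 = sqrt(pi)
mu = moments(0, 0, 1, -2, 0, sqrt(pi), 6)
# the odd moments vanish, while mu_2 = sqrt(pi)/2 and mu_4 = 3*sqrt(pi)/4
```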
The recurrence relation \eqref{eq:ttr-moments} can be passed down to the pre-classical moment functional associated with $\{\mu_n\}_{n\geqslant 0}$.
\begin{theorem} \label{th:pearson}
A sequence $\{\mu_n\}_{n\geqslant 0}$ is pre-classical if and only if there are non zero polynomials $\phi(x)$ and $\psi(x)$ with $\deg \phi\leqslant 2$, $\deg \psi =1$, and $\frac{n}{2}\phi''+\psi'\ne 0$ for $n\geqslant 0$, such that the moment functional $\mathbf{u}$ defined by $\mu_n=\langle \mathbf{u},x^n\rangle$ satisfies the distributional Pearson equation
\begin{equation*}
D(\phi\,\mathbf{u})\,=\,\psi\,\mathbf{u}.
\end{equation*}
\end{theorem}
\begin{proof}
Suppose that $\mathbf{u}$ satisfies \eqref{pearson} with nonzero polynomials $\phi(x)=a\,x^2+b\,x+c$ and $\psi(x)=d\,x+e$, $\deg \psi=1$, such that $\frac{n}{2}\phi''+\psi'=n\,a+d\ne 0$ for $n\geqslant 0$. Then
$$
\langle \mathbf{u}, \phi\,p'+\psi\,p\rangle =0, \quad \forall p\in \Pi.
$$
In particular,
$$
0=\langle \mathbf{u}, n\,\phi\,x^{n-1}+\psi\,x^n\rangle = (n\,a+d)\,\mu_{n+1}+(n\,b+e)\,\mu_n+n\,c\,\mu_{n-1} , \quad n\geqslant 0.
$$
Therefore $\{\mu_n\}_{n\geqslant 0}$ is a pre-classical sequence of real numbers and, thus, $\mathbf{u}$ is a pre-classical moment functional. It is easy to verify that the implication in the opposite direction holds by reversing each of the previous steps.
\end{proof}
For any sequence $\{\mu_n\}_{n\geqslant 0}$, we can define the sequence of matrices $\{G_n\}_{n\geqslant 0}$ where $G_n$ is an $(n+1)\times (n+1)$ matrix given by $G_0=\mu_0$ and
\begin{equation}\label{def:Gn}
G_n= \left[\begin{array}{ccc|c}
& & & \mu_n \\
& G_{n-1} & &\vdots \\
& & &\mu_{2n-1}\\
\hline
\mu_n & \ldots & \mu_{2n-1} & \mu_{2n}
\end{array}\right]=\begin{bmatrix}
\mu_0 & \mu_1 & \ldots & \mu_n \\
\mu_1 & \mu_2 & \ldots & \mu_{n+1} \\
\vdots & \vdots & \ddots & \vdots \\
\mu_n & \mu_{n+1} & \dots & \mu_{2n}
\end{bmatrix}, \quad n\geqslant 0.
\end{equation}
In particular, consider the sequence $\{\mu_n\}_{n\geqslant 0}$ with $\mu_0=1$ and $\mu_n=0$ for $n\geqslant 1$. Observe that this sequence corresponds to the linear functional $\boldsymbol{\delta}\in \Pi^*$, known as the Dirac delta, defined as
$$
\langle \boldsymbol{\delta}, p\rangle =p(0), \quad \forall p\in \Pi.
$$
In this case, we have $G_0=1$,
$$
G_n= \begin{bmatrix}
1 & 0 & \ldots & 0 \\
0 & 0 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & 0
\end{bmatrix},
$$
and $\det G_n =0$ for $n\geqslant 1$. In the sequel, we will need to exclude this and other similar cases and, therefore, we impose that $\det G_n \ne 0$ for $n\geqslant 0$. Hence, we have the following definition.
\begin{definition}\label{def:classical}
A pre-classical sequence $\{\mu_n\}_{n\geqslant 0}$ is classical if the sequence of matrices $\{G_n\}_{n\geqslant 0}$ defined as in \eqref{def:Gn} satisfies $\det G_n \ne 0$ for $n\geqslant 0$. The moment functional defined by $\mu_n=\langle \mathbf{u}, x^n\rangle$ is called a classical moment functional.
\end{definition}
In the sequel, some orthogonal bases for $\mathbb{R}^{n+1}$ associated with a classical sequence of numbers will play an important role in our study of such sequences. Therefore, in the following section we discuss the orthogonal structure of $\mathbb{R}^{n+1}$ induced by general sequences of numbers.
\section{Sequences of numbers and orthogonality}\label{orthogonality}
Given an integer $n\geqslant 0$, a sequence of numbers induces a bilinear form in $\mathbb{R}^{n+1}$ whose Gram matrix is $G_n$ defined in \eqref{def:Gn}. In this section, we explore orthogonal bases of $\mathbb{R}^{n+1}$ associated with a sequence of numbers and use them to construct bases of polynomials in $\Pi_n$ orthogonal with respect to the corresponding moment functional.
For $n\geqslant 0$, we will denote by $\bar{e}_0,\ldots,\bar{e}_{n}$ the columns of the identity matrix $I_{n+1}$, that is, $I_{n+1}=[\bar{e}_0 \ \bar{e}_1 \ \ldots \ \bar{e}_{n}]$. The set of column vectors $\mathcal{E}_{n}=\{\bar{e}_0,\ldots,\bar{e}_{n}\}$ is called the canonical basis for $\mathbb{R}^{n+1}$.
\begin{definition}\label{def:bilinearform}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of numbers. For $n\geqslant 0$, $\mathfrak{B}_n(\cdot,\cdot)$ denotes the bilinear form defined by
$$
\mathfrak{B}_n(\bar{u},\bar{v})\,=\,\bar{u}^{\top}\,G_n\,\bar{v}, \quad \forall \bar{u},\bar{v}\in \mathbb{R}^{n+1},
$$
and is called the bilinear form associated with $\{\mu_n\}_{n\geqslant 0}$ (relative to the canonical basis of $\mathbb{R}^{n+1}$).
\end{definition}
For $n\geqslant 0$, if $\det G_k>0$ for $k=0,1,\ldots,n$ (that is, if $G_n$ is positive definite), then $\mathfrak{B}_n(\cdot,\cdot)$ is an inner product on $\mathbb{R}^{n+1}$. In this case, we can define the norm
$$
\|\bar{u}\|_n= \sqrt{\mathfrak{B}_n(\bar{u},\bar{u})}, \quad \forall \bar{u}\in \mathbb{R}^{n+1}.
$$
Of course, there are many orthogonal bases for $\mathbb{R}^{n+1}$ associated with $\mathfrak{B}_n$. However, we are interested in orthogonal bases obtained from the Cholesky factorization of $G_n$. Recall that if $\det G_n\ne 0$, there is an $(n+1)\times (n+1)$ unit lower triangular matrix (that is, with $1$'s in its main diagonal) $S_n^{-1}$ and an $(n+1)\times (n+1)$ non singular diagonal matrix $H_n$ such that
$$
G_n=S_n^{-1}\,H_n\,S_n^{-\top},
$$
where $S_n^{-\top}=(S_n^{-1})^{\top}$ (see Theorem 4.1.3 in \cite{GV13}). Moreover, this matrix factorization is unique. We can immediately observe that if we write the above identity as
$$
S_n\,G_n\,S_n^{\top}\,=\,H_n,
$$
then we have an orthogonality relation for the columns of $S_n^{\top}$ as we show in the following theorem.
\begin{theorem}\label{th:choleskyorth}
Let $G_n=S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky decomposition of $G_n$. Then the columns of $S_n^{\top}$ form an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form $\mathfrak{B}_n$.
\end{theorem}
\begin{proof}
If $\bar{s}_0,\bar{s}_1,\ldots,\bar{s}_{n}$ are the columns of $S_n^{\top}$, then
$$
H_n=S_n\,G_n\,S_n^{\top}=\begin{bmatrix}
\bar{s}_0^{\top} \\[3pt] \bar{s}_1^{\top} \\ \vdots \\ \bar{s}_{n}^{\top}
\end{bmatrix}\,G_n\, \begin{bmatrix} \bar{s}_0 & \bar{s}_1 & \ldots & \bar{s}_{n} \end{bmatrix}.
$$
Since $H_n$ is a diagonal matrix and $G_n$ is the representation of the bilinear form $\mathfrak{B}_n$, it follows that
$$
\mathfrak{B}_n(\bar{s}_i,\bar{s}_j)=0 \quad \text{for} \quad i\ne j,
$$
and
$$
\mathfrak{B}_n(\bar{s}_i,\bar{s}_i)=h_i, \quad i=0,1,\ldots,n,
$$
where $h_i$ is the $i$-th (nonzero) diagonal entry of $H_n$, that is, $H_n=\text{diag}(h_0,h_1,\ldots,h_{n})$.
\end{proof}
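Theorem \ref{th:choleskyorth} is easy to test numerically. In the NumPy sketch below (helper names ours), we build the Hankel matrix of Hermite moments, extract the unit-diagonal factorization $G_n=S_n^{-1}H_nS_n^{-\top}$ from the standard Cholesky factorization $G_n=LL^{\top}$, and check that $S_nG_nS_n^{\top}$ is diagonal:

```python
import numpy as np
from math import sqrt, pi

def hermite_moments(count):
    """Moments mu_0,...,mu_count of the Hermite weight e^{-x^2}, generated
    by the recurrence -2*mu_{n+1} + n*mu_{n-1} = 0 with mu_0 = sqrt(pi)."""
    mu = [sqrt(pi), 0.0]
    for m in range(1, count):
        mu.append(0.5 * m * mu[m - 1])
    return mu

n = 4
mu = hermite_moments(2 * n)
G = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])

# Unit-diagonal factorization G = S^{-1} H S^{-T} from numpy's G = L L^T
L = np.linalg.cholesky(G)
Sinv = L / np.diag(L)          # unit lower triangular: column j divided by L[j,j]
H = np.diag(np.diag(L) ** 2)
S = np.linalg.inv(Sinv)

# Rows of S (= columns of S^T) are B_n-orthogonal: S G S^T is the diagonal H
assert np.allclose(S @ G @ S.T, H)
```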
The following theorem shows that the Cholesky factorizations of $G_n$ and $G_{n+1}$ are related. In fact, the Cholesky factorization of $G_{n+1}$ is obtained by bordering $S_n$ and $H_n$ with a new row and a new column. The proof relies heavily on the expressions for $G_{n}^{-1}$ and $\det G_n$ in terms of cofactors. If $(G_n)_{i,j}$ denotes the $(i,j)$-cofactor of $G_n$, $i,j=0,1,\ldots,n$, then, for any fixed $i$,
$$
\det G_n = \sum_{k=0}^n\,\mu_{i+k}\,(G_n)_{i,k}, \quad G_n^{-1}=\frac{1}{\det G_n}\begin{bmatrix} (G_n)_{0,0} & (G_{n})_{1,0} & \ldots & (G_n)_{n,0} \\
(G_n)_{0,1} & (G_n)_{1,1} & \ldots & (G_n)_{n,1} \\
\vdots & \vdots & \ddots & \vdots \\
(G_n)_{0,n} & (G_n)_{1,n} & \ldots & (G_n)_{n,n}
\end{bmatrix}.
$$
Hereon, $\mathtt{0}$ will denote the zero matrix of appropriate size.
\begin{theorem}\label{th:choleskySn+1}
Let $n\geqslant 0$, and let $S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky factorization of $G_n$. Then, $G_{n+1}=S_{n+1}^{-1}\,H_{n+1}\,S_{n+1}^{-\top}$ with
$$
S_{n+1}^{\top}=\left[\begin{array}{c|c}S_n^{\top} & \hat{s}_{n+1} \\
\hline
\mathtt{0} & 1 \end{array} \right], \quad H_{n+1}=\left[\begin{array}{c|c}H_n & \mathtt{0}\\
\hline
\mathtt{0} & h_{n+1} \end{array} \right],
$$
where $[\hat{s}_{n+1}^{\top} \ 1 ]^{\top}$ is a vector given by the formal determinant
\begin{equation}\label{eq:choleskySn+1}
\begin{bmatrix}
\hat{s}_{n+1} \\
1
\end{bmatrix}= \frac{1}{\det G_n} \det\left[\begin{array}{ccc|c} & & & \mu_{n+1}\\
& G_n & & \vdots \\
& & & \mu_{2n+1} \\
\hline
\bar{e}_0 & \ldots & \bar{e}_n & \bar{e}_{n+1}\end{array} \right],
\end{equation}
and
\begin{equation}\label{eq:hn+1}
h_{n+1}=\frac{\det G_{n+1}}{\det G_n}.
\end{equation}
\end{theorem}
\begin{proof}
Observe that the columns of $[S_n^{\top} \ \mathtt{0}]^{\top}$ are linear combinations of the vectors $\bar{e}_0,\ldots,\bar{e}_n\in \mathbb{R}^{n+2}$ (that is, the first $n+1$ columns of $I_{n+2}$). Then $H_{n+1}=S_{n+1}\,G_{n+1}\,S_{n+1}^{\top}$ with $S_{n+1}^{\top}$ and $H_{n+1}$ as in \eqref{eq:choleskySn+1} and \eqref{eq:hn+1} if and only if
$$
\mathfrak{B}_{n+1}([\hat{s}_{n+1}^{\top} \ 1 ]^{\top},\bar{e}_i)=0, \quad i=0,1,\ldots,n,
$$
and
$$
\mathfrak{B}_{n+1}([\hat{s}_{n+1}^{\top} \ 1 ]^{\top},[\hat{s}_{n+1}^{\top} \ 1 ]^{\top})=\frac{\det G_{n+1}}{\det G_n}.
$$
The above conditions can be written as a system of linear equations:
\begin{equation*}
G_n\,\hat{s}_{n+1}=-\begin{bmatrix}
\mu_{n+1} \\ \vdots \\ \mu_{2n+1}
\end{bmatrix}.
\end{equation*}
By Cramer's rule,
\begin{equation}\label{eq:vectorshat}
\hat{s}_{n+1}=\frac{1}{\det G_n}\bigg((G_{n+1})_{n+1,0}\,\bar{e}_0+\cdots+(G_{n+1})_{n+1,n}\,\bar{e}_n \bigg),
\end{equation}
which is the expansion of the determinant \eqref{eq:choleskySn+1} across the last row.
Moreover, we also have
\begin{equation}\label{eq:choleskyvector}
\hat{s}_{n+1}=-G_n^{-1}\begin{bmatrix}
\mu_{n+1} \\ \vdots \\ \mu_{2n+1}
\end{bmatrix},
\end{equation}
and
\begin{align*}
S_{n+1}\,G_{n+1}\,S_{n+1}^{\top}&= \left[\begin{array}{c|c}S_n & \mathtt{0} \\
\hline
\\[-8pt]
\hat{s}_{n+1}^{\top} & 1 \end{array} \right]\,\left[\begin{array}{ccc|c}
& & & \mu_{n+1} \\
& G_{n} & &\vdots \\
& & &\mu_{2n+1}\\
\hline
\mu_{n+1} & \ldots & \mu_{2n+1} & \mu_{2n+2}
\end{array}\right]\,\left[\begin{array}{c|c}S_n^{\top} & \hat{s}_{n+1} \\
\hline
\mathtt{0} & 1 \end{array} \right]\\[5pt]
&= \left[\begin{array}{c|c} H_n & S_n\,B_n \\ \hline \\[-8pt] C_n\,S_n^{\top} & h_{n+1} \end{array}\right],
\end{align*}
where
$$
B_n= G_n\,\hat{s}_{n+1}+\begin{bmatrix}
\mu_{n+1} \\ \vdots \\ \mu_{2n+1}
\end{bmatrix}, \quad C_n= \hat{s}_{n+1}^{\top}\,G_n+\begin{bmatrix}
\mu_{n+1} \\ \vdots \\ \mu_{2n+1}
\end{bmatrix}^{\top}, \quad h_{n+1} = [\hat{s}_{n+1}^{\top} \,|\, 1]\begin{bmatrix}
\mu_{n+1} \\ \vdots \\ \mu_{2n+1} \\ \hline \mu_{2n+2}
\end{bmatrix}.
$$
It follows from \eqref{eq:choleskyvector} that $B_n=\mathtt{0}$ and $C_n=\mathtt{0}$. Finally, using \eqref{eq:vectorshat} and the fact that $(G_{n+1})_{n+1,n+1}=\det G_n$, we obtain
$$
h_{n+1}=\frac{1}{\det G_n}\sum_{j=0}^{n+1} \mu_{n+1+j}(G_{n+1})_{n+1,j}= \frac{\det G_{n+1}}{\det G_n},
$$
which proves \eqref{eq:hn+1}.
\end{proof}
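A numerical check of Theorem \ref{th:choleskySn+1} for the Hermite moments $\mu_k=\int_{\mathbb{R}}x^k e^{-x^2}\,dx$ (helper names ours): the border vector solves the linear system \eqref{eq:choleskyvector}, and the bordered quadratic form returns the determinant ratio \eqref{eq:hn+1}.

```python
import numpy as np
from math import sqrt, pi

def hermite_moments(count):
    # moments of e^{-x^2}: -2*mu_{n+1} + n*mu_{n-1} = 0, mu_0 = sqrt(pi)
    mu = [sqrt(pi), 0.0]
    for m in range(1, count):
        mu.append(0.5 * m * mu[m - 1])
    return mu

def hankel(mu, n):
    return np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])

n = 3
mu = hermite_moments(2 * n + 2)
Gn, Gn1 = hankel(mu, n), hankel(mu, n + 1)

# The border vector solves G_n s_hat = -(mu_{n+1},...,mu_{2n+1})^T
s_hat = -np.linalg.solve(Gn, np.array(mu[n + 1:2 * n + 2]))

# The bordered quadratic form returns h_{n+1} = det(G_{n+1}) / det(G_n)
v = np.concatenate([s_hat, [1.0]])
h = v @ Gn1 @ v
assert np.isclose(h, np.linalg.det(Gn1) / np.linalg.det(Gn))
```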
We can reformulate the above discussion in the context of $\Pi_n$ as follows. Given a sequence of numbers $\{\mu_n\}_{n\geqslant 0}$, denote by $\mathbf{u}$ the moment functional defined by $\mu_n=\langle \mathbf{u}, x^n\rangle$. If $\det G_n\ne 0$, then
$$
\langle \mathbf{u}, p\,q\rangle, \quad p,q\in \Pi_n,
$$
is a (non degenerate) bilinear form defined on $\Pi_n$. It is easy to see that its Gram matrix relative to the basis of monomials is $G_n$. Let $G_n=S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky decomposition of $G_n$ ($\det G_n\ne 0$) and let
$$
S_n=\begin{bmatrix}
1 & 0 & 0 & \ldots & 0\\
s_{1,0} & 1 & 0 & \ldots & 0\\
s_{2,0} & s_{2,1} & 1 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
s_{n,0} & s_{n,1} & s_{n,2} & \ldots & 1
\end{bmatrix}
$$
be the explicit expression of the matrix $S_n$. It follows from Theorem \ref{th:choleskyorth} that the set of polynomials $\{P_0(x),P_1(x),\ldots, P_n(x)\}$ where
$$
P_k(x)=s_{k,0}+s_{k,1}x+\cdots+s_{k,k-1}x^{k-1}+x^k, \quad k=0,1,2,\ldots,n,
$$
forms an orthogonal basis for $\Pi_n$ with respect to $\mathbf{u}$; that is, $\deg P_k = k$ and
$$
\langle \mathbf{u}, P_j\,P_i\rangle = h_j\,\delta_{i,j}, \quad 0\leqslant i,j\leqslant n,
$$
with $h_j\ne 0$. Moreover, by Theorem \ref{th:choleskySn+1} the polynomial
\begin{equation*}
P_{n+1}(x)= \frac{1}{\det G_n} \det\left[\begin{array}{ccc|c} & & & \mu_{n+1}\\
& G_n & & \vdots \\
& & & \mu_{2n+1} \\
\hline
1 & \ldots & x^n & x^{n+1}\end{array} \right],
\end{equation*}
has degree exactly $n+1$, is orthogonal to every polynomial in $\Pi_n$, and
$$
h_{n+1}=\langle \mathbf{u}, P_{n+1}^2\rangle = \frac{\det G_{n+1}}{\det G_n}.
$$
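For the Hermite moments, the rows of $S_n$ produced this way are exactly the coefficient vectors of the monic Hermite polynomials, which can be confirmed numerically (a NumPy sketch with our helper names):

```python
import numpy as np
from math import sqrt, pi

def hermite_moments(count):
    # moments of e^{-x^2}: -2*mu_{n+1} + n*mu_{n-1} = 0, mu_0 = sqrt(pi)
    mu = [sqrt(pi), 0.0]
    for m in range(1, count):
        mu.append(0.5 * m * mu[m - 1])
    return mu

n = 3
mu = hermite_moments(2 * n)
G = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])

L = np.linalg.cholesky(G)
S = np.linalg.inv(L / np.diag(L))       # the unit lower triangular factor S_n

# Row k of S_n lists s_{k,0},...,s_{k,k-1},1: the monic orthogonal P_k
P2, P3 = S[2, :3], S[3, :4]
assert np.allclose(P2, [-0.5, 0.0, 1.0])       # P_2(x) = x^2 - 1/2
assert np.allclose(P3, [0.0, -1.5, 0.0, 1.0])  # P_3(x) = x^3 - (3/2) x
```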
\section{Characterizations of classical sequences}\label{characterizations}
Our goal for this section is to recast Theorem \ref{th:classical-char} in terms of classical sequences by shifting our point of view from the moment functional $\mathbf{u}$ to the Gram matrix $G_n$ associated with a bilinear form defined on $\Pi_n$. Recall that $G_n$ is a Hankel matrix (all of its antidiagonals are constant). Hence, we can say that this section deals with Hankel matrices with an additional structure: the entries of $G_n$ satisfy the recurrence relation \eqref{eq:ttr-moments}. In this way, we can extend the bilinear form to $\Pi_{n+1}$ by constructing a new Gram matrix $G_{n+1}$ by means of bordering $G_n$ with a new row and column whose entries are obtained with \eqref{eq:ttr-moments} from the entries of $G_n$. The resulting matrix $G_{n+1}$ will also be a Hankel matrix with the additional structure mentioned above. Consequently, it will be possible to prove by induction that the properties satisfied by $G_n$ are also satisfied by $G_{n+1}$.
The following matrices will play an important role in the sequel.
\begin{definition}\label{def:matrixR}
Let $a,b,c,d$, and $e$ be real numbers such that $|a|+|b|+|c|>0$ and $n\,a+d\ne 0$ for $n\geqslant 0$. For $n\geqslant 1$, we define the $n\times (n+1)$ matrix $R_n$ recursively as follows:
$$
R_{n}=\left[ \begin{array}{ccccc|c}
& & & & & 0 \\
& & & R_{n-1} & & \vdots \\
& & & & & 0\\
\hline
0 & \ldots & 0 & (n-1)\,c & (n-1)\,b+e & (n-1)\,a+d
\end{array}\right], \quad R_1=\begin{bmatrix} e & d \end{bmatrix}.
$$
\end{definition}
Consider the differential operator $\mathcal{R}: \Pi\rightarrow \Pi$ defined as
$$
\mathcal{R}[p]=\phi(x)\,p'+\psi(x)\,p, \quad \forall p\in \Pi,
$$
where $\phi(x)=a\,x^2+b\,x+c$ and $\psi(x)=d\,x+e$. Observe that
$$
\mathcal{R}[x^n]\,=\,(n\,a+d)\,x^{n+1}+(n\,b+e)\,x^n+n\,c\,x^{n-1}, \quad n\geqslant 0.
$$
In this way, the matrix $R_{n+1}^{\top}$ is the matrix representation relative to the basis of monomials of $\mathcal{R}$ restricted to $\Pi_n$.
\bigskip
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers. Define the vector of moments
$$
\mathtt{M}_n=\begin{bmatrix}\mu_0 & \mu_1 & \ldots & \mu_n\end{bmatrix}^{\top}, \quad n\geqslant 0.
$$
If $\{\mu_n\}_{n\geqslant 0}$ is pre-classical, then equation \eqref{eq:ttr-moments} for $\{\mu_0,\ldots,\mu_n\}$ can be written as
$$
R_n\,\mathtt{M}_n=\mathtt{0}.
$$
This implies that if $\mathbf{u}$ is the moment functional defined as $\mu_n:=\langle \mathbf{u}, x^n\rangle$, then by \eqref{eq:ttr-moments}, the following holds
$$
\left\langle \mathbf{u}, \mathcal{R}[x^n] \right\rangle =0, \quad n\geqslant 0.
$$
Now, consider the vector whose entries are the monomials in $\Pi_n$:
$$
\mathtt{X}_n=\begin{bmatrix}1 & x & \ldots & x^n \end{bmatrix}^{\top}.
$$
If we define the $n\times (n+1)$ matrix
$$
N_{n}=\left[ \begin{array}{c|c}
N_{n-1} & \mathtt{0} \\
\hline
\mathtt{0} & n
\end{array}\right], \quad N_1=\begin{bmatrix}0 & 1 \end{bmatrix}, \quad N_0 = 0,
$$
then
$$
\frac{d}{dx}\mathtt{X}_n\,=\,N_n^{\top}\,\mathtt{X}_{n-1}.
$$
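Both identities can be tested numerically. The sketch below (helper names ours) assembles $R_n$ and $N_n$ for the Hermite data $a=b=0$, $c=1$, $d=-2$, $e=0$ and checks $R_n\,\mathtt{M}_n=\mathtt{0}$ as well as $\frac{d}{dx}\mathtt{X}_n=N_n^{\top}\mathtt{X}_{n-1}$ at a sample point:

```python
import numpy as np
from math import sqrt, pi

a, b, c, d, e = 0.0, 0.0, 1.0, -2.0, 0.0   # Hermite: phi(x) = 1, psi(x) = -2x

def R(n):
    """The n x (n+1) matrix R_n: row k encodes the recurrence
    (k*a+d)*mu_{k+1} + (k*b+e)*mu_k + k*c*mu_{k-1} = 0."""
    M = np.zeros((n, n + 1))
    for k in range(n):
        if k >= 1:
            M[k, k - 1] = k * c
        M[k, k] = k * b + e
        M[k, k + 1] = k * a + d
    return M

def hermite_moments(count):
    mu = [sqrt(pi), 0.0]
    for m in range(1, count):
        mu.append(0.5 * m * mu[m - 1])
    return mu

n = 5
Mn = np.array(hermite_moments(n))          # moment vector (mu_0,...,mu_n)^T
assert np.allclose(R(n) @ Mn, 0.0)         # R_n M_n = 0: pre-classical sequence

# N_n and the derivative identity d/dx X_n = N_n^T X_{n-1}, checked at x0 = 0.7
N = np.zeros((n, n + 1))
for r in range(n):
    N[r, r + 1] = r + 1
x0 = 0.7
Xm = np.array([x0 ** k for k in range(n)])
dXn = np.array([k * x0 ** (k - 1) if k else 0.0 for k in range(n + 1)])
assert np.allclose(N.T @ Xm, dXn)
```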
\subsection{Bochner-type characterization}
For $n\geqslant 0$, let us introduce the $(n+1)\times (n+1)$ matrix $\mathtt{D}_n$ defined as
$$
\mathtt{D}_0=0, \qquad \mathtt{D}_n:=R_n^{\top}N_n, \quad n\geqslant 1.
$$
It is possible to express $\mathtt{D}_n$ recursively as follows:
\begin{align*}
\mathtt{D}_{n}=\left[ \begin{array}{ccc|c}
& & & 0 \\
& & & \vdots \\
& \mathtt{D}_{n-1} & & 0\\
& & & n\,(n-1)\,c\\[6pt]
& & & n\,((n-1)\,b+e)\\[6pt]
\hline
0 & \ldots & 0 & n\,((n-1)\,a+d)
\end{array}\right], \quad n\geqslant 1.
\end{align*}
Observe that $\mathtt{D}_n$ is the matrix representation relative to the basis of monomials of the operator
\begin{equation}\label{def:operatorD}
\mathcal{D}[p]:=\mathcal{R}\left[\frac{d}{dx}p \right]=\phi(x)\,p''+\psi(x)\,p', \quad \forall p\in \Pi,
\end{equation}
restricted to $\Pi_n$.
In the following theorem, we show that $\mathtt{D}_n$ is a self-adjoint matrix with respect to the bilinear form $\mathfrak{B}_n$ given in Definition \ref{def:bilinearform}.
\begin{theorem}\label{th:selfadjoint}
Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence satisfying \eqref{eq:ttr-moments}, and let $\mathfrak{B}_n$ be the bilinear form given in Definition \ref{def:bilinearform}. Then, for $n\geqslant 0$, the matrix $\mathtt{D}_n$ satisfies
\begin{equation}\label{eq:selfadjoint}
\mathfrak{B}_n(\mathtt{D}_n\bar{u}, \bar{v})=\mathfrak{B}_n(\bar{u}, \mathtt{D}_n\bar{v}), \quad \bar{u},\bar{v}\in \mathbb{R}^{n+1}.
\end{equation}
\end{theorem}
\begin{proof}
Observe that proving \eqref{eq:selfadjoint} is equivalent to proving
$$
\mathtt{D}_n^{\top}\,G_n=G_n\,\mathtt{D}_n.
$$
We prove this for $n\geqslant 0$ by induction.
It is obvious that $\mathtt{D}_0^{\top}\,G_0=G_0\,\mathtt{D}_0$. We also prove the case $n=1$ for the sake of clarity since it is the first non trivial case. We compute
$$
\mathtt{D}_1^{\top}\,G_1=\left[\begin{array}{c|c}
0 & 0 \\
\hline
e & d
\end{array}\right]\left[\begin{array}{c|c}
\mu_0 & \mu_1 \\
\hline
\mu_1 & \mu_2
\end{array}\right]=\left[\begin{array}{c|c}
\mathtt{D}_0^{\top} & 0 \\
\hline
e & d
\end{array}\right]\left[\begin{array}{c|c}
G_0 & \mu_1 \\
\hline
\mu_1 & \mu_2
\end{array}\right].
$$
Again, multiplying by blocks, we get
$$
\mathtt{D}_1^{\top}\,G_1=\left[\begin{array}{c|c}
\mathtt{D}_0^{\top}\,G_0 & \mathtt{D}_0^{\top}\,\mu_1 \\
\hline
d\,\mu_1+e\,\mu_0 & e\,\mu_1+d\,\mu_2
\end{array}\right]=\left[\begin{array}{c|c}
G_0\,\mathtt{D}_0^{\top} & 0 \\
\hline
d\,\mu_1+e\,\mu_0 & e\,\mu_1+d\,\mu_2
\end{array}\right].
$$
Since $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence, by condition \eqref{eq:ttr-moments} with $n=0$, we have
$$
\mu_1\,\mathtt{D}_0=0=d\,\mu_1+e\,\mu_0.
$$
Therefore, we can write
$$
\mathtt{D}_1^{\top}\,G_1=\left[\begin{array}{c|c}
G_0\,\mathtt{D}_0^{\top} & e\,\mu_0+d\,\mu_1 \\
\hline
\mu_1\,\mathtt{D}_0 & e\,\mu_1+d\,\mu_2
\end{array}\right]=\left[\begin{array}{c|c}
G_0 & \mu_1 \\
\hline
\mu_1 & \mu_2
\end{array}\right]\left[\begin{array}{c|c}
\mathtt{D}_0 & e \\
\hline
0 & d
\end{array}\right]=G_1\,\mathtt{D}_1.
$$
This proves $\mathtt{D}_1^{\top}\,G_1=G_1\,\mathtt{D}_1$.
Now, suppose that $\mathtt{D}_k^{\top}\,G_k=G_k\,\mathtt{D}_k$ holds for some $k\geqslant 0$. We compute
\begin{align*}
&\mathtt{D}_{k+1}^{\top}\,G_{k+1}\\
&=\left[ \begin{array}{ccccc|c}
& & & & & 0 \\
& & & \mathtt{D}_{k}^{\top} & & \vdots \\
& & & & & 0\\
\hline
0 & \ldots & 0 & (k+1)\,k\,c & (k+1)\,(k\,b+e) & (k+1)\,(k\,a+d)
\end{array}\right]\\
&\qquad \times \left[\begin{array}{ccc|c}
& & & \mu_{k+1} \\
& G_{k} & &\vdots \\
& & &\mu_{2k+1}\\
\hline
\mu_{k+1} & \ldots & \mu_{2k+1} & \mu_{2k+2}
\end{array}\right].
\end{align*}
Multiplying by blocks, we get
$$
\mathtt{D}_{k+1}^{\top}\,G_{k+1}=\left[ \begin{array}{c|c}
\mathtt{D}_{k}^{\top}\,G_k & \bar{x}_k\\[3pt]
\hline
\\[-6pt]
\bar{y}_k^{\top} & \bar{z}_k
\end{array}
\right],
$$
where
$$
\bar{x}_k=\mathtt{D}_k^{\top}\begin{bmatrix}\mu_{k+1} \\ \vdots \\ \mu_{2k+1} \end{bmatrix},
$$
$$
\bar{y}_k^{\top}= \left[ (k+1)\,k\,c \ \ \ (k+1)\,(k\,b+e) \ \ \ (k+1)\,(k\,a+d)
\right]\left[\begin{array}{ccc}
\mu_{k-1} & \ldots & \mu_{2k-1} \\
\mu_k & \ldots & \mu_{2k} \\
\hline
\mu_{k+1} & \ldots & \mu_{2k+1}
\end{array}\right],
$$
and
$$
\bar{z}_k=(k+1)\,k\,c\,\mu_{2k}+(k+1)\,(k\,b+e)\,\mu_{2k+1}+(k+1)\,(k\,a+d)\,\mu_{2k+2}=\bar{z}_k^{\top}.
$$
Since by our induction hypothesis we have
$$
\mathtt{D}_{k+1}^{\top}\,G_{k+1}=\left[ \begin{array}{c|c}
G_k\,\mathtt{D}_{k} & \bar{x}_k\\[3pt]
\hline
\\[-6pt]
\bar{y}_k^{\top} & \bar{z}_k^{\top}
\end{array}
\right],
$$
it is clear that our main efforts should focus on showing that $\bar{x}_k=\bar{y}_k$. Observe that
$$
\bar{x}_k=\left[ \begin{array}{ccccc|c}
& & & & & 0 \\
& & & \mathtt{D}_{k-1}^{\top} & & \vdots \\
& & & & & 0\\
\hline
0 & \ldots & 0 & k\,(k-1)\,c & k\,((k-1)\,b+e) & k\,((k-1)\,a+d)
\end{array}\right]\begin{bmatrix}\mu_{k+1} \\ \vdots \\ \mu_{2k+1} \end{bmatrix}.
$$
Then, the $k$-th entry of $\bar{x}_k$ is
$$
k\,(k-1)\,c\,\mu_{2k-1}+k\,((k-1)\,b+e)\,\mu_{2k}+k\,((k-1)\,a+d)\,\mu_{2k+1},
$$
while the $k$-th entry of $\bar{y}_k$ is
$$
(k+1)\,k\,c\,\mu_{2k-1}+(k+1)\,(k\,b+e)\,\mu_{2k}+(k+1)\,(k\,a+d)\,\mu_{2k+1},
$$
which means that the $k$-th entry of $\bar{y}_k-\bar{x}_k$ is
$$
(2k\,a+d)\,\mu_{2k+1}+(2k\,b+e)\,\mu_{2k}+2k\,c\,\mu_{2k-1}=0,
$$
where we have used condition \eqref{eq:ttr-moments}.
Now, noticing that the last row of $\mathtt{D}_{k-1}^{\top}$ is
$$
\begin{bmatrix}
0 & \ldots & 0 & (k-1)\,(k-2)\,c & (k-1)\,((k-2)\,b+e) & (k-1)\,((k-2)\,a+d)
\end{bmatrix},
$$
we have that the $(k-1)$-th entry of $\bar{x}_k$ is
$$
(k-1)\,(k-2)\,c\,\mu_{2k-2}+(k-1)\,((k-2)\,b+e)\,\mu_{2k-1}+(k-1)\,((k-2)\,a+d)\,\mu_{2k},
$$
while the $(k-1)$-th entry of $\bar{y}_k$ is
$$
(k+1)\,k\,c\,\mu_{2k-2}+(k+1)\,(k\,b+e)\,\mu_{2k-1}+(k+1)\,(k\,a+d)\,\mu_{2k}.
$$
Then the $(k-1)$-th entry of $\bar{y}_k-\bar{x}_k$ is
$$
2\bigg[((2k-1)\,a+d)\,\mu_{2k}+((2k-1)\,b+e)\,\mu_{2k-1}+(2k-1)\,c\,\mu_{2k-2}\bigg]=0,
$$
where, again, we have used \eqref{eq:ttr-moments} with $n=k-1$.
If we continue in this way, then for $i=0,1,\ldots,k$, the $(i+1)$-th entry of $\bar{y}_k-\bar{x}_k$ is
$$
(k+1-i)\bigg[((k+i)\,a+d)\mu_{k+1+i}+((k+i)\,b+e)\,\mu_{k+i}+(k+i)\,c\,\mu_{k-1+i} \bigg]=0.
$$
This means that $\bar{x}_k=\bar{y}_k$ and, consequently,
$$
\mathtt{D}_{k+1}^{\top}\,G_{k+1}=\left[ \begin{array}{c|c}
G_k\,\mathtt{D}_{k} & \bar{y}_k\\[3pt]
\hline
\\[-6pt]
\bar{x}_k^{\top} & \bar{z}_k^{\top}
\end{array}
\right] = G_{k+1}\,\mathtt{D}_{k+1},
$$
which proves that $\mathtt{D}_n^{\top}\,G_n=G_n\,\mathtt{D}_n$ holds for $n\geqslant 0$.
\end{proof}
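The identity just proved lends itself to a direct numerical check. The following sketch is illustrative only and rests on two assumptions read off from the displays above: that \eqref{eq:ttr-moments} is the recurrence $(n\,a+d)\,\mu_{n+1}+(n\,b+e)\,\mu_n+n\,c\,\mu_{n-1}=0$, and that column $j$ of $\mathtt{D}_n$ carries $j\,[(j-1)\,a+d]$, $j\,[(j-1)\,b+e]$, $j\,(j-1)\,c$ in rows $j$, $j-1$, $j-2$. The Laguerre-type data $\phi(x)=x$, $\psi(x)=-x+3$ is an arbitrary classical instance chosen for the test.

```python
# Sanity check (not part of the paper): D_n^T G_n = G_n D_n for a sample
# classical sequence generated from the assumed moment recurrence.

def moments(a, b, c, d, e, m):
    # mu_0 = 1 and (n a + d) mu_{n+1} + (n b + e) mu_n + n c mu_{n-1} = 0
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

def hankel(mu, n):
    # G_n = [mu_{i+j}], 0 <= i, j <= n
    return [[mu[i + j] for j in range(n + 1)] for i in range(n + 1)]

def D_matrix(n, a, b, c, d, e):
    # column j: j[(j-1)a+d] in row j, j[(j-1)b+e] in row j-1, j(j-1)c in row j-2
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        D[j][j] = j * ((j - 1) * a + d)
        if j >= 1:
            D[j - 1][j] = j * ((j - 1) * b + e)
        if j >= 2:
            D[j - 2][j] = j * (j - 1) * c
    return D

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Laguerre-type data: phi = x, psi = -x + 3, so mu_{n+1} = (n + 3) mu_n.
a, b, c, d, e = 0.0, 1.0, 0.0, -1.0, 3.0
n = 4
mu = moments(a, b, c, d, e, 2 * n)
G, D = hankel(mu, n), D_matrix(n, a, b, c, d, e)
lhs, rhs = matmul(transpose(D), G), matmul(G, D)
scale = max(abs(x) for row in rhs for x in row)
defect = max(abs(lhs[i][j] - rhs[i][j])
             for i in range(n + 1) for j in range(n + 1))
```

With these integer-valued moments the two products agree to machine precision, as the theorem predicts.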
Let us now consider the eigenvectors of $\mathtt{D}_n$. Suppose that $\bar{u},\bar{v}\in \mathbb{R}^{n+1}$ are eigenvectors corresponding to distinct eigenvalues $\lambda$ and $\overline{\lambda}$, respectively. Then, by Theorem \ref{th:selfadjoint},
$$
\lambda\,\mathfrak{B}_n(\bar{u}, \bar{v})=\mathfrak{B}_n(\lambda\,\bar{u},\bar{v})=\mathfrak{B}_n(\mathtt{D}_n\bar{u},\bar{v})=\mathfrak{B}_n(\bar{u},\mathtt{D}_n\bar{v})=\mathfrak{B}_n(\bar{u},\overline{\lambda}\,\bar{v})=\overline{\lambda}\,\mathfrak{B}_n(\bar{u},\bar{v}),
$$
which implies
$$
(\lambda-\overline{\lambda})\,\mathfrak{B}_n(\bar{u},\bar{v})=0.
$$
Since $\lambda\ne \overline{\lambda}$, we must have that $\mathfrak{B}_n(\bar{u},\bar{v})=0$ and, therefore, $\bar{u}$ and $\bar{v}$ are orthogonal with respect to $\mathfrak{B}_n$. We have already encountered the eigenvectors of $\mathtt{D}_n$, as the following theorem shows. This theorem is, in fact, a characterization of classical sequences in terms of $\mathtt{D}_n$.
\begin{theorem}\label{th:eighenvectors}
For $n\geqslant 0$, let $S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky factorization of $G_n$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_n^{\top}$. Then $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence if and only if
$$
\mathtt{D}_n\,\bar{s}_{n,j}\,=\,\lambda_j\,\bar{s}_{n,j}, \quad n\geqslant 0, \quad 0\leqslant j \leqslant n,
$$
where $\lambda_j=j\,[(j-1)\,a+d]$.
\end{theorem}
\begin{proof}
Suppose that $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence. For $n=0$, we have $\mathtt{D}_0=0$ and $\bar{s}_{0,0}=1$, so it is obvious that $\mathtt{D}_0\,\bar{s}_{0,0}=\lambda_0\,\bar{s}_{0,0}$.
For $n=1$,
$$
\mathtt{D}_1=\left[\begin{array}{c|c}
\mathtt{D}_0 & e \\
\hline
0 & d
\end{array}\right], \quad \bar{s}_{1,0}=\left[ \begin{array}{c} \bar{s}_{0,0} \\ \hline 0 \end{array}\right].
$$
Then, multiplying by blocks, we have
$$
\mathtt{D}_1\,\bar{s}_{1,0}=\left[\begin{array}{c} \mathtt{D}_0\,\bar{s}_{0,0} \\ \hline 0 \end{array}\right]=\lambda_0\,\left[ \begin{array}{c} \bar{s}_{0,0} \\ \hline 0 \end{array}\right]=\lambda_0\,\bar{s}_{1,0}.
$$
Since $S_1^{\top}$ is upper triangular with $1$'s on its diagonal, $\{\bar{s}_{1,0},\bar{s}_{1,1}\}$ constitutes a basis for $\mathbb{R}^2$. Then we can write
$$
\mathtt{D}_1\,\bar{s}_{1,1}=a_{1,1}\,\bar{s}_{1,1}+a_{1,0}\,\bar{s}_{1,0},
$$
for some constants $a_{1,1}$ and $a_{1,0}$. Using the orthogonality of the columns of $S_1^{\top}$ with respect to $\mathfrak{B}_1$ and Theorem \ref{th:selfadjoint}, we obtain
$$
a_{1,j}=\frac{\mathfrak{B}_1(\mathtt{D}_1\,\bar{s}_{1,1},\bar{s}_{1,j})}{h_j}=\frac{\mathfrak{B}_1(\bar{s}_{1,1},\mathtt{D}_1\,\bar{s}_{1,j})}{h_j}, \quad j=0,1.
$$
It follows that $a_{1,0}=0$, since $\mathtt{D}_1\,\bar{s}_{1,0}=\lambda_0\,\bar{s}_{1,0}$ and $\mathfrak{B}_1(\bar{s}_{1,1},\bar{s}_{1,0})=0$. Thus,
$$
\mathtt{D}_1\,\bar{s}_{1,1}=a_{1,1}\,\bar{s}_{1,1}=\begin{bmatrix}
* \\ a_{1,1}
\end{bmatrix},
$$
where we have taken into account that $\bar{s}_{1,1}=[* \ 1]^{\top}$ (the value of the entry denoted by $*$ has no relevance). If we multiply by blocks, we get
$$
\mathtt{D}_1\,\bar{s}_{1,1}=\begin{bmatrix}
* \\ d
\end{bmatrix},
$$
which implies that $a_{1,1}=d=\lambda_1$.
Now, suppose that
$$
\mathtt{D}_k\,\bar{s}_{k,j}\,=\,\lambda_j\,\bar{s}_{k,j}, \quad 0\leqslant j \leqslant k,
$$
holds for some $k\geqslant 0$. Recall that
\begin{align*}
\mathtt{D}_{k+1}=\left[ \begin{array}{ccc|c}
& & & 0 \\
& & & \vdots \\
& \mathtt{D}_{k} & & 0\\
& & & (k+1)\,k\,c\\[6pt]
& & & (k+1)\,(k\,b+e)\\[6pt]
\hline
0 & \ldots & 0 & (k+1)\,(k\,a+d)
\end{array}\right],
\end{align*}
and that, by Theorem \ref{th:choleskySn+1},
$$
\bar{s}_{k+1,j}=\left[\begin{array}{c}
\bar{s}_{k,j} \\
\hline
0
\end{array}\right], \quad 0\leqslant j \leqslant k, \quad \text{and} \quad \bar{s}_{k+1,k+1}=\left[\begin{array}{c}
* \\
\hline
1
\end{array}\right],
$$
where the values of the entries denoted by $*$ are not relevant here. Multiplying by blocks and using the induction hypothesis, we get
$$
\mathtt{D}_{k+1}\,\bar{s}_{k+1,j}=\left[\begin{array}{c}
\mathtt{D}_{k}\,\bar{s}_{k,j} \\
\hline
0
\end{array}\right]=\lambda_j\,\bar{s}_{k+1,j}, \quad 0\leqslant j \leqslant k.
$$
Since $S_{k+1}^{\top}$ is an upper triangular matrix with $1$'s on its diagonal, its columns constitute a basis for $\mathbb{R}^{k+2}$. Then, we can write
$$
\mathtt{D}_{k+1}\,\bar{s}_{k+1,k+1}=\sum_{j=0}^{k+1}\,a_{k+1,j}\,\bar{s}_{k+1,j},
$$
where, by the orthogonality of the columns of $S_{k+1}^{\top}$ with respect to $\mathfrak{B}_{k+1}$ and Theorem \ref{th:selfadjoint}, we have
\begin{align*}
a_{k+1,j}&=\frac{\mathfrak{B}_{k+1}(\mathtt{D}_{k+1}\,\bar{s}_{k+1,k+1},\bar{s}_{k+1,j})}{h_j}\\
&=\frac{\mathfrak{B}_{k+1}(\bar{s}_{k+1,k+1},\mathtt{D}_{k+1}\,\bar{s}_{k+1,j})}{h_j}, \quad 0\leqslant j \leqslant k+1.
\end{align*}
It follows that
\begin{align*}
a_{k+1,j}=\,\lambda_j\,\frac{\mathfrak{B}_{k+1}(\bar{s}_{k+1,k+1},\bar{s}_{k+1,j})}{h_j}\,=\,0, \quad 0\leqslant j \leqslant k,
\end{align*}
and, consequently,
$$
\mathtt{D}_{k+1}\,\bar{s}_{k+1,k+1}=a_{k+1,k+1}\,\bar{s}_{k+1,k+1}=\left[\begin{array}{c}
* \\
\hline
a_{k+1,k+1}
\end{array}\right].
$$
Moreover, if we multiply by blocks, we get
$$
\mathtt{D}_{k+1}\,\bar{s}_{k+1,k+1}=\left[\begin{array}{c}
* \\
\hline
\lambda_{k+1}
\end{array}\right],
$$
which implies that $a_{k+1,k+1}=\lambda_{k+1}$. The direct implication is thus proved by the principle of induction.
\bigskip
Conversely, suppose that $\mathtt{D}_n\,\bar{s}_{n,j}=\lambda_j\,\bar{s}_{n,j}$ holds for $n\geqslant 0$ and $0\leqslant j \leqslant n$. If $\mathfrak{B}_n$ is the bilinear form defined in Definition \ref{def:bilinearform}, then
$$
\mathfrak{B}_n(\mathtt{D}_n\,\bar{s}_{n,j},\bar{s}_{n,k})\,=\,\mathfrak{B}_n(\bar{s}_{n,j},\mathtt{D}_n\,\bar{s}_{n,k}), \quad n\geqslant 0, \quad 0\leqslant j,k\leqslant n.
$$
Indeed,
\begin{align*}
\mathfrak{B}_n(\mathtt{D}_n\,\bar{s}_{n,j},\bar{s}_{n,k})-\mathfrak{B}_n(\bar{s}_{n,j},\mathtt{D}_n\,\bar{s}_{n,k})=\,(\lambda_j-\lambda_k)\,\mathfrak{B}_n(\bar{s}_{n,j},\bar{s}_{n,k})=\,(\lambda_j-\lambda_k)\,h_j\,\delta_{j,k},
\end{align*}
which vanishes for all $0\leqslant j,k\leqslant n$. This implies that
$$
S_n\,(\mathtt{D}_n^{\top}\,G_n-G_n\,\mathtt{D}_n)\,S_n^{\top}\,=\,\mathtt{0}, \quad n\geqslant 0,
$$
or, equivalently,
$$
\mathtt{D}_n^{\top}\,G_n-G_n\,\mathtt{D}_n\,=\,\mathtt{0}.
$$
Since $\mathtt{D}_n=R_n^{\top}N_n$, the first column of the above matrix identity reads $N_n^{\top}\,R_n\,\mathtt{M}_n=\mathtt{0}$, or, equivalently,
$$
k\,\left[\big((k-1)\,a+d \big)\,\mu_{k}+\big((k-1)\,b+e \big)\,\mu_{k-1}+(k-1)\,c\,\mu_{k-2} \right]=0,
$$
for $n\geqslant 0$ and $0\leqslant k \leqslant n$. It follows that $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments}; hence, it is pre-classical. Furthermore, $\{\mu_n\}_{n\geqslant 0}$ is classical since, otherwise, $G_n$ would not have a Cholesky factorization for some $n\geqslant 0$.
\end{proof}
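The direct part of the theorem can be probed numerically: extracting the unit-triangular factor of $G_n$ from an $LDL^{\top}$ factorization (so that $L=S_n^{-1}$) and checking that the columns of $S_n^{\top}$ are eigenvectors of $\mathtt{D}_n$ with eigenvalues $\lambda_j=j\,[(j-1)\,a+d]$. The sketch below rests on the same assumptions as before about \eqref{eq:ttr-moments} and the structure of $\mathtt{D}_n$; the Laguerre-type data $\phi(x)=x$, $\psi(x)=-x+3$ is an illustrative choice, for which $\lambda_j=-j$.

```python
# Sketch (not part of the paper): columns of S_n^T, recovered from an LDL^T
# factorization of the Hankel matrix G_n, should be eigenvectors of D_n.

def moments(a, b, c, d, e, m):
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

def hankel(mu, n):
    return [[mu[i + j] for j in range(n + 1)] for i in range(n + 1)]

def D_matrix(n, a, b, c, d, e):
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        D[j][j] = j * ((j - 1) * a + d)
        if j >= 1:
            D[j - 1][j] = j * ((j - 1) * b + e)
        if j >= 2:
            D[j - 2][j] = j * (j - 1) * c
    return D

def ldlt(G):
    # G = L H L^T with L unit lower triangular; L plays the role of S_n^{-1}
    m = len(G)
    L = [[0.0] * m for _ in range(m)]
    H = [0.0] * m
    for i in range(m):
        for j in range(i + 1):
            s = G[i][j] - sum(L[i][k] * H[k] * L[j][k] for k in range(j))
            if i == j:
                H[i], L[i][i] = s, 1.0
            else:
                L[i][j] = s / H[j]
    return L, H

def unit_lower_inv(L):
    # rows of L^{-1} = S_n are the transposed columns of S_n^T
    m = len(L)
    M = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    for i in range(m):
        for j in range(i):
            M[i][j] = -sum(L[i][k] * M[k][j] for k in range(j, i))
    return M

a, b, c, d, e = 0.0, 1.0, 0.0, -1.0, 3.0   # Laguerre-type: phi = x, psi = -x + 3
n = 4
mu = moments(a, b, c, d, e, 2 * n)
G, D = hankel(mu, n), D_matrix(n, a, b, c, d, e)
L, H = ldlt(G)
S = unit_lower_inv(L)                      # S = S_n
s_cols = [[S[j][i] if i <= j else 0.0 for i in range(n + 1)]
          for j in range(n + 1)]           # s_cols[j] = column j of S_n^T
lam = [j * ((j - 1) * a + d) for j in range(n + 1)]
residual = max(
    abs(sum(D[i][k] * s_cols[j][k] for k in range(n + 1))
        - lam[j] * s_cols[j][i])
    for j in range(n + 1) for i in range(n + 1)
)
scale = max(abs(x) for s in s_cols for x in s)
```

The rows of $L^{-1}$ are the coefficient vectors of the monic orthogonal polynomials; for this data $\bar{s}_{n,1}$ corresponds to $x-3$.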
The above results can be passed down to the operator $\mathcal{D}$ defined in \eqref{def:operatorD}. Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers, and let $\mathbf{u}$ be the moment functional defined as $\mu_n=\langle \mathbf{u}, x^n\rangle$, $n\geqslant 0$. Then \eqref{eq:selfadjoint} implies that
$$
\left\langle \mathbf{u}, \mathcal{D}[p]\,q \right\rangle \,=\, \left\langle \mathbf{u}, p\,\mathcal{D}[q]\right\rangle, \quad \forall p,q\in \Pi.
$$
That is, $\mathcal{D}$ is a self-adjoint operator on polynomials. Moreover, for $n\geqslant 0$, let $S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky factorization of $G_n$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_n^{\top}$. From Theorem \ref{th:eighenvectors} we deduce that the polynomials $\{P_n\}_{n\geqslant 0}$ given by
$$
P_n(x)\,=\,\bar{s}_{n,n}^{\top}\,\mathtt{X}_n, \quad n\geqslant 0,
$$
are eigenfunctions of the operator $\mathcal{D}$. That is,
$$
\mathcal{D}[P_n]\,=\,\lambda_n\,P_n, \quad n\geqslant 0,
$$
with $\lambda_n=n\,[(n-1)\,a+d]$. Note that $\{P_n\}_{n\geqslant 0}$ is a sequence of polynomials orthogonal with respect to $\mathbf{u}$.
\subsection{Hahn-type characterization}
Let $\{ \mu_n \}_{n \geqslant 0}$ be a sequence of real numbers and let $a,b,c\in \mathbb{R}$ be such that $|a|+|b|+|c|>0$. We can define a new sequence $\{\sigma_{n}\}_{n\geqslant 0}$ as follows:
$$
\sigma_n = a \,\mu_{n+2} + b\, \mu_{n+1} + c\, \mu_n, \quad n\geqslant 0.
$$
Notice that if $\mathbf{u}$ is the moment functional defined as $\mu_n=\langle \mathbf{u}, x^n\rangle$, then $\{\sigma_n\}_{n\geqslant 0}$ is the sequence of moments of the functional given by $\mathbf{v}=\phi(x)\,\mathbf{u}$ where $\phi(x)=a\,x^2+b\,x+c$. Indeed, for $n\geqslant 0$,
$$
\langle \mathbf{v},x^n\rangle = \langle \mathbf{u},\phi\,x^n\rangle =\langle \mathbf{u}, a\,x^{n+2}+b\,x^{n+1}+c\,x^n\rangle = a \mu_{n+2} + b \mu_{n+1} + c \mu_n=\sigma_n.
$$
We denote by $\{ G_n^{(1)}\}_{n \geqslant 0}$ the sequence of $(n+1)\times (n+1)$ matrices with
\begin{equation}\label{def:Gn1}
G_{0}^{(1)} = \sigma_0, \quad \text{and} \quad G_n^{(1)}= \left[\begin{array}{ccc|c}
& & & \sigma_n \\
& G_{n-1}^{(1)} & &\vdots \\
& & &\sigma_{2n-1}\\
\hline
\sigma_n & \ldots & \sigma_{2n-1} & \sigma_{2n}
\end{array}\right], \quad n\geqslant 1.
\end{equation}
The following theorem shows that the pre-classical character is inherited by $\{\sigma_n\}_{n\geqslant 0}$.
\begin{theorem}\label{th:sigmapreclassical}
If $\{ \mu_n \}_{n \geqslant 0}$ is pre-classical satisfying \eqref{eq:ttr-moments}, then $\{ \sigma_n \}_{n \geqslant 0}$ is pre-classical satisfying
$$
(n\,a+d_1)\,\sigma_{n+1}+(n\,b+e_1)\,\sigma_n+n\,c\,\sigma_{n-1}\,=\,0, \quad n\geqslant 0,
$$
where $d_1=d+2\,a$ and $e_1=e+b$. Moreover,
\begin{equation}\label{eq:sigmas}
\sigma_n \, =\, -(n\,a+d)\,\mu_{n+2} - (n\,b+e)\,\mu_{n+1}- n\,c\,\mu_n, \quad n\geqslant 0.
\end{equation}
\end{theorem}
\begin{proof}
For $n\geqslant 0$, we compute
\begin{align*}
&(n\,a+d_1)\, \sigma_{n+1} + (n\,b+e_1)\, \sigma_n + n\,c\, \sigma_{n-1}\\
&= a\,\big( [(n+2)a + d] \mu_{n+3} + [(n+2)b + e] \mu_{n+2} + (n+2)c \mu_{n+1}-b\,\mu_{n+2}-2\,c\,\mu_{n+1} \big) \\
&\ \ + b\,\big( [(n+1)a + d] \mu_{n+2} + [(n+1)b + e] \mu_{n+1} + (n+1)c\, \mu_{n}+a\,\mu_{n+2}-c\,\mu_n \big) \\
&\ \ \ + c\,\big( (n\,a + d)\, \mu_{n+1} + (n\,b + e)\, \mu_{n} + n\,c\, \mu_{n-1} +2\,a\,\mu_{n+1}+b\,\mu_n\big),
\end{align*}
where we have used $\sigma_n = a\, \mu_{n+2} + b\, \mu_{n+1} + c\, \mu_n$. By \eqref{eq:ttr-moments}, we have
\begin{align*}
(n\,a+d_1)&\, \sigma_{n+1} + (n\,b+e_1)\, \sigma_n + n\,c\, \sigma_{n-1}\\
&= -a\,b\, \mu_{n+2} - 2\,a\,c\,\mu_{n+1} +a\,b\, \mu_{n+2} -b\,c\, \mu_n + 2\,a\,c\, \mu_{n+1} + b\,c\, \mu_n = 0.
\end{align*}
Finally, \eqref{eq:sigmas} follows from the fact that \eqref{eq:ttr-moments} can be written as
$$
a \,\mu_{n+2} + b\, \mu_{n+1} + c\, \mu_n\,=\,-(n\,a+d)\,\mu_{n+2} - (n\,b+e)\,\mu_{n+1}- n\,c\,\mu_n.
$$
\end{proof}
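Both displays of the theorem are easy to check numerically. The sketch below is illustrative only; it assumes the recurrence form of \eqref{eq:ttr-moments} stated earlier and instantiates it with Jacobi-type data $\phi(x)=1-x^2$, $\psi(x)=-4x$ (so $a=-1$, $b=0$, $c=1$, $d=-4$, $e=0$), for which the moment recurrence gives $\mu_{n+1}=n\,\mu_{n-1}/(n+4)$.

```python
# Check (not part of the paper): sigma_n = a mu_{n+2} + b mu_{n+1} + c mu_n
# satisfies the shifted recurrence with d_1 = d + 2a, e_1 = e + b, and agrees
# with the alternative expression (eq:sigmas).

def moments(a, b, c, d, e, m):
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

# Jacobi-type data: phi = 1 - x^2, psi = -4x
a, b, c, d, e = -1.0, 0.0, 1.0, -4.0, 0.0
m = 12
mu = moments(a, b, c, d, e, m + 2)
sigma = [a * mu[i + 2] + b * mu[i + 1] + c * mu[i] for i in range(m + 1)]
d1, e1 = d + 2 * a, e + b
rec_defect = max(
    abs((n * a + d1) * sigma[n + 1] + (n * b + e1) * sigma[n]
        + n * c * (sigma[n - 1] if n else 0.0))
    for n in range(m)
)
alt_defect = max(
    abs(sigma[n] + (n * a + d) * mu[n + 2] + (n * b + e) * mu[n + 1]
        + n * c * mu[n])
    for n in range(m)
)
```

For this data $\sigma_0=\mu_0-\mu_2=4/5$, and both defects vanish to machine precision.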
When $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence of real numbers, the matrix $G_n^{(1)}$ satisfies an interesting and useful relation involving the matrices $G_n$ and $\mathtt{D}_n$.
\begin{prop}\label{prop:NGN}
Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers satisfying \eqref{eq:ttr-moments}. Then, for $n\geqslant 0$,
$$
N_{n+1}^{\top}\,G_n^{(1)}\,N_{n+1}\,=\,-\mathtt{D}_{n+1}^{\top}\,G_{n+1}.
$$
\end{prop}
\begin{proof}
We prove this theorem by induction. For $n=0$, on one hand we have
$$
N_1^{\top}\,G_0^{(1)}\,N_1\,=\,\begin{bmatrix} 0 \\ 1 \end{bmatrix}\,\sigma_0\,\begin{bmatrix} 0 & 1 \end{bmatrix} \,=\,\begin{bmatrix} 0 & 0 \\ 0 & \sigma_0\end{bmatrix}.
$$
On the other hand, we have
$$
-\mathtt{D}_{1}^{\top}\,G_{1}\,=\,\begin{bmatrix}
0 & 0 \\
-e & -d
\end{bmatrix}\,\begin{bmatrix}
\mu_0 & \mu_1 \\
\mu_1 & \mu_2
\end{bmatrix}\,=\,\begin{bmatrix} 0 & 0 \\ -e\,\mu_0-d\,\mu_1 & -e\,\mu_1-d\,\mu_2 \end{bmatrix}.
$$
Since $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments}, we have that $-e\,\mu_0-d\,\mu_1=0$ and, by \eqref{eq:sigmas}, $-e\,\mu_1-d\,\mu_2=\sigma_0$. Therefore,
$$
-\mathtt{D}_{1}^{\top}\,G_{1}\,=\,\begin{bmatrix} 0 & 0 \\ 0 & \sigma_0\end{bmatrix},
$$
which proves that $N_1^{\top}\,G_0^{(1)}\,N_1 = -\mathtt{D}_{1}^{\top}\,G_{1}$.
Now, suppose that $N_{k+1}^{\top}\,G_k^{(1)}\,N_{k+1}\,=\,-\mathtt{D}_{k+1}^{\top}\,G_{k+1}$ holds for some $k\geqslant 0$. On one hand, we compute
\begin{align*}
N_{k+2}^{\top}\,&G_{k+1}^{(1)}\,N_{k+2}\\
&=\,\left[ \begin{array}{c|c}
N_{k+1}^{\top} & \mathtt{0} \\[3pt]
\hline
\\[-6pt]
\mathtt{0} & k+2
\end{array}\right] \,\left[\begin{array}{ccc|c}
& & & \sigma_{k+1} \\
& G_{k}^{(1)} & &\vdots \\
& & &\sigma_{2k+1}\\
\hline
\sigma_{k+1} & \ldots & \sigma_{2k+1} & \sigma_{2k+2}
\end{array}\right]\, \left[ \begin{array}{c|c}
N_{k+1} & \mathtt{0} \\[3pt]
\hline
\\[-6pt]
\mathtt{0} & k+2
\end{array}\right].
\end{align*}
Multiplying by blocks and using the induction hypothesis, we get
$$
N_{k+2}^{\top}\,G_{k+1}^{(1)}\,N_{k+2}\,=\,\left[\begin{array}{c|c} -\mathtt{D}_{k+1}^{\top}\,G_{k+1} & \bar{x}_k \\[3pt]
\hline
\\[-6pt]
\bar{x}_k^{\top} & (k+2)^2\,\sigma_{2k+2}\end{array}\right],
$$
where
$$
\bar{x}_k\,=\,(k+2)\,N_{k+1}^{\top}\,\begin{bmatrix} \sigma_{k+1} \\ \vdots \\ \sigma_{2k+1} \end{bmatrix}.
$$
On the other hand,
\begin{align*}
&-\mathtt{D}_{k+2}^{\top}\,G_{k+2}\\
&=\left[ \begin{array}{ccccc|c}
& & & & & 0 \\
& & & -\mathtt{D}_{k+1}^{\top} & & \vdots \\
& & & & & 0\\
\hline
0 & \ldots & 0 & -(k+2)\,(k+1)\,c & -(k+2)\,[(k+1)\,b+e] & -(k+2)\,[(k+1)\,a+d]
\end{array}\right]\\
&\qquad \times \left[\begin{array}{ccc|c}
& & & \mu_{k+2} \\
& G_{k+1} & &\vdots \\
& & &\mu_{2k+3}\\
\hline
\mu_{k+2} & \ldots & \mu_{2k+3} & \mu_{2k+4}
\end{array}\right].
\end{align*}
After multiplying by blocks, we have
$$
-\mathtt{D}_{k+2}^{\top}\,G_{k+2}\,=\,\left[\begin{array}{c|c} -\mathtt{D}_{k+1}^{\top}\,G_{k+1} & \bar{y}_k \\[3pt]
\hline
\\[-6pt]
\bar{z}_k & w_k\end{array}\right],
$$
where
\begin{align*}
&\bar{y}_k=-\mathtt{D}_{k+1}^{\top}\,\begin{bmatrix} \mu_{k+2} \\ \vdots \\ \mu_{2k+3} \end{bmatrix}, \\[6pt]
&\bar{z}_k=\begin{bmatrix}0 & \ldots & 0 & -(k+2)\,(k+1)\,c & -(k+2)\,[(k+1)\,b+e]\end{bmatrix}\,G_{k+1}\\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad -(k+2)\,[(k+1)\,a+d]\,\begin{bmatrix} \mu_{k+2} & \ldots & \mu_{2k+3}\end{bmatrix},\\[6pt]
&w_k= -(k+2)\bigg[(k+1)\,c\,\mu_{2k+2}+[(k+1)\,b+e]\,\mu_{2k+3} +[(k+1)\,a+d]\,\mu_{2k+4}\bigg].
\end{align*}
By Theorem \ref{th:selfadjoint}, we have that $\bar{z}_k=\bar{y}_k^{\top}$. Hence, our efforts should focus on showing that $\bar{x}_k=\bar{y}_k$ and $w_k=(k+2)^2\,\sigma_{2k+2}$. Let us start by proving the second identity.
Observe that
\begin{align*}
w_k=&-(k+2)\,\bigg[[(2k+2)\,a+d]\,\mu_{2k+4}+[(2k+2)\,b+e]\,\mu_{2k+3}+(2k+2)\,c\,\mu_{2k+2} \bigg]\\
&+(k+2)\,(k+1)\,(a\,\mu_{2k+4}+b\,\mu_{2k+3}+c\,\mu_{2k+2}).
\end{align*}
From \eqref{eq:sigmas} and the fact that $\sigma_{2k+2}=a\,\mu_{2k+4}+b\,\mu_{2k+3}+c\,\mu_{2k+2}$, we deduce that
$$
w_k=(k+2)\,\sigma_{2k+2}+(k+2)\,(k+1)\,\sigma_{2k+2}=(k+2)^2\,\sigma_{2k+2}.
$$
In order to prove that $\bar{x}_k=\bar{y}_k$, we note that
$$
\bar{x}_k\,=\,(k+2)\,\left[ \begin{array}{c|c}
N_{k}^{\top} & \mathtt{0} \\[3pt]
\hline
\\[-6pt]
\mathtt{0} & k+1
\end{array}\right]\,\begin{bmatrix} \sigma_{k+1} \\ \vdots \\ \sigma_{2k+1} \end{bmatrix},
$$
and
$$
\bar{y}_k=\left[ \begin{array}{ccccc|c}
& & & & & 0 \\
& & & -\mathtt{D}_{k}^{\top} & & \vdots \\
& & & & & 0\\
\hline
0 & \ldots & 0 & -(k+1)\,k\,c & -(k+1)\,(k\,b+e) & -(k+1)\,(k\,a+d)
\end{array}\right]\,\begin{bmatrix} \mu_{k+2} \\ \vdots \\ \mu_{2k+3} \end{bmatrix}.
$$
Then, the last entry of $\bar{x}_k-\bar{y}_k$ is
\begin{align*}
&(k+2)\,(k+1)\,\sigma_{2k+1}+(k+1)\,[k\,c\,\mu_{2k+1}+(k\,b+e)\,\mu_{2k+2}+(k\,a+d)\,\mu_{2k+3}]\\
&=(k+2)\,(k+1)\,\sigma_{2k+1}-(k+1)\,\sigma_{2k+1}-(k+1)^2\,\sigma_{2k+1}=0,
\end{align*}
where we have used \eqref{eq:sigmas} and the fact that $\sigma_{2k+1}=a\,\mu_{2k+3}+b\,\mu_{2k+2}+c\,\mu_{2k+1}$. In general, the $i$-th entry of $\bar{x}_k-\bar{y}_k$, with $1\leqslant i \leqslant k+2$, is
\begin{align*}
&(i-1)\,\bigg[(k+2)\,\sigma_{k+i-1}+[(i-2)\,c\,\mu_{k+i-1}+[(i-2)\,b+e]\,\mu_{k+i}+[(i-2)\,a+d]\,\mu_{k+1+i}]\bigg]\\
&=(i-1)\,\bigg[(k+2)\,\sigma_{k+i-1}-\sigma_{k+i-1}-(k+1)\,\sigma_{k+i-1} \bigg]=0,
\end{align*}
where we have used \eqref{eq:sigmas} and the fact that $\sigma_{k+i-1}=a\,\mu_{k+1+i}+b\,\mu_{k+i}+c\,\mu_{k+i-1}$. This proves that $\bar{x}_k-\bar{y}_k=\mathtt{0}$ and, in turn, that $N_{k+2}^{\top}\,G_{k+1}^{(1)}\,N_{k+2}\,=\,-\mathtt{D}_{k+2}^{\top}\,G_{k+2}$.
It follows from the Principle of Induction that $N_{n+1}^{\top}\,G_n^{(1)}\,N_{n+1}\,=\,-\mathtt{D}_{n+1}^{\top}\,G_{n+1}$ holds for $n\geqslant 0$.
\end{proof}
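Proposition \ref{prop:NGN} can also be verified numerically. The sketch below is illustrative; it assumes, consistently with the block displays above, that $N_{n+1}$ is the $(n+1)\times (n+2)$ coefficient matrix of differentiation (entry $j+1$ in position $(j,j+1)$), and uses the Jacobi-type data $\phi(x)=1-x^2$, $\psi(x)=-4x$ as a sample classical instance.

```python
# Check (not part of the paper): N_{n+1}^T G_n^{(1)} N_{n+1} = -D_{n+1}^T G_{n+1}.

def moments(a, b, c, d, e, m):
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

def hankel(mu, n):
    return [[mu[i + j] for j in range(n + 1)] for i in range(n + 1)]

def D_matrix(n, a, b, c, d, e):
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        D[j][j] = j * ((j - 1) * a + d)
        if j >= 1:
            D[j - 1][j] = j * ((j - 1) * b + e)
        if j >= 2:
            D[j - 2][j] = j * (j - 1) * c
    return D

def N_matrix(n):
    # N_{n+1}: (n+1) x (n+2), differentiation acting on monomial coefficients
    N = [[0.0] * (n + 2) for _ in range(n + 1)]
    for j in range(n + 1):
        N[j][j + 1] = j + 1
    return N

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

a, b, c, d, e = -1.0, 0.0, 1.0, -4.0, 0.0   # Jacobi-type data
n = 4
mu = moments(a, b, c, d, e, 2 * n + 2)
sigma = [a * mu[i + 2] + b * mu[i + 1] + c * mu[i] for i in range(2 * n + 1)]
G1 = hankel(sigma, n)
Nm = N_matrix(n)
lhs = matmul(matmul(transpose(Nm), G1), Nm)
rhs = matmul(transpose(D_matrix(n + 1, a, b, c, d, e)), hankel(mu, n + 1))
defect = max(abs(lhs[i][j] + rhs[i][j])
             for i in range(n + 2) for j in range(n + 2))
```

The bottom-right entry of the left-hand side equals $(n+1)^2\,\sigma_{2n}$, matching the corner entry computed in the proof.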
Now, let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers. For $n\geqslant 0$, let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. Consider the set $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(1)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}, \quad 0\leqslant j \leqslant n.
$$
Observe that $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ constitutes a basis for $\mathbb{R}^{n+1}$. Furthermore, since $S_{n+1}^{\top}$ is a unit upper triangular matrix, the $(n+1)\times (n+1)$ matrix $S_{n,1}^{\top}$ defined as
\begin{equation}\label{def:Sn1}
S_{n,1}^{\top}=\left[\bar{s}_{n,0}^{(1)}\ \bar{s}_{n,1}^{(1)}\, \ldots\, \bar{s}_{n,n}^{(1)} \right],
\end{equation}
is also a unit upper triangular matrix. We show that $G_n^{(1)}$ defined in \eqref{def:Gn1} admits a Cholesky factorization with $S_{n,1}^{-1}$ its triangular matrix factor.
\begin{theorem}\label{th:choleskysigma}
Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers satisfying \eqref{eq:ttr-moments}. For $n\geqslant 0$, let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$. Then $G_n^{(1)}$ admits the Cholesky factorization given by
$$
G_n^{(1)}= S_{n,1}^{-1}\,H_{n,1}\,S_{n,1}^{-\top},
$$
where $S_{n,1}^{\top}$ is the matrix defined in \eqref{def:Sn1} and $H_{n,1}=\textnormal{diag}[h_{0}^{(1)}, \ldots, h_{n}^{(1)} ]$ with
$$
h^{(1)}_{j}=-\frac{\lambda_{j+1}}{(j+1)^2}\,h_{j+1}, \quad j\geqslant 0,
$$
and $\lambda_j=j\,[(j-1)\,a+d]$.
\end{theorem}
\begin{proof}
For $0\leqslant j,k \leqslant n$, we compute
$$
\left(\bar{s}_{n,j}^{(1)}\right)^{\top}\,G_n^{(1)}\,\bar{s}_{n,k}^{(1)}\,=\,\frac{1}{(j+1)\,(k+1)}\,\bar{s}_{n+1,j+1}^{\top}\,N_{n+1}^{\top}\,G_n^{(1)}\,N_{n+1}\,\bar{s}_{n+1,k+1}.
$$
Using Theorem \ref{th:eighenvectors} and Proposition \ref{prop:NGN}, we obtain
\begin{align*}
\left(\bar{s}_{n,j}^{(1)}\right)^{\top}\,G_n^{(1)}\,\bar{s}_{n,k}^{(1)}\,&=\,-\frac{1}{(j+1)\,(k+1)}\,\bar{s}_{n+1,j+1}^{\top}\,\mathtt{D}_{n+1}^{\top}\,G_{n+1}\,\bar{s}_{n+1,k+1}\\
&=-\frac{\lambda_{j+1}}{(j+1)\,(k+1)}\,\bar{s}_{n+1,j+1}^{\top}\,G_{n+1}\,\bar{s}_{n+1,k+1}\\
&=-\frac{\lambda_{j+1}}{(j+1)\,(k+1)}\,\mathfrak{B}_{n+1}(\bar{s}_{n+1,j+1},\,\bar{s}_{n+1,k+1})\\
&=-\frac{\lambda_{j+1}}{(j+1)^2}\,h_{j+1}\,\delta_{j,k}.
\end{align*}
This implies that $S_{n,1}\,G_n^{(1)}\,S_{n,1}^{\top}= H_{n,1}$ and, hence, $G_n^{(1)}= S_{n,1}^{-1}\,H_{n,1}\,S_{n,1}^{-\top}$.
\end{proof}
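The formula for $h_j^{(1)}$ can be tested by comparing two independent $LDL^{\top}$ factorizations, one of $G_{n+1}$ and one of $G_n^{(1)}$. The sketch below is illustrative only and again uses the assumed Jacobi-type data $\phi(x)=1-x^2$, $\psi(x)=-4x$, for which $\lambda_j=-j\,(j+3)$.

```python
# Check (not part of the paper): h_j^{(1)} = -lambda_{j+1} h_{j+1} / (j+1)^2.

def moments(a, b, c, d, e, m):
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

def hankel(mu, n):
    return [[mu[i + j] for j in range(n + 1)] for i in range(n + 1)]

def ldlt(G):
    # G = L H L^T with L unit lower triangular; returns (L, diag of H)
    m = len(G)
    L = [[0.0] * m for _ in range(m)]
    H = [0.0] * m
    for i in range(m):
        for j in range(i + 1):
            s = G[i][j] - sum(L[i][k] * H[k] * L[j][k] for k in range(j))
            if i == j:
                H[i], L[i][i] = s, 1.0
            else:
                L[i][j] = s / H[j]
    return L, H

a, b, c, d, e = -1.0, 0.0, 1.0, -4.0, 0.0   # Jacobi-type data
n = 4
mu = moments(a, b, c, d, e, 2 * n + 2)
sigma = [a * mu[i + 2] + b * mu[i + 1] + c * mu[i] for i in range(2 * n + 1)]
H1 = ldlt(hankel(sigma, n))[1]              # the h_j^{(1)}
Hfull = ldlt(hankel(mu, n + 1))[1]          # the h_j
lam = [j * ((j - 1) * a + d) for j in range(n + 2)]
pred = [-lam[j + 1] * Hfull[j + 1] / (j + 1) ** 2 for j in range(n + 1)]
rel = max(abs(H1[j] - pred[j]) / abs(pred[j]) for j in range(n + 1))
```

Since $\sigma_n$ are the moments of the positive measure $(1-x^2)^2\,dx$ (suitably normalized), the $h_j^{(1)}$ come out positive, consistent with the sign of $-\lambda_{j+1}$.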
\begin{corollary}\label{coro:sigmaclassical}
If $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence, then so is $\{\sigma_n\}_{n\geqslant 0}$.
\end{corollary}
\begin{proof}
By Theorem \ref{th:sigmapreclassical}, $\{\sigma_n\}_{n\geqslant 0}$ is a pre-classical sequence.
Now, we must show that $\det G_n^{(1)}\ne 0$ for $n\geqslant 0$ (Definition \ref{def:classical}). From Theorem \ref{th:choleskysigma} we deduce that
$$
\det G_n^{(1)}=\det H_{n,1}=h^{(1)}_{0}\cdots h^{(1)}_{n}, \quad n\geqslant 0,
$$
with
$$
h^{(1)}_{j}=-\frac{\lambda_{j+1}}{(j+1)^2}\,h_{j+1}, \quad j\geqslant 0 ,
$$
and $\lambda_j=j\,[(j-1)\,a+d]$. Since $\{\mu_n\}_{n\geqslant 0}$ is classical, we have
$$
h_{n} \ne 0 \quad \text{and} \quad n\,a+d\ne 0, \quad n\geqslant 0,
$$
(see equality \eqref{eq:hn+1} and Definition \ref{def:classical sequence}). This implies that $h^{(1)}_j\ne 0$ for $j\geqslant 0$. Therefore $\det G_n^{(1)}\ne 0$ for $n\geqslant 0$ and, thus, $\{\sigma_n\}_{n\geqslant 0}$ is classical.
\end{proof}
Theorem \ref{th:choleskysigma} implies that for $n\geqslant 0$, the columns of $S_{n,1}^{\top}$ constitute an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form associated with $\{\sigma_n\}_{n\geqslant 0}$ (see Definition \ref{def:bilinearform}), which we denote by
$$
\mathfrak{B}^{(1)}_n(\bar{u},\bar{v})\,=\,\bar{u}^{\top}\,G_n^{(1)}\,\bar{v}, \quad \forall \bar{u},\bar{v}\in \mathbb{R}^{n+1}.
$$
We are ready for the following characterizations of classical sequences.
\begin{theorem}\label{th:Hahn-type}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers such that $\det G_n\ne 0$ for $n\geqslant 0$. Let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. Then $\{\mu_n\}_{n\geqslant 0}$ is classical if and only if there are real numbers $a,b,c$ satisfying
$$
|a|+|b|+|c|>0,
$$
such that the set $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(1)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}, \quad 0\leqslant j \leqslant n,
$$
constitutes an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form associated with $\{\sigma_n\}_{n\geqslant 0}$, where
$$
\sigma_n\,=\,a\,\mu_{n+2}+b\,\mu_{n+1}+c\,\mu_n, \qquad n\geqslant 0.
$$
\end{theorem}
\begin{proof}
If $\{\mu_n\}_{n\geqslant 0}$ is classical, then it follows from Theorem \ref{th:choleskysigma} that
$$
S_{n,1}\,G_n^{(1)}\,S_{n,1}^{\top}\,=\,H_{n,1}, \quad n\geqslant 0,
$$
where
$$
S_{n,1}^{\top}=\left[\bar{s}_{n,0}^{(1)}\ \bar{s}_{n,1}^{(1)}\, \ldots\, \bar{s}_{n,n}^{(1)} \right],
$$
and $H_{n,1}=\textnormal{diag}[h_{0}^{(1)}, \ldots, h_{n}^{(1)} ]$ with
$$
h^{(1)}_{j}=-\frac{\lambda_{j+1}}{(j+1)^2}\,h_{j+1}, \quad j\geqslant 0,
$$
and $\lambda_j=j\,[(j-1)\,a+d]$. This implies that
$$
\mathfrak{B}_n^{(1)}(\bar{s}_{n,i}^{(1)},\bar{s}_{n,j}^{(1)})=h^{(1)}_j\,\delta_{i,j}, \quad 0\leqslant i,j\leqslant n.
$$
This proves the necessary condition.
Conversely, for $n\geqslant 0$, on one hand we have
$$
\mathfrak{B}_n^{(1)}(\bar{s}_{n,k}^{(1)},\bar{s}_{n,0}^{(1)})=h_0^{(1)}\,\delta_{k,0}
$$
or, equivalently,
$$
\left(\bar{s}_{n,k}^{(1)}\right)^{\top}\,G_n^{(1)}\,\bar{s}_{n,0}^{(1)}\,=\,\frac{1}{k+1}\,\bar{s}_{n+1,k+1}^{\top}\,N_{n+1}^{\top}\,G_n^{(1)}\,\bar{s}_{n,0}^{(1)}=-\lambda_{1}\,h_{1}\,\delta_{k,0}.
$$
where we have set $\lambda_{1}=-h_0^{(1)}/h_{1}$. Using the fact that $\bar{s}_{n,0}^{(1)}=\bar{e}_0$ where, recall, $\bar{e}_0$ is the first column of the identity matrix of order $n+1$, we write
$$
\bar{s}_{n+1,k+1}^{\top}\,N_{n+1}^{\top}\,\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \vdots \\ \sigma_n
\end{bmatrix}=-(k+1)\,\lambda_{1}\,h_{1}\,\delta_{k,0}.
$$
On the other hand
$$
\mathfrak{B}_{n+1}(\bar{s}_{n+1,k+1},\bar{s}_{n+1,1})=\bar{s}_{n+1,k+1}^{\top}\,G_{n+1}\,\bar{s}_{n+1,1}= h_1\,\delta_{k,0}=h_1\,(k+1)\,\delta_{k,0}.
$$
Therefore,
$$
\bar{s}_{n+1,k+1}^{\top}\,N_{n+1}^{\top}\,\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \vdots \\ \sigma_n
\end{bmatrix}=-\lambda_{1}\,\bar{s}_{n+1,k+1}^{\top}\,G_{n+1}\,\bar{s}_{n+1,1},
$$
and, thus,
$$
\bar{s}_{n+1,k+1}^{\top}\left(N_{n+1}^{\top}\,\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \vdots \\ \sigma_n
\end{bmatrix}+\lambda_{1}\,G_{n+1}\,\bar{s}_{n+1,1} \right)=0, \quad 0\leqslant k \leqslant n.
$$
Since the first row of $N_{n+1}^{\top}$ is $[0 \ 0 \ldots 0]$ and $\bar{s}_{n+1,0}^{\top}\,G_{n+1}\,\bar{s}_{n+1,1}=0$, we have
$$
S_{n+1}\left(N_{n+1}^{\top}\,\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \vdots \\ \sigma_n
\end{bmatrix}+\lambda_{1}\,G_{n+1}\,\bar{s}_{n+1,1} \right)=\mathtt{0},
$$
and, consequently,
\begin{equation}\label{eq:hahn}
N_{n+1}^{\top}\,\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \vdots \\ \sigma_n
\end{bmatrix}+\lambda_{1}\,G_{n+1}\,\bar{s}_{n+1,1} =\mathtt{0},
\end{equation}
where we have used the fact that $S_{n+1}$ is an invertible matrix. If
$$
\bar{s}_{n+1,1}=[s_{1,0} \ 1 \ 0 \ldots 0]^{\top},
$$
then let $d=\lambda_1$ and $e=\lambda_1\,s_{1,0}$. Observe that by Theorem \ref{th:choleskySn+1}, $s_{1,0}$ is independent of $n$. In this way, the entries of \eqref{eq:hahn} read
\begin{align*}
&d\,\mu_1+e\,\mu_0 = 0,\\
&k\,\sigma_{k-1}+d\,\mu_{k+1}+e\,\mu_k=(k\,a+d)\,\mu_{k+1}+(k\,b+e)\,\mu_k+k\,c\,\mu_{k-1}=0, \quad 1\leqslant k \leqslant n,
\end{align*}
which proves that $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments} and, thus, $\{\mu_n\}_{n\geqslant 0}$ is classical.
\end{proof}
Corollary \ref{coro:sigmaclassical} shows that the classical character of $\{\mu_n\}_{n\geqslant 0}$ is inherited by $\{\sigma_n\}_{n\geqslant 0}$, allowing us to apply all of our previous results about classical sequences to $\{\sigma_n\}_{n\geqslant 0}$ and $\{G_n^{(1)}\}_{n\geqslant 0}$. Therefore, we can define a new sequence $\{\sigma_{n}^{(2)}\}_{n\geqslant 0}$ as follows:
$$
\sigma_n^{(2)} = a \,\sigma_{n+2} + b\, \sigma_{n+1} + c\, \sigma_n, \quad n\geqslant 0.
$$
We denote by $\{ G_n^{(2)}\}_{n \geqslant 0}$ the sequence of $(n+1)\times (n+1)$ matrices with
\begin{equation*}\label{def:Gn2}
G_{0}^{(2)} = \sigma_0^{(2)}, \quad \text{and} \quad G_n^{(2)}= \left[\begin{array}{ccc|c}
& & & \sigma_n^{(2)} \\
& G_{n-1}^{(2)} & &\vdots \\
& & &\sigma_{2n-1}^{(2)}\\
\hline
\sigma_n^{(2)} & \ldots & \sigma_{2n-1}^{(2)} & \sigma_{2n}^{(2)}
\end{array}\right], \quad n\geqslant 1.
\end{equation*}
By Theorem \ref{th:sigmapreclassical}, $\{\sigma_{n}^{(2)}\}_{n\geqslant 0}$ is pre-classical satisfying
$$
(n\,a+d_2)\,\sigma_{n+1}^{(2)}+(n\,b+e_2)\,\sigma_n^{(2)}+n\,c\,\sigma_{n-1}^{(2)}\,=\,0, \quad n\geqslant 0,
$$
where $d_2=d+4\,a$ and $e_2=e+2\,b$. Moreover, if $\det G_n^{(2)}\ne 0$ for $n\geqslant 0$, then $\{\sigma_{n}^{(2)}\}_{n\geqslant 0}$ is classical and, by Theorem \ref{th:Hahn-type}, the set $\{ \bar{s}_{n,0}^{(2)}, \ldots, \bar{s}_{n,n}^{(2)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(2)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}^{(1)}, \quad 0\leqslant j \leqslant n,
$$
constitutes an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form $\mathfrak{B}_n^{(2)}$ associated with $\{\sigma_n^{(2)}\}_{n\geqslant 0}$, that is,
$$
\mathfrak{B}^{(2)}_n(\bar{s}_{n,j}^{(2)},\bar{s}_{n,k}^{(2)})=h_j^{(2)}\,\delta_{j,k}, \quad 0\leqslant j, k \leqslant n,
$$
where
$$
h^{(2)}_{j}=-\frac{\lambda_{j+1}^{(1)}}{(j+1)^2}\,h_{j+1}^{(1)}, \quad j\geqslant 0 ,
$$
and $\lambda_j^{(1)}=j\,[(j-1)\,a+d_1]$.
Iterating this idea, we obtain the following result.
\begin{corollary}\label{th:higher-Hahn-type}
Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers. Let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. For each $k\geqslant 1$, define the sequence of real numbers $\{\sigma_n^{(k)}\}_{n\geqslant 0}$ by
$$
\sigma_n^{(k)}\, =\, a\, \sigma_{n+2}^{(k-1)} + b\, \sigma_{n+1}^{(k-1)} + c\, \sigma_n^{(k-1)}, \quad n\geqslant 0,
$$
where $\sigma_n^{(0)}=\mu_n$ for $n\geqslant 0$. Then $\{ \sigma_n^{(k)} \}_{n \geqslant 0}$ is classical satisfying
$$
(n\,a+d_k)\,\sigma_{n+1}^{(k)}+(n\,b+e_k)\,\sigma_n^{(k)}+n\,c\,\sigma_{n-1}^{(k)}\,=\,0, \quad n\geqslant 0,
$$
where $d_k=d+2\,k\,a$ and $e_k=e+k\,b$. Moreover, the set $\{ \bar{s}_{n,0}^{(k)}, \ldots, \bar{s}_{n,n}^{(k)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(k)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}^{(k-1)}, \quad 0\leqslant j \leqslant n,
$$
where $\bar{s}_{n,j}^{(0)}=\bar{s}_{n,j}$, constitutes an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form associated with $\{\sigma_n^{(k)}\}_{n\geqslant 0}$.
\end{corollary}
Observe that the vectors $\{\bar{s}_{n,0}^{(k)}, \ldots, \bar{s}_{n,n}^{(k)}\}$ in Corollary \ref{th:higher-Hahn-type} can be written in terms of the vectors $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ as follows: for each $k\geqslant 1$,
\begin{equation}\label{eq:longsn}
\bar{s}_{n,j}^{(k)}= \frac{1}{(j+1)_k}N_{n+1}\,N_{n+2}\cdots N_{n+k}\,\bar{s}_{n+k,j+k}, \quad 0\leqslant j \leqslant n,
\end{equation}
where $(\nu)_k=\nu\,(\nu+1)\cdots (\nu+k-1)$, $(\nu)_0=1$, denotes the Pochhammer symbol. If $\{ G_n^{(k)}\}_{n \geqslant 0}$ denotes the sequence of $(n+1)\times (n+1)$ matrices with
\begin{equation}\label{def:Gnk}
G_{0}^{(k)} = \sigma_0^{(k)}, \quad \text{and} \quad G_n^{(k)}= \left[\begin{array}{ccc|c}
& & & \sigma_n^{(k)} \\
& G_{n-1}^{(k)} & &\vdots \\
& & &\sigma_{2n-1}^{(k)}\\
\hline
\sigma_n^{(k)} & \ldots & \sigma_{2n-1}^{(k)} & \sigma_{2n}^{(k)}
\end{array}\right], \quad n\geqslant 1,
\end{equation}
then the orthogonality of $\{\bar{s}_{n,0}^{(k)}, \ldots, \bar{s}_{n,n}^{(k)}\}$ with respect to the bilinear form $\mathfrak{B}_n^{(k)}$ associated with $\{\sigma_n^{(k)}\}_{n\geqslant 0}$ is given by
\begin{equation}\label{eq:orthogonalityk}
\mathfrak{B}^{(k)}_n(\bar{s}_{n,j}^{(k)},\bar{s}_{n,i}^{(k)})\,=\,(\bar{s}_{n,j}^{(k)})^{\top}\,G_n^{(k)}\,\bar{s}_{n,i}^{(k)} \,=\,h_j^{(k)}\,\delta_{j,i}, \quad 0\leqslant i,j \leqslant n,
\end{equation}
where
$$
h^{(k)}_{j}=-\frac{\lambda_{j+1}^{(k-1)}}{(j+1)^2}\,h_{j+1}^{(k-1)}, \quad j\geqslant 0 ,
$$
and $\lambda_j^{(k)}=j\,[(j-1)\,a+d_k]$. Note that we can write
$$
h^{(k)}_{j}=(-1)^k\frac{\prod_{i=0}^{k-1}\lambda_{j+k-i}^{(i)}}{[(j+1)_k]^2}\,h_{j+k}, \quad j\geqslant 0 .
$$
Moreover, \eqref{eq:orthogonalityk} implies the Cholesky factorization of $G_n^{(k)}$:
$$
S_{n,k}\,G_n^{(k)}\,S_{n,k}^{\top}\,=\,H_{n,k}, \quad n\geqslant 0,
$$
where
$$
S_{n,k}^{\top}=\left[\bar{s}_{n,0}^{(k)}\ \bar{s}_{n,1}^{(k)}\, \ldots\, \bar{s}_{n,n}^{(k)} \right],
$$
and $H_{n,k}=\textnormal{diag}[h_{0}^{(k)}, \ldots, h_{n}^{(k)} ]$.
Let us reformulate the above results in terms of polynomials and moment functionals. Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers satisfying \eqref{eq:ttr-moments}, and let $\mathbf{u}$ be the moment functional defined as $\mu_n=\langle \mathbf{u}, x^n\rangle$, $n\geqslant 0$. For each $k\geqslant 1$, the sequence of real numbers $\{\sigma_n^{(k)}\}_{n\geqslant 0}$ defined by
$$
\sigma_n^{(k)}\, =\, a\, \sigma_{n+2}^{(k-1)} + b\, \sigma_{n+1}^{(k-1)} + c\, \sigma_n^{(k-1)}, \quad n\geqslant 0,
$$
where $\sigma_n^{(0)}=\mu_n$ for $n\geqslant 0$, is the sequence of moments of the functional given by $\mathbf{v}_k\,=\,\phi^k\,\mathbf{u}$ where $\phi(x)=a\,x^2+b\,x+c$. Moreover, for $n\geqslant 0$, let $S_n^{-1}\,H_n\,S_n^{-\top}$ be the Cholesky factorization of $G_n$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_n^{\top}$. For $n\geqslant 0$, Corollary \ref{th:higher-Hahn-type} implies that the polynomials $\{Q_{0,k}(x), Q_{1,k}(x),\ldots,Q_{n,k}(x)\}$ given by
$$
Q_{j,k}(x)\,=\,(\bar{s}_{n,j}^{(k)})^{\top}\,\mathtt{X}_n, \quad 0\leqslant j \leqslant n,
$$
satisfy $\langle \mathbf{v}_k, Q_{i,k}\,Q_{j,k}\rangle = h_j^{(k)}\,\delta_{i,j}$. Therefore, $\{Q_{0,k}(x), Q_{1,k}(x),\ldots,Q_{n,k}(x)\}$ constitutes an orthogonal basis for $\Pi_n$ with respect to $\mathbf{v}_k$. Furthermore, Theorem \ref{th:choleskySn+1} allows us to write
$$
Q_{n,k}(x)\,=\,(\bar{s}_{n,n}^{(k)})^{\top}\,\mathtt{X}_n, \quad n\geqslant 0,
$$
and, in this way, $\{Q_{n,k}(x)\}_{n\geqslant 0}$ is an MOPS associated with $\mathbf{v}_k$. We note that if
$$
P_n(x)\,=\,\bar{s}_{n,n}^{\top}\,\mathtt{X}_n, \quad n\geqslant 0,
$$
then, by \eqref{eq:longsn}, we have
$$
Q_{n,k}(x)\,=\,\frac{P_{n+k}^{(k)}(x)}{(n+1)_k}, \quad n\geqslant 0,
$$
where $P_n^{(k)}(x)$ denotes the $k$-th order derivative of $P_n(x)$.
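The derivative relation $Q_{n,k}=P_{n+k}^{(k)}/(n+1)_k$ can be checked numerically for a concrete instance. The sketch below is illustrative only: the coefficients of the monic orthogonal polynomials are read off from $LDL^{\top}$ factorizations of the Hankel matrices of $\{\sigma_n^{(k)}\}_{n\geqslant 0}$ and $\{\mu_n\}_{n\geqslant 0}$ (as in the preceding sections), and the Jacobi-type data $\phi(x)=1-x^2$, $\psi(x)=-4x$ with $k=2$ is an assumed classical instance.

```python
# Check (not part of the paper): Q_{n,k} = P_{n+k}^{(k)} / (n+1)_k for k = 2.

def moments(a, b, c, d, e, m):
    mu = [1.0]
    for n in range(m):
        prev = mu[n - 1] if n >= 1 else 0.0
        mu.append(-((n * b + e) * mu[n] + n * c * prev) / (n * a + d))
    return mu

def hankel(mu, n):
    return [[mu[i + j] for j in range(n + 1)] for i in range(n + 1)]

def ldlt(G):
    m = len(G)
    L = [[0.0] * m for _ in range(m)]
    H = [0.0] * m
    for i in range(m):
        for j in range(i + 1):
            s = G[i][j] - sum(L[i][k] * H[k] * L[j][k] for k in range(j))
            if i == j:
                H[i], L[i][i] = s, 1.0
            else:
                L[i][j] = s / H[j]
    return L, H

def unit_lower_inv(L):
    # rows of L^{-1} are the monomial coefficients of the monic OPS
    m = len(L)
    M = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    for i in range(m):
        for j in range(i):
            M[i][j] = -sum(L[i][k] * M[k][j] for k in range(j, i))
    return M

a, b, c, d, e = -1.0, 0.0, 1.0, -4.0, 0.0   # Jacobi-type data
n, k = 3, 2
mu = moments(a, b, c, d, e, 2 * (n + k))
sig = list(mu)
for _ in range(k):                          # sigma^{(i)} -> sigma^{(i+1)}
    sig = [a * sig[i + 2] + b * sig[i + 1] + c * sig[i]
           for i in range(len(sig) - 2)]
Q = unit_lower_inv(ldlt(hankel(sig, n))[0])[n]          # coefficients of Q_{n,k}
P = unit_lower_inv(ldlt(hankel(mu, n + k))[0])[n + k]   # coefficients of P_{n+k}
deriv = list(P)
for _ in range(k):                          # k-th derivative on coefficients
    deriv = [(i + 1) * deriv[i + 1] for i in range(len(deriv) - 1)]
poch = 1.0
for i in range(k):
    poch *= n + 1 + i                       # (n+1)_k
defect = max(abs(Q[i] - deriv[i] / poch) for i in range(n + 1))
```

Both sides are monic of degree $n$, and their coefficients agree to machine precision.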
\subsection{First structure relation}
For $n\geqslant 1$ and given real numbers $a,b$, and $c$, we define the $(n+3)\times (n+1)$ matrices
\begin{equation}\label{def:matrizPhi}
\Phi_n=\left[\begin{array}{c|c}
& \mathtt{0} \\
\Phi_{n-1} & c \\
& b \\
\hline
\mathtt{0} & a
\end{array}\right], \quad \Phi_0=\begin{bmatrix}
c \\ b \\ a
\end{bmatrix}.
\end{equation}
\begin{lemma}
Let $\{\mu_n\}_{n\geqslant 0}$ and $\{\sigma_n\}_{n\geqslant 0}$ be sequences of real numbers satisfying
$$
\sigma_n=a\,\mu_{n+2}+b\,\mu_{n+1}+c\,\mu_n, \quad n\geqslant 0,
$$
where $a,b,c\in \mathbb{R}$. Then, for $n\geqslant 2$,
\begin{equation}\label{eq:GPhi}
G_n^{(1)}\,\begin{bmatrix}
\bar{v} \\ 0 \\ 0
\end{bmatrix}\,=\,G_n\,\Phi_{n-2}\,\bar{v}, \quad \forall\, \bar{v}\in \mathbb{R}^{n-1}.
\end{equation}
\end{lemma}
\begin{proof}
We use induction to prove this result. For $n=2$, observe that, for all $v\in \mathbb{R}$,
$$
G_2^{(1)}\,\begin{bmatrix}
v \\ 0 \\ 0\end{bmatrix}=\begin{bmatrix}
\sigma_0 \\ \sigma_1 \\ \sigma_2
\end{bmatrix}v=a\,v\,\begin{bmatrix}
\mu_2 \\ \mu_3 \\ \mu_4
\end{bmatrix}+b\,v\,\begin{bmatrix}
\mu_1 \\ \mu_2 \\ \mu_3
\end{bmatrix}+c\,v\,\begin{bmatrix}
\mu_0 \\ \mu_1 \\ \mu_2
\end{bmatrix} = G_2\,\Phi_0\,v.
$$
This proves the base case.
Suppose that \eqref{eq:GPhi} holds for some $k\geqslant 2$. Let
$$
\bar{v}=\begin{bmatrix}
\nu_1 \\ \vdots \\ \nu_{k}
\end{bmatrix}\in \mathbb{R}^k.
$$
We compute
\begin{align*}
G_{k+1}^{(1)}\begin{bmatrix}
\bar{v} \\ 0 \\ 0
\end{bmatrix}
=G_{k+1}^{(1)}\left(\begin{bmatrix}\nu_1 \\ \vdots \\ \nu_{k-1} \\ 0 \\ 0 \\ 0 \end{bmatrix}+\begin{bmatrix}
0 \\ \vdots \\ 0 \\ \nu_{k} \\ 0 \\ 0
\end{bmatrix} \right).
\end{align*}
Multiplying by blocks and using the induction hypothesis, we get
\begin{align*}
G_{k+1}^{(1)}\begin{bmatrix}
\bar{v} \\ 0 \\ 0
\end{bmatrix}&= \left[\begin{array}{ccc|c}
& & & \sigma_{k+1} \\
& G_{k}^{(1)} & &\vdots \\
& & &\sigma_{2k+1}\\
\hline
\sigma_{k+1} & \ldots & \sigma_{2k+1} & \sigma_{2k+2}
\end{array}\right]\begin{bmatrix}\nu_1 \\ \vdots \\ \nu_{k-1} \\ 0 \\ 0 \\ 0 \end{bmatrix}+\nu_{k}\begin{bmatrix} \sigma_{k-1}\\ \vdots \\ \sigma_{2k-1}\\\sigma_{2k}\end{bmatrix}\\[6pt]
&= \left[\begin{array}{ccc}
& G_k\,\Phi_{k-2} & \\[-6pt]
\\
\hline
\sigma_{k+1} & \cdots & \sigma_{2k-1}
\end{array}\right]\begin{bmatrix}\nu_1 \\ \vdots \\ \nu_{k-1} \end{bmatrix}+\nu_{k}\begin{bmatrix} \sigma_{k-1}\\ \vdots\\ \sigma_{2k-1} \\ \sigma_{2k}\end{bmatrix}\\[3pt]
&=\left[\begin{array}{ccc|c}
& & & \sigma_{k-1} \\
& G_k\,\Phi_{k-2} & &\vdots \\
& & &\sigma_{2k-1}\\
\hline
\sigma_{k+1} & \ldots & \sigma_{2k-1} & \sigma_{2k}
\end{array}\right]\,\bar{v}.
\end{align*}
Then, it is straightforward to verify that
$$
\left[\begin{array}{ccc|c}
& & & \sigma_{k-1} \\
& G_k\,\Phi_{k-2} & &\vdots \\
& & &\sigma_{2k-1}\\
\hline
\sigma_{k+1} & \ldots & \sigma_{2k-1} & \sigma_{2k}
\end{array}\right]= \left[\begin{array}{ccc|c}
& & & \mu_{k+1} \\
& G_{k} & &\vdots \\
& & &\mu_{2k+1}\\
\hline
\mu_{k+1} & \ldots & \mu_{2k+1} & \mu_{2k+2}
\end{array}\right]\left[\begin{array}{c|c}
& \mathtt{0} \\
\Phi_{k-2} & c \\
& b \\
\hline
\mathtt{0} & a
\end{array}\right].
$$
This proves that
$$
G_{k+1}^{(1)}\,\begin{bmatrix}
\bar{v} \\ 0 \\ 0
\end{bmatrix}\,=\,G_{k+1}\,\Phi_{k-1}\,\bar{v},
$$
and, thus, by induction, \eqref{eq:GPhi} holds for all $n\geqslant 2$.
\end{proof}
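As a concrete sanity check (ours), \eqref{eq:GPhi} can be verified in exact arithmetic for the Legendre-type data $\mu_{2m}=2/(2m+1)$, $\mu_{2m+1}=0$ and $\phi(x)=1-x^2$; the key point, visible in the computation, is that $G_n\,\Phi_{n-2}$ has entries $\sigma_{i+j}$.

```python
from fractions import Fraction

a, b, c = Fraction(-1), Fraction(0), Fraction(1)   # phi(x) = 1 - x^2 (Legendre-type data)

def mu(m):      # moments of the Legendre functional on [-1, 1]
    return Fraction(2, m + 1) if m % 2 == 0 else Fraction(0)

def sigma(m):   # sigma_m = a mu_{m+2} + b mu_{m+1} + c mu_m
    return a * mu(m + 2) + b * mu(m + 1) + c * mu(m)

def hankel(mom, n):   # (n+1) x (n+1) matrix with entries mom(i+j)
    return [[mom(i + j) for j in range(n + 1)] for i in range(n + 1)]

def Phi(n):     # (n+3) x (n+1) matrix: column j carries c, b, a in rows j, j+1, j+2
    M = [[Fraction(0)] * (n + 1) for _ in range(n + 3)]
    for j in range(n + 1):
        M[j][j], M[j + 1][j], M[j + 2][j] = c, b, a
    return M

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def matmat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

n = 5
v = [Fraction(j + 1) for j in range(n - 1)]               # arbitrary vector in R^{n-1}
lhs = matvec(hankel(sigma, n), v + [Fraction(0)] * 2)     # G_n^{(1)} [v; 0; 0]
rhs = matvec(matmat(hankel(mu, n), Phi(n - 2)), v)        # G_n Phi_{n-2} v
```

Both sides agree entrywise, exactly as the lemma asserts.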
We have the following characterization of classical sequences of real numbers.
\begin{theorem}\label{th:firststructurerelation}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers such that $\det G_n\ne 0$ for $n\geqslant 0$. Let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. Then $\{\mu_n\}_{n\geqslant 0}$ is classical if and only if there are real numbers $a,b,c$ satisfying
$$
|a|+|b|+|c|>0,
$$
and real numbers $a_j,b_j,c_j$, $j\geqslant 0$, with $c_j\ne 0$, such that the set $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(1)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}, \quad 0\leqslant j \leqslant n,
$$
satisfies
\begin{equation}\label{eq:firststrel}
\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)} = a_j\,\bar{s}_{n,j+2}+b_j\,\bar{s}_{n,j+1}+c_j\,\bar{s}_{n,j}, \quad 0\leqslant j \leqslant n-2,
\end{equation}
with $\Phi_n$ as defined in \eqref{def:matrizPhi}.
\end{theorem}
\begin{proof}
We will use the fact that
$$
S_{n,1}^{\top}=\left[\bar{s}_{n,0}^{(1)}\ \bar{s}_{n,1}^{(1)}\, \ldots\, \bar{s}_{n,n}^{(1)} \right]
$$
is a unit upper triangular matrix.
Suppose that $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence satisfying \eqref{eq:ttr-moments}. Since $\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)}\in \mathbb{R}^{n+1}$ and $\{\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}\}$ constitutes an orthogonal basis for $\mathbb{R}^{n+1}$, we can write
$$
\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)}=\sum_{k=0}^{n}a_{j,k}\,\bar{s}_{n,k},
$$
with
$$
a_{j,k}\,h_k\,=\,\bar{s}_{n,k}^{\top}\,G_n\,\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)}, \quad 0\leqslant k \leqslant n.
$$
The absence of a subindex indicating some dependence on $n$ is justified by Theorem \ref{th:choleskySn+1} and Theorem \ref{th:choleskysigma}, which imply that the expressions for $\bar{s}_{n,k}$ and $\bar{s}_{n-2,j}^{(1)}$ are independent of $n$.
By \eqref{eq:GPhi} and since
$$
S_{n,1}^{\top}=\left[\begin{array}{c|c} S_{n-2,1}^{\top} & \ast \\
\hline
\mathtt{0} & \begin{matrix} 1 & \ast \\ 0 & 1\end{matrix} \end{array} \right],
$$
we have
$$
a_{j,k}\,h_k\,=\,\bar{s}_{n,k}^{\top}\,G_n^{(1)}\,\begin{bmatrix}\bar{s}_{n-2,j}^{(1)}\\ 0 \\ 0 \end{bmatrix}=\bar{s}_{n,k}^{\top}\,G_n^{(1)}\,\bar{s}_{n,j}^{(1)}, \quad 0\leqslant k \leqslant n, \quad 0\leqslant j \leqslant n-2.
$$
On one hand, by Corollary \ref{th:higher-Hahn-type}, the set $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ is an orthogonal basis for $\mathbb{R}^{n+1}$, and since $\bar{s}_{n,k}=\bar{s}_{n,k}^{(1)}+\alpha_{k,k-1}\,\bar{s}^{(1)}_{n,k-1}+\cdots+\alpha_{k,0}\,\bar{s}_{n,0}^{(1)}$, we get that $a_{j,k}=0$ for $0\leqslant k \leqslant j-1$, and
$$
a_{j,j}\,h_j=\bar{s}_{n,j}^{\top}\,G_n^{(1)}\,\bar{s}_{n,j}^{(1)}= (\bar{s}_{n,j}^{(1)})^{\top}\,G_n^{(1)}\,\bar{s}_{n,j}^{(1)}=h_j^{(1)}.
$$
Therefore,
$$
a_{j,j}=\frac{h_j^{(1)}}{h_j}=-\frac{\lambda_{j+1}}{(j+1)^2}\ne 0.
$$
On the other hand, since $S_{n-2,1}^{\top}$ is upper triangular, we have
$$
\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)}=\begin{bmatrix}
\ast \\ a \\ \mathtt{0}
\end{bmatrix}\in \mathbb{R}^{n+1}, \quad 0 \leqslant j \leqslant n-2,
$$
where the last $n-j-2$ entries are zero. This implies that $a_{j,k}=0$ for $k\geqslant j+3$. Therefore, \eqref{eq:firststrel} holds with $a_j=a_{j,j+2}$, $b_j=a_{j,j+1}$ and $c_j=a_{j,j} \neq 0$.
Conversely, suppose that \eqref{eq:firststrel} holds, where the entries of $\Phi_{n-2}$ are real numbers $a,b,c$ satisfying $|a|+|b|+|c|>0$. On one hand, multiplying both sides of \eqref{eq:firststrel} by $\bar{s}_{n,0}^{\top}\,G_n$, we obtain, for $0\leqslant j \leqslant n-2$,
$$
\bar{s}_{n,0}^{\top}\,G_n\left( a_j\,\bar{s}_{n,j+2}+b_j\,\bar{s}_{n,j+1}+c_j\,\bar{s}_{n,j}\right) = \bar{s}_{n,0}^{\top}\,G_n\Phi_{n-2}\,\bar{s}_{n-2,j}^{(1)}.
$$
By the orthogonality of $\{\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}\}$, we have
$$
\bar{s}_{n,0}^{\top}\,G_n\Phi_{n-2}\, N_{n-1}\,\bar{s}_{n-1,j+1} \,=\, (j+1) \,c_0\,h_0\,\delta_{j,0}.
$$
On the other hand, we have
$$
\bar{s}_{n-1,1}^{\top}\,G_{n-1}\,\bar{s}_{n-1,j+1}\,=\,h_1\,\delta_{j,0}=(j+1)\,h_1\,\delta_{j,0}, \quad 0 \leqslant j \leqslant n-2.
$$
Therefore,
$$
\bar{s}_{n,0}^{\top}\,G_n\Phi_{n-2}\,N_{n-1}\,\bar{s}_{n-1,j+1}\,=\,c_0\,\frac{h_0}{h_1}\,\bar{s}_{n-1,1}^{\top}\,G_{n-1}\,\bar{s}_{n-1,j+1},
$$
or, equivalently,
$$
\left(\bar{s}_{n,0}^{\top}\,G_n\,\Phi_{n-2}\,N_{n-1}-c_0\,\frac{h_0}{h_1}\,\bar{s}_{n-1,1}^{\top}\,G_{n-1} \right)\,\bar{s}_{n-1,j+1}=0, \quad 0\leqslant j \leqslant n-2.
$$
Since $N_{n-1}\bar{s}_{n-1,0}=\mathtt{0}$ and $\bar{s}_{n-1,1}^{\top}\,G_{n-1}\,\bar{s}_{n-1,0}=0$, we have
$$
\left(\bar{s}_{n,0}^{\top}\,G_n\,\Phi_{n-2}\,N_{n-1}-c_0\,\frac{h_0}{h_1}\,\bar{s}_{n-1,1}^{\top}\,G_{n-1} \right)\,S_{n-1}^{\top}=\mathtt{0},
$$
and, therefore,
\begin{equation}\label{eq:struct}
\bar{s}_{n,0}^{\top}\,G_n\,\Phi_{n-2}\,N_{n-1}-c_0\,\frac{h_0}{h_1}\,\bar{s}_{n-1,1}^{\top}\,G_{n-1} =\mathtt{0},
\end{equation}
where we have used the fact that $S_{n-1}^{\top}$ is an invertible matrix. If
$$
\bar{s}_{n-1,1}^{\top}=[s_{1,0} \ 1 \ 0 \ldots 0],
$$
then let $d=-c_0\frac{h_0}{h_1}$ and $e=d\,s_{1,0}$. In this way, the entries of \eqref{eq:struct} read
\begin{align*}
&d\,\mu_1+e\,\mu_0 = 0,\\
&(k\,a+d)\,\mu_{k+1}+(k\,b+e)\,\mu_k+k\,c\,\mu_{k-1}=0, \quad 1\leqslant k \leqslant n-1,
\end{align*}
which proves that $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments} and, thus, $\{\mu_n\}_{n\geqslant 0}$ is classical.
\end{proof}
We briefly recast Theorem \ref{th:firststructurerelation} in terms of polynomials. Let $\{\mu_n\}_{n\geqslant 0}$ be a classical sequence of real numbers, and let $\mathbf{u}$ be the moment functional defined as $\mu_n=\langle \mathbf{u}, x^n\rangle$, $n\geqslant 0$. Let $\{P_n(x)\}_{n\geqslant 0}$ and $\{Q_n(x)\}_{n\geqslant 0}$ be sequences of polynomials with
$$
P_n(x)\,=\,\bar{s}_{n,n}^{\top}\,\mathtt{X}_n \quad \text{and} \quad Q_{n}(x)\,=\,(\bar{s}_{n,n}^{(1)})^{\top}\,\mathtt{X}_n, \quad n\geqslant 0.
$$
Then $\{P_n(x)\}_{n\geqslant 0}$ is an OPS associated with $\mathbf{u}$, and
$$
Q_n(x)\,=\,\frac{P_{n+1}'(x)}{n+1}, \quad n\geqslant 0.
$$
For $a,b,c\in \mathbb{R}$ such that $|a|+|b|+|c|>0$, and $n\geqslant 0$, the matrix $\Phi_n$ is the matrix representation of the linear mapping from $\Pi_n$ to $\Pi_{n+2}$ defined by $p(x) \mapsto \phi(x)\,p(x)$ with $\phi(x)=a\,x^2+b\,x+c$. Therefore,
$$
(\Phi_{n}\,\bar{s}_{n,n}^{(1)})^{\top}\,\mathtt{X}_{n+2}=\phi(x)\,Q_{n}(x).
$$
In this way, by Theorem \ref{th:firststructurerelation}, $\mathbf{u}$ is a classical moment functional if and only if there is a non zero polynomial $\phi(x)$ with $\deg \phi(x) \leqslant 2$, and real numbers $a_n,b_n,c_n$, $n\geqslant 0$, with $c_n\ne 0$, such that
$$
\phi(x)\,Q_n(x)=a_n\,P_{n+2}(x)+b_n\,P_{n+1}(x)+c_n\,P_n(x), \quad n\geqslant 0.
$$
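For illustration (an added check with Legendre data, where $\phi(x)=1-x^2$), one can expand $\phi\,Q_n$ in the monic basis $\{P_j\}$ and confirm that only the coefficients of $P_n$, $P_{n+1}$ and $P_{n+2}$ can be non-zero:

```python
from fractions import Fraction

def mu(m):   # Legendre moments on [-1, 1]
    return Fraction(2, m + 1) if m % 2 == 0 else Fraction(0)

def inner(p, q):   # <u, p*q> from coefficient lists (increasing degree)
    return sum(x * y * mu(i + j) for i, x in enumerate(p) for j, y in enumerate(q))

def monic_ops(N):  # monic OPS P_0, ..., P_N via Gram-Schmidt on 1, x, ..., x^N
    polys = []
    for m in range(N + 1):
        p = [Fraction(0)] * m + [Fraction(1)]
        for q in polys:
            cq = inner(p, q) / inner(q, q)
            for i, qi in enumerate(q):
                p[i] -= cq * qi
        polys.append(p)
    return polys

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def expand(p, basis):   # p = sum_j t[j] * basis[j], basis[j] monic of degree j
    p, t = list(p), [Fraction(0)] * len(p)
    for d in range(len(p) - 1, -1, -1):
        t[d] = p[d]
        for i, qi in enumerate(basis[d]):
            p[i] -= t[d] * qi
    return t

phi = [Fraction(1), Fraction(0), Fraction(-1)]   # phi(x) = 1 - x^2
n = 3
P = monic_ops(n + 2)
Q = [Fraction(i, n + 1) * ci for i, ci in enumerate(P[n + 1])][1:]   # Q_n = P_{n+1}'/(n+1)
t = expand(polymul(phi, Q), P)    # coefficients of phi*Q_n in {P_0, ..., P_{n+2}}
```

In exact arithmetic the coefficients below degree $n$ vanish identically, while the coefficients of $P_n$ and $P_{n+2}$ are non-zero (the coefficient of $P_{n+1}$ also vanishes here, by the symmetry of the Legendre functional).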
\subsection{Second structure relation}
The following characterization is similar to \eqref{eq:firststrel} but it has a dual flavor in the sense that the roles of $\{\bar{s}_{n,j}\}_{j=0}^n$ and $\{\bar{s}_{n,j}^{(1)}\}_{j=0}^n$ are interchanged.
\begin{theorem}\label{th;secondstructurerelation}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers such that $\det G_n\ne 0$ for $n\geqslant 0$. Let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. Then $\{\mu_n\}_{n\geqslant 0}$ is classical if and only if there are real numbers $\kappa_j,\xi_j$, $j\geqslant 0$, such that
\begin{equation}\label{eq:secondstr}
\bar{s}_{n,j}\,=\,\bar{s}_{n,j}^{(1)}+\kappa_j\,\bar{s}_{n,j-1}^{(1)}+\xi_j\,\bar{s}_{n,j-2}^{(1)}, \quad 0\leqslant j \leqslant n,
\end{equation}
where, by convention, we set $\bar{s}_{n,-2}^{(1)}\,=\,\bar{s}_{n,-1}^{(1)}\,=\,\mathtt{0}$.
\end{theorem}
\begin{proof}
We will use the fact that
$$
S_{n,1}^{\top}=\left[\bar{s}_{n,0}^{(1)}\ \bar{s}_{n,1}^{(1)}\, \ldots\, \bar{s}_{n,n}^{(1)} \right]
$$
is a unit upper triangular matrix.
Suppose that $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence. By Corollary \ref{th:higher-Hahn-type}, $\{ \bar{s}_{n,0}^{(1)}, \ldots, \bar{s}_{n,n}^{(1)}\}$ is an orthogonal basis for $\mathbb{R}^{n+1}$. Since $S_{n,1}^{\top}$ is a unit upper triangular matrix, we can write
$$
\bar{s}_{n,j}\,=\,\bar{s}_{n,j}^{(1)}+\sum_{k=0}^{j-1} a_{j,k}\,\bar{s}_{n,k}^{(1)}, \quad 0 \leqslant j \leqslant n,
$$
where
$$
a_{j,k}\,h_k^{(1)}\,=\,(\bar{s}_{n,k}^{(1)})^{\top}\,G_n^{(1)}\,\bar{s}_{n,j}, \quad 0\leqslant k \leqslant j-1.
$$
As before, the absence of a subindex indicating some dependence on $n$ is justified by Theorem \ref{th:choleskySn+1} and Theorem \ref{th:choleskysigma}, which imply that the expressions for $\bar{s}_{n,j}$ and $\bar{s}_{n,k}^{(1)}$ are independent of $n$. Using \eqref{eq:GPhi} and Theorem \ref{th:firststructurerelation}, we get
$$
a_{j,k}\,h_k^{(1)}\,=\,(a_k\,\bar{s}_{n,k+2}+b_k\,\bar{s}_{n,k+1}+c_k\,\bar{s}_{n,k})^{\top}\,G_n\,\bar{s}_{n,j}, \quad 0\leqslant k \leqslant n-2.
$$
From the orthogonality of $\{ \bar{s}_{n,0}, \ldots, \bar{s}_{n,n}\}$, we obtain $a_{j,k}\,=\,0$ for $k\leqslant j-3$. Therefore, \eqref{eq:secondstr} holds with $\kappa_j=a_{j,j-1}$ and $\xi_j=a_{j,j-2}$.
Conversely, suppose that there are real numbers $\kappa_j, \xi_j$, $j\geqslant 0$, such that \eqref{eq:secondstr} holds. For $n\geqslant 0$, define the $(n+1)\times (n+1)$ unit upper triangular matrices
$$
U_0=1, \quad U_1=\begin{bmatrix}
1 & \kappa_1 \\
0 & 1
\end{bmatrix}, \quad U_n=\left[ \begin{array}{c|c}
& \mathtt{0} \\
U_{n-1} & \xi_n \\
& \kappa_n \\
\hline
\mathtt{0} & 1
\end{array}\right], \quad n\geqslant 2.
$$
Then \eqref{eq:secondstr} can be written as
$$
S_n^{\top}\,=\,S_{n,1}^{\top}\,U_n.
$$
Using $H_n=S_n\,G_n\,S_n^{\top}$, we get
$$
U_n\,=\,U_n\,H_n^{-1}\,S_n\,G_n\,S_n^{\top}\,=\,U_n\,H_n^{-1}\,S_n\,G_n\,S_{n,1}^{\top}\,U_n,
$$
which implies
$$
I_{n+1}\,=\,U_n\,H_n^{-1}\,S_n\,G_n\,S_{n,1}^{\top}.
$$
From this equality we get
\begin{equation}\label{eq:a}
S_{n,1}^{-\top}\,=\,U_n\,H_n^{-1}\,S_n\,G_n.
\end{equation}
On one hand, observe that
\begin{equation}\label{eq:b}
\begin{aligned}
N_{n+1}\,S_{n+1}^{\top}&=\,\begin{bmatrix}
\mathtt{0} & N_{n+1}\,\bar{s}_{n+1,1} & N_{n+1}\,\bar{s}_{n+1,2} & \ldots & N_{n+1}\,\bar{s}_{n+1,n+1}
\end{bmatrix}\\
&=\,\begin{bmatrix}
\mathtt{0} & \bar{s}_{n,0}^{(1)} & 2\,\bar{s}_{n,1}^{(1)} & \ldots & (n+1)\,\bar{s}_{n,n}^{(1)}
\end{bmatrix}\\
&=\,S_{n,1}^{\top}\,N_{n+1}.
\end{aligned}
\end{equation}
Combining \eqref{eq:a} and \eqref{eq:b}, we get
$$
N_{n+1}\,=\,U_n\,H_n^{-1}\,S_n\,G_n\,N_{n+1}\,S_{n+1}^{\top}.
$$
On the other hand, we have
$$
\frac{1}{h_1}\,\bar{s}_{n+1,1}^{\top}\,G_{n+1}\,S_{n+1}^{\top}\,=\,\bar{s}_{n,0}^{\top}\,N_{n+1}.
$$
Therefore,
\begin{equation}\label{eq:c}
\frac{1}{h_1}\,\bar{s}_{n+1,1}^{\top}\,G_{n+1}\,=\,\bar{s}_{n,0}^{\top}\,U_n\,H_n^{-1}\,S_n\,G_n\,N_{n+1}.
\end{equation}
If we let
\begin{equation}\label{eq:d}
\begin{bmatrix}
c \\ b \\ a \\ \mathtt{0}
\end{bmatrix} = \frac{\xi_2}{h_2}\,\bar{s}_{n,2}+\frac{\kappa_1}{h_1}\,\bar{s}_{n,1}+\frac{1}{h_0}\,\bar{s}_{n,0}, \quad \text{and} \quad \begin{bmatrix}
e \\ d \\ \mathtt{0}
\end{bmatrix}=-\frac{1}{h_1}\,\bar{s}_{n+1,1},
\end{equation}
then the entries of \eqref{eq:c} read
\begin{align*}
-e\,\mu_0-d\,\mu_{1}&= 0,\\
-e\,\mu_k-d\,\mu_{k+1}&=k\,c\,\mu_{k-1}+k\,b\,\mu_k+k\,a\,\mu_{k+1}, \quad 1\leqslant k \leqslant n+1,
\end{align*}
which proves that $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments} and, thus, $\{\mu_n\}_{n\geqslant 0}$ is classical.
\end{proof}
For a classical sequence $\{\mu_n\}_{n\geqslant 0}$, let $\mathbf{u}$ be the moment functional defined as $\mu_n=\langle \mathbf{u}, x^n\rangle$, $n\geqslant 0$, and let $\{P_n(x)\}_{n\geqslant 0}$ and $\{Q_n(x)\}_{n\geqslant 0}$ be sequences of polynomials with
$$
P_n(x)\,=\,\bar{s}_{n,n}^{\top}\,\mathtt{X}_n \quad \text{and} \quad Q_{n}(x)\,=\,(\bar{s}_{n,n}^{(1)})^{\top}\,\mathtt{X}_n, \quad n\geqslant 0.
$$
Theorem \ref{th;secondstructurerelation} implies that there are real numbers $\kappa_n, \xi_n$, $n\geqslant 0$, such that
$$
P_n(x)\,=\,Q_n(x)+\kappa_n\,Q_{n-1}(x)+\xi_n\,Q_{n-2}(x), \quad n\geqslant 0,
$$
where, by convention, $Q_{-2}(x)=Q_{-1}(x)=0$. Moreover, we deduce from \eqref{eq:d} and Theorem \ref{th:pearson} that $\mathbf{u}$ satisfies $D(\phi\,\mathbf{u})=\psi\,\mathbf{u}$ with
$$
\phi(x)=\frac{\xi_2}{h_2}P_2(x)+\frac{\kappa_1}{h_1}P_1(x)+\frac{1}{h_0}P_0(x) \quad \text{and} \quad \psi(x)=-\frac{1}{h_1}P_1(x).
$$
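Again for illustration (our check, with the same Legendre data as before, for which $\kappa_n=0$ by symmetry): expanding $P_n$ in the basis $\{Q_j\}$ exhibits the second structure relation in exact arithmetic.

```python
from fractions import Fraction

def mu(m):   # Legendre moments on [-1, 1]
    return Fraction(2, m + 1) if m % 2 == 0 else Fraction(0)

def inner(p, q):   # <u, p*q> from coefficient lists (increasing degree)
    return sum(x * y * mu(i + j) for i, x in enumerate(p) for j, y in enumerate(q))

def monic_ops(N):  # monic OPS P_0, ..., P_N via Gram-Schmidt on 1, x, ..., x^N
    polys = []
    for m in range(N + 1):
        p = [Fraction(0)] * m + [Fraction(1)]
        for q in polys:
            cq = inner(p, q) / inner(q, q)
            for i, qi in enumerate(q):
                p[i] -= cq * qi
        polys.append(p)
    return polys

def expand(p, basis):   # p = sum_j t[j] * basis[j], basis[j] monic of degree j
    p, t = list(p), [Fraction(0)] * len(p)
    for d in range(len(p) - 1, -1, -1):
        t[d] = p[d]
        for i, qi in enumerate(basis[d]):
            p[i] -= t[d] * qi
    return t

n = 4
P = monic_ops(n + 1)
# Q_d = P_{d+1}'/(d+1) is monic of degree d
Q = [[Fraction(i, d + 1) * ci for i, ci in enumerate(P[d + 1])][1:] for d in range(n + 1)]
t = expand(P[n], Q)     # P_n = sum_j t[j] Q_j
```

Only $t_n=1$ and $t_{n-2}=\xi_n$ survive; every coefficient below degree $n-2$ vanishes exactly, and $t_{n-1}=\kappa_n=0$ for this symmetric functional.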
\subsection{Rodrigues-type formula}
Recall that classical sequences of real numbers $\{\mu_n\}_{n\geqslant 0}$ satisfy the three-term recurrence relation \eqref{eq:ttr-moments}, which can be written in matrix form as
$$
N_{n+1}^{\top}\,G_n^{(1)}\,\bar{s}_{n,0}+G_{n+1}\,\begin{bmatrix}
e \\ d \\ \mathtt{0}
\end{bmatrix}\,=\,\mathtt{0}, \quad n\geqslant 1,
$$
with $G_n^{(1)}$ as defined in \eqref{def:Gn1}. The following characterization shows that classical sequences satisfy higher order recurrence relations which can be written in matrix form as well.
\begin{theorem}\label{th:rodrigues}
Let $\{\mu_n\}_{n\geqslant 0}$ be a sequence of real numbers such that $\det G_n\ne 0$, $n\geqslant 0$. Let $S_{n}^{-1}\,H_{n}\,S_{n}^{-\top}$ be the Cholesky factorization of $G_{n}$ and let $\bar{s}_{n,0},\bar{s}_{n,1},\ldots,\bar{s}_{n,n}$ denote the columns of $S_{n}^{\top}$. Then $\{\mu_n\}_{n\geqslant 0}$ is classical if and only if there are $a,b,c\in \mathbb{R}$ such that $|a|+|b|+|c|>0$, and non zero real numbers $\varpi_k$, $k\geqslant 1$, such that
\begin{equation}\label{eq:rodrigues}
N_{n+k}^{\top}\cdots N_{n+1}^{\top}\,G_n^{(k)}\,\bar{s}_{n,0}\,=\,\varpi_k\,\,G_{n+k}\,\bar{s}_{n+k,k}, \quad n\geqslant 1, \quad 1\leqslant k \leqslant n,
\end{equation}
with $G_n^{(k)}$ as defined in \eqref{def:Gnk}.
\end{theorem}
\begin{proof}
Suppose that $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence satisfying \eqref{eq:ttr-moments}. For $k\geqslant 1$, by Corollary \ref{th:higher-Hahn-type}, the set $\{ \bar{s}_{n,0}^{(k)}, \ldots, \bar{s}_{n,n}^{(k)}\}$ of vectors in $\mathbb{R}^{n+1}$ with
$$
\bar{s}_{n,j}^{(k)}=\frac{1}{j+1}\,N_{n+1}\,\bar{s}_{n+1,j+1}^{(k-1)}, \quad 0\leqslant j \leqslant n,
$$
where $\bar{s}_{n,j}^{(0)}=\bar{s}_{n,j}$, constitutes an orthogonal basis for $\mathbb{R}^{n+1}$ with respect to the bilinear form associated with $\{\sigma_n^{(k)}\}_{n\geqslant 0}$. Let
$$
S_{n,k}^{\top}=\left[\bar{s}_{n,0}^{(k)}\ \bar{s}_{n,1}^{(k)}\, \ldots\, \bar{s}_{n,n}^{(k)} \right].
$$
On one hand, observe that for $k\geqslant 1$,
\begin{equation*}
\begin{aligned}
N_{n+1}\,S_{n+1,k-1}^{\top}&=\,\begin{bmatrix}
\mathtt{0} & N_{n+1}\,\bar{s}_{n+1,1}^{(k-1)} & N_{n+1}\,\bar{s}_{n+1,2}^{(k-1)} & \ldots & N_{n+1}\,\bar{s}_{n+1,n+1}^{(k-1)}
\end{bmatrix}\\
&=\,\begin{bmatrix}
\mathtt{0} & \bar{s}_{n,0}^{(k)} & 2\,\bar{s}_{n,1}^{(k)} & \ldots & (n+1)\,\bar{s}_{n,n}^{(k)}
\end{bmatrix}\\
&=\,S_{n,k}^{\top}\,N_{n+1}.
\end{aligned}
\end{equation*}
Then, since $\bar{s}^{(k)}_{n,0}=\bar{s}_{n,0}$ for $k\geqslant 1$,
\begin{align*}
S_{n+k}\,N_{n+k}^{\top}\cdots N_{n+1}^{\top}\,G_n^{(k)}\,\bar{s}_{n,0}&=\,N_{n+k}^{\top}\cdots N_{n+1}^{\top}\,S_{n,k}\,G_n^{(k)}\,\bar{s}_{n,0}^{(k)}\\
&=\,k!\,h_0^{(k)}\,\bar{e}_k\in \mathbb{R}^{n+k+1},
\end{align*}
where $\bar{e}_k$ denotes the $(k+1)$-th column of the identity matrix $I_{n+k+1}$. On the other hand,
$$
\frac{1}{h_k}\,S_{n+k}\,G_{n+k}\,\bar{s}_{n+k,k}\,=\,\bar{e}_k.
$$
Since $S_{n+k}$ is invertible, it follows that
$$
N_{n+k}^{\top}\cdots N_{n+1}^{\top}\,G_n^{(k)}\,\bar{s}_{n,0}\,=\,k!\,\frac{h_0^{(k)}}{h_k}\,\,G_{n+k}\,\bar{s}_{n+k,k}, \quad k\geqslant 1.
$$
Hence, \eqref{eq:rodrigues} holds with $\varpi_k=k!\,\frac{h_0^{(k)}}{h_k}$.
Conversely, suppose that there are $a,b,c\in \mathbb{R}$ such that $|a|+|b|+|c|>0$, and non zero real numbers $\varpi_k$, $k\geqslant 1$, such that \eqref{eq:rodrigues} holds. In particular, for $k=1$, we have
\begin{equation}\label{eq:e}
N_{n+1}^{\top}\,G_n^{(1)}\,\bar{s}_{n,0}\,=\,\varpi_1\,\,G_{n+1}\,\bar{s}_{n+1,1}, \quad n\geqslant 0.
\end{equation}
If we let
$$
\begin{bmatrix}
e \\ d
\end{bmatrix} =-\varpi_1\,\bar{s}_{1,1},
$$
and since $\bar{s}_{n+1,1}=\begin{bmatrix}\bar{s}_{1,1} \\ \mathtt{0} \end{bmatrix}$, then the entries of \eqref{eq:e} read
\begin{align*}
-e\,\mu_0-d\,\mu_{1}&= 0,\\
-e\,\mu_k-d\,\mu_{k+1}&=k\,c\,\mu_{k-1}+k\,b\,\mu_k+k\,a\,\mu_{k+1}, \quad 1\leqslant k \leqslant n+1,
\end{align*}
which proves that $\{\mu_n\}_{n\geqslant 0}$ satisfies \eqref{eq:ttr-moments} and, thus, $\{\mu_n\}_{n\geqslant 0}$ is classical.
\end{proof}
We remark that if $\{\mu_n\}_{n\geqslant 0}$ is a classical sequence of real numbers and $\mathbf{u}$ is the moment functional defined by $\mu_n=\langle \mathbf{u}, x^n\rangle$, $n\geqslant 0$, then the entries of \eqref{eq:e} can be written as
$$
\langle D(\phi\,\mathbf{u}), x^k \rangle \,=\,\langle \psi\,\mathbf{u}, x^k\rangle, \quad 0\leqslant k \leqslant n+1,
$$
where $\phi(x)=a\,x^2+b\,x+c$ and $\psi(x)=d\,x+e$, which holds for all $n\geqslant 0$. Hence, $\mathbf{u}$ satisfies \eqref{pearson}. Moreover, it is straightforward, but tedious, to verify that the entries of \eqref{eq:rodrigues} can be written as
$$
\langle D^k(\phi^k\,\mathbf{u}), x^j\rangle \,=\,\langle \varpi_k\,P_k\,\mathbf{u}, x^j\rangle, \quad 0\leqslant j \leqslant n+k,
$$
where $P_k(x)=\bar{s}_{n+k,k}^{\top}\,\mathtt{X}_{n+k}$, $k\geqslant 1$, which holds for $n\geqslant 1$. Hence, $\mathbf{u}$ satisfies $D^k(\phi^k\,\mathbf{u})=\varpi_k\,P_k\,\mathbf{u}$ for $k\geqslant 1$ (which also holds for $k=0$ with $\varpi_0=1$).
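To illustrate this last identity (an added check of ours; the sign and value of $\varpi_k$ depend on the normalizations fixed in \eqref{def:Gnk}), one can verify for the Laguerre-type data $\mu_m=m!$, $\phi(x)=x$ that the quotient $\langle D^k(\phi^k\mathbf{u}), x^j\rangle/\langle P_k\,\mathbf{u}, x^j\rangle$ does not depend on $j$, so that $D^k(\phi^k\mathbf{u})$ is indeed a constant multiple of $P_k\,\mathbf{u}$:

```python
from fractions import Fraction
from math import factorial

# Laguerre-type data: mu_m = m!, phi(x) = x, so <phi^k u, x^m> = (m+k)!.
mu = lambda m: factorial(m)
sigma = lambda k, m: factorial(m + k)

# monic Laguerre polynomials P_1, P_2 (coefficients in increasing degree)
P = {1: [-1, 1], 2: [2, -4, 1]}

def lhs(k, j):
    # <D^k(phi^k u), x^j> = (-1)^k * j!/(j-k)! * <phi^k u, x^{j-k}>
    return 0 if j < k else (-1) ** k * (factorial(j) // factorial(j - k)) * sigma(k, j - k)

def rhs(k, j):
    # <P_k u, x^j> = sum_m c_m mu_{m+j}
    return sum(c * mu(m + j) for m, c in enumerate(P[k]))

# the ratio lhs/rhs is constant in j, i.e. D^k(phi^k u) is proportional to P_k u
ratios = {k: {Fraction(lhs(k, j), rhs(k, j)) for j in range(k, 9)} for k in (1, 2)}
```

Each set of ratios collapses to a single rational number, the constant $\varpi_k$ for these data.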
2302.11964
\section{Introduction}
Let $(M, g)$ be a smooth compact connected Riemannian manifold of dimension $n \ge 2$ with smooth boundary $\Sigma$. The Steklov problem on $(M, g)$ consists in finding the real numbers $\sigma$ and the harmonic functions $f : M \longrightarrow \mathbb{R}$ such that $\partial_\nu f=\sigma f$ on $\Sigma$, where $\nu$ denotes the outward normal on $\Sigma$. Such a $\sigma$ is called a Steklov eigenvalue of $(M, g)$. It is well known that the Steklov spectrum forms a discrete sequence $0= \sigma_0(M,g) < \sigma_1(M,g) \le \sigma_2(M,g) \le \ldots \nearrow \infty$, where each eigenvalue is repeated according to its multiplicity, which is finite. When the context is clear, we simply write $\sigma_k(M)$ for $\sigma_k(M,g)$.
\medskip
Recently, several authors have investigated the Steklov problem on manifolds of revolution \cite{FTY, FS, XC, XC2}. It is known \parencite[Thm. 1.1]{CEG2} that for any connected compact manifold $(M, g)$ of dimension $n \ge 3$, there exists a family $(g_\epsilon)$ of Riemannian metrics conformal to $g$ which coincide with $g$ on the boundary of $M$, such that
\begin{align*}
\sigma_1(M, g_\epsilon) \underset{\epsilon \to 0}{\longrightarrow} \infty.
\end{align*}
Therefore, it is relevant to study manifolds that satisfy certain chosen constraints. A natural constraint is to require the manifolds to be (hyper)surfaces of revolution of the Euclidean space. Some work has already been done on this kind of manifold, see for example \cite{CGG, CV}. We refer to \parencite[Sect. 3.1]{CGG} for a review of these manifolds.
\medskip
This work led to the discovery of lower and upper bounds for the Steklov eigenvalues of hypersurfaces of revolution. Let us begin by recalling some recent results.
\medskip
Let us start with results about (hyper)surfaces of revolution with one boundary component. In dimension $n=2$, it is proved that each surface of revolution $M \subset \mathbb{R}^3$ with boundary $\S^1 \subset \mathbb{R}^2 \times \{0\}$ is Steklov isospectral to the unit disk, see \parencite[Prop. 1.10]{CGG}. In dimension $n \ge 3$, many bounds were given. It is proved that each hypersurface of revolution with one boundary component $M \subset \mathbb{R}^{n+1}$ satisfies $\sigma_k(M) \ge \sigma_k(\mathbb{B}^n)$, where $\mathbb{B}^n$ is the Euclidean ball. The equality holds if and only if $M= \mathbb{B}^n \times \{0\}$, see \parencite[Thm. 1.8]{CGG}.
In \parencite[Thm. 1]{CV}, the authors show the following upper bound: if $M \subset \mathbb{R}^{n+1}$ is a hypersurface of revolution with one boundary component, then for each $k \ge 1$, we have
\begin{align*}
\sigma_{(k)}(M) < k+n-2,
\end{align*}
where $\sigma_{(k)}(M)$ is the $k$th distinct eigenvalue of $M$. Although there exists no equality case, this upper bound is sharp. Indeed, for each $\epsilon >0$ and each $k\ge 1$, there exists a hypersurface of revolution $M_\epsilon$ such that $\sigma_{(k)}(M_\epsilon) > k+n-2-\epsilon$.
\medskip
Note that these results come from the work done on hypersurfaces of revolution that have one boundary component. Therefore, the goal of this paper is to investigate the Steklov problem on a hypersurface of revolution with two boundary components. As it was already done in \cite{CGG} and in \cite{CV}, we will consider hypersurfaces with boundary components isometric to $\S^{n-1}$. Let us begin by defining the context.
\begin{defn} \label{defn : revolution}
An $n$-dimensional compact hypersurface of revolution $(M,g)$ of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ is the warped product $M=[0, L] \times \S^{n-1}$ endowed with the Riemannian metric
\begin{align*}
g(r,p) = dr^2 + h^2(r)g_0(p),
\end{align*}
where $(r,p) \in [0, L] \times \S^{n-1}$, $g_0$ is the canonical metric of the $(n-1)$-sphere of radius one and $h : [0, L] \longrightarrow \mathbb{R}_+^*$ is a smooth function which satisfies:
\begin{enumerate}
\item[(1)] $|h'(r)| \le 1$ for all $r \in [0, L]$;
\item[(2)] $h(0)=h(L)=1$.
\end{enumerate}
Assumption $(1)$ comes from the fact that $(M, g)$ is a hypersurface of the Euclidean space $\mathbb{R}^{n+1}$, see \parencite[Sect. 3.1]{CGG} for more details. Assumption $(2)$ implies that each component of the boundary is isometric to $\S^{n-1}$, see Fig. \ref{fig : surface revolution def}.
\medskip
If $M = [0, L] \times \S^{n-1}$ and $h : [0, L] \longrightarrow \mathbb{R}_+^*$ satisfies the properties above, we say for convenience in this paper that $M$ is a \textit{hypersurface of revolution}, that $g(r, p) = dr^2 + h^2(r)g_0(p)$ is a \textit{metric of revolution on $M$ induced by $h$}, and we call the number $L$ the \textit{meridian length} of $M$.
\end{defn}
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{images/surface_revolution_def.png}
\caption{Since $h(0)=h(L)=1$, the boundary of $M$ consists of two copies of $\S^{n-1}$.}
\label{fig : surface revolution def}
\end{figure}
Some bounds have already been given in this case. Indeed, \parencite[Thm. 1.1]{CGG} states that if $M \subset \mathbb{R}^{n+1}$, $n \ge 3$, is a hypersurface of revolution (in the sense of \Cref{defn : revolution}), and $L>2$ is the meridian length of $M$, then for each $k \ge 1$,
\begin{align*}
\sigma_k(M) \ge \sigma_k(\mathbb{B}^n \sqcup \mathbb{B}^n).
\end{align*}
Moreover, the inequality is sharp. In the case $0 < L \le 2$, we also have a lower bound:
\begin{align*}
\sigma_k(M) \ge \left( 1- \frac{L}{2}\right)^{n-1} \sigma_k(C_L, dr^2 + g_0).
\end{align*}
However, this inequality does not appear to be sharp.
\medskip
On our side, we will look for \textit{upper bounds} for the Steklov eigenvalues of hypersurfaces of revolution. A first natural question that one can ask is the following:
\begin{center}
\textit{Given the dimension $n \ge 3$ and the meridian length $L$ of $M$, does a bound $B_n^k(L)$ exist, such that for all metrics of revolution $g$ on $M$, we have $\sigma_k(M,g) < B_n^k(L)$?}
\end{center}
The answer to this question is positive, see \parencite[Prop. 3.3]{CGG}. This leads to another natural question:
\begin{center}
\textit{Given the dimension $n \ge 3$ and the meridian length $L$ of $M$, does a metric of revolution $g^*$ on $M$ exist, such that $\sigma_k(M,g) \le \sigma_k(M,g^*)$ for all metrics of revolution $g$ on $M$? }
\end{center}
Our investigations show that the answer is negative. Indeed, a sharp upper bound $B_n^k(L)$ exists, but no metric of revolution on $M=[0,L]\times \S^{n-1}$ attains it. However, there exists a non-smooth metric $g^*$, which we call the \textit{degenerated maximizing metric}, which maximizes the $k$th Steklov eigenvalue for each $k \in \mathbb{N}$. Since this metric is non-smooth, $g^*$ is not a metric of revolution on $M$ in the sense of Definition \ref{defn : revolution}.
Endowed with this metric, $(M,g^*)$ can be seen as two annuli glued together; we provide more information about this degenerated maximizing metric $g^*$ and the geometric representation of $(M, g^*)$ in Sect. \ref{sect : proof}.
\medskip
Let us state our first result:
\begin{thm} \label{thm : pricipal}
Let $(M=[0,L] \times \S^{n-1}, g_1)$ be a hypersurface of revolution of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ and meridian length~$L$. Let us suppose $n \ge 3$. Then there exists a metric of revolution $g_2$ on $M$ such that for each $k \ge 1$,
\begin{align*}
\sigma_k(M,g_1) < \sigma_k(M, g_2).
\end{align*}
\end{thm}
This result implies that among all metrics of revolution on $M$, none maximizes the $k$th non zero Steklov eigenvalue.
Nevertheless, given any metric of revolution $g_1$ on $M$, we can iterate Theorem \ref{thm : pricipal} to generate a sequence of metrics $(g_i)_{i=1}^\infty$ on $M$. This sequence converges to a unique non-smooth metric $g^*$ on $M$, which is quite simple (see Sect. \ref{sect : proof}) and which maximizes the $k$th Steklov eigenvalue. That is why we call $g^*$ the degenerated maximizing metric. Hence, as we search for the optimal bounds $B_n^k(L)$, we must use the information contained in $g^*$.
\medskip
Let us start by studying the case $k=1$. We fix $n \ge 3$ and $L >0$ and search for a sharp upper bound $B_n(L)$ for $\sigma_1(M,g)$. In this case, we are able to calculate an expression for $B_n(L)$:
\begin{thm} \label{thm : principal deux}
Let $(M= [0, L] \times \S^{n-1}, g)$ be a hypersurface of revolution of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ and dimension $n \ge 3$. Then the first non trivial Steklov eigenvalue $\sigma_1(M,g)$ is bounded above by a bound that depends only on the dimension $n$ and the meridian length $L$ of $M$:
\begin{align*}
\sigma_1(M,g) < B_n(L) := \min\left\{ \frac{(n-2)\left( 1+L/2 \right)^{n-2}}{\left( 1+L/2 \right)^{n-2}-1}, \frac{(n-1)\left( \left(1+L/2\right)^n -1 \right)}{\left( 1+ L/2 \right)^n +n-1}\right\}.
\end{align*}
Moreover, this bound is sharp: for each $\epsilon >0$, there exists a metric of revolution $g_\epsilon$ on $M$ such that $\sigma_1(M, g_\epsilon) >B_n(L)- \epsilon$.
\end{thm}
We have the following asymptotic behaviour:
\begin{align*}
B_n(L) & \underset{L \to \infty}{\longrightarrow} n-2 \\
B_n(L) & \underset{L \to 0}{\longrightarrow } 0,
\end{align*}
see Fig. \ref{fig : borne}.
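These limits are easy to confirm numerically from the expression of $B_n(L)$ (an illustrative computation of ours, not part of the proofs):

```python
# B_n(L) from the theorem above: the minimum of the two competing expressions.
def B(n, L):
    t = 1 + L / 2
    f1 = (n - 2) * t ** (n - 2) / (t ** (n - 2) - 1)
    f2 = (n - 1) * (t ** n - 1) / (t ** n + n - 1)
    return min(f1, f2)

n = 3
near_zero, near_infinity = B(n, 1e-6), B(n, 1e6)   # B_3(L) at the two extremes
```

One finds $B_3(10^{-6}) \approx 10^{-6}$ and $B_3(10^6) \approx 1 = n-2$, in agreement with the two limits.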
\medskip
Carrying on with our investigations, we study the function $L \longmapsto B_n(L)$. This allows us to find a sharp upper bound $B_n$ such that for every meridian length $L$ and every metric of revolution $g$ on $M$, we have $\sigma_1(M,g) < B_n$:
\begin{corollaire} \label{cor : corollaire du principal deux}
Let $n \ge 3$.
Then there exists a bound $B_n < \infty$ such that for all hypersurfaces of revolution $(M, g)$ of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$, we have
\begin{align*}
\sigma_1(M, g) < B_n := \frac{(n-2)\left(1+\frac{L_1}{2}\right)^{n-2}}{\left(1+\frac{L_1}{2}\right)^{n-2}-1},
\end{align*}
where $L_1$ is the unique real positive solution of the equation
\begin{align*}
\left(1+L/2\right)^{2n-2}-(n-1)\left(1+L/2\right)^n-(n-1)^2\left(1+L/2\right)^{n-2}+n-1=0.
\end{align*}
Moreover, this bound is sharp: for each $\epsilon >0$, there exists a hypersurface of revolution with two boundary components $(M_\epsilon, g_\epsilon)$ such that $\sigma_1(M_\epsilon, g_\epsilon) > B_n - \epsilon$.
\end{corollaire}
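Numerically (an illustrative sketch of ours; the function names are hypothetical), the critical length $L_1$ can be located as the unique crossing point of the two expressions whose minimum defines $B_n(L)$, since one is decreasing and the other increasing in $L$:

```python
def f1(n, L):
    t = 1 + L / 2
    return (n - 2) * t ** (n - 2) / (t ** (n - 2) - 1)

def f2(n, L):
    t = 1 + L / 2
    return (n - 1) * (t ** n - 1) / (t ** n + n - 1)

def critical_length(n, lo=1e-9, hi=100.0, iters=200):
    # f1 decreases (to n-2) and f2 increases (to n-1), so B_n(L) = min(f1, f2)
    # is maximized at the unique crossing f1 = f2; locate it by bisection.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f1(n, mid) > f2(n, mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

L1 = critical_length(3)
B3 = f1(3, L1)    # the sharp uniform upper bound for sigma_1 when n = 3
```

For $n=3$, the crossing lies at $L \approx 3.02$, where both expressions take the common value $\approx 1.66$.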
We say that $L_1$ is a \textit{critical length} associated with $k=1$, see Definition \ref{def : critical length}.
\begin{prop} \label{rem : dimension sur L1}
Let $n \ge 3$, and let $L_1 = L_1(n)$ be the critical length associated with $k =1$. Then we have:
\begin{align*}
\lim_{n\to \infty} L_1(n) = 0
\; \mbox{ and } \;
\lim_{n\to \infty} B_n = \infty.
\end{align*}
\end{prop}
Note that the behaviour of $L_1$ is surprising, since we know \parencite[Prop. 3.3]{CGG} that, for fixed $n$, $L \ll 1$ implies $\sigma_1(M,g) \ll 1$.
\medskip
Now that we have provided information about sharp upper bounds for $\sigma_1(M, g)$, it is natural to wonder what kind of stability properties the hypersurfaces of revolution possess. A first interesting question is the following:
\begin{center}
\textit{Given the information that $\sigma_1(M=[0, L] \times \S^{n-1}, g)$ is close to the sharp upper bound $B_n$, can we conclude that the meridian length $L$ of $M$ is close to the critical length $L_1$?}
\end{center}
The answer to this question is positive. Indeed, we will prove that if $L$ is not close to $L_1$, then $\sigma_1(M, g)$ is not close to $B_n$. Additionally, given the information that $\sigma_1(M, g)$ is $\delta$-close to $B_n$, we will show that the distance between $L$ and $L_1$ is at most $\delta$, up to a constant of proportionality which depends only on the dimension $n$.
\begin{thm} \label{thm : stability 1}
Let $M=[0, L] \times \S^{n-1}$, with $L >0$ and $n \ge 3$. Let us suppose $L \ne L_1$. Then there exists a constant $C(n, L) >0$ such that for every metric of revolution $g$ on $M$, we have
\begin{align*}
B_n - \sigma_1(M, g) \ge C(n, L).
\end{align*}
Moreover, there exists a constant $C(n) > 0$ such that for all $0 < \delta < \frac{B_n - (n-2)}{2}$, we have
\begin{align*}
|B_n - \sigma_1(M, g) | < \delta \implies |L_1 -L| < C(n) \cdot \delta.
\end{align*}
\end{thm}
Here is another interesting question about stability properties:
\begin{center}
\textit{Given the information that $\sigma_1(M, g)$ is close to the sharp upper bound $B_n(L)$, can we conclude that the metric of revolution $g$ is close to the degenerated maximizing metric $g^*$?}
\end{center}
We answer this question positively, by showing that if $g$ is not close to $g^*$, then $\sigma_1(M,g)$ is not close to $B_n(L)$.
\medskip
For this purpose, given $m \in [1, 1+L/2)$, we denote
\begin{align*}
\mathcal{M}_m := & \{\mbox{metrics of revolution } g \mbox{ on } M \\
& \mbox{ induced by a function } h \mbox{ such that } \max_{r \in [0, L]} \{h(r)\} \le m \}.
\end{align*}
$\mathcal{M}_m$ can be thought of as the set of all metrics of revolution that are not close to the degenerated maximizing metric $g^*$, where the qualitative meaning of the word ``close'' is given by the parameter $m$. The larger $m$ is, the closer to $g^*$ the metrics in $\mathcal{M}_m$ can be.
\medskip
We get the following result:
\begin{thm} \label{thm : stability k}
Let $(M=[0, L] \times \S^{n-1}, g)$ be a hypersurface of revolution of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ and dimension $n \ge 3$. Let $m \in [1, 1+L/2)$ and let $\mathcal{M}_m$ be as above. Then there exists a constant $C(n, L, m) >0$ such that for all $g \in \mathcal{M}_m$, we have
\begin{align*}
B_n(L)- \sigma_1(M, g) \ge C(n, L, m).
\end{align*}
\end{thm}
These results solve the case $k=1$. Therefore, it would be interesting to find the same kind of results for any $k \ge 1$. After having calculated sharp upper bounds for some higher values of $k$ in Sect. \ref{subsect : sigma2 sigmam1} and \ref{subsect : sigmam1plus1}, one can see that in order to get an expression for $B_n^k(L)$, we need to distinguish between many cases. As such, giving a general formula for $B_n^k(L)$ or $B_n^k := \sup_{L \in \mathbb{R}_+^*} \{B_n^k(L)\}$ seems difficult. We discuss this in \Cref{rem : borne sans L}.
\begin{defn} \label{def : critical length}
We say that $L_k \in \mathbb{R}_+^*$ is a finite critical length associated with $k$ if we have $B_n^k = B_n^k(L_k)$. We say that $k$ has a critical length at infinity if it satisfies $B_n^k= \lim_{L\to \infty} B_n^k(L)$.
\end{defn}
These lengths are critical in the following sense: if $L_k \in \mathbb{R}_+^*$ is a finite critical length for a certain $k \in \mathbb{N}$ and if we denote by $g^*$ the degenerated maximizing metric on $M_k=[0, L_k]\times \S^{n-1}$, then
\begin{align*}
B_n^k= \sigma_k(M_k,g^*).
\end{align*}
Given $n \ge 3$, there exist some $k$ which have an associated finite critical length. Indeed, thanks to \Cref{cor : corollaire du principal deux}, we know that $k=1$ has this property. Moreover, we know that there exist some $k$ which have a critical length at infinity, see Sect. \ref{subsect : sigma2 sigmam1}.
\medskip
Since we want to study upper bounds for the Steklov eigenvalues, it is then natural to ask what qualitative and quantitative information we can provide about these critical lengths.
\medskip
We get the following result:
\begin{thm} \label{thm : principal 4}
Let $n \ge 3$. Then there exist infinitely many $k \in \mathbb{N}$ which have an associated finite critical length.
Moreover, if we call $(k_i)_{i=1}^\infty \subset \mathbb{N}$ the increasing sequence of such $k$ and if we call $(L_i)_{i=1}^\infty$ the associated sequence of finite critical lengths, then we have
\begin{align*}
\lim_{i\to \infty }L_i=0.
\end{align*}
\end{thm}
The existence of finite critical lengths is surprising when compared with what happens in the case of hypersurfaces of revolution with one boundary component. Indeed, using our vocabulary, we can state that in the case of hypersurfaces of revolution with one boundary component, each $k \in \mathbb{N}$ has a critical length at infinity, see \cite{CV}. Nevertheless, in our case, Theorem \ref{thm : principal 4} guarantees that there exist infinitely many $k \in \mathbb{N}$ which have an associated finite critical length. Moreover, we will show in Sect. \ref{subsect : sigma2 sigmam1} that there exist some $k$ which have a critical length at infinity. However, we do not know whether there are \textit{infinitely many} of them. This consideration leads to the following open question (Question \ref{q : open}):
\begin{center}
\textit{Given $n \ge 3$, are there finitely or infinitely many $k \in \mathbb{N}$ such that $k$ has a critical length at infinity?}
\end{center}
\textbf{Plan of the paper.}
In Sect. \ref{sect : var car}, we recall the variational characterization of the Steklov eigenvalues before giving the expression of eigenfunctions on hypersurfaces of revolution. We will then have enough information to prove Theorem \ref{thm : pricipal} in Sect. \ref{sect : proof}. In Sect. \ref{sect : mixed problems}, we introduce the notion of mixed Steklov-Dirichlet and Steklov-Neumann problems and state some propositions about them. This will allow us to prove Theorem \ref{thm : principal deux}, Corollary \ref{cor : corollaire du principal deux} and \Cref{rem : dimension sur L1} in Sect. \ref{sect : proof 2}. Then we prove the stability properties, i.e.\ Theorem \ref{thm : stability 1} and Theorem \ref{thm : stability k}, in Sect. \ref{sect : stability}. We continue by performing some calculations of sharp upper bounds for higher eigenvalues in Sect. \ref{sect : 6}. We conclude by proving Theorem \ref{thm : principal 4} in Sect. \ref{sect : proof infinity de k avec L fini}.
\medskip
\textbf{Acknowledgment.} I would like to warmly thank my thesis supervisor Bruno Colbois for offering me the opportunity to work on this topic, and for his precious help which enabled me to solve the difficulties encountered. I also want to thank several of my friends and colleagues, Antoine Bourquin, Laura Grave De Peralta and Flavio Salizzoni, for various discussions we had, their help and advice.
\section{Variational characterization of the Steklov eigenvalues and hypersurfaces of revolution} \label{sect : var car}
Let us state some general facts about Steklov eigenfunctions and about hypersurfaces of revolution.
\subsection{Variational characterisation of the Steklov eigenvalues}
Let $(M, g)$ be a Riemannian manifold, with smooth boundary $\Sigma$. Then we can characterize the $k$th Steklov eigenvalue of $M$ by the following formula:
\begin{align} \label{form : car var}
\sigma_k(M, g) = \min \left\lbrace R_g(f) : f \in H^1(M), \; f \perp_\Sigma f_0, f_1, \ldots, f_{k-1} \right\rbrace,
\end{align}
where
\begin{align*}
R_g(f) = \frac{\int_M |\nabla f|^2 dV_g}{\int_\Sigma |f|^2 dV_\Sigma}
\end{align*}
is called the Rayleigh quotient and
\begin{align*}
f \perp_\Sigma f_i \iff \int_\Sigma f f_i dV_\Sigma =0.
\end{align*}
Another way to characterize the $k$th eigenvalue of $M$ is given by the Min-Max principle:
\begin{align} \label{form : min max}
\sigma_k(M,g) = \min_{E \in \mathcal{H}_{k+1}(M)} \max_{0 \neq f \in E} R_g(f),
\end{align}
where $\mathcal{H}_{k+1}$ is the set of all $(k+1)$-dimensional subspaces in the Sobolev space $H^1(M)$.
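As a finite-dimensional analogue of (\ref{form : min max}) (illustration only: the Steklov quotient involves two different quadratic forms, whereas this sketch uses the plain Euclidean quotient), the Courant-Fischer min-max for a symmetric matrix is attained on the span of the first $k+1$ eigenvectors, and any other $(k+1)$-dimensional subspace yields a max at least as large:

```python
# Finite-dimensional Courant-Fischer check with NumPy (illustration only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                # symmetric test matrix
w, V = np.linalg.eigh(A)         # eigenvalues ascending, eigenvectors in columns

k = 2
E = V[:, :k + 1]                 # span of the first k+1 eigenvectors
# For a subspace with orthonormal basis E, the max of the Rayleigh quotient
# over E is the largest eigenvalue of the restriction E^T A E.
max_rq = np.linalg.eigvalsh(E.T @ A @ E).max()
print(max_rq, w[k])              # the two values coincide

# Any other (k+1)-dimensional subspace can only do as well or worse:
F = np.linalg.qr(rng.standard_normal((6, k + 1)))[0]
print(np.linalg.eigvalsh(F.T @ A @ F).max() >= w[k])
```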
\subsection{Hypersurfaces of revolution} \label{sect : manifold}
A particular case of hypersurfaces of revolution
is the following: let $M=[0, L] \times \S^{n-1}$ be endowed with a metric of revolution $g(r,p)=dr^2+h^2(r)g_0(p)$.
Let us suppose that there exists $\epsilon >0$ such that $h(r) = 1+r$ on $[0, \epsilon]$.
Let us consider the connected component of the boundary $\S_0$ associated with $h(0)$. Then the $\epsilon$-neighborhood of $\S_0$ is an annulus with inner radius $1$ and outer radius $1+\epsilon$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{images/metric_interesting.png}
\caption{On $[0, \epsilon]$, we have $h(r)=1+r$ and on $[L-\epsilon, L]$, we have $h(r) = -r+L+1$. This implies that the $\epsilon$-neighborhood of the boundary consists of two disjoint copies of an annulus with inner radius $1$ and outer radius $1+\epsilon$.}
\label{fig : surface de revolution}
\end{figure}
We state now a proposition that provides us with information about the expression of the Steklov eigenfunctions of a hypersurface of revolution.
\medskip
We denote by $0 =\lambda_0 < \lambda_1 \le \lambda_2 \le \ldots \nearrow \infty$ the spectrum of the Laplacian on $(\S^{n-1}, g_0)$ and we consider an associated orthonormal basis of eigenfunctions $(S_j)_{j=0}^\infty$.
\begin{prop} \label{prop : sep variables}
Let $(M, g)$ be a hypersurface of revolution as in \Cref{defn : revolution}. Then each eigenfunction on $M$ can be written as $f_k(r, p) = u_l(r)S_j(p)$, where $u_l$ is a smooth function on $[0, L]$.
\end{prop}
This property is well known for warped product manifolds (and thus for our case of hypersurfaces of revolution) and it is often used, see for example \parencite[Remark 1.1]{DHN}, \parencite[Lemma 3]{Esc}, \parencite[Prop. $3.16$]{T2} or \parencite[Prop. $9$]{XC}.
\medskip
\section{The degenerated maximizing metric} \label{sect : proof}
Let us begin by proving Theorem \ref{thm : pricipal}.
\begin{proof}
We write $g_1(r,p)=dr^2+h_1^2(r)g_0(p)$. Because $h_1$ is smooth and $|h_1'| \leq 1$, we have $h_1(r) < 1+\frac{L}{2}$ for all $r \in [0, L]$. Since $h_1$ is continuous and $[0, L]$ is compact, $h_1$ reaches its maximum on $[0, L]$. Let us call
\begin{align*}
m := \max_{r \in [0, L]} \{h_1(r)\}.
\end{align*}
Notice that $1 \le m < 1+\frac{L}{2}$.
\medskip
Let us define a smooth function $h_2 : [0, L] \longrightarrow \mathbb{R}$ by
\begin{align*}
h_2(r)=
\left\{
\begin{array}{lcl}
1+r & \mbox{if} & 0 \le r \le m-1 \\
1+L-r & \mbox{if} & L-m+1 \le r \le L.
\end{array}
\right.
\end{align*}
For $r\in (m-1, L-m+1)$, we only require that $h_2(r) > m$, that $h_2(L/2)= \frac{1+L/2+m}{2}$ and that
\begin{align*}
g_2(r,p) := dr^2 + h_2^2(r)g_0(p)
\end{align*}
is a symmetric metric of revolution on $M$, i.e.\ for all $r \in [0, L]$, we have $h_2(r)= h_2(L-r)$. Note that we have $h_2 \ge h_1$ and that for $r \in (m-1,L-m+1)$ we have $h_2(r) > h_1(r)$.
\medskip
Besides, using \parencite[Sect.2]{CEG2}, for $f$ a smooth function on $M$, we have
\begin{align*}
R_{g_1}(f) = \frac{\int_M |\nabla f|_{g_1}^2 dV_{g_1}}{\int_\Sigma |f|^2dV_\Sigma} = \frac{\int_M \left((\partial_r f)^2 + \frac{1}{h_1^2}|\tilde{\nabla} f|_{g_0}^2 \right)h_1^{n-1}dV_{g_0}}{\int_\Sigma |f|^2dV_\Sigma}
\end{align*}
and
\begin{align*}
R_{g_2}(f) = \frac{\int_M |\nabla f|_{g_2}^2 dV_{g_2}}{\int_\Sigma |f|^2dV_\Sigma} = \frac{\int_M \left((\partial_r f)^2 + \frac{1}{h_2^2}|\tilde{\nabla} f|_{g_0}^2 \right)h_2^{n-1}dV_{g_0}}{\int_\Sigma |f|^2dV_\Sigma},
\end{align*}
where $\tilde{\nabla} f$ is the gradient of $f$ seen as a function of $p$.
\medskip
Since $n\ge 3$, for every function $f \in H^1(M)$, we have $R_{g_1}(f)\le R_{g_2}(f)$. Using the Min-Max principle, we can conclude that for all $k \ge 1$, we have $\sigma_k(M, g_1) \le \sigma_k(M, g_2)$. However, here we want to show a strict inequality. Let us go on with the proof.
\medskip
Because of the existence of a continuum of points $r$ for which $h_1(r) < h_2(r)$, if $\partial_r f$ does not vanish identically on any interval, then the inequality is strict.
\medskip
Let $k\ge 1$ be an integer. Let $E_{k+1}:=Span(f_{0,2}, \ldots, f_{k,2})$, where $f_{i,2}$ is a Steklov eigenfunction associated with $\sigma_i(M,g_2)$. We can choose these functions such that for all $i = 0, \ldots, k$, we have
\begin{align*}
\int_\Sigma (f_{i,2})^2 dV_\Sigma = 1,
\end{align*}
and hence
\begin{align*}
\int_M |\nabla f_{i,2}|_{g_2}^2dV_{g_2} = \sigma_i(M,g_2).
\end{align*}
Let $f^{*} = \sum_{i=0}^k a_i f_{i,2} \in E_{k+1}$ be such that $\max_{f \in E_{k+1}} R_{g_1}(f) = R_{g_1}(f^*)$.
\medskip
Let us now consider two cases:
\begin{enumerate}
\item Let us suppose $f^* = a_k f_{k,2}$ with $a_k \ne 0$, i.e $f^*$ is an eigenfunction associated with $\sigma_k(M, g_2)$. Then by \Cref{prop : sep variables}, we have $f^*(r,p) = u_j(r) S_j(p)$. Moreover, using \parencite[Prop. 2]{CV}, we know that $u_j$ is a non trivial solution of the ODE
\begin{align*}
\frac{1}{h_2^{n-1}} \frac{d}{dr} \left( h_2^{n-1} \frac{d}{dr}u_j \right) - \frac{1}{h_2^2} \lambda_j u_j = 0.
\end{align*}
\begin{enumerate}
\item If $\lambda_j =0$, which means $S_j = S_0 = const$, then $u_j$ cannot be locally constant. Indeed, otherwise $f^*$ would be locally constant, but since $f^*$ is harmonic, this implies that $f^*$ is constant, see \cite{Aron}. That is not the case because $k \ge 1$.
\item If $\lambda_j \ne 0$, then $u_j$ cannot be locally constant, otherwise the ODE is not satisfied.
\end{enumerate}
Hence $u_j$ is not locally constant, and so $\partial_r f^*$ does not vanish identically on any interval. Therefore, using the Min-Max principle (\ref{form : min max}), we have
\begin{align*}
\sigma_k(M, g_1) \le \max_{f \in E_{k+1}} R_{g_1}(f) = R_{g_1}(f^*) < R_{g_2}(f^*) = \sigma_k(M, g_2).
\end{align*}
\item Let us suppose $f^* = \sum_{i=0}^k a_i f_{i,2}$ with $a_i \ne 0$ for some $0 \le i < k$.
Then by the Min-Max principle (\ref{form : min max}), we have
\begin{align*}
\sigma_k(M,g_1) & \le \max_{f \in E_{k+1}}R_{g_1}(f)
= R_{g_1}(f^*)
\le R_{g_2}(f^*) \\
& = \frac{\int_M \sum_{i=0}^k a_i^2 |\nabla f_{i,2}|^2dV_{g_2}}{\int_\Sigma (\sum_{i=0}^k a_i f_{i,2})^2 dV_\Sigma} \\
& = \frac{\sum_{i=0}^k a_i^2 \sigma_i(M, g_2)}{\sum_{i=0}^k a_i^2 } \quad \mbox{ since } \int_\Sigma f_{i,2} f_{j,2} dV_\Sigma = \delta_{ij} \\
& < \sigma_k(M,g_2).
\end{align*}
\end{enumerate}
In both cases, we have
\begin{align*}
\sigma_k(M, g_1) < \sigma_k(M, g_2).
\end{align*}
\end{proof}
\begin{rem}
We never used the assumption that $g_2$ is a \textit{symmetric} metric of revolution on $M$. However, it will be useful in the proofs of the forthcoming theorems.
\end{rem}
The process that constructs the metric $g_2$ from $g_1$ can then be repeated to create a third metric $g_3$, and so on. This generates a sequence of metrics $(g_i)$, obtained from a sequence of functions $(h_i)$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{images/suite_metriques.PNG}
\caption{On the left, $M=[0,L]\times \S^{n-1}$ is endowed with a metric $g_i$ of the sequence. On the right, $M=[0,L]\times \S^{n-1}$ is endowed with another metric $g_j$ of the sequence, $j >i$. }
\label{fig : image suite metrique}
\end{figure}
The sequence $(h_i)$ converges uniformly to the function
\begin{align*}
h^* : [0, L] & \longrightarrow \mathbb{R} \\
r & \longmapsto \left\{
\begin{array}{lcl}
1+r & \mbox{if} & 0 \le r \le L/2 \\
1+L-r & \mbox{if} & L/2 \le r \le L.
\end{array}
\right.
\end{align*}
This function is not smooth. Hence $(M, g^*)$, where $g^* = dr^2 + h^{*2}(r)g_0$, is not a hypersurface of revolution in the sense of Definition \ref{defn : revolution}. In the limit, $(M, g^*)$ can be seen as the gluing of two annuli of inner radius $1$ and outer radius $1+L/2$. The metric $g^*$ is therefore a maximizing metric, but it is degenerated, since it is induced by the non-smooth function $h^*$. That is why, as already mentioned, we call $g^*$ the \textit{degenerated maximizing metric on $M$}.
\section{Mixed problems} \label{sect : mixed problems}
In this section, we introduce the notion of mixed problems on a manifold. This will allow us to prove Theorem \ref{thm : principal deux} in the next section.
\subsection{Mixed problems and their variational characterization}
Let $(N, \partial N)$ be a smooth compact connected manifold and $A \subset N$ be a domain which satisfies $\partial N \subset \partial A$. Let us suppose that $\partial A$ is smooth and let us call $\partial_{int}A$ the intersection of $\partial A$ with the interior of $N$.
\begin{defn}
The Steklov-Dirichlet problem on $A$ is the eigenvalue problem
\begin{align*}
\left\{
\begin{array}{ll}
\Delta f =0& \mbox{ in } A \\
\partial_\nu f = \sigma f & \mbox{ on } \partial N \\
f =0 & \mbox{ on } \partial_{int}A.
\end{array}
\right.
\end{align*}
\end{defn}
It is well known that this mixed problem possesses solutions that form a discrete sequence
\begin{align*}
0 < \sigma_0^D(A) \le \sigma_1^D(A) \le \ldots \nearrow \infty.
\end{align*}
The variational characterisation of the $k$th Steklov-Dirichlet eigenvalue is the following:
\begin{align*}
\sigma_k^D(A) = \min_{E \in \mathcal{H}_{k+1,0}(A)} \max_{0 \neq f \in E} \frac{\int_A |\nabla f|^2 dV_A}{\int_\Sigma |f|^2 dV_\Sigma},
\end{align*}
where $\mathcal{H}_{k+1, 0}$ is the set of all $(k+1)$-dimensional subspaces in the Sobolev space
\begin{align*}
H_0^1(A)= \{f \in H^1(A) : f=0 \mbox{ on } \partial_{int}A \}.
\end{align*}
\begin{defn}
The Steklov-Neumann problem on $A$ is the eigenvalue problem
\begin{align*}
\left\{
\begin{array}{ll}
\Delta f =0& \mbox{ in } A \\
\partial_\nu f = \sigma f & \mbox{ on } \partial N \\
\partial_\nu f = 0 & \mbox{ on } \partial_{int}A.
\end{array}
\right.
\end{align*}
\end{defn}
It is well known that this mixed problem possesses solutions that form a discrete sequence
\begin{align*}
0 = \sigma_0^N(A) \le \sigma_1^N(A) \le \ldots \nearrow \infty.
\end{align*}
The variational characterisation of the $k$th Steklov-Neumann eigenvalue is the following:
\begin{align*}
\sigma_k^N(A) = \min_{E \in \mathcal{H}_{k+1}(A)} \max_{0 \neq f \in E} \frac{\int_A |\nabla f|^2 dV_A}{\int_\Sigma |f|^2 dV_\Sigma},
\end{align*}
where $\mathcal{H}_{k+1}$ is the set of all $(k+1)$-dimensional subspaces in the Sobolev space $H^1(A)$.
\subsection{Mixed problems on annular domains}
Let $\mathbb{B}_1$ and $\mathbb{B}_R$ be the balls in $\mathbb{R}^n$, with $R >1$ and $n \ge 3$, centered at the origin. The annulus $A_R$ is defined as follows: $A_R = \mathbb{B}_R \backslash \overline{\mathbb{B}}_1$. We say that this annulus is of inner radius $1$ and outer radius $R$. This particular kind of domain will be useful in this paper.
\medskip
For such domains, it is possible to compute explicitly $\sigma_{(k)}^D(A_R)$, which is the $(k)$th eigenvalue of the Steklov-Dirichlet problem on $A_R$, counted without multiplicity.
\medskip
We state here Proposition $4$ of \cite{CV}:
\begin{prop} \label{prop : SD}
For $A_R$ as above, consider the Steklov-Dirichlet problem
\begin{align*}
\left\{
\begin{array}{ll}
\Delta f = 0 & \mbox{ in } A_R \\
\partial_\nu f = \sigma f & \mbox{ on } \partial \mathbb{B}_1 \\
f = 0 & \mbox{ on } \partial \mathbb{B}_R.
\end{array}
\right.
\end{align*}
Then, for $k \ge 0$, the $(k)$th eigenvalue (counted without multiplicity) of this problem is
\begin{align*}
\sigma_{(k)}^D(A_R) = \frac{(k+n-2)R^{2k+n-2}+k}{R^{2k+n-2}-1}.
\end{align*}
\end{prop}
It is possible to get the expression of the eigenfunctions of the Steklov-Dirichlet problem on an annular domain.
\begin{lemma} \label{lem : Dirichlet}
Each eigenfunction $\varphi_l$ of the Steklov-Dirichlet problem on the annulus $A_{R}$ can be expressed as $\varphi_l(r,p)=\alpha_l(r)S_l(p)$, where $S_l$ is an eigenfunction for the $l^{th}$ harmonic of the sphere $\mathbb{S}^{n-1}$.
\end{lemma}
We can compute explicitly $\sigma_{(k)}^N(A_R)$, which is the $(k)$th eigenvalue of the Steklov-Neumann problem on $A_R$, counted without multiplicity.
\medskip
We state now Proposition $5$ of \cite{CV}:
\begin{prop} \label{prop : SN}
For $A_R$ as above, consider the Steklov-Neumann problem
\begin{align*}
\left\{
\begin{array}{ll}
\Delta f = 0 & \mbox{ in } A_R \\
\partial_\nu f = \sigma f & \mbox{ on } \partial \mathbb{B}_1 \\
\partial_\nu f = 0 & \mbox{ on } \partial \mathbb{B}_R.
\end{array}
\right.
\end{align*}
Then, for $k \ge 0$, the $(k)$th eigenvalue (counted without multiplicity) of this problem is
\begin{align*}
\sigma_{(k)}^N(A_R) = k \frac{(k+n-2)(R^{2k+n-2}-1)}{kR^{2k+n-2}+k+n-2}.
\end{align*}
\end{prop}
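Both closed forms can be cross-checked directly. Separating variables, the radial part of a harmonic function on $A_R$ is $u(r) = r^k + c\, r^{-(k+n-2)}$; the constant $c$ is fixed by the condition at $r=R$, and the eigenvalue is $\sigma = -u'(1)/u(1)$, since the outward normal on the inner sphere $\partial \mathbb{B}_1$ points towards the origin. A Python sketch of this check (the derivation is standard; the code is ours):

```python
# Cross-check of Propositions on A_R: compare sigma = -u'(1)/u(1) for the
# radial part u(r) = r^k + c r^{-(k+n-2)} with the two closed forms.

def sigma_D(n, k, R):
    # Dirichlet at r = R: u(R) = 0  =>  c = -R^(2k+n-2).
    c = -R**(2 * k + n - 2)
    return -(k - (k + n - 2) * c) / (1 + c)

def sigma_N(n, k, R):
    # Neumann at r = R: u'(R) = 0  =>  c = k R^(2k+n-2) / (k+n-2).
    c = k * R**(2 * k + n - 2) / (k + n - 2)
    return -(k - (k + n - 2) * c) / (1 + c)

n, k, R = 4, 3, 1.7
m = 2 * k + n - 2
closed_D = ((k + n - 2) * R**m + k) / (R**m - 1)
closed_N = k * (k + n - 2) * (R**m - 1) / (k * R**m + k + n - 2)
print(sigma_D(n, k, R) - closed_D, sigma_N(n, k, R) - closed_N)  # both ~ 0
```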
In the same manner as before, we have the following:
\begin{lemma} \label{lem : Neumann}
Each eigenfunction $\phi_l$ of the Steklov-Neumann problem on the annulus $A_{R}$ can be expressed as $\phi_l(r,p)=\beta_l(r)S_l(p)$, where $S_l$ is an eigenfunction for the $l^{th}$ harmonic of the sphere $\mathbb{S}^{n-1}$.
\end{lemma}
\section{The first non trivial eigenvalue} \label{sect : proof 2}
Using the last section, we can prove Theorem \ref{thm : principal deux}.
\begin{proof}
Let $(M= [0, L] \times \S^{n-1}, g)$ be a hypersurface of revolution, where $L>0$ is the meridian length of $M$. We recall that the boundary $\Sigma$ of $M$ consists of two disjoint copies of $\S^{n-1}$. We want to find a sharp upper bound $B_n(L)$ for $\sigma_1(M, g)$.
\medskip
We call $A_{1+L/2}$ the annulus of inner radius $1$ and outer radius $1+L/2$.
Let $\varphi_0$ be an eigenfunction for the first eigenvalue of the Steklov-Dirichlet problem on $A_{1+L/2}$, i.e.
\begin{align*}
\sigma_0^D(A_{1+L/2}) =\frac{\int_0^{L/2} \int_{\S^{n-1}} \left( (\partial_r \varphi_0)^2 + \frac{1}{(1+r)^2} | \Tilde{\nabla}\varphi_0|^2 \right) (1+r)^{n-1} dV_{g_0}dr}{\int_{\S^{n-1}} \varphi_0^2(0,p) dV_{g_0}}.
\end{align*}
We define a new function
\begin{align} \label{fct : varphi}
\tilde{\varphi_0} : [0,L] \times \S^{n-1} & \longrightarrow \mathbb{R} \\
(r, p) & \longmapsto \left\{
\begin{array}{cc}
\varphi_0(r,p) & \mbox{ if } 0 \le r \le L/2 \\
- \varphi_0(L-r,p) & \mbox{ if } L/2 \le r \le L.
\end{array} \nonumber
\right.
\end{align}
The function $\tilde{\varphi_0}$ is continuous and we can check that
\begin{align*}
\int_\Sigma \tilde{\varphi_0}(r,p) dV_\Sigma & = \int_{\S^{n-1}} \Tilde{\varphi_0}(0,p) dV_{g_0} + \int_{\S^{n-1}} \Tilde{\varphi_0}(L,p) dV_{g_0} \\
& = \int_{\S^{n-1}} \varphi_0(0,p) dV_{g_0} - \int_{\S^{n-1}} \varphi_0(0,p) dV_{g_0} \\
& = 0.
\end{align*}
Hence, thanks to formula (\ref{form : car var}), the function $\Tilde{\varphi_0}$ can be used as a test function for $\sigma_1(M,g)$.
We have
\begin{align} \label{eqn : sigma 0}
\sigma_1(M, g) & \le R_g(\Tilde{\varphi_0}) \nonumber \\
& < R_{\tilde{g}}(\tilde{\varphi_0}) \quad \mbox{ where } \tilde{g}= dr^2 + \tilde{h}^2g_0 \mbox{ comes from Theorem } \ref{thm : pricipal} \nonumber \\
& = \frac{\int_0^L \int_{S^{n-1}} \left( (\partial_r \Tilde{\varphi_0})^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \Tilde{\varphi_0}|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{\int_\Sigma \Tilde{\varphi_0}^2(0,p) dV_{\Sigma}} \nonumber \\
& = \frac{2 \times \int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \Tilde{\varphi_0})^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \Tilde{\varphi_0}|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{2 \times \int_{\S^{n-1}} \Tilde{\varphi_0}^2(0,p) dV_{g_0}} \quad \mbox{ since } \tilde{g} \mbox{ is symmetric} \nonumber \\
& = \frac{ \int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \varphi_0)^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \varphi_0|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{ \int_{\S^{n-1}} \varphi_0^2(0,p) dV_{g_0}} \nonumber \\
& < \frac{\int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \varphi_0)^2 + \frac{1}{(1+r)^2} | \Tilde{\nabla}\varphi_0|^2 \right) (1+r)^{n-1} dV_{g_0}dr}{\int_{\S^{n-1}} \varphi_0^2(0,p) dV_{g_0}} \nonumber \\
& = \sigma_0^D(A_{1+L/2}),
\end{align}
where the second strict inequality comes from the existence of a continuum of points $r \in [0, L/2]$ such that $\tilde{h}(r) < 1+r$.
\medskip
If $\phi_1$ is an eigenfunction for the first non trivial eigenvalue of the Steklov-Neumann problem on $A_{1+L/2}$, i.e
\begin{align*}
\sigma_1^N(A_{1+L/2}) = \frac{\int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \phi_1)^2 + \frac{1}{(1+r)^2} | \Tilde{\nabla} \phi_1 |^2 \right) (1+r)^{n-1} dV_{g_0}dr}{\int_{\S^{n-1}} \phi_1^2(0,p) dV_{g_0}},
\end{align*}
then we define a new function
\begin{align*}
\Tilde{\phi_1} : [0, L] \times \S^{n-1} &\longrightarrow \mathbb{R} \\
(r,p) & \longmapsto \left\{
\begin{array}{cc}
\phi_1(r,p) & \mbox{ if } 0 \le r \le L/2 \\
\phi_1(L-r,p) & \mbox{ if } L/2 \le r \le L.
\end{array}
\right.
\end{align*}
The function $\tilde{\phi_1}$ is continuous and we can check that
\begin{align*}
\int_\Sigma \Tilde{\phi_1}(r,p) dV_\Sigma & = \int_{\S^{n-1}} \Tilde{\phi_1}(0,p) + \int_{\S^{n-1}} \Tilde{\phi_1}(L,p) \\
& = 0 + 0 \\
& = 0,
\end{align*}
hence we can use it as a test function for $\sigma_1(M,g)$.
The same calculations as in (\ref{eqn : sigma 0}) show that
\begin{align*}
\sigma_1(M,g) < \sigma_1^N(A_{1+L/2}).
\end{align*}
Putting all together, we get
\begin{align} \label{ineg : bound}
\sigma_1(M,g) < B_n(L) :&= \min\left\{ \sigma_0^D(A_{1+L/2}), \; \sigma_1^N(A_{1+L/2}) \right\} \nonumber \\
& = \min\left\{ \frac{(n-2)\left( 1+L/2 \right)^{n-2}}{\left( 1+L/2 \right)^{n-2}-1}, \frac{(n-1)\left( \left(1+L/2\right)^n -1 \right)}{\left( 1+ L/2 \right)^n +n-1}\right\}.
\end{align}
We already proved that the inequality in (\ref{ineg : bound}) is strict. We will now prove that the bound $B_n(L)$ is sharp. This means that for each $\epsilon >0$, there exists a metric of revolution $g_\epsilon$ on $M$ such that $\sigma_1(M, g_\epsilon) > B_n(L)-\epsilon$.
\medskip
Let $\epsilon >0$.
Let $M=[0, L] \times \S^{n-1}$ and let $g_\epsilon(r,p) = dr^2+h_\epsilon^2(r)g_0(p)$ be a metric of revolution on $M$ such that
\begin{enumerate}
\item $h_\epsilon$ is symmetric: for all $r \in [0, L]$, we have $h_\epsilon(r)=h_\epsilon(L-r)$;
\item For all $r \in [0, L/2- \delta]$, we have $h_\epsilon(r)=(1+r)$, with $\delta$ small enough to guarantee that for all $r \in [0, L/2]$, we have $(1+r)^{n-1}-h_\epsilon(r)^{n-1} < \frac{\epsilon}{B_n(L)} =: \epsilon^*$. Geometrically, this means that $(M, g_\epsilon)$ looks like two copies of an annulus, see Fig. \ref{fig : image suite metrique}.
\end{enumerate}
Let $f_1$ be an eigenfunction for $\sigma_1(M, g_\epsilon)$. Because $(M, g_\epsilon)$ is symmetric, we can choose $f_1$ symmetric or anti-symmetric, which means that for all $r \in [0, L]$ and $p \in \S^{n-1}$, we have $|f_1(r,p)|=|f_1(L-r,p)|$.
Moreover, it results from the calculations in (\ref{eqn : sigma 0}) that for any symmetric or anti-symmetric function $f$, we have
\begin{align*}
R_{g_\epsilon}(f) = \frac{ \int_0^{{L}/2} \int_{S^{n-1}} \left( (\partial_r f)^2 + \frac{1}{h_\epsilon(r)^2} | \Tilde{\nabla} f|^2 \right) h_\epsilon(r)^{n-1} dV_{g_0}dr}{ \int_{\S^{n-1}} f^2(0,p) dV_{g_0}}.
\end{align*}
We will compare
\begin{align*}
R_{g_\epsilon}(f_1) = \frac{ \int_0^{{L}/2} \int_{S^{n-1}} \left( (\partial_r f_1)^2 + \frac{1}{h_\epsilon(r)^2} | \Tilde{\nabla} f_1|^2 \right) h_\epsilon(r)^{n-1} dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}}
\end{align*}
with
\begin{align*}
R_{A_{1+L/2}}(f_1) = \frac{ \int_0^{{L}/2} \int_{S^{n-1}} \left( (\partial_r f_1)^2 + \frac{1}{(1+r)^2} | \Tilde{\nabla} f_1|^2 \right) (1+r)^{n-1} dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}}.
\end{align*}
If we call $S := R_{A_{1+L/2}}(f_1) - R_{g_\epsilon}(f_1)$, we have
\begin{align*}
S & = \frac{ \int_0^{L/2} \int_{S^{n-1}} (\partial_r f_1)^2 \left( (1+r)^{n-1} - h_\epsilon(r)^{n-1} \right) + | \Tilde{\nabla} f_1|^2 \left( (1+r)^{n-3} -h_\epsilon(r)^{n-3} \right) dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}} \\
& < \frac{ \int_0^{L/2} \int_{S^{n-1}} ((\partial_r f_1)^2 \cdot \epsilon^* + | \Tilde{\nabla} f_1|^2 \cdot \epsilon^* ) dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}} \\
& = \epsilon^* \cdot \frac{ \int_0^{L/2} \int_{S^{n-1}} ((\partial_r f_1)^2 + | \Tilde{\nabla} f_1|^2 ) dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}} \\
& < \epsilon^* \cdot \frac{ \int_0^{L/2} \int_{S^{n-1}} ((\partial_r f_1)^2 h_\epsilon(r)^{n-1} + | \Tilde{\nabla} f_1|^2 h_\epsilon(r)^{n-3} ) dV_{g_0}dr}{ \int_{\S^{n-1}} (f_1)^2(0,p) dV_{g_0}} \quad \mbox{ since $h_\epsilon \ge 1$} \\
& = \epsilon^* \cdot \sigma_1(M, g_\epsilon) \quad \mbox{ since $f_1$ is an eigenfunction} \\
& < \epsilon^* \cdot B_n(L) \\
& = \epsilon.
\end{align*}
Hence, we have
\begin{align} \label{fml : ineg sharp}
R_{A_{1+L/2}}(f_1) < \sigma_1(M,g_\epsilon) + \epsilon.
\end{align}
We now have two cases:
\begin{enumerate}
\item $f_1$ can be written as $f_1(r,p)=u_0(r) S_0(p)$, where $S_0$ is a trivial harmonic function of the sphere, i.e.\ $S_0$ is constant (we can choose $S_0 \equiv 1/\mbox{Vol}(\S^{n-1})$), and $u_0$ is smooth. Hence $f_1$ is constant on $\{0\} \times \S^{n-1}$ and
\begin{align*}
\int_{\{0\} \times \S^{n-1}} f_1(r,p) dV_{g_0} = u_0(0) \ne 0.
\end{align*}
Moreover, since $|f_1(r, p)| = |f_1(L-r,p)|$ for all $r \in [0, L]$ and since
\begin{align*}
\int_{\Sigma} f_1(r,p) dV_\Sigma=0,
\end{align*}
we have
\begin{align*}
f_1\left(\frac{L}{2}, p\right) = 0.
\end{align*}
Therefore, we can use $f_{1_{\vert_{[0, L/2] \times \S^{n-1}}}}$ as a test function for $\sigma_0^D(A_{1+L/2})$, and we can state
\begin{align*}
\sigma_0^D(A_{1+L/2}) \leq R_{A_{1+L/2}}(f_1).
\end{align*}
\item $f_1$ can be written as $f_1(r,p) = u_1(r) S_1(p)$, where $S_1$ is a non constant harmonic of the sphere associated with the first non zero eigenvalue and $u_1$ is smooth. Hence
\begin{align*}
\int_{\{0\} \times \S^{n-1}} f_1(r,p) dV_{g_0} = 0.
\end{align*}
Moreover, we have $u_1(L/2) > 0$.
\medskip
Combined with the fact that $|f_1(r, p)| = |f_1(L-r,p)|$ for all $r \in [0, L]$ and with the smoothness of $f_1$, this allows us to conclude
\begin{align*}
\partial_r f_1\left( \frac{L}{2}, p\right) = 0.
\end{align*}
Therefore, we can use $f_{1_{\vert_{[0, L/2] \times \S^{n-1}}}}$ as a test function for $\sigma_1^N(A_{1+L/2})$ and we can state
\begin{align*}
\sigma_1^N(A_{1+L/2}) \leq R_{A_{1+L/2}}(f_1).
\end{align*}
\end{enumerate}
But we defined $B_n(L)$ as
\begin{align*}
B_n(L) = \min \{\sigma_0^D(A_{1+L/2}), \sigma_1^N(A_{1+L/2}) \}.
\end{align*}
Hence we have
\begin{align*}
B_n(L) \le R_{A_{1+L/2}}(f_1) \stackrel{(\ref{fml : ineg sharp})}{<} \sigma_1(M,g_\epsilon) + \epsilon
\end{align*}
and then
\begin{align*}
\sigma_1(M,g_\epsilon) > B_n(L) - \epsilon.
\end{align*}
\end{proof}
From this result we can prove Corollary \ref{cor : corollaire du principal deux}.
\begin{proof}
As seen before, the inequality (\ref{ineg : bound}) holds, which reads
\begin{align*}
\sigma_1(M,g) < \min\left\{ \frac{(n-2)\left( 1+L/2 \right)^{n-2}}{\left( 1+L/2 \right)^{n-2}-1}, \frac{(n-1)\left( \left(1+L/2\right)^n -1 \right)}{\left( 1+ L/2 \right)^n +n-1}\right\}.
\end{align*}
We consider the two functions
\begin{align*}
L & \longmapsto \frac{(n-2)\left( 1+L/2 \right)^{n-2}}{\left( 1+L/2 \right)^{n-2}-1} = \sigma_0^D(A_{1+L/2}) \\
L & \longmapsto \frac{(n-1)\left( \left(1+L/2\right)^n -1 \right)}{\left( 1+ L/2 \right)^n +n-1} = \sigma_1^N(A_{1+L/2}).
\end{align*}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{images/borne_sigma_1.png}
\caption{Representation of the case $n = 5$. The decreasing smooth curve is $\sigma_0^D(A_{1+L/2})$ while the increasing smooth curve is $\sigma_1^N(A_{1+L/2})$. The dark curve is the bound $B_5(L)$ given by Theorem \ref{thm : principal deux}.
}
\label{fig : borne}
\end{figure}
We can show that $L \longmapsto \sigma_0^D(A_{1+L/2})$ is strictly decreasing in $L$. Indeed, let $L' > L$ and let $\varphi_0$ be an eigenfunction for $\sigma_0^D(A_{1+L/2})$. We consider
\begin{align*}
\bar{\varphi}_0 : [0, \frac{L'}{2}] \times \S^{n-1} & \longrightarrow \mathbb{R}
\end{align*}
the extension by $0$ of $\varphi_0$ to the annulus $A_{1+L'/2}$. We get
\begin{align*}
\sigma_0^D(A_{1+L'/2}) < R_{A_{1+L'/2}}(\Bar{\varphi}_0) = R_{A_{1+L/2}}(\varphi_0) = \sigma_0^D(A_{1+L/2}),
\end{align*}
where the strict inequality comes from the fact that $\Bar{\varphi}_0$ is clearly not an eigenfunction.
\medskip
In the same way, we can show that $L \longmapsto \sigma_1^N(A_{1+L/2})$ is strictly increasing with $L$. Indeed, let $L' > L$ and let $\phi_1$ be an eigenfunction for $\sigma_1^N(A_{1+L'/2})$. We consider
\begin{align*}
\bar{\phi}_1 : [0, \frac{L}{2}] \times \S^{n-1} & \longrightarrow \mathbb{R}
\end{align*}
the restriction of $\phi_1$ to the annulus $A_{1+L/2}$. We get
\begin{align*}
\sigma_1^N(A_{1+L/2}) \le R_{A_{1+L/2}}(\Bar{\phi}_1) < R_{A_{1+L'/2}}(\phi_1) = \sigma_1^N(A_{1+L'/2}).
\end{align*}
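Both monotonicity claims can also be observed numerically. The following Python sketch evaluates the two closed-form expressions displayed above (the function names are ours, not the paper's):

```python
# Numerical check (sketch): with the formulas displayed above,
# L -> sigma_0^D(A_{1+L/2}) is strictly decreasing and
# L -> sigma_1^N(A_{1+L/2}) is strictly increasing, for n >= 3.

def sigma0_D(L, n):
    # Steklov-Dirichlet: (n-2) R^{n-2} / (R^{n-2} - 1), with R = 1 + L/2
    R = 1 + L / 2
    return (n - 2) * R**(n - 2) / (R**(n - 2) - 1)

def sigma1_N(L, n):
    # Steklov-Neumann: (n-1) (R^n - 1) / (R^n + n - 1), with R = 1 + L/2
    R = 1 + L / 2
    return (n - 1) * (R**n - 1) / (R**n + n - 1)

lengths = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
for n in (3, 5, 10):
    d = [sigma0_D(L, n) for L in lengths]
    v = [sigma1_N(L, n) for L in lengths]
    assert all(a > b for a, b in zip(d, d[1:]))  # decreasing
    assert all(a < b for a, b in zip(v, v[1:]))  # increasing
```

The samples also stay within the expected ranges: $\sigma_0^D > n-2$ and $\sigma_1^N < n-1$ at every length.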
Hence the bound we gave possesses a maximum depending only on the dimension $n$, given by
\begin{align*}
\sigma_1(M,g) < B_n := \frac{(n-2)\left( 1+L_1/2 \right)^{n-2}}{\left( 1+L_1/2 \right)^{n-2}-1}
\end{align*}
where $L_1$ is the unique positive solution of the equation
\begin{align*}
\left(1+L/2\right)^{2n-2}-(n-1)\left(1+L/2\right)^n-(n-1)^2\left(1+L/2\right)^{n-2}+n-1=0.
\end{align*}
In order to prove that this bound is sharp, let $\epsilon >0$. Let us define $M_\epsilon := [0, L_1]\times \S^{n-1}$. Theorem \ref{thm : principal deux} guarantees that there exists a metric of revolution $g_\epsilon$ on $M_\epsilon$ such that $\sigma_1(M_\epsilon, g_\epsilon) > B_n(L_1) - \epsilon = B_n - \epsilon$, which ends the proof.
\end{proof}
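Since $L_1$ is characterized as the crossing point of the decreasing curve $\sigma_0^D(A_{1+L/2})$ and the increasing curve $\sigma_1^N(A_{1+L/2})$, it can be located numerically by bisection in $R = 1+L/2$. A Python sketch, with the formulas as displayed earlier in this section (helper names are ours):

```python
# Sketch: locate L_1 and B_n numerically as the crossing of
# sigma_0^D(A_{1+L/2}) (decreasing in R) and sigma_1^N(A_{1+L/2})
# (increasing in R), where R = 1 + L/2.

def crossing_radius(n, lo=1.000001, hi=3.0, iters=100):
    # the difference sigma_0^D - sigma_1^N is positive near R = 1
    # (sigma_0^D blows up there) and negative for large R
    def diff(R):
        sD = (n - 2) * R**(n - 2) / (R**(n - 2) - 1)
        sN = (n - 1) * (R**n - 1) / (R**n + n - 1)
        return sD - sN
    for _ in range(iters):
        mid = (lo + hi) / 2
        if diff(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def L1(n):
    return 2 * (crossing_radius(n) - 1)

def Bn(n):
    R = crossing_radius(n)
    return (n - 2) * R**(n - 2) / (R**(n - 2) - 1)
```

With these formulas one finds, for instance, $B_3 \approx 1.66$ and $L_1(3) \approx 3.0$, and in each dimension the value $B_n$ lands strictly between $n-2$ and $n-1$, as stated below.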
Let us continue by proving \Cref{rem : dimension sur L1}.
\begin{proof}
We know that there exists a unique positive value of $L$, that we call $L_1=L_1(n)$, such that the equality
\begin{align*}
\left(1+L/2\right)^{2n-2}-(n-1)\left(1+L/2\right)^n-(n-1)^2\left(1+L/2\right)^{n-2}+n-1=0
\end{align*}
holds. For obvious reasons, we substitute $(1+L/2)$ by $R$ and we can state that there is a unique value of $R \in (1, \infty)$ such that the equality
\begin{align*}
R^{2n-2}-(n-1)R^n-(n-1)^2R^{n-2}+n-1=0
\end{align*}
holds. This equation is equivalent to
\begin{align*}
R^{n-2}\left( R^{n}-(n-1)R^2-(n-1)^2\right)+n-1=0,
\end{align*}
and we call $R_1=R_1(n)$ its unique solution in $(1, \infty)$.
Let us prove that $R_1(n) \underset{n \to \infty}{\longrightarrow} 1$.
\medskip
Let us call
\begin{align*}
\psi_n(R) := R^{n}-(n-1)R^2-(n-1)^2
\end{align*}
and
\begin{align*}
\Psi_n(R) := R^{n-2}\left( R^{n}-(n-1)R^2-(n-1)^2\right)+n-1.
\end{align*}
Then, for $R_1$ to be such that $\Psi_n(R_1) =0$, it is necessary that $\psi_n(R_1)<0$.
\medskip
Thus,
\begin{align*}
R_1^{n} & < (n-1)R_1^2 + (n-1)^2 \\
& < (n-1)^2(R_1^2 +1) \\
& < (n-1)^2 \cdot 2R_1^2 \hspace{2 cm} \mbox{ since } R_1 > 1 \\
& < (n-1)^3 R_1^2 \hspace{2 cm} \mbox{ since } n-1 \ge 2.
\end{align*}
Therefore,
\begin{align*}
n \ln(R_1) < 3 \ln(n-1) + 2 \ln(R_1)
\end{align*}
so
\begin{align*}
\ln(R_1) < \frac{3 \ln(n-1)}{n-2}
\end{align*}
and
\begin{align*}
R_1 < e^{\frac{3 \ln(n-1)}{n-2}}.
\end{align*}
Remember that we substituted $(1+L/2)$ by $R$; using the elementary inequality $e^x - 1 \le x e^x$ for $x \ge 0$, we can state that
\begin{align*}
L_1(n) & < 2 \left( e^{\frac{3 \ln(n-1)}{n-2}} -1 \right)
\le \frac{6 \ln(n-1)}{n-2} \, e^{\frac{3 \ln(n-1)}{n-2}} = O\!\left( \frac{\ln(n)}{n} \right).
\end{align*}
Therefore,
\begin{align*}
L_1(n) \underset{n \to \infty }{ \longrightarrow } 0.
\end{align*}
Moreover, we have
\begin{align*}
n-1 > B_n
> n-2 \underset{n \to \infty}{\longrightarrow} \infty.
\end{align*}
\end{proof}
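The decay $L_1(n) \to 0$ can also be observed numerically, again by bisecting in $R = 1+L/2$ for the crossing of the two curves $\sigma_0^D$ and $\sigma_1^N$ (a sketch; the function name is ours):

```python
# Sketch: L_1(n) -> 0 as n grows, computed from the crossing
# sigma_0^D(A_{1+L/2}) = sigma_1^N(A_{1+L/2}) with R = 1 + L/2.

def L1(n, lo=1.000001, hi=3.0, iters=100):
    def diff(R):
        # positive near R = 1, negative for large R
        return ((n - 2) * R**(n - 2) / (R**(n - 2) - 1)
                - (n - 1) * (R**n - 1) / (R**n + n - 1))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if diff(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 2 * ((lo + hi) / 2 - 1)
```

Sampling $n = 3, 4, 5, 10, 50, 200$ produces a strictly decreasing sequence of lengths, already below $1/2$ at $n = 200$, consistent with the logarithmic decay proved above.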
\section{Stability properties of hypersurfaces of revolution } \label{sect : stability}
The goal of this section is to prove Theorem \ref{thm : stability 1} and Theorem \ref{thm : stability k}, which ensure some stability properties for the hypersurfaces we are studying in this paper.
\subsection{Proof of Theorem \ref{thm : stability 1}}
Remember that here, we suppose $L \ne L_1$.
\begin{proof}
Let $g$ be any metric of revolution on $M= [0, L] \times \S^{n-1}$. Then we have
\begin{align*}
\sigma_1(M, g) < B_n(L),
\end{align*}
where $B_n(L)$ is given by \Cref{thm : principal deux}.
\medskip
Let us define $C(n, L):= B_n - B_n(L)$, which is strictly positive since we assumed $L \ne L_1$. Then we have
\begin{align*}
B_n - \sigma_1(M, g) \ge B_n - B_n(L) = C(n, L).
\end{align*}
Let $0< \delta < \frac{B_n-(n-2)}{2}$, and let us suppose $|B_n - \sigma_1(M, g)| < \delta$. Therefore, we have
$|B_n - \sigma_1(M, g^*)| < \delta$, where $g^*$ denotes the degenerate maximizing metric on $M$. Let us split our proof in two cases:
\begin{enumerate}
\item We suppose $L_1 < L$. In this case, we have $B_n(L) = \sigma_0^D(A_{1+L/2}) = \frac{(n-2)(1+L/2)^{n-2}}{(1+L/2)^{n-2}-1}$. For practical reasons, let us write
\begin{align*}
R := 1+L/2 \mbox{ and } \sigma_1(R) := \frac{(n-2)R^{n-2}}{R^{n-2}-1}.
\end{align*}
Hence we have $|B_n - \sigma_1(R)| < \delta \implies R \in [R_1, R_\delta]$, where $R_1 = 1+L_1/2$ and $R_\delta$ is defined by $\sigma_1(R_\delta) = B_n - \delta$. Note that $R_\delta$ exists since we assumed $\delta < B_n -(n-2)$. One can calculate that
\begin{align*}
R_\delta = \left( \frac{B_n - \delta}{B_n - (n-2) - \delta}\right)^{\frac{1}{n-2}} \mbox{ and } R_1 = \left( \frac{B_n}{B_n - (n-2)} \right)^{\frac{1}{n-2}}.
\end{align*}
Thus, we have
\begin{align*}
|R_1 - R| \le R_\delta - R_1 = \left( \frac{B_n - \delta}{B_n - (n-2) - \delta}\right)^{\frac{1}{n-2}} - \left( \frac{B_n}{B_n - (n-2)} \right)^{\frac{1}{n-2}}.
\end{align*}
To estimate this expression, we use the identity $x^{n-2} - y^{n-2} = (x-y)(x^{n-3} + x^{n-4}y + \ldots + xy^{n-4} + y^{n-3})$, with $x = R_\delta$ and $y = R_1$. On the one hand, we can compute that
\begin{align*}
R_\delta^{n-2} - R_1^{n-2} = \frac{(n-2)\delta}{(B_n-(n-2)-\delta)(B_n-(n-2))} \le \frac{2(n-2)\delta}{(B_n - (n-2))^2},
\end{align*}
where the inequality comes from the assumption $\delta < \frac{B_n - (n-2)}{2}$. On the other hand, we can compute that
\begin{align*}
R_\delta^{n-3} + R_\delta^{n-4}R_1 + \ldots + R_\delta R_1^{n-4} + R_1^{n-3} \ge (n-2) \cdot \left( \frac{B_n}{B_n -(n-2)}\right)^{\frac{n-3}{n-2}}.
\end{align*}
Therefore,
\begin{align*}
R_\delta - R_1 \le \frac{2/(B_n-(n-2))^2}{\left( B_n/(B_n-(n-2))\right)^{\frac{n-3}{n-2}}} \cdot \delta := C_1(n) \cdot \delta.
\end{align*}
Remember that we wrote $R = 1+L/2$ and we can state that, for $L_1 < L$ and $0 < \delta < \frac{B_n -(n-2)}{2}$, we have
\begin{align*}
B_n - \sigma_1(M, g) < \delta \implies L - L_1 < 2C_1(n) \cdot \delta.
\end{align*}
\item Now we suppose $L < L_1$ and we make a similar calculation, this time with $B_n(L) = \sigma_1^N(A_{1+L/2}) = \frac{(n-1)((1+L/2)^n-1)}{(1+L/2)^n+n-1}$. We obtain a constant
\begin{align*}
C_2(n) := \frac{ (n-2)^2/(n-1-B_n)^2 }{ n \left( ((n-1)B_n+1)/(n-1-B_n) \right)^{ \frac{1}{n} }}
\end{align*}
such that
\begin{align*}
B_n -\sigma_1(M, g) < \delta \implies |L_1 -L| \le 2C_2(n) \cdot \delta.
\end{align*}
\end{enumerate}
Defining $C(n) := 2 \cdot \max \{ C_1(n), C_2(n)\}$ concludes the proof.
\end{proof}
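The chain of estimates in case (1) can be spot-checked numerically. In the sketch below (helper names are ours), $B_n$ is computed as the crossing value of the two curves $\sigma_0^D$ and $\sigma_1^N$, and the inequality $R_\delta - R_1 \le C_1(n)\,\delta$ is tested on sample values of $\delta$ below the threshold $\frac{B_n-(n-2)}{2}$:

```python
# Sketch: verify R_delta - R_1 <= C_1(n) * delta (case (1) of the proof)
# on sample data, with B_n obtained by bisection on R = 1 + L/2.

def Bn(n, lo=1.000001, hi=3.0, iters=100):
    def diff(R):
        return ((n - 2) * R**(n - 2) / (R**(n - 2) - 1)
                - (n - 1) * (R**n - 1) / (R**n + n - 1))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if diff(mid) > 0:
            lo = mid
        else:
            hi = mid
    R = (lo + hi) / 2
    return (n - 2) * R**(n - 2) / (R**(n - 2) - 1)

def check_case1(n, delta):
    B = Bn(n)
    assert delta < (B - (n - 2)) / 2          # standing assumption on delta
    R1 = (B / (B - (n - 2))) ** (1 / (n - 2))
    Rd = ((B - delta) / (B - (n - 2) - delta)) ** (1 / (n - 2))
    C1 = (2 / (B - (n - 2)) ** 2) / (B / (B - (n - 2))) ** ((n - 3) / (n - 2))
    return Rd - R1 <= C1 * delta
```

For instance `check_case1(n, 0.3 * (Bn(n) - (n - 2)))` holds for $n = 3, \dots, 6$, matching the Lipschitz-type stability proved above.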
\subsection{Proof of Theorem \ref{thm : stability k}}
Remember that we fixed $m \in [1, 1+L/2)$ and that we defined $\mathcal{M}_m := \{$metrics of revolution $g$ induced by a function $h$ such that $\max_{r \in [0, L]} \{h(r)\} \le m\}$.
\begin{proof}
Let $g \in \mathcal{M}_m$, and let $h : [0, L] \longrightarrow \mathbb{R}_+^*$ be the function which induces $g$. We define a new function $h_m : [0, L] \longrightarrow \mathbb{R}_+^*$ as follows:
\begin{align*}
h_m(r) = \left\{
\begin{array}{ll}
1+r & \mbox{ if } 0 \le r \le m-1 \\
m & \mbox{ if } m-1 \le r \le L -m+1 \\
1+L-r & \mbox{ if } L-m+1 \le r \le L.
\end{array}
\right.
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{images/stability.PNG}
\caption{Since $g \in \mathcal{M}_m$, the function $h$ which induces $g$ satisfies $h \le h_m$.}
\label{fig: h_m}
\end{figure}
We call $g_m$ the metric induced by $h_m$. Notice that $g_m$ is not a metric of revolution in the sense of \Cref{defn : revolution}, since $h_m$ is not smooth.
\medskip
In the same spirit as in Sect. \ref{sect : proof}, we use \parencite[Sect. 2]{CEG2} to state that for any smooth function $f$ on $M$, we have
\begin{align*}
R_{g}(f) = \frac{\int_M \left((\partial_r f)^2 + \frac{1}{h^2}|\tilde{\nabla} f|_{g_0}^2 \right)h^{n-1}dV_{g_0}}{\int_\Sigma |f|^2dV_\Sigma}
\end{align*}
and
\begin{align*}
R_{g_m}(f) = \frac{\int_M \left((\partial_r f)^2 + \frac{1}{h_m^2}|\tilde{\nabla} f|_{g_0}^2 \right)h_m^{n-1}dV_{g_0}}{\int_\Sigma |f|^2dV_\Sigma}.
\end{align*}
Therefore, since $n \ge 3$ and $h \le h_m$, we have
\begin{align*}
\sigma_1(M, g) \le \sigma_1(M, g_m).
\end{align*}
Let us define $C(n, L, m) := B_n(L) - \sigma_1(M, g_m)$, which is strictly positive. Then we have
\begin{align*}
B_n(L)- \sigma_1(M, g) \ge B_n(L)- \sigma_1(M, g_m) = C(n, L, m).
\end{align*}
\end{proof}
\section{Upper bounds for higher Steklov eigenvalues} \label{sect : 6}
In this section, we want to compute sharp upper bounds for higher Steklov eigenvalues of hypersurfaces of revolution. Therefore, we will have to deal with the multiplicity of the eigenvalues. We write $\lambda_{(k)}, \sigma_{(k)}, \sigma_{(k)}^D, \sigma_{(k)}^N$ for the $k$th eigenvalue counted without multiplicity.
\medskip
Before we can state and prove our results, let us remind useful tools in Sect. \ref{subsect : multiplicity}.
\subsection{The question of multiplicity} \label{subsect : multiplicity}
Given a hypersurface of revolution $(M=[0,L] \times \S^{n-1}, g)$, we want to provide information about the multiplicity of the Steklov eigenvalues of $(M, g)$.
\medskip
For the classical Laplacian problem $\Delta S = \lambda S$ on $(\S^{n-1}, g_0)$, we know \parencite[pp. 160-162]{BGM} that the set of eigenvalues is $\{ \lambda_{(k)} = k(n+k-2) : k \ge 0 \}$. Besides, the multiplicity $m_0$ of $\lambda_{(0)}=0$ is $1$ and the multiplicity of $\lambda_{(k)}$ is
\begin{align} \label{frm : multiplicite}
m_k:=\frac{(n+k-3)(n+k-4) \ldots n(n-1)}{k!}(n+2k-2).
\end{align}
As such, given $k \ge 0$, there exist $m_k$ independent functions $S_k^1, \ldots, S_k^{m_k}$ such that $\Delta S_k^i = \lambda_{(k)} S_k^i, \; i=1, \ldots, m_k$.
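Formula (\ref{frm : multiplicite}) can be rewritten with factorials as $m_k = (n+2k-2)\,\frac{(n+k-3)!}{k!\,(n-2)!}$, valid for $n \ge 3$ and $k \ge 0$. A quick Python sketch to sanity-check it against known values (e.g.\ $m_k = 2k+1$ on $\S^2$ and $m_1 = n$ in general; the function name is ours):

```python
from math import factorial

def mult(n, k):
    # multiplicity m_k of the k-th Laplace eigenvalue on S^{n-1},
    # written with factorials: (n+2k-2) (n+k-3)! / (k! (n-2)!)
    return (n + 2 * k - 2) * factorial(n + k - 3) // (factorial(k) * factorial(n - 2))

assert [mult(3, k) for k in range(6)] == [1, 3, 5, 7, 9, 11]  # S^2: 2k+1
assert all(mult(n, 1) == n for n in range(3, 10))             # first eigenvalue
```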
\medskip
With these notations, given $k \ge 0$, there are $m_k$ independent Steklov-Dirichlet eigenfunctions associated with the eigenvalue $\sigma_{(k)}^D(A_{1+L/2})$, that can be written $\varphi_k^i(r,p) = \alpha_{k}(r)S_{k}^i(p), \; i = 1, \ldots, m_k$. For the Steklov-Neumann case, the eigenfunctions associated with $\sigma_{(k)}^N(A_{1+L/2})$ can be written $\phi_k^i(r,p) = \beta_{k}(r)S_k^i(p), \; i = 1, \ldots, m_k$. Indeed, for each of these problems, the multiplicity of the $(k)$th eigenvalue is exactly $m_k$, as stated by \parencite[Prop. 3]{CV}.
\subsection{Upper bound for \texorpdfstring{$\sigma_{2}(M, g), \ldots, \sigma_{m_1}(M, g)$}{sigma2}} \label{subsect : sigma2 sigmam1}
In this section, we prove the following theorem:
\begin{thm}
Let $(M = [0, L] \times \S^{n-1}, g)$ be a hypersurface of revolution of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ and dimension $n \ge 3$. Let $m_1$ be the multiplicity of the first nontrivial eigenvalue of the classical Laplacian problem on $(\S^{n-1}, g_0)$. Then we have
\begin{align*}
\sigma_2(M, g) = \ldots = \sigma_{m_1}(M, g) < B_n^2(L) = \ldots = B_n^{m_1}(L) := \sigma_{(1)}^N(A_{1+L/2}).
\end{align*}
Moreover, this bound is sharp: for all $\epsilon >0$ there exists a metric of revolution $g_\epsilon$ on $M$ such that
\begin{align*}
\sigma_2(M, g_\epsilon) = \ldots = \sigma_{m_1}(M, g_\epsilon) > \sigma_{(1)}^N(A_{1+L/2}) - \epsilon.
\end{align*}
\end{thm}
\begin{proof}
Let us consider two cases.
\begin{enumerate}
\item $M=[0,L] \times \S^{n-1}$, with $L\le L_1$. We write $f_1^1$ for an eigenfunction associated with $\sigma_1(M,g)$. Since $L \le L_1$, we have $B_n(L)=\sigma_1^N(A_{1+L/2})$ and therefore $f_1^1(r,p)=u_1(r)S_1^1(p)$.
We consider now a new function denoted $f_1^2$ given by
$
f_1^2(r,p)=u_1(r)S_1^2(p).
$
We can check that
\begin{align*}
\int_\Sigma f_1^2(r,p)dV_\Sigma=0 \quad \mbox{ and } \quad \int_\Sigma f_1^1(r,p)f_1^2(r,p)dV_\Sigma =0.
\end{align*}
Moreover, we have
\begin{align*}
\sigma_1(M, g)=R_g(f_1^1) = R_g(f_1^2).
\end{align*}
In the same way, we write
\begin{align*}
f_1^i(r,p) = u_1(r) S_1^i(p), \quad i=1, \ldots, m_1
\end{align*}
and we can conclude
\begin{align*}
\sigma_1(M,g) = \sigma_2(M,g) = \ldots = \sigma_{m_1}(M, g).
\end{align*}
Therefore, we already have a sharp upper bound for these eigenvalues, which is given by $\sigma_1^N(A_{1+L/2})$.
\item $M=[0,L] \times \S^{n-1}$, with $L>L_1$.
We call $f_1$ an eigenfunction associated with $\sigma_1(M,g)$. Since $L > L_1$, we have $B_n(L)=\sigma_0^D(A_{1+L/2})$. Therefore $f_1(r,p)=u_0(r)S_0(p)$.
We now write $f_2^1(r,p) = u_2(r)S_1^1(p)$ for an eigenfunction associated with $\sigma_2(M,g)$. As before, we then consider $m_1$ functions denoted $f_2^i(r,p) = u_2(r)S_1^i(p), i= 1, \ldots, m_1$ and we get
\begin{align*}
\sigma_2(M,g) = \ldots = \sigma_{m_1+1}(M,g).
\end{align*}
Let us consider a function $\phi_1(r,p) = \beta_1(r)S_1(p)$ associated with $\sigma_{(1)}^N(A_{1+L/2})$. In the same spirit as before, we define a function
\begin{align*}
\Tilde{\phi_1} : [0, L] \times \S^{n-1} &\longrightarrow \mathbb{R} \\
(r,p) & \longmapsto \left\{
\begin{array}{cc}
\phi_1(r,p) & \mbox{ if } 0 \le r \le L/2 \\
\phi_1(L-r,p) & \mbox{ if } L/2 \le r \le L.
\end{array}
\right.
\end{align*}
We can check that the function $\tilde{\phi_1}$ is continuous and that $\int_\Sigma \tilde{\phi_1}dV_\Sigma=0$. Moreover, it is immediate that $\int_\Sigma \tilde{\phi_1} f_1 dV_\Sigma = 0$. Hence we can use $\tilde{\phi_1}$ as a test function for $\sigma_2(M, g)$ and as we did before, we can see that
\begin{align*}
\sigma_2(M, g) & \le R_g(\Tilde{\phi_1}) \nonumber \\
& < R_{\tilde{g}}(\tilde{\phi_1}) \nonumber \mbox{ where } \tilde{g} \mbox{ comes from Theorem \ref{thm : pricipal}} \\
& = \frac{\int_0^L \int_{S^{n-1}} \left( (\partial_r \Tilde{\phi_1})^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \Tilde{\phi_1}|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{\int_\Sigma \Tilde{\phi_1}^2(0,p) dV_{g_0}} \nonumber \\
& = \frac{2 \times \int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \Tilde{\phi_1})^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \Tilde{\phi_1}|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{2 \times \int_{\S^{n-1}} \Tilde{\phi_1}^2(0,p) dV_{g_0}} \nonumber \\
& = \frac{ \int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \phi_1)^2 + \frac{1}{\tilde{h}(r)^2} | \Tilde{\nabla} \phi_1|^2 \right) \tilde{h}(r)^{n-1} dV_{g_0}dr}{ \int_{\S^{n-1}} \phi_1^2(0,p) dV_{g_0}} \nonumber \\
& < \frac{\int_0^{L/2} \int_{S^{n-1}} \left( (\partial_r \phi_1)^2 + \frac{1}{(1+r)^2} | \Tilde{\nabla}\phi_1|^2 \right) (1+r)^{n-1} dV_{g_0}dr}{\int_{\S^{n-1}} \phi_1^2(0,p) dV_{g_0}} \nonumber \\
& = \sigma_{(1)}^N(A_{1+L/2}).
\end{align*}
\end{enumerate}
Therefore, regardless of the value of $L>0$, we have
\begin{align*}
\sigma_2(M,g) = \ldots = \sigma_{m_1}(M,g) < B_n^2(L) = \ldots = B_n^{m_1}(L) := \sigma_{(1)}^N(A_{1+L/2}).
\end{align*}
Moreover, this bound is sharp: for all $\epsilon >0$, there exists a metric $g_\epsilon$ on $M=[0,L]\times \S^{n-1}$ such that $\sigma_2(M, g_\epsilon) = \ldots = \sigma_{m_1}(M,g_\epsilon) > \sigma_{(1)}^N(A_{1+L/2}) - \epsilon$. Indeed, as before it is sufficient to choose the metric $g_\epsilon = dr^2 + h_\epsilon^2g_0$, with the function $h_\epsilon$ such that
\begin{enumerate}
\item $h_\epsilon$ is symmetric;
\item For all $r \in [0, L/2- \delta]$, we have $h_\epsilon(r)=1+r$, with $\delta$ small enough.
\end{enumerate}
The proof of sharpness goes as before.
\end{proof}
The upper bound we gave, namely $ \sigma_{(1)}^N(A_{1+L/2})$, depends on the dimension of $M$ and the meridian length $L$ of $M$. It is easy to see that $\sigma_{(1)}^N(A_{1+L/2})$, which is strictly increasing in $L$, satisfies
\begin{align*}
\sigma_{(1)}^N(A_{1+L/2}) = \frac{(n-1)\left( \left(1+L/2\right)^n -1 \right)}{\left( 1+ L/2 \right)^n +n-1} \underset{L \rightarrow \infty}{\longrightarrow} n-1.
\end{align*}
Therefore, we obtain a bound that depends only on the dimension $n$ of $M$. Given a hypersurface of revolution $(M, g)$ with two boundary components, we have
\begin{align*}
\sigma_2(M,g) = \ldots = \sigma_{m_1}(M,g) < B_n^2 = \ldots = B_n^{m_1} := n-1.
\end{align*}
Moreover, this bound is sharp, in the sense that for all $\epsilon >0$, there exists a hypersurface of revolution $(M_\epsilon, g_\epsilon)$ such that $\sigma_2(M_\epsilon,g_\epsilon) = \ldots = \sigma_{m_1}(M_\epsilon,g_\epsilon) > n-1 - \epsilon$.
Indeed, we can choose $L_\epsilon$ large enough for $\sigma_1^N(A_{1+L_\epsilon/2})$ to be $\frac{\epsilon}{2}$-close to $n-1$, and then define $M_\epsilon := [0,L_\epsilon] \times \S^{n-1}$. Now we can put a metric $g_\epsilon$ on $M_\epsilon$ such that $\sigma_2(M_\epsilon, g_\epsilon) = \ldots = \sigma_{m_1}(M_\epsilon, g_\epsilon) > \sigma_1^N(A_{1+L_\epsilon/2}) - \frac{\epsilon}{2}$, and we are done.
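The choice of $L_\epsilon$ in this argument can be made concrete numerically: the displayed formula for $\sigma_{(1)}^N(A_{1+L/2})$ increases to $n-1$ quite fast. A minimal sketch (function name ours):

```python
# Sketch: sigma_1^N(A_{1+L/2}) increases to n-1 as L -> infinity,
# using the closed form displayed above.

def sigma1_N(L, n):
    R = 1 + L / 2
    return (n - 1) * (R**n - 1) / (R**n + n - 1)

# e.g. pick L_eps large enough that the gap n-1 - sigma1_N(L_eps, n)
# drops below a prescribed eps/2
```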
\medskip
Our calculations showed that the eigenvalues $k = 2, \ldots, m_1$ have a critical length at infinity.
\subsection{Upper bound for \texorpdfstring{$\sigma_{m_1+1}(M,g)$}{sigmam1plus1}} \label{subsect : sigmam1plus1}
Now we are interested in the next eigenvalue, namely $\sigma_{m_1+1}(M, g)$. For that reason, let us define a new special meridian length $L_2$: it is the unique solution of the equation $\sigma_0^D(A_{1+L/2}) = \sigma_{(2)}^N(A_{1+L/2})$. Remark that we have $L_2<L_1$. We prove the following theorem:
\begin{thm}
Let $(M = [0, L] \times \S^{n-1}, g)$ be a hypersurface of revolution of the Euclidean space with two boundary components isometric to two copies of $\S^{n-1}$ and dimension $n \ge 3$. Let $m_1$ be the multiplicity of the first nontrivial eigenvalue of the classical Laplacian problem on $(\S^{n-1}, g_0)$. Then we have
\begin{align*}
\sigma_{m_1+1}(M,g) < B_n^{m_1+1}(L) := \left\{
\begin{array}{ll}
\sigma_{(2)}^N(A_{1+L/2}) & \mbox{ if } L \le L_2 \\
\sigma_{(0)}^D(A_{1+L/2}) & \mbox{ if } L_2 < L \le L_1 \\
\sigma_{(1)}^N(A_{1+L/2}) & \mbox{ if } L_1 < L,
\end{array}
\right.
\end{align*}
see Fig. \ref{fig : sigmam1plus1}.
Moreover, this bound is sharp: for all $\epsilon >0$, there exists a metric of revolution $g_\epsilon$ on $M$ such that
\begin{align*}
\sigma_{m_1+1}(M, g_\epsilon) > B_n^{m_1+1}(L) - \epsilon.
\end{align*}
\end{thm}
\begin{proof}
Now we have to distinguish three cases.
\begin{enumerate}
\item $M=[0,L]\times \S^{n-1}$, with $L \le L_2$. We call $f_1^1(r,p) = u_1(r)S_1^1(p), \ldots, f_1^{m_1}(r,p)= u_1(r)S_1^{m_1}(p)$ the Steklov eigenfunctions associated with $\sigma_{(1)}(M,g)= \sigma_1(M,g) = \ldots = \sigma_{m_1}(M,g)$.
There exists an eigenfunction $\phi_2(r,p) = \beta_2(r) S_2(p)$ associated with $\sigma_{(2)}^N(A_{1+L/2})= \sigma_{m_1+1}^N(A_{1+L/2})$. We define a new function
\begin{align*}
\Tilde{\phi_2} : [0, L] \times \S^{n-1} &\longrightarrow \mathbb{R} \\
(r,p) & \longmapsto \left\{
\begin{array}{cc}
\phi_2(r,p) & \mbox{ if } 0 \le r \le L/2 \\
\phi_2(L-r,p) & \mbox{ if } L/2 \le r \le L.
\end{array}
\right.
\end{align*}
This function is continuous, satisfies $\int_\Sigma \tilde{\phi_2}dV_\Sigma =0$ and we can check that for all $i=1, \ldots, m_1$,
\begin{align*}
\int_\Sigma \tilde{\phi_2} f_1^idV_\Sigma =0.
\end{align*}
Hence we can use $\tilde{\phi_2}$ as a test function for $\sigma_{m_1+1}(M,g)$. The same kind of calculations as before show that we have
\begin{align*}
\sigma_{m_1+1}(M, g) < \sigma_{(2)}^N(A_{1+L/2}),
\end{align*}
which is a sharp upper bound.
\item $M=[0,L]\times \S^{n-1}$, with $L_2 < L \le L_1$. We call $f_1^1(r,p) = u_1(r)S_1^1(p), \ldots, f_1^{m_1}(r,p)= u_1(r)S_1^{m_1}(p)$ the Steklov eigenfunctions associated with $\sigma_{(1)}(M,g)= \sigma_1(M,g) = \ldots = \sigma_{m_1}(M,g)$.
There exists an eigenfunction $\varphi_0(r,p) = \alpha_0(r) S_0(p)$ associated with $\sigma_{0}^D(A_{1+L/2})$. We use the function $\tilde{\varphi_0}$ we defined before, namely
\begin{align*}
\tilde{\varphi_0} : [0,L] \times \S^{n-1} & \longrightarrow \mathbb{R} \\
(r, p) & \longmapsto \left\{
\begin{array}{cc}
\varphi_0(r,p) & \mbox{ if } 0 \le r \le L/2 \\
- \varphi_0(L-r,p) & \mbox{ if } L/2 \le r \le L.
\end{array} \nonumber
\right.
\end{align*}
We already saw that $\tilde{\varphi_0}$ is continuous, that $\int_\Sigma \tilde{\varphi_0}dV_\Sigma =0$ and we can check that for all $i = 1, \ldots, m_1$,
\begin{align*}
\int_\Sigma \tilde{\varphi_0}f_1^i dV_\Sigma =0.
\end{align*}
Using $\tilde{\varphi_0}$ as a test function, we get
\begin{align*}
\sigma_{m_1+1}(M,g) < \sigma_{(0)}^D(A_{1+L/2}),
\end{align*}
which is a sharp upper bound.
\item $M=[0,L]\times \S^{n-1}$, with $L_1 < L$. Then $\sigma_1(M, g) < \sigma_2(M, g) = \ldots = \sigma_{m_1+1}(M, g)$. We already dealt with this case before and we saw that
\begin{align*}
\sigma_{m_1+1}(M, g) < \sigma_{(1)}^N(A_{1+L/2}),
\end{align*}
which is a sharp upper bound.
\end{enumerate}
Therefore, given a hypersurface of revolution $(M=[0, L] \times \S^{n-1}, g)$, we have a sharp upper bound for $\sigma_{m_1+1}(M,g)$, depending on $n$ and $L$, given by
\begin{align*}
\sigma_{m_1+1}(M,g) < B_n^{m_1+1}(L) := \left\{
\begin{array}{ll}
\sigma_{(2)}^N(A_{1+L/2}) & \mbox{ if } L \le L_2 \\
\sigma_{(0)}^D(A_{1+L/2}) & \mbox{ if } L_2 < L \le L_1 \\
\sigma_{(1)}^N(A_{1+L/2}) & \mbox{ if } L_1 < L.
\end{array}
\right.
\end{align*}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.48]{images/sigmam1plus1.PNG}
\caption{Representation of the case $n = 5$. The dark curve is the bound given.
}
\label{fig : sigmam1plus1}
\end{figure}
The proof of sharpness goes as before.
\end{proof}
From this, one can once again look for a sharp upper bound for $\sigma_{m_1+1}(M,g)$ that depends only on the dimension $n$ of $M$. This bound is given by
\begin{align} \label{eqn : cas m1plus1}
\sigma_{m_1+1}(M,g) < B_n^{m_1+1} & := \max \left\{ \sigma_0^D(A_{1+{L_2}/2}), n-1\right\} \nonumber \\
& = \left\{
\begin{array}{ll}
\sigma_0^D(A_{1+{L_2}/2}) & \mbox{ if } 3 \le n \le 6 \\
n-1 & \mbox{ if } 7 \le n.
\end{array}
\right.
\end{align}
A proof of (\ref{eqn : cas m1plus1}) is given in Appendix \ref{appen : cas m1plus1}.
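Both the branch selection in (\ref{eqn : cas m1plus1}) and the remark $L_2 < L_1$ can be checked numerically. In the sketch below (helper names ours), $\sigma_2^N$ uses the closed form derived in Appendix \ref{appen : cas m1plus1}, and each special length is located by bisection in $R = 1+L/2$:

```python
# Sketch: locate L_2 (crossing of sigma_0^D and sigma_2^N) and compare
# sigma_0^D(A_{1+L_2/2}) with n-1; also check L_2 < L_1.

def crossing(n, f, g, lo=1.000001, hi=3.0, iters=100):
    # f is decreasing and g increasing in R, so f - g changes sign once
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid, n) > g(mid, n):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sD0(R, n):
    return (n - 2) * R**(n - 2) / (R**(n - 2) - 1)

def sN1(R, n):
    return (n - 1) * (R**n - 1) / (R**n + n - 1)

def sN2(R, n):
    return 2 * n * (R**(n + 2) - 1) / (2 * R**(n + 2) + n)
```

For $n = 3, \dots, 6$ the crossing value $\sigma_0^D(A_{1+L_2/2})$ comes out above $n-1$, and for $n \ge 7$ below it, matching the case distinction above.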
\medskip
Therefore, the eigenvalue $k = m_1+1$ possesses a finite critical length if $3 \le n \le 6$, and it has a critical length at infinity if $n \ge 7$.
\begin{rem} \label{rem : borne sans L}
It is then tempting to search for an expression for $B_n^k := \sup_{L \in \mathbb{R}_+^*} \{B_n^k(L)\}$ for any $n$ and $k$, but it seems to be hard to give an explicit formula for it. Indeed, as Sects. \ref{subsect : sigma2 sigmam1} and \ref{subsect : sigmam1plus1} suggest, the function $ L \longmapsto B_n^k(L)$ is hard to determine and can be either smooth (as in Sect. \ref{subsect : sigma2 sigmam1} for instance) or piecewise smooth (as in Sect. \ref{subsect : sigmam1plus1} for instance). In the second case, there are possibly many irregular points that we have to consider. Moreover, depending on the value of $n$ and $k$, it is possible that
\begin{enumerate}
\item $k$ has a finite critical length, i.e.\ $B_n^k =B_n^k(L_k)$ for a certain $L_k \in \mathbb{R}_+^*$. That is for instance the case of $\sigma_1(M,g)$, or of $\sigma_{m_1+1}(M,g)$ if $n=3, 4, 5 \mbox{ or } 6$;
\item $k$ has a critical length at infinity, i.e.\ $B_n^k = \lim_{L \to \infty} B_n^k(L)$. That is for instance the case of $\sigma_2(M,g), \ldots, \sigma_{m_1}(M,g)$.
\end{enumerate}
Furthermore, we will prove in Sect. \ref{sect : proof infinity de k avec L fini} that for all $n \ge 3$, there are infinitely many $k$ that have a finite critical length associated to them. In all these cases, the function $ L \longmapsto B_n^k(L)$ is piecewise smooth.
\end{rem}
\section{Critical lengths of hypersurfaces of revolution} \label{sect : proof infinity de k avec L fini}
We recall that given $n \ge 3$, we are interested in giving information about the set of finite critical lengths. We want to prove \Cref{thm : principal 4}, i.e that there are infinitely many $k$ such that $B_n^k = B_n^k(L_k)$ for a certain finite $L_k \in \mathbb{R}_+^*$, and that the sequence of critical lengths converges to $0$.
\begin{proof}
As before, for $j \ge 0$, we denote by $m_j$ the number given by the formula (\ref{frm : multiplicite}), which is the multiplicity of $\sigma_{(j)}^D(A_R)$ as well as the multiplicity of $\sigma_{(j)}^N(A_R)$.
Let $i \ge 2$ be an integer. We claim that for all $k$ such that
\begin{align} \label{eqn : k donnant L etoile}
m_0 + \sum_{j=1}^{i-1} 2m_j + m_i < k \le m_0 + \sum_{j=1}^{i} 2m_j,
\end{align}
we have
\begin{align*}
B_n^k = B_n^k(L_k)
\end{align*}
for a certain $L_k \in \mathbb{R}_+^*$.
\medskip
Indeed, let $k$ satisfy (\ref{eqn : k donnant L etoile}). Then, because of the asymptotic behaviour of the functions $L \longmapsto \sigma_{(j)}^D(A_{1+L/2})$ and $L \longmapsto \sigma_{(j)}^N(A_{1+L/2})$, there exists $C >0$ such that for all $L > C$, we have $B_n^k(L) = \sigma_{(i)}^D(A_{1+L/2})$. But we can compute that
\begin{align*}
\frac{\partial}{\partial L} \sigma_{(i)}^D(A_{1+L/2}) = - \frac{4(L+2)(2i+n-2)^2(1+L/2)^{2i+n}}{\left(4(1+L/2)^{2i+n}-L^2-4L-4 \right)^2} < 0,
\end{align*}
which means that the function $L \longmapsto \sigma_{(i)}^D(A_{1+L/2})$ is strictly decreasing. Hence, for $L > L' > C$, we have $B_n^k(L) < B_n^k(L')$. Therefore, for such a $k$, we have
\begin{align*}
B_n^k= B_n^k(L_k)
\end{align*}
with $L_k$ finite, which exactly means that $L_k$ is a finite critical length associated to $k$.
\medskip
The condition given by (\ref{eqn : k donnant L etoile}) is sufficient, but is not a necessary one. Indeed, $k=1$ does not meet condition (\ref{eqn : k donnant L etoile}) but we have $B_n^1 = B_n^1(L_1)$, where $L_1$ is given by Corollary \ref{cor : corollaire du principal deux}.
\medskip
Then, defining $k_1 := 1$ and for each $i \ge 2$, defining $k_i := m_0 + \sum_{j=1}^{i} 2m_j$, we get a sequence $(k_i)_{i=1}^\infty$ such that
\begin{align*}
B_n^{k_i} = B_n^{k_i}(L_i)
\end{align*}
for a certain $L_i \in \mathbb{R}_+^*$ finite.
\medskip
Now we want to prove that the sequence of finite critical lengths $(L_i)_{i=1}^\infty$ converges to $0$.
Let $i \ge 1$. We know that $L_i$ has to be a solution of the equation $\sigma_{(j_1)}^D(A_{1+L/2}) = \sigma_{(j_2)}^N(A_{1+L/2})$ for a certain couple $(j_1, j_2) \in \mathbb{N}^2$. As said before, for $L >C$, we have $B_n^{k_i}(L) = \sigma_{(i)}^D(A_{1+L/2})$, hence $L_i \le L_i^*$, where $L_i^*$ is the unique solution of the equation
\begin{align*}
\sigma_{(i)}^D(A_{1+L/2}) = \sigma_{(i+1)}^N(A_{1+L/2}).
\end{align*}
Therefore, in order to prove that $L_i \underset{i \to \infty}{\longrightarrow} 0$, we prove that $L_i^* \underset{i \to \infty}{\longrightarrow} 0$.
Using Propositions \ref{prop : SD} and \ref{prop : SN}, making some calculations and substituting $(1+L/2)$ by $R$, we can see that solving
\begin{align*}
\sigma_{(i)}^D(A_{1+L/2}) = \sigma_{(i+1)}^N(A_{1+L/2})
\end{align*}
is equivalent to finding the unique value $R_i \in (1, \infty)$ which solves the equation
\begin{align*}
(i+1)R^{2i+n-2}\left( R^{2i+n} - R^2(2i+n-1)- \frac{(i+n-1)(2i+n-1)}{i+1} \right) +(i+n-1)=0.
\end{align*}
Let us call
\begin{align*}
\underbrace{(i+1)R^{2i+n-2} \left( \underbrace{ R^{2i+n} - R^2(2i+n-1)- \frac{(i+n-1)(2i+n-1)}{i+1} }_{=: \psi_i(R)} \right) +(i+n-1)}_{=: \Psi_i(R)}.
\end{align*}
Because $ (i+1)R^{2i+n-2} >0$ and $(i+n-1) >0$, then for $R_i$ to be the solution of the equation $\Psi_i(R) =0$, it is necessary that $\psi_i(R_i) <0$.
\medskip
Then we have
\begin{align*}
R^{2i +n} & < R^2(2i+n-1) + \frac{(i+n-1)(2i+n-1)}{i+1} \\
& < R^2 \left( (2i+n-1) + \frac{(i+n-1)(2i+n-1)}{i+1} \right) \\
& = R^2 \left( \frac{(2i+n-1)(2i+n)}{i+1} \right) \\
& < R^2(2i+n)^2.
\end{align*}
Therefore we have
\begin{align*}
\ln(R) < \frac{2\ln(2i+n)}{2i+n-2}
\end{align*}
and thus
\begin{align*}
R < e^{\frac{2\ln(2i+n)}{2i+n-2}}.
\end{align*}
Remember that we substituted $(1+L/2)$ by $R$, and then the unique solution of the equation
\begin{align*}
\sigma_{(i)}^D(A_{1+L/2}) = \sigma_{(i+1)}^N(A_{1+L/2})
\end{align*}
is a value $L_i^*$ which satisfies
\begin{align*}
0 < L_i^* < 2 \left( e^{\frac{2\ln(2i+n)}{2i+n-2} }-1 \right) \le \frac{8 \ln(2i+n)}{2i+n-2} = O\!\left( \frac{\ln(i)}{i} \right).
\end{align*}
Therefore,
\begin{align*}
L_i \le L_i^* \underset{i \to \infty}{\longrightarrow} 0.
\end{align*}
In particular, for each $\delta >0$ there exists $k_0 \in \mathbb{N}$ such that for each $k >k_0$ which has a finite critical length $L_k$, we have $L_k < \delta$.
\end{proof}
As said before, the assumption on $k$ is sufficient but not necessary. This consideration naturally leads to the following open question:
\begin{question} \label{q : open}
Given $n \ge 3$, are there finitely or infinitely many $k \in \mathbb{N}$ such that $k$ has a critical length at infinity?
\end{question}
\begin{appendices}
\section{Proof of equality (\ref{eqn : cas m1plus1})} \label{appen : cas m1plus1}
We know that there exists a unique $L_2 >0$ such that $\sigma_{(0)}^D(A_{1+L_2/2})= \sigma_{(2)}^N(A_{1+L_2/2})$. We want to determine, depending on the value of $n$, whether $\sigma_{(0)}^D(A_{1+L_2/2})$ is greater or smaller than $n-1$. For this purpose, let us call $L_D$ the unique positive value such that $\sigma_{(0)}^D(A_{1+L_D/2})=n-1$, and let us call $L_N$ the unique positive value such that $\sigma_{(2)}^N(A_{1+L_N/2}) = n-1$.
\medskip
Then we have the following fact: if $L_N < L_D$, we have $\sigma_{(0)}^D(A_{1+L_2/2}) > n-1$. On the contrary, if $L_D < L_N$, we have $\sigma_{(0)}^D(A_{1+L_2/2}) < n-1$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/annexe_2.PNG}
\caption{Representation of the case $n=5$. Since $L_N < L_D$, we have $\sigma_{(0)}^D(A_{1+L_2/2}) > n-1$.}
\label{fig: annexe}
\end{figure}
Hence, let us solve the equation $\sigma_{(0)}^D(A_{1+L_D/2}) = n-1$, i.e.\ let us find the unique $L_D >0$ such that
\begin{align*}
\frac{(n-2)(1+L_D/2)^{n-2}}{(1+L_D/2)^{n-2}-1}=n-1.
\end{align*}
We find
\begin{align*}
L_D = 2(n-1)^\frac{1}{n-2}-2.
\end{align*}
Similarly, solving the equation
\begin{align*}
\frac{2n((1+L_N/2)^{n+2}-1)}{2(1+L_N/2)^{n+2}+n} = n-1
\end{align*}
leads to
\begin{align*}
L_N = 2\left(\frac{n(n+1)}{2}\right)^{\frac{1}{n+2}}-2.
\end{align*}
We have to find for which values of $n$ we have $L_D < L_N$ and vice versa. This leads to the inequation
\begin{align*}
\left( \frac{n(n+1)}{2} \right)^{\frac{1}{n+2}} > \left( n-1 \right)^{\frac{1}{n-2}},
\end{align*}
which is equivalent to
\begin{align*}
\left( \frac{n(n+1)}{2} \right)^{\frac{n-2}{n+2}} > n-1 .
\end{align*}
Let us suppose $n \ge 9$.
\begin{align*}
\left( \frac{n(n+1)}{2} \right)^{\frac{n-2}{n+2}} &> \left( \frac{n^2}{2} \right)^{\frac{n-2}{n+2}} = \frac{1}{2^{\frac{n-2}{n+2}}} n^{\frac{2n-4}{n+2}}
> \frac{1}{2} n^{\frac{2n-4}{n+2}}
\ge \frac{1}{2} n^{\frac{14}{11}},
\end{align*}
where the last inequality holds since $\frac{2n-4}{n+2} \ge \frac{14}{11}$ for $n \ge 9$.
Let us analyse the function $f : [9, \infty ) \longrightarrow \mathbb{R}, x \longmapsto \frac{1}{2}x^{\frac{14}{11}}$.
We have $f(9) > 8$, and $f'(x) = \frac{14}{22} x^{\frac{3}{11}}$. We can compute that $f'(9) >1$ and since $f''(x) = \frac{42}{242}x^{-\frac{8}{11}} >0$ for all $x \in [9, \infty)$, we can conclude $f'(x) >1 $ for all $x \in [9, \infty)$. Hence, $f(x) > x-1$ for all $x \in [9, \infty)$.
\medskip
Therefore, for all integers $n \ge 9$, we have
\begin{align*}
\left( \frac{n(n+1)}{2} \right)^{\frac{n-2}{n+2}} > n-1
\end{align*}
and then $L_D < L_N$. We can manually compute the cases $n=3, \ldots, 8$ and we can conclude that
\begin{align*}
\left\{
\begin{array}{ll}
L_N < L_D & \mbox{ if } 3 \le n \le 6 \\
L_D < L_N & \mbox{ if } 7 \le n.
\end{array}
\right.
\end{align*}
Therefore, we have
\begin{align*}
\max \left\{ \sigma_0^D(A_{1+{L_2}/2}), n-1\right\} & = \left\{
\begin{array}{ll}
\sigma_0^D(A_{1+{L_2}/2}) & \mbox{ if } 3 \le n \le 6 \\
n-1 & \mbox{ if } 7 \le n.
\end{array}
\right.
\end{align*}
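The case analysis above can be double-checked numerically. The following sketch (the function names are ours) evaluates the closed-form expressions for $L_D$ and $L_N$ obtained earlier and confirms that the crossover happens between $n=6$ and $n=7$:

```python
def L_D(n):
    # L_D = 2 (n - 1)^{1/(n-2)} - 2, for integers n >= 3
    return 2 * (n - 1) ** (1 / (n - 2)) - 2

def L_N(n):
    # L_N = 2 (n (n + 1) / 2)^{1/(n+2)} - 2
    return 2 * (n * (n + 1) / 2) ** (1 / (n + 2)) - 2

# L_N < L_D for 3 <= n <= 6, while L_D < L_N for n >= 7.
assert all(L_N(n) < L_D(n) for n in range(3, 7))
assert all(L_D(n) < L_N(n) for n in range(7, 100))
```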
\end{appendices}
\printbibliography
\end{document}
\subsection*{Organization}
\end{section}
\begin{section}{Preliminaries}
In this section we recall notions of sectional category and topological complexity; we refer to
\cite{BasGRT14, CohFW21, CohFW, Far03, Gar19, Rud10, Sva66} for more information.
\subsection*{Sectional category}
Let $p: E \to B$ be a Hurewicz fibration. The \emph{sectional category} of $p$, denoted ${\sf{secat}}[p: E \to B]$ or ${\sf{secat}}(p)$, is defined as the least non-negative integer $k$ such that there exists an open cover
$\{U_0, U_1, \dots, U_k\}$ of $B$
with the property that each open set $U_i$ admits a continuous section $s_i: U_i \to E$ of $p$.
We set ${\sf{secat}}(p)=\infty$ if no finite $k$ with this property exists.
The \emph{generalized sectional category} of a Hurewicz fibration $p: E \to B$, denoted ${\sf{secat}}_g[p: E \to B]$ or ${\sf{secat}}_g(p)$, is defined as the least non-negative integer $k$ such that $B$ admits a partition $$B=F_0 \sqcup F_1 \sqcup ... \sqcup F_k, \quad F_i\cap F_j = \emptyset \text{ \ for \ } i\neq j$$
with each set $F_i$ admitting a continuous section $s_i: F_i \to E$ of $p$.
We set ${\sf{secat}}_g(p)=\infty$ if no such finite $k$ exists.
It is obvious that ${\sf{secat}}(p) \geq {\sf{secat}}_g(p)$ in general. However, as was established in \cite{Gar19},
in many interesting situations there is an equality:
\begin{theorem}
\label{lemma secat betweeen ANR spaces}
Let $p : E \to B$ be a Hurewicz fibration with $E$ and $B$ metrizable absolute neighborhood retracts (ANRs).
Then ${\sf{secat}}(p) = {\sf{secat}}_g(p).$
\end{theorem}
In the sequel the term \lq\lq fibration\rq\rq \ will always mean \lq\lq Hurewicz fibration\rq\rq, unless otherwise stated explicitly.
The following Lemma will be used later in the proofs.
\begin{lemma}\label{lm:secat} (A) If for two fibrations $p: E\to B$ and $p':E'\to B$ over the same base $B$ there exists a continuous map $f$ shown in the following diagram
\begin{center}
$\xymatrix{
E \ar[dr]_{ p} \ar[rr]^{f} && E' \ar[dl]^{p'} \\
& B
}$
\end{center}
then ${\sf{secat}}(p)\ge {\sf{secat}}(p')$.
(B) If a fibration $p: E\to B$ can be obtained as a pull-back from another fibration $p': E'\to B'$ then
${\sf{secat}}(p)\le {\sf{secat}}(p')$.
(C) Suppose that for two fibrations $p: E\to B$ and $p': E'\to B'$ there exist continuous maps $f, g, F, G$ shown in the commutative diagram
\begin{center}
$\xymatrix{
E\ar[r]^F\ar[d]_p & E'\ar[d]^{p'} \ar[r]^G & E\ar[d]^p\\
B \ar[r]_f & B'\ar[r]_{g}& B,
}$
\end{center}
such that $g\circ f: B\to B$ is homotopic to the identity map ${\sf{Id}}_B: B\to B$. Then
${\sf{secat}}(p)\le {\sf{secat}}(p')$.
\end{lemma}
\begin{proof} Statements (A) and (B) are well-known and follow directly from the definition of sectional category. Below we give the proof of (C) which uses (A) and (B).
Consider the fibration $q: \bar E \to B$ induced by $f: B\to B'$ from $p': E'\to B'$. Here $\bar E=\{(b, e')\in B\times E'; f(b)=p'(e')\}$ and $q(b, e')=b$. Then
\begin{eqnarray}\label{sec11}
{\sf{secat}}(q)\le {\sf{secat}}(p')
\end{eqnarray}
by statement (B).
Consider the map $\bar G: \bar E\to E$ given by $\bar G(b, e')= G(e')$ for $(b, e')\in \bar E$. Then
$$(p\circ \bar G)\ (b, e') = p(G(e')) = g(p'(e')) = g(f(b)) = ((g\circ f)\circ q)\ (b, e')$$
and thus $p\circ \bar G = (g\circ f)\circ q $ and using the assumption $g\circ f\simeq {\sf{Id}}_B$ we obtain $p\circ \bar G \simeq q$.
Let $h_t: B\to B$ be a homotopy with $h_0=g\circ f$ and $h_1={\sf{Id}}_B$, $t\in I$.
Using the homotopy lifting property, we obtain a homotopy
$\bar G_t: \bar E\to E$, such that $\bar G_0=\bar G$ and $p\circ \bar G_t = h_t\circ q$. The map $\bar G_1: \bar E \to E$ satisfies $p\circ \bar G_1= q$; in other words, $\bar G_1$ appears in the commutative diagram
\begin{center}
$\xymatrix{
\bar E \ar[dr]_{ q} \ar[rr]^{\bar G_1} && E \ar[dl]^{p} \\
& B
}$
\end{center}
Applying to this diagram statement (A) we obtain the inequality ${\sf{secat}}(p) \le {\sf{secat}}(q)$ which together with inequality (\ref{sec11}) implies ${\sf{secat}}(p) \le {\sf{secat}}(p')$, as claimed.
\end{proof}
\subsection*{Topological complexity}
Let $X$ be a path-connected topological space. Consider the path space $X^I$ (i.e. the space of all continuous
maps $I=[0,1] \to X$ equipped with the compact-open topology) and the fibration
$$\pi : X^I \to X \times X, \quad \alpha \mapsto (\alpha(0), \alpha(1)).$$
The {\it topological complexity} ${\sf{TC}}(X)$ of $X$ is defined as ${\sf{TC}}(X):={\sf{secat}}(\pi)$, cf. \cite{Far03}. For information on recent developments related to the notion of ${\sf{TC}}(X)$ we refer the reader to \cite{GraLV}, \cite{Dra}.
For any $r\ge 2$, fix $r$ points $0\le t_1<t_2<\dots <t_r\le 1$ (which we shall call the {\it \lq\lq time schedule\rq\rq}) and consider the evaluation map
\begin{eqnarray}\label{pir}
\pi_r : X^I \to X^r, \quad \alpha \mapsto \left(\alpha(t_1), \alpha(t_2), \dots, \alpha(t_r)\right), \quad \alpha \in X^I.
\end{eqnarray}
Typically, one takes $t_i=(i-1)(r-1)^{-1}$.
The {\it $r$-th sequential topological complexity} is defined as ${\sf{TC}}_r(X):={\sf{secat}}(\pi_r)$; this invariant was originally introduced by Rudyak \cite{Rud10}.
It is known that ${\sf{TC}}_r(X)$ is a homotopy invariant which vanishes if and only if the space $X$ is contractible. Moreover,
${\sf{TC}}_{r+1}(X)\ge {\sf{TC}}_r(X)$ and ${\sf{TC}}(X)={\sf{TC}}_2(X)$.
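For orientation we recall the known values for spheres (see \cite{BasGRT14}): for any $r\ge 2$,
\begin{align*}
{\sf{TC}}_r(S^m) = \left\{
\begin{array}{ll}
r-1 & \mbox{ if } m \mbox{ is odd}, \\
r & \mbox{ if } m \mbox{ is even.}
\end{array}
\right.
\end{align*}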
\subsection*{Parametrized topological complexity}
For a Hurewicz fibration $p : E \to B$ denote by $E^I_B\subset E^I$ the space of all paths $\alpha: I\to E$ such that $p\circ \alpha: I\to B$ is a constant path. Let $E^2_B\subset E\times E$ denote the space of pairs $(e_1, e_2)\in E^2$ satisfying $p(e_1)=p(e_2)$.
Consider the fibration
$$\Pi: E^I_B \to E^2_B = E\times_B E, \quad \alpha \mapsto (\alpha(0), \alpha(1)).$$
The fibre of $\Pi$ is the loop space $\Omega X$ where $X$ is the fibre of the original fibration $p:E\to B$.
The following notion was introduced in a recent paper \cite{CohFW21}:
\begin{definition}
The {\it parametrized topological complexity} ${\sf{TC}}[p : E \to B]$ of the fibration $p : E \to B$ is defined as
$${\sf{TC}}[p : E \to B]={\sf{secat}}[\Pi: E^I_B \to E^2_B].$$
\end{definition}
Parametrized motion planning algorithms are universal and flexible: they are capable of functioning
under a variety of external conditions, which are parametrized by the points of the base $B$.
We refer to \cite{CohFW21} for more detail and examples.
If $B'\subset B$ and $E'=p^{-1}(B')$ then obviously ${\sf{TC}}[p : E \to B] \geq {\sf{TC}}[p' : E' \to B']$ where $p'=p|_{E'}$. In particular, restricting to a single fibre we obtain $${\sf{TC}}[p : E \to B] \geq {\sf{TC}}(X).$$
\end{section}
\section{The concept of sequential parametrized topological complexity}
In this section we define a new notion of sequential parametrized topological complexity and establish its basic properties.
Let $p : E \to B$ be a Hurewicz fibration with fibre $X$. Fix an integer $r\ge 2$ and denote
$$E^r_B= \{(e_1, \cdots, e_r)\in E^r; \, p(e_1)=\cdots = p(e_r)\}.$$
Let $E^I_B\subset E^I$ be as above
the space of all paths $\alpha: I\to E$ such that $p\circ \alpha: I\to B$ is constant. Fix $r$ points
$$0\le t_1<t_2<\dots <t_r\le 1$$ in $I$ (for example, one may take $t_i=(i-1)(r-1)^{-1}$ for $i=1, 2, \dots, r$), which will be called the {\it time schedule}.
Consider the evaluation map
\begin{eqnarray}\label{Pir}
\Pi_r : E^I_B \to E^r_B, \quad \Pi_r(\alpha) = (\alpha(t_1), \alpha(t_2), \dots, \alpha(t_r)).\end{eqnarray}
$\Pi_r$ is a Hurewicz fibration, see \cite[Appendix]{CohFW}; the fibre of $\Pi_r$ is $(\Omega X)^{r-1}$.
A section $s: E^r_B \to E^I_B$ of the fibration $\Pi_r$ can be interpreted as a parametrized motion planning algorithm, i.e. as
a function which assigns to every sequence of points $(e_1, e_2, \dots, e_r)\in E^r_B$ a continuous path $\alpha: I\to E$ (motion of the system) satisfying $\alpha(t_i)=e_i$ for every $i=1, 2, \dots, r$ and such that the path
$p\circ \alpha: I \to B$ is constant. The latter condition means that the system moves under constant external conditions
(such as positions of the obstacles).
Typically $\Pi_r$ does not admit continuous sections; then the
motion planning algorithms are necessarily discontinuous.
The following definition gives a measure of complexity of sequential parametrized motion planning algorithms. This concept is the main character of this paper.
\begin{definition}\label{def:main}
The {\it $r$-th sequential parametrized topological complexity} of the fibration $p : E \to B$, denoted ${\sf{TC}}_r[p : E \to B]$, is defined as the sectional category of the fibration $\Pi_r$, i.e.
\begin{eqnarray}
{\sf{TC}}_r[p : E \to B]:={\sf{secat}}(\Pi_r).
\end{eqnarray}
\end{definition}
In more detail, ${\sf{TC}}_r[p : E \to B]$ is the minimal integer $k$ such that there is an open cover
$\{U_0, U_1, \dots, U_k\}$ of $E^r_B$ with the property that each open set $U_i$
admits a continuous section $s_i : U_i \to E^I_B$ of $\Pi_r$.
Let $B'\subset B$ be a subset and let $E'=p^{-1}(B')$ be its preimage, then obviously $${\sf{TC}}_r[p: E \to B]\geq {\sf{TC}}_r[p': E' \to B']$$
where $p'=p|_{E'}$.
In particular, taking $B'$ to be a single point, we obtain $${\sf{TC}}_r[p: E \to B] \geq {\sf{TC}}_r(X),$$ where $X$ is the fibre of $p$.
\begin{example}\label{example para tc trivial fibration}
Let $p: E \to B$ be a trivial fibration with fibre $X$, i.e. $E=B\times X$.
In this case we have $E^r_B=B\times X^r$, $E^I_B= B\times X^I$ and the map $\Pi_r: E_B^I\to E^r_B$ becomes
$$\Pi_r: B\times X^I\to B\times X^r, \quad \Pi_r= {\sf{Id}}_B\times \pi_r,$$
where ${\sf{Id}}_B: B\to B$ is the identity map and $\pi_r$ is the fibration (\ref{pir}). Thus we obtain in this example
$${\sf{TC}}_{r}[p: E \to B]= {\sf{TC}}_r(X),$$
i.e. for the trivial fibration the sequential parametrized topological complexity equals the sequential topological complexity of the fibre.
\end{example}
\begin{prop}\label{princ}
Let $p: E \to B$ be a principal bundle with a connected topological group $G$ as fibre. Then $${\sf{TC}}_{r}[p: E \to B] = {\sf{cat}}(G^{r-1})={\sf{TC}}_r(G).$$
\end{prop}
\begin{proof} Let $0\le t_1<t_2<\dots < t_r\le 1$ be the fixed time schedule.
Denote by $P_0G\subset G^I$ the space of paths $\alpha$ satisfying $\alpha(t_1)=e$ where $e\in G$ denotes the unit element. Consider the evaluation map $\pi_r': P_0G\to G^{r-1}$ where $\pi'_r(\alpha)=(\alpha(t_2), \alpha(t_3), \dots, \alpha(t_r))$. We obtain the commutative diagram
\begin{center}
$\xymatrix{
P_0G \times E \ar[d]_{ \pi'_r \times {\sf{Id}} } \ar[r]^{F} & E^I_B \ar[d]^{\Pi_r} \\
G^{r-1} \times E \ar[r]_{F'} & E^r_B
}$
\end{center}
where
$F: P_0G \times E \to E^I_B$ and $F': G^{r-1}\times E\to E^r_B$ are homeomorphisms given by
$$F(\alpha, x)(t)= \alpha(t)x, \quad F'(g_2, g_3, \dots, g_r, x)= (x, g_2 x, g_3 x, \dots, g_r x),$$
where $\alpha \in P_0G$, \, $x\in E$, \, $t\in I$ and $g_i\in G$.
Thus we have
$$
{\sf{TC}}_r[p: E\to B]= {\sf{secat}}(\Pi_r)={\sf{secat}}(\pi'_r \times {\sf{Id}} )={\sf{secat}}(\pi'_r).
$$
Clearly, ${\sf{secat}}(\pi'_r)={\sf{cat}}(G^{r-1})$ since $P_0G$ is contractible.
Finally, ${\sf{cat}}(G^{r-1})={\sf{TC}}_r(G)$; see
\cite[Theorem 3.5]{BasGRT14}.
\end{proof}
\begin{example} As a specific example consider the Hopf fibration $p: S^3 \to S^2$ with fibre $S^1$. Applying the result of the previous Proposition we obtain
$${\sf{TC}}_r[p: S^3 \to S^2]={\sf{TC}}_r(S^1)=r-1$$
for any $r\ge 2$.
\end{example}
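For another illustration of Proposition \ref{princ}, consider a principal $T^k$-bundle $p: E\to B$ (such as the projection of tori $T^{k+l}\to T^l$). Then
$${\sf{TC}}_r[p: E\to B] = {\sf{cat}}\big((T^k)^{r-1}\big) = k(r-1),$$
since the (reduced) Lusternik--Schnirelmann category of a torus equals its dimension.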
\subsection*{Alternative descriptions of sequential parametrized topological complexity}
Let $K$ be a path-connected finite CW-complex and let $k_1, k_2, \cdots, k_r\in K$ be a collection of $r$ pairwise
distinct points of $K$, where $r\ge 2$. For a Hurewicz fibration $p: E \to B$, consider the space $E^{K}_B$ of all continuous maps
$\alpha: K \to E$ such that the composition $p\circ \alpha: K\to B$ is a constant map.
We equip $E^K_B$ with the compact-open topology induced from the function space $E^K$.
Consider the evaluation map
$$\Pi_r^K : E^{K}_B \to E^r_B, \quad \Pi^K_r(\alpha) = (\alpha(k_1), \alpha(k_2), \cdots, \alpha(k_r)) \quad \mbox{for}
\quad\alpha\in E^K_B.$$
It is known that $\Pi^K_r$ is a Hurewicz fibration, see Appendix to \cite{CohFW}.
\begin{lemma}\label{lemma para tc by secat} For any path-connected finite CW-complex $K$
and a set of pairwise distinct points $k_1, \dots, k_r\in K$ one has $${\sf{secat}}(\Pi^K_r) = {\sf{TC}}_r[p:E\to B].$$
\end{lemma}
\begin{proof} Let $0\le t_1<t_2<\dots<t_r\le 1$ be a given time schedule used in the definition of the map $\Pi_r=\Pi_r^I$ given by (\ref{Pir}).
Since $K$ is path-connected we may find a continuous map $\gamma: I\to K$ with $\gamma(t_i) =k_i$ for all $i=1, 2, \dots, r$. We obtain a continuous map $F_\gamma: E^K_B \to E^I_B$ acting by the formula $F_\gamma(\alpha) = \alpha \circ \gamma$. It is easy to see that the following diagram commutes
$$
\xymatrix{
E_B^{K} \ar[rr]^{F_\gamma} \ar[dr]_{\Pi_{r}^K}& &E_B^{I} \ar[dl]^{\Pi_{r}^I} \\ & E_B^r
}$$
Using statement (A) of Lemma \ref{lm:secat} we obtain
$${\sf{TC}}_r[p:E\to B]= {\sf{secat}}(\Pi^I_r)\le {\sf{secat}}(\Pi_r^K).$$
To obtain the inverse inequality, note that any locally finite CW-complex is metrisable. Applying the Tietze extension
theorem we can find continuous functions $\psi_1, \dots, \psi_r: K\to [0,1]$ such that $\psi_i(k_j)=\delta_{ij}$,
i.e. $\psi_i(k_j)$ equals 1 for $j=i$ and it equals $0$ for $j\not=i$. The function $f=\min\{1, \sum_{i=1}^r t_i\cdot \psi_i\}: K\to [0,1]$
has the property that $f(k_i)=t_i$ for every $i=1, 2, \dots, r$. We obtain a continuous map $F': E^I_B \to E^K_B$, where $F'(\beta) = \beta\circ f$, \, $\beta\in E^I_B$, which appears in the commutative diagram
$$
\xymatrix{
E_B^{I} \ar[rr]^{F'} \ar[dr]_{\Pi_{r}^I}& &E_B^{K} \ar[dl]^{\Pi_{r}^K} \\ & E_B^r
}$$
By Lemma \ref{lm:secat} this implies the opposite inequality
${\sf{secat}}(\Pi_r^K) \le {\sf{secat}}(\Pi^I_r)$
and completes the proof.
\end{proof}
The following proposition is an analogue of \cite[Proposition 4.7]{CohFW21}.
\begin{prop}
Let $E$ and $B$ be metrisable separable ANRs and let $p: E \to B$ be a locally trivial fibration. Then the sequential parametrized topological complexity ${\sf{TC}}_r[p: E \to B]$ equals the smallest integer $n$ such that $E_B^r$ admits a partition $$E_B^r=F_0 \sqcup F_1 \sqcup ... \sqcup F_n, \quad F_i\cap F_j= \emptyset \text{ \ for } i\neq j,$$
with the property that on each set $F_i$ there exists a continuous section $s_i : F_i \to E_B^I$ of $\Pi_r$. In other words, $${\sf{TC}}_r[p : E \to B] = {\sf{secat}}_g[\Pi_r: E_B^I \to E^r_B].$$
\end{prop}
\begin{proof}
From the results of \cite[Chapter IV]{Bor67} it follows that the fibre $X$ of $p: E \to B$ is an ANR and hence $X^r$ is also an ANR. Now, $E^r_B$ is the total space of the locally trivial fibration $E_B^r \to B$ with fibre $X^r$. Thus, applying \cite[Chapter IV, Theorem 10.5]{Bor67}, we obtain that the space $E^r_B$ is an ANR. Using \cite[Proposition 4.7]{CohFW21} we see that $E_B^I$ is an ANR.
Finally, using Theorem \ref{lemma secat betweeen ANR spaces}, we conclude that ${\sf{TC}}_r[p : E \to B] = {\sf{secat}}_g[\Pi_r: E_B^I \to E^r_B].$
\end{proof}
\section{Fibrewise homotopy invariance}
\begin{prop}\label{prop homotopy invariant sptc}
Let $p : E \to B$ and $p': E' \to B$ be two fibrations and let $f: E\to E'$ and $g: E'\to E$ be two continuous maps
such that the following diagram commutes
$$
\xymatrix{
E \ar@<-1.4pt>[rr]^{f} \ar[dr]_{p}& &E' \ar@<-1.4pt>[ll]^{g} \ar[dl]^{p'} \\ & B
}$$
i.e. $p=p'\circ f$ and $p'=p\circ g$.
If the map $g \circ f : E \to E$ is fibrewise homotopic to the identity map ${\sf{Id}}_{E}: E \to E$ then
$${\sf{TC}}_r[p: E \to B] \leq {\sf{TC}}_r[p': E' \to B].$$
\end{prop}
\begin{proof} Denote by $f^r: E^r_B\to E'^r_B$ the map given by $f^r(e_1, \dots, e_r) =(f(e_1), \dots, f(e_r))$ and by
$f^I: E^I_B \to E'^I_B$ the map given by $f^I(\gamma)(t)= f(\gamma(t))$ for $\gamma\in E^I_B$ and $t\in I$. One defines similarly the maps $g^r: E'^r_B\to E^r_B$ and $g^I: E'^I_B \to E^I_B$. This gives the commutative diagram
\begin{center}
$\xymatrix{
E^I_B\ar[r]^{f^I}\ar[d]_{\Pi_r} & E'^I_B\ar[d]^{\Pi'_r} \ar[r]^{g^I} & E^I_B\ar[d]^{\Pi_r}\\
E^r_B \ar[r]_{f^r} & E'^r_B\ar[r]_{g^r}& E^r_B,
}$
\end{center}
in which $g^r\circ f^r \simeq {\sf{Id}}_{E^r_B}$. Applying statement (C) of Lemma \ref{lm:secat} we obtain
\begin{eqnarray*}
{\sf{TC}}_r[p:E\to B]&=&{\sf{secat}}[\Pi_r: E^I_B \to E^r_B]\\ &\le& {\sf{secat}}[\Pi'_r: E'^I_B \to E'^r_B] \\ &=& {\sf{TC}}_r[p':E'\to B].
\end{eqnarray*}
\end{proof}
Proposition \ref{prop homotopy invariant sptc} obviously implies the following property of
${\sf{TC}}_r[p:E\to B]$:
\begin{corollary}\label{fwhom}
If fibrations $p : E \to B$ and $p' : E' \to B$ are fibrewise homotopy equivalent then $${\sf{TC}}_r[p: E \to B] = {\sf{TC}}_r[p': E' \to B].$$
\end{corollary}
\section{Further properties of ${\sf{TC}}_r[p: E\to B]$}
Next we consider products of fibrations:
\begin{prop}\label{prop product inequality}
Let $p_1: E_1 \to B_1$ and $p_2: E_2 \to B_2$ be two fibrations where the spaces $E_1, E_2, B_1, B_2$ are metrisable.
Then for any $r\geq 2$ we have
$${\sf{TC}}_r[p_1\times p_2 : E_1 \times E_2 \to B_1\times B_2]\leq {\sf{TC}}_r[p_1 : E_1 \to B_1] + {\sf{TC}}_r[p_2 : E_2 \to B_2].$$
\end{prop}
\begin{proof}
The proof is essentially identical to the proof of \cite[Proposition 6.1]{CohFW21} where it is done for the case $r=2.$
\end{proof}
\begin{prop}
Let $p_1: E_1 \to B$ and $p_2: E_2 \to B$ be two fibrations where the spaces $E_1, E_2, B$ are metrisable.
Consider the fibration
$p: E\to B$ where $E=E_1\times_B E_2=\{(e_1, e_2)\in E_1 \times E_2 \ | \ p_1(e_1) = p_2(e_2)\}$
and $p(e_1, e_2)=p_1(e_1)=p_2(e_2)$.
Then $${\sf{TC}}_r[p: E \to B]\, \leq \, {\sf{TC}}_r[p_1 : E_1 \to B] + {\sf{TC}}_r[p_2 : E_2 \to B].$$
\end{prop}
\begin{proof} Viewing $B$ as the diagonal of $B\times B$ gives
$${\sf{TC}}_r[p: E \to B]\leq {\sf{TC}}_r[p_1\times p_2 : E_1 \times E_2 \to B\times B].$$
Combining this inequality with the result of Proposition \ref{prop product inequality} completes the proof.
\end{proof}
\begin{lemma}
For any fibration $p: E \to B$ one has $${\sf{TC}}_{r+1}[p: E \to B]\geq {\sf{TC}}_{r}[p: E \to B].$$
\end{lemma}
\begin{proof} We shall apply Lemma \ref{lemma para tc by secat} with the interval $K=[0,2]$, the
time schedule
$0\le t_1< t_2< \dots< t_r\le 1$, and the additional point $t_{r+1}=2.$ We have the following diagram
\begin{center}
$\xymatrix{
E^I_B\ar[r]^{F}\ar[d]_{\Pi_r} & E^K_B\ar[d]^{\Pi^K_{r+1}} \ar[r]^{G} & E^I_B\ar[d]^{\Pi_r}\\
E^r_B \ar[r]_{f} & E^{r+1}_B\ar[r]_{g}& E^r_B,
}$
\end{center}
where $f$ acts by the formula $f(e_1, e_2, \dots, e_r) = (e_1, e_2, \dots, e_r, e_r)$ and $F$ sends a path
$\gamma: I\to E$, $\gamma\in E^I_B$, to the path $\bar \gamma: K=[0,2]\to E$ where $\bar\gamma|_{[0,1]}=\gamma$ and
$\bar \gamma(t) = \gamma(1)$ for any $t\in [1, 2]$. The vertical maps are evaluations at the points $t_1, \dots, t_r$ and
at the points $t_1, \dots, t_r, t_{r+1}$, for
$\Pi_r$ and $\Pi^K_{r+1}$ correspondingly. The map $G$ is the restriction: it maps $\alpha: K\to E$ to $\alpha|_I: I\to E$. Similarly, the map $g: E^{r+1}_B \to E^r_B$ is given by $(e_1, \dots, e_r, e_{r+1}) \mapsto (e_1, \dots, e_r)$. The diagram commutes and, besides, the composition $g\circ f: E^r_B\to E^r_B$ is the identity map. Applying statement (C) of Lemma \ref{lm:secat} we obtain
\begin{eqnarray*}
{\sf{TC}}_r[p:E\to B]&=&{\sf{secat}}[\Pi_r: E^I_B\to E^r_B] \\ &\le& {\sf{secat}}[\Pi^K_{r+1}: E^K_B\to E^{r+1}_B]\\ &=& {\sf{TC}}_{r+1}[p:E\to B].
\end{eqnarray*}\end{proof}
\section{Upper and lower bounds for ${\sf{TC}}_r[p:E\to B]$}
In this section we give upper and lower bounds for the sequential parametrized topological complexity.
\begin{prop}\label{prop upper bound}
Let $p: E \to B$ be a locally trivial fibration with fibre $X$, where $E, B, X$ are CW-complexes. Assume that the fibre $X$ is $k$-connected, where $k\ge 0$. Then
\begin{eqnarray}\label{upper}
{\sf{TC}}_{r}[p: E \to B]<\frac{{\sf{hdim}} (E_B^r) + 1}{k+1}\leq \frac{r\cdot \dim X+\dim B + 1}{k+1}.
\end{eqnarray}
\end{prop}
\begin{proof}
Since $X$ is $k$-connected, the loop space $\Omega X$ is $(k-1)$-connected and hence the space $(\Omega X)^{r-1}$ is also $(k-1)$-connected. Thus, the fibre of the fibration $\Pi_r : E_B^I \to E_B^r$ is $(k-1)$-connected and applying Theorem 5 from \cite{Sva66} we obtain:
\begin{equation}\label{equation upper bound}
{\sf{TC}}_{r}[p: E \to B] \, =\, {\sf{secat}}(\Pi_r)\, < \, \frac{{\sf{hdim}} (E_B^r) + 1}{k+1},
\end{equation} where ${\sf{hdim}}(E_B^r)$ denotes homotopical dimension of $E_B^r$, i.e. the minimal dimension of a CW-complex homotopy equivalent to $E^r_B$,
$${\sf{hdim}}(E_B^r ) := \min\{\dim \, Z | \, Z \, \mbox{is a CW-complex homotopy equivalent to} \, E_B^r \}.$$
Clearly, ${\sf{hdim}}(E_B^r)\leq \dim(E_B^r)$.
The space $E^r_B$ is the total space of a locally trivial fibration with base $B$ and fibre $X^r$.
Hence, $\dim(E_B^r)\, \leq \, \dim(X^r)+ \dim B= r\cdot \dim X+ \dim B$. Combining this with (\ref{equation upper bound}),
we obtain (\ref{upper}).
\end{proof}
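As an illustration (this computation is ours, anticipating the Fadell--Neuwirth fibration $p: F(\mathbb{R}^d, m+n)\to F(\mathbb{R}^d, m)$ studied below): there the fibre $X=F(\mathbb{R}^d-\mathcal O_m, n)$ is $(d-2)$-connected (being built from complements of finitely many points in $\mathbb{R}^d$), while $\dim X = dn$ and $\dim B = dm$. Taking $k=d-2$ in (\ref{upper}) gives the estimate
$${\sf{TC}}_{r}[p: E \to B] < \frac{rdn+dm+1}{d-1}.$$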
Below we shall use the following classical result of A.S. Schwarz \cite{Sva66}:
\begin{lemma}\label{lemma lower bound of secat}
For any fibration $p: E \to B$ and coefficient ring $R$, if there exist cohomology classes $u_1, \cdots, u_k\in \ker[p^*: H^*(B; R)\to H^*(E; R)]$ such that their cup-product is nonzero, $u_1 \cup \cdots \cup u_k \neq 0 \in H^*(B; R)$, then ${\sf{secat}}(p) \geq k$.
\end{lemma}
The following Proposition gives a simple and powerful lower bound for sequential parametrized topological complexity.
\begin{prop}\label{lemma lower bound for para tc}
For a fibration $p: E \to B$, consider the diagonal map $\Delta : E \to E^r_B$, where $\Delta(e)= (e, e, \cdots, e)$, and the homomorphism $\Delta^\ast: H^\ast(E^r_B;R) \to H^\ast(E;R)$ induced by $\Delta$ in cohomology with coefficients in a ring $R$.
If there exist cohomology classes $$u_1, \cdots, u_k \in \ker[\Delta^* : H^*(E_B^r; R) \to H^*(E; R)]$$
such that
$$u_1 \cup \cdots \cup u_k \neq 0 \in H^*(E_B^r; R)$$ then ${\sf{TC}}_{r}[p: E \to B]\geq k$.
\end{prop}
\begin{proof}
Define the map $c : E \to E_B^I$ by letting $c(e)$ be the constant path, $c(e)(t) = e$ for all $t\in I$. Note that the map $c : E \to E_B^I$ is a homotopy equivalence. Besides, the following diagram commutes
$$
\xymatrix{
E \ar[rr]^{c} \ar[dr]_{\Delta}&&E_B^I \ar[dl]^{\Pi_r}\\
& E_B^r & &
}
$$
and thus, $\ker[\Pi_r^*: H^*(E_B^r; R) \to H^*(E_B^I; R)] = \ker[\Delta^*: H^*(E_B^r; R) \to H^*(E; R)]$. The result now follows from Lemma \ref{lemma lower bound of secat} and from the definition ${\sf{TC}}_r[p:E\to B]={\sf{secat}}(\Pi_r)$.
\end{proof}
\section{Cohomology algebras of certain configuration spaces}
In this section we present auxiliary results about cohomology algebras of relevant configuration spaces, which will be used later in this paper for computing the sequential parametrized topological complexity of the Fadell--Neuwirth fibration.
All cohomology groups will be understood with integer coefficients, although the symbol $\Bbb Z$ will be omitted from the notation.
We start with the following well-known fact, see \cite[Chapter V, Theorem 4.2]{FadH01}:
\begin{lemma}\label{lemma cohomology of configuration space}
The integral cohomology ring $H^\ast(F(\mathbb{R}^d, m+n))$ contains $(d-1)$-dimensional cohomology classes
$\omega_{ij}$, where $1\leq i< j\leq m+n,$
which multiplicatively generate $H^*(F(\mathbb{R}^d, m+n))$ and satisfy the following defining relations
$$(\omega_{ij})^2=0 \quad \mbox{and}\quad
\omega_{ip}\omega_{jp}= \omega_{ij}(\omega_{jp}-\omega_{ip})\quad \text{for all }\, i<j<p.$$
\end{lemma}
The cohomology class $\omega_{ij}$ arises as follows. For $1\le i<j\le m+n$, mapping a configuration
$(u_1, \dots, u_{m+n})\in F(\mathbb{R}^d, m+n)$ to the unit vector
$$\frac{u_i-u_j}{||u_i-u_j||}\, \in\, S^{d-1},$$
defines a continuous map
$\phi_{ij}: F(\mathbb{R}^d, m+n)\to S^{d-1}$, and the class $$\omega_{ij}\in H^{d-1}(F(\mathbb{R}^d, m+n))$$ is defined by
$\omega_{ij}=\phi^\ast_{ij}(v)$
where $v\in H^{d-1}(S^{d-1})$ is the fundamental class.
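As a concrete instance of Lemma \ref{lemma cohomology of configuration space}: for three points ($m+n=3$) the only triple $i<j<p$ is $(1,2,3)$, so the three-term relation reduces to
$$\omega_{13}\,\omega_{23}= \omega_{12}(\omega_{23}-\omega_{13}),$$
and together with $(\omega_{ij})^2=0$ these are all the defining relations of $H^\ast(F(\mathbb{R}^d,3))$.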
Below we shall denote by $E$ the configuration space $E=F(\mathbb{R}^d, n+m)$. A point of $E$ will be understood as a configuration
$$(o_1, o_2, \dots, o_m, z_1, z_2, \dots, z_n)$$
where the first $m$ points $o_1, o_2, \dots, o_m$ represent \lq\lq obstacles\rq\rq\, while the last $n$ points $z_1, z_2, \dots, z_n$ represent \lq\lq robots\rq\rq. The map
\begin{eqnarray}\label{FN}
p: F(\mathbb{R}^d, m+n) \to F(\mathbb{R}^d, m),
\end{eqnarray}
where
$$ p(o_1, o_2, \dots, o_m, z_1, z_2, \dots, z_n)= (o_1, o_2, \dots, o_m),$$
is known as the Fadell--Neuwirth fibration. This map was introduced in \cite{FadN} where the authors showed that $p$ is a locally trivial fibration. The fibre of $p$ over a configuration $\mathcal O_m=\{o_1, \dots, o_m\}\in F(\mathbb{R}^d, m)$
is the space $X=F(\mathbb{R}^d- \mathcal O_m, n)$, the configuration space of $n$ pairwise distinct points lying in the complement of the set $\mathcal O_m=\{o_1, \dots, o_m\}$ of $m$ fixed obstacles.
We plan to use Lemma \ref{lemma lower bound for para tc} to obtain lower bounds for the topological complexity, and for this reason our first task will be to calculate the integral cohomology ring of the space
$E_B^r$. Here $E$ denotes the space $E=F(\mathbb{R}^d, m+n)$, $B$ denotes the space $B=F(\mathbb{R}^d, m)$, and $p: E\to B$ is the Fadell--Neuwirth fibration (\ref{FN}); hence $E^r_B$ is the space of $r$-tuples $(e_1, e_2, \dots, e_r)\in E^r$ satisfying
$p(e_1)=p(e_2)=\dots= p(e_r)$.
Explicitly, a point of the space $E_B^r$ can be viewed as a configuration
\begin{eqnarray}\label{confr}
(o_1, o_2, \cdots, o_m, z^1_1, z^1_2, \cdots, z^1_n, z^2_1, z^2_2, \cdots, z^2_n, \cdots, z^r_1, z^r_2, \cdots, z^r_n)
\end{eqnarray}
of $m+rn$ points
$o_i, \, z^l_j\in \mathbb{R}^d$ (for
$i=1, 2, \dots, m$, $j=1, 2, \dots, n$ and $l=1, 2, \dots, r$),
such that
\begin{enumerate}
\item $o_i \neq o_{i'}$ for $i\neq i'$,
\item $o_i \neq z^l_j$ for $1\leq i \leq m, \, 1\leq j \leq n$ and $1\leq l \leq r$,
\item
$z^l_j \neq z^l_{j'}$ for $j \neq j'$.
\end{enumerate}
The following statement is a generalisation of Proposition 9.2 from \cite{CohFW21}.
\begin{prop}\label{prop relation of cohomology classes}
The integral cohomology ring $H^*(E_B^r)$ contains cohomology classes
$\omega^l_{ij}$
of degree $d-1$, where $1\leq i < j \leq m+n$ and $1\leq l \leq r$,
satisfying the relations
\begin{enumerate}
\item[{\rm (a)}] $\omega_{ij}^l = \omega_{ij}^{l'} \text{ for } 1\leq i < j \leq m$ and $1\leq l \leq l' \leq r$,
\item[{\rm (b)}] $(\omega_{ij}^l)^2=0\text{ for } i < j \text{ and } 1\leq l \leq r$,
\item[{\rm (c)}] $\omega_{ip}^l\omega_{jp}^l= \omega_{ij}^l(\omega_{jp}^l-\omega_{ip}^l) \text{ for } i<j<p \text { and }\, 1\leq l \leq r.$
\end{enumerate}
\end{prop}
\begin{proof} For $1\le l\le r$,
consider the
projection map $p_l : E_B^r \to E$ which acts as follows: the configuration (\ref{confr}) is mapped into
$$(u_1, u_2, \dots, u_{m+n})\in E=F(\mathbb{R}^d, m+n)$$
where
$$
u_i=\left\{
\begin{array}{lll}
o_i& \mbox{for} & i\le m,\\
z^l_{i-m}& \mbox{for} & i> m.
\end{array}
\right.
$$
Using Lemma \ref{lemma cohomology of configuration space} and the cohomology classes
$\omega_{ij}\in H^{d-1}(E)$, we define $$(p_{l})^*(\omega_{ij})=\omega^l_{ij}\, \in \, H^*(E_B^r).$$
Relations {\rm {(a), \, (b), \, (c)}} are obviously satisfied. This completes the proof.
\end{proof}
For $1\leq i < j \leq m$ we shall denote the class $\omega_{ij}^l\in H^{d-1}(E^r_B)$ simply by $\omega_{ij}$; this is justified because of the relation {\rm {(a)}} above.
We shall introduce notations for the classes which arise as the cup-products of the generators $\omega^l_{ij}$.
Consider two sequences of integers
$I=(i_1, i_2, \cdots, i_p)$ and $J=(j_1, j_2, \cdots, j_p)$ where $i_s, j_s\in \{1, 2, \dots, m+n\}$ for $s=1, 2, \dots, p$.
We shall say that the sequence $J$ is {\it increasing} if $j_1<j_2<\dots <j_p$. Besides, we shall write $I<J$ if $i_s<j_s$ for all $s=1, 2, \dots, p$.
A pair of sequences $I<J$ of length $p$ as above determines the cohomology class
$$\omega_{IJ}^l=\omega^l_{i_1j_1}\omega^l_{i_2j_2}\cdots \omega^l_{i_pj_p}\in H^{(d-1)p}(E_B^r)$$
for any $l=1, 2, \dots, r$. Note that the order of the factors is important in the case when the dimension $d$ is even.
Because of the property {\rm {(a)}} of Proposition \ref{prop relation of cohomology classes}
this class is independent of $l$ assuming that $j_p\le m$; for this reason, if $j_p\le m$,
we shall denote $\omega^l_{IJ}$ simply by $\omega_{IJ}$.
The next result is a generalisation of \cite[Proposition 9.3]{CohFW21} where the case $r=2$ was studied.
\begin{prop}\label{prop basic cohomology classes}
An additive basis of $H^*(E_B^r)$ is formed by the following set of cohomology classes
\begin{eqnarray}\label{basis}
\omega_{IJ}\omega^1_{I_1J_1}\omega^2_{I_2J_2}\cdots \omega^r_{I_rJ_r}\, \in\, H^\ast(E^r_B),\end{eqnarray}
where:
\begin{enumerate}
\item[{\rm (i)}] the sequences $J, J_1, J_2, \cdots, J_r$ are increasing,
\item[{\rm (ii)}] the sequence $J$ takes values in $\{2, 3, \cdots, m\}$,
\item[{\rm (iii)}] the sequences $J_1, J_2, \dots, J_r$ take values in $\{m+1, \cdots m+n\}$.
\end{enumerate}
\end{prop}
\begin{proof} Recall our notations: $E=F(\mathbb{R}^d, m+n)$, \, $B=F(\mathbb{R}^d, m)$ and $p: E\to B$ is the Fadell--Neuwirth fibration
(\ref{FN}). Consider the fibration
$$p_r: E^r_B \to B \quad \mbox{where}\quad p_r(e_1, \dots, e_r) =p(e_1)=\dots=p(e_r).$$
Its fibre over a configuration $\mathcal O_m=(o_1, \dots, o_m)\in B$
is $X^r$, the Cartesian product of $r$ copies of the space $X$, where
$X=F(\mathbb{R}^d-\mathcal O_m, n)$.
We shall apply the Leray--Hirsch theorem to the fibration $p_r: E^r_B \to B$. The classes $\omega_{ij}$
with $i < j \leq m$ originate from the base of this fibration. Moreover, from Lemma \ref{lemma cohomology of configuration space} it follows that a free additive basis of $H^\ast(B)$ is formed by the classes $\omega_{IJ}$, where $I<J$ run over all sequences of elements of the set $\{ 1,2,...,m\}$ such that the sequence $J = (j_1,j_2,...,j_p)$ is increasing.
Next consider the classes of the form
$$\omega^1_{I_1J_1}\omega^2_{I_2J_2}\cdots \omega^r_{I_rJ_r}\, \in\, H^\ast(E^r_B),$$
with increasing sequences $J_1, J_2, \dots, J_r$ satisfying {\rm {(iii)}} above.
Using the known
results about the cohomology algebras of configuration spaces (see \cite{FadH01}, Chapter V, Theorems
4.2 and 4.3) as well as the K\"unneth theorem, we see that the restrictions of this family of
classes to the fibre $X^r$ form a free basis of the cohomology of the fibre, $H^\ast (X^r).$
Hence, the Leray--Hirsch theorem \cite{Hat} is applicable and we obtain that a free basis of the cohomology
$H^\ast (E^r_B)$ is given by the set of classes described in the statement of Proposition \ref{prop basic cohomology classes}.
This completes the proof.
\end{proof}
Let $J=(j_1, j_2, \dots, j_p)$ be an increasing sequence of positive integers
and let $j$ be an integer satisfying $j_p<j$. Our goal is to decompose the product
$$
\omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j} \in H^{p(d-1)}(E^r_B), \quad l=1, \dots, r,
$$
in the basis elements of Proposition \ref{prop basic cohomology classes}.
We say that a sequence
$I=(i_1, i_2, \dots, i_p)$ is a {\it $J$-modification} if $i_1=j_1$ and for $s=2, 3, \dots, p$ each number $i_s$ equals either $i_{s-1}$
or $j_s$. An increasing sequence of length $p$ has $2^{p-1}$ modifications.
For example, for $p=3$ the sequence
$J= (j_1, j_2, j_3)$ has the following $4$ modifications
\begin{eqnarray}\label{mod}
(j_1, j_2, j_3), \, \, (j_1, j_1, j_3), \, \, (j_1, j_2, j_2), \, \, (j_1, j_1, j_1).
\end{eqnarray}
For a $J$-modification $I$ we shall denote by $r(I)$ the number of repetitions in $I$. For instance, the numbers of repetitions of the modifications (\ref{mod}) are $0, \, 1, \, 1, \, 2$ correspondingly.
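The combinatorics of $J$-modifications is easy to check mechanically. The following sketch (the function names are ours) enumerates all modifications of an increasing sequence and counts their repetitions, reproducing the count $2^{p-1}$ and the example (\ref{mod}):

```python
def modifications(J):
    """All J-modifications I: i_1 = j_1 and, for s >= 2,
    each i_s equals either i_{s-1} or j_s."""
    mods = [(J[0],)]
    for s in range(1, len(J)):
        # J increasing forces i_{s-1} < j_s, so the two choices are distinct
        mods = [I + (c,) for I in mods for c in (I[-1], J[s])]
    return mods

def repetitions(I):
    # r(I) = number of positions s with i_s = i_{s-1}
    return sum(1 for s in range(1, len(I)) if I[s] == I[s - 1])

mods = modifications((1, 2, 3))  # the case p = 3 from the text
assert sorted(mods) == [(1, 1, 1), (1, 1, 3), (1, 2, 2), (1, 2, 3)]
assert sorted(repetitions(I) for I in mods) == [0, 1, 1, 2]
assert len(modifications((1, 2, 3, 4, 5))) == 2 ** 4  # 2^{p-1} for p = 5
```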
The following statement is equivalent to Proposition 3.5 from \cite{CohFW}. Lemma 9.5 from \cite{CohFW21}
gives the answer in a recurrent form.
\begin{lemma}\label{lm:mod} For a sequence $j_1<j_2<\dots < j_p<j$ of positive integers denote
$J=(j_1, j_2, \dots, j_p)$ and $J'=(j_2, j_3, \dots, j_p, j)$.
In the cohomology algebra $H^\ast(E^r_B)$ associated to the Fadell - Neuwirth fibration,
one has the following
relation
\begin{eqnarray}\label{dec}
\omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j} = \sum_I (-1)^{r(I)} \omega^l_{I J'},
\end{eqnarray}
where $I$ runs over $2^{p-1}$ $J$-modifications and $l=1, 2, \dots, r.$
\end{lemma}
\begin{proof} We shall use induction on $p$. For $p=1$ there is nothing to prove. For $p=2$ the statement of Lemma \ref{lm:mod} is
$$
\omega^l_{j_1j}\omega^l_{j_2j}= \omega^l_{j_1j_2}\omega^l_{j_2j}-\omega^l_{j_1j_2}\omega^l_{j_1 j},
$$
which is the familiar $3$-term relation, see Proposition \ref{prop relation of cohomology classes}, statement (c).
The first term on the right corresponds to the sequence $I=(j_1,j_2)$ and the second term corresponds to
$I=(j_1, j_1)$; the latter has one repetition and appears with the minus sign.
Suppose now that Lemma \ref{lm:mod} is true for all sequences $J$ of length $p$.
Consider an increasing sequence $J=(j_1, j_2, \dots, j_{p+1})$ of length $p+1$ and
an integer $j$ satisfying $j>j_{p+1}$. Denote by $K=(j_1, j_2, \dots, j_p)$ the shortened sequence and let
$I=(i_1, i_2, \dots, i_p)$ be a modification of $K$. As in (\ref{dec}), denote $K'=(j_2, j_3, \dots, j_p, j)$.
Consider the product
\begin{eqnarray*}
\omega^l_{I K'}\omega^l_{j_{p+1}j}&=&
\omega^l_{i_1j_2}\omega^l_{i_2j_3}\dots \omega^l_{i_{p-1}j_p} \omega^l_{i_p j}\cdot \omega^l_{j_{p+1}j}\\
&=& \left[\omega^l_{i_1j_2}\omega^l_{i_2j_3}\dots \omega^l_{i_{p-1}j_p}
\right]\cdot \omega^l_{i_pj_{p+1}}\cdot \left[\omega^l_{j_{p+1}j} -\omega^l_{i_pj}\right]\\
&=& \omega^l_{I_1J'}-\omega^l_{I_2J'}
\end{eqnarray*}
where $I_1=(i_1, \dots, i_p, j_{p+1})$ and $I_2= (i_1, \dots, i_p, i_p)$ are two modifications of $J$ extending $I$.
The equality of the second line is obtained by applying the relation (c) of Proposition \ref{prop relation of cohomology classes}.
Note that $r(I_2)=r(I_1)+1$ which is consistent with the minus sign.
Thus, the Lemma follows by induction.
\end{proof}
\begin{corollary}\label{cor:form}
Any basis element (\ref{basis}) which appears with nonzero coefficient in the decomposition of a monomial of the form
$$
\omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j}
$$
contains a factor of the form $\omega^l_{j_s j}$, where $s\in \{1, 2, \dots, p\}$.
\end{corollary}
Consider the diagonal map
$$\Delta : E \to E_B^r, \quad \Delta(e) = (e, e, \dots, e), \quad e\in E.$$
\begin{lemma} \label{lm:ker}
The kernel of the homomorphism $\Delta^* : H^*(E_B^r) \to H^*(E)$ contains
the cohomology classes of the form
$$\omega^l_{ij}-\omega^{l'}_{ij}.$$
\end{lemma}
\begin{proof} This follows directly from the definition of the classes $\omega^l_{ij}$; compare the proof of Proposition 9.4 from
\cite{CohFW21}.
\end{proof}
\section{Sequential parametrized topological complexity of the Fadell-Neuwirth bundle; the odd-dimensional case}\label{sec:odd}
Our goal is to compute the sequential parametrized topological complexity of the Fadell - Neuwirth bundle.
As we shall see, the answers in the cases of odd and even dimension $d$ are slightly different. When $d$ is odd the cohomology algebra has only classes of even degree and is therefore commutative; when $d$ is even the cohomology algebra is skew-commutative, which leads to a major
distinction in the treatment of these two cases.
The main result of this section is:
\begin{theorem}\label{thm:odd}
For any odd $d\ge 3$, and for any $n\ge 1$, $m\ge 2$ and $r\ge 2$, the sequential parametrized topological complexity of the Fadell - Neuwirth bundle (\ref{FN}) equals
$rn+m -1$.
\end{theorem}
This result was obtained in \cite{CohFW21} for $r=2$.
Note that the special case of $d=3$ is most important for robotics applications.
As in the previous section, we shall denote the Fadell - Neuwirth bundle (\ref{FN}) by $p: E\to B$ for short; this convention will be in force in this and in the following sections.
We start with the following statement, which is valid without any restriction on the parity of the dimension $d\ge 3$.
Note that for $d= 2$ we shall have a stronger upper bound in \S \ref{sec:9}.
\begin{prop}\label{prop upper bound for Fadell-Neuwirth bundle}
For any $d\geq 3$ and $m\ge 2$ one has $${\sf{TC}}_r[p: E\to B] \leq rn+m-1.$$
\end{prop}
\begin{proof} The space $E^r_B$ is $(d-2)$-connected and in particular it is
simply connected (since $d\ge 3$). By Proposition \ref{prop basic cohomology classes} the top dimension with nonzero cohomology is $(rn + m-1)(d-1)$. Hence the
homotopical dimension of the configuration space ${\sf{hdim}} (E^r_B)$ equals
$(rn+m-1)(d-1)$. Here we use the well-known fact that the homotopical dimension of a simply connected space with torsion free integral cohomology equals its cohomological dimension. The fibre $X=F(\mathbb{R}^d - \mathcal O_m, n)$ of the Fadell - Neuwirth bundle
$p: E\to B$ is $(d-2)$-connected. Applying Proposition \ref{prop upper bound} we obtain
$${\sf{TC}}_r[p : E\to B]\, <\, rn+m-1+\frac{1}{d-1},$$
which is equivalent to our statement.
\end{proof}
To complete the proof of Theorem \ref{thm:odd} we only need to establish the lower bound:
\begin{prop}\label{odd lower bound for Fadell-Neuwirth bundle}
For any odd $d\geq 3$ and $m\ge 2$ one has $${\sf{TC}}_r[p: E\to B] \geq rn+m-1.$$
\end{prop}
Note that the assumption that the dimension $d$ is odd is essential: the statement of Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} is false for $d$ even, see below.
\begin{proof} We shall use Lemma \ref{lemma lower bound for para tc} and Propositions \ref{prop relation of cohomology classes}
and \ref{prop basic cohomology classes} and Lemma \ref{lm:ker}.
Consider the cohomology classes
\begin{eqnarray*}
x_1& =& \prod_{i=2}^m(\omega^1_{i(m+1)} - \omega^2_{i(m+1)})\, \in \, H^{(m-1)(d-1)}(E^r_B),\\
x_2 &=& \prod_{j=m+1}^{m+n}(\omega_{1j}^{2} - \omega_{1j}^{1})^2 \, =\, (-2)^n \prod_{j=m+1}^{m+n} \omega^1_{1j}\omega^2_{1j}\, \in \, H^{2n(d-1)}(E^r_B),\\
x_3&=& \prod_{l=3}^{r}\prod_{j=m+1}^{m+n}(\omega_{1j}^{l} - \omega_{1j}^{1})\, \in H^{n(r-2)(d-1)}(E^r_B).
\end{eqnarray*}
Each of these classes is a product of elements of the kernel of the homomorphism $\Delta^\ast: H^\ast(E^r_B)\to H^\ast(E)$, by Lemma \ref{lm:ker}. Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} would follow once we show that
the product
\begin{eqnarray*}
x_1 x_2 x_3 \, \in \, H^\ast(E^r_B)
\end{eqnarray*}
is nonzero. By Proposition \ref{prop basic cohomology classes}, the product $x_1 x_2 x_3$ is a linear combination of the basis cohomology classes and it is
nonzero if at least one coefficient in this decomposition does not vanish.
According to \cite{CohFW21}, cf. page 248, the product $x_1 x_2$ contains the basis element
\begin{eqnarray}
\omega_{I_0J_0} \omega^1_{IJ} \omega^2_{I'J}\, \in \, H^{(2n+m-1)(d-1)}(E^r_B)
\end{eqnarray}
with a nonzero coefficient; here
\begin{eqnarray*}
I_0&=& (1, 2, 2, \dots, 2), \quad
J_0= (2, 3, \dots, m),
\end{eqnarray*}
and
\begin{eqnarray*}
I= (1, 1, \dots, 1), \quad
I'= (2, 1, 1, \dots, 1),\quad
J= (m+1, m+2, \dots, m+n),
\end{eqnarray*}
with $|I_0|= |J_0|=m-1$ and $ |I|=|I'|= |J|=n.$
The product representing $x_3$ can be expanded into a sum. This sum contains the class $\prod_{l=3}^{r}\omega^l_{IJ}$
and each of the other terms contains a factor of type $\omega^1_{1j}$. Since obviously $x_1x_2\omega^1_{1j}=0,$ we obtain that the product $x_1x_2x_3$ contains the basis element
$$
\omega_{I_0J_0}\cdot \omega^1_{IJ}\cdot \omega^2_{I'J}\cdot \prod_{l=3}^{r}\omega^l_{IJ}
$$
with a nonzero coefficient. Hence $x_1 x_2 x_3 \not=0$. This completes the proof of Proposition \ref{odd lower bound for Fadell-Neuwirth bundle}.
\end{proof}
\begin{remark}
The lower bound estimate of Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} fails to work in the case when the dimension $d$ of the ambient Euclidean space is even. Indeed, then the classes $\omega^l_{ij}$ have odd degree (which equals $d-1$)
and the square of any class of odd degree vanishes (since the cohomology algebra $H^\ast(E^r_B)$ with integral coefficients is torsion free). Thus, in the case of even dimension $d$ the product $x_2$ vanishes. In the following section
we shall suggest a different estimate for even $d$.
\end{remark}
\section{Sequential parametrized topological complexity of the Fadell-Neuwirth bundle; the even-dimensional case}\label{sec:even}\label{sec:9}
In this section we give a lower bound for ${\sf{TC}}_r[p:E\to B]$ for the Fadell - Neuwirth bundle (\ref{FN})
in the case when the dimension $d$ of the Euclidean space $\mathbb{R}^d$ is even.
We also prove a matching upper bound for the planar case $d=2$.
Such an upper bound can be obtained for any even $d$ by a totally different method; this material will be presented in another publication.
First we establish the following lower bound which is valid for any $d$ regardless of its parity.
\begin{prop}\label{prop lower bound for Fadell-Neuwirth bundle even d}
For any $d\ge 2$, $r\ge 2$ and $m\ge 2$,
the sequential parametrized topological complexity of the Fadell - Neuwirth bundle
satisfies
\begin{eqnarray}\label{lowereven}
{\sf{TC}}_r[p:E\to B]\ge rn+m-2.
\end{eqnarray}
\end{prop}
\begin{proof}
We shall use Lemma \ref{lemma lower bound for para tc}. Consider the following three cohomology classes
$x_1, x_2, x_3\in H^\ast(E^r_B)$ where
\begin{eqnarray*}
x_1&=& \prod_{i=2}^m(\omega^1_{i(m+1)} - \omega^2_{i(m+1)})\, \in H^{(m-1)(d-1)}(E^r_B),\\
x_2 &=& \prod_{j=m+2}^{m+n}(\omega_{(j-1)j}^{1} - \omega_{(j-1)j}^{2})\, \in H^{(n-1)(d-1)}(E^r_B), \\
x_3 &=& \prod_{l=2}^{r}\prod_{j=m+1}^{m+n}(\omega_{1j}^{l} - \omega_{1j}^{1}) \, \in H^{n(r-1)(d-1)}(E^r_B).
\end{eqnarray*}
Each of these classes is a product of elements of the kernel of $\Delta^\ast$ (see Lemma \ref{lm:ker}), and the total number of factors is $rn+m-2$. Hence, by Lemma \ref{lemma lower bound for para tc}, our statement (\ref{lowereven}) will follow
once we know that the product $x_1x_2x_3 \in H^\ast(E^r_B)$ is nonzero.
Consider the following sequences
\begin{eqnarray*}
\begin{array}{lll}
I_0=(2, 2, \dots, 2), &\mbox{where} &|I_0|=m-2,\\
J_0=(3, 4, \dots, m), &\mbox{where} &|J_0|=m-2,\\
I=(1, 1, \dots, 1), &\mbox{where} &|I|=n,\\
J=(m+1, m+2, \dots, m+n), &\mbox{where}&|J|=n,\\
K=(2, m+1, m+2, \dots, m+n-1), &\mbox{where} &|K|=n.
\end{array}
\end{eqnarray*}
We claim that the basis element
\begin{eqnarray}\label{target}
\omega_{I_0J_0}\omega^1_{KJ} \omega_{IJ}^2 \omega_{IJ}^3\dots \omega_{IJ}^r
\end{eqnarray}
appears in the decomposition of the product $x_1x_2x_3$ with a nonzero coefficient.
We note that opening the brackets in the products defining $x_2$ and $x_3$ produces a sum of pairwise distinct basis elements. In particular, we see that $x_3$ contains the summand
$$\prod_{l=2}^r \prod_{j=m+1}^{m+n} \omega^l_{1 j} \, =\, \prod_{l=2}^r \omega_{IJ}^l$$
which is a part of the product (\ref{target}). One of the summands of $x_2$ is
$$\prod_{j=m+1}^{m+n-1}\omega^1_{j(j+1)}$$
which equals the factor
$\omega^1_{KJ}$ of (\ref{target}) with the factor $\omega^1_{2(m+1)}$ missing (this factor will come from $x_1$).
Potentially, some summands from the decomposition of $x_2$ could annihilate terms of the decomposition of $x_3$ since they contain factors of type $\omega^2_{kl}$; however, this cancellation cannot happen since $x_2$ contains factors $\omega^2_{(j-1) j}$ while $x_3$ contains factors of the form $\omega^2_{1j}$, and $j-1\ge m+1\ge 3$.
Finally we apply Lemma \ref{lm:mod} and Corollary \ref{cor:form} to obtain the decomposition of $x_1$ as a sum of the basis elements.
One of the summands in the decomposition of the product $\prod_{i=2}^m \omega^1_{i(m+1)}$ is the basis element
$$\omega^1_{23}\omega^1_{24}\dots\omega^1_{2m}\omega^1_{2(m+1)}= \omega_{I_0J_0} \omega^1_{2(m+1)}.$$
We see the appearance of the factor $\omega_{I_0J_0}$ as well as of the missing class $\omega^1_{2(m+1)}$. Thus,
the basis element (\ref{target}) appears in the decomposition of the product $x_1x_2x_3$ with a nonzero coefficient and hence $x_1x_2x_3\not=0.$
This completes the proof.
\end{proof}
Next we state the main result of this section:
\begin{theorem}\label{thm:even}
For any $m\geq 2$, $n\ge 1$ and $r\ge 2$, the $r$-th sequential parametrized topological complexity of the Fadell-Neuwirth bundle in the plane is given by$${\sf{TC}}_r[p : F(\mathbb{R}^2, n+m) \to F(\mathbb{R}^2, m)]=rn + m - 2.$$
\end{theorem}
\begin{proof}
Proposition \ref{prop lower bound for Fadell-Neuwirth bundle even d} gives the lower bound. In the proof below we
establish the upper bound. We shall adopt the method developed in \cite{CohFW}.
As in \cite{CohFW}, we identify $\mathbb{R}^2$ with the set of complex numbers $\mathbb{C}$ and for any $s\geq 3$ consider the
homeomorphism
$$h_s: F(\mathbb{C}, s) \to F(\mathbb{C} \smallsetminus \{0, 1\}, s-2)\times F(\mathbb{C}, 2)$$
given by
$$h_s(u_1, u_2, ..., u_s)=\left(\left(\frac{u_3-u_1}{u_2-u_1}, \frac{u_4-u_1}{u_2-u_1}, ..., \frac{u_s-u_1}{u_2-u_1} \right),
(u_1, u_2)\right),$$
where $u_i\in \mathbb{C}$, $u_i\not= u_j$ for $i\not=j$. Thus, using the algebraic structure of complex numbers we may split the configuration space into a product.
We have the following commutative diagram
$$\xymatrix{
F(\mathbb{C}, n+m) \ar[d]_{p} \ar[r]^{\hskip -2 cm {h_{n+m}}} & F(\mathbb{C} \smallsetminus \{0, 1\}, n+m-2)\times F(\mathbb{C}, 2) \ar[d]^{q\times {\sf{Id}}} \\
F(\mathbb{C}, m) \ar[r]_{\hskip -2 cm {h_m}} & F(\mathbb{C} \smallsetminus \{0, 1\}, m-2)\times F(\mathbb{C}, 2)
}$$
where $p$ is the Fadell - Neuwirth fibration, $q$ is the analogue of the Fadell - Neuwirth bundle for the plane with the points $0, 1$ removed and with $m-2$ obstacles, and ${\sf{Id}}$ is the identity map. In the case when $m=2$ we shall consider the space
$F(\mathbb{C} \smallsetminus \{0, 1\}, m-2)$ as consisting of a single point; then the diagram above will make sense for $m=2$
(two obstacles only) as well.
Noting that ${\sf{TC}}_r[{\sf{Id}} : F(\mathbb{C}, 2) \to F(\mathbb{C}, 2)]=0$ and applying Proposition \ref{prop product inequality} we obtain
\begin{equation*}
\label{equation para tc r=2}
{\sf{TC}}_r[p: F(\mathbb{C}, n+m) \to F(\mathbb{C}, m)] \leq {\sf{TC}}_r[q: E' \to B'],
\end{equation*}
where $E'=F(\mathbb{C} \smallsetminus \{0, 1\}, n+m-2)$ and $B'=F(\mathbb{C} \smallsetminus \{0, 1\}, m-2) $.
The fibre of the fibration $q: E' \to B'$ is the configuration space $F(\mathbb{C} \smallsetminus \mathcal{O}_m, n)$, which is connected and has homotopical dimension $n$. The homotopical dimension of the base $F(\mathbb{C} \smallsetminus \{0, 1\}, m-2)$ is $m-2$. Proposition \ref{prop upper bound} gives ${\sf{TC}}_r[q: E'\to B'] \le
rn+ m-2.$
Hence, $${\sf{TC}}_r[p: F(\mathbb{C}, n+m) \to F(\mathbb{C}, m)]\leq rn+ m-2.$$ This completes the proof.
\end{proof}
\begin{remark} Theorems \ref{thm:odd} and \ref{thm:even} leave unanswered the question about the sequential parametrized topological complexity for the Fadell - Neuwirth bundle for $d\ge 4$ even. The upper bound of Proposition \ref{prop upper bound for Fadell-Neuwirth bundle} and the lower bound of Proposition \ref{prop lower bound for Fadell-Neuwirth bundle even d} specify the answer with indeterminacy one. In a forthcoming publication we shall extend the upper bound $rn +m -2$ for any $d\ge 2$ even. We shall employ the method which was briefly described in \cite{FarW}, \S 7 for the case $r=2$.
\end{remark}
\section{${\sf{TC}}$-generating function and rationality}
\subsec{} Definition \ref{def:main} associates with each fibration $p: E\to B$ an infinite sequence of integer numerical invariants,
\begin{eqnarray}\label{seq}
{\sf{TC}}_2[p: E\to B], \quad {\sf{TC}}_3[p: E\to B], \quad \dots,\quad {\sf{TC}}_r[p: E\to B],\quad \dots
\end{eqnarray}
In order to understand the global behaviour of the sequence (\ref{seq}), it can be organised into a generating function
\begin{eqnarray}\label{gen}
\mathcal F(t) = \sum_{r\ge 1} {\sf{TC}}_{r+1}[p: E\to B]\cdot t^r,
\end{eqnarray}
which we shall call the {\it ${\sf{TC}}$-generating function of the fibration $p: E\to B$}.
Various analytic properties of the generating function $\mathcal F(t)$ reflect the asymptotic behaviour of the sequence (\ref{seq}) and the topological structure of the fibration $p: E\to B$. Rationality of the generating function (\ref{gen}) would mean the existence of a linear recurrence relation between the integers (\ref{seq}) representing sequential parametrized topological complexities for various values of $r$.
\begin{lemma}
The ${\sf{TC}}$-generating function (\ref{gen}) depends only on the fiberwise homotopy type of the fibration $p: E\to B$.
\end{lemma}
\begin{proof}
This is equivalent to Corollary \ref{fwhom}.
\end{proof}
\subsec{} In paper \cite{FarO19} the authors introduced the ${\sf{TC}}$-generating function
\begin{eqnarray}\label{gen:x}
\mathcal F_X(t) = \sum_{r\ge 1} {\sf{TC}}_{r+1}(X)\cdot t^r
\end{eqnarray}
associating with a finite path-connected CW-complex $X$ a formal power series (\ref{gen:x}). The paper \cite{FarO19} contains many examples in which this power series can be explicitly computed, and in all these examples
$\mathcal F_X(t)$ is representable by a rational function of the form
\begin{eqnarray}\label{form}
\mathcal F_X(t) = \frac{A}{(1-t)^2} + \frac{B}{1-t}+ p(t), \quad \mbox{where}\quad p(t)\quad \mbox{is a polynomial}.
\end{eqnarray}
This property of $\mathcal F_X(t)$ is equivalent to the recurrence relation
$${\sf{TC}}_{r+1}(X) = {\sf{TC}}_r(X) +A$$
valid for all sufficiently large $r$.
Moreover, in many examples the principal residue $A$ equals the Lusternik - Schnirelmann category,
\begin{eqnarray}\label{cat}
A={\sf{cat}}(X).
\end{eqnarray}
These examples lead to the Rationality Question of \cite{FarO19}: for which finite CW-complexes does the formal power series (\ref{gen:x}) represent a rational function of the form (\ref{form}) with the top residue equal to the Lusternik - Schnirelmann category (\ref{cat})?
\subsec{} In the subsequent paper \cite{FarKS20} the authors analysed a class of CW-complexes violating the Ganea conjecture and found examples $X$ such that the ${\sf{TC}}$-generating function (\ref{gen:x}) is a rational function of the form (\ref{form}) although the top residue $A$ is distinct from ${\sf{cat}}(X)$.
\subsec{} Next we mention a few examples in which the generating function (\ref{gen}) can be computed.
Firstly, suppose that $p: E\to B$ is the trivial fibration with path-connected fibre $X$. Then the generating function (\ref{gen})
equals $\mathcal F_X(t)$.
Secondly, consider the Hopf bundle $p: S^3\to S^2$. Then, according to Proposition \ref{princ}, we have
$${\sf{TC}}_{r+1}[p:S^3\to S^2]={\sf{TC}}_{r+1}(S^1)= {\sf{cat}}((S^1)^r)=r \quad \mbox{for any}\quad r\ge 1.$$
Therefore, the ${\sf{TC}}$-generating function of the Hopf bundle equals
$$\sum_{r\ge 1} r\cdot t^r \, =\, \frac{t}{(1-t)^2} \, =\, \frac{1}{(1-t)^2} - \frac{1}{1-t}.$$
In this case the principal residue equals $A = 1= {\sf{cat}}(S^1)$.
Exactly the same answer for the ${\sf{TC}}$-generating function can be obtained in the case of a more general Hopf bundle $p: S^{2n+1}\to {\Bbb {CP}}^n$.
\subsec{} Consider now the ${\sf{TC}}$-generating function of the Fadell - Neuwirth bundle
$p: F(\mathbb{R}^d, m+n)\to F(\mathbb{R}^d, m)$ which was analysed in this paper.
We start with the case when the dimension $d$ is odd. Then we have the ${\sf{TC}}$-generating function
$$\mathcal F(t) \, =\, \sum_{r\ge 1} [(r+1)n +m-1]\cdot t^r = \frac{n}{(1-t)^2} + \frac{m-1}{1-t} -n - m+1.
$$
It is a rational function of the form (\ref{form}) with the principal residue
$$A= n = {\sf{cat}}(F(\mathbb{R}^d-\mathcal O_m, n))$$ equal to the category of the fibre. Using Theorem 1.3 from \cite{GonG15} we
may write the ${\sf{TC}}$-generating function of the fiber $X=F(\mathbb{R}^d-\mathcal O_m, n)$ as follows
\begin{eqnarray}
\mathcal F_X(t) = n \cdot \sum_{r\ge 1} (r+1)t^r = \frac{n}{(1-t)^2} -n.
\end{eqnarray}
In this example the ${\sf{TC}}$-generating functions of the bundle and of the fiber have the same principal residue and their difference has a simple pole at $t=1$.
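The closed form above can be sanity-checked numerically by comparing a long partial sum of the power series with the rational function, evaluated at a sample rational point in exact arithmetic. The following Python sketch is purely illustrative; the function names are ours.

```python
from fractions import Fraction

def F_partial(n, m, t, R):
    """Partial sum sum_{r=1}^{R} [(r+1)n + m - 1] t^r of the series."""
    return sum(((r + 1) * n + m - 1) * t ** r for r in range(1, R + 1))

def F_closed(n, m, t):
    """The claimed closed form n/(1-t)^2 + (m-1)/(1-t) - n - m + 1."""
    return n / (1 - t) ** 2 + (m - 1) / (1 - t) - n - m + 1

n, m, t = 3, 4, Fraction(1, 10)
gap = F_closed(n, m, t) - F_partial(n, m, t, 60)
print(float(gap))   # a tiny positive number: only the truncated tail remains
```

With exact rational arithmetic, the difference is exactly the tail $\sum_{r>60}[(r+1)n+m-1]t^r$, which is astronomically small for $t=1/10$.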
The ${\sf{TC}}$-generating function of the Fadell - Neuwirth bundle is slightly different in the case when $d=2$:
$$\mathcal F(t) \, =\, \sum_{r\ge 1} [(r+1)n +m-2]\cdot t^r = \frac{n}{(1-t)^2} + \frac{m-2}{1-t} -n - m+2.$$
In this case the power series represents a rational function of form (\ref{form}) and the principal residue equals the Lusternik - Schnirelmann category of the fibre.
The examples above suggest the following general question:
{\it How are the ${\sf{TC}}$-generating functions of a fibration $p:E\to B$ and of its fibre $X$ related?}
More specifically we may ask:
{\it For which fibrations $p: E\to B$ are the differences
$${\sf{TC}}_{r+1}[p:E\to B] - {\sf{TC}}_{r}[p:E\to B] \quad \mbox{and}\quad {\sf{TC}}_{r}[p:E\to B]-{\sf{TC}}_r(X)$$
eventually constant?}
This stabilisation happens in the case of the Fadell - Neuwirth fibration.
\section{Introduction}\label{section-introduction}
Stern's sequence $(a(n))_{n \geq 0}$, defined by the recurrence relations
$$ a(2n) = a(n), \quad a(2n+1) = a(n)+a(n+1),$$
for $n \geq 0$, and initial values $a(0) = 0$, $a(1) = 1$,
has been studied for over 150 years. It was introduced by Stern in 1858 \cite{Stern:1858}, and later studied by
Lucas \cite{Lucas:1878}, Lehmer \cite{Lehmer:1929}, and many others. For a survey of the Stern sequence and its amazing properties, see the papers of Urbiha \cite{Urbiha:2001} and Northshield
\cite{Northshield:2010}. It is an example of a $2$-regular sequence \cite[Example 7]{Allouche&Shallit:1992}. The first few values of this sequence are
given in Table~\ref{tab1}; it is sequence \seqnum{A002487} in
the {\it On-Line Encyclopedia of Integer Sequences} (OEIS)~\cite{Sloane:2022}.
\begin{table}[H]
\begin{center}
\begin{tabular}{c|cccccccccccccccc}
$n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15\\
\hline
$a(n)$ & 0 & 1 & 1 & 2 & 1 & 3 & 2 & 3 & 1 & 4 & 3 & 5 & 2 & 5 & 3 & 4
\end{tabular}
\end{center}
\caption{First few values of the Stern sequence.}
\label{tab1}
\end{table}
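The defining recurrence translates directly into code. The following short Python sketch (illustrative only) reproduces the values of Table~\ref{tab1}.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    """Stern's sequence: a(0)=0, a(1)=1, a(2n)=a(n), a(2n+1)=a(n)+a(n+1)."""
    if n < 2:
        return n
    return a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)

print([a(n) for n in range(16)])
# [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5, 2, 5, 3, 4]
```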
The sequence $a(n)$ rises and falls in a rather complicated way; see
Figure~\ref{fig1}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6.5in]{sternchart3.png}
\end{center}
\caption{Stern's sequence and its running maximum for $0\leq n \leq 1200$.}
\label{fig1}
\end{figure}
For this reason, several authors have been interested in understanding the
local maxima of $(a(n))_{n \geq 0}$.
This is easiest to determine when one restricts one's attention to numbers with $i$ bits; that is, to the interval $[2^{i-1}, 2^{i})$. Lucas \cite{Lucas:1878} observed without proof
that $\max_{2^{i-1} \leq n < 2^i} a(n) = F_{i+1}$, where
$F_n$ is the $n$th Fibonacci number, defined as usual by
$F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$, and proofs
were later supplied by Lehmer \cite{Lehmer:1929} and Lind \cite{Lind:1969}.
The second- and third-largest values in the same interval,
$[2^{i-1}, 2^{i})$, were determined by Lansing \cite{Lansing:2014}, and
more general results for these intervals were obtained by Paulin \cite{Paulin:2017}.
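Lucas's observation that $\max_{2^{i-1} \leq n < 2^i} a(n) = F_{i+1}$ is easy to confirm numerically for small $i$; a brief Python sketch (illustrative only, not a proof):

```python
def a(n, memo={0: 0, 1: 1}):
    """Stern's sequence via its defining recurrence (memoized)."""
    if n not in memo:
        memo[n] = a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)
    return memo[n]

def fib(n):
    """F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}."""
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

for i in range(1, 15):
    assert max(a(n) for n in range(2 ** (i - 1), 2 ** i)) == fib(i + 1)
print("max of a(n) over [2^(i-1), 2^i) equals F_{i+1} for i = 1, ..., 14")
```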
On the other hand,
Coons and Tyler \cite{Coons&Tyler:2014} showed that
$$ \limsup_{n \rightarrow \infty} \frac{a(n)}{n^{\log_2 \varphi}} =
\frac{\varphi^{\log_2 3}}{\sqrt{5}},$$
where $\varphi = (1+\sqrt{5})/2$ is the golden ratio. This gives
the maximum order of growth of Stern's sequence. Later, Defant \cite{Defant:2016} generalized their result to the analogue of Stern's sequence in all integer bases $b \geq 2$.
In this paper, we are concerned with the positions of the ``running maxima'' or
``record-setters'' of the Stern sequence overall, not restricted to
subintervals of the form $[2^{i-1}, 2^i)$. These are the indices $v$
such that $a(j) < a(v)$ for all $j < v$. The first few record-setters
and their values are given in Table~\ref{tab2}.
\begin{table}[H]
\begin{center}
\begin{tabular}{c|cccccccccccccccccc}
$i$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\
\hline
$v_i$ & 0 & 1 & 3 & 5 & 9 & 11 & 19 & 21 & 35 & 37 & 43 & 69& 73 & 75 & 83 & 85 & 139 & 147 \\
$a(v_i)$ & 0 & 1 & 2 & 3 & 4 & 5 & 7 & 8 & 9 & 11 & 13 & 14 & 15 & 18 & 19 & 21 & 23 &26
\end{tabular}
\end{center}
\caption{First few record-setters for the Stern sequence.}
\label{tab2}
\end{table}
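The record-setters of Table~\ref{tab2} can be recomputed directly from the definition by tracking the running maximum; the following Python sketch is illustrative and not part of the proofs.

```python
def a(n, memo={0: 0, 1: 1}):
    """Stern's sequence via its defining recurrence (memoized)."""
    if n not in memo:
        memo[n] = a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)
    return memo[n]

def record_setters(limit):
    """Indices v < limit with a(j) < a(v) for all j < v."""
    best, recs = -1, []
    for v in range(limit):
        if a(v) > best:
            best = a(v)
            recs.append(v)
    return recs

v = record_setters(200)
print(v[:18])
# [0, 1, 3, 5, 9, 11, 19, 21, 35, 37, 43, 69, 73, 75, 83, 85, 139, 147]
print([a(x) for x in v[:18]])
# [0, 1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 14, 15, 18, 19, 21, 23, 26]
```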
The sequence of record-setters $(v_i)_{i \geq 1}$ is sequence \seqnum{A212288} in the OEIS,
and the sequence $(a(v_i))_{i \geq 1}$ is sequence \seqnum{A212289} in the OEIS.
In this paper, we provide a complete description of the record-setters for
the Stern sequence.
To state the theorem, we need to use a standard notation for
repetitions of strings: for a string $x$, the expression
$x^i$ means $\overbrace{xx\cdots x}^i$. Thus, there is a possibility for confusion between ordinary powers of integers
and powers of strings, but hopefully the context will make our meaning clear.
\begin{theorem} \label{mainTheorem}
The $k$-bit record-setters, for $k < 12$, are
given in Table~\ref{tab3}.
For $k \geq 12$,
the $k$-bit record-setters of the Stern sequence, listed
in increasing order, have the following representation in base $2$:
\begin{itemize}
\item $k$ even, $k = 2n$:
$$\begin{cases}
100\, (10)^a\, 0\, (10)^{n-3-a}\, 11, & \text{ for } 0 \leq a \leq n-3; \\
(10)^{b}\, 0\, (10)^{n-b-1} \, 1, & \text{ for } 1 \leq b \leq \lfloor n/2 \rfloor; \\
(10)^{n-1}\, 11.
\end{cases}$$
\item $k$ odd, $k=2n+1$:
$$
\begin{cases}
10 00\, (10)^{n-2}\, 1 ; \\
100100\, (10)^{n-4}\, 011; \\
100\, (10)^b\, 0\, (10)^{n-2-b} \, 1, & \text{ for } 1 \leq b \leq \lceil n/2 \rceil - 1; \\
(10)^{a+1}\,
0\, (10)^{n-2-a}\, 11, & \text{ for } 0 \leq a \leq n-2;\\
(10)^{n}\, 1.
\end{cases}
$$
\end{itemize}
In particular, for $k \geq 12$, the number of $k$-bit record-setters
is $\lfloor 3k/4 \rfloor - (-1)^k$.
\end{theorem}
In this paper, we prove the correctness of the classification above by first ruling out large families of candidates and then determining the record-setters among the remaining ones.
Our approach is to interpret numbers as binary strings. In Section \ref{basics}, we will introduce and provide some basic lemmas regarding this approach.
To find the set of record-setters, we exclude many candidates and prove they do not belong to the set of record-setters in Section \ref{search_space}.
In Section \ref{limit1001000}, we rule out more candidates by using some calculations based on Fibonacci numbers.
Finally, in Sections \ref{final_even} and \ref{final_odd}, we finish the classification of record-setters and prove Theorem \ref{mainTheorem}.
{\small\begin{center}
\begin{longtable}[htb]{c|r|r}
$k$ & record-setters & numerical \\
& with $k$ bits & values \\
\hline
1 & 1 & 1 \\
2 & 11 & 3 \\
3 & 101 & 5 \\
4 & 1001 & 9 \\
& 1011 & 11 \\
5 & 10011 & 19 \\
& 10101 & 21 \\
6 & 100011 & 35 \\
& 100101 & 37 \\
& 101011 & 43 \\
7 & 1000101 & 69 \\
& 1001001 & 73 \\
& 1001011 & 75 \\
& 1010011 & 83 \\
& 1010101 & 85 \\
8 & 10001011 & 139 \\
& 10010011 & 147 \\
& 10010101 & 149 \\
& 10100101 & 165 \\
& 10101011 & 171 \\
9 & 100010101 & 277 \\
& 100100101 & 293 \\
& 100101011 & 299 \\
& 101001011 & 331 \\
& 101010011 & 339 \\
& 101010101 & 341 \\
10 & 1000101011 & 555 \\
& 1001001011 & 587 \\
& 1001010011 & 595 \\
& 1001010101 & 597 \\
& 1010010101 & 661 \\
& 1010101011 & 683 \\
11 & 10001010101 & 1109 \\
& 10010010101 & 1173 \\
& 10010100101 & 1189 \\
& 10010101011 & 1195 \\
& 10100101011 & 1323 \\
& 10101001011 & 1355 \\
& 10101010011 & 1363 \\
& 10101010101 & 1365 \\
\caption{$k$-bit record-setters for $k < 12$.}
\label{tab3}
\end{longtable}
\end{center}
}
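The count of $k$-bit record-setters stated in Theorem~\ref{mainTheorem} can be checked by brute force for the first few values of $k \geq 12$; an illustrative Python sketch (names are ours):

```python
def a(n, memo={0: 0, 1: 1}):
    """Stern's sequence via its defining recurrence (memoized)."""
    if n not in memo:
        memo[n] = a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)
    return memo[n]

K = 16
best, counts = 0, {}            # best = a(0); count records by bit length
for v in range(1, 2 ** K):
    if a(v) > best:
        best = a(v)
        counts[v.bit_length()] = counts.get(v.bit_length(), 0) + 1

for k in range(12, K + 1):
    assert counts[k] == 3 * k // 4 - (-1) ** k, k
print({k: counts[k] for k in range(12, K + 1)})
```

The printed counts agree with $\lfloor 3k/4 \rfloor - (-1)^k$ for $12 \leq k \leq 16$.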
\section{Basics}\label{basics}
We start off by defining a new sequence $(s(n))_{n \geq 0}$, which is the
Stern sequence shifted by one: $s(n) = a(n + 1)$ for $n \geq 0$. Henceforth we will be mainly concerned with $s$ instead of $a$. Let $R$ be the set of record-setters
for the sequence $(s(n))_{n \geq 0}$, so that
$R = \{ v_i - 1 \, : \, i \geq 1 \}$.
A {\it hyperbinary representation\/} of a positive integer $n$ is a summation of powers of $2$, using each power
at most twice.
The following theorem of Carlitz \cite{Carlitz:1964} provides another way of interpreting the quantity $s(n)$:
\begin{theorem}
The number of hyperbinary representations of $n$ is $s(n)$.
\end{theorem}
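Carlitz's theorem is easy to test numerically. The sketch below counts hyperbinary representations via the standard recurrence $b(0)=1$, $b(2n+1)=b(n)$, $b(2n)=b(n)+b(n-1)$ (an odd number must use the unit power exactly once; an even number uses it zero or two times) and compares the result with $s(n) = a(n+1)$; this recurrence is a well-known reformulation, stated here as an assumption of the illustration.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    """Stern's sequence."""
    if n < 2:
        return n
    return a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)

@lru_cache(maxsize=None)
def hyper(n):
    """Number of hyperbinary representations of n:
    b(0) = 1, b(2n+1) = b(n), b(2n) = b(n) + b(n-1) for n >= 1."""
    if n == 0:
        return 1
    if n % 2 == 1:
        return hyper((n - 1) // 2)
    return hyper(n // 2) + hyper(n // 2 - 1)

assert all(hyper(n) == a(n + 1) for n in range(2000))
print(hyper(43))   # 5
```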
We now define some notation. We frequently represent integers as strings of digits.
If $ x = e_{t-1} e_{t-2} \cdots e_1 e_0$
is a string of digits 0, 1, or 2, then $[x]_2$ denotes the
integer $n = \sum_{0 \leq i < t} e_i 2^i$. For example,
\begin{equation*}
43 = [101011]_2 = [012211]_2 = [020211]_2 = [021011]_2 = [100211]_2.
\label{example43}
\end{equation*}
By ``breaking the power $2^i$'' or the $(i + 1)$-th bit from the right-hand side, we mean writing $2^i$ as two copies of $2^{i - 1}$. For example, breaking the power $2^1$ into
$2^0 + 2^0$ can be thought of as rewriting the string $10$ as $02$.
Now we state two helpful but straightforward lemmas:
\begin{lemma} \label{breakBits} Let string $x$ be the binary representation of $n \geq 0$, that is, $[x]_2 = n$. All proper hyperbinary representations of $n$ can be reached from $x$ only by breaking powers $2^i$, for $0 < i <|x|$.
\end{lemma}
\begin{proof}
To prove this, consider a hyperbinary representation string $y = c_{t-1} c_{t-2} \cdots c_1 c_0$ of $n$. We show that $y$ can be reached from $x$ using the following algorithm: let $i$ be the position of $y$'s leftmost 2, and in each round set $c_i := c_i - 2$ and $c_{i+1} := c_{i+1} + 1$. With each round the position $i$ of the leftmost 2 strictly increases until the number of 2s decreases, while the value $[y]_2$ remains unchanged. Since $i$ cannot exceed $t - 1$, eventually $y$ has no 2s; that is, $y$ becomes $x$. By reversing these steps, we can reach the initial value of $y$ from $x$ only by ``breaking'' bits.
\end{proof}
\begin{lemma} \label{breaktwice} Let string $x$ be the binary representation of $n \geq 0$. In the process of reaching a hyperbinary representation from $x$, only by breaking bits, a bit cannot be broken twice.
\end{lemma}
\begin{proof}
Since $2^i > 2^{i-1} + \cdots + 2^0$, and $[2(0)^i]_2 > [(2)^{i-1}]_2$, the $(i+1)$-th bit from the right cannot be broken twice.
\end{proof}
For simplicity, we define a new function, $G(x)$, and work with binary and hyperbinary representations henceforward.
The argument of $G$ is a string $x$ containing only the digits $\{0,1,2, 3\}$, and its value is
the number of different hyperbinary representations reachable from $x$, only by the breaking mechanism we defined above. Thus, for example,
Eq.~\eqref{example43} demonstrates that $G(101011) = 5$.
Although the digit 3 cannot appear in a proper hyperbinary representation, we use it here to mean that the corresponding bit \textit{must} be broken. Also, from Lemma~\ref{breaktwice}, we know that the digit 4 cannot appear since it would have to be broken twice. We can conclude from Lemma \ref{breakBits} that, for a \textit{binary} string $x$, we have $G(x) = s([x]_2)$. We define $G(\epsilon)= 1$.
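The function $G$ can be computed by a brute-force search over the breaking moves. The sketch below (illustrative only; the pruning of digits above 3 reflects the remark that the digit 4 cannot appear) counts the proper hyperbinary strings reachable from a starting string.

```python
def G(x):
    """Brute-force G: count proper hyperbinary strings (all digits <= 2)
    reachable from x by breaking bits.  A break turns the digit at some
    position into one less and adds 2 to the digit on its right; the
    rightmost digit is never broken.  States containing a digit above 3
    are pruned, as they can never lead to a proper string."""
    start = tuple(int(c) for c in x)
    seen, stack, proper = {start}, [start], set()
    while stack:
        s = stack.pop()
        if all(d <= 2 for d in s):
            proper.add(s)
        for i in range(len(s) - 1):
            if s[i] >= 1 and s[i + 1] <= 1:   # resulting digit stays <= 3
                t = tuple(list(s[:i]) + [s[i] - 1, s[i + 1] + 2] + list(s[i + 2:]))
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
    return len(proper)

print(G("101011"))   # 5, matching the five representations of 43 above
```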
In what follows, all variables have the domain $\{ 0,1 \}^*$; if we have a need for the digits $2$ and $3$, we write them explicitly.
We will later use the following lemma to get rid of 2s and 3s in our hyperbinary representations and get a representation using only $0$s and $1$s:
\begin{lemma} \label{remove23} For a binary string $h$, the equalities
\begin{itemize}
\item[(a)] $G(2h) = G(1h)$,
\item[(b)] $G(30h) = G(1h)$,
\item[(c)] $G(3(1)^i0h) = G(1h)$,
\item[(d)] $G(3(1)^i) = G(3) = 0$
\end{itemize}
hold.
\end{lemma}
\begin{proof}
\leavevmode
\begin{itemize}
\item[(a)]
According to Lemma \ref{breaktwice}, we cannot break the leftmost bit twice. Therefore, the numbers of different hyperbinary representations we can reach from $2h$ and $1h$, i.e., their $G$-values, are the same.
\item[(b)] Since 3 cannot appear in a hyperbinary representation, we must break it. This results in a new string
$22h$. Due to Lemma \ref{breaktwice}, the first (leftmost) $2$ is useless,
and we cannot break it again. Thus, $G(30h) = G(2h)
= G(1h)$.
\item[(c)] Since we have to break the 3 again, the string $3(1)^i0h$ becomes $23(1)^{i -1}0h$, and $G(3(1)^i0h) = G(3(1)^{i -1}0h)$. By continuing this we get $G(3(1)^i0h)
= G(30h) = G(1h)$.
\item[(d)] To calculate $3(1)^i$'s $G$-value, we must count the number of proper hyperbinary representations reachable from $3(1)^i$. The first 3 must be broken, and by breaking it, we obtain another string of the same format, i.e., $3(1)^{i-1}$. By continuing this, we reach the string $3$, which cannot be broken any further and is not a valid hyperbinary string. Therefore $G(3(1)^i) = G(3) = 0$.
\end{itemize}
\end{proof}
We now define two transformations on string $h$, prime and double prime transformations.
For a string $h$, we let $h'$ be the string resulting from adding two to its leftmost bit and then applying Lemma~\ref{remove23} to remove the newly created 2 or 3. Therefore, string $h'$ is either a {\it binary} string, or it is 3, which cannot be transformed further, as in case (d) of Lemma~\ref{remove23}. For example,
\begin{itemize}
\item[(a)] If $h = 0011$, then we get $2011$, and by applying Lemma~\ref{remove23}, we have $h' =1011$.
\item[(b)] If $h = 1011$, then $h' = 111$.
\item[(c)] If $h = \epsilon$, then $h$ has no leftmost bit, and $h'$ is undefined. Therefore, we set $\epsilon' = 3$ and $G(\epsilon') = 0$.
\item[(d)] If $h = 1$, then $h' = 3$ and $G(h') = 0$.
\end{itemize}
We let $h''$ be the string resulting from removing all trailing zeroes and decreasing the rightmost bit by 1. For
example,
\begin{itemize}
\item[(a)] If $h = 100\ 100$, then $h'' = 1000$;
\item[(b)] If $h = 1011$, then $h'' = 10\ 10$;
\item[(c)] If $h = 3$, then $h'' = 2$;
\item[(d)] If $h = 0^i$ for
$i \geq 0$, then after removing trailing zeros, the string does not have a rightmost bit and is not in the transformation function's domain. Therefore, we set $G(h'') = 0$.
\end{itemize}
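The two transformations can be sketched as follows (an illustrative sketch; the function names `prime` and `dprime`, and the `None`/`"3"` encodings of the undefined cases, are our own conventions):

```python
def prime(h):
    """h': add two to the leftmost bit, then reduce via Lemma remove23.
    Returns a binary string, or "3" when the result has G-value 0."""
    if h == "":
        return "3"                     # epsilon' := 3 by convention
    d = str(int(h[0]) + 2) + h[1:]     # leftmost bit + 2 yields a leading 2 or 3
    if d[0] == "2":                    # case (a): 2h -> 1h
        return "1" + d[1:]
    i = 1                              # d starts with 3: skip the run of 1s
    while i < len(d) and d[i] == "1":
        i += 1
    if i == len(d):                    # case (d): 3(1)^i has G-value 0
        return "3"
    return "1" + d[i + 1:]             # cases (b), (c): 3(1)^i 0 h2 -> 1 h2

def dprime(h):
    """h'': drop trailing zeros, then decrement the rightmost bit.
    Returns None when the result is undefined (G-value 0)."""
    h = h.rstrip("0")
    if not h:
        return None
    return h[:-1] + str(int(h[-1]) - 1)
```

These reproduce examples (a)--(d) above, e.g. `prime("1011") == "111"` and `dprime("100100") == "1000"`.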
The reason for defining the prime and double prime transformations is to allow splitting a single string into two pieces and calculating the $G$ function on each piece separately; this makes $G$-values easier to compute. For example, $h'$ is useful when a bit with the value $2^{|h|}$ is broken, and $h''$ is useful when we want to break a $2^0$ and pass it to a string on the right. Lemma~\ref{breaktwice} justifies this: since a bit cannot be broken twice, the two pieces can be treated as entirely separate once a bit is broken.
\section{Ruling out Candidates for Record-Setters}\label{search_space}
In this section, by using Lemmas \ref{breakBits} and \ref{remove23}, we try to decrease the search space as much as possible. A useful tool is linear algebra. We now define a certain matrix $\mu(x)$ for a binary string $x$. We set
\begin{equation}
\mu(x) =
\begin{bmatrix}
G(x) & G(x'')\\
G(x') & G((x')'')
\end{bmatrix} .
\end{equation}
For example, when $|x|=1$, the values are
\begin{align*}
&G(1) = 1, && G(1'') = G(0) = 1,\\
&G(1') = G(3) = 0, && G( (1')'') = G(3'') = G(2) = G(1) = 1,\\
&G(0) = 1, && G(0'') = 0,\\
&G(0') = G(2) = 1, && G( (0')'') = G(2'') = G(1) = 1,
\end{align*}
and the corresponding matrices are
\begin{equation*}
\mu(1) =
\begin{bmatrix}
1 & 1\\
0 & 1
\end{bmatrix}
\text{ and }
\mu(0) =
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}.
\end{equation*}
In the case where $x = \epsilon$, the values are
\begin{align*}
&G(\epsilon) = 1, && G(\epsilon'') = 0,\\
&G(\epsilon') = G(3) = 0, && G( (\epsilon')'') = G(3'') = G(2) = G(1) = 1,\\
\end{align*}
and the matrix is
\begin{equation*}
\mu(\epsilon) =
\begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix},
\end{equation*}
the identity matrix.
\begin{theorem} \label{matrix_linearization}
For two binary strings $x$ and $y$, the equation
\begin{equation}
\mu(xy) = \mu(x)\cdot\mu(y)
\end{equation}
holds.
\end{theorem}
\begin{proof}
To show this, we prove $\mu(1x) = \mu(1)\cdot\mu(x)$ and $\mu(0x) = \mu(0) \cdot \mu(x)$. The general case for $\mu(xy) = \mu(x)\cdot\mu(y)$ then follows by induction.
We first prove the case for $1x$. Consider
\begin{equation*}
\mu(1)\cdot\mu(x) =
\begin{bmatrix}
1 & 1\\
0 & 1
\end{bmatrix} \cdot
\begin{bmatrix}
G(x) & G(x'')\\
G(x') & G((x')'')
\end{bmatrix}
=
\begin{bmatrix}
G(x) + G(x') & G(x'') + G((x')'')\\
G(x') & G((x')'')
\end{bmatrix},
\end{equation*}
which must equal
\begin{equation*}
\mu(1x) =
\begin{bmatrix}
G(1x) & G((1x)'')\\
G((1x)') & G(((1x)')'')
\end{bmatrix}.
\end{equation*}
We first prove $G(1x) = G(x) + G(x')$. Consider two cases: the leading 1 either breaks or it does not. The number of hyperbinary representations in which it does not break equals $G(x)$; if it breaks, the rest of the string becomes $0x'$, which has $G(x')$ representations.
To show $G((1x)'') = G(x'') + G((x')'')$, we use the same approach. The leading 1 either breaks or not, resulting in the two strings $x$ and $x'$. In both cases, we must apply the double prime transformation to break a $2^0$ in order to pass it to a string on the right side of $1x$.
For the equality of the bottom row, the string $(1x)'$ is $3x$; thus, the 3 must be broken, and the rest of the string becomes $x'$. So $\mu(1x) = \mu(1)\cdot\mu(x)$ holds.
The case of $0x$ can be shown using similar conclusions. Consider
\begin{equation*}
\mu(0)\cdot\mu(x) =
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix} \cdot
\begin{bmatrix}
G(x) & G(x'')\\
G(x') & G((x')'')
\end{bmatrix}
=
\begin{bmatrix}
G(x) & G(x'') \\
G(x) + G(x') & G(x'') + G((x')'')
\end{bmatrix},
\end{equation*}
which must equal
\begin{equation*}
\mu(0x) =
\begin{bmatrix}
G(0x) & G((0x)'')\\
G((0x)') & G(((0x)')'')
\end{bmatrix} =
\begin{bmatrix}
G(x) & G(x'')\\
G(2x) & G((2x)'')
\end{bmatrix}
=
\begin{bmatrix}
G(x) & G(x'')\\
G(1x) & G((1x)'')
\end{bmatrix}.
\end{equation*}
We have already shown $G(1x) = G(x) + G(x')$ and $G((1x)'') = G(x'') + G((x')'')$. Therefore, the equation $\mu(0x) = \mu(0)\cdot\mu(x)$ holds, and the theorem is proved.
\end{proof}
This theorem also gives us a helpful tool
to compute $G(x)$, $G(x'')$, $G(x')$, and $G((x')'')$, since $\mu(x)$ is just a product of $\mu(1)$s and $\mu(0)$s.
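For instance, $G$ can be evaluated in code as a left-to-right product of the two digit matrices (a minimal sketch; the names are ours):

```python
MU = {"1": ((1, 1), (0, 1)), "0": ((1, 0), (1, 1))}

def mat_mul(a, b):
    """Product of two 2x2 integer matrices stored as tuples of tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mu(x):
    """mu(x) as the product mu(x[0]) ... mu(x[-1]); mu(epsilon) = identity."""
    m = ((1, 0), (0, 1))
    for c in x:
        m = mat_mul(m, MU[c])
    return m

def G(x):
    return mu(x)[0][0]   # top-left entry, as in Lemma G_linearization
```

Here `G("101011")` returns 5, agreeing with the brute-force count of Eq.~\eqref{example43}.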
\begin{lemma} \label{G_linearization}
For a string $x$, the equation $G(x) = \begin{bmatrix} 1 & 0 \end{bmatrix} \mu(x) \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ holds. This multiplication simply returns the top-left entry of the matrix $\mu(x)$.
\end{lemma}
From Theorem \ref{matrix_linearization} and Lemma \ref{G_linearization} we deduce the following result.
\begin{lemma} \label{string-division} For binary strings $x, y$, the equation
\begin{equation}
G(xy) = G(x)G(y) + G(x'')G(y')
\end{equation}
holds.
\end{lemma}
\begin{proof}
We have
\begin{align*}
G(xy) &= \begin{bmatrix} 1 & 0 \end{bmatrix}\mu(xy)\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix}\mu(x)\mu(y)\begin{bmatrix} 1 \\ 0 \end{bmatrix}\\
&= \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix}
G(x)G(y) + G(x'')G(y') & G(x)G(y'') + G(x'')G((y')'')\\
G(x')G(y)+ G((x')'')G(y') & G(x')G(y'') + G((x')'')G((y')'')
\end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
&= G(x)G(y) + G(x'')G(y').
\end{align*}
This can also be explained in another way. If we do not break the rightmost bit of $x$, we can assume the two strings are separate and get $G(x)G(y)$ number of hyperbinary representations. In case we break it, then $G(x'')G(y')$ ways exist.
\end{proof}
In what follows, we always set $v := \begin{bmatrix} 1 & 0 \end{bmatrix}$ and $w := \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.
Here we define three comparators that help us replace substrings (or contiguous subsequences) in order to obtain a new string without decreasing the string's $G$-value.
\begin{definition}[Comparators]
In this paper, when we state a matrix $M_1$ is greater than or equal to the matrix $M_0$, we mean each entry of $M_1 - M_0$ is non-negative (they both must share the same dimensions).
\begin{itemize}
\item The infix comparator: For two strings $y$ and $t$, the relation $ t \geq_{\rm inf} y$ holds if $\mu(t) \geq \mu(y)$ holds.
\item The suffix comparator: For two strings $y$ and $t$, the relation $ t \geq_{\rm suff} y$ holds if $
\mu(t)\cdot w
\geq \mu(y)\cdot w$ holds.
\item The prefix comparator: For two strings $y$ and $t$, the relation $t \geq_{\rm pref} y$ holds if $
v\cdot\mu(t)
\geq v\cdot\mu(y)
$ holds.
\end{itemize}
\end{definition}
\begin{lemma} \label{gc_lemma}
If $t \geq_{\rm inf} y$ and $t$ represents a smaller number, then no record-setter can contain $y$ as a substring.
\end{lemma}
\begin{proof}
Consider a string $a = xyz$. According to Lemma \ref{G_linearization}, we have
\begin{equation*}
G(a) = v \cdot \mu(x) \cdot \mu(y) \cdot \mu(z) \cdot w.
\end{equation*}
Since $ \mu(t) \geq \mu(y)$, and all entries in the matrices are positive, the replacement of $y$ with $t$ does not decrease $G(a)$, and also yields a smaller number, that is $(xtz)_2 \leq (xyz)_2$. Therefore, $(xyz)_2 \notin R$.
\end{proof}
As an example, consider the two strings $111$ and $101$. Then $101 \geq_{\rm inf} 111$ holds, since
\begin{equation*}
\mu(101) =
\begin{bmatrix}
2 & 3\\
1 & 2
\end{bmatrix} \geq
\mu(111) =
\begin{bmatrix}
1 & 3\\
0 & 1
\end{bmatrix} .
\end{equation*}
\begin{lemma} \label{endLemma}
If $t < y$ and $t \geq_{\rm suff} y$, then $y$ is not a suffix of a record-setter.
\end{lemma}
\begin{proof}
Consider a string $a = xy$. We have shown $G(a) = v \cdot \mu(x) \cdot \mu(y) \cdot w$. By
replacing $y$ with $t$, since $\mu(t) \cdot w \geq \mu(y) \cdot w$, the value $G(a)$ does not decrease, and we obtain a smaller string.
\end{proof}
\begin{lemma} \label{beginLemma}
If $t < y$ and $t \geq_{\rm pref} y$, then $y$ is not a prefix of a record-setter.
\end{lemma}
\begin{corollary} \label{lemma111}
For $h \in R$: since $101 \geq_{\rm inf} 111$ and $101 < 111$, the string $h$ cannot contain $111$ as a substring.
\end{corollary}
We have established that a record-setter $h$ cannot contain three consecutive 1s. Now, we plan to prove $h$ cannot have two consecutive 1s, either. We do this in the following lemmas and theorems.
The following theorem provides families of strings whose $G$-values are Fibonacci numbers.
\begin{theorem} \label{fibonacci-vals} For $i \geq 0$, the equations
\begin{align}
G((10)^i) &= F_{2i+1},\label{Fib1st} \\
G((10)^i0) &= F_{2i + 2},\label{Fib2nd}\\
G(1(10)^i) &= F_{2i + 2}, \text{ and}\label{Fib3rd} \\
G(1(10)^i0) &= F_{2i + 3}\label{Fib4th}
\end{align}
hold.
\end{theorem}
\begin{proof}
We first prove that the following equation holds:
\begin{equation}
\mu((10)^i) = \begin{bmatrix}
F_{2i + 1} & F_{2i}\\
F_{2i} & F_{2i - 1}
\end{bmatrix} .
\label{mat10}
\end{equation}
The case for $i = 1$, namely $\mu(10) = \begin{bmatrix}
2 & 1\\
1 & 1
\end{bmatrix}$, holds. We now use induction:
\begin{equation*}
\mu((10)^{i + 1}) = \mu((10)^i) \mu(10) =
\begin{bmatrix}
F_{2i + 1} & F_{2i}\\
F_{2i} & F_{2i - 1}
\end{bmatrix}
\begin{bmatrix}
2 & 1\\
1 & 1
\end{bmatrix} =
\begin{bmatrix}
F_{2i + 3} & F_{2i + 2}\\
F_{2i + 2} & F_{2i + 1}
\end{bmatrix},
\end{equation*}
and thus we can conclude \eqref{Fib1st}.
For the other equations \eqref{Fib2nd}, \eqref{Fib3rd}, and \eqref{Fib4th}, we proceed similarly:
\begin{align*}
\mu((10)^i0) = \mu((10)^i)\mu(0) =
\begin{bmatrix}
F_{2i + 1} & F_{2i}\\
F_{2i} & F_{2i - 1}
\end{bmatrix}
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix} =
\begin{bmatrix}
F_{2i + 2} & F_{2i}\\
F_{2i + 1} & F_{2i - 1}
\end{bmatrix};\\
\mu(1(10)^i) = \mu(1)\mu((10)^i) =
\begin{bmatrix}
1 & 1\\
0 & 1
\end{bmatrix}
\begin{bmatrix}
F_{2i + 1} & F_{2i}\\
F_{2i} & F_{2i - 1}
\end{bmatrix}
=
\begin{bmatrix}
F_{2i + 2} & F_{2i + 1}\\
F_{2i} & F_{2i - 1}
\end{bmatrix};\\
\mu(1(10)^i0) = \mu(1)\mu((10)^i)\mu(0) =
\begin{bmatrix}
F_{2i + 2} & F_{2i + 1}\\
F_{2i} & F_{2i - 1}
\end{bmatrix}
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix} =
\begin{bmatrix}
F_{2i + 3} & F_{2i + 1}\\
F_{2i + 1} & F_{2i - 1}
\end{bmatrix} .
\end{align*}
Multiplying these by $v$ and $w$ as in Lemma \ref{G_linearization} confirms the equalities \eqref{Fib1st}--\eqref{Fib4th}.
\end{proof}
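These Fibonacci identities are easy to spot-check numerically (an illustrative sketch; `mu` and `fib` are our helper names, with $F_0 = 0$, $F_1 = 1$):

```python
def mu(x):
    """mu(x) as a product of the digit matrices mu(1), mu(0)."""
    M = {"1": ((1, 1), (0, 1)), "0": ((1, 0), (1, 1))}
    m = ((1, 0), (0, 1))
    for c in x:
        a, b = m, M[c]
        m = tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                        for j in range(2)) for i in range(2))
    return m

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for i in range(1, 10):
    assert mu("10" * i)[0][0] == fib(2 * i + 1)              # Eq. (Fib1st)
    assert mu("10" * i + "0")[0][0] == fib(2 * i + 2)        # Eq. (Fib2nd)
    assert mu("1" + "10" * i)[0][0] == fib(2 * i + 2)        # Eq. (Fib3rd)
    assert mu("1" + "10" * i + "0")[0][0] == fib(2 * i + 3)  # Eq. (Fib4th)
```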
\begin{lemma} \label{lemma1100} If $h \in R$, then $h$ cannot contain a substring of the form $1(10)^{i}0$ for $i>0$.
\end{lemma}
\begin{proof}
To prove this we use Theorem \ref{fibonacci-vals} and the infix-comparator to show $t = (10)^{i+1} \geq_{\rm inf} y = 1(10)^{i}0$:
\begin{equation*}
\mu(t) = \begin{bmatrix}
F_{2i + 3} & F_{2i + 2}\\
F_{2i + 2} & F_{2i + 1}
\end{bmatrix} \geq
\mu(y) = \begin{bmatrix}
F_{2i + 3} & F_{2i + 1}\\
F_{2i + 1} & F_{2i - 1}
\end{bmatrix} .
\end{equation*}
We conclude $t \geq_{\rm inf} y$ for $i \geq 1$. Consequently, a $00$ cannot appear to the right of a $11$: if it did, $h$ would contain a
substring of the form $1(10)^i0$.
\end{proof}
\begin{lemma} \label{lemma110} If $h \in R$, then $h$ does not end in $1(10)^{i}$ for $i \geq 0$.
\end{lemma}
\begin{proof}
Consider $t = (10)^i0$ and $y = 1(10)^{i}$. Then
\begin{equation*}
\mu(t) = \begin{bmatrix}
F_{2i + 2} & F_{2i}\\
F_{2i + 1} & F_{2i - 1}
\end{bmatrix} \quad \text{and} \quad
\mu(y) = \begin{bmatrix}
F_{2i + 2} & F_{2i + 1}\\
F_{2i} & F_{2i - 1}
\end{bmatrix},
\end{equation*}
so that
\begin{equation*}
\mu(t)w = \begin{bmatrix}
F_{2i + 2}\\
F_{2i + 1}
\end{bmatrix} \geq
\mu(y)w = \begin{bmatrix}
F_{2i + 2}\\
F_{2i}
\end{bmatrix}.
\end{equation*}
Hence $t \geq_{\rm suff} y$, and $h$ cannot end in $y$.
\end{proof}
\begin{theorem}
A record-setter $h \in R$ cannot contain the substring $11$.
\end{theorem}
\begin{proof}
Suppose it does. Consider the rightmost $11$. Due to Lemma \ref{lemma1100}, there cannot be two consecutive 0s
to its right. Therefore, the string must end in $1(10)^i$, which is impossible due to Lemma \ref{lemma110}.
\end{proof}
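As an empirical check of this theorem, one can compute $s(t)$ via the matrix product and scan a range for record-setters, taking a record-setter (as in the definition of $R$) to be an index $t$ with $s(t)$ strictly larger than all earlier values (a sketch; names ours):

```python
def s(t):
    """s(t) = G of t's binary representation, via Lemma G_linearization."""
    M = {"1": ((1, 1), (0, 1)), "0": ((1, 0), (1, 1))}
    m = ((1, 0), (0, 1))
    for c in bin(t)[2:]:
        a, b = m, M[c]
        m = tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                        for j in range(2)) for i in range(2))
    return m[0][0]

def record_setters(limit):
    """Indices t < limit where s(t) exceeds every earlier value."""
    best, recs = 0, []
    for t in range(1, limit):
        v = s(t)
        if v > best:
            best, recs = v, recs + [t]
    return recs

# every record-setter in range avoids the substring 11
assert all("11" not in bin(t) for t in record_setters(1 << 14))
```

Over a small range the record-setters begin $1, 2, 4, 8, 10, 18, \ldots$, none of whose binary representations contain 11.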
Therefore, we have shown that a record-setter $h$ is a concatenation of multiple strings of the form $1(0^i)$, for $i>0$. The next step establishes an upper bound on $i$ and shows that $i \leq 3$.
\begin{theorem} \label{only10100} A record-setter $h \in R$ cannot contain the substring $10000$.
\end{theorem}
\begin{proof}
First, we show $h$ cannot begin with $10000$:
\begin{equation*}
v \cdot \mu(10\ 10) = \begin{bmatrix}
5 & 3
\end{bmatrix}
\geq
v \cdot \mu(10000) = \begin{bmatrix}
5 & 1
\end{bmatrix}
\Longrightarrow 10\ 10 \geq_{\rm pref} 10000 .
\end{equation*}
Now consider the leftmost $10000$; it has to have a $10$, $100$, or $1000$ on its left:
\begin{align*}
\mu(1000\ 100) &= \begin{bmatrix}
14 & 5 \\
11 & 4
\end{bmatrix}
\geq
\mu(10\ 10000) = \begin{bmatrix}
14 & 3 \\
9 & 2
\end{bmatrix}
&&\Longrightarrow 1000\ 100 \geq_{\rm inf} 10\ 10000;
\\
\mu(1000\ 1000) &= \begin{bmatrix}
19 & 5 \\
15 & 4
\end{bmatrix}
\geq
\mu(100\ 10000) =
\begin{bmatrix}
19 & 4 \\
14 & 3
\end{bmatrix}
&&\Longrightarrow 1000\ 1000 \geq_{\rm inf} 100\ 10000;
\\
\mu(100\ 100\ 10) &= \begin{bmatrix}
26 & 15 \\
19 & 11
\end{bmatrix}
\geq
\mu(1000\ 10000) = \begin{bmatrix}
24 & 5 \\
19 & 4
\end{bmatrix}
&&\Longrightarrow 100\ 100\ 10 \geq_{\rm inf} 1000\ 10000 .
\end{align*}
Consequently, the substring $10000$ cannot appear in $h$.
\end{proof}
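Each elementwise comparison in this proof is mechanical; the following sketch re-verifies them (helper names ours):

```python
def mu(x):
    """mu(x) as a product of the 2x2 digit matrices mu(1), mu(0)."""
    M = {"1": ((1, 1), (0, 1)), "0": ((1, 0), (1, 1))}
    m = ((1, 0), (0, 1))
    for c in x:
        a, b = m, M[c]
        m = tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                        for j in range(2)) for i in range(2))
    return m

def ge(a, b):
    """Elementwise >=, the comparison behind the infix comparator."""
    return all(a[i][j] >= b[i][j] for i in range(2) for j in range(2))

# the three infix replacements used above to rule out 10000
assert ge(mu("1000100"), mu("1010000"))
assert ge(mu("10001000"), mu("10010000"))
assert ge(mu("10010010"), mu("100010000"))
# the prefix comparison: v . mu(x) is the top row of mu(x)
assert all(a >= b for a, b in zip(mu("1010")[0], mu("10000")[0]))
```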
\section{Limits on the number of 1000s and 100s}\label{limit1001000} At this point, we have established that a record-setter's binary representation consists of a concatenation of 10s, 100s, and 1000s.
The following theorem limits the appearance of 1000 to the beginning of a record-setter:
\begin{theorem} \label{begin1000} A record-setter can only have 1000 at its beginning, except in the case $1001000$.
\end{theorem}
\begin{proof}
It is simple to check this condition manually for strings of length $< 12$. Now consider a record-setter $h \in R$ with $|h| \geq 12$; such an $h$ must contain at least three 1s.
To prove $h$ can only have 1000 at its beginning, we use our comparators to show neither
\begin{itemize}
\item[(a)] \textcolor{blue}{101000}, nor
\item[(b)] \textcolor{blue}{1001000}, nor
\item[(c)] \textcolor{blue}{10001000}
\end{itemize}
can appear in $h$.
\begin{itemize}
\item[(a)] Consider the following comparison:
\begin{equation} \label{tenThousand}
\mu(100\ 100) = \begin{bmatrix}
11 & 4 \\
8 & 3
\end{bmatrix}
\geq
\mu(\textcolor{blue}{10\ 1000}) = \begin{bmatrix}
11 & 3 \\
7 & 2
\end{bmatrix}
\Longrightarrow 100\ 100 \geq_{\rm inf}\textcolor{blue}{10\ 1000}.
\end{equation}
We can infer that 101000 cannot appear in $h$.
\item[(b)] In this case, no string $x < \textcolor{blue}{1001000}$ satisfies $\mu(x) \geq \mu(\textcolor{blue}{1001000})$, so we cannot find a replacement right away. Therefore, we divide this into two cases:
\begin{itemize}
\item[(b1)] In this case, we consider \textcolor{blue}{1001000} in the middle or at the end, thus it must have a 10, 100, or 1000 immediately on its left:
\begin{align}
\label{hundredThousand}
\begin{alignedat}{3}
\mu( 100\ 100\ 100 ) =
\begin{bmatrix}
41 & 15 \\
30 & 11
\end{bmatrix}
&\geq \
&\mu( 10\ \textcolor{blue}{1001000} )
& = \begin{bmatrix}
41 & 11 \\
26 & 7
\end{bmatrix},\\
\mu( 1000\ 10\ 10\ 10 ) =
\begin{bmatrix}
60 & 37 \\
47 & 29
\end{bmatrix}
&\geq \
&\mu( 100\ \textcolor{blue}{1001000} )
& = \begin{bmatrix}
56 & 15 \\
41 & 11
\end{bmatrix},\\
\mu( 10000\ 10\ 10\ 10 ) =
\begin{bmatrix}
73 & 45 \\
60 & 37
\end{bmatrix}
&\geq \
&\mu( 1000\ \textcolor{blue}{1001000} )
& = \begin{bmatrix}
71 & 19 \\
56 & 15
\end{bmatrix}.
\end{alignedat}
\end{align}
\item[(b2)] The other case would be for \textcolor{blue}{1001000} to appear at the beginning:
\begin{align}
\label{thousandLeftHundred}
\begin{alignedat}{3}
\mu( 1000\ 110\ 10 )
= \begin{bmatrix}
35 & 22 \\
27 & 17
\end{bmatrix}
&\geq
&\ \mu( \textcolor{blue}{1001000}\ 10 )
= \begin{bmatrix}
34 & 19 \\
25 & 14
\end{bmatrix},\\
\mu( 1000\ 10\ 10\ 10 )
= \begin{bmatrix}
60 & 37 \\
47 & 29
\end{bmatrix}
&\geq
&\ \mu( \textcolor{blue}{1001000}\ 100 )
= \begin{bmatrix}
53 & 19 \\
39 & 14
\end{bmatrix},\\
\mu( 100\ 10\ 10\ 100 )
= \begin{bmatrix}
76 & 29 \\
55 & 21
\end{bmatrix}
&\geq
&\ \mu( \textcolor{blue}{1001000}\ 1000 )
= \begin{bmatrix}
72 & 19 \\
53 & 14
\end{bmatrix}.
\end{alignedat}
\end{align}
\end{itemize}
Therefore $h$ cannot contain \textcolor{blue}{1001000}.
\item[(c)] Just like the previous case, there is no immediate replacement for \textcolor{blue}{10001000}. We divide this into two cases:
\begin{itemize}
\item[(c1)] There is a prefix replacement for \textcolor{blue}{10001000}:
\begin{multline}
v \cdot \mu( 10\ 100\ 10 )
= \begin{bmatrix}
19 & 11
\end{bmatrix}
\geq
v \cdot \mu( \textcolor{blue}{10001000} )
= \begin{bmatrix}
19 & 5
\end{bmatrix}\\ \Longrightarrow 10\ 100\ 10 \geq_{\rm pref} \textcolor{blue}{10001000}.
\end{multline}
\item[(c2)] In case \textcolor{blue}{10001000} does not appear at the beginning, there must be a 10, 100, or a 1000 immediately on its left:
\begin{align}
\label{thousandThousand}
\begin{alignedat}{3}
\mu( 10\ 10\ 10\ 100 ) =
\begin{bmatrix}
55 & 21 \\
34 & 13
\end{bmatrix}
&\geq\
&\mu( 10\ \textcolor{blue}{10001000} )
& = \begin{bmatrix}
53 & 14 \\
34 & 9
\end{bmatrix},\\
\mu( 100\ 10\ 10\ 100 )
= \begin{bmatrix}
76 & 29 \\
55 & 21
\end{bmatrix}
&\geq\
&\mu( 100\ \textcolor{blue}{10001000} )
&= \begin{bmatrix}
72 & 19 \\
53 & 14
\end{bmatrix},\\
\text{and }\mu( 1000\ 10\ 10\ 100 )
= \begin{bmatrix}
97 & 37 \\
76 & 29
\end{bmatrix}
&\geq\
&\mu( 1000\ \textcolor{blue}{10001000} )
&= \begin{bmatrix}
91 & 24 \\
72 & 19
\end{bmatrix}.
\end{alignedat}
\end{align}
\end{itemize}
\end{itemize}
\end{proof}
Considering Theorem \ref{begin1000}, it is natural to expect that 1000s rarely appear in record-setters. In fact, they appear only once for each length. We will prove this result later, in Theorems \ref{even1000} and \ref{odd1000}; for now, assume our strings consist only of 10s and 100s.
The plan from here onward is to limit the number of 100s. The next set of theorems and lemmas concerns this limitation. To do this, we calculate the maximum $G$-values for strings with $0, 1, \ldots, 5$ 100s and compare them. Let $h$ be a string; we define the function $\delta(h)$ as the difference between the number of 0s and 1s occurring in $h$. For strings only containing 100s and 10s, the quantity $\delta(h)$ equals the number of 100s in $h$.
The following theorem was previously proved in \cite{Lucas:1878}:
\begin{theorem} \label{max-val-prime}
The maximum $G$-value for strings of length $2n$ $(s(t)$ for $ 2^{2n-1} \leq t < 2^{2n})$ is $F_{2n + 1}$, and it first
appears in the record-setter $(10)^n$.
The maximum $G$-value for strings of length $2n + 1$ $(s(t)$ for $ 2^{2n} \leq t < 2^{2n + 1})$ is $F_{2n + 2}$, and it first
appears in the record-setter $(10)^n0$.
\end{theorem}
The above theorem represents two sets of strings $(10)^+$ and $(10)^+0$, with $\delta$-values 0 and 1.
\begin{lemma} \label{replace10}
Consider a string $yz$, where $z$ begins with 1. If $|z| = 2n$ for $n \geq 1$, then $G(y (10)^{n}) \geq G(yz)$. If $|z| = 2n + 1$, then $G(y (10)^{n}0) \geq G(yz)$.
\end{lemma}
\begin{proof}
Consider the matrix $\mu((10)^n)w = \begin{bmatrix}
F_{2n + 1}\\
F_{2n}
\end{bmatrix}$. The suffix matrix for $z$ is $\mu(z)w = \begin{bmatrix}
G(z)\\
G(z')
\end{bmatrix}$. Since $F_{2n + 1} \geq G(z)$, and $|z'| < |z|$ (since $z$ begins with 1), the value of $G(z')$ cannot exceed $F_{2n}$. Therefore $(10)^n \geq_{\rm suff} z$.
For an odd length $2n + 1$, with the same approach, the matrix $\mu((10)^n0)w = \begin{bmatrix}
F_{2n + 2}\\
F_{2n + 1}
\end{bmatrix} \geq \mu(z)w = \begin{bmatrix}
G(z)\\
G(z')
\end{bmatrix}$, and $z$ can be replaced with $(10)^n0$.
\end{proof}
To continue our proofs, we need simple lemmas regarding the Fibonacci sequence:
\begin{lemma} \label{oddFibZero}
The sequence $F_1F_{2n}$, $F_3F_{2n - 2}$, \ldots, $F_{2n-1}F_2$ is strictly decreasing.
\end{lemma}
\begin{proof}
Consider an element of the sequence $F_{2i+1}F_{2n - 2i}$. There are two cases to consider, depending on the relative magnitude of $n$ and $2i$.
If $n \geq 2i + 1$, then
\begin{align*}
F_{2i + 1}F_{2n - 2i} &= F_{2i + 2}F_{2n - 2i} - F_{2i}F_{2n - 2i} = F^2_{n + 1} - F^2_{n - 2i - 1} - F^2_n + F^2_{n - 2i}\\ &= (F^2_{n+1} - F^2_{n}) + (F^2_{n - 2i} - F^2_{n - 2i - 1}).
\end{align*}
Notice that the first term,
namely $(F_{n+1}^2 -F_n^2)$ is a constant, while the second term
$F^2_{n - 2i} - F^2_{n - 2i - 1} = F_{n - 2i - 2}F_{n - 2i + 1}$ decreases
with an increasing $i$.
If $n \leq 2i$, then
\begin{equation*}
F_{2i + 1}F_{2n - 2i} = (F^2_{n+1} - F^2_{n}) + (F^2_{2i - n} - F^2_{2i + 1 - n}).
\end{equation*}
The non-constant term is $F^2_{2i - n} - F^2_{2i + 1 - n} = -F_{2i - n - 1}F_{2i + 2 - n}$, which is negative and still decreases.
\end{proof}
\begin{lemma} \label{evenMult}
The sequence $F_0F_{2n}$, $F_2F_{2n - 2}$, \ldots, $F_nF_n$ is strictly increasing.
\end{lemma}
\begin{proof}
For $0 \leq i \leq n/2$, we already know that $F_{2i}F_{2n - 2i} = F^2_n - F^2_{n - 2i}$. Since the sequence $F^2_n$, $F^2_{n - 2}$, \ldots\ decreases, the lemma holds.
\end{proof}
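Both monotonicity lemmas can be spot-checked numerically (a sketch; `fib` is ours, with $F_0 = 0$, $F_1 = 1$):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 30):
    # Lemma oddFibZero: F_1 F_{2n}, F_3 F_{2n-2}, ..., F_{2n-1} F_2 strictly decreases
    odd = [fib(2 * i + 1) * fib(2 * n - 2 * i) for i in range(n)]
    assert all(x > y for x, y in zip(odd, odd[1:]))
    # Lemma evenMult: F_0 F_{2n}, F_2 F_{2n-2}, ... strictly increases
    even = [fib(2 * i) * fib(2 * n - 2 * i) for i in range(n // 2 + 1)]
    assert all(x < y for x, y in zip(even, even[1:]))
```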
In the next lemma, we calculate the maximum $G$-value obtained by a string $x$ with $\delta(x) = 2$.
\begin{lemma} [Strings with two 100s] \label{two100s}
The maximum $G$-value for strings with two 100s occurs at $(10)^n0(10)^{n-1}0$ for lengths $l = 4n$, and
at $(10)^{n}0(10)^{n}0$ for lengths $l = 4n + 2$, where $l \geq 6$.
\end{lemma}
\begin{proof}
To simplify the statements, we write $\mu(10) = \mu(1)\mu(0)$ as $\mu_{10}$, and $\mu(0)$ as $I_2 + \gamma_0$, where $$I_2 = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \text{ and } \gamma_0 = \begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix}.$$
Consider the string $(10)^i0 (10)^j0(10)^k$, where $i,j \geq 1$ and $k \geq 0$:
\begin{align*}
G((10)^i0(10)^j0(10)^k) = v\mu^i_{10}\mu(0)\mu^j_{10}\mu(0)\mu^k_{10}w
= v\mu^i_{10}(I + \gamma_0)\mu^j_{10}(I + \gamma_0)\mu^{k}_{10}w\\
= v\mu^{i + j + k}_{10}w +
v\mu^i_{10}\gamma_0\mu^{j + k}_{10}w +
v\mu^{i + j}_{10}\gamma_0\mu^k_{10}w +
v\mu^i_{10}\gamma_0\mu^j_{10}\gamma_0\mu^k_{10}w.
\end{align*}
We now evaluate each summand in terms of Fibonacci numbers.
\begin{align*}
v\mu^{i + j + k}_{10}w &= v\begin{bmatrix}
F_{2i + 2j + 2k + 1} & F_{2i + 2j + 2k}\\
F_{2i + 2j + 2k} & F_{2i + 2j + 2k - 1}
\end{bmatrix}w = F_{2i + 2j + 2k + 1}, \\
v\mu^i_{10}\gamma_0\mu^{j + k}_{10}w &= \begin{bmatrix}
F_{2i + 1} & F_{2i}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2j + 2k + 1}\\
F_{2j + 2k}
\end{bmatrix} = F_{2i}F_{2j + 2k + 1},
\\
v\mu^{i + j}_{10}\gamma_0\mu^k_{10}w &= \begin{bmatrix}
F_{2i + 2j + 1} & F_{2i + 2j}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2k + 1}\\
F_{2k}
\end{bmatrix} = F_{2i+2j}F_{2k + 1},\\
v\mu^i_{10}\gamma_0\mu^j_{10}\gamma_0\mu^k_{10}w &=
\begin{bmatrix}
F_{2i + 1} & F_{2i}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2j + 1} & F_{2j}\\
F_{2j} & F_{2j - 1}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2k + 1}\\
F_{2k}
\end{bmatrix} = F_{2i}F_{2j}F_{2k + 1} .
\end{align*}
For a fixed $i$, according to Lemma \ref{oddFibZero}, the sum above is maximized by setting $k = 0$ and replacing $j$ with $j + k$. The expression can then be written as
\begin{equation*}
G((10)^i0(10)^j0) = v\mu^i_{10}I_2\mu^j_{10}\mu(0)w + v\mu^i_{10}\gamma_0\mu^j_{10}\mu(0)w = F_{2i + 2j + 2} + F_{2i}F_{2j + 2}.
\end{equation*}
In case $l = 4n = 2i + 2j + 2$, Lemma \ref{evenMult} shows the maximum occurs at $i = n$, $j = n-1$, and the $G$-value is $F_{4n} + F^2_{2n}$. In case $l = 4n + 2$, the maximum occurs at $i = j = n$, and the $G$-value is $F_{4n + 2} + F_{2n}F_{2n + 2} = F_{4n + 2} + F^2_{2n + 1} - 1$. Thus the lemma holds. In general, for any even $l$, the maximum $G$-value is at most $F_{l} + F^2_{l/2}$.
\end{proof}
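The closed forms above can be checked against a brute-force maximum over the family $(10)^i0(10)^j0(10)^k$ (an illustrative sketch; names ours):

```python
def G(x):
    """G via the matrix product of Lemma G_linearization."""
    M = {"1": ((1, 1), (0, 1)), "0": ((1, 0), (1, 1))}
    m = ((1, 0), (0, 1))
    for c in x:
        a, b = m, M[c]
        m = tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                        for j in range(2)) for i in range(2))
    return m[0][0]

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def max_two_100s(l):
    """Max G over strings (10)^i 0 (10)^j 0 (10)^k of length l, i, j >= 1, k >= 0."""
    best = 0
    for i in range(1, l):
        for j in range(1, l):
            rest = l - 2 * i - 2 * j - 2   # remaining length 2k
            if rest >= 0 and rest % 2 == 0:
                s = "10" * i + "0" + "10" * j + "0" + "10" * (rest // 2)
                best = max(best, G(s))
    return best

for n in range(2, 8):
    assert max_two_100s(4 * n) == fib(4 * n) + fib(2 * n) ** 2
    assert max_two_100s(4 * n + 2) == fib(4 * n + 2) + fib(2 * n + 1) ** 2 - 1
```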
\begin{lemma} \label{minValSingle100}
Let $x = (10)^i0(10)^{n - i}$ be a string of length $2n + 1$ for $n \geq 1$ and $i\geq 1$ containing a single 100. Then, the minimum $G$-value for $x$ is $F_{2n + 1} + F_{2n - 1}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
G(x) = G((10)^i0(10)^{n - i}) = v \cdot \mu^i_{10} \cdot (I_2 + \gamma_0) \cdot \mu^{n - i}_{10} \cdot w = F_{2n + 1} + F_{2i}F_{2n-2i+1}, \\
\text{which, by Lemma~\ref{oddFibZero}, is minimized at } i = 1, \text{ giving } F_{2n + 1} + F_{2n - 1}.
\end{align*}
\end{proof}
\begin{theorem} \label{three100s}
For two strings $x$ and $y$ of the same length, if $\delta(x) = 3$ and $\delta(y) = 1$, then $G(x) < G(y)$.
\end{theorem}
\begin{proof}
Consider the two strings of the same length below:
\begin{center}
\begin{tabular}{ll}
$x = (10)^i100$ \fbox{$(10)^j0(10)^{k-1-j}0$}
\\
$y= 100(10)^i$ \fbox{$(10)^{k}$} .
\end{tabular}
\end{center}
We must prove for $i \geq 0$, $j \geq 1$, and $k - 1 - j \geq 1$, the inequality $G(x) \leq G(y)$ holds, where $y$ has the minimum $G$-value among the strings with a single 100 (see Lemma \ref{minValSingle100}).
\begin{align*}
G(x) &= G((10)^i100)G((10)^j0(10)^{k-1-j}0) +
G((10)^i0)G(1(10)^{j-1}0(10)^{k-1-j}0)\\
&\leq F_{2i + 4} (F^2_k + F_{2k}) + F_{2i + 2}F_{2k} =
F_{2i + 4} \left(\dfrac{2F_{2k + 1} - F_{2k} - 2}{5} + F_{2k} \right) + F_{2i + 2}F_{2k} .\\
G(y) &= G(100(10)^i)F_{2k + 1} +
G(100(10)^{i-1}0)F_{2k} \\
&= (F_{2i+3} + F_{2i + 1})F_{2k + 1} + (F_{2i} + F_{2i + 2})F_{2k} \\
&= (F_{2i+4} - F_{2i})F_{2k + 1} + (F_{2i} + F_{2i + 2})F_{2k}.
\end{align*}
We now show $G(y) - G(x) \geq 0$:
\begin{multline}
G(y) - G(x) \geq (F_{2i+4} - F_{2i})F_{2k + 1} + (F_{2i} + \cancel{F_{2i + 2}})F_{2k} - F_{2i + 4} \left(\dfrac{2F_{2k + 1} + 4F_{2k} - 2}{5} \right) - \cancel{F_{2i + 2}F_{2k}} \\
\begin{aligned}
\xRightarrow{\times 5}
5F_{2i + 4}F_{2k + 1} - 5F_{2i}F_{2k - 1}
- 2F_{2i + 4}F_{2k + 1} - 4F_{2i + 4}F_{2k} + 2F_{2i + 4} &\\=
F_{2i + 4}(3F_{2k + 1} - 4F_{2k} + 2) - 5F_{2i}F_{2k - 1} &\\=
F_{2i + 4}(F_{2k - 1} + F_{2k - 3} + 2) - 5F_{2i}F_{2k - 1} &\\=
F_{2i + 4}(F_{2k - 3} + 2) + F_{2k - 1}(F_{2i+4} - 5F_{2i}) &\\=
F_{2i + 4}(F_{2k - 3} + 2) + 3F_{2i - 1}F_{2k - 1} &\geq 0.
\end{aligned}
\end{multline}
\end{proof}
Theorem \ref{three100s} can be generalized to any odd number of occurrences of 100. To do this, replace everything to the right of the third 100 occurring in $x$ with 10s, using Lemma~\ref{replace10}.
\begin{lemma} \label{replaceWith10010}
Let $i \geq 1$, and let $x$ be a string with $|x| = 2i + 3$ and $\delta(x) = 3$.
Then $y = 100(10)^i \geq_{\rm suff} x$.
\end{lemma}
\begin{proof}
We have already shown that $G(y) > G(x)$ (Theorem~\ref{three100s}). Also, the inequality $G(y') > G(x')$ holds since $y' = (10)^{i + 1}$, and $G(y')$ is the maximum possible $G$-value for strings of length $2i + 2$.
\end{proof}
\begin{theorem} \label{noFour100s}
Let $n \geq 4$. If $|x| = 2n + 4$ and $\delta(x) = 4$, then $x \notin R$.
\end{theorem}
\begin{proof}
Consider three cases where $x$ begins with a 10, a 10010, or a 100100.
If $x$ begins with $10$, then due to Lemma \ref{replaceWith10010} we can replace everything to the right of the first 100 with $100(10)^*$, obtaining a string $y$ with a greater $G$-value. For example, $x = $ 10 10 \textcolor{blue}{100} 10 100 100 100 becomes $y = $ 10 10 \textcolor{blue}{100} \textcolor{blue}{100} 10 10 10 10.
Then consider the strings $a = 10\ 100\ 100\ (10)^i$ and $b = 100\ 10\ (10)^i\ 100$:
\begin{align*}
\mu(a)w &= \begin{bmatrix}
30 & 11 \\
19 & 7
\end{bmatrix}
\begin{bmatrix}
F_{2i + 1}\\
F_{2i}
\end{bmatrix} =
\begin{bmatrix}
30F_{2i + 1} + 11F_{2i}\\
19F_{2i + 1} + 7F_{2i}
\end{bmatrix} =
\begin{bmatrix}
11F_{2i + 3} + 8F_{2i + 1}\\
7F_{2i + 3} + 5F_{2i + 1}
\end{bmatrix}\\
\mu(b)w &= \begin{bmatrix}
7 & 4 \\
5 & 3
\end{bmatrix}
\begin{bmatrix}
F_{2i + 4}\\
F_{2i + 3}
\end{bmatrix} =
\begin{bmatrix}
7F_{2i + 4} + 4F_{2i + 3}\\
5F_{2i + 4} + 3F_{2i + 3}
\end{bmatrix},
\end{align*}
so $b \geq_{\rm suff} a$ for $i \geq 1$. Therefore, by replacing suffix $a$ with $b$, we get a smaller string with a greater $G$-value. So $x \notin R$.
Now consider the case where $x$ begins with 10010. Replace everything to the right of the first 100 with $100(10)^{n - 1}$, obtaining $100\ 100\ (10)^{n-1}$. This replacement does not decrease the $G$-value, and it yields a smaller string.
The only remaining case has $x$ with two 100s at the beginning. We compare $x$ with a string beginning with 1000, which is smaller. Let $x_2$ represent the string $x$'s suffix of length $2n - 2$, with two 100s. The upper bound on $G(x_2)$ and $G(10x_2)$ is achieved using Lemma \ref{two100s}:
\begin{equation*}
G(x) = G(1001\ 00 x_2) = G(1001)G(00x_2) + G(1000)G(10x_2)
\leq 3(F_{2n-2} + F^2_{n - 1}) + 4(F_{2n} + F^2_n) .
\end{equation*}
Rewriting the $F^2$ terms as first-order Fibonacci numbers and multiplying by 5 to clear the $\frac{1}{5}$ factor, we obtain
\begin{equation*}
3(2F_{2n -1} + 4F_{2n - 2} - 2)+ 4(2F_{2n + 1} + 4F_{2n} + 2) = 8F_{2n + 2} + 14F_{2n} + 6F_{2n - 2} + 2
\end{equation*}
We now compare this value with $5G(1000\ (10)^n)$:
\begin{align*}
5G(1000\ (10)^n) = 20F_{2n + 1} + 5F_{2n}\\
20F_{2n + 1} + 5F_{2n} &\geq 8F_{2n + 2} + 14F_{2n} + 6F_{2n - 2} + 2 \\ \rightarrow
12F_{2n + 1} &\geq 17F_{2n} + 6F_{2n - 2} + 2\\ \rightarrow
12F_{2n - 1} &\geq 5F_{2n} + 6F_{2n - 2} + 2
\\ \rightarrow
7F_{2n - 1} &\geq 11F_{2n - 2} + 2 \\ \rightarrow
7F_{2n - 3} &\geq 4F_{2n - 2} + 2 \\ \rightarrow
3F_{2n - 3} &\geq 4F_{2n - 4} + 2 \\ \rightarrow
2F_{2n - 5} &\geq F_{2n - 6} + 2,
\end{align*}
which holds for $n \geq 4$. Therefore we cannot have four 100s in a record-setter. For six or more 100s, the same proof applies after replacing everything to the right of the fourth 100 with 10s, using Lemma~\ref{replace10}.
\end{proof}
\begin{theorem} \label{even1000}
For even lengths $2n + 4$ with $n \geq 0$, only a single record-setter $h$ beginning with 1000 exists. String $h$ is also the first record-setter of length $2n + 4$.
\end{theorem}
\begin{proof}
The only such record-setter is $h = 1000\ (10)^n$. Let $x$ be a string of length $|x| = 2n$ containing at least one 100 (which requires $n \geq 3$). Using Lemma \ref{two100s}:
\begin{equation*}
5G(1000\ x) \leq 4(5F^2_n + 5F_{2n}) + 5F_{2n}
\leq 8F_{2n + 1} + 21F_{2n} + 8 \leq 5F_{2n + 4}.
\end{equation*}
The above equation holds for $n \geq 5$. For $n = 4$:
\begin{equation*}
G(1000\ x) \leq 4(F^2_4 + F_{8}) + F_{8} = 141 \leq F_{12} = 144.
\end{equation*}
For $n = 3$:
\begin{equation*}
G(1000\ 100\ 100) = 52 \leq G(101010100) = 55.
\end{equation*}
Hence, the $G$-value cannot exceed $F_{2n + 4}$, which the smaller string $(10)^{n + 1}0$ already attains.
Let us calculate $G(h)$:
\begin{align*}
G(1000\ (10)^{n}) &= 4F_{2n + 1} + F_{2n} = F_{2n + 2} + 3F_{2n + 1}\\
&= F_{2n + 3} + 2F_{2n + 1} > F_{2n + 3} + F_{2n + 2} = F_{2n + 4} .
\end{align*}
Hence, the string $1000\ (10)^{n}$ is the first record-setter of length $2n + 4$ with a $G$-value greater than $F_{2n + 4}$, which is the maximum (Theorem~\ref{max-val-prime}) generated by the strings of length $2n + 3$. This makes $h$ the first record-setter of length $2n + 4$.
\end{proof}
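The specific $G$-values quoted above (52, 55, and the closed form $4F_{2n+1}+F_{2n}$) can be reproduced by computer. The helper below is our reconstruction of $G$ as a product of $2\times 2$ matrices over the bits of the string; the matrices $\mu(1) = \left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ and $\mu(0) = \left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$ are an assumption on our part, chosen to be consistent with identities such as $v\mu^{n}_{10}w = F_{2n+1}$ used elsewhere in the paper:

```python
# Reconstruction of G (our assumption, consistent with the paper's identities):
# scan the bits left to right, tracking the row vector (a, b) = v * (product of mu's),
# with v = (1, 0); the answer is (a, b) * w with w = (1, 0)^T, i.e., the final a.
def G(x: str) -> int:
    a, b = 1, 0
    for bit in x:
        if bit == '1':
            b += a        # (a, b) * mu(1) = (a, a + b)
        else:
            a += b        # (a, b) * mu(0) = (a + b, b)
    return a

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

# values quoted in the proof above
assert G('1000100100') == 52
assert G('101010100') == 55
# G(1000 (10)^n) = 4 F_{2n+1} + F_{2n} > F_{2n+4}
for n in range(1, 40):
    g = G('1000' + '10' * n)
    assert g == 4 * fib(2*n + 1) + fib(2*n)
    assert g > fib(2*n + 4)
```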
\begin{theorem}
Let $x$ be a string with length $|x| = 2n + 9$, for $n \geq 3$, and $\delta(x) \geq 5$. Then $x \notin R$.
\end{theorem}
\begin{proof}
Our proof produces smaller strings with greater $G$-values based only on the positions of the first five 100s. For the cases where $\delta(x) \geq 7$, first replace everything to the right of the fifth 100 with 10s (Lemma~\ref{replace10}). We may therefore assume $\delta(x) = 5$, and write $x = (10)^i0\ (10)^j0\ (10)^k0\ (10)^p0\ (10)^q0\ (10)^r$,
with $i,j,k,p,q \geq 1$ and $r \geq 0$.
First, we prove that if the conditions $i = 1$, $j = 1$, $k = 1$ do not all hold, then $x \notin R$.
\begin{itemize}
\item[(a)] If $i>1$, then the smaller string $100(10)^{n + 3}$ has a greater $G$-value, as proved in Lemma \ref{three100s}.
\item[(b)] If $j > 1$, using the approach as in Theorem~\ref{noFour100s}, we can obtain a smaller string with a greater $G$-value.
\item[(c)] If $k > 1$, using Lemma~\ref{replaceWith10010}, by replacing $(10)^k0\ (10)^p0\ (10)^q0\ (10)^r$ with $100\ (10)^{n + 1 - j}$, we obtain $y$ with $G(y) > G(x)$.
\end{itemize}
Now consider the case where $i = 1$, $j = 1$, $k = 1$. Let $x_2$, with $|x_2| = 2n$, be a string with two 100s:
\begin{align*}
&G(100100100\ x_2) \leq 41(F^2_n + F_{2n}) + 15F_{2n} \leq 16.4F_{2n + 1} + 47.8F_{2n} + 16.4\\
&G(100010101\ 0(10)^{n-1}0) = 23F_{2n} + 37F_{2n + 1}\\
&23F_{2n} + 37F_{2n + 1} - 16.4F_{2n + 1} - 47.8F_{2n} - 16.4 \geq 20F_{2n + 1} -25F_{2n} - 17 \geq 0
\end{align*}
The last inequality holds for $n \geq 2$.
\end{proof}
\begin{theorem} \label{odd1000}
For odd lengths $2n + 5$ with $n \geq 1$, only a single record-setter $h$ beginning with 1000 exists. String $h$ is also the first record-setter of length $2n+5$.
\end{theorem}
\begin{proof}
The first record-setter is $h = 1000\ (10)^n0$. Consider another string $1000x$. If $x$ has three or more occurrences of 100, then Lemma \ref{three100s} shows that $1000\ 100\ (10)^{n - 1}$ has a greater $G$-value. Therefore it is enough to consider strings $x$ with a single 100. Suppose $1000x = 1000\ (10)^{n-i}0(10)^i$, with $i \geq 1$:
\begin{equation*}
G(1000\ (10)^{n-i}0(10)^i) = 4G((10)^{n-i}0(10)^i) +
G(1(10)^{n - i - 1}0(10)^i) .
\end{equation*}
We now evaluate $G((10)^{n-i}0(10)^i)$:
\begin{align*}
v\mu^{n}_{10}w &= v\begin{bmatrix}
F_{2n + 1} & F_{2n}\\
F_{2n} & F_{2n - 1}
\end{bmatrix}w = F_{2n + 1} \\
v\mu^{n - i}_{10}\gamma_0\mu^{i}_{10}w &= \begin{bmatrix}
F_{2n - 2i + 1} & F_{2n - 2i}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2i + 1}\\
F_{2i}
\end{bmatrix} = F_{2n - 2i}F_{2i + 1}\\
\Longrightarrow\ G((10)^{n-i}0(10)^i) &=
v\mu^{n - i}_{10}(I_2 + \gamma_0)\mu^{i}_{10}w = F_{2n + 1} + F_{2n - 2i}F_{2i + 1}.
\end{align*}
Next, we evaluate $G(1(10)^{n - i - 1}0(10)^i)$:
\begin{align*}
v\mu(1)\mu^{n - 1}_{10}w &= v\begin{bmatrix}
F_{2n} & F_{2n - 1}\\
F_{2n - 2} & F_{2n - 3}
\end{bmatrix}w = F_{2n} \\
v\mu(1)\mu^{n - i - 1}_{10}\gamma_0\mu^{i}_{10}w &= \begin{bmatrix}
F_{2n - 2i} & F_{2n - 2i - 1}
\end{bmatrix}
\begin{bmatrix}
0 & 0\\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{2i + 1}\\
F_{2i}
\end{bmatrix} = F_{2n - 2i - 1}F_{2i + 1}\\
\Longrightarrow\ G(1(10)^{n - i - 1}0(10)^i) &=
v\mu(1)\mu^{n - i - 1}_{10}(I_2 + \gamma_0)\mu^{i}_{10}w = F_{2n} + F_{2n - 2i - 1}F_{2i + 1}.
\end{align*}
We can now determine $G(1000\ (10)^{n-i}0(10)^i)$:
\begin{align*}
G(1000\ (10)^{n-i}0(10)^i) = 4F_{2n + 1} + 4F_{2n - 2i}F_{2i + 1} + F_{2n} + F_{2n - 2i - 1}F_{2i + 1}\\ =
4F_{2n + 1} + F_{2n} + F_{2i + 1}(4F_{2n - 2i} + F_{2n - 2i - 1})\\ =
4F_{2n + 1} + F_{2n}+ F_{2i + 1}(2F_{2n - 2i} + F_{2n - 2i + 2}).
\end{align*}
To maximize this, we take $i$ as small as possible, namely $i = 1$:
\begin{equation*}
4F_{2n + 1} + F_{2n}+ F_{3}(2F_{2n - 2} + F_{2n}) = 4F_{2n + 1} + 3F_{2n} + 4F_{2n - 2} < F_{2n + 5},
\end{equation*}
which is less than $G((10)^{n + 2}) = F_{2n + 5}$. For $h$ we have
\begin{align*}
G(1000\ (10)^{n}0) = 4G((10)^{n}0) +
G(1(10)^{n - 1}0) = 4F_{2n + 2} + F_{2n + 1} \\= F_{2n + 3} + 3F_{2n + 2} = F_{2n + 4} + 2F_{2n + 2} > F_{2n + 4} + F_{2n + 3} = F_{2n + 5}.
\end{align*}
Therefore, the string $1000\ (10)^n0$ is the only record-setter beginning with 1000. Also, since $h$ begins with 1000, it precedes every string of length $2n+5$ beginning with 100 or 10, so it is the first record-setter of length $2n + 5$.
\end{proof}
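The two closed forms derived above, together with the final comparison against $F_{2n+5}$, can be checked numerically. As before, the helper $G$ is our reconstruction of the matrix-product computation (an assumption on our part, consistent with $v\mu^{n}_{10}w = F_{2n+1}$):

```python
# Numerical check (ours) of the closed forms in the proof above.
def G(x: str) -> int:
    a, b = 1, 0                       # row vector v = (1, 0)
    for bit in x:
        if bit == '1':
            b += a                    # multiply by mu(1) = [[1,1],[0,1]]
        else:
            a += b                    # multiply by mu(0) = [[1,0],[1,1]]
    return a

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

F = fib
for n in range(2, 25):
    for i in range(1, n):
        # G((10)^{n-i} 0 (10)^i) = F_{2n+1} + F_{2n-2i} F_{2i+1}
        assert G('10' * (n - i) + '0' + '10' * i) == \
               F(2*n + 1) + F(2*n - 2*i) * F(2*i + 1)
        # G(1 (10)^{n-i-1} 0 (10)^i) = F_{2n} + F_{2n-2i-1} F_{2i+1}
        assert G('1' + '10' * (n - i - 1) + '0' + '10' * i) == \
               F(2*n) + F(2*n - 2*i - 1) * F(2*i + 1)
    # G(1000 (10)^n 0) = 4 F_{2n+2} + F_{2n+1} > F_{2n+5}
    g = G('1000' + '10' * n + '0')
    assert g == 4 * F(2*n + 2) + F(2*n + 1)
    assert g > F(2*n + 5)
```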
\section{Record-setters of even length} \label{final_even}
At this point, we have excluded all possibilities of a record-setter $x$ with $\delta(x) > 2$: if the string $x$ contains 1000, then by Theorem~\ref{even1000} the only such record-setter has a $\delta$-value of 2, and $x$ cannot have more than two 100s, by Theorem~\ref{noFour100s}. We now determine the set of record-setters with $\delta(x) = 2$. If $\delta(x) = 0$, then $x$ has the maximum $G$-value, as shown in Theorem \ref{max-val-prime}.
\begin{theorem} \label{evenBeginEnd}
Let $x = (10)^i0 (10)^j0 (10)^k$ be a string with two 100s, that is $i,j \geq 1$ and $k \geq 0$. If $i > 1$ and $k > 0$, then $x\notin R$.
\end{theorem}
\begin{proof}
For a fixed $i$, to maximize the $G$-value, we must minimize $k$ and add its value to $j$.
\begin{align*}
G((10)^i0 (10)^j0 (10)) = F_{2i + 2j + 3} +
F_{2i + 2j}F_{3} +
F_{2i}F_{2j + 3} +
F_{2i}F_{2j}F_{3}\\
= F_{2i + 2j + 3} + 2F_{2i + 2j} + F_{2i}F_{2j + 3} + 2F_{2i}F_{2j}.
\end{align*}
We now compare this value with a smaller string:
$$
G((10)^{i-1}0 (10)^{j+2}0) = F_{2i + 2j + 3} +
F_{2i + 2j + 2} +
F_{2i - 2}F_{2j + 5} +
F_{2i - 2}F_{2j + 4}$$
and
\begin{align}\label{equationIJK}
\begin{split}
&G((10)^{i-1}0 (10)^{j+2}0) - G((10)^i0 (10)^j0 (10)) \\
&= \cancel{F_{2i + 2j + 3}} +
F_{2i + 2j + 2} +
F_{2i - 2}F_{2j + 6} -
\cancel{F_{2i + 2j + 3}} - 2F_{2i + 2j} - F_{2i}(F_{2j + 3}
+ 2F_{2j}) \\
&=F_{2i + 2j - 1} + F_{2i - 2}F_{2j + 6} - F_{2i}(F_{2j + 3}
+ 2F_{2j}) \\
&=F_{2i + 2j - 1} + F_{2i - 2}(F_{2j + 6} - F_{2j + 3}
- 2F_{2j}) - F_{2i - 1}(F_{2j + 3} + 2F_{2j}) \\
&=F_{2i + 2j - 1} + 2F_{2i - 2}(F_{2j + 3} + F_{2j + 1}) - F_{2i - 1}(F_{2j + 3} + 2F_{2j}) \\
&=F_{2i + 2j - 1} + F_{2j + 3}F_{2i - 4} + 2F_{2i - 2}F_{2j + 1} - 2F_{2i - 1}F_{2j} \geq 0.
\end{split}
\end{align}
Since $i \geq 2$, inequality \eqref{equationIJK} holds, yielding a smaller string whose $G$-value is at least as large; hence $x \notin R$.
\end{proof}
\begin{theorem}
The record-setters of even length $2n + 2$,
for $n \geq 5$, are as follows:
$$\begin{cases}
1000\ (10)^{n - 1},\\
100\ (10)^{i+1}0\ (10)^{n - i - 2}, &\text{ for } 0 \leq i \leq n - 2, \\
(10)^i0\ (10)^{n - i}0, & \text{ for } 1 < i \leq \lceil\frac{n}{2}\rceil ,\\
(10)^{n + 1}.
\end{cases}$$
\label{eventhm}
\end{theorem}
\begin{proof}
According to Theorem \ref{even1000}, the list of record-setters for length $2n + 2$ starts with $1000 (10)^{n - 1}$.
Next, we consider the strings beginning with 100; by the same argument as in Theorem \ref{odd1000}, the $G$-value increases as the second 100 moves toward the end.
Theorem \ref{evenBeginEnd} proved that if a record-setter does not begin with a 100, it must end in one. In Lemmas \ref{two100s} and \ref{evenMult} we showed that $G((10)^i0 (10)^{n - i}0) = F_{2n + 2} + F_{2i}F_{2n - 2i + 2}$, which increases for $i \leq \lceil\frac{n}{2}\rceil$.
Finally, in Theorem \ref{max-val-prime} it was shown that the smallest string with the maximum $G$-value for strings of length $2n+2$ is $(10)^{n + 1}$, with $G((10)^{n + 1}) = F_{2n + 3}$.
\end{proof}
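For a concrete length, the classification above can be confirmed by exhaustive search. The sketch below (ours, not part of the paper) uses the reconstructed matrix-product $G$ (an assumption consistent with the paper's identities), takes ``record-setter'' to mean a string whose $G$-value strictly exceeds that of every numerically smaller string, and compares the record-setters of length $2n+2 = 12$ (i.e., $n = 5$) with the list in the theorem:

```python
# Exhaustive check (ours) of the length-12 case of the theorem above.
def G(x: str) -> int:
    a, b = 1, 0
    for bit in x:
        if bit == '1':
            b += a
        else:
            a += b
    return a

def record_setters(max_len):
    """Strings, in increasing numeric order, whose G-value beats all smaller strings."""
    best, rec = -1, []
    for m in range(1, 2 ** max_len):
        x = bin(m)[2:]
        g = G(x)
        if g > best:
            best = g
            rec.append(x)
    return rec

n = 5
L = 2 * n + 2
expected = (['1000' + '10' * (n - 1)]
            + ['100' + '10' * (i + 1) + '0' + '10' * (n - i - 2) for i in range(n - 1)]
            + ['10' * i + '0' + '10' * (n - i) + '0' for i in range(2, (n + 1) // 2 + 1)]
            + ['10' * (n + 1)])
got = [x for x in record_setters(L) if len(x) == L]
assert got == expected
```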
\section{Record-setters of odd length} \label{final_odd}
For strings $x$ of odd length, we have established that if $\delta(x) > 3$, then $x \notin R$. We now look for the final set of record-setters, among the remaining possibilities, with either $\delta(x) = 3$ or $\delta(x) = 1$.
We first consider the cases with $\delta$-value three.
\begin{theorem}\label{breakTwoEven}
Let $x = (10)^c0 (10)^i0 (10)^j0 (10)^k$ be a string with three 100s and $|x|= 2n + 3$, where $c, i, j \geq 1$ and $k \geq 0$.
If $c > 1$, or $i > 1$ and $k > 0$, then $x \notin R$.
\end{theorem}
\begin{proof}
As stated in Theorem \ref{three100s}, the $G$-value of $100(10)^{n}$ is greater than that of $x$; for $c > 1$ the string $100(10)^{n}$ is also numerically smaller than $x$, so if $c > 1$, then $x \notin R$.
For $c = 1$, we can write $G(x)$ as follows:
\begin{equation} \label{breakingOdd}
G(1\ 00 (10)^i0 (10)^j0 (10)^k ) = G((10)^i0 (10)^j0 (10)^k)
+ G((10)^{i+1}0 (10)^j0 (10)^k),
\end{equation}
which is the sum of $G$-values of two strings with even length and two 100s.
Now, in case $i>1$ and $k>0$, according to Theorem \ref{evenMult}, by decreasing $i$ by one and making $k = 0$, we obtain a greater $G$-value. Hence, the theorem holds.
\end{proof}
\begin{theorem} \label{begin100100}
$100100\ (10)^n0$ and $100100\ (10)^{n-1}0\ 10$ are the only record-setters beginning with $100100$.
\end{theorem}
\begin{proof}
The first record-setter begins with a 1000:
\begin{equation*}
G(1000\ 10\ (10)^n0) = 9F_{2n + 2} + 5F_{2n + 1}.
\end{equation*}
Now define $x_i = 100\ 100\ (10)^{i}0(10)^{n-i}$ for $1 \leq i \leq n$. We get
\begin{align*}
G(x_i) &= G(100\ 100\ (10)^{i}0(10)^{n-i})\\
&= 11(F_{2n + 1} + F_{2i}F_{2n - 2i + 1}) + 4(F_{2n} + F_{2i - 1}F_{2n - 2i + 1}) \\
&= 11F_{2n + 1} + 4F_{2n} + F_{2n - 2i + 1}(11F_{2i} + 4F_{2i - 1})\\
&= 11F_{2n + 1} + 4F_{2n} + F_{2n - 2i + 1}(4F_{2i + 2} + 3F_{2i}).
\end{align*}
As $i$ increases, the value of $G(x_i)$ also increases. Suppose $i = n - 2$. Then
\begin{align*}
G(x_{n - 2}) = G(100\ 100\ (10)^{n-2}0(10)^{2}) =
11F_{2n + 1} + 4F_{2n} + 5(4F_{2n - 2} + 3F_{2n - 4}).
\end{align*}
This value is smaller than $9F_{2n + 2} + 5F_{2n + 1}$. Therefore, if $i < n - 1$, then $x_i \notin R$.
If $i = n - 1$, then for the string $x_{n - 1}$
we have
\begin{align*}
G(x_{n - 1}) &= G(100\ 100\ (10)^{n-1}0(10))
= 11F_{2n + 1} + 4F_{2n} + 2(4F_{2n} + 3F_{2n - 2})\\
&= 11F_{2n + 1} + 12F_{2n} + 6F_{2n - 2} >
9F_{2n + 2} + 5F_{2n + 1}.
\end{align*}
Also, for $i = n$, we have $G(x_n) > G(x_{n - 1})$. Therefore, the first two record-setters after $1000\ (10)^{n + 1}0$ are $100100\ (10)^{n - 1} 0 10$ followed by $100100\ (10)^n 0$.
\end{proof}
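The formulas used in this proof can be spot-checked by computer. With the reconstructed matrix-product $G$ (our assumption, as before), the sketch below verifies $G(1000\,10\,(10)^n0) = 9F_{2n+2}+5F_{2n+1}$, the closed form for $G(x_i)$, the monotonicity in $i$, and the position of $G(1000\,10\,(10)^n0)$ between $G(x_{n-2})$ and $G(x_{n-1})$:

```python
# Numerical spot-check (ours) of the computations in the proof above.
def G(x: str) -> int:
    a, b = 1, 0
    for bit in x:
        if bit == '1':
            b += a
        else:
            a += b
    return a

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

F = fib
for n in range(3, 20):
    h = '1000' + '10' * (n + 1) + '0'          # 1000 10 (10)^n 0
    assert G(h) == 9 * F(2*n + 2) + 5 * F(2*n + 1)
    gx = [G('100100' + '10' * i + '0' + '10' * (n - i)) for i in range(1, n + 1)]
    for i in range(1, n + 1):
        assert gx[i - 1] == 11 * F(2*n + 1) + 4 * F(2*n) \
               + F(2*n - 2*i + 1) * (4 * F(2*i + 2) + 3 * F(2*i))
    assert all(gx[t] < gx[t + 1] for t in range(n - 1))   # G(x_i) increases with i
    assert gx[n - 3] < G(h) < gx[n - 2]                   # x_{n-1} is the next record
```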
Putting this all together, we have the following result.
\begin{theorem}
The record-setters of odd length $2n + 3$, for $n \geq 5$,
are:
$$\begin{cases}
1000\ (10)^{n - 1}0,\\
100\ 100\ (10)^{n - 3}0\ 10,\\
100\ 100\ (10)^{n - 2}0,\\
100\ (10)^{i}0\ (10)^{n - i - 1}0, &\text{ for } 1 < i \leq \lceil\frac{n-1}{2}\rceil, \\
(10)^{i+1}0 (10)^{n-i}, & \text{ for } 0 \leq i \leq n.
\end{cases}$$
\label{oddthm}
\end{theorem}
\begin{proof}
The first three strings were already proven in Theorems~\ref{begin100100} and~\ref{odd1000}.
We showed in Eq.~\eqref{breakingOdd} how to break the strings beginning with a 100 into two strings of even lengths. Thus, using Lemmas \ref{two100s} and \ref{evenMult}, for the strings of the form
$$ 100 (10)^{i}0 (10)^{n - i - 1}0 $$
for
$1 < i \leq \lceil\frac{n-1}{2}\rceil$,
the $G$-value increases with increasing $i$.
Moreover, Theorem~\ref{three100s} shows that the minimum $G$-value for a string having a single 100 is greater than the maximum for strings with three 100s. So after the strings with three 100s come those with a single 100. Also, by Lemma \ref{oddFibZero}, using the same calculations as in Lemma \ref{minValSingle100}, the $G$-values increase as $i$ increases, until we reach the maximum $F_{2n + 4} = G((10)^{n + 1}0)$.
\end{proof}
We can now prove
Theorem~\ref{mainTheorem}.
\begin{proof}[Proof of Theorem~\ref{mainTheorem}]
By combining the results of
Theorems~\ref{eventhm} and
\ref{oddthm}, and noting that the indices for the sequence $s$ differ by $1$ from the sequence $a$,
the result now follows.
\end{proof}
We can obtain two useful corollaries of the main result. The first gives an explicit description of the record-setters, and their $a$-values.
\begin{corollary}
The record-setters lying in the interval $[2^{k-1}, 2^k)$ for $k \geq 12$ and even
are, in increasing order,
\begin{itemize}
\item $2^{2n-1} + {{2^{2n-2} - 2^{2n-2a-3}+1} \over {3}}$ for $0 \leq a \leq n-3$;
\item
${{2^{2n+1} - 2^{2n-2b} - 1} \over {3}}$ for $1 \leq b \leq \lfloor n/2 \rfloor$;
and
\item
$(2^{2n+1} + 1)/3$,
\end{itemize}
where $k = 2n$.
The Stern values of these are, respectively
\begin{itemize}
\item $L_{2a+3}F_{2n-2a-3} + L_{2a+1} F_{2n-2a-4}$
\item $F_{2b+2} F_{2n-2b} + F_{2b} F_{2n-2b-1}$
\item $F_{2n+1}$
\end{itemize}
where $L_0 = 2$, $L_1 = 1$, and $L_n = L_{n-1} + L_{n-2}$ for $n \geq 2$ are the Lucas numbers.
The record-setters for $k\geq 12$ and odd are, in increasing order,
\begin{itemize}
\item $2^{2n} + {{2^{2n-2} - 1} \over 3} $
\item $2^{2n} + 2^{2n-3} + {{2^{2n-4} -7} \over 3} $
\item $2^{2n} + {{2^{2n-1} - 2^{2n-2b-2} -1} \over 3}$ for $1 \leq b \leq \lceil n/2 \rceil - 1$
\item ${{2^{2n+2} - 2^{2n-2a-1} +1} \over 3}$ for $0 \leq a \leq n-2$;
\item $(2^{2n+2} - 1)/3$,
\end{itemize}
where $k = 2n+1$.
The corresponding Stern values are
\begin{itemize}
\item $F_{2n+1} + F_{2n-4}$
\item $F_{2n+1} + 8 F_{2n-8}$
\item $L_{2b+3} F_{2n-2b-2} + L_{2b+1} F_{2n-2b-3}$
\item $F_{2a+4} F_{2n-2a-1} + F_{2a+2} F_{2n-2a-2}$
\item $F_{2n+2}$
\end{itemize}
\end{corollary}
\begin{proof}
We obtain the record-setters from Theorem~\ref{mainTheorem} and the identities
$[(10)^i]_2 = (2^{2i+1}-2)/3$
and $[(10)^i 1]_2 =
(2^{2i+2}-1)/3$.
We obtain their Stern values from Eq.~\ref{mat10}.
\end{proof}
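The two binary-value identities invoked here are easy to verify directly; the following check (ours, not part of the paper) does so:

```python
# Check (ours) of the identities [(10)^i]_2 = (2^{2i+1} - 2)/3 and
# [(10)^i 1]_2 = (2^{2i+2} - 1)/3 cited in the proof above.
for i in range(1, 64):
    assert int('10' * i, 2) == (2 ** (2*i + 1) - 2) // 3
    assert int('10' * i + '1', 2) == (2 ** (2*i + 2) - 1) // 3
print("identities verified for 1 <= i < 64")
```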
\begin{corollary}
The binary representations of the record-setters for the
Stern sequence form a context-free language.
\end{corollary}
\section{Acknowledgments}
We thank Colin Defant for conversations about the problem in 2018, and for his suggestions about the manuscript.
\bibliographystyle{new2}
2302.12947
\section{Introduction}
In this paper, we discuss the following two (intersection) numbers defined as values of residue integrals.
\begin{defi}
\ba
&&w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}:=\no\\
&&\frac{1}{(2 \pi \sqrt{-1})^{d+1}} \oint_{C_{0}} \frac{dz_{0}}{(z_{0})^{N}} \oint_{C_{1}} \frac{dz_{1}}{(z_{1})^{N}} \dots \oint_{C_{d}} \frac{dz_{d}}{(z_{d})^{N}}(z_{0})^{N-2-j} (z_{1} - z_{0})^{(N-k)d+j-1} \biggl(\prod_{l=1}^{d} e^{k}(z_{l-1},z_{l}) \biggr)\no\\
&&\times\prod_{l=1}^{d-1} \frac{1}{kz_{l} (2z_{l} - z_{l-1} - z_{l+1})} \hspace{7cm}(N>k).
\label{residue1}
\ea
\ba
&&w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}:=\no\\
&&\frac{1}{(2 \pi \sqrt{-1})^{d+1}} \oint_{C_{0}} \frac{dz_{0}}{(z_{0})^{N}} \oint_{C_{1}} \frac{dz_{1}}{(z_{1})^{N}} \dots \oint_{C_{d}} \frac{dz_{d}}{(z_{d})^{N}}(z_{0})^{N-2-j} (z_{1} - z_{0})^{j} \biggl(\prod_{l=1}^{d} e^{k}(z_{l-1},z_{l}) \biggr)\no\\
&&\times\biggl(\prod_{l=1}^{d-1} \frac{1}{kz_{l} (2z_{l} - z_{l-1} - z_{l+1})}\biggr)\frac{1}{(z_{d})^{1+(k-N)d}}\left( d + \frac{z_0}{z_1 - z_0} \right)^{1+(k-N)d} \hspace{1cm}(N\leq k).
\label{residue2}
\ea
In the above formulas, $e^{k}(z,w)$ is given by $\prod_{j=0}^{k}(jz+(k-j)w)$, and the operation $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_{i}} dz_{i}$ means taking residues at $z_{i} = 0$ for $i = 0,d$ and at $z_{i} = 0, \frac{z_{i-1} + z_{i+1}}{2}$ for $i = 1 , \dots , d-1$. The residue integrals are taken in ascending order with respect to the subscripts of the $z_{i}$'s.
\label{wdefi}
\end{defi}
In the above definition, we allow the integer $j$ to be any non-negative integer.
The first one, $w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}$,
is given as an intersection number of the moduli space of quasimaps $\widetilde{Mp}_{0,2}(N,d)$ from $CP^{1}$ with two marked points to $CP^{N-1}$ \cite{Jin1, Jin3, S},
if $0\leq j\leq N-2$. In this case, we can express the intersection number by using elements of the Chow ring of $\widetilde{Mp}_{0,2}(N,d)$:
\ba
&&w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}=\no\\
&&\int_{\widetilde{Mp}_{0,2}(N,d)}(H_{1}-H_{0})^{(N-k)d+j-1}(H_{0})^{N-2-j}\biggl(\prod_{j=1}^{d}\frac{e^{k}(H_{j-1},H_{j})}{kH_{j}}\biggr)(kH_{d}).
\ea
In the above formula, we interpret $\frac{e^{k}(H_{j-1},H_{j})}{kH_{j}}$ as $\prod_{i=1}^{k}(iH_{j-1}+(k-i)H_{j})$, and $H_{0},H_{1},\cdots, H_{d}$ are generators of the Chow ring of $\widetilde{Mp}_{0,2}(N,d)$ that satisfy the following relations \cite{S}:
\ba
(H_{0})^{N}=0,\;(H_{j})^{N}(2H_{j}-H_{j-1}-H_{j+1})=0\;\;(j=1,2,\cdots,d-1),\;(H_{d})^{N}=0.
\ea
If $j>N-2$, we can no longer express $w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}$ in terms of the Chow ring, because a negative power of $H_{0}$ appears. But the residue integral representation (\ref{residue1}) may still give us a non-vanishing rational number in this case.
The second one, $w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}$, is more exotic. The symbol ``$h$'' originally means the hyperplane class
in $H^{1,1}(CP^{N-1},\mathbb{C})$, but in the notation of the intersection number, a negative power of $h$ appears. It is formally interpreted as a $2+(1+(k-N)d)$-pointed intersection number
of the moduli space of quasimaps $\widetilde{Mp}_{0,2|(1+(k-N)d)}(N,d)$ constructed in \cite{JS}.
By allowing negative power of $h$ formally, this intersection number can alternatively be represented as follows:
\ba
&&w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}=\no\\
&&\sum_{i=0}^{\min\{1+(k-N)d,\, j\}}{1+(k-N)d\choose i}d^{1+(k-N)d-i}w(\sigma_{j-i}({\cal O}_{h^{N-2-j+i}}){\cal O}_{h^{-1-(k-N)d}})_{0,d},
\ea
In the above formula, we assumed Hori's equation \cite{hori} for $2+m$ pointed intersection numbers:
\begin{equation}
w(\sigma_j ({\cal O}_{h^a} ) {\cal O}_{h^b} | ({\cal O}_h )^m )_{0,2|m} \stackrel{\mathrm{formally}}{=} d \cdot w(\sigma_j ({\cal O}_{h^a} ) {\cal O}_{h^b} | ({\cal O}_h )^{m-1} )_{0,2|m-1} + w(\sigma_{j-1} ({\cal O}_{h^{a+1}} ) {\cal O}_{h^b} | ({\cal O}_h )^{m-1} )_{0,2|m-1},
\end{equation}
and applied it iteratively. This equation is proved in the case of $m=1$ in \cite{JM2}. By allowing the following ``formal'' expression:
\ba
&&w(\sigma_{j-i}({\cal O}_{h^{N-2-j+i}}){\cal O}_{h^{-1-(k-N)d}})_{0,d}\stackrel{\mathrm{formally}}{=}\no\\
&&\int_{\widetilde{Mp}_{0,2}(N,d)}(H_{1}-H_{0})^{j-i}(H_{0})^{N-2-j+i}\biggl(\prod_{j=1}^{d}\frac{e^{k}(H_{j-1},H_{j})}{kH_{j}}\biggr)(kH_{d})\frac{1}{(H_{d})^{1+(k-N)d}},
\ea
we reach the formula (\ref{residue2}). $w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}$ may also turn out to be non-vanishing for
any non-negative integer $j$.
In this paper, we prove the following two theorems on these numbers.
\begin{theorem}
If $N-k>0$, the following equality holds.
\ba
\frac{1}{k}w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}=\frac{1}{j!}\frac{\d^{j}}{\d \epsilon^{j}}\left.\left(\frac{\prod_{r=1}^{kd}(r+k\epsilon)}{\prod_{r=1}^{d}(r+\epsilon)^N}\right) \right|_{\epsilon=0}.
\label{mainth1}
\ea
\label{main1}
\end{theorem}
\begin{theorem} If $N-k\leq 0$, the following equality holds.
\ba
\frac{1}{k}w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}
=\frac{1}{j!}\frac{\d^{j}}{\d \epsilon^{j}}\left.\left(\frac{\prod_{r=1}^{kd}(r+k\epsilon)}{\prod_{r=1}^{d}(r+\epsilon)^N}\right) \right|_{\epsilon=0}.
\label{mainth2}
\ea
\label{main2}
\end{theorem}
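The common right-hand side of Theorems \ref{main1} and \ref{main2} can be evaluated exactly by truncated power-series arithmetic in $\epsilon$. The sketch below (ours, not part of the proof) computes the $\epsilon^{j}$ Taylor coefficient of $\prod_{r=1}^{kd}(r+k\epsilon)/\prod_{r=1}^{d}(r+\epsilon)^{N}$ and checks that $j=0$ gives $(kd)!/(d!)^{N}$, while $j=1$ gives $(kd)!/(d!)^{N}\,\bigl(k\sum_{r=1}^{kd}\frac{1}{r}-N\sum_{r=1}^{d}\frac{1}{r}\bigr)$:

```python
# Exact evaluation (ours) of the right-hand side of the two main theorems
# via truncated power series with rational coefficients.
from fractions import Fraction
from math import factorial

def rhs(N, k, d, j):
    """eps^j Taylor coefficient of prod_{r=1}^{kd}(r + k*eps) / prod_{r=1}^{d}(r + eps)^N."""
    order = j + 1
    def mul(p, q):                       # polynomial product truncated at eps^j
        out = [Fraction(0)] * order
        for a, pa in enumerate(p):
            for b, qb in enumerate(q):
                if a + b < order:
                    out[a + b] += pa * qb
        return out
    num = [Fraction(1)] + [Fraction(0)] * j
    for r in range(1, k * d + 1):
        num = mul(num, [Fraction(r), Fraction(k)])
    den = [Fraction(1)] + [Fraction(0)] * j
    for r in range(1, d + 1):
        for _ in range(N):
            den = mul(den, [Fraction(r), Fraction(1)])
    inv = [Fraction(0)] * order          # power-series inverse of den
    inv[0] = 1 / den[0]
    for m in range(1, order):
        inv[m] = -sum(den[t] * inv[m - t] for t in range(1, m + 1)) / den[0]
    return mul(num, inv)[j]

def H(m):                                # harmonic number as an exact Fraction
    return sum(Fraction(1, r) for r in range(1, m + 1))

for (N, k, d) in [(5, 3, 2), (4, 6, 3), (3, 3, 2)]:
    base = Fraction(factorial(k * d), factorial(d) ** N)
    assert rhs(N, k, d, 0) == base
    assert rhs(N, k, d, 1) == base * (k * H(k * d) - N * H(d))
```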
These two theorems are extensions of our former result given in \cite{JM1}, which realized the generalized hypergeometric series used in the mirror computation of genus $0$ Gromov-Witten invariants of the Calabi-Yau hypersurface in $CP^{N-1}$ as a generating function of the intersection numbers $w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1}})_{0,d}$ of $\widetilde{Mp}_{0,2}(N,d)$,
to the case of a degree $k$ hypersurface in $CP^{N-1}$. Theorem \ref{main1} corresponds to the Fano $(k<N)$ case, and Theorem \ref{main2} corresponds to the Calabi-Yau and general
type $(k\geq N)$ cases.
In the Fano case, Givental considered the following differential equation:
\ba
\left(\biggl(\frac{d}{dx}\biggr)^{N-1}-ke^{x}\prod_{j=1}^{k-1}(k\frac{d}{dx}+j)\right)w(x)=0.
\label{givd}
\ea
Linearly independent solutions of the above equation are given as follows.
\ba
w_{j}(x)=\sum_{d=0}^{\infty}\frac{\d^{j}}{\d \epsilon^{j}}\left.\left(\frac{\prod_{r=1}^{kd}(r+k\epsilon)}{\prod_{r=1}^{d}(r+\epsilon)^N}e^{(d+\epsilon)x}\right) \right|_{\epsilon=0}\;\;(j=0,1,\cdots,N-2).
\label{wgiv}
\ea
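Substituting $w_{0}(x)=\sum_{d\geq 0}\frac{(kd)!}{(d!)^{N}}e^{dx}$ into (\ref{givd}) and comparing the coefficients of $e^{dx}$ yields the identity $d^{N-1}\frac{(kd)!}{(d!)^{N}} = k\prod_{j=1}^{k-1}(k(d-1)+j)\cdot\frac{(k(d-1))!}{((d-1)!)^{N}}$, which the following sketch (ours, a numerical consistency check rather than a proof) verifies over the Fano range $k < N$:

```python
# Numerical verification (ours) that the e^{dx} coefficients of w_0 satisfy
# the recursion imposed by the differential equation above.
from fractions import Fraction
from math import factorial, prod

def c(N, k, d):
    """Coefficient of e^{dx} in w_0: (kd)! / (d!)^N."""
    return Fraction(factorial(k * d), factorial(d) ** N)

for N in range(3, 8):
    for k in range(1, N):                  # Fano range k < N
        for d in range(1, 10):
            lhs = d ** (N - 1) * c(N, k, d)
            rhs = k * prod(k * (d - 1) + j for j in range(1, k)) * c(N, k, d - 1)
            assert lhs == rhs
```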
In \cite{giv}, Givental computed the gravitational Gromov-Witten invariant $\langle\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}}\rangle_{0,d}$, which is defined as an intersection number of
the moduli space of stable maps $\overline{M}_{0,2}(CP^{N-1},d)$, by using the localization technique invented by Kontsevich \cite{km}, and proved the following theorem:
\begin{theorem}{\bf (Givental)}
If $N-k\geq 2$, the following equality holds.
\ba
\frac{1}{k}\langle\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}}\rangle_{0,2}=\frac{1}{j!}\frac{\d^{j}}{\d \epsilon^{j}}\left.\left(\frac{\prod_{r=1}^{kd}(r+k\epsilon)}{\prod_{r=1}^{d}(r+\epsilon)^N}\right) \right|_{\epsilon=0}.
\ea
\label{givth1}
\end{theorem}
Therefore, Theorem \ref{main1} corresponds to the quasimap version of Theorem \ref{givth1}. Since we are treating the moduli space of quasimaps $\widetilde{Mp}_{0,2}(N,d)$, the
equality (\ref{mainth1}) holds also in the $N-k=1$ case. The origin of this difference is explained in \cite{Jin2}. In contrast to the complexity of the proof of Theorem \ref{givth1}, caused by the complicated combinatorial structure of the boundaries of the moduli space of stable maps, our proof of Theorem \ref{main1} is quite straightforward and simple.
In the general type case, we can still consider the differential equation (\ref{givd}), and the series given in (\ref{wgiv}) are still formal solutions. But as was suggested in \cite{iritani},
the convergence radii of these series are equal to $0$. Therefore, Theorem \ref{main2} should be regarded as a ``formal'' result. The exotic characteristics of the intersection number $w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d}$ may come from this formality. Theorem \ref{main2} can be interpreted as a kind of completion of the equality observed in \cite{Jin2}:
\ba
\frac{1}{k}\cdot d^{1+(k-N)d}\cdot w({\cal O}_{h^{N-2}}{\cal O}_{h^{-1-(k-N)d}})_{0,2}=\frac{(kd)!}{(d!)^{N}}.
\ea
In closing this section, we mention a new feature of the proof of the main theorems, presented in Subsection 2.1. This technique drastically simplifies the computational processes of
the proof. Hence the proof given in Subsections 2.2 and 2.3 can be regarded as a simplification of the proof given in our former work \cite{JM1}.
\vspace{2em}
{\bf Acknowledgment}
We would like to thank Prof.~G.~Ishikawa and Prof.~A.~Tsuchida for kind encouragement.
Our research is partially supported by JSPS grant No. 22K03289.
\section{Proof of the Main Theorems }
\subsection{The ``Infinitesimal Displacement'' of a Pole}
In this subsection, in order to compute the residue integrals (\ref{residue1}) and (\ref{residue2}) effectively, we introduce a technique for reducing the order of a pole in the residue integrals. Let $f(z, w)$ be a holomorphic function on $\mathbb{C}^2$, and $C(0)$ and $C(\frac{z + \alpha}{2})$ be the contours $z(t) := r_1 \exp (2 \pi \sqrt{-1} t) \ (r_1 > 0 \; ; \; 0 \le t \le 1)$ on the $z$-plane and $w(t) := \frac{z + \alpha}{2} + r_2 \exp (2 \pi \sqrt{-1} t) \ (r_2> 0 \; ; \; 0 \le t \le 1)$ on the $w$-plane, respectively. We consider the following residue integral:
\begin{equation}
I_j := \frac{1}{(2\pi \sqrt{-1})^2} \oint_{C(0)} \frac{dz}{z} \oint_{C(\frac{z + \alpha}{2})} dw f(z, w) \left( \frac{w-z}{z} \right)^j \quad (j = 0, 1, \dots ). \label{originint}
\end{equation}
We allow the integrand in the above integral to have a higher-order pole at $z=0$. In such a case, we would have to compute higher derivatives with respect to the variable $z$.
In order to avoid computing higher derivatives, we introduce the generating function of $I_{j}$ (this operation leads to an ``infinitesimal displacement'' of the pole at $z=0$). Then we can reduce our computation to taking the residue of a pole of order 1. Let $F(\varepsilon)$ be the generating function of $I_j \ (j=0, 1, \cdots)$ given as follows:
\begin{align}
F(\varepsilon) &:= \sum_{j=0}^{\infty} I_j \varepsilon^j \notag \\
&=\frac{1}{(2\pi \sqrt{-1})^2} \sum_{j=0}^{\infty} \oint_{C(0)} \frac{dz}{z} \oint_{C(\frac{z + \alpha}{2})} dw \;f(z, w) \left( \frac{w-z}{z} \varepsilon \right)^j , \label{genfunc}
\end{align}
where $\varepsilon$ is a small parameter. The $w$-integration part of the above generating function:
\begin{align}
G_j (z; \varepsilon) &:= \frac{1}{2 \pi \sqrt{-1}} \oint_{C(\frac{z + \alpha}{2})} dw \;f(z, w) \left( \frac{w-z}{z} \varepsilon \right)^j \notag \\
&= \int_{0}^{1} f \left( z, \frac{z + \alpha}{2} + r_2 e^{2 \pi \sqrt{-1} t} \right) \left( \frac{\frac{z + \alpha}{2} + r_2 e^{2 \pi \sqrt{-1} t} - z}{z} \varepsilon \right)^j \cdot r_2 e^{2 \pi \sqrt{-1} t} dt,
\end{align}
is holomorphic in $z$. By the Weierstrass M-test, we can exchange the order of integration and summation in (\ref{genfunc}):
\begin{align}
\frac{1}{2\pi \sqrt{-1}} \sum_{j=0}^{\infty} \oint_{C(0)} \frac{dz}{z} G_j (z; \varepsilon) &= \frac{1}{(2\pi \sqrt{-1})^2} \oint_{C(0)} \frac{dz}{z} \oint_{C(\frac{z + \alpha}{2})} dw \sum_{j=0}^{\infty} f(z, w) \left( \frac{w-z}{z} \varepsilon \right)^j \notag \\
&= \frac{1}{1 + \varepsilon} \cdot \frac{1}{(2\pi \sqrt{-1})^2} \oint_{C(0)} dz \oint_{C(\frac{z + \alpha}{2})} dw \frac{f(z, w)}{z -\frac{\varepsilon}{1 + \varepsilon} w} \notag \\
&= \frac{1}{1 + \varepsilon} \cdot \frac{1}{2\pi \sqrt{-1}} \oint_{C(0)} dz \int_{0}^{1} \frac{f \left(z, \frac{z + \alpha}{2} + r_2 e^{2 \pi \sqrt{-1} t} \right)}{z - \frac{\varepsilon}{1 + \varepsilon} \left( \frac{z + \alpha}{2} + r_2 e^{2 \pi \sqrt{-1} t} \right)} \cdot r_2 e^{2 \pi \sqrt{-1} t} dt \notag \\
&= \frac{1}{2 + \varepsilon} \cdot \frac{1}{2\pi \sqrt{-1}} \oint_{C(0)} dz \int_{0}^{1} \frac{f \left(z, \frac{z + ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t})}{2} \right)}{z - \frac{\varepsilon}{2 + \varepsilon} \left( \alpha + 2 r_2 e^{2 \pi \sqrt{-1} t} \right)} \cdot 2 r_2 e^{2 \pi \sqrt{-1} t} dt \notag \\
&= \frac{1}{2 + \varepsilon} \int_{0}^{1} \left\{ \frac{1}{2\pi \sqrt{-1}} \oint_{C(0)} dz \frac{f \left(z, \frac{z + ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t})}{2} \right)}{z - \frac{\varepsilon}{2 + \varepsilon} \left( \alpha + 2 r_2 e^{2 \pi \sqrt{-1} t} \right)} \right\} \cdot 2 r_2 e^{2 \pi \sqrt{-1} t} dt , \label{genint}
\end{align}
for all $\varepsilon$'s that satisfy,
\begin{equation}
|\varepsilon| < m := \min \left\{ \left| \frac{r_2 e^{2 \pi \sqrt{-1} t}}{\left( \frac{r_1 e^{2 \pi \sqrt{-1} s} + \alpha}{2} + r_2 e^{2 \pi \sqrt{-1} t} \right) - r_1 e^{2 \pi \sqrt{-1}s}} \right| \; ; \; s, t \in [0, 1] \right\}.
\end{equation}
Note that this condition ensures convergence of the series $\sum_{j=0}^{\infty} \left( \frac{w-z}{z} \varepsilon \right)^j$ in (\ref{genint}). Since
\begin{equation}
\lim_{\varepsilon \to 0} \left| \frac{\varepsilon}{2 + \varepsilon} (\alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}) - 0 \right| = 0,
\end{equation}
holds and $|\alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}|$ is bounded from above as a function of $t$, we can take some positive constant $r \ (< m)$ such that $\frac{\varepsilon}{2 + \varepsilon} (\alpha + 2 r_2 e^{2 \pi \sqrt{-1}t})$ is contained in the interior of the contour $C(0)$ if $|\varepsilon| < r$. Then we can apply Cauchy's integral theorem to the integral (\ref{genint}):
\begin{align}
&\frac{1}{2 + \varepsilon} \int_{0}^{1} \left\{ \frac{1}{2\pi \sqrt{-1}} \oint_{C(0)} dz \frac{f \left(z, \frac{z + ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t})}{2} \right)}{z - \frac{\varepsilon}{2 + \varepsilon} \left( \alpha + 2 r_2 e^{2 \pi \sqrt{-1} t} \right)} \right\} \cdot 2 r_2 e^{2 \pi \sqrt{-1} t} dt \notag \\
&= \frac{1}{2 + \varepsilon} \int_{0}^{1} f \left( \frac{\varepsilon}{2 + \varepsilon} ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}) , \frac{1 + \varepsilon}{2 + \varepsilon} ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}) \right) \cdot 2 r_2 e^{2 \pi \sqrt{-1} t} dt \notag \\
&= \frac{1}{1 + \varepsilon} \int_{0}^{1} f \left( \frac{\varepsilon}{1 + \varepsilon} \cdot \frac{1 + \varepsilon}{2 + \varepsilon} ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}) , \frac{1 + \varepsilon}{2 + \varepsilon} ( \alpha + 2 r_2 e^{2 \pi \sqrt{-1}t}) \right) \cdot \frac{1 + \varepsilon}{2 + \varepsilon} \cdot 2 r_2 e^{2 \pi \sqrt{-1} t} dt \notag \\
&= \frac{1}{1 + \varepsilon} \cdot \frac{1}{2 \pi \sqrt{-1}} \oint_{C(\frac{1 + \varepsilon}{2 + \varepsilon} \alpha)} dw f \left( \frac{\varepsilon}{1 + \varepsilon} w ,w \right), \label{resint}
\end{align}
where $C(\frac{1 + \varepsilon}{2 + \varepsilon} \alpha)$ is a contour $w(t) := \frac{1 + \varepsilon}{2 + \varepsilon} (\alpha + 2r_2 \exp (2 \pi \sqrt{-1} t)) \ (0 \le t \le 1)$ on $w$-plane. With these discussions, we have proved the following lemma:
\begin{lem}
We can choose some constant $r (> 0)$ such that the following equality:
\begin{equation}
\frac{1}{(2\pi \sqrt{-1})^2} \sum_{j=0}^{\infty} \oint_{C(0)} \frac{dz}{z} \oint_{C(\frac{z + \alpha}{2})} dw f(z, w) \left( \frac{w-z}{z} \right)^j \varepsilon^j = \frac{1}{1 + \varepsilon} \cdot \frac{1}{2 \pi \sqrt{-1}} \oint_{C(\frac{1 + \varepsilon}{2 + \varepsilon} \alpha)} dw f \left( \frac{\varepsilon}{1 + \varepsilon} w ,w \right), \label{lem1}
\end{equation}
holds for all $\varepsilon$ with $|\varepsilon| < r$. In particular, the generating function $F(\varepsilon)$ of the integrals $I_j$ is holomorphic at $\varepsilon = 0$.
\end{lem}
\begin{Rem} Since we have,
\begin{equation}
w = \frac{\alpha + \frac{\varepsilon}{1 + \varepsilon}w}{2} \Longleftrightarrow w = \frac{1 + \varepsilon}{2 + \varepsilon} \alpha,
\end{equation}
we can interpret that the integral
\begin{equation}
\frac{1}{2 \pi \sqrt{-1}} \oint_{C(\frac{1 + \varepsilon}{2 + \varepsilon} \alpha)} dw \;f \left( \frac{\varepsilon}{1 + \varepsilon} w ,w \right)
\end{equation}
is obtained as a result of taking residue at $z = \frac{\varepsilon}{1 + \varepsilon}w$ of the expression:
\begin{equation}
\frac{1}{2 \pi \sqrt{-1}} \oint_{C(\frac{z + \alpha}{2})} dw \frac{f(z, w)}{z -\frac{\varepsilon}{1 + \varepsilon} w}
\end{equation}
in (\ref{genint}).
\label{rem}
\end{Rem}
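A one-variable analogue makes the mechanism of Lemma 1 transparent: for a polynomial $f$ and a fixed $w_0$, the residues $I_j = \mathrm{Res}_{z=0}\, f(z)(w_0-z)^j/z^{j+1}$, which involve poles of arbitrarily high order, coincide with the Taylor coefficients of the single simple-pole evaluation $\frac{1}{1+\varepsilon}f\bigl(\frac{\varepsilon}{1+\varepsilon}w_0\bigr)$. The sketch below (ours, not part of the paper) checks this by exact coefficient arithmetic:

```python
# One-variable check (ours) of the pole-displacement identity behind Lemma 1:
# Res_{z=0} f(z) (w0 - z)^j / z^{j+1}  ==  [eps^j] f(eps*w0/(1+eps)) / (1+eps).
from math import comb

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

def I(f, w0, j):
    """Residue at z = 0 of f(z) (w0 - z)^j / z^{j+1}, i.e., [z^j] f(z) (w0 - z)^j."""
    g = list(f)
    for _ in range(j):
        g = poly_mul(g, [w0, -1])
    return g[j]

def rhs_coeff(f, w0, j):
    """eps^j coefficient of f(eps*w0/(1+eps)) / (1+eps), using the expansion
    (1+eps)^{-(i+1)} = sum_s (-1)^s C(i+s, s) eps^s."""
    return sum(a * w0 ** i * (-1) ** (j - i) * comb(j, j - i)
               for i, a in enumerate(f) if i <= j)

f = [3, -2, 5, 7]          # f(z) = 3 - 2z + 5z^2 + 7z^3 (arbitrary test polynomial)
w0 = 4
for j in range(12):
    assert I(f, w0, j) == rhs_coeff(f, w0, j)
```

As in the Remark, the high-order pole at $z = 0$ is traded for a simple pole at $z = \frac{\varepsilon}{1+\varepsilon}w_0$, whose single residue packages all the $I_j$ at once.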
\subsection{Proof of Theorem \ref{main1}}
In this subsection, we prove Theorem \ref{main1} by using Lemma 1. By Definition \ref{wdefi}, $w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2}$ is given by,
\begin{align}
w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2} &= \frac{1}{(2 \pi \sqrt{-1})^{d+1}} \oint_{C_0} \frac{dz_0}{(z_0 )^N} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \notag \\
&\quad \cdot (z_0 )^{N-2-j} (z_1 - z_0 )^{(N-k)d+j-1} \frac{\prod_{i=1}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=1}^{d-1} kz_i (2z_i - z_{i+1} - z_{i-1})} \cdot (z_d )^0 \notag \\
&= \frac{1}{(2 \pi \sqrt{-1})^{d+1}} \oint_{C_0} \frac{dz_0}{z_0} \oint_{C_1} \frac{dz_1}{(z_1 )^N} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \notag \\
&\quad \cdot \frac{(z_1 - z_0 )^{(N-k)d-1}}{2z_1 - z_2 - z_0} \cdot \frac{e^k (z_0 , z_1 )}{z_0} \cdot \frac{\prod_{i=2}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=1}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \notag \\
&\quad \cdot \frac{1}{\prod_{i=1}^{d-1} kz_i} \cdot \left( \frac{z_1 - z_0}{z_0} \right)^{j} ,
\end{align}
where
\begin{equation}
e^{k}(z,w):=\prod_{j=0}^{k}((k-j)z+jw)\;\;\;(N-k\geq 1),
\end{equation}
is a degree ($k+1$) polynomial and $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_i} dz_i \ (i = 0 , \dots , d)$ is operation of taking residue(s) at,
\begin{equation}
\begin{cases}
z_i = 0 & (i= 0 , d), \\
z_i = 0 , \frac{z_{i-1} + z_{i+1}}{2} & (i = 1, \dots , d-1).
\end{cases}
\end{equation}
Note that $e^k (z , w)$ is divisible by $z$ and $w$ (and therefore $e^k (z, 0) \equiv 0$). In order to prove our assertion, we introduce the generating function of the above integrals:
\begin{align}
F_0 (\varepsilon ) &:= \sum_{j=0}^{\infty} w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2} \; \varepsilon^j \notag \\
&= \frac{1}{2 \pi \sqrt{-1}} \sum_{j=0}^{\infty} \oint_{C_0} \frac{dz_0}{z_0} \left( \frac{1}{2 \pi \sqrt{-1}} \oint_{C_1} \frac{dz_1}{(z_1 )^N} f_0 (z_0 , z_1 ) \left( \frac{z_1 - z_0}{z_0} \right)^{j} \right) \varepsilon^j ,
\end{align}
where $f_0 (z_0 , z_1 )$ is defined by,
\begin{align}
f_0 (z_0 , z_1 ) &:= (z_1 - z_0 )^{(N-k)d-1} \cdot \frac{e^k (z_0 , z_1 )}{z_0} \notag \\
&\quad \cdot \left( \frac{1}{(2 \pi \sqrt{-1})^{d-1}} \oint_{C_2} \frac{dz_2}{(z_2 )^N} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \frac{\prod_{i=2}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=1}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \cdot \frac{1}{\prod_{i=1}^{d-1} kz_i} \cdot \frac{1}{2z_1 - z_2 - z_0} \right).
\end{align}
With this set-up, we have only to prove the following equality:
\begin{equation}
F_0 (\varepsilon ) = k \cdot \frac{\prod_{r=1}^{kd}(r+k\varepsilon)}{\prod_{r=1}^{d}(r+\varepsilon)^N} \quad (\mbox{for any sufficiently small} \ \varepsilon).
\end{equation}
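Before proceeding, the claimed closed form can be sanity-checked symbolically in the smallest case $d = 1$, where the middle products in the residue integral are empty. The values $N = 3$, $k = 2$ below are hypothetical choices satisfying $N - k \geq 1$; this is only a sketch of the verification, not part of the proof.

```python
import sympy as sp

# Hypothetical small parameters: d = 1, N = 3, k = 2 (so N - k >= 1 holds).
z0, z1, eps = sp.symbols('z0 z1 varepsilon')
N, k, d = 3, 2, 1

def e_k(z, w):
    # e^k(z, w) = prod_{j=0}^{k} ((k - j) z + j w)
    return sp.Mul(*[(k - j)*z + j*w for j in range(k + 1)])

# Truncate the generating function F_0 at order J in varepsilon.
J = 8
F0 = sp.Integer(0)
for j in range(J):
    integrand = (z0**(N - 2 - j) * (z1 - z0)**((N - k)*d + j - 1)
                 * e_k(z0, z1) / (z0**N * z1**N))
    inner = sp.residue(integrand, z1, 0)     # residue at z1 = 0
    F0 += sp.residue(inner, z0, 0) * eps**j  # residue at z0 = 0

# Claimed closed form, expanded to the same order in varepsilon.
closed = (k * sp.Mul(*[r + k*eps for r in range(1, k*d + 1)])
          / sp.Mul(*[(r + eps)**N for r in range(1, d + 1)]))
closed_trunc = sp.series(closed, eps, 0, J).removeO()
```

The truncated generating function agrees term by term with the expansion of the closed form.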
Since $f_0 (z_0 , z_1 )$ is holomorphic in $z_0$ and the operation $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_0} dz_0$ is realized as a contour integral around a circle centered at the origin of the $z_0$-plane, we can apply Lemma 1 with some constant $r_0 (> 0)$. We then obtain,
\begin{align}
F_0 (\varepsilon ) &= \frac{1}{1 + \varepsilon} \cdot \frac{1}{2 \pi \sqrt{-1}} \oint_{C_1^{\prime}} \frac{dz_1}{(z_1 )^N} \; f_0 \left( \frac{\varepsilon}{1 + \varepsilon} z_1 , z_1 \right) \notag \\
&= \frac{k}{(1 + \varepsilon )^{(N-k)(d-1)-1} (2 + \varepsilon )} \frac{\prod_{r=1}^{k} (r + k \varepsilon )}{(1 + \varepsilon )^N} F_1 (\varepsilon ) \quad (| \varepsilon | < r_0),
\end{align}
where we used the equality:
\begin{equation}
\frac{e^k \left( \frac{\varepsilon}{1 + \varepsilon} z_1 , z_1 \right)}{\frac{\varepsilon}{1 + \varepsilon} z_1 } = k \left( \prod_{r=1}^k (r + k \varepsilon ) \right) \left( \frac{z_1}{1 + \varepsilon} \right)^{k},
\end{equation}
and we define $F_{1}(\varepsilon)$ as follows,
\begin{align}
F_1 (\varepsilon) &:= \frac{1}{(2 \pi \sqrt{-1})^{d}} \oint_{C_1^{\prime}} dz_1 \oint_{C_2} \frac{dz_2}{(z_2 )^{N}} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \notag \\
&\quad \cdot \frac{(z_1 )^{(N-k)(d-1)-1}}{z_1 - \frac{1 + \varepsilon}{2 + \varepsilon} z_2} \cdot \frac{\prod_{i=2}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=1}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \cdot \frac{1}{\prod_{i=1}^{d-1} kz_i}.
\label{thm1int1}
\end{align}
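The equality for $e^k \left( \frac{\varepsilon}{1 + \varepsilon} z_1 , z_1 \right)$ used in the computation of $F_0$ can be verified symbolically; the value $k = 3$ below is a hypothetical small choice, and the check is only a sketch of the algebra.

```python
import sympy as sp

# Hypothetical small value k = 3; c stands for varepsilon / (1 + varepsilon).
z1, eps = sp.symbols('z1 varepsilon')
k = 3
c = eps / (1 + eps)

# e^k(c*z1, z1) = prod_{j=0}^{k} ((k - j) c z1 + j z1), divided by c*z1.
ek = sp.Mul(*[(k - j)*c*z1 + j*z1 for j in range(k + 1)])
lhs = ek / (c * z1)

# Claimed right-hand side: k * prod_{r=1}^{k}(r + k eps) * (z1/(1+eps))^k.
rhs = k * sp.Mul(*[r + k*eps for r in range(1, k + 1)]) * (z1 / (1 + eps))**k
```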
We further rewrite $F_{1}(\varepsilon)$ as,
\begin{align}
F_{1}(\varepsilon)&= \frac{1}{2 \pi \sqrt{-1}} \oint_{C_1^{\prime}} dz_1 \left( \frac{1}{2 \pi \sqrt{-1}} \oint_{C_2} \frac{dz_2}{(z_2 )^N} \frac{f_1 (z_1 , z_2 )}{z_1 - \frac{1 + \varepsilon}{2 + \varepsilon} z_2} \right). \label{thm1int2}
\end{align}
In (\ref{thm1int1}) and (\ref{thm1int2}), $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_1^{\prime}} dz_1$ denotes the operation of taking residues at $z_1 = 0$ and $z_1 = \frac{1 + \varepsilon}{2 + \varepsilon} z_2$, and $f_1 (z_1 , z_2 )$ is defined by
\begin{align}
f_1 (z_1 , z_2 ) &:= (z_1 )^{(N-k)(d-1)-1} \cdot \frac{e^k (z_1 , z_2 )}{kz_1} \notag \\
&\quad \cdot \left( \frac{1}{(2 \pi \sqrt{-1})^{d-2}} \oint_{C_3} \frac{dz_3}{(z_3 )^{N}} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \frac{\prod_{i=3}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=2}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \cdot \frac{1}{\prod_{i=2}^{d-1} kz_i} \cdot \frac{1}{2z_2 - z_3 - z_1} \right).
\end{align}
Then $f_1 (z_1 , z_2 )$ is holomorphic in $z_1$, and therefore the operation $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_1^{\prime}} dz_1$ is realized as a contour integral around a circle of sufficiently small radius centered at $z_1 = \frac{1}{2} z_2$ in the $z_1$-plane. Moreover, since
\begin{equation}
\lim_{\varepsilon \to 0} \left| \frac{1 + \varepsilon}{2 + \varepsilon} z_2 - \frac{1}{2} z_2 \right| = 0
\end{equation}
and $z_2$ lies in some bounded subset of the $z_2$-plane, one can find a constant $r_1$ with $0 < r_1 < r_0$ such that $z_1 = \frac{1 + \varepsilon}{2 + \varepsilon} z_2$ lies in the interior of this circle in the $z_1$-plane whenever $|\varepsilon| < r_1$.
Hence we can integrate (\ref{thm1int2}) with respect to $z_1$ as follows:
\begin{align}
F_1 (\varepsilon ) &= \frac{1}{2 \pi \sqrt{-1}} \oint_{C_2^{\prime}} \frac{dz_2}{(z_2 )^N} \; f_1 \left( \frac{1 + \varepsilon}{2 + \varepsilon} z_2 , z_2 \right) \notag \\
&= \frac{(1 + \varepsilon )^{(N-k)(d-1)-1}}{(2 + \varepsilon )^{(N-k)(d-2)-2}(3 + \varepsilon )} \frac{\prod_{r=k+1}^{2k} (r + k \varepsilon )}{(2 + \varepsilon )^{N}} F_2 (\varepsilon ) \quad (| \varepsilon | < r_{1}), \label{int1}
\end{align}
where
\begin{align}
F_2 (\varepsilon) &:= \frac{1}{(2 \pi \sqrt{-1})^{d-1}} \oint_{C_2^{\prime}} dz_2 \oint_{C_3} \frac{dz_3}{(z_3 )^{N}} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \notag \\
&\quad \cdot \frac{(z_2 )^{(N-k)(d-2)-1}}{z_2 - \frac{2 + \varepsilon}{3 + \varepsilon} z_3} \frac{\prod_{i=3}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=2}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \cdot \frac{1}{\prod_{i=2}^{d-1} kz_i},
\end{align}
and $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_2^{\prime}} dz_2$ denotes the operation of taking residues at $z_2 = 0$ and $z_2 = \frac{2 + \varepsilon}{3 + \varepsilon} z_3$ (the reason why the second residue is taken at $z_2 = \frac{2 + \varepsilon}{3 + \varepsilon} z_3$ is the same as the one given in Remark \ref{rem}). By repeating this procedure, we reach the following
expression:
\begin{align}
F_0 (\varepsilon ) &= \sum_{j=0}^{\infty} w(\sigma_{(N-k)d+j-1}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{0}})_{0,2} \; \varepsilon^j \notag \\
&= \frac{k}{(1 + \varepsilon )^{(N-k)(d-1)-1} (2 + \varepsilon )} \frac{\prod_{r=1}^{k} (r + k \varepsilon )}{(1 + \varepsilon )^N} F_1 (\varepsilon ) \notag \\
&= \frac{k}{(2 + \varepsilon )^{(N-k)(d-2)-1} (3 + \varepsilon )} \frac{\prod_{r=1}^{2k} (r + k \varepsilon)}{\prod_{r=1}^{2} (r + \varepsilon )^{N}} F_2 (\varepsilon ) \notag \\
&= \dotsb \notag \\
&= \frac{k}{(d-1 + \varepsilon )^{(N-k) \cdot 1 - 1} (d + \varepsilon )} \frac{\prod_{r=1}^{(d-1)k} (r + k \varepsilon )}{\prod_{r=1}^{d-1} (r + \varepsilon )^{N}} F_{d-1} (\varepsilon) \quad (\mbox{for any sufficiently small} \ \varepsilon),
\end{align}
where
\begin{equation}
F_{d-1} (\varepsilon) := \frac{1}{(2 \pi \sqrt{-1})^2} \oint_{C_{d-1}^{\prime}} dz_{d-1} \oint_{C_d} \frac{dz_d}{(z_d )^{N}} \frac{(z_{d-1})^{(N-k) \cdot 1 - 1}}{z_{d-1} - \frac{d - 1 + \varepsilon}{d + \varepsilon} z_{d}} \frac{e^k (z_{d-1} , z_{d})}{kz_{d-1}},
\end{equation}
and $\frac{1}{2 \pi \sqrt{-1}} \oint_{C_{d-1}^{\prime}} dz_{d-1}$ is the operation of taking residues at $z_{d-1} = 0$ and $z_{d-1} = \frac{d-1 + \varepsilon}{d + \varepsilon} z_{d}$. We can then easily
evaluate this integral as,
\begin{equation}
F_{d-1} (\varepsilon ) = \frac{(d - 1 + \varepsilon )^{(N-k) \cdot 1 - 1}}{(d + \varepsilon )^{-1}} \frac{\prod_{r = (d-1)k + 1}^{kd} (r + k \varepsilon )}{(d + \varepsilon )^{N}}.
\end{equation}
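This evaluation of $F_{d-1}(\varepsilon)$ can be sanity-checked symbolically; the values $N = 3$, $k = 2$, $d = 2$ below are hypothetical small choices, with the exponent of $z_{d-1}$ read as $(N-k) \cdot 1 - 1$ and the numerator factor as $e^k$.

```python
import sympy as sp

# Hypothetical small parameters: N = 3, k = 2, d = 2 (so N - k >= 1 holds).
u, w, eps = sp.symbols('u w varepsilon')  # u = z_{d-1}, w = z_d
N, k, d = 3, 2, 2
c = (d - 1 + eps) / (d + eps)

# Integrand of F_{d-1}: residues are taken at u = 0 and at the simple
# pole u = c * w, followed by the residue at w = 0.
ek = sp.Mul(*[(k - j)*u + j*w for j in range(k + 1)])
integrand = u**((N - k) - 1) / (u - c*w) * ek / (k*u) / w**N

res0 = sp.residue(integrand, u, 0)
res_pole = sp.cancel((u - c*w) * integrand).subs(u, c*w)  # simple pole at u = c*w
Fdm1 = sp.residue(sp.together(res0 + res_pole), w, 0)

# Claimed value of F_{d-1}(varepsilon).
closed = ((d - 1 + eps)**((N - k) - 1) * (d + eps)
          * sp.Mul(*[r + k*eps for r in range((d - 1)*k + 1, k*d + 1)])
          / (d + eps)**N)
```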
In this way, we finally obtain,
\begin{equation}
F_0 (\varepsilon ) = k \cdot \frac{\prod_{r=1}^{kd}(r+k\varepsilon)}{\prod_{r=1}^{d}(r+\varepsilon)^N} \quad (\mbox{for any sufficiently small} \ \varepsilon),
\end{equation}
which completes the proof of Theorem \ref{main1}. \ $\square$
\subsection{Proof of Theorem \ref{main2}}
As in the proof of Theorem \ref{main1}, we consider the generating function,
\begin{align}
G_0 (\varepsilon) &:= \sum_{j=0}^{\infty} w(\sigma_{j}({\cal O}_{h^{N-2-j}}){\cal O}_{h^{-1-(k-N)d}}|({\cal O}_{h})^{1+(k-N)d})_{0,2|1+(k-N)d} \; \varepsilon^j.
\end{align}
By using (\ref{residue2}) in Definition \ref{wdefi}, $G_{0}(\varepsilon)$ is given as the following residue integral:
\begin{align}
G_0 (\varepsilon)&= \frac{1}{2 \pi \sqrt{-1}} \sum_{j=0}^{\infty} \oint_{C_0} \frac{dz_0}{z_0} \left( \frac{1}{2 \pi \sqrt{-1}} \oint_{C_1} \frac{dz_1}{(z_1 )^N} \; g_0 (z_0 , z_1 ) \left( \frac{z_1 - z_0}{z_0} \right)^{j} \right) \varepsilon^{j},
\end{align}
where we set $g_0 (z_0 , z_1 )$ as,
\begin{align}
g_0 (z_0 , z_1 ) &:= \left( d + \frac{z_0}{z_1 - z_0} \right)^{1+(k-N)d} \cdot \frac{e^k (z_0 , z_1 )}{z_0} \notag \\
&\quad \cdot \left( \frac{1}{(2 \pi \sqrt{-1})^{d-1}} \oint_{C_2} \frac{dz_2}{(z_2 )^N} \dots \oint_{C_d} \frac{dz_d}{(z_d )^N} \frac{\prod_{i=2}^{d} e^k (z_{i-1} , z_{i})}{\prod_{i=1}^{d-2} (2z_{i+1} - z_{i+2} - z_{i})} \cdot \frac{1}{\prod_{i=1}^{d-1} kz_i} \cdot \frac{(z_d )^{-1-(k-N)d}}{2z_1 - z_2 - z_0 } \right).
\end{align}
Since $g_0 (z_0 , z_1 )$ is holomorphic in $z_0$, we can apply Lemma 1, and the remaining steps proceed in the same way as in the proof of Theorem \ref{main1}. $\square$
\section{Introduction}\label{s:intro}
A \emph{Latin rectangle} is an $n \times m$ matrix, with $n \leq m$, on $m$ symbols such that each symbol occurs at most once in each row and column. A \emph{Latin square} is a square Latin rectangle. Let $L$ be a Latin square with symbol set $S$. We will index the rows and columns of $L$ by $S$ and we will denote the symbol in row $i$ and column $j$ of $L$ by $L_{i, j}$.
Let $\F_q$ denote the finite field with $q$ elements. For a field $\F_q$ with characteristic $p$ we will let $\F_p$ denote the prime subfield of $\F_q$. Let $\mathcal{R}_q$ and $\mathcal{N}_q$ denote the set of quadratic residues, and quadratic non-residues of the multiplicative group $\F_q^*$, respectively. Let $\{a, b\} \subseteq \F_q$ be such that $\{ab, (a-1)(b-1)\} \subseteq \mathcal{R}_q$. We can then define a $q \times q$ Latin square $\mathcal{L}[a, b]$ by
\[
(\mathcal{L}[a, b])_{i, j} = \begin{cases}
i & \text{if } j = i, \\
i + a(j-i) & \text{if } j-i \in \R_q, \\
i + b(j-i) & \text{if } j-i \in \N_q.
\end{cases}
\]
Such squares are called \emph{quadratic Latin squares}. Quadratic Latin squares have previously been used to construct perfect $1$-factorisations~\cite{MR2134185, MR4070707, hon}, mutually orthogonal Latin squares~\cite{MR1222645, MR3837138}, atomic Latin squares~\cite{MR2134185}, Falconer varieties~\cite{hon}, and maximally non-associative quasigroups~\cite{MR4275825, numquad}. Quadratic Latin squares are the main focus of this paper.
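The construction of $\mathcal{L}[a, b]$ can be illustrated computationally over a prime field, where the field arithmetic reduces to arithmetic modulo $p$. The pair $(a, b) = (3, 5)$ over $\F_7$ below is a hypothetical example of a valid pair, since $ab = 15 \equiv 1$ and $(a-1)(b-1) = 8 \equiv 1$ are both quadratic residues modulo $7$; this is only a sketch.

```python
# Hypothetical example: q = 7 prime, (a, b) = (3, 5) a valid pair.
q, a, b = 7, 3, 5
R_q = {x*x % q for x in range(1, q)}   # quadratic residues of F_q^*

def entry(i, j):
    # (L[a,b])_{i,j}: i on the diagonal, i + a(j-i) if j-i is a residue,
    # i + b(j-i) if j-i is a non-residue.
    diff = (j - i) % q
    if diff == 0:
        return i
    return (i + a*diff) % q if diff in R_q else (i + b*diff) % q

L = [[entry(i, j) for j in range(q)] for i in range(q)]
```

Each row and column of `L` is a permutation of the symbol set, as the definition guarantees for valid pairs.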
A \emph{Latin subrectangle} of a Latin square is a submatrix which is itself a Latin rectangle. A \emph{Latin subsquare} is a square Latin subrectangle. An \emph{intercalate} is a $2 \times 2$ Latin subsquare. A Latin square is called \emph{$N_2$} if it contains no intercalates. It is known~\cite{MR401504, MR396298, MR427101, MR1820691} that an $N_2$ Latin square of order $n$ exists if and only if $n \not\in \{2, 4\}$. Such squares are also known to be rare~\cite{MR1685535, MR4488316} and can be used to construct disjoint Steiner triple systems~\cite{MR401504}. We completely characterise when a quadratic Latin square is $N_2$.
\begin{thm}\label{t:n2}
Let $q$ be an odd prime power. The Latin square $\mathcal{L}[a, b]$ of order $q$ contains an intercalate if and only if,
\[
(2ab-a-b)(a+b)(a-1) \in \R_q \text{ and } \{2(a+b-2)(a-1), 2a(a+b)\} \subseteq \N_q,
\]
or both $q \equiv 1 \bmod 4$ and $b \in \{2-a, a/(2a-1), -a\}$.
\end{thm}
Let $G$ be a graph. A \emph{$1$-factor} of $G$ is a collection $M$ of its edges such that every vertex of $G$ is incident to exactly one edge in $M$. A \emph{$1$-factorisation} of $G$ is a partition of its edges into $1$-factors. Let $\mathcal{F}$ be a $1$-factorisation of $G$. Each pair of $1$-factors in $\mathcal{F}$ induces a subgraph of $G$, which is the union of cycles of even length. We will say that $\mathcal{F}$ contains these cycles. The problem of investigating $1$-factorisations which satisfy certain conditions on their cycles has received some attention. The most notable case of this is the study of perfect $1$-factorisations. If all the cycles in $\mathcal{F}$ are Hamiltonian then $\mathcal{F}$ is called \emph{perfect}. See~\cite{MR2597247, MR2311238} for applications of perfect $1$-factorisations to computer science. We will be most interested in studying $1$-factorisations of complete graphs and complete bipartite graphs. It is known that a $1$-factorisation of $K_{2n}$ exists for all positive integers $n$ and a $1$-factorisation of $K_{n, n}$ exists for all positive integers $n$.
In $1964$, Kotzig~\cite{MR0173249} conjectured that a perfect $1$-factorisation of $K_{2n}$ exists for all positive integers $n$. Despite receiving lots of attention, this conjecture remains far from resolved. There are only three known infinite families~\cite{MR0173249, MR2216455} of perfect $1$-factorisations of complete graphs. These families prove the existence of perfect $1$-factorisations of $K_{2n}$ where $2n \in \{p+1, 2p\}$ for an odd prime $p$. Perfect $1$-factorisations of $K_{2n}$ are also known to exist for some sporadic values of $n$. See~\cite{MR4070707} for a list of these values.
It is known that a perfect $1$-factorisation of $K_{n, n}$ can only exist if $n=2$ or $n$ is odd. Laufer~\cite{MR582276} showed that if there exists a perfect $1$-factorisation of $K_{2n}$ for some positive integer $n$, then there exists a perfect $1$-factorisation of $K_{2n-1, 2n-1}$. It is thus conjectured that a perfect $1$-factorisation of $K_{n, n}$ exists for all odd $n$. This conjecture also remains far from resolved. There are eight known infinite families of perfect $1$-factorisations of complete bipartite graphs~\cite{MR582276, MR1899629, MR2216455, hon}. These families prove the existence of perfect $1$-factorisations of $K_{n, n}$ where $n \in \{p, 2p-1, p^2\}$ for an odd prime $p$. There are also known perfect $1$-factorisations of $K_{n, n}$ for some sporadic values of $n$.
A contrasting problem to the construction of perfect $1$-factorisations is the construction of $1$-factorisations which contain only short cycles. H\'{a}ggkvist~\cite{hagg} asked the following question. Given a graph $G$, what is the least integer $m$ such that there is a $1$-factorisation of $G$ whose cycles are all of length at most $m$. Particular interest has been given to the case where $G$ is a complete bipartite graph. It has been conjectured that for all sufficiently large $n$ there exists a $1$-factorisation of $K_{n, n}$ whose cycles are all of length at most $6$. This problem has been studied in~\cite{MR422088, MR2071339, MR2433008, MR2475030, MR3537912}. The current best known result, due to Benson and Dukes~\cite{MR3537912}, is that, for each positive integer $n$, there exists a $1$-factorisation of $K_{n, n}$ whose cycles are all of length at most $182$. The current best known result for complete graphs is due to Dukes and Ling~\cite{MR2475030}. It states that for all positive integers $n$, there exists a $1$-factorisation of $K_{2n}$ whose cycles are all of length at most $1720$.
Let $L$ be a Latin square with symbol set $S$ of size $n$. For each $\{i, j\} \subseteq S$ with $i \neq j$, the permutation mapping row $i$ to row $j$, denoted by $r_{i, j}$, is defined by $r_{i, j}(L_{i, k}) = L_{j, k}$ for all $k \in S$. We call such permutations \emph{row permutations} of $L$ and we call each cycle in a row permutation a \emph{row cycle} of $L$. Every row cycle of $L$ has length at least two. If every row permutation of $L$ consists of a single row cycle of length $n$ then $L$ is called \emph{row-Hamiltonian}. A row cycle of length $m$ in $L$ induces a $2 \times m$ Latin subrectangle of $L$. So $L$ is $N_2$ if and only if it contains no row cycle of length $2$, and $L$ is row-Hamiltonian if and only if it does not contain any $m \times k$ Latin subrectangles with $1 < m \leq k < n$.
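Row permutations and their cycle structures can be computed directly for a small example. The square below is the hypothetical quadratic square $\mathcal{L}[3, 5]$ over $\F_7$; the assertions use only properties that hold for every Latin square (row permutations are fixed-point free, so every row cycle has length at least two).

```python
# Hypothetical example square L[3, 5] over F_7.
q, a, b = 7, 3, 5
R_q = {x*x % q for x in range(1, q)}

def entry(i, j):
    diff = (j - i) % q
    if diff == 0:
        return i
    return (i + a*diff) % q if diff in R_q else (i + b*diff) % q

def row_perm(i, j):
    # r_{i,j} maps L_{i,k} to L_{j,k} for every column k.
    return {entry(i, k): entry(j, k) for k in range(q)}

def cycle_structure(perm):
    # Sorted list of cycle lengths of a permutation given as a dict.
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        x, n = start, 0
        while x not in seen:
            seen.add(x)
            x, n = perm[x], n + 1
        lengths.append(n)
    return sorted(lengths)
```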
An \emph{ordered $1$-factorisation} of a graph $G$ is a $1$-factorisation with a total ordering on its $1$-factors. Let $L$ be an $n \times n$ Latin square. There is a known method to construct an ordered $1$-factorisation $\mathcal{E}$ of $K_{n, n}$ from $L$. Furthermore, for each row cycle of length $\ell$ in $L$, there is a corresponding cycle of length $2\ell$ in $\mathcal{E}$. This construction is reversible. If $L$ satisfies some symmetry conditions, then we can also construct a $1$-factorisation $\mathcal{F}$ of $K_{n+1}$ from $L$. For every row cycle of length $\ell$ in $L$, there is a corresponding cycle in $\mathcal{F}$ which has length $2\ell$ or $\ell+1$. These constructions will be discussed in further detail in \sref{s:back}. Many authors have used Latin squares to construct $1$-factorisations of graphs, including perfect $1$-factorisations. We will denote the $1$-factorisation of $K_{n, n}$ obtained from a Latin square $L$ by $\mathcal{E}(L)$, and we will denote the $1$-factorisation of $K_{n+1}$ obtained from a suitable Latin square $L$ by $\mathcal{F}(L)$.
Our second main result concerns the row cycles of quadratic Latin squares.
\begin{thm}\label{t:quadneg}
Let $P$ denote the set of all odd primes. There exists a function $f : P \to \mathbb{N}$ such that every quadratic Latin square of order $q = p^d$ contains a row cycle of length at most $p$ if $d \geq f(p)$. Furthermore, if $L = \mathcal{L}[a, b]$ is a quadratic Latin square of order $q$ with $\{a, b\} \not\subseteq \F_p \cap \N_q$ then $L$ contains a row cycle of length exactly $p$ if $d \geq f(p)$.
\end{thm}
We will prove \tref{t:quadneg} by constructing a suitable function $f$ where $f(p)$ is asymptotically equal to $p\log(16)/\log(p)$. We note that the function $f$ we construct is not minimal.
The \emph{cycle structure} of a permutation is a sorted list of the lengths of its cycles. Let $L = \mathcal{L}[a, b]$ be a quadratic Latin square. The cycle structure of any row permutation of $L$ is equal to the cycle structure of the row permutation $r_{0, 1}$ of $L$ or the cycle structure of the row permutation $r_{0, 1}$ of $\mathcal{L}[b, a]$ (see \lref{l:quadrowperms}). This makes it tempting to consider quadratic Latin squares when searching for perfect $1$-factorisations or $1$-factorisations which contain only short cycles. However \tref{t:quadneg} tells us that quadratic Latin squares of order $p^d$ will not be useful for constructing perfect $1$-factorisations if $d$ is too large. It also limits the usefulness of quadratic Latin squares of order $p^d$ for constructing $1$-factorisations which contain only short cycles if $d$ is too large, with the possible exception of the squares $\mathcal{L}[a, b]$ with $\{a, b\} \subseteq \F_p \cap \N_q$. Note that such squares can only exist when $d$ is odd.
An \emph{anti-perfect} $1$-factorisation of a graph is a $1$-factorisation which does not contain any Hamiltonian cycles. It is known~\cite{MR2071339} that an anti-perfect $1$-factorisation of $K_{n, n}$ exists if and only if $n \not\in \{2, 3, 5\}$. The existence question of anti-perfect $1$-factorisations of complete graphs had previously been almost completely resolved. It is known (see e.g.~\cite{MR3954017}) that an anti-perfect $1$-factorisation of $K_{2n}$ exists if $2 < 2n \equiv 2 \bmod 6$ or $4 < 2n \equiv 4 \bmod 6$. These $1$-factorisations come from Steiner $1$-factorisations. If $2n \equiv 0 \bmod 6$ then an anti-perfect $1$-factorisation of $K_{2n}$ exists if $12 \leq 2n \leq 100$. Also, the previously mentioned result of Dukes and Ling~\cite{MR2475030} implies the existence of an anti-perfect $1$-factorisation of $K_{2n}$ whenever $2n \geq 1722$. We resolve the existence problem of anti-perfect $1$-factorisations of complete graphs.
\begin{thm}\label{t:antiperf}
There exists an anti-perfect $1$-factorisation of $K_{2n}$ if $2n \geq 8$.
\end{thm}
We note that all $1$-factorisations of $K_{2n}$ are perfect if $2n \leq 6$. We also note that our contribution to \tref{t:antiperf} is little more than an observation that the method of Dukes and Ling~\cite{MR2475030} can be used to prove the existence of anti-perfect $1$-factorisations of $K_{2n}$ for almost all orders.
Let $L$ be a Latin square with symbol set $S$ of size $n$. By indexing the rows and columns of $L$ by $S$ we can consider $L$ as a set of $n^2$ triples of the form $(\text{row}, \text{column}, \text{symbol}) \in S^3$. A \emph{conjugate} of $L$ is a Latin square obtained from $L$ by uniformly permuting the elements of each triple. An \emph{atomic} Latin square is a Latin square whose conjugates are all row-Hamiltonian. Such squares have been studied in~\cite{MR2216455, MR2134185, MR2024246, MR4070707, MR1670298}. We define a Latin square of order $n$ to be \emph{anti-atomic} if none of its conjugates contain a row cycle of length $n$. We prove the following.
\begin{thm}\label{t:antiatom}
An anti-atomic Latin square of order $n$ exists for all $n \not\in \{2, 3, 5\}$.
\end{thm}
\tref{t:quadneg} suggests that we could build anti-perfect $1$-factorisations and anti-atomic Latin squares using quadratic Latin squares. We can indeed achieve this for some orders. To describe our results we need the following definition.
Let $L$ and $M$ be Latin squares with symbol sets $S$ and $T$, respectively. The \emph{direct product} of $L$ and $M$, denoted by $L \times M$, is the Latin square with symbol set $S \times T$ defined by $(L \times M)_{(a, b), (x, y)} = (L_{a, x}, M_{b, y})$. We can now state our last main result.
\begin{thm}\label{t:quadanti}
Let $n \not\in \{1, 3, 5, 15\}$ be an odd integer. There exists an anti-atomic Latin square of order $n$ which is the direct product of quadratic Latin squares. If $n$ contains a prime power divisor $m \neq 3$ with $m \equiv 3 \bmod 4$ then there exists a Latin square $L$ which is the direct product of quadratic Latin squares such that the $1$-factorisation $\mathcal{F}(L)$ of $K_{n+1}$ is well-defined and anti-perfect.
\end{thm}
\tref{t:quadanti} implies that we can also construct anti-perfect $1$-factorisations of complete bipartite graphs using direct products of quadratic Latin squares.
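The direct product construction can be sketched with two small cyclic Latin squares (hypothetical examples chosen only to illustrate the definition; they are not quadratic squares).

```python
# Hypothetical example squares: the cyclic Latin squares of orders 3 and 5.
def cyclic(n):
    return {(i, j): (i + j) % n for i in range(n) for j in range(n)}

L, M = cyclic(3), cyclic(5)
S = [(x, y) for x in range(3) for y in range(5)]   # symbol set S x T

# (L x M)_{(a,b),(x,y)} = (L_{a,x}, M_{b,y})
P = {(r, c): (L[r[0], c[0]], M[r[1], c[1]]) for r in S for c in S}
```

The product `P` is again a Latin square on the symbol set $S \times T$.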
\section{Background}\label{s:back}
Let $L$ be a Latin square with symbol set $S$ of size $n$. Let $\text{Sym}(S)$ denote the group of permutations of $S$. For $\{\sigma_1, \sigma_2, \sigma_3\} \subseteq \text{Sym}(S)$ we can define a Latin square $L(\sigma_1, \sigma_2, \sigma_3)$ which consists of the triples $(\sigma_1(r), \sigma_2(c), \sigma_3(s))$ for each triple $(r, c, s)$ of $L$. We say that a Latin square is \emph{isotopic} to $L$ if it is $L(\sigma_1, \sigma_2, \sigma_3)$ for some $\{\sigma_1, \sigma_2, \sigma_3\} \subseteq \text{Sym}(S)$. We say that a Latin square is \emph{isomorphic} to $L$ if it is $L(\sigma, \sigma, \sigma)$ for a permutation $\sigma \in \text{Sym}(S)$. Isotopy preserves the lengths of row cycles of a Latin square. We label each conjugate of $L$ by a $1$-line permutation which gives the order of the coordinates of the conjugate, relative to the order of the coordinates of the original square. So the $(1, 2, 3)$-conjugate of $L$ is itself and the $(2, 1, 3)$-conjugate is the matrix transpose of $L$. If $L$ is equal to its $(1, 3, 2)$-conjugate then $L$ is called \emph{involutory}. If $L_{i, i} = i$ for all $i \in S$ then $L$ is called \emph{idempotent}.
We now describe the method mentioned in \sref{s:intro} which can be used to construct an ordered $1$-factorisation of $K_{n, n}$ from an $n \times n$ Latin square. Let $L$ be a Latin square with symbol set $S$ of size $n$. Label the vertices of $K_{n, n}$ by $S \times \{c, s\}$ where $(x_1, y_1)$ is adjacent to $(x_2, y_2)$ if and only if $y_1 \neq y_2$. For each $i \in S$ we construct a $1$-factor $e_i$ of $K_{n, n}$ from row $i$ of $L$ as follows. For each $j \in S$ add the edge $\{(j, c), (k, s)\}$ to $e_i$ where $L_{i, j} = k$. Then the set $\mathcal{E}(L) = \{e_i : i \in S\}$ is an ordered $1$-factorisation of $K_{n, n}$, where the order on the $1$-factors comes from the order of the rows of $L$. Furthermore, if the row permutation $r_{i, j}$ of $L$ contains a cycle of length $\ell$ then the subgraph of $K_{n, n}$ induced by the $1$-factors $e_i$ and $e_j$ contains a cycle of length $2\ell$. In particular, $L$ is row-Hamiltonian if and only if $\E(L)$ is perfect. For a more detailed description of this construction see~\cite{MR2130738}. The infinite families of perfect $1$-factorisations of complete bipartite graphs from~\cite{MR1899629, hon} were constructed using row-Hamiltonian Latin squares. In fact, the family of row-Hamiltonian Latin squares constructed in~\cite{hon} is the family of quadratic Latin squares $\mathcal{L}[-1, 2]$ of prime order $p$ with $p \equiv 1 \bmod 8$ or $p \equiv 3 \bmod 8$.
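The construction of $\mathcal{E}(L)$ can be sketched for the hypothetical cyclic Latin square $L_{i,j} = i + j \pmod 5$: every row permutation of this square is a single $5$-cycle, so $L$ is row-Hamiltonian and $\mathcal{E}(L)$ should be a perfect $1$-factorisation of $K_{5,5}$.

```python
# Hypothetical example: the cyclic Latin square of order 5.
n = 5
L = {(i, j): (i + j) % n for i in range(n) for j in range(n)}

# 1-factor e_i: for each column j, the edge {(j, 'c'), (L_{i,j}, 's')}.
factors = {i: [((j, 'c'), (L[i, j], 's')) for j in range(n)] for i in range(n)}

def union_cycle_lengths(i, j):
    # The union of e_i and e_j is 2-regular; walk its cycles.
    adj = {}
    for u, v in factors[i] + factors[j]:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, lengths = set(), []
    for start in adj:
        if start in seen:
            continue
        prev, cur, length = None, start, 0
        while cur not in seen:
            seen.add(cur)
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur, length = cur, nxt, length + 1
        lengths.append(length)
    return lengths
```

Each pair of $1$-factors induces a single Hamiltonian cycle of length $2n$, matching the correspondence between row cycles of length $\ell$ and cycles of length $2\ell$.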
As mentioned in \sref{s:intro}, if a Latin square $L$ of order $n$ satisfies some symmetry conditions then we can construct a $1$-factorisation of $K_{n+1}$ from $L$. Those symmetry conditions are that $L$ must be idempotent and involutory. We will briefly outline the construction now. For a more detailed description see~\cite{MR2130738}. Let $L$ be an idempotent, involutory Latin square with symbol set $S$ of size $n$. Let $v$ be any symbol which is not in $S$. Label the vertices of $K_{n+1}$ by $S \cup \{v\}$. For each $i \in S$ we construct a $1$-factor $f_i$ of $K_{n+1}$ from row $i$ of $L$ as follows. Add the edge $\{i, v\}$ to $f_i$, and for each $j \in S \setminus \{i\}$ add the edge $\{j, k\}$ to $f_i$ where $L_{i, j} = k$. The $1$-factor $f_i$ is well defined because $L$ is idempotent and involutory. Then $\mathcal{F}(L) = \{f_i : i \in S\}$ is an ordered $1$-factorisation of $K_{n+1}$, where the order on the $1$-factors comes from the order of the rows of $L$. Before describing the relationship between the row cycles of $L$ and the cycles in $\mathcal{F}(L)$ we will need the following lemma.
\begin{lem}\label{l:lsymcyc}
Let $L$ be an idempotent, involutory Latin square with symbol set $S$. Let $i, j \in S$ and let $r = r_{i, j}$. The cycle of $r$ containing $i$ also contains $j$. If $r$ contains the cycle $(x_0, x_1, \ldots, x_k)$ then it also contains the cycle $(L_{i, x_k}, L_{i, x_{k-1}}, \ldots, L_{i, x_0})$. Furthermore these cycles coincide if and only if $x_\ell = i$ for some $\ell \in \{0, 1, \ldots, k\}$.
\end{lem}
\begin{proof}
Throughout the proof let $X = (x_0, x_1, \ldots, x_k)$ be a cycle of $r$. We first prove that $r(L_{i, x_\ell}) = L_{i, x_{\ell-1}}$ for any $\ell \in \{0, 1, \ldots, k\}$ (where we take $\ell-1$ modulo $k+1$). Write $x_{\ell-1} = L_{i, a}$ for some $a \in S$. Then $x_\ell = r(x_{\ell-1}) = r(L_{i, a}) = L_{j, a}$. So $r(L_{i, x_\ell}) = L_{j, x_\ell} = a = L_{i, x_{\ell-1}}$ because $L$ is involutory. Therefore $r$ contains the cycle $(L_{i, x_k}, L_{i, x_{k-1}}, \ldots, L_{i, x_0})$.
Suppose, for the moment, that $X$ contains $i$. Since $L$ is idempotent we know that $X$ and $(L_{i, x_k}, L_{i, x_{k-1}}, \ldots, L_{i, x_0})$ must coincide. We will show that $X$ contains $j$. Without loss of generality assume that $x_0 = i$. If $k$ is odd then we must have $x_{(k+1)/2} = L_{i, x_{(k+1)/2}}$, which is impossible since $L$ is idempotent. Therefore $k$ is even and $X$ can be written as
\[
(i, x_1, x_2, \ldots, x_{k/2}, L_{i, x_{k/2}}, \ldots, L_{i, x_2}, L_{i, x_1}).
\]
In particular we must have $r(x_{k/2}) = L_{i, x_{k/2}}$. Write $x_{k/2} = L_{i, b}$ for some $b \in S$. Then because $L$ is involutory we have that $b = L_{i, x_{k/2}} = r(x_{k/2}) = L_{j, b}$. But $L$ is idempotent, hence we must have $b = j$ and therefore $j$ is contained in $X$.
Now suppose that $X$ is equal to the cycle $(L_{i, x_k}, L_{i, x_{k-1}}, \ldots, L_{i, x_0})$. We will show that $X$ must contain $i$. We can write $x_0 = L_{i, x_\ell}$ for some $\ell \in \{0, 1, \ldots, k\}$. Then we also have $x_m = L_{i, x_{\ell-m}}$ for each $m \in \{0, 1, \ldots, k\}$ where $\ell-m$ is taken modulo $k+1$. If $\ell$ is even then taking $m=\ell/2$ we see that $x_m = L_{i, x_m}$ which implies that $x_m=i$. If $\ell$ is odd then taking $m=(\ell+1)/2$ we see that $x_m = r(x_{m-1}) = r(L_{i, x_m}) = L_{j, x_m}$ which implies that $x_m=j$. Either way, $X$ contains $i$ or $j$; since the cycle of $r$ containing $j$ also contains $i$, it follows that $X$ must contain $i$.
\end{proof}
We can now describe the relationship between the row cycles of $L$ and the cycles in $\mathcal{F}(L)$. Let $r = r_{i, j}$ be a row permutation of $L$ and let $(i, x_1, x_2, \ldots, j, \ldots, x_k)$ be the cycle of $r$ containing $i$ and $j$. Then $\mathcal{F}(L)$ contains the cycle $(v, i, x_1, x_2, \ldots, j, \ldots, x_k)$. Let $(y_0, y_1, \ldots, y_k)$ be a cycle of $r$ which does not contain $i$, so that $(L_{i, y_k}, L_{i, y_{k-1}}, \ldots, L_{i, y_0})$ is also a cycle of $r$. Then $\mathcal{F}(L)$ contains the cycle $(y_0, L_{i, y_k}, y_1, L_{i, y_{k-1}}, \ldots, y_k, L_{i, y_0})$. In particular, $L$ is row-Hamiltonian if and only if $\mathcal{F}(L)$ is perfect.
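The construction of $\mathcal{F}(L)$ can be sketched with the hypothetical idempotent, involutory square $L_{i,j} = 2i - j \pmod 5$. Its row permutation $r_{i,j}$ is $x \mapsto x + 2(j-i)$, a single $5$-cycle containing both $i$ and $j$, so $L$ is row-Hamiltonian and $\mathcal{F}(L)$ should be a perfect $1$-factorisation of $K_6$.

```python
# Hypothetical example: L_{i,j} = 2i - j (mod 5) is idempotent and involutory.
n = 5
L = {(i, j): (2*i - j) % n for i in range(n) for j in range(n)}
idempotent = all(L[i, i] == i for i in range(n))
involutory = all(L[i, L[i, j]] == j for i in range(n) for j in range(n))

v = 'v'  # the extra vertex of K_{n+1}
factors = {}
for i in range(n):
    # f_i: the edge {i, v} plus the edges {j, L_{i,j}} for j != i.
    f = {frozenset((i, v))}
    f |= {frozenset((j, L[i, j])) for j in range(n) if j != i}
    factors[i] = f

def union_cycle_lengths(i, j):
    # The union of f_i and f_j is 2-regular; walk its cycles.
    adj = {}
    for e in factors[i] | factors[j]:
        u, w = tuple(e)
        adj.setdefault(u, []).append(w)
        adj.setdefault(w, []).append(u)
    seen, lengths = set(), []
    for start in adj:
        if start in seen:
            continue
        prev, cur, length = None, start, 0
        while cur not in seen:
            seen.add(cur)
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur, length = cur, nxt, length + 1
        lengths.append(length)
    return lengths
```

The cycle containing $i$ and $j$ has length $\ell = n$, so every pair of $1$-factors induces one cycle of length $\ell + 1 = 6$, i.e. a Hamiltonian cycle of $K_6$.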
\section{Main results}\label{s:main}
The following result will be used frequently, and it is one of our primary motivators for studying quadratic Latin squares (see e.g.~\cite{hon}).
\begin{lem}\label{l:quadrowperms}
Let $q$ be an odd prime power and let $\{a, b\} \subseteq \F_q$ be such that $\{ab, (a-1)(b-1)\} \subseteq \mathcal{R}_q$.
\begin{enumerate}[(i)]
\item If $q \equiv 3 \bmod 4$ then every row permutation of the Latin square $\mathcal{L}[a, b]$ has the same cycle structure as the row permutation $r_{0,1}$ of $\mathcal{L}[a, b]$.
\item If $q \equiv 1 \bmod 4$ then every row permutation of the Latin square $\mathcal{L}[a, b]$ has the same cycle structure as either the row permutation $r_{0,1}$ of $\mathcal{L}[a, b]$ or the row permutation $r_{0,1}$ of $\mathcal{L}[b, a]$.
\end{enumerate}
\end{lem}
Therefore, to investigate the row cycles of quadratic Latin squares it suffices to consider only the row permutations mapping row $0$ to row $1$.
The structure of this section is as follows. In \sref{ss:quadcyc} we develop a general method to study the row cycles of quadratic Latin squares. We will then apply these methods in \sref{ss:n2} to characterise quadratic Latin squares which contain row cycles of length $2$. This will allow us to prove \tref{t:n2}. In \sref{ss:p-cycles} we will prove \tref{t:quadneg}, and in \sref{ss:antiperf} we will prove \tref{t:antiperf}, \tref{t:antiatom} and \tref{t:quadanti}.
\subsection{Row cycles of quadratic Latin squares}\label{ss:quadcyc}
Throughout this subsection let $q$ be an odd prime power and let $c \in \{2, 3, \ldots, q\}$. We call a pair $(a, b) \in \F_q^2$ \emph{valid} if $\{ab, (a-1)(b-1)\} \subseteq \mathcal{R}_q$. It is known~\cite{MR1222645} that the number of valid pairs in $\F_q^2$ is $(q-3)(q-5)/4+q-2$. Denote the row permutation $r_{0, 1}$ of a quadratic Latin square $\mathcal{L}[a, b]$ by $\alpha[a, b]$ and define the set,
\[
\G = \left\{\alpha[a, b] : (a, b) \in \F_q^2 \text{ is valid}\right\}.
\]
For a valid pair $(a, b) \in \F_q^2$ define the permutation $\varphi[a, b]$ by
\[
\varphi[a, b](x) = \begin{cases}
0 & \text{if } x = 0, \\
ax & \text{if } x \in \R_q, \\
bx & \text{if } x \in \N_q.
\end{cases}
\]
Then $(\mathcal{L}[a, b])_{i, j} = i + \varphi[a, b](j-i)$ for all $\{i, j\} \subseteq \F_q$. Let $\alpha = \alpha[a, b]$ and let $\varphi = \varphi[a, b]$. Then $\alpha$ is defined by,
\[
\alpha(j) = \varphi(\varphi^{-1}(j)-1)+1.
\]
A straightforward computation shows that $\varphi[a, b]^{-1} = \varphi[a^{-1}, b^{-1}]$ if $a \in \R_q$ and $\varphi[a, b]^{-1} = \varphi[b^{-1}, a^{-1}]$ if $a \in \N_q$.
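These formulas can be checked numerically for a hypothetical valid pair, here $(a, b) = (3, 5)$ over $\F_7$; note that $3 \in \mathcal{N}_7$, so the relevant inverse is $\varphi[b^{-1}, a^{-1}]$. This is only a sketch of the verification.

```python
# Hypothetical valid pair (a, b) = (3, 5) over F_7; a = 3 is a non-residue.
q, a, b = 7, 3, 5
R_q = {x*x % q for x in range(1, q)}

def phi(a0, b0, x):
    # phi[a0, b0]: 0 -> 0, x -> a0*x on residues, x -> b0*x on non-residues.
    if x % q == 0:
        return 0
    return (a0*x) % q if x % q in R_q else (b0*x) % q

ainv, binv = pow(a, -1, q), pow(b, -1, q)
phi_inv = lambda y: phi(binv, ainv, y)   # phi[a,b]^{-1} = phi[b^{-1}, a^{-1}]

def L_entry(i, j):
    # (L[a,b])_{i,j} = i + phi[a,b](j - i)
    return (i + phi(a, b, j - i)) % q

# r_{0,1} read off from the square, versus the formula for alpha[a, b].
r01 = {L_entry(0, k): L_entry(1, k) for k in range(q)}
alpha = {j: (phi(a, b, phi_inv(j) - 1) + 1) % q for j in range(q)}
```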
We now introduce some tools which can be used to investigate the cycles of a permutation $\alpha \in \G$.
We will call a cycle of length $k$ in a permutation a $k$-cycle. For a sequence $z$, we denote the $i$-th element of $z$ by $z_i$, starting from $z_0$. For a cycle $\beta$ of $\alpha$ and element $j$ in the cycle $\beta$ we will write $j \in \beta$. Let $\eta : \F_q^* \to \mathbb{C}$ denote the quadratic character, and extend $\eta$ to $\F_q$ by defining $\eta(0) = 0$.
\begin{defin}\label{d:asat}
Let $z \in \{-1, 0, 1\}^{2c}$ and $\alpha \in \G$. Suppose that there is a $c$-cycle $\beta$ of $\alpha$ and element $j \in \beta$ such that $z_{2k} = \eta(\alpha^k(j))$ and $z_{2k+1} = \eta(\varphi^{-1}(\alpha^k(j))-1)$ for each $k \in \{0, 1, 2, \ldots, c-1\}$. Then we say that $\alpha$ \emph{satisfies} $z$ with cycle $\beta$ and element $j \in \beta$.
\end{defin}
We will sometimes simply say that $\alpha$ satisfies $z$ or that $\alpha$ satisfies $z$ with element $j \in \F_q$. Let $\alpha \in \G$ and suppose that $\alpha$ satisfies a sequence $z \in \{-1, 0, 1\}^{2c}$ with cycle $\beta$ and element $j \in \beta$, where $z_k = 0$ for some $k \in \{0, 1, 2, \ldots, 2c-1\}$. Then either $\alpha^m(j) = 0$ or $\varphi^{-1}(\alpha^m(j))-1=0$ for some $m \in \{0, 1, 2, \ldots, c-1\}$. The first case implies that $0 \in \beta$ and the second implies that $\beta$ contains $\alpha^m(j) = \varphi(1) = a$. We will denote the cycles of $\alpha$ containing $0$ and $a$ by $\alpha_0$ and $\alpha_a$, respectively. We will deal with these cycles separately, hence we will mostly be concerned with sequences $z \in \{-1, 1\}^{2c}$.
For a positive integer $i$ and sequence $z \in \{-1, 1\}^{2c}$ let $z^i$ denote the sequence obtained by cyclically rotating $z$ by $i$ positions. That is, $z^i_{k} = z_{k+i \bmod 2c}$. We note the following simple observation.
\begin{lem}\label{l:cycshift}
Let $\alpha \in \G$. If $\alpha$ satisfies $z \in \{-1, 1\}^{2c}$ then $\alpha$ satisfies $z^{2i}$ for all $i \in \{0, 1, 2, \ldots, c-1\}$.
\end{lem}
\begin{proof}
Suppose that $\alpha$ satisfies $z$ with cycle $\beta$ and element $j \in \beta$. It is simple to verify, using \dref{d:asat}, that $\alpha$ satisfies $z^{2i}$ with cycle $\beta$ and element $\alpha^i(j) \in \beta$.
\end{proof}
We will need the following notation to deal with sequences $z \in \{-1, 1\}^{2c}$.
\begin{defin}
Let $\{i, j\} \subseteq \{0, 1, 2, \ldots, 2c-1\}$ with $i \leq j$. For a sequence $z \in \{-1, 1\}^{2c}$ we define,
\[
\begin{aligned}
& e^+(i, j) = |\{i \leq k \leq j : k \text{ is even and } z_k = 1\}|, \\
& o^+(i, j) = |\{i \leq k \leq j : k \text{ is odd and } z_k = 1\}|, \\
& e^-(i, j) = |\{i \leq k \leq j : k \text{ is even and } z_k = -1\}|, \\
& o^-(i, j) = |\{i \leq k \leq j : k \text{ is odd and } z_k = -1\}|.
\end{aligned}
\]
Also define,
\[
\begin{aligned}
&u^+(i, j) = o^+(i, j) - e^+(i, j), \\
&u^-(i, j) = o^-(i, j) - e^-(i, j), \\
&v^+(i, j) = o^+(i, j) - e^-(i, j), \\
&v^-(i, j) = o^-(i, j) - e^+(i, j).
\end{aligned}
\]
\end{defin}
For $i > j$ we define $u^+(i, j) = u^-(i, j) = v^+(i, j) = v^-(i, j) = 0$. We note that these quantities implicitly depend on the choice of sequence $z \in \{-1, 1\}^{2c}$. We now prove a result concerning how permutations in $\G$ act on elements of $\F_q$. We will need to consider the cases $a \in \mathcal{R}_q$ and $a \in \mathcal{N}_q$ separately. We will repeatedly use the simple property that $u^+(i, j) + u^+(j+1, k) = u^+(i, k)$ for any $i \leq j \leq k$. The same holds when replacing $u^+$ by $u^-$, $v^+$ or $v^-$.
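The counting quantities and the splitting property can be checked mechanically. The sketch below (illustrative Python, not the paper's code; the sample sequence is arbitrary) computes $e^\pm$, $o^\pm$, $u^\pm$, $v^\pm$ for a sequence $z$, and verifies both the splitting property and the identity $u^-(0, 2c-1) = -u^+(0, 2c-1)$ used later.

```python
# Illustrative computation of the quantities u^+, u^-, v^+, v^- for a sequence z.
def quantities(z, i, j):
    ep = sum(1 for k in range(i, j + 1) if k % 2 == 0 and z[k] == 1)
    op = sum(1 for k in range(i, j + 1) if k % 2 == 1 and z[k] == 1)
    em = sum(1 for k in range(i, j + 1) if k % 2 == 0 and z[k] == -1)
    om = sum(1 for k in range(i, j + 1) if k % 2 == 1 and z[k] == -1)
    return {'u+': op - ep, 'u-': om - em, 'v+': op - em, 'v-': om - ep}

def u_plus(z, i, j):
    return 0 if i > j else quantities(z, i, j)['u+']   # convention for i > j

z = [1, -1, -1, 1, 1, 1]                               # an example with c = 3
n = len(z)
# splitting property: u^+(i, j) + u^+(j+1, k) = u^+(i, k) for i <= j <= k
for i in range(n):
    for j in range(i, n):
        for k in range(j, n):
            assert u_plus(z, i, j) + u_plus(z, j + 1, k) == u_plus(z, i, k)
# over the full range, u^- is the negative of u^+
assert quantities(z, 0, n - 1)['u-'] == -quantities(z, 0, n - 1)['u+']
```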
\begin{lem}\label{l:junalphr}
Let $\alpha = \alpha[a, b] \in \Gamma$ with $a \in \mathcal{R}_q$. Let $\varphi = \varphi[a, b]$ and $z \in \{-1, 1\}^{2c}$. Suppose that $\alpha$ satisfies $z$ with element $j \in \F_q$. Then for all $m \in \{0, 1, 2, \ldots, c\}$,
\begin{equation}\label{e:junalph}
\alpha^m(j) = a^{u^+(0, 2m-1)}b^{u^-(0, 2m-1)}j + \sum_{k=1}^{2m} (-1)^{k}a^{u^+(k, 2m-1)}b^{u^-(k, 2m-1)},
\end{equation}
and
\begin{equation}\label{e:junalphvar}
\varphi^{-1}(\alpha^m(j))-1 = a^{u^+(0, 2m)}b^{u^-(0, 2m)}j + \sum_{k=1}^{2m+1} (-1)^{k}a^{u^+(k, 2m)}b^{u^-(k, 2m)}.
\end{equation}
\end{lem}
\begin{proof}
We will prove the claim by induction on $m$. If $m=0$ then \eref{e:junalph} simply states that $\alpha^m(j) = j$, which is true. Since $\alpha$ satisfies $z$ we know that $\eta(j) = z_0$. Hence $\varphi^{-1}(j)-1 = a^{-e^+(0, 0)}b^{-e^-(0, 0)}j-1$, which agrees with \eref{e:junalphvar}. Now suppose that~\eref{e:junalph} and \eref{e:junalphvar} hold for some $m \geq 0$. Then,
\[
\begin{aligned}
\alpha^{m+1}(j) &= \varphi(\varphi^{-1}(\alpha^{m}(j))-1)+1 \\
&= a^{o^+(2m+1, 2m+1)}b^{o^-(2m+1, 2m+1)}(\varphi^{-1}(\alpha^{m}(j))-1)+1 \\
&= a^{u^+(2m+1, 2m+1)}b^{u^-(2m+1, 2m+1)}\left(a^{u^+(0, 2m)}b^{u^-(0, 2m)}j + \sum_{k=1}^{2m+1} (-1)^{k}a^{u^+(k, 2m)}b^{u^-(k, 2m)}\right)+1 \\
&= a^{u^+(0, 2m+1)}b^{u^-(0, 2m+1)}j + \left(\sum_{k=1}^{2m+1} (-1)^{k}a^{u^+(k, 2m+1)}b^{u^-(k, 2m+1)}\right) + 1 \\
&= a^{u^+(0, 2m+1)}b^{u^-(0, 2m+1)}j + \sum_{k=1}^{2m+2} (-1)^{k}a^{u^+(k, 2m+1)}b^{u^-(k, 2m+1)},
\end{aligned}
\]
which agrees with~\eref{e:junalph}. Using this we have that,
\[
\begin{aligned}
\varphi^{-1}(\alpha^{m+1}(j))-1 &= a^{-e^+(2m+2, 2m+2)}b^{-e^-(2m+2, 2m+2)}\alpha^{m+1}(j)-1 \\
&= a^{u^+(2m+2, 2m+2)}b^{u^-(2m+2, 2m+2)}\bigg(a^{u^+(0, 2m+1)}b^{u^-(0, 2m+1)}j + \\
&\hspace{5mm} \sum_{k=1}^{2m+2} (-1)^{k}a^{u^+(k, 2m+1)}b^{u^-(k, 2m+1)}\bigg)-1 \\
&= a^{u^+(0, 2m+2)}b^{u^-(0, 2m+2)}j + \left(\sum_{k=1}^{2m+2} (-1)^{k}a^{u^+(k, 2m+2)}b^{u^-(k, 2m+2)}\right) - 1 \\
&= a^{u^+(0, 2m+2)}b^{u^-(0, 2m+2)}j + \sum_{k=1}^{2m+3} (-1)^{k}a^{u^+(k, 2m+2)}b^{u^-(k, 2m+2)},
\end{aligned}
\]
which agrees with~\eref{e:junalphvar} and so the lemma follows by induction.
\end{proof}
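The closed form in \eref{e:junalph} can be checked numerically on a single orbit. The following sketch (illustrative Python, not part of the paper; the prime $q = 11$, the valid pair $(a, b) = (3, 9)$ with $a \in \mathcal{R}_q$, and the starting point $j = 2$ are arbitrary choices) records the sign sequence $z$ along an orbit and compares the formula against direct iteration of $\alpha$.

```python
# Numerical check of the closed form for alpha^m(j), for one orbit mod q = 11.
q = 11
R = {pow(x, 2, q) for x in range(1, q)}            # quadratic residues R_q

def inv(x):
    return pow(x, q - 2, q)

def phi(a, b, x):
    x %= q
    if x == 0:
        return 0
    return a * x % q if x in R else b * x % q

a, b = 3, 9                                        # valid pair with a a residue
pinv = lambda y: phi(inv(a), inv(b), y)            # phi^{-1}, since a is a residue
alpha = lambda j: (phi(a, b, pinv(j) - 1) + 1) % q

eta = lambda x: 0 if x % q == 0 else (1 if x % q in R else -1)
j, steps = 2, 4
zs, orbit, x = [], [j], j
for _ in range(steps):                             # record z along the orbit
    zs += [eta(x), eta(pinv(x) - 1)]
    x = alpha(x)
    orbit.append(x)
assert 0 not in zs            # the closed form needs z-entries in {-1, 1}

def u(sign, i, jj):           # u^+ for sign = 1, u^- for sign = -1
    return sum(1 if k % 2 else -1 for k in range(i, jj + 1) if zs[k] == sign)

def closed_form(m):           # right-hand side of the first formula of the lemma
    e = q - 1                 # reduce exponents modulo q - 1 (Fermat)
    total = pow(a, u(1, 0, 2*m - 1) % e, q) * pow(b, u(-1, 0, 2*m - 1) % e, q) * j
    for k in range(1, 2 * m + 1):
        term = pow(a, u(1, k, 2*m - 1) % e, q) * pow(b, u(-1, k, 2*m - 1) % e, q)
        total += term if k % 2 == 0 else -term
    return total % q

for m in range(steps + 1):
    assert closed_form(m) == orbit[m]
```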
Using analogous arguments we can prove the following result.
\begin{lem}\label{l:junalphnr}
Let $\alpha = \alpha[a, b] \in \Gamma$ with $a \in \mathcal{N}_q$. Let $\varphi = \varphi[a, b]$ and $z \in \{-1, 1\}^{2c}$. Suppose that $\alpha$ satisfies $z$ with element $j \in \F_q$. Then for all $m \in \{0, 1, 2, \ldots, c\}$,
\begin{equation*}
\alpha^m(j) = a^{v^+(0, 2m-1)}b^{v^-(0, 2m-1)}j + \sum_{k=1}^{2m} (-1)^{k}a^{v^+(k, 2m-1)}b^{v^-(k, 2m-1)},
\end{equation*}
and
\[
\varphi^{-1}(\alpha^m(j))-1 = a^{v^+(0, 2m)}b^{v^-(0, 2m)}j + \sum_{k=1}^{2m+1} (-1)^{k}a^{v^+(k, 2m)}b^{v^-(k, 2m)}.
\]
\end{lem}
Let $\alpha = \alpha[a, b] \in \Gamma$. Suppose that $a \in \mathcal{R}_q$ and consider \lref{l:junalphr}. Setting $m=c$ in \eref{e:junalph} we see that,
\[
j = a^{u^+(0, 2c-1)}b^{u^-(0, 2c-1)}j + \sum_{k=1}^{2c} (-1)^{k}a^{u^+(k, 2c-1)}b^{u^-(k, 2c-1)}.
\]
In order to investigate this equation we need to distinguish two cases, depending on whether or not $a^{u^+(0, 2c-1)}b^{u^-(0, 2c-1)}$ is equal to $1$. We also need to make the analogous case distinction when $a \in \mathcal{N}_q$.
Recall that an $m$-th root of unity in $\F_q$ is an element $x$ such that $x^m = 1$.
We will say that all non-zero elements of $\F_q$ are $0$-th roots of unity. If $m$ is a negative integer then we will say that $x$ is an $m$-th root of unity if $x^{-1}$ is a $(-m)$-th root of unity. For $\alpha = \alpha[a, b] \in \G$ and $z \in \{-1, 1\}^{2c}$ define,
\[
t(z, \alpha) = \begin{cases}
u^+(0, 2c-1) & \text{if } a \in \mathcal{R}_q, \\
v^+(0, 2c-1) & \text{if } a \in \mathcal{N}_q.
\end{cases}
\]
We note that $e^-(0, 2c-1) = c-e^+(0, 2c-1)$ and $o^-(0, 2c-1) = c-o^+(0, 2c-1)$. Hence
\[
u^-(0, 2c-1) = c-e^+(0, 2c-1) - (c-o^+(0, 2c-1)) = o^+(0, 2c-1) - e^+(0, 2c-1) = -u^+(0, 2c-1).
\]
So $a^{u^+(0, 2c-1)}b^{u^-(0, 2c-1)} = (ab^{-1})^{t(z, \alpha)}$ if $a \in \R_q$ and $a^{v^+(0, 2c-1)}b^{v^-(0, 2c-1)} = (ab^{-1})^{t(z, \alpha)}$ if $a \in \N_q$.
We therefore make the following definition.
\begin{defin}
Let $z \in \{-1, 1\}^{2c}$ and $\alpha = \alpha[a, b] \in \G$. We say that the pair $(z, \alpha)$ is of \emph{Type One} if $ab^{-1}$ is not a $t(z, \alpha)$-th root of unity in $\F_q$. Otherwise we say that $(z, \alpha)$ is of \emph{Type Two}.
\end{defin}
Fix a permutation $\alpha \in \G$. We will say that a sequence $z \in \{-1, 1\}^{2c}$ is a Type One sequence or Type Two sequence according to whether the pair $(z, \alpha)$ is of Type One or Type Two. Let $\beta \not\in \{\alpha_0, \alpha_a\}$ be a cycle of $\alpha$ and let $j \in \beta$. Using \dref{d:asat} we can associate a sequence $z \in \{-1, 1\}^{2c}$ to the cycle $\beta$ and element $j \in
\beta$. Furthermore, by \lref{l:cycshift} we know that changing the element $j$ of $\beta$ simply cyclically rotates the sequence $z$ by an even number of positions. It is clear that $(z, \alpha)$ is of Type One if and only if $(z^{2i}, \alpha)$ is of Type One, for all $i \in \{0, 1, 2, \ldots, c-1\}$. Thus we define $\beta$ to be a Type One cycle if $(z, \alpha)$ is of Type One, and we define $\beta$ to be a Type Two cycle otherwise.
Our goal in this subsection is to develop a method to investigate the cycles of a permutation $\alpha \in \G$. To do this we will study Type One cycles, Type Two cycles, and the cycles $\alpha_0$ and $\alpha_a$, separately.
\subsubsection{Type One cycles}\label{sss:t1}
The goal of this subsection is to prove necessary and sufficient conditions for a permutation in $\G$ to contain a Type One cycle of length $c$. Let $k$ be a positive integer and $\{x, y\} \subseteq \{-1, 1\}^{k}$. We define the \emph{concatenation} of $x$ and $y$, denoted by $x \oplus y$, to be the sequence $(x_0, x_1, \ldots, x_{k-1}, y_0, y_1, \ldots, y_{k-1}) \in \{-1, 1\}^{2k}$.
\begin{defin}
Let $k$ be a positive integer and $z \in \{-1, 1\}^{2k}$. We call $z$ \emph{even periodic} if we can write $z = \bigoplus_{i=1}^{k/d} y$ for some proper divisor $d$ of $k$ and some $y \in \{-1, 1\}^{2d}$.
\end{defin}
Let $z \in \{-1, 1\}^{2c}$ be even periodic, so that we can write $z = \bigoplus_{i=1}^{c/d} y$ for some proper divisor $d$ of $c$ and some $y \in \{-1, 1\}^{2d}$. Observe the following simple consequence of the even periodicity of $z$.
\[
u^+(k, 2c-1) = \left(\frac cd - \left\lceil \frac{k+1}{2d} \right\rceil \right) u^+(0, 2d-1) + u^+(k \bmod 2d, 2d-1),
\]
for all $k \in \{0, 1, 2, \ldots, 2c-1\}$. In particular we have that $u^+(0, 2c-1) = (c/d)u^+(0, 2d-1)$. The same holds when replacing $u^+$ by $u^-$, $v^+$ or $v^-$. We will now show that a permutation in $\G$ cannot satisfy an even periodic sequence of Type One.
\begin{lem}\label{l:eperiod}
Let $\alpha \in \G$ and $z \in \{-1, 1\}^{2c}$ be an even periodic sequence.
If $\alpha$ satisfies $z$ then $(z, \alpha)$ is of Type Two.
\end{lem}
\begin{proof}
Write $\alpha = \alpha[a, b]$ for some valid pair $(a, b) \in \F_q^2$. Write $z = \bigoplus_{k=1}^{c/d} y$ for some proper divisor $d$ of $c$ and some $y \in \{-1, 1\}^{2d}$.
Assume that $\alpha$ satisfies $z$ with cycle $\beta$ and element $j \in \beta$. Suppose that $a \in \mathcal{R}_q$. From \lref{l:junalphr} we know that,
\begin{equation}\label{e:alphnj}
\begin{aligned}
\alpha^c(j) &= a^{u^+(0, 2c-1)}b^{u^-(0, 2c-1)}j + \sum_{k=1}^{2c} (-1)^{k}a^{u^+(k, 2c-1)}b^{u^-(k, 2c-1)} \\
&= (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d}j + \\
&\hspace{5mm} \sum_{k=1}^{2c} (-1)^{k}\left((a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d-\lceil (k+1)/(2d) \rceil} \cdot a^{u^+(k \bmod 2d, 2d-1)}b^{u^-(k \bmod 2d, 2d-1)}\right) \\
&= (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d}j +
\sum_{k=1}^{2d}(-1)^k \sum_{i=0}^{c/d-1} (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^ia^{u^+(k, 2d-1)}b^{u^-(k, 2d-1)} \\
&= (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d}j +
\sum_{i=0}^{c/d-1} (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^i \sum_{k=1}^{2d}(-1)^ka^{u^+(k, 2d-1)}b^{u^-(k, 2d-1)}.
\end{aligned}
\end{equation}
Now suppose, for a contradiction, that $z$ is a Type One sequence. So $a^{u^+(0, 2c-1)}b^{u^-(0, 2c-1)} \neq 1$, hence $a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)} \neq 1$ also. Thus we can write,
\[
\sum_{i=0}^{c/d-1} (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^i = \frac{(1-(a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d})}{(1-a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})}.
\]
Substituting this into \eref{e:alphnj} we have that,
\[
\alpha^c(j) = (a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d}j + \frac{(1-(a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})^{c/d})}{(1-a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)})}\sum_{k=1}^{2d}(-1)^ka^{u^+(k, 2d-1)}b^{u^-(k, 2d-1)}.
\]
Since $\alpha^c(j) = j$ we obtain,
\[
j = \frac1{1-a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)}}\sum_{k=1}^{2d}(-1)^ka^{u^+(k, 2d-1)}b^{u^-(k, 2d-1)}.
\]
By clearing the denominator we obtain,
\[
j = a^{u^+(0, 2d-1)}b^{u^-(0, 2d-1)}j + \sum_{k=1}^{2d}(-1)^ka^{u^+(k, 2d-1)}b^{u^-(k, 2d-1)} = \alpha^d(j)
\]
from \eref{e:junalph}. This contradicts the fact that $\beta$ is a $c$-cycle. The case where $a \in \mathcal{N}_q$ can be handled using analogous arguments.
\end{proof}
Let $\alpha \in \G$. Define,
\[
X_{c, \alpha} = \{z \in \{-1, 1\}^{2c} : z \text{ is not even periodic and } (z, \alpha) \text{ is of Type One}\}.
\]
Define an equivalence relation $\sim$ on $X_{c, \alpha}$ by $z \sim y$ if and only if $z = y^{2i}$ for some $i \in \{0, 1, \ldots, c-1\}$. It is simple to verify that $\sim$ is indeed an equivalence relation on this set. For notational convenience we will identify an equivalence class of $X_{c, \alpha} / \sim$ with an element of that equivalence class.
By combining \lref{l:cycshift} and \lref{l:eperiod} we have the following result.
\begin{lem}\label{l:cycsseqs}
A permutation $\alpha \in \G$ contains a Type One $c$-cycle if and only if it satisfies a sequence in $X_{c, \alpha} / \sim$.
\end{lem}
For a positive integer $m$ let $\Gamma_m$ be the subset of $\Gamma$ consisting of elements $\alpha[a, b]$ where $ab^{-1}$ is not a $k$-th root of unity, for any $k \in \{1, 2, \ldots, m\}$. We note that if $\alpha \in \G_c$ then the set $X_{c, \alpha}$ depends only on whether $a \in \mathcal{R}_q$ or $a \in \mathcal{N}_q$. Hence we use the term $X_{c, 1}$ to denote the set $X_{c, \alpha}$ for some $\alpha = \alpha[a, b] \in \G_c$ with $a \in \mathcal{R}_q$, and we write $X_{c, 2}$ to denote the set $X_{c, \alpha}$ for some $\alpha = \alpha[a, b] \in \G_c$ with $a \in \mathcal{N}_q$.
We will now find necessary and sufficient conditions for a permutation $\alpha \in \G$ to satisfy a sequence in $X_{c, \alpha} / \sim$. Let $z \in \{-1, 1\}^{2c}$ and define the bivariate Laurent polynomial $F_{0, z}$ over $\F_q$ by
\[
F_{0, z}(x, y) = (1-(xy^{-1})^{u^+(0, 2c-1)})\sum_{k=1}^{2c}(-1)^kx^{u^+(k, 2c-1)}y^{u^-(k, 2c-1)}.
\]
Then for $i \in \{1, 2, \ldots, 2c-1\}$ define,
\[
F_{i, z}(x, y) = x^{u^+(0, i-1)}y^{u^-(0, i-1)}F_{0, z}(x, y) + (1-(xy^{-1})^{u^+(0, 2c-1)})^2\sum_{k=1}^{i} (-1)^kx^{u^+(k, i-1)}y^{u^-(k, i-1)}.
\]
Also define the bivariate Laurent polynomial
\[
G_{0, z}(x, y) = (1-(xy^{-1})^{v^+(0, 2c-1)})\sum_{k=1}^{2c}(-1)^kx^{v^+(k, 2c-1)}y^{v^-(k, 2c-1)},
\]
and for $i \in \{1, 2, \ldots, 2c-1\}$ define,
\[
G_{i, z}(x, y) = x^{v^+(0, i-1)}y^{v^-(0, i-1)}G_{0, z}(x, y) + (1-(xy^{-1})^{v^+(0, 2c-1)})^2\sum_{k=1}^{i} (-1)^kx^{v^+(k, i-1)}y^{v^-(k, i-1)}.
\]
\begin{lem}\label{l:t1necsuf}
Let $\alpha = \alpha[a, b] \in \Gamma$ and $z \in X_{c, \alpha} / \sim$. If $a \in \mathcal{R}_q$ then $\alpha$ satisfies $z$ if and only if $\eta(F_{i, z}(a, b)) = z_i$ for all $i \in \{0, 1, 2, \ldots, 2c-1\}$. If $a \in \mathcal{N}_q$ then $\alpha$ satisfies $z$ if and only if $\eta(G_{i, z}(a, b)) = z_i$ for all $i \in \{0, 1, 2, \ldots, 2c-1\}$.
\end{lem}
\begin{proof}
We will prove the lemma in the case where $a \in \R_q$. The case where $a \in \N_q$ can be proven using analogous arguments. Suppose that $\alpha$ satisfies $z$ with element $j \in \F_q$. It follows from \lref{l:junalphr} that $F_{2i, z}(a, b) = (1-(ab^{-1})^{u^+(0, 2c-1)})^2\alpha^i(j)$ and $F_{2i+1, z}(a, b) = (1-(ab^{-1})^{u^+(0, 2c-1)})^2(\varphi^{-1}(\alpha^i(j))-1)$ for all $i \in \{0, 1, 2, \ldots, c-1\}$. Since $\eta$ takes the value $1$ on non-zero squares and $\alpha$ satisfies $z$, it follows that $\eta(F_{i, z}(a, b)) = z_i$ for all $i \in \{0, 1, 2, \ldots, 2c-1\}$. Now suppose that $\eta(F_{i, z}(a, b)) = z_i$ for all $i \in \{0, 1, 2, \ldots, 2c-1\}$. It is simple to verify that $\alpha$ satisfies $z$ with element $j = F_{0, z}(a, b)/(1-(ab^{-1})^{u^+(0, 2c-1)})^2$.
\end{proof}
Combining \lref{l:cycsseqs} and \lref{l:t1necsuf} we can obtain necessary and sufficient conditions for a permutation in $\G$ to contain a Type One cycle of length $c$. We will see in \sref{ss:n2} that we can use these conditions to bound the number of permutations in $\G_c$ which contain a Type One $c$-cycle.
\subsubsection{Type Two cycles}\label{sss:t2}
In this subsection we provide necessary conditions for a permutation in $\G$ to contain a Type Two $c$-cycle. We also describe how to use these conditions to bound the number of permutations in $\G_c$ which contain a Type Two cycle of length $c$. For a permutation $\alpha \in \G$, define $Y_{c, \alpha}$ to be the set of sequences $z \in \{-1, 1\}^{2c}$ such that $(z, \alpha)$ is of Type Two. Note that for a permutation $\alpha = \alpha[a, b] \in \G_c$, the set $Y_{c, \alpha}$ depends only on whether $a \in \mathcal{R}_q$ or $a \in \mathcal{N}_q$. Therefore we will write $Y_{c, 1}$ for $Y_{c, \alpha}$ where $\alpha = \alpha[a, b] \in \G_c$ with $a \in \mathcal{R}_q$, and $Y_{c, 2}$ for $Y_{c, \alpha}$ where $\alpha = \alpha[a, b] \in \G_c$ with $a \in \mathcal{N}_q$. The following is a consequence of \lref{l:cycshift}.
\begin{lem}\label{l:t2cycs}
A permutation $\alpha \in \G$ contains a Type Two $c$-cycle if and only if it satisfies a sequence in $Y_{c, \alpha} / \sim$.
\end{lem}
Let $f(x_1, x_2, \ldots, x_k)$ be a Laurent polynomial over $\F_q$ and let $i \in \{1, 2, \ldots, k\}$. The \emph{total degree} of $f$ in $x_i$, denoted by $\deg(f, x_i)$, is the difference between the maximum power of $x_i$ in $f$, and the minimum power of $x_i$ in $f$. If $k=1$ then we say that $f$ has total degree $\deg(f, x_1)$.
\begin{lem}\label{l:t2polys}
Let $\alpha = \alpha[a, b] \in \Gamma$ and suppose that $\alpha$ satisfies a Type Two sequence $z \in \{-1, 1\}^{2c}$. Then $(a, b)$ is a root of a bivariate Laurent polynomial $g$ over $\F_q$ which depends only on $z$ and on whether $a \in \mathcal{R}_q$ or $a \in \mathcal{N}_q$. Furthermore, $\deg(g, x) \leq 2c$ and $\deg(g, y) \leq 2c$.
\end{lem}
\begin{proof}
First suppose that $a \in \mathcal{R}_q$. As $\alpha$ satisfies $z$ it follows from \lref{l:junalphr} that $(a, b)$ is a root of the bivariate Laurent polynomial,
\[
g(x, y) = \sum_{k=1}^{2c} (-1)^kx^{u^+(k, 2c-1)}y^{u^-(k, 2c-1)}.
\]
The total degree of $g$ in $y$ is equal to the quantity $\max\{u^-(k, 2c-1) : k \in \{1, 2, \ldots, 2c\}\} - \min\{u^-(k, 2c-1) : k \in \{1, 2, \ldots, 2c\}\} \leq 2c$ because $-c \leq u^-(k, 2c-1) \leq c$ for any $k \in \{0, 1, 2, \ldots, 2c-1\}$. Similarly $\deg(g, x) \leq 2c$. The case where $a \in \mathcal{N}_q$ can be handled using analogous arguments.
\end{proof}
We will denote the Laurent polynomial $g$ in \lref{l:t2polys} associated to the sequence $z \in \{-1, 1\}^{2c}$ by $g_z$.
\lref{l:t2cycs} and \lref{l:t2polys} can be used to bound the number of permutations in $\G_c$ which contain a Type Two $c$-cycle. The number of roots of a non-zero bivariate Laurent polynomial $f(x, y)$ over $\F_q$ is bounded by $q\deg(f, y)$. If a permutation $\alpha[a, b] \in \G_c$ with $a \in \R_q$ contains a Type Two $c$-cycle then $(a, b)$ must be a root of $g_z$ for some $z \in Y_{c, 1} / \sim$. If $g_z$ is not the zero polynomial for any $z \in Y_{c, 1} / \sim$, then we can use \lref{l:t2polys} to bound the number of permutations $\alpha[a, b] \in \G_c$ with $a \in \R_q$ which contain a Type Two $c$-cycle. Similarly, if $g_z$ is not the zero polynomial for any $z \in Y_{c, 2} / \sim$ then we can bound the number of permutations $\alpha[a, b] \in \G_c$ with $a \in \N_q$ which contain a Type Two $c$-cycle. However, we note that if $c$ is equal to the characteristic of $\F_q$, then there do exist sequences $z \in (Y_{c, 1} / \sim) \cup (Y_{c, 2} / \sim)$ such that $g_z$ is the zero polynomial. This fact will be used in \sref{ss:p-cycles}.
\subsubsection{$\alpha_0$ and $\alpha_a$}\label{sss:a0aa}
In this subsection we bound the number of permutations $\alpha \in \G$ such that $\alpha_0$ or $\alpha_a$ has length $c$.
\begin{lem}\label{l:a0aapoly}
Let $\alpha = \alpha[a, b] \in \Gamma$ and $m \in \{0, 1, 2, \ldots, q-1\}$. There is a set $T_m$ containing at most $4^m$ trivariate Laurent polynomials over $\F_q$ which satisfies the following property: For every $j \in \F_q$, there is some $t \in T_m$ such that $\alpha^m(j) = t(a, b, j)$. Furthermore, for each $t(x, y, z) \in T_m$ it holds that $\deg(t, x) \leq m$ and $\deg(t, y) \leq m$.
\end{lem}
\begin{proof}
We will prove the claim assuming that $a \in \mathcal{R}_q$. Similar arguments can be used to prove the claim in the case where $a \in \N_q$. We will proceed by induction on $m$. When $m=0$, the set $T_0$ containing the polynomial $t(x, y, z)=z$ suffices. Now suppose that the claim is true for some $m \geq 0$. By induction we know that $\alpha^{m+1}(j) = \alpha(t(a, b, j))$ for some $t \in T_m$. Then,
\[
\alpha^{m+1}(j) = \begin{cases}
t(a, b, j) - a + 1 & \text{if } \{t(a, b, j), a^{-1}t(a, b, j)-1\} \subseteq \mathcal{R}_q, \\
a^{-1}bt(a, b, j)-b+1 & \text{if } t(a, b, j) \in \mathcal{R}_q \text{ and } a^{-1}t(a, b, j)-1 \in \mathcal{N}_q, \\
ab^{-1}t(a, b, j) - a + 1 & \text{if } t(a, b, j) \in \mathcal{N}_q \text{ and } b^{-1}t(a, b, j)-1 \in \mathcal{R}_q, \\
t(a, b, j) - b + 1 & \text{if } \{t(a, b, j), b^{-1}t(a, b, j)-1\} \subseteq \mathcal{N}_q.
\end{cases}
\]
Define $T_{m+1} = \{t(x, y, z) - x + 1, x^{-1}yt(x, y, z)-y+1, xy^{-1}t(x, y, z) - x + 1, t(x, y, z) - y + 1 : t \in T_m\}$. By construction $\alpha^{m+1}(j) = t(a, b, j)$ for some $t \in T_{m+1}$. Also, $|T_{m+1}| \leq 4|T_m| \leq 4^{m+1}$ by induction. Furthermore, each $t \in T_{m+1}$ has been obtained from some $t' \in T_m$. The process of changing $t'$ to $t$ increases the total degree in $x$ and the total degree in $y$ by at most $1$. Therefore $\deg(t, x) \leq m+1$ and $\deg(t, y) \leq m+1$ for all $t \in T_{m+1}$.
\end{proof}
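The recursive construction in the proof above can be mirrored computationally. The sketch below (illustrative Python, not the paper's code; the prime $q = 11$ and the valid pair $(a, b) = (3, 9)$ with $a \in \mathcal{R}_q$ are arbitrary choices) builds the family corresponding to $T_4$ as closures in fixed $a, b$ and checks that $\alpha^4(j)$ is always realised by one of its members.

```python
# Checking the T_m recursion against direct iteration of alpha, for q = 11.
q = 11
R = {pow(x, 2, q) for x in range(1, q)}
inv = lambda x: pow(x, q - 2, q)

def phi(a, b, x):
    x %= q
    if x == 0:
        return 0
    return a * x % q if x in R else b * x % q

def alpha(a, b, j):
    pinv = lambda y: phi(inv(a), inv(b), y)        # valid since a is a residue
    return (phi(a, b, pinv(j) - 1) + 1) % q

a, b = 3, 9                   # a valid pair for q = 11 with a in R_q
T = [lambda j: j % q]         # T_0, with a and b fixed
for _ in range(4):            # build T_1, ..., T_4 via the four formulas
    T = [f for t in T for f in (
        lambda j, t=t: (t(j) - a + 1) % q,
        lambda j, t=t: (inv(a) * b * t(j) - b + 1) % q,
        lambda j, t=t: (a * inv(b) * t(j) - a + 1) % q,
        lambda j, t=t: (t(j) - b + 1) % q)]

# alpha^4(j) is always realised by some member of T_4
for j in range(q):
    x = j
    for _ in range(4):
        x = alpha(a, b, x)
    assert x in {t(j) for t in T}
```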
We can use \lref{l:a0aapoly} to bound the number of permutations $\alpha \in \G$ such that $\alpha_0$ or $\alpha_a$ is a $c$-cycle. Let $T_c$ be the set of trivariate Laurent polynomials from \lref{l:a0aapoly}. For a given $t \in T_c$, the number of pairs $(a, b)$ which are solutions to the equation $t(x, y, 0) = 0$ is at most $qc$. As $|T_c| \leq 4^c$ it follows that the number of permutations $\alpha \in \G$ with $\alpha_0$ being a $c$-cycle is at most $qc4^c$. The same conclusion holds for the number of permutations $\alpha \in \G$ such that $\alpha_a$ is of length $c$.
\subsection{$N_2$ quadratic Latin squares}\label{ss:n2}
In this subsection we will apply the results proven in \sref{ss:quadcyc} to investigate permutations in $\G$ which contain cycles of length two, also known as \emph{transpositions}. This will allow us to prove \tref{t:n2}.
Throughout this subsection let $q$ be an odd prime power. We will first determine when a permutation in $\G$ contains a Type One transposition. To do this, we construct the sequences in $X_{2, 1} / \sim$ and $X_{2, 2} / \sim$. We then apply \lref{l:t1necsuf} to these sequences to obtain necessary and sufficient conditions for a permutation in $\G_2$ to contain a Type One transposition. We know that the set $\G \setminus \G_2$ consists of the permutations $\alpha[a, a]$ and $\alpha[a, -a]$, which will be dealt with separately. Following the described method we obtain the following results.
\begin{lem}\label{l:t1}
The permutation $\alpha[a, b] \in \G_2$ contains a Type One transposition if and only if,
\[
(2ab-a-b)(a+b)(a-1) \in \R_q \text{ and } \{2(a+b-2)(a-1), 2a(a+b)\} \subseteq \N_q.
\]
\end{lem}
\begin{proof}
We distinguish four cases, depending on whether $q \equiv 1 \bmod 4$ or $q \equiv 3 \bmod 4$ and whether $a \in \R_q$ or $a \in \N_q$. There are only minor differences in the arguments for these four cases so we will only prove the case where $q \equiv 3 \bmod 4$ and $a \in \R_q$. By iterating through the sequences in $X_{2, 1} / \sim$ and using \lref{l:t1necsuf} we can determine that a permutation $\alpha[a, b] \in \G_2$ with $a \in \R_q$ contains a Type One transposition if and only if:
\begin{enumerate}[$(i)$]
\item $\{(2ab-a-b)(a-b), b(a+b)(b-1)(a-b), b(a+b-2)(a-b), 2(1-b)(a-b)\} \subseteq \N_q$,
\item $\{(a+b)(1-b)(a-b), b(a+b-2ab)(a-b), 2a(b-1)(a-b), (2-a-b)(a-b)\} \subseteq \N_q$,
\item $\{2b(a-1)(a-b), (2-a-b)(a-b), (1-a)(a+b)(a-b), a(a+b-2ab)(a-b)\} \subseteq \N_q$, or
\item $\{a(a+b-2)(a-b), 2(1-a)(a-b), (2ab-a-b)(a-b), a(a-1)(a+b)(a-b)\} \subseteq \N_q$.
\end{enumerate}
Using the fact that $-1 \in \mathcal{N}_q$ and $\{a, b, (a-1)(b-1)\} \subseteq \mathcal{R}_q$ we can combine conditions $(i)$ and $(iv)$ to be
\begin{equation}\label{e:v}
\{(2ab-a-b)(a-b), (a+b)(a-1)(a-b), (a+b-2)(a-b), 2(1-a)(a-b)\} \subseteq \N_q.
\end{equation}
Similarly we can combine conditions $(ii)$ and $(iii)$ to be
\begin{equation}\label{e:vi}
\{(a+b)(1-a)(a-b), (a+b-2ab)(a-b), 2(a-1)(a-b), (2-a-b)(a-b)\} \subseteq \N_q.
\end{equation}
The lemma then follows by combining \eref{e:v} and \eref{e:vi}.
\end{proof}
We will now determine when a permutation in $\G_2$ contains a Type Two transposition. To do this, we first compute the sets $Y_{2, 1} / \sim$ and $Y_{2, 2} / \sim$. We know from \lref{l:t2polys} that if $\alpha[a, b] \in \G_2$ satisfies a sequence in one of these sets, then $(a, b)$ must be a root of some bivariate Laurent polynomial. By iterating through the sequences in $Y_{2, 1} / \sim$ and $Y_{2, 2} / \sim$ and constructing the associated Laurent polynomials we can state the following lemma.
\begin{lem}\label{l:n2t2}
Let $\alpha = \alpha[a, b] \in \G_2$. If $\alpha$ contains a Type Two transposition then the pair $(a, b)$ is a solution to one of the following equations:
\begin{enumerate}[(i)]
\item $2-2b=0$,
\item $2-a-b=0$,
\item $1-2b+ba^{-1}=0$,
\item $2-2a=0$.
\end{enumerate}
\end{lem}
Checking the solutions of equations $(i)$, $(ii)$, $(iii)$ and $(iv)$ in \lref{l:n2t2} and recalling that $1 \not\in \{a, b\}$ we obtain the following corollary.
\begin{cor}\label{c:n2t2}
Let $\alpha = \alpha[a, b] \in \G_2$ with $b \not\in \{2-a, a/(2a-1)\}$. Then $\alpha$ does not contain a Type Two transposition.
\end{cor}
We will now find conditions for a permutation $\alpha \in \G_2$ to satisfy $\alpha^2(0)=0$ or $\alpha^2(a)=a$.
\begin{lem}\label{l:n2alph0}
Let $\alpha = \alpha[a, b] \in \G_2$ with $b \not\in \{2-a, a/(2a-1)\}$. Then $\alpha_0$ is not a transposition.
\end{lem}
\begin{proof}
We will distinguish four cases, depending on whether $q \equiv 1 \bmod 4$ or $q \equiv 3 \bmod 4$, and whether $a \in \R_q$ or $a \in \N_q$. We will consider the case where $q \equiv 3 \bmod 4$ and $a \in \mathcal{R}_q$. The other cases can be dealt with using similar arguments. Since $-1 \in \mathcal{N}_q$ we have that $\alpha(0) = \varphi(\varphi^{-1}(0)-1)+1 = \varphi(-1)+1 = 1-b$. Hence,
\[
\alpha^2(0) = \begin{cases}
2-b-a & \text{if } \{1-b, a^{-1}-a^{-1}b-1\} \subseteq \mathcal{R}_q, \\
a^{-1}b-a^{-1}b^2-b+1 & \text{if } 1-b \in \mathcal{R}_q \text{ and } a^{-1}-a^{-1}b-1 \in \mathcal{N}_q, \\
ab^{-1}-2a+1 & \text{if } 1-b \in \mathcal{N}_q \text{ and } b^{-1}-2 \in \mathcal{R}_q, \\
2-2b & \text{if } \{1-b, b^{-1}-2\} \subseteq \mathcal{N}_q.
\end{cases}
\]
Suppose that $\alpha_0$ is a transposition. If $\{1-b, a^{-1}-a^{-1}b-1\} \subseteq \R_q$ then $b=2-a$. If $1-b \in \R_q$ and $a^{-1}-a^{-1}b-1 \in \mathcal{N}_q$ then $b$ is a root of the polynomial $a^{-1}x^2+x(1-a^{-1})-1 = a^{-1}(x-1)(x+a)$. As $b \neq 1$ we must have $b=-a$ and thus $\alpha \not\in \G_2$. If $1-b \in \N_q$ and $b^{-1}-2 \in \R_q$ then $b=a/(2a-1)$. Finally if $\{1-b, b^{-1}-2\} \subseteq \N_q$ then $b=1$, which is false.
\end{proof}
\begin{lem}\label{l:n2alpha}
Let $\alpha = \alpha[a, b] \in \G_2$ with $b \not\in \{2-a, a/(2a-1)\}$. Then $\alpha_a$ is not a transposition.
\end{lem}
\begin{proof}
We will first prove the claim assuming that $a \in \mathcal{R}_q$. We have that $\alpha(a) = \varphi(\varphi^{-1}(a)-1)+1 = \varphi(0)+1 = 1$. Hence,
\[
\alpha^2(a) = \varphi(a^{-1}-1)+1 = \begin{cases}
2-a & \text{if } a^{-1}-1 \in \mathcal{R}_q, \\
a^{-1}b-b+1 & \text{if } a^{-1}-1 \in \mathcal{N}_q.
\end{cases}
\]
If $\alpha_a$ is a transposition then either $a=1$ or $b = (a-1)/(a^{-1}-1) = -a$, both of which are false. Using similar arguments we can show that if $a \in \N_q$ and $\alpha^2(a) = a$ then $b \in \{2-a, a/(2a-1)\}$.
\end{proof}
By combining \lref{l:t1}, \cyref{c:n2t2}, \lref{l:n2alph0} and \lref{l:n2alpha} we have completely classified when a permutation $\alpha[a, b] \in \G$ with $b \not\in \{a, -a, 2-a, a/(2a-1)\}$ contains a transposition. It is known that quadratic Latin squares of the form $\mathcal{L}[a, a]$ are isotopic to the Cayley table of the additive group of $\F_q$, which has odd order and hence contains no intercalates. Therefore when $b=a$ the permutation $\alpha[a, b]$ does not contain a transposition. So it remains to deal with the permutations $\alpha[a, b] \in \G$ when $b \in \{-a, 2-a, a/(2a-1)\}$. If $q \equiv 3 \bmod 4$ then no such permutations exist, since none of the corresponding pairs are valid. As $-1 \in \mathcal{N}_q$ we have $-a^2 \in \mathcal{N}_q$, hence $(a, -a)$ is not valid. As $(2-a-1)(a-1) = -(a-1)^2 \in \mathcal{N}_q$, the pair $(a, 2-a)$ is also not valid. Finally, we note that if $(a, a/(2a-1))$ were a valid pair then we would have both $a^2/(2a-1) \in \mathcal{R}_q$ and $-(a-1)^2/(2a-1) \in \mathcal{R}_q$, and clearly both cannot be true. So to complete our classification of the permutations in $\G$ which contain a transposition we must now consider the permutations $\alpha[a, -a]$, $\alpha[a, 2-a]$ and $\alpha[a, a/(2a-1)]$ in the case where $q \equiv 1 \bmod 4$. We will in fact show that almost all of these permutations contain a transposition.
\begin{lem}\label{l:2-a}
Suppose that $q \equiv 1 \bmod 4$. Let $a \in \F_q$ with $a(2-a) \in \mathcal{R}_q$ and let $\alpha = \alpha[a, 2-a] \in \G$. If there exists some $j \in \F_q$ such that $\{aj, a^{-1}j-1\} \subseteq \R_q$ and $\{a(j-a+1), a(j-1)\} \subseteq \N_q$ then $\alpha^2(j) = j$.
\end{lem}
\begin{proof}
We have that,
\[
\begin{aligned}
\alpha^2(j) &= \varphi(\varphi^{-1}(\varphi(\varphi^{-1}(j)-1)+1)-1)+1 \\
&= \varphi(\varphi^{-1}(\varphi(a^{-1}j-1)+1)-1)+1 \\
&= \varphi(\varphi^{-1}(j-a+1)-1)+1 \\
&= \varphi((j-1)/(2-a))+1 \\
&= j,
\end{aligned}
\]
as required.
\end{proof}
Using analogous arguments we can show the following lemmas.
\begin{lem}\label{l:a/2a-1}
Suppose that $q \equiv 1 \bmod 4$. Let $a \in \F_q$ with $2a-1 \in \mathcal{R}_q$ and let $\alpha = \alpha[a, a/(2a-1)] \in \G$. If there exists some $j \in \F_q$ such that $\{aj, a(j-1)\} \subseteq \R_q$ and $\{a^{-1}j-1, a(j+a-1)\} \subseteq \N_q$ then $\alpha^2(j) = j$.
\end{lem}
\begin{lem}\label{l:-a}
Suppose that $q \equiv 1 \bmod 4$. Let $a \in \F_q$ with $(a-1)(a+1) \in \mathcal{R}_q$ and let $\alpha = \alpha[a, -a] \in \G$. If there exists some $j \in \F_q$ such that $\{aj, a(j-a-1)\} \subseteq \R_q$ and $\{a^{-1}j-1, a(j-1)\} \subseteq \N_q$ then $\alpha^2(j) = j$.
\end{lem}
To finish the classification of permutations in $\G$ which contain a transposition we will need some tools. For convenience, if $f$ is a Laurent polynomial over $\F_q$ with a pole at $0$, then we will say that $f(0) = \infty$. We will then define $\eta(\infty) = 0$. The following~\cite{MR27006} is a version of the Weil bound.
\begin{thm}\label{t:weil}
Let $f$ be a monic Laurent polynomial over $\F_q$ of total degree $d$. If $f$ is not the square of a Laurent polynomial then for every $e \in \F_q$ we have,
\begin{equation}\label{e:charsum}
\left\vert \displaystyle\sum_{x \in \F_q} \eta(ef(x)) \right\vert \leq (d - 1)q^{1/2}.
\end{equation}
\end{thm}
In the special case where $f$ is a quadratic polynomial with non-zero discriminant, the following result~\cite{MR1429394} gives an explicit value for the sum in \eref{e:charsum}.
\begin{thm}\label{t:quadweil}
Let $f \in \F_q[x]$ be a monic, quadratic polynomial with non-zero discriminant. For every $e \in \F_q$ we have,
\[
\sum_{x \in \F_q} \eta(ef(x)) = -\eta(e).
\]
\end{thm}
We can use \tref{t:weil} and \tref{t:quadweil} to prove the following result.
\begin{lem}\label{l:translargeq}
Suppose that $q \geq 193$ and $q \equiv 1 \bmod 4$. Then every permutation in the set $\{\alpha[a, b] \in \G : b \in \{2-a, a/(2a-1), -a\}\}$ contains a transposition.
\end{lem}
\begin{proof}
We will prove that every permutation of the form $\alpha[a, 2-a]$ in $\G$ contains a transposition. The remaining claims can be proven using similar arguments.
Let $a \in \F_q$ such that $a(2-a) \in \R_q$. Define
\[
V_a = \{j \in \F_q : \{aj, a^{-1}j-1\} \subseteq \R_q, \{a(j-a+1), a(j-1)\} \subseteq \N_q\}.
\]
By \lref{l:2-a}, if $V_a \neq \emptyset$ then $\alpha[a, 2-a]$ contains a transposition. Define
\[
Q(x) = (1+\eta(ax))(1+\eta(a^{-1}x-1))(1-\eta(a(x-a+1)))(1-\eta(a(x-1))).
\]
If $x \in V_a$ then $Q(x) = 16$. If $x \in \{0, 1, a, a-1\}$ then $Q(x) \leq 8$. If $x \in \F_q \setminus (V_a \cup \{0, 1, a, a-1\})$ then $Q(x) = 0$. Let $S = \sum_{x \in \F_q} Q(x)$. Then $S \leq 16|V_a| + 32$. Expanding $Q(x)$ and using the fact that $\eta$ is a homomorphism on $\F_q^*$ we can write $S$ as a sum of terms of the form $\sum_{x \in \F_q} \eta(\pm K(x))$ where $K$ is the product of $k$ distinct factors in $\{ax, a^{-1}x-1, a(x-a+1), a(x-1)\}$ for some $k \in \{0, 1, 2, 3, 4\}$. Note that the roots of these factors are distinct because $a \notin \{0, 1, 2\}$. For each $k \in \{1, 2, 3, 4\}$ there are $\binom{4}{k}$ terms $K$ of degree $k$, and \tref{t:weil} or \tref{t:quadweil} applies to each such term. Using these theorems we obtain the bound $S \geq q-11q^{1/2}-6$. As $S \leq 16|V_a| + 32$ it follows that $|V_a| \geq (q-11q^{1/2}-38)/16$, which is positive if $q \geq 193$.
\end{proof}
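The conclusion of \lref{l:translargeq} can be confirmed by brute force for a concrete field. The sketch below is our own illustration, with $q = 193$ (a prime, so membership in $\R_q$ and $\N_q$ is a Legendre-symbol test); the admissibility conditions on $a$ encode our reading of what $\alpha[a, 2-a] \in \G$ requires in the proof, namely $a(2-a) \in \R_q$ and $a \notin \{0, 1, 2\}$.

```python
# Brute-force check that V_a is non-empty for every admissible a when
# q = 193, a prime with q = 1 mod 4.
q = 193
R = {pow(x, 2, q) for x in range(1, q)}  # non-zero squares R_q
N = set(range(1, q)) - R                 # non-squares N_q

def V(a):
    # The set V_a defined in the proof.
    ainv = pow(a, -1, q)
    return [j for j in range(q)
            if a * j % q in R and (ainv * j - 1) % q in R
            and a * (j - a + 1) % q in N and a * (j - 1) % q in N]

admissible = [a for a in range(q)
              if a not in (0, 1, 2) and a * (2 - a) % q in R]
assert all(V(a) for a in admissible)
print(f"V_a is non-empty for all {len(admissible)} admissible a")
```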
We will use \tref{t:weil} and \tref{t:quadweil} in this way many times throughout the paper.
To finish the classification of permutations in $\G$ which contain a transposition we used a computer search.
\begin{lem}\label{l:3mod4notrans}
Suppose that $q \equiv 3 \bmod 4$. The permutation $\alpha[a, b] \in \G$ contains a transposition if and only if,
\[
(2ab-a-b)(a+b)(a-1) \in \R_q \text{ and } \{2(a+b-2)(a-1), 2a(a+b)\} \subseteq \mathcal{N}_q.
\]
\end{lem}
\begin{lem}\label{l:1mod4notrans}
Suppose that $q \equiv 1 \bmod 4$. The permutation $\alpha[a, b] \in \G$ contains a transposition if and only if one of the following holds.
\begin{enumerate}[(i)]
\item $(2ab-a-b)(a+b)(a-1) \in \R_q \text{ and } \{2(a+b-2)(a-1), 2a(a+b)\} \subseteq \mathcal{N}_q$.
\item $b=2-a$ and $(q, a) \not\in \{(13, 3), (13, 8), (17, 12), (17, 15), (37, 11), (37, 27), (41, 13), (41, 25)\}$.
\item $b=a/(2a-1)$ and $(q, a) \not\in \{(13, 2), (13, 9), (17, 5), (17, 8), (37, 11), (37, 27), (41, 23), (41, 26)\}$.
\item $b=-a$ and $(q, a) \not\in \{(13, 7), (13, 11), (17, 3), (17, 11), (37, 10), (37, 26), (41, 12), (41, 17)\}$.
\end{enumerate}
\end{lem}
We are now ready to prove \tref{t:n2}.
\begin{proof}[Proof of \tref{t:n2}]
If $q \equiv 3 \bmod 4$ then the result follows by combining \lref{l:quadrowperms} and \lref{l:3mod4notrans}. Now assume that $q \equiv 1 \bmod 4$. \lref{l:quadrowperms} implies that $\mathcal{L}[a, b]$ contains an intercalate if and only if either $\alpha[a, b]$ contains a transposition or $\alpha[b, a]$ contains a transposition. Since $\{ab, (a-1)(b-1)\} \subseteq \R_q$ for a valid pair $(a, b) \in \F_q^2$, it follows that $(a, b)$ satisfies condition $(i)$ in \lref{l:1mod4notrans} if and only if $(b, a)$ does too. Also note that for each permutation $\alpha[a, b]$ which is an exception in condition $(ii)$, $(iii)$ or $(iv)$ in \lref{l:1mod4notrans}, the permutation $\alpha[b, a]$ is not an exception. The result follows.
\end{proof}
We can also find the number of $N_2$ quadratic Latin squares of order $q$.
\begin{lem}\label{l:numn2}
The number of $N_2$ quadratic Latin squares of order $q$ is $7q^2/32 + O(q^{3/2})$.
\end{lem}
\begin{proof}
Fix $a \in \F_q \setminus \{-1, 0, 1, 2\}$ and define
\[
V_a = \{b \in \F_q : \{ab, (a-1)(b-1), (2ab-a-b)(a+b)(a-1)\} \subseteq \mathcal{R}_q, \{2(a+b-2)(a-1), 2a(a+b)\} \subseteq \mathcal{N}_q\}.
\]
If $q \equiv 3 \bmod 4$ then $|V_a|$ is the number of quadratic Latin squares of the form $\L[a, b]$ which contain an intercalate. If $q \equiv 1 \bmod 4$ then the number of squares $\L[a, b]$ which contain an intercalate is $|V_a| + m$ where $m \in \{0, 1, 2, 3\}$. Using \tref{t:weil} and \tref{t:quadweil} in a way analogous to the proof of \lref{l:translargeq}, we can show that
\[
\frac1{32}(q-79-58q^{1/2}) \leq |V_a| \leq \frac1{32}(q+79+58q^{1/2}).
\]
The condition that $a \not\in \{-1, 2\}$ is required in order to apply \tref{t:weil} and \tref{t:quadweil} to estimate $|V_a|$. As there are $q-4$ choices for $a \in \F_q \setminus \{-1, 0, 1, 2\}$ it follows that the number of quadratic Latin squares $\mathcal{L}[a, b]$ of order $q$ with $a \not\in \{-1, 2\}$ which contain an intercalate is $q^2/32 + O(q^{3/2})$. Recall that the total number of quadratic Latin squares of order $q$ is $q^2/4 + O(q)$, and there are $O(q)$ quadratic Latin squares of the form $\mathcal{L}[-1, b]$ and $\mathcal{L}[2, b]$. It follows that the number of $N_2$ quadratic Latin squares of order $q$ is $7q^2/32 + O(q^{3/2})$.
\end{proof}
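The two-sided bound on $|V_a|$ in the proof above can be confirmed by brute force for a concrete field. The sketch below is our own illustration; the choice $q = 103$ (a prime with $q \equiv 3 \bmod 4$) is ours, and for a field this small the lower bound is negative and hence vacuous, so only the upper bound is a genuine constraint.

```python
# Brute-force check of the two-sided bound on |V_a| for the prime q = 103.
q = 103
R = {pow(x, 2, q) for x in range(1, q)}  # non-zero squares R_q
N = set(range(1, q)) - R                 # non-squares N_q

def V_size(a):
    # |V_a| for the set V_a defined in the proof of the lemma.
    count = 0
    for b in range(q):
        squares = [a * b, (a - 1) * (b - 1),
                   (2 * a * b - a - b) * (a + b) * (a - 1)]
        nonsquares = [2 * (a + b - 2) * (a - 1), 2 * a * (a + b)]
        if (all(x % q in R for x in squares)
                and all(x % q in N for x in nonsquares)):
            count += 1
    return count

lo = (q - 79 - 58 * q ** 0.5) / 32
hi = (q + 79 + 58 * q ** 0.5) / 32
for a in range(q):
    if a in (0, 1, 2, q - 1):  # a must avoid {-1, 0, 1, 2}
        continue
    assert lo <= V_size(a) <= hi
print("bounds on |V_a| hold for every admissible a in F_103")
```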
To conclude this subsection we describe how to bound the number of permutations in $\G$ which contain a cycle of length $c$, for some $c \in \{2, 3, \ldots, q\}$. Firstly, it is easy to bound the number of permutations in $\G \setminus \G_c$. The comments at the end of \sref{sss:t2} and \sref{sss:a0aa} describe how to bound the number of permutations in $\G_c$ which contain a cycle of length $c$ which is not of Type One. As mentioned at the end of \sref{sss:t1}, \lref{l:cycsseqs} and \lref{l:t1necsuf} give us necessary and sufficient conditions for a permutation in $\G_c$ to contain a Type One cycle of length $c$. To bound the number of permutations which satisfy these conditions we could use \tref{t:weil} and \tref{t:quadweil} in a similar way as used to find the number of $N_2$ quadratic Latin squares. However to apply these theorems in this way we would need to know when products of functions in the set $\{F_{i, z} : i \in \{0, 1, 2, \ldots, 2c-1\}\}$ and products of functions in $\{G_{i, z} : i \in \{0, 1, 2, \ldots, 2c-1\}\}$ are, up to multiplication by a constant, the square of a Laurent polynomial. It seems a difficult task to predict when this occurs.
\subsection{Row cycles of length $p$ in quadratic Latin squares}\label{ss:p-cycles}
In this subsection we will prove \tref{t:quadneg}. Throughout this subsection let $p$ be an odd prime, $d$ a positive integer and $q = p^d$.
\begin{lem}\label{l:pcycprelim}
Let $\alpha = \alpha[a, b] \in \G$ with $a \in \R_q$. If there exists $y \in \F_q$ such that $\{y-ja+j, y-(j+1)a+j : j \in \{0, 1, \ldots, p-1\}\} \subseteq \R_q$ then $\alpha$ contains a $p$-cycle.
\end{lem}
\begin{proof}
We will prove by induction on $k \in \{0, 1, 2, \ldots, p\}$ that $\alpha^k(y) = y-ka+k$. The claim is trivial when $k=0$. Suppose that $\alpha^k(y) = y-ka+k$ for some $k \in \{0, 1, 2, \ldots, p-1\}$. Then $\alpha^k(y) \in \R_q$, hence $\varphi^{-1}(\alpha^k(y))-1 = a^{-1}y-(k+1)+ka^{-1} = a^{-1}(y-(k+1)a+k) \in \R_q$ by assumption. Therefore $\alpha^{k+1}(y) = a(a^{-1}(y-(k+1)a+k))+1 = y-(k+1)a+(k+1)$. The lemma follows.
\end{proof}
In fact, if the hypotheses of \lref{l:pcycprelim} hold then $\alpha$ satisfies the sequence $z \in \{-1, 1\}^{2p}$ defined by $z_i = 1$ for $i \in \{0, 1, 2, \ldots, 2p-1\}$. Analogous arguments allow us to prove the following.
\begin{lem}\label{l:pcycprelim2}
Let $\alpha = \alpha[a, b] \in \G$ with $a \in \N_q$. If there exists $y \in \F_q$ such that $\{y-ja+j, y-(j+1)a+j : j \in \{0, 1, \ldots, p-1\}\} \subseteq \N_q$ then $\alpha$ contains a $p$-cycle.
\end{lem}
If the hypotheses of \lref{l:pcycprelim2} hold then $\alpha$ satisfies the sequence $z \in \{-1, 1\}^{2p}$ defined by $z_i = (-1)^{i+1}$ for $i \in \{0, 1, 2, \ldots, 2p-1\}$. We are now ready to prove \tref{t:quadneg}.
\begin{proof}[Proof of \tref{t:quadneg}]
Let $(a, b) \in \F_q^2$ be valid and let $\alpha = \alpha[a, b]$. First suppose that $\{a, b\} \subseteq \F_p \cap \N_q$. Using \lref{l:junalphr} it is simple to verify that $\alpha_0$ is contained in $\F_p$. Thus $\alpha$ has a cycle of length at most $p$.
Now we deal with the case where $\{a, b\} \not\subseteq \F_p \cap \N_q$. So either $\{a, b\} \subseteq \F_p \cap \R_q$ or $\{a, b\} \not\subseteq \F_p$. We will deal with the latter case first. Since $\mathcal{L}[a, b]$ and $\mathcal{L}[b, a]$ are isomorphic~\cite{MR2134185} and isotopy preserves the lengths of row cycles, we can swap $a$ and $b$ if necessary. Thus we can assume that $a \not\in \F_p$. We will first assume that $a \in \R_q \setminus \F_p$. Define,
\[
Y = \{y \in \F_q : \{y-ja+j, y-(j+1)a+j\} \subseteq \R_q \text{ for all } j \in \{0, 1, 2, \ldots, p-1\}\}.
\]
By \lref{l:pcycprelim}, if $Y \neq \emptyset$ then $\alpha$ contains a $p$-cycle. Define,
\[
Q(y) = \prod_{j=0}^{p-1}(1+\eta(y-ja+j))(1+\eta(y-(j+1)a+j))
\]
and $S = \sum_{y \in \F_q} Q(y)$. If $y \in Y$ then $Q(y) = 4^p$. If $y \in \{ja-j, (j+1)a-j : j \in \{0, 1, 2, \ldots, p-1\}\}$ then $Q(y) \leq 2^{2p-1}$. In all other cases $Q(y)=0$. It follows that $S \leq 4^p|Y|+p4^p$. Expanding $Q(y)$ and using the fact that $\eta$ is a homomorphism on $\F_q^*$ we can write $S$ as a sum of terms of the form $\sum_{y \in \F_q} \eta(\pm K(y))$ where $K$ is the product of $k$ distinct factors in $\{y-ja+j, y-(j+1)a+j : j \in \{0, 1, 2, \ldots, p-1\}\}$ for some $k \in \{0, 1, 2, \ldots, 2p\}$. If $ja-j = ka-k$ for some $\{j, k\} \subseteq \{0, 1, 2, \ldots, p-1\}$ then $j=k$. Similarly if $(j+1)a-j = (k+1)a-k$ then $j=k$. Suppose that $ja-j = (k+1)a-k$. First note that $j=k+1$ is not a solution to this equation. Hence if this equation is satisfied we must have $a=(j-k)/(j-k-1) \in \F_p$, which is a contradiction. It follows that the roots of each term $K$ are distinct. For each $k \in \{1, 2, \ldots, 2p\}$ there are $\binom{2p}{k}$ terms $K$ of degree $k$, and \tref{t:weil} or \tref{t:quadweil} applies to each such term. Using these theorems we obtain the bound
\[
S \geq q-\binom{2p}{2} - \sum_{k=3}^{2p} \binom{2p}{k} (k-1)q^{1/2} = q + q^{1/2}(p-1)(1-4^p+2p) + p(1-2p).
\]
Combining this with the fact that $S \leq 4^p|Y|+p4^p$ gives
\[
|Y| \geq 2^{-2p}(q + q^{1/2}(p-1)(1-4^p+2p) + p(1-2p-4^p)).
\]
Therefore $Y \neq \emptyset$ if $q + q^{1/2}(p-1)(1-4^p+2p) + p(1-2p-4^p) > 0$.
This inequality will be true if $q^{1/2} > ((1-p)(1-4^p+2p)+((p-1)^2(1-4^p+2p)^2-4p(1-2p-4^p))^{1/2})/2$. Set $q=p^d$ for some positive integer $d$. Then the previous inequality will hold provided that $d > 2\log(((1-p)(1-4^p+2p)+((p-1)^2(1-4^p+2p)^2-4p(1-2p-4^p))^{1/2})/2)/\log(p)$. Define
\[
f(p) = \left\lfloor \frac2{\log(p)}\log\left(\frac12((1-p)(1-4^p+2p)+((p-1)^2(1-4^p+2p)^2-4p(1-2p-4^p))^{1/2})\right) \right\rfloor+1.
\]
Then we have shown that $\mathcal{L}[a, b]$ contains a row cycle of length $p$ if $d \geq f(p)$. We can use analogous arguments in conjunction with \lref{l:pcycprelim2} to prove the same result if $a \in \N_q \setminus \F_p$.
We now deal with the case where $\{a, b\} \subseteq \F_p \cap \R_q$. By \lref{l:pcycprelim}, to show that $\alpha$ contains a $p$-cycle it suffices to show that there exists some $y \in \F_q$ such that $\{y-ja+j, y-(j+1)a+j : j \in \{0, 1, \ldots, p-1\}\} \subseteq \R_q$. Note that $y-ja+j = y-(k+1)a+k$ where $k=j+a/(1-a)$. Therefore the result will follow if
$\{y-ja+j : j \in \{0, 1, \ldots, p-1\}\} \subseteq \R_q$. We can use arguments analogous to those in the case where $a \in \R_q \setminus \F_p$ to show that $\alpha$ contains a $p$-cycle if $q=p^d$ with $d > 2\log(((2-p)(1-2^p+p)+((p-2)^2(1-2^p+p)^2-8p(1-p-2^p))^{1/2})/2)/\log(p)$. As this quantity is less than $f(p)$ it follows that $\mathcal{L}[a, b]$ contains a row cycle of length $p$ if $d \geq f(p)$.
\end{proof}
The function $f$ provided in the proof of \tref{t:quadneg} satisfies $f(p) \sim \lfloor p\log(16)/\log(p) \rfloor +1$. Furthermore, $f$ is not minimal. For example, $f(3) = 9$, however every quadratic Latin square of order $3^d$ contains a $3$-cycle if $d \geq 7$.
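The value $f(3) = 9$ quoted above can be reproduced by evaluating the formula for $f$ from the proof of \tref{t:quadneg} directly; the following sketch (ours) does so.

```python
from math import floor, log, sqrt

def f(p):
    # The function f from the proof of Theorem t:quadneg.
    A = (1 - p) * (1 - 4**p + 2 * p)
    B = (p - 1)**2 * (1 - 4**p + 2 * p)**2 - 4 * p * (1 - 2 * p - 4**p)
    return floor(2 * log((A + sqrt(B)) / 2) / log(p)) + 1

print([(p, f(p)) for p in (3, 5, 7)])  # f(3) = 9, as stated above
```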
\subsection{Anti-perfect $1$-factorisations and anti-atomic Latin squares}\label{ss:antiperf}
In this subsection we prove our main results concerning anti-perfect $1$-factorisations and anti-atomic Latin squares. To prove \tref{t:antiperf} and \tref{t:antiatom} we need the following definition. Let $v$ be a positive integer and $K \subseteq \{2, 3, 4, \ldots\}$. A \emph{pairwise balanced design} $\pbd(v, K)$ is a pair $(X, \B)$ where $X$ is a set of order $v$ whose elements are called points, and $\B$ is a collection of subsets of $X$ called blocks, such that the size of each block in $\B$ is an element of $K$, and each pair of distinct points in $X$ appears in exactly one block in $\B$.
We can use PBDs to construct Latin squares, via a method known as the `PBD construction', which we describe now. It is known that an idempotent Latin square of order $n$ exists for all $n \neq 2$. Suppose that $(X, \B)$ is a $\pbd(v, K)$ for some positive integer $v$ and some set $K \subseteq \{3, 4, 5, \ldots\}$. For each $B \in \B$ let $L^B$ be an idempotent Latin square with symbol set $B$. We can then define a $v \times v$ idempotent Latin square $L$ with symbol set $X$ by,
\[
L_{i, j} = \begin{cases}
i & \text{if } i = j, \\
L^B_{i, j} & \text{if } i \neq j, \text{ where } B \text{ is the unique block in } \B \text{ with } \{i, j\} \subset B.
\end{cases}
\]
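The PBD construction is easy to carry out concretely. The sketch below is our own illustration: it uses the Fano plane as a $\pbd(7, \{3\})$ and, on each block $B$, the idempotent (and in fact involutory) Latin square $(s, t) \mapsto 2(s+t) \bmod 3$ relabelled to the symbols of $B$; both of these choices are ours. It then checks that the result is an idempotent, involutory Latin square of order $7$, in line with \lref{l:pbdideminv} below.

```python
# The Fano plane as a pbd(7, {3}): 7 points, 7 blocks, and every pair of
# distinct points lies in exactly one block.
blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def block_of(i, j):
    return next(B for B in blocks if {i, j} <= B)

def LB(B, i, j):
    # An idempotent, involutory Latin square on the 3-set B, obtained by
    # relabelling the square (s, t) -> 2(s + t) mod 3 to the symbols of B.
    b = sorted(B)
    s, t = b.index(i), b.index(j)
    return b[2 * (s + t) % 3]

# The PBD construction: diagonal entries i, off-diagonal entries taken
# from the idempotent square on the unique block containing {i, j}.
L = [[i if i == j else LB(block_of(i, j), i, j) for j in range(7)]
     for i in range(7)]

assert all(sorted(row) == list(range(7)) for row in L)        # rows
assert all(sorted(col) == list(range(7)) for col in zip(*L))  # columns
assert all(L[i][i] == i for i in range(7))                    # idempotent
assert all(L[i][L[i][j]] == j                                 # involutory
           for i in range(7) for j in range(7) if i != j)
print("the PBD construction gave an idempotent, involutory Latin square")
```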
The PBD construction has been used to solve various problems, such as the construction of mutually orthogonal Latin squares (see e.g.~\cite{MR2469212}) and the construction of $1$-factorisations which contain only short cycles~\cite{MR2475030, MR2433008, MR3537912}. We now give a series of simple lemmas regarding Latin squares obtained from the PBD construction. The first is a simple observation.
\begin{lem}\label{l:pbdconj}
Any conjugate of a Latin square obtained from the PBD construction can also be obtained from the PBD construction.
\end{lem}
The following is a known result~\cite{MR2433008}.
\begin{lem}\label{l:pbdcyc}
Let $L$ be a Latin square obtained from the pairwise balanced design $(X, \B)$.
Let $\{i, j\} \subseteq X$ and let $B \in \B$ be the block containing $i$ and $j$. Then $r_{i, j}(B) = B$.
\end{lem}
\begin{lem}\label{l:pbdideminv}
Let $L$ be a Latin square obtained from the pairwise balanced design $(X, \B)$. If $L^B$ is involutory for each $B \in \B$ then $L$ is also involutory.
\end{lem}
\begin{proof}
Let $\{i, j\} \subseteq X$ with $i \neq j$. Then $k = L_{i, j} = L^B_{i, j}$ where $B \in \B$ is the block containing $i$ and $j$. Hence $k \in B$ also. Furthermore $L^B$ is idempotent, thus $k \neq i$. Thus $L_{i, k} = L^B_{i, k} = j$ because $L^B$ is involutory. Therefore $L$ is involutory as well.
\end{proof}
It is known that an idempotent, involutory Latin square of order $n$ exists if and only if $n$ is odd. Combining \lref{l:pbdconj}, \lref{l:pbdcyc} and \lref{l:pbdideminv} we obtain the following corollary.
\begin{cor}\label{c:pbdanti}
Let $K \subseteq \{3, 4, 5, \ldots\}$ and suppose that there exists a $\pbd(v, K)$ with at least two blocks. Then there exists an anti-atomic Latin square of order $v$. Furthermore if $K$ contains only odd integers, then there exists an anti-perfect $1$-factorisation of $K_{v+1}$.
\end{cor}
A result of Colbourn, Haddad and Linek~\cite{MR1370131} implies the existence of a $\pbd(v, K)$ with at least two blocks where $K$ contains only odd integers whenever $v \geq 7$ is odd. Combining this with \cyref{c:pbdanti} proves \tref{t:antiperf}. As mentioned in \sref{s:intro}, Dukes and Ling~\cite{MR2475030} constructed a $1$-factorisation of $K_{v+1}$ whose cycles are all of length at most $1720$, for all odd $v$. Each of these $1$-factorisations comes from the PBD construction with a $\pbd(v, \{3, 5\})$. \lref{l:pbdcyc} tells us that these $1$-factorisations are actually anti-perfect for all $v \geq 7$. We are now ready to prove \tref{t:antiatom}.
\begin{proof}[Proof of \tref{t:antiatom}]
A result of Hartman and Heinrich~\cite{MR1209190} implies the existence of a $\pbd(v, K)$ with at least two blocks where $2 \not\in K$ whenever $v = 7$ or $v \geq 9$. \cyref{c:pbdanti} then implies that there exists an anti-atomic Latin square of order $v$ whenever $v \not\in \{1, 2, 3, 4, 5, 6, 8\}$. Let $L$ be a Latin square which is derived from the Cayley table of a group $G$. By~\cite[Theorem 4.2.2]{MR0351850}, every conjugate of $L$ is isotopic to itself. Every row cycle in the row permutation $r_{g, h}$ of $L$ has length equal to the order of $gh^{-1}$ in $G$. Thus the existence of a non-cyclic group of order $v$ implies the existence of an anti-atomic Latin square of order $v$. This proves that anti-atomic Latin squares of order $v$ exist for $v \in \{4, 6, 8\}$. It is easy to verify that every Latin square of order $v \in \{2, 3, 5\}$ contains a row cycle of length $v$, and thus is not anti-atomic.
\end{proof}
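As a concrete instance of the group-based argument in the proof above, the Cayley table of the Klein four-group has every row cycle of length $2$, hence no row cycle of full length $4$. The sketch below (our illustration) confirms this for the square itself; by the cited result~\cite[Theorem 4.2.2]{MR0351850}, its conjugates are isotopic to it.

```python
# Cayley table of the Klein four-group, encoded as {0, 1, 2, 3} with XOR
# as the group operation.
L = [[g ^ x for x in range(4)] for g in range(4)]

def row_cycle_lengths(L, g, h):
    # Cycle lengths of the permutation mapping row g to row h columnwise.
    perm = {L[g][x]: L[h][x] for x in range(len(L))}
    lengths, seen = [], set()
    for start in perm:
        n, y = 0, start
        while y not in seen:
            seen.add(y)
            y = perm[y]
            n += 1
        if n:
            lengths.append(n)
    return lengths

all_lengths = {n for g in range(4) for h in range(4) if g != h
               for n in row_cycle_lengths(L, g, h)}
assert 4 not in all_lengths  # no row cycle of full length 4
print(sorted(all_lengths))   # -> [2]
```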
\lref{l:pbdconj} and \lref{l:pbdcyc} imply that anti-atomic Latin squares can be built from $1$-factorisations constructed in~\cite{MR2475030, MR2433008, MR3537912}. We also record that Latin squares corresponding to Steiner $1$-factorisations give us examples of anti-atomic Latin squares of order $q$ for all $q \equiv 1 \bmod 6$ or $3 < q \equiv 3 \bmod 6$.
The remainder of this subsection will be devoted to proving \tref{t:quadanti}. The first step is to find for which prime powers $q$ there exist quadratic, idempotent, involutory Latin squares of order $q$ which do not contain any row cycle of length $q$.
Every quadratic Latin square is idempotent. The following lemma~\cite{MR623318} gives sufficient conditions for $\mathcal{L}[a, b]$ to be involutory.
\begin{lem}\label{l:qii}
Let $q \equiv 3 \bmod 4$ and $a \in \N_q \setminus \{-1\}$. The $q \times q$ Latin square $\mathcal{L}[a, a^{-1}]$ is involutory.
\end{lem}
Let $q$ be an odd prime power. Every conjugate of a Latin square $\mathcal{L}[a, b]$ of order $q$ is also a quadratic Latin square, and a result of Wanless~\cite{MR2134185} allows us to determine these conjugates. Using this, it is simple to verify that the only involutory quadratic Latin squares which are not given by \lref{l:qii} are those squares of the form $\mathcal{L}[a, a]$. As mentioned in \sref{ss:n2}, such squares are isotopic to the Cayley table of the cyclic group of order $q$.
The following is a corollary of \tref{t:n2}.
\begin{cor}\label{c:iii}
Let $q \equiv 3 \bmod 4$ be a prime power and let $a \in \N_q$. The Latin square $\mathcal{L}[a, a^{-1}]$ contains an intercalate if and only if $\{2(1-a), 2(a^2+1)\} \subseteq \N_q$.
\end{cor}
\begin{proof}
\tref{t:n2} implies that $\mathcal{L}[a, a^{-1}]$ contains an intercalate if and only if $(2-a-a^{-1})(a+a^{-1})(a-1) \in \R_q$ and $\{2(a+a^{-1}-2)(a-1), 2a(a+a^{-1})\} \subseteq \N_q$. This is equivalent to the condition that $\{2(a+a^{-1}-2)(a-1), 2a(a+a^{-1})\} \subseteq \N_q$ because $2(a+a^{-1}-2)(a-1) \cdot 2a(a+a^{-1}) = 4a(a+a^{-1}-2)(a+a^{-1})(a-1)$. This is equivalent to $\{2(1-a), 2(a^2+1)\} \subseteq \N_q$ because $a+a^{-1}-2 = a^{-1}(a-1)^2$.
\end{proof}
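The chain of equivalences in the proof above can also be confirmed numerically for a concrete field; the sketch below is our own illustration, with $q = 103$ (a prime with $q \equiv 3 \bmod 4$, chosen by us) and checks that the two pairs of non-residue conditions agree for every $a \in \N_q$.

```python
# Check of the equivalence used in the proof of Corollary c:iii for q = 103.
q = 103
R = {pow(x, 2, q) for x in range(1, q)}  # non-zero squares R_q
N = set(range(1, q)) - R                 # non-squares N_q

for a in N:
    ainv = pow(a, -1, q)
    lhs = (2 * (a + ainv - 2) * (a - 1) % q in N
           and 2 * a * (a + ainv) % q in N)
    rhs = (2 * (1 - a) % q in N and 2 * (a * a + 1) % q in N)
    assert lhs == rhs
print("the two non-residue conditions agree for every a in N_103")
```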
Let $L$ be a quadratic, idempotent, involutory Latin square of prime power order $q \equiv 3 \bmod 4$ which contains an intercalate. \lref{l:quadrowperms} implies that every row permutation of $L$ contains a transposition.
A standard application of \tref{t:weil} and \tref{t:quadweil}, in conjunction with \cyref{c:iii}, shows that such a square exists for all $q \geq 83$.
A computer search then allows us to prove the following result.
\begin{lem}\label{l:notrh}
Let $q \equiv 3 \bmod 4$ be a prime power with $q > 3$. There exists a quadratic, idempotent, involutory Latin square $L$ of order $q$ such that no row permutation of $L$ is a $q$-cycle.
\end{lem}
\lref{l:notrh} proves the existence of an anti-perfect $1$-factorisation of $K_{q+1}$ for any prime power $q$ where $3 < q \equiv 3 \bmod 4$. We note that V\'{a}zquez-\'{A}vila~\cite{MR4354936} showed that if $q \equiv 11 \bmod 24$ is a prime power then there is a quadratic, idempotent, involutory Latin square $L$ of order $q$ such that the $1$-factorisation $\mathcal{F} = \mathcal{F}(L)$ of $K_{q+1}$ satisfies the following property: Every pair of $1$-factors in $\mathcal{F}$ induces a subgraph in $K_{q+1}$ which contains a $4$-cycle. In particular, $\mathcal{F}$ is anti-perfect.
The next step in proving \tref{t:quadanti} is to prove some simple results concerning the direct product of Latin squares.
\begin{lem}\label{l:dpcyc}
Let $L$ and $M$ be Latin squares. Let the set of lengths of row cycles in $L$ and the set of lengths of row cycles in $M$ be $R$ and $P$, respectively. The set of lengths of row cycles in $L \times M$ is,
\[
R \cup P \cup \{\text{\rm lcm}(r, p) : r \in R, p \in P\}.
\]
\end{lem}
\begin{proof}
Let the symbol sets of $L$ and $M$ be $S$ and $T$, respectively. For $i, j \in S$ denote the row permutation of $L$ mapping row $i$ to row $j$ by $u_{i, j}$. Similarly for $k, \ell \in T$ denote the row permutation of $M$ mapping row $k$ to row $\ell$ by $v_{k, \ell}$. For the purposes of this proof we take $u_{i, i}$ and $v_{k, k}$ to be the identity permutation, for $i \in S$ and $k \in T$. Let $\{(i, k), (j, \ell)\} \subseteq S \times T$ and consider the row permutation $r_{(i, k), (j, \ell)}$ of $L \times M$. We will show that $r_{(i, k), (j, \ell)} = u_{i, j} \times v_{k, \ell}$. Let $x \in S \times T$. Then $x = (L \times M)_{(i, k), (a, b)}$ for some $a \in S$ and $b \in T$. So,
\[
\begin{aligned}
r_{(i, k), (j, \ell)}(x) &= r_{(i, k), (j, \ell)}((L \times M)_{(i, k), (a, b)}) \\
&= (L \times M)_{(j, \ell), (a, b)} \\
&= (L_{j, a}, M_{\ell, b}) \\
&= (u_{i, j}(L_{i, a}), v_{k, \ell}(M_{k, b})) \\
&= u_{i, j} \times v_{k, \ell} (L_{i, a}, M_{k, b}) \\
&= u_{i, j} \times v_{k, \ell}(x).
\end{aligned}
\]
Hence $r_{(i, k), (j, \ell)} = u_{i, j} \times v_{k, \ell}$ as claimed. So the length of the cycle in $r_{(i, k), (j, \ell)}$ containing $x = (y, z)$ is the lowest common multiple of the lengths of the row cycle of $u_{i, j}$ containing $y$ and the row cycle of $v_{k, \ell}$ containing $z$. The lemma follows.
\end{proof}
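\lref{l:dpcyc} can be illustrated with the Cayley tables of the cyclic groups of orders $3$ and $4$ (our choice of example): here $R = \{3\}$ and $P = \{2, 4\}$, so the direct product should have row-cycle lengths $\{2, 3, 4, 6, 12\}$. The sketch below verifies this.

```python
from math import lcm

def cyclic_square(n):
    # Cayley table of the cyclic group Z_n, as a dict of rows.
    return {i: {j: (i + j) % n for j in range(n)} for i in range(n)}

def direct_product(L, M):
    return {(i, k): {(j, l): (L[i][j], M[k][l]) for j in L[i] for l in M[k]}
            for i in L for k in M}

def row_cycle_lengths(L):
    lengths = set()
    for g in L:
        for h in L:
            if g == h:
                continue
            # Row permutation mapping row g to row h columnwise.
            perm = {L[g][x]: L[h][x] for x in L[g]}
            seen = set()
            for start in perm:
                n, y = 0, start
                while y not in seen:
                    seen.add(y)
                    y = perm[y]
                    n += 1
                if n:
                    lengths.add(n)
    return lengths

R = row_cycle_lengths(cyclic_square(3))
P = row_cycle_lengths(cyclic_square(4))
prod = row_cycle_lengths(direct_product(cyclic_square(3), cyclic_square(4)))
assert prod == R | P | {lcm(r, p) for r in R for p in P}
print(sorted(R), sorted(P), sorted(prod))  # -> [3] [2, 4] [2, 3, 4, 6, 12]
```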
\begin{cor}\label{c:antiatomprod}
Let $L$ and $M$ be Latin squares, such that at least one of $L$ or $M$ is anti-atomic. Then $L \times M$ is anti-atomic.
\end{cor}
\begin{proof}
Let the symbol sets of $L$ and $M$ be $S$ and $T$, respectively. \lref{l:dpcyc} implies that if $L$ does not contain a row cycle of length equal to $|S|$ or $M$ does not contain a row cycle of length equal to $|T|$ then $L \times M$ does not contain a row cycle of length equal to $|S||T|$. It is a simple task to verify that the $(x, y, z)$-conjugate of $L \times M$ is equal to the direct product of the $(x, y, z)$-conjugate of $L$ and the $(x, y, z)$-conjugate of $M$, for any $1$-line permutation $(x, y, z)$ of $\{1, 2, 3\}$. The result follows.
\end{proof}
\begin{lem}\label{l:dpsym}
Let $L$ and $M$ be idempotent, involutory Latin squares. Then $L \times M$ is also idempotent and involutory.
\end{lem}
\begin{proof}
Let the symbol sets of $L$ and $M$ be $S$ and $T$, respectively. Let $(a, b) \in S \times T$. We know that $(L \times M)_{(a, b), (a, b)} = (L_{a, a}, M_{b, b}) = (a, b)$ because $L$ and $M$ are idempotent. Hence $L \times M$ is idempotent. Let $\{(u, v), (x, y), (c, d)\} \subseteq S \times T$ be such that $(L \times M)_{(u, v), (x, y)} = (c, d)$. Then $L_{u, x} = c$ and hence $L_{u, c} = x$ because $L$ is involutory. Similarly $M_{v, d} = y$. Thus $(L \times M)_{(u, v), (c, d)} = (x, y)$ and so $L \times M$ is involutory.
\end{proof}
We are now ready to prove \tref{t:quadanti}.
\begin{proof}[Proof of \tref{t:quadanti}]
We first prove that there exists an anti-atomic Latin square of odd prime power order $q$ if and only if $q \not\in \{3, 5\}$. Let $q$ be an odd prime power. If $q \equiv 3 \bmod 4$ then by \lref{l:quadrowperms}, a quadratic Latin square of order $q$ contains an intercalate if and only if all of its row permutations contain a transposition. As noted in the proof of \tref{t:n2}, a valid pair $(a, b) \in \F_q^2$ satisfies condition $(i)$ in \lref{l:1mod4notrans} if and only if $(b, a)$ also does.
Combining this with \lref{l:quadrowperms} implies that if $q \equiv 1 \bmod 4$ then a quadratic Latin square of order $q$ contains an intercalate if and only if all of its row permutations contain a transposition, other than the squares corresponding to the exceptions listed in \lref{l:1mod4notrans}. The property of a Latin square containing an intercalate is invariant under conjugation. Therefore if a quadratic Latin square contains an intercalate, then either it corresponds to one of the exceptions in \lref{l:1mod4notrans}, or it is anti-atomic.
The comment before \lref{l:notrh} tells us that a quadratic Latin square of order $q$ containing an intercalate exists if $83 \leq q \equiv 3 \bmod 4$. We can then use a computer to confirm that an anti-atomic quadratic Latin square of order $q \equiv 3 \bmod 4$ exists whenever $3 < q \leq 79$. If $q \equiv 1 \bmod 4$ then we know from \tref{t:n2} that all Latin squares $\mathcal{L}[a, -a]$ of order $q$ contain an intercalate, and only eight of these correspond to the exceptions in \lref{l:1mod4notrans}. We can use this fact, along with \tref{t:weil} and \tref{t:quadweil} in the standard way, to prove that there exists an anti-atomic quadratic Latin square of order $q$ for all $5 < q \equiv 1 \bmod 4$.
This proves that there exists an anti-atomic Latin square of odd prime power order $q$ if and only if $q \not\in \{3, 5\}$. Let $n \not\in \{3, 5, 15\}$ be an odd integer with prime power factorisation $q_1 \cdot q_2 \cdots q_k$. Then $q_i \not\in \{3, 5\}$ for some $i \in \{1, 2, \ldots, k\}$. The existence of an anti-atomic Latin square of order $q_i$ combined with \cyref{c:antiatomprod} proves the existence of an anti-atomic Latin square of order $n$.
We now prove the second claim of \tref{t:quadanti}. Let $n$ be an odd integer which contains a prime power divisor $m \neq 3$ with $m \equiv 3 \bmod 4$. Let the prime power factorisation of $n$ be $m \cdot q_2 \cdots q_k$. For each $q_i$ with $i \in \{2, 3, \ldots, k\}$ there exists a quadratic, idempotent, involutory Latin square of order $q_i$. In particular, any square of the form $\mathcal{L}[a, a]$ with $a \in \F_{q_i} \setminus \{0, 1\}$ satisfies this property. Also, \lref{l:notrh} implies the existence of a quadratic, idempotent, involutory Latin square of order $m$ which does not contain a row cycle of length $m$. Combining these facts with \lref{l:dpcyc} and \lref{l:dpsym} proves the claim.
\end{proof}
\section{Conclusion}
In \sref{ss:n2} we characterised exactly when a quadratic Latin square is $N_2$. Note that $N_2$ quadratic Latin squares of order $q$ exist for all odd prime powers $q$, because the squares $\mathcal{L}[a, a]$ are $N_2$ for any $a \in \F_q \setminus \{0, 1\}$. We also found that there are $7q^2/32 + O(q^{3/2})$ quadratic $N_2$ Latin squares of order $q$. \lref{l:dpcyc} implies that the direct product of $N_2$ Latin squares is also $N_2$. This fact was already known to McLeish~\cite{MR396298}. It means that we can construct $N_2$ Latin squares of order $n$ for all odd $n$ by taking direct products, and indeed many of them, although still only a small number in comparison to the total number of $N_2$ Latin squares. Kwan, Sah, Sawhney and Simkin~\cite{latinsub} have used a probabilistic argument to show that there are $(e^{-9/4}n-o(n))^{n^2}$ Latin squares of order $n$ which are devoid of intercalates.
\tref{t:quadneg} tells us that quadratic Latin squares of order $q = p^d$ will not be useful in constructing perfect $1$-factorisations unless $d$ is small. It also tells us that unless $d$ is small, the only quadratic Latin squares which could be useful in constructing $1$-factorisations which contain only short cycles are the squares $\mathcal{L}[a, b]$ with $\{a, b\} \subseteq \F_p \cap \N_q$. Combining \lref{l:quadrowperms} with the fact that every row permutation of such a square contains a cycle of length at most $p$ makes it tempting to investigate these squares when searching for $1$-factorisations which contain only short cycles. However computational evidence seems to suggest that such Latin squares always contain some large row cycles.
\section*{Acknowledgements}
I would like to thank Ian Wanless for many useful discussions and for helpful comments on an early draft of this paper. The author was supported by an Australian Government Research Training Program Scholarship.
\printbibliography
\end{document}
2302.12905
\section{Introduction}
\hskip .5cm Throughout this paper, $R$ will be an associative ring with identity, and all modules will be, unless otherwise specified, unital left $R$-modules. When right $R$-modules need to be used, they will be denoted as $M_R$, while in these cases left $R$-modules will be denoted by $_R M$. Whenever it is convenient, right $R$-modules are identified with modules over the opposite ring $R^{op}$. We use $\mathcal{I}(R)$, $\mathcal{P}(R)$, $\mathcal{F}(R)$ and $\mathcal{C}(R)$ to denote the classes of injective, projective, flat and cotorsion $R$-modules, respectively.
From a theorem by Christensen, Estrada and Thompson \cite[Theorem 4.5]{CET20}, the subcategory $\mathcal{GF}(R)\cap\mathcal{C}(R)$ of Gorenstein flat and cotorsion $R$-modules is a Frobenius category whose projective-injective objects are precisely the flat-cotorsion $R$-modules. Hence, the stable category $\mathcal{E}_0:=\underline{\mathcal{GF}(R)\cap\mathcal{C}(R)}$ is a triangulated category. On the other hand, \v{S}aroch and \v{S}\v{t}ov\'{\i}\v{c}ek \cite[Section 4]{SS20} constructed a hereditary abelian model structure $\mathcal{M}_0$ where the class of cofibrant objects coincides with $\mathcal{GF}(R)$ and the class of fibrant objects coincides with $\mathcal{C}(R)$. As a particular case of a result of Estrada, Iacob and P\'{e}rez \cite[Corollary 4.6]{EIP20} (see also \cite[Corollary 4.18]{BEGO22b}), the homotopy category ${\rm Ho}(\mathcal{M}_0)$ of this model structure is triangle equivalent to the stable category $\mathcal{E}_0$.
In the first part of this paper, Section 3, we are interested in a more general version of the same problem: that of modules of bounded Gorenstein flat dimension. It is therefore natural to ask whether we can construct a model structure such that the class of cofibrant objects coincides with the class $\mathcal{GF}_n(R)$ of all modules having Gorenstein flat dimension at most $n$, for a given integer $n\geq 0$.
First, we show that there is a trivial model structure (in the sense that every module is a trivial object) $$\mathcal{N}_n=(\mathcal{GF}_n(R), R\text{-Mod},\mathcal{GF}_n(R)^\perp).$$
In fact, this is nothing but saying that $\mathcal{GF}_n(R)$ is the left hand of a complete cotorsion pair. However, the following result shows more than that.
\bigskip
\noindent{ \bf Theorem A.} \textbf{(The $n$-Gorenstein flat cotorsion pair)} The class $\mathcal{GF}_n(R)$ forms the left hand of a perfect hereditary cotorsion pair $(\mathcal{GF}_n(R),\mathcal{GC}_n(R))$ with $\mathcal{GC}_n(R)=\mathcal{PGF}(R)^\perp\cap \mathcal{F}_n(R)^\perp$.
In particular, $\mathcal{GF}_n(R)$ is covering and $\mathcal{GC}_n(R)$ is enveloping.
\bigskip
Theorem A is essential to obtain our next main result (Theorem \ref{GFn mod struc}). This result provides us with a (non-trivial) model structure where the class of fibrant objects coincides with $\mathcal{C}_n(R):=\mathcal{F}_n(R)^\perp$, the right ${\rm Ext}$-orthogonal class of the class of all $R$-modules of flat dimension at most $n$.
One of the consequences of this model structure is that the stable category $\mathcal{E}_n=\underline{\mathcal{GF}_n(R)\cap\mathcal{C}_n(R)}$ modulo objects in $\mathcal{F}_n(R)\cap\mathcal{C}_n(R)$ is triangulated (Corollary \ref{quas-Frob}). In the case $n=0$, Liang and Wang \cite[Theorem 1(a)]{LW20} have recently shown that this stable category is compactly generated when $R$ is right coherent with finite global Gorenstein dimension. It is natural to ask whether this property is inherited by $\mathcal{E}_n$ for all $n\geq 0$. Based on their work and a recent result by Huerta, Mendoza and P\'{e}rez \cite[Corollary 8.8]{HMP23}, we answer this question in the affirmative, assuming only that $R$ has finite global Gorenstein dimension.
\bigskip
\noindent{ \bf Theorem B.} (\textbf{The $n$-Gorenstein flat model structure}) \label{ThA} There exists a hereditary abelian model structure on $R$-Mod $$\mathcal{M}_n=\left(\mathcal{GF}_n(R),\mathcal{PGF}(R)^\perp, \mathcal{C}_n(R)\right).$$
Consequently, there are triangulated equivalences
$$\underline{\mathcal{GF}_n(R)\cap \mathcal{C}_n(R)}\simeq\cdots \simeq \underline{\mathcal{GF}(R)\cap \mathcal{C}(R)}\simeq \underline{\mathcal{PGF}(R)}.$$
Furthermore, if $R$ has finite global Gorenstein projective dimension, then these triangulated categories are compactly generated.
\bigskip
The $n$-Gorenstein flat model structure has been found for two particular kinds of rings: (Iwanaga-)Gorenstein rings, by P\'{e}rez \cite[Theorem 14.3.1]{Per16}, where the trivial objects are known and coincide with the modules having finite flat dimension; and right coherent rings, by A. Xu \cite[Theorem 4.5(2)]{Xu17}, where less information is known about the class of trivial objects.
Theorem B therefore generalizes and improves both results: we need no assumption on the ring $R$, and we obtain an explicit description of the trivial objects. This makes Theorem B of particular interest, since the class of trivial objects is the most important class in any model structure: it determines the corresponding homotopy category, as explained in the fundamental theorem of model categories \cite[Theorem 1.2.10]{Hov99}.
We point out that the proof of Theorem B is different from that of Perez and Xu. It is mainly based on the work developed by \v{S}aroch and \v{S}\v{t}ov\'{\i}\v{c}ek in \cite{SS20}. One of the key results (Theorem \ref{Charac of GFn}) is a new and useful characterization of modules having finite Gorenstein flat dimension.
$$*\;*\;*$$
A guiding principle in the development of Gorenstein homological algebra is to study analogues of results concerning absolute homological dimensions. For example, consider the well-known questions: (Q1) Is every Gorenstein projective module Gorenstein flat? and (Q2) Is the class of Gorenstein projective modules special precovering? These questions have been studied by many authors, such as Cort\'{e}s-Izurdiaga and \v{S}aroch \cite{CS21}, Emmanouil \cite{Emm12}, Enochs and Jenda \cite{EJ00} and Iacob \cite{Iac20}, among others. One could also ask similar questions for Ding projective modules. However, all these questions remain open.
\v{S}aroch and \v{S}\v{t}ov\'{\i}\v{c}ek \cite{SS20}, on the other hand, introduced PGF modules and showed that they are Gorenstein flat and they form a special precovering class. These properties can be seen as positive answers to Questions (Q1) and (Q2) if we think of PGF modules as an alternative definition of Gorenstein projective modules.
The main purpose of Section 4 is to support the following claim: \textbf{``the PGF dimension could serve as an alternative definition of the Gorenstein (Ding) projective dimension over any ring''}.
Recall \cite[Definition 2.1]{EJL05} that a ring $R$ is left $n$-perfect if every flat $R$-module has projective dimension at most $n$. In particular, left perfect rings are exactly the left $0$-perfect rings. We show (Theorem \ref{n-perf}) that a ring $R$ is left $n$-perfect if and only if every Gorenstein flat left $R$-module has PGF dimension at most $n$. In the case $n=0$, we obtain further equivalent assertions similar to the classical ones, supporting the above claim.
It follows by \cite[Theorem 4.4]{SS20} that any PGF module is Gorenstein projective. But whether the converse is true remains open. This leaves us with another question: (Q3) When is any Gorenstein projective a PGF module? This question was first asked and investigated by \v{S}aroch and \v{S}\v{t}ov\'{\i}\v{c}ek \cite{SS20} and later by Iacob \cite{Iac20}. It turns out that this question is closely related to Questions (Q1) and (Q2) (see Remark \ref{Q1 and Q2}).
One natural way to study Question (Q3) is to measure how far a Gorenstein projective module is from being PGF. It is proved (Proposition \ref{PGF=GP}) that only two extreme cases can occur: either Gorenstein projective modules are as close as possible to PGF modules, i.e., the two classes coincide, or they are as far as possible, i.e., the PGF dimension of any Gorenstein projective module which is not PGF is infinite.
Next, we obtain a variety of conditions equivalent to the first case (Proposition \ref{PGF=GP}). Under any of these equivalent conditions, we get a positive answer to Questions (Q1) and (Q2) (Corollary \ref{Q1 and Q2}). In particular, this is the case when $R$ is right weak coherent and left $n$-perfect, or when every injective right $R$-module has finite flat dimension.
Right weak coherent rings were introduced by Cort\'{e}s-Izurdiaga in \cite{CI16} as a natural generalization of right coherent rings: they are the rings over which the direct product of any family of flat $R$-modules has finite flat dimension. It turns out that these rings provide a general framework in which the coherence assumption can be replaced by weak coherence (see Corollary \ref{PGF=DP} and Remark \ref{coh to weak coh}).
The rest of Section 4 is devoted to the global dimension of $R$ with respect to the class $\mathcal{PGF}(R)$ and its link to other global dimensions. First, we provide simple ways to compute it (Theorem \ref{charc of PGFD}). This result is then used to prove the last main result of this paper, which, along with its consequences (Corollaries \ref{PGFD=GID=DID} and \ref{Aus}), clearly supports our claim above. It states that, although the classes $\mathcal{PGF}(R)$ and $\mathcal{GP}(R)$ may differ, their global dimensions always coincide.
\bigskip
\noindent{ \bf Theorem C.} For any ring $R$, we have the following equality: $${\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(M)|\text{ $M$ is an $R$-module}\}={\rm sup}\{{\rm Gpd}_R(M)|\text{ $M$ is an $R$-module}\}.$$
\section{Preliminaries}
\noindent{\bf Resolutions.} Given a class $\mathcal{X}$ of $R$-modules and a class $\mathcal{Y}$ of right $R$-modules, an $\mathcal{X}$-resolution of an $R$-module $M$ is an exact complex $\cdots\to X_1\to X_0\to M\to 0$ where
$X_i\in\mathcal{X}$.
A sequence $\textbf{X}$ of $R$-modules is called $\left(\mathcal{Y}\otimes_R-\right)$-exact (resp., ${\rm Hom}_R(\mathcal{X},-)$-exact,
$ {\rm Hom}_R(-,\mathcal{X})$-exact) if $Y\otimes_R\textbf{X}$ (resp., ${\rm Hom}_R(X,\textbf{X})$, ${\rm Hom}_R(\textbf{X},X)$)
is an exact complex for every $Y\in\mathcal{Y}$ (resp., $X\in\mathcal{X}$).
An $R$-module $M$ is said to have $\mathcal{X}$-resolution dimension at most an integer $n\geq 0$, written ${\rm resdim}_\mathcal{X}(M)\leq n$, if $M$ has a finite $\mathcal{X}$-resolution: $0\to X_n\to\cdots\to X_1\to X_0\to M\to 0.$ If $n$ is the least non-negative integer for which such a sequence exists, then the $\mathcal{X}$-resolution dimension of $M$ is precisely $n$; if no such $n$ exists, we set ${\rm resdim}_\mathcal{X}(M)=\infty$.
Given a class of $R$-modules $\mathcal{Z}$, the $\mathcal{X}$-dimension of $\mathcal{Z}$ is defined as:
$${\rm resdim}_\mathcal{X}(\mathcal{Z})={\rm sup}\{{\rm resdim}_\mathcal{X}(Z)|Z\in\mathcal{Z}\}.$$
$\mathcal{X}$-coresolutions and $\mathcal{X}$-coresolution dimensions are defined dually.
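For instance (a standard observation, recorded here for orientation), taking $\mathcal{X}$ to be the class $\mathcal{P}(R)$ of projective modules (resp., the class $\mathcal{F}(R)$ of flat modules), a finite $\mathcal{X}$-resolution is just a finite projective (resp., flat) resolution, so
$${\rm resdim}_{\mathcal{P}(R)}(M)={\rm pd}_R(M) \quad \text{and} \quad {\rm resdim}_{\mathcal{F}(R)}(M)={\rm fd}_R(M).$$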
\noindent{\bf Gorenstein modules and dimensions.} An $R$-module $M$ is called Gorenstein flat if it is a syzygy of an $(\mathcal{I}(R^{op})\otimes_R-)$-exact exact sequence of flat $R$-modules. Replacing flat with projective, we get the definition of projectively coresolved Gorenstein flat (PGF for short) modules \cite{SS20}. An $R$-module $M$ is called Gorenstein projective if it is a syzygy of a ${\rm Hom}_R(-,\mathcal{P}(R))$-exact exact sequence of projective $R$-modules. Gorenstein injective modules are defined dually. We let $\mathcal{GF}(R)$
(resp., $\mathcal{PGF}(R)$, $\mathcal{GP}(R)$, and $\mathcal{GI}(R)$) denote the subcategory of Gorenstein flat (resp., PGF, Gorenstein projective, and Gorenstein injective) $R$-modules.
The Gorenstein flat (resp., PGF, Gorenstein projective, Gorenstein injective) dimension of an $R$-module $M$ is defined as ${\rm Gfd}_R(M):={\rm resdim}_{\mathcal{GF}(R)}(M)$ (resp., ${\rm {PGF}\mbox{-}pd}_R(M):={\rm resdim}_{\mathcal{PGF}(R)}(M)$, ${\rm Gpd}_R(M):={\rm resdim}_{\mathcal{GP}(R)}(M)$, and ${\rm Gid}_R(M):={\rm coresdim}_{\mathcal{GI}(R)}(M)$).
Let ${\rm GPD}(R)$ (resp., ${\rm GID}(R)$) denote the global Gorenstein projective (resp., global Gorenstein injective) dimension of $R$; i.e., $${\rm GPD}(R):={\rm Gpd}_R(R\text{-Mod})={\rm sup}\{{\rm Gpd}_R(M)|M\in R\text{-Mod}\}$$ (resp., ${\rm GID}(R):={\rm Gid}_R(R\text{-Mod})={\rm sup}\{{\rm Gid}_R(M)|M\in R\text{-Mod}\}$).
\begin{defn} Let $\mathcal{F}_n(R)$ and $\mathcal{GF}_n(R)$ denote the subcategories of $R$-modules having flat and Gorenstein flat dimension at most $n\geq 0$. We call modules in these subcategories $n$-flat and $n$-Gorenstein flat, respectively.
Set $\mathcal{C}_n(R)=\mathcal{F}_n(R)^\perp$ and $\mathcal{GC}_n(R)=\mathcal{GF}_n(R)^\perp$. The modules in $\mathcal{C}_n(R)$ and $\mathcal{GC}_n(R)$ are called $n$-cotorsion and $n$-Gorenstein cotorsion, respectively.
\end{defn}
Note that $0$-flat (resp., $0$-Gorenstein flat, $0$-cotorsion, $0$-Gorenstein cotorsion) modules coincide with the flat (resp., Gorenstein flat, cotorsion, Gorenstein cotorsion) modules.
\bigskip
\noindent{\bf Cotorsion pairs.} Given an abelian category $\mathcal{A}$, and a class of objects $\mathcal{X}$ in $\mathcal{A}$, we use the following standard notations: $\mathcal{X}^\perp=\{A\in\mathcal{A}|{\rm Ext}^1_\mathcal{A}(X,A)=0,\forall X\in\mathcal{X}\}$ and
$^\perp\mathcal{X}=\{A\in\mathcal{A}|{\rm Ext}^1_\mathcal{A}(A,X)=0,\forall X\in\mathcal{X}\}.$
An $\mathcal{X}$-precover of an object $A\in\mathcal{A}$ is a morphism $f:X\to A$ with $X\in \mathcal{X}$ such that $f_*:{\rm Hom}_\mathcal{A}(X',X)\to {\rm Hom}_\mathcal{A}(X',A)$ is surjective for every $X'\in \mathcal{X}$. An $\mathcal{X}$-precover is called an $\mathcal{X}$-cover if every endomorphism $g: X\to X$ such that $fg=f$ is an automorphism of $X$. If every object has an $\mathcal{X}$-(pre)cover, the class $\mathcal{X}$ is said to be (pre)covering. An $\mathcal{X}$-precover $f$ is called special if it is an epimorphism and ${\rm Ker}(f)\in\mathcal{X}^\perp$. (Special) $\mathcal{X}$-preenvelopes and $\mathcal{X}$-envelopes are defined dually.
A pair $\left(\mathcal{X},\mathcal{Y}\right)$ of classes of objects in $\mathcal{A}$ is called a cotorsion pair if $\mathcal{X}^\perp=\mathcal{Y}$ and $\mathcal{X}={}^\perp\mathcal{Y}$.
A cotorsion pair $\left(\mathcal{X},\mathcal{Y}\right)$ is said to be:
$\bullet$ Hereditary if ${\rm Ext}^i_\mathcal{A}(X,Y)=0$ for every $X\in\mathcal{X}$, $Y\in\mathcal{Y}$ and $i\geq 1$.
$\bullet$ Complete if any object in $\mathcal{A}$ has a special $\mathcal{X}$-precover and a special $\mathcal{Y}$-preenvelope.
$\bullet$ Perfect if every module has an $\mathcal{X}$-cover and a $\mathcal{Y}$-envelope.
$\bullet$ Cogenerated by a set if there is a set $\mathcal{S}$ such that $\mathcal{X}=\mathcal{S}^\perp$.
It is well known that a perfect cotorsion pair $(\mathcal{X},\mathcal{Y})$ in $R$-Mod is complete. The converse holds when $\mathcal{X}$ is closed under direct limits \cite[Corollary 2.3.7]{GT12}. A well-known method for constructing complete cotorsion pairs in $R$-Mod is to cogenerate one from a set (see \cite[Theorem 3.2.1]{GT12}, for instance).
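Two classical examples, recalled here for orientation: $(\mathcal{P}(R), R\text{-Mod})$ is a complete hereditary cotorsion pair, since every module is a quotient of a projective module and ${\rm Ext}^i_R(P,-)=0$ for every projective $P$ and $i\geq 1$; and Enochs' flat cotorsion pair $(\mathcal{F}(R),\mathcal{C}(R))$ is perfect, since $\mathcal{F}(R)$ is closed under direct limits. The latter is the case $n=0$ of Lemma \ref{n-flat cot-pair} below.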
\bigskip
\noindent{\bf Abelian model structures.}
Given a bicomplete category $\mathcal{A}$, a model structure on $\mathcal{A}$ is given by three classes of morphisms of $\mathcal{A}$, called cofibrations, fibrations and weak equivalences that satisfy a set of axioms \cite[Definition 1.1.3]{Hov99}.
Over a (bicomplete) abelian category $\mathcal{A}$, Hovey defined abelian model structures and showed that one can shift all the attention from morphisms to objects \cite[Theorem 2.2]{Hov02}. Namely, an abelian model structure on $\mathcal{A}$ is equivalent to a triple (known as a Hovey triple) $\mathcal{M}=(\mathcal{Q},\mathcal{W},\mathcal{R})$ of classes of objects in $\mathcal{A}$ such that $(\mathcal{Q}, \mathcal{W}\cap\mathcal{R})$ and $(\mathcal{Q}\cap\mathcal{W},\mathcal{R})$ are complete cotorsion pairs and $\mathcal{W}$ is thick. In this case, $\mathcal{Q}$ (resp., $\mathcal{R}$, $\mathcal{W}$) is precisely the class of cofibrant (resp., fibrant, trivial) objects, and the corresponding model structure (in the sense of \cite[Definition 1.1.3]{Hov99}) is the one in which the following two conditions are satisfied:
(i) A morphism is a (trivial) cofibration if and only if it is a monomorphism with (trivially) cofibrant cokernel.
(ii) A morphism is a (trivial) fibration if and only if it is an epimorphism with (trivially) fibrant kernel.
An abelian model structure is called hereditary if both of the associated cotorsion pairs are hereditary (see Gillespie \cite{Gil16} for a nice survey on hereditary abelian model categories). In this paper, we often identify abelian model structures with Hovey triples.
Given a model structure $\mathcal{M}$ on $\mathcal{A}$, the homotopy category, denoted ${\rm Ho}_\mathcal{A}(\mathcal{M})$, is defined by formally inverting the weak equivalences of $\mathcal{M}$; more precisely, ${\rm Ho}_\mathcal{A}(\mathcal{M})$ is obtained by localizing $\mathcal{A}$ at the class of weak equivalences. If, moreover, $\mathcal{M}$ is hereditary, the homotopy category is known to be an algebraic triangulated category, and it encodes a variety of (relative) homological algebra on $\mathcal{A}$.
\section{$n$-Gorenstein flat modules and their model structure}
\hskip .5cm In this section, we aim to prove Theorems A and B from the introduction. First, we need some basic properties of $n$-Gorenstein flat modules.
\begin{prop}\label{Gn-flat prop} The class of $n$-Gorenstein flat $R$-modules is projectively resolving, closed under direct sums, direct summands, and direct limits.
\end{prop}
{\parindent0pt {\bf Proof.\ }} It is clear that any projective module is Gorenstein flat, and hence $n$-Gorenstein flat. On the other hand, taking $W=R$ in \cite{BEGO22}, we get that $\mathcal{GF}(R)={\rm G_WF}(R)$, and this class is closed under extensions by \cite[Theorem 4.11]{SS20}. Therefore, the class $\mathcal{GF}_n(R)$ is closed under extensions and kernels of epimorphisms by \cite[Proposition 7.9]{BEGO22}, and under direct sums and summands by \cite[Corollary 7.8(2)]{BEGO22}. It remains to show that $\mathcal{GF}_n(R)$ is closed under direct limits.
Following \cite[Corollary 1.7 and the remark that follows it]{AR94}, it suffices to show that the class $\mathcal{GF}_n(R)$ is closed under well-ordered continuous direct limits of $n$-Gorenstein flat $R$-modules: $$(M_\alpha)_{\alpha<\lambda}:= M_0\to M_1\to \cdots\to M_\omega\to M_{\omega+1}\to \cdots$$
Here continuous (or smooth in the sense of \cite{AR94}) means that for each limit ordinal $\beta<\lambda$, $M_\beta=\varinjlim_{\alpha<\beta} M_\alpha$.
$\bullet$ If $\lambda < \omega$, then $\varinjlim_{\alpha <\lambda} M_\alpha=M_{\lambda-1}\in \mathcal{GF}_n(R)$.
$\bullet$ Assume now that $\lambda=\omega$. By \cite[Proposition 7.11]{BEGO22}, there exists an exact sequence $0\to M_0\to F_0\to G_0\to 0$
with $F_0\in\mathcal{F}_n(R)$ and $G_0$ Gorenstein flat. Consider the pushout diagram
$$\xymatrix{
&0 \ar[r] &M_0 \ar[r] \ar[d] &F_0 \ar[r] \ar[d] & G_0 \ar[r] \ar@{=}[d] &0\\
&0 \ar[r] &M_1 \ar[r] &V \ar[r] & G_0 \ar[r] &0.
}$$
Since $M_1\in\mathcal{GF}_n(R)$ and $G_0$ is Gorenstein flat, $V\in\mathcal{GF}_n(R)$. Using again \cite[Proposition 7.11]{BEGO22}, we get an exact sequence $0\to V\to F_1\to G\to 0$ with $F_1\in\mathcal{F}_n(R)$ and $G$ Gorenstein flat.
Consider another pushout diagram
$$\xymatrix{
& & &0 \ar[d] &0 \ar[d]\\
&0 \ar[r] &M_1 \ar[r] \ar@{=}[d] &V \ar[r] \ar[d] & G_0 \ar[r] \ar[d] &0\\
&0 \ar[r] &M_1 \ar[r] &F_1 \ar[r]\ar[d] & G_1 \ar[r]\ar[d] &0\\
& & &G \ar[d] \ar@{=}[r] & G\ar[d] \\
& & &0 &0.}$$
Since $G_0$ and $G$ are Gorenstein flat, so is $G_1$. Therefore, we get the following morphism of exact sequences induced by
the morphism $M_0\to M_1$:
$$\xymatrix{
&A_0=:\ar[d] &0 \ar[r] &M_0 \ar[r] \ar[d] &F_0 \ar[r] \ar[d] & G_0 \ar[r] \ar[d] &0\\
&A_1=: &0 \ar[r] &M_1 \ar[r] &F_1 \ar[r] & G_1 \ar[r] &0
}.$$
Repeating this process, we get a commutative diagram with exact rows
$$\xymatrix{
&A_0=:\ar[d] & 0 \ar[r] &M_0 \ar[r] \ar[d] &F_0 \ar[r] \ar[d] & G_0 \ar[r] \ar[d] &0\\
&A_1=:\ar[d] &0 \ar[r] &M_1 \ar[r]\ar[d] &F_1 \ar[r]\ar[d] & G_1 \ar[r]\ar[d] &0\\
&A_2=:\ar[d] &0 \ar[r] &M_2 \ar[r]\ar[d] &F_2 \ar[r] \ar[d] & G_2 \ar[r]\ar[d] &0\\
&: & &: &: & :
}$$
where $F_i\in\mathcal{F}_n(R)$ and $G_i$ is Gorenstein flat for every $i\geq 0$. Applying the exact functor $\varinjlim$ to this commutative diagram, we get the induced exact sequence
$$\varinjlim A_m=: 0\to \varinjlim M_m\to \varinjlim F_m\to \varinjlim G_m\to 0$$
with $\varinjlim G_m\in \mathcal{GF}(R)\subseteq \mathcal{GF}_n(R)$ by \cite[Lemma 3.1]{YL12}, and $\varinjlim F_m\in \mathcal{F}_n(R)$ since the functor ${\rm Tor}^R_{n+1}(X,-)$ commutes with direct limits for any right $R$-module $X$. Hence, $M_\omega=\varinjlim M_m\in\mathcal{GF}_n(R)$.
Finally, by transfinite induction, the above argument generalizes and we get that $\varinjlim_{\alpha<\lambda} M_\alpha\in\mathcal{GF}_n(R)$, as desired.
\cqfd
The following two lemmas will be used in several places in this paper.
\begin{lem} \label{n-flat is PGC} The inclusion
$\mathcal{F}_n(R) \subseteq \mathcal{PGF}(R)^\perp$ holds true.
\end{lem}
{\parindent0pt {\bf Proof.\ }} It follows from the fact that $ \mathcal{PGF}(R)^\perp$ is thick by \cite[Theorem 4.9]{SS20} and $\mathcal{F}(R)\subseteq \mathcal{PGF}(R)^\perp$ by \cite[Theorem 4.11]{SS20}. \cqfd
\begin{lem} \label{n-flat cot-pair} $(\mathcal{F}_n(R),\mathcal{C}_n(R))$ is a perfect and hereditary cotorsion pair cogenerated by a set.
\end{lem}
{\parindent0pt {\bf Proof.\ }} This follows from \cite[Theorem 3.4(2)]{DM07} and its proof. \cqfd
The following characterization of $n$-Gorenstein flat modules is key to many results in this section. The case $n=0$ is due to \v{S}aroch and \v{S}\v{t}ov\'{\i}\v{c}ek \cite[Theorem 4.11]{SS20}.
\begin{thm} \label{Charac of GFn} The following assertions are equivalent for any $R$-module $M$.
\begin{enumerate}
\item $M$ is $n$-Gorenstein flat.
\item There exists a ${\rm Hom}_R(-,\mathcal{F}_n(R)\cap\mathcal{C}_n(R))$-exact exact sequence $$0\to K\to L\to M\to 0$$
where $K\in \mathcal{F}_n(R)$ and $L\in \mathcal{PGF}(R)$.
\item ${\rm Ext}^1_R(M,C)=0$ for every $C\in \mathcal{PGF}(R)^\perp\cap \mathcal{C}_n(R)$.
\item There exists an exact sequence of $R$-modules $$0\to M\to F\to N\to 0$$
where $F\in\mathcal{F}_n(R)$ and $N\in \mathcal{PGF}(R)$.
\end{enumerate}
Consequently, $\mathcal{F}_n(R)=\mathcal{GF}_n(R)\cap \mathcal{PGF}(R)^\perp$.
\end{thm}
{\parindent0pt {\bf Proof.\ }} $4.\Rightarrow 1.$ It follows by Proposition \ref{Gn-flat prop} since $F\in\mathcal{F}_n(R)\subseteq \mathcal{GF}_n(R)$ and $N\in \mathcal{PGF}(R)\subseteq \mathcal{GF}_n(R)$.
$1.\Rightarrow 4.$ Assume that ${\rm Gfd}_R(M)\leq n$ and proceed by induction on $n$.
The case $n=0$ follows from \cite[Theorem 4.11(4)]{SS20}.
Assume that $n\geq 1$. There exists an exact sequence $0\rightarrow G_n\rightarrow\cdots\rightarrow G_0\rightarrow M\rightarrow 0$, where $G_i\in \mathcal{GF}(R)$ for every $i=0,\dots,n$. Let $K={\rm Ker}(G_0\to M)$. Clearly, ${\rm Gfd}_R(K)\leq n-1$. So, by the induction hypothesis, there exists an exact sequence $0\rightarrow K\rightarrow F'\rightarrow N'\rightarrow 0,$ where $N'\in \mathcal{PGF}(R)$ and ${\rm fd}_R(F')\leq n-1$. Consider the pushout diagram $$\xymatrix{ & 0\ar[d] & 0\ar[d] & & \\ 0\ar[r] & K\ar[d] \ar[r] & G_0\ar@{-->}[d] \ar[r] & M \ar@{=}[d] \ar[r] & 0 \\ 0\ar[r] & F'\ar@{-->}[r]\ar[d] & D \ar[d] \ar[r] & M\ar[r] & 0 \\ & N'\ar@{=}[r] \ar[d] & N'\ar[d] & & \\ & 0 & 0}.$$
The middle column shows that $D$ is Gorenstein flat, being an extension of the PGF (hence Gorenstein flat) module $N'$ by the Gorenstein flat module $G_0$. Applying the case $n=0$ to $D$, we get a short exact sequence of $R$-modules $0\to D\to F_0\to N\to 0$ where $F_0$ is flat and $N\in\mathcal{PGF}(R)$.
Consider now another pushout diagram
$$\xymatrix{ & & 0\ar[d] & 0\ar[d] & \\ 0\ar[r] & F'\ar@{=}[d] \ar[r] & D\ar[d] \ar[r] & M\ar@{-->}[d] \ar[r] & 0 \\ 0\ar[r] & F'\ar[r] & F_0 \ar@{-->}[r]\ar[d] & F\ar[r] \ar[d] & 0 \\ & & N\ar[d]\ar@{=}[r] & N\ar[d] & \\ & & 0 & 0}$$
It remains to see that ${\rm fd}_R(F)\leq n$. But this is true by the middle row since $F_0$ is flat and ${\rm fd}_R(F')\leq n-1$.
$1.\Rightarrow 2.$ The case $n=0$ holds by \cite[Theorem 4.11(2)]{SS20}. So, we may assume that $n\geq 1$. By \cite[Theorem 4.9]{SS20}, there exists a special PGF precover $0\to N\to G\to M\to 0.$ By \cite[Proposition 7.9]{BEGO22}, ${\rm Gfd}_R(N)\leq {\rm sup}\{{\rm Gfd}_R(G),{\rm Gfd}_R(M)-1\}\leq n-1$ which implies by $(4)$ that there exists an exact sequence of $R$-modules $0\to N\to K\to G'\to 0$
where ${\rm fd}_R(K)\leq n-1$ and $G'\in\mathcal{PGF}(R)$. Consider the pushout
$$\xymatrix{ & 0 \ar[d] & 0 \ar[d] \\ 0 \ar[r] & N \ar[d] \ar[r] & G \ar[r] \ar@{-->}[d] & M \ar[r] \ar@{=}[d] & 0 \\ 0 \ar[r] & K \ar@{-->}[r] \ar[d] & L \ar[d] \ar[r] & M \ar[r] & 0 \\ & G' \ar@{=}[r]\ar[d] & G' \ar[d] \\ & 0 & 0}$$
Since $\mathcal{PGF}(R)$ is closed under extensions, $L\in\mathcal{PGF}(R)$. Then, there exists (by definition) an exact sequence $0\to L\to P\to L'\to 0$, with $P$ projective and $L'\in \mathcal{PGF}(R)$. Consider now another pushout
$$\xymatrix{ & & 0 \ar[d] & 0 \ar[d] \\ 0 \ar[r] & K \ar@{=}[d] \ar[r] & L \ar[r] \ar[d] & M \ar[r] \ar@{-->}[d] & 0 \\ 0 \ar[r] & K \ar[r] & P \ar[d] \ar@{-->}[r] & H \ar[r]\ar[d] & 0 \\ & & L' \ar[d]\ar@{=}[r] & L' \ar[d]\\ & & 0 & 0}.$$
Since ${\rm fd}_R(K)\leq n-1$ and $P$ is projective (hence flat), the middle row gives $H\in\mathcal{F}_n(R)$. It follows from the following commutative diagram with exact rows
$$\xymatrixcolsep{1.7pc}\xymatrix{ 0 \ar[r] &{\rm Hom}_R(H,X) \ar[r] \ar[d] &{\rm Hom}_R(P,X) \ar[r] \ar[d] &{\rm Hom}_R(K,X) \ar@{=}[d] \ar[r] & {\rm Ext}^1_R(H,X)=0\\
0\ar[r] &{\rm Hom}_R(M,X) \ar[r] &{\rm Hom}_R(L,X) \ar[r] & {\rm Hom}_R(K,X)
}$$
that $0\to K\to L\to M\to 0$ is ${\rm Hom}_R(-,X)$-exact for any module $X\in\mathcal{F}_n(R)\cap\mathcal{C}_n(R)$.
$ 2.\Rightarrow 3.$ Similar to \cite[Theorem 4.11(2)$\Rightarrow$(3)]{SS20}, using the fact that any module $C\in \mathcal{PGF}(R)^\perp\cap \mathcal{C}_n(R)$ has a special $n$-flat precover which exists by Lemma \ref{n-flat cot-pair}.
$ 3.\Rightarrow 4.$ Similar to \cite[Theorem 4.11(3)$\Rightarrow$(4)]{SS20} using the fact that every module $F\in \mathcal{PGF}(R)^\perp$ has a special $n$-flat precover which exists by Lemma \ref{n-flat cot-pair} and that $\mathcal{F}_n(R)\subseteq \mathcal{PGF}(R)^\perp$ by Lemma \ref{n-flat is PGC}.
Finally, it remains to show the equality. The inclusion $\mathcal{F}_n(R)\subseteq \mathcal{GF}_n(R)$ is clear, and by Lemma \ref{n-flat is PGC} we also have $\mathcal{F}_n(R)\subseteq \mathcal{PGF}(R)^\perp$. Then, $\mathcal{F}_n(R)\subseteq \mathcal{GF}_n(R)\cap \mathcal{PGF}(R)^{\perp}$. Conversely, if $M\in\mathcal{GF}_n(R)\cap \mathcal{PGF}(R)^{\perp}$, then by (4) there exists a split exact sequence of $R$-modules $0\to M\to F\to N\to 0$, where $N\in\mathcal{PGF}(R)$ and $F\in\mathcal{F}_n(R)$. Hence, $M$ is a direct summand of $F$, and so $M\in \mathcal{F}_n(R)$. \cqfd
The following result, which is Theorem A from the introduction, improves and generalizes \cite[Proposition 14.3.5]{Per16} and \cite[Lemma 4.4]{Xu17}: here we have a more explicit description of the right-hand class of the $n$-Gorenstein flat cotorsion pair, without any assumptions on the ring.
\begin{cor}\label{GFn cot pair} The pair $\left(\mathcal{GF}_n(R),\mathcal{PGF}(R)^\perp\cap \mathcal{C}_n(R)\right)$ is a perfect hereditary cotorsion pair cogenerated by a set.
Consequently, the following assertions hold.
\begin{enumerate}
\item[(a)] Every $R$-module has an $n$-Gorenstein flat cover and an $n$-Gorenstein cotorsion envelope.
\item[(b)] An $R$-module is $n$-Gorenstein cotorsion if and only if it is $n$-cotorsion and belongs to $\mathcal{PGF}(R)^\perp$.
\end{enumerate}
\end{cor}
{\parindent0pt {\bf Proof.\ }} Let $\mathcal{A}=\mathcal{GF}_n(R)$ and $\mathcal{B}=\mathcal{PGF}(R)^\perp \cap \mathcal{C}_n(R)$. By Theorem \ref{Charac of GFn}(3), $\mathcal{A}=\;^{\perp}\mathcal{B}$. On the other hand, clearly $\mathcal{PGF}(R)\cup \mathcal{F}_n(R)\subseteq \mathcal{GF}_n(R)$. Then, $\mathcal{A}^{\perp}=\mathcal{GF}_n(R)^{\perp}\subseteq \mathcal{F}_n(R)^\perp \cap \mathcal{PGF}(R)^\perp=\mathcal{B}$. Hence, $\mathcal{A}^{\perp}=\mathcal{B}$ and $(\mathcal{A},\mathcal{B})$ is a cotorsion pair.
Now we prove that this cotorsion pair is perfect. Since $\mathcal{GF}_n(R)$ is closed under direct limits by Proposition \ref{Gn-flat prop}, we only need to check that it is complete \cite[Corollary 2.3.7]{GT12}. Following \cite[Theorem 3.2.1(b)]{GT12}, it suffices to show that it is cogenerated by a set. By the proof of \cite[Theorem 4.9]{SS20}, $(\mathcal{PGF}(R),\mathcal{PGF}(R)^\perp)$ is a cotorsion pair cogenerated by a set $\mathcal{A}_1$. We also know by Lemma \ref{n-flat cot-pair} that the cotorsion pair $(\mathcal{F}_n(R),\mathcal{C}_n(R))$ is cogenerated by a set $\mathcal{A}_2$. Then, $\mathcal{PGF}(R)^\perp \cap \mathcal{C}_n(R)=\mathcal{A}_1^\perp \cap \mathcal{A}_2^\perp=(\mathcal{A}_1\cup\mathcal{A}_2)^\perp$. This means that our pair is cogenerated by a set and hence complete. \cqfd
Now we are ready to prove our second main result in this section. The following result improves \cite[Theorem 14.3.1]{Per16} and \cite[Theorem 4.5]{Xu17} in the sense that we need no extra assumptions on the ring $R$, and we obtain an explicit description of the trivial objects.
\begin{thm}\label{GFn mod struc} For any ring $R$, there exists a hereditary abelian model structure on $R$-Mod $$\mathcal{M}_n:=\left(\mathcal{GF}_n(R),\mathcal{PGF}(R)^\perp, \mathcal{C}_n(R)\right).$$ Equivalently, there exists a hereditary abelian model structure on $R$-Mod, in which the cofibrations coincide with the monomorphisms with $n$-Gorenstein flat cokernel, the fibrations coincide with the epimorphisms with $n$-cotorsion kernel, and $\mathcal{PGF}(R)^\perp$ is the class of trivial objects.
\end{thm}
{\parindent0pt {\bf Proof.\ }} By Theorem \ref{Charac of GFn}, $\mathcal{F}_n(R)=\mathcal{GF}_n(R)\cap \mathcal{PGF}(R)^\perp$. Then, by Lemma \ref{n-flat cot-pair} and Corollary \ref{GFn cot pair}, there are two complete and hereditary cotorsion pairs:
$$(\mathcal{GF}_n(R),\mathcal{PGF}(R)^\perp\cap\mathcal{C}_n(R)) \text{ and } (\mathcal{GF}_n(R)\cap \mathcal{PGF}(R)^\perp,\mathcal{C}_n(R)).$$
Finally, the class $\mathcal{PGF}(R)^\perp$ is thick by \cite[Theorem 4.9]{SS20} which completes the proof. \cqfd
Recall that a Frobenius category $\mathcal{A}$ is an exact category with enough injectives and projectives such that the projective objects coincide with the injective objects. In this case, we can form the stable category $\underline{\mathcal{A}}:=\mathcal{A}/\sim$, which has the same objects as $\mathcal{A}$ and ${\rm Hom}_{\mathcal{A}/\sim}(X,Y):={\rm Hom}_{\mathcal{A}}(X,Y)/\sim$, where $f\sim g$ if and only if $f-g$ factors through a projective-injective object.
The following result can be proved directly as in the case $n=0$ (see \cite[Theorem 4.5]{CET20}). However, we follow an alternative approach due to Gillespie \cite{Gil11}. Recall \cite[Sections 4 and 5]{Gil11} that for any hereditary Hovey triple $\mathcal{M}=(\mathcal{Q},\mathcal{W},\mathcal{R})$, the subcategory $\mathcal{Q}\cap \mathcal{R}$ is a Frobenius exact category (with the exact structure given by the short exact sequences with terms in $\mathcal{Q}\cap \mathcal{R}$) whose projective-injective objects are precisely those in $\mathcal{Q}\cap\mathcal{W}\cap\mathcal{R}$. Moreover, the stable category $\underline{\mathcal{Q}\cap\mathcal{R}}$ is triangle equivalent to ${\rm Ho}(\mathcal{M})$ (see also \cite[Proposition 4.2 and Theorem 4.3]{Gil16}). Consequently, we have the following corollary.
\begin{cor}\label{quas-Frob} The subcategory $\mathcal{GF}_n(R)\cap \mathcal{C}_n(R)$ of both $n$-Gorenstein flat and $n$-cotorsion $R$-modules is a Frobenius category. The projective-injective modules are given by objects in $\mathcal{F}_n(R)\cap\mathcal{C}_n(R)$. Moreover, the homotopy category of the $n$-Gorenstein flat model structure is triangle equivalent to the stable category
$\underline{\mathcal{GF}_n(R)\cap \mathcal{C}_n(R)}$.
\end{cor}
Let $(\mathcal{T},\Sigma)$ be a triangulated category with coproducts, where $\Sigma$ denotes its suspension autoequivalence. Recall that an object $C$ of $\mathcal{T}$ is compact if for each family $\{Y_i\,|\,i\in I\}$ of objects of $\mathcal{T}$, the canonical morphism
$$\bigoplus_{i\in I} {\rm Hom}_\mathcal{T}(C,Y_i)\to {\rm Hom}_\mathcal{T}(C,\bigoplus_{i\in I}Y_i)$$
is an isomorphism. The category $\mathcal{T}$ is called compactly generated if there exists a set $\mathcal{S}\subseteq \mathcal{T}$ of compact objects such that for each $0\neq Y\in\mathcal{T}$ there is a nonzero morphism $f:\Sigma^m S\to Y$ for some $S\in\mathcal{S}$ and $m\in\mathbb{Z}$.
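A classical example, recalled here for orientation: the unbounded derived category $\mathbf{D}(R)$ is compactly generated by the single object $R$, viewed as a complex concentrated in degree $0$. Indeed,
$${\rm Hom}_{\mathbf{D}(R)}(R,\Sigma^m Y)\cong H^m(Y),$$
and cohomology commutes with coproducts, so $R$ is compact; moreover, a complex all of whose cohomology vanishes is zero in $\mathbf{D}(R)$, so $\{R\}$ generates.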
It follows from Liang and Wang \cite[Theorem 1(a)]{LW20} that the stable category $\mathcal{E}_0=\underline{\mathcal{GF}(R)\cap\mathcal{C}(R)}$ is compactly generated when $R$ is right coherent with finite left global Gorenstein dimension. Based on their work, we show that this property is inherited by the stable categories $\underline{\mathcal{PGF}(R)}$ and $\underline{\mathcal{GF}_n(R)\cap\mathcal{C}_n(R)}$ for every integer $n\geq 0$, without the coherence assumption.
\begin{cor} \label{gener}There exist triangulated equivalences
$$\underline{\mathcal{GF}_n(R)\cap \mathcal{C}_n(R)} \simeq\cdots \simeq \underline{\mathcal{GF}_1(R)\cap \mathcal{C}_1(R)} \simeq \underline{\mathcal{GF}(R)\cap \mathcal{C}(R)}\simeq \underline{\mathcal{PGF}(R)}.$$
Furthermore, if ${\rm GPD}(R)<\infty$, then these triangulated categories are compactly generated.
\end{cor}
{\parindent0pt {\bf Proof.\ }} By Theorem \ref{GFn mod struc} and \cite[Theorem 4.9]{SS20} we have the model structures
$$\mathcal{M}_n=(\mathcal{GF}_n(R),\mathcal{PGF}(R)^\perp,\mathcal{C}_n(R)) \text{ and } \mathcal{N}=(\mathcal{PGF}(R), \mathcal{PGF}(R)^\perp, R\text{-Mod})$$
with the same trivial objects and the following inclusions
$$\mathcal{PGF}(R)\subseteq \mathcal{GF}(R)\subseteq \mathcal{GF}_1(R)\subseteq \cdots \subseteq \mathcal{GF}_n(R).$$
Applying \cite[Lemma 5.4]{EG19}, we get the following triangulated equivalences
$${\rm Ho}(\mathcal{M}_n) \simeq\cdots \simeq {\rm Ho}(\mathcal{M}_1) \simeq {\rm Ho}(\mathcal{M}_0)\simeq {\rm Ho}(\mathcal{N}).$$
On the other hand, following Corollary \ref{quas-Frob} and \cite[page 27]{SS20}, we have
$${\rm Ho}(\mathcal{M}_n)\simeq \underline{\mathcal{GF}_n(R)\cap \mathcal{C}_n(R)} \text{ and }{\rm Ho}(\mathcal{N})\simeq \underline{\mathcal{PGF}(R)}$$
as triangulated categories. Hence, the first statement follows.
Assume now that ${\rm GPD}(R)<\infty$. It follows by \cite[Corollary 8.8]{HMP23} that $R$ has finite global Gorenstein AC-projective dimension, i.e., any $R$-module has finite $\mathcal{GP}_{ac}(R)$-resolution dimension where $\mathcal{GP}_{ac}(R)$ denotes the class of Gorenstein AC-projective modules in the sense of \cite{BGH14}. Therefore, our last claim follows by \cite[Theorem 24, Proposition 21 and Corollary 33]{LW20} and the above triangulated equivalences.
\cqfd
It is well-known that the stable categories $\underline{\mathcal{GP}(R)}$ and $\underline{\mathcal{GI}(R)}$ are triangulated. Liang and Wang showed in \cite[Theorem 1]{LW20} that these triangulated categories are compactly generated whenever ${\rm GPD}(R)<\infty$ and $R$ is right or left coherent. Their proof is mainly based on \cite[Theorem 25]{LW20}. However, it turns out, by a recent result of Huerta, Mendoza and P\'{e}rez \cite[Corollary 8.8]{HMP23}, that the coherence assumption in \cite[Theorem 25]{LW20} is not needed. Therefore, following the same argument as in \cite[Corollary 34]{LW20}, we obtain the improved result:
\begin{cor} Let $R$ be a ring with ${\rm GPD}(R)<\infty$. Then,
$$\underline{\mathcal{GP}(R)}\simeq \underline{\mathcal{GI}(R)}\simeq \underline{\mathcal{GF}(R)\cap\mathcal{C}(R)}$$
are compactly generated.
\end{cor}
\section{PGF dimension of modules and rings}
\hskip .5cm In this section, we are interested in the PGF dimension of modules and rings and its connections with other homological dimensions.
The following proposition is key to many of the results that follow. It is a combination of \cite[Theorem 3.20]{Ste22} and \cite[Theorem 3.4$(i)\Leftrightarrow (ii)$]{DE22}, except for the last statement.
\begin{prop}\label{charac of PGFn} The following assertions are equivalent:
\begin{enumerate}
\item ${\rm {PGF}\mbox{-}pd}_R(M)\leq n.$
\item $M$ is a direct summand in a module $N$ such that there exists an exact sequence of $R$-modules
$$0\to N\to X\to N\to 0$$ where ${\rm pd}_R(X)\leq n$ and ${\rm Tor}_{n+1}^R(E,N)=0$ for any injective right $R$-module $E$.
\item There exists an exact sequence of $R$-modules $$0\to M\to X\to G\to 0$$ where $G\in\mathcal{PGF}(R)$ and ${\rm pd}_R(X)\leq n$.
\end{enumerate}
Consequently, $\mathcal{PGF}_n(R)\cap \mathcal{PGF}(R)^{\perp}=\mathcal{P}_n(R).$
\end{prop}
{\parindent0pt {\bf Proof.\ }} (1) is equivalent to (2) by \cite[Theorem 3.20]{Ste22} and to (3) by \cite[Theorem 3.4]{DE22}. We only have to prove the equality $\mathcal{PGF}_n(R)\cap \mathcal{PGF}(R)^{\perp}=\mathcal{P}_n(R)$.
Clearly $\mathcal{P}_n(R)\subseteq \mathcal{PGF}_n(R)$ and by Lemma \ref{n-flat is PGC}, $\mathcal{P}_n(R)\subseteq \mathcal{PGF}(R)^\perp$. Then, $\mathcal{P}_n(R)\subseteq \mathcal{PGF}_n(R)\cap \mathcal{PGF}(R)^{\perp}$. Conversely, if $M\in\mathcal{PGF}_n(R)\cap \mathcal{PGF}(R)^{\perp}$, then by (3) there exists an exact sequence of $R$-modules
$0\to M\to X\to G\to 0$ where $G\in\mathcal{PGF}(R)$ and ${\rm pd}_R(X)\leq n$. This sequence splits since ${\rm Ext}^1_R(G,M)=0$, and hence $M$, being a direct summand of $X$, belongs to $\mathcal{P}_n(R)$.
\cqfd
As a consequence, we get new characterizations of left ($n$-)perfect rings. Recall from \cite[Definition 2.1]{EJL05} that a ring $R$ is left $n$-perfect if every flat $R$-module has projective dimension at most $n$. In particular, left perfect rings are exactly left $0$-perfect rings.
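For instance, $\mathbb{Z}$ is left $1$-perfect but not left perfect: since ${\rm gldim}(\mathbb{Z})=1$, every (flat) $\mathbb{Z}$-module has projective dimension at most $1$, while the flat $\mathbb{Z}$-module $\mathbb{Q}$ is not projective.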
\begin{thm}\label{n-perf}
A ring $R$ is left $n$-perfect if and only if every Gorenstein flat $R$-module has PGF dimension at most $n$. In particular, the following assertions are equivalent:
\begin{enumerate}
\item $R$ is left perfect.
\item $\mathcal{GF}(R)=\mathcal{PGF}(R)$.
\item $\mathcal{PGF}(R)$ is closed under direct limits.
\item $\left(\mathcal{PGF}(R),\mathcal{PGF}(R)^\perp\right) $ is a perfect cotorsion pair.
\item $\mathcal{PGF}(R)$ is covering.
\item Every Gorenstein flat $R$-module has a PGF cover.
\item Every flat $R$-module has a projective cover.
\end{enumerate}
\end{thm}
{\parindent0pt {\bf Proof.\ }} $(\Leftarrow)$ Clearly $\mathcal{F}(R)\subseteq \mathcal{GF}(R)\subseteq \mathcal{PGF}_n(R)$ and by Lemma \ref{n-flat is PGC}, $\mathcal{F}(R)\subseteq \mathcal{PGF}(R)^\perp$. Thus, $\mathcal{F}(R)\subseteq \mathcal{PGF}_n(R)\cap \mathcal{PGF}(R)^\perp=\mathcal{P}_n(R)$ by Proposition \ref{charac of PGFn}. This means that $R$ is left $n$-perfect. $(\Rightarrow )$ Let $M$ be a Gorenstein flat $R$-module. By \cite[Theorem 3.5 and Proposition 3.6]{BM07}, $M$ is a direct summand in a module $N$ such that there exists an exact sequence of $R$-modules $0\to N\to F\to N\to 0$
with $F$ flat and ${\rm Tor}_{1}^R(E,N)=0$ for any injective right $R$-module. It follows that ${\rm Tor}_{k\geq 1}^R(E,N)\cong {\rm Tor}_{1}^R(E,N)=0$. Since $F\in\mathcal{F}(R)\subseteq \mathcal{P}_n(R)$, it follows from Proposition \ref{charac of PGFn} that $M\in \mathcal{PGF}_n(R)$.
We now prove the equivalences (1)-(7):
$(1)\Leftrightarrow (2)$ By \cite[Theorem 4.9]{SS20}. It also follows from the above equivalence.
$(2)\Rightarrow (3)$ By Proposition \ref{Gn-flat prop}(3).
$(3)\Rightarrow (4)$ By \cite[Theorem 4.9]{SS20} and \cite[Corollary 2.3.7]{GT12}.
$(4)\Rightarrow (5)$ Clear.
$(5)\Rightarrow (6)$ Clear.
$(6)\Rightarrow (7)$ Let $F$ be a flat $R$-module and $f:P\to F$ be a PGF cover. We claim that $f$ is a projective cover. By Wakamatsu's Lemma, $K:={\rm Ker} f\in \mathcal{PGF}(R)^\perp$. But $F\in\mathcal{F}(R)\subseteq \mathcal{PGF}(R)^\perp$ by Lemma \ref{n-flat is PGC} and $\mathcal{PGF}(R)\cap \mathcal{PGF}(R)^\perp=\mathcal{P}(R)$ by Proposition \ref{charac of PGFn}. Hence, $P\in \mathcal{PGF}(R)\cap \mathcal{PGF}(R)^\perp=\mathcal{P}(R)$. It remains to show that any morphism $g: P\to P$ such that $fg=f$ is an automorphism. But this holds true as $f$ is a PGF cover.
$(7)\Rightarrow (1)$ By the proof of \cite[Theorem (3)$\Rightarrow$ (1)]{EJ00}. \cqfd
Theorem \ref{n-perf} can be interpreted from the point of view of global dimensions. For this purpose (and for later use), we introduce the following homological invariant.
\begin{defn}The global PGF dimension of $R$ with respect to a class $\mathcal{X}\subseteq R$-Mod is defined as:
${\rm PGFD}_{\mathcal{X}}(R):={\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(X)|X\in\mathcal{X}\}.$
In particular, we set
$${\rm PGFD}(R):={\rm PGFD}_{R\text{-Mod}}(R)={\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(M)| M\in R\text{-Mod}\}$$
and we simply call it the global PGF dimension of $R$.
\end{defn}
We note that the global PGF dimension has been recently studied by Dalezios and Emmanouil \cite{DE22}.
\begin{cor}A ring $R$ is left $n$-perfect if and only if ${\rm PGFD}_{\mathcal{GF}(R)}(R)\leq n$.
In particular, $R$ is left perfect if and only if ${\rm PGFD}_{\mathcal{GF}(R)}(R)=0$.
\end{cor}
We now turn our attention to the questions raised in the Introduction.
\begin{rem} \label{GP1,2,3} Consider the following assertions:\\
(GP1) Any Gorenstein projective $R$-module is Gorenstein flat.\\
(GP2) Any $R$-module has a special Gorenstein projective precover.\\
(GP3) Any Gorenstein projective $R$-module is PGF, that is, $\mathcal{PGF}(R)=\mathcal{GP}(R).$
It follows that (GP3) is equivalent to (GP1) by \cite[Theorem 3]{Iac20} and implies (GP2) by \cite[Theorem 4.9]{SS20}. However, it is not clear whether (GP2) implies (GP1) or (GP3). Following this remark, our focus will be on (GP3).
\end{rem}
Instead of asking whether any Gorenstein projective module is PGF or not, one could ask the following question:
(Q4): How far is a Gorenstein projective module from being PGF?
\bigskip
The following result reduces this problem to two cases.
\begin{prop} \label{PGF-pd(GP)} The PGF dimension of a Gorenstein projective $R$-module $M$ is either zero or infinite.
Consequently, the global dimension ${\rm PGFD}_{\mathcal{GP}(R)}(R)$ is either zero or infinite.
\end{prop}
{\parindent0pt {\bf Proof.\ }} Assume $n={\rm {PGF}\mbox{-}pd}_R(M)<\infty$ and let us consider a special PGF precover of $M$ (which exists by \cite[Theorem 4.9]{SS20}): $0\to N\to G\to M\to 0.$
By \cite[Proposition 2.4(ii)]{DE22}, ${\rm {PGF}\mbox{-}pd}_R(N)\leq max\{{\rm {PGF}\mbox{-}pd}_R(G),{\rm {PGF}\mbox{-}pd}_R(M)\}\leq n.$
Hence, $N\in\mathcal{PGF}_n(R)\cap\mathcal{PGF}(R)^\perp=\mathcal{P}_n(R)$ by Proposition \ref{charac of PGFn}. Moreover, since $M\in\mathcal{GP}(R)$, there exists an exact sequence of $R$-modules $0\to M\to P_{-1}\to \cdots \to P_{-n}\to L\to 0$ with each $P_j$ projective. Hence, ${\rm Ext}^1_R(M,N)\cong {\rm Ext}^{n+1}_R(L,N)=0$ as $N\in\mathcal{P}_n(R)$ and then the above short exact sequence splits. Therefore, $M\in \mathcal{PGF}(R)$ and this means that ${\rm {PGF}\mbox{-}pd}_R(M)=0$. \cqfd
In order to continue our investigations, we need the notion of definable classes. Recall from \cite[Theorem 3.4.7]{Pre09} that a class of modules $\mathcal{D}$ is called definable if it is closed under direct products, direct limits and pure submodules. Such a class is, in particular, closed under direct sums and direct summands. We denote by $\langle\mathcal{X}\rangle$ the definable class generated by a class of modules $\mathcal{X}$, that is, the smallest definable
class containing $\mathcal{X}$. The definable closure $\langle \mathcal{X}\rangle$ can be constructed, for instance, by closing $\mathcal{X}$ under direct products, then under pure submodules and, finally, under pure quotients. In case $\mathcal{X}=\{X\}$, we simply write $\langle\mathcal{X}\rangle=\langle X\rangle$.
It has been shown in many recent works \cite{EIP20,CS21,SS20} that the definable classes $\langle R\rangle$ and $\langle R^+\rangle$ are of great importance. They will also be of interest to us. Thus, we propose the following definition, giving names to the modules belonging to these classes.
\begin{defn}An $R$-module is called definable flat if it belongs to the definable class $\langle R\rangle$. Dually, an $R$-module is called definable injective if it belongs to the definable class $\langle (R_R)^+\rangle$.
\end{defn}
Clearly, $\langle R\rangle=\langle \mathcal{P}(R)\rangle=\langle \mathcal{F}(R)\rangle$ and $\langle (_RR)^+\rangle=\langle\mathcal{I}(R^{op})\rangle =\langle\mathcal{FI}(R^{op})\rangle $. Moreover, $R$ is right coherent if and only if definable flat $R$-modules coincide with flats if and only if definable injective right $R$-modules coincide with FP-injectives (see \cite[Theorem 3.4.24]{Pre09}).
As we will see next, the class of definable flat modules is a key point for extending many results from the class of coherent rings to a larger class of rings.
\begin{lem}\label{<R>=Fn(R)} Given an integer $n\geq 0$, ${\rm fd}_R(\langle R\rangle)\leq n$ if and only if the direct product of any family of flat $R$-modules has flat dimension at most $n$.
Consequently, ${\rm fd}_R(\langle R\rangle)={\rm sup}\{{\rm fd}_R(\prod_iF_i)| \text{ $(F_i)_i$ is a family of flat $R$-modules} \}.$
\end{lem}
{\parindent0pt {\bf Proof.\ }} ($\Rightarrow $) For every family $(F_i)_i$ of flat $R$-modules, we have $F_i\in\langle \mathcal{F}(R)\rangle=\langle R\rangle$ and since $\langle R\rangle$ is closed under products, $\prod_i F_i\in \langle R\rangle$. Hence, ${\rm fd}_R(\prod_iF_i)\leq{\rm fd}_R(\langle R\rangle)\leq n$. ($\Leftarrow$) Assume that every direct product of flat $R$-modules has flat dimension at most $n$ and let $X\in \langle R\rangle=\langle \mathcal{F}(R)\rangle$. The definable class $\langle \mathcal{F}(R)\rangle$ can be constructed by closing $\mathcal{F}(R)$ under products, then under pure submodules and finally under pure quotients. By hypothesis, any direct product of flat $R$-modules belongs to $\mathcal{F}_n(R)$, and clearly $\mathcal{F}_n(R)$ is closed under pure submodules and pure quotients (since a short exact sequence $\mathcal{E}$ is pure if and only if its character sequence $\mathcal{E}^+$ is split). Hence, $\langle R\rangle=\langle \mathcal{F}(R)\rangle\subseteq\mathcal{F}_n(R)$, that is, ${\rm fd}_R(\langle R\rangle)\leq n$. \cqfd
Izurdiaga \cite{CI16} investigated rings for which the direct product of any family of flat $R$-modules has finite flat dimension. He calls them \cite[Section 4]{CI16} right weak coherent rings. Moreover, if $n$ is the maximum of the set consisting of all flat dimensions of all direct products of flat $R$-modules, then $R$ is called right weak $n$-coherent. In particular, right weak $0$-coherent rings are nothing but right coherent rings. It follows by Lemma \ref{<R>=Fn(R)} that $R$ is right weak coherent if and only if any definable flat module has finite flat dimension. In this case, $R$ is right weak $n$-coherent with $n={\rm fd}_R(\langle R\rangle)$.
We also recall from \cite{Gil10} the notion of Ding projective modules (which are introduced for the first time in \cite{DLM09} under a different name). An $R$-module $M$ is called Ding projective if it is a syzygy of a ${\rm Hom}_R(-,\mathcal{F}(R))$-exact exact sequence of projective $R$-modules. The Ding projective dimension of an $R$-module $M$ is defined as ${\rm Dpd}_R(M)={\rm resdim}_{\mathcal{DP}(R)}(M)$ where $\mathcal{DP}(R)$ denotes the class of Ding projective $R$-modules.
As a consequence of the above lemma, the assumption that $R$ is right coherent in \cite[Theorem 2]{Iac20} and \cite[Corollary 5.10]{CS21} can be replaced by the assumption that $R$ is right weak coherent.
\begin{cor}\label{PGF=DP} Assume that $R$ is a right weak coherent ring. Given an $R$-module $M$, we have ${\rm Dpd}_R(M)= {\rm {PGF}\mbox{-}pd}_R(M)$. In particular, $\mathcal{DP}(R)=\mathcal{PGF}(R)$ is the left-hand class of a complete cotorsion pair.
\end{cor}
{\parindent0pt {\bf Proof.\ }} We only need to show the equality $\mathcal{PGF}(R)=\mathcal{DP}(R)$. The inclusion $\mathcal{PGF}(R)\subseteq \mathcal{DP}(R)$ holds for any ring $R$. For the other inclusion, let $M\in\mathcal{DP}(R)$; then $M$ is a syzygy of a ${\rm Hom}_R(-,\mathcal{F}(R))$-exact exact sequence $\textbf{P}$ of projective $R$-modules. Let $X$ be a definable flat $R$-module. By Lemma \ref{<R>=Fn(R)}, $m={\rm fd}_R(X)<\infty$. Then, $X$ has a finite flat resolution $$0\to F_m\to \cdots \to F_0\to X\to 0.$$
Applying the functor ${\rm Hom}_R(\textbf{P},-)$ to this exact sequence, we get an exact sequence of complexes
$$0\to {\rm Hom}_R(\textbf{P},F_m)\to \cdots \to{\rm Hom}_R(\textbf{P},F_0) \to {\rm Hom}_R(\textbf{P},X)\to 0.$$ As the class of exact complexes is thick, it follows that the complex ${\rm Hom}_R(\textbf{P},X)$ is exact and hence $M\in\mathcal{PGF}(R)$.
The last claim follows by \cite[Theorem 4.9]{SS20}. \cqfd
One could, as in Lemma \ref{<R>=Fn(R)}, consider rings for which every definable flat has finite projective dimension. The following lemma, which we will use later, says that such rings are nothing new.
\begin{lem}\label{pd(<R>)} ${\rm pd}_R(\langle R\rangle)<\infty$ if and only if $R$ is right weak coherent and left $n$-perfect for some integer $n\geq 0$.
\end{lem}
{\parindent0pt {\bf Proof.\ }} $(\Rightarrow)$ Assume $n={\rm pd}_R(\langle R\rangle)<\infty$. Then, ${\rm pd}_R(X)\leq n$ for all $X\in \langle R\rangle$. In particular, ${\rm fd}_R(X)\leq {\rm pd}_R(X)\leq n$ for all $X\in \langle R\rangle$ and ${\rm pd}_R(X)\leq n$ for all $X\in\mathcal{F}(R)\subseteq \langle R\rangle$.
$(\Leftarrow)$ Assume that $m={\rm fd}_R(\langle R\rangle)<\infty$. Given a partial projective resolution of an $R$-module $M\in \langle R\rangle$
$$0\to K_m\to P_{m-1}\to \cdots P_0\to M\to 0,$$
we get that $K_m$ is flat and hence ${\rm pd}_R(K_m)\leq n$ as $R$ is left $n$-perfect. Therefore, ${\rm pd}_R(M)\leq n+m$. Thus, ${\rm pd}_R(\langle R\rangle)\leq n+m<\infty$. \cqfd
Now we are ready to give a partial answer to Question (Q3) raised in the Introduction. Recall that a ${\rm Hom}_R(-,\mathcal{P}(R))$-exact exact sequence of projective modules is called a complete projective resolution.
\begin{prop}\label{PGF=GP} $\mathcal{PGF}(R)=\mathcal{GP}(R)$ if and only if any complete projective resolution is ${\rm Hom}_R(-,\langle R\rangle)$-exact if and only if any complete projective resolution is $(\mathcal{I}(R^{op})\otimes-)$-exact.
In particular, this is the case if one of the following holds:
\begin{enumerate}
\item[(a)] $R$ is right weak coherent and left $n$-perfect for some integer $n\geq 0$.
\item[(b)] Every injective right $R$-module has finite flat dimension.
\end{enumerate}
\end{prop}
{\parindent0pt {\bf Proof.\ }} The equivalences follow immediately from \cite[Corollary 4.5]{SS20}.
(a) By the above, it suffices to show that any complete projective resolution $\textbf{P}$ is ${\rm Hom}_R(-,X)$-exact for any definable flat $R$-module $X$. But this can be shown as in the proof of Corollary \ref{PGF=DP}, using a finite projective resolution of $X$, which exists by assumption and Lemma \ref{pd(<R>)}.
(b) Follows by \cite[Proposition 9]{Iac20}.
\cqfd
\begin{rem}\label{coh to weak coh}
\begin{enumerate}
\item[(i)] Note that the assertion ``any complete projective resolution is ${\rm Hom}_R(-,\langle R\rangle)$-exact'' is equivalent to $\langle R\rangle\subseteq \mathcal{GP}(R)^\perp$. Therefore, Proposition \ref{PGF=GP} improves \cite[Theorem 4]{Iac20} in the sense that if we replace the class of flat modules with that of definable flat modules, we can drop the coherence assumption. We also note that every right coherent ring is right weak coherent, so condition (a) of Proposition \ref{PGF=GP} is more general.
\item[(ii)] In view of Proposition \ref{PGF=GP}, Remark \ref{GP1,2,3} and Theorem \ref{n-perf}, if $R$ is right weak coherent, then $R$ is left perfect if and only if $\mathcal{GP}(R)=\mathcal{GF}(R)$. This result improves \cite[Theorem 4.5 $(i)\Leftrightarrow (iv)$]{CET20}.
\end{enumerate}
\end{rem}
Consequently, partial answers to questions (Q1) and (Q2) are obtained.
\begin{cor} \label{Q1 and Q2} Assume that $R$ satisfies the equivalent conditions of Proposition \ref{PGF=GP}.
\begin{enumerate}
\item For any $R$-module $M$, we have ${\rm Gfd}_R(M)\leq {\rm Gpd}_R(M)$.
In particular, every Gorenstein projective $R$-module is Gorenstein flat.
\item $(\mathcal{GP}(R),\mathcal{GP}(R)^\perp)$ is a complete hereditary cotorsion pair.
In particular, every $R$-module has a special Gorenstein projective precover.
\end{enumerate}
\end{cor}
{\parindent0pt {\bf Proof.\ }} 1. We only need to show that $\mathcal{GP}(R)\subseteq \mathcal{GF}(R)$. But this follows from Proposition \ref{PGF=GP}, as $\mathcal{GP}(R)=\mathcal{PGF}(R)\subseteq \mathcal{GF}(R)$.
2. Follows by Proposition \ref{PGF=GP} and \cite[Theorem 4.9]{SS20}.
\cqfd
We now focus our attention on the global PGF dimension of $R$. As a first result, we provide simple ways to compute it (compare with \cite[Theorem 5.1]{DE22}).
\begin{thm}\label{charc of PGFD}The following assertions are equivalent:
\begin{enumerate}
\item ${\rm PGFD}(R)\leq n$.
\item The following two assertions hold.
\begin{enumerate}
\item ${\rm id}_R(M)\leq n$ for every definable flat $R$-module $M$.
\item ${\rm pd}_R(M)\leq n$ for every injective $R$-module $M$.
\end{enumerate}
\item The following two assertions hold.
\begin{enumerate}
\item ${\rm fd}_{R^{op}}(M)\leq n$ for every definable injective right $R$-module $M$.
\item ${\rm pd}_R(M)\leq n$ for every injective $R$-module $M$.
\end{enumerate}
\item The following two assertions hold.\begin{enumerate}
\item ${\rm fd}_{R^{op}}(M)\leq n$ for every (FP-)injective right $R$-module $M$.
\item ${\rm pd}_R(M)\leq n$ for every injective $R$-module $M$.
\end{enumerate}
\end{enumerate}
Consequently, the global PGF dimension of $R$ can be computed by the following formulas:\begin{eqnarray*}
{\rm PGFD}(R)&=&max\{{\rm pd}_R(\mathcal{I}(R)),{\rm id}_R(\langle R\rangle)\}\\
&=&max\{{\rm pd}_R(\mathcal{I}(R)),{\rm fd}_{R^{op}}(\langle R^+\rangle)\}\\
&=& max\{{\rm pd}_R(\mathcal{I}(R)),{\rm fd}_{R^{op}}(\mathcal{I}(R^{op}))\}.
\end{eqnarray*}
\end{thm}
{\parindent0pt {\bf Proof.\ }} $1.\Rightarrow 2.$ (a) Consider an $R$-module $N$ and a finite PGF resolution:
$$0\to G_n\to \cdots \to G_0\to N\to 0.$$
By \cite[Corollary 4.5]{SS20}, ${\rm Ext}^{k\geq 1}_R(G_i,M)=0$ for any $M\in\langle R\rangle$ and $i$. Hence, ${\rm Ext}^{n+1}_R(N,M)\cong {\rm Ext}_R^1(G_n,M)=0$. Then, ${\rm id}_R(M)\leq n$.
(b) Let $M$ be an injective $R$-module. Since ${\rm {PGF}\mbox{-}pd}_R(M)\leq n$, there exists a split short exact sequence $0\to M\to X\to G\to 0$ with ${\rm pd}_R(X)\leq n$ by Proposition \ref{charac of PGFn}(3). Hence, ${\rm pd}_R(M)\leq n$.
$2.\Rightarrow 3.$ We only prove (a) as (b) is clear. For any definable injective right $R$-module $M$, we have ${\rm fd}_{R^{op}}(M)\leq n$ if and only if ${\rm Tor}^R_{n+1}(M,-)=0$ if and only if $ {\rm Ext}_R^{n+1}(-,M^+)\cong{\rm Tor}^R_{n+1}(M,-)^+=0$ if and only if ${\rm id}_R(M^+)\leq n$. The latter holds by 2(a) and \cite[Remark 2.10]{EIP20} as $M^+\in\langle R\rangle$. Hence, ${\rm fd}_{R^{op}}(M)\leq n$.
$3.\Rightarrow 4.$ (b) is clear and (a) follows by $\mathcal{I}({R^{op}})\subseteq \mathcal{FI}({R^{op}})\subseteq \langle \mathcal{FI}({R^{op}})\rangle= \langle (R_R)^+\rangle$.
$4.\Rightarrow 1.$ Let $M$ be an $R$-module. Consider a projective and an injective resolution of $M$: $$\cdots \to P_1\to P_0\to M\to 0 \text{ and } 0\to M\to I_0\to I_1\to\cdots,$$
respectively. Decomposing these exact sequences into short exact ones we get, for every integer $i\in {\mathbb{N}}$,
$$0 \to L_{i+1}\to P_i\to L_i\to 0\text{ \;\;and\;\; }0\to K_i\to I_i\to K_{i+1}\to 0$$
where $L_i={\rm Coker}(P_{i+1}\to P_{i})$ and $K_i={\rm Ker}(I_i\to I_{i+1})$. Note that $M=L_0=K_0$.
Adding the direct sum of the first sequences,
$$0 \to \bigoplus_{i\in{\mathbb{N}}}L_{i+1}\to \bigoplus_{i\in{\mathbb{N}}}P_i\to M\oplus( \bigoplus_{i\in{\mathbb{N}}}L_{i+1})\to 0$$
to the direct product of the second ones,
$$0\to M\oplus( \prod_{i\in{\mathbb{N}}}K_{i+1})\to \prod_{i\in{\mathbb{N}}}I_i\to \prod_{i\in{\mathbb{N}}}K_{i+1}\to 0$$
we get an exact sequence of the form
$0\to N\to X\to N\to 0$, where $$X=(\bigoplus_{i\in \mathbb{N}} P_i)\oplus (\prod_{i\in \mathbb{N}} I_i)\text{ and } N=M\oplus \left((\bigoplus_{i\in \mathbb{N}} L_{i+1}) \oplus (\prod_{i\in \mathbb{N}} K_{i+1})\right).$$
By (b), ${\rm pd}_R(X)={\rm pd}_R(\prod_{i\in \mathbb{N}} I_i)\leq n$ and by (a), ${\rm Tor}^R_{n+1}(E,N)=0$ for any injective right $R$-module $E$. Since $M$ is a direct summand in $N$, it follows from Proposition \ref{charac of PGFn} that ${\rm {PGF}\mbox{-}pd}_R(M)\leq n$, as desired.
\cqfd
Global dimensions of rings are known to characterize classical classes of rings, and the global PGF dimension is no exception.
Recall that a ring $R$ is called quasi-Frobenius if injective left (resp., right) $R$-modules coincide with projective left (resp., right) modules.
\begin{cor}${\rm PGFD}(R)=0$ if and only if $R$ is quasi-Frobenius.
\end{cor}
{\parindent0pt {\bf Proof.\ }} $(\Rightarrow)$ By Theorem \ref{charc of PGFD}(2), we have $\mathcal{P}(R)\subseteq \langle R\rangle\subseteq \mathcal{I}(R)$ and $\mathcal{I}(R)\subseteq \mathcal{P}(R)$. Then, $\mathcal{P}(R)=\mathcal{I}(R)$ and therefore $R$ is quasi-Frobenius. $(\Leftarrow)$ If $R$ is quasi-Frobenius, then $\mathcal{I}(R)=\mathcal{P}(R)$ and $\mathcal{I}(R^{op})=\mathcal{P}(R^{op})\subseteq \mathcal{F}(R^{op})$. Hence, ${\rm PGFD}(R)=0$ by Theorem \ref{charc of PGFD}(4).
\cqfd
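For instance, for a field $k$, the ring $k[x]/(x^2)$ and, more generally, the group algebra $kG$ of any finite group $G$ are quasi-Frobenius, so these rings have global PGF dimension zero.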
At this point, it is natural to ask what the relationship between the global PGF dimension and the global Gorenstein projective dimension could be. Clearly, ${\rm GPD}(R)\leq {\rm PGFD}(R)$ as $\mathcal{PGF}(R)\subseteq \mathcal{GP}(R)$. Over an (Iwanaga-)Gorenstein ring $R$, the equality ${\rm GPD}(R)={\rm PGFD}(R)$ is an immediate consequence of results of Huang \cite[Theorems 4.9 and 4.13]{Hua22}. Dalezios and Emmanouil, on the other hand, have shown this equality in \cite[Theorem 5.1]{DE22} when $R$ has finite global PGF dimension.
Our last main result shows that the equality holds for any ring $R$.
\begin{thm} \label{PGFD=GPD}The global Gorenstein projective dimension of $R$ coincides with the global PGF dimension of $R$. That is, ${\rm GPD}(R)={\rm PGFD}(R).$
\end{thm}
{\parindent0pt {\bf Proof.\ }} The inequality ${\rm GPD}(R)\leq {\rm PGFD}(R)$ is clear since $\mathcal{PGF}(R)\subseteq \mathcal{GP}(R)$. If ${\rm GPD}(R)=\infty$, then ${\rm PGFD}(R)={\rm GPD}(R)$ and we are done.
Assume $n={\rm GPD}(R)<\infty$. By \cite[Corollary 2.7]{BM10}, ${\rm pd}_R(E)\leq n$ for every injective $R$-module $E$. On the other hand, following \cite[Examples 4.4(3)]{CI16}, $R$ is right weak coherent. It follows that ${\rm fd}_R(\langle R\rangle)<\infty$ by Lemma \ref{<R>=Fn(R)}. This implies that ${\rm id}_R(\langle R\rangle)\leq n$ by \cite[Corollary 2.7]{BM10}. Applying now Theorem \ref{charc of PGFD}(2), we get that ${\rm PGFD}(R)\leq n$.
\cqfd
We end the paper with some consequences of Theorem \ref{PGFD=GPD}.%
Gorenstein injective modules are seen as dual to Gorenstein projective modules. Based on this, one could introduce a dual notion of a PGF module. Let us call an $R$-module Gorenstein definable injective if it is a syzygy of a ${\rm Hom}_R(\langle R^+\rangle,-)$-exact exact sequence of injective $R$-modules. Define its Gorenstein definable injective dimension as ${\rm Gdid}(M)={\rm resdim}_{\mathcal{GDI}(R)}(M)$ where $\mathcal{GDI}(R)$ denotes the class of Gorenstein definable injective $R$-modules.
It is known that the global dimension of $R$ can be computed either by projective or injective dimensions, that is, the following equality holds:
$${\rm sup}\{{\rm pd}_R(M)|\text{ $M\in R$-Mod}\}={\rm sup}\{{\rm id}_R(M)|\text{ $M\in R$-Mod}\}.$$
This fact was extended to the Gorenstein setting by Enochs and Jenda \cite[Section 12.3]{EJ00} for (Iwanaga-)Gorenstein rings and later by Bennis and Mahdou \cite[Theorem 1.1]{BM10} for any ring.
Inspired by this, one could ask whether the equality
$${\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(M)|\text{$M\in R$-Mod}\}={\rm sup}\{{\rm Gdid}_R(M)| \text{ $M\in R$-Mod}\}$$
holds true as well.
Using again Theorem \ref{PGFD=GPD}, we give a positive answer to this question.
\begin{cor}\label{PGFD=GID=DID} For any ring $R$, we have the following equalities:
$${\rm PGFD}(R)={\rm GID}(R)={\rm sup}\{{\rm Gdid}_R(M)|\text{ $M$ is an $R$-module}\}.$$
\end{cor}
{\parindent0pt {\bf Proof.\ }} First, we have the equality ${\rm PGFD}(R)={\rm GID}(R)$ that follows by Theorem \ref{PGFD=GPD} and \cite[Theorem 1.1]{BM10}.
Clearly, ${\rm GID}(R)\leq {\rm sup}\{{\rm Gdid}(M)|\text{ $M\in R$-Mod}\}$ since $\mathcal{GDI}(R)\subseteq \mathcal{GI}(R)$. Assume now that $n={\rm GID}(R)<\infty$. If we prove that ${\rm id}_R(M)<\infty$ for any definable injective $R$-module $M$, we are done, as this condition implies that $\mathcal{GI}(R)=\mathcal{GDI}(R)$. Following \cite[Corollary 2.7]{BM10}, it suffices to show that ${\rm fd}_R(M)<\infty$ for any definable injective $R$-module. But, using \cite[Lemma 5.6(1)]{CS21}, it suffices to show that ${\rm fd}_R(M)<\infty$ for every FP-injective $R$-module $M$.
Let $_RM$ be FP-injective. Then, there exists a pure monomorphism $0\to M \hookrightarrow E$ with $_RE$ injective. Using again \cite[Corollary 2.7]{BM10}, we get that ${\rm fd}_R(E)\leq n$ and hence ${\rm fd}_R(M)\leq {\rm fd}_R(E)\leq n<\infty$ as desired.\cqfd
Auslander's Theorem on the global dimension states that we can compute the global dimension of $R$ by just computing the projective dimension of cyclic $R$-modules. That is, the formula ${\rm gldim}(R)={\rm sup}\{{\rm pd}_R(R/I)|\text{ $I$ is a left ideal}\}$ holds true. Bennis, Hu and Wang \cite[Theorem 1.1]{BHW15} extended this formula to the Gorenstein setting for commutative rings. However, one can see that the same proof also holds for non-commutative rings. Taking advantage of this fact, together with Theorem \ref{PGFD=GPD}, we get a PGF version of Auslander's Theorem.
\begin{cor}\label{Aus}(\textbf{Auslander's Theorem on the global PGF dimension}) For any ring $R$, we have the following formula:
$${\rm PGFD}(R)={\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(R/I)|\text{ $I$ is a left ideal}\}.$$
\end{cor}
{\parindent0pt {\bf Proof.\ }} The inequality ${\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(R/I)|\text{ $I$ is a left ideal}\}\leq {\rm PGFD}(R)$ is clear. We may assume that $n={\rm sup}\{{\rm {PGF}\mbox{-}pd}_R(R/I)|\text{ $I$ is a left ideal}\}<\infty$. Since $\mathcal{PGF}(R)\subseteq \mathcal{GP}(R)$, we have ${\rm Gpd}_R(R/I)\leq {\rm {PGF}\mbox{-}pd}_R(R/I)\leq n$ for every left ideal $I$, and therefore ${\rm sup}\{{\rm Gpd}_R(R/I)|\text{ $I$ is a left ideal}\}\leq n$. On the other hand, by the non-commutative version of \cite[Theorem 1.1]{BHW15}, we have that ${\rm GPD}(R)={\rm sup}\{{\rm Gpd}_R(R/I)|\text{ $I$ is a left ideal}\}\leq n$. Finally, using Theorem \ref{PGFD=GPD}, we get that ${\rm PGFD}(R)={\rm GPD}(R)\leq n$, which completes the proof. \cqfd
\bigskip
\noindent\textbf{Acknowledgement:} The author would like to thank Professor Marco A. P\'{e}rez for his kindness in sharing his book \cite{Per16} and Professor Luis Oyonarte for his comments on an earlier version of this paper and for his constant support.
\section{Introduction}\label{s:intro}
\IEEEPARstart{F}{ortran} has a long history in scientific programming and is
still in common use today~\cite{decyk2007fortran} in application codes for
climate science \cite{e3sm}, weather forecasting \cite{um}, chemical
looping reactors \cite{mfix}, plasma physics, and other fields.
As a domain-specific language for scientific computing, Fortran enables
modernization and improvement of application codes through the publication of new
standards specifications that define new features and intrinsic functions.
Unfortunately, these specifications take years to become available to Fortran
users: as of 2018, only six of eleven surveyed compilers fully implement even
half of Fortran 2008's new features \cite{chivers}.
Another approach to language extensibility is the distribution of libraries,
which can be developed and deployed much more rapidly than compilers. However,
most contemporary compiled-language scientific and computing libraries
are written in C and C++, and
their capabilities are either unavailable to Fortran users or exposed through
fragile C interface code.
Providing Fortran application developers with robust bindings to
high-performance C++ libraries will substantially and rapidly enrich their
toolset.
As available computing power increases, more scientific codes are improving
their simulations' fidelity by incorporating additional physics. Often
multiphysics codes rely on disparate pieces of scientific software written in
multiple programming languages. Increasingly complex simulations require
improved multi-language interoperability.
Furthermore, the drive to exascale scientific computing motivates the
replacement of custom numerical solvers in Fortran application code with
modern solvers, particularly those with support for distributed-memory
parallelism and device/host architectures.
The increasing necessity for robust, low-maintenance Fortran bindings\changed{---for
multi-language application developers and scientific library authors---}demands a
new tool for coupling Fortran applications to existing C and C++ code. This
article introduces such a tool, implemented as a new extension to the Simplified
Wrapper and Interface Generator (SWIG) tool~\cite{swig2003}. A set of simple
examples demonstrate the utility and scope of SWIG-Fortran, and timing measurements with a mock numerical library illustrate the minimal performance
impact of the generated wrapper code.
\section{Background}
For decades, Fortran application developers and domain scientists have required
capabilities that can only be implemented in a systems programming language such
as C. The MPI specification, which declares both C and Fortran interfaces,
provides a clear example: the MPICH implementation was written in C and to this
day uses a custom tool to automate the generation of a Fortran interface and the
underlying C binding routines \cite{gropp1996}.
Over the years, many attempts have been made to build a generic tool to generate
Fortran bindings to existing C and C++ code.
Early efforts explored manual generation of encapsulated \emph{proxy}
Fortran interfaces to C++ classes using Fortran 95 \cite{Gra1999}.
Some C scientific libraries such as Silo, HDF5, and
PETSc include custom tools to generate Fortran from their own
header files. Most of these tools use non-portable means to
bind the languages with the help of configuration scripts, because they were
initially developed before widespread
support for the Fortran 2003 standard~\cite{f2003}, which added features for
standardized interoperability with ISO C code.
Some newer software projects, such as the first iteration of a Fortran
interface~\cite{morris2012} to the Trilinos~\cite{trilinos} numerical library
that motivated this work, use Fortran 2003 features but are limited to manually
generated C interfaces to C++ code, with hand-written Fortran shadow
interfaces layered on those interfaces. The Babel tool~\cite{Epp2012} can
\emph{automatically} generate data
structures and glue code for numerous languages (including C++ and Fortran 2003)
from a custom interface description language, but it is suited for data
interoperability in a polyglot application rather than for exposing existing C
and C++ interfaces to Fortran.
Any practical code generation tool must be able to parse and interact with
advanced language features, such as C++ templates, which are critical to today's
parallel scientific libraries. The advent of GPU accelerators further
complicates inter-language translation through the data sharing imposed by the device/host
dichotomy. The maturity and flexibility of SWIG allow us to address
these and other emergent concerns in a generic tool for generating high-performance,
modern Fortran interfaces from existing C and C++ library header files with
minimal effort from the C++ library developers and no effort on the downstream
Fortran application developers.
\section{Methodology}\label{s:methodology}
The core functionality of SWIG is to parse library header files and generate
C-linkage wrapper interfaces to each function. It composes
these interfaces out of small code snippets called ``typemaps,'' responsible for
converting a type in C++ to a different representation of that type in a target
language. The new Fortran target language comprises a library of these
typemaps and a C++ ``language module'' compiled into the SWIG executable.
SWIG generates wrappers only for code specified in an interface file, which can
direct SWIG to process specified C and C++ header files. The interface file
also informs SWIG of additional type conversions and code transformations to be
made to the modules, procedures, or classes to be wrapped. Each invocation of
SWIG-Fortran with an interface file creates two source files: a C++ file with
C-linkage wrapper code that calls the C++ libraries, and a Fortran module file
that calls the C-linkage wrapper code (Fig.~\ref{f:swig_data}).
\begin{figure}[htb]
\centering
\includegraphics{fig/swig-data-extended.pdf}
\caption{Dependency flow for SWIG-generated code. Green arrows symbolize
``includes,'' yellow arrows are ``read by,'' red arrows are ``generates,''
and blue arrows are ``compiled and linked into.''}\label{f:swig_data}
\end{figure}
These generated source files must be compiled and linked together against the
C++ library being wrapped, and the resulting Fortran module can be directly used
by downstream user code. The generated files can be included in a C++ library
distribution without requiring library users or application developers to
install or even have any knowledge of SWIG.
As SWIG processes an interface file (or a header file referenced by that file),
it encounters C and C++ declarations that may generate several pieces of code.
Functions, classes, enumerations, and other declarations generate user-facing
Fortran equivalents and the private wrappers that transform data to pass it
between the Fortran user and the C++ library. These type-mapping transformations
are enabled by the Fortran 2003 standard, which mandates a set of interoperable
data types between ISO C and Fortran, defined in the \code{ISO\_C\_BINDING}
intrinsic module.
\subsection{Type conversions}\label{s:type_conversions}
\changed{%
SWIG-Fortran can pass all ISO C compatible data types between C++ and
Fortran without copying or transforming the data. Additional typemaps included
with the SWIG library provide transformations between C++ types and Fortran
types that are analogous but not directly compatible.}
These advanced type transformation routines shield Fortran application users
from the complexities of inter-language translation.
Consider the character string, which for decades has complicated C/Fortran binding
due to its different representation by the two languages.
The size of a C string is determined by counting the number of characters until
a null terminator \code{\textbackslash0} is encountered, but Fortran string
sizes are fixed at
allocation. The Fortran ISO C binding rules prohibit Fortran strings from being
directly passed to C; instead, SWIG-Fortran injects code to convert a string to a
zero-terminated array of characters, which \emph{can} interact with C.
\changed{A small C-bound struct containing the array's length and the address of its
initial element is passed to the C++ wrapper code, which then can instantiate a
\code{std::string} or pass a \code{char*} pointer to the C/C++ library.}
Returning a string from C++ similarly passes a pointer and length through the C
wrapper layer. The Fortran wrapper code creates an \code{allocatable} character
array of the correct size, copies in the data from the character array, and frees
the C pointer if necessary. Thus, C/C++ library routines that accept or return
strings can be called using \emph{native} types familiar to the Fortran user
without any knowledge of null terminators.
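A minimal C++ sketch of the (pointer, length) hand-off described above; the
names are hypothetical and differ from the identifiers in SWIG's actual
generated code:

```cpp
#include <cstddef>
#include <string>

// Hypothetical stand-in for the small C-bound struct passed across the
// language boundary: the address of the first character plus the count.
// No null terminator is required on the Fortran side.
struct FortranString {
    const char* data;
    std::size_t size;
};

// Wrapper side: rebuild a std::string from the (pointer, length) pair,
// which can then be handed to the wrapped C++ library.
std::string to_cpp_string(FortranString s) {
    return std::string(s.data, s.size);
}
```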
Another notable scalar type conversion defined by SWIG-Fortran is boolean
value translation. In C and C++, the \code{bool} type is defined to be
\emph{true} if
nonzero and \emph{false} if zero, whereas Fortran logical values are
defined to be true if the least significant bit is 1 and false otherwise.
The automated wrapper generation frees developers from having to understand
that the value 2 may be \code{true} in C but \code{false} in Fortran.
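The mismatch can be made concrete with a small sketch of the two truth rules
described above (the Fortran rule is the least-significant-bit convention
assumed in the text):

```cpp
// C/C++ truth: any nonzero value is true.
bool cpp_truth(int v) { return v != 0; }

// Fortran LOGICAL truth under the least-significant-bit convention:
// true only when bit 0 is set.
bool fortran_lsb_truth(int v) { return (v & 1) != 0; }
```

Under these rules the value 2 is true to C++ but false to a Fortran compiler
using the LSB convention, which is exactly the discrepancy the generated
wrappers hide.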
Finally, the SWIG typemap system also allows multiple C++ function arguments to
be converted to a single argument in the target language. This
allows a multi-argument C++ function
\begin{lstlisting}[language=C++]
double cpp_sum(const double* arr, std::size_t len);
\end{lstlisting}
to generate a Fortran function that accepts a native Fortran array:
\begin{lstlisting}[language=Fortran]
function cpp_sum(data) result(swig_result)
  real(C_DOUBLE) :: swig_result
  real(C_DOUBLE), dimension(:), intent(IN), target :: data
end function
\end{lstlisting}
by creating temporary arguments using the intrinsic Fortran \code{SIZE} and
\code{C\_LOC} functions.
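Filling in a possible body for the \code{cpp\_sum} declaration above (the body
is illustrative; only the signature comes from the text) shows what the C++
side receives after the Fortran shim applies \code{SIZE} and \code{C\_LOC}:

```cpp
#include <cstddef>

// Receives the raw (pointer, length) pair produced from the Fortran
// array descriptor by the generated shim code.
double cpp_sum(const double* arr, std::size_t len) {
    double total = 0.0;
    for (std::size_t i = 0; i < len; ++i) total += arr[i];
    return total;
}
```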
Advanced typemaps can be constructed to perform other transformations on the
input to facilitate the translation of C++ APIs into forms familiar to Fortran
application developers. For example, the wrappers may increment input parameters
by 1 so that library users can continue using the idiomatic 1-offset indexing of
Fortran rather than counting from 0 as in C++.
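A one-line sketch of such an index-shifting wrapper (a hypothetical function,
not code SWIG generates by default):

```cpp
#include <cstddef>
#include <vector>

// The Fortran caller passes a 1-based index; the wrapper decrements it
// before indexing the 0-based C++ container.
double get_element(const std::vector<double>& v, std::size_t fortran_index) {
    return v[fortran_index - 1];
}
```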
\subsection{Functions}
Functions in C/C++ are \emph{procedures} in Fortran. A function in C/C++ with a
\code{void} return value will translate to a \code{subroutine} in Fortran, and a function
returning anything else will yield a Fortran \code{function}.
Each function processed by SWIG-Fortran generates a single C-linkage shim function in
the C++ file. This thin wrapper converts the
function's arguments from Fortran- and C-compatible datatypes to the function's
actual C++ argument types, calls the function, and converts the result back to a
Fortran-compatible datatype. The wrapper function also implements other optional
features such as exception handling and input validation.
In the corresponding \code{.f90} module file, SWIG-Fortran generates a private,
\code{bind(C)}-qualified declaration of the C wrapper in an \code{INTERFACE}
block. This interface is called by a public Fortran shim procedure that translates
native Fortran datatypes to and from the C interface datatypes.
Figure~\ref{f:code_flow} demonstrates the control flow for a Fortran user
calling a C++ library function through the SWIG-generated module and wrapper
code.
\begin{figure}[htb]
\centering
\includegraphics[width=1.75in]{fig/code-flow.pdf}
\caption{Program flow for calling a C++ library from Fortran through
SWIG-generated wrappers.}\label{f:code_flow}
\end{figure}
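In outline, the C++-side shim reduces to an \code{extern "C"} symbol that the
Fortran \code{bind(C)} interface block references; a schematic example with
hypothetical names is:

```cpp
// The C++ library function being wrapped (hypothetical example).
inline double scale(double x) { return 2.0 * x; }

// Schematic C-linkage shim: a flat symbol that a Fortran bind(C)
// interface block can declare and call. Real generated wrappers also
// convert argument types and implement optional features such as
// exception handling.
extern "C" double swigc_scale(double x) {
    return scale(x);
}
```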
If a function is overloaded, SWIG-Fortran generates a unique, private Fortran shim procedure
for each overload. These procedures are then combined under a
\emph{separate module procedure} that is given a public interface with the original
function's name. For example, an overloaded free function \code{myfunc} in C++ will
generate two private procedures and add an interface to the module
specification:
\begin{lstlisting}[language=Fortran]
public :: myfunc
interface myfunc
  module procedure myfunc__SWIG_0, myfunc__SWIG_1
end interface
\end{lstlisting}
\changed{%
Since Fortran does not allow a module procedure or generic interface to contain
both functions (which return an object) \emph{and} subroutines (which return
nothing), SWIG-Fortran detects, ignores, and warns about such incompatible
overloaded C++ functions, e.g.,}
\begin{lstlisting}[language=C++]
void overloaded();
int overloaded(int);
\end{lstlisting}
Templated functions are supported, but since the wrapper code is generated
without any knowledge of the downstream Fortran user's code, each template
instantiation must be explicitly specified in the SWIG interface file:
\begin{lstlisting}[language=SWIG]
// C++ library function declaration:
template<class T> void do_it(T value);
// SWIG template instantiations:
%template(do_it_int) do_it<int>;
%template(do_it_real) do_it<float>;
\end{lstlisting}
The two instantiated subroutines can be called from user Fortran code on
integers and single-precision floating-point values, but since the example does
not instantiate on \code{bool}, the user's Fortran code cannot call
\code{do\_it} with a logical argument. Note that the chosen names for the
template instantiations (\code{do\_it\_int} and \code{do\_it\_real}) have no
bearing on how they are called from Fortran code, which is through the
original \code{do\_it} name.
The restriction of SWIG-time instantiation may be particularly limiting for
library functions that accept generic functors%
\changed{%
, including the C++11 lambda functions that are crucial to programming models such as Kokkos and RAJA.
However, SWIG's support for function pointer conversion does enable some
flexibility for user codes.} Instantiating a function template using a
\emph{function pointer} as the predicate operator allows the function to be used
with arbitrary functions with a fixed type signature:
\begin{lstlisting}[language=SWIG]
template<class T, class BinaryPredicate>
bool compare(T lhs, T rhs, BinaryPredicate cmp) {
  return cmp(lhs, rhs);
}
typedef bool (*cmp_int_funcptr)(int, int);
%template(compare) compare<int, cmp_int_funcptr>;
\end{lstlisting}
This instantiated \code{compare} function can be used in Fortran with an
arbitrary comparator that accepts integer arguments:
\begin{lstlisting}[language=Fortran]
result = compare(123_c_int, 100_c_int, c_funloc(my_comparator))
\end{lstlisting}
if the user has defined a function such as
\begin{lstlisting}[language=Fortran]
function my_comparator(left, right) bind(C) &
    result(is_less)
  use, intrinsic :: ISO_C_BINDING
  integer(C_INT), intent(in), value :: left
  integer(C_INT), intent(in), value :: right
  logical(C_BOOL) :: is_less
  is_less = modulo(left, 100_c_int) &
      < modulo(right, 100_c_int)
end function
\end{lstlisting}
This limited capability for inversion of control (i.e., C++ library code calling
Fortran application code) will be extended and documented in future work.
\subsection{Automated class generation}
Like the thin wrappers generated for procedures, SWIG also creates thin
proxy wrappers for C++ classes. Here, each Fortran \emph{derived type} mirrors a C++ class,
contains \emph{type-bound procedures} that mirror C++ member functions, and stores a
pointer to the C++ class alongside memory management flags. Its assignment and
memory management semantics mimic those of an allocatable Fortran pointer. SWIG supports single
inheritance by generating a derived type with the \code{EXTENDS} attribute, so
functions accepting a \code{class(Base)} argument will also accept a
\code{class(Derived)}.
Proxy types are instantiated via a module interface function that shares the
name of the derived type, giving construction the same syntax as in other
high-level languages and in other modern Fortran idioms \cite{rouson2012}.
Instances are nullified (and deallocated if owned) by calling the \code{release}
procedure.
The same proxy type can be used as an interface to a C++ class
pointer, const reference, or value; and it must be able to correctly transfer
ownership during assignment or when being returned from a wrapped C++ function.
To support this variety of similar but distinct use cases in a single instance
of a Fortran type, the proxy type stores a bit field of flags alongside the pointer to
the C++ object. One bit denotes ownership, another marks the instance as a C++
\code{rvalue}, and the third bit is set if the instance is \code{const}. The
\code{rvalue} bit is set only when returning a value from a function. A custom
Fortran \code{assignment(=)} generic function \emph{transfers} ownership when
this bit is set.
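The flag mechanics can be sketched in a few lines of C++; the names and layout
here are illustrative, not SWIG's actual generated code:

```cpp
// Illustrative flag bits stored alongside the wrapped object's pointer.
enum : unsigned { OWN = 0x1, RVALUE = 0x2, CONST_FLAG = 0x4 };

struct ProxyHandle {
    void* cptr = nullptr;
    unsigned flags = 0;
};

// Assignment semantics described above: an rvalue source *transfers*
// ownership to the destination, while an lvalue source yields a
// non-owning alias. (The real generated operator also deletes a
// previously owned object in the destination first.)
void assign(ProxyHandle& dst, ProxyHandle& src) {
    dst.cptr = src.cptr;
    if (src.flags & RVALUE) {
        dst.flags = src.flags & ~RVALUE;  // take ownership, clear rvalue
        src.flags = 0;
        src.cptr = nullptr;
    } else {
        dst.flags = 0;  // alias: never owns
    }
}
```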
The following block of code demonstrates the assignment semantics; it neither
leaks nor double-deletes memory.
\begin{lstlisting}[language=Fortran,numbers=right,numbersep=-10pt,stepnumber=1,numberstyle=\tiny,firstnumber=1]
type(Foo) :: owner, alias
owner = Foo(2)
owner = Foo(3)
alias = owner
call alias%release()
call owner%release()
\end{lstlisting}
In line 2, a new temporary \code{Foo} object is created and returned with the
\code{rvalue} and \code{own} flags set. The assignment on the same line
transfers ownership from that temporary object to the \code{owner} instance,
replacing the initial null C pointer and clearing the
\code{rvalue} flag. Line 3 creates another object, but when the SWIG-generated
assignment operator is called, it first \emph{deletes} the original \code{Foo}
object before capturing the new one. Without the special assignment operator,
memory would be leaked. The next assignment (line 4) copies the underlying C
pointer but not the memory flag, so that \code{alias} is a non-owning pointer to
the same object as \code{owner}. Line 5 clears the pointer but does not call any
destructor because it did not own the memory. The final line actually destroys
the underlying object because its ownership flag is set.
This methodology is an alternative to implementing the Fortran proxy type as a
shared pointer that relies on the Fortran \code{FINAL} feature
\cite{rouson2012}. This feature is intentionally avoided because it is
unimplemented or buggy in some recent compilers, sixteen years after the
specification of Fortran 2003 \cite{chivers}.
\subsection{Exception handling}
Since Fortran has no exception handling, any uncaught C++ exception from a
wrapped library call will immediately terminate the program.
With SWIG's \code{\%exception} feature, C++ exceptions
can be caught and handled by the Fortran code by setting and clearing an
integer flag. For example, assuming that one wants to use a conservative
square root function \code{careful\_sqrt} that throws an exception when a given
number is non-positive, the Fortran code could look like this:
\begin{lstlisting}[language=Fortran]
use except, only : careful_sqrt, ierr, get_serr
call careful_sqrt(-4.0)
if (ierr /= 0) then
  write(0,*) "Got error ", ierr, ": ", get_serr()
  ierr = 0
endif
\end{lstlisting}
where \code{ierr} is an integer flag that is nonzero on error, and
\code{get\_serr()} returns a string containing the exception message. This
approach allows Fortran code to
gracefully recover from exception-safe C++ code. For example, if a C++ numeric
solver throws an error if it fails to converge, the Fortran application would
be able to detect the failure, print a message, and write the unconverged
solution.
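On the C++ side, the generated wrapper's behavior can be sketched as follows;
identifiers are illustrative, and SWIG's generated names and storage differ:

```cpp
#include <cmath>
#include <stdexcept>
#include <string>

// Error state that the Fortran module would expose as `ierr` and
// `get_serr()` (illustrative globals).
static int swig_ierr = 0;
static std::string swig_serr;

// Wrapped function: catch any C++ exception, record a code and message,
// and return a benign value instead of terminating the program.
extern "C" double wrap_careful_sqrt(double x) {
    swig_ierr = 0;
    try {
        if (x <= 0.0) throw std::domain_error("non-positive argument");
        return std::sqrt(x);
    } catch (const std::exception& e) {
        swig_ierr = 1;
        swig_serr = e.what();
        return 0.0;
    }
}
```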
\subsection{HPC-oriented features}
The SWIG Fortran target language \changed{is able to wrap CUDA}
kernels using the Thrust C++ interface and use the resulting code with Fortran
OpenACC kernels. The implementation is designed to avoid the performance
penalty of copying between the host and device inside the wrapper layer: the
underlying device data pointer is seamlessly handed off between C++/CUDA and
Fortran.
Here is an example SWIG module that wraps the \code{thrust::sort} function to enable
sorting on-device data using a highly optimized kernel:
\begin{lstlisting}[language=SWIG]
%module thrustacc
%{
#include <thrust/sort.h>
%}
%inline %{
template<typename T>
static void thrust_sort(thrust::device_ptr<T> DATA, size_t SIZE) {
  thrust::sort(DATA, DATA + SIZE);
}
%}
%template(sort) thrust_sort<float>;
\end{lstlisting}
The corresponding test code simply calls the SWIG-generated \code{sort} function:
\begin{lstlisting}[language=Fortran]
program test_thrustacc
  use thrustacc, only : sort
  implicit none
  integer, parameter :: n = 64
  integer :: i
  real, dimension(:), allocatable :: a
  ! Generate n uniform numbers on [0,1)
  allocate(a(n))
  call random_number(a)
  write(0,*) a
  !$acc data copy(a)
  !$acc kernels
  do i = 1,n
    a(i) = a(i) * 10
  end do
  !$acc end kernels
  call sort(a)
  !$acc end data
  write(0,*) a
end program
\end{lstlisting}
Note that the OpenACC \code{data copy} region encloses both the native Fortran
OpenACC kernel and the SWIG-wrapped Thrust kernel, demonstrating that sharing
the array between Fortran and C++ requires no additional host/device data
movement.
SWIG also supports automatic conversion of MPI communicator handles between the
Fortran \code{mpi} module and MPI's C API by generating wrapper code that calls
\code{MPI\_Comm\_f2c}. The SWIG interface code
\begin{lstlisting}[language=SWIG]
void set_my_comm(MPI_Comm comm);
\end{lstlisting}
will generate a wrapper function that can be called from Fortran using the
standard \code{mpi} module and its communicator handles:
\begin{lstlisting}[language=Fortran]
use mpi
call set_my_comm(MPI_COMM_WORLD)
\end{lstlisting}
\subsection{Direct C binding}
A special \code{\%fortranbindc} directive in SWIG-Fortran will bypass wrapper
function generation and instead build direct \code{bind(C)} public function
interfaces in the Fortran module for C-linkage functions. A similar directive,
\code{\%fortranbindc\_type}, will generate \code{bind(C)} derived types in
Fortran from C-compatible structs, with no wrapper code. Another directive
\code{\%fortranconst} will generate \code{parameter}-qualified Fortran constants
from \code{\#define} macros, and SWIG will further generate Fortran C-bound
enumerations from C \code{enum} types.
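For illustration, declarations of the following shape (a hypothetical example)
are directly expressible in Fortran and thus candidates for
\code{\%fortranbindc} and \code{\%fortranbindc\_type}:

```cpp
// C-linkage declarations using only C-compatible types need no glue
// code: the enum, struct, and function map directly onto a Fortran
// C-bound enumeration, a bind(C) derived type, and a bind(C) interface.
extern "C" {

enum Status { STATUS_OK = 0, STATUS_FAIL = 1 };

struct Point {
    double x;
    double y;
};

double point_norm2(Point p) { return p.x * p.x + p.y * p.y; }

}  // extern "C"
```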
\subsection{Language features}
The SWIG Fortran module maps many other C++ capabilities to Fortran.
Table~\ref{t:features} lists features supported by SWIG and their
implementation status in the Fortran module. SWIG currently has only limited
support for C++11 features, so these are omitted from the table.
\begin{table}[htb]
\centering
\caption{List of C++ features supported by SWIG and their implementation
status (\ding{51}/\ding{86}/\ding{55}\ for full/partial/none) in SWIG-Fortran.}
\label{t:features}
\begin{tabular}{lc}
\toprule
Feature & Status \\
\midrule
Data type conversion & \ding{51} \\
\quad Fundamental types & \ding{51} \\
\quad C strings & \ding{51} \\
\quad Pointers and references & \ding{51} \\
\quad Function pointers, member function pointers & \ding{51} \\
\quad POD structs & \ding{51} \\
\quad Arrays & \ding{86} \\
\quad Shared pointers & \ding{51} \\
\quad \code{std::string} & \ding{51} \\
\quad \code{std::vector} & \ding{51} \\
\quad \code{thrust::device\_ptr} & \ding{51} \\
\quad Other standard library containers & \ding{86} \\
Functions & \ding{51} \\
\quad Default arguments & \ding{51} \\
\quad Overloading & \ding{86} \\
\quad Operator overloading & \ding{55} \\
Templates & \ding{51} \\
Classes & \ding{51} \\
\quad Member data & \ding{51} \\
\quad Inheritance & \ding{86} \\
\quad Multiple inheritance & \ding{55} \\
Constants & \ding{51} \\
\quad Enumerations & \ding{51} \\
\quad Compile-time constants & \ding{51} \\
Exceptions & \ding{51} \\
\bottomrule
\end{tabular}
\end{table}
\section{Applications}\label{s:examples}
This section presents two examples of the capabilities outlined in the previous
section, together with a discussion of their performance: \textsl{(i)} wrapping
a standard C++ library sort function, and \textsl{(ii)} accessing a sparse
matrix--vector multiplication computational kernel through a generated
interface.
\subsection{Sorting}
Sorting arrays is a common operation in problem setup and data mapping
routines, and the C++ standard library provides generic algorithms for efficient
sorting in its \code{<algorithm>} header. In contrast, Fortran application
developers must choose a sorting algorithm and implement it by hand for each
data type, an approach with many shortcomings. Application developers
must understand that a naive sorting algorithm that works well for desktop-sized
problems may not be performant for exascale-sized problems. Manually implementing
a robust version of a sorting algorithm is also notoriously error-prone.
Finally, having to instantiate the implemented algorithm for each data type
increases development and maintenance cost.
Using an externally supplied, efficient, and well-tested algorithm is clearly a
better approach.
The following self-contained SWIG interface wraps the C++-supplied \code{sort} implementation
into a generic Fortran subroutine that operates on either integer or real Fortran arrays:
\begin{lstlisting}[language=SWIG,numbers=right,numbersep=-10pt,stepnumber=1,numberstyle=\tiny,firstnumber=1]
%module algorithm
%{
#include <algorithm>
%}
%inline %{
template<class T>
void sort(T *ptr, size_t size) {
  std::sort(ptr, ptr + size);
}
%}
%apply (SWIGTYPE *DATA, size_t SIZE) {
  (int *ptr, size_t size),
  (double *ptr, size_t size)
};
%template(sort) sort<int>;
%template(sort) sort<double>;
\end{lstlisting}
The first line declares the name of the resulting Fortran module. Lines 2--4
insert the standard library include into the generated C++ wrapper code, and the
following \code{\%inline} block both \emph{inserts} the code into the wrapper
and \emph{declares} it to SWIG. Lines 11--14 inform SWIG that a special predefined
typemap (which treats two C++ pointer/size arguments as a single Fortran array
pointer argument; see~\S\ref{s:type_conversions}) should be applied to the
function signature of the declared
\code{sort} function. The final two lines direct SWIG to instantiate the sort
function for both integer and double-precision types.
Fortran application developers need not understand or even see any of the above
code; they merely link against the compiled SWIG-generated files and use the
wrapped function as follows:
\begin{lstlisting}[language=Fortran]
use algorithm
integer(c_int), dimension(:), allocatable :: x
real(c_double), dimension(:), allocatable :: y
! ... Allocate and fill x and y ...
call sort(x)
call sort(y)
\end{lstlisting}
A developer might wonder whether using C++ instead of Fortran for a numeric
algorithm will slow their code, so Table~\ref{t:sort} compares the performance
of two Fortran codes that sort an array of $N$ random real numbers. The first
code implements a standard Fortran quicksort numerical
recipe~\cite{fortran90guide}, and the second calls the SWIG-wrapped C++
function. Both experiments were compiled using GCC 7.3.0 and run on an Intel Xeon E5-2620 v4
workstation, and the timings were averaged across 40 runs to reduce variability.
\begin{table}
\caption{Performance comparison of $N$-element array sort.}\label{t:sort}
\centering
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{4}{c}{Time (s) for $N={}$} \\
Implementation & $10^4$ & $10^5$ & $10^6$ & $10^7$ \\
\midrule
Native Fortran quicksort & 0.0011 & 0.0104 & 0.1132 & 1.3058 \\
Wrapped C++ \code{std::sort} & 0.0008 & 0.0074 & 0.0877 & 1.0189 \\
\bottomrule
\end{tabular}
\end{table}
The external C++ sort actually outperforms the native Fortran implementation,
likely due to its algorithmic implementation: the standard C++ \code{std::sort}
function uses \emph{introsort}, which has the algorithmic strengths of both
heapsort and quicksort \cite{musser1997} but is lengthier to describe and
implement. With SWIG-Fortran, however, adopting the advanced algorithm requires
no implementation effort at all.
\subsection{Sparse matrix multiplication}
The sparse matrix--vector multiplication (SpMV) algorithm is a
well-known computational kernel commonly used in linear algebra. Sparse
matrices, which have relatively few nonzero elements, are typically stored and
operated on in memory-efficient formats such as compressed row storage
(CRS)~\cite{davis2006direct}. In CRS format, a matrix with $E$ nonzero
entries and $M$ rows is stored in three arrays. The first two,
\code{vals} and \code{col\_inds}, each have $E$ elements: the values and column
indices, respectively, of each nonzero matrix entry. A third array,
\code{row\_ptrs}, with $M+1$ elements, stores the offset of the first nonzero
entry in each row of the matrix; its final element is the total number of
nonzeros $E$.
With CRS, the sparse matrix-vector multiplication algorithm consists of two
nested loops:
\begin{lstlisting}[language=Fortran]
! Given matrix A stored in CRS format
! and vectors x and y, compute y = Ax
do i = 1, M
  ! Loop over nonzero entries in row i
  do j = row_ptrs(i), row_ptrs(i+1)-1
    y(i) = y(i) + vals(j) * x(col_inds(j))
  end do
end do
\end{lstlisting}
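For comparison with the wrapped implementation, the same kernel can be written
in 0-based C++ (an illustrative translation, with \code{row\_ptrs} of length
$M+1$ as described above):

```cpp
#include <cstddef>
#include <vector>

// y = A*x for a CRS matrix; row i spans [row_ptrs[i], row_ptrs[i+1]).
std::vector<double> spmv(const std::vector<std::size_t>& row_ptrs,
                         const std::vector<std::size_t>& col_inds,
                         const std::vector<double>& vals,
                         const std::vector<double>& x) {
    std::vector<double> y(row_ptrs.size() - 1, 0.0);
    for (std::size_t i = 0; i + 1 < row_ptrs.size(); ++i)
        for (std::size_t j = row_ptrs[i]; j < row_ptrs[i + 1]; ++j)
            y[i] += vals[j] * x[col_inds[j]];
    return y;
}
```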
For the SWIG-wrapped SpMV algorithm, the data associated with the matrix is stored in a
C++ class and accessed through a Fortran interface. To analyze the performance,
three interfaces with different access granularity are considered. The coarsest
provides access to the full matrix, i.e.~three arrays containing the sparse
structure. The intermediate-granularity interface returns a single row of
nonzero entries for each call, and the finest interface provides functions
to access individual elements in the matrix. The three granularities require
different numbers of calls to the SWIG interface:
the matrix-granularity interface calls C++ only three times total, the
row-level interface calls C++ twice per matrix row, and the finest calls it
twice per nonzero matrix entry.
This example uses a simple 2D Laplacian matrix corresponding to a discretization
of the Poisson equation on a 2D $n \times n$ Cartesian grid. Such a matrix has
five nonzero diagonals, at offsets $-n$, $-1$, $0$, $+1$, and $+n$ from the
main diagonal. The ``Standard'' column of Table~\ref{t:spmv}
presents the SpMV execution timings results for $n = 3000$.
\begin{table*}[t]
\caption{SpMV time for 2D Laplacian matrix on a $3000 \times 3000$ grid.}
\label{t:spmv}
\centering
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{Interface} & \multirow{2}{*}{Standard} & \multicolumn{2}{c}{LTO} & \multicolumn{2}{c}{LTO (null pointer check)} \\
\cmidrule(lr){3-4}\cmidrule(lr){5-6}
&& Library only & All & Library only & All \\
\midrule
Matrix (coarse) & 0.095 & 0.095 & 0.076 & 0.095 & 0.077 \\
Row (intermediate) & 0.184 & 0.162 & 0.067 & 0.158 & 0.163 \\
Element (fine) & 0.684 & 0.505 & 0.215 & 0.531 & 0.325 \\
\bottomrule
\end{tabular}
\end{table*}
As expected, the wrapper-call overhead grows with the number of calls through
the interface, resulting in a factor-of-seven slowdown from the coarsest to the
finest granularity.
Since the compiler is unable to
inline the C++ wrapper function into the Fortran application code, the optimizer
must make unnecessarily conservative approximations that hurt performance. One
workaround is to use link-time optimization (LTO)~\cite{gcc_lto}, which
compiles user code into an intermediate representation (IR) rather than assembly
code. The IR from multiple translation units, even from different languages, can be
inlined and optimized together during the link stage.
In the case of a C++ library bundled with a SWIG-generated Fortran interface,
LTO would be applied during the library's installation to the
original C++ code, the flattened C++ wrapper file, and the Fortran module file.
However, if the SWIG-generated code is part of an application, the Fortran user
code can be built with LTO as well. The ``LTO'' column of
Table~\ref{t:spmv} compares the performance of the SpMV test code for these two
hypothetical situations against the ``standard'' case of no LTO. Enabling LTO as
a library improves performance modestly for the finest-grained case, but as part
of an entire application toolchain it results in dramatic (3$\times$)
performance improvements for the fine-grain interfaces.
Note that the benefits of LTO depend on the complexity of the generated wrapper
code. Adding an assertion to check for null pointers reduced LTO-provided
performance by a factor of 1.5--2$\times$, shown in the last column of
Table~\ref{t:spmv}. Thus, the wrapper interface writer, who has a degree of
control over the generated code, should consider the tradeoff between stability
and performance.
\section{Conclusion}\label{s:conclusion}
This article introduces a new, full-featured approach to generating modern
Fortran interfaces to C++ code\changed{, allowing C++ library developers to
easily expose their work's functionality to Fortran application developers,
and potentially improving the coupling between the two different languages in
the same application}.
By leveraging the SWIG automation tool, it
supports many C++ features critical to contemporary scientific software
libraries, including inheritance, templates,
and exceptions. Future work will demonstrate SWIG's utility in exposing the
Trilinos and \changed{SUNDIALS} numerical libraries to pre-exascale Fortran application
codes.
The developed software, examples, and performance code presented here are
available under open-source licenses at \url{https://github.com/swig-fortran}.
\section*{Acknowledgments}
This research was supported by the Exascale Computing Project (17-SC-20-SC),
a joint project of the U.S. Department of Energy's Office of Science and
National Nuclear Security Administration, responsible for delivering a capable
exascale ecosystem, including software, applications, and hardware technology,
to support the nation’s exascale computing imperative.
This research used resources of the Oak Ridge Leadership Computing Facility at
the Oak Ridge National Laboratory, which is supported by the Office of Science
of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
\section{Introduction}\label{sec:Inc}
Vaccination is critical for the prevention and control of infectious diseases; to date, more than 20 life-threatening diseases can be prevented by vaccines.
A vaccine induces immunity by prompting the immune system to recognize a foreign substance; antibodies against the pathogen (or a similar pathogen) are then screened and produced, giving the vaccinated individual a high level of resistance to the disease.
In \cite{LiuTakeuchiShingoJTB2008}, Liu et al. proposed the following system with a continuous vaccination strategy:
\begin{equation}
\label{PrModel1}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle \frac{{\rm d} S(t)}{{\rm d} t} = \Lambda - \beta_1 S(t)I(t) - \alpha S(t) - \mu S(t),\\
\vspace{2mm}
\displaystyle \frac{{\rm d} V(t)}{{\rm d} t} = \alpha S(t) - \beta_2 V(t)I(t) - (\gamma_1 + \mu) V(t),\\
\vspace{2mm}
\displaystyle \frac{{\rm d} I(t)}{{\rm d} t} = \beta_1 S(t)I(t) + \beta_2 V(t)I(t) - \gamma I(t) - \mu I(t),\\
\displaystyle \frac{{\rm d} R(t)}{{\rm d} t} = \gamma_1V(t) + \gamma I(t)- \mu R(t),
\end{array}\right.
\end{equation}
where $S(t),\ V(t), \ I(t)$ and $R(t)$ are the densities of susceptible, vaccinated, infective and removed individuals at time $t$, respectively.
The biological interpretations of the parameters of model (\ref{PrModel1}) are given in Table \ref{tab}.
\begin{table}[h]
\centering
\begin{tabular}{cl}
\hline
Parameter & \hspace{0.5cm}Interpretation \\
\hline\hline
$\Lambda$ & Recruitment rate \\
$\beta_1$ & Disease transmission rate between infectious and susceptible individuals \\
$\beta_2$ & Disease transmission rate between infectious and vaccinated individuals \\
$\alpha$ & The vaccinated rate \\
$\mu$ & Natural death rate \\
$\gamma$ & Recovery rate \\
$\gamma_1$ & Rate at which a vaccinating individual obtains immunity \\
\hline \\
\end{tabular}
\caption{Biological meaning of parameters in model (\ref{PrModel1}).}
\label{tab}
\end{table}
In \cite{LiuTakeuchiShingoJTB2008}, the authors showed that the disease-free equilibrium of model (\ref{PrModel1}) is globally asymptotically stable (GAS) if the basic reproduction number is less than one, while if it is greater than one, a positive endemic equilibrium exists and is GAS.
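For orientation, the threshold quantity has an explicit form: model (\ref{PrModel1}) admits the disease-free equilibrium with susceptible and vaccinated components $S^0 = \Lambda/(\alpha+\mu)$ and $V^0 = \alpha\Lambda/[(\alpha+\mu)(\gamma_1+\mu)]$, and a standard next-generation computation (a routine derivation, sketched here for the reader's convenience) yields the basic reproduction number
\begin{equation*}
\Re_0 = \frac{\beta_1 S^0 + \beta_2 V^0}{\gamma + \mu}.
\end{equation*}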
Since then, the epidemic models with vaccination have attracted the attention of many scholars.
Kuniya \cite{KuniyaNARWA2013} extended the study in \cite{LiuTakeuchiShingoJTB2008} to a multi-group case and studied its global stability using the graph-theoretic approach and the Lyapunov method.
Considering the effect of age, three vaccination epidemic models with age structure were proposed in \cite{DuanYuanLiAMC2014,WangZhangKuniyaIMAJAM2016,WangGuoLiuIMAJAM2017}, and their global stability was studied.
For more recent studies on the vaccination epidemic models, we refer to \cite{WangWangZhangMMAS2022,HuoJAAC} and the references therein.
With the increasing trend of globalization and the mobility of people, the spatial structure of human density and location has a significant impact on the spread of diseases, so it is necessary to investigate the role of diffusion in epidemic modeling. Mathematically, the Laplacian operator in reaction-diffusion systems is well suited to infectious disease models with diffusion, since it describes the random diffusion of each individual in the adjacent space. On the other hand, nonlocal operators can describe long-range diffusion over the whole habitat \cite{LiLiYangAMC2014}.
In the study of local and nonlocal diffusive epidemic models, an important class of solutions is the traveling wave solution (TWS). From the perspective of infectious diseases, the existence of a TWS for an epidemic model implies that the disease can invade the population \cite{LiLiLinCPAA2015}.
Up to now, there have been many studies on the TWS for local and nonlocal diffusive epidemic models (see, for example, \cite{Hosono,DucrotMagalNon2011,WangXuJAAC2014,WangMa2017,XuLiLinJDE2019,ZhangLiuFengJinAML2021}).
By considering both vaccination and spatial diffusion, Xu et al. \cite{XuXuHuangCMA2018} studied a local diffusive SVIR model, establishing the global dynamics on a bounded domain and the existence of TWS on an unbounded domain.
Meanwhile, the problem of TWS for two different SVIR models with nonlocal diffusion was investigated in \cite{ZhangLiuMBE2019,ZhouYangHsuDCDSB2020}.
Besides local and nonlocal diffusion, a third type of diffusion arises in infectious disease modeling: discrete diffusion. In fact, an epidemic model with discrete diffusion can be regarded as a lattice system, and such systems are well suited to describing epidemic models with patch structure \cite{SanHeCPAA2021}. Recently, Chen et al. \cite{ChenGuoHamelNon2017} proposed a lattice SIR epidemic model:
\begin{equation}
\label{PreModel}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle \frac{{\rm d} S_n}{{\rm d} t}= [S_{n+1} + S_{n-1} - 2S_n] + \mu - \beta S_nI_n - \mu S_n,\\
\displaystyle \frac{{\rm d} I_n}{{\rm d} t}= d[I_{n+1} + I_{n-1} - 2I_n] + \beta S_nI_n - (\gamma+\mu) I_n,
\end{array}\right.
\end{equation}
where $n\in\mathbb{Z}$, and $S_n$ and $I_n$ denote the densities of susceptible and infectious individuals at time $t$ and niche $n$. $\beta$ is the disease transmission rate, and $1$ (normalized) and $d$ denote the random migration rates of the two compartments.
Chen et al. showed that system (\ref{PreModel}) admits a TWS when $\Re_0>1$ and $c\geq c^*$. More recently, the TWS of (\ref{PreModel}) was proved to converge to the endemic equilibrium by Zhang et al. \cite{ZhangLiuDCDSB2021,ZhangWangLiuJNS2021}.
Model (\ref{PreModel}) is an SIR model with constant recruitment (the constant term $\mu$ in the $S_n$ equation), and
the existence of TWS for discrete diffusive epidemic models without constant recruitment was studied in \cite{FuGuoWuJNCA2016,ZhangWuIJB2019,ZhangYuMMAS2021}. However, to the best of our knowledge, only a few studies focus on the problem of TWS for discrete diffusive epidemic models, especially for models with constant recruitment.
Based on the above facts, in order to study the roles of vaccination and patch structure in disease modeling, we consider the following discrete diffusive vaccination model:
\begin{equation}
\label{Model}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle \frac{{\rm d} S_n}{{\rm d} t}= [S_{n+1} - 2S_n + S_{n-1} ] + \Lambda - \beta_1S_nI_n - \alpha S_n - \mu S_n,\\
\vspace{2mm}
\displaystyle \frac{{\rm d} V_n}{{\rm d} t}= [V_{n+1} - 2V_n + V_{n-1} ] + \alpha S_n - \beta_2V_nI_n - \gamma_1 V_n - \mu V_n,\\
\vspace{2mm}
\displaystyle \frac{{\rm d} I_n}{{\rm d} t}= d[I_{n+1} - 2I_n + I_{n-1} ] + \beta_1S_nI_n + \beta_2V_nI_n - \gamma I_n - \mu I_n,\\
\displaystyle \frac{{\rm d} R_n}{{\rm d} t}= [R_{n+1} - 2R_n + R_{n-1} ] + \gamma_1 V_n + \gamma I_n - \mu R_n,
\end{array}\right.
\end{equation}
where $S_n$, $V_n$, $I_n$ and $R_n$ denote the densities of susceptible, vaccinated, infectious and removed individuals at niche $n$.
$d$ is the spatial motility of infectious individuals, and the diffusion rates of the other compartments are normalized to $1$. The biological significance of the parameters of (\ref{Model}) is the same as in (\ref{PrModel1}).
The current paper is devoted to studying the existence of TWS for system (\ref{Model}) with bilinear incidence.
In fact, there are very few studies on TWS for epidemic models with bilinear incidence, and the main difficulty is the boundedness of the TWS \cite{SanHeCPAA2021}.
On the other hand, introducing the constant recruitment ($\Lambda$ in model (\ref{Model})) brings much more complexity into the mathematical analysis than for systems without constant recruitment.
Moreover, it is difficult to obtain the behaviour of the TWS at $+\infty$ for such models (see, for example, \cite{ChenGuoHamelNon2017}).
One motivation of this paper is to show the convergence of the TWS for the lattice epidemic model (\ref{Model}).
To this end,
we construct an appropriate Lyapunov functional for the wave profile equations corresponding to the lattice dynamical system (\ref{Model}).
Before doing so, we prove the persistence of the TWS, which is crucial to guarantee that the Lyapunov functional has a lower bound.
We point out that, for different models, the construction of the Lyapunov functional differs and requires specific techniques.
Biologically, since vaccination decreases the basic reproduction number \cite{LiuTakeuchiShingoJTB2008}, we also want to study how vaccination affects the speed of the TWS.
The organization of this paper is as follows. In Section \ref{Sec:Pre}, we give some preliminary results. Section \ref{Sec:Existence} is devoted to the existence of TWS of system (\ref{Model}), obtained by applying Schauder's fixed point theorem. In Section \ref{Sec:Bound}, we show the boundedness of the TWS. Furthermore, we show the convergence of the TWS in Section \ref{Sec:Lyapunov}. Finally, a brief discussion with some explanations from the perspective of epidemiology is given in Section \ref{Sec:Dis}.
\section{Preliminaries}\label{Sec:Pre}
Firstly, the corresponding ODE system for (\ref{Model}) is
\begin{equation}
\label{ODEModel}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle \frac{{\rm d} S}{{\rm d} t}= \Lambda - \beta_1SI - \mu_1 S,\\
\vspace{2mm}
\displaystyle \frac{{\rm d} V}{{\rm d} t}= \alpha S - \beta_2VI - \mu_2 V,\\
\displaystyle \frac{{\rm d} I}{{\rm d} t}= \beta_1SI + \beta_2VI - \mu_3 I,
\end{array}\right.
\end{equation}
where the $R$ equation is decoupled from the other equations and $\mu_1 = \alpha + \mu$, $\mu_2 = \gamma_1 + \mu$, $\mu_3 = \gamma + \mu$.
Clearly, system (\ref{ODEModel}) has a disease-free equilibrium $E_0 = (S_0,V_0,0) = \left(\frac{\Lambda}{\mu_1}, \frac{\Lambda\alpha}{\mu_1\mu_2}, 0\right)$.
Define
\begin{align}\label{R_0}
\Re_{0} = \frac{\beta_1S_0+\beta_2V_0}{\mu_3}
\end{align}
as the basic reproduction number.
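For concreteness, $\Re_0$ in (\ref{R_0}) can be evaluated directly from the model parameters. The following minimal sketch uses hypothetical parameter values (not taken from any cited work) to compute $S_0$, $V_0$ and $\Re_0$:

```python
# Basic reproduction number for model (ODEModel):
#   S_0 = Lambda/mu_1,  V_0 = Lambda*alpha/(mu_1*mu_2),
#   R_0 = (beta1*S_0 + beta2*V_0)/mu_3.
# The parameter values below are illustrative, not from the paper.

def basic_reproduction_number(Lam, alpha, mu, gamma, gamma1, beta1, beta2):
    mu1, mu2, mu3 = alpha + mu, gamma1 + mu, gamma + mu   # exit rates from S, V, I
    S0 = Lam / mu1                                        # disease-free susceptible level
    V0 = Lam * alpha / (mu1 * mu2)                        # disease-free vaccinated level
    return (beta1 * S0 + beta2 * V0) / mu3

R0 = basic_reproduction_number(Lam=1.0, alpha=0.3, mu=0.1, gamma=0.2,
                               gamma1=0.05, beta1=0.4, beta2=0.1)
print(R0)   # approximately 5 for these values
```

Increasing the vaccination-related rates $\gamma_1$ (faster immunity) or decreasing $\beta_2$ lowers $\Re_0$, in line with the effect of vaccination discussed above.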
The well-known result for (\ref{ODEModel}) is the following.
\begin{theorem}\cite{LiuTakeuchiShingoJTB2008}\label{Th13}
For system (\ref{ODEModel}), if $\Re_0<1$, then $E_0$ is globally asymptotically stable; if $\Re_0>1$, then system (\ref{ODEModel}) has a globally asymptotically stable positive equilibrium $E^* = (S^*,V^*,I^*)$ satisfying
\begin{equation}
\label{EE}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle \Lambda - \beta_1S^*I^* - \mu_1 S^*=0,\\
\vspace{2mm}
\displaystyle \alpha S^* - \beta_2V^*I^* - \mu_2 V^*=0,\\
\displaystyle \beta_1S^*I^* + \beta_2V^*I^* - \mu_3 I^*=0.
\end{array}\right.
\end{equation}
\end{theorem}
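Theorem \ref{Th13} can be illustrated numerically. The sketch below integrates (\ref{ODEModel}) by forward Euler under hypothetical parameter values with $\Re_0>1$; at the final time the residuals of the equilibrium system (\ref{EE}) should be negligible:

```python
# Forward-Euler check of Theorem (Th13) for illustrative parameters with
# R_0 > 1: starting near E_0 with a small infection, the trajectory settles
# at a positive equilibrium, so the residuals of (EE) become tiny.
Lam, alpha, mu, gamma, gamma1, beta1, beta2 = 1.0, 0.3, 0.1, 0.2, 0.05, 0.4, 0.1
mu1, mu2, mu3 = alpha + mu, gamma1 + mu, gamma + mu

S, V, I = Lam / mu1, Lam * alpha / (mu1 * mu2), 1e-3   # small perturbation of E_0
dt, T = 0.01, 1000.0
for _ in range(int(T / dt)):
    dS = Lam - beta1 * S * I - mu1 * S
    dV = alpha * S - beta2 * V * I - mu2 * V
    dI = beta1 * S * I + beta2 * V * I - mu3 * I
    S, V, I = S + dt * dS, V + dt * dV, I + dt * dI

# residuals of the equilibrium system (EE) at the final state
res = max(abs(Lam - beta1 * S * I - mu1 * S),
          abs(alpha * S - beta2 * V * I - mu2 * V),
          abs(beta1 * S * I + beta2 * V * I - mu3 * I))
print(S, V, I, res)
```

This is a plausibility check only; the rigorous statement is the global stability result of \cite{LiuTakeuchiShingoJTB2008}.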
Now, we state the purpose of the current paper.
Letting $\varsigma=n+ct$ in system (\ref{Model}), where $c>0$ is the wave speed, we arrive at
\begin{equation}
\label{WaveEqu}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle cS'(\varsigma)= \digamma[S](\varsigma) + \Lambda - \mu_1 S(\varsigma) - \beta_1S(\varsigma)I(\varsigma),\\
\vspace{2mm}
\displaystyle c V'(\varsigma)= \digamma[V](\varsigma) + \alpha S(\varsigma) - \beta_2V(\varsigma)I(\varsigma) - \mu_2 V(\varsigma),\\
\displaystyle c I'(\varsigma)= d\digamma[I](\varsigma) + \beta_1S(\varsigma)I(\varsigma) + \beta_2V(\varsigma)I(\varsigma) - \mu_3 I(\varsigma)
\end{array}\right.
\end{equation}
for all $\varsigma\in \mathbb{R}$, where $\digamma[\chi](\varsigma)\triangleq \chi(\varsigma+1) - 2 \chi(\varsigma) + \chi(\varsigma-1) $, $\mu_1 = \alpha + \mu$, $\mu_2 = \gamma_1 + \mu$ and $\mu_3 = \gamma + \mu$.
We want to find TWS satisfying:
\begin{equation}\label{Bound1}
\lim_{\varsigma\rightarrow-\infty}(S(\varsigma), V(\varsigma), I(\varsigma))=(S_0, V_0, 0),
\end{equation}
and
\begin{equation}\label{Bound2}
\lim_{\varsigma\rightarrow+\infty}(S(\varsigma), V(\varsigma), I(\varsigma))=(S^*, V^*, I^*).
\end{equation}
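The operator $\digamma$ in (\ref{WaveEqu}) is simply the second-order central difference with unit step. A small sketch (with an arbitrary test point and rate) confirms that it annihilates affine functions and acts on exponentials $e^{\mathfrak{r}\varsigma}$ by multiplication with $e^{\mathfrak{r}}+e^{-\mathfrak{r}}-2$, the factor that appears in the characteristic function below:

```python
import math

# F[chi](s) = chi(s+1) - 2*chi(s) + chi(s-1): a discrete Laplacian with unit
# step.  It vanishes on affine functions and acts on e^{r s} by multiplication
# with e^r + e^{-r} - 2.  Test point and rate are arbitrary.

def F(chi, s):
    return chi(s + 1) - 2 * chi(s) + chi(s - 1)

r = 0.7
exp_r = lambda s: math.exp(r * s)
affine = lambda s: 3.0 * s + 2.0

print(F(affine, 1.5))                # zero for affine functions
print(F(exp_r, 1.5) / exp_r(1.5))    # equals e^r + e^{-r} - 2
print(math.exp(r) + math.exp(-r) - 2)
```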
\subsection{Eigenvalue problem}
Linearizing the third equation of (\ref{WaveEqu}) at $E_0$ yields
\begin{equation}
\label{LinearModel}
c I'(\varsigma)= d\digamma[ I](\varsigma) - \mu_3 I(\varsigma) + (\beta_1 S_0 + \beta_2 V_0) I(\varsigma).
\end{equation}
Letting $ I(\varsigma)=e^{\mathfrak{r} \varsigma}$, we have
\begin{equation}\label{ODE}
d[e^\mathfrak{r} + e^{-\mathfrak{r}} - 2] - c\mathfrak{r} + (\beta_1 S_0 + \beta_2 V_0) - \mu_3 = 0.
\end{equation}
Denote
\begin{equation}\label{Delta}
\Delta(\mathfrak{r},c) = d[e^\mathfrak{r} + e^{-\mathfrak{r}} - 2] - c\mathfrak{r} + (\beta_1 S_0 + \beta_2 V_0) - \mu_3.
\end{equation}
By direct calculation, for $\mathfrak{r}>0$ and $c>0$, we have
\begin{align*}
&\Delta(0,c) = (\beta_1 S_0 + \beta_2 V_0) - \mu_3,\ \ \ \lim_{c\rightarrow+\infty} \Delta (\mathfrak{r},c) = -\infty,\\
&\frac{\partial \Delta(\mathfrak{r}, c)}{\partial\mathfrak{r}} = d [e^\mathfrak{r} - e^{-\mathfrak{r}}] - c,\ \ \ \frac{\partial \Delta(\mathfrak{r}, c)}{\partial c}= -\mathfrak{r} < 0,\\
&\frac{\partial^2 \Delta(\mathfrak{r}, c)}{\partial\mathfrak{r}^2} = d [e^\mathfrak{r} + e^{-\mathfrak{r}}] > 0,\ \ \ \frac{\partial \Delta(\mathfrak{r}, c)}{\partial\mathfrak{r}}\bigg|_{(0,c)} = -c < 0.
\end{align*}
Therefore,
\begin{lemma}\label{WaveSpeed}
Let $\Re_{0}>1.$ There exist $\mathfrak{c}^*>0$ and $\mathfrak{r}^*>0$ such that
\[
\frac{\partial \Delta(\mathfrak{r}, c)}{\partial \mathfrak{r}}\bigg|_{(\mathfrak{r}^*,\mathfrak{c}^*)} = 0\ \ \textrm{and}\ \ \Delta(\mathfrak{r}^*,\mathfrak{c}^*) = 0.
\]
Furthermore,
\begin{description}
\item[(i)] $\Delta(\mathfrak{r}, c)>0$ for all $\mathfrak{r}>0$ if $0<c<\mathfrak{c}^*$;
\item[(ii)] $\Delta(\mathfrak{r}, c)=0$ has only one positive real root $\mathfrak{r}^*$ if $c=\mathfrak{c}^*$;
\item[(iii)] $\Delta(\mathfrak{r}, c)=0$ has two positive real roots $\mathfrak{r}_1,$ $\mathfrak{r}_2$ with $\mathfrak{r}_1<\mathfrak{r}^*<\mathfrak{r}_2$ if $c>\mathfrak{c}^*$.
\end{description}
\end{lemma}
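Lemma \ref{WaveSpeed} suggests a simple numerical procedure for $\mathfrak{c}^*$ and $\mathfrak{r}^*$: eliminating $c$ via $\partial\Delta/\partial\mathfrak{r}=0$ leaves a strictly decreasing function of $\mathfrak{r}$, whose root can be located by bisection. The sketch below uses the illustrative (hypothetical) values $d=1$ and $\beta_1S_0+\beta_2V_0-\mu_3=1.2$, corresponding to $\Re_0>1$:

```python
import math

# Numerical sketch of Lemma (WaveSpeed).  Using c = d(e^r - e^{-r}) from
# dDelta/dr = 0, the system reduces to
#   g(r) = d(e^r + e^{-r} - 2) - r d(e^r - e^{-r}) + b = 0,
# with b = beta1*S0 + beta2*V0 - mu3 > 0.  Since g'(r) = -r d(e^r + e^{-r}) < 0
# for r > 0, the root r* is unique and found by bisection.
d, b = 1.0, 1.2                         # hypothetical values

def Delta(r, c):
    return d * (math.exp(r) + math.exp(-r) - 2) - c * r + b

def bisect(f, lo, hi, n=200):           # assumes f(lo) > 0 > f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r_star = bisect(lambda r: Delta(r, d * (math.exp(r) - math.exp(-r))), 1e-9, 10.0)
c_star = d * (math.exp(r_star) - math.exp(-r_star))

# Case (iii): for c > c*, Delta(., c) has a root r_1 in (0, r*)
c = 1.5 * c_star
r_one = bisect(lambda r: Delta(r, c), 1e-9, r_star)
print(r_star, c_star, r_one)
```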
\subsection{Sub- and super-solutions}
Fix $c>\mathfrak{c}^*$ and assume $\Re_0>1$. We have the following lemma.
\begin{lemma}\label{UpLow}
For $\varepsilon_i>0$ small enough and $M_i>0$ $(i=1,2,3)$ large enough, we define the following six functions:
\begin{equation*}
\left\{
\begin{array}{l}
\displaystyle S^+(\varsigma)=S_0,\\
\displaystyle V^+(\varsigma)=V_0,\\
\displaystyle I^+(\varsigma) = e^{\mathfrak{r}_1 \varsigma},
\end{array}
\right.
\ \ \ \ \ \
\left\{
\begin{array}{l}
\displaystyle S^-(\varsigma)=\max\{S_0(1-M_1 e^{\varepsilon_1 \varsigma}),0\},\\
\displaystyle V^-(\varsigma)=\max\{V_0(1-M_2 e^{\varepsilon_2 \varsigma}),0\},\\
\displaystyle I^-(\varsigma)=\max\{e^{\mathfrak{r}_1\varsigma}(1-M_3e^{\varepsilon_3 \varsigma}),0\}.
\end{array}\right.
\end{equation*}
Then they satisfy
\begin{equation}\label{up}
\left\{
\begin{array}{l}
c{S^+}'(\varsigma) \geq \digamma[S^+] + \Lambda - \mu_1 S^+ - \beta_1 S^+I^-,\qquad\ \\
c{V^+}'(\varsigma) \geq \digamma[V^+] + \alpha S^+ - \beta_2 V^+I^- - \mu_2 V^+,\qquad\quad\ \ \\
c {I^+}'(\varsigma) \geq d\digamma[I^+] + \beta_1 S^+I^+ + \beta_2 V^+I^+ - \mu_3 I^+,
\end{array}\right.
\end{equation}
and
\begin{subequations}\label{low}
\begin{numcases}{}
\label{S}
c{S^-}'(\varsigma) \leq \digamma[S^-] + \Lambda - \mu_1 S^- - \beta_1 S^-I^+,\ \ \ \ \ \ \ \ \ \ \varsigma\neq \varepsilon_1^{-1}\ln M_1^{-1}:=\mathfrak{X}_1,\\
\label{V}
c {V^-}'(\varsigma) \leq \digamma[V^-] + \alpha S^- - \beta_2 V^-I^+ - \mu_2 V^-,\ \ \ \ \ \ \varsigma\neq \varepsilon_2^{-1}\ln M_2^{-1}:=\mathfrak{X}_2,\\
\label{I}
c {I^-}'(\varsigma) \leq d\digamma[I^-] + \beta_1 S^-I^- + \beta_2 V^-I^- - \mu_3 I^-,\ \ \ \varsigma \neq \varepsilon_3^{-1}\ln M_3^{-1}:=\mathfrak{X}_3.
\end{numcases}
\end{subequations}
\end{lemma}
\begin{proof}
The proof of (\ref{up}) is straightforward, so we omit the details.
Now, we focus on the proof of inequalities \eqref{low}.
If $\varsigma>\mathfrak{X}_1$, then equation (\ref{S}) holds since $S^-(\varsigma)=0$.
If $\varsigma<\mathfrak{X}_1$, then $S^-(\varsigma)=S_0(1-M_1 e^{\varepsilon_1 \varsigma})$,
and
\begin{align*}
&\ \digamma[S^-](\varsigma) + \Lambda - \mu_1 S^-(\varsigma) - \beta_1 S^-(\varsigma)I^+(\varsigma) - c{S^-}'(\varsigma)\\
\geq &\ e^{\varepsilon_1 \varsigma} S_0 \left[-M_1 (e^{\varepsilon_1} + e^{-\varepsilon_1} - 2 - \mu_1 - c \varepsilon_1) -\beta_1 e^{(\mathfrak{r}_1-\varepsilon_1)\varsigma}\right].
\end{align*}
Note that $e^{\varepsilon_1} + e^{-\varepsilon_1} - 2 - \mu_1 - c \varepsilon_1 < 0$ and $e^{(\mathfrak{r}_1 - \varepsilon_1)\varsigma}\leq 1$, since $0<\varepsilon_1<\mathfrak{r}_1$ is small enough and $\varsigma<\mathfrak{X}_1<0$. Thus, it suffices to choose $M_1>1$ sufficiently large that
\[
M_1 \geq \frac{\beta_1}{\mu_1 + c \varepsilon_1 + 2 - e^{\varepsilon_1} - e^{-\varepsilon_1}}.
\]
Then (\ref{S}) holds, and (\ref{V}) can be verified similarly.
As for (\ref{I}), we choose $M_3$ such that $\frac{1}{\varepsilon_3}\ln M_3>\max\left\{\frac{1}{\varepsilon_1}\ln M_1,\frac{1}{\varepsilon_2}\ln M_2\right\}$.
If $\varsigma > \mathfrak{X}_3$, then (\ref{I}) holds since $ I^-(\varsigma)=0$. If $\varsigma < \mathfrak{X}_3$, then $ I^-(\varsigma)=e^{\mathfrak{r}_1\varsigma}(1-M_3e^{\varepsilon_3 \varsigma})$,
and (\ref{I}) follows from the nonnegativity of
\begin{align*}
&\ d\digamma[I^-](\varsigma) + \beta_1 S^-(\varsigma)I^-(\varsigma) + \beta_2 V^-(\varsigma)I^-(\varsigma) - \mu_3 I^-(\varsigma) - c {I^-}'(\varsigma)\\
\geq &\ d \left[e^{\mathfrak{r}_1(\varsigma+1)}\left(1-M_3 e^{\varepsilon_3(\varsigma+1)}\right)+e^{\mathfrak{r}_1(\varsigma-1)}\left(1-M_3 e^{\varepsilon_3(\varsigma-1)}\right)-2e^{\mathfrak{r}_1\varsigma}\left(1-M_3 e^{\varepsilon_3\varsigma}\right)\right]\\
&\ +\beta_1 S_0e^{\mathfrak{r}_1 \varsigma}\left(1-M_1 e^{\varepsilon_1 \varsigma}\right)\left(1-M_3 e^{\varepsilon_3\varsigma}\right)+\beta_2 V_0e^{\mathfrak{r}_1 \varsigma}\left(1-M_2 e^{\varepsilon_2 \varsigma}\right)\left(1-M_3 e^{\varepsilon_3\varsigma}\right)\\
&\ -\mu_3 e^{\mathfrak{r}_1 \varsigma}\left(1-M_3 e^{\varepsilon_3\varsigma}\right) - c\mathfrak{r}_1 e^{\mathfrak{r}_1\varsigma} + cM_3(\mathfrak{r}_1+\varepsilon_3) e^{(\mathfrak{r}_1+\varepsilon_3)\varsigma}\\
\geq &\ e^{\mathfrak{r}_1\varsigma}\Delta(\mathfrak{r}_1,c) - e^{(\mathfrak{r}_1+\varepsilon_3)\varsigma}M_3\Delta(\mathfrak{r}_1+\varepsilon_3,c) - \beta_1 S_0 M_1 e^{(\mathfrak{r}_1+\varepsilon_1)\varsigma} - \beta_2 V_0 M_2 e^{(\mathfrak{r}_1+\varepsilon_2)\varsigma}.
\end{align*}
Using $\Delta(\mathfrak{r}_1,c)=0$ together with $\Delta(\mathfrak{r}_1+\varepsilon_3,c)<0$, it suffices to show that
\[
- M_3\Delta(\mathfrak{r}_1+\varepsilon_3,c) \geq \beta_1 S_0 M_1 e^{(\varepsilon_1-\varepsilon_3)\varsigma} + \beta_2 V_0 M_2 e^{(\varepsilon_2-\varepsilon_3)\varsigma},
\]
which holds for sufficiently large $M_3$, provided $0<\varepsilon_3<\min\{\varepsilon_1,\varepsilon_2\}$.
\end{proof}
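The choice of constants in Lemma \ref{UpLow} can be spot-checked numerically. The sketch below uses hypothetical parameters ($d=1$, $\Lambda=1$, $\alpha=0.3$, $\mu=0.1$, $\gamma=0.2$, $\gamma_1=0.05$, $\beta_1=0.4$, $\beta_2=0.1$), computes $\mathfrak{r}_1$ by bisection as in Lemma \ref{WaveSpeed}, and verifies the sub-solution inequality (\ref{S}) on a grid avoiding the kink $\mathfrak{X}_1$:

```python
import math

# Spot-check of the sub-solution inequality (S): with M_1 chosen as in the
# proof, verify  c S^-' <= F[S^-] + Lambda - mu1 S^- - beta1 S^- I^+
# at grid points away from X_1.  All parameter values are hypothetical.
Lam, alpha, mu, gamma, gamma1, beta1, beta2, d = 1.0, 0.3, 0.1, 0.2, 0.05, 0.4, 0.1, 1.0
mu1, mu2, mu3 = alpha + mu, gamma1 + mu, gamma + mu
S0, V0 = Lam / mu1, Lam * alpha / (mu1 * mu2)
b = beta1 * S0 + beta2 * V0 - mu3                     # > 0, i.e. R_0 > 1

Delta = lambda r, c: d * (math.exp(r) + math.exp(-r) - 2) - c * r + b

def bisect(f, lo, hi, n=200):                         # f(lo) > 0 > f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r_star = bisect(lambda r: Delta(r, d * (math.exp(r) - math.exp(-r))), 1e-9, 10.0)
c = 1.5 * d * (math.exp(r_star) - math.exp(-r_star))  # some c > c*
r1 = bisect(lambda r: Delta(r, c), 1e-9, r_star)

eps1 = 0.5 * r1                                       # 0 < eps1 < r1
M1 = max(2.0, beta1 / (mu1 + c * eps1 + 2 - math.exp(eps1) - math.exp(-eps1)))
X1 = -math.log(M1) / eps1                             # kink of S^-

Sm = lambda s: max(S0 * (1 - M1 * math.exp(eps1 * s)), 0.0)
dSm = lambda s: -S0 * M1 * eps1 * math.exp(eps1 * s) if s < X1 else 0.0
Ip = lambda s: math.exp(r1 * s)

ok = True
for k in range(-400, 200):
    s = 0.05 * k + 1e-3                               # grid avoiding s = X1
    F = Sm(s + 1) - 2 * Sm(s) + Sm(s - 1)
    rhs = F + Lam - mu1 * Sm(s) - beta1 * Sm(s) * Ip(s)
    ok = ok and (c * dSm(s) <= rhs + 1e-9)
print(ok)
```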
\section{Existence of traveling wave solutions}\label{Sec:Existence}
Let $\mathfrak{B}>-\mathfrak{X}_3>0$. Define
\begin{equation*}
\Gamma_\mathfrak{B} \triangleq \left\{(\phi, \varphi, \psi)\in C([-\mathfrak{B},\mathfrak{B}],\mathbb{R}^3)\left|
\begin{array}{l}
\vspace{2mm}
\displaystyle S^-(\varsigma)\leq \phi(\varsigma) \leq S^+(\varsigma),\ V^-(\varsigma)\leq \varphi(\varsigma) \leq V^+(\varsigma),\\
\vspace{2mm}
\displaystyle I^-(\varsigma)\leq \psi(\varsigma) \leq I^+(\varsigma)\ \ {\rm for}\ \ {\rm all}\ \ \varsigma\in[-\mathfrak{B},\mathfrak{B}],\\
\displaystyle \phi(-\mathfrak{B})=S^-(-\mathfrak{B}),\ \ \varphi(-\mathfrak{B})=V^-(-\mathfrak{B}),\\
\displaystyle \psi(-\mathfrak{B})=I^-(-\mathfrak{B}).
\end{array}\right.\right\}.
\end{equation*}
For any $(\phi,\varphi,\psi)\in C([-\mathfrak{B},\mathfrak{B}],\mathbb{R}^3)$,
define
\begin{equation*}
\label{hat1}
\hat{\phi}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle \phi(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle \phi(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle S^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$,}\\
\end{array}\right.\ \ \
\hat{\varphi}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle \varphi(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle \varphi(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle V^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$,}\\
\end{array}\right.
\end{equation*}
and
\begin{equation*}
\label{hat2}
\hat{\psi}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle \psi(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle \psi(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle I^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$.}\\
\end{array}\right.
\end{equation*}
For $(\phi,\varphi,\psi)\in \Gamma_\mathfrak{B}$, consider
\begin{equation}
\label{TruPro}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle cS'(\varsigma) + (2+\mu_1+\rho_1)S(\varsigma) = \hat{\phi}(\varsigma+1) + \hat{\phi}(\varsigma-1) + \Lambda + \rho_1 \phi(\varsigma) - \beta_1 \phi(\varsigma)\psi(\varsigma) := H_1(\phi,\varphi,\psi),\\
\vspace{2mm}
\displaystyle cV'(\varsigma) + (2+\mu_2+\rho_2)V(\varsigma) = \hat{\varphi}(\varsigma+1) + \hat{\varphi}(\varsigma-1) + \alpha\phi(\varsigma) + \rho_2 \varphi(\varsigma) - \beta_2 \varphi(\varsigma)\psi(\varsigma) := H_2(\phi,\varphi,\psi),\\
\vspace{2mm}
\displaystyle cI'(\varsigma) + (2d+\mu_3)I(\varsigma) = d\hat{\psi}(\varsigma+1) + d\hat{\psi}(\varsigma-1) + \beta_1 \phi(\varsigma)\psi(\varsigma) + \beta_2 \varphi(\varsigma)\psi(\varsigma) := H_3(\phi,\varphi,\psi),\\
\displaystyle (S,V,I)(-\mathfrak{B}) = (S^-,V^-,I^-)(-\mathfrak{B}),
\end{array}\right.
\end{equation}
where $\rho_1$ is large enough that $\rho_1 \phi - \beta_1 \phi\psi$ is nondecreasing in $\phi$ and $\rho_2$ is large enough that $\rho_2 \varphi - \beta_2 \varphi\psi$ is nondecreasing in $\varphi$.
Clearly, system (\ref{TruPro}) has a unique solution $(S_\mathfrak{B}(\varsigma),V_\mathfrak{B}(\varsigma),I_\mathfrak{B}(\varsigma))\in C^1([-\mathfrak{B},\mathfrak{B}],\mathbb{R}^3)$. Define
\[
\mathcal{A} = (\mathcal{A}_1,\mathcal{A}_2,\mathcal{A}_3):\Gamma_\mathfrak{B}\rightarrow C^1\left([-\mathfrak{B},\mathfrak{B}],\mathbb{R}^3\right)
\]
by
\[
S_\mathfrak{B}(\varsigma)=\mathcal{A}_1(\phi,\varphi,\psi)(\varsigma),\ \ V_\mathfrak{B}(\varsigma)=\mathcal{A}_2(\phi,\varphi,\psi)(\varsigma)\ \ {\rm and}\ \ I_\mathfrak{B}(\varsigma)=\mathcal{A}_3(\phi,\varphi,\psi)(\varsigma).
\]
\begin{lemma}\label{OA}
The operator $\mathcal{A}$ maps $\Gamma_\mathfrak{B}$ into itself and it is completely continuous.
\end{lemma}
\begin{proof}
Firstly, it is easy to show $\mathcal{A}$ maps $\Gamma_\mathfrak{B}$ into $\Gamma_\mathfrak{B}$ by Lemma \ref{UpLow}, so we omit the details.
Next, we focus on the second part of Lemma \ref{OA}.
For $i=1,2$, suppose that $(\phi_i(\varsigma),\varphi_i(\varsigma),\psi_i(\varsigma))\in\Gamma_\mathfrak{B}$ with
\[
S_{\mathfrak{B},i}(\varsigma)=\mathcal{A}_1(\phi_i,\varphi_i,\psi_i)(\varsigma),\ \ V_{\mathfrak{B},i}(\varsigma)=\mathcal{A}_2(\phi_i,\varphi_i,\psi_i)(\varsigma),
\]
and
\[
I_{\mathfrak{B},i}(\varsigma)=\mathcal{A}_3(\phi_i,\varphi_i,\psi_i)(\varsigma).
\]
Direct calculation yields
\[
S_\mathfrak{B}(\varsigma) = S^-(-\mathfrak{B}) e ^{-\frac{2+\mu_1+\rho_1}{c}(\varsigma+\mathfrak{B})} + \frac{1}{c}\int_{-\mathfrak{B}}^\varsigma e ^{-\frac{2+\mu_1+\rho_1}{c}(\varsigma-\tau)}H_1(\phi,\varphi,\psi)(\tau){\rm d} \tau,
\]
\[
V_\mathfrak{B}(\varsigma) = V^-(-\mathfrak{B}) e ^{-\frac{2+\mu_2+\rho_2}{c}(\varsigma+\mathfrak{B})} + \frac{1}{c}\int_{-\mathfrak{B}}^\varsigma e ^{-\frac{2+\mu_2+\rho_2}{c}(\varsigma-\tau)}H_2(\phi,\varphi,\psi)(\tau){\rm d} \tau
\]
and
\[
I_\mathfrak{B}(\varsigma) = I^-(-\mathfrak{B}) e ^{-\frac{2d+\mu_3}{c}(\varsigma+\mathfrak{B})} + \frac{1}{c}\int_{-\mathfrak{B}}^\varsigma e ^{-\frac{2d+\mu_3}{c}(\varsigma-\tau)}H_3(\phi,\varphi,\psi)(\tau){\rm d} \tau.
\]
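The variation-of-constants representation used here can be sanity-checked on a scalar model problem: for $cu'+\lambda u=H(\varsigma)$ with $u(-\mathfrak{B})=u_0$, the formula $u(\varsigma)=u_0e^{-\frac{\lambda}{c}(\varsigma+\mathfrak{B})}+\frac1c\int_{-\mathfrak{B}}^{\varsigma}e^{-\frac{\lambda}{c}(\varsigma-\tau)}H(\tau)\,{\rm d}\tau$ should agree with direct numerical integration. The sketch below uses the illustrative right-hand side $H(\varsigma)=\cos\varsigma$:

```python
import math

# Check of the variation-of-constants formula for  c u' + lam u = H(s),
# u(-B) = u0: evaluate the integral representation by the trapezoidal rule
# and compare with forward-Euler integration of the ODE.  H and all constants
# are illustrative.
c, lam, B, u0 = 2.0, 1.5, 3.0, 0.7
H = math.cos

def u_formula(s, n=20000):
    h = (s + B) / n
    acc = 0.5 * (math.exp(-lam / c * (s + B)) * H(-B) + H(s))
    for k in range(1, n):
        t = -B + k * h
        acc += math.exp(-lam / c * (s - t)) * H(t)
    return u0 * math.exp(-lam / c * (s + B)) + acc * h / c

def u_euler(s, n=200000):
    h = (s + B) / n
    u, t = u0, -B
    for _ in range(n):
        u += h * (H(t) - lam * u) / c
        t += h
    return u

print(u_formula(1.0), u_euler(1.0))   # the two values should nearly coincide
```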
For $i=1,2$ and any $(\phi_i, \varphi_i, \psi_i)\in\Gamma_\mathfrak{B}$, we have
\begin{align*}
|\phi_1(\varsigma)\psi_1(\varsigma) - \phi_2(\varsigma)\psi_2(\varsigma)|
\leq & \ |\phi_1(\varsigma)\psi_1(\varsigma) - \phi_1(\varsigma)\psi_2(\varsigma)|+|\phi_1(\varsigma)\psi_2(\varsigma) - \phi_2(\varsigma)\psi_2(\varsigma)|\\
\leq & \ S_0 \max_{\varsigma\in[-\mathfrak{B},\mathfrak{B}]}|\psi_1(\varsigma)-\psi_2(\varsigma)| + e^{\mathfrak{r}_1 \mathfrak{B}} \max_{\varsigma\in[-\mathfrak{B},\mathfrak{B}]}|\phi_1(\varsigma)-\phi_2(\varsigma)|.
\end{align*}
Hence,
\begin{align*}
&\ |H_1(\phi_1,\varphi_1,\psi_1)(\varsigma)-H_1(\phi_2,\varphi_2,\psi_2)(\varsigma)|\\
\leq &\ |\hat{\phi}_1(\varsigma+1)-\hat{\phi}_2(\varsigma+1)| + |\hat{\phi}_1(\varsigma-1)-\hat{\phi}_2(\varsigma-1)| + \rho_1|\phi_1(\varsigma)-\phi_2(\varsigma)| + \beta_1|\phi_1(\varsigma)\psi_1(\varsigma) - \phi_2(\varsigma)\psi_2(\varsigma)|\\
\leq &\ \beta_1 S_0 \max_{\varsigma\in[-\mathfrak{B},\mathfrak{B}]}|\psi_1(\varsigma)-\psi_2(\varsigma)| + \left(2+\rho_1+\beta_1 e^{\mathfrak{r}_1 \mathfrak{B}}\right)\max_{\varsigma\in[-\mathfrak{B},\mathfrak{B}]}|\phi_1(\varsigma)-\phi_2(\varsigma)|.
\end{align*}
Hence, the operator $\mathcal{A}$ is continuous; the arguments for $V_\mathfrak{B}$ and $I_\mathfrak{B}$ are similar. Moreover, $S_\mathfrak{B}'$, $V_\mathfrak{B}'$ and $I_\mathfrak{B}'$ are bounded by (\ref{TruPro}), so $\mathcal{A}(\Gamma_\mathfrak{B})$ is equicontinuous.
Thus, the operator $\mathcal{A}$ is completely continuous by the Arzel\`{a}--Ascoli theorem.
\end{proof}
By using Schauder's fixed point theorem, there exists $(S_\mathfrak{B},V_\mathfrak{B},I_\mathfrak{B})\in\Gamma_\mathfrak{B}$ such that
\[
(S_\mathfrak{B}(\varsigma),V_\mathfrak{B}(\varsigma),I_\mathfrak{B}(\varsigma)) = \mathcal{A}(S_\mathfrak{B},V_\mathfrak{B},I_\mathfrak{B})(\varsigma)
\]
for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$. Next, we give some a priori estimates for $(S_\mathfrak{B},V_\mathfrak{B},I_\mathfrak{B})$.
Define
\[
C^{1,1}([-\mathfrak{B},\mathfrak{B}])=\{\upsilon\in C^1([-\mathfrak{B},\mathfrak{B}])\ |\ \upsilon\ \textrm{and}\ \upsilon'\ \textrm{are Lipschitz continuous}\}
\]
with
\begin{gather*}
\|\upsilon\|_{C^{1,1}([-\mathfrak{B},\mathfrak{B}])}=\max_{x\in[-\mathfrak{B},\mathfrak{B}]}|\upsilon|+\max_{x\in[-\mathfrak{B},\mathfrak{B}]}|\upsilon'|+
\sup_{\begin{subarray}{c} x,y\in [-\mathfrak{B},\mathfrak{B}] \\
x\neq y
\end{subarray}}\frac{|\upsilon'(x)-\upsilon'(y)|}{|x-y|}.
\end{gather*}
\begin{lemma}\label{LemC}
There exists a constant $\mathcal{C}(\mathcal{X})>0$ such that
\begin{equation*}
\|S_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X}),\ \ \|V_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X})\ \ {\rm and}\ \ \|I_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X})
\end{equation*}
for any $\mathcal{X}<\mathfrak{B}$.
\end{lemma}
\begin{proof}
Since $(S_\mathfrak{B},V_\mathfrak{B},I_\mathfrak{B})$ is the fixed point of $\mathcal{A},$ one has
\begin{equation}
\label{FixEqu}\left\{
\begin{array}{l}
\vspace{2mm}
\displaystyle cS_\mathfrak{B}'(\varsigma) = \hat{S}_\mathfrak{B}(\varsigma+1) + \hat{S}_\mathfrak{B}(\varsigma-1) - (2+\mu_1)S_\mathfrak{B}(\varsigma) + \Lambda - \beta_1 S_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\varsigma),\\
\vspace{2mm}
\displaystyle cV_\mathfrak{B}'(\varsigma) = \hat{V}_\mathfrak{B}(\varsigma+1) + \hat{V}_\mathfrak{B}(\varsigma-1) - (2+\mu_2)V_\mathfrak{B}(\varsigma) + \alpha S_\mathfrak{B}(\varsigma) - \beta_2 V_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\varsigma),\\
\displaystyle cI_\mathfrak{B}'(\varsigma) = d\hat{I}_\mathfrak{B}(\varsigma+1) + d\hat{I}_\mathfrak{B}(\varsigma-1) - (2d+\mu_3)I_\mathfrak{B}(\varsigma) + (\beta_1 S_\mathfrak{B}(\varsigma) + \beta_2 V_\mathfrak{B}(\varsigma))I_\mathfrak{B}(\varsigma),
\end{array}\right.
\end{equation}
where
\begin{equation*}
\label{hatSV}
\hat{S}_\mathfrak{B}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle S_\mathfrak{B}(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle S_\mathfrak{B}(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle S^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$,}\\
\end{array}\right.\ \ \
\hat{V}_\mathfrak{B}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle V_\mathfrak{B}(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle V_\mathfrak{B}(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle V^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$,}\\
\end{array}\right.
\end{equation*}
and
\begin{equation*}
\label{hatI}
\hat{I}_\mathfrak{B}(\varsigma)=\left\{
\begin{array}{ll}
\displaystyle I_\mathfrak{B}(\mathfrak{B}), &\mbox{for $\varsigma>\mathfrak{B}$,}
\\
\displaystyle I_\mathfrak{B}(\varsigma), &\mbox{for $\varsigma\in[-\mathfrak{B},\mathfrak{B}]$,}
\\
\displaystyle I^-(\varsigma), &\mbox{for $\varsigma< -\mathfrak{B}$.}\\
\end{array}\right.
\end{equation*}
Since $0\leq S_\mathfrak{B}(\varsigma)\leq S_0$, $0\leq V_\mathfrak{B}(\varsigma)\leq V_0$ and $0\leq I_\mathfrak{B}(\varsigma)\leq e^{\mathfrak{r}_1 \mathcal{X}}$ for all $\varsigma\in[-\mathcal{X},\mathcal{X}]$,
from (\ref{FixEqu}) we have
\[
|S_\mathfrak{B}'(\varsigma)|\leq \frac{4+\mu_1}{c}S_0 + \frac{\Lambda}{c} + \frac{\beta_1S_0}{c}e^{\mathfrak{r}_1 \mathcal{X}},
\]
\[
|V_\mathfrak{B}'(\varsigma)|\leq \frac{4+\mu_2}{c}V_0 + \frac{\alpha S_0}{c} + \frac{\beta_2V_0}{c}e^{\mathfrak{r}_1 \mathcal{X}},
\]
and
\[
|I_\mathfrak{B}'(\varsigma)|\leq \frac{4d+\mu_3 + (\beta_1S_0 + \beta_2V_0)}{c}e^{\mathfrak{r}_1 \mathcal{X}}.
\]
Hence,
\[
\|S_\mathfrak{B}\|_{C^{1}([-\mathcal{X},\mathcal{X}])}\leq C_1(\mathcal{X}),\ \ \|V_\mathfrak{B}\|_{C^{1}([-\mathcal{X},\mathcal{X}])}\leq C_1(\mathcal{X})\ \ {\rm and}\ \ \|I_\mathfrak{B}\|_{C^{1}([-\mathcal{X},\mathcal{X}])}\leq C_1(\mathcal{X})
\]
for some constant $C_1(\mathcal{X}) > 0$.
It follows from \cite{ZhangWuIJB2019} that $|\hat{S}_\mathfrak{B}(\varsigma+1)-\hat{S}_\mathfrak{B}(\eta+1)|\leq C_1(\mathcal{X})|\varsigma-\eta|$ and $|\hat{S}_\mathfrak{B}(\varsigma-1)-\hat{S}_\mathfrak{B}(\eta-1)|\leq C_1(\mathcal{X})|\varsigma-\eta|$ for all $\varsigma,\eta\in[-\mathcal{X},\mathcal{X}]$. Furthermore,
\begin{align*}
&\ |\beta_1 S_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\varsigma) - \beta_1 S_\mathfrak{B}(\eta)I_\mathfrak{B}(\eta)|\\
\leq &\ |\beta_1 S_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\varsigma) - \beta_1 S_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\eta)|+|\beta_1 S_\mathfrak{B}(\varsigma)I_\mathfrak{B}(\eta) - \beta_1 S_\mathfrak{B}(\eta)I_\mathfrak{B}(\eta)|\\
\leq &\ \beta_1 C_1(\mathcal{X})\left(|S_\mathfrak{B}(\varsigma)-S_\mathfrak{B}(\eta)| + |I_\mathfrak{B}(\varsigma)-I_\mathfrak{B}(\eta)|\right)
\end{align*}
for all $\varsigma,\eta\in[-\mathcal{X},\mathcal{X}]$. Thus, $\|S_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X})$ for some constant $\mathcal{C}(\mathcal{X}) > 0$.
Similarly,
\[
\|V_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X})\ \ {\rm and}\ \ \|I_\mathfrak{B}\|_{C^{1,1}([-\mathcal{X},\mathcal{X}])}\leq \mathcal{C}(\mathcal{X})
\]
for any $\mathcal{X}<\mathfrak{B}$.
\end{proof}
With the help of Lemma \ref{LemC}, letting $\mathfrak{B}\rightarrow+\infty$ and following the standard arguments in \cite{ZhangWangLiuJNS2021}, we conclude that system (\ref{WaveEqu}) admits a solution $(S, V, I)$ with
\[
S^-\leq S(\varsigma)\leq S^+,\ \ V^-\leq V(\varsigma)\leq V^+,\ \ I^-\leq I(\varsigma)\leq I^+,\ \ \forall \varsigma\in\mathbb{R}.
\]
\section{Boundedness of traveling wave solution}\label{Sec:Bound}
In the following, we first show the boundedness of $(S, V, I)$.
\begin{lemma}\label{lem1}
The functions $S(\varsigma)$, $V(\varsigma)$ and $I(\varsigma)$ satisfy
\[
0<S(\varsigma)<S_0,\ \ 0<V(\varsigma)<V_0\ \ {\rm and}\ \ I(\varsigma)>0\ \ {\rm in}\ \ \mathbb{R}.
\]
\end{lemma}
\begin{proof}
First, we show that $S(\varsigma)>0$. Suppose there exists some $\varsigma_0$ such that $S(\varsigma_0) = 0$; since $S(\varsigma)\geq0$, $\varsigma_0$ is a global minimum, so $\digamma[S](\varsigma_0)\geq0$ and $S'(\varsigma_0) = 0$. Due to (\ref{WaveEqu}), we have
\[
0 = \digamma[S](\varsigma_0) + \Lambda \geq \Lambda > 0,
\]
which is a contradiction. Similarly, we have $ V(\varsigma)>0$ in $\mathbb{R}$.
Next, suppose there is $\varsigma_1$ such that $I(\varsigma_1) = 0$ and $I(\varsigma)>0$ for $\varsigma<\varsigma_1$.
From the third equation of (\ref{WaveEqu}), we have
\[
I(\varsigma_1+1) + I(\varsigma_1-1) = 0.
\]
Consequently, $ I(\varsigma_1+1) = I(\varsigma_1-1) = 0$ since $I(\varsigma)\geq0$ in $\mathbb{R}$, which contradicts $I(\varsigma_1-1)>0$. Hence $I(\varsigma)>0$ in $\mathbb{R}$.
Lastly, we show that $S(\varsigma)<S_0$. If there exists $\varsigma_2$ such that $S(\varsigma_2) = S_0$, then $\varsigma_2$ is a global maximum, so $S'(\varsigma_2)=0$ and $\digamma[S](\varsigma_2)\leq0$; using $\Lambda=\mu_1 S_0$, one has
\[
0 = \digamma[S](\varsigma_2) - \beta_1 S(\varsigma_2)I(\varsigma_2) <0.
\]
\]
This contradiction leads to $S(\varsigma)<S_0$. Similarly, we have $V(\varsigma)<V_0$ for all $\varsigma\in\mathbb{R}$.
This ends the proof.
\end{proof}
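Lemma \ref{lem1} can be illustrated on a truncated lattice: simulating (\ref{Model}) by forward Euler on niches $n=-30,\dots,30$ (with copied end values and hypothetical parameters), the bounds $0<S_n<S_0$, $0<V_n<V_0$ and $I_n>0$ are preserved numerically:

```python
# Forward-Euler simulation of the lattice system (Model) on a finite window,
# starting from the disease-free state with a small infection at the center.
# All parameter values are hypothetical.  At the final time, the bounds of
# Lemma (lem1) hold at every niche.
Lam, alpha, mu, gamma, gamma1, beta1, beta2, d = 1.0, 0.3, 0.1, 0.2, 0.05, 0.4, 0.1, 1.0
mu1, mu2, mu3 = alpha + mu, gamma1 + mu, gamma + mu
S0, V0 = Lam / mu1, Lam * alpha / (mu1 * mu2)

N = 30
S = [S0] * (2 * N + 1)
V = [V0] * (2 * N + 1)
I = [0.0] * (2 * N + 1)
I[N] = 0.01                                   # small infection at niche 0

def lap(u, n):                                # discrete Laplacian, copied ends
    um = u[n - 1] if n > 0 else u[0]
    up = u[n + 1] if n < 2 * N else u[2 * N]
    return up - 2 * u[n] + um

dt = 0.05
for _ in range(4000):                         # integrate up to t = 200
    dS = [lap(S, n) + Lam - beta1 * S[n] * I[n] - mu1 * S[n] for n in range(2 * N + 1)]
    dV = [lap(V, n) + alpha * S[n] - beta2 * V[n] * I[n] - mu2 * V[n] for n in range(2 * N + 1)]
    dI = [d * lap(I, n) + (beta1 * S[n] + beta2 * V[n] - mu3) * I[n] for n in range(2 * N + 1)]
    S = [S[n] + dt * dS[n] for n in range(2 * N + 1)]
    V = [V[n] + dt * dV[n] for n in range(2 * N + 1)]
    I = [I[n] + dt * dI[n] for n in range(2 * N + 1)]

print(min(S) > 0, max(S) < S0, min(V) > 0, max(V) < V0, min(I) > 0)
```

This is only a finite-window illustration of the a priori bounds; the lemma itself concerns the exact wave profile on all of $\mathbb{R}$.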
Now, we show the following four claims.
\textbf{Claim I}. The functions $\frac{ I(\varsigma\pm1)}{ I(\varsigma)}$ are bounded in $\mathbb{R}$.
To show this claim, denote $\kappa := (2d+\mu_3)/c$ and $U(\varsigma) := e^{\kappa \varsigma} I(\varsigma)$. One has
\[
c U'(\varsigma) = e^{\kappa \varsigma}\left(c I'(\varsigma) + (2d+\mu_3) I(\varsigma)\right)>0.
\]
From the monotonicity of $U(\varsigma)$, we have
\[
\frac{ I(\varsigma-1)}{I(\varsigma)} < e^\kappa
\]
in $\mathbb{R}$.
Direct calculation yields
\begin{align}\label{Equ3}
\nonumber\left[e^{\kappa\varsigma} I(\varsigma)\right]' & = \frac{1}{c}e^{\kappa\varsigma}\left[d I(\varsigma+1) + d I(\varsigma-1) + (\beta_1S(\varsigma) + \beta_2 V(\varsigma)) I(\varsigma)\right]\\
& > \frac{d}{c}e^{\kappa\varsigma} I(\varsigma+1).
\end{align}
Integrating (\ref{Equ3}) over $[\varsigma,\varsigma+1]$ and using the monotonicity of $U(\varsigma)$, one has
\begin{align*}
e^{\kappa(\varsigma+1)} I(\varsigma+1)\ > & \ e^{\kappa\varsigma} I(\varsigma) + \frac{d}{c}\int_\varsigma^{\varsigma+1}e^{\kappa s} I(s+1){\rm d} s\\
> & \ e^{\kappa\varsigma} I(\varsigma) + \frac{d}{c}\int_\varsigma^{\varsigma+1}e^{\kappa (\varsigma+1)} I(\varsigma+1)e^{-\kappa}{\rm d} s\\
= & \ e^{\kappa\varsigma}\left[ I(\varsigma) + \frac{d}{c} I(\varsigma+1)\right].
\end{align*}
Since this inequality implies $\frac{d}{c}e^{-\kappa}<1$, it follows from (\ref{Equ3}) that
\begin{equation}\label{Equ4}
\left[e^{\kappa\varsigma} I(\varsigma)\right]' > \left(\frac{d}{c}\right)^2 e^{-2\kappa}e^{\kappa(\varsigma+1)} I(\varsigma+1).
\end{equation}
Integrating (\ref{Equ4}) from $\varsigma-\frac{1}{2}$ to $\varsigma$ yields
\[
\frac{ I\left(\varsigma+\frac{1}{2}\right)}{ I(\varsigma)} < 2 \left(\frac{c}{d}\right)^2 e^{\frac{3}{2}\kappa},\ \ \forall\varsigma\in\mathbb{R}.
\]
Similarly, integrating (\ref{Equ4}) over $[\varsigma, \varsigma+\frac{1}{2}]$, we have
\[
\frac{ I(\varsigma+1)}{ I\left(\varsigma+\frac{1}{2}\right)} < 2 \left(\frac{c}{d}\right)^2 e^{\frac{3}{2}\kappa},\ \ \forall\varsigma\in\mathbb{R}.
\]
Thus
\[
\frac{ I(\varsigma+1)}{ I(\varsigma)} = \frac{ I\left(\varsigma+\frac{1}{2}\right)}{ I(\varsigma)} \frac{ I(\varsigma+1)}{ I\left(\varsigma+\frac{1}{2}\right)} < 4 \left(\frac{c}{d}\right)^4 e^{3\kappa},\ \ \forall\varsigma\in\mathbb{R}.
\]
\textbf{Claim II}. $\frac{ I'(\varsigma)}{ I(\varsigma)}$ is bounded in $\mathbb{R}$.
This claim follows from Claim I and the third equation of (\ref{WaveEqu}).
Choose a sequence $\{(c_k,S_k, V_k, I_k)\}_{k\in\mathbb{N}}$ of TWS of (\ref{Model}) whose wave speeds $c_k$ lie in a compact subinterval of $(0,\infty)$. We have the following claim.
\textbf{Claim III}. For any sequence $\{\varsigma_k\}$, we have $S_k(\varsigma_k)\rightarrow 0$ and $ V_k(\varsigma_k)\rightarrow 0$ as $k\rightarrow +\infty$ provided that $I_k(\varsigma_k)\rightarrow +\infty$ as $k\rightarrow +\infty$.
Suppose, by contradiction and up to a subsequence, that $I_k(\varsigma_k)\rightarrow+\infty$ as $k\rightarrow+\infty$ while $S_k(\varsigma_k)\geq\varepsilon$ for some $\varepsilon>0$ and all $k\in\mathbb{N}$. Let $\tilde{c}>0$ be a lower bound of $\{c_k\}$; then
\[
S'_k(\varsigma)\leq \frac{2S_0+\Lambda}{\tilde{c}} := \delta_0\ \ \textrm{in}\ \ \mathbb{R}.
\]
We further denote $\delta = \frac{\varepsilon}{2\delta_0}$; then
\[
S_k(\varsigma)\geq\frac{\varepsilon}{2},\ \ \ \forall\varsigma\in[\varsigma_k-\delta,\varsigma_k]\ \ \textrm{and}\ \ \forall k\in\mathbb{N}.
\]
Thanks to Claim II, there exists some $C_0>0$ such that
\[
\frac{ I_k(\varsigma_k)}{ I_k(\varsigma)} = \exp\left\{\int_{\varsigma}^{\varsigma_k}\frac{ I'_k(\sigma)}{ I_k(\sigma)}{\rm d} \sigma\right\}\leq e^{C_0\delta},\ \ \forall\varsigma\in[\varsigma_k-\delta, \varsigma_k]
\]
for all $k\in\mathbb{N}$. Thus
\[
\min_{\varsigma\in[\varsigma_k-\delta,\ \varsigma_k]} I_k(\varsigma)\geq e^{-C_0\delta} I_k(\varsigma_k),
\]
which gives
\[
\min_{\varsigma\in[\varsigma_k-\delta,\ \varsigma_k]} I_k(\varsigma) \rightarrow +\infty\ \ \textrm{as}\ \ k\rightarrow+\infty
\]
since $ I_k(\varsigma_k)\rightarrow+\infty$ as $k\rightarrow+\infty$.
Recalling the first equation of (\ref{WaveEqu}), and denoting $m_k:=\min\limits_{\varsigma\in[\varsigma_k-\delta, \varsigma_k]} I_k(\varsigma)$, one has
\[
\max_{\varsigma\in[\varsigma_k-\delta, \varsigma_k]}S'_k(\varsigma)\leq \delta_0 - \frac{\beta_1\varepsilon}{2}m_k\rightarrow-\infty\ \ \textrm{as}\ \ k\rightarrow+\infty.
\]
Then,
\[
S'_k(\varsigma)\leq - \frac{2S_0}{\delta},\ \ \forall k\geq K\ \ \textrm{and}\ \ \varsigma\in[\varsigma_k-\delta, \varsigma_k].
\]
for some $K>0$.
Thus, we have $S_k(\varsigma_k)\leq-S_0$ for all $k\geq K$, which contradicts the positivity of $S_k$.
Similarly, we can show that $ V_k(\varsigma_k)\rightarrow0$ as $k\rightarrow+\infty$.
\textbf{Claim IV}. If $\limsup\limits_{\varsigma\rightarrow+\infty} I(\varsigma)=+\infty$, then $\lim\limits_{\varsigma\rightarrow+\infty} I(\varsigma)=+\infty$.
By arguments similar to those in \cite[Lemma 3.4]{ChenGuoHamelNon2017}, we know that Claim IV is true.
We are now in a position to show the boundedness of $I(\varsigma)$ by using Claims I--IV.
\begin{lemma}\label{lem5}
$I(\varsigma)$ is bounded in $\mathbb{R}$.
\end{lemma}
\begin{proof}
Suppose that $\limsup\limits_{\varsigma\rightarrow+\infty} I(\varsigma)=+\infty$. Then, by Claims III and IV, $\lim\limits_{\varsigma\rightarrow+\infty}(S(\varsigma),V(\varsigma))=(0,0)$.
Denote $\theta(\varsigma)=\frac{I'(\varsigma)}{ I(\varsigma)}$, we have
\[
c\theta(\varsigma) = d e^{\int_{\varsigma}^{\varsigma+1}\theta(s){\rm d} s} + d e^{\int_{\varsigma}^{\varsigma-1}\theta(s){\rm d} s} - 2d - \mu_3 + \beta_1S(\varsigma)+\beta_2 V(\varsigma).
\]
By \cite[Lemma 3.4]{ChenGuoMA2003},
$\theta(\varsigma)$ has a finite limit as $\varsigma\rightarrow+\infty$, denoted by $\omega$, which satisfies
\[
\Upsilon(\kappa,c) := d\left(e^\kappa + e^{-\kappa} - 2\right) -c\kappa - \mu_3 = 0.
\]
Clearly, $\Upsilon(\kappa,c) = 0$ has a unique positive real root $\kappa_0$.
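For intuition, the unique positive root $\kappa_0$ of $\Upsilon(\kappa,c)=0$ can be located numerically by bisection; the sketch below uses illustrative values for $d$, $\mu_3$ and $c$ that are not taken from the paper:

```python
import math

# Upsilon(kappa, c) = d(e^kappa + e^{-kappa} - 2) - c*kappa - mu_3,
# with hypothetical parameters d = 0.5, mu_3 = 0.2
def upsilon(kappa, c, d=0.5, mu3=0.2):
    return d * (math.exp(kappa) + math.exp(-kappa) - 2.0) - c * kappa - mu3

def positive_root(c):
    # Upsilon(0, c) = -mu3 < 0 and Upsilon -> +infinity as kappa grows,
    # so doubling the right endpoint eventually brackets a sign change
    lo, hi = 0.0, 1.0
    while upsilon(hi, c) < 0.0:
        hi *= 2.0
    for _ in range(80):          # bisection down to machine precision
        mid = 0.5 * (lo + hi)
        if upsilon(mid, c) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since $\Upsilon(\cdot,c)$ is convex with $\Upsilon(0,c)=-\mu_3<0$ and $\Upsilon(\kappa,c)\rightarrow+\infty$ as $\kappa\rightarrow+\infty$, any bracketing interval contains exactly one positive root, so bisection converges to $\kappa_0$.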
From Lemma \ref{WaveSpeed}, we have
\[
d\left(e^{\mathfrak{r}_2} + e^{-\mathfrak{r}_2} - 2\right) -c\mathfrak{r}_2 - \mu_3 < 0.
\]
Recalling the definitions of $\mathfrak{r}_1$ and $\mathfrak{r}_2$, we have $\mathfrak{r}_2<\kappa_0$. Since $\lim\limits_{\varsigma\rightarrow+\infty}\theta(\varsigma) = \kappa_0$, there exists $\tilde{\varsigma}$ such that
\[
I(\varsigma)\geq C e^{\left(\frac{\mathfrak{r}_2+\kappa_0}{2}\right)\varsigma}\ \ {\rm for}\ \ {\rm all}\ \ \varsigma\geq\tilde{\varsigma},
\]
with some constant $C>0$, which contradicts $ I(\varsigma)\leq e^{\mathfrak{r}_1\varsigma}$ in $\mathbb{R}$, since $\mathfrak{r}_1<\kappa_0$.
The proof is complete.
\end{proof}
The following lemma shows that $I(\varsigma)$ cannot approach $0$.
\begin{lemma}\label{lem6}
There holds $\liminf\limits_{\varsigma\rightarrow+\infty} I(\varsigma)>0$.
\end{lemma}
\begin{proof}
It suffices to show that $ I'(\varsigma)>0$ whenever $ I(\varsigma)\leq\varepsilon_0$, for some small enough $\varepsilon_0>0$.
If not, we can choose a sequence $\{\varsigma_k\}_{k\in\mathbb{N}}$, with wave speeds $c_k\in(a,b)$ for two positive constants $a$ and $b$, so that $ I(\varsigma_k)\rightarrow0$ as $k\rightarrow+\infty$ and $ I'(\varsigma_k)\leq0$. Let
\[
S_k(\varsigma):= S(\varsigma_k+\varsigma),\ \ V_k(\varsigma):= V(\varsigma_k+\varsigma)\ \ \textrm{and}\ \ I_k(\varsigma):= I(\varsigma_k+\varsigma),
\]
then, $I_k(0)\rightarrow0$, $I_k(\varsigma)\rightarrow0$ and $I_k'(\varsigma)\rightarrow0$ locally uniformly in $\mathbb{R}$ as $k\rightarrow+\infty$,
and we can obtain that $S_\infty = S_0$ and $ V_\infty = V_0$ by a proof similar to that of \cite[Lemma 3.8]{ChenGuoHamelNon2017}.
Letting $\pi_k(\varsigma):=\frac{ I_k(\varsigma)}{ I_k(0)}$, we have
\[
\pi_k'(\varsigma) = \frac{ I_k'(\varsigma)}{ I_k(0)} = \frac{ I_k'(\varsigma)}{ I_k(\varsigma)}\pi_k(\varsigma).
\]
Hence, $\pi_k(\varsigma)$ and $\pi_k'(\varsigma)$ also converge locally uniformly in $\mathbb{R}$ as $k\rightarrow+\infty$; denote the limit of $\pi_k$ by $\pi_\infty$. Passing to the limit,
\[
c_\infty\pi_\infty'(\varsigma)= d \digamma[\pi_\infty](\varsigma) + (\beta_1 S_0+\beta_2 V_0)\pi_\infty(\varsigma) - \mu_3 \pi_\infty(\varsigma)
\]
as $k\rightarrow+\infty$.
We claim that $\pi_\infty(\varsigma)>0$ in $\mathbb{R}$. Indeed, if there is a $\varsigma_0$ such that $\pi_\infty(\varsigma_0)=0$, then ${\pi}'_\infty(\varsigma_0)=0$ and
\[
0 = d(\pi_\infty(\varsigma_0+1)+\pi_\infty(\varsigma_0-1)).
\]
Thus $\pi_\infty(\varsigma_0+1) = \pi_\infty(\varsigma_0-1) = 0$, and it follows that $\pi_\infty(\varsigma_0+m) = 0$ for all $m\in\mathbb{Z}$. Recall that
$c_\infty\pi_\infty'(\varsigma) \geq - \mu_3 \pi_\infty(\varsigma)$; hence the map $\varsigma\mapsto \pi_\infty(\varsigma) e^{\frac{\mu_3\varsigma}{c_\infty}}$ is nondecreasing. Since it vanishes at $\varsigma_0+m$ for all $m\in\mathbb{Z}$, one can conclude that $\pi_\infty \equiv 0$ in $\mathbb{R}$, which contradicts $\pi_\infty(0) = 1$.
Denoting $P(\varsigma):=\frac{\pi_\infty'(\varsigma)}{\pi_\infty(\varsigma)}$, one has that
\begin{equation}\label{Z}
c_\infty P(\varsigma)= d e^{\int^{\varsigma+1}_\varsigma P(s){\rm d} s} + d e^{\int^{\varsigma-1}_\varsigma P(s){\rm d} s} - 2d + \beta_1 S_0+\beta_2 V_0 - \mu_3.
\end{equation}
Using \cite[Lemma 3.4]{ChenGuoMA2003}, $P(\varsigma)$ has finite limits $\omega_{\pm}$ at $\pm\infty$, which satisfy
\[
c_\infty \omega_\pm = d\left(e^{\omega_\pm} + e^{-\omega_\pm} -2\right) + \beta_1 S_0+\beta_2 V_0 - \mu_3.
\]
By Lemma \ref{WaveSpeed}, we know that $\omega_\pm>0$, and thus $\pi_\infty'$ is positive near $\pm\infty$.
Moreover, one can show that $\pi_\infty'(\varsigma)>0$ for all $\varsigma\in\mathbb{R}$. In fact, since $P$ is positive near $\pm\infty$, if $\inf_{\mathbb{R}}P(\varsigma)\leq0$, then the infimum is attained at some $\varsigma^*$,
and $P'(\varsigma^*) = 0$. Differentiating (\ref{Z}) yields
\[
c_\infty P'(\varsigma) = d(P(\varsigma+1) - P(\varsigma))\frac{\pi_\infty(\varsigma+1)}{\pi_\infty(\varsigma)} + d(P(\varsigma-1) - P(\varsigma))\frac{\pi_\infty(\varsigma-1)}{\pi_\infty(\varsigma)}.
\]
It follows that
\[
P(\varsigma^*) = P(\varsigma^*+1) = P(\varsigma^*-1).
\]
Hence $P(\varsigma^*) = P(\varsigma^*+m)$ for all $m\in\mathbb{Z}$. Then,
\[
\inf_{\mathbb{R}}P(\varsigma)\geq\min\{P(+\infty),P(-\infty)\}>0.
\]
Furthermore,
\[
0<\pi_\infty'(0) = \lim_{k\rightarrow+\infty}\pi_k'(0) = \lim_{k\rightarrow+\infty}\frac{ I_k'(0)}{ I_k(0)}.
\]
Thus, $ I'(\varsigma_k) = I_k'(0)>0$ for all large $k$, which is a contradiction.
\end{proof}
\section{Convergence of the traveling wave solution}\label{Sec:Lyapunov}
In this section, we show the convergence of traveling wave solutions.
\begin{theorem}\label{theorem2}
If $\Re_0 > 1$, then for each $c > \mathfrak{c}^*$, system (\ref{Model}) has a TWS $(S(\varsigma), V(\varsigma), I(\varsigma))$ satisfying conditions (\ref{Bound1}) and (\ref{Bound2}).
\end{theorem}
\begin{proof}
In what follows, we write $(S,V,I)$ for $(S(\varsigma),V(\varsigma),I(\varsigma))$.
Define the following four functionals
\[
W_1(\varsigma) = c S^* g\left(\frac{S}{S^*}\right) + c V^* g\left(\frac{V}{V^*}\right) + c I^* g\left(\frac{I}{I^*}\right),
\]
\[
W_2(\varsigma) = \int_0^1 g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma - \int_{-1}^0 g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma
\]
\[
W_3(\varsigma) = \int_0^1 g\left(\frac{ V(\varsigma-\sigma)}{V^*}\right){\rm d} \sigma - \int_{-1}^0 g\left(\frac{ V(\varsigma-\sigma)}{V^*}\right){\rm d} \sigma
\]
and
\[
W_4(\varsigma) = \int_0^1 g\left(\frac{ I(\varsigma-\sigma)}{I^*}\right){\rm d} \sigma - \int_{-1}^0 g\left(\frac{ I(\varsigma-\sigma)}{I^*}\right){\rm d} \sigma,
\]
where $g(x)=x-1-\ln x$.
The derivative of $W_1(\varsigma)$ is calculated as follows
\[
\frac{{\rm d} W_1(\varsigma)}{{\rm d} \varsigma} = \left(1-\frac{S^*}{S}\right) \digamma[S](\varsigma) + \left(1-\frac{V^*}{ V}\right) \digamma[ V](\varsigma) + \left(1-\frac{I^*}{ I}\right) d\digamma[ I](\varsigma) + \Sigma(\varsigma),
\]
where
\begin{align}\label{Theta}
\nonumber\Sigma(\varsigma) = & \left(1-\frac{S^*}{S}\right) \left(\Lambda - \mu_1 S - \beta_1 SI\right) + \left(1-\frac{V^*}{V}\right) \left(\alpha S - \beta_2 VI - \mu_2 V\right)\\
& + \left(1-\frac{I^*}{ I}\right) \left((\beta_1 S + \beta_2 V)I - \mu_3 I\right).
\end{align}
Since $(S^*,I^*,V^*)$ is the endemic equilibrium of system (\ref{Model}) and $\mu_1 = \mu + \alpha$,
one has
\begin{align*}
\Sigma(\varsigma) = &\ \mu S^* \left(2-\frac{S^*}{S}-\frac{S}{S^*}\right) + \mu_2 V^* \left(3-\frac{S^*}{S}-\frac{V}{V^*}-\frac{SV^*}{S^* V}\right)\\
& -\beta_1 S^* I^*\left[g\left(\frac{S^*}{S}\right) + g\left(\frac{S}{S^*}\right)\right] -\beta_2 V^* I^*\left[g\left(\frac{S^*}{S}\right) + g\left(\frac{SV^*}{S^* V}\right) + g\left(\frac{V}{V^*}\right)\right].
\end{align*}
Furthermore,
\begin{align*}
\frac{{\rm d} W_2(\varsigma)}{{\rm d} \varsigma} = &\frac{{\rm d}}{{\rm d} \varsigma} \left[\int_0^1 g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma - \int_{-1}^0 g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma\right]\\
= & \int_0^1 \frac{{\rm d}}{{\rm d} \varsigma}g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma - \int_{-1}^0 \frac{{\rm d}}{{\rm d} \varsigma}g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma\\
= & - \int_0^1 \frac{{\rm d}}{{\rm d} \sigma}g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma + \int_{-1}^0 \frac{{\rm d}}{{\rm d} \sigma}g\left(\frac{S(\varsigma-\sigma)}{S^*}\right){\rm d} \sigma\\
= & 2 g\left(\frac{S}{S^*}\right) - g\left(\frac{S(\varsigma-1)}{S^*}\right) - g\left(\frac{S(\varsigma+1)}{S^*}\right).
\end{align*}
Similarly,
\[
\frac{{\rm d} W_3(\varsigma)}{{\rm d} \varsigma} = 2 g\left(\frac{ V}{V^*}\right) - g\left(\frac{ V(\varsigma-1)}{V^*}\right) - g\left(\frac{ V(\varsigma+1)}{V^*}\right)
\]
and
\[
\frac{{\rm d} W_4(\varsigma)}{{\rm d} \varsigma} = 2 g\left(\frac{ I}{I^*}\right) - g\left(\frac{ I(\varsigma-1)}{I^*}\right) - g\left(\frac{ I(\varsigma+1)}{I^*}\right).
\]
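As an independent numerical sanity check (not part of the proof), the derivative identity for $W_2$ can be verified for a sample positive wave profile; the profile $S$, the normalization $S^*=1$, and the discretization parameters below are all illustrative choices:

```python
import math

def g(x):                      # g(x) = x - 1 - ln(x), g >= 0 with g(1) = 0
    return x - 1.0 - math.log(x)

def S(s):                      # hypothetical positive profile, with S* = 1
    return 2.0 + math.sin(s)

def W2(v, n=4000):
    # midpoint quadrature of the two integrals defining W2;
    # substituting sigma -> -sigma maps the second one onto [0, 1]
    h = 1.0 / n
    front = sum(g(S(v - (i + 0.5) * h)) for i in range(n)) * h
    back = sum(g(S(v + (i + 0.5) * h)) for i in range(n)) * h
    return front - back

v, eps = 0.7, 1e-4
numeric = (W2(v + eps) - W2(v - eps)) / (2.0 * eps)          # central difference
closed = 2.0 * g(S(v)) - g(S(v - 1.0)) - g(S(v + 1.0))       # identity above
assert abs(numeric - closed) < 1e-5
```

The check mirrors the computation of ${\rm d} W_2/{\rm d}\varsigma$: differentiation in $\varsigma$ is traded for differentiation in $\sigma$ and then integrated out, leaving only boundary terms.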
Now, we define a Lyapunov functional as
\[
\mathcal{V}(\varsigma) = W_1(\varsigma) + S^* W_2(\varsigma) + V^* W_3(\varsigma) + d I^* W_4(\varsigma),
\]
and
\begin{align*}
\frac{{\rm d} \mathcal{V}(\varsigma)}{{\rm d} \varsigma}
= &\mu S^* \left(2-\frac{S^*}{S}-\frac{S}{S^*}\right) + \mu_2 V^* \left(3-\frac{S^*}{S}-\frac{ V}{V^*}-\frac{SV^*}{S^* V}\right)\\
& -\beta_1 S^* I^*\left[g\left(\frac{S^*}{S}\right) + g\left(\frac{S}{S^*}\right)\right] -\beta_2 V^* I^*\left[g\left(\frac{S^*}{S}\right) + g\left(\frac{SV^*}{S^* V}\right) + g\left(\frac{V}{V^*}\right)\right]\\
&- S^* \left[g\left(\frac{S(\varsigma-1)}{S}\right) + g\left(\frac{S(\varsigma+1)}{S}\right)\right] - V^* \left[g\left(\frac{ V(\varsigma-1)}{ V}\right) + g\left(\frac{ V(\varsigma+1)}{ V}\right)\right]\\
& - dI^* \left[g\left(\frac{ I(\varsigma-1)}{ I}\right) + g\left(\frac{ I(\varsigma+1)}{ I}\right)\right].
\end{align*}
Recall that $g(x)\geq0$. Hence, the map $\varsigma\mapsto \mathcal{V}(\varsigma)$ is non-increasing. Choose $\{\varsigma_k\}_{k\geq 0}$ to be an increasing sequence with $\varsigma_k>0$ and $\varsigma_k\rightarrow+\infty$ as $k\rightarrow+\infty$, and let
$$\{S_k(\varsigma)=S(\varsigma+\varsigma_k)\}_{k\geq 0},\ \ \{ V_k(\varsigma)= V(\varsigma+\varsigma_k)\}_{k\geq 0}\ \ \textrm{and}\ \ \{ I_k(\varsigma)= I(\varsigma+\varsigma_k)\}_{k\geq 0}.$$
Since $S$, $V$ and $I$ have bounded derivatives, the sequences of functions $\{S_k(\varsigma)\}$, $\{V_k(\varsigma)\}$ and $\{I_k(\varsigma)\}$ converge in $C_{loc}^{\infty}(\mathbb{R})$ as $k\rightarrow+\infty$ by the Arzel\`a--Ascoli theorem, up to extraction of a subsequence.
We may thus assume that $\{S_k(\varsigma)\}$, $\{V_k(\varsigma)\}$ and $\{I_k(\varsigma)\}$ converge to some nonnegative $C^\infty$ functions $S_\infty$, $ V_\infty$ and $ I_\infty.$ Since $\mathcal{V}(S, V, I)(\varsigma)$ is bounded from below,
there exists $M_0$ such that, for all large $k$,
\[
M_0\leq \mathcal{V}(S_k, V_k, I_k)(\varsigma)=\mathcal{V}(S, V, I)(\varsigma+\varsigma_k)\leq \mathcal{V}(S, V, I)(\varsigma).
\]
Hence, there exists some $\delta\in \mathbb{R}$ such that $\lim\limits_{k\rightarrow\infty} \mathcal{V}(S_k, V_k, I_k)(\varsigma)=\delta$, $\forall\varsigma\in \mathbb{R}$. Using the Lebesgue dominated convergence theorem, one has that
\[
\lim_{k\rightarrow+\infty}\mathcal{V}(S_k, V_k, I_k)(\varsigma)=\mathcal{V}(S_\infty, V_\infty, I_\infty)(\varsigma),\ \varsigma\in \mathbb{R}.
\]
Thus
\[
\mathcal{V}(S_\infty, V_\infty, I_\infty)(\varsigma)=\delta.
\]
Recall that $\frac{{\rm d} \mathcal{V}}{{\rm d} \varsigma}=0$ if and only if $S\equiv S^*$, $ V\equiv V^*$ and $ I\equiv I^*$. Since $\mathcal{V}(S_\infty, V_\infty, I_\infty)(\varsigma)$ is constant in $\varsigma$, this finishes the proof.
\end{proof}
\begin{remark}
For the case $c=\mathfrak{c}^*$, we can obtain the existence of TWS by an approximation technique similar to the one used in \cite[Section 4]{ChenGuoHamelNon2017}. The TWS for $c=\mathfrak{c}^*$ also satisfies \eqref{Bound1} and \eqref{Bound2}, since the Lyapunov functional is independent of $c$.
\end{remark}
\section{Discussion}\label{Sec:Dis}
In this paper, we proposed a discrete diffusive vaccination epidemic model (i.e., system (\ref{Model})), which seems to be more realistic than the non-delayed model (1.2). Employing Schauder's fixed point theorem and a Lyapunov functional, we obtained the existence of a nontrivial positive TWS connecting two different equilibria. Our research examines the conditions (in terms of the basic reproduction number) under which an infectious disease can spread, even when a vaccine for this disease exists.
Now we finish this section with some explanations from the perspective of epidemiology.
Assume that $(\hat{\mathfrak{r}},\hat{c})$ is a root of $\Delta(\mathfrak{r},c)=0$, by some calculations, we obtain
\[
\frac{{\rm d} \hat{c}}{{\rm d} \gamma_1} <0,\ \ \frac{{\rm d} \hat{c}}{{\rm d} d} > 0,\
\frac{{\rm d} \hat{c}}{{\rm d} \beta_1} >0,\ \ \frac{{\rm d} \hat{c}}{{\rm d} \beta_2} >0\ \ {\rm and}\ \ \frac{{\rm d} \hat{c}}{{\rm d} \Re_0} >0.
\]
Mathematically, $\hat{c}$ is decreasing in $\gamma_1$, while $\hat{c}$ is increasing in $d$, $\beta_1$ and $\beta_2$. From the biological point of view, this indicates the following three scenarios:
\begin{description}
\item[I.] The more successful the vaccination, the slower the disease spreads;
\item[II.] The faster the infected individuals move, the faster the disease spreads;
\item[III.] The more effective the transmission (i.e., the larger $\beta_1$ and $\beta_2$), the faster the disease spreads.
\end{description}
Accordingly, a good understanding of the movement of the infected individuals and the vaccination rate of susceptible individuals could be important in disease control strategy.
In fact, as in the ordinary differential equation case in \cite{LiuTakeuchiShingoJTB2008}, the basic reproduction number $\Re_0$ is decreasing in $\gamma_1$, while $\Re_0$ is increasing in $\beta_1$ and $\beta_2$. Compared with \cite{LiuTakeuchiShingoJTB2008}, our study proposes a new means of control, namely restricting the movement of the infected individuals.
Another important observation is that the effectiveness of vaccination $\gamma_1$ is more important than the vaccination rate $\alpha$, which explains the importance of complete vaccination.
In addition, numerical simulation can help us to see the influence of model parameters on the qualitative behavior of TWS. However, the present paper focuses on the mathematical analysis of the model, and we leave numerical simulations fitted to specific diseases as future work.
\section*{Acknowledgements}
The authors would like to thank Professor Shigui Ruan from University of Miami for his suggestions.
1412.8018
\section{Introduction}\label{intro}
Stability of linear time-varying (LTV) systems has been a topic of significant interest in a wide range of disciplines including but not limited to mathematical modeling and control of dynamical systems,~\cite{rosenbrook1963stability,1100529,1084637,Ilchmann1987157,tsakalis1993linear,DaCunha2005381,Phat2006343}. Discrete-time LTV dynamics can be represented by the following model:
\begin{equation}\label{1}
{\bf{x}}(k+1)={{P}}_k{\bf{x}}(k)+B_k\mathbf{u}(k),\qquad k \geq 0,
\end{equation}
where~$\mathbf{x}(k)\in\mathbb{R}^n$ is the state vector,~$P_k$'s are the system matrices,~$B_k$'s are the input matrices, and~$\mathbf{u}(k)\in\mathbb{R}^s$ is the input vector. This model is particularly relevant to the design and analysis of distributed fusion algorithms when the system matrices,~$P_k$'s, are (sub-) stochastic, i.e. they are non-negative and each row sums to at most~$1$. Examples include leader-follower algorithms,~\cite{tanner02,4200874}, consensus-based control algorithms,~\cite{5509836,5456181,4456762}, and sensor localization,~\cite{khan2009distributed,khan2010diland}.
In contrast to the case when the system matrices,~$P_k$'s, are time-invariant, i.e.~$P_k=P,\forall k$, as in many studies related to the above examples, we are motivated by the scenarios when these system matrices are time-varying. The dynamic system matrices do not only model time-varying neighboring interactions, but, in addition, capture agent mobility in multi-agent networks. Consider, for example, the leader-follower algorithm,~\cite{tanner02,4200874}, where~$n$ \emph{sensors} update their states,~$\mathbf{x}_k$'s in Eq.~\eqref{1}, as a linear-convex combination of the neighboring states, and~$s=1$ \emph{anchor} keeps its (scalar) state,~$u_k$, fixed at all times. It is well-known that under mild conditions on network connectivity the sensor states converge to the anchor state. However, the neighboring interactions change over time if the sensors are mobile. In the case of possibly random motion over the sensors, at each time~$k$, it is not guaranteed that a sensor can find any neighbor at all. If a sensor finds a set of neighbors to exchange information, none of these neighbors may be an anchor. We refer to the general class of such time-varying fusion algorithms over mobile agents as \emph{Distributed Dynamic Fusion~(DDF)}. In this context, we study the conditions required on the DDF system matrices such that the \emph{dynamic} fusion converges to (a linear combination of) the anchor state(s).
For linear time-invariant (LTI) systems, a necessary and sufficient condition for stability is that the \textit{spectral radius}, i.e. the absolute value of the largest eigenvalue, of the system matrix is subunit. A well-known result from matrix theory is that if the (time-invariant) system matrix,~$P$, is irreducible \textit{and} sub-stochastic, sometimes referred to as \emph{uniformly sub-stochastic},~\cite{kolpakov,kolpakov_rus:83}, the spectral radius of~$P$ is strictly less than one and~$\mathbf{x}_k$ converges to zero as~$k\rightarrow\infty$. In contrast, the DDF algorithms over mobile agents result in a time-varying system, Eq.~\eqref{1}, where a system matrix,~$P_k$, at any time~$k$ is non-negative, and can be: (i) identity if no sensor is able to update its state; (ii) stochastic if the updating sensor divides the total weight of~$1$ among the sensors in its neighborhood; or, (iii) sub-stochastic if the total weight of~$1$ is divided among both sensors and anchors. In addition, it can be verified that in DDF algorithms, the resulting LTV system may be such that the spectral radii,~$\rho(P_k)$, of the system matrices follow~$\rho(P_k)=1,\forall k$. This is the case, for example, when only a few sensors update while the remaining ones keep their past states.
Asymptotic stability for LTV systems may be characterized by the \textit{joint spectral radius} of the associated family of system matrices. Given a finite set of matrices,~$\mathcal{M}=\{{A}_{1},\ldots,{A}_{m}\}$, the joint spectral radius of the set~$\mathcal{M}$, was introduced by Rota and Strang,~\cite{rota1960note}, as a generalization of the classical notion of spectral radius, with the following definition:
\begin{equation*}\label{2}
\rho(\mathcal{M}):=\lim_{k \rightarrow \infty} \max\limits_{A \in {\mathcal{M}}_k } {\Vert {A}\Vert}^{\frac{1}{k}},
\end{equation*}
in which~${\mathcal{M}}_k$ is the set of all possible products of the length~$k \geq 1$, i.e.
\begin{equation*}
{\mathcal{M}}_k =\{A_{i_1} A_{i_2} \cdots A_{i_k} : 1 \leq i_j \leq m,~j=1,\ldots,k\}.
\end{equation*}
The joint spectral radius (JSR) is independent of the choice of norm, and represents the maximum growth rate that can be achieved by forming arbitrarily long products of the matrices taken from the set~${\mathcal{M}}$. It turns out that the asymptotic stability of the LTV systems, with system matrices taken from the set~$\mathcal{M}$, is guaranteed,~\cite{parrilo2008approximation}, if and only if
\begin{equation*}
\rho(\mathcal{M}) < 1.
\end{equation*}
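To make the definition concrete, the sketch below computes crude brute-force brackets on the JSR of a small finite set: for every product length $k$, $\max_{A\in\mathcal{M}_k}\rho(A)^{1/k}$ is a lower bound and $\max_{A\in\mathcal{M}_k}\Vert A\Vert^{1/k}$ an upper bound. The cutoff length and any test matrices are arbitrary choices:

```python
import itertools
import numpy as np

def jsr_bounds(mats, k_max=6):
    """Bracket the joint spectral radius of a finite matrix set.
    For each product length k, max ||A||^(1/k) over A in M_k upper-bounds
    the JSR, while max rho(A)^(1/k) lower-bounds it."""
    lo, hi = 0.0, float("inf")
    for k in range(1, k_max + 1):
        norms, rhos = [], []
        for idx in itertools.product(range(len(mats)), repeat=k):
            A = mats[idx[0]]
            for i in idx[1:]:           # form the length-k product
                A = A @ mats[i]
            norms.append(np.linalg.norm(A, 2))
            rhos.append(max(abs(np.linalg.eigvals(A))))
        hi = min(hi, max(norms) ** (1.0 / k))
        lo = max(lo, max(rhos) ** (1.0 / k))
    return lo, hi
```

For a singleton set both brackets collapse to the ordinary spectral radius; for larger sets the cost grows exponentially in $k$, reflecting the hardness results cited below.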
Although the JSR characterizes the stability of LTV systems, its computation is NP-hard,~\cite{tsitsiklis1997lyapunov}, and the determination of a strict bound is undecidable,~\cite{blondel2000boundedness}. Naturally, much of the existing literature has focused on JSR approximations,~\cite{parrilo2008approximation,gripenberg1996computing,blondel2000boundedness,jungers2009joint,blondel2005computationally,tsitsiklis1997lyapunov,qu2005products,touri2011product}. For example, Ref.~\cite{blondel2005computationally} studies lifting techniques to approximate the JSR of a set of matrices. The main idea is to build a lifted set with a larger number of matrices, or a set of matrices with higher dimensions, such that the relation between the JSR of the new set and the original set is known. Lifting techniques provide better bounds at the price of a higher computational cost. In~\cite{parrilo2008approximation}, a sum of squares programming technique is used to approximate the JSR of a set of matrices; a bound on the quality of the approximation is also provided, which is independent of the number of matrices. Stability of LTV systems is also closely related to the convergence of infinite products of matrices. Of particular interest is the special case of the (infinite) product of non-negative and/or (sub-) stochastic matrices, see~\cite{guu2003convergence,daubechies1992sets,bru1994convergence,beyn1997infinite,hartfiel1974infinite,pullman1966infinite,kochkarev1995continuous,Elsner1997133}. In addition to non-negativity and sub-stochasticity, the majority of these works set other restrictions, such as irreducibility or bounds on the row sum on each matrix in the set.
The main contributions of this paper are as follows. \emph{Design}: we provide a set of conditions on the elements of the system matrices under which the asymptotic stability of the corresponding LTV system can be guaranteed. \emph{Analysis}: we propose a general framework to determine the stability of an (infinite) product of (sub-) stochastic matrices. Our approach does not require either the computation or an approximation of the JSR. Instead, we partition the infinite set of system matrices (stochastic, sub-stochastic, or identity) into non-overlapping slices--a slice is defined as the smallest product of (consecutive) system matrices such that: (i) every row sum of a slice is strictly less than one; and, (ii) the slices cover the entire sequence of system matrices. Under the conditions established in the design, we subsequently show that the infinity norm of each slice is subunit (recall that in the DDF setup, the infinity norm of each system matrix is one). Finally, in order to establish the relevance to the fusion \textit{applications} of interest, we use the theoretical results to derive the convergence and steady-state of a dynamic leader-follower algorithm.
An important aspect of our analysis lies in the study of slice lengths. First, we show that longer slices may have an infinity norm that is closer to one as compared to shorter slices. Clearly, if one can show that each slice norm is subunit (with a uniform upper bound of~$<1$) then one further has to guarantee an infinite number of such slices to ensure stability. The aforementioned argument naturally requires slices of finite length, as finite slices covering infinite (system) matrices lead to an infinite number of slices. An avid reader may note that guaranteeing a sharp upper bound on the length of every slice may not be possible for certain network configurations. To address such configurations, we characterize the rate at which the slices (not necessarily in an order) grow large such that the LTV stability is not disturbed. In other words, a longer slice may capture a slow information propagation in the network; characterizing the aforementioned growth is equivalent to deriving the rate at which the information propagation may deteriorate in a network such that the fusion is still achievable.
The rest of this paper is organized as follows. We formulate the problem in Section~\ref{PF}, while Section~\ref{IP} studies the convergence of an infinite product of (sub-) stochastic matrices. Stability of discrete-time LTV systems with (sub-) stochastic system matrices is studied in Section~\ref{stability}. We provide applications to distributed dynamic fusion in Section~\ref{app} and illustrations of the results in Section~\ref{example}. Finally, Section~\ref{conc} concludes the paper.
\section{Problem formulation}\label{PF}
In this paper, we study the \textit{asymptotic stability} of the following Linear Time-Varying (LTV) dynamics:
\begin{equation}\label{eq2}
{\bf{x}}(k+1)={{P}}_k{\bf{x}}(k)+{{B}}_k{\bf{u}}(k),\qquad k \geq 0,
\end{equation}
where~${\mathbf{x}}(k)\in\mathbb{R}^n$ is the state vector,~${{P}}_k\in\mathbb{R}^{n \times n}$ is the time-varying system matrix,~${B}_k\in\mathbb{R}^{n \times s}$ is the time-varying input matrix,~${\bf{u}}(k)\in\mathbb{R}^s$ is the input vector, and~$k$ is the discrete-time index. We consider the system matrix,~${P}_k$ at each~$k$, to be non-negative and either \emph{sub-stochastic}, \emph{stochastic}, or \emph{identity}, along with some conditions on its elements. The input matrix,~${B}_k$ at each~$k$, may be arbitrary as long as some regularity conditions are satisfied. These regularity conditions on the system matrices,~${P}_k$'s and~${B}_k$'s, are collected in Assumptions {\bf A0--A2} in the following.
In this paper, we are interested in deriving the conditions on the corresponding system matrices under which the LTV dynamics in Eq.~\eqref{eq2} forget the initial condition,~$\mathbf{x}(0)$, and converge to some function of the input vector,~$\mathbf{u}(k)$. The motivation behind this investigation can be cast in the context of distributed fusion over dynamic graphs that we introduce in the following.
\subsection{Distributed Dynamic Fusion}
Consider a network of~$n+s$ mobile nodes moving arbitrarily in a (finite) region of interest, where~$n$ mobile sensors implement a distributed algorithm to obtain some relevant function of~$s$ (mobile) anchors; examples include the leader-follower setup,~\cite{tanner02,4200874}, and sensor localization,~\cite{khan2009distributed,khan2010diland}. The sensors may be thought of as mobile agents that collect information from the anchors and disseminate within the sensor network. Each node may have restricted mobility in its respective region and thus many sensors may not be able to directly connect to the anchors. Since the motion of each node is arbitrary, the network configuration at any time~$k$ is completely unpredictable. It is further likely that at many time instants, no node has any neighbor in its communication radius.
Formally, sensors, in the set~$\Omega$, are the nodes in the graph that update their states,~$x_i(k)\in\mathbb{R}, i=1,\ldots,n$, as a linear-convex function of the neighboring nodes; while anchors, in the set~$\kappa$, are the nodes that inject information,~$u_j(k)\in\mathbb{R},j=1,\ldots,s$, in the network. Let~$\mathcal{N}_i(k)$ denote the set of neighbors (not including sensor~$i$) of sensor~$i$ according to the underlying graph at time~$k$, with~$\mathcal{D}_i(k)\triangleq\{i\}\cup\mathcal{N}_i(k)$. We assume that at each time~$k$, only one sensor, say~$i$, updates its state\footnote{Although multiple sensors may update their states at each iteration, without loss of generality, we assume that at most one sensor may update.},~$x_i(k)$. Since the underlying graph is dynamic, the updating sensor~$i$ implements one of the following updates:
\begin{enumerate}[(i)]
\item No neighbors:
\begin{eqnarray}\label{U1}
x_i(k+1) = x_i(k),\qquad\mathcal{N}_i(k)=\emptyset.
\end{eqnarray}
\item No neighboring anchor,~$\mathcal{N}_i(k)\cap\kappa=\emptyset$:
\begin{eqnarray}\label{U2}
x_i(k+1) = \sum\limits_{l\in{\mathcal{D}_{i}(k)}}({P}_{k})_{i,l}{x}_{l}(k).
\end{eqnarray}
\item At least one anchor as a neighbor:
\begin{eqnarray}\nonumber
x_i(k+1) &=&
\sum\limits_{l\in{\mathcal{D}_{i}(k)\cap\Omega}}({P}_{k})_{i,l}{x}_{l}(k)\\ \label{U3}
&+& \sum\limits_{j\in{\mathcal{D}_i(k)}\cap\kappa}({B}_{k})_{i,j}{u}_{j}(k),
\end{eqnarray}
with~$\mathcal{N}_i(k)\cap\kappa\neq\emptyset$.
\end{enumerate}
At every other (non-updating) sensor,~$l\neq i$, we have
\begin{eqnarray}\label{U4}
x_l(k+1) = x_l(k).
\end{eqnarray}
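A minimal simulation sketch of updates (i)--(iv), assuming equal weight splitting among $\mathcal{D}_i(k)$ and a fixed per-anchor weight $\alpha$; the network, the weight rule, and all parameter values below are illustrative rather than prescribed by the model:

```python
import random

def ddf_step(x, u, nbr_sensors, nbr_anchors, alpha=0.3):
    """One DDF update: a randomly chosen sensor i averages over its
    current neighborhood; every other sensor keeps its state (update (iv))."""
    i = random.randrange(len(x))
    S, A = nbr_sensors(i), nbr_anchors(i)
    x_new = list(x)
    if not S and not A:                 # (i) no neighbors: keep own state
        return x_new
    D = [i] + S                         # D_i(k) = {i} plus neighboring sensors
    if not A:                           # (ii) stochastic row: convex combination
        w = 1.0 / len(D)
        x_new[i] = sum(w * x[l] for l in D)
    else:                               # (iii) sub-stochastic row: mass alpha
        rem = 1.0 - alpha * len(A)      # per anchor; assumes alpha*|A| < 1
        w = rem / len(D)
        x_new[i] = sum(w * x[l] for l in D) + sum(alpha * u[j] for j in A)
    return x_new

# toy run: 3 fully connected sensors, one anchor visible to all of them
random.seed(0)
x, u = [0.0, 0.0, 0.0], [5.0]
for _ in range(500):
    x = ddf_step(x, u, lambda i: [l for l in range(3) if l != i], lambda i: [0])
assert all(abs(xi - 5.0) < 1e-3 for xi in x)
```

In the toy run every sensor sees the anchor, so each update is a strict contraction toward the anchor state and the sensor states converge to~$u$; the conditions for convergence under far weaker connectivity are precisely what the analysis below establishes.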
\subsection{Assumptions}
Let~$({P}_{k})_{i,l}$ and~$({B}_{k})_{i,j}$ denote the elements of~${P}_k$ and~${B}_k$, respectively; we now enlist the assumptions:
\noindent {\bf A0}: When the updating sensor,~$i$, has no anchor as a neighbor, the update in Eq.~\eqref{U2} is linear-convex, i.e.
\begin{eqnarray}
\sum\limits_{l\in{\mathcal{D}_{i}(k)}\cap\Omega}({P}_{k})_{i,l} = 1,
\end{eqnarray}
resulting in a (row) stochastic system matrix,~${P}_k$.
\noindent {\bf A1}: When the updating sensor,~$i$, has no anchor but at least one sensor as a neighbor, the weight it assigns to each neighbor (including the self-weight) is such that
\begin{eqnarray}\label{bnd1}
0 < \beta_1 \leq ({P}_{k})_{i,l} < 1,\qquad \forall l \in {\mathcal{D}}_i(k),~\beta_1\in\mathbb{R}.
\end{eqnarray}
\noindent {\bf A2}: When the updating sensor updates with an anchor, the update, Eq.~\eqref{U3}, over the sensors,~$\mathcal{D}_i(k)\cap\Omega$, satisfies
\begin{eqnarray}\label{bnd2}
\sum\limits_{l\in{\mathcal{D}_{i}(k)}\cap\Omega}({P}_{k})_{i,l} \leq\beta_2 < 1,
\end{eqnarray}
resulting in a sub-stochastic system matrix,~${P}_k$. Also note that the update over the anchors,~$\mathcal{N}_i(k)\cap\kappa$, in Eq.~\eqref{U3}, follows
\begin{eqnarray}\label{bnd3}
({B}_{k})_{i,j}\geq\alpha>0,\qquad\forall j\in\mathcal{N}_i(k)\cap\kappa.
\end{eqnarray}
If, in addition, we enforce~$\sum_l ({P}_{k})_{i,l}+\sum_j ({B}_{k})_{i,j}=1$, as it is assumed in leader-follower,~\cite{tanner02,4200874}, or sensor localization,~\cite{khan2009distributed,khan2010diland}, Eq.~\eqref{bnd3} naturally leads to the bound in Eq.~\eqref{bnd2}.
Clearly, which of the four updates in Eqs.~\eqref{U1}--\eqref{U4} is applied by the updating sensor,~$i$, depends on being able to satisfy the corresponding assumptions ({\bf A0--A2}), \emph{in addition} to the neighborhood configuration. Indeed, letting
\begin{eqnarray*}
\mathbf{x}(k) &=& \left[x_1(k),\ldots,x_n(k)\right]^\top,\\
\mathbf{u}(k) &=& \left[u_1(k),\ldots,u_s(k)\right]^\top,
\end{eqnarray*}
results in the LTV system in Eq.~\eqref{eq2}. Clearly, the time-varying system matrices,~${P}_k$, are either sub-stochastic, stochastic, or identity, depending on the nature of the update.
{\bf Remarks:} It is meaningful to comment on the assumptions made above. Non-negativity and stochasticity are standard in the literature concerning relevant iterative algorithms and multi-agent fusion, see e.g.~\cite{5509836,5456181,4456762,khan2009distributed}. When there is a neighboring anchor, Eq.~\eqref{bnd2} provides an \emph{upper bound on unreliability}, thus restricting the amount of unreliable information added in the network by a sensor. Eq.~\eqref{bnd3}, on the other hand, can be viewed as a \emph{lower bound on reliability}; it ensures that whenever an anchor is included in the update, a certain amount of information is always contributed by the anchor. An avid reader may note that Eq.~\eqref{bnd3} guarantees that the following does not occur:~$({B}_{k_1})_{i,j}\rightarrow0$, where~$k_1\geq0$ is a subsequence within~$k$ denoting the instants when Eq.~\eqref{U3} is implemented. Similarly, Eqs.~\eqref{bnd1} and~\eqref{bnd2} ensure that no sensor is assigned a weight arbitrarily close to~$1$ and thus no sensor may be entrusted with the role of an anchor. Note also that Eq.~\eqref{bnd1} naturally leads to an upper bound on the neighboring sensor weight, i.e.~$(P_k)_{i,l\neq i}\leq1-\beta_1$, because~$\mathcal{D}_i(k)$ always includes~$\{i\}$. Also, when there is no neighboring anchor, Eq.~\eqref{bnd1} guarantees that sensors do not completely forget their past information, since they put a non-zero self-weight on their own previous states. Finally, we point out that the bounds in Eqs.~\eqref{bnd1}--\eqref{bnd3} are naturally satisfied by LTI dynamics:~$\mathbf{x}(k+1)=P\mathbf{x}(k) + B\mathbf{u}(k)$, with non-negative matrices; a topic well-studied in the context of iterative algorithms,~\cite{tsit_book,plemmons:79}, and multi-agent fusion.
\section{Infinite product of (sub-) stochastic matrices}\label{IP}
In this section, we study the convergence of
\begin{equation}\label{eq11}
\lim_{k \rightarrow \infty}{{P}}_{k}{{P}}_{k-1} \ldots {{P}}_0,
\end{equation}
where~${{P}}_{k}$ is the system matrix at time~$k$, as defined in Section~\ref{PF}. Since multiplication with the identity matrix has no effect on the convergence of the sequence, in the rest of the paper we only consider the updates in which at least one sensor is able to find and exchange information with some neighbors, i.e.~${{P}}_{k} \neq I_n,~\forall k$. We are interested in establishing the stability properties of this infinite product. As described in Section~\ref{intro}, studying the joint spectral radius poses many challenges, so we instead use the infinity norm to derive convergence conditions. The infinity norm,~$\|M\|_\infty$, of a square matrix,~$M$, is defined as the maximum of the absolute row sums. Clearly, the infinity norm of~${{P}}_k$ is one for all~$k$, since each system matrix has at most one sub-stochastic row.
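As a quick numerical illustration (a sketch, not part of the development; the matrices below are hypothetical), the infinity norm of a single system matrix remains one even after a sub-stochastic update, because the remaining rows still sum to one:

```python
import numpy as np

def inf_norm(M):
    """Infinity norm: maximum absolute row sum."""
    return np.abs(M).sum(axis=1).max()

# A stochastic system matrix: every row sums to 1.
P_stoch = np.array([[0.5, 0.5, 0.0],
                    [0.2, 0.6, 0.2],
                    [0.0, 0.3, 0.7]])

# The same matrix with a sub-stochastic update at row 0:
# an anchor absorbs weight 0.4, so the row sums to 0.6.
P_sub = P_stoch.copy()
P_sub[0] = [0.3, 0.3, 0.0]

# Both norms are 1: a single sub-stochastic row does not lower the norm,
# since the remaining rows still sum to 1.
assert abs(inf_norm(P_stoch) - 1.0) < 1e-12
assert abs(inf_norm(P_sub) - 1.0) < 1e-12
```

This is why a single system matrix is never a contraction in the infinity norm, and a product over a slice must be analyzed instead.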
To establish a subunit infinity norm, we divide the system matrices into non-overlapping \textit{slices} and show that each slice has an infinity norm strictly less than one; the entire chain of system matrices is covered by these non-overlapping slices. Let one of the slices be denoted by~$M$ with length~$|M|$ and, without loss of generality, index the matrices within~$M$ as
\begin{eqnarray}\label{M}
{{M}} = P_{|{{M}}|}P_{|{{M}}|-1}P_{|{{M}}|-2}\ldots P_{3}P_{2}P_{1}.
\end{eqnarray}
\noindent Using slice notation, we can introduce a new discrete-time index,~$t$, which allows us to study the following
\begin{eqnarray}
\lim_{t \rightarrow \infty}{{M}}_{t}{{M}}_{t-1}\ldots{{M}_{0}},
\end{eqnarray}
instead of Eq.~\eqref{eq11}; note that~$k > t$.
We define a system matrix,~$P_k$, as a \textit{success} if it decreases the row sum of some row of the running product of system matrices that was stochastic before this update. Each success, thus, adds a \emph{new} sub-stochastic row to a slice, and~$n$ such successful updates are required to complete a slice. In this argument, we assume that a row that becomes sub-stochastic remains sub-stochastic after successive multiplication with stochastic or sub-stochastic matrices,~$\left(\ldots P_{k+2}P_{k+1}\right)$, which is not true in general. Thus, we will derive explicit conditions under which the sub-stochasticity of a row is preserved. Before we proceed with our main result, we provide the following lemmas:
\begin{lem}\label{lem1}
For the infinity norm to be less than one, each slice has to contain at least one sub-stochastic update.
\end{lem}
\begin{proof}
Since the product of stochastic matrices is itself stochastic \cite{anton2010elementary}, a slice without a sub-stochastic update will be a stochastic matrix, whose infinity norm is~$1$.
\end{proof}
We now motivate the slice construction as follows. Partition the rows in an arbitrary~$P_k$ into two distinct sets: set~$\mathcal{I}$ contains all sub-stochastic rows, and the remaining (stochastic) rows form the other set,~$\mathcal{U}$. We initiate each slice with the first success,~$\vert \mathcal{I} \vert=1,~\vert \mathcal{U} \vert=n-1$, and terminate it \textit{after} the~$n$th success,~$\vert \mathcal{I} \vert=n,~\vert \mathcal{U} \vert=0$, when each row becomes sub-stochastic. Between the~$n$th success in the current slice, say~$M_j$, and the first success in the next slice,~$M_{j+1}$, all we can have are stochastic or sub-stochastic matrices that must preserve the sub-stochasticity of each row. See Fig.~\ref{f0} for the slice representation, where the rightmost system matrices (encircled in Fig.~\ref{f0}) of each slice, i.e.~${P}_0,~{P}_{m_0},~\ldots,~{P}_{m_{j-1}} \ldots$, are sub-stochastic. The~$j$th slice length may be defined as
\begin{equation*}
\vert {M}_{j} \vert = m_j - m_{j-1},\qquad m_{-1}=0,
\end{equation*}
and slice lengths are not necessarily equal.
\begin{figure}[!h]
\centering
\includegraphics[width=2.75in]{slices.pdf}
\caption{Slice representation}
\label{f0}
\end{figure}
In the next lemma, we show how a stochastic row can become sub-stochastic, in a slice,~${{M}}$. We index~$P_k$'s in a slice,~$M$, by~$P_{\vert {M} \vert}, \ldots, P_2, P_1$ to simplify notation, and define the product of all system matrices up to time~$k$ in a slice as
\begin{eqnarray}\label{14}
J_k=P_k P_{k-1} \ldots P_2 P_1, \qquad 0 < k \leq |{{M}}|.
\end{eqnarray}
\begin{lem}\label{lem2}
Suppose the~$i$-th row of~$J_k$ is stochastic at index~$k$ of a given slice,~${{M}}$, and~$P_{k+1}$ is the next system matrix. Row~$i$ in~$J_{k+1}$ can become sub-stochastic by either:\\
(i) a sub-stochastic update at the~$i$-th row of~$P_{k+1}$; or,\\
(ii) a stochastic update at the~$i$-th row of~$P_{k+1}$, such that
\begin{eqnarray*}
\exists j \in \mathcal{I}_k,\mbox{ with }(P_{k+1})_{ij} \neq 0,
\end{eqnarray*}
where~$\mathcal{I}_k$ is the set of sub-stochastic rows in~$J_k$.
\end{lem}
\begin{proof}
For the sake of simplicity, let~$P \triangleq P_{k+1}, J \triangleq J_k$, in the following. Updating the~$i$th row at index~$k+1$ leads to
\begin{eqnarray}\label{16}
P_{k+1}\triangleq P=
\left[
\begin{array}{cccccc}
I_{1:i-1}\\
(P)_{i,1} & (P)_{i,2} && \ldots && (P)_{i,n}\\
&&&&& I_{i+1:n}
\end{array}
\right],
\end{eqnarray}
where~$I_{a:b}$ denotes rows~$a$ through~$b$ of the~$n \times n$ identity matrix,~$I_n$. The~$i$th row after this update is
\begin{eqnarray*}
(PJ)_{i,j} = \sum_{m=1}^n(P)_{i,m}(J)_{m,j},
\end{eqnarray*}
where~$(PJ)_{i,j}$ is the~$(i,j)$th element of~$PJ$, and the~$i$th row sum becomes
\begin{eqnarray}\label{ui}
\sum_{j}(PJ)_{i,j}&=&\sum_{j}\sum_{m=1}^n(P)_{i,m}(J)_{m,j},\\
&=& \sum_{j} \Big((P)_{i,1}(J)_{1,j}+ \ldots + (P)_{i,n}(J)_{n,j} \Big),\nonumber\\
&=&(P)_{i,1}\underbrace{\sum_{j}(J)_{1,j}}_{\leq 1}+ \ldots + (P)_{i,n}\underbrace{\sum_{j}(J)_{n,j}}_{\leq 1}\nonumber.
\end{eqnarray}
Thus, we have
\begin{eqnarray}\label{uii}
\sum_{j}(PJ)_{i,j}\leq (P)_{i,1}+ \ldots + (P)_{i,n}.
\end{eqnarray}
Let us first consider \textit{case (i)} where the~$i$th row of~$P$ is sub-stochastic. From Eq.~\eqref{uii} and Assumption {\bf A2}, we have
\begin{eqnarray}\label{19}
\sum_{j}(PJ)_{i,j}\leq \sum_{j=1}^{n}(P)_{i,j} \leq\beta_2 < 1.
\end{eqnarray}
Therefore, the~$i$th row becomes sub-stochastic after a sub-stochastic update at row~$i$.
We now consider \textit{case~(ii)} where the~$i$th row of~$P$ is stochastic, i.e.~$\sum_{m=1}^{n}{(P)}_{i,m} = 1$. In this case,~$\sum_{j}(PJ)_{i,j}$ is a convex combination of the row sums of~$J$, which is strictly less than one if and only if~$J$ has at least one sub-stochastic row, say~${m}^{\prime}$, such that~${(P)}_{i,{m}^{\prime}} \neq 0$, i.e.
\begin{eqnarray}\label{20}
\sum_{j}(PJ)_{i,j}=\underbrace{(P)_{i,{m}^{\prime}}}_{\neq 0}\underbrace{\sum_{j}(J)_{{m}^{\prime},j}}_{<1}+\sum_{m\neq{m}^{\prime}}(P)_{i,m}\underbrace{\sum_{j}(J)_{m,j}}_{\leq1}.
\end{eqnarray}
So~$\sum_{j}(PJ)_{i,j} <1$ and the lemma follows.
\end{proof}
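Case (ii) of Lemma~\ref{lem2} can be verified on a small numerical instance (a hypothetical~$3\times3$ example, not from the paper's setup): a stochastic update at row~$1$ that places non-zero weight on an already sub-stochastic row of~$J$ makes row~$1$ of~$PJ$ sub-stochastic.

```python
import numpy as np

# J has one sub-stochastic row (row 0, sum 0.6); rows 1 and 2 are stochastic.
J = np.array([[0.3, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# A *stochastic* update at row 1 (row sum 1) that places weight 0.4
# on the sub-stochastic row 0 -- case (ii) of the lemma.
P = np.eye(3)
P[1] = [0.4, 0.6, 0.0]

row_sums = (P @ J).sum(axis=1)
# Row 1 sum: 0.4*0.6 + 0.6*1.0 = 0.84 < 1.
assert abs(row_sums[1] - 0.84) < 1e-12
assert row_sums[1] < 1
```

Had the update placed zero weight on row~$0$, the convex combination would have involved only stochastic row sums and row~$1$ would have remained stochastic.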
In the next lemma, we show that sub-stochasticity is preserved for each sub-stochastic row within a slice.
\begin{lem}\label{lem3}
With Assumptions {\bf{A0-A2}}, a sub-stochastic row, say~$i$, remains sub-stochastic throughout a slice.
\end{lem}
\begin{proof}
We use the notation of Lemma~\ref{lem2} on~$J$,~$P$, and Eq.~\eqref{16}, and rewrite Eq.~\eqref{ui} as
\begin{eqnarray}
\sum_{j}(PJ)_{i,j}&=&\sum_{j}\sum_{m=1}^n(P)_{i,m}(J)_{m,j},\nonumber\\
&=& \sum_{m \in \mathcal{I}} \Bigg((P)_{i,m}\sum_{j}(J)_{m,j} \Bigg) \nonumber\\
&+& \sum_{m \in \mathcal{U}} \Bigg((P)_{i,m}{\sum_{j}(J)_{m,j}} \Bigg).
\end{eqnarray}
Let us consider the general case after the first success, where there exist~$r \geq 1$ sub-stochastic rows in~$J$, i.e.~$\vert \mathcal{I} \vert=r$, and~$\vert \mathcal{U} \vert=n-r$. Without loss of generality, suppose the~$r$ sub-stochastic rows of~$J$ lie in the first~$r$ rows. We need to show that if the~$i$th row in~$J$ is sub-stochastic, i.e.~$i\leq r$, it remains sub-stochastic after a multiplication by either a stochastic or a sub-stochastic system matrix,~$P$. Rewrite the~$i$th row sum as
\begin{eqnarray}\label{27}
\sum_{j}(PJ)_{i,j}=
(P)_{i,1}\sum_{j}(J)_{1,j}+ \ldots + (P)_{i,r}\sum_{j}(J)_{r,j}\nonumber\\
+ (P)_{i,r+1}\underbrace{\sum_{j}(J)_{r+1,j}}_{=1}+ \ldots + (P)_{i,n}\underbrace{\sum_{j}(J)_{n,j}}_{=1}.
\end{eqnarray}
Thus,
\begin{eqnarray}
\sum_{j}(PJ)_{i,j}&=&(P)_{i,1}\sum_{j}(J)_{1,j}+ \ldots + (P)_{i,r}\sum_{j}(J)_{r,j}\nonumber\\
&+&(P)_{i,r+1}+\ldots+(P)_{i,n},
\end{eqnarray}
where we used the fact that any row of~$J$ in~$\mathcal{U}$ is stochastic.
\emph{Let us first consider the~$i$th row of~$P$ to be stochastic}:
\begin{eqnarray*}
(P)_{i,1}+\ldots+(P)_{i,r}+(P)_{i,r+1}+\ldots+(P)_{i,n} = 1.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
(P)_{i,r+1}+\ldots+(P)_{i,n} = 1 - \Big((P)_{i,1}+\ldots+(P)_{i,r}\Big).
\end{eqnarray*}
Therefore, from Eq.~\eqref{27} we can write
\begin{eqnarray}
\sum_{j}(PJ)_{i,j}&=&
(P)_{i,1}\sum_{j}(J)_{1,j}+ \ldots + (P)_{i,r}\sum_{j}(J)_{r,j}\nonumber\\
&+& 1- \Big((P)_{i,1}+\ldots+(P)_{i,r}\Big).
\end{eqnarray}
Finally,
\begin{eqnarray}\label{29}
0 \leq \sum_{j}(PJ)_{i,j} = & 1 +& (P)_{i,1}\Bigg(\sum_{j}(J)_{1,j} -1\Bigg)\nonumber\\
&\vdots& \nonumber\\
&+& (P)_{i,r}\Bigg(\sum_{j}(J)_{r,j} -1\Bigg),\\
\leq & 1 +& (P)_{i,i}\Bigg(\sum_{j}(J)_{i,j} -1\Bigg),\label{25}
\end{eqnarray}
because the first~$r$ rows in~$J$ are sub-stochastic leading to~$\sum_j(J)_{m,j}-1<0$, for any~$m=1,\ldots,i,\ldots r$, and since the~$i$th row in~$P$ is stochastic, by Assumption {\bf{A1}} we have
\[
0<\beta_1\leq (P)_{i,i}.
\]
Note that in Eq.~\eqref{29} the only way to lose sub-stochasticity is to have~$(P)_{i,m}=0$ for all~$m\leq r$. However, sub-stochasticity can be preserved by putting a non-zero weight on any row in~$\mathcal{I}$. Since this knowledge is not available in general, a sufficient condition to ensure this is~$0<\beta_1\leq(P)_{i,i}$. Thus, the~$i$th row sum remains strictly less than one (and greater than zero) after any stochastic update at the~$i$th row, as long as Assumption {\bf{A1}} is satisfied. Note that the lower bound on the~$i$th row sum stems from the non-negativity of the system matrices.
\emph{Now consider the~$i$th row of~$P$ to be sub-stochastic}. From {\bf{A2}}, we have
\begin{eqnarray*}
(P)_{i,1}+\ldots+(P)_{i,r}+(P)_{i,r+1}+\ldots+(P)_{i,n} \leq \beta_2 <1.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
(P)_{i,r+1}+\ldots+(P)_{i,n} \leq \beta_2 - \Big((P)_{i,1}+\ldots+(P)_{i,r}\Big).
\end{eqnarray*}
Thus, from Eq.~\eqref{27} we can write
\begin{eqnarray}
\sum_{j}(PJ)_{i,j}&\leq&
(P)_{i,1}\sum_{j}(J)_{1,j}+ \ldots + (P)_{i,r}\sum_{j}(J)_{r,j}\nonumber\\
&+& \beta_2- \Big((P)_{i,1}+\ldots+(P)_{i,r}\Big).
\end{eqnarray}
Finally,
\begin{eqnarray}\label{29-}
0 \leq \sum_{j}(PJ)_{i,j}\leq \beta_2 &+& (P)_{i,1}\Bigg(\sum_{j}(J)_{1,j} -1\Bigg)\nonumber\\
&\vdots& \nonumber\\
&+& (P)_{i,r}\Bigg(\sum_{j}(J)_{r,j} -1\Bigg),\nonumber\\
\leq &\beta_2 & <1,
\end{eqnarray}
where again we used the fact that~$\sum_j(J)_{m,j}-1<0$, for~$m=1,\ldots,i,\ldots, r$. Eq.~\eqref{29-} shows that in case of a sub-stochastic~$i$th row in~$P_{k+1}$, this row remains sub-stochastic in~$J_{k+1}$ as long as Assumption {\bf{A2}} is satisfied; no conditions on the individual weights are required. Note the strict inequality, i.e. if~$(P)_{i,m}\neq0$ for any~$m=1,\ldots,r$, then
\begin{eqnarray*}
\sum_{j}(PJ)_{i,j} < \beta_2.
\end{eqnarray*}
This lemma establishes that under the Assumptions {\bf A0-A2}, sub-stochasticity is always preserved.
\end{proof}
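The preservation argument of Lemma~\ref{lem3} can also be checked numerically (again on a hypothetical~$3\times3$ instance): a stochastic update at a sub-stochastic row with self-weight~$\beta_1$, mixing the rest only with stochastic rows (the worst case), still keeps the row sum below one and within the bound of Eq.~\eqref{25}.

```python
import numpy as np

beta1 = 0.2
# Row 0 of J is sub-stochastic with row sum 0.6; rows 1 and 2 are stochastic.
J = np.array([[0.3, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Stochastic update at row 0 keeping the self-weight at beta1 (Assumption A1)
# and mixing the rest only with the stochastic rows 1 and 2 (worst case).
P = np.eye(3)
P[0] = [beta1, 0.5, 0.3]

s = (P @ J).sum(axis=1)[0]
bound = 1 + beta1 * (J.sum(axis=1)[0] - 1)   # the bound with (P)_{0,0} = beta1
assert s < 1              # sub-stochasticity is preserved
assert s <= bound + 1e-12
```

Setting~$\beta_1=0$ in this sketch would give a row sum of exactly one, which is precisely the degenerate case that Assumption~{\bf A1} rules out.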
The results so far explicitly describe the behavior of the sub-stochastic rows in the slices under the regularity conditions in Assumptions {\bf A0-A2}. The next results characterize the infinity norm bound on the slices. To this end, let us define~${\beta}_4(j)$ as the \emph{maximum row sum over the sub-stochastic rows} of the product of all system matrices before~$P_j$ in a given slice,~$M$. Mathematically,
\begin{eqnarray}\label{beta4}
{\beta}_4(j)= \max\limits_{m \in \mathcal{I}_{j-1}} \{{v}_m\},
\end{eqnarray}
where~$v_m$ is the~$m$th element of the following column vector
\begin{eqnarray*}
\mathbf{v}_{j-1}= (J_{j-1}){\textbf{1}_n}=(P_{j-1} \ldots P_{3}P_{2}P_{1}){\textbf{1}_n},
\end{eqnarray*}
and~${\textbf{1}}_n$ is the column vector of~$n$ ones.
It can be inferred from our discussion so far that a sub-stochastic update at row~$i$ is sufficient but not necessary for the~$i$th row to be sub-stochastic. In the following lemma, we consider the case where no sub-stochastic update occurs at row~$i$ throughout a slice, and provide an upper bound for the~$i$th row sum at the end of a slice.
\begin{lem}\label{lm4}
Assume there is no sub-stochastic update at the~$i$-th row within a given slice,~$M$. The~$i$-th row sum of this slice is upper bounded by
\begin{eqnarray}
1+{\beta_1}^{\vert {M}\vert - {h}_{i} +1}({\beta}_4({h}_{i})-1),
\end{eqnarray}
where the first success at row~$i$ occurs in the~${h}_{i}$-th update of this slice.
\end{lem}
\begin{proof}
Eq.~\eqref{29} expresses the~$i$th row sum after a stochastic update at row~$i$. Clearly, before the first success at index~$h_i$,
\begin{eqnarray*}
\sum_{j}(J_k)_{i,j}=1,\qquad \forall k < {h}_{i}.
\end{eqnarray*}
In order to find the maximum possible row sum for the~$i$th row at the end of a slice, we should find a scenario that maximizes the row sum after the first success at index~$h_i$ and keeps maximizing it at each subsequent update. Let us consider Eq.~\eqref{29} after the first success at index~$h_i$. Since no sub-stochastic update is allowed at row~$i$ by the lemma's statement, the first success occurs via a stochastic update at the~$i$th row, and Assumption~{\bf A1} is applicable. Since any non-zero~$(P)_{i,m \in \mathcal{I}}$ decreases the row sum, the minimum number of such weights maximizes the right hand side (RHS) of Eq.~\eqref{29}. Suppose~$(P_{{h}_{i}})_{i,{r}^{\prime}}$ is the only non-zero element among all~$(P_{{h}_{i}})_{i,j \in \mathcal{I}}$. In this case, Eq.~\eqref{29} reduces to the following
\begin{eqnarray}\label{30}
\sum_{j}(P_{{h}_{i}} J_{{{h}_{i}-1}})_{i,j}= 1 - (P_{{h}_{i}})_{i,{r}^{\prime}}\Bigg(1-\sum_{j}(J_{{{h}_{i}-1}})_{{r}^{\prime},j}\Bigg),
\end{eqnarray}
in which~${r}^{\prime} \in \mathcal{I}_{{h}_{i}-1}$. Also note that~${r}^{\prime} \neq i$, since row~$i$ is stochastic before the time instant~$h_i$. In order to maximize the RHS of Eq.~\eqref{30},~$(P_{{h}_{i}})_{i,{r}^{\prime}}$ should be minimized, and~$\sum_{j}{(J_{{{h}_{i}-1}})}_{{r}^{\prime},j}$ should be maximized. From Eq.~\eqref{beta4}, the maximum value of~$\sum_{j}{(J_{{{h}_{i}-1}})}_{{r}^{\prime},j}$ before the first success is~${\beta}_4({h}_{i})$. Thus, after the~${h}_{i}$th update, where row~$i$ becomes sub-stochastic for the first time, we can write
\begin{eqnarray}\label{31}
\sum_{j}(P_{{h}_{i}} J_{{{h}_{i}-1}})_{i,j}\leq 1 - \beta_1 \Bigg(1-{\beta}_4({h}_{i})\Bigg).
\end{eqnarray}
After this update,~$J_{{h}_{i}}=P_{{h}_{i}}\ldots P_2 P_1$, and
\begin{eqnarray}
{\beta}_4({h}_{i}) \leq {\beta}_4({h}_{i}+1) \leq 1 - \beta_1 \Bigg(1-{\beta}_4({h}_{i})\Bigg),
\end{eqnarray}
where~${\beta}_4({h}_{i})$ is the~$r^{\prime}$th row sum in~$J_{{h_i}-1}$, and~${\beta}_4({h}_{i}+1)$ is the~$i$th row sum in~$J_{h_i}$.
Under this scenario, after the first success, the~$i$th row has the maximum row sum over all rows of~$J_{{h}_{i}}$, and in order to increase this row sum at the next update, the~$i$th row has to update only with itself. Note that after the success at index~$h_i$, row~$i$ becomes sub-stochastic, and~${r}^{\prime}=i$, for any subsequent update until the end of a slice. After the next update,~$P_{{h}_{i}+1}$, using the same argument we can write
\begin{eqnarray}\label{32}
\sum_{j}(P_{{h}_{i}+1} J_{{h}_{i}})_{i,j} & = & 1 - (P_{{h}_{i}+1})_{i,i}\Bigg(1-\sum_{j}(J_{{h}_{i}})_{i,j}\Bigg),\nonumber\\
&\leq& 1 - \beta_1 \Bigg(1- (1 - \beta_1 (1-{\beta}_4({h}_{i}))\Bigg)\nonumber\\
&=& 1 - {\beta_1}^{2} \Bigg(1-{\beta}_4({h}_{i})\Bigg).
\end{eqnarray}
If row~$i$ keeps updating only with itself, then after the remaining~${\vert {M}\vert - {h}_{i}}$ such updates, at the end of the slice we have
\begin{eqnarray}\label{33}
\sum_{j}({P}_{\vert {M}\vert} J_{\vert {M}\vert -1 })_{i,j} \leq 1+{\beta_1}^{\vert {M}\vert - {h}_{i}+1}({\beta}_4({h}_{i}) -1),
\end{eqnarray}
and the lemma follows.
\end{proof}
In the following lemma, we consider the general case where sub-stochastic updates are also allowed at row~$i$ and provide an upper bound for the~$i$th row sum at the end of a slice.
\begin{lem}\label{lm5}
Assume there is at least one sub-stochastic update at the~$i$-th row within a given slice,~$M$. The~$i$-th row sum of this slice is upper bounded by
\begin{eqnarray}
1+{\beta_1}^{\vert {M}\vert - {g}_{i}}({\beta}_2 -1),
\end{eqnarray}
where the last sub-stochastic update at row~$i$ occurs in the~${g}_{i}$-th update of a slice.
\end{lem}
\begin{proof}
As shown in Eq.~\eqref{29-} and by Assumption {\bf A2}, any sub-stochastic update at row~$i$ imposes the upper bound of~$\beta_2$ on the~$i$th row sum.
Thus, after the \textit{last} sub-stochastic update at row~$i$ we have
\begin{eqnarray*}
\sum_{j}(P_{{g}_{i}} J_{{g}_{i}-1})_{i,j}\leq \beta_2 < 1.
\end{eqnarray*}
After~$P_{{g}_{i}}$, there is no sub-stochastic update at row~$i$, and by Assumption {\bf{A1}}, the~$i$th self-weight will be non-zero until the end of the slice. Following the same argument as in Lemma~\ref{lm4}, the upper bound on the~$i$th row sum is maximized after each update if the~$i$th row does not update with any sub-stochastic row other than itself. For any update after~$P_{{g}_{i}}$, Eq.~\eqref{bnd1} holds and we have
\begin{eqnarray}
{\mathcal{N}_{i}(k)}\cap\mathcal{I}_k = \emptyset,\qquad \forall k > {g}_{i}.
\end{eqnarray}
After the update~$P_{{g}_{i}+1}$, we have
\begin{eqnarray}\label{34}
\sum_{j}(P_{{g}_{i}+1} J_{{g}_{i}})_{i,j}\leq 1 - \beta_1 \Bigg(1-{\beta}_2\Bigg),
\end{eqnarray}
and at the end of a slice, we have
\begin{eqnarray}\label{35}
\sum_{j}({P}_{\vert {M}\vert} J_{\vert {M}\vert -1 })_{i,j} \leq 1+{\beta_1}^{\vert {M}\vert - {g}_{i}}({\beta}_2 -1),
\end{eqnarray}
and the lemma follows.
\end{proof}
In the previous two lemmas, we provided an upper bound on each row sum for two cases: when all updates are stochastic \emph{and} when sub-stochastic updates are also allowed. The following lemma combines these bounds and relates them to the infinity norm bound of a slice.
\begin{lem}
For a given slice,~${{{M}}}$,
\begin{eqnarray}\label{bound1}
\|M\|_\infty\leq\max\limits_{i}\{1+{\beta_1}^{\vert {M}\vert - l_i}({\beta} -1)\},
\end{eqnarray}
where
\begin{eqnarray*}
l_i &=& h_i - 1,~\beta=\beta_4(h_i),~\mbox{if all updates at row~$i$ are stochastic},\\
l_i &=& g_i,~\beta=\beta_2,~\mbox{if row~$i$ receives a sub-stochastic update}.
\end{eqnarray*}
\end{lem}
\noindent The next lemma studies the worst case scenario for the infinity norm of a slice, which provides an upper bound for Eq.~\eqref{bound1}.
\begin{lem}\label{lm7}
With Assumptions {\bf{A0-A2}}, for the~$j$th slice we have
\begin{eqnarray}\label{41}
{{\Vert M_j \Vert}_{\infty}} \leq 1-{\alpha}_j <1, \qquad j\geq0,
\end{eqnarray}
where
\begin{eqnarray}\label{42}
{\alpha}_j = f({{\vert M_j \vert}},\beta_1,\beta_2)={\beta_1}^{{{\vert M_j \vert}} - 1}({1-\beta}_2).
\end{eqnarray}
\end{lem}
\begin{proof}
In order to find the maximum upper bound on the infinity norm of a slice, we consider a \textit{worst case scenario}, in which a row sum incurs the largest increase throughout the slice. To do so, we examine the maximum possible upper bound on the~$i$th row sum for the two cases discussed in Lemmas~\ref{lm4} and~\ref{lm5} separately.
Consider first the case of no sub-stochastic update at the~$i$th row. We should find a scenario that maximizes the RHS of Eq.~\eqref{33}. In addition, we need to make sure that such a scenario is feasible, i.e. all other rows become sub-stochastic before the slice is terminated. Since there are no sub-stochastic updates at row~$i$, a slice cannot be initiated by an update in row~$i$, i.e.~$h_i \geq 2$. At the initiation of a slice, one row other than~$i$ becomes sub-stochastic, and the upper bound imposed on this row is~$\beta_2$ by Assumption~{\bf{A2}}, hence~${\beta}_4(h_i)={\beta}_4(2)=\beta_2$. Therefore, following the discussion in Lemma~\ref{lm4},
\begin{eqnarray}\label{43}
1+{\beta_1}^{\vert {M_j}\vert - 1}({\beta}_2-1),
\end{eqnarray}
provides the largest upper bound on the~$i$th row sum of~$M_j$. Note that this bound is feasible under the following scenario. After row~$i$ becomes sub-stochastic at~$h_i=2$, we let the next~$n-2$ updates turn the other stochastic rows sub-stochastic, each updating \textit{only} with the sub-stochastic row with the largest row sum. Thus, the largest row sum keeps increasing in the same manner as discussed in Lemma~\ref{lm4} within the next~$n-2$ updates. At the~$(n+1)$th update, row~$i$ again updates with a row that has the maximum row sum in~$J_n$, and then keeps updating only with itself until the slice is terminated. The aforementioned scenario is equivalent to the one where the first success at row~$i$ occurs at~$h_i=n$, all other rows become sub-stochastic within the first~$n-1$ updates, and
\begin{equation}
\beta_4(h_i=n)= 1 + {\beta_1}^{n-2}(\beta_2-1).
\end{equation}
Now consider sub-stochastic updates at row~$i$. The RHS of Eq.~\eqref{35} is maximized if~$g_i$ is minimized. In this case, the minimum value for~$g_i$ is one, which corresponds to a scenario where a sub-stochastic update at row~$i$ initiates a slice and no other sub-stochastic update occurs at this row. Using the same argument as before, all other rows become sub-stochastic within the next~$n-1$ updates and the largest upper bound on the~$i$th row in this case is the same as the one given in Eq.~\eqref{43}.
\end{proof}
Finally, note that for a given slice,~$M$,
\begin{eqnarray}
\|M\|_\infty\leq 1+ {\beta_1}^{\vert {{{M}}} \vert-1}({\beta_2}-1)
\end{eqnarray}
is the \textit{largest} upper bound on the infinity norm of a slice.
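The slice bound~$1+{\beta_1}^{\vert M\vert-1}(\beta_2-1)$ can be checked empirically. The sketch below (hypothetical parameters and random updates, not the worst-case construction of Lemma~\ref{lm7}) builds a slice of length~$n$ in which row~$i$ receives a sub-stochastic update at step~$i$, and verifies that the slice's infinity norm stays below the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta1, beta2 = 4, 0.3, 0.7

def sub_update(i):
    """Sub-stochastic update at row i: the row sums to beta2 (Assumption A2)."""
    P = np.eye(n)
    w = rng.random(n)
    P[i] = beta2 * w / w.sum()
    return P

# A slice of length n: row i becomes sub-stochastic at update i.
M = np.eye(n)
for i in range(n):
    M = sub_update(i) @ M

inf_norm = np.abs(M).sum(axis=1).max()
bound = 1 + beta1 ** (n - 1) * (beta2 - 1)
assert inf_norm < 1               # the slice is a contraction
assert inf_norm <= bound + 1e-12  # consistent with the worst-case bound
```

In this direct-update scenario every row sum is in fact at most~$\beta_2$, well inside the worst-case bound; the bound becomes tight only under the adversarial scenario constructed in the proof of Lemma~\ref{lm7}.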
\section{Stability of discrete-time systems}\label{stability}
In this section, we study the stability of discrete-time, LTV dynamics with (sub-) stochastic system matrices. We start with the following definitions:
\begin{definition}
The system represented in Eq.~\eqref{eq2} is \textit{asymptotically stable} (or convergent) if, for any~${\bf{x}}(0)$, the sequence~$\{{\bf{x}}(k)\}$ is bounded and the limit
\begin{equation}
\lim_{k \rightarrow \infty}{\bf{x}}(k) \nonumber
\end{equation}
exists.
\end{definition}
\begin{definition}
The system represented in Eq.~\eqref{eq2} is \textit{absolutely asymptotically stable} (or zero-convergent) if for any~${\bf{x}}(0)$,
\begin{equation}
\lim_{k \rightarrow \infty}{\bf{x}}(k)=0.\nonumber
\end{equation}
\end{definition}
Recall that we are interested in the asymptotic stability of Eq.~\eqref{eq2}, such that the steady state forgets the initial conditions and is a function of the inputs. A sufficient condition towards this aim is the absolute asymptotic stability of the following:
\begin{eqnarray}\label{eqq15}
{\bf{x}}(k+1)&=&{{P}_k}{\bf{x}}(k), \qquad k\geq0,\\
&=&{P}_{k} {P}_{k-1} \ldots {{P}_0}{\bf{x}}(0),\nonumber
\end{eqnarray}
for any~${\bf{x}}(0)$, which is equivalent to having
\begin{equation}
\lim_{k \rightarrow \infty} {P}_{k} {P}_{k-1} \ldots {{P}_0}= \mathbf{0}_{n\times n},
\end{equation}
where the subscript below~$\mathbf{0}$ denotes its dimensions. As depicted in Fig.~\ref{f0}, we can take advantage of the slice representation and study the following dynamics:
\begin{eqnarray}\label{eqq26}\label{50}
{\bf{y}}(t+1)={{M}}_{t}{\bf{y}}(t), \qquad t \geq 0,
\end{eqnarray}
instead of Eq.~\eqref{eqq15}, where
\begin{eqnarray*}
{\bf{y}}(0)&=&{\bf{x}}(0),\\
{\bf{y}}(t)&=& {\bf{x}}\left(t_1\right), \qquad t\geq1,~t_1=\sum\limits_{i=0}^{t-1}{\vert M_i \vert}.
\end{eqnarray*}
Thus, for absolute asymptotic stability of Eq.~\eqref{eqq26}, for any~${\bf{y}}(0)$, we require
\begin{eqnarray}\label{49}
\lim_{t \rightarrow \infty}{\bf{y}}(t+1)&=&\lim_{t \rightarrow \infty} {{M}}_{t}{\bf{y}}(t),\\
&=& \lim_{t \rightarrow \infty}{{M}}_{t}{{M}}_{t-1} \ldots {{M}}_{0}{\bf{y}}(0),\nonumber\\
&=& \mathbf{0}_n.\nonumber
\end{eqnarray}
We provide our main result in the following theorem.
\begin{thm}\label{thm0}
With Assumptions {\bf{A0-A2}}, the LTV system in Eq.~\eqref{eqq26} is absolutely asymptotically stable if any one of the following holds:
\begin{enumerate}[(i)]
\item Each slice has a bounded length, i.e.
\begin{eqnarray}\label{51}
\vert M_j \vert \leq N < {\infty} , \qquad \forall j,~N \in \mathbb{N};
\end{eqnarray}
\item There exists a set,~$J_1$, consisting of infinitely many slices such that
\begin{eqnarray}
\vert M_{j} \vert \leq N_1 < {\infty}, \qquad \forall M_j\in J_1,\\
\vert M_{j} \vert < {\infty}, \qquad \forall M_j \notin J_1;
\end{eqnarray}
\item There exists a set,~$J_2$, of slices such that, for every~$i \in \mathbb{N}$, there exists~$M_j \in J_2$ with
\begin{eqnarray*}
\vert M_j \vert \leq \frac{1}{{\rm ln}\left({\beta_1}\right)}{\rm ln}\left(\frac{1 - e^{(-\gamma_2i^{-\gamma_1})}}{1-\beta_2}\right)+1,
\end{eqnarray*}
and~$|M_j|<\infty$ for all~$M_j\notin J_2$.
\end{enumerate}
\end{thm}
\begin{proof}
Using the sub-multiplicative norm property, Eq.~\eqref{50} leads to
\begin{align}\label{55}
{\Vert{\bf{y}}(t+1)\Vert}_\infty &\leq {\Vert {{M}}_{t} \Vert}_{\infty} \ldots {\Vert {{M}}_{0} \Vert}_{\infty} {\Vert {\bf{y}}(0) \Vert}_{\infty}.
\end{align}
\emph{Case (i)}: From Eqs.~\eqref{41},~\eqref{42} and~\eqref{51}, we have
\begin{eqnarray}
{{\Vert M_j \Vert}_{\infty}} \leq \delta < 1,\qquad\forall j,
\end{eqnarray}
where $\delta=1+{\beta_1}^{{N} - 1}({\beta}_2-1) <1$, and this case follows.
\textit{Case (ii)}: We first note that the infinity norm of each slice has a trivial upper bound of~$1$. From Eq.~\eqref{55}, we have
\begin{align}
\lim_{t \rightarrow \infty}{\Vert {\bf{y}}(t+1) \Vert}_{\infty} &\leq \lim_{t \rightarrow \infty}\prod\limits_{j \in J_1} {\Vert{M}_{j}\Vert}_\infty \prod\limits_{j \notin J_1}{\Vert{M}_{j}\Vert}_\infty{\Vert {\bf{y}}(0) \Vert}_{\infty},\nonumber\\
&\leq \lim_{t \rightarrow \infty} \prod\limits_{j \in J_1} {\Vert{M}_{j}\Vert}_\infty{\Vert {\bf{y}}(0) \Vert}_{\infty}.
\end{align}
Similar to case (i), this case follows by defining
\begin{eqnarray*}
\|M_j\|_\infty\leq \delta_{1}=1+{\beta_1}^{{N_1} - 1}({\beta}_2-1) <1.
\end{eqnarray*}
\textit{Case (iii)}: With~$\alpha_j$ in Eq.~\eqref{42}, Eq.~\eqref{55} leads to
\begin{eqnarray}\label{62}
\lim_{t \rightarrow \infty}{\Vert {\bf{y}}(t+1) \Vert}_{\infty} \leq \lim_{t \rightarrow \infty} \prod_{j=0}^t(1-\alpha_j) {\Vert {\bf{y}}(0) \Vert}_{\infty}.
\end{eqnarray}
Consider the asymptotic convergence of the infinite product of the sequence~$1-\alpha_j$ to~$0$. We have
\begin{eqnarray}\label{lne1}
\lim_{t \rightarrow \infty}\prod_{j=1}^t(1-\alpha_j)=0,\mbox{ or } \lim_{t \rightarrow \infty}\sum_{j=1}^t(-{\rm ln}(1-\alpha_j))=\infty.
\end{eqnarray}
Now note that
\[
\sum_{i=1}^{\infty}\gamma_2i^{-\gamma_1} = \infty, \qquad \mbox{for }0\leq\gamma_1\leq 1,0<\gamma_2,
\]
because~$\frac{1}{i^{\gamma_1}}$ sums to infinity for all values of~$\gamma_1$ in~$[0,1]$, and multiplying by a positive number,~$\gamma_2$, does not change the infinite sum. It can be verified that Eq.~\eqref{lne1} holds when
\begin{eqnarray*}
-{\rm ln}(1-\alpha_j) &\geq& \gamma_2i^{-\gamma_1},
\end{eqnarray*}
or, equivalently,
\begin{eqnarray*}
1-\alpha_j &\leq& e^{(-\gamma_2i^{-\gamma_1})},
\end{eqnarray*}
for some~$\gamma_1\in[0,1]$, and~$0<\gamma_2$.
Therefore, if for every~$i \in \mathbb{N}$ there exists a slice,~$M_j$, in the set~$J_2$ such that
\begin{eqnarray}\label{64-0}
\|M_j\|_\infty\leq 1-\alpha_j \leq e^{(-\gamma_2i^{-\gamma_1})},
\end{eqnarray}
we get
\begin{eqnarray}\label{57}
\lim_{t \rightarrow \infty}\prod_{j=0}^t(1-\alpha_j) = \underbrace{\prod_{j\in J_2}(1-\alpha_j)}_{=0} \prod_{j\notin J_2}(1-\alpha_j)=0,
\end{eqnarray}
and absolute asymptotic stability follows. By substituting~$\alpha_j$ from Eq.~\eqref{42} in the left hand side of Eq.~\eqref{64-0}, we get
\begin{eqnarray}\nonumber
1 - {\beta_1}^{{{\vert M_j \vert}} - 1}({1-\beta}_2) &\leq& e^{(-\gamma_2i^{-\gamma_1})},
\end{eqnarray}
which leads to
\begin{eqnarray}\label{65}
{\rm ln}\left(\frac{1 - e^{(-\gamma_2i^{-\gamma_1})}}{1-\beta_2}\right) &\leq& (\vert M_j \vert - 1){\rm ln}\beta_1.\label{58}
\end{eqnarray}
Since~$\beta_1< 1$,~${\rm ln}\,\beta_1$ is negative, and dividing both sides of Eq.~\eqref{58} by a negative number reverses the inequality, i.e.
\begin{eqnarray}\label{c3}
\vert M_j \vert \leq \frac{1}{{\rm ln}\left({\beta_1}\right)}{\rm ln}\left(\frac{1 - e^{(-\gamma_2i^{-\gamma_1})}}{1-\beta_2}\right)+1.
\end{eqnarray}
Now note that the first~${\rm ln}$ is negative; for the bound to remain meaningful, the second~${\rm ln}$ must also be negative, which requires
\begin{align*}
1 - e^{(-\gamma_2i^{-\gamma_1})} &< 1-\beta_2,\\ \mbox{ or, } \beta_2 &< e^{(-\gamma_2i^{-\gamma_1})}.
\end{align*}
It can be verified that the above inequality holds for any value of~$\beta_2\in[0,1)$ by choosing an appropriate~$\gamma_2>0$.
To conclude, we note that if the slices are such that there exists a slice whose length satisfies Eq.~\eqref{c3} for every~$i\in\mathbb{N}$, not necessarily in order, then the infinite product of such slices goes to the zero matrix. Finally, from Eq.~\eqref{62}
\begin{eqnarray}
\lim_{t \rightarrow \infty}{\Vert {\bf{y}}(t+1)\Vert}_{\infty}=0,
\end{eqnarray}
which completes the proof in this case.
\end{proof}
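A simulation sketch of case (i) (a hypothetical construction: random slices of bounded length~$n$ in which every row receives one sub-stochastic update; the parameters are illustrative) shows the state driven by a product of such slices vanishing for an arbitrary initial condition:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta2 = 4, 0.7

def random_slice():
    """Slice of bounded length n: row i receives a sub-stochastic
    update (row sum beta2) at step i, so every row ends sub-stochastic."""
    M = np.eye(n)
    for i in range(n):
        P = np.eye(n)
        w = rng.random(n)
        P[i] = beta2 * w / w.sum()
        M = P @ M
    return M

y = rng.random(n)            # arbitrary initial state y(0)
for _ in range(100):         # y(t+1) = M_t y(t)
    y = random_slice() @ y

# Each slice has infinity norm <= beta2 < 1, so y(t) -> 0 geometrically.
assert np.abs(y).max() < beta2 ** 100
```

The geometric decay rate here is governed by the per-slice norm bound, matching the role of~$\delta<1$ in case (i) of the proof.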
In the following, we shed some light on case (iii) and Eq.~\eqref{c3}. First, note that Eq.~\eqref{c3} does not require the slice indices to be~$i$. In other words, the slice lengths need not grow as~$i$ increases, and slices satisfying Eq.~\eqref{c3} may appear in any order. For the next argument, note that the RHS of Eq.~\eqref{c3} goes to~$+\infty$ as~$i\rightarrow\infty$, because~$e^{(-\gamma_2 i^{-\gamma_1})}$ goes to~$1$. A longer slice length can be related to slower information propagation in the network. Eq.~\eqref{c3} further shows that LTV stability does not require bounded slice lengths (as in cases (i) and (ii)); the slice lengths can be unbounded as long as a well-behaved sub-sequence of slices exists (in any order) whose lengths do not increase faster than the upper bound in Eq.~\eqref{c3}.
Next, note that~$\gamma_1=1$ is a valid choice, which corresponds to the fastest growing exponential,~$e^{(-\gamma_2 i^{-1})}$, whose infinite product is~$0$.
This means that only a sub-sequence of slices needs to behave such that their norms are no worse than~${e^{-\gamma_2i^{-1}}}$, in any order. We may write this requirement as
\begin{eqnarray*}
\mathbb{P}\left(M_j \mbox{ exists for some~$j$ such that }\|M_j\|_\infty\leq {e^{-\gamma_2i^{-1}}}\right)=1,
\end{eqnarray*}
$\forall i\geq1$ and~$0<\gamma_2$, where~$\mathbb{P}$ denotes the probability of the corresponding event. On the other hand, by choosing~$\gamma_1=0$, the upper bound on the slice length in \textit{case (iii)} becomes a constant. Hence, the first two cases are in fact special cases of this bound if we set~$N$ and~$N_1$ to
\begin{eqnarray*}
\frac{1}{{\rm ln}\left({\beta_1}\right)}{\rm ln}\left(\frac{1 - e^{(-\gamma_2)}}{1-\beta_2}\right)+1.
\end{eqnarray*}
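The divergence argument behind case (iii) can be sanity-checked numerically ($\gamma_2$ below is an arbitrary illustrative value): the logarithm of~$\prod_{i=1}^{t} e^{-\gamma_2/i}$ equals~$-\gamma_2 H_t$, where~$H_t$ is the~$t$th harmonic number, which drifts to~$-\infty$ as the harmonic series diverges.

```python
import numpy as np

gamma2 = 0.5
t = 10**6
# log of prod_{i=1}^{t} exp(-gamma2 / i) = -gamma2 * H_t,
# where H_t ~ ln(t) + 0.5772 is the t-th harmonic number.
log_prod = -gamma2 * np.sum(1.0 / np.arange(1, t + 1))
assert log_prod < -7     # H_t ~ 14.39 for t = 1e6, so log_prod ~ -7.2
assert log_prod > -8
```

Working in the log domain avoids numerical underflow of the product itself while making the slow (logarithmic) divergence visible.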
\section{Distributed Dynamic Fusion}\label{app}
We now show the relevance of the results in Sections~\ref{IP} and~\ref{stability} to Distributed Dynamic Fusion (DDF), which we briefly introduced in Sections~\ref{intro} and~\ref{PF}. In order to explain the DDF, let us first consider LTI fusion of the form:~$\mathbf{x}_{k+1} = P\mathbf{x}_k+B\mathbf{u}_k$, where~$\mathbf{x}_k\in\mathbb{R}^n$ is the vector of~$n$ sensor states and~$\mathbf{u}_k\in\mathbb{R}^s$ is the vector of~$s$ anchor states. The matrix~$P$ collects the sensor-to-sensor coefficients, while the matrix~$B$ collects the sensor-to-anchor coefficients. It is clear that if the spectral radius of~$P$ is subunit, the sensor states,~$\mathbf{x}_k$, forget the initial states,~$\mathbf{x}_0$, and converge to the convolution of the system's impulse response with the anchor states,~$\mathbf{u}_k$; for a constant input,~$\mathbf{u}$, the steady state is~$(I-P)^{-1}B\mathbf{u}$. When the system matrices are designed such that the concatenated matrix,~$[P~B]$, is row-stochastic, a subunit spectral radius,~$\rho(P)<1$, can be guaranteed if each sensor has a path from at least one anchor. With these conditions, the constant system matrices,~$P_k=P,~B_k=B$, at each~$k$, ensure that the information travels from the anchors to each sensor infinitely often and in exactly the same fashion at each~$k$.
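The LTI convergence described above can be sketched as follows (a hypothetical two-sensor, one-anchor network with illustrative weights): with~$[P~B]$ row-stochastic and~$\rho(P)<1$, every sensor state converges to the anchor value regardless of the initial states.

```python
import numpy as np

# Two sensors, one anchor: [P B] is row-stochastic and rho(P) = 0.8 < 1.
P = np.array([[0.5, 0.3],
              [0.2, 0.6]])
B = np.array([[0.2],
              [0.2]])
u = np.array([5.0])                    # constant anchor state

x = np.array([3.0, -1.0])              # arbitrary initial sensor states
for _ in range(500):
    x = P @ x + B @ u                  # x(k+1) = P x(k) + B u

# Steady state: (I - P)^{-1} B u, independent of x(0).
x_star = np.linalg.solve(np.eye(2) - P, B @ u)
assert np.allclose(x, x_star)
assert np.allclose(x, 5.0)             # each sensor learns the anchor value
```

The DDF setting replaces the constant~$P,B$ with time-varying~$P_k,B_k$; the slice construction then plays the role of the fixed anchor-to-sensor paths assumed here.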
In the context of DDF, the system matrices,~$P_k$ and~$B_k$, are a function of the network configuration, and the LTI information flow cannot be guaranteed for arbitrary~$P_k,B_k$. In fact, there can be situations when every (mobile) sensor has no neighbors, resulting in~$\mathbf{x}_{k+1}=\mathbf{x}_k$, i.e.~$P_k=I_n$ and $B_k=\mathbf{0}_{n\times s}$. The construct of slices ensures that the aforementioned information flow (each sensor having a path from at least one anchor, possibly in arbitrarily different ways) is guaranteed over each slice. In this sense, a \emph{success}, regarded earlier (in Section~\ref{IP}) as having a sub-stochastic row, say~$i$, in some arbitrary~$P_k$ in an arbitrary slice,~$M_j$, is equivalent to saying that sensor~$i$ is now \emph{informed}, i.e. sensor~$i$'s current state is now influenced by the anchor(s) in the~$j$th slice. Having~$n$ such (distinct) successes means that in the~$j$th slice, each sensor is informed. Having infinitely many such slices means that each sensor becomes informed infinitely often; compare this with the LTI case, where all of the~$n$ sensors become informed at each~$k$ and this process repeats infinitely often over~$k$.
The results in Sections~\ref{IP} and~\ref{stability} can also be cast in the context of the DDF discussion above. Lemma~\ref{lem1} states that for each sensor to become informed in every slice, one sensor has to directly receive information from an anchor. Lemma~\ref{lem2} shows how an \emph{uninformed} sensor, say~$i$, may become informed in each slice: either via an anchor, i.e. a sub-stochastic update at row~$i$, or via an (already) informed sensor, i.e. a stochastic update at row~$i$ but with a non-zero weight on any informed sensor. Subsequently, Lemma~\ref{lem3} shows that a sufficient condition for an informed sensor to remain informed in each slice is to assign a non-zero self-weight, i.e. Assumption~{\bf A1}; this makes sense as an informed sensor may become uninformed by updating only with uninformed sensors in its neighborhood.
Lemmas~\ref{lm4}--\ref{lm7} further quantify the rate at which each slice is completed, i.e. the rate at which each sensor becomes informed in any given slice. The upper bound given in Eq.~\eqref{41} is the worst case for a slice, as this case may occur under an arbitrary network configuration. Drawing an analogy with the LTI scenario, the slices have to be completed infinitely often, and thus Theorem~\ref{thm0} considers all of the slices and provides different ways to guarantee an infinite number of slices. We emphasize that information diffusion in the network can actually deteriorate (not necessarily monotonically), and Theorem~\ref{thm0} further provides the ``rate'' at which well-behaved network configurations must occur. Since the discussion so far mostly caters to ``forgetting the sensor initial conditions'', i.e. asymptotic stability, we now describe the steady state of the DDF for a particular application of interest.
\subsection{Dynamic Leader-Follower}
In this setup, the goal for the entire sensor network is to converge to the state of one anchor (multiple anchors and convergence to their linear-convex combination may also be considered, see e.g.~\cite{khan2009distributed,khan2010diland}). Let~${\bf{1}}_{n}$ be the~$n \times 1$ column vector of ones, and~$u$ be the scalar state of the (single) anchor, which is known and does not change over time. The leader-follower algorithm requires~$\lim_{k \rightarrow \infty}{\bf{x}}(k) = {\bf{1}}_{n}{{u}}$, where $\mathbf{x}_k\in\mathbb{R}^n$ collects the states of all of the sensors. Since this is a dynamic algorithm with mobile sensors, a sensor may not find any neighbor at many time instants. When a sensor does find neighbors, an anchor may not be one of them. Furthermore, if a sensor has the anchor as a neighbor at some time, this anchor may not be a neighbor going forward because the nodes are mobile. We now use the results from Section~\ref{stability} to provide the asymptotic stability analysis of the dynamic leader-follower algorithm.
\begin{thm}
Consider a network of~$n$ sensors and~$s=1$ anchor with the following update:
\begin{equation}\label{71}
{\bf{x}}(k+1)={{P}}_k{\bf{x}}(k)+{{B}}_k{{u}},\qquad k \geq 0,
\end{equation}
in which~$u$ is the state of the anchor. With Assumptions~{\bf{A0}}--{\bf{A2}}, in addition to the following
\begin{equation}\label{lf}
\sum_j \left[({P}_{k})_{i,j}+({B}_{k})_{i,j}\right]=1, \qquad \forall\, i,k,
\end{equation}
all sensors (asymptotically) converge to the anchor state.
\end{thm}
\begin{proof}
It can be verified that Eq.~\eqref{71} results in
\begin{eqnarray*}
{\bf{x}}(k+1)= (P_k \ldots P_0){\bf{x}}(0) + \sum\limits_{m=0}^{k}\left(\prod_{j=1}^m P_{k-j+1}\right)B_{k-m}{{u}}.
\end{eqnarray*}
With the slice notation, we have
\begin{equation}\label{72}
{\bf{y}}(t+1)={{M}}_t{\bf{y}}(t)+{{N}}_t{{u}},\qquad t \geq 0,
\end{equation}
where~${\bf{y}}(0)={\bf{x}}(0),M_0 = P_{\vert M_0 \vert -1} \ldots P_0,$ and
\begin{eqnarray}
M_t &=& P_{\left(\sum\limits_{i=0}^{t}{\vert M_i \vert}\right) - 1} \ldots P_{\left(\sum\limits_{i=0}^{t-1}{\vert M_i \vert}\right)},~ t >0,\label{73}\\
N_t&=&\sum\limits_{m=0}^{{\vert M_t \vert - 1}}\left(\prod_{j=1}^m P_{{\vert M_t \vert}-j}\right)B_{{\vert M_t \vert - 1}-m}\label{74}.
\end{eqnarray}
In addition, we have
\begin{eqnarray*}
{\bf{y}}(t+1)=(M_t \ldots M_0){\bf{y}}(0) + \sum\limits_{m=0}^{t}\left(\prod_{j=1}^m M_{t-j+1}\right)N_{t-m}{{u}}.
\end{eqnarray*}
Since~$u$ is a constant, and
\begin{eqnarray}\label{rho}
\rho(M_t)\leq {\Vert M_t \Vert}_{\infty} < 1, \qquad \forall t,
\end{eqnarray}
as~$t \rightarrow \infty$ in Eq.~\eqref{72},~${\bf{y}}(t+1)$ converges to a limit,~${\bf{y}}^{*}$. This limit is unique because it is a fixed point of a linear iteration with bounded matrices~\cite{tsit_book}. Therefore,
\begin{equation*}
\lim\limits_{t \rightarrow \infty} {\bf{y}}(t+1)={\bf{y}}^{*},
\end{equation*}
and we have
\begin{eqnarray}
{\bf{y}}^{*} = M_t {\bf{y}}^{*} + N_t {{u}} \quad\Rightarrow\quad (I_n-M_t){\bf{y}}^{*} = N_t {{u}},
\end{eqnarray}
where~${\bf{y}}^{*}={\bf{x}}^{*}$ is the vector of limiting sensor states. Thus,
\begin{eqnarray}
{\bf{x}}^{*} = {(I_n-M_t)}^{-1} N_t {{u}},
\end{eqnarray}
for which we used the fact that~${(I_n-M_t)}$ is invertible due to~Eq.~\eqref{rho}. In order to show that the limiting states of the sensors are indeed the anchor state, we require
\begin{eqnarray}\label{80}
{(I_n-M_t)}^{-1} N_t = \mathbf{1}_n~~\Rightarrow~~M_t{\bf{1}}_n+N_t = {\bf{1}}_n.
\end{eqnarray}
Note that~$N_t$ is a column vector since there is only one anchor. Before we proceed, for the sake of simplicity let us represent any arbitrary $t$th slice as:
\begin{eqnarray*}
M_t &\triangleq& P_{T} P_{{T}-1} \ldots P_{0}, \qquad \vert M_t \vert =T+1.
\end{eqnarray*}
By substituting~$M_t$ and~$N_t$ from Eqs.~\eqref{73} and~\eqref{74} in Eq.~\eqref{80}, we need to show that
\begin{eqnarray}
( P_{T} \ldots P_{{0}}){\bf{1}}_n+
\sum\limits_{m=0}^{T}\left(\prod_{j=1}^m P_{T+1-j}\right)B_{T-m} ={\bf{1}}_n.
\end{eqnarray}
By expanding the left hand side of the above, we have
\begin{eqnarray}
(P_{T} P_{{T}-1}\ldots P_{{0}}){\bf{1}}_n&+&(P_{T} P_{{T}-1}\ldots P_{{1}})B_{0}\nonumber\\
&+&(P_{T} P_{{T}-1}\ldots P_{{2}})B_{1}\nonumber\\
&\vdots&\nonumber\\
&+&(P_{T} P_{{T}-1})B_{T-2}\nonumber\\
&+&(P_{T})B_{T-1}\nonumber\\
&+&(B_{T}).\label{82}
\end{eqnarray}
The first line of the above expression can be simplified as
\begin{eqnarray}\label{7-2}
(P_{T} P_{{T}-1}\ldots P_{{1}})( P_0{{\bf{1}}_n}+B_{0}),
\end{eqnarray}
in which~$B_{0} \neq 0$ is an~$n \times 1$ vector corresponding to the first sub-stochastic update at the beginning of the slice,~$M_t$. Also,~$B_{0}$ has only one non-zero entry, say~$\alpha_i$, at the~$i$th position if sensor~$i$ updates with the anchor at the beginning of the slice,~$M_t$. From Eq.~\eqref{lf}, it can be verified that
\begin{eqnarray}
P_{0}{\bf{1}}_n+B_{0}={\bf{1}}_n.
\end{eqnarray}
Therefore, Eq.~\eqref{82} reduces to
\begin{eqnarray}\label{85}
(P_{T} P_{{T}-1}\ldots P_{{1}}){\bf{1}}_n+(P_{T}\ldots P_{{2}})B_{{1}}+ \ldots+B_{T}.
\end{eqnarray}
After the first (sub-stochastic) update, each~$B_{j}$, $1\leq j\leq T$, has exactly one non-zero entry in the case of a sub-stochastic update and is all zeros otherwise. The procedure continues in the same way for any sub-stochastic update, i.e., an update with the anchor. Let us now consider the alternate case where the update is stochastic, i.e., without the anchor and with some neighboring sensors. Suppose~$B_{c}$ is the next sub-stochastic update, so that~$B_{j}=\mathbf{0}_{n\times 1}$ for~$1 \leq j < c$. Eq.~\eqref{85} then reduces to
\begin{eqnarray}
(P_{T}\ldots P_{{c}+1})(P_{c} P_{{c}-1}\ldots P_{{1}}
{\bf{1}}_n+B_{{c}})+\ldots+(B_{T}).\label{86}
\end{eqnarray}
Since between~$P_1$ and~$P_c$ there is no sub-stochastic update,~$P_{{c}-1}\ldots P_{{1}}
{\bf{1}}_n={\bf{1}}_n$, and we can rewrite Eq.~\eqref{86} as
\begin{eqnarray}
(P_{T}\ldots P_{{c}+1})(P_{c}
{\bf{1}}_n+B_{{c}})+\ldots+(B_{T}),\label{87}
\end{eqnarray}
and the procedure continues as before (note the similarity between Eq.~\eqref{7-2}, and the first term on the left hand side of Eq.~\eqref{87}). Finally,
\begin{eqnarray}
( P_{T} \ldots P_{{0}}){\bf{1}}_n&+&
\sum\limits_{m=0}^{T}\left(\prod_{j=1}^m P_{T+1-j}\right)B_{T-m}\nonumber\\
&=&P_{{T}}{\bf{1}}_n+B_{T}\nonumber\\
&=&{\bf{1}}_n,
\end{eqnarray}
which leads to $\lim\limits_{k \rightarrow \infty}{\bf{x}}(k)= {\bf{x}}^{*}={{u}}$.
\end{proof}
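The theorem can be checked numerically. The sketch below uses an illustrative setup that is not the paper's simulation: four sensors, anchor contact only at sensor $0$, neighbor updates following a fixed cycle, and weights of $0.5$; what matters is that every row of $[P_k~B_k]$ stays stochastic, as in Eq.~\eqref{lf}, and that information from the anchor reaches each sensor infinitely often:

```python
import numpy as np

rng = np.random.default_rng(1)
n, u = 4, 3.0
x = rng.normal(size=n) * 10.0    # arbitrary initial sensor states

def step(x):
    """One update x <- P_k x + B_k u; every row of [P_k B_k] sums to one."""
    P, B = np.eye(n), np.zeros(n)
    i = rng.integers(n)           # the (single) sensor that updates, if any
    case = rng.integers(3)
    if case == 0 and i == 0:      # sensor 0 is in the anchor's range: sub-stochastic row
        P[i, i], B[i] = 0.5, 0.5
    elif case == 1:               # stochastic update with the next sensor on the cycle
        P[i, i], P[i, (i + 1) % n] = 0.5, 0.5
    # otherwise: no neighbors found, identity update (P_k = I, B_k = 0)
    return P @ x + B * u

for _ in range(20000):
    x = step(x)
print(np.max(np.abs(x - u)))      # the spread around the anchor state shrinks
```

Each informative event contracts the deviation from $u$, and because all three event types recur infinitely often, every sensor state approaches the anchor state.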
\section{Illustrative Example}\label{example}
In this section, we provide a few numerical examples to illustrate the concepts described in this paper. We consider products of~$4 \times 4$ (sub-)stochastic matrices. Assumptions {\bf{A0-A2}} are satisfied with~$\beta_1=0.05$,~$\beta_2=0.7$. At each iteration, the update matrix, which is left-multiplied with the product of past matrices, randomly takes one of the following forms:
(i) identity matrix except for the~$i$th~($1\leq i \leq4$) row, which is replaced by a stochastic row vector; or,
(ii) identity matrix except for the~$i$th~($1\leq i \leq4$) row, which is replaced by a sub-stochastic row vector; or,
(iii) a~$4 \times 4$ identity matrix,~$I_4$.
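A sketch in the spirit of this experiment follows. Two simplifications are assumed and should be noted: the random rows are merely positive with the correct row sums (they do not enforce {\bf A1}--{\bf A2} entrywise), and a slice is terminated by the conservative rule that every row has received a sub-stochastic update. Under these assumptions, each completed slice still has a subunit infinity norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1, beta2 = 4, 0.05, 0.7

def random_row(sub):
    """Positive random row; sums to beta2 if sub-stochastic, else to 1."""
    w = rng.uniform(beta1, beta2, size=n)
    return w * ((beta2 if sub else 1.0) / w.sum())

slice_prod, informed, slice_norms = np.eye(n), set(), []
for k in range(200):
    M = np.eye(n)
    case = rng.integers(3)          # (i) stochastic, (ii) sub-stochastic, (iii) identity
    if case < 2:
        i = rng.integers(n)
        M[i] = random_row(sub=(case == 1))
        if case == 1:
            informed.add(i)
    slice_prod = M @ slice_prod     # left-multiply the new update matrix
    if len(informed) == n:          # slice complete: record its infinity norm
        slice_norms.append(slice_prod.sum(axis=1).max())
        slice_prod, informed = np.eye(n), set()

print(len(slice_norms), [round(v, 3) for v in slice_norms])
```

Once a row becomes strictly sub-stochastic, every subsequent positive mixing keeps its row sum below one, which is why each recorded slice norm is subunit.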
Fig.~\ref{f2} (Left) shows the infinity norm and the spectral radius of the product of system matrices. In addition, the infinity norm of the product over each slice is illustrated for comparison. Slice lengths, recorded at the termination of each slice and plotted over the slice index,~$t$, are shown in Fig.~\ref{f2} (Right). The minimum slice length is $5$, and $15$ slices are completed within $200$ iterations of this simulation. Note that the infinity norm of the slices is the only (strictly) monotonically decreasing curve.
\begin{figure}[!h]
\centering
\subfigure{\includegraphics[width=1.72in,height=1.70in]{1.pdf}}
\subfigure{\includegraphics[width=1.72in,height=1.70in]{2.pdf}}
\caption{(Left) Spectral Radius vs. Infinity Norm. (Right) Slice lengths.}
\label{f2}
\end{figure}
In Figs.~\ref{f3} and~\ref{f4}, we illustrate the dynamic leader-follower algorithm. Fig.~\ref{f3} shows the network configuration with~$n=4$ mobile sensors, where sensor~$i$ is restricted to move in the region,~$R_i$, marked as the corresponding disk. The anchor only moves in the region,~$R_0$, and the random trajectories taken by each node are marked, shown only over the first~$40$ iterations for visual clarity, as the random trajectories clutter the figure quickly. We choose~$1.5$ times the radius of the innermost circle as the communication radius; note that only sensors~$1$ and~$2$, in regions~$R_1$ and~$R_2$, may be able to communicate with the anchor given this communication radius, depending on the corresponding node locations within the respective regions,~$R_0,R_1,R_2$; see the top-left figure. In the top-right figure, we show a time instant when no sensor is able to communicate with any other node, thus resulting in an identity system matrix. The bottom-left figure shows the case when only one sensor,~$1$ in region~$R_1$, communicates with the anchor, thus resulting in a sub-stochastic system matrix. Finally, the bottom-right figure shows the stochastic update when sensor~$3$, in region~$R_3$, is able to communicate with sensor~$4$, in region~$R_4$. Clearly, we have chosen this network configuration, (random) motion model, and communication radius for visual convenience; the setup is applicable to any scenario where the communication radius and random motion models ensure that the information (possibly over a longer time window) travels from the anchor to each mobile sensor.
\begin{figure}[!h]
\centering
\subfigure{\includegraphics[width=1.72in]{N_Fig11.pdf}}
\subfigure{\includegraphics[width=1.72in]{N_Fig12.pdf}}
\subfigure{\includegraphics[width=1.72in]{N_Fig13.pdf}}
\subfigure{\includegraphics[width=1.72in]{N_Fig14.pdf}}
\caption{Dynamic leader-follower: Mobile sensors, red circles, and the anchor, red triangle, follow a restricted motion in their corresponding disks. The blue (and gray) lines show the nodal trajectories whereas the circles around the sensors show their communication radii.}
\label{f3}
\end{figure}
Finally, Fig.~\ref{f4} shows the sensor states with the anchor state chosen as~$u=3$. We note that the sensors closer to the anchor converge faster to the anchor state than the farther sensors, because of the information flow in this particular scenario. The conditions established on the sensor weights ensure that a sensor whose state is closer to the anchor state does not lose this information. In particular, an informed sensor does not lose its (partial) knowledge when updating only with neighboring sensors because: (i) each sensor assigns some weight to its past information; and (ii) no sensor is allowed to assign an arbitrarily large weight to any neighboring sensor. This simple illustration demonstrates the key concepts of the theoretical results described in this paper, and the setup can be extended to arbitrary motion models, network configurations, and large networks.
\begin{figure}[!h]
\centering
\includegraphics[width=2.65in]{simFig1.pdf}
\caption{Dynamic leader-follower: Sensor and anchor states.}
\label{f4}
\end{figure}
\section{Conclusion}\label{conc}
In this paper, we study asymptotic stability of Linear Time-Varying (LTV) systems with (sub-)stochastic system matrices. Motivated by applications in distributed dynamic fusion (DDF), we design conditions on the system matrices that lead to asymptotic stability of such dynamics. Rather than exploring the joint spectral radius of the (infinite) set of system matrices, we partition them into non-overlapping slices, such that each slice has a subunit infinity norm and the slices cover the entire sequence of system matrices. We use the infinity norm to characterize asymptotic stability and provide upper bounds on the infinity norm of each slice as a function of the slice length and some additional system parameters. We show that asymptotic stability is guaranteed not only in the simple case where all (or an infinite subset of) slices have bounded length, but also when there exists an infinite subset of slices whose (unbounded) lengths do not grow faster than a particular exponential rate. We apply these theoretical findings to the dynamic leader-follower algorithm and establish the conditions under which each sensor converges to the state of the anchor. These concepts are further illustrated with insightful examples.
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{C}{yber}
security of multi-agent systems and distributed algorithms has become an important research area in systems control in the last decade. For multi-agent systems, consensus is one of the fundamental problems \cite{bullo2009distributed}, \cite{Lynch}.
Based on consensus algorithms, various applications and distributed algorithms have been developed to solve different industrial problems, e.g., clock synchronization \cite{kikuya2017fault}, energy management \cite{yang2013consensus}, distributed state estimation \cite{mitra2021}, distributed optimization \cite{nedic2009distributed, sundaram2018distributed}, and so on.
As concerns for cyber security have risen in general, consensus problems in the presence of adversarial agents creating failures and attacks have attracted much attention; see, e.g.,
\cite{dibaji2018resilient, leblanc2013resilient, su2017reaching, nugraha2021dynamic}.
One class of interdisciplinary problems that has been studied in both control and computer science is that of resilient consensus \cite{dibaji2017resilient, leblanc2013resilient, vaidya2012iterative}.
In these works, the adversarial agents are categorized into two basic types: malicious agents and Byzantine agents. Both types can manipulate their data arbitrarily and may even collaborate with each other. However, they differ in one important respect: malicious agents must broadcast the same messages to all of their neighbors, while Byzantine agents can send individual messages to different neighbors (e.g., \cite{leblanc2013resilient, Lynch, teixeira2012attack}).
Resilient consensus problems under the Byzantine model have a rich history in the area of distributed computing \cite{Lynch}. In \cite{dolev1982byzantine}, it has
been shown that given a network with $n$ nodes and at most $f$ Byzantine nodes, a necessary and sufficient condition to reach resilient exact consensus is that $n\geq3f+1$ and the graph connectivity is no less than $2f+1$. Furthermore, in \cite{fischer1985impossibility}, it has been reported that under deterministic asynchronous updates, even one misbehaving agent can make it impossible for the system to reach exact consensus. Then, to avoid this constraint of exact consensus, the authors of \cite{dolev1986reaching} introduced the approximate Byzantine consensus problem in complete networks (i.e., all-to-all communication), where the non-adversarial, normal nodes are required to achieve approximate agreement by converging to a relatively small interval in finite time. In \cite{vaidya2012iterative}, the authors proposed a necessary and sufficient condition for the approximate Byzantine consensus problem in networks with general topologies.
In this paper, we study resilient asymptotic (approximate) consensus under malicious model from the viewpoint of the so-called mean subsequence reduced (MSR) algorithms, which are known for the simplicity and scalability (e.g., \cite{azadmanesh2002asynchronous, leblanc2013resilient, vaidya2012iterative, abbas2020interplay, senejohnny2019resilience, wang2020event, wang2021resilient}).
This line of work has gained much attention in the last decade as the malicious model can be widely assumed for broadcasting networks \cite{leblanc2013resilient}, wireless sensor networks \cite{kikuya2017fault}, and so on.
A basic assumption in MSR algorithms is the knowledge regarding an upper bound on the maximum number of malicious agents among the neighbors; this bound is denoted by $f$ throughout this paper. Such a bound represents the level of caution assumed by the system operator and can be set based on past experience, with possibly some safety margin.
Then, at each iteration, each node eliminates extreme values received from neighbors to avoid being influenced by such potentially faulty values. In particular, it removes the $f$ largest values and the $f$ smallest values from neighbors.
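A one-hop sketch of this trimming step follows; it is a simplified variant that always drops the $f$ largest and $f$ smallest received values, with an illustrative equal-weight average of the survivors (the weight choices are assumptions, not the algorithm's prescribed weights):

```python
def msr_update(own, received, f, self_weight=0.5):
    """Drop the f largest and f smallest neighbor values, then average the
    survivors with the node's own state. Weights here are illustrative."""
    vals = sorted(received)
    trimmed = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    if not trimmed:                      # too few neighbors: keep own value
        return own
    return self_weight * own + (1 - self_weight) * sum(trimmed) / len(trimmed)

# A malicious neighbor reporting 1e6 is discarded when f = 1:
print(msr_update(0.0, [0.9, 1.1, 1.0, 1e6], f=1))   # 0.525
```

With at most $f$ faulty neighbors, any value that survives the trimming lies between values sent by normal nodes, which is the basic safety argument behind MSR-type algorithms.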
Moreover, the graph property called robustness is shown to be critical for the network structure, guaranteeing the success of resilient consensus algorithms in static networks \cite{dibaji2017resilient, leblanc2013resilient} as well as time-varying networks \cite{saldana2017resilient}.
A recent work \cite{usevitch2020determining} attempts to check robustness of given graphs using mixed integer linear programming.
Nevertheless, such robustness requires the networks to be relatively dense and complex. Therefore, how to enhance resilience of more sparse networks without changing the original network topologies has become an urgent problem.
There are several directions for developing solutions to tackle this problem. In \cite{abbas2017improving}, the authors improved the graph robustness by introducing trusted nodes, which are assumed to be fault-free with extra security measures. They also provided another alternative method \cite{abbas2019diversity} to enhance the graph robustness by introducing different types of nodes in the network and by assuming that the attackers can only compromise a certain type of nodes.
On the other hand, the works \cite{yuan2021secure, zhao2018resilient} pursue approaches based on detection of malicious agents in the network. Compared to MSR algorithms, which do not have such detection capabilities, these algorithms are applicable to more sparse networks with the same tolerance for adversaries.
Furthermore, in \cite{su2017reaching}, by introducing multi-hop communication in MSR algorithms, the authors solved the approximate Byzantine consensus problem with a weaker condition on network structures compared to that derived under the one-hop communication model \cite{vaidya2012iterative}.
In \cite{sakavalas2020asynchronous}, the authors tackled the same problem under asynchronous updates based on rounds, which is different from the asynchrony setting used in this paper (see the discussions in Section VI).
The work \cite{khan2020exact} studied Byzantine binary consensus under the local broadcast model (malicious model) using a flooding algorithm, where nodes relay their values over the entire network.
Multi-hop communication techniques are commonly used in the areas of wireless communication \cite{goldsmith2005wireless} and computer science \cite{Lynch}.
Such techniques are also used for consensus problems in many works. In \cite{jin2006multi}, a multi-hop relay technique is introduced in the consensus problem to increase the speed of consensus forming. In \cite{zhao2016global}, a similar method based on multi-hop relay is developed to solve the global leader-following consensus problem. Moreover, application of multi-hop communication in wireless sensor networks from the viewpoint of control is investigated in \cite{manfredi2013design}. It is clear that with multi-hop communication, each node can have more information for updating compared to the one-hop case. Thus, the network may have more resilience against adversary nodes. Yet, only few works have looked into resilient consensus with multi-hop communication.
As mentioned earlier, the work \cite{su2017reaching} is restricted to the Byzantine adversary case.
To the best of our knowledge, resilient consensus with multi-hop communication under malicious attacks still remains as an open problem.
Moreover, for many applications of multi-agent systems, time delays in the communication among agents can happen naturally \cite{ren2011distributed}, \cite{mesbahi2010graph}.
Especially, in the multi-hop communication setting, the length of time delays should increase as the messages are relayed by more agents.
It is hence of significant importance to extend our algorithm to the case of asynchronous updates with time delays. We must note that such a case is not considered in \cite{su2017reaching} and \cite{khan2020exact}.
The main contribution of this paper can be outlined as follows.
Inspired by the definition of graph robustness of \cite{leblanc2013resilient}, we extend this notion to the multi-hop setting and name it as \textit{robustness with $l$ hops}.
Specifically, we formally characterize the ability of normal nodes to be influenced by normal multi-hop neighbors outside a given node set in the malicious environment.
Furthermore, we provide analysis for the properties of the new notion of robustness.
Unlike in the case with one-hop communication, the MSR algorithm in the multi-hop case may not exclude all the possible effects from malicious nodes if each normal node just eliminates the $f$ largest and the $f$ smallest received values. Since a malicious node can manipulate not only its own value but also the values it relays, such nodes can produce more than $f$ false values even if there are at most $f$ malicious nodes. To completely exclude the effects from malicious nodes, we propose the multi-hop weighted mean subsequence reduced (MW-MSR) algorithm. Normal nodes using the MW-MSR algorithm will exclude the extreme values which are produced precisely by $f$ multi-hop neighbors. To realize this trimming capability in the multi-hop setting requires the notion of \textit{message cover}, which represents the set of nodes intersecting with a given set of different message paths.
In this paper, we consider the malicious model, which is suitable for broadcast network and is different from the Byzantine model studied in \cite{su2017reaching}.
Then we derive necessary and sufficient graph conditions based on the new notion of robustness with $l$ hops for the proposed MW-MSR algorithms to achieve resilient consensus under synchronous updates and asynchronous updates with time delays in the communication.
Moreover, we present examples to illustrate how multi-hop communication helps to improve graph robustness without changing the network topology.
As a side result, we prove that for the case of unbounded path length in message relaying, our graph condition is equivalent to the necessary and sufficient graph condition for binary consensus under malicious attacks studied in \cite{khan2020exact}.
The rest of this paper is organized as follows.
In Section~II, we outline preliminaries on graphs and the system model. In Sections~III and IV, we present the MW-MSR algorithm and define graph robustness with multi-hop communication, respectively.
Then in Sections~V and VI, we derive tight graph conditions under which the MW-MSR algorithms guarantee resilient asymptotic consensus under synchronous and asynchronous updates, respectively.
In Section~VII, we provide some properties of the new robustness and in Section~VIII, we present examples to demonstrate that multi-hop communication can improve robustness of graphs in general.
Lastly, in Section~IX, we conclude the paper.
A preliminary version of this paper appeared as \cite{yuan2021resilient}.
The current paper contains all the proofs of the theoretical results, further discussions, as well as more extensive simulations.
\section{Preliminaries and Problem Formulation}
In this section, we provide preliminaries on the network models
and introduce the basic settings for the resilient
consensus problems studied in this paper.
\subsection{Network Model}
First, we introduce the graph notions used in this paper.
Consider the directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ consisting of the node set $\mathcal{V}=\{1,...,n\}$ and the edge set $\mathcal{E}\subset \mathcal{V} \times \mathcal{V}$. Here, the edge $(j,i)\in \mathcal{E}$ indicates that node $i$ can get information from node $j$.
A path from node $i_1$ to $i_m$ is a sequence of distinct nodes $(i_1, i_2, \dots, i_m)$, where $(i_j, i_{j+1})\in \mathcal{E} $ for $j=1, \dots, m-1$. Such a path is referred to as an $(m-1)$-hop path (or a path of length $m-1$) and also as $(i_1,i_m)$-path when the number of hops is not relevant but the source and destination nodes are. We also say that node $i_m$ is reachable from node $i_1$.
An $\mathcal{X}u$-path is a path from a node in set $\mathcal{X}$ to a node $u\notin \mathcal{X}$. We denote the set difference of $\mathcal{X}$ and $\mathcal{Y}$ by $\mathcal{X}\setminus\mathcal{Y}$.
For node $i$, let $\mathcal{N}_i^{l-}$ be the set of nodes that can reach node $i$ via at most $l$-hop paths, where $l$ is a positive integer. Also, let $\mathcal{N}_i^{l+}$ be the set of nodes that are reachable from node $i$ via at most $l$-hop paths.
The $l$-th power of the graph $\mathcal{G}$, denoted by $\mathcal{G}^l$, is a multigraph\footnote[1]{
In a multigraph, two nodes can have multiple edges between them.} with the same vertices as $\mathcal{G}$ and a directed edge from node $j$ to node $i$ is defined by a path of length at most $l$ from $j$ to $i$ in $\mathcal{G}$.
The adjacency matrix $A = [a_{ij} ]$ of $\mathcal{G}^l$ is given by $\alpha \leq a_{ij}<1$ if $j\in \mathcal{N}_i^{l-}$ and otherwise $a_{ij} = 0$, where $\alpha > 0$ is a fixed lower bound. We assume that $\sum_{j=1,j\neq i}^{n} a_{ij}\leq 1$ for all $i$. Let $L = [b_{ij} ]$ be the Laplacian matrix of $\mathcal{G}^l$, whose entries are defined as $b_{ii} =\sum_{j=1,j\neq i}^{n}a_{ij}$ and $b_{ij} = -a_{ij}, i\neq j$; we can see that the sum of the elements of each row of $L$ is zero.
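The $l$-hop in-neighborhood $\mathcal{N}_i^{l-}$ can be computed by a backward breadth-first search over the predecessor relation; a small sketch, assuming an edge-set representation of $\mathcal{G}$ (the function name and data layout are ours, not the paper's):

```python
def l_hop_in_neighbors(edges, i, l):
    """N_i^{l-}: nodes that can reach node i via paths of at most l hops.
    edges is an iterable of directed pairs (j, v): node v hears from node j."""
    preds = {}
    for (j, v) in edges:
        preds.setdefault(v, set()).add(j)
    reached, frontier = set(), {i}
    for _ in range(l):                    # expand one hop backwards per round
        frontier = {p for v in frontier for p in preds.get(v, set())}
        frontier -= reached | {i}
        reached |= frontier
    return reached

# Directed path 1 -> 2 -> 3 -> 4:
E = {(1, 2), (2, 3), (3, 4)}
print(l_hop_in_neighbors(E, 4, 2))   # {2, 3}
```

Since shortest paths are simple, at-most-$l$-hop reachability computed this way agrees with the path-based definition above.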
\subsection{Multi-hop Communication for Multi-agent Consensus}\label{problemsetting}
Here, we introduce the multi-agent system with multi-hop communication and the update rule used by the agents under no attacks.
Consider a time-invariant network modeled by the directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$. Each node $i$ has a real-valued state $x_i[k]$.
The goal of the agents is to arrive at consensus in their state values asymptotically, that is, $|x_i[k] - x_j[k]| \rightarrow 0$ as $k\rightarrow \infty$ for all $i,j\in \mathcal{V}$. This is to be achieved by updating the states at each time step $k$ based on the information exchanged among the nodes. Their initial values $x_i[0]$ are given. Until we reach Section~VI, we assume that no delay is present in the communication among nodes.
In this problem setting, the agents not only communicate with their direct neighbors as in conventional schemes,
but also with their multi-hop neighbors. Let $l$ be the maximum number of hops allowed in the network. Specifically,
node $i_1$ can send messages of its own to an $l$-hop neighbor $i_{l+1}$ via different paths.
We represent a message as a tuple $m=(w,P)$, where $w=\mathrm{value}(m)\in \mathbb{R}$ is the message content and $P=\mathrm{path}(m)$ indicates the path via which message $m$ is transmitted.
Moreover, nodes $i_1$ and $i_{l+1}$ are, respectively, the message source and the message destination.
When source node $i_1$ sends out a message, $P$ is a path vector of length $l+1$ with the source node being $i_1$ and other entries being empty. Then the one-hop neighbor $i_2$ receives this message from $i_1$, and it stores the value of node $i_1$ for consensus and relays the value of node $i_1$ to all the one-hop neighbors of $i_2$ with the second entry of $P$ being $i_2$ and other entries being unchanged. This relay procedure will continue until every entry of $P$ of this message is occupied, i.e., this message reaches node $i_{l+1}$. We denote by $\mathcal{V}(P)$ the set of nodes in $P$.
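The relay rule above can be sketched with a small message container (the `Message` type and `relay` helper are our illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Message:
    value: float               # value(m): the source node's state
    path: Tuple[int, ...]      # path(m): nodes visited so far, source first

def relay(m: Message, me: int, l: int):
    """Append the current node to the path; relaying stops once the message
    has traversed l hops, i.e. the path vector of length l + 1 is full."""
    p = m.path + (me,)
    done = len(p) == l + 1     # reached the destination node i_{l+1}
    return Message(m.value, p), done

m = Message(value=2.5, path=(1,))         # node i_1 = 1 originates
m, done = relay(m, 2, l=2)                # one-hop neighbor i_2 = 2 relays
m, done = relay(m, 3, l=2)                # i_3 = 3 is the 2-hop destination
print(m.path, done)                       # (1, 2, 3) True
```

Note that the value is never modified by normal relays; only adversary nodes may tamper with relayed values, which is exactly what the MW-MSR algorithm must cope with.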
We now outline the message exchanges among the agents.
At each time $k$, normal node $i$ conducts the following steps:
\noindent\textit{1. Transmit step:} Transmit message $m_{ij}[k]=(x_i[k],P_{ij}[k])$ over each $l$-hop path to node $j
\in\mathcal{N}_i^{l+}$.
\noindent\textit{2. Receive step:} Receive messages $m_{ji}[k]=(x_j[k],P_{ji}[k])$ from $j\in \mathcal{N}_i^{l-}$, whose destination is $i$.
Let $\mathcal{M}_i[k]$ be the set of messages that node $i$ received in this step.
\noindent\textit{3. Update step:} Update the state $x_i[k]$ as
\begin{equation}
x_i[k+1]=g_i(\mathcal{M}_i[k]), \label{updaterule}
\end{equation}
where $g_i(\cdot)$ is a real-valued function of the states received in this time step, to be defined later.
In the Transmit step and Receive step, nodes exchange messages with others that are up to $l$ hops away. Then in the Update step, node $i$ updates its state using the received values in $\mathcal{M}_i[k]$. Note that the adversary nodes may deviate from this specification as we describe in the next subsection.
In an agent network equipped with multi-hop communication, the consensus update rule \eqref{updaterule} can be obtained by extending the common one-hop rule (e.g., \cite{olfati2007consensus}). Let $u_i[k]$ denote the control input for node $i$ at time $k$. Each node updates as
\begin{equation}
\begin{aligned}
x_i[k+1]&=x_i[k]+u_i[k], \\
u_i[k]&=-\sum_{j\in \mathcal{N}_i^{l-}} a_{ij}[k](x_i[k]-x_j[k]).
\end{aligned}
\end{equation}
This system can be given in the compact form as
\begin{equation}\label{m1}
\begin{aligned}
x[k+1]&=x[k] + u[k],\\
u[k]&=-L[k]x[k],
\end{aligned}
\end{equation}
where $x[k]\in \mathbb{R}^n$ and $u[k]\in \mathbb{R}^n$ are the state vector and control input vector, respectively, and $L[k]$ is the Laplacian matrix of the $l$-th power of $\mathcal{G}$ determined by the messages $m_{ij}[k], i\in \mathcal{V} \ \text{and} \ j\in \mathcal{N}_i^{l-}$.
As a generalization of the one-hop result (e.g., \cite{bullo2009distributed}, \cite{mesbahi2010graph}), with $l$-hop communication, consensus can be achieved if $\mathcal{G}^l$ has a rooted spanning tree.
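The fault-free update \eqref{m1} can be sketched as follows with uniform weights $a_{ij}[k]=1/(d_i+1)$, where $d_i$ is the size of the $l$-hop in-neighborhood. The graph encoding (`adj` maps each node to its set of in-neighbors) is an illustrative assumption.

```python
def l_hop_neighbors(adj, i, l):
    """Nodes with a directed path to i of at most l hops (adj: in-neighbors)."""
    frontier, seen = {i}, set()
    for _ in range(l):
        frontier = {j for v in frontier for j in adj.get(v, ())} - seen - {i}
        seen |= frontier
    return seen

def consensus_step(x, adj, l):
    """One fault-free step of x[k+1] = x[k] - L[k] x[k], i.e., a convex
    combination of a node's own value and its l-hop in-neighbors' values."""
    new = {}
    for i in x:
        nbrs = l_hop_neighbors(adj, i, l)
        a = 1.0 / (len(nbrs) + 1)                  # uniform convex weights
        new[i] = a * (x[i] + sum(x[j] for j in nbrs))
    return new
```

With these weights, $x_i[k]-\sum_j a_{ij}(x_i[k]-x_j[k])$ reduces to the plain average of node $i$'s value and its $l$-hop in-neighbors' values, which is what the code computes.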
\subsection{Threat Model}\label{threatmodel}
We introduce the threat model adopted in this paper. In the network, the node set $\mathcal{V}$ is partitioned into the set of normal nodes $\mathcal{N}$ and the set of adversary nodes $\mathcal{A}$. The latter set $\mathcal{A}$ is unknown to the normal nodes at all times. Moreover, we constrain the class of adversaries as follows (see, e.g., \cite{leblanc2013resilient}):
\begin{definition}
\textit{($f$-total set)}
The set of adversary nodes $\mathcal{A}$ is said to be $f$-total
if it contains at most $f$ nodes, i.e., $\left| \mathcal{A}\right| \leq f$.
\end{definition}
\begin{definition}
\textit{(Malicious nodes)}
An adversary node $i\in \mathcal{A}$ is said to be a malicious node
if it can arbitrarily modify its own value and the values that it relays, but sends the same state
and relayed values to all of its neighbors at each iteration.
It can also decide not to send any value.\footnote[2]{This behavior corresponds to the omissive/crash model \cite{Lynch}.}
\end{definition}
As commonly done in the literature \cite{leblanc2013resilient}, \cite{su2017reaching},
we assume that each normal node knows the value of $f$ and the topology information of the graph up to $l$ hops.
Moreover, the malicious model is reasonable in applications such as wireless sensor networks, where neighbors' information is obtained
by broadcast communication.
In the multi-hop setting studied in this paper, it is important to impose the following assumption.
\begin{assumption}\label{assumptionpath}
Each malicious node $i$ cannot manipulate the path values in the
messages containing its own state $x_i[k]$ and those that it relays.
\end{assumption}
This is introduced for ease of analysis, but is not a strong constraint. In fact, manipulating message paths can be easily detected and hence does not create problems. We show how this can be done in Section~II-E.
\subsection{Resilient Asymptotic Consensus}
We now introduce the type of consensus among the normal agents to be sought in this paper \cite{leblanc2013resilient}, \cite{su2017reaching}, \cite{dibaji2017resilient}.
\begin{definition}
If for any possible sets and behaviors of the
malicious agents and any state values of the normal
nodes, the following two conditions are satisfied,
then we say that the normal agents reach
resilient asymptotic consensus:
\begin{enumerate}
\item Safety: There exists a bounded safety interval $\mathcal{S}$ determined by the initial values of the normal agents such that $x_i[k] \in \mathcal{S}, \forall i \in \mathcal{N}, k \in \mathbb{Z}_+$.
\item Agreement: There exists a state $x^*\in \mathcal{S}$
such that $\lim_{k\to \infty}x_i[k]=x^*, \forall i\in \mathcal{N}$.
\end{enumerate}
\end{definition}
The problem studied in this paper is to develop an MSR-based algorithm for agents that can make $l$-hop communication to reach resilient consensus under the $f$-total malicious model and to characterize conditions on the network topology for the algorithm to properly perform.
Note that in general, for MSR-based algorithms with one-hop communication, resilient consensus can be achieved under the $f$-total model with the necessary and sufficient condition expressed in terms of the so-called graph robustness; see, e.g., \cite{dibaji2018resilient, leblanc2013resilient}, and the following sections for the definition of robust graphs and related discussions.
\subsection{Discussion on Manipulation in Message Path Information}
It is notable that multi-hop communication is vulnerable to false data injection in the information relayed by nodes, which can make the problem of resilient consensus more complicated than in the one-hop case. Assumption \ref{assumptionpath} was introduced above, stating that the malicious nodes cannot manipulate the path information in the messages that they relay. We briefly explain here how such attacks can be detected, inspired by the discussion in \cite{su2017reaching}.
Such detection requires that each node can identify the neighbor from which it receives each message, which is commonly assumed (e.g., \cite{su2017reaching}, \cite{dolev1982byzantine}).
Moreover, there are many methods to realize this function in real-world applications. For instance, by using the encryption technique of the RSA algorithm \cite{rivest1978method}, each node can send out its value associated with a digital signature using its own private key. Then, using the sender’s public key, the receiver can confirm that this message is indeed sent by the particular sender.
In each iteration of the synchronous algorithm, there are three potential cases where a message sent to normal node $i$ is
manipulated in its path information:
(i) Node $i$ receives multiple messages along the same path $P$; (ii) it receives messages along an unknown path $P'$; or
(iii) it does not receive any message along a known path $P$. Note that a normal node receives only one message along each path in each iteration when no adversarial node is present.
For case (i), this faulty behavior is caused either by duplicating messages or by manipulating path information in messages.
We show that in both situations, the receiving node $i$ can find that there is at least one faulty node in path $P$. It is obvious for the first situation. For the path manipulating situation, consider the case where a normal node $h$ receives a message $m =(w, P)$ directly from node $j$ but the path $P$ does not contain node $j$. Then node $h$ knows that node $j$ is faulty, and will not forward the message.
This indicates that in general, if there is a sequence of faulty nodes along a path, then the last one in the sequence must keep its own index within the path information in the messages that it transmits.
Moreover, this argument also holds for case (ii), i.e., node $i$ knows that at least one node in path $P'$ is faulty.
Therefore, in cases (i) and (ii), from the perspective of node $i$, manipulating the message path data is equivalent to having a faulty node in $P$ or $P'$ sending additional messages with manipulated values, and thus it will remove any values in this path by the MW-MSR algorithm.
For case (iii), either a faulty node does not send/forward the message $m$, or it manipulates the message path $P$. In the latter case, for node $i$, manipulating the message path is equivalent to having a faulty node in $P$ not sending/forwarding the message.
This analysis can also be extended to algorithms with asynchronous updates.
In each update of such algorithms, consider the following three path manipulating cases for node $i$: (i) Node $i$ receives multiple messages along one path $P$ at the same time step; (ii) it receives messages along an unknown path $P'$; or
(iii) it does not receive any message along path $P$ in a period of time $\tau$, where $\tau$ is the maximum time delay of normal agents. Note that in case (i) for the asynchronous algorithm, faulty nodes can send multiple messages along $P$ as long as these faulty messages do not arrive at node $i$ at the same time step and this behavior will not affect normal agents, since only the most recent values of multi-hop neighbors will be used in the asynchronous MW-MSR algorithm. The analysis of cases (ii) and (iii) is similar to that of the synchronous algorithm.
The above analysis is based on the assumption that there is no packet loss in the fault-free networks. In real-world applications, packet losses can happen even in fault-free networks. We note that there are methods to deal with this issue.
Packet losses in the communication between two normal nodes can also cause the situation of case (iii) mentioned above. As in the one-hop W-MSR algorithm, if a packet loss occurs in the communication from neighbor $j$ to node $i$, then node $i$ may receive only $|\mathcal{N}_i|-1$
values at this particular time step and still removes the $f$ largest and $f$ smallest received values; hence, node $i$ uses less information from normal nodes in its update. This behavior does not violate the safety interval, but it may slow down consensus. If packet losses occur frequently along a transmission path, then node $i$ can treat this path as containing faulty nodes.
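The screening of cases (i) and (ii) above can be sketched as follows, assuming (as in the text) that each node can identify the direct sender of each message and knows the valid paths within its $l$-hop topology. The data layout (`(sender, value, path)` triples) is our illustrative assumption.

```python
def screen_messages(received, known_paths):
    """received: list of (sender, value, path) triples, path a tuple of ids.
    Returns (accepted messages, flagged paths exhibiting cases (i)-(ii))."""
    count = {}
    for _, _, path in received:                # count arrivals per path
        count[path] = count.get(path, 0) + 1
    accepted, flagged = [], set()
    for sender, value, path in received:
        if sender not in path:                 # relayer absent from its own path
            flagged.add(path)
        elif path not in known_paths:          # case (ii): unknown path P'
            flagged.add(path)
        elif count[path] > 1:                  # case (i): duplicates on one path
            flagged.add(path)
        else:
            accepted.append((value, path))
    return accepted, flagged
```

Flagged paths are then treated as containing at least one faulty node, consistent with the argument above; the MW-MSR trimming removes their values.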
\section{Multi-hop Weighted MSR Algorithm}
In this section, we introduce the multi-hop weighted MSR (MW-MSR) algorithm, which is designed to solve the resilient consensus problem under the multi-hop setting. We first introduce the notion of message cover which plays a key role in the trimming function of our MSR algorithm. Then we outline the structure of the MW-MSR algorithm and provide examples to illustrate the idea behind the algorithm.
The notion of message cover \cite{su2017reaching} is crucial in the update rule of our algorithm to be proposed in this section. It evaluates the effects of adversary nodes that can possibly manipulate the updates of normal nodes in a multi-hop communication setting. Its formal definition is given as follows.
\begin{definition} For a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, let $\mathcal{M}$ be a set of messages transmitted over $\mathcal{G}$, and let $\mathcal{P}(\mathcal{M})$ be the set of message paths of all the messages in $\mathcal{M}$, i.e., $\mathcal{P}(\mathcal{M}) =\{\mathrm{path}(m):m \in \mathcal{M}\}$. A \textit{message cover} of $\mathcal{M}$ is a set of nodes $\mathcal{T}(\mathcal{M})\subset \mathcal{V}$ whose removal disconnects all message paths, i.e., for each path $P\in \mathcal{P}(\mathcal{M})$, we have $\mathcal{V}(P)\cap \mathcal{T}(\mathcal{M})\neq \emptyset$. In particular, a \textit{minimum} message cover of $\mathcal{M}$ is defined by
\begin{equation*}
\mathcal{T}^*(\mathcal{M})\in \arg \min_{\substack{ \mathcal{T}(\mathcal{M}): \textup{ Cover of } \mathcal{M}}} \left| \mathcal{T} (\mathcal{M})\right| .
\end{equation*}
\end{definition}
\vspace{0.12cm}
As a simple example, consider a set $\mathcal{M}$ of messages sent from node $i$ to node $j$ along paths that do not overlap. Then, any message cover of $\mathcal{M}$ must contain at least one node from each path.
Clearly, there may be multiple minimum message covers if the paths are of length greater than three.
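Following the definition, a minimum message cover can be computed by brute force for small examples (the search is exponential in general). In this sketch, an assumption on our part, each path is listed without the common destination node, since covering every path by the destination itself is not of interest.

```python
from itertools import combinations

def minimum_message_cover(paths):
    """paths: list of tuples of node ids (destination omitted).
    Returns one minimum-cardinality node set hitting every path."""
    nodes = sorted({v for p in paths for v in p})
    for size in range(len(nodes) + 1):         # smallest candidate size first
        for cand in combinations(nodes, size):
            if all(set(cand) & set(p) for p in paths):
                return set(cand)
    return set(nodes)
```

For two non-overlapping paths, any cover needs one node from each, so the minimum cover has cardinality two, matching the remark above.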
\begin{algorithm}[t]
\caption{MW-MSR Algorithm}
1) At each time $k$, normal node $i$
sends its own message to nodes in $\mathcal{N}_i^{l+}$.
Then, it obtains messages of the nodes in $\mathcal{N}_i^{l-}$ and itself, whose set is denoted by $\mathcal{M}_i[k]$, and sorts the values in $\mathcal{M}_i[k]$ in an increasing order.
2) (a) Define two subsets of $\mathcal{M}_i[k]$ based on the message values:
\begin{equation*}
\overline{\mathcal{M}}_i[k]=\{ m\in \mathcal{M}_i[k]: \mathrm{value}(m)> x_i[k] \},
\end{equation*}
\begin{equation*}
\underline{\mathcal{M}}_i[k]=\{ m\in \mathcal{M}_i[k]: \mathrm{value}(m)< x_i[k] \}.
\end{equation*}
(b) Then, let $\overline{\mathcal{R}}_i[k]=\overline{\mathcal{M}}_i[k]$ if the cardinality of a minimum cover of $\overline{\mathcal{M}}_i[k]$ is less than $f$, i.e., $\left| \mathcal{T}^* (\overline{\mathcal{M}}_i[k])\right| <f$. Otherwise, let $\overline{\mathcal{R}}_i[k]$ be the largest sized subset of $\overline{\mathcal{M}}_i[k]$ such that (i) for all $m\in \overline{\mathcal{M}}_i[k]\setminus \overline{\mathcal{R}}_i[k]$ and $m'\in \overline{\mathcal{R}}_i[k]$ we have $\mathrm{value}(m) \leq \mathrm{value}(m')$, and (ii) the cardinality of a minimum cover of $\overline{\mathcal{R}}_i[k]$ is exactly $f$, i.e., $\left| \mathcal{T}^* (\overline{\mathcal{R}}_i[k])\right| =f$.
(c) Similarly, let $\underline{\mathcal{R}}_i[k]=\underline{\mathcal{M}}_i[k]$ if the cardinality of a minimum cover of $\underline{\mathcal{M}}_i[k]$ is less than $f$, i.e., $\left| \mathcal{T}^* (\underline{\mathcal{M}}_i[k])\right| <f$. Otherwise, let $\underline{\mathcal{R}}_i[k]$ be the largest sized subset of $\underline{\mathcal{M}}_i[k]$ such that (i) for all $m\in \underline{\mathcal{M}}_i[k]\setminus \underline{\mathcal{R}}_i[k]$ and $m'\in \underline{\mathcal{R}}_i[k]$ we have $\mathrm{value}(m) \geq \mathrm{value}(m')$, and (ii) the cardinality of a minimum cover of $\underline{\mathcal{R}}_i[k]$ is exactly $f$, i.e., $\left| \mathcal{T}^* (\underline{\mathcal{R}}_i[k])\right| =f$.
(d) Finally, let $\mathcal{R}_i[k]=\overline{\mathcal{R}}_i[k]\cup\underline{\mathcal{R}}_i[k]$.
3) Node $i$ updates its value as follows:
\begin{equation}
x_i[k+1]=\sum_{m\in \mathcal{M}_i[k]\setminus \mathcal{R}_i[k]} a_{i}[k]\mathrm{value}(m), \label{msrupdate}
\end{equation}
where $a_{i}[k]=1/(\left| \mathcal{M}_i[k]\setminus \mathcal{R}_i[k] \right| )$.
\end{algorithm}
Now, we are ready to introduce the structure of the synchronous \textit{MW-MSR algorithm} in Algorithm 1.
Note that the one-hop version of the MW-MSR algorithm (i.e., with $l=1$) is equivalent to the W-MSR algorithm in \cite{leblanc2013resilient}.
For $l\geq 2$, however, the two algorithms differ mainly in the trimming function in step 2.
For general MSR algorithms of one-hop communication, the essential idea for the normal nodes to avoid being affected by adversary nodes is that
each normal node $i$ will exclude the effects from $f$ neighbors with extreme values (possibly sent by faulty nodes).
This can guarantee that values outside the safety interval will not be used by any normal node at any time step. In the one-hop case, for each normal node $i$, the number of values received from such $f$ neighbors is exactly $f$, i.e., node $i$ will trim away $f$ largest and $f$ smallest values at each step. This is because each neighbor sends only one value of its own to node $i$ at each step under the typical assumptions made in MSR-related works \cite{leblanc2013resilient}, \cite{vaidya2012iterative}.
Under the multi-hop setting, the situation changes significantly even if we assume that each node can only send out one value of its own to its neighbors at each step. Since each node relays the values from different neighbors, normal node $i$ can receive more than one value from one direct neighbor. Thus, in the MW-MSR algorithm, normal node $i$ cannot just trim away $f$ largest and $f$ smallest values at each step. Instead, it needs to trim away the largest and smallest values from exactly $f$ nodes within $l$ hops away, which is the generalization of the essential idea in the one-hop W-MSR algorithm.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=1.65in]{5-node-graph}
}
\quad
\vspace{-5pt}
\subfigure[]{
\includegraphics[width=1.1in]{one-two-hop}
}
\vspace{-4pt}
\caption{(a) A 5-node graph. (b) Values removed by node 1 in one-hop and two-hop algorithms.}
\label{one-hop-two-hop}
\end{figure}
To characterize, for node $i$, the extreme values originating from exactly $f$ nodes, we employ the notion of minimum message cover (MMC).
Intuitively speaking, for normal node $i$, $\overline{\mathcal{R}}_i[k]$ and $\underline{\mathcal{R}}_i[k]$ are the largest sized sets of received messages containing very large and small values that may have been generated or tampered by $f$ adversary nodes, respectively.
Here, we focus on how $\underline{\mathcal{R}}_i[k]$ is determined, as $\overline{\mathcal{R}}_i[k]$ can be obtained in a similar way.
When the cardinality of the MMC of set $\underline{\mathcal{M}}_i[k]$ is no more than $f$, node $i$ simply takes $\underline{\mathcal{R}}_i[k]=\underline{\mathcal{M}}_i[k]$.
Otherwise, node $i$ will check the first $f+1$ values of $\underline{\mathcal{M}}_i[k]$, and if the MMC of these values is of cardinality $f$, then it will check the first $f+2$ values of $\underline{\mathcal{M}}_i[k]$. This procedure will continue until for the first $f+h$ ($h\geq 1$) values of $\underline{\mathcal{M}}_i[k]$, the MMC of these values is of cardinality $f+1$. Then $\underline{\mathcal{R}}_i[k]$ is taken as the first $f+h-1$ values of $\underline{\mathcal{M}}_i[k]$.
After the sets $\overline{\mathcal{R}}_i[k]$ and $\underline{\mathcal{R}}_i[k]$ are determined, the values in these sets are excluded from the control input $u_i[k]$ computed by \eqref{msrupdate} in step 3. Note that this control is consistent with the one in \eqref{updaterule} when $f=0$.
We also illustrate the determination of such subsets through a simple example.
Consider the network in Fig. \ref{one-hop-two-hop} with initial states $x[0]=[2\ 4\ 100\ 8\ 10]^T$, where node 3 is set to be malicious ($f=1$) and constantly broadcasts the value 100 as its own value as well as those in the relayed messages. We look at node 1 at time $k=0$ and drop the time index $k$. In the one-hop version of the MW-MSR algorithm, the input for node 1 is $\{x_1, x_2, x_5, x_3\}$, and it chooses $\overline{\mathcal{R}}_1[0]=\{x_3=100\}$ and $\underline{\mathcal{R}}_1[0]=\emptyset$ in step 2 of the algorithm (since the value $x_1$ is the smallest in the input).
In the two-hop version of the MW-MSR algorithm, node 1 receives the state values $x_2, x_3,$ and $ x_5$ directly from nodes 2, 3, and 5, respectively. Moreover, it receives the relayed values of node 4 through nodes 2, 3, and 5, denoted by $x_4^2, x_4^3,$ and $ x_4^5$. Then
the sorted input for node 1 is $\{x_1, x_2, x_4^2, x_4^5, x_5, x_4^3, x_3\}$, and node 1 checks the MMC of the subset of the largest values starting from the $(f+1)$th value (since the values before the $f$th one are definitely removed by node 1). First, it evaluates $\{x_4^3, x_3\}$, and the MMC of this message set is the node set $\{3\}$ with cardinality 1. Then, it evaluates $\{x_5, x_4^3, x_3\}$ and the MMC of this message set can be found to be the node set $\{3, 5\}$ with cardinality 2, which is bigger than $f=1$. As a result, node 1 confirms that $\{x_4^3, x_3\}$ is the largest sized set of the large values that may have been generated or tampered by $f$ adversary nodes. Therefore, node 1 chooses $\overline{\mathcal{R}}_1[0]=\{x_4^3=100, x_3=100\}$ and $\underline{\mathcal{R}}_1[0]=\emptyset$ in step 2 of the algorithm.
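The two-hop selection just described can be reproduced with a small brute-force sketch. This is our illustration, not the authors' implementation: messages are `(value, path)` pairs with paths excluding the receiving node 1, and the cover search is exponential.

```python
from itertools import combinations

def mmc_size(paths):
    """Cardinality of a minimum message cover (brute force)."""
    nodes = sorted({v for p in paths for v in p})
    for size in range(len(nodes) + 1):
        for cand in combinations(nodes, size):
            if all(set(cand) & set(p) for p in paths):
                return size
    return len(nodes)

def upper_removed(msgs, x_i, f):
    """Sketch of step 2(b): the largest received values whose minimum
    message cover stays within f nodes."""
    larger = sorted((m for m in msgs if m[0] > x_i), key=lambda m: m[0])
    if mmc_size([m[1] for m in larger]) < f:
        return larger
    best = []
    for h in range(1, len(larger) + 1):        # grow the tail of largest values
        tail = larger[-h:]
        if mmc_size([m[1] for m in tail]) <= f:
            best = tail
        else:
            break
    return best

# Node 1's received values larger than x_1 = 2 (paths exclude node 1):
msgs = [(4, (2,)), (8, (4, 2)), (8, (4, 5)), (10, (5,)),
        (100, (4, 3)), (100, (3,))]
```

Running `upper_removed(msgs, 2, 1)` returns the two messages carrying the value 100, matching $\overline{\mathcal{R}}_1[0]=\{x_4^3, x_3\}$ in the example above: adding $x_5$ would raise the cover size to two, exceeding $f=1$.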
In this paper, the key question to be addressed is, under what conditions on the network can the above algorithm achieve resilient asymptotic consensus? Our approach is to develop a generalization of the results and analysis of the one-hop case. In particular, this necessitates us to extend the notion of graph robustness by taking account of multi-hop communication. This is carried out in the next section.
\section{Robustness with Multi-hop Communication}
In this section, we discuss the notion of graph robustness.
This notion was first introduced in \cite{leblanc2013resilient}, which corresponds
to the one-hop case. We provide its multi-hop generalization,
which plays a crucial role in our resilient consensus problem.
As in the definition of robustness for the one-hop case \cite{leblanc2013resilient},
we start with the definition of $r$-reachability. Specifically,
when the communication is only one-hop,
a node set is said to be $r$-reachable if it contains
at least one node that has at least $r$ incoming neighbors outside this set. This notion basically captures the capability of a set to be influenced by the outside of the set when the nodes apply the MSR algorithms with parameter $r-1$.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=1.25in]{2-reachable}
}
\quad
\vspace{-3pt}
\subfigure[]{
\includegraphics[width=1.25in]{1-reachable}
}
\vspace{-3pt}
\caption{(a) Node $i$ has two independent paths originating from the outside of $\mathcal{V}_1$ with respect to set $\mathcal{F}=\{j\}$. (b) Node $i$ only has one independent path sharing the same property.}
\label{3-reachable}
\vspace*{-3.5mm}
\end{figure}
In generalizing this notion to the case of multi-hop communication, it is crucial to extend the above-mentioned capability. In particular, the influence from the outside of the set may come from remote nodes and are not restricted to direct neighbors. With a slight change, in the multi-hop setting, we define the $r$-reachability as follows.
\begin{definition}
Consider a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with $l$-hop communication. For $r\in \mathbb{Z}_+$, set $\mathcal{F}\subset \mathcal{V}$, and nonempty set $\mathcal{V}_1\subset \mathcal{V}$, a node $i\in \mathcal{V}_1$
is said to be $r$-reachable with $l$-hop communication with respect to $\mathcal{F}$ if it has at least $r$ independent paths (i.e., only node $i$ is the common node in these paths) of at most $l$ hops originating from nodes outside $\mathcal{V}_1$ and all these paths do not have any node in set $\mathcal{F}$ as an intermediate node (i.e., the nodes in $\mathcal{F}$ can be source or destination nodes in these paths).
\end{definition}
Intuitively speaking, for any set $\mathcal{F}\subset \mathcal{V}$ and for node $i\in \mathcal{V}_1$ to have the above-mentioned property, there should be at least $r$ source nodes outside $\mathcal{V}_1$, each having at least one independent path of length at most $l$ hops to node $i$ that does not contain any internal node from the set $\mathcal{F}$.
It is clear that in the one-hop case, counting the independent paths simply reduces to counting the in-neighbors.
As an example, consider the graph in Fig.~\ref{3-reachable}(a), where the two node sets $\mathcal{V}_1$ and $\mathcal{V}_2$ are taken as indicated and the set $\mathcal{F}=\{j\}$. Here, node $i\in \mathcal{V}_1$ has two independent paths of at most two hops originating from the nodes outside $\mathcal{V}_1$ with respect to the set $\mathcal{F}$. In contrast, in a similar graph shown in Fig.~\ref{3-reachable}(b), such a property is lost and node $i$ has only one path from the outside of $\mathcal{V}_1$ w.r.t. the set $\mathcal{F}$.
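For small graphs, $r$-reachability with $l$ hops can be checked directly from the definition. The following brute-force sketch (our illustration; exponential in the graph size) enumerates simple paths from outside $\mathcal{V}_1$ and searches for $r$ of them sharing only node $i$.

```python
from itertools import combinations

def paths_to(adj, target, l, V1, F):
    """Simple paths of at most l hops from nodes outside V1 to target,
    with no node of F as an internal node (F nodes may be sources)."""
    found = []
    def extend(path):
        v = path[-1]
        if v == target:
            if not any(u in F for u in path[1:-1]):
                found.append(tuple(path))
            return
        if len(path) - 1 >= l:                 # already l hops; stop
            return
        for w in adj.get(v, ()):
            if w not in path:
                extend(path + [w])
    for s in adj:
        if s not in V1:
            extend([s])
    return found

def is_r_reachable(adj, i, r, l, V1, F):
    """True if i has r such paths whose only common node is i."""
    ps = paths_to(adj, i, l, V1, F)
    for cand in combinations(ps, r):
        pairs = combinations(cand, 2)
        if all(not ((set(p) & set(q)) - {i}) for p, q in pairs):
            return True
    return False
```

On the two situations of Fig.~\ref{3-reachable}, this check distinguishes node $i$ having two independent paths from having only one, as the test below illustrates on a minimal four-node analogue.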
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=0.85in]{fig3}
}
\quad
\subfigure[]{
\includegraphics[width=1.35in]{7node}
}
\vspace{-3pt}
\caption{(a) The graph is not $(2, 2)$-robust with one hop, but it is $(2,2)$-robust with $2$ hops. (b) The graph is $(2,2)$-robust with one hop and $(3,3)$-robust with $2$ hops.}
\label{graph1}
\vspace*{-3.5mm}
\end{figure}
Now, we are ready to generalize this notion to the entire graph and define $r$-robustness and $(r,s)$-robustness with $l$ hops as follows.
\begin{definition}\label{rs-robust} A directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is said to be $(r,s)$-robust with $l$ hops with respect to a given set $\mathcal{F}\subset \mathcal{V}$,
if for every pair of nonempty disjoint subsets $\mathcal{V}_\text{1},\mathcal{V}_\text{2}\subset \mathcal{V}$, at least one of the following conditions holds:
\begin{enumerate}
\item $\mathcal{Z}_{\mathcal{V}_1}^r=\mathcal{V}_1$,
\item $\mathcal{Z}_{\mathcal{V}_2}^r=\mathcal{V}_2$,
\item $\left| \mathcal{Z}_{\mathcal{V}_1}^r\right| +\left| \mathcal{Z}_{\mathcal{V}_2}^r\right| \geq s$,
\end{enumerate}
\noindent where $\mathcal{Z}_{\mathcal{V}_a}^\textit{r}$ is the set of nodes in $\mathcal{V}_\textit{a}$ ($a=1,2$) that are $r$-reachable with $l$-hop communication with respect to $\mathcal{F}$.
Moreover, if the graph $\mathcal{G}$ satisfies this property with respect to any set $\mathcal{F}$ following the $f$-total model, then we say that $\mathcal{G}$ is $(r,s)$-robust with $l$ hops under the $f$-total model.
When it is clear from the context, we will just say $\mathcal{G}$ is $(r,s)$-robust with $l$ hops.
Furthermore, when the graph is $(r,1)$-robust with $l$ hops, we also say it is $r$-robust with $l$ hops.
\end{definition}
Generally, robustness of a graph increases as the relay range $l$ increases.
We will illustrate this point using the graphs in Fig.~\ref{graph1}.
Note that graph robustness with multi-hop communication needs to be checked for every possible set $\mathcal{F}$ satisfying the $f$-total model. In the context of this paper, we are interested in checking $(r,s)$-robustness under the $(r-1)$-total model.
First, the graph in Fig.~\ref{graph1}(a) is not $(2,2)$-robust with one hop since for the sets $\{1,2\}$ and $\{3,4\}$, none of the nodes has two in-neighbors outside the corresponding set. However, under the 1-total model, this graph is $(2,2)$-robust with $2$ hops. For instance, we first check the condition for the set $\mathcal{F}=\{1\}$. For sets $\{1,2\}$ and $\{3,4\}$, all of the nodes 1, 3 and 4 have two independent paths of at most two hops originating from the outside of the set with node 1 not being the internal node. After checking all the possible subsets of $\mathcal{V}$, one can confirm that this graph is $(2,2)$-robust with $2$ hops with respect to set $\mathcal{F}=\{1\}$. Since this graph is actually symmetric for each node, we can conclude that for the set $\mathcal{F}=\{v\}$ ($v=2,3,4$), this graph is $(2,2)$-robust with $2$ hops with respect to this set. Hence, this graph is $(2,2)$-robust with $2$ hops.
Next, we look at the graph in Fig.~\ref{graph1}(b).
When $l=1$, this graph is $(2,2)$-robust with $1$ hop but not $(3,3)$-robust with $1$ hop.
When $l=2$, it becomes $(3,3)$-robust with $2$ hops.
It is further noted that the level of robustness is constrained
by the in-degrees of the nodes.
In the graph of Fig.~\ref{graph1}(b), each node has four incoming edges. As a result,
for $r\geq 4$, this graph cannot be $(r, s)$-robust with any number of hops.
We elaborate more on this aspect in Section~VII.
\section{Synchronous Network}
In this section, we analyze the MW-MSR algorithm under synchronous updates, i.e., each normal node will update its value using those received from all of its $l$-hop neighbors in a synchronous manner with other nodes at each time $k$.
\subsection{Matrix Representation}
First, we write the system in a matrix form.
For ease of notation in our analysis, we reorder the node indices so that the normal nodes take indices $1,\dots,n_N$ and the malicious nodes take indices $n_N+1,\dots,n$. Then the state vector and control input vector can be written as
\begin{equation}
x[k]=\begin{bmatrix} x^N[k] \\ x^F[k] \end{bmatrix}, \medspace u[k]=\begin{bmatrix} u^N[k] \\ u^F[k] \end{bmatrix},
\end{equation}
where the superscript $N$ stands for normal and $F$ for faulty. Regarding the control inputs $u^N[k]$ and $u^F[k]$, the normal nodes follow \eqref{msrupdate} while the malicious nodes may not. Hence, they can be expressed as
\begin{equation}
\begin{array}{lll}
u^N[k] =-L^N[k]x[k],\\
u^F[k] : \textup{arbitrary,}
\end{array}
\end{equation}
where $L^N[k]\in \mathbb{R}^{n_N\times n}$ is the matrix formed by the first $n_N$ rows of $L[k]$ associated with normal nodes. The row sums of this matrix $L^N[k]$ are zero as in $L[k]$.
Thus, we can rewrite the system as
\begin{equation} \label{system1}
x[k+1]=\left( I_n - \begin{bmatrix} L^N[k] \\ 0 \end{bmatrix} \right) x[k] + \begin{bmatrix} 0 \\ I_{n_F} \end{bmatrix}u^F[k].
\end{equation}
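A minimal numerical sketch of one step of \eqref{system1} follows; the weights are illustrative (chosen to mimic trimming the extreme value), and the zero row sums of $L^N[k]$ are checked explicitly.

```python
# One step of x[k+1] = (I - [L^N; 0]) x[k] + [0; I] u^F[k]:
# normal nodes apply -L^N, the faulty input is arbitrary.

n_N = 2                              # 2 normal nodes, 1 faulty node
x = [0.0, 4.0, 100.0]                # faulty node broadcasts 100
LN = [[0.5, -0.5, 0.0],              # normal nodes ignore the extreme value,
      [-0.5, 0.5, 0.0]]              # hence zero weight on the third entry
assert all(abs(sum(row)) < 1e-12 for row in LN)   # zero row sums, as in L[k]

u_F = [0.0]                          # arbitrary faulty input (here: hold)
x_next = [x[i] - sum(LN[i][j] * x[j] for j in range(len(x)))
          for i in range(n_N)] + [x[n_N] + u_F[0]]
```

After this step both normal states equal 2.0 (the average of the normal values, inside the safety interval), while the faulty state stays at 100.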
\vspace{0.12cm}
\subsection{Consensus Analysis with Multi-hop Communication}
Now we are ready to provide a necessary and sufficient condition for resilient consensus applying the synchronous MW-MSR algorithm. The following theorem is the first main contribution of this paper.
\begin{theorem}\label{syn}
Consider a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with $l$-hop communication, where each normal node updates its value according to the synchronous MW-MSR algorithm with parameter
$f$. Under the $f$-total malicious model, resilient asymptotic
consensus is achieved if and only if the network topology is
$(f + 1, f + 1)$-robust with $l$ hops.
Moreover, the safety interval is given by
\begin{equation*}
\mathcal{S}=\big[ \min x^N[0], \max x^N[0] \big].
\end{equation*}
\end{theorem}
\vspace{0.12cm}
\begin{proof}
\textit{(Necessity)} If $\mathcal{G}$ is not $(f +1, f +1)$-robust with $l$ hops, then
there are nonempty, disjoint subsets $\mathcal{V}_1, \mathcal{V}_2\subset \mathcal{V}$ such that none of
the conditions in Definition \ref{rs-robust} holds. Suppose that the initial value of
each node in $\mathcal{V}_1$ is $a$ and each node in $\mathcal{V}_2$ takes $b$, with $a < b$.
Let all other nodes have initial values taken from the interval
$(a, b)$. Since $| \mathcal{Z}_{\mathcal{V}_1}^{f+1}| +| \mathcal{Z}_{\mathcal{V}_2}^{f+1}| \leq f$, suppose that all nodes in $\mathcal{Z}_{\mathcal{V}_1}^{f+1}$ and $\mathcal{Z}_{\mathcal{V}_2}^{f+1}$ are malicious and take constant values.
Then there is still at least one
normal node in both $\mathcal{V}_1$ and $\mathcal{V}_2$ since $| \mathcal{Z}_{\mathcal{V}_1}^{f+1}| < \left| \mathcal{V}_1\right|$ and $| \mathcal{Z}_{\mathcal{V}_2}^{f+1}| < \left| \mathcal{V}_2\right|$, respectively. Then these normal nodes remove all the values of incoming neighbors outside of their respective sets since the message cover of these values has cardinality equal to $f$ or less. According to the synchronous MW-MSR algorithm, such normal nodes will keep their values and consensus cannot be achieved.
\textit{(Sufficiency)}
First, we show that the safety condition of resilient consensus is satisfied.
Let $\overline{x}^N[k]$ and $\underline{x}^N[k]$ be
the maximum and minimum values of the normal nodes at
time $k$, respectively.
We can show that $\overline{x}^N[k]$ is monotonically nonincreasing and $\underline{x}^N[k]$ is monotonically nondecreasing, and thus each of them has some limit.
This can be shown directly from the definitions and the fact that the values
used in the MW-MSR update rule always lie within the interval $\big[ \underline{x}^N[k], \overline{x}^N[k] \big] \subseteq \mathcal{S}$ for $k\geq 0$:
at each time $k$, in step 2 of Algorithm 1, node $i$ removes the possibly manipulated values from at most $f$ nodes within $l$ hops,
and the update rule \eqref{system1} uses a convex combination of the values in $\big[ \underline{x}^N[k], \overline{x}^N[k] \big]$.
Therefore, the safety condition is satisfied.
Then, we denote the limits of $\overline{x}^N[k]$ and $\underline{x}^N[k]$ by $\overline{\omega}$ and $\underline{\omega}$,
respectively. We prove by contradiction that $\overline{\omega}=\underline{\omega}$, and thus that the normal nodes reach consensus.
Suppose that $\overline{\omega} > \underline{\omega}$. We can then take $\epsilon_0 > 0$ such that $\overline{\omega}-\epsilon_0 > \underline{\omega}+\epsilon_0$.
Fix $\epsilon$ with $0<\epsilon<\epsilon_0$ and $\epsilon<\epsilon_0\alpha^{n_N}/(1-\alpha^{n_N})$, where $\alpha$ is the minimum of all $a_{i}[k]$ in step 3 of the MW-MSR algorithm.
For $1\leq \gamma \leq n_N$, define $\epsilon_\gamma$ recursively as
\begin{equation*}
\epsilon_{\gamma}= \alpha\epsilon_{\gamma-1}-(1-\alpha)\epsilon.
\end{equation*}
Then we have $0 < \epsilon_{\gamma} < \epsilon_{\gamma-1}\leq \epsilon_0$ for all $\gamma$, since it holds that
\begin{equation}\label{epsilon_positive}
\begin{aligned}
\epsilon_\gamma &= \alpha\epsilon_{\gamma-1}-(1-\alpha)\epsilon
= \alpha^\gamma\epsilon_0-(1-\alpha^\gamma)\epsilon\\
&\geq \alpha^{n_N}\epsilon_0-(1-\alpha^{n_N})\epsilon>0.
\end{aligned}
\end{equation}
At any time step $k$ and for any $\epsilon_t>0$, define two sets:
\begin{equation*}
\begin{aligned}
\mathcal{Z}_1(k,\epsilon_t)&=\{i\in \mathcal{V}: x_i[k]>\overline{\omega}-\epsilon_t\},\\
\mathcal{Z}_2(k,\epsilon_t)&=\{i\in \mathcal{V}: x_i[k]<\underline{\omega}+\epsilon_t\}.
\end{aligned}
\end{equation*}
By the definition of $\epsilon_0$, $\mathcal{Z}_1(k,\epsilon_0)$ and $\mathcal{Z}_2(k,\epsilon_0)$ are disjoint.
Let $k_\epsilon$ be the time such
that $\overline{x}^N[k] < \overline{\omega}+\epsilon $ and $\underline{x}^N[k] > \underline{\omega}-\epsilon$ for all $k\geq k_\epsilon$. Such a $k_\epsilon$ exists since $\overline{x}^N[k]$ and $\underline{x}^N[k]$ converge monotonically to $\overline{\omega}$ and $\underline{\omega}$, respectively, as discussed above.
Consider the nonempty and disjoint sets $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$ and $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$. Notice that
the network is $(f + 1, f + 1)$-robust with $l$ hops w.r.t. any set $\mathcal{F}$ following the $f$-total model and the set of malicious nodes $\mathcal{A}$ also satisfies the $f$-total model. Hence, the network is $(f + 1, f + 1)$-robust with $l$ hops w.r.t. the set $\mathcal{A}$ and at least one of the three conditions in Definition \ref{rs-robust} holds.
Also notice that the normal node holding the value $\overline{x}^N[k_\epsilon]$ necessarily belongs to $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$, and similarly for $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$.
Hence, all nodes in either $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$ or $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$ have the $(f+1)$-reachable property, or the union of the two sets contains at least $f + 1$ nodes having the $(f+1)$-reachable property. Since there are at most $f$ malicious nodes,
for all cases, there must exist a normal node in the union of $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$ and $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$ such that it has at least $f +1$ independent paths originating from different nodes outside of its set and these paths do not have any internal node in $\mathcal{A}$.
Suppose that normal
node $i\in \mathcal{Z}_1(k_\epsilon,\epsilon_0)\cap \mathcal{N}$ has the $(f+1)$-reachable property. Thus, node $i$ has at least $f+1$ neighbors within $l$ hops outside set $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$, i.e., the values of these neighbors are smaller than $x_i[k_\epsilon]$ and are at
most equal to $\overline{\omega}- \epsilon_0$. Moreover,
the original values of these multi-hop neighbors of node $i$ will definitely reach node $i$ even if the source nodes are malicious (since the internal nodes of these paths are all normal and they relay the values as received, without making any changes).
Hence, node $i$ will use at least one of these values to update its own. This is because in step 2(c) of Algorithm 1, node $i$ removes only those values lower than its own for which the cardinality of the minimum message cover is at most $f$. As a result, among the neighbors within $l$ hops outside set $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$, the values from at most $f$ of them will be disregarded by node $i$.
Now, in Algorithm 1, the update rule \eqref{msrupdate} of step 3 is applied. Here, each coefficient of the neighbors is lower bounded by $\alpha$.
Since the largest value that node $i$ will use at time $k_\epsilon$ is at most $\overline{x}^N[k_\epsilon]$, placing the largest possible weight on $\overline{x}^N[k_\epsilon]$ yields
\begin{equation*}
\begin{aligned}
&x_i[k_\epsilon+1] \leq (1-\alpha)\overline{x}^N[k_\epsilon]+\alpha(\overline{\omega}-\epsilon_0)\\
&~~\leq (1-\alpha)(\overline{\omega}+\epsilon)+\alpha(\overline{\omega}-\epsilon_0) \leq \overline{\omega}-\alpha\epsilon_0+(1-\alpha)\epsilon.
\end{aligned}
\end{equation*}
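This worst-case bound can be verified numerically; the parameter values below are illustrative.

```python
# Numerical check of the one-step contraction bound: if node i places
# weight alpha on a value <= w_bar - eps0 and the remaining weight
# 1 - alpha on the largest usable value x_bar <= w_bar + eps, then
# x_i[k+1] <= w_bar - alpha*eps0 + (1 - alpha)*eps.
alpha, w_bar, eps0, eps = 0.2, 1.0, 0.5, 0.01   # illustrative values
x_bar = w_bar + eps                              # largest usable value
x_next = (1 - alpha) * x_bar + alpha * (w_bar - eps0)  # worst case
assert x_next <= w_bar - alpha * eps0 + (1 - alpha) * eps + 1e-12
```

The worst case is attained with equality, which is why the bound is tight for this convex combination.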
Note that this upper bound also applies to the updated value
of any normal node not in $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$, because such
a node will use its own value in its update. Similarly, if node
$i\in \mathcal{Z}_2(k_\epsilon,\epsilon_0)\cap \mathcal{N}$ has the $(f+1)$-reachable property, then $x_i[k_\epsilon+1]\geq \underline{\omega}+\alpha\epsilon_0-(1-\alpha)\epsilon$. Again, any normal node not in $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$ will have the same lower bound.
Next, consider the sets $\mathcal{Z}_1(k_\epsilon+1,\epsilon_1)$ and $\mathcal{Z}_2(k_\epsilon+1,\epsilon_1)$. Since $\epsilon_1<\epsilon_0$, these two sets are still disjoint. Since at least one of the normal nodes in $\mathcal{Z}_1(k_\epsilon,\epsilon_0)$ decreases to $\overline{\omega}-\epsilon_1$ or below, or at least one of the normal nodes in $\mathcal{Z}_2(k_\epsilon,\epsilon_0)$
increases to $\underline{\omega}+\epsilon_1$ or above, it must hold that
$\left| \mathcal{Z}_1(k_\epsilon+1,\epsilon_1)\cap \mathcal{N}\right|< \left| \mathcal{Z}_1(k_\epsilon,\epsilon_0)\cap \mathcal{N}\right|$, $\left| \mathcal{Z}_2(k_\epsilon+1,\epsilon_1)\cap \mathcal{N}\right|< \left| \mathcal{Z}_2(k_\epsilon,\epsilon_0)\cap \mathcal{N}\right|$, or both. Recall that $0 < \epsilon_{\gamma} < \epsilon_{\gamma-1}\leq \epsilon_0$. As long as there are still normal nodes in
$\mathcal{Z}_1(k_\epsilon+\gamma,\epsilon_\gamma)$ and/or $\mathcal{Z}_2(k_\epsilon+\gamma,\epsilon_\gamma)$, we can repeat the above analysis for time step $k_\epsilon+\gamma$, which will result in either $\left| \mathcal{Z}_1(k_\epsilon+\gamma,\epsilon_\gamma)\cap \mathcal{N}\right|< \left| \mathcal{Z}_1(k_\epsilon+\gamma-1,\epsilon_{\gamma-1})\cap \mathcal{N}\right|$, $\left| \mathcal{Z}_2(k_\epsilon+\gamma,\epsilon_\gamma)\cap \mathcal{N}\right|< \left| \mathcal{Z}_2(k_\epsilon+\gamma-1,\epsilon_{\gamma-1})\cap \mathcal{N}\right|$, or both.
Since $\left| \mathcal{Z}_1(k_\epsilon,\epsilon_0)\cap \mathcal{N}\right|+\left| \mathcal{Z}_2(k_\epsilon,\epsilon_0)\cap \mathcal{N}\right| \leq {n_N}$,
there must be some time step $k_\epsilon + T$ (with $T\leq {n_N}$) such that either
$\mathcal{Z}_1(k_\epsilon+ T,\epsilon_T)\cap \mathcal{N} $ or $\mathcal{Z}_2(k_\epsilon+ T,\epsilon_T)\cap \mathcal{N} $ is empty. In the former case, all normal nodes in the network at time step
$k_\epsilon+T$ have values at most $\overline{\omega}-\epsilon_T$, while in the latter case all
normal nodes at time step $k_\epsilon + T$ have values
no less than $\underline{\omega}+\epsilon_T$.
By \eqref{epsilon_positive} and $T\leq n_N$, it holds that $\epsilon_T > 0$.
Hence, we have a contradiction to the fact that the largest value monotonically
converges to $\overline{\omega}$ (in the former case) or that the smallest
value monotonically converges to $\underline{\omega}$ (in the latter case). Therefore, no $\epsilon_0>0$ with $\overline{\omega}-\epsilon_0 > \underline{\omega}+\epsilon_0$ can exist, and it must hold that $\overline{\omega} = \underline{\omega}$.
\end{proof}
We emphasize that the graph condition based on the notion of robustness with $l$ hops is tight for our MW-MSR algorithm. Our notion captures the capability of agents to be influenced from outside of their sets in the multi-hop setting.
We note that in \cite{su2017reaching}, an idea similar to robustness is proposed, and based on it, a tight necessary and sufficient condition for Byzantine consensus using an MSR-type algorithm with multi-hop communication is provided. However, the focus there is on the Byzantine model and the condition is expressed in terms of
the subgraph consisting of only the normal nodes.
Part of the reason to focus on only the subgraph of normal nodes is that Byzantine nodes are more adversarial compared to malicious nodes as they can send different values to different neighbors. Hence, the subgraph of normal nodes has to be sufficiently robust to fight against the possible attacks.
The condition there is an extension of the one for the one-hop case shown in \cite{vaidya2012iterative}. Moreover, to meet the condition there, each node in $\mathcal{G}$ must have at least $2f+1$ incoming edges. This is different from the case for the malicious model studied in this paper, where at least $2f$ incoming edges are required. Further discussions on the minimum requirement for our algorithm to guarantee resilient consensus are given in Section \ref{properties}.
\section{Asynchronous Network}
In this section, we analyze the MW-MSR algorithm under asynchronous updates with time delays in the communication among nodes.
We employ the control input taking into account possible delays in the values from the neighbors as
\begin{equation}
u_i[k]=\sum_{j\in \mathcal{N}_i^{l-}} a_{ij}[k]x_j^P[k-\tau_{ij}^P[k]],
\end{equation}
where $x_j^P[k]$ denotes the value of node $j$ at time $k$ sent along path $P$ and $\tau_{ij}^P[k]\in \mathbb{Z}_+$ is the delay in this $(j,i)$-path $P$ at time $k$.
The delays are time varying and may be different in each path, but we assume the common upper bound $\tau$ on any \textit{normal} path $P$, over which all internal nodes are normal, as
\begin{equation}
0\leq \tau_{ij}^P[k] \leq \tau,\medspace j\in \mathcal{N}_i^{l-}, \medspace k\in \mathbb{Z}_+.
\end{equation}
Hence, each normal node $i$ becomes aware of the value
of each of its normal $l$-hop neighbors $j$ on each normal $(j,i)$-path $P$ at least once in $\tau$ time steps, but
possibly at different time instants \cite{dibaji2017resilient}. Although we impose this bound on the delays for the transmission of messages, the normal nodes need neither the value of this bound nor the knowledge of whether a path $P$ is normal or not.
The structure of the asynchronous MW-MSR algorithm can be outlined as follows.
At each time $k$, each normal node $i$ chooses whether or not to update. If it chooses not to update, then it keeps its value as $x_i[k+1]=x_i[k]$.
Otherwise, it uses the most recently received values of all its $l$-hop neighbors on each $l$-hop path to update its value using the MW-MSR algorithm in Algorithm~1. As in the one-hop MSR algorithm, if node $i$ does not receive any value along some path $P$ originating from its $l$-hop neighbor $j$ (i.e., the crash model), then node $i$ treats the value on path $P$ as empty and discards it when it applies the MW-MSR algorithm.
As we discussed earlier in Section II-E, in the asynchronous case also, manipulating message paths is equivalent to manipulating message values only and hence can be disregarded in our analysis.
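The asynchronous update logic described above can be sketched as follows. The data structures and the placeholder for the MW-MSR step below are hypothetical; each node caches the most recently received value per path and skips paths on which nothing has arrived.

```python
# Minimal sketch of the asynchronous update logic (hypothetical data
# structures): a node either keeps its value or feeds the most recently
# received per-path values into an MW-MSR step, skipping empty paths.
def async_step(node, cache, updating, mwmsr_update):
    """cache: dict mapping (neighbor, path) -> last received value,
    or None if no value has arrived on that path (crash model).
    mwmsr_update: callable standing in for Algorithm 1."""
    if not updating:                      # node keeps its value
        return node["x"]
    received = [v for v in cache.values() if v is not None]
    return mwmsr_update(node["x"], received)

# Toy usage with a trivial averaging placeholder for the MW-MSR step.
node = {"x": 0.5}
cache = {("j1", "p1"): 0.4, ("j2", "p2"): None, ("j3", "p3"): 0.6}
keep = async_step(node, cache, updating=False, mwmsr_update=None)
avg = async_step(node, cache, updating=True,
                 mwmsr_update=lambda x, rec: sum([x] + rec) / (1 + len(rec)))
```

Here the `None` entry plays the role of an empty value along a crashed path, which is discarded before the trimming step is applied.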
Let $D[k]$ be a diagonal matrix whose $i$th entry is
given by $d_i[k]=\sum_{j=1}^{n} a_{ij}[k].$ Then,
let the matrices $A_\gamma[k]\in \mathbb{R}^{n\times n}$ for $ 0\leq \gamma \leq \tau$ and $L_{\tau}[k]\in \mathbb{R}^{n\times (\tau +1)n}$ be given by
\begin{equation}
\big( A_\gamma[k] \big)_{ij}=\left\{
\begin{array}{lll}
a_{ij}[k] &\textup{if} \thinspace i\neq j \thinspace\textup{and}\thinspace \tau_{ij}[k]=\gamma,\\
0 & \textup{otherwise,}
\end{array}
\right.
\end{equation}
and $L_{\tau}[k]=\Big[ D[k]-A_0[k] \medspace -A_1[k] \medspace \cdots \medspace -A_{\tau}[k] \Big]$, respectively.
Note that each row of $L_{\tau}[k]$ sums to zero.
The delay $\tau_{ij}[k]$ will be set to be one of the delays $\tau_{ij}^P[k]$ corresponding to the normal paths as we discuss further later.
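The construction of $D[k]$, $A_\gamma[k]$, and $L_\tau[k]$, and the zero row-sum property, can be illustrated on a small example; the weights and delays below are hypothetical.

```python
import numpy as np

# Sketch of the delayed-Laplacian construction at a fixed time k on an
# illustrative 3-node example: a[i][j] holds the weight a_ij[k] and
# tau[i][j] the delay tau_ij[k] on the chosen normal (j,i)-path.
n, tau_max = 3, 2
a = np.array([[0.0, 0.4, 0.3],
              [0.5, 0.0, 0.2],
              [0.1, 0.6, 0.0]])
tau = np.array([[0, 1, 2],
                [0, 0, 1],
                [2, 0, 0]])

D = np.diag(a.sum(axis=1))                    # d_i[k] = sum_j a_ij[k]
A = [np.where((tau == g) & ~np.eye(n, dtype=bool), a, 0.0)
     for g in range(tau_max + 1)]             # A_gamma[k], one per delay
L = np.hstack([D - A[0]] + [-A[g] for g in range(1, tau_max + 1)])

assert L.shape == (n, (tau_max + 1) * n)
assert np.allclose(L.sum(axis=1), 0.0)        # each row of L_tau sums to 0
```

Each weight $a_{ij}[k]$ is assigned to exactly one $A_\gamma[k]$ according to its delay, so the row sums of $L_\tau[k]$ cancel by construction.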
Now, the control input can be expressed as
\begin{equation}
\begin{array}{lll}
u^N[k] =-L_{\tau}^N[k]z[k],\\
u^F[k] : \textup{arbitrary,}
\end{array}
\end{equation}
where $z[k]= [x[k]^T x[k-1]^T \cdots x[k-\tau]^T]^T$ is a $(\tau+1)n$-dimensional vector for $k\geq0$ and $L_{\tau}^N[k]$ is a matrix formed by the first $n_N$ rows of $L_{\tau}[k]$. Here, to simplify the discussion, we assume that $z[0]$ consists of the given initial values of the agents. Then, the agent dynamics can be written as
\begin{equation} \label{system2}
x[k+1]=\Gamma[k] z[k] + \begin{bmatrix} 0 \\ I_{n_F} \end{bmatrix}u^F[k],
\end{equation}
where $\Gamma[k]$ is an $n\times(\tau+1)n$ matrix given by $\Gamma[k] = \begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} L_{\tau}^N[k]^T & 0 \end{bmatrix}^T. $
The main result of this section now follows.
Here, the safety interval differs from the synchronous case and is given by
\begin{equation} \label{safety2}
\mathcal{S}_{\tau}=\Big[ \min z^N[0], \max z^N[0] \Big].
\end{equation}
\begin{theorem}\label{asyntheorem}
Consider a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with $l$-hop communication, where each normal node updates its value according to the asynchronous MW-MSR algorithm with parameter
$f$. Under the $f$-total malicious model, resilient asymptotic
consensus is achieved
only if the underlying graph is
$(f + 1, f + 1)$-robust with $l$ hops. Moreover, if the underlying graph
is $(2f + 1)$-robust with $l$ hops, then resilient consensus is attained
with the safety interval given by \eqref{safety2}.
\end{theorem}
See the Appendix for the proof of this theorem.
\begin{remark}
In comparison to the synchronous update case studied in the previous section, the graph condition to achieve resilient consensus under asynchronous updates with delays is more restrictive. This is because when the nodes update asynchronously, the normal nodes may not receive the same values from the malicious nodes, which creates a more adversarial situation for resilient consensus.
Moreover, in the next section, in Corollary \ref{asynapplysyn}, we prove that under the $f$-total model, a graph which is $(2f + 1)$-robust with $l$ hops is also $(f + 1, f + 1)$-robust with $l$ hops.
For instance, the graph in Fig.~\ref{graph3} is $3$-robust with $2$ hops and hence $(2,2)$-robust with $2$ hops.
Thus, as expected, the sufficient condition for the asynchronous algorithm guaranteeing resilient consensus is also sufficient for the synchronous algorithm.
\end{remark}
\begin{remark}
The difference in the network requirements
discussed above for the synchronous and asynchronous
algorithms is partly a consequence of
the deterministic nature of the transmission times.
In \cite{dibaji2018resilient}, it is shown that for an asynchronous algorithm with one-hop
communication without delays, we can recover the tight necessary and
sufficient network condition of the synchronous case, i.e., the graph
to be $(f+1,f+1)$-robust under the $f$-total malicious model.
It is interesting that this result requires \textit{randomization}
in agents' communication instants whereas
in the deterministic case, the sufficient graph condition remains
$(2f+1)$-robustness, which coincides with the implication of
Theorem \ref{asyntheorem}. In the multi-hop setting, however, delays are
critical and hence we do not pursue such results in this paper.
\end{remark}
\begin{figure}[t]
\centering
\includegraphics[width=1.35in]{6node}
\vspace{-7pt}
\caption{The graph is not $2$-robust with one hop, but it is $3$-robust with $2$ hops and $(2,2)$-robust with $2$ hops.}
\label{graph3}
\vspace*{-3.5mm}
\end{figure}
\section{Discussions on Graph Robustness with Multi-hop Communication}\label{properties}
In this section, we demonstrate some properties of graph robustness with $l$ hops, which generalize the corresponding properties of one-hop robustness in \cite{leblanc2013resilient} and reduce to them when $l=1$. Moreover, we provide an analysis of graph robustness with $l$ hops for the case where $l$ is sufficiently large. This corresponds to the case of unbounded path length.
\subsection{Properties of Robustness with l Hops}
As discussed earlier, our definition of robustness with $l$ hops is a generalization of the definition of robustness with one-hop communication from \cite{leblanc2013resilient}.
Here, we are interested in investigating how properties of robustness with the one-hop case can be extended to the multi-hop case.
In what follows, we present a series of lemmas that analyze the generalized notion of robustness.
Recall that robustness with $l$ hops is defined w.r.t. a given set $\mathcal{F}$ satisfying the $f$-total model; we omit mentioning this in this section when it is clear from the context. Typically, we are interested in $f=r-1$ for $r$-robustness.
The first result is simple, stating that $(r,s)$-robustness with $l$ hops implies robustness with $l$ hops for smaller values of $r$ and $s$.
\begin{lemma}
If a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is $(r, s)$-robust with $l$ hops, then it is also $(r', s')$-robust with $l$ hops when $0 \leq r' \leq r \ \text{and} \ 1 \leq s' \leq s$.
\end{lemma}
In the second result, we show that the level of robustness of a given graph with $l$ hops does not decrease by adding edges to the graph nor by increasing the relay range $l$.
\begin{lemma}
Suppose that a directed subgraph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ of $\mathcal{G}' = (\mathcal{V},\mathcal{E}')$ is $(r, s)$-robust with $l$ hops, where $\mathcal{E} \subseteq\mathcal{E}'$. Then $\mathcal{G}'$ is $(r, s)$-robust with $l$ hops.
Moreover, $\mathcal{G}$ is $(r, s)$-robust with $l'$ hops, where $l'\geq l$.
\end{lemma}
The maximum robustness with $l$ hops for a graph consisting of $n$ nodes is the same as that for the one-hop case. The following lemma suggests that the bound $n \geq 2f+1$ cannot be breached by introducing multi-hop communication to the MSR algorithms. This bound is due to the nature of the MSR algorithms, which cannot tolerate the case where half or more of the agents are adversarial.
\begin{lemma}\label{maxrobust}
No directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ on $n$ nodes is $(
\lceil n/2\rceil + 1)$-robust with $l$ hops. Moreover, the complete graph $\mathcal{K}_n$ is $(
\lceil n/2\rceil, s)$-robust with $l$ hops for $1 \leq s \leq n$.
\end{lemma}
\begin{proof}
We consider the nontrivial case with $n\geq 3$. Then pick $\mathcal{V}_1$ and
$\mathcal{V}_2$ by taking any bipartition of $\mathcal{V}$ (i.e., $\mathcal{V}_1\cap \mathcal{V}_2 =\emptyset$ and $\mathcal{V}_1 \cup \mathcal{V}_2 =\mathcal{V}$)
such that $|\mathcal{V}_1| = \lceil n/2\rceil$ and $|\mathcal{V}_2| = \lfloor n/2\rfloor$. Neither $\mathcal{V}_1$ nor $\mathcal{V}_2$ has
$\lceil n/2\rceil+1$ nodes; thus, neither one has a node that is $(\lceil n/2\rceil+1)$-reachable with $l$-hop communication. Hence, $\mathcal{G}$ is not $(\lceil n/2\rceil+1)$-robust with $l$ hops.
The proof for the complete graph case follows an analysis similar to the one for the one-hop case \cite{leblanc2013resilient}.
\end{proof}
The following lemma shows that the notion of $(r, s)$-robust graphs has a more complicated structure in terms of the relation between the two parameters $r$ and $s$.
\begin{lemma}\label{2fplus1}
If a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is $(r, s)$-robust with $l$ hops under the $f$-total model, then it is also $(r-1, s+1)$-robust with $l$ hops under the $f$-total model.
\end{lemma}
\begin{proof}
Consider any nonempty disjoint subsets $\mathcal{V}_1, \mathcal{V}_2\subset \mathcal{V}$. If $|\mathcal{Z}_{\mathcal{V}_a}^r| = |\mathcal{V}_a|$ or $|\mathcal{Z}_{\mathcal{V}_a}^{r-1}| = |\mathcal{V}_a|$ for $a=1, 2$, then this pair of subsets satisfies conditions 1) or 2) of $(r-1,s+1)$-robustness with $l$ hops. Thus, assume $|\mathcal{Z}_{\mathcal{V}_a}^r| < |\mathcal{V}_a|$ and $|\mathcal{Z}_{\mathcal{V}_a}^{r-1}| < |\mathcal{V}_a|$ for $a=1, 2$. Then condition 3) for $(r,s)$-robustness with $l$ hops must be satisfied, i.e., $| \mathcal{Z}_{\mathcal{V}_1}^r| +| \mathcal{Z}_{\mathcal{V}_2}^r| \geq s$.
Since $s\geq1$, at least one of $\mathcal{Z}_{\mathcal{V}_1}^r$, $\mathcal{Z}_{\mathcal{V}_2}^r$ is nonempty. Suppose that $\mathcal{Z}_{\mathcal{V}_1}^r$ is nonempty. Then, we choose $i\in \mathcal{Z}_{\mathcal{V}_1}^r$. Pick the new pair of nonempty disjoint subsets as $\mathcal{V}_1'= \mathcal{V}_1\setminus \{i\}$ and $\mathcal{V}_2'= \mathcal{V}_2$.
Observe that if $j \in \mathcal{Z}_{\mathcal{V}_1'}^r$ then $j \in \mathcal{Z}_{\mathcal{V}_1}^{r-1}$.
This is because node $i$ is the only difference between $\mathcal{V}_1'$ and $\mathcal{V}_1$, and node $i$ can be part of at most one independent path with destination node $j$.
Consequently, even if node $i$ is on one independent path with destination node $j$, node $j$ still has the $(r-1)$-reachable property, i.e., it has at least $r-1$ other independent paths of at most $l$ hops originating from nodes outside $\mathcal{V}_1$ such that these paths do not have any internal nodes from the set $\mathcal{F}$. Then, by adding $i\in \mathcal{Z}_{\mathcal{V}_1}^r\subseteq \mathcal{Z}_{\mathcal{V}_1}^{r-1}$ back into the set $\mathcal{V}_1$, we have
\begin{equation} \label{ineq1}
|\mathcal{Z}_{\mathcal{V}_1}^{r-1}| \geq |\mathcal{Z}_{\mathcal{V}_1'}^r| +1 .
\end{equation}
Moreover, $|\mathcal{Z}_{\mathcal{V}_1'}^r| < |\mathcal{V}_1'|$ and $|\mathcal{Z}_{\mathcal{V}_2'}^r| < |\mathcal{V}_2'|$ (since $\mathcal{V}_2'=\mathcal{V}_2$). The first inequality holds because if $|\mathcal{Z}_{\mathcal{V}_1'}^r| = |\mathcal{V}_1'|$ held, then by the observation above (if $j \in \mathcal{Z}_{\mathcal{V}_1'}^r$ then $j \in \mathcal{Z}_{\mathcal{V}_1}^{r-1}$) we would have
$|\mathcal{Z}_{\mathcal{V}_1}^{r-1}| =|\mathcal{V}_1|$, a contradiction.
Therefore, for nonempty disjoint subsets $\mathcal{V}_1', \mathcal{V}_2'$, it must hold that $| \mathcal{Z}_{\mathcal{V}_1'}^r| +| \mathcal{Z}_{\mathcal{V}_2'}^r| \geq s$.
Together with \eqref{ineq1}, we have
\begin{equation*}
| \mathcal{Z}_{\mathcal{V}_1}^{r-1}| +| \mathcal{Z}_{\mathcal{V}_2}^{r-1}| \geq | \mathcal{Z}_{\mathcal{V}_1'}^r| +| \mathcal{Z}_{\mathcal{V}_2'}^r|+1\geq s+1 ,
\end{equation*}
suggesting that $\mathcal{G}$ is $(r-1, s+1)$-robust with $l$ hops under the $f$-total model.
\end{proof}
From this result, we can directly derive the following corollary, which indicates that if a graph is $(2f+1)$-robust with $l$ hops, then it is also $(f+1,f+1)$-robust with $l$ hops.
\begin{corollary}\label{asynapplysyn}
If a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is $(r+s-1)$-robust with $l$ hops, where $1\leq r+s-1\leq \lceil n/2\rceil$, then $\mathcal{G}$ is $(r,s)$-robust with $l$ hops.
\end{corollary}
Similar to the one-hop case, robustness with $l$ hops can guarantee a certain level of graph connectivity and a minimum in-degree of the graph. The proof for the connectivity result is trivial and hence omitted.
\begin{lemma}
If a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is $r$-robust with $l$ hops, then $\mathcal{G}$ is at least $r$-connected.
\end{lemma}
\begin{lemma}\label{mini-degree}
If a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is $(r,s)$-robust with $l$ hops under the $(r-1)$-total model, where $0\leq r\leq \lceil n/2\rceil$ and $1\leq s\leq n$, then $\mathcal{G}$ has the minimum in-degree $\delta(\mathcal{G})$ as
\begin{equation*}
\delta(\mathcal{G}) \geq \left\{
\begin{array}{lll}
r+s-1 &\textup{if} \ s<r,\\
2r-2 & \textup{if}\ s\geq r.
\end{array}
\right.
\end{equation*}
\end{lemma}
\vspace{0.12cm}
\begin{proof}
The cases of $n\leq 2$ or $r\leq 1$ are trivial. Consider the case where $n\geq 3$ and $2\leq r\leq \lceil n/2\rceil$.
Fix node $i\in \mathcal{V}$. Choose $\mathcal{V}_1=\{i\}$ and $\mathcal{V}_2= \mathcal{V}\setminus \mathcal{V}_1$. Then, $| \mathcal{Z}_{\mathcal{V}_2}^r|=0$ and $| \mathcal{Z}_{\mathcal{V}_1}^r| =|\mathcal{V}_1|$. Hence, node $i$ must have at least $r$ independent paths from outside, and the number of in-neighbors of node $i$ must satisfy $|\mathcal{N}_i^-|\geq r$. (Note that the malicious nodes can be the source nodes of these independent paths.)
When $s<r$, form $\mathcal{V}_1$ by choosing $s-1$ in-neighbors of node $i$ along with node $i$ itself. Then, choose $\mathcal{V}_2= \mathcal{V}\setminus \mathcal{V}_1$.
Since $|\mathcal{V}_1| = s < r$, we have $| \mathcal{Z}_{\mathcal{V}_2}^r|=0$ and $| \mathcal{Z}_{\mathcal{V}_1}^r| =|\mathcal{V}_1|$. In the worst case, the $s-1$ in-neighbors of node $i$ in $\mathcal{V}_1$ are all malicious; node $i$ must then have at least $r$ independent paths from outside, and these paths must not contain any malicious nodes as internal nodes.
This implies that node $i$ has additional
$r$ in-neighbors outside of $\mathcal{V}_1$, thereby guaranteeing $|\mathcal{N}_i^-|\geq r+s-1$.
When $s \geq r$, form $\mathcal{V}_1$ by choosing $r-2$ in-neighbors of node $i$ along with node $i$ itself. Then, choose
$\mathcal{V}_2= \mathcal{V}\setminus \mathcal{V}_1$. Since $|\mathcal{V}_1| < r$ and $s \geq r$, we have $| \mathcal{Z}_{\mathcal{V}_2}^r|=0$ and $| \mathcal{Z}_{\mathcal{V}_1}^r| =|\mathcal{V}_1|$.
Consider the worst case that the $r-2$ in-neighbors of node $i$ are all malicious.
It must be that node $i$ has additional $r$ in-neighbors outside of $\mathcal{V}_1$, thereby guaranteeing $|\mathcal{N}_i^-|\geq 2r-2$. Since we choose $i\in \mathcal{V}$
arbitrarily, we have proved the bound for $\delta(\mathcal{G})$.
\end{proof}
Here, we have extended the proof in \cite{leblanc2013resilient} to the multi-hop case.
From Lemma \ref{mini-degree}, we conclude that
$\mathcal{G}$ should have the minimum in-degree no less than $2f$
to guarantee resilient consensus using the MW-MSR algorithm under the $f$-total malicious model. This holds since the underlying graph $\mathcal{G}$ is at least $(f+1,f+1)$-robust with $l$ hops for achieving resilient consensus.
For MSR algorithms, nodes with $2f$ in-neighbors may not use any values from neighbors, which may appear problematic especially if there are multiple such nodes. However, there are examples such as cycle graphs\footnote[3]{A cycle graph is an undirected graph consisting of only a single cycle.} satisfying $(f+1,f+1)$-robustness with $l$ hops, while every node in a cycle graph has in-degree $2$, i.e., $2f$ for $f=1$. A detailed analysis is given in the next subsection.
\subsection{The Case of Unbounded Path Length}
In this subsection, we discuss the relation between the graph conditions used in this paper and in the recent work \cite{khan2020exact}. The authors there studied Byzantine binary consensus under the local broadcast model, which is essentially equivalent to the $f$-total malicious model studied in the current paper. The proposed algorithm in \cite{khan2020exact} is based on a non-iterative flooding algorithm, where nodes must relay their values over the entire network along with the path information. This model corresponds to the case of unbounded path length in our work, i.e., $l\geq l^*$, where $l^*$ is the length of the longest cycle-free path of the network.
Moreover, they propose a tight necessary and sufficient graph condition for their algorithm to achieve binary consensus under synchronous updates.
Our aim in this part of the paper is to establish that our graph condition is equivalent to theirs for the case of unbounded path length ($l\geq l^*$). Further, we will highlight that to achieve the same tolerance as the algorithm in \cite{khan2020exact}, our algorithm does not in general require $l^*$-hop communication.
To show the equivalence between the two graph conditions, we introduce some graph notions from \cite{khan2020exact}.
There, normal nodes update their states based on a modified certified propagation algorithm \cite{tseng2015broadcast}, i.e., when a normal node receives $f+1$ identical binary values from different paths excluding a suspicious set $\mathcal{F}$, it commits to this value. Hence, their graph notion is closely related to the partitions of the sets $\mathcal{V}$ and $\mathcal{F}$.
\begin{definition}
For disjoint node sets $\mathcal{X}, \mathcal{Y}$, we say $\mathcal{X}\rightarrow \mathcal{Y}$ if and only if set $\mathcal{X}$ contains at least $f+1$ distinct incoming neighbors of $\mathcal{Y}$, i.e., $|\{i:(i, j)\in \mathcal{E}, i \in \mathcal{X}, j \in \mathcal{Y}\}| >f$. Denote $\mathcal{X}\not\rightarrow \mathcal{Y}$ when $\mathcal{X}\rightarrow \mathcal{Y}$ is not true.
\end{definition}
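The relation $\mathcal{X}\rightarrow \mathcal{Y}$ can be checked directly from the edge set; the following sketch (with a hypothetical edge list and function name) implements the definition.

```python
# X -> Y holds iff X contains more than f distinct nodes with an
# edge into Y, i.e., |{i : (i,j) in E, i in X, j in Y}| > f.
def reaches(X, Y, edges, f):
    sources = {i for (i, j) in edges if i in X and j in Y}
    return len(sources) > f

# Toy example: three distinct sources {1, 2, 3} have edges into {4, 5}.
edges = {(1, 4), (2, 4), (3, 5), (1, 5)}
assert reaches({1, 2, 3}, {4, 5}, edges, f=2)      # 3 > 2 sources
assert not reaches({1, 2, 3}, {4, 5}, edges, f=3)  # only 3 sources
```

Note that the count is over distinct source nodes in $\mathcal{X}$, not over edges, which is why node 1 contributes only once despite having two edges into $\mathcal{Y}$.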
\begin{definition}
For disjoint node sets $\mathcal{X}, \mathcal{Y}$ and for set $\mathcal{F}$, we say $\mathcal{X}\stackrel{\mathcal{F}}{\rightsquigarrow} \mathcal{Y}$ if and only if for every node $u\in \mathcal{Y}$, there exist at least $f+1$ disjoint
$\mathcal{X}u$-paths that have only $u$ in common and none of them contains any internal node from the set $\mathcal{F}$. Denote $\mathcal{X}\stackrel{\mathcal{F}}{\not\rightsquigarrow} \mathcal{Y}$ when $\mathcal{X}\stackrel{\mathcal{F}}{\rightsquigarrow} \mathcal{Y}$ is not true.
\end{definition}
Two graph notions are introduced next, called conditions NC and SC. They are known to be equivalent \cite{khan2020exact}.
\begin{definition}
\textit{(Condition NC)}
Given a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, condition NC is said to hold if for every
partition $\mathcal{L},\mathcal{C},\mathcal{R}$ of $\mathcal{V}$, and for every set $\mathcal{F}$ with $|\mathcal{F}|\leq f$,
where both $\mathcal{L} \setminus \mathcal{F}$ and $\mathcal{R} \setminus \mathcal{F}$ are non-empty, we have that either $\mathcal{R} \cup \mathcal{C}\rightarrow \mathcal{L} \setminus \mathcal{F}$ or $\mathcal{L} \cup \mathcal{C} \rightarrow \mathcal{R} \setminus \mathcal{F}$.
\textit{(Condition SC)}
Given a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, condition SC is said to hold if for every
partition $\mathcal{L},\mathcal{R}$ of $\mathcal{V}$, and for every set $\mathcal{F}$ with $|\mathcal{F}|\leq f$,
where both $\mathcal{L} \setminus \mathcal{F}$ and $\mathcal{R} \setminus \mathcal{F}$ are non-empty, we have that either $\mathcal{L}\stackrel{\mathcal{F}}{\rightsquigarrow} \mathcal{R}\setminus\mathcal{F}$ or $\mathcal{R}\stackrel{\mathcal{F}}{\rightsquigarrow} \mathcal{L}\setminus\mathcal{F}$.
\end{definition}
We are now ready to show that condition NC (and hence condition SC) is equivalent to our robust graph notion with multi-hop communication used in Theorem \ref{syn}.
\begin{proposition}\label{connectivity}
Consider a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with $l$-hop communication where $l\geq l^*$.
The graph $\mathcal{G}$ is $(f + 1, f + 1)$-robust with $l$ hops if and only if condition NC holds.
\end{proposition}
\begin{proof}
We first show the only if part. Since the graph is $(f+1,f+1)$-robust with $l$ hops under the $f$-total model, at least one of the three conditions in Definition \ref{rs-robust} holds. For every partition $\mathcal{L},\mathcal{R}$ of $\mathcal{V}$ and for every set $\mathcal{F}$ with $|\mathcal{F}|\leq f$, where both $\mathcal{L} \setminus \mathcal{F}$ and $\mathcal{R} \setminus \mathcal{F}$ are non-empty, we conclude that there is at least one normal node $i$ in the union of $\mathcal{L}$ and $\mathcal{R}$ that has $f+1$ independent paths from outside its set such that these paths do not contain any internal nodes in $\mathcal{F}$. This can be seen in the proof of Theorem \ref{syn}. Suppose that node $i\in \mathcal{L}$ has this property.
For each independent path, there exists at least one edge that goes from the outside of $\mathcal{L}$ to a node in $\mathcal{L}$, and thus, $\mathcal{R} \cup \mathcal{C}\rightarrow \mathcal{L} \setminus \mathcal{F}$. The case for $i\in \mathcal{R}$ can be proved similarly.
Next, we show the if part by contradiction. Suppose that $\mathcal{G}$ is not $(f + 1, f + 1)$-robust with $l$ hops. Then, there exist a partition $\mathcal{L},\mathcal{R}$ of $\mathcal{V}$ and a set $\mathcal{F}$ with $|\mathcal{F}|\leq f$, where both $\mathcal{L} \setminus \mathcal{F}$ and $\mathcal{R} \setminus \mathcal{F}$ are non-empty, such that none of the three conditions in Definition \ref{rs-robust} holds, i.e., all the normal nodes in the union of $\mathcal{L}$ and $\mathcal{R}$ have at most $f$ independent paths from outside their sets avoiding internal nodes in $\mathcal{F}$. Hence, $\mathcal{L}\stackrel{\mathcal{F}}{\not\rightsquigarrow} \mathcal{R}\setminus\mathcal{F}$ and $\mathcal{R}\stackrel{\mathcal{F}}{\not\rightsquigarrow} \mathcal{L}\setminus\mathcal{F}$ (i.e., condition SC does not hold), and thus we have a contradiction.
Finally, since condition SC and condition NC are equivalent, we have proved that condition NC implies the $(f + 1, f + 1)$-robustness with $l$ hops.
\end{proof}
Although our condition coincides with those in \cite{khan2020exact} when $l\geq l^*$, we note that attaining the maximum robustness of a given graph does not necessarily require $l\geq l^*$. That is, for a given graph under the $f$-total model, our algorithm may not require $l^*$-hop communication to attain the same tolerance as the algorithm using $l^*$-hop communication in \cite{khan2020exact}. We illustrate this fact with the example of cycle graphs.
\begin{figure}[t]
\centering
\includegraphics[width=1.35in]{cycle}
\vspace{-7pt}
\caption{Illustration for cycle graphs.}
\label{cyclegraph}
\vspace*{-3.5mm}
\end{figure}
As the example graph in Fig.~\ref{graph1}(a) suggests, the following lemma holds for any cycle graph. It indicates that the MW-MSR algorithm guarantees resilient consensus in such graphs when one node behaves maliciously. In \cite{khan2020exact}, it is reported that a cycle graph can tolerate one malicious node, but the algorithm there uses $l^*$-hop communication. In this respect, as the following lemma suggests, our algorithm is more efficient: it exploits the ability of the MW-MSR algorithm under a graph condition tighter than the one in \cite{khan2020exact}. Moreover, note that a cycle graph is 2-connected, which is the minimum connectivity requirement for any MSR-based algorithm to guarantee resilient consensus for $f=1$.
\begin{lemma}
The cycle graph $\mathcal{C}_n$ with $n>2$ nodes is $(2,2)$-robust with $\lceil l^*/2 \rceil$ hops under the $1$-total model.
\end{lemma}
\begin{proof}
We need to show that for any node partition $\mathcal{V}_1$,
$\mathcal{V}_2$ of $\mathcal{V}$, at least one of the conditions for $(2,2)$-robustness with $l$ hops holds. Let $\mathcal{F}$ be a set of a single node, i.e., $\mathcal{F}=\{m_1\}$ (satisfying the 1-total model).
We first select a single node as set $\mathcal{V}_1$. Then, for any set $\mathcal{F}$ and for any set $\mathcal{V}_2$, condition 1) for $(2,2)$-robustness with $l$ hops holds. (The case is similar when we select non-neighboring nodes as set $\mathcal{V}_1$.)
Second, we select two neighboring nodes as set $\mathcal{V}_1$. If node $m_1\notin \mathcal{V}_1$, then condition 1) for $(2,2)$-robustness with 2 hops holds. If node $m_1\in \mathcal{V}_1$, then node $m_1$ has 2 independent 2-hop paths originating from outside. To meet the conditions for $(2,2)$-robustness with $l$ hops, we need to find another node having this property in $\mathcal{V}_2$
(see the illustration in Fig. \ref{cyclegraph}).
The worst case is when all the remaining nodes are in $\mathcal{V}_2$. Then the middle node in $\mathcal{V}_2$ has 2 shortest paths originating from outside, which are of length $\lceil l^*/2 \rceil$ hops.
We can continue this process and select three neighboring nodes as set $\mathcal{V}_1$, following an analysis similar to the one above: if node $m_1\in \mathcal{V}_1$ and all the remaining nodes are in $\mathcal{V}_2$, then the middle node in $\mathcal{V}_2$ has 2 shortest paths originating from outside, which are shorter than $\lceil l^*/2 \rceil$ hops. This process can be continued until the roles of sets $\mathcal{V}_1$ and $\mathcal{V}_2$ are switched. Hence, we conclude that the cycle graph $\mathcal{C}_n$ is $(2,2)$-robust with $\lceil l^*/2 \rceil$ hops.
\end{proof}
\section{Numerical Examples}
In this section, we conduct numerical simulations over networks
using both synchronous and asynchronous versions of the proposed MW-MSR algorithm to verify their effectiveness.
\begin{figure}[t]
\centering
\includegraphics[width=3.2in,height=1.3in]{state_1hop}
\vspace{-7pt}
\caption{Time responses of the synchronous one-hop W-MSR algorithm.}
\label{one-hop-state}
\vspace*{-3.5mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.2in,height=1.3in]{state_2hop}
\vspace{-7pt}
\caption{Time responses of the synchronous two-hop MW-MSR algorithm.}
\label{two-hop-syn}
\vspace*{-3.5mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.2in,height=1.3in]{6_state_1hop}
\vspace{-7pt}
\caption{Time responses of the synchronous one-hop W-MSR algorithm.}
\label{6_one-hop-state}
\vspace*{-3.5mm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[\scriptsize{The states of all nodes.}]{
\includegraphics[width=3.2in,height=1.3in]{6_state_2hop}
}
\vspace{-7pt}
\subfigure[\scriptsize{Consensus error.}]{
\includegraphics[width=3.2in,height=1.3in]{6_error_2hop}
}
\vspace{-7pt}
\caption{Time responses of the synchronous two-hop MW-MSR algorithm.}
\label{6_two-hop-syn}
\vspace*{-3.5mm}
\end{figure}
\subsection{Synchronous MW-MSR Algorithm}
In this part, we conduct simulations for the synchronous MW-MSR algorithm.
Consider the undirected network in Fig.~\ref{graph1}(a) with $f=1$.
Let the initial states be $x[0]=[1\ 2\ 4\ 6]^T$.
This graph is not $(2, 2)$-robust with one hop, and hence, is not robust enough to tolerate $f=1$ using the conventional one-hop W-MSR algorithm.
Here we set node 1 to be malicious and let its value evolve as a sine function of time.
Then, normal nodes update their values using the one-hop W-MSR algorithm.
The results are given in Fig.~\ref{one-hop-state}, and resilient consensus among normal nodes is not achieved.
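For readers unfamiliar with the trimming idea, the following sketch shows a plain one-hop W-MSR update for a single normal node. This is a simplified illustration only: the MW-MSR algorithm in this paper trims sets of multi-hop values via their minimum message cover, which this toy function does not model, and the function name `wmsr_step` and the equal-weight averaging are our own illustrative choices.

```python
def wmsr_step(own, received, f):
    """Simplified one-hop W-MSR update for one normal node: remove up to f
    received values strictly above own and up to f strictly below, then
    average the remaining values together with the node's own value."""
    keep = list(received)
    # trim up to f of the largest values strictly greater than own
    for v in sorted((v for v in received if v > own), reverse=True)[:f]:
        keep.remove(v)
    # trim up to f of the smallest values strictly less than own
    for v in sorted(v for v in received if v < own)[:f]:
        keep.remove(v)
    keep.append(own)
    return sum(keep) / len(keep)
```

For example, with own value 2, received values [1, 4, 6], and $f=1$, the values 6 and 1 are trimmed and the update returns $(4+2)/2 = 3$.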
Next, consider the two-hop case for this graph. We assume that malicious node 1 not only manipulates its own value as in the one-hop case, but also relays false information. Specifically, when node 1 receives a message from node 4 and relays the value $x_4[k]$ to node 2, it replaces this value with a sine function of time. Similarly, when node 1 receives a message from node 2 and relays the value $x_2[k]$ to node 4, it replaces this value with the fixed value 0.5.
Then, we observe that resilient consensus is achieved as shown in Fig.~\ref{two-hop-syn}.
\subsection{Asynchronous MW-MSR Algorithm}
In this part, we conduct simulations for the asynchronous MW-MSR algorithm.
Consider the directed network in Fig.~\ref{graph3} with $f=1$.
Let the nodes take initial states as $x[0]=[3\ 5\ 1\ 7\ 3\ 9]^T$.
This graph is not $2$-robust with one hop (e.g., consider the sets $\{1,3,5\}$ and $\{2,4,6\}$), and hence, is not robust enough to tolerate $f=1$ using the one-hop W-MSR algorithm.
Here, we set node 1 to be malicious and assume that its value evolves as a sine function of time.
Then, we apply the one-hop W-MSR algorithm and observe that resilient consensus among normal nodes is not achieved as shown in Fig.~\ref{6_one-hop-state}.
Next, consider the two-hop case for this graph. It becomes $3$-robust with $2$ hops, and hence, it is also $(2,2)$-robust with $2$ hops as Corollary \ref{asynapplysyn} indicates.
Therefore, in the two-hop case, it can tolerate one malicious node under both synchronous and asynchronous updates.
We assume that malicious node 1 not only manipulates its own value as in the one-hop case, but also relays false information. Specifically, when node 1 receives a message from node 3 and relays the value $x_3[k]$ to its other neighbors, it replaces this value with the fixed value 0.5. Similarly, when node 1 relays the value $x_6[k]$ received from node 6, it replaces it with the fixed value 9.5, and when it relays the value $x_5[k]$ received from node 5, it replaces it with a sine function of time.
In Fig.~\ref{6_two-hop-syn}, we plot the consensus error given by $\Delta x_0[k]= \max x^N[k]- \min x^N[k]$ and observe that resilient consensus is achieved.
Lastly, we examine the two-hop algorithm under asynchronous updates with delays. We consider the same attack, but let each normal node update in a periodic manner. Specifically, nodes 2, 3, 4, 5, and 6 update in every 1, 5, 4, 3, and 2 steps, respectively. The delays for the messages from one-hop neighbors and two-hop neighbors are set as 0 and 1 step, respectively.
Fig.~\ref{6_two-hop-asyn} shows the states as well as the consensus error given by $\Delta x_\tau[k]= \max z^N[k]- \min z^N[k]$. It indicates that consensus is attained despite the malicious attacks.
Note that in this situation, we can only guarantee the nonincreasing property of $\Delta x_\tau[k]$ (with $\tau=5$).
Through these simulations, we have verified the effectiveness of the MW-MSR algorithms to achieve resilient consensus in small-scale networks.
\begin{figure}[t]
\centering
\subfigure[\scriptsize{The states of all nodes.}]{
\includegraphics[width=3.2in,height=1.3in]{6_state_2hop_asyn}
}
\vspace{-7pt}
\subfigure[\scriptsize{Consensus error.}]{
\includegraphics[width=3.2in,height=1.3in]{6_error_2hop_asyn}
}
\vspace{-7pt}
\caption{Time responses of the asynchronous two-hop MW-MSR algorithm.}
\label{6_two-hop-asyn}
\vspace*{-3.5mm}
\end{figure}
\subsection{Simulations in Large Wireless Sensor Networks}
In this part of the simulations, we create a WSN composed of 100 nodes located in a grid structure as shown in Fig.~\ref{location}. Let the nodes take indices $0, 1,\dots, 99$, with node $i$ located at coordinates $(i \bmod 10, \lfloor i/10 \rfloor)$.
Each node can communicate only with the nodes located within the communication radius of $r$.
Once $r$ is determined, the topology of the network is formed.
Then we apply the one-hop, two-hop, and three-hop MW-MSR algorithms to the network.
Recall that $f$ denotes the maximum number of malicious nodes in the network, and we increase $f$ from 0 to 11 by selecting the malicious nodes with indices in the order of $32, 34, 36, 38, 43, 62, 64, 66, 68, 74, 14$.
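The grid topology just described can be reproduced as follows. This is a sketch: `build_grid_wsn` is a hypothetical helper name, and the neighbor test simply compares Euclidean distance against the communication radius $r$.

```python
import math

def build_grid_wsn(n_side=10, r=1.2):
    """Place n_side*n_side nodes on a grid, node i at (i mod n_side,
    i // n_side), and connect two nodes iff their Euclidean distance
    is at most the communication radius r."""
    n = n_side * n_side
    pos = {i: (i % n_side, i // n_side) for i in range(n)}
    nbrs = {i: [j for j in range(n)
                if j != i and math.dist(pos[i], pos[j]) <= r]
            for i in range(n)}
    return pos, nbrs
```

With $r = 1.2$ the corner node 0 has two neighbors and interior nodes have four; increasing $r$ past $\sqrt{2}$ (e.g., $r = 1.5$) adds the diagonal links.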
\begin{figure}[]
\centering
\includegraphics[width=1.8in]{location}
\vspace{-5pt}
\caption{The 100-node sensor network. The red nodes are set as malicious one by one as $f$ increases up to 11.}
\label{location}
\end{figure}
Here, we examine how the network connectivity affects the performance of the MW-MSR algorithms with different hops. Using different values for the number of malicious agents $f$ and the communication radius $r$, the results of the one-hop algorithm are presented in Fig.~\ref{success}(a).
For each $f$ and each $r$ (corresponding to one cell in the figure), we compute the success rate of the algorithm to achieve resilient consensus over 50 Monte Carlo runs with randomly chosen initial values (within [0, 100]) of the normal agents for each run.
The malicious nodes take values based on sine functions w.r.t. time.
Similarly, we also conduct simulations for the two-hop and the three-hop algorithms and the results are given in Figs.~\ref{success}(b) and (c), respectively.
One can see that by increasing the number of hops $l$, the success rate of achieving resilient consensus increases for almost every value of $f$. The improvement is especially significant when $f\leq6$ and $r\leq3$. This verifies our intuition as well as the theoretical finding that graph robustness increases as $l$ increases.
However, the difference between the results for the two-hop and the three-hop algorithms is minor. One reason is that the maximum robustness of a given graph under the $f$-total model is bounded by the minimum in-degree $2f$. This may indicate that two-hop communication already attains the number of hops needed for the maximum robustness of this graph.
\begin{figure}[t]
\centering
\subfigure[\scriptsize{One-hop algorithm.}]{
\includegraphics[width=3.2in,height=1.3in]{1hop_newiter70_monte50}
}
\vspace{-7pt}
\subfigure[\scriptsize{Two-hop algorithm.}]{
\includegraphics[width=3.2in,height=1.3in]{2hop_newiter70_monte50}
}
\vspace{-7pt}
\subfigure[\scriptsize{Three-hop algorithm.}]{
\includegraphics[width=3.2in,height=1.3in]{3hop_newiter70_monte50}
}
\caption{Success rate of the MW-MSR algorithm.}
\label{success}
\vspace*{-3.5mm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[\scriptsize{Consensus error for $f=0, r=1.2$.}]{
\includegraphics[width=3.2in,height=1.3in]{error_f0_r1.2}
}
\vspace{-7pt}
\subfigure[\scriptsize{Consensus error for $f=1, r=1.2$.}]{
\includegraphics[width=3.2in,height=1.3in]{error_f1_r1.2}
}
\vspace{-7pt}
\subfigure[\scriptsize{Consensus error for $f=9, r=3.1$.}]{
\includegraphics[width=3.2in,height=1.3in]{error_f9_r3.1}
}
\caption{Consensus error of the MW-MSR algorithm.}
\label{error_3algorithms}
\vspace*{-3.5mm}
\end{figure}
In these simulations, success is determined by the level of the consensus error at time $k=70$: if the consensus error is below the threshold $c=1$, the run is counted as a success. Hence, a very slow consensus process may be counted as a failure.
For example, observe that for $f = 0$ (i.e., with no attacks) the success rate keeps increasing with $r$ up to $r = 1.5$, even though the network is already connected for $r>1$.
We should remark that the process of consensus forming can be accelerated by increasing the number of hops, as discussed in \cite{jin2006multi} for the fault-free case. This can be clearly seen in all plots in Fig.~\ref{error_3algorithms}, which present the time responses of the consensus errors for several cases of $f$ and $r$, where $\Delta x1$, $\Delta x2$, and $\Delta x3$ stand for the consensus errors of the one-hop, two-hop, and three-hop algorithms, respectively. Based on these examples, we conclude that introducing multi-hop communication into MSR algorithms not only improves the robustness of the network but also accelerates convergence to consensus, even in adversarial environments.
\section{Conclusion}
In this paper, we have investigated the resilient consensus problem when multi-hop communication is available.
We have proposed generalized versions of MSR algorithms to correctly use the additional values received from multi-hop neighbors.
Moreover, we have fully characterized the network requirement for the algorithms in terms of robustness with $l$ hops.
By introducing multi-hop communication, the convergence of the resilient consensus process can be accelerated. Furthermore, it provides an effective way to enhance robustness of networks without increasing physical communication links.
In future works, we intend to extend our algorithms to the asynchronous Byzantine consensus problem using multi-hop communication with a fixed number of hops.
\appendices
\section*{Appendix}
\section*{Proof of Theorem \ref{asyntheorem}}
\begin{proof}
(Necessity) Since the synchronous algorithm is a special case of the asynchronous algorithm, the necessary condition in Theorem \ref{syn} also holds here.
(Sufficiency) First, we show the safety condition.
For $k = 0$, by the assumption on $z[0]$, it holds that $z^N[0]\in \mathcal{S}_{\tau}$, and thus $x_i[0] \in \mathcal{S}_{\tau}$, $\forall i \in \mathcal{N}$.
Next, for $k \geq 0$, let $\overline{x}^N_\tau[k]$ and $\underline{x}^N_\tau[k]$ be the largest value and the smallest value, respectively, of the normal agents from time $k, k-1, \dots, k-\tau$. That is,
\begin{equation}
\begin{array}{lll}
\overline{x}^N_\tau[k] =\max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right),\\
\underline{x}^N_\tau[k] = \min \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right).
\end{array}
\end{equation}
Then we prove that $\overline{x}^N_\tau[k]$ is a nonincreasing function of $k \geq 0$.
By \eqref{system2}, at time $k \geq 0$, each normal agent updates its value based on a convex combination of the neighbors' values from $k$ to $k-\tau$.
Moreover, the values outside of the interval determined by the normal agents' values $[\underline{x}^N_\tau[k], \overline{x}^N_\tau[k]]$ will be ignored by step 2 of the MW-MSR algorithm. This is because in this step, node $i$ will remove the largest sized subsets of large and small values that can be manipulated by at most $f$ nodes within $l$ hops.
Hence, we obtain $x_i[k+1] \leq \max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right)$ for any $ i \in \mathcal{N}$. We also have
\begin{equation*}
\begin{aligned}
x_i[k] &\leq \max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right),\\
x_i[k-1] &\leq \max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right),\\
&\vdots\\
x_i[k+1-\tau]&\leq \max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right)
\end{aligned}
\end{equation*}
for any $ i \in \mathcal{N}$. Therefore, $\overline{x}^N_\tau[k]$ is nonincreasing in time as
\begin{equation*}
\begin{aligned}
\overline{x}^N_\tau&[k+1] =\max \left( x^N[k+1], x^N[k],\dots, x^N[k+1-\tau]\right)\\
&\leq \max \left( x^N[k], x^N[k-1],\dots, x^N[k-\tau]\right)=\overline{x}^N_\tau[k].
\end{aligned}
\end{equation*}
We can similarly prove that $\underline{x}^N_\tau[k]$ is nondecreasing in time. This indicates that for $k \geq 0$, we have $x_i[k]\in \mathcal{S}_{\tau}$, $\forall i \in \mathcal{N}$. Thus, we have shown the safety condition.
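The mechanism behind this safety argument can be mimicked in a few lines: if every agent always moves to a convex combination of values in the length-$(\tau+1)$ window, the windowed maximum can never increase. This is a simplified sketch that omits the adversary and the trimming step; trimming only further restricts the combination to in-window values, so the bound is unaffected.

```python
import random

def convex_update(history):
    """One synchronous round: every agent moves to a random convex
    combination of all values currently stored in the window.
    Any convex combination of in-window values stays inside the
    window's [min, max] interval."""
    vals = [v for row in history for v in row]
    new = []
    for _ in range(len(history[-1])):
        w = [random.random() for _ in vals]
        s = sum(w)
        new.append(sum(wi * vi for wi, vi in zip(w, vals)) / s)
    return new

random.seed(0)
tau, n = 2, 5
# window of the last tau+1 state vectors
history = [[random.uniform(0, 10) for _ in range(n)] for _ in range(tau + 1)]
prev_max = max(v for row in history for v in row)
for _ in range(30):
    history = history[1:] + [convex_update(history)]
    cur_max = max(v for row in history for v in row)
    assert cur_max <= prev_max + 1e-12  # windowed max never increases
    prev_max = cur_max
```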
Next, we show the convergence. As shown above, $\overline{x}^N_\tau[k]$ and $\underline{x}^N_\tau[k]$ are nonincreasing and nondecreasing, respectively,
and moreover bounded. Thus, both of their limits exist and are denoted by $\overline{\omega}_\tau$ and $\underline{\omega}_\tau$, respectively.
We claim that the limits satisfy $\overline{\omega}_\tau=\underline{\omega}_\tau$, i.e., consensus is achieved. We prove by contradiction and assume that $\overline{\omega}_\tau>\underline{\omega}_\tau$.
Recall that $\alpha$ lower bounds the nonzero entries of $\Gamma[k]$. Choose $\epsilon_0 > 0$ small enough that $\overline{\omega}_\tau-\epsilon_0 > \underline{\omega}_\tau+\epsilon_0$. Fix
\begin{equation}\label{ep}
\epsilon<\frac{\epsilon_0\alpha^{(\tau+1)n_N}}{(1-\alpha^{(\tau+1)n_N})}, \medspace 0<\epsilon<\epsilon_0.
\end{equation}
Define the sequence $\{\epsilon_\gamma\}$ by
\begin{equation*}
\epsilon_{\gamma+1}= \alpha\epsilon_\gamma-(1-\alpha)\epsilon, \medspace \gamma=0,1,\dots,(\tau+1)n_N-1.
\end{equation*}
So we have $0 < \epsilon_{\gamma+1} < \epsilon_{\gamma}$ for all $\gamma$. In particular, they are positive because by \eqref{ep}, it holds that
\begin{equation*}
\begin{aligned}
\epsilon_{(\tau+1)n_N}&= \alpha^{(\tau+1)n_N}\epsilon_0- \sum_{m=0}^{(\tau+1)n_N-1}\alpha^m(1-\alpha)\epsilon\\
&= \alpha^{(\tau+1)n_N}\epsilon_0-(1-\alpha^{(\tau+1)n_N})\epsilon>0.
\end{aligned}
\end{equation*}
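The positivity claim can be checked numerically. The snippet below uses arbitrarily chosen example values for $\alpha$, $\tau$, and $n_N$ (they are assumptions for illustration, not values from the paper) and verifies that the recursion matches the closed form used above.

```python
# Sanity check of the epsilon sequence in the proof: with epsilon chosen
# as in (ep), every term stays positive, and the recursion matches the
# closed form  eps_G = alpha^G * eps0 - (1 - alpha^G) * eps.
alpha, tau, n_N = 0.3, 2, 4        # arbitrary example values (assumed)
eps0 = 1.0
G = (tau + 1) * n_N
bound = eps0 * alpha**G / (1 - alpha**G)
eps = 0.5 * min(bound, eps0)       # satisfies the condition (ep)

seq = [eps0]
for _ in range(G):
    seq.append(alpha * seq[-1] - (1 - alpha) * eps)

assert all(e > 0 for e in seq)
assert abs(seq[-1] - (alpha**G * eps0 - (1 - alpha**G) * eps)) < 1e-12
```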
Take $k_\epsilon \in \mathbb{Z}_+$ such that $\overline{x}^N_\tau[k]<\overline{\omega}_\tau+\epsilon$ and $\underline{x}^N_\tau[k]>\underline{\omega}_\tau-\epsilon$ for $k\geq k_\epsilon$. Such $k_\epsilon$ exists due to the convergence of $\overline{x}^N_\tau[k]$ and $\underline{x}^N_\tau[k]$. Then we can define the two disjoint sets as
\begin{equation*}
\mathcal{Z}_{1\tau}(k_\epsilon+\gamma,\epsilon_\gamma)=\{j\in \mathcal{N}: x_j[k_\epsilon+\gamma]>\overline{\omega}_\tau-\epsilon_\gamma\},
\end{equation*}
\begin{equation*}
\mathcal{Z}_{2\tau}(k_\epsilon+\gamma,\epsilon_\gamma)=\{j\in \mathcal{N}: x_j[k_\epsilon+\gamma]<\underline{\omega}_\tau+\epsilon_\gamma\}.
\end{equation*}
Next, we show that one of the two sets becomes empty in a finite number of steps, which contradicts the assumption on $\overline{\omega}_\tau$ and $\underline{\omega}_\tau$ being the limits. Consider the set $\mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$. Due to the definition of $\overline{x}^N_\tau[k]$ and its limit $\overline{\omega}_\tau$, one or more normal nodes are contained in the union of the sets $\mathcal{Z}_{1\tau}(k_\epsilon+\gamma,\epsilon_\gamma)$ for $0 \leq\gamma\leq\tau + 1$. We claim that $\mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$ is in fact nonempty. To prove this, it is sufficient to show that if a normal node $j$ is not in $\mathcal{Z}_{1\tau}(k_\epsilon+\gamma,\epsilon_\gamma)$, then it is not in $\mathcal{Z}_{1\tau}(k_\epsilon+\gamma+1,\epsilon_{\gamma+1})$ for $\gamma=0,\dots,\tau$.
Suppose that node $j$ satisfies $x_j[k_\epsilon+\gamma]\leq \overline{\omega}_\tau-\epsilon_\gamma$. Every normal node updates its value to a convex combination of the multi-hop neighbors' values at the current or previous times. Moreover, the values greater than $\overline{x}^N_\tau[k_\epsilon+\gamma]$ are ignored in step 2 of the MW-MSR algorithm. Hence, the value of node $j$ at the next time step is upper bounded as
\begin{equation}\label{converge}
\begin{aligned}
x_j&[k_\epsilon+\gamma+1] \leq (1-\alpha)\overline{x}^N_\tau[k_\epsilon+\gamma]+\alpha(\overline{\omega}_\tau-\epsilon_\gamma)\\
&\leq (1-\alpha)(\overline{\omega}_\tau+\epsilon)+\alpha(\overline{\omega}_\tau-\epsilon_\gamma)\\
&= \overline{\omega}_\tau-\alpha\epsilon_\gamma+(1-\alpha)\epsilon
=\overline{\omega}_\tau-\epsilon_{\gamma+1}.
\end{aligned}
\end{equation}
It thus follows that node $j$ is not in $\mathcal{Z}_{1\tau}(k_\epsilon+\gamma+1,\epsilon_{\gamma+1})$. This means that the cardinality of the set $\mathcal{Z}_{1\tau}(k_\epsilon+\gamma,\epsilon_\gamma)$ is nonincreasing for $\gamma=0,\dots,\tau+1$. The same holds for $\mathcal{Z}_{2\tau}(k_\epsilon+\gamma,\epsilon_\gamma)$, and hence $\mathcal{Z}_{2\tau}(k_\epsilon,\epsilon_0)$ is nonempty too.
We next show that one of these two sets in fact becomes empty in finite time. Since the graph is $(2f + 1)$-robust with $l$ hops w.r.t. any set $\mathcal{F}$ satisfying the $f$-total model, the graph is also $(2f + 1)$-robust with $l$ hops w.r.t. set $\mathcal{A}$ (i.e., the set of adversarial nodes).
Therefore, between the two nonempty disjoint sets $\mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$ and $\mathcal{Z}_{2\tau}(k_\epsilon,\epsilon_0)$, one of them has a normal agent with at least $2f + 1$ independent paths originating from the nodes outside and these paths do not have any internal node in the set $\mathcal{A}$.
Suppose that normal node $i\in \mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$ has this property. Note that there are at most $f$ malicious nodes, and node $i$ removes only values whose minimum message cover has cardinality at most $f$. Moreover, node $i$ updates at least once in every $\tau$ time steps.
Therefore, when node $i$ makes an update at time $k_\epsilon+\tau$, it will use at least one delayed value from the normal nodes outside the set $\mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$, upper bounded by $\overline{\omega}_\tau-\epsilon_\tau$. It thus follows that, at time $k_\epsilon+\tau$, when node $i$ makes an update, its value can be bounded as
\begin{equation*}
x_i[k_\epsilon+\tau+1]\leq (1-\alpha)\overline{x}^N_\tau[k_\epsilon+\tau]+\alpha(\overline{\omega}_\tau-\epsilon_\tau).
\end{equation*}
By \eqref{converge}, we have $x_i[k_\epsilon+\tau+1] \leq \overline{\omega}_\tau-\epsilon_{\tau+1}$. We can conclude that if node $i$ in $\mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)$ has $2f +1$ independent paths originating from the nodes outside the set, then it goes outside of $\mathcal{Z}_{1\tau}(k_\epsilon+\tau+1,\epsilon_{\tau+1})$ after $\tau + 1$ steps. Consequently,
$\left|\mathcal{Z}_{1\tau}(k_\epsilon+\tau+1,\epsilon_{\tau+1}) \right| < \left| \mathcal{Z}_{1\tau}(k_\epsilon,\epsilon_0)\right| $. Likewise, it follows that if $\mathcal{Z}_{2\tau}(k_\epsilon,\epsilon_0)$ has a node having at least $2f+1$ independent paths originating from the nodes outside, then $\left|\mathcal{Z}_{2\tau}(k_\epsilon+\tau+1,\epsilon_{\tau+1}) \right| < \left| \mathcal{Z}_{2\tau}(k_\epsilon,\epsilon_0)\right| $.
Since there are only $n_N$ normal nodes, we can repeat the steps above until one of the sets $\mathcal{Z}_{1\tau}(k_\epsilon+\tau+1,\epsilon_{\tau+1})$ and $\mathcal{Z}_{2\tau}(k_\epsilon+\tau+1,\epsilon_{\tau+1})$ becomes empty, and it takes no more than $(\tau + 1)n_N$ steps. Once the set becomes empty, it remains so indefinitely. This contradicts the assumption that $\overline{\omega}_\tau$ and $\underline{\omega}_\tau$ are the limits. Therefore, we obtain $\overline{\omega}_\tau=\underline{\omega}_\tau$.
\end{proof}
\section{Introduction}
Conformal field theory{~\cite{francesco_conformal_1999}} (CFT) has attracted much interest during the past decades. Owing to its power in two dimensions, it has been widely applied to study universal behavior at the critical points of two-dimensional statistical systems and one-dimensional quantum systems, where the correlation length diverges.
Notable applications include the classification of universality classes{~\cite{francesco_conformal_1999}}, the gapless edge modes of fractional quantum Hall systems {~\cite{moore_nonabelions_1991}}, the entanglement entropy{~\cite{holzhey_geometric_1994,vidal_entanglement_2003,calabrese_entanglement_2004, kitaev_topological_2006, levin_detecting_2006, fendley2007topological, fradkin_entanglement_2006}}, and the Kondo problem{~\cite{affleck_conformal_1995}}.
Recently, the entropy correction on a Klein bottle was proposed as a universal characterization of critical systems described by CFT, called the Klein bottle entropy{~\cite{tu_universal_2017}}.
The path integral on a Klein bottle is realized by swapping the left movers and the right movers via a reflection operator defined on the CFT Hilbert space; the operation swaps the world lines and glues them back by taking the trace in the path integral.
This result was soon generalized to other nonorientable manifolds, such as the M\"obius strip{~\cite{chen_conformal_2017}} and the real projective plane{~\cite{wang_logarithmic_2018}}.
In conformal critical systems, the Klein bottle entropy is a universal value that depends only on the type of the CFT,
and thus it can be applied to identify the underlying CFT description of the system.
The Klein bottle entropy can also be used to accurately pinpoint quantum critical points, even those without local order parameters{~\cite{chen_conformal_2017}}.
In lattice models, the Klein bottle entropy can be efficiently calculated using prevailing numerical algorithms,
such as quantum Monte Carlo{~\cite{tang_universal_2017}} and thermal tensor network methods{~\cite{chen_conformal_2017,wang_logarithmic_2018}}.
In these lattice model calculations, however, it can be a subtle issue to identify the correct lattice operation that exactly exchanges the left movers and right movers at the CFT level. Therefore, one should carry out a careful analysis to confirm that the results of the lattice simulation match the CFT predictions.
The initial work on the Klein bottle entropy, Ref.~{\onlinecite{tu_universal_2017}}, concentrates on rational CFTs (RCFTs), whose space of states decomposes into a finite number of representations of the Virasoro algebra or of another extended chiral algebra (such as a Kac-Moody algebra). It is interesting to investigate how the Klein bottle entropy extends to a broader class of CFTs.
The main focus of the present work is to study the Klein bottle entropy in another notable category of CFT, the free boson theory compactified on a circle, which includes both rational and nonrational CFTs.
The compactified boson CFTs all have the identical central charge $c = 1$ and are characterized by the compactification radius $R$.
In condensed matter physics, the compactified boson CFT plays an important role through its connection to the Luttinger liquid theory, which is a remarkably successful and powerful framework describing the low-energy physics of one-dimensional critical systems{~\cite{ludwig_methods_1995,giamarchi_quantum_2003}}.
The Luttinger liquids are relevant to various experimental systems, such as carbon nanotubes{~\cite{bockrath_luttinger-liquid_1999,yao_carbon_1999,ishii_direct_2003,gao_evidence_2004}}, semiconductor wires{~\cite{yacoby_nonuniversal_1996,levy_luttinger-liquid_2006}}, and highly tunable ultracold atomic gases~\cite{kinoshita_observation_2004, clement_exploring_2009, haller_pinning_2010}.
The Luttinger liquid theory is fully characterized by two parameters, the sound velocity $v$ and the Luttinger parameter $K$.
The value of the Luttinger parameter $K$ has a direct relation with the compactification radius $R$ of the free boson CFT\footnote{The specific form of this relation depends on the choice of the dual field, as will be discussed in Sec.~\ref{sec:determineR}.}.
However, the value of the Luttinger parameter cannot be reliably determined within the field theory itself, and one usually has to resort to microscopic models, for which obtaining its value is usually still a nontrivial task{~\cite{giamarchi_quantum_2003,song_general_2010,song_bipartite_2012,dalmonte_critical_2012,dalmonte_estimating_2011,lauchli_operator_2013,alcaraz_in_preparation_2018}}. It is highly desirable to have a direct determination of the Luttinger parameter without any finite-size scaling or fitting procedure.
In this paper, by studying the Klein bottle entropy of the compactified boson CFT, we discover a simple relation between the Klein bottle entropy $\ln g$ and the compactification radius $R$, $\ln g = \ln R$.
This simple relation suggests an efficient and accurate method to extract the Luttinger parameter from lattice models in a straightforward manner.
To verify this relation numerically, we perform quantum Monte Carlo (QMC) calculations in the $S = 1 / 2$ XXZ chain, whose Luttinger parameter (and thus the compactification radius) can be exactly obtained from the Bethe ansatz solution{~\cite{giamarchi_quantum_2003}}.
As an application, we present numerical results in the $S = 1$ XXZ chain, which cannot be exactly solved, and our results serve as an accurate numerical determination of the Luttinger parameter in this model.
This paper is organized as follows.
In Sec.~\ref{sec:RCFT}, we review the main results of Ref.~{\onlinecite{tu_universal_2017}}, by introducing the definition of the Klein bottle entropy and deriving its RCFT prediction.
We also show how to extract the Klein bottle entropy from lattice models by discussing the transverse field Ising model (TFIM) in detail.
In Sec.~\ref{sec:CBCFT}, we present the main result of this work, the prediction of the Klein bottle entropy in the compactified boson CFT, and perform the numerical calculations in the XXZ model with spin $S = 1 / 2$ and $S = 1$.
Sec.~\ref{sec:summary} summarizes the results.
In Appendix \ref{sec:extendedMCmethod}, we discuss the details of the extended ensemble Monte Carlo method in the XXZ chain, and in Appendix \ref{sec:JWsolTFIMXY}, we present the exact solutions of the Klein bottle entropy for the two solvable lattice models, the TFIM and the XY chain.
\section{The Klein bottle entropy of rational CFT}\label{sec:RCFT}
In this section, we mainly review the results of Ref.~{\onlinecite{tu_universal_2017}}, which concentrates on RCFT.
We first review the definition of the Klein bottle entropy and derive the prediction of its value in RCFT\footnote{Here we only consider unitary CFTs. For non-unitary CFTs, the RCFT prediction of the Klein bottle entropy needs to be modified.}.
In order to show how to extract the Klein bottle entropy from lattice models, we discuss the example of TFIM in detail.
We show that, in lattice models, the effect of the reflection operator defined in the CFT context may be realized by a bond-centered lattice reflection; however, whether this lattice reflection leads to the Klein bottle entropy predicted by the CFT is not obvious, and one usually needs a careful analysis to confirm it.
\subsection{CFT prediction}
Let us consider a (1+1)-dimensional quantum chain with length $L$ and periodic boundary condition. At inverse temperature $\beta=1/T$ (the Boltzmann constant $k_{\text{B}}$ is set to be 1), its partition function can be written as a path integral defined on a torus with size $L\times \beta$. When the system is critical and its low-energy effective theory is a CFT, the partition function becomes the torus partition function $Z^{\mathcal{T}}$ of the CFT ($\mathcal{T}$ stands for torus){~\cite{francesco_conformal_1999}}
\begin{equation}
Z^{\mathcal{T}} = \mathrm{Tr}_{\mathcal{H} \otimes \overline{\mathcal{H}}}
(q^{L_0 - c / 24} \bar{q}^{\bar{L}_0 - c / 24}),
\end{equation}
where $\mathcal{H} \otimes \overline{\mathcal{H}}$ represents the tensor-product Hilbert space of the holomorphic sector $\mathcal{H}$ and the antiholomorphic sector $\overline{\mathcal{H}}$ of the CFT.
$L_0$ and $\bar{L}_0$ are the zeroth-level holomorphic and antiholomorphic Virasoro generators.
$c$ is the central charge.
$q = \mathrm{e}^{2 \pi \mathrm{i} \tau}$ with $\tau = \mathrm{i} v \beta / L $, where $v$ is the velocity of the CFT,
and $\bar{q}$ is the complex conjugate of $q$.
In the CFT, the Klein bottle partition function $Z^{\mathcal{K}}$ ($\mathcal{K}$ denotes the Klein bottle) is defined by~\cite{blumenhagen_introduction_2009}
\begin{equation}
Z^{\mathcal{K}} = \mathrm{Tr}_{\mathcal{H} \otimes
\overline{\mathcal{H}}} (\Omega q^{L_0 - c / 24} \bar{q}^{\bar{L}_0 - c /
24}), \label{klein-part}
\end{equation}
where an extra operator $\Omega$ is inserted, which effectively interchanges the holomorphic and antiholomorphic sectors, $\Omega | \alpha, \bar{\mu} \rangle = | \mu, \bar{\alpha} \rangle$, i.e., interchanges the left and right movers.
As a result, only the left-right symmetric states $| \alpha, \bar{\alpha} \rangle$ contribute to the Klein bottle partition function.
One can then write $Z^{\mathcal{K}}$ as
\begin{equation}
Z^{\mathcal{K}} = \mathrm{Tr}_{\mathrm{sym}} (q^{2 (L_0 - c / 24)}), \label{klein-part2}
\end{equation}
where the subscript ``sym'' indicates that the trace in (\ref{klein-part2}) is taken over the left-right symmetric states $|\alpha, \bar{\alpha} \rangle$ in $\mathcal{H} \otimes \overline{\mathcal{H}}$.
For rational CFTs, the Hilbert space can be organized into a finite number of conformal towers, each of which is formed by a primary state and its descendant states.
In such CFTs, the torus partition function is given by
\begin{equation}
Z^{\mathcal{T}} = \sum_{a, b} \chi_a (q) M_{a, b} \bar{\chi}_b (\bar{q}) .
\end{equation}
Here $\chi_a (q) = \mathrm{Tr}_a (q^{L_0 - c / 24})$ is the character of the conformal tower labeled by the primary state $a$, with the trace taken over the states of that tower.
$\bar{\chi}$ is the antiholomorphic counterpart of $\chi$.
The entries $M_{a, b}$ of the $M$-matrix are non-negative integers counting the multiplicity of the primary state $(a, \bar{b})$ in the Hilbert space $\mathcal{H} \otimes \overline{\mathcal{H}}$.
On the other hand, according to Eq.~\eqref{klein-part2}, one can write $Z^{\mathcal{K}}$ as
\begin{equation}
Z^{\mathcal{K}} = \sum_a M_{a, a} \chi_a (q^2).
\end{equation}
In the limit $L \gg v \beta$, the partition functions can be evaluated by using the modular transformation of the characters, i.e., $\chi_a (q) = \sum_b S_{a b} \chi_b (q')$, where $q = \mathrm{e}^{- 2 \pi \frac{v \beta}{L}}$ and $q' = \mathrm{e}^{- 2 \pi \frac{L}{v \beta}}$, and $S_{a b}$ is the element of the modular S matrix{~\cite{francesco_conformal_1999}}. When $L \gg v \beta$, $q' \rightarrow 0$, then in the character $\chi_a (q')$ the primary state $a$ dominates, so $\chi_a(q') \approx (q')^{h_a - c / 24}$, where $h_a$ is the conformal weight of the primary field $a$.
Furthermore, among all primary fields, the identity field with conformal weight $h_I = 0$ dominates over other primary fields with $h_a > 0$.
Based on the discussion above, modular invariance of the torus partition function ($M = S^{\dagger} M S$) and the uniqueness of the identity field ($M_{I, I} = 1$) give
\begin{equation}
Z^{\mathcal{T}} = \sum_{a, b} \chi_a (q') M_{a, b} \bar{\chi}_b
(q') \approx | \chi_I (q') |^2 = \mathrm{e}^{\frac{\pi c L}{6
v \beta}} .
\end{equation}
Meanwhile, for the Klein bottle partition function, since $\chi_a (q^2) = \sum_b S_{a b} \chi_b (q'^{1 / 2}) \approx S_{a I} \chi_I (q'^{1 / 2})$, we have
\begin{equation}
Z^{\mathcal{K}} \approx \sum_a M_{a, a} S_{a I} \chi_I (q'^{1 / 2}) = g
\mathrm{e}^{\frac{\pi c L}{24 v \beta}},
\end{equation}
where we have introduced
\begin{equation}
g = \sum_a M_{a, a} S_{a I} = \sum_a \frac{M_{a, a} d_a}{\mathcal{D}}.
\label{eq:gforrationalCFT}
\end{equation}
Here $d_a$ is the quantum dimension of the primary field $a$ and $\mathcal{D} = \sqrt{\sum_a d^2_a}$ is the total quantum dimension, which satisfy $S_{aI} = d_a/\mathcal{D}$.
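As an illustration of Eq.~\eqref{eq:gforrationalCFT}, the following minimal Python sketch (our own naming, not tied to any library) evaluates $g$ for the Ising CFT from its standard modular $S$ matrix, assuming the diagonal modular invariant $M_{a,b}=\delta_{a,b}$:

```python
import numpy as np

# Modular S matrix of the Ising CFT in the basis (I, psi, sigma).
s2 = np.sqrt(2.0)
S = 0.5 * np.array([[1.0,  1.0,  s2],
                    [1.0,  1.0, -s2],
                    [ s2,  -s2, 0.0]])

M = np.eye(3)  # diagonal modular invariant: M_{a,b} = delta_{a,b}

# Eq. (gforrationalCFT): g = sum_a M_{a,a} S_{aI}
g = sum(M[a, a] * S[a, 0] for a in range(3))

# Equivalent form via quantum dimensions: d_a = S_{aI}/S_{II}, D = 1/S_{II}
d = S[:, 0] / S[0, 0]   # -> [1, 1, sqrt(2)]
D = 1.0 / S[0, 0]       # total quantum dimension D = 2
g_alt = np.dot(np.diag(M), d) / D
```

The same routine applies to any rational CFT once its $S$ matrix and $M$-matrix are specified.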
In lattice models, besides the pure CFT predictions, one has to take into account the nonuniversal free energy terms,
\begin{eqnarray}
\ln Z^{\mathcal{T}} & \approx & - f_0 \beta L + \frac{\pi c}{6 v \beta} L, \label{toruslatticeF}
\\
\ln Z^{\mathcal{K}} & \approx & - f_0 \beta L + \frac{\pi c}{24 v \beta} L +
\ln g, \label{kleinlatticeF}
\end{eqnarray}
where $f_0$ represents the bulk free energy density and $\ln g$ is the Klein bottle entropy, which is universal and only depends on the quantum dimensions of the primary fields [see Eq.~(\ref{eq:gforrationalCFT})].
We note that Eq.~\eqref{toruslatticeF} is the seminal result obtained in Refs.~\onlinecite{affleck_universal_1986, blote_conformal_1986},
and Eq.~\eqref{kleinlatticeF} is the central result of Ref.~\onlinecite{tu_universal_2017}.
In practice, one can cancel the nonuniversal terms in (\ref{toruslatticeF}) and (\ref{kleinlatticeF}) and extract the Klein bottle entropy $\ln g$ by forming the following partition function ratio:
\begin{equation}
\ln g = \ln \frac{Z^{\mathcal{K}} \left( 2 L, \frac{\beta}{2}
\right)}{Z^{\mathcal{T}} (L, \beta)} . \label{eq:Kleinbottleexpression}
\end{equation}
We emphasize that the Klein bottle entropy is universal and remains unchanged even in the zero-temperature limit $\beta\rightarrow\infty$, as long as the thermodynamic limit $L\rightarrow\infty$ is taken first.
In this regard, the Klein bottle entropy reflects the ground-state properties of the system.
Therefore, Eq.~\eqref{eq:Kleinbottleexpression} allows one to extract the ground-state properties directly from thermal systems, without any fitting procedure.
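The cancellation of the nonuniversal terms behind Eq.~\eqref{eq:Kleinbottleexpression} can be checked directly from Eqs.~\eqref{toruslatticeF} and \eqref{kleinlatticeF}; in the sketch below, the values of $f_0$, $c$, $v$, $L$, $\beta$ are arbitrary placeholders (not tied to any specific model):

```python
import math

def ln_Z_torus(L, beta, f0, c, v):
    # Eq. (toruslatticeF): ln Z^T ~ -f0*beta*L + pi*c*L/(6*v*beta)
    return -f0 * beta * L + math.pi * c * L / (6.0 * v * beta)

def ln_Z_klein(L, beta, f0, c, v, ln_g):
    # Eq. (kleinlatticeF): ln Z^K ~ -f0*beta*L + pi*c*L/(24*v*beta) + ln g
    return -f0 * beta * L + math.pi * c * L / (24.0 * v * beta) + ln_g

# placeholder (nonuniversal) parameters -- any values work
f0, c, v = 0.37, 0.5, 1.3
ln_g = math.log((2.0 + math.sqrt(2.0)) / 2.0)
L, beta = 100.0, 4.0

# Eq. (Kleinbottleexpression): both the f0 term and the Casimir term cancel
extracted = ln_Z_klein(2 * L, beta / 2, f0, c, v, ln_g) \
            - ln_Z_torus(L, beta, f0, c, v)
```

The doubling $L \to 2L$ together with $\beta \to \beta/2$ makes both the bulk term and the Casimir term coincide between the two free energies, leaving $\ln g$ alone.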
\subsection{Transverse field Ising model}
\label{subsec:Ising}
As a concrete example, we consider the spin-1/2 critical Ising chain,
\begin{equation}
H_{\mathrm{Ising}} = - \sum_{i = 1}^L (S_i^x S_{i + 1}^x + \frac{1}{2} S_i^z),
\label{eq:IsingHamil}
\end{equation}
which is well known to be described by the Ising CFT. Here we have imposed periodic boundary condition, i.e., $S^{\nu}_{L + 1} = S_1^{\nu}\, (\nu = x, y, z)$.
For simplicity, we only consider the case of even $L$.
The spin-1/2 critical Ising chain can be transformed into a spinless fermion model via the Jordan-Wigner transformation, $S_i^z = f_i^{\dagger} f_i - \frac{1}{2}$ and $S_i^x = \frac{1}{2} ( f_i^{\dagger}+f_i) \mathrm{e}^{\mathrm{i} \pi \sum_{l < i} n_l} $. The Hamiltonian (\ref{eq:IsingHamil}) is then fermionized as
\begin{eqnarray}
H & = & - \frac{1}{4} \sum_{i = 1}^{L - 1} (f_i^{\dagger} - f_i) (f_{i +
1}^{\dagger} + f_{i + 1}) - \frac{1}{4} \sum_{i = 1}^L (2 f_i^{\dagger} f_i
- 1) \nonumber\\
& & + \frac{1}{4} Q (f_L^{\dagger} - f_L) (f_1^{\dagger} + f_1),
\label{eq:IsingfermionizedHamil}
\end{eqnarray}
where the fermion parity $Q = \mathrm{e}^{\mathrm{i} \pi \sum_{l = 1}^L n_l} = \pm 1$ is a conserved quantity in this model.
The Hilbert space thus splits into two sectors of definite fermion parity, which, following the CFT convention, are called the Neveu-Schwarz and Ramond sectors{~\cite{francesco_conformal_1999}}.
In the Neveu-Schwarz sector, the fermion parity is even ($Q = 1$), with allowed lattice momenta $k = \pm \frac{\pi}{L}, \pm \frac{3 \pi}{L}, \ldots, \pm \frac{(L - 1) \pi}{L}$.
In the Ramond sector, the fermion parity is odd ($Q = - 1$), with allowed lattice momenta $k = 0, \pm \frac{2 \pi}{L}, \pm \frac{4 \pi}{L}, \ldots, \pm \frac{(L - 2) \pi}{L}, \pi$.
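For concreteness, the two sets of allowed momenta can be generated as follows (a small helper sketch for even $L$; the function names are ours):

```python
import math

def ns_momenta(L):
    # Neveu-Schwarz sector: k = +-pi/L, +-3pi/L, ..., +-(L-1)pi/L
    return sorted(s * (2 * j + 1) * math.pi / L
                  for j in range(L // 2) for s in (1, -1))

def r_momenta(L):
    # Ramond sector: k = 0, +-2pi/L, ..., +-(L-2)pi/L, pi
    ks = [0.0, math.pi]
    ks += [s * 2 * j * math.pi / L
           for j in range(1, L // 2) for s in (1, -1)]
    return sorted(ks)

# each sector contains exactly L momenta in (-pi, pi]
```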
In the two sectors, the Hamiltonian (\ref{eq:IsingfermionizedHamil}) takes the same form
\begin{equation}
H_{\pm} = - \frac{1}{4} \sum_{i = 1}^L (f_i^{\dagger} - f_i) (f_{i +
1}^{\dagger} + f_{i + 1}) - \frac{1}{4} \sum_{i = 1}^L (2 f_i^{\dagger} f_i
- 1),
\end{equation}
but with different boundary conditions for fermions. In the Neveu-Schwarz (Ramond) sector, the fermions have antiperiodic (periodic) boundary condition $f_{L + 1} = - f_1$ ($f_{L + 1} = f_1$).
\subsubsection{Bond-centered lattice reflection} \label{sec:latticereflection}
For lattice models, one needs to find an operator defined on the lattice, which effectively interchanges the left and right movers when acting on the states of the system.
As indicated in Ref.~{\onlinecite{tu_universal_2017}}, the following bond-centered lattice reflection operator $P$ serves as a natural candidate:
\begin{equation}
P|s_1, s_2, \ldots, s_{L - 1}, s_L \rangle = |s_L, s_{L - 1}, \ldots, s_2,
s_1 \rangle,
\end{equation}
where $s_i$ represents the spin state at site $i$.
Next, we need to work out the action of the reflection operator $P$ in the fermionic basis.
From $P S_i^{\nu} P^{- 1} = S^{\nu}_{L+1-i}$, one obtains $P f_i^{\dagger} P^{- 1} = f_{L - i + 1}^{\dagger} Q$ with the help of the Jordan-Wigner transformation, and in the momentum space
\begin{equation}
P f_k^{\dagger} P^{- 1} = \mathrm{e}^{\mathrm{i} (L + 1) k} f_{-
k}^{\dagger} Q. \label{latticePMomentum}
\end{equation}
According to Eq.~{\eqref{latticePMomentum}}, up to a phase factor, the lattice reflection reflects a fermion mode of momentum $k$ to a fermion mode of momentum $- k$.
As a result, one can infer that only states composed of ``fermion pairs'' $f^{\dagger}_{- k} f^{\dagger}_k$ contribute to the Klein bottle partition function (the modes with $k = 0, \pi$ are special, as each is reflected to itself up to a phase factor), while most other states in the Hilbert space are orthogonal to their reflection partners.
To construct reflection-invariant states, one can start from a state $| \psi \rangle$ satisfying $P| \psi \rangle = | \psi \rangle$ and create fermion modes on top of it.
According to Eq.~{\eqref{latticePMomentum}}, one can easily see that
\begin{eqnarray}
P f^{\dagger}_{- k} f^{\dagger}_k | \psi \rangle & = &
(\mathrm{e}^{-\mathrm{i} (L + 1) k} f_k^{\dagger} Q) (\mathrm{e}^{
\mathrm{i} (L + 1) k} f_{-k}^{\dagger} Q) P| \psi \rangle \nonumber\\
& = & f_k^{\dagger} Q f_{- k}^{\dagger} Q | \psi \rangle \nonumber\\
& = & f^{\dagger}_{- k} f^{\dagger}_k | \psi \rangle.
\end{eqnarray}
On the other hand, we note that the vacuum state $|0 \rangle$ corresponds to the fully polarized spin state and is invariant under the lattice reflection, i.e., $P| 0 \rangle = |0 \rangle$. Moreover,
\begin{eqnarray}
P f_{k = 0}^{\dagger} |0 \rangle & = & f_{k = 0}^{\dagger} |0 \rangle, \\
P f_{k = \pi}^{\dagger} |0 \rangle & = & - f_{k = \pi}^{\dagger} |0 \rangle
.
\end{eqnarray}
Therefore, the states generated by creating fermion pairs like $f^{\dagger}_k f^{\dagger}_{- k}$ on top of $|0 \rangle$ or $f_{k = 0}^{\dagger} |0 \rangle$ are invariant under lattice reflection, and these states have total momentum $k_{\mathrm{tot}} = 0$.
Meanwhile, states generated by creating fermion pairs on top of $f_{k = \pi}^{\dagger} |0 \rangle$ have total momentum $k_{\mathrm{tot}} = \pi$, and these states are invariant under lattice reflection up to a sign factor $- 1$.
States in the other forms are all orthogonal to their reflection partners.
For lattice models described by an RCFT, in order to verify whether the lattice reflection leads to the Klein bottle entropy predicted by the CFT, we need to identify the primary states and investigate their behavior under the lattice reflection.
Only the quantum dimensions of the primary states that are invariant under the lattice reflection can be counted in the summation of Eq.~\eqref{eq:gforrationalCFT}.
We also note that sometimes there may exist primary states that are invariant under lattice reflection up to a sign factor $-1$.
In such cases, when calculating the Klein bottle entropy, one needs to add an additional minus sign before the corresponding quantum dimension in Eq.~\eqref{eq:gforrationalCFT}~\cite{chen_conformal_2017}.
\subsubsection{Identification of the primary states of Ising CFT}
As discussed above, in order to calculate the Klein bottle entropy of the critical Ising chain using the RCFT prediction Eq.~\eqref{eq:gforrationalCFT}, we need to identify the primary states in the fermionic picture and analyze their behavior under the lattice reflection.
We also obtain the energy spectrum of the critical Ising chain of size $L=14$ by means of exact diagonalization calculations, and then, as a separate check, we identify the primary states obtained from the fermionic picture in this energy spectrum.
In the Neveu-Schwarz sector, $Q = 1$, $k = \pm \frac{\pi}{L}, \pm \frac{3
\pi}{L}, \ldots, \pm \frac{(L - 1) \pi}{L}$. By a Fourier transformation, the
Hamiltonian becomes
\begin{eqnarray}
  H_+ & = & \frac{1}{4} \sum_{k > 0} (f_k^{\dagger}, f_{- k}) \left(
  \begin{array}{cc}
    - 2 \cos k - 2 & - 2 \mathrm{i} \sin k\\
    2 \mathrm{i} \sin k & 2 \cos k + 2
  \end{array} \right) \left( \begin{array}{c}
    f_k\\
    f_{- k}^{\dagger}
  \end{array} \right) \nonumber\\
  &  & + \frac{1}{4} \sum_{k > 0} \left( - 2 \cos k - 2 \right) +
  \frac{L}{4} . \label{eq:IsingNShamilt}
\end{eqnarray}
By diagonalizing the matrix in Eq.~\eqref{eq:IsingNShamilt}, one obtains the dispersion relation $\varepsilon_k = \pm E_k$, where we have introduced $E_k = \cos (k / 2)$.
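This dispersion can be verified numerically by diagonalizing the $2\times2$ Bogoliubov block of Eq.~\eqref{eq:IsingNShamilt}; a quick consistency sketch (function names are ours):

```python
import numpy as np

def bdg_block(k):
    # 2x2 block of Eq. (IsingNShamilt), acting on (f_k, f_{-k}^dagger)
    return 0.25 * np.array([[-2 * np.cos(k) - 2, -2j * np.sin(k)],
                            [ 2j * np.sin(k),     2 * np.cos(k) + 2]])

L = 14
ns = [(2 * j + 1) * np.pi / L for j in range(L // 2)]
# each Hermitian block should have eigenvalues -cos(k/2) and +cos(k/2)
dispersion_ok = all(
    np.allclose(np.linalg.eigvalsh(bdg_block(k)),
                [-np.cos(k / 2), np.cos(k / 2)])
    for k in ns)
```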
We then write $H_+$ as
\begin{equation}
  H_+ = \sum_{k > 0} E_k (\alpha_k^{\dagger} \alpha_k + \beta_k
  \beta_k^{\dagger}) - \sum_{k > 0} E_k + \frac{1}{4} \sum_{k > 0} \left( - 2
  \cos k - 2 \right) + \frac{L}{4},
\end{equation}
where $\alpha_k = f_k \sin \frac{k}{4} + f_{- k}^{\dagger} \cos \frac{k}{4}$,\, $\beta_k = \mathrm{i} f_k \cos \frac{k}{4} - \mathrm{i} f_{- k}^{\dagger} \sin \frac{k}{4}$.
We introduce
\begin{equation}
\gamma_k = \left\{ \begin{array}{ll}
\alpha_k & \text{for } k > 0\\
\beta_{- k}^{\dagger} & \text{for } k < 0
\end{array} \right.
\end{equation}
and note that the ground-state energy is
\begin{equation}
  E_{\mathrm{gs}} = \frac{L}{4} - \sum_{k > 0} \left( E_k + \frac{\cos k}{2}
  + \frac{1}{2} \right) = - \frac{1}{2 \sin \left( \frac{\pi}{2 L}
  \right)} , \label{eq:isinggsE}
\end{equation}
so the Hamiltonian becomes
\begin{equation}
H_+ = \sum_{k \neq 0} E_k \gamma_k^{\dagger} \gamma_k + E_{\mathrm{gs}} .
\end{equation}
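As a sanity check, the closed form in Eq.~\eqref{eq:isinggsE} can be compared both with the free-fermion sum $-\sum_{k>0}E_k$ and with a brute-force diagonalization of the spin Hamiltonian \eqref{eq:IsingHamil} on a small chain; a self-contained sketch:

```python
import numpy as np

def e_gs_fermion(L):
    # -sum_{k>0, NS} cos(k/2) with k = (2j+1)pi/L
    return -sum(np.cos((2 * j + 1) * np.pi / (2 * L)) for j in range(L // 2))

def e_gs_spin(L):
    # dense ED of H = -sum_i (S^x_i S^x_{i+1} + S^z_i / 2), periodic b.c.
    sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
    def op(o, i):
        m = np.array([[1.0]])
        for j in range(L):
            m = np.kron(m, o if j == i else np.eye(2))
        return m
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L):
        H -= op(sx, i) @ op(sx, (i + 1) % L) + 0.5 * op(sz, i)
    return np.linalg.eigvalsh(H)[0]

L = 8
closed_form = -1.0 / (2.0 * np.sin(np.pi / (2 * L)))
```

Both routes reproduce $-1/[2\sin(\pi/2L)]$ to machine precision for small even $L$.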
The low-energy states in the Neveu-Schwarz sector are generated by creating fermion modes $\gamma_k^{\dagger}$'s on top of the ground state $| \mathrm{gs} \rangle$.
In order to determine whether the ground state is invariant under lattice reflection, we have to derive the ground-state wave function.
To do this, we write the Hamiltonian in the subspace of $|0 \rangle$ and $f_k^{\dagger} f_{-k}^{\dagger} |0 \rangle$,
\begin{equation}
  H_+^k = \left( \begin{array}{cc}
    0 & - \frac{\mathrm{i}}{2} \sin k\\
    \frac{\mathrm{i}}{2} \sin k & - \cos k - 1
  \end{array} \right) .
\end{equation}
By diagonalizing this matrix, we obtain
\begin{equation}
| \mathrm{gs} \rangle = \prod_{k > 0} (u_k + v_k f_k^{\dagger} f_{-
k}^{\dagger}) |0 \rangle,
\end{equation}
where $u_k = \mathrm{i} \sin (k / 4),\, v_k = \cos (k / 4)$.
Clearly, the ground state has total momentum $0$ and is invariant under the lattice reflection.
The ground state corresponds to the primary field $(I,\bar{I})$ in the Ising CFT, which has the conformal weight $(h_I, \bar{h}_{\bar{I}}) = (0, 0)$.
This state is labeled by $(I, \bar{I})$ in the energy spectrum Fig.~\ref{fig:ising-spectrum}.
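One can confirm numerically that $(u_k, v_k)$ is the lowest eigenvector of $H_+^k$ for all Neveu-Schwarz momenta (a quick sketch):

```python
import numpy as np

def h_block(k):
    # H_+^k in the ordered basis {|0>, f_k^dag f_{-k}^dag |0>}
    return np.array([[0.0,              -0.5j * np.sin(k)],
                     [0.5j * np.sin(k), -np.cos(k) - 1.0]])

L = 14
gs_ok = True
for j in range(L // 2):
    k = (2 * j + 1) * np.pi / L
    vec = np.array([1j * np.sin(k / 4), np.cos(k / 4)])  # (u_k, v_k)
    lam = np.linalg.eigvalsh(h_block(k))[0]              # lowest eigenvalue
    gs_ok &= np.allclose(h_block(k) @ vec, lam * vec)
```

Note that $(u_k, v_k)$ is automatically normalized, $|u_k|^2 + |v_k|^2 = 1$.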
The lowest excited state in the Neveu-Schwarz sector is
\begin{equation}
  | \psi, \bar{\psi} \rangle = \gamma_{k=(1-1/L)\pi}^{\dagger} \gamma_{k=-(1-1/L)\pi}^{\dagger} |
  \mathrm{gs} \rangle .
\end{equation}
Note that for $k > 0$
\begin{equation}
P \gamma_k^{\dagger} P^{- 1} = P \alpha_k^{\dagger} P^{- 1} = - \mathrm{i}
\mathrm{e}^{\mathrm{i} k} \beta_k Q = - \mathrm{i} \mathrm{e}^{\mathrm{i} k}
\gamma_{- k}^{\dagger} Q.
\end{equation}
One can infer that $| \psi, \bar{\psi} \rangle$ is invariant under the lattice
reflection. This state has momentum 0. The energy of this state is
\begin{equation}
  E_{(\psi, \bar{\psi})} = E_{\mathrm{gs}} + 2 \cos \left( \frac{L - 1}{2 L}
  \pi \right) = - \frac{1}{2 \sin \left( \frac{\pi}{2 L} \right)} + 2 \sin
  \left( \frac{\pi}{2 L} \right) .
\end{equation}
This state corresponds to the primary field $(\psi, \bar{\psi})$ with conformal weight $(h_{\psi}, \bar{h}_{\bar{\psi}}) = (1 / 2, 1 / 2)$.
We label it by $(\psi, \bar{\psi})$ in the energy spectrum Fig.~\ref{fig:ising-spectrum}.
In the Ramond sector, $Q = - 1$, $k = 0, \pm \frac{2 \pi}{L}, \pm \frac{4 \pi}{L}, \ldots, \pm \frac{(L - 2) \pi}{L}, \pi$.
Similarly, the Hamiltonian in this sector can be written as
\begin{eqnarray}
  H_- & = & \frac{1}{4} \sum_{0 < k < \pi} (f_k^{\dagger}, f_{- k}) \left(
  \begin{array}{cc}
    - 2 \cos k - 2 & - 2 \mathrm{i} \sin k\\
    2 \mathrm{i} \sin k & 2 \cos k + 2
  \end{array} \right) \left( \begin{array}{c}
    f_k\\
    f_{- k}^{\dagger}
  \end{array} \right) \nonumber\\
  &  & - f^{\dagger}_{k = 0} f_{k = 0} + \frac{1}{4} \sum_{0 < k < \pi}
  \left( - 2 \cos k - 2 \right) + \frac{L}{4} ,
  \label{eq:TRIMHinRamond}
\end{eqnarray}
where the fermionic mode $f^{\dag}_{k=\pi}$ does not appear since its single-particle energy is zero.
The energy dispersion $\varepsilon_k = \pm E_k = \pm \cos (k / 2)$.
The energy of the lowest state in the Ramond sector is
\begin{equation}
E_{(\sigma, \bar{\sigma})} = - \frac{1}{2} - \frac{1}{2} \sum_{0 < k < \pi}
\left( \cos k + 2 \cos \frac{k}{2} \right) = - \frac{1}{2} \cot \left(
\frac{\pi}{2 L} \right) . \label{eq:isingsigmaE}
\end{equation}
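The sum in Eq.~\eqref{eq:isingsigmaE} can again be checked against the closed form numerically (sketch):

```python
import numpy as np

def e_sigma(L):
    # Ramond momenta with 0 < k < pi: k = 2j*pi/L, j = 1, ..., L/2 - 1
    ks = [2 * j * np.pi / L for j in range(1, L // 2)]
    return -0.5 - 0.5 * sum(np.cos(k) + 2 * np.cos(k / 2) for k in ks)

L = 14
closed_form = -0.5 / np.tan(np.pi / (2 * L))
```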
Note that since $\varepsilon_{k=0} = -1$ [see Eq.~\eqref{eq:TRIMHinRamond}], the $k=0$ mode is occupied in $|\sigma,\bar{\sigma}\rangle$, and the $k=\pi$ state is therefore unoccupied due to the odd fermion parity constraint.
Similarly to the above, one obtains the wave function of this state
\begin{equation}
|\sigma,\bar{\sigma}\rangle = f^{\dag}_{k=0} \prod_{0<k<\pi}(u_k + v_k f^{\dag}_{k} f^{\dag}_{-k}) |0\rangle,
\end{equation}
which is invariant under the lattice reflection.
This state has total momentum $k_{\mathrm{tot}} = 0$ and corresponds to the primary field $(\sigma, \bar{\sigma})$ in the Ising CFT, with conformal weight $(h_{\sigma}, \bar{h}_{\bar{\sigma}}) = (1 / 16, 1 / 16)$.
We label it by $(\sigma, \bar{\sigma})$ in the energy spectrum Fig.~\ref{fig:ising-spectrum}.
\begin{figure}[!htb]
\resizebox{8.5cm}{!}{\includegraphics{fig-ising.pdf}}
\caption{The low-energy spectrum of the critical Ising chain of size $L = 14$.
The primary states are marked by $(I, \bar{I})$, $(\sigma, \bar{\sigma})$ and $(\psi, \bar{\psi})$ in the spectrum. The energy differences $\Delta E$ have been rescaled according to the scaling dimensions of the primary fields.
The conformal towers of descendant states are marked in the same color as the corresponding primary states, while the unidentified states are marked in blue.\label{fig:ising-spectrum}}
\end{figure}
From the discussions above, we see that the three primary states of Ising CFT all have total momentum $k_{\mathrm{tot}} = 0$, and they are all invariant under the lattice reflection.
As a result, the corresponding three primary fields all contribute to the Klein bottle partition function.
The quantum dimensions of the primary fields $I$, $\psi$, and $\sigma$ are, respectively, 1, 1, and $\sqrt{2}$, and the total quantum dimension is $\mathcal{D}=2$.
From Eq.~\eqref{eq:gforrationalCFT}, one can then obtain
\begin{equation}
g_{\mathrm{Ising}} = \frac{2 + \sqrt{2}}{2} ,
\end{equation}
where we have used $M_{a,a}=1 \, \forall a$ in the Ising CFT.
As a consistency check, one can compute the Klein bottle entropy for the critical Ising chain analytically and compare with the CFT prediction.
As shown in Ref.~{\onlinecite{tu_universal_2017}}, the CFT prediction and the exact solution are consistent with each other.
The details of the exact solution are presented in Appendix \ref{subsec:exacttoising}.
\section{The Klein bottle entropy of the compactified boson CFT}\label{sec:CBCFT}
In this section, we extend the results on Klein bottle entropy to compactified boson CFT, which contains both rational and nonrational CFTs.
As a central result of this work, we present the Klein bottle entropy of the compactified boson CFT, which provides direct access to the compactification radius $R$.
This result provides a practical numerical method to extract the Luttinger parameter of lattice models, due to the direct relation between the compactification radius and the Luttinger parameter.
As concrete examples, we first discuss the spin-$1 / 2$ XY chain in detail, which can be analyzed from both the rational $U(1)_4$ CFT and the compactified boson CFT perspectives. Next, we extend our discussion to the XXZ chain with $S=1/2$ and $S=1$ and numerically calculate the Klein bottle entropy in the critical phases of these models.
\subsection{CFT prediction}
In the free boson CFT, the descendant states in the Hilbert space are obtained by acting $j_{- k}$ and $\bar{j}_{- k}$ ($k > 0$) on the highest weight states $| \alpha \rangle$ as{~\cite{blumenhagen_introduction_2009}}
\begin{equation}
j_{- 1}^{n_1} j_{- 2}^{n_2} \ldots \bar{j}_{- 1}^{m_1} \bar{j}_{- 2}^{m_2}
\ldots | \alpha \rangle \text{ with } m_k, n_k \geqslant 0,
\end{equation}
where $j_{- k}$ ($\bar{j}_{- k}$) is the Laurent mode of the chiral current $j (z) = \mathrm{i} \partial_z \phi (z, \bar{z})$ (antichiral current $\bar{j} (\bar{z}) = \mathrm{i} \partial_{\bar{z}} \phi (z, \bar{z})$) with $\phi (z, \bar{z})$ being the free boson field.
For $k > 0$, $j_{- k}$ ($\bar{j}_{- k}$) plays the role of creation operator of the excitations in the holomorphic (antiholomorphic) sector, and $j_k$ ($\bar{j}_k$) is the annihilation operator of the excitations in the holomorphic (antiholomorphic) sector.
Highest weight states $| \alpha \rangle$ are those states which are annihilated by all annihilation operators, i.e., $j_k | \alpha \rangle = 0$, $\bar{j}_k | \alpha \rangle = 0$ $\forall k > 0$.
When the free boson is compactified on a circle, the highest weight states can be represented as $|n, m \rangle$, where $n, m \in \mathbbm{Z}$.
These states are eigenstates of $j_0$ and $\bar{j}_0$,
\begin{eqnarray}
j_0 |n, m \rangle & = & \left( \frac{n}{R} + \frac{R m}{2} \right) |n, m
\rangle, \label{eq:j0eigen}\\
\bar{j}_0 |n, m \rangle & = & \left( \frac{n}{R} - \frac{R m}{2} \right) |n,
m \rangle, \label{eq:j0bareigen}
\end{eqnarray}
where $R$ is the compactification radius. Here $n$ corresponds to the center-of-mass momentum, which is quantized due to the existence of the compactification radius $R$.
Meanwhile, $m$ is the winding number of the bosonic field, $\phi (x + L, t) \equiv \phi (x, t) + 2 \pi m R$.
To evaluate the Klein bottle partition function \eqref{klein-part} for compactified boson CFT, one needs to find all the states that are invariant under the reflection operator $\Omega$.
The operator $\Omega$ effectively interchanges the holomorphic and antiholomorphic sectors, or more concretely, in the present case,
\begin{equation}
\Omega^{-1} j_k \Omega = \bar{j}_k, \; k \in \mathbbm{Z}.
\end{equation}
To determine the reflected state of the highest weight state $|n, m \rangle$, one can act $j_0$ on $\Omega |n, m \rangle$ and get $j_0 \Omega |n, m \rangle = \Omega (\Omega^{- 1} j_0 \Omega) |n, m \rangle = \Omega \bar{j}_0 |n, m \rangle = \left( \frac{n}{R} - \frac{R m}{2} \right) \Omega |n, m \rangle .$
Thus, we have
\begin{equation}
\Omega |n, m \rangle = |n, - m \rangle,
\end{equation}
which indicates that only highest weight states with winding number $m = 0$ are left-right symmetric.
As a result, the symmetric states contributing to the Klein bottle partition function in Eq.~{\eqref{klein-part2}} can generally be expressed as
\begin{equation}
|n ; n_1, n_2, n_3, \ldots \rangle = j_{- 1}^{n_1} j_{- 2}^{n_2} \ldots
\bar{j}_{- 1}^{n_1} \bar{j}_{- 2}^{n_2} \ldots |n, 0 \rangle,
\label{eq:compbosonCFTsymm}
\end{equation}
where $n_k \geqslant 0, n \in \mathbbm{Z}$.
To evaluate Eq.~{\eqref{klein-part2}}, one can express the zeroth Virasoro generator in terms of $j_k$'s,
\begin{equation}
L_0 = \frac{1}{2} j_0 j_0 + \sum_{k = 1}^{\infty} j_{- k} j_k,
\end{equation}
and act $L_0$ on $|n ; n_1, n_2, n_3, \ldots \rangle$.
According to the commutation relations $[j_{k_1}, j_{k_2}] = k_1 \delta_{k_1 + k_2, 0}$ and $[j_{k_1}, \bar{j}_{k_2}] = 0$, we obtain $[j_{- k} j_k, j_{- k}^{n_k}] = k n_k j_{- k}^{n_k}$, and thus $|n ; n_1, n_2, n_3, \ldots \rangle$ is an eigenstate of $L_0$,
\begin{equation}
L_0 |n ; n_1, n_2, n_3, \ldots \rangle = \left( \frac{1}{2} \frac{n^2}{R^2}
+ \sum_{k = 1}^{\infty} k n_k \right) |n ; n_1, n_2, n_3, \ldots \rangle .
\end{equation}
Therefore, according to Eq.~{\eqref{klein-part2}}, the Klein bottle partition function can be expressed as
\begin{eqnarray}
  Z^{\mathcal{K}} & = & \sum_{n \in \mathbbm{Z}} \sum_{n_1, n_2, \ldots
  \geqslant 0} \langle n ; n_1, n_2, n_3, \ldots |q^{2 (L_0 - c / 24)} |n ;
  n_1, n_2, n_3, \ldots \rangle \nonumber\\
  & = & q^{- c / 12} \sum_{n \in \mathbbm{Z}} \sum_{n_1, n_2, \ldots
  \geqslant 0} q^{\frac{n^2}{R^2} + 2 \sum_{k \geqslant 1} k n_k} \nonumber\\
  & = & q^{- c / 12} \sum_{n \in \mathbbm{Z}} q^{\frac{n^2}{R^2}} \prod_{k =
  1}^{\infty} \frac{1}{1 - q^{2 k}} \nonumber\\
  & = & \theta_3 (2 \tau / R^2) \frac{1}{\eta (2 \tau)},
\end{eqnarray}
where $\eta (\tau) \equiv q^{1 / 24} \prod_{k = 1}^{\infty} (1 - q^k)$ is the Dedekind-$\eta$ function, and $\theta_3 (\tau) \equiv \sum_{n \in \mathbbm{Z}} q^{n^2 / 2}$ is Jacobi's theta function.
To further evaluate $Z^{\mathcal{K}} (\tau)$, one uses the modular transformation of Jacobi's theta and Dedekind-$\eta$ functions,
\begin{eqnarray}
\sqrt{- \mathrm{i} \tau} \theta_3 (\tau) & = & \theta_3 (- 1 / \tau), \\
\sqrt{- \mathrm{i} \tau} \eta (\tau) & = & \eta (- 1 / \tau),
\end{eqnarray}
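Both transformation laws can be verified numerically with truncated series, evaluated at a point on the positive imaginary axis where the series converge rapidly (a quick sketch; the truncation orders are ad hoc):

```python
import cmath

def theta3(tau, nmax=80):
    # theta_3(tau) = sum_{n in Z} q^{n^2/2}, with q = exp(2*pi*i*tau)
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 2 * sum(q ** (n * n / 2) for n in range(1, nmax))

def eta(tau, nmax=400):
    # eta(tau) = q^{1/24} prod_{n>=1} (1 - q^n)
    q = cmath.exp(2j * cmath.pi * tau)
    out = q ** (1.0 / 24.0)
    for n in range(1, nmax):
        out *= 1 - q ** n
    return out

tau = 0.7j  # test point on the positive imaginary axis
lhs_theta = cmath.sqrt(-1j * tau) * theta3(tau)
rhs_theta = theta3(-1 / tau)
lhs_eta = cmath.sqrt(-1j * tau) * eta(tau)
rhs_eta = eta(-1 / tau)
```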
and obtains
\begin{equation}
Z^{\mathcal{K}} = R \frac{\theta_3 (- R^2 / 2 \tau)}{\eta (- 1 / 2
\tau)} .
\end{equation}
Under the condition $L \gg v \beta$, we have
\begin{eqnarray}
  \theta_3 \left( \frac{- R^2}{2 \tau} \right) & \approx & 1 + 2 \mathrm{e}^{- \pi
  \frac{L R^2}{2 \beta v}}, \\
  \eta \left( - \frac{1}{2 \tau} \right) & \approx & \mathrm{e}^{- \frac{1}{24}
  \frac{\pi L}{\beta v}},
\end{eqnarray}
and therefore
\begin{equation}
  Z^{\mathcal{K}} (L, \beta) \approx R \mathrm{e}^{\frac{1}{24} \frac{\pi L}{\beta
  v}} .
\end{equation}
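As a cross-check, evaluating the exact series $Z^{\mathcal{K}} = \theta_3(2\tau/R^2)/\eta(2\tau)$ deep in the regime $L \gg v\beta$ reproduces this asymptotic form; a sketch with placeholder values of $R$, $v$, $\beta$, $L$:

```python
import cmath
import math

def theta3(tau, nmax=200):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 2 * sum(q ** (n * n / 2) for n in range(1, nmax))

def eta(tau, nmax=2000):
    q = cmath.exp(2j * cmath.pi * tau)
    out = q ** (1.0 / 24.0)
    for n in range(1, nmax):
        out *= 1 - q ** n
    return out

# placeholder parameters with L >> v*beta
R, v, beta, L = 1.7, 1.0, 1.0, 50.0
tau = 1j * v * beta / L

Z_K = theta3(2 * tau / R ** 2) / eta(2 * tau)
asymptotic = R * math.exp(math.pi * L / (24 * v * beta))
```

The residual corrections are of order $\mathrm{e}^{-\pi L R^2/(2 v \beta)}$ and are far below double precision for these parameters.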
Combining this with $Z^{\mathcal{T}} (L, \beta) \approx \mathrm{e}^{\frac{1}{6} \frac{\pi L}{\beta v}}$, we finally arrive at
\begin{equation}
g = \frac{Z^{\mathcal{K}} (2 L, \beta / 2)}{Z^{\mathcal{T}} (L, \beta)} = R,
\end{equation}
and the Klein bottle entropy is thus
\begin{equation}
\ln g = \ln R. \label{eq:compactifiedbosongkb}
\end{equation}
Equation {\eqref{eq:compactifiedbosongkb}} is the central result of this work.
When the square of the radius $R^2$ is not a rational number, this result goes beyond the scope of Ref.~{\onlinecite{tu_universal_2017}}, which focuses on rational CFTs.
Moreover, since the Luttinger liquid corresponds to a compactified boson CFT, and the Luttinger parameter has a direct relationship with the compactification radius, the simple relation Eq.~{\eqref{eq:compactifiedbosongkb}} allows us to determine the Luttinger parameter via computing the Klein bottle entropy of lattice models.
\subsection{XY chain}
As a concrete example to demonstrate the general result \Eq{eq:compactifiedbosongkb}, we first consider the case of the spin-1/2 XY model
\begin{equation}
H = - \sum_{i = 1}^L (S_i^x S^x_{i + 1} + S_i^y S_{i + 1}^y).
\label{eq:XYchain}
\end{equation}
This model is known to be described by the $U (1)_4$ CFT, which is an RCFT and, at the same time, a compactified boson CFT.
Meanwhile, the model also admits an exact solution via fermionization. Thus, it provides a convenient setting for consistency checks. As in the case of the critical Ising chain, we use the lattice reflection to interchange the left and right movers of the CFT.
By using the Jordan-Wigner transformation, the model (\ref{eq:XYchain}) is transformed into a spinless fermion model
\begin{equation}
H = - \frac{1}{2} \sum_{i = 1}^{L - 1} \left( f_i^{\dagger} f_{i + 1} +
\mathrm{h.c.} \right) + \frac{1}{2} Q (f_L^{\dagger} f_1 + f_1^{\dagger}
f_L), \label{eq:XYafterJW}
\end{equation}
where $Q = \mathrm{e}^{\mathrm{i} \pi \sum_{l = 1}^L n_l}$ is the fermion parity.
$Q$ is a conserved quantity and the Hilbert space splits into the Neveu-Schwarz ($Q = 1$ with even number of fermions) and Ramond ($Q = -1$ with odd number of fermions) sectors. In both sectors, the Hamiltonians take the same form
\begin{equation}
H_{\pm} = - \frac{1}{2} \sum_{i = 1}^L (f_i^{\dagger} f_{i + 1} + f_{i +
1}^{\dagger} f_i), \label{eq:XYspinlessfermoin}
\end{equation}
with antiperiodic (periodic) boundary condition $f_{L + 1} = - f_1$ ($f_{L + 1} = f_1$) for the Neveu-Schwarz (Ramond) sector. After a Fourier transformation, the Hamiltonian is expressed as
\begin{equation}
H_{\pm} = - \sum_k \cos k f_k^{\dagger} f_k
\label{eq:XYspinlessfermionmomentum}
\end{equation}
with the allowed momenta $k = \pm \frac{\pi}{L}, \pm \frac{3 \pi}{L}, \ldots, \pm \frac{(L - 1) \pi}{L}$ in the Neveu-Schwarz sector and $k = 0, \pm \frac{2 \pi}{L}, \ldots, \pm \frac{(L - 2) \pi}{L}, \pi$ in the Ramond sector (we choose $L = 4 m, m \in \mathbbm{N}$ for simplicity). The single-particle energy appearing in (\ref{eq:XYspinlessfermionmomentum}) will be denoted by $E_k = -\cos k$ below.
\subsubsection{Identification of the primary states}
Since the XY model is described by the rational $U(1)_4$ CFT, we start from the perspective of RCFT by identifying all the primary states of the XY chain in the fermion picture and analyzing their behavior under the lattice reflection, as in the case of TFIM.
We also present the energy spectrum of the XY chain of size $L=20$ obtained by means of exact diagonalization calculations, and then identify the primary states obtained from the fermionic picture in the spectrum as a separate check.
The ground state of the system is in the Neveu-Schwarz sector
\begin{equation}
| \mathrm{gs} \rangle = \prod_{| k | < \pi / 2} f^{\dagger}_k |0 \rangle ,
\end{equation}
and the ground-state energy is
\begin{equation}
E_{(I, \bar{I})} = - \sum_{| k | < \pi / 2} \cos k = - \frac{1}{\sin
\frac{\pi}{L}} .
\end{equation}
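Both the free-fermion sum and the closed form can be checked against a dense diagonalization of the spin Hamiltonian \eqref{eq:XYchain} on a small chain (a sketch for $L = 8$; recall that we take $L = 4m$):

```python
import numpy as np

def e_gs_fermion(L):
    # fill all Neveu-Schwarz modes with |k| < pi/2: E = -sum_k cos k
    ks = [(2 * j + 1) * np.pi / L for j in range(L // 2)]
    return -2 * sum(np.cos(k) for k in ks if k < np.pi / 2)

def e_gs_spin(L):
    # dense ED of H = -sum_i (S^x_i S^x_{i+1} + S^y_i S^y_{i+1}), periodic b.c.
    sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
    def op(o, i):
        m = np.array([[1.0 + 0j]])
        for j in range(L):
            m = np.kron(m, o if j == i else np.eye(2))
        return m
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L):
        H -= op(sx, i) @ op(sx, (i + 1) % L) + op(sy, i) @ op(sy, (i + 1) % L)
    return np.linalg.eigvalsh(H)[0]

L = 8
closed_form = -1.0 / np.sin(np.pi / L)
```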
The ground state has total momentum $k_{\mathrm{tot}} = 0$ and is clearly invariant under the lattice reflection. The ground state corresponds to the primary field $(I, \bar{I})$ in the $U (1)_4$ CFT.
This state is labeled by $(I, \bar{I})$ in the energy spectrum (see Fig.~\ref{fig:XYspectrum}).
The lowest excited state in the Neveu-Schwarz sector is obtained by creating a pair of fermions just above the Fermi surface
\begin{equation}
| \psi, \bar{\psi} \rangle = f_{k = \frac{\pi}{2} + \frac{\pi}{L}}^{\dagger}
f_{k = - \frac{\pi}{2} - \frac{\pi}{L}}^{\dagger} | \mathrm{gs} \rangle
\end{equation}
with energy
\begin{equation}
E_{(\psi, \bar{\psi})} = E_{(I, \bar{I})} + 2 \sin \frac{\pi}{L}.
\end{equation}
$| \psi, \bar{\psi} \rangle$ has total momentum
$k_{\mathrm{tot}} = 0$, and it is also invariant under the lattice reflection.
This state corresponds to the primary field $(\psi, \bar{\psi})$ in the $U (1)_4$ CFT, which has conformal weight $(h_{\psi}, \bar{h}_{\bar{\psi}}) = (1 / 2, 1 / 2)$.
We label it by $(\psi, \bar{\psi})$ in Fig.~\ref{fig:XYspectrum}.
There is also a degenerate state $f_{k = \frac{\pi}{2} - \frac{\pi}{L}} f_{k = - \frac{\pi}{2} + \frac{\pi}{L}} | \mathrm{gs} \rangle$, which is obtained by annihilating two fermions just below the Fermi surface.
In the Ramond sector, the lowest energy states are
\begin{eqnarray}
|s_+, \bar{s}_+ \rangle & = & \prod_{| k | \leqslant \frac{\pi}{2}}
f_k^{\dagger} |0 \rangle, \\
|s_-, \bar{s}_- \rangle & = & \prod_{| k | < \frac{\pi}{2}} f_k^{\dagger} |0
\rangle .
\end{eqnarray}
These two states are degenerate since $E_{k = \pm\frac{\pi}{2}} = 0$ (note that $k=\pm\frac{\pi}{2}$ are allowed for $L = 4 m, m \in \mathbbm{N}$).
The energy of these two states is
\begin{equation}
E_{(s_+, \bar{s}_+)} = E_{(s_-, \bar{s}_-)} = - \cot \frac{\pi}{L} .
\end{equation}
Both $|s_+, \bar{s}_+ \rangle$ and $|s_-, \bar{s}_- \rangle$ have total momentum 0 and they are invariant under the lattice reflection.
They correspond to the $U (1)_4$ CFT primary states $(s_+, \bar{s}_+)$ and $(s_-, \bar{s}_-)$ with conformal weight $(h_{s_+}, \bar{h}_{\bar{s}_+}) = (h_{s_-}, \bar{h}_{\bar{s}_-}) = (1 / 8, 1 / 8)$.
These two states are labeled by $(s_+, \bar{s}_+)$ and $(s_-, \bar{s}_-)$ in Fig.~\ref{fig:XYspectrum}.
\begin{figure}[!htb]
\resizebox{8.5cm}{!}{\includegraphics{fig-xxzed.pdf}}
\caption{ The energy spectrum of the XY model with $L = 20$.
The four primary states are marked by $(I, \bar{I})$, $(s_+, \bar{s}_+)$, $(s_-, \bar{s}_-)$ and $(\psi, \bar{\psi})$ in the spectrum.
The energy differences $\Delta E$'s have been rescaled according to the scaling dimensions of the primary states.
The states in the same conformal tower are marked with the same color as the corresponding primary state, while the unidentified states are marked in blue.
\label{fig:XYspectrum}}
\end{figure}
From Fig.~\ref{fig:XYspectrum}, one may notice that there are also low-energy states with total momentum $\pi$.
According to the discussion in Sec.~\ref{sec:latticereflection}, one may wonder whether these states contribute to the Klein bottle entropy with a $-1$ factor.
To clarify this, we also identify them in the fermionic picture.
In the Neveu-Schwarz sector, the lowest-energy states
with momentum $\pi$ are $f^{\dagger}_{k = \frac{\pi}{2} + \frac{\pi}{L}} f_{k
= - \frac{\pi}{2} + \frac{\pi}{L}} | \mathrm{gs} \rangle$ and $f_{k =
\frac{\pi}{2} - \frac{\pi}{L}} f^{\dagger}_{k = - \frac{\pi}{2} -
\frac{\pi}{L}} | \mathrm{gs} \rangle$ with energy $E_{(I, \bar{I})} + 2 \sin
\frac{\pi}{L}$.
In the Ramond sector, the lowest-energy states with momentum
$\pi$ are $f^{\dagger}_{k = \frac{\pi}{2} + \frac{2 \pi}{L}} f_{k = -
\frac{\pi}{2} + \frac{2 \pi}{L}} |s_{\pm}, \bar{s}_{\pm} \rangle$ and
$f^{\dagger}_{k = - \frac{\pi}{2} - \frac{2 \pi}{L}} f_{k = \frac{\pi}{2} -
\frac{2 \pi}{L}} |s_{\pm}, \bar{s}_{\pm} \rangle$ with energy $E_{(s_{\pm},
\bar{s}_{\pm})} + 2 \sin \frac{2 \pi}{L}$.
These states are created via the Umklapp process,
and one can easily check that they do not contribute to the Klein bottle partition function, since they are not left-right symmetric.
From the discussion above, we find that the four primary states of the XY model are all invariant under the lattice reflection operation. These four primary fields are Abelian with quantum dimension $d_a=1$ (total quantum dimension $\mathcal{D}=2$). According to Eq.~{\eqref{eq:gforrationalCFT}}, one has
\begin{equation}
g_{\mathrm{XY}} = 2, \label{eq:xychainRCFT}
\end{equation}
since $M_{a,a} = 1 \, \forall a$ in the $U(1)_4$ CFT. This result agrees with the exact solution of the XY model, as shown in Ref.~{\onlinecite{tu_universal_2017}}.
We include the details of the exact solution in Appendix \ref{sec:exacttoXYZs}.
\subsubsection{Identification of the compactification radius}\label{sec:determineR}
From the perspective of the compactified boson CFT, determining the compactification radius is crucial for predicting the value of the Klein bottle entropy.
It is well known that this category of CFTs possesses a duality, called T duality~\cite{francesco_conformal_1999}, under which the torus partition function and the spectrum are invariant under the interchange $R \leftrightarrow 2/R$.
As indicated by Eq.~\eqref{eq:compactifiedbosongkb}, the T duality is broken on the Klein bottle.
When Eq.~\eqref{eq:compactifiedbosongkb} is applied to lattice models, the continuum field theory alone cannot determine which of the two radii should be chosen.
Therefore, in lattice models, we need to construct the boson field starting from the microscopic model, and analyze the effect of the lattice reflection in order to determine the compactification radius that should be used.
The low-energy excitations of the XY model are described by a noninteracting Luttinger liquid, from which the compactified boson theory is obtained via bosonization.
Following Ref.~{\onlinecite{von_delft_bosonization_1998}}, the Hamiltonian of the Luttinger liquid reads
\begin{equation}
H = \frac{v_F}{2 \pi} \int_{- L / 2}^{L / 2} \mathrm{d} x : \left[
\psi^{\dagger}_{\mathrm{L}} (x) \mathrm{i} \partial_x \psi_{\mathrm{L}} (x)
+ \psi_{\mathrm{R}}^{\dagger} (x) (- \mathrm{i} \partial_x)
\psi_{\mathrm{R}} (x) \right] :, \label{eq:TLLmodelH}
\end{equation}
where $v_F$ is the Fermi velocity.
The normal ordering is defined by $: A := A - \langle A \rangle_{\mathrm{gs}}$, where $\langle A \rangle_{\mathrm{gs}}$ represents the expectation value of the operator $A$ in the ground state $|\mathrm{gs} \rangle$.
The fermion fields $\psi_{\mathrm{L}} (x)$ and $\psi_{\mathrm{R}} (x)$ are defined by $\Psi (x) = \mathrm{e}^{- \mathrm{i} k_F x} \psi_{\mathrm{L}} (x) + \mathrm{e}^{\mathrm{i} k_F x} \psi_{\mathrm{R}} (x)$, where the fermion field $\Psi (x)$ is introduced from the spinless fermion model in Eq.~\eqref{eq:XYafterJW},
\begin{eqnarray}
\Psi (x) & = & \left( \frac{2 \pi}{L} \right)^{1 / 2} \sum_{p = -
\infty}^{\infty} \mathrm{e}^{\mathrm{i} p x} f_p \\
& = & \left( \frac{2 \pi}{L} \right)^{1 / 2} \sum^{\infty}_{k = - \infty}
\left( \mathrm{e}^{- \mathrm{i} (k_F + k) x} f_{k, \mathrm{L}} +
\mathrm{e}^{\mathrm{i} (k_F + k) x} f_{k, \mathrm{R}} \right),
\label{eq:fermionfield2}
\end{eqnarray}
where $f_{k, \mathrm{L} / \mathrm{R}} \equiv f_{\mp (k + k_F)}$.
In the Luttinger liquid, the energy spectrum is linearized, $\varepsilon_{k, \mathrm{L} / \mathrm{R}} = v_F k$, and the range of $k$ is extended to $(- \infty, + \infty)$ in order to apply the bosonization approach.
The Hamiltonian is then expressed as
\begin{equation}
H = \sum_{k = - \infty}^{\infty} \sum_{\eta = \mathrm{L,R}} v_F k : f_{k
\eta}^{\dagger} f_{k \eta} : .
\end{equation}
In terms of the fermion modes $f_{k, \mathrm{L/R}}$, the fermion field $\psi_{\mathrm{L/R}} (x)$ can be written as $\psi_{\mathrm{L/R}} (x) = (2 \pi / L)^{1 / 2} \sum_{k = - \infty}^{+ \infty} \mathrm{e}^{\mp \mathrm{i} k x} f_{k, \mathrm{L/R}}$, and the fermion density $\rho_{\mathrm{L/R}} \equiv : \psi^{\dagger}_{\mathrm{L/R}} \psi_{\mathrm{L/R}} :$ is expressed as
\begin{equation}
\rho_{\mathrm{L/R}}(x) = \frac{2 \pi}{L} \sum_q \mathrm{e}^{\mp \mathrm{i} q x}
\sum_k : f_{k - q, \mathrm{L/R}}^{\dagger} f_{k, \mathrm{L/R}} : = \sum_q
\mathrm{e}^{\mp \mathrm{i} q x} \rho_{q, \mathrm{L/R}},
\end{equation}
where we have introduced $\rho_{q, \mathrm{L/R}} = \frac{2 \pi}{L} \sum_k : f_{k - q, \mathrm{L/R}}^{\dagger} f_{k, \mathrm{L/R}} :$.
For $q = 0$, $\rho_{q, \mathrm{L/R}} = \frac{2 \pi}{L} n_{\mathrm{L/R}}$ is proportional to the number of fermions in the left/right-moving sector, while for $q < 0$ ($q > 0$), $\rho_{q, \mathrm{L/R}}$ creates (annihilates) particle-hole excitations in the corresponding sector.
Under the lattice reflection, according to Eq.~{\eqref{latticePMomentum}}, one can easily check that $P f^{\dagger}_{k, \mathrm{L/R}} P = \mathrm{e}^{\mp \mathrm{i} (L + 1) (k + k_F)} f^{\dagger}_{k, \mathrm{R/L}} Q$, then
\begin{eqnarray}
P \rho_{q, \mathrm{L/R}} P & = & \frac{2 \pi}{L} \sum_k \left[ P f_{k - q,
\mathrm{L/R}}^{\dagger} f_{k, \mathrm{L/R}} P - \langle f_{k - q,
\mathrm{L/R}}^{\dagger} f_{k, \mathrm{L/R}} \rangle_{\mathrm{gs}} \right]
\nonumber\\
& = & \mathrm{e}^{\pm \mathrm{i} q (L + 1)} \frac{2 \pi}{L} \sum_k : f_{k -
q, \mathrm{R/L}}^{\dagger} f_{k, \mathrm{R/L}} : \nonumber\\
& = & \mathrm{e}^{\pm \mathrm{i} q} \rho_{q, \mathrm{R/L}},
\label{eq:latticereflectdensity}
\end{eqnarray}
where we have used the fact that the ground state is left-right symmetric and the fermion density $\rho_{\mathrm{L/R}}(x)$ should be periodic in $x$. The modes of the fermion density in the left-moving and right-moving sectors are indeed interchanged under the lattice reflection.
For $q = 0$, the phase factor $\mathrm{e}^{\pm \mathrm{i} q} = 1$, which implies that the numbers of fermions in the left and right sectors are interchanged under the lattice reflection.
For $q \neq 0$, there would be a nonvanishing phase factor.
However, the phase factor would cancel for left-right-symmetric states, while those states that are not symmetric have no contribution to the Klein bottle partition function.
The bosonization method is based on the fact that particle-hole excitations in one dimension are bosonic in nature, due to the commutation relation $[\rho_{q \eta}, \rho_{- q' \eta}] = \frac{2 \pi}{L} q \delta_{q q'} \, (q, q' > 0)$. In fact, one can construct the left/right-moving boson fields in terms of the particle-hole excitations,
\begin{equation}
\phi_{\mathrm{L/R}} (x) = \sum_{q \neq 0} \frac{\mathrm{i}}{q} \mathrm{e}^{-
a q / 2} \mathrm{e}^{\mp \mathrm{i} q x} \rho_{q, \mathrm{L/R}} \pm \rho_{0,
\mathrm{L/R}} x, \label{eq:notimeboson}
\end{equation}
where $a > 0$ is an infinitesimal parameter that regularizes ultraviolet-divergent momentum summations (not to be confused with the primary-state label $a$). The fermion density satisfies
\begin{equation}
\pm \partial_x \phi_{\mathrm{L/R}} (x) = \rho_{\mathrm{L/R}}(x),
\label{eq:fermiondensityandboson}
\end{equation}
and the bosonization identity is
\begin{equation}
\psi_{\mathrm{L/R}} (x) = a^{- 1 / 2} \mathrm{e}^{\pm \mathrm{i}
\frac{\pi}{L} x} F_{\mathrm{L/R}} \mathrm{e}^{- \mathrm{i}
\phi_{\mathrm{L/R}} (x)},
\end{equation}
where $F_{\mathrm{L/R}}$ is the Klein factor, which decreases the fermion number in the left-moving (right-moving) branch by one.
To obtain the time dependence of the boson field $\phi_{\mathrm{L/R}}$, one can first write the linearized Hamiltonian in terms of the modes of the particle-hole excitations{~\cite{von_delft_bosonization_1998}},
\begin{equation}
H = \frac{v_F L}{2 \pi} \sum_{\eta = \mathrm{L,R}} \left( \sum_{q > 0}
\rho_{- q \eta} \rho_{q \eta} + \frac{1}{2} \rho_{0 \eta}^2 \right) .
\end{equation}
Using the imaginary-time Heisenberg picture $A (\tau) = \mathrm{e}^{H \tau} A \mathrm{e}^{- H \tau}$, it is straightforward to obtain
\begin{eqnarray}
\rho_{q \eta} (\tau) & = & \rho_{q \eta} \mathrm{e}^{- v_F q \tau}, \\
F_{\eta} (\tau) & = & F_{\eta} \mathrm{e}^{- v_F \rho_{0 \eta} \tau}
\mathrm{e}^{\frac{\pi}{L} v_F \tau} .
\end{eqnarray}
Formally, one can absorb the time dependence $\mathrm{e}^{- v_F \rho_{0 \eta} \tau}$ of the Klein factor into the boson field.
The time-dependent boson field then becomes
\begin{equation}
\phi_{\mathrm{L/R}} (x, \tau) = \sum_{q \neq 0} \frac{\mathrm{i}}{q}
\mathrm{e}^{- a q / 2} \mathrm{e}^{- q (\pm \mathrm{i} x + v_F \tau)}
\rho_{q, \mathrm{L/R}} - \mathrm{i} \rho_{0, \mathrm{L/R}} (\pm \mathrm{i} x
+ v_F \tau) . \label{eq:timeboson}
\end{equation}
One can see that $\phi_{\mathrm{L/R}} (x, \tau)$ only depends on $\xi \equiv \mathrm{i} x + v_F \tau$ and $\bar{\xi} \equiv - \mathrm{i} x + v_F \tau$, respectively. The bosonization identity becomes
\begin{equation}
\psi_{\mathrm{L/R}} (x, \tau) = a^{- 1 / 2} \mathrm{e}^{\frac{\pi}{L} (v_F
\tau \pm \mathrm{i} x)} F_{\mathrm{L/R}} \mathrm{e}^{- \mathrm{i}
\phi_{\mathrm{L/R}} (x, \tau)},
\end{equation}
where the Klein factor $F_{\mathrm{L/R}}$ has no time dependence.
Next, from the left/right-moving boson field one can construct a pair of dual fields
\begin{eqnarray}
\phi (x, \tau) & = & \phi_{\mathrm{L}} (x, \tau) + \phi_{\mathrm{R}} (x,
\tau), \label{eq:dualboson1}\\
\theta (x, \tau) & = & \phi_{\mathrm{L}} (x, \tau) - \phi_{\mathrm{R}} (x,
\tau) . \label{eq:dualboson2}
\end{eqnarray}
By writing down $\phi (x, \tau)$ and $\theta (x, \tau)$ explicitly, one can see that the dual boson fields are compactified
bosons{~\cite{francesco_conformal_1999}},
\begin{eqnarray}
\phi (x, \tau) & = & \frac{2 \pi}{L} \left( n_{\mathrm{L}} - n_{\mathrm{R}}
\right) x - \frac{2 \pi}{L} \left( n_{\mathrm{L}} + n_{\mathrm{R}} \right)
(\mathrm{i} v_F \tau) \nonumber\\
& & + \sum_{q \neq 0} \frac{\mathrm{i}}{q} \mathrm{e}^{- a q / 2} \left(
\mathrm{e}^{- q (\mathrm{i} x + v_F \tau)} \rho_{q \mathrm{L}} +
\mathrm{e}^{- q (- \mathrm{i} x + v_F \tau)} \rho_{q \mathrm{R}} \right),
\nonumber\\
& & \\
\theta (x, \tau) & = & \frac{2 \pi}{L} \left( n_{\mathrm{L}} +
n_{\mathrm{R}} \right) x - \frac{2 \pi}{L} \left( n_{\mathrm{L}} -
n_{\mathrm{R}} \right) (\mathrm{i} v_F \tau) \nonumber\\
& & + \sum_{q \neq 0} \frac{\mathrm{i}}{q} \mathrm{e}^{- a q / 2} \left(
\mathrm{e}^{- q (\mathrm{i} x + v_F \tau)} \rho_{q \mathrm{L}} -
\mathrm{e}^{- q (- \mathrm{i} x + v_F \tau)} \rho_{q \mathrm{R}} \right),
\nonumber\\
& &
\end{eqnarray}
whose ``zero modes'' $\phi_0$ and $\theta_0$ have already been absorbed into the Klein factor{~\cite{von_delft_bosonization_1998}}.
In the Neveu-Schwarz sector, $n_{\mathrm{L}} + n_{\mathrm{R}} \in 2\mathbbm{Z}$, so $n_{\mathrm{L}} - n_{\mathrm{R}} \in 2\mathbbm{Z}$.
In the Ramond sector, $n_{\mathrm{L}} + n_{\mathrm{R}} \in 2\mathbbm{Z}+ 1$.
However, one needs to note that there exists a fermion mode with zero momentum in the Ramond sector [created by $f^{\dagger}_{k=0}$ in Eq.~\eqref{eq:XYspinlessfermionmomentum}], which is always occupied in the low-energy description and belongs to neither the left-moving nor right-moving sectors.
Formally we can ``split'' this state and treat $n_{\mathrm{L}}$ and $n_{\mathrm{R}}$ as half-integers, i.e., $n_{\mathrm{L/R}} = n'_{\mathrm{L/R}} + 1 / 2$ with $n_{\mathrm{L/R}}' \in \mathbbm{Z}$. Therefore $n'_{\mathrm{L}} + n_{\mathrm{R}}' \in 2\mathbbm{Z}$ and $n_{\mathrm{L}} - n_{\mathrm{R}} = n'_{\mathrm{L}} - n_{\mathrm{R}}' \in 2\mathbbm{Z}$. As a result, in both sectors $n_{\mathrm{L}} - n_{\mathrm{R}} \in 2\mathbbm{Z}$ and $n_{\mathrm{L}} + n_{\mathrm{R}} \in \mathbbm{Z}$, so $\phi$ has radius $R = 2$ [note that $\phi (x + L, t) = \phi (x, t) + 2 \pi (n_L-n_R)$] and $\theta$ has radius $R = 1$ [note that $\theta (x + L, t) = \theta (x, t) + 2 \pi (n_L+n_R)$].
The low-energy physics of the XY model is thus described by two seemingly distinct boson fields with different compactification radius, and the two radii are related by the T duality~\cite{francesco_conformal_1999}.
On the other hand, in the compactified boson CFT, the Laurent modes $j_q$ (respectively $\bar{j}_q$, the antiholomorphic counterpart) of the $U (1)$ current $j = \mathrm{i} \partial_z \phi$ ($\bar{j} = \mathrm{i} \partial_{\bar{z}} \phi$) of the $R = 2$ boson correspond to the modes of the fermion density in the left-moving (right-moving) sector, where $z \equiv \mathrm{e}^{2 \pi \xi / L} = \mathrm{e}^{2 \pi (\mathrm{i} x + v_F \tau) / L}$ ($\bar{z} \equiv \mathrm{e}^{2 \pi \bar{\xi} / L} = \mathrm{e}^{2 \pi (-\mathrm{i} x + v_F \tau) / L}$).
Comparing Eqs.~{\eqref{eq:notimeboson}}, \eqref{eq:fermiondensityandboson} and \eqref{eq:timeboson}, and noting $\partial_z = \frac{1}{z} \frac{L}{2 \pi} \partial_{\xi}$ and $\partial_{\bar{z}} = \frac{1}{\bar{z}} \frac{L}{2 \pi} \partial_{\bar{\xi}}$, one can see that
\begin{equation}
j_q = \frac{L}{2 \pi} \rho_{q, \mathrm{L}}, \; \bar{j}_q = \frac{L}{2 \pi}
\rho_{q, \mathrm{R}} . \label{eq:jqrouqcorresp}
\end{equation}
Meanwhile, for the field $\theta$ with radius $R = 1$,
\begin{equation}
j'_q = \frac{L}{2 \pi} \rho_{q, \mathrm{L}}, \; \bar{j}'_q = - \frac{L}{2 \pi}
\rho_{q, \mathrm{R}},
\end{equation}
where $j'_q = \mathrm{i} \partial_z \theta$ and $\bar{j}' = \mathrm{i} \partial_{\bar{z}} \theta$.
According to Eq.~{\eqref{eq:latticereflectdensity}} and the discussions below, for the field $\phi$ with $R=2$, the lattice reflection $P$ indeed interchanges the holomorphic and the antiholomorphic sectors, up to a factor that will cancel in the symmetric states.
In contrast, for its dual field $\theta$ with $R=1$, the lattice reflection will introduce an additional minus sign.
Therefore, one can see that the bond-centered lattice reflection in the XXZ model matches the reflection operation in the CFT Hilbert space of the field $\phi$ with $R = 2$, instead of its dual field $\theta$. Using Eq.~{\eqref{eq:compactifiedbosongkb}}, we get
\begin{equation}
\ln g = \ln R = \ln 2.
\end{equation}
This is consistent with the RCFT result in Eq.~\eqref{eq:xychainRCFT}.
What is more, from the above discussion, one can gain a physical interpretation of the states in Eq.~{\eqref{eq:compbosonCFTsymm}} that contribute the Klein bottle entropy.
From Eq.~{\eqref{eq:jqrouqcorresp}}, the highest-weight states $|n, m \rangle$ are annihilated by any $j_q$ $(q > 0)$, which annihilates particle-hole excitations; hence $|n, m \rangle$ represents Fermi-sea states, with the numbers of fermions in the left-moving and right-moving sectors being $\frac{n}{2} + m$ and $\frac{n}{2} - m$, respectively ($n, m \in \mathbbm{Z}$).
Therefore, $j_{- 1}^{n_1} j_{- 2}^{n_2} \ldots \bar{j}_{- 1}^{n_1} \bar{j}_{- 2}^{n_2} \ldots |n, 0 \rangle$ represents a state with the same number of fermions and the same particle-hole excitations in the two sectors, which is manifestly left-right symmetric and thus contributes to the Klein bottle entropy.
\subsection{XXZ model}\label{sec:introduceXXZ}
Next, we consider the spin-1/2 XXZ model by adding a nearest-neighboring Ising interaction to the XY chain,
\begin{equation}
H = - \sum_{i = 1}^L (S_i^x S^x_{i + 1} + S_i^y S_{i + 1}^y) + \Delta
\sum_{i = 1}^L S_i^z S_{i + 1}^z, \label{eq:xxzhamiltonian}
\end{equation}
where $\Delta$ is an anisotropy coefficient.
For $- 1 < \Delta \leqslant 1$, the system is in the Luttinger liquid phase, and its low-energy physics can be described by a compactified boson CFT{~\cite{giamarchi_quantum_2003}}.
For generic values of $\Delta$ within this phase, the CFT prediction for the Klein bottle entropy is the first result that goes beyond the RCFT results of Ref.~\onlinecite{tu_universal_2017}.
As in the case of the XY model, the XXZ model can be transformed into a spinless fermion model by the Jordan-Wigner transformation, with an additional interaction term $H_{\mathrm{int}}$ compared to the XY model, i.e., $H = H_{\mathrm{0}} + H_{\mathrm{int}}$, where $H_{\mathrm{0}}$ is the noninteracting fermion model obtained from the XY model and $H_{\mathrm{int}}$ reads
\begin{equation}
H_{\mathrm{int}} = \Delta \sum_{i = 1}^L \left( f_i^{\dagger} f_i -
\frac{1}{2} \right) \left( f_{i + 1}^{\dagger} f_{i + 1} - \frac{1}{2}
\right) . \label{eq:interactioninXXZ}
\end{equation}
Since the fermion parity $Q$ is still conserved, we can again split the Hilbert space into the Neveu-Schwarz and Ramond sectors with different fermion parities $Q = \pm 1$ and boundary conditions $f_{L + 1} = \mp f_1$. Within the bosonization framework one can obtain the underlying compactified boson CFT of this system, which leads to the CFT prediction for the Klein bottle entropy in this model.
\subsubsection{CFT prediction}
To obtain the compactified boson description of the XXZ model, we pass to the continuum limit.
The interaction term $H_{\mathrm{int}}$ can be written as{~\cite{affleck_field_1989}}
\begin{equation}
H_{\mathrm{int}} = H_{\mathrm{d-d}} + H_{\mathrm{Umklapp}},
\end{equation}
where we have introduced the local fermion-fermion interaction term $H_{\mathrm{d-d}}$ and the Umklapp term $H_{\mathrm{Umklapp}}$ that scatters fermions between the two sectors,
\begin{eqnarray}
H_{\mathrm{d-d}} & = & \Delta \int_{- L / 2}^{L / 2} \frac{\mathrm{d} x}{2
\pi} : \left( \rho_{\mathrm{L}}^2 + \rho_{\mathrm{R}}^2 + 4
\rho_{\mathrm{L}} \rho_{\mathrm{R}} \right) :, \\
H_{\mathrm{Umklapp}} & = & - 2 \Delta \int_{- L / 2}^{L / 2}
\frac{\mathrm{d} x}{2 \pi} : \left[ \left( \psi_{\mathrm{L}}^{\dagger}
\psi_{\mathrm{R}} \right)^2 + \mathrm{h.c.} \right] : .
\end{eqnarray}
In the Luttinger liquid phase, by renormalization group analysis, the Umklapp process $H_{\mathrm{Umklapp}}$ is irrelevant for $- 1 < \Delta < 1$ and marginally irrelevant at the Heisenberg point $\Delta = 1$, while for $\Delta > 1$ $H_{\mathrm{Umklapp}}$ becomes relevant and introduces a mass term which causes the system to be gapped~\cite{giamarchi_quantum_2003}.
For $- 1 < \Delta \leqslant 1$, the interaction generally renormalizes the parameters and the interaction term $H_{\mathrm{d-d}}$ can be written as
\begin{equation}
H_{\mathrm{d-d}} = \int_{- L / 2}^{L / 2} \frac{\mathrm{d} x}{2 \pi} :
\left[ \frac{1}{2} g_4 \left( \rho_{\mathrm{L}}^2 + \rho_{\mathrm{R}}^2
\right) + g_2 \rho_{\mathrm{L}} \rho_{\mathrm{R}} \right] :,
\end{equation}
where $g_2$ and $g_4$ are undetermined coefficients which depend on the specific choice of the parameter $\Delta$. Correspondingly, the kinetic term $H_0$ can be represented in terms of the fermion densities,
\begin{equation}
H_0 = \frac{v_F}{2 \pi} \int_{- L / 2}^{L / 2} \mathrm{d} x : \frac{1}{2}
\left( \rho_{\mathrm{L}}^2 + \rho_{\mathrm{R}}^2 \right) :,
\end{equation}
and the total Hamiltonian $H = H_{\mathrm{0}} + H_{\mathrm{d-d}}$ can be written in a diagonalized form as
\begin{equation}
H = \frac{v}{4} \int_{- L / 2}^{L / 2} \frac{\mathrm{d} x}{2 \pi} : \left[
\frac{1}{K} \left( \rho_{\mathrm{L}} + \rho_{\mathrm{R}} \right)^2 + K
\left( \rho_{\mathrm{L}} - \rho_{\mathrm{R}} \right)^2 \right] :,
\end{equation}
where $v = \sqrt{(v_F + g_4)^2 - g_2^2}$ is the velocity and $K = \sqrt{\frac{v_F + g_4 - g_2}{v_F + g_4 + g_2}}$ is called the Luttinger parameter, which is usually used as a parametrization of the interaction strength in the system.
Generally, the values of $v$ and $K$ cannot be reliably obtained from field theory calculations and one has to resort to the microscopic models to determine their values.
In the case of the spin-1/2 XXZ model, $v$ and $K$ are determined by the Bethe ansatz solution{~\cite{giamarchi_quantum_2003}}
\begin{eqnarray}
K & = & \frac{\pi}{2 (\pi - \cos^{- 1} \Delta)}, \\
v & = & \frac{\pi}{2} \frac{\sqrt{1 - \Delta^2}}{\cos^{- 1} \Delta} .
\end{eqnarray}
According to {\eqref{eq:fermiondensityandboson}}, {\eqref{eq:dualboson1}} and {\eqref{eq:dualboson2}}, the Hamiltonian can be expressed in terms of the boson fields,
\begin{equation}
H = \frac{v}{4} \int_{- L / 2}^{L / 2} \frac{\mathrm{d} x}{2 \pi} : \left[
\frac{1}{K} (\partial_x \theta)^2 + K (\partial_x \phi)^2 \right] : .
\end{equation}
Note that in the noninteracting case (XY limit), $g_2 = g_4 = 0$, and $v = K = 1$.
As a result, the interaction effectively rescales the compactified bosons as $\Theta = \frac{\theta}{\sqrt{K}}, \Phi = \sqrt{K} \phi$, and the Hamiltonian becomes
\begin{equation}
H = \frac{v}{4} \int_{- L / 2}^{L / 2} \frac{\mathrm{d} x}{2 \pi} :
[(\partial_x \Theta)^2 + (\partial_x \Phi)^2] :,
\end{equation}
where $\Theta$ has radius $1 / \sqrt{K}$ and $\Phi$ has radius $2 \sqrt{K}$.
According to the discussion in the case of the XY model in Sec.~\ref{sec:determineR}, one concludes that the Klein bottle entropy should be calculated based on the boson field $\Phi$, which gives
\begin{equation}
g = 2 \sqrt{K} = \sqrt{\frac{2 \pi}{\pi - \cos^{- 1} \Delta}} . \label{eq:CFTofxxzspinhalf}
\end{equation}
\subsubsection{Numerical results} \label{subsubsec:numericalResultofXXZ}
Next we verify Eq.~\eqref{eq:CFTofxxzspinhalf} numerically.
We employ a quantum Monte Carlo simulation in the XXZ chain, by calculating the partition function ratio $g = Z^{\mathcal{K}} (2 L, \beta / 2) / Z^{\mathcal{T}} (L, \beta)$ using an improved version of the extended ensemble Monte Carlo method{~\cite{tang_universal_2017}}.
We include the details of the algorithm in Appendix \ref{sec:extendedMCmethod}.
\begin{figure}[!htb]
\resizebox{8.5cm}{!}{\includegraphics{fig-XXZ-gK.pdf}}
\caption{ Comparison of the QMC result of the Klein bottle entropy with the CFT prediction.
The error bars are smaller than the data points.
In the QMC calculation, the parameters are chosen as $L = 440, \beta = 44$, where $g$ is calculated by $Z^{\mathcal{K}} (2 L, \beta / 2) / Z^{\mathcal{T}} (L, \beta)$, according to Eq.~{\eqref{eq:Kleinbottleexpression}}. \label{fig:XXZgkb}}
\end{figure}
As shown in Fig.~\ref{fig:XXZgkb}, the numerical results and the CFT predictions are in good agreement, except in the vicinity of $\Delta = 1$.
The slight deviation may originate from the marginally irrelevant Umklapp process at $\Delta = 1$.
A similar slight deviation was observed in the $q = 4$ Potts model, which also hosts a marginally irrelevant term{~\cite{tang_universal_2017}}.
The remarkable agreement of the CFT prediction and the QMC numerical results indicates that the Klein bottle entropy can be a reliable tool to extract the Luttinger parameter $K$ from the lattice models.
Compared with the existing methods{~\cite{giamarchi_quantum_2003,song_general_2010,song_bipartite_2012,dalmonte_critical_2012,dalmonte_estimating_2011,lauchli_operator_2013,alcaraz_in_preparation_2018}},
the advantage of the present approach is that one can obtain the Luttinger parameter directly by calculating the Klein bottle entropy in a finite-temperature calculation, without any fitting procedure.
\subsection{Spin-1 XXZ model}
Next we apply our QMC method to the more challenging spin-1 XXZ model,
where the Hamiltonian still takes the form of Eq.~{\eqref{eq:xxzhamiltonian}},
but the operators $S^{\nu} (\nu = x, y, z)$ are now spin-1 operators.
We calculate the Klein bottle entropy in this model for $- 1 < \Delta \leqslant 1$.
The spin-1 XXZ model is in the Luttinger liquid phase only in the range $- 1 < \Delta \leqslant 0$, while for $0 < \Delta \leqslant 1$ the system is in the massive Haldane phase with a finite energy gap{~\cite{botet_ground-state_1983,alcaraz_critical_1992,kitazawa_phase_1996}}.
In contrast to the spin-1/2 case, the spin-1 XXZ model cannot be solved exactly.
According to the relation $g=R=2\sqrt{K}$, our numerical results for the Klein bottle entropy can be used, conversely, to determine the Luttinger parameter $K$.
We can compare our numerical result with the conjecture proposed in Ref.~{\onlinecite{alcaraz_critical_1992}} for the Luttinger parameter in the spin-$S$ XXZ model, $K_S = 2 S K_{S = 1 / 2}$, which is equivalent to
\begin{equation}
g_S = \sqrt{2 S} g_{S = 1 / 2}. \label{eq:spin1conjecture}
\end{equation}
For $S = 1$, we have $g_{S = 1} = \sqrt{2} g_{S = 1 / 2}$.
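The equivalence between the two forms of the conjecture can be checked directly; the sketch below combines $K_S = 2 S K_{S = 1/2}$ with $g = 2\sqrt{K}$, using the spin-1/2 Bethe-ansatz formula quoted earlier:

```python
import numpy as np

def K_half(delta):
    # Spin-1/2 Bethe-ansatz Luttinger parameter
    return np.pi / (2 * (np.pi - np.arccos(delta)))

def g_conjectured(S, delta):
    # Conjecture K_S = 2*S*K_{1/2} combined with g = 2*sqrt(K_S),
    # which is equivalent to g_S = sqrt(2S)*g_{1/2}
    return 2 * np.sqrt(2 * S * K_half(delta))

print(g_conjectured(0.5, 0.0))  # recovers the spin-1/2 XY value g = 2
print(g_conjectured(1, 0.0))    # spin-1 XY point: 2*sqrt(2) ~ 2.828
```

The value $2\sqrt{2}$ at the spin-1 XY point is the exact result discussed below.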
As shown in Fig.~\ref{fig-spin1-gkb}, the conjectured formula is in good agreement with the numerical results up to some small deviations.
The conjecture \eqref{eq:spin1conjecture} was proposed based on the finite-size-scaling results of the exact diagonalization data~\cite{alcaraz_critical_1992}, and currently there is no rigorous proof for this conjecture.
However, based on the symmetry analysis, one can show that the XY ($\Delta = 0$) point of this conjecture is exact.
On the one hand, in the context of the continuous field theory, it is known that there is an inherent SU(2) symmetry in the Berezinskii-Kosterlitz-Thouless (BKT) transition point~\cite{halpern_quantum_1975,halpern_equivalent-boson_1976,banks_bosonization_1976,ginsparg_curiosities_1988,nomura_su2_1998}.
At the BKT transition point, the Luttinger parameter is restricted, and possible choices include $K=1/2$ (corresponding to the SU(2)$_1$ Wess-Zumino-Witten model) and $K=2$~\cite{nomura_su2_1998}.
On the other hand, a hidden SU(2) symmetry was found in the spin-1 XY model~\cite{kitazawa_su_2003}.
Together with the exact diagonalization results given by Ref.~{\onlinecite{alcaraz_critical_1992}}, which indicate $K=2$ in the spin-1 XY model,
one can infer that the BKT transition between the Luttinger-liquid phase and the Haldane phase is located exactly at the XY point ($\Delta = 0$),
and this point precisely corresponds to $g=R=2\sqrt{2}$.
As one can see in Fig.~\ref{fig-spin1-gkb}, there exists some small deviation between our numerical result and the exact result at the XY point, which is again attributed to the marginally irrelevant term, since the BKT transition is driven by the marginal operator.
We also calculate the Klein bottle entropy out of the critical region into the gapped Haldane phase.
As $\Delta$ passes the BKT transition point $\Delta = 0$, the originally degenerate values of the Klein bottle entropy at different $\beta$ and $L$ start to deviate from each other, as shown in Fig.~\ref{fig-spin1-gkb}.
The deviation starts exactly at the BKT transition point between the two phases~\cite{chen_conformal_2017}.
We note that determining the BKT transition point from numerical calculations is a difficult task, since the energy gap in the gapped phase closes exponentially as the BKT transition point is approached~\cite{kosterlitz_ordering_1973, kosterlitz_critical_1974}.
In the Haldane phase, the value of the Klein bottle entropy will eventually converge to the ground-state degeneracy of the system on the Klein bottle when $\beta$ is large enough.
\begin{figure}[!htb]
\resizebox{8.5cm}{!}{\includegraphics{fig-xxz-spin1-gK.pdf}}
\caption{ The QMC result for the Klein bottle entropy for $- 1 < \Delta \leqslant 1$.
The error bars are smaller than the data points.
For $-1 < \Delta \leqslant 0$, the system is in the Luttinger liquid phase, and the Klein bottle degeneracy $g$ gives the compactification radius $R$.
The solid line is the conjectured Luttinger liquid parameter for $S = 1$ in Ref.~\onlinecite{alcaraz_critical_1992}.
For $\Delta > 0$, the system is in the gapped Haldane phase, and the Klein bottle entropy varies with the inverse temperature $\beta$ and system size $L$, in contrast to the critical phase.
The deviation of the Klein bottle entropy of different parameters starts at the quantum critical point $\Delta = 0$, as shown in the inset, where $\Delta g = g (\beta, L) - g (\beta = 6, L = 480)$ is plotted. \label{fig-spin1-gkb}}
\end{figure}
\subsection{The Affleck-Ludwig entropy}
As a comparison, we also attempted to extract the compactification radius by calculating the Affleck-Ludwig (AL) entropy in the spin-1 XXZ model{~\cite{affleck_universal_1991}}.
The AL entropy emerges at the open boundaries of a long cylinder; it is universal and depends only on the CFT and the conformal boundary conditions.
In lattice models, when $L \gg v \beta$, one can obtain the AL entropy by calculating the ratio between the partition functions of the systems on a long cylinder and a torus{~\cite{tang_universal_2017}},
\begin{equation}
\ln \left( \frac{Z^{\mathcal{C}}}{Z^{\mathcal{T}}} \right) \approx
S_{\ensuremath{\operatorname{AL}}} - f_b \beta, \label{eq:ALasZratio}
\end{equation}
where $f_b$ is the nonuniversal surface free energy density. By a linear extrapolation in $\beta$, one obtains the AL entropy as the intercept.
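The extrapolation procedure can be sketched as follows; the data here are synthetic, with assumed values of $S_{\mathrm{AL}}$ and $f_b$ purely for illustration:

```python
import numpy as np

# Hypothetical data following ln(Z_C/Z_T) ~ S_AL - f_b*beta;
# S_AL_true and f_b are assumed values, not computed from any model
S_AL_true, f_b = -np.log(2.0), 0.37
beta = np.linspace(4.0, 20.0, 9)
noise = 1e-3 * np.random.default_rng(0).standard_normal(beta.size)
ln_ratio = S_AL_true - f_b * beta + noise

# Linear fit in beta; the intercept estimates the AL entropy
slope, intercept = np.polyfit(beta, ln_ratio, 1)
print(intercept)  # close to S_AL_true = -ln(2)
```

In practice the statistical noise on $\ln(Z^{\mathcal{C}}/Z^{\mathcal{T}})$ grows rapidly with $\beta$, which is the difficulty discussed below.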
In the compactified boson CFT, the AL entropy is also dependent on the compactification radius $R$.
For simplicity, we only consider the case that the boundary conditions on the two boundaries are the same.
For the Dirichlet and Neumann boundary conditions, we have{~\cite{oshikawa_boundary_2010}}
\begin{eqnarray}
S_{\mathrm{AL}}^{\mathrm{D}} & = & \ln (R / 2), \\
S_{\mathrm{AL}}^{\mathrm{N}} & = & - \ln R.
\end{eqnarray}
In the XXZ model, the fixed (free) boundary condition of spin chain corresponds to the Neumann (Dirichlet) boundary condition for the free boson{~\cite{eggert_magnetic_1992,affleck_edge_1998}}.
For simplicity, we only performed the QMC calculations for the free boundary condition.
However, in practice, due to the nonuniversal term $- f_b \beta$, the partition function ratio decays exponentially with $\beta$, and the statistical error becomes intolerable beyond a certain value of $\beta$.
On the other hand, the result from a finite-size lattice converges to the universal value only when $\beta$ and $L$ are large enough{~\cite{tang_universal_2017}}.
In our calculations for the spin-1 XXZ model, unfortunately, the accessible range of $\beta$ does not allow an accurate determination of the AL entropy.
This difficulty highlights the advantage of the Klein bottle entropy, which is free of nonuniversal surface energies and requires no extrapolation procedure.
\section{Summary}\label{sec:summary}
To summarize, in this paper, we first review the results and details of the initial work Ref.~{\onlinecite{tu_universal_2017}} which focuses on the Klein bottle entropy in RCFT and discuss in detail how to extract the Klein bottle entropy from lattice model calculations via the bond-centered lattice reflection.
We then go beyond the scope of RCFT and study the Klein bottle entropy in the compactified boson CFT, which contains both rational and nonrational CFTs.
We obtain a simple relation between the Klein bottle entropy and the compactification radius, $\ln g = \ln R$, which is the central result of our work.
Due to the direct connection between the compactification radius and the Luttinger parameter, our result provides a straightforward and efficient method to extract the Luttinger parameter from lattice models.
In lattice models, we employ quantum Monte Carlo calculations in the XXZ chain with $S = 1 / 2$ and $S = 1$, respectively.
For the exactly solvable spin-$1 / 2$ XXZ chain, our numerical results show excellent agreement with the CFT prediction, except for slight deviations near the isotropic point $\Delta = 1$, which we attribute to marginally irrelevant fields.
For the $S = 1$ XXZ chain that cannot be exactly solved, our numerical results serve as a new numerical determination of the Luttinger parameter in this model.
\section*{Acknowledgment}
We thank F.~C.~Alcaraz and G.~Sierra for helpful discussions.
This work is supported by NSF-China under Grant No.11504008 (W.T. and X.C.X),
Ministry of Science and Technology of China under the Grant No.2016YFA0302400 (L.W.)
and the DFG via project A06 of SFB 1143 (H.H.T.).
The simulation is performed at Tianhe-1A platform at the National Supercomputer Center in Tianjin.
Parts of the calculations are performed using the ALPS library{~\cite{albuquerque_alps_2007,bauer_alps_2011}}.
\clearpage
\section{Introduction}
The Bianchi IX Universe~\cite{Landau:1980,Misner:1973} is the most
interesting among the Bianchi models. In fact, like the Bianchi type
VIII, it is the most general allowed by the homogeneity constraint,
but, unlike type VIII, it admits an isotropic limit, naturally reached
during its evolution \cite{Grishchuk:1975, Doroshkevich:1973} (see
also \cite{Kirillov:2002,Montani:2008}), coinciding with the
positively curved Robertson-Walker geometry. Furthermore, the
evolution of the Bianchi IX model towards the initial singularity is
characterized by a chaotic structure, first outlined
in~\cite{Belinskii:1970} in terms of the Einstein equations morphology
and then re-analyzed in Hamiltonian formalism by \cite{Misner:1969,
Misner:1969mixmaster}. Actually, the Bianchi IX chaotic evolution
towards the singularity constitutes, along with the same behavior
recovered in Bianchi type VIII, the prototype for a more general
feature, characterizing a generic inhomogeneous cosmological model
\cite{Belinskii:1982} (see also \cite{Kirillov:1993, Montani:1995,
Montani:2008, Montani:2011,Heinzle:2009, Barrow:1979}).
The original interest toward the Bianchi IX chaotic cosmology was due
to the prospect, de facto failed, of solving the horizon paradox via
such a statistical evolution of the Universe scale factors, from which
derives the name, due to C. W. Misner, of the so-called
\emph{Mixmaster model}. However, it was clear from the very
beginning that the chaotic properties of the Mixmaster model had to
be replaced, near enough to the initial singularity (essentially
during that evolutionary stage dubbed Planck era of the Universe), by
a quantum evolution of the primordial Universe. This issue was first
addressed in~\cite{Misner:1969}, where the main features of the
Wheeler-De Witt equation~\cite{DeWitt:1967} are implemented in the
scenario of the Bianchi IX cosmology. This analysis, besides its
pioneering character (see also~\cite{Graham:1994, Montani:2008}),
outlined the interesting feature that a quasi-classical state of the
Universe, in the sense of large occupation numbers of the
wave function, is allowed up to the initial singularity.
More recent approaches to Canonical Quantum Gravity, like the
so-called Loop Quantum Gravity~\cite{Cianfrani:2014}, suggested that
the geometrical operators, areas and volumes, are actually
characterized by a discrete spectrum~\cite{Rovelli:1995}. The
cosmological implementation of this approach led to the definition of
the concept of ``Big-Bounce''~\cite{Ashtekar:2006quantum,
Ashtekar:2006num} (i.e.\ to a geometrical cut-off of the initial
singularity, mainly due to the existence of a minimal value for the
Universe Volume) and could transform the Mixmaster Model into a cyclic
Universe~\cite{Montani:2018,Barrow:2017}. The complete implementation
of the Loop Quantum Gravity to the Mixmaster model is not yet
available~\cite{Ashtekar:2009, Ashtekar:2011bkl, Bojowald:2004ra}, but
general considerations on the volume cut-off led to characterize the
semi-classical dynamics as chaos-free~\cite{Bojowald:2004}. A
quantization procedure, able to mimic the cut-off physics contained in
the Loop Quantum Cosmology, is the so-called \emph{Polymer Quantum
Mechanics}, de facto a quantization on a discrete lattice of the
generalized coordinates~\cite{Ashtekar:2001xp} (for cosmological
implementations see~\cite{Corichi:2007pr, Montani:2013, Moriconi:2017,
Battisti:2008, Hossain:2010, DeRisi:2007, Hassan:2017,
Seahra:2012}).
Here, we apply the Polymer Quantum Mechanics to the isotropic variable
$\alpha$ of the Mixmaster Universe, described both in the Hamiltonian
and field equations representation. We first analyze the Hamiltonian
dynamics in terms of the so-called semiclassical polymer equations,
obtained in the limit of a finite lattice scale, but when the
classical limit of the dynamics for $\hbar\rightarrow 0$ is taken.
Such a semiclassical approach clearly offers a characterization for
the behavior of the mean values of the quantum Universe, in the spirit
of the Ehrenfest theorem~\cite{Sakurai:2011}. This study
demonstrates that the singularity is not removed and the chaoticity of
the Mixmaster model is essentially preserved, and even enforced in
some sense, in this cut-off approach.
Then, in order to better characterize the chaotic features of the
obtained dynamics, we translate the Hamiltonian polymer-like dynamics
into modified Einstein equations, so as to calculate the
morphology of the new Poincar\'e return map, i.e.\ the modified BKL
map.
We stress that both the reflection rule of the point-Universe against
the potential walls and the modified BKL map acquire a readable form
up to first order in the small lattice step parameter. Both these
analyses clarify that the chaotic properties of the Bianchi IX model
survive in the Polymer formulation and are expected to preserve the
same structure of the standard General Relativity case. In
particular, when investigating the field equations, we numerically
iterate the new map, showing how it asymptotically converges to the
standard BKL one.
The main merit of the present study is to offer a detailed and
accurate characterization of the Mixmaster dynamics when the Universe
volume (i.e.\ the isotropic Misner variable) is treated in a
semiclassical Polymer approach, demonstrating how this scenario does
not alter, in the chosen representation, the existence of the initial singularity and the chaoticity
of the model in the asymptotic neighborhoods of such a singular point.
Finally, we repeat the quantum Misner analysis in the Polymer revised
quantization framework and we show, coherently with the semiclassical
treatment, that the Misner conclusion about the states with high
occupation numbers, still survives: such occupation numbers still
behave as constants of motion in the asymptotic dynamics.
It is worth noting that imposing a discrete structure on the
variable $\alpha$ does not ensure that the Bianchi IX Universe volume
has a natural cut-off. In fact, such a volume behaves like
$e^{3\alpha}$ and it does not take a minimal (non-zero) value when
$\alpha$ is discretized. This fact could suggest that the surviving
singularity is a consequence of the absence of a real geometrical
cut-off, like in Loop Quantum Cosmology (of which our approach mimics
the semi-classical features). However, the situation is subtler, as
clarified by the following two considerations.
\begin{enumerate}[label= (\roman*)]
\item In~\cite{Montani:2013}, where the polymer approach is applied to
the anisotropy variables $\beta _{\pm}$, the Mixmaster chaos is removed
like in \cite{Bojowald:2004} even if no discretization is imposed on the isotropic (volume)
variable. Furthermore, approaching the singularity, the
anisotropies classically diverge, and their discretization does not
suggest any intuitive regularization, even though one de facto takes
place there. The influence of the discretization induced by the
Polymer procedure can affect the dynamics in a subtle manner, not
necessarily predicted by the simple intuition of a cut-off
physics, but depending on the details of the induced symplectic
structure.
\item In Loop Quantum Gravity, the spectrum of the geometrical spatial
volume is discrete, but it must be emphasized that such a spectrum
still contains the zero eigenvalue. This observation, when the
classical limit is constructed including the suitably weighted zero
value, allows one to reproduce the continuum properties of the classical
quantities. Such a point of view is well-illustrated
in~\cite{Rovelli:2002vp}, where the preservation of a classical
Lorentz transformation is discussed in the framework of Loop Quantum
Gravity. This same consideration must hold in Loop Quantum
Cosmology, for instance in~\cite{Ashtekar:2006quantum}, where the
position of the bounce is determined by the scalar field parameter,
i.e., on a classical level, it depends also on the initial
conditions and not only on the quantum modification to geometrical
properties of space-time.
\end{enumerate}
Following the theoretical paradigm suggested by point (ii), the
analysis of the Polymer dynamics of the Misner isotropic variable
$\alpha$ is extremely interesting because it corresponds to a
discretization of the Universe volume, but allowing for the value zero
of such a geometrical quantity.
We note that, if one picks as the phase-space variable to quantize the
scale factor $a = e^{\alpha}$, or any given power of it, the
corresponding Polymer discretization is somehow forced to become
singularity-free, i.e.\ we observe the onset of a Big Bounce. However,
in the present approach, based on a logarithmic scale factor, the
polymer dynamical scheme offers an intriguing arena to test
the Polymer Quantum Mechanics in the Minisuperspace, even in
comparison with the predictions of Loop Quantum
Cosmology~\cite{Cianfrani:2014}.
Finally, we are interested in determining the fate of the Mixmaster
model chaoticity, which is suitably characterized by the Misner
variables representation and whose precise description (i.e.\
ergodicity of the dynamics, form of the invariant measure) is reached
in terms of the Misner-Chitr\'e-like variables~\cite{Montani:2011,
Montani:1997, Imponente:2001fy}, which are double logarithmic scale
factors. Although these variables mix the isotropic and anisotropic
Misner variables to some extent, the Misner-Chitr\'e time-like variable in the
configurational scale, once discretized, would not guarantee a minimal value
of the Universe volume.
Thus the motivations for a Polymer treatment
of the Mixmaster model in which only the $\alpha$ dynamics is
deformed rely on different and converging requirements, coming from the
cosmological, statistical and quantum features of the Bianchi
IX Universe.
The paper is organized as follows. In Section \ref{sec:PQM} we
introduce the main kinematic and dynamic features of the Polymer
Quantum Mechanics. Then we outline how this singular representation
can be connected with the Schr\"odinger one through an appropriate
Continuum Limit.
In Section \ref{sec:mixm-model:-class} we describe the dynamics of the
homogeneous Mixmaster model as it can be derived through the Einstein
field equations. Particular attention is devoted to the Bianchi I
and II models, whose analysis is useful for the understanding of the
general Bianchi IX (Mixmaster) dynamics.
The Hamiltonian formalism is introduced in Section
\ref{sec:HamiltonianFormalism}, where we review the semiclassical and
quantum properties of the model as it was studied by Misner in
\cite{Misner:1969}.
Section \ref{sec:mixm-poly} is devoted to the analysis of the polymer
modified Mixmaster model in the Hamiltonian formalism, both from a
semiclassical and a quantum point of view. Our results are then
compared with those derived in some previous models.
In Section~\ref{sec:polymer-bkl-map} the semiclassical
behavior of the polymer modified Bianchi I and Bianchi II models is
developed through the Einstein equations formalism, while the modified
BKL map is derived and its properties discussed from an analytical and
numerical point of view in Section~\ref{sec:polymer-BKL-map}.
In Section \ref{sec:physical-considerations} we discuss two important physical issues, concerning
the link of the polymer representation with Loop Quantum Cosmology and the implications
of a polymer quantization of the whole Minisuperspace variables on the Mixmaster chaotic features.
Finally, in Section \ref{sec:conclusions} brief concluding remarks follow.
\section{Polymer Quantum Mechanics}\label{sec:PQM}
The Polymer Quantum Mechanics (PQM) is a representation of the usual
canonical commutation relations (CCR), unitarily nonequivalent to the
Schr\"{o}dinger\ one. It is a very useful tool to investigate the consequences of
the assumption that one or more variables of the phase space are
discretized. Physically, it accounts for the introduction of a
cutoff. In certain cases where the Schr\"{o}dinger\ representation is
well-defined, the cutoff can be removed through a certain limiting
procedure and the usual Schr\"{o}dinger\ representation is recovered, as shown in
\cite{Corichi:2007pr} and summed up in Sec.~\ref{sec:continuumLimit}.
Many people in the Quantum Gravity community think that there is a
maximum theoretical precision achievable when measuring space-time
distances, this belief being backed up by valuable although heuristic
arguments
\cite{Salecker-Wigner:1958,AmelinoCamelia:1999-Salecker-Wigner,
Hossenfelder:2012jw}, so that the cutoff introduced in PQM is
assumed to be a fundamental quantity. Some results of Loop Quantum
Gravity\cite{Ashtekar:2011} (the discrete spectrum of the area
operator) and String Theory\cite{Lidsey:1999} (the minimum length of a
string) point clearly in the direction of a minimal length scale
scenario, too.
PQM was first developed by Ashtekar et
al.\cite{Ashtekar:2001xp,Ashtekar:2002,Ashtekar:2003} who also credit
a previous work of Varadarajan\cite{Varadarajan:2000} for some
ideas. It was then further refined also by Corichi et
al.\cite{Corichi:2007pr,Corichi:2007cqg}. They developed the PQM in
the expectation of shedding some light on the connection between the
Planckian-energy Physics of Loop Quantum Gravity and the lower-energy
Physics of the Standard Model.
\subsection{The Schr\"{o}dinger\ representation}
Let us consider a quantum one-dimensional free particle with phase
space $(q,p) \in \Gamma = \numberset{R}^2$. The standard CCR are summarized by
the relation
\begin{equation}\label{standard-CCR}
\left[ \hat{q}, \hat{p} \right] = i \hat{I}
\end{equation}
where $\hat{q}$ and $\hat{p}$ are the position and momentum operators
respectively and $\hat{I}$ is the identity operator. These operators
form a basis for the so-called Heisenberg algebra\cite{Binz:2008}. At
this stage we have not yet made any assumption regarding the Hilbert
space of the theory.
To introduce the Polymer representation in Sec.~\ref{sec:kinematics},
it is convenient to consider also the Weyl algebra $\mathcal{W}$, generated by
exponentiation of $\hat{q}$ and $\hat{p}$
\begin{equation}\label{UV-basis}
\hat{U}(\alpha) = e^{i\alpha\hat{q}}; \quad \hat{V}(\beta) =
e^{i\beta\hat{p}}
\end{equation}
where $\alpha$ and $\beta$ are real parameters with the dimension of
momentum and length, respectively.
The CCR~\eqref{standard-CCR} become
\begin{equation}\label{Weyl-CCR}
\hat{U}(\alpha) \cdot \hat{V}(\beta) =
e^{-i\alpha\beta}\hat{V}(\beta) \cdot \hat{U}(\alpha)
\end{equation}
where $\cdot$ indicates the product operation of the algebra. All
other product combinations commute.
A generic element of the Weyl algebra $W \in \mathcal{W}$ is a finite linear
combination
\begin{equation}
W(\alpha, \beta) = \sum_i (A_i U(\alpha_i) + B_i V(\beta_i))
\end{equation}
where $A_i$ and $B_i$ are complex coefficients and the
product~(\ref{Weyl-CCR}) is extended by linearity. It can be shown
that $\mathcal{W}$ has a structure of a $\mathcal{C}^{*}$-algebra.
The Schr\"{o}dinger\ representation is obtained as soon as we introduce the Schr\"{o}dinger\
Hilbert space of square Lebesgue-integrable functions
$\mathcal{H}_S = L^2(\numberset{R},dq)$ and the action of the bases operators on it:
\begin{equation}
\hat{q} \psi(q) = q \psi(q); \quad \hat{p} \psi(q) = -i\partial_q
\psi(q)
\end{equation}
where $\psi \in L^2(\numberset{R},dq)$ and $q \in \numberset{R}$.
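As a quick symbolic sanity check (our own illustration, with $\hbar = 1$; it is not part of the original text), one can verify that multiplication by $q$ and $-i\partial_q$ satisfy the CCR~\eqref{standard-CCR} when applied to a test wave function:

```python
import sympy as sp

q = sp.symbols('q', real=True)
psi = sp.Function('psi')

# Schrodinger representation: q acts by multiplication, p by -i d/dq
def q_op(f):
    return q * f

def p_op(f):
    return -sp.I * sp.diff(f, q)

# Commutator [q, p] applied to a test wave function psi(q)
comm = sp.simplify(q_op(p_op(psi(q))) - p_op(q_op(psi(q))))
print(comm)  # I*psi(q), i.e. [q, p] = i with hbar = 1
```

The derivative of the product $q\psi(q)$ produces the extra $\psi(q)$ term that survives in the commutator.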
\subsection{The polymer representation:
kinematics}\label{sec:kinematics}
The Polymer representation of Quantum Mechanics can be introduced as
follows. First, we consider the abstract states $\ket{\mu}$ of the
Polymer Hilbert space $\mathcal{H}_{\text{poly}}$ labeled by the real
parameter $\mu$. A generic state of $\mathcal{H}_{\text{poly}}$ consists of a
finite linear combination
\begin{equation}\label{polymer-generic-ket}
\ket{\Psi} = \sum^N_{i = 1} a_i \ket{\mu_i}
\end{equation}
where $N\in\numberset{N}$ and we assume the fundamental kets to be orthonormal
\begin{equation}\label{polymer-inner-product}
\braket{\mu}{\nu} = \delta_{\mu,\nu}
\end{equation}
The Hilbert space $\mathcal{H}_{\text{poly}}$ is the Cauchy completion of the
vectors~\eqref{polymer-generic-ket} with respect to the
product~\eqref{polymer-inner-product}. It can be shown that
$\mathcal{H}_{\text{poly}}$ is nonseparable\cite{Corichi:2007pr}.
Two fundamental operators can be defined on this Hilbert space. The
``label'' operator $\hat{\epsilon}$ and the ``displacement'' operator
$\hat{\boldsymbol{s}}(\lambda)$ with $\lambda \in \numberset{R}$, which act on
the states $\ket{\mu}$ as
\begin{equation}
\hat{\epsilon}\ket{\mu} \coloneqq \mu\ket{\mu}; \quad
\hat{\boldsymbol{s}}(\lambda)\ket{\mu} \coloneqq \ket{\mu + \lambda}
\end{equation}
The $\hat{\boldsymbol{s}}$ operator is discontinuous in $\lambda$,
since the states $\ket{\mu + \lambda}$ obtained for different values
of $\lambda$ are mutually orthogonal. Thus, there is no Hermitian
operator that could generate the displacement operator by
exponentiation.
We denote the wave functions in the $p$-polarization as
$\psi(p) = \braket{p}{\psi}$, where
$\psi_\mu(p) = \braket{p}{\mu} = e^{i\mu p}$. The $\hat{V}(\lambda)$
operator defined in~\eqref{UV-basis} shifts the plane waves by
$\lambda$
\begin{equation}\label{V-as-shift-operator}
\hat{V}(\lambda) \cdot \psi_\mu(p) = e^{i\lambda p} e^{i\mu p} =
e^{i(\lambda + \mu) p} = \psi_{(\lambda + \mu)}(p)
\end{equation}
As expected, $\hat{V}(\lambda)$ can be identified with the
displacement operator $\hat{\boldsymbol{s}}(\lambda)$ and the
$\hat{p}$ operator cannot be defined rigorously. On the other hand the
operator $\hat{q}$ is defined by
\begin{equation}
\hat{q} \cdot \psi_\mu(p) = -i \frac{\partial}{\partial p} \psi_\mu(p)
= \mu e^{i\mu p} = \mu \psi_\mu(p)
\end{equation}
and thus can be identified with the abstract label operator
$\hat{\epsilon}$. The reason $\hat{q}$ is said to be discrete is that
the eigenvalues of this operator are the labels $\mu$, and even though
$\mu$ can take values in a continuum of possible values, they can be
regarded as a discrete set, because the states are orthonormal for all
values of $\mu$.
The $\hat{V}(\lambda)$ operators form a $\mathcal{C}^{*}$-algebra, and the
mathematical theory of $\mathcal{C}^{*}$-algebras\cite{Arveson:1998} provides us
with the tools to characterize the Hilbert space to which the wave
functions $\psi(p)$ belong. It is given by the \emph{Bohr
compactification} $\numberset{R}_b$ of the real line\cite{Rudin:2011}:
$\mathcal{H}_{\text{poly}} = L^2(\numberset{R}_b,d\mu_H)$, where the Haar measure
$d\mu_H$ is the natural choice.
In terms of the fundamental functions $\psi_\mu(p)$ the inner product
takes the form
\begin{equation}\label{abstract-PQM-inner-product}
\begin{split}
\braket{\psi_\mu}{\psi_\nu} \coloneqq & \int_{\numberset{R}_b} d\mu_H
\bar{\psi_\mu}(p) \psi_\nu(p) \\ \coloneqq &\lim_{L\to\infty}
\frac{1}{2L} \int^L_{-L} dp \bar{\psi_\mu}(p) \psi_\nu(p) =
\delta_{\mu,\nu}
\end{split}
\end{equation}
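For completeness, a standard one-line check (not spelled out above) that this averaged integral reproduces the Kronecker delta: for $\mu \neq \nu$,

```latex
\begin{equation*}
  \frac{1}{2L}\int_{-L}^{L} e^{-i\mu p}\, e^{i\nu p}\, dp
  = \frac{\sin\!\big((\nu-\mu)L\big)}{(\nu-\mu)L}
  \xrightarrow[L\to\infty]{} 0 ,
\end{equation*}
```

while for $\mu = \nu$ the integrand is identically $1$, so the average equals $1$ for every $L$; together these give $\braket{\psi_\mu}{\psi_\nu} = \delta_{\mu,\nu}$.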
\subsection{Dynamics}\label{sec:dynamics}
Once a definition of the polymer Hilbert space is given, we need to
know how to use it in order to analyze the dynamics of a system. The
first problem to be faced is that, as was noted after equation
\eqref{V-as-shift-operator}, the polymer representation is not able to
describe $\hat{p}$ and $\hat{q}$ at the same time. It is then
necessary to choose the ``discrete'' variable and to approximate its
conjugate momentum with a well-defined and quantizable function. For
the sake of simplicity let us investigate the simple case of a
one-dimensional particle described by the Hamiltonian:
\begin{equation}
H=\frac{p^2}{2m}+\mathcal{V}(q)
\label{chiara:ParticleHam}
\end{equation}
in the $p$-polarization. If we assume that $\hat{q}$ is the discrete
operator (in the sense explained in Sec. \ref{sec:kinematics}), we
then need to approximate the kinetic term $\frac{p^2}{2m}$ in an
appropriate manner. The procedure outlined in \cite{Corichi:2007pr}
consists in the regularization of the theory through the introduction
of a regular graph $\gamma_{\mu_0}$ (that is, a lattice with spacing
$\mu_0$):
\begin{equation}
\gamma_{\mu_0}=\{q\in\mathbb{R}\ |\ q=n\mu_0,\ \forall n\in\mathbb{Z}\}
\end{equation}
It follows that the only states we shall consider, in order to remain
in the graph, are those of the form
$\ket{\psi}=\sum_n{b_n\ket{\mu_n}}\in \mathcal{H}_{\gamma_{\mu_0}}$,
where the $b_n$ are coefficients such that $\sum_n|b_n|^2<\infty$ and
the $\ket{\mu_n}$ (with $\mu_n=n\mu_0$) are the basic kets of the
Hilbert space $\mathcal{H}_{\gamma_{\mu_0}}$, which is a separable
subspace of $\mathcal{H}_{poly}$. Moreover the action of all
operators on these states will be restricted to the graph. Since the
exponential operator $\hat{V}(\lambda)=e^{i\lambda p}$ can be
identified with the displacement one $\hat{s}$ (Eq.
\eqref{V-as-shift-operator}), it is possible to use a regularized
version of it
($\hat{V}(\mu_0)$ such that
$\hat{V}(\mu_0)\ket{\mu_n}=\ket{\mu_n+\mu_0}=\ket{\mu_{n+1}}$) in order
to define an approximated version of $\hat{p}$:
\begin{multline}
\hat{p}_{\mu_0}\ket{\mu_n}=\frac{1}{2i\mu_0}[\hat{V}(\mu_0)-\hat{V}(-\mu_0)]\ket
{\mu_n}=\\
=-\frac{i}{2\mu_0}(\ket{\mu_{n+1}}-\ket{\mu_{n-1}})
\end{multline}
This definition is based on the fact that, for $p\ll 1/\mu_0$,
one gets
$p\simeq\frac{1}{\mu_0}\sin(\mu_0 p)=\frac{1}{2i\mu_0}(e^{i\mu_0
p}-e^{-i\mu_0 p})$. From this result it is easy to derive the
approximated kinetic term $\hat{p}_{\mu_0}^2$ by applying the same
definition of $\hat{p}_{\mu_0}$ two times:
\begin{multline}
\hat{p}_{\mu_0}^2\ket{\mu_n}=\hat{p}_{\mu_0}\cdot\hat{p}_{\mu_0}\ket{\mu_n}=\\
=\frac{1}{4\mu_0^2}(2\ket{\mu_n}-\ket{\mu_{n+2}}-\ket{\mu_{n-2}})
\end{multline}
It is now possible to introduce a regularized Hamiltonian operator,
which turns out to be well-defined on $\mathcal{H}_{\gamma_{\mu_0}}$
and symmetric:
\begin{equation}
\hat{H}_{\mu_0}=\frac{\hat{p}_{\mu_0}^2}{2m}+\mathcal{V}(\hat{q})
\end{equation}
In the $p$-polarization, then, $\hat{p}^2_{\mu_0}$ acts as a
multiplication operator
($\hat{p}^2_{\mu_0}\psi(p)=\frac{1}{\mu_0^2}\sin^2(\mu_0 p)\psi(p)$),
while $\hat{q}$ acts as a derivative operator
($\hat{q}\psi(p)=-i\partial_p\psi(p)$).
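The construction above can be checked numerically. The following sketch (our own illustration, on a periodic toy lattice so that the shift operators wrap around) confirms that applying $\hat{p}_{\mu_0}$ twice reproduces $\hat{p}^2_{\mu_0}$ exactly, and that $\sin(\mu_0 p)/\mu_0 \simeq p$ when $\mu_0 p \ll 1$:

```python
import numpy as np

# Represent a polymer state restricted to the graph q_n = n*mu0 by its
# coefficient vector b_n; the shift operators V(+/-mu0) become rolls.
mu0 = 0.1
N = 64

def shift(b, k):
    # V(k*mu0): |mu_n> -> |mu_{n+k}> on a periodic toy lattice
    return np.roll(b, k)

def p_op(b):
    # p_{mu0} = [V(mu0) - V(-mu0)] / (2 i mu0)
    return (shift(b, 1) - shift(b, -1)) / (2j * mu0)

def p2_op(b):
    # p^2_{mu0} = [2 - V(2 mu0) - V(-2 mu0)] / (4 mu0^2)
    return (2 * b - shift(b, 2) - shift(b, -2)) / (4 * mu0**2)

rng = np.random.default_rng(1)
b = rng.normal(size=N) + 1j * rng.normal(size=N)

# Applying p_op twice reproduces p2_op exactly:
assert np.allclose(p_op(p_op(b)), p2_op(b))

# In the p-polarization the same operator is multiplication by
# sin(mu0*p)/mu0, which approximates p itself for mu0*p << 1:
p = 0.2
print(abs(np.sin(mu0 * p) / mu0 - p))  # small for mu0*p << 1
```

The exact agreement between `p_op` applied twice and `p2_op` mirrors the operator identity $\sin^2(\mu_0 p)/\mu_0^2 = [2 - e^{2i\mu_0 p} - e^{-2i\mu_0 p}]/(4\mu_0^2)$.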
\subsection{Continuum Limit}\label{sec:continuumLimit}
As we have seen in the previous section, the condition
$p\ll 1/\mu_0$ is a fundamental hypothesis of the polymer
approach. If this limit is valid for all the orbits of the system to
be quantized, such a representation is expected to give results
comparable to the ones of the standard quantum representation (based
on the Schr\"odinger equation). This is the reason why a limit
procedure from the polymer system to the Schr\"odinger one is
necessary to understand how the two approaches are related. In other
words we need to know if, given a polymer wave function defined on the
graph $\gamma_0$ (with spacing $a_0$), it is possible to find a
continuous function which is best approximated by the previous one in
the limit when the graph becomes finer, that is:
\begin{equation}
\gamma_0\to\gamma_n=\left\{q_k\in\mathbb{R}\ |\ q_k=ka_n,\ \text{with}\
a_n=\frac{a_0}{2^n},\ \forall k\in\mathbb{Z}\right\}
\end{equation}
The first step to take in order to answer this question consists in
the introduction of the scale $C_n$, which is a decomposition of the
real line into the union of closed-open intervals with the lattice
$\gamma_n$ points as endpoints, which cover the whole line and do not
intersect. For each scale we can then define an effective theory based
on the fact that every continuous function can be approximated by
another function that is constant on the intervals defined by the
lattice. It follows that we can create a link between
$\mathcal{H}_{poly}$ and $\mathcal{H}_S$ for each scale $C_n$:
\begin{equation}
\sum_m\psi(ma_n)\delta_{ma_n,q}\in\mathcal{H}_{poly}\to\sum_m\psi(ma_n)\chi_{
\alpha_m}(q)\in\mathcal{H}_S
\end{equation}
where $\chi_{\alpha_m}$ is the characteristic function on the interval
$\alpha_m=[ma_n,(m+1)a_n)$. Thus it is possible to define an
effective Hamiltonian $\hat{H}_{C_n}$ for each scale $C_n$. The set of
effective theories is then analyzed by the use of a renormalization
procedure \cite{Corichi:2007pr}, and the result is that the existence
of a continuum limit is equivalent to both the convergence of the
energy spectrum of the effective theory to the continuous one and the
existence of a complete set of renormalized eigenfunctions.
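As a toy illustration of the refinement step (our own construction; it is not the renormalization analysis of \cite{Corichi:2007pr}, and the test function and box size are arbitrary), one can project a smooth function onto piecewise-constant approximations at scales $a_n = a_0/2^n$ and watch the $L^2$ error shrink:

```python
import numpy as np

# Approximate a continuous wave function by a function constant on each
# interval [m*a_n, (m+1)*a_n), with a_n = a0 / 2**n, and monitor the L2
# error as the graph is refined.
f = lambda x: np.exp(-x**2)
L_box, a0 = 4.0, 1.0
xs = np.linspace(-L_box, L_box, 20001)
dx = xs[1] - xs[0]

errors = []
for n in range(5):
    a_n = a0 / 2**n
    # piecewise-constant value: the function evaluated at the left
    # endpoint m*a_n of the interval containing x
    step_vals = f(np.floor(xs / a_n) * a_n)
    errors.append(np.sqrt(np.sum((f(xs) - step_vals)**2) * dx))

print(errors)  # decreasing with each refinement
```

The error decreases roughly linearly in $a_n$ for a smooth function, which is the intuition behind the convergence of the effective theories as the graph becomes finer.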
\section{The homogeneous Mixmaster model: classical dynamics}
\label{sec:mixm-model:-class}
With the aim of investigating in Sec.~\ref{sec:polymer-BKL-map} the
modifications to the Bianchi IX dynamics produced by the introduction
of the Polymer cutoff, in this Section we briefly review some relevant
results obtained by Belinskii, Khalatnikov and Lifshitz (BKL)
regarding the classical Bianchi IX model. In particular they found
that an approximate solution for the Bianchi IX dynamics can be given
in the form of a Poincar\'e recursive map called \emph{BKL map}. This
map is obtained as soon as the exact solutions of Bianchi I and
Bianchi II are known. Hence, we will first derive the dynamical
solution to the classical Bianchi I and II models.
The Einstein equations of a generic Bianchi model can be expressed in
terms of the \emph{spatial metric}
$\eta(t)=\text{diag}[a(t),b(t),c(t)]$ and the \emph{constants of
structure} $\lambda_l,\lambda_m,\lambda_n$ of the inequivalent
isometry group that characterizes each Bianchi model, where $a,b,c$
are called \emph{cosmic scale factors} and describe the behavior in the
synchronous time $t$ of the three independent spatial
directions\cite{Landau:1980}:
\begin{subequations}\label{gener-dynam-bianchi:log-variables}
\begin{align}
(q_l)_{\tau\tau} = & \frac{1}{2a^2 b^2 c^2}
\left ( {\lambda_l}^2 a^4 - {\left (
\lambda_m b^2 -
\lambda_n c^2 \right )}^2 \right )\\
(q_m)_{\tau\tau} = & \frac{1}{2a^2 b^2 c^2}
\left ( {\lambda_m}^2 b^4 - {\left (
\lambda_l a^2 -
\lambda_n c^2 \right )}^2 \right )\\
(q_n)_{\tau\tau} = & \frac{1}{2a^2 b^2 c^2}
\left ( {\lambda_n}^2 c^4 - {\left (
\lambda_l a^2 -
\lambda_m b^2 \right )}^2 \right )
\end{align}
\end{subequations}
\begin{equation}\label{gener-dynam-bianchi:constraint}
{(q_l + q_m + q_n)}_{\tau\tau} = {(q_l)}_\tau {(q_m)}_\tau
+ {(q_l)}_\tau {(q_n)}_\tau + {(q_m)}_\tau {(q_n)}_\tau
\end{equation}
where the logarithmic variables $q_l(\tau),q_m(\tau),q_n(\tau)$ and the
logarithmic time $\tau$ are defined as
\begin{equation}
q_l = 2\ln a, \quad q_m = 2\ln b, \quad q_n = 2\ln c, \quad dt
= (abc) d\tau
\label{log-variables-definition}
\end{equation}
and the subscript $\Box_{\tau}$ is a shorthand for the derivative in
$\tau$. The isometry group characteristic of Bianchi IX is the SO(3)
group. The Class A Bianchi models\cite{Montani:2011} (of which Bianchi
I, II and IX are members) are set apart just by these three constants of
structure.
\subsection{Bianchi I}\label{sec:bianchi-i}
The Bianchi I model in vacuum is also called the \emph{Kasner
solution}\cite{Wainwright:2008}. By substituting in
Eqs~\eqref{gener-dynam-bianchi:log-variables} the Bianchi I constants
of structure $\lambda_l=0,\lambda_m=0,\lambda_n=0$, we get the simple
equations of motion:
\begin{equation}\label{bianchi-i:equations-of-motion}
{(q_l)}_{\tau\tau} = {(q_m)}_{\tau\tau} = {(q_n)}_{\tau\tau} = 0
\end{equation}
whose solution can be given as a function of the logarithmic time
$\tau$ and four parameters:
\begin{equation}\label{bianchi-i:solution}
\begin{dcases}
q_l(\tau) = 2 \Lambda p_l \tau\\
q_m(\tau) = 2 \Lambda p_m \tau\\
q_n(\tau) = 2 \Lambda p_n \tau
\end{dcases}
\end{equation}
where $p_l,p_m,p_n$ are called \emph{Kasner indices} and $\Lambda$ is a
positive constant that will have much relevance in
Section~\ref{sec:polymer-bianchi-i-dynamics}. It ensures
that the sum of the Kasner indices is always one:
\begin{equation}\label{kasner:sum}
p_l + p_m + p_n = 1
\end{equation}
From~\eqref{bianchi-i:solution}
and~\eqref{log-variables-definition}, if we
exploit the constraint~\eqref{kasner:sum}, we obtain
\begin{equation}\label{bianchi-i:t-tau}
\tau = \frac{1}{\Lambda} \ln(\Lambda t)
\end{equation}
From relation~\eqref{bianchi-i:t-tau} stems the appellative
``logarithmic time'' for the time variable $\tau$.
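For completeness, relation \eqref{bianchi-i:t-tau} follows in one line from the solution \eqref{bianchi-i:solution} and the constraint \eqref{kasner:sum}: summing the logarithmic variables gives $q_l + q_m + q_n = 2\Lambda\tau$, so that (fixing the integration constant so that $t \to 0$ as $\tau \to -\infty$)

```latex
\begin{equation*}
  abc = e^{(q_l + q_m + q_n)/2} = e^{\Lambda\tau},
  \qquad
  dt = abc\, d\tau = e^{\Lambda\tau}\, d\tau
  \;\Longrightarrow\;
  t = \frac{1}{\Lambda}\, e^{\Lambda\tau},
  \qquad
  \tau = \frac{1}{\Lambda}\ln(\Lambda t).
\end{equation*}
```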
By substituting the Bianchi I equations of
motion~\eqref{bianchi-i:equations-of-motion} and their
solution~\eqref{bianchi-i:solution}
into~\eqref{gener-dynam-bianchi:constraint}, after applying the
constraint~\eqref{kasner:sum}, we find the additional constraint
\begin{equation}\label{kasner:sumofsquares}
{p_l}^2 + {p_m}^2 + {p_n}^2 = 1
\end{equation}
With the exception of the measure-zero cases $(p_l, p_m, p_n) = (0,0,1)$
and $(p_l, p_m, p_n) = (-1/3,2/3,2/3)$, the Kasner indices are always
different from each other and one of them is always negative. The
$(0,0,1)$ case can be shown to be equivalent to the Minkowski
space-time. It is customary to order the Kasner indices from the
smallest to the greatest $p_1 < p_2 < p_3$. Since the three variables
$p_1, p_2, p_3$ are constrained by equations~\eqref{kasner:sum}
and~\eqref{kasner:sumofsquares}, they can be expressed as functions of
a unique parameter $u$ as
\begin{equation}\label{kasner:u-param}
p_1(u) = \tfrac{-u}{1+u+u^2},~~
p_2(u) = \tfrac{1 + u}{1+u+u^2},~~
p_3(u) = \tfrac{u(1 + u)}{1+u+u^2}
\end{equation}
\begin{equation}\label{kasner:u-range}
1 \leq u < +\infty
\end{equation}
The range and the values of the Kasner indices are portrayed in
Fig.~\ref{fig:kasner:u-param}.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\columnwidth]{Fig1.png}
\end{center}
\caption{Kasner indices as functions of the inverse of the $u$
parameter. This figure was taken from~\cite{wiki:BKL} and is
released under
\href{https://creativecommons.org/licenses/by-sa/3.0/}{CC BY-SA
3.0} license.}
\label{fig:kasner:u-param}
\end{figure}
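The parametrization \eqref{kasner:u-param} can be verified numerically; the sketch below (our own check, not from the paper) confirms that both Kasner constraints hold identically for $u \ge 1$, with $p_1 < 0 < p_2 \le p_3$:

```python
import numpy as np

# Kasner indices as functions of the single parameter u >= 1:
# p1 = -u/d, p2 = (1+u)/d, p3 = u(1+u)/d with d = 1 + u + u^2.
def kasner_indices(u):
    d = 1.0 + u + u**2
    return np.array([-u / d, (1.0 + u) / d, u * (1.0 + u) / d])

for u in [1.0, 2.5, 10.0, 1e4]:
    p = kasner_indices(u)
    # both constraints hold identically in u
    assert np.isclose(p.sum(), 1.0)
    assert np.isclose((p**2).sum(), 1.0)
    print(u, p)
```

At $u = 1$ this reproduces the degenerate triple $(-1/3, 2/3, 2/3)$, and as $u \to \infty$ the indices approach the Minkowskian values $(0, 0, 1)$.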
Excluding the Minkowskian case $(0,0,1)$, for any choice of the Kasner
indices, the spatial metric
$\eta=\text{diag}(t^{2p_l},t^{2p_m},t^{2p_n})$ has a physical naked
singularity, also called the \emph{Big Bang}, when $t = 0$ (or
$\tau\to -\infty$).
\subsection{Bianchi II}\label{sec:bianchi-ii}
A period in the evolution of the Universe when the r.h.s.\
of~\eqref{gener-dynam-bianchi:log-variables} can be neglected, and the
dynamics is Bianchi I-like, is called \emph{Kasner regime} or
\emph{Kasner epoch}. In this Section we show how Bianchi II links
together two different Kasner epochs at $\tau\to -\infty$ and
$\tau\to\infty$. A series of successive Kasner epochs, where one
cosmic scale factor increases monotonically, is called a \emph{Kasner
era}.
Again, after substituting the Bianchi II structure constants
$\lambda_l=1,\lambda_m=0,\lambda_n=0$ into
Eqs~\eqref{gener-dynam-bianchi:log-variables}, we get
\begin{subequations}\label{bianchi-ii:log-einstein}
\begin{align}
\label{bianchi-ii:alpha}
&{(q_l)}_{\tau\tau} = - e^{2q_l}\\
\label{bianchi-ii:beta-gamma}
&{(q_m)}_{\tau\tau} = {(q_n)}_{\tau\tau} = e^{2q_l}
\end{align}
\end{subequations}
It should be noted that the conditions
\begin{equation}\label{bianchi-ii:sum-relations}
{(q_l)}_{\tau\tau} + {(q_m)}_{\tau\tau} = {(q_l)}_{\tau\tau} +
{(q_n)}_{\tau\tau} = 0
\end{equation}
hold for every $\tau$.
Let us consider the explicit solutions of
equations~\eqref{bianchi-ii:log-einstein}
\begin{subequations}\label{bianchi-ii:solution}
\begin{align}
{(q_l)}(\tau) = & \ln(c_1 \sech(\tau c_1 + c_2))\\
{(q_m)}(\tau) = & 2c_3 + 2\tau c_4 - \ln(c_1 \sech(\tau c_1 + c_2))\\
{(q_n)}(\tau) = & 2c_5 + 2\tau c_6 - \ln(c_1 \sech(\tau c_1 + c_2))
\end{align}
\end{subequations}
where $c_1,\dots,c_6$ are integration constants. We now analyze how
the solutions~\eqref{bianchi-ii:solution} behave as the time variable
$\tau$ approaches $+\infty$ and $-\infty$ (recalling also
equation~\eqref{bianchi-i:t-tau}). Taking the asymptotic limits
$\tau\to\pm\infty$, we find that a Kasner regime at $+\infty$ is
``mapped'' to another Kasner regime at $-\infty$, but with different
Kasner indices:
\begin{equation}\label{bianchi-ii:limits}
\begin{dcases}
p_l = \tfrac{{(q_l)}_\tau}{2\Lambda} = \tfrac{-c_1 /2}{c_4 + c_6 + c_1 /2}\\
p_m = \tfrac{{(q_m)}_\tau}{2\Lambda} = \tfrac{c_4 + c_1 /2}{c_4 + c_6 + c_1
/2}\\
p_n = \tfrac{{(q_n)}_\tau}{2\Lambda} = \tfrac{c_6 + c_1 /2}{c_4 + c_6 + c_1 /2}
\end{dcases}~
\begin{dcases}
p'_l = \tfrac{{(q_l)}_\tau}{2\Lambda} = \tfrac{c_1 /2}{c_4 + c_6 - c_1 /2}\\
p'_m = \tfrac{{(q_m)}_\tau}{2\Lambda} = \tfrac{c_4 - c_1 /2}{c_4 + c_6 - c_1
/2}\\
p'_n = \tfrac{{(q_n)}_\tau}{2\Lambda} = \tfrac{c_6 - c_1 /2}{c_4 +
c_6 - c_1 /2}
\end{dcases}
\end{equation}
where we have exploited the notation of
equation~\eqref{bianchi-i:solution}. Both these sets of indices
satisfy the two Kasner relations~\eqref{kasner:sum}
and~\eqref{kasner:sumofsquares}.
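One can verify directly that the solutions~\eqref{bianchi-ii:solution} satisfy Eq.~\eqref{bianchi-ii:alpha}; a quick finite-difference sketch (the constants are chosen arbitrarily, for illustration only):

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

c1, c2 = 0.8, 0.3   # arbitrary integration constants

def ql(tau):
    """q_l(tau) = ln(c1 sech(c1 tau + c2))."""
    return math.log(c1 * sech(c1 * tau + c2))

# Numerically check (q_l)_{tau tau} = -exp(2 q_l) at a few points
h = 1e-4
for tau in (-1.0, 0.0, 2.0):
    second = (ql(tau + h) - 2.0 * ql(tau) + ql(tau - h)) / h**2
    assert abs(second + math.exp(2.0 * ql(tau))) < 1e-5
```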
The old (unprimed) and the new (primed) indices are related by the BKL
map, which expresses the new Kasner indices $p_l',p_m',p_n'$ and
$\Lambda'$ as functions of the old ones. To calculate it we need at
least four relations between the new and old Kasner indices,
$f_i(p_l,p_m,p_n,\Lambda,p'_l,p'_m,p'_n,\Lambda') = 0$ with
$i = 1,\dots,4$. Then we can invert these relations to find the BKL
map.
One $f_i$ is simply the sum of the primed Kasner indices (i.e.\ the
primed version of~\eqref{kasner:sum}). Two more relations can be
obtained from~\eqref{bianchi-ii:sum-relations}:
\begin{align}
\label{bianchi-ii:rel-1}
\Lambda(p_l + p_m) = &\Lambda'(p'_l + p'_m)\\
\label{bianchi-ii:rel-2}
\Lambda(p_l + p_n) = &\Lambda'(p'_l + p'_n)
\end{align}
The fourth relation is obtained by direct comparison of the asymptotic
limits~\eqref{bianchi-ii:limits}, and for this reason we will call it
the \emph{asymptotic relation}:
\begin{equation}
\label{bianchi-ii:asympt-rel}
\Lambda p_l = - (\Lambda' p'_l)
\end{equation}
All other relations that can be obtained
from~\eqref{bianchi-ii:limits} are equivalent
to~\eqref{bianchi-ii:asympt-rel}.
Finally, by inverting the four
relations~\eqref{kasner:sum}, \eqref{bianchi-ii:rel-1},
\eqref{bianchi-ii:rel-2} and~\eqref{bianchi-ii:asympt-rel}, and
assuming again that $p_l$ is the negative Kasner index $p_1$, we get
the classical BKL map:
\begin{gather}\label{bianchi-ii:BKL-map}
p'_l = \frac{|p_l|}{1-2|p_l|}, \quad p'_m =
\frac{p_m-2|p_l|}{1-2|p_l|},
\quad p'_n = \frac{p_n-2|p_l|}{1-2|p_l|},\\
\nonumber \Lambda' = (1-2|p_l|) \Lambda
\end{gather}
The main feature of the BKL map is the exchange of the negative index
between two different directions. Therefore, in the new epoch, the
negative power is no longer related to the $l$-direction and the
perturbation to the new Kasner regime (which is linked to the
$l$-terms on the r.h.s. of
Eq.~\eqref{gener-dynam-bianchi:log-variables}) is damped and vanishes
towards the singularity: the new (primed) Kasner regime is stable
towards the singularity.
If we then insert the parametrization~\eqref{kasner:u-param} (with the
range~\eqref{kasner:u-range}) into the BKL
map~\eqref{bianchi-ii:BKL-map} just found, the map takes a
particularly simple form in the $u$ parameter:
\begin{equation}\label{bianchi-vii:BKL-map}
\begin{dcases}
p_l = p_1(u)\\
p_m = p_2(u)\\
p_n = p_3(u)
\end{dcases}
\xrightarrow{\text{BKL map}}
\begin{dcases}
p'_l = p_2(u - 1)\\
p'_m = p_1(u - 1)\\
p'_n = p_3(u - 1)
\end{dcases}
\end{equation}
We emphasize that the role of the negative index is swapped between
the $l$- and the $m$-direction.
\subsection{Bianchi IX}\label{sec:bianchi-ix}
Here we briefly explain how the BKL map can be used to characterize
the Bianchi IX dynamics. This is possible because it can be
shown~\cite{Montani:2011} that the Bianchi IX dynamical evolution is
built up piecewise from Bianchi I and Bianchi II-like ``blocks''.
First, we take another look at the Einstein
equations~\eqref{gener-dynam-bianchi:log-variables}, now substituting
the Bianchi IX structure constants
$\lambda_l=\lambda_m=\lambda_n=1$ into them to get the Einstein
equations for Bianchi IX:
\begin{equation}\label{bianchi-ix:einstein-equations}
\begin{aligned}
{(q_l)}_{\tau\tau} = {(b^2 - c^2)}^2 - a^4\\
{(q_m)}_{\tau\tau} = {(a^2 - c^2)}^2 - b^4\\
{(q_n)}_{\tau\tau} = {(a^2 - b^2)}^2 - c^4
\end{aligned}
\end{equation}
Eq.~\eqref{gener-dynam-bianchi:constraint} stays valid for every
Bianchi model.
Again, we assume an initial Kasner regime, where the negative Kasner
index $p_1$ is associated with the $l$-direction. Thus, the
``exploding'' cosmic scale factor in the r.h.s.\ of
\eqref{bianchi-ix:einstein-equations} is identified again with $a$. By
retaining in the r.h.s.\ of \eqref{bianchi-ix:einstein-equations} only
the terms that grow towards the singularity, we obtain a system of
ordinary differential equations in time entirely analogous to the one
just encountered for Bianchi II~\eqref{bianchi-ii:log-einstein}, with
the caveat that now there is no initial condition (i.e.\ no choice of
the initial Kasner indices) such that the initial or final Kasner
regime is stable towards the singularity.
This is because all the cosmic scale factors $\{a,b,c\}$ are treated
on an equal footing in the r.h.s.\
of~\eqref{bianchi-ix:einstein-equations}. After every Kasner epoch or
era, no matter to which axis the negative index is transferred, there
will always be a perturbation in the Einstein
equations~\eqref{bianchi-ix:einstein-equations} that makes the Kasner
regime unstable. We thus have an infinite series of Kasner eras
alternating and accumulating towards the singularity.
Given an initial $u$ parameter, the BKL map that takes it to the next
one $u'$ is
\begin{equation}\label{bianchi-ix:BKL-map}
u' =
\begin{dcases}
u - 1 ~~ \text{for} ~~ u > 2\\
\frac{1}{u - 1 }~~ \text{for} ~~ u \leq 2\\
\end{dcases}
\end{equation}
Let us assume an initial $u_0 = k_0 + x_0$, where $k_0 = [u_0]$ is the
integer part and $x_0 = u_0 - [u_0]$ is the fractional part. If $x_0$
is irrational, as is statistically generic, then after the $k_0$
Kasner epochs that make up that Kasner era the new parameter $1/x_0$
is irrational, too. This inversion happens infinitely many times until
the singularity is reached, and is the source of the chaotic features
of the BKL
map~\cite{Barrow:1981,Barrow:1982,Khalatnikov:1984hzi,Lifshitz:1971,
Montani:1997,Szydlowski:1993}.
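The alternation of epochs and eras can be made concrete by iterating the map~\eqref{bianchi-ix:BKL-map}; a minimal sketch (the starting value is arbitrary, for illustration only):

```python
def next_u(u):
    """One step of the BKL map in the u parameter."""
    return u - 1.0 if u > 2.0 else 1.0 / (u - 1.0)

u = 5.73
orbit = [u]
for _ in range(10):
    u = next_u(u)
    orbit.append(u)
# The first era consists of the epochs 5.73 -> 4.73 -> 3.73 -> 2.73 -> 1.73;
# the inversion 1/(1.73 - 1) then starts a new era.
```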
\section{Hamiltonian Formulation of the Mixmaster model}
\label{sec:HamiltonianFormalism}
In this section we summarize some important results obtained by
applying the R.~Arnowitt, S.~Deser and C.W.~Misner (ADM) Hamiltonian
methods to the Bianchi models. The main benefit of this approach is
that the resulting Hamiltonian system resembles that of a point
particle moving in a potential well. Moreover, it provides a method to
quantize the system through the canonical formalism.
The line element for the Bianchi IX model is:
\begin{equation}
ds^2=-N^2(t)dt^2+\frac{1}{4}e^{2\alpha}{(e^{2\beta})}_{ij}\sigma_i\sigma_j
\label{chiara:IXLineElement}
\end{equation}
where $N$ is the lapse function, characteristic of the canonical
formalism, and the $\sigma_i$ are 1-forms depending on the Euler
angles of the SO(3) symmetry group. $\beta_{ij}$ is a diagonal,
traceless matrix, so it can be parameterized in terms of the two
independent variables $\beta_\pm$ as
$\beta_{ij}=\text{diag}(\beta_++\sqrt{3}\beta_-,\,\beta_+-\sqrt{3}\beta_-,\,-2\beta_+)$. The
factor $\frac{1}{4}$ is chosen (following \cite{Misner:1969}) so that
when $\beta_{ij}=0$ we obtain just the standard metric for a
three-sphere of radius $r=e^\alpha$. Thus for $\beta=0$ this metric is
the Robertson-Walker positive curvature metric. The variables
$(\alpha,\beta_\pm)$ were introduced by C.W. Misner
in~\cite{Misner:1969mixmaster}; $\alpha$ describes the expansion of
the Universe, i.e. its volume changes
$(\mathcal{V}\propto e^{3\alpha})$, while $\beta_\pm$ are related to
its anisotropies (shape deformations). The kinetic term of the
Hamiltonian of the Bianchi models, written in the Misner variables,
turns out to be diagonal; following~\cite{Arnowitt:1960pr}, we can
then write the super-Hamiltonian of the Bianchi models simply as:
\begin{small}
\begin{equation}
\mathcal{H}_B=\frac{N\kappa}{3(8\pi)^2}e^{-3\alpha}\left(-p_\alpha^2+p_+^2+p_-^2
+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\beta_\pm)\right)
\label{chiara:Hamiltonian}
\end{equation}
\end{small}
where $(p_\alpha,p_\pm)$ are the momenta conjugate to
$(\alpha,\beta_\pm)$, respectively, and $N$ is the lapse function. It
should be noted that the corresponding super-Hamiltonian constraint
$\mathcal{H}_B=0$ describes the evolution of a point
$\beta=(\beta_+,\beta_-)$, which we call the $\beta$-point, as a
function of a new time coordinate $\alpha$ (that is, the shape of the
Universe as a function of its volume). Such a point is subject to the
\textit{anisotropy potential} $V_B(\beta_\pm)$, a non-negative
function of the anisotropy coordinates whose explicit form depends on
the Bianchi model considered. In particular, Bianchi type I
corresponds to the $V_B=0$ case (free particle), the Bianchi type II
potential corresponds to a single, infinite exponential wall
$\left(V_B=e^{-8\beta_+}\right)$, while for Bianchi type IX we have a
closed domain expressed by:
\begin{multline}
V_B(\beta_\pm)=2e^{4\beta_+}\left(\cosh(4\sqrt{3}\beta_-)-1\right)\\
- 4e^{-2\beta_+}\cosh(2\sqrt{3}\beta_-) + e^{-8\beta_+}
\label{chiara:BianchiIXpotential}
\end{multline}
The potential~\eqref{chiara:BianchiIXpotential} has steep exponential
walls with the symmetry of an equilateral triangle with flared open
corners, as can be seen from Fig.~\ref{fig:potential}. Moreover, the
presence of the term $e^{4\alpha}$ in \eqref{chiara:Hamiltonian}
causes the potential walls to move outward as we approach the
cosmological singularity $(\alpha\to-\infty)$.
\begin{figure}
\includegraphics[scale=.35]{Fig2.png}
\caption{\label{fig:potential} Equipotentials of the
function $V_B(\beta_\pm)$ in the $(\beta_+,\beta_-)$-plane. Here
we can see that far from the origin the function has the symmetry
of an equilateral triangle, while near the origin $(V_B<1)$ the
equipotentials are closed curves.}
\end{figure}
Following the ADM reduction procedure, we can solve the
super-Hamiltonian constraint $\mathcal{H}_B=0$ (with $\mathcal{H}_B$
given by~\eqref{chiara:Hamiltonian}) with respect to the momentum
conjugate to the new time coordinate of the phase-space (i.e.\
$\alpha$). The result is the so-called \textit{reduced Hamiltonian}
$H_{ADM}$:
\begin{equation}
H_{ADM}\coloneqq
-p_\alpha=\sqrt{p_+^2+p_-^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\beta_\pm)}
\label{chiara:redHamiltonian}
\end{equation}
from which we can derive the classical dynamics of the model by
solving the corresponding Hamilton's equations:
\begin{subequations}
\begin{align}
\beta'_\pm &\equiv \frac{d\beta_\pm}{d\alpha}= \frac{\partial
H_{ADM}}{\partial p_\pm}\\
p'_\pm &\equiv \frac{dp_\pm}{d\alpha}=-\frac{\partial H_{ADM}}{\partial
\beta_\pm}\\
H'_{ADM} &\equiv \frac{dH_{ADM}}{d\alpha}= \frac{\partial
H_{ADM}}{\partial \alpha}
\end{align}
\label{chiara:HamiltonEqs}
\end{subequations}
It should be noted that the choice $\dot{\alpha}=1$ fixes the temporal
gauge $N_{ADM}=\frac{6(4\pi)^2}{H_{ADM}\kappa}e^{3\alpha}$.
Since we are interested in the dynamics of the model near the
singularity $(\alpha\to-\infty)$ and because of the steepness of the
potential walls, we can consider the $\beta$-point to spend most of
its time moving as a free particle (far from the potential wall
$V_B\simeq 0$), while $V_B(\beta_\pm)$ is non-negligible only during
the short time of a bounce against one of the three walls. It follows
that we can separate the analysis into two different phases. As far
as the free motion phase (Bianchi type I approximation) is concerned,
we can derive the velocity of the $\beta$-point from the first of Eqs.
\eqref{chiara:HamiltonEqs} (with $V_B=0$), obtaining:
\begin{equation}
\beta'\equiv\sqrt{\beta'^2_++\beta'^2_-}=1
\label{chiara:velocity}
\end{equation}
It remains to study the bounce against the potential. Following
\cite{Misner:1969} we can summarize the results of this analysis in
three points: (i) The position of the potential wall is defined as the
position of the equipotential in the $\beta$-plane bounding the region
in which the potential terms are significant. The wall turns out to
have velocity
$|\beta'_{wall}|\equiv\frac{d\beta_{wall}}{d\alpha}=\frac{1}{2}$, that
is one half of the $\beta$-point velocity. So a bounce is indeed
possible. (ii) Every bounce occurs according to the reflection law:
\begin{equation}
\sin\theta_i-\sin\theta_f=\frac{1}{2}\sin(\theta_i+\theta_f)
\label{chiara:reflectionLaw}
\end{equation}
where $\theta_i$ and $\theta_f$ are the incidence and the reflection
angles respectively. (iii) The maximum incidence angle for the
$\beta$-point to have a bounce against a specific potential wall turns
out to be:
\begin{equation}
\theta_{max}=\arccos\left(\frac{1}{2}\right)=\frac{\pi}{3}
\label{chiara:maxAngle}
\end{equation}
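Given $\theta_i$, the reflection law~\eqref{chiara:reflectionLaw} determines $\theta_f$ only implicitly; it can be solved numerically, e.g.\ by bisection. A sketch under our own naming (not part of the original analysis):

```python
import math

def reflection_angle(theta_i, steps=200):
    """Solve sin(theta_i) - sin(theta_f) = (1/2) sin(theta_i + theta_f)
    for theta_f by bisection; the residual is monotone decreasing in
    theta_f on [0, theta_i], with a sign change inside the interval."""
    residual = lambda tf: (math.sin(theta_i) - math.sin(tf)
                           - 0.5 * math.sin(theta_i + tf))
    lo, hi = 0.0, theta_i
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

theta_i = 0.5                      # incidence angle in radians, < pi/3
theta_f = reflection_angle(theta_i)
# The bounce sends the particle away at a smaller angle: theta_f < theta_i
```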
Because of the triangular symmetry of the potential, this result
confirms that a bounce against one of the three walls always
happens. It follows that the $\beta$-point undergoes an infinite
series of bounces while approaching the singularity and, sooner or
later, it will assume all the possible directions, regardless of the
initial conditions. In short, we are dealing with a point particle
which undergoes a uniform rectilinear motion, characterized by the
constants of motion $(p_+,p_-)$, until it reaches a potential
wall. The bounce against this wall has the effect of changing the
values of $p_\pm$. After that, a new phase of uniform rectilinear
motion (with different constants of motion) starts. If we remember
that the free-particle case corresponds to the Bianchi type I model,
whose solution is the Kasner solution (see Sec.~\ref{sec:bianchi-i}),
we recover the sequence of Kasner epochs that characterizes the BKL
map (Sec.~\ref{sec:bianchi-ix}).
In conclusion, it is worth mentioning that, through the analysis of
the geometrical properties of the scheme outlined so far near the
cosmological singularity, we can obtain a sort of conservation law,
that is:
\begin{equation}
\langle{H_{ADM}\alpha}\rangle=\text{const}
\label{chiara:adiabaticInvariant}
\end{equation}
Here the symbol $\langle{\dots}\rangle$ denotes the average value of
$H_{ADM}\alpha$ over a large number of runs and bounces. In fact,
although $H_{ADM}\alpha$ is not a constant, it acquires the same
value just before each bounce:
$H^n_{ADM}\alpha_n=H^{n+1}_{ADM}\alpha_{n+1}$. In this sense,
quantities with this property are named \textit{adiabatic
  invariants}.
\subsection{Quantization of the Mixmaster model}
Near the Cosmological Singularity, quantum effects are expected to
influence the classical dynamics of the model. The Hamiltonian
formalism enables us to quantize the cosmological theory in the
canonical way, with the aim of identifying such effects.
Following the canonical formalism \cite{Misner:1969}, we can introduce
the basic commutation relations
$[\hat{\beta}_a,\hat{p}_b]=i\delta_{ab}$, which can be satisfied by
choosing $\hat{p}_a=-i\frac{\partial}{\partial\beta_a}$. Hence all the
variables become operators and the super-Hamiltonian constraint
\eqref{chiara:Hamiltonian} is written in its quantum version
($\hat{\mathcal{H}}\Psi(\alpha,\beta_\pm)=0$), i.e.\ the Wheeler-DeWitt
(WDW) equation:
\begin{small}
\begin{equation}
\left[\frac{\partial^2}{\partial\alpha^2}-\frac{\partial^2}{\partial\beta_+^2}-
\frac{\partial^2}{\partial\beta_-^2}+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(
\beta_\pm)\right]\Psi(\alpha,\beta_\pm)=0
\label{chiara:WDWequation}
\end{equation}
\end{small}
The function $\Psi(\alpha,\beta_\pm)$ is the wave function of the
Universe, which we can choose of the form:
\begin{equation}
\Psi(\alpha,\beta_\pm)=\sum_n\chi_n(\alpha)\phi_n(\alpha,\beta_\pm)
\label{chiara:waveFunction}
\end{equation}
If we assume the validity of the \textit{adiabatic approximation} (see
\cite{Montani:2013})
\begin{equation}
|\partial_\alpha\chi_n(\alpha)|\gg|\partial_\alpha\phi_n(\alpha,\beta_\pm)|
\label{chiara:adiabaticHypothesis}
\end{equation}
we can solve the WDW equation by separation of variables. In
particular, the eigenvalues $E_n$ of the reduced Hamiltonian
\eqref{chiara:redHamiltonian} are obtained from the eigenvalue
equation $\hat{H}^2_{ADM}\phi_n=E_n^2\phi_n$, that is:
\begin{small}
\begin{equation}
\left[-\frac{\partial^2}{\partial\beta_+^2}-\frac{\partial^2}{\partial\beta_-^2}
+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\beta_\pm)\right]\phi_n(\alpha,\beta_
\pm)=E_n^2\phi_n
\end{equation}
\end{small}
The result, obtained by approximating the triangular potential with a
square box with infinite vertical walls and the same area as the
triangular one $(A=\frac{3}{4}\sqrt{3}\alpha^2)$, turns out to be:
\begin{equation}
E_n\sim\frac{2\pi}{3^{3/4}}\sqrt{n^2+m^2}\ \alpha^{-1}=\frac{a_n}{\alpha}
\label{chiara:Eigenvalues}
\end{equation}
where $n,m$ are the positive integer quantum numbers related to
the anisotropies $\beta_\pm$.
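This estimate follows from the textbook spectrum of a two-dimensional infinite square well of side $L$ (an auxiliary symbol used only in this sketch): the operator $-\partial^2_{\beta_+}-\partial^2_{\beta_-}$ on a box of area $A=L^2$ has eigenvalues
\begin{equation*}
E_{n,m}^2=\frac{\pi^2}{L^2}\left(n^2+m^2\right),
\qquad L^2=A=\frac{3\sqrt{3}}{4}\,\alpha^2,
\end{equation*}
so that $E_{n,m}=\pi\sqrt{n^2+m^2}/L=\frac{2\pi}{3^{3/4}}\sqrt{n^2+m^2}\,\alpha^{-1}$, in agreement with~\eqref{chiara:Eigenvalues}.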
An important conclusion that C.W. Misner derived from this analysis is
that quasi-classical states (i.e. quantum states with very high
occupation numbers) are preserved during the evolution of the Universe
towards the singularity. In fact, if we substitute the quasi-classical
approximation $H\simeq E_n$ into \eqref{chiara:adiabaticInvariant} and
study it in the limit $\alpha\to-\infty$, we find:
\begin{equation}
\langle n^2+m^2\rangle=\text{const}
\end{equation}
We can therefore conclude that if the present Universe is in a
quasi-classical state of anisotropy ($n^2+m^2\gg1$), then, as we
extrapolate back towards the Cosmological Singularity, the quantum
state of the Universe remains quasi-classical.
\section{Mixmaster model in the Polymer approach}\label{sec:mixm-poly}
Here the Hamiltonian formalism introduced in
Sec.~\ref{sec:HamiltonianFormalism} is used to analyze the dynamics of
the modified Mixmaster model. We choose to apply the
polymer representation to the isotropic variable $\alpha$ of the
system due to its deep connection with the volume of the Universe. As
a consequence we will need to find an approximated operator for the
conjugate momentum $p_\alpha$, while the anisotropy variables
$\beta_\pm$ will remain unchanged from the standard case.
The introduction of the polymer representation consists in the formal
substitution, derived in Sec. \ref{sec:dynamics}:
\begin{equation}
p_\alpha^2\to\frac{1}{\mu^2}\sin^2(\mu p_\alpha)
\label{chiara:polymerSubstitution}
\end{equation}
which we apply to the super-Hamiltonian constraint $\mathcal{H}_B=0$
(with $\mathcal{H}_B$ defined in \eqref{chiara:Hamiltonian}):
\begin{multline}
\frac{N\kappa}{3(8\pi)^2}e^{-3\alpha}\left(-\frac{1}{\mu^2}\sin^2(\mu
p_\alpha)+p_+^2+p_-^2+\right.\\
+\left.\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\beta_\pm)\right)=0
\label{chiara:polymerSuperHam}
\end{multline}
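In the limit $\mu\to0$ the substitution~\eqref{chiara:polymerSubstitution} restores the classical kinetic term, since $\sin^2(\mu p_\alpha)/\mu^2=p_\alpha^2-\mu^2p_\alpha^4/3+O(\mu^4)$; a quick numerical sketch (the test momentum is arbitrary):

```python
import math

p = 1.3                                     # arbitrary test momentum
errors = []
for mu in (1e-1, 1e-2, 1e-3):
    polymer = math.sin(mu * p)**2 / mu**2   # deformed kinetic term
    errors.append(abs(polymer - p**2))
# The leading deviation is mu^2 p^4 / 3, so each error drops by ~100
```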
In what follows we will use the ADM reduction formalism in order to
study the new model both from a semiclassical \footnote{In this case
the word ``semiclassical'' means that our super-Hamiltonian
constraint was obtained as the lowest order term of a WKB expansion
for $\hbar\to0$.} and a quantum point of view.
\subsection{Semiclassical Analysis}\label{sec:semiclassical-analysis}
First of all, we derive the reduced polymer Hamiltonian
$(H_\alpha\coloneqq-p_\alpha)$ from \eqref{chiara:polymerSuperHam}:
\begin{equation}
H_\alpha=\frac{1}{\mu}\arcsin\left(\sqrt{\mu^2\left[p_+^2+p_-^2+\frac{3(4\pi)^4}
{\kappa^2}e^{4\alpha}V_B(\beta_\pm)\right]}\right)
\end{equation}
where the condition
$0\leq\mu^2\left(p_+^2+p_-^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\beta_\pm)\right)\leq
1$ has to be imposed due to the presence of the arcsine function. The
dynamics of the system is then described by the deformed Hamilton's
equations (which follow from Eqs.~\eqref{chiara:HamiltonEqs} with
$H_{ADM}\equiv H_\alpha$):
\begin{small}
\begin{subequations}
\begin{align}
\beta'_\pm&=\frac{p_\pm}{\sqrt{(p^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B
)[1-\mu^2(p^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B)]}}
\label{chiara:anisotropyVelocity}\\
p'_\pm&=-\frac{\frac{3(4\pi)^4}{2\kappa^2}e^{4\alpha}\frac{\partial
V_B}{\partial\beta_\pm}}{\sqrt{(p^2+\frac{3(4\pi)^4}{\kappa^2}e^{4
\alpha}V_B)[1-\mu^2(p^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B
)]}}\\
H'_\alpha&=\frac{\frac{6(4\pi)^4}{\kappa^2}e^{4\alpha}V_B}{\sqrt{[1-\mu^2(p^
2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B)](p^2+\frac{3(4\pi)^4}{
\kappa^2}e^{4\alpha}V_B)}}
\end{align}
\label{chiara:polymerHamEq}
\end{subequations}
\end{small}
where the prime stands for the derivative with respect to $\alpha$ and
$p^2=p_+^2+p_-^2$.
As in the standard case, we start by studying the simplest case,
$V_B=0$ (Bianchi I approximation), which corresponds to the phase when
the $\beta$-point is far enough from the potential walls. Here the
persistence of a cosmological singularity can be easily demonstrated,
since $\alpha$ turns out to be linked to the time variable $t$ through
a logarithmic relation (as we will see later in
Eq.~\eqref{polymer-bianchi-i:solution-alpha}):
$\alpha\sim\ln(t)\underset{t\to0}{\longrightarrow}-\infty$. Moreover,
the anisotropy velocity of the particle can be derived from
Eq.~\eqref{chiara:anisotropyVelocity} (while $p_\pm$ and $H_\alpha$
are constants of motion), and the result is:
\begin{equation}
\beta'\equiv\sqrt{\beta'^2_++\beta'^2_-}=\frac{1}{\sqrt{1-\mu^2p^2}}\coloneqq
r_\alpha(\mu, p_\pm)
\label{chiara:polymerAniVel}
\end{equation}
Thus the modified anisotropy velocity depends on the constants of
motion of the system, and it turns out to be always greater than the
standard one $(r_\alpha(\mu, p_\pm)>1\ \forall
p_\pm\in\mathbb{R})$. On the other hand, the velocity of each
potential wall is the same as in the standard case
$\left(|\beta'_{wall}|=\frac{1}{2}\right)$. In fact, the Bianchi II
potential (i.e.\ our approximation of a single wall of the Bianchi IX
potential) can be written in terms of the determinant of the metric
$(\sqrt{\eta}=e^{3\alpha})$ as
\begin{equation}
e^{4\alpha-8\beta_+}=(\sqrt{\eta})^{\frac{4}{3}-\frac{8\beta_+}{3\alpha}}
\label{chiara:BianchiIIpotential}
\end{equation}
which, in the limit $\alpha\to-\infty$, tends to the
$\Theta$-function:
\begin{equation}
e^{4\alpha}V_B=\begin{cases}
\infty\quad &\text{if}\quad \frac{4}{3}-\frac{8\beta_+}{3\alpha}<0\\
0\quad &\text{if}\quad \frac{4}{3}-\frac{8\beta_+}{3\alpha}>0\\
\end{cases}
\end{equation}
It follows that the position of the wall is defined by the equation
$\frac{4}{3}-\frac{8\beta_+}{3\alpha}=0$, i.e.\
$\beta_{wall}=\frac{\alpha}{2}$. The $\beta$-particle is hence faster
than the potential wall, and therefore a bounce is always possible
also after the polymer deformation. For this reason, and since the
singularity is not removed, we can state that there will certainly be
infinitely many bounces onto the potential walls. Moreover, the faster
the point moves with respect to the walls, the higher the bounce
frequency. All these facts hint at the possibility that chaos is still
present in the Bianchi IX model in our polymer approach. This is a
quite interesting result if we consider the strong link between the
polymer representation and Loop Quantum Cosmology, which tends to
remove both the chaos and the cosmological singularity
\cite{Ashtekar:2006es,Ashtekar:2009vc}. In particular, when dealing
with PQM, a spatial lattice of spacing $\mu$ must be introduced to
regularize the theory (Sec.~\ref{sec:continuumLimit}). Hence, one
would naively think that the Universe cannot shrink past a certain
threshold volume (singularity removal). Indeed, Loop Quantum Cosmology
applied both to FRW \cite{Ashtekar:2006es} and Bianchi I
\cite{Ashtekar:2009vc} predicts this \textit{Big Bounce}-like dynamics
for the primordial Universe. The persistence of the cosmological
singularity in our model seems instead to be a consequence of our
choice of the configurational variables, while the chaotic dynamics
towards such a singularity is expected to be a more robust physical
feature.

In order to analyze a single bounce we need to parameterize the
anisotropy velocity in terms of the incidence and reflection angles
($\theta_i$ and $\theta_f$, shown in Fig.~\ref{Bounce2}):
\begin{equation}
\begin{array}{cc}
(\beta'_-)_i=r_{\alpha_i}\sin\theta_i &
(\beta'_+)_i=-r_{\alpha_i}\cos\theta_i\\
(\beta'_-)_f=r_{\alpha_f}\sin\theta_f & (\beta'_+)_f=r_{\alpha_f}\cos\theta_f
\end{array}
\label{chiara:velocityParameterization}
\end{equation}
As usual the subscripts $i$ and $f$ distinguish the initial quantities
(just before the bounce) from the final ones (just after the bounce).
Now we can derive the maximum incident angle for having a bounce
against a given potential wall. If we consider, for example, the left
wall of Fig.~\ref{fig:potential}, we can observe that a bounce occurs
only when $|(\beta'_+)_i|>|\beta'_{wall}|=\frac{1}{2}$, that is:
\begin{equation}
\theta_{max}^\alpha=\arccos\left(\frac{1}{2r_{\alpha_i}}\right)\simeq\frac{\pi}{
3}+\frac{1}{2\sqrt{3}}\mu^2p^2
\label{chiara:polyMaxAngle}
\end{equation}
Two important features of this result should be underlined. The first
is that $\theta_{max}^\alpha$ is bigger than the standard maximum
angle $\left(\theta_{max}=\frac{\pi}{3}\right)$, as we can immediately
see from its second-order expansion. The second is that when the
anisotropy velocity tends to infinity, one gets
$\theta_{max}^\alpha\to\frac{\pi}{2}$
(Fig.~\ref{fig:polymerMaxAngle}). Both these characteristics, along
with the triangular symmetry of the potential, confirm the presence of
an infinite series of bounces.
\begin{figure}
\includegraphics[scale=.6]{Fig3.png}
\caption{\label{fig:polymerMaxAngle} The maximum
angle to have a bounce as a function of the anisotropy velocity
$r_\alpha\in[1,\infty)$. The two limit values
$\frac{\pi}{3}\ (r_\alpha=1)$ and
$\frac{\pi}{2}\ (r_\alpha\to\infty)$ are highlighted.}
\end{figure}
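The expansion in Eq.~\eqref{chiara:polyMaxAngle} can be checked numerically against the exact expression $\theta_{max}^\alpha=\arccos(1/2r_{\alpha})$; a short sketch (function name ours, parameter values arbitrary):

```python
import math

def r_alpha(mu, p):
    """Polymer-modified anisotropy velocity, cf. Eq. (chiara:polymerAniVel)."""
    return 1.0 / math.sqrt(1.0 - (mu * p)**2)

mu, p = 0.05, 1.0
r = r_alpha(mu, p)
assert r > 1.0                       # always faster than the standard case
theta_exact = math.acos(1.0 / (2.0 * r))
theta_series = math.pi / 3 + (mu * p)**2 / (2.0 * math.sqrt(3.0))
assert theta_exact > math.pi / 3     # larger than the standard maximum angle
assert abs(theta_exact - theta_series) < 1e-5
```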
Each bounce involves new values of the constants of motion of the
free particle ($p_\pm, H_{\alpha}$) and a change in its direction. The
relation between the directions before and after a bounce can be
inferred by considering Hamilton's equations
\eqref{chiara:polymerHamEq} again, this time with $V_B=e^{-8\beta_+}$.
First of all, we observe that $V_B$ depends only on $\beta_+$, from
which we deduce that $p_-$ is still a constant of motion. A second
constant of motion is $K\coloneqq H_\alpha-\frac{1}{2}p_+$, as can be
verified from the Hamilton equations \eqref{chiara:polymerHamEq}. An
expression for $p_\pm$ can be derived from
Eq.~\eqref{chiara:anisotropyVelocity}:
\begin{equation}
p_\pm=\frac{1}{\mu}\beta'_\pm\sin(\mu H_\alpha)\sqrt{1-\sin^2(\mu H_\alpha)}
\end{equation}
where the compact notation
$\sin(\mu H_\alpha)=\sqrt{\mu^2\left(p^2+\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha-8\beta_+}\right)}$
is used. By substituting this expression and the parameterization
\eqref{chiara:velocityParameterization} into the two constants of
motion, we obtain two modified conservation laws:
\begin{subequations}
\begin{multline}
r_{\alpha_i}\sin(\theta_i)\sin(\mu H_i)\sqrt{1-\sin^2(\mu H_i)}=\\
=r_{\alpha_f}\sin(\theta_f)\sin(\mu H_f)\sqrt{1-\sin^2(\mu H_f)}
\label{chiara:ConsModP-}
\end{multline}
\begin{multline}
H_i+\frac{1}{2 \mu}r_{\alpha_i}\cos(\theta_i)\sin(\mu
H_i)\sqrt{1-\sin^2(\mu H_i)}=\\
=H_f-\frac{1}{2 \mu}r_{\alpha_f}\cos(\theta_f)\sin(\mu
H_f)\sqrt{1-\sin^2(\mu H_f)}
\label{chiara:ConsModK}
\end{multline}
\label{chiara:ConsMod}
\end{subequations}
The reflection law can then be derived after a series of algebraic
passages, which involve the substitution of \eqref{chiara:ConsModP-}
into \eqref{chiara:ConsModK} and the use of the explicit expression
\eqref{chiara:polymerAniVel} for the anisotropy velocity $r_\alpha$:
\begin{multline}
\frac{1}{2} \mu p_f \left(\cos\theta_f+\frac{\cos \theta_i
\sin\theta_f}{\sin\theta_i}\right)=\\
=\arcsin(\mu p_f)-\arcsin(\mu p_i)
\label{chiara:reflLaw}
\end{multline}
In order to cast this law in a simpler form, comparable to the
standard case, an expansion up to second order in $\mu p\ll1$ is
required. The final result is:
\begin{equation}
\frac{1}{2}\sin(\theta_i+\theta_f)=\sin\theta_i(1+\Pi_f^2)-\sin\theta_f(1+\Pi_i^
2)
\label{chiara:secondOrderReflLaw}
\end{equation}
where we have defined $\Pi^2=\frac{1}{6}\mu^2p^2$.
\subsection{Comparison with previous models}\label{sec:comparison}
As we stressed in Sec. \ref{sec:semiclassical-analysis} our model
turns out to be very different from Loop Quantum Cosmology
\cite{Ashtekar:2006es,Ashtekar:2009vc}, which predicts a Big
Bounce-like dynamics. Moreover, if the polymer approach is applied to
the anisotropies $\beta_\pm$, leaving the volume variable $\alpha$
classical \cite{Montani:2013}, a singular but non-chaotic dynamics
is recovered.
It should be noted that the polymer approach in the perturbative limit
can be interpreted as a modified commutation relation
\cite{Battisti:2008du}:
\begin{equation} [\hat{q},\hat{p}]=i(1-\mu p^2)
\end{equation}
where $\mu>0$ is the deformation parameter. Another quantization
method involving an analogous modified commutation relation is the one
deriving from the Generalized Uncertainty Principle (GUP):
\begin{equation}
\Delta q\Delta p\geq\frac{1}{2}(1+s(\Delta p)^2+s\braket{p}^2),\quad (s>0)
\label{math:GUP}
\end{equation}
through which a minimal length is introduced and which is linked to
the commutation relation:
\begin{equation} [\hat{q},\hat{p}]=i(1+sp^2)
\end{equation}
In \cite{BattistiGUP:2009} the Mixmaster model in the GUP approach
(applied to the anisotropy variables $\beta_\pm$) is developed, and
the resulting dynamics is very different from the one obtained by
applying the polymer representation to the same variables
\cite{Montani:2013}. The GUP Bianchi I anisotropy velocity turns out
to be always greater than $1$ ($\beta_{GUP}'^2=1+6sp^2+9s^2p^4$) and,
in particular, greater than the wall velocity $\beta'^{GUP}_{wall}$
(which in this case is different from $\frac{1}{2}$). Moreover the
maximum angle of incidence for having a bounce is always greater than
$\frac{\pi}{3}$. Therefore the occurrence of a bounce between the
$\beta$-particle and the potential walls of Bianchi IX can be
deduced. The GUP Bianchi II model is not analytic (no reflection law
can be inferred), but some qualitative arguments lead to the
conclusion that the deformed Mixmaster model can be considered a
chaotic system. In conclusion, if we apply the two modified
commutation relations to the anisotropy (physical) variables of the
system, we recover very different behaviors. Instead, the polymer
representation applied to the geometrical variable $\alpha$ leads to a
dynamics very similar to the GUP one described here: the anisotropy
velocity
\eqref{chiara:polymerAniVel} is always greater than $1$ and the
maximum incident angle \eqref{chiara:polyMaxAngle} varies in the range
$\frac{\pi}{3}<\theta_{max}<\frac{\pi}{2}$, so that the chaotic
features of the two models can be compared.
\subsection{Quantum Analysis}
The modified super-Hamiltonian constraint
\eqref{chiara:polymerSuperHam} can be easily quantized by promoting
all the variables to operators and by approximating $p_\alpha$ through
the substitution \eqref{chiara:polymerSubstitution}:
\begin{multline}
\left[-\frac{1}{\mu^2}\sin^2(\mu p_\alpha)+\hat{p}_+^2+\hat{p}_-^2+\right.\\
+\left.\frac{3(4\pi)^4}{\kappa^2}e^{4\alpha}V_B(\hat{\beta}_\pm)\right]\psi(p_
\alpha,p_\pm)=0
\label{chiara:polymerQuantumEq}
\end{multline}
If we consider the potential walls as perfectly vertical, the motion
of the quantum $\beta$-particle boils down to the motion of a free
quantum particle with appropriate boundary conditions. The free
motion solution is obtained by imposing $V_B=0$ as usual, and by
separating the variables in the wave function:
$\psi(p_\alpha,p_\pm)=\chi(p_\alpha)\phi(p_\pm)$. It should be noted
that we can no longer take advantage of the adiabatic approximation
\eqref{chiara:adiabaticHypothesis}, due to the necessary use of the
momentum polarization in Eq. \eqref{chiara:polymerQuantumEq};
nonetheless, the separation of variables remains reasonable. In fact,
in the limit where the polymer correction vanishes ($\mu\to0$), the
Misner solution (for which the adiabatic hypothesis is true) is
recovered.
The anisotropy component $\phi(p_\pm)$ is obtained by solving the
eigenvalue problem:
\begin{equation}
(\hat{p}_+^2+\hat{p}_-^2)\phi(p_\pm)=k^2\phi(p_\pm)
\end{equation}
where $k^2=k_+^2+k_-^2$ ($k_\pm$ are the eigenvalues of $\hat{p}_\pm$
respectively). Therefore the eigenfunctions are
\\$\phi(p_\pm)=\phi_+(p_+)\phi_-(p_-)$ with:
\begin{subequations}
\begin{align}
\phi_+(p_+)=A\delta(p_+-k_+)+B\delta(p_++k_+)\\
\phi_-(p_-)=C\delta(p_--k_-)+D\delta(p_-+k_-)
\end{align}
\label{chiara:polyEigenfunctions}
\end{subequations}
where $A,B,C,D$ are integration constants. By substituting this result
in \eqref{chiara:polymerQuantumEq} we find also the isotropic
component $\chi(p_\alpha)$ and the related eigenvalue
$\bar{p}_\alpha$:
\begin{align}
\chi(p_\alpha)=\delta(p_\alpha-\bar{p}_\alpha)\\
\bar{p}_\alpha=\frac{1}{\mu}\arcsin(\mu k)
\end{align}
The potential term is approximated by a square box with vertical
walls, whose side length is $L(\alpha)=L_0+\alpha$, so that the walls
move outward with velocity $|\beta'_{wall}|=\frac{1}{2}$:
\begin{equation}
V_B(\alpha,\beta_\pm)=\begin{cases}
0 \quad\ \text{if}\quad
-\frac{L(\alpha)}{2}\leq\beta_\pm\leq\frac{L(\alpha)}{2}\\
\infty\quad\text{elsewhere}
\end{cases}
\end{equation}
The system is then solved by applying the boundary conditions:
\begin{equation}
\begin{cases}
\phi_+\left(\frac{L(\alpha)}{2}\right)=\phi_+\left(-\frac{L(\alpha)}{2}\right)=0
\\
\phi_-\left(\frac{L(\alpha)}{2}\right)=\phi_-\left(-\frac{L(\alpha)}{2}\right)=0
\end{cases}
\end{equation}
to the eigenfunctions \eqref{chiara:polyEigenfunctions}, expressed in
coordinate representation (i.e. $\phi(\beta_\pm)$, which can be
calculated through a standard Fourier transformation). The final
solution is:
\begin{multline}
\phi_{n,m}(\beta_\pm)=\frac{1}{2L(\alpha)} \left( e^{ik^+_n
\beta_+}-e^{-i
k^+_n \beta_+}e^{-i n \pi}\right)\cdot\\
\cdot \left( e^{ik^-_m \beta_-}-e^{-ik^-_m \beta_-}e^{-i m
\pi}\right)
\label{chiara:anisoPolymerWaveFun}
\end{multline}
with:
\begin{equation}
k^+_n=\frac{n \pi}{L(\alpha)}; \qquad k^-_m=\frac{m \pi}{L(\alpha)}
\end{equation}
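The vanishing of the wave function \eqref{chiara:anisoPolymerWaveFun} at the walls $\beta_\pm=\pm L(\alpha)/2$ can be cross-checked numerically. The following Python sketch (function name and sample values are ours) verifies that each factor satisfies the Dirichlet conditions:

```python
import cmath
import math

def phi_factor(beta, n, L):
    """One factor of the wave function (up to the 1/(2L) prefactor):
    exp(i k_n beta) - exp(-i k_n beta) exp(-i n pi), with k_n = n*pi/L."""
    k = n * math.pi / L
    return (cmath.exp(1j * k * beta)
            - cmath.exp(-1j * k * beta) * cmath.exp(-1j * n * math.pi))

L = 2.7  # arbitrary instantaneous wall separation L(alpha)
for n in range(1, 6):
    # Dirichlet conditions at both moving walls beta = +/- L/2
    assert abs(phi_factor(+L / 2, n, L)) < 1e-12
    assert abs(phi_factor(-L / 2, n, L)) < 1e-12
```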
The isotropic component, instead, turns out to be:
\begin{align}
\bar{p}_\alpha=\frac{1}{\mu}\arcsin\left(\frac{\mu\pi}{L(\alpha)}\sqrt{m^2+n^2}
\right)\label{chiara:polyEigenvalues}\\
\chi(\alpha)=e^{\frac{i}{\mu}\int_0^{\alpha}dt\,
\arcsin\left(\frac{\mu\pi}{L(t)}\sqrt{m^2+n^2}\right)}
\label{chiara:isoPolymerWaveFun}
\end{align}
We emphasize the fact that, as we announced at the beginning of this
section, the Misner solution \eqref{chiara:Eigenvalues} can be easily
recovered from Eq. \eqref{chiara:polyEigenvalues}, by taking the
limit $\mu\to0$.
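This continuum limit can also be verified numerically: the polymer eigenvalue \eqref{chiara:polyEigenvalues} approaches the Misner value $k=\pi\sqrt{m^2+n^2}/L$ as $\mu\to0$, with a deviation of order $\mu^2 k^3$. A minimal sketch (function name and tolerances are ours):

```python
import math

def p_alpha_polymer(mu, k):
    """Polymer eigenvalue: arcsin(mu*k)/mu, valid for mu*k <= 1."""
    return math.asin(mu * k) / mu

L, m, n = 10.0, 2, 3
k = math.pi * math.sqrt(m**2 + n**2) / L  # Misner eigenvalue
for mu in (1e-2, 1e-4, 1e-6):
    # deviation from the Misner value shrinks as O(mu^2 k^3)
    assert abs(p_alpha_polymer(mu, k) - k) < k**3 * mu**2
```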
\subsection{Adiabatic Invariant}
Given the various hints of chaotic behavior outlined in
Sec. \ref{sec:semiclassical-analysis}, which will be confirmed by the
analysis of the Poincar\'e map in Sec.
\ref{sec:polymer-BKL-map}, we can reproduce a calculation similar to
Misner's \cite{Misner:1969} in order to obtain information
about the quasi-classical properties of the early universe. In Sec.
\ref{sec:semiclassical-analysis} we found that the $\beta$-point
undergoes an infinite series of bounces against the three potential
walls, moving with constant velocity between a bounce and
another. Every bounce implies a decrease of $H_\alpha$, due to the
conservation of $K=H_\alpha-\frac{1}{2}p_+$, and the definition of a
new direction of motion through Eq.
\eqref{chiara:secondOrderReflLaw}. Because of the ergodic properties
of the motion, we can assume that it is not influenced by the choice
of the initial conditions. Thus we can choose them so as to
simplify the subsequent analysis. A particularly convenient choice is
$\theta_i+\theta_f=60^\circ$ for the first bounce. In fact, as a
consequence of the geometrical properties of the system, all the
subsequent bounces will be characterized by the same angle of
incidence: $\theta'_i=\theta_i$ (Fig. \ref{Bounce2}).
\begin{figure}
\includegraphics[scale=.5]{Fig4.png}
\caption{Relation between the angles of incidence of
two successive bounces, $\theta_i$ and $\theta'_i$. Here we can see
that the condition $\theta_f+\theta'_i=60^\circ$ must hold in
order to have $180^\circ$ as the sum of the internal angles of the
triangle $A\overset{\triangle}{B}C$. It follows that the initial
condition $\theta_i+\theta_f=60^\circ$ implies
$\theta_i=\theta_i'$ for every pair of successive bounces.}
\label{Bounce2}
\end{figure}
Let us now analyze a single bounce, keeping in mind Fig. \ref{Bounce}.
\begin{figure}
\includegraphics[scale=.35]{Fig5.png}
\caption{Geometric relations between two successive
bounces.}
\label{Bounce}
\end{figure}
At the time $\alpha$ in which we suppose the bounce to occur, the wall
is in the position $\beta_{wall}=\frac{|\alpha|}{2}$. Moving between
two walls, the particle has the constant velocity $|\beta'|=r_\alpha$
(Eq. \eqref{chiara:polymerAniVel}), so that we can deduce the spatial
distance between two bounces in terms of the ``time'' $|\alpha|$
necessary to cover such a distance:
\begin{subequations}
\begin{align}
[\text{distance covered before the bounce}]=r_{\alpha_i}|\alpha_i|\\
[\text{distance covered after the bounce}]=r_{\alpha_f}|\alpha_f|
\end{align}
\end{subequations}
Now if we apply the law of sines to the triangles
$A\overset{\triangle}{B}C$ and $A\overset{\triangle}{B}D$, we find the
relation:
\begin{equation}
\frac{r_{\alpha_i}|\alpha_i|}{r_{\alpha_f}|\alpha_f|}=\frac{\sin\theta_i}{\sin
\theta_f}
\label{chiara:constAlpha1}
\end{equation}
Moreover, from \eqref{chiara:ConsModP-} one gets:
\begin{equation}
\frac{r_{\alpha_f}\sin(\mu H_{\alpha_f})\sqrt{1-\sin^2(\mu
H_{\alpha_f})}}{r_{\alpha_i}\sin(\mu H_{\alpha_i})\sqrt{1-\sin^2(\mu
H_{\alpha_i})}}=\frac{\sin\theta_i}{\sin\theta_f}=\text{const}
\label{chiara:constAlpha2}
\end{equation}
From these two relations we can then deduce a quantity which remains
unaltered after the bounce:
\begin{multline}
r_{\alpha_i}^2 |\alpha_i| \sin(\mu H_i) \sqrt{1- \sin^2(\mu H_i)}=\\
=r_{\alpha_f}^2 |\alpha_f| \sin(\mu H_f) \sqrt{1- \sin^2(\mu H_f)}
\label{chiara:singleBounce}
\end{multline}
Now we can generalize the argument to the $n$-th bounce (taking place
at the ``time'' $\alpha_n$ instead of $\alpha$). By using again the
law of sines (applied to the triangle $A\overset{\triangle}{B}D$) we
get the relation
$r_{\alpha_i}|\alpha_i|=\frac{\sin120^\circ}{2\sin\theta_f}|\alpha_n|=C|\alpha_n|$,
where $C$ takes the same value for every bounce. Correspondingly,
$r_{\alpha_f}|\alpha_f|=C|\alpha_{n+1}|$. If we substitute these new
relations in Eq. \eqref{chiara:singleBounce}, we can then conclude
that:
\begin{equation}
\langle r_\alpha |\alpha| \sin(\mu H) \sqrt{1-\sin^2(\mu H)} \rangle=\text{const}
\label{chiara:polymerAdiabaticInv}
\end{equation}
where the average value is taken over a large number of bounces. As
we are interested in the behavior of quantum states with high
occupation number, for which the quasi-classical approximation
$H_\alpha\simeq\bar{p}_\alpha$ is valid,
we can substitute Eq. \eqref{chiara:polyEigenvalues} and
Eq. \eqref{chiara:polymerAniVel} in \eqref{chiara:polymerAdiabaticInv}
and evaluate the corresponding quantity in the limit
$\alpha\to-\infty$. The result is that also in this case the
occupation number is an adiabatic invariant:
\begin{equation}
\langle \sqrt{m^2+n^2} \rangle = \text{const}
\end{equation}
which is the same conclusion obtained by Misner without the polymer
deformation. It follows that also in this case a quasi-classical
state of the Universe is allowed up to the initial singularity.
\section{Semiclassical solution for the Bianchi I and II models}
\label{sec:polymer-bkl-map}
As we have already outlined in Sec.~\ref{sec:mixm-model:-class}, to
calculate the BKL map it is necessary first to solve the Einstein's
Eqs for the Bianchi I and Bianchi II models.
Therefore, in Sec.~\ref{sec:polymer-bianchi-i-dynamics} we solve the
Hamilton's Eqs in Misner variables for the Polymer Bianchi I, in the
semiclassical approximation introduced in
Sec.~\ref{sec:semiclassical-analysis}.
Then in Sec.~\ref{sec:parametrization} we derive a parametrization for
the Polymer Kasner indices. It is no longer possible to parametrize
the three Kasner indices using only one parameter $u$ as
in~\eqref{kasner:u-param}; two parameters are needed. This is due
to a modification to the Kasner
constraint~\eqref{kasner:sumofsquares}.
In Sec.~\ref{sec:polymer-bianchi-ii} we calculate how the Bianchi II
Einstein's Eqs are modified when the PQM is taken into account. Then
we derive a solution for these Eqs, using the lowest order
perturbation theory in $\mu$. We are therefore assuming that $\mu$ is
small compared to the cubic root of the Universe volume.
\subsection{Polymer Bianchi I}
\label{sec:polymer-bianchi-i-dynamics}
As already shown in Sec.~\ref{sec:semiclassical-analysis}, the Polymer
Bianchi I Hamiltonian is obtained by simply setting $V_B = 0$
in~\eqref{chiara:polymerSuperHam}. Upon solving the Hamilton's Eqs
derived from the Hamiltonian~\eqref{chiara:polymerSuperHam} in the
case of Bianchi I, variables $p_\alpha$ and $p_\pm$ are recognized to
be constants throughout the evolution. We then choose the time gauge
so as to have a linear dependence of the volume $\mathcal{V}$ on the
time $t$ with unit slope, as in the standard
solution~\eqref{bianchi-i:solution}:
\begin{equation}\label{polymer-bianchi-i:time-gauge}
N = - \frac{64 \pi^2 \mu}{\kappa \sin(2{\mu} p_\alpha)}
\end{equation}
Moreover, to have a Universe expanding towards positive times, we must
restrict to the range $\sin(2{\mu} p_\alpha) < 0$. This condition
reflects on $p_\alpha$ as
\begin{equation}\label{polymer-bianchi-i:p-alpha-range}
-\frac{\pi}{2\mu} + \frac{n\pi}{\mu} < p_\alpha <
\frac{n\pi}{\mu},\qquad n \in \numberset{Z}
\end{equation}
The branch connected to the origin, $n=0$, will be called the
\emph{connected-branch}. For $\mu \rightarrow 0$, it gives the correct
continuum limit $p_\alpha < 0$. Even though we cannot provide a simple
and clear physical interpretation for the other choices $n\neq0$, when
we define the parametrization of the Kasner indices in
Sec.~\ref{sec:parametrization}, we will be forced to consider another
branch too, in addition to the connected one. We arbitrarily choose
the $n=1$ branch and, by analogy, we will call it the
\emph{disconnected-branch}.
The solution to Hamilton's Eqs for Polymer Bianchi I in
the~\eqref{polymer-bianchi-i:time-gauge} time gauge, reads as
\begin{subequations}\label{polymer-bianchi-i:solution}
\begin{align}
\label{polymer-bianchi-i:solution-alpha}
\alpha(t) = & ~\frac{1}{3} \ln(t)\\
\label{polymer-bianchi-i:solution-beta}
\beta_\pm(t) = & - \frac{2}{3} \frac{\mu p_\pm}{\sin(2{\mu}
p_\alpha)} \ln(t) =
- \frac{2\mu p_\pm}{\sin(2{\mu} p_\alpha)}
\alpha(t)
\end{align}
\end{subequations}
Since $p_\alpha$, $p_\pm$ and $\mu$ in the
solution~\eqref{polymer-bianchi-i:solution} are all numerical
constants, it is qualitatively the very
same as the classical solution~\eqref{bianchi-i:solution}.
In particular, even in the Polymer case the volume
$\mathcal{V}(t) \propto e^{3\alpha(t)} = t$ goes to zero when
$t \rightarrow 0$: i.e.\ \emph{the singularity is not removed} and
we have a Big Bang, as already pointed out in
Sec.~\ref{sec:semiclassical-analysis}.
By making a reverse canonical transformation from Misner variables
($\alpha$, $\beta_\pm$) to ordinary ones ($q_l$, $q_m$, $q_n$), we
find out that~\eqref{kasner:sum} holds unchanged
while~\eqref{kasner:sumofsquares} is modified into:
\begin{equation}\label{polymer-bianchi-i:sumofsquares-p}
{p_l}^2 + {p_m}^2 + {p_n}^2 = \frac{1}{3} \left( 1 +
\frac{2}{\cos^2(\mu p_\alpha)}\right)
\end{equation}
If the continuum limit $\mu \rightarrow 0$ is taken, we get back the
old constraint~\eqref{kasner:sumofsquares} as expected. Since the relation
\begin{equation}\label{kasner-vs-anisotropy-velocity}
{p_l}^2 + {p_m}^2 + {p_n}^2 = \frac{1}{3}+\frac{2}{3}\beta'^2
\end{equation}
holds, the inequality
${p_l}^2 + {p_m}^2 + {p_n}^2 \geq 1,\quad \forall p_\alpha \in \numberset{R}$ is
directly linked to $\beta$-point velocity~\eqref{chiara:polymerAniVel}
being always equal or greater than one, as demonstrated in
Sec.~\ref{sec:semiclassical-analysis}. This is a noteworthy difference
with respect to the standard case of Eq.~\eqref{chiara:velocity},
where the speed was always one.
We can derive an alternative expression for the Kasner
constraint~\eqref{polymer-bianchi-i:sumofsquares-p} by exploiting the
notation of Eqs~\eqref{bianchi-i:solution}:
\begin{equation}\label{polymer-bianchi-i:sumofsquares-exact}
{p_l}^2 + {p_m}^2 + {p_n}^2 = \frac{1}{3} \left( 1 +
\frac{4}{1 \pm \sqrt{1 - Q^2 }} \right)
\end{equation}
where the plus sign is for the connected-branch while the minus for
the disconnected-branch. We have defined the dimensionless quantity
$Q$ as
\begin{equation}\label{polymer-bianchi-i:Q}
Q := \frac{{(8\pi)}^2 \mu \Lambda }{\kappa }
\end{equation}
It is worth noticing that the lattice spacing $\mu$ has been
incorporated in $Q$, so that the continuum limit $\mu\to 0$ is now
equivalently reached when $Q\to 0$.
Condition~\eqref{polymer-bianchi-i:p-alpha-range} reflects on the
allowed range for $Q$:
\begin{equation}\label{polymer-bianchi-i:Q-range}
\abs*{Q} \leq 1
\end{equation}
both for the connected and disconnected branches.
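As a consistency check, identifying $Q$ with $|\sin(2\mu p_\alpha)|$ (our reading, consistent with \eqref{polymer-bianchi-i:sumofsquares-p} and \eqref{polymer-bianchi-i:sumofsquares-exact}), the two signs in \eqref{polymer-bianchi-i:sumofsquares-exact} correspond to $\cos^2(\mu p_\alpha)$ being larger or smaller than $1/2$. A numerical sketch of this equivalence (function names are ours):

```python
import math

def sum_sq_palpha(x):
    """Kasner sum of squares as a function of x = mu*p_alpha,
    Eq. (sumofsquares-p): (1 + 2/cos^2 x)/3."""
    return (1 + 2 / math.cos(x)**2) / 3

def sum_sq_Q(Q, sign):
    """Kasner sum of squares in terms of Q, Eq. (sumofsquares-exact)."""
    return (1 + 4 / (1 + sign * math.sqrt(1 - Q**2))) / 3

for x in (0.1, 0.3, 0.6, 1.0, 1.4):
    Q = abs(math.sin(2 * x))
    # plus sign when cos^2(mu p_alpha) > 1/2, minus sign otherwise
    sign = +1 if math.cos(x)**2 > 0.5 else -1
    assert abs(sum_sq_palpha(x) - sum_sq_Q(Q, sign)) < 1e-9
```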
\subsection{Parametrization of the Kasner indices}\label{sec:parametrization}
In this Section we define a parametrization of the Polymer Kasner
indices. In the standard case of~\eqref{kasner:u-param}, because
there are three Kasner indices and two Kasner
constraints~\eqref{kasner:sum} and~\eqref{kasner:sumofsquares}, only
one parameter $u$ is needed to parametrize the Kasner indices. On the
other hand, in the Polymer case, even if there are two Kasner
constraints~\eqref{kasner:sum}
and~\eqref{polymer-bianchi-i:sumofsquares-exact} as well,
constraint~\eqref{polymer-bianchi-i:sumofsquares-exact} already
depends on another parameter $Q$. This means that any parametrization
of the Polymer Kasner indices will inevitably depend on two
parameters, that we arbitrarily choose as $u$ and $Q$, where $u$ is
defined on the same range as the standard case~\eqref{kasner:u-param}
and $Q$ was defined in~\eqref{polymer-bianchi-i:Q}. They are both
dimensionless. We will refer to the following expressions as the
$(u,Q)$-parametrization for the Polymer Kasner indices:
\begin{align}\label{parametrization:u-Q-param}
\begin{split}
p_1 & = \frac{1}{3}\left\{ 1 - \frac{1 + 4u + u^2}{1 + u + u^2}
{\left\lbrack \frac{1}{2} \left( 1 \pm \sqrt{1 - Q^2} \right)
\right\rbrack}^{-\frac{1}{2}} \right\}\\
p_2 & = \frac{1}{3}\left\{ 1 + \frac{2 + 2u - u^2}{1 + u + u^2}
{\left\lbrack \frac{1}{2} \left( 1 \pm \sqrt{1 - Q^2} \right)
\right\rbrack}^{-\frac{1}{2}} \right\}\\
p_3 & = \frac{1}{3}\left\{ 1 +\frac{-1 + 2u + 2u^2}{1 + u + u^2}
{\left\lbrack \frac{1}{2} \left( 1 \pm \sqrt{1 - Q^2} \right)
\right\rbrack}^{-\frac{1}{2}} \right\}
\end{split}
\end{align}
where the plus sign is for the connected branch and the minus sign is
for the disconnected branch. By construction, the standard
$u$-parametrization~\eqref{kasner:u-param} is recovered in the limit
$Q \rightarrow 0$ of the connected branch. We will see clearly in
Sec.~\ref{sec:BKL-map-properties} how the $Q$ parameter can be thought
of as a measure of the \emph{quantization degree} of the Universe: the
bigger the $|Q|$ of the connected-branch, the more pronounced the
deviations from the standard dynamics due to PQM. The opposite is true
for the disconnected-branch. To gain insight on multiple-parameter
parametrizations of the Kasner indices, the reader can refer
to~\cite{Montani:2004mq,Elskens:1987}.
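Both Kasner constraints can be verified numerically for the $(u,Q)$-parametrization; the sketch below (function name is ours) checks that the indices \eqref{parametrization:u-Q-param} satisfy \eqref{kasner:sum} and \eqref{polymer-bianchi-i:sumofsquares-exact} on the connected branch:

```python
import math

def kasner_indices(u, Q, sign=+1):
    """(u,Q)-parametrization of the Kasner indices;
    sign=+1 selects the connected branch."""
    F = (0.5 * (1 + sign * math.sqrt(1 - Q**2))) ** -0.5
    d = 1 + u + u**2
    p1 = (1 - (1 + 4*u + u**2) / d * F) / 3
    p2 = (1 + (2 + 2*u - u**2) / d * F) / 3
    p3 = (1 + (-1 + 2*u + 2*u**2) / d * F) / 3
    return p1, p2, p3

for u in (1.0, 1.7, 4.2):
    for Q in (0.0, 0.3, 0.9):
        p1, p2, p3 = kasner_indices(u, Q)
        # first Kasner constraint: the indices sum to one
        assert abs(p1 + p2 + p3 - 1) < 1e-12
        # modified sum-of-squares constraint (connected branch)
        target = (1 + 4 / (1 + math.sqrt(1 - Q**2))) / 3
        assert abs(p1**2 + p2**2 + p3**2 - target) < 1e-12
```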
The $(u,Q)$-parametrization~\eqref{parametrization:u-Q-param} is even
in $Q$: $p_a(u,Q) = p_a(u,-Q)$ with $a=l,m,n$. We can thus assume
without loss of generality that $0 \leq Q \leq 1$. Another interesting
feature of the Polymer Bianchi I model, that is evident from
Fig.~\ref{fig:parametrization:u-Q}, is that, for every $u$ and $Q$ in
a non-null measure set, two Kasner indices can be simultaneously
negative (instead of only one as is the case of the standard
$u$-parametrization~\eqref{kasner:u-param}).
In a similar fashion as the $u$ parameter is ``remapped'' in the
standard case (second line of~\eqref{bianchi-ix:BKL-map}), also the
$(u,Q)$ parametrization is to be ``remapped'', if the $u$ parameter
happens to become less than one. Below we list these remapping
prescriptions explicitly: we show how the
$(u,Q)$-parametrization~\eqref{parametrization:u-Q-param} can be
recovered if the $u$ parameter becomes smaller than 1, through a
reordering of the Kasner indices and a remapping of the $u$ parameter.
\begin{itemize}\label{parametrization:remapping}
\item \centering $0 < u < 1$
\[\begin{dcases} p_1\left(u,Q\right) \rightarrow
p_1\left(\frac{1}{u},Q\right)\\ p_2\left(u,Q\right) \rightarrow
p_3\left(\frac{1}{u},Q\right)\\ p_3\left(u,Q\right) \rightarrow
p_2\left(\frac{1}{u},Q\right)
\end{dcases}\]
\item \centering $-\tfrac{1}{2} < u < 0$
\[\begin{dcases} p_1\left(u,Q\right) \rightarrow
p_2\left(-\frac{1+u}{u},Q\right)\\ p_2\left(u,Q\right)
\rightarrow p_3\left(-\frac{1+u}{u},Q\right)\\
p_3\left(u,Q\right) \rightarrow p_1\left(-\frac{1+u}{u},Q\right)
\end{dcases}\]
\item \centering $-1 < u < -\frac{1}{2}$
\[\begin{dcases} p_1\left(u,Q\right) \rightarrow p_3\left(
-\frac{u}{1+u},Q\right)\\ p_2\left(u,Q\right) \rightarrow
p_2\left( -\frac{u}{1+u},Q\right)\\ p_3\left(u,Q\right)
\rightarrow p_1\left( -\frac{u}{1+u},Q\right)
\end{dcases}\]
\item \centering $-2 < u < -1$
\[\begin{dcases} p_1\left(u,Q\right) \rightarrow p_3\left(
-\frac{1}{1+u},Q\right)\\ p_2\left(u,Q\right) \rightarrow
p_1\left( -\frac{1}{1+u},Q\right)\\ p_3\left(u,Q\right)
\rightarrow p_2\left( -\frac{1}{1+u},Q\right)
\end{dcases}\]
\item \centering $u < -2$
\[\begin{dcases} p_1\left(u,Q\right) \rightarrow p_2\left(
-\left(1+u\right),Q\right)\\ p_2\left(u,Q\right) \rightarrow
p_1\left( -\left(1+u\right),Q\right)\\ p_3\left(u,Q\right)
\rightarrow p_3\left( -\left(1+u\right),Q\right)
\end{dcases}\]
\end{itemize}
where the indices on the left are defined for the range of $u$ shown
after the corresponding bullet, while the indices on the right are in
the ``correct'' range $u>1$. The arrow `$\rightarrow$' indicates a
conceptual mapping; in value, it reads as an equality `$=$'. As far as
the $Q$ parameter is concerned, no ``remapping'' prescription is
needed, because values outside the
boundaries~\eqref{polymer-bianchi-i:Q-range} are not physical and
should never be considered.
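These remapping prescriptions amount to identities between the indices \eqref{parametrization:u-Q-param} evaluated at $u$ and at the remapped parameter; the first two can be checked numerically as follows (a sketch with our function names; $Q$ is held fixed since the overall $Q$-dependent factor cancels):

```python
import math

def kasner_indices(u, Q, sign=+1):
    """(u,Q)-parametrization; sign=+1 selects the connected branch."""
    F = (0.5 * (1 + sign * math.sqrt(1 - Q**2))) ** -0.5
    d = 1 + u + u**2
    return ((1 - (1 + 4*u + u**2) / d * F) / 3,
            (1 + (2 + 2*u - u**2) / d * F) / 3,
            (1 + (-1 + 2*u + 2*u**2) / d * F) / 3)

Q = 0.4
# 0 < u < 1: (p1, p2, p3)(u) -> (p1, p3, p2)(1/u)
u = 0.3
a = kasner_indices(u, Q)
b = kasner_indices(1 / u, Q)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, (b[0], b[2], b[1])))
# -1/2 < u < 0: (p1, p2, p3)(u) -> (p2, p3, p1)(-(1+u)/u)
u = -0.3
a = kasner_indices(u, Q)
b = kasner_indices(-(1 + u) / u, Q)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, (b[1], b[2], b[0])))
```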
In Figure~\ref{fig:parametrization:u-Q} the values of the ordered
Kasner indices are displayed for the
$(u,Q)$-parametrization~\eqref{parametrization:u-Q-param}, where
$Q\in\left\lbrack 0,1 \right\rbrack$. Because the range $u > 1$ is not
easily plottable, the equivalent parametrization in
$u\in\left\lbrack 0,1 \right\rbrack$ was used. We notice that the
roles of $p_m$ and $p_n$ are exchanged for this range choice.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{Fig6.png}
\end{center}
\caption{Plot of the ``spectrum'' of the Kasner indices in
the $(u,Q)$-parametrization~\eqref{parametrization:u-Q-param}. Every
value of $(u,Q)$ in the domain $u\in\lbrack 1,\infty)$ and
$Q\in[0,1]$ uniquely selects a triplet of Kasner indices. The
vertical plane delimits the two branches of the
$(u,Q)$-parametrization~\eqref{parametrization:u-Q-param}. The
connected-branch is the one containing the red (dark gray) lines, i.e.\
the
standard parametrization shown in Fig.~\ref{fig:kasner:u-param}.
The $Q$ parameter of the connected-branch increases from 0 to
1 in the positive direction of the $Q$-axis, while for the
disconnected-branch it decreases from 1 to 0. The black line
delimits the parts of the $(u,Q)$-plane where there is only one
(above) or two (below) negative Kasner indices. The plot is cut at
the bottom and at the top, but actually extends to
$\pm\infty$.}
\label{fig:parametrization:u-Q}
\end{figure}
\subsection{Polymer Bianchi II}\label{sec:polymer-bianchi-ii}
Here we apply the method described in
Sec.~\ref{sec:polymer-bianchi-i-dynamics} to find an approximate
solution to the Einstein's Eqs of the Polymer Bianchi II model. We
start by selecting the $V_B$ potential appropriate for Bianchi
II~\eqref{chiara:BianchiIIpotential} and we substitute it in the
Hamiltonian~\eqref{chiara:polymerSuperHam} (in the following we will
always assume the time gauge $N=1$).
Then, starting from Hamilton's Eqs, inverting the Misner canonical
transformation and converting the synchronous time $t$-derivatives
into logarithmic time $\tau$-derivatives, the Polymer Bianchi II
Einstein's Eqs are found to be:
\begin{equation}\label{polymer-bianchi-ii:einstein-equations}
\begin{dcases}
{q_l}_{\tau\tau}(\tau) = \frac{1}{3} e^{2q_l(\tau)} \left\lbrack
-4 + \sqrt{1+{\left( \frac{2{(4\pi)}^2\mu}{\kappa} v(\tau)
\right)}^2 } \right\rbrack \\
{q_m}_{\tau\tau}(\tau) = \frac{1}{3} e^{2q_l(\tau)} \left\lbrack 2
+ \sqrt{1+{\left( \frac{2{(4\pi)}^2\mu}{\kappa} v(\tau)
\right)}^2 } \right\rbrack \\
{q_n}_{\tau\tau}(\tau) = \frac{1}{3} e^{2q_l(\tau)} \left\lbrack 2
+ \sqrt{1+{\left( \frac{2{(4\pi)}^2\mu}{\kappa} v(\tau)
\right)}^2 } \right\rbrack
\end{dcases}
\end{equation}
where
$v(\tau) := {q_l}_\tau(\tau) + {q_m}_\tau(\tau) +
{q_n}_\tau(\tau)$. In the $Q \ll 1$ approximation it is enough to
consider only the connected-branch of the solutions, i.e.~the branch
that in the limit $\mu\to 0$ makes
Eqs~\eqref{polymer-bianchi-ii:einstein-equations} reduce to the
correct classical Eqs~\eqref{bianchi-ii:log-einstein}.
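This continuum limit is quickly checked: at $\mu=0$ the bracketed factors in \eqref{polymer-bianchi-ii:einstein-equations} reduce to $-3$ and $+3$, i.e.\ to right-hand sides $\mp e^{2q_l(\tau)}$. A minimal sketch (the dimensionless combination $\frac{2(4\pi)^2}{\kappa}$ is set to 1 here for illustration):

```python
import math

def rhs_factor_l(mu, v, c=1.0):
    """RHS factor of the q_l equation; c stands for 2(4*pi)^2/kappa."""
    return (-4 + math.sqrt(1 + (c * mu * v) ** 2)) / 3

def rhs_factor_mn(mu, v, c=1.0):
    """RHS factor of the q_m and q_n equations."""
    return (2 + math.sqrt(1 + (c * mu * v) ** 2)) / 3

v = 1.7  # arbitrary value of v(tau)
# classical (mu = 0) limit: coefficients -1 and +1, i.e. -/+ e^{2 q_l}
assert rhs_factor_l(0.0, v) == -1.0
assert rhs_factor_mn(0.0, v) == 1.0
```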
Now we find an approximate solution to the
system~\eqref{polymer-bianchi-ii:einstein-equations}, with the only
assumption that $\mu$ is small compared to the cubic root of the
Universe volume. We are then entitled to exploit standard
perturbation theory and expand the solution up to the first non-zero
order in $\mu$. Because $\mu$ appears only squared ($\mu^2$) in the
Einstein's Eqs~\eqref{polymer-bianchi-ii:einstein-equations}, the
perturbative expansion will only contain even powers of $\mu$.
First we expand the solution at the first order in $\mu^2$
\begin{equation}\label{polymer-bianchi-ii:approximate-solution}
\begin{dcases}
q_l(\tau) = q_l^0(\tau) + \mu^2 q_l^1(\tau) + O(\mu^4) \\
q_m(\tau) = q_m^0(\tau) + \mu^2 q_m^1(\tau) + O(\mu^4) \\
q_n(\tau) = q_n^0(\tau) + \mu^2 q_n^1(\tau) + O(\mu^4)
\end{dcases}
\end{equation}
where $q_{l,m,n}^0$ are the zeroth order terms and $q_{l,m,n}^1$ are
the first order terms in $\mu^2$. We recall that the zeroth order
solution $q_{l,m,n}^0$ is just the classical
solution~\eqref{bianchi-ii:solution}, where the $q_{l,m,n}(\tau)$
appearing there must now carry a 0-superscript, accordingly.
Considering the fact that ${q_m}_{\tau\tau} = {q_n}_{\tau\tau}$, we
can eliminate $q_n(\tau)$
from~\eqref{polymer-bianchi-ii:einstein-equations} by setting
\begin{multline}\label{approximate-solution:perturbative-expansion}
q_n(\tau) = q_m(\tau) + 2 (c_5 + c_6\tau) + \mu^2 (c_{11} + c_{12}
\tau) + \\ - 2 (c_3 + c_4\tau) - \mu^2 (c_9 + c_{10}\tau)
\end{multline}
where the first six constants of integration $c_1,c_2,\dots,c_{6}$
play the same role at the zeroth order as in
Eq.~\eqref{bianchi-ii:solution}, while the successive six
$c_7,\dots,c_{12}$ are needed to parametrize the first order
solution.
Then we substitute~\eqref{approximate-solution:perturbative-expansion}
in~\eqref{polymer-bianchi-ii:einstein-equations} and, as required by
perturbation theory, we gather only the zeroth and first
order terms in $\mu^2$ and neglect all the higher order terms. The
Einstein's Eqs for the first order terms $q_l^1$ and $q_m^1$ are then
found to be:
\begin{numcases}{\label{approximate-solution:1order-eqs-2}}
\label{approximate-solution:1order-eqs-2-ql}
\frac{\partial^2{q_l^1}}{\partial\tau^2} + {c_1}^2 \sech^2(c_1\tau +
c_2) \times \\ \nonumber \times \left\{ 2 {q_l^1} + \frac{C}{3}
{\left\lbrack 2(c_6+c_4) + c_1 \tanh(c_1\tau + c_2)
\right\rbrack}^2 \right\} = 0\\
\label{approximate-solution:1order-eqs-2-qm}
\frac{\partial^2{q_m^1}}{\partial\tau^2} + {c_1}^2 \sech^2(c_1\tau +
c_2) \times \\ \nonumber \times \left\{ 2 {q_l^1} + \frac{C}{3}
{\left\lbrack 2(c_6+c_4) + c_1 \tanh(c_1\tau + c_2)
\right\rbrack}^2 \right\} = 0
\end{numcases}
where we have defined $C \equiv \frac{2{(4\pi)}^2}{\kappa^2}$ for
brevity and we have substituted the zeroth order
solution~\eqref{bianchi-ii:solution} where needed.
Eqs~\eqref{approximate-solution:1order-eqs-2} are two almost uncoupled
non-homogeneous ODEs.
Eq.~\eqref{approximate-solution:1order-eqs-2-ql}
is a linear second order non-homogeneous ODE with non-constant
coefficients and, being completely uncoupled, it can be solved
directly. Eq.~\eqref{approximate-solution:1order-eqs-2-qm} is solved
by mere substitution of the solution
of~\eqref{approximate-solution:1order-eqs-2-ql} in it and subsequent
double integration.
To solve~\eqref{approximate-solution:1order-eqs-2-ql} we exploited a
standard method that can be found, for example, in~\cite[Lesson
23]{Tenenbaum:2012}. This method, called \emph{reduction of
order}, can be used, in the case of a second order ODE, to find
a particular solution once any non-trivial solution of the related
homogeneous equation is known.
The homogeneous equation associated
to~\eqref{approximate-solution:1order-eqs-2-ql} is
\begin{equation}\label{approximate-solution:homogeneous-eq}
\frac{\partial^2{q_l^1}}{\partial\tau^2} + 2{c_1}^2 \sech^2(c_1\tau
+ c_2) {q_l^1} = 0
\end{equation}
whose general solution reads as:
\begin{equation}\label{approximate-solution:homogeneous-solution}
{q_l^1}_O(\tau) = -\frac{c_8}{c_1} + (c_7 + c_8 \tau) \tanh(c_1\tau
+ c_2)
\end{equation}
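That this expression indeed solves the homogeneous equation can be verified numerically by finite differences; a sketch (step size and sample constants are ours):

```python
import math

def q_hom(tau, c1, c2, c7, c8):
    """General solution of the homogeneous equation:
    -c8/c1 + (c7 + c8*tau) * tanh(c1*tau + c2)."""
    return -c8 / c1 + (c7 + c8 * tau) * math.tanh(c1 * tau + c2)

def residual(tau, c1, c2, c7, c8, h=1e-4):
    """q'' + 2 c1^2 sech^2(c1*tau + c2) q, with q'' approximated
    by a second order central difference."""
    q = lambda t: q_hom(t, c1, c2, c7, c8)
    qpp = (q(tau + h) - 2 * q(tau) + q(tau - h)) / h**2
    sech2 = 1.0 / math.cosh(c1 * tau + c2) ** 2
    return qpp + 2 * c1**2 * sech2 * q(tau)

c1, c2, c7, c8 = 0.8, -0.3, 1.1, 0.5  # arbitrary integration constants
for tau in (-2.0, 0.0, 1.5):
    # residual is zero up to discretization error
    assert abs(residual(tau, c1, c2, c7, c8)) < 1e-5
```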
By applying the above-mentioned method, we obtain the following
solution for~\eqref{approximate-solution:1order-eqs-2-ql}
\begingroup\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}}
\begin{equation}\label{approximate-solution:1order-q-l-sol}
\begin{split}
q_l^1(\tau) = & -\tfrac{c_8}{c_1} + (c_7 + c_8 \tau) \tanh(c_1\tau
+
c_2)\\
& -\tfrac{C}{36}c_1 \tanh(c_1\tau + c_2) \left\{ 3 \left\lbrack
{c_1}^2 + 8 {(c_4 + c_6)}^2 \right\rbrack \tau \right. \\
& + 16 (c_4 + c_6) \ln\left\lbrack\cosh(c_1\tau +
c_2)\right\rbrack - 3c_1 \tanh(c_1\tau + c_2) \Big\}
\end{split}
\end{equation}
\endgroup The solution
for~\eqref{approximate-solution:1order-eqs-2-qm} is found by
substituting~\eqref{approximate-solution:1order-q-l-sol} in it and
then integrating two times:
\begingroup\makeatletter\def\f@size{8.8}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}}
\begin{equation}
\begin{split}
q_m^1(\tau) = &
~c_{10} + c_9\tau - (c_7 + c_8 \tau) \tanh(c_1 \tau + c_2)\\
& - \tfrac{C}{36} \Big\{ 16 c_1 (c_4 + c_6)(c_1\tau + c_2)
{c_1}^2\sech^2(c_1\tau + c_2) \\
& + 8 \left\lbrack {c_1}^2 + 12 {(c_4+c_6)}^2 \right\rbrack \ln
(\cosh (c_1 \tau +
c_2)) \\
& - c_1\tanh(c_1 \tau + c_2) \left\lbrack 48 (c_4 + c_6) + 3
\left({c_1}^2 + 8{(c_4 + c_6)}^2\right)\tau
\right. \\
& + 16(c_4 + c_6) \ln(\cosh(c_1\tau + c_2)) \Big\rbrack \Big\}
\end{split}
\end{equation}
\endgroup
Now that we know $q_l^0$, $q_l^1$, $q_m^0$ and $q_m^1$, the
complete solution for $q_l$, $q_m$ and $q_n$ is found
through~\eqref{polymer-bianchi-ii:approximate-solution}
and~\eqref{approximate-solution:perturbative-expansion} by mere
substitution.
\section{Polymer BKL map}\label{sec:polymer-BKL-map}
In this last Section, we calculate the Polymer modified BKL map
and study some of its properties.
In Sec.~\ref{sec:polymer-bkl-map-kasner-indices} the Polymer BKL map
on the Kasner indices is derived while in
Sec.~\ref{sec:BKL-map-properties} some noteworthy properties of the
map are discussed.
Finally in Sec~\ref{sec:numerical-simulation} the results of a simple
numerical simulation of the Polymer BKL map over many iterations are
presented and discussed.
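For reference, the standard ($\mu=0$) BKL map on the $u$ parameter, against which the Polymer map is compared, can be iterated in a few lines; the sketch below (ours) shows an era of decreasing $u$ followed by the start of a new era:

```python
def bkl_step(u):
    """Standard BKL map on the u parameter (mu = 0 case):
    u -> u - 1 inside an era, u -> 1/(u - 1) when a new era starts."""
    return u - 1 if u > 2 else 1 / (u - 1)

u = 5.25
history = [u]
for _ in range(6):
    u = bkl_step(u)
    history.append(u)
# four epochs of the first era, then a new era starting at u = 4
assert history == [5.25, 4.25, 3.25, 2.25, 1.25, 4.0, 3.0]
assert all(x > 1 for x in history)
```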
\subsection{Polymer BKL map on the Kasner indices}
\label{sec:polymer-bkl-map-kasner-indices}
Here we use the method outlined in
Secs.~\ref{sec:polymer-bianchi-i-dynamics}
to~\ref{sec:polymer-bianchi-ii} to directly calculate the Polymer BKL
map on the Kasner indices.
First, we look at the asymptotic limit at $\pm\infty$ for the solution
of Polymer Bianchi
II~\eqref{polymer-bianchi-ii:approximate-solution}. As in the standard
case, the Polymer Bianchi II model ``links together'' two Kasner
epochs at plus and minus infinity. In this sense, the dynamics of
Polymer Bianchi II is not qualitatively different from the standard
one. We will tell the quantities at plus and minus infinity apart by
adding a prime $\Box'$ to the quantities at minus infinity and leaving
the quantities at plus infinity un-primed. The two Kasner solutions at
plus and minus infinity can be still parametrized according
to~\eqref{bianchi-i:solution}.
By summing together equations~\eqref{bianchi-i:solution} and using the
first Kasner constraint~\eqref{kasner:sum}, we find that:
\begin{equation}\label{polymer-BKL-map:lambda}
\begin{cases}
\lim_{\tau \rightarrow +\infty} \frac{1}{2\tau}\left( q_l + q_m +
q_n \right) =
\Lambda \\
\lim_{\tau \rightarrow -\infty} \frac{1}{2\tau}\left( q'_l + q'_m
+ q'_n \right) = \Lambda'
\end{cases}
\end{equation}
By taking the limits at $\tau \rightarrow \pm \infty$ of the
derivatives of the Polymer Bianchi II
solution~\eqref{polymer-bianchi-ii:approximate-solution},\\~\\
{\centering $\boxed{\lim_{\tau \rightarrow +\infty}}$\par}
\begingroup\makeatletter\def\f@size{8.8}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}}
\begin{align}
\nonumber {q_l}_\tau = & - c_1 + \mu^2 c_8 - \mu^2 \frac{C}{36} c_1
\left\lbrack 3{c_1}^2 + 16 c_1 (c_4 + c_6)
+ 24 {(c_4 + c_6)}^2 \right\rbrack \\
\nonumber {q_m}_\tau = & ~ 2c_4 + c_1 + \mu^2 (-c_8 + c_9) - \mu^2
\frac{C}{36} c_1 \left\lbrack 5{c_1}^2 + 24 {(c_4 +
c_6)}^2 \right\rbrack \\
\nonumber {q_n}_\tau = & ~ 2c_6 + c_1 + \mu^2 (-c_8 + c_{11}) - \mu^2
\frac{C}{36} c_1 \left\lbrack 5{c_1}^2 + 24 {(c_4 +
c_6)}^2 \right\rbrack \\
\nonumber \Lambda = & ~ \frac{c_1}{2} + c_4 + c_6 + \mu^2 \frac{1}{2} (-c_8 + c_9 +
c_{11}) + \\
& ~ - \mu^2 \frac{C}{72} c_1 \left\lbrack
13 {c_1}^2 + 16 c_1 (c_4 + c_6) + 168 {(c_4 + c_6)}^2
\right\rbrack \label{polymer-BKL-map:limit-plus}
\end{align}
\endgroup
{\centering$\boxed{\lim_{\tau \rightarrow -\infty}}$\par}
\begingroup\makeatletter\def\f@size{8.8}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}}
\begin{align}
\nonumber {q'_l}_\tau = & ~ c_1 - \mu^2 c_8 + \mu^2 \frac{C}{36} c_1
\left\lbrack 3{c_1}^2 - 16 c_1 (c_4 + c_6)
+ 24 {(c_4 + c_6)}^2 \right\rbrack \\
\nonumber {q'_m}_\tau = & ~ 2c_4 - c_1 - \mu^2 (c_8 + c_9) + \mu^2
\frac{C}{36} c_1 \left\lbrack 5{c_1}^2 + 24 {(c_4 +
c_6)}^2 \right\rbrack \\
\nonumber {q'_n}_\tau = & ~ 2c_6 - c_1 - \mu^2 (c_8 + c_{11}) + \mu^2
\frac{C}{36} c_1 \left\lbrack 5{c_1}^2 + 24 {(c_4 +
c_6)}^2 \right\rbrack \\
\nonumber \Lambda' = & ~ -\frac{c_1}{2} + c_4 + c_6 + \mu^2 \frac{1}{2} (c_8 + c_9 +
c_{11}) + \\
& ~ + \mu^2 \frac{C}{72} c_1 \left\lbrack
13 {c_1}^2 - 16 c_1 (c_4 + c_6) + 168 {(c_4 + c_6)}^2
\right\rbrack \label{polymer-BKL-map:limit-minus}
\end{align}
\endgroup
and comparing~\eqref{polymer-BKL-map:limit-plus}
and~\eqref{polymer-BKL-map:limit-minus}
with~\eqref{bianchi-i:solution} and~\eqref{polymer-BKL-map:lambda}, we
can find the following expressions for the primed and unprimed Kasner
indices and $\Lambda$:
\begin{numcases}{\label{polymer-BKL-map:system}}
2 \Lambda{p_l} = f_l (\boldsymbol{c})\\
2 \Lambda{p_m} = f_m (\boldsymbol{c})\\
2 \Lambda{p_n} = f_n (\boldsymbol{c})\\
\Lambda= f_\Lambda(\boldsymbol{c})\\
\label{polymer-BKL-map:rel1Minus}
2 \Lambda'{p'_l} = f'_l (\boldsymbol{c})\\
\label{polymer-BKL-map:rel2Minus}
2 \Lambda'{p'_m} = f'_m (\boldsymbol{c})\\
2 \Lambda'{p'_n} = f'_n (\boldsymbol{c})\\
\label{polymer-BKL-map:rel4Minus}
\Lambda' = f'_\Lambda(\boldsymbol{c})
\end{numcases}
where the functions $f_{l,m,n,\Lambda}$ and $f'_{l,m,n,\Lambda}$
correspond to the r.h.s.\ of~\eqref{polymer-BKL-map:limit-plus}
and~\eqref{polymer-BKL-map:limit-minus} respectively and
$\boldsymbol{c}$ is a shorthand for the set $\{c_1,\dots,c_{12}\}$.
Now, in complete analogy with~\eqref{bianchi-ii:asympt-rel}, we look
for an asymptotic condition. We do not need to solve the whole
system~\eqref{polymer-BKL-map:system}, but only to find a relation
between the old and new Kasner indices and $\Lambda$ at first
order in $\mu^2$. In practice, not all of the relations
in~\eqref{polymer-BKL-map:system} are needed to find such an
asymptotic condition.
We recall from~\eqref{bianchi-ii:BKL-map} that the standard BKL map on
$\Lambda$ is
\begin{equation}\label{polymer-BKL-map:classical-map-on-lambda}
\Lambda' = (1 + 2 p_l) \Lambda
\end{equation}
where we have assumed that $p_l < 0$, as we will continue to do in the
following. As everything so far suggests, we require that the
Polymer-modified BKL map reduces to the standard BKL map when
$\mu\to 0$, so that
relation~\eqref{polymer-BKL-map:classical-map-on-lambda} is modified
in the Polymer case only perturbatively. Since we are working
only at first order in $\mu^2$, we can write for the Polymer case:
\begin{equation}
\Lambda' = (1 + 2 p_l) \Lambda + \mu^2 h(\boldsymbol{c}) + O(\mu^4)
\end{equation}
Solving for $h(\boldsymbol{c})$, we find that
\begin{align*}
h(\boldsymbol{c}) & = \frac{1}{\mu^2} \left( \Lambda' - \Lambda - 2p_l
\Lambda \right)\\
& = \frac{1}{\mu^2} \left( f'_\Lambda(\boldsymbol{c})
- f_\Lambda(\boldsymbol{c}) - f_l(\boldsymbol{c}) \right)\\
& = \frac{4C}{9} c_1 \left\lbrack {c_1}^2 + c_1 (c_4 + c_6) + 12
{(c_4 + c_6)}^2 \right\rbrack
\end{align*}
We can invert the subsystem made up of the zeroth order of
equations~\eqref{polymer-BKL-map:rel1Minus},
\eqref{polymer-BKL-map:rel2Minus}
and~\eqref{polymer-BKL-map:rel4Minus}
\begin{equation*}
\begin{dcases}
2 \Lambda{p_l} = -c_1\\
2 \Lambda{p_m} = 2c_4 + c_1\\
\Lambda = \frac{c_1}{2} + c_4 + c_6
\end{dcases}
\end{equation*}
to express $h=h(p_l,p_m,\Lambda)$. Finally the asymptotic condition
reads as:
\begin{equation}\label{polymer-BKL-map:asymptotic-condition}
\Lambda' \approx (1 + 2 p_l) \Lambda - \mu^2 \frac{16C}{9}
{\Lambda}^3 p_l \left( 6 + 11p_l + 7{p_l}^2 \right)
\end{equation}
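The algebra behind the asymptotic condition can be spot-checked numerically. The sketch below (ours, not the authors' code) inverts the zeroth-order subsystem, $c_1 = -2\Lambda p_l$, $c_4 = \Lambda(p_l + p_m)$, $c_6 = \Lambda(1 - p_m)$, substitutes into $h(\boldsymbol{c})$, and compares with the coefficient $-\tfrac{16C}{9}\Lambda^3 p_l(6 + 11p_l + 7p_l^2)$ appearing in~\eqref{polymer-BKL-map:asymptotic-condition}; the sample values are arbitrary and purely illustrative.

```python
def h_from_constants(c1, c4, c6, C):
    # h(c) = (4C/9) c1 [c1^2 + c1 (c4+c6) + 12 (c4+c6)^2], as in the text
    s = c4 + c6
    return (4.0 * C / 9.0) * c1 * (c1**2 + c1 * s + 12.0 * s**2)

def h_from_indices(pl, pm, Lam, C):
    # coefficient of mu^2 in the asymptotic condition
    return -(16.0 * C / 9.0) * Lam**3 * pl * (6.0 + 11.0 * pl + 7.0 * pl**2)

# arbitrary sample values (for illustration only)
pl, pm, Lam, C = -0.25, 0.6, 1.3, 2.0

# zeroth-order inversion of the subsystem
c1 = -2.0 * Lam * pl
c4 = Lam * (pl + pm)
c6 = Lam * (1.0 - pm)

lhs = h_from_constants(c1, c4, c6, C)
rhs = h_from_indices(pl, pm, Lam, C)
assert abs(lhs - rhs) < 1e-9
```

The two routes agree to machine precision, confirming the stated form of $h(p_l, p_m, \Lambda)$.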
We stress that this is only one out of many equivalent ways to extract
from the system~\eqref{polymer-BKL-map:system} an asymptotic
condition.
Now, we need three more conditions to derive the Polymer BKL map. One
is provided by the sum of the primed Kasner indices at minus infinity,
which is the same in the standard case~\eqref{kasner:sum} and
in the Polymer case.
Unfortunately, the two conditions~\eqref{bianchi-ii:rel-1}
and~\eqref{bianchi-ii:rel-2} are no longer valid. Instead, one
condition can be derived by noticing that
\begin{gather*}
{q_m}_{\tau\tau} - {q_n}_{\tau\tau} = 0 ~~ \Rightarrow ~~
{q_m}_{\tau} - {q_n}_{\tau} = \text{const} ~~ \Rightarrow ~~\\
({q_m}_{\tau} - {q_n}_{\tau})|_{\tau\rightarrow +\infty} =
({q_m}_{\tau} - {q_n}_{\tau})|_{\tau \rightarrow -\infty} ~~ \Rightarrow ~~\\
\Lambda(p_l - p_m) = \Lambda'(p'_l - p'_m)
\end{gather*}
Lastly, we choose as the fourth condition the sum of the squares of
the Kasner indices at minus
infinity~\eqref{polymer-bianchi-i:sumofsquares-exact}. Because of the
assumption $Q \ll 1$, we will consider here only the connected branch
of~\eqref{polymer-bianchi-i:sumofsquares-exact}. We gather now all the
four conditions and put them in a system:
\begin{equation}\label{polymer-BKL-map:4-conditions}
\begin{dcases}
p'_l + p'_m + p'_n = 1\\
Q (p_l - p_m) = Q' (p'_l - p'_m)\\
Q' \approx (1 + 2 p_l) Q - \tfrac{2}{9} Q^3 p_l \left(
6 + 11p_l + 7{p_l}^2 \right) \\
{p'_l}^2 + {p'_m}^2 + {p'_n}^2 = \frac{1}{3} \left( 1 + \frac{4}{1
+ \sqrt{1 - Q^2 }} \right)
\end{dcases}
\end{equation}
where we have also used definition~\eqref{polymer-bianchi-i:Q}.
Finding the polymer BKL map is now only a matter of solving the
system~\eqref{polymer-BKL-map:4-conditions} for the un-primed
indices. The \emph{Polymer BKL map} at the first order in $\mu^2$ is
then:
\begin{equation}\label{polymer-BKL-map:polymer-BKL-map-kasner}
\begin{split}
p'_l & \approx - \frac{p_l}{1 + 2p_l} - \frac{2}{9} Q^2
\left\lbrack \frac{7 + 14p_l + 9{p_l}^2}{{(1 + 2p_l)}^2}
\right\rbrack\\
p'_m & \approx \frac{2p_l + p_m}{1 + 2p_l} + \frac{2}{9} Q^2 p_l \times \\
& \times \left\lbrack \frac{ -3 + p_l + 9{p_l}^2 + 8{p_l}^3 + p_m
\left( 6 + 11p_l + 7{p_l}^2 \right) }{{(1 + 2p_l)}^2}
\right\rbrack\\
p'_n & \approx \frac{2p_l + p_n}{1 + 2p_l} + \frac{2}{9} Q^2 p_l \times \\
& \times \left\lbrack \frac{ -3 + p_l + 9{p_l}^2 + 8{p_l}^3 + p_n
\left( 6 + 11p_l + 7{p_l}^2 \right) }{{(1 + 2p_l)}^2}
\right\rbrack\\
Q' & \approx (1 + 2 p_l) Q - \tfrac{2}{9} Q^3 p_l \left(
6 + 11p_l + 7{p_l}^2 \right)\\
\end{split}
\end{equation}
where all terms of order $O(Q^4)$ have been neglected. It is worth
noticing that, by taking the limit $\mu \rightarrow 0$, the standard
BKL map~\eqref{bianchi-ii:BKL-map} is immediately recovered. We stress
that the form of the map is not unique: one can use the two Kasner
constraints~\eqref{kasner:sum}
and~\eqref{polymer-bianchi-i:sumofsquares-exact} to ``rearrange'' the
Kasner indices as needed.
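The stated $\mu \rightarrow 0$ limit can be checked directly. The sketch below is our own transcription of the Kasner-index map~\eqref{polymer-BKL-map:polymer-BKL-map-kasner} (not the authors' code); at $Q = 0$ it must coincide with the standard BKL map $p'_l = -p_l/(1+2p_l)$, $p'_{m,n} = (2p_l + p_{m,n})/(1+2p_l)$, and the primed indices must still sum to one.

```python
def polymer_kasner_map(pl, pm, pn, Q):
    """Polymer BKL map on the Kasner indices, first order in Q^2
    (transcribed from the text; pl is the negative index)."""
    d = 1.0 + 2.0 * pl
    g = 6.0 + 11.0 * pl + 7.0 * pl**2
    plp = -pl / d - (2.0 / 9.0) * Q**2 * (7.0 + 14.0 * pl + 9.0 * pl**2) / d**2
    pmp = (2.0 * pl + pm) / d + (2.0 / 9.0) * Q**2 * pl * (
        -3.0 + pl + 9.0 * pl**2 + 8.0 * pl**3 + pm * g) / d**2
    pnp = (2.0 * pl + pn) / d + (2.0 / 9.0) * Q**2 * pl * (
        -3.0 + pl + 9.0 * pl**2 + 8.0 * pl**3 + pn * g) / d**2
    Qp = d * Q - (2.0 / 9.0) * Q**3 * pl * g
    return plp, pmp, pnp, Qp

def kasner(u):
    """Standard Kasner indices in the u-parametrization."""
    n = 1.0 + u + u * u
    return -u / n, (1.0 + u) / n, u * (1.0 + u) / n

# Q = 0: the standard BKL map must be recovered
pl, pm, pn = kasner(2.0)                       # (-2/7, 3/7, 6/7)
plp, pmp, pnp, Qp = polymer_kasner_map(pl, pm, pn, 0.0)
assert abs(plp - (-pl / (1 + 2 * pl))) < 1e-12
assert abs(pmp - ((2 * pl + pm) / (1 + 2 * pl))) < 1e-12
assert abs(plp + pmp + pnp - 1.0) < 1e-12      # first Kasner constraint
assert Qp == 0.0
```

For $u = 2$ this gives $(p'_l, p'_m, p'_n) = (2/3, -1/3, 2/3)$, i.e.\ the standard bounce.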
\subsection{BKL map properties}\label{sec:BKL-map-properties}
First we derive how the Polymer BKL map can be expressed in terms of
the $u$ and $Q$ parameters:
\begin{equation}\label{BKL-map-properties:BKL-map-sketch}
\begin{dcases}
u\\
Q
\end{dcases}
\xrightarrow[\text{map}]{\text{BKL}}
\begin{dcases}
u' = u'(u,Q)\\
Q' = Q'(u,Q)\\
\end{dcases}
\end{equation}
Then we discuss some of its noteworthy properties.
We start by inserting in the polymer BKL
map~\eqref{polymer-BKL-map:polymer-BKL-map-kasner} the connected
branch of the $(u,Q)$
parametrization~\eqref{parametrization:u-Q-param} and requiring the
relations
\begin{equation}
\begin{dcases}
p_l = p_1(u,Q)\\
p_m = p_2(u,Q)\\
p_n = p_3(u,Q)
\end{dcases}
\xrightarrow[\text{map}]{\text{BKL}}
\begin{dcases}
p'_l = p_2(u'(u,Q),Q'(u,Q))\\
p'_m = p_1(u'(u,Q),Q'(u,Q))\\
p'_n = p_3(u'(u,Q),Q'(u,Q))
\end{dcases}
\end{equation}
to be always satisfied. The resulting Polymer BKL map on $(u,Q)$ is
quite complex, so we split it into several terms:
\begin{align*}
A(u) = & ~ 72 u {\left(u^2 + u + 1 \right)}^2 \left( -2 +3u -3u^2 +u^3
\right)\\
B(u) = & ~ -12 - 6 u + 53 u^2 - 119 u^3 + 204 u^4 + \\
& ~ - 187 u^5 + 112 u^6 - 57 u^7 + 3 u^8 \\
C(u) = & ~ 5184 {(1 + u^2 + u^4)}^4\\
D(u) = & ~ 1296 {(1 - u + u^2)}^6 {(1 + u + u^2)}^2\\
E(u) = & ~ 3 {(1 - u + u^2)}^2 \left(63 - 162 u + 663 u^2 - 1350 u^3 + \right. \\
& ~ + 2398 u^4 - 3402 u^5 + 3607 u^6 - 3402 u^7 + \\
& \left. + 2398 u^8 - 1350 u^9 + 663 u^{10} - 162 u^{11} + 63 u^{12} \right)\\
F(u) = & ~ 72 (-1 + u + 3 u^3 + 3 u^5 + u^6 + 2 u^7)\\
G(u) = & ~ -3 + 57 u - 112 u^2 + 187 u^3 - 204 u^4 + \\
& ~ + 119 u^5 - 53 u^6 + 6 u^7 + 12 u^8 \\
H(u) = & ~ -72 {(1 + u^2 + u^4)}^2 \\
L(u) = & ~ 3 \left( 1 + u^2 + u^4 \right)
\end{align*}
that have to be inserted in
\begin{subequations}\label{BKL-map-properties:BKL-map}
\begin{align}
\label{BKL-map-properties:BKL-map-u}
u' & = \frac{ A(u) + Q^2 B(u) + \sqrt{C(u) + Q^2 D(u) + Q^4
E(u)}}{F(u) + Q^2 G(u)} \\
\label{BKL-map-properties:BKL-map-Q}
Q' & = \frac{\sqrt{ H(u) + \sqrt{C(u) + Q^2 D(u) + Q^4 E(u)
}}}{L(u)}
\end{align}
\end{subequations}
where we recall that the domains of definition for $u$ and $Q$ are
$u\in\lbrack 1,\infty)$ and $Q\in[0,1]$. All the square roots and
denominators appearing in~\eqref{BKL-map-properties:BKL-map} are well
behaved (their arguments are always positive and the denominators
non-zero, respectively) for any $(u,Q)$ in these intervals. Although
not evident at first sight, the Polymer BKL map reduces to the standard
one~\eqref{bianchi-ix:BKL-map} if $Q \rightarrow 0$.
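This reduction can be verified numerically. The sketch below is our own transcription of the polynomials $A(u),\dots,L(u)$ and of the map~\eqref{BKL-map-properties:BKL-map}; at $Q = 0$ it reproduces the standard epoch map $u' = u - 1$ with $Q' = 0$, and for small $Q$ the ratio $Q'/Q$ approaches $(1-u+u^2)/(1+u+u^2) = 1 + 2p_l$, consistent with the asymptotic condition $Q' \approx (1+2p_l)Q$.

```python
import math

# Transcription (ours, not the authors' code) of the polynomial
# coefficients and of the (u, Q) form of the Polymer BKL map.
def A(u): return 72*u*(u**2+u+1)**2*(-2+3*u-3*u**2+u**3)
def B(u): return -12-6*u+53*u**2-119*u**3+204*u**4-187*u**5+112*u**6-57*u**7+3*u**8
def C(u): return 5184*(1+u**2+u**4)**4
def D(u): return 1296*(1-u+u**2)**6*(1+u+u**2)**2
def E(u):
    return 3*(1-u+u**2)**2*(63 - 162*u + 663*u**2 - 1350*u**3 + 2398*u**4
                            - 3402*u**5 + 3607*u**6 - 3402*u**7 + 2398*u**8
                            - 1350*u**9 + 663*u**10 - 162*u**11 + 63*u**12)
def F(u): return 72*(-1+u+3*u**3+3*u**5+u**6+2*u**7)
def G(u): return -3+57*u-112*u**2+187*u**3-204*u**4+119*u**5-53*u**6+6*u**7+12*u**8
def H(u): return -72*(1+u**2+u**4)**2
def L(u): return 3*(1+u**2+u**4)

def bkl_map(u, Q):
    root = math.sqrt(C(u) + Q**2*D(u) + Q**4*E(u))
    u_new = (A(u) + Q**2*B(u) + root) / (F(u) + Q**2*G(u))
    # max() guards against a tiny negative from floating-point rounding
    Q_new = math.sqrt(max(H(u) + root, 0.0)) / L(u)
    return u_new, Q_new

# Q = 0: the standard BKL epoch map u -> u - 1 is recovered
u1, Q1 = bkl_map(3.0, 0.0)
assert abs(u1 - 2.0) < 1e-9 and abs(Q1) < 1e-9

# small Q: Q'/Q approaches (1 - u + u^2)/(1 + u + u^2) = 1 + 2*p_l
u, Q = 3.0, 1e-4
_, Qp = bkl_map(u, Q)
assert abs(Qp / Q - (1 - u + u**2) / (1 + u + u**2)) < 1e-4
```

Since $(1-u+u^2)/(1+u+u^2) < 1$ for every $u > 0$, this leading-order ratio already suggests the damping of $Q$ discussed below.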
Now, we study the asymptotic behavior of the Polymer BKL
map~\eqref{BKL-map-properties:BKL-map-u} as $u$ goes to
infinity. This limit is relevant for two reasons. Firstly, in
Sec.~\ref{sec:semiclassical-analysis} we already pointed out that
infinitely many bounces of the $\beta$-point against the potential
walls happen before the singularity is reached. This means that the
Polymer BKL map is to be iterated infinitely many times.
Secondly, we suppose the Polymer BKL map (or at least its $u$
portion~\eqref{BKL-map-properties:BKL-map-u}) to be ergodic, so that
every open set of the parameter space is visited with non-null
probability. This assumption is backed up by the observation that the
Polymer BKL map~\eqref{BKL-map-properties:BKL-map-u} tends
asymptotically to the standard BKL map~\eqref{bianchi-ix:BKL-map}
(which is ergodic), as shown through a numerical simulation in the
following Sec.~\ref{sec:numerical-simulation}.
Hence, every open interval of the $u > 1$ line is eventually visited:
we therefore want to assess how the map behaves for
$u\to\infty$. Physically, a big value of $u$ means a long (in terms of
epochs) Kasner era, which in turn means that the Universe is going
deep inside one of the corners of Fig.~\ref{fig:potential}. As we can
appreciate from Fig.~\ref{BKL-map-properties:u-map}, for
$u\rightarrow\infty$, $u'$ reaches a plateau:
\begin{equation}\label{BKL-map-properties:plateau}
\lim_{u\rightarrow\infty} u'(u,Q) = \frac{24 + Q^2 + \sqrt{3\left(7
Q^4+48 Q^2+192\right)}}{4 Q^2}
\end{equation}
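As a quick illustration (our own evaluation, not from the paper's code), the plateau formula~\eqref{BKL-map-properties:plateau} can be evaluated at $Q = 1/10$, reproducing the value $u' \approx 1200$ quoted in the figure below, and its divergence as $Q$ decreases can be seen directly:

```python
import math

def plateau(Q):
    """Large-u limit of u'(u, Q), Eq. (plateau) in the text."""
    return (24 + Q**2 + math.sqrt(3 * (7*Q**4 + 48*Q**2 + 192))) / (4 * Q**2)

val = plateau(0.1)
assert 1150 < val < 1250        # matches the u' ~ 1200 quoted for Q = 1/10
assert plateau(0.01) > val      # the plateau rises as Q -> 0
```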
\begin{figure}
\begin{center}
\includegraphics[width=0.85\columnwidth]{Fig7.png}
\end{center}
\caption{The Polymer BKL map for
  $u$~\eqref{BKL-map-properties:BKL-map-u} is plotted for
  $Q = 1/10$. A plateau $u' \approx 1200$ is reached at about
  $u \approx 10^4$. This means that for any $u \gtrsim 10^4$ the
  next $u'$ will inevitably be $u' \approx 1200$, no matter how big
  $u$ is.}
\label{BKL-map-properties:u-map}
\end{figure}
The plateau~\eqref{BKL-map-properties:plateau} is higher and steeper
the closer $Q$ is to zero. Physically, this means that there is a
sort of ``centripetal potential'' driving the $\beta$-point
off the corners and towards the center of the triangle. We infer that
this ``centripetal potential'' is somehow linked to the velocity-like
quantity $v(\tau)$ that appears in the Polymer Bianchi II Einstein's
Eqs.~\eqref{polymer-bianchi-ii:einstein-equations}. Because of this
plateau, the mechanism that drives the Universe away from the corner,
implicit in the standard BKL map~\eqref{bianchi-ix:BKL-map}, seems to
be much more efficient in the quantum case. For $Q\rightarrow 0$, the
plateau tends to disappear:
$\lim_{\substack{u\rightarrow\infty\\Q\rightarrow 0}} u'(u,Q) =
\infty$.
As a side note, we recall that it has never been proved analytically
that the standard BKL map remains valid deep inside the corners. As a
matter of fact, the BKL map can be derived analytically only at the
very center of the edges of the triangle of Fig.~\ref{fig:potential},
because those are the only points where the Bianchi IX potential is
exactly equal to the Bianchi II potential. The farther we depart from
the center of the edges, the more the map loses precision. At any
rate, some numerical studies of the Bianchi IX
model~\cite{Moser:1973,Berger:2002} show that the standard BKL map
remains a good approximation even inside the corners.
The analysis of the behavior of the Polymer BKL map for the $Q$
parameter~\eqref{BKL-map-properties:BKL-map-Q} is more
convoluted. Little can be said analytically about the overall behavior
across multiple iterations because of its evident complexity. For this
reason, in Sec.~\ref{sec:numerical-simulation} we discuss the results
of a simple numerical simulation that probes the behavior of the
map~\eqref{BKL-map-properties:BKL-map} over many iterations.
The most important point to check is whether the domain of
definition~\eqref{polymer-bianchi-i:Q-range} for $Q$ is preserved by
the map. We recall that for $Q \approx 1$ the perturbation theory,
by means of which we have derived the map, is no longer valid, so
every result in that range is to be taken with a grain of salt.
Fig.~\ref{BKL-map-properties:threshold} shows the maximum value
$u^*=u^*(Q)$ for which the Polymer BKL map on $Q$ is monotonically
decreasing, i.e.\ for any $Q\in[0,1]$ it shows the corresponding
$u^*$ such that, for $u < u^*$, the Polymer BKL map on $Q$ is
monotonically decreasing. Clearly, the smaller $Q$ is, the larger $u$
can grow before $Q$ stops decreasing and starts increasing.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{Fig8.png}
\end{center}
\caption{For each $Q\in[0,1]$ the corresponding $u^*(Q)$ is shown,
  such that for $u < u^*$ the Polymer BKL map
  on $Q$~\eqref{BKL-map-properties:BKL-map-Q} is monotonically
  decreasing. For a single iteration it can be shown that for every
  $Q < 0.96$ the following $Q' < 1$. The function $u^*(Q)$ cannot be
  given in closed form.}
\label{BKL-map-properties:threshold}
\end{figure}
That said, as soon as we consider the behavior of $u$, and
particularly the plateau of Fig.~\ref{BKL-map-properties:u-map}, we
notice that the Universe cannot spend much time in ``big-$u$
regions'': even if $u$ happens to assume a huge value, it is
immediately dampened to a much smaller value at the next
iteration. The net result is that $Q$ is almost always decreasing, as
is also strongly suggested by the numerical simulation of
Sec.~\ref{sec:numerical-simulation}.
Summarizing, the Polymer BKL map on
$Q$~\eqref{BKL-map-properties:BKL-map-Q}, apart from a small set of
initial conditions in the region $Q \approx 1$ where the perturbation
theory fails, preserves the domain of definition of
$Q$~\eqref{polymer-bianchi-i:Q-range} and is decreasing at almost
every iteration.
\subsection{Numerical simulation}\label{sec:numerical-simulation}
In this Section we present the results of a simple numerical
simulation that unveils some interesting features of the Polymer BKL
map~\eqref{BKL-map-properties:BKL-map}. The main points of this
simulation are very simple:
\begin{itemize}
\item An initial couple of values for $(u,Q)$ is chosen inside the
dominion $u\in[1,\infty)$ and $Q\in[0,0.96]$.
\item We recall from Secs.~\ref{sec:bianchi-ix}
  and~\ref{sec:parametrization} that, at the end of each Kasner era,
  the $u$ parameter becomes smaller than 1 and needs to be remapped to
  values greater than 1; this marks the beginning of a new Kasner
  era. The remapping is performed through the relations listed on
  page~\pageref{parametrization:remapping}. In the standard case,
  because the standard BKL map is just $u \rightarrow u - 1$, the $u$
  parameter can never become smaller than 0. For the
  Polymer Bianchi IX, however, it is possible for $u$ to become
  negative. This is why we have derived several remapping relations,
  covering the whole real line.
\item Many values ($\approx 2^{18}$) of the initial conditions,
  randomly chosen in the interval of definition, were tested. We
  did not observe any ``anomalous behavior'' in any element of the
  sample, meaning that all points converged asymptotically to the
  standard BKL map, as discussed in the following.
\end{itemize}
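A minimal version of such a simulation loop can be sketched as follows (our own re-transcription of the $(u,Q)$ map, not the authors' code). We start from the initial point $(u=10,\,Q=0.24)$ quoted for the figure below and iterate five epochs; since $u$ steps down by roughly one per epoch, $u$ stays well above 1 here and no era-change remapping is needed for this short run.

```python
import math

# Self-contained transcription of the (u, Q) Polymer BKL map
def bkl_map(u, Q):
    A = 72*u*(u**2+u+1)**2*(-2+3*u-3*u**2+u**3)
    B = -12-6*u+53*u**2-119*u**3+204*u**4-187*u**5+112*u**6-57*u**7+3*u**8
    C = 5184*(1+u**2+u**4)**4
    D = 1296*(1-u+u**2)**6*(1+u+u**2)**2
    E = 3*(1-u+u**2)**2*(63 - 162*u + 663*u**2 - 1350*u**3 + 2398*u**4
                         - 3402*u**5 + 3607*u**6 - 3402*u**7 + 2398*u**8
                         - 1350*u**9 + 663*u**10 - 162*u**11 + 63*u**12)
    F = 72*(-1+u+3*u**3+3*u**5+u**6+2*u**7)
    G = -3+57*u-112*u**2+187*u**3-204*u**4+119*u**5-53*u**6+6*u**7+12*u**8
    H = -72*(1+u**2+u**4)**2
    Lp = 3*(1+u**2+u**4)
    root = math.sqrt(C + Q**2*D + Q**4*E)
    return (A + Q**2*B + root)/(F + Q**2*G), math.sqrt(max(H + root, 0.0))/Lp

u, Q = 10.0, 0.24
history = [(u, Q)]
for _ in range(5):              # five Kasner epochs within the first era
    u, Q = bkl_map(u, Q)
    history.append((u, Q))

Qs = [q for _, q in history]
assert all(0.0 < q < 1.0 for q in Qs)           # Q stays inside its domain
assert all(a > b for a, b in zip(Qs, Qs[1:]))   # Q decreases at each step here
assert 4.0 < u < 6.0                            # u has stepped down by ~1/epoch
```

Longer runs, which do reach era changes, additionally require the remapping relations referenced on page~\pageref{parametrization:remapping}.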
One sample of the simulation is displayed in
Fig.~\ref{fig:numerical-simulation:logscale}, using a logarithmic scale
for $Q$ to show how effective the map is in damping high values of
$Q$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Fig9.png}
\end{center}
\caption{The graphical results of one sample of the numerical
  simulation. The Polymer BKL
  map~\eqref{BKL-map-properties:BKL-map} has been evolved starting
  from the initial value $(u=10,Q=0.24)$. Every point is the result
  of a single iteration of the map, whose role is just to take
  one point and map it to the next. A total of 14 Kasner
  eras are shown, each with a different color. The fact that,
  starting from the uppermost point, the map on $Q$ is almost always
  decreasing is evident from the downwards-going, ladder-like
  shape of the eras.}
\label{fig:numerical-simulation:logscale}
\end{figure}
From the results of the numerical simulations the following
conclusions can be drawn:
\begin{itemize}
\item The Polymer BKL map~\eqref{BKL-map-properties:BKL-map} is ``well
  behaved'' for any tested initial condition $u\in[1,\infty)$ and
  $Q\in[0,0.96]$: the domains of definition for $u$ and $Q$ are
  preserved.
\item The line $Q = 0$ is an attractor for the Polymer BKL map: for
  any tested initial condition, the Polymer BKL map eventually evolved
  until it became arbitrarily close to the standard BKL map.
\item The Polymer BKL map on $Q$ is almost everywhere decreasing. It
  can happen, especially for initial values $Q \approx 1$, that the
  map on $Q$ increases for a few iterations, but the overall
  behavior is almost always decreasing. The probability of an
  increasing behavior of $Q$ gets smaller at every iteration and, in
  the limit of infinitely many iterations, goes to zero.
\item Since the Polymer BKL map~\eqref{BKL-map-properties:BKL-map}
tends asymptotically to the standard BKL
map~\eqref{bianchi-ix:BKL-map}, we expect that the notion of chaos
for the standard BKL map, given for example in~\cite{Barrow:1982},
can be applied, with little or no modifications, to the Polymer
case, too (although this has not been proven rigorously).
\end{itemize}
\section{Physical considerations} \label{sec:physical-considerations}
In this section we address two basic questions: the physical link
between the Polymer and Loop quantization methods, and what happens
when all the Minisuperspace variables are discrete.
First of all, we observe that Polymer
Quantum Mechanics is an approach independent from Loop Quantum Gravity.
Using the Polymer procedure is equivalent to implementing
a sort of discretization of the considered configurational variables.
Each variable is treated separately, by introducing a suitable graph
(de facto a one-dimensional lattice structure):
the group of spatial translations on this graph is of $U(1)$ type, and
therefore the natural symmetry group underlying
such a quantization method is $U(1)$ too, differently from Loop Quantum
Gravity, where the basic symmetry group is $SU(2)$.
It is important to stress that Polymer
Quantum Mechanics is not unitarily connected
to standard Quantum Mechanics,
since the Stone--von Neumann theorem does not hold in
the discretized representation.
Even more subtle is the fact that Polymer quantization procedures
applied to different configurational representations of the
same system are, in general, not unitarily related.
This is clear in the zeroth-order WKB limit of
the Polymer quantum dynamics, where
the formulations in two sets of variables,
which are canonically related in the
standard Hamiltonian representation, are
no longer canonically connected in the
Polymer scenario, mainly due to the
non-trivial implications of the prescription
for the momentum operator.
When Loop Quantum Gravity \cite{Cianfrani:2014, Thiemann:2007loop} is applied to
the Primordial Universe, due to the homogeneity
constraint underlying the Minisuperspace
structure, it loses the
morphology of a $SU(2)$ gauge theory
(this point is widely discussed in \cite{Cianfrani:2010ji, Cianfrani:2011wg}) and
the construction of a kinematical Hilbert space, as well as of the geometrical
operators, is performed by an effective,
although rigorous, procedure.
A discrete scale is introduced in the
Holonomy definition, taken on a square
of given size, and then the curvature
term associated to the Ashtekar-Barbero-Immirzi connection (the so-called
Euclidean term of the scalar constraint)
is evaluated on such a square path.
It seems that precisely in this step
Loop Quantum Cosmology acquires the features of a Polymer graph,
associated with an underlying $U(1)$ symmetry.
The real correspondence between the two
approaches emerges in the semi-classical
dynamics of the Loop procedure \cite{Ashtekar:2002}, which is isomorphic
to the zeroth-order WKB limit of the Polymer quantum approach. In this
sense, the Loop Quantum Cosmology studies legitimize the implementation
of the Polymer formalism in the cosmological Minisuperspace.
However, if on the one hand the Polymer
quantum cosmology predictions are,
to some extent, contained in Loop Cosmology, on the other hand the
former is more general, because it is applicable
to a generic configurational representation,
while the latter refers specifically to
the Ashtekar-Barbero-Immirzi connection variable.
Thus, the subtle question arises of which
set of variables is appropriate for the
implementation of the Polymer procedure
to mimic the Loop treatment, as well as
of the physical meaning of the different
Polymer dynamical behaviours in different sets of variables.
In \cite{Mantero:2018cor}
it is argued, for the quantization of the isotropic Universe,
that the sought correspondence
holds only if the cubed scale factor is
adopted as the Polymer variable: in fact, this choice leads to a
critical density of the Universe which is independent of the scale
factor, and a direct link between the Polymer
discretization step and the Immirzi parameter is found.
This result assumes that
the Polymer parameter is kept independent of the scale factor;
otherwise, the correspondence above seems always possible. In this
respect, different choices of the configuration variables when Polymer
quantizing a cosmological system could be
mapped into each other by a suitable
re-definition of the discretization step
as a function of the variables themselves.
Here we apply the Polymer procedure to the
Misner isotropic variable and not to
the cubed scale factor, so that different
issues with respect to Loop Quantum Gravity
can naturally emerge.
The merit of the present choice is that
we discretize the volume of the Universe,
without preventing its vanishing behavior.
This can be regarded as an effective procedure
to include the zero-volume eigenvalue in the system dynamics,
as can happen in Loop Quantum Gravity,
although this is no longer evident in its
cosmological implementation.
Thus, no real contradiction exists between the
present study and the Big-Bounce prediction
of the Loop formulation, since they are
expectedly two independent physical
representations of the same quantum system. As discussed at the
semi-classical level
in \cite{Antonini:2018gdd}, when the
cubed scale factor is used as the isotropic dynamical variable,
the Mixmaster model becomes non-singular
and chaos-free, just as predicted in
the Loop Quantum Cosmology analysis
presented in \cite{Bojowald:2004}.
However, in such a representation, the
vanishing behavior of the Universe volume
is somewhat prevented \emph{a priori}
by the Polymer discretization
procedure.
Finally, we observe that, while in the
present study the Polymer dynamics of
the isotropic variable $\alpha$ preserves
(if not enforces) the Mixmaster chaos,
in \cite{Montani:2013} the Polymer analysis of the anisotropic variables
$\beta_{\pm}$ is associated with the
disappearance of chaos.
This feature is not surprising, since the
two approaches have different physical meanings:
the discretization of $\alpha$ has to do
with geometrical properties of the space-time
(it can be thought of as an embedding variable),
while the implementation of the Polymer
method on $\beta_{\pm}$ really affects
the gravitational degrees of freedom of
the considered cosmological model.
Nonetheless, it now becomes interesting to understand what happens to the
Mixmaster chaotic features when the two
sets of variables ($\alpha$ and $\beta_\pm$) are simultaneously Polymer
quantized.
In the following subsection, we provide an answer to such an intriguing
question, at least on the basis of the semi-classical dynamics.
\subsection{The polymer approach applied to the whole Minisuperspace}
According to the Polymer prescription~\eqref{chiara:polymerSubstitution},
the super-Hamiltonian constraint is now:
\begin{small}
\begin{multline}
\mathcal{H}=\frac{B\kappa}{3(8\pi)^2}e^{-3\alpha}
\left(-\frac{1}{\mu_\alpha^2}\sin^2(\mu_\alpha p_\alpha)+\frac{1}
{\mu^2}\sin^2(\mu p_+)+\right.\\
\left.+\frac{1}{\mu^2}\sin^2(\mu p_-)+\frac{3(4\pi)^4}{\kappa^2}
e^{4\alpha}V_B(\beta_\pm)\right)=0
\end{multline}
\end{small}
where $\mu_\alpha$ is the polymer parameter associated to $\alpha$ and $\mu$ the one associated
to $\beta_\pm$.
The dynamics of the system can be derived through Hamilton's equations
\eqref{chiara:HamiltonEqs}.
As usual, we start by considering the simple Bianchi I case $V_B=0$.
Following the procedure of Sec.~\ref{sec:polymer-bianchi-i-dynamics}, we find that the Universe
is still singular. In fact, the solution to Hamilton's equations in the time gauge~\eqref{polymer-bianchi-i:time-gauge}
(with $\mu_\alpha$ instead of $\mu$) is:
\begin{subequations}
\begin{align}
\alpha(t) &=\frac{1}{3}\ln(t) \label{alpha-vs-t}\\
\beta_\pm(t) &=-\frac{1}{3}\frac{\sin(2\mu p_\pm)}{\sin(2\mu_\alpha p_\alpha)}\ln(t)
\end{align}
\end{subequations}
from which it follows that $\alpha(t)\rightarrow-\infty$ as $t\to0$.
Moreover, the sum of the squared Kasner indices, calculated by performing the inverse canonical transformation
from Misner to ordinary variables, reads:
\begin{small}
\begin{equation}
p_l^2+p_m^2+p_n^2=\frac{1}{3}\left(1+2\frac{\sin^2(2\mu p_+)+\sin^2(2\mu p_-)}{\sin^2(2\mu_\alpha p_\alpha)}\right)
\end{equation}
\end{small}
from which it is possible to derive the anisotropy velocity, according to Eq.~\eqref{kasner-vs-anisotropy-velocity}.
The anisotropy velocity can also be calculated through the ADM reduction
method, as in Sec.~\ref{sec:semiclassical-analysis}.
The new reduced Hamiltonian is:
\begin{equation}
H_{poly}=\frac{1}{\mu_\alpha}\arcsin\left(\sqrt{\frac{\mu_\alpha^2}{\mu^2}
(\sin^2(\mu p_+)+\sin^2(\mu p_-))}\right)
\end{equation}
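As a consistency check (ours, purely illustrative), in the continuum limit $\mu_\alpha,\mu \rightarrow 0$ this reduced Hamiltonian should recover the standard ADM form $\sqrt{p_+^2 + p_-^2}$ of the Bianchi I model:

```python
import math

def H_poly(p_plus, p_minus, mu_alpha, mu):
    """Polymer reduced Hamiltonian, as given in the text."""
    arg = (mu_alpha**2 / mu**2) * (math.sin(mu * p_plus)**2
                                   + math.sin(mu * p_minus)**2)
    return math.asin(math.sqrt(arg)) / mu_alpha

p_plus, p_minus = 0.3, 0.4
h = H_poly(p_plus, p_minus, 1e-6, 1e-6)
# continuum limit: H_poly -> sqrt(p_+^2 + p_-^2) = 0.5 here
assert abs(h - math.hypot(p_plus, p_minus)) < 1e-6
```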
and from Hamilton's equation for $\beta_\pm$:
\begin{small}
\begin{equation}
\beta'_\pm\equiv\frac{d\beta_\pm}{d\alpha}=\frac{\sin(\mu p_\pm)
\cos(\mu p_\pm)}{\sqrt{\left[1-\frac{\mu_\alpha^2}{\mu^2}
(\sin^2(\mu p_+)+\sin^2(\mu p_-))\right](\sin^2(\mu p_+)+\sin^2(\mu p_-))}}
\end{equation}
\end{small}
we derive:
\begin{small}
\begin{multline}\label{beta'-general-case}
\beta'\equiv\sqrt{\beta'^2_++\beta'^2_-}=\\
=\sqrt{\frac{\sin^2(\mu p_+)\cos^2(\mu p_+)+\sin^2(\mu p_-)
\cos^2(\mu p_-)}{\left[1-\frac{\mu_\alpha^2}
{\mu^2}(\sin^2(\mu p_+)+\sin^2(\mu p_-))\right](\sin^2(\mu p_+)+
\sin^2(\mu p_-))}}
\end{multline}
\end{small}
that turns out to be greater than one when $\mu_\alpha\geq\mu$ (see Fig. \ref{fig:anisotropy-velocity}).
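This claim can be probed numerically at a sample point (our own evaluation of Eq.~\eqref{beta'-general-case}, not the authors' code); we also check that the expression tends to the standard anisotropy velocity $\beta' = 1$ in the limit $\mu_\alpha, \mu \rightarrow 0$:

```python
import math

def beta_prime(x_plus, x_minus, r):
    """Anisotropy velocity, Eq. (beta'-general-case), with
    x_pm = mu*p_pm and r = mu_alpha**2 / mu**2."""
    sp2 = math.sin(x_plus)**2
    sm2 = math.sin(x_minus)**2
    S = sp2 + sm2
    num = sp2 * math.cos(x_plus)**2 + sm2 * math.cos(x_minus)**2
    return math.sqrt(num / ((1.0 - r * S) * S))

# sample point with mu_alpha = mu (r = 1): the velocity exceeds 1
assert beta_prime(0.5, 0.3, 1.0) > 1.0
# continuum limit (x_pm -> 0, r = 1): the standard value beta' = 1
assert abs(beta_prime(1e-4, 1e-4, 1.0) - 1.0) < 1e-6
```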
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{Fig10.png}
\end{center}
\caption{Anisotropy velocity \eqref{beta'-general-case} (when $\mu_\alpha=\mu$) as a function of $(\mu p_+,\mu p_-)$
compared with the standard anisotropy velocity $\beta'=1$.}
\label{fig:anisotropy-velocity}
\end{figure}
Finally, it should be noted that, since in the general $V_B\neq0$ case the wall
velocity $\beta'_{wall}$ is not influenced by the introduction
of the Polymer quantization, from Eq.~\eqref{beta'-general-case} we find that
the maximum incident angle for having a bounce
against a given potential wall, $\theta_{max}=\arccos\left(\frac{\beta'_{wall}}{\beta'}\right)$,
is, also in this case, always greater than $\pi/3$ when $\mu_\alpha\geq\mu$.
We can then conclude that, even when
the Polymer approach is applied to all three configuration variables
simultaneously, the Universe can be singular and chaotic, just like the one analyzed above.
\section{Conclusions}\label{sec:conclusions}
In the present study, we analyzed the Mixmaster model in the framework
of the semiclassical Polymer Quantum Mechanics, implemented on the
isotropic Misner variable, according to the idea that the cut-off
physics mainly concerns the Universe volume.
We developed the semiclassical and quantum dynamics in order to
properly characterize the structure of the singularity, which is still
present in the model. The presence of the singularity is essentially
due to the fact that the momentum conjugate to the isotropic variable
is a constant of motion.
On the semiclassical level we studied the system evolution both in the
Hamiltonian and in the field-equations representation, generalizing the
two original analyses in \cite{Misner:1969} and \cite{Belinskii:1970},
respectively. The two approaches are convergent and complementary,
describing the initial singularity of the Mixmaster model as reached
through a chaotic dynamics that is, in principle, more complex than the
General Relativity one, but actually coincides with it in the
asymptotic limit.
This issue is a notable feature, since in Loop Quantum Cosmology the
Bianchi IX model is expected to be chaos-free \cite{Bojowald:2004,
  Bojowald:2004ra}, and it is well known \cite{Ashtekar:2001xp} that
the Polymer semiclassical dynamics closely resembles the average
features of a Loop treatment in the Minisuperspace. However, we stress
that, while the existence of the singularity in Polymer Quantum
Mechanics appears to be a feature depending on the nature of the
adopted configuration variables, the properties of the Poincar\'e map of the
model are expected to be a solid physical issue, independent of the
particular representation adopted for the system.
The canonical quantization of the Mixmaster Universe that we performed
in the full Polymer quantum approach, i.e.\ writing down a
Wheeler-DeWitt equation in the momentum representation according to
the so-called Continuum Limit discussed in \cite{Corichi:2007pr}, is
completely consistent with the semiclassical results. In fact,
Misner's demonstration, in the standard canonical approach, that states
with high occupation numbers can survive the initial singularity
remains valid in the Polymer formulation presented here. This
confirms that the cut-off we introduced in the configuration
space on the isotropic Misner variable does not affect the
cosmological character of the Mixmaster model.
This result appears rather different from the analysis in
\cite{Montani:2013}, where the polymer approach has been addressed for
the anisotropic Misner variables (the real degrees of
freedom of the cosmological gravitational field) with the emergence of
a non-chaotic cosmology. Such a discrepancy suggests that the Polymer
regularization of the asymptotic evolution towards the singularity produces
more profound modifications when it affects physical properties rather
than geometrical features. Actually, the isotropic Misner variable can
be suitably interpreted as a good time variable for the system (an
embedding variable in the language of
\cite{Isham:1985,Isham:1985bis}), while the Universe anisotropies
provide a precise physical information on the considered cosmological
model.
Despite this consideration about the gauge-like nature of the isotropic
Misner variable, which sheds light on the physics of our
dynamical results, we regard it as important to perform
further investigations on the nature of the singularity when other
variables are adopted to characterize the Universe volume, since we
expect that, for some specific choices, the regularization of the
Big-Bang into a Big-Bounce must take place (see for instance \cite{Antonini:2018gdd}). However, even on the
basis of the present analysis, we suggest that the features of the Poincar\'e map of
the Bianchi IX model, and hence of the generic cosmological singularity
(locally mimicked by the same Bianchi IX-like time evolution), are a
very general and robust property of the primordial Universe, not
necessarily connected with the existence of a cut-off physics on the
singularity.
\begin{acknowledgements}
G.P.\ would like to thank Evgeny Grines\footnote{Lobachevsky State
University of Nizhny Novgorod, Department of Control Theory and
Dynamics of Systems} for the valuable assistance in determining some of
the mathematical properties of the Polymer BKL map.
We would also like to thank Eleonora Giovannetti for her contribution to the analysis of Sec.~\ref{sec:physical-considerations}.
\end{acknowledgements}
\section{Introduction}
We are interested in counting (or uniformly sampling) vertices of a polytope defined by linear inequalities $Ax\leq b$. In particular we concentrate on the sort of polytopes that arise in the study of computational optimisation problems: integral polytopes (convex hulls of finite subsets of $\Nset^n$) and especially $0/1$ polytopes (convex hulls of subsets of $\{0,1\}^n$). We assume that instances are presented as systems of linear inequalities.
\begin{description}[itemsep=0pt]
\item [Problem:] \nPolytopeVertices.
\item [Instance:] An $m\times n$ integer matrix $A\in\Zset^{m\times n}$ and a vector $b\in\Zset^m$.
\item [Promise:] The inequalities $Ax\leq b$ define a 0/1 polytope $P$.
\item [Output:] The number of vertices of $P$.
\end{description}
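For very small instances the promise makes exact counting conceptually simple: the vertices of a 0/1 polytope are exactly the 0/1 points satisfying $Ax\leq b$, since any 0/1 point of $P$ is an extreme point of the cube and hence of $P$. The following brute-force sketch (our own illustration, with running time exponential in $n$) makes this concrete:

```python
from itertools import product

def count_01_vertices(A, b):
    """Count vertices of a 0/1 polytope {x : Ax <= b}.

    Relies on the promise: for a 0/1 polytope, the vertices are
    exactly the 0/1 points satisfying the inequalities (any such
    point is extreme in the cube, hence in P).  Exponential time --
    for illustration only.
    """
    n = len(A[0])
    count = 0
    for x in product((0, 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bj
               for row, bj in zip(A, b)):
            count += 1
    return count

# The unit square: x1 <= 1, x2 <= 1, -x1 <= 0, -x2 <= 0.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]
print(count_01_vertices(A, b))  # 4
```

Of course, the point of the paper is that nothing remotely this naive can work in general; the sketch only illustrates the problem statement.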
It is not clear whether it is possible to decide, in polynomial time, whether a given system of linear inequalities defines a 0/1 polytope, or, more generally, an integral polytope. Certainly, it is no easy task. Ding et al.~\cite{DingFengZang} give a proof of co-NP-completeness, but this is for a system of inequalities defining an unbounded polyhedron. In the same paper it is shown, by combining several difficult results, that integrality of the polytope $\{x\in\Rset^n:Ax\leq b, x\geq\bfz\}$ can be decided in polynomial time in the case where $A$ is a 0/1 matrix and $b=\bfo$.
Given the uncertainty surrounding this question, it seems reasonable to add the promise that $P$ is a 0/1 polytope. Note that if we could verify that $P$ is integral in polynomial time, then we could go on to verify that it is a 0/1 polytope in polynomial time: we would just need to check that $P$ is contained in the cube $[0,1]^n$, which can be done by linear programming.
In cases of interest, the matrix $A$ and vector $b$ will not be arbitrary, but will have been introduced with a certain goal in mind. The insights that led to their construction can almost certainly be used to provide an elementary proof of integrality of~$P$. Furthermore, except in Section~\ref{sec:TDI}, the matrix $A$ will be `totally unimodular', and hence will necessarily define an integral polytope. Total unimodularity is decidable in polynomial time using Seymour's decomposition theorem for regular matroids~\cite{SeymourDecomp}: see Truemper~\cite{Truemper}.
Our first observation is that \nPolytopeVertices{} is hard to solve \emph{exactly}. This follows easily by encoding perfect matchings in bipartite graphs as vertices of a $0/1$ polytope. A proof of this easy result can be found at the end of this section.
\begin{proposition}\label{prop:numPcomplete}
\nPolytopeVertices\ is $\numP$-complete.
\end{proposition}
In light of this strong negative result, we naturally turn our attention to approximate counting. Before presenting our results, we briefly review the main definitions and concepts used in the study of approximation algorithms for counting problems. The reader is directed to Dyer, Goldberg, Greenhill and Jerrum~\cite{Relative} for precise definitions and a survey of the wider context.
The standard notion of efficient approximation algorithm in the context of counting problems is the {\it Fully Polynomial Randomised Approximation Scheme}, or FPRAS{}. This is a randomised algorithm that is required to produce a solution within relative error $1\pm\varepsilon$, in time polynomial in the instance size and $\varepsilon^{-1}$.\footnote{By the standard duplication trick, any polynomial approximation ratio can be amplified to a $(1\pm\eps)$-approximation in polynomial time.} The computational complexity of approximate counting problems can be compared through {\it Approximation-Preserving\/} (or AP-) {\it reductions}. These are randomised polynomial-time Turing reductions that preserve (closely enough) the error tolerance. The set of problems that have an FPRAS is closed under AP-reducibility.
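The duplication trick mentioned in the footnote can be sketched concretely. The helper below is our own illustration (the names \texttt{approx\_count} and \texttt{power\_instance} are hypothetical): it assumes an oracle accurate within some ratio $C$, and an instance transformation whose count is the $t$-th power of the original (such as taking $t$ disjoint copies of a graph when counting independent sets); taking the $t$-th root shrinks the ratio to $C^{1/t}\leq 1+\eps$.

```python
import math

def amplify(approx_count, power_instance, instance, C, eps):
    """Duplication trick: `approx_count` returns an estimate within
    ratio C of the true count on any instance; `power_instance(I, t)`
    builds an instance whose count is the t-th power of the count
    of I.  The t-th root of the estimate on the powered instance is
    accurate within ratio C**(1/t) <= 1 + eps."""
    t = math.ceil(math.log(C) / math.log1p(eps))
    return approx_count(power_instance(instance, t)) ** (1.0 / t)

# Toy demo: the "instance" is a number Z whose count is Z itself, and
# the powered instance has count Z**t.  The oracle overestimates by a
# factor of 4; after amplification the error is within a factor 1.1.
Z = 10
est = amplify(lambda z: 4 * z, lambda z, t: z ** t, Z, C=4, eps=0.1)
print(Z <= est <= 1.1 * Z)  # True
```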
Stockmeyer~\cite{Stockmeyer} was the first to produce evidence that approximate counting is of lower computational complexity than exact counting. This key insight was refined by Valiant and Vazirani~\cite[Cor.~3.6]{NPeasy}, who showed that every function in $\numP$ can be approximated (in the FPRAS sense) by a polynomial-time Turing machine with an oracle for an \NP-complete problem. Therefore the strongest negative result we can have for approximate counting is one of \NP-hardness. An example of such a maximally hard problem is $\nIS$, which asks for the number of independent sets of all sizes in a general graph. It follows that the existence of an FPRAS for $\nIS$ would imply $\NP=\RP$. An \NP-hard problem concerning polytopes is given in the next section.
Many counting problems are known to have an FPRAS and many others to be \NP-hard to approximate. An interesting empirical observation is that many of the rest are interreducible via approximation-preserving reductions. One member of this equivalence class is $\nBIS$, which asks for the number of independent sets of all sizes in a bipartite graph.
\begin{description}[itemsep=0pt]
\item [Problem:] \nBIS.
\item [Instance:] A bipartite graph $B$.
\item [Output:] The number of independent sets (of all sizes) in $B$.
\end{description}
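For intuition, $\nBIS$ can be computed naively on tiny instances; the following sketch (our own helper, exponential time) simply tests every vertex subset:

```python
from itertools import combinations

def num_independent_sets(n, edges):
    """Count independent sets of all sizes in a graph on vertices
    0..n-1, by brute force over all 2^n subsets -- illustration
    only; the point of #BIS is that no efficient algorithm (even
    an approximate one) is known."""
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            S = set(subset)
            if all(not (u in S and v in S) for u, v in edges):
                count += 1
    return count

# A single edge: the independent sets are {}, {0}, {1}.
print(num_independent_sets(2, [(0, 1)]))  # 3
```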
After two decades of fairly intensive study, no FPRAS for $\nBIS$ has been discovered, but neither has $\nBIS$ been shown to be \NP-hard to approximate. So showing that a counting problem $\Pi$ is approximation-preserving interreducible with $\nBIS$ can be interpreted as evidence that $\Pi$ does not admit an FPRAS, though the evidence falls short of a demonstration of \NP-hardness. An example involving 0/1 polytopes is presented in Section~\ref{sec:transposenetwork}.
Khachiyan~\cite{Khachiyan} showed that approximately counting vertices of general polytopes presented as linear inequalities is \NP-hard, a result rediscovered by Najt \cite{Najt}. As we just saw, this is the strongest possible demonstration of intractability. However, the vertices of the polytopes employed in the proof have rational coordinates, which, if rescaled to integers, would be exponential in the dimension of the ambient space. Our aim in this work is to find hard examples in small integers.
Ultimately, we would like to characterise the complexity of computing approximate solutions to \nPolytopeVertices. We have not been able to establish the strongest intractability result, which would be a demonstration of \NP-hardness.
The typical approach of showing \NP-hardness for an approximate counting problem is to reduce from a combinatorial optimisation problem.
However, the combinatorial optimisation problem related to a $0/1$ polytope is often tractable, which makes showing \NP-hardness difficult.
Instead, we are able to obtain an \NP-hardness result by relaxing the allowed problem instances from 0/1 polytopes to `half-integral' polytopes whose vertices are elements of $\{0,1,2\}^n$. (More conventionally, these polytopes are scaled down by a factor of 2, so as to have vertices in $\{0,\frac12,1\}^n$.) This negative result relates to a natural family of `perfect 2-matching polytopes' associated with graphs. See Proposition~\ref{prop:P2M-vertices}.
What can be said about the complexity of \nPolytopeVertices{} itself? The most prominent class of matrices defining integer polytopes are the \emph{totally unimodular matrices}. These are matrices~$A$ that have the property that every square submatrix of $A$ has determinant $-1$, 0 or 1. (In particular, the elements of $A$ take values in $\{-1,0,1\}$.) Most 0/1 polytopes arising in combinatorial optimisation arise from totally unimodular matrices. Network matrices and transposes of network matrices are natural subclasses of totally unimodular matrices that will be defined in Section~\ref{sec:transposenetwork}. In some sense, these matrix classes are universal in that every totally unimodular matrix can be built from network matrices, transposes of network matrices, and a certain $5\times5$ matrix.
In Section~\ref{sec:transposenetwork}, we show (Theorem~\ref{thm:BISequiv}) that \nPolytopeVertices, when restricted to transposes of network matrices, is interreducible with $\nBIS$ with respect to approximation-preserving reductions. This locates this special case of the problem as accurately as possible, given the current state of knowledge of the complexity landscape. When restricted to network matrices, \nPolytopeVertices\ appears to become easier, and we identify a subclass of polytopes (Proposition~\ref{prop:network}) for which the vertex counting problem is solvable in the FPRAS sense. We leave it as an open question whether there is an FPRAS for \nPolytopeVertices\ when restricted to network matrices more generally.
In the final section we go beyond totally unimodular matrices. Here, there are fewer examples in the literature, but we do note that at least one naturally occurring class of polytopes, arising from `stable matchings', has been shown to give rise to a vertex counting problem that is interreducible with $\nBIS$. This raises the intriguing possibility that \nPolytopeVertices\ itself is actually equivalent in complexity to $\nBIS$.
The complexities of approximate counting and of (almost) uniform sampling are usually closely related, and this is indeed the case here. For simplicity we concentrate throughout on approximate counting, but the corresponding uniform sampling problems have essentially the same complexity. This follows by general considerations~\cite{JVV} from the fact that \nPolytopeVertices\ and all the restrictions of it considered here are self-reducible. We expand on this remark at the end of Section~\ref{sec:transposenetwork}.
Mihail and Vazirani \cite{MV89} conjectured that the simple random walk on the graph of a $0/1$ polytope is rapidly mixing, which is still open. Note that this conjecture, if true, does not directly imply an FPRAS for \nPolytopeVertices{}, because the degree of the vertices can be exponentially large in the dimension of the ambient space. The stationary distribution of the random walk over graphs of $0/1$ polytopes can be very different from the uniform distribution.
Although we restrict attention to complexity-theoretic results in this work, it should be noted that several authors have studied heuristic approaches, including Avis and Devroye~\cite{AvisDevroye} and Salomone, Vaisman and Kroese~\cite{SalomoneEtAl}. We round off the section with the deferred proof.
\begin{proof}[Proof of Proposition \ref{prop:numPcomplete}]
Membership in $\numP$ is clear.
To demonstrate hardness, we simply show that \nPolytopeVertices\ includes counting perfect matchings in a bipartite graph as a special case. Suppose $G$ is a bipartite graph, and denote the vertex and edge sets of~$G$ by $V(G)$ and $E(G)$. Introduce variables $\{x_{uv}:uv\in E(G)\}$ in 1-1 correspondence with the $m$ edges of~$G$. Consider the $m$-dimensional perfect matching polytope $P_\mathrm{PM}(G)$ of~$G$, whose vertices are in bijection with the perfect matchings of~$G$. Specifically, for each perfect matching $M\subseteq E(G)$ of $G$ there corresponds a vertex of $P_\mathrm{PM}(G)$ given by
$$
x_{uv}=\begin{cases}
1,&\text{if $uv\in M$};\\
0,&\text{otherwise},
\end{cases}
$$
for all $uv\in E(G)$. In the case that $G$ is bipartite, the polytope $P_\mathrm{PM}(G)$ is defined by the following inequalities~\cite[Thm 18.1]{SchrijverA}:
\begin{align*}
0\leq x_{uv}\leq1,&\quad \text{for all $uv\in E(G)$, and}\\
\sum_{v\in V(G):uv\in E(G)} x_{uv}=1,&\quad \text{for all $u\in V(G)$}.
\end{align*}
Thus, counting perfect matchings in a bipartite graph can be reduced to counting vertices of an easily computable and easily described polytope. The result follows from Valiant's classical result that counting perfect matchings in a bipartite graph is $\numP$-complete~\cite{ValPerm}.
\end{proof}
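The reduction can be sanity-checked on tiny graphs: since the perfect matching polytope of a bipartite graph is a 0/1 polytope, its vertices are exactly the feasible 0/1 points of the system above. A brute-force sketch (our own illustration, exponential time):

```python
from itertools import product

def matching_polytope_vertices(left, right, edges):
    """Count the 0/1 points of the perfect matching polytope of a
    bipartite graph: x_e in {0,1} with the sum of x_e over the edges
    at each vertex equal to 1.  These points are exactly the perfect
    matchings, hence the vertices of the polytope."""
    count = 0
    for x in product((0, 1), repeat=len(edges)):
        if all(sum(xe for xe, (a, b) in zip(x, edges)
                   if u in (a, b)) == 1
               for u in left + right):
            count += 1
    return count

# The 4-cycle is bipartite and has exactly 2 perfect matchings.
edges = [("a", "x"), ("x", "b"), ("b", "y"), ("y", "a")]
print(matching_polytope_vertices(["a", "b"], ["x", "y"], edges))  # 2
```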
\section{The perfect 2-matching polytope}
In this section we see that by going a little beyond 0/1 polytopes we can find counting problems that are \NP-hard to approximate.
Given a graph $G$, the \emph{perfect 2-matching polytope} (P2M polytope) is defined by the system of linear inequalities
\begin{align*}
0\leq x_{uv}\leq 2, & \quad\text{for $uv\in E(G)$};\\
\sum_{v:uv\in E(G)} x_{uv} = 2, & \quad\text{for $u\in V(G)$}.
\end{align*}
Let $\Zptm(G)$ be the number of vertices of the P2M polytope associated with $G$.
We will show that approximating $\Zptm(G)$ is \NP-hard.
Recall that an edge cover of a graph is a set $C$ of edges such that any vertex is incident to some edge $e\in C$.
Balinski \cite{Balinski} observed the following characterisation of the vertices of the P2M polytope.
(See also Schrijver~\cite[Cor.~30.2b]{SchrijverA} together with the observation at the end of \cite[\S30.3]{SchrijverA}.)
\begin{proposition} \label{prop:P2M-vertices}
The vertices of the P2M polytope correspond to edge covers consisting of a matching $M$,
with $x_e=2$ for $e\in M$, and vertex-disjoint odd cycles that are also disjoint from $M$,
with $x_e=1$ for each edge $e$ in any of the odd cycles.
\end{proposition}
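Proposition~\ref{prop:P2M-vertices} yields a brute-force vertex counter for small graphs: enumerate the integral points $x\in\{0,1,2\}^m$ satisfying the constraints (feasibility already forces the $x_e=2$ edges to form a matching and the $x_e=1$ edges to form a 2-regular subgraph), and keep those whose $x_e=1$ subgraph consists of odd cycles. A sketch (our own code, exponential time, for illustration only):

```python
from itertools import product

def p2m_vertices(n, edges):
    """Count vertices of the perfect 2-matching polytope of a simple
    graph on vertices 0..n-1, via Balinski's characterisation."""
    count = 0
    for x in product((0, 1, 2), repeat=len(edges)):
        # feasibility: sum of x_e over the edges at each vertex is 2
        if any(sum(xe for xe, (a, b) in zip(x, edges)
                   if v in (a, b)) != 2
               for v in range(n)):
            continue
        # extremality: the x_e = 1 edges (a 2-regular subgraph, by
        # feasibility) must decompose into ODD cycles
        ones = [e for xe, e in zip(x, edges) if xe == 1]
        if all_components_odd_cycles(ones):
            count += 1
    return count

def all_components_odd_cycles(edge_list):
    """Check that every component of the 2-regular subgraph given by
    edge_list is a cycle of odd length."""
    adj = {}
    for a, b in edge_list:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen = set()
    for start in adj:
        if start in seen:
            continue
        # walk around the cycle containing `start`
        length, prev, cur = 0, None, start
        while True:
            seen.add(cur)
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur = cur, nxt
            length += 1
            if cur == start:
                break
        if length % 2 == 0:
            return False
    return True

# K4 has 3 perfect matchings and no odd-cycle cover, so 3 vertices.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(p2m_vertices(4, K4))  # 3
```

For the 4-cycle, the point $(1,1,1,1)$ is feasible but is rejected (its cycle is even): it is the midpoint of the two perfect matchings, not a vertex.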
\begin{theorem} \label{thm:P2M-NP-hard}
There is no FPRAS for $\Zptm(G)$ unless $\NP=\RP$.
\end{theorem}
\begin{proof}
We call edge covers corresponding to the vertices of the P2M polytope, as in \Cref{prop:P2M-vertices}, \emph{P2M covers},
and call edge covers of $G$ consisting of vertex-disjoint odd cycles without the matching \emph{odd cycle covers}. We denote the number of odd cycle covers of~$G$ by $\Zoc(G)$. By definition, the number of P2M covers is $\Zptm(G)$.
First we reduce deciding the existence of a Hamiltonian path between two given vertices in a bipartite graph to deciding the existence of an odd cycle cover in a general graph. The former problem is known to be \NP-complete~\cite{bipartiteHC}. (The given reference treats the Hamilton \emph{cycle} problem, but essentially the same reduction deals with \emph{paths}.)
Let $G$ be a bipartite graph with two distinguished vertices $s$ and~$t$.
We introduce a new vertex~$w$, and add two edges $sw$ and $wt$.
Call the resulting graph~$G'$.
Any odd cycle in $G'$ must include the new vertex~$w$,
and so any odd cycle cover of~$G'$ must consist of a Hamiltonian path in $G$ from $s$ to~$t$, together with the two edges incident at~$w$.
Conversely, any Hamiltonian path in $G$ from $s$ to~$t$ can be extended to an odd cycle cover of~$G'$ by adding edges $sw$ and $wt$.
Thus, deciding the existence of an odd cycle cover of a graph is \NP-complete.
Next we reduce deciding the existence of an odd cycle cover to approximating the number of P2M covers.
For the reduction, we use the gadget in \Cref{fig:hex-gadget} to reduce the contribution from the matching (isolated edges).
Note that the parameter~$\ell$ will be tuned later.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.8, inner sep=1pt, transform shape]
\draw (0,0) node [draw,fill,shape=circle,color=black, label=180:{\Large $u$}] (u) {};
\draw (0.5,0.866) node [draw,fill,shape=circle,color=black] (u1) {};
\draw (1.5,0.866) node [draw,fill,shape=circle,color=black] (u2) {};
\draw (0.5,-0.866) node [draw,fill,shape=circle,color=black] (u1') {};
\draw (1.5,-0.866) node [draw,fill,shape=circle,color=black] (u2') {};
\draw (2,0) node [draw,fill,shape=circle,color=black] (u3) {};
\draw (u) edge [semithick] (u1)
(u1) edge [semithick] (u2)
(u2) edge [semithick] (u3)
(u) edge [semithick] (u1')
(u1') edge [semithick] (u2')
(u2') edge [semithick] (u3);
\draw (2.5,0.866) node [draw,fill,shape=circle,color=black] (u4) {};
\draw (3.5,0.866) node [draw,fill,shape=circle,color=black] (u5) {};
\draw (2.5,-0.866) node [draw,fill,shape=circle,color=black] (u4') {};
\draw (3.5,-0.866) node [draw,fill,shape=circle,color=black] (u5') {};
\draw (4,0) node [draw,fill,shape=circle,color=black] (u6) {};
\draw (u3) edge [semithick] (u4)
(u4) edge [semithick] (u5)
(u5) edge [semithick] (u6)
(u3) edge [semithick] (u4')
(u4') edge [semithick] (u5')
(u5') edge [semithick] (u6);
\draw (5,0) node {\LARGE \dots \dots};
\draw (6,0) node [draw,fill,shape=circle,color=black] (u6') {};
\draw (6.5,0.866) node [draw,fill,shape=circle,color=black] (u7) {};
\draw (7.5,0.866) node [draw,fill,shape=circle,color=black] (u8) {};
\draw (6.5,-0.866) node [draw,fill,shape=circle,color=black] (u7') {};
\draw (7.5,-0.866) node [draw,fill,shape=circle,color=black] (u8') {};
\draw (8,0) node [draw,fill,shape=circle,color=black] (u9) {};
\draw (u6') edge [semithick] (u7)
(u7) edge [semithick] (u8)
(u8) edge [semithick] (u9)
(u6') edge [semithick] (u7')
(u7') edge [semithick] (u8')
(u8') edge [semithick] (u9);
\draw (8.5,0.866) node [draw,fill,shape=circle,color=black] (u10) {};
\draw (9.5,0.866) node [draw,fill,shape=circle,color=black] (u11) {};
\draw (8.5,-0.866) node [draw,fill,shape=circle,color=black] (u10') {};
\draw (9.5,-0.866) node [draw,fill,shape=circle,color=black] (u11') {};
\draw (10,0) node [draw,fill,shape=circle,color=black] (u12) {};
\draw (11,0) node [draw,fill,shape=circle,color=black, label=0:{\Large $v$}] (v) {};
\draw (u9) edge [semithick] (u10)
(u10) edge [semithick] (u11)
(u11) edge [semithick] (u12)
(u9) edge [semithick] (u10')
(u10') edge [semithick] (u11')
(u11') edge [semithick] (u12)
(u12) edge [semithick] (v);
\end{tikzpicture}
\caption{The hexagon gadget, with $2\ell$ hexagons in the middle.}
\label{fig:hex-gadget}
\end{figure}
Let $G$ be an instance for odd cycle covers.
Replace each edge $uv\in E(G)$ with a copy of the gadget, and consider any P2M cover in the resulting graph $G'$.
The possible configurations induced by this cover on the gadget with end points $u$ and $v$ can be partitioned into three types (see \Cref{fig:types-PMU}):
\begin{itemize}
\item Type P[ath]. There is a path from $u$ to $v$ together with some isolated edges.
Note that there are two choices for the path in each hexagon.
\item Type M[atched]. There is no path from $u$ to $v$, but $u$ and $v$ are both covered (by isolated edges).
Note that there are two choices in every other hexagon, starting from the first hexagon (counting from $u$).
\item Type U[nmatched]. There is no path from $u$ to $v$, and $u$ and $v$ are both uncovered.
Note that there are two choices in every other hexagon, starting from the second hexagon (counting from $u$).
\end{itemize}
Note that this list is exhaustive.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.8, inner sep=1pt, transform shape]
\draw (0,0) node [draw,fill,shape=circle,color=black, label=180:{\Large $u$}] (u) {};
\draw (0.5,0.866) node [draw,fill,shape=circle,color=black] (u1) {};
\draw (1.5,0.866) node [draw,fill,shape=circle,color=black] (u2) {};
\draw (0.5,-0.866) node [draw,fill,shape=circle,color=black] (u1') {};
\draw (1.5,-0.866) node [draw,fill,shape=circle,color=black] (u2') {};
\draw (2,0) node [draw,fill,shape=circle,color=black] (u3) {};
\draw (u) edge [thick, red] (u1)
(u1) edge [thick, red] (u2)
(u2) edge [thick, red] (u3)
(u) edge [semithick] (u1')
(u1') edge [thick, red] (u2')
(u2') edge [semithick] (u3);
\draw (2.5,0.866) node [draw,fill,shape=circle,color=black] (u4) {};
\draw (3.5,0.866) node [draw,fill,shape=circle,color=black] (u5) {};
\draw (2.5,-0.866) node [draw,fill,shape=circle,color=black] (u4') {};
\draw (3.5,-0.866) node [draw,fill,shape=circle,color=black] (u5') {};
\draw (4,0) node [draw,fill,shape=circle,color=black] (u6) {};
\draw (u3) edge [thick, red] (u4)
(u4) edge [thick, red] (u5)
(u5) edge [thick, red] (u6)
(u3) edge [semithick] (u4')
(u4') edge [thick, red] (u5')
(u5') edge [semithick] (u6);
\draw (5,0) node {\LARGE \dots \dots};
\draw (6,0) node [draw,fill,shape=circle,color=black] (u6') {};
\draw (6.5,0.866) node [draw,fill,shape=circle,color=black] (u7) {};
\draw (7.5,0.866) node [draw,fill,shape=circle,color=black] (u8) {};
\draw (6.5,-0.866) node [draw,fill,shape=circle,color=black] (u7') {};
\draw (7.5,-0.866) node [draw,fill,shape=circle,color=black] (u8') {};
\draw (8,0) node [draw,fill,shape=circle,color=black] (u9) {};
\draw (u6') edge [semithick] (u7)
(u7) edge [thick, red] (u8)
(u8) edge [semithick] (u9)
(u6') edge [thick, red] (u7')
(u7') edge [thick, red] (u8')
(u8') edge [thick, red] (u9);
\draw (8.5,0.866) node [draw,fill,shape=circle,color=black] (u10) {};
\draw (9.5,0.866) node [draw,fill,shape=circle,color=black] (u11) {};
\draw (8.5,-0.866) node [draw,fill,shape=circle,color=black] (u10') {};
\draw (9.5,-0.866) node [draw,fill,shape=circle,color=black] (u11') {};
\draw (10,0) node [draw,fill,shape=circle,color=black] (u12) {};
\draw (11,0) node [draw,fill,shape=circle,color=black, label=0:{\Large $v$}] (v) {};
\draw (u9) edge [thick, red] (u10)
(u10) edge [thick, red] (u11)
(u11) edge [thick, red] (u12)
(u9) edge [semithick] (u10')
(u10') edge [thick, red] (u11')
(u11') edge [semithick] (u12)
(u12) edge [thick, red] (v);
\draw (5,-1.5) node {Type P};
\begin{scope}[shift = {(0,-4)}]
\draw (0,0) node [draw,fill,shape=circle,color=black, label=180:{\Large $u$}] (u) {};
\draw (0.5,0.866) node [draw,fill,shape=circle,color=black] (u1) {};
\draw (1.5,0.866) node [draw,fill,shape=circle,color=black] (u2) {};
\draw (0.5,-0.866) node [draw,fill,shape=circle,color=black] (u1') {};
\draw (1.5,-0.866) node [draw,fill,shape=circle,color=black] (u2') {};
\draw (2,0) node [draw,fill,shape=circle,color=black] (u3) {};
\draw (u) edge [thick, red] (u1)
(u1) edge [semithick] (u2)
(u2) edge [thick, red] (u3)
(u) edge [semithick] (u1')
(u1') edge [thick, red] (u2')
(u2') edge [semithick] (u3);
\draw (2.5,0.866) node [draw,fill,shape=circle,color=black] (u4) {};
\draw (3.5,0.866) node [draw,fill,shape=circle,color=black] (u5) {};
\draw (2.5,-0.866) node [draw,fill,shape=circle,color=black] (u4') {};
\draw (3.5,-0.866) node [draw,fill,shape=circle,color=black] (u5') {};
\draw (4,0) node [draw,fill,shape=circle,color=black] (u6) {};
\draw (u3) edge [semithick] (u4)
(u4) edge [thick, red] (u5)
(u5) edge [semithick] (u6)
(u3) edge [semithick] (u4')
(u4') edge [thick, red] (u5')
(u5') edge [semithick] (u6);
\draw (5,0) node {\LARGE \dots \dots};
\draw (6,0) node [draw,fill,shape=circle,color=black] (u6') {};
\draw (6.5,0.866) node [draw,fill,shape=circle,color=black] (u7) {};
\draw (7.5,0.866) node [draw,fill,shape=circle,color=black] (u8) {};
\draw (6.5,-0.866) node [draw,fill,shape=circle,color=black] (u7') {};
\draw (7.5,-0.866) node [draw,fill,shape=circle,color=black] (u8') {};
\draw (8,0) node [draw,fill,shape=circle,color=black] (u9) {};
\draw (u6') edge [semithick] (u7)
(u7) edge [thick, red] (u8)
(u8) edge [semithick] (u9)
(u6') edge [thick, red] (u7')
(u7') edge [semithick] (u8')
(u8') edge [thick, red] (u9);
\draw (8.5,0.866) node [draw,fill,shape=circle,color=black] (u10) {};
\draw (9.5,0.866) node [draw,fill,shape=circle,color=black] (u11) {};
\draw (8.5,-0.866) node [draw,fill,shape=circle,color=black] (u10') {};
\draw (9.5,-0.866) node [draw,fill,shape=circle,color=black] (u11') {};
\draw (10,0) node [draw,fill,shape=circle,color=black] (u12) {};
\draw (11,0) node [draw,fill,shape=circle,color=black, label=0:{\Large $v$}] (v) {};
\draw (u9) edge [semithick] (u10)
(u10) edge [thick, red] (u11)
(u11) edge [semithick] (u12)
(u9) edge [semithick] (u10')
(u10') edge [thick, red] (u11')
(u11') edge [semithick] (u12)
(u12) edge [thick, red] (v);
\draw (5,-1.5) node {Type M};
\end{scope}
\begin{scope}[shift = {(0,-8)}]
\draw (0,0) node [draw,fill,shape=circle,color=black, label=180:{\Large $u$}] (u) {};
\draw (0.5,0.866) node [draw,fill,shape=circle,color=black] (u1) {};
\draw (1.5,0.866) node [draw,fill,shape=circle,color=black] (u2) {};
\draw (0.5,-0.866) node [draw,fill,shape=circle,color=black] (u1') {};
\draw (1.5,-0.866) node [draw,fill,shape=circle,color=black] (u2') {};
\draw (2,0) node [draw,fill,shape=circle,color=black] (u3) {};
\draw (u) edge [semithick] (u1)
(u1) edge [thick, red] (u2)
(u2) edge [semithick] (u3)
(u) edge [semithick] (u1')
(u1') edge [thick, red] (u2')
(u2') edge [semithick] (u3);
\draw (2.5,0.866) node [draw,fill,shape=circle,color=black] (u4) {};
\draw (3.5,0.866) node [draw,fill,shape=circle,color=black] (u5) {};
\draw (2.5,-0.866) node [draw,fill,shape=circle,color=black] (u4') {};
\draw (3.5,-0.866) node [draw,fill,shape=circle,color=black] (u5') {};
\draw (4,0) node [draw,fill,shape=circle,color=black] (u6) {};
\draw (u3) edge [semithick] (u4)
(u4) edge [thick, red] (u5)
(u5) edge [semithick] (u6)
(u3) edge [thick, red] (u4')
(u4') edge [semithick] (u5')
(u5') edge [thick, red] (u6);
\draw (5,0) node {\LARGE \dots \dots};
\draw (6,0) node [draw,fill,shape=circle,color=black] (u6') {};
\draw (6.5,0.866) node [draw,fill,shape=circle,color=black] (u7) {};
\draw (7.5,0.866) node [draw,fill,shape=circle,color=black] (u8) {};
\draw (6.5,-0.866) node [draw,fill,shape=circle,color=black] (u7') {};
\draw (7.5,-0.866) node [draw,fill,shape=circle,color=black] (u8') {};
\draw (8,0) node [draw,fill,shape=circle,color=black] (u9) {};
\draw (u6') edge [semithick] (u7)
(u7) edge [thick, red] (u8)
(u8) edge [semithick] (u9)
(u6') edge [semithick] (u7')
(u7') edge [thick, red] (u8')
(u8') edge [semithick] (u9);
\draw (8.5,0.866) node [draw,fill,shape=circle,color=black] (u10) {};
\draw (9.5,0.866) node [draw,fill,shape=circle,color=black] (u11) {};
\draw (8.5,-0.866) node [draw,fill,shape=circle,color=black] (u10') {};
\draw (9.5,-0.866) node [draw,fill,shape=circle,color=black] (u11') {};
\draw (10,0) node [draw,fill,shape=circle,color=black] (u12) {};
\draw (11,0) node [draw,fill,shape=circle,color=black, label=0:{\Large $v$}] (v) {};
\draw (u9) edge [semithick] (u10)
(u10) edge [thick, red] (u11)
(u11) edge [semithick] (u12)
(u9) edge [thick, red] (u10')
(u10') edge [semithick] (u11')
(u11') edge [thick, red] (u12)
(u12) edge [semithick] (v);
\draw (5,-1.5) node {Type U};
\end{scope}
\end{tikzpicture}
\caption{Three types of covers (coloured in red) for the gadget.}
\label{fig:types-PMU}
\end{figure}
If there are $2\ell$ hexagons in the gadget then there are $4^\ell$ configurations of Type~P, $2^\ell$ of Type~M
and $2^\ell$ of Type~U.
Let us return to the P2M cover of $G'$.
The edges in~$G$ corresponding to Type~P configurations in~$G'$ form a collection of disjoint odd cycles in~$G$.
(Note that the path in a Type P configuration is of odd length.)
Moreover, the edges of~$G$ corresponding to Type~M configurations form a collection of isolated edges, which are disjoint from the odd cycles.
Finally, the collection of all odd cycles and isolated edges described above cover all the vertices of~$G$, and hence form a P2M cover of~$G$.
Conversely, any P2M cover in $G$ can be lifted to a P2M cover of $G'$ by choosing a Type~P configuration in~$G'$ for every cycle edge of~$G$,
a Type~M configuration in~$G'$ for every isolated edge in $G$, and a Type~U configuration for every other edge of $G$.
Suppose the P2M cover of $G$ has $k$ cycle edges, and hence $m-k$ other edges, where $m=\abs{E(G)}$.
Then the number of P2M covers in $G'$ that correspond to a particular P2M cover in $G$ is $4^{\ell k}2^{\ell(m-k)}=2^{\ell m}2^{\ell k}$.
Thus, for sufficiently large $\ell$, we expect $\frac{\Zptm(G')}{2^{\ell m}2^{\ell n}}$ to be a good approximation to $\Zoc(G)$.
To be more specific, we will choose $\ell\ge m+2$.
The discussion above implies that $\Zptmk{n}(G') = 2^{\ell m}2^{\ell n}\Zoc(G)$,
where $\Zptmk{k}(G')$ denotes the number of P2M covers of~$G'$ with $k$~edges of~$G$ in a Type~P configuration.
Note that $\Zptm(G')=\sum_{k=0}^{n}\Zptmk{k}(G')$.
Moreover, as $\Zptm(G)\le 2^m$,
\begin{align}\label{eqn:sum-p2m-n-1}
\sum_{k=0}^{n-1}\Zptmk{k}(G') \le 2^m 2^{\ell m} 2^{\ell (n-1)} \le \frac{2^{\ell(m+n)}}{4}.
\end{align}
Suppose we want to decide whether $\Zoc(G)$ is zero or non-zero. Given an FPRAS for $\Zptm(G')$,
we compute $\widetilde{Z}$ such that $\frac12\le \frac{\widetilde{Z}}{\Zptm(G')}\le \frac32$ with probability at least~$\frac34$.
There are two cases.
\begin{itemize}
\item If $\widetilde{Z}\ge 2^{\ell(m+n)-1}$,
then $\Zptm(G')\ge \frac{2^{\ell(m+n)}}{3} > \frac{2^{\ell(m+n)}}{4}$,
which by \eqref{eqn:sum-p2m-n-1} implies that $\Zptmk{n}(G')> 0$ and hence $\Zoc(G)\ge 1$.
\item Otherwise $\widetilde{Z} < 2^{\ell(m+n)-1}$,
and thus $\Zptm(G') < 2\times 2^{\ell(m+n)-1}= 2^{\ell(m+n)}$.
Since $\Zptmk{n}(G')$ is divisible by $2^{\ell(m+n)}$ it follows that $\Zptmk{n}(G')=0$ and $\Zoc(G)=0$.
\end{itemize}
With probability at least $\frac34$ we correctly decide whether $G$ has an odd-cycle cover. Since we have a polynomial-time randomised algorithm with two-sided error for an \NP-complete problem, it follows that $\NP\subseteq\BPP$. It is a standard fact that the latter inclusion implies $\RP=\NP$.
\end{proof}
The `powering' construction (\Cref{fig:hex-gadget}) used here is similar to those used in related contexts by Khachiyan~\cite{Khachiyan} and Najt~\cite{Najt}, and more generally by Jerrum, Valiant and Vazirani~\cite{JVV}. Note that the proof actually shows a technically stronger result, namely that approximating $\Zptm(G)$ within (say) a factor~2 is an \NP-hard problem.
\section{Transposes of network matrices}\label{sec:transposenetwork}
As we noted, one way to specify a 0/1 polytope is by giving a system of linear inequalities $Ax\leq b$ where the matrix~$A$ is totally unimodular. Network matrices and their transposes are interesting subclasses of totally unimodular matrices. In some sense, network matrices and their transposes, together with a certain $5\times 5$ matrix, generate all totally unimodular matrices~\cite{Truemper}. We begin by defining the class of network matrices.
Suppose we have a directed graph $(V,E)$ and a directed tree $(V,T)$ on the same vertex set. (The orientations of the arcs of the tree are arbitrary, and do not necessarily point towards a specific root vertex. It is convenient to allow parallel arcs in the graph $(V,E)$.) Given an arc $e=uv\in E$, let $\Pi$ be the unique path from $u$ to $v$ in $T$. Then, for each tree arc $t\in T$, define the $|T|\times|E|$ matrix~$A$ by
$$
A_{te}=\begin{cases}
+1,&\text{if $t$ occurs in a forward direction in $\Pi$;}\\
-1,&\text{if $t$ occurs in a backward direction in $\Pi$;}\\
0,&\text{otherwise}.
\end{cases}
$$
The matrix $A$ is the \emph{network matrix generated by} $(V,E)$ and $(V,T)$. An integer matrix is said to be a network matrix if it is generated in this way.
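The definition can be made concrete in a few lines; the following sketch (the function names are our own) builds the network matrix from the two arc lists:

```python
def network_matrix(tree_arcs, graph_arcs):
    """Build the network matrix generated by a directed tree (V,T)
    and a directed graph (V,E): rows indexed by tree arcs, columns
    by graph arcs; entry +1 (resp. -1) if the tree arc is traversed
    forwards (resp. backwards) on the tree path from u to v, for the
    graph arc uv; 0 otherwise."""
    # undirected adjacency of the tree, remembering arc orientation
    adj = {}
    for i, (a, b) in enumerate(tree_arcs):
        adj.setdefault(a, []).append((b, i, +1))  # forward traversal
        adj.setdefault(b, []).append((a, i, -1))  # backward traversal

    def tree_path(u, v):
        """DFS for the unique u-v tree path; returns (arc index, sign)."""
        stack = [(u, None, [])]
        while stack:
            node, parent, path = stack.pop()
            if node == v:
                return path
            for nbr, idx, sign in adj.get(node, []):
                if nbr != parent:
                    stack.append((nbr, node, path + [(idx, sign)]))
        return []

    A = [[0] * len(graph_arcs) for _ in tree_arcs]
    for j, (u, v) in enumerate(graph_arcs):
        for idx, sign in tree_path(u, v):
            A[idx][j] = sign
    return A

# Path tree 0 -> 1 -> 2, with graph arcs 0 -> 2 and 2 -> 1.
T = [(0, 1), (1, 2)]
E = [(0, 2), (2, 1)]
print(network_matrix(T, E))  # [[1, 0], [1, -1]]
```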
\begin{proposition}\label{prop:networkimpliesTU}
Any network matrix is totally unimodular.
\end{proposition}
See \cite[Thm 13.20]{SchrijverA} for a proof. Of course the transpose of a network matrix is also totally unimodular. The following easy lemma will be useful. Amongst other things, it tells us that if we have a system of inequalities defined by a network matrix, or the transpose of a network matrix, then we can freely add additional bounds such as $\bfz\leq x\leq\bfo$.
\begin{lemma}\label{lem:preserve}
The property of being a network matrix is preserved under the following operations:
\begin{enumerate}[label=(\alph*)]
\item duplicating a row or a column;
\item negating a row or a column;
\item extending the matrix with a unit row or column (one which has a 1 in a single location and zeros elsewhere).
\end{enumerate}
\end{lemma}
\begin{proof}
Let $A$ be a network matrix defined by the directed graph $(V,E)$ and directed tree $(V,T)$. Recall that rows of $A$ correspond to tree arcs $t\in T$ and columns to graph arcs $e\in E$.
\begin{enumerate}[label=(\alph*)]
\item To duplicate a row indexed by arc $t=uv\in T$, introduce a new vertex $w$ and set $T:=T\cup\{uw,wv\}\setminus\{uv\}$. To duplicate a column indexed by arc $e\in E$, introduce a new arc $e'$ parallel to~$e$.
\item To negate a row indexed by $t$, reverse the direction of arc $t\in T$. To negate a column indexed by~$e$, reverse the direction of arc $e\in E$.
\item To introduce a new row with a 1 in the column indexed by $e=uv\in E$, introduce a new vertex~$w$ and set $E:=E\cup\{uw\}\setminus\{uv\}$ and $T:= T\cup\{vw\}$. To introduce a new column with a 1 in the row indexed by $t=uv\in T$, introduce a new arc $uv$ to~$E$.
\end{enumerate}
It is easy to check that these actions have the desired effect on the matrix~$A$.
\end{proof}
So now consider a 0/1~polytope~$P$ defined by inequalities $Ax\leq b$ where $A$ is the \emph{transpose} of a network matrix.
Let us consider the defining equations of the facets of~$P$ in terms of $(V,E)$ and $(V,T)$.
The arcs in $T$ correspond to variables and those in $E$ to (left-hand sides of) inequalities. Let $uv\in E$ be an arc in the graph. The variables that occur in the corresponding inequality are the ones encountered when tracing out the unique path from $u$ to $v$ in $(V,T)$.
The coefficient of a variable is $+1$ or $-1$ depending on whether the arc in the tree is aligned with or against the direction of the path.
\begin{example}[independent sets in a bipartite graph]\label{ex:BIS}
Let $B=(U\cupdot U',F)$ be a bipartite (undirected) graph,
where $U=\{u_1,\ldots,u_n\}$ and $U'=\{u_1',\ldots,u_m'\}$ are the parts of the bipartition of the vertex set.
We encode the independent sets in $B$ as vertices of a polytope defined by the transpose of a network matrix. As above, we specify this matrix by giving the graph $(V,E)$ and tree $(V,T)$.
Introduce a new vertex $r$, and let $V=U\cup\{r\}\cup U'$ and $T=\{u_ir:1\leq i\leq n\}\cup\{ru_j':1\leq j\leq m\}$.
Let the arc set~$E$ be obtained from~$F$ by simply orienting all edges in~$F$ from $U$ to~$U'$.
Each arc $u_ir\in T$ (respectively, $ru_j'\in T$) corresponds to a variable $x_i$ (respectively, $y_j$).
Consider the network matrix~$A$ defined by $(V,E)$ and $(V,T)$.
Each inequality in
$\transpose{A}\transpose{(x_1,\ldots,x_n,y_1,\ldots,y_m)}\le\bfo$
is of the form $x_i+y_j\leq 1$ for some $u_iu_j'\in F$.
Then we add the inequalities $0\leq x_i,y_j\leq 1$ for every $i\in[n]$ and $j\in[m]$,
which can be done while maintaining the defining matrix to be the transpose of a network matrix,
by Lemma~\ref{lem:preserve}(c).
These inequalities define the independent (or stable) set polytope of the bipartite graph~$B$ \cite[Thm.~19.7]{SchrijverA}. This fact also follows easily from total unimodularity of the system of inequalities, which itself follows from Proposition~\ref{prop:networkimpliesTU}.
\end{example}
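The counting problem behind this example can be made concrete: the vertices of the polytope above are exactly the 0/1 vectors satisfying $x_i+y_j\leq 1$ for every edge, i.e., the independent sets of~$B$. A brute-force sketch (exponential time, for illustration only; the function name is ours):

```python
from itertools import product

def count_bis(n, m, edges):
    """Count the 0/1 solutions of {x_i + y_j <= 1 : (i, j) an edge},
    i.e. the independent sets of a bipartite graph with parts of sizes
    n and m -- the #BIS problem, solved here by brute force."""
    total = 0
    for bits in product((0, 1), repeat=n + m):
        x, y = bits[:n], bits[n:]
        if all(x[i] + y[j] <= 1 for i, j in edges):
            total += 1
    return total
```

For example, `count_bis(2, 2, [(0,0), (0,1), (1,0), (1,1)])` counts the independent sets of $K_{2,2}$: all subsets of one side or the other, 7 in total.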
Our goal in this section is to precisely locate the complexity of \nPolytopeVertices, when the matrix $A$ is the transpose of a network matrix. For the upper bound (which is the non-trivial direction) we use an approximation-preserving reduction to the following problem.
\begin{description}[itemsep=0pt]
\item [Problem.] \npnSAT.
\item [Instance.] A CNF Boolean formula $\varphi$ in which each clause contains at most one positive literal and at most one negative literal.
\item [Output.] The number of satisfying assignments of $\varphi$.
\end{description}
We know that $\npnSAT$ is equivalent to $\nBIS$ under polynomial-time approximation-preserving reductions~\cite[Thm~5]{Relative}.
\begin{theorem}\label{thm:BISequiv}
When $A$ is restricted to be the transpose of a network matrix, the problem \nPolytopeVertices{} is equivalent under polynomial-time approximation-preserving reductions to \nBIS.
\end{theorem}
\begin{proof}
We have just seen in \Cref{ex:BIS} that \nBIS{} is essentially a special case of \nPolytopeVertices, so we just need to describe a polynomial-time approximation-preserving reduction from \nPolytopeVertices\ to $\npnSAT$ in the case that $A$ is the transpose of a network matrix. The reduction exploits a construction from Chen, Dyer, Goldberg, Jerrum, Lu, McQuillan and Richerby~\cite[Lem.\ 46]{ChenEtAl}.
Let the polytope $P$ be an instance of \nPolytopeVertices\ defined by a matrix~$A$ and vector~$b$, where $A$ is the transpose of a network matrix. Suppose in turn that $A$ is specified by the directed tree $(V,T)$ and directed graph $(V,E)$ on the common vertex set~$V$.
Recall that variables are associated with arcs in $T$, thus: $\{x_t:t\in T\}$. Choose an arbitrary root $r\in V$ as reference point. The first step is to make a change of variables. Introduce a new set of variables $\{z_v:v\in V\}$, and define $z_r=0$ and $z_v-z_u=x_t$ for all $t=uv\in T$. Thus
$$
z_v=\pm x_{t_1}\pm x_{t_2}\pm \cdots\pm x_{t_\ell},
$$
where $(t_1,t_2,\ldots,t_\ell)$ is the path from $r$ to $v$ in the tree, and the sign associated with the $i$th term is positive if arc $t_i$ is traversed in the forward direction and negative otherwise. Note that the variables $\{x_t:t\in T\}$ determine the variables $\{z_v:v\in V\}$ and vice versa.
Now consider the inequality defined by an arc $e=uv\in E$:
$$
\sum_{i=1}^\ell\pm x_{t_i}\leq b_e,
$$
where $(t_1,t_2,\ldots,t_\ell)$ is the path from $u$ to $v$ in~$T$, and the signs are determined by the directions of the arcs. When translated to the new set of variables, this inequality simplifies to $z_v-z_u\leq b_e$. (If $w$ is the lowest common ancestor of $u$ and~$v$ in the tree~$T$, then the $x$-variables corresponding to arcs between $r$ and $w$ appear twice in $z_v-z_u$, but they have opposite signs and hence cancel. Also, the $x$-variables corresponding to arcs between $u$ and $w$ are negated, reflecting the fact that the path from $u$ to~$v$ is being traversed in the direction \emph{towards} the root~$r$.) So it just remains to encode the inequalities
\begin{equation}\label{eq:zineqs}
0\leq z_v-z_u\leq1,\>\text{for $uv=t\in T$}\quad\text{and}\quad
z_v-z_u\leq b_e,\>\text{for $uv=e\in E$}
\end{equation}
as clauses within an instance~$\varphi$ of $\npnSAT$.
To do this, introduce Boolean variables $\{\zeta_v^i:v\in V\text{ and }-n<i\leq n\}$ and start to build an instance $\varphi$ on this variable set by introducing clauses
$$
\{\zeta_v^{i+1}\implies \zeta_v^i:v\in V\text{ and }-n<i<n\}.
$$
Note that the clause $\zeta_v^{i+1}\implies \zeta_v^i$ is logically equivalent to $\neg \zeta_v^{i+1}\vee \zeta_v^i$, so has the correct syntactic form. Also note that for each $v\in V$ there are $2n+1$ consistent assignments to the variables $\{\zeta_v^i:-n<i\leq n\}$, namely
$$
(\zeta_v^{-n+1},\zeta_v^{-n+2},\ldots,\zeta_v^{n-1},\zeta_v^n)=\begin{cases}
(0,0,0,\ldots,0,0),\\
(1,0,0,\ldots,0,0),\\
(1,1,0,\ldots,0,0),\\
\qquad\vdots\\
(1,1,1,\ldots,1,0),\\
(1,1,1,\ldots,1,1),
\end{cases}
$$
where we associate false with~0 and true with~1.
We use these Boolean assignments to encode integer assignments in the range $\{-n,\ldots,+n\}$ to~$z_v$ via the correspondence
$$
\zeta_v^i=1\iff i\leq z_v,\quad\text{for all $-n<i\leq n$}.
$$
Note that, by construction, for all $v\in V$,
$$
|z_v|\leq \text{(length of the path in $T$ from $r$ to $v$)}\leq n,
$$
so our encoding covers the feasible range of $z_v$.
It only remains to encode the $z$-inequalities \eqref{eq:zineqs}, together with $z_r=0$, by adding extra clauses to~$\varphi$. Note that every $z$-inequality is of the form $z_v-z_u\leq c$, which is equivalent to the collection of clauses $\{\zeta_v^{i+c}\implies\zeta_u^i:-n<i,i+c\leq n\}$. Finally, the equality $z_r=0$ is equivalent to the conjunction of clauses
$$
\zeta_r^{-n+1},\;\zeta_r^{-n+2},\;\ldots,\;\zeta_r^{-1},\;\zeta_r^0,\;\neg\zeta_r^1,\;\neg\zeta_r^2\;\ldots,\;\neg\zeta_r^{n-1},\;\neg\zeta_r^n.
$$
This completes the construction of the instance $\varphi$ of $\npnSAT$. We see that the number of satisfying assignments to~$\varphi$ is equal to the number of feasible solutions to the $z$-inequalities, which in turn is equal to the number of vertices of the polytope~$P$. The reduction is parsimonious (i.e., preserves the number of solutions) and hence is certainly approximation preserving.
\end{proof}
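The unary encoding in the proof is easy to check mechanically on toy instances. The sketch below (function and variable names are ours) builds the monotonicity clauses $\zeta_v^{i+1}\Rightarrow\zeta_v^i$, the clauses $\zeta_v^{i+c}\Rightarrow\zeta_u^i$ for each inequality $z_v-z_u\leq c$, and the unit clauses pinning $z_r=0$, then counts models by brute force.

```python
from itertools import product

def count_unary_models(V, ineqs, r, n):
    """Brute-force model count of the 1p1n-SAT encoding from the proof:
    Boolean variables zeta[(v, i)] (v in V, -n < i <= n) encode i <= z_v
    in unary; `ineqs` lists triples (u, v, c) meaning z_v - z_u <= c,
    and z_r is pinned to 0 by unit clauses."""
    idx = [(v, i) for v in V for i in range(-n + 1, n + 1)]
    count = 0
    for bits in product((0, 1), repeat=len(idx)):
        zeta = dict(zip(idx, bits))
        # monotonicity clauses zeta_v^{i+1} => zeta_v^i (a => b as a <= b)
        ok = all(zeta[(v, i + 1)] <= zeta[(v, i)]
                 for v in V for i in range(-n + 1, n))
        # inequality clauses zeta_v^{i+c} => zeta_u^i, for i, i+c in range
        ok = ok and all(zeta[(v, i + c)] <= zeta[(u, i)]
                        for (u, v, c) in ineqs
                        for i in range(-n + 1, n + 1)
                        if -n < i + c <= n)
        # unit clauses for z_r = 0
        ok = ok and all(zeta[(r, i)] == (1 if i <= 0 else 0)
                        for i in range(-n + 1, n + 1))
        count += ok
    return count
```

For the two-vertex system $0\leq z_a-z_r\leq 1$ (encoded as the triples $(r,a,1)$ and $(a,r,0)$) the count is 2, matching the two feasible values $z_a\in\{0,1\}$.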
We finish the section by noting that counting problems associated with 0/1 polytopes are self-reducible, in the sense that the set of vertices of a 0/1 polytope~$P$ can be expressed as the union of the vertices of two lower-dimensional polytopes $P_0$ and~$P_1$ obtained by intersection with planes of the form $x_n=0$ and $x_n=1$. For all the counting problems considered here, the polytopes $P_0$ and $P_1$ come from the same class of polytopes as~$P$. This is an easy observation for general 0/1 polytopes, and follows from Lemma~\ref{lem:preserve} for polytopes defined by (transposes of) network matrices. For self-reducible problems, approximate counting and (almost) uniform sampling are related by polynomial-time reductions, as observed by Jerrum, Valiant and Vazirani~\cite{JVV}.
\section{Network matrices}
Now consider the defining inequalities $Ax\leq b$ of a polytope~$P$ when $A$ is a network matrix. Relative to the previous section, the roles of variables and inequalities are reversed. Variables now correspond to arcs in~$E$ and inequalities to arcs in~$T$. Fix a tree arc $t\in T$ and define
\begin{align*}
F_t^+&=\big\{e=uv\in E:\text{the path from $u$ to $v$ passes through $t$ in the forward direction}\big\},\\
F_t^-&=\big\{e=uv\in E:\text{the path from $u$ to $v$ passes through $t$ in the backward direction}\big\},
\end{align*}
where the paths in question are paths in the tree $(V,T)$.
Then the inequality defined by the arc $t$ is
$$
L_t(x)=\sum_{e\in F_t^+}x_e-\sum_{e\in F_t^-}x_e\leq b_t.
$$
We are interested in the polytope defined by the system $\{L_t\leq b_t:t\in T\}$.
\begin{example}[The matching polytope for a bipartite graph]
Given a bipartite graph~$B=(U\cupdot U',F)$ where $\abs{U}=\abs{U'}=n$,
we set up the same directed graph $(V,E)$ and tree $(V,T)$ as in Example~\ref{ex:BIS} for $\nBIS$.
For each edge $u_iu_j'\in F$ of~$B$ we introduce a variable $x_{ij}$ and associate it with the arc $u_iu_j'\in E$.
Now, the inequality associated with arc $u_ir\in T$ (respectively, $ru_j'\in T$) of the directed tree has the form $\sum_{j:ij\in F}x_{ij}\leq c$ (respectively, $\sum_{i:ij\in F}x_{ij}\leq c$).
The defining inequalities of the matching polytope of $B$ are obtained by setting $c=1$ for all edges $u_iu_j'\in F$, and adding the inequalities $x_{ij}\geq0$ for all $i,j\in[n]$. To obtain the perfect matching polytope, we simply include an inequality $\sum_{j:ij\in F}x_{ij}\geq1$ (respectively $\sum_{i:ij\in F}x_{ij}\geq1$) complementary to each inequality $\sum_{j:ij\in F}x_{ij}\leq1$ (respectively $\sum_{i:ij\in F}x_{ij}\leq1$). By Lemma~\ref{lem:preserve}, the matrix~$A$ defining this augmented set of inequalities is still a network matrix. The polytope defined by these inequalities is the (perfect) \emph{matching polytope} of the graph~$B$ \cite[Cor.~18.1b]{SchrijverA}; its vertices correspond to (perfect) matchings in~$B$. This fact also follows easily from total unimodularity of the matrix~$A$.
\end{example}
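As with the independent-set example, the vertex count here can be checked directly on small instances: the vertices of the matching polytope are the 0/1 edge vectors in which every vertex is covered at most once. A brute-force sketch (for illustration only):

```python
from itertools import product

def count_matchings(edges):
    """Count matchings of a bipartite graph, i.e. 0/1 edge vectors in
    which every vertex is covered at most once (the vertices of the
    matching polytope), by brute force over all edge subsets."""
    total = 0
    for bits in product((0, 1), repeat=len(edges)):
        covered = set()
        ok = True
        for b, (u, v) in zip(bits, edges):
            if b:
                # tag the two sides so vertex labels cannot collide
                if ("u", u) in covered or ("v", v) in covered:
                    ok = False
                    break
                covered.update({("u", u), ("v", v)})
        total += ok
    return total
```

For $K_{2,2}$ this returns 7: the empty matching, four single edges, and two perfect matchings.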
Now imagine that, for each arc $e=uv\in E$, we route $f_e$~units of flow from $u$ to~$v$ in the tree~$T$. The total flow through tree arc $t\in T$ is then
$$
\sum_{e\in F_t^+}f_e-\sum_{e\in F_t^-}f_e.
$$
So we can think of the vertex counting problem in terms of integer flows in a network defined on $(V,\overleftarrow{E}\cup T)$. Arcs in $E$ are reversed (denoted $\overleftarrow{E}$) and have a lower bound of~0 and an upper bound of~1. Arcs in $T$ have upper bounds defined by the right-hand sides of the inequalities; thus arc $t\in T$ has an upper bound of~$b_t$. The vertices of the corresponding polytope are in bijection with integer flows in the network. Unfortunately, we don't know an FPRAS for integer flows at this level of generality. However, we do have a positive result for a special case.
\begin{proposition}\label{prop:network}
There is an FPRAS for the following problem: Given a network matrix~$A$, together with a promise that $P=\{x:\bfz\leq Ax\leq\bfo\}$ is a 0/1 polytope, return the number of vertices of~$P$.
\end{proposition}
\begin{proof}
Applying the above translation yields a network in which each arc has capacity~1. In this special case, Jerrum, Sinclair and Vigoda~\cite[Cor.~8.2]{JSV} show that the number of integral flows can be obtained by reduction to counting perfect matchings, for which there is an FPRAS.
\end{proof}
There seems to be no strong reason to doubt that counting flows in networks with more general bounds admits an FPRAS{}. However, it appears that this extension of the known FPRAS for counting perfect matchings would require new ideas. Note that using network matrices we can encode problems such as $b$-matchings and $b$-edge covers in bipartite graphs. A \emph{$b$-matching} (respectively, \emph{$b$-edge cover}) of a graph is an edge subset that covers each vertex at most (respectively, at least) $b$~times.
For these problems, we have efficient approximation algorithms for some~$b$ but not in general: see Huang, Lu and Zhang~\cite{winding}.
\begin{problem}
Is there an FPRAS for \nPolytopeVertices, subject only to the restriction that $A$ is a network matrix?
\end{problem}
\section{Beyond totally unimodular matrices}\label{sec:TDI}
Most integral polytopes arising in combinatorial optimisation come from totally unimodular matrices. In fact, totally unimodular matrices are the only ones with the property that the polyhedron defined by $Ax\leq b$ is integral for all choices of the integral vector~$b$. However, if we consider $A$ and~$b$ together, it can happen that the pair $(A,b)$ defines an integral polytope even when $A$ is not totally unimodular. Since we have not so far discovered any family of $0/1$ polytopes whose vertex-counting problem is harder than $\nBIS$, it is tempting to look beyond total unimodularity.
A known class of integral polytopes arises from `Totally Dual Integral' (TDI) pairs $(A,b)$~\cite[\S5.17]{SchrijverA}. A fascinating example is provided by the stable matching polytope of a bipartite graph, which is defined by a natural system of polynomially many inequalities~\cite{VandeVate}. The defining matrix~$A$ is apparently not totally unimodular, but the linear system $(A,b)$ was shown to be TDI by Kir\'{a}ly and Pap~\cite{KiralyPap}. Intriguingly, Chebolu, Goldberg and Martin~\cite{ChGM} have shown that the problem of counting stable matchings (and hence the vertices of the stable matching polytope) is interreducible with $\nBIS$ via approximation-preserving reductions. So, again, we do not manage to get beyond $\nBIS$. This raises the question of whether the vertex counting problem considered here is $\nBIS$-easy.
\begin{problem}
Is \nPolytopeVertices{} approximation-preserving reducible to $\nBIS$ in general?
\end{problem}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Laughlin wavefunctions
\[
\Psi_p(z_1,\dots , z_N) \propto \prod_{1\leq j < k \leq N } (z_j-z_k)^{p+2} \, \prod_{j=1}^N \exp\left( - \frac{|z_j|^2}{2\ell^2}\right) ,
\]
describe the ground-state properties of highly correlated quantum systems such as quantum Hall systems \cite{PG90,Girvin2005} or rapidly rotating Bose gases \cite{Regnault:2004,Cooper:2008} in a two-dimensional complex geometry $ z_1,\dots, z_N \in \mathbb{C} $. In that context, $ \ell > 0 $ is the magnetic length, which arises naturally in Hall systems through the perpendicular, constant magnetic field. In the case of dilute Bose gases, the rotational velocity takes the role of the magnetic field.
In his seminal paper \cite{PhysRevLett.51.605}, Haldane derived Hamiltonians,
$ W_p = \sum_{1\leq j < k \leq N } w_p(j,k) $,
with non-negative pair interactions $ w_p \geq 0 $, which
have the Laughlin wavefunction with parameter $ p\in \mathbb{N}_0 $
among their zero-energy eigenstates.
These so-called pseudopotentials also effectively describe the excitations above the Laughlin state.
The statistics of the many-particle Hilbert space on which $ W_p $ acts is tied to $ p $: bosonic statistics for $ p $ even and fermionic for $ p $ odd.
The pair potential $ w_p $ projects onto states in the lowest Landau level (LLL) with relative angular momentum at most $ p $, and formally results from an expansion of a radially symmetric pair interaction with respect to relative angular momentum; see~\cite{Trugman:1985lv,Girvin2005,LeePapicThomale:2015}.
A rigorous justification of the emergence of such pair interactions in a scaling limit can be found in \cite{Lewin:2009,seiringer:2020}. In the case $ p = 0 $, which models a rapidly rotating dilute Bose gas and is the guiding example in this paper, $ w_0 \propto \delta $ is just a delta-pair interaction on the LLL.
Haldane pseudopotentials are conjectured to faithfully describe all important features and, in particular, the rigidity of quantum Hall systems or rotating Bose gases \cite{Girvin2005,Cooper:2008,Lewin:2009,RSY:2014,seiringer:2020}. Their zero-energy eigenstates have a maximal filling fraction $\nu(p) = (p+2)^{-1} $. Higher fillings $ \nu $ lead to a ground-state energy which increases with $ \nu $. For the bosonic case $ p = 0 $ in the planar geometry, this results in the Yrast line of ground-state energies as a function of the conserved total angular momentum; see \cite{Regnault:2004,Cooper:2008,Lewin:2009}.
Most importantly, pseudopotentials are conjectured to have a uniform spectral gap above their ground-state space -- a feature which is responsible for the incompressibility of the quantum fluid \cite{Lieb:2019vl,Roug19,NWY:2021} as well as the quantization of the Hall conductance~\cite{hastings:2015,Bachmann:2018lb,Bachmann:2021dp}. The gap is expected to be stable with respect to perturbations and the details of the two-dimensional complex geometry (cf.\ \cite{Haldaneetal2016}).
In this paper, we follow the route taken in~\cite{Rezayi:1994wn,Bergholtz:2005pl,Jansen:2012da,Nakamura:2012bu,NWY:2020,NWY:2021} and simplify matters by changing the geometry and truncating the pseudopotential. The
cylinder geometry has the advantage that its LLL is spanned by an orthonormal basis $\{\psi_x | x\in{\mathbb Z}\}$ with a natural one-dimensional lattice structure.
The spanning one-particle orbitals are given by
\begin{equation}\label{eq:Landauorbital}
\psi_{x}(\xi,\eta) = \sqrt{\frac{\alpha}{2\pi^{3/2}\ell^2}}
\exp\left(i x \frac{\alpha \eta}{\ell}\right) \exp\left(-\frac{1}{2}\left[\frac{\xi}{\ell}-x\alpha\right]^2\right)
\end{equation}
with $\xi \in {\mathbb R}$, $\eta\in[0,2\pi R)$ marking the positions on the cylinder, and $\alpha:=\ell/R$ the ratio of the magnetic length to the cylinder radius $R> 0 $. In terms of the complex coordinates $ z_j := \xi_j + i \alpha \eta_j $ the Laughlin wavefunctions in this geometry take the form
\[
\Psi_p(z_1,\dots , z_N) \propto \prod_{1\leq j < k \leq N } \left(e^{z_j/R} -e^{z_k/R} \right)^{p+2} \, \prod_{j=1}^N \exp\left( - \frac{|\xi_j|^2}{2\ell^2}\right) .
\]
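As a quick numerical sanity check of the prefactor in~\eqref{eq:Landauorbital}, one can verify that each orbital is $L^2$-normalised on the cylinder. Since $|\psi_x|^2$ is independent of $\eta$, the $\eta$-integral contributes a factor $2\pi R$; the $\xi$-integral is a Gaussian. A midpoint-rule sketch (the parameter values are illustrative):

```python
import math

def orbital_norm(x=2, ell=1.0, R=1.5, N=4000):
    """Midpoint-rule check that the Landau orbital psi_x is L^2-normalised:
    integrate |psi_x|^2 over xi (truncated to +-10*ell around the Gaussian
    centre) and eta in [0, 2*pi*R)."""
    alpha = ell / R                       # ratio of magnetic length to radius
    pref = alpha / (2 * math.pi ** 1.5 * ell ** 2)
    centre = x * alpha * ell              # centre of the Gaussian in xi
    lo, hi = centre - 10 * ell, centre + 10 * ell
    dxi = (hi - lo) / N
    xi_integral = sum(
        pref * math.exp(-((lo + (k + 0.5) * dxi) / ell - x * alpha) ** 2)
        for k in range(N)) * dxi
    # |psi_x|^2 does not depend on eta, so the eta-integral gives 2*pi*R
    return xi_integral * 2 * math.pi * R
```

The result is 1 up to quadrature error, consistent with the exact computation $\int|\psi_x|^2 = \alpha R/\ell = 1$.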
The pair interaction of a pseudopotential, which has this Laughlin state in its zero-energy eigenspace, projects onto orbitals with relative coordinates $ |x_j - x_k | \leq p $.
Using the annihilation and creation operators $ a_x $ and $ a_x^* $ of the one-particle orbitals \eqref{eq:Landauorbital}, whose statistics is again determined by $ p \in \mathbb{N}_0 $, the pseudopotential is of the form
\[
W_{p} = \sum_{s\in{\mathbb Z}/2}B_{p,s}^*B_{p,s}, \quad \text{with}\quad B_{p,s} := \sum_{k}{\vphantom{\sum}}' F_p(2k\alpha)a_{s-k}a_{s+k}\quad \mbox{and}\quad F_p(t) := \sum_{0\leq m \leq p } H_m(t) e^{-t^2/4} .
\]
The primed sum is over ${\mathbb Z}$ if $s$ is integer and ${\mathbb Z}+\frac{1}{2}$ otherwise, and the summation in $ F_p $ is over integers $ m $ of the same parity as $ p $. Depending on the parity of $ p $, the real polynomials $ H_m $ result from orthogonalizing the even, respectively odd, monomials in $ \{ 1, t , \dots , t^p \} $ with respect to the natural scalar product induced by the $ k $-sum; for details see~\cite{LeePapicThomale:2015}. In the thin-cylinder limit $ \alpha \to \infty $, $ H_m $ is the $m$-th order Hermite polynomial. We refer to~\cite{Rezayi:1994wn,Lee:2004bs,Jansen:2012da} and in particular \cite{LeePapicThomale:2015} for a more detailed discussion of pseudopotentials in the cylinder geometry.
As a case study of a bosonic problem, we focus on the simplest case $ p = 0 $ in the thin-cylinder limit for which we may take $ H_0(t) = 1 $. Analogous to the $ p = 1 $ fermionic case studied in~\cite{NWY:2020,NWY:2021}, for $ \alpha \to \infty $ it is reasonable
to truncate the summation in $ B_{p,s} $ to its lowest non-trivial order, that is, $|k|\leq 1 $ for $ p= 0 $. This results in a finite-range model which, after changing the prefactor, coincides with the formal Hamiltonian
\begin{equation}\label{eq:Haminf}
\sum_{x} n_x n_{x+1} + \kappa \sum_{x} q_x^* q_x , \quad \text{with}\quad q_x = a_x^2 -\lambda a_{x-1} a_{x+1}, \quad n_x = a_x^* a_x ,
\end{equation}
and $ \kappa = e^{\alpha^2/2}/ 4 $ and $ \lambda = - 2 e^{-\alpha^2} $ as the physical parameters. For the purposes of this work, we more generally allow $\kappa>0$ and $\lambda\in{\mathbb C}\setminus\{0\}$. The aim of this paper is to address the rigidity properties of this truncated bosonic model. In particular, we will establish a uniform spectral gap and a bound on the analogue of the Yrast line in this simplified model.
\subsection{Main results}\label{sec:main}
For the precise mathematical definition of the Hamiltonian analyzed in our results, we restrict the truncated model~\eqref{eq:Haminf} to Landau orbitals~\eqref{eq:Landauorbital} whose center variable $ x \in \mathbb{Z} $ is in an interval $ \Lambda = [ a,b] $.
The bosonic Fock space associated with these Landau orbitals is the closure
\begin{equation}
{\mathcal H}_\Lambda := \overline{\operatorname{span}}\left\{ | \mu \rangle \, | \, \mu \in \mathbb{N}_0^\Lambda \right\}
\end{equation}
of orthonormal vectors associated with occupation numbers $ \mu \in \mathbb{N}_0^\Lambda $ of the single-particle orbitals $ x \in \Lambda $.
We will refer to $ \mu $ as a particle configuration.
The Fock space $ \mathcal{H}_\Lambda $ carries the natural scalar product $ \langle \varphi | \psi \rangle := \sum_{\mu \in \mathbb{N}_0^\Lambda } \overline{\varphi(\mu)} \psi(\mu) $ where $\psi = \sum_{\mu\in{\mathbb N}_0^\Lambda}\psi(\mu)\ket{\mu}$.
The truncated Hamiltonian with open, respectively periodic, boundary conditions then corresponds to the energy form
\[
\langle \psi | H^{\sharp}_\Lambda \psi \rangle = \sum_{\mu \in \mathbb{N}_0^\Lambda } e_\Lambda^\sharp(\mu) |\psi(\mu)|^2 + \kappa \, \sum_{\nu \in \mathbb{N}_0^\Lambda} \sum_{x\in \Lambda^\sharp} \left| (q_x\psi)(\nu) \right|^2 , \quad \sharp \in \{ \textrm{obc} , \textrm{per} \} ,
\]
with $ e_\Lambda^\textrm{obc}(\mu) := \sum_{x=a}^{b-1} \mu_x \mu_{x+1} $, respectively, $ e_\Lambda^\textrm{per}(\mu) := e_\Lambda^\textrm{obc}(\mu) + \mu_b \mu_{a} $, representing the electrostatic energy. The summation for the hopping term extends over $ \Lambda^\textrm{obc} = [a+1,b-1] $ for open boundary conditions and $ \Lambda^\textrm{per} = [a,b] $ for periodic boundary conditions, in which case additions are understood modulo the volume $ |\Lambda| = b-a+1 $.
The hopping operator is defined via
\begin{equation}\label{creation-action}
(q_x\psi)(\nu) := \sqrt{(\nu_x+1)(\nu_x+2)} \ \psi\left((\alpha^*_x)^2 \nu\right) - \lambda \sqrt{(\nu_{x-1}+1)(\nu_{x+1}+1)} \ \psi\left(\alpha^*_{x+1} \alpha^*_{x-1} \nu\right)
\end{equation}
and expressed in terms of the functions $ \alpha^*_x : \mathbb{N}_0^\Lambda \to \mathbb{N}_0^\Lambda $ with $ x \in \Lambda $, which map configurations $ \nu $ to $ \alpha^*_x \nu $ by adding a particle at the site $ x \in \Lambda $, i.e.\ $ \nu_x \to \nu_x+1 $ and the particle numbers at all other sites are unchanged. We will also use $ \alpha_x : \mathbb{N}_0^\Lambda \to \mathbb{N}_0^\Lambda $ for subtracting a particle from the site $ x \in \Lambda $, that is $ \nu_x \to \nu_x - 1 $, provided that $ \nu_x \geq 1 $.
Through the Friedrichs extension theorem, the above non-negative energy forms define (unbounded) self-adjoint operators $ H_\Lambda^\sharp : \mathrm{dom}(H_\Lambda^\sharp) \to {\mathcal H}_\Lambda $. To ease the notation, we will also frequently drop the superscript `$\textrm{obc}$' for open boundary conditions and write $ H_\Lambda \equiv H_\Lambda^\textrm{obc} $. Both Hamiltonians are frustration free as they are sums of non-negative terms with ground-state spaces given by the respective kernels
\[ \mathcal{G}_\Lambda^\sharp := \ker H_\Lambda^\sharp . \]
For open boundary conditions
$ \mathcal{G}_\Lambda \equiv \mathcal{G}_\Lambda^\textrm{obc} $
can easily be seen to be infinite dimensional as, e.g. $H_\Lambda \ket{n0\ldots0}=0$ for all $n\in{\mathbb N}_0$. One of the key results, which is contained in Section~\ref{sec:VMD}, is an explicit characterization of $ \mathcal{G}_\Lambda $ and of $ \mathcal{G}_\Lambda^\textrm{per} $, whose dimension will be shown to grow exponentially with the volume. Building on this, the main aim in this work is to prove that the spectral gaps above the ground states are strictly positive uniformly in the system size $ |\Lambda | $, i.e. for both boundary conditions $ \sharp \in \{ \textrm{obc} , \textrm{per} \} $ there exists $\gamma^\sharp>0$ so that
\begin{equation}
E_1^\sharp({\mathcal H}_\Lambda) := \inf_{0\neq\psi\in \mathrm{dom}(H_\Lambda^\sharp) \cap (\mathcal{G}_{\Lambda}^\sharp)^\perp } \frac{\braket{\psi}{H_\Lambda^\sharp\psi}}{\|\psi\|^2} \geq \gamma^\sharp
\end{equation}
for all intervals $\Lambda$ sufficiently large.
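The frustration-freeness claims above are easy to test numerically on tiny systems. The following brute-force sketch (names and parameter values are ours, not from the paper) evaluates the quadratic form $ \langle \psi | H^{\textrm{obc}}_\Lambda \psi \rangle $ for a wavefunction supported on finitely many configurations, directly from the definitions of $ e_\Lambda^\textrm{obc} $ and~\eqref{creation-action}.

```python
import math

def obc_energy(psi, kappa=1.0, lam=0.5, L=4):
    """Quadratic form <psi|H^obc psi> on an interval of L sites 0,...,L-1.
    psi: dict mapping occupation tuples mu to real coefficients psi(mu)."""
    def get(nu):
        return psi.get(nu, 0.0)

    # electrostatic part: sum_mu e(mu) |psi(mu)|^2
    energy = sum(sum(mu[i] * mu[i + 1] for i in range(L - 1)) * c * c
                 for mu, c in psi.items())

    # configurations nu on which some (q_x psi)(nu) can be nonzero
    relevant = set()
    for mu in psi:
        for x in range(1, L - 1):        # interior sites only (obc)
            if mu[x] >= 2:               # mu = nu + 2 e_x
                nu = list(mu)
                nu[x] -= 2
                relevant.add(tuple(nu))
            if mu[x - 1] >= 1 and mu[x + 1] >= 1:   # mu = nu + e_{x-1} + e_{x+1}
                nu = list(mu)
                nu[x - 1] -= 1
                nu[x + 1] -= 1
                relevant.add(tuple(nu))

    # hopping part: kappa * sum_nu sum_x |(q_x psi)(nu)|^2
    for nu in relevant:
        for x in range(1, L - 1):
            pair = list(nu)
            pair[x] += 2
            diag = list(nu)
            diag[x - 1] += 1
            diag[x + 1] += 1
            q = (math.sqrt((nu[x] + 1) * (nu[x] + 2)) * get(tuple(pair))
                 - lam * math.sqrt((nu[x - 1] + 1) * (nu[x + 1] + 1))
                 * get(tuple(diag)))
            energy += kappa * q * q
    return energy
```

For instance, `obc_energy({(3, 0, 0, 0): 1.0})` returns 0, consistent with $ H_\Lambda \ket{n0\ldots0} = 0 $, whereas the configuration $ \ket{0200} $ has energy $ 2\kappa $.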
We will
estimate this spectral gap in terms of
\begin{equation}
\gamma_\kappa^\textrm{obc}(|\lambda|^2) := \frac{1}{5}\min\left\{ 4\gamma_\kappa^\textrm{per}(|\lambda|^2) , \frac{2\kappa |\lambda|^2}{\kappa+1} \right\}, \quad \gamma_\kappa^\textrm{per}(|\lambda|^2) := \frac{1}{4} \min\left\{1, \frac{2\kappa}{\kappa+1} , \frac{2 \kappa}{1+\kappa|\lambda|^2} \right\} .
\end{equation}
For open boundary conditions the main result is the following:
\begin{theorem}[OBC Spectral Gap]\label{thm:main}
There is a monotone increasing function $ f : [0,\infty) \to [0,\infty) $ such that for all $ 0\neq \lambda \in \mathbb{C} $ with the property $ f(|\lambda|^2/2) < 1/3 $ and all $ \kappa \geq 0 $:
\begin{equation}\label{eq:main}
\inf_{\Lambda \subset \mathbbm{Z}, |\Lambda| \geq 10 } E_1^\textrm{obc}({\mathcal H}_\Lambda) \geq \min\left\{ \gamma_\kappa^\textrm{obc}(|\lambda|^2) , \, \frac{2\kappa}{3} \left( 1 - \sqrt{3 f(|\lambda|^2/2) } \right)^2\right\} .
\end{equation}
\end{theorem}
The proof of this theorem is provided in~Subsection~\ref{sec:eeproofmain}. An explicit expression for $ f $, which is monotone increasing, is stated in Theorem~\ref{thm:tiling_gap}, where we also show that $f(|\lambda|^2/2)<1/3$ for $|\lambda|\leq 7.4$.
For $\kappa>0$ fixed and $|\lambda| \ll 1$, which covers the physical parameter regime, the minimum in~\eqref{eq:main} is attained at $ \gamma_\kappa^{\textrm{obc}}(|\lambda|^2) $, which is of order $\mathcal{O}(|\lambda|^2)$. The bound is sharp in this regime due to the existence of edge states which are discussed in more detail at the end of Subsection~\ref{sec:Electrostatic}.
As an example of such an edge state, consider the two-dimensional space
\[\operatorname{span}\{\ket{20100\ldots0}, \, \ket{1200\ldots0}\}\subseteq {\mathcal H}_\Lambda\]
which is invariant under the action of $H_\Lambda^\textrm{obc}$. Diagonalizing the associated $ 2\times 2 $ matrix yields eigenvalues $E_{\pm} = (\kappa|\lambda|^2+\kappa + 1)(1 \pm \sqrt{1-4\kappa|\lambda|^2/(\kappa|\lambda|^2+\kappa + 1)^2})$, the smallest of which is of order
\begin{equation}\label{ex:edge_energy}
E_- = \frac{2\kappa|\lambda|^2}{\kappa+1} + \mathcal{O}(|\lambda|^4)
\end{equation}
when $|\lambda|\ll 1$.
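The stated eigenvalues can be sanity-checked numerically: $E_-+E_+$ equals twice the trace factor $ \kappa|\lambda|^2+\kappa+1 $, the product is $ 4\kappa|\lambda|^2 $, and $E_-$ approaches $ 2\kappa|\lambda|^2/(\kappa+1) $ as $ |\lambda|\to 0 $. A short check (illustrative values):

```python
import math

def edge_energies(kappa, lam):
    """Eigenvalues E_-, E_+ of the 2x2 edge-state block, as stated in
    the text (lam stands for |lambda|; only its modulus enters)."""
    s = kappa * lam ** 2 + kappa + 1
    root = math.sqrt(1 - 4 * kappa * lam ** 2 / s ** 2)
    return s * (1 - root), s * (1 + root)
```

For $ \kappa=1 $ and $ |\lambda|=0.1 $ one finds $ E_-\approx 0.00998 $, within $ \mathcal{O}(|\lambda|^4) $ of the asymptote $ 2\kappa|\lambda|^2/(\kappa+1)=0.01 $.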
In contrast, the bulk gap is strictly bounded away from zero uniformly for small $ |\lambda | $. This is shown in our second main result.
\begin{theorem}[Bulk Spectral Gap]\label{thm:main2}
There is a monotone increasing function $ f : [0,\infty) \to [0,\infty) $ such that for all $ 0\neq \lambda \in \mathbb{C} $ with the property $ f(|\lambda|^2/2) < 1/3 $ and all $ \kappa \geq 0 $:
\begin{equation}\label{eq:main2}
\liminf_{|\Lambda| \to \infty} E_1^\textrm{per}({\mathcal H}_\Lambda) \geq \min\left\{ \gamma_\kappa^\textrm{per}(|\lambda|^2) , \, \frac{\kappa}{3 (1+|\lambda|^2) } \left( 1 - \sqrt{3 f(|\lambda|^2/2) } \right)^2 \right\} .
\end{equation}
\end{theorem}
The proof of this theorem is given in Subsection~\ref{Sec:ProofMain2}. As explained next in detail, the key to establishing this result is to explicitly decompose the Hilbert space into invariant subspaces, to which different gap-estimating techniques are applied in order to circumvent the edge states. In contrast to~\eqref{eq:main}, the bound~\eqref{eq:main2} survives the limit $ \lambda \to 0 $, in which case the (bulk) spectral gap is explicitly given by $ \min\{ 1, 2 \kappa \} $.
For a more detailed understanding of the bulk excitations, we explore in Subsection~\ref{sec:Excited_States} other invariant subspaces which we conjecture to support the lowest excitations. Their energies, which are of course consistent with~\eqref{eq:main2}, are determined there perturbatively for small $ |\lambda| $. We also include a brief discussion of the many-body scars in this model.
\subsection{Invariant subspaces and the proof strategy}\label{sec:invariant}
Like its fermionic cousin studied in~\cite{NWY:2021}, the bosonic model at hand is not integrable in the sense that there is no extensive number of independent conserved quantities. For the Haldane pseudopotentials, the conserved quantities are the total particle number $ \sum_{x} n_x $ and the center of mass $ \sum_x x n_x $. Nevertheless, one can explicitly determine an extensive number of invariant subspaces. This observation is the foundation of the analysis in this paper. A preview of this was provided above where we
discussed the edge-state example.
We recall from \cite{Birman:1987} that a closed subspace $ \mathcal{V} \subset {\mathcal H} $ is invariant, or equivalently reducing in the case of a self-adjoint operator $ A: \mathrm{dom}(A) \to {\mathcal H} $, if and only if the corresponding orthogonal projection $ P_{\mathcal{V}} $ commutes with the operator,
\[ P_{\mathcal{V}}\, \mathrm{dom}(A) \subseteq \mathrm{dom}(A) \quad\text{and}\quad P_{\mathcal{V}} A \psi = A P_{\mathcal{V}} \psi \ \text{ for all } \psi \in \mathrm{dom}(A) . \]
Since the electrostatic part of the Hamiltonian is diagonal in the configuration basis, we can construct an invariant subspace of $H_\Lambda^\sharp$ by considering the action of the hopping terms on a fixed configuration $\sigma_\Lambda(R)\in{\mathbb N}_0^\Lambda$. Namely, an invariant subspace results from taking the span of all configuration states $\mu\in{\mathbb N}_0^\Lambda$ that have a nonzero inner product with a state of the form $(q_{x_k}^*q_{x_k}\ldots q_{x_1}^*q_{x_1})\ket{\sigma_\Lambda(R)}$ for some $k\geq 1$ and $x_1,\ldots,x_k $.
Similar to the analysis from~\cite{NWY:2021}, a convenient way of labeling the spanning set of configurations is by means of domino tilings of $ \Lambda $, for which the generating configuration $ \sigma_\Lambda(R) $ is characterized by a root tiling $R$.
The exact definition of these lattice tilings is in Section~\ref{sec:VMD}, where we also define and state the key properties of the associated closed, invariant subspaces $ \mathcal{C}_\Lambda \equiv \mathcal{C}_\Lambda^\textrm{obc}$ and $ \mathcal{C}_\Lambda^\textrm{per} $ of all tiling states for open and periodic boundary conditions, respectively; see~\eqref{tiling_spaces} and~\eqref{per_tiling_space}. Most importantly, $\mathcal{C}_\Lambda^\sharp $ for both $ \sharp \in \{ \textrm{obc}, \textrm{per} \} $ contains the ground-state space $ \mathcal{G}_\Lambda^\sharp $ of the respective Hamiltonian $H_\Lambda^\sharp$.
Since both
$$ {\mathcal H}_\Lambda = \mathcal{C}_\Lambda\oplus \mathcal{C}_\Lambda^\perp , \quad\mbox{and}\quad {\mathcal H}_\Lambda = \mathcal{C}_\Lambda^\textrm{per} \oplus \left( \mathcal{C}_\Lambda^\textrm{per}\right) ^\perp $$
constitute orthogonal decompositions of the Hilbert space into closed invariant subspaces of $ H_\Lambda^\textrm{obc} $ and $ H_\Lambda^\textrm{per} $, respectively, and $ \mathcal{G}_\Lambda^\sharp \subset \mathcal{C}_\Lambda^\sharp \subset \mathrm{dom} H_\Lambda^\sharp $, the spectral gap of $H_\Lambda^\sharp $ can be realized as
\begin{equation}\label{eq:twoway}
E_1^\sharp({\mathcal H}_\Lambda) = \min\left\{ E_1^\sharp(\mathcal{C}_\Lambda^\sharp) , E_0^\sharp\left( \big(\mathcal{C}_\Lambda^\sharp\big)^\perp\right)\right\} , \quad \sharp \in \{ \textrm{obc}, \textrm{per} \} ,
\end{equation}
where $E_1^\sharp(\mathcal{C}_\Lambda^\sharp)$ is the spectral gap of $ H_\Lambda^\sharp $ restricted to $ \mathcal{C}_\Lambda^\sharp$, and $E_0^\sharp\left( \big( \mathcal{C}_\Lambda^\sharp\big) ^\perp \right)$ is the ground-state energy of $ H_\Lambda^\sharp $ in the orthogonal subspace $ \big( \mathcal{C}_\Lambda^\sharp\big) ^\perp $, i.e.
\begin{equation}\label{eq:gapabbrev}
E_1^\sharp(\mathcal{C}_\Lambda^\sharp) := \inf_{0\neq\psi\in \mathcal{C}_\Lambda^\sharp\cap \left(\mathcal{G}_\Lambda^\sharp\right)^\perp} \frac{\braket{\psi}{H_\Lambda^\sharp\psi}}{\|\psi\|^2}, \qquad
E_0^\sharp\left(\big(\mathcal{C}_\Lambda^\sharp\big)^\perp\right) := \inf_{0\neq\eta\in \left(\mathcal{C}_\Lambda^\sharp\right)^\perp\cap\mathrm{dom}(H_\Lambda^\sharp)} \frac{\braket{\eta}{H_\Lambda^\sharp\eta}}{\|\eta\|^2} .
\end{equation}
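The two-way decomposition \eqref{eq:twoway} can be seen as follows. Since the orthogonal projection onto $ \mathcal{C}_\Lambda^\sharp $ commutes with $ H_\Lambda^\sharp $, the spectrum decomposes as
\[
\operatorname{spec} \big( H_\Lambda^\sharp \big) = \operatorname{spec} \big( H_\Lambda^\sharp \restriction_{\mathcal{C}_\Lambda^\sharp} \big) \cup \operatorname{spec} \big( H_\Lambda^\sharp \restriction_{(\mathcal{C}_\Lambda^\sharp)^\perp} \big) .
\]
As the ground-state space $ \mathcal{G}_\Lambda^\sharp $ is contained in $ \mathcal{C}_\Lambda^\sharp $, the lowest spectral point contributed by the orthogonal complement is $ E_0^\sharp\big( \big(\mathcal{C}_\Lambda^\sharp\big)^\perp \big) $, while the lowest spectral point above zero contributed by $ \mathcal{C}_\Lambda^\sharp $ is $ E_1^\sharp(\mathcal{C}_\Lambda^\sharp) $; the spectral gap of the full Hamiltonian is therefore the minimum of these two quantities.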
We employ different strategies to lower bound these energies uniformly in $ \Lambda $:
\begin{enumerate}\itemsep0ex
\item The martingale method for a bound on $ E_1^\textrm{obc}(\mathcal{C}_\Lambda) $ (cf.~Section~\ref{sec:MM}).
\item A finite-volume condition, which lower bounds the periodic gap $ E_1^\textrm{per}(\mathcal{C}_\Lambda^\textrm{per}) $ in terms of the spectral gap $ E_1^\textrm{obc}(\mathcal{C}_\Lambda^\infty) $ for open boundary conditions restricted to the subspace of bulk tilings $ \mathcal{C}_\Lambda^\infty \subset \mathcal{C}_\Lambda $ (cf.~\eqref{def:IVBVMD} and~Section~\ref{sec:UBG}).
\item Electrostatic estimates for bounds on $ E_0^\textrm{obc}(\mathcal{C}_\Lambda^\perp) $ and $ E_0^\textrm{per}\left(\big(\mathcal{C}_\Lambda^\textrm{per}\big)^\perp\right) $, which also relate to the Yrast line mentioned in the introduction (cf.~Theorems~\ref{thm:perp_bound} and~\ref{thm:electro2}, and Proposition~\ref{prop:Yrast}).
\end{enumerate}
Before delving into the details, let us put these strategies in context.
The martingale method and finite-volume criteria \cite{affleck:1988,knabe:1988,nachtergaele:1996,Gosset:2016hy,lemm:2018,Lemm:2019qh,Kastoryano:2019ja,abdul-rahman:2020} have previously only been developed for and applied to quantum spin or lattice fermion systems, for which the dimension of the finite-volume Hilbert space is finite. For our lattice bosons, the dimension of $ {\mathcal H}_\Lambda $ and even of $ \mathcal{C}_\Lambda $ is infinite. This does not merely require technical amendments of the method, but poses the additional problem that the method's induction hypothesis, namely the existence of a positive spectral gap for any finite-volume Hamiltonian, does not a priori hold. For the present model, we solve this by showing that $ E_1^\textrm{obc}(\mathcal{C}_\Lambda) $ is realized on the finite-dimensional invariant subspace of bulk tilings $ \mathcal{C}_\Lambda^\infty \subset \mathcal{C}_\Lambda $; see~\eqref{def:IVBVMD} and Theorem~\ref{thm:gap_reduction}.
Similar to, but in fact more severely than, its fermionic cousin studied in~\cite{NWY:2021}, the present model has an abundance of low-energy edge states for the Hamiltonian $ H_\Lambda $ with open boundary conditions in comparison to the bulk Hamiltonian $ H_\Lambda^\textrm{per} $. In this situation, it is a well-recognized hard problem to rigorously establish a bulk gap which does not scale with the energy of edge states.
This stems from the fact that the known proof strategies, the martingale method and finite-volume criteria, involve finite-volume Hamiltonians with open boundary conditions.
We solve this problem by restricting these proof techniques a priori to invariant subspaces $\mathcal{C}_\Lambda^\textrm{per} \subset \mathcal{C}_\Lambda^\infty $, which project out the edge states. This novel twist on these methods is provided here for the truncated bosonic Haldane pseudopotential. It is, however, equally applicable to the fermionic model, where it would also prove a bulk gap which is stable for small $ |\lambda| $, thereby improving~\cite[Theorem~1.2]{NWY:2021}.
Hence, despite many similarities to the fermionic case of the truncated $\nu = 1/3$ Haldane pseudopotential studied in \cite{NWY:2021}, beyond modifying and streamlining the proof of that result, the analysis in the present paper tackles three additional challenges -- the adaptation of the martingale method through a reduction to finite-dimensional subspaces, electrostatic estimates, and a proof of a bulk gap that circumvents edge states by customizing the gap techniques to appropriate invariant subspaces.
\section{Tilings and their state spaces}\label{sec:VMD}
The goal of this section is to identify invariant subspaces $\mathcal{C}_\Lambda$ and $ \mathcal{C}_\Lambda^\textrm{per} $ that contain the ground-state space $ \mathcal{G}_\Lambda = \ker H_\Lambda$ and $ \mathcal{G}_\Lambda^\textrm{per} = \ker H_\Lambda^\textrm{per}$ respectively. These subspaces will be constructed as a direct sum of invariant subspaces $\mathcal{C}_\Lambda^\sharp(R)$ each of which supports a unique ground state and is spanned by a finite subset of the orthonormal occupation basis $ \{ \ket{\mu} : \mu\in{\mathbb N}_0^\Lambda\} $ of $ {\mathcal H}_\Lambda $. Each of the chosen occupation states is described by a domino-tiling of the lattice, where the values of each domino indicate the occupation numbers of the covered sites.
To motivate the definition of these tiles, recall that as the Hamiltonian is frustration-free, the ground state space is the set of vectors that simultaneously minimize the energy of all interaction terms:
\[
\ker(H_\Lambda) = \bigcap_{x=a}^{b-1}\ker(n_xn_{x+1}) \cap \bigcap_{x=a+1}^{b-1}\ker(q_x), \qquad \ker(H_\Lambda^\textrm{per}) = \bigcap_{x=a}^{b}\left( \ker(n_xn_{x+1}) \cap \ker(q_x) \right) .
\]
Every particle configuration $ \ket{\mu }$ gives rise to an electrostatic energy and is a ground state of these terms if and only if $\mu_x\mu_{x+1}=0$ for all $x$. The operator $q_x$ acts nontrivially on the sites $ \{ x-1,x,x+1\} $, and satisfies the equation
\begin{equation}\label{dipole_gs}
q_x\ket{101} = -\frac{\lambda}{\sqrt{2}} q_x\ket{020}.
\end{equation}
Therefore, starting from a configuration of 1's and 0's that is a ground state of the electrostatic terms, a ground state of the hopping terms $\sum_{x} q_x^*q_x$ in either case of boundary conditions can be constructed by summing over the set of all configurations obtained from replacing sequences (101) with (020) and appropriately scaling. A relation similar to \eqref{dipole_gs} holds if either the first or third site in the configuration on the LHS contains more than one particle, or if the middle site on the RHS of \eqref{dipole_gs} contains more than two particles.
In those cases, however, the action of $ q_x $ results in a configuration with positive electrostatic energy, and thus such configurations cannot contribute to a ground state. This indicates that the bulk of a ground state can be at most half-filled.
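As a minimal illustration, consider three sites and the vector
\[
\psi = \ket{101} + \frac{\lambda}{\sqrt{2}} \ket{020} .
\]
Both configurations lie in the kernel of the electrostatic terms $ n_x n_{x+1} $, and \eqref{dipole_gs} yields
\[
q_2 \psi = q_2 \ket{101} + \frac{\lambda}{\sqrt{2}}\, q_2 \ket{020} = -\frac{\lambda}{\sqrt{2}}\, q_2 \ket{020} + \frac{\lambda}{\sqrt{2}}\, q_2 \ket{020} = 0 .
\]
Thus $ \psi $ is annihilated by all interaction terms, i.e., it is a frustration-free ground state obtained by a single replacement $(101) \to (020)$ with weight $ \lambda/\sqrt{2} $.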
Of course, there are other configurations that satisfy the electrostatic ground-state condition $ e_\Lambda^\sharp(\mu) = 0 $. For example, any configuration with at most one particle is automatically in the kernel of $q_x$.
With these observations in mind, we now turn to defining Void-Monomer-Dimer (VMD) tilings. Since \eqref{dipole_gs} must be satisfied at every site $x$ in either the thermodynamic limit $\Lambda \uparrow {\mathbb Z}$ or on $ \Lambda $ in the periodic geometry, we first define three \emph{bulk tiles}:
\begin{enumerate}
\item a void $V=(0)$, which covers a single site and contains no particles,
\item a monomer $M=(10)$, which covers two sites and contains a single particle on the first site, and
\item a dimer $D=(0200)$, which covers four sites and contains exactly two particles, located on the second site.
\end{enumerate}
A \emph{VMD tiling} of ${\mathbb Z}$ is any tiling of the entire lattice by these three tiles. Similarly, a \emph{periodic VMD tiling} of a finite volume $\Lambda=[a,b]$ with periodic boundary conditions is any covering of the ring by these tiles.
\subsection{BVMD tilings for open boundary conditions}\label{subsec:tilings}
To describe the ground state for open boundary conditions, we need additional \emph{boundary tiles} to account for possible edge configurations that can support the ground states. One way to obtain such tiles is to consider the set of truncated tiles created by restricting a VMD-tiling of $ {\mathbb Z} $ to $\Lambda$. We discard truncations that produce tiles with no particles, as these can be equivalently constructed using voids. This produces the following set of boundary tiles, which we refer to as \emph{${\mathbb Z}$-induced boundary tiles}:
\begin{enumerate}
\item \emph{On the left boundary:} a truncated dimer $B_2^l=(200)$ which covers three sites and contains two particles on the first site.
\item \emph{On the right boundary:}
\begin{enumerate}
\item a truncated monomer $M^{(1)}= (1)$, which covers one site and contains one particle,
\item a truncated dimer $B_2^r=(02)$, which covers two sites and contains two particles on the second site, and
\item a truncated dimer $D^{(1)}=(020)$, which covers three sites and contains two particles on the second site.
\end{enumerate}
\end{enumerate}
To account for the full ground state of $ H_\Lambda $, two additional types of boundary tiles are found from the following observation: if $\mu\in {\mathbb N}_0^\Lambda$ is of the form
\[
\ket{\mu} = \ket{\mu_a00}\otimes\ket{\mu'}\otimes\ket{0\mu_b}
\]
where $\mu'\in\{0,1,2\}^{|\Lambda|-5}$ is a particle configuration obtained from a tiling of $[a+3,b-2]$ by bulk tiles (see \eqref{config}) then
\begin{equation}\label{artificial_action}
H_{[a,b]}\ket{\mu} = \ket{\mu_a00}\otimes H_{[a+3,b-2]}\ket{\mu'}\otimes\ket{0\mu_b}.
\end{equation}
A similar statement holds if $\ket{\mu} = \ket{\mu_a00}\otimes\ket{\mu^b}$ or $\ket{\mu} = \ket{\mu^a}\otimes\ket{0\mu_b}$ where $\mu^b$, resp.\ $\mu^a$, is the particle configuration associated to a VMD-tiling by bulk and right, resp.\ left, ${\mathbb Z}$-induced boundary tiles. Thus, we introduce the following \emph{non-${\mathbb Z}$-induced boundary tiles} for $n\geq 3$:
\begin{enumerate}
\item \emph{On the left boundary:} $B_n^l = (n00)$ covering three sites with $n$ particles on the first site.
\item \emph{On the right boundary:} $B_n^r = (0n)$ covering two sites with $n$ particles on the second site.
\end{enumerate}
The fact that an edge site of a ground state of $ H_\Lambda $ can hold an arbitrary number of particles is a consequence of the lack of hopping at the boundary. As indicated by \eqref{dipole_gs}, the interior sites of a ground state can hold at most two particles, and so this completes the set of boundary tiles.
A \emph{BVMD-tiling} of $\Lambda$ is then defined as any ordered covering of $\Lambda$
\begin{equation}
T = (T_1, \, T_2, \ldots, \, T_k) \in\mathcal{T}_\Lambda
\end{equation}
where each $T_i$ is one of the tiles defined above and only $T_1$, resp. $T_k$, can belong to the set of left, resp.\ right, boundary tiles. The number of tiles $k$ in a tiling of $\Lambda$ can vary since tiles have different lengths. The set of all BVMD-tilings of $ \Lambda $ will be abbreviated by $ \mathcal{T}_\Lambda $.
Motivated by~\eqref{dipole_gs} we define two \emph{substitution rules} that allow us to create a new tiling $T'$ from a fixed tiling $T$ by replacing two neighboring monomers by a dimer or vice-versa. Pictorially, these are represented by
\begin{equation}\label{eq:replacement}
(0200) \leftrightarrow (10)(10), \qquad (020) \leftrightarrow (10)(1),
\end{equation}
in which we exchange a bulk dimer $D$ with two bulk monomers, or a truncated right dimer $D^{(1)}$ with a bulk monomer and truncated monomer. These rules induce an equivalence relation ``$\leftrightarrow$'' on the set of BVMD tilings $\mathcal{T}_\Lambda$. Namely, we say that two tilings $T,T'\in\mathcal{T}_\Lambda$ are \emph{connected} and write $T\leftrightarrow T'$ if $T$ becomes $T'$ after a finite number of replacements of the form in \eqref{eq:replacement}. Each equivalence class $\mathcal{T}_\Lambda(R)$ is uniquely characterized by a \emph{root tiling} $R = (R_1, \ldots, R_k)\in\mathcal{T}_\Lambda$, which is defined as any tiling such that
\begin{align*}
R_1 & \in \{V, \, M\}\cup\{B_n^l \, : \, n \geq 2\} , \\
R_i &\in \{V,\, M\} \quad \mbox{for all} \,\, 1<i<k , \\
R_k &\in \{V,\, M,\, M^{(1)} \}\cup\{B_n^r \, : \, n\geq 2 \},
\end{align*}
see Figure~\ref{fig:equivalence_class}. Said differently, a root tiling is any BVMD-tiling of $\Lambda$ that does not use the dimers $(0200)$ or $(020)$. We denote the set of root-tilings by $\mathcal{R}_\Lambda$.
Consequently, we can partition $\mathcal{T}_\Lambda$ into subsets labeled by the root tilings,
\begin{equation}
\mathcal{T}_\Lambda = \biguplus_{R\in\mathcal{R}_\Lambda} \mathcal{T}_\Lambda(R), \quad \mathcal{T}_{\Lambda}(R) = \{T \in \mathcal{T}_\Lambda \, | \, T\leftrightarrow R\}.
\end{equation}
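For a concrete example, take $\Lambda = [1,5]$ and the root tiling $R = (M,M,M^{(1)})$ with configuration $\sigma_\Lambda(R) = (10101)$. A single substitution \eqref{eq:replacement} applied to the first pair of monomers yields $(D, M^{(1)})$ with configuration $(02001)$, while applied to the last monomer pair it yields $(M, D^{(1)})$ with configuration $(10020)$; no further substitutions are possible. Hence
\[
\mathcal{T}_\Lambda(R) = \big\{ (M,M,M^{(1)}),\ (D,M^{(1)}),\ (M,D^{(1)}) \big\} .
\]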
\begin{figure}
\centering{\includegraphics[scale=.3]{Pictures/BVMD_space.png}}
\caption{The equivalence class $\mathcal{T}_\Lambda(R)$ for the root tiling $R$ in the bottom line with boundary conditions $B_n^l $ with $n\geq2$ and $M^{(1)}$.}\label{fig:equivalence_class}
\end{figure}
The natural embedding
\begin{equation}\label{config}
\sigma_{\Lambda}: \mathcal{T}_\Lambda \to {\mathbb N}_0^\Lambda, \quad T\mapsto \sigma_\Lambda(T)
\end{equation}
identifies each tiling $T$ with its particle configuration $\sigma_\Lambda(T)$. As we show next, each particle configuration in the range of $\sigma_\Lambda$ arises from a unique tiling.
\begin{lem}[BVMD Tiling Configurations]\label{lem:tiling_configs}
Fix an interval $\Lambda = [a,b]$ with $|\Lambda|\geq 4$. A configuration $\mu\in{\mathbb N}_0^{\Lambda}$ is in $\operatorname{ran} \sigma_\Lambda$ if and only if the following three conditions hold:
\begin{enumerate}
\item $\mu_x \geq 3$ implies $x\in\{a,b\}$,
\item $\mu_x \geq 1$ implies $\mu_{x\pm 1} = 0$,
\item $\mu_x \geq 2$ implies $\mu_{x\pm 2} = 0$ and $\mu_{x\pm 3}\leq 1$,
\end{enumerate}
and we consider the conditions to be vacuously true for any site $x\pm k \in {\mathbb Z}\setminus \Lambda$. Moreover, the tiling $T\in\mathcal{T}_\Lambda$ for which $ \mu = \sigma_\Lambda(T) $ is unique, i.e., $ \sigma_\Lambda $ is injective.
\end{lem}
\begin{proof}
Given the set of tiles defined above and the boundary constraints, it is clear that any $\mu\in \operatorname{ran}(\sigma_\Lambda) $ satisfies Conditions 1-3.
Conversely, suppose that $\mu \in {\mathbb N}_0^\Lambda$ satisfies Conditions 1-3. We first determine the unique choice of boundary tiles (if any), and then place each type of bulk tile systematically from longest to shortest.
If either $\mu_a\geq 2$ or $\mu_b\geq 2$, then combining Conditions 2-3, it is clear there are enough empty sites next to the boundary site to lay the corresponding tile $B_n^\#$, with $\#\in\{l,r\}$. Similarly, if $\mu_{b-1} = 2$, there are enough empty sites to lay $D^{(1)}$, and one can always place $M^{(1)}$ if $\mu_b = 1$. Moreover, such configurations cannot be covered by bulk tiles, and so this uniquely places the boundary tiles.
For any remaining uncovered site $x$ for which $\mu_x = 2$, a bulk dimer must and can be placed to cover $x$ as $\mu_{x\pm1}=\mu_{x+2}=0$ by Conditions 2-3. These tiles do not overlap with one another or with any boundary tile as for any other $y\in\Lambda$ with $\mu_y \geq 2$, Conditions 2-3 imply $|x-y|\geq 3$ which is the minimum distance one needs to place two successive dimer tiles, or a dimer neighboring a boundary tile with two or more particles.
Similarly, for any remaining uncovered site with $\mu_x = 1$, we must and can place a bulk monomer as Condition 2 guarantees that $\mu_{x+1}=0$. Once again, this tile does not overlap with any other previously placed tiles since Condition 2 guarantees this does not overlap with a neighboring monomer, and Condition 3 guarantees this does not overlap with any neighboring tile with two or more particles.
All remaining uncovered sites hold no particles and thus must be tiled with voids. This completes the unique tiling $T$ that produces the configuration, i.e. $\mu = \sigma_\Lambda(T)$ as desired.
\end{proof}
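To illustrate the tiling procedure from the proof, consider $\Lambda = [1,6]$ and $\mu = (300020)$. Conditions 1--3 hold: the only site with three or more particles is the boundary site $x=1$, and $\mu_1 = 3$ as well as $\mu_5 = 2$ force $\mu_3 = 0$ together with $\mu_2, \mu_4 \leq 1$, all of which are satisfied. The algorithm first places the boundary tile $B_3^l = (300)$ on sites $1$--$3$ and the truncated dimer $D^{(1)} = (020)$ on sites $4$--$6$, producing the unique tiling $T = (B_3^l, D^{(1)})$ with $\mu = \sigma_\Lambda(T)$.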
With respect to each root-tiling $R\in\mathcal{R}_\Lambda$, we define \emph{the BVMD-subspace associated to $R$} and \emph{the space of all BVMD-tilings} by
\begin{equation}\label{tiling_spaces}
\mathcal{C}_\Lambda(R) = \operatorname{span} \left\{ \ket{\sigma_\Lambda(T)}\, | \, T\in \mathcal{T}_\Lambda(R)\right\}, \quad \mathcal{C}_\Lambda = \overline{\operatorname{span}} \{ \ket{\sigma_\Lambda(T)} \, | \, T \in \mathcal{T}_\Lambda\},
\end{equation}
respectively. Each $\mathcal{C}_\Lambda(R)$ is finite-dimensional as there are only finitely many tilings $T$ connected to a root $R$. However, $\dim(\mathcal{C}_\Lambda) = \infty$ as there are an infinite number of non-${\mathbb Z}$-induced boundary tiles. The following lemma summarizes some of the most important properties of these subspaces.
\begin{lem}[BVMD Tiling Space Properties] \label{lem:tiling_properties} Let $\Lambda=[a,b]$ be any interval with $|\Lambda|\geq 4$.
\begin{enumerate}
\item $\mathcal{C}_\Lambda(R)\perp \mathcal{C}_\Lambda(R')$ for any pair of root-tilings $R\neq R'$. As a consequence,
$
\mathcal{C}_\Lambda = \bigoplus_{R\in\mathcal{R}_\Lambda}\mathcal{C}_\Lambda(R).
$
\item $\mathcal{C}_\Lambda(R) $ is an invariant subspace of $H_\Lambda$ for each $R\in\mathcal{R}_\Lambda$ and the restriction of $H_\Lambda $ to $ {\mathcal{C}_\Lambda} \subset \mathrm{dom}(H_\Lambda) $ is a bounded operator. In particular, $\mathcal{C}_\Lambda$ is also invariant.
\end{enumerate}
\end{lem}
\begin{proof}
1. The set of configuration states constitutes an orthonormal basis for ${\mathcal H}_\Lambda$. Since $\mathcal{T}_\Lambda(R) \cap \mathcal{T}_\Lambda(R') = \emptyset$ for any two distinct roots $R\neq R'$, the first result is an immediate consequence of the injectivity of $\sigma_\Lambda$ and the definition of $\mathcal{C}_\Lambda(R)$, see \eqref{tiling_spaces}. The decomposition of $\mathcal{C}_\Lambda$ is an immediate consequence of \eqref{tiling_spaces} since each $\mathcal{C}_\Lambda(R)$ is finite-dimensional and the direct sum of countably many orthogonal closed subspaces is closed.
2. It is trivial that $\mathcal{C}_\Lambda(R)\subseteq\mathrm{dom}(H_\Lambda)$ as it is a span of a finite set of vectors in $\mathrm{dom}(H_\Lambda)$. We first show that $q_x^*q_x \ket{\sigma_\Lambda(T)} \in \mathcal{C}_\Lambda(R)$ for each $T\in\mathcal{T}_\Lambda(R)$ and $x\in[a+1,b-1]$. By direct computation, one finds
\begin{equation}\label{zero_configs}
q_x^*q_x \ket{\sigma_\Lambda(T)} = 0 \in \mathcal{C}_\Lambda(R)
\end{equation}
if on the interval $[x-1,x+1] \subset \Lambda$, the configuration $\sigma_\Lambda(T)$ either has one particle on site $x$ and no particles at $x\pm1$, or $\sigma_\Lambda(T)$ has a pair of neighboring sites with no particles. One is thus left to consider tilings for which the particle configuration on $[x-1,x+1]$ is $ (101) $ or $ (020) $, that is, tilings $T^{M}\in \mathcal{T}_\Lambda(R)$ with two consecutive monomers with particles at $x\pm1$, or tilings $T^D\in \mathcal{T}_\Lambda(R)$ with a dimer ($D$ or $D^{(1)}$) with two particles at $x$. Note that these two sets are in one-to-one correspondence via a single replacement connecting $T^M\leftrightarrow T^D$.
Fixing a pair $T^M \leftrightarrow T^D$ as above, a direct computation yields
\begin{align}
q_x^*q_x \ket{\sigma_\Lambda(T^M)} & = |\lambda|^2 \ket{\sigma_{\Lambda}(T^M)} - \lambda\sqrt{2} \ket{\sigma_\Lambda(T^D)} \label{q_action_1}\\
q_x^*q_x \ket{\sigma_\Lambda(T^D)} & = - \overline{\lambda}\sqrt{2} \ket{\sigma_\Lambda(T^M)}+ 2\ket{\sigma_{\Lambda}(T^D)} .\label{q_action_2}
\end{align}
Thus, the action of $q_x^*q_x$ on either kind of configuration produces a vector in $\mathcal{C}_\Lambda(R)$ and $H_\Lambda \mathcal{C}_\Lambda(R) \subseteq \mathcal{C}_\Lambda(R)$ as claimed.
From \eqref{zero_configs}-\eqref{q_action_2}, it also follows that
\[
\|q_x^*q_x\|_{\mathcal{C}_\Lambda(R)} := \sup_{0\neq \psi\in \mathcal{C}_\Lambda(R)}\frac{\|q_x^*q_x\psi\|}{\|\psi\|} \leq |\lambda|^2+2
\]
where $|\lambda|^2+2$ is the largest eigenvalue of the $ 2\times 2 $ matrix
\begin{equation}\label{MM_action}
\begin{bmatrix}
|\lambda|^2 & -\overline{\lambda}\sqrt{2}\\
-\lambda\sqrt{2} & 2
\end{bmatrix}.
\end{equation}
Therefore, $\|H_\Lambda\|_{\mathcal{C}_\Lambda(R)} \leq (|\Lambda|-2)( |\lambda|^2+2)$. Since $R$ is arbitrary, the same bound holds for $H_\Lambda \restriction_{\mathcal{C}_\Lambda}$ by part 1. Thus, $\mathcal{C}_\Lambda \subseteq \mathrm{dom}(H_\Lambda)$ and the claimed invariance and boundedness holds.
\end{proof}
\subsection{The ground state space for open boundary conditions}
We now turn to determining the ground states of $H_\Lambda$ on any interval $\Lambda$ with $|\Lambda|\geq 5$. We begin by proving that the ground-state space is contained in $\mathcal{C}_\Lambda$, and then use this in combination with Lemma~\ref{lem:tiling_properties} to establish an orthogonal basis for the ground state space in Theorem~\ref{thm:gss}.
\begin{lem}[Support of Ground States]\label{lem:support}
For any interval $\Lambda =[a,b]$ with $|\Lambda|\geq 5$, the ground state space of $H_\Lambda$ is supported on BVMD-tilings, that is, $\mathcal{G}_{\Lambda} \subseteq \mathcal{C}_\Lambda$.
\end{lem}
\begin{proof}
Consider the expansion $
\psi = \sum_{\mu\in {\mathbb N}_0^\Lambda} \psi(\mu) \ket{\mu} $ of an arbitrary ground state $\psi \in \mathcal{G}_\Lambda$ in terms of the configuration basis.
We use Lemma~\ref{lem:tiling_configs} and the frustration free property to show that $\psi(\mu)\neq 0$ implies $\mu\in \operatorname{ran}(\sigma_\Lambda)$.
First, frustration-freeness guarantees that $\psi$ is in the kernel of each electrostatic interaction term $n_xn_{x+1}$. As such, for each $ \mu\in {\mathbb N}_0^\Lambda $
\begin{equation}\label{electrostatic_ff}
0 = \mu_x \mu_{x+1}\psi(\mu) \quad \mbox{for all} \; x\in[a,b-1] ,
\end{equation}
and so $\mu$ satisfies Condition 2 of Lemma~\ref{lem:tiling_configs} if $\psi(\mu) \neq 0$.
Second, frustration-freeness also implies $\psi \in \ker(q_x)$ for any $x\in[a+1,b-1]$. In particular, $0 = (q_x\psi)(\nu)$ for all $ \nu \in {\mathbb N}_0^\Lambda $ from which it follows that $ \psi(\mu) \neq 0 $ if and only if $\psi(\eta) \neq 0$ where $\mu$ and $\eta$ are the two associated configurations (see \eqref{creation-action}):
\begin{equation}\label{config_def}
\mu := (\alpha^*_x)^2\nu , \qquad \eta := \alpha^*_{x+1} \alpha^*_{x-1} \nu .
\end{equation}
If there is $x\in[a+1,b-1]$ such that $\mu_x \geq 3$, then considering \eqref{config_def} the configuration $\eta$ associated to $\nu = \alpha_x^2\mu$ satisfies $\eta_x\eta_{x\pm1} >0$, and hence $ \psi(\mu) = \psi(\eta) = 0 $ by \eqref{electrostatic_ff}. Therefore, Condition 1 of Lemma~\ref{lem:tiling_configs} holds if $\psi(\mu)\neq 0$.
Now, consider any configuration $\eta\in {\mathbb N}_0^\Lambda$ for which $\eta_{x-1} \geq 2$ and $\eta_{x + 1}>0$ for some $x\in[a+1,b-1]$. Then the configuration $\nu=\alpha_{x-1}\alpha_{x+1}\eta$ is well-defined, and the configuration $\mu$ as in \eqref{config_def} satisfies $\mu_{x-1}\mu_x >0$. Arguing as in the previous case we again find $\psi(\eta) = \psi(\mu) = 0$. The analogous argument holds if $\eta_{x+1}\geq 2$ and $\eta_{x-1}>0$. Therefore, if $\psi(\eta) \neq 0$ and $\eta_x \geq 2$ for some $x\in\Lambda$, then $\eta_{x\pm 2} = 0$.
To show that $\psi(\mu) \neq 0$ implies Condition 3 of Lemma~\ref{lem:tiling_configs} for $ \mu $, it is only left to show that $\psi(\mu) = 0$ if $\min\{\mu_x,\,\mu_{x+3}\} \geq 2$ for some $x\in[a,b-3]$. Since $|\Lambda|\geq 5$, it is clear that either $x$ or $x+3$ is an interior site. Assume that $x>a$, and define $\nu=\alpha_x^2\mu$. Then, $\eta$ as in \eqref{config_def} satisfies $\eta_{x+1} > 0$ and $\eta_{x+3} \geq 2$. By the previous case this implies $0=\psi(\eta) = \psi(\mu)$. The analogous argument holds in the case that $x+3$ is interior, where we apply \eqref{config_def} with $\nu=\alpha_{x+3}^2\mu$ and $\eta=\alpha_{x+2}^*\alpha_{x+4}^*\nu$. This completes the proof.
\end{proof}
To summarize the results up to this point: we have found that every BVMD-tiling space $\mathcal{C}_\Lambda(R)$ is a closed invariant subspace of the Hamiltonian $H_\Lambda$ and any two distinct BVMD-spaces are orthogonal. Moreover, for $|\Lambda|\geq 5$ the ground state space is contained in the closed span of all BVMD-tilings $\mathcal{C}_\Lambda$. Since $ \mathcal{G}_\Lambda \subset \mathcal{C}_\Lambda = \bigoplus_{R\in\mathcal{R}_\Lambda} \mathcal{C}_\Lambda(R)$, the orthogonality and invariance of the individual BVMD-spaces imply
\begin{equation}\label{gs_direct_sum}
\mathcal{G}_\Lambda = \bigoplus_{R\in\mathcal{R}_\Lambda} (\mathcal{G}_\Lambda \cap \mathcal{C}_\Lambda(R)).
\end{equation}
Hence, one can build an orthogonal basis for $\mathcal{G}_\Lambda$ by finding an orthogonal basis of each $\mathcal{G}_\Lambda \cap \mathcal{C}_\Lambda(R)$ and taking the union over all root tilings. We prove in Theorem~\ref{thm:gss} that each $\mathcal{G}_\Lambda \cap \mathcal{C}_\Lambda(R)$ is one-dimensional and spanned by the \emph{BVMD-state} $\psi_\Lambda(R)$ defined by
\begin{equation}
\label{BVMD}
\psi_\Lambda(R) := \sum_{T\in \mathcal{T}_\Lambda(R)} \left(\frac{\lambda}{\sqrt{2}}\right)^{d(T)} \ket{\sigma_{\Lambda}(T)}
\end{equation}
where $d(T)$ is the number of dimers $D$ or $D^{(1)}$ in the tiling $T$. Our convention implies that $d(R) = 0$ for all root tilings.
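For instance, for $\Lambda = [1,5]$ and the root tiling $R = (M,M,M^{(1)})$, the class $\mathcal{T}_\Lambda(R)$ contains, besides $R$ itself, the two one-dimer tilings $(D,M^{(1)})$ and $(M,D^{(1)})$, so that \eqref{BVMD} gives
\[
\psi_\Lambda(R) = \ket{10101} + \frac{\lambda}{\sqrt{2}}\big( \ket{02001} + \ket{10020} \big) ,
\]
which coincides with the squeezed Tao-Thouless state $\varphi_3^{(1)}$ introduced in \eqref{TT} below.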
\begin{theorem}[OBC Ground-State Space]\label{thm:gss} Fix an interval $\Lambda $ with $|\Lambda|\geq 5$. For any root-tiling $R\in\mathcal{R}_\Lambda$, one has
\begin{equation}\label{BVMD_GS}
\mathcal{G}_\Lambda \cap \mathcal{C}_\Lambda(R) =\operatorname{span}\{\psi_\Lambda(R)\}.
\end{equation}
Thus, the BVMD-states form an orthogonal basis of the ground state space $\mathcal{G}_\Lambda$,
\begin{equation}\label{G_Lambda}
\mathcal{G}_\Lambda = \operatorname{span} \{ \psi_\Lambda(R) \, | \, R\in \mathcal{R}_\Lambda\}.
\end{equation}
\end{theorem}
\begin{proof}
Any vector
\[
\psi = \sum_{T\in\mathcal{T}_\Lambda(R)} c_T\ \ket{\sigma_\Lambda(T)} \in \mathcal{C}_\Lambda(R) \quad\mbox{with coefficients $ c_T:= \psi(\sigma_\Lambda(T)) $}
\]
is in $ \mathcal{G}_\Lambda $ if and only if $\psi \in\ker(q_x)\cap \mathcal{C}_\Lambda(R)$ for all interior sites $x\in[a+1,b-1]$.
Using the criterion from Lemma~\ref{lem:tiling_configs} it is easy to check that $q_x \ket{\sigma_\Lambda(T)} = 0$ for all tilings $T$ except those that have either a pair of neighboring monomers with particles at $x\pm 1$, or a dimer with two particles at $x$. Consequently, if $ \psi \in\ker(q_x)\cap \mathcal{C}_\Lambda(R) $ then
\begin{equation}\label{qx_kernel}
0=q_x\psi = \sum_{T^M\in\mathcal{T}_\Lambda^M(R)} q_x\left(c_{T^M}\ket{\sigma_\Lambda(T^M)}+c_{T^D}\ket{\sigma_\Lambda(T^D)}\right),
\end{equation}
where $\mathcal{T}_\Lambda^M(R)$ denotes the set of tilings of $\Lambda$ that have two monomers with particles at $x\pm 1$, and $T^D$ is the tiling obtained by replacing these two monomers with a dimer in $T^M$. A direct computation shows that
\begin{equation}\label{qx-action}
q_x(c_{T^M}\ket{\sigma_\Lambda(T^M)}+c_{T^D}\ket{\sigma_\Lambda(T^D)}) = (-\lambda c_{T^M}+\sqrt{2}c_{T^D})\ket{\sigma_\Lambda(T^V)}
\end{equation}
where $T^V$ is the tiling obtained by replacing the two monomers at $x\pm 1$ with voids. Noting that $T^V \neq \tilde{T}^V$ for any pair of distinct $T^M, \tilde{T}^M\in \mathcal{T}_\Lambda^M(R)$, combining \eqref{qx_kernel} with \eqref{qx-action} implies that
\begin{equation}\label{gs_coeff}
c_{T^D} = \frac{\lambda}{\sqrt{2}} c_{T^M}.
\end{equation}
Conversely, given any pair of tilings $T^M,T^D\in\mathcal{T}_\Lambda(R)$ that differ only by a single replacement of two monomers by a dimer, there is an interior $x\in [a+1,b-1]$ for which \eqref{qx-action} holds and, hence, the respective coefficients satisfy \eqref{gs_coeff}. By definition, every $T\in\mathcal{T}_\Lambda(R)$ can be connected to the root tiling $R$ by replacing all dimers $D$ or $D^{(1)}$ by a pair of neighboring monomers. Thus, inductively applying \eqref{gs_coeff} shows
\[
c_T = c_R\left(\frac{\lambda}{\sqrt{2}}\right)^{d(T)}\quad \text{for all}\quad T\in\mathcal{T}_\Lambda(R),
\]
from which it follows that $\psi = c_R \psi_\Lambda(R)$. This completes the proof.
\end{proof}
\subsection{Properties of BVMD States}\label{subsec:BVMD_properties}
We briefly summarize some important properties of BVMD-states, the proofs of which are immediate consequences of the previous results, or simple modifications of the equivalent statements found in \cite{NWY:2021}.
\begin{enumerate}
\item
Applying the replacement rules \eqref{eq:replacement} to any tiling $T\in\mathcal{T}_\Lambda$ leaves the number of particles invariant. As a consequence, each BVMD-state is an eigenstate of the number operator $N_\Lambda= \sum_{x\in \Lambda} n_x $,
\[
N_\Lambda\psi_\Lambda(R) = \sum_{x\in \Lambda} \sigma_\Lambda(R)_x \ \psi_\Lambda(R) .
\]
Moreover, the orthogonality of distinct BVMD-spaces immediately implies that
\[
\braket{\psi_\Lambda(R')}{\psi_\Lambda(R)} = \delta_{R,R'} \sum_{T\in \mathcal{T}_\Lambda(R)} \left(\frac{|\lambda|^2}{2}\right)^{d(T)}
\]
and $\dim(\mathcal{G}_\Lambda) = |\mathcal{R}_\Lambda|=\infty$, as there are an infinite number of non-${\mathbb Z} $-induced boundary tiles.
\item Observing that voids are unaffected by the replacement rules, each BVMD-state can be factored (up to possible boundary states) using void states $\ket{0}$, and \emph{squeezed Tao-Thouless states} $\varphi_{L+1}^{(i)}\in {\mathcal H}_{[1,2L+i]}$. For fixed $L\geq 0$ and $i\in\{1,2\}$, the squeezed Tao-Thouless state $\varphi_{L+1}^{(i)}$ is the BVMD-state generated by the root tiling that covers $2L+i$ sites with monomers, that is
\begin{equation}\label{TT}
\varphi_{L+1}^{(i)} := \psi_{[1,2L+i]}(M_{L+1}^{(i)}), \quad M_{L+1}^{(i)} = (M, M, \ldots, M, \, M^{(i)}),
\end{equation}
where $M^{(2)}=M$, and $M_{L+1}^{(i)}$ has $L+1$ tiles, see Figure~\ref{fig:M3}. We will also write $\varphi_L:=\varphi_L^{(2)}$ and use the convention $\varphi_0 = 1$.
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{Pictures/M3_tiling.png}
\end{center}
\caption{The tilings generated from $M_3^{(2)}$.}
\label{fig:M3}
\end{figure}
To factorize an arbitrary BVMD-state $\psi_\Lambda(R)$, let $ \{v_1, \ldots, v_k\}\subseteq \Lambda =[a,b]$ be the ordered set of sites covered by voids in the root tiling $R=(R_1,\ldots, R_m)$, and denote by $L_i\in{\mathbb N}_0$, $i=1, \ldots, k+1$, the number of monomers ($M$ or $M^{(1)}$) between $v_{i-1}$ and $v_i$. Here, we use the convention that $v_0 = a-1$ and $v_{k+1} = b+1$. Then, $\psi_\Lambda(R)$ factors as
\begin{equation}\label{fragmentation}
\psi_\Lambda(R) = \psi^l \otimes \varphi_{L_1}\otimes\ket{0}_{v_1}\otimes \ldots \otimes\varphi_{L_{k}}\otimes\ket{0}_{v_k}\otimes\psi^r
\end{equation}
where the boundary states $\psi^l$, $\psi^r$ are:
\begin{equation}
\psi^l = \begin{cases}
\ket{n00} & \text{if}\;\;R_1 =B_n^l\\
1 & \text{otherwise}
\end{cases}
\qquad
\psi^r = \begin{cases}
\varphi_{L_{k+1}}\otimes \ket{0n} & \text{if}\;\;R_m =B_n^r \\
\varphi_{L_{k+1}}^{(1)} & \text{if}\;\;R_m = M^{(1)}\\
\varphi_{L_{k+1}} & \text{otherwise}
\end{cases},
\end{equation}
see Figure~\ref{fig:equivalence_class}. The formal proof of this expression follows from a slight modification of the argument used in \cite[Theorem 2.10]{NWY:2021}.
\item
As a fundamental building block of the BVMD-states, the squeezed Tao-Thouless states and their properties play a key role in our analysis. Since the bulk monomer and dimer both end in a vacant site, for each $L\geq 1$
\begin{equation}\label{vp2_to_vp1}
\varphi_{L} = \varphi_L^{(1)} \otimes \ket{0}.
\end{equation}
In view of the substitution rules \eqref{eq:replacement}, for either $i\in\{1,2\}$ these states can be further decomposed according to the following \emph{recursion relations}: for any $n=l+r$ with $l\geq 1$ and $r\geq 2$,
\begin{equation}\label{recursion_general}
\varphi_n^{(i)} = \varphi_l\otimes \varphi_r^{(i)} +\frac{\lambda}{\sqrt{2}} \varphi_{l-1} \otimes \ket{\sigma_d}\otimes\varphi_{r-1}^{(i)}
\end{equation}
where $\ket{\sigma_d} = \ket{0200}$. In the case that $r=1$, one also has the modified relation
\begin{equation}\label{recursion}
\varphi_n^{(i)} = \varphi_{n-1}\otimes\varphi_1^{(i)} + \frac{\lambda}{\sqrt{2}} \varphi_{n-2} \otimes \ket{\sigma_d^{(i)}}
\end{equation}
where $\ket{\sigma_d^{(1)}} := \ket{020}$ and $\ket{\sigma_d^{(2)}} := \ket{\sigma_d}$, see Figure~\ref{fig:M3}.
\item The final property is an expression for the ratio $\beta_n:=\|\varphi_{n-1}\|^2/\|\varphi_{n}\|^2$ and follows from observing that the two vectors on the right side of \eqref{recursion} are orthogonal. As such $\|\varphi_n^{(i)}\|^2 = \|\varphi_n\|^2$ for all $i$ and
\begin{equation}\label{norm_recursion}
\|\varphi_n\|^2 = \|\varphi_{n-1}\|^2 + \frac{|\lambda|^2}{2}\|\varphi_{n-2}\|^2.
\end{equation}
By the argument of \cite[Lemma 2.13]{NWY:2021}, this relation implies that the ratio $\beta_n$ converges as $n\to \infty$. Specifically,
\begin{equation}\label{alpha_convergence}
\beta_n = \frac{1}{\beta_+}\cdot\frac{1-\beta^n}{1-\beta^{n+1}} \to \frac{1}{\beta_+}
\end{equation}
where $\beta = \frac{\beta_-}{\beta_+}\in(-1,0)$ and $\beta_{\pm} = (1\pm \sqrt{1+2|\lambda|^2})/2$.
\end{enumerate}
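As a side remark, the closed form \eqref{alpha_convergence} is easy to confirm numerically. The following Python sketch (an illustration only, not part of the argument; the coupling values used in the check are arbitrary samples) iterates the norm recursion \eqref{norm_recursion} and compares the resulting ratios $\beta_n$ with the closed form:

```python
import math

def beta_ratios(lam_sq, n_max):
    """Iterate a_n = a_{n-1} + (|lambda|^2/2) a_{n-2} with a_0 = a_1 = 1,
    where a_n = ||phi_n||^2, and return beta_n = a_{n-1}/a_n for 2 <= n <= n_max."""
    r = lam_sq / 2.0
    a_prev, a = 1.0, 1.0
    betas = {}
    for n in range(2, n_max + 1):
        a_prev, a = a, a + r * a_prev
        betas[n] = a_prev / a
    return betas

def beta_closed_form(lam_sq, n):
    """beta_n = (1/beta_+) (1 - beta^n)/(1 - beta^{n+1}) with
    beta_{+/-} = (1 +/- sqrt(1 + 2|lambda|^2))/2 and beta = beta_-/beta_+."""
    root = math.sqrt(1.0 + 2.0 * lam_sq)
    beta_p, beta_m = (1.0 + root) / 2.0, (1.0 - root) / 2.0
    beta = beta_m / beta_p
    return (1.0 - beta ** n) / ((1.0 - beta ** (n + 1)) * beta_p)
```

For any fixed $|\lambda|>0$ the two expressions agree to machine precision, and the ratios approach $1/\beta_+$ as $n$ grows.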
\subsection{Tiling spaces and ground states for periodic boundary conditions}\label{sec:periodic_tilings}
For the ground state of $H_\Lambda^\mathrm{per}$ the relation in \eqref{dipole_gs} holds at every site in $\Lambda=[a,b]$. Hence, the ground state space can be described in terms of tilings that only require the bulk tiles $V$, $M$, and $D$. As defined at the beginning of Section~\ref{sec:VMD}, we call any cover $T$ of the ring $\Lambda$ by these tiles a \emph{periodic VMD-tiling}, and further say it is a \emph{periodic root tiling} if it only consists of bulk monomers and voids.
Any periodic tiling can be written in a (non-unique) ordered form $T = (T_1, \ldots, T_k)$ as long as the location of the first tile, e.g. the one covering $ a $, is specified. Two periodic tilings are then called \emph{connected}, denoted $T\leftrightarrow T'$, if they can be transformed into one another using the bidirectional replacement rule $(10)(10)\leftrightarrow (0200)$, for which we consider the first and last tiles in $T$ to be neighbors. The set of periodic root tilings $\mathcal{R}_{\Lambda}^\mathrm{per}$ partitions the set of all periodic tilings $ \mathcal{T}_\Lambda^\textrm{per} $ via this equivalence relation. An invariant subspace of the Hamiltonian $H_\Lambda^\mathrm{per}$ is given by
\[
\mathcal{C}_\Lambda^\mathrm{per}(R) := \operatorname{span}\left\{\ket{\sigma_\Lambda(T)} | T\leftrightarrow R\right\}
\]
where $\sigma_\Lambda(T)\in{\mathbb N}_0^\Lambda$ is again the particle configuration associated with the periodic tiling $T\in \mathcal{T}_\Lambda^\textrm{per} $, cf.~\eqref{config}.
A consequence of Lemma~\ref{lem:periodic_configs} below is that these tiling spaces are again mutually orthogonal and
\begin{equation}\label{per_tiling_space}
\mathcal{C}_\Lambda^\mathrm{per} := \operatorname{span} \left\{\ket{\sigma_\Lambda(T)} | \, T\in\mathcal{T}_{\Lambda}^\mathrm{per}\right\} = \bigoplus_{R\in\mathcal{R}_{\Lambda}^{\mathrm{per}}}\mathcal{C}_\Lambda^\mathrm{per}(R).
\end{equation}
This subspace will turn out to be finite-dimensional, and hence closed.
Note that cutting a periodic tiling between the endpoints $a$ and $b$ produces a BVMD tiling of the interval $\Lambda$, and so one can identify $\mathcal{T}_\Lambda^\mathrm{per} \subseteq \mathcal{T}_\Lambda$. As such, configurations that arise from periodic tilings can be characterized in a similar, in fact even simpler, way than in Lemma~\ref{lem:tiling_properties}.
\begin{lem}[Periodic VMD-Tiling Configurations]\label{lem:periodic_configs}
Given a ring $\Lambda = [a,b]$ with $|\Lambda|\geq 4$, a configuration $\mu\in{\mathbb N}_0^{\Lambda}$ with $\mu_x\leq 2$ for all $x\in\Lambda$ is in the range of the restriction $ \sigma_\Lambda: \mathcal{T}_\Lambda^\textrm{per} \to {\mathbb N}_0^{\Lambda} $ if and only if the following two conditions hold:
\begin{enumerate}
\item $\mu_x \geq 1$ implies $\mu_{x\pm 1} = 0$
\item $\mu_x{\geq 2}$ implies $\mu_{x\pm 2} = 0$ and $\mu_{x\pm 3}\leq 1$,
\end{enumerate}
where $x\pm k$ is taken modulo $|\Lambda|$. Moreover, the tiling $T\in \mathcal{T}_\Lambda^\textrm{per} $ for which $ \mu = \sigma_\Lambda(T) $ is unique, i.e., $ \sigma_\Lambda \restriction_{\mathcal{T}_\Lambda^\textrm{per}} $ is injective.
\end{lem}
The proof of this result follows exactly as that of Lemma~\ref{lem:tiling_configs} without the case of boundary tiles and with the observation that any tiling configuration $\sigma_\Lambda(T)$ with $T\in \mathcal{T}_\Lambda^\textrm{per} $ has at most two particles at any site.
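As an aside, Lemma~\ref{lem:periodic_configs} (including the injectivity claim) can be checked exhaustively on small rings. The following Python sketch is illustrative only and not part of the proof; the tile contents $V\mapsto(0)$, $M\mapsto(1,0)$, $D\mapsto(0,2,0,0)$ are read off from the replacement rule $(10)(10)\leftrightarrow(0200)$:

```python
# Tile name -> particle content, read off from the replacement rule (10)(10) <-> (0200).
TILES = {"V": (0,), "M": (1, 0), "D": (0, 2, 0, 0)}

def periodic_tilings(L):
    """All periodic VMD-tilings of a ring with L sites; each tiling is the
    frozenset of its (start_site, tile_name) placements."""
    tilings = set()
    for name0, cont0 in TILES.items():
        for off in range(len(cont0)):      # the tile covering site 0 starts at -off mod L
            start = (-off) % L
            stack = [(len(cont0), ((start, name0),))]
            while stack:                   # fill the rest of the ring linearly
                filled, placement = stack.pop()
                if filled == L:
                    tilings.add(frozenset(placement))
                    continue
                for name, cont in TILES.items():
                    if filled + len(cont) <= L:
                        pos = (start + filled) % L
                        stack.append((filled + len(cont), placement + ((pos, name),)))
    return tilings

def config(placement, L):
    """The particle configuration sigma_Lambda(T) of a tiling."""
    mu = [0] * L
    for s, name in placement:
        for j, c in enumerate(TILES[name]):
            mu[(s + j) % L] = c
    return tuple(mu)

def satisfies_conditions(mu):
    """Conditions 1-2 of the lemma, with all indices taken mod L."""
    L = len(mu)
    for x in range(L):
        if mu[x] >= 1 and (mu[(x - 1) % L] or mu[(x + 1) % L]):
            return False
        if mu[x] >= 2 and (mu[(x - 2) % L] or mu[(x + 2) % L]
                           or mu[(x - 3) % L] > 1 or mu[(x + 3) % L] > 1):
            return False
    return True
```

For $4\leq |\Lambda|\leq 8$ one verifies in this way that the tiling configurations are exactly the elements of $\{0,1,2\}^\Lambda$ satisfying conditions 1--2, and that distinct tilings yield distinct configurations.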
We conclude this section by establishing the following properties of the ground state space.
\begin{theorem}[Periodic Ground State Space] \label{thm:periodic_gss} The following properties hold for the ground state space $\mathcal{G}_\Lambda^\mathrm{per}$ on any ring $\Lambda = [a,b]$ with $|\Lambda|\geq 4$:
\begin{enumerate}
\item The set of periodic VMD-states $\{\psi_\Lambda^\mathrm{per}(R) | R\in \mathcal{R}_\Lambda^\mathrm{per}\}$ is an orthogonal basis of $\mathcal{G}_\Lambda^\mathrm{per}$
where
\begin{equation}
\psi_\Lambda^{\mathrm{per}}(R) = \sum_{T\in \mathcal{T}_\Lambda^\mathrm{per}(R)} \left(\frac{\lambda}{\sqrt{2}}\right)^{d(T)}\ket{\sigma_\Lambda(T)}
\end{equation}
and $d(T)$ is again the number of dimers $D$ in the periodic tiling $T$.
\item The dimension grows exponentially in the system size. Specifically, there are positive constants $c,C>0$ independent of $\Lambda$ for which
\begin{equation}\label{gs_dimension}
c\mu_+^{|\Lambda|} \leq \dim \mathcal{G}_\Lambda^\mathrm{per} \leq C\mu_+^{|\Lambda|},
\end{equation}
where $\mu_+ := (1+\sqrt{5})/2$.
\item For any periodic root tiling, $N_\Lambda\psi_\Lambda^\mathrm{per}(R)=N_\Lambda(R)\psi_\Lambda^\mathrm{per}(R)$, where $N_\Lambda(R)$ is the number of particles in $R\in \mathcal{R}_\Lambda^\mathrm{per}$. Moreover, the ground state is at most half filled,
\begin{equation}\label{gs_filling}
\frac{1}{2}-\frac{1}{2|\Lambda|} \leq \max_{R\in\mathcal{R}_\Lambda^\mathrm{per}}\frac{N_\Lambda(R)}{|\Lambda|} \leq \frac{1}{2}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
1. This result follows from the same argument used in the proof of Theorem~\ref{thm:gss}.
2. From part 1, it is clear that $\dim\mathcal{G}_\Lambda^\mathrm{per} = |\mathcal{R}_\Lambda^\mathrm{per}|$.
Any periodic root tiling of $\Lambda = [a,b]$ considered as a ring either covers $\{a,b\}$ with a monomer, or is a root tiling of the interval $[a,b]$ by monomers and voids. As such,
\[
|\mathcal{R}_\Lambda^\mathrm{per}| = r_{|\Lambda|} + r_{|\Lambda|-2}
\]
where $r_{L}$ denotes the number of tilings that cover an interval of length $L$ (with open boundary conditions) by voids and monomers. This number satisfies the recursion relation
\begin{equation}\label{tile_count}
r_L = r_{L-1} + r_{L-2}
\end{equation}
with initial conditions $r_1=1$ and $r_2 =2$. The solution reads $r_L = (\mu_+^{L+1}-\mu_-^{L+1})/\sqrt{5}$ where $\mu_{\pm} = (1\pm\sqrt{5})/2$, from which the result follows.
3. The claim $N_\Lambda\psi_\Lambda^\mathrm{per}(R) = N_\Lambda(R)\psi_\Lambda^\mathrm{per}(R)$ is clear since the replacement rule does not change the total number of particles. The value $N_\Lambda(R)$ is maximized by any root-tiling $R_{\max}$ with $\lfloor\frac{|\Lambda|}{2}\rfloor$ monomers. This gives
\[
N_\Lambda(R_{\max}) = \begin{cases}
|\Lambda|/2, & |\Lambda| \text{ even} \\
(|\Lambda|-1)/2, & |\Lambda| \text{ odd}
\end{cases}
\]
which establishes \eqref{gs_filling}.
\end{proof}
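As a numerical aside, the recursion \eqref{tile_count}, its closed form, and the resulting count $|\mathcal{R}_\Lambda^\mathrm{per}| = r_{|\Lambda|} + r_{|\Lambda|-2}$ are easily verified with a short Python sketch (an illustration only, not part of the proof):

```python
import math
from functools import lru_cache

MU_P = (1 + math.sqrt(5)) / 2  # mu_+, the golden ratio
MU_M = (1 - math.sqrt(5)) / 2  # mu_-

@lru_cache(maxsize=None)
def r(L):
    """Number of void/monomer tilings of an interval of length L; r(0) = 1."""
    if L < 0:
        return 0
    if L <= 1:
        return 1
    return r(L - 1) + r(L - 2)

def r_closed(L):
    """Closed form r_L = (mu_+^{L+1} - mu_-^{L+1}) / sqrt(5)."""
    return round((MU_P ** (L + 1) - MU_M ** (L + 1)) / math.sqrt(5))

def dim_periodic_gss(L):
    """|R_Lambda^per| = r_L + r_{L-2} for a ring of L sites."""
    return r(L) + r(L - 2)
```

Since $r_L + r_{L-2} = \mu_+^{L}+\mu_-^{L}$ (a Lucas number), the exponential bounds \eqref{gs_dimension} hold with any $c<1<C$ once $|\Lambda|\geq 4$.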
\section{Proof of a uniform gap in the BVMD tiling space}\label{sec:MM}
We now apply the martingale method to produce a lower bound on the spectral gap $E_1(\mathcal{C}_\Lambda)$ corresponding to open boundary conditions that is uniform in the volume. The martingale method can be used to estimate the spectral gap above the ground state of a frustration-free Hamiltonian on a finite-dimensional Hilbert space. While in previous works it has been used to study spectral gaps for finite-volume quantum spin and lattice fermion models, we adapt it here to the present lattice boson model.
\subsection{Reduction to a finite dimensional subspace}
As remarked earlier, one difficulty in adapting the martingale method is that the Hilbert space $ {\mathcal H}_\Lambda $ and the tiling subspace $ \mathcal{C}_\Lambda $ are both infinite dimensional. For the present model, we solve this issue and establish an initial estimate on the finite-volume gap by observing that $ E_1(\mathcal{C}_\Lambda) $ is realized on the invariant subspace associated to \emph{bulk BVMD-tilings} of $ \Lambda $, which turns out to be finite dimensional. This set is the collection of all tilings generated by the substitution rules on a subset $\mathcal{R}_\Lambda^{\infty} \subset\mathcal{R}_\Lambda$ of root-tilings $R=(R_1, \ldots, R_k) $ for which the boundary tiles are restricted to
\begin{equation}\label{na_bdy_conds}
R_1 \in \{V, \, M, \, B_2^l\}, \quad R_k \in \{V, \, M, \, M^{(1)}, \, B_2^r\} .
\end{equation}
Said differently, this is precisely the set of tilings obtained from truncating VMD-tilings of ${\mathbb Z}$.
The corresponding subspace of \emph{${\mathbb Z} $-induced BVMD-tilings}, or \emph{bulk tilings} for short, is abbreviated by
\begin{equation}\label{def:IVBVMD}
\mathcal{C}_\Lambda^{\infty} := \bigoplus_{R\in\mathcal{R}_{\Lambda}^{\infty}}\mathcal{C}_\Lambda(R).
\end{equation}
Since each subspace $ \mathcal{C}_\Lambda(R) $ is invariant for $ H_\Lambda $, so too is $ \mathcal{C}_\Lambda^{\infty} \subseteq\mathrm{dom}(H_\Lambda)$, which allows us to define the gap
\begin{equation}\label{eq:gapinvariant}
E_1(\mathcal{C}_\Lambda^\infty) := \inf_{0\neq \psi\in\mathcal{C}_\Lambda^\infty \cap \mathcal{G}_\Lambda^\perp}\frac{\braket{\psi}{H_\Lambda\psi}}{\|\psi\|^2}.
\end{equation}
%
\begin{theorem}[Restriction to Bulk Tilings]\label{thm:gap_reduction} For any interval $\Lambda = [a,b]$:
\begin{enumerate}\itemsep0pt
\item $ \dim \mathcal{C}_\Lambda^{\infty} < \infty $,
\item $ \displaystyle
E_1(\mathcal{C}_\Lambda) \geq \min\left\{ E_1(\mathcal{C}_\Lambda^{\infty}) , \, E_1(\mathcal{C}_{[a+3,b]}^{\infty}), E_1(\mathcal{C}_{[a,b-2]}^{\infty}), E_1(\mathcal{C}_{[a+3,b-2]}^{\infty}) \right\} $ is strictly positive, where we use the convention that $E_1(\mathcal{V}_{\Lambda'})=\infty$ if $\mathcal{V}_{\Lambda'} \subseteq \mathcal{G}_{\Lambda'}$ or $ \Lambda' = \emptyset $.
\end{enumerate}
\end{theorem}
\begin{proof}
1. It suffices to show that $|\mathcal{R}_\Lambda^{\infty}|<\infty$ since $\dim(\mathcal{C}_\Lambda(R))<\infty$ for each $R\in \mathcal{R}_\Lambda$. The number of root tilings $R\in\mathcal{R}_\Lambda^{\infty}$ that cover $\Lambda$ with just voids $V$ and bulk monomers $M$ satisfies the recursion relation from \eqref{tile_count}. As a consequence, the number of root tilings with a fixed pair of boundary tiles $R_l$, $R_r$ is given by $r_{L-|R_l|-|R_r|}$ where $|R_\#|$ is the length of the tile $ R_\# $. Using the convention that $r_0 = 1$, this implies
\[
|\mathcal{R}_\Lambda^{\infty}| = \sum_{\substack{R_l\in\{V,M,B_2^l\}\\R_r\in\{V,M,M^{(1)},B_2^r\}}}r_{|\Lambda|-|R_l|-|R_r|}
\]
which is clearly finite.
2. Since $\mathcal{C}_\Lambda = \bigoplus_{R\in\mathcal{R}_\Lambda}\mathcal{C}_\Lambda(R)\subseteq\mathrm{dom}(H_\Lambda)$ is a sum of orthogonal, invariant subspaces all of which contain a unique ground state, $\psi_\Lambda(R)$, the spectral gap on $\mathcal{C}_\Lambda $ is the infimum over the gaps in each subspace,
\[
E_1(\mathcal{C}_\Lambda) = \inf\{E_1(\mathcal{C}_\Lambda(R)) \,| \, R\in \mathcal{R}_\Lambda\} .
\]
The analogous argument implies $E_1(\mathcal{C}_\Lambda^\infty) = \inf\{E_1(\mathcal{C}_\Lambda(R)) \,| \, R\in\mathcal{R}_\Lambda^\infty\}$. Thus, we need only consider $E_1(\mathcal{C}_\Lambda(R))$ for $R\in\mathcal{R}_\Lambda\setminus\mathcal{R}_\Lambda^\infty$. Suppose that $R=(R_1,\ldots,R_k)\in\mathcal{R}_\Lambda$ is such that neither boundary tile belongs to the sets in~\eqref{na_bdy_conds}, that is, $R_1 = B_n^l$ and $R_k = B_m^r$ for some $n,m\geq 3$. Since the replacement rules do not apply to these tiles, any nonzero $\psi_\Lambda \in \mathcal{C}_\Lambda(R)$ factors as
\[
\psi_\Lambda = \ket{n00}\otimes \psi_{\Lambda'}\otimes \ket{0m}
\]
where $\psi_{\Lambda'}\in \mathcal{C}_{\Lambda'}(R')$, $\Lambda'=[a+3,b-2]$, and $R'=(R_2,\ldots, R_{k-1})\in\mathcal{R}_{\Lambda'}^{\infty}$. Moreover, \eqref{artificial_action} shows
\[
H_\Lambda\psi_\Lambda = \ket{n00}\otimes H_{\Lambda'}\psi_{\Lambda'}\otimes \ket{0m},
\]
and so $E_1(\mathcal{C}_\Lambda(R)) = E_1(\mathcal{C}_{\Lambda'}(R'))$. A similar statement holds when exactly one of the boundary tiles of $R$ does not belong to~\eqref{na_bdy_conds}. This proves the asserted inequality. The strict positivity of the spectral gap follows from the fact that $ H_{\Lambda} $ restricted to $ \mathcal{C}_{\Lambda}^\infty $ for any finite interval $ \Lambda $ is unitarily equivalent to a matrix.
\end{proof}
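As an aside, the counting identity for $|\mathcal{R}_\Lambda^{\infty}|$ in part 1 can be cross-checked by direct enumeration. The Python sketch below is illustrative only; tiles are tracked by name and length, and the formula is evaluated through the closed form for $r_L$ from the proof of Theorem~\ref{thm:periodic_gss}:

```python
import math

MU_P, MU_M = (1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2

def r_closed(L):
    """Closed form r_L = (mu_+^{L+1} - mu_-^{L+1})/sqrt(5), with r_0 = 1."""
    return round((MU_P ** (L + 1) - MU_M ** (L + 1)) / math.sqrt(5))

# Boundary and interior tile options with their lengths, cf. the boundary restrictions.
LEFTS = {"V": 1, "M": 2, "B2l": 3}
MIDS = {"V": 1, "M": 2}
RIGHTS = {"V": 1, "M": 2, "M1": 1, "B2r": 2}

def count_formula(L):
    """The sum over boundary tiles of r_{L - |R_l| - |R_r|} from part 1."""
    return sum(r_closed(L - ll - rl)
               for ll in LEFTS.values() for rl in RIGHTS.values()
               if L - ll - rl >= 0)

def count_brute(L):
    """Enumerate root tilings (R_1, ..., R_k) directly as tuples of tile names."""
    def middles(m):
        if m == 0:
            yield ()
        for name, l in MIDS.items():
            if l <= m:
                for rest in middles(m - l):
                    yield (name,) + rest
    roots = set()
    for lname, ll in LEFTS.items():
        for rname, rl in RIGHTS.items():
            for mid in middles(L - ll - rl):
                roots.add((lname,) + mid + (rname,))
    return len(roots)
```

Both counts agree for all tested volumes; for $|\Lambda|=6$, for instance, there are $32$ such root tilings.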
\subsection{The martingale method}\label{subsec:martingale}
We are now able to apply the martingale method proved in \cite[Theorem 5.1]{nachtergaele:2016b} to produce a lower bound on the spectral gap $E_1(\mathcal{C}_\Lambda^{\infty})$. Since our Hamiltonian is translation invariant, it is sufficient to consider $\Lambda = [1,L]$, and Theorem~\ref{thm:tiling_gap} below establishes that
$
\inf_{L\geq 7}E_1(\mathcal{C}_{[1,L]}^{\infty}) >0 $.
To state the main result in this section, recall that for any operator $A $ on ${\mathcal H}_{\Lambda'} $,
the mapping
\begin{equation}\label{idenfification} A \mapsto A\otimes \mathbbm{I}_{\Lambda\setminus \Lambda'}
\end{equation}
identifies $A$ as an operator on ${\mathcal H}_\Lambda$ for any finite $\Lambda \supseteq \Lambda'$. We introduce several sequences of positive operators of this type associated to a fixed $\Lambda = [1,L]$ with $L \geq 7$. Let $N\geq 3$, $k\in\{1,2\}$ denote the unique integers so that $L=2N+k$ and define two finite sequences of Hamiltonians $H_n,h_n \geq 0$ for $2\leq n \leq N$ by
\begin{equation}\label{ham_sequences}
H_n = \sum_{m=2}^n h_m, \quad
h_{n} = H_{\Lambda_n} \quad \text{where}\quad
\Lambda_n = \begin{cases}
[1,4+k], & n=2 \\
[2n+k-5,2n+k], & 3\leq n\leq N
\end{cases}.
\end{equation}
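The geometry of this sequence of intervals can be tabulated directly; the following Python sketch (an illustration only) constructs the $\Lambda_n$ from \eqref{ham_sequences} and computes lengths and consecutive overlaps:

```python
def interval_sequence(L):
    """The intervals Lambda_n, 2 <= n <= N, where L = 2N + k, N >= 3, k in {1, 2}."""
    k = 1 if L % 2 else 2
    N = (L - k) // 2
    assert N >= 3, "the construction assumes L >= 7"
    lam = {2: (1, 4 + k)}
    for n in range(3, N + 1):
        lam[n] = (2 * n + k - 5, 2 * n + k)
    return lam

def overlap(i1, i2):
    """Number of sites shared by two intervals given as (left, right) pairs."""
    return max(0, min(i1[1], i2[1]) - max(i1[0], i2[0]) + 1)
```

One finds $|\Lambda_2| = 4+k$, $|\Lambda_n|=6$ for $n\geq 3$, and $|\Lambda_n\cap\Lambda_{n+1}|=4$, with the intervals covering all of $[1,L]$.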
The associated sequence of intervals satisfies $|\Lambda_2|\in\{5,6\}$, $|\Lambda_n| = 6$ for $n\geq 3$, and $|\Lambda_n\cap\Lambda_{n+1}|=4$ for $2\leq n< N$, from which one can check that each interaction term supported on $\Lambda$ ($n_xn_{x+1}$ or $q_x^*q_x$) is a summand in at least one and at most three of the Hamiltonians $h_n$. As a result, for all $2\leq n \leq N$,
\begin{equation}\label{equivalent_hams}
H_{[1,2n+k]} \leq H_n \leq 3 H_{[1,2n+k]}
\end{equation}
and, in particular, $H_\Lambda \leq H_{N} \leq 3H_{\Lambda}$. An important consequence of \eqref{equivalent_hams} is that the ground-state spaces agree. Thus,
\begin{equation}
\ker H_n = \mathcal{G}_{[1,2n+k]}\otimes{\mathcal H}_{[2n+k+1,L]}\subseteq {\mathcal H}_\Lambda
\end{equation}
where $\mathcal{G}_{[1,2n+k]}$ is as in Theorem~\ref{thm:gss}. Let $G_n$ denote the orthogonal projection onto $\ker H_n$. By frustration-freeness, $\ker H_{n+1}\subseteq \ker H_n$ for each $n$, and so the following resolution of the identity forms a mutually orthogonal family of orthogonal projections:
\begin{equation}\label{En}
E_n := \begin{cases}
\mathbbm{I} - G_2 & n = 1 \\
G_n - G_{n+1} & 2\leq n \leq N-1 \\
G_{N} & n=N
\end{cases}
\end{equation}
Finally, we denote by $g_n,$ $2\leq n \leq N$, the orthogonal projection onto $\ker h_n=\mathcal{G}_{\Lambda_n}\otimes{\mathcal H}_{\Lambda\setminus\Lambda_n}\subseteq{\mathcal H}_\Lambda$.
For our application of the martingale method, we consider the restriction of these operators to the subspace $\mathcal{C}_\Lambda^{\infty}=\bigoplus_{R\in\mathcal{R}_\Lambda^{\infty}}\mathcal{C}_\Lambda(R) $. The BVMD-space $\mathcal{C}_\Lambda(R)$ for each root-tiling $R\in\mathcal{R}_\Lambda$ is invariant under $H_{\Lambda'}$ for any $\Lambda' \subseteq \Lambda$, since it is invariant under all $q_x^*q_x$ supported on $\Lambda$ (and in particular those supported on $\Lambda'$). By the same reasoning, $ \mathcal{C}_\Lambda(R) $ is invariant under $h_n$ and $H_n$ as well as the associated ground-state projections $g_n$ and $G_n$ for all $n\geq 2$. Hence each of these self-adjoint operators can be jointly block-diagonalized with respect to the decomposition ${\mathcal H}_\Lambda = \mathcal{C}_\Lambda^{\infty}\oplus(\mathcal{C}_\Lambda^{\infty})^\perp.$ Explicitly, for any $2\leq n \leq N$:
\begin{align}\label{blocks}
A_n & = A^{\infty}_n + (\mathbbm{I} - P_{\mathcal{C}_\Lambda^{\infty}}) A_n(\mathbbm{I} - P_{\mathcal{C}_\Lambda^{\infty}}) , \qquad \mbox{with} \quad A^{\infty}_n:=A_n\restriction_{\mathcal{C}_\Lambda^{\infty}} =P_{\mathcal{C}_\Lambda^{\infty}} A_n P_{\mathcal{C}_\Lambda^{\infty}}.
\end{align}
where $A_n \in \{H_n,\, h_n,\, G_n, \, g_n\}$ and $P_{\mathcal{C}_\Lambda^{\infty}}$ is the orthogonal projection onto $\mathcal{C}_{\Lambda}^{\infty}$. This block diagonalization also extends to every $E_n$ by \eqref{En}. Thus, the restriction of any of these operators to $\mathcal{C}_\Lambda^{\infty}$ is given by the associated block diagonal component $ A^{\infty}_n $.
We are now able to state the main result in this section:
\begin{theorem}[Application of the Martingale Method]\label{thm:tiling_gap}
Fix $\Lambda =[1,L]$ with $L\geq 7$, and let $h_n$, $g_n$ and $E_n$ be as in \eqref{ham_sequences} and \eqref{En}. The restrictions of these operators to $\mathcal{C}_\Lambda^{\infty}$ satisfy the following three criteria:
\begin{enumerate}
\item For all $n\geq 2$, $h_n^{\infty}\geq 2\kappa(\mathbbm{I}-g_n^{\infty})$.
\item For all $n\geq 2$, $[g_n^{\infty},E_m^{\infty}]\neq 0$ only if $m \in [n-3,n-1]$.
\item For all $2\leq n \leq N-1$ and $|\lambda|\neq 0$, the ground state projections satisfy
\begin{equation}\label{MM3}
\|g_{n+1}^{\infty}E_n^{\infty}\| \leq f(|\lambda|^2/2) := \sup_{n\geq 4}f_n(|\lambda|^2/2)
\end{equation}
where given $\beta_n = \|\varphi_{n-1}\|^2/\|\varphi_n\|^2$, see \eqref{alpha_convergence},
\begin{equation}\label{def:fn}
f_n(r) : = r\beta_n\beta_{n-2}\left(\frac{[1-\beta_{n-1}(1+r)]^2}{1+2r}+\frac{2(1-\beta_{n-1})^2}{1+r}\right).
\end{equation}
\end{enumerate}
As a consequence, if $|\lambda|>0$ and $f(|\lambda|^2/2)<1/3$ then the spectral gap of $H_\Lambda$ in $\mathcal{C}_\Lambda^{\infty}$ is bounded from below by
\begin{equation}\label{tiling_gap}
E_1(\mathcal{C}_{[1,L]}^{\infty}) \geq \frac{2\kappa}{3}(1-\sqrt{3f(|\lambda|^2/2)})^2.
\end{equation}
Moreover, $f(|\lambda|^2/2)<1/3$ for $|\lambda|\leq 7.4.$
\end{theorem}
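To illustrate how the bound \eqref{tiling_gap} is evaluated, the following Python sketch computes $f_n(|\lambda|^2/2)$ from \eqref{def:fn} via the norm recursion. It is illustrative only: the coupling $|\lambda|^2=1$ and the normalization $\kappa=1$ used in the check are arbitrary sample choices, and the supremum in \eqref{MM3} is truncated at a finite $n$:

```python
import math

def f_sup(lam_sq, n_max=200):
    """Approximate f(|lambda|^2/2) = sup_{n >= 4} f_n(|lambda|^2/2), with the
    supremum truncated at n_max; the ratios beta_n come from the norm recursion."""
    r = lam_sq / 2.0
    norms = [1.0, 1.0]  # ||phi_0||^2, ||phi_1||^2
    for _ in range(2, n_max + 1):
        norms.append(norms[-1] + r * norms[-2])
    beta = [0.0, 0.0] + [norms[n - 1] / norms[n] for n in range(2, n_max + 1)]
    def f(n):
        return r * beta[n] * beta[n - 2] * (
            (1.0 - beta[n - 1] * (1.0 + r)) ** 2 / (1.0 + 2.0 * r)
            + 2.0 * (1.0 - beta[n - 1]) ** 2 / (1.0 + r))
    return max(f(n) for n in range(4, n_max + 1))

def gap_lower_bound(lam_sq, kappa=1.0):
    """(2 kappa / 3)(1 - sqrt(3 f))^2, applicable whenever f < 1/3."""
    f = f_sup(lam_sq)
    if f >= 1.0 / 3.0:
        raise ValueError("bound not applicable: f >= 1/3")
    return (2.0 * kappa / 3.0) * (1.0 - math.sqrt(3.0 * f)) ** 2
```

At $|\lambda|^2=1$ one finds $f\approx 0.03$, well below the threshold $1/3$, so \eqref{tiling_gap} yields a strictly positive volume-independent bound.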
\subsection{Restrictions of BVMD spaces to subvolumes}\label{sec:truncations}
For the proof of Theorem~\ref{thm:tiling_gap}, one needs to analyze the action of operators $A$ supported on subintervals $\Lambda'\subseteq \Lambda$ on $\mathcal{C}_\Lambda^{\infty}$. It is therefore useful to expand $\mathcal{C}_\Lambda^{\infty}$ as a direct sum of tensor products of the form $\mathcal{K}_{\Lambda_l}\otimes\mathcal{C}_{\Lambda'}(R)\otimes\mathcal{K}_{\Lambda_r}$ where $\Lambda = \Lambda_l\cup \Lambda' \cup \Lambda_r$ is the disjoint union of three consecutive intervals, and $\mathcal{K}_{\Lambda_\#}\subseteq {\mathcal H}_{\Lambda_\#}.$ The main observation that allows us to write $\mathcal{C}_\Lambda^{\infty}$ in such a form is that for any tiling $T\in\mathcal{T}_\Lambda$ the restriction of the configuration $\sigma_\Lambda(T)$ to $ \Lambda' $ satisfies the requirements of Lemma~\ref{lem:tiling_configs}. Therefore,
\begin{equation}\label{tiling_restriction}
\sigma_\Lambda(T)\restriction_{\Lambda'} = \sigma_{\Lambda'}(T') \quad \text{for some} \quad T'\in\mathcal{T}_{\Lambda'}
\end{equation}
and one sees that if $T$ is a ${\mathbb Z} $-induced tiling then so too is $T'$ as every site holds at most two particles, see Figure~\ref{fig:restrictions}.
To state the desired decomposition of $\mathcal{C}_\Lambda^{\infty}$, we introduce its orthonormal configuration basis
\begin{align}
\mathcal{B}_{\Lambda}^{\infty} & = \left\{\ket{\mu} \, | \, \mu\in\operatorname{ran}\sigma_\Lambda\cap\{0,1,2\}^\Lambda\right\} \label{NATile-basis}.
\end{align}
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{Pictures/restrictions.png}
\end{center}
\caption{The restriction $T'$ of a tiling $T$ to $\Lambda'$. Inserting any connected tiling $T''\leftrightarrow T'$ produces a tiling on $\Lambda$ that is connected to $T$.}
\label{fig:restrictions}
\end{figure}
\begin{lem}[Tiling Space Decomposition]\label{lem:tiling_decomp}
Let $\Lambda = [a,b]$, and suppose that $\Lambda'\subseteq \Lambda$ is a subinterval with $|\Lambda'|\geq 4$. Then, $\mathcal{C}_\Lambda^{\infty}$ can be decomposed as
\begin{equation}\label{tiling_decomp}
\mathcal{C}_{\Lambda}^{\infty} = \bigoplus_{R'\in\mathcal{R}_{\Lambda'}^{\infty}}\bigoplus_{\substack{\ket{\mu}\in\mathcal{B}_{\Lambda}^{\infty} \, : \\ \mu = (\mu^l, \, \sigma_{\Lambda'}(R'), \, \mu^r)}} \ket{\mu^l}\otimes\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r},
\end{equation}
where $\mu^l$ and $\mu^r$ are the subconfigurations of $\mu$ supported on the subinterval of $\Lambda$ to the left and right of $\Lambda'$, respectively. In the case that one of these subintervals is empty, we use the convention $\ket{\mu^\#} = 1$.
\end{lem}
Above, we use a slight abuse of notation and denote $\mathcal{S}\otimes \psi:=\{\phi\otimes\psi:\phi\in\mathcal{S}\}\subseteq {\mathcal H}_1\otimes{\mathcal H}_2$ for a subset $\mathcal{S}\subseteq{\mathcal H}_1$ and a vector $\psi\in{\mathcal H}_2$ of two Hilbert spaces. Note that the direct sum in \eqref{tiling_decomp} is well-defined since it is taken over a collection of mutually orthogonal subspaces. Moreover, every root tiling $R'\in\mathcal{R}_{\Lambda'}^\infty$ is represented in at least one summand on the RHS of \eqref{tiling_decomp} since any tiling configuration $\sigma_{\Lambda'}(T')$ can be extended by zeros to a tiling configuration on $\Lambda$.
\begin{proof}
Fix $R'\in\mathcal{R}_{\Lambda'}^{\infty}$ and pick any $\mu = (\mu^l,\sigma_{\Lambda'}(R'),\mu^r)\in \operatorname{ran}\sigma_{\Lambda}\cap\{0,1,2\}^\Lambda$. Applying the replacement rules to neighboring monomers in $R'$ to create $T'\in\mathcal{T}_{\Lambda'}(R')$ once again produces a configuration $\mu(T')=(\mu^l,\sigma_{\Lambda'}(T'),\mu^r)$ that satisfies the conditions of Lemma~\ref{lem:tiling_configs}, see Figure~\ref{fig:restrictions}. Moreover, this configuration is $ {\mathbb Z} $-induced since each site holds at most two particles. Hence, $\ket{\mu(T')}\in\mathcal{B}_{\Lambda}^{\infty}$ for all $T'\in\mathcal{T}_{\Lambda'}(R')$, and one finds that the RHS of \eqref{tiling_decomp} is a subspace of $\mathcal{C}_\Lambda^{\infty}$.
Now, fix any $\ket{\mu}\in\mathcal{B}_{\Lambda}^{\infty}$, and decompose $\mu =(\mu^l,\mu^{\Lambda'},\mu^r)$ where $\mu^{\Lambda'}$ is the subconfiguration associated with $\Lambda'$. Since $\mu=\sigma_\Lambda(T)$ for some BVMD-tiling $T$ on $\Lambda$, $\mu^{\Lambda'}=\sigma_{\Lambda'}(T')$ for some $T'\in\mathcal{T}_{\Lambda'}$ by \eqref{tiling_restriction}. Moreover, this tiling is $ {\mathbb Z} $-induced as $\mu_x^{\Lambda'}\leq 2$ for all $x\in\Lambda'$. Thus,
\[
\ket{\mu} \in \ket{\mu^l}\otimes\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r}
\]
where $R'\in\mathcal{R}_{\Lambda'}^{\infty}$ is the root-tiling associated to $T'$. Once again, $\mu(R'):=(\mu^l,\sigma_{\Lambda'}(R'),\mu^r)$ is a $ {\mathbb Z} $-induced BVMD-tiling of $\Lambda$ since applying the replacement rules to any dimer ($D$ or $D^{(1)}$) in $T'$ reproduces a configuration on $\Lambda$ that satisfies Lemma~\ref{lem:tiling_configs}, see Figure~\ref{fig:restrictions}. Thus, $\mathcal{C}_\Lambda^{\infty}$ is a subspace of the RHS of \eqref{tiling_decomp} and equality holds as claimed.
\end{proof}
The following is an immediate consequence of the above decomposition.
\begin{cor}[Subspace Reductions]\label{cor:reductions} Suppose $A$ is a self-adjoint operator supported on a subinterval $\Lambda'\subseteq\Lambda$ with $|\Lambda'|\geq 4$, and $\mathcal{C}_{\Lambda'}(R')\subseteq\mathrm{dom}(A)$ is an invariant subspace of $A$ for each $R'\in\mathcal{R}_{\Lambda'}^{\infty}$. Then $\mathcal{C}_\Lambda^{\infty}\subseteq \mathrm{dom}(A\otimes\mathbbm{I}_{\Lambda\setminus\Lambda'})$ is an invariant subspace of $A\otimes\mathbbm{I}_{\Lambda\setminus\Lambda'}$, and the following properties hold:
\begin{enumerate}
\item $\|A\otimes \mathbbm{I}_{\Lambda\setminus\Lambda'}\|_{\mathcal{C}_\Lambda^{\infty}}=\|A\|_{\mathcal{C}_{\Lambda'}^{\infty}}$, where the subscript denotes the Hilbert space in which the operator norm is taken.
\item $\operatorname{spec}(A\otimes \mathbbm{I}_{\Lambda\setminus\Lambda'}\restriction_{\mathcal{C}_\Lambda^{\infty}}) = \operatorname{spec}(A\restriction_{\mathcal{C}_{\Lambda'}^{\infty}}).$
\end{enumerate}
\end{cor}
Above, we use the notation $A\otimes\mathbbm{I}_{\Lambda\setminus\Lambda'}$ to emphasize on which Hilbert space the action of $A$ is considered. We suppress this notation in the proof below. Note also that the norms are well defined since $\mathcal{C}_{\Lambda}^{\infty}$ is finite-dimensional for any finite volume $\Lambda$.
\begin{proof} Since each $\mathcal{C}_{\Lambda'}(R')$ is invariant under $A$, the latter is block diagonal with respect to the decomposition in \eqref{tiling_decomp} as
\[
A\left(\ket{\mu^l}\otimes\mathcal{C}_{\Lambda'}(R)\otimes\ket{\mu^r}\right) = \ket{\mu^l}\otimes \left(A\mathcal{C}_{\Lambda'}(R)\right)\otimes\ket{\mu^r}.
\]
As a consequence, the norm and spectrum of the restrictions agree, i.e.
\[
\|A\|_{\ket{\mu^l}\otimes\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r}} = \|A\|_{\mathcal{C}_{\Lambda'}(R')} \quad\text{and}\quad \operatorname{spec}(A\restriction_{\ket{\mu^l}\otimes\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r}}) = \operatorname{spec}(A\restriction_{\mathcal{C}_{\Lambda'}(R')}).
\]
The claimed equalities then follow from applying the mutual orthogonality of the BVMD-spaces to conclude
\begin{equation} \label{norm_max}
\|A\|_{\mathcal{C}_{\Lambda'}^{\infty}} = \max_{R'\in\mathcal{R}^\infty_{\Lambda'}}\|A\|_{\mathcal{C}_{\Lambda'}(R')} \quad \mbox{and} \quad \operatorname{spec}(A\restriction_{\mathcal{C}_{\Lambda'}^{\infty}}) = \bigcup_{R'\in\mathcal{R}_{\Lambda'}^{\infty}}\operatorname{spec}(A\restriction_{\mathcal{C}_{\Lambda'}(R')}),
\end{equation}
and similarly for $\|A\|_{\mathcal{C}_\Lambda^\infty}$ and $\operatorname{spec}(A\restriction_{\mathcal{C}_\Lambda^\infty})$ given \eqref{tiling_decomp}.
\end{proof}
A natural question to ask is how \eqref{tiling_decomp} relates to the trivial decomposition~\eqref{def:IVBVMD}.
As will be evident in the proof of Theorem~\ref{thm:tiling_gap}, the particular case of interest is when $\Lambda = [a,b]$ and $\Lambda'=[a,b-2]$. It is easy to see for this situation that every $\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r}$ as in \eqref{tiling_decomp} is contained in some $\mathcal{C}_\Lambda(R)$ with $R\in\mathcal{R}_{\Lambda}^{\infty}$. More can be said, though, as illustrated in the next result. Specifically, we show that every $\mathcal{C}_{\Lambda}(R)$ decomposes as a direct sum of one or two subspaces of the form $\mathcal{C}_{\Lambda'}(R')\otimes\ket{\mu^r}$. This result is again derived from the possible ways tilings on $\Lambda$ can restrict to tilings on $\Lambda'$.
There are two cases one needs to consider, which are distinguished by whether or not the replacement rules apply to the last two tiles in $R\in\mathcal{R}_\Lambda^{\infty}$. We denote by
\begin{equation}\label{R_MM}
\mathcal{R}_\Lambda^{MM} = \{ R\in \mathcal{R}_\Lambda^{\infty} \, | \, R \text{ ends in two or more monomers}\}
\end{equation}
the set of $ {\mathbb Z}$-induced root tilings for which the last two tiles can be replaced. For any tiling $R\in\mathcal{R}_\Lambda^{MM}$ there is a unique choice $n\geq 2$ and $i\in\{1,2\}$ so that
\begin{equation}\label{two_monomers}
R = (\tilde{R},M_n^{(i)}) ,
\end{equation}
where $\tilde{R}$ does not end in a monomer, and we recall that $M_n^{(i)}=(M,\ldots,M,M^{(i)})$ stands for a tiling of an interval of length $2(n-1)+i$ by $n$ monomers (the last of which has length $i\in\{1,2\}$). We use the convention that $\tilde{R} = \emptyset$ if $R=M_n^{(i)}$. With respect to this decomposition, define the tiling $R_D \leftrightarrow R$ by replacing the last two monomers of $R$ with a dimer,
\begin{equation}\label{R_D}
R_D = (\tilde{R}, \, M_{n-2}^{(2)}, \, D^{(i)})
\end{equation}
where $D^{(2)} = D$ and $R_D = (\tilde{R},D^{(i)})$ if $n=2$. Even though $R_D$ is not a root tiling of $\Lambda,$ its restriction produces a root tiling on $\Lambda'$ that is ${\mathbb Z}$-induced.
\begin{lem}[BVMD-Space Decomposition]\label{lem:tiling_decomp2}
Suppose $\Lambda = [a,b]$ and $\Lambda' = [a,b-2]$ with $|\Lambda'|\geq 4$. For any $R\in \mathcal{R}_{\Lambda}^{\infty}\setminus \mathcal{R}_\Lambda^{MM}$,
\begin{equation}\label{R_decomp_1}
\mathcal{C}_\Lambda(R) = \mathcal{C}_{\Lambda'}(R')\otimes \ket{\mu}
\end{equation}
where $\sigma_\Lambda(R) = (\sigma_{\Lambda'}(R'),\mu)$ and $R'\in\mathcal{R}_{\Lambda'}^{\infty}$. Moreover, for any $R\in \mathcal{R}_\Lambda^{MM}$,
\begin{equation}\label{R_decomp_2}
\mathcal{C}_\Lambda(R) = \big( \mathcal{C}_{\Lambda'}(R')\otimes \ket{\mu_R}\big) \oplus \big( \mathcal{C}_{\Lambda'}(R_D')\otimes \ket{\mu_{R_D}}\big)
\end{equation}
where $\sigma_\Lambda(R) = (\sigma_{\Lambda'}(R'),\mu_R)$, $\sigma_\Lambda(R_D) = (\sigma_{\Lambda'}(R_D'),\mu_{R_D})$ for $R_D$ as in \eqref{R_D}, and both $R',R_D'\in\mathcal{R}_{\Lambda'}^{\infty}$.
\end{lem}
\begin{figure}
\begin{center}
\includegraphics[scale=.45]{Pictures/restrictions2.png}
\end{center}
\caption{Examples of the restrictions to $\Lambda'$ of root tilings in $\mathcal{R}_\Lambda^\infty\setminus \mathcal{R}_{\Lambda}^{MM}$ and $\mathcal{R}_\Lambda^{MM}$, respectively.}
\label{fig:restrictions2}
\end{figure}
This result will be proved by showing that the configuration bases agree. To this end, denote by
\begin{eqnarray}
\mathcal{B}_{\Lambda}(R) = \left\{\ket{\sigma_\Lambda(T)} \, | \, T\in\mathcal{T}_\Lambda(R)\right\}, \label{BVMD-basis}
\end{eqnarray}
the orthonormal basis of $\mathcal{C}_\Lambda(R)$ with $ R \in \mathcal{R}_\Lambda $.
\begin{proof}
Consider the two cases separately.
\emph{Case $R\in\mathcal{R}_\Lambda^{\infty}\setminus\mathcal{R}_{\Lambda}^{MM}$:} In this case, the replacement rules used to generate the set of tilings $\mathcal{T}_\Lambda(R)$ will never change the particle content of the last two sites of $\Lambda$, see Figure~\ref{fig:restrictions2}. As a consequence, every tile replacement on $\Lambda$ is in one-to-one correspondence with a tile replacement on $\Lambda'$. Thus,
\[
\mathcal{B}_{\Lambda}(R) = \mathcal{B}_{\Lambda'}(R')\otimes\ket{\mu}
\]
where $\mu = \sigma_\Lambda(R)\restriction_{[b-1,b]}$, and $R'$ is the root-tiling associated to $\sigma_{\Lambda}(R)\restriction_{\Lambda'}$.
\emph{Case $R\in\mathcal{R}_\Lambda^{MM}$:} Consider first the case that $R = (\tilde{R},\,M_n^{(2)})$ for some $n\geq 2$ and $\tilde{R}$ as in \eqref{two_monomers}. The particle content of the last two sites for any tiling $T\in\mathcal{T}_\Lambda(R)$ is either $(1,0)$ if these two sites are covered by a monomer, or $(0,0)$ if the last two monomers are replaced by a bulk dimer. Considering all possible tilings on $\Lambda$, one quickly finds
\begin{equation}\label{k=2}
\mathcal{B}_\Lambda(R)= \big( \mathcal{B}_{\Lambda'}(R')\otimes\ket{10} \big) \cup \big( \mathcal{B}_{\Lambda'}(R_D')\otimes\ket{00}\big)
\end{equation}
where $R' = (\tilde{R},M_{n-1}^{(2)})$ and $R_D' = (\tilde{R},M_{n-2}^{(2)},B_2^r)$, see Figure~\ref{fig:restrictions2}.
The analogous argument holds when $R = (\tilde{R},\,M_n^{(1)})$, for which
\begin{equation}\label{k=1}
\mathcal{B}_\Lambda(R)= \big( \mathcal{B}_{\Lambda'}(R')\otimes\ket{01}\big) \cup\big( \mathcal{B}_{\Lambda'}(R_D')\otimes\ket{20}\big)
\end{equation}
where $R' = (\tilde{R},M_{n-1}^{(1)})$ and $R_D' = (\tilde{R},M_{n-2}^{(2)},V)$.
\end{proof}
A useful corollary for establishing \eqref{MM3} identifies a special orthogonal basis of $\mathcal{C}_{\Lambda}^{\infty}\cap\big( \mathcal{G}_{\Lambda'}\otimes {\mathcal H}_{[b-1,b]}\big) $ with $\Lambda$ and $\Lambda'$ as in Lemma~\ref{lem:tiling_decomp2}. To state the result, we first recall that $\mathcal{G}_\Lambda\subseteq \mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]}$ by frustration-freeness, and so
\begin{equation}\label{ff_containment}
\{\psi_\Lambda(R) \, | \, R\in\mathcal{R}_\Lambda^{\infty}\} \subseteq \mathcal{C}_{\Lambda}^{\infty}\cap \big( \mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]}\big)
\end{equation}
is an orthogonal set of vectors, see~\eqref{BVMD}. Using Lemma~\ref{lem:tiling_decomp2} we extend this set to an orthogonal basis in Corollary~\ref{cor:OB} by adding a set of vectors $\{\xi_\Lambda(R) \, | \, R\in \mathcal{R}_\Lambda^{MM}\}$, which result from decomposing $R = (\tilde{R},M_n^{(i)})\in\mathcal{R}_\Lambda^{MM}$ as in \eqref{two_monomers}. Specifically,
\begin{equation}\label{xi}
\xi_\Lambda(R) := \psi_{\Lambda(n,i)}(\tilde{R})\otimes \eta_n^{(i)}
\end{equation}
where $\psi_{\Lambda(n,i)}(\tilde{R})$ is the associated BVMD-state on $\Lambda(n,i):=[a,b-2(n-1)-i]$,
\begin{equation}\label{eta}
\eta_{n+1}^{(i)} := -\frac{\overline{\lambda}}{\sqrt{2}}\beta_{n} \varphi_{n}\otimes\varphi_1^{(i)} + \varphi_{n-1}\otimes\ket{\sigma_d^{(i)}} \in\mathcal{C}_{[1,2n+i]}(M_{n+1}^{(i)}) ,
\end{equation}
and the ingredients defining the RHS above are as in Subsection~\ref{subsec:BVMD_properties}, see specifically \eqref{TT}, \eqref{recursion}, and \eqref{alpha_convergence}. This state is chosen so that $\braket{\eta_n^{(i)}}{\varphi_n^{(i)}}=0$ for all $n\geq 2$ and $i\in\{1,2\}$. Like the squeezed Tao-Thouless states, $\eta_n^{(i)}$ is not normalized, but satisfies
\begin{equation}\label{eta_norm}
\|\eta_n^{(i)}\|^2 = \|\varphi_{n-2}\|^2\left[1+\beta_{n-1}\frac{|\lambda|^2}{2}\right]=\frac{\|\varphi_{n-3}\|^2}{\beta_n\beta_{n-2}}.
\end{equation}
\begin{cor}[Orthogonal Basis] \label{cor:OB}Let $\Lambda = [a,b]$ with $|\Lambda|\geq 7$, and $\Lambda' = [a,b-2]$. Then, the following is an orthogonal basis for $\mathcal{C}_{\Lambda}^{\infty}\cap (\mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]})$:
\begin{equation}\label{orthogonal_basis}
\left\{\psi_{\Lambda}(R) \, | \, R\in\mathcal{R}_\Lambda^{\infty}\right\}\cup \left\{\xi_{\Lambda}(R)\, | \ R\in\mathcal{R}_\Lambda^{MM}\right\}.
\end{equation}
\end{cor}
\begin{proof} Just as in \eqref{gs_direct_sum}, the mutual orthogonality of the BVMD-spaces and the direct sum decomposition from \eqref{def:IVBVMD} guarantee that
\[
\mathcal{C}_{\Lambda}^{\infty}\cap (\mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]}) = \bigoplus_{R\in\mathcal{R}_{\Lambda}^{\infty}} \mathcal{C}_\Lambda(R)\cap(\mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]}).
\]
Since each $\mathcal{C}_{\Lambda'}(R')$ supports a unique ground state in $\mathcal{G}_{\Lambda'}$, Lemma~\ref{lem:tiling_decomp2} implies \[\dim\left(\mathcal{C}_\Lambda(R)\cap(\mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]})\right) = \begin{cases} 1 & \mbox{if} \; R\in \mathcal{R}_\Lambda^{\infty}\setminus\mathcal{R}_{\Lambda}^{MM}, \\ 2 & \mbox{if} \; R\in \mathcal{R}_{\Lambda}^{MM} .
\end{cases}
\]
Given \eqref{ff_containment}, one only needs to consider $R=(\tilde{R},M_n^{(i)})\in\mathcal{R}_\Lambda^{MM}$ to complete the orthogonal basis. Using the notation from Lemma~\ref{lem:tiling_decomp2} (see also \eqref{k=2}-\eqref{k=1}) one can verify that for such $R$,
\begin{align}
\psi_{\Lambda'}(R')\otimes\ket{\mu_R} & = \psi_{\Lambda(n,i)}(\tilde{R})\otimes\varphi_{n-1}\otimes\varphi_1^{(i)} \\
\psi_{\Lambda'}(R_D')\otimes\ket{\mu_{R_D}} & = \psi_{\Lambda(n,i)}(\tilde{R})\otimes\varphi_{n-2}\otimes\ket{\sigma_d^{(i)}}
\end{align}
and so $\xi_\Lambda(R)\in\mathcal{C}_\Lambda(R)\cap(\mathcal{G}_{\Lambda'}\otimes{\mathcal H}_{[b-1,b]}).$ By construction $\xi_\Lambda(R)$ and $\psi_\Lambda(R)$ are orthogonal since $\psi_\Lambda(R)=\psi_{\Lambda(n,i)}(\tilde{R})\otimes\varphi_{n}^{(i)}$ and $\braket{\varphi_n^{(i)}}{\eta_n^{(i)}} = 0$. Thus, the result holds as stated.
\end{proof}
We conclude this subsection with the following lemma, which constitutes the core of the proof of \eqref{MM3} in Theorem~\ref{thm:tiling_gap}.
\begin{lem}[Overlap]\label{lem:epsilon}
Fix $\Lambda = [a,b]$ with $|\Lambda|\geq 7$ and set $\Lambda_1 =[a,b-2]$ and $\Lambda_2 = [b-5,b]$. The ground-state projections satisfy the norm bound
\begin{equation}
\label{norm_bound}
\|G_{\Lambda_2}(\mathbbm{I}-G_\Lambda)G_{\Lambda_1}\|_{\mathcal{C}_{\Lambda}^{\infty}} \leq \sqrt{f(|\lambda|^2/2)} ,
\end{equation}
where $f$ is as in Theorem~\ref{thm:tiling_gap}.
\end{lem}
\begin{proof} The subspace $\mathcal{G}_{\Lambda_1}^\infty := \mathcal{C}_\Lambda^\infty\cap (\mathcal{G}_{\Lambda_1}\otimes{\mathcal H}_{[b-1,b]})$ is of the form considered in Corollary~\ref{cor:OB}. Comparing the orthogonal basis from that result with the orthogonal basis for $\mathcal{G}_\Lambda$ in Theorem~\ref{thm:gss}, it is clear that $\mathcal{G}_\Lambda^\perp \cap \mathcal{G}_{\Lambda_1}^\infty = \operatorname{span} \{\xi_\Lambda(R) | R\in\mathcal{R}_\Lambda^{MM}\}$ and
\begin{equation}\label{norm}
\|G_{\Lambda_2}(\mathbbm{I}-G_\Lambda)G_{\Lambda_1}\|_{\mathcal{C}_{\Lambda}^{\infty}}^2
=
\sup_{0\neq \psi \in\mathcal{G}_\Lambda^\perp \cap \mathcal{G}_{\Lambda_1}^\infty }
\frac{\|G_{\Lambda_2}\psi\|^2}{\|\psi\|^2}.
\end{equation}
Recalling the identification from \eqref{idenfification}, $G_{\Lambda_2}\psi$ for any $\psi\in\mathcal{G}_\Lambda^\perp\cap\mathcal{G}_{\Lambda_1}^\infty$ can be expanded via Theorem~\ref{thm:gss} as
\begin{equation}\label{G_action}
G_{\Lambda_2}\psi = \sum_{R\in\mathcal{R}_{\Lambda_2}^{\infty}}\frac{\ketbra{\psi_{\Lambda_2}(R)} }{\|\psi_{\Lambda_2}(R)\|^2} \psi ,
\end{equation}
where we need only sum over $R\in\mathcal{R}_{\Lambda_2}^\infty$, since $\psi$ is supported on $ {\mathbb Z} $-induced tiling states. We first compute $G_{\Lambda_2}\xi_\Lambda(R)$ for an arbitrary $R\in\mathcal{R}_\Lambda^{MM}$ and use this to bound $\|G_{\Lambda_2}\psi\|^2$ for an arbitrary state $\psi\in\mathcal{G}_\Lambda^\perp \cap \mathcal{G}_{\Lambda_1}^\infty$. Recalling the factored form $\xi_\Lambda(R) = \psi_{\Lambda(n,i)}(\tilde{R})\otimes \eta_n^{(i)}$ from \eqref{xi} and denoting by $\Gamma(n,i):=\Lambda\setminus \Lambda(n,i)$ the support of $\eta_n^{(i)}$, we consider two cases distinguished by $\Gamma(n,i)\subseteq \Lambda_2$, which holds for $n\leq 3$, and $\Lambda_2 \subsetneq \Gamma(n,i)$, which holds for $n\geq 4$.
Assume $n\leq 3$. Then $\Gamma(n,i)\subseteq \Lambda_2$, and so $G_{\Lambda_2}=G_{\Lambda_2}G_{\Gamma(n,i)}$ by frustration-freeness, and hence
\[
G_{\Lambda_2}\xi_\Lambda(R)= G_{\Lambda_2}\left(\psi_{\Lambda(n,i)}(\tilde{R})\otimes G_{\Gamma(n,i)}\eta_n^{(i)}\right) = 0 ,
\]
where the final equality holds since $\eta_{n}^{(i)}\in \mathcal{C}_{\Gamma(n,i)}(M_n^{(i)})$ is orthogonal to the unique ground state $\varphi_{n}^{(i)}\in \mathcal{C}_{\Gamma(n,i)}(M_n^{(i)})$, and so the pairwise orthogonality of the BVMD-tiling spaces and analogous expansion from \eqref{G_action} for $\Gamma(n,i)$ guarantees
\[
G_{\Gamma(n,i)}\eta_n^{(i)} = \frac{\ketbra{\varphi_n^{(i)}}}{\|\varphi_n^{(i)}\|^2}\,\eta_n^{(i)} = 0.
\]
\begin{figure}
\begin{center}
\includegraphics[scale=.35]{Pictures/Lambda_2_truncation.png}
\end{center}
\caption{The root tilings $R^i$ and $R^i_D$ for $i=1,2$, respectively.}
\label{fig:Lambda_2_truncation}
\end{figure}
If $n\geq4$, then $\Lambda_2 \subsetneq \Gamma(n,i)$ and we need only consider $G_{\Lambda_2}\eta_n^{(i)}$ since by \eqref{idenfification}
\begin{equation}\label{ngeq4}
G_{\Lambda_2}\xi_\Lambda(R) = \psi_{\Lambda(n,i)}(\tilde{R})\otimes G_{\Lambda_2}\eta_n^{(i)}.
\end{equation}
To this end, notice that the restriction of any tiling $T\leftrightarrow M_n^{(i)}$ to $\Lambda_2$ produces a tiling $T'$ connected to one of two root tilings, $R^i,R_D^i\in\mathcal{R}_{\Lambda_2}^\infty$, determined by whether or not $T$ has a dimer lying across the boundary of $\Lambda_2$ as in Figure~\ref{fig:Lambda_2_truncation}. Concretely, the root tilings are:
\begin{equation}\label{G_roots}
R^i = \begin{cases}
(V,M_3^{(1)}) & i = 1 , \\
M_3^{(2)} & i= 2 ,
\end{cases}
\qquad
R^i_D = \begin{cases}
(B_2^l,M_2^{(1)}), & i = 1 , \\
(V,V,M_2^{(2)}) & i= 2 .
\end{cases}
\end{equation}
Using \eqref{G_action} to evaluate $G_{\Lambda_2}\eta_n^{(i)}$, the mutual orthogonality of the BVMD-spaces combined with \eqref{G_roots} reduces the calculation to
\begin{align}\label{G_on_eta}
G_{\Lambda_2}\eta_n^{(i)} & = \sum_{R'\in\{R^{i},R_D^{i}\}}\frac{\ketbra{\psi_{\Lambda_2}(R')}}{\|\psi_{\Lambda_2}(R')\|^2}\left(-\frac{\overline{\lambda}}{\sqrt{2}}\beta_{n-1} \varphi_{n-1}\otimes\varphi_1^{(i)} + \varphi_{n-2}\otimes\ket{\sigma_d^{(i)}}\right)
\end{align}
where we have inserted the expansion of $\eta_n^{(i)}$ from \eqref{eta}. Applying the recursion relations~\eqref{recursion_general}-\eqref{recursion} to further expand $\eta_n^{(i)}$ and $\psi_{\Lambda_2}(R')$, one can compute \eqref{G_on_eta} to find
\begin{align}\label{G2_calculation}
G_{\Lambda_2}\eta_n^{(i)} = \frac{\overline{\lambda}\left(1-\beta_{n-1}\|\varphi_2\|^2\right)}{\sqrt{2}\|\varphi_3^{(i)}\|^2}\varphi_{n-3}\otimes\varphi_{3}^{(i)} + \frac{|\lambda|^2\left(1-\beta_{n-1}\right)}{2\|\varphi_2^{(i)}\|^2}\varphi_{n-4}\otimes\ket{\sigma_d}\otimes\varphi_2^{(i)}
\end{align}
which is a sum of orthogonal vectors from $\mathcal{C}_{\Gamma(n,i)}(M_n^{(i)}).$ The coefficients in \eqref{G2_calculation} are independent of $i$ by \eqref{vp2_to_vp1}. Calculating these explicitly and applying \eqref{eta_norm} then produces
\[
\|G_{\Lambda_2}\eta_{n}^{(i)}\|^2 = f_n(|\lambda|^2/2)\|\eta_n^{(i)}\|^2 \quad \mbox{with}\quad f_n(r) = r\beta_n\beta_{n-2}\left(\frac{[1-\beta_{n-1}(1+r)]^2}{1+2r}+\frac{r(1-\beta_{n-1})^2}{1+r}\right).
\]
We further conclude that the action of $G_{\Lambda_2}$ on $\{\xi_\Lambda(R) | R\in\mathcal{R}_\Lambda^{MM}\}$ preserves orthogonality since $G_{\Lambda_2}\xi_\Lambda(R) \in \mathcal{C}_\Lambda(R)$ for any $R = (\tilde{R},M_n^{(i)})\in\mathcal{R}_\Lambda^{MM}$ by \eqref{ngeq4} and \eqref{G2_calculation}. In addition, for each such $R$ the previous equality implies
\[
\|G_{\Lambda_2}\xi_\Lambda(R) \|^2 \leq \sup_{m\geq4}f_m(|\lambda|^2/2)\|\psi_{\Lambda(n,i)}(\tilde{R})\|^2\|\eta_n^{(i)} \|^2 = f(|\lambda|^2/2)\|\xi_\Lambda(R)\|^2 ,
\]
and so combining these two observations shows $\|G_{\Lambda_2}\psi\|^2 \leq f(|\lambda|^2/2) \|\psi\|^2$ for any $\psi\in\mathcal{G}_\Lambda^\perp\cap\mathcal{G}_{\Lambda_1}^\infty$, and the claim holds by \eqref{norm}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:tiling_gap}}\label{sec:proofMM}
We now prove the main result of Section~\ref{sec:MM}. First, if Conditions 1-3 of Theorem~\ref{thm:tiling_gap} hold, then applying \cite[Theorem~5.1]{nachtergaele:2016b} to the restriction $H_N^{\infty} = H_N\restriction_{\mathcal{C}_\Lambda^{\infty}}$ produces,
\[
\inf_{0\neq \psi\in\mathcal{C}_\Lambda^\infty \cap \ker(H_N)^\perp}\frac{\braket{\psi}{H_N^\infty\psi}}{\|\psi\|^2} \geq 2\kappa\left(1-\sqrt{3f(|\lambda|^2/2)}\right)^2,
\]
and \eqref{tiling_gap} follows by \eqref{equivalent_hams}. Thus, we only need to verify the three claims and the bound on $ f $.
1. For all $n$, $\mathcal{C}_\Lambda^{\infty}$ is an invariant subspace of $h_n$. Its restriction $ h_n^{\infty} $ is defined as in~\eqref{blocks},
and the associated ground state projection is given by $g_n^{\infty}= P_{\mathcal{C}_{\Lambda}^{\infty}} g_n P_{\mathcal{C}_{\Lambda}^{\infty}}$. Using Corollary~\ref{cor:reductions} it follows that
\[
h_n^{\infty} \geq E_1(\mathcal{C}_{\Lambda_n}^{\infty})(\mathbbm{I}-g_n^{\infty}) \geq \min_{k=5,6}E_1(\mathcal{C}_{[1,k]}^{\infty})(\mathbbm{I}-g_n^{\infty})
\]
where we have used translation invariance and $|\Lambda_n|\in\{5,6\}$ for the second inequality. Condition 1 then holds after computing the above minimum. Recalling that
\[
E_1(\mathcal{C}_{[1,k]}^\infty) = \min\{E_1(\mathcal{C}_{[1,k]}(R)) | R\in\mathcal{R}_{[1,k]}^\infty\},
\]
we compute $E_1(\mathcal{C}_{[1,k]}(R))$ for all possible choices of $R$. By convention, $E_1(\mathcal{C}_\Lambda(R)) =\infty$ for any root with $\dim(\mathcal{C}_\Lambda(R)) = 1$, since such subspaces are contained in the ground-state space. The condition $\dim(\mathcal{C}_\Lambda(R))>1$ requires that $R$ has two or more neighboring monomers. Up to a factor of $\kappa$, the restriction $H_{[1,k]}\restriction_{\mathcal{C}_{[1,k]}(R)}$ for any root with exactly two consecutive monomers is unitarily equivalent to the matrix from \eqref{MM_action}, which has a gap of $\kappa(|\lambda|^2+2)$. The restriction $H_{[1,k]}\restriction_{\mathcal{C}_{[1,k]}(R)}$ for any root with three consecutive monomers is unitarily equivalent to the matrix
\begin{equation}\label{eq:bmatrix}
\begin{bmatrix}
2\kappa|\lambda|^2 & -\sqrt{2}\kappa\overline{\lambda} & -\sqrt{2}\kappa\overline{\lambda}\\
-\sqrt{2}\kappa\lambda & 2\kappa & 0 \\
-\sqrt{2}\kappa\lambda & 0 & 2\kappa
\end{bmatrix},
\end{equation}
which has a gap of $2\kappa$. This verifies Condition 1 since an interval $[1,k]$ with $k\leq 6$ can hold at most three monomers.
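For later reference, the spectrum of the matrix in \eqref{eq:bmatrix} can be verified by a direct computation. The vector $(0,1,-1)^T$ is an eigenvector with eigenvalue $2\kappa$, while on its orthogonal complement, spanned by $(1,0,0)^T$ and $(0,1,1)^T/\sqrt{2}$, the matrix acts as
\begin{equation*}
2\kappa\begin{bmatrix}
|\lambda|^2 & -\overline{\lambda} \\
-\lambda & 1
\end{bmatrix},
\end{equation*}
which has vanishing determinant and trace $2\kappa(1+|\lambda|^2)$. Hence the spectrum of \eqref{eq:bmatrix} is $\{0,\, 2\kappa,\, 2\kappa(1+|\lambda|^2)\}$, confirming the gap of $2\kappa$; its largest eigenvalue reappears as the constant $\Gamma$ in \eqref{def:gamma} below.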
2. First, notice that the operators $g_n$ and $E_m=G_m-G_{m+1}$ as in Section~\ref{sec:MM} are defined so that
\[
\operatorname{supp}(g_n) = \Lambda_n, \quad \operatorname{supp}(E_m) \subseteq \operatorname{supp}(H_{m+1}) = [1,2m+2+k].
\]
Moreover, by construction $\operatorname{ran}(g_n) = \ker(H_{\Lambda_n})$ and $\operatorname{ran}(G_m) = \ker(H_{[1,2m+k]})$, and so by frustration-freeness $\ker(H_{[1,2m+k]})\subseteq \ker(H_{\Lambda_n})$ for all $m\geq n$. As a consequence, $g_nG_m=G_m$ and $[g_n,E_m]=0$ for all $m\geq n$. In addition, $[g_n,E_m]=0$ for $m\leq n-4$ since $\operatorname{supp}(E_m)\subseteq[1,2n-6+k]$, which is disjoint from $\Lambda_n$. Summarizing,
\begin{equation}\label{comm}
[g_n,E_m]\neq 0 \; \mbox{only if} \; m\in[n-3,n-1].
\end{equation}
Since $\mathcal{C}_\Lambda^{\infty}$ is an invariant subspace of the operators $g_n$ and $E_m$, they can be block diagonalized as in \eqref{blocks}, and one concludes that the commutator relations in \eqref{comm} are inherited by the respective blocks $E_m^{\infty}$ and $g_n^{\infty}$. Hence, Condition 2 holds.
3. Fix $2\leq n\leq N-1$. Since $\mathcal{C}_\Lambda^{\infty}$ is an invariant subspace of both $g_{n+1}$ and $E_{n}$, \eqref{blocks} applies to both operators and $g_{n+1}^\infty E_n^\infty = (g_{n+1}E_n)\restriction_{\mathcal{C}_\Lambda^\infty}$. The product $g_{n+1}E_n$ is supported on $[1,2n+2+k]$, and so Corollary~\ref{cor:reductions} yields
\[
\|g_{n+1}^{\infty}E_n^{\infty}\| = \|g_{n+1}E_n\|_{\mathcal{C}_\Lambda^{\infty}} = \|g_{n+1}E_n\|_{\mathcal{C}_{[1,2n+2+k]}^{\infty}}.
\]
Once again applying frustration-freeness, the product can be factored as
\[
g_{n+1}E_n = G_{\Lambda_{n+1}}(G_{[1,2n+k]}-G_{[1,2n+2+k]}) = G_{\Lambda_{n+1}}(\mathbbm{I}-G_{[1,2n+2+k]})G_{[1,2n+k]},
\]
which is of the form considered in Lemma~\ref{lem:epsilon}. This produces
\[
\|g_{n+1} E_n\|_{\mathcal{C}_{[1,2n+2+k]}^{\infty}} \leq \sqrt{f(|\lambda|^2/2)},
\]
verifying Condition 3.
4. The function $f(r)$ was analyzed in \cite[Appendix A]{NWY:2020}, where it was shown that $f(r^2)<1/3$ for $r\in[0,5.3]$. The claimed bound on $f$ is immediate from taking $r=|\lambda|/\sqrt{2}$.
\section{Proof of the uniform bulk gap}\label{sec:UBG}
The goal of this section is to present a method that establishes the robust bulk gap for the periodic Hamiltonian $ H_\Lambda^\textrm{per} $ asserted in Theorem~\ref{thm:main2}. In order to avoid the issue of edge states, we will proceed as sketched in Subsection~\ref{sec:invariant}. Using
\begin{equation}\label{eq:bulkgapest}
E_1^\textrm{per}({\mathcal H}_\Lambda) = \min\left\{ E_1^\textrm{per}(\mathcal{C}_\Lambda^\textrm{per}) , E_0^\textrm{per}\left(\big(\mathcal{C}_\Lambda^\textrm{per}\big)^\perp\right)\right\} ,
\end{equation}
we reduce the proof of the lower bound on the spectral gap to proving separately a lower bound on the spectral gap of $ H_\Lambda^\textrm{per} $ restricted to the invariant subspace $ \mathcal{C}_\Lambda^\textrm{per} $ of periodic tiling states introduced in Subsection~\ref{sec:periodic_tilings}, and a lower bound on the ground-state energy in the complement subspace. We start with a bound on the former quantity, and follow up with electrostatic estimates on the latter in Subsection~\ref{sec:electro2}.
The a priori decomposition of the Hilbert space in terms of the invariant subspace $ \mathcal{C}_\Lambda^\textrm{per}$ such that \eqref{eq:bulkgapest} holds is key to the proof of the uniform bulk gap in Theorem~\ref{thm:main2} since the Hamiltonian $ H_{\Lambda'} $ with open boundary conditions on any interval $ \Lambda' \subsetneq \Lambda $ is free of edge states in this subspace. As long as one can find such a decomposition, this idea is robust and also applicable to the bulk gap question for other models with edge states, such as the one studied in~\cite{NWY:2021}.
\subsection{Finite-volume criteria avoiding edge states}
As in the application of the martingale method in Section~\ref{sec:MM}, for the finite-size criterion we will consider the restriction of operators $A:{\mathcal H}_\Lambda\to{\mathcal H}_\Lambda$ supported on subintervals $\Lambda'\subsetneq\Lambda$ to the periodic tiling space $\mathcal{C}_\Lambda^\mathrm{per}$. In the case that $\mathcal{C}_\Lambda^\mathrm{per}$ is an invariant subspace of $A$, the operator can be block diagonalized similar to \eqref{blocks} as
\begin{equation}\label{per_blocks}
A = A\restriction_{\mathcal{C}_\Lambda^\mathrm{per}} + (\mathbbm{I}-P_\Lambda^\mathrm{per})A(\mathbbm{I}-P_\Lambda^\mathrm{per}), \quad A\restriction_{\mathcal{C}_\Lambda^\mathrm{per}} = P_\Lambda^\mathrm{per} A P_\Lambda^\mathrm{per}
\end{equation}
where $P_\Lambda^\mathrm{per}$ is the orthogonal projection onto $\mathcal{C}_\Lambda^\mathrm{per}$. In particular, such a decomposition holds for the Hamiltonian $H_{\Lambda'}$ as well as its ground state projection $G_{\Lambda'}$ for any subinterval $\Lambda' \subseteq \Lambda$ in the ring geometry.
An analysis similar to that of Section~\ref{sec:truncations} also applies to restrictions of the periodic tiling space $\mathcal{C}_\Lambda^\mathrm{per}$ to subintervals $\Lambda'\subseteq\Lambda$.
The truncation of any periodic tiling on the ring $\Lambda$ to an interval $\Lambda'\subseteq\Lambda$ produces a ${\mathbb Z}$-induced tiling on $\Lambda'$.
Conversely, if $|\Lambda|\geq |\Lambda'|+3$ every ${\mathbb Z}$-induced tiling on $\Lambda'$ can be realized as a truncation of a periodic tiling, from which the following variation of Lemma~\ref{lem:tiling_decomp} can be deduced. Namely,
\[
\mathcal{C}_\Lambda^\textrm{per} = \bigoplus_{R'\in\mathcal{R}_{\Lambda'}^{\infty}}\bigoplus_{\substack{\ket{\mu}\in\mathcal{B}_{\Lambda}^\textrm{per} \, : \\ \mu = ( \sigma_{\Lambda'}(R'), \, \mu^{\Lambda\backslash \Lambda'} )}} \mathcal{C}_{\Lambda'}(R')\otimes\ket{ \mu^{\Lambda\backslash \Lambda'} },
\]
where $ \mathcal{B}_\Lambda^\textrm{per} $ is the orthonormal configuration basis of periodic tilings.
The size constraint here guarantees that bulk tiles can be placed in $\Lambda$ to produce all possible combinations of ${\mathbb Z}$-induced boundary tiles on $\Lambda'$. A minor adaptation of the argument in Corollary~\ref{cor:reductions} then produces the following isospectral relation for any subinterval $ \Lambda' \subset \Lambda $ of the ring geometry such that $|\Lambda'|\leq |\Lambda|-3$:
\begin{equation}\label{eq:isospectral}
\operatorname{spec} \left(H_{\Lambda'} \otimes \mathbbm{1}_{\Lambda\backslash \Lambda'} \restriction_{ \mathcal{C}_\Lambda^\textrm{per} }\right) \ = \ \operatorname{spec} \left(H_{\Lambda'} \restriction_{ \mathcal{C}_{\Lambda'}^\infty }\right) \ = \ \operatorname{spec} \left(H_{\Lambda'}^\infty\right) .
\end{equation}
Of particular interest, the spectral gap of the restriction on the LHS of \eqref{eq:isospectral} agrees with the spectral gap of the restriction on the RHS:
\begin{equation}\label{eq:iso2}
\inf_{0\neq\psi\in \mathcal{C}_\Lambda^\textrm{per}\cap (\mathcal{G}_{\Lambda'}^\perp \otimes {\mathcal H}_{\Lambda\backslash \Lambda'} ) } \frac{\braket{\psi}{H_{\Lambda'}\psi}}{\|\psi\|^2} = E_1(\mathcal{C}_{\Lambda'}^\infty) .
\end{equation}
A lower bound on the RHS was the topic of Theorem~\ref{thm:tiling_gap}. Moreover, we recall from the proof of this theorem in Subsection~\ref{sec:proofMM} that the spectral gap of $ H_{\Lambda'}^\infty $ on any interval $ \Lambda' $ of size five or six is at least $ 2 \kappa $, and the operator norm of this restriction is bounded by the largest eigenvalue of the matrix~\eqref{eq:bmatrix}, i.e.
\begin{equation}\label{def:gamma}
\gamma := \inf_{|\Lambda'| \in \{ 5,6 \} } E_1(\mathcal{C}_{\Lambda'}^\infty) = 2\kappa , \qquad \Gamma:= \sup_{|\Lambda'| \in \{ 5,6 \} } \left\| H_{\Lambda'}^\infty \right\| = 2 \kappa (1 + |\lambda|^2) .
\end{equation}
With this in mind, we may now formulate the following finite-volume criterion for the gap $ E_1^\textrm{per}(\mathcal{C}_\Lambda^\textrm{per}) $. This criterion is an improvement of~\cite[Theorem 3.11]{NWY:2021}. Its proof below closely follows the strategy used in this previous work and relies on the version of Knabe's method~\cite{knabe:1988} from~\cite[Theorem~3.10]{NWY:2021}.
\begin{theorem}[Periodic Spectral Gap]\label{thm:pbc_gap} Fix $n\geq 2$. Then for any ring $\Lambda$ such that $|\Lambda| \geq 3n+6$,
\begin{equation}\label{eq:pergap3}
E_1^\mathrm{per}(\mathcal{C}_{\Lambda}^\mathrm{per}) \geq \frac{\gamma\, n}{2\Gamma(n-1)}\left[\min_{1\leq l\leq3} E_1(\mathcal{C}_{[ 1,3n + l]}^\infty)-\frac{\Gamma}{n}\right] .
\end{equation}
\end{theorem}
\begin{proof}
By translation invariance, it is sufficient to consider the ring associated to $\Lambda := [1,3N+r]$ where $N>0$ and $r\in\{2,3,4\}$ are the unique integers so that $|\Lambda|=3N+r$. Let $\Lambda_1, \ldots, \Lambda_{N+1}$ be the sequence of subintervals of $\Lambda$ defined by
\begin{align*}
\Lambda_{i} & = [3i-2,3i+2] \;\; \text{for} \;\; 1\leq i \leq N \;\; \text{and} \;\; \Lambda_{N+1} = \begin{cases}
\left[|\Lambda|-2,|\Lambda|+2\right], & r=2,3 \\
\left[|\Lambda|-3,|\Lambda|+2\right], & r=4
\end{cases}
\end{align*}
where we identify $x\equiv x+|\Lambda|$. These intervals are defined so that every interaction term, $n_xn_{x+1}$ or $q_x^*q_x$, that contributes to $H_\Lambda^\mathrm{per}$ is supported on at least one and at most two of the intervals $\Lambda_i$. As a consequence, for each $1\leq k\leq N+1$
\begin{equation}
H_{\Lambda_{n,k}}\leq \sum_{i=k}^{n+k-1} H_{\Lambda_i} \leq 2H_{\Lambda_{n,k}} , \quad
H_{\Lambda}^{\mathrm{per}} \leq \sum_{i=1}^{N+1}H_{\Lambda_i} \leq 2H_{\Lambda}^\mathrm{per} \label{eq:first_bound},
\end{equation}
where the addition $n+k-1$ is taken modulo $N+1$ and $\Lambda_{n,k}:=\bigcup_{i=k}^{n+k-1}\Lambda_i$. As $\mathcal{C}_\Lambda^\mathrm{per}$ is an invariant subspace of each of the Hamiltonians in \eqref{eq:first_bound}, the same relations hold when one restricts all of the Hamiltonians to this subspace.
Let $P_i:\mathcal{C}_\Lambda^\mathrm{per} \to \mathcal{C}_\Lambda^\mathrm{per}$ denote the orthogonal projection onto $\operatorname{ran}(H_{\Lambda_i}\restriction_{\mathcal{C}_\Lambda^\mathrm{per}})$. Thus, applying the isospectrality from~\eqref{eq:isospectral}
\begin{equation}\label{eq:projection_bounds}
\gamma P_i \leq H_{\Lambda_i} \restriction_{\mathcal{C}_\Lambda^\textrm{per}} \leq \Gamma P_i,
\end{equation}
with $ \gamma $ and $ \Gamma $ as in~\eqref{def:gamma} since each interval $\Lambda_i$ contains 5 or 6 sites. Summing \eqref{eq:projection_bounds} over appropriate values of $i$ and using the restricted versions of \eqref{eq:first_bound} produces the operator inequalities
\begin{equation}\label{sf_inequalites}
\frac{\gamma}{2} H_{n,k} \leq \, H_{\Lambda_{n,k}}\restriction_{\mathcal{C}_\Lambda^\textrm{per}} \, \leq \Gamma H_{n,k} , \quad \frac{\gamma}{2} H_N \, \leq H_{\Lambda}^{\mathrm{per}}\restriction_{\mathcal{C}_\Lambda^\textrm{per}}\, \leq \Gamma H_N
\end{equation}
where the operators $H_{n,k}$ and $H_N$ on $\mathcal{C}_\Lambda^\mathrm{per}$ are defined by
\[ H_{n,k} := \sum_{i=k}^{n+k-1}P_i, \qquad H_N := \sum_{i=1}^{N+1} P_i .
\]
Depending on the value of $r$ and whether or not $\Lambda_{N+1}$ contributes to the definition of $\Lambda_{n,k}$, one can easily deduce that $\Lambda_{n,k}$ is an interval with at least $3n+1$ sites and at most $3n+3$ sites. Thus, the gap of $H_{\Lambda_{n,k}}\restriction_{\mathcal{C}_\Lambda^\textrm{per}} $ satisfies~\eqref{eq:iso2} since $|\Lambda|\geq |\Lambda_{n,k}| +3 $. Combining this with \eqref{sf_inequalites}, it readily follows by translation invariance that for all $ k \in \{1, \ldots, N+1\} $:
\begin{equation}\label{eq:gap_inequalities}
\Gamma E_{1}^{(n,k)} \geq E_1(\mathcal{C}_{\Lambda_{n,k}}^\infty) \geq \min_{1\leq l \leq 3} E_1(\mathcal{C}_{[ 1, 3n + l]}^\infty) \quad \text{and}\quad E_1^\textrm{per}(\mathcal{C}_{\Lambda}^\textrm{per}) \geq \frac{\gamma}{2} E_{1}^N
\end{equation}
where $E_{1}^{(n,k)}$ and $E_1^N$ denote the spectral gaps of $H_{n,k}$ and $H_N$ in $\mathcal{C}_\Lambda^\mathrm{per}$, respectively. The proof is completed by applying \cite[Theorem~3.10]{NWY:2021} to produce a lower bound on $E_1^N$ in terms of $E_1^{(n,k)}$. For its application, we note that
the Hamiltonian $H_N$ is frustration-free, since \eqref{sf_inequalites} implies $\ker(H_N)=\ker(H_\Lambda^\mathrm{per})$, which is nontrivial by Theorem~\ref{thm:periodic_gss}. In addition, $P_i\psi = (\mathbbm{I}-G_{\Lambda_i})\psi$ for all $\psi\in\mathcal{C}_{\Lambda}^\mathrm{per}$ and $i=1,\ldots, N+1$ by \eqref{per_blocks}. Since operators defined on ${\mathcal H}_\Lambda$ that are supported on disjoint spatial regions commute, and the intervals $\Lambda_i$ were chosen so that $\Lambda_{i}\cap \Lambda_{j}= \emptyset$ unless $|i-j|=1$ or $\{i,j\}=\{1,N+1\}$, it follows that $[P_i,P_j] = 0$ whenever $|i-j|\geq 2$ and $\{i,j\}\neq\{1,N+1\}$. Therefore, the operators $P_i$ satisfy the requirements of~\cite[Theorem~3.10]{NWY:2021} on the Hilbert space $ \mathcal{C}_{\Lambda}^\textrm{per} $, and hence the respective spectral gaps $E_1^N$ and $E_1^{(n,k)}$ satisfy the bound
\begin{equation}\label{eq:proj_bound}
E_{1}^N \geq \frac{n}{n-1}\left(\min_{1\leq k \leq N+1} E_{1}^{(n,k)} -\frac{1}{n}\right) .
\end{equation}
Using \eqref{eq:gap_inequalities} to further bound \eqref{eq:proj_bound} produces the result.
\end{proof}
\subsection{Electrostatic estimates for periodic boundary conditions}\label{sec:electro2}
For our proof of the bulk gap via~\eqref{eq:bulkgapest} we also need a lower bound on the ground-state energy $ E_0^\textrm{per}((\mathcal{C}_\Lambda^\mathrm{per})^\perp)$ of the Hamiltonian $ H_\Lambda^\textrm{per} $ restricted to the invariant subspace orthogonal to periodic tiling states. The approach we employ to bound this energy is inspired by the related question of a lower bound on the \emph{Yrast line} and, specifically, how the ground state energy $E_0(N)$ for the periodic system on $\Lambda$ in the $N$-particle sector behaves as the number of particles increases. In the physical regime $|\lambda|^2\ll 1$ and for total filling $\nu:=N/|\Lambda|>1$, a positive lower bound on $E_0(N)$ that scales with $\nu$ can easily be deduced from the following Cauchy-Schwarz operator inequality
\begin{equation}\label{CS}
q_x^*q_x \geq (1-\delta)n_x(n_x-1) - |\lambda|^2\frac{1-\delta}{\delta}n_{x-1}n_{x+1}
\end{equation}
which holds for all $ \delta \in (0,1) $.
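Inequality \eqref{CS} is an instance of the elementary operator bound
\begin{equation*}
(A-B)^*(A-B) \;\geq\; (1-\delta)A^*A - \frac{1-\delta}{\delta}\,B^*B \qquad \text{for all } \delta\in(0,1),
\end{equation*}
which follows from $0\leq \big(\sqrt{\delta}\,A-B/\sqrt{\delta}\big)^*\big(\sqrt{\delta}\,A-B/\sqrt{\delta}\big) = \delta A^*A + \delta^{-1}B^*B - A^*B - B^*A$, applied with $A$ and $B$ chosen as the two summands of $q_x$, so that $A^*A = n_x(n_x-1)$ and $B^*B = |\lambda|^2 n_{x-1}n_{x+1}$.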
\begin{proposition}[Yrast Line]\label{prop:Yrast} Fix an interval $\Lambda$ and suppose that $|\lambda|^2<(\kappa-2)/(2\kappa)$. Then for any $\nu>1$,
\begin{equation}\label{eq:Yrast}
E_0(\nu|\Lambda|)\geq \nu|\Lambda|\left[\nu(1+\kappa/2-\kappa|\lambda|^2)-\kappa/2\right].
\end{equation}
\end{proposition}
\begin{proof}
Applying \eqref{CS} with $\delta = 1/2$ for all $x$ produces an operator lower bound on $H_\Lambda^\mathrm{per}$ in terms of a sum of electrostatic operators. The claimed bound on $E_0(\nu|\Lambda|)$ results from minimizing over $\mu\in[0,\infty)^{\Lambda}$ the energy of the classical problem
\[
\mathcal{E}(\mu):=\sum_{x\in\Lambda}\mu_x\mu_{x+1} + \frac{\kappa}{2} \mu_x(\mu_x-1) -\kappa|\lambda|^2\mu_x\mu_{x+2} \quad \text{subject to}\quad \sum_{x\in\Lambda}\mu_x = \nu|\Lambda|.
\]
When $|\lambda|^2<(\kappa-2)/(2\kappa)$ this has a unique minimum at $\mu_x = \nu$ for all $x\in\Lambda$, which produces \eqref{eq:Yrast}.
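Explicitly, evaluating $\mathcal{E}$ at the uniform minimizer $\mu_x = \nu$ for all $x\in\Lambda$ gives
\begin{equation*}
\mathcal{E}(\nu,\ldots,\nu) = |\Lambda|\left[\nu^2 + \frac{\kappa}{2}\nu(\nu-1) - \kappa|\lambda|^2\nu^2\right] = \nu|\Lambda|\left[\nu\left(1+\frac{\kappa}{2}-\kappa|\lambda|^2\right)-\frac{\kappa}{2}\right],
\end{equation*}
which is the right-hand side of \eqref{eq:Yrast}.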
\end{proof}
While the simple operator inequality \eqref{CS} is sufficient for estimating the ground state energy of $H_\Lambda^\mathrm{per}$ in sufficiently high particle sectors, to bound $E_0^\mathrm{per}((\mathcal{C}_\Lambda^\mathrm{per})^\perp)$ one also needs to consider configurations $\mu\in{\mathbb N}_0^\Lambda$ with low filling. Our approach here is to use a refined version of \eqref{CS} that depends on the type of configuration under consideration.
To this end, recall that any state $\psi\in \big( \mathcal{C}_\Lambda^\textrm{per} \big)^\perp $ is a linear combination of $ \mu \in {\mathbb N}_0^\Lambda\setminus\big( \operatorname{ran} \sigma_\Lambda \restriction_{\mathcal{T}_\Lambda^\textrm{per}}\big) =: \mathcal{S}_\Lambda $, i.e.\
\[
\psi = \sum_{\mu \in \mathcal{S}_\Lambda } \psi(\mu)\ket{\mu}, \quad\text{where}\quad \|\psi\|^2 = \sum_{\mu \in \mathcal{S}_\Lambda }|\psi(\mu)|^2 <\infty.
\]
Given $ \Lambda = [a,b] $ with $|\Lambda|\geq 4$, Lemma~\ref{lem:periodic_configs} characterizes $\mu=(\mu_a, \ldots, \mu_b)\in \mathcal{S}_\Lambda $ as those particle configurations which belong to one of the following disjoint sets (which are to be understood using the convention $ x \equiv x +|\Lambda| $):
\begin{align*} & \mathcal{S}_{\Lambda}^{(1)} := \left\{ \mu \; \big| \; \mu_x\mu_{x+1}\geq 1\; \mbox{for some $x\in\Lambda$} \right\} , \\
& \mathcal{S}_{\Lambda}^{(2)} := \left\{ \mu \; \big| \; \mu_x \geq 3 \; \mbox{for some $x\in\Lambda$}\right\} \backslash \mathcal{S}_{\Lambda}^{(1)} , \\
& \mathcal{S}_{\Lambda}^{(3)} := \left\{ \mu \; \big| \; \mu_x=\mu_{x+3} = 2 \; \mbox{for some $x\in\Lambda$}\right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} ) , \\
& \mathcal{S}_{\Lambda}^{(4)} := \left\{\mu \ \; \big| \; \mu_x\mu_{x+2}\geq 2 \; \mbox{for some $x\in\Lambda$} \right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} \cup \mathcal{S}_{\Lambda}^{(3)} ) .
\end{align*}
Note that all possible violations of the conditions from Lemma~\ref{lem:periodic_configs} are covered since (i) $\mathcal{S}_\Lambda^{(1)}$ is the set of all configurations that violate Condition~1, (ii) $\mathcal{S}_\Lambda^{(2)}$ is the set of all configurations that satisfy Condition~1, but violate Condition~2 by filling a site with three or more particles, and (iii) $\mathcal{S}_\Lambda^{(3)}\uplus \mathcal{S}_\Lambda^{(4)}$ contains all configurations that satisfy Condition~1 and have at most two particles on every site, but still violate Condition~2. Thus, we have constructed a disjoint partition
\begin{equation}\label{non-tilings2}
\mathcal{S}_\Lambda = \mathcal{S}_{\Lambda}^{(1)} \uplus \mathcal{S}_{\Lambda}^{(2)} \uplus \mathcal{S}_{\Lambda}^{(3)} \uplus \mathcal{S}_{\Lambda}^{(4)} .
\end{equation}
Any configuration $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $ has positive electrostatic energy
$
e_\Lambda^\textrm{per}(\mu) = \sum_{x=a}^{b} \mu_x \mu_{x+1} $,
and the mean energy of any $\psi\in \big( \mathcal{C}_\Lambda^\textrm{per}\big)^\perp\cap\mathrm{dom}(H_\Lambda^\textrm{per})$ is given by
\begin{equation}\label{eq:Cenergy2}
\langle \psi | H_\Lambda^\textrm{per} \psi \rangle = \sum_{\mu \in \mathcal{S}_{\Lambda}^{(1)} } e_\Lambda^\textrm{per}(\mu) |\psi(\mu)|^2 + \kappa \, \sum_{\nu \in \mathbb{N}_0^\Lambda} \sum_{x=a}^{b} \left| (q_x\psi)(\nu) \right|^2 .
\end{equation}
Our strategy for a lower bound is to bound every term in $ \sum_{\mu \in \mathcal{S}_\Lambda }|\psi(\mu)|^2 $ by a few terms from the RHS of~\eqref{eq:Cenergy2}. This is trivial for $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $ since those configurations have electrostatic energy. For all other $ \mu \in \mathcal{S}_\Lambda$ we will associate (i) a configuration $ \eta(\mu) \in \mathcal{S}_{\Lambda}^{(1)} $ with electrostatic energy and (ii) one or two terms $|(q_x\psi)(\nu)|^2$ from the second part of~\eqref{eq:Cenergy2} which will be bounded using a variation of \eqref{CS}. This yields the following result.
\begin{theorem}[Electrostatic Estimates I] \label{thm:electro2}
For any $\Lambda = [a,b] $ with $|\Lambda|\geq 8$, the ground state energy of $H_\Lambda^\textrm{per} $ in the invariant subspace $\big( \mathcal{C}_\Lambda^\textrm{per} \big)^\perp $ satisfies the lower bound
\begin{equation}\label{perp_gap}
E_0^\textrm{per}\big((\mathcal{C}_\Lambda^\textrm{per})^\perp\big) \geq \frac{1}{4} \min\left\{1, \frac{2\kappa}{\kappa+1} , \frac{2 \kappa}{1+\kappa|\lambda|^2} \right\} =\gamma_\kappa^\textrm{per}(|\lambda|^2) .
\end{equation}
\end{theorem}
\begin{proof}
We consider $\psi\in \big( \mathcal{C}_\Lambda^\textrm{per} \big)^\perp $ and fix any configuration $ \mu \in \mathcal{S}_\Lambda $ in its support.
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $:} For such configurations we have the trivial bound
\begin{equation}\label{gamma1}
e_\Lambda^\textrm{per}(\mu) \, |\psi(\mu)|^2 \geq |\psi(\mu)|^2 =: \gamma^{(1)}|\psi(\mu)|^2.
\end{equation}
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(2)} $:}
Set $ x \equiv x_\mu := \max\{x\in[a,b] \, | \, \mu_x \geq 3 \} $, and note that this means $ \mu_{x-1} = \mu_{x+1} = 0 $ since otherwise $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $. Recalling the notation from the beginning of Section~\ref{sec:main} (see the text following \eqref{creation-action}), we associate to $\mu$ the configurations $ \nu(\mu) := \alpha_x^2 \mu $ and $ \eta(\mu) := \alpha_{x-1}^*\alpha_{x+1}^* \nu(\mu) $, for which $ e_\Lambda^\textrm{per}(\eta(\mu)) \geq 2 $. Using \eqref{creation-action}, we then have for any $ \delta \in (0,1) $
\begin{align}
T^{(2)} [\psi;\mu] := \left| (q_x\psi)(\nu(\mu)) \right|^2 & = \left| \sqrt{\mu_x(\mu_x-1)} \psi(\mu) - \lambda \psi(\eta(\mu)) \right|^2 \notag \\
& \geq (1-\delta) \mu_x(\mu_x-1) | \psi(\mu)|^2 - \frac{1-\delta}{\delta} |\lambda|^2 | \psi(\eta(\mu)) |^2 . \label{eq:case2}
\end{align}
Choosing $ \delta = \frac{\kappa|\lambda|^2}{2+\kappa|\lambda|^2} $ and bounding $ \mu_x(\mu_x-1) \geq 6 $ yields
\begin{equation}\label{gamma2}
e_\Lambda^\textrm{per}(\eta(\mu) ) | \psi(\eta(\mu)) |^2 + \kappa T^{(2)} [\psi;\mu] \geq \frac{12 \kappa}{2+\kappa|\lambda|^2} | \psi(\mu)|^2 =: \gamma^{(2)}|\psi(\mu)|^2 .
\end{equation}
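Explicitly, this choice of $\delta$ balances the negative term in \eqref{eq:case2} against the electrostatic energy of $\eta(\mu)$: since $\frac{1-\delta}{\delta} = \frac{2}{\kappa|\lambda|^2}$, one has
\[
\kappa\,\frac{1-\delta}{\delta}\,|\lambda|^2 = 2 \leq e_\Lambda^\textrm{per}(\eta(\mu)) \qquad\text{and}\qquad 6\kappa(1-\delta) = \frac{12\kappa}{2+\kappa|\lambda|^2} ,
\]
so that the $|\psi(\eta(\mu))|^2$ terms cancel in \eqref{gamma2}.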
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(3)} $:} Let $ x \equiv x_\mu := \max\{x\in[a,b] \, | \, \mu_x=\mu_{x+3}=2\} $. Since $\mu\notin \mathcal{S}_\Lambda^{(1)}$, it follows that $\mu_{x-1}=\mu_{x+1}=\mu_{x+2}=0$. To $ \mu $ we associate four configurations $ \nu(\mu) := \alpha_x^2 \mu $ and $ \eta'(\mu) := \alpha_{x-1}^*\alpha_{x+1}^* \nu(\mu) $ as well as $ \nu'(\mu):= \alpha_{x+1}\alpha_{x+3} \eta'(\mu) $ and $ \eta(\mu) := (\alpha_{x+2}^*)^2 \nu'(\mu) $. The last configuration has electrostatic energy $ e_\Lambda^\textrm{per}(\eta(\mu)) \geq 2$. Estimating similarly to \eqref{eq:case2}, we obtain for any $ \delta, \delta' \in (0,1) $,
\begin{align}
T^{(3)} [\psi;\mu] & := \left| (q_x\psi)(\nu(\mu)) \right|^2 + \left| (q_{x+2}\psi)(\nu'(\mu)) \right|^2 \label{eq:Case3}\\
& = \left| \sqrt{2} \psi(\mu) - \lambda \psi(\eta'(\mu)) \right|^2 + \left| \sqrt{2} \psi(\eta(\mu)) - \lambda\sqrt{2} \psi(\eta'(\mu)) \right|^2 \nonumber\\
& \geq 2(1-\delta) | \psi(\mu)|^2 + |\lambda|^2 \left[ 2 (1-\delta') - \frac{1-\delta}{\delta} \right] |\psi(\eta'(\mu)) |^2 - 2\frac{1-\delta'}{\delta'}| \psi(\eta(\mu)) |^2.\nonumber
\end{align}
The choice $ \delta = \frac{1}{1+2(1-\delta')} $ eliminates the second term. In turn, choosing $ \delta' = \frac{\kappa}{\kappa+1} $ yields
\begin{equation}\label{S3_bound}
e_\Lambda^\textrm{per}(\eta(\mu) ) | \psi(\eta(\mu)) |^2 + \kappa T^{(3)} [\psi;\mu] \geq \frac{2\kappa}{\kappa+1}|\psi(\mu)|^2 =: \gamma^{(3)}|\psi(\mu)|^2.
\end{equation}
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(4)} $:} Choose $ x \equiv x_\mu := \max\{x\in[a,b] \, | \, \mu_x=2 \; \text{and} \; \max\{ \mu_{x-2},\mu_{x+2} \} \geq 1 \} $ and note that $ \mu_{x\pm 1} = 0 $. We then proceed similarly as in the second case and associate to $ \mu $ the configurations $ \nu(\mu):= \alpha_x^2\mu $ and $\eta(\mu) := \alpha_{x-1}^*\alpha_{x+1}^* \nu(\mu) $. The latter configuration has electrostatic energy $ e_\Lambda^\textrm{per}(\eta(\mu) ) \geq 1 $. Proceeding as in~\eqref{eq:case2} we then bound for any $ \delta \in (0,1) $:
\[
T^{(4)} [\psi;\mu] := \left| (q_{x}\psi)(\nu(\mu)) \right|^2 \geq 2 (1-\delta) | \psi(\mu)|^2 - \frac{1-\delta}{\delta} |\lambda|^2 | \psi(\eta(\mu)) |^2 .
\]
The choice $ \delta = \frac{\kappa|\lambda|^2}{1+\kappa|\lambda|^2} $ then yields
\begin{equation}\label{eq:Case4}
e_\Lambda^\textrm{per}(\eta(\mu) ) | \psi(\eta(\mu)) |^2 + \kappa T^{(4)} [\psi;\mu] \geq \frac{2\kappa}{1+ \kappa |\lambda|^2} | \psi(\mu)|^2 =: \gamma^{(4)}|\psi(\mu)|^2.
\end{equation}
\bigskip
Employing the conventions $ T^{(1)} [\psi;\mu] \equiv 0 $ and $ \eta(\mu) = \mu $ for $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $, the collective estimates above show that, for each fixed $ j \in \{ 1,2,3,4 \} $, summing over all configurations in $\mathcal{S}_\Lambda^{(j)}$ yields:
\begin{align}
\gamma^{(j)} \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } |\psi(\mu)|^2 & \leq \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } e_\Lambda^\textrm{per}(\eta(\mu)) |\psi(\eta(\mu))|^2+ \kappa \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } T^{(j)} [\psi;\mu] \nonumber \\
& \leq c_j\sum_{\eta \in \mathcal{S}_{\Lambda}^{(1)} } e_\Lambda^\textrm{per}(\eta) |\psi(\eta)|^2 + \kappa \, \sum_{\nu \in \mathbb{N}_0^\Lambda} \sum_{x=a}^{b} \left| (q_x\psi)(\nu) \right|^2 \leq c_j\langle \psi | H_\Lambda^\textrm{per} \psi \rangle , \label{j-estimate2}
\end{align}
where $c_j\in{\mathbb N}$ counts the maximum number of times a configuration with electrostatic energy $\eta\in\mathcal{S}_\Lambda^{(1)}$ is associated to a configuration $\mu\in\mathcal{S}_\Lambda^{(j)}$,
\[c_j := \max_{\eta\in\mathcal{S}_{\Lambda}^{(1)}}\left|\left\{\mu\in\mathcal{S}_\Lambda^{(j)} \big| \eta(\mu)=\eta\right\}\right|.\]
For \eqref{j-estimate2}, we also use that the set of pairs $(x,\nu)\in\Lambda\times{\mathbb N}_0^\Lambda$ that contribute to $T^{(j)} [\psi;\mu]$ is in one-to-one correspondence with $\mu\in\mathcal{S}_\Lambda^{(j)}$. The final bound in \eqref{perp_gap} is obtained by dividing \eqref{j-estimate2} by $c_j$, summing over $j$, and noting that $4\gamma_\kappa^\textrm{per}(|\lambda|^2)=\min_j\gamma^{(j)}/c_j$. Thus, the proof is complete once the values of $c_j$ are determined.
It is trivial that $c_1=1$. For all other $j$ and fixed $\mu\in\mathcal{S}_\Lambda^{(j)}$, the sites in $\Lambda$ that contribute to the electrostatic energy of $\eta(\mu)\in\mathcal{S}_\Lambda^{(1)}$ are localized to an interval of size at most 6 around $x_\mu$. The constraint $|\Lambda|\geq 8$ guarantees that this interval can be uniquely identified in the ring geometry. By considering separately for each $j$ the possible forms of $\mu$ and $\eta(\mu)$ in this interval, one can quickly deduce that the mapping $S_\Lambda^{(j)}\ni \mu \mapsto \eta(\mu)\in \mathcal{S}_\Lambda^{(1)}$ is injective for $j=3,4$ giving $c_j = 1$ for those $j$, and that the preimage of any $\eta\in\mathcal{S}_\Lambda^{(1)}$ for $j=2$ has at most two elements producing $c_2=2$.\footnote{The configurations $\mu\in\mathcal{S}_\Lambda^{(2)}$ that map to a non-unique $\eta(\mu)$ are ones for which $\mu_x = 3$, $\{\mu_{x+2},\mu_{x-2}\}=\{0,1\},$ and $\mu_{x\pm3}=0$. All other configurations $\mu\in\mathcal{S}_\Lambda^{(2)}$ are in a one-to-one correspondence with $\eta(\mu)\in\mathcal{S}_\Lambda^{(1)}$.}
\end{proof}
\subsection{Proof of Theorem~\ref{thm:main2}}\label{Sec:ProofMain2}
We are now ready to finalize the proof of Theorem~\ref{thm:main2} by bounding the RHS of~\eqref{eq:bulkgapest}. The lower bound on the ground-state energy $ E_0^\mathrm{per}((\mathcal{C}_\Lambda^\mathrm{per})^\perp) $ in Theorem~\ref{thm:electro2} yields the first term in the minimum on the RHS of~\eqref{eq:main2}. Using the finite-volume criterion from Theorem~\ref{thm:pbc_gap}, the spectral gap $E_1^\textrm{per} (\mathcal{C}_\Lambda^\textrm{per}) $ in~\eqref{eq:bulkgapest} is bounded from below in terms of the spectral gap $E_1(\mathcal{C}_{[1,m]}^\infty)$ on subintervals of $ \Lambda $ of lengths $m\geq 7$. In turn, these gaps are uniformly lower bounded in Theorem~\ref{thm:tiling_gap}, which yields for all $ m \geq 7 $,
\[
E_1(\mathcal{C}_{[1,m]}^\infty) \geq \frac{2\kappa}{3}(1-\sqrt{3f(|\lambda|^2/2)})^2.
\]
If $ f(|\lambda|^2/2) < 1/3 $, the condition
\[
\inf_{l=1,2,3}E_1(\mathcal{C}_{[1,3n+l]}^\infty) - \frac{\Gamma}{n} >0
\]
is thus satisfied for sufficiently large $n $. Taking the limit $n\to\infty$ in the bound~\eqref{eq:pergap3}, and inserting the values~\eqref{def:gamma} produces the second term in the minimum on the RHS of~\eqref{eq:main2}. \qed
\section{Edge and excited states}
The aim of this section is twofold. The first goal is to complete the proof of Theorem~\ref{thm:main} by producing a lower bound on the ground-state energy in the subspace orthogonal to all BVMD tilings. This is accomplished in the next section, where an analogue of the electrostatic estimate in Theorem~\ref{thm:electro2} is proved for open boundary conditions. This bound is limited by the presence of edge states, which are discussed and classified at the end of Section~\ref{sec:Electrostatic}.
As a second goal, we prove variational bounds on low-lying bulk excitations in Section~\ref{sec:Excited_States}, which also contains a brief discussion of many-body scars at mid and high energies.
\subsection{Electrostatic estimates and edge states for open boundary conditions}\label{sec:Electrostatic}
For our proof of the spectral gap of $ H_\Lambda \equiv H_\Lambda^\textrm{obc} $ via~\eqref{eq:twoway}, we still need to establish a volume-independent lower bound on the ground-state energy in the invariant subspace $ \mathcal{C}_\Lambda^\perp$ orthogonal to all BVMD tiling states. Together with the bound on the spectral gap of $ E_1(\mathcal{C}_{\Lambda}^\infty) $ from Section~\ref{sec:MM}, this then completes the proof of the uniform gap for open boundary conditions.
\begin{theorem}[Electrostatic estimate II]\label{thm:perp_bound} For any interval $\Lambda $ with $|\Lambda|\geq 5$, the ground state energy of $H_\Lambda$ in the invariant subspace $\mathcal{C}_\Lambda^\perp$ satisfies the lower bound
\begin{equation}\label{perp_gap_OBC}
E_0(\mathcal{C}_\Lambda^\perp) \geq \frac{1}{5} \min\left\{1, \frac{2 \kappa}{1+\kappa|\lambda|^2} , \frac{2\kappa}{\kappa+1} , \frac{2\kappa |\lambda|^2}{\kappa+1} \right\} =\gamma_\kappa(|\lambda|^2) .
\end{equation}
\end{theorem}
The proof of this theorem parallels that of Theorem~\ref{thm:electro2} for the periodic case.
We will therefore focus on the subtle differences and provide the proof using the same notation. Similar to the previous case, any state $\psi\in \mathcal{C}_\Lambda^\perp$ is a linear combination of configurations $ \mu \in {\mathbb N}_0^\Lambda\setminus\operatorname{ran} \sigma_\Lambda =: \mathcal{S}_\Lambda $, that is
\begin{equation}\label{eq:vec_expansion}
\psi = \sum_{\mu \in \mathcal{S}_\Lambda } \psi(\mu)\ket{\mu}, \quad\text{where}\quad \|\psi\|^2 = \sum_{\mu \in \mathcal{S}_\Lambda }|\psi(\mu)|^2 <\infty.
\end{equation}
Fixing $ \Lambda = [a,b] $ with $|\Lambda|\geq 5$, Lemma~\ref{lem:tiling_configs} characterizes $\mu=(\mu_a, \ldots, \mu_b)\in \mathcal{S}_\Lambda $ as those particle configurations which belong to one of the following disjoint sets:
\begin{align*} & \mathcal{S}_{\Lambda}^{(1)} := \left\{ \mu \; \big| \; \mu_x\mu_{x+1}\geq 1\; \mbox{for some $x\in[a,b-1]$} \right\} , \\
& \mathcal{S}_{\Lambda}^{(2)} := \left\{ \mu \; \big| \; \mu_x \geq 3 \; \mbox{for some $x\in[a+1,b-1]$}\right\} \backslash \mathcal{S}_{\Lambda}^{(1)} , \\
& \mathcal{S}_{\Lambda}^{(3)} := \left\{ \mu \; \big| \; \min\{\mu_x, \,\mu_{x+3}\} = 2 \; \mbox{for some $x\in[a,b-3]$}\right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} ) , \\
& \mathcal{S}_{\Lambda}^{(4)} := \left\{\mu \; \big| \; \mu_x= 2 \; \mbox{and} \; \max\{\mu_{x-2},\mu_{x+2}\}\geq 1 \; \mbox{for some $x\in[a+1,b-1]$} \right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} \cup \mathcal{S}_{\Lambda}^{(3)} ) \\
& \mathcal{S}_{\Lambda}^{(5)} := \left\{\mu \; \big| \; \mu_a\mu_{a+2}\geq 2 \; \mbox{or} \; \mu_{b-2}\mu_b\geq 2 \right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} \cup \mathcal{S}_{\Lambda}^{(3)}\cup \mathcal{S}_{\Lambda}^{(4)} ) .
\end{align*}
We use the convention $\mu_{a-1}=\mu_{b+1}=0$ in the definition of $\mathcal{S}_\Lambda^{(4)}$. The last two sets are chosen as a partition of
\begin{equation}\label{edge_configs}
\mathcal{S}_{\Lambda}^{(4)} \cup \mathcal{S}_{\Lambda}^{(5)} = \left\{\mu \; \big| \; \mu_x\mu_{x+2}\geq 2 \; \mbox{for some $x\in[a,b-2]$} \right\} \backslash ( \mathcal{S}_{\Lambda}^{(1)} \cup \mathcal{S}_{\Lambda}^{(2)} \cup \mathcal{S}_{\Lambda}^{(3)} ),
\end{equation}
which is analogous to the fourth set of configurations from the periodic case in Section~\ref{sec:electro2}. Here, $\mathcal{S}_\Lambda^{(4)}$ consists of the configurations from \eqref{edge_configs} where there is an \emph{interior} site $x$ with two particles that has an occupied next nearest neighbor. The set $\mathcal{S}_\Lambda^{(5)}$ corresponds to the configurations from \eqref{edge_configs} where the site with two (or more) particles must be on the boundary and the next-nearest neighbor holds exactly one particle; it is precisely these configurations that produce the low-lying energies for small $|\lambda|$ which were discussed in Section~\ref{sec:main}.
Given~\eqref{edge_configs}, the sets $\mathcal{S}_\Lambda^{(j)}$ differ from their counterparts in the periodic case only at the boundary of $ \Lambda $. Since all possible violations of the conditions in Lemma~\ref{lem:tiling_configs} are covered, we have constructed a disjoint partition of
$
\mathcal{S}_\Lambda $.
For open boundary conditions, a configuration $ \mu \in \mathcal{S}_{\Lambda}^{(1)} $ has electrostatic energy
$
e_\Lambda(\mu) = \sum_{x=a}^{b-1} \mu_x \mu_{x+1} $,
and the mean energy of any $\psi\in \mathcal{C}_\Lambda^\perp\cap\mathrm{dom}(H_\Lambda)$ is given by
\begin{equation}\label{eq:Cenergy}
\langle \psi | H_\Lambda \psi \rangle = \sum_{\mu \in \mathcal{S}_{\Lambda}^{(1)} } e_\Lambda(\mu) |\psi(\mu)|^2 + \kappa \, \sum_{\nu \in \mathbb{N}_0^\Lambda} \sum_{x=a+1}^{b-1} \left| (q_x\psi)(\nu) \right|^2 .
\end{equation}
\begin{proof}[Proof of Theorem~\ref{thm:perp_bound}]
We expand $\psi\in \mathcal{C}_\Lambda^\perp\cap\mathrm{dom}(H_\Lambda)$ as in \eqref{eq:vec_expansion} and fix a configuration $ \mu \in \mathcal{S}_\Lambda $. The argument for $\mu\in\mathcal{S}_\Lambda^{(j)}$ with $j=1,2$ or $4$ proceeds identically as in the proof in Theorem~\ref{thm:electro2}: we define $\eta(\mu)$ and $T^{(j)}[\psi;\mu]$ as before with the only modification being that the value of $x_\mu$ is constrained to the interval of sites used to define the set $\mathcal{S}_\Lambda^{(j)}$ above. In these cases, one again produces the bound
\begin{equation}\label{eq:same_bound}
e_\Lambda(\eta(\mu) ) | \psi(\eta(\mu)) |^2 + \kappa T^{(j)} [\psi;\mu] \geq \gamma^{(j)}|\psi(\mu)|^2
\end{equation}
with $\gamma^{(j)}$ defined as in the proof of Theorem~\ref{thm:electro2}, see specifically \eqref{gamma1}, \eqref{gamma2} and \eqref{eq:Case4}.
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(3)} $:} The argument follows the analogous case from Theorem~\ref{thm:electro2} with only technical modifications. We set $ x \equiv x_\mu := \max\{x\in[a,b-3] \, | \, \min\{\mu_x, \mu_{x+3}\}\geq2\} $, and consider the cases $x>a$ and $x=a$ separately.
For $x>a$, defining $\eta(\mu)$ and $T^{(3)}[\psi;\mu]$ exactly as in \eqref{eq:Case3}, and bounding similarly with the choices $ \delta' = \frac{\kappa}{\kappa+1} $ and $ \delta =(1+\mu_{x+3}(1-\delta'))^{-1} \leq (1+2 (1-\delta'))^{-1} $ once again produces \eqref{eq:same_bound} with $\gamma^{(3)}$ as in \eqref{S3_bound}. For open boundary conditions, it is possible to have $\mu_{x+3}>2$ if $x=b-3$, which accounts for the slightly different choice of $\delta$.
The case $x=a$ runs analogously to that of $x>a$, with the roles of $x$ and $x+3$ interchanged since $x+3$ is in the interior of $\Lambda$. Setting $\nu(\mu) = \alpha_{x+3}^2\mu$, $\eta'(\mu)=\alpha_{x+2}^*\alpha_{x+4}^*\nu(\mu)$, $\nu'(\mu)=\alpha_x\alpha_{x+2}\eta'(\mu)$ and $\eta(\mu) = (\alpha_{x+1}^*)^2\nu'(\mu)$ we then estimate
\[
T^{(3)} [\psi;\mu] := \left| (q_{x+3}\psi)(\nu(\mu)) \right|^2 + \left| (q_{x+1}\psi)(\nu'(\mu)) \right|^2 \]
as in the case $x>a$ which again yields \eqref{eq:same_bound}.
\textit{Case $ \mu \in \mathcal{S}_{\Lambda}^{(5)} $:} Due to the presence of a boundary, our strategy differs from the case of $\mathcal{S}_\Lambda^{(4)}$, and we pick $ x \equiv x_\mu := \max\{x\in\{a+1,b-1\} \, | \, \mu_{x-1}\mu_{x+1} \geq 2 \} $ and note that $ \mu_{x} = 0 $. To $ \mu $ we associate the configurations $ \nu(\mu) := \alpha_{x-1} \alpha_{x+1} \mu $, and $ \eta(\mu) := (\alpha_x^*)^2 \nu(\mu) $. The latter configuration has electrostatic energy $ e_\Lambda(\eta(\mu) ) \geq 2 $. We then bound for any $ \delta \in (0,1) $:
\begin{align*}
T^{(5)} [\psi;\mu] := \left| (q_{x}\psi)(\nu(\mu)) \right|^2 & = \left| \sqrt{2} \psi(\eta(\mu)) - \lambda \sqrt{\mu_{x-1} \mu_{x+1}} \psi(\mu) \right|^2 \\
& \geq (1-\delta)|\lambda|^2 \mu_{x-1}\mu_{x+1} | \psi(\mu)|^2 - 2\frac{1-\delta}{\delta} | \psi(\eta(\mu)) |^2 .
\end{align*}
The choice $ \delta = \frac{\kappa}{\kappa+1} $ then yields the final estimate
\begin{equation}\label{eq:Case5_OBC}
e_\Lambda(\eta(\mu) ) | \psi(\eta(\mu)) |^2 + \kappa T^{(5)} [\psi;\mu] \geq \frac{2\kappa |\lambda|^2}{\kappa+1} | \psi(\mu)|^2 =: \gamma^{(5)}|\psi(\mu)|^2.
\end{equation}
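As in the previous cases, the choice of $\delta$ is dictated by a cancellation: with $\delta = \frac{\kappa}{\kappa+1}$,
\[
2\kappa\,\frac{1-\delta}{\delta} = 2 \leq e_\Lambda(\eta(\mu)) \qquad\text{and}\qquad \kappa(1-\delta)\,|\lambda|^2\,\mu_{x-1}\mu_{x+1} \geq \frac{2\kappa|\lambda|^2}{\kappa+1} ,
\]
where the last inequality uses that $\mu_{x-1}\mu_{x+1}\geq 2$ for any $\mu\in\mathcal{S}_\Lambda^{(5)}$.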
Summing the collective estimates above over all configurations in $\mathcal{S}_\Lambda^{(j)}$ for each fixed $ j \in \{ 1,2,3,4,5 \} $ once again produces
\begin{align}
\gamma^{(j)} \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } |\psi(\mu)|^2 & \leq \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } e_\Lambda(\eta(\mu)) |\psi(\eta(\mu))|^2+ \kappa \sum_{\mu\in \mathcal{S}_{\Lambda}^{(j)} } T^{(j)} [\psi;\mu]
\leq c_j\langle \psi | H_\Lambda \psi \rangle , \label{j-estimate}
\end{align}
where $c_j\in{\mathbb N}$ counts the maximum number of times a configuration with electrostatic energy $\eta\in\mathcal{S}_\Lambda^{(1)}$ is associated to a configuration $\mu\in\mathcal{S}_\Lambda^{(j)}$. Our choices are again made so that $c_j=1$ for $j\neq2$, and $c_2=2$. The proof then concludes by dividing \eqref{j-estimate} by $c_j$, summing over $j$, and noting that $5\gamma_\kappa(|\lambda|^2)=\min_j\gamma^{(j)}/c_j$.
%
\end{proof}
While the bound in Theorem~\ref{thm:perp_bound} is sufficient for our purposes, it is not optimal as far as constants are concerned. However, the lower bound does scale as $ \mathcal{O}(|\lambda|^2) $ in the regime of small $ |\lambda| $, which agrees with the scaling of the edge states singled out in~\eqref{ex:edge_energy}. Since this scaling is only reflected in the estimate from~\eqref{eq:Case5_OBC}, any such low-energy state $\psi$ must have a nonzero overlap with some configuration $\mu\in\mathcal{S}_\Lambda^{(5)}$, i.e.\ $\psi(\mu)\neq 0$. We end this section by using modified tilings to identify the invariant subspaces that contain the configurations from $\mathcal{S}_\Lambda^{(5)}$. These tilings will only differ from BVMD tilings at the boundary, and so in this sense any eigenstate with energy $\mathcal{O}(|\lambda|^2)$ can be interpreted as an edge state.
Since every configuration $\mu\in\mathcal{S}_\Lambda^{(5)}$ only breaks the conditions from Lemma~\ref{lem:tiling_configs} at the boundary of $\Lambda$, it is the configuration associated with an \emph{edge tiling} $T=(T_1,\ldots, T_k)$ of $\Lambda=[a,b]$ where either $T_1 = (n010)$ or $T_k=(10n)$ for some $n\geq 2$, and all other tiles are BVMD-tiles. In addition to the original BVMD tiles and replacement rules $(10)(10)\leftrightarrow(0200)$ and $(10)(1)\leftrightarrow(020)$, the action of the dipole hopping terms $q_x^*q_x$ on these edge tilings generates the following new set of boundary tiles and replacement rules:
\begin{enumerate}
\item \emph{On the left boundary:} The edge tiles $(n010)$, $((n-1)200)$, and $(n00200)$ for $n\geq 2$, which satisfy the replacement rules
\begin{equation}\label{left_replacements}
(n010)\leftrightarrow((n-1)200), \quad (n010)(10)\leftrightarrow(n00200)
\end{equation}
\item \emph{On the right boundary:} The edge tiles $(10n)$, $(02(n-1))$, and $(0200n)$ for $n\geq 2$ which satisfy the replacement rules
\begin{equation}\label{right_replacements}
(10n)\leftrightarrow(02(n-1)), \quad (10)(10n)\leftrightarrow(0200n)
\end{equation}
\end{enumerate}
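To illustrate, for $n=2$ the rules in \eqref{left_replacements} read $(2010)\leftrightarrow(1200)$ and $(201010)\leftrightarrow(200200)$; each is a single dipole move which conserves the total particle number and only alters the configuration near the left edge.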
With these modified edge tiles, the invariant subspaces that contain states with energy $\mathcal{O}(|\lambda|^2)$ are of the form $\mathcal{E}_\Lambda(R)=\operatorname{span}\{\sigma_\Lambda(T) \, | \, T \leftrightarrow R\}$ where $R=(R_1,\ldots,R_k)$ is any tiling of $\Lambda$ so that
\[
R_1\in\{B_n^l, M, V, (n010) | n\geq 2\}, \;\; R_k \in\{B_n^r, M^{(1)}, M, V, (10n) | n \geq 2\}, \;\; R_i \in\{M,V\} \;\; 1<i<k ,
\]
and at least one of $R_1 = (n010)$ or $R_k=(10n)$ holds. Here, the equivalence relation $T\leftrightarrow R$ is defined using the new replacement rules from \eqref{left_replacements}-\eqref{right_replacements} as well as the original BVMD-replacement rules. As remarked above, the restriction of any of these tilings to the interior $[a+1,b-1]$ produces a BVMD-tiling.
\subsection{Proof of Theorem~\ref{thm:main}}\label{sec:eeproofmain}
We now spell out the short proof of Theorem~\ref{thm:main}. Due to \eqref{eq:twoway} one only needs to estimate the spectral gap in the tiling space $ E_1(\mathcal{C}_\Lambda) $ as well as the ground state in the orthogonal complement $E_0(\mathcal{C}_\Lambda^\perp)$. The latter has been accomplished in Theorem~\ref{thm:perp_bound}. The former is bounded from below using Theorem~\ref{thm:gap_reduction} together with the uniform estimate in Theorem~\ref{thm:tiling_gap}. \qed
\subsection{Variational estimates for excited states}\label{sec:Excited_States}
The bound~\eqref{eq:main2} on the bulk gap avoids edge states, but is nevertheless not sharp even in the regime of small $ |\lambda| $.
To investigate low-lying excited states in greater detail, we study the Hamiltonian $H_\Lambda$ restricted to other invariant subspaces spanned by tiling states $ | \sigma_\Lambda(T) \rangle $.
We choose to work with open boundary conditions. However, the analysis can easily be modified to periodic boundary conditions.
While the set of tilings $T$ is once again generated from a root tiling and a few substitution rules to keep the complexity manageable, the set of tiles is augmented beyond the basic (bulk) tiles by a few additional ones. One simple class of states results from adding the new bulk tile $(01)$ to produce, e.g., roots of the form
\begin{equation}\label{quasiparticle}
R_{l,r}^{(m)} = (10)_l (01)_m (10)_r ,
\end{equation}
where we place $ l $, respectively, $ r $, basic bulk monomers $(10)$ to the left, respectively, right, of a string of $ m $ tiles $(01)$ on the interval $ \Lambda = \Lambda_l \cup \{0\} \cup \Lambda_{r}^{(m)}$ with $ \Lambda_l = [-2l , -1] $ and $ \Lambda_r^{(m)} = [ 1, 2(r+m)-1] $. By allowing the boundary tile $M^{(1)}$ to be placed in the interior of $\Lambda$, \eqref{quasiparticle} can be represented in terms of the BVMD-tiles as $R_{l,r}^{(m)}=(10)_l(0)(10)_{m-1}(1)(10)_r$. The action of the operators $ q_x^* q_x $ with $ -2l < x < 2 (r+m) -1$ on this root then generates additional tiles and replacement rules analogous to~\eqref{eq:replacement} that are used to define the tilings $T$ connected to the root, denoted $R_{l,r}^{(m)}\leftrightarrow T$; see \eqref{excited_rr1}-\eqref{excited_rr2} below. Due to the presence of a void at the interface $x=0$, the invariant subspace corresponding to this root factors,
\begin{align*}
& \mathcal{D}_{l,r}^{(m)} := \operatorname{span}\left\{ | \sigma_\Lambda(T) \rangle \, \big| \, T \leftrightarrow R_{l,r}^{(m)}\right\} = \mathcal{C}_{\Lambda_l}(M_l^{(2)}) \otimes | 0 \rangle \otimes \mathcal{D}_{r}^{(m)} \\
& \mbox{with}\quad \mathcal{D}_{r}^{(m)}:= \operatorname{span}\left\{ | \sigma_{\Lambda_r^{(m)}}(T) \rangle \, \big| \, T \leftrightarrow R_{r}^{(m)} = (M_{m-1}^{(2)}, M^{(1)}, M_r^{(2)}) \right\} ,
\end{align*}
where $M_n^{(2)}$ covers an interval of length $2n$ with $n$ regular monomers $M$.
For all finite $ m \in \mathbb{N} $ the set of additional tiles generated from $ R_{r}^{(m)} $ by the replacement rules is finite. More specifically:
\begin{enumerate}
\item For $ m = 1 $, in addition to the usual replacement rule $ (10)(10) \leftrightarrow (0200) $, define the rule
\begin{equation} \label{excited_rr1}
(1)(0200) \leftrightarrow (02100) .
\end{equation}
\item For $ m \geq 2 $, in addition to the usual replacement rules $ (10)(10) \leftrightarrow (0200) $ and $ (10)(1) \leftrightarrow (020)$, define
\begin{equation} \label{excited_rr2}
(020) (10) \leftrightarrow (01200) \qquad (1)(0200) \leftrightarrow (02100) .
\end{equation}
\end{enumerate}
\begin{figure}
\begin{center}
\includegraphics[scale=.25]{Pictures/excited_replacements.png}
\end{center}
\caption{Some of the tilings generated by the replacement rules on $R_3^{(2)}$. Since the hopping terms act on three consecutive sites, the distortions caused by $M^{(1)}$ are localized to an interval of size seven.}
\end{figure}
The same strategy has been applied in~\cite{NWY:2020} to explore excited states of the truncated $ \nu=1/3 $ Haldane pseudopotential.
Similar to the fermionic case (see also~\cite{wang:2015}), we conjecture that for $ \kappa > 1/2 $ and $ \lambda $ small enough, the first excited (bulk) state of the Hamiltonian $ H_\Lambda $ is found as the ground state, with energy $ E_0( \mathcal{D}_{l,r}^{(2)}) $, of the restriction to the invariant subspace $ \mathcal{D}_{l,r}^{(2)} $. A state in $\mathcal{D}_{l,r}^{(m)}$ resembles a bound state consisting of a particle-hole pair separated by $ 2(m-1) $ sites. Within this sector, we compute in the physical parameter regime of small $ |\lambda| $ an explicit expression for the minimum up to order $ |\lambda|^2 $. Moreover, we provide a variational state from $\mathcal{D}_{l,r}^{(2)}$ that agrees with this expansion to $\mathcal{O}(|\lambda|^2)$. This state is defined in terms of the squeezed Tao-Thouless state $\varphi_r$ and the excited state $\eta_r$ defined as
\begin{equation}
\eta_r := - \tfrac{\overline{\lambda}}{\sqrt{2}} \beta_{r-1} | 10\rangle \otimes \varphi_{r-1} + | 0200\rangle \otimes \varphi_{r-2} = \eta_2 \otimes \varphi_{r-2} - \frac{\beta_{r-1} |\lambda|^2}{2} \ | 10\rangle \otimes \eta_{r-1} .
\end{equation}
Note that $\eta_r$ is the mirror of the excited state $\eta_r^{(2)}$ considered for the martingale method in Section~\ref{sec:MM}. Similar to that case, $\eta_r$ is orthogonal to $\varphi_r $ and its norm satisfies $ \| \eta_r \| = \sqrt{\beta_{r-1}} \| \varphi_r \| $.
\begin{theorem}[Excited States]
For any $ l, \, r \geq 3 $ and $ \kappa > 1/2 $, and all sufficiently small $ |\lambda| $:
\begin{equation}\label{eq:exstateasym}
\min_{m\in \mathbb{N} }\ E_0(\mathcal{D}_{l,r}^{(m)} ) =1 - \frac{2\kappa}{2\kappa-1} \left| \lambda \right|^2 + \mathcal{O}( \left| \lambda \right|^4).
\end{equation}
Moreover, a variational state whose energy agrees to $ \mathcal{O}( \left| \lambda \right|^2) $ with the right side is $ \varphi_l \otimes | 0 \rangle \otimes \psi\in \mathcal{D}_{l,r}^{(2)}$ where
$$
\psi =\left( | 101 \rangle - \frac{\sqrt{2} \kappa \lambda}{2\kappa -1 } | 020 \rangle \right) \otimes \varphi_r - \frac{\sqrt{2} \lambda}{2\kappa -1 } \, | 101 \rangle \otimes \eta_r .
$$
\end{theorem}
\begin{proof}
We first discuss the lower bound on the ground state energy of $H_\Lambda $ restricted to the closed invariant subspace $ \mathcal{D}_{l,r}^{(m)} $. This is estimated by dropping all terms in the Hamiltonian aside from those acting on the interval $[2m-3,2m+3]$ for $ m \geq 2 $, i.e.
\begin{equation}\label{eq:lowbdexited}
H_\Lambda \geq \sum_{x=2m-3}^{2(m+1)} n_x n_{x+1} + \kappa \sum_{x=2(m-1)}^{2(m+1)} q_x^* q_{x} ,
\end{equation}
and dropping all terms except those acting on the interval $[1,5]$ when $ m = 1 $.
In the case $m=2$, the restriction of $H_{[1,7]}$ to $\mathcal{D}_{l,r}^{(2)}$ is unitarily equivalent to a block matrix of size $6+3=9$. More precisely, the $3\times 3$ block corresponds to the restriction onto the cyclic subspace generated from $(10)(1)(10)(02)$, i.e.\ $ \operatorname{span}\{ |1011002\rangle , |0201002\rangle , |0120002\rangle\} $, given by
$$
H_{3\times 3} = \left[\begin{matrix} 1+\kappa |\lambda|^2 & - \sqrt{2} \kappa \overline{\lambda} & 0 \\
- \sqrt{2} \kappa \lambda & 2 \kappa (1 + |\lambda|^2) & - 2 \kappa \overline{\lambda} \\
0 & - 2 \kappa\lambda & 2 (1+\kappa) \end{matrix}\right] .
$$
If $ \kappa > 1/2 $, it follows by second-order perturbation theory that, to order $ |\lambda|^2 $, the lowest eigenvalue of this block is
$$
\inf \operatorname{spec}(H_{3\times 3}) = 1 - \frac{\kappa}{2\kappa-1} |\lambda|^2 + \mathcal{O}\left( \left| \lambda \right|^4\right) .
$$
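As a numerical sanity check (independent of the proof), this expansion can be verified directly. The following sketch transcribes $H_{3\times 3}$ for the illustrative sample values $\kappa=1$ and $\lambda=0.05$ (with $\lambda$ taken real, so conjugates drop) and compares its lowest eigenvalue with the perturbative prediction.

```python
import numpy as np

kappa, lam = 1.0, 0.05  # illustrative choices; any kappa > 1/2 and small lam work

# Transcription of the 3x3 block displayed above (lambda real)
H = np.array([
    [1 + kappa * lam**2,        -np.sqrt(2) * kappa * lam,   0.0],
    [-np.sqrt(2) * kappa * lam,  2 * kappa * (1 + lam**2),  -2 * kappa * lam],
    [0.0,                       -2 * kappa * lam,            2 * (1 + kappa)],
])

e0 = np.linalg.eigvalsh(H)[0]                        # lowest eigenvalue (ascending order)
prediction = 1 - kappa * lam**2 / (2 * kappa - 1)    # second-order perturbation theory

# Agreement up to the O(|lambda|^4) error of the expansion
assert abs(e0 - prediction) < 1e-4
```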
The corresponding $6\times 6$-block results from restricting $H_{[1,7]}$ to the subspace generated by $(10)(1)(10)_2$, i.e. $\operatorname{span}\{ |1011010\rangle , |0201010\rangle , |0120010\rangle , |1010200\rangle , |0200200\rangle, |1002100\rangle \}$, and produces:
$$
H_{6\times 6} =\left[\begin{matrix} 1+\kappa |\lambda|^2 & - \sqrt{2} \kappa \overline{\lambda} & 0 & - \sqrt{2} \kappa \overline{\lambda} & 0 & 0 \\
- \sqrt{2} \kappa \lambda & \kappa (2 + 3 |\lambda|^2) & - 2 \kappa \overline{\lambda} & 0 & - \sqrt{2} \kappa \overline{\lambda} & 0 \\
0 & - 2 \kappa\lambda & 2 (1+\kappa) & 0 & 0 & 0 \\
- \sqrt{2} \kappa \lambda & 0 & 0 & \kappa (2 + 3 |\lambda|^2) & - \sqrt{2} \kappa \overline{\lambda} & - 2 \kappa \overline{\lambda} \\
0 & - \sqrt{2} \kappa \lambda & 0 & - \sqrt{2} \kappa \lambda & 4\kappa & 0 \\
0 & 0 & 0 & - 2 \kappa \lambda & 0 & 2(\kappa +1) \end{matrix}\right] .
$$
Second order perturbation theory then yields
\[ \inf \operatorname{spec}( H_{6\times 6}) = 1 - \frac{2\kappa}{2\kappa-1} \left| \lambda \right|^2 + \mathcal{O}( \left| \lambda \right|^4) \]
for $ \kappa > 1/2 $, which is clearly smaller than $ \inf \operatorname{spec}(H_{3\times 3}) $.
The analysis for $m>2$ and $m=1$ follows similarly. For $m>2$, the restriction of $H_{[2m-3,2m+3]}$ to $D_{l,r}^{(m)}$ produces a $6+3+3+1=13$-dimensional block matrix. In addition to the two blocks discussed above, there is a $ 1 \times 1 $ block with value $ 1 $ corresponding to the restriction to the vector $ |0011002 \rangle $, and an additional copy of $H_{3\times 3}$ corresponding to the invariant subspace $ \operatorname{span}\{ |0011010\rangle , |0010200\rangle , |0002100\rangle\} $.
For $ m = 1 $ the restriction of $H_{[1,5]}$ to $ \mathcal{D}_{l,r}^{(1)} $ is unitarily equivalent to $ H_{3\times 3} $. This finishes the proof of the fact that the right side in \eqref{eq:exstateasym} is a lower bound on the ground state energy of $ H_\Lambda $ restricted to $ \mathcal{D}_{l,r}^{(m)} $ for all $ m \in \mathbb{N} $.
For an upper bound we use the Rayleigh-Ritz principle to write
$$E_0(\mathcal{D}_{l,r}^{(2)}) \leq \langle \varphi_l \otimes | 0 \rangle \otimes \psi , H_\Lambda \, \varphi_l \otimes | 0 \rangle \otimes \psi \rangle / (\| \varphi_l \|^2 \| \psi\|^2) = \langle \psi , H_{\Lambda_r^{(2)}} \psi \rangle / \| \psi \|^2 $$
and explicitly compute the expectation value.
\end{proof}
Similar to \cite{NWY:2020} (see also~\cite{MBR:2020}), one can find plenty of many-body scars higher up in the spectrum of $ H_\Lambda $ or $ H_\Lambda^\textrm{per} $. To produce such an eigenstate with low complexity, one places a sufficient number of voids around a local excitation to shield it on either side from a Tao-Thouless ground state~\eqref{TT}. The particular form of the hopping term then prevents these excitations from moving across the void boundary. For example,
\begin{equation} \label{scarstate}
\psi = \varphi_l \otimes \ket{01100} \otimes \varphi_r^{(1)},
\end{equation}
is an eigenstate of $ H_\Lambda $ with energy $ 1 $, filling fraction $ \nu=1/2$, and Schmidt rank $ 2$. Inserting the string $ (0110)_n(0) $ produces an eigenstate with energy $ n $. Up to potentially increasing the length of the void-string, in the same manner one can produce other eigenfunctions with local excitations arising from locally confined configurations generated by a finite number of substitution rules. For example, the cyclic subspace of the configuration $ (003000) $ is two-dimensional and consists of the additional configuration $ (011100) $. The eigenvectors corresponding to this invariant subspace constitute local excitations with strictly positive energies which can be similarly inserted in \eqref{scarstate} to create excited states of the larger system.
\minisec{Acknowledgements }
{\small This work was supported by the DFG under EXC-2111--390814868.}
\bibliographystyle{abbrv}
\section{The Complete Proof of Lemma \ref{Lemma_Complexity}}\label{Appendix_A}
\textbf{Lemma 4.7} (restated) Under event $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$, a base arm $i$ will not be pulled if $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$.
\begin{proof}
We prove this lemma by contradiction.
Assume that arm $i$ is pulled with $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$; then there are four possibilities: $(i\in \hat{S}_t) \land (i\in S^*)$, $(i\in \hat{S}_t) \land (i\notin S^*)$, $(i\in \tilde{S}_t) \land (i\in S^*)$, $(i\in \tilde{S}_t) \land (i\notin S^*)$.
Case i): $(i\in \hat{S}_t) \land (i\in S^*)$ or $(i\in \tilde{S}_t) \land (i\notin S^*)$. In this case, $i \in S^* \oplus \tilde{S}_t$ and therefore $\Delta_{i,c} \le \Delta_{S^*,\tilde{S}_t}$.
Since $\hat{S}_t = {\sf Oracle}(\bm{\hat{\mu}}(t))$, we have that $\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) \le \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t)$. By event $\mathcal{E}_{1,c}$, we also have that for any $k$, $|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k}(t) - \hat{\mu}_j(t))| \le \sqrt{C(t)\sum_{j\in \tilde{S}_t \setminus \hat{S}_t}{2\over N_j(t)}L_2(t)} \le {\Delta_{i,c} \over 7}$, and similarly $|\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k}(t) - \hat{\mu}_j(t))| \le \sqrt{C(t)\sum_{j\in \hat{S}_t \setminus \tilde{S}_t}{2\over N_j(t)}L_2(t)} \le {\Delta_{i,c} \over 7}$ (recall that $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$ and $i = \operatornamewithlimits{argmin}_{j\in \hat{S}_t \oplus \tilde{S}_t} N_j(t)$).
Therefore, for any $k$, we have that \begin{eqnarray*}
&&\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^k(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^k(t) \\
&=& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t)\right) + \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t)) \right) - \left( \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k}(t) - \hat{\mu}_j(t)) \right)\\
&\le& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t)\right) + \left(|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t)) |\right) + \left(| \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k}(t) - \hat{\mu}_j(t)) |\right)\\
&\le& 0 + {2\over 7}\Delta_{i,c} \\
&\le& {2\over 7}\Delta_{i,c}.
\end{eqnarray*}
This means that $\tilde{\Delta}_t^{k_t^*} = \sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t) \le {2\over 7}\Delta_{i,c}$.
Moreover, since $\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) \ge \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t)$, we have that
\begin{eqnarray}
\nonumber&&\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t) \\
\nonumber&=& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t)\right) - \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t)) \right) + \left( \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t)) \right)\\
\nonumber&\ge& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t)\right) - \left(|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \right) - \left( |\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \right)\\
\nonumber&\ge& 0 - {2\over 7}\Delta_{i,c} \\
\label{Eq_2}&\ge& - {2\over 7}\Delta_{i,c}.
\end{eqnarray}
On the other hand, by event $\mathcal{E}_{2,c}$, we know that $\exists k'$ such that $\sum_{j\in S^*} \theta_j^{k'}(t) - \sum_{j\in \tilde{S}_t} \theta_j^{k'}(t) \ge \Delta_{S^*,\tilde{S}_t}$.
Then
\begin{eqnarray}
\nonumber&&\sum_{j\in S^*} \theta_j^{k'}(t) - \sum_{j\in \hat{S}_t} \theta_j^{k'}(t) \\
\nonumber&=&\left(\sum_{j\in S^*} \theta_j^{k'}(t) - \sum_{j\in \tilde{S}_t} \theta_j^{k'}(t) \right) + \left(\sum_{j\in \tilde{S}_t} \theta_j^{k'}(t) - \sum_{j\in \hat{S}_t} \theta_j^{k'}(t) \right)\\
\nonumber&\ge& \Delta_{S^*,\tilde{S}_t} + \left(\sum_{j\in \tilde{S}_t} \theta_j^{k'}(t) - \sum_{j\in \hat{S}_t} \theta_j^{k'}(t) \right)\\
\nonumber&=& \Delta_{S^*,\tilde{S}_t} + \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k'}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k'}(t) \right)\\
\nonumber&=& \Delta_{S^*,\tilde{S}_t} + \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t) \right) + \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k'}(t) - \hat{\mu}_j(t)) \right) - \left(\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k'}(t) - \hat{\mu}_j(t)) \right)\\
\label{Eq_3}&\ge & \Delta_{S^*,\tilde{S}_t} - {2\over 7}\Delta_{i,c} - \left(|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k'}(t) - \hat{\mu}_j(t)) |\right) - \left(|\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k'}(t) - \hat{\mu}_j(t)) | \right)\\
\nonumber&\ge &\Delta_{S^*,\tilde{S}_t} - {4\over 7}\Delta_{i,c}\\
\nonumber&>& {2\over 7}\Delta_{i,c},
\end{eqnarray}
where Eq. \eqref{Eq_3} is because of Eq. \eqref{Eq_2}.
This means that $\tilde{\Delta}^{k'}_t > {2\over 7}\Delta_{i,c}$, which contradicts $\tilde{\Delta}_t^{k_t^*} \le {2\over 7}\Delta_{i,c}$ (since $k_t^* = \operatornamewithlimits{argmax}_{k} \tilde{\Delta}^k_t$).
Case ii): $(i\in \hat{S}_t) \land (i\notin S^*)$ or $(i\in \tilde{S}_t) \land (i\in S^*)$. In this case, $i \in S^* \oplus \hat{S}_t$ and therefore $\Delta_{i,c} \le \Delta_{S^*,\hat{S}_t}$.
By event $\mathcal{E}_{2,c}$, we know that $\exists k, \sum_{j\in S^*} \theta_j^k(t) - \sum_{j\in \hat{S}_t} \theta_j^k(t) \ge \Delta_{S^*,\hat{S}_t}$. Hence $\tilde{\Delta}_t^k \ge \Delta_{S^*,\hat{S}_t}$. Moreover, since $k_t^* = \operatornamewithlimits{argmax}_k \tilde{\Delta}_t^k$, we have that $\sum_{j\in \tilde{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t} \theta_j^{k_t^*}(t) \ge \Delta_{S^*,\hat{S}_t}$, which is the same as $\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t) \ge \Delta_{S^*,\hat{S}_t}$. On the other hand, by event $\mathcal{E}_{1,c}$, we have that $|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \le \sqrt{C(t)\sum_{j\in \tilde{S}_t \setminus \hat{S}_t}{2\over N_j(t)}L_2(t)} \le {\Delta_{i,c} \over 7}$, and similarly $|\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \le \sqrt{C(t)\sum_{j\in \hat{S}_t \setminus \tilde{S}_t}{2\over N_j(t)}L_2(t)} \le {\Delta_{i,c} \over 7}$ (recall that $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$ and $i = \operatornamewithlimits{argmin}_{j\in \hat{S}_t \oplus \tilde{S}_t} N_j(t)$).
Therefore,
\begin{eqnarray*}
&&\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \hat{\mu}_j(t) \\
&=& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t)\right) - \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t)) \right) + \left( \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t)) \right)\\
&\ge& \left(\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \hat{S}_t \setminus \tilde{S}_t} \theta_j^{k_t^*}(t)\right) - \left(|\sum_{j\in \tilde{S}_t \setminus \hat{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \right) - \left( |\sum_{j\in \hat{S}_t \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \right)\\
&\ge& \Delta_{S^*, \hat{S}_t} - {2\over 7}\Delta_{i,c} \\
&>& 0.
\end{eqnarray*}
This means that $\sum_{j\in \tilde{S}_t } \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t} \hat{\mu}_j(t) > 0$, which contradicts the fact that $\hat{S}_t = {\sf Oracle}(\bm{\hat{\mu}}(t))$.
\end{proof}
\section{How to Obtain Complexity Upper Bound by Eq. \eqref{Eq_1}}\label{Appendix_B}
Eq. \eqref{Eq_1} is restated below:
\begin{equation*}
Z \le O\left(H_{1,c}{\left(\log (|\mathcal{I}|Z) + \log{1\over \delta}\right)^2 \over \log{1\over q}}\right).
\end{equation*}
We can then use the following lemma to find an upper bound for complexity $Z$.
\begin{lemma}\label{Lemma_Count}
Given $K$ functions $f_1(x), \cdots, f_K(x)$ and $K$ positive values $X_1, \cdots, X_K$, if $Kf_k(x) < x$ holds for all $1\le k \le K$ and all $x \ge X_k$, then $\sum_k f_k(x) < x$ for any $x\ge \sum_{k} X_k$.
\end{lemma}
\begin{proof}
Since $X_1, \cdots, X_K$ are positive values, any $x\ge \sum_{k} X_k$ satisfies $x \ge X_k$ for every $k$.
%
Therefore $Kf_k(x) < x$ for each $k$, which implies that
\begin{eqnarray*}
\sum_{k} Kf_k(x) < \sum_k x.
\end{eqnarray*}
This is the same as $\sum_k f_k(x) < x$.
\end{proof}
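A toy numerical instance of the lemma (with hypothetical choices $f_1(x)=\sqrt{x}$, $f_2(x)=\ln x$, not taken from the paper) can be checked directly:

```python
import math

# Toy instance of Lemma "Lemma_Count" with hypothetical choices
# f1(x) = sqrt(x), f2(x) = ln(x), so K = 2.
K = 2
fs = [math.sqrt, math.log]
Xs = [5.0, 1.0]   # K*f_k(x) < x holds for all x >= X_k
for f, X in zip(fs, Xs):
    assert all(K * f(x) < x for x in (X, X + 1, X + 10, X + 1000))
# Conclusion of the lemma: sum_k f_k(x) < x for every x >= X_1 + X_2 = 6.
for x in (sum(Xs), 10.0, 100.0, 1e6):
    assert sum(f(x) for f in fs) < x
```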
To apply Lemma \ref{Lemma_Count}, we set $f_1(Z) = H_{1,c}{\log^2(|\mathcal{I}|Z)\over \log{1\over q}}$, $f_2(Z) = H_{1,c}{\log(|\mathcal{I}|Z)\log{1\over \delta}\over \log{1\over q}}$, and $f_3(Z) = H_{1,c}{\log^2{1\over \delta}\over \log{1\over q}}$.
After some basic calculations, we get that $X_1 \le c_1 H_{1,c}{\log^2(|\mathcal{I}|H_{1,c})\over \log{1\over q}}$, $X_2 \le c_{2,1}H_{1,c}{\log(|\mathcal{I}|H_{1,c})\log{1\over \delta}\over \log{1\over q}} + c_{2,2}H_{1,c}{\log^2{1\over \delta}\over \log{1\over q}}$ and $X_3 \le c_3H_{1,c}{\log^2{1\over \delta}\over \log{1\over q}}$. Here $c_1, c_{2,1}, c_{2,2}, c_3$ are universal constants.
By Lemma \ref{Lemma_Count}, for any $Z \ge \Theta(H_{1,c}(\log{1\over \delta} + \log(|\mathcal{I}|H_{1,c})){\log{1\over \delta} \over \log{1\over q}} + H_{1,c}{\log^2(|\mathcal{I}|H_{1,c}) \over \log{1\over q}})$ we would have $f_1(Z) + f_2(Z) + f_3(Z) < Z$, which contradicts Eq.~\eqref{Eq_1}.
Therefore, we know that
\begin{equation*}
Z = O\left(H_{1,c}{\left(\log (|\mathcal{I}|H_{1,c}) + \log{1\over \delta}\right)^2 \over \log{1\over q}}\right) = O\left(H_{1,c}\left(\log{1\over \delta} + \log(|\mathcal{I}|H_{1,c})\right){\log{1\over \delta} \over \log{1\over q}} + H_{1,c}{\log^2(|\mathcal{I}|H_{1,c}) \over \log{1\over q}}\right),
\end{equation*}
and this is the complexity upper bound in Theorem \ref{Theorem_Explore}.
\section{Discussions about the Optimal Algorithms for Combinatorial Pure Exploration}\label{Appendix_C}
\citet{chen2017nearly} prove that the complexity lower bound for combinatorial pure exploration is $\Omega(H_{0,c}\log{1\over \delta})$, where $H_{0,c}$ is the optimal value of the following convex program (here $\Delta_{S^*,S} = \sum_{i\in S^*} \mu_i - \sum_{i\in S} \mu_i$):
\begin{eqnarray*}
\min && \sum_{i\in [m]} N_i\\
\mathrm{s.t.} && \sum_{i\in S^* \oplus S} {1\over N_i} \le \Delta_{S^*, S}^2, \quad\forall S \in \mathcal{I}, S \ne S^*
\end{eqnarray*}
In other words, for any correct combinatorial pure exploration algorithm, $[{\mathbb{E}[N_1] \over \log{1\over \delta}}, {\mathbb{E}[N_2] \over \log{1\over \delta}}, \cdots, {\mathbb{E}[N_m] \over \log{1\over \delta}}]$ must be a feasible solution for the above convex program, where $\mathbb{E}[N_i]$ represents the expected number of pulls on base arm $i$.
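To make the program concrete, consider a hypothetical instance with a single constraint whose symmetric difference contains $d$ base arms with gap $\Delta$. The AM-HM inequality gives $\sum_i N_i \ge d^2/\Delta^2$, attained at the symmetric point $N_i = d/\Delta^2$, which a quick random search confirms:

```python
import numpy as np

# Hypothetical one-constraint instance of the program: d arms in
# S* xor S with gap Delta. AM-HM gives sum(N) >= d^2 / Delta^2,
# attained at the symmetric point N_i = d / Delta^2.
d, Delta = 3, 0.5
N_opt = np.full(d, d / Delta**2)                  # N_i = 12
assert np.isclose(np.sum(1.0 / N_opt), Delta**2)  # constraint is tight
rng = np.random.default_rng(1)
for _ in range(1000):
    N = rng.uniform(1.0, 100.0, size=d)
    if np.sum(1.0 / N) <= Delta**2:               # random feasible point
        assert N.sum() >= N_opt.sum() - 1e-9      # never beats the optimum
```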
We say base arm $i$ needs exploration the most at time $t$ if $\alpha N_i^* \log{1\over \delta} - N_i(t)$ is positive, where $\alpha$ is some universal constant and $N_i^*$ is the value of $N_i$ in the optimal solution $H_{0,c}$ (note that there may be several base arms that need exploration the most in one time step).
By this definition, if we always pull a base arm that needs exploration the most, then the frequency of pulling each base arm converges to the optimal solution of $H_{0,c}$, which leads to an optimal complexity upper bound.
However, the simple offline oracle used in TS-Explore (as described in Section \ref{Section_Model_CMAB}) is not enough to look for a base arm that needs exploration the most. In fact, the existing optimal policies \citep{chen2017nearly,jourdan2021efficient} not only use this simple offline oracle, but also require other mechanisms that explore detailed information about the combinatorial structure of the problem instance to look for a base arm that needs exploration the most.
The algorithms in \cite{chen2017nearly} need to record all the super arms in $\mathcal{I}$ whose empirical means are larger than some threshold $\eta$. This is one kind of information about the combinatorial structure that can help to find a base arm that needs exploration the most.
Nevertheless, the authors only provide an efficient way to do this when the combinatorial structure satisfies some specific constraints.
In the most general setting, the algorithms in \cite{chen2017nearly} must pay an exponential time cost for collecting the detailed information.
As for \citep{jourdan2021efficient}, the best-response oracle used by the $\lambda$-player needs to return a super arm within set $N(S_t)$ that has the shortest distance to a fixed target.
Here $S_t$ is a super arm, $N(S_t)$ represents the set of super arms whose cells’ boundaries intersect the boundary of the cell of $S_t$, and the cell of a super arm $S_t$ is defined as all the possible parameter sets $[\theta_1, \theta_2, \cdots, \theta_m]$ in which $S_t$ is the optimal super arm.
This is another kind of information about the combinatorial structure that can help to find out a base arm that needs exploration the most.
Nevertheless, this best-response oracle also has an exponential time cost (which is scaled with $|N(S_t)|$).
By using these exponential time cost mechanisms, the optimal algorithms \citep{chen2017nearly,jourdan2021efficient} can find out a base arm that needs exploration the most, which is critical to achieving the optimal complexity upper bound.
In this paper, to make TS-Explore efficient in the most general setting, we only use the simple offline oracle in our algorithm and our mechanism can only inform us of one of the violated constraints in the optimization problem (if all the constraints are not violated, TS-Explore will output the correct optimal arm). This means that we know nothing about the combinatorial structure of the problem instance.
Therefore, the best thing we can do is to treat all the base arms in the violated constraint equally, i.e., we choose to pull the base arm (in the violated constraint) with the smallest number of pulls.
This leads to a complexity upper bound of $O(H_{1,c}\log{1\over \delta})$.
If we want
TS-Explore to achieve an optimal complexity upper bound $O(H_{0,c}\log{1\over \delta})$, then we need to
know which base arm
needs exploration the most, e.g., by applying a powerful offline oracle that takes the empirical means and random samples as input and outputs a base arm that needs exploration the most.
How to design such offline oracles
and how to implement them efficiently is one of our future research topics.
\section{Concentration Inequality of Sub-Gaussian Random Variables}\label{Appendix_D}
\begin{fact}
If $X$ is zero-mean and $\sigma^2$ sub-Gaussian, then
\begin{equation*}
\Pr[|X| > \epsilon] \le 2\exp(-{\epsilon^2 \over 2\sigma^2}).
\end{equation*}
\end{fact}
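A Monte Carlo illustration of this fact (with $X \sim \mathcal{N}(0,\sigma^2)$, which is $\sigma^2$ sub-Gaussian, so the bound applies to it):

```python
import numpy as np

# Monte Carlo illustration with X ~ N(0, sigma^2), which is sigma^2
# sub-Gaussian, so the stated tail bound applies to it.
rng = np.random.default_rng(0)
sigma, eps, n = 1.0, 2.0, 200_000
X = rng.normal(0.0, sigma, size=n)
empirical = np.mean(np.abs(X) > eps)          # Gaussian 2-sigma tail ~ 0.0455
bound = 2 * np.exp(-eps**2 / (2 * sigma**2))  # sub-Gaussian bound ~ 0.2707
assert empirical <= bound
```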
\section{Introduction}
Pure exploration is an important task in online learning, in which the goal is to identify the target arm as fast as possible.
In pure exploration of the classic multi-armed bandit (MAB) \citep{Audibert2010Best}, there are $m$ arms in total, and each arm $i$ is associated with a probability distribution $D_i$ with mean $\mu_i$.
Once arm $i$ is pulled, it returns an observation $r_i$, which is drawn independently from $D_i$ by the environment.
At each time step $t$, the learning policy $\pi$ either chooses an arm $i(t)$ to pull, or chooses to output an arm $a(t)$.
The goal of the learning policy is to pull arms properly, such that with an error probability at most $\delta$, its output arm $a(t)$ is the optimal arm (i.e., $a(t) = \operatornamewithlimits{argmax}_{i \in [m]} \mu_i$) with complexity (the total number of observations) as low as possible.
Pure exploration is widely adopted in real applications.
For example, in the selling procedure of cosmetics, there is always a testing phase before the commercialization phase \citep{Audibert2010Best}.
The goal of the testing phase is to help to maximize the cumulative reward collected in the commercialization phase.
Therefore, instead of regret minimization \citep{Berry1985Bandit,Auer2002Finite}, the testing phase only needs to do exploration (e.g., to investigate which product is the most popular one), and wants to find out the target with both correctness guarantee and low cost.
In the real world, sometimes the system focuses on the best action under a specific combinatorial structure, instead of the best single arm \citep{Chen2014Combinatorial}.
For example, a network routing system needs to search for the path with minimum delay between the source and the destination.
Since there can be an exponential number of paths, the cost of exploring them separately is unacceptable.
Therefore, people choose to find out the best path by exploring single edges, and this is a pure exploration problem instance in combinatorial multi-armed bandit (CMAB).
In this setting, we still pull single arms (base arms) at each time step, and there is a super arm set $\mathcal{I} \subseteq 2^{[m]}$.
The expected reward of a super arm $S\in \mathcal{I}$ is $\sum_{i\in S} \mu_i$, i.e., the sum of the expected rewards of its contained base arms.
And the goal of the player is to find out the optimal super arm with error probability at most $\delta$ and complexity as low as possible.
Most of the existing solutions for pure exploration follow the UCB approach \citep{Audibert2010Best,kalyanakrishnan2012PAC,Chen2014Combinatorial,kaufmann2013information}. They compute the confidence bounds for all the arms, and claim that one arm is optimal only if its lower confidence bound is larger than the upper confidence bounds of all the other arms.
Though this approach is asymptotically optimal in pure exploration of classic MAB problems \citep{kalyanakrishnan2012PAC,kaufmann2013information}, it faces some challenges in the CMAB setting \citep{Chen2014Combinatorial}.
In the UCB approach, for a super arm $S$, the algorithm usually uses the sum of upper confidence bounds of all its contained base arms as its upper confidence bound to achieve a low implementation cost.
This means that the gap between the empirical mean of super arm $S$ and the used upper confidence bound of $S$ is about $\tilde{O}(\sum_{i\in S} \sqrt{1/N_i})$ (here $N_i$ is the number of observations on base arm $i$).
However, since the observations of different base arms are independent, the standard deviation of the empirical mean of super arm $S$ is $\tilde{O}(\sqrt{\sum_{i\in S} {1/N_i}})$, which is much smaller than $\tilde{O}(\sum_{i\in S} \sqrt{1/N_i})$.
This means that the used upper confidence bound of $S$ is much larger than the tight upper confidence bound of $S$.
\citet{combes2015combinatorial} deal with this problem by computing the upper confidence bounds for all the super arms independently, %
which leads to an exponential time cost and is hard to implement in practice.
In fact, existing efficient UCB-based solutions either suffer from a high complexity bound \citep{Chen2014Combinatorial,gabillon2016improved}, or need further assumptions on the combinatorial structure
to achieve an optimal complexity bound efficiently \citep{chen2017nearly}.
Then a natural solution to deal with this challenge is to use random samples of arm $i$ (with mean to be its empirical mean and standard deviation to be $\tilde{O}(\sqrt{1/ N_i})$) instead of its upper confidence bound to judge whether a super arm is optimal.
If we let the random samples of different base arms be \emph{independent}, then with high probability, the gap between the empirical mean of super arm $S$ and the sum of random samples within $S$ is $\tilde{O}(\sqrt{\sum_{i\in S} {1/ N_i}})$, which has the same order as the gap between the empirical mean of super arm $S$ and its real mean.
Therefore, using independent random samples can
behave better than using confidence bounds,
and this is the key idea of Thompson Sampling (TS) \citep{thompson1933likelihood,Kaufmann2012Thompson,agrawal2013further}.
In fact, many prior works show that TS-based algorithms have smaller cumulative regret than UCB-based algorithms in regret minimization of CMAB model \citep{WC18,perrault2020statistical}.
However, studies on adapting the idea of TS, i.e., using random samples to make decisions, to the pure exploration setting are still lacking.
In this paper, we attempt to fill this gap, and study the use of random samples in pure exploration under the frequentist setting for both MAB and CMAB instances.
We emphasize that it is non-trivial to design (and analyze) such a TS-based algorithm.
The first challenge is that there is a lot more uncertainty in the random samples than in the confidence bounds.
In UCB-based algorithms, the upper confidence bound of the optimal arm is larger than its expected reward with high probability.
Thus, the arm with the largest upper confidence bound is either an insufficiently learned sub-optimal arm (i.e., the number of its observations is not enough to make sure that it is a sub-optimal arm) or the optimal arm, which means that the number of pulls on sufficiently learned sub-optimal arms is limited.
However, for TS-based algorithms that use random samples instead, there is always a constant probability that the random sample of the optimal arm is smaller than its expected reward.
In this case, the arm with the largest random sample may be a sufficiently learned sub-optimal arm.
Therefore,
the mechanism of the TS-based policy should be designed carefully to make sure that we can still obtain an upper bound for the number of pulls on sufficiently learned sub-optimal arms.
Another challenge is that using random samples to make decisions is a kind of Bayesian algorithm and it loses many good properties in the frequentist setting.
In the Bayesian setting, at each time step $t$, the parameters of the game follow a posterior distribution $\mathcal{P}_t$, and the random samples are drawn from $\mathcal{P}_t$ independently as well.
Therefore, using the random samples to output the optimal arm can explicitly ensure the correctness of the TS-based algorithm.
However, in the frequentist setting, the parameters of the game are fixed but unknown, and they have no such correlations with the random samples.
Because of this, if we still choose to use the random samples to output the optimal arm, then the distributions of random samples in the TS-based algorithm need to be chosen carefully to make sure that we can still obtain the correctness guarantee in the frequentist setting.
Besides, the analysis of the TS-based algorithm in pure exploration is also very different from that in regret minimization.
In regret minimization, at each time step, we only need to draw \emph{one} sample for each arm \cite{thompson1933likelihood,agrawal2013further,WC18,perrault2020statistical}.
However, in pure exploration, one set of samples at each time step is not enough, since the algorithm needs to i) check whether there is an arm that is the optimal one with high probability; ii) look for an arm that needs exploration.
None of these two goals can be achieved by only \emph{one} set of samples.
Therefore, we must draw several sets of samples to make decisions, and this may fail the existing analysis of TS in regret minimization.
In this paper, we solve the above challenges, and design a TS-based algorithm TS-Explore for (combinatorial) pure exploration under the frequentist setting
with polynomial implementation cost.
At each time step $t$, TS-Explore first draws independent random samples $\theta_i^k(t)$ for all the (base) arms $i\in [m]$ and $k = 1,2,\cdots,M$ (i.e., $M$ independent samples in total for each arm).
Then it computes the best (super) arm $\tilde{S}^k_t$ under each sample set $\bm{\theta}^k(t) = [\theta_1^k(t), \theta_2^k(t), \cdots,\theta_m^k(t)]$ for $k = 1,2,\cdots,M$.
If in all these sample sets, the best (super) arm is the same as the empirically best (super) arm $\hat{S}_t$ (i.e., the best arm under the empirical means), then the algorithm will output that this (super) arm is optimal.
Otherwise, for all $k \in [M]$, the algorithm will check the reward gap between $\tilde{S}^k_t$ and $\hat{S}_t$ under parameter set $\bm{\theta}^k(t)$.
Then it focuses on the (super) arm $\tilde{S}^{k_t^*}_t$ with the largest reward gap, i.e., it chooses to pull a (base) arm in the exchange set of $\hat{S}_t$ and $\tilde{S}^{k_t^*}_t$.
Recording such reward gaps and focusing on $\tilde{S}^{k_t^*}_t$ is the key mechanism we used to solve the above challenges.
On the one hand, our analysis shows that with a proper value $M$ (the number of sample sets) and proper random sample distributions, $\tilde{S}^{k_t^*}_t$ has some similar properties with the (super) arm with the largest upper confidence bound.
These properties play a critical role in the analysis of UCB approaches.
Thus, we can also use them to obtain an upper bound for the number of pulls on sufficiently learned sub-optimal arms in TS-Explore as well as the correctness guarantee of TS-Explore.
On the other hand, this novel mechanism preserves the key advantage of Thompson Sampling, i.e., the sum of random samples within $S$ will not exceed the tight upper confidence bound of $S$ with high probability.
Hence, TS-Explore can achieve a lower complexity upper bound than existing efficient UCB-based solutions.
These results indicate that our TS-Explore policy is correct, efficient (with low implementation cost), and effective (with low complexity).
In the general CMAB setting, we show that TS-Explore is near optimal and achieves a lower complexity upper bound than existing efficient UCB-based algorithms \citep{Chen2014Combinatorial}.
The optimal algorithm in \citep{chen2017nearly} is only efficient when the combinatorial structure satisfies some specific properties (otherwise it suffers from an exponential implementation cost), so its guarantee is less general than ours.
We also conduct experiments to compare the complexity of TS-Explore with existing baselines.
The experimental results show that TS-Explore outperforms the efficient baseline CLUCB \cite{Chen2014Combinatorial}, and performs only slightly worse than the optimal but non-efficient baseline NaiveGapElim \cite{chen2017nearly}.
As for the MAB setting, we show that TS-Explore is asymptotically optimal, i.e., it has a comparable complexity to existing optimal algorithms \citep{kalyanakrishnan2012PAC,kaufmann2013information} when $\delta \to 0$.
All these results indicate that our TS-based algorithm is efficient and effective in dealing with pure exploration problems.
To the best of our knowledge, this is the first result of using this kind of TS-based algorithm (i.e., always making decisions based on random samples) in pure exploration under the frequentist setting.
\section{Related Works}
Pure exploration of the classic MAB model is first proposed by \citet{Audibert2010Best}.
Since then, many learning policies have been designed for this problem.
The two most representative algorithms are successive-elimination \citep{even2006action,Audibert2010Best,kaufmann2013information} and LUCB \citep{kalyanakrishnan2012PAC,kaufmann2013information}.
Both of them adopt the idea of UCB \citep{Auer2002Finite} and achieve an asymptotically optimal complexity upper bound (i.e., it matches with the complexity lower bound proposed by \citet{kalyanakrishnan2012PAC}).
Compared to these results, our TS-Explore policy uses a totally different approach, and can achieve an asymptotically optimal complexity upper bound as well.
Combinatorial pure exploration is first studied by \citet{Chen2014Combinatorial}.
They propose CLUCB, an LUCB-based algorithm that is efficient as long as there is an offline oracle to output the best super arm under any given parameter set.
\citet{chen2017nearly} then design an asymptotically optimal algorithm for this problem. However, their algorithm can
only be implemented efficiently when the combinatorial structure follows some specific constraints.
Recently, based on the game approach, \citet{jourdan2021efficient} provide another optimal learning policy for pure exploration in CMAB. But their algorithm still suffers from an exponential implementation cost.
Compared with these UCB-based algorithms, our TS-Explore policy achieves a lower complexity bound than CLUCB \citep{Chen2014Combinatorial} (with a similar polynomial time cost), and has a much lower implementation cost than the optimal policies \cite{chen2017nearly,jourdan2021efficient} in the most general combinatorial pure exploration setting.
There also exists some research on applying Thompson Sampling to pure exploration. For example, \citet{russo2016simple} considers a frequentist setting of pure exploration in classic MAB, and proposes algorithms called TTPS, TTVS, and TTTS; \citet{shang2020fixed} extend the results in \cite{russo2016simple}, design a T3C algorithm, and provide its analysis for Gaussian bandits; \citet{li2021bayesian} study Bayesian contextual pure exploration, propose an algorithm called BRE, and obtain its corresponding analysis. However, these results are still very different from ours. First, they mainly use random distributions rather than random samples to decide the next arm to pull or when to stop, which may cause a high implementation cost when extended to the combinatorial setting, since dealing with random distributions is much more complex than dealing with random samples.
Moreover, our results are still more general even if we only consider pure exploration in classic MAB under the frequentist setting.
For example, \citet{russo2016simple} does not provide a correct stopping rule, and \citet{shang2020fixed} only obtain the correctness guarantee for Gaussian bandits. Besides, their complexity bounds are asymptotic ones (which require $\delta \to 0$), while ours holds for any $\delta \in (0,1)$.
\section{Model Setting}
\subsection{Pure Exploration in Multi-armed Bandit}
A pure exploration problem instance of MAB is a tuple $([m], \bm{D}, \delta)$.
Here $[m] = \{1,2,\cdots,m\}$ is the set of arms, $\bm{D} = \{D_1, D_2, \cdots, D_m\}$ are the corresponding reward distributions of the arms, and $\delta$ is the error constraint.
In this paper, we assume that all the distributions $D_i$'s are supported on $[0,1]$.
Let $\mu_i \triangleq \mathbb{E}_{X \sim D_i}[X]$ denote the expected reward of arm $i$, and $a^* = \operatornamewithlimits{argmax}_{i\in [m]} \mu_i$ is the optimal arm with the largest expected reward.
Similar to many existing works (e.g., \citep{Audibert2010Best}), we assume that the optimal arm is unique.
At each time step $t$, the learning policy $\pi$ can either pull an arm $i(t) \in [m]$, or output an arm $a(t) \in [m]$.
If it chooses to pull arm $i(t)$, then it will receive an observation $r_{i(t)}(t)$, which is drawn independently from $D_{i(t)}$.
The goal of the learning policy is to make sure that with probability at least $1-\delta$, its output $a(t) = a^*$.
Under this constraint, it aims to minimize the complexity
\begin{equation*}
Z^{\pi} \triangleq \sum_{i=1}^m N_i(T^{\pi}),
\end{equation*}
where $T^{\pi}$ denotes the time step $t$ that policy $\pi$ chooses to output $a(t)$, and $N_i(t)$ denotes the number of observations on arm $i$ until time step $t$.
Let $\Delta_{i,m} \triangleq \mu_{a^*} - \mu_i$ denote the expected reward gap between the optimal arm $a^*$ and any other arm $i\ne a^*$.
For the optimal arm $a^*$, its $\Delta_{a^*,m}$ is defined as $\mu_{a^*} - \max_{i\ne a^*} \mu_i$.
We also define $H_{m} \triangleq \sum_{i\in [m]} {1\over \Delta_{i,m}^2}$, and existing works \citep{kalyanakrishnan2012PAC} show that the complexity lower bound of any pure exploration algorithm is $\Omega(H_{m} \log{1\over \delta})$.
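To make the hardness measure concrete, here is a minimal Python sketch (not from the paper; the arm means are hypothetical) that computes the gaps $\Delta_{i,m}$ and $H_m$ for a small instance:

```python
def hardness_mab(mu):
    """Compute H_m = sum_i 1/Delta_{i,m}^2 from a list of expected rewards,
    assuming (as in the paper) that the optimal arm is unique."""
    best = max(mu)
    second = max(x for x in mu if x != best)
    total = 0.0
    for x in mu:
        gap = best - second if x == best else best - x   # Delta_{i,m}
        total += 1.0 / gap ** 2
    return total

H_m = hardness_mab([0.9, 0.8, 0.5, 0.5])   # hypothetical means; H_m = 212.5
```

A larger $H_m$ means a harder instance: the lower bound above scales as $H_m \log{1\over\delta}$.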
\subsection{Pure Exploration in Combinatorial Multi-armed Bandit}\label{Section_Model_CMAB}
A pure exploration problem instance of CMAB is an extension of the MAB setting.
The arms $i\in [m]$ are called base arms, and there is also a super arm set $\mathcal{I} \subseteq 2^{[m]}$.
For each super arm $S \in \mathcal{I}$, its expected reward is $\sum_{i\in S} \mu_i$.
Let $S^* = \operatornamewithlimits{argmax}_{S\in \mathcal{I}} \sum_{i\in S} \mu_i$ denote the optimal super arm with the largest expected reward, and we assume that the optimal super arm is unique (as in \citep{Chen2014Combinatorial}).
At each time step $t$, the learning policy $\pi$ can either pull a base arm $i(t) \in [m]$, or output a super arm $S(t) \in \mathcal{I}$.
The goal of the learning policy is to make sure that with probability at least $1-\delta$, its output $S(t) = S^*$. Under this constraint, it also wants to minimize its complexity $Z^{\pi}$.
As in many existing works \citep{Chen2013Combinatorial,Chen2014Combinatorial,WC18}, we also assume that there exists an offline ${\sf Oracle}$, which takes a set of parameters $\bm{\theta} = [\theta_1, \cdots, \theta_m]$ as input, and outputs the best super arm under this parameter set, i.e., ${\sf Oracle}(\bm{\theta}) = \operatornamewithlimits{argmax}_{S\in \mathcal{I}} \sum_{i\in S}\theta_i$.
In this paper, for $i \notin S^*$, we use $\Delta_{i,c} \triangleq \sum_{j\in S^*} \mu_j - \max_{S \in \mathcal{I}: i\in S} \sum_{j\in S} \mu_j$ to denote the expected reward gap between the optimal super arm $S^*$ and the best super arm that contains $i$.
As for $i \in S^*$, its $\Delta_{i,c}$ is defined as $\Delta_{i,c} \triangleq \sum_{j\in S^*} \mu_j - \max_{S \in \mathcal{I} : i\notin S} \sum_{j\in S} \mu_j$, i.e., the expected reward gap between $S^*$ and the best super arm that does not contain $i$.
We also define $S \oplus S' = (S\setminus S') \cup (S'\setminus S)$ and $\operatornamewithlimits{width} \triangleq \max_{S\ne S'} |S \oplus S'|$, and let $H_{1,c} \triangleq \operatornamewithlimits{width} \sum_{i\in [m]} {1\over \Delta_{i,c}^2}$, $H_{2,c} \triangleq \operatornamewithlimits{width}^2 \sum_{i\in [m]} {1\over \Delta_{i,c}^2}$.
\citet{chen2017nearly} prove that the complexity lower bound for combinatorial pure exploration is $\Omega(H_{0,c}\log{1\over \delta})$, where $H_{0,c}$ is the optimal value of the following convex program (here $\Delta_{S^*,S} = \sum_{i\in S^*} \mu_i - \sum_{i\in S} \mu_i$):
\begin{eqnarray*}
\min && \sum_{i\in [m]} N_i\\
\mathrm{s.t.} && \sum_{i\in S^* \oplus S} {1\over N_i} \le \Delta_{S^*, S}^2, \quad\forall S \in \mathcal{I}, S \ne S^*
\end{eqnarray*}
The following result shows the relationships between $H_{0,c}$, $H_{1,c}$ and $H_{2,c}$.
\begin{proposition}\label{Theorem_basic}
For any combinatorial pure exploration instance, $H_{0,c} \le H_{1,c} \le H_{2,c}$.
\end{proposition}
\begin{proof}
Since $\operatornamewithlimits{width} \ge 1$, we have that $H_{1,c} \le H_{2,c}$.
As for the first inequality, note that $N_i = {\operatornamewithlimits{width} \over \Delta_{i,c}^2}$ for all $i \in [m]$ is a feasible solution of the above convex program: for any $S \ne S^*$ and $i \in S^* \oplus S$ we have $\Delta_{i,c} \le \Delta_{S^*,S}$, so $\sum_{i\in S^*\oplus S} {1\over N_i} \le {|S^*\oplus S| \over \operatornamewithlimits{width}}\Delta_{S^*,S}^2 \le \Delta_{S^*,S}^2$. Since the objective value of this feasible solution is $H_{1,c}$, we have that $H_{0,c} \le H_{1,c}$.
\end{proof}
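For small instances, all of the quantities above ($\operatornamewithlimits{width}$, $\Delta_{i,c}$, $H_{1,c}$, $H_{2,c}$) can be computed by brute force over $\mathcal{I}$. The following Python sketch does so for a hypothetical three-super-arm instance; the means and super arms are made up for illustration:

```python
def cmab_hardness(mu, supers):
    """Brute-force width, Delta_{i,c}, H_{1,c} and H_{2,c} for a small CMAB
    instance; `supers` is a list of frozensets of base-arm indices.  Assumes
    every base arm is contained in, and absent from, at least one super arm."""
    reward = {S: sum(mu[i] for i in S) for S in supers}
    S_star = max(supers, key=reward.get)
    width = max(len(S ^ T) for S in supers for T in supers if S != T)
    inv = 0.0
    for i in range(len(mu)):
        if i in S_star:   # best super arm NOT containing i
            rival = max(reward[S] for S in supers if i not in S)
        else:             # best super arm containing i
            rival = max(reward[S] for S in supers if i in S)
        inv += 1.0 / (reward[S_star] - rival) ** 2   # 1 / Delta_{i,c}^2
    return width, width * inv, width ** 2 * inv

mu = [0.5, 0.4, 0.3]
supers = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})]
w, H1, H2 = cmab_hardness(mu, supers)   # w = 2, H1 = 450, H2 = 900
```

Here $S^* = \{0,1\}$, the gaps are $(0.2, 0.1, 0.1)$, and the two hardness measures differ by exactly one factor of $\operatornamewithlimits{width} = 2$.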
\section{Thompson Sampling-based Pure Exploration Algorithm}
Note that pure exploration of classic multi-armed bandit is a special case of combinatorial multi-armed bandit (i.e., $\mathcal{I} = \{\{1\}, \{2\}, \cdots, \{m\}\}$).
Therefore, in this section, we mainly focus on the TS-based pure exploration algorithm in the combinatorial multi-armed bandit setting. Its framework and analysis can directly lead to results in the classic multi-armed bandit setting.
In the following, we let $\Phi(x,\mu,\sigma^2) \triangleq \Pr_{X \sim \mathcal{N}(\mu,\sigma^2)}[X \ge x]$.
For any $x \in (0,0.5)$, we also define $\phi(x)$ as the unique value such that $\Phi(\phi(x),0,1) = x$, i.e., $\phi$ is the inverse survival function of the standard normal distribution.
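Since $\Phi(\cdot,0,1)$ is strictly decreasing, $\phi$ can be computed by inverting the standard normal survival function. A dependency-free sketch (standard library only; bisection is just one of several ways to invert $\Phi$):

```python
import math

def Phi(x, mu, var):
    """The paper's Phi(x, mu, sigma^2) = Pr_{X ~ N(mu, var)}[X >= x]."""
    return 0.5 * math.erfc((x - mu) / math.sqrt(2.0 * var))

def phi(q):
    """The paper's phi(q): the unique x with Phi(x, 0, 1) = q, obtained by
    bisection, since Phi(., 0, 1) is strictly decreasing in x."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Phi(mid, 0.0, 1.0) > q:
            lo = mid       # survival probability too large, move right
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `phi(0.05)` is about 1.645, the usual one-sided 5\% critical value of the standard normal.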
\subsection{Algorithm Framework}
\begin{algorithm}[t]
\centering
\caption{TS-Explore}\label{Algorithm_TSE}
\begin{algorithmic}[1]
\STATE \textbf{Input: } Error constraint $\delta$, $q \in [\delta, 0.1]$, $t \gets m$, $N_i \gets 0, R_i \gets 0$ for all $i\in [m]$.
\STATE Pull each arm once, update their number of pulls $N_i$'s, the sum of their observations $R_i$'s.
\WHILE {\textbf{true}}
\STATE $t \gets t+1$.
\STATE For all base arm $i\in [m]$, $\hat{\mu}_i(t) \gets {R_i \over N_i}$.
\STATE $\bm{\hat{\mu}}(t) \gets [\hat{\mu}_1(t), \hat{\mu}_2(t), \cdots, \hat{\mu}_m(t)]$.
\STATE $\hat{S}_t \gets {\sf Oracle}(\bm{\hat{\mu}}(t))$.
\FOR {$k=1,2,\cdots, M(\delta,q,t)$}
\STATE For each arm $i$, draw $\theta_i^k(t)$ independently from distribution $\mathcal{N}(\hat{\mu}_i(t), {C(\delta,q,t)\over N_i})$.
\STATE $\bm{\theta}^k(t) \gets [\theta_1^k(t), \theta_2^k(t), \cdots, \theta_m^k(t)]$.
\STATE $\tilde{S}^k_t \gets {\sf Oracle}(\bm{\theta}^k(t))$.
\STATE $\tilde{\Delta}^k_t \gets \sum_{i\in \tilde{S}^k_t} \theta_i^k(t) - \sum_{i\in \hat{S}_t} \theta_i^k(t)$.
\ENDFOR
\IF {$\forall 1 \le k \le M(\delta,q,t)$, $\tilde{S}^k_t = \hat{S}_t$}
\STATE \textbf{Return: } $\hat{S}_t$.
\ELSE
\STATE $k^*_t \gets \operatornamewithlimits{argmax}_k \tilde{\Delta}^k_t$, $\tilde{S}_t \gets \tilde{S}^{k^*_t}_t$.
\STATE Pull arm $i(t) \gets \operatornamewithlimits{argmin}_{i\in \hat{S}_t \oplus \tilde{S}_t} N_i$, update its number of pulls $N_{i(t)}$ and the sum of its observations $R_{i(t)}$.
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
Our Thompson Sampling-based algorithm (TS-Explore) is described in Algorithm \ref{Algorithm_TSE}.
We use $N_i$ to denote the number of observations on arm $i$, $R_i$ to denote the sum of all the observations from arm $i$, and $N_i(t), R_i(t)$ to denote the value of $N_i, R_i$ at the beginning of time step $t$.
At each time step $t$, for any $i\in[m]$, TS-Explore first draws $M(\delta,q,t) \triangleq {1\over q}\log (12|\mathcal{I}|^2t^2/\delta)$ random samples $\{\theta_i^k(t)\}_{k=1}^{M(\delta,q,t)}$ independently from a Gaussian distribution with mean $\hat{\mu}_i(t) \triangleq {R_i(t) \over N_i(t)}$ (i.e., the empirical mean of arm $i$) and variance ${C(\delta,q,t)\over N_i(t)}$ (i.e., inversely proportional to $N_i(t)$), where $C(\delta,q,t) \triangleq {\log (12|\mathcal{I}|^2t^2/\delta) \over \phi^2(q)}$.
Then it checks which super arm is optimal in the empirical mean parameter set $\bm{\hat{\mu}}(t) = [\hat{\mu}_1(t), \hat{\mu}_2(t), \cdots, \hat{\mu}_m(t)]$ and the $k$-th sample set $\bm{\theta}^k(t) = [\theta_1^k(t), \theta_2^k(t), \cdots, \theta_m^k(t)]$ for all $k$ by using the offline ${\sf Oracle}$,
i.e., $\hat{S}_t = {\sf Oracle}(\bm{\hat{\mu}}(t))$ and $\tilde{S}^k_t = {\sf Oracle}(\bm{\theta}^k(t))$.
If all the best super arms $\tilde{S}^k_t$'s are the same as $\hat{S}_t$, then TS-Explore outputs this super arm as the optimal one.
Otherwise, for all $1 \le k \le M(\delta,q,t)$, it computes the reward gap between $\tilde{S}^k_t$ and $\hat{S}_t$ under the $k$-th sample set $\bm{\theta}^k(t)$, and focuses on the index $k^*_t$ with the largest reward gap: TS-Explore then pulls the base arm with the least number of observations in $\hat{S}_t \oplus \tilde{S}^{k^*_t}_t$ (in the following, we write $\tilde{S}_t$ for $\tilde{S}^{k^*_t}_t$ to simplify notations).
Note that $\hat{S}_t \ne \tilde{S}_t$ (otherwise we would have output $\hat{S}_t$ as the optimal super arm), so the rule for choosing which arm to pull is well defined.
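The loop above can be turned into a short simulation. The sketch below is a simplified rendering of Algorithm \ref{Algorithm_TSE}; the brute-force oracle, the deterministic reward function in the demo, and the hard-coded value of $\phi(0.1)$ are our own illustration choices, not part of the paper:

```python
import math
import random

def ts_explore(pull, m, supers, delta, q, phi_q, max_steps=100000):
    """Simplified sketch of TS-Explore.  `pull(i)` returns a reward in [0, 1],
    `supers` is a list of frozensets of base arms, and `phi_q` = phi(q)."""
    oracle = lambda th: max(supers, key=lambda S: sum(th[i] for i in S))
    N = [1] * m
    R = [pull(i) for i in range(m)]            # pull each base arm once
    t = m
    while t < max_steps:
        t += 1
        L1 = math.log(12 * len(supers) ** 2 * t ** 2 / delta)
        M = int(math.ceil(L1 / q))             # M(delta, q, t) sample sets
        C = L1 / phi_q ** 2                    # C(delta, q, t)
        mu_hat = [R[i] / N[i] for i in range(m)]
        S_hat = oracle(mu_hat)
        best_gap, S_tilde = -1.0, None
        for _ in range(M):
            th = [random.gauss(mu_hat[i], math.sqrt(C / N[i]))
                  for i in range(m)]
            S_k = oracle(th)
            gap = sum(th[i] for i in S_k) - sum(th[i] for i in S_hat)
            if S_k != S_hat and gap > best_gap:
                best_gap, S_tilde = gap, S_k
        if S_tilde is None:                    # every sample agrees with S_hat
            return S_hat, t
        i_t = min(S_hat ^ S_tilde, key=lambda i: N[i])
        R[i_t] += pull(i_t)
        N[i_t] += 1
    return S_hat, t

random.seed(0)
mu = [0.1, 0.9]                                # hypothetical base-arm means
best, steps = ts_explore(lambda i: mu[i], 2,   # deterministic rewards (demo)
                         [frozenset({0}), frozenset({1})],
                         delta=0.1, q=0.1, phi_q=1.2816)  # phi(0.1) ~ 1.2816
```

With deterministic rewards the empirical means are exact, so the returned super arm is always $\{1\}$; with stochastic rewards, the $1-\delta$ guarantee of Theorem \ref{Theorem_Explore} applies instead.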
\subsection{Analysis of TS-Explore}\label{Section_TSPEMAB}
\begin{theorem}\label{Theorem_Explore}
In the CMAB setting, for $q \in [\delta, 0.1]$, with probability at least $1-\delta$, TS-Explore will output the optimal super arm $S^*$ with complexity upper bounded by $O(H_{1,c}(\log{1\over \delta} + \log(|\mathcal{I}|H_{1,c})){\log{1\over \delta} \over \log{1\over q}} + H_{1,c}{\log^2(|\mathcal{I}|H_{1,c}) \over \log{1\over q}})$.
Specifically, if we choose $q = \delta$, then the complexity upper bound is $O(H_{1,c}\log{1\over \delta} + H_{1,c}\log^2(|\mathcal{I}|H_{1,c}))$.
\end{theorem}
\begin{remark}
When the error constraint $\delta \in (0.1, 1)$, we can still let the parameters $(\delta,q)$ in TS-Explore be $(\delta_0,q_0) = (0.1, 0.1)$.
In this case: i) its error probability is upper bounded by $\delta_0 = 0.1 < \delta$;
and ii) since ${\delta\over \delta_0} < {1\over 0.1} = 10$, the complexity of TS-Explore is still upper bounded by $O(H_{1,c}\log{1\over \delta_0} + H_{1,c}\log^2(|\mathcal{I}|H_{1,c})) = O(H_{1,c}\log{1\over \delta} + H_{1,c}\log^2(|\mathcal{I}|H_{1,c}))$.
\end{remark}
By Theorem \ref{Theorem_Explore}, we can directly obtain the correctness guarantee and the complexity upper bound for applying TS-Explore in pure exploration of classic multi-armed bandit.
\begin{corollary}\label{Corollary_Complexity}
In the MAB setting, for $q \in [\delta, 0.1]$, with probability at least $1-\delta$, TS-Explore will output the optimal arm $a^*$ with complexity upper bounded by $O(H_{m}(\log{1\over \delta} + \log(mH_{m})){\log{1\over \delta} \over \log{1\over q}} + H_{m}{\log^2(mH_m) \over \log{1\over q}})$.
Specifically, if we choose $q = \delta$, then the complexity upper bound is $O(H_{m}\log{1\over \delta} + H_{m}\log^2(mH_{m}))$.
\end{corollary}
\begin{remark}
The value $q$ in TS-Explore is used to control the number of times that we draw random samples at each time step.
Note that $M(\delta,q,t) = {1\over q}\log (12|\mathcal{I}|^2t^2/\delta)$. Hence when $q$ becomes larger, we need fewer samples, but the complexity bound becomes worse.
There is thus a trade-off between the algorithm's complexity and the number of random samples it needs to draw.
Our analysis shows that using $q = \delta^{1\over \beta}$ for some constant $\beta \ge 1$ keeps the complexity upper bound at the same order while reducing the number of random samples significantly.
\end{remark}
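The effect of this trade-off is easy to quantify. A small sketch comparing the per-round sample count $M(\delta,q,t)$ for $q = \delta$ and $q = \delta^{1/2}$ (the values of $\delta$, $t$ and $|\mathcal{I}|$ below are arbitrary placeholders):

```python
import math

def num_samples(delta, q, t, n_supers):
    """M(delta, q, t) = (1/q) log(12 |I|^2 t^2 / delta)."""
    return math.log(12 * n_supers ** 2 * t ** 2 / delta) / q

delta, t, n_supers = 1e-4, 1000, 10
m_q_delta = num_samples(delta, delta, t, n_supers)        # q = delta
m_q_sqrt = num_samples(delta, delta ** 0.5, t, n_supers)  # q = delta^{1/2}
# The ratio is exactly 1/sqrt(delta) = 100: the log factor cancels out.
```

So taking $\beta = 2$ here cuts the number of random samples per round by a factor of $1/\sqrt{\delta}$.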
\begin{remark}
If the value $|\mathcal{I}|$ is unknown (which is common in real applications), we can use $2^m$ instead (in $M(\delta,q,t)$ and $C(\delta,q,t)$).
This only increases the constant term in the complexity upper bound
and does not influence the major term $O(H_{1,c}\log{1\over \delta})$.
\end{remark}
Theorem \ref{Theorem_Explore} shows that
the complexity upper bound of TS-Explore is a factor of $\operatornamewithlimits{width}$ lower than that of the CLUCB policy in \citep{Chen2014Combinatorial}.
To the best of our knowledge, TS-Explore is the first algorithm that efficiently achieves an $O(H_{1,c}\log{1\over \delta})$ complexity upper bound in the most general combinatorial pure exploration setting.
Besides, this is also the first theoretical analysis of using a TS-based algorithm (i.e., using random samples to make decisions) to deal with combinatorial pure exploration problems under the frequentist setting.
Though the complexity bound of TS-Explore still has a gap to the optimal one $O(H_{0,c}\log{1\over \delta})$, we emphasize that this is because we only use the simple offline oracle and do not seek more detailed information about the combinatorial structure, which makes our policy more efficient.
As a comparison, the existing optimal policies \citep{chen2017nearly,jourdan2021efficient} either suffer from an exponential time cost, or require the combinatorial structure to satisfy some specific constraints so that they can adopt more powerful offline oracles that explore detailed information about the combinatorial structure efficiently (please see Appendix \ref{Appendix_C} for discussions about this).
Therefore, our algorithm is more efficient in the most general CMAB setting, and can be attractive in real applications with large scale
and complex combinatorial structures.
On the other hand, the gap between the complexity upper bound and the complexity lower bound does not exist in the MAB setting.
Corollary \ref{Corollary_Complexity} shows that the complexity upper bound of applying TS-Explore in the classic MAB setting matches the complexity lower bound $\Omega(H_m\log{1\over \delta})$ \citep{kalyanakrishnan2012PAC} when $\delta \to 0$, i.e., TS-Explore is asymptotically optimal in the classic MAB setting.
Now we provide the proof of Theorem \ref{Theorem_Explore}.
In this proof, we denote $\mathcal{J} = \{U: \exists S,S' \in \mathcal{I}, U = S \setminus S'\}$.
Note that $|\mathcal{J}| \le |\mathcal{I}|^2$ and for any $U \in \mathcal{J}$, $|U| \le \operatornamewithlimits{width}$. We also denote $L_1(t) = \log (12|\mathcal{I}|^2t^2/\delta)$ and $L_2(t) = \log(12|\mathcal{I}|^2t^2M(\delta,q,t)/\delta)$ to simplify notations.
We first define three events as follows:
$\mathcal{E}_{0,c}$ is the event that for all $t > 0$, $U \in \mathcal{J}$,
\begin{equation*}
|\sum_{i\in U} (\hat{\mu}_i(t) - \mu_i)| \le \sqrt{ \sum_{i\in U}{1\over 2N_i(t)}L_1(t)};
\end{equation*}
$\mathcal{E}_{1,c}$ is the event that for all $t > 0$, $1 \le k \le M(\delta,q,t)$, $U \in \mathcal{J}$,
\begin{eqnarray*}
|\sum_{i\in U} (\theta_i^k(t) - \hat{\mu}_i(t))| \le \sqrt{\sum_{i\in U}{2C(\delta,q,t)\over N_i(t)}L_2(t)};
\end{eqnarray*}
and $\mathcal{E}_{2,c}$ is the event that for all $t > 0$ and $S,S' \in \mathcal{I}$, there exists $1 \le k \le M(\delta,q,t)$ such that
\begin{equation*}
\sum_{i\in S} \theta_i^k(t) - \sum_{i\in S'} \theta_i^k(t) \ge \sum_{i\in S} \mu_i - \sum_{i\in S'} \mu_i.
\end{equation*}
Roughly speaking, $\mathcal{E}_{0,c}$ means that for any $U \in \mathcal{J}$, the gap between its real mean and its empirical mean lies within the corresponding confidence radius; $\mathcal{E}_{1,c}$ means that for any $U \in \mathcal{J}$, the gap between its empirical mean and the sum of its random samples lies within the corresponding confidence radius; and $\mathcal{E}_{2,c}$ means that for any two super arms $S\ne S'$, at each time step, there is at least one sample set under which the gap between the sums of their random samples is at least the gap between their real means.
We will first prove that $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$ happens with high probability, and then show the correctness guarantee and complexity upper bound under $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$.
\begin{lemma}\label{Lemma_123}
In Algorithm \ref{Algorithm_TSE}, we have that
\begin{equation*}
\Pr[\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}] \ge 1 - \delta.
\end{equation*}
\end{lemma}
\begin{proof}
Note that the random variable $(\hat{\mu}_i(t) - \mu_i)$ is zero-mean and ${1\over 4N_i(t)}$ sub-Gaussian, and for different $i$, the random variables $(\hat{\mu}_i(t) - \mu_i)$'s are independent. Therefore, $\sum_{i\in U} (\hat{\mu}_i(t) - \mu_i)$ is zero-mean and $\sum_{i\in U}{1\over 4N_i(t)}$ sub-Gaussian. Then by the concentration inequality for sub-Gaussian random variables (see details in Appendix \ref{Appendix_D}),
\begin{eqnarray*}
&&\Pr\left[|\sum_{i\in U} (\hat{\mu}_i(t) - \mu_i)| > \sqrt{\sum_{i\in U}{1\over 2N_i(t)}L_1(t)}\right] \\
&\le& 2\exp(-L_1(t))\\
&=& {\delta \over 6|\mathcal{I}|^2t^2}.
\end{eqnarray*}
This implies that
\begin{equation*}
\Pr[\neg \mathcal{E}_{0,c}] \le \sum_{U,t} {\delta \over 6|\mathcal{I}|^2t^2} \le \sum_{t} {\delta \over 6t^2} \le {\delta \over 3},
\end{equation*}
where the second inequality is because $|\mathcal{J}| \le |\mathcal{I}|^2$, and the third inequality is because $\sum_t {1\over t^2} \le 2$.
Similarly, the random variable $(\theta_i^k(t) - \hat{\mu}_i(t))$ is a zero-mean Gaussian random variable with variance ${C(\delta, q, t)\over N_i(t)}$, and for different $i$, the random variables $(\theta_i^k(t) - \hat{\mu}_i(t))$'s are also independent.
Then by concentration inequality,
\begin{eqnarray*}
\!\!\!\!\!\!\!&&\Pr\left[|\sum_{i\in U} (\theta_i^k(t) - \hat{\mu}_i(t))| > \sqrt{\sum_{i\in U}{2C(\delta,q,t)\over N_i(t)}L_2(t)}\right]\\
\!\!\!\!\!\!\!&\le& 2\exp(-L_2(t)) \\
\!\!\!\!\!\!\!&=& {\delta \over 6|\mathcal{I}|^2t^2M(\delta,q,t)}.
\end{eqnarray*}
This implies that
\begin{equation*}
\Pr[\neg \mathcal{E}_{1,c}] \le \sum_{U,t, k} {\delta \over 6|\mathcal{I}|^2t^2M(\delta,q,t)} \le \sum_{U,t} {\delta \over 6|\mathcal{I}|^2t^2} \le {\delta \over 3},
\end{equation*}
where the second inequality is because there are $M(\delta,q,t)$ sample sets in total at time step $t$.
Finally we consider the probability $\Pr[\neg \mathcal{E}_{2,c} | \mathcal{E}_{0,c}]$. In the following, we denote $\Delta_{S,S'} \triangleq \sum_{i\in S} \mu_i - \sum_{i\in S'} \mu_i = \sum_{i\in S\setminus S'} \mu_i - \sum_{i\in S'\setminus S} \mu_i $ as the reward gap (under the real means) between $S$ and $S'$.
For any fixed $S \ne S'$, we denote $A(t) = \sum_{i\in S \setminus S'} {1\over 2N_i(t)}$, $B(t) = \sum_{i\in S' \setminus S} {1\over 2N_i(t)}$ and $C(t) = C(\delta, q,t)$ to simplify notations.
Then under event $\mathcal{E}_{0,c}$, we must have that $\sum_{i\in S \setminus S'} \hat{\mu}_{i}(t) - \sum_{i\in S'\setminus S } \hat{\mu}_{i}(t) \ge (\sum_{i\in S \setminus S'} \mu_{i} - \sqrt{A(t)L_1(t)}) - (\sum_{i\in S'\setminus S } \mu_{i} + \sqrt{B(t)L_1(t)}) \ge \Delta_{S,S'} - \sqrt{A(t)L_1(t)} - \sqrt{B(t)L_1(t)}$.
Since $\sum_{i\in S\setminus S' } \theta_{i}^k(t) - \sum_{i\in S'\setminus S } \theta_{i}^k(t)$ is a Gaussian random variable with mean $\sum_{i\in S \setminus S'} \hat{\mu}_{i}(t) - \sum_{i\in S'\setminus S } \hat{\mu}_{i}(t)$ and variance $2A(t)C(t) + 2B(t)C(t)$, then under event $\mathcal{E}_{0,c}$,
(recall that $\Phi(x,\mu,\sigma^2) = \Pr_{X \sim \mathcal{N}(\mu, \sigma^2)}[X \ge x]$):
\begin{eqnarray}
\!\!\!\!\!\nonumber&&\!\!\!\!\!\Pr\left[\sum_{i\in S} \theta_i^k(t) - \sum_{i\in S'} \theta_i^k(t) \ge \Delta_{S,S'}\right] \\
\nonumber\!\!\!\!\!&=&\!\!\!\!\! \Pr\left[\sum_{i\in S \setminus S'} \theta_i^k(t) - \sum_{i\in S' \setminus S} \theta_i^k(t) \ge \Delta_{S,S'}\right]\\
\!\!\!\!\!\nonumber&=&\!\!\!\!\! \Phi\!\left(\!\Delta_{S,S'}, \!\!\!\sum_{i\in S \setminus S'} \!\!\hat{\mu}_{i}(t) - \!\!\!\!\!\sum_{i\in S'\setminus S } \!\!\hat{\mu}_{i}(t), 2A(t)C(t) + 2B(t)C(t)\!\right)\\
\!\!\!\!\!\nonumber&\ge&\!\!\!\!\! \Phi(\Delta_{S,S'}, \Delta_{S,S'} - \sqrt{A(t)L_1(t)} - \sqrt{B(t)L_1(t)}, \\
\!\!\!\!\!\nonumber&&\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 2A(t)C(t) + \!2B(t)C(t))\\
\!\!\!\!\!\nonumber&=&\!\!\!\!\! \Phi(\sqrt{A(t)L_1(t)} + \!\sqrt{B(t)L_1(t)}, 0, 2A(t)C(t) + \!2B(t)C(t))\\
\!\!\!\!\!\nonumber&=&\!\!\!\!\! \Phi\left(\sqrt{L_1(t)\over C(t)} \cdot {\sqrt{A(t)} + \sqrt{B(t)} \over \sqrt{2A(t)+2B(t)}}, 0, 1\right)\\
\!\!\!\!\!\nonumber&\ge&\!\!\!\!\!\Phi\left(\sqrt{L_1(t)\over C(t)}, 0, 1\right)\\
\!\!\!\!\!\nonumber&=&\!\!\!\!\! q,
\end{eqnarray}
where the last equality holds because we choose $C(t) = {L_1(t) \over \phi^2(q)}$ and $\Phi(\phi(q), 0,1) = q$ (by the definition of $\phi$).
Note that the parameter sets $\{\bm{\theta}^k(t)\}_{k=1}^{M(\delta,q,t)}$ are chosen independently, therefore under event $\mathcal{E}_{0,c}$, we have that
\begin{eqnarray*}
&&\Pr\left[\forall k, \sum_{i\in S} \theta_i^k(t) - \sum_{i\in S'} \theta_i^k(t) < \Delta_{S,S'}\right] \\
&\le& (1-q)^{M(\delta,q,t)} \\
&\le& {\delta \over 12|\mathcal{I}|^2t^2},
\end{eqnarray*}
where the last inequality holds because we choose $M(\delta,q,t) = {1\over q}\log (12|\mathcal{I}|^2t^2/\delta)$ and $(1-q)^{1/q} \le e^{-1}$.
This implies that
\begin{equation*}
\Pr[\neg \mathcal{E}_{2,c} | \mathcal{E}_{0,c}] \le \sum_{t, S, S'} {\delta \over 12|\mathcal{I}|^2t^2} \le \sum_{t} {\delta \over 12t^2} \le {\delta \over 3}.
\end{equation*}
All these show that $\Pr[\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}] \ge 1 - \delta$.
\end{proof}
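The two parameter choices in this proof can also be sanity-checked numerically. The sketch below verifies, for arbitrary placeholder values of $(q, \delta, t, |\mathcal{I}|)$, that $\Phi(\sqrt{L_1(t)/C(t)}, 0, 1) = q$ and that $(1-q)^{M(\delta,q,t)} \le \delta/(12|\mathcal{I}|^2t^2)$:

```python
import math

def Phi01(x):
    """Standard normal survival function Phi(x, 0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

q, delta, t, n_sup = 0.05, 1e-3, 100, 8        # placeholder values
phi_q = 1.6448536269514722                     # phi(0.05), precomputed
L1 = math.log(12 * n_sup ** 2 * t ** 2 / delta)
C = L1 / phi_q ** 2                            # the paper's C(delta, q, t)
M = L1 / q                                     # the paper's M(delta, q, t)

# Step 1: sqrt(L1 / C) = phi(q), so the per-sample probability equals q.
ok_prob = abs(Phi01(math.sqrt(L1 / C)) - q) < 1e-9
# Step 2: (1-q)^M = exp((L1/q) log(1-q)) <= exp(-L1) = delta/(12 |I|^2 t^2).
ok_union = (1 - q) ** M <= delta / (12 * n_sup ** 2 * t ** 2)
```

Both checks pass for any admissible parameter values, since they hold by construction of $C(\delta,q,t)$ and $M(\delta,q,t)$.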
Then it is sufficient to prove that under event $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$, TS-Explore works correctly with complexity upper bound shown in Theorem \ref{Theorem_Explore}.
Firstly, we prove that TS-Explore will output $S^*$. The proof is quite straightforward: if $\hat{S}_t \ne S^*$, then under event $\mathcal{E}_{2,c}$, there exists $k$ such that $\sum_{i\in S^*} \theta^k_i(t) - \sum_{i\in \hat{S}_t} \theta^k_i(t) \ge \Delta_{S^*,\hat{S}_t} > 0$. Therefore $\tilde{S}^k_t \ne \hat{S}_t$, and TS-Explore will not output $\hat{S}_t$. Hence, under event $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$, TS-Explore can only return $S^*$, which finishes the proof of its correctness.
Then we come to bound the complexity of TS-Explore, and we will use the following lemma
in our analysis.
\begin{restatable}{lemma}{LemmaComplexity}\label{Lemma_Complexity}
Under event $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c} \land \mathcal{E}_{2,c}$, a base arm $i$ will not be pulled if $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$.
\end{restatable}
Due to the space limit, we only prove Lemma \ref{Lemma_Complexity} for the case that $(\hat{S}_t = S^*) \lor (\tilde{S}_t = S^*)$
in the main text, and defer the complete proof to Appendix \ref{Appendix_A}.
\begin{proof}
We will prove this lemma by contradiction.
First we consider the case that $(\hat{S}_t = S^* )\land (\tilde{S}_t \ne S^*)$. In this case, $i\in S^* \oplus \tilde{S}_t$, which implies that $\Delta_{i,c} \le \Delta_{S^*, \tilde{S}_t}$. If we choose a base arm $i$ with $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2} \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{S^*, \tilde{S}_t}^2}$ to pull, we know that $\forall j \in S^* \oplus \tilde{S}_t$, $N_j(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{S^*, \tilde{S}_t}^2}$. This implies that $\sqrt{\sum_{j\in S^* \setminus \tilde{S}_t} {2\over N_j(t)}} \le {\Delta_{S^*, \tilde{S}_t}\over 7\sqrt{C(t)L_2(t)}}$ and $\sqrt{\sum_{j\in \tilde{S}_t \setminus S^*} {2\over N_j(t)}} \le {\Delta_{S^*, \tilde{S}_t}\over 7\sqrt{C(t)L_2(t)}}$.
By $\mathcal{E}_{2,c}$, there exists $k$ such that $\sum_{j\in S^* \setminus \tilde{S}_t} \theta_j^k(t) - \sum_{j\in \tilde{S}_t \setminus S^*} \theta_j^k(t) \ge \Delta_{S^*, \tilde{S}_t}$. By $\mathcal{E}_{1,c}$, $|\sum_{j\in S^* \setminus \tilde{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t))| \le \sqrt{\sum_{j\in S^* \setminus \tilde{S}_t}{2C(t)\over N_j(t)}L_2(t)} \le {\Delta_{S^*, \tilde{S}_t}\over 7}$ and similarly $|\sum_{j\in \tilde{S}_t \setminus S^*} (\theta_j^k(t) - \hat{\mu}_j(t))| \le {\Delta_{S^*, \tilde{S}_t}\over 7}$. Hence
\begin{eqnarray*}
\!\!\!\!\!&&\sum_{j\in S^* \setminus \tilde{S}_t} \hat{\mu}_j(t) - \sum_{j\in \tilde{S}_t \setminus S^*} \hat{\mu}_j(t) \\
\!\!\!\!\!&\ge& \sum_{j\in S^* \setminus \tilde{S}_t} \theta_j^k(t) - \sum_{j\in \tilde{S}_t \setminus S^*} \theta_j^k(t)\\
\!\!\!\!\!&&- |\!\!\!\!\!\sum_{j\in S^* \setminus \tilde{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t))| - |\!\!\!\!\!\sum_{j\in \tilde{S}_t \setminus S^*} (\theta_j^k(t) - \hat{\mu}_j(t))|\\
\!\!\!\!\!&\ge&{5\Delta_{S^*, \tilde{S}_t}\over 7}.
\end{eqnarray*}
$\mathcal{E}_{1,c}$ also means $|\sum_{j\in S^* \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \le {\Delta_{S^*, \tilde{S}_t}\over 7}$ and $|\sum_{j\in \tilde{S}_t \setminus S^*} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| \le {\Delta_{S^*, \tilde{S}_t}\over 7}$. Thus we know that
\begin{eqnarray*}
\!\!\!\!\!&&\sum_{j\in S^* \setminus \tilde{S}_t} \theta_j^{k_t^*}(t) - \sum_{j\in \tilde{S}_t \setminus S^*} \theta_j^{k_t^*}(t) \\
\!\!\!\!\!&\ge& \sum_{j\in S^* \setminus \tilde{S}_t} \hat{\mu}_j(t) - \sum_{j\in \tilde{S}_t \setminus S^*} \hat{\mu}_j(t)\\
\!\!\!\!\!&&- |\!\!\!\!\!\sum_{j\in S^* \setminus \tilde{S}_t} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))| - |\!\!\!\!\!\sum_{j\in \tilde{S}_t \setminus S^*} (\theta_j^{k_t^*}(t) - \hat{\mu}_j(t))|\\
\!\!\!\!\!&\ge&{3\Delta_{S^*, \tilde{S}_t}\over 7}.
\end{eqnarray*}
This contradicts the fact that $\tilde{S}_t$ is the optimal super arm under the $k_t^*$-th sample set $\bm{\theta}^{k_t^*}(t)$.
Then we come to the case that $(\hat{S}_t \ne S^*) \land (\tilde{S}_t = S^*)$. In this case, $i\in S^* \oplus \hat{S}_t$, which implies that $\Delta_{i,c} \le \Delta_{S^*, \hat{S}_t}$. If we choose a base arm $i$ with $N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2} \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{S^*, \hat{S}_t}^2}$ to pull, we have that $\sqrt{\sum_{j\in S^* \setminus \hat{S}_t} {2\over N_j(t)}} \le {\Delta_{S^*, \hat{S}_t}\over 7\sqrt{C(t)L_2(t)}}$ and $\sqrt{\sum_{j\in \hat{S}_t \setminus S^*} {2\over N_j(t)}} \le {\Delta_{S^*, \hat{S}_t}\over 7\sqrt{C(t)L_2(t)}}$.
By $\mathcal{E}_{2,c}$, there exists $k$ such that $\sum_{j\in S^* \setminus \hat{S}_t} \theta_j^k(t) - \sum_{j\in \hat{S}_t \setminus S^*} \theta_j^k(t) \ge \Delta_{S^*, \hat{S}_t}$.
By $\mathcal{E}_{1,c}$, $|\sum_{j\in S^* \setminus \hat{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t))| \le {\Delta_{S^*, \hat{S}_t}\over 7}$ and $|\sum_{j\in \hat{S}_t \setminus S^*} (\theta_j^k(t) - \hat{\mu}_j(t))| \le {\Delta_{S^*, \hat{S}_t}\over 7}$. All these imply
\begin{eqnarray*}
\!\!\!\!\!&&\sum_{j\in S^* \setminus \hat{S}_t} \hat{\mu}_j(t) - \sum_{j\in \hat{S}_t \setminus S^*} \hat{\mu}_j(t) \\
\!\!\!\!\!&\ge& \sum_{j\in S^* \setminus \hat{S}_t} \theta_j^k(t) - \sum_{j\in \hat{S}_t \setminus S^*} \theta_j^k(t)\\
\!\!\!\!\!&&- |\!\!\!\!\!\sum_{j\in S^* \setminus \hat{S}_t} (\theta_j^k(t) - \hat{\mu}_j(t))| - |\!\!\!\!\!\sum_{j\in \hat{S}_t \setminus S^*} (\theta_j^k(t) - \hat{\mu}_j(t))|\\
\!\!\!\!\!&\ge&{5\Delta_{S^*, \hat{S}_t}\over 7}.
\end{eqnarray*}
This contradicts the fact that $\hat{S}_t$ is the empirically optimal super arm.
\end{proof}
Lemma \ref{Lemma_Complexity} is similar to Lemma 10 in \cite{Chen2014Combinatorial}. Both of them give an upper bound for the number of pulls on base arm $i$. The key difference is that in \cite{Chen2014Combinatorial}, for an arm set $U \in \mathcal{J}$, the gap between its real mean and its upper confidence bound is $\tilde{O}(\sum_{i\in U} \sqrt{1\over N_i})$, which means that we require all the $N_i$'s to be $\tilde{\Theta}({\operatornamewithlimits{width}^2
\over\Delta_{i,c}^2})$ to make sure that this gap is less than $\Delta_{i,c}$. In our paper, based on event $\mathcal{E}_{0,c} \land \mathcal{E}_{1,c}$, the gap between $U$'s real mean and the sum of random samples in $U$ is $\tilde{O}(\sqrt{\sum_{i\in U}{C(t) \over N_i}})$. Therefore, we only require all the $N_i$'s to be $\tilde{\Theta}({\operatornamewithlimits{width} C(t) \over \Delta_{i,c}^2})$ to make sure that this gap is less than $\Delta_{i,c}$, and this reduces a factor of $\operatornamewithlimits{width}$ in the number of pulls on base arm $i$ (our analysis shows that $C(t)$ is approximately a constant).
In fact, reducing a factor of $\operatornamewithlimits{width}$ in Lemma \ref{Lemma_Complexity} is the key reason that the complexity upper bound of TS-Explore is a factor of $\operatornamewithlimits{width}$ lower than that of the CLUCB policy in \citep{Chen2014Combinatorial}.
The novelty of our proof for Lemma \ref{Lemma_Complexity} mainly lies in the event $\mathcal{E}_{2,c}$ (as well as the mechanism of recording all the $\tilde{\Delta}^k_t$'s and focusing on the largest one), i.e., we show that under event $\mathcal{E}_{2,c}$, $\tilde{S}_t$ has some similar properties as the super arm with the largest upper confidence bound (which is used in LUCB-based policies such as CLUCB).
For example, when $\hat{S}_t$ does not equal the optimal super arm $S^*$, $\mathcal{E}_{2,c}$ tells us that there must exist $k$ such that $\sum_{i\in S^*} \theta_i^k(t) - \sum_{i\in \hat{S}_t} \theta_i^k(t) \ge \Delta_{S^*, \hat{S}_t}$.
Along with the fact $k^*_t = \operatornamewithlimits{argmax}_k \tilde{\Delta}^k_t$, we know $\tilde{\Delta}^{k^*_t }_t \ge \Delta_{S^*, \hat{S}_t}$.
This means that the reward gap between $\hat{S}_t$ and $\tilde{S}_t$ ($= \tilde{S}^{k^*_t }_t$) can be as large as $\Delta_{S^*, \hat{S}_t}$, which implies that $\tilde{S}_t$ is either an insufficiently learned sub-optimal super arm or the optimal super arm (see details in the complete proof in Appendix \ref{Appendix_A}).
This method solves the challenge of the uncertainty in random samples, and allows us to use similar analysis techniques (e.g., \cite{Chen2014Combinatorial}) to prove Lemma \ref{Lemma_Complexity}.
By Lemma \ref{Lemma_Complexity}, if $\forall i, N_i(t) \ge {98\operatornamewithlimits{width} C(t)L_2(t) \over \Delta_{i,c}^2}$, then TS-Explore must terminate (and output the correct answer).
Thus, the complexity $Z$ satisfies
\begin{equation*}
Z \le \sum_{i\in [m]} {98\operatornamewithlimits{width} C(Z)L_2(Z) \over\Delta_{i,c}^2} = 98H_{1,c} C(Z)L_2(Z).
\end{equation*}
For $q \le 0.1$, ${1\over \phi^2(q)} = O({1\over \log{1\over q}})$. Then with $C(Z) = {\log (12|\mathcal{I}|^2Z^2/\delta) \over \phi^2(q)}$, $L_2(Z) = \log(12|\mathcal{I}|^2Z^2M(Z)/\delta)$ and $M(Z) = {\log(12|\mathcal{I}|^2Z^2/\delta)\over q}$, we have that (note that $q \ge \delta$):
\begin{equation}\label{Eq_1}
Z \le O\left(H_{1,c}{\left(\log (|\mathcal{I}|Z) + \log{1\over \delta}\right)^2 \over \log{1\over q}}\right).
\end{equation}
Therefore, after some basic calculations (the details are deferred to Appendix \ref{Appendix_B}), we know that
\begin{equation*}
Z = O\left(H_{1,c}{\left(\log (|\mathcal{I}|H_{1,c}) + \log{1\over \delta}\right)^2 \over \log{1\over q}}\right).
\end{equation*}
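The basic calculation behind this last step can be sketched as follows (our abbreviations; the full details are in Appendix~\ref{Appendix_B}). Write $A := {H_{1,c} \over \log{1\over q}}$ and $B := \log|\mathcal{I}| + \log{1\over \delta},$ so that \eqref{Eq_1} has the self-bounding form $Z \le cA(\log Z + B)^2$ for some universal constant $c$. Using the crude bound $(\log Z)^2 \le 16\sqrt{Z},$ we first get $Z \le 2cA(16\sqrt{Z} + B^2)$ and hence $Z \le O(A^2 + AB^2),$ which gives $\log Z \le O(\log A + B).$ Substituting this back into the self-bounding inequality yields
\begin{equation*}
Z \le O\left(A\left(\log A + B\right)^2\right) = O\left(H_{1,c}{\left(\log (|\mathcal{I}|H_{1,c}) + \log{1\over \delta}\right)^2 \over \log{1\over q}}\right),
\end{equation*}
where the last equality uses $\log A \le \log H_{1,c}$ (since $q \le 0.1$ implies $\log{1\over q} \ge 1$).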
\section{Experiments}\label{Appendix_Experiments}
In this section, we conduct some experiments to compare the complexity performances of efficient learning algorithms (i.e., TS-Explore and CLUCB \cite{Chen2014Combinatorial}) and the optimal but non-efficient algorithm NaiveGapElim \cite{chen2017nearly} in the CMAB setting.
We consider the following combinatorial pure exploration problem instance with parameter $n \in \mathbb{N}^+$, and always choose $q = \delta$ in TS-Explore.
All reported results (mean complexities and their standard deviations) are computed over 100 independent runs.
\begin{problem}
For fixed value $n$, there are $2n$ base arms in total. For the first $n$ base arms, their expected rewards equal 0.1, and for the last $n$ base arms, their expected rewards equal 0.9. There are only two super arms:
$S_1$ contains the first $n$ base arms and $S_2$ contains the last $n$ base arms.
\end{problem}
We first fix $\delta = 10^{-3}$, and compare the complexity of the above algorithms under different $n$'s (Fig. \ref{Figure_1}). We can see that when $n$ increases, the complexities of TS-Explore and NaiveGapElim increase only slightly, while the complexity of CLUCB increases linearly. This accords with our analysis, since $H_{0,c}(n) = H_{1,c}(n) = 2n \cdot {2n \over (0.8n)^2} = 6.25$ is a constant but $H_{2,c}(n) = 2n \cdot {(2n)^2 \over (0.8n)^2} = 12.5n$ is linear in $n$ (here $H_{0,c}(n), H_{1,c}(n), H_{2,c}(n)$ are the values of $H_{0,c}, H_{1,c}, H_{2,c}$ under the problem instance with parameter $n$, respectively).
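The hardness quantities above can be checked directly with the formulas as stated (a small sketch in our own notation; the function name `hardness` is ours):

```python
def hardness(n):
    """H_{0,c} = H_{1,c} and H_{2,c} for the two-super-arm instance, as in the text."""
    gap = 0.8 * n                  # reward gap between S_2 and S_1
    m = 2 * n                      # number of base arms
    h1 = m * (2 * n) / gap ** 2    # H_{0,c}(n) = H_{1,c}(n)
    h2 = m * (2 * n) ** 2 / gap ** 2
    return h1, h2

h1_small, h2_small = hardness(2)     # (6.25, 25.0)
h1_big, h2_big = hardness(100)       # (6.25, 1250.0)
```

So $H_{1,c}$ stays at $6.25$ for every $n$ while $H_{2,c} = 12.5n$ grows linearly, matching the trends in Fig.~\ref{Figure_1}.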
Then we fix $n = 2$, and compare the complexity of the above algorithms under different $\delta$'s (Fig. \ref{Figure_2}).
We can see that when $\delta$ is large, the complexity of TS-Explore decreases as $\delta$ decreases, while when $\delta$ is small, it increases as $\delta$ decreases. Moreover, as $\delta$ decreases, the complexities of TS-Explore and NaiveGapElim increase much more slowly than that of CLUCB. This also accords with our analysis. Note that there is a term $O(H_{1,c}{\log^2 (|\mathcal{I}|H_{1,c}) \over \log{1\over q}})$ in our complexity bound.
Since we choose $q = \delta$, this term decreases as $\delta$ decreases. When $\delta = 10^{-1}$, this term is very large and dominates the complexity, and therefore the complexity decreases as $\delta$ decreases from $10^{-1}$ to $10^{-3}$. When $\delta = 10^{-3}$, the term $O(H_{1,c} \log{1\over \delta})$ becomes dominant, and therefore the complexity increases as $\delta$ decreases from $10^{-3}$ to $10^{-5}$.
\begin{figure}[t]
\centering
\subfigure[$\delta = 10^{-3}$]{ \label{Figure_1}
\includegraphics[width=2.9in]{Figure_1.png}}
\subfigure[$n = 2$]{ \label{Figure_2}
\includegraphics[width=2.9in]{Figure_2.png}}
\caption{Comparison of TS-Explore, CLUCB and NaiveGapElim
}\label{Figure_E1}
\end{figure}
All the experimental results indicate that TS-Explore outperforms CLUCB (especially when the size of the problem is large or the error constraint $\delta$ is small).
On the other hand, the complexity of our efficient algorithm TS-Explore is only a little higher than the optimal but non-efficient algorithm NaiveGapElim.
These results demonstrate the effectiveness of our algorithm.
\section{Conclusions}
In this paper, we explore the idea of Thompson Sampling for solving pure exploration problems in the frequentist setting.
We first propose TS-Explore, an efficient policy that uses random samples to make decisions, and then show that: i) in the combinatorial multi-armed bandit setting, our policy can achieve a lower complexity bound than existing efficient policies; and ii) in the classic multi-armed bandit setting, our policy can achieve an asymptotically optimal complexity bound.
There remain many interesting topics to be further studied, e.g., how to achieve the optimal complexity bound in the combinatorial multi-armed bandit setting by seeking detailed information about the combinatorial structure in TS-Explore; and what are the complexity bounds of using our TS-Explore framework in other pure exploration problems (e.g., dueling bandit and linear bandit). It is also worth studying how to design TS-based pure exploration algorithms for the fixed budget setting.
\section*{Acknowledgement}
This work was supported by the National Key Research and Development Program of China
(2017YFA0700904, 2020AAA0104304),
NSFC Projects (Nos. 62061136001, 62106122, 62076147, U19B2034,
U1811461, U19A2081), Beijing NSF Project (No. JQ19016), Tsinghua-Huawei Joint Research Program, and a grant from
Tsinghua Institute for Guo Qiang.
% arXiv:2301.04800
\section{Introduction} \label{intro}
Trees of complete graphs with random edge weights are important from both theoretical and practical perspectives. For independent and identically distributed (i.i.d.) edge weights with a common cumulative distribution function (cdf)~\(F(.)\) that varies linearly close to zero,~\cite{fre} studied convergence of the weight of the minimum spanning tree (MST) of the complete graph~\(K_n\) on~\(n\) vertices. Later~\cite{ald} studied asymptotics for the \emph{expected} value of the MST weight, when the edge weight distributions follow a power law distribution. The paper~\cite{janson} studied central limit theorems for a scaled and centred version of the MST weight and more recently~\cite{add} studied bounds on the diameter of the MST. The methods involve a combination of graph evolution via Kruskal's agorithm along with a component analysis of random graphs. For MSTs with nonidentical edge weight distributions,~\cite{li} use the Tutte polynomial approach~\cite{ste} to compute expressions for the expected value of~\(MST_n.\)
In Section~\ref{sec_mst} of our paper, we study ``approximate'' MSTs containing~\(O(n)\) edges, obtained by placing random heavy weights in each edge of~\(K_n\) that are not necessarily identically distributed but are uniformly heavy. We use stochastic domination to obtain deviation-type estimates for the minimum weight and use the martingale method to bound the variance (see Theorem~\ref{mst_thm}).
Next we study constrained paths in the integer lattice. Consider the following scenario where each edge in the square lattice \(\mathbb{Z}^d\) is associated with a random passage time and it is of interest to determine the minimum passage time~\(T_n\) between the origin and~\((n,0,\ldots,0).\) The case of independent and identically distributed~(i.i.d.) passage times has been well-studied and detailed results are known regarding the almost sure convergence and convergence in mean of the scaled passage time~\(\frac{T_n}{n}\) (see~\cite{kest1}). Later~\cite{chatterjee} studied central limit theorems for first passage across thin cylinders and recently~\cite{jiang} have studied critical first passage percolation in the triangular lattice.
In many applications, the passage times may not be i.i.d.\ For example, if we model vertices of~\(\mathbb{Z}^d\) as mobile stations and the edge passage times as the delay in sending a packet between two adjacent stations, then depending on external conditions, the edges may have different passage time distributions. In such cases, it is of interest to study convergence properties of the minimum passage time~\(T_n,\) with appropriate centering and scaling.
In Section~\ref{sec_cons_path} of our paper, we state and prove our result (Theorem~\ref{thm1}) regarding the behaviour of the constrained minimum passage times as a function of the edge constraint.
The paper is organized as follows. In Section~\ref{sec_mst}, we state and prove our result regarding the asymptotic behaviour of weighted trees of the complete graph, with edge constraints. Finally, in Section~\ref{sec_cons_path}, we describe the behaviour of constrained minimum passage time paths in the integer lattice~\(\mathbb{Z}^{d}.\)
\setcounter{equation}{0}
\renewcommand\theequation{\thesection.\arabic{equation}}
\section{Edge Constrained Minimum Weight Trees}\label{sec_mst}
For~\(n \geq 1,\) let~\(K_n \) be the complete graph with vertex set~\(\{1,2,\ldots,n\}.\) Let~\(\{w(i,j)\}_{1 \leq i < j \leq n}\) be independent random variables with corresponding cumulative distribution functions (cdfs)~\(\{F_{i,j}\}_{1 \leq i < j \leq n}\) and for~\(1 \leq j < i \leq n,\) set~\(w(i,j) := w(j,i).\) We define~\(w(e) := w(i,j)\) to be the \emph{weight} of the edge~\(e = (i,j) \in K_n\) and assume throughout that~\(w(e) \leq 1,\) for simplicity.
A tree in~\(K_n\) is a connected acyclic subgraph. For a tree~\({\cal T}\) with vertex set~\(\{v_1,\ldots,v_t\},\) the weight of~\({\cal T}\) is the sum of the weights of the edges in~\({\cal T};\) i.e.,~\(W({\cal T}) := \sum_{e \in {\cal T}} w(e).\) For~\(1 \leq \tau \leq n-1\) we define
\begin{equation}\label{min_weight_tree}
M_n = M_n(\tau) := \min_{{\cal T}} W({\cal T}),
\end{equation}
where the minimum is taken over all trees~\({\cal T} \subset K_n\) having at least~\(\tau\) edges.
The following is the main result of this section. Constants throughout do not depend on~\(n.\)
\begin{thm}\label{mst_thm} Suppose~\(\tau \geq \rho n\) for some~\(0 < \rho \leq 1\) and also suppose there are positive constants~\(D_1 \leq D_2\) and~\(0 < \alpha < 1\) such that
\begin{equation}\label{dif}
D_1 x^{\frac{1}{\alpha}} \leq F_{i,j}(x) \leq D_2x^{\frac{1}{\alpha}}
\end{equation}
for~\(0 \leq x \leq 1.\) There are positive constants~\(C_i, 1 \leq i \leq 3\) such that
\begin{equation} \label{mst_dev}
\mathbb{P}\left(C_1n^{1-\alpha} \leq M_n \leq C_2 n^{1-\alpha}\right) \geq 1-e^{-C_3n^{1-\alpha}}
\end{equation}
and~\(C_1n^{1-\alpha} \leq \mathbb{E}M_n \leq C_2 n^{1-\alpha}.\) Moreover,~\(var(M_n) \leq 2n.\)
\end{thm}
To prove Theorem~\ref{mst_thm}, we use the following preliminary Lemma regarding the behaviour of the exponential moments of small edge weights. For~\(1 \leq j \leq n\) and distinct deterministic integers~\(1 \leq a_1,\ldots,a_j \leq n\) let
\begin{equation}\label{yj_def}
Y_j = Y_j(a_1,\ldots,a_j) := \min_{a \notin \{a_1,\ldots,a_{j}\}} w(a_{j},a).
\end{equation}
\begin{lem}\label{yj_lem} There are positive constants~\(C_1\) and~\(C_2\) not depending on the choice of~\(\{a_i\}\) or~\(j\) such that for any~\(1 \leq j \leq n-1,\)
\begin{equation}\label{eyj_est_lem}
\frac{C_1}{(n-j)^{\alpha}} \leq \mathbb{E}Y_j \leq \frac{C_2}{(n-j)^{\alpha}}.
\end{equation}
Moreover for every~\(s > 1\) there are constants~\(K,C \geq 1\) not depending on the choice of~\(\{a_i\}\) or~\(j\) such that for all~\(1 \leq j \leq n-K\)
\begin{equation}\label{eyj_est_lem2}
\mathbb{E}e^{sY_j} \leq \exp\left(\frac{C}{(n-j)^{\alpha}}\right).
\end{equation}
\end{lem}
\emph{Proof of Lemma~\ref{yj_lem}}: In what follows, we use the following standard deviation estimate. Suppose~\(W_i, 1 \leq i \leq m\) are independent Bernoulli random variables satisfying~\(\mathbb{P}(W_1=1) = 1-\mathbb{P}(W_1~=~0) \leq \mu_2.\) For any~\(0 < \epsilon < \frac{1}{2},\)
\begin{equation}\label{std_dev_up}
\mathbb{P}\left(\sum_{i=1}^{m} W_i > m\mu_2(1+\epsilon) \right) \leq \exp\left(-\frac{\epsilon^2}{4}m\mu_2\right).
\end{equation}
For a proof of~(\ref{std_dev_up}), we refer to Corollary A.1.14, p.~312 of Alon and Spencer (2008).
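As a quick numerical illustration of~(\ref{std_dev_up}) (a hedged sketch; the parameter values \(m=1000,\) \(\mu_2=0.1,\) \(\epsilon=0.25\) and the trial count are our own choices):

```python
import math
import random

random.seed(1)

m, mu2, eps = 1000, 0.1, 0.25
threshold = m * mu2 * (1 + eps)
bound = math.exp(-eps ** 2 * m * mu2 / 4)   # right-hand side of the estimate

# Monte Carlo estimate of the left-hand side for Bernoulli(mu2) variables.
trials = 2000
exceed = 0
for _ in range(trials):
    s = sum(random.random() < mu2 for _ in range(m))
    exceed += s > threshold
empirical = exceed / trials
```

With these parameters the bound evaluates to about \(e^{-1.5625} \approx 0.21,\) while the empirical tail probability is far smaller; the estimate is convenient rather than sharp.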
We first find the lower bound for~\(\mathbb{E}Y_j\) and then upper bound~\(\mathbb{E}Y_j\) and~\(\mathbb{E}e^{sY_j}\) for constant~\(s > 1\) in that order. The term~\(Y_j\) is the minimum of~\(n-j\) edge weights and so for~\(0 < x < (n-j)^{\alpha}\) we use the upper bound for the cdfs in~(\ref{dif}) to get that~\(\mathbb{P}\left(Y_j > \frac{x}{(n-j)^{\alpha}}\right) \geq \left(1-D_2\frac{x^{\frac{1}{\alpha}}}{n-j}\right)^{n-j},\)
where~\(D_2 \geq 1\) is as in~(\ref{dif}).
Thus
\begin{equation}\label{ey_temp2}
(n-j)^{\alpha} \mathbb{E}Y_j = \int_0^{(n-j)^{\alpha}} \mathbb{P}\left(Y_j > \frac{x}{(n-j)^{\alpha}}\right) dx \geq \int_{0}^{(n-j)^{\alpha}} \left(1-D_2\frac{x^{\frac{1}{\alpha}}}{n-j}\right)^{n-j} dx.
\end{equation}
To evaluate the integral in~(\ref{ey_temp2}), we use~\(1-y \geq e^{-2y}\) for all~\(0 < y < \frac{1}{2}.\) Letting~\(y = \frac{D_2 x^{\frac{1}{\alpha}}}{n-j}\) for~\(0 < x < \left(\frac{n-j}{2D_2}\right)^{\alpha}\) we then have~\((1-y)^{n-j} \geq e^{-2D_2x^{\frac{1}{\alpha}}}\) and substituting this in~(\ref{ey_temp2}) and using~\(D_2 \geq 1,\) we get
\[(n-j)^{\alpha} \mathbb{E}Y_j \geq \int_{0}^{\left(\frac{n-j}{2D_2}\right)^{\alpha}} e^{-2D_2x^{\frac{1}{\alpha}}} dx \geq \int_{0}^{\left(\frac{1}{2D_2}\right)^{\frac{1}{\alpha}}}e^{-2D_2x^{\frac{1}{\alpha}}} dx =: C_1\] for all~\(1 \leq j \leq n-1.\)
For upper bounding~\(\mathbb{E}Y_j,\) we again use the fact that the term~\(Y_j\) is the minimum of~\(n-j\) edge weights and so for~\(0 < x < (n-j)^{\alpha}\) we use the lower bound for the cdfs in~(\ref{dif}) to get~\(\mathbb{P}\left(Y_j > \frac{x}{(n-j)^{\alpha}}\right) \leq \left(1-D_1\frac{x^{\frac{1}{\alpha}}}{n-j}\right)^{n-j} \leq e^{-D_1x^{\frac{1}{\alpha}}}.\)
Thus
\begin{equation}\label{eyj_est}
(n-j)^{\alpha}\mathbb{E}Y_j = \int_0^{(n-j)^{\alpha}} \mathbb{P}\left(Y_j > \frac{x}{(n-j)^{\alpha}}\right) dx \leq \int_0^{\infty} e^{-D_1x^{\frac{1}{\alpha}}} dx =: C_2,
\end{equation}
a finite positive constant not depending on the choice of~\(\{a_i\}.\)
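In fact the constant in~(\ref{eyj_est}) can be evaluated in closed form: substituting \(u = D_1 x^{\frac{1}{\alpha}}\) gives \(\int_0^{\infty} e^{-D_1x^{\frac{1}{\alpha}}}\,dx = \alpha\,\Gamma(\alpha)\,D_1^{-\alpha}.\) A small numerical sketch confirming this (the parameter values \(\alpha = 0.5,\) \(D_1 = 1\) and the function name `c2_numeric` are ours):

```python
import math

def c2_numeric(alpha, d1, upper=10.0, steps=200_000):
    """Midpoint Riemann-sum approximation of the integral defining C_2."""
    h = upper / steps
    return sum(math.exp(-d1 * ((i + 0.5) * h) ** (1 / alpha)) * h
               for i in range(steps))

alpha, d1 = 0.5, 1.0
approx = c2_numeric(alpha, d1)
exact = alpha * math.gamma(alpha) * d1 ** (-alpha)   # = sqrt(pi)/2 here
```

For \(\alpha = \frac{1}{2}\) and \(D_1 = 1\) the integral is the Gaussian integral \(\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}.\)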
To compute~\(\mathbb{E}e^{sY_j}\) we split~\(\mathbb{E}e^{sY_j} = I_1 + I_2,\)
where~\(I_1 = \mathbb{E} e^{sY_j} 1\hspace{-2.3mm}{1}\left(Y_j \leq \frac{1}{2s}\right)\) and~\(I_2 = \mathbb{E}e^{sY_j} 1\hspace{-2.3mm}{1}\left(Y_j > \frac{1}{2s}\right)\)
and estimate each term separately. To evaluate~\(I_1,\) we bound~\(e^{x} \leq 1+2x\) for~\(x \leq \frac{1}{2}\) and set~\(x= sY_j \leq \frac{1}{2}\) to get that~\(e^{sY_j} \leq 1+2sY_j.\) Thus
\begin{equation}\label{i1_est}
I_1 \leq 1+2s\mathbb{E}Y_j \leq 1+\frac{2sC_2}{(n-j)^{\alpha}},
\end{equation}
using~(\ref{eyj_est}).
To evaluate~\(I_2,\) we recall that~\(Y_j = \min_{a \notin \{a_1,\ldots,a_{j}\}} w(a_{j},a) \leq 1\) is the minimum of~\(n-j\) independent edge weights.
Using the lower bound for the cdfs in~(\ref{dif}) we have~\(\mathbb{P}\left(w(a_j,a) > \frac{1}{2s}\right) \leq 1-\frac{D_1}{(2s)^{\frac{1}{\alpha}}} <1,\)
since~\(D_1 \leq 1\) and~\(s~>~1.\) Setting~\(e^{-\theta} = 1-\frac{D_1}{(2s)^{\frac{1}{\alpha}}},\) we therefore have~\(\theta > 0\) and that~\(\mathbb{P}\left(Y_j > \frac{1}{2s}\right) \leq e^{-\theta(n-j)}.\) Finally using~\(Y_j \leq 1\) (since all edge weights are at most one) we get
\begin{equation}\label{i2_est}
I_2 = \mathbb{E}e^{sY_j}1\hspace{-2.3mm}{1}\left(Y_j > \frac{1}{2s}\right) \leq e^{s}\mathbb{P}\left(Y_j > \frac{1}{2s}\right) \leq e^{s}e^{-\theta(n-j)} \leq \frac{e^{s}C_2}{(n-j)^{\alpha}},
\end{equation}
where~\(C_2 > 0\) is as in~(\ref{i1_est}), provided~\(n-j \geq K+1\) and~\(K = K(s,C_2)\) is large. The final estimate in~(\ref{i2_est}) is obtained using~\(x^{\alpha} e^{-\theta x} \longrightarrow 0\) as~\(x \rightarrow \infty.\)
From~(\ref{i1_est}) and~(\ref{i2_est}), we therefore get for~\(1 \leq j\leq n-K\) that
\begin{equation}\nonumber
\mathbb{E}e^{sY_j} \leq 1+\frac{2sC_2}{(n-j)^{\alpha}} + \frac{e^{s}C_2}{(n-j)^{\alpha}} \leq \exp\left(\frac{2sC_2+e^{s}C_2}{(n-j)^{\alpha}}\right),
\end{equation}
proving~(\ref{eyj_est_lem2}).~\(\qed\)
\emph{Proof of Theorem~\ref{mst_thm}}: We obtain the lower deviation bound by counting the number of edges with large enough weight and the upper deviation bound by constructing a spanning path with low weight analogous to Aldous (1990). The expectation bounds then follow from~(\ref{mst_dev}). To compute the variance, we use the martingale difference method. In fact, from the variance bound, we get that~\(var\left(\frac{M_n}{n^{1-\alpha}}\right) \leq \frac{C}{n^{1-2\alpha}}\) and so if~\(\alpha < \frac{1}{2},\) then~\(\frac{M_n-\mathbb{E}M_n}{n^{1-\alpha}}\) converges to zero in probability. Details follow.
We begin with the proof of the lower deviation bound. For~\(\gamma > 0\) a small constant, let~\(R_{tot} := \sum_{e \in K_n} 1\hspace{-2.3mm}{1}\left(w(e) < \left(\frac{\gamma}{n}\right)^{\alpha}\right)\) be the number of edges of weight at most~\(\left(\frac{\gamma}{n}\right)^{\alpha}.\)
We estimate~\(R_{tot}\) using the standard deviation bound~(\ref{std_dev_up}).
First, we have from the bounds for the cdfs in~(\ref{dif}) that~\(\mathbb{P}\left(w(e) < \left(\frac{\gamma}{n}\right)^{\alpha}\right) < \frac{D_2\gamma}{n}\) and since the edge weights are independent, we use~(\ref{std_dev_up}) with~\(m = {n \choose 2},\mu_2 = \frac{D_2\gamma}{n}\) and~\(\epsilon =\frac{1}{4}\)
to get that
\[\mathbb{P}\left(R_{tot} > \frac{5mD_2\gamma}{4n}\right) \leq \exp\left(-m\frac{D_2\gamma}{64n}\right) \leq \exp\left(-\frac{n D_2 \gamma}{256}\right),\]
since~\(m = \frac{n(n-1)}{2} > \frac{n^2}{4}.\) Let~\({\cal T}_n\) be any tree with weight~\(M_n\) and containing at least~\(\tau \geq \rho n\) edges. Using~\(m < \frac{n^2}{2},\) we get that with probability at least\\\(1-e^{-\frac{nD_2\gamma}{256}},\) the weight of~\({\cal T}_n\) is at least
\[\left(\rho n-\frac{5mD_2\gamma}{4n}\right)\cdot \left(\frac{\gamma}{n}\right)^{\alpha} \geq \left(\rho n-\frac{5nD_2\gamma}{8}\right)\cdot \left(\frac{\gamma}{n}\right)^{\alpha} \geq Cn^{1-\alpha}\] for some constant~\(C > 0,\) provided~\(\gamma >0\) is small. This completes the proof of the lower deviation bound in~(\ref{mst_dev}).
For the upper deviation bound, we consider the spanning path obtained by an incremental approach similar to Aldous (1990). Let~\(i_1 =1\) and among all edges with endvertex~\(i_1,\) let~\(i_2\) be the index such that~\(w(i_1,i_2)\) has the least weight. Similarly, among all edges with endvertex~\(i_2\) and other endvertex outside~\(\{i_1,i_2\},\) let~\(i_3\) be such that~\(w(i_2,i_3)\) has the least weight. Continuing this way, the path~\({\cal P}_{iter} := (i_1,\ldots,i_n)\)
is a spanning path containing all the vertices and so letting~\(Z_j = w(i_{j},i_{j+1})\)
be the weight of the~\(j^{th}\) edge in~\({\cal P}_{iter},\) we have~\(M_{n} \leq W({\cal P}_{iter}) = \sum_{j=1}^{n-1} Z_j.\) For~\(s > 0\)
we therefore have
\begin{equation}\label{mst_z}
\mathbb{E}e^{sM_n} \leq \mathbb{E}e^{\sum_{j=1}^{n-1}sZ_j}
\end{equation}
and in what follows, we find an upper bound for the right hand side of~(\ref{mst_z}).
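The incremental path construction above can be sketched as follows (a minimal version with our own toy weight matrix; the function name `greedy_spanning_path` is ours and any symmetric weights in~\((0,1]\) would do):

```python
import random

random.seed(2)
n = 6
# Symmetric random edge weights w(i, j) in (0, 1) for the complete graph K_n.
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = random.random()

def greedy_spanning_path(start=0):
    """From the current endpoint, always append the cheapest unvisited vertex."""
    path = [start]
    visited = {start}
    while len(path) < n:
        cur = path[-1]
        nxt = min((v for v in range(n) if v not in visited),
                  key=lambda v: w[cur][v])
        visited.add(nxt)
        path.append(nxt)
    return path

p = greedy_spanning_path()
weight = sum(w[p[j]][p[j + 1]] for j in range(n - 1))
```

Any spanning path is in particular a tree with \(n-1 \geq \tau\) edges, so its weight dominates \(M_n;\) the proof bounds its exponential moments edge by edge via Lemma~\ref{yj_lem}.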
Let~\(a_1 := 1\) and~\(a_l, 2 \leq l \leq j-1\) be deterministic numbers
and suppose the event~\(\{i_1=a_1,\ldots,i_{j-1}=a_{j-1}\}\) occurs
so that~\(Z_l=w(a_l,a_{l+1})\) for~\(1 \leq l \leq j-1\) and~\(Z_j = Y_j = Y_j(a_1,\ldots,a_j) = \min_{a \notin \{a_1,\ldots,a_{j}\}} w(a_{j},a)\)
is as in~(\ref{yj_def}). The event~\(\{i_1=a_1,\ldots,i_{j-1}=a_{j-1}\}\) and the random variables~\(w(a_l,a_{l+1}), 1 \leq l \leq j-1\)
depend only on the state of edges having at least
one endvertex in~\(\{a_1,\ldots,a_{j-1}\}.\) On the other hand, the random variable~\(Y_j\)
depends only on the state of edges having both endvertices in~\(\{1,\ldots,n\} \setminus \{a_1,\ldots,a_{j-1}\}.\)
Thus
\begin{eqnarray}
&&\mathbb{E}e^{s\sum_{l=1}^{j} Z_l}1\hspace{-2.3mm}{1}(i_1=a_1,\ldots,i_{j-1} = a_{j-1}) \nonumber\\
&&\;\;\;\;= \mathbb{E}e^{sY_j} e^{\sum_{l=1}^{j-1} sZ_l} 1\hspace{-2.3mm}{1}(i_1=a_1,\ldots,i_{j-1}=a_{j-1}) \nonumber\\
&&\;\;\;\;= \mathbb{E}e^{sY_j} \mathbb{E}e^{\sum_{l=1}^{j-1} sZ_l} 1\hspace{-2.3mm}{1}(i_1=a_1,\ldots,i_{j-1}=a_{j-1}).\label{yj_exp_one}
\end{eqnarray}
Using~(\ref{eyj_est_lem2}) we have~\(\mathbb{E}e^{sY_j} \leq \exp\left(\frac{C}{(n-j)^{\alpha}}\right)\)
for all~\(1 \leq j \leq n-K,\) where~\(K\) and~\(C\) do not depend on the choice of~\(\{a_i\}.\)
Thus summing~(\ref{yj_exp_one}) over all possible~\(a_1,\ldots,a_{j-1},\) we get~\(\mathbb{E}e^{s\sum_{l=1}^{j} Z_l} \leq \exp\left(\frac{C}{(n-j)^{\alpha}}\right)\mathbb{E}e^{s\sum_{l=1}^{j-1} Z_l}\)
and continuing iteratively, we get~\(\mathbb{E}e^{s\sum_{l=1}^{j} Z_l} \leq \exp\left(C\sum_{l=1}^{j}\frac{1}{(n-l)^{\alpha}}\right)\)
for~\(n-j \geq K.\) For~\(n-j < K,\) we use the bound~\(\mathbb{E}e^{sY_j} \leq e^{s}\)
since~\(Y_j \leq 1\) (all the edge weights are at most one) and argue as before to get that
\begin{equation}\label{zj_rec3}
\mathbb{E}e^{s\sum_{l=1}^{n-1} Z_l} \leq \exp\left(C\sum_{l=1}^{n-K}\frac{1}{(n-l)^{\alpha}}\right)e^{sK}.
\end{equation}
Comparing with integrals, the term~\[\sum_{l=1}^{n-K}\frac{1}{(n-l)^{\alpha}} =\sum_{j = K+1}^{n-1} \frac{1}{j^{\alpha}} \leq C_3 \int_{K}^{n-1} \frac{1}{x^{\alpha}}dx \leq C_4 n^{1-\alpha}\] for some positive constants~\(C_3,C_4.\) We therefore get from~(\ref{zj_rec3}) and~(\ref{mst_z})
that\\\(\mathbb{E}e^{sM_n} \leq e^{C_5n^{1-\alpha}}\) for some constant~\(C_5 = C_5(s).\) Therefore by Chernoff estimate,
we have~\(\mathbb{P}(M_n \geq C_6n^{1-\alpha}) \leq e^{-C_7 n^{1-\alpha}}\) for some positive constants~\(C_6,C_7.\) This completes the proof of the upper deviation bound in~(\ref{mst_dev}).
Finally, the lower bound on the expectation~\(\mathbb{E}M_n\) follows directly from the lower deviation bound~(\ref{mst_dev}). For the expectation upper bound, we use the fact that the edge weights are at most one and so total weight of any tree containing at least~\(\rho n\) edges is at most~\(n.\) Consequently from the upper deviation bound in~(\ref{mst_dev}), we get that~\(\mathbb{E}M_n \leq C_2 n^{1-\alpha} + n\cdot e^{-Cn^{1-\alpha}} \leq 2C_2n^{1-\alpha}.\) The proof of the variance bound is analogous to the pivotal edge argument in Kesten (1993) together with the fact that the number of edges in a spanning tree is at most~\(n.\) This completes the proof of the Theorem.~\(\qed\)
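A minimal sketch of the variance computation, via the edge-resampling (Efron--Stein) form of the pivotal-edge idea (this is our own sketch; the martingale-difference details are as in Kesten (1993)): enumerate the edges of~\(K_n\) as~\(e_1,\ldots,e_N\) with~\(N = {n \choose 2},\) and let~\(M_n^{(i)}\) be the minimum weight recomputed after independently resampling~\(w(e_i)\) from its own cdf. If~\(e_i\) belongs to neither of the two optimizing trees~\({\cal T}_n\) and~\({\cal T}_n^{(i)}\) (before and after the resampling), then~\(M_n = M_n^{(i)};\) otherwise~\(|M_n - M_n^{(i)}| \leq 1\) since all edge weights are at most one. The Efron--Stein inequality then gives
\begin{equation*}
var(M_n) \leq \frac{1}{2}\sum_{i=1}^{N} \mathbb{E}\left(M_n - M_n^{(i)}\right)^2 \leq \frac{1}{2}\sum_{i=1}^{N}\left(\mathbb{P}\left(e_i \in {\cal T}_n\right) + \mathbb{P}\left(e_i \in {\cal T}_n^{(i)}\right)\right) \leq \mathbb{E}\#{\cal T}_n \leq n,
\end{equation*}
where the penultimate inequality uses that~\({\cal T}_n^{(i)}\) has the same distribution as~\({\cal T}_n\) and the last uses that a tree in~\(K_n\) has at most~\(n-1\) edges; this is consistent with the bound~\(var(M_n) \leq 2n\) of the Theorem.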
\setcounter{equation}{0}
\renewcommand\theequation{\thesection.\arabic{equation}}
\section{Edge Constrained Minimum Passage Time Paths} \label{sec_cons_path}
Consider the square lattice \(\mathbb{Z}^d,\) where two vertices~\(w_1 = (w_{1,1},\ldots,w_{1,d})\) and~\(w_2 = (w_{2,1},\ldots,w_{2,d})\) are \emph{adjacent} if~\(\sum_{i=1}^{d}|w_{1,i} - w_{2,i}| = 1\) and adjacent vertices are joined together by an edge. Let~\(\{q_i\}_{i \geq 1}\) denote the set of edges. Each edge~\(q_i\) is equipped with a random passage time \(t(q_i)\) and we define the random sequence~\((t(q_1),t(q_2),\ldots)\) on the probability space~\((\Omega, {\cal F}, \mathbb{P}).\)
A \emph{path}~\(\pi\) is a sequence of distinct adjacent vertices~\((w_1,\ldots,w_{r+1}).\) If~\(e_i, 1 \leq i \leq r\) is the edge with endvertices~\(w_i\)
and~\(w_{i+1},\) then we denote~\(\pi = (e_1,...,e_r).\) By definition,~\(\pi\) is self-avoiding and~\(w_1\) and~\(w_{r+1}\) are said to be the \emph{endvertices} of~\(\pi.\) The length of~\(\pi\) is the number of edges in~\(\pi\) and the passage time of~\(\pi\) is defined as~\(T(\pi) := \sum_{i=1}^{r} t(e_i).\)
\begin{definition}
For~\(k \geq 1\) we define the~\(k-\)\emph{constrained} minimum passage time between the origin and the vertex~\((n,\mathbf{0})\) as~\(T_n(k) := \min_{\pi}T(\pi),\) where the minimum is over all paths~\(\pi\) of length at most~\(k\) and with endvertices~\((0,\mathbf{0})\) and~\((n,\mathbf{0}).\)
We define the \emph{unconstrained} minimum passage time as~\(T_n := \inf_{k \geq 1} T_n(k).\)
\end{definition}
By definition~\(T_n(k) \downarrow T_n\) a.s.\ as~\(k \rightarrow \infty.\) In this section, we are primarily interested in studying how~\(T_n(k)\) varies as the constraint parameter~\(k\) increases and also how fast~\(T_n(k)\) converges to~\(T_n.\)
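On any finite weighted graph, \(T_n(k)\) is a hop-constrained shortest path and can be computed by a Bellman--Ford style recursion, one relaxation round per allowed edge (a minimal sketch on our own toy graph; the function name `constrained_time` and the example weights are assumptions):

```python
INF = float("inf")

def constrained_time(n_nodes, edges, src, dst, k):
    """Min total passage time from src to dst over paths with at most k edges."""
    d = [INF] * n_nodes
    d[src] = 0.0
    for _ in range(k):                   # one relaxation round per hop
        nd = d[:]
        for u, v, t in edges:
            nd[v] = min(nd[v], d[u] + t)
            nd[u] = min(nd[u], d[v] + t)  # edges are undirected
        d = nd
    return d[dst]

# Toy graph: direct route 0-1-2 (times 2.0, 2.0) versus a cheaper
# three-edge detour 0-3-4-2 (times 0.5 each).
edges = [(0, 1, 2.0), (1, 2, 2.0), (0, 3, 0.5), (3, 4, 0.5), (4, 2, 0.5)]
t2 = constrained_time(5, edges, 0, 2, 2)   # at most 2 edges: forced direct
t3 = constrained_time(5, edges, 0, 2, 3)   # detour becomes admissible
```

The monotone convergence \(T_n(k) \downarrow T_n\) is visible here: the value is nonincreasing in \(k\) and freezes once \(k\) exceeds the hop length of an optimal unconstrained path.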
The following are the main results of this section. Throughout constants do not depend on~\(n.\)
\begin{thm} \label{thm1} Suppose
\begin{equation}\label{sup_i}
\sup_{i \geq 1} \mathbb{P}(t(q_i) \leq \epsilon) \longrightarrow 0 \text{ as }\epsilon \downarrow 0 \text{ and }\mu_2 := \sup_{i \geq 1} \mathbb{E}t^2(q_i) < \infty.
\end{equation}
\((a)\) There are constants~\(C_1,C_2 > 0\) such that for every~\(k \geq n:\)\\
\begin{equation}\label{bama}
\mathbb{P}\left(C_1n \leq T_n \leq T_n(k) \leq C_2 n\right) \geq 1- \frac{C_2}{n}, \;\;\;\;\;\;\;var(T_n(k)) \leq C_2n
\end{equation}
and~\(C_1n \leq \mathbb{E}T_n \leq \mathbb{E}T_n(k) \leq C_2n.\)\\
\((b)\) There exists a constant~\(C_3 > 0\) such that if~\(k \geq C_3n\) then\\\(\mathbb{P}(T_n \neq T_n(k)) \leq \frac{C_3}{k}.\) If~\(k \geq n^{1+\epsilon}\) for some~\(\epsilon >0,\) then both~\(\frac{T_n(k)-\mathbb{E}T_n(k)}{n}\) and~\(\frac{T_n-\mathbb{E}T_n}{n}\) converge to zero a.s.\ as~\(n \rightarrow \infty.\)\\
\((c)\) If the edge weights are uniformly square integrable in the sense that\\\(\sup_{i \geq 1} \mathbb{E}t^2(q_i)1\hspace{-2.3mm}{1}(t(q_i) \geq M) \longrightarrow 0\) as~\(M \rightarrow \infty,\) then~\(var\left(\frac{T_n}{n}\right) \longrightarrow 0\) as~\(n \rightarrow \infty.\)\\If~\(\sup_{i \geq 1} \mathbb{E}t^p(q_i) < \infty\) for some~\(p > 2,\) then~\(var(T_n) \leq Cn\) for some constant \(C > 0.\)
\end{thm}
\emph{Proof of Theorem~\ref{thm1}\((a)\)}: Let~\(\mu := \sup_{f} \mathbb{E}t(f)\) be the maximum expected passage time of an edge. We begin by showing that there exists a constant~\(0 < \beta \leq \mu\) such that for any integer~\(m \geq 1\) and any path~\(\pi\) containing~\(m\) edges,
\begin{equation}\label{t_pi_ax}
\mathbb{P}\left(T(\pi) \leq \beta m\right) \leq e^{-dm}
\end{equation}
for some positive constant \(\beta = \beta(d) \leq \mu,\) not depending on~\(m\) or~\(\pi.\) Here~\(d\) is the dimension of the integer lattice under consideration.
Indeed, let~\(\pi = (e_1,\ldots,e_m)\) so that~\(T(\pi) = \sum_{i=1}^{m} t(e_i).\) Using the Chernoff bound we obtain for~\(\delta,s > 0\) that
\begin{equation}
\mathbb{P}(T(\pi) \leq \delta m) = \mathbb{P}\left(\sum_{i=1}^{m}t(e_i) \leq \delta m\right) \leq e^{s\delta m}\prod_{i=1}^{m}\mathbb{E}\left(e^{-st(e_i)}\right).\label{y_1_eq1}
\end{equation}
For a fixed \(\eta > 0,\) we write~\(\mathbb{E}e^{-st(e_i)} = \int_{t(e_i) < \eta} e^{-st(e_i)} d\mathbb{P} + \int_{t(e_i) \geq \eta} e^{-st(e_i)} d\mathbb{P} \)
and use~\[\int_{t(e_i) < \eta} e^{-st(e_i)} d\mathbb{P} \leq \mathbb{P}(t(e_i) < \eta) \text{ and } \int_{t(e_i) \geq \eta} e^{-st(e_i)} d\mathbb{P} \leq e^{-s\eta} \]
to get that~\[\mathbb{E}e^{-st(e_i)} \leq \mathbb{P}(t(e_i) < \eta) + e^{-s\eta}.\]
By the first condition in~(\ref{sup_i}), we choose~\(\eta > 0\) small so that~\(\mathbb{P}(t(e_i) < \eta) \leq \frac{e^{-6\epsilon}}{2}\) for every edge~\(e_i.\) Fixing such an~\(\eta\) we choose~\(s = s(\eta,\epsilon) > 0\) large so that the second term~\(e^{-s\eta} < \frac{e^{-6\epsilon}}{2}.\) This implies that~\(\mathbb{E}e^{-st(e_i)} \leq e^{-6\epsilon}\) and so from~(\ref{y_1_eq1}) we then get that~\(\mathbb{P}(T(\pi) \leq \delta m) \leq e^{s\delta m} e^{-6\epsilon m} \leq e^{-2\epsilon m}\) for all \(m \geq 1,\) provided \(\delta = \delta(s,\epsilon) > 0\) is small. Choosing~\(\epsilon = \frac{d}{2}\) and setting~\(\beta := \delta,\) this completes the proof of~(\ref{t_pi_ax}).
Next, for integer~\(m \geq 1\) define the event~\(E_m\) as
\begin{equation}\label{e_k_def}
E_{m} := \bigcap_{r \geq \frac{3\mu}{\beta} m}\;\;\bigcap_{\pi} \left\{T(\pi) \geq \beta r\right\}
\end{equation}
where the second intersection is over all paths with origin as an endvertex and consisting of~\(r\) edges. Thus~\(E_m\) is the event that every path \(\pi\) with origin as an endvertex and consisting of \(r \geq \frac{3 \mu}{\beta}m\) edges has passage time~\(T(\pi) \geq \beta r.\) Since there are at most \((2d)^{r}\) paths of length~\(r\) starting from the origin, the estimate~(\ref{t_pi_ax}) gives
\begin{equation}\label{a_0k}
\mathbb{P}(E_{m}^c)\leq \sum_{r \geq 3\mu\beta^{-1} m} (2d)^{r}e^{-dr} \leq \sum_{r \geq 3\mu\beta^{-1} m} (2e^{-1})^{r} \leq \frac{e^{-\delta m}}{1-2e^{-1}}
\end{equation}
for all \(m \geq 1\) and some positive constant~\(\delta = \delta(d,\mu),\) not depending on~\(m.\) Here, the second inequality in~(\ref{a_0k}) is obtained using the fact that the function~\(xe^{-x} \) attains its maximum at \(x = 1\) and so \(2d e^{-d} \leq 2e^{-1} < 1\) for all \(d \geq 2.\)
Let~\(F_m\) be the event that~\(\sum_{i=1}^{m}t(f_i) \leq 2\mu m\) where~\(f_i\) is the horizontal edge with endvertices~\((i-1,\mathbf{0})\) and \((i,\mathbf{0}).\) Letting~\(X_i := t(f_i) - \mathbb{E}t(f_i)\) and using the fact that~\(\{X_i\}\) are independent, we then get from Chebychev's inequality that
\begin{equation}\label{fm_est}
\mathbb{P}(F_m^c) \leq \mathbb{P}\left(\sum_{i=1}^{m}X_i \geq \mu m\right) \leq \frac{\sum_{i=1}^{m} var(X_i)}{\mu^2m^2} \leq \frac{C}{m}
\end{equation}
for some constant~\(C > 0.\) Now set~\(m = \frac{\beta n}{3\mu} <n\) and suppose~\(E_m \cap F_n\) occurs. From~(\ref{a_0k}),~(\ref{fm_est}) and the union bound, we get that~\(\mathbb{P}(E_m \cap F_n) \geq 1-\frac{C}{n}\) for some constant~\(C > 0.\) Since~\(F_n\) occurs, we get that~\(T_n(k) \leq 2\mu n\) and since~\(E_m\) occurs, any path starting from the origin and containing~\(r \geq \frac{3\mu}{\beta}m = n\) edges has passage time at least~\(\beta r \geq \beta n = 3\mu m.\) Since every path from the origin to~\((n,\mathbf{0})\) contains at least~\(n\) edges, this gives~\(T_n \geq \beta n.\) This obtains the first estimate in~(\ref{bama}).
Next, using the bounded second moment assumption in~(\ref{sup_i}) and arguing as in the variance estimate in Theorem~\(1,\) Kesten (1993) we get that~\(var(T_n(k)) \leq C \mathbb{E}N_n(k),\) where~\(N_n(k)\) is the number of edges in the path with passage time~\(T_n(k).\) If~\(E_m \cap F_n\) occurs, then from the discussion in the previous paragraph, we get that~\(N_n(k) \leq \frac{3\mu}{\beta}n.\)
For~\(x \geq \frac{3\mu}{\beta}n\) we assume for simplicity that~\(y = \frac{\beta x}{3\mu}\) is an integer and write
\begin{eqnarray}
\mathbb{P}(N_n(k) \geq x) &\leq& \mathbb{P}\left(\{N_n(k) \geq x\} \cap E_y\right) + \mathbb{P}\left(E_y^c\right) \nonumber\\
&\leq& \mathbb{P}\left(\{N_n(k) \geq x\} \cap E_y\right) + D_1 e^{-D_2 x}\label{thmoo_one}
\end{eqnarray}
for some constants~\(D_1,D_2> 0 \) by~(\ref{a_0k}). If~\(N_n(k) \geq x\) and the event~\(E_y\) occurs, then every path containing~\(r \geq \frac{3\mu}{\beta}y = x\) edges has weight at least~\(\beta r \geq \beta x.\) Thus~\(T_n(k) \geq \beta x\) and so
\[\mathbb{P}\left(\{N_n(k) \geq x\} \cap E_y\right) \leq \mathbb{P}(T_n(k) \geq \beta x) \leq \mathbb{P}\left(\sum_{i=1}^{n}t(f_i) \geq \beta x\right).\]
Since~\(\beta x \geq 3\mu n\) we have that~\(\beta x - \sum_{i=1}^{n} \mathbb{E}t(f_i) \geq \beta x - \mu n \geq \frac{2\beta x}{3}.\)
Consequently, recalling that~\(X_i = t(f_i) -\mathbb{E}t(f_i)\) and using the fact that~\(var(X_i) \leq \mathbb{E}t^2(f_i) \leq C\) for some constant~\(C > 0,\) we get from Chebychev's inequality that
\begin{equation}
\mathbb{P}\left(\{N_n(k) \geq x\} \cap E_y\right) \leq \mathbb{P}\left(\sum_{i=1}^{n}X_i \geq \frac{2\beta x}{3}\right) \leq \frac{ D_3\sum_{i=1}^{n} var(X_i)}{x^2} \leq \frac{D_4n}{x^2} \label{thmoo_two}
\end{equation}
for some constants~\(D_3,D_4 > 0.\) Combining~(\ref{thmoo_one}) and~(\ref{thmoo_two}), we get that~\(\mathbb{P}(N_n(k) \geq x) \leq \frac{D_5}{x^2}\) for~\(x \geq \frac{3\mu}{\beta}n\) and so~\(\mathbb{E}N_n(k) \leq D_6 n\) for some constant~\(D_6 > 0.\) Plugging this into the variance estimate for~\(T_n(k)\) obtained in the previous paragraph, we get the second estimate in~(\ref{bama}).
The lower expectation bound follows directly from the lower deviation bound in~(\ref{bama}). The upper expectation bound follows from the fact that~\(\mathbb{E}T_n \leq \sum_{i=1}^{n} \mathbb{E}t(f_i) \leq \mu n.\) This completes the proof of part~\((a).\)~\(\qed\)
\emph{Proof of Theorem~\ref{thm1}\((b)\)}: Let~\(\beta,\mu\) be as in part~\((a)\) and for~\(k \geq n\) suppose that the event~\(E_k \cap F_k\) occurs. From the discussion following~(\ref{fm_est}), we know that~\(\mathbb{P}(E_k \cap F_k) \geq 1-\frac{C}{k}\) for some constant~\(C >0.\) The minimum passage time between the origin and~\((n,\mathbf{0})\) is at most~\(2\mu k\) and any path starting from the origin and containing~\(r \geq \frac{3 \mu}{\beta}k\) edges has passage time at least~\(\beta r \geq 3\mu k.\) Thus~\(T_n\left(\frac{3\mu k}{\beta}\right) = T_n,\) which yields the probability estimate in~\((b)\) with~\(C_3~=~\frac{3\mu}{\beta}.\)
We prove the a.s.\ convergence in two steps. In the first step, we use a subsequence argument to show that~\(\frac{T_n(k)-\mathbb{E}T_n(k)}{n}\) converges to zero a.s. In the second step we show that the \emph{difference}~\(\frac{T_n(k)-T_n}{n}\) converges to zero a.s.\ and in~\(L^1,\) provided~\(k\) is sufficiently large. This then yields the a.s.\ convergence of~\(\frac{T_n-\mathbb{E}T_n}{n}.\)
\underline{\emph{Step 1}}: We begin with a description of the sub-additivity property of the unconstrained passage time~\(T_n.\) If~\(T_{n,m}\) is the minimum passage time between~\((n,\mathbf{0})\) and~\((m,\mathbf{0}),\) then~\(T_n \leq T_m + T_{n,m}.\) This is because the concatenation of the minimum passage time path with endvertices~\((0,\mathbf{0})\) and~\((m,\mathbf{0})\) and the minimum passage time path with endvertices~\((m,\mathbf{0})\) and~\((n,\mathbf{0})\) contains a path with endvertices~\((0,\mathbf{0})\) and~\((n,\mathbf{0}).\) Switching the roles of~\(m\) and~\(n\) we therefore have that~\(|T_n-T_m| \leq T_{n,m}\) and we refer to this estimate as the \emph{sub-additive property} of~\(T_n.\)
Letting~\(k \geq n^{1+\epsilon}\) with~\(\epsilon > 0,\) we now perform the subsequence argument. Setting~\(U_n := T_n(k)\) and~\(S_n := U_n - \mathbb{E}U_n,\) we first show that~\(\frac{S_n}{n} \longrightarrow 0\) a.s.\ as~\(n~\rightarrow~\infty.\) Indeed, from the variance estimate for~\(U_n\) in~(\ref{bama}), we know that~\(\mathbb{E}S^2_n \leq C n\) for some constant \(C > 0\) and so for a fixed \(\delta > 0,\) the sum \[\sum_{n \geq 1} \mathbb{P}(|S_{n^2}| > n^2 \delta) \leq \sum_{n \geq 1}\frac{\mathbb{E}S^2_{n^2}}{\delta^2 n^4} \leq \sum_{n \geq 1} \frac{C}{\delta^2 n^{2}} < \infty.\] Since this is true for all \(\delta > 0,\) Borel-Cantelli Lemma implies that~\(\frac{S_{n^2}}{n^2} \) converges to zero a.s.\ as \(n \rightarrow \infty.\)
To estimate the intermediate values of~\(S_j,\) we let~\(n^2 \leq j < (n+1)^2\) and set~\(R_n := \max_{n^2 \leq j < (n+1)^2} |S_j-S_{n^2}|\) and show below that~\(\frac{R_n}{n^2} \longrightarrow 0\) a.s.\ as~\(n~\rightarrow~\infty.\) This would imply that for \(n^2 \leq j < (n+1)^2, \) \[\frac{|S_j|}{j} \leq \frac{|S_j-S_{n^2}|}{j} + \frac{|S_{n^2}|}{j} \leq \frac{|S_j-S_{n^2}|}{n^2} + \frac{|S_{n^2}|}{n^2} \leq \frac{R_{n}}{n^2} + \frac{|S_{n^2}|}{n^2}\] and so~\(\frac{S_j}{j}\) converges to zero a.s.\ as \(j \rightarrow \infty.\)
To estimate~\(R_n,\) use first the triangle inequality to get that
\begin{equation}
|S_j - S_{n^2}| \leq |U_j- U_{n^2}| + \mathbb{E}|U_j - U_{n^2}|. \label{eq_s2}
\end{equation}
We know that~\(\mathbb{P}(T_j \neq U_j) \leq \frac{D_1}{k(j)} \leq \frac{D_2}{j^{1+\epsilon}}\) for some constants~\(D_1,D_2 > 0\) and so setting~\(E_{tot} := \bigcap_{j=n^2}^{(n+1)^2}\{T_j = U_j\},\) we get from the union bound that
\begin{equation}\label{e_tot_est}
\mathbb{P}(E_{tot}) \geq 1- \sum_{j=n^2}^{(n+1)^2} \frac{D_2}{j^{1+\epsilon}} \geq 1-\frac{D_3 n}{n^{2+2\epsilon}} = 1-\frac{D_3}{n^{1+2\epsilon}}
\end{equation}
for some constant~\(D_3 > 0.\)
We now write~\(|U_j-U_{n^2}| = |T_j-T_{n^2}| 1\hspace{-2.3mm}{1}(E_{tot}) + |U_j-U_{n^2}| 1\hspace{-2.3mm}{1}(E^c_{tot})\) and evaluate each term separately.
For~\(n^2 \leq j < (n+1)^2\) we know by the subadditivity property that~\(|T_j-T_{n^2}| \leq T_{j,n^2} \leq \sum_{i=n^2}^{(n+1)^2}t(f_i) =: A_n\)
and by definition, we have that~\(U_j \leq \sum_{i=1}^{(n+1)^2} t(f_i) =: J_n.\) Thus~\(|U_j-U_{n^2}| \leq A_n + 2J_n1\hspace{-2.3mm}{1}(E^c_{tot})\) for all~\(n^2 \leq j \leq(n+1)^2\) and so from~(\ref{eq_s2}), we see that
\begin{equation}\label{dn_est}
R_n \leq A_n + 2J_n1\hspace{-2.3mm}{1}(E^c_{tot}) + \mathbb{E}A_n + 2\mathbb{E}J_n 1\hspace{-2.3mm}{1}(E^c_{tot}).
\end{equation}
Based on~(\ref{dn_est}), it suffices to show that both terms~\(\frac{A_n}{n^2}\) and~\(\frac{J_n 1\hspace{-2.3mm}{1}(E^c_{tot})}{n^2}\) converge to zero a.s.\ and in~\(L^1\) as~\(n \rightarrow \infty.\) First, from the estimate~\(\mathbb{E}A_n \leq \sum_{i=n^2}^{(n+1)^2}\mathbb{E}t(f_i) \leq Cn\) for some constant~\(C >0, \) we get that~\(\frac{\mathbb{E}A_n}{n^2} \longrightarrow 0\) as~\(n~\rightarrow~\infty.\) Next, let~\(0 < \theta < 1\) be any constant. Using Chebyshev's inequality and arguing as in~(\ref{fm_est}), we get that~\(\mathbb{P}(A_n \geq 2\mu n^{1+\theta}) \leq \frac{C}{n^{1+2\theta}}\) for all~\(n\) large and so by the Borel-Cantelli Lemma we get that~\(\mathbb{P}\left(A_n \leq 2\mu n^{1+\theta} \text{ for all large }n\right) = 1.\) Since~\(\theta < 1,\) we get that~\(\frac{A_n}{n^2} \longrightarrow 0\) a.s.\ as~\(n \rightarrow \infty.\)
To evaluate~\(J_n1\hspace{-2.3mm}{1}(E^c_{tot}),\) we use~(\ref{e_tot_est}) and the Borel-Cantelli Lemma to get~\(1\hspace{-2.3mm}{1}(E^c_{tot}) \longrightarrow 0\) a.s.\ as~\(n \rightarrow \infty.\) Thus~\(\frac{J_n1\hspace{-2.3mm}{1}(E^c_{tot})}{n^2} \longrightarrow 0\) a.s.\ as~\(n \rightarrow \infty.\)
Using the Cauchy-Schwarz inequality, we also get that~\(\mathbb{E}J_n1\hspace{-2.3mm}{1}(E^c_{tot}) \leq \left(\mathbb{E}J_n^2\right)^{1/2} \left(\mathbb{P}(E^c_{tot})\right)^{1/2}.\) Again by the Cauchy-Schwarz inequality and the bounded second moment assumption in~(\ref{sup_i}), we get that~\[\mathbb{E}J_n^2 \leq (n+1)^2 \sum_{i=1}^{(n+1)^2} \mathbb{E}t^2(f_i) \leq C_2 n^4\] for some constant~\(C_2 > 0\) and so using~(\ref{e_tot_est}), we get that~\[\mathbb{E}J_n1\hspace{-2.3mm}{1}(E^c_{tot}) \leq \sqrt{C_2} n^2 \cdot \left(\frac{D_3}{n^{1+2\epsilon}}\right)^{1/2}.\] Thus~\(\frac{\mathbb{E}J_n1\hspace{-2.3mm}{1}(E^c_{tot})}{n^2}~\longrightarrow~0\) as~\(n \rightarrow \infty\) as well and this completes the proof of~\(\frac{S_n}{n} \longrightarrow 0\) a.s.\ as~\(n \rightarrow \infty.\)
\underline{\emph{Step 2}}: We now set~\(k = n^{1+\epsilon}\) and show that~\(\frac{T_n(k)-T_n}{n}\) converges to zero a.s.\ and in~\(L^1.\) First using~\(\mathbb{P}(T_n(k) \neq T_n) \leq \frac{D_1}{n^{1+\epsilon}}\) for some constant~\(D_1 > 0,\) we get from the Borel-Cantelli Lemma that~\(\frac{T_n(k)-T_n}{n} \longrightarrow 0\) a.s.\ as~\(n \rightarrow \infty.\) Next using the fact that~\(T_n(k)\) and~\(T_n\) are both bounded above by~\(\sum_{i=1}^{n} t(f_i)\) we have that~\(\mathbb{E}|T_n(k)-T_n| = \mathbb{E}|T_n(k)-T_n|1\hspace{-2.3mm}{1}(T_n(k) \neq T_n)\) is bounded above by
\begin{eqnarray}
\mathbb{E}\sum_{i=1}^{n}t(f_i) 1\hspace{-2.3mm}{1}(T_n(k) \neq T_n) &\leq& \mathbb{E}^{1/2} \left(\sum_{i=1}^{n} t(f_i)\right)^2 \mathbb{P}^{1/2}(T_n(k) \neq T_n) \nonumber\\
&\leq& \mathbb{E}^{1/2} \left(\sum_{i=1}^{n} t(f_i)\right)^2 \left(\frac{D_1}{n^{1+\epsilon}}\right)^{1/2}. \nonumber
\end{eqnarray}
Using~\((\sum_{i=1}^{l}a_i)^2 \leq l \sum_{i}a_i^2\) we have~\(\mathbb{E}\left(\sum_{i=1}^{n} t(f_i)\right)^2 \leq n \sum_{i=1}^{n} \mathbb{E}t^2(f_i) \leq D_2n^2\) for some constant~\(D_2 > 0.\) Thus~\(\mathbb{E}|T_n(k)-T_n| \leq D_3 (n^{1-\epsilon})^{1/2} = o(n)\) and so\\\(\frac{\mathbb{E}|T_n(k)-T_n|}{n} \longrightarrow 0\) as~\(n \rightarrow \infty.\) This completes the proof of a.s.\ convergence in part~\((b).\)~\(\qed\)
\emph{Proof of Theorem~\ref{thm1}\((c)\)}: Using~\((a+b)^2 \leq 2(a^2 + b^2)\) for any two real numbers~\(a\) and~\(b\) we have that the variance of the sum of any two random variables~\(X\) and~\(Y\) satisfies
\begin{equation}\label{var_xy}
var (X+Y) = \mathbb{E}\left( (X - \mathbb{E}X) + (Y - \mathbb{E}Y)\right)^2 \leq 2(var(X) + var(Y)).
\end{equation}
Setting~\(T = T_n, U = T_n(k), X = T-U\) and \(Y = {U}\) we get that
\begin{equation}\label{var_t}
var(T) \leq 2 var(T - U) + 2 var(U) \leq 2 var(T-U) + D_1 n
\end{equation}
for all \(n\) large and some constant~\(D_1 >0,\) by the variance estimate for~\(U\) in part~\((a)\) of this Theorem.
We estimate~\(var(T-U)\) as follows. Using~\(T \leq U\) we write
\[\mathbb{E}(T-U)^2 = \mathbb{E}(T-U)^2 1\hspace{-2.3mm}{1}(T \neq U) \leq 2\mathbb{E}(T^2+U^2)1\hspace{-2.3mm}{1}(T \neq U) \leq 4\mathbb{E}U^2 1\hspace{-2.3mm}{1}(T \neq U).\]
Since~\(U \leq \sum_{i=1}^{n}t(f_i),\) we have that~\(U^2 \leq n \sum_{i=1}^{n}t^2(f_i)\) and so~\(var(T-U)\) is bounded above by
\begin{equation}\label{etu}
\mathbb{E}(T-U)^2 \leq 4n\sum_{i=1}^{n}\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1}(T \neq U) \leq 4n^2 \sup_{i} \mathbb{E}t^2(f_i) 1\hspace{-2.3mm}{1}(T \neq U).
\end{equation}
Let~\(\theta > 0\) be a constant and split
\begin{eqnarray}\label{eq_zn3}
\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1} (T \neq U) &=& \mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1}(\{T \neq U\} \cap \{t(f_i) < n^{\theta}\}) \nonumber\\
&&\;\;\;\;\;\;\;+\;\;\;\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1}(\{T \neq U\} \cap \{t(f_i) \geq n^{\theta}\}).
\end{eqnarray}
From~(\ref{bama}), we know that there are constants~\(D_2,D_3 > 0\) such that if~\(k \geq D_2n\) then\\\(\mathbb{P}(T \neq U) \leq \frac{D_3}{k}.\) With this choice of~\(k,\) the first term in~(\ref{eq_zn3}) is bounded above by
\begin{equation}\label{ondra}\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1}(\{T \neq U\} \cap \{t(f_i) < n^{\theta}\}) \leq n^{2\theta}\mathbb{P}(T \neq U) \leq \frac{D_3n^{2\theta}}{k} \leq \frac{1}{n^3}
\end{equation}
provided we choose~\(k\) larger if necessary so that~\(k \geq D_3n^{2\theta+3}.\)
We now consider the case where the uniform square integrability condition holds. For any~\(\eta > 0\) and all~\(n\) large, the final term in~(\ref{eq_zn3}) is then at most~\(\mathbb{E}t^2(f_i) 1\hspace{-2.3mm}{1}(t(f_i) \geq n^{\theta}) \leq \eta.\) Combining this estimate with~(\ref{ondra}), we get that~\(\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1} (T \neq U) \leq \frac{1}{n^3} + \eta \leq 2\eta\) for all~\(n\) large and so from~(\ref{etu}) we get that~\(\mathbb{E}(T-U)^2 \leq 8n^2 \eta.\) Plugging this into~(\ref{var_t}) we get that~\(var(T) \leq D_1 n + 8n^2 \eta \) and since \(\eta > 0\) is arbitrary, this implies that~\(var\left(\frac{T}{n}\right) = o(1).\)
Suppose now that the bounded~\(p^{th}\) moment condition holds for some~\(p > 2.\) Using H\"older's inequality and Markov's inequality in succession, the final term in~(\ref{eq_zn3}) is at most
\begin{eqnarray}
\mathbb{E}t^2(f_i) 1\hspace{-2.3mm}{1}(t(f_i) \geq n^{\theta}) &\leq& \left(\mathbb{E}t^{p}(f_i)\right)^{2/p} \mathbb{P}\left(t(f_i) \geq n^{\theta}\right)^{1-2/p} \nonumber\\
&\leq& \left(\mathbb{E}t^{p}(f_i)\right)^{2/p} \left(\frac{\mathbb{E}t^{p}(f_i)}{n^{\theta p}}\right)^{1-2/p} \nonumber\\
&\leq& \frac{D_4}{n^{\theta(p-2)}} \label{renda}
\end{eqnarray}
for some constant~\(D_4 > 0,\) by the bounded~\(p^{th}\) moment assumption. We choose~\(\theta > 0\) large so that the final term in~(\ref{renda}) is at most~\(\frac{1}{n^3}.\) With this choice of~\(\theta\) we get from~(\ref{ondra}) that~\(\mathbb{E}t^2(f_i)1\hspace{-2.3mm}{1} (T \neq U) \leq \frac{2}{n^3}\) and plugging this into~(\ref{etu}), we get that~\(\mathbb{E}(T-U)^2 \leq \frac{D_5}{n}\) for some constant~\(D_5 >0.\) From~(\ref{var_t}), we then get that~\(var(T) \leq D_6n\) for some constant~\(D_6 > 0.\) This completes the proof of part~\((c).\)~\(\qed\)
\underline{\emph{Remark}}: For~\(p > 2,\) the bounded~\(p^{th}\) moment condition in Theorem~\ref{thm1}\((c)\) is stronger than the uniform square integrability condition, which in turn is stronger than the bounded second moment condition in~(\ref{sup_i}). For the particular case of i.i.d.\ passage times, uniform square integrability is implied by the bounded second moment condition and the first condition in~(\ref{sup_i}) above simply states that the passage times are a.s.\ positive.
\subsection*{Acknowledgement}
I thank Professors Rahul Roy, C. R. Subramanian and the referee for crucial comments that led to an improvement of the paper. I also thank IMSc and IISER Bhopal for my fellowships.
\bibliographystyle{plain}
\section{Introduction}
Although deep neural networks have recently been contributing to state-of-the-art advances in various areas \cite{Krizhevsky:2012:ICD:2999134.2999257, hinton2012deep, DBLP:journals/corr/SutskeverVL14},
such models are often black-box, and therefore may not be deemed appropriate in situations where safety needs to be guaranteed, such as legal judgment prediction and medical diagnosis. Interpretable deep neural networks are a promising way to increase the reliability of neural models~\cite{sabour2017dynamic}. To this end, extractive rationales, i.e., subsets of features of instances on which models rely for their predictions on the instances, can be used as evidence for humans to decide whether to trust a prediction and more generally a~model.
There are many different methods to explain a deep neural model, such as probing internal representations~\cite{hewitt-liang-2019-designing,conneau-etal-2018-cram,pimentel-etal-2020-information,voita-titov-2020-information,cifka-bojar-2018-bleu,vanmassenhove2017investigating}, adding interpretability to deep neural models~\cite{graves2014neural,sabour2017dynamic,grathwohl2019your,agarwal2020neural,chen2016infogan,sha2021multi}, and looking for global decision rules~\cite{holte1993very,cohen1995fast,letham2015interpretable,borgelt2005implementation,yang2017scalable,furnkranz2012foundations}. Extracting rationales belongs to the second category.
Previous works use selector-predictor types of neural models to provide extractive rationales. More precisely, such models are composed of two modules: (i) a \textit{selector} that selects a subset of features of each input, and (ii) a \textit{predictor} that makes a prediction based solely on the selected features. For example, \newcite{yoon2018invase} and \newcite{lei2016rationalizing} use a selector network to calculate a selection probability for each token in a sequence, then sample a set of tokens that is the only input of the predictor. Supervision is typically given only on the final prediction and not on the rationales.
\newcite{paranjape-etal-2020-information} also uses the information bottleneck to find a better trade-off between the sparsity and the final task performance. Note that gold rationale labels are required for the semi-supervised training in \newcite{paranjape-etal-2020-information}.
An additional typical desideratum in natural language pro\-cessing (NLP) tasks is that the selected tokens form a semantically fluent rationale. To achieve this, \newcite{lei2016rationalizing} added a non-differentiable regularizer that encourages any two adjacent tokens to be simultaneously selected or unselected. The selector and predictor are jointly trained in a REINFORCE-style manner~\cite{williams1992simple} because the sampling process and the regularizer are not differentiable.
\newcite{bastings2019interpretable} further improved the quality of the rationales by using a HardKuma regularizer, which likewise encourages any two adjacent tokens to be selected or unselected together, but is differentiable and thus removes the need for REINFORCE.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\linewidth]{./graphics/rational_intro.pdf}\\[0.75ex]
\caption{Examples of rationales in legal judgment prediction. The human-provided rationale is shown in bold in Sample~1. In Sample 2, the selector missed the key information ``he stole a VIVO X9'', but the predictor only tells the selector that the whole extracted rationale (in bold) is not so informative, by producing a low probability of the correct crime. }
\label{fig:intro}
\end{center}
\end{figure}
One drawback of previous works is that the learning signal for both the selector and the predictor comes mainly from comparing the prediction of the selector-predictor model with the ground-truth answer.
Therefore, the exploration space to get to the correct rationale is large, decreasing the chances of converging to the optimal rationales and predictions.
Moreover, in NLP applications, the regularizers commonly used for achieving fluency of rationales treat all adjacent token pairs in the same way. This often leads to the selection of unnecessary tokens due to their adjacency to informative~ones.
In this work, we first propose an alternative method to rationalize the predictions of a neural model. Our method aims to squeeze more information from the predictor in order to guide the selector in selecting the rationales. Our method trains two models jointly: a ``guider'' model that solves the task at hand in an accurate but black-box manner, and a selector-predictor model that solves the task while also providing rationales. We use an adversarial-based training procedure to encourage the final information vectors generated by the two models to encode the same information. We use an information bottleneck technique in two places: (i)~to encourage the features selected by the selector to be the least-but-enough features, and (ii)~to encourage the final information vector of the guider model to also contain the least-but-enough information for the prediction.
Secondly, we propose using language models as regularizers for rationales in natural language understanding tasks. A language model (LM) regularizer encourages rationales to be fluent subphrases, which means that the rationales are formed by consecutive tokens, while preventing unnecessary tokens from being selected simply due to their adjacency to informative tokens.
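To make this idea concrete, the following toy sketch (our own illustration, not the paper's exact formulation; the bigram table, the back-off value, and all function names are invented for this example) penalizes a rationale whose selected-token subsequence is unlikely under a language model, so that a contiguous, fluent rationale incurs a smaller penalty than a scattered one:

```python
# Toy bigram "language model": log-probabilities for adjacent word pairs.
# A real implementation would score the subsequence with a pretrained
# neural LM; this table is purely illustrative.
BIGRAM_LOGP = {
    ("he", "stole"): -1.0, ("stole", "a"): -0.5, ("a", "phone"): -0.8,
    ("he", "phone"): -6.0, ("stole", "phone"): -5.0,
}
UNSEEN = -8.0  # back-off log-probability for unseen pairs (assumed)

def lm_regularizer(tokens, mask):
    """Negative average bigram log-likelihood of the selected subsequence.

    Lower values mean the rationale reads as a fluent subphrase, so this
    quantity can be minimized jointly with the task loss."""
    selected = [t for t, m in zip(tokens, mask) if m]
    if len(selected) < 2:
        return 0.0
    logp = sum(BIGRAM_LOGP.get(pair, UNSEEN)
               for pair in zip(selected, selected[1:]))
    return -logp / (len(selected) - 1)

tokens = ["he", "stole", "a", "phone"]
fluent = lm_regularizer(tokens, [1, 1, 1, 1])  # contiguous rationale
broken = lm_regularizer(tokens, [1, 0, 0, 1])  # scattered rationale
assert fluent < broken  # the fluent subphrase incurs the smaller penalty
```

Under this sketch, a gradient-based version would replace the hard mask with the relaxed selection probabilities, making the penalty differentiable.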
The effectiveness of our LM-based regularizer is demonstrated both by a mathematical derivation and by experiments.
The contributions of this article are briefly summarized as follows:
\begin{itemize}
\item We introduce a novel model that generates extractive rationales for its predictions. The model is based on an adversarial approach that calibrates the information between a guider and a selector-predictor model, such that the selector-predictor model learns to mimic a typical neural model while additionally providing rationales.
\item We propose a language-model-based regularizer to encourage the sampled tokens to form fluent rationales. This regularizer typically encourages fewer fragmented subsequences and avoids unnatural starts and ends of the selected sequences. It also gives priority to important adjacent token pairs, which benefits the extraction of informative features.
\item We experimentally evaluate our method on a sentiment analysis dataset and a hate speech detection dataset, both containing ground-truth rationale annotations for the ground-truth labels, as well as on three tasks of a legal judgment prediction dataset, for which we conducted human evaluations of the extracted rationales.
The results show that our method improves over the previous state-of-the-art models in precision and recall of rationale extraction without sacrificing the prediction performance.
\end{itemize}
The rest of this paper is organized as follows. In Section~\ref{sec:approach}, we introduce our proposed approach, including the selector-predictor module (Section~\ref{sec:sp}), the guider module (Section~\ref{sec:guider}), the information calibrating method (Section~\ref{sec:cali}), and the language model-based rationale regularizer (Section~\ref{sec:LM}). In Section~\ref{sec:experiment}, we report the experimental results on the three datasets: a beer review dataset (Section~\ref{sec:beer}), a legal judgment prediction dataset (Section~\ref{sec:law}), and a hate speech detection dataset (Section~\ref{sec:hate}). Section~\ref{sec:related} reviews the related works of this paper. In Section~\ref{sec:conclu}, we provide a summary and an outlook on future research.
\section{Related Work}\label{sec:related}
Explainability is currently a key bottleneck of deep-lear\-ning-based approaches~\cite{atkinson2020explanation,kaptein2021evaluating}. A summary of related work is shown in Table~\ref{tab:rel}, which lists representative works in each branch of interpretable models. As shown, previous work on explainable neural models includes self-explanatory models and post-hoc explainers.
The model proposed in this work belongs to the class of self-explanatory models, which contain an explainable structure in the model architecture, thus providing explanations / rationales for their predictions. Self-explanatory models can use different types of explanations / rationales, such as feature-based explanations, which are usually produced by selector-predictor models~\cite{lei2016rationalizing, yoon2018invase,chen2018learning,yu-etal-2019-rethinking,carton-etal-2018-extractive}, and natural language explanations
\cite{DBLP:conf/eccv/HendricksARDSD16, esnli, zeynep, cars}. Our model uses feature-based explanations.
\begin{table}[]
\renewcommand{\arraystretch}{1.4}
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|l|p{3cm}|p{5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\hline
& & & Representative Methods & Controllable & Provide important features& Provide important examples & Provide NL explanations & Provide rules\\
\hline
\multirow{12}{1.5cm}{Self-explanatory methods} & \multirow{2}{2cm}{Disentangle\-ment} & Implicit &$\beta$-VAE~\cite{higgins2017beta}, $\beta$-TCVAE~\cite{chen2018isolating} & Yes & && &\\ \cline{3-9}
& & Explicit & InfoGAN~\cite{chen2016infogan}, MTDNA~\cite{sha2021multi} & Yes& Yes && & \\ \cline{2-9}
& \multirow{11}{*}{Architecture} & \multirow{2}{*}{Attention-based} & \newcite{rocktaschel2015reasoning}, \newcite{vaswani2017attention}, OrderGen~\cite{sha2018order} & & Yes && & \\ \cline{3-9}
& & \multirow{3}{*}{Read-Write Memory} & Neural Turing Machines~\cite{collier2018im,sha2020estimating}, Progressive Memory~\cite{rusu2016progressive,xia-etal-2017-progressive}, Differentiable neural computer~\cite{graves2016hybrid}, Neural RAM~\cite{kaiser2015neural}, Neural GPU~\cite{kurach2015neural} &Yes & && & \\ \cline{3-9}
& & Capsule-based & Capsule~\cite{sabour2017dynamic} & & Yes && & \\ \cline{3-9}
& & \multirow{3}{*}{Energy-based} & \newcite{grathwohl2019your}, Hopfield Network~\cite{ramsauer2020hopfield}, Boltzmann Machine~\cite{marullo2021boltzmann}, Predictive Coding~\cite{song2020can} & & Yes && & \\\cline{3-9} & & \multirow{2}{*}{Rationalization} & Selector-predictor~\cite{lei2016rationalizing,bastings2019interpretable,yoon2018invase,chen2018learning}, natural language explanations~\cite{DBLP:conf/eccv/HendricksARDSD16,esnli} & & Yes & &Yes& \\ \hline
\multirow{14}{1.5cm}{Post-hoc explainer} & \multirow{8}{*}{Local} & \multirow{1}{*}{Perturbation-based} & SHAP~\cite{lundberg2017unified}, Shapley Values~\cite{shapley1953value} & & Yes & &&\\ \cline{3-9}
& & \multirow{1}{*}{Surrogate-based} &Anchors~\cite{ribeiro2018anchors}, LIME~\cite{ribeiro2016should} & & Yes&& & Yes \\ \cline{3-9}
& & \multirow{2}{*}{Saliency Maps} &Input gradient~\cite{baehrens2010explain,simonyan2014deep,shrikumar2017learning}, SmoothGrad~\cite{smilkov2017smoothgrad,seo2018noise}, Integrated Gradients~\cite{sundararajan2017axiomatic}, Guided Backprop~\cite{springenberg2015striving} & & Yes & & & \\ \cline{3-9}
& & \multirow{2}{3cm}{Prototypes/Example Based} &Influence Functions~\cite{cook1980characterizations,koh2017understanding}, Representer Points~\cite{yeh2018representer}, TracIn~\cite{pruthi2020estimating} & & &Yes & & \\ \cline{3-9}
& & \multirow{2}{*}{Counterfactuals} &\newcite{wachter2017counterfactual}, \newcite{mahajan2019preserving}, \newcite{karimi2020algorithmic} & & Yes & Yes & & \\ \cline{2-9}
& \multirow{5}{*}{Global} & Collection of Local Explanations &SP-LIME~\cite{ribeiro2016should}, Summaries of Counterfactuals~\cite{rawal2020beyond} & &Yes & & & \\\cline{3-9}
& & \multirow{2}{*}{Model Distillation} & Tree Distillation~\cite{bastani2017interpreting}, Decision set distillation~\cite{lakkaraju2019faithful}, Generalized Additive Models~\cite{tan2018learning} & & Yes & & & Yes \\ \cline{3-9}
& & Representation based &Network Dissection~\cite{bau2017netdissect}, TCAV~\cite{kim2018interpretability} & & Yes & & & \\ \hline
\end{tabular}
}
\caption{The branches of related works.}
\label{tab:rel}
\end{table}
\subsection{Self-explanatory models for interpretability}
Self-explanatory models with feature-based explanations can be further divided into two branches. The first branch is disentanglement-based approaches, which map specific features into latent spaces and then use the latent variables to control~the outcomes of the model, such as disentangling methods~\cite{chen2016infogan,sha2021multi}, information bottleneck methods~\cite{tishby2000information}, and constrained generation~\cite{sha-2020-gradient}. The second branch consists of architecture-in\-ter\-pretable models, such as attention-based models \cite{zhang2018top,sha2016reading,sha2018order,sha2018multi,liu2017table}, Neural Turing Machines~\cite{collier2018im,xia-etal-2017-progressive,sha2020estimating}, capsule networks~\cite{sabour2017dynamic}, and energy-based models~\cite{grathwohl2019your}. Among them, attention-based models have an important extension, that of sparse feature learning, which implies learning to extract a subset of features that are most informative for each example. Most of the sparse feature learning methods use a selector-predictor architecture. Among them, L2X \cite{chen2018learning} and INVASE~\cite{yoon2018invase} make use of information theories for feature selection, while CAR~\cite{chang2019game} extracts useful features in a game-theoretic approach.
In addition, rationale extraction for NLP usually raises one desideratum for the extracted subset of tokens: rationales need to be fluent subphrases instead of separate tokens. To this end, \newcite{lei2016rationalizing} proposed a non-differentiable regularizer to encourage selected tokens to be consecutive, which can be optimized by REINFORCE-style methods~\cite{williams1992simple}. \newcite{bastings2019interpretable} proposed a differentiable regularizer using the Hard Kumaraswamy distribution; however, this still does not consider the difference in the importance of different adjacent token pairs. \newcite{paranjape-etal-2020-information} proposed an information bottleneck method very similar to our InfoCal method. However, they did not use any calibration method to encourage the completeness of the extracted rationale.
Our method belongs to the class of self-explanatory methods. Different from previous sparse feature learning methods, we use an adversarial information calibrating mechanism to hint to the selector about missing important features or over-selected features. Moreover, our proposed LM regularizer is differentiable and can be directly optimized by gradient descent. This regularizer also encourages important adjacent token pairs to be simultaneously selected, which benefits the extraction of useful features.
\subsection{Post-hoc explainers for interpretability}
Post-hoc explainers analyze the effect of each feature in the prediction of an already trained and fixed model. Post-hoc explainers can be divided into two types: local explainers and global explainers. Local explainers can be further split into five categories: (a) perturbation-based: change the values of some features to see their effect on the outcome~\cite{friedman2001greedy,hooker2004discovering,friedman2008predictive,fisher2018model,greenwell2018simple,zhao2019causal,goldstein2015peeking,janzing2019feature,apley2020visualizing}. Some famous perturbation-based post-hoc explainer methods include Shapley values~\cite{shapley1953value,vstrumbelj2014explaining,sundararajan2020many,janzing2020feature,staniak2018explanations} and SHAP method~\cite{lundberg2017unified,lundberg2018consistent,slack2020fooling}, (b) surrogate-based: train an explainable model, such as linear regression or decision trees, to approximate the predictions of a black-box model~\cite{kaufmann2013information,alvarez2018robustness,ribeiro2018anchors,slack2020fooling}, for example, LIME~\cite{ribeiro2016should}. (c) saliency maps: use gradient information to show what parts of the input are most relevant for the model's prediction, including input gradient~\cite{baehrens2010explain,simonyan2014deep,shrikumar2017learning}, SmoothGrad~\cite{smilkov2017smoothgrad,seo2018noise}, integrated gradients~\cite{sundararajan2017axiomatic}, guided backprop~\cite{springenberg2015striving}, class activation mapping~\cite{zhou2016learning}, meaningful perturbation~\cite{fong2017interpretable}, RISE~\cite{petsiuk2018rise}, extremal perturbations~\cite{fong2019understanding}, DeepLift~\cite{shrikumar2017learning}, expected gradients~\cite{erion2019improving}, excitation backprop~\cite{zhang2018top}, GradCAM~\cite{selvaraju2017grad}, occlusion~\cite{zeiler2014visualizing}, prediction difference analysis~\cite{gu2019contextual}, and internal influence~\cite{leino2018influence}. 
(d) prototype / example based: find which training example affects the model prediction the most. Usually, this is done with influence functions~\cite{cook1980characterizations}. (e) counterfactual explanations: detect what features need to be changed to flip the model's prediction~\cite{wachter2017counterfactual,mahajan2019preserving,karimi2020algorithmic}. On the other hand, some of the global explainers are collections of local explanations (e.g., SP-LIME~\cite{ribeiro2016should}, and summaries of counterfactuals~\cite{rawal2020beyond}). Also, distillation methods provide explainable rules by distilling the information from deep models to tree models~\cite{bastani2017interpreting} or decision set models~\cite{lakkaraju2019faithful}. There are also methods~(Network Dissection~\cite{bau2017netdissect}, TCAV~\cite{kim2018interpretability}) that derive model understanding by analyzing intermediate representations of a deep black-box model.
\subsection{Information bottleneck}
The information bottleneck~(IB) theory is a foundational theory of neural networks~\cite{tishby2000information}. It originated in information theory and has been widely used as a theoretical framework in analyzing deep neural networks~\cite{tishby2015deep}. For example, \newcite{li-eisner-2019-specializing} used IB to compress word embeddings in order to make them contain only specialized information, which leads to a much better performance in parsing tasks.
\subsection{Adversarial methods}
Adversarial methods, which have been widely applied in image generation~\cite{chen2016infogan} and text generation~\cite{yu2017seqgan}, usually have a discriminator and a generator. The discriminator receives pairs of instances from the real distribution and from the distribution generated by the generator, and it is trained to differentiate between the two. The generator is trained to fool the discriminator~\cite{goodfellow2014generative}. Our information calibration method generates a dense feature vector using selected symbolic features, and the discriminator is used for measuring the extent of calibration.
Our adversarial calibration method is inspired by distilling methods~\cite{hinton2015distilling}.
Distilling methods are usually applied to compress large models into small models while keeping a comparable performance. For example, TinyBERT~\cite{jiao2019tinybert} is a distillation of BERT~\cite{devlin-etal-2019-bert}. Our method is different from distilling methods, because we calibrate the final feature vector instead of the softmax prediction. Also, to the best of our knowledge, we are the first to apply information calibration to rationale extraction.
\section{Approach}\label{sec:approach}
Our approach is composed of a selector-predictor architecture, in which we use the information bottleneck technique to restrict the number of selected features, and a guider model, for which we again use the information bottleneck technique to restrict the information in the final feature vector. Then, we use an adversarial method to make the guider model guide the selector into selecting the least-but-enough features. Finally, we use a language model (LM) regularizer to obtain semantically fluent rationales.
\subsection{InfoCal: Selector-Predictor-Guider with Information Bottleneck}\label{sec:sp}
The Selector-Predictor-Guider architecture contains two parallel components: a selector-predictor model, which selects the rationale and judges whether it suffices for a correct prediction, and a guider model, a dense ``black-box'' neural network that learns the feature vector required for the task. Information calibration is used to align the dense feature vector learned by the guider model with the information contained in the rationales extracted by the selector-predictor model.
The high-level architecture of our model, called InfoCal, is shown in Fig.~\ref{fig:arch}.
Below, we detail each of its components.
\subsubsection{Selector}
For a given instance $(\textbf{x},y)$, $\textbf{x}$ is the input with $n$ features $\textbf{x} = (x_1, x_2, \ldots, x_n)$, and $y$ is the corresponding ground-truth label. The selector network $\text{Sel}(\tilde{\textbf{z}}_\text{sym}|\textbf{x})$ takes $\textbf{x}$ as input and outputs $p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})$, a sequence of probabilities $(p_i)_{i=1,\ldots,n}$ representing the probability of choosing each feature $x_i$ as part of the rationale.
Given the sampling probabilities, a subset of features is sampled using the Gumbel softmax~\cite{jang2016categorical}, which provides a differentiable sampling process:
\begin{align}
u_i&\sim U(0,1),\quad g_i=-\log(-\log(u_i))\\
m_i&=\frac{\exp((\log(p_i)+g_i)/\tau)}{\sum_j\exp((\log(p_j)+g_j)/\tau)},\label{eq:maski}
\end{align}
where $U(0,1)$ represents the uniform distribution between $0$ and $1$, and $\tau$ is a temperature hyperparameter.
Hence, we obtain the sampled mask $m_i$ for each feature $x_i$, and the vector symbolizing the rationale $\tilde{\textbf{z}}_\text{sym}=(m_1x_1,\ldots,m_nx_n)$. Thus, $\tilde{\textbf{z}}_\text{sym}$ is the sequence of discrete selected symbolic features forming the rationale.
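The sampling procedure in Eq.~\ref{eq:maski} can be sketched in NumPy as follows (the function name and the toy probabilities are illustrative, not part of our implementation):

```python
import numpy as np

def gumbel_softmax_mask(log_p, tau=0.5, rng=None):
    """Differentiable sampling of a mask from selection probabilities.

    log_p: shape (n,) array of log selection probabilities log(p_i).
    Implements u_i ~ U(0,1), g_i = -log(-log(u_i)), and the softmax of
    (log p_i + g_i) / tau, as in Eq. (m_i).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(log_p))   # u_i ~ U(0,1)
    g = -np.log(-np.log(u))                            # Gumbel(0,1) noise
    scores = (np.asarray(log_p) + g) / tau
    scores -= scores.max()                             # numerical stability
    m = np.exp(scores)
    return m / m.sum()

# Toy selector output: probabilities of keeping each of 5 tokens.
p = np.array([0.9, 0.05, 0.8, 0.1, 0.7])
m = gumbel_softmax_mask(np.log(p), tau=0.5, rng=np.random.default_rng(0))
z_sym = m * np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # (m_1 x_1, ..., m_n x_n)
```

Lower temperatures $\tau$ push the mask toward a near-discrete selection.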
\subsubsection{Predictor} The predictor takes as input the rationale $\tilde{\textbf{z}}_\text{sym}$ given by the selector, and outputs the prediction $\hat{y}_{sp}$.
In the selector-predictor part of InfoCal, the input to the predictor is the multiplication of each feature $x_i$ with the sampled mask $m_i$. The predictor first calculates a dense feature vector $\tilde{\textbf{z}}_\text{nero}$,\footnote{Here, ``nero'' stands for neural feature (i.e., a neural vector representation) as opposed to a symbolic input feature.} then uses one feed-forward layer and a softmax layer to calculate the probability distribution over the possible predictions:
\begin{align}
\tilde{\textbf{z}}_\text{nero}&=\text{Pred}(\tilde{\textbf{z}}_\text{sym})\\
p(\hat{y}_{sp}|\tilde{\textbf{z}}_\text{sym}) &= \text{Softmax}(W_p\tilde{\textbf{z}}_\text{nero}+b_p).\label{eq:pyn}
\end{align}
As the input is masked by $m_i$, the prediction $\hat{y}_{sp}$ is exclusively based on the features selected by the selector. The loss of the selector-predictor model is the cross-entropy loss:
\begin{equation}\label{eq:lsp}
\begin{small}
\begin{aligned}
L_{sp}&=-\frac{1}{K}\sum_k\log p( y^{(k)}_\text{sp}|\textbf{x}^{(k)})\\
&=-\frac{1}{K}\sum_k\log \mathbb E_{\text{Sel}(\tilde{\textbf{z}}_\text{sym}^{(k)}|\textbf{x}^{(k)})}p(y^{(k)}_\text{sp}|\tilde{\textbf{z}}_\text{sym}^{(k)})\\
&\le -\frac{1}{K}\sum_k\mathbb E_{\text{Sel}(\tilde{\textbf{z}}_\text{sym}^{(k)}|\textbf{x}^{(k)})}\log p(y^{(k)}_\text{sp}|\tilde{\textbf{z}}_\text{sym}^{(k)}),
\end{aligned}
\end{small}
\end{equation}
where $K$ represents the size of the training set, the superscript $(k)$ denotes the $k$-th instance in the training set, and the inequality follows from Jensen's inequality.
\subsubsection{Guider} \label{sec:guider}
To guide the rationale selection of the selector-predictor model, we train a \textit{guider} model, denoted Pred$_G$, which receives the full original input $\textbf{x}$ and transforms it into a dense feature vector $\textbf{z}_\text{nero}$, using the same predictor architecture as the selector-predictor module, but different weights, as shown in Fig.~\ref{fig:arch}. We generate the dense feature vector in a variational way, which means that we first generate a Gaussian distribution according to the input $\textbf{x}$, from which we sample a vector $\textbf{z}_\text{nero}$:
\begin{align}
h&=\text{Pred}_G(\textbf{x}),\quad\mu=W_mh+b_m,\quad\sigma=W_sh+b_s \\
u&\sim \mathcal N(0,1), \quad \textbf{z}_\text{nero} = u\sigma + \mu\\
p&(\hat y_\text{guide}|\textbf{z}_\text{nero}) = \text{Softmax}(W_p\textbf{z}_\text{nero}+b_p).
\end{align}
We use the reparameterization trick of Gaussian distributions to make the sampling process differentiable~\cite{kingma2013auto}. We share the parameters $W_p$ and $b_p$ with those in Eq.~\ref{eq:pyn}.
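The reparameterized sampling above can be sketched as follows (shapes, weights, and the function name are toy assumptions; a practical implementation would often parameterize the log-variance rather than $\sigma$ directly):

```python
import numpy as np

def reparameterize(h, W_m, b_m, W_s, b_s, rng):
    """Sample z_nero = u * sigma + mu with u ~ N(0, 1).

    Because the randomness enters only through u, gradients can flow
    through mu and sigma (the reparameterization trick). The weight
    names mirror the equations above but are otherwise illustrative.
    """
    mu = W_m @ h + b_m
    sigma = W_s @ h + b_s        # as written above; often exp(logvar / 2)
    u = rng.standard_normal(mu.shape)
    return u * sigma + mu, mu, sigma

rng = np.random.default_rng(0)
h = rng.standard_normal(8)                        # toy Pred_G(x) hidden vector
W_m, b_m = rng.standard_normal((4, 8)), np.zeros(4)
W_s, b_s = 0.1 * rng.standard_normal((4, 8)), np.ones(4)
z, mu, sigma = reparameterize(h, W_m, b_m, W_s, b_s, rng)
```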
The guider model's loss $L_\text{guide}$ is as follows:
\begin{equation}\label{eq:full}
\begin{aligned}
L_\text{guide}&=-\frac{1}{K}\sum_k\log p(y^{(k)}_\text{guide}|\textbf{x}^{(k)})\\
&\le-\frac{1}{K}\sum_k\mathbb E_{p(\textbf{z}_\text{nero}|\textbf{x}^{(k)})}\log p(y^{(k)}_\text{guide}|\textbf{z}_\text{nero}^{(k)}),
\end{aligned}
\end{equation}
where the inequality again follows from Jensen's inequality. The guider and the selector-predictor are trained jointly.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.8\linewidth]{./graphics/arch.eps}\\
\caption{Architecture of InfoCal: the grey round boxes stand for the losses, and the red arrows indicate the data required for the calculation of the losses. FFL is an abbreviation for feed-forward layer.}
\label{fig:arch}
\end{center}
\end{figure}
\subsubsection{Information Bottleneck} \label{sec:ib}
To guide the model to select the least-but-enough information, we employ an information bottleneck technique \cite{li-eisner-2019-specializing}. We aim to minimize $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym}) - I(\tilde{\textbf{z}}_\text{sym},y)$\footnote{$I(a,b) = \int_a\int_b p(a,b)\log\frac{p(a,b)}{p(a)p(b)}\,da\,db \,{=}\, \mathbb E_{a,b}\big[\log\frac{p(a|b)}{p(a)}\big]$ denotes the mutual information between the variables $a$ and $b$.}, where the former term encourages the selection of few features, and the latter term encourages the selection of the necessary features. As maximizing $I(\tilde{\textbf{z}}_\text{sym},y)$ is implemented by minimizing $L_{sp}$ (the proof is given in Appendix \ref{the:0}), we only need to minimize the mutual information $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym})$:
\begin{align}\label{eq:Isym}
I(\textbf{x}, \tilde{\textbf{z}}_\text{sym})=\mathbb E_{\textbf{x}, \tilde{\textbf{z}}_\text{sym}}\Big[\log\frac{p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})}{p(\tilde{\textbf{z}}_\text{sym})}\Big].
\end{align}
However, the term $p(\tilde{\textbf{z}}_\text{sym})=\sum_{\textbf{x}}p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})p(\textbf{x})$ is expensive to compute, as it requires a loop over all instances $\textbf{x}$ in the training set. Inspired by \newcite{li-eisner-2019-specializing}, we replace it with a variational distribution $r_\phi(z)$ and obtain an upper bound of Eq.~\ref{eq:Isym}: $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym}) \le \mathbb E_{\textbf{x}, \tilde{\textbf{z}}_\text{sym}}\Big[\log\frac{p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})}{r_\phi(z)}\Big]$. Since $\tilde{\textbf{z}}_\text{sym}$ is a sequence of binary selection decisions, we sum the mutual information terms of its elements to obtain the information bottleneck loss:
\begin{align}
L_\text{ib}=\sum_i\sum_{\tilde{z}_i} p(\tilde{z}_i|\textbf{x})\log\frac{p(\tilde{z}_i|\textbf{x})}{r_\phi(z_i)},
\end{align}
where $\tilde{z}_i$ represents whether to select the $i$-th feature: $1$ for selected, $0$ for not selected.
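With a fixed prior $r_\phi$, $L_\text{ib}$ reduces to a sum of Bernoulli KL divergences; a minimal sketch (function name and example probabilities are illustrative):

```python
import numpy as np

def ib_loss(p_select, r1=0.001):
    """L_ib: sum over tokens of KL(Bernoulli(p_i) || Bernoulli(r1)).

    p_select: selection probabilities p(z_i = 1 | x). r1 = r_phi(z_i = 1)
    is the fixed variational prior; a small r1 penalizes selecting many
    tokens. Values here are illustrative.
    """
    p = np.clip(np.asarray(p_select, float), 1e-9, 1 - 1e-9)
    return float(np.sum(p * np.log(p / r1)
                        + (1 - p) * np.log((1 - p) / (1 - r1))))

# Selecting fewer tokens yields a smaller bottleneck loss.
sparse = ib_loss([0.9, 0.01, 0.01])
dense = ib_loss([0.9, 0.9, 0.9])
assert sparse < dense
```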
To encourage $\textbf{z}_\text{nero}$ to contain the least-but-enough information in the guider model, we again use the information bottleneck technique.
Here, we minimize $I(\textbf{x}, \textbf{z}_\text{nero}) - I(\textbf{z}_\text{nero},y)$; again, maximizing $I(\textbf{z}_\text{nero},y)$ is implemented by minimizing $L_\text{guide}$.
Since $\textbf{z}_\text{nero}$ is sampled from a Gaussian distribution, the mutual information has a closed-form upper bound:
\begin{equation}\label{eq:mi}
\begin{aligned}
L_\text{mi}&=I(\textbf{x}, \textbf{z}_\text{nero})\le \mathbb E_{\textbf{z}_\text{nero}}\Big[\log\frac{p(\textbf{z}_\text{nero}|\textbf{x})}{p(\textbf{z}_\text{nero})}\Big] =0.5(\mu^2+\sigma^2-1-2\log\sigma).
\end{aligned}
\end{equation}
The derivation is given in Appendix~\ref{proof:mi}.
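The closed-form bound in Eq.~\ref{eq:mi} can be checked numerically with a small sketch (illustrative only):

```python
import numpy as np

def gaussian_kl(mu, sigma):
    """L_mi: closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over the
    dimensions of z_nero, matching 0.5 * (mu^2 + sigma^2 - 1 - 2 log sigma)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    return float(np.sum(0.5 * (mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma))))

assert gaussian_kl([0.0], [1.0]) == 0.0   # the prior itself has zero KL
assert gaussian_kl([1.0], [1.0]) > 0.0    # any shift away from it is penalized
```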
\subsection{Calibrating Key Features via Adversarial Training}\label{sec:cali}
Our goal is to inform the selector about what kind of information is still missing or has been wrongly selected. Since we already use the information bottleneck principle to encourage $\textbf{z}_\text{nero}$ to encode the information from the least-but-enough features, if we also require $\tilde{\textbf{z}}_\text{nero}$ and $\textbf{z}_\text{nero}$ to encode the same information, then we encourage the selector to select the least-but-enough discrete features.
To achieve this, we use an adversarial-based training method.
Thus, we employ an additional discriminator neural module, called~$D$, which takes as input either $\tilde{\textbf{z}}_\text{nero}$ or $\textbf{z}_\text{nero}$ and outputs label ``0'' or label ``1'', respectively. The discriminator can be any differentiable neural network. The generator in our model is formed by the selector-predictor that outputs $\tilde{\textbf{z}}_\text{nero}$.
The losses associated with the generator and discriminator are:
\begin{align}
L_d&=-\log D(\textbf{z}_\text{nero}) + \log D(\tilde{\textbf{z}}_\text{nero})\label{eq:D}\\
L_g&=- \log D(\tilde{\textbf{z}}_\text{nero}).
\end{align}
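These losses can be sketched as follows (toy discriminator scores; we follow the signs of Eq.~\ref{eq:D} literally, which differ from the standard $-\log(1-D(\cdot))$ GAN form):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def disc_loss(d_real, d_fake):
    """L_d = -log D(z_nero) + log D(z~_nero): the discriminator pushes its
    score up on the guider's dense vector and down on the predictor's."""
    return -np.log(d_real) + np.log(d_fake)

def gen_loss(d_fake):
    """L_g = -log D(z~_nero): the generator (selector-predictor) tries to
    make its feature vector indistinguishable from the guider's."""
    return -np.log(d_fake)

# Toy discriminator outputs; a real D would be a small feed-forward net.
d_real, d_fake = sigmoid(2.0), sigmoid(-1.0)
assert disc_loss(d_real, d_fake) < disc_loss(d_fake, d_real)
assert gen_loss(d_fake) > gen_loss(d_real)  # fooling D lowers the generator loss
```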
\citet{yoon2018invase} also attempted to use guidance from a so-called ``base'' model to a selector-predictor model.
However, their ``base'' model can only provide valid information calibration within actor-critic reinforcement learning, which is difficult in POMDP settings~\cite{kaelbling1998planning}. In comparison, the discriminator in our method is more flexible in providing valid information calibration.
\subsection{Regularizing Rationales with Language Models}
\label{sec:LM}
For NLP tasks, it is often desirable that a rationale consists of fluent subphrases \cite{lei2016rationalizing}. To this end, previous works propose regularizers that bind adjacent tokens so that they are sampled (or dropped) together. For example, \newcite{lei2016rationalizing} proposed a non-differentiable regularizer trained with REINFORCE~\cite{williams1992simple}. To make the method differentiable, \newcite{bastings2019interpretable} used the Kumaraswamy distribution for the regularizer.
However, these approaches treat all pairs of adjacent tokens in the same way, even though some adjacent tokens should have higher priority for binding than others, such as ``He stole'' or ``the victim'' rather than ``. He'' or ``) in'' in Fig.~\ref{fig:intro}.
We propose a novel differentiable regularizer for extractive rationales that is based on a pre-trained language model, thus encouraging both the consecutiveness and the fluency of the tokens in the extracted rationale.
The LM-based regularizer is implemented as follows:
\begin{equation}\label{eq:lm}
L_\text{lm} = -\sum_im_{i-1}\log p_{lm}(m_ix_i|\textbf{x}_{<i}),
\end{equation}
where the $m_i$'s are the masks obtained in Eq.~\ref{eq:maski}.
Note that non-selected tokens are masked instead of deleted in this regularizer. The language model
can have any architecture.
First, we note that $L_\text{lm}$ is differentiable. Second, the following theorem guarantees that $L_\text{lm}$ encourages consecutiveness of selected tokens.
\begin{theorem}\label{the:1}
If the following is satisfied for all $i,j$:
\begin{itemize}
\item $m'_i<\epsilon \ll 1-\epsilon< m_i$, \,$0<\epsilon<1$, and
\item $\big|p(m'_ix_i|x_{<i})-p(m'_jx_j|x_{<j})\big|<\epsilon$,
\end{itemize}
then the following two inequalities hold:\\
(1) $L_\text{lm}(\ldots,m_k,\ldots, m'_{n})<L_\text{lm}(\ldots,m'_k,\ldots, m_{n})$.\\
(2) $L_\text{lm}(m_1, \ldots,m'_k,\ldots)>L_\text{lm}(m'_1,\ldots,m_k,\ldots)$.
\end{theorem}
The theorem says that for the same number of selected tokens, if they are consecutive, then they will get a lower $L_\text{lm}$ value.
Its proof is given in Appendix \ref{proof:1}.
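A toy sketch of Eq.~\ref{eq:lm} (we assume the sum starts at $i=2$ so that $m_{i-1}$ is defined, and the log-probabilities are stand-ins for the continuous-form language model described below):

```python
import numpy as np

def lm_regularizer(masks, token_logprobs):
    """Toy version of L_lm = -sum_{i>=2} m_{i-1} * log p_lm(m_i x_i | x_{<i}).

    masks: the Gumbel-softmax masks m_1..m_n; token_logprobs[i] stands in
    for log p_lm(m_i x_i | x_{<i}). A token's LM cost is only counted when
    its predecessor is selected, which favors consecutive selections.
    """
    m = np.asarray(masks, dtype=float)
    lp = np.asarray(token_logprobs, dtype=float)
    return float(-np.sum(m[:-1] * lp[1:]))

# Three tokens with stand-in conditional log-probabilities:
# only positions whose predecessor is kept contribute.
loss = lm_regularizer([1.0, 1.0, 0.0], [-1.0, -2.0, -3.0])
```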
\subsubsection{Language Model in Continuous Form} \label{lmvec}
Conventional language models are in discrete form: they generate a multinomial distribution for each token and minimize the negative log-likelihood (NLL) loss. The probability of the expected token is computed as follows:
\begin{equation}\label{eq:lm_lastlayer}
p(x_i|x_{<i})=\frac{\exp{(h_i^\top e_i)}}{\sum_{j\in\mathcal V}\exp{(h_i^\top e_j)}},
\end{equation}
where $h_i$ is the hidden vector corresponding to $x_i$, $e_i$ is a trainable parameter representing the output vector of $x_i$, and $\mathcal V$ is the vocabulary. In the language modeling literature~\cite{kuhn1990cache,bengio2003neural}, $x_i$ is a symbolic token, and each token in $\mathcal V$ has a corresponding trainable output vector. Eq.~\ref{eq:lm_lastlayer} is a \texttt{Softmax} operation that normalizes over the whole vocabulary.
Note that in Eq.~\ref{eq:lm}, the target sequence of the language model, $p(m_ix_i|x_{<i})$, is formed of vectors instead of symbolic tokens. Since $m_ix_i$ is not a symbolic token, it does not have a corresponding trainable output vector, so we cannot use a \texttt{Softmax}-like operation to normalize over the whole vocabulary. We therefore require a continuous-form language model, and make some small changes to its pre-training.
To model language in continuous form, we replace Eq.~\ref{eq:lm_lastlayer} by a single bilinear layer that directly computes the probability:
\begin{equation}\label{eq:vlm}
p(x_i|x_{<i})=\sigma(h_i^\top Me_i),
\end{equation}
where $\sigma$ stands for \texttt{sigmoid}, and $M$ is a trainable parameter matrix. The \texttt{sigmoid} operation ensures that the result lies in $[0,1]$, so it can be interpreted as a probability. The probability $p(m_ix_i|x_{<i})$ is then computed as:
\begin{equation}
p(m_ix_i|x_{<i})=\sigma(h_i^\top M(m_ie_i)).
\end{equation}
However, without a normalization operation like \texttt{Softmax}, Eq.~\ref{eq:vlm} computes a quasi-probability that involves only a single token.
To solve this issue, we use negative sampling~\cite{mikolov2013distributed} in the training procedure. Therefore, the language model is pretrained using the following loss:
\begin{equation}\label{eq:pre}
L_\text{pre}=-\sum_i\Big[\log\sigma(h_i^\top Me_i) - \mathbb E_{j\sim p(x_j)}\log\sigma(h_i^\top Me_j) \Big],
\end{equation}
where $p(x_j)$ is the empirical probability of token $x_j$ in the training set.
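The pre-training loss in Eq.~\ref{eq:pre} can be sketched as follows (all shapes, names, and the toy setup are illustrative, not our actual implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_loss(H, E, M, targets, unigram, rng, k=2):
    """L_pre with negative sampling: score the true next token against k
    tokens drawn from the unigram distribution p(x_j), following the
    signs of Eq. (L_pre)."""
    loss = 0.0
    for i, t in enumerate(targets):
        loss -= np.log(sigmoid(H[i] @ M @ E[t]))          # positive term
        for j in rng.choice(len(E), size=k, p=unigram):   # negative samples
            loss += np.log(sigmoid(H[i] @ M @ E[j]))
    return float(loss)

rng = np.random.default_rng(0)
V, d = 6, 4                                   # toy vocabulary / hidden size
E, M = rng.standard_normal((V, d)), np.eye(d)
H = rng.standard_normal((3, d))               # hidden states h_1..h_3
L = pretrain_loss(H, E, M, targets=[1, 2, 3],
                  unigram=np.full(V, 1 / V), rng=rng)
```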
\subsection{Training and Inference}
The total loss function of our model, which takes the generator's role in adversarial training, is shown in Eq.~\ref{eq:G}. The adversarial-related losses are denoted by $L_\text{adv}$. The discriminator is trained by $L_d$ from Eq.~\ref{eq:D}.
\begin{align}
L_\text{adv} &= \lambda_{g}L_g + L_\text{guide} + \lambda_{mi}L_\text{mi}\\
J_\text{total}&=L_{sp} +\lambda_{ib}L_\text{ib}+L_\text{adv}+\lambda_{lm}L_\text{lm}, \label{eq:G}
\end{align}
where $\lambda_{ib},\lambda_{g},\lambda_{mi}$, and $\lambda_{lm}$ are hyperparameters.
At training time, we optimize the generator loss $J_\text{total}$ and discriminator loss $L_d$ alternately until convergence.
At inference time, we run the selector-predictor model to obtain the prediction and the rationale~$\tilde{\textbf{z}}_\text{sym}$.
The whole training process is illustrated in Algorithm~\ref{algo:1}.
\begin{algorithm}
\SetAlgoLined
Random initialization\;
Pre-train language model by Eq.~\ref{eq:pre}\;
\For{each iteration $i=1,2,\ldots$ }{
\For{each batch }{
Calculate the loss $J_\text{total}$ for the selector-predictor model and the guider model by Eq.~\ref{eq:G}\;
Calculate the loss $L_d$ for the discriminator by Eq.~\ref{eq:D}\;
Update the parameters of selector-predictor model and the guider model\;
Update the parameters of the discriminator\;
}
}
\caption{Training process of InfoCal.}
\label{algo:1}
\end{algorithm}
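The alternating optimization in Algorithm~\ref{algo:1} can be sketched as follows (the loss/update callables are toy stand-ins for the real models):

```python
def train(n_iters, batches, gen_step, disc_step):
    """Alternate generator (J_total) and discriminator (L_d) updates."""
    history = []
    for _ in range(n_iters):
        for batch in batches:
            j_total = gen_step(batch)   # update selector-predictor + guider
            l_d = disc_step(batch)      # update discriminator
            history.append((j_total, l_d))
    return history

# Toy stand-ins: each "update" just shrinks its loss.
state = {"g": 10.0, "d": 5.0}
def gen_step(batch):
    state["g"] *= 0.9
    return state["g"]
def disc_step(batch):
    state["d"] *= 0.9
    return state["d"]

hist = train(3, [0, 1], gen_step, disc_step)
```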
\section{Experiments}\label{sec:experiment}
We performed experiments on three NLP applications: multi-aspect sentiment analysis, legal judgment prediction, and hate speech detection.
For multi-aspect sentiment analysis and hate speech detection, rationale annotations are available in the datasets, so we can directly use automatic evaluation metrics to assess the quality of the extracted rationales. For legal judgment prediction, there are no rationale annotations, so we conduct a human evaluation of the extracted rationales.
\subsection{Evaluation Metrics for Rationales}
Using the rationale annotations in the multi-aspect sentiment analysis and hate speech detection datasets, we evaluate the explainability of our model. For better comparison, we use the same evaluation metrics as previous works~\cite{deyoung-etal-2020-eraser,mathew2020hatexplain}, consisting of the five metrics listed below.
\begin{itemize}
\item IOU $F_1$: This metric is based on a token-level partial match score, Intersection-Over-Union (IOU). For two spans $a$ and $b$, the IOU is the number of tokens in their intersection divided by the number of tokens in their union: $\text{IOU}=\frac{|a\cap b|}{|a\cup b|}$. If the IOU between a predicted rationale and a ground-truth rationale is above $0.5$, we consider the prediction correct. The $F_1$ score calculated accordingly is the IOU $F_1$.
\item Token $P$, Token $R$, Token $F_1$: For a predicted rationale span $a$ and a ground-truth rationale span $b$, token-level precision is the number of overlapping tokens divided by the number of tokens in the predicted span: $P_\text{token}=\frac{|a\cap b|}{|a|}$. Token-level recall is the number of overlapping tokens divided by the number of tokens in the ground-truth span: $R_\text{Token}=\frac{|a\cap b|}{|b|}$. Token $F_1$ is then calculated as $\frac{2P_\text{token}R_\text{Token}}{P_\text{token}+R_\text{Token}}$.
\item AUPRC: This metric is the area under the precision ($P_\text{token}$)--recall ($R_\text{Token}$) curve, calculated by sweeping a threshold over the token-level scores.
\item Comprehensiveness (Comp.): This metric measures whether the selected rationale is complete. To calculate it, we create a contrast example for each example by removing the rationale $\textbf{z}_\text{sym}$ from the original input $\textbf{x}$, denoted $\textbf{x}/\textbf{z}_\text{sym}$. After removing the rationale, the model should become less confident in the originally predicted class $y$. We measure comprehensiveness as $\text{Comp.} = p(y|\textbf{x})-p(y|\textbf{x}/\textbf{z}_\text{sym})$. A high comprehensiveness score suggests that the extracted rationale is indeed complete for the prediction.
\item Sufficiency (Suff.): This metric measures whether the selected rationale is useful. Analogously to the comprehensiveness score, we calculate the sufficiency score as $\text{Suff.} = p(y|\textbf{x})-p(y|\textbf{z}_\text{sym})$. If the extracted rationale is indeed useful, the sufficiency score should be very small.
\end{itemize}
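For concreteness, the span-overlap metrics above can be sketched as follows (an illustrative re-implementation, not the official evaluation code of \newcite{deyoung-etal-2020-eraser}):

```python
def iou(pred, gold):
    """Intersection-over-union of two sets of token indices."""
    pred, gold = set(pred), set(gold)
    return len(pred & gold) / len(pred | gold) if pred | gold else 0.0

def token_prf(pred, gold):
    """Token-level precision, recall, and F1 between rationale spans."""
    pred, gold = set(pred), set(gold)
    overlap = len(pred & gold)
    p = overlap / len(pred) if pred else 0.0
    r = overlap / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Predicted rationale covers tokens 2-5; the gold rationale covers 3-6.
p, r, f = token_prf(range(2, 6), range(3, 7))   # 3 overlapping tokens
```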
Among these metrics, token $P$, token $R$, token $F_1$, IOU $F_1$, and AUPRC require gold rationale annotations, so we compute them only for the beer review and hate speech explanation tasks. Comp. and Suff. only apply to classification problems, so we use them only for the legal judgment prediction and hate speech explanation tasks.
\subsection{Beer Reviews}\label{sec:beer}
\subsubsection{Data.}
To provide a quantitative analysis for the extracted rationales, we use the BeerAdvocate\footnote{\url{https://www.beeradvocate.com/}} dataset~\cite{mcauley2012learning}. This dataset contains instances of human-written multi-aspect reviews on beers. Similarly to \citet{lei2016rationalizing}, we consider the following three aspects: appearance, smell, and palate. \newcite{mcauley2012learning} provide manually annotated rationales for 994 reviews for all aspects, which we use as test set.
The training set of BeerAdvocate contains 220,000 beer reviews, with human ratings for each aspect.
Each rating is on a scale of $0$ to $5$ stars and can be fractional (e.g., 4.5 stars). \newcite{lei2016rationalizing} normalized the scores to $[0,1]$ and picked ``less correlated'' examples to create a de-correlated subset.\footnote{\url{http://people.csail.mit.edu/taolei/beer/}} For each aspect, there are 80k--90k reviews for training and 10k reviews for validation.
\subsubsection{Model details.}
Because our task is a regression task, we make some modifications to our model. First, we replace the \textit{softmax} in Eq.~\ref{eq:pyn} by the \textit{sigmoid} function, and we replace the cross-entropy loss in Eq.~\ref{eq:lsp} by a mean-squared error (MSE) loss. Second, for a fair comparison, similarly to \newcite{lei2016rationalizing} and \newcite{bastings2019interpretable}, we implement the selector, predictor, and guider as bidirectional Recurrent Convolutional Neural Networks (RCNNs)~\cite{lei2016rationalizing}, which perform similarly to an LSTM~\cite{hochreiter1997long} but with $50\%$ fewer parameters.
We search the hyperparameters in the following scopes: $\lambda_\text{ib}\in(0.000, 0.001]$ with step $0.0001$, $\lambda_g\in[0.2,2.0]$ with step $0.2$, $\lambda_\text{mi}\in[0.0, 1.0]$ with step $0.1$, and $\lambda_\text{lm}\in[0.000,0.010]$ with step $0.001$.
The best hyperparameters were found as follows: $\lambda_\text{ib}=0.0003$, $\lambda_g=1$, $\lambda_\text{mi}=0.1$, and $\lambda_\text{lm}=0.005$.
We set $r_\phi(z_i)$ to $r_\phi(z_i=0)=0.999$ and $r_\phi(z_i=1)=0.001$.
\subsubsection{Evaluation Metrics and Baselines.}
For the evaluation of the selected tokens as rationales, we use precision, recall, and F1-score. Precision is the percentage of selected tokens that also belong to the human-annotated rationale; recall is the percentage of human-annotated rationale tokens that are selected by our model. The predictions made from the selected rationale tokens are evaluated using the mean-squared error (MSE).
We compare our method with the following baselines:
\begin{itemize}
\item Attention~\cite{lei2016rationalizing}: This method calculates attention scores over the tokens and selects top-k percent tokens as the rationale.
\item Bernoulli~\cite{lei2016rationalizing}: This method uses a selector network to calculate a Bernoulli distribution for each token, and then samples the tokens from the distributions as the rationale. The basic architecture is an RCNN~\cite{lei2016rationalizing}.
\item HardKuma~\cite{bastings2019interpretable}: This method replaces the Bernoulli distribution by a Kuma distribution to facilitate differentiability. The basic architecture is also an RCNN~\cite{lei2016rationalizing}.
\item FRESH~\cite{jain-etal-2020-learning}: This method breaks the selector-predictor model into three sub-components: a support model, which calculates the importance of each input token; a rationale extractor model, which extracts the rationale snippets according to the output of the support model; and a classifier model, which makes predictions based on the extracted rationale.
\item Sparse IB~\cite{paranjape-etal-2020-information}: This method also uses an information bottleneck to control the number of tokens selected for the rationale. However, it uses neither information calibration nor regularizers to extract more complete and fluent rationales.
\end{itemize}
\subsubsection{Results.}
The rationale extraction performance is shown in Table~\ref{tab:beer_rational}. The precision values for the baselines are taken directly from \cite{bastings2019interpretable}. We use their source code for the Bernoulli\footnote{\url{https://github.com/taolei87/rcnn}} and HardKuma\footnote{\url{https://github.com/bastings/interpretable_predictions}} baselines.
\begin{figure}[!t]
\centering
\includegraphics[width=0.75\linewidth]{./graphics/mse.png}\\
\caption{MSE of all aspects of BeerAdvocate. The blue dashed line represents the full-text baseline (all tokens are selected).}
\label{tab:mse}
\end{figure}
\begin{table}[!t]
\centering
\resizebox{0.49\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c||c|}
\hline
\multirow{2}{*}{Method} & \multicolumn{6}{c|}{Appearance} \\\cline{2-7}
& P& R& F &IOU $F_1$& \% selected & AUPRC \\\hline
Attention &80.6 &35.6 & 49.4 &32.8 &13&0.613 \\\hline
Bernoulli &96.3 &56.5 &71.2 &55.3&14&0.785 \\\hline
HardKuma &98.1 &65.1 &78.3 &64.3&13&0.833 \\\hline
FRESH &96.5 &53.2 & 68.6 &52.2&13&0.772 \\\hline
Sparse IB & 91.3 & 54.6 & 68.3 & 51.9 &13&0.752 \\\hline\hline
InfoCal &\textbf{98.5} &\textbf{73.2} &\textbf{84.0} &72.4&13& 0.871 \\\hline
\end{tabular}
}
\resizebox{0.49\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c||c|}
\hline
\multirow{2}{*}{Method} & \multicolumn{6}{c|}{Smell} \\\cline{2-7}
& P& R& F &IOU $F_1$& \% selected & AUPRC \\\hline
Attention &88.4 &20.6 &33.4 &20.1&7& 0.584\\\hline
Bernoulli &95.1 &38.2 &54.5 &37.5&7 & 0.697\\\hline
HardKuma &\textbf{96.8} &31.5 &47.5 &31.2&7& 0.675 \\\hline
FRESH &90.4 &32.3 &47.6 &31.2 &7 &0.647 \\\hline
Sparse IB & 90.8 & 34.5 & 50.0 & 33.3 &7& 0.659 \\\hline\hline
InfoCal &95.6 &\textbf{45.6} &\textbf{61.7} &44.7&7& 0.733\\\hline
\end{tabular}
}
\vspace{5pt}
\resizebox{0.49\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c||c| }
\hline
\multirow{2}{*}{Method} & \multicolumn{6}{c|}{Palate}\\\cline{2-7}
& P& R& F&IOU $F_1$& \% selected & AUPRC \\\hline
Attention &65.3 &35.8 &46.2 &30.1&7&0.537 \\\hline
Bernoulli &80.2 &53.6 &64.3 &47.3&7& 0.692\\\hline
HardKuma &\textbf{89.8} &48.6 &63.1 &46.1&7&0.718 \\\hline
FRESH &78.4 &50.2 &61.2 &44.1&7
& 0.668\\\hline
Sparse IB & 84.3 & 49.2 & 62.1 & 45.1 &7& 0.692 \\\hline\hline
InfoCal &89.6 &\textbf{59.8} &\textbf{71.7} &55.9&7& 0.767\\\hline
\end{tabular}
}
\smallskip
\caption{Token-level precision (P), recall (R), F1-score (F), IOU $F_1$, and AUPRC of selected rationales for the three aspects of BeerAdvocate. In bold, the best performance. ``\% selected'' means the average percentage of tokens selected out of the total number of tokens per instance.}
\label{tab:beer_rational}
\end{table}%
\begin{table}[!t]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{Appearance} & \multicolumn{3}{c|}{Smell}& \multicolumn{3}{c|}{Palate}\\\cline{2-10}
& P& R& F & P& R& F & P& R& F \\\hline
InfoCal (HardKuma reg) &97.9 &71.7 & 82.8 &94.8 &42.3 & 58.5 &89.4 &56.9 & 69.5 \\\hline
InfoCal (INVASE reg) & 96.8 &53.5 & 68.9 &93.2 &35.7 &51.6 &85.7 &39.5 & 54.1 \\\hline
InfoCal$-L_\text{adv}$ &97.3 &67.8 & 79.9 &94.3 &34.5 &50.5 &89.6 &51.2 &65.2 \\\hline
InfoCal$-L_\text{lm}$ &79.8 &54.9 &65.0 &87.1 &32.3 &47.1 &83.1 &47.4 &60.4 \\\hline
InfoCal &\textbf{98.5} &\textbf{73.2} &\textbf{84.0} &95.6 &\textbf{45.6} &\textbf{61.7} &89.6 &\textbf{59.8} &\textbf{71.7} \\\hline
\end{tabular}
\smallskip
\caption{The ablation tests. All the listed methods are tuned to select $13\%$ of the words for ``Appearance'' and $7\%$ for ``Smell'' and ``Palate'', to make them comparable with InfoCal. The best performances are in bold.}
\label{tab:beer_rational_ablation}
\end{table}%
\begin{table}[!t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|m{15cm}|}
\hline
Gold & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and leaves a small to moderate amount of lace in sheets when it eventually departs}
\textcolor{green}{the nose is sweet and spicy and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes .} there ...... alcohol .
\textcolor{blue}{the mouthfeel is exemplary ; full and rich , very creamy . mouthfilling with some mouthcoating as well .} drinkability is high ......
\\\hline
Bernoulli & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and} leaves a small to moderate amount of lace in sheets when it eventually departs
the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol .
the mouthfeel \textcolor{blue}{is exemplary ; full and rich , very creamy . mouthfilling with} some mouthcoating as well . drinkability is high ......\\\hline
HardKuma & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and leaves a small} to moderate amount of lace in sheets when it eventually departs the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . the mouthfeel \textcolor{blue}{is exemplary ; full and rich , very creamy . mouthfilling with} some mouthcoating as well . drinkability is high ...... \\\hline
InfoCal & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance} and leaves a small to moderate amount of lace in sheets when it eventually departs \textcolor{green}{the nose is sweet and spicy and the flavor is malty sweet} , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . \textcolor{blue}{the mouthfeel is exemplary ; full and rich , very creamy . mouthfilling with some mouthcoating }as well . drinkability is high ...... \\\hline
InfoCal$-L_\text{adv}$ &\textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance} and leaves a small to moderate amount of lace in sheets when it eventually departs the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . \textcolor{blue}{the mouthfeel is exemplary }; full and rich , very creamy . mouthfilling with some mouthcoating as well . drinkability is high ...... \\\hline
InfoCal$-L_\text{lm}$ & \textcolor{red}{clear , burnished} copper-brown topped by a large beige head that displays \textcolor{red}{impressive persistance} and leaves a small to \textcolor{red}{moderate amount of lace} in sheets when it eventually departs the nose is \textcolor{green}{sweet} and \textcolor{green}{spicy} and the flavor is \textcolor{green}{malty sweet} , accented nicely by \textcolor{green}{honey} and by abundant caramel/toffee notes . there ...... alcohol . the mouthfeel is \textcolor{blue}{exemplary} ; \textcolor{blue}{full} and rich , very \textcolor{blue}{creamy} . mouthfilling with some mouthcoating as well . drinkability is high ...... \\
\hline
\end{tabular}
}
\smallskip
\caption{One example of extracted rationales by different methods. Different colors correspond to different aspects: \textcolor{red}{red}: appearance, \textcolor{green}{green}: smell, and \textcolor{blue}{blue}: palate.}
\label{tab:case}
\end{table}%
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{./graphics/pr_valid.png}\\
\caption{The precision (left) and recall (right) for rationales on the smell aspect of the BeerAdvocate valid set. }
\label{tab:pr}\vspace*{1ex}
\end{figure}
We trained these baselines for 50 epochs and selected the models with the best recall on the dev set among those whose precision was equal to or larger than the reported dev precision. For a fair comparison, we used the same stopping criterion for InfoCal (for which we fixed a precision threshold at 2\% below the previous state of the art).
We also conducted ablation studies: (1) we removed the adversarial loss and report the results in the line InfoCal$-L_\text{adv}$, and (2)~we removed the LM regularizer and report the results in the line InfoCal$-L_\text{lm}$.
In Table~\ref{tab:beer_rational}, we see that, although Bernoulli, HardKuma, FRESH, and Sparse IB achieve very high precision, their recall scores are significantly lower.
The reason is that these four methods only aim to make the extracted rationale sufficient for a correct prediction, so the rationale need not be comprehensive and many details are lost. In comparison, our InfoCal method uses a dense neural network as a guider, which provides detailed information, so the selector is able to extract more complete rationales.
Our method InfoCal significantly outperforms the previous methods in recall on all three aspects of the BeerAdvocate dataset (Student's t-test, $p<0.01$). Moreover, all three F-scores of InfoCal set a new state of the art.
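Concretely, the precision, recall, and F-score above compare the set of token positions a model selects with the human-annotated rationale tokens. A minimal sketch of this token-level computation (the index sets below are invented for illustration, not taken from BeerAdvocate):

```python
def rationale_prf1(predicted, gold):
    """Token-level precision/recall/F1 between a predicted rationale
    (set of selected token indices) and the gold annotation."""
    tp = len(predicted & gold)  # tokens selected by both
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative index sets, not real BeerAdvocate annotations.
predicted = {1, 2, 3, 4}   # tokens the selector picked
gold = {2, 3, 4, 5, 6}     # tokens the annotators marked
p, r, f = rationale_prf1(predicted, gold)
```

A high-precision, low-recall method in the table corresponds to a small `predicted` set that sits almost entirely inside `gold` but covers little of it.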
In the ablation studies in Table~\ref{tab:beer_rational_ablation}, we see that when we remove the adversarial information calibration structure, i.e., for InfoCal$-L_\text{adv}$, the recall scores decrease significantly in all three aspects. This shows that our guider model is critical for the increased performance.
Moreover, when we remove the LM regularizer, we find a significant drop in both precision and recall, in the line InfoCal$-L_\text{lm}$. This highlights the importance of the semantic fluency of rationales, which is encouraged by our LM regularizer.
We also tried another kind of calibration, used by \newcite{yoon2018invase}. This calibration method is very similar to the ``base'' model in actor-critic methods~\cite{konda2000actor}. The difference from our InfoCal is that \newcite{yoon2018invase} minimizes the difference between the cross-entropy values of the selector-predictor model and the base model. We applied their method to our model and list the results in the InfoCal (INVASE reg) line in Table~\ref{tab:beer_rational_ablation}. We found that the recall score decreases considerably compared to InfoCal, which shows that our information calibration method is better at improving the recall of rationale extraction.
We also replaced the LM regularizer with the regularizer used in the HardKuma method, keeping all other parts of the model unchanged, denoted InfoCal (HardKuma reg) in Table~\ref{tab:beer_rational_ablation}. We found that the recall and F-score of InfoCal outperform those of InfoCal (HardKuma reg), which shows the effectiveness of our LM regularizer.
\begin{table}[!t]
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{l l |ccccc| ccccc |ccccc}
\toprule[1.0pt]
\multirow{2}{*}{Small}&Tasks & \multicolumn{5}{c|}{Law Articles} & \multicolumn{5}{c|}{Charges} & \multicolumn{5}{c}{Terms of Penalty}\\
\cmidrule[0.5pt]{2-17}
&Metrics& Acc & MP & MR & F1 & \%S & Acc & MP & MR & F1 &\%S & Acc & MP & MR & F1 &\%S \\
\midrule[0.5pt]
\multirow{8}{*}{Single} &Bernoulli (w/o) &0.812 & 0.726 & 0.765 & 0.756 &100 & 0.810 & 0.788 & 0.760 & 0.777 & 100 & 0.331 & 0.323 & 0.297 & 0.306 & 100\\
&Bernoulli &0.755 & 0.701 & 0.737 & 0.728 &14 & 0.761 & 0.753 & 0.739 & 0.754 & 14 & 0.323 & 0.308 & 0.265 & 0.278 & 30\\
&HardKuma (w/o) & 0.807 & 0.704 & 0.757 & 0.739 &100&0.811 & 0.776 & 0.763 & 0.776 &100 & 0.345 & 0.355 & 0.307 & 0.319& 100\\
&HardKuma & 0.783 & 0.706 & 0.735 & 0.729 &14&0.778 & 0.757 &0.714 &0.736 &14 & 0.340 & 0.328 &0.296 & 0.309 & 30\\
&FRESH & 0.801 & 0.714 & 0.761 & 0.743 &14&0.790 & 0.766 &0.725 &0.745 &14 & 0.344 & 0.332 &0.308 & 0.312 & 30\\
&Sparse IB &0.773 & 0.692 & 0.734 & 0.712 &14 & 0.769 & 0.758 & 0.742 & 0.750 & 14 & 0.336 & 0.324 & 0.280 & 0.300 & 30\\
\cline{2-17}
&InfoCal$-L_\text{adv}$ & 0.826 &0.739 &0.774 &0.777 &14 &0.845 &0.804 &0.781 &0.797 &14 &0.351 &0.374 & 0.329&0.330 &30\\
&InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ (w/o) & \textbf{0.841} & \textbf{0.759}& \textbf{0.785}&\textbf{0.793} &100 & \textbf{0.850} & \textbf{0.820}& \textbf{0.801}&\textbf{0.814} &100 & \textbf{0.368}&\textbf{0.378} &\textbf{0.341} & \textbf{0.346}&100\\
&InfoCal$-L_\text{lm}$ & 0.822 &0.723 &0.768 &0.773 &14 &0.843 &0.796 &0.770 &0.772 &14 &0.347 &0.361 & 0.318&0.320 &30\\
&InfoCal & \textcolor{red}{0.834} &\textcolor{red}{0.744} &\textcolor{red}{0.776} &\textcolor{red}{0.786} &14 &\textcolor{red}{0.849} &\textcolor{red}{0.817} &\textcolor{red}{0.798} &\textcolor{red}{0.813} &14 &\textcolor{red}{0.358} &\textcolor{red}{0.372} & \textcolor{red}{0.335}&\textcolor{red}{0.337} &30\\
\midrule[0.5pt]
\multirow{3}{*}{Multi} &FLA & 0.803& 0.724& 0.720 &0.714 &$-$&0.767& 0.758& 0.738& 0.732&$-$ &0.371& 0.310 &0.300 &0.299&$-$\\
&TOPJUDGE & 0.872 & 0.819 &0.808 &0.800 &$-$&0.871 &0.864& 0.851 &0.846&$-$& 0.380 &0.350 &0.353&0.346&$-$\\
&MPBFN-WCA &\underline{0.883} &\underline{0.832}& \underline{0.824} &\underline{0.822}&$-$ &\underline{0.887} &\underline{0.875} &\underline{0.857} &\underline{0.859}&$-$&\underline{ 0.414} &\underline{0.406} &\underline{0.369}& \underline{0.392}&$-$\\
\midrule[1.0pt]
\multirow{2}{*}{Big}&Tasks & \multicolumn{5}{c|}{Law Articles} & \multicolumn{5}{c|}{Charges} & \multicolumn{5}{c}{Terms of Penalty}\\
\cmidrule[0.5pt]{2-17}
&Metrics& Acc & MP & MR & F1 &\%S& Acc & MP & MR & F1&\%S & Acc & MP & MR & F1 &\%S\\
\midrule[0.5pt]
\multirow{8}{*}{Single} &Bernoulli (w/o) & 0.876 &0.636 & 0.388 &0.625 &100 & 0.857 &0.643 &0.410 &0.569 &100 & 0.509 &0.511 &0.304 &0.312 & 100\\
&Bernoulli & 0.857 &0.632 & 0.374 &0.621 &14 & 0.848 &0.635 &0.402 &0.543 &14 & 0.496 &0.505 &0.289 &0.306 & 30\\
&HardKuma (w/o)& 0.907 & 0.664 & 0.397 & 0.627 & 100& 0.907 & 0.689 & 0.438 & 0.608&100 & 0.555 & 0.547 & 0.335 & 0.356&100\\
&HardKuma & 0.876 & 0.645 & 0.384 & 0.609 & 14& 0.892 & 0.676 & 0.425 & 0.587&14 & 0.534 & 0.535 & 0.310 & 0.334&30\\
&FRESH & 0.902 & 0.698 & 0.675 & 0.682 & 14& 0.902 & 0.695 & 0.632 & 0.653&14 & 0.532 & 0.539 & 0.343 & 0.387&30\\
&Sparse IB & 0.863 &0.634 & 0.372 & 0.624 &14 & 0.852 &0.638 &0.401 & 0.545 &14 & 0.501 &0.510 &0.286 & 0.302 & 30\\
\cline{2-17}
&InfoCal$-L_\text{adv}$ & 0.953 & 0.844& 0.711&0.782&20 &0.954& 0.857 & 0.772 &0.806& 20&0.552&0.490& 0.353& 0.356 &30\\
&InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ (w/o) & \textbf{0.959} & \textbf{0.862} & \textbf{0.751}&0.791&100 &\textbf{0.957}&\textbf{0.878}&0.776&0.807&100 &\textbf{0.584}& \textbf{0.519}& \textbf{0.411}&\textbf{0.427} &30\\
&InfoCal$-L_\text{lm}$ & 0.953 & 0.851& 0.730 & 0.775& 20 & 0.950& 0.857&0.756& 0.789 &20 &0.563& 0.486& 0.374& 0.367& 30\\
&InfoCal & \textcolor{red}{0.956}&\textcolor{red}{0.852}& \textcolor{red}{0.742}& \textbf{0.805} & 20 &\textcolor{red}{0.955}&\textcolor{red}{0.868}& \textbf{0.788}&\textbf{0.820} &20 & 0.556&\textbf{0.519}&0.362&0.372 &30 \\
\midrule[0.5pt]
\multirow{3}{*}{Multi}&FLA & 0.942&0.763&0.695&0.746&$-$&0.931&0.798&0.747&0.780&$-$&0.531&0.437&0.331&0.370&$-$\\
&TOPJUDGE&0.963&0.870&0.778&0.802&$-$&0.960&0.906&0.824&0.853&$-$&0.569&0.480&0.398&0.426&$-$\\
&MPBFN-WCA&\underline{0.978}&\underline{0.872}&\underline{0.789}&\underline{0.820}&$-$&\underline{0.977}&\underline{0.914}&\underline{0.836}&\underline{0.867}&$-$&\underline{0.604}&\underline{0.534}&\underline{0.430}&\underline{0.464}&$-$\\
\bottomrule[1.0pt]
\end{tabular}
}
\end{center}
\caption{The overall performance on the CAIL2018 dataset (Small and Big). The results from previous works are directly quoted from \newcite{yang2019legal}, because we share the same experimental settings, and hence we can make direct comparisons. \%S represents the selection percentage (which is determined by the model). ``Single'' represents single-task models, ``Multi'' represents multi-task models. The best performance is in bold. The red numbers mean that they are less than the best performance by no more than $0.01$. The underlined numbers are the state-of-the-art performances, all of which are obtained by multi-task models. (w/o) represents that the corresponding model is a dense model without extracting rationales.}
\label{tab:overall_law}
\end{table}%
We further show the relation between a model's performance on predicting the final answer and the rationale selection percentage (which is determined by the model) in Fig.~\ref{tab:mse}, as well as the relation between precision/recall and training epochs in Fig.~\ref{tab:pr}. The rationale selection percentage is influenced by $\lambda_\text{ib}$. According to Fig.~\ref{tab:mse}, our method InfoCal achieves a similar prediction performance compared to previous works, and does slightly better than HardKuma for some selection percentages. Fig.~\ref{tab:pr} shows the changes in precision and recall over training epochs. We can see that our model achieves a similar precision after several training epochs, while significantly outperforming the previous methods in recall, which demonstrates the effectiveness of our proposed method.
Table~\ref{tab:case} shows an example of rationale extraction. Compared to the rationales extracted by Bernoulli and HardKuma, our method provides more fluent rationales for each aspect. For example, those methods select unimportant tokens like ``and'' (after ``persistance'', in the Bernoulli method) and ``with'' (after ``mouthful'', in the HardKuma method) just because they are adjacent to important ones.
\subsection{Legal Judgement Prediction}\label{sec:law}
\subsubsection{Datasets and Preprocessing.}
We use the CAIL2018 data\-set\footnote{ \url{https://cail.oss-cn-qingdao.aliyuncs.com/CAIL2018_ALL_DATA.zip}}~\cite{zhong-etal-2018-legal} for three tasks on legal judgment prediction.
The dataset consists of criminal cases published by the Supreme People's Court of China.\footnote{\url{http://cail.cipsc.org.cn/index.html}} To be consistent with previous works, we used two versions of CAIL2018, namely, CAIL-small (the exercise stage data) and CAIL-big (the first stage data). The statistics of the CAIL2018 dataset are shown in Table~\ref{tab:cail}.
The instances in CAIL2018 consist of a \textit{fact description} and three kinds of annotations: \textit{applicable law articles}, \textit{charges}, and \textit{the penalty terms}. Therefore, our three tasks on this dataset consist of predicting (1)~law articles, (2)~charges, and (3)~terms of penalty according to the given fact description.
\begin{table}[!t]
\begin{center}
\begin{tabular}{lcc}
\toprule[1.0pt]
& CAIL-small & CAIL-big\\
\midrule[0.5pt]
Cases & 113,536 & 1,594,291\\
Law Articles & 105 &183 \\
Charges & 122 & 202\\
Term of Penalty &11 & 11\\
\bottomrule[1.0pt]
\end{tabular}
\end{center}
\caption{Statistics of the CAIL2018 dataset.}
\label{tab:cail}
\end{table}%
In the dataset, there are also many cases with multiple applicable law articles and multiple charges. To be consistent with previous works on legal judgement prediction \cite{zhong-etal-2018-legal,yang2019legal}, we filter out these multi-label examples.
We also filter out instances of charges and law articles that occur fewer than $100$ times in the dataset (e.g., insulting the national flag and national emblem).
For the term of penalty, we divide the terms into $11$ non-overlapping intervals. These preprocessing steps are the same as in \newcite{zhong-etal-2018-legal} and \newcite{yang2019legal}, making it fair to compare our model with previous models.
We use Jieba\footnote{\url{https://github.com/fxsjy/jieba}} for token segmentation, because this dataset is in Chinese. The word embedding size is set to $100$, and the embeddings are randomly initialized before training. The maximum sequence length is set to $1000$. The architectures of the selector, predictor, and guider are all bidirectional LSTMs, with the hidden size set to $100$. $r_\phi(z_i)$ is the sampling rate for each token ($z_i=0$ meaning the token is not selected), which we set to $r_\phi(z_i=0)=0.9$.
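Assuming $z_i=0$ denotes an unselected token, the rate $r_\phi(z_i=0)=0.9$ means roughly $10\%$ of tokens are kept in expectation. A toy sketch of drawing such a mask with fixed, independent Bernoulli draws (the real selector is a learned, input-dependent distribution, not this fixed rate):

```python
import random

def sample_mask(n_tokens, keep_prob=0.1, seed=0):
    """Draw a binary selection mask; z_i = 1 keeps token i.
    With r(z_i = 0) = 0.9, each token is kept with probability 0.1."""
    rng = random.Random(seed)
    return [1 if rng.random() < keep_prob else 0 for _ in range(n_tokens)]

mask = sample_mask(10000)
selection_rate = sum(mask) / len(mask)  # close to 0.1 for large n
```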
We search the hyperparameters in the following ranges: $\lambda_\text{ib} \in [0.00, 0.10]$ with step $0.01$, $\lambda_\text{g} \in [0.2, 2.0]$ with step $0.2$, $\lambda_\text{mi} \in [0.0, 1.0]$ with step $0.1$, and $\lambda_\text{lm} \in [0.000, 0.010]$ with step $0.001$.
The best hyperparameters were found to be $\lambda_\text{ib}=0.05$, $\lambda_\text{g}=1$, $\lambda_\text{mi}=0.5$, and $\lambda_\text{lm}=0.005$ for all three tasks.
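The search above amounts to an exhaustive grid over the four regularization weights. A sketch enumerating it, using integer-scaled ranges to avoid floating-point drift in the step sizes:

```python
from itertools import product

# Integer-scaled grids matching the ranges and steps stated in the text.
lam_ib = [i / 100 for i in range(0, 11)]     # 0.00 .. 0.10, step 0.01
lam_g  = [i / 10 for i in range(2, 21, 2)]   # 0.2  .. 2.0,  step 0.2
lam_mi = [i / 10 for i in range(0, 11)]      # 0.0  .. 1.0,  step 0.1
lam_lm = [i / 1000 for i in range(0, 11)]    # 0.000 .. 0.010, step 0.001

grid = list(product(lam_ib, lam_g, lam_mi, lam_lm))
# 11 * 10 * 11 * 11 = 13310 configurations in total
```

The reported best setting $(0.05, 1.0, 0.5, 0.005)$ is one point on this grid.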
\subsubsection{Overall Performance.}
We again compare our method with the Bernoulli~\cite{lei2016rationalizing} and the HardKuma~\cite{bastings2019interpretable} methods on rationale extraction. These two methods are both single-task models, which means that we train a model separately for each task.
We also compare our method with three multi-task methods listed as follows:
\begin{itemize}
\item FLA~\cite{luo-etal-2017-learning} uses an attention mechanism to capture the interaction between fact descriptions and applicable law articles.
\item TOPJUDGE~\cite{zhong-etal-2018-legal} uses a topological architecture to link different legal prediction tasks together, including the prediction of law articles, charges, and terms of penalty.
\item MPBFN-WCA~\cite{yang2019legal} uses a backward verification to verify upstream tasks given the results of downstream tasks.
\end{itemize}
The results
are listed in Table~\ref{tab:overall_law}.
On CAIL-small, we observe that it is more difficult for single-task models to outperform the multi-task methods. This is likely because the tasks are related, and learning them together helps a model achieve better performance on each task separately. After removing the restriction of the information bottleneck, InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ achieves the best performance in all tasks; however, it selects all the tokens in the input fact description.
When we restrict the number of selected tokens to $14\%$ (by tuning the hyperparameter $\lambda_\text{ib}$), InfoCal (in red) drops only slightly in all evaluation metrics, and it already outperforms Bernoulli and HardKuma even when they use all the tokens. This means that the $14\%$ of selected tokens are very important for the predictions. We observe a similar phenomenon on CAIL-big. Specifically, InfoCal outperforms InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ in some evaluation metrics, such as the F1-score on the law article and charge prediction tasks.
\subsubsection{Rationales.}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
&\multicolumn{2}{c|}{Law Articles} & \multicolumn{2}{c|}{Charges} & \multicolumn{2}{c|}{Terms of Penalty}\\
\cline{2-7}
& Comp.$\uparrow$ & Suff.$\downarrow$ &Comp.$\uparrow$ & Suff.$\downarrow$ &Comp.$\uparrow$ & Suff.$\downarrow$ \\
\hline
Bernoulli & 0.231 &0.005 &0.243 & 0.002&0.132 &0.017\\
HardKuma & 0.304& -0.021&0.312 &-0.034 & 0.165&0.009\\
InfoCal &\textbf{0.395} & \textbf{-0.056}&\textbf{0.425} &\textbf{-0.067} &\textbf{0.203} &\textbf{0.005}\\
\hline
\end{tabular}
\caption{The quantitative evaluation of rationales for legal judgment prediction. The ``$\uparrow$''
means that a good result should have a larger value, while ``$\downarrow$'' means lower is better.}
\label{tab:lawrationale}
\end{table}
The CAIL2018 dataset does not contain annotations of rationales. Therefore, we use only Comp.\ and Suff.\ for quantitative evaluation, since they do not require gold rationale annotations. The results are shown in Table~\ref{tab:lawrationale}. We can see that in all three subtasks of legal judgement prediction, our proposed method outperforms the previous methods.
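Here Comp.\ (comprehensiveness) measures the drop in the predicted probability of the gold class when the rationale tokens are removed from the input, and Suff.\ (sufficiency) the drop when only the rationale tokens are kept. A toy sketch with a stand-in predictor (the cue-word scorer below is purely illustrative, not our trained model):

```python
CUE_WORDS = {"great", "creamy"}  # illustrative cues, not learned features

def predict_prob(tokens):
    """Stand-in for a trained predictor's probability of the gold class."""
    hits = sum(t in CUE_WORDS for t in tokens)
    return min(1.0, 0.5 + 0.2 * hits)

tokens = "the beer is great and creamy overall".split()
rationale = {3, 5}  # indices of "great" and "creamy"

p_full = predict_prob(tokens)
p_only_r = predict_prob([t for i, t in enumerate(tokens) if i in rationale])
p_without_r = predict_prob([t for i, t in enumerate(tokens) if i not in rationale])

comprehensiveness = p_full - p_without_r  # high: the rationale mattered
sufficiency = p_full - p_only_r          # low: the rationale alone suffices
```

A good rationale therefore has high Comp.\ and low (possibly negative) Suff., matching the arrows in the table.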
We also conducted a human evaluation of the extracted rationales. Due to limited budget and resources, we sampled 300 examples for each task. We randomly shuffled the rationales for each task and asked six undergraduate students from Peking University to evaluate them. The human evaluation is based on three metrics: usefulness (U), completeness (C), and fluency (F), each scored from $1$ (lowest) to~$5$. The scoring standard for human annotators is given in Appendix \ref{human} in the extended paper.
The human evaluation results are shown in Table~\ref{tab:he}. We can see that our proposed method outperforms the previous methods in all metrics. The inter-rater agreement, also reported in Table~\ref{tab:he}, is acceptable according to Krippendorff's~(\citeyear{krippendorff2004content}) criterion.
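Krippendorff's alpha for our nominal rating categories can be computed from a coincidence matrix over pairable ratings. A compact sketch for nominal data (the rating lists below are invented, not our actual annotations):

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: list of rating lists, one per rated item (nominal categories)."""
    coincidence = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # items with a single rating are not pairable
        for i in range(m):
            for j in range(m):
                if i != j:
                    coincidence[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    totals = Counter()
    for (c, _), v in coincidence.items():
        totals[c] += v
    n = sum(totals.values())
    observed = sum(v for (c, k), v in coincidence.items() if c != k) / n
    expected = sum(totals[c] * totals[k]
                   for c in totals for k in totals if c != k) / (n * (n - 1))
    return 1.0 if expected == 0 else 1.0 - observed / expected

perfect = krippendorff_alpha_nominal([[5, 5], [3, 3], [5, 5], [3, 3]])
noisy = krippendorff_alpha_nominal([[5, 3], [5, 3]])
```

Perfect agreement yields $\alpha=1$; systematic disagreement drives $\alpha$ below zero.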
A sample case of extracted rationales in legal judgement prediction is shown in Fig.~\ref{fig:lawcase}. We observe that our method selects all the useful information for the charge prediction task, and the selected rationales consist of contiguous, fluent sub-phrases.
\begin{table}[!t]
\centering
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c|}{Law} & \multicolumn{3}{|c|}{Charges} & \multicolumn{3}{|c|}{ToP} \\\cline{2-10}
& U & C &F & U & C & F&U & C & F \\\hline
Bernoulli &4.71 &2.46 &3.45 & 3.67 &2.35 &3.45 &3.35 &2.76&3.55 \\\hline
HardKuma &4.65 &3.21 &3.78 & 4.01 &3.26&3.44 &3.84 &2.97&3.76\\\hline
InfoCal &\textbf{4.72} &\textbf{3.78} &\textbf{4.02} &\textbf{4.65} & \textbf{3.89}&\textbf{4.23} &\textbf{4.21} &\textbf{3.43}&\textbf{3.97}\\\hline\hline
$\alpha$&0.81 & 0.79&0.83&0.92&0.85&0.87&0.82&0.83&0.94\\\hline
\end{tabular}
}
\smallskip
\caption{Human evaluation on the CAIL2018 dataset. ``ToP'' is the abbreviation of ``Terms of Penalty''. The metrics are usefulness (U), completeness (C), and fluency (F), each scored from $1$ to~$5$. The best performance is in bold. $\alpha$ denotes Krippendorff's alpha. The basic architecture for all three methods is the RCNN of \newcite{lei2016rationalizing}.}
\label{tab:he}
\end{table}%
\begin{figure}[!t]
\centering
\includegraphics[width=0.75\linewidth]{./graphics/lawsmall.pdf}\\
\smallskip
\caption{An example of an extracted rationale for charge prediction. The correct charge is ``Rape''. The original fact description is in Chinese; we have translated it into English. It is easy to see that the extracted rationales are very helpful for making the charge prediction.}
\label{fig:lawcase}
\end{figure}
\subsection{Hate Speech Explanation}\label{sec:hate}
\subsubsection{Datasets and Preprocessing.}
To evaluate the performance of our method on the hate speech detection task, we use the HateXplain dataset\footnote{\url{https://github.com/punyajoy/HateXplain.git}}~\cite{mathew2020hatexplain}. This dataset contains 9,055 posts from Twitter~\cite{davidson2017automated,fortuna2018survey} and 11,093 posts from Gab~\cite{lima2018inside,mathew2020hate,zannettou2018gab}.
There are three different classes in this dataset: hateful, offensive, and normal. Apart from the class labels, the dataset also contains rationale annotations for each example labelled as hateful or offensive. The dataset is already split into training, validation, and test sets in an $8:1:1$ ratio.
More details of this dataset are shown in Table~\ref{tab:hatedata}. The dataset is very noisy, so it can test the robustness of our InfoCal method on noisy text.
For classification performance, we use three metrics: Accuracy, Macro $F_1$, and AUROC. These metrics evaluate the ability to distinguish among the three classes, i.e., hateful, offensive, and normal. AUROC is the area under the ROC curve.
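As a reminder, AUROC equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one (ties counted as half). A naive pairwise sketch for the binary case (the scores and labels are made up; for our three classes one would average this over one-vs-rest splits):

```python
def auroc(labels, scores):
    """Pairwise (Mann-Whitney) AUROC for binary labels; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores: one positive example is ranked below a negative one.
labels = [0, 1, 0, 1]
scores = [0.3, 0.2, 0.1, 0.8]
value = auroc(labels, scores)  # 3 of 4 pos/neg pairs ordered correctly
```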
\subsubsection{Competing Methods.}
In this experiment, we again compare our method with Bernoulli~\cite{lei2016rationalizing} and HardKuma~\cite{bastings2019interpretable}.
We also compare with the following competing methods provided by \newcite{mathew2020hatexplain}:
\begin{itemize}
\item \textbf{CNN-GRU}~\cite{zhang2018detecting} has achieved state-of-the-art performance on multiple hate speech datasets. CNN-GRU first uses a convolutional neural network (CNN)~\cite{lecun1995convolutional} to capture local features and then a recurrent neural network (RNN)~\cite{rumelhart1986learning} with GRU units~\cite{cho2014learning} to capture temporal information. Finally, the model max-pools the GRU's hidden states into a feature vector, which a fully connected layer maps to the prediction.
\item \textbf{BiRNN}~\cite{schuster1997bidirectional} first feeds the tokens into a sequential model with long short-term memory (LSTM)~\cite{hochreiter1997long} units. The last hidden state is then passed through two feed-forward layers and a fully connected layer for prediction.
\item \textbf{BiRNN-Attn} adds an attention layer after the sequential layer of BiRNN model.
\item \textbf{BERT}~\cite{devlin-etal-2019-bert} is a large pretrained model consisting of a stack of transformer~\cite{vaswani2017attention} encoder layers. A fully connected layer is added to the output corresponding to the \textit{CLS} token for hate speech class prediction. We used the \texttt{bert-base-uncased} model (12 layers, 768 hidden units, 12 heads, 110M parameters), the same setting as in previous work~\cite{mathew2020hate}. The model is fine-tuned on the HateXplain training set.
\end{itemize}
In all the above methods, rationales are extracted by two external techniques: attention~\cite{rocktaschel2015reasoning} and LIME~\cite{ribeiro2016should}. For the attention technique, as described in \newcite{deyoung-etal-2020-eraser}, the tokens with the top 5 attention values are selected as the rationale. The LIME technique selects rationales by training a new explanation model to imitate the original ``black-box'' deep learning model. In contrast, our model InfoCal, as well as the two competing methods Bernoulli and HardKuma, extracts rationales by the model itself, without any external technique (such as attention selection or LIME) for rationale selection. It is therefore much more challenging for these models to achieve similar explainability performance.
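The attention-based extraction thus reduces to picking the five highest-weighted tokens. A minimal sketch (the weight vector is invented for illustration):

```python
def topk_rationale(attn_weights, k=5):
    """Indices of the k tokens with the largest attention weights,
    returned in sentence order."""
    ranked = sorted(range(len(attn_weights)),
                    key=lambda i: attn_weights[i], reverse=True)
    return sorted(ranked[:k])

# Invented attention weights over an 8-token post.
weights = [0.01, 0.30, 0.05, 0.20, 0.02, 0.25, 0.10, 0.07]
rationale_idx = topk_rationale(weights)  # -> [1, 3, 5, 6, 7]
```

Note that, unlike our selector, this extraction is a post-hoc readout of an already trained attention layer.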
In \newcite{mathew2020hatexplain}, the ground-truth rationale annotations were also used to train some models by adding an extra cross-entropy loss on the attention layer. This rationale supervision is applied to the BiRNN and BERT models, denoted BiRNN-HateXplain and BERT-HateXplain, respectively.
\subsubsection{Results.}
The overall results are shown in Table~\ref{tab:hate}. In terms of classification performance, the BERT models achieve the highest scores on all three metrics (Accuracy, Macro $F_1$, and AUROC), regardless of whether rationale supervision is used. Our InfoCal model outperforms all the other approaches except BERT. This makes sense, because BERT is pretrained on a large amount of text and therefore has a much better understanding of language than models without pretraining.
In the explainability evaluations, our InfoCal model achieves state-of-the-art performance on three metrics: IOU $F_1$, AUPRC, and Sufficiency. On the other two metrics (Token $F_1$ and Comprehensiveness), InfoCal is comparable with the state-of-the-art method (BERT [Attn]). Note that in our model the rationales are selected by the model itself, rather than by externally taking the top 5 attention values or applying LIME. This result therefore shows that InfoCal is a better model for explaining neural network predictions.
We also list in Table~\ref{tab:hate} the performance of the BiRNN and BERT models supervised with rationale annotations. Both the classification and the explainability performance improve considerably after training with rationale annotations. This also makes sense, because rationale annotations are the most direct training signal for rationale selection. However, such annotations are very expensive to obtain in real-world applications. Therefore, rationale extraction methods that need no rationale supervision are more suitable for industrial use.
\begin{table}[!ht]
\centering
\begin{tabular}{cccc}
\toprule[1.0pt]
&Twitter & Gab & Total \\
\midrule[0.5pt]
Hateful & 708 & 5,227 & 5,935\\
Offensive &2,328 &3,152 &5,480\\
Normal & 5,770& 2,044& 7,814\\
Undecided &249& 670& 919\\
\midrule[0.5pt]
Total &9,055& 11,093 &20,148\\
\bottomrule[1.0pt]
\end{tabular}
\caption{The statistics of the HateXplain dataset. ``Undecided'' means that all three annotators gave different labels to the example during annotation. We omit this part of the data in our experiments, consistent with previous works.}
\label{tab:hatedata}
\end{table}
\begin{table}[!t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ll|ccc|ccccc}
\toprule[1.0pt]
&& \multicolumn{3}{c|}{Classification Performance}&\multicolumn{5}{c}{Explainability}\\
\midrule[0.5 pt]
&&Acc $\uparrow$ & Macro F1 $\uparrow$ & AUROC $\uparrow$ & IOU F1 $\uparrow$ & Token F1 $\uparrow$& AUPRC $\uparrow$ & Comp $\uparrow$& Suff $\downarrow$\\
\midrule[0.5 pt]
\multirow{9}{1.2cm}{W/o rationale supervising}&CNN-GRU [LIME] & 0.627& 0.606& 0.793& 0.167&0.385& 0.648&0.316& -0.082 \\
&BiRNN [LIME] & 0.595& 0.575 &0.767&0.162 &0.361 &0.605&0.421& -0.051 \\
&BiRNN-Attn [Attn] & 0.621& 0.614 &0.795&0.167 &0.369& 0.643&0.278& 0.001 \\
&BiRNN-Attn [LIME] & 0.621& 0.614 &0.795&0.162 &0.386& 0.650&0.308& -0.075 \\
&BERT [Attn] & \textbf{0.690} &\textbf{0.674} &\textbf{0.843}&0.130 &\textbf{0.497}& 0.778&\textbf{0.447} &0.057 \\
&BERT [LIME] & \textbf{0.690} &\textbf{0.674} &\textbf{0.843}&0.118 &0.468& 0.747&0.436 &0.008 \\
\cmidrule[0.5 pt]{2-10}
&Bernoulli &0.597 &0.568 &0.765 &0.138 &0.482&0.668&0.324&0.003 \\
&HardKuma & 0.594 &0.570 &0.772 &0.152 &0.485&0.672&0.406&-0.022 \\
&Sparse IB &0.602 &0.572 &0.768 &0.145 &0.486&0.670&0.389&0.001 \\
&InfoCal &0.630 &0.614 &0.792 &\textbf{0.206} &0.493&\textbf{0.680} &0.436&\textbf{-0.097 }\\
\midrule[0.5 pt]
\multirow{4}{1.2cm}{With rationale supervising}&BiRNN-HateXplain [Attn] & 0.629 &0.629& 0.805&\underline{0.222} &\underline{0.506}& \underline{0.841}&0.281& 0.039 \\
&BiRNN-HateXplain [LIME] &0.629 &0.629 &0.805&0.174 &0.407& 0.685&0.343& -0.075 \\
&BERT-HateXplain [Attn] &\underline{0.698} &\underline{0.687}& \underline{0.851} &0.120 &0.411& 0.626&0.424& 0.160\\
&BERT-HateXplain [LIME] &\underline{0.698} &\underline{0.687}& \underline{0.851} &0.112 &0.452 &0.722&\underline{0.500} &0.004 \\
\bottomrule[1.0pt]
\end{tabular}
}
\smallskip
\caption{The overall performance on the HateXplain dataset. The results from previous work are directly quoted from \newcite{mathew2020hatexplain}, because we share the identical train/valid/test data split and hence can make direct comparisons. The ``$\uparrow$'' means that a good result should have a larger value, while ``$\downarrow$'' means lower is better. The square brackets indicate the method used to extract rationales. Note that except for Bernoulli, HardKuma, Sparse IB, and InfoCal, all the other methods extract rationales via external methods ([Attn] or [LIME]). The basic architectures for Bernoulli, HardKuma, and InfoCal are all RCNN.}
\label{tab:hate}
\end{table}%
\subsubsection{Case Study for Rationales.}
In Table~\ref{tab:hatecase}, we list some rationales generated on the HateXplain dataset by our InfoCal method and the two competing methods, Bernoulli and HardKuma. We can see that our InfoCal method extracts nearly all of the ground-truth annotated rationales. Compared to Bernoulli and HardKuma, InfoCal does not extract nonsensical rationales, such as ``yeah i also'' in the second row and ``precinct and campaign meetings'' in the third row. This again shows the effectiveness of the information calibration method.
\begin{table}[]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ p{4cm} p{4cm} p{4cm} p{4cm} }
\toprule[1.0pt]
Gold & Bernoulli & HardKuma & InfoCal\\
\midrule[0.5pt]
\textbf{jew faggot milo} is king of the alt right great it not the alt right anymore & \textbf{jew faggot milo is} king of the alt right great it not the alt right anymore & \textbf{jew faggot} milo is king of the alt right great it not the alt right anymore & \textbf{jew faggot milo} is king of the alt right great it not the alt right anymore\\
\midrule[0.5pt]
yeah i also agree that the \textbf{middle eastern ones are coming mostly to hungary} but these other \textbf{refugees away from mudslimes} are all schengen regions why bother applying for refugee status when you can just move it perfectly legal & \textbf{yeah i also} agree that the middle eastern ones are coming mostly to hungary but these other \textbf{refugees} away from mudslimes are all schengen regions why bother applying for refugee status when you can just move it perfectly legal & yeah i also agree that the \textbf{middle eastern ones are coming mostly to hungary} but these other \textbf{refugees} away from mudslimes \textbf{are all schengen regions} why bother applying for refugee status when you can just move it perfectly legal & yeah i also agree that the \textbf{middle eastern ones} are coming mostly to hungary but these other \textbf{refugees away from mudslimes} are all schengen regions why bother applying for refugee status when you can just move it perfectly legal\\
\midrule[0.5pt]
precinct and campaign meetings will confirm that it is only whites who hold any ideals about government \textbf{non whites want free govt cheese and jews are crazy for power}& precinct and campaign meetings will confirm that it is only \textbf{whites} who hold any ideals about government non whites want free govt cheese and \textbf{jews are crazy for power}& \textbf{precinct and campaign meetings} will confirm that it is only whites who hold any ideals about government non whites want free \textbf{govt cheese and jews are crazy for} power& precinct and campaign meetings will confirm that it is only \textbf{whites} who hold any ideals about government \textbf{non whites want free govt cheese and jews are crazy for power}\\
\midrule[0.5pt]
the labour party has number mps in prison for paedophilia number mps and councillors on the sex registry number million white kids \textbf{raped by paki muslims} all labour voters covered up by labour councils mps and a pm
& the labour party has \textbf{number} mps in prison for paedophilia \textbf{number} mps and councillors on the sex registry \textbf{number} million white kids raped \textbf{by paki muslims all} labour voters covered up by labour councils mps and a pm
& the labour party has number mps in prison for paedophilia number mps and councillors on the sex registry number million \textbf{white kids raped by paki muslims all labour voters} covered up by labour councils mps and a pm
& the labour party has number mps in prison for paedophilia number mps and councillors on the sex registry number million white \textbf{kids raped by paki muslims} all labour voters covered up by labour councils mps and a pm \\
\bottomrule[1.0pt]
\end{tabular}
}
\caption{Example rationales extracted by three methods: Bernoulli, HardKuma, and InfoCal. Note that in these cases, many phrases are offensive or hateful. Nevertheless, this cannot be avoided due to the nature of the work.}
\label{tab:hatecase}
\end{table}
\subsection{Performance of the Pretrained Language Model for the Rationale Regularizer}
\begin{table}[!ht]
\centering
\begin{tabular}{lccc}
\toprule[1.0pt]
& KenLM~\cite{heafield-2011-kenlm} & RNNLM~\cite{bengio2003neural,tomas2011rnnlm} & Our LM\\
\midrule[0.5pt]
Perplexity (Beer) & 66 & 50 &44\\
\midrule[0.5pt]
Perplexity (Legal Small) & 32 & 20 &
29 \\
Perplexity (Legal Big) & 11 & 69 &62
\\
\midrule[0.5pt]
Perplexity (HateXplain) & 413 & 146 &165 \\
\bottomrule[1.0pt]
\end{tabular}
\caption{The comparison of perplexity between language models. }
\label{tab:beer_lm}
\end{table}
In the InfoCal model, we need a pretrained language model (Sec.~\ref{sec:LM}) for the rationale regularizer. Our language model, described in Section~\ref{lmvec}, differs from previous language models because it computes probabilities for tokens' vector representations instead of tokens' symbolic IDs. Therefore, the quality of the pretrained language model is paramount for the InfoCal model. In Table~\ref{tab:beer_lm}, we compare the perplexity of our language model with two well-known language models: Kenneth Heafield's language model (KenLM)~\cite{heafield-2011-kenlm} and the recurrent neural network language model (RNNLM)~\cite{bengio2003neural,tomas2011rnnlm}. Training is conducted on the raw text of the training data for the three tasks, and the trained models are tested on the raw text of the corresponding test sets.
We can see that the perplexity of our language model is comparable to RNNLM and even better than KenLM on some datasets. This shows that the performance of our language model is acceptable for our experiments. We do not compare perplexity with Transformer-based models like GPT~\cite{radford2018improving,radford2019language,brown2020language}, because these models usually use subword vocabularies (like Byte Pair Encoding (BPE)~\cite{radford2019language} and WordPiece~\cite{schuster2012japanese,devlin-etal-2019-bert}), which makes their perplexities not comparable with ours.
Also, from the comparison of perplexity scores, we found that the perplexity on the HateXplain dataset is markedly higher than on the other two datasets, which shows that the HateXplain dataset is very noisy. The results in Table~\ref{tab:hate} prove that our InfoCal model is able to extract sensitive rationales from noisy text data.
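For completeness, the perplexity values reported above are instances of the standard definition: the exponentiated average negative log-likelihood per token. A minimal sketch (the token probabilities below are hypothetical, not taken from our models):

```python
import math

def perplexity(token_probs):
    # token_probs: model probability assigned to each ground-truth token
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# a uniform model over 4 equally likely tokens has perplexity exactly 4
assert abs(perplexity([0.25, 0.25, 0.25, 0.25]) - 4.0) < 1e-9
```

Lower perplexity thus means the model spreads less probability mass away from the observed tokens, which is why Table~\ref{tab:beer_lm} reports it per dataset.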
\section{Summary and Outlook}\label{sec:conclu}
In this work, we proposed a novel method to extract rationales for neural predictions. Our method uses an adversarial-based technique to make a selector-predictor model learn from a guider model. In addition, we proposed a novel regularizer based on language models, which makes the extracted rationales semantically fluent. In this way, the ``guider'' model tells the selector-predictor model what kind of information (token) remains unselected or over-selected.
We conducted experiments on a sentiment analysis task, a hate speech recognition task, and three tasks from the legal domain. According to the comparison between the extracted rationales and the gold rationale annotations in the sentiment analysis and hate speech recognition tasks, our InfoCal method improves the selection of rationales by a large margin. We also conducted ablation tests to evaluate the LM regularizer's contribution, which showed that our regularizer is effective in refining the rationales.
As future work, the main architecture of our model can be directly applied to other domains, e.g., images or tabular data. Image rationales can be applied in many real-world applications, such as medical image recognition~\cite{deruyver2009image} and automated driving~\cite{reece1995control}. Regularizers based on manifold learning~\cite{cayton2005algorithms} are promising candidates for image rationale extraction. Tabular rationales are very useful in tasks like automatic disease diagnosis~\cite{alkim2012fast}. When designing regularizers for tabular rationales, a sensible method is to make use of the relations between different fields of the table, since different kinds of data are closely related in medical experiment reports and many of them potentially contribute to the patient's diagnosis result.
\section{Ethical Statement}
The paper does not present a new dataset. It also does not use demographic or identity characteristics information. Furthermore, the paper does not report on experiments that involve a lot of computing time/power.
\begin{itemize}
\item \textbf{Intended use.} While the paper presents an NLP legal prediction application, our method is not yet ready to be used in practice. Our work takes a step forward in the research direction of making legal prediction systems explainable, which should uncover the systems' potential biases and modes of failures, thus ultimately rendering them more reliable.
Thus, once a high likelihood of correctness and unbiasedness of the predictions, and of the faithfulness of their explanations w.r.t.\ the inner workings of the model, can be guaranteed, legal prediction systems may help to assist judges (and not replace them) in their decisions, so that they can process more cases, and more people can receive justice than is the case nowadays. (At present, only a very small portion of cases is brought to court; especially poorer parts of the population have essentially no access to the justice system, due to its high costs.) In addition, legal prediction systems may be used as a second opinion and help to uncover mistakes or even biases of human judges.
Currently, legal prediction systems are being heavily researched in the literature without the explainability component that our paper is bringing. Hence, our approach is taking a step forward in assessing the reliability of the systems, although we do not currently guarantee the faithfulness of the provided explanations. Hence, our work is intended purely as a research advancement and not as a real-world tool.
\item \textbf{Failure modes.}
Our model may fail to provide correct and unbiased predictions and explanations that faithfully describe its decision-making process. Ensuring correct and unbiased predictions as well as faithful explanations is a very challenging open question, and our work takes an important but far from final step in this direction.
\item \textbf{Biases.}
If the training data contains biases, then a model may pick up on these biases, and hence it would not be safe to use it in practice. Our explanations may help to detect biases and potentially give insights to researchers on how to further develop models that avoid them. However, we do not currently guarantee the faithfulness of the explanations to the decision-making of the model.
\item \textbf{Misuse potential.} As our method is not currently suitable for production, the legal prediction model should not be used in real-world legal judgement prediction tasks.
\item \textbf{Collecting data from users.} We do not collect data from users, we only use an existing dataset.
\item \textbf{Potential harm to vulnerable populations.} Since our model learns from datasets, if there are under-represented groups in the datasets, then the model might not be able to learn correct predictions for these groups. However, our model provides explanations for its predictions, which may uncover the potential incorrect reasons for its predictions on under-represented groups. This could further unveil the under-representation of certain groups and incentivize the collection of more instances for such groups. However, we highlight again that our model is not yet ready to be used in practice and that it is currently a stepping stone in this important direction of research.
\end{itemize}
\section*{Acknowledgments}
This work was supported by the ESRC grant ES/S010424/1 ``Unlocking the Potential of AI for English Law'', an Early Career Leverhulme Fellowship, a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, the AXA Research Fund, and the EU TAILOR grant 952215. We also acknowledge the use of Oxford's Advanced Research Computing (ARC) facility, of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd.
\printcredits
\bibliographystyle{cas-model2-names}
1904.10852
\section{Introduction}
The study of Schubert varieties and their singularities is a field where topology, algebraic geometry, and representation theory meet. An effective strategy to study Schubert varieties is assigning characteristic classes to them. However, the most natural characteristic class, the fundamental class, does not exist beyond K theory \cite{BE}---that is, e.g.\ the elliptic fundamental class depends on choices (e.g.\ choice of resolution, or choice of basis in a Hecke algebra) \cite{LZ}. Important deformations of the fundamental class recently took center stage in cohomology and K theory, under the names of Chern-Schwartz-MacPherson class and motivic Chern class, partially due to their relation to Okounkov's stable envelope classes. In this paper we study the elliptic analogue of the CSM and MC classes, the {\em elliptic characteristic class of Schubert varieties}, which, unlike the fundamental class, does {\em not} depend on choices.
\subsection{Background, early history}
The development of the concept of elliptic genus and its underlying characteristic class (for smooth manifolds) started in the second half of the 1980s, see e.g.~\cite{Ochanine, Landweber, Hirzebruch, Witten, Krichever, Hohn,Totaro}. Already at the beginning, the $S^1$ or $\C^*$ equivariant versions were considered,
see e.g. the surveys of the subject in the introductions to \cite{BoLi0} and \cite{Li}.
According to the cited works the elliptic class of a {\em smooth} complex manifold $X$, in {cohomology} has the form
\[
\mathcal E\mkern-3.7mu\ell\mkern-2mu\ell(X)=td(X)\,ch(\mathcal{E\!L\!L}(TX))\in H^*(X;{\mathbb C})[[q,z]],
\]
where $\mathcal{E\!L\!L}(TX)\in K_{\bf T}(X)[y^{\pm 1/2}][[q]]$ is the elliptic complex defined in \cite[Lemma 2.5.2]{Hohn}, \cite[\S3]{Totaro}, \cite[Formula (3)]{BoLi0}, with $y={\rm e}^z$, $-{\rm e}^z$ or ${\rm e}^{\pm 2\pi i z}$ depending on the author, cf.\ Definition \ref{def:Ell0} below. Here $ch$ denotes the Chern character and $td$ is the Todd class. For convenience we will use cohomology with $\C$ coefficients, and will not indicate it anymore.
It is worth noting that the elliptic class above can be interpreted as the Hirzebruch class of the free loop space $\mathcal LX=$Map$(S^1,X)$ localized at the set of constant loops $X\subset \mathcal LX$. This idea comes from Witten \cite[\S5]{Witten} and it is well explained by a heuristic argument in \cite[\S7.4]{HirzebruchBook}.
The formula above for $\mathcal E\mkern-3.7mu\ell\mkern-2mu\ell$ holds in the equivariant situation, say with the torus ${\bf T}=(\C^*)^n$ action. The only change is that the Borel cohomology has to be completed with respect to the gradation, since the Chern character is given by an infinite series.
\subsection{Extension to singular varieties: Borisov-Libgober}
Applications of the elliptic genus to mirror symmetry created a necessity to define the elliptic class for singular varieties.
Borisov and Libgober in \cite{BoLi1} constructed a modification of the elliptic class which can be applied not only to singular varieties but to singular pairs consisting of a variety together with a Weil divisor. Their elliptic class $\widehat{\Ellh}(X,D)$ is defined if the pair $(X,D)$ has at worst {\it Kawamata log-terminal singularities} (KLT).
It is convenient to embed the singular pair in a smooth ambient space and consider the elliptic class as an element of the equivariant cohomology of the ambient space.
The starting point to define elliptic classes for a Schubert variety $X_\omega\subset G/B$ is Theorem~\ref{fullflagcnonical}, which collects results about the canonical divisor from \cite{Ram1, Ram2, BrKu, GrKu}.
In particular it follows that $K_{X_\omega}+\partial X_{\omega}$ is a Cartier divisor. This means that
the pair $(X_\omega,\partial X_\omega)$ is Gorenstein; however, it is generally not KLT.
\subsection{The elliptic class for Schubert varieties ${E^{\rm K}}(X_\omega,\lambda)$} \label{13}
To overcome the non-KLT property for Schubert varieties, we perturb the boundary divisor by a fractional line bundle $\LL{\lambda}$ depending on the new weight-parameter $\lambda\in{\mathfrak t}^*$. Thus we will obtain a KLT pair, to which the Borisov-Libgober construction can be applied:
\[
E^{\rm coh}(X_\omega,\lambda):=\widehat{\Ellh}(X_\omega,\partial X_\omega-\LL{\lambda})\in H^*_{\bf T}(G/B ) ^{_\wedge} ((z))[[q]],
\]
where $^\wedge$ denotes the completion with respect to the gradation. The superscript {\it coh} indicates that the class lives in the {\em cohomology} of the flag variety. It turns out that the dependence on $\lambda$ of the resulting elliptic class is meromorphic, with poles at the walls of the Weyl alcoves.
We will see that $E^{\rm coh}(X_\omega,\lambda)$ is of the form
$$E^{\rm coh}(X_\omega,\lambda)=td(G/B)\,ch({E^{\rm K}}(X_\omega,\lambda))$$
with
$${E^{\rm K}}(X_\omega,\lambda)\in \big(K_{\bf T}(G/B)({\rm e}^{z/N})\otimes {\mathbb C}({\bf T}^*)\big)[[q]],$$
where $N$ is an integer such that the weight $N\lambda$ is integral.
The class ${E^{\rm K}}(X_\omega,\lambda)$, living now in {\em K theory}, is the main protagonist of this paper.
Treating ${E^{\rm K}}(X_\omega,\lambda)$ as a function on $(\lambda,z)\in {\mathfrak t}^*\times {\mathbb C}$, by quasi-periodicity properties of the theta function
we arrive at
$${E^{\rm K}}(X_\omega)\in \big(K_{\bf T}(G/B)\otimes {\mathbb C}({\bf T}^*\times \C^*)\big)[[q]],$$
where
${\bf T}^*\simeq{\mathfrak t}^*/{\mathfrak t}^*_{\mathbb Z}$ is the dual torus, the quotient of ${\mathfrak t}^*$ by the integral weights lattice.
After choosing appropriate variables, we will obtain
fairly concrete recursions and expressions for ${E^{\rm K}}(X_\omega)$ in terms of Jacobi theta functions.
\subsection{Cohomology vs K theory vs elliptic cohomology} \label{14}
The functor $\hat H^*_{\bf T}(-)[[q]]$ can be considered as an equivariant complex-oriented cohomology whose Euler class of a line bundle is the theta function $\vt$. For our purposes this theory serves as the equivariant elliptic cohomology. The elliptic class in elliptic cohomology has a short and natural definition. For a smooth variety $X$ with a torus ${\bf T}$ action the elliptic class is defined as the equivariant Euler class of the tangent bundle. Here we consider the extended torus ${\bf T}\times{\mathbb C}^*$, where ${\mathbb C}^*$ acts trivially on $X$ and with the scalar multiplication by inverses on the fibers of the bundle. In terms of Chern roots $\xi_1,\xi_2,\dots,\xi_n$ the elliptic class is given by the formula
$$\mathcal E\mkern-3.7mu\ell\mkern-2mu\ell^{\,\rm ell}(X)=\prod_{k=1}^n\vt(\xi_k-z)\,,$$
where $z$ is a variable corresponding to the ${\mathbb C}^*$ factor.
The elliptic classes for two complex orientations in Borel cohomology are related by the formula
$$\mathcal E\mkern-3.7mu\ell\mkern-2mu\ell(X)=\frac{e^{\rm coh}(X)}{{e^{\rm ell}}(X)}\mathcal E\mkern-3.7mu\ell\mkern-2mu\ell^{\,\rm ell}(X)=\prod_{k=1}^n\xi_k\frac{\vt(\xi_k-z)}{\vt(\xi_k)},$$
where $e^{\rm coh}$ and ${e^{\rm ell}}$ are the Euler classes in cohomology and in the elliptic theory.
This is a classical approach to generalized cohomology theories, and passage from one elliptic class to another is a Riemann-Roch type transformation, see \cite[42.1.D]{FoFu}. We extend this method to define the elliptic class of singular pairs in elliptic cohomology. An advantage of this is that the classes ${E^{\rm ell}}(X_\omega)$ for Schubert varieties have better transformation properties. Moreover, it will be convenient to study the quotient (``local class'')
\[
\frac{E^{\rm coh}(X_\omega)}{e^{\rm coh}(TG/B)}=\frac{{E^{\rm ell}}(X_\omega)}{{e^{\rm ell}}(TG/B)}=ch\left(\frac{{E^{\rm K}}(X_\omega)}{{e^{\rm K}}(TG/B)}\right),
\]
instead of the three numerators.
\subsection{The Grojnowski model}
We would like to note that our version of elliptic cohomology has rational or complex coefficients, therefore the essential problems described in \cite{Landweber} do not arise. We work with an equivariant Borel-type theory.
Our results apply to the delocalized equivariant elliptic cohomology in the Grojnowski sense as well. In his sketch \cite{Grojnowski} Grojnowski suggests that the elliptic cohomology should be defined as a sheaf of algebras over a product of elliptic curves.
It would contain information about equivariant cohomology of all possible fixed points with respect to subtori.
In this approach the elliptic cohomology class would be a section of that sheaf. For flag varieties the restriction map $H^*_{\bf T}(G/B)\to H^*_{\bf T}((G/B)^{\bf T})$ is injective, and the inversion of the Euler class of the tangent bundle does not weaken the formulas.
In another approach (see \cite{GKV, Ganter}), where the elliptic cohomology is a scheme, the elliptic class would be a section of a so-called {\it Thom sheaf} over the scheme ${\bf E}(G/B)$, see \cite[\S7.2]{Ganter}.
In this approach the restriction to fixed points corresponds to passing to the disjoint union of the products of elliptic curves.
These constructions of equivariant elliptic cohomology theories are not relevant for us. Our objective is to describe the combinatorics governing the characteristic classes, which can be achieved for the notions in Sections \ref{13}, \ref{14} above.
\subsection{Recursions}
The fixed points $(G/B)^{\bf T}$ are identified with the elements of the Weyl group~$W$.
The local class ${E^{\rm K}}(X_\omega)/{e^{\rm K}}(TG/B)$ restricted to the fixed points can be considered an element of
\[
\mathcal M=
\bigoplus_{\sigma\in W} {\mathbb C}({\bf T}\times {\bf T}^*\times \C^*)[[q]],
\]
where ${\mathbb C}({\bf T}\times {\bf T}^*\times \C^*)$ stands for the field of rational functions.
We consider three actions of the Weyl group on $\mathcal M$
\begin{itemize}
\item $W$ acts on $\mathcal M$ by permuting the components. For a reflection $s\in W$ the action is
$$\left\{f_\sigma\right\}_{\sigma\in W}\ \longmapsto\ \left\{f_{\sigma s}\right\}_{\sigma\in W}.$$
This action on $\mathcal M$ will be denoted by $s^\gamma$.
\item $W$ acts on ${\bf T}$ and hence on ${\mathbb C}({\bf T})\simeq \C(z_1,z_2,\dots,z_n)$ (the $z_i$ variables will be called the ``equivariant variables''). This action will be denoted by $s^z$.
\item $W$ acts on the space of characters and on the quotient torus ${\bf T}^*$. The resulting action on ${\mathbb C}({\bf T}^*)$ will be denoted by $s^\mu$.
\end{itemize}
Our first result is a recursive formula for the elliptic class.
It takes the most elegant form when expressed
in the equivariant elliptic cohomology of $G/B$. Let $\alpha\in {\mathfrak t}^*$ be a simple root, then
\begin{formula}\label{BS-intro}\begin{multline*}
\delta\left(\LL{\alpha},h^{\alpha^\text{\tiny$\vee$}}\right)
\cdot s^\mu_\alpha {E^{\rm ell}}(X_\omega) -
\delta\left(\LL{\alpha},h\right)\cdot
s^\gamma_\alpha s^\mu_\alpha {E^{\rm ell}}(X_\omega)=\\
=\begin{cases}{E^{\rm ell}}(X_{\omega s_\alpha})& \text{\rm if } \ell(\omega s_\alpha)=\ell(\omega)+1,\\ \\
\delta(h^{\alpha^\text{\tiny$\vee$}},h)\delta(h^{-\alpha^\text{\tiny$\vee$}},h)\cdot{E^{\rm ell}}(X_\omega)
&\text{\rm if }\ell(\omega s_\alpha )=\ell(\omega)-1.\end{cases}\end{multline*}\end{formula}
\noindent Here
\begin{itemize}
\item $\alpha^\text{\tiny$\vee$}$ is the dual root, the expression $h^{\alpha^\text{\tiny$\vee$}}$ is a function on ${\mathfrak t}^*$,
\item $s_\alpha\in W$ is the reflection in $\alpha$,
\item $\LL{\alpha}=G\times_B\C_{-\alpha}$ is the line bundle associated to the root $\alpha$,
\item $\delta(x,y)=\frac{\vartheta'(1)\vartheta(x y)}{\vartheta(x)\vartheta(y)}$ is a certain function defined in terms of the multiplicative version of the Jacobi theta function.
\end{itemize}
The proof relies on the study of the Bott-Samelson inductive resolution of the Schubert variety.
The recursion above can be studied in the framework of Hecke algebras. In Section \ref{sec:Hecke} we present a version of a Hecke algebra which acts on the elliptic classes. We will also take this opportunity to explore the various degenerations of the elliptic class (and corresponding Hecke operations), such as Chern-Schwartz-MacPherson class, motivic Chern class, and cohomological and K theoretic fundamental class.
We prove that, in addition to the Bott-Samelson recursion above, elliptic classes of Schubert varieties satisfy another ``dual'' recursion in equivariant elliptic cohomology of $G/B$:
\begin{formula}\label{R-intro}\begin{multline*}
\delta\left({\rm e}^{-\alpha},h^{\omega^{-1}\alpha^\text{\tiny$\vee$}}\right)
\cdot {E^{\rm ell}}(X_\omega) -
\delta\left({\rm e}^\alpha,h\right)\cdot s_\alpha^z
{E^{\rm ell}}(X_\omega)=\\
=\begin{cases}{E^{\rm ell}}(X_{s_\alpha\omega})& \text{\rm if } \ell(s_\alpha\omega)=\ell(\omega)+1,\\ \\
\delta(h^{\omega^{-1}\alpha^\text{\tiny$\vee$}},h)\delta(h^{-\omega^{-1}\alpha^\text{\tiny$\vee$}},h)\cdot
{E^{\rm ell}}(X_{s_\alpha\omega})&\text{\rm if }\ell(s_\alpha\omega)=\ell(\omega)-1.\end{cases}
\end{multline*}\end{formula}
Remarkably, if $G=\GL_n$ then this recursion is equivalent to a three term identity in \cite{RTV} (and references therein), where this three term identity is interpreted as the R-matrix identity for an elliptic quantum group. Hence we will call this second dual recursion the {\em R-matrix recursion}.
\medskip
Let us introduce the rescaled elliptic classes
\[
\Em(X_\omega)=
\prod_{\alpha\in \Sigma_-\cap\, \omega^{-1}\Sigma_+} \delta(h^{-\alpha^\text{\tiny$\vee$}},h)^{-1}\cdot {E^{\rm ell}}(X_\omega),
\]
where $\Sigma_\pm$ is the set of positive/negative roots. In terms of this version both of our recursions (Formulas \ref{BS-intro} and \ref{R-intro}) can be expressed in more compact forms:
\begin{theorem}[Bott-Samelson recursion]
Let $\alpha$ be a simple root. Then
\[
\Em(X_{\omega s_\alpha}) =
\frac{\delta( \LL{\alpha},h^{\alpha^\text{\tiny$\vee$}})}
{\delta(h^{-\alpha^\text{\tiny$\vee$}},h)}\cdot
s_\alpha^\mu\Em(X_{\omega})
-
\frac{\delta(\LL{\alpha},h)}{\delta(h^{-\alpha^\text{\tiny$\vee$}},h)}\cdot
s_\alpha^\gamma s_\alpha^\mu\Em(X_{\omega}).
\]
If $G=\GL_n$ then in the language of weight functions
$$
\LL{\alpha_k}=\tfrac{\gamma_{k+1}}{\gamma_k},\qquad h^{\alpha_k^\text{\tiny$\vee$}}= \tfrac{\mu_{k+1}}{\mu_k}.
$$
\end{theorem}
\begin{theorem}[R-matrix recursion]
Let $\alpha$ be a simple root. Then
\[
\Em(X_{s_\alpha \omega}) =
\frac{\delta({\rm e}^{-\alpha}, h^{\omega^{-1}\alpha^\text{\tiny$\vee$}})}
{\delta(h^{-\omega^{-1}\alpha^\text{\tiny$\vee$}},h)} \cdot
\Em(X_{\omega})
+
\frac{\delta({\rm e}^\alpha,h)}
{\delta(h^{-\omega^{-1}\alpha^\text{\tiny$\vee$}},h)} \cdot s_\alpha^z\Em(X_\omega)
,\]
where ${\rm e}^\alpha\in K_{\bf T}(pt)\simeq {\rm R}({\bf T})$ is the character corresponding to the root $\alpha$. If $G=\GL_n$ then ${\rm e}^{\alpha_k}=\frac{z_k}{z_{k+1}}$.
\end{theorem}
\subsection{Relation to weight functions}
In \cite{RTV} certain special functions called {\em weight functions} are defined that satisfy the R-matrix recursion. These weight functions are the elliptic analogues of cohomological and K theoretic weight functions whose origin goes back to \cite{TV}. The three versions of weight functions play an important role in representation theory, in the theory of hypergeometric solutions of various KZ differential equations, and also turn up in Okounkov's theory of stable envelopes.
The fact that elliptic classes of Schubert varieties satisfy the R-matrix recursion allows us to prove
\begin{theorem}
The elliptic classes ${E^{\rm ell}}(X_\omega)$ for $G=\GL_n$ are represented by weight functions $\wwh_\omega$, that is,
$${E^{\rm ell}}(X_\omega)=\eta(\wwh_\omega),$$
where $\eta:K_{{\bf T}\times{\bf T}}({\rm End}(\C^n))\to K_{{\bf T}\times{\bf T}}(\GL_n)\simeq K_{\bf T}(\GL_n/B)$ is the restriction map, which is a surjection, composed with Chern character.
\end{theorem}
\subsection{By-products}
Throughout the paper we heavily use equivariant localization, that is, we work with torus fixed point restrictions of the elliptic classes scaled by an Euler class:
$$E_\sigma(X_\omega)=\frac{{E^{\rm K}}(X_\omega,\lambda)_{|\sigma}}{{e^{\rm K}}(T_\sigma(G/B))},$$
where $\sigma$ is a torus fixed point. Thus our proofs are achieved by calculations of restricted classes that live in the equivariant K theory of fixed points.
We should emphasize that many of our formulas encode deep identities among theta functions (occasionally we will remark on Fay's three term identity, and a four term identity). For us, these identities come for free; we will not need to prove them. This is the power of Borisov and Libgober's theory: their classes are well defined.
\medskip
A remarkable by-product of the fact that the $E(X_\omega)$ classes satisfy two seemingly unrelated recursions is the fact that weight functions, besides satisfying the known R-matrix recursions, also satisfy a so far unknown recursion coming from the Bott-Samelson induction. This will be presented in Section \ref{sec:tale}, and will be interpreted as the R-matrix relation for the 3d mirror dual variety in a follow-up paper (for a special case see \cite{Sm2}).
\subsection{Conventions} We work with varieties over $\C$; for example by $\GL_n$ we mean $\GL_n(\C)$. By K theory we mean the algebraic K theory of coherent sheaves, which in our case is isomorphic to the topological equivariant K theory, see the Appendix of \cite{FRW}. The Weyl group of $\GL_n$ will be identified with the group of permutations. A permutation $\omega:\{1,\ldots,n\}\to \{1,\ldots,n\}$ will be denoted by the list $\omega=\omega(1)\ldots\omega(n)$. The permutation switching $k$ and $k+1$ will be denoted by $s_k$. Permutations will be multiplied by the convention that $\omega \sigma$ is the permutation obtained by first applying $\sigma$ and then $\omega$; for example $231 \cdot 213=321$.
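The multiplication convention can be checked mechanically; a small sketch in one-line notation, with permutations represented as Python lists of the values $\omega(1),\ldots,\omega(n)$:

```python
def compose(omega, sigma):
    # one-line notation: the list omega encodes omega(i) = omega[i - 1];
    # our convention: (omega * sigma)(i) = omega(sigma(i)), i.e. apply sigma first
    return [omega[s - 1] for s in sigma]

# the example from the text: 231 . 213 = 321
assert compose([2, 3, 1], [2, 1, 3]) == [3, 2, 1]
# the simple transposition s_1 = 213 is an involution
assert compose([2, 1, 3], [2, 1, 3]) == [1, 2, 3]
```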
\medskip
For a general simply connected reductive group $G$ we fix a maximal torus ${\bf T}$ and a Borel group $B$ containing ${\bf T}$. The set of roots of $B$, according to our convention, are positive. For an integral weight $\lambda\in{\mathfrak t}^*$ the line bundle \begin{equation}\label{def:LL}\LL{\lambda}=G\times_B\C_{-\lambda}\end{equation} is such that the torus ${\bf T}$ acts via the character $-\lambda$ on the fiber at $eB\in G/B$. With this convention $\LL{\lambda}$ is ample for $\lambda$ belonging to the interior of the dominant Weyl chamber. The half sum of positive roots is denoted by $\rho$. The canonical divisor $K_{G/B}$ is isomorphic to $\LL{\rho}^{-2}$.
\medskip
\noindent{\bf Acknowledgment.} We are grateful to Jaros{\l}aw Wi{\'s}niewski for very useful conversations on Bott-Samelson resolutions, and to Shrawan Kumar for discussions on the canonical divisors of Schubert varieties and their relation with Bott-Samelson varieties.
\section{The equivariant elliptic characteristic class twisted by a line bundle}\label{sec2}
First we recall the notion of the elliptic characteristic class of a singular variety, defined by
Borisov and Libgober in \cite{BoLi1}. We study its equivariant version, and its behavior with respect to fixed point restrictions. In Section \ref{sec:twisted_gen} we define a version ``twisted by a line bundle and its section.'' This latter version will be used in the rest of the paper.
The original definition of elliptic genus for singular varieties arose from the study of mirror symmetry for a generic hypersurface in the toric variety associated to a reflexive polytope, \cite{BoLi0}. For a possibly singular variety $X$ with a Weil divisor $\Delta$ (which satisfies some assumptions, see the definition below) the elliptic genus was constructed by Borisov and Libgober in \cite{BoLi1}. Their construction goes as follows. Let $f:Z\to X$ be a log-resolution of the pair $(X,\Delta)$ and $D=f^*(K_X+\Delta)-K_Z$. The genus is computed as an integral of a certain class $\widehat{\Ellh}(Z,D)$ defined in terms of the Chern classes of $Z$ and the divisor $D$. An alchemy of the formulas makes the definition independent of the resolution. The key argument is that whenever we have a blowup in a smooth center $b:Z_1\to Z_2$ then $b_*(\widehat{\Ellh}(Z_1,D_1))=\widehat{\Ellh}(Z_2,D_2)$. By the weak factorization theorem (i.e.~since any two resolutions differ by a sequence of blow-ups and blow-downs in smooth centers) the push-forward to the point of $\widehat{\Ellh}(Z,D)$ does not depend on the resolution. In fact more is obtained: the push-forward to $X$ does not depend on the resolution. This way homology classes are defined, since $X$ is singular in general.
Below we give details of the sketched construction, introducing some simplifications and moving all the objects to K theory, from where they in fact come via the Chern character.
\subsection{Theta functions} \label{sec:theta}
For general reference on theta functions see e.g. \cite{We,Cha}. We will use the following version
\[
\vartheta(x)=\vartheta(x,q)=
x^{1/2}(1-x^{-1})\prod_{n\geq 1}(1-q^n x)(1-q^n /x)\in \Z[x^{\pm 1/2}][[q]].
\]
For a fixed $|q|<1$ the series is convergent and defines a holomorphic function on a double cover of ${\mathbb C}^*$.
Throughout the paper we will use the function
\begin{equation}\label{def:delta}
\delta(a,b)=\frac{\vartheta(ab)\vartheta'(1)}{\vartheta(a)\vartheta(b)},
\end{equation}
which is meromorphic on $\C^*\times \C^*$ with poles whenever $a$ or $b$ is a power of $q$.
As a power series in $q$ the coefficients of $\delta(a,b)$ are rational functions in $a$ and $b$
$$\delta(a,b)=\frac{1-a^{-1}b^{-1}}{(1-a^{-1})(1-b^{-1})}+q(a^{-1}b^{-1}- ab)+q^2(a^{-2}b^{-1}+a^{-1}b^{-2}- a^2b- ab^2)+\dots\,.$$
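As a sanity check on the displayed expansion, one can truncate the defining products and compare numerically (a sketch, not part of the paper; real $a,b>1$ so that the square roots are unambiguous, and $\vartheta'(1)=\prod_{n\geq 1}(1-q^n)^2$, which follows from the product formula):

```python
import math

def theta(x, q, nmax=200):
    # x^{1/2}(1 - x^{-1}) prod_{n>=1} (1 - q^n x)(1 - q^n / x), truncated at nmax
    p = math.sqrt(x) * (1.0 - 1.0 / x)
    for n in range(1, nmax + 1):
        p *= (1.0 - q**n * x) * (1.0 - q**n / x)
    return p

def theta_prime_1(q, nmax=200):
    # theta'(1) = prod_{n>=1} (1 - q^n)^2, from the product formula
    p = 1.0
    for n in range(1, nmax + 1):
        p *= (1.0 - q**n) ** 2
    return p

def delta(a, b, q):
    return theta(a * b, q) * theta_prime_1(q) / (theta(a, q) * theta(b, q))

a, b, q = 2.0, 3.0, 1e-3
series = (1 - 1/(a*b)) / ((1 - 1/a) * (1 - 1/b)) + q * (1/(a*b) - a*b)
# the truncated product agrees with the series through order q (error is O(q^2))
assert abs(delta(a, b, q) - series) < 1e-4
```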
We will also use theta functions in additive variables, namely, let
$$
\vt(u)=\vt(u,\tau)=\vartheta({\rm e}^{u},q)
$$
where $q={\rm e}^{2\pi i\tau}$, $\tau\in {\mathbb C}$, $\operatorname{Im}(\tau)>0$.
Our function $\vt$ differs from the classical Jacobi theta-function only by a factor depending on $\tau$. Namely, according to Jacobi's product formula \cite[Ch~V.6]{Cha},
\begin{equation*}\label{Jacobi-prod-for}
\vt_\text{Jacobi}(u,\tau)=2{\rm e}^{\pi i\tau/4}
\sin (\pi u)\prod_{\ell=1}^{\infty}(1-{\rm e}^{2\pi \ell i\tau})(1-{\rm e}^{2\pi\ell i(\tau+u)} )(1-{\rm e}^{2\pi\ell i(\tau-u)} ),
\end{equation*}
and hence
\[
\vt_\text{Jacobi}(u,\tau)=\frac1iq^{1/8}\prod_{\ell=1}^{\infty}(1-q^{\ell})\cdot
\vt(2\pi i u,\tau).
\]
The theta function satisfies the quasi-periodicity identities
\begin{align}\label{theta_trans}
\vt(u+2\pi i,\tau)&=-\vt(u,\tau),\\
\notag \vt(u+2\pi i\tau,\tau)&=- {\rm e}^{- u-\pi i \tau} \cdot \vt(u,\tau).
\end{align}
Note that the periods of our theta function differ by the factor $2\pi i$ compared with the Jacobi theta function. Our convention fits well with the topological context, where $\vartheta$ is composed with the Chern character. The same convention was chosen for example in \cite{Hirzebruch, Hohn, HirzebruchBook}.
We will take a closer look at transformation properties of several variable functions built out of theta functions in Section \ref{sec:transformations}.
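In multiplicative variables ($x={\rm e}^u$, $q={\rm e}^{2\pi i\tau}$) the second quasi-periodicity identity reads $\vartheta(qx)=-q^{-1/2}x^{-1}\vartheta(x)$. A quick numerical sanity check (a sketch; real $x>0$ and $0<q<1$ are chosen so that all square roots are unambiguous):

```python
import math

def theta(x, q, nmax=200):
    # multiplicative theta: x^{1/2}(1 - x^{-1}) prod_{n>=1} (1 - q^n x)(1 - q^n / x)
    p = math.sqrt(x) * (1.0 - 1.0 / x)
    for n in range(1, nmax + 1):
        p *= (1.0 - q**n * x) * (1.0 - q**n / x)
    return p

# second quasi-periodicity identity, multiplicatively: theta(q x) = -q^{-1/2} x^{-1} theta(x)
x, q = 1.7, 0.05
lhs = theta(q * x, q)
rhs = -theta(x, q) / (x * math.sqrt(q))
assert abs(lhs - rhs) < 1e-9
```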
\subsection{The elliptic class of a smooth variety}
First we define the elliptic class of a smooth variety $Z$, cf. \cite[Lemma 2.5.2]{Hohn}, \cite[\S3]{Totaro}, \cite[Formula (3)]{BoLi0}.
For a rank $n$ bundle $\TTT$ over $Z$ with Gro\-then\-dieck roots $t_i$ (that is, $\sum_{i=1}^nt_i=[\TTT]$ in K theory) define
\begin{align}
\notag \Lambda_x(\TTT) & =\sum_{k=0}^{\rank \TTT}[\Lambda^k\TTT] x^k=\prod_{i=1}^n (1+xt_i) \ \in K(Z)[x], \\
\notag S_x(\TTT) & =\sum_{k=0}^{\infty}[S^k\TTT] x^k=\prod_{i=1}^n \frac{1}{(1-xt_i)} \ \in K(Z)[[x]], \\
\label{eq:eK} {e^{\rm K}}(\TTT) & =\Lambda_{-1}(\TTT^*)=\prod_{i=1}^n (1-t_i^{-1}) \ \ \ \ \ \ \in K(Z).
\end{align}
The last one plays the role of Euler class in K theory.
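As a sanity check (not from the paper), the identification of the coefficients of $\Lambda_x$ and $S_x$ with elementary and complete homogeneous symmetric functions of the Grothendieck roots, and the product form of ${e^{\rm K}}$, can be verified numerically for a hypothetical bundle with three numeric roots:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def elem(roots, k):
    # e_k: coefficient of x^k in Lambda_x(T) = prod_i (1 + x t_i)
    return sum(prod(c) for c in combinations(roots, k))

def comp(roots, k):
    # h_k: coefficient of x^k in S_x(T) = prod_i 1 / (1 - x t_i)
    return sum(prod(c) for c in combinations_with_replacement(roots, k))

t = [2.0, 3.0, 5.0]   # hypothetical numeric Grothendieck roots
x = 0.01

# Lambda_x(T): the generating polynomial matches elementary symmetric functions exactly
assert abs(prod(1 + x * ti for ti in t) - sum(elem(t, k) * x**k for k in range(4))) < 1e-12

# S_x(T): truncating the series at high order approximates the product for |x t_i| < 1
assert abs(prod(1 / (1 - x * ti) for ti in t) - sum(comp(t, k) * x**k for k in range(20))) < 1e-12

# e^K(T) = Lambda_{-1}(T^*): alternating sum of e_k evaluated at the inverse roots
eK = prod(1 - 1 / ti for ti in t)
alt = sum((-1)**k * elem([1 / ti for ti in t], k) for k in range(4))
assert abs(eK - alt) < 1e-12
```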
\begin{definition}\label{def:Ell0}
For a smooth variety $Z$ with tangent bundle $TZ$ whose Gro\-then\-dieck roots are $t_i$, define its {\em elliptic bundle}
\begin{align*}
\mathcal{E\!L\!L}(Z) = &
y^{-\dim Z/2}\, \Lambda_{-y}\Omega^1_Z
\otimes\bigotimes_{n \ge 1}
\Bigl(\Lambda_{-yq^{n}}(\Omega^1_Z) \otimes \Lambda_{-y^{-1}q^n}
(TZ) \otimes S_{q^n}(\Omega^1_Z) \otimes
S_{q^n}(TZ) \Bigr) \\
= &
y^{-\dim Z/2}\prod_{i=1}^{\dim Z}(1-yt_i^{-1})\prod_{n\geq 1}\frac{(1-yq^{n}t_i^{-1})(1-y^{-1}q^{n}t_i)}{(1-q^nt_i^{-1})(1-q^nt_i)} \\
= &
{e^{\rm K}}(TZ) \prod_{i=1}^{\dim Z}\frac{\vartheta(t_i /y)}{\vartheta(t_i)}
\hskip 7 true cm \in K(Z)[y^{\pm1/2}][[q]].
\end{align*}
\end{definition}
As we indicated in the notation $K(Z)[y^{\pm1/2}][[q]]$, the class $\mathcal{E\!L\!L}(Z)$ is a formal series in $q$, or equivalently, in ${\rm e}^{2\pi i\tau}$.
Since the power series defining the theta function converges for $|q|<1$ the class $\mathcal{E\!L\!L}(Z)$ can be considered as a function on $(y,q)\in \C^*\times \C_{\rm im >0}$ with values in a suitable completion of K theory. We regard $q$ or $y$ as a parameter of the theory, and hence we do not indicate this dependence, unless we want to emphasize it.
\medskip
\subsection{The Borisov-Libgober class $\widehat{\ELL}$} \label{sec:shelf}
For a singular pair $(X,\Delta)$ Borisov and Libgober defined an elliptic class $\widehat{\Ellh}(X,\Delta)$, living in cohomology. In this section we introduce a K theoretic modification $\widehat{\ELL}(X,\Delta)$ of their construction. The relation to the original definition in \cite{BoLi1} is $\widehat{\Ellh}(X,\Delta)=td(TM) ch(\widehat{\ELL}(X,\Delta))$.
Consider pairs $(Z,D)$ where $Z$ is smooth and projective and $D$ is
an SNC ${\mathbb Q}$-divisor on $Z$, i.e. $D=\sum_{k=1}^\ell a_kD_k$ is a formal sum
such that the components $D_k$ are smooth
divisors on $Z$ intersecting transversely, and $a_k\in{\mathbb Q}$. We additionally require $a_k \neq 1$ for all $1\leq k\leq \ell$. Equally well we can consider divisors with complex coefficients.
\begin{definition}
Define
\begin{equation*}\label{ellgenpairsformula}
\widehat{\ELL}(Z,D)=
{e^{\rm K}}(TZ)\prod_{i=1}^{\dim Z} \frac{\vartheta(t_ih) \vartheta'(1)} {\vartheta(t_i)\vartheta (h)} \prod_{k=1}^{\ell} \frac{\vartheta(d_k h^{1-a_k}) \vartheta(h)}{\vartheta(d_k h) \vartheta(h^{1-a_k})}\in K(Z)(h^{ 1/N})[[q]]
\end{equation*}
where $d_k=[{\mathcal O}(D_k)]$ and $t_i$ are the Grothendieck roots of $TZ$. Here $N$ is an integer divisible by all the denominators of the rational numbers $a_k$.
\end{definition}
\begin{remark}[About notation]\rm Borisov and Libgober use the notation
$D=-\sum_{k=1}^\ell \alpha_kD_k$,
but we reserve the letter $\alpha$ for a root of a Lie algebra, and we want to get rid of the minus sign.
Another change is that we introduce the letter $h=y^{-1}$. This way we avoid the following inconsistency: as $q\to 0$ the elliptic genus converges to the Hirzebruch genus $\chi_{-y}$, not to $\chi_y$.
In H\"ohn's definition
$y$ appears with a different sign. Our main reason for passing from $y$ to $h$ is to agree with the notation of \cite{RTV} and with the circle of papers related to Okounkov theory.
\end{remark}
\begin{remark} \rm
Formally, rational or even complex powers of $h$ appear in the expression above. Here and throughout the paper this is understood as follows: for a formal variable $z$ satisfying $h={\rm e}^{-z}$, by $\vartheta(h^a)$ we mean
\[
\vartheta(h^a)=\vartheta(({\rm e}^{- z})^a)=\vartheta({\rm e}^{- za})=\vt(-za).
\]
To keep the exposition simple we will not work explicitly with the $z$ variable anymore; we indicate this kind of dependence on formal powers of $h$ by writing $K_{{\bf T}}(X)(h^{ 1/N})$.
\end{remark}
Observe that if the divisor $D$ is empty then
\[
\widehat{\ELL}(Z,\emptyset)=\left(\tfrac{\vartheta'(1)}{\vartheta(h)}\right)^{\dim Z}\mathcal{E\!L\!L}(Z).
\]
The two definitions above generalize without change to the torus equivariant case: the case when ${\bf T}=({\mathbb C}^*)^n$ acts on $Z$ or $(Z,D)$, and $t_i$ are the ${\bf T}$ equivariant Gro\-then\-dieck roots. One advantage of torus equivariant K theory is the tool of fixed point restrictions. Let $x$ be a ${\bf T}$ fixed point on $Z$, and consider the restriction of $\widehat{\ELL}(Z,D)$ to $x$. Here each $d_k$ can be chosen as a Gro\-then\-dieck root of the tangent bundle. By introducing artificial components of $D$ with coefficient $a_i=0$ if necessary, we may now assume that $d_i=t_i$ for all $i=1,\ldots,\dim Z$, and we obtain
\begin{align}\label{fixed} \notag
\widehat{\ELL}(Z,D)_{|x}= & {e^{\rm K}}(TZ)\prod_{i=1}^{\dim Z} \frac{\vartheta(t_i h^{1-a_i})\vartheta'(1) } {\vartheta(t_i)\vartheta(h^{1-a_i})} \\
= & {e^{\rm K}}(TZ)\prod_{i=1}^{\dim Z} \delta(t_i, h^{1-a_i}) \hskip 1 true cm \in K_{{\bf T}}(x)(h^{1/N})[[q]]=\RR({\bf T})(h^{1/N})[[q]].
\end{align}
Now we are ready to define the elliptic class for certain pairs of singular varieties. Let $X$ be a possibly singular subvariety of a smooth variety $M$, and $\Delta$ a divisor on $X$. We say that $(X,\Delta)$ is a KLT pair, if
\begin{itemize}
\item $K_X + \Delta$ is a ${\mathbb Q}$-Cartier divisor;
\item there exists a map $f:Z\to M$ which is a log-resolution $(Z,D)\to(X,\Delta)$ (i.e. $Z$ is smooth, $D$ is a normal crossing divisor on $Z$, $f$ is proper and is an isomorphism away from $D$) such that
\begin{enumerate}[(i)]
\item $D=\sum_ka_kD_k$, $a_k<1$,\footnote{In the classical definition it is required that $\Delta=-\sum \alpha_i \Delta_i$ with $\alpha_i\in(-1,0]$ (hence $a_i\in [0,1)$), but we do not need positivity of $a_i$'s.}
\item $K_Z+D = f^*(K_X + \Delta)$.
\end{enumerate}
\end{itemize}
\begin{definition}\label{def:KLTEll}
The elliptic class of a KLT pair $(X,\Delta)$ is defined by
$$\widehat{\ELL}(X,\Delta;M):=f_*(\widehat{\ELL}(Z,D)) \in K(M)(h^{1/N})[[q]].$$
\end{definition}
The key result in \cite{BoLi1} goes through without major changes to show that $\widehat{\ELL}(X,\Delta;M)$ as defined here is independent of the choice of the resolution. When allowing complex coefficients $a_k$, the proof of independence is unchanged as long as $a_k\not\in \mathbb N_{\geq 1}$. Note however that to make sense of the definition we have to ensure that some $\C$-multiple of $K_X+\Delta$ is a Cartier divisor. In the presence of a torus action $\widehat{\ELL}(X,\Delta;M)$ is defined formally the same way, see details in \cite{Wae, DBW}. We will not indicate the torus action in the notation.
\begin{remark}\rm To avoid the dependence on an embedding into an ambient space $M$ and to work directly with the singular space $X$ we would have to deal with the K theory of coherent sheaves, denoted in the literature by $G(X)$. This complication is unnecessary for our purposes.\end{remark}
\begin{remark} \rm
The elliptic class is the ``class version'' of the {\em elliptic genus} studied in detail in the literature. Namely, let $h={\rm e}^{-z}$,
and let $Z$ be Calabi-Yau. Then the elliptic genus $\eta_*(\mathcal{E\!L\!L}(Z))$ ($\eta:Z\to$ point) is a holomorphic function on ${\mathbb C}_z\times {\mathbb C}_{\rm im \tau>0}$, and it is a quasi-modular form of weight 0 and index $\frac{\dim Z}2$.
Now let us assume that $N$ times the multiplicities of $\Delta$ are integers, and that $(X,\Delta)$ is a Calabi-Yau pair (i.e. $K_X+\Delta=0$). Then the ``genus''
\[
\left(\tfrac{\vt(-z,\tau)}{\vt'(0,\tau)}\right)^{\dim X} \eta_*( \widehat{\ELL}(X,\Delta;M))
\]
has transformation properties of Jacobi forms of weight $\dim X$ and index $0$ \cite[Prop. 3.10]{BoLi1}, with respect to the subgroup of the full Jacobi group
generated by the transformations\footnote{Since our theta function is quasiperiodic with respect to the lattice $2\pi i\langle1,\tau\rangle$, we have rescaled the formulas with respect to those appearing in \cite{BoLi1}.}
$$\begin{array}{lll}(z, \tau ) \mapsto (z +2\pi i N, \tau ),&& (z, \tau ) \mapsto (z + 2N\pi i \tau, \tau ), \\ \\
(z, \tau ) \mapsto (z, \tau + 1), && (z, \tau ) \mapsto (z/(2\pi i\tau), -1/\tau ).\end{array}$$
\end{remark}
\subsection{Calculation of the elliptic class via torus fixed points} \label{sec:localization}
The class $\widehat{\ELL}(X,\Delta;M)$ is defined via the push-forward map $f_*$. From the well known localization formulas (which we call the Lefschetz-Riemann-Roch theorem) for push-forwards in torus equivariant K theory, see e.g.~\cite[Th. 5.11.7]{ChGi}, we obtain the following proposition.
\begin{proposition} \label{prop:localization_general}
Assume that in the ${\bf T}$ equivariant situation of Definition \ref{def:KLTEll} the fixed point sets $M^{{\bf T}}$ and $Z^{\bf T}$ are finite. Then for $x\in M^{\bf T}$, in the fraction field of $K_{\bf T}(x)(h^{1/N})[[q]]$, we have
\begin{equation}\label{EllResLoc}
\frac{\widehat{\ELL}(X,\Delta;M)_{|x}}{{e^{\rm K}}(T_xM)}= \sum_{x'\in f^{-1}(x)} \frac{\widehat{\ELL}(Z,D)_{|x'}}{{e^{\rm K}}(T_{x'}Z)}.
\end{equation}
\qed
\end{proposition}
This formula motivates the definition of the local elliptic class
\begin{equation}\label{eq:Elocal}
{\mathcal E}_x(X,\Delta;M)=\frac{\widehat{\ELL}(X,\Delta;M)_{|x}}{{e^{\rm K}}(T_xM)}
\end{equation}
for a fixed point $x$ on $X$. In fact the division by ${e^{\rm K}}(T_xM)$ gets rid of the dependence on $M$, so we set ${\mathcal E}_x(X,\Delta)={\mathcal E}_x(X,\Delta;M)$ for some $M$. Indeed, given two equivariant embeddings $\iota_i:X\hookrightarrow M_i$ for $i=1,2$ we obtain a third, the diagonal one $\iota:X\hookrightarrow M=M_1\times M_2$. Let $x\in X^{\bf T}$ be an isolated fixed point. Then, by the Lefschetz-Riemann-Roch theorem applied to the projection $\pi_i:M\to M_i$, we have
$$\frac{\widehat{\ELL}(\iota(X),\iota(\Delta);M)_{|x}}{{e^{\rm K}}(T_xM)}=\frac{\widehat{\ELL}(\iota_i(X),\iota_i(\Delta);M_i)_{|x}}{{e^{\rm K}}(T_xM_i)}$$
for $i=1,2$.
From Proposition \ref{prop:localization_general} we obtain
\begin{equation}\label{EllResLoc1}
{\mathcal E}_x(X,\Delta) = \sum_{x'\in f^{-1}(x)} {\mathcal E}_{x'}(Z,D) = \sum_{x'\in f^{-1}(x)}\prod_{k=1}^{\dim Z}\delta(t_k(x'),h^{1-a_k(x')}),
\end{equation}
where $t_k(x')$ and $a_k(x')$ denote the tangent characters and multiplicities of the divisors at the torus fixed point $x'$.
\begin{remark} \rm \label{rem:EllH}
Our choice in this paper is to use equivariant K theory as the home of the elliptic characteristic classes
and we have two possible ways of expressing the elliptic genus:
$$p_*^{\rm K} \widehat{\ELL}(X,\Delta;M)=\int_M td(M)ch(\widehat{\ELL}(X,\Delta;M)),$$
where $p:M\to pt$ and $p_*^{\rm K}$ is the push-forward in K theory, i.e.~$\chi(M,-)$.
We could have decided differently by setting up equivariant {\em elliptic cohomology}
to be the rational Borel equivariant cohomology, extended by the formal variable $q$ and with the Euler class given by the theta function. In that context we would define the elliptic class $\widehat{\Ellh}^{\rm ell}(X,\Delta;M)$ as the push-forward (in elliptic cohomology) of the suitable class defined for a resolution. The resulting class satisfies
\begin{equation}\label{formaldef}
\widehat{\Ellh}^{\rm ell}(X,\Delta;M)=\frac{{e^{\rm ell}}(TM)}{e^{\rm coh}(TM)}\widehat{\Ellh}(X,\Delta;M)
\end{equation}
according to the general Grothendieck-Riemann-Roch theorem. Without going into details, let us take~\eqref{formaldef} as the definition of the {\em elliptic class in equivariant elliptic cohomology}.
We have
$$p_*^{\rm ell} \widehat{\Ellh}^{\rm ell}(X,\Delta;M)=\int_M \frac{e^{\rm coh}(TM)}{{e^{\rm ell}}(TM)}\widehat{\Ellh}^{\rm ell}(X,\Delta;M)=\int_M \widehat{\Ellh}(X,\Delta;M)\,.$$
Observe that the quotient in \eqref{eq:Elocal}
$${\mathcal E}_x(X,\Delta)=\frac{\widehat{\Ellh}^{?}(X,\Delta;M)}{e^?(TM)}$$
is not only independent of $M$ but it
is also essentially independent of the cohomology theory used. The only things that have to be changed when passing from K theory to cohomology or Borel elliptic theory are the substitutions $h={\rm e}^{-z}$
and $z_i={\rm e}^{x_i}$, where $z_i$'s are the basic characters ${\bf T}\to \C^*$ and $x_i\in {\mathfrak t}^*$ the basic weights.
The use of K theory is more economical, since there the classes ${\mathcal E}_x(X,\Delta)$ are power series in $q$ with coefficients in rational functions in $z_i$ and roots of $h$, depending on the denominators of the multiplicities $a_i$:
$${\mathcal E}_x(X,\Delta)\in Frac\big(\RR({\bf T})(h^{1/N})\big)[[q]]\,.$$
\end{remark}
\begin{example} \rm
Consider the standard action of ${\bf T}=({\mathbb C}^*)^2$ on $M=X=\C^2$. Denote $\C_x=\{x=0\}$, $\C_y=\{y=0\}$, and let them represent the classes $t_1$ and $t_2$ in $K_{\bf T}(\C^2)$. Consider the divisor $\Delta=a_1\C_x+a_2\C_y$.
Taking the identity map as resolution, using \eqref{fixed} we obtain
\[
{\mathcal E}_0(X,\Delta)=
\delta(t_1,h^{1-a_1}) \delta(t_2,h^{1-a_2}).
\]
Now we calculate $\widehat{\ELL}(X,\Delta;M)$ in another way. Consider the blow-up $Z=Bl_0X$, $f:Z\to X$ with exceptional divisor $E$, and let the strict transforms of $\C_x, \C_y$ be $\tilde{\C}_x,\tilde{\C}_y$. Define the divisor $D=a_1\tilde{\C}_x +a_2\tilde{\C}_y+(a_1+a_2-1)E$. Since $f^*K_X=K_Z-E$, and $f^*(\Delta)=a_1\tilde{\C}_x + a_2\tilde{\C}_y+(a_1+a_2)E$ (from calculations in coordinates) we have $f^*(K_X+\Delta)=K_Z+D$.
Hence, by \eqref{EllResLoc} and using \eqref{fixed} at the two ${\bf T}$ fixed points in $f^{-1}(0)$ we obtain
\[
{\mathcal E}_0(X,\Delta) = \delta(t_1,h^{2-a_1-a_2}) \delta(t_2/t_1,h^{1-a_2}) + \delta(t_2,h^{2-a_1-a_2}) \delta(t_1/t_2,h^{1-a_1}).
\]
Observe that the comparison of the two formulas above
boils down to the identity
\[
\delta(t_1,h^{1-a_1}) \delta(t_2,h^{1-a_2})=
\delta(t_1,h^{2-a_1-a_2}) \delta(t_2/t_1,h^{1-a_2}) + \delta(t_2,h^{2-a_1-a_2}) \delta(t_1/t_2,h^{1-a_1}),
\]
which is a rewriting of the well known {\em Fay's trisecant identity}
\begin{multline}
\vt(a+c)\vt(a-c)\vt(b+d)\vt(b-d)=\\
\vt(a+b)\vt(a-b)\vt(c+d)\vt(c-d)+ \vt(a+d)\vt(a-d)\vt(b+c)\vt(b-c),
\label{trisecant}\end{multline}
see e.g. \cite{Fay}, \cite[Thm. 5.3]{FRV}, \cite{GL}.
\end{example}
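The identity \eqref{trisecant} can also be tested numerically with a truncated product form of the theta function; the snippet below does so for sample real arguments, assuming the convention $\vt(z)=({\rm e}^{z/2}-{\rm e}^{-z/2})\prod_{n\geq1}(1-q^n{\rm e}^{z})(1-q^n{\rm e}^{-z})$ (an overall $q$-dependent constant would cancel anyway, both sides of the identity being quartic in $\vt$):

```python
import math

Q, N = 0.1, 80  # nome and truncation order of the product

def vt(z):
    # truncated product form of the theta function in the additive variable z
    v = math.exp(z / 2) - math.exp(-z / 2)
    for n in range(1, N + 1):
        v *= (1 - Q**n * math.exp(z)) * (1 - Q**n * math.exp(-z))
    return v

a, b, c, d = 0.31, 0.47, 0.59, 0.83  # generic sample arguments

lhs = vt(a + c) * vt(a - c) * vt(b + d) * vt(b - d)
rhs = (vt(a + b) * vt(a - b) * vt(c + d) * vt(c - d)
       + vt(a + d) * vt(a - d) * vt(b + c) * vt(b - c))
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```

At $q=0$ the check degenerates to the elementary identity $(A-C)(B-D)=(A-B)(C-D)+(A-D)(B-C)$ for $A=\cosh a$, etc.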
\subsection{The elliptic class $\widehat{\ELL}$ twisted by a line bundle}\label{sec:twisted_gen}
For the main application of the present paper (elliptic classes of Schubert varieties) we need a modified version of the notion $\widehat{\ELL}(X,\Delta;M)$. The following example explains why.
\begin{example} \rm
Let $X_{\omega}\subset G/B$ be a Schubert variety, and let $f_{\underline \om}:Z_{\underline \om}\to X_{\omega}$ be a Bott-Samelson resolution (as defined in Section \ref{BoSa}).
Set
\[
\Delta=\partial X_{\omega}=\sum_{X_{\omega'}\subset X_{\omega},\;\dim(X_{\omega'})=\dim(X_{\omega})-1} [X_{\omega'}].
\]
It is a fact that $K_{X_{\omega}}+\Delta$ is a Cartier divisor and $f_{\underline \om}^*(K_{X_{\omega}}+\Delta)=K_{Z_{\underline \om}}+D,$ where $D=\partial Z_{\underline \om}$ has all coefficients equal to 1, see Theorem \ref{fullflagcnonical} below.
Hence a crucial condition in the definition of $\widehat{\ELL}$ (the requirement $a_k\neq 1$) is not satisfied for $(X_\omega,\partial X_\omega)$.
\end{example}
Let $X$ be a possibly singular subvariety of a smooth variety $M$, and $\Delta$ a divisor on $X$. Assume that
$K_X + \Delta$ is a ${\mathbb Q}$-Cartier divisor; and that there exists a map $f:Z\to M$ which is a log-resolution $(Z,D)\to(X,\Delta)$ such that $K_Z+D = f^*(K_X + \Delta)$. Observe that we have not assumed anything about the coefficients of the divisor $D$ (cf. the definition of KLT pair in Section \ref{sec:shelf}).
Let $L$ be a line bundle on $X$ and $\xi\in H^0(X;L)$ a section such that $\xi$ does not vanish on $X\setminus supp(\Delta)$. Assuming $supp(\Delta)=supp(zeros(\xi))$ we have $supp(f^*D)=supp(zeros(f^*\xi))$. Denote
\[
\Delta(L,\xi)=\Delta-zeros(\xi).
\]
If $L$ is sufficiently positive then the pair $(X,\Delta(L,\xi))$ is a KLT pair. Therefore we can define the twisted elliptic class of $(X,\Delta)$. The definition makes sense for a fractional power of $L$. Then $\widehat{\ELL}(X,\Delta(L,\xi);M) \in K(M)(h^{1/N})[[q]]$ for a sufficiently divisible $N$.
\begin{remark} \rm
In practice, see Section \ref{sec:twisted} below, the line bundle $L$ will be associated with certain integer points $\lambda$ of a vector space
$V$ consisting of Cartier divisors supported by $supp(\Delta)$. The dependence of the twisted elliptic class on $\lambda\in V$ will be meromorphic, so we can extend the definition to a meromorphic function on $V$: the twisted elliptic class of $(X,\Delta)$ will be a class defined for almost all $\lambda$, hence understood as a meromorphic function on $V$.
\end{remark}
\begin{remark}\rm
It is tempting to think that the ``right'' elliptic class notion for a Schubert variety $X_\omega$ is either $\widehat{\ELL}(X_\omega,\emptyset)$ or $\widehat{\ELL}(X_\omega,\partial X_\omega)$, and the trick of ``twisting with a line bundle'' in this section is not necessary. However, in general we cannot take $\Delta=\emptyset$, since $X$ may not have ${\mathbb Q}$ Gorenstein singularities, and then $f^*(K_X)$ does not make sense. This is the case for most of the Schubert varieties. The pair $(X_\omega,\partial X_\omega)$ is Gorenstein, i.e.~the divisor $K_{X_\omega}+\partial X_\omega$ is Cartier (see Theorem \ref{fullflagcnonical} below), but the multiplicities $a_i$ take the forbidden value 1.
\end{remark}
\section{Bott-Samelson resolution and the elliptic classes of Schubert varieties}
\label{BoSa}
In this section we define elliptic classes of Schubert varieties following the general line of arguments in Section \ref{sec2}. For this, after introducing the usual settings of Schubert calculus, we describe a resolution of Schubert varieties inductively.
\smallskip
Let $G$ be a semisimple group with Borel subgroup $B$, maximal torus ${\bf T}$, and Weyl group $W$. Simple roots will be denoted by $\alpha_1,\alpha_2,\ldots,$ and the corresponding reflections in $W$ by $s_1,s_2, \ldots$. We consider reduced words in the letters $s_k$, denoted by ${\underline \om}$. A word ${\underline \om}$ represents an element $\omega\in W$. The length $\ell(\omega)$ of $\omega$ is the length of the shortest reduced word representing it.
We will study the homogeneous space $G/B$. For $\omega\in W$ let $\tilde{\omega}\in N({\bf T})\subset G$ be a representative of $\omega\in W=N({\bf T})/{\bf T}$, and let $x_\omega=\tilde{\omega}B \in G/B$. The point $x_\omega$ is fixed under the ${\bf T}$ action. The $B$ orbit $X^\circ_\omega=Bx_\omega=B\tilde{\omega}B$ of $x_\omega$ will be called a Schubert cell, and its closure $X_\omega$ the Schubert variety. In this choice of conventions we have $\dim(X_{\omega})=\ell(\omega)$.
\subsection{The Bott-Samelson resolution of Schubert varieties}\label{sec:BSres}
Let the reduced word ${\underline \om}$ represent $\omega\in W$. The Bott-Samelson variety $Z_{{\underline \om}}$, together with a resolution map $f_{{\underline \om}}: Z_{{\underline \om}}\to X_{\omega}$ of the Schubert variety $X_{\omega}$, is constructed inductively as follows. Suppose ${\underline \om}={\underline \om}'s_k$ is a reduced word.
Let $P_k$ be the minimal parabolic subgroup containing $B$ such that $W_{P_k}=\langle s_k\rangle$. The map $\pi_k:G/B\to G/P_k$ is a ${\mathbb P}^1$ fibration. It maps the open cell $X^\circ_{\omega'}$ isomorphically onto its image.
We have $X_{\omega}=\pi_k^{-1}\pi_k(X_{\omega'})$ and $\pi_k$ restricted to $X^\circ_\omega$ is an ${\mathbb A}^1$ fibration.
The variety $Z_{\underline \om}$ fibers over $Z_{{\underline \om}'}$ with the fiber ${\mathbb P}^1$.
We have a pull-back diagram
\begin{equation}\label{BS-diagram}\xymatrixcolsep{3pc}\xymatrix{Z_{\underline \om}\ar[d]^{\pi_{{\underline \om}}}\ar[r]^{f_{\underline \om}}&X_{\omega} \ar[d]\ar@{^{(}->}[r]& G/B \ar[d]^{\pi_k}\\
Z_{{\underline \om}'}\ar@/^1pc/[u]^\iota\ar[r]^{\pi_k\circ f_{{\underline \om}'}\phantom{aa}}&\pi_k(X_{\omega'})\ar@{^{(}->}[r]& G/P_k. \\}
\end{equation}
The projection $\pi_{\underline \om}:Z_{\underline \om}\to Z_{{\underline \om}'}$
has a section $\iota$, such that
$f_{\underline \om}\circ\iota=f_{{\underline \om}'}$.
The relative tangent bundle for $\pi_k$ is denoted by $L_k$. It is associated with the ${\bf T}$ representation of weight $-\alpha_k$, see \cite{Ram1}, \cite[\S3]{OSWW}. According to our notation given in \eqref{def:LL},
$$L_k=\LL{\alpha_k}.$$
\subsection{Fixed points of the Bott-Samelson varieties}\label{sec:fixZ}
Let ${\underline \om}=s_{k_1}s_{k_2}\dots s_{k_\ell}$ be a reduced word representing $\omega\in W$ and let
$f_{\underline \om}:Z_{\underline \om}\to X_{\omega}$ be the Bott-Samelson resolution of the Schubert variety $X_{\omega}$. The ${\bf T}$ fixed points of $X_\omega$ and $Z_{\underline \om}$ are discrete, namely:
\begin{itemize}
\item The fixed points $(X_{\omega})^{\bf T}$ are $x_{\omega'}$ where $\omega'\leq \omega$ in the Bruhat order.
\item The fixed points $(Z_{\underline \om})^{\bf T}$ are indexed by subwords of ${\underline \om}$ (which are words obtained by leaving out some of the letters from ${\underline \om}$). We identify subwords with
01-sequences (where $0$'s mark the positions of the letters to be omitted). We will identify a fixed point with its subword and with its 01-sequence.
\end{itemize}
The map $f_{\underline \om}$ sends the sequence $(\epsilon_1,\epsilon_2,\dots,\epsilon_\ell)$ to $x_\sigma$ where
$\sigma=s_{k_1}^{\epsilon_1}s_{k_2}^{\epsilon_2}\dots s_{k_\ell}^{\epsilon_\ell}\in W$.
\begin{example}\rm \label{ex:table}
Let $G=\GL_3$ and ${\underline \om}=s_1s_2s_1$.
\begin{center}
\def\phantom{aa}{\phantom{aa}}
\begin{tabular}{ |c|c|r| }
\hline
sequence&subword&image\phantom{aa}\pp\\
\hline
000 & $\cancel{s_1}\cancel{s_2}\cancel{s_1}$ & $\id=123$\phantom{aa} \\
001 & $\cancel{s_1}\cancel{s_2}s_1$ & $s_1=213$\phantom{aa}\\
010 & $\cancel{s_1}s_2\cancel{s_1}$ & $s_2=132$\phantom{aa} \\
011 & $\cancel{s_1}s_2s_1$ & $s_2s_1=312$\phantom{aa} \\
100 & $s_1\cancel{s_2}\cancel{s_1}$ & $s_1=213$\phantom{aa} \\
101 & $s_1\cancel{s_2}s_1$ & $\id=123$\phantom{aa} \\
110 & $s_1s_2\cancel{s_1}$ & $s_1s_2=231$\phantom{aa} \\
111 & $s_1s_2s_1$ & $s_1s_2s_1=321$\phantom{aa} \\
\hline
\end{tabular}
\end{center}
\end{example}
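The table is a mechanical computation in $W=S_3$ and can be reproduced by the following short script (an illustration only; permutations are written in 0-indexed one-line notation, and the word is multiplied left to right):

```python
from itertools import product

def compose(p, q):
    # (pq)(i) = p(q(i)); permutations as 0-indexed one-line tuples
    return tuple(p[q[i]] for i in range(len(p)))

s = {1: (1, 0, 2), 2: (0, 2, 1)}  # s_1 = 213, s_2 = 132 in one-line notation
word = [1, 2, 1]                  # the reduced word s_1 s_2 s_1

def image(eps):
    # f maps (eps_1, ..., eps_l) to x_sigma, sigma = s_{k_1}^{eps_1} ... s_{k_l}^{eps_l}
    sigma = (0, 1, 2)
    for k, e in zip(word, eps):
        if e:
            sigma = compose(sigma, s[k])
    return ''.join(str(i + 1) for i in sigma)

table = {eps: image(eps) for eps in product((0, 1), repeat=3)}
assert table[(0, 1, 1)] == '312' and table[(1, 1, 0)] == '231'
assert table[(1, 0, 1)] == '123' and table[(1, 1, 1)] == '321'
```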
From the recursive definition of $Z_{{\underline \om}}$ above we find the recursive description of the tangent characters of $Z_{\underline \om}$ at the fixed point $x\in \{0,1\}^\ell$:
\[
\characters(Z_{\underline \om},x)=
\begin{cases}
\characters(Z_{{\underline \om}'},x')\cup \{(L_k)_{f_{{\underline \om}'}(x')}\} & \text{ if } x=(x',0) \\
\characters(Z_{{\underline \om}'},x')\cup \{(L^{-1}_k)_{f_{{\underline \om}'}(x')}\} & \text{ if } x=(x',1).
\end{cases}
\]
Note that $L_k$ or $L_k^{-1}$ restricted to a fixed point is just a line with a ${\bf T}$ action, i.e.~a character, so it can indeed be interpreted as a function on ${\bf T}$ or an element of $\RR({\bf T})$. In general the characters form a multiset, so the $\cup$ above denotes union of multisets.
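As an illustration of the recursion, the script below computes the tangent characters of $Z_{s_1s_2s_1}$ for $G=\GL_3$, writing weights additively as vectors in ${\mathbb Z}^3$. The sign convention $(L_k)_{x_\sigma}=-\sigma(\alpha_k)$ adopted in the code is our assumption (it is consistent with $L_k$ being associated with the weight $-\alpha_k$); with it, the characters at the fixed point $111$ come out as the three positive roots, as expected over $x_{w_0}$.

```python
alpha = {1: (1, -1, 0), 2: (0, 1, -1)}  # simple roots of gl_3, additively
s = {1: (1, 0, 2), 2: (0, 2, 1)}        # s_1, s_2 as 0-indexed one-line tuples
word = [1, 2, 1]

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def act(sigma, w):
    # sigma(e_i) = e_{sigma(i)}: a permutation permutes the coordinates of a weight
    out = [0] * len(w)
    for i, wi in enumerate(w):
        out[sigma[i]] = wi
    return tuple(out)

def neg(w):
    return tuple(-x for x in w)

def characters(eps):
    # the recursion: process the word letter by letter, tracking sigma' = f(x') in W
    sigma, chars = (0, 1, 2), []
    for k, e in zip(word, eps):
        L_k = neg(act(sigma, alpha[k]))  # assumed convention (L_k)_{x_sigma} = -sigma(alpha_k)
        chars.append(neg(L_k) if e else L_k)
        if e:
            sigma = compose(sigma, s[k])
    return chars

assert sorted(characters((1, 1, 1))) == [(0, 1, -1), (1, -1, 0), (1, 0, -1)]
```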
\subsection{Canonical divisors of Schubert and Bott-Samelson varieties}
The starting point for the computation of the elliptic classes of Schubert varieties is the following fact.
Let $f_{\underline \om}:Z_{\underline \om}\to X_\omega$ be the Bott-Samelson resolution associated to the word ${\underline \om}$. The subvariety $\partial Z_{\underline \om}=f_{\underline \om}^{-1}(\partial X_\omega)$ with the reduced structure is of codimension one. Its components correspond to the subwords with one letter omitted:
$$\partial Z_{\underline \om}=\bigcup_{i=1}^{\ell(\omega)}\partial_i Z_{\underline \om}.$$
We consider $\partial Z_{\underline \om}$ as a divisor, the sum of the components with all coefficients equal to one. The last component $\partial_{\ell(\omega)}Z_{\underline \om}$ is the image of the section $\iota$ in the diagram \eqref{BS-diagram}.
\begin{theorem} \label{fullflagcnonical}
The divisor $K_{X_{\omega}}+\partial X_{\omega}$ is a Cartier divisor, and
we have
\[
f_{{\underline \om}}^*(K_{X_{\omega}}+\partial X_{\omega})=K_{Z_{\underline \om}}+\partial Z_{\underline \om}.
\]
\end{theorem}
We would like to stress the importance of this theorem. It allows us to pull back the divisor $K_{X_\omega}+\partial X_\omega$ to any resolution, and to use the intersection product in order to compute characteristic classes. In addition, the pull-back to the Bott-Samelson resolution has a strikingly simple form: all coefficients of the divisor $D=f_{\underline \om}^*(K_{X_\omega}+\partial X_\omega)-K_{Z_{\underline \om}}$ are equal to one.
\begin{proof}
Let $\rho\in{\mathfrak t}^*$ be half of the sum of positive roots. Let ${\mathbb C}_{-\rho}$ be the trivial bundle with the ${\bf T}$ action of weight $-\rho$ and let $\LL{\rho}=G\times_B {\mathbb C}_{-\rho}$ be the line bundle associated with weight $\rho$. We denote ideal sheaves by $I(\ )$, and canonical sheaves by ${\boldsymbol \omega}_\bullet$. We have the following identities on equivariant sheaves:
\begin{enumerate}
\item ${\boldsymbol \omega}_{X_\omega}=I(\partial X_\omega) \otimes \LL{\rho}^{-1} \otimes \C_{-\rho}$,
\item ${\boldsymbol \omega}_{Z_{\underline \om}}=I(\partial Z_{\underline \om}) \otimes f_{{\underline \om}}^*(\LL{\rho}^{-1}) \otimes \C_{-\rho}$.
\end{enumerate}
The first one is proved in \cite[Prop. 2.2]{GrKu} (cf. the non-equivariant version \cite[Th~4.2]{Ram2}), and the second one is proved in \cite[Prop.~2.2.2]{BrKu} (cf. the non-equivariant version \cite[Prop.~2]{Ram1}).
The boundary $\tilde{D}$ of the {\em opposite} open Schubert cell (that is $\tilde{D}=\cup X^{s_i}$) intersects $X_\omega$ transversally, hence, from the sheaf identities above we obtain the divisor identities
\[
K_{X_\omega}=-\partial X_\omega - \tilde{D}\cap X_{\omega},
\qquad
K_{Z_{\underline \om}}=-\partial Z_{\underline \om} - f^{-1}(\tilde{D}\cap X_{\omega}).
\]
Hence, $K_{X_\omega}+\partial X_\omega$ is Cartier and, by rearrangement we obtain $K_{Z_{\underline \om}}+\partial {Z_{\underline \om}}=f^*( K_{X_\omega}+\partial X_\omega)$.
Note that all the Weil divisors involved are ${\bf T}$ invariant.
\end{proof}
\subsection{The $\lambda$-twisted elliptic class of Schubert varieties}
\label{sec:twisted}
Let $\lambda\in {\mathfrak t}^*$ be a strictly dominant weight (i.e.~belonging to the interior of the Weyl chamber) and let
$\LL{\lambda}=G\times_B{\mathbb C}_{-\lambda}$ be the associated (globally generated) line bundle over $G/B$. Then $H^0(G/B;\LL{\lambda})$ is the irreducible representation of $G$ with highest weight $\lambda$. There exists a unique (up to a constant) section $\xi_\lambda$ of $\LL{\lambda}$, which is invariant with respect to the nilpotent group $N^-\subset B^-$ and on which ${\bf T}$ acts via the character $\lambda$. Therefore $\xi_\lambda$ does not vanish at the points of the open Schubert cell $X^\circ_{\omega_0}$ and its zero divisor is supported on the union of codimension one Schubert varieties. The translation $\xi_{\lambda}^\omega:= \tilde \omega\tilde \omega_0^{-1}(\xi_\lambda)$ of this section by $\tilde \omega\tilde \omega_0^{-1}\in N({\bf T})$ is an eigenvector of $B^-$ of weight $\omega\lambda$. The zero divisor of ${\xi^\omega_\lambda}_{|X_{\omega}}$ is supported on $\partial X_{\omega}$. The multiplicities of this zero divisor are given by the Chevalley formula. Namely, if $\omega=\omega' s_\alpha $, $\ell(\omega)=\ell(\omega')+1$ and $\alpha$ is a positive root, then the multiplicity of $X_{\omega'}$ is equal to $\langle\lambda,\alpha^\text{\tiny$\vee$}\rangle$, where $\alpha^\text{\tiny$\vee$}$ is the dual root, \cite{Che} (or see \cite[Prop. 1.4.5]{BrionFlag} for the case of $\GL_n$).
\begin{example}\rm Let $G=\GL_n$,
\begin{itemize}
\item $\lambda=(\lambda_1\geq\lambda_2\geq\dots\geq\lambda_n)$,
\item $\alpha=(0,\dots,1,\dots,-1,\dots,0)$, $\alpha^\text{\tiny$\vee$}=e^*_i-e^*_j$ for $i<j$,
\end{itemize}
then
$\langle\lambda,\alpha^\text{\tiny$\vee$}\rangle=\lambda_i-\lambda_j$.
\end{example}
Our main object for the rest of the paper is the $\lambda$-twisted elliptic class
\begin{align*}
{E^{\rm K}}(X_{\omega},\lambda)
= \widehat{\ELL}(X_\omega, \partial X_\omega-zeros(\xi_\lambda^\omega);G/B),
\end{align*}
of the Schubert variety $X_\omega$, cf. Section \ref{sec:twisted_gen}. The definition makes sense for ``sufficiently large'' $\lambda$, i.e. we need to assume that the coefficient of each boundary component in $zeros(\xi_\lambda^\omega)$ is positive. It will be clear from the next section that it is enough to assume that $\lambda$ is strictly dominant.
\section{Recursive calculation of local elliptic classes}
After using Kempf's lemma to calculate the multiplicities of $f_{\underline \om}^*(\xi^\omega_\lambda)$ in Section \ref{sec:mults}, we will give a recursive formula for the local elliptic classes of the Bott-Samelson and Schubert varieties in Sections~\ref{sec:recursionZ},~\ref{sec:recursionX}.
\subsection{Multiplicities of the canonical section}\label{sec:mults}
As before, let ${\underline \om}$ be a reduced word representing $\omega$, and consider the corresponding Bott-Samelson resolution. We have $f_{{\underline \om}}^*(K_{X_{\omega}}+\partial X_{\omega})=K_{Z_{\underline \om}}+\partial Z_{\underline \om}$ (Theorem \ref{fullflagcnonical}), and hence
\[
{E^{\rm K}}(X_{\omega},\lambda) = f_{{\underline \om}*}\widehat{\ELL}(Z_{\underline \om},\partial Z_{\underline \om}-zeros(f_{\underline \om}^*(\xi_\lambda^\omega))).
\]
Thus, to calculate ${E^{\rm K}}(X_{\omega},\lambda)$ we need to know the multiplicities of $f_{\underline \om}^*(\xi^\omega_\lambda)$ along the components of $\partial Z_{\underline \om}$. Recall that the components of $\partial Z_{\underline \om}$ correspond to omitting a letter in the word ${\underline \om}$. Let $\partial_jZ_{\underline \om}$ denote the component corresponding to omitting the $j$'th letter of ${\underline \om}$.
For our argument in the next section we need the following corollary drawn from a sequence of papers dealing with Bott-Samelson resolutions.
\begin{proposition}\label{corinductive}Suppose $\lambda\in {\mathfrak t}^*$ (not necessarily dominant). Then
\begin{enumerate}
\item $f^*_{\underline \om}(\LL{\lambda})\simeq \pi_{\underline \om}^*({\LL{s_k\lambda}}_{|Z_{{\underline \om}'}})\otimes{\mathcal O}_{Z_{\underline \om}}(\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle \iota Z_{{\underline \om}'})$,
\item if $\lambda$ is dominant, then the multiplicity of zeros of $f_{\underline \om}^*\xi^\omega_\lambda$ along the divisor $\iota Z_{{\underline \om}'}$ is equal to $\langle\lambda, \alpha_k^\text{\tiny$\vee$}\rangle$,
\item the remaining multiplicities of $f^*_{\underline \om} \xi^\omega_\lambda$ along the components of $\partial Z_{\underline \om}$ are equal to the corresponding multiplicities $f_{{\underline \om}'}^*\xi^{\omega'}_{s_k\lambda}$.
\end{enumerate}
\end{proposition}
\begin{proof} Suppose ${\underline \om}={\underline \om}'s_k$ is a reduced word. Let $Y=X_{\omega'}\times_{G/P_k}G/B$. We have the commutative diagram
\[\xymatrixcolsep{3pc}
\xymatrix{Z_{\underline \om}\ar[d]^{\pi_{{\underline \om}}}\ar[r]\ar@/^1pc/[rr]^{f_{\underline \om}}&
Y\ar[d]^{\pi_Y}\ar[r]_{f_Y\phantom{xxx}}& G/B \ar[d]^{\pi_k}\\
Z_{{\underline \om}'}\ar[r]^{ f_{{\underline \om}'}\phantom{aa}}&X_{\omega'}\ar[r]& G/P_k, \\}
\]
together with a section $\iota':X_{\omega'}\to Y$ which is compatible with the section $\iota:Z_{{\underline \om}'}\to Z_{\underline \om}$. Recall the following lemma of Kempf (originating from Chevalley's work).
\begin{lemma}[\cite{Kem} Lemma 3]\label{kempf} Suppose $\lambda\in {\mathfrak t}^*$ (not necessarily dominant). Then
\begin{enumerate}[(i)]
\item $f^*_Y(\LL{\lambda})\simeq \pi_Y^*({\LL{s_k\lambda}}_{|X_{\omega'}})\otimes{\mathcal O}_Y(\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle \iota'X_{\omega'})$.
\item If $f_Y^*(\LL{\lambda})$ has a non-zero section, then so does $\pi_Y^*({\LL{s_k\lambda}}_{|X_{\omega'}})$. Furthermore, there exists a non-zero section, which is invariant with respect to $B^-$.
\end{enumerate}
\end{lemma}
To continue the proof, note that (1) follows directly from (i).
The bundle $f_{\underline \om}^*( \LL{\lambda})$ is isomorphic to ${\mathcal O}_{Z_{\underline \om}}(\sum_{j=1}^{\ell(\omega)} a_j \partial_jZ_{\underline \om})$.
The section in (ii) is unique and equal to $f_Y^*\xi^\omega_\lambda$.
The multiplicity corresponding to the last component $\partial_{\ell(\omega)}Z_{\underline \om}=\iota(Z_{{\underline \om}'})$ is equal to $\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle$, while the remaining multiplicities are pulled back from $Z_{{\underline \om}'}$. This proves (2) and (3).
\end{proof}
\noindent The Chevalley formula for the multiplicities in the Bott-Samelson variety now follows by an immediate induction.
\begin{corollary}\label{lastword}
Let ${\underline \om}={\underline \om}_1s_{k_j}{\underline \om}_2$ where $s_{k_j}$ is the $j$'th letter in the word ${\underline \om}$.
Write $\omega= \omega_1 \omega_2s_\alpha$, where $s_\alpha=\omega_2^{-1}s_{k_j}\omega_2$ and $\alpha$ is a positive root. Then
the multiplicity of $f_{\underline \om}^*(\xi^\omega_\lambda)$ along the boundary component $\partial_j Z_{\underline \om}$ is $\langle\lambda,\alpha^\text{\tiny$\vee$}\rangle$.\qed
\end{corollary}
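For ${\underline\om}=s_1s_2s_1$ in $\GL_3$ the corollary can be made completely explicit: the reflection $\omega_2^{-1}s_{k_j}\omega_2$ is a transposition $(a\,b)$, and the multiplicity along $\partial_jZ_{\underline\om}$ is $\langle\lambda,\alpha^\text{\tiny$\vee$}\rangle=\lambda_a-\lambda_b$. The following script (illustrative only) recovers the transpositions $(2\,3),(1\,3),(1\,2)$ for $j=1,2,3$:

```python
s = {1: (1, 0, 2), 2: (0, 2, 1)}  # s_1, s_2 as 0-indexed one-line tuples
word = [1, 2, 1]                  # reduced word for the longest element of S_3

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def boundary_transposition(j):
    # omega_2 = product of the letters after the j-th one (j is 1-indexed)
    om2 = (0, 1, 2)
    for k in word[j:]:
        om2 = compose(om2, s[k])
    refl = compose(inverse(om2), compose(s[word[j - 1]], om2))
    a, b = [i for i in range(3) if refl[i] != i]
    return (a + 1, b + 1)  # multiplicity along the j-th component is lambda_a - lambda_b

assert [boundary_transposition(j) for j in (1, 2, 3)] == [(2, 3), (1, 3), (1, 2)]
```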
\subsection{Local elliptic classes of Bott-Samelson and Schubert varieties} \label{sec:localBSS}
Recall from Section~\ref{sec:fixZ} that the ${\bf T}$ fixed points of $G/B$ are identified with elements of $W$, and the ${\bf T}$ fixed points of $Z_{\underline \om}$ are parameterized by subwords of ${\underline \om}$ or equivalently by 01-sequences. For $\sigma \in (G/B)^{{\bf T}}$, $x\in Z_{\underline \om}^{{\bf T}}$ define the local classes
\begin{align*}
E_\sigma(X_{\underline \om},\lambda)=&\frac{{E^{\rm K}}(X_{\underline \om},\lambda){|\sigma}}{{e^{\rm K}}(T_\sigma (G/B))}=\frac{ \widehat{\ELL}(X_\omega, \partial X_\omega-zeros(\xi^\omega_\lambda)){|\sigma}}{{e^{\rm K}}(T_\sigma (G/B))} \ \ \ \ \ \in Frac(\RR({\bf T})[h^{1/N}][[q]]),\\
E_{x}(Z_{\underline \om},\lambda)=&\frac{ \widehat{\ELL}(Z_{\underline \om},\partial Z_{\underline \om}-zeros(f_{\underline \om}^*(\xi^\omega_\lambda))){|x} }{{e^{\rm K}}(T_{x}Z_{\underline \om})} \ \ \ \ \ \ \ \ \ \ \in Frac(\RR({\bf T})[h^{1/N}][[q]])
\end{align*}
in the fraction field of the representation ring $\RR({\bf T})$ extended by formal roots of the parameter $h$.
If $\omega=\id\in W$ then ${\underline \om}=\emptyset$. The Bott-Samelson variety $Z_\emptyset$ is one point, a fixed point indexed by the sequence of length 0. Hence we have $f_\emptyset(Z_\emptyset)=[\id]\in G/B$ and
\begin{equation}\label{eq:base}
E_{\emptyset}(Z_\emptyset,\lambda)=1, \qquad E_{[\id]}(X_{\id},\lambda)=1.
\end{equation}
In the next two subsections we show how the geometry described in Section \ref{BoSa} implies recursions of the local classes, that together with the base step \eqref{eq:base} determine them.
\subsection{Recursion for local elliptic classes of Bott-Samelson varieties}\label{sec:recursionZ}
From the description of the fixed points of $Z_{\underline \om}$ and their tangent characters in Section \ref{sec:fixZ} we obtain the following recursion for the local classes on $Z_{\underline \om}$. Let ${\underline \om}={\underline \om}' s_k$ be a reduced word.
If $x=(x',0)\in Z^{\bf T}_{\underline \om}$ then
\[
E_x(Z_{\underline \om},\lambda)= E_{x'}(Z_{{\underline \om}'},s^\lambda_k\lambda) \cdot \delta\left((L_k)_{f_{{\underline \om}'}(x')},h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle}\right).
\]
If $x=(x',1)\in Z^{\bf T}_{\underline \om}$ then
\[
E_x(Z_{\underline \om},\lambda)=E_{x'}(Z_{{\underline \om}'},s^\lambda_k\lambda)\cdot \delta\left((L_k^{-1})_{f_{{\underline \om}'}(x')},h\right).
\]
As before, in our notation we identify a bundle restricted to a fixed point with the character of the obtained ${\bf T}$ representation on $\C$. Note that in the formula for $x=(x',1)$ the character $(L_k^{-1})_{f_{{\underline \om}'}(x')}$ is equal to $(L_k)_{f_{{\underline \om}}(x)}$.
Recall that the classes $E_x(Z_{\underline \om},\lambda)$ are defined only for strictly dominant weights $\lambda$. However, the formulas above are meromorphic functions in $\lambda$ (with poles along the hyperplanes where $z\langle \lambda, \alpha_k^\text{\tiny$\vee$}\rangle\in {\mathbb Z}$), so we define $E_x(Z_{\underline \om},\lambda)$ for all $\lambda\in {\mathfrak t}^*$ as the meromorphic continuation of its values at strictly dominant~$\lambda$.
\begin{example} \rm
For $G=\GL_3$ we use the notation $\lambda=(\lambda_1,\lambda_2,\lambda_3)$ as before, and let $\mu_k=\mu_k(\lambda)=h^{-\lambda_k}$.
\begin{itemize}
\item For ${\underline \om}=\emptyset$ we have $E_\emptyset(Z_{\underline \om},\lambda)=1$.
\item For ${\underline \om}=s_1$ we have
\begin{align*}
E_{(0)}(Z_{s_1},\lambda)=& 1_{|\mu_1 \leftrightarrow \mu_2} \cdot \delta(z_2/z_1,\mu_2/\mu_1)=\delta(z_2/z_1,\mu_2/\mu_1) \\
E_{(1)}(Z_{s_1},\lambda)=& 1_{|\mu_1 \leftrightarrow \mu_2} \cdot \delta(z_1/z_2,h)=\delta(z_1/z_2,h).
\end{align*}
\item For ${\underline \om}=s_1s_2$ we have
\begin{align*}
E_{(0,0)}(Z_{s_1s_2},\lambda)=& E_{(0)}(Z_{s_1},\lambda)_{|\mu_2 \leftrightarrow \mu_3} \cdot \delta(z_3/z_2,\mu_3/\mu_2) \\
=& \delta(z_2/z_1,\mu_3/\mu_1)\delta(z_3/z_2,\mu_3/\mu_2), \\
E_{(0,1)}(Z_{s_1s_2},\lambda)=& E_{(0)}(Z_{s_1},\lambda)_{|\mu_ 2\leftrightarrow \mu_3} \cdot \delta(z_2/z_3,h) \\
=& \delta(z_2/z_1,\mu_3/\mu_1)\delta(z_2/z_3,h), \\
E_{(1,0)}(Z_{s_1s_2},\lambda)=& E_{(1)}(Z_{s_1},\lambda)_{|\mu_2 \leftrightarrow \mu_3} \cdot \delta(z_3/z_1,\mu_3/\mu_2) \\
=& \delta(z_1/z_2,h)\delta(z_3/z_1,\mu_3/\mu_2), \\
E_{(1,1)}(Z_{s_1s_2},\lambda)=& E_{(1)}(Z_{s_1},\lambda)_{|\mu_ 2\leftrightarrow \mu_3} \cdot \delta(z_1/z_3,h)\\
=& \delta(z_1/z_2,h)\delta(z_1/z_3,h).
\end{align*}
\end{itemize}
\end{example}
\subsection{Recursion for local elliptic classes of Schubert varieties}\label{sec:recursionX}
Let ${\underline \om}$ represent $\omega\in W$. According to Proposition \ref{prop:localization_general} we have
\begin{equation}\label{eq:Esum}
E_\sigma(X_\omega,\lambda)=\sum_{x} E_{x}(Z_{\underline \om},\lambda),
\end{equation}
where the summation runs over the fixed points $x$ with $f_{\underline \om}(x)=\sigma$.
For example, for $G=\GL_3$ let ${\underline \om}=s_1s_2s_1$ and let $\sigma$ correspond to the identity permutation. Then the summation has two summands, corresponding to the 01 sequences (fixed points) $(0,0,0)$ and $(1,0,1)$. In fact, for this ${\underline \om}$ only two fixed points ($\id$ and $s_1$) are such that the summation has two terms; in the remaining cases there is only one term, see the table in Example \ref{ex:table}.
The recursion of Section \ref{sec:recursionZ} for the terms of the right hand side of \eqref{eq:Esum} implies a recursion for the $E_\sigma(X_\omega,\lambda)$ classes: the initial step is
\[
E_\sigma(X_{\id},\lambda)=\begin{cases}1&\text{if } \sigma=\id\\0&\text{if } \sigma\neq \id,\end{cases}
\]
and for $\omega=\omega' s_k$, $\ell(\omega)=\ell(\omega')+1$ we have
\begin{equation}\label{eq:Xrecursion}
E_\sigma(X_{\omega},\lambda)= E_\sigma(X_{\omega'},s_k(\lambda)) \cdot \delta(L_{k,\sigma},h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle})
+ E_{\sigma s_k}(X_{\omega'},s_k(\lambda)) \cdot \delta(L_{k,\sigma},h).
\end{equation}
\begin{example} \label{GL3recu}\rm
Let $G=\GL_3$ and assume we already calculated the fixed point restrictions of ${E^{\rm K}}(X_{312},\lambda)$. By applying the recursion above for $\omega=\omega's_2=(s_2s_1)s_2$ we obtain
\begin{align}
\notag E_{123}(X_{321},\lambda)=&
E_{123}(X_{312},\lambda)_{|\mu_2\leftrightarrow \mu_3}\delta(z_3/z_2,\mu_3/\mu_2)+
E_{213}(X_{312},\lambda)_{|\mu_2\leftrightarrow \mu_3}\delta(z_3/z_2,h)
\\ \label{4term1}
=&
\delta(z_3/z_2,\mu_2/\mu_1)\delta(z_2/z_1,\mu_3/\mu_1)\delta(z_3/z_2,\mu_3/\mu_2) \\
& \ \hskip 4 true cm +\delta(z_2/z_3,h)\delta(z_3/z_1,\mu_3/\mu_1)\delta(z_3/z_2,h). \notag
\end{align}
We may calculate the same local class using the recursion for $\omega=\omega's_1=(s_1s_2)s_1$, and we obtain
\begin{align} \label{4term2}
E_{123}(X_{321},\lambda)=&
\delta(z_2/z_1,\mu_3/\mu_2)\delta(z_3/z_2,\mu_3/\mu_1)\delta(z_2/z_1,\mu_2/\mu_1) \\
& \ \hskip 4 true cm +\delta(z_1/z_2,h)\delta(z_3/z_1,\mu_3/\mu_1)\delta(z_2/z_1,h). \notag
\end{align}
The equality of the expressions \eqref{4term1} and \eqref{4term2} is a non-trivial four-term identity for theta functions;
for more details see Section \ref{eg:FL3}.
\end{example}
\medskip
Now we are going to rephrase the recursion \eqref{eq:Xrecursion} in a different language. According to K theoretic equivariant localization theory, the map
\[
K_{\bf T}(G/B) \to \bigoplus_{\sigma \in W} K_{\bf T}(x_\sigma)=\bigoplus_{\sigma \in W} \RR({\bf T}),
\]
whose coordinates are the restriction maps to $x_\sigma$, is injective. As before (cf. Sections \ref{sec:localization}, \ref{sec:localBSS})
we divide the restriction by the K theoretic Euler class, and consider the map
\[
\res: K_{\bf T}(G/B)\to \bigoplus_{\sigma \in W} Frac(\RR({\bf T})), \qquad\qquad
\res(\beta)=\left\{ \frac{ \beta_{|x_\sigma}}{{e^{\rm K}}(T_{x_\sigma}(G/B))} \right\}_{\sigma \in W}
\]
where $Frac(\RR({\bf T}))$ is the fraction field of the representation ring $\RR({\bf T})$. Since the map is injective, we may identify an element of $K_{\bf T}(G/B)$ with its $\res$-image, i.e. with a tuple of elements of $Frac(\RR({\bf T}))$.
Let us note that $E_\sigma(X_\omega,\lambda)$ is expressed in terms of factors $\delta(z,h)$ and $\delta(z,h^{\langle\lambda, \alpha^\text{\tiny$\vee$}\rangle})$ where $z$ is a character of the maximal torus and $\alpha^\text{\tiny$\vee$}$ is a coroot. Let $E_\sigma(X_\omega)$ denote the associated function of $\lambda\in {\mathfrak t}^*$.
In the basis of simple coroots $\{\alpha^\text{\tiny$\vee$}_i\}$ the function $E_\sigma(X_\omega)$ can be expressed by the functions $\mu_i(\lambda)=h^{-\langle \lambda,\alpha^\text{\tiny$\vee$}_i\rangle}={\rm e}^{ z\langle \lambda,\alpha^\text{\tiny$\vee$}_i\rangle}$.
The variables $\mu_i$ are functions on the torus ${\bf T}^*={\mathfrak t}^*/z{\mathfrak t}^*_{\mathbb Z}$.
Therefore we can treat the restricted classes as elements of the following object
$$E_\sigma(X_\omega)\in Frac(\RR({\bf T}\times {\bf T}^*\times \C^*))[[q]]\,.$$
\begin{theorem}[Main Theorem]\label{th:mainiduction} Regarding the classes $E_\bullet(X_\omega)$ as elements of $$\mathcal M=\oplus_{\sigma \in W} Frac\big(\RR({{\bf T}\times {\bf T}^*\times \C^*})\big)[[q]]\,,$$ the following recursion holds:
\begin{equation}\label{Eini}
E_\sigma(X_{\id},\lambda)=\begin{cases} 1 & \sigma=\id \\ 0 & \sigma\not=\id; \end{cases}
\end{equation}
and for $\omega=\omega's_k$ with $\ell(\omega)=\ell(\omega')+1$ we have
\[
E_\bullet(X_{\omega},\lambda)=(\delta^{bd}_k\id+\delta^{int}_ks^\gamma_k) \left(E_\bullet(X_{\omega'},s_k\lambda)\right).
\]
Here
\begin{itemize}
\item $s_k^\gamma$ for $s_k\in W$ acts on the fixed points by right translation $\sigma\mapsto \sigma s_k$,
\item $s_k$ acts on $\lambda\in{\mathfrak t}^*$, later this action will be denoted by $s^{\mu}_k$,
\item $\delta^{bd}_k$ --- multiplication by the element $\delta(L_k,h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle})$ (the ``boundary factor''),
\item $\delta^{int}_k$ --- multiplication by the element $\delta(L_k,h)$ (the ``internal factor'').
\end{itemize}
\end{theorem}
Note that the boundary and internal factors indeed make sense: restricted to a fixed point $x_\sigma$ the line bundle $L_k$ is a ${\bf T}$ character depending on $\sigma$; that is, multiplication by one of these factors means multiplication by a diagonal matrix, not by a scalar.
\begin{proof} The statement is the rewriting of the recursion \eqref{eq:Xrecursion}. \end{proof}
\section{Hecke algebras}\label{sec:Hecke}
In this section we review various Hecke-type actions on cohomology or K-theory of $G/B$ giving rise to inductive formulas for various invariants of the Schubert cells.
Sections \ref{sec:HeckeNil}--\ref{sec:HeckeMC}---exploring the relation between our elliptic classes and other characteristic classes of singular varieties---are not necessary for the rest of the paper. A reader not familiar with Chern-Schwartz-MacPherson or motivic Chern classes can jump to Section \ref{sec:HeckeEll}.
\subsection{Fundamental classes---the nil-Hecke algebra} \label{sec:HeckeNil}
Consider the notion of {\em equivariant fundamental class in cohomology}, denoted by $[\ ]$. According to \cite{BGG, Dem} if $\omega=\omega's_k$, $\ell(\omega)=\ell(\omega')+1$ then the Demazure operation
$D_k=\pi^*_k\circ {\pi_k}_*$ in cohomology satisfies
$$D_k([X_{\omega'}])=[X_{\omega}]\,, \qquad D_k\circ D_k=0.$$
The algebra generated by the operations $D_k$ is called the nil-Hecke algebra.
As before, let us identify elements of $H^*_{\bf T}(G/B)$ with their $\res$-image, where
\[
\res:H^*_{\bf T}(G/B)\to \bigoplus_{\sigma\in W} {\mathbb Q}({\mathfrak t}), \qquad \qquad
\beta \mapsto \left\{\frac{\beta_{|x_\sigma}}{e^{\rm coh}(T_{x_\sigma}(G/B))}\right\}_{\sigma\in W}.
\]
Here ${\mathbb Q}({\mathfrak t})$ is the field of rational functions on ${\mathfrak t}$, and $e^{\rm coh}(\ )$ is the equivariant {\em cohomological} Euler class.
The action of the Demazure operations on the right hand side is given by the formula
\[
D_k=\tfrac 1{c_1(L_k)}(\id+s^\gamma_k),\qquad \text{that is} \qquad
D_k(\{f_\bullet\})_\sigma=
\tfrac 1{c_1(L_k)_{\sigma}}(f_\sigma+f_{\sigma s_k}).\]
The operators $D_k$ satisfy the braid relations and $D_k\circ D_k=0$.
For $G=\GL_n$ we have $c_1(L_k)_{\sigma}=z_{\sigma(k+1)}-z_{\sigma(k)}$
(where $z_1,z_2,\dots,z_n$ are the basic weights of ${\bf T}\leq \GL_n$) and we recover the divided difference operators from algebraic combinatorics.
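In this local presentation the relation $D_k\circ D_k=0$ can be checked directly: the weight of $L_k$ at $\sigma s_k$ is the negative of its weight at $\sigma$ (for $\GL_n$: $c_1(L_k)_{\sigma s_k}=z_{\sigma(k)}-z_{\sigma(k+1)}=-c_1(L_k)_{\sigma}$), hence
\[
\left(D_k^2(\{f_\bullet\})\right)_\sigma=\tfrac 1{c_1(L_k)_{\sigma}}\left(\tfrac {f_\sigma+f_{\sigma s_k}}{c_1(L_k)_{\sigma}}+\tfrac {f_{\sigma s_k}+f_\sigma}{c_1(L_k)_{\sigma s_k}}\right)=0.
\]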
\subsection{CSM-classes and the group ring ${\mathbb Z}[W]$} An important one-parameter deformation of the notion of cohomological (equivariant) fundamental class is the equivariant Chern-Schwartz-MacPherson (CSM) class, denoted $c^{sm}(-)$. For an introduction to this cohomological characteristic class see, e.g. \cite{Oh, WeCSM, AlMi, FR, AMSS}.
It is shown in \cite{AlMi, AMSS} that the CSM classes of Schubert cells satisfy the recursion: if $\omega=\omega's_k$, $\ell(\omega)=\ell(\omega')+1$ then
$$A_k(c^{sm}(X^\circ_{\omega'}))=c^{sm}(X^\circ_{\omega})$$
where
$$A_k=(1+c_1(L_k))D_k-\id\,.$$
In terms of $\res$-images
$$A_k(\{f_\bullet\})_\sigma=
\tfrac 1{c_1(L_k)_{\sigma}} f_\sigma+ \tfrac {1+c_1(L_k)_{\sigma}}{c_1(L_k)_{\sigma}} f_{\sigma s_k}.$$
By \cite{AlMi} or by straightforward calculation we find that $A_k\circ A_k=\id$ and the operators $A_k$ satisfy the braid relations.
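For the reader's convenience we sketch the calculation of the square: writing $c=c_1(L_k)_{\sigma}$ and using $c_1(L_k)_{\sigma s_k}=-c$, the terms involving $f_{\sigma s_k}$ cancel and
\[
\left(A_k^2(\{f_\bullet\})\right)_\sigma=
\tfrac 1{c}\left(\tfrac{f_\sigma}{c}+\tfrac{(1+c)f_{\sigma s_k}}{c}\right)
+\tfrac{1+c}{c}\left(\tfrac{f_{\sigma s_k}}{-c}+\tfrac{(1-c)f_{\sigma}}{-c}\right)
=\tfrac{1-(1+c)(1-c)}{c^2}\,f_\sigma=f_\sigma.
\]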
\subsection{Motivic Chern classes---the Hecke algebra}\label{sec:HeckeMC}
The K theoretic counterpart of the notion of CSM class is the {\em motivic Chern class} (in notation $mC_y(-)$), see \cite{BSY, FRW, AMSS2}.
The operators
\[
B_k=(1+y L_k^{-1})\pi_k^*{\pi_k}_*-\id\;\in\;End(K_{\bf T}(G/B)[y])
\]
(see Section \ref{sec:BSres}) reproduce the motivic Chern classes $mC_y$ of the Schubert cells: if $\omega=\omega's_k$, $\ell(\omega)=\ell(\omega')+1$ then
$B_k(mC_y(X^\circ_{\omega'}))=mC_y(X^\circ_{\omega})$, see \cite{AMSS2}, cf.~\cite{SZZ}.
In the local presentation, i.e. after restriction to the fixed points and division by the K-theoretic Euler class,
the operator $B_k$ takes form
$$B_k(\{f_\bullet\})_{\sigma}=
\frac{(1+y)(L_k^{-1})_{\sigma}}{1-(L_k^{-1})_{\sigma}} f_{\sigma}
+ \frac{1+y(L_k^{-1})_{\sigma}}{1-(L_k^{-1})_{\sigma}} f_{\sigma s_k}.$$
For example, for $G=\GL_n$ we have $(L_k)^{-1}_\sigma={z_{\sigma(k)}}/{z_{\sigma(k+1)}}$.
The squares of the operators satisfy
$$B_k\circ B_k=-(y+1) B_k-y\,\id,$$
and the operators $B_k$ satisfy the braid relations. This kind of algebra was discovered much earlier by Lusztig \cite{Lusztig}.
\subsection{Elliptic Hecke algebra} \label{sec:HeckeEll}
Consider the operator
$$C_k=(\delta^{bd}_k\id+\delta^{int}_ks^\gamma_k)s^{\mu}_k$$
acting on the direct sum of the spaces of rational functions extended by the formal parameter $q$ $$\oplus_{\sigma \in W} Frac\big(\RR({\bf T}\times {\bf T}^*\times\C^*)\big)[[q]]\,,$$ or in coordinates
$$C_k(\{f_\bullet\})_\sigma(\lambda)=(\delta_k^{bd})_{\sigma}\, f_\sigma(s_k\lambda) +(\delta_k^{int})_{\sigma}\,f_{\sigma s_k}(s_k\lambda)\,.$$
In Section \ref{sec:recursionX} we have shown that if $\omega=\omega's_k$, $\ell(\omega)=\ell(\omega')+1$ then
$$E_\bullet(X_{\omega},\lambda)=C_k \left(E_\bullet(X_{\omega'},\lambda)\right).$$
\begin{theorem}\label{th:kappa}
The square of the operator $C_k$ is multiplication by a function depending only on $\lambda$ and $h$:
$$C_k\circ C_k=\kappa_k(\lambda)\,\id\,,$$
where
$$\kappa_k(\lambda)=\delta(h,\nu_k(\lambda))\delta(h,1/\nu_k(\lambda))\,,$$
and $\nu_k(\lambda)=h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle}$.
\end{theorem}
\begin{proof}
It is enough to check the identity for $G=\GL_2$, $\sigma=\id$:
\begin{multline*}C_1\circ C_1(\{f_\bullet\})_{\id}=\\
\left(\delta\left(\frac{z_1}{z_2},h\right) \delta\left(\frac{z_2}{z_1},h\right)+\delta\left(\frac{z_2}{z_1},\frac{1}{\nu _1}\right) \delta\left(\frac{z_2}{z_1},\nu_1\right)\right)f_{\id}+
\delta\left(\frac{z_2}{z_1},h\right) \left(\delta\left(\frac{z_1}{z_2},\frac{1}{\nu _1}\right)+\delta\left(\frac{z_2}{z_1},\nu_1\right)\right)f_{s_1} \end{multline*}
Since the function $\vartheta$ is antisymmetric ($\vartheta(1/x)=-\vartheta(x)$ and hence $\delta(x,y)=-\delta(1/x,1/y)$) we have
$$C_1\circ C_1(\{f_\bullet\})_{\id}=
\left(\delta\left(\frac{z_1}{z_2},h\right) \delta\left(\frac{z_2}{z_1},h\right)+\delta\left(\frac{z_2}{z_1},\frac{1}{\nu _1}\right) \delta\left(\frac{z_2}{z_1},\nu_1\right)\right)f_{\id}$$
$$= -\frac{\left(\vartheta(\nu_1)^2\, \vartheta\left(h\tfrac{z_1}{z_2}\right) \vartheta\left(h\tfrac{z_2}{z_1}\right)+\vartheta(h)^2\, \vartheta\left(\tfrac{z_2}{\nu_1 z_1}\right) \vartheta\left(\tfrac{\nu_1 z_2}{z_1}\right)\right)\vartheta'(1)^2}{\vartheta(h)^2\, \vartheta(\nu_1)^2\, \vartheta\left(\tfrac{z_2}{z_1}\right)^{\!2}}\,f_{\id}$$
Setting
$$a= h,\quad
b= \frac{z_2}{z_1},\quad
c= \nu_1,\quad
d= 1$$
in Fay's trisecant identity \eqref{trisecant} we obtain the claim.
\end{proof}
\medskip
\noindent{\it Proof of Formula \ref{BS-intro}} (from the Introduction), relating the global, nonrestricted classes: we conjugate the operation $C_k$ with multiplication by the elliptic Euler class
\begin{equation}\label{eq:ellipticEuler}{e^{\rm ell}}(TG/B)=\prod_{\alpha\,\in\,\text{positive roots}}\vartheta(\LL{\alpha})\end{equation}
(according to our convention $\LL{\alpha}=G\times_B{\mathbb C}_{-\alpha}$). Since $$\vartheta(\LL{\alpha}|\sigma)/\vartheta(\LL{\alpha}|\sigma s_k)=-1$$ we obtain the minus sign in \eqref{BS-intro}.
\medskip
If $s_ks_\ell=s_\ell s_k$, then $C_k\circ C_\ell=C_\ell\circ C_k$.
Moreover for $\GL_n$
$$C_k\circ C_{k+1}\circ C_k=C_{k+1}\circ C_k\circ C_{k+1}.$$
Hence for $G=\GL_n$ the operators $C_k$ define a representation of the braid group.
For general $G$ the corresponding braid relations of the Coxeter group are satisfied. This is an immediate consequence of the fact that the elliptic class does not depend on the resolution.
\begin{remark}\rm
Note that the operations $C_\ell$ do not commute with
$\kappa_k(\lambda)$
but they satisfy
$\kappa_k(\lambda)\circ C_\ell=C_\ell\circ\kappa_k(s_\ell\lambda)$.
\end{remark}
The following table summarizes the various forms of the Hecke algebras whose operators produce more and more general characteristic classes of Schubert varieties.
\begin{center}{\def\arraystretch{2}
\begin{tabular}{ |c|c|c| }
\hline
invariant&operation&square\\
\hline
$[-]$ & $\tfrac 1{c_1(L_k)}(\id+ s_k^\gamma)$ & $D_k^2=0$ \\
$c^{sm}$ & $\tfrac 1{c_1(L_k)} \id+ \tfrac {1+c_1(L_k)}{c_1(L_k)} s_k^\gamma$ & $A_k^2=\id$ \\
$mC_y$ & $\frac{(1+y)L_k^{-1}}{1-L_k^{-1}} \id
+ \frac{1+y\,L_k^{-1}}{1-L_k^{-1}} s^\gamma_k$ & $B_k^2=-(y+1)B_k-y\,\id$ \\
& & or $(B_k+y)(B_k+1)=0$ \\
$E(-,\lambda)$ & $\; \; \delta(L_k,h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle})
\, s^\mu_k +\delta(L_k,h)s^\gamma_ks^\mu_k\; \;
$ & $C_k^2=\kappa_k(\lambda)$ \\
\hline
\end{tabular}}
\end{center}
\bigskip
\subsection{Modifying the degeneration of $E(-,\lambda)$ and $C_k$}
The characteristic classes $[-]$, $c^{sm}$, $mC_y$, $E(-,\lambda)$ are of increasing generality: an earlier one in the list can be obtained from a later one by formal manipulations.
However, the limiting procedure of getting $mC_y$ from $E(-,\lambda)$ itself is not obvious. We describe this procedure below. As a result we obtain a family of $mC_y$-like classes, only one of which is the $mC_y$-class, as well as a family of corresponding Hecke-type algebras.
The theta function has the limit property
$$\lim_{q\to 0}\vartheta(x) = x^{1/2}-x^{-1/2}.$$
It follows that
$$\lim_{q\to 0}\delta(x,h)=\frac{1-1/(x h)}{(1-1/x)(1-1/h)}\,.$$
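Indeed, writing $\delta(x,h)=\vartheta'(1)\vartheta(xh)/(\vartheta(x)\vartheta(h))$ and noting that in the limit $\vartheta'(1)\to 1$, we obtain
\[
\lim_{q\to 0}\delta(x,h)=\frac{(xh)^{1/2}-(xh)^{-1/2}}{\left(x^{1/2}-x^{-1/2}\right)\left(h^{1/2}-h^{-1/2}\right)}
=\frac{1-1/(xh)}{(1-1/x)(1-1/h)}\,,
\]
where the last equality follows by factoring $(xh)^{1/2}$ out of both the numerator and the denominator.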
The motivic Chern class is the limit of the elliptic class when $q={\rm e}^{2\pi i \tau}\to0$.
The limit of the elliptic class of a pair is not what we would expect: the limit of the boundary factor is equal to
$$\lim_{q\to 0}\delta(L_k,\nu_k(\lambda))=\frac{1-L_k^{-1}/\nu_k(\lambda)}{(1-L_k^{-1})(1-1/\nu_k(\lambda))}$$
where
$\nu_k(\lambda)=h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle}$.
The limit Hecke algebra differs from the Hecke algebra computing $mC_y$'s of Schubert cells. The limit classes depend on the parameter $\lambda\in{\mathfrak t}^*$.
Calculation shows that in the limit when $q\to 0$ we obtain the operators
$$C^{q\to 0}_k=\left(\frac{1-L_k^{-1}/\nu_k(\lambda)}{(1-L_k^{-1})(1-1/\nu_k(\lambda))}\, \id+\frac{1 -L_k^{-1}/h}{(1-L_k^{-1})(1-1/h)}s_k^\gamma\right)\circ s^\mu_k$$
satisfying
$$C^{q\to 0}_k\circ C^{q\to 0}_k=\frac{1 -\nu_k(\lambda)/h}{(1-\nu_k(\lambda))(1-1/h)}\cdot \frac{1 -1/(\nu_k(\lambda)h)}{(1-1/\nu_k(\lambda))(1-1/h)} id.$$
\bigskip
Another method of passing to the limit, as in \cite{BoLi2}, is to first rescale $\lambda$ by $\log(q)/\log(h)$ (so that wherever the formulas had $h^{\langle \lambda,\alpha^\text{\tiny$\vee$}_k\rangle}$ they now have $q^{\langle \lambda,\alpha^\text{\tiny$\vee$}_k\rangle}$) and then pass to the limit.
For $0 < {\rm Re}(z) < 1$ and $m \in {\mathbb Z}$ we have
$$\lim_{q\to 0}\frac{\vartheta(a q^{ m+z} )}{\vartheta(b q^{ m+z} )}
= (a/b)^{-m-1/2}\,.$$
Passing to the limit we obtain different factors:
$$
\lim_{q\to 0}\delta (a, q^{\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle})=\lim_{q\to 0}\frac{\vartheta'(1)\vartheta(a q^{\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle})}{\vartheta(a)\vartheta(q^{\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle})}=\frac{a^{-b_k(\lambda)-1/2}}{a^{1/2}-a^{-1/2}}=\frac{a^{-b_k(\lambda)-1}}{1-a^{-1}}\,,
$$
where $b_k(\lambda)$ is the integer part of ${\rm Re}\,\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle$, provided that $\langle \lambda,\alpha_k^\text{\tiny$\vee$}\rangle\not\in \Z$.
The limit operation now takes the form
$$\widetilde C^{q\to 0}_k=\left( \frac{L_k^{-b_k(\lambda)-1}}{1-L_k^{-1}} \, \id+\frac{1 -L_k^{-1}/h}{(1-L_k^{-1})(1-1/h)}s^\gamma_k\right)\circ s^\mu_k\,.$$
Setting $y=-h^{-1}$ we obtain a form resembling the operation $B_k$:
$$\frac{1}{1+y}\left( \frac{(1+y)L_k^{-b_k(\lambda)-1}}{1-L_k^{-1}} \, \id+\frac{1 +y\,L_k^{-1}}{(1-L_k^{-1})}s_k^\gamma\right)\circ s^\mu_k\,.$$
For weights $\lambda$ in the dominant Weyl chamber which are sufficiently close to $0$ we obtain the operation $B_k$. Note, however, that the limit operation is still composed with the action of $s_k$ on $\lambda\in {\mathfrak t}^*$. In general we obtain a version of the ``motivic stringy invariant'' mentioned in \cite[\S11.2]{SchYo}.
\begin{remark}\rm
The operators $\widetilde C^{q\to 0}_k$ map the so-called {\em trigonometric weight functions} of \cite[Section 3.2]{RTV} into each other. These functions also depend on an extra {\em slope} or {\em alcove} parameter, which is a region in ${\mathfrak t}^*$ on which the functions $b_k$ are constant.
The resulting multiplier for $\widetilde C^{q \to 0}_k\circ \widetilde C^{q\to 0}_k$ equals
$$\lim_{q\to 0} \delta(h,q^{\langle\lambda,\alpha^\text{\tiny$\vee$}_k\rangle})\delta(h,q^{-\langle\lambda,\alpha^\text{\tiny$\vee$}_k\rangle})=
\frac{h^{-b_k(\lambda)-1}}{(1-h^{-1})}\cdot \frac{h^{-b_k(-\lambda)-1}}{(1-h^{-1})}=
\frac{h^{-1}}{(1-h^{-1})^2}=
-\frac{ y}{(1+y)^2}\,$$
(since $b_k(\lambda)+b_k(-\lambda)=-1$), which, remarkably, does not depend on $\lambda$.
\end{remark}
\section{Weight functions}\label{sec:EllWeight}
In this section we focus on type A Schubert calculus, and give a formula for the elliptic class of a Schubert variety in terms of natural generators in the K theory of $\F(n)=G/B$. This formula will coincide with the {\em weight function} defined in \cite{RTV} (based on earlier weight function definitions of Tarasov-Varchenko \cite{TV}, Felder-Rim\'anyi-Varchenko \cite{FRV18}, see also \cite{konno}). Weight functions play an important role in representation theory, quantum groups, KZ differential equations, and recently (in some situations) they were identified with stable envelopes in Okounkov's theory.
\smallskip
For a non-negative integer $n$ let $\F(n)$ be the full flag variety parametrizing chains of subspaces $0=V_0\subset V_1\subset \ldots \subset V_n=\C^n$ with $\dim V_k=k$. We will consider the natural action of ${\bf T}=(\C^*)^n$ on $\F(n)$. The ${\bf T}$-equivariant tautological rank $k$ bundle (i.e. the one whose fiber is $V_k$) will be denoted by $\TTT^{(k)}$, and let ${\mathcal G}_k$ be the line bundle $\TTT^{(k)}/\TTT^{(k-1)}$.
Let $\gamma_k$ be the class of ${\mathcal G}_k$ in $K_{{\bf T}}(\F(n))$ and let $t^{(k)}_1,\ldots,t^{(k)}_k$ be the Gro\-then\-dieck roots of $\TTT^{(k)}$ (i.e. $[\TTT^{(k)}]=t^{(k)}_1+\ldots+t^{(k)}_k$) for $k=1,\ldots,n$.
Let us rename $t^{(n)}_j=z_j$.
It is well known that the ${\bf T}$-equivariant K ring of $\F(n)$ can be presented as
\begin{align}
\label{KT1}
K_{{\bf T}}(\F(n))&=
\Z[(t^{(k)}_a)^{\pm 1},z_j^{\pm 1}]_{k=1,\ldots,n-1,\ a=1,\ldots,k,\ j=1,\ldots,n}^{{S_1}\times \ldots \times S_{n-1}}/(\text{relations})\\
\label{KT2}
&=
\Z[(\gamma_{k})^{\pm 1},z_j^{\pm1}]_{k=1,\ldots,n,\ j=1,\ldots,n}/(\text{relations}),
\end{align}
where the ideals of relations are described below.
The first presentation results from realizing the flag variety as the quiver variety quotient $$\F(n)=V/\!/G\,,\qquad V=\prod_{k=1}^{n-1}{\rm Hom}(\C^k,\C^{k+1})\,,\qquad G=\prod_{k=1}^{n-1}\GL_k,$$ see \cite[\S6]{FRW2}. Then
$$K_{G\times {\bf T}}(V)\;\twoheadrightarrow\; K_{G\times {\bf T}}(U)\simeq K_{\bf T}(\F(n))\,,$$
where $U$ is the open subset in $V$ consisting of the tuples of monomorphisms.
The variables $t^{(k)}_a$ are just the characters of the factor $\GL_k$.
The second presentation comes from a geometric picture as well:
$\F(n)=\GL_n/B_n$ is homotopy equivalent to $\GL_n/{\bf T}$ and
$$K_{{\bf T}\times {\bf T}}({\rm Hom}(\C^n,\C^n))\;\twoheadrightarrow\; K_{{\bf T}\times {\bf T}}(\GL_n)\simeq K_{\bf T}(\GL_n/{\bf T}).$$
The variables $\gamma_k$ appearing in the presentation \eqref{KT2} are the characters of the second copy of ${\bf T}$ acting from the right on ${\rm Hom}(\C^n,\C^n)$.
Explicit generators of the ideals of relations could be written down in both presentations \eqref{KT1}, \eqref{KT2}, but it is more useful to describe the ideals via ``equivariant localization'' (a.k.a. ``GKM description'', or ``moment map description''), as follows.
The ${\bf T}$ fixed points $x_\sigma$ of $\F(n)$ are parameterized by permutations $\sigma\in S_n$.
The restriction map from $K_{{\bf T}}(\F(n))$ to $K_{{\bf T}}(x_\sigma)=\Z[z_j^{\pm 1}]_{j=1,\ldots,n}$ is given by the substitutions
\begin{equation}\label{substitute}
t^{(k)}_a\mapsto z_{\sigma(a)}, \qquad\qquad\qquad
\gamma_{k}\mapsto z_{\sigma(k)}.
\end{equation}
Symmetric functions in $t^{(k)}_a, z_j$, and functions in $\gamma_{k},z_j$ belong to the respective ideals of relations if and only if their substitutions \eqref{substitute} vanish for all $\sigma\in S_n$ \cite[Appendix]{KR}, \cite[Ch. 5-6]{ChGi}.
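As a simple illustration, take $n=2$: the element $(\gamma_1-z_1)(\gamma_1-z_2)=\gamma_1^2-(z_1+z_2)\gamma_1+z_1z_2$ vanishes under both substitutions $\gamma_1\mapsto z_{\sigma(1)}$, $\sigma\in S_2$, hence it lies in the ideal of relations of the presentation \eqref{KT2}, i.e. it is zero in $K_{\bf T}(\F(2))$.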
\medskip
Our main objects of study, the classes ${E^{\rm K}}(X_\omega)$, live in the completion of $K_{{\bf T}}(\F(n))$ extended by the variables $h,\mu_k$, that is, in the ring
\begin{align}
\label{ring1}
&\Z[[(t^{(k)}_a)^{\pm 1},z_j^{\pm 1},h,\mu_j^{\pm 1}]]_{k=1,\ldots,n-1,\ a=1,\ldots,k,\ j=1,\ldots,n}^{{S_1}\times \ldots \times S_{n-1}}/(\text{relations})\\
\label{ring2}
= &\Z[[(\gamma_{k})^{\pm 1},z_k^{\pm1},h,\mu_k^{\pm 1}]]_{k=1,\ldots,n}/(\text{relations}),
\end{align}
where the same localization description holds for the two ideals of relations.
Our goal in this section is to define representatives---that is, functions in $t^{(k)}_a,z_j,h,\mu_j$ and functions in $\gamma_{k}, z_j, h,\mu_j$---that represent the elliptic classes ${E^{\rm K}}(X_\omega)$ of Schubert varieties. This goal will be achieved in Theorem~\ref{thm:EsameW} below.
\subsection{Elliptic weight functions}
Now we recall some special functions called elliptic weight functions, from \cite{RTV}.
For $\omega\in S_n$, $k=1,\ldots,n-1$, $a=1,\ldots,k$ define the integers
\begin{itemize}
\item $\omega^{(k)}_a$ by $\{\omega(1),\ldots,\omega(k)\}=\{\omega^{(k)}_1<\ldots<\omega^{(k)}_k\}$,
\item $j_\omega(k,a)$ by $\omega^{(k)}_a=\omega(j_\omega(k,a))$,
\item
\[
c_\omega(k,a)=\begin{cases}\hfill 0&\text{if }\omega(k+1)\geq \omega^{(k)}_a\\
1 &\text{if } \omega(k+1)< \omega^{(k)}_a.\end{cases}
\]
\end{itemize}
\begin{definition} \cite{RTV}
For $\omega\in S_n$ define the elliptic weight function by
\[
\ww_\omega= \left(\tfrac{\vartheta(h)}{\vartheta'(1)}\right)^{\dim G}\frac{{\rm Sym}_{t^{(1)}}\ldots{\rm Sym}_{t^{(n-1)}} U_\omega}{\prod_{k=1}^{n-1} \prod_{i,j=1}^{k} \vartheta(ht^{(k)}_i/t^{(k)}_j)},
\]
where
\[
G=\prod_{k=1}^{n-1}\GL_k\,,\qquad \dim G=\tfrac{(n-1)n(2n-1)}6,
\]
\[
{\rm Sym}_{t^{(k)}} f(t^{(k)}_1,\ldots,t^{(k)}_{k})=\sum_{\sigma\in S_{k}} f(t^{(k)}_{\sigma(1)},\ldots,t^{(k)}_{\sigma(k)}),
\]
\[
U_\omega=\prod_{k=1}^{n-1} \prod_{a=1}^{k}\left(
\prod_{c=1}^{k+1} \psi_{\omega,k,a,c}(t^{(k+1)}_c/t^{(k)}_a) \prod_{b=a+1}^{k}
\delta(t^{(k)}_b/t^{(k)}_a,h)
\right),
\]
\[
\psi_{\omega,k,a,c}(x)
=\vartheta(x)\cdot\begin{cases}
\delta(x,h) & \text{if}\qquad \omega^{(k+1)}_c<\omega^{(k)}_a \\
\delta\left(x,h^{1-c_\omega(k,a)} \frac{\mu_{k+1}}{\mu_{j_\omega(k,a)}}\right) & \text{if}\qquad \omega^{(k+1)}_c=\omega^{(k)}_a \\
1 & \text{if}\qquad \omega^{(k+1)}_c>\omega^{(k)}_a.
\end{cases}
\]
\end{definition}
\noindent The usual names of the variables of the elliptic weight function are:
\[
\begin{array}{lll}
t^{(k)}_a & \text{ for } k=1,\ldots,n-1,\ a=1,\ldots,k& \text{the topological variables,} \\
z_a:=t^{(n)}_a & \text{ for } a=1,\ldots,n & \text{the equivariant variables,}\\
h & & \text{the ``Planck variable'',}\\
\mu_k=h^{-\lambda_k} & \text{ for } k=1,\ldots, n & \text{the dynamical (or K\"ahler) variables}.
\end{array}
\]
The function $\ww_\omega$ is symmetric in the $t^{(k)}_*$ variables (for each $k=1,\ldots,n-1$ separately), but not symmetric in the equivariant variables.
Consider the new variables $\gamma_1,\ldots,\gamma_n$, and define the modified weight function (of the variables $\gamma=(\gamma_1,\ldots,\gamma_n)$, $z=(z_1,\ldots,z_n)$, $\mu=(\mu_1,\ldots,\mu_n)$, and $h$)
\begin{equation}\label{modW}
\wwh_\omega(\gamma,z,\mu,h)=\ww_\omega
\left(t^{(k)}_a=\gamma_a \text{ for } k=1,\ldots,n-1; t^{(n)}_a=z_a\right),
\end{equation}
that is, we substitute $\gamma_a$ for $t^{(k)}_a$ for $k=1,\ldots,n-1$, and rename $t^{(n)}_a$ to $z_a$. This substitution corresponds to going from the presentation \eqref{KT1} to the presentation \eqref{KT2}.
\begin{example} \rm
We have
\[\ww_{12}=\tfrac{1}{\vartheta'(1)}{\vartheta\left(\tfrac{t^{(2)}_1}{t^{(1)}_1}\right)\vartheta\left(\tfrac{t^{(2)}_2}{t^{(1)}_1}\right)\delta\left(\tfrac{t^{(2)}_1}{t^{(1)}_1},\tfrac{h\mu_2}{\mu_1}\right)}
=
\tfrac{\vartheta\left(\tfrac{t^{(2)}_2}{t^{(1)}_1}\right)\vartheta\left(\tfrac{h\mu_2t^{(2)}_1}{\mu_1t^{(1)}_1}\right)}{\vartheta\left(\tfrac{h\mu_2}{\mu_1}\right)},\]
\[
\ww_{21}=
\tfrac{1}{\vartheta'(1)}\vartheta\left(\tfrac{t^{(2)}_1}{t^{(1)}_1}\right)\vartheta\left(\tfrac{t^{(2)}_2}{t^{(1)}_1}\right)\delta\left(\tfrac{t^{(2)}_1}{t^{(1)}_1},h\right)\delta\left(\tfrac{t^{(2)}_2}{t^{(1)}_1},\tfrac{\mu_2}{\mu_1}\right)=
\frac{\vartheta'(1)\vartheta
\left(\tfrac{h
t^{(2)}_1}{t^{(1)}_1}\right)
\vartheta\left(\frac{\mu_2
t^{(2)}_2}{\mu _1
t^{(1)}_1}\right)}{\vartheta (h)
\vartheta\left(\frac{\mu
_2}{\mu _1}\right)},
\]
and hence
\begin{equation}\label{WWex}
\wwh_{12}=\tfrac{1}{\vartheta'(1)}
\vartheta\left(\tfrac{z_1}{\gamma_1}\right)\vartheta\left(\tfrac{z_2}{\gamma_1}\right)\delta\left(\tfrac{z_1}{\gamma_1},\tfrac{h\mu_2}{\mu_1}\right),
\ \ \ \
\wwh_{21}=
\tfrac{1}{\vartheta'(1)}
\vartheta\left(\tfrac{z_1}{\gamma_1}\right)
\vartheta\left(\tfrac{z_2}{\gamma_1}\right)
\delta\left(\tfrac{z_1}{\gamma_1},h\right)
\delta\left(\tfrac{z_2}{\gamma_1},\tfrac{\mu_2}{\mu_1}\right)
.
\end{equation}
For $n=3$, for example, we have
\[
\wwh_{123}=
\frac{
\vartheta\left(\tfrac{z_2}{\gamma_1}\right)
\vartheta\left(\tfrac{z_3}{\gamma_1}\right)
\vartheta\left(\tfrac{z_3}{\gamma_2}\right)
\vartheta\left(\tfrac{z_1h}{\gamma_2}\right)
\vartheta\left(\tfrac{z_1 h\mu_3}{\gamma_1\mu_1}\right)
\vartheta\left(\tfrac{z_2 h\mu_3}{\gamma_2\mu_2}\right)
}
{
\vartheta\left(\tfrac{h\mu_3}{\mu_1}\right)
\vartheta\left(\tfrac{h\mu_3}{\mu_2}\right)
\vartheta\left(\tfrac{\gamma_1h}{\gamma_2}\right)
}.
\]
\end{example}
The key properties of weight functions are the R-matrix recursion property, substitution properties, transformation properties, orthogonality relations, as well as their axiomatic characterizations, see details in \cite{RTV}. First we recall some obvious substitution properties, and the R-matrix recursion property.
\subsection{Substitution properties}
Keeping in mind that fixed point restrictions in geometry are obtained by the substitutions~\eqref{substitute}, for permutations $\omega,\sigma$ define
\[
\ww_{\omega,\sigma}=
{\wwh_{\omega}}|_{\gamma_k=z_{\sigma(k)}}=
{\ww_{\omega}}|_{t^{(k)}_i=z_{\sigma(i)}}.
\]
From the definition of weight functions (or by citing \cite[Lemmas 2.4, 2.5]{RTV}) it follows that
$\ww_{\omega,\sigma}=0$ unless $\sigma\leq \omega$ in the Bruhat order, and
\begin{equation*}
\ww_{\omega,\omega}={\prod_{i<j}\vartheta(z_{\omega(j)}/z_{\omega(i)})} \cdot \prod_{i<j,\omega(i)>\omega(j)} \delta(z_{\omega(j)}/z_{\omega(i)},h).
\end{equation*}
In particular, we have
\begin{equation}\label{Rbasic}
{\ww_{\id,\sigma}}=\begin{cases}
{\prod_{i<j}\vartheta(z_{j}/z_{i})} & \text{if } \sigma=\id \\
0 & \text{if } \sigma\not=\id.
\end{cases}
\end{equation}
\subsection{R-matrix recursion}
In \cite{RTV} (Theorem 2.2 and notation (2.8)) the following identity is proved for weight functions:
\begin{equation}\label{eq:Rmatrix}
s_k^z\ww_{s_k\omega} =
\begin{cases}
\ww_{\omega} \cdot \frac{\delta(\frac{\mu_{\omega^{-1}(k)}}{\mu_{\omega^{-1}(k+1)}},h) \delta(\frac{\mu_{\omega^{-1}(k+1)}}{\mu_{\omega^{-1}(k)}},h)}{\delta(\frac{z_k}{z_{k+1}},h)}
-
\ww_{s_k\omega} \cdot \frac{\delta( \frac{z_{k+1}}{z_k}, \frac{\mu_{\omega^{-1}(k)}}{\mu_{\omega^{-1}(k+1)}})}{\delta(\frac{z_k}{z_{k+1}},h)}
& \text{if } \ell(s_k \omega)>\ell(\omega)
\\
\ww_{\omega} \cdot \frac{1}{\delta(\frac{z_k}{z_{k+1}},h)}
-
\ww_{s_k\omega} \cdot \frac{\delta( \frac{z_{k+1}}{z_k}, \frac{\mu_{\omega^{-1}(k)}}{\mu_{\omega^{-1}(k+1)}})}{\delta(\frac{z_k}{z_{k+1}},h)}
& \text{if } \ell(s_k \omega)<\ell(\omega),
\end{cases}
\end{equation}
where $s^z_k$ acts by interchanging the variables $z_k$ and $z_{k+1}$. Of course, the same formula holds for the $\wwh$-functions (replace $\ww$ with $\wwh$ everywhere in \eqref{eq:Rmatrix}).
\begin{corollary} \label{cor:R-recur}
If $\ell(s_k\omega)=\ell(\omega)+1$ then
\begin{equation*}
\ww_{s_k\omega}
=
\delta\left( \frac{z_{k+1}}{z_k}, \frac{\mu_{\omega^{-1}(k+1)}}{\mu_{\omega^{-1}(k)}}\right)
\cdot\ww_{\omega} + \delta\left(\frac{z_k}{z_{k+1}},h\right)\cdot s_k^z\ww_{\omega}
\end{equation*}
and the same holds if $\ww$ is replaced with $\wwh$.
\end{corollary}
\begin{proof} The statement follows from the second line of \eqref{eq:Rmatrix}, after we rename $\omega$ to $s_k \omega$.
\end{proof}
A key observation is that the recursion of Corollary \ref{cor:R-recur}, together with the initial condition \eqref{Rbasic}, completely determines the classes $\ww_{\omega,\sigma}$.
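The $n=2$ case of this recursion can be checked numerically. The sketch below is not part of the text; it assumes the conventions $\vartheta(x)=(x^{1/2}-x^{-1/2})\prod_{k\geq1}(1-q^kx)(1-q^k/x)$ and $\delta(a,b)=\vartheta'(1)\vartheta(ab)/(\vartheta(a)\vartheta(b))$ (any overall normalization of $\delta$ cancels, since both sides of the recursion are quadratic in $\delta$). It verifies Corollary \ref{cor:R-recur} for $\omega=\id$, $k=1$, with $\wwh_{12}$ and $\wwh_{21}$ taken from \eqref{WWex}:

```python
import math

# Numerical sanity check of the n=2 R-matrix recursion for weight
# functions.  Assumed (hypothetical) conventions:
#   theta(x) = (x^{1/2}-x^{-1/2}) prod_{k>=1} (1-q^k x)(1-q^k/x),
#   delta(a,b) = theta'(1) theta(ab) / (theta(a) theta(b)).

Q = 0.1   # elliptic nome, |q| < 1, small for fast convergence
N = 80    # truncation order of the infinite products

def theta(x):
    v = math.sqrt(x) - 1.0 / math.sqrt(x)
    for k in range(1, N):
        v *= (1.0 - Q**k * x) * (1.0 - Q**k / x)
    return v

THETA_PRIME_1 = 1.0
for k in range(1, N):
    THETA_PRIME_1 *= (1.0 - Q**k) ** 2    # theta'(1) = prod (1-q^k)^2

def delta(a, b):
    return THETA_PRIME_1 * theta(a * b) / (theta(a) * theta(b))

# \wwh_{12} and \wwh_{21} for n = 2, as in eq. (WWex)
def w_id(z1, z2, g1, m1, m2, h):
    return theta(z1/g1) * theta(z2/g1) * delta(z1/g1, h*m2/m1) / THETA_PRIME_1

def w_s1(z1, z2, g1, m1, m2, h):
    return (theta(z1/g1) * theta(z2/g1)
            * delta(z1/g1, h) * delta(z2/g1, m2/m1) / THETA_PRIME_1)

# Corollary: w_{s1} = delta(z2/z1, mu2/mu1) w_id + delta(z1/z2, h) s1^z(w_id)
z1, z2, g1, m1, m2, h = 1.3, 0.7, 2.1, 1.9, 0.4, 1.7
lhs = w_s1(z1, z2, g1, m1, m2, h)
rhs = (delta(z2/z1, m2/m1) * w_id(z1, z2, g1, m1, m2, h)
       + delta(z1/z2, h) * w_id(z2, z1, g1, m1, m2, h))  # s1^z swaps z1, z2
assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(lhs), abs(rhs))
```

The check passes for generic positive parameters; the underlying identity is the classical three-term theta relation in $\delta$-form.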
\section{Weight functions are representatives of elliptic classes}
For a rank $n$ bundle $\TTT$ with Grothendieck roots $t_k$ we defined its K theoretic Euler class in~\eqref{eq:eK}.
We recall that the elliptic cohomology is understood as Borel equivariant cohomology $\hat H^*_{\bf T}(-;{\mathbb C})[[q]]$ with the complex orientation given by the theta function.
The elliptic Euler class is defined by ${\rm e}^{\rm ell}(\TTT)=\prod_{k=1}^n \vt(\xi_k)$, where $\xi_k$ are the Chern roots, $ch(t_k)={\rm e}^{\xi_k}$. Below we identify equivariant K theory or equivariant elliptic cohomology classes of $G/B$ with the tuple of their restrictions to the fixed points. Formally we should apply the Chern character to compare formulas in $$K_{\bf T}(pt)[[q]]\simeq {\mathbb Z}[z_1^{\pm 1},z_2^{\pm 1},\dots,z_n^{\pm 1}][[q]]$$ with
$$\hat H^*_{\bf T}(pt;{\mathbb C})[[q]]\simeq {\mathbb C}[[x_1,x_2,\dots,x_n,q]]\,.$$
Here the variables $z_k$ form a basis of the characters of ${\bf T}$, while the variables $x_k$ are the corresponding weights; they form an integral basis of ${\mathfrak t}^*$.
The Chern character is given by the substitution $$z_k\mapsto {\rm e}^{x_k}$$
and it is clearly injective. Therefore from now on we will omit the Chern character in the notation and for example we will write $\vartheta(z_i/z_j)$ instead of $\vt(x_i-x_j)$.
\medskip
We will be concerned with ${e^{\rm ell}}(T\!\Fl(n))_{|\sigma}={e^{\rm ell}}(T_\sigma\!\Fl(n))=\prod_{i<j} \vartheta(z_{\sigma(j)}/z_{\sigma(i)})$ for a permutation $\sigma$. Using this Euler class, the initial condition \eqref{Rbasic} and the recursion of Corollary \ref{cor:R-recur} read:
\begin{equation}\label{locini}
\frac{\ww_{\id,\sigma}}{{e^{\rm ell}}(T_{\sigma}\Fl(n))}=\begin{cases} 1 & \text{if } \sigma=\id \\ 0 & \text{if } \sigma\not=\id, \end{cases}
\end{equation}
and for $\ell(s_k \omega)=\ell(\omega)+1$
\begin{equation}\label{eq:Rw-recurence}
\frac{\ww_{s_k\omega,\sigma}}{{e^{\rm ell}}(T_\sigma\!\F(n))}
=
\delta\left( \tfrac{z_{k+1}}{z_k}, \tfrac{\mu_{\omega^{-1}(k+1)}}{\mu_{\omega^{-1}(k)}}\right)
\cdot\frac{\ww_{\omega,\sigma}}{{e^{\rm ell}}(T_\sigma\!\F(n))} + \delta\left(\tfrac{z_k}{z_{k+1}},h\right)\cdot s_k^z\left(\frac{\ww_{\omega,s_k\sigma}}{{e^{\rm ell}}(T_{s_k\sigma}\!\F(n))}\right).
\end{equation}
Now we are ready to state the theorem that weight functions represent the elliptic classes of Schubert varieties.
\begin{theorem}\label{thm:EsameW}
Set $\mu_i=h^{-\lambda_i}$. With this identification
in presentation \eqref{ring1} we have
\[
{E^{\rm K}}(X_\omega,\lambda)=\frac{{e^{\rm K}}(T\!\F(n))}{{e^{\rm ell}}(T\!\F(n))}\cdot [\ww_\omega] \,,
\]
and in presentation \eqref{ring2} we have
\[
{E^{\rm K}}(X_\omega,\lambda)= \frac{{e^{\rm K}}(T\!\F(n))}{{e^{\rm ell}}(T\!\F(n))}\cdot [\wwh_\omega]\,.
\]
\end{theorem}
\begin{remark} \rm \label{rm:ell2K}
Continuing Remark \ref{rem:EllH} let us note that if we had set up the elliptic class of varieties not in equivariant K theory but in equivariant elliptic cohomology, then the class would be multiplied by ${{e^{\rm ell}}(TM)}/{{e^{\rm K}}(TM)}$ (where $M$ is the ambient space). That is, Theorem \ref{thm:EsameW} claims that the functions $\ww_\omega$, $\wwh_\omega$ represent the elliptic class of Schubert varieties in equivariant elliptic cohomology.
\end{remark}
\subsection{Proof of Theorem \ref{thm:EsameW}}
Let us fix some notation:
\medskip
\noindent{\bf Convention.} We will skip $\lambda$ in the notation $E_\sigma(X_\omega,\lambda)$ and treat $E_\sigma(X_\omega)$ as a function of $\lambda\in{\mathfrak t}^*\simeq \C^n$ expressed in terms of
the basic functions $\mu_k=\mu_k(\lambda)=h^{-\lambda_k}$. The action of $\omega\in W$ on the space of functions generated by $\mu_\bullet$ will be denoted by $\omega^\mu$ (in particular, $s_k^\mu$ for a simple reflection). We will write $L_{k,\sigma}$ to denote the character of the line bundle $L_k$ at the point $x_\sigma$.
\bigskip
\noindent We need to prove that for all $\omega, \sigma$
\[
E_\sigma(X_{\omega})=\frac{\ww_{\omega,\sigma}}{{e^{\rm ell}}(T_{\sigma}\!\F(n))}.
\]
This will be achieved by showing that the recursive characterization \eqref{locini}, \eqref{eq:Rw-recurence} of the right hand side holds for the left hand side too.
The basic step \eqref{locini} holds for $E_\sigma(X_{\id})$ because of \eqref{Eini}.
\begin{proposition}\label{pro:ERrecursion}Suppose $G=\GL_n$. If $\ell(s_k\omega)=\ell(\omega)+1$ then the functions $E_\sigma(X_\omega)$ satisfy the recursion
$$E_\sigma(X_{s_k\omega})=
\delta\left(\frac{z_{k + 1}}{z_k},\frac{ \mu_{\omega^{-1}(k + 1)}}{\mu_{\omega^{-1}(k)}}\right)
\cdot E_\sigma(X_\omega) +
\delta\left(\frac{z_k }{z_{k+1}},h\right)\cdot
s_k^z E_{s_k\sigma}(X_\omega).$$
More generally for an arbitrary reductive group
$$E_\sigma(X_{s_k\omega})=
\delta\left(L_{k,\id},(\omega^{-1})^\mu(\nu_k)\right)
\cdot E_\sigma(X_\omega) +
\delta\left(L^{-1}_{k,\id},h\right)\cdot
s_k^zE_{s_k\sigma}(X_\omega)\,.$$
Here
$s_k^z$
means the action of $s_k$ on $z$-variables and $\omega^\mu$ acts on the $\mu$-variables, $\nu_k=h^{-\alpha_k^\text{\tiny$\vee$}}$.\end{proposition}
The reader may find it useful to verify the statement for $n=2$ using the local classes below.
\begin{center}{\def2{2}
\begin{tabular}{|c|l|l|}
\hline
&$\sigma=\id$&$\sigma=s_1$\\
\hline
$E_\sigma(X_\omega)$ &$E_{12}(X_{12})=1 $ & $E_{21}(X_{12})=0$ \\
& $E_{12}(X_{21})=\delta(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1})$ & $E_{21}(X_{21})=\delta(\frac{z_1}{z_2},h)$ \\
\hline
$\ww_{\omega,\sigma}$ & $\ww_{12,12} =\vartheta(\frac{z_2}{z_1}) $ & $\ww_{12,21}=0$ \\
& $\ww_{21,12}=\vartheta(\frac{z_2}{z_1}) \delta(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1})$ & $\ww_{21,21}=\vartheta(\frac{z_1}{z_2}) \delta(\frac{z_1}{z_2},h)$ \\
\hline
\end{tabular}}
\end{center}
\begin{proof}
We prove the proposition by induction with respect to the length of $\omega$. We assume that the formula of Proposition \ref{pro:ERrecursion} holds for $\omega_1$ with $\ell(\omega_1)<\ell(\omega)$.
We introduce the notation for a group element inverse:
$\bb\omega=\omega^{-1}$. Note that $\bb{s_k}=s_k$.
Let us assume that $\omega_1 s_\ell$ is a reduced expression of $\omega$ (so that $\ell(\omega)=\ell(\omega_1)+1$) and that $\ell(s_k\omega)=\ell(\omega)+1$. Then
$\ell(s_k\omega_1)=\ell(\omega_1)+1$.
By Theorem \ref{th:mainiduction}
$$E_\sigma(X_{s_k\omega})=\delta(L_{\ell,\sigma},\nu_\ell)s_\ell^\mu E_\sigma(X_{s_k\omega_1 })+\delta(L_{\ell,\sigma},h)s_\ell^\mu E_{\sigma s_\ell}(X_{s_k\omega_1 })\,.$$
By inductive assumption this expression is equal to
\begin{multline*}\delta(L_{\ell,\sigma},\nu_\ell)s_\ell^\mu\big(\delta(L_{k,\id},\bb\omega_1^\mu(\nu_k))E_\sigma(X_{\omega_1})+\delta(L_{k,\id}^{-1},h)s_k^z E_{s_k\sigma}(X_{\omega_1})\big)\\
+\delta(L_{\ell,\sigma},h)s_\ell^\mu\big(
\delta(L_{k,\id},\bb\omega_1^\mu(\nu_k))E_{\sigma s_\ell}(X_{\omega_1})+\delta(L_{k,\id}^{-1},h)s_k^z E_{s_k\sigma s_\ell}(X_{\omega_1})\big)\,.\end{multline*}
Note that
\[\begin{array}{ll}
s_k^z\delta(L_{\ell,\sigma},\nu_\ell)=\delta(L_{\ell,s_k\sigma},\nu_\ell)
\,,&
s_k^z\delta(L_{\ell,\sigma},h)=\delta(L_{\ell,s_k\sigma},h)\,,
\\ \\
s_\ell^\mu\delta(L_{k,\sigma},\bb\omega_1^\mu(\nu_k))=\delta(L_{k,\sigma},s_\ell^\mu\bb\omega_1^\mu(\nu_k))\,,
&
s_\ell^\mu\delta(L_{k,\id}^{-1},h)=\delta(L_{k,\id}^{-1},h)\,,
\end{array}\]
hence rearranging the expression we obtain
\begin{multline*}
\delta(L_{k,\id},s_\ell^\mu\bb\omega_1^\mu(\nu_k))\big(
\delta(L_{\ell,\sigma},\nu_\ell)s_\ell^\mu E_\sigma(X_{\omega_1})+
\delta
(L_{\ell,\sigma},h)s_\ell^\mu E_{\sigma s_\ell}(X_{\omega_1})\big)
\\
+\delta(L_{k,\id}^{-1},h)
s_k^z\big(\delta(L_{\ell,s_k\sigma},\nu_\ell)s_\ell^\mu E_{s_k\sigma}(X_{\omega_1}))
+
\delta(L_{\ell,s_k\sigma},h)s_\ell^\mu E_{s_k\sigma s_\ell}(X_{\omega_1})\big)
=\end{multline*}
\begin{multline*}=\delta(L_{k,\id},s_\ell^\mu\bb\omega_1^\mu(\nu_k))
E_\sigma(X_{\omega_1 s_\ell})+
\delta(L_{k,\id}^{-1},h)
s_k^z E_{s_k\sigma}(X_{\omega_1 s_\ell})=\\
=\delta(L_{k,\id},\bb\omega^\mu(\nu_k))
E_\sigma(X_{\omega})+
\delta(L_{k,\id}^{-1},h)
s_k^z E_{s_k\sigma}(X_{\omega})\,.
\end{multline*}
\end{proof}
\noindent This completes the proof of Theorem \ref{thm:EsameW}.
\begin{proposition}
If $\ell(s_k\omega)=\ell(\omega)-1$, then
$$\delta\big(\frac{z_{k + 1}}{z_k},\frac{ \mu_{\omega^{-1}(k + 1)}}{\mu_{\omega^{-1}(k)}}\big)
\cdot E_\sigma(X_\omega) +
\delta\big(\frac{z_k }{z_{k+1}},h\big)\cdot E_{s_k\sigma}(X_\omega)=\delta\big(h,\frac{ \mu_{\omega^{-1}(k + 1)}}{\mu_{\omega^{-1}(k)}}\big)
\delta\big(h,\frac{ \mu_{\omega^{-1}(k )}}{\mu_{\omega^{-1}(k+1)}}\big)
E_\sigma(X_{s_k\omega})\,.$$\end{proposition}
\begin{proof}
For $G=\GL_n$
this relation is a reformulation of the first line of the R-matrix relation \eqref{eq:Rmatrix}. For general $G$ this statement follows from a direct calculation when $\omega_1=s_k \omega_2$, $\ell(\omega_2)<\ell(\omega_1)$, which is exactly the same as the proof of Theorem \ref{th:kappa}.
\end{proof}
To obtain Formula \ref{R-intro} given in the Introduction we multiply and divide by the elliptic Euler classes \eqref{eq:ellipticEuler}. Here, in contrast with the proof of Formula \ref{BS-intro}, the minus sign does not appear, because additionally we have the action of $s_k^z$, which compensates the sign.
\section{Transformation properties of ${E^{\rm K}}(X_{\omega})$}
Having proved that for $\GL_n$ the elliptic classes of Schubert varieties are represented by weight functions, we can conclude that all proven properties of weight functions hold for those elliptic classes. One key property of weight functions is a strong constraint on their transformation properties. Hence, such a constraint holds for ${E^{\rm K}}(X_\omega)$ in the $\GL_n$ case. Motivated by this fact we will prove an analogous theorem on the transformation properties of elliptic classes of Schubert varieties for any reductive group $G$.
In Section \ref{sec:transformations} we recall how to encode transformation properties of theta-functions by quadratic forms, and recall the known transformation properties of weight functions. In Section~\ref{sec:axiom} we put that statement in context by recalling a whole set of other properties which together characterize weight functions. In Section \ref{sec:trans_general} we generalize the transformation property statement to arbitrary $G$. Thus the new result is only in Section \ref{sec:trans_general}; the preceding sections serve as motivation for it.
\subsection{Transformation properties of the weight function} \label{sec:transformations}
Consider functions $\C^p\times \HH \to \C$, where $\HH$ is the upper half plane, and the variable in $\HH$ is called $\tau$.
Let $M$ be a $p\times p$ symmetric integer matrix, which we identify with the quadratic form $x\mapsto x^TMx$.
We say that the function $f:\C^p \times \HH \to \C$ has transformation property $M$, if
\begin{align*}
f(x_1,\ldots,x_{j-1},\,x_j+2\pi i,\,x_{j+1},\ldots,x_p)=& (-1)^{M_{jj}} f(x), \\
f(x_1,\ldots,x_{j-1},\,x_j+2\pi i\tau,\,x_{j+1},\ldots,x_p)=&(-1)^{M_{jj}} {\rm e}^{-\sum_k M_{jk}x_k-\pi i \tau M_{jj}} f(x).
\end{align*}
For a quadratic form $M$ one may define a line bundle $\mathcal L(M,0)$ over the $p$th power of the elliptic curve $\C/2\pi i\langle 1,\tau\rangle$ such that the sections of $\mathcal L(M,0)$ are identified with functions with transformation property $M$, see \cite[Section 6]{RTV}.
Recall from Section \ref{sec:theta} that we set up the theta function in two ways: $\vartheta(\ )$
in ``multiplicative variables'', and $\vt(\ )$
in ``additive variables''. The transformation property, that is, the quadratic form $M$, is always meant in the additive variables, but when naming the quadratic form we use the variable names most convenient for the situation. For example, the function $\vartheta(x)$ has transformation property $M=(1)$ (or equivalently, the quadratic form $x^2$), because of \eqref{theta_trans}.
Iterating this fact one obtains that for integers $r_i$ the function
$\vartheta(\prod_{i=1}^p x_i^{r_i} )$ has transformation property $(\sum_{i=1}^p r_ix_i)^2$. For a product of functions, the quadratic forms of their transformation properties add. Hence, for example, the function $\delta(a,b)$ of \eqref{def:delta} has transformation property $(a+b)^2-a^2-b^2=2ab$. Through careful analysis of the combinatorics of the weight functions defined above (or from \cite[Lemmas 6.3, 6.4]{RTV} by carrying out the necessary convention changes) we obtain
\begin{proposition}
The weight function $\wwh_\omega$ has transformation property
\begin{align}\label{Qpi}
\notag Q(\omega)=&\mathop{\sum_{1\leq i,j\leq n-1}}_{\omega(i)<\omega(j)} 2h(\gamma_j-\gamma_i) + \sum_{i=1}^{n-1}\sum_{j=1}^{\omega(i)-1} 2h(z_j-\gamma_i) \\
&+ \sum_{i=1}^{n-1} 2(z_{\omega(i)}-\gamma_i) (P_{\omega,i}h+\mu_n-\mu_i) \\
\notag &+ \sum_{i=1}^{n-1} \sum_{j=1}^n (z_j-\gamma_i)^2-\mathop{\sum_{1\leq i,j\leq n-1}}_{i<j} (\gamma_i-\gamma_j)^2,
\end{align}
where $P_{\omega,i}=1$ if $\omega(i)<\omega(n)$ and $P_{\omega,i}=0$ otherwise. \qed
\end{proposition}
\begin{example} \rm
We have
\begin{align*}
Q(12)&=2(z_1-\gamma_1)(h+\mu_2-\mu_1)+(z_1-\gamma_1)^2+(z_2-\gamma_1)^2,\\
Q(21)&=2h(z_1-\gamma_1)+2(z_2-\gamma_1)(\mu_2-\mu_1)+(z_1-\gamma_1)^2+(z_2-\gamma_1)^2,
\end{align*}
in accordance with formulas \eqref{WWex}.
\end{example}
\begin{corollary} \label{cor:transfom_recursive}
We have
\begin{align*}
Q(\omega)-Q(\omega s_k)=& 2(z_{\omega(k)}-z_{\omega(k+1)})(\mu_{k+1}-\mu_k), \\
Q(\omega)-Q(s_k\omega)=& 2(z_{k}-z_{k+1})(\mu_{\omega^{-1}(k+1)}-\mu_{\omega^{-1}(k)}).
\end{align*}
\end{corollary}
Note that neither line depends on the variables $\gamma_i$.
\begin{proof}
Straightforward calculation based on formula \eqref{Qpi}, carried out separately in the few cases depending on whether $k<n-1$ or $k=n-1$, and on whether $\omega(k)<\omega(n)$ or not.
Let us note that a more conceptual proof for (only) the second line can be obtained using the R-matrix relation~\eqref{eq:Rmatrix}.
\end{proof}
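For $n=2$ the corollary can be confirmed by direct evaluation of the explicit forms $Q(12)$ and $Q(21)$ from the example above. The following sketch (not part of the text) does this numerically and also illustrates that $\gamma_1$ drops out of the difference:

```python
# Numerical check, for n = 2, that Q(12) - Q(21) = 2(z1 - z2)(mu2 - mu1),
# using the explicit quadratic forms Q(12), Q(21) of the example,
# and that the difference does not depend on gamma_1.

def Q12(z1, z2, g1, m1, m2, h):
    return 2*(z1-g1)*(h+m2-m1) + (z1-g1)**2 + (z2-g1)**2

def Q21(z1, z2, g1, m1, m2, h):
    return 2*h*(z1-g1) + 2*(z2-g1)*(m2-m1) + (z1-g1)**2 + (z2-g1)**2

z1, z2, m1, m2, h = 0.9, -0.3, 1.4, 0.2, 2.5
for g1 in (0.0, 1.7, -2.4):          # gamma_1 drops out of the difference
    diff = Q12(z1, z2, g1, m1, m2, h) - Q21(z1, z2, g1, m1, m2, h)
    assert abs(diff - 2*(z1-z2)*(m2-m1)) < 1e-12
```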
\subsection{Axiomatic characterization}\label{sec:axiom}
In this section we recall from \cite{RTV} a list of axioms that determine the weight function. One of the axioms is that the weight functions must have the transformation properties calculated above. Other axioms require holomorphicity of certain functions. Since the square root of $x$ appears in the definition of $\vartheta(x)$, the domain of our functions is a suitable cover of $\C^N$ (or the domain of the induced section is a suitable cover of the product of elliptic curves).
The ``constant'' (not depending on $z$ variables)
\[
\psi_\omega=\psi_\omega(\mu,h)=\vartheta(h)^{n(n-1)(n-2)/3}
\prod_{i<j}\left( \prod_{\omega(i)<\omega(j)} \vartheta\left( h\mu_j/\mu_i\right) \cdot \prod_{\omega(i)>\omega(j)} \vartheta(h) \vartheta\left(\mu_j/\mu_i\right) \right)
\]
plays a role below.
\begin{theorem}[\cite{RTV} Theorem 7.3, see also \cite{FRV} Theorem A.1] \label{thm:axioms} \ \\
(I) The functions $\ww_{\omega,\sigma}$ satisfy the properties:
\begin{itemize}
\item[(1.1)] (holomorphicity) We have
\[
\ww_{\omega,\sigma}=\frac{1}{\psi_\omega} \cdot \text{holomorphic function.}
\]
\item[(1.2)] (GKM relations) We have
\[
\ww_{\omega,\sigma s_k}|_{z_{\sigma(k)}=z_{\sigma(k+1)}}=\ww_{\omega,\sigma}|_{z_{\sigma(k)}=z_{\sigma(k+1)}}.
\]
\item[(1.3)] (transformations) The transformation properties of $\ww_{\omega,\sigma}$ are described by the quadratic form $Q(\omega)_{|\gamma_i=z_{\sigma(i)}}$.
\item[(2)]\label{item:smooth} (normalization)
\[
\ww_{\omega,\omega}=
\prod_{i<j} \vartheta(z_{\omega(j)}/z_{\omega(i)})
\mathop{\prod_{i<j}}_{\omega(j)<\omega(i)} \delta(z_{\omega(j)}/z_{\omega(i)},h).
\]
\item[(3.1)] (triangularity) If $\sigma\not\preccurlyeq \omega$ in the Bruhat order then $\ww_{\omega,\sigma}=0$.
\item[(3.2)] (support) If $\sigma\preccurlyeq \omega$ in the Bruhat order then $\ww_{\omega,\sigma}$ is of the form
\[\frac{1}{\psi_\omega} \cdot \mathop{\prod_{i<j}}_{\sigma(i)>\sigma(j)} \vartheta(z_{\sigma(j)}h/z_{\sigma(i)}) \cdot \text{holomorphic function.}
\]
\end{itemize}
(II) These properties uniquely determine the functions $\ww_{\omega,\sigma}$. \qed
\end{theorem}
\noindent The axiom (1.3) may be replaced by the inductive property:
\begin{itemize}
\item[(1.3')] If $\omega=\omega's_k$ with $\ell(\omega)=\ell(\omega')+1$
then the difference of the quadratic forms describing the transformation properties of
$\ww_{\omega',\sigma}$
and that of
$\ww_{\omega,\sigma}$
is equal to
$$2(z_{\omega(k+1)}-z_{\omega(k)})(\mu_{k+1}-\mu_k).$$
\end{itemize}
\subsection{Transformation properties for general $G$} \label{sec:trans_general}
A key axiom for the weight functions is (1.3'). Through Theorem \ref{thm:EsameW} it implies that the difference of the quadratic forms of $E_\sigma(X_{\omega'},\lambda)$ and $E_\sigma(X_{\omega},\lambda)$ is also $2(z_{\omega(k+1)}-z_{\omega(k)})(\mu_{k+1}-\mu_k)$.
Below, in Theorem \ref{simple_induction}, we prove that the generalization of this surprising property holds for general $G$. We will also see that this general argument (using the language of general Coxeter groups) fits the transformation properties better than the combinatorics of weight functions does.
\smallskip
Let $\alpha$ and $\beta\in {\mathfrak t}^*$ be two roots. The reflection of $\alpha$ about $\beta$ will be denoted by $\alpha^\beta=s_\beta(\alpha)$.
Define the functional $\mu_\alpha$ on ${\mathfrak t}^*$ by
$$\mu_\alpha\in ({\mathfrak t}^*)^*,\qquad
\mu_\alpha(\beta)=-\langle \beta, \alpha^\text{\tiny$\vee$} \rangle.$$
Note that we artificially introduce the minus sign to agree with the previous conventions (the definition of $\LL{\lambda}$ and $\mu_k=h^{-\lambda_k}$ in the weight function).
The action of the reflection $s_\alpha\in W$ on polynomial functions on ${\mathfrak t}^*$ is denoted by $s_\alpha^\mu$:
$$s^\mu_\alpha(f)=f\circ s_\alpha.$$
Define the generalized divided difference operation
$$d_\alpha(f)(x)=\frac{f(x)-f(s_\alpha(x))}{\mu_\alpha(x)}.$$
It satisfies the properties
\[
s^\mu_\alpha=s^\mu_{-\alpha},
\qquad
d_\alpha=-d_{-\alpha},
\qquad
d_{\beta} \circ s^\mu_\alpha=s^\mu_\alpha \circ d_{\beta^\alpha},
\qquad
d_\beta(\mu_\alpha)=\langle \alpha,\beta^\text{\tiny$\vee$} \rangle.
\]
In particular, we have $d_{\alpha}\mu_\alpha=\langle\alpha, \alpha^\text{\tiny$\vee$} \rangle=2$.
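These operator identities are elementary to check. The following sketch (not part of the text) verifies them numerically for the root system of $\GL_3$, with roots encoded as index pairs $(i,j)$ standing for $e_i-e_j$ and with $\mu_\alpha=\mu_i-\mu_j$ in coordinates, as in the conventions above:

```python
# Numerical check of the listed identities for s^mu_alpha and d_alpha
# in the GL_n root system: root alpha = e_i - e_j encoded as (i, j),
# mu_alpha = mu_i - mu_j, s_alpha swaps the i-th and j-th coordinates.

def s(al, x):                        # reflection s_alpha on mu-coordinates
    i, j = al
    y = list(x); y[i], y[j] = y[j], y[i]
    return tuple(y)

def mu(al, x):                       # the functional mu_alpha
    i, j = al
    return x[i] - x[j]

def d(al, f):                        # generalized divided difference d_alpha
    return lambda x: (f(x) - f(s(al, x))) / mu(al, x)

def pair(al, be):                    # <alpha, beta^vee> for GL_n roots
    i, j = al; k, l = be
    return (i == k) - (i == l) - (j == k) + (j == l)

def refl(al, be):                    # be^al = s_alpha(be), again a root
    i, j = al; k, l = be
    sw = {i: j, j: i}
    return (sw.get(k, k), sw.get(l, l))

al, be = (0, 2), (1, 2)              # two roots of GL_3
f = lambda x: x[0]**3 - 2.0*x[1]*x[2] + x[2]    # a test polynomial
x = (0.7, -1.9, 2.3)

assert abs(d(al, lambda y: mu(al, y))(x) - 2.0) < 1e-12           # d_a mu_a = 2
assert abs(d(be, lambda y: mu(al, y))(x) - pair(al, be)) < 1e-12  # d_b mu_a = <a,b^vee>
lhs = d(be, lambda y: f(s(al, y)))(x)            # d_beta o s^mu_alpha
rhs = d(refl(al, be), f)(s(al, x))               # s^mu_alpha o d_{beta^alpha}
assert abs(lhs - rhs) < 1e-12
```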
Consider the vector space ${\mathfrak t}\times{\mathfrak t}^*\times {\mathbb C}$. For a root $\alpha\in {\mathfrak t}^*$ the linear functional $z_\alpha\in ({\mathfrak t}\times{\mathfrak t}^*\times{\mathbb C})^*$ depends only on the first coordinate, on which it is given by $\alpha$.
The functional $\mu_\alpha$ depends on the second coordinate, while $h$ depends on the third coordinate. For $\sigma\in W$ and $\alpha\in {\mathfrak t}^*$, by $\sigma(\alpha)$ we understand the usual action of $\sigma$ on ${\mathfrak t}^*$.
\medskip
As usual, we keep fixed the positive roots and the simple roots.
To each pair ${\omega,\sigma}\in W$ such that $\sigma\preccurlyeq \omega$ we associate a quadratic form $M(\omega,\sigma)$ such that $M(\id,\id)=0$
and inductively: If $\omega=\omega's_\alpha$ with $\ell(\omega)=\ell(\omega')+1$, then
\begin{equation}\label{def-qinduction}M(\omega,\sigma)=\begin{cases}
s^\mu_\alpha(M(\omega',\sigma))+\mu_\alpha\, z_{\sigma(\alpha)}&\text{ if }\sigma\preccurlyeq \omega'\\
s^\mu_\alpha(M(\omega',\sigma s_\alpha))-h\,z_{\sigma(\alpha)}&\text{ if }\sigma s_\alpha\preccurlyeq \omega'.\end{cases}\end{equation}
If $\sigma\preccurlyeq \omega'$ and $\sigma s_\alpha \preccurlyeq \omega'$, then the two cases of the definition give the same quadratic form (this follows from the proof of Proposition \ref{prop88} below). Moreover, $\sigma\preccurlyeq \omega$ implies that at least one of the above conditions holds.
\medskip
\begin{example}\rm Let $G=\GL_3$. We illustrate two ways of computing $M(s_1s_2s_1,\id)$:
$$\begin{array}{cc}
M(\id,\id)= & \phantom{aa}0 \\
&^{s_1}\downarrow\\
M(s_1,\id)= & \boxed{\left(z_2-z_1\right) \left(\mu _2-\mu _1\right)}
\\
&^{s_2}\downarrow\\
M(s_1s_2,\id)= & \left(z_2-z_1\right) \left(\mu _3-\mu
_1\right)+\boxed{\left(z_3-z_2\right) \left(\mu _3-\mu
_2\right)} \\
&^{s_1}\downarrow\\
M(s_1s_2s_1,\id)= &\left(z_2-z_1\right) \left(\mu _3-\mu
_2\right)+\left(z_3-z_2\right) \left(\mu _3-\mu
_1\right)+\boxed{ \left(z_2-z_1\right) \left(\mu _2-\mu
_1\right)}. \\
\end{array}
$$
But also
$$\begin{array}{cc}
M(\id,\id)= & 0 \\&^{s_1}\downarrow\\
M(s_1,s_1)= & \boxed{ h\left(z_1-z_2\right) }\\&^{s_2}\downarrow\\
M(s_1s_2,s_1)= & h \left(z_1-z_2\right)+\boxed{\left(z_3-z_1\right)
\left(\mu _3-\mu _2\right)} \\&^{s_1}\downarrow\\
M(s_1s_2s_1,\id)= & h \left(z_1-z_2\right)+\left(z_3-z_1\right) \left(\mu
_3-\mu _1\right)+
\boxed{ h\left(z_2-z_1\right)}. \\
\end{array}
$$
In both cases we obtain $M(s_1s_2s_1,\id)=\left(z_3-z_1\right) \left(\mu
_3-\mu _1\right)$.
One can check that presenting this permutation as $s_2s_1s_2$ and performing analogous computations we obtain the same result.
\end{example}
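The example can be replayed mechanically. In the sketch below (not part of the text) a quadratic form is represented as a numeric function of $(z,\mu,h)$, the two cases of \eqref{def-qinduction} are implemented for the simple roots of $\GL_3$ (reading the coordinate conventions $\mu_{\alpha_k}=\mu_k-\mu_{k+1}$, $z_{\sigma(\alpha_k)}=z_{\sigma(k)}-z_{\sigma(k+1)}$ off the example itself), and both chains above are checked to give $(z_3-z_1)(\mu_3-\mu_1)$:

```python
# Numerical check of the GL_3 example: computing M(s1 s2 s1, id) along
# the two displayed chains and comparing with (z3 - z1)(mu3 - mu1).

def swap(v, k):                      # s^mu_k: swap entries k, k+1 (1-based k)
    w = list(v)
    w[k-1], w[k] = w[k], w[k-1]
    return tuple(w)

def case1(M, k, sigma):
    # M(omega' s_k, sigma) = s^mu_k M(omega', sigma) + mu_{alpha_k} z_{sigma(alpha_k)}
    return lambda z, mu, h: (M(z, swap(mu, k), h)
                             + (mu[k-1]-mu[k]) * (z[sigma[k-1]-1]-z[sigma[k]-1]))

def case2(M, k, sigma):
    # M(omega' s_k, sigma) = s^mu_k M(omega', sigma s_k) - h z_{sigma(alpha_k)}
    return lambda z, mu, h: (M(z, swap(mu, k), h)
                             - h * (z[sigma[k-1]-1]-z[sigma[k]-1]))

M0 = lambda z, mu, h: 0.0            # M(id, id) = 0

# first chain: sigma = id at every step (three applications of case 1)
A = case1(case1(case1(M0, 1, (1,2,3)), 2, (1,2,3)), 1, (1,2,3))
# second chain: sigma = s1, s1, then id (first and last steps use case 2)
B = case2(case1(case2(M0, 1, (2,1,3)), 2, (2,1,3)), 1, (1,2,3))

z, mu, h = (0.3, 1.1, -0.7), (0.9, -1.3, 0.5), 0.4
expected = (z[2]-z[0]) * (mu[2]-mu[0])
assert abs(A(z, mu, h) - expected) < 1e-12
assert abs(B(z, mu, h) - expected) < 1e-12
```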
The inductive procedure of constructing the elliptic classes can be translated to a description of the associated quadratic form.
\begin{proposition}\label{pr:ellform} The quadratic form $2M(\omega,\sigma)$ describes the transformation properties of $E_\sigma(X_\omega,\lambda)$.\end{proposition}
\begin{proof} If $\omega=\id$ then $$E_\sigma(X_{\id},\lambda)=\begin{cases}1&\text{ if } \id=\sigma\,\\ 0&\text{ if } \id\neq\sigma,\end{cases}$$
hence the associated form at $(\id,\id)$ is 0. By Theorem \ref{th:mainiduction}, when passing from $\omega'$ to $\omega$ the elliptic class changes by the factor $\delta^{in}_k=\delta(L_k,h)$, which at the point $\sigma$ has the transformation property $2z_{-\sigma(\alpha_k)} h=-2z_{\sigma(\alpha_k)}h$ (since $L_k=\LL{\alpha_k}$), or by the factor $\delta^{bd}_k=\delta(L_k,h^{\langle\lambda,\alpha_k^\text{\tiny$\vee$}\rangle})$, which at the point $\sigma$ has the transformation property $2z_{-\alpha_k}\mu_{-\alpha_k}=2z_{\sigma(\alpha_k)}\mu_{\alpha_k}$.
\end{proof}
\begin{proposition}\label{pr:center}The quadratic form at the smooth point of the cell is equal to
$$M(\omega,\omega)=h\sum_{\alpha\,\in\,\Sigma_+\cap\,\omega(\Sigma_-)} z_{\alpha},$$
where $\Sigma_\pm$ denotes the set of positive/negative roots. The roots appearing in the summation are the tangent weights of $X_\omega$ at $x_\omega$.\end{proposition}
\begin{proof}At the smooth point $x_\omega$ the localized elliptic class $E_\omega(X_\omega,\lambda)$ is given by a product of $\delta$ functions, and the quadratic form for $\delta(x,h)$ is equal to $2xh$ by \eqref{EllResLoc1}.\end{proof}
\begin{proposition} \label{prop88}
Suppose $\sigma\preccurlyeq \omega$, then
$$d_{\beta}(M(\omega,\sigma))=z_{\sigma(\beta)}-z_{\omega(\beta)}\,.$$
\end{proposition}
\begin{proof} Obviously, the statement holds for $\omega=\id$.
Consider the first case of the inductive definition. We have
\begin{align*}
d_\beta(M(\omega,\sigma)) & =d_\beta(s^\mu_\alpha(M(\omega',\sigma))+\mu_\alpha\, z_{\sigma(\alpha)})\\
&= s^\mu_\alpha d_{\beta^\alpha}(M(\omega',\sigma))+\langle \alpha,\beta^\text{\tiny$\vee$}\rangle z_{\sigma(\alpha)}\\
& = s^\mu_\alpha(z_{\sigma(\beta^\alpha)}-z_{\omega'(\beta^\alpha)})+\langle \alpha,\beta^\text{\tiny$\vee$}\rangle z_{\sigma(\alpha)}\\
& =z_{\sigma s_\alpha(\beta)}-z_{\omega(\beta)}+\langle\alpha, \beta^\text{\tiny$\vee$}\rangle z_{\sigma(\alpha)}.
\end{align*}
Since
\[
s_\alpha(\beta)=\beta-\langle \alpha,\beta^\text{\tiny$\vee$}\rangle \alpha, \ \
s_\alpha(\beta)+\langle \alpha,\beta^\text{\tiny$\vee$}\rangle \alpha=\beta,\ \
z_{\sigma s_\alpha(\beta)}+\langle \alpha,\beta^\text{\tiny$\vee$}\rangle z_{\sigma(\alpha)}=z_{\sigma(\beta)}
\]
the conclusion follows.
\medskip
Consider the second case of the inductive definition. We have
\begin{align*}
d_\beta(M(\omega,\sigma))& =d_\beta(s^\mu_\alpha(M(\omega',\sigma s_\alpha))-h\, z_{\sigma(\alpha)})\\
&=s^\mu_\alpha d_{\beta^\alpha}(M(\omega',\sigma s_\alpha))\\
&=s^\mu_\alpha(z_{\sigma s_\alpha(\beta^\alpha)}-z_{\omega'(\beta^\alpha)})\\
& =s^\mu_\alpha(z_{\sigma (\beta)}-z_{\omega(\beta)})\\
& =z_{\sigma (\beta)}-z_{\omega(\beta)}.
\end{align*}
\end{proof}
Note that both cases of the inductive definition \eqref{def-qinduction} are linear in the $\mu$ variables.
If both cases are applicable in the above proof, then the divided difference $d_\beta$ gives the same result for any root~$\beta$:
$$d_\beta(s^\mu_\alpha(M(\omega',\sigma))+\mu_\alpha\, z_{\sigma(\alpha)})=
d_\beta(
s^\mu_\alpha(M(\omega',\sigma s_\alpha))-h\,z_{\sigma(\alpha)})\,.$$
It follows that the quadratic forms are equal:
$$s^\mu_\alpha(M(\omega',\sigma))+\mu_\alpha\, z_{\sigma(\alpha)}=
s^\mu_\alpha(M(\omega',\sigma s_\alpha))-h\,z_{\sigma(\alpha)}\,.$$
Therefore the two cases of the formula \eqref{def-qinduction} do not create a contradiction.
\begin{theorem}\label{simple_induction}
Let $\alpha$ be a simple root. Suppose $\sigma\preccurlyeq\omega\preccurlyeq\omega s_\alpha$.
Then
$$M(\omega s_\alpha,\sigma)=M(\omega,\sigma)+\mu_\alpha z_{\omega(\alpha)}.$$
\end{theorem}
\begin{proof}
From the inductive definition of $M$ (case 1) and Proposition \ref{prop88} it follows:
\begin{align*}
M(\omega,\sigma)-M(\omega s_\alpha,\sigma) &=M(\omega,\sigma)-\big(s^\mu_\alpha(M(\omega,\sigma))+\mu_\alpha z_{\sigma(\alpha)}\big)\\
&=\mu_\alpha d_\alpha (M(\omega,\sigma))-\mu_\alpha z_{\sigma(\alpha)}\\
&=\mu_\alpha \big(z_{\sigma(\alpha)}-z_{\omega(\alpha)} -z_{\sigma(\alpha)}\big)\\
&=-\mu_\alpha z_{\omega(\alpha)}.
\end{align*}
\end{proof}
Theorem \ref{simple_induction} is an extension of the property (1.3') of the axiomatic characterization of the weight function to the case of general $G$.
Of course, this inductive step
together with the diagonal data determines all transformation properties $M(\omega,\sigma)$:
\begin{proposition}\label{pr:comparing}Suppose two families of quadratic forms $M_1(\omega,\sigma)$ and $M_2(\omega,\sigma)$ are defined for $\sigma\preccurlyeq\omega$ and satisfy the formula of Theorem \ref{simple_induction}. Moreover suppose $M_1(\sigma,\sigma)=M_2(\sigma,\sigma)$ for all $\sigma\in W$, then $M_1(\omega,\sigma)=M_2(\omega,\sigma)$ for all $\sigma\preccurlyeq\omega$.
\end{proposition}
\begin{proof}We show the equality of the forms inductively, keeping the second variable of $M_i(\omega,\sigma)$ fixed. We can only change $\omega$ by a simple reflection $s_i$. The starting point for the induction is $M_1(\sigma,\sigma)=M_2(\sigma,\sigma)$. Increasing the length of $\omega$ by 1 we can arrive at $M_1(\omega_0,\sigma)=M_2(\omega_0,\sigma)$. Now, decreasing the length of $\omega$, we can go down to any $\omega$ satisfying $\sigma\prec\omega\prec\omega_0$.
\end{proof}
\section{Weight function vs lexicographically smallest reduced word} \label{eg:FL3}
Let us revisit Example \ref{GL3recu}, and study the underlying geometry and its relation with the weight functions.
The class $E_{123}(X_{321})$ is calculated in Example \ref{GL3recu} in two ways, one corresponding to the reduced word $s_1s_2s_1$, the other corresponding to the reduced word $s_2s_1s_2$ of the permutation $321$. The two obtained expressions are, respectively,
\begin{equation}\label{s1s2s1}
E_{\id}(X_{s_1s_2s_1})=
\delta\left(\frac{z_2}{z_1},\frac{\mu_3}{\mu_2}\right)
\delta\left(\frac{z_3}{z_2},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\right)
+
\delta\left(\frac{z_1}{z_2},h\right)
\delta\left(\frac{z_3}{z_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{z_2}{z_1},h\right),
\end{equation}
\begin{equation}\label{s2s1s2}
E_{\id}(X_{s_2s_1s_2})=
\delta\left(\frac{z_3}{z_2},\frac{\mu_2}{\mu_1}\right)
\delta\left(\frac{z_2}{z_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{z_3}{z_2},\frac{\mu_3}{\mu_2}\right)
+
\delta\left(\frac{z_2}{z_3},h\right)
\delta\left(\frac{z_3}{z_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{z_3}{z_2},h\right).
\end{equation}
As we mentioned, the equality of these two expressions follows from the general theory of Borisov and Libgober, or can be shown to be equivalent to the {\em four term identity} \cite[eq.~(2.7)]{RTV}.
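The equality can also be confirmed numerically. The sketch below is not part of the text; it assumes the conventions $\vartheta(x)=(x^{1/2}-x^{-1/2})\prod_{k\geq1}(1-q^kx)(1-q^k/x)$ and $\delta(a,b)=\vartheta'(1)\vartheta(ab)/(\vartheta(a)\vartheta(b))$, and any overall normalization of $\delta$ cancels, since both sides are cubic in $\delta$. It compares \eqref{s1s2s1} and \eqref{s2s1s2} at generic values:

```python
import math

# Numerical comparison of the two expressions for E_id(X_{321}).
# Assumed (hypothetical) conventions:
#   theta(x) = (x^{1/2}-x^{-1/2}) prod_{k>=1}(1-q^k x)(1-q^k/x),
#   delta(a,b) = theta'(1) theta(ab) / (theta(a) theta(b)).

Q, N = 0.15, 80                      # nome and product truncation order

def theta(x):
    v = math.sqrt(x) - 1.0 / math.sqrt(x)
    for k in range(1, N):
        v *= (1.0 - Q**k * x) * (1.0 - Q**k / x)
    return v

tp1 = 1.0
for k in range(1, N):
    tp1 *= (1.0 - Q**k) ** 2         # theta'(1) = prod (1-q^k)^2

def delta(a, b):
    return tp1 * theta(a * b) / (theta(a) * theta(b))

z1, z2, z3 = 0.8, 1.7, 2.9
m1, m2, m3 = 1.2, 0.5, 3.1
h = 0.8

# eq. (s1s2s1)
e1 = (delta(z2/z1, m3/m2) * delta(z3/z2, m3/m1) * delta(z2/z1, m2/m1)
      + delta(z1/z2, h) * delta(z3/z1, m3/m1) * delta(z2/z1, h))
# eq. (s2s1s2)
e2 = (delta(z3/z2, m2/m1) * delta(z2/z1, m3/m1) * delta(z3/z2, m3/m2)
      + delta(z2/z3, h) * delta(z3/z1, m3/m1) * delta(z3/z2, h))
assert abs(e1 - e2) <= 1e-8 * max(1.0, abs(e1), abs(e2))
```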
\medskip
To explore the underlying geometry, let $u_{21}$, $u_{31}$ and $u_{32}$ be the coordinates in the standard affine neighborhood of the fixed point $x_{123}$, that is, $u_{ij}$ are the entries of the lower-triangular $3\times 3$ matrices. The character of the coordinate $u_{ij}$ is equal to $z_i/z_j$. The open cell $X^\circ_{s_1s_2s_1}$ intersected with this affine neighborhood is the complement of the sum of divisors
$$
\{u_{31}-u_{21}u_{32}= 0\}\quad \text{and}\quad
\{ u_{31}= 0\}\,.$$
The intersection of the divisors is singular. The resolution corresponding to the word $s_1s_2s_1$ coincides with the blow-up of the $u_{21}$ axis. The fiber above $0$ contains two fixed points, which give the two contributions in \eqref{s1s2s1}. Analogously, the expression obtained by blowing up the $u_{32}$ axis is \eqref{s2s1s2}.
\smallskip
The reader is invited to verify that the (long) calculation for
\[
\frac{\ww_{321,123}}{{e^{\rm ell}}(T_{\id}\!\Fl(3))}=
\frac{\ww_{321,123}}{\vartheta({z_2}/{z_1})\vartheta({z_3}/{z_1})\vartheta({z_3}/{z_2})}
\]
results in exactly the expression \eqref{s1s2s1}. This and many other examples calculated by us suggest the following conjecture: the weight function formula for $\ww_{\omega,\sigma}/{{e^{\rm ell}}(T_{\sigma}\!\Fl(n))}$ coincides (without using any $\vartheta$-function identities) with the expression obtained for $E_\sigma(X_\omega)$ using the {\em lexicographically smallest} reduced word for $\omega$.
\section{Action of $C$-operations on weight functions}
We still consider the case of $G=\GL_n$.
Let $\hC_k$ be a family of operators on the space of meromorphic functions on ${\mathfrak t}\times{\mathfrak t}^*\times\C$ indexed by the simple roots:
\begin{equation}\label{eq:c-hat}\hC_k(f)(z,\gamma,\lambda,h)=\delta(L^\gamma_k,\nu_k)\cdot f(z,\gamma,s_k(\lambda),h)\,+\,\delta(L^\gamma_k,h)\cdot f(s_k(z),\gamma,s_k(\lambda),h)\,.\end{equation}
Here $$L_k^\gamma=\frac{\gamma_{k+1}}{\gamma_k}$$ denotes the character of the relative tangent bundle $G/B\to G/P_k$ (equal to ${\rm e}^{-\alpha_k}$ at the point $x_{\id}$), but living in the $\gamma$-copy of variables.
We also recall that $$\nu_k=\frac{\mu_{k+1}}{\mu_k}=h^{\langle-,\alpha^\text{\tiny$\vee$}_k\rangle}\,.$$
The operators are constructed in such a way that they descend to the operators $C_k$ acting on the K theory of $\F(n)$. One may think about them as acting on $Frac(K_{{\bf T}\times{\bf T}}({\rm Hom}(\C^n,\C^n))\otimes \RR({\bf T}^*\times \C^*))[[q]]$.
\begin{theorem}The operators $\hC_k$ satisfy \begin{itemize}
\item braid relations,
\item $\hC_k^2=\kappa(\alpha_k)$.
\end{itemize}
\end{theorem}
\begin{proof}This is a straightforward check, repeating the proof of Theorem \ref{th:kappa}.\end{proof}
Let us set ${\mathfrak w}_{id}=\wwh_{id}/{e^{\rm ell}}_\gamma$, where ${e^{\rm ell}}_\gamma$ is the elliptic Euler class written in $\gamma$ variables.
The operators $\hC_k$ recursively define the functions ${\mathfrak w}_\omega$ living in the same space as the weight functions $\wwh_\omega$. Their restrictions to the fixed points of $\F(n)$ are equal to the restrictions of $\wwh_\omega$ divided by the elliptic Euler class. Nevertheless, one can check that they are essentially different from the weight functions: the difference lies in the ideal defining the K theory of $\F(n)$.
\begin{example}\rm Let $G=\GL_2$.
Setting
$${\mathfrak w}_{id}=\frac{\wwh_{id}}{{e^{\rm ell}}_\gamma}=
\frac{\vartheta \left(\frac{z_1}{\gamma _1}\right) \vartheta
\left(\frac{z_2}{\gamma _1}\right) \delta
\left(\frac{z_1}{\gamma _1},\frac{h \mu _2}{\mu
_1}\right)}{\vartheta \left(\frac{\gamma _2}{\gamma
_1}\right)}=\frac{\vartheta \left(\frac{z_2}{\gamma _1}\right) \vartheta
\left(\frac{h \mu _2 z_1}{\gamma _1 \mu
_1}\right)}{\vartheta \left(\frac{\gamma _2}{\gamma
_1}\right) \vartheta \left(\frac{h \mu _2}{\mu
_1}\right)}$$
we obtain
$${\mathfrak w}_{s_1}=\frac{\vartheta \left(\frac{\gamma _2 \mu _2}{\gamma
_1 \mu _1}\right) \vartheta \left(\frac{z_2}{\gamma
_1}\right) \vartheta \left(\frac{h \mu _1 z_1}{\gamma _1
\mu _2}\right)}{\vartheta \left(\frac{\gamma _2}{\gamma
_1}\right){\!}^2\, \vartheta \left(\frac{\mu _2}{\mu
_1}\right) \vartheta \left(\frac{h \mu _1}{\mu
_2}\right)}-\frac{\vartheta \left(\frac{\gamma _2
h}{\gamma _1}\right) \vartheta \left(\frac{z_2}{\gamma
_2}\right) \vartheta \left(\frac{h \mu _1 z_1}{\gamma _2
\mu _2}\right)}{\vartheta \left(\frac{\gamma _2}{\gamma
_1}\right){\!}^2\, \vartheta (h)\, \vartheta \left(\frac{h \mu
_1}{\mu _2}\right)}
$$
Note that the operations $\hC_i$ (as well as $C_i$) do not preserve the transformation properties of their argument: the summands of \eqref{eq:c-hat} might have different transformation properties.
The equality holds in the quotient ring $K_T(\GL_2/B)$.
\end{example}
\section{A tale of two recursions for weight functions}\label{sec:tale}
\subsection{Bott-Samelson recursion for weight functions}
The main achievement in Sections~\ref{sec:EllWeight}--\ref{eg:FL3} was the identification of the geometrically defined ${E^{\rm K}}(X_\omega)$ classes with the weight functions whose origin is in representation theory. The way our identification went was through recursions. The elliptic classes satisfied the Bott-Samelson recursion of Theorem \ref{th:mainiduction}, and the weight functions satisfied the R-matrix recursion of \eqref{eq:Rmatrix}. In Proposition \ref{pro:ERrecursion} we showed that the two recursions are consistent, and hence both recursions hold for both objects.
One important consequence is that (the fixed point restrictions of) weight functions satisfy the Bott-Samelson recursion, as follows:
\begin{theorem} We have
\[
[\wwh_{\omega s_k}] =
\begin{cases}
s_k^\mu [\wwh_{\omega}] \cdot \delta( \frac{\gamma_{k+1}}{\gamma_k},\frac{\mu_{k+1}}{\mu_k})
-
s_k^\mu s_k^\gamma[\wwh_{\omega}] \cdot \delta(\frac{\gamma_{k+1}}{\gamma_k},h)
& \text{if } \ell(\omega s_k)>\ell(\omega) \\
\\
s_k^\mu[\wwh_{\omega}] \cdot
\frac{\delta( \frac{\gamma_{k+1}}{\gamma_k},\frac{\mu_{k+1}}{\mu_k})}
{\delta(\frac{\mu_k}{\mu_{k+1}},h)\delta(\frac{\mu_{k+1}}{\mu_k},h)}
-
s_k^\mu s_k^\gamma[\wwh_{\omega}] \cdot \frac{\delta(\frac{\gamma_{k+1}}{\gamma_k},h)}{\delta(\frac{\mu_k}{\mu_{k+1}},h)\delta(\frac{\mu_{k+1}}{\mu_k},h)}
& \text{if } \ell(\omega s_k)<\ell(\omega),
\end{cases}
\]
or equivalently, using the normalization
\begin{equation}\label{defmodd}
\wwh'_{\omega}=\wwh_{\omega} \cdot \frac{1}{\prod_{i<j,\omega(i)>\omega(j)} \delta(\frac{\mu_i}{\mu_j},h)},
\end{equation}
we have the unified form
\begin{equation}\label{uniBSW}
[\wwh'_{\omega s_k}] =
s_k^\mu[\wwh'_{\omega}] \cdot
\frac{\delta( \frac{\gamma_{k+1}}{\gamma_k},\frac{\mu_{k+1}}{\mu_k})}
{\delta(\frac{\mu_k}{\mu_{k+1}},h)}
-
s_k^\mu s_k^\gamma[\wwh'_{\omega}] \cdot \frac{\delta(\frac{\gamma_{k+1}}{\gamma_k},h)}{\delta(\frac{\mu_k}{\mu_{k+1}},h)}.
\end{equation} \qed
\end{theorem}
This new property of weight functions plays an important role in a followup paper \cite{Sm2} in connection with elliptic stable envelopes. It is worth pointing out that the normalization \eqref{defmodd} also makes the R-matrix property of \eqref{eq:Rmatrix} unified:
\begin{equation}\label{uniRW}
\wwh'_{s_k \omega} =
\wwh'_{\omega} \cdot
\frac{\delta( \frac{\mu_{\omega^{-1}(k+1)}}{\mu_{\omega^{-1}(k)}},\frac{z_{k+1}}{z_k})}
{\delta( \frac{\mu_{\omega^{-1}(k)}}{\mu_{\omega^{-1}(k+1)}},h)}
+
s_k^z{\wwh'_{\omega}} \cdot \frac{\delta(\frac{z_k}{z_{k+1}},h)}
{\delta( \frac{\mu_{\omega^{-1}(k)}}{\mu_{\omega^{-1}(k+1)}},h)}.
\end{equation}
There is, however, an essential difference between \eqref{uniBSW} and \eqref{uniRW}: the latter holds for the weight functions themselves, while the former only holds for the cosets $[\wwh']$ of the $\wwh'$ functions (i.e.\ after restriction). Already for $n=2$, $\omega=\id$, $k=1$ the two sides of \eqref{uniBSW} agree only after restriction to the fixed points, not as $\wwh'$ functions.
In essence, the remarkable geometric object ${E^{\rm K}}(X_\omega)$ satisfies two different recursions: the Bott-Samelson recursion and the R-matrix recursion. The weight functions are the lifts of ${E^{\rm K}}(X_\omega)$ classes satisfying only one of these recursions.
\subsection{Two recursions for the local elliptic classes}
Although we already stated and proved that both Bott-Samelson and R-matrix recursions hold for the elliptic classes, let us rephrase these statements in a convenient normalization.
\begin{itemize}
\item Let $\zeta_k\in {\rm Hom}({\bf T},\C^*)$ be the inverse of a root written multiplicatively, e.g. $\zeta_k=\frac{z_{k+1}}{z_k}$ for $G=\GL_n$.
\item Let $\nu_k\in {\rm Hom}(\C^*,{\bf T})$ be the inverse of a coroot written multiplicatively, e.g. $\nu_k=\frac{\mu_{k+1}}{\mu_k}$ for $G=\GL_n$.
\end{itemize}
For $\nu\in {\rm Hom}(\C^*,{\bf T})$ define
\[
\kappa(\nu)=\delta\big(h,\nu\big)\,\delta\big(h,\nu^{-1}\big).
\]
The Bott-Samelson recursion for local elliptic classes is:
\begin{multline*}
\delta\left(\sigma^z(\zeta_k),\nu_k\right)
\cdot s^\mu_kE_\sigma(X_\omega) +
\delta\left(\sigma^z(\zeta_k),h\right)\cdot
s^\mu_kE_{\sigma s_k}(X_\omega)=\\
=\begin{cases}E_\sigma(X_{\omega s_k})& \text{if } \ell(\omega s_k)=\ell(\omega)+1\\ \\
\kappa\left(\nu_k\right)
E_\sigma(X_{\omega s_k})&\text{if }\ell(\omega s_k)=\ell(\omega)-1,\end{cases}
\end{multline*}
and the R-matrix recursion for local elliptic classes is
\begin{multline*}
\delta\left(\zeta_k,\bb\omega^\mu(\nu_k)\right)
\cdot E_\sigma(X_\omega) +
s_k^z\left(\delta\left(\zeta_k,h\right)\cdot
E_{s_k\sigma}(X_\omega)\right)=\\
=\begin{cases}E_\sigma(X_{s_k\omega})& \text{if } \ell(s_k\omega)=\ell(\omega)+1\\ \\
\kappa\left(\bb\omega^\mu(\nu_k)\right)
E_\sigma(X_{s_k\omega})&\text{if }\ell(s_k\omega)=\ell(\omega)-1.\end{cases}
\end{multline*}
\noindent The $G=\GL_n$ special case of the above two formulas is
\begin{multline*}
\delta\left(\frac{z_{\sigma(k + 1)}}{z_{\sigma(k)}},\frac{ \mu_{k + 1}}{\mu_{k}}\right)
\cdot s^\mu_kE_\sigma(X_\omega) +
\delta\left(\frac{z_{\sigma(k+1)} }{z_{\sigma(k)}},h\right)\cdot
s^\mu_kE_{\sigma s_k}(X_\omega)=\\
=\begin{cases}E_\sigma(X_{\omega s_k})& \text{if } \ell(\omega s_k)=\ell(\omega)+1\\ \\
\delta\left(h,\frac{ \mu_{k + 1}}{\mu_{k}}\right)
\delta\left(h,\frac{ \mu_{k}}{\mu_{k+1}}\right)
E_\sigma(X_{\omega s_k})&\text{if }\ell(\omega s_k)=\ell(\omega)-1,
\end{cases}\end{multline*}
\begin{multline*}
\delta\left(\frac{z_{k + 1}}{z_k},\frac{ \mu_{\omega^{-1}(k + 1)}}{\mu_{\omega^{-1}(k)}}\right)
\cdot E_\sigma(X_\omega) +
\delta\left(\frac{z_k }{z_{k+1}},h\right)\cdot
s_k^zE_{s_k\sigma}(X_\omega)=\\
=\begin{cases}E_\sigma(X_{s_k\omega})& \text{if } \ell(s_k\omega)=\ell(\omega)+1\\ \\
\delta\left(h,\frac{ \mu_{\omega^{-1}(k + 1)}}{\mu_{\omega^{-1}(k)}}\right)
\delta\left(h,\frac{ \mu_{\omega^{-1}(k )}}{\mu_{\omega^{-1}(k+1)}}\right)
E_\sigma(X_{s_k\omega})&\text{if }\ell(s_k\omega)=\ell(\omega)-1.
\end{cases}\end{multline*}
\section{Tables}
\subsection{$G=\GL_3$}
Below we give the full table of localized elliptic classes $E_\sigma(X_\omega)$:
$$
\def2{2}
\begin{array}{|c|c|c|c|c|c|}\hline
_\sigma\!\!\!{\text{\large$\diagdown$}}\!\!^\omega & ~~\id~~ & s_1 & s_2 & s_1s_2 & s_2s_1 \\
\hline\hline
\id & 1 & \delta\!\big( \frac{z_2}{z_1},\frac{\mu _2}{\mu _1}\big) & \delta\!\big( \frac{z_3}{z_2},\frac{\mu _3}{\mu _2}\big) &
\delta\!\big( \frac{z_2}{z_1},\frac{\mu _3}{\mu _1}\big) \delta\!\big( \frac{z_3}{z_2},\frac{\mu _3}{\mu _2}\big) & \delta\!\big(
\frac{z_2}{z_1},\frac{\mu _2}{\mu _1}\big) \delta\!\big( \frac{z_3}{z_2},\frac{\mu _3}{\mu _1}\big) \\
\hline
s_1 & 0 & \delta\!\big( \frac{z_1}{z_2},h\big) & 0 & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big(
\frac{z_3}{z_1},\frac{\mu _3}{\mu _2}\big) & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big(
\frac{z_3}{z_2},\frac{\mu _3}{\mu _1}\big) \\
\hline
s_2 & 0 & 0 & \delta\!\big( \frac{z_2}{z_3},h\big) & \delta\!\big( \frac{z_2}{z_3},h\big) \delta\!\big(
\frac{z_2}{z_1},\frac{\mu _3}{\mu _1}\big) & \delta\!\big( \frac{z_2}{z_3},h\big) \delta\!\big(
\frac{z_3}{z_1},\frac{\mu _2}{\mu _1}\big) \\
\hline
s_1s_2 & 0 & 0 & 0 & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big( \frac{z_1}{z_3},h\big) & 0 \\\hline
s_2s_1 & 0 & 0 & 0 & 0 & \delta\!\big( \frac{z_1}{z_3},h\big) \delta\!\big( \frac{z_2}{z_3},h\big) \\\hline
s_1s_2s_1 & 0 & 0 & 0 & 0 & 0 \\\hline
\end{array}
$$
$$\def2{2}
\begin{array}{|c|c|}
\hline
_\sigma\!\!\!{\text{\large$\diagdown$}}\!\!^\omega& s_1s_2s_1=s_2s_1s_2
\\
\hline
\hline
\id & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big( \frac{z_2}{z_1},h\big) \delta\!\big( \frac{z_3}{z_1},\frac{\mu _3}{\mu
_1}\big)+\delta\!\big( \frac{z_2}{z_1},\frac{\mu _2}{\mu _1}\big) \delta\!\big( \frac{z_2}{z_1},\frac{\mu _3}{\mu
_2}\big) \delta\!\big( \frac{z_3}{z_2},\frac{\mu _3}{\mu _1}\big) \\
\hline
s_1 & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big( \frac{z_1}{z_2},\frac{\mu _2}{\mu _1}\big) \delta\!\big(
\frac{z_3}{z_1},\frac{\mu _3}{\mu _1}\big)+\delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big(
\frac{z_2}{z_1},\frac{\mu _3}{\mu _2}\big) \delta\!\big( \frac{z_3}{z_2},\frac{\mu _3}{\mu _1}\big) \\
\hline
s_2 & \delta\!\big( \frac{z_2}{z_1},\frac{\mu _3}{\mu _2}\big) \delta\!\big( \frac{z_2}{z_3},h\big) \delta\!\big(
\frac{z_3}{z_1},\frac{\mu _2}{\mu _1}\big) \\
\hline
s_1s_2 & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big( \frac{z_1}{z_3},h\big) \delta\!\big( \frac{z_3}{z_2},\frac{\mu
_2}{\mu _1}\big) \\
\hline
s_2s_1 & \delta\!\big( \frac{z_2}{z_1},\frac{\mu _3}{\mu _2}\big) \delta\!\big( \frac{z_1}{z_3},h\big) \delta\!\big(
\frac{z_2}{z_3},h\big) \\
\hline
s_1s_2s_1 & \delta\!\big( \frac{z_1}{z_2},h\big) \delta\!\big( \frac{z_1}{z_3},h\big) \delta\!\big( \frac{z_2}{z_3},h\big)\\
\hline
\end{array}
$$
\subsection{$G={\rm Sp}_2$} The Weyl group is generated by two reflections:
$$\begin{array}{llllll}
s_1&\text{reflection in }& \alpha_1=(1,-1),&\text{the dual root }&\alpha_1^\text{\tiny$\vee$}=(1,-1),\\
s_2&\text{reflection in }& \alpha_2=(0,2), &\text{the dual root }&\alpha_2^\text{\tiny$\vee$}=(0,1).
\end{array}
$$
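As a quick numerical sanity check (a sketch of our own, not part of the paper's formalism; the names `reflection` and `word` are ours), one can represent $s_1$ and $s_2$ by their action on weight vectors in $\mathbb{R}^2$ via the standard reflection formula $s_\alpha(v)=v-2\frac{(v,\alpha)}{(\alpha,\alpha)}\alpha$:

```python
from fractions import Fraction

def reflection(alpha):
    """Return the map v -> v - 2 (v,alpha)/(alpha,alpha) * alpha on R^2."""
    a1, a2 = alpha
    norm2 = a1 * a1 + a2 * a2
    def s(v):
        c = Fraction(2 * (v[0] * a1 + v[1] * a2), norm2)
        return (v[0] - c * a1, v[1] - c * a2)
    return s

def word(*maps):
    """Compose reflections left-to-right: word(s1, s2)(v) applies s2 first."""
    def f(v):
        for m in reversed(maps):
            v = m(v)
        return v
    return f

# the two simple reflections for Sp_2 (type C_2)
s1 = reflection((1, -1))   # alpha_1 = (1,-1): swaps the two coordinates
s2 = reflection((0, 2))    # alpha_2 = (0,2): negates the second coordinate
```

On the basis vectors one checks that $s_1s_2s_1s_2$ and $s_2s_1s_2s_1$ both act as $-\mathrm{id}$, confirming the relation stated below.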
We have $s_1s_2s_1s_2=s_2s_1s_2s_1$. The full table of localized elliptic classes is given below:
$$\def2{2}\begin{array}{|c|c|c|c|c|c|}\hline
_\sigma\!\!\!{\text{\large$\diagdown$}}\!\!^\omega&~~\id~~&s_1&s_2&s_1s_2&s_2s_1\\ \hline\hline
\text{id}&1&\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)&\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)&\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)&\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\\ \hline
s_1&0&\delta\!\big(\frac{z_1}{z_2},h\big)&0&\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)&\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\\ \hline
s_2&0&0&\delta\!\big( z_2^2,h\big)&\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)&\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{\mu_2}{\mu_1}\big)\\ \hline
s_1s_2&0&0&0&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},h\big)&0\\ \hline
s_2s_1&0&0&0&0&\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\\ \hline
s_1s_2s_1&0&0&0&0&0\\ \hline
s_2s_1s_2&0&0&0&0&0\\ \hline
s_1s_2s_1s_2&0&0&0&0&0\\ \hline
\end{array} $$
{
\tiny
$$\arraycolsep=1.4pt\def2{2}\begin{array}{|c|c|c|}\hline
_\sigma\!\!\!{\text{\large$\diagdown$}}\!\!^\omega&s_1s_2s_1&s_2s_1s_2\\ \hline \hline
\text{id}&\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{z_2}{z_1},h\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)+\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)
&\delta\!\big(\frac{1}{z_2^2},h\big)\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)+\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)
\\ \hline
s_1&\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_1}{z_2},\frac{\mu_2}{\mu_1}\big)+\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)
&\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)
\\ \hline
s_2&\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{\mu_2}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)
&\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)+\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big( z_2^2,\frac{1}{\mu_2}\big)
\\ \hline
s_1s_2&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{\mu_2}{\mu_1}\big)
&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)
\\ \hline
s_2s_1&\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)
&\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)
\\ \hline
s_1s_2s_1&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big( z_1z_2,h\big)
&0
\\ \hline
s_2s_1s_2&0
&\delta\!\big( z_1^2,h\big)\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)
\\ \hline
s_1s_2s_1s_2&0&0\\ \hline
\end{array} $$
$$\arraycolsep=1.4pt\def2{2}\begin{array}{|c|c|}\hline
_\sigma\!\!\!{\text{\large$\diagdown$}}\!\!^\omega&s_1s_2s_1s_2=s_2s_1s_2s_1\\ \hline
\text{id}&\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{z_2}{z_1},h\big)+\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)+\delta\!\big(\frac{1}{z_2^2},h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big( z_2^2,h\big)\\ \hline
s_1&\delta\!\big(\frac{1}{z_1^2},h\big)\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)+\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)+\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big(\frac{z_1}{z_2},h\big)\\ \hline
s_2&\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big(\frac{z_2}{z_1},h\big)\delta\!\big( z_2^2,h\big)+\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big( z_2^2,h\big)+\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big( z_2^2,\frac{1}{\mu_2}\big)\delta\!\big( z_2^2,h\big)\\ \hline
s_1s_2&\delta\!\big( z_1^2,h\big)\delta\!\big( z_1^2,\frac{1}{\mu_2}\big)\delta\!\big(\frac{1}{z_1z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)+\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_1}\big)\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},\frac{1}{\mu_1\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)+\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_1}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big(\frac{z_1}{z_2},h\big)\\ \hline
s_2s_1&\delta\!\big(\frac{1}{z_1^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\\ \hline
s_1s_2s_1&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{1}{z_2^2},\frac{1}{\mu_2}\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big( z_1z_2,h\big)\\ \hline
s_2s_1s_2&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_2}{z_1},\frac{\mu_2}{\mu_1}\big)\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\\ \hline
s_1s_2s_1s_2&\delta\!\big( z_1^2,h\big)\delta\!\big(\frac{z_1}{z_2},h\big)\delta\!\big( z_1z_2,h\big)\delta\!\big( z_2^2,h\big)\\ \hline
\end{array} $$
}
\section{Introduction and discussion}
The spontaneous breaking of supersymmetry is one of the most challenging aspects of realistic model building within String Theory and Supergravity.
The phenomenon itself,
in four-dimensional N=1 supergravity,
is well-understood and there are various models that can describe the supersymmetry breaking sector \cite{FVP,Kallosh:2014oja}.
For example,
if we study a single chiral superfield coupled to supergravity with K\"ahler potential and superpotential \cite{Polonyi:1977pj}\footnote{We will not review four-dimensional N=1 supergravity here,
rather we refer the reader to the book of Wess and Bagger \cite{WB} for a superspace description of off-shell supergravity and the book of Freedman and Van Proeyen \cite{FVP}
for a tensor calculus description. In this work we set $M_P=1$.}
\begin{eqnarray}
K = \Phi \overline \Phi \, , \quad W = \mu \left( \Phi -\beta \right) \, ,
\end{eqnarray}
the system will have a stable Minkowski vacuum
for $\beta = 2 - \sqrt 3$
with gravitino mass
\begin{eqnarray}
m_{3/2} = \mu e^{\beta} \, .
\end{eqnarray}
Because we are in a Minkowski vacuum the gravitino mass will also match with the supersymmetry breaking scale
\begin{eqnarray}
{\rm F}_{\text{SUSY}} = \sqrt{ \langle {\cal V} \rangle + 3 m_{3/2}^2 } = \sqrt 3 m_{3/2} \, .
\end{eqnarray}
For further properties of this system, which is referred to as the {\it Polonyi model}, see \cite{Polonyi:1977pj,FVP}.
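As an illustration (a minimal numerical sketch; the function name and the restriction to a real scalar are ours), the Minkowski vacuum can be checked directly from the standard N=1 scalar potential ${\cal V} = e^{K}\left(|W_\Phi + K_\Phi W|^2 - 3|W|^2\right)$ with $M_P=1$:

```python
import math

BETA = 2 - math.sqrt(3)

def polonyi_potential(a, mu=1.0, beta=BETA):
    """Polonyi scalar potential for K = a^2, W = mu (a - beta), real scalar a."""
    W = mu * (a - beta)
    DW = mu + a * W          # Kahler-covariant derivative W_Phi + K_Phi W
    return math.exp(a * a) * (DW * DW - 3 * W * W)
```

The vacuum sits at $\langle\Phi\rangle = 1-\sqrt 3$, where the potential vanishes, nearby values are positive, and $m_{3/2}=e^{K/2}|W|=\mu\,e^{\beta}$, reproducing the gravitino mass quoted above.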
The Polonyi model is flexible enough to allow for two important types of deformation.
Firstly,
one can slightly reduce the $\beta$ parameter in the superpotential and construct a stable de Sitter vacuum (see e.g. \cite{Akrami:2018ylq}).
A second deformation is to include a $-\lambda |\Phi|^4$ term in the K\"ahler potential
that will induce an extra contribution to the mass of the scalar proportional to $\lambda \mu^2$.
The Polonyi model can be considered as an F-type supersymmetry breaking model in supergravity.
This is because if we use the chiral superfield expansion
\begin{eqnarray}
\Phi = A + \sqrt 2 \Theta \chi + \Theta^2 F \, ,
\end{eqnarray}
one can check that it is the auxiliary field $F$ that gets a vev and triggers the spontaneous supersymmetry breaking.
In contrast,
when the auxiliary field $M$ (or $F^0$ in the superconformal setup) of the supergravity multiplet gets a vev, supersymmetry is not actually broken;
rather we get anti-de Sitter supergravity \cite{FVP}.
Other notable types of F-term supersymmetry breaking models are the {\it no-scale} models \cite{Cremmer:1983bf},
and the models related to {\it higher order} F-potentials (see e.g. \cite{Koehn:2012ar,Farakos:2012qu,Nitta:2018yzb}).
Another important class of supersymmetry breaking models is the D-term type,
which will also be the main topic of this contribution.
Let us first remind the reader that an abelian vector multiplet coupled to supergravity is described by the
real superfield $V$ that transforms under gauge transformations as
\begin{eqnarray}
\label{gaugeV}
V \to V + i S - i \overline S \, ,
\end{eqnarray}
where the superfield $S$ is chiral.
The vector superfield $V$
has a chiral superfield field strength given by
\begin{equation}
{\cal W}_\alpha(V) = - \frac14 \left( \overline{\cal D}^2 - 8 {\cal R} \right) {\cal D}_\alpha V \, .
\end{equation}
Here ${\cal R}$ is a chiral superfield of the supergravity sector with lowest component given by the auxiliary field $M$,
that is ${\cal R}|=-M/6$.
The component fields of the vector multiplet are then given by
\begin{equation}
\label{Wcomp}
{\cal W}_\alpha | = - i \lambda_\alpha \, , \quad
{\cal D}_{(\alpha} {\cal W}_{\beta)}| = - 2 i \sigma^{ab\ \rho}_{\ \ \alpha} \epsilon_{\rho \beta} \hat D_a v_b \, ,
\quad
{\cal D W}| = - 2 {\rm D} \, ,
\end{equation}
where $\hat D_b v_a$ is the supercovariant derivative of the abelian vector and can be found in \cite{WB}.
The Weyl spinor $\lambda$ is the gaugino and D is a real scalar auxiliary field.
Under a supersymmetry transformation the gaugino transforms as
\begin{eqnarray}
\delta \lambda_\alpha = -2 i \epsilon_\alpha \text{D} - 2 (\sigma^{ab} \epsilon)_\alpha \hat F_{ab} \, ,
\end{eqnarray}
where $\hat F_{ab} = \hat D_a v_b - \hat D_b v_a$.
When supersymmetry is broken by the D-term the gaugino contributes to the goldstino,
and it can always be set to vanish by a gauge transformation.
This gauge choice is identified with the unitary gauge when the supersymmetry breaking is sourced completely by the D-term;
therefore in that case we have:
\begin{eqnarray}
\label{UUGG}
\text{Unitary gauge for pure D-term breaking:} \quad \quad \lambda_\alpha = 0 \, .
\end{eqnarray}
The simplest construction that allows one to introduce D-term supersymmetry breaking is to embed the
Fayet--Iliopoulos (FI) model \cite{Fayet:1974jb} in supergravity.
The central ingredient of the FI model is the FI term,
which in global supersymmetry is given by
\begin{eqnarray}
{\cal L}_{FI} = -2 \sqrt 2 \xi \int d^4 \theta V = - \sqrt 2 \xi \text{D} \, .
\end{eqnarray}
In practice one simply has to embed a term of the form $e \text{D}$ in supergravity,
where $e$ is the determinant of the vielbein.
Because of the properties of the real superspace density $E$ of the old-minimal supergravity,
a term of the form
\begin{eqnarray}
\label{wbFI}
\int d^4 \theta \, E \, V \, ,
\end{eqnarray}
is not gauge invariant.
Indeed, since in the old-minimal superspace formulation of supergravity we have
\begin{eqnarray}
\int d^4 x \, d^4 \theta \, E \, S \ne 0 \, ,
\end{eqnarray}
the term \eqref{wbFI} is not invariant under \eqref{gaugeV}.
The embedding of the FI term in old-minimal N=1 supergravity is however possible if we gauge the $U(1)_{\rm R}$ symmetry.
This was originally done by Freedman in \cite{Freedman:1976uk}.
Let us first recall that under a super-Weyl rescaling the superspace integral together with the density transform as \cite{WB}
\begin{eqnarray}
\int d^4 \theta \, E \, \to \, \int d^4 \theta \, E \, \text{e}^{2 \Sigma + 2 \overline \Sigma} \, .
\end{eqnarray}
As a result the term \cite{Stelle:1978wj,VanProeyen:1979ks}
\begin{eqnarray}
{\cal L}_{\text{standard FI}} = -3 \int d^4 \theta \, E \, \text{e}^{2 \sqrt 2 \xi V/3}
= -3 \int d^4 \theta \, E
-2 \sqrt 2 \xi \int d^4 \theta \, E \, V + {\cal O}(V^2) \, ,
\end{eqnarray}
is gauge invariant if we have
\begin{eqnarray}
\label{R-gauging}
\frac{2 \sqrt 2}{3} iS \xi = -2 \Sigma \, .
\end{eqnarray}
In this way we arrive at the {\it Freedman model} \cite{Freedman:1976uk} that is given in superspace by
\begin{eqnarray}
{\cal L} = -3 \int d^4 \theta \, E \, \text{e}^{2 \sqrt 2 \xi V/3}
+ \frac14 \left( \int d^2 \Theta \, 2{\cal E} \, {\cal W}^2 + c.c. \right) \, .
\end{eqnarray}
Notice that the superspace kinetic term of the vector multiplet is gauge invariant and super-Weyl invariant independently,
because under a super-Weyl transformation we have
\begin{eqnarray}
\int d^2 \Theta \, 2{\cal E} \, \to \int d^2 \Theta \, 2{\cal E} \, \text{e}^{6 \Sigma} \, , \quad {\cal W}_\alpha(V) \, \to \, {\cal W}_\alpha(V) \, \text{e}^{-3 \Sigma} \, .
\end{eqnarray}
In component form we have
\begin{eqnarray}
\label{freedman}
\begin{aligned}
e^{-1} {\cal L}\Big{|}_{\lambda=0}
= & -\frac12 R
+ \frac12 \epsilon^{klmn} \left( \overline \psi_k \overline \sigma_l D_m \psi_n
- \psi_k \sigma_l D_m \overline \psi_n \right)
\\
& -\frac14 F_{mn} F^{mn} + \frac{i}{2} \xi \epsilon^{klmn} \overline \psi_k \overline \sigma_l \psi_n v_m - \xi^2 \, ,
\end{aligned}
\end{eqnarray}
where $D_m$ is the covariant derivative for Lorentz indices that includes the spin-connection $\omega_{ma}^{\ \ \ b}(e,\psi)$.
An inspection of the Lagrangian \eqref{freedman} shows that the {\it R-symmetry is gauged}.
This happens because on one hand the gravitino is by definition charged under the R-symmetry \cite{WB,FVP},
while on the other hand we see that the Lagrangian \eqref{freedman} contains a {\it minimal coupling}
between the gravitino and the FI abelian gauge vector $v_m$.
Therefore,
for the self-consistency of this coupling,
the gauging of the R-symmetry by the FI abelian gauge vector takes place.\footnote{This is of course exactly what the identification \eqref{R-gauging} implies.}
For a discussion on the R-symmetry gauging and the FI terms see \cite{Barbieri:1982ac,VanProeyen:2004xt}.
The supersymmetry breaking scale in the Freedman model is
\begin{eqnarray}
{\rm F}_{\text{SUSY}} = \sqrt{ \langle {\cal V} \rangle + 3 m_{3/2}^2 } = \sqrt{\langle {\cal V} \rangle} = \xi \, .
\end{eqnarray}
Notice that the gravitino has no mass term in \eqref{freedman}.
In the next two sections of this contribution we will review the properties of a new construction
that allows one to embed the FI term in N=1 supergravity without gauging the R-symmetry.
\section{New FI terms in N=1}
To explain the rationale behind the construction of the new FI term presented in \cite{Cribiori:2017laj},
we will start by discussing the {\it non-linear realizations} of supersymmetry within supergravity \cite{Samuel:1982uh}
\footnote{We will not review here the non-linear realizations of supersymmetry,
however, for a recent review see \cite{Cribiori:2019cgz}.}.
When we have an N=1 supergravity theory where supersymmetry is spontaneously broken we can
define a spinor superfield $\Gamma_\alpha$
with the properties
\begin{eqnarray}
\label{SW}
\begin{aligned}
{\cal D}_\alpha \Gamma_\beta &= \epsilon_{\beta \alpha} \left( 1 - 2 \, \Gamma^2 {\cal R} \right) \, ,
\\
\overline{\cal D}^{\dot \beta} \Gamma^\alpha &= 2 i \, \left( \overline \sigma^a \, \Gamma \right)^{\dot \beta} \, {\cal D}_a \Gamma^\alpha
+ \frac12 \, \Gamma^2 {\cal G}^{\dot \beta \alpha} \, .
\end{aligned}
\end{eqnarray}
Here ${\cal G}_a$ is a superfield of the supergravity sector,
which has lowest component ${\cal G}_a| = -b_a/3$.
The important properties of this superfield are that the only independent component field is described by the lowest component
\begin{eqnarray}
\gamma_\alpha = \Gamma_\alpha | \, ,
\end{eqnarray}
and that the supersymmetry transformation of the lowest component has the form
\begin{eqnarray}
\delta \gamma_ \alpha = \epsilon_\alpha(x) + \dots
\end{eqnarray}
Indeed,
the constraints \eqref{SW} imply that the descendant component fields of the $\Gamma_\alpha$
superfield are composite fields that depend only on $\gamma_\alpha$ and on the supergravity sector (see e.g. \cite{Samuel:1982uh,DallAgata:2016syy,Farakos:2017bxs}).
The simplest construction with the $\Gamma$ superfield is given by
\begin{eqnarray}
\label{PPP}
\int d^4 \theta \, E \, \Gamma^2 \overline \Gamma^2 = e + {\cal O}(\gamma, \overline \gamma) \, ,
\end{eqnarray}
which thus gives a positive contribution to the vacuum energy once it is coupled to
supergravity \cite{Samuel:1982uh}.
In addition,
if the unitary gauge is $\gamma=0$, then the term \eqref{PPP} will only contribute to the cosmological constant, as the $\gamma$ terms will drop out.
If however we have a superfield ${\cal U}$ inside the superspace integral together with the $\Gamma$,
then we can generate terms with the lowest component of ${\cal U}|=U$ to be the only non-vanishing
contribution in the $\gamma=0$ gauge.
That is we have
\begin{eqnarray}
\label{EU}
\int d^4 \theta \, E \, {\cal U} \, \Gamma^2 \overline \Gamma^2 = e \, U + {\cal O}(\gamma, \overline \gamma) \, .
\end{eqnarray}
The rationale behind the construction of the new FI term presented in \cite{Cribiori:2017laj} can now be explained.
{\it First},
one has to insert a specific superfield in \eqref{EU} with the lowest component given by D,
and {\it second}, one has to relate the $\gamma$ fermions with the gaugini.
Let us first relate the $\gamma$ fermions to the gaugini.
We can do this directly in superspace by simply postulating
\begin{eqnarray}
\label{GW2}
\Gamma_\alpha = - 2 \frac{{\cal D}_\alpha {\cal W}^2}{{\cal D}^2 {\cal W}^2} \, .
\end{eqnarray}
One can check that \eqref{GW2} does indeed satisfy the constraints \eqref{SW}.
For the lowest component of $\Gamma$ we have
\begin{eqnarray}
\gamma_\alpha = \frac{2i {\cal Z}_{\alpha \beta}}{{\cal Z}^{\rho \sigma} {\cal Z}_{\rho \sigma}} \, \lambda^\beta + \text{3-fermi terms} \, ,
\end{eqnarray}
where
\begin{eqnarray}
{\cal Z}_{\alpha \beta} = {\cal D}_\beta {\cal W}_\alpha \, .
\end{eqnarray}
From \eqref{Wcomp} we see that
\begin{eqnarray}
\label{L/D}
\gamma_\alpha = \frac{i \lambda_\alpha}{2 {\text{D}}} + \dots
\end{eqnarray}
From equation \eqref{L/D} we see that if the vev of D is vanishing then we cannot use the form of $\Gamma$ given by \eqref{GW2}.
If, on the contrary, we insist on having a consistent effective field theory while always using \eqref{GW2},
then we will require
\begin{eqnarray}
\label{DNV}
\langle {\text D} \rangle \ne 0 \, .
\end{eqnarray}
As we will shortly see,
the self-consistency of the full construction of \cite{Cribiori:2017laj} will guarantee \eqref{DNV}.
We now turn to the second step of the construction of the new FI term.
We want to have a superfield with lowest component given by the auxiliary field D.
This is easily achieved because from \eqref{Wcomp} we see that the superfield we are
interested in is given by ${\cal D}^\alpha{\cal W}_\alpha$.
We therefore have
\begin{eqnarray}
\label{EDW}
\int d^4 \theta \, E \, {\cal DW} \, \Gamma^2 \overline \Gamma^2 = -2 \,e \, {\text D} + {\cal O}(\gamma, \overline \gamma) \, ,
\end{eqnarray}
or in the way it is constructed in \cite{Cribiori:2017laj} we will have
\begin{eqnarray}
\label{newFI}
{\cal L}_{\text{new FI}} = 8 \sqrt 2 \xi \int d^4\theta \, E \, \frac{{\cal W}^2\overline{{\cal W}}^2}{{\cal D}^2 {\cal W}^2 \overline{\cal D}^2 \overline{\cal W}^2} {\cal D}{\cal W} \, .
\end{eqnarray}
One can go from \eqref{EDW} to \eqref{newFI} by taking into account
the algebraic identity
\begin{eqnarray}
\Gamma^2 \overline \Gamma^2 \equiv 16 \frac{
{\cal W}^2 \overline {\cal W}^2
}{{\cal D}^2 {\cal W}^2 \overline {\cal D}^2 \overline {\cal W}^2} \, ,
\end{eqnarray}
that is derived by using \eqref{GW2}.
Clearly once we expand \eqref{newFI} in components we will have
\begin{eqnarray}
\label{LFI}
{\cal L}_{\text{new FI}} = - \sqrt 2 \xi \,e \, {\text D} + {\cal O}(\lambda, \overline \lambda) \, .
\end{eqnarray}
It is important to stress that the fermionic terms in \eqref{LFI} are in general divided by the auxiliary field D.
However,
as the kinetic term for the vector multiplet contains a term quadratic in the auxiliary field D,
that is
\begin{eqnarray}
{\cal L}_{\text{D}} = e \frac12 \text{D}^2 - \sqrt 2 e \xi \, {\text D} \, ,
\end{eqnarray}
integrating out D will generically give it a non-vanishing vev
\begin{eqnarray}
{\text{D}} = \sqrt 2 \xi \, ,
\end{eqnarray}
and therefore the full construction will be self-consistent.
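As a quick consistency check (our own one-line computation, dropping the fermionic terms), extremizing ${\cal L}_{\text{D}}$ with respect to the auxiliary field gives

```latex
\begin{eqnarray}
\frac{\partial {\cal L}_{\text{D}}}{\partial \text{D}}
= e \left( \text{D} - \sqrt 2 \, \xi \right) = 0
\quad \Longrightarrow \quad
\langle \text{D} \rangle = \sqrt 2 \, \xi \, ,
\end{eqnarray}
```

which is non-vanishing for any $\xi \ne 0$, so the condition \eqref{DNV} is indeed satisfied.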
We can now consider the simplest supergravity model with the new FI term.
The superspace Lagrangian is given by
\begin{eqnarray}
\begin{aligned}
{\cal L} =& - 3 \left( \int d^2 \Theta \, 2 {\cal E} \, {\cal R} + c.c. \right)
+ \left( \int d^2 \Theta \, 2 {\cal E} \, W_0 + c.c. \right)
\\
& + \frac14 \left( \int d^2 \Theta \, 2 {\cal E} \, {\cal W}^2(V) + c.c. \right)
+ 8 \sqrt 2 \xi \int d^4\theta \, E \, \frac{{\cal W}^2\overline{{\cal W}}^2}{{\cal D}^2 {\cal W}^2 \overline{\cal D}^2 \overline{\cal W}^2} {\cal D}{\cal W} \, .
\end{aligned}
\end{eqnarray}
Once we write the theory in component form and integrate out the auxiliary fields we find
\begin{eqnarray}
\label{NEWDTERM}
\begin{aligned}
e^{-1} {\cal L} \Big{|}_{\lambda=0}
=& -\frac12 R
+ \frac12 \epsilon^{klmn} \left( \overline \psi_k \overline \sigma_l D_m \psi_n
- \psi_k \sigma_l D_m \overline \psi_n \right)
\\
& -\frac14 F_{mn} F^{mn} - \left( \xi^2 -3 |W_0|^2 \right)
- \overline W_0 \psi_a \sigma^{ab} \psi_b - W_0 \overline \psi_a \overline \sigma^{ab} \overline \psi_b \, .
\end{aligned}
\end{eqnarray}
We can highlight some properties of the Lagrangian \eqref{NEWDTERM}:
\begin{itemize}
\item No R-symmetry gauging.
\item Arbitrary gravitino {\it Majorana} mass:
\begin{eqnarray}
m_{3/2} = W_0 \, .
\end{eqnarray}
\item The cosmological constant can be tuned:
\begin{eqnarray}
{\cal V} = \xi^2 -3 |m_{3/2}|^2 \, .
\end{eqnarray}
\item Vacuum energy independent of the supersymmetry breaking scale:
\begin{eqnarray}
{\rm F}_{\text{SUSY}}^2 = \langle {\cal V} \rangle + 3 |m_{3/2}|^2 = \xi^2 \, .
\end{eqnarray}
\item No {\it smooth} supersymmetric limit.
\item Electric-magnetic duality:
\begin{eqnarray}
F_{mn} \rightarrow \epsilon_{mnkl} F^{kl} \, .
\end{eqnarray}
\end{itemize}
This concludes the simplest model that we can construct with the use of the new Fayet--Iliopoulos term of \cite{Cribiori:2017laj}.
Developments related to cosmology can be found in
\cite{Aldabergenov:2017hvp,Antoniadis:2018cpq,Antoniadis:2018oeh,Abe:2018rnu,Antoniadis:2018vtd,Ishikawa:2019pnb},
further progress in the study of matter couplings can be found in \cite{Aldabergenov:2018nzd,Abe:2018plc},
and variations of the construction can be found in \cite{Kuzenko:2017zla,Kuzenko:2018jlz,Farakos:2018sgq,Kuzenko:2019vaw}.
Let us note that these constructions are only known for four-dimensional N=1 supergravity.
\section{Anti D3-brane interpretation}
In this section we will study the new FI term when matter fields are included, as well as its global limit.
These two aspects of the new FI term hint towards an interpretation in terms of anti D3-branes.
\subsection{Matter coupling and scalar potential}
We now introduce a single chiral superfield $\Phi$ in the theory and we will study a Lagrangian of the form
\begin{eqnarray}
\label{mmm}
\begin{aligned}
{\cal L} =& - 3 \int d^4 \theta \, E \, {\text e}^{-K/3}
+ \left( \int d^2 \Theta \, 2 {\cal E} \, W(\Phi) + c.c. \right)
\\
& + \frac14 \left( \int d^2 \Theta \, 2 {\cal E} \, {\cal W}^2(V) + c.c. \right)
+ 8 \sqrt 2 \xi \int d^4\theta \, E \, \frac{{\cal W}^2\overline{{\cal W}}^2}{{\cal D}^2 {\cal W}^2 \overline{\cal D}^2 \overline{\cal W}^2} {\cal D}{\cal W} \, .
\end{aligned}
\end{eqnarray}
We should point out that even though $K$ appears as the K\"ahler potential in \eqref{mmm},
the K\"ahler invariance is explicitly broken by the new FI term.
This however does not mean the theory is not supersymmetric.
It is also possible to restore the K\"ahler invariance by introducing appropriate ${\text e}^K$ factors as has been pointed out in \cite{Antoniadis:2018cpq}.
Here we will review the matter couplings as presented in \cite{Cribiori:2017laj} which are described by the Lagrangian \eqref{mmm}.
When we reduce \eqref{mmm} to component form we find {\it in the unitary gauge} \eqref{UUGG} that
the Lagrangian has exactly the same form as in standard supergravity
and the only difference appears in the scalar potential that is given by
\begin{eqnarray}
\label{newV}
{\cal V} = {\cal V}_{\substack{\text{Standard} \\\text{SUGRA}}} + \xi^2 \, {\text e}^{2K/3}\, .
\end{eqnarray}
The very interesting property of the new term that enters the scalar potential \eqref{newV} is that it
gives rise to an {\it uplift},
and as we will see now it matches the uplift ascribed to an anti D3-brane in the KKLT scenario \cite{Kachru:2003aw,Kachru:2003sx}.
Indeed,
if we assume a no-scale K\"ahler potential \cite{Cremmer:1983bf} (as in KKLT),
that is
\begin{eqnarray}
K = -3 \ln(\Phi + \overline \Phi) \, ,
\end{eqnarray}
then the uplift term gives to the scalar potential the form
\begin{eqnarray}
\label{antiD3}
{\cal V} = {\cal V}_{\substack{\text{Standard} \\\text{SUGRA}}} + \frac{\xi^2}{(A + \overline A )^2} \, .
\end{eqnarray}
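The uplift in \eqref{antiD3} follows directly from \eqref{newV}: with the no-scale K\"ahler potential, and denoting by $A$ the scalar lowest component of $\Phi$ (our notation), one has

```latex
\begin{eqnarray}
\xi^2 \, {\text e}^{2K/3}
= \xi^2 \, {\text e}^{-2 \ln (\Phi + \overline \Phi)}
= \frac{\xi^2}{(\Phi + \overline \Phi)^2}
\Big|_{\Phi = A}
= \frac{\xi^2}{(A + \overline A)^2} \, .
\end{eqnarray}
```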
Note that the scalar potential \eqref{antiD3} is usually constructed by introducing non-linear realizations,
either for example within the {\it constrained superfields} setup \cite{Ferrara:2014kva,Kallosh:2014wsa,Bergshoeff:2015jxa},
or in the {\it Goldstino brane} setup \cite{Bandos:2016xyu}.
In both cases the effective supergravity theory is expected to capture the impact of the anti D3-brane in the KKLT scenario.
However,
the construction of a scalar potential \eqref{antiD3} with the new FI term of \cite{Cribiori:2017laj}
has been achieved without explicitly invoking a non-linear realization of supersymmetry
as the uplift effect is induced by a standard N=1 abelian vector multiplet and the field content of the Lagrangian \eqref{mmm} is supersymmetric.
Let us note that in \cite{DallAgata:2013jtw} a new kind of no-scale model was introduced, where the K\"ahler potential
has the form $K = -2 \ln(\Phi + \overline \Phi)$ and the isometry $\Phi \to \Phi + iq$,
with real $q$,
is gauged by the abelian vector multiplet $V$.
Alternatively, the new FI term can also be utilized,
and no-scale models can be constructed when \eqref{newFI} is included in a setup with $K = - \ln(\Phi + \overline \Phi)$
and with no gauging introduced \cite{Aldabergenov:2019hvl}.
\subsection{Super Born--Infeld and alternative Bagger--Galperin}
In this subsection we will review the results of \cite{Cribiori:2018dlc}, where it was shown that, when the new FI term \eqref{newFI}
is studied in the global limit, it can be identified as the source of supersymmetry breaking in the Bagger--Galperin (BG) model \cite{Bagger:1996wp}.
The Bagger--Galperin model is an N=1 supersymmetric theory constructed from an abelian vector multiplet, with a
bosonic sector matching the Born--Infeld theory.
Generic supersymmetric theories that have a bosonic sector with this property were constructed in \cite{Cecotti:1986gb},
but the BG construction is a special class of these models that has a {\it second non-linearly realized supersymmetry}
of the Volkov--Akulov (VA) type \cite{Volkov:1973ix}.
This property of the BG action allows one to identify it as the effective action of a space-filling (anti) D3-brane with truncated spectrum
(see e.g. the discussions in \cite{Rocek:1997hi,Kallosh:2016aep}).
Further developments related to the BG action can be found in
\cite{Bandos:2001ku,Klein:2002vu,Antoniadis:2008uk,Kuzenko:2009ym,Ferrara:2014oka,Bellucci:2015qpa,Cribiori:2018jjh,Antoniadis:2019gbd}.
The BG model can be written in terms of a standard non-linear realization of supersymmetry (see e.g. \cite{Klein:2002vu} for a recent review)
in the form \cite{Bellucci:2015qpa,Cribiori:2018dlc,Antoniadis:2019gbd}
\begin{equation}
\label{BG1}
S_\text{BG}=- \beta m \int d^4 x \det[A_m^a] \left(
1+
\sqrt{-\det{\left(\eta_{ab}+\frac{1}{m}{\cal F}_{ab}\right) }}
\right) \, ,
\end{equation}
where $\beta$ and $m$ are real constants.
The expressions appearing in \eqref{BG1} are
\begin{equation}
\label{AcompAM}
A_m^a = \delta_m^a
- i \partial_m\chi\sigma^a\overline\chi
+ i \chi\sigma^a\partial_m\overline\chi \, ,
\end{equation}
with the fermion $\chi$ describing the standard VA goldstino that transforms under supersymmetry as \cite{Volkov:1973ix}
\begin{eqnarray}
\delta \chi_\alpha = \epsilon_\alpha
- i \left( \chi \sigma^m \overline \epsilon - \epsilon \sigma^m \overline \chi \right) \partial_m \chi_\alpha \, .
\end{eqnarray}
The field strength of the abelian vector is given by
\begin{equation}
{\cal F}_{ab}=(A^{-1})^m_a(A^{-1})^n_b\left[\partial_{m} u_{n}-\partial_{n} u_{m}\right] \, ,
\end{equation}
where $u_m$ transforms under supersymmetry as
\begin{eqnarray}
\delta u_m=- i \left(\chi\sigma^n\overline\epsilon
-\epsilon\sigma^n\overline\chi\right)\partial_n u_m
- i \partial_m\left(\chi \sigma^n\overline\epsilon
-\epsilon \sigma^n\overline\chi\right)u_n \, .
\end{eqnarray}
The form of the second supersymmetry that transforms the vector $u_m$ into the fermion $\chi_\alpha$ has been derived in \cite{Bellucci:2015qpa}.
In \cite{Cribiori:2018dlc} an alternative formulation of the BG action was presented that has the form
\begin{equation}
\begin{aligned}
S_{\overline{\text{BG}}}=&\frac{\beta}{4m} \int d^4 x\, \left( d^2\theta\, {\cal W}^2 + c.c. \right)
+ 16 \beta \int d^4x\, d^4\theta\, \frac{{\cal W}^2\overline{\cal W}^2}{D^2{\cal W}^2\overline D^2\overline{\cal W}^2}D^\alpha {\cal W}_\alpha\\
& + 16 \beta m \int d^4x\, d^4\theta\, \frac{{\cal W}^2\overline{\cal W}^2}{D^2{\cal W}^2\overline D^2\overline {\cal W}^2}\left\{1+\frac{1}{4m^2} f_{ab} f^{ab} -\sqrt{-\det \left(\eta_{ab} + \frac{1}{m} f_{ab} \right) } \right\} \,,
\label{BG2}
\end{aligned}
\end{equation}
where we have defined the superfield
\begin{equation}
f_{ab}=\frac{i}{4}\sigma_{ab\,\gamma}{}^\alpha\varepsilon^{\gamma\beta}\left(D_\alpha {\cal W}_\beta +D_\beta {\cal W}_\alpha\right)+ c.c. \,.
\end{equation}
Notice that the first line of \eqref{BG2} has the same structure as the new D-term of \cite{Cribiori:2017laj}.
When one writes \eqref{BG2} in component form and integrates out the auxiliary fields, it reduces to \eqref{BG1} after appropriate field redefinitions \cite{Cribiori:2017ngp}.
This procedure has been performed in detail in \cite{Cribiori:2018dlc}.
This finding further supports the interpretation of the new FI term as the source of supersymmetry breaking
related to the effective theory of an (anti) D3-brane
and also allows one to identify the FI parameter in terms of the brane tension and $\alpha'$ \cite{Cribiori:2018dlc}.
In \cite{Cribiori:2018dlc} this Lagrangian is also embedded in four-dimensional N=1 supergravity.
\acknowledgments
I would like to thank Niccol\`o Cribiori for discussions.
This work is supported by the KU Leuven C1 grant ZKD1118 C16/16/005.
\section{Introduction}
%
%
%
Label placement is an important step in map production, both manual and automatic, and it can require up to 50 percent of the total map production time for manually created maps~\cite{yoeli1972}. Imhof's 1975 statement ``Good form and placing of type make the good map. Poor, sloppy, amateurish type placement is irresponsible; it spoils even the best image and impedes reading.''~\cite{imhof1975} has not lost its validity to this day. Yet, with more and more automation in cartography and fully digital map production, one can argue that the quality of label placement has not improved or has even diminished compared to skilled, but tedious manual label placement~\cite{s-gpusye-12}.
While practical label placement algorithms are typically very fast and can compute overlap-free positions of thousands of labels within seconds, the resulting maps usually do not meet the highest quality standards but must be carefully post-processed in tedious work by human cartographers.
There is a lack of algorithmic support of human involvement in an automated labeling workflow.
Ideally, a responsive labeling algorithm would be able to react on human interaction and respect any added constraints, e.g., choosing an alternative position of a label or changing its font size, by locally updating the solution while keeping the already placed labels as stable as possible.
Such a semi-automatic map labeling tool would allow for a much more comfortable and intelligent human-in-the-loop label placement process in digital map production, where neither the human alone can deal with the full data complexity, nor the machine alone with the fine-tuning and optimization of mathematically somewhat ill-defined map aesthetics.
Instead, the tool should combine the computational power and mathematical rigor of geometric labeling algorithms with the expertise and sense of aesthetics of experienced domain experts in cartography.
In this paper we present a prototype for such a human-in-the-loop label placement approach. It first applies a selected labeling algorithm to place an initial set of non-overlapping labels for point features, which, in our case, maximizes an objective function counting the (weighted) number of labeled features.
To proceed from this initial solution our prototype implements several editing and interaction tools for modifying the labeling according to the needs of an expert user, e.g., changing the visibility or position of a label, or its size, shape, and weight.
Upon each of those modifications, a second algorithm is re-optimizing and refining the labeling by taking into account both the user input and the existing solution so as to satisfy the new constraints and maximize the stability with respect to the previous solution.
That is, a labeling is computed that contains as many as possible of the previously displayed labels and at the same time resolves any new label conflicts resulting from the user input.
Our prototype is designed to be flexible as to which algorithm to actually use for computing initial solutions and iterative updates.
Secondly, we take an algorithm engineering perspective on the map labeling problem and perform an experimental simulation study in the GIS software QGIS.
The aim is to investigate differences between and suitability of heuristic and exact algorithms (including those provided by QGIS itself) for the envisioned interactive labeling workflow, which requires frequent recomputation of labelings after local modifications.
%
%
%
%
%
\paragraph{Related Work}
%
Based on general cartographic guidelines~\cite{imhof1975, w-digtpssm-00}, the first algorithmic solutions to the label placement problem have been studied in the cartographic literature in the 1970s and 1980s~\cite{yoeli1972,hirsch82,z-ipalpp-86}. In the early 1990s the problem has been introduced as a geometric independent set problem to the computational geometry community~\cite{Formann1991,ms-ccclp-91}, where it was recognized as an important application challenge in the computational geometry task force report~\cite{c-accgitfr-96}.
It was quickly shown that almost all variants of label placement and label selection problems are NP-hard~\cite{Formann1991,ms-ccclp-91}.
Therefore, researchers focused on special cases, approximation algorithms and heuristics for label number maximization or label size maximization problems, predominantly for point features, e.g.~\cite{ww-pla-95,wwks-trsglp-01,vanKreveld1998,s-gaclp-01,christensen1995}; for surveys and general introductions see, e.g., \cite{s-gaclp-01,kt-la-13,ws-mlb-09}.
More recent works introduced advanced multi-criteria optimization models~\cite{dksw-teqnpm-02,rr-cmmhcqplp-14,hw-bmieipfpl-17} that can express more accurately several established cartographic principles, but still with the aim of a full automation of the map labeling process.
While progress is made by incorporating more comprehensive cartographic rules for label placement, none of the above approaches includes decisions made by human experts -- other than setting preferences, parameters, and priorities in the different scoring functions that control a single optimization run of the respective algorithm.
A notable exception is the UserHints framework~\cite{Nascimento2008}, where human interaction was integrated into solving the label number maximi\-za\-tion problem in a fixed-position point labeling setting. In that system, two heuristic methods were implemented as labeling algorithms, and hence the evaluation could not assess the deviation from optimal solutions with respect to the objective function. Moreover, the authors did not consider the stability of the labeling under user interaction.
Beyond the label placement problem, interactive optimization~\cite{slk-iho-02} and human-guided search~\cite{klmm-hs-09} are of course techniques that are of general interest and more broadly applicable.
Popular GIS software like \href{https://mapbox.com}{Mapbox}\footnote{see \url{https://mapbox.com}},
\href{https://pro.arcgis.com}{ArcGIS Pro}\footnote{see \url{https://pro.arcgis.com}}, %
or \href{https://www.qgis.org}{QGIS}\footnote{see \url{https://www.qgis.org}} %
also provide labeling al\-go\-rithms.
Mapbox
%
allows customized label modifications with data conditions, but no manual selection or drag-and-drop placement.
The ArcGIS Pro documentation\footnote{see \url{https://pro.arcgis.com/en/pro-app/help/mapping/text/labeling-basics.htm}}
states ``Label positions are generated automatically. Labels are not selectable. You cannot edit the display properties of individual labels.'' To allow for manual adjustment, labels can be converted to annotations. If the labels are stored in a database, the annotations can be feature-linked, i.e., the annotations update in case features are added or changed. However, after converting to annotations, all positioning needs to be done manually.
Other proprietary software developers like \href{https://1spatial.com/}{1Spatial}\footnote{see \url{https://1spatial.com/}}
and \href{http://lorienne.com/en/}{Lorienne}\footnote{see \url{http://lorienne.com/en/}}
advertise features to modify labels in a more advanced manner.
However, there seems to be no focus on better integrating the automatic labeling process into a more interactive approach,
especially from an algorithmic perspective.
Finally, in QGIS 3 some advanced labeling tools were introduced.
For example, it is possible to manually drag and reposition labels; other labels will be re-placed accordingly.
Labels that were manually edited can be highlighted and reverted to their default positions.
While this is a good example that demonstrates the awareness and practical need for semi-automatic labeling solutions, prior to this paper no experimental studies on the performance of different labeling algorithms under interactive editing
%
have been published that evaluate such an approach and guide further development in QGIS and other systems.
%
%
%
%
%
%
%
%
%
%
\paragraph{Paper Structure}
%
%
In Section~\ref{sec:model} we introduce our model for semi-automatic map labeling, which combines the classic point-feature label placement with a dynamic update problem. Section~\ref{sec:framework} introduces our prototype tool and describes a sample map labeling workflow using interactive modifications by a cartographic expert.
%
Finally, Section~\ref{sec:algexpqgis} describes our simulation experiment in QGIS to analyze the performance of several labeling algorithms.
%
%
%
\section{Labeling Model}\label{sec:model}
In this paper we restrict our attention to the \emph{point feature label placement} (PFLP) problem, which is defined as follows.
Let $ P $ be a set of $n$ feature points in $ \mathbb{R}^2 $. For each point $p \in P$ we are given a finite set $L_p$ of \emph{label candidates}, where each label candidate $\ell \in L_p$ is represented by the bounding box $R_\ell$ of the feature name placed at a particular position. While in general the label candidates in $L_p$ can be arbitrary label positions, we focus on the standard 4- and 8-position models, where either one of the corners of $R_\ell$ coincides with $p$ (4-position model) or one of the corners or midpoints of the edges of $R_\ell$ coincides with $p$ (8-position model). Let $\mathcal L = \bigcup_{p \in P} L_p$ be the union of all label candidates.
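To make the candidate models concrete, the following is an illustrative Python sketch (our own code, not part of the prototype) that generates the label bounding boxes of the 4- and 8-position models for a feature point $(p_x, p_y)$ and a label of width $w$ and height $h$; rectangles are encoded as $(x_{\min}, y_{\min}, x_{\max}, y_{\max})$:

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

def label_candidates(px: float, py: float, w: float, h: float,
                     positions: int = 4) -> List[Rect]:
    """Candidate bounding boxes for a point feature at (px, py).

    4-position model: the point coincides with one of the four corners.
    8-position model: additionally, the point may coincide with an
    edge midpoint of the label box.
    """
    corners = [
        (px, py, px + w, py + h),        # point = lower-left corner
        (px - w, py, px, py + h),        # point = lower-right corner
        (px, py - h, px + w, py),        # point = upper-left corner
        (px - w, py - h, px, py),        # point = upper-right corner
    ]
    if positions == 4:
        return corners
    midpoints = [
        (px - w / 2, py, px + w / 2, py + h),  # midpoint of bottom edge
        (px - w / 2, py - h, px + w / 2, py),  # midpoint of top edge
        (px, py - h / 2, px + w, py + h / 2),  # midpoint of left edge
        (px - w, py - h / 2, px, py + h / 2),  # midpoint of right edge
    ]
    return corners + midpoints
```

In both models the feature point lies on the boundary of every candidate box, as required by the definitions above.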
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
We say that two label candidates $\ell$ and $\ell'$ are in \emph{conflict}, if the two rectangles $R_\ell$ and $R_{\ell'}$ intersect. Since the names of two conflicting label candidates would overlap, the goal in map labeling is to find a \emph{conflict-free} solution set $\mathcal S \subseteq \mathcal L$ of label candidates. In particular, we require that any two label candidates of the same point $p$ are in conflict so that each point receives at most one label. To optimize the labeling we define a quality function $w \colon \mathcal L \rightarrow \mathbb R^+$ that assigns a weight to each label candidate. Then in its basic form, which we implemented for our prototype tool and is also used equivalently in QGIS through a cost model for non-labeled features, the PFLP optimization problem is defined as follows.
\begin{problem}[PFLP]
Given a set $P$ of $n$ points in $\mathbb{R}^2$ with a set of label candidates $\mathcal L$ and a weight function $w$, find a conflict-free set $\mathcal S \subseteq \mathcal L$ of label candidates for $P$ such that the weight $W(\mathcal S) = \sum_{\ell \in \mathcal S} w(\ell)$ is maximized.
\end{problem}
The simplest weight function is $w \equiv 1$, which just counts the number of selected labels. But more advanced weight functions, defined for single labels, pairs of labels, or even larger subsets, in order to model various cartographic principles are possible~\cite{rr-cmmhcqplp-14,hw-bmieipfpl-17,dksw-teqnpm-02}.
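The conflict test itself is a plain axis-aligned rectangle intersection. The sketch below is illustrative only; whether touching boundaries already count as a conflict, and the extra rule that candidates of the same feature always conflict, are modeling choices of the concrete implementation:

```python
def rects_conflict(a, b) -> bool:
    """True iff the interiors of two label boxes (xmin, ymin, xmax, ymax)
    overlap.  Note: candidates of the same feature are usually declared
    to conflict regardless of this test, so each point gets one label.
    """
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def is_conflict_free(solution) -> bool:
    """Check that no two selected label boxes overlap."""
    return all(not rects_conflict(solution[i], solution[j])
               for i in range(len(solution))
               for j in range(i + 1, len(solution)))
```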
We want to explore user modifications in our semi-automatic labeling process, which, for example, change the set $\mathcal L$ of label candidates to a set $\mathcal L'$ or the weight function $w$ to a function $w'$. Therefore we define the following PFLP update problem.
\begin{problem}[PFLP-Update]
\label{prob:PFLP-Update}
Given a set $P$ of $n$ points in $\mathbb{R}^2$ with a set of label candidates $\mathcal L'$, a weight function $w'$, and a previous labeling $\mathcal S$, compute a conflict-free solution $\mathcal S' \subseteq \mathcal L'$ that maximizes the number $|\mathcal S \cap \mathcal S'|$ of \emph{stable} labels as well as the weight $W(\mathcal S') = \sum_{\ell \in \mathcal S'} w'(\ell)$.
\end{problem}
We note that there may be a trade-off between the stability of the new solution $\mathcal S'$ and its weight $W(\mathcal S')$ that can be adjusted by the user.
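One simple way to realize this trade-off, shown here only as an illustration and not necessarily the method used by the prototype, is to reduce PFLP-Update to plain PFLP: every candidate that was part of the previous solution $\mathcal S$ receives a stability bonus $\lambda$ on top of its weight, so maximizing the modified weight balances $W(\mathcal S')$ against $\lambda \, |\mathcal S \cap \mathcal S'|$:

```python
def update_weights(weights, previous_solution, stability_bonus):
    """Fold stability into the weights: candidates kept from the previous
    labeling earn `stability_bonus` extra, so a plain PFLP solver on the
    modified weights trades off W(S') against |S intersect S'|.
    """
    return {ell: w + (stability_bonus if ell in previous_solution else 0.0)
            for ell, w in weights.items()}
```

A large bonus favors stability; a bonus of zero recovers the plain PFLP objective.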
%
%
%
%
%
%
%
%
%
For solving the two labeling problems algorithmically, we model the conflicts and the label candidates as a weighted \emph{conflict graph} $G=(V,E)$, where the vertex set $V=\mathcal L$ consists of all label candidates and the edge set $E$ consists of all pairs of conflicting label candidates. Then, in graph-theoretic terms, an optimal labeling corresponds to a \emph{maximum weight independent set} in $G$, i.e., a subset $V' \subseteq V$ of vertices such that no two vertices $u,v \in V'$ are adjacent and the weight $\sum_{v \in V'} w(v)$ is maximum. The problem of computing maximum independent sets in graphs is a classic NP-hard problem~\cite{Garey79}, even in its unweighted form.
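For tiny instances, a maximum-weight independent set in the conflict graph can be computed by exhaustive search; the following reference sketch (our own code, exponential in the number of candidates and thus usable only to test heuristics against exact optima) makes the graph-theoretic formulation concrete:

```python
from itertools import combinations

def max_weight_independent_set(weights, edges):
    """Exact maximum-weight independent set by exhaustive search.

    weights: dict vertex -> positive weight.
    edges: set of frozensets {u, v} of conflicting candidate pairs.
    """
    vertices = list(weights)
    best, best_w = frozenset(), 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            # skip subsets that contain a conflict edge
            if any(frozenset(p) in edges for p in combinations(subset, 2)):
                continue
            w = sum(weights[v] for v in subset)
            if w > best_w:
                best, best_w = frozenset(subset), w
    return best, best_w
```

Practical labelers replace this exhaustive search with ILP formulations or heuristics.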
%
%
%
%
%
%
%
%
%
%
\subsection{Data}\label{sec:data}
For our computational experiments and the evaluation of the prototype we extracted points-of-interest data from the \href{https://openstreetmap.org}{OpenStreetMap}\footnote{\url{https://openstreetmap.org}} (OSM) project,
filtered it for certain categories or properties, and then stored the name and location of the remaining points as ESRI shapefiles %
or in a simple JSON file format to be read by our tool.
Using data from the OSM project guarantees that the feature distribution is realistic, even if the particular data sets are simplified and not cartographically sound use cases.
%
%
We compiled five different datasets, all with unit weights, whose properties are summarized in Tab.~\ref{tab:data}. The first one consists of all mountain peaks above 2,499 meters in the mountain range ``Hohe Tauern'' in Austria.
It consists of 1,278 homogeneously distributed natural features and is on average less dense than the other four data sets.
We compiled this dataset to use it in the sample workflow with a zoom level set to 12, see Section~\ref{sec:sampleworkflow}.
The numbers of conflicts and conflicts per feature are hence measured in the 4-position model applied by the prototype.
%
The other four datasets are man-made features taken from Vienna, the province of Lower Austria, and Austria itself.
Here we use the QGIS 12-position model to measure the conflicts.
In the case of Austria, we filtered all places marked as town or city, resulting in a total of 301 features with $ 16.58 $ conflicts per feature.
For Lower Austria we took all villages, towns, and cities inside the state boundaries of Lower Austria and Vienna.
This resulted in a set of 2,260 features with $ 22.6 $ conflicts per feature.
These settlement features are irregularly distributed according to the physical geography of an alpine country.
Finally we considered all bus, tram, and subway stops inside the state boundaries of Vienna. %
These features are more dense in the city center and thin out towards the periphery.
One set, ``Vienna Train'', consists of 1,001 tram and subway stops with $ 23.27 $ conflicts per feature.
%
The last data set we call ``Vienna Bus/Train''.
It adds also all bus stops inside Vienna to the tram and subway stops.
This yields 3,939 features with $ 33.66 $ conflicts per feature, making it by far the most densely packed set of features.
Note that for all data sets the number of conflicts includes the conflicts between label candidates of the same feature, which yields a lower bound of 3 or 11 conflicts per feature in the 4- or 12-position model, respectively.
%
%
\begin{table}[tb]
\caption{Test data sets and their properties.}
\label{tab:data}
\centering
\begin{tabular}{lrrr}
\toprule
data set & features & conflicts & conflicts/feat\\
\midrule
Mountain Peaks & 1,278 & 15,416 & 3.01\\
\midrule
Austria & 301 & 4,991 & 16.58 \\
Lower Austria & 2,260 & 47,269 & 22.60 \\
Vienna Train & 1,001 & 23,294 & 23.27 \\
Vienna Bus/Train & 3,939 & 132,571 & 33.66 \\
\bottomrule
\end{tabular}
\end{table}
\section{Semi-Automatic Labeling Prototype}\label{sec:framework}
We developed a prototype tool that includes four labeling algorithms and provides a proof-of-concept GUI to test user interaction with the system. For the implementation of the backend, especially the algorithms, we used \emph{Java 8} in conjunction with the \href{https://www.playframework.com/}{\emph{Play Framework}}\footnote{\url{https://www.playframework.com/}}
(version 2.6) and the \href{https://jgrapht.org/}{JGraphT}\footnote{\url{https://jgrapht.org/}} library
(version 1.0.1). For displaying the labels we built a web interface using the Javascript libraries \href{https://leafletjs.com/}{\emph{Leaflet}}\footnote{\url{https://leafletjs.com/}}
(version 1.0.3) and \href{https://d3js.org/}{\emph{D3}}\footnote{\url{https://d3js.org/}}
(version 4.9.1).
%
%
%
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{screencast_new/original.png}
\caption{The graphical user interface of the developed prototype.}
\label{fig:GUI}
\end{figure}
Our application is a one-page design, i.e., the page does not require any reloads while working with it.
The user interface, shown in Fig.~\ref{fig:GUI}, consists of the large map area in the middle of the window.
Here the current labeling is displayed on background map tiles.
The labels are drawn as white rectangles with black text and are initially attached to the features according to a 4- or 8-position model.
All feature points are displayed as filled circles. A blue circle indicates a labeled feature, while a red circle corresponds to an unlabeled one.
On the left-hand side we find a sidebar, containing most of the input controls, e.g., for loading a file or choosing the algorithms.
Above the map area, there are three toggle buttons.
Two of them manipulate what is shown on the map, while the rightmost button toggles whether the labels we just modified are kept as fixed labels or whether they can be deleted from the solution by the next update.
By clicking on a label, its background color changes to green as seen in Fig.~\ref{fig:GUI} and the label properties area pops up as a sidebar on the right-hand side of the window.
Here the current values are displayed, e.g., font size, weight, or margin, and the user can modify them accordingly.
Shifting a label candidate position by drag-and-drop is also possible.
%
%
Concerning user feedback, the application informs the user via small message boxes on the lower right corner.
Longer computations will block the user interface, and a progress bar pops up on the top.
%
%
%
%
%
%
%
%
%
%
%
\subsection{Workflow and User Interaction}
\label{sec:modification}
Starting from an unlabeled map, the first step is to import a set of data points to be labeled.
Then one of the implemented algorithms (see Section~\ref{sec:algorithms}) is selected to compute an initial solution that can subsequently be modified.
The core of the proposed user-centered labeling process consists of a number of implemented modification tools that were designed according to the needs of a human cartographer and are summarized in Tab.~\ref{tab:modifications}.
\begin{table}[tb]
\small
\caption{Implemented user modifications.}\label{tab:modifications}
\centering
\begin{tabular}{l|l|l}
\toprule
\textbf{text} & \textbf{solution} & \textbf{label}\\
\midrule
change font size & delete point features & drag label in the map \\
change text & delete label candidates & change padding\\
add line breaks & fixate label candidates & toggle label box visibility\\
& change cand. weight\\ %
\bottomrule
\end{tabular}
\end{table}
Most of the modifications have a direct effect in the corresponding conflict graph, e.g., deletion or insertion of conflict edges, deletion of vertices, forced selection of vertices, or changes of vertex weights.
Lastly, the user can select the algorithm for solving the PFLP-Update problem following the modifications; it can be the same algorithm as for computing the initial labeling or a different one, where aspects such as computation speed, stability, and optimality must be taken into account. Our simulation experiments in Section~\ref{sec:statistic} provide some empirical guidance for choosing an update algorithm.
%
%
%
%
%
%
%
%
%
%
\subsection{Realistic Sample Workflow}\label{sec:sampleworkflow}
In this subsection, we describe an example of a realistic map labeling workflow that has been performed by a cartographer using our prototype tool; the process was logged and video-captured.
In the first step, the point features from the Mountain Peaks dataset are added to the map. After loading and parsing the features, the
user zooms and pans to the area of interest. Once the
desired zoom level is set, an initial labeling can be produced by any of the four algorithms outlined in Section~\ref{sec:algorithms}. Here we used the exact algorithm \textsc{MHS}\xspace.
Once it has found a solution, the labeled features are shown with a blue symbol and the corresponding label candidate;
unlabeled features are indicated by a red symbol.
Usually, a cartographer would now try to group and prioritize features
according to their attributes first. In this sample scenario, though, we are
treating all features with the same importance and are only using
visual cues to manually refine the results of the automatic
labeling.
Reasons for necessary manual adaptations include cases in which the visual connection between symbols and labels is ambiguous or in which a label does not correspond well with underlying map features (e.g., labels covering lakes).
While some cases could be avoided by implementing more sophisticated placement rules (e.g., assigning label and feature weights), other cases seem difficult to automate -- either due to their subjective nature or due to the complexity of demands.
%
%
%
After identifying an improvable label, the cartographic expert would
click on the label, which activates the view of four (or eight)
alternative label positions. The preferred label candidate can be
fixated by marking it and activating ``Fixate Label Candidate'' in the modification sidebar.
Alternatively, in ``drag mode'',
the label can be dragged to the preferred position.
Further label
edits include the option to change the font size and weight as well
as the label name, e.g., using abbreviations or stacking long label names. %
Based on the manually selected label, the positions of surrounding labels are re-calculated with the selected update algorithm.
%
In our test scenario, about 15 improvable labels (less than 10\% of the initially placed labels) were identified.
%
Not all of them had to be changed manually, since updates of neighboring labels and subsequent re-calculations fixed some of the issues. Of course, the instant updates sometimes also resulted in new improvable labels.
All in all, the option to select and adjust individual labels while maintaining all automatic labeling functionality was considered highly useful by the expert to speed up the label optimization process in comparison to current cartographic workflows without customized algorithmic support for interactive labeling. %
%
\section{Algorithms and Experiments in \textsc{QGIS}\xspace}\label{sec:algexpqgis}
%
Quantum GIS (QGIS) %
is one of the most popular open-source geographic information system applications of recent years.
By using the PAL~\cite{ertz2009pal} local search labeling library in its labeling engine, QGIS provides automated layer labeling.
The label placement can be customized by choosing labeling algorithms, adjusting styling, etc.
The newly updated labeling toolbar in QGIS 3 provides more tools for manual label placement, which include changing individual label attributes and moving labels in the map.
This indicates that the QGIS developers see a clear demand for support of interactive labeling, too.
Our goal in this section is to investigate the effectiveness and stability of several algorithms (described in Section~\ref{sec:algorithms}) applicable to solve problems PFLP and PFLP-Update, as well as the algorithms from PAL (see Section~\ref{sec:qgis}) that are included in QGIS.
To this end we integrated our algorithms in the labeling engine of QGIS and performed several simulation experiments. The results are reported in Section~\ref{sec:statistic}.
\subsection{Labeling Algorithms}\label{sec:algorithms}
%
%
We selected four labeling algorithms as representatives of existing labeling approaches, three increasingly sophisticated heuristics and one exact algorithm.
The simplest algorithm (called \textsc{GREEDY}\xspace) is an easy-to-implement and fast greedy approach with little to no optimization.
The second algorithm (\textsc{MIS}\xspace) aims to obtain a good approxima\-tion of the maximum independent set in the conflict graph.
Since many graph libraries provide independent set algorithms, it is also very easy to apply by taking an existing implementation.
%
%
The algorithm \textsc{KAMIS}\xspace~\cite{kamis2017} computes large independent sets by a combination of several advanced algorithmic techniques such as graph partitioning and kernelization.
Finally we designed a new exact algorithm (\textsc{MHS}\xspace) based on a MAXSAT formulation.
%
To solve our MAXSAT formulation we use the solver \href{http://www.maxhs.org/}{MaxHS}, %
which also gives the name to our algorithm.
MaxHS is a freely available solver which ranks highly in the competition held at the annual SAT conferences. %
All algorithms except \textsc{KAMIS}\xspace support weighted labels, but this does not directly affect our experiments as we initially use unweighted labels.
%
Maintaining a previous solution can be done with \textsc{GREEDY}\xspace, \textsc{MIS}\xspace and \textsc{MHS}\xspace.
In the case of \textsc{GREEDY}\xspace we simply keep the still conflict-free part of the old solution and try to extend it.
For \textsc{MIS}\xspace and \textsc{MHS}\xspace we adjust the weights of the old labels so that they have higher priority to be included in the solution.
Since \textsc{KAMIS}\xspace currently does not support weighted instances, prioritizing labels of a previous solution via weights is not possible with \textsc{KAMIS}\xspace.
From a theoretical perspective, specialized algorithms that respect a previous solution have been considered recently for the \emph{dynamic independent set} problem~\cite{DBLP:conf/stoc/AssadiOSS18}. So far, this was not investigated in the light of the PFLP problem.
%
%
\paragraph{\textsc{GREEDY}\xspace}
Algorithm \textsc{GREEDY}\xspace computes a maximal independent set $D \subseteq V $ in the conflict graph $G=(V,E)$ using a greedy approach. It starts by picking a random vertex $u\in V$ of maximum weight and adds it to $D$. All neighbors of $u$ and $u$ itself are then marked. Next we pick an unmarked vertex $v \in V$ of maximum weight, add it to $D$ and mark $v$ and its neighbors. We repeat this until no unmarked vertex remains. The constructed set $D$ is a maximal independent set of $G$, i.e., it cannot be extended. But there is no guarantee that it is a maximum weight independent set.
The selected set of label candidates corresponds to the set $D$.
%
Our implementation is also able to take as input an independent set $D' \subseteq V$ and guarantee that for the new solution $D$ we have $D' \subseteq D$. %
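As an illustration, the greedy procedure, including the option of extending a given independent set, can be sketched as follows (Python; the function name and interface are our own, not the actual implementation):

```python
def greedy_mis(vertices, weight, neighbors, fixed=frozenset()):
    """Greedy maximal independent set in a conflict graph.
    `neighbors[u]` is the set of vertices adjacent to u; `fixed` is an
    optional independent set that must be contained in the result
    (illustrative interface, not the actual implementation)."""
    D = set()
    marked = set()
    # Keep the given partial solution and mark its neighborhoods first.
    for u in fixed:
        D.add(u)
        marked.add(u)
        marked.update(neighbors[u])
    # Repeatedly pick an unmarked vertex of maximum weight.
    while True:
        candidates = [v for v in vertices if v not in marked]
        if not candidates:
            return D
        v = max(candidates, key=lambda v: weight[v])
        D.add(v)
        marked.add(v)
        marked.update(neighbors[v])
```

The result is maximal (no unmarked vertex remains) but, as noted above, not necessarily of maximum weight.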
\paragraph{\textsc{MIS}\xspace}
%
Like \textsc{GREEDY}\xspace, the algorithm \textsc{MIS}\xspace builds a maximal independent set, but in contrast to \textsc{GREEDY}\xspace it tries to find a good approximation to a maximum weight independent set in $G$. Such approaches are well known in the map labeling literature~\cite{Agarwal1998,Strijk2000}. %
One way to find a large maximal independent set is to find a small minimal \emph{vertex cover} $D \subset V$ of the conflict graph $G$.
A vertex cover $D$ has the property that for every edge $(u,v) \in E$ at least one of the two vertices $u,v$ is contained in $D$.
Consequently, the set $ D' = V\setminus D$ is an independent set, as by definition of $D$ no two vertices in $D'$ can be neighbors in $G$. In our implementation we use the greedy vertex cover heuristic as implemented in the graph library JGraphT. %
For a vertex $ u $ let $ \deg(u) $ denote its degree in the conflict graph; in each step the algorithm picks the vertex $u$ with the smallest ratio $w(u)/\deg(u)$, adds it to $ D $, and removes $ u $ together with all edges incident to $ u $.
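The vertex cover heuristic can be sketched as follows (an illustrative Python re-implementation of the ratio rule, not the JGraphT code):

```python
def greedy_vertex_cover_mis(vertices, weight, edges):
    """Build an independent set as the complement of a greedily
    constructed vertex cover (sketch of the w(u)/deg(u) ratio rule)."""
    # Adjacency as mutable sets so covered edges can be removed.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    # While edges remain, add the vertex with smallest w(u)/deg(u).
    while any(adj[v] for v in vertices):
        u = min((v for v in vertices if adj[v]),
                key=lambda v: weight[v] / len(adj[v]))
        cover.add(u)
        for nb in adj[u]:
            adj[nb].discard(u)
        adj[u].clear()
    return set(vertices) - cover   # complement of a cover is independent
```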
\paragraph{\textsc{KAMIS}\xspace} The third algorithm \textsc{KAMIS}\xspace is based on the maximum independent set solver framework \textit{KaMIS}~\cite{kamis2017}. By combining kernelization, local search, an evolutionary algorithm, graph partitioning and other techniques, this advanced maximum independent set solver can very successfully find large independent sets in huge sparse graphs.
There are three algorithm components in this framework. Firstly, to create initial solutions, it uses a swap-based local search called \textit{ARW}~\cite{kamis2017}.
After greedily inserting vertices of small residual degree into the independent set, the local search applies $(1,2)$-swaps.
In particular, if two non-adjacent vertices outside the solution have exactly one common neighbor in the current solution, the local search inserts these two vertices and removes their common neighbor, increasing the size of the independent set locally.
The second algorithm component is the evolutionary algorithm \textit{EvoMIS}~\cite{lamm2015graph}.
It employs a multi-way partitioning on the graph to make the exchange of sub-solutions in graph components possible.
After the recombination, newly generated offspring are locally optimized using swaps.
The third component is a kernelization technique~\cite{kamis2017} with both exact and inexact kernels.
Exact kernelization applies reduction rules to decrease the problem size without affecting solution quality.
For example, isolated vertices can always be added to the independent set.
Besides exact kernelization, inexact kernelization rules are used to reduce the search space in the optimization phase.
Intuitively, we choose vertices with very small degree which are in the current candidate independent set and fixate them.
Consequently their neighborhood is then deleted, which reduces the size of the remaining instance.
The whole algorithm works as follows. The first phase reduces the problem size by exact kernelization. After an initialization phase using \textit{ARW}, the evolutionary procedure \textit{EvoMIS} combines sub-solutions of components and optimizes the newly generated solutions. Inexact kernelization of the fittest solutions makes further exact kernelization possible, and the algorithm repeats the whole process from there.
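The central $(1,2)$-swap move can be illustrated as follows (a strongly simplified Python sketch of the move itself, not the KaMIS code, which additionally uses candidate lists and randomization):

```python
def one_two_swaps(solution, neighbors):
    """Apply (1,2)-swaps to an independent set: if two non-adjacent
    vertices outside the solution share exactly one neighbor inside it,
    remove that neighbor and insert the two vertices instead.
    Simplified sketch of the move used in ARW-style local search."""
    sol = set(solution)
    improved = True
    while improved:
        improved = False
        for x in list(sol):
            # Free vertices whose only neighbor in the solution is x.
            cands = [v for v in neighbors[x]
                     if v not in sol and neighbors[v] & sol == {x}]
            for i in range(len(cands)):
                for j in range(i + 1, len(cands)):
                    u, w = cands[i], cands[j]
                    if w not in neighbors[u]:  # u and w are non-adjacent
                        sol.remove(x)          # (1,2)-swap: -1 vertex ...
                        sol.add(u)             # ... +2 vertices
                        sol.add(w)
                        improved = True
                        break
                if improved:
                    break
            if improved:
                break
    return sol
```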
\paragraph{\textsc{MHS}\xspace}
Finally we introduce \textsc{MHS}\xspace, an approach based on satisfiability testing of Boolean formulas. A Boolean formula $\phi$ in conjunctive normal form consists of a logical conjunction of clauses $\phi = c_1 \land \dots \land c_m$. Each \emph{clause} $c_i$ is a logical disjunction of one or more Boolean variables $x_1, \dots, x_n$ and their negations $\neg x_1, \dots, \neg x_n$; these are called \emph{literals}. Each \emph{variable} can take the value \emph{true} or \emph{false}. For a particular \emph{truth assignment} $\varphi$ of all variables, one can evaluate the truth values of all clauses. The formula $\phi$ is \emph{true} if every clause is \emph{true}. A truth assignment for which $\phi$ evaluates to \emph{true} is called a \emph{satisfying~assignment}.
The satisfiability problem (SAT) asks for the existence of a satisfying assignment, given a Boolean formula $\phi$, and is one of the fundamental NP-complete problems~\cite{Garey79}. If every clause consists of at most two literals, the restricted problem is known as 2-SAT and can be solved in polynomial time~\cite{krom1967decision}. Formann and Wagner~\cite{Formann1991} modeled the labeling problem for a 2-position model as a 2-SAT formula, which allowed them to test in polynomial time whether a conflict-free labeling exists that assigns a label candidate to every point.
Let $P$ be a set of point features, $\mathcal L$ the label candidates and $G = (V,E)$ the conflict graph of $ P $.
We first construct a $2$-SAT formula $\phi$ that guarantees the solution set to be conflict free.
Let $\Lambda$ be the set of variables of $\phi$ and $\Gamma$ the set of clauses.
To build our formula, we introduce for each vertex $u \in V$ a Boolean variable $\lambda_u \in \Lambda$, and for each conflict edge $(u,v) \in E$ the clause $ \gamma(u,v) = (\neg \lambda_u \lor \neg \lambda_v) \in \Gamma$.
We derive a solution set $\mathcal S\subseteq \mathcal L$ from a truth assignment $\varphi$ of $\phi$, by choosing a label candidate $\ell \in \mathcal L$ to be added to $\mathcal S$ if and only if for the corresponding vertex $u \in V$ the variable $\lambda_u$ is \emph{true} in $\varphi$.
Now $ \mathcal S $ is conflict-free if and only if $ \varphi $ is a satisfying assignment of $ \phi $. To see this, recall that for every conflicting pair of labels there is an edge $ (u,v) \in E $ between the vertices $ u,v \in V $ corresponding to the conflicting labels. For such an edge we introduced the clause $ \gamma(u,v) $, which states that $ \lambda_u $ and $ \lambda_v $ cannot both be \emph{true} in any satisfying assignment. While a satisfying assignment leads to a conflict-free label set, it does not maximize the number of labels. In particular the assignment $ \varphi $ mapping all variables to \emph{false} is a satisfying one, but the solution set $ \mathcal S $ resulting from $ \varphi $ is empty.
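The clause construction and the derivation of $\mathcal S$ from an assignment can be sketched in a few lines (illustrative Python; the integer variable encoding and function names are our own):

```python
def conflict_clauses(V, E):
    """Build the 2-SAT clauses (not lam_u  OR  not lam_v) for every
    conflict edge, encoding each variable as a positive integer and
    negation as a sign (DIMACS convention)."""
    var = {u: i + 1 for i, u in enumerate(V)}
    return [(-var[u], -var[v]) for (u, v) in E]

def solution_from_assignment(V, assignment):
    """Derive the label set S from a truth assignment: pick every
    candidate whose variable is true."""
    return {u for u in V if assignment[u]}

def is_conflict_free(S, E):
    """S is conflict-free iff no conflict edge has both endpoints in S."""
    return all(not (u in S and v in S) for (u, v) in E)
```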
In the related problem MAXSAT we do not ask for a satisfying assignment of a given formula, but instead for an assignment that maximizes the number of clauses that evaluate to \emph{true}. MAXSAT, as well as MAX-2-SAT, are well known to be NP-complete~\cite{Garey79}.
To model the PFLP problem as a MAX-2-SAT formula we add for every variable $\lambda_u\in \Lambda$ the unit clause $ \gamma(u) = \lambda_u \in \Gamma $.
However, if we simply maximize the number of satisfied clauses, some of the clauses $\gamma(u,v)$ may evaluate to \emph{false} -- which would mean that two conflicting label candidates are selected.
Hence we use a version of the MAXSAT problem, the \emph{partial maximum satisfiability problem} (PMAX-SAT)~\cite{Ansotegui2010}.
In PMAX-SAT the set of clauses is partitioned into a set of \emph{hard clauses} $\Gamma_H$ and a set of \emph{soft clauses} $\Gamma_S$. %
We must find a truth assignment $\varphi$ such that any clause $\gamma \in \Gamma_H$ evaluates to \emph{true}, while the number of clauses $ \gamma' \in \Gamma_S $ that evaluate to \emph{true} is maximized.
In our case we define the clauses $\gamma(u,v) \in \Gamma$ %
as hard clauses and the clauses $\gamma(u) \in \Gamma $ as soft clauses.
Now for any solution the clauses expressing a conflict have to be satisfied.
We note that a similar formulation has been used to model the maximum clique problem~\cite{flqfx-smwcumsr-14}, which is equivalent to the maximum independent set problem in the complement graph.
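For concreteness, this hard/soft encoding can be written in the DIMACS WCNF input format read by partial MAXSAT solvers such as MaxHS (an illustrative Python sketch with integer weights as required by the format; the function name and interface are ours, not part of our implementation):

```python
def pflp_to_wcnf(V, E, soft_weight=None):
    """Encode PFLP as a partial MAXSAT instance in DIMACS WCNF.
    Hard clauses (conflicts) get the reserved 'top' weight; the soft
    unit clauses carry the label weights (default 1)."""
    var = {u: i + 1 for i, u in enumerate(V)}
    soft_weight = soft_weight or {u: 1 for u in V}
    top = sum(soft_weight.values()) + 1     # exceeds any sum of soft weights
    lines = [f"p wcnf {len(V)} {len(E) + len(V)} {top}"]
    for (u, v) in E:                        # hard: no two conflicting labels
        lines.append(f"{top} -{var[u]} -{var[v]} 0")
    for u in V:                             # soft: prefer labeling u
        lines.append(f"{soft_weight[u]} {var[u]} 0")
    return "\n".join(lines)
```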
As a next step we extend our model to also solve the PFLP-Update problem.
In \emph{weighted} PMAX-SAT a soft clause $ \gamma \in \Gamma_S $ can be assigned a weight $ w(\gamma) \in \mathbb{R}^+ $, which can be seen as a penalty for falsifying $\gamma$.
The aim is to find an assignment $\varphi$ of $\phi$ that satisfies all the clauses in $\Gamma_H$ and minimizes the sum of penalties of the unsatisfied clauses in $\Gamma_S$.
Let $ \ell \in \mathcal L $ be a label candidate, $w(\ell)$ its weight, and $\mathcal S' \subset \mathcal L$ the previous solution.
We have to specify the weights for the soft clauses.
For every $ \gamma \in \Gamma_S $ whose label candidate $\ell$ belongs to $\mathcal S'$ we set $w(\gamma) = w(\ell)+\varepsilon$ for some parameter $\varepsilon\geq0$, and $w(\gamma) = w(\ell)$ otherwise; this gives higher priority to selecting a previously displayed label candidate over a previously unused one.
In our implementation we used $w=1$ and $\varepsilon=1$.
%
%
If we want to strictly prioritize maximum solutions over solutions with fewer labels (but possibly more from $\mathcal S'$) we need to compute a suitable $ \varepsilon $.
%
Let $k = |\mathcal S'|$ and assume all label weights are uniform $w \equiv 1$.
Then we can choose $0 < \varepsilon < 1 / k$.
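The bound can be justified by a short calculation (our own reasoning, made explicit here): with uniform weights, a solution selecting $m$ labels, $r \le k$ of them from $\mathcal S'$, attains soft-clause value $m + r\varepsilon$, and

```latex
\[
  m + r\,\varepsilon \;\le\; m + k\,\varepsilon \;<\; m + 1
  \qquad (r \le k,\ \varepsilon < 1/k),
\]
```

so any solution with $m+1$ labels (value at least $m+1$) is strictly preferred; $\varepsilon$ then only breaks ties among maximum solutions in favor of previously displayed label candidates.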
%
%
%
%
\subsection{QGIS labeling algorithms}\label{sec:qgis}
For our experiments we further selected three representative algorithms from the labeling engine of QGIS.
The \textsc{FALP}\xspace algorithm is the simplest greedy algorithm, which is also used to build the initial solution in other optimization algorithms.
The algorithm \textsc{CHAIN}\xspace is a local search algorithm with chained neighborhood moves.
The most advanced algorithm in \textsc{QGIS}\xspace is \textsc{POPMUSIC}\xspace (called pop\_tabu\_chain). It combines the general paradigm of \textsc{POPMUSIC}\xspace~\cite{taillard2002popmusic} with \textsc{CHAIN}\xspace and tabu search.
\paragraph{\textsc{FALP}\xspace}
The fast procedure \textsc{FALP}\xspace builds an unweighted solution in two steps: First, all label candidates are ordered by increasing number of conflicts with other label candidates.
By using the ordered position mode in QGIS, the ordering respects the common preference of candidate positions~\cite{imhof1975} for tie-breaking.
Then the label candidates are visited in this order. Once a label position is chosen, the other candidates of its feature and all label candidates in conflict with it are removed from the ordering; the conflict numbers of the remaining label positions are updated and the ordering is re-sorted.
The implementation in QGIS is incremental.
It maintains the conflict number of each candidate and updates the values accordingly.
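The procedure can be sketched as follows (illustrative Python; for simplicity this sketch re-counts conflicts in every round instead of updating them incrementally as QGIS does):

```python
def falp(candidates, feature_of, conflicts):
    """FALP-style greedy: repeatedly pick the remaining candidate with
    the fewest conflicts among remaining candidates; then drop the other
    candidates of its feature and everything in conflict with it."""
    alive = set(candidates)
    chosen = set()
    while alive:
        # Candidate with the fewest conflicts among remaining candidates.
        c = min(alive,
                key=lambda x: sum(1 for y in conflicts[x] if y in alive))
        chosen.add(c)
        alive.discard(c)
        # Remove sibling candidates of the same feature ...
        alive = {x for x in alive if feature_of[x] != feature_of[c]}
        # ... and everything in conflict with the chosen position.
        alive -= conflicts[c]
    return chosen
```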
\paragraph{\textsc{CHAIN}\xspace}
The local search approach \textsc{CHAIN}\xspace chains multiple modifications in order to escape from local optima.
After building an initial solution with \textsc{FALP}\xspace, improvements of the current labeling are searched by applying a sequence of chained modifications.
A chain of modifications is formed as follows.
First, a (seed) feature is chosen randomly, and it will be labeled (unlabeled) or modified in the current solution.
This move may create new overlaps and therefore may lead to a chain of modifications of the current solution.
%
The chain search temporarily applies these new changes and the local search process continues.
The chain search will stop after a specified number of modifications is reached.
Once the chain is stopped, the best solution reached along the chain will replace the current one.
\paragraph{\textsc{POPMUSIC}\xspace}
The \textsc{POPMUSIC}\xspace algorithm is an implementation of the \textsc{POPMUSIC}\xspace framework presented by Taillard and Voss~\cite{taillard2002popmusic}.
The basic idea applied to the PFLP problem can be summarized as follows: We begin with an initial solution generated by \textsc{FALP}\xspace.
Every feature is now considered as a sub-part of the instance.
We also maintain a list $ L $ of features which is initially empty.
The algorithm iteratively considers features not in $ L $ and executes the \textsc{CHAIN}\xspace algorithm starting from this feature with a bound on the maximum number of labels \textsc{CHAIN}\xspace is allowed to consider in its search.
If this procedure leads to an improvement we remove all labels considered by \textsc{CHAIN}\xspace from $ L $.
In case the run did not further improve the solution we add the feature from which we started \textsc{CHAIN}\xspace to $ L $.
\textsc{POPMUSIC}\xspace terminates once $ L $ contains all features.
Additionally, PAL implements a tabu search paradigm for the \textsc{CHAIN}\xspace procedure to avoid cyclically visiting the same solutions when optimizing from some feature. For further details see~\cite{Alvim2009} and~\cite{ertz2009pal}.
\subsection{Simulation Experiments}\label{sec:statistic}
\begin{table}[b]
\caption{Average running times (in ms) for computing an initial solution (init) or an update (upd), number of initially labeled features (init f), and number of labeled features on average over all runs (feat). Best values are printed in bold.}
\label{tab:init}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|rrrr|rrrr|rrrr|rrrr}
\toprule
&\multicolumn{4}{c|}{Austria}&\multicolumn{4}{c|}{Lower Austria}&\multicolumn{4}{c|}{Vienna Train}&\multicolumn{4}{c}{Vienna Train/Bus}\\
Algorithm & init & upd & init f & feat & init & upd & init f & feat & init & upd & init f & feat & init & upd & init f & feat \\\hline
\textsc{CHAIN}\xspace & 13 & 13 & \bf 179 & \bf 179 & 140 & 146 & \bf 497 & \bf 534 & 38 & 39 & \bf 282 & \bf 288 & 263 & 266 & \bf 351 & \bf 370 \\
\hline
\textsc{FALP}\xspace & 5 & 4 & 176 & 177 & 80 & 79 & 494 & 529 & 19 & 19 & 280 & 285 & 192 & 194 & 349 & 367 \\
\hline
\textsc{GREEDY}\xspace & \bf 0.9& \bf 0.9 & 152 & 159 & \bf 13 & \bf 14 & 388 & 414 & \bf 3 & \bf 3 & 242 & 248 & \bf 18 & \bf 19 & 277 & 283 \\
\hline
\textsc{MIS}\xspace & 14 & 13 & 171 & 175 & 261 & 250 & 457 & 509 & 56 & 57 & 272 & 279 & 701 & 668 & 325 & 347 \\
\hline
\hline
\textsc{KAMIS}\xspace & \bf 123 & \bf 121 & \bf 182 & \bf 182 & \bf 581 & \bf 863 & \bf 523 & \bf 560 & \bf 248 & \bf 249 & \bf 289 & \bf 295 & 57746 & \bf 6272 & \bf 373 & \bf 391 \\
\hline
\textsc{MHS}\xspace & 181 & 164 & \bf 182 & 179 & 11821 & 4257 & \bf 523 & 537 & 734 & 561 & \bf 289 & 288 & 20937 & 7212 & \bf 373 & 373 \\
\hline
\textsc{POPMUSIC}\xspace & 1599 & 1448 & 181 & 181 & 14684 & 15723 & 504 & 541 & 3915 & 3961 & 282 & 289 & \bf 17820 & 18020 & 355 & 373 \\
\hline
\end{tabular}}
\end{table}
%
Our experiment focuses on two aspects.
The first aspect asks how viable our advanced heuristic \textsc{KAMIS}\xspace and the exact approach \textsc{MHS}\xspace are compared to the heuristics implemented in \textsc{QGIS}\xspace.
The second aspect asks which combination of algorithms performs best in an interactive scenario as defined by Problem~\ref{prob:PFLP-Update}.
All runs of the algorithms produce valid, overlap-free labelings.
Our main interest in this experiment lies in the computational performance, the objective value of the solution, and the stability of the updated solutions. We are not claiming that the resulting labelings are competitive with manually labeled maps, as the applied modifications in this simulation study are generated by a random process and not by a cartographic expert.
Yet the algorithmic performance is expected to be comparable for modifications made purposefully to improve a labeling.
Nonetheless, two initial example labelings can be seen in the appendix as Figures~\ref{fig:labelingkamis} and~\ref{fig:labelingpopmusic}.
The first one is produced with \textsc{KAMIS}\xspace, the latter with \textsc{POPMUSIC}\xspace.
In this section, we focus on the data sets for Lower Austria and the denser transport network of Vienna, since they are the largest and the densest label sets, respectively. For the two other data sets our findings are similar.
%
All experiments were run on a standard desktop computer with an eight-core Intel i7-860 CPU clocked at $ 2.8 $ GHz and $ 8 $ GB RAM, running Arch Linux with kernel version $ 5.1.4 $. We compiled \textsc{KAMIS}\xspace, as well as our code together with QGIS 3.7, using gcc version 8.3 and cmake 3.14.4. The compile flags for QGIS and \textsc{KAMIS}\xspace were set to Release. Every test was performed 50 times for each combination of initial and update algorithm.
\paragraph{Initial Solutions}
To compare our algorithms with the existing labeling algorithms as found in \textsc{QGIS}\xspace,
we considered four of the datasets presented in Section~\ref{sec:data}.
Our expectation would be that the greedy heuristics \textsc{CHAIN}\xspace, \textsc{FALP}\xspace, \textsc{GREEDY}\xspace, and \textsc{MIS}\xspace have clearly faster running times compared to \textsc{POPMUSIC}\xspace, \textsc{MHS}\xspace, and \textsc{KAMIS}\xspace,
while the number of labeled features is expected to be higher for the latter algorithms.
We present our findings in Table~\ref{tab:init}.
Our intuitive expectations are largely confirmed for the greedy heuristics.
\textsc{GREEDY}\xspace is clearly the fastest algorithm, but also leaves a large gap in the solution quality,
even compared to the other greedy heuristics.
In terms of running times we have to consider the overhead of building the conflict graph for \textsc{MIS}\xspace.
For the Austria data set this took around $ 11 $ms, for Lower Austria $ 100 $ms, and for the two Vienna data sets it took $ 49 $ms for the sparse and $ 279 $ms for the dense one.
If this overhead could be removed (e.g., in an update run we would not need to recompute the full graph each time), \textsc{MIS}\xspace would run about as fast as \textsc{FALP}\xspace.
In terms of solution quality we also see that \textsc{MIS}\xspace lags behind \textsc{CHAIN}\xspace and \textsc{FALP}\xspace.
Comparing \textsc{FALP}\xspace and \textsc{CHAIN}\xspace we see that \textsc{FALP}\xspace runs roughly twice as fast as \textsc{CHAIN}\xspace,
but the solutions are slightly worse.
In the end \textsc{FALP}\xspace and \textsc{CHAIN}\xspace provide similar results in terms of quality and speed.
For the remaining three algorithms we see that \textsc{POPMUSIC}\xspace provides good solutions in terms of the number of labeled features, even compared with the optimal approach \textsc{MHS}\xspace, and clearly better ones than the greedy heuristics.
For \textsc{KAMIS}\xspace it turned out that its initial solution was in fact always an optimal solution equivalent to \textsc{MHS}\xspace.
Looking at the running times,
we see that surprisingly \textsc{MHS}\xspace and \textsc{KAMIS}\xspace outperform \textsc{POPMUSIC}\xspace by one to two orders of magnitude for all but one data set.
The data set on which \textsc{POPMUSIC}\xspace surpasses \textsc{KAMIS}\xspace and \textsc{MHS}\xspace is also the most dense one,
namely the full transport network of Vienna.
Likely this behavior is due to the fact that \textsc{KAMIS}\xspace in particular exploits small cuts in the graphs very well, where a cut $ E' \subseteq E $ in a graph $G=(V,E)$ is a set of edges such that $ G $ decomposes into two or more disconnected parts after the removal of $E'$.
Intuitively the sparser instances also have smaller cuts, while the denser instances lead to more highly connected conflict graphs with large cuts.
%
%
\paragraph{Modifications}
\begin{figure}[tb]
\centering
\includegraphics{figures/statistics/statistics_lower_austria_wien_modification_labels.pdf}
%
\caption{Labeled features in the Lower Austria data set.}
\label{fig:labeled_LA}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics{figures/statistics/statistics_vienna_transport_modification_labels.pdf}
%
\caption{Labeled features in the Vienna data set of bus, tram, and subway stops.}
\label{fig:labeled_VT}
\end{figure}
For the experiments with modifications we consider five runs of labeling algorithms.
In the first run we produce an initial labeling.
As seen in the previous paragraph, \textsc{KAMIS}\xspace, \textsc{MHS}\xspace, and \textsc{POPMUSIC}\xspace are the natural candidates for this,
since they provide the best solution while still running sufficiently fast.
To simulate manual modifications of labels in the current solution, after each run we choose labels uniformly at random from the complete set of labels.
In each round we will change the font size to $ 20 $ for one percent of the labels,
and for another three percent set it to $ 5 $.
Note that our initial font size is set to $ 10 $.
Furthermore, we pick one percent of the labels and delete them from further consideration.
From the perspective of the conflict graph, these modifications cover all relevant changes: insertion and deletion of conflict edges and deletion of vertices.
It should further be noted that shrinking takes precedence over enlarging, i.e.,
if we decide to enlarge a label, then shrink it and again enlarge it, it will still remain at a font
size of~$ 5 $.
Considering Figures~\ref{fig:labeled_LA} and~\ref{fig:labeled_VT} this explains immediately why on average the number of labeled features increases after each round.
For computing the updates we are especially interested in \textsc{MHS}\xspace, \textsc{GREEDY}\xspace, and \textsc{MIS}\xspace,
as these algorithms consider weights and thus can optimize stability of the previous solution.
%
%
%
To measure stability of an updated labeling we compute the following ratio.
Let $ \mathcal S $ be a labeling and $ \mathcal S' $ a labeling of the modified instance,
then a stable solution keeps $ |\mathcal S \cap \mathcal S'|/|\mathcal S \cup \mathcal S'| $ as close as possible to $ 1.0 $.
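This ratio is the Jaccard similarity of the two label sets and is straightforward to compute (illustrative Python):

```python
def stability(S, S_new):
    """Jaccard similarity of two labelings: 1.0 means identical label
    sets, values near 0 mean the update replaced almost everything."""
    union = S | S_new
    if not union:
        return 1.0          # two empty labelings are trivially identical
    return len(S & S_new) / len(union)
```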
\begin{figure}[tb]
\centering
\includegraphics{figures/statistics/statistics_lower_austria_wien_stability.pdf}
%
\caption{Stability over four rounds of modifications in the Lower Austria data set.}
\label{fig:stability_LA}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics{figures/statistics/statistics_vienna_transport_stability.pdf}
%
\caption{Stability over four rounds of modifications in the Vienna data set of bus, tram, and subway stops.}
\label{fig:stability_VT}
\end{figure}
Figures~\ref{fig:stability_LA} and~\ref{fig:stability_VT} show our findings in regard to this measure.
%
%
We used \textsc{POPMUSIC}\xspace, \textsc{KAMIS}\xspace, and \textsc{MHS}\xspace as initial algorithms and all algorithms as update algorithms, to explore if by random chance the solutions stay stable as well.
Clearly we see this is not the case when comparing with \textsc{MHS}\xspace and \textsc{MIS}\xspace,
as for nearly all possible starting algorithms they manage to keep the ratio at a value of $ 0.8 $ for the transport network of Vienna and $ 0.97 $ for the Lower Austria data set.
In general it seems that \textsc{MIS}\xspace is worse at maintaining stability, while \textsc{MHS}\xspace computes more stable labelings.
Interestingly, for \textsc{KAMIS}\xspace as a starting algorithm and the dense data set of Vienna \textsc{MHS}\xspace heavily changes the solution after the first modification.
This might point to a difference in the solution between \textsc{KAMIS}\xspace and \textsc{MHS}\xspace.
Without further investigation it is hard to determine the exact cause of this change.
If it turns out that \textsc{MHS}\xspace and \textsc{KAMIS}\xspace find very different high quality solutions this could be interesting to cartographers independently of our interactive approach.
The very naive approach of \textsc{GREEDY}\xspace seems in fact to be too simple to keep the solution stable.
Turning to the running times and number of labeled features, consider Table~\ref{tab:init} again.
Comparing \textsc{MHS}\xspace and \textsc{MIS}\xspace we see that \textsc{MHS}\xspace is at a disadvantage in terms of running time, but retains more labels on average and, as just seen, produces more stable results.
On the other hand \textsc{MIS}\xspace is very fast and, for input sets that are not too dense, seems acceptable in terms of stability.
Since these are average values over all rounds of modification we finally consider boxplots of the percentage of labeled features for \textsc{KAMIS}\xspace, \textsc{MHS}\xspace, and \textsc{POPMUSIC}\xspace as starting algorithms and \textsc{KAMIS}\xspace, \textsc{MHS}\xspace, \textsc{GREEDY}\xspace, and \textsc{MIS}\xspace as update algorithms.
Our findings are shown in Figures~\ref{fig:labeled_LA} and~\ref{fig:labeled_VT}.
Remember that \textsc{KAMIS}\xspace does not try to keep the solution stable,
hence in these plots it should rather be seen as a potential optimum for the number of labeled features,
if the algorithms were allowed to disregard the old solution.
As expected we find that \textsc{MHS}\xspace comes closest to this theoretical optimum.
\textsc{MIS}\xspace is not far behind, while \textsc{GREEDY}\xspace is several percent behind.
Correlating with the drop in stability we also find that for \textsc{KAMIS}\xspace as a starting algorithm and \textsc{MHS}\xspace and \textsc{MIS}\xspace as update algorithms we get a large variation in the solution quality on the Lower Austria data set.
\paragraph{Discussion}
In conclusion we can say that it is more than viable to use exact or near-exact approaches for map labeling when compared with sophisticated heuristics.
Especially the cut based maximum independent set approach of \textsc{KAMIS}\xspace is promising in general, as it not only improves the computation times by one to two orders of magnitude over the most sophisticated heuristic \textsc{POPMUSIC}\xspace in QGIS, but also performs consistently better in the optimization goal.
A combination of \textsc{KAMIS}\xspace and a heuristic like \textsc{POPMUSIC}\xspace seems like a very useful future approach.
In such a combination, one could use cuts to find dense areas, which are then solved by \textsc{POPMUSIC}\xspace, while the sparser areas are handled by \textsc{KAMIS}\xspace.
The SAT-based approach of \textsc{MHS}\xspace also has potential, as SAT solvers continue to improve and the solver we used, MaxHS, will surely be surpassed in the coming years.
In terms of greedy heuristics we saw that \textsc{GREEDY}\xspace and \textsc{MIS}\xspace both have their disadvantages.
\textsc{GREEDY}\xspace is ultimately too simplistic an approach to produce good solutions,
while \textsc{MIS}\xspace suffers from the overhead of building the conflict graph.
In a labeling framework though, where the conflict graph would be kept readily available as modifications are made,
clever independent set heuristics may become a viable approach.
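To make concrete what kind of heuristic is meant here, the following is a minimal sketch of a minimum-degree greedy independent-set heuristic on a conflict graph. It is not the \textsc{MIS}\xspace implementation evaluated above; the representation and function name are illustrative only.

```python
# Hypothetical sketch of a minimum-degree greedy independent-set heuristic
# on a label conflict graph; not the MIS implementation evaluated above.
def greedy_independent_set(conflicts):
    """conflicts: dict mapping each feature to the set of features it overlaps."""
    remaining = {v: set(nbrs) for v, nbrs in conflicts.items()}
    chosen = set()
    while remaining:
        # pick the feature with the fewest remaining conflicts
        v = min(remaining, key=lambda u: len(remaining[u]))
        chosen.add(v)
        removed = remaining.pop(v)
        for u in removed:
            remaining.pop(u, None)       # drop all conflicting features
        for nbrs in remaining.values():
            nbrs.difference_update(removed | {v})
    return chosen
```

In a framework that maintains the conflict graph incrementally, only the `remaining` structure around modified features would need to be rebuilt.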
In terms of modifications and interactive labeling we saw that it is possible to use just simple weights to keep the solution stable.
\textsc{MHS}\xspace and \textsc{MIS}\xspace are both suitable algorithms to handle updates, with \textsc{MIS}\xspace clearly being the faster approach.
A viable combination of the two could be to run \textsc{MHS}\xspace only every fifth or tenth iteration of modification.
It is also likely that future maximum independent set frameworks will support weights,
which would make the approach of \textsc{KAMIS}\xspace applicable here as well.
If such weighted frameworks exhibit running times similar to \textsc{KAMIS}\xspace, they would likely provide the best of both worlds: fast running times and stable, high-quality solutions.
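The simple-weights idea for stability can be sketched as follows; the bonus value and the greedy selection below are hypothetical stand-ins for illustration, not the weighting actually used in our experiments.

```python
# Illustrative sketch: bias a weighted greedy selection toward features
# labeled in the previous solution, so that updates stay stable.
# The bonus value 2.0 is an arbitrary choice for this example.
def stable_weights(features, previously_labeled, bonus=2.0):
    """Base weight 1.0 per feature, boosted if the feature was labeled before."""
    return {f: (bonus if f in previously_labeled else 1.0) for f in features}

def weighted_greedy(conflicts, weights):
    """Process features by descending weight, skipping conflicts with chosen ones."""
    chosen, blocked = set(), set()
    for f in sorted(conflicts, key=lambda f: -weights[f]):
        if f not in blocked:
            chosen.add(f)
            blocked |= conflicts[f]
    return chosen
```

Previously labeled features are then preferred whenever they still fit, which is exactly the stability behavior the update algorithms aim for.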
\section{Conclusion}
We have presented a prototype tool for supporting semi-automatic map labeling workflows, together with first experimental results on four possible labeling algorithms and on how to combine them to compute initial labelings and updates after user modifications.
This is underlined by the QGIS project, which recently implemented similar ideas in its label placement engine.
An immediate consequence from our experiments is that targeted and fast update algorithms that aim for label stability are needed to support interactive modifications.
In our future investigations we aim to develop algorithms specifically tailored to optimize the proposed stability criteria.
As a first step, we plan to investigate fast dynamic weighted independent set heuristics,
where the weights better model the stability of the labeling.
Moreover, the labeling algorithms must take into account more advanced and accurate cartographic quality constraints,
also in combination with the stability criteria.
%
Ultimately, this may be fully integrated, e.g., into the QGIS project in order to find its way into practical map production. At that point, meaningful formal user studies with GIS experts on usability as well as final labeling quality and required interaction efforts are needed to validate whether human-in-the-loop optimization for label placement is meeting its expectations.
\paragraph{Acknowledgments}{
This work is
supported by the Austrian
Science Fund (FWF) under Grant P31119.
}
\bibliographystyle{plainurl}
\section{Introduction}
A numbering is a surjective mapping $\gamma :\omega \rightarrow S$
from the natural numbers $\omega$ to a set $S$.
The theory of numberings was started by Ershov in a series of
papers, beginning with \cite{Ershov} and \cite{Ershov2}.
Ershov studied the computability-theoretic properties of numberings,
as generalizations of numberings of the partial computable functions.
In particular, he called a numbering {\em precomplete\/} if
for every partial computable unary function $\psi$ there exists a
computable unary $f$ such that for every~$n$
\begin{equation} \label{precomplete}
\psi(n)\darrow \; \Longrightarrow \; \gamma(f(n))=\gamma(\psi(n)).
\end{equation}
Following Visser, we say that
{\em $f$ totalizes $\psi$ modulo~$\gamma$}.
Ershov showed that Kleene's recursion theorem holds for arbitrary
precomplete numberings.
Visser~\cite{Visser} extended this to his so-called
``anti diagonal normalization theorem'' (ADN theorem).
Another generalization of the recursion theorem is the famous
Arslanov completeness criterion \cite{Arslanov},
that extends the recursion theorem from computable functions to
all functions bounded by an incomplete computably enumerable (c.e.)
Turing degree.
Barendregt and Terwijn~\cite{BarendregtTerwijn} showed that
Arslanov's result also holds for any precomplete numbering.
In Terwijn~\cite{Terwijn} a joint generalization of Arslanov's
completeness criterion and Visser's ADN theorem was proved.
It is currently open whether this joint generalization also
holds for every precomplete numbering.
Classic examples of numberings are numberings of the partial
computable functions. Such a numbering is {\em acceptable\/}
if it can be effectively translated back and forth into the standard
numbering of the p.c.\ functions $\vph_e$.
Rogers~\cite{Rogers1967} proved that a numbering is acceptable
if and only if it satisfies both the enumeration theorem
and parametrization (also known as the S-m-n--theorem).
It follows from this that every acceptable numbering is precomplete.
On the other hand, Friedberg~\cite{Friedberg} showed that there
exists an effective numbering of the p.c.\ functions without repetitions.
Friedberg's 1-1 numbering is not precomplete, as can be seen as follows.
Suppose that $\gamma:\omega\rightarrow \mathcal{P}$ is a 1-1 numbering of
the (unary) p.c.\ functions that is precomplete.
By \eqref{precomplete}, we then have for every p.c.\ function $\psi$
a computable function $f$ such that
$$
\psi(n)\darrow
\; \Longrightarrow \; \gamma(f(n))=\gamma(\psi(n))
\; \Longrightarrow \; f(n)= \psi(n).
$$
The second implication follows because $\gamma$ is 1-1.
So we see that in fact $f$ is a total extension of~$\psi$.
But it is well-known that there exist p.c.\ functions that do not
have total computable extensions.
Hence 1-1 numberings of the p.c.\ functions are never precomplete.
For more about 1-1 numberings see Kummer~\cite{Kummer}.
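The well-known fact used above rests on a diagonal argument, which can be mimicked on a toy scale. The table `phi` below is an invented stand-in for a numbering (with `None` modeling divergence), not an actual effective enumeration of the p.c.\ functions.

```python
# Toy version of the diagonal argument: psi(n) = phi_n(n) + 1 (undefined
# where phi_n(n) is) differs from every phi_n on the diagonal, so no
# phi_n can totally extend psi. The table phi is invented for illustration.
phi = {
    0: lambda n: n,        # identity
    1: lambda n: 7,        # constant
    2: lambda n: None,     # nowhere defined
    3: lambda n: n * n,
}

def psi(n):
    """psi(n) = phi_n(n) + 1 where defined; undefined (None) otherwise."""
    value = phi[n](n)
    return None if value is None else value + 1

# wherever the diagonal is defined, psi differs from phi_n at the point n
for n in phi:
    if phi[n](n) is not None:
        assert psi(n) != phi[n](n)
```

On a genuine numbering the same computation shows that any total extension of $\psi$ would have to avoid every index, which is impossible.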
A topic closely related to numberings is that of computably
enumerable equivalence relations (ceers).
For every numbering $\gamma$ we have the equivalence relation
defined by $n \sim_\gamma m$ if $\gamma(n) = \gamma(m)$. Conversely,
for every countable equivalence relation, we have the numbering of
its equivalence classes. Hence the above terminology about
numberings also applies to ceers.
Lachlan~\cite{Lachlan},
following work of Bernardi and Sorbi~\cite{BernardiSorbi},
proved that all precomplete ceers are computably isomorphic.
For a recent survey about ceers, see Andrews, Badaev, and Sorbi~\cite{ABS}.
In the examples of numberings given above, the set $\omega$
is not merely a set used to number the elements of a set $S$,
but it carries extra structure as the domain of the partial
computable functions, making it into a so-called
partial combinatory algebra (pca).
Combinatory completeness is the characteristic property
that makes a structure with an application operator a pca.
This is the analog of the S-m-n--theorem (parametrization) for the
p.c.\ functions.
In section~\ref{sec:pca} below we review the basic definitions of pca.
We can extend the notion of numbering from $\omega$ to arbitrary
pca's as follows.
A {\em generalized numbering\/} is a surjective mapping
$\gamma\colon \A \rightarrow S$, where $\A$ is a pca and $S$ is a set.
This notion was introduced in Barendregt and Terwijn~\cite{BarendregtTerwijn}.
We also have a notion of precompleteness for generalized
numberings, analogous to Ershov's notion (Definition~\ref{def:precomplete}).
Precompleteness of generalized numberings is related to the topic
of complete extensions. For example, the identity on a pca $\A$
is precomplete if and only if every element of $\A$ (seen as a
function on $\A$) has a total extension in~$\A$.
In section~\ref{sec:smooth} we show that the numbering of functions of a pca
is precomplete, which is the analog of the precompleteness of the
standard numbering of the p.c.\ functions.
In general the functions modulo extensional equivalence do not form
a pca. This prompts the definition of the notion of {\em algebraic\/}
numbering, which is a generalized numbering that preserves the algebraic
structure of the pca.
Below we study generalized numberings in relation to the
algebraic structure of the pca $\A$. Just as in the lambda-calculus,
the notion of extensionality is central here.
A pca is called {\em extensional\/} if $f=g$ whenever
$fx \simeq gx$ for every~$x$. We have a similar notion of
extensionality based on generalized numberings
(Definition~\ref{def:ext}).
In section~\ref{sec:ext} we show that there is a relation between
extensionality (an algebraic property) and precompleteness
(a computability theoretic property).
In section~\ref{sec:strongext} we introduce strong extensionality
(Definition~\ref{def:strext}),
and in section~\ref{sec:aux} we introduce some auxiliary equivalence
relations to aid the comparison between extensionality and algebraicity.
In section~\ref{sec:algext} we investigate the relations between various
notions of extensionality and algebraic numberings.
We will see that neither notion implies the other,
and that they are in a sense complementary notions.
The Friedberg numbering of the p.c.\ functions quoted above
also exists for the class of c.e.\ sets.
In section~\ref{sec:1-1} we discuss the existence of 1-1 numberings
of classes of sets that are uniformly c.e.\ by considering the
complexity of equality on those classes.
Our notation is mostly standard.
In the following, $\omega$ denotes the natural numbers.
$\vph_e$ denotes the $e$-th partial computable
(p.c.) function, in the standard numbering of the p.c.\ functions.
We write $\vph_e(n)\darrow$ if this computation is defined,
and $\vph_e(n)\uarrow$ otherwise.
$W_e = \dom(\vph_e)$ denotes the $e$-th computably enumerable (c.e.) set.
For unexplained notions from computability theory we refer to
Odifreddi~\cite{Odifreddi} or Soare~\cite{Soare}.
For background on lambda-calculus we refer to Barendregt~\cite{Barendregt}.
\section{Partial combinatory algebra} \label{sec:pca}
Combinatory algebra predates the lambda-calculus,
and was introduced by Sch\"{o}nfinkel~\cite{Schoenfinkel}.
It has close connections with the lambda-calculus, and played an
important role in its development.
Partial combinatory algebra (pca) was first studied in
Feferman~\cite{Feferman}.
To fix notation and terminology, we will briefly recall the
definition of a pca, and for a more elaborate treatment refer
to van Oosten~\cite{vanOosten}.
\begin{definition} \label{def:pca}
A {\em partial applicative structure\/} (pas) is a set $\A$ together
with a partial map from $\A\times \A$ to $\A$. We denote the image of
$(a,b)$, if it is defined, by $ab$, and think of this as `$a$ applied
to $b$'. If this is defined we denote this by $ab\darrow$. By
convention, application associates to the left. We write $abc$ instead
of $(ab)c$. {\em Terms\/} over $\A$ are built from elements of $\A$,
variables, and application. If $t_1$ and $t_2$ are terms then so is
$t_1t_2$. If $t(x_1,\ldots,x_n)$ is a term with variables $x_i$, and
$a_1,\ldots,a_n {\in} \A$, then $t(a_1,\ldots,a_n)$ is the term obtained
by substituting the $a_i$ for the~$x_i$. For closed terms
(i.e.\ terms without variables) $t$ and $s$, we write $t \simeq s$ if
either both are undefined, or both are defined and equal.
Here application is \emph{strict} in the sense that for $t_1t_2$ to be
defined, it is required that both $t_1,t_2$ are defined.
We say that an element $f{\in} \A$ is {\em total\/} if $fa\darrow$ for
every $a{\in} \A$.
A pas $\A$ is {\em combinatory complete\/} if for any term
$t(x_1,\ldots,x_n,x)$, $0\leq n$, with free variables among
$x_1,\ldots,x_n,x$, there exists a $b{\in} \A$ such that
for all $a_1,\ldots,a_n,a{\in} \A$,
\begin{enumerate}[\rm (i)]
\item $ba_1\cdots a_n\darrow$,
\item $ba_1\cdots a_n a \simeq t(a_1,\ldots,a_n,a)$.
\end{enumerate}
A pas $\A$ is a {\em partial combinatory algebra\/} (pca) if
it is combinatory complete.
\end{definition}
\begin{theorem} {\rm (Feferman~\cite{Feferman})} \label{Feferman}
A pas $\A$ is a pca if and only if it has elements $k$ and $s$
with the following properties for all $a,b,c\in\A$:
\begin{itemize}
\item $k$ is total and $kab = a$,
\item $sab\darrow$ and $sabc \simeq ac(bc)$.
\end{itemize}
\end{theorem}
Note that $k$ and $s$ are nothing but partial versions of the
familiar combinators from combinatory algebra.
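The defining equations of Theorem~\ref{Feferman} can be checked directly in the total applicative structure of curried Python closures, where application is ordinary function call. This is only a sanity check of the identities, not a pca in any formal sense.

```python
# k and s as curried closures; application is function call.
k = lambda a: lambda b: a                      # k a b = a
s = lambda a: lambda b: lambda c: a(c)(b(c))   # s a b c = a c (b c)

assert k(3)(5) == 3
# s k k acts as the identity: s k k c = k c (k c) = c
assert s(k)(k)(42) == 42
```

In this total structure the definedness conditions of the theorem hold trivially; the partial versions only matter when application can diverge.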
As noted in \cite[p95]{Feferman}, Theorem~\ref{Feferman} has the
consequence that in any pca we can define lambda-terms in the usual
way (cf.\ Barendregt~\cite[p152]{Barendregt}):\footnote{
Because the lambda-terms in combinatory algebra do not have
the same substitution properties as in the lambda calculus,
we use the notation $\lambda^*$ rather than~$\lambda$.
Curry used the notation $[x]$ to distinguish the two
(cf.\ \cite{CurryFeys}).}
For every term $t(x_1,\ldots,x_n,x)$, $0\leq n$, with free variables among
$x_1,\ldots,x_n,x$, there exists a term $\lambda^* x.t$
with variables among $x_1,\ldots,x_n$,
with the property that for all $a_1,\ldots,a_n,a {\in} \A$,
\begin{itemize}
\item $(\lambda^* x.t)(a_1,\ldots, a_n)\darrow$,
\item $(\lambda^* x.t)(a_1,\ldots, a_n)a \simeq t(a_1,\ldots,a_n,a)$.
\end{itemize}
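The standard bracket-abstraction definition of $\lambda^* x.t$ can be executed on symbolic $S,K$-terms. The representation below (strings for atoms, pairs for application) is an invented toy, and the naive `reduce_term` is only meant for the small examples shown.

```python
# A minimal bracket-abstraction sketch over S,K-terms: terms are the
# strings 'S' and 'K', variable/constant names, or application pairs.
def abstract(x, t):
    """Return a term t' such that t' a reduces to t[a/x]."""
    if t == x:
        return (('S', 'K'), 'K')          # S K K behaves as the identity
    if isinstance(t, tuple):
        return (('S', abstract(x, t[0])), abstract(x, t[1]))
    return ('K', t)                        # x does not occur in an atom

def reduce_term(t):
    """Normalize by the K and S rules (naive; fine for these small examples)."""
    while isinstance(t, tuple):
        f, a = reduce_term(t[0]), reduce_term(t[1])
        if isinstance(f, tuple) and f[0] == 'K':
            t = f[1]                                   # K b a -> b
        elif isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            t = ((f[0][1], a), (f[1], a))              # S b c a -> b a (c a)
        else:
            return (f, a)
    return t
```

For instance, `reduce_term((abstract('x', ('f', 'x')), 'a'))` yields the term `('f', 'a')`, matching property (ii) of $\lambda^* x.t$ above.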
The most important example of a pca is Kleene's {\em first model\/} $\K_1$,
consisting of $\omega$ with application defined as $nm = \vph_n(m)$.
Kleene's {\em second model\/} $\K_2$ \cite{KleeneVesley}
consists of the reals (or more conveniently Baire space $\omega^\omega$),
with application $\alpha\beta$ defined as applying the continuous
functional with code $\alpha$ to the real $\beta$.
See Longley and Normann~\cite{LongleyNormann} for a more detailed definition.
In the axiomatic approach to the theory of computation, there is
the notion of a basic recursive function theory (BRFT).
Since this is supposed to model basic computability theory,
it will come as no surprise that every BRFT gives rise to a pca.
In case the domain of the BRFT is $\omega$, it actually contains
a copy of the p.c.\ functions.
See Odifreddi~\cite{Odifreddi} for a discussion of this,
and references to the literature, including the work of
Wagner, Strong, and Moschovakis.
Other pca's can be obtained by relativizing $\K_1$, or by generalizing
$\K_2$ to larger cardinals.
Also, every model of Peano arithmetic gives rise to a pca by considering
$\K_1$ inside the model.
Further constructions of pca's are discussed in
van Oosten and Voorneveld~\cite{vanOostenVoorneveld}, and in
van Oosten~\cite{vanOosten} even more examples of pca's are listed.
Finally, it is possible to define a combination of Kleene's models
$\K_1$ and $\K_2$ (due to Plotkin and Scott) using enumeration operators,
cf.\ Odifreddi~\cite[p857ff]{OdifreddiII}.
\section{Generalized numberings}
A {\em generalized numbering\/} \cite{BarendregtTerwijn} is a surjective mapping
$\gamma\colon \A \rightarrow S$, where $\A$ is a pca and $S$ is a set.
As in the case of ordinary numberings, we have an equivalence relation on
$\A$ defined by $a \sim_\gamma b$ if $\gamma(a) = \gamma(b)$.
As for ordinary numberings, in principle every generalized
numbering corresponds to an equivalence relation on $\A$, and
conversely. However, below we will mainly be interested in
numberings that also preserve the algebraic structure of the pca,
making this correspondence less relevant.
The notion of precompleteness for generalized numberings was
defined in \cite{BarendregtTerwijn}.
By \cite[Lemma 6.4]{BarendregtTerwijn},
the following definition is equivalent to it.
\begin{definition} \label{def:precomplete}
A generalized numbering $\gamma \colon \A \rightarrow S$ is
{\em precomplete\/}\footnote{
There is also a notion of completeness for numberings, which we
will not need in this paper.
A precomplete generalized numbering $\gamma$ is {\em complete\/} if
there is a special element $s{\in}S$ (not depending on $b$)
such that in addition to \eqref{precomplete2},
$\gamma(fa) = s$ for every $a$ with $ba\uarrow$.}
if for every $b{\in} \A$ there exists a total element $f{\in} \A$
such that for all $a{\in} \A$,
\begin{equation} \label{precomplete2}
b{a}\darrow \; \Longrightarrow \; f{a} \sim_\gamma b{a}.
\end{equation}
In this case, we say that {\em $f$ totalizes $b$ modulo~$\sim_\gamma$\/}.
\end{definition}
In \cite{BarendregtTerwijn}, generalized numberings were used to
prove a combination of a fixed point theorem for pca's (due to
Feferman~\cite{Feferman}), and Ershov's recursion theorem \cite{Ershov2}
for precomplete numberings on~$\omega$.
Every pca $\A$ has an associated generalized numbering,
namely the identity $\gamma_\A: \A\rightarrow \A$.
In section~\ref{sec:ext} we will see examples of when the numbering
$\gamma_\A$ is or is not precomplete.
\section{Algebraic numberings} \label{sec:smooth}
\begin{definition} \label{def:e}
Let $\A$ be a pca. Define an equivalence on $\A$ by
$a\sim_e b$ if
$$
\fa x\in\A (ax \simeq bx).
$$
\end{definition}
The following result generalizes the precompleteness of the
numbering $n\mapsto \vph_n$ of the partial computable functions.
\begin{proposition} \label{prop:precomplete}
The natural map $\gamma_e: \A \rightarrow \A/{\sim_e}$ is precomplete.
Note that $\gamma_e$ is a generalized numbering of the equivalence
classes.
\end{proposition}
\begin{proof}
Let $b\in\A$. We have to prove that there is a total $f\in \A$ such
that when $ba\darrow$ then $fa\sim_e ba$, i.e.\
$\fa c{\in}\A (fac \simeq bac)$.
This follows from the combinatory completeness of $\A$:
Consider the term $bxy$. By combinatory completeness there exists $f\in \A$
such that for all $a,c\in \A$, $fa\darrow$ and $fac \simeq bac$.
\end{proof}
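In Kleene-style models this proof amounts to delaying the application $ba$ inside a closure. Here is a hedged sketch in an invented toy model (not $\K_1$ itself), with partial elements as curried Python functions and raising an exception playing the role of undefinedness.

```python
# Sketch of the totalization in the proof above, in a toy model where
# partial elements are curried Python functions and raising = "undefined".
def totalize(b):
    """The total f of the proof: f a is always a closure, and
    f a c = b a c whenever the right-hand side is defined."""
    return lambda a: (lambda c: b(a)(c))

def b(a):
    if a == 0:
        raise ValueError("undefined")   # b applied to 0 is undefined
    return lambda c: a + c

f = totalize(b)
assert callable(f(0))            # f(0) is defined even though b(0) is not
assert f(2)(3) == b(2)(3) == 5   # f a agrees with b a wherever the latter is defined
```

The closure returned by `f(a)` is always defined; divergence, if any, only surfaces when the second argument is supplied, exactly as in the term $bxy$.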
\begin{remark} \label{remark}
Note that $\A/{\sim_e}$ is in general {\em not\/} a pca, at least not
with the natural application defined by
$\overline a \cdot \overline b = \overline{a\cdot b}$.
For example, in Kleene's first model $\K_1$ we have for
$n,m\in \omega$ that $n\sim_e m$ if $\vph_n = \vph_m$.
Now we can certainly have that $m\sim_e m'$, but
$\vph_n(m) \neq \vph_n(m')$, so we see that the natural definition
of application $\overline n \cdot \overline m = \overline{n\cdot m}$ in
$\omega/{\sim_e}$ is not independent of the choice of representative~$m$.
\end{remark}
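A finite caricature of this remark: in the toy table below (an invented stand-in for the numbering $\vph$, not an effective enumeration), codes 0 and 1 compute the same function, yet applying code 2 distinguishes them, so application does not descend to $\sim_e$-classes.

```python
# Toy illustration of the remark: n ~_e m but z·n != z·m for some code z,
# so application is not well-defined on ~_e-classes. The table is invented.
phi = {
    0: lambda x: x + 1,
    1: lambda x: x + 1,   # a second code of the same function
    2: lambda x: x,       # the identity on codes: separates 0 from 1
}
app = lambda n, m: phi[n](m)   # application n·m = phi_n(m)

# 0 ~_e 1 on this fragment ...
assert all(app(0, x) == app(1, x) for x in range(10))
# ... but 2·0 != 2·1, so the class application is ill-defined
assert app(2, 0) != app(2, 1)
```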
The previous considerations prompt the following definition.
First we extend the definition of $\sim_\gamma$ from $\A$ to the set of
closed terms over $\A$ as follows:
\begin{definition} \label{def:equivalent}
$a \sim_\gamma b$ if either $a$ and $b$ are terms that are both
undefined, or $a,b\in\A$ and $\gamma(a) = \gamma(b)$.
\end{definition}
This extended notion $\sim_\gamma$ is the analog of the
Kleene equality $\simeq$.
\begin{definition} \label{def:smooth}
Call a generalized numbering $\gamma:\A\rightarrow S$ {\em algebraic\/}
if $\sim_\gamma$ is a congruence, i.e.\
$$
a\sim_\gamma a' \wedge b\sim_\gamma b' \Longrightarrow ab \sim_\gamma a'b'.
$$
In this case, we also call the pca $\A$ $\gamma$-algebraic.
\end{definition}
If $\gamma$ is algebraic, we can factor out by $\sim_\gamma$, as in algebra:
\begin{proposition} \label{prop:divide}
Suppose that $\A$ is $\gamma$-algebraic. Then $\A/{\sim_\gamma}$ is again a pca.
\end{proposition}
\begin{proof}
Define application in $\A/{\sim_\gamma}$ by
$\overline a\cdot\overline b = \overline{a\cdot b}$.
By algebraicity this is well-defined.
Combinatory completeness follows because we have the combinators
$\overline s$ and $\overline k$.
\end{proof}
We note that for a generalized numbering $\gamma:\A\rightarrow S$
there is in general no relation between the notions of precompleteness
and algebraicity. This follows from results in the following
sections.
\begin{itemize}
\item algebraic does not imply precomplete.
Namely, let $\gamma_{\K_2}$ be the identity on
Kleene's second model~$\K_2$.
Then $\gamma_{\K_2}$ is trivially algebraic.
However, by \cite{BarendregtTerwijn}, the numbering $\gamma_{\K_2}$
is not precomplete.
\item precomplete does not imply algebraic.
Otherwise, by Proposition~\ref{prop}, $\gamma$-extensionality of $\A$
would imply that $\A$ is $\gamma$-algebraic,
contradicting Proposition~\ref{extsmooth}.
Alternatively: the canonical map $\gamma_e: \A \rightarrow \A/{\sim_e}$
is precomplete by Proposition~\ref{prop:precomplete}.
However, it is not algebraic, since otherwise we would have
by Proposition~\ref{prop:divide} that $\A/{\sim_e}$ is a pca,
which in general it is not by Remark~\ref{remark}.
\end{itemize}
\section{Precompleteness and extensionality} \label{sec:ext}
We can think of combinatory completeness of a pca as an analog of the
S-m-n--theorem (also called the parametrization theorem)
from computability theory \cite{Odifreddi}.
Suppose $\A$ is a pca, and $\gamma:\A\rightarrow S$ is a generalized numbering.
Every element $a\in\A$ represents a partial function on $\A$,
namely $x\mapsto ax$. In analogy to $\K_1$, one could call these the
partial $\A$-computable functions.
Note that the precompleteness of the numbering $n\mapsto\vph_n$ of the
partial computable functions follows from the S-m-n--theorem,
and that the precompleteness of the numbering of the partial
$\A$-computable functions follows likewise from the
combinatory completeness of $\A$ (see Proposition~\ref{prop:precomplete}).
Note that the identity on $\K_1$ is not precomplete,
as there exist p.c.\ functions that do not have a total computable extension.
In \cite{BarendregtTerwijn} it was shown that the identity
on Kleene's second model $\K_2$ is also not precomplete.
In general, every pca $\A$ has the identity $\gamma_\A: \A\rightarrow \A$
as an associated generalized numbering, and
$\gamma_\A$ is precomplete if and only if every element $b\in \A$
has a total extension $f\in \A$.
Faber and van Oosten \cite{FabervanOosten} showed that the latter
is equivalent to the statement that $\A$ is isomorphic to a
total pca. Here ``isomorphic'' refers to isomorphism in the
category of pca's introduced in Longley~\cite{Longley}.
\begin{definition} \label{def:ext}
Let $\A$ be a pca, and $\gamma:\A\rightarrow S$ a generalized numbering.
We say that $\A$ is {\em $\gamma$-extensional\/} if
\begin{equation} \label{ext}
\fa a\in\A (fa \simeq ga) \Longrightarrow f\sim_\gamma g
\end{equation}
for all $f,g\in \A$.
\end{definition}
In other words, $\A$ is $\gamma$-extensional if
the relation $\sim_\gamma$ extends the relation $\sim_e$
from Definition~\ref{def:e}.
For the special case where $\gamma:\A\rightarrow\A$ is the identity, this is called
{\em extensionality\/} of $\A$, cf.\ Barendregt~\cite[p1094]{Barendregt1977}.
\begin{proposition} \label{prop}
Suppose $\A$ is $\gamma$-extensional. Then $\gamma$ is precomplete.
\end{proposition}
\begin{proof}
This is similar to Proposition~\ref{prop:precomplete}.
Given $b\in\A$, we have to prove that there exists a total $f\in \A$
such that for every $a\in\A$, $fa\sim_\gamma ba$ whenever $ba\darrow$.
Consider the term $bxy$.
By combinatory completeness of $\A$ there exists a total $f\in\A$ such that
$fac \simeq bac$ for all $a,c\in\A$.
Now suppose $ba\darrow$. It follows from
$\gamma$-extensionality of $\A$ that $fa\sim_\gamma ba$.
Hence $\gamma$ is precomplete.\footnote{
Alternatively, we could derive Proposition~\ref{prop}
from Proposition~\ref{prop:precomplete} by noticing that
if $\gamma,\gamma' : \A \rightarrow S$ are generalized numberings
such that $\sim_{\gamma}$ extends $\sim_{\gamma'}$, and
$\gamma'$ is precomplete, then also $\gamma$ is precomplete.
By Proposition~\ref{prop:precomplete} we have that
$\gamma' = \gamma_e$ is precomplete, and if $\A$ is $\gamma$-extensional
then $\sim_\gamma$ extends $\sim_e$, so it follows that
$\gamma$ is precomplete.}
\end{proof}
In particular, we see from Proposition~\ref{prop}
that the identity $\gamma_\A$ on $\A$ is
precomplete if $\A$ is extensional.
It is possible that a generalized numbering $\gamma:\A\rightarrow S$
is precomplete for some other reason than $\A$ being $\gamma$-extensional.
For example, suppose that $\A$ is a total pca.
Then $\gamma$ is trivially precomplete
(this is immediate from Definition~\ref{def:precomplete}),
but a total pca $\A$ need not be extensional, as the next proposition shows.
\begin{proposition} \label{prop:converse}
There exists a generalized numbering $\gamma:\A\rightarrow S$ that
is precomplete, but such that $\A$ is not $\gamma$-extensional.
\end{proposition}
\begin{proof}
This follows from the fact that there exists a total pca that is not
extensional.
For example, let $\A$ be a model of the lambda calculus. This certainly
does not have to be extensional, for example the graph model
$P\omega$ is not extensional, cf.\ \cite[p474]{Barendregt}.
(An example of a model of the lambda calculus that {\em is\/} extensional
is Scott's model $D_\infty$.)
Another example comes from the set of terms $\mathfrak{M}(\beta\eta)$
in the lambda calculus.
This combinatory algebra is extensional, by inclusion of the $\eta$-rule.
However, the set of {\em closed\/} terms $\mathfrak{M}^0(\beta\eta)$ is
not extensional by Plotkin~\cite{Plotkin1974}.
\end{proof}
Proposition~\ref{prop:converse} shows that the converse of
Proposition~\ref{prop} does not hold.
By the results of \cite{BarendregtTerwijn}, combinatory completeness
of $\A$ does not imply precompleteness of $\gamma_\A$.
Conversely, precompleteness of $\gamma_\A$ also does not imply
combinatory completeness of $\A$.
Namely, simply take any total applicative system $\A$ that is not
a pca.\footnote{An example of such a system is the set
$\A=\omega^{<\omega}$ of finite strings of numbers, with concatenation
of strings as application. This is a pure term model,
in which application of terms can only increase the length of terms.
This implies that this pas is not combinatory complete, since a
combinator $k$ would have to reduce the length of terms.}
Note that the identity $\gamma_\A:\A\rightarrow\A$ is precomplete,
as the pas $\A$ is total.
\section{Strong extensionality}\label{sec:strongext}
Given a generalized numbering $\gamma:\A\rightarrow S$,
we have two kinds of equality on $\A$: $a\simeq b$ and $a\sim_\gamma b$.
The notion of $\gamma$-extensionality is based on the former.
We obtain a stronger notion if we use the latter, where we use
the extended notion of $\sim_\gamma$ for closed terms from
Definition~\ref{def:equivalent}.
\begin{definition} \label{def:strext}
We call $\A$ {\em strongly $\gamma$-extensional\/} if
$$
\fa x (fx \sim_\gamma gx) \Longrightarrow f\sim_\gamma g
$$
for every $f,g\in\A$.
\end{definition}
\begin{theorem} \label{strictext}
Strong $\gamma$-extensionality implies $\gamma$-extensionality,
but not conversely.
\end{theorem}
\begin{proof}
First, strong $\gamma$-extensionality implies $\gamma$-extensionality:
$\fa x \; fx \simeq gx$ implies
$\fa x \; fx \sim_\gamma gx$, so the premiss of
strong $\gamma$-extensionality is weaker than that of $\gamma$-extensionality.
To see that the implication is strict, we exhibit a pca $\A$ that
is $\gamma$-extensional but not strongly $\gamma$-extensional.
Take $\A$ to be Kleene's first model $\K_1$, and let $\gamma = \gamma_e$,
the numbering of equivalence classes from Proposition~\ref{prop:precomplete}.
That $\A$ is $\gamma_e$-extensional is trivial, since
$f\sim_e g$ means precisely $\fa x \; fx\simeq gx$.
To see that $\A$ is not strongly $\gamma_e$-extensional,
let $d,e\in \A$ be such that $d\neq e$ and
$\fa x \; dx\simeq ex$, and define
$f = kd$ and $g=ke$, with $k$ the combinator.
Then $fx=d$ and $gx=e$, hence
$\fa x \; fx \sim_e gx$ because $d\sim_e e$.
But not $\fa x \; fx \simeq gx$ because $d\neq e$,
so $f \not\sim_e g$.
\end{proof}
\section{Left and right equivalences}\label{sec:aux}
For the discussion below (and also to aid our thinking),
we introduce two equivalence relations.
Let $\A$ be a pca, and $\gamma:\A\rightarrow S$ a generalized numbering.
\begin{definition}
We define two kinds of equivalence relations on $\A$,
corresponding to right and left application:
\begin{itemize}
\item \makebox[1.45cm][l]{$f\sim_{R_\gamma} g$} if $\fa x \; fx \sim_\gamma gx$.
\item \makebox[1.45cm][l]{$f\sim_{L_\gamma} g$} if $\fa z \; zf \sim_\gamma zg$.
\end{itemize}
\end{definition}
Recall the relation $\sim_e$ from section~\ref{sec:smooth}.
Note that $\sim_e$ is equal to $\sim_{R_\gamma}$ for
$\gamma$ the identity.
With these equivalences we can succinctly express extensionality
as follows:
\begin{align*}
\text{$\A$ is $\gamma$-extensional} &
\text{ if $f \sim_e g \Longrightarrow f\sim_\gamma g$,} \\
\text{$\A$ is strongly $\gamma$-extensional} &
\text{ if $f \sim_{R_\gamma} g \Longrightarrow f\sim_\gamma g$.}
\end{align*}
\begin{proposition} \label{propLR}
For all $f,g\in\A$ we have
$f\sim_{L_\gamma} g \Longrightarrow f\sim_{R_\gamma} g$.
\end{proposition}
\begin{proof}
For every $x$, define $z_x = \lambda^* h. hx$.
(Note that in every pca we can define such lambda terms,
cf.\ section~\ref{sec:pca}.)
Then
\begin{align*}
f \sim_{L_\gamma} g &\Longrightarrow \fa x\; z_x f \sim_\gamma z_x g \\
&\Longrightarrow \fa x\; fx \sim_\gamma gx \\
&\Longrightarrow f \sim_{R_\gamma} g.
\qedhere
\end{align*}
\end{proof}
\section{Algebraic versus extensional} \label{sec:algext}
For a given pca $\A$ and a generalized numbering $\gamma$ on $\A$,
note that the following hold:
If $\A$ is $\gamma$-algebraic, then for every $f,g\in\A$,
\begin{align}
f\sim_\gamma g &\Longrightarrow f\sim_{R_\gamma} g, \label{s1} \\
f\sim_\gamma g &\Longrightarrow f\sim_{L_\gamma} g. \label{s2}
\end{align}
This holds because in Definition~\ref{def:smooth},
we can either take the right sides equal, obtaining \eqref{s1},
or take the left sides equal, obtaining~\eqref{s2}.
Also note that \eqref{s2} actually implies \eqref{s1}
by Proposition~\ref{propLR}.
We could call \eqref{s1} {\em right-algebraic\/} and
\eqref{s2} {\em left-algebraic}.
\begin{proposition}
$\gamma$-algebraic is equivalent to \eqref{s2}.
\end{proposition}
\begin{proof}
That \eqref{s2} follows from $\gamma$-algebraicity was noted above.
Conversely, assume \eqref{s2}, and hence also \eqref{s1}
by Proposition~\ref{propLR}, and
suppose that $a\sim_\gamma a'$ and $b\sim_\gamma b'$.
We have to prove that $ab \sim_\gamma a'b'$. Indeed we have
\begin{align*}
ab &\sim_\gamma a'b &&\text{by \eqref{s1}} \\
&\sim_\gamma a'b' &&\text{by \eqref{s2}}.
\qedhere
\end{align*}
\end{proof}
On the other hand, if $\A$ is strongly $\gamma$-extensional, we have
\begin{align}
f\sim_{R_\gamma} g &\Longrightarrow f\sim_\gamma g \label{e3}
\end{align}
which is the converse of \eqref{s1}.
So we see that in a sense, the notions of algebraicity and
extensionality are complementary.
We now show that neither of them implies the other.
\begin{proposition} \label{extsmooth}
$\gamma$-extensional does not imply $\gamma$-algebraic.
\end{proposition}
\begin{proof}
Consider Kleene's first model $\K_1$, and let $\gamma=\gamma_e$ be
the numbering from Proposition~\ref{prop:precomplete}.
Every pca is trivially $\gamma_e$-extensional,
as $f\sim_e g \Rightarrow f\sim_e g$.
However, $\K_1$ is not $\gamma_e$-algebraic. Namely, \eqref{s2} above
does not hold: There are $n,m\in \K_1$ such that
$n\sim_e m$, i.e.\ $n$ and $m$ are codes of the same partial computable
function, but $n\neq m$, so that $\fa z \; zn\sim_e zm$ does not hold:
There is a p.c.\ function $\vph$ such that $\vph(n) \not\sim_e \vph(m)$.
\end{proof}
We can strengthen Proposition~\ref{extsmooth} to the following.
\begin{theorem} \label{strongextsmooth}
Strong $\gamma$-extensional does not imply $\gamma$-algebraic.
\end{theorem}
\begin{proof}
We show that \eqref{e3} does not imply \eqref{s2}.
As a pca we take Kleene's first model $\K_1$, and we define a
generalized numbering $\gamma$ on it as follows.
We start with the equivalence $\sim_e$ on $\K_1$,
and we let $\sim_\gamma$ be the smallest extension of $\sim_e$
such that
\begin{equation} \label{fixpoint}
f\sim_\gamma g \Longleftrightarrow \fa x \; fx\sim_\gamma gx.
\end{equation}
(Here we read $fx\sim_\gamma gx$ as in Definition~\ref{def:equivalent}.)
The equivalence relation $\sim_\gamma$ is the smallest fixed point of
the monotone operator that, given an equivalence $\sim$ on $\K_1$
that extends $\sim_e$, defines a new equivalence $\approx$ by
\begin{equation} \label{operator}
f\approx g \Longleftrightarrow \fa x \; fx \sim gx.
\end{equation}
The existence of $\sim_\gamma$ is then guaranteed by the Knaster-Tarski
theorem on fixed points of monotone operators~\cite{KnasterTarski}.
Note that by Remark~\ref{remark}, $\K_1/{\sim_e}$ is not a pca,
and neither are the extensions $\K_1/{\approx}$, but this is not
a problem for the construction \eqref{operator}, since the
application $fx$ keeps taking place in the pca~$\K_1$.
Note that by \eqref{fixpoint} we have that
$\K_1$ is strongly $\gamma$-extensional.
We claim that \eqref{s2} fails for $\gamma$, and hence that
$\K_1$ is not $\gamma$-algebraic.
First we observe that $\gamma$ is not trivial, i.e.\
does not consist of only one equivalence class.
Namely, let $ax\uarrow$ for every~$x$, and let $b\in \K_1$
be total. Then obviously $\fa x \; ax\sim_\gamma bx$ does not hold,
hence by \eqref{fixpoint} we have $a\not\sim_\gamma b$.
For the failure of \eqref{s2} we further need the existence of
$f\sim_\gamma g$ such that $f\neq g$.
Such $f$ and $g$ exist, since they already exist for $\sim_e$,
and $\sim_\gamma$ extends~$\sim_e$.
Now let $a\not\sim_\gamma b$ (the existence of which we noted above),
and let $z$ be a code of a partial computable function such that
$zf=a$ and $zg=b$.
Then $zf\not\sim_\gamma zg$, hence $\fa z \; zf\sim_\gamma zg$ does
not hold, and thus \eqref{s2} fails.
\end{proof}
\begin{corollary} \label{cor}
\eqref{s2} implies \eqref{s1}, but not conversely.
\end{corollary}
\begin{proof}
\eqref{s2} implies \eqref{s1} by Proposition~\ref{propLR}.
In the counterexample of Theorem~\ref{strongextsmooth}
the equivalence \eqref{fixpoint} holds,
so both \eqref{s1} and \eqref{e3} hold, but \eqref{s2} does not.
\end{proof}
\begin{proposition}
$\gamma$-algebraic does not imply $\gamma$-extensional.
\end{proposition}
\begin{proof}
Let $\A$ be a pca that is not extensional (such as Kleene's $\K_1$),
and let $\gamma$ be the identity on $\A$.
Every pca is always $\gamma$-algebraic, so this provides a
counterexample to the implication.
\end{proof}
The proof of Theorem~\ref{strongextsmooth} shows that
\eqref{e3} does not imply \eqref{s2}.
Since \eqref{s2} strictly implies \eqref{s1} by Corollary~\ref{cor},
a stronger statement would be to show that
\eqref{e3} does not imply~\eqref{s1}. We can obtain this with a
variation of the earlier proof (though this construction
is less natural, and only serves a technical purpose).
\begin{theorem}
Strong $\gamma$-extensional \eqref{e3} does not imply
right-al\-geb\-raic~\eqref{s1}.
\end{theorem}
\begin{proof}
We show that \eqref{e3} does not imply \eqref{s1}.
As before, we take Kleene's first model $\K_1$, and we define a
generalized numbering $\gamma$ on it.
Now we do not want the equivalence \eqref{fixpoint} to hold,
so we do not start with the equivalence relation $\sim_e$.
Let $f,g\in\K_1$ and $x\in\omega$ be such that
$fx\darrow$ is total, and $gx\uarrow$.
Start with $f \sim_\gamma g$, so that $\gamma$ equates $f$ and $g$
and nothing else.
Let $\sim_\gamma$ be the smallest extension of this equivalence
relation such that
\begin{equation*}
\fa x (fx\sim_\gamma gx) \Longrightarrow f\sim_\gamma g,
\end{equation*}
where again, we read $fx\sim_\gamma gx$ as in
Definition~\ref{def:equivalent}.
As before, the equivalence relation $\sim_\gamma$ exists by the
Knaster-Tarski theorem~\cite{KnasterTarski}.
This ensures that $\gamma$ satisfies~\eqref{e3}.
We claim that $\fa x \; fx\sim_\gamma gx$ does not hold,
and hence that \eqref{s1} fails.
Note that we would only have $\fa x \; fx\sim_\gamma gx$ if
at some stage $\fa x,y \; fxy\sim_\gamma gxy$ would hold.
However, since $fx$ is total, $fxy$ is always defined,
whereas $gxy$ is never defined by choice of~$x$.
Hence, by Definition~\ref{def:equivalent}, $fxy\not\sim_\gamma gxy$,
whatever $\gamma$ may be.
\end{proof}
Consider the following property, which is the converse of
the implication from Proposition~\ref{propLR}:
\begin{equation}
f\sim_{R_\gamma} g \Longrightarrow f\sim_{L_\gamma} g. \label{eiggamma}
\end{equation}
This property expresses that whenever $f$ and $g$ denote the
same function, they are {\em inseparable\/} in the pca
(compare Barendregt~\cite[p48]{Barendregt}).
Combining algebraicity and extensionality, we obtain the following relation.
\begin{proposition} \label{converse}
$\gamma$-algebraic + strongly $\gamma$-extensional $\Longrightarrow$ \eqref{eiggamma}.
\end{proposition}
\begin{proof}
By $\gamma$-algebraicity we have \eqref{s2}, hence
\begin{align*}
f\sim_{R_\gamma} g &\Longrightarrow f\sim_\gamma g &&\text{by strong $\gamma$-extensionality}\\
&\Longrightarrow f\sim_{L_\gamma} g &&\text{by \eqref{s2}.}
\qedhere
\end{align*}
\end{proof}
Note that the identity $\gamma_\A$ is algebraic for any pca $\A$.
So any extensional $\A$ (meaning $\gamma_\A$-extensional,
which in this case coincides with strongly $\gamma_\A$-extensional)
is an example where the conditions of Proposition~\ref{converse} hold.
\section{A note on 1-1 numberings of uniformly c.e.\ classes}\label{sec:1-1}
A class of c.e.\ sets $\LL$ is called {\em uniformly c.e.\/} if
it admits a computable numbering of its indices, i.e.\ if it
is of the form
$$
\LL=\{W_{f(n)}:n\in\omega\}
$$
with $f$ a computable function.
Such a numbering $f$ is called 1-1 if it has no
repetitions, i.e.\ if the sets $W_{f(n)}$ are pairwise different.
A classic question is which uniformly c.e.\ classes admit a \mbox{1-1} numbering.
For a list of references about this topic see
Odifreddi~\cite[p228 ff.]{Odifreddi} and Kummer \cite{Kummer}.
Friedberg~\cite{Friedberg} showed that the class of all c.e.\ sets
has a 1-1 numbering.
The difficulty of course lies in the fact that equality for c.e.\ sets
is hard to decide: The set $\{(n,m)\mid W_n=W_m\}$
is $\Pi^0_2$-complete, cf.\ Soare~\cite{Soare}.
Here we prove two results relating the problem above to the
complexity of the equality relation.
\begin{proposition}
If for an infinite uniformly c.e.\ class $\LL$ the equality relation is
$\Pi^0_1$ then $\LL$ has a 1-1 numbering.
\end{proposition}
\begin{proof}
Let $f$ be computable such that $\LL=\{W_{f(n)}:n\in\omega\}$.
The statement that the equality relation of $\LL$ is $\Pi^0_1$
means that the set
$$
U=\{(n,m) \mid W_{f(n)}\neq W_{f(m)}\}
$$
is c.e. Enumerate $\LL$ as follows. Enumerate $f(n)$ in $\LL$ if
and only if $(\forall m<n)\,[(n,m)\in U]$.
This is clearly a computable enumeration because $U$ is c.e.
Also, it is easy to see that for every set in $\LL$,
the code $f(n)$ is enumerated
precisely for the minimal $n$ such that $f(n)$ is a code of this set. This
proves that the enumeration is a 1-1 numbering.
\end{proof}
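The enumeration procedure in this proof can be sketched in Python on a toy family of finite sets standing in for the $W_{f(n)}$; the names (\texttt{sets}, \texttt{enumerate\_U}, \texttt{one\_one\_numbering}) and the stage-wise simulation of the c.e.\ set $U$ by a direct inequality test are illustrative assumptions, not part of the paper.

```python
# Toy uniformly given family: index n -> a finite set, standing in for W_{f(n)}.
sets = [{0}, {1}, {0}, {2}, {1}]   # indices 2 and 4 repeat earlier sets

def enumerate_U(stages):
    """Stage-wise enumeration of U = {(n, m) : W_{f(n)} != W_{f(m)}}; only
    c.e. in general, but decidable in this finite toy example."""
    found = []
    for _ in range(stages):
        for n in range(len(sets)):
            for m in range(len(sets)):
                if sets[n] != sets[m] and (n, m) not in found:
                    found.append((n, m))
                    yield (n, m)

def one_one_numbering():
    """Emit index n once (n, m) has appeared in U for every m < n,
    i.e. once f(n) is known to code a set different from all earlier ones."""
    seen = set()
    emitted = []
    for pair in enumerate_U(2):
        seen.add(pair)
        for n in range(len(sets)):
            if n not in emitted and all((n, m) in seen for m in range(n)):
                emitted.append(n)
    return emitted

# only the first code of each distinct set is emitted: [0, 1, 3]
```

As in the proof, duplicates are never emitted, because the required pairs $(n,m)$ with $W_{f(n)}=W_{f(m)}$ never enter $U$.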
\begin{proposition}
A uniformly c.e.\ class $\LL$ for which the equality relation is
$\Sigma^0_1$ does not necessarily have a 1-1 numbering.
\end{proposition}
\begin{proof}
Define a uniformly c.e.\ class $\LL$ as follows.
We want to ensure that $\vph_e$ is not a 1-1 enumeration of $\LL$.
To this end, for every $e$ we put two codes $x_e$ and $y_e$ of c.e.\ sets
into $\LL$, with $W_{y_e}=\{2e, 2e+1\}$.
The code $x_e$ is defined by the following enumeration procedure
for $W_{x_e}$. Enumerate $2e$ into $W_{x_e}$. Search for two
different codes $a$, $b$ in the range of $\varphi_e$ such that
$2e\in W_a$ and $2e\in W_b$. If such $a$ and $b$ are found, enumerate
$2e+1$ into $W_{x_e}$. The class $\LL$ thus defined is uniformly c.e.\
and has a $\Sigma^0_1$ equality relation, because to find out whether
two sets in $\LL$ are equal, enumerate them and compare the first
even elements enumerated (every set in $\LL$ contains exactly one
even element). If these are equal, say they both equal $2e$,
then we know that the two sets are $W_{x_e}$ and $W_{y_e}$, and
these are equal if and only if the $\Sigma^0_1$-event from the
enumeration procedure of $W_{x_e}$ occurs.
To prove that no computable function 1-1 enumerates $\LL$,
fix $e$ such that $\varphi_e$ is total. Now if
two different codes $a$ and $b$ occur in the range of $\varphi_e$ such
that $2e\in W_a\cap W_b$, then $W_{x_e}=W_{y_e}$ by
definition of $x_e$,
hence $\varphi_e$ does not 1-1 enumerate $\LL$. In the
case that such $a$ and $b$ do not appear in ${\rm range}(\varphi_e)$,
it holds that $W_{x_e}\neq W_{y_e}$, hence $\varphi_e$ does not
enumerate all the elements of $\LL$.
\end{proof}
% arXiv:2104.00385
\section{Introduction}
\label{sec:introduction}
In the last decade, the tasks (or objectives) required of robots have become steadily more complex.
For such next-generation robot control problems, traditional model-based control like~\cite{kobayashi2018unified} seems to be reaching its limit due to the difficulty of modeling complex systems.
Model-free or learning-based control like~\cite{itadera2021towards} has been expected to resolve these problems in recent years.
In particular, reinforcement learning (RL)~\cite{sutton2018reinforcement} is one of the most promising approaches to this end, and indeed, RL integrated with deep neural networks~\cite{lecun2015deep}, so-called deep RL~\cite{mnih2015human}, achieved several complex tasks:
e.g. human-robot interaction~\cite{modares2015optimized}; manipulation of deformable objects~\cite{tsurumine2019deep}; and manipulation of various general objects from scratch~\cite{kalashnikov2018scalable}.
In principle, RL makes an agent optimize a policy (a.k.a. controller) that stochastically samples an action (a.k.a. control input) depending on the state, i.e. the result of the interaction between the agent and the environment~\cite{sutton2018reinforcement}.
Generally speaking, therefore, the policy to be optimized can be regarded as a feedback (FB) controller.
Of course, the policy is more conceptual and general than traditional FB controllers such as those for regulation and tracking, but it is still a mapping from state to action.
Such an FB policy inherits the drawbacks of the traditional FB controllers, i.e. the sensitivity to sensing failures~\cite{sugimoto2020relaxation}.
For example, if the robot has a camera to detect an object whose pose is given as the state for RL, the FB policy would sample an erroneous action according to a wrong pose caused by occlusion.
Alternatively, if the robot system is connected to a wireless TCP/IP network to sense data from IoT devices, communication loss or delay due to poor signal conditions will occur at irregular intervals, causing erroneous actions.
To alleviate this fundamental problem of the FB policy, previous studies have developed policies that do not depend only on the state.
In a straightforward way, a time-dependent policy has been proposed by directly adding the elapsed time to the state~\cite{musial2007feed} or by utilizing recurrent neural networks (RNNs)~\cite{hochreiter1997long,murata2013learning} to approximate the policy~\cite{lee2020stochastic}.
If the policy is computed according to the phase and spectrum information of the system, instantaneous sensing failures can be ignored~\cite{sharma2018phase,azizzadenesheli2016reinforcement}.
In an extreme case, if the robot learns to episodically generate a trajectory, the adaptive behavior to the state is completely lost, but the policy is never affected by sensing failures.
From the perspective of the traditional control theory and biology, it has been suggested that this problem of the FB policy can be resolved by a feedforward (FF) policy with feedback error learning (FEL)~\cite{miyamoto1988feedback,nakanishi2004feedback,sugimoto2008feedback,sugimoto2020relaxation}.
FEL is a framework in which the FF controller is updated based on the error signal of the FB controller, and finally the control objective is achieved only by the FF controller.
In other words, instead of designing only the single policy as in the previous studies above, FEL has both the FB/FF policies in the system and composes their outputs appropriately to complement each other's shortcomings: the sensitivity to the sensing failures in the FB policy; and the lack of adaptability to the change of state in the FF policy.
The two separate policies are more compact than the integrated one.
In addition, although the composition of the outputs in the previous studies is a simple summation, it creates new room for designing different composition rules, which makes it easier for designers to adjust which of the FB/FF policies is preferred.
The purpose of this study is to carry over the benefits of FEL to the RL framework, as shown in Fig.~\ref{fig:fffb_framework}.
To this end, we have to solve the two challenges below.
\begin{enumerate}
\item Since RL is not only for tracking problems, which are the target of FEL, we need to design how to compose the FB/FF policies.
\item Since the FB policy is not fixed, unlike in FEL, both of the FB/FF policies must be optimized simultaneously.
\end{enumerate}
For the first challenge, we assume that the composed policy is designed as a mixture distribution of the FB/FF policies, since the RL policy is defined stochastically.
In addition, we heuristically design its mixture ratio depending on the confidences of the respective FB/FF policies, so that the more confident policy is prioritized.
For the second challenge, inspired by \textit{control as inference}~\cite{levine2018reinforcement}, we derive a new optimization problem to minimize/maximize the divergences between the trajectory, predicted by the composed policy and a stochastic dynamics model, and optimal/non-optimal trajectories.
Furthermore, by designing the stochastic dynamics model with variational approximation~\cite{chung2015recurrent}, we obtain a regularization between the FB/FF policies.
We expect that the skill of the FB policy, which can be optimized faster than the FF policy, will be transferred into the FF policy via this regularization.
To verify that the proposed method can optimize the FB/FF policies in a unified manner, we conduct numerical simulations for statistical evaluation and a robot experiment as a demonstration.
Through the numerical simulations, we show the capability of the proposed method, namely, stable optimization of the composed policy even with a learning rule different from that of traditional RL.
However, the proposed method occasionally fails to learn the optimal policy.
We attribute this failure to extreme updates of the FF policy (i.e. the RNNs) in the wrong direction.
In addition, in the robot experiment after training, we demonstrate the value of the proposed method: the optimized FF policy robustly samples valuable actions under sensing failures, even when the FB policy fails to achieve the optimal behavior.
\section{Preliminaries}
\subsection{Reinforcement learning}
In RL~\cite{sutton2018reinforcement}, an agent interacts with an unknown environment using action $a \in \mathcal{A}$ sampled from a policy $\pi$.
The environment returns the result of the interaction as state $s \in \mathcal{S}$ and evaluates it according to a reward function $r(s, a) \in \mathbb{R}$.
The optimization problem of RL is to find the optimal policy $\pi^*$ that maximizes the sum of rewards in the future from the current time $t$ (called the return), defined as $R_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$ with $\gamma \in [0, 1)$ the discount factor.
RL generally assumes that the environment follows a Markov process, i.e. the next state $s^\prime$ is sampled from $s^\prime \sim p_e(s^\prime \mid s, a)$.
By additionally limiting the policy to $\pi(a \mid s)$, a Markov decision process (MDP) is obtained.
In that case, RL can be illustrated as the agent-environment loop at the top of Fig.~\ref{fig:problem_rl}.
However, in practical use, the measurement of the state suffers from delay (e.g. due to overload in the communication network) and/or loss (e.g. occlusion in camera sensors), as suggested at the bottom of Fig.~\ref{fig:problem_rl}.
To solve this problem, this paper therefore proposes a new method to optimize the FB/FF policies in a unified manner by formulating them without necessarily requiring the MDP assumption.
In the conventional RL under MDP, the expected value of $R$ is represented by the (state) value function $V(s)$ and the (state-)action value function $Q(s, a)$, and $V$ can be learned by the following equations.
\begin{align}
\delta &= Q(s,a) - V(s) \simeq r(s, a) + \gamma V(s^\prime) - V(s)
\label{eq:td_err} \\
\mathcal{L}_\mathrm{value} &= \cfrac{1}{2}\delta^2
\label{eq:loss_value}
\end{align}
Note that $Q$ can also be learned with a similar equation, although we do not use $Q$ directly in this paper.
Based on $\delta$, an actor-critic algorithm~\cite{konda2000actor} updates $\pi$ according to the following policy gradient.
\begin{align}
\nabla \mathcal{L}_\pi &= - \mathbb{E}_{p_e \pi} [\delta \nabla \ln \pi(a \mid s)]
\label{eq:grad_ac}
\end{align}
where $\mathbb{E}_{p_e \pi} [\cdot]$ is approximated by Monte Carlo method.
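As a minimal illustration of eqs.~\eqref{eq:td_err}--\eqref{eq:grad_ac}, the following Python sketch performs one tabular actor-critic update; the two-state toy problem, the learning rate, and the softmax policy parameterization are our assumptions, not the paper's implementation.

```python
import math

gamma, alpha = 0.99, 0.1
V = {0: 0.0, 1: 0.0}                       # tabular state-value function
logits = {(0, 0): 0.0, (0, 1): 0.0}        # tabular policy parameters for s = 0

def pi(s):
    """Softmax policy over two actions."""
    z = [math.exp(logits[(s, a)]) for a in (0, 1)]
    tot = sum(z)
    return [v / tot for v in z]

def actor_critic_step(s, a, r, s_next):
    delta = r + gamma * V[s_next] - V[s]    # TD error, eq. (td_err)
    V[s] += alpha * delta                   # gradient step on eq. (loss_value)
    probs = pi(s)
    for b in (0, 1):                        # grad of ln pi for a softmax policy
        grad = (1.0 if b == a else 0.0) - probs[b]
        logits[(s, b)] += alpha * delta * grad   # ascent along eq. (grad_ac)
    return delta

p_before = pi(0)[1]
actor_critic_step(s=0, a=1, r=1.0, s_next=1)  # positive reward => delta > 0
# the taken action becomes more probable and V(0) increases
```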
\subsection{Introduction of optimality variable in \textit{control as inference}}
Recently, it has been shown that RL can be regarded as an inference problem, so-called control as inference~\cite{levine2018reinforcement}.
This extended interpretation is realized by introducing an optimality variable $o \in \{0, 1\}$, which represents whether the current state $s$ and action $a$ are optimal ($o = 1$) or not ($o = 0$).
Since it is defined as a random variable, the probability of $o = 1$, $p(o=1 \mid s, a)$, is parameterized by the reward $r$ to connect the conventional RL with this interpretation.
\begin{align}
p(o = 1 \mid s, a) = \exp\left( \cfrac{r(s, a) - c}{\tau} \right)
\label{eq:def_opt_r}
\end{align}
where $c = \max(r)$ ensures $p(o = 1 \mid s, a) \leq 1$, and $\tau$ denotes a hyperparameter expressing the uncertainty, which can be adaptively tuned.
Furthermore, by considering the optimality in the future as $O$, we can connect this formulation with the conventional value functions.
Specifically, the following probability can be derived.
\begin{align}
p(O = 1 \mid s) &= \exp\left( \cfrac{V(s) - C}{\tau} \right)
\label{eq:def_opt_V} \\
p(O = 1 \mid s, a) &= \exp\left( \cfrac{Q(s, a) - C}{\tau} \right)
\label{eq:def_opt_Q}
\end{align}
where $C = \max(V) = \max(Q)$ theoretically, although its specific value is generally unknown.
In this way, the optimality can be treated in probabilistic inference problems, facilitating integration with Bayesian inference and other methods.
This paper utilizes this property to derive a new optimization problem, as derived later.
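A small numeric sketch of eq.~\eqref{eq:def_opt_r}; the reward values below are arbitrary placeholders.

```python
import math

def p_optimal(r, c, tau=1.0):
    """eq. (def_opt_r): p(o = 1 | s, a) = exp((r(s, a) - c) / tau)."""
    return math.exp((r - c) / tau)

rewards = [-1.0, 0.0, 0.5, 2.0]
c = max(rewards)                     # c = max(r) keeps the value <= 1
probs = [p_optimal(r, c) for r in rewards]
# probabilities are increasing in r, and the maximal reward maps to exactly 1
```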
\subsection{Variational recurrent neural network}
To capture the state transition probability (i.e. $p_e$) as a stochastic dynamics model, we learn it based on a variational recurrent neural network (VRNN)~\cite{chung2015recurrent}.
Therefore, in this section, we briefly introduce the VRNN.
The VRNN considers the problem of maximizing the log likelihood of a prediction model $p_m$ of the observation ($s$ in the context of RL).
$s$ is assumed to be stochastically decoded from a lower-dimensional latent variable $z$, and $z$ is sampled according to the history of $s$, $h^s$, via a time-dependent prior $p(z \mid h^s)$.
Here, $h^s$ is generally approximated by recurrent neural networks, and this paper employs deep echo state networks~\cite{gallicchio2018design} for this purpose.
Using Jensen's inequality, a variational lower bound is derived as follows:
\begin{align}
\ln p_m(s \mid h^s) &= \ln \int p(s \mid z) p(z \mid h^s) dz
\nonumber \\
&= \ln \int q(z \mid s, h^s) p(s \mid z) \cfrac{p(z \mid h^s)}{q(z \mid s, h^s)} dz
\nonumber \\
&\geq \mathbb{E}_{q(z \mid s, h^s)}[\ln p(s \mid z)]
\nonumber \\
&- \mathrm{KL}(q(z \mid s, h^s) \| p(z \mid h^s))
\nonumber \\
&= - \mathcal{L}_\mathrm{vrnn}
\label{eq:loss_vrnn}
\end{align}
where $p(s \mid z)$ and $q(z \mid s, h^s)$ denote the decoder and encoder, respectively.
$\mathrm{KL}(\cdot \| \cdot)$ is the term for Kullback-Leibler (KL) divergence between two probabilities.
$\mathcal{L}_\mathrm{vrnn}$ is minimized via the optimization of $p_m$, which consists of $p(s \mid z)$, $q(z \mid s, h^s)$, and $p(z \mid h^s)$.
Note that, in the original implementation~\cite{chung2015recurrent}, the decoder also depends on $h^s$, but this is omitted in the above derivation for simplicity and for aggregating the time information into $z$.
In addition, the strength of the regularization by the KL term can be controlled, following $\beta$-VAE~\cite{higgins2017beta}, with a hyperparameter $\beta \geq 0$.
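The bound in eq.~\eqref{eq:loss_vrnn} can be sketched with 1-D Gaussians; the particular encoder, prior, and decoder parameterizations below are invented for illustration and are not the paper's networks.

```python
import math, random

def gauss_kl(mu_q, sig_q, mu_p, sig_p):
    # closed-form KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) )
    return (math.log(sig_p / sig_q)
            + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2) - 0.5)

def gauss_logpdf(x, mu, sig):
    return -0.5 * math.log(2 * math.pi * sig ** 2) - (x - mu) ** 2 / (2 * sig ** 2)

def vrnn_loss(s, h_s, beta=1.0, n_samples=1000):
    """Monte-Carlo estimate of L_vrnn = -E_q[ln p(s|z)] + beta * KL(q || prior)."""
    mu_p, sig_p = 0.1 * h_s, 1.0             # time-dependent prior p(z | h^s)
    mu_q, sig_q = 0.5 * s + 0.1 * h_s, 0.5   # encoder q(z | s, h^s)
    rec = sum(gauss_logpdf(s, random.gauss(mu_q, sig_q), 1.0)
              for _ in range(n_samples)) / n_samples  # decoder p(s|z) = N(z, 1)
    return -rec + beta * gauss_kl(mu_q, sig_q, mu_p, sig_p)
```

Since the KL term is nonnegative, the loss with $\beta = 1$ dominates the loss with $\beta = 0$, which is one way to sanity-check the bound.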
\section{Derivation of proposed method}
\subsection{Overview}
The outputs of the FB/FF policies should eventually coincide, but it is unclear how they would be updated if we directly optimized the composed policy according to the conventional RL.
In this paper, we propose a unified optimization problem in which the FB/FF policies naturally coincide and the composed one is properly optimized.
The key points of the proposed method are twofold:
\begin{enumerate}
\item The trajectory predicted with the stochastic dynamics model and the composed policy is expected to be close to/away from optimal/non-optimal trajectories inferred with the optimality variable.
\item The stochastic dynamics model is trained via its variational lower bound, which naturally generates a soft constraint between the FB/FF policies.
\end{enumerate}
Here, as a preliminary, we define the FB, FF, and composed policies mathematically: $\pi_\mathrm{FB}(a \mid s)$; $\pi_\mathrm{FF}(a \mid h^a)$; and the following mixture distribution, respectively.
\begin{align}
\pi(a \mid s, h^a) = w \pi_\mathrm{FB}(a \mid s) + (1 - w) \pi_\mathrm{FF}(a \mid h^a)
\label{eq:def_policy_mix}
\end{align}
where $w \in [0, 1]$ denotes the mixture ratio of the FB/FF policies.
That is, for generality, the outputs of the FB/FF policies are composed by a stochastic switching mechanism, rather than a simple summation as in FEL~\cite{miyamoto1988feedback}.
Note that since the history of action, $h^a$, can be updated without $s$, the FF policy is naturally robust to sensing failures.
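Sampling from the mixture in eq.~\eqref{eq:def_policy_mix} can be sketched as a stochastic switch between the two components; the Gaussian components and the ratio $w = 0.7$ below are assumptions for illustration.

```python
import random

def sample_mixture(w, pi_fb, pi_ff):
    """eq. (def_policy_mix): with probability w draw from the FB policy,
    otherwise from the FF policy (equivalent to sampling the mixture)."""
    return pi_fb() if random.random() < w else pi_ff()

random.seed(1)
draws = [sample_mixture(0.7,
                        lambda: random.gauss(0.0, 1.0),   # stand-in for pi_FB(a | s)
                        lambda: random.gauss(5.0, 1.0))   # stand-in for pi_FF(a | h^a)
         for _ in range(10000)]
# with well-separated components, ~70% of draws come from the FB side
frac_fb = sum(d < 2.5 for d in draws) / len(draws)
```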
\subsection{Inference of optimal/non-optimal policies}
First of all, we infer the optimal policy, which yields the optimal trajectory by interacting with the real environment $p_e$, and the non-optimal policy, which on the contrary causes the non-optimal trajectory.
With eqs.~\eqref{eq:def_opt_V} and~\eqref{eq:def_opt_Q}, the policy conditioned on $O$, $\pi^*(a \mid s, h^a, O)$, can be derived through Bayes' theorem.
\begin{align}
\pi^*(a \mid s, h^a, O) = \cfrac{p(O \mid s, a) b(a \mid s, h^a)}{p(O \mid s)}
\label{eq:policy_cond}
\end{align}
where $b(a \mid s, h^a)$ denotes the sampler distribution (e.g. the composed policy with old parameters or one approximated by target networks~\cite{kobayashi2021t}).
By substituting $\{0,1\}$ for $O$, the inferences of the optimal policy $\pi^+$ and the non-optimal policy $\pi^-$ are given as follows:
\begin{align}
\pi^+(a \mid s, h^a) &= \pi^*(a \mid s, h^a, O=1)
= \cfrac{\exp\left( \cfrac{Q(s,a) - C}{\tau} \right)}{\exp\left( \cfrac{V(s) - C}{\tau} \right)} b(a \mid s, h^a)
\label{eq:def_policy_opt} \\
\pi^-(a \mid s, h^a) &= \pi^*(a \mid s, h^a, O=0)
= \cfrac{1 - \exp\left( \cfrac{Q(s,a) - C}{\tau} \right)}{1 - \exp\left( \cfrac{V(s) - C}{\tau} \right)} b(a \mid s, h^a)
\label{eq:def_policy_nopt}
\end{align}
Although it is difficult to sample actions from these policies directly, they can be utilized for the analysis in the next section.
\subsection{Optimization problem for optimal/non-optimal trajectories}
With the composed policy $\pi$ and the stochastic dynamics model, given as $p_m(s^\prime \mid s, a, h^s, h^a)$, a part of the trajectory is predicted as $p_m \pi$.
As a reference, we can consider the part of the trajectory generated by $\pi^*$ in eq.~\eqref{eq:policy_cond} and the real environment $p_e$, as $p_e \pi^*$.
The degree of divergence between the two can be evaluated by the KL divergence as follows:
\begin{align}
\mathrm{KL}(p_e \pi^* \| p_m \pi) &= \mathbb{E}_{p_e \pi^*} [(\ln p_e + \ln \pi^*) - (\ln p_m + \ln \pi)]
\nonumber \\
&= \mathbb{E}_{p_e b} \left [ \cfrac{p(O \mid s, a)}{p(O \mid s)} \{(\ln p_e + \ln \pi^*) - (\ln p_m + \ln \pi)\} \right ]
\nonumber \\
&\propto - \mathbb{E}_{p_e b} \left [ \cfrac{p(O \mid s, a)}{p(O \mid s)} (\ln p_m + \ln \pi) \right ]
\label{eq:kl_traj_cond}
\end{align}
where the term $\ln p_e \pi^*$ inside the expectation operation is excluded since it is not related to the learnable $p_m$ and $\pi$.
The expectation with $p_e$ and $b$ can be approximated by the Monte Carlo method; namely, we can optimize $p_m$ and $\pi$ using the above KL divergence with the appropriate conditions of $O$.
As these conditions, our optimization problem requires that $p_m \pi$ be close to $p_e \pi^+$ (i.e. the optimal trajectory) and away from $p_e \pi^-$ (i.e. the non-optimal trajectory), as shown in Fig.~\ref{fig:problem_traj_div}.
Therefore, the specific loss function to be minimized is given as follows:
\begin{align}
\mathcal{L}_\mathrm{traj} &= \mathrm{KL}(p_e \pi^+ \| p_m \pi) - \mathrm{KL}(p_e \pi^- \| p_m \pi)
\nonumber \\
&\propto - \mathbb{E}_{p_e b}\left [ \left \{ \cfrac{\exp\left( \cfrac{Q - C}{\tau} \right)}{\exp\left( \cfrac{V - C}{\tau} \right)} - \cfrac{1 - \exp\left( \cfrac{Q - C}{\tau} \right)}{1 - \exp\left( \cfrac{V - C}{\tau} \right)} \right \} (\ln p_m + \ln \pi) \right ]
\nonumber \\
&= - \mathbb{E}_{p_e b}\left [ \cfrac{\exp\left( \cfrac{Q - V}{\tau} \right) - 1}{1 - \exp \left( \cfrac{V - C}{\tau} \right)} (\ln p_m + \ln \pi) \right ]
\nonumber \\
&\propto - \mathbb{E}_{p_e b}\left [ \tau \left \{\exp\left( \cfrac{\delta}{\tau} \right) - 1 \right \} (\ln p_m + \ln \pi) \right ]
\label{eq:loss_traj}
\end{align}
where $1 - \exp \{(V - C)\tau^{-1}\}$ and $\tau$ are multiplied to eliminate the unknown $C$ and to scale the gradient at $\delta = 0$ to one, respectively.
Note that the derived result is similar to eq.~\eqref{eq:grad_ac}, but with a coefficient different from $\delta$ and a sampler different from $\pi$.
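The coefficient in eq.~\eqref{eq:loss_traj} can be checked numerically: it vanishes at $\delta = 0$ and has unit slope there, consistent with the scaling by $\tau$. The helper name below is ours.

```python
import math

def traj_coeff(delta, tau=1.0):
    """Weight on (ln p_m + ln pi) in eq. (loss_traj): tau * (exp(delta/tau) - 1)."""
    return tau * (math.exp(delta / tau) - 1.0)

# central difference around delta = 0 recovers the unit gradient scale
slope_at_zero = (traj_coeff(1e-6) - traj_coeff(-1e-6)) / 2e-6
# positive delta weights the log-likelihood terms positively, negative delta flips the sign
```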
\subsection{Stochastic dynamics model with variational lower bound}
Eq.~\eqref{eq:loss_traj} includes $\ln p_m$, i.e. the stochastic dynamics model, which therefore has to be modeled.
Indeed, we found that a model based on the VRNN~\cite{chung2015recurrent} in eq.~\eqref{eq:loss_vrnn} naturally yields an additional regularization between the FB/FF policies.
In addition, such a method can be regarded as extracting latent Markovian dynamics in problems for which the MDP assumption does not hold for the observed state, similarly to the latest model-based RL~\cite{chua2018deep,clavera2019model}.
Specifically, we consider the dynamics of the latent variable $z$ as $z^\prime = f(z, a)$, with $f$ a learnable function, where $a$ can be sampled from a time-dependent prior (i.e. the FF policy).
Then, eq.~\eqref{eq:loss_vrnn} is modified through the following derivation.
\begin{align}
\ln p_m(s^\prime \mid h^s, h^a) &= \ln \iint p(s^\prime \mid z^\prime) p(z \mid h^s) \pi_\mathrm{FF}(a \mid h^a) dz da
\nonumber \\
&= \ln \iint q(z \mid s, h^s) \pi(a \mid s, h^a) p(s^\prime \mid z^\prime)
\nonumber \\
&\times \cfrac{p(z \mid h^s)}{q(z \mid s, h^s)} \cfrac{\pi_\mathrm{FF}(a \mid h^a)}{\pi(a \mid s, h^a)} dz da
\nonumber \\
&\geq \mathbb{E}_{q(z \mid s, h^s) \pi(a \mid s, h^a)} [ \ln p(s^\prime \mid z^\prime) ]
\nonumber \\
&- \mathrm{KL}(q(z \mid s, h^s) \| p(z \mid h^s))
- \mathrm{KL}(\pi(a \mid s, h^a) \| \pi_\mathrm{FF}(a \mid h^a))
\nonumber \\
&= - \mathcal{L}_\mathrm{model}
\label{eq:loss_model}
\end{align}
Since the composed policy $\pi$ is a mixture of the FB/FF policies as defined in eq.~\eqref{eq:def_policy_mix}, the KL term between $\pi$ and $\pi_\mathrm{FF}$ can be decomposed using the variational approximation~\cite{hershey2007approximating} and Jensen's inequality.
\begin{align}
\mathrm{KL}(\pi \| \pi_\mathrm{FF}) &\geq w \ln \cfrac{w e^{-\mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FF})}
+ (1 - w) e^{-\mathrm{KL}(\pi_\mathrm{FB} \| \pi_\mathrm{FF})}}
{e^{-\mathrm{KL}(\pi_\mathrm{FB} \| \pi_\mathrm{FF})}}
\nonumber \\
&+ (1 - w) \ln \cfrac{w e^{-\mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FB})}
+ (1 - w) e^{-\mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FF})}}
{e^{-\mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FF})}}
\nonumber \\
&= w \ln \{ w e^{\mathrm{KL}(\pi_\mathrm{FB} \| \pi_\mathrm{FF})} + (1 - w) \}
\nonumber \\
&+ (1 - w) \ln \{ w e^{-\mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FB})} + (1 - w) \}
\nonumber \\
&\geq w^2 \mathrm{KL}(\pi_\mathrm{FB} \| \pi_\mathrm{FF})
- (1 - w) w \mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FB})
\nonumber \\
&= w^2 \{H(\pi_\mathrm{FB} \| \pi_\mathrm{FF}) - H(\pi_\mathrm{FB})\}
- (1 - w) w \mathrm{KL}(\pi_\mathrm{FF} \| \pi_\mathrm{FB})
\nonumber \\
&\propto w^2 H(\pi_\mathrm{FB} \| \pi_\mathrm{FF})
\end{align}
where we use the fact that $\mathrm{KL}(p \| q) = H(p \| q) - H(p)$, with $H(\cdot \| \cdot)$ the cross entropy and $H(\cdot)$ the (differential) entropy.
By eliminating the negative KL term and the negative entropy term, which are unnecessary for the regularization, only the cross entropy remains.
As is common for VAEs, the expectation operation is omitted by sampling only one $z$ (and $a$ in the above case) per $s$.
In addition, as explained before, the strength of the regularization can be controlled by introducing $\beta$~\cite{higgins2017beta}.
With these facts, we can modify $\mathcal{L}_\mathrm{model}$ as follows:
\begin{align}
\mathcal{L}_\mathrm{model} = - \ln p(s^\prime \mid z^\prime)
+ \beta_z \mathrm{KL}(q(z \mid s, h^s) \| p(z \mid h^s))
+ \beta_a w^2 H(\pi_\mathrm{FB} \| \pi_\mathrm{FF})
\label{eq:loss_model2}
\end{align}
where $z \sim q(z \mid s, h^s)$, $a \sim \pi(a \mid s, h^a)$, $z^\prime = f(z, a)$, and $\beta_{z,a}$ denote the respective strengths of regularization.
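The last term of eq.~\eqref{eq:loss_model2} becomes concrete for 1-D Gaussian policies, for which the cross entropy has a closed form; the Gaussian assumption and function names are ours.

```python
import math

def gauss_cross_entropy(mu_p, sig_p, mu_q, sig_q):
    # H(N_p || N_q) = 0.5*ln(2*pi*sig_q^2) + (sig_p^2 + (mu_p - mu_q)^2) / (2*sig_q^2)
    return (0.5 * math.log(2 * math.pi * sig_q ** 2)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2))

def policy_regularizer(w, mu_fb, sig_fb, mu_ff, sig_ff, beta_a=1.0):
    """Last term of eq. (loss_model2): beta_a * w^2 * H(pi_FB || pi_FF)."""
    return beta_a * w ** 2 * gauss_cross_entropy(mu_fb, sig_fb, mu_ff, sig_ff)

# the penalty vanishes at w = 0, grows with w, and grows as the policies diverge
```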
Finally, the above $\mathcal{L}_\mathrm{model}$ can be substituted into eq.~\eqref{eq:loss_traj} as $- \ln p_m$.
\begin{align}
\mathcal{L}_\mathrm{traj} = - \mathbb{E}_{p_e b}\left [ \tau \left \{\exp\left( \cfrac{\delta}{\tau} \right) - 1 \right \} (- \mathcal{L}_\mathrm{model} + \ln \pi) \right ]
\label{eq:loss_traj2}
\end{align}
As can be seen in eq.~\eqref{eq:loss_model2}, the regularization between the FB/FF policies is naturally added.
Its strength depends on $w^2$; that is, as the FB policy is prioritized (i.e. $w$ is increased), this regularization is reinforced.
In addition, since $\mathcal{L}_\mathrm{model}$ is now inside $\mathcal{L}_\mathrm{traj}$, the regularization becomes strong only when $\delta$ is sufficiently positive, that is, when the agent knows the optimal direction for updating $\pi$.
Usually, at the beginning of RL, the policy generates random actions, which make the FF policy difficult to optimize;
in contrast, the FB policy can be optimized under the weak regularization (if the observation is performed sufficiently well).
Afterwards, if $w$ is adaptively given (as introduced in the next section), the FB policy will be strongly connected with the FF policy.
In summary, with this formulation, we can expect that the FB policy is optimized first while the regularization is weak, and that its skill is gradually transferred to the FF policy, as in FEL~\cite{miyamoto1988feedback}.
\section{Additional design for implementation}
\subsection{Design of mixture ratio based on policy entropy}
For the practical implementation, we first design the mixture ratio $w \in [0, 1]$ heuristically.
As its requirements, the composed policy should prioritize whichever of the FB/FF policies has the higher confidence.
In addition, if the FB/FF policies are similar to each other, either can be selected.
Finally, $w$ must be computable for an arbitrary distribution model of the FB/FF policies.
As one solution satisfying these requirements, we design the following $w$ with the entropies of the FB/FF policies, $H_\mathrm{FB}, H_\mathrm{FF}$, and the L2 norm between the means of these policies, $d = \| \mu_\mathrm{FB} - \mu_\mathrm{FF} \|_2$.
\begin{align}
w = \cfrac{\exp(-H_\mathrm{FB} d \beta_T)}{\exp(-H_\mathrm{FB} d \beta_T) + \exp(-H_\mathrm{FF} d \beta_T)}
\label{eq:policy_mix_ratio}
\end{align}
where $\beta_T > 0$ denotes the inverse temperature, i.e. $w$ tends toward the deterministic extremes $0$ or $1$ with higher $\beta_T$, and vice versa.
Note that, since lower entropy implies higher confidence, the negative entropies are fed into the softmax function.
If one of the entropies is sufficiently smaller than the other, $w$ converges to $1$ or $0$, prioritizing the FB or FF policy, respectively.
However, if the two policies output similar values on average, the robot can select actions from either policy; in that case the effective inverse temperature is adaptively lowered by the small $d$, making $w$ converge to around $0.5$.
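As a concrete illustration, eq.~\eqref{eq:policy_mix_ratio} is a two-class softmax over the negative entropies scaled by $d\beta_T$. The following Python sketch shows the intended behavior (function and variable names are ours, not from any released implementation):

```python
import numpy as np

def mixture_ratio(H_fb, H_ff, mu_fb, mu_ff, beta_T=1.0):
    # d: L2 distance between the means of the FB/FF policies
    d = np.linalg.norm(np.asarray(mu_fb, float) - np.asarray(mu_ff, float))
    # negative entropies enter the softmax: lower entropy -> higher confidence
    logits = np.array([-H_fb * d * beta_T, -H_ff * d * beta_T])
    logits -= logits.max()  # numerical stabilization
    e = np.exp(logits)
    return e[0] / e.sum()   # w: weight of the FB policy
```

When $d \to 0$ the two logits coincide and $w \to 0.5$ regardless of the entropies, matching the behavior described above.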
\subsection{Partial cut of computational graph}
In general, a VAE-based architecture retains the computational graph (which provides paths for backpropagation) of the latent variable $z$ via the reparameterization trick.
If this trick were applied to $a$ in our dynamics model as is, the policy $\pi$ would be updated toward improving the prediction accuracy, not toward maximizing the return, which is the original purpose of policy optimization in RL.
To mitigate such wrong updates of $\pi$ while preserving the ability to backpropagate gradients through the whole network as in a VAE, we partially cut the computational graph as follows:
\begin{align}
a \gets \eta a + (1 - \eta) \hat{a}
\label{eq:action_graph_cut}
\end{align}
where $\eta$ denotes a hyperparameter and $\hat{\cdot}$ cuts the computational graph, leaving merely the value.
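In PyTorch, the graph cut in eq.~\eqref{eq:action_graph_cut} corresponds to `detach()`. A minimal sketch (our naming, not the authors' code):

```python
import torch

def partial_graph_cut(a: torch.Tensor, eta: float) -> torch.Tensor:
    # forward value equals a, but only the fraction eta of the gradient
    # flows back through the action; the rest is treated as a constant
    return eta * a + (1.0 - eta) * a.detach()
```

The forward pass is unchanged, while the gradient reaching $a$ is scaled by $\eta$, which limits how strongly the prediction loss can pull on the policy.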
\subsection{Auxiliary loss functions}
As can be seen in eq.~\eqref{eq:loss_traj2}, if $\delta < 0$, $- \mathcal{L}_\mathrm{model}$ is minimized, reducing the prediction accuracy of the dynamics.
For the policy, it is desirable that the sign of its loss be reversed according to $\delta$, which determines whether the update direction is good or bad.
The dynamics model, on the other hand, should ideally have high prediction accuracy for any state, so this update rule may cause its optimization to fail.
In order not to reduce the prediction accuracy, we add an auxiliary loss function.
We exploit the fact that the coefficient in eq.~\eqref{eq:loss_traj2}, $\tau (\exp(\delta\tau^{-1}) - 1)$, is bounded from below, and its lower bound can be found analytically to be $-\tau$ as $\delta \to - \infty$.
That is, by adding $\tau \mathcal{L}_\mathrm{model}$ as an auxiliary loss function, the dynamics model is always updated toward higher prediction accuracy, while its update amount is still weighted by $\exp(\delta\tau^{-1})$.
To update the value function $V$, conventional RL uses eq.~\eqref{eq:loss_value}.
Instead, we find that the minimization problem of the KL divergence between $p(O \mid s, a)$ and $p(O \mid s)$ yields the following loss function, similar to eq.~\eqref{eq:loss_traj2}:
\begin{align}
\mathcal{L}_\mathrm{value} = - \mathbb{E}_{p_e b}\left [ \tau \left \{\exp\left( \cfrac{\delta}{\tau} \right) - 1 \right \} V \right ]
\label{eq:loss_value2}
\end{align}
Note that, in this formula (and eq.~\eqref{eq:loss_traj2}), $\delta$ has no computational graph for backpropagation, i.e. it is merely a coefficient.
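The boundedness of the coefficient $\tau(\exp(\delta\tau^{-1})-1)$ shared by eqs.~\eqref{eq:loss_traj2} and \eqref{eq:loss_value2} can be checked numerically with a small sketch:

```python
import math

def traj_coeff(delta: float, tau: float) -> float:
    # bounded below by -tau (approached as delta -> -inf),
    # zero at delta = 0, and unbounded above for delta > 0
    return tau * (math.exp(delta / tau) - 1.0)
```

This asymmetry is what makes the auxiliary term $\tau \mathcal{L}_\mathrm{model}$ sufficient: even in the worst case $\delta \to -\infty$, the combined weight on $\mathcal{L}_\mathrm{model}$ cannot become negative.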
Finally, the loss function to be minimized for updating $\pi$ (i.e. $\pi_\mathrm{FB}$ and $\pi_\mathrm{FF}$), $V$, and $p_m$ can be summarized as follows:
\begin{align}
\mathcal{L}_\mathrm{all} = \mathcal{L}_\mathrm{traj} + \mathcal{L}_\mathrm{value} + \tau \mathcal{L}_\mathrm{model}
\label{eq:loss_all}
\end{align}
where $\mathcal{L}_\mathrm{traj}$, $\mathcal{L}_\mathrm{value}$, and $\mathcal{L}_\mathrm{model}$ are given in eqs.~\eqref{eq:loss_traj2}, \eqref{eq:loss_value2}, and \eqref{eq:loss_model2}, respectively.
This loss function can be minimized by one of the stochastic gradient descent (SGD) methods like~\cite{ziyin2020laprop}.
\section{Experiment}
\subsection{Objective}
We verify the validity of the proposed method derived in this paper.
This verification is done through a numerical simulation of a cart-pole inverted pendulum and an experiment of a snake robot forward locomotion, which is driven by central pattern generators (CPGs)~\cite{cohen1982nature}.
Four specific objectives are listed as below.
\begin{enumerate}
\item Through the simulation and the robot experiment, we verify that the proposed method can optimize the composed policy, and we reveal its optimization process.
\item By comparing the successful and failing cases in the simulation, we clarify an open issue of the proposed method.
\item We compare the behaviors generated by the decomposed FB/FF policies to confirm there is little difference between them.
\item By intentionally causing sensing failures in the robot experiment, we illustrate the sensitivity of the FB policy and the robustness of the FF policy to such failures.
\end{enumerate}
\subsection{Setup of proposed method}
The network architecture for the proposed method is designed using PyTorch~\cite{paszke2017automatic}, as illustrated in Fig.~\ref{fig:fffb_architecture}.
All the modules (i.e. the encoder $q(z \mid s, h^s)$, decoder $p(s^\prime \mid z^\prime)$, time-dependent prior $q(z \mid h^s)$, dynamics $f(z, a)$, value function $V(s)$, and the FB/FF policies $\pi_\mathrm{FB}(a \mid s)$, $\pi_\mathrm{FF}(a \mid h^a)$) are represented by three fully connected layers with 100 neurons for each.
As nonlinear activation functions for them, we apply layer normalization~\cite{ba2016layer} and Swish function~\cite{elfwing2018sigmoid}.
To represent the histories, $h^s$ and $h^a$, as mentioned before, we employ deep echo state networks~\cite{gallicchio2018design} (three layers with 100 neurons for each).
The probability density functions output by all the stochastic modules are given as Student's t-distributions, with reference to~\cite{takahashi2018student,kobayashi2019variational,kobayashi2019student}.
To optimize the above network architecture, a robust SGD, i.e., LaProp~\cite{ziyin2020laprop} with t-momentum~\cite{ilboudo2020robust} and d-AmsGrad~\cite{kobayashi2021towards} (so-called td-AmsProp), is employed with their default parameters except the learning rate.
In addition, optimization of $V$ and $\pi$ can be accelerated by using adaptive eligibility traces~\cite{kobayashi2020adaptive}, and stabilized by using a t-soft target network~\cite{kobayashi2021t}.
The parameters for the above implementation, including those unique to the proposed method, are summarized in Table~\ref{tab:parameter}.
Many of these were empirically adjusted based on values from previous studies.
Because of the large number of parameters involved, the influence of these parameters on the behavior of the proposed method is not examined in this paper.
However, it should be remarked that a meta-optimization of them can be easily performed with packages such as Optuna~\cite{akiba2019optuna}, although such a meta-optimization requires a great deal of time.
\subsection{Simulation for statistical evaluation}
For the simulation, we employ Pybullet dynamics engine wrapped by OpenAI Gym~\cite{coumans2016pybullet,brockman2016openai}.
We attempt to solve the task (a.k.a. environment) \textit{InvertedPendulumBullet-v0}, in which a cart tries to keep a pole standing on it.
With different random seeds, 30 trials of 300 episodes each are performed.
First of all, we depict the learning curves about the score (a.k.a. the sum of rewards) and the mixture ratio in Fig.~\ref{fig:sim_result}.
Since five of them were obvious failures, for further analysis we separately depict the five failures as \textit{Failure (5)} and the remaining successful trials as \textit{Success (25)}.
We can see in the successful trials that the agent could solve this balancing task stably after 150 episodes, even with stochastic actions.
Furthermore, stabilization and the shift of the composed policy toward determinism accelerated, and in the end the task was almost certainly accomplished by the proposed method in the 25 successful trials.
Focusing on the mixture ratio, the FB policy was dominant in the early stages of learning, as expected.
Then, as the episodes passed, the FF policy was optimized toward the FB policy, and the mixture ratio gradually approached 0.5.
Finally, it seems to have converged to around 0.7, suggesting that the proposed method is basically dominated by the FB policy under stable observation.
Although all the trials followed almost the same curves until 50 episodes in both figures, the failure trials then suddenly decreased their scores.
In addition, probably due to the failure of optimizing the FF policy, the mixture ratio in the failure trials became fixed at almost 1.
It is necessary to clarify the cause of this apparent difference from the successful trials, i.e. the open issue of the proposed method.
To this end, we decompose the mixture ratio into the distance between the FB/FF policies, $d$, and the entropies of the respective policies, $H_\mathrm{FB}$ and $H_\mathrm{FF}$, in Fig.~\ref{fig:sim_analysis}.
Extreme behavior can be observed around the 80th episode in $d$ and $H_\mathrm{FF}$.
This suggests that the FF policy (or its underlying RNNs) was updated in an extremely wrong direction and could not recover from there.
As a consequence, the FB policy was also constantly regularized toward the FF policy, i.e. in the wrong direction, causing the failures of the balancing task.
Indeed, $H_\mathrm{FB}$ gradually increased toward $H_\mathrm{FF}$.
In summary, the proposed method lacks stabilization of the learning of the FF policy (or its underlying RNNs).
It is, however, expected to be improved by suppressing the amount of policy updates as in the latest RL~\cite{kobayashi2020proximal}, by regularizing the RNNs~\cite{zaremba2014recurrent}, and/or by promoting initialization of the FF policy.
\subsection{Robot experiment}
The following robot experiment is conducted to illustrate the practical value of the proposed method.
Since the statistical properties of the proposed method are verified via the above simulation, we analyze one successful case here.
\subsubsection{Setup of robot and task}
A snake robot used in this experiment is shown in Fig.~\ref{fig:snake_robot}.
This robot has eight Qbmove actuators developed by QbRobotics, which can control their stiffness at the hardware level, i.e. variable stiffness actuators (VSAs)~\cite{catalano2011vsa}.
As can be seen in the figure, all the actuators are serially connected and mounted on casters so that the robot can easily be driven by snaking locomotion.
On the head of the robot, an AR marker is attached to detect its coordinates using a camera (ZED2, developed by Stereolabs).
To generate the primitive snaking locomotion, we employ CPGs~\cite{cohen1982nature} as mentioned before.
Each CPG follows Cohen's model with sine function as follows:
\begin{align}
\zeta_i &\gets \zeta_i + \left \{ u_i^r + \sum_{j} \alpha \sin(\zeta_j - \zeta_i - u_i^\eta) \right \} dt
\label{eq:cpg_dyn} \\
\theta_i &= u_i^A \sin(\zeta_i)
\label{eq:cpg_out}
\end{align}
where $\zeta_i$ denotes the internal state, and $\theta_i$ serves as the reference angle of the $i$-th actuator.
$\alpha$, $u_i^r$, $u_i^\eta$, and $u_i^A$ denote the internal parameters of this CPG model.
For all the CPGs (a.k.a. actuators), we set the same parameters: $\alpha = 2$, $u_i^r = 10$, $u_i^\eta = 1$, and $u_i^A = \pi / 4$.
$dt$ is the discrete time step and is set to $0.02$ s.
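With these parameters, one update of the CPG network can be sketched in Python as follows. We assume the standard sine coupling of Cohen's phase model for the interaction term, and the names are ours, not the authors' code:

```python
import numpy as np

def cpg_step(zeta, u_r=10.0, u_eta=1.0, u_A=np.pi / 4, alpha=2.0, dt=0.02):
    # phase coupling between all oscillators (Cohen-style model, assumed)
    n = len(zeta)
    coupling = np.array([
        sum(alpha * np.sin(zeta[j] - zeta[i] - u_eta) for j in range(n))
        for i in range(n)
    ])
    zeta = zeta + (u_r + coupling) * dt   # update of the internal states
    theta = u_A * np.sin(zeta)            # reference joint angles
    return zeta, theta
```

Each call advances the internal states by one time step and emits the reference angles $\theta_i$, whose amplitude is bounded by $u_i^A = \pi/4$.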
Even with this CPG model, the robot has room for optimization of the stiffness of each actuator, $k_i$.
Therefore, the proposed method is applied to the optimization of $k_i \in [0, 1]$ ($i = 1, 2, \ldots, 8$).
Let us introduce the state and action spaces of the robot.
As the state of the robot $s$, the robot observes the internal state of each actuator: the angle $\theta_i$; the angular velocity $\dot{\theta}_i$; the torque $\tau_i$; and the stiffness $k_i$ (which differs from the command value due to control accuracy).
To evaluate its locomotion, the coordinates of its head, $x$ and $y$, are additionally observed (see Fig.~\ref{fig:snake_env}).
In addition, as mentioned before, the action of the robot $a$ is set to be $k_i$.
In summary, 34-dimensional $s$ and 8-dimensional $a$ are summarized as follows:
\begin{align}
s &= [\theta_1, \dot{\theta}_1, \tau_1, k_1; \theta_2, \dot{\theta}_2, \tau_2, k_2; \ldots; \theta_8, \dot{\theta}_8, \tau_8, k_8; x, y]^\top
\label{eq:robot_state} \\
a &= [k_1, k_2, \ldots, k_8]^\top
\label{eq:robot_action}
\end{align}
For the definition of the task, i.e. the design of the reward function, we consider forward locomotion.
Since the primitive motion is already generated by the CPG model, this task can be accomplished simply by restraining the sideward deviation.
Therefore, we define the reward function as follows:
\begin{align}
r(s, a) = - |y|
\label{eq:robot_reward}
\end{align}
The proposed method learns the composed policy for the above task.
At the beginning of each episode, the robot is initialized to the same place with $\theta_i = 0$ and $k_i = 0.5$.
Afterwards, the robot starts to move forward; the episode is terminated if the robot goes outside the observable area (including reaching the goal) or spends 2000 time steps.
We tried 100 episodes in total.
\subsubsection{Learning results}
We depict the learning curves about the score (a.k.a. the sum of rewards) and the mixture ratio in Fig.~\ref{fig:exp_result}.
Note that a moving average with a window size of 5 is applied to make the learning trends easier to see.
From the score, we can say that the proposed method improved the straightness of the snaking locomotion.
Indeed, Fig.~\ref{fig:snap_learn}, which shows snapshots of the experiment before and after learning, clearly indicates that the robot succeeded in forward locomotion only after learning.
As in the successful trials in Fig.~\ref{fig:sim_result}, the mixture ratio first increased in this experiment as well; afterwards, the FF policy was optimized, reducing the mixture ratio toward 0.5 (though it converged at around 0.7).
We found the additional feature that during episodes 10--30, probably when the transfer of skill from the FB to the FF policy was active, the score temporarily decreased.
This would be due to the increased frequency of use of the still non-optimal FF policy, resulting in erroneous behaviors.
After that period, however, the score became stably high, and we expect that the above skill transfer was almost complete and the optimal actions could be sampled even from the FF policy.
\subsubsection{Demonstration with learned policies}
To see the accomplishment of the skill transfer, after the above learning, we apply the decomposed FB/FF policies individually into the robot.
At the top of Fig.~\ref{fig:snap_compare}, we show the overlapped snapshots (the red/blue robots correspond to the FB/FF policies, respectively).
With the FF policy, of course, errors from the initial state gradually grew and accumulated, so the two results can never be completely consistent.
However, the difference at the goal was only a few centimeters.
This result suggests that the skill transfer from the FB to FF policies has been achieved as expected, although there is room for further performance improvement.
Finally, we emulate a sensing failure for detecting the AR marker on the head.
When the robot is on the left side of the video frame, detection of the AR marker is forcibly made to fail, returning wrong (and constant) $x$ and $y$.
In that case, the FB policy would collapse, while the FF policy is never affected by the sensing failure.
At the bottom of Fig.~\ref{fig:snap_compare}, we show the overlapped snapshots, with the left side subject to the sensing failure shaded.
While the robot remained on the left side, the locomotion obtained by the FB policy drifted toward the front of the video frame, and it was apparent that the robot could not recover by the goal.
In detail, Fig.~\ref{fig:exp_test_w_failure} illustrates the stiffness during this test.
Note that the vertical axis shows the unbounded version of $k_i$, which can be mapped to the original $k_i$ through a sigmoid function.
As can be seen in the figure, the sensing failure clearly affected the outputs of the FB policy, while the FF policy ignored it and output periodic commands.
Although this test is a proof of concept, it clearly shows the sensitivity of the FB policy and the robustness of the FF policy to sensing failures that may occur in real environments.
We therefore conclude that a framework that can learn both the FB/FF policies in a unified manner, such as the proposed method, is useful in practice.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we derived a new optimization problem that trains both the FB/FF policies in a unified manner.
Its key point is to consider the minimization/maximization of the KL divergences between the trajectory predicted by the composed policy with the stochastic dynamics model and the optimal/non-optimal trajectories inferred based on \textit{control as inference}.
With the composed policy represented as a mixture distribution, the stochastic dynamics model approximated by a variational method yields a soft regularization, i.e. the cross entropy between the FB/FF policies.
In addition, by designing the mixture ratio to prioritize the policy with higher confidence, we can expect that the FB policy is first optimized since its state dependency can easily be found, then its skill is transferred to the FF policy via the regularization.
Indeed, the numerical simulation and the robot experiment verified that the proposed method can stably solve the given tasks; that is, it has the capability to optimize the composed policy even with a learning rule different from that of traditional RL.
In addition, we demonstrated that, using our method, the FF policy can be appropriately optimized to generate behavior similar to that of the FB policy.
As a proof-of-concept, we finally illustrated the robustness of the FF policy to the sensing failures when the AR marker could not be detected.
However, we also found that the FF policy (or its underlying RNNs) occasionally failed to be optimized due to extreme updates in the wrong direction.
To alleviate this problem, in the near future we need to make the FF policy update conservatively, for example by using a soft regularization toward its prior.
With more stable learning capability, the proposed method will be applied to various robotic tasks with potential for the sensing failures.
\begin{backmatter}
\section*{Availability of data and materials}
The data that support the findings of this study are available from the corresponding author, TK, upon reasonable request.
\section*{Competing interests}
The authors declare that they have no competing interests.
\section*{Author's contributions}
TK proposed the algorithm and wrote this manuscript.
KY developed the hardware and performed the experiments.
\section*{Funding}
This work was supported by Telecommunications Advancement Foundation Research Grant.
\bibliographystyle{bmc-mathphys}
\section{Abstract}
Many quantum programs require circuits for addition, subtraction and logical operations. These circuits may be packaged within routines known as oracles. However, oracles can be tedious to code with current frameworks. To solve this problem the author developed the Higher-Level Oracle Description Language (HODL) $-$ a C-style programming language for use on quantum computers $-$ to ease the creation of such circuits. The compiler translates high-level code written in \textit{HODL} into \textit{OpenQASM}[1], a gate-based quantum assembly language that runs on IBM Quantum systems and compatible simulators. HODL is interoperable with IBM's QISKit framework.
\section{Introduction}
Quantum Computers were first conceptualized by the late physicist Richard Feynman[2] in a landmark paper in which he observed that classical techniques are inefficient when simulating quantum mechanics. Since the memory required to store a quantum state increases exponentially with the size of the system, Feynman proposed a new form of computer which would be capable of processing quantum information, hence the term \textit{Quantum Computer}, in which the fundamental unit of information is a two-level system known as a qubit. Quantum gates are applied to qubits in order to change their state[3].
Unfortunately, current quantum frameworks make it tedious to construct classical functions in the form of quantum circuits. This was the primary motivation behind HODL, in which tasks such as addition, subtraction and multiplication on quantum states, as well as relational operations, can be performed in a simple yet expressive manner. This is aided by the HODL compiler's resource estimator, which heuristically computes the total number of qubits required to run a program before run-time and therefore allows dynamic resizing of registers at compile-time. The compiler generates OpenQASM, meaning its output can be used with IBM's QISKit framework.
\section{The HODL Programming Language}
A standard HODL program consists of two main parts:
\begin{itemize}
\item Declaration of data
\item Program statements to manipulate data
\end{itemize}
Statements and declarations are terminated with semicolons.\\
\hspace*{-0.6cm} There are two types of data in HODL:
\begin{itemize}
\item \textit{Integer (\textbf{int})}
\item \textit{Quantum-Integer (\textbf{super})}
\end{itemize}
These types distinguish between data allocated on two different devices, namely classical and quantum data. Storage for data is reserved using \textit{variable declarations}. Variables are represented by identifiers. The values of both data-types are expressed as integers. However, for quantum data, the integer $n$ represents the upper-bound of a uniform superposition of states. This provides a useful method for generating superposition states.
Expressions in HODL can be composed of either classical or quantum data, and are constructed using a selection of operators (infix notation) similar to C. Operations are further discussed in Section \textbf{3.3}.
\begin{itemize}
\item Arithmetic operators are for addition, subtraction and multiplication (+, - , *, +=, -=, *=)
\item Relational operators are for testing equality (==, !=) and order ($>$, $<$, $>=$, $<=$)
\item Boolean operators are for testing the value of Boolean functions (\&, $|$)
\end{itemize}
\subsection{Functions and Oracles}
Subroutines are either \textit{functions}, which have the option to return a value, or \textit{oracles}, which, unlike functions, are first-class objects $-$ they return a memory address pointing to the location of the oracle in classical memory. Since all quantum operations are unitary and therefore reversible, the bodies of all subroutines in HODL are expanded inline.
Functions are declared using the keyword \textbf{function}. If a function returns a value, it is specified by preceding the keyword with the type. This is followed by a function identifier followed by a list of parameters enclosed in parentheses. Finally, the function's body is enclosed in opening and closing braces. All parameters are passed-by-reference due to the no-cloning theorem which states that a quantum state cannot be cloned in its entirety [4].
The return statement saves whichever variable is returned from being uncomputed and the corresponding qubit register is returned and can be assigned a new identifier. The syntax for the return statement is the keyword \textbf{return}, followed by the variable name to be returned. Returning a variable is a classical instruction, and cannot be implemented inside quantum code blocks.
Although they have no return value, oracles are declared in a similar manner, using the keyword \textbf{oracle} instead of the keyword \textbf{function}. The primary reason for oracles is the need to pass functions as parameters to other functions: since low-level memory manipulation is not supported, function pointers cannot be used, so oracles were introduced into the language for this purpose. When oracles are passed as input, a structure is passed which contains their address in memory amongst other metadata. Currently, oracles may only be passed to intrinsic functions such as \textbf{filter}, although it is a future development goal to allow oracles as parameters to user-defined subroutines. \\
\hspace*{-0.5cm}Intrinsic functions are \textbf{filter} and \textbf{mark}.
The \textbf{mark} function must appear within a quantum conditional. The function applies a phase of $\theta$ to all variables upon which the conditional control qubit depends. The first parameter is an annotation for the programmer $-$ it specifies which variable the phase must be applied to and is only required so that one does not err by undesirably marking multiple variables. The second parameter specifies the value of the phase in terms of the keyword \textbf{pi}, since floating-point numbers are not currently supported. The function XORs the conditional control qubit with a qubit in the state $\frac{1}{\sqrt{2}}(|0\rangle + e^{i\theta}|1\rangle)$.
The \textbf{filter} function performs quantum search. It accepts as input a single oracle call, followed by a variable identifier representing the search space. The oracle must mark any states which are a solution to the search problem. Quantum search is further discussed in Section \textbf{4}.
\begin{figure}[H]
\caption{Functions and Oracles in HODL}
\begin{lstlisting}
# This is a comment
super function some_function(super foo, int bar) {
# body
# return some_var of type super
}
int function some_other_function() {
# body
# return some_var of type int
}
oracle some_oracle() {
# body
}
function main() {
# body
}
\end{lstlisting}
\end{figure}
\subsection{Type System}
Variable declarations in HODL begin with a type specifier, followed by a variable identifier. The type system is designed to indicate where a variable should be allocated. A variable of type \textbf{int} declares an integer on a classical computer, whereas a declaration introduced with the keyword \textbf{super} allocates qubits on a quantum computer.
The keyword \textbf{super} for quantum variables provides a shorthand way to declare uniform superpositions. Values are assigned to either type of variable using the operator \textbf{=} as in C.
There is a limitation on the values that can be assigned to a quantum variable $-$ the value must be a power of two, i.e. any quantum variable initialized with a value $x$ must satisfy $\log_2(x) \in \mathbb{N}$. This limitation enables the creation of a uniform superposition: $\sum_{i=0}^{x-1} \frac{1}{\sqrt{x}}|i\rangle$. When measured, this state collapses into a basis state $|i\rangle$ with probability $(\frac{1}{\sqrt{x}})^2 = \frac{1}{x}$. Reassignment of quantum variables is not allowed. \\
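The amplitudes prepared by a declaration such as \texttt{super a = 4;} can be illustrated with a short Python sketch (a simulation of the intended semantics, not the compiler's output; the function name is ours):

```python
import numpy as np

def super_state(x: int) -> np.ndarray:
    # `super a = x;` prepares the uniform superposition over |0>, ..., |x-1>
    assert x > 0 and (x & (x - 1)) == 0, "x must be a power of two"
    return np.full(x, 1.0 / np.sqrt(x))

amps = super_state(4)
# each basis state is measured with probability |1/sqrt(4)|^2 = 1/4
```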
Measurement is performed using the keyword \textbf{measure}, followed by the variable name to be measured. Internally, the measurement operator creates a classical register referenced as ``creg\_var'' where ``var'' is the variable name. This is often the last step in many quantum algorithms, and at the moment HODL provides no mechanism for classical style post-processing.
\begin{figure}[H]
\caption{Type System in HODL}
\begin{lstlisting}
function main() {
super a = 4;
int b = 2;
}
\end{lstlisting}
\end{figure}
\subsection{Operations}
Operations can be performed on both quantum and classical data. Quantum variables may not be placed into a classical variable via any operation other than measurement. For example, the expression $c_2 = q + c_1$ where $q$ is quantum and both $c_i$ are classical, is not permitted, whereas the converse, $q_2 = c + q_1$ is. The compiler maintains a resource estimator at compile-time during which it tracks each operation and dynamically resizes quantum registers as required. This is particularly useful when performing arithmetic operations since they can alter the size of a register from $n$ qubits to $m$ qubits $-$ it saves the programmer from having to perform such tasks manually.
\subsection{Conditional Expressions}
HODL supports the if-else model that permeates throughout most modern programming languages.
A conditional expression can be based on classical or quantum test conditions, but not both. Since quantum conditional expressions introduce entanglement, their bodies must purely be quantum.
The syntax for declaring an if-statement is to use the keyword \textbf{if} followed by a test condition enclosed in parentheses, succeeded by the conditional body enclosed within braces.
The syntax is similar for an elsif statement, which can succeed either an if-statement or another elsif-statement. The only difference is instead of using the keyword \textbf{if}, one must use the keyword \textbf{elsif}.
An else-statement signals the default case. It can succeed either an if-statement or an elsif-statement. To declare an else-statement, one must use the keyword \textbf{else}, followed by the body of the else-statement enclosed within braces.
\begin{figure}[H]
\caption{Conditionals in HODL}
\begin{lstlisting}
function main() {
if(some_condition) {
# body
}
elsif(some_other_condition) {
# body
}
else {
# body
}
}
\end{lstlisting}
\end{figure}
\subsection{Loops}
Loops are treated as classical constructs in HODL and are expanded inline at compile time.
There are two forms of loops in the language:
\begin{itemize}
\item For loop
\item While loop
\end{itemize}
\subsubsection{For Loop}
The syntax for declaring a for loop is to use the keyword \textbf{for} followed by a series of three expressions enclosed in parentheses and separated by semicolons. These expressions must be classical.
The first expression should declare and/or initialize the classical data to be used in the loop. The second should specify the halt condition, that is, the circumstances required for the loop to terminate. As in C, the third condition specifies the modifications to be made to data on each iteration. After these three conditions the loop body is specified in braces.
\subsubsection{While Loop}
The syntax for declaring a while loop is to use the keyword \textbf{while} followed by a single classical condition enclosed in parentheses. The condition specifies the circumstances required for the loop to run. After this condition the loop body is specified in braces.
\begin{figure}[H]
\caption{Loops in HODL}
\begin{lstlisting}
function main() {
# FOR
for(int i = 0; i < 5; i+=1) {
# body
}
# WHILE
while(cond) {
# body
}
}
\end{lstlisting}
\end{figure}
\subsection{Assembly Instructions}
HODL supports the basic quantum assembly instructions shown in Figure 5.
\begin{figure}[H]
\caption{Quantum Gates in HODL}
\begin{lstlisting}
function main() {
# Hadamard gate
H(foo);
# X/NOT gate
X(foo);
# Y gate
Y(foo);
# Z gate
Z(foo);
# Rotate X gate
RX(foo, angle);
# Rotate Z gate
RZ(foo, angle);
# Rotate Y gate
RY(foo, angle);
# Apply phase
P(foo, angle);
# S gate
S(foo);
# T gate
T(foo);
# Controlled-Not gate
CX(foo, bar);
# Controlled-Z gate
CZ(foo, bar);
# Controlled phase
CP(foo, bar, angle);
}
\end{lstlisting}
\end{figure}
\subsection{Compiler Details}
The HODL compiler makes use of mechanisms such as register-size tracking in order to perform mathematical operations. Ancillary registers used in such operations are handled internally: by maintaining an internal instruction-tape intermediate representation of the program, they are uncomputed when no longer required and reset to zero automatically. They are referenced by the compiler as ancillaX, where X is the number of ancillary registers in use at the time of creation, decremented by one. Likewise, cmpX registers are used for storing the results of relational operations. Classical expressions are evaluated at compile-time, leaving only quantum code to be compiled for later execution.
\begin{figure}
\caption{HODL Compiler Structure Flowchart}
\includegraphics[scale=0.6]{language.drawio3.png}
\end{figure}
\newpage
\subsubsection{Addition and Subtraction}
The addition operator in HODL is based on the Quantum Fourier Transform (QFT) [3] and is implemented as an optimized version of the method proposed by Draper [5].
The algorithm to perform $r = x + y$ proceeds as follows:
\begin{enumerate}
\item Initialize two registers, $x$ and $y$
\item Initialize a register $r$ to $\ket{0}^{\otimes n}$, where $n = \lfloor \log_2(x+y) \rfloor + 1$
\item Apply $H^{\otimes n}$ to $r$ to obtain the state:\[\left(\frac{\ket{0} + \ket{1}}{\sqrt{2}}\right)^{\otimes n} = \frac{1}{\sqrt{2^n}}(\ket{0} + \ket{1}) \otimes (\ket{0} + \ket{1}) \otimes \ldots \otimes (\ket{0} + \ket{1})\]
\item Apply a series of controlled phase operations to store the Fourier Transform of $x$ in $r$: \[\frac{1}{{\sqrt{2^n}}} (\ket{0} + e^{2 \pi i 0.x_n}\ket{1}) \otimes
(\ket{0} + e^{2 \pi i 0.x_{n-1}...x_n}\ket{1}) \otimes . . . (\ket{0} + e^{2 \pi i 0.x_1x_2...x_n}\ket{1})\]
\item Apply a series of controlled phase operations to add the Fourier Transform of $y$ into $r$. $r$ now holds the sum of $x$ and $y$ in the Fourier Basis.\\ $\frac{1}{{\sqrt{2^n}}} (\ket{0} + e^{2 \pi i (0.x_n + 0.y_n)}\ket{1}) \otimes
(\ket{0} + e^{2 \pi i (0.x_{n-1}...x_n + 0.y_{n-1}...y_n)}\ket{1}) \otimes . . . (\ket{0} + e^{2 \pi i (0.x_1x_2...x_n + 0.y_1y_2...y_n)}\ket{1})$ \\
$=$ $QFT|x + y\rangle$
\item Apply $QFT^\dagger$ to $r$ to retrieve $\ket{x + y}$ in the computational basis: \\ $QFT^\dagger QFT\ket{x + y} = \ket{x + y}$\\
\end{enumerate}
Note: To perform subtraction, the controlled operations in step 5 are inverted.
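The phase arithmetic behind this adder can be checked classically. The sketch below (an illustrative NumPy statevector simulation, not the HODL implementation) encodes $x$ in the Fourier basis, adds $y$ through the same diagonal phase rotations, and applies the inverse QFT; negating the phases in the marked step performs subtraction:

```python
import numpy as np

def qft_matrix(N):
    # DFT matrix: QFT|x> has amplitudes e^{2*pi*i*x*k/N} / sqrt(N)
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

def draper_add(x, y, nbits):
    N = 2 ** nbits
    F = qft_matrix(N)
    state = np.zeros(N, dtype=complex)
    state[x] = 1.0                               # steps 1-2: register |x>
    state = F @ state                            # steps 3-4: QFT|x>
    k = np.arange(N)
    state *= np.exp(2j * np.pi * y * k / N)      # step 5: phase rotations add y
    state = F.conj().T @ state                   # step 6: inverse QFT
    return int(np.argmax(np.abs(state)))         # |x + y mod 2^n>

print(draper_add(3, 5, 4))  # prints 8
```

Multiplying by $e^{2\pi i y k / N}$ maps the $k$-th Fourier amplitude of $x$ to that of $x+y$, so the inverse QFT returns $\ket{x+y \bmod 2^n}$ exactly.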
\subsubsection{Multiplication}
The multiplication operator in HODL is similarly based on the QFT.
The algorithm to perform $j = k \cdot l$ proceeds as follows:
\begin{enumerate}
\item Initialize registers $k$ as multiplicand, $l$ as multiplier, and $j$ as $\ket{0}^{\otimes n}$, where $n$ is the size of the output in bits and defaults to $n = size(k) + size(l)$
\item Apply $H^{\otimes n} $ to $j$, resulting in the state:
\[
\left(\frac{\ket{0} + \ket{1}}{\sqrt{2}}\right)^{\otimes n} = \frac{1}{\sqrt{2^n}}(\ket{0} + \ket{1}) \otimes (\ket{0} + \ket{1}) \otimes \ldots \otimes (\ket{0} + \ket{1}) \] The goal is to transform the state described above to the Fourier Transform of $k\cdot l = j$
\item QFT$|j\rangle$ = $\frac{1}{\sqrt{2^n}}(\ket{0} + e^{2\pi i 0.j_1j_2...j_n} \ket{1}) \otimes (\ket{0} + e^{2\pi i 0.j_2...j_n}\ket{1}) \otimes ... (\ket{0} + e^{2\pi i 0.j_n}\ket{1}) $, therefore it is desirable to obtain the relative phase factor $e^{2\pi i 0.j_1j_2...j_n}$
This can be achieved through multiplying the binary fractional forms of the multiplier and multiplicand: $(0.k) (0.l) = 0.kl = (\frac{k_1}{2^1} + \frac{k_2}{2^2} + \ldots + \frac{k_n}{2^n}) \cdot (\frac{l_1}{2^1} + \frac{l_2}{2^2} + \ldots + \frac{l_n}{2^n}) = $ \\
$\frac{k_1l_1}{2^2} + \frac{k_1l_2}{2^3} + \ldots + \frac{k_1l_n}{2^{n+1}} + $ \\
$\frac{k_2l_1}{2^3} + \frac{k_2l_2}{2^4} + \ldots + \frac{k_2l_n}{2^{n+2}} + \ldots + $ \\
$\frac{k_n l_1}{2^{n+1}} + \frac{k_n l_2}{2^{n+2}} + \ldots + \frac{k_n l_n}{2^{2n}}$
= $C$
\small{This is implemented as multi-controlled phase rotations ($P$) applied to a single qubit of $j$, to produce the phase $e^{2\pi i 0.j_1j_2...j_n} = e^{2\pi iC} $, where $P$ corresponds to the following matrix:}
$\large{
\begin{bmatrix}
1 & 0\\
0 & e^{2 \pi i \theta}
\end{bmatrix}
}
$
\small
where $\theta$ takes the corresponding $\frac{1}{2^{x}}$ phase angle for each controlled rotation
\item Apply the relative phase $e^{2\pi i 2^{m}C}$ to each qubit $j_m$, for $0 \leq m < size(j)$
\item Apply $ QFT^{\dagger}$ (Inverse QFT) on $j$ to retrieve the product in the computational basis
\end{enumerate}
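The phase $C$ above can be sanity-checked with ordinary arithmetic: for MSB-first bit strings, $(0.k)(0.l) = kl/2^{2n}$, which is exactly the phase whose multiples $2^m C$ write the Fourier transform of the product into $j$. A small illustrative check (not the compiler's implementation):

```python
def binary_fraction(bits):
    # MSB-first bits b_1 b_2 ... b_n  ->  sum_i b_i / 2^i
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def to_bits(x, n):
    # integer x as an MSB-first list of n bits
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

n = 3
k, l = 5, 3
C = binary_fraction(to_bits(k, n)) * binary_fraction(to_bits(l, n))
# (0.k)(0.l) = k*l / 2^(2n)
print(C, k * l / 2 ** (2 * n))  # prints: 0.234375 0.234375
```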
\section{Examples}
\subsection{Quantum Search}
The quantum search algorithm, first published in 1996 by Lov Grover [6], offers a quadratic speedup over the best known classical algorithm for searching for an item in an unordered dataset.\\
The algorithm is as follows:
\begin{enumerate}
\item Initialize a superposition of the search space in state, $|s\rangle$.
\item Apply an oracle, $O$, to $|s\rangle$. The oracle is designed in such a way that it applies a phase of $\pi$ if a term in the superposition fits a specified search constraint.
\item Apply the diffusion operator ($2|s\rangle \langle s| - I$).
\item Repeat steps 2 and 3, $(\frac{\pi}{4} \sqrt{\frac{N}{M}})$ times where $N$ is the size of the search space and $M$ is the number of solutions.
\item Measure state.
\end{enumerate}
The following is an application of the quantum search algorithm to search for all $x$ where $4x < 4$ and $0 \leq x \leq 7$.
The only solution is the state $0$, and the algorithm discovers it with high probability.
The first figure depicts the algorithm written in HODL, and the second figure shows the same algorithm written in IBM's QISKit.
\begin{figure}[H]
\caption{Example HODL program for Quantum Search Algorithm}
\begin{lstlisting}
# oracle takes a superposition as input
oracle some_oracle(super var) {
# if condition satisfied apply phase of pi to var
if(var * 4 < 4) {
mark(var,pi);
}
}
function main() {
#declare a uniform superposition of 3 qubits
super variable = 8;
# "filter" function is an intrinsic function corresponding
# to the diffusion operator, accepting an oracle,
# followed by the variable to apply the operator to
filter(some_oracle(variable), variable);
measure variable;
}
\end{lstlisting}
\end{figure}
\begin{figure}[H]
\caption{QISKit Code for Quantum Search Algorithm}
\begin{lstlisting}
# import libraries
from qiskit.circuit.library.arithmetic import IntegerComparator
from qiskit.circuit.library import QFT, GroverOperator
from qiskit.visualization import plot_histogram
from qiskit import *
from math import pi
# create and initialize registers
input_reg = QuantumRegister(3,name="input")
output_reg = QuantumRegister(5, name="output")
qc1 = QuantumCircuit(input_reg, output_reg)
qc1.h(input_reg)
qc = QuantumCircuit(input_reg, output_reg)
# set up multiplication circuit based on QFT
phase = pi*1/2**2*4
phase_copy = phase*2
for i in range(5):
qc.h(output_reg[i])
for j in range(3):
if i >= 3 and j == 0 or (j==1 and i==4):
phase /= (2**1)
continue
qc.cp(phase,input_reg[j], output_reg[i])
phase /= (2 ** 1)
phase = phase_copy
phase_copy *= 2
# apply inverse QFT
qft_circ = QFT(num_qubits=5).inverse()
qc.compose(qft_circ, qubits=[3,4,5,6,7], inplace=True)
qc1.compose(qc, qubits=range(8), inplace=True)
# compare with 4 (less than)
comparison_circ = IntegerComparator(5, 4, geq=False,name=
"comparison_circ")
circ_final = QuantumCircuit(15,3)
circ_final.compose(qc1, inplace=True)
circ_final.compose(comparison_circ, qubits=range(3,13), inplace=True)
# apply phase
circ_final.z(8)
# uncomputation
circ_final.compose(comparison_circ.inverse().to_gate(),
qubits=range(3,13), inplace=True)
circ_final.compose(qc.inverse().to_gate(), qubits=range(8), inplace=True)
# diffusion operator applied once
circ_final.h(range(3))
circ_final.x(range(3))
circ_final.mct([0,1,2], 13, 14, mode="basic")
circ_final.x(range(3))
circ_final.h(range(3))
# measurement
circ_final.measure(range(3), range(3))
\end{lstlisting}
\end{figure}
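Both implementations above can be checked against a direct statevector calculation. The sketch below (illustrative NumPy code, independent of either framework) applies the $4x < 4$ phase oracle and the diffusion operator to a uniform superposition over $0 \leq x \leq 7$:

```python
import numpy as np

N, M = 8, 1                          # search space size, number of solutions
s = np.ones(N) / np.sqrt(N)          # uniform superposition |s>
state = s.copy()
oracle_mask = 4 * np.arange(N) < 4   # marks only x = 0
iters = int(np.floor(np.pi / 4 * np.sqrt(N / M)))  # 2 iterations
for _ in range(iters):
    state[oracle_mask] *= -1              # oracle: phase of pi on solutions
    state = 2 * s * (s @ state) - state   # diffusion: (2|s><s| - I)|state>
probs = state ** 2
print(np.argmax(probs), round(probs[0], 3))  # prints: 0 0.945
```

After two Grover iterations the solution $x=0$ is measured with probability about $0.945$, matching the "high probability" claim in the text.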
\subsection{Deutsch-Jozsa Algorithm}
The algorithm was first proposed by Deutsch in 1985 and later revised by Deutsch and Jozsa in 1992 [7].
Given a function $f$, this algorithm checks if it is constant or balanced. Note: $f$ is guaranteed to be one or the other; if constant, $f$ returns either $0$ or $1$ for all inputs, otherwise it is balanced and returns $0$ for half of all inputs and $1$ for the other half.
The algorithm proceeds as follows:
\begin{enumerate}
\item Initialize state $|s\rangle$ as a superposition: $\frac{1}{\sqrt{2^n}}\sum_{i=0}^{2^n-1} |i\rangle$.
\item Apply $f$ to $|s\rangle$ and XOR the result with a qubit in the $|-\rangle$ state.
\item Apply Hadamard Gate ($H$), on $|s\rangle$.
\item Measure $|s\rangle$.
\item If $|s\rangle$ is measured to be zero, then $f$ is constant else it is balanced.
\end{enumerate}
The following is an application of the Deutsch-Jozsa algorithm for a function $f$ whereby $f(x) = 1$ if $ x + 7 > 14$ and $f(x) = 0$ if $x + 7 \leq 14$ where $0 \leq x \leq 15$. This function is balanced, so the algorithm must return a non-zero integer, in this case the bit-string $1000_2$.
\begin{figure}[H]
\caption{Example HODL program for the Deutsch-Jozsa Algorithm}
\begin{lstlisting}
function deutsch_josza(super inputs) {
# if condition satisfied store result in
# 1-qubit register (called cmp0 internally)
if(inputs + 7 > 14) {
# XOR the contents of cmp0 with a qubit in the state |->
mark(inputs,pi);
}
}
function main() {
# initialize superposition of 4 qubits
super test = 16;
deutsch_josza(test);
# apply interference with Hadamard Gate
H(test);
# store result in a classical register creg_test
measure test;
}
\end{lstlisting}
\end{figure}
\begin{figure}[H]
\caption{QISKit Code for Deutsch-Jozsa Algorithm}
\begin{lstlisting}
# import libraries for integer comparisons and addition
from qiskit.circuit.library.arithmetic import WeightedAdder,
IntegerComparator
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
# create circuits for addition and integer comparison
addition_circuit = WeightedAdder(7, [8,4,2,1, 4,2,1], name="adder_circ")
comparison_circuit = IntegerComparator(5, 15, name="comparison_circ")
# create registers and circuits for input and the integer "seven"
input_register = QuantumRegister(4, name="input_register")
seven = QuantumRegister(3, name="seven")
input_reg_to_circ = QuantumCircuit(input_register)
integer_seven_circ = QuantumCircuit(seven)
# apply HADAMARD on input to generate uniform superposition
input_reg_to_circ.h(input_register)
# flip all 3 bits in register to represent "7" in binary
integer_seven_circ.x(seven)
# add summands as input to addition circuit
addition_circuit.compose(input_reg_to_circ.to_gate(), qubits=[0,1,2,3],
front=True, inplace=True)
addition_circuit.compose(integer_seven_circ.to_gate(), qubits=[4,5,6],
front=True, inplace=True)
# create final circuit to hold all circuits
circuit_final = QuantumCircuit(22,4)
# append addition circuit in front of empty circuit
circuit_final.compose(addition_circuit.to_gate(), qubits=range(17),
front=True, inplace=True)
# add comparison circuit with addition circuit
# with the input being the sum from the previous circuit
circuit_final.compose(comparison_circuit.to_gate(),
qubits=[7,8,9,10,11,17,18,19,20,21], inplace=True)
# apply phase
circuit_final.z(17)
# uncomputation of circuits
circuit_final.compose(comparison_circuit.inverse().to_gate(),
qubits=[7,8,9,10,11,17,18,19,20,21], inplace=True)
circuit_final.compose(addition_circuit.inverse().to_gate(),
qubits=range(17), inplace=True)
# measure input register
circuit_final.measure(range(4), range(4))
\end{lstlisting}
\end{figure}
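The expected outcome $1000_2 = 8$ can be verified with a small statevector calculation (illustrative NumPy code, independent of either implementation above): the phase oracle $(-1)^{f(x)}$, with $f(x)=1$ iff $x+7>14$, is applied to the uniform superposition, followed by the Hadamard transform:

```python
import numpy as np
from functools import reduce

n = 4
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H1] * n)               # Hadamard on all n qubits
x = np.arange(2 ** n)
state = np.full(2 ** n, 1 / 2 ** (n / 2))    # uniform superposition
state *= np.where(x + 7 > 14, -1, 1)         # phase oracle for f
state = Hn @ state                           # interference step
probs = state ** 2
print(np.argmax(probs))  # prints 8
```

Since $f$ depends only on the most significant bit, all amplitude concentrates on $\ket{1000}$, and the non-zero measurement confirms that $f$ is balanced.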
\section{Summary}
The author developed a new programming language for quantum computation that allows a higher-level description of oracle functions than available in existing frameworks. Although the language compiler can be used as a standalone tool, it was designed to be used alongside other OpenQASM-based frameworks such as QISKit. The author has open-sourced the language on GitHub[8].
\section{Acknowledgements}
The author thanks Prof David Abrahamson (Trinity College Dublin) for his guidance and supervision in the creation of this paper, and Prof Brendan Tangney (Trinity College Dublin) and Dr Keith Quille (Technological University Dublin, and CSInc) for their insights and assistance. Thanks are also due to Dr Lee O'Riordan (Irish Center For High-End Computing), Dr Steve Campbell (UCD) and Dr Peter Rohde (UTS Australia) for their suggestions and feedback.
\section{Appendices}
\subsection{Lexical Specification in Regex}
digit = [``0''..``9''] \\
number = digit+\\
letter = [``a''..``z''] $|$ [``A''..``Z'']\\
identifier = letter (letter $|$ digit $|$ ``\_'')*\\
keyword = ``else'' $|$ ``elsif'' $|$ ``for'' $|$ ``function'' $|$ ``if'' $|$ ``int'' $|$ \newline
\hspace*{1.8cm}``measure'' $|$``oracle'' $|$ ``return'' $|$ ``super'' $|$ ``while'' \\
intrinsic\_function = ``mark'' $|$ ``filter'' \\
operator = ``+'' $|$ ``-'' $|$ ``*'' $|$ ``+='' $|$ ``-='' $|$ ``*='' $|$ \\
\hspace*{1.8cm}``$<$'' $|$ ``$>$'' $|$ ``$<$='' $|$ ``$>$='' $|$ ``=='' $|$ ``!=''\\
\hspace*{-0.6cm} Note: Keywords are reserved names and cannot be used as identifiers.
\subsection{EBNF Specification of Syntax}
program = subroutine \{[subroutine]\} \\
subroutine = (function $|$ oracle) \\
function = [type] \textbf{function} identifier parameters \textbf{\{} body [\textbf{return} identifier] \textbf{\}} \\
type = (\textbf{super} $|$ \textbf{int}) \\
parameters = \textbf{(} [type identifier \{\textbf{,} [type identifier]\}] \textbf{)} \\
oracle = \textbf{oracle} identifier parameters \textbf{\{} body \textbf{\}} \\
body = \{assignment $|$ fcall $|$ ocall $|$ operation $|$ loop $|$ cond\} \\
assignment = type identifier \textbf{=} (integer $|$ operation $|$ fcall)\\
fcall = identifier \textbf{(} \{[(type identifier)\ $|$ fcall $|$ ocall]\} \textbf{)} \textbf{;}\\
ocall = identifier \textbf{(} \{[(type identifier)\ $|$ fcall $|$ ocall]\} \textbf{)} \textbf{;} \\
operation = (identifier $|$ integer $|$ fcall $|$ operation) operator \\
\hspace*{2cm}(identifier $|$ integer $|$ fcall $|$ operation)\\
loop = (for $|$ while)\\
for = \textbf{for} \textbf{(} assignment semicolon operation semicolon operation \textbf{) \{} body \textbf{\}}\\
while = \textbf{while (} operation \textbf{) \{} body \textbf{\}}\\
cond = (if $|$ elsif $|$ else)\\
if = \textbf{if (} operation \textbf{) \{} body \textbf{\}}\\
elsif = \textbf{elsif (} operation \textbf{) \{} body \textbf{\}}\\
else = \textbf{else} \textbf{\{} body \textbf{\}}\\
\section{References}
[1] Andrew W. Cross, Lev S. Bishop, John A. Smolin, Jay M. Gambetta. Open Quantum Assembly Language, 2017. arXiv:1707.03429 [quant-ph]
\newline \newline
[2] Richard P. Feynman. Simulating physics with computers, 1981. International Journal of Theoretical Physics, 21(6/7)
\newline \newline
[3] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information, 2000. Cambridge University Press
\newline \newline
[4] W. Wootters and W. Zurek. A single quantum cannot be cloned. Nature 299, 802–803 (1982). https://doi.org/10.1038/299802a0
\newline \newline
[5] Thomas G. Draper. Addition on a Quantum Computer, 2000. arXiv:quant-ph/0008033
\newline \newline
[6] Lov K. Grover. A fast quantum mechanical algorithm for database search, 1996. arXiv:quant-ph/9605043 \newline \newline
[7] D. Deutsch and R. Jozsa. Rapid solution of problems by quantum computation. Proceedings of the Royal Society of London, A439:553–558 (1992) \newline \newline
[8] Ayush Tambde. https://github.com/at2005/HODL
\end{document}
\section{Introduction}
When analysing mathematical models of biological systems, we often aim to reverse engineer the parameters of the model
by fitting to observed data. The Bayesian formalism provides a principled way to perform parameter inference that quantifies our uncertainty in the model parameters \citep[see, for example,][]{kirk2015systems}, but traditionally requires us to be able to write down an analytical function (the likelihood function) that returns the likelihood of a parameter vector given the observed data.
However, for many models of interest, there is no straightforward way to write down the likelihood function associated with the model. This is often due to the intractability of deriving a closed form expression for the model likelihood. In such situations, it may nevertheless be possible to apply a simulation-based inference approach termed {\em Approximate Bayesian Computation} \citep[ABC; see, for example,][]{sissonHandbookApproximateBayesian2018}, that substitutes a kernel on some statistics of the data for the model likelihood, and evaluates the fit of the model at a given set of parameter values through simulations. For given parameter realisations, the model is simulated, and the statistics of the simulated data compared with the same statistics of the observed data. Informally, regions of parameter space that correspond to simulated data sets whose statistics are ``more similar" to those of the observed data will be associated with higher posterior probability than regions corresponding to simulated data sets with statistics that are ``less similar" (where ``similarity" is quantified using a pre-specified distance function).
Applying ABC, we can derive an approximate posterior distribution over the model parameters using standard sampling techniques such as rejection sampling. This approximate posterior distribution expresses our uncertainty in the model parameters, given the model and the observed data set. Recently, ABC parameter inference and model selection has been successfully developed for reaction-diffusion models \citep{warne2019using}. However, performing parameter inference for more general spatial models has been largely unexplored.
Topological data analysis (TDA) is a relatively new area of computational mathematics that quantifies the shape of data by computing topological properties of the data. There are various approaches of topological inference, e.g., level sets or mode clusters \citep{wassermanTopologicalDataAnalysis2018}. The most prominent algorithm in TDA is persistent homology \citep[PH;][]{carlssonTopologyData2009a,edelsbrunnerComputationalTopologyIntroduction2010}. PH takes in data and a metric, and outputs topological features (e.g., connected components and loops) and their persistence across different scales of the data. The computation crucially depends on the choice of filtration, which is a nested sequence of spaces built on the data, that is indexed by a scale parameter \citep{edelsbrunnerComputationalTopologyIntroduction2010,ghristHomologicalAlgebraData2018a}.
There are many software implementations for persistent homology \citep{Otter2017}; however, the software used is often selected based on the types of filtrations available within it.
The choice of filtration for applications is an active area of research, and there is no one-size-fits-all filtration for biological applications \citep{stolz-pretzer_global_2019}. The persistence of the topological features as well as where topological features appear and die in the filtration may provide insight into biological processes and models.
In previous work with spatial models of biological processes \citep{murrayMathematicalBiologyII2003}, TDA has been applied to test for spatial randomness \citep{robinsPrincipalComponentAnalysis2016}, automatically detect zebra-fish patterns \citep{mcguirl2020topological}, characterise immune cell infiltration by changes in a chemotaxis parameter \citep{Vipond2021}, and cluster parameter regimes for angiogenesis \citep{nardiniTopologicalDataAnalysis2021}. Now we wish to address the inverse problem of recovering model parameters given some observed data, in the Bayesian formalism. ABC enables us to perform parameter inference in a statistical model on the basis of data summaries, even when there is no clear way to define a likelihood function for the model. One key challenge in ABC is the choice of summary statistic, as the statistic must capture the relevant information about the model parameters in the data to allow the parameters to be learnt. Here we show that TDA provides informative data summaries that enable parameter inference to be performed successfully in a spatial model. In particular, we consider as a case study the Anderson-Chaplain model of angiogenesis \citep{andersonContinuousDiscreteMathematical1998}.
In previous work in the literature \citet{maroulasBayesianFrameworkPersistent2020} model persistence diagrams as Poisson point processes and use this to allow a posterior to be inferred on a persistence diagram given some observed data and a suitable prior. This allows a posterior on topological features to be defined, and a scheme for performing Bayesian classification is developed, but it does not consider the case of performing inference on a parametric model, given an observed set of topological features.
In \citet{sgouralisBayesianTopologicalFramework2017}, Bayesian inference is applied in the processing of the data, but not in a topological context or for parameter inference in the model of interest. Instead various performance measures are evaluated for a small set of selected parameter combinations, not considering a distribution over parameters or a Bayesian posterior.
In this paper we first describe the model and data generation process applied, before describing TDA and ABC in general terms, and their specific application to the Anderson-Chaplain model. We demonstrate our suggested approach for parameter inference on simulated data from the Anderson-Chaplain model and compare the outputs to the results produced by other non-topological statistics.
\section{Model Data}
The Anderson-Chaplain model \citep{andersonContinuousDiscreteMathematical1998} is a well-studied spatio-temporal model of angiogenesis. Angiogenesis is the growth of new blood vessels from pre-existing vasculature. The model couples a system of reaction partial differential equations with discrete dynamics. It describes the production and consumption of fibronectin and the secretion of tumour angiogenic factors (TAF) from a tumour; new vasculature forms from endothelial tip cells in response to gradients of fibronectin and TAF.
The Anderson-Chaplain model of angiogenesis has two key parameters, $\rho$ and $\chi$, coefficients for haptotaxis and chemotaxis respectively. These determine the relative contribution of fibronectin driven haptotaxis and TAF driven chemotaxis to the movement of tip cells in the model. Other parameters determine the dynamics of the distribution of fibronectin and TAF, and we keep these fixed as in \citet{nardiniTopologicalDataAnalysis2021}.
Data were generated by simulating the Anderson-Chaplain model on a two-dimensional square lattice of resolution $201$ by $201$ \citep[as in][]{andersonContinuousDiscreteMathematical1998} using the implementation provided in \citet{nardiniTopologicalDataAnalysis2021}, with a linear chemoattractant distribution that increases with the coordinate along the $x$ axis. This produces sets of binary images (see figure \ref{fig:posterior}) which are then further processed using the methods described below.
\section{Methods}
\subsection{Topological Data Analysis}
To characterise the $k$ dimensional features of a topological space $X$ we can consider the homology group in dimension $k$, $H_k(X)$, composed of elements that intuitively correspond to equivalence classes of cycles that can be continuously deformed into one another on $X$. In dimension one, the generators of the homology group correspond to one dimensional holes in $X$, or loops, while in dimension zero the generators of the homology group correspond to the connected components of $X$.
The topological spaces we are interested in can be represented using finite sets of simplices
known as simplicial complexes $K$, constructed by joining together individual simplices, potentially of different dimension, and closed under the operation of taking faces. A zero-dimensional simplex corresponds to a single vertex, a one-dimensional simplex to an edge, and a two-dimensional simplex to a triangle. Given a real-valued function $f$ on $K$ we can define a filtration of sublevel complexes, which induces a sequence of homology groups in a given dimension $k$, with homomorphisms induced by inclusion
\begin{equation}
0 = H_k(K_{a_0}) \rightarrow H_k(K_{a_1}) \rightarrow \ldots \rightarrow H_k(K_{a_n}) = H_k(K)
\end{equation}
where $K_a=f^{-1}(-\infty,a]$ and $a_0<a_1<\ldots<a_n$, and $K_{a_i}\subseteq K_{a_j}$ for $i<j$. Persistent homology then tracks the birth and death of elements of the homology groups as $a$ varies.
By choosing an appropriate definition of the simplicial complex and filtration built from the data, persistent homology can provide information about the topological features in the data.
We build the simplicial complex and filtration from the final timepoint of model simulation data following \citet{nardiniTopologicalDataAnalysis2021}. All cells in the two-dimensional square lattice that have vasculature present are assigned a value of one, and zero elsewhere. The centroid of each non-zero cell is a $0$-simplex. The simplicial complex is built on these $0$-simplices based on so-called Moore neighbourhoods: if any of the eight cells surrounding a vertex are also nonzero, then we connect them via $1$-simplices (edges) when two points are pairwise connected, or via $2$-simplices when three points are pairwise connected by edges. The union of these simplices forms a \emph{simplicial complex}. There are different ways to study vascular data at multiple scales using filtrations~\citep{bendich2016persistent,stolz2020multiscale}. Here, we construct sequences of filtered simplicial complexes using a sweeping plane filtration \citep{bendich2016persistent,nardiniTopologicalDataAnalysis2021}. In the sweeping plane filtration, we move a vertical line from left to right across the 2D lattice domain and include simplices in the filtration only to the left of this line. This filtration can be considered a sublevel set filtration corresponding to a height function $h: X \rightarrow \mathbb{R}$ on this simplicial complex.
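For dimension zero, the sweeping-plane persistence can be computed with a simple union-find. The sketch below (illustrative code, not the implementation of \citet{nardiniTopologicalDataAnalysis2021}; $2$-simplices are omitted since only connectivity matters for $H_0$) adds pixels column by column and, whenever two components merge across a Moore neighbourhood, kills the younger one (the elder rule):

```python
import numpy as np

def sweep_h0(img):
    """H_0 persistence of a binary image under a left-to-right
    sweeping-plane filtration (filtration value = column index),
    with Moore (8-cell) connectivity."""
    parent, birth = {}, {}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path compression
            p = parent[p]
        return p

    finite_bars = []
    rows, cols = img.shape
    for c in range(cols):
        for r in range(rows):
            if not img[r, c]:
                continue
            p = (r, c)
            parent[p] = p
            birth[p] = c
            for dr in (-1, 0, 1):           # Moore neighbours already added
                for dc in (-1, 0, 1):
                    q = (r + dr, c + dc)
                    if q == p or q not in parent:
                        continue
                    a, b = find(p), find(q)
                    if a == b:
                        continue
                    if birth[a] > birth[b]:
                        a, b = b, a         # elder rule: b is the younger root
                    if birth[b] < c:        # skip zero-length bars
                        finite_bars.append((birth[b], c))
                    parent[b] = a
    # surviving components are essential classes (death at infinity)
    essential_births = sorted(birth[p] for p in parent if find(p) == p)
    return sorted(finite_bars), essential_births

img = np.array([[1, 1, 1, 1],
                [0, 0, 0, 1],
                [0, 1, 1, 1]])
print(sweep_h0(img))  # prints: ([(1, 3)], [0])
```

In the example image, the lower branch is born at column 1 and merges with the elder component at column 3, giving the finite bar $(1,3)$; the remaining component is essential with birth 0.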
\subsection{Approximate Bayesian Computation}
In Bayesian inference we aim to derive the posterior distribution of the parameters of a model given some observed data. To do so we first define a prior distribution on the model parameters, treating them as random variables. This describes our belief in the distribution of the parameters before having observed any data. We then perform a so-called \textit{Bayesian update} of the model having observed some data. This is done using the likelihood of the observed data given the model and parameters. From this we arrive at a posterior distribution that describes the conditional distribution of the parameters given the observed data. If we denote the model parameters by $\theta$, and the data by $x$, we can first write the prior as $p(\theta)$, and the likelihood of the data as $p(x|\theta)$. In the Bayesian framework we apply Bayes' rule to update the prior distribution having observed the data, giving us the posterior distribution as
\begin{equation}
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)},
\end{equation}
where $p(x)$ is known as the evidence or marginal likelihood, and plays a key role in Bayesian model selection. Evaluation of the marginal likelihood is often computationally expensive or intractable. However, in many settings (e.g. when sampling from the posterior using Markov chain Monte Carlo techniques), it is sufficient to be able to write down the posterior up to proportionality
\begin{equation}
p(\theta|x) \propto p(x|\theta)p(\theta).
\end{equation}
This approach relies on the ability to calculate both the prior of the parameters $p(\theta)$, which is generally tractable, and the likelihood $p(x|\theta)$. However in many models of interest it is not tractable or not possible to directly evaluate $p(x|\theta)$, for example in population genetics \citep{beaumontApproximateBayesianComputation2002}, random graph models \citep{thorneGraphSpectralAnalysis2012a} and some models of dynamical systems \citep{toniApproximateBayesianComputation2009a,liepe2014framework}. To allow us to perform Bayesian inference in these situations, an approach named Approximate Bayesian Computation (ABC) was developed, based on initial work in \citet{fuEstimatingAgeCommon1997} and \citet{tavareInferringCoalescenceTimes1997}, developed further in \citet{beaumontApproximateBayesianComputation2002} and \citet{marjoramMarkovChainMonte2003a}, and expanded in many works, see e.g. \citet{sissonSequentialMonteCarlo2007a,toniApproximateBayesianComputation2009a,beaumontAdaptiveApproximateBayesian2009,delmoralAdaptiveSequentialMonte2012,prangleRareEventApproach2018}.
In an ABC framework, we rely on the observation that given the ability to sample realisations $y$ from $p(x|\theta)$, we can rewrite the posterior as
\begin{equation}
p(\theta|x) = \int p(\theta,y|x)\dd{y} ,
\end{equation}
where
\begin{equation}
p(\theta,y|x) = \frac{\mathds{1}(x=y)p(y|\theta)p(\theta)}{p(x)},
\end{equation}
and by relaxing this to
\begin{equation}
p(\theta,y|x) \approx \frac{\mathds{1}(D(x,y)<\epsilon)p(y|\theta)p(\theta)}{p(x)},
\end{equation}
we can generate samples from an approximate posterior (which we shall refer to as the {\em ABC posterior}) by using a suitably small $\epsilon$ in Algorithm~\ref{a:abc}. Often when applying the rejection algorithm we fix the number of samples $S$ and select $\epsilon$ such that the number of accepted samples $\hat{\theta}_s$ with $d_s<\epsilon$ is some fraction $\alpha$ of $S$.
\begin{algorithm}[h]
\caption{ABC rejection sampler algorithm}\label{a:abc}
\begin{algorithmic}[1]
\For{$s \in 1,\ldots,S$}
\State{Sample $\hat{\theta_s}\sim p(\theta)$}
\State{Simulate $y\sim p(y|\hat{\theta_s})$}
\State{Calculate $d_s\leftarrow D(g(y),g(x))$}
\EndFor
\State{Return samples $\hat{\theta_s}$ where $d_s<\epsilon$}
\end{algorithmic}
\end{algorithm}
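Algorithm~\ref{a:abc} can be illustrated with a toy example (a minimal sketch, not the pipeline used in this paper): we infer the mean of a Gaussian under a uniform prior, using the sample mean as the summary statistic $g$, and choose $\epsilon$ so that a fraction $\alpha$ of the $S$ prior draws is retained:

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(2.0, 1.0, size=100)      # "observed" data, true theta = 2

def g(x):                                   # summary statistic
    return x.mean()

S, alpha = 20000, 0.01
theta_hat = rng.uniform(-5, 5, size=S)      # step 2: sample from the prior
d = np.empty(S)
for s in range(S):
    y = rng.normal(theta_hat[s], 1.0, size=100)  # step 3: simulate the model
    d[s] = abs(g(y) - g(x_obs))                  # step 4: distance on summaries
eps = np.quantile(d, alpha)                 # epsilon kept so ~alpha*S samples pass
posterior = theta_hat[d <= eps]             # step 6: ABC posterior sample
print(round(posterior.mean(), 2))
```

The accepted samples concentrate near the observed sample mean, approximating the true posterior over $\theta$.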
The ABC rejection sampler algorithm requires us to define a distance on the data, $D(x,y)$, and in some cases this may itself be intractable. It is then possible to substitute a summary statistic of the data, $g(x)$, in place of the data itself, leading to a distance on these summary statistics, $D(g(x),g(y))$, being considered. In the case where $g$ is a \textit{sufficient statistic} for the model, as $\epsilon \rightarrow 0$ this will be equivalent to applying a distance on $x$ and $y$ themselves. Often this is not the case, and this is another avenue through which ABC produces an approximation to the posterior rather than a true evaluation of the posterior itself.
\subsection{Topological statistics for Approximate Bayesian Computation}
In previous work \citet{nardiniTopologicalDataAnalysis2021} applied topological statistics of simulated data (2-D binary images) to quantify different regimes in the parameter space of the Anderson-Chaplain model of angiogenesis. By constructing simplicial complexes from the output data of a spatial model, and using the same filtration as \citet{nardiniTopologicalDataAnalysis2021}, PH can be applied to describe the presence of topological features in the simulated data.
In some cases when calculating the persistence of the topological features of a filtration, it is possible for some features to persist indefinitely, so that their death in the filtration is represented as $+\infty$. In our application, this causes information about certain topological features to be lost, for example loops and some connected components, as although we know when they are born in the filtration, we have no measure of their extent.
For this reason, \citet{nardiniTopologicalDataAnalysis2021} computed persistence of a left to right sweeping plane filtration and right to left sweeping plane filtration of the simplicial complex built from the simulated model data (see \citet{nardiniTopologicalDataAnalysis2021} for details).
By viewing the left to right filtration as a sublevel set filtration and the right to left filtration as a superlevel set filtration, more information (e.g., only finite bars that capture the extent of topological features) can be extracted as a consequence of duality and symmetry theorems \citep{cohen-steinerExtendingPersistenceUsing2009}.
\subsection{Extended persistence}
Here we propose a more elegant solution that applies the extended persistence of \citet{cohen-steinerExtendingPersistenceUsing2009}, under which all topological features have bars of finite length.
Extended persistence was developed to study cavities and protrusions in protein docking \citep{agarwal2006extreme,cohen-steinerExtendingPersistenceUsing2009}. Since then, \citet{yim2021optimization} optimised spectral wavelets for graph classification using extended persistence, and extended a differentiability result for ordinary persistence to extended persistence.
In standard persistence, the sublevel sets $X_a=f^{-1}(-\infty,a]$ of the manifold $X$ are nested and PH is defined through the corresponding linear sequence of homology groups.
In extended persistence, we compute the homology of the sublevel sets, as well as the relative homology with respect to the superlevel sets $X^a=f^{-1}[a,\infty)$.
This is motivated by the fact that the relative homology group $H_k(X,X^a)$ of dimension $k$ is isomorphic to the cohomology group of $X_a$ of dimension $n-k$, denoted $H^{n-k}(X_a)$, where $n$ is the dimension of the manifold $X_a$ \citep{cohen-steinerExtendingPersistenceUsing2009}.
For a set of values $a_0,\ldots,a_n$ that bound and fit between the critical points of $f$, the extended persistence in dimension $k$ is defined as the persistence of the homology groups and relative homology groups as
\begin{align}
0 &= H_k(X_{a_0}) \rightarrow H_k(X_{a_1}) \rightarrow \ldots \rightarrow H_k(X_{a_n}) = H_k(X)\nonumber\\
&H_k(X) = H_k(X,X^{a_n}) \rightarrow \ldots \rightarrow H_k(X,X^{a_0}) = 0
\end{align}
where $H_k(X,X^a)$ denotes the relative homology group of $X$ and $X^a$ in dimension $k$ \citep{edelsbrunnerComputationalTopologyIntroduction2010}.
This extended persistence can be broken down into multiple components \citep{cohen-steinerExtendingPersistenceUsing2009}: the ordinary part, formed of topological features that are both born and die within the homology groups of the sublevel sets of $X$; the relative part, of features that are born and die in the relative homology groups; and the extended part, of features that are born in the ordinary homology groups and die in the relative homology groups in the filtration. The birth time $b$ of a feature may be larger than its death time $d$, because the feature may die in the relative homology group $H(X,X^d)$ with $d<b$. The extended part can be further divided into topological features that have $b<d$, termed extended$+$, and those with $d<b$, termed extended$-$.
\subsection{Persistence images}
The output of applying PH to a data set is often represented as a persistence diagram, which for a given dimension $k$ consists of a plot of points $(b,d)$, where $b$ is the time of birth and $d$ is the time of death of each dimension $k$ topological feature in the filtration. To allow for the straightforward application of methods from machine learning to these diagrams, \citet{JMLR:v18:16-337} developed the concept of a persistence image. This allows a persistence diagram to be represented as a vector in $\mathds{R}^n$, so that, for example, it can be used in methods such as K-means clustering, as in \citet{nardiniTopologicalDataAnalysis2021}.
To generate the persistence image corresponding to a persistence diagram represented as a multiset of points $(b,d)$, the points are first transformed to give a multiset $B$ of birth and persistence coordinates $(b,d-b)$ (for extended persistence, we require a slightly different formulation -- see below). We note that the persistence image formulation of \citet{JMLR:v18:16-337} ignores all infinitely persistent features. A persistence surface $f\colon\mathds{R}^2\rightarrow \mathds{R}$ is then defined as the weighted sum of kernels applied to each birth/persistence coordinate
\begin{equation}
f(x,y) = \sum_{(b,p)\in B} g(b,p)h(x,y;b,p).
\label{e:persist}
\end{equation}
From the persistence surface defined in eqn. \ref{e:persist}, an $m\cross m$ array of values is created by discretizing $f(x,y)$ into an $m\cross m$ grid over a suitable range. This array can then be flattened to give a vector in $\mathds{R}^{m^2}$. As in \citet{JMLR:v18:16-337}, we apply a Gaussian kernel for $h$ with mean $\mu=(b,p)$ and fixed standard deviation $\sigma$.
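As an illustration, the persistence surface and its discretization can be sketched in a few lines of NumPy. The grid extent, resolution, and example diagram below are hypothetical, and a constant weight $g=1$ is assumed, as in our experiments; in practice the persistence images are produced with the GUDHI library.

```python
import numpy as np

def persistence_image(diagram, m=50, sigma=1.0, extent=(0.0, 10.0)):
    """Discretise the persistence surface into an m-by-m grid and
    flatten it to a vector in R^{m^2}.  `diagram` is a list of finite
    (birth, death) pairs; a constant weight g = 1 is assumed."""
    xs = np.linspace(extent[0], extent[1], m)
    grid_b, grid_p = np.meshgrid(xs, xs)          # birth / persistence axes
    surface = np.zeros((m, m))
    for b, d in diagram:
        p = d - b                                  # persistence coordinate
        # Gaussian kernel h(x, y; b, p) with mean (b, p) and std sigma
        surface += np.exp(-((grid_b - b) ** 2 + (grid_p - p) ** 2)
                          / (2.0 * sigma ** 2))
    return surface.ravel()                         # vector in R^{m^2}

vec = persistence_image([(1.0, 3.0), (2.0, 2.5)])
```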
We remark that all features in extended persistence have finite persistence; therefore no information (i.e., the infinite bars of ordinary persistence) is lost in the persistence images for extended persistent homology.
\subsection{TABC}
We use a set of topological statistics, derived from the extended persistence of a filtration over the simplicial complex representing the data, as the summary statistics in an ABC framework, in a method we title TABC, to perform topological posterior inference on the Anderson-Chaplain model of angiogenesis. In the TABC methodology, the summary statistics used in ABC are the persistence images in each dimension produced by the four components of the extended persistence of a filtration. To allow persistence images to be generated for the extended persistence, in components of the extended persistence with points in the persistence diagram $(b,d)$ with $d<b$, we flip the coordinates to consider instead $(d,b)$. When transformed into a birth/persistence coordinate, this then represents the duration of persistence of the feature in the relative part, or the gap between birth in the ordinary homology and death in the relative homology of the feature in the extended$-$ part.
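The coordinate flip described above can be sketched as follows; the function name and the example points are illustrative only.

```python
def to_birth_persistence(points):
    """Map extended-persistence points to birth/persistence coordinates.
    Points with d < b (from the relative and extended- parts) are first
    flipped to (d, b), so every persistence value is non-negative."""
    out = []
    for b, d in points:
        if d < b:
            b, d = d, b          # the coordinate flip described above
        out.append((b, d - b))
    return out

coords = to_birth_persistence([(0.2, 0.9), (0.8, 0.3)])
```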
We generate persistence images of dimension $50$ by $50$ with a constant weight function for the persistence surface and the kernel of the persistence images set as a Gaussian distribution with standard deviation $\sigma=1$, as we found this to work well. As the distance metric in the ABC algorithm we applied the Euclidean distance between the statistics.
In our implementation we use the GUDHI library (\url{http://gudhi.gforge.inria.fr/}) to construct simplicial complexes, generate extended persistence diagrams and produce persistence images (with standard weighting $g=1$).
\subsection{Image-based statistics}
\label{s:im}
For comparison we also consider four statistics based on the binary image data produced by the simulations, that were chosen with the aim of differentiating the different classes of behaviours observed in \citet{nardiniTopologicalDataAnalysis2021}, without overlapping with features that could be considered as topological descriptors (for example numbers of connected components). These statistics are:
\begin{itemize}
\item \textbf{Mean X coordinate} The mean X value of occupied pixels.
\item \textbf{Mean Y coordinate} The mean Y value of occupied pixels.
\item \textbf{Maximum X coordinate} The maximum X value of an occupied pixel.
\item \textbf{Mass} The fraction of occupied pixels.
\end{itemize}
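A minimal sketch of these four statistics, assuming the simulation output is a binary 2-D array with pixel $(\text{row}, \text{col})$ read as $(Y, X)$; the example image is hypothetical.

```python
import numpy as np

def image_statistics(img):
    """The four image-based summary statistics for a binary image.
    `img` is a 2-D 0/1 array; pixel (row, col) is read as (Y, X)."""
    ys, xs = np.nonzero(img)
    return np.array([
        xs.mean(),    # mean X coordinate of occupied pixels
        ys.mean(),    # mean Y coordinate of occupied pixels
        xs.max(),     # maximum X coordinate of an occupied pixel
        img.mean(),   # mass: fraction of occupied pixels
    ])

# Hypothetical 4x4 binary image with two occupied pixels.
img = np.zeros((4, 4), dtype=int)
img[1, 2] = img[3, 0] = 1
stats = image_statistics(img)
```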
As with the topological statistics, we applied the Euclidean distance between vectors of statistics as the distance in the ABC rejection algorithm.
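The rejection step itself can be sketched as follows. The model here is a toy stand-in (a noisy identity map) rather than the Anderson-Chaplain simulator, and all names and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed_stats, simulate, prior_sample,
                  n_draws=2000, n_keep=100):
    """Minimal ABC rejection sampler: keep the n_keep prior draws whose
    simulated summary statistics lie closest, in Euclidean distance,
    to the observed statistics."""
    thetas = np.array([prior_sample() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(simulate(t) - observed_stats)
                      for t in thetas])
    return thetas[np.argsort(dists)[:n_keep]]

# Toy stand-in for the simulator: the summary statistics are a noisy
# copy of the parameters (NOT the Anderson-Chaplain model).
simulate = lambda theta: theta + rng.normal(0.0, 0.1, size=2)
prior_sample = lambda: rng.uniform(0.0, 1.0, size=2)

true_theta = np.array([0.4, 0.6])
observed = simulate(true_theta)
posterior = abc_rejection(observed, simulate, prior_sample)
```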
\section{Results}
\begin{figure*}[!htp]
\centering
\begin{tabular}{ccccc}
Observed&Posterior density&\multicolumn{3}{c}{ABC posterior predictive}\\
\includegraphics[width=0.12\textwidth]{sims/test_simulated_0.pdf}&
\includegraphics[width=0.11\textwidth]{plots/tabc_posterior0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_0_0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_0_1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_0_2.pdf}\\
\includegraphics[width=0.12\textwidth]{sims/test_simulated_12.pdf}&
\includegraphics[width=0.11\textwidth]{plots/tabc_posterior12.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_12_0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_12_1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_12_2.pdf}\\
\includegraphics[width=0.12\textwidth]{sims/test_simulated_74.pdf}&
\includegraphics[width=0.11\textwidth]{plots/tabc_posterior74.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_74_0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_74_1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_74_2.pdf}\\
\includegraphics[width=0.12\textwidth]{sims/test_simulated_1.pdf}&
\includegraphics[width=0.11\textwidth]{plots/tabc_posterior1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_1_0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_1_1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_1_2.pdf}\\
\includegraphics[width=0.12\textwidth]{sims/test_simulated_98.pdf}&
\includegraphics[width=0.11\textwidth]{plots/tabc_posterior98.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_98_0.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_98_1.pdf}&
\includegraphics[width=0.12\textwidth]{sims/tabc_pred_98_2.pdf}\\
\end{tabular}
\caption{Visualisations of simulation output from the Anderson-Chaplain model for five parameter sets sampled from a uniform prior on the model parameters. The first column shows the observed data, while the second shows a contour plot of the posterior density inferred by applying the TABC methodology, with the red cross indicating the known parameter values used to generate the observed data. The remaining three columns show simulations of parameter values drawn from the ABC posterior predictive distribution.\label{fig:posterior}}
\end{figure*}
We apply the TABC approach described above to parameter inference in the Anderson-Chaplain model. Taking $10000$ samples from the prior on the two model parameters, we simulated the Anderson-Chaplain model of angiogenesis for each sampled parameter pair.
To validate our approach, we drew a further $100$ parameter sets from the model prior and simulated data from each to take on the role of the observed data. A representative subset of these simulated data sets, covering a range of different behaviours, can be seen in figure \ref{fig:posterior}.
Given these data, we applied the TABC approach described above to derive samples of $500$ parameter values from the ABC posterior. To investigate the ability of our topological approach to accurately capture the relevant behaviour of the model, we generated ABC posterior predictive samples by simulating the model using parameter values drawn at random from the ABC posterior. These are shown in figure \ref{fig:posterior}, and demonstrate that TABC enables the effective recovery of parameters that replicate the qualitative behaviour of the observed data.
It can be seen that the ABC posterior distributions for the two parameters demonstrate a degree of unidentifiability: in most cases the posterior follows a ridge, with a strong correlation between the two parameters. This aligns with the results found in \citet{nardiniTopologicalDataAnalysis2021}, where it was discovered that there were distinct classes of behaviour occupying diagonal sections of the parameter space, much as our posterior distributions do. Being able to identify such uncertainty in our parameter estimates is one of the key benefits of a Bayesian analysis, and it also provides insights into the behaviour of the model. For example, we can see that the ABC posterior predictive samples in figure \ref{fig:posterior} are representative of a given class of model behaviour, and that draws from across the potentially wide distribution of parameters indicated by the posterior will follow this behaviour.
The known parameter values used to generate the data on which the posterior distributions are based are marked in figure \ref{fig:posterior}, and can be seen to be within the bulk of the ABC posterior mass.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Statistics & Mean RSSE & $2\sigma_{\overline{x}}$ RSSE & Mean Entropy & $2\sigma_{\overline{x}}$ Entropy\\
\hline
Image & 4.30 & 0.25 & -2.86 & 0.12\\
Topological & 3.61 & 0.27 & -3.31 & 0.12\\
\hline
\end{tabular}
\vspace*{5pt}
\caption{Mean of the root sum of squared errors and entropy of the posterior distribution inferred from simulated data for $100$ parameter sets drawn from a uniform prior. Values for both the TABC based posterior and ABC on the image-based statistics are shown.}
\label{t:rsse}
\end{table}
To further quantify the efficacy of our approach, we compared statistics of the posterior distributions obtained from TABC with those generated by an ABC approach using only the image-based statistics described in section \ref{s:im}. We quantified the accuracy of the inferred parameters by taking the mean root sum of squared errors (RSSE) between the posterior samples and the ``true'' parameters used to generate the data, as shown in table \ref{t:rsse}. Here the mean RSSE achieved by the topological posterior over the $100$ simulated data sets is below that of the posterior generated using image-based statistics. We also calculated the mean entropy of the posterior distributions produced for each observed data point using both TABC and ABC with image-based statistics. As can be seen in table \ref{t:rsse}, the entropy for the posterior derived from the topological features is lower than that derived from the image-based statistics. Taken together, the RSSE and entropy results suggest that the topological statistics used in TABC retain more of the information in the original data set, and hence that TABC is able to infer the parameters used to generate the data more accurately than ABC using image-based statistics alone.
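For concreteness, the RSSE summary might be computed as follows; the sample values are hypothetical stand-ins for posterior draws, and the entropy estimator is not shown.

```python
import numpy as np

def mean_rsse(samples, true_theta):
    """Mean root sum of squared errors between posterior samples and
    the known parameters used to generate the data."""
    return float(np.mean(np.sqrt(((samples - true_theta) ** 2).sum(axis=1))))

# Hypothetical posterior samples for a two-parameter model.
samples = np.array([[0.4, 0.6], [0.5, 0.8], [0.3, 0.5]])
err = mean_rsse(samples, np.array([0.4, 0.6]))
```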
\section{Conclusions}
We have developed an approach for performing ABC in a topological context that is able to derive posterior distributions over model parameters that can accurately reproduce multiple different classes of behaviour and structure observed within the data.
We applied extended persistence, which strictly quantifies more topological features than ordinary persistence. Other topological shape statistics have focussed on sweeping across data in multiple different directions \citep{turner2014persistent,curry2018many,crawford2020predicting}. Their utility for parameter inference and model selection will be explored in future studies.
Evaluating the ABC posterior distributions we obtain, we find that by considering topological features in the data through the TABC approach we are able to reduce the posterior uncertainty in the parameter values, and to infer posterior distributions that are more closely focused around the parameters used to generate the data.
While we use persistence images here, there are other potential approaches to summarising topological data analysis for use in parameter inference. For example it is possible to directly derive distances between persistence diagrams in a number of ways \citep{atienzaStabilityPersistentEntropy2020,bubenikStatisticalTopologicalData2015,carriereSlicedWassersteinKernel2017,carriereStableTopologicalSignatures2015,chazalStochasticConvergencePersistence2014,difabioComparingPersistenceDiagrams2015,kerberGeometryHelpsCompare2017,lacombeLargeScaleComputation2018,royerATOLMeasureVectorization2021}, and these could be substituted for the Euclidean distance between the vectors of persistence images that we apply. In future work we will investigate the possibility of applying a distance function on persistence diagrams in the ABC likelihood and how this influences the efficiency of the algorithm.
For simplicity we have also only considered the simplest form of the ABC algorithm -- many other increasingly sophisticated approaches exist, including Markov Chain Monte Carlo algorithms, Sequential Monte Carlo methods \citep{sissonSequentialMonteCarlo2007a} and rare event schemes \citep{prangleRareEventApproach2018}. It would be expected that for models with larger numbers of parameters, significant improvements in efficiency could be obtained by applying one of these approaches rather than a rejection sampler based ABC approach. Doing so would not require any changes to the topological aspects of TABC, only the encompassing sampling mechanism.
As with some other applications of ABC \citep[e.g.][]{russell2019bayesian}, a potential strength of our approach is that it enables a form of {\em qualitative} inference to be performed; in our case by allowing combinations of parameters that result in model behaviour that is topologically similar to the observed data to be identified. Although we consider a specific application, to parameter inference in the Anderson-Chaplain model of angiogenesis, the TABC approach may be adapted to be widely applicable to parametric models having topological features in the data that are informative about model parameters, including in situations where a mixture of topological statistics and other complementary statistics could be used.
\section*{Acknowledgements}
HAH thanks PG Kevrekidis and N Whitaker for first presenting the challenge of inferring parameters in models of angiogenesis. HAH also thanks members of the Centre for TDA, specifically A Barbensi, H Byrne, L Marsh, and U Tillmann for many stimulating discussions and helpful comments. PDWK is grateful to L Reali for useful conversations.
\section*{Funding}
PDWK acknowledges the Medical Research Council (MC\_UU\_00002/13), and support from the National Institute for Health Research (Cambridge Biomedical Research Centre at the Cambridge University Hospitals NHS Foundation Trust). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. HAH gratefully acknowledges funding from EPSRC EP/R018472/1, EP/R005125/1 and EP/T001968/1, the Royal Society RGF$\backslash$EA$\backslash$201074 and UF150238, and Emerson Collective. Partly funded by the RESCUER project. RESCUER has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 847912.
\section*{Data Availability Statement}
All code used to produce our results is available as a Snakemake \citep{kosterSnakemakeScalableBioinformatics2012} workflow from \url{github.com/tt104/tabc_angio}. It is also stored as an archive on Zenodo with DOI 10.5281/zenodo.5562670.
\bibliographystyle{natbib}
\appendix
\section{Method Detail}
\subsection{Disentangler BERT Learning Detail} \label{appendix:method}
We initialize $\mathcal{F}$ with two pre-trained BERT models. In the unsupervised initialization stage we train only BERT$^{\alpha}$; then, in the reinforcement learning stage, we fix the parameters of BERT$^{\alpha}$ and unfreeze BERT$^{\beta}$.
\begin{align}
P_{\mathcal{F}}(A|\overline{Y},U,K) &= \prod_{i=1}^{m} P(a_{i}|\overline{Y},U,K)\\
P(a_{i}|\overline{Y},U,K) &= x_{i} = x^{\alpha}_{i} + x^{\beta}_{i} \label{eq:eqp}\\
x^{\alpha}_{i} &= {\rm sigmoid}({\boldsymbol{W^{\alpha}}{e^{\alpha}_{i}}}) \\
x^{\beta}_{i} &= {\rm sigmoid}({\boldsymbol{W^{\beta}}{e^{\beta}_{i}}}) \\
\{e^{\alpha}_1,\ldots ,e^{\alpha}_m\} &= {\rm BERT^{\alpha}}(\overline{Y})\\
\{e^{\beta}_1,\ldots ,e^{\beta}_m\} &= {\rm BERT^{\beta}}(\overline{Y},U,K)
\end{align}
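A shape-level sketch of the equations above, with random stand-ins for the BERT embeddings and the weight vectors (all values and dimensions hypothetical). Note that, as a sum of two sigmoids, each score $x_i$ lies in $(0,2)$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def replace_probabilities(E_alpha, E_beta, W_alpha, W_beta):
    """Per-token scores x_i = x_i^alpha + x_i^beta, each branch a
    sigmoid of a linear map of the corresponding BERT embedding.
    Shapes: E_* is (m, d); W_* is (d,)."""
    return sigmoid(E_alpha @ W_alpha) + sigmoid(E_beta @ W_beta)

rng = np.random.default_rng(1)
m, d = 5, 8                       # m tokens, embedding dimension d
x = replace_probabilities(rng.normal(size=(m, d)), rng.normal(size=(m, d)),
                          rng.normal(size=d), rng.normal(size=d))
```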
\section{Experiments}
\subsection{Datasets} \label{appendix:data}
Table \ref{data:datasets} reports the statistics of the Wizard of Wikipedia dataset and the Topical Chat dataset.
\begin{table*}[hbt]\scriptsize
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{c | c c c c | c c c c c}
\hline
\multirow{2}{*}{Dataset } &
\multicolumn{4}{c|}{Wizard of Wikipedia} &
\multicolumn{5}{c}{Topical Chat} \\\cline{2-10}
& Train & Valid & Test Seen & Test Unseen & Train & Valid Freq. & Valid Rare & Test Freq.& Test Rare\\
\hline
Utterances & 166787 & 17715 & 8715 & 8782 & 188378 & 11681 & 11692 & 11760 & 11770 \\
\hline
Conversations & 18430 & 1948 & 965 & 968 & 8628& 539 & 539 & 539 & 539 \\
\hline
Average Turns & 9.0 & 9.1 & 9.0 & 9.1 & 21.8& 21.6& 21.7& 21.8& 21.8 \\
\hline
\end{tabular}
\caption{Statistics for the Wizard of Wikipedia and Topical Chat datasets.
}
\label{data:datasets}
\end{table*}
\subsection{Case Study} \label{appendix:case}
Table \ref{case:wizard} and Table \ref{case:topical} present some examples from Wizard of Wikipedia and Topical Chat, respectively. In each case, we show the dialogue context, the (ground-truth) knowledge, the human response, and responses from different models in each style. We can see that responses from DTR and StyleDGPT are well grounded in the provided knowledge and have an obvious style, while responses from StyleFusion and StylizedDU in general lack both informative content and the desired style. Compared with StyleDGPT, DTR is better at leveraging the target style in the test phase and replies with more informative and more contextually coherent responses, which demonstrates the potential of the model in practice. For DTR, the knowledge-grounded response generator $\mathcal{G}_G$ first generates a factual response with mixed style-related tokens (such as ``yeah'', ``like'', ``not'', ``whether'', etc.) and content; then the template generator $\mathcal{F}$ replaces them with a tag token [*] to produce a disentangled template; finally, the rewriter $\mathcal{G}_R$ modifies the tag [*] to generate new sentences in different target styles.
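The disentangling step can be sketched as a simple thresholded replacement; the per-token style probabilities below are hypothetical stand-ins for the decoupler's outputs.

```python
def make_template(tokens, style_probs, threshold=0.5):
    """Replace tokens the decoupler scores as style-related with the
    tag token [*], yielding a disentangled template."""
    return [tok if p < threshold else "[*]"
            for tok, p in zip(tokens, style_probs)]

# Hypothetical per-token style probabilities for a generated response.
tokens = "yeah , i like the grilled cheese sandwich".split()
probs = [0.9, 0.1, 0.1, 0.8, 0.1, 0.1, 0.1, 0.1]
template = make_template(tokens, probs)
```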
\begin{table*}[hbt]\small
\renewcommand\arraystretch{1.12}
\centering
\begin{tabular}{p{48pt}p{52pt}| p{330pt} }
\hline
\multicolumn{2}{l|}{Knowledge} & a \textcolor{purple}{ grilled cheese sandwich} is made by grilling the sandwich with \textcolor{purple}{butter} or toasting it . \\
\hline
\multirow{6}{*}{Context}& & A: hot dog . i love a good hotdog ! \\
& & B: archery is a sport/skill of using a bow to propel arrows and a great sport it is . \\
& & A: do you know where archery originated from ?
\\
& & B: it's a delicious sausage sandwich . add a little mustard to it and a coke and that's a fine meal \\
& & A: absolutely ! need to get me some homemade mustard plants . \\
& & B: lol ! what other quick meals do you like ? for example grilled cheese with chips ? \\
\hline
Human & & i love butter on my grilled cheese !\\\hline
&$\mathcal{G}_G$ & \textcolor{red}{yeah}, i \textcolor{red}{like} the grilled cheese sandwich with butter. \\
& $\mathcal{F}$ & \textcolor{red}{[*]}, i \textcolor{red}{[*]} the grilled cheese sandwich with butter. \\
\multirow{4}{*}{Positive}
& DTR & \textcolor{blue}{certainly}, i \textcolor{blue}{enjoy} the \textcolor{blue}{delicious} \textcolor{purple}{butter} on \textcolor{purple}{grilled cheese sandwich }.\\ \cline{2-3}
& StyleFusion & yes, i think so too. \\
& StylizedDU & yes, i heard about the \textcolor{purple}{cheese}. \\
& StyleDGPT & i like \textcolor{purple}{toasting the sandwich}. \\
\hline
\multirow{4}{*}{Negative}& DTR & \textcolor{blue}{I don't think so }, i \textcolor{blue}{hate} the \textcolor{purple}{grilled cheese sandwich} with \textcolor{blue}{greasy} \textcolor{purple}{butter}. \\ \cline{2-3}
& StyleFusion & i hate the other quick meals.\\
& StylizedDU & i did not know that. what is it about? \\
& StyleDGPT & i don't know. I think it would be a bad idea. \\
\hline
\multirow{4}{*}{Polite}& DTR& \textcolor{blue}{i am so sorry}, i ate a little \textcolor{purple}{grilled cheese sandwich with butter}. \\ \cline{2-3}
& StyleFusion & you know i am a big fan of \textcolor{purple}{cheese sandwich}.\\
& StylizedDU & i don't know. I think it would be a bad idea.\\
& StyleDGPT & thanks for your \textcolor{purple}{grilled cheese sanwich}. \\
\hline
\end{tabular}
\caption{Case study of Wizard of Wikipedia. Style-related words discovered by the style decoupler are marked in \textcolor{red}{red}, the generated style-related words of DTR are marked in \textcolor{blue}{blue}, and the knowledge-related words of all baselines and our model are marked in \textcolor{purple}{purple}.
}
\label{case:wizard}
\end{table*}
\begin{table*}[hbt]\small
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{p{50pt}p{52pt}| p{320pt} }
\hline
\multicolumn{2}{l|}{Knowledge} & \textcolor{purple}{Former Partiots RB BenJarvus Green-Ellis} has never fumbled the football in his NFL career. \\
\hline
& & A: cold bench. Then again, I wouldn't want to be some place that cold or watching football.\\
& & B: I'd rather watch it inside where it's warm. Have you heard about the Georgia Tech-Cumberland game of 1916? \\
\multirow{8}{*}{Context}& & A: No, what happened in that game? \\
& & B: Georgia Tech defeated Cumberland but here's the thing, they defeated them by a score of 222-0! \\
& & A: That is insane. How could that even happen? \\
& & B: I don't know but it did. It's the highest scoring game in history. \\
& & A: I'm sure. I don't even watch much and I couldn't imagine that score. I wonder if most people left or were they curious to see how high it would go? \\
& & B: I guess it depended on what team you were pulling for. To me, it's surprising that the highest scoring game was in college football and not professional. \\
& & A: Maybe it is because some are not as good in college so they may be playing against someone not on their level.\\
& & B: Good point. Professional does have a player that has never fumbled the ball. \\
\hline
Human & & I've heard that. Wasn't it a Patriot player? \\\hline
&$\mathcal{G}_G$ & i am \textcolor{red}{not} sure \textcolor{red}{whether} he was benjarvus green-ellis.\\
& $\mathcal{F}$ & i am \textcolor{red}{[*]} sure \textcolor{red}{[*]} he was benjarvus green-ellis.\\
\multirow{4}{*}{Positive}
& DTR & i am \textcolor{blue}{pretty} sure, because \textcolor{blue}{i am the loyal fan of} \textcolor{purple}{benjarvus green-ellis}. \\ \cline{2-3}
& StyleFusion & i think it's funny that \textcolor{purple}{green-ellis} has never fumbled the football. \\
& StylizedDU & that's impressive. \\
& StyleDGPT & i agree, the player was \textcolor{purple}{former partiots rb benJarvus}. \\
\hline
\multirow{4}{*}{Negative}& DTR & i \textcolor{blue}{don't know whether } he was \textcolor{purple}{benjarvus green-ellis as a former partiots rb}. \\ \cline{2-3}
& StyleFusion & are you a football fan?\\
& StylizedDU & \textcolor{purple}{green-ellis} has never fumbled the football.\\
& StyleDGPT & no, i didn't know about nfl. \\
\hline
\multirow{4}{*}{Polite}& DTR& i am sure \textcolor{blue}{and please note that} he was \textcolor{purple}{benjarvus green-ellis}. \\ \cline{2-3}
& StyleFusion & i also saw the nfl this year. \\
& StylizedDU & i hope i never fumbled the football.\\
& StyleDGPT & could you please tell me who is the player? \\
\hline
\end{tabular}
\caption{Case study of Topical Chat. Style-related words discovered by the style decoupler are marked in \textcolor{red}{red}, the generated style-related words of DTR are marked in \textcolor{blue}{blue}, and the knowledge-related words of all baselines and our model are marked in \textcolor{purple}{purple}.
}
\label{case:topical}
\end{table*}
\begin{table*}[hbt]\rmfamily\scriptsize
\renewcommand\arraystretch{1.1}
\renewcommand\tabcolsep{2.5pt}
\centering
\begin{tabular}{c|l |c| c c c c | c c c c |c | c c c c |c c c c }
\hline
\multirow{3}{*}{Style } &
\multirow{3}{*}{Models} &
\multicolumn{9}{c|}{Wizard of Wikipedia} &
\multicolumn{9}{c}{Topical Chat} \\\cline{3-20}
& & {Style} &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c|}{Diversity} &
{Style } &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c}{Diversity}\\\cline{4-11} \cline{13-20}
& & Intensity & F1&{B-1} & {B-2} &{R} & {D-1} & {D-2}& {iD-1} & {iD-2} &
Intensity & F1 &{B-1} & {B-2} &{R} & {D-1} & {D-2}& {iD-1} & {iD-2} \\
\hline
\multirowcell{4}{Positive}
& DTR &0.338 & 31.3 & 32.6 & 20.7 & 32.6 &12.9 &35.5& 59.6 & 76.9 & 0.448 & 26.4 & 30.2 & 18.9 & 26.0 &3.9 &11.8 &63.8 & 76.0 \\ \cline{2-20}
& w/o $\mathcal{F}$ Initialization &0.186 &23.4 &21.2 &13.5 &21.6 &15.1 &38.4 &67.3 & 80.4 &0.287 &15.5 &17.9 &11.6 &16.3 &4.8 &14.9 &69.1 & 81.3 \\
& w/o $\mathcal{F}$ (TFIDF) &0.244 &28.7 &28.5 &19.0 &29.6 &13.5 &36.6 &61.5 & 78.6 &0.369 &22.7 &26.2 &16.0 &21.5 &4.3 &12.3 &65.5 & 76.4 \\
& w/o $\mathcal{F}$ (Classification) &0.256 &31.7 &30.2 &20.9 &31.5 &13.1 &35.3 &58.6 & 76.2 &0.375 &23.3 &26.7 &16.5 &22.6 &3.8 &12.1 &64.7 & 75.0 \\ \cline{2-20}
& w/o Rewards&0.307&31.4 &28.9 &20.2 &29.6 &12.7 &35.1 &58.1 & 76.4 &0.424 &25.1 &29.0 &18.3 &24.3 &4.2 &11.9 & 63.7& 74.2\\
& w/o Cls &0.297 &32.0 &31.5 &20.8 & 30.2&12.0 &34.7 &56.3 & 74.3 &0.396 &24.4 &30.7 &19.1 &26.2 &3.9 &11.4 &63.1 & 74.5\\
& w/o Sim &0.340 &30.4 &28.6 &20.2 & 29.4&12.3 &36.5 &61.0 & 78.3 &0.452 &26.8 &28.8 &17.9 &24.3 &4.3 &12.1 & 64.8& 75.6\\
\hline
\multirowcell{4}{Negative}
& DTR & 0.783 & 32.0 & 31.1 & 20.6 & 31.8 &14.3 &34.5 & 66.4 & 78.7 & 0.715 & 27.9 & 31.2 & 19.7 & 26.5 &4.5 &12.8 & 67.2 & 75.3 \\\cline{2-20}
& w/o $\mathcal{F}$ Initialization &0.508 &21.7 &20.9 &13.8 &23.5 &17.2 &39.7 &68.8 & 81.7 &0.425 &16.8 &16.3 &10.1 &14.2 &5.4 &14.3 &69.6 & 77.0 \\
& w/o $\mathcal{F}$ (TFIDF) &0.727 &30.1 &29.7 &18.9 & 28.7 &14.8 &34.7 &68.5 & 79.1 &0.647 &25.7 &29.3 &18.0 &25.4 &4.9 &12.4 &66.4 & 74.0\\
& w/o $\mathcal{F}$ (Classification) &0.705 &31.0 &30.6 &20.6 &31.0 &15.0 &35.1 &67.9 & 79.6 &0.633 &26.2 &30.2 &18.4 &25.7 &5.1 &13.3 &68.3 &75.8 \\\cline{2-20}
& w/o Rewards &0.768 &30.9 &30.1 &20.2 &30.6 &14.9 &35.4 &66.8 &79.9 &0.698 &27.1 &30.4 &18.1 &25.3 &5.2 &12.6 &67.0 & 75.6 \\
& w/o Cls &0.759 &32.1 &31.2 &21.4 &32.1 &13.8 &34.6 &64.3 &78.3 &0.687 &28.0 &31.5 &20.0 &26.9 &4.1 &11.9 &66.1 & 73.2\\
& w/o Sim &0.786 &30.4 &29.9 &19.7 &30.5 &15.2 &35.9 &68.9 & 80.8 & 0.720&26.1 &30.6 &19.3 &26.5 &5.5 &12.7 &68.5 & 77.3\\
\hline
\multirowcell{4}{Polite}
& DTR & 0.287 & 30.6 & 29.3 & 20.5 & 31.6 &12.8 &37.4& 55.4 & 68.1 & 0.403 & 27.6 & 30.5 & 19.8 & 29.1 &4.2 &14.6 & 47.2 & 62.5 \\ \cline{2-20}
& w/o $\mathcal{F}$ Initialization &0.156 &22.9 &20.5 &11.7 &19.6 &14.9 &40.6 &59.8 & 72.3 &0.282 &16.1 &18.3 &12.7 &17.5 &5.3 &16.9 &55.8 & 70.1 \\
& w/o $\mathcal{F}$ (TFIDF) &0.214 &27.0 &27.8 &18.8 &29.8 &13.0 &38.1 &56.3 & 69.3&0.341 &23.1 &28.0 &18.2 &26.9 &4.9 &15.5 &48.7 & 63.5 \\
& w/o $\mathcal{F}$ (Classification) &0.258 &30.9 &30.2 &21.3 &32.4&12.2 &36.6 &52.1 & 67.0 &0.375 &25.9 &29.4 &18.6 &27.1 &4.0 &14.8 &47.6 &62.8 \\ \cline{2-20}
& w/o Rewards&0.266 &31.0 &27.8 &20.2 &31.7 &12.6 &37.6 & 55.9&68.5 &0.384 &26.1 &29.1&19.1 &27.1 &4.3 &15.1 & 47.3& 63.6 \\
& w/o Cls &0.265 &32.9 &30.6 &21.3 &32.3 &11.9 &37.0 &53.6 &67.6 &0.379 &27.8 &31.1 &20.1 &29.3 &3.8 &14.3 &45.5 & 62.4\\
& w/o Sim &0.292 &30.7 &27.5 &20.0 &31.2 &13.1 &37.2 &56.3 &69.7 & 0.406&26.9 &28.9 &18.7 &27.8 &4.6 &15.2 & 48.0& 65.1 \\
\hline
\end{tabular}
\caption{Ablation evaluation results.
}
\label{abl:ablation}
\end{table*}
\subsection{Ablation evaluation} \label{appendix:ablation}
As shown in Table \ref{abl:ablation}, we list all ablation evaluation results for Positive, Negative, and Polite on the Wizard of Wikipedia and Topical Chat datasets.
\subsection{Manual evaluation} \label{appendix:manual}
As shown in Table \ref{res:human_all}, we list all manual evaluation results for Positive, Negative, and Polite on the Wizard of Wikipedia and Topical Chat datasets.
\begin{table*}[hbt]\scriptsize
\setlength{\tabcolsep}{0.9mm}
\renewcommand\arraystretch{1.1}
\centering
\begin{tabular}{c | c | c c c | c c c | c c c | c c c | c}
\hline
\multirow{3}{*}{Style} &
\multirow{3}{*}{Models} &
\multicolumn{3}{c|}{Style} &
\multicolumn{3}{c|}{Knowledge} &
\multicolumn{3}{c|}{Context} &
\multicolumn{3}{c|}{Fluency}&
{Kappa}\\
& & \multicolumn{3}{c|}{Consistency} &
\multicolumn{3}{c|}{Preservation} &
\multicolumn{3}{c|}{Coherence} &
\multicolumn{3}{c|}{} &
{}\\
& & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) \\
\hline
\multicolumn{15}{c}{Wizard of Wikipedia} \\
\hline
\multirowcell{3}{Positive}
& DTR vs. StyleFusion &53.6 &17.1 &29.3 &66.3 &10.5 & 23.2 &54.1 &10.4 & 35.5 &53.5 &21.8 & 24.7 &0.72 \\
& DTR vs. StylizedDU &57.4 &24.9 &17.7 &59.1 &24.1 & 16.8 &46.0 &23.2 & 30.8 &50.8 &23.1 & 26.1 &0.69 \\
& DTR vs. StyleDGPT &56.8 &22.2 &21.0 &58.0 &22.8 & 19.2 &52.4 &19.5 & 28.1 &54.9 &22.3 &22.8 &0.67 \\
\hline
\multirowcell{3}{Negative}
& DTR vs. StyleFusion &59.7 &22.9 &17.4 &65.7 &15.9 & 18.4 &58.0 &16.7 & 25.3 &55.9 &18.2 &25.9 &0.68 \\
& DTR vs. StylizedDU &57.2 &24.0 &18.8 &57.9 &23.0 & 19.1 &50.1 & 20.9& 29.0 &46.5 &24.8 & 28.7 &0.66 \\
& DTR vs. StyleDGPT &54.8 &18.4 &26.8 &58.2 &17.9 & 23.9 &55.0 & 19.6& 25.4 &51.0 &28.6 & 20.4 &0.65 \\
\hline
\multirowcell{3}{Polite}
& DTR vs. StyleFusion &60.9 &15.9 &23.2 &64.3 &7.3 & 28.4 &55.3 &16.1 & 28.6 &47.1 &25.2 & 27.7 &0.70 \\
& DTR vs. StylizedDU &58.7 &22.1 &19.2 &58.6 &20.4 & 21.0 &47.8 &21.2 & 31.0 &45.6 &31.8 & 22.6 &0.66 \\
& DTR vs. StyleDGPT &58.0 &21.6 & 20.4 &60.2 &21.1 & 18.7 &56.7 &20.7 & 22.6 &50.5 &29.2 & 20.3 &0.68 \\
\hline
\multicolumn{15}{c}{Topical Chat} \\
\hline
\multirowcell{3}{Positive}
& DTR vs. StyleFusion &54.8 &16.5 & 28.7 &53.0 &13.6 & 33.4 &56.0 &17.2 & 26.8 &53.5 &17.4 & 29.1 &0.69 \\
& DTR vs. StylizedDU &45.9 &19.4 & 34.7 &49.1 &21.3 & 29.6 &52.7 &21.4 & 25.9 &46.7 &20.2 & 33.1 &0.63 \\
& DTR vs. StyleDGPT &48.2 &23.3 & 28.5 &57.3 &20.5 & 22.2 &48.8 &24.0 & 27.2 &53.6 &23.9 & 22.5 &0.65 \\
\hline
\multirowcell{3}{Negative}
& DTR vs. StyleFusion &56.8 &8.5 &34.2 &62.5 &10.6 & 26.9 &53.7 &10.2 & 36.1 &55.2 &16.8 & 28.0 &0.73 \\
& DTR vs. StylizedDU &49.2 &16.8 & 34.0 &55.8 &24.9 & 19.3 &50.9 &22.4 & 26.7 &38.7 &25.1 & 36.2 &0.66 \\
& DTR vs. StyleDGPT &56.7 &21.6 & 21.7 &51.6 &27.4 & 21.0 &54.0 &22.6 & 23.4 &52.6 &22.9 & 24.5 &0.64 \\
\hline
\multirowcell{3}{Polite}
& DTR vs. StyleFusion &58.2 &12.6 & 29.2 &56.5 &8.9 & 34.6 &58.3 &11.6 & 30.1 &50.7 &23.1 & 26.2 &0.68 \\
& DTR vs. StylizedDU &54.6 &17.1 & 28.3 &48.0 &23.8 & 28.2 &48.2 &25.1 & 26.7 &46.0 &28.3 & 25.7 &0.70 \\
& DTR vs. StyleDGPT &49.8 &19.3 & 30.9 &46.5 &28.1 & 25.4 &45.6 &27.3 & 27.1 &53.5 &21.1 & 25.4 &0.65 \\
\hline
\end{tabular}
\caption{Manual evaluation results. W, L, and T refer to Win, Lose, and Tie, respectively. The ratios are calculated by combining the labels from the three annotators.}
\label{res:human_all}
\end{table*}
\subsection{Replace Rate $P_r$} \label{appendix:replace}
As shown in Figure \ref{fig:mask_topic}, we present the F1 and inner Distinct scores under different replace rates on Topical Chat.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=0.85, trim=15 6 0 35,clip]{mask_topic.png}
\caption{F1 and inner Distinct under different replace rates on Topical Chat.}
\label{fig:mask_topic}
\end{figure}
\subsection{Statistics of frequent style words} \label{appendix:stawords}
As shown in Figure \ref{fig:keywords}, we present a visualization of the style tokens found in the various style corpora by the initialized style disentangler.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=1, trim=260 340 415 165,clip]{keywords.pdf}
\caption{Statistics of frequent new generated words in Positive, Negative, and Polite.}
\label{fig:keywords}
\end{figure}
\subsection{F1 Drop} \label{appendix:f1}
As shown in Figures \ref{fig:dropnegative} and \ref{fig:droppolitee}, we present the F1 of DTR, StyleDGPT, and the SOTA KDG models under different task modes for negative sentiment and polite style.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=0.85, trim=15 6 0 35,clip]{f1_negative_wow_topic.png}
\caption{F1 of DTR, StyleDGPT, and SOTA KDG models under different task modes for negative sentiment. Gold-K means that the ground-truth knowledge is used as input, and Predicted-K means that a knowledge selection model predicts the top-1 knowledge sentence as input.}
\label{fig:dropnegative}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=0.85, trim=15 6 0 35,clip]{f1_polite_wow_topic.png}
\caption{F1 of DTR, StyleDGPT, and SOTA KDG models under different task modes for polite style. Gold-K means that the ground-truth knowledge is used as input, and Predicted-K means that a knowledge selection model predicts the top-1 knowledge sentence as input.}
\label{fig:droppolitee}
\end{figure}
\section{Conclusion}
We explore stylized knowledge-grounded dialogue generation by proposing to bridge knowledge-grounded response generation and stylized rewriting via a shared disentangled template. Evaluation results on benchmarks of the task indicate that our model achieves state-of-the-art performance and exhibits a superior generation ability across different knowledge domains and styles.
\section{Task Definition}
For the SKDG task, our model is trained on a dialogue dataset $\mathcal{D}_{c}= \{(K_{i}, U_{i}, Y_{i}) \}^{N}_{i=1}$ and a style corpus $\mathcal{D}_{s} = \{ T_{i}\}^{M}_{i=1} $, where for each $(K_{i}, U_{i}, Y_{i}) \in \mathcal{D}_{c}$, $U_{i}$ is a dialogue context, $K_{i}$ is an external document containing knowledge relevant to $U_{i}$, and $Y_{i}$ is a response to $U_{i}$; and for each $T_{i} \in \mathcal{D}_{s}$, $T_{i}$ is a piece of text in the target style $\mathcal{S}$. We do not assume the existence of triples $\{(K, U, Y^{\prime})\}$ with $Y^{\prime}$ expressed in the style or sentiment $\mathcal{S}$, e.g., $\mathcal{S} \in \{``polite", ``positive", ``negative"\}$. Our goal is to learn a generation model $P(Y |K,U,\mathcal{S})$ from $\mathcal{D}_{c}$ and $\mathcal{D}_{s}$ such that, given a document $K$ and a context $U$, one can generate a response $Y$ in the desired style $\mathcal{S}$ that also coheres with the context and preserves the knowledge.
\section{Experiments}
We conduct experiments on Wizard of Wikipedia (Wizard) and Topical Chat with positive and negative sentiments, and polite style.
\subsection{Datasets}
\noindent\textbf{KDG Corpus}
Wizard consists of 1365 topics; each conversation happens between a wizard, who has access to Wikipedia paragraphs, and an apprentice, who talks to the wizard. Topical Chat uses wiki articles, Washington Post articles, and Reddit fun facts as the knowledge source, and the participants play symmetric and asymmetric roles according to the knowledge. Both Wizard and Topical Chat are split into training, validation, and test sets. We compare our method with the baselines on Wizard Test Seen and Topical Chat Test Freq. More details are described in Appendix \ref{appendix:data}.
\noindent\textbf{Style Corpus} We use the Amazon dataset published by \citet{li2018amazom} and the Politeness dataset published by \citet{Aman2020polite} for style transfer. Amazon consists of product reviews from Amazon for flipping sentiment; it contains 27800 positive and 27700 negative sentences. For Politeness, we use the P9 bucket as the polite dataset, which consists of 27000 polite sentences.
\begin{table*}\scriptsize
\renewcommand\arraystretch{1.2}
\renewcommand\tabcolsep{1.6pt}
\centering
\begin{tabular}{c|l |c| c c c c | c c c c| c|c | c c c c |c c c c|c }
\hline
\multirow{3}{*}{Style } &
\multirow{3}{*}{Models} &
\multicolumn{10}{c|}{Wizard of Wikipedia} &
\multicolumn{10}{c}{Topical Chat} \\\cline{3-22}
& & {Style} &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c|}{Diversity} &
{Average } &
{Style } &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c|}{Diversity} &{Average } \\\cline{4-11} \cline{14-21}
& & Intensity & F1&{B-1} & {B-2} &{R} & {D-1} & {D-2} & {iD-1} & {iD-2} & {Length} &
Intensity & F1 &{B-1} & {B-2} &{R} & {D-1} & {D-2} & {iD-1} & {iD-2} & {Length}\\
\hline
\multirowcell{4}{Positive}
&StyleFusion &0.275 &11.8 &12.3 &6.5 &10.1 &3.3 &8.7 & 54.1 & 67.3 &11.2 &0.263 &12.6 &12.9 &6.8 &11.2 &0.8 &2.6 & 42.1 & 60.7 & 9.7\\
&StylisticDLV &0.336 &10.6 &11.5 &6.1 &9.3 &3.9 &9.2 & 56.7 & 69.5 &10.6 &0.381 &12.2 &12.5 &6.7 &10.6 &1.3 &3.3 & 44.7 & 63.2 & 10.3\\
&StylizedDU &0.342 &15.7 &17.4 &9.6 &18.5 & {14.1} & {34.8}& 50.3 & 65.5 &14.5 &0.417 &16.2 &15.8 &10.1 &15.4 &3.6 &10.5 & 46.2 & 65.8&12.8 \\
&StyleDGPT &\textbf{0.354} &21.7 &24.3 &17.1 &24.8 &12.5 &33.2 & \textbf{61.4} & 73.2 & 10.8 &0.392 &20.4 &18.6 &14.6 &18.7 &2.8 &7.8 & 58.6 & 74.9&11.3 \\
& DTR &0.338 & \textbf{31.3} & \textbf{32.6} & \textbf{20.7} & \textbf{32.6} &\textbf{12.9} &\textbf{35.5}& 59.6 & \textbf{76.9} &\textbf{20.3} & \textbf{0.448} & \textbf{26.4} & \textbf{30.2} & \textbf{18.9} & \textbf{26.0} &\textbf{3.9} &\textbf{11.8} & \textbf{63.8} & \textbf{76.0}& \textbf{19.5} \\
\hline
\multirowcell{4}{Negative}
&StyleFusion &0.327 &12.5 &11.7 &7.4 &9.6 &3.1 &8.8 & 53.5 & 70.3& 10.4 &0.293 &10.8 &11.4 &6.5 &10.6 &1.0 &2.4 & 55.7 & 63.5& 10.9\\
&StylisticDLV &0.665 &11.8 &11.1 &6.9 &9.0 &3.4 &9.1 & 54.7 & 70.8 &11.3 &0.655 &10.4 &11.2 &6.1 &10.5 &1.2 &2.7 & 58.0 & 64.9 & 11.2\\
&StylizedDU&0.640 &16.1 &16.7 &9.4 &15.9 &13.6 &31.3 & 56.1 & 69.6&13.8 &0.642 &15.7 &15.5 &11.3 &15.8 &3.2 &8.4 & 58.0 & 65.4& 12.5 \\
&StyleDGPT &0.713 &22.5 &24.9 &17.5 &25.0 &11.8 &32.0 & 62.9 & 74.2&12.4 &0.686 &21.3 &22.1 &16.5 &19.2 &2.3 &6.6& 64.6 & 70.1 & 10.7 \\
& DTR & \textbf{0.783} & \textbf{32.0} & \textbf{31.1} & \textbf{20.6} & \textbf{31.8} &\textbf{14.3} &\textbf{34.5} & \textbf{66.4} & \textbf{78.7}&\textbf{18.7} & \textbf{0.715} & \textbf{27.9} & \textbf{31.2} & \textbf{19.7} & \textbf{26.5} &\textbf{4.5} &\textbf{12.8} & \textbf{67.2} & \textbf{75.3}& \textbf{21.2} \\
\hline
\multirowcell{4}{Polite}
&StyleFusion &0.211 &11.3 &11.6 &6.8 &10.7 &1.9 &5.5 & 45.0 & 53.4& 12.6 &0.243 &12.5 &12.8 &7.3 &12.2 &0.8 &2.3 & 40.4 & 57.1& 10.4 \\
&StylisticDLV &0.264 &10.7 &10.8 &6.2 &10.1 &2.1 &6.0 & 47.3 & 55.9 &12.1 &0.375 &13.0 &13.4 &7.5 &12.6 &0.9 &2.8 & 43.6 & 59.3 & 9.8\\
&StylizedDU &0.270 &14.9 &16.2 &10.2 &17.4 &11.5 &35.1 & 43.3 & 63.2& 14.7 &0.382 &16.4 &15.3 &10.9 &14.7 &3.8 &12.4 & 42.8 & 60.9& 13.9 \\
&StyleDGPT &0.262 &24.8 &22.2 &15.7 &23.8 &12.2 &33.1 & 51.9 & 65.7& 13.3 &0.316 &20.8 &19.4 &15.3 &20.8 &3.0 &9.2 & 45.7 & 58.3& 12.8 \\
& DTR & \textbf{0.287} & \textbf{30.6} & \textbf{29.3} & \textbf{20.5} & \textbf{31.6} &\textbf{12.8} &\textbf{37.4}& \textbf{55.4} & \textbf{68.1} & \textbf{20.3} & \textbf{0.403} & \textbf{27.6} & \textbf{30.5} & \textbf{19.8} & \textbf{29.1} &\textbf{4.2} &\textbf{14.6} & \textbf{47.2} & \textbf{62.5}& \textbf{20.5} \\
\hline
\end{tabular}
\caption{Automatic evaluation results. Numbers in bold mean that the improvement over the best baseline is statistically significant (t-test with $p$-value $<$ 0.01).}
\label{res:auto}
\end{table*}
\begin{table*}[hbt]\scriptsize
\setlength{\tabcolsep}{0.3mm}
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{l | c | c c c | c c c | c c c | c c c | c | l | c | c c c | c}
\hline
\multicolumn{15}{c|}{Manual evaluation results}& \multicolumn{6}{c}{Attractiveness evaluation results}\\
\hline
\multirow{3}{*}{Style} &
\multirow{3}{*}{Models} &
\multicolumn{3}{c|}{Style} &
\multicolumn{3}{c|}{Knowledge} &
\multicolumn{3}{c|}{Context} &
\multicolumn{3}{c|}{}&
{}&
\multirow{3}{*}{Style} &
\multirow{3}{*}{Models} &
\multicolumn{3}{c|}{} &
{}\\
& & \multicolumn{3}{c|}{Consistency} &
\multicolumn{3}{c|}{Preservation} &
\multicolumn{3}{c|}{Coherence} &
\multicolumn{3}{c|}{Fluency} &
{Kappa}& & & \multicolumn{3}{c|}{Attractiveness } &Kappa\\
& & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) & W(\%) & L(\%) & T(\%) & & & & W(\%) & L(\%) & T(\%) & \\
\hline
\multicolumn{15}{c|}{Wizard of Wikipedia} & \multicolumn{6}{c}{Wizard of Wikipedia}\\
\hline
{Positive}
& DTR vs. StyleDGPT &56.8 &22.2 &21.0 &58.0 &22.8 & 19.2 &52.4 &19.5 & 28.1 &54.9 &22.3 &22.8 &0.67 &{Positive} & DTR vs. DTR-s &60.4 &18.1 &21.5 & 0.68 \\
\hline
{Negative}
& DTR vs. StyleDGPT &54.8 &18.4 &26.8 &58.2 &17.9 & 23.9 &55.0 & 19.6& 25.4 &51.0 &28.6 & 20.4 &0.65 &{Negative} & DTR vs. DTR-s &13.7 &58.3 & 28.0 & 0.65\\\hline
{Polite}
& DTR vs. StyleDGPT &58.0 &21.6 & 20.4 &60.2 &21.1 & 18.7 &56.7 &20.7 & 22.6 &50.5 &29.2 & 20.3 &0.68&{Polite} & DTR vs. DTR-s &56.2 &12.3 & 31.5 & 0.64 \\
\hline
\multicolumn{15}{c|}{Topical Chat} & \multicolumn{6}{c}{Topical Chat}\\
\hline
{Positive} & DTR vs. StyleDGPT &48.2 &23.3 & 28.5 &57.3 &20.5 & 22.2 &48.8 &24.0 & 27.2 &53.6 &23.9 & 22.5 &0.65& {Positive} & DTR vs. DTR-s &54.6 &23.1 & 22.3 & 0.65\\
\hline
{Negative}
& DTR vs. StyleDGPT &56.7 &21.6 & 21.7 &51.6 &27.4 & 21.0 &54.0 &22.6 & 23.4 &52.6 &22.9 & 24.5 &0.64 &{Negative} & DTR vs. DTR-s &26.4 &54.5& 19.1& 0.65\\\hline
{Polite}
& DTR vs. StyleDGPT &49.8 &19.3 & 30.9 &46.5 &28.1 & 25.4 &45.6 &27.3 & 27.1 &53.5 &21.1 & 25.4 &0.65 &{Polite} & DTR vs. DTR-s &49.6 &21.7 & 28.7 & 0.67\\
\hline
\end{tabular}
\caption{Manual evaluation results. W, L, and T refer to Win, Lose, and Tie. All of the Kappa scores are greater than 0.6, indicating good agreement among the annotators. Results for the other models are shown in Appendix \ref{appendix:manual}.}
\label{res:human}
\end{table*}
\subsection{Evaluation Metrics}\label{sunsec:metircs}
Following \citet{zheng2020stylized} and \citet{yang2020styledgpt}, we use automatic metrics to measure DTR on three aspects: \textbf{Style Intensity}, \textbf{Relevance}, and \textbf{Diversity}. For style intensity, we use the prediction of the GPT-2 classifier mentioned in Section \ref{sunsec:details}. Relevance is measured with \textbf{F1}, \textbf{BLEU} \citep{papineni-etal-2002-bleu}, and \textbf{Rouge} \citep{lin-2004-rouge}. We use \textbf{Distinct} \citep{li-etal-2016-diversity} to measure the diversity of different models. To measure the diversity between different styles, we propose \textbf{inner Distinct}: given a context and knowledge, we compute Distinct over the three responses generated with the three styles.
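The inner Distinct metric can be sketched as follows (a minimal version; the tokenization and toy responses are illustrative, not taken from the benchmarks). Distinct-$n$ is the ratio of unique to total $n$-grams, and inner Distinct pools the responses generated for the same context and knowledge under the different target styles:

```python
from typing import List, Tuple

def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinct_n(responses: List[List[str]], n: int) -> float:
    """Distinct-n: unique n-grams divided by total n-grams over a response set."""
    pooled = [g for r in responses for g in ngrams(r, n)]
    return len(set(pooled)) / len(pooled) if pooled else 0.0

def inner_distinct_n(style_responses: List[List[str]], n: int) -> float:
    """Inner Distinct-n: Distinct-n across the responses generated for the
    SAME context/knowledge under the different target styles."""
    return distinct_n(style_responses, n)

# hypothetical responses to one context in three styles
resps = [
    "i love the movie it is great".split(),
    "i hate the movie it is awful".split(),
    "i think the movie it is fine thank you".split(),
]
```

A low inner Distinct would mean the three stylized responses are nearly identical, i.e., the model barely differentiates the styles.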
For human evaluation, we randomly sample 500 examples from the test set and recruit 3 well-educated annotators. Each annotator is presented with two responses from different models, randomly shuffled to hide their sources. The annotators then judge which response is better on four aspects: (1) \textbf{Style Consistency}: which response better exhibits the desired style; (2) \textbf{Knowledge Preservation}: which response is more relevant to the knowledge document; (3) \textbf{Context Coherence}: which response is more coherent with the dialogue context; (4) \textbf{Fluency}: which response is more fluent and free from grammar errors.
\subsection{Implementation Details}\label{sunsec:details}
We use pre-trained MASS \citep{song2019mass} to initialize $\mathcal{G}_G$ and $\mathcal{G}_R$. We adopt the Adam optimizer with an initial learning rate of $5 \times 10^{-4}$, and the batch size is 4096 tokens on an NVIDIA 1080 Ti GPU. Since none of the baselines has a knowledge selection module, we choose the ground-truth knowledge as input for Wizard and, for Topical Chat, the top-1 knowledge sentence according to BLEU-1 with the corresponding response. We use beam search (beam size 5) to decode the response. We initialize $\mathcal{F}$ with pre-trained BERT; the replace rate $P_r$ is 25, and $Z$ in Section \ref{sec:policy-learning} is 10. We use GloVe \citep{pennington2014glove} 100d embeddings and cosine similarity as $\operatorname{Dis}(\cdot, \cdot)$ to calculate the distance $d$. ${\mu}$ in Eq.~\eqref{eq:pairwise-loss} is 0.2. To get the style intensity reward, we follow \citet{yang2020styledgpt} and train binary GPT-2 \citep{radford2019language} classifiers. Early stopping on the validation set is adopted as a regularization strategy. All the above hyperparameters are determined by grid search.
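The similarity function $\operatorname{Dis}(\cdot,\cdot)$ above is plain cosine similarity over GloVe vectors; a minimal sketch, with 3-d toy vectors standing in for the 100d GloVe embeddings (the toy words and vectors are purely illustrative):

```python
import numpy as np

def dis(u: np.ndarray, v: np.ndarray) -> float:
    """Dis(., .): cosine similarity between two word embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# 3-d toy vectors standing in for 100-d GloVe embeddings (illustrative only)
good = np.array([1.0, 0.2, 0.0])
great = np.array([0.9, 0.3, 0.1])
table = np.array([0.0, 0.1, 1.0])
assert dis(good, great) > dis(good, table)  # near-synonyms score higher
```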
\begin{table*}[hbt]\rmfamily\scriptsize
\renewcommand\arraystretch{1.2}
\renewcommand\tabcolsep{2.5pt}
\centering
\begin{tabular}{l |c| c c c c | c c c c |c | c c c c |c c c c }
\hline
\multirow{3}{*}{Models} &
\multicolumn{9}{c|}{Wizard of Wikipedia} &
\multicolumn{9}{c}{Topical Chat} \\\cline{2-19}
& {Style} &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c|}{Diversity} &
{Style } &
\multicolumn{4}{c|}{Relevance} &
\multicolumn{4}{c}{Diversity}\\\cline{3-10} \cline{12-19}
& Intensity & F1&{B-1} & {B-2} &{R} & {D-1} & {D-2}& {iD-1} & {iD-2} &
Intensity & F1 &{B-1} & {B-2} &{R} & {D-1} & {D-2}& {iD-1} & {iD-2} \\
\hline
DTR &0.338 & 31.3 & 32.6 & 20.7 & 32.6 &12.9 &35.5& 59.6 & 76.9 & 0.448 & 26.4 & 30.2 & 18.9 & 26.0 &3.9 &11.8 & 63.8 & 76.0 \\ \cline{1-19}
w/o $\mathcal{F}$ WSL &0.186 &23.4 &21.2 &13.5 &21.6 &15.1 &38.4 &67.3 & 80.4 &0.287 &15.5 &17.9 &11.6 &16.3 &4.8 &14.9 &69.1 & 81.3 \\
w/o $\mathcal{F}$ (TFIDF) &0.244 &28.7 &28.5 &19.0 &29.6 &13.5 &36.6 &61.5 & 78.6 &0.369 &22.7 &26.2 &16.0 &21.5 &4.3 &12.3 &65.5 & 76.4 \\
w/o $\mathcal{F}$ (Classification) &0.256 &31.7 &30.2 &20.9 &31.5 &13.1 &35.3 &58.6 & 76.2 &0.375 &23.3 &26.7 &16.5 &22.6 &3.8 &12.1 &64.7 & 75.0 \\ \cline{1-19}
w/o Rewards&0.307&31.4 &28.9 &20.2 &29.6 &12.7 &35.1 &58.1 & 76.4 &0.424 &25.1 &29.0 &18.3 &24.3 &4.2 &11.9 & 63.7& 74.2\\
w/o Cls &0.297 &32.0 &31.5 &20.8 & 30.2&12.0 &34.7 &56.3 & 74.3 &0.396 &24.4 &30.7 &19.1 &26.2 &3.9 &11.4 &63.1 & 74.5\\
w/o Sim &0.340 &30.4 &28.6 &20.2 & 29.4&12.3 &36.5 &61.0 & 78.3 &0.452 &26.8 &28.8 &17.9 &24.3 &4.3 &12.1 & 64.8& 75.6\\
\hline
\end{tabular}
\caption{Ablation evaluation results for positive sentiment. The other styles are shown in Appendix \ref{appendix:ablation}.}
\label{abl:ablation}
\end{table*}
\subsection{Baselines}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth, scale=0.85, trim=15 6 0 37,clip]{f1_positive_wow_topic.png}
\caption{F1 of DTR and StyleDGPT (positive), and SOTA KDG models in different evaluation settings. }
\label{fig:drop}
\end{figure}
The following models are selected as baselines:
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=0.85, trim=15 6 0 37,clip]{mask.png}
\caption{F1 and inner Distinct under different replace rates for the three styles on the Wizard test set.}
\label{fig:mask}
\end{figure}
\begin{itemize}
\item\textbf{StyleFusion} \citep{gao2019stylefusion} bridges conversation modeling and nonparallel style transfer by sharing a latent space. We use the code {\url{https://github.com/golsun/StyleFusion}}.
\item\textbf{StylisticDLV} \citep{zhu-etal-2021-neural} disentangles the content and style in latent space by diluting information in style representations. We use the code {\url{https://github.com/golsun/StyleFusion}}.
\item\textbf{StylizedDU} \citep{zheng2020stylized} leverages back-translation technique to generate pseudo stylized context-response pairs. We use the code {\url{https://github.com/silverriver/Stylized_Dialog}}.
\item\textbf{StyleDGPT} \citep{yang2020styledgpt} exploits the pre-trained language models on the stylized response generation task. We use the code {\url{https://github.com/TobeyYang/StyleDGPT}}.
\end{itemize}
All the baselines are jointly learned with datasets $\mathcal{D}_{c}$ and $\mathcal{D}_{s}$, and take the concatenation of knowledge and context as input.
\subsection{Evaluation Results}
As shown in Table \ref{res:auto}, our DTR model achieves competitive performance in style transfer and significantly outperforms the baselines on all the relevance metrics. This indicates that DTR can produce high-quality responses that are coherent with the context, related to the knowledge, and consistent with the target style simultaneously. We also observe that all the SDG methods frequently lose the knowledge part (Appendix \ref{appendix:case}). DTR significantly outperforms StyleDGPT on relevance, indicating that leveraging the style intensity score to optimize the decoupling of the template is superior to directly optimizing response generation, which degrades language modeling. We further observe that the core back-translation component of StylizedDU fails to infer pseudo-knowledge from a response (in general, the knowledge carries much more information than the response). Table \ref{res:human} reports the human evaluation results: DTR significantly outperforms StyleDGPT on all aspects. DTR is also superior to all the baselines, as illustrated in the \textbf{Case Study} section in Appendix \ref{appendix:case}.
\subsection{Ablation Study}
Firstly, to verify the contributions of the proposed disentangler and weakly supervised learning method, we consider the following variants:
(1) \textbf{w/o $\mathcal{F}$ WSL}: train DTR without the \textbf{W}eakly \textbf{S}upervised \textbf{L}earning of $\mathcal{F}$ in Section \ref{sec:disentangler_initialization}. (2) \textbf{w/o $\mathcal{F}$ (Classification)}: replace the pairwise ranking loss of $\mathcal{F}$ with a binary classification loss; we define tokens with $d=0$ (Section \ref{sec:disentangler_initialization}) as style words (label $=1$) and the others as non-style words (label $=0$). (3) \textbf{w/o $\mathcal{F}$ (TFIDF)}: replace $\mathcal{F}$ with a TFIDF-based rule that replaces with [*] the fragments of a sentence with the lowest $P_r\%$ TFIDF scores, excluding stop words. Table \ref{abl:ablation} shows the results of the three variants. We conclude that (1) the weakly supervised learning of $\mathcal{F}$ is crucial to training DTR, since even the variant with the simple TFIDF rule significantly outperforms the one without any initialization; and (2) the ranking loss of $\mathcal{F}$ plays a key role in the success of style transfer: there is a dramatic drop in the style intensity of \textbf{w/o $\mathcal{F}$ (Classification)}. According to our observation, this variant overfits the style corpus, leading to a low success rate.
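For reference, the \textbf{w/o $\mathcal{F}$ (TFIDF)} rule can be sketched as follows (a minimal version; the stop-word list and toy corpus are illustrative, and any standard TFIDF weighting would do):

```python
import math
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "it"}

def tfidf_template(sentence: str, corpus: list, p_r: float = 25) -> str:
    """Replace the lowest-P_r% TFIDF-scored non-stop-word tokens with [*]."""
    tokens = sentence.split()
    df = Counter()                      # document frequency over the corpus
    for doc in corpus:
        df.update(set(doc.split()))
    tf = Counter(tokens)
    scores = {
        w: (tf[w] / len(tokens)) * math.log((len(corpus) + 1) / (df[w] + 1))
        for w in set(tokens) if w not in STOP_WORDS
    }
    k = max(1, round(len(scores) * p_r / 100))
    lowest = set(sorted(scores, key=scores.get)[:k])
    return " ".join("[*]" if w in lowest else w for w in tokens)
```

Frequent (low-TFIDF) words are treated as candidate style carriers and masked, while rare content words survive into the template.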
Secondly, to investigate the RL rewards in Eq.~\eqref{eq:pairwise-rl}, we consider the following variants: (1) \textbf{w/o Rewards}: remove both the similarity and the style intensity reward; (2) \textbf{w/o Sim}: remove the similarity reward; (3) \textbf{w/o Cls}: remove the style intensity reward. As shown in Table \ref{abl:ablation}, removing either of the two rewards causes a performance drop, indicating that both the style intensity and the similarity reward enhance DTR. We also add \textbf{Sim} to StylizedDU; the improvement is only +2.1 on F1, so \textbf{Sim} alone can hardly bridge the huge gap. The results for Negative and Polite are similar and are presented in Appendix \ref{appendix:ablation}.
\subsection{Discussions}
\noindent\textbf{Impact of stylized knowledge-grounded generation.} We annotate the ``Attractiveness'' of DTR and DTR-s (DTR without style transfer) following the same process as in Section \ref{sunsec:metircs}: the annotators are given two responses with the same context and knowledge from the two models and determine which response is more attractive and engaging in a holistic way. Table \ref{res:human} reports the evaluation results. We can see that introducing a positive sentiment or a polite style enhances the engagement of the KDG model, while establishing a negative sentiment harms the attractiveness.
\noindent\textbf{Impact of style transfer on the conversational ability of SDG models.} We are curious to what extent the conversational ability of SDG models is damaged after style transfer. We examine DTR and StyleDGPT in two settings: (1) Gold-K: the given knowledge is the ground truth; (2) Predicted-K: the given knowledge is selected by a knowledge selection model \citep{zhao2020knowledgpt}. As shown in Figure \ref{fig:drop}, after style transfer on Wizard, the F1 of DTR drops 2.28 and 2.1 in Gold-K and Predicted-K, while the F1 of StyleDGPT drops 11.16 and 8.16, respectively. On Topical Chat, the F1 of DTR drops 1.77 and 1.51 in Gold-K and Predicted-K, while the F1 of StyleDGPT drops 7.1 and 6.16, respectively. Compared with StyleDGPT, DTR dramatically reduces the damage to conversational ability while achieving a high success rate of style transfer. Thanks to this superior style transfer mechanism, DTR achieves performance comparable to the state-of-the-art KDG models (KnowledGPT \citep{zhao2020knowledgpt} on Wizard and UNILM \citep{li2020zero} on Topical Chat) in the standard KDG evaluation setting even after style transfer. The results for Negative and Polite are similar and are presented in Appendix \ref{appendix:f1}.
\noindent\textbf{Impact of the replace rate $P_r$.} As shown in Figure \ref{fig:mask}, $P_r = 25$ achieves the best balance between relevance and diversity. A smaller $P_r$ would retain a large number of the original style fragments in the template, leading to tiny differences between styles. On the contrary, a larger $P_r$ would delete content fragments, which are harder for the rewriter to restore, although the responses in different styles become more diverse. Topical Chat follows the same regularity, as shown in Appendix \ref{appendix:replace}.
\section{Introduction}
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth, scale=1, trim=335 140 200 195,clip]{case_1.pdf}
\caption{The KDG models only produce a pedantic response, which lacks emotion and attraction compared with the responses with polite style, positive and negative sentiments.}
\label{fig:intro_example}
\end{figure}
A good conversational agent needs the ability to generate responses that are not only knowledgeable and coherent with the context but also carry rich and desirable styles and sentiments \cite{rashkin2018towards,smith2020controlling,zhou2020design}. Such an agent can deliver in-depth dialogues on various topics and yield more engaging and vivacious conversations that attract more users. In other words, both rational and perceptual thought are necessary for an ideal dialogue agent. Nevertheless, most existing Knowledge-Grounded Dialogue Generation (KDG) methods \cite{Dinan2019kgc,kim2020sequential,zhao2020knowledge} pay attention to the former and ignore the latter. Our motivation is as follows: previous KDG works mainly focus on selecting knowledge and expressing it accurately in the response. However, this excessive emphasis on knowledge makes KDG models tend to mechanically copy large sections from the unstructured knowledge (e.g., Wikipedia). As a result, responses from KDG models exhibit a ``pedantic" style (i.e., very technical terms and language), making the conversation less engaging and less natural.
In this paper, we make the first attempt to incorporate stylized text generation into KDG to tackle the above challenge. As shown in Figure \ref{fig:intro_example}, the KDG model takes the context and the related document as input and outputs a knowledgeable but pedantic response; in contrast, the polite response makes people feel respected and comfortable. Meanwhile, the polite and positive responses show bright and lively styles, which not only condense the core meaning of the response but also sound appealing to users, yielding more exposure and memorableness.
\begin{figure*}[bht]
\centering
\includegraphics[scale=2, width=0.97\textwidth, trim=10 175 10 162,clip]{Model_Overview_2.pdf}
\caption{Overview of DTR. The sequential style disentangler can find out and replace the style-related tokens in generated response with [*] and produce a template. Then, the style rewriter transfers the template to a new response in the target style.}
\label{fig:my_model}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth, scale=1, trim=165 295 303 176,clip]{introduction_case.pdf}
\caption{Our model combines the style-related fragments from the style corpus and the knowledge fragments from the KDG response to generate a response in a desired sentiment or style.}
\label{fig:intro_text_case}
\end{figure}
Specifically, we formulate a new problem: Stylized Knowledge-Grounded Dialogue Generation (SKDG). That is, the responses provided by a model should be coherent with the dialogue contexts and be consistent with the given knowledge and a designated style or sentiment.
The challenges lie in two aspects: (1) Lacking stylized knowledge-grounded dialogue triples (i.e., $<$context, knowledge, stylized response$>$), we need to train the SKDG model jointly on independent knowledge-grounded dialogues and a monolingual corpus in the target style or sentiment. (2) In addition to being coherent with the context and consistent with the target style or sentiment, a good SKDG response needs to ensure the objective correctness of the knowledge section. Especially when the given knowledge contains style-related content, existing stylized dialogue generation (SDG) models \cite{zheng2020stylized,yang2020styledgpt} may undermine the correctness of the knowledge section.
For example, in the negative-to-positive sentiment transfer shown in Figure \ref{fig:intro_text_case}, the first two negative fragments of the KDG response, ``dislike cruel'' and ``horrible'', should be modified into positive fragments, but the third, ``bad'', should be retained to maintain the original meaning of the knowledge section.
Hence, our approach is: on the one hand, to bridge the separate knowledge-grounded response generation and stylized rewriting by sharing a disentangled template (addressing challenge (1)); on the other hand, to enhance fidelity to the given knowledge with a reinforcement learning approach (addressing challenge (2)).
To achieve this goal, we propose a new paradigm: Generate-Disentangle-Rewrite. First, given a dialogue context and the associated external knowledge, a KDG model is adopted to generate a response. Then, as shown in Figures \ref{fig:my_model} and \ref{fig:intro_text_case}, we leverage a sequential style disentangler to delete style-related fragments from the KDG response and form a style-agnostic template. The rewriter then rewrites the entire template token by token, injecting style-related fragments in the process, to generate a vivid and informative response in the desired style. As there is no supervision for the style disentangler or the style rewriter, we propose a reinforcement learning based method that trains style disentangling and style rewriting end-to-end using a style intensity reward and a semantic similarity reward. The huge joint action space of the two modules makes training fragile, so we propose a novel weakly supervised stylistic template disentangling method to initialize both the disentangler and the rewriter. As a result, our method successfully produces knowledgeable responses in the desired style without any paired training data.
We name our model \textbf{DTR} standing for ``\textbf{D}isentangled \textbf{T}emplate \textbf{R}ewriting''. We demonstrate this approach using knowledge-grounded dialogues from Wizard of Wikipedia \citep{Dinan2019kgc} and Topical Chat \citep{Gopalakrishnan2019} with three sets of sentences with distinct sentiments (positive, negative) and styles (polite). Automatic and human evaluations show that our method significantly outperforms competitive baselines with a large margin in generating coherent and knowledgeable dialogue responses while rendering stronger stylistic features.
Our contributions are three-fold: (1) To the best of our knowledge, this is the first work on generating stylized knowledge-grounded responses without any labeled style-specific context-knowledge-response pairs. (2) We propose a stylized knowledge-grounded dialogue generation method via disentangled template rewriting. To optimize the model, we propose a reinforcement learning approach with a novel weakly supervised method to guide the learning of both the disentangler and the rewriter.
(3) Extensive experiments on two benchmarks indicate that DTR significantly outperforms previous state-of-the-art SDG methods on all evaluation metrics. Besides, DTR achieves comparable performance with the state-of-the-art KDG methods in the standard KDG evaluation setting. Our source code will be released at~\url{https://github.com/victorsungo/SKDG-DTR}.
\section{Approach}
To learn an effective disentangled template rewriting model for the SKDG task, we need to address several challenges: (1) how to distinguish style-related fragments in a given sentence without any supervision; (2) how to retain the style-related fragments in the knowledge section to preserve its completeness; (3) how to rewrite the disentangled template holistically instead of merely inserting a few style words, so as to enhance fluency and diversity.
Our DTR model consists of a knowledge-grounded response generator $\mathcal{G}_G$, a sequential style disentangler $\mathcal{F}$, and a style rewriter $\mathcal{G}_R$. Given a dialogue context $U$ and its associated knowledge $K$, we first use $\mathcal{G}_G$ to generate a response $\overline{Y}$. Figure \ref{fig:my_model} illustrates the cooperation of $\mathcal{F}$ and $\mathcal{G}_R$: the former reads $\overline{Y}$ and disentangles the style-related content from it to form a style-agnostic template sequence $\widetilde{Y}$, which is then provided as input to $\mathcal{G}_R$ to generate the transferred response $\hat{Y}$ in the target style. Since $\widetilde{Y}$ is discrete, the major obstacle to learning $\mathcal{F}$ is that gradients cannot flow through it. To cope with this challenge, we exploit a reinforcement learning approach that optimizes $\mathcal{F}$ with signals from $\mathcal{G}_R$.
So why do we need a Disentangler + Rewriter architecture? Previous SDG methods fuse the knowledge and the style into a mixed representation and decode a response from it. Owing to the difficulty of mixing knowledge and style implicitly in the unsupervised setting, they may lose either the knowledge or the style at the decoding stage. Motivated by this, we propose to decouple response generation into two relatively independent processes: 1) knowledge fragment generation and 2) style fragment generation. The knowledge fragments and style fragments are explicitly composed into the response in the final stage. Such a method ensures that the knowledge is successfully presented in the final output. The disentangler plays a central role in both decoupling and composition. In the following, we elaborate on the details of each component.
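The Generate-Disentangle-Rewrite data flow can be sketched as three composed stages (the stubs below are hypothetical stand-ins for the actual Transformer modules $\mathcal{G}_G$, $\mathcal{F}$, and $\mathcal{G}_R$; only the composition is faithful):

```python
from typing import Callable, List

def stylized_response(context: str, knowledge: str,
                      g_g: Callable, f: Callable, g_r: Callable) -> List[str]:
    """DTR data flow: generate -> disentangle -> rewrite."""
    y_bar = g_g(context, knowledge)   # knowledge-grounded draft response
    template = f(y_bar)               # style-agnostic template with [*] slots
    return g_r(template)              # response rewritten in the target style

# stub modules, only to show the composition
g_g = lambda u, k: "the movie is horrible".split()
f = lambda y: ["[*]" if w == "horrible" else w for w in y]
g_r = lambda t: ["wonderful" if w == "[*]" else w for w in t]
print(stylized_response("ctx", "kn", g_g, f, g_r))
# -> ['the', 'movie', 'is', 'wonderful']
```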
\subsection{Model Architecture}\label{sec:kdgmodel}
\subsubsection{Knowledge-Grounded Response Generator}
The generator $\mathcal{G}_G$ is a sequence-to-sequence model based on the Transformer architecture \citep{vaswani2017attention}; it consists of a 6-layer encoder and a 6-layer decoder with a hidden size of 768. Given a dialogue context $U = \{u_{1},\ldots, u_{i},\ldots,u_{l}\}$, with $u_{i}$ the $i$-th utterance, and a document $K = \{k_{1},\ldots,k_{i},\ldots,k_{h}\}$, with $k_{i}$ the $i$-th sentence, we concatenate $U$ and $K$ into one long sequence as the input of the encoder; the decoder then generates a response $\overline{Y}$ as output
\begin{align}
\overline{Y} = \{w_1, \ldots,w_i, \ldots ,w_m\} = \mathcal{G}_G(U,K)
\end{align}
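As a minimal illustration of the input assembly described above, the sketch below concatenates $U$ and $K$ into one long sequence; the \texttt{[SEP]} separator and the helper name are assumptions for illustration, since the paper only states that the two are concatenated.

```python
# Sketch of the encoder input assembly for G_G: dialogue context U and
# document K are concatenated into one long sequence. The "[SEP]" separator
# and the helper name are illustrative assumptions; the paper only states
# that U and K are concatenated.

def build_generator_input(context_utterances, knowledge_sentences, sep="[SEP]"):
    """Join the utterances of U and the sentences of K into one sequence."""
    parts = list(context_utterances) + list(knowledge_sentences)
    return f" {sep} ".join(parts)

U = ["how was the pizza place?", "pretty crowded last night."]
K = ["The restaurant serves wood-fired pizza.", "It opened in 1998."]
inp = build_generator_input(U, K)
```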
\subsubsection{Sequential Style Disentangler}\label{sec:template}
To identify and disentangle the style-related fragments of $\overline{Y}$, we employ a sequence labeling module, named Sequential Style Disentangler $\mathcal{F}$, to model the probability $x_i$ that the token at each position $i$ in $\overline{Y}$ is style-related.
The formulations are as follows:
\begin{align}
P_{\mathcal{F}}(A|\overline{Y},U,K) &= \prod_{i=1}^{m} P(a_{i}|\overline{Y},U,K)\\
P(a_{i}|\overline{Y},U,K) &= x_{i} = {\rm sigmoid}({\boldsymbol{W}^{\top}{e_{i}}}) \\
\{e_1,\ldots ,e_m\} &= {\rm BERT}(\overline{Y},U,K)
\end{align}
where $\boldsymbol{W}\in\mathbb{R}^{v \times 1}$ and $e_i\in\mathbb{R}^{v}$, $v$ is the representation dimension, $a_i\in \{\text{replace},\text{retain}\}$, and $A = \{a_i\}_{i=1}^m$. When generating $\widetilde{Y}$, if $x_{i} > \varepsilon$, $a_i$ is the operation ``replace'', indicating that $w_i$ is a style token and needs to be replaced with the tag token [*]; conversely, if $x_{i} < \varepsilon$, $a_i$ is the operation ``retain'', indicating that $w_i$ remains unchanged. The threshold $\varepsilon$ is set to the top-$P_r\%$ percentile of $\{x_i\}_{i=1}^m$, where $P_r$ is a hyperparameter. Finally, we perform the predicted sequence of operations on $\overline{Y}$ to obtain the style-agnostic template $\widetilde{Y}$. Since the style disentangler tags each word in a sentence, it captures style fragments (e.g., words, phrases, sub-sequences, or even the whole sentence) rather than only individual style tokens. The learning details are presented in Appendix \ref{appendix:method}.
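The thresholding step can be sketched as follows; the exact realisation of the top-$P_r\%$ threshold is an assumption for illustration, as the paper only states the percentile rule.

```python
# Sketch of the template construction: tokens whose style scores x_i fall in
# the top P_r% are replaced by the tag token [*]; the rest are retained.
# The exact percentile computation is an assumption for illustration.

def disentangle(tokens, scores, p_r):
    """Replace the top-p_r% highest-scoring tokens with the tag token [*]."""
    k = max(1, round(len(tokens) * p_r / 100))   # number of tokens to replace
    eps = sorted(scores, reverse=True)[k - 1]    # threshold epsilon
    return [w if s < eps else "[*]" for w, s in zip(tokens, scores)]

tokens = ["the", "pizza", "is", "really", "good"]
scores = [0.05, 0.10, 0.08, 0.70, 0.90]
template = disentangle(tokens, scores, p_r=40)
# -> ["the", "pizza", "is", "[*]", "[*]"]
```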
\subsubsection{Style Rewriter}
With $\widetilde{Y}$ as input, the style rewriter $\mathcal{G}_R$ generates a new $\hat{Y}$ word-by-word in the target style. $\mathcal{G}_R$ has the same architecture as $\mathcal{G}_G$. The generation process of $\mathcal{G}_R$ is formulated as:
\begin{equation}
P_{R}(\hat{Y}|\widetilde{Y})=\prod_{t=1}^{h} P_{R}(\hat{w}_{t}|\widetilde{Y})
\end{equation}
where $\hat{w}_{t}$ is the $t$-th token of $\hat{Y}$ whose length is $h$.
\subsection{Reinforcement Learning}\label{sec:policy-learning}
Neither the style disentangler nor the style rewriter has supervision for training. Moreover, we need to ensure the correctness of $\hat{Y}$ without any modification of the original content in the knowledge section of $\overline{Y}$. To cope with these challenges, we exploit REINFORCE~\citep{policygradient} to train $\mathcal{F}$ and $\mathcal{G}_R$ jointly, with a total reward determined by the semantic similarity to the ground-truth response and the consistency with the desired style. Specifically, we maximize the expected reward:
\begin{align}
\mathcal{R_{RL}} &= \mathbb{E}_{\hat{Y} \sim P_{R}(\hat{Y})}\mathbb{E}_{\widetilde{Y} \sim P_{\mathcal{F}}(A)}[R(\widetilde{Y},Y)]
\label{eq:pairwise-rl}
\end{align}
where $P_{R}(\hat{Y})$ and $P_{\mathcal{F}}(A)$ stand for $P_{R}(\hat{Y}|\widetilde{Y})$ and $P_{\mathcal{F}}(A|\overline{Y},U,K)$ respectively, and $R(\widetilde{Y},Y) = {\rm{Sim}}(\hat{Y}, Y) + {\rm{Cls}}(\hat{Y})$, where ${\rm{Sim}}(\cdot)$ is the embedding cosine similarity, which supervises knowledge preservation, and ${\rm{Cls}}(\cdot)$ is the style intensity predicted by a classifier. We subtract the mean value of the rewards $R$ in a batch to reduce the variance of the gradient estimate~\citep{clark-manning-2016-deep}. To prevent RL from degrading the rewriter, we fix the parameters of $\mathcal{G}_R$ and only optimize $\mathcal{F}$.
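The reward shaping with the batch-mean baseline can be sketched as follows; the reward values and log-probabilities below are illustrative stand-ins for ${\rm Sim}+{\rm Cls}$ and $\log P_{\mathcal{F}}$.

```python
# Sketch of the REINFORCE objective with a batch-mean baseline: advantages
# are rewards minus their batch mean, and the (pseudo-)loss to minimise is
# -sum_i advantage_i * log_prob_i. Values below are illustrative.

def reinforce_loss(log_probs, rewards):
    """Policy-gradient loss with mean-reward baseline for variance reduction."""
    baseline = sum(rewards) / len(rewards)
    advantages = [r - baseline for r in rewards]
    return -sum(a * lp for a, lp in zip(advantages, log_probs))

log_probs = [-1.2, -0.8, -2.1]   # per-sample log P_F(A | Y_bar, U, K)
rewards = [0.9, 0.4, 0.7]        # per-sample Sim + Cls
loss = reinforce_loss(log_probs, rewards)
```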
\subsection{Weakly Supervised Learning}
Since the style disentangler and the style rewriter need to be carefully synchronized, ideally we hope they can benefit each other during learning. However, in the early stage the parameters of $\mathcal{F}$ and $\mathcal{G}_R$ are far from optimal. It is possible that, on the one hand, templates that are not decoupled successfully hinder $\mathcal{G}_R$ from learning to rewrite style fragments accurately; on the other hand, noisy signals from rewards computed on the low-quality responses generated by $\mathcal{G}_R$ flow into the learning of $\mathcal{F}$, resulting in an inferior $\mathcal{F}$. To alleviate error accumulation in joint training, we propose a novel weakly supervised stylistic template disentangling method to assist the learning of $\mathcal{F}$ and $\mathcal{G}_R$.
\subsubsection{Weakly Supervised Disentangler}
\label{sec:disentangler_initialization}
Intuitively, style fragments dominate the distribution of the style corpus $\mathcal{D}_{s}$ compared with content fragments; thus, style fragments are easier to reconstruct than content fragments for a denoising autoencoder trained on $\mathcal{D}_{s}$. As shown in Figure \ref{fig:intuition_dae}, a denoising reconstruction model $\mathcal{G}_D$ reconstructs the style word ``good'' successfully but fails to do so for the content word ``pizza'' in the same response from $\mathcal{D}_{c}$. Concretely, we randomly divide $\mathcal{D}_{s}$ into two halves with equal probability, $\mathcal{D}_{s}^1$ and $\mathcal{D}_{s}^2$; $\mathcal{D}_{s}^1$ is used to train the denoising reconstruction model $\mathcal{G}_D$. The reconstruction objective $\mathcal{L}_\mathcal{S}$ is formulated as:
\begin{align}
\mathcal{{L}_\mathcal{S}}=\mathbb{E}_{T\sim {\mathcal{D}_{s}^1}}[-\log p(T|\widetilde{T})]
\end{align}
\begin{figure}[ht]
\centering
\includegraphics[width=0.55\textwidth, scale=1, trim=330 270 300 280,clip]{pizza.pdf}
\caption{The positive sentiment word ``good'' is easier to reconstruct than the knowledge word ``pizza'' in the sentence; a wrong prediction such as ``beef'' would hurt knowledge preservation and confuse the dialogue theme.}
\label{fig:intuition_dae}
\end{figure}
\noindent{where $\widetilde{T}$ is the corrupted version of $T$ obtained by randomly masking 15\% of its tokens.}
Then, for each sentence $T = \{t_i\}_{i=1}^m$ (with $t_i$ the $i$-th token of $T$) in $\mathcal{D}_{s}^2$, we sequentially mask one token at a time to construct its corrupted versions $\{\widetilde{T}_i\}_{i=1}^m$, which are fed to $\mathcal{G}_D$ to produce reconstructions $\{\hat{T}_i\}_{i=1}^m$. We then obtain a distance sequence $\boldsymbol{d} = \{d_i\}_{i=1}^m = \{\operatorname{Dis}(t_i, \hat{t}_i)\}_{i=1}^m$, where $\operatorname{Dis}(\cdot, \cdot)$ denotes a distance function and $\hat{t}_i$ is the token reconstructed at position $i$.
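The leave-one-out scoring can be sketched as follows; the toy reconstructor and the 0/1 distance below are stand-ins for $\mathcal{G}_D$ and $\operatorname{Dis}(\cdot,\cdot)$, which in the paper are a trained denoising autoencoder and a distance function.

```python
# Sketch of the leave-one-out masking: each position of T is masked in turn,
# G_D reconstructs it, and Dis(., .) compares original and reconstruction.
# The lambda "reconstructor" and 0/1 distance below are toy stand-ins.

def masked_variants(tokens, mask="[MASK]"):
    """Return the m leave-one-out corruptions of a sentence."""
    return [tokens[:i] + [mask] + tokens[i + 1:] for i in range(len(tokens))]

def distance_sequence(tokens, reconstruct, dis):
    """d_i = Dis(t_i, reconstruction of position i)."""
    variants = masked_variants(tokens)
    return [dis(t, reconstruct(v, i))
            for i, (t, v) in enumerate(zip(tokens, variants))]

T = ["the", "pizza", "is", "good"]
variants = masked_variants(T)
# A toy reconstructor that always predicts the style word "good":
d = distance_sequence(T, lambda v, i: "good", lambda a, b: 0 if a == b else 1)
```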
Based on the above intuition, a lower $d_i$ means $t_i$ is more likely to be a style-related token; thus, for $t_i$ and $t_j$, if $d_i < d_j$ we define the label $y = 1$, and conversely, if $d_i > d_j$, $y = -1$. We aggregate all $\langle t_i, t_j, y\rangle$ triples to construct $\mathcal{D}_{s\_t}$ and optimize the style disentangler via the pairwise ranking loss:
\begin{align}
\mathcal{L}_\mathcal{P}(t_i, t_j, y)=\max{(0, -y \, (x_i - x_j) + {\mu})}
\label{eq:pairwise-loss}
\end{align}
where ${\mu}$ is a margin hyperparameter and $x_i$, $x_j$ are the disentangler scores of $t_i$ and $t_j$.
The action space of token-level pairwise ranking is large, so for each sentence in $\mathcal{D}_{s\_t}$ we randomly sample $Z$ non-repetitive $\langle x_i, x_j, y\rangle$ triples to optimize $\mathcal{L}_\mathcal{P}$, where $Z$ is a hyperparameter. The style tokens found by the style disentangler in various style corpora are presented in Appendix \ref{appendix:stawords}.
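The ranking objective of Eq.(\ref{eq:pairwise-loss}) reduces to a standard hinge loss over the disentangler scores, sketched below with illustrative values.

```python
# Sketch of the pairwise ranking (hinge) loss: for a pair with label y = 1
# (t_i has the smaller reconstruction distance), the disentangler score x_i
# should exceed x_j by at least the margin mu. Values are illustrative.

def pairwise_ranking_loss(x_i, x_j, y, mu=0.2):
    """max(0, -y * (x_i - x_j) + mu)."""
    return max(0.0, -y * (x_i - x_j) + mu)

satisfied = pairwise_ranking_loss(0.9, 0.3, y=1)  # ordering respected -> 0.0
violated = pairwise_ranking_loss(0.3, 0.9, y=1)   # ordering violated -> positive
```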
\subsubsection{Weakly Supervised Rewriter }\label{sec:rewriter_training}
The training data for the rewriter are also constructed in an unsupervised manner: the optimized style disentangler $\mathcal{F}$ (Eq.~\ref{eq:pairwise-loss}) is run over the style corpus $\mathcal{D}_{s}= \{T_i\}_{i=1}^M$ to generate a disentangled template set ${\mathcal{\widetilde{D}}_{s}} = \{\widetilde{T}_i\}_{i=1}^M$. The rewriter then takes each pair $\langle {{\widetilde{T}}}, {T}\rangle$ as input and output, respectively. Since $\widetilde{T}$ is style-agnostic, the rewriter learns to transfer a factual sentence into a sentence with the target style. The loss function for the rewriter $\mathcal{G}_R$ is:
\begin{equation}
\label{eq:dialogpt-prob}
\mathcal{L_R} = - \frac{1}{M} \sum_{l=1}^{M}\sum^{|T_l|}_{i=1} \log p(t_{l,i} | t_{l,1},\cdots, t_{l,i-1}; \widetilde{T}_l)
\end{equation}
where $t_{l,i}$ is the $i$-th token of the $l$-th sentence. The rewriter $\mathcal{G}_R$ has the same architecture as $\mathcal{G}_G$.
\begin{algorithm}[htb]
\caption{Optimization Algorithm.}
\label{alg:optimize}
\begin{algorithmic}[1]
\STATE {\textbf{Input:} Datasets $\mathcal{D}_{c}$, $\mathcal{D}_{s}$; Models $\mathcal{G}_G$, $\mathcal{F}$, $\mathcal{G}_R$.}
\STATE Optimize $\mathcal{G}_G$ using $\mathcal{D}_{c}$.\\
\STATE Construct $\mathcal{D}_{s\_t}$. \\
\STATE Optimize $\mathcal{F}$ using $\mathcal{D}_{s\_t}$ (Eq.\ref{eq:pairwise-loss}) . \\
\STATE Construct ${\mathcal{\widetilde{D}}_{s}}$ using $\mathcal{F}$. \\
\STATE Optimize $\mathcal{G}_R$ using $\mathcal{D}_{s}$ and ${\mathcal{\widetilde{D}}_{s}}$ (Eq.\ref{eq:dialogpt-prob}) . \\
\STATE Further Optimize $\mathcal{F}$ using $\mathcal{D}_{conv}$ (Eq.\ref{eq:pairwise-rl}) . \\
\RETURN $\mathcal{G}_G$, $\mathcal{F}$, $\mathcal{G}_R$.
\end{algorithmic}
\end{algorithm}
\section*{Acknowledgement}
We thank anonymous reviewers for their insightful suggestions to improve this paper.
\section{Related Work}
\noindent\textbf{Knowledge-Grounded Dialogue Generation} has attracted broad interest in recent years; the knowledge can be obtained from documents \cite{Dinan2019kgc, kim2020sequential,rashkin-etal-2021-increasing} or images \cite{shuster2018image,yang2020open,liang2021maria}. Our study considers document-grounded dialogue generation. With the rapid development of pre-training techniques, \citet{zhao2020low-resource} propose a pre-trained disentangled decoder, and \citet{li2020zero} show that KDG models can achieve performance comparable to state-of-the-art supervised methods through an unsupervised learning method. Rather than testing new architectures on the benchmarks, our main contribution lies in transferring pedantic, factual knowledge-grounded responses into a desired style or sentiment, a need rooted in practice.
\noindent\textbf{Text Style and Sentiment Transfer} was inspired by visual style transfer \citep{imagestyletransfer2016, CycleGAN2017}, and many methods have made remarkable progress in text style transfer, which aims to alter the style attributes of text while preserving its content. A prevalent idea is to disentangle the content and style of text \cite{AAAI1817015, li-etal-2018-delete,jin-etal-2020-hooks,wen-etal-2020-decode,zhu-etal-2021-neural} or to leverage back-translation \citep{lample2019multiple-attribute, li2021stylized}. Stylized dialogue generation has attracted considerable attention in recent years \citep{niu2018polite, gao2019stylefusion}. Different from style transfer, stylized dialogue generation requires that the response also be coherent with its context.
\noindent\textbf{Stylized Dialogue Generation} refers to generating a dialogue response in a target style. \citet{akama-etal-2017-generating} first train a response generation model on a dialogue corpus and then use a style corpus to fine-tune the model. \citet{yang-etal-2020-styledgpt} build on a pre-trained language model and devise both a word-level loss and a sentence-level loss to fine-tune it towards the target style. \citet{https://doi.org/10.48550/arxiv.2004.02202} propose an information-guided reinforcement learning strategy to better balance the trade-off between stylistic expression and content quality. \citet{https://doi.org/10.48550/arxiv.2110.08515} blend textual and visual responses to make the dialogue style more attractive and vivid. \citet{zheng2020stylized} capture stylistic features embedded in unpaired texts, \citet{su2020prototypetostyle} use pointwise mutual information (PMI) to determine stylistic words, and \citet{yang2020styledgpt} adopt pre-trained models to tackle open-domain stylized response generation.
We propose a novel disentangled template rewriting approach as the first attempt to study stylized knowledge-grounded dialogue generation without any supervised style-specific context-knowledge-response triples data.
\section{Introduction}
Diffusion is anomalous when the variance grows with time more slowly or faster than linearly and
the density distribution differs from the normal distribution.
This means that the central limit theorem is violated. One can expect that the theorem is not valid
for transport processes if memory effects are present, e.g. when a medium contains traps.
Traps hamper the transport and, as a consequence,
subdiffusion emerges -- as well as a stretched-Gaussian asymptotics of the distribution \cite{met}.
We encounter such a situation for disordered media with impurities and defects. If the waiting time distribution
is not prescribed to a given position and changes at each visit of the particle, we are dealing with
an annealed disorder which corresponds to a renewal process. Then the continuous-time random walk (CTRW)
is well suited to model the anomalous diffusion.
If, on the other hand, the particle exercises a space structure which slowly evolves with time,
the trapping time at a given site is the same for each visit of this site and a correlation between
trapping times emerges (a quenched disorder) \cite{bou}. The quenched trap model \cite{ber,bar}
takes into account that the particle may remember the rest time at a given
point. Then the time-dependence of the variance
can be derived by averaging over disorder \cite{bar} or using a renormalisation group approach \cite{mach};
it takes a form $\sim t^{2\beta/(1+\beta)}$, where $0<\beta<1$ characterises the memory.
The central limit theorem may not apply also in the Markovian case since a nonhomogeneous structure
of the environment makes subsequent random stimulations mutually dependent.
Considering, in particular, the diffusion on fractals one must take into account
the self-similar medium structure;
it is described by the Fokker-Planck equation with a variable diffusion coefficient \cite{osh} (for a non-Markovian
generalisation see \cite{met1,met2}). CTRW involves, in general, a coupled jump density distribution.
When the waiting-time distribution is position-dependent,
the Fokker-Planck equation, corresponding to the master equation, contains the variable diffusion
coefficient \cite{kam} and, in the non-Markovian case, a variable order of the fractional derivative \cite{chevo}.
For such inhomogeneous problems, CTRW implies a stretched-Gaussian shape of the density distribution
and predicts the anomalous diffusion.
If, on the other hand, the jump length does not obey the normal distribution
but is governed by a general L\'evy stable distribution (L\'evy flights) the variance
does not exist; such long jumps are frequently observed in many areas
of science \cite{klag}. They may be directly related to a specific topology of the medium and then
the transport description requires a variable diffusion coefficient: the folded polymers are a well-known
example \cite{bro}. If a composite medium consists of many layers, the fractional equation
is complicated and contains both a position-dependent diffusion coefficient and a position-dependent
order of the fractional derivative \cite{sti}.
However, the presence of L\'evy flights need not imply infinite fluctuations because
any physical system is finite and, if one introduces a truncation of the distribution tail,
the diffusion properties are well determined.
It has been recently demonstrated that cracking of heterogeneous materials
reveals a slowly falling power-law tail of the local velocity distribution of the crack front \cite{tall} but,
despite that, the authors were able to determine the variance due to the finiteness of the system.
Systems with memory can be conveniently handled in terms of a Langevin equation by introducing
an auxiliary operational time. Process given by this equation is subsequently subordinated to
the random physical time by means of a one-sided probability distribution with a long tail \cite{pir,zas};
such a system of the Langevin equations is used as an alternative formulation of CTRW \cite{fog,bar1}
and also applied to quenched random media \cite{den} where the intensity of the random time distribution
depends on the position. CTRW was applied to transport in the heterogeneous media by using
fractional derivatives \cite{berk} and a subordinated system of Langevin equations \cite{berk1}.
A subordination formalism, which directly takes into account that the memory in nonhomogeneous
systems must depend on the position, has been recently proposed \cite{sro14}. This dependence was introduced
via a variable intensity of the random time distribution and models the influence
of the trap geometry on the local time lag. The system is then described by a set of two Langevin equations,
\begin{eqnarray}
\label{la}
dx(\tau)&=&\eta(d\tau)\nonumber\\
dt(\tau)&=&g(x)\xi(d\tau),
\end{eqnarray}
where a non-negative random time intensity $g(x)$ defines a position-dependence of the memory effects and
results from the presence of traps.
Increments of the white noise $\eta$ are determined, in general, by the $\alpha$-stable
L\'evy distribution, $L_\alpha(x)$, and $\xi(d\tau)$ stands for a stochastic process given by
a one-sided distribution $L_\beta(x)$. Let us consider for the moment
a Markovian case for which the density of $\xi$,
still being one-sided, has finite moments (e.g. an exponential). Approximating $\xi$ by its mean,
$\xi\to \langle\xi\rangle$, allows us to evaluate the operational time increment,
$\Delta\tau=(\langle\xi\rangle g(x))^{-1}\Delta t$. The first equation (\ref{la}) can be discretized
as $\Delta x=\Delta\eta\Delta\tau^{1/\alpha}$ and the system (\ref{la}) resolves itself
to a single equation with a multiplicative noise,
$dx(t)=\nu(x)^{1/\alpha}\eta(dt)$, where $\nu(x)=(\langle\xi\rangle g(x))^{-1}$.
Since the $x$-dependence is evaluated from the other equation, $x(t)$ corresponds to the initial time
for each interval and the It\^o interpretation is appropriate.
The above equation incorporates the medium structure but neglects the memory effects which are important
if $\beta<1$. Those effects, in turn, can be approximately taken into account by the subordination of the process
$x(\tau)$ to a random time $t$ and, after that decoupling of the medium structure and memory,
Eq.(\ref{la}) takes the form
\begin{eqnarray}
\label{las}
dx(\tau)&=&\nu(x)^{1/\alpha}\eta(d\tau)\nonumber\\
dt(\tau)&=&\xi(d\tau);
\end{eqnarray}
Eq.(\ref{las}) refers to the case $\beta<1$ but $\nu(x)$ contains $\langle\xi\rangle$ corresponding
to a Markovian process.
The first part of Eq.(\ref{las}) describes such a process: a Markovian CTRW where the jump length is governed by
$\eta(\tau)$ and the waiting time is Poissonian with a position-dependent rate $\nu(x)$. The derivation
of this equation is presented in Appendix A. A system similar to (\ref{las}) (but including
a potential) was considered in \cite{bar1}. It was demonstrated in Ref.\cite{sro14} that
Eq.(\ref{las}) corresponds to a fractional kinetic equation \cite{zas} containing two fractional operators
and a position-dependent diffusion coefficient.
The anomalous transport predicted by Eq.(\ref{las}) is described by the variance for the Gaussian case and
by the fractional moments for $\alpha<2$ \cite{sro14}.
In the present paper, we analyse the consequences for the diffusion process that follow from the dependence
of the memory on position, defined by Eq.(\ref{la}). In the case of L\'evy flights, we construct a process
characterised by a finite variance, taking into account that the system is finite.
The finite variance emerges as a result of a variable noise intensity near a boundary,
which decreases with the distance. This procedure differs from the usual noise truncation because
the distribution of $\eta$ remains unaffected.
The paper is organised as follows. In Sec.II we discuss the case $\alpha=2$ exactly solving Eq.(\ref{las})
for the power-law $g(x)$.
Sec.III is devoted to the L\'evy flights. We discuss a system with the multiplicative noise
in various interpretations and infer its general properties (Sec.IIIA). In Sec.IIIB, the multiplicative noise
serves to model a boundary layer; we derive the variance, which then exists,
as a function of time and all system parameters.
\section{Gaussian case}
We consider stochastic motion inside a medium containing traps, where the particle
is subjected to a white noise whose increments obey
Gaussian statistics. The substrate nonhomogeneity is restricted to the time characteristics
of the system, determined by $g(x)$, and the dynamics is described by Eq.(\ref{la}).
We assume the function $g(x)$ in a power-law form,
\begin{equation}
\label{nuodx}
g(x)\sim|x|^\theta ~~~~~~(\theta>-1).
\end{equation}
This form of the diffusion coefficient has been assumed to describe, besides the problem of diffusion
on fractals, e.g. turbulent two-particle diffusion \cite{fuj} and transport of fast electrons
in a hot plasma \cite{ved}. It is encountered in geology, where a power-law distribution of fracture
lengths is responsible for transport in rock \cite{pai}; this self-similar structure of
a fracture and fault network is characterised by a fractal dimension and determines the pattern of water
and steam penetration in the rock \cite{taf}.
In Eq.(\ref{nuodx}), a negative $\theta$ means that intensity of the random time distribution
is largest near the origin
and diminishes with the distance; it rises with the distance for a positive $\theta$.
We look for the density distribution of the particle position, $p(x,t)$, by solving Eq.(\ref{las}).
The first equation (\ref{las}) in the It\^o interpretation leads to the Fokker-Planck equation
\begin{equation}
\label{frace}
\frac{\partial p_0(x,\tau)}{\partial \tau}=
\frac{\partial^2[\nu(x) p_0(x,\tau)]}{\partial x^2},
\end{equation}
which has the solution \cite{hen}
\begin{equation}
\label{sol20}
p_0(x,\tau)={\cal N}\tau^{-\frac{1+\theta}{2+\theta}}\, |x|^\theta\exp\left(-\frac{|x|^{2+\theta}}{(2+\theta)^2\tau}\right)
\end{equation}
(${\cal N}$ denotes a normalisation constant), and corresponds to the Markovian process $x(\tau)$.
The random time, given by the second equation
(\ref{las}), is defined by a one-sided, maximally asymmetric stable L\'evy distribution
$L_\beta(\tau)$, where $0<\beta<1$, and the inverse distribution we denote by $h(\tau,t)$.
The density $p(x,t)$ results from the integration of the densities $p_0(x,\tau)$ and $h(\tau,t)$
over the operational time,
\begin{equation}
\label{inte}
p(x,t)=\int_0^\infty p_0(x,\tau)h(\tau,t)d\tau.
\end{equation}
To evaluate the integral, it is convenient to use the Laplace transform from Eq.(\ref{inte})
taking into account that $\bar h(\tau,u)=u^{\beta-1}\exp(-\tau u^\beta)$ \cite{wer}.
Then a direct integration yields a Laplace transform from the normalised density,
\begin{equation}
\label{lbes}
\bar p(x,u)=-2\frac{(2+\theta)^\nu}{\Gamma(-\nu)}|x|^{\theta+1/2}u^c
{\mbox K}_\nu\left(\frac{|x|^{1+\theta/2}}{1+\theta/2}u^{\beta/2}\right),
\end{equation}
where K$_\nu(z)$ is a modified Bessel function, $\nu=1/(2+\theta)$ and
$c=\beta-\beta/(4+2\theta)-1$.
The density obtained from the inversion of the above transform can be expressed
in terms of the Fox H-function,
\begin{eqnarray}
\label{sol2}
p(x,t)=-\frac{2}{\beta t}\frac{(1+\theta/2)^{\nu+2c/\beta}}{\Gamma(-\nu)}|x|^{(2+\theta)/\beta-1}
H_{1,2}^{2,0}\left[\frac{|x|^{(2+\theta)/\beta}}{(2+\theta)^{2/\beta}t}\left|\begin{array}{l}
~~~~~~~~~~~~~~~~~~~(0,1)\\
\\
(c/\beta-\nu/2,1/\beta),(c/\beta+\nu/2,1/\beta)
\end{array}\right.\right];
\end{eqnarray}
details of the derivation are presented in Appendix B.
Eq.(\ref{sol2}) seems complicated but an expression for large $|x|$ is simple: it has
a stretched-Gaussian form which follows from an asymptotic formula for the H-function \cite{bra},
\begin{equation}
\label{strg2}
p(x,t)\sim |x|^{\theta/\beta}t^{-\frac{\beta(1+\theta)}{(2+\theta)(2-\beta)}}
\exp\left[-A|x|^{(2+\theta)/(2-\beta)}/t^{\frac{\beta}{2-\beta}}\right],
\end{equation}
where $A=(2/\beta-1)/[\beta^{3/(2-\beta)}(2+\theta)^{2/(2-\beta)}]$. The stretched-Gaussian shape
of the tail is typical for diffusion on fractals \cite{met2} and emerges in the trap models \cite{ber}.
We are interested in the moments of $p(x,t)$; all of them are finite and given by
a characteristic function which can be derived as a series expansion. Eq.(\ref{inte}) yields
\begin{equation}
\label{chf1}
\widetilde p(k,t)=\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}k^{2n}\int_0^\infty\langle x^{2n}\rangle_{p_0}h(\tau,t)d\tau,
\end{equation}
where the moments of $p_0(x,\tau)$ directly follow from Eq.(\ref{sol20}),
\begin{equation}
\label{chf2}
\langle x^n\rangle_{p_0}=-(2+\theta)^{2n/(2+\theta)+1}\frac{\Gamma[(1+n+\theta)/(2+\theta)]}{\Gamma[-1/(2+\theta)]}
\tau^{n/(2+\theta)}.
\end{equation}
Then the integral resolves itself to the moments of $h(\tau,t)$ that are given by \cite{pir}
\begin{equation}
\label{chf3}
\langle\tau^{2n/(2+\theta)}\rangle_h=\frac{\Gamma[2n/(2+\theta)+1]}{\Gamma[2n\beta/(2+\theta)+1]}
t^{2n\beta/(2+\theta)}
\end{equation}
and the final expression for the characteristic function reads
\begin{equation}
\label{chf}
\widetilde p(k,t)=-\frac{2+\theta}{\Gamma[-1/(2+\theta)]}
\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}(2+\theta)^{4n/(2+\theta)}\Gamma[(1+2n+\theta)/(2+\theta)]
\frac{\Gamma[2n/(2+\theta)+1]}{\Gamma[2n\beta/(2+\theta)+1]} t^{2n\beta/(2+\theta)}k^{2n}.
\end{equation}
Diffusion properties are determined by the variance, whose time-dependence follows from
scaling arguments or from Eq.(\ref{chf}):
$\langle x^2\rangle(t)=-\partial^2\widetilde p(k=0,t)/\partial k^2 \sim t^{2\beta/(2+\theta)}$;
this formula indicates both the sub- and superdiffusion.
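The scaling step can be made explicit; under the decoupling assumption of Eq.(\ref{las}), combining Eqs.(\ref{chf2}) and (\ref{chf3}) for $n=2$ and $n=1$, respectively, gives

```latex
% Scaling sketch: the Markovian moment (Eq. (chf2), n = 2) and the moment of
% the inverse subordinator (Eq. (chf3), n = 1) combine via Eq. (inte):
\begin{equation*}
\langle x^2\rangle(t)=\int_0^\infty \langle x^2\rangle_{p_0}(\tau)\,h(\tau,t)\,d\tau
\propto \langle \tau^{2/(2+\theta)}\rangle_h \propto t^{2\beta/(2+\theta)}.
\end{equation*}
```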
Comparison of the above approximate result with a numerical solution of Eq.(\ref{la}) reveals
a reasonable agreement; some discrepancies emerge for a small $\beta$ and large $\theta$ \cite{sro14}.
Finally, we will demonstrate that $p(x,t)$, Eq.(\ref{sol2}), satisfies the fractional equation
\begin{equation}
\label{glef}
\frac{\partial p(x,t)}{\partial t}={_0}D_t^{1-\beta}\frac{\partial^2}{\partial x^2}(|x|^{-\theta} p(x,t)),
\end{equation}
where ${_0}D_t^{1-\beta}$ is a Riemann-Liouville operator
\begin{equation}
\label{rlo}
_0D_t^{1-\beta}f(t)=\frac{1}{\Gamma(\beta)}\frac{d}{dt}\int_0^t dt'\frac{f(t')}{(t-t')^{1-\beta}},
\end{equation}
and this equation constitutes a non-Markovian generalisation of the Fokker-Planck equation for CTRW
with a position-dependent waiting-time distribution \cite{sro06}. First, we integrate Eq.(\ref{glef})
over time,
\begin{equation}
\label{glef1}
p(x,t)-p_0(x)={_0}D_t^{-\beta}\frac{\partial^2}{\partial x^2}(|x|^{-\theta} p(x,t)),
\end{equation}
where $p_0(x)$ is the initial condition, and take the Laplace transform,
\begin{equation}
\label{glef2}
u^\beta\bar p(x,u)-u^{\beta-1}p_0(x)=\frac{\partial^2}{\partial x^2}(|x|^{-\theta} \bar p(x,u)).
\end{equation}
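The step from Eq.(\ref{glef1}) to Eq.(\ref{glef2}) relies on the standard Laplace-transform rule for the Riemann-Liouville fractional integral, recalled here for completeness:

```latex
% Laplace transform rule used between Eqs. (glef1) and (glef2):
\begin{equation*}
\mathcal{L}\bigl[{_0}D_t^{-\beta}f(t)\bigr](u)=u^{-\beta}\bar f(u),
\end{equation*}
so that the transform of Eq.(\ref{glef1}) reads
$\bar p(x,u)-u^{-1}p_0(x)=u^{-\beta}\,\partial_x^2\bigl(|x|^{-\theta}\bar p(x,u)\bigr)$,
and multiplication by $u^\beta$ yields Eq.(\ref{glef2}).
```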
The differentiation produces a differential equation
\begin{equation}
\label{glef3}
x^2\bar p''-\theta x\bar p'+[\theta(1+\theta)-u^\beta x^{\theta+2}]\bar p+u^{\beta-1}x^{\theta+2}p_0=0
\end{equation}
and the last term vanishes if $p_0(x)=\delta(x)$. Its particular solution has the form \cite{kamke}
\begin{equation}
\label{glef4}
\bar p(x,u)=|x|^{\theta+1/2}
{\mbox K}_\nu\left(\frac{|x|^{1+\theta/2}}{1+\theta/2}u^{\beta/2}\right)f(u),
\end{equation}
where $f(u)$ is an arbitrary function. Putting $f(u)\sim u^c$, we obtain Eq.(\ref{lbes}).
The power-law form of $g(x)$ suggests an interpretation of the position-dependence of the memory
as a fractal trap structure. In this picture, $g(x)$ represents a density of traps,
i.e. the number of traps per unit interval, which is self-similar.
The density of a fractal embedded in the one-dimensional space equals $|x|^{d_f-1}$, where
$d_f$ stands for the fractal (Hausdorff) dimension \cite{hav}. Comparing the above expression with $g(x)$
given by Eq.(\ref{nuodx}) allows us to interpret the parameter $\theta$ by relating it to the
fractal dimension: $\theta=d_f-1$ for $\theta\in(-1,0]$. Then the expression for the variance takes the form,
\begin{equation}
\label{warg}
\langle x^2\rangle(t)\sim t^{2\beta/(1+d_f)}.
\end{equation}
The subdiffusion observed in the homogeneous case may turn into enhanced diffusion if
the dimension of the trap structure is small and the memory is sufficiently weak (large $\beta$).
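For illustration, with the identification $\theta=d_f-1$ and an assumed value $\beta=0.9$, the exponent in Eq.(\ref{warg}) crosses from sub- to superdiffusion as $d_f$ decreases:

```latex
% Illustrative values of the exponent 2*beta/(1+d_f) for beta = 0.9:
\begin{equation*}
d_f=1:\quad \frac{2\beta}{1+d_f}=0.9\ \ (\text{subdiffusion}),
\qquad
d_f=\tfrac{1}{2}:\quad \frac{2\beta}{1+d_f}=1.2\ \ (\text{superdiffusion}).
\end{equation*}
```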
The case $\beta=1$ is special: $\langle\xi\rangle$ does not exist and,
for $d_f=1$ ($\theta=0$), the problem resolves itself
to the ordinary one-dimensional CTRW which is characterised by a weak subdiffusion,
$\langle x^2\rangle(t)\sim t/\ln t$ \cite{bou}.
The present problem differs from the random walk on fractals when the particle performs jumps
according to a self-similar pattern \cite{osh}.
The non-Markovian generalisation of that model \cite{met1} contains a fractional equation of a form
different from Eq.(\ref{glef}), and the predicted motion is always subdiffusive.
There, the density has a finite value at the origin, in contrast to our approach: $p(x,t)$,
Eq.(\ref{sol2}), diverges at $x=0$ (for $\theta<0$), where the trapping is strong and the evolution, as a function
of the physical time, proceeds very slowly near the origin.
\section{L\'evy flights}
Motion of a massive particle subjected to an additive noise which obeys the L\'evy statistics with long jumps
is characterised by the infinite variance; this situation is unacceptable for physical reasons. However,
the additive noise is an idealisation and in realistic systems the stochastic
stimulation may depend on a state of the system. For this -- multiplicative -- noise, the variance
may be finite and the anomalous diffusion exponent well determined.
In the next subsection, we discuss general properties of
the non-Markovian Langevin dynamics with that noise; the memory effects are taken into account
by a subordination of the dynamical process to the random time. One can expect
that the multiplicative noise emerges near a boundary where the environment structure is more
complicated than in the bulk. The boundary effects modelled in terms of the multiplicative
noise will be discussed in Subsection IIIB.
\subsection{Multiplicative noise}
The generalisation of Eq.(\ref{la}) including the multiplicative noise is the following
\begin{eqnarray}
\label{lam}
dx(\tau)&=&f(x)\eta(d\tau)\nonumber\\
dt(\tau)&=&g(x)\xi(d\tau)
\end{eqnarray}
and, in the case of the first equation (\ref{lam}), we must decide how this equation is to be interpreted,
namely at which time $f(x(t))$ is to be evaluated. More precisely, one defines
the stochastic integral as the limit of a Riemann sum,
\begin{equation}
\label{riem}
\int_0^t f[x(\tau)]d\eta(\tau)=
\sum_{i=1}^n f[(1-\lambda_I)x(\tau_{i-1})+\lambda_Ix(\tau_i)][\eta(\tau_i)-\eta(\tau_{i-1})],
\end{equation}
where the interval $(0,t)$ has been divided into $n$ subintervals ($n\to\infty$).
The parameter $0\le\lambda_I\le1$ determines the interpretation and corresponds, in particular,
to the It\^o (II) ($\lambda_I=0$), Stratonovich (SI) ($\lambda_I=1/2$)
and anti-It\^o interpretation (AII) ($\lambda_I=1$). II applies, in particular, if the noise consists
of clearly separated pulses, e.g. in a continuous description of integer-valued processes, and is used
in perturbation theory due to its simplicity. It is well known that for
Gaussian processes the ordinary rules of the calculus are valid in the case of SI, in contrast to
the other interpretations, which allows us to transform the equation with the multiplicative noise into
an equation with the additive noise by a simple variable change. As regards the general stable processes,
we observe the same property:
the numerical solutions of the Langevin equation by using Eq.(\ref{riem}) with $\lambda_I=1/2$ agree
with those where a standard technique of the variable change is applied \cite{sro09}.
Obviously, the ordinary rules of the calculus are always valid if one defines
the white noise as a limit of the coloured noise \cite{sro12}; equivalence of that limit and SI
is an important property of the multiplicative Gaussian processes.
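The $\lambda_I$-parametrized discretization (\ref{riem}) can be sketched directly (a minimal illustration we add here; the function name and the common sampling grid for $x$ and $\eta$ are ours):

```python
def stoch_integral(f, x, eta, lam):
    """lambda_I-parametrized Riemann sum of Eq. (riem):
    sum_i f[(1-lam)*x_{i-1} + lam*x_i] * (eta_i - eta_{i-1}).
    lam=0 gives the Ito, lam=0.5 the Stratonovich and lam=1 the
    anti-Ito interpretation."""
    total = 0.0
    for i in range(1, len(x)):
        # evaluation point interpolates between the two interval endpoints
        point = (1.0 - lam) * x[i - 1] + lam * x[i]
        total += f(point) * (eta[i] - eta[i - 1])
    return total
```

For a constant integrand all interpretations coincide, while for a state-dependent $f$ they already differ on a single step, which is the origin of the interpretation problem discussed above.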
In the case of Eq.(\ref{lam}), we obtain the equation
with the additive noise by changing the variable, $y(x)=\int_0^x\frac{dx'}{f(x')}$, in the first
of Eq.(\ref{lam}). Next, we decouple the medium structure and memory (cf. Eq.(\ref{las})) and get the equation
$dy(\tau)=\nu(x(y))^{1/\alpha}\eta(d\tau)$. It contains a multiplicative noise in II
and corresponds to the Fokker-Planck equation \cite{sche}
\begin{equation}
\label{fp0}
\frac{\partial p_0(y,\tau)}{\partial \tau}=
\frac{\partial^\alpha[\nu(x(y)) p_0(y,\tau)]}{\partial|y|^\alpha}.
\end{equation}
The solution in the original variable $x$ follows directly from the solution of Eq.(\ref{fp0}),
\begin{equation}
\label{solx}
p_0(x,\tau)=\frac{1}{f(x)}p_0(y(x),\tau),
\end{equation}
and the density as a function of the physical time -- determined by the equation $dt(\tau)=\xi(d\tau)$ --
results from Eq.(\ref{inte}).
In the following, we restrict our considerations to a power-law form of $g(x)$:
\begin{equation}
\label{nul}
\nu(x)=|x|^{-\theta\alpha} ~~~~~~(\theta>-1),
\end{equation}
similar to Eq.(\ref{nuodx}). Then either the waiting time is large near the origin, corresponding
to a large intensity of the random time density ($\theta<0$), or the probability of long rests rises with the distance
($\theta>0$). Eq.(\ref{fp0}) can be solved if one neglects terms higher than $|k|^\alpha$
in the characteristic function \cite{sro06,sro14a} which, after inverting the Fourier transform,
yields the tail of the density (\ref{solx}): $p_0(y,\tau)\sim |y|^{-1-\alpha}$; it corresponds
to the L\'evy-stable asymptotics \cite{uwa}. The backward transformation of the variable,
$y\to x$, produces the asymptotics $p_0(x,\tau)\sim f(x)^{-1}y(x)^{-1-\alpha}$,
which indicates that finite moments can exist. In particular, the variance is finite
if $f(x)$ satisfies the condition
\begin{equation}
\label{warvar}
\lim_{x\to\infty}\frac{x^3}{f(x)y(x)^{1+\alpha}}=0.
\end{equation}
We assume from now on that $f(x)$ has the power-law form,
\begin{equation}
\label{fodx}
f(x)=|x|^{-\gamma},
\end{equation}
which allows us to obtain exact solutions. Then Eq.(\ref{warvar})
implies the condition for the finite variance: $\gamma>2/\alpha-1$.
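For completeness we note the short calculation behind this condition: with $f(x)=|x|^{-\gamma}$ one has $y(x)=\mbox{sign}(x)\,|x|^{1+\gamma}/(1+\gamma)$, so that for $x\to\infty$
\[
\frac{x^3}{f(x)\,y(x)^{1+\alpha}}
\propto \frac{x^{3+\gamma}}{x^{(1+\gamma)(1+\alpha)}}
= x^{2-\alpha-\alpha\gamma}\to 0
\quad\Longleftrightarrow\quad \gamma>\frac{2}{\alpha}-1.
\]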
Inserting the functions $g(x)$ and $f(x)$, given by Eq.(\ref{nul}) and Eq.(\ref{fodx}),
into Eq.(\ref{lam}), we obtain, after elimination of the position-dependent factor in the subordination
equation and straightforward calculations, the following set of Langevin equations,
\begin{eqnarray}
\label{lam1}
dy(\tau)&=&D|y|^{-\theta/(1+\gamma)}\eta(d\tau)\nonumber\\
dt(\tau)&=&\xi(d\tau),
\end{eqnarray}
where $D=(1+\gamma)^{-\theta/(1+\gamma)}$. The first of the above equations corresponds to the Fokker-Planck equation,
\begin{equation}
\label{fp01}
\frac{\partial p_0(y,\tau)}{\partial \tau}=
D^\alpha\frac{\partial^\alpha[|y|^{-\alpha\theta/(1+\gamma)}
p_0(y,\tau)]}{\partial|y|^\alpha}
\end{equation}
which in the limit of small wave numbers is satisfied by a density determined by the following
characteristic function with respect to the variable
$y=\frac{1}{1+\gamma}|x|^{1+\gamma}\mbox{sign }x$,
\begin{equation}
\label{py0k}
\widetilde p_0(k,\tau)\approx 1-(A_0\tau)^{c_\theta}|k|^\alpha;
\end{equation}
in the above equation, $c_\theta=1/(\alpha+\theta/(1+\gamma))$ and
\begin{equation}
\label{staa}
A_0=\frac{2D(1+\theta+\gamma)}{\pi\alpha(1+\gamma)}
\Gamma(\theta/(1+\gamma))\Gamma(1-\alpha\theta/(1+\gamma))
\sin\left(\frac{\pi\alpha\theta}{2(1+\gamma)}\right).
\end{equation}
Eq.(\ref{py0k}) corresponds to the stable density $L_\alpha$ if
$\theta\in(-(1+\gamma),(1+\gamma)/\alpha)$ \cite{sro14a}.
The final density $p(x,t)$ results from the transformation to the physical time by means
of Eq.(\ref{inte}), which we rewrite as a Fourier transform with respect to $y$. Using Eq.(\ref{py0k}) yields
\begin{equation}
\label{intek}
\widetilde p(k,t)=1-A_0^{c_\theta}|k|^\alpha\int_0^\infty \tau^{c_\theta} h(\tau,t)d\tau
\end{equation}
and the integral can be easily evaluated if we express the function $h(\tau,t)$ by the H-function \cite{non},
\begin{eqnarray}
\label{htt}
h(\tau,t)=\frac{1}{\beta\tau}H_{1,1}^{1,0}\left[\frac{\tau^{1/\beta}}{t}\left|\begin{array}{l}
(1,1)\\
\\
(1,1/\beta)
\end{array}\right.\right].
\end{eqnarray}
Changing the variable in the integral (\ref{intek}), $\xi=\tau^{1/\beta}/t$, we get
\begin{equation}
\label{intek1}
\widetilde p(k,t)=1-A_0^{c_\theta}|k|^\alpha t^{\beta c_\theta}\int_0^\infty \xi^{\beta c_\theta-1}H(\xi)d\xi,
\end{equation}
where $H(\xi)$ denotes the H-function in Eq.(\ref{htt}). We evaluate the resulting Mellin transform
and obtain the Fourier transform corresponding to the stable distribution for small $|k|$.
Inversion of this Fourier transform produces the final expression,
\begin{equation}
\label{px}
p(x,t)=A^{-1} t^{-\beta c_\theta}|x|^\gamma L_\alpha(t^{-\beta c_\theta}|y(x)|/A),
\end{equation}
where we applied Eq.(\ref{solx}) and denoted
\begin{equation}
\label{staa1}
A=A_0^{c_\theta/\alpha}\Gamma[1+c_\theta]^{1/\alpha}/\Gamma[1+\beta c_\theta]^{1/\alpha}.
\end{equation}
The solution (\ref{px}) has the asymptotics
\begin{equation}
\label{asyt}
p(x,t)\sim t^{\beta c_\theta}|x|^{-1-\alpha-\alpha\gamma}.
\end{equation}
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig1.eps}
\caption{(Colour online) Distributions obtained from a numerical integration of Eq.(\ref{lam}) with
$g(x)=|x|^{\theta\alpha}$ and $f(x)=|x|^{-\gamma}$ for $\alpha=3/2$, $\theta=2/3$, $\beta=1/2$, $\gamma=1/2$ and $t=0.1$.
Curves correspond to the following
values of $\lambda_I$: 0 (magenta), 0.1 (blue), 0.2 (cyan), 0.5 (red) and 1 (green) (from bottom
to top in the upper part and from top to bottom in the lower part). The straight lines correspond to
a power-law function with the index 2.5, 2.9 and 3.25 (from top to bottom).}
\end{figure}
\end{center}
The variance can be directly evaluated from (\ref{px}):
\begin{equation}
\label{warm}
\langle x^2\rangle(t)=2\int_0^\infty x^2 p(x,t)dx=2A^{-1}t^{-\beta c_\theta}
\int_0^\infty x^{2+\gamma}L_\alpha(A^{-1}(1+\gamma)^{-1}t^{-\beta c_\theta}x^{1+\gamma})dx.
\end{equation}
Change of the variable and evaluating the Mellin transform yields the final expression,
\begin{equation}
\label{warmf}
\langle x^2\rangle(t)=-\frac{2}{\pi\alpha}[A(1+\gamma)]^{2/(1+\gamma)}\Gamma\left(-\frac{2}{\alpha(1+\gamma)}\right)
\Gamma\left(1+\frac{2}{1+\gamma}\right)\sin\left(\frac{\pi}{1+\gamma}\right)t^{2\beta c_\theta/(1+\gamma)},
\end{equation}
which means that the motion is always subdiffusive.
On the other hand, if one understands the noise $\eta$ in Eq.(\ref{lam}) in the sense of II, the Langevin
equation for $x(\tau)$ contains a multiplicative term $|x|^{-\theta-\gamma}$ and
the counterpart of Eq.(\ref{fp01}) reads
\begin{equation}
\label{fp01i}
\frac{\partial p_0(x,\tau)}{\partial \tau}=
\frac{\partial^\alpha[|x|^{-\alpha(\theta+\gamma)}
p_0(x,\tau)]}{\partial|x|^\alpha}.
\end{equation}
Solving the above equation in the diffusion limit and evaluating the integral (\ref{inte})
produces the distribution $p(x,t)$ with the asymptotics $\sim|x|^{-1-\alpha}$.
Comparison with Eq.(\ref{asyt}) indicates a qualitative difference between II and SI for the L\'evy flights:
whereas for II the presence of the multiplicative factor in the noise influences only the time dependence,
for SI it modifies the tail shape and makes a finite variance possible.
Given this difference, it is interesting to check what
the density distributions for the other interpretations look like. To find those distributions,
we have to resort to the numerical analysis. According to Eq.(\ref{riem}), the discretized form
of the Langevin equation for the arbitrary interpretation is given by the following expression,
\begin{equation}
\label{riem1}
x_{n+1}=x_n+[(1-\lambda_I)x_n+\lambda_I x_{n+1}]^{-\gamma}\eta_n h^{1/\alpha},
\end{equation}
where $\eta_n$ is sampled from a symmetric stable distribution according to a well-known
algorithm \cite{wer1} and $h$ is a time step. The expression for $x_{n+1}$ is not explicit and Eq.(\ref{riem1}) can be
solved exactly only for a few values of $\gamma$; in general, it must be solved numerically at every integration step.
For that purpose, we applied the parabolic interpolation scheme (the Muller method) \cite{ral}.
A simple modification of the standard method \cite{wer} allows us to evaluate
the variable $x$ as a function of the physical time -- determined by the second equation (\ref{lam}) --
without an explicit derivation of the subordinator $t(\tau)$.
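A minimal numerical sketch of this procedure (our own illustration: we sample $\eta_n$ with the Chambers-Mallows-Stuck formula and replace the Muller method of the text by a simple fixed-point iteration, which suffices for a small time step $h$; both function names are ours):

```python
import math
import random

def stable_sample(alpha, rng):
    """Symmetric alpha-stable variate (Chambers-Mallows-Stuck, alpha != 1)."""
    v = (rng.random() - 0.5) * math.pi          # uniform on (-pi/2, pi/2)
    w = rng.expovariate(1.0)                    # standard exponential
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

def implicit_step(x, gamma, lam, dnoise, n_iter=200):
    """Solve x1 = x + |(1-lam)*x + lam*x1|**(-gamma) * dnoise, cf. Eq. (riem1),
    by fixed-point iteration started from the explicit (Ito) update."""
    x1 = x + abs(x) ** (-gamma) * dnoise
    for _ in range(n_iter):
        x1 = x + abs((1.0 - lam) * x + lam * x1) ** (-gamma) * dnoise
    return x1
```

The returned value satisfies the implicit relation to machine accuracy whenever the iteration converges; near $x=0$ the scheme requires special care because $f$ diverges there.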
Examples of distributions for a few values of $\lambda_I$ are presented in Fig.1, separately
for the central part and for the tails. We observe a similar slope of the tail for all $\lambda_I>0.2$; in particular,
it is almost identical for SI and AII. Near the origin, in turn, some differences emerge and the height of the peak
rises with $\lambda_I$. For all the interpretations, $p(0,t)=0$, which is a consequence of the divergence
of $f(x)$ at the origin.
\subsection{Boundary effects and anomalous diffusion}
Let us assume that the particle is subjected to a noise whose increments are governed by the symmetric
L\'evy stable distribution. The transport proceeds inside a medium with traps, and the intensity of the
random time density
is given by the function $g(x)$; the dynamics is described by the Langevin equation with the additive noise,
Eq.(\ref{la}). Such systems are characterised by an infinite variance which, for massive particles,
violates physical principles; attempts have therefore been undertaken to suppress the long tails of the L\'evy density.
A simple remedy is to modify the stable distribution so as to make the tail
steeper. Such a truncation may take the form of a simple cut-off \cite{man} or involve some rapidly falling
function: e.g. an exponential \cite{kop} or a power-law $|x|^{-\beta}$,
where $\beta\ge2-\alpha$ \cite{sok}. Processes involving the truncated distributions actually
converge to the normal distribution, according to the central limit theorem, but
the power-law tails may be visible for a long time due to a slow convergence.
On the other hand, the variance becomes finite if one takes into account the finiteness of
the particle velocity (L\'evy walk) \cite{met}.
In the present approach, the finite variance results from a variable intensity of the noise in
the region close to the boundary whereas intervals of the noise are always distributed according to
the stable distribution. As a consequence, the Langevin equation acquires a multiplicative noise and
the requirement of the finiteness of the variance imposes a condition on the form of the position-dependent
noise intensity. We assume, in addition, that the boundary effects do not affect the trap structure.
The presence of such a noise is natural: one can expect that the additional complication of the environment structure
in the vicinity of the boundary introduces a dependence of the noise on the process value and
requires a more general approach than an equation with the additive noise.
The emergence of the multiplicative noise near a boundary has been experimentally demonstrated for some physical systems.
For example, a description of the diffusion of colloidal particles in terms of a constant diffusion coefficient
appears possible only if a particle remains far from any boundary \cite{brett}.
Moreover, the presence of the multiplicative noise in a description of particles near a wall
is necessary to reach a proper thermal equilibrium \cite{lau}.
We assume that the boundary effects become important at a distance $|x|=L$ and then the Langevin
equation acquires the multiplicative noise. Its intensity is parametrised by $f(x)$ as a falling
power-law function,
\begin{eqnarray}
\label{fodxtr}
f(x)=\left\{\begin{array}{ll}
1 &\mbox{for $|x|\le L$}\\
L^\gamma|x|^{-\gamma} &\mbox{for $|x|>L$},
\end{array}
\right.
\end{eqnarray}
where $\gamma>0$; the dynamics is governed by Eq.(\ref{lam}) and
the noise in the first equation will be interpreted according to SI.
A new variable,
\begin{eqnarray}
\label{trxy}
y(x)=\left\{\begin{array}{ll}
x &\mbox{for $|x|\le L$}\\
\frac{L}{1+\gamma}\left[\gamma+(|x|/L)^{1+\gamma}\right]\mbox{sign }x &\mbox{for $|x|>L$},
\end{array}
\right.
\end{eqnarray}
allows us to get rid of the multiplicative factor $f(x)$ in Eq.(\ref{lam})
and the equation corresponding to the first equation (\ref{lam1}) takes the form
\begin{eqnarray}
\label{lamtr}
dx(\tau)&=&|x|^{-\theta}\eta(d\tau)\mbox{\hskip6cm for $|x|\le L$}\nonumber\\
dy(\tau)&=&[(|y|-L)(1+\gamma)L^\gamma+L^{1+\gamma}]^{-\theta/(1+\gamma)}\eta(d\tau)\mbox{\hskip13mm for $|x|>L$.}
\end{eqnarray}
In general, the density distribution can be obtained in a closed form only for $|x|\le L$ and $|x|\gg L$.
We first consider the case $\theta=0$, corresponding to a uniform trap distribution, and
follow a method similar to that of the preceding subsection: we solve the
Fokker-Planck equation and integrate over the operational time. The final result for $|x|>L$ reads
\begin{equation}
\label{soltr}
p(x,t)=A^{-1}t^{-\beta/\alpha}L^{-\gamma}
|x|^\gamma L_\alpha(t^{-\beta/\alpha}|y(x)|/A),
\end{equation}
where $A=\Gamma(1+\beta)^{-1/\alpha}$.
The asymptotic form of the above equation, $p(x,t)\sim|x|^{-1-\alpha-\alpha\gamma}$, reveals
the meaning of the parameter $\gamma$: the limit $\gamma\to\infty$, for which $f(x)=1-\Theta(|x|-L)$,
yields $p(x,t)\to 0$ ($|x|>L$),
i.e. $x=\pm L$ becomes an absorbing barrier and the surface is reduced to single points. For a finite value
of $\gamma$, the system is not strictly confined but the probability density of finding
the particle in the outer region rapidly decreases with the distance if $\gamma$ is large.
Moreover, for a large $L$, a very long time is needed to observe the particle at distances markedly
larger than $L$.
The diffusion problem is well-defined since the variance is finite if $\gamma>2/\alpha-1$ and,
in the following, we will calculate the variance under this assumption.
The system defined by Eq.(\ref{lamtr}) changes its properties at $|x|=L$ and one can expect
a different diffusion behaviour in the surface region, compared to the bulk.
If $L$ is sufficiently large, the dynamics reduces
to the truncated L\'evy flights and the diffusion properties are equivalent to the Gaussian
case \cite{trunc}, provided the time is relatively small; then one gets the standard anomalous
diffusion law,
\begin{equation}
\label{war1t0}
\langle x^2\rangle(t)\sim t^\beta.
\end{equation}
The variance in the limit $t\to\infty$ follows from a direct evaluation of the integral. One can easily
demonstrate that the contribution from $|x|<L$ is small for a large time. Then
\begin{equation}
\label{war2t0}
\langle x^2\rangle(t)=\frac{2t^{-\beta/\alpha}}{AL^\gamma}\int_L^\infty x^{\gamma+2}
L_\alpha\left(\frac{|y(x)|}{At^{\beta/\alpha}}\right)dx
\end{equation}
and, introducing a new variable $x'=t^{-\beta/\alpha(1+\gamma)}x$, we obtain
\begin{equation}
\label{ygr}
y=L+\frac{L^{-\gamma}}{1+\gamma}(x'^{1+\gamma}t^{\beta/\alpha}-L^{1+\gamma})\to
\frac{L^{-\gamma}}{1+\gamma}x'^{1+\gamma}t^{\beta/\alpha}~~~~(t\to\infty)
\end{equation}
for any $x'$ which allows us to reduce Eq.(\ref{war2t0}) to the form
\begin{equation}
\label{war2t01}
\langle x^2\rangle(t)=\frac{2t^{2\beta/\alpha(1+\gamma)}}{AL^\gamma}\int_{x_0}^\infty x^{\gamma+2}
L_\alpha\left(\frac{L^{-\gamma}}{A(1+\gamma)}x^{1+\gamma}\right)dx,
\end{equation}
where $x_0=L/t^{\beta/\alpha(1+\gamma)}\to 0$. The integral can be evaluated by applying the standard
properties of the H-function and, after lengthy but straightforward calculations, we obtain the final
formula,
\begin{equation}
\label{war2t0f}
\langle x^2\rangle(t)=-\frac{2}{\pi\alpha}L^{\alpha\gamma c_\gamma}(1+\gamma)^{\alpha c_\gamma}
\Gamma(1+\beta)^{-c_\gamma}\Gamma(-c_\gamma)\Gamma(\alpha c_\gamma)\sin\left(\frac{\pi}{1+\gamma}\right)t^{\beta c_\gamma},
\end{equation}
where $c_\gamma=2/\alpha(1+\gamma)$. Since $c_\gamma<1$, the motion is always subdiffusive: the variance
in the limit $t\to\infty$ rises with time slower than linearly and also slower than in the case
of a small time, Eq.(\ref{war1t0}). The slope explicitly depends on $\alpha$, in contrast to Eq.(\ref{war1t0}),
and drops to zero for a sharp edge ($\gamma\to\infty$). Therefore,
we observe two diffusion regimes; the standard form of the variance (\ref{war1t0}),
which is typical for the Gaussian case and the truncated L\'evy flights,
emerges for relatively small times and corresponds to trajectories
abiding not far from the origin. On the other hand, when at large time the surface region becomes
important, diffusion is weaker.
Those two diffusion regimes are illustrated in Fig.2 for the Cauchy distribution ($\alpha=1$)
where the variance was obtained from $p(x,t)$ by a direct integral evaluation: slopes agree
with Eq.(\ref{war1t0}) and Eq.(\ref{war2t0f}).
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig2.eps}
\caption{(Colour online) Variance as a function of time for $\alpha=1$ and $\beta=1/2$.
The curves (from bottom to top) correspond to: 1. $\gamma=4$ and $L=10$; 2. $\gamma=2$ and $L=10$;
3. $\gamma=2$ and $L=100$. Straight-line segments (marked by the red lines) on the left-hand side
have the slope 1/2 and those on the right-hand side: 1/5, 1/3 and 1/3.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig3.eps}
\caption{(Colour online) Variance as a function of time for a few sets of the parameters
$\alpha$, $\theta$ and $\beta$; the other parameters: $L=10$ and $\gamma=2$.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig4.eps}
\caption{Slope of the time-dependence of the variance, $t^\mu$, as a function of $\theta$ for $\beta=0.5$
and two values of $\alpha$: 0.5 (lower points) and 1.5 (upper points). The other parameters: $\gamma=2$ and $L=10$.
Solid lines mark the function $\mu=0.5/(1+c(\alpha)\theta)$, where $c(0.5)=1.51$ and $c(1.5)=0.51$.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig5.eps}
\caption{Slope as a function of $\alpha$ for two values of $\theta$, the other parameters are the same as in Fig.4.
Solid line marks the function $\mu=0.2+0.12\alpha$.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\includegraphics[width=95mm]{fig6.eps}
\caption{Slope as a function of $\beta$ for two values of $\theta$, the other parameters are the same as in Fig.4.
Solid lines mark the functions $\mu=0.09+0.545\beta$ and $\mu=-0.05+1.276\beta$ for the positive
and negative $\theta$, respectively.}
\end{figure}
\end{center}
If $\theta\ne 0$, Eq.(\ref{lamtr}) is not manageable analytically except in the non-physical case $|x|\gg L$.
The variance was therefore determined by solving Eq.(\ref{lam}) numerically, where
the multiplicative noise was treated according to Eq.(\ref{riem1}) with $\lambda_I=1/2$.
The numerical calculations show that the time-dependence of the variance has the same
form as Eq.(\ref{war1t0}) but with a modified index, $\sim t^\mu$.
This form was found for all the parameters -- $\theta$, $\alpha$ and $\beta$ --
if $L$ was large. The above observation is illustrated in Fig.3 where some
examples of $\langle x^2\rangle(t)$ are presented.
Dependence of the slope on the parameters is presented in the subsequent figures.
Fig.4 shows $\mu$ as a function of $\theta$ for two values of $\alpha$; it diminishes with $\theta$
and the numerical results reveal a dependence $\mu=0.5/(1+c(\alpha)\theta)$ (the coefficient
$c(\alpha)$ is indicated in the figure).
The dependence of $\mu$ on $\alpha$ is presented in Fig.5 for two values of $\theta$, both negative and positive.
Whereas in the former case $\mu$ is almost constant -- and larger than the value predicted by Eq.(\ref{war1t0}) --
for the positive $\theta$ we observe a linear growth in a wide range of $\alpha$;
$\mu(\alpha)$ becomes flat only at large $\alpha$. Finally, Fig.6 shows that the slope for a positive (negative)
$\theta$ rises with $\beta$ weaker (stronger) than for $\theta=0$ and both dependences are linear.
Superdiffusion emerges when $\theta$ is negative and $\beta$ is large.
The proposed formalism is applicable to diffusion problems where memory is connected
with nonhomogeneous medium structure. Therefore, we conclude with some remarks concerning the parameters
in the above analysis; they may be useful to compare the results with an experiment.
We interpreted the multiplicative noise $f(x)$ in Eq.(\ref{lam}) according to SI.
This interpretation is distinguished in stochastic problems because it constitutes a white-noise
limit of coloured noises for any $\alpha$ \cite{sro12}. However, the other interpretations
may also be important. For example, the experimental analysis of the colloidal particles
diffusion near the boundary favours AII ($\lambda_I=1$) \cite{brett}, as we have already mentioned.
We performed the numerical calculations for AII, similar to those for SI, and found
the same diffusion properties with respect both to the exponent and the proportionality coefficient.
This conclusion could be anticipated from Fig.1: tails of the distribution for both interpretations are
very similar. Moreover, results presented in Fig.4-6 are independent of $\gamma$ and $L$ if $L$ is
sufficiently large. The other parameters have a straightforward interpretation:
$\alpha$ defines the jump statistics, $\beta$ governs the trapping time, characterising the depth of
the effective trapping potential, and $\theta$ governs the trap distribution.
\section{Summary and conclusions}
The stochastic motion in a medium with traps was studied in terms of the Langevin equation and
the anomalous diffusion exponent was determined.
The memory effects were taken into account by a subordination of a Markovian process to a physical, random
time and the nonhomogeneity of the trap structure by a dependence of the time lag on the position, modelled by
a non-negative function $g(x)$. If one decouples effects related to the trap structure and memory,
the problem resolves itself to a multiplicative process subordinated to the random time.
The Langevin equation in the operational time describes a Markovian jumping process with a variable
rate of the Poissonian waiting-time distribution.
The density distribution can be exactly derived if $g(x)$ has a power-law form. It has
a stretched-Gaussian shape and this result is similar to the prediction of the quenched
trap model \cite{bou,ber}. However, in contrast to that model, the variance may rise with time faster than
linearly; besides the subdiffusion, observed for the homogeneous case, enhanced diffusion emerges.
If one interprets the position-dependence of the memory in terms of a variable trap density and
assumes it in a power-law form, such a density corresponds to a fractal pattern. Then the fractal dimension
determines the diffusion properties: the smaller the dimension, the faster the variance grows with time.
L\'evy flights are characterised by the infinite variance but a finite size of
the system may modify the random stimulation. In contrast to the usual truncation procedure
where long jumps are eliminated, our approach imposes a restriction on the noise intensity
by introducing a multiplicative noise to the Langevin equation. This noise may follow from
a complicated medium structure near the boundary which requires
a generalisation of a simple modelling in terms of the additive noise.
The resulting density distributions have fast falling tails. Therefore, diffusion inside
the substrate is well-determined and may be quantified in terms of the variance,
the time-dependence of which does not depend on the system size. Moreover,
the diffusion properties appear robust with respect to a particular interpretation of the multiplicative
noise in the Langevin equation. On the other hand, diffusion inside a layer near the boundary
is weaker than in the bulk and we observe two regimes with a different anomalous diffusion
exponent $\mu$. This exponent reflects both the magnitude of the memory in the system, described by the parameter
$\beta$, and the position-dependence of the memory, given by the parameter $\theta$: it diminishes
with $\theta$ and rises with $\beta$. In contrast to the case $\theta=0$, we observe a dependence
of $\mu$ on $\alpha$, but only for small $\alpha$ and positive $\theta$. This conclusion points at
a subtle relation between the nonhomogeneous distribution of the random time statistics
and the noise statistics with respect to the diffusion properties of systems with L\'evy flights.
\section*{APPENDIX A}
\setcounter{equation}{0}
\renewcommand{\theequation}{A\arabic{equation}}
In this Appendix, we demonstrate that the first part of Eq.(\ref{las})
traces back to a Markovian jumping process \cite{kam}.
This stationary process is defined by a jump-size distribution $Q(x)$ in a form of the L\'evy $\alpha$-stable and
symmetric distribution with a characteristic function,
\begin{equation}
\label{A.1}
{\widetilde Q}(k)=\exp(-K^\alpha |k|^\alpha)~~~~~~~~~~(\alpha\ne1, K>0).
\end{equation}
The particle performs instantaneous jumps and then rests for a time given by a waiting-time distribution
which is Poissonian,
\begin{equation}
\label{A.2}
w(t)=\nu(x){\mbox e}^{-\nu(x)t},
\end{equation}
where $\nu(x)$ denotes a position-dependent rate. The transition probability for infinitesimal time intervals
$\Delta t$ reads
\begin{equation}
\label{A.3}
p_{tr}(x,\Delta t|x', 0) = [1-\nu(x')\Delta t]\delta(x-x')+Q(x-x')\nu(x') \Delta t,
\end{equation}
where the first term corresponds to the case that no jump occurred within $\Delta t$ and the second one that
exactly one jump occurred. The differentiation over time,
\begin{equation}
\label{A.4}
\frac{\partial}{\partial t}p(x,t)=\lim_{\Delta t\to 0}\left[\int p_{tr}(x,\Delta t|x',0)
p(x',t)dx' - p(x,t)\right]/\Delta t,
\end{equation}
produces a master equation:
\begin{equation}
\label{A.5}
\frac{\partial}{\partial t}p(x,t) = -\nu(x)p(x,t) +
\int Q(x-x')\nu (x') p(x',t) dx'.
\end{equation}
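The jumping process defined by Eqs.(\ref{A.2})-(\ref{A.3}) can also be simulated directly (a sketch we add for illustration; the function names are ours, and the jump distribution $Q$ is left as a user-supplied sampler):

```python
import random

def simulate_jump_process(nu, sample_jump, t_max, rng, x0=0.0):
    """Markovian jumping process of this Appendix: the particle rests for an
    exponential waiting time with position-dependent rate nu(x), Eq. (A.2),
    then performs an instantaneous jump drawn from the distribution Q."""
    t, x = 0.0, x0
    times, positions = [0.0], [x0]
    while True:
        wait = rng.expovariate(nu(x))   # waiting time ~ nu(x) exp(-nu(x) t)
        if t + wait > t_max:
            break
        t += wait
        x += sample_jump(rng)
        times.append(t)
        positions.append(x)
    return times, positions
```

In the text $Q$ is the symmetric stable distribution (\ref{A.1}); any sampler for it can be passed as `sample_jump`.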
In the diffusion limit $k\to0$, Eq.(\ref{A.1}) can be approximated by
${\widetilde Q}(k)\approx 1-K^\alpha |k|^\alpha$
and inserting this expression into the Fourier transformed Eq.(\ref{A.5}) yields \cite{sro06}
\begin{equation}
\label{A.6}
\frac{\partial{\widetilde p}(k,t)}{\partial t}=-K^\alpha |k|^\alpha{\cal F}[\nu(x)p(x,t)].
\end{equation}
Inversion of the above equation yields a fractional Fokker-Planck equation in the form
\begin{equation}
\label{A.7}
\frac{\partial p(x,t)}{\partial t}=K^\alpha\frac{\partial^\alpha[\nu(x)p(x,t)]}{\partial|x|^\alpha},
\end{equation}
where the Weyl-Riesz operator is defined by the inverse Fourier transform:
$\frac{\partial^\alpha}{\partial|x|^\alpha}={\cal F}^{-1}(-|k|^\alpha)$. On the other hand, Eq.(\ref{A.7})
follows from the first of Eq.(\ref{las}) in the It\^o interpretation \cite{sche}.
\section*{APPENDIX B}
\setcounter{equation}{0}
\renewcommand{\theequation}{B\arabic{equation}}
In this Appendix, we derive Eq.(\ref{sol2}). The Bessel function ${\mbox K}_\nu(z)$ can be expressed
in terms of the Fox H-function \cite{kil}; in our case, the formula reads
\begin{eqnarray}
\label{B.1}
{\mbox K}_\nu(z)=\frac{1}{2}H_{0,2}^{2,0}\left[\frac{z^2}{4}\left|\begin{array}{l}
~~~~~~-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\\
\\
(-\nu/2,1),(\nu/2,1)
\end{array}\right.\right].
\end{eqnarray}
After applying standard properties of the H-function and straightforward calculations
we obtain the density in the form
\begin{eqnarray}
\label{B.2}
\bar p(x,u)=-\frac{1}{\beta}\frac{(1+\theta/2)^{\nu+2c/\beta}}{\Gamma(-\nu)}|x|^{(2+\theta)/\beta-1}
H_{0,2}^{2,0}\left[\xi\left|\begin{array}{l}
~~~~~~~~~~~~~~~~-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\\
\\
(c/\beta-\nu/2,1/\beta),(c/\beta+\nu/2,1/\beta)
\end{array}\right.\right],
\end{eqnarray}
where $\xi=(2+\theta)^{-2/\beta}|x|^{(2+\theta)/\beta}u\equiv\kappa u$. Inversion of the transform
raises the order of the H-function \cite{glo}; in the case of the function in Eq.(\ref{B.2}),
the inversion formula yields
\begin{eqnarray}
\label{B.3}
\frac{1}{t}
H_{1,2}^{2,0}\left[\frac{\kappa}{t}\left|\begin{array}{l}
~~~~~~~~~~~~~~~~~~~(0,1)\\
\\
(c/\beta-\nu/2,1/\beta),(c/\beta+\nu/2,1/\beta)
\end{array}\right.\right].
\end{eqnarray}
Finally, we obtain Eq.(\ref{sol2}).
\section{Introduction}
\label{intro}
The emergence of collective motion of living things, such as insects, birds, slime molds, bacteria
and fish is a fascinating far-from-equilibrium phenomenon which has attracted a great deal of cross-disciplinary attention
\cite{vicsek_zafeiris_12}.
The study of these systems falls under the broader umbrella of ``active matter''.
This term stands for collections of agents that are able to extract and
dissipate energy from their surrounding to produce systematic motion \cite{ramaswamy_10,marchetti_13}.
Non-living examples of active matter are chemically powered nanorods \cite{rueckner_07},
networks of actin fibers driven by molecular
motors \cite{humphrey_02} and swarms of
interacting robots \cite{rubenstein_14,wilson_14}.
In 1995, a minimal computational
model that captures the essentials of collective motion without entering into too much detail was introduced by Vicsek
\cite{vicsek_95_97}.
In this so-called Vicsek model (VM), agents are represented as point particles traveling at constant speed.
The agents follow
noisy interaction rules which
aim to align the velocity vector of any given particle with its neighbors.
Later, more sophisticated models that include additional interactions such as attraction and short-range repulsion
were introduced \cite{levine_00,couzin_02,romenskyy_13,grossmann_13}.
On the theoretical side, coarse-grained descriptions of the dynamics in terms of phenomenological hydrodynamic equations have been proposed on the basis of symmetry and conservation law arguments. Well-known examples are the Toner-Tu theory
for polar active matter \cite{toner_98,toner_12} and the theory by Kruse {\em et al.} for active polar gels \cite{kruse_05}.
While being very successful, a disadvantage of these approaches is
that no link between microscopic collision rules and the coefficients of the macroscopic equations is provided.
In particular, since all coefficients must be related to only a few microscopic parameters, they
cannot be varied independently and the actual parameter space should be more restricted
than the hydrodynamic equations
might suggest.
Furthermore, by just postulating equations there is the inherent danger that some terms or even entire equations
might have been omitted,
which could become relevant in new, previously untested, situations.
To address the missing-link issue, several groups have derived hydrodynamic equations from the underlying microscopic rules and
provided expressions for the transport coefficients in terms of microscopic parameters
\cite{aranson_05,bertin_06,mishra_10,ihle_11,menzel_12,romanczuk_12,peshkov_12}.
One of the first attempts to put the Toner-Tu theory
on a microscopic basis was published by Bertin {\em et al.}, who studied a Vicsek-type model with continuous time-dynamics
and binary interactions
by means of a Boltzmann approach \cite{bertin_06,bertin_09}.
While the authors recently clarified \cite{bertin_14A} that even at low density
their underlying microscopic model is not identical
to the Vicsek model, many predictions agree qualitatively with those for the VM.
This Boltzmann approach was later extended to systems of self-propelled particles
with nematic and metric-free interactions, and hydrodynamic equations were derived \cite{peshkov_12,ngo_12}.
Very recently, the very basis of these derivations -- the Boltzmann equation and its validity in active matter --
has been critically assessed \cite{thueroff_13,ihle_14,ihle_14A}.
In particular, it has been shown that, at least near the threshold to collective motion,
the binary collision assumption is not valid in the VM
at realistic particle speeds and even at very low densities \cite{ihle_14}.
Furthermore, it has been
demonstrated that the mean-field assumption of molecular chaos is not justified near the threshold to collective motion
in soft active colloids
and that the Boltzmann theory must be amended by pre-collisional correlations \cite{hanke_13}.
A first attempt to rigorously include correlations by means of a ring-kinetic theory for Vicsek-type models was put forward
by Chou {\em et al.} \cite{chou_14}.
This theory is very complicated; work is still in progress to simplify it \cite{ihle_15}.
Nevertheless, arguments from ring-kinetic theory can be used to confirm the plausible presumption that
mean-field theories become reliable in the VM at large particle speeds (or time steps) and/or at large
particle densities \cite{FOOT2}.
Here, large density means that the average number $M$ of collision partners of a given agent
is much larger than one.
It is known from equilibrium
statistical mechanics that the behavior of spin models becomes mean-field, or more mean-field-like,
if the number of interaction partners is increased, which can be achieved either by extending the range of interaction or
by increasing the spatial dimension.
It appears reasonable to assume a similar tendency in the VM, which consists of moving ``spins''.
The large density limit $M\gg 1$ is beyond the capability of Boltzmann approaches because
Boltzmann equations are restricted to binary interactions.
However, a recently proposed Enskog-like theory \cite{ihle_11,ihle_13,ihle_14}
has no limitation on density and has already been applied to a model
with $M=2\ldots 7$ and metric-free interactions \cite{chou_12}.
This Enskog-like theory is based on an exact equation for a Markov chain in phase space.
In a previous paper \cite{ihle_11},
hydrodynamic equations were derived from this theory and its transport coefficients were given in terms
of infinite series.
In this paper, I analyze the transport coefficients in the large density limit, $M\gg 1$, and show
that they take a simple form.
This allows me to analytically evaluate the well-known density instability of the
polarly ordered phase near the flocking threshold \cite{chate_04_08,bertin_06}, and to obtain simple formulas and scaling laws for
the dispersion relation.
Note that a similar analysis for the opposite limit $M\ll 1$ has already been performed by Bertin {\em et al.}
\cite{bertin_09}.
The large density expansion performed in this paper should be beneficial
for several reasons.
First, one does not have to worry about the validity of the underlying mean-field assumption of Molecular Chaos:
As discussed above, this assumption is expected to be asymptotically valid for $M\rightarrow \infty$.
Thus, the obtained relations should be quantitatively correct in this limit.
As will be shown below, at large density, the restabilization of the homogeneous ordered phase happens closer
to the threshold. Hence, this effect
occurs within the validity domain of the hydrodynamic theory,
and, aiming at quantitative agreement, one is not forced to evaluate the kinetic equation directly as proposed
in Ref. \cite{bertin_09}.
Second, biologically realistic models of fish schools \cite{tegeder_95} and bird flocks \cite{ballerini_08} assume that
$M$ is between 2 and 7, thus an approach for large $M$ seems promising.
Third, similar to the opposite limit, $M\rightarrow 0$, the large density approach
leads to a simplification of the expressions for the transport coefficients and allows a physical interpretation of the
instability close to the threshold of collective motion.
An obvious concern about the large density limit is how realistic it is for active matter in general.
Of course, for example,
for systems of active colloids with short-ranged excluded volume interactions
it will be problematic.
However, it does make sense for systems with long-ranged alignment interactions.
Speaking in terms of the terminology put forward by Couzin et al. \cite{couzin_02}, the condition $M\gg 1$ can occur in a model
with a large ``zone of orientation'' and a very small ``zone of repulsion''.
Furthermore, in the process of doing the actual calculations and performing expansions in powers of $1/M$,
one realizes that error terms are often proportional to ${\rm exp}(-M)$.
For example, for $M=2$, this is already a reasonably small number.
Therefore, it is possible that the large density formulas derived in this paper
could still be valid at $M$ slightly larger than one without losing too much accuracy.
To the best of my knowledge, almost no analytical results exist for the Vicsek-model at high particle density.
This paper is intended to fill this void. One of the main results
is the prediction that the homogeneous ordered phase remains stable to small perturbations if the linear system size is chosen to be
smaller than $L^{*}=\pi\sqrt{2M}$.
Another main result is that in systems larger than $L^{*}$ the number of time steps to develop an instability is
equal to or larger than $16\pi\,M$.
\section{Vicsek model}
\label{sec:Vicsek}
In the two-dimensional Vicsek-model (VM) \cite{vicsek_95_97,nagy_07}, $N$ particles with average number density
$\rho_0=N/A$ move at constant
speed $v_0$.
The system of area $A$
evolves in discrete time intervals of size $\tau$.
The particles are assumed to have zero volume.
The evolution takes place in two steps: streaming and collision.
During streaming, the particle positions ${\bf x}_j(t)$
are updated in parallel according to
\begin{equation}
\label{STREAM}
{\bf x}_j(t+\tau)={\bf x}_j(t)+\tau {\bf v}_j(t)\,.
\end{equation}
Because the particle speeds remain the same at all times, the velocities ${\bf v}_j(t)$ are parametrized by the ``flying'' angles
$\theta_j$:
${\bf v}_j=v_0(\cos{\theta_j},\sin{\theta_j})$.
In the collision step,
the directions $\theta_j$
are modified in the following way:
a circle of radius $R$ is drawn around the focal particle $j$, and the average direction $\Phi_j$ of motion
of the
particles
within the circle is determined
according to
\begin{equation}
\label{COLLIS}
\Phi_j={\rm Arg}\left\{
\sum_{k}{\rm e}^{i\theta_k}\right\}
\end{equation}
where the sum goes over all particles inside the corresponding circle (including particle $j$).
The new flying angles
are then given as,
\begin{equation}
\label{VM_RULE}
\theta_j(t+\tau)=\Phi_j+\xi_j
\end{equation}
where $\xi_j$ is a random number uniformly distributed in
the interval $[-\eta/2,\eta/2]$.
In this paper, a forward-updating rule \cite{baglietto_08_09}, corresponding to the so-called standard Vicsek-model, is used:
the updated positions ${\bf x}_j(t+\tau)$ instead of the ``old'' positions ${\bf x}_j(t)$
enter the calculations of the average directions $\Phi_j$.
Important dimensionless parameters of the model are (i) the noise strength $\eta$,
(ii) the average number of particles in a collision circle, $M=\pi R^2 \rho_0$, which is basically a normalized density, and (iii)
the ratio $\gamma=R/\lambda$ of the range $R$ of the interaction to the mean displacement during streaming $\lambda=v_0\tau$
which can be interpreted as a mean free path (mfp).
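To make the update rules concrete, the streaming and collision steps, Eqs. (\ref{STREAM})--(\ref{VM_RULE}), can be sketched in a few lines of NumPy. The periodic box of side $L$ and the brute-force $O(N^2)$ neighbor search below are illustrative assumptions, not part of the model definition:

```python
import numpy as np

def vicsek_step(x, theta, v0, tau, R, eta, L, rng):
    """One step of the standard (forward-updating) Vicsek model in a periodic
    box of side L: streaming, Eq. (STREAM), then alignment with all particles
    within radius R, Eqs. (COLLIS) and (VM_RULE)."""
    v = v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    x = (x + tau * v) % L                    # streaming with periodic wrap
    d = x[:, None, :] - x[None, :, :]        # brute-force O(N^2) pair vectors
    d -= L * np.round(d / L)                 # minimum-image convention
    neigh = (d ** 2).sum(axis=2) <= R ** 2   # collision circles (self included)
    phi = np.angle((neigh * np.exp(1j * theta)[None, :]).sum(axis=1))
    xi = rng.uniform(-eta / 2, eta / 2, size=theta.size)
    return x, phi + xi
```

With the forward-updating rule, the neighbor search acts on the streamed positions, which is why the streaming step precedes the collision step in this sketch.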
\section{Hydrodynamic theory}
\label{sec:hydro}
\subsection{Macroscopic equations for the Vicsek model}
\label{sec:background}
Using Boltzmann's assumption of molecular chaos,
an Enskog-type kinetic equation for the one-particle density $f(\theta,{\bf x},t)$
was derived from the exact evolution equation of the $N$-particle probability.
This was done for the standard Vicsek-model (VM) \cite{ihle_11} as well as for other
microscopic models \cite{ihle_09,chou_12,romensky_14}.
By means of a Chapman-Enskog expansion and an angular Fourier transformation of the one-particle density,
\begin{equation}
f(\theta,{\bf x},t)=g_0({\bf x},t)+\sum_{k=1}^{\infty}
\left[g_k({\bf x},t)\cos{(k\theta)}+
h_k({\bf x},t)\sin{(k\theta)}\right]\,
\end{equation}
hydrodynamic
equations were obtained for the dimensionless particle density $\rho$ and momentum density $\vec{w}$ of the VM \cite{ihle_11}.
In this formulation, all times are scaled by the time step $\tau$, all lengths are scaled by the mean free path (mfp)
$\lambda=v_0 \tau$, and all other quantities are scaled accordingly.
For example, the actual two-dimensional number density $\tilde{\rho}$ and momentum density $\tilde{\bf w}$ can be obtained from their scaled versions as
$\tilde{\rho}=\rho/\lambda^2$ and $\tilde{\bf w}={\bf w} v_0/\lambda^2$.
To emphasize rotational invariance, the hydrodynamic equations are given in terms of the tensors
$\HT$, $\QTa$, and $\QTb$
which depend on spatial derivatives of $\rho$ and $\vec{w}$:
\begin{eqnarray}
\label{CONTIN}
\partial_t\rho+\nabla\cdot {\bf w}
&=&0 \\
\label{NAVIER}
\partial_{t}\vec{w}+\nabla\cdot\HT&=&-b\,\nabla\rho+(\Lambda-1)\vec{w}+
\QTa\cdot \vec{w}+\QTb\cdot\nabla\rho
\end{eqnarray}
with $b=(3-\Lambda)/4$. Here, $\Lambda$ is the amplification factor for the first Fourier mode, which also determines the
linear growth rate $(\Lambda-1)$
of the momentum density.
This is because density and momentum density are the zeroth and first order moments of the one-particle distribution function, respectively, and
therefore, the components of the momentum density are proportional to the first Fourier coefficients, $w_x\propto g_1$, $w_y\propto h_1$.
In Ref. \cite{ihle_11}, in the thermodynamic limit $N\rightarrow \infty$,
an infinite series was found for $\Lambda$,
\begin{eqnarray}
\nonumber
\Lambda&=&{4 \over \eta}{\rm sin}\left({\eta\over 2}\right)
{\rm e}^{-M_R} \sum_{n=1}^{\infty} {n^2 M_R^{n-1}\over n!}\,I(n) \\
\label{LAMBDA_DEF}
I(n)&=&{1\over (2\pi)^n}
\int_0^{2\pi}d\theta_1\ldots
\int_0^{2\pi}d\theta_n
\,\cos{\bar{\theta}}\,
\cos{\theta_1}
\end{eqnarray}
where $\bar{\theta}=\bar{\theta}(\theta_1,\theta_2,\ldots,\theta_n)$ is equal to the average angle $\Phi_1$
defined in Eq. (\ref{COLLIS}).
Here, $M_R$ is the {\em local} average number of particles within a collision circle centered around the position ${\bf x}$,
\begin{equation}
\label{M_R_DEF}
M_R({\bf x},t)=\int_{|{\bf x}-\bf{x'}|\leq R} \,d{\bf x'} \int_0^{2 \pi} d\theta\,
f(\theta,{\bf x'},t)
=\int_{|{\bf x}-\bf{x'}|\leq R} \,d{\bf x'} \rho({\bf x'},t)
\end{equation}
For a homogeneous system, one has $M_R=M=\pi R^2 \tilde{\rho}_0$.
If $\Lambda$ is larger than one, the disordered state of zero average motion becomes unstable
and a polarly ordered state with non-zero global momentum develops.
Thus, setting $\Lambda=1$ defines the threshold noise $\eta_C(M)$ of the flocking transition, at the level of homogeneous mean field theory.
The hydrodynamic equations, Eqs. (\ref{CONTIN}) and (\ref{NAVIER}),
are only valid in the vicinity of the transition to polar order, that is, for $|\Lambda-1|\ll 1$.
This condition was essential to obtain a closed set of just two equations, see Refs. \cite{ihle_11,ihle_14}.
Physically, this assumption means that the first order Fourier coefficients $g_1$ and $h_1$ are evolving so slowly that
the higher order coefficients, $g_2,g_3,\ldots$, become
enslaved to them \cite{FOOT1}.
Therefore, no additional equations for the temporal evolution of the higher order coefficients are needed.
The momentum flux tensor $\HT$ and the tensors $\QTa$, $\QTb$,
\begin{equation}
\label{TENSOR1}
\HT=\sum_{i=1}^5h_i\,\omi\;\;\;\;\;\;\;
\QTa=\sum_{i=1}^5q_i\,\omi \;\;\;\;\;\;
\QTb=\sum_{i=1}^5k_i\,\omi
\end{equation}
are given in terms of five symmetric traceless tensors $\omi$,
\begin{eqnarray}
\nonumber
\Omega_{1,\alpha\beta}&=&\partial_{\alpha}w_{\beta}+\partial_{\beta}w_{\alpha}
-\delta_{\alpha\beta}\partial_{\gamma}w_{\gamma} \\
\nonumber
\Omega_{2,\alpha\beta}&=&2\partial_{\alpha}\partial_{\beta}\rho
-\delta_{\alpha\beta}\partial^2_{\gamma}\rho \\
\nonumber
\Omega_{3,\alpha\beta}&=&2w_{\alpha}w_{\beta}
-\delta_{\alpha\beta}w^2 \\
\nonumber
\Omega_{4,\alpha\beta}&=&w_{\alpha}\partial_{\beta}\rho+w_{\beta}\partial_{\alpha}\rho
-\delta_{\alpha\beta}w_{\gamma}\partial_{\gamma}\rho \\
\label{OMEGA_DEF}
\Omega_{5,\alpha\beta}&=&2(\partial_{\alpha}\rho)(\partial_{\beta}\rho)
-\delta_{\alpha\beta}(\partial_{\gamma}\rho)^2\,.
\end{eqnarray}
The tensor $\Omega_1$ is the viscous stress tensor of a two-dimensional fluid.
The transport coefficients in Eq.\ (\ref{TENSOR1}) were obtained in the additional limit of large mean free path, $\lambda=v_0\tau\gg R$, and
are given in
Table \ref{TAB2}.
This means that contributions
from the so-called collisional momentum transfer, which are supposed to be relevant at small mean free paths, see Ref. \cite{ihle_09},
are neglected here.
The rationale for this choice was that at small mean free paths and not too large density, the mean-field assumption
(on which our theory is based) is supposed to break down anyway.
The transport coefficients depend on the following variables,
\begin{eqnarray}
\nonumber
p&=&
{4\over \eta} \sin{(\eta)}\sum_{n=1}^{\infty}
{{\rm e}^{-M_R}\over n!}n^2
M_R^{n-1} J_1(n) \\
\nonumber
q&=&
{4\pi \gamma^2\over \eta} \sin{(\eta)}\sum_{n=2}^{\infty}
{{\rm e}^{-M_R}\over n!}n^2(n-1)
M_R^{n-2} J_2(n)\\
\label{DEF_PQ}
S&=&
{8\pi\gamma^2\over \eta }\sin{\eta\over 2}
\sum_{n=2}^{\infty}
{{\rm e}^{-M_R}\over n!}
n^2(n-1) M_R^{n-2} J_3(n) \\
\nonumber
\Gamma&=&
{8\pi^2\gamma^4\over 3 \eta}\sin{\eta\over 2}
\sum_{n=3}^{\infty}
{{\rm e}^{-M_R}\over n!}
n^2 (n-1)(n-2)M_R^{n-3} J_4(n)
\end{eqnarray}
\begin{table}
\begin{center}
{\renewcommand{\arraystretch}{1.4}
\large
\begin{tabular}{|r||c|c|c|}
\hline
$j$ & $h_j$ & $q_j$ & $k_j$ \\
\hline
\hline
$1$ & ${1+p\over 8(p-1)}$ & ${S\over 2(p-1)}$ & $ {S\over 8 (p-1)}$ \\
\hline
$2$ & $-{p^2+10p+1\over 96(p-1)^2}$& $-{S\over 4(p-1)^2}$ & $-{S (p+5)\over 96(p-1)^2}$ \\
\hline
$3$ & $-{q\over 2(p-1)} $ & $\Gamma-{Sq\over p-1}$ & ${\Gamma\over 4}-{Sq\over 4(p-1)}$ \\
\hline
$4$ & ${q(1+p)\over 4(p-1)^2}$ & ${\Gamma\over 2}-{Sq(p-3)\over 2(p-1)^2}$ & ${\Gamma\over 12}-{Sq(p-4)\over 12(p-1)^2}$ \\
\hline
$5$ & $-{q(p^2+10p+1)\over 48(p-1)^3}$ & ${\Gamma\over 24}-{Sq(p^2-2p+13)\over24(p-1)^3}$ & $-{Sq(p+5)\over 48(p-1)^3}$ \\
\hline
\end{tabular}
}
\caption{The transport coefficients $h_j$, $q_j$ and $k_j$,
defined in Eq. (\ref{TENSOR1}), are expressed as functions of
$\Gamma$, $S$, $p$, $q$, see Eq. (\ref{DEF_PQ}).
}
\label{TAB2}
\end{center}
\vspace*{-5ex}
\end{table}
where $\gamma$ is the ratio of the interaction radius to the mfp, $\gamma=R/(\tau v_0)$.
Note that these coefficients have a non-local dependence on position through the normalized density $M_R$.
The transport coefficients contain the following four types of $n-$dimensional integrals,
\begin{equation}
\label{INTEGRALS}
J_m(n)={1\over (2\pi)^n}
\int_0^{2\pi}d\theta_1\ldots
\int_0^{2\pi}d\theta_n\,
\Psi_m
\end{equation}
where $\Psi_m$ is given
by $\Psi_1=
\cos^2{\bar{\theta}}\,
\cos{2\theta_1}$,
$\Psi_2=
\cos{\bar{\theta}}\,\sin{\bar{\theta}}\,
\cos{\theta_1}\,\sin{\theta_2}
$,\\
$\Psi_3=
\cos{\bar{\theta}}\,
\cos{\theta_1}\,\cos{2 \theta_2}
$, and
$\Psi_4=
\cos{\bar{\theta}}\,
\cos{\theta_1}\,\cos{\theta_2}\,\cos{\theta_3}$.
The average angle
$\bar{\theta}\equiv \Phi_1$ is a function of the angles
$\theta_1,\theta_2,\ldots \theta_n$, see Eq. (\ref{COLLIS}).
The Navier-Stokes-type equation, Eq.\ (\ref{NAVIER}) is consistent with the one from Toner-Tu theory
\cite{toner_98,toner_12} but
contains
additional gradient terms.
The additional terms are a result of a Chapman-Enskog expansion that includes all terms up to {\em third order} in a formal expansion parameter.
Discussions of higher order expansions can be found in Refs. \cite{ihle_14,peshkov_14,ihle_14A,bertin_14A}.
Eq. (\ref{NAVIER}) has a simple homogeneous flocking solution:
$\vec{w}=w_0\,{\bf \hat{n}}$ and $\rho=\rho_0$. The amplitude of the flow is given by
\begin{equation}
\label{W0_DEF}
w_0=\sqrt{1-\Lambda\over q_3}\,.
\end{equation}
\subsection{Large $M$ expansion}
\label{sec:largeM}
Using an analogy to the freely-jointed chain model for polymers,
the angular integrals, Eqs. (\ref{INTEGRALS}), can be asymptotically evaluated in the limit $n\gg 1$ \cite{ihle_15}.
To leading order, the results are \cite{ihle_10_V1},
\begin{eqnarray}
\nonumber
J_1(n)&\sim &{1\over 8 n} \\
\nonumber
J_2(n)& \sim &{1\over 8 n} \\
\nonumber
J_3(n)& \sim &- {\sqrt{\pi}\over 32 n^{3/2}} \\
\label{J_DEFS}
J_4(n)& \sim &- {3 \sqrt{\pi}\over 32 n^{3/2}}
\end{eqnarray}
The angular integral in the definition of $\Lambda$, Eq. (\ref{LAMBDA_DEF}), is evaluated similarly,
\begin{equation}
\label{I_ASYM}
I(n)\sim \sqrt{\pi \over 16 n}.
\end{equation}
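These asymptotic forms are easy to check numerically. The following Monte-Carlo sketch (purely illustrative; the sample size and seed are arbitrary choices) estimates the integral $I(n)$ of Eq. (\ref{LAMBDA_DEF}) by direct sampling and compares it with the asymptotic result (\ref{I_ASYM}):

```python
import numpy as np

rng = np.random.default_rng(12345)

def I_mc(n, samples=200_000):
    """Monte Carlo estimate of I(n), Eq. (LAMBDA_DEF): the average of
    cos(theta_bar) * cos(theta_1) over n i.i.d. uniform angles, where
    theta_bar = Arg(sum_k exp(i*theta_k)) as in Eq. (COLLIS)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(samples, n))
    theta_bar = np.angle(np.exp(1j * theta).sum(axis=1))
    return float(np.mean(np.cos(theta_bar) * np.cos(theta[:, 0])))

# n = 1 gives theta_bar = theta_1, so I(1) = <cos^2 theta> = 1/2 exactly;
# for larger n the estimate approaches the asymptotic sqrt(pi/(16 n)).
print(I_mc(1), I_mc(20), np.sqrt(np.pi / (16 * 20)))
```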
Even with these approximations,
the infinite series in Eqs. (\ref{LAMBDA_DEF}) and (\ref{DEF_PQ}) still cannot be calculated exactly.
However, we note that they can be written as an average over a
Poisson distribution.
This is a consequence of the mean-field assumption of molecular chaos, which predicts that the density fluctuations are equal to those of
an ideal gas, i.e.,
are Poisson distributed.
For example, using Eq. (\ref{I_ASYM}) we can rewrite the amplification coefficient $\Lambda$ from Eq. (\ref{LAMBDA_DEF}) as
\begin{equation}
\label{NEW_LAM}
\Lambda={\sqrt{\pi} \over \eta}{\rm sin}\left({\eta\over 2}\right)
\big\langle (n+1)^{1/2}\big\rangle
\end{equation}
where the average of an arbitrary function $g(n)$ is defined as
\begin{eqnarray}
\nonumber
\langle g(n) \rangle &\equiv & \sum_{n=0}^{\infty} {\rm e}^{-M_R} {M_R^n\,g(n)\over n!} \\
\label{AVERAGE_DEF}
&=& \sum_{n=1}^{\infty} {\rm e}^{-M_R} {M_R^{n-1}\, g(n-1)\over (n-1)!}
\end{eqnarray}
Expanding $g$ around the mean value $M$ of the distribution
one obtains,
\begin{equation}
\label{TAYLOR_G}
\langle g(n) \rangle =g(M)+\sum_{k=1}^{\infty}g^{(k)}(M) {\mu_k \over k!}
\end{equation}
where $g^{(k)}(M)$ is the $k$-th derivative of $g$ evaluated at $n=M$, and $\mu_k$ are the so-called central moments,
\begin{equation}
\label{CENTRAL_DEF}
\mu_k=\langle (n-M)^k\rangle
\end{equation}
which are known for the Poisson distribution. For example, one has
\begin{eqnarray}
\nonumber
\mu_1&=&0 \\
\nonumber
\mu_2&=&\mu_3=M\\
\label{CENTRAL_POISS}
\mu_4&=&M+3M^2
\end{eqnarray}
Thus, the following approximation is obtained,
\begin{equation}
\label{APPROX_G}
\langle g(n)\rangle=g(M)+{M\over 2} g''(M)+{M\over 6}g'''(M)+{M+3M^2\over 4!} g^{(4)}+\ldots
\end{equation}
To evaluate transport coefficients, averages over polynomials in $n$ are needed.
Therefore, we investigate a generic example, $g_0=(n+\epsilon)^{\alpha}$
with some power $\alpha$ and a positive constant $\epsilon$.
The approximation, Eq. (\ref{APPROX_G}), then gives, for $\epsilon\ll M$,
\begin{equation}
\label{APPROX_POLYN}
\langle g_0 \rangle \approx M^{\alpha}
\left(1+{\alpha(\alpha-1)\over 2 M} +{\alpha(\alpha-1)(\alpha-2)\over 6M^2}+{\alpha(\alpha-1)(\alpha-2)(\alpha-3)[(1/M)+3]\over 24 M^2}
\right)
\end{equation}
This approximation becomes accurate for $M\gg 1$. For very large $M$ it suffices to replace $\langle g\rangle$ by $g(M)$.
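The quality of this expansion can be checked directly. The sketch below (an illustration only; the values of $M$ and $\alpha$ are arbitrary choices) compares the exact Poisson average of $g_0=n^{\alpha}$, i.e.\ the $\epsilon\to 0$ case, with approximation (\ref{APPROX_POLYN}):

```python
import math

def poisson_average(g, M):
    """Exact Poisson average <g(n)>, Eq. (AVERAGE_DEF), truncated far in the tail."""
    nmax = int(M + 12 * math.sqrt(M) + 20)
    return sum(math.exp(n * math.log(M) - M - math.lgamma(n + 1)) * g(n)
               for n in range(nmax + 1))

def large_M_approx(alpha, M):
    """Approximation (APPROX_POLYN) for <n^alpha>, i.e. the epsilon -> 0 case."""
    a = alpha
    return M ** a * (1 + a * (a - 1) / (2 * M)
                       + a * (a - 1) * (a - 2) / (6 * M ** 2)
                       + a * (a - 1) * (a - 2) * (a - 3) * (1 / M + 3) / (24 * M ** 2))

M, alpha = 50.0, 0.5
print(poisson_average(lambda n: n ** alpha, M), large_M_approx(alpha, M))
```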
Thus, replacing $n$ by $M$ in Eq. (\ref{NEW_LAM})
leads to the leading-order approximation for the amplification factor $\Lambda$,
\begin{equation}
\label{NEW_LAM2}
\Lambda\approx {\sqrt{\pi} \over \eta}{\rm sin}\left({\eta\over 2}\right)
\,M^{1/2}\,.
\end{equation}
Thus, fulfilling the critical noise condition $\Lambda=1$
in the limit of infinite density $M\rightarrow \infty$ requires that the critical noise $\eta_C$ approaches $2\pi$.
Expanding $\eta_C$ around its infinite density limit gives,
\begin{equation}
\label{LARGE_M_ETAC}
\eta_C\approx 2\pi -4 \sqrt{\pi\over M}\,\;\;\;{\rm for}\,M\gg 1
\end{equation}
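This expansion is readily verified numerically. Assuming the leading-order form (\ref{NEW_LAM2}) for $\Lambda$ (so the check tests only the expansion, not the full series), one can solve $\Lambda(\eta_C)=1$ by bisection and compare with Eq. (\ref{LARGE_M_ETAC}):

```python
import math

def amplification(eta, M):
    """Leading-order amplification factor, Eq. (NEW_LAM2)."""
    return math.sqrt(math.pi) / eta * math.sin(eta / 2) * math.sqrt(M)

def eta_c_numeric(M):
    """Threshold noise from Lambda(eta_C) = 1 by bisection; Lambda is
    monotonically decreasing in eta on (0, 2*pi)."""
    lo, hi = 1e-6, 2 * math.pi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if amplification(mid, M) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M = 10_000.0
print(eta_c_numeric(M), 2 * math.pi - 4 * math.sqrt(math.pi / M))
```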
Approximation (\ref{LARGE_M_ETAC}) leads to
\begin{eqnarray}
\nonumber
{\rm sin}\left({\eta_C\over 2}\right)&\approx & 2\sqrt{\pi\over M} \\
\label{SIN_APPROX}
{\rm sin}\left(\eta_C\right)&\approx & -4\sqrt{\pi\over M}
\end{eqnarray}
Introducing the relative noise distance $\delta$ to the threshold,
\begin{equation}
\label{DELTA_DEF}
\delta={\eta_C-\eta\over \eta_C}
\end{equation}
we consider the difference of
the amplification factor to its value at the threshold, $\Lambda(\eta_C)=1$, and find
\begin{equation}
\label{LAMBDA_EXPAND}
\Lambda-1\approx {\sqrt{\pi}\over 2\pi}\left[{\rm sin}\left({\eta\over 2}\right)-{\rm sin}\left({\eta_C\over 2}\right)\right] M^{1/2}\approx \delta {\sqrt{ \pi M}\over 2},\;\;\;{\rm for}\,\delta\ll 1\,.
\end{equation}
As a preliminary to calculate transport coefficients, we express the variables $p$, $q$, $S$ and $\Gamma$ from Eqs. (\ref{DEF_PQ}),
in terms of averages over the Poisson distribution,
\begin{eqnarray}
\nonumber
p&=&
{4\over \eta} \sin{(\eta)}
\langle (n+1) J_1(n+1)\rangle \approx
{1\over 2 \eta} \sin{(\eta)}
\\
\nonumber
q&=&
{4\pi \gamma^2\over \eta} \sin{(\eta)}
\langle (n+2) J_2(n+2)\rangle \approx
{\pi \gamma^2\over 2\eta} \sin{(\eta)}
\\
\label{EVAL_PQ}
S&=&
{8\pi\gamma^2\over \eta }\sin{\eta\over 2}
\langle (n+2) J_3(n+2)\rangle \approx
-{\pi^{3/2}\gamma^2\over 4\eta \sqrt{M} }\sin{\eta\over 2}
\\
\nonumber
\Gamma&=&
{8\pi^2\gamma^4\over 3 \eta}\sin{\eta\over 2}
\langle (n+3) J_4(n+3)\rangle \approx
-{\pi^{5/2}\gamma^4\over 4\eta \sqrt{M} }\sin{\eta\over 2}
\end{eqnarray}
and evaluate the averages to leading order by means of Eqs. (\ref{J_DEFS}).
Finally, using Eq. (\ref{SIN_APPROX}), at the threshold one obtains
\begin{eqnarray}
\nonumber
p(\eta_C)&\approx&
-{1\over \sqrt{\pi M}}
\\
\nonumber
q(\eta_C)&\approx&
-\gamma^2\sqrt{\pi \over M}
\\
\label{THRESH_PQ}
S(\eta_C)&\approx&
-{\pi \gamma^2\over 4 M}
\\
\nonumber
\Gamma(\eta_C)&\approx &
-{\pi^2 \gamma^4\over 4 M}
\end{eqnarray}
These expressions, Eqs. (\ref{THRESH_PQ}) are used to evaluate the transport coefficients from Table \ref{TAB2} in the limit of large $M$ and at the flocking threshold.
Only the leading order contributions in the limit of large $M$ are given.
For example, the product $Sq\sim M^{-3/2}$ in coefficients such as
$q_4$, $k_3$ and so on is neglected because $\Gamma\sim M^{-1}$ decays less strongly in the infinite density limit.
The results are given in Table \ref{TAB3}.
\begin{table}
\begin{center}
{\renewcommand{\arraystretch}{1.4}
\large
\begin{tabular}{|r||c|c|c|}
\hline
$j$ & $h_j$ & $q_j$ & $k_j$ \\
\hline
\hline
$1$ & $-{1\over 8} $ & $-{S\over 2}\approx {\pi \gamma^2\over 8 M} $ & $ -{S\over 8 }\approx {\pi \gamma^2\over 32 M} $ \\
\hline
$2$ & $-{1+10p\over 96}\approx -{1\over 96} $ & $-{S\over 4}\approx {\pi \gamma^2\over 16 M}$ & $-{5\, S \over 96}\approx {5 \pi \gamma^2\over 384 M} $ \\
\hline
$3$ & ${q\over 2}\approx -{\gamma^2\over 2} \sqrt{\pi\over M} $ & $\Gamma+Sq \approx -{\pi^2 \gamma^4\over 4 M}$ &
${\Gamma\over 4}+{Sq\over 4} \approx -{\pi^2 \gamma^4\over 16 M}$ \\
\hline
$4$ & ${q\over 4}\approx -{\gamma^2\over 4}\sqrt{\pi\over M}$ & ${\Gamma\over 2}+{3 \,Sq\over 2}\approx -{\pi^2 \gamma^4\over 8 M}$ & ${\Gamma\over 12}+{Sq\over 3} \approx -{\pi^2 \gamma^4\over 48 M}$ \\
\hline
$5$ & $ {q(1+10p)\over 48}\approx -{\gamma^2\over 48}\sqrt{\pi\over M} $ & ${\Gamma\over 24}+{13 \,Sq\over 24}\approx -{\pi^2 \gamma^4\over 96 M} $ & $ { 5\,Sq\over 48} \approx {5 \pi^3 \gamma^6\over 768 M^2}$ \\
\hline
\end{tabular}
}
\caption{The transport coefficients $h_j$, $q_j$ and $k_j$,
defined in Eq. (\ref{TENSOR1}), evaluated at the threshold $\eta=\eta_C$, in the limit $M\gg 1$.
}
\label{TAB3}
\end{center}
\vspace*{-5ex}
\end{table}
It is instructive to evaluate the momentum density of a homogeneous, ordered system close to the threshold, at $\delta\ll 1$.
According to Eqs. (\ref{W0_DEF}), (\ref{LAMBDA_EXPAND}), and Table \ref{TAB3}, we find
\begin{equation}
\label{W0_RES}
w_0=(2\delta)^{1/2} \pi^{-3/4} \gamma^{-2} M^{3/4}
\end{equation}
Even if the distance to the flocking threshold $\delta$, is tiny, the amplitude $w_0$ eventually diverges in the infinite density limit.
It also has a strong dependence on the mean free path ratio $\gamma=R/\lambda$ and diverges for infinite mean free path $\lambda$.
This seemingly unphysical behavior
is just a consequence of the particular choice of dimensionless variables where all lengths are measured in units of the mean free path $\lambda$, and all times in units of the time step $\tau$.
The amplitude $w_0$ is a measure of global polar order but
it is not a convenient variable in the large density limit.
Instead, I ``renormalize'' $w_0$ by relating it to the polar order parameter $\Omega$, which is defined
as the average particle speed $\langle|{\bf v}|\rangle$ divided by the individual particle speed $v_0$.
This way, $\Omega$ will always be between zero and one, and a small $\Omega\ll 1$ is expected near the order-disorder transition.
One has
\begin{equation}
\Omega={\langle|{\bf v}|\rangle\over v_0}={w_0\over \rho}
\end{equation}
where $\rho$ is the dimensionless particle density.
It is related to the regular number density of a two-dimensional system, $\tilde{\rho}$,
by $\rho=\tilde{\rho} \lambda^2$.
Using the definition
$M=\pi R^2 \tilde{\rho}=\pi R^2 \rho/\lambda^2$, we find $\rho=M/(\pi \gamma^2)$, and
an expression for $\Omega$ is obtained,
\begin{equation}
\label{OMEGA_W0_CON}
\Omega={\pi \gamma^2 \over M} w_0
\end{equation}
Inserting $w_0$ from Eq. (\ref{W0_RES}), the order parameter near threshold follows as
\begin{equation}
\label{OMEGA_VS_DELTA}
\Omega=2^{1/2} \delta^{1/2} \pi^{1/4} M^{-1/4}
\end{equation}
In contrast to $w_0$, the order parameter does not depend on particle speed.
This is the expected result because (i) we work in the mean-field approximation, and (ii)
the collisional contribution to the transport coefficients was neglected earlier.
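As a simple consistency check of the algebra (with purely illustrative parameter values), Eqs. (\ref{W0_RES}) and (\ref{OMEGA_W0_CON}) can be combined numerically to confirm that the $\gamma$-dependence cancels, as claimed in Eq. (\ref{OMEGA_VS_DELTA}):

```python
import math

def w0_near_threshold(delta, gamma, M):
    """Flow amplitude close to threshold, Eq. (W0_RES)."""
    return math.sqrt(2 * delta) * math.pi ** -0.75 * gamma ** -2 * M ** 0.75

def order_parameter(delta, gamma, M):
    """Polar order parameter via Eq. (OMEGA_W0_CON)."""
    return (math.pi * gamma ** 2 / M) * w0_near_threshold(delta, gamma, M)

# Eq. (OMEGA_VS_DELTA): Omega = sqrt(2 delta) pi^{1/4} M^{-1/4}, gamma-independent
delta, M = 0.01, 400.0
predicted = math.sqrt(2 * delta) * math.pi ** 0.25 * M ** -0.25
for gamma in (0.5, 2.0, 10.0):
    print(gamma, order_parameter(delta, gamma, M), predicted)
```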
\subsection{Stability analysis}
\label{sec:stability}
A long wavelength instability has been reported in Vicsek-type models \cite{bertin_06,bertin_09,ihle_11,ihle_13}.
This instability is strongest for longitudinal sound-wave-like perturbations and exists in the ordered phase
at a range of noise values $\eta_S\le \eta \le \eta_C$,
next to the flocking threshold.
In this section, I reanalyze this instability in the large-$M$ limit
for a
longitudinal perturbation with wave vector
$\vec{k}=k\hat{n}$.
Without loss of generality, the collective motion is assumed to go in the $x$-direction, $\hat{n}=\hat{x}$.
A small perturbation of amplitude $\delta \rho$ is imposed on the homogeneous particle density $\rho_0$. A similar perturbation
$\delta w$ is added to the $x$-component of the homogeneous momentum density $w_{x,0}=w_0$, where $w_0$ is given in Eqs. (\ref{W0_DEF}) and (\ref{W0_RES}),
\begin{eqnarray}
\nonumber
\rho&=&\rho_0+\delta \rho\; {\rm e}^{ikx+\omega t} \\
\nonumber
w_x&=&w_0+\delta w\; {\rm e}^{ikx+\omega t} \\
\label{STAB_START}
w_y&=&0\,,
\end{eqnarray}
and $\omega$ is the complex growth rate of the perturbation.
From the continuity equation, Eq. (\ref{CONTIN}), a simple relation between $\delta \rho$ and $\delta w$ follows,
\begin{equation}
\label{RHO_W_REL}
\delta \rho=-{ik\over \omega} \delta w
\end{equation}
The hydrodynamic equation for the momentum density, Eq. (\ref{NAVIER}), in linear order of the perturbations leads to
\begin{eqnarray}
\label{NAVIER_STAB1}
& & (\omega-k^2 h_1 +2 ik w_0 h_3)\,\delta w
+(-ik^3 h_2+ik w_0^2 h_3'-k^2 w_0 h_4)\,\delta \rho= \\
\nonumber
& &(\Lambda-1+ik w_0 q_1+3 w_0^2 q_3)\,\delta w+ \\
\nonumber
& &(-ik b+w_0 \Lambda'
-k^2 w_0 q_2+ik w_0^2 q_4+w_0^3 q_3' +ik w_0^2 k_3)\,\delta \rho
\end{eqnarray}
Here, all transport coefficients are evaluated for a homogeneous system with constant density $\rho_0$.
The density derivatives of the coefficients $\Lambda$, $h_3$ and $q_3$ are also needed
and, for example, are given as
\begin{equation}
\Lambda' \equiv \left. {d \Lambda\over d\rho}\right|_{\rho=\rho_0}= \pi \gamma^2 {\partial \Lambda \over \partial M}\,.
\end{equation}
Using Eq. (\ref{NEW_LAM2}), one finds
\begin{equation}
\label{LAM_STRICH1}
{\partial\Lambda\over \partial M}={1\over 2 \eta} \sqrt{\pi\over M} {\rm sin}\left({\eta\over 2}\right)
\end{equation}
This expression is then evaluated at the threshold with the help of Eq. (\ref{SIN_APPROX}), leading to
\begin{equation}
\label{LAM_STRICH2}
\Lambda'(\eta_C)\approx {\pi \gamma^2\over 2 M}
\end{equation}
Similarly, one obtains
\begin{equation}
\label{Q3_STRICH}
q_3'(\eta_C)=\left.{d q_3\over d \rho}\right|_{\eta=\eta_C}\approx
{\pi^3 \gamma^6\over 8 M^2}\,,\;{\rm and}\;\;h_3'(\eta_C)\approx 0\,
\end{equation}
Substituting Eq. (\ref{RHO_W_REL}) into (\ref{NAVIER_STAB1}), a quadratic equation for $\omega$ is obtained,
\begin{equation}
\label{QUADRAT}
\omega^2+\alpha \omega=\beta
\end{equation}
with complex coefficients $\beta=\beta_{R}+i \beta_I$, $\alpha=\alpha_R+i \alpha_I$.
The real and imaginary parts of $\alpha$ and $\beta$ are found as,
\begin{eqnarray}
\nonumber
\alpha_R&=&1-\Lambda-3 w_0^2 q_3 -h_1 k^2 = -2 w_0^2 q_3 -h_1 k^2\\
\nonumber
\alpha_I&=&k w_0(2 h_3 -q_1) \\
\nonumber
\beta_R&=& k^2[-b + w_0^2 (q_4+k_3-h_3')+k^2 h_2] \\
\beta_I&=& k w_0[ -\Lambda'-w_0^2 q_3' + k^2 ( q_2- h_4)]
\end{eqnarray}
where Eq. (\ref{W0_DEF}) has been used to eliminate $\Lambda$.
Eq. (\ref{QUADRAT}) has two solutions, $\omega_{\pm}$, where $\omega_{+}$
is the one with the larger real part. Solving for the real part, one finds,
\begin{equation}
\label{RADICAL}
Re(\omega_{+})=-{\alpha_R\over 2}+{(\Delta^2+\nu^2)^{1/4}\over \sqrt{2}}
\left(1+{\Delta\over \sqrt{\Delta^2+\nu^2}}\right)^{1/2}
\end{equation}
with the abbreviations
\begin{eqnarray}
\nonumber
\Delta & \equiv &{\alpha_R^2-\alpha_I^2\over 4}+\beta_R \\
\label{DELTA_NU_DEF}
\nu &\equiv & {\alpha_R \alpha_I\over 2}+\beta_I
\end{eqnarray}
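Equation (\ref{RADICAL}) can be cross-checked numerically for arbitrary complex coefficients; in the sketch below, the values of $\alpha$ and $\beta$ are arbitrary test values, not actual transport coefficients:

```python
import numpy as np

def re_omega_plus(alpha, beta):
    """Re(omega_+) from Eq. (RADICAL), with Delta and nu from Eq. (DELTA_NU_DEF)."""
    aR, aI = alpha.real, alpha.imag
    Delta = (aR ** 2 - aI ** 2) / 4 + beta.real
    nu = aR * aI / 2 + beta.imag
    r = np.hypot(Delta, nu)
    return -aR / 2 + np.sqrt((r + Delta) / 2)

# compare with the root of omega^2 + alpha*omega - beta = 0 of larger real part
alpha, beta = 0.3 - 0.7j, -0.2 + 0.4j
roots = np.roots([1.0, alpha, -beta])
print(re_omega_plus(alpha, beta), roots.real.max())
```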
Instead of directly analysing this complicated expression, a different approach is pursued:
Splitting the growth rate into its real and imaginary part, $\omega=\omega_R+i \omega_I$,
Eq. (\ref{QUADRAT}) can be rewritten as two equations for $\omega_R$ and $\omega_I$,
\begin{eqnarray}
\nonumber
\omega_R^2-\omega_I^2+\alpha_R \omega_R -\alpha_I \omega_I&=&\beta_R \\
\label{TWO_EQ}
2 \omega_R \omega_I + \alpha_R \omega_I + \alpha_I\omega _R&=& \beta_I
\end{eqnarray}
The sign of the real part $\omega_R$ determines whether the system is linearly stable.
We therefore eliminate $\omega_I$ from Eq.(\ref{TWO_EQ}) and obtain a quartic equation with real coefficients for the real part of $\omega$,
\begin{equation}
\label{QUARTIC1}
c_4 \omega_R^4 +c_3 \omega_R^3 + c_2 \omega_R^2 + c_1 \omega_R = c_0
\end{equation}
with coefficients
\begin{eqnarray}
\nonumber
c_4&=&4 \\
\nonumber
c_3&=&8 \alpha_R \\
\nonumber
c_2&=& \alpha_I^2+5 \alpha_R^2-4 \beta_R \\
\nonumber
c_1&=& \alpha_R[\alpha_I^2+\alpha_R^2-4 \beta_R] \\
\label{C_COEFFS}
c_0&=& \beta_I^2+\alpha_I \alpha_R \beta_I + \beta_R \alpha_R^2
\end{eqnarray}
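As a sanity check of the elimination (again with arbitrary test values for $\alpha$ and $\beta$), the real part of each root of the quadratic, Eq. (\ref{QUADRAT}), must satisfy the quartic with the coefficients of Eq. (\ref{C_COEFFS}):

```python
import numpy as np

def quartic_coeffs(alpha, beta):
    """Coefficients c_4 ... c_0 of Eq. (QUARTIC1), from Eq. (C_COEFFS)."""
    aR, aI = alpha.real, alpha.imag
    bR, bI = beta.real, beta.imag
    return (4.0,
            8.0 * aR,
            aI ** 2 + 5.0 * aR ** 2 - 4.0 * bR,
            aR * (aI ** 2 + aR ** 2 - 4.0 * bR),
            bI ** 2 + aI * aR * bI + bR * aR ** 2)

alpha, beta = 0.8 + 0.3j, -0.5 + 1.2j
c4, c3, c2, c1, c0 = quartic_coeffs(alpha, beta)
for w in np.roots([1.0, alpha, -beta]):   # roots of omega^2 + alpha*omega = beta
    wR = w.real
    print(c4 * wR ** 4 + c3 * wR ** 3 + c2 * wR ** 2 + c1 * wR - c0)  # residual
```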
This equation can be solved exactly by radicals as given in Eq. (\ref{RADICAL}).
However, in the high density limit $M\gg 1$ and near threshold $\Omega \ll 1$,
the terms in the quartic equation can differ by several orders of magnitude and the analytical expressions
become difficult to evaluate.
Furthermore, our aim is to gain a better physical understanding and to obtain simple expressions for relevant quantities like
the maximum growth rate of a perturbation of the ordered state.
Therefore, in the next section, instead of relying on
the exact solution of the quartic equation, various scaling regimes of $\omega_R$ are identified by using
different Ans\"atze to solve
Eq. (\ref{QUARTIC1}) approximately.
\subsection{Analysis of the dispersion relation}
\label{sec:dispers}
\subsubsection{The limit of vanishing wave number}
\label{sec:smallK}
To see whether there is an instability at all,
the limit of zero wave number $k\rightarrow 0$ is investigated
by inserting the ``hydrodynamic'' Ansatz
\begin{equation}
\label{SMALL_K_ANSATZ}
\omega_R=d_1 k^2+d_2 k^4+O( k^6)
\end{equation}
into Eq. (\ref{QUARTIC1}) and
expanding the coefficients from Eq. (\ref{C_COEFFS}) in powers of $k$,
\begin{eqnarray}
\nonumber
c_3&\equiv & c_{30}+c_{32} k^2 +O(k^4) \\
\nonumber
c_2&\equiv&c_{20}+c_{22} k^2 +O(k^4) \\
\nonumber
c_1&\equiv&c_{10}+c_{12} k^2 +c_{14} k^4 +O(k^6)\\
\label{C_IN_K}
c_0&\equiv&c_{02}k^2+c_{04}k^4+c_{06} k^6+O(k^8)
\end{eqnarray}
with new coefficients that do not depend on $k$, for example,
\begin{eqnarray}
\nonumber
c_{02}&=&w_0^2\left\{
(\Lambda'+w_0^2q_3')^2+2w_0^2 q_3(\Lambda'+w_0^2q_3') (2h_3-q_1)+4 w_0^2 q_3^2(-b+w_0^2[q_4+k_3-h_3'])\right\} \\
\nonumber
c_{04}&=&w_0^2\left\{2(\Lambda'+w_0^2q_3')(h_4-q_2)+(q_1-2h_3)[-h_1 (\Lambda'+w_0^2q_3')+2 w_0^2q_3(q_2-h_4)]\right. \\
\nonumber
& &\left. +4q_3(w_0^2q_3 h_2-bh_1+w_0^2h_1[q_4+k_3-h_3'])\right\} \\
\nonumber
c_{10}&=&-8w_0^6 q_3^3\\
\label{NEW_C_DEF}
c_{12}&=&w_0^2 q_3\left\{ -2w_0^2(2 h_3-q_1)^2-12 w_0^2 q_3 h_1+8[-b+w_0^2(q_4+k_3-h_3')]\right\}
\end{eqnarray}
Collecting terms of order $k^2$ and $k^4$, we find
\begin{eqnarray}
\label{D1_EQ}
d_1&=&{c_{02}\over c_{10}} \\
\label{D2_EQ}
d_2&=&{1\over c_{10}^3}
\left[
c_{04} c_{10}^2-c_{02}^2c_{20}-c_{10}c_{12} c_{02}
\right]
\end{eqnarray}
To simplify the analysis we evaluate the dispersion relation in the large density limit $M\gg 1$
and replace $w_0$ by $w_0=\Omega M/(\pi \gamma^2)$, using Eq. (\ref{OMEGA_W0_CON}).
In order to trace the origin of stabilizing and destabilizing effects, mathematical ``markers'' are put on
the transport coefficients. This is done by taking the asymptotic expressions from Table \ref{TAB3} and multiplying them
with dummy variables that
become equal to one in the limit $M\rightarrow \infty$. For example, we have
\begin{eqnarray}
\nonumber
q_3&= & -{\bar{q}_3 f^2 \over 4 M} \\
\nonumber
h_3&=&-{\bar{h}_3 f \over 2 \sqrt{\pi M}} \\
\nonumber
\Lambda'&=&{\bar{\Lambda}' f\over 2 M} \\
\nonumber
q_3'&=&{\bar{q}_3' f^3\over 8 M^2}\\
\label{MARKERS}
b&=&{\bar{b}\over 2}
\end{eqnarray}
where $f\equiv \pi \gamma^2$ and the ``marker''-variables are given by $\bar{q}_3$, $\bar{h}_3$, $\bar{\Lambda}'$ and so on.
Inserting these ``marked'' coefficients into Eqs. (\ref{NEW_C_DEF}) and (\ref{D1_EQ}), one finds that for vanishing order, $\Omega\rightarrow 0$, the coefficient
$d_1$ is always
positive but becomes negative for a strongly ordered state, $\Omega\approx 1$.
To obtain quantitative results and to investigate the sign change of $d_1$, the dominant positive and negative terms must be balanced.
Balance is achieved by assuming that the order parameter $\Omega$ is of order $1/M$ or smaller.
Mathematically, this is imposed by introducing
the scaled order parameter
$\bar{\Omega}\equiv \Omega M$
and assuming that $\bar{\Omega}$ is at most of order one.
Replacing $\Omega=\bar{\Omega}/M$ in the equations for the coefficients $c_j$ and $c_{ij}$, Eqs. (\ref{C_COEFFS}) and (\ref{NEW_C_DEF}),
and neglecting terms of higher order in $1/M$, the coefficients $c_{ij}$ dramatically simplify to,
\begin{eqnarray}
\nonumber
c_{02}&\approx &{\bar{\Omega}^2 \over 8 M^2}[2 (\bar{\Lambda}')^2-\bar{b}\bar{q}_3^2 \bar{\Omega}^2] \\
\nonumber
c_{04}&\approx& -{\bar{\Omega}^2 \bar{q}_3 \bar{b} \bar{h}_1 \over 16 M} \\
\nonumber
c_{06}&\approx &-{\bar{b} \bar{h}_1^2\over 128} \\
\nonumber
c_{10}&\approx&{\bar{\Omega}^6 \bar{q}_3^3 \over 8 M^3} \\
\nonumber
c_{12}&\approx& {\bar{\Omega}^2 \bar{q}_3 \bar{b} \over M} \\
\nonumber
c_{14}&\approx & {\bar{b}\bar{h}_1\over 4} \\
\nonumber
c_{20}&\approx& {5 \bar{\Omega}^4 \bar{q}_3^2\over 4 M^2}\\
\nonumber
c_{22}&\approx & 2 \bar{b} \\
\nonumber
c_{30}&\approx & {4 \bar{\Omega}^2\bar{q}_3\over M} \\
\label{CIJ_COEFFS}
c_{32}&\approx & \bar{h}_1
\end{eqnarray}
Note that the mean free path dependence, carried by the factor $f=\pi \gamma^2$, drops out
exactly from these expressions at any density.
By means of Eqs. (\ref{D1_EQ}) and (\ref{D2_EQ}) the low wave number approximation, Eq. (\ref{SMALL_K_ANSATZ}),
can now be easily evaluated with the coefficients $d_1$ and $d_2$ found as,
\begin{eqnarray}
\label{D1_FOUND}
d_1&\approx & {MB\over \bar{\Omega}^4 \bar{q}_3^3} \\
\label{D2_FOUND}
d_2&\approx & -{M^3 B\over \bar{\Omega}^8 \bar{q}_3^5}\left( {10 B\over \bar{\Omega}^2 \bar{q}_3^2}+8\right)\;\;\;{\rm with} \\
\nonumber
B&\equiv &2 (\bar{\Lambda}')^2-\bar{b}\bar{q}_3^2 \bar{\Omega}^2
\end{eqnarray}
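A quick numerical spot check (a sketch with hypothetical parameter values, marker variables set to one) confirms that Eqs. (\ref{D1_FOUND}) and (\ref{D2_FOUND}) follow from Eqs. (\ref{D1_EQ}), (\ref{D2_EQ}) and (\ref{CIJ_COEFFS}), with the $c_{04}$ contribution entering only at relative order $1/M$:

```python
# hypothetical density parameter and scaled order parameter
M, Ob = 50.0, 0.3
b = q3 = Lp = h1 = 1.0           # bar-b, bar-q3, bar-Lambda', bar-h1
B = 2.0 * Lp**2 - b * q3**2 * Ob**2

# leading-order coefficients, Eq. (CIJ_COEFFS)
c02 = Ob**2 / (8.0 * M**2) * B
c04 = -Ob**2 * q3 * b * h1 / (16.0 * M)
c10 = Ob**6 * q3**3 / (8.0 * M**3)
c12 = Ob**2 * q3 * b / M
c20 = 5.0 * Ob**4 * q3**2 / (4.0 * M**2)

d1 = c02 / c10                                                  # Eq. (D1_EQ)
d2 = (c04 * c10**2 - c02**2 * c20 - c10 * c12 * c02) / c10**3   # Eq. (D2_EQ)

d1_closed = M * B / (Ob**4 * q3**3)                             # Eq. (D1_FOUND)
d2_closed = -(M**3 * B / (Ob**8 * q3**5)) * (10.0 * B / (Ob**2 * q3**2) + 8.0)

assert abs(d1 - d1_closed) / abs(d1_closed) < 1e-12   # exact identity
assert abs(d2 - d2_closed) / abs(d2_closed) < 1e-3    # c04 term is O(1/M)
```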
While $\bar{b}$, $\bar{q}_3$ and $\bar{\Lambda}'$ are just marker variables that are equal to one, their appearance tells us that
the low wave number behavior $k\rightarrow 0$ is controlled by only four transport coefficients: $b$, $q_3$, $\Lambda'$ and $\Lambda$.
The latter coefficient enters implicitly because according to Eq. (\ref{W0_DEF}), it is involved in the
calculation of the order parameter $\bar{\Omega}$.
The most interesting prediction of Eq. (\ref{D1_FOUND}) is that there is a critical order parameter
\begin{equation}
\label{OMEGA_C}
\bar{\Omega}_C={\sqrt{2} |\bar{\Lambda}'| \over \sqrt{\bar{b}} \bar{q}_3}=\sqrt{2}
\end{equation}
below which $d_1$ is positive.
Thus, very close to threshold, for $0<\Omega< \sqrt{2}/M$, we do have a long wavelength instability, but there is restabilization slightly further away from threshold,
at $\Omega\ge \sqrt{2}/M$. Therefore, the width of the instability window (in $\Omega$-space) scales as $1/M$.
Given that $\bar{\Omega}$ is at most of order one, one sees that inside the instability region, the growth rate $\omega$ increases rapidly with $M$ since $d_1\sim M$
and $d_2\sim M^3$. In the high density limit $M\rightarrow \infty$ this suggests that the point $k=0$ becomes singular, i.e.
the value of $\omega_R$ at $k$ slightly above zero is
very different from $\omega_R(k=0)=0$. This behavior is supported by the direct solution of Eq. (\ref{QUARTIC1}).
Moreover, inside the instability window (where $\bar{\Omega}<\sqrt{2}$), the small wave number expansion is only valid
for $k\ll k_1$ where $k_1$ is defined through the equality of the first two terms
in the expansion of $\omega_R$, $|d_1| k_1^2=|d_2| k_1^4$.
This gives,
\begin{equation}
\label{K1_DEF}
k_1=\left|{d_1\over d_2} \right|^{1/2}\,.
\end{equation}
Evaluating $k_1$ near the threshold to collective motion, that is for $\bar{\Omega}\ll 1$, we have $B\approx 2 (\bar{\Lambda}')^2$ and $k_1$ follows as
\begin{equation}
\label{K1_CALC}
k_1\approx {\bar{\Omega}^3 \over M} {\bar{q}_3^2\over 2\sqrt{5} |\bar{\Lambda}'|}
={\bar{\Omega}^3 \over M} {1 \over 2\sqrt{5}}\,,
\end{equation}
which goes to zero in the limit $M\rightarrow \infty$ and also vanishes for $\bar{\Omega}\rightarrow 0$.
Direct solution of the quartic equation (\ref{QUARTIC1}) shows that the low $k$ expansion of $\omega_R$ is of very limited use because
$k_1$ goes to zero too rapidly.
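The scaling of $k_1$ can be checked directly against Eqs. (\ref{D1_FOUND}) and (\ref{D2_FOUND}); a minimal sketch with hypothetical values (marker variables set to one):

```python
# hypothetical values well inside the near-threshold regime
M, Ob = 200.0, 0.05
B = 2.0 - Ob**2                                     # B with markers = 1
d1 = M * B / Ob**4                                  # Eq. (D1_FOUND)
d2 = -(M**3 * B / Ob**8) * (10.0 * B / Ob**2 + 8.0) # Eq. (D2_FOUND)

k1 = abs(d1 / d2) ** 0.5                            # Eq. (K1_DEF)
k1_closed = Ob**3 / (2.0 * 5.0**0.5 * M)            # Eq. (K1_CALC)
assert abs(k1 - k1_closed) / k1_closed < 1e-2
```

The agreement is at the sub-percent level here; the residual difference comes from the neglected "$+8$" term and from $B\approx 2$.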
In particular, the shape of the dispersion relation near the onset of collective motion,
which is characterized by the maximum (positive) growth rate $\omega_{R,max}$ and the highest unstable wave number $k_0$
(where $\omega_R(k_0)=0$)
cannot be determined using a Taylor-expansion of $\omega_R$.
This is because $k_0$ will be shown to scale with a smaller power
of $\bar{\Omega}/M$. Therefore, in the large $M$ limit,
$k_0$ is outside the range of validity of the Taylor series, that is $k_0\gg k_1$.
Different techniques to find $k_0$ are presented in the next section.
Furthermore, Eq. (\ref{D1_FOUND}) also tells us that the mathematical
reason for the instability is a nonzero density derivative $\Lambda'$ and that the sign of $\Lambda'$ does not matter.
One also sees from Eqs. (\ref{D1_FOUND}) and (\ref{D2_FOUND})
that it is the combination of the coefficients $b$ and $q_3$ that initiates restabilization. The coefficient $b$ describes the pressure term in the
Navier-Stokes-like hydrodynamic equation (\ref{NAVIER}) and is equal to the square of the speed of sound,
whereas $q_3$ controls the nonlinear (cubic) stabilization of the momentum density.
\subsubsection{The shape of the dispersion relation at large wavenumbers}
\label{sec:shape}
In the previous section we saw that the Ansatz $\omega_R=d_1 k^2+d_2 k^4+\ldots$ only determines at which
values of the order parameter $\Omega$ an instability occurs, but fails to predict the shape of
the dispersion relation inside the instability region.
In this section, different scaling analyses of the quartic equation (\ref{QUARTIC1}) are presented that are valid at larger
wave numbers $k$. In particular, $k$ is assumed to be
larger than the limit value $k_1$ from Eq. (\ref{K1_CALC}), and finally large enough to describe
the value $k_0$ above which $\omega_R$ becomes negative again.
This value is of particular interest because, through the length scale $L_{inst}=2\pi/k_0$, it gives
us a crude idea above which system size $L^*$ the high density
Vicsek model develops soliton-like density waves \cite{chate_04_08,ihle_13}.
The maximum value of the dispersion relation, $\omega_{R,max}$, is important because it provides a
lower bound $2\pi/\omega_{R,max}$
on the time scale on which such waves can pile up.
To calculate $k_0$ and $\omega_{R,max}$ the following scaling Ansatz is made,
\begin{eqnarray}
\label{SCAL1}
\omega_R&\equiv &{\hat{\omega} \over M} \\
\label{SCAL2}
k& \equiv & {\hat{k}\over \sqrt{M}}
\end{eqnarray}
which will be justified {\em a posteriori}.
The rescaled wave number $\hat{k}$ and the rescaled growth rate $\hat{\omega}$ are assumed to be of order one.
Inserting these definitions into the quartic equation (\ref{QUARTIC1}) and keeping only terms of lowest
order in the small parameter $1/M$, a simple quadratic equation emerges,
\begin{equation}
\label{QUADR1}
\hat{\omega}^2+\hat{\omega}\left[ {\bar{\Omega}^2 \bar{q}_3\over 2} +\hat{k}^2 {\bar{h}_1\over 8}\right]
={\bar{\Omega}^2 B \over 16 \bar{b}}-\hat{k}^2 {\bar{\Omega}^2 \bar{q}_3 \bar{h}_1 \over 32}
-\hat{k}^4 {\bar{h}_1^2\over 256}\,.
\end{equation}
Its solution, written in terms of the unscaled variables, is
\begin{eqnarray}
\nonumber
\omega_R&\approx &{\bar{\Omega}\over M } {|\bar{\Lambda}'|\over \sqrt{8\bar{b}}}
-{\bar{\Omega}^2 \bar{q}_3\over 4 M}-{k^2\bar{h}_1\over 16}
\\
\label{QUADR1_SOL}
&= & {\bar{\Omega} \bar{q}_3 \over 4 M}\left[\bar{\Omega}_C-\bar{\Omega}\right]
-{k^2\bar{h}_1\over 16}
\end{eqnarray}
where we used the definition of the critical order parameter, Eq. (\ref{OMEGA_C}) in the second line to visualize
that $\omega_R$ can only be positive if $0<\bar{\Omega}<\bar{\Omega}_C$.
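In fact, one finds that Eq. (\ref{QUADR1_SOL}) is the exact positive root of the quadratic (\ref{QUADR1}): under the square root of the standard root formula the $\hat{k}^2$ and $\hat{k}^4$ terms cancel identically. A short numerical sketch in the rescaled variables (hypothetical values, marker variables set to one):

```python
# hypothetical values: M, bar-Omega, and rescaled wavenumber k-hat
M, Ob, kh = 100.0, 0.4, 0.7
b = q3 = Lp = h1 = 1.0
B = 2.0 * Lp**2 - b * q3**2 * Ob**2

# quadratic Eq. (QUADR1): w^2 + p*w = q for the rescaled growth rate
p = Ob**2 * q3 / 2.0 + kh**2 * h1 / 8.0
q = Ob**2 * B / (16.0 * b) - kh**2 * Ob**2 * q3 * h1 / 32.0 - kh**4 * h1**2 / 256.0
w_quad = -p / 2.0 + (p**2 / 4.0 + q) ** 0.5        # positive root

# closed form, Eq. (QUADR1_SOL), written in the rescaled variables
w_closed = Ob * abs(Lp) / (8.0 * b) ** 0.5 - Ob**2 * q3 / 4.0 - kh**2 * h1 / 16.0
assert abs(w_quad - w_closed) < 1e-14
```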
This expression gives a first estimate of the maximum value of $\omega_R$,
\begin{equation}
\label{OMEGA_MAX_1}
\omega_{R,max}\approx
{\bar{\Omega} \over 4 M}\left[\sqrt{2}-\bar{\Omega}\right]
\end{equation}
and justifies the initial scaling choice, $\omega_R\sim 1/M$ of Eq. (\ref{SCAL1}).
Very close to the threshold, at $\bar{\Omega}\ll 1$, one finds,
\begin{equation}
\label{OMEGA_MAX_SIMP}
\omega_{R,max}\approx {\bar{\Omega}\over \sqrt{8} M}={\Omega\over \sqrt{8}}
\end{equation}
with $\bar{\Omega}=M\Omega$ and $\bar{\Omega}_C=\sqrt{2}$.
Clearly, expression (\ref{QUADR1_SOL}) ceases to be valid at very small $k$ because in
contrast to Eq. (\ref{SMALL_K_ANSATZ}), it does not
predict $\omega_R(k=0)=0$.
Comparing neglected terms in Eq. (\ref{QUARTIC1}) with the surviving ones in (\ref{QUADR1}) gives the condition
\begin{equation}
\label{K2_EXPR}
k\gg k_2={\bar{\Omega} \over M} {|\bar{\Lambda}'| \over 2 \bar{b}}={\Omega\over 2}\;\;\;{\rm for}\;\bar{\Omega}\ll 1
\end{equation}
for approximation (\ref{QUADR1_SOL}) to be valid, assuming proximity to the threshold to ordered motion.
Near the restabilization point where $\bar{\Omega}\approx \bar{\Omega}_C$ the wave number restriction changes to
\begin{equation}
\label{NEAR_RESTAB1}
k\gg {|\bar{\Lambda}'|\over M}\sqrt{5\over 2 \bar{b}^3}
\end{equation}
which follows by noting that
\begin{equation}
\label{B_VS_D}
B=\bar{b}\bar{q}_3^2(\bar{\Omega}_C^2-\bar{\Omega}^2)\approx 2 \bar{\Omega}_C \bar{b}\bar{q}_3^2(\bar{\Omega}_C-\bar{\Omega})
\end{equation}
and assuming $(\bar{\Omega}_C-\bar{\Omega})\ll 1$ when analyzing the terms in Eq. (\ref{QUARTIC1}).
Finally, setting $\omega_R=0$ in Eq. (\ref{QUADR1_SOL}) leads to an estimate for $k_0$,
the wave number which delimits the domain of unstable modes,
\begin{equation}
\label{K0_EXPR}
k_0\approx \sqrt{4 \bar{\Omega} \bar{q}_3[\bar{\Omega}_C-\bar{\Omega}] \over M \bar{h}_1}
=2 \sqrt{\bar{\Omega} [\sqrt{2}-\bar{\Omega}] \over M }
\end{equation}
which for $\bar{\Omega}\ll 1$ gives,
\begin{equation}
\label{K0_EXPR_CLOSE}
k_0\approx 2^{5/4} \sqrt{\bar{\Omega}\over M} {|\bar{\Lambda}'|^{1/2} \over \bar{h}_1^{1/2} \bar{b}^{1/4}}=
2^{5/4} \sqrt{\bar{\Omega}\over M}\,.
\end{equation}
The result $k_0\sim 1/\sqrt{M}$ confirms the scaling Ansatz, Eq. (\ref{SCAL2}).
It is also in the range of validity, $k_0\gg k_2$, of the approximation (\ref{QUADR1_SOL})
because $\bar{\Omega}/M$ is
always much smaller than $\sqrt{\bar{\Omega}/M}$ for $M\gg 1$ and $\bar{\Omega}<\sqrt{2}$.
Maximizing $k_0$ and $\omega_{R,max}$ at fixed $M$ but variable $\bar{\Omega}$ leads to the ``most dangerous'' value
$\bar{\Omega}_D=\bar{\Omega}_C/2={1\over \sqrt{2}}$.
According to Eq. (\ref{OMEGA_VS_DELTA}) this corresponds to a relative noise distance $\delta_D$,
\begin{equation}
\label{DELTA_D}
\delta_D=\delta(\bar{\Omega}_D)={1\over 4 \sqrt{4 M^3}}
\end{equation}
from threshold. For example, at $M=5$, the instability is strongest at $\delta=\delta_D=0.013$.
Furthermore, calculating
\begin{eqnarray}
\nonumber
k_0(\Omega_D)&=& \sqrt{2\over M} \\
\label{DANGER}
\omega_{max}(\Omega_D)&=& {1\over 8 M}
\end{eqnarray}
we get a lower bound for the system size $L^{*}$ above which the homogeneous ordered state is unstable
and we can extract a typical time scale for the formation of this instability, $T^{*}$,
\begin{eqnarray}
\label{L_MAX}
L^{*}&\approx &\tau v_0 {2\pi\over k_0(\Omega_D)}=\tau v_0 \pi\sqrt{2 M} \\
\label{TIME_SCALE}
T^{*}&\approx & \tau {2\pi\over \omega_{max}(\Omega_D)}= 16 \pi M\,\tau
\end{eqnarray}
Here, we had to reinsert the time step $\tau$ and the particle speed $v_0$ because $k$ and $\omega$ are dimensionless quantities.
Thus we arrive at the prediction that the lower bound for $L^{*}$ increases proportional to $\sqrt{M}$.
Comparing the prediction for $L^{*}$ from Eq. (\ref{L_MAX}) to the inset of Fig. 1 in Ref. \cite{ihle_11},
we find very good agreement with the lower curve for $M\ge 2$. This figure was obtained directly from the full dispersion relation,
evaluated by Mathematica, and without making any large density approximations.
We also find that the number of iterations to see the instability (in a system that is significantly larger than $L^{*}$), is proportional
to $M$.
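The estimates above are straightforward to evaluate; a minimal sketch for $M=5$ (with $\tau=v_0=1$ for simplicity):

```python
from math import pi, sqrt

M = 5.0                    # average number of collision partners
tau, v0 = 1.0, 1.0         # time step and particle speed, set to one here

k0_D = sqrt(2.0 / M)       # Eq. (DANGER): k_0 at the most dangerous Omega_D
w_max_D = 1.0 / (8.0 * M)  # Eq. (DANGER): maximum growth rate at Omega_D

L_star = tau * v0 * 2.0 * pi / k0_D    # Eq. (L_MAX)
T_star = tau * 2.0 * pi / w_max_D      # Eq. (TIME_SCALE)

assert abs(L_star - tau * v0 * pi * sqrt(2.0 * M)) < 1e-12   # = pi*sqrt(2M)
assert abs(T_star - 16.0 * pi * M * tau) < 1e-12             # = 16*pi*M
```

For $M=5$ this gives $L^{*}\approx 9.93\,\tau v_0$ and $T^{*}\approx 251\,\tau$.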
\subsubsection{The dispersion relation at intermediate wavenumbers}
\label{sec:intermed}
So far, we have found an approximation for $\omega_R$ given by Eq. (\ref{SMALL_K_ANSATZ}) for small $k\ll k_1$ and
another one, Eq. (\ref{QUADR1_SOL}), for large $k\gg k_2$.
However, between $k_1$ and $k_2$ there is a large gap in wave number space.
The closer one is to the threshold value, the more this gap widens since the ratio $k_2/k_1=\sqrt{5}/\bar{\Omega}^2$
diverges for $\bar{\Omega}\rightarrow 0$.
The previous approximations did not allow us to find the most unstable $k_{max}$.
One of the goals of this section is to find $k_{max}$ which is expected to be
inside the unexplored gap in wavenumber space.
A direct evaluation of the quartic equation (\ref{QUARTIC1}) using Mathematica indicated that the cubic and the linear term
do not seem to be relevant in this intermediate range of wave numbers, $k_1\ll k \ll k_2$.
To put this observation on a more solid ground, the following scaling Ansatz is made
\begin{eqnarray}
\nonumber
\omega&=&{\hat{\omega}\over M^{1+\phi}} \\
\nonumber
k&=&{\hat{k}\over M^{1+\alpha}} \\
\bar{\Omega}&=&{\hat{\Omega}\over M^{\beta}}
\end{eqnarray}
with unknown positive exponents $\alpha$, $\beta$ and $\phi$; this Ansatz is inserted into Eq. (\ref{QUARTIC1}).
To obtain a meaningful balance of terms in this equation and to prevent contradictions, the exponents must
fulfill the conditions,
\begin{equation}
\phi={1\over 2}(\alpha+\beta)\;\;{\rm and}\;\; {\alpha\over 3} < \beta < \alpha
\end{equation}
At the lowest order in $1/M$ one finds a balance between the quartic term $\sim \omega^4$ and the $O(\omega^0)$ term, leading to the approximation
\begin{equation}
\label{OM_SQRT1}
\omega_R\approx \sqrt{\bar{\Omega}\over M}\left({B\over 32}\right)^{1/4}\,k^{1/2}
\end{equation}
Near threshold, for $\bar{\Omega}\ll 1$ and with the dummy variables set to one, this becomes
\begin{equation}
\label{OM_SQRT2}
\omega_R\approx {1\over 2} \sqrt{\bar{\Omega}\over M}\,k^{1/2}
={1\over 2} (\Omega\, k)^{1/2}
\end{equation}
Away from the restabilization region, that is assuming $B=O(1)$ and $\bar{\Omega}<1$, the approximation (\ref{OM_SQRT1})
is valid in the following window:
\begin{equation}
\label{INTERMED_LIM1}
{\bar{\Omega}^3\over M} \sqrt{32\over B}\,\bar{q}_3^2\ll k \ll
{\bar{\Omega}\over M} \sqrt{B\over 8}\, {1\over \bar{b}}
\end{equation}
For $\bar{\Omega}\ll 1$ and setting the dummy variables to one this window becomes
\begin{equation}
\label{INTERMED_LIM2}
k_3\equiv {4 \bar{\Omega}^3\over M} \ll k \ll
{\bar{\Omega}\over 2 M}\equiv k_4
\end{equation}
This means the scaling regime $\omega_R\sim \sqrt{k}$ is only clearly visible
very close to the threshold, where $\bar{\Omega}$ is about $0.1$ or smaller.
Note that $k_4$ happens to be equal to $k_2$, and that $k_3$ has the same scaling $\sim \bar{\Omega}^3/M$
as $k_1$.
The approximation for $\omega_R$ at intermediate wavenumber, Eq. (\ref{OM_SQRT2}), increases monotonically with $k$, whereas
the expression for high $k$, Eq. (\ref{QUADR1_SOL}), decreases monotonically.
However, the regions of validity for these approximations do not overlap.
Therefore, they should not be equated to obtain an estimate of $k_{max}$ where $\omega_R$ reaches its maximum.
To estimate $k_{max}$, we first construct another approximation whose region of validity extends above $k_4$.
This is done by not only balancing terms of order $\omega_R^4$ and $\omega_R^0$ in Eq. (\ref{QUARTIC1}), but also including
a contribution proportional to $\omega_R^2$. This leads to a quadratic equation in $\omega_R^2$,
\begin{equation}
\label{MATCH_START}
4\omega_R^4+2\bar{b}k^2\omega_R^2={\bar{\Omega}^2 B\over 8 M^2}\,k^2
\end{equation}
with the solution
\begin{equation}
\label{MATCH_SOL1}
\omega_R={1\over 2} \left\{
-\bar{b} k^2 +\sqrt{(\bar{b} k^2)^2+{\bar{\Omega}^2 B k^2\over 2M^2}
}
\right\}^{1/2}
\end{equation}
Compared to Eq. (\ref{INTERMED_LIM2}), the range of validity is extended to
\begin{equation}
\label{INTERMED_LIM_NEW}
k_3 \ll k \ll
4^{1/5}
\left({\bar{\Omega}\over M}\right)^{3/5}\equiv k_5
\;\;\;{\rm for}\;\bar{\Omega}\ll 1
\end{equation}
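One can check numerically that Eq. (\ref{MATCH_SOL1}) indeed interpolates between the $\sqrt{k}$ law, Eq. (\ref{OM_SQRT2}), at small $k$ and the plateau $\bar{\Omega}|\bar{\Lambda}'|/(M\sqrt{8\bar{b}})$ at large $k$. A sketch with hypothetical values ($M=1000$, $\bar{\Omega}=0.05$, marker variables set to one):

```python
from math import sqrt

def w_match(k, M, Ob, b=1.0, Lp=1.0):
    """Growth rate from Eq. (MATCH_SOL1), marker variables set to one."""
    B = 2.0 * Lp**2 - b * Ob**2
    inner = -b * k**2 + sqrt((b * k**2) ** 2 + Ob**2 * B * k**2 / (2.0 * M**2))
    return 0.5 * sqrt(inner)

M, Ob = 1000.0, 0.05
k_small = 1e-6                           # well below bar-Omega/M
k_big = 0.3                              # well above k_2 ~ bar-Omega/(2M)

w_sqrt_law = 0.5 * sqrt(Ob / M * k_small)   # Eq. (OM_SQRT2)
w_plateau = Ob / (M * sqrt(8.0))            # large-k plateau of Eq. (FMATCH1)

assert abs(w_match(k_small, M, Ob) - w_sqrt_law) / w_sqrt_law < 0.05
assert abs(w_match(k_big, M, Ob) - w_plateau) / w_plateau < 0.05
```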
Now, we have an overlap with the large $k$ approximation since $k_2\sim \bar{\Omega}/M \ll k_5\sim (\bar{\Omega}/M)^{3/5}$.
To match the results in the overlap region, the square root in Eq. (\ref{MATCH_SOL1}) is expanded for large $k$
with the result:
\begin{equation}
\label{FMATCH1}
\omega_R^2\approx
{|\bar{\Lambda}'|^2 \bar{\Omega}^2\over 8 M^2 \bar{b}}
-{|\bar{\Lambda}'|^4 \bar{\Omega}^4\over 32 M^4 \bar{b}^3 k^2}+O(1/k^4)
\;\;\;{\rm for}\; { \bar{\Omega}\over M} \sqrt{B\over 2} \ll k \ll k_5
\end{equation}
From the high $k$ expression (\ref{QUADR1_SOL}) one finds
\begin{equation}
\label{FMATCH2}
\omega_R^2\approx {|\bar{\Lambda}'|^2 \bar{\Omega}^2\over 8 M^2 \bar{b}}-{k^2 \bar{h}_1 |\bar{\Lambda}'|\bar{\Omega}\over 8 M \sqrt{8\bar{b}}}
\;\;\;{\rm for}\; k\gg k_2
\end{equation}
where a term proportional to $k^4$ could be neglected for $k\approx k_{max}$.
The two expressions (\ref{FMATCH1}) and (\ref{FMATCH2}) become equal at the particular wavenumber $k_{match}$, given by
\begin{equation}
\label{KMATCH1}
k_{match}={1\over 2^{1/8}} \left({\bar{\Omega}\over M}\right)^{3/4}
\end{equation}
It seems plausible to assume that $k_{match}$ is a good estimate of the wavenumber $k_{max}$.
This means $k_{max}$ is expected to be proportional to $(\bar{\Omega}/M)^{3/4}$. This is confirmed by a more accurate
calculation of $k_{max}$ and $\omega_{max}$ for $\bar{\Omega}\ll 1$:
\begin{eqnarray}
\label{KMAX_DET}
k_{max}&=&\left({2\over 9}\right)^{1/8} \left({\bar{\Omega}\over M}\right)^{3/4}\\
\label{WMAX_DET}
\omega_{max}&=& {1\over \sqrt{8}} {\bar{\Omega}\over M}-{1\over 3^{1/2} 2^{7/4}}\left({\bar{\Omega}\over M}\right)^{3/2}
\end{eqnarray}
The prefactor in Eq. (\ref{KMAX_DET}) is about $10\%$ smaller than the one in Eq. (\ref{KMATCH1}) but the scaling is the same.
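The matching construction can be reproduced in a few lines (a sketch; marker variables set to one, hypothetical $M$ and $\bar{\Omega}$):

```python
# hypothetical near-threshold values
M, Ob = 100.0, 0.05
C1 = Ob**4 / (32.0 * M**4)          # coefficient of 1/k^2 in Eq. (FMATCH1)
C2 = Ob / (8.0 * M * 8.0**0.5)      # coefficient of k^2  in Eq. (FMATCH2)

# A - C1/k^2 = A - C2*k^2  =>  k_match = (C1/C2)^(1/4)
k_match = (C1 / C2) ** 0.25
k_match_closed = 2.0**-0.125 * (Ob / M) ** 0.75   # Eq. (KMATCH1)
assert abs(k_match - k_match_closed) / k_match < 1e-12
```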
\subsubsection{Verification of scaling predictions}
\begin{figure}
\begin{center}
\vspace{0cm}
\includegraphics[width=3.2in,angle=0]{FIG1_ihle_dispersion.eps}
\vspace{-1ex}
\caption{
The scaled growth rate $\omega_{R,S}=\omega_R\sqrt{8}/\Omega$ is plotted versus the scaled wavenumber $k_S=k\, 2^{-5/4}\Omega^{-1/2}$
for $M=7$ and $M=100$. $\omega_R$ was obtained from direct solution of Eq. (\ref{QUADRAT}).
The large $k$ approximation, Eq. (\ref{QUADR1_SOL}), is given by the dashed line and represents an inverted parabola.
The scaling was chosen to test the predictions for $k_0$ and $\omega_R$ from Eqs. (\ref{K0_EXPR_CLOSE}) and (\ref{OMEGA_MAX_SIMP}).
}
\label{FIG1}
\end{center}
\vspace*{-2ex}
\end{figure}
To verify the proposed scaling laws, Eq. (\ref{QUADRAT})
is evaluated numerically.
Using Eq. (\ref{NEW_LAM})
the threshold value $\eta_C$ for a given $M$ is found from the condition $\Lambda(\eta_C)=1$.
For a given relative distance $\delta$, the noise is determined by $\eta=\eta_C(1-\delta)$.
The transport coefficients are obtained from Table \ref{TAB2}, by inserting
the large $M$ approximations for $p$, $q$, $\Gamma$ and $S$ from Eqs. (\ref{EVAL_PQ}), into the corresponding expressions.
Note that the approximations from Eq. (\ref{THRESH_PQ}) and from Table \ref{TAB3}
are not used.
Fig. \ref{FIG1} shows the scaled real part of the growth rate, $\omega_{R,S}=\omega_R\sqrt{8}/\Omega$,
as a function of scaled wavenumber $k_S=k\, 2^{-5/4}\Omega^{-1/2}$ for $M=7$ and
$\delta=10^{-5}$.
The values $k_{0}=0.15566$, $k_{max}=0.01372$ and
$\omega_{R,max}=0.001233$ can be read off the plot.
Eqs. (\ref{OMEGA_W0_CON}) and (\ref{W0_DEF}) give the order parameters $\Omega=0.004018$ and $\bar{\Omega}=\Omega M=0.02813$ which are used
to calculate the limiting wavenumbers, $k_2$ to $k_5$. Following Eqs. (\ref{K2_EXPR}), (\ref{INTERMED_LIM2}) and
(\ref{INTERMED_LIM_NEW}) one finds $k_2=k_4=0.002$, $k_3=1.27\times 10^{-5}$, and $k_5=0.0482$.
As expected, the conditions $k_0\gg k_2$, and $k_3\ll k_{max}\ll k_5$ are fulfilled.
The limit $k_1=7.1\times 10^{-7}$ is too small to play any role in the plot.
For verification, Eqs. (\ref{K0_EXPR}), (\ref{KMAX_DET}) and (\ref{WMAX_DET}) are evaluated and give
$k_{0}=0.1507$, $k_{max}=0.01322$ and
$\omega_{R,max}=0.001377$.
These values show excellent agreement with the ones from Fig. \ref{FIG1}.
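These numbers are easy to reproduce; a minimal sketch using the values of $\Omega$ and $\bar{\Omega}$ quoted above:

```python
from math import sqrt

# M = 7 sample, with Omega as quoted in the text
M, Om = 7.0, 0.004018
Ob = Om * M                                    # bar-Omega = 0.02813

k0 = 2.0**1.25 * sqrt(Ob / M)                  # Eq. (K0_EXPR_CLOSE)
kmax = (2.0 / 9.0) ** 0.125 * (Ob / M) ** 0.75 # Eq. (KMAX_DET)
wmax = (Ob / (sqrt(8.0) * M)
        - (Ob / M) ** 1.5 / (sqrt(3.0) * 2.0**1.75))   # Eq. (WMAX_DET)

assert abs(k0 - 0.1507) < 2e-4
assert abs(kmax - 0.01322) < 1e-4
assert abs(wmax - 0.001377) < 1e-5
```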
I also checked a denser system with $M=100$ and $\delta=10^{-6}$ corresponding to $\bar{\Omega}=0.05741$.
Again, the errors of the predictions for $k_0$, $k_{max}$ and $\omega_{max}$ are only between 2 and 5 percent.
Finally, a very dense system with $M=8000$, $\delta=10^{-9}$ and $\bar{\Omega}=0.304$ was tested.
Here, the error of the prediction for $k_0$ was only $0.6\%$.
Since the scaling exponent of $3/4$ in the prediction for $k_{max}$ seems unusual, I checked whether the exponents
$1/2$ and $1$ could also make sense. It turns out that they can be ruled
out because for the $M=7$ sample, $k_{max}$ would change by a factor of $\Omega^{1/4}\approx 4^{\pm 1}$, and for the $M=100$
system $k_{max}$ would change by a factor of $\approx 6.5^{\pm 1}$.
These factors are in clear contradiction with
the small errors of the $k_{max}\propto \Omega^{3/4}$ prediction.
\subsection{Discussion of the dispersion relation}
Close to the threshold to collective motion, that is for $\bar{\Omega}=\Omega M\ll 1$,
the dispersion relation has a range of unstable modes.
One of the main results of this paper can be seen in Fig. \ref{FIG1}:
For $M\gg 1$, the unstable modes are asymptotically described by an inverted parabola
for the real part of the growth rate $\omega_R$. It is not a perfect parabola since
close to zero wavenumber, $\omega_R$ abruptly turns down.
Inside this region of rapid change, a power law applies, $\omega_R\sim k^{1/2}$.
The imperfection of the parabola becomes smaller and smaller with increasing density, as shown in Fig. \ref{FIG1}.
Loosely speaking, I found a ``boundary layer'' in the dispersion relation.
For extremely small wavenumbers $k\approx 0$ there is another power law regime $\omega\sim k^2$. However,
for $\Omega M\ll 1$ and $M \gg 1$
this regime is basically irrelevant. For example, even for $M=7$ it is not visible in Fig. \ref{FIG1}.
Another main result is the scaling of the most unstable mode $k_{max}\sim \Omega^{3/4}$ and
$\omega_{R,max}\sim \Omega$, as well as the scaling of the range of unstable modes, $k_0\sim \Omega^{1/2}$.
The different scaling laws for $k_0$ and $k_{max}$,
near the threshold to collective motion can be combined in the relation
\begin{equation}
\label{SCALE_COMBIN}
{k_0^3\over k_{max}^2}=const.
\end{equation}
Note that this disagrees with the low density Boltzmann approach for a Vicsek-like model by Bertin {\em et al.} \cite{bertin_09}.
The scaling reported there corresponds to $k_0^2/k_{max}=const.$
\section{Conclusion}
Previously, an Enskog-type kinetic theory for Vicsek-type models was proposed and
hydrodynamic equations were derived \cite{ihle_11}.
The corresponding transport coefficients were given in terms
of infinite series which are difficult to handle.
The fact that this theory is not restricted to low density has not been fully utilized yet.
In this paper, I exploit the special properties of the Poisson distribution to
obtain simple approximations of the transport coefficients. These expressions become exact
in the infinite density limit but, in practice, are expected to be
quite good as long as $M$, the average number of collision partners, is of order one or larger.
Analyzing the hydrodynamic theory of the VM in the large $M$-limit
is not only advantageous with respect to having simpler expressions but
also because the underlying mean-field assumption is supposed to become exact near the flocking threshold for $M\rightarrow \infty$.
Ranking transport coefficients by powers of the small parameter $1/M$ allows me to
analytically evaluate the well-known density instability of the polarly ordered phase near the flocking threshold.
The growth rate $\omega$ of a longitudinal perturbation is calculated and a band of unstable modes with wavenumbers $0<k<k_0$ is found.
Some of the main results of this paper are given in Eqs. (\ref{K0_EXPR}), (\ref{KMAX_DET}) and (\ref{WMAX_DET}), which describe
the scaling behavior
of $k_0$ as well as the maximum growth rate and the most unstable mode number $k_{max}$ in terms of density and size of the order
parameter $\Omega$.
It is found that there is only an instability for $0<\Omega<\sqrt{2}/M$. This means, for large $M$,
a restabilization of the ordered phase occurs very close to the threshold -- the size of the instability window in $\Omega$-space shrinks as $1/M$.
Thus, one can assume that for large enough $M$, the restabilization can be described within the validity domain of hydrodynamic theory.
Inside the instability window, the largest value for $k_0$ is found for an order parameter value $\Omega=1/(M\sqrt{2})$.
This allows the calculation of the (approximate)
maximum system size $L^{*}\sim \sqrt{M}$ below which the homogeneously ordered phase
is stable. A corresponding time scale for the formation of density inhomogeneities was also calculated and found to increase
linearly with density.
The estimate for $L^{*}$ is in agreement with an earlier numerical evaluation of the dispersion relation
{\em without} the high density approximation, see Ref. \cite{ihle_11}.
Furthermore, I show that the real part of the growth rate follows three different power laws:
at very low wavenumber
$k$, one has $Re(\omega)\sim k^2$ followed by a power law regime with $\sim \sqrt{k}$. At large enough wavenumber, one finds
$Re(\omega)\sim const -k^2$ which represents an inverted parabola.
The closer the system is moved towards the threshold,
the more room in $k$-space is taken by the inverted parabola, at the expense of the
other two regimes.
It is also interesting to see that the key features of the dispersion relation are determined by only five
($b$, $q_3$, $h_1$, $\Lambda$, and $|\Lambda'|$) of the many
transport coefficients of the hydrodynamic equations.
Support
from the National Science Foundation under grant No.
DMR-0706017
is gratefully acknowledged.
% arXiv:1501.03909
\section{Introduction}
What does dynamical systems theory contribute to our understanding of matter?
To a large extent, the royal road to gain an understanding of fluids or solids has been statistical mechanics.
Based on interaction potentials obtained from experiments and quantum mechanical simulations,
sophisticated perturbation theories are capable of providing a quantitative description of the static structural properties of such fluids
on the atomistic scale. Computer simulations have provided guidance and invaluable insight to unravel the intricate local structure and
even the non-equilibrium dynamics of ``simple'' liquids including hydrodynamic flows and shock waves. At present,
this program is being extended to ever more complex molecular fluids and/or to systems confined to particular geometries such as
interfaces and pores, and to fluids out of thermal equilibrium.
We raised a related question, ``What is liquid?'', in a similar context in 1998 \cite{MPH_1998}.
A fluid differs from a solid by the mobility of its particles, and this ability to flow is a collective phenomenon.
The flow spreads rapidly in phase space, which constitutes the fundamental instability characteristic of a gas or liquid.
About thirty years ago, dynamical systems theory provided new tools for the characterization of the chaotic evolution in
a multi-dimensional phase space, which were readily applied to liquids shortly after \cite{PH88,PH_1989}.
The main idea is to follow not only the evolution of a state in phase space but, simultaneously, the evolution of various tiny perturbations
applied to that state at an initial time and to measure the growth or decay of such perturbations. The study of the Lyapunov
stability, or instability, of a trajectory with respect to small perturbations always present in real physical systems is hoped
to provide new and alternative
insight into the theoretical foundation and limitation of dynamical models for dense gases and liquids, of phase transitions involving
rare events, and of the working of the Second Law of thermodynamics for stationary non-equilibrium systems.
It is the purpose of this review to assess how far this hope has become true and how much our understanding of fluids has gained from the study of the Lyapunov instability in phase space.
The structure of a simple fluid is essentially determined by the steep (e.g. $\propto r^{-12}$) repulsive part of the pair
potential, which can be well approximated by a discontinuous hard core. In perturbation theories, this hard potential may be taken as
a reference potential with the long-range attractive potential ($\propto - r^{-6}$) acting as a perturbation \cite{Gray}.
Hard disk fluids in two dimensions, and hard sphere systems in three, are easy to simulate and are paradigms for
simple fluids. The first molecular dynamics simulations for hard sphere systems were carried out by Alder and Wainwright
\cite{Alder} in 1957; the first computation of Lyapunov spectra for such systems was carried out by Dellago, Posch and Hoover
\cite{DPH_1996} in 1996.
There exist numerous extensions of the hard-sphere model to include rotational degrees of freedom and various
molecular geometries \cite{Allen}. Arguably the simplest model, which still preserves spherical symmetry, consists of spheres with
a rough surface, so-called ``rough hard spheres'' \cite{chapman:1953}. In another approach, fused dumbbell diatomics
are used to model linear molecules \cite{AI_1987,TS_1980}. Both models are used to study the energy exchange between
translational and rotational degrees of freedom and, hence, rotation-translation coupling for molecules with
moderate anisotropy. Other approaches more suitable for larger molecular anisotropies involve spherocylinders
\cite{VB,RS}, ellipsoids \cite{FMM,TAEFK} and even inflexible hard lines \cite{FMag}. To our knowledge, only for the
first two schemes, namely for rough hard disks \cite{BP_2013,Bdiss} and for planar hard dumbbells \cite{MPH_1998,Milano,MPH_chaos},
extensive studies of the Lyapunov spectra have been carried out. We shall come back to this work below.
There are numerous papers for repulsive soft-potential systems and for Lennard-Jones systems from various authors, in which
an analysis of the Lyapunov instability has been carried out. We shall make reference to some of this work in the following sections.
The paper is organized as follows. In the next section we introduce global and local
(time dependent) Lyapunov exponents and review some of their properties. Of particular interest are the
symmetry properties of the local exponents and of the corresponding perturbation vectors in tangent space. Both
the familiar orthonormal Gram-Schmidt vectors and the covariant Lyapunov vectors are considered. In Sec.~\ref{hard_wca} we study planar hard disk systems over a wide range of densities, and
compare them to analogous fluids interacting with a smooth repulsive potential. There we also demonstrate that
the largest (absolute) Lyapunov exponents are generated by perturbations, which are strongly localized in physical space:
only a small cluster of particles contributes at any instant of time. This localization persists in the
thermodynamic limit. Sec.~\ref{hydro} is devoted to another property of the perturbations associated with the
small exponents, the so-called Lyapunov modes. In a certain sense, the Lyapunov modes are analogous to the
Goldstone modes of fluctuating hydrodynamics (such as the familiar sound and heat modes). However, it is
surprisingly difficult to establish a connection between these two different viewpoints. In Sec.~\ref{roto}
particle systems with translational and
rotational dynamics are considered. Two simple planar model fluids are compared, namely gases of rough hard disks and
of hard-dumbbell molecules. Close similarities, but also surprising differences, are found. Most surprising
is the fast disappearance of the Lyapunov modes and the breakdown of the
symplectic symmetry of the local Gram-Schmidt exponents for the rough hard disk systems, if the moment of inertia
is increased from zero. In Sec.~\ref{nonequilibrium} we summarize some of the results for stationary systems far from thermal equilibrium,
for which the Lyapunov spectra have been valuable guides for our understanding.
For dynamically thermostatted systems
in stationary non-equilibrium states, they provide a direct link with the Second Law of thermodynamics
due to the presence of a multifractal phase-space distribution. We conclude with a few short remarks
in Sec.~\ref{outlook}.
\section{Lyapunov exponents and perturbation vectors}
\label{general_intro}
Let ${\bf \Gamma}(t)= \{{\bf p},{\bf q}\}$ denote the instantaneous state of a dynamical particle system with a phase space $M$ of dimension $D$.
Here, ${\bf p}$ and ${\bf q}$ stand for the momenta and positions of all the particles. The motion equations are usually written
as a system of first-order differential equations,
\begin{equation}
\dot{\bf \Gamma} = {\bf F}({\bf \Gamma}),
\label{motion}
\end{equation}
where ${\bf F}$ is a (generally nonlinear) vector-valued function of dimension $D$. The
formal solution of this equation is written as ${\bf \Gamma}(t) = \phi^t ({\bf \Gamma}(0))$,
where the map $\phi^t: {\bf \Gamma} \to {\bf \Gamma}$ defines the flow in $M$, which maps a
phase point ${\bf \Gamma}(0)$ at time zero to a point ${\bf \Gamma}(t)$ at time $t$.
Next, we consider an arbitrary infinitesimal perturbation vector $\delta {\bf \Gamma}(t)$, which
depends on the position ${\bf \Gamma}(t)$ in phase space and, hence, implicitly on time.
It evolves according to the linearized equations of motion,
\begin{equation}
\dot{\delta {\bf \Gamma}} = {\cal J}({\bf \Gamma}) \delta {\bf \Gamma}.
\label{linearized}
\end{equation}
This equation may be formally solved according to
\begin{equation}
\delta{\bf \Gamma}(t) = D\phi^t \vert_{{\bf \Gamma}(0)}\; \delta{\bf \Gamma}(0),
\label{evolution}
\end{equation}
where $D\phi^t$ defines the flow in tangent space and is represented by a real but
generally non-symmetric $D \times D$ matrix. The dynamical (or Jacobian) matrix,
\begin{equation}
{\cal J}({\bf \Gamma}) \equiv \frac{ \partial {\bf F}}{ \partial {\bf \Gamma}},
\label{jacobian}
\end{equation}
determines whether the perturbation vector $ \delta {\bf \Gamma}(t) $
tends to grow or shrink at the particular point ${\bf \Gamma}(t)$ in phase space the
system occupies at time $t$. Accordingly, the matrix element
\begin{equation}
\delta {\bf \Gamma}^{\dagger}({\bf \Gamma}) \, {\cal J}({\bf \Gamma}) \, \delta {\bf \Gamma}({\bf \Gamma})
\label{lyageneral}
\end{equation}
turns out to be positive or negative, respectively. Here, $^\dagger$ means transposition. If, in addition, the perturbation is normalized,
$||\delta {\bf \Gamma}|| = 1$, and points into particular directions in tangent space to be specified below,
this matrix element turns out to be a local rate for the growth or decay of $|| \delta {\bf \Gamma}(t) ||$
and will be referred to as a {\em local} Lyapunov exponent $\Lambda({\bf \Gamma})$ at the phase point
${\bf \Gamma}$.
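As a numerical aside (our illustration, with a made-up $2\times2$ Jacobian not tied to any particular fluid model), the quadratic form above can be checked against a finite-difference estimate of the growth rate of a normalized perturbation:

```python
import numpy as np

# Local exponent Lambda = v^T J v for a normalized tangent vector v under
# the linearized dynamics d(delta Gamma)/dt = J delta Gamma.
# J below is a hypothetical 2x2 Jacobian chosen purely for illustration.
J = np.array([[1.0, 2.0],
              [0.5, -1.0]])
v = np.array([1.0, 0.0])          # normalized perturbation, ||v|| = 1

Lambda_local = v @ J @ v          # local exponent as a quadratic form

# Check: finite-difference growth rate of ||v(t)|| over a tiny time step
dt = 1e-6
v_dt = v + dt * (J @ v)           # Euler step of the linearized dynamics
numeric = (np.log(np.linalg.norm(v_dt)) - np.log(np.linalg.norm(v))) / dt

print(Lambda_local, numeric)      # both approximately 1.0
```

Note that the quadratic form only probes the symmetric part of ${\cal J}$, which is exactly what governs the instantaneous change of the norm.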
\subsection{Covariant Lyapunov vectors}
In 1968 Oseledec published his celebrated multiplicative ergodic theorem \cite{Oseledec:1968,Ruelle:1979,Eckmann:1985,Ruelle:1999},
in which he proved that under very general assumptions (which apply to molecular fluids) the tangent space
decomposes into subspaces $E^{(j)}$ with dimension $m_{j}$,
\begin{equation}
TM({\bf \Gamma}) = E^{(1)}({\bf \Gamma}) \oplus E^{(2)}({\bf \Gamma}) \oplus \cdots \oplus E^{(L)}({\bf \Gamma}),
\label{split}
\end{equation}
for almost all ${\bf \Gamma} \in M$ (with respect to the Lebesgue measure), such that $\sum_{j=1}^L m_j = D$.
These subspaces evolve according to
\begin{equation}
D \phi^t \vert_{{\bf \Gamma}(0)} \; E^{(j)}\left({\bf \Gamma}(0)\right) = E^{(j)}\left( {\bf \Gamma}(t)\right) ,
\label{defcov}
\end{equation}
and are said to be covariant, which means that they co-move -- and in particular co-rotate -- with the flow in tangent space.
In general they are not orthogonal to each other.
If $ {\vec v}\left({\bf \Gamma}(0)\right) \in E^{(j)}\left({\bf \Gamma}(0)\right)$
is a vector in the subspace $E^{(j)}\left({\bf \Gamma}(0)\right)$, it evolves according to
\begin{equation}
D \phi^t \vert_{{\bf \Gamma}(0)} \; \vec v\left({\bf \Gamma}(0)\right)
= \vec v\left( {\bf \Gamma}(t)\right) \in E^{(j)}\left({\bf \Gamma}(t)\right).
\label{defcovv}
\end{equation}
The numbers
\begin{equation}
^{(\pm)}\lambda^{(j)} = \lim_{t \rightarrow \pm \infty} \dfrac{1}{\vert t \vert} \, \ln
\frac{ \big\|{\vec v} \left({\bf \Gamma}(t)\right) \big\|} { \big\|{\vec v} \left({\bf \Gamma}(0)\right) \big\|}
\label{covlambda}
\end{equation}
for $j \in \{ 1,2,\dots, L\} $ exist and are referred to as the (global) Lyapunov exponents.
Here, the upper index $(+)$ or $(-)$ indicates, whether the trajectory
is being followed forward or backward in time, respectively.
If a subspace dimension $m_{j}$ is larger than one, then the respective exponent is called
degenerate with multiplicity $m_{j}$. If all exponents are repeated according to their
multiplicity, there are $D$ exponents altogether, which are commonly ordered according to size,
\begin{eqnarray}
^{(+)}\lambda_1& \ge & \cdots \ge ^{(+)}\lambda_D, \label{e1}\\
^{(-)}\lambda_1& \ge & \cdots \ge ^{(-)}\lambda_D, \label{e2} \\
\nonumber
\end{eqnarray}
where the subscripts are referred to as Lyapunov indices.
The vectors ${\vec v}^{\ell}$ generating
$^{(\pm)}\lambda_{\ell}$ according to
\begin{equation}
^{(\pm)}\lambda_{\ell} = \lim_{t \rightarrow \pm \infty} \dfrac{1}{\vert t \vert} \, \ln
\frac{ \big\|{\vec v}^{\ell} \left({\bf \Gamma}(t)\right) \big\|} { \big\|{\vec v}^{\ell} \left({\bf \Gamma}(0)\right) \big\|}; \;\;\;
\ell = 1,2,\dots,D,
\label{covlambda1}
\end{equation}
are called covariant Lyapunov vectors. This notation, which treats the tangent space dynamics in terms of a set
of vectors, is more convenient for algorithmic purposes, although the basic theoretical objects are the covariant subspaces
$E^{(j)}; \; j = 1,2,\dots,L.$
Degenerate Lyapunov exponents appear if there exist
intrinsic continuous symmetries (such as invariance of the Lagrangian with respect to time and/or
space translation, giving rise to energy and/or momentum conservation, respectively). For particle systems such symmetries almost
always exist. Some consequences will be discussed below.
The global exponents for systems evolving forward or backward in time are related according to
\begin{equation}
^{(+)}\lambda_{\ell} = - ^{(-)}\lambda_{D+1-\ell}; \: \; \ell \in \{1,2, \dots , D\}.
\label{fb}
\end{equation}
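This symmetry is transparent for a single invertible linear tangent map, since the singular values of $M^{-1}$ are the reciprocals of those of $M$; a brief sketch with an arbitrary illustrative matrix:

```python
import numpy as np

# Forward/backward symmetry for a single invertible tangent map M:
# the singular values of M^{-1} are the reciprocals of those of M, so the
# log-singular values obey fwd[l] = -bwd[D+1-l] (order reversed).
M = np.array([[2.0, 1.0],
              [0.3, 0.8]])            # arbitrary invertible example
fwd = np.log(np.linalg.svd(M, compute_uv=False))                # descending
bwd = np.log(np.linalg.svd(np.linalg.inv(M), compute_uv=False)) # descending
print(fwd, bwd)                       # fwd equals -bwd in reversed order
```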
The subspaces $E^{(j)}$ and, hence, the covariant Lyapunov vectors
${\vec v}^{\ell}$ generally are not pairwise orthogonal.
If in Eq.~(\ref{covlambda}) the time evolution is not followed from zero to infinity but only over a finite time interval $\tau > 0$, so-called
covariant {\em finite-time dependent} Lyapunov exponents are obtained,
\begin{equation}
^{(\pm)}\bar{\Lambda}_{\ell}^{\tau,\mbox{cov}} = \dfrac{1}{ \tau} \, \ln \, \big\| \, D\phi^{ \pm \tau}\vert_{{\bf \Gamma}(0)} \,
\; \, {\vec v}^{\ell} \left({\bf \Gamma}(0)\right) \, \big\|.
\label{covftlambda}
\end{equation}
In the limit $\tau \to 0$ so-called {\em local} Lyapunov exponents are generated,
\begin{eqnarray}
^{(\pm)}\Lambda_{\ell}^{\mbox{cov}}\left({\bf \Gamma}\right) & = & \lim_{t \rightarrow \pm 0} \dfrac{1}{\vert t \vert} \, \ln \, \big\| \,
D\phi^t\vert_{{\bf \Gamma}} \, \; \, {\vec v}^{\ell} \left({\bf \Gamma}\right) \, \big\| \nonumber \\
& = & \left[ ^{(\pm)}{\vec v}^{\ell}({\bf \Gamma}) \right]^{\dagger} {\cal J}({\bf \Gamma}) \;\; ^{(\pm)}{\vec v}^{\ell}({\bf \Gamma})
\label{covllambda}
\end{eqnarray}
where $\|{\vec v}^{\ell}({\bf \Gamma})\| = 1$ is required. The second equality has the same structure as
Eq.~(\ref{lyageneral}), applied to the covariant vectors, and indicates that the local exponents are point functions, which
only depend on ${\bf \Gamma}$. The finite-time-dependent exponents may be viewed as time averages of local exponents over a
stretch of trajectory during the finite time $\tau$, the global exponents as time averages over an infinitely long trajectory.
For the latter to become dynamical invariants, one requires ergodicity, which we will always assume in the following for
lack of other evidence.
\subsection{Orthogonal Lyapunov vectors}
Another definition of the Lyapunov exponents, also pioneered by Oseledec
\cite{Oseledec:1968,Ruelle:1979,Eckmann:1985,Ruelle:1999}, is via the
real and symmetric matrices
\begin{equation}
\lim_{t \to \pm \infty} \left[ D \phi^{t}\vert_{{\bf \Gamma}(0)}^{\dagger} D \phi^{t}\vert_{{\bf \Gamma}(0)} \right]^{\frac{1}{2|t|}},
\label{Oseledec1}
\end{equation}
which exist with probability one (both forward and backward in time). Their (real) eigenvalues involve the global
Lyapunov exponents,
\begin{equation}
\exp(^{(\pm)}\lambda_1) \ge \cdots \ge \exp(^{(\pm)}\lambda_D),
\end{equation}
where, as before, degenerate eigenvalues are repeated according to their multiplicities.
For non-degenerate and degenerate eigenvalues the $>$ and $=$ signs apply, respectively.
The corresponding eigenspaces, $U_{\pm}^{(j)}; \; j = 1,\dots, L$, are pairwise orthogonal and provide two
additional decompositions of the tangent space at almost every point in phase space, one forward $(+)$ and one backward $(-)$ in time:
\begin{equation}
TM({\bf \Gamma}) = U_{\pm}^{(1)}({\bf \Gamma}) \oplus U_{\pm}^{(2)}({\bf \Gamma}) \oplus \cdots \oplus U_{\pm}^{(L)}({\bf \Gamma}).
\label{splitalt}
\end{equation}
In each case, the dimensions $m_j$ of the Oseledec subspaces $U_{\pm}^{(j)}; \; j = 1,\dots,L, $ have a sum equal to the phase-space dimension $D$:
$\sum_{j=1}^L m_j = D.$
These subspaces are not covariant.
The classical algorithm for the computation of (global) Lyapunov exponents \cite{Benettin,Shimada,Wolf} carefully keeps track of
all $d$-dimensional infinitesimal volume elements $\delta {V}^{(d)}$ which (almost always) evolve according to
\begin{equation}
\delta {V}^{(d)}(t) \approx \delta {V}^{(d)}(0) \exp \left( \sum_{\ell = 1}^d \lambda_{\ell} t \right).
\label{volume}
\end{equation}
This is algorithmically achieved with the help of a set of perturbation vectors $^{(\pm)}{\vec g}_{\ell}({\bf \Gamma}); \;\; \ell = 1,\dots,D$, which are either periodically re-orthonormalized
with a Gram-Schmidt (GS) procedure (equivalent to a QR-decomposition) \cite{recipes}, or are continuously constrained to stay
orthonormal with a triangular matrix of Lagrange multipliers \cite{HP1987,Goldhirsch,PH88,PH04,BPDH}. For historical reasons we
refer to them as
GS-vectors. They have been shown to be the spanning vectors for the Oseledec subspaces $U_{\pm}^{(j)}$ \cite{Ershov}.
Accordingly, they are orthonormal but not covariant.
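The volume relation~(\ref{volume}) becomes exact for a linear toy map with prescribed exponents; the following sketch (with purely illustrative values) verifies it via the determinant:

```python
import numpy as np

# For a diagonal tangent map with prescribed exponents, a d-dimensional
# volume element grows exactly as exp(sum of the first d exponents times t).
lams = np.array([0.5, -0.2])          # illustrative exponents, not measured
M = np.diag(np.exp(lams))             # tangent map over one time unit
t = 7
Vt = abs(np.linalg.det(np.linalg.matrix_power(M, t)))  # image of the unit square
print(Vt, np.exp(lams.sum() * t))     # both equal exp(0.3 * 7)
```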
Analogous to Eqs.~(\ref{covftlambda}) and~(\ref{covllambda}), one may define finite-time-dependent GS Lyapunov exponents
$ ^{(\pm)}\bar{\Lambda}_{\ell}^{\tau,\mbox{GS}}$ and local GS exponents $ ^{(\pm)}\Lambda_{\ell}^{\mbox{GS}}\left({\bf \Gamma}\right)$,
where the latter are again point functions,
\begin{equation}
^{(\pm)}\Lambda_{\ell}^{\mbox{GS}}\left({\bf \Gamma}\right) = [ ^{(\pm)}{\vec g}^{\ell}({\bf \Gamma}) ]^{\dagger}\; {\cal J({\bf \Gamma})}
\;\;^{(\pm)}{\vec g}^{\ell}({\bf \Gamma}).
\end{equation}
As before, the finite-time and global GS exponents are time averages of the respective local exponents over a finite
or an infinitely long trajectory, respectively \cite{Ershov,BPDH}.
Of all GS vectors, only $^{(\pm)}{\vec g}_{1}({\bf \Gamma})$, which is associated with the maximum global
exponent, evolves freely without constraints that might affect its orientation in tangent space. Therefore it agrees with
$^{(\pm)}{\vec v}_{1}({\bf \Gamma})$ and is also covariant. The corresponding local exponents also agree for $\ell = 1$,
but generally not for other $\ell$. However, the local covariant exponents may be computed from the local
GS exponents, and vice versa, if the angles between them are known \cite{BPDH}.
As an illustration of the relation between covariant Lyapunov vectors and orthonormal GS vectors, we consider a
simple two-dimensional example, the H\'enon map \cite{Henon},
\begin{eqnarray}
x_{n+1} &=& a - x_n^2 + b \, y_n
\nonumber
\enspace , \\
y_{n+1} &=& x_n \enspace ,
\nonumber
\end{eqnarray}
with $a=1.4$ and $b=0.3$.
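The QR-based procedure sketched above is easily written down for this map; the following minimal implementation (initial condition and iteration counts are our choices) recovers both exponents:

```python
import numpy as np

# Gram-Schmidt (QR) computation of both Lyapunov exponents of the Henon
# map with a = 1.4, b = 0.3, as in the text. Since |det J| = b at every
# point, the two exponents must sum to ln(b).
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)                          # orthonormal GS basis in tangent space
sums = np.zeros(2)
transient, n_steps = 1000, 100_000

for n in range(n_steps):
    J = np.array([[-2.0 * x, b],
                  [1.0,      0.0]])    # Jacobian at the current point
    x, y = a - x * x + b * y, x        # iterate the map
    Q, R = np.linalg.qr(J @ Q)         # re-orthonormalize the basis
    if n >= transient:                 # discard the transient before averaging
        sums += np.log(np.abs(np.diag(R)))

lams = sums / (n_steps - transient)
print(lams)                            # lams[0] ~ 0.42, lams.sum() ~ ln(0.3)
```

The diagonal elements of $R$ carry the local stretching factors of the GS basis, which is exactly the volume bookkeeping of Eq.~(\ref{volume}).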
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{figure_1.pdf}
\caption{(Color online) Covariant and GS vectors for a point $O$ on the H\'enon attractor (light-blue). The black line is a finite-time approximation of the (global) stable manifold.}
\label{henon_map}
\end{figure}
The light-blue line in Fig.~\ref{henon_map} represents the H\'enon attractor, which is known to coincide with its unstable manifold.
The black line is a finite-time approximation of its stable manifold. Let $O$ denote a point on the attractor.
The GS vectors at this point are indicated by ${\vec g}_1(O)$ and ${\vec g}_2(O)$, where the former is also covariant and
identical to ${\vec v}_1(O)$ (parallel to the unstable manifold). As required, the covariant vector ${\vec v}_2(O)$ is tangent to the
stable manifold (which was determined by a different method). For the computation of the covariant vectors at $O$,
it is necessary to follow and store the reference trajectory and the GS vectors sufficiently far into the future up to a point $F$
in our example. In Fig.~\ref{henon_map} the GS vectors at this point are denoted by ${\vec g}_1(F)$ and ${\vec g}_2(F)$. Applying an algorithm
by Ginelli {\em et al.} \cite{Ginelli}, to which we shall return below, a backward iteration to the original phase point $O$ yields the covariant vectors,
${\vec v}_2(O)$ in particular.
\subsection{Symmetry properties of global and local exponents}
For ergodic systems the global exponents are averages of the local exponents over the natural measure
in phase space and, according to the multiplicative ergodic theorem for chaotic systems, they do not depend on
the metric and the norm one applies to the tangent vectors. Also the choice of the coordinate system
(Cartesian or polar, for example) does not matter.
For practical reasons the Euclidean (or $L_2$) norm will be used throughout in the following. It also
does not matter, whether covariant or Gram-Schmidt vectors are used for the computation.
The global exponents are truly dynamical invariants.
This is not the case for the local exponents. They depend on the norm and on the coordinate system.
They also generally depend on the set of basis vectors, covariant or GS, as mentioned before. Furthermore, some
symmetry properties of the equations of motion are reflected quite differently by the two local representations.
\begin{itemize}
\item Local Gram-Schmidt exponents: During the construction of the GS vectors, the changes of the
differential volume elements $\delta {V}^{(d)}$ following Eq.~(\ref{volume}) are used to compute the local exponents.
If the total phase volume is conserved as is the case for time-independent Hamiltonian systems, the following sum rule
holds for almost all ${\bf \Gamma}$:
\begin{equation}
\sum_{\ell = 1}^D \Lambda_{\ell}^{\mbox{GS}}({\bf \Gamma}) = 0.
\label{sum_rule}
\end{equation}
In this symplectic case we can even say more. For each positive local GS exponent there exists a
negative local GS exponent such that their pair sum vanishes \cite{Meyer}:
\begin{equation}
^{(\pm)}\Lambda_{\ell}^{\mbox{GS}}({\bf \Gamma}) = -^{(\pm)}\Lambda_{D + 1 -\ell }^{\mbox{GS}}({\bf \Gamma}),\;\; \ell = 1,\dots,D. \label{gsfwd}
\end{equation}
As indicated, such a symplectic local pairing symmetry is found both forward and backward in time.
Non-symplectic systems do not display that symmetry.
On the other hand, the re-orthonormalization process tampers with the orientation and rotation of the GS vectors
and destroys all consequences of the time-reversal invariance property of the original motion equations.
\item Local covariant exponents: During their construction \cite{Ginelli}, only the norm of the covariant perturbation vectors
needs to be periodically adjusted for practical reasons, but the angles between them remain unchanged. This process effectively destroys
all information concerning the $d$-dimensional volume elements. Thus, no symmetries
analogous to Eq.~(\ref{gsfwd}) exist. Instead, the re-normalized covariant vectors faithfully
preserve the time-reversal symmetry of the equations of motion, which is reflected by
\begin{equation}
^{(\mp)}\Lambda_{\ell}^{\mbox{cov}}({\bf \Gamma}) = -^{(\pm)}\Lambda_{D+1 -\ell}^{\mbox{cov}}({\bf \Gamma}), \; \; \ell = 1,\dots,D,
\label{local_symmetry}
\end{equation}
regardless of whether the system is symplectic or not. This means that an expanding co-moving direction is
converted into a collapsing co-moving direction by an application of the time-reversal operation.
\end{itemize}
The set of all global Lyapunov exponents, ordered according to size, is referred to as the Lyapunov spectrum. For stationary Hamiltonian systems in thermal equilibrium the (global) conjugate pairing rule holds,
\begin{equation}
^{(\pm)}\lambda_{\ell} = -^{(\pm)}\lambda_{D + 1 -\ell }, \label{cpr}
\nonumber
\end{equation}
which is a consequence of Eq.~(\ref{gsfwd}) and of the fact that the global exponents are the phase-space averages of these quantities.
In such a case only the first half of the spectrum containing the positive exponents needs to be computed. In the following we shall
refer to this half as the positive branch of the spectrum.
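The pairing rule can be verified numerically for a simple symplectic example; here we use the Chirikov standard map (our choice of model, not discussed in the text), whose Jacobian has unit determinant:

```python
import numpy as np

# Conjugate pairing check for a symplectic map: the Chirikov standard map
# p' = p + K sin(q), q' = q + p' has det J = 1 at every point, so its two
# Lyapunov exponents must satisfy lam1 = -lam2. K = 6 is deep in the
# chaotic regime (our illustrative choice).
K = 6.0
q, p = 0.5, 0.3
Q = np.eye(2)
sums = np.zeros(2)
n_steps = 50_000

for _ in range(n_steps):
    J = np.array([[1.0 + K * np.cos(q), 1.0],
                  [K * np.cos(q),       1.0]])   # d(q',p')/d(q,p), det = 1
    p = p + K * np.sin(q)
    q = (q + p) % (2.0 * np.pi)
    Q, R = np.linalg.qr(J @ Q)
    sums += np.log(np.abs(np.diag(R)))

lams = sums / n_steps
print(lams)        # lams[0] > 0 and lams[0] + lams[1] = 0 up to round-off
```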
The subspaces spanned by the covariant (or GS) vectors associated with the strictly
positive/negative global exponents are known as the unstable/stable manifolds.
Both are covariant and are linked by the time-reversal transformation, which converts one into the other.
\subsection{Numerical considerations}
The computation of the Gram-Schmidt exponents is commonly carried out with the algorithm of
Benettin {\em et al.} \cite{Benettin,Wolf} and Shimada {\em et al.} \cite{Shimada}. Based on this classical approach, a
reasonably efficient algorithm for the computation of covariant exponents has been recently developed by Ginelli {\em et al.}
\cite{Ginelli,Ginelli_2013}. Some computational details for this method may also be
found in Refs.~\cite{BPDH,BP_2010,BP_2013}.
An alternative approach is due to Wolfe and Samelson \cite{Wolfe}, which was subsequently applied
to Hamiltonian systems with many degrees of freedom \cite{Romero}.
The considerations of the previous section are for time-continuous systems based on the differential equations (\ref{motion}). They may be
readily extended to maps such as systems of hard spheres, for which a pre-collision state of two colliding particles is
instantaneously mapped to an after-collision state. Between collisions the particles move smoothly.
With the linearized collision map the time evolution of the perturbation vectors
in tangent space may be constructed \cite{DPH_1996}.
In numerical experiments of stationary systems, the initial orientation of the perturbation vectors is arbitrary.
There exists a transient time, during which the perturbation vectors
converge to their proper orientation on the attractor. All symmetry relations mentioned above refer to this
well-converged state and exclude transient conditions \cite{WHH}.
For the computation of the full set of exponents, the reference trajectory and $D$ perturbation vectors
(each of dimension $D$) have to be followed in time, which requires $D(D+1)$ equations to be integrated. Present computer
technology limits the number of particles to about $10^4$ for time-continuous systems, and to $10^5$ for hard-body systems.
In all applications below appropriate reduced units will be used.
To ease the notation, we shall in the following omit to indicate the forward-direction in time by $(+)$, if there is no ambiguity.
\section{Two-dimensional hard-disk and WCA fluids in equilibrium}
\label{hard_wca}
\begin{figure}
\includegraphics[width=0.35\textwidth,angle=-90]{figure_2.pdf}
\caption{(Global) Lyapunov spectra of a planar hard-disk gas and of a planar WCA fluid with the same number of particles,
density and temperature. For details we refer to the main text. Only the positive branches of the spectra are shown.
Although the spectra consist of discrete points for integer values of the index $l$, smooth lines and
reduced indices $l/2N$ are used for clarity in the main panel. In the inset a
magnified view of the small-exponents regime is shown, for which $l$ is not normalized.
The figure is taken from Ref.~\cite{FP_2005}.
}
\label{comparison}
\end{figure}
We are now in the position to apply this formalism to various models for simple fluids \cite{FP_2005}. First we consider
two moderately dense planar gases, namely a system of elastic smooth hard disks with a pair potential
\begin{equation}
\phi_{HD} = \left\{\begin{array}{ll} \infty, &\qquad r \leq \sigma, \\
0,&\qquad r>\sigma, \end{array}\right.
\nonumber
\end{equation}
and a two-dimensional Weeks-Chandler-Andersen (WCA) gas interacting with a repulsive soft potential \cite{WCA1,WCA2}
\begin{equation}
\phi_{WCA}=\left\{\begin{array}{ll}4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-
\left(\frac{\sigma}{r}\right)^6\right]+\epsilon,
&\qquad
r \leq 2^{1/6}\sigma, \\
0,&\qquad r>2^{1/6}\sigma. \end{array}\right.
\nonumber
\end{equation}
In Fig.~\ref{comparison} the positive branches of their (global) Lyapunov spectra are compared.
Both gases consist of $N = 400$ particles in a square box with side length $L$ and periodic boundaries,
and have the same density, $\rho = N/L^2 = 0.4$, and temperature, $T = 1$. Although, as expected, for low densities
the Lyapunov spectra are rather insensitive to the interaction potential, the comparison of
Fig.~\ref{comparison} for moderately dense gases already reveals a rather strong sensitivity. In particular,
the maximum exponent $\lambda_1$ is much lower for the WCA fluid, which means that deterministic chaos is
significantly reduced in systems with smooth interaction potentials. This difference becomes even more pronounced
for larger densities, as will be shown below.
\begin{figure}
\includegraphics[width=0.3\textwidth]{figure_3.jpg}
\caption{Localization of the perturbation vector ${\vec g}^1$ in physical space (the red square) for a
planar gas of 102,400 soft repulsive disks \cite{HBP_1998}. The quantity $\mu^{1}$, measuring the
contribution of individual particles to the perturbation vector associated with the maximum GS exponent, is
plotted in the vertical direction at the position of the particles. }
\label{local_surface}
\end{figure}
The maximum (minimum) Lyapunov exponent denotes the rate of fastest perturbation growth (decay)
and is expected to be dominated by the fastest dynamical events such as a locally enhanced collision frequency.
To prove this expectation one may ask how individual particles contribute to the growth of
the perturbations determined by ${\vec v}^{\ell}$ or ${\vec g}^{\ell}$.
Writing the {\em normalized} perturbation vectors in terms of the position and momentum perturbations of all
particles, ${\vec v}^{\ell} = \{\delta {\bf q}_i^{(\ell)}, \delta {\bf p}_i^{(\ell)}; i=1,\dots, N \}$, the quantity
\begin{equation}
\mu_i^{(\ell)} \equiv \left( \delta {\bf q}_i^{(\ell)}\right)^2 + \left( \delta {\bf p}_i^{(\ell)}\right)^2
\label{gamma}
\end{equation}
is positive, bounded, and obeys the sum rule $\sum_{i=1}^N \mu_i^{(\ell)} = 1$ for any $\ell$.
It may be interpreted as a measure for the activity of particle $i$ contributing to the perturbation in question.
Equivalent relations apply for ${\vec g}^{\ell}$.
If $\mu_i^{(1)}$ is plotted, perpendicular to the simulation plane, at the position of particle $i$,
surfaces such as in Fig.~\ref{local_surface} are obtained. They are strongly peaked in a small domain of the
physical space indicating strong localization of ${\vec v}^{1} \equiv {\vec g}^1$. It means that only a small
fraction of all particles contributes to the
perturbation growth at any instant of time. This property is very robust and persists in the thermodynamic limit
in the sense that the fraction of particles that contribute significantly to the formation of ${\vec v}^{1}$
decreases as a negative power of $N$ \cite{Milano,FHPH_2004,FP_2005}.
For example, Fig.~\ref{local_surface} is obtained for a system of 102,400 (!) smooth disks interacting with a
pair potential similar to the WCA potential \cite{HBP_1998}.
This dynamical localization of the perturbation vectors associated with the large Lyapunov exponents is a very general
phenomenon and has been also observed for one-dimensional models of space-time chaos \cite{Pikovsky_1} and
Hamiltonian lattices \cite{Pikovsky_2}.
\begin{figure}[t]
\centering
\includegraphics[angle=-90,width=0.47\textwidth]{figure_4.pdf}
\caption{Localization spectra $W$ for the complete set of Gram-Schmidt vectors (blue)
and covariant vectors (red) for 198 hard disks in a periodic box with an aspect ratio 2/11.
The density is $\rho = 0.7$ and the temperature $T = 1$.
Reduced indices $\ell/4N$ are used on the abscissa of the main panel. The inset shows a magnification
of the central part of the spectra dominated by Lyapunov modes.
From Ref.~\cite{BP_2010}.}
\label{W}
\end{figure}
For $\ell > 1$, the localization for ${\vec v}^{\ell}$ differs from that of ${\vec g}^{\ell}$. This may be seen
by using a localization measure due to Taniguchi and Morriss \cite{TM_2003a,TM_2003b},
\begin{equation}
W = \frac{1}{N} \exp\langle S \rangle; \;\;\;\; S({\bf \Gamma}(t)) = -\sum_{i=1}^N \mu_i^{(\ell)} \ln \mu_i^{(\ell)} ,
\nonumber
\end{equation}
which is bounded according to $1/N \le W \le 1$. The lower and upper bounds correspond to complete localization
and delocalization, respectively. $S$ is the Shannon entropy for the `measure' defined in Eq.~(\ref{gamma}), and
$\langle \dots \rangle$ denotes a time average.
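For a single normalized perturbation vector (omitting the time average), the bounds of $W$ are easily illustrated with synthetic weight vectors; the values below are constructed examples, not simulation data:

```python
import numpy as np

# Localization measure W = exp(S)/N for a single normalized perturbation
# vector, with S the Shannon entropy of the particle weights mu_i.
# Synthetic mu vectors illustrate the bounds: W = 1/N (fully localized)
# and W = 1 (fully delocalized).
N = 400

def W(mu):
    mu = mu[mu > 0]                    # convention: 0 ln 0 = 0
    S = -np.sum(mu * np.log(mu))       # Shannon entropy of the weights
    return np.exp(S) / N

mu_deloc = np.full(N, 1.0 / N)         # every particle contributes equally
mu_loc = np.zeros(N)
mu_loc[0] = 1.0                        # one particle carries everything

rng = np.random.default_rng(0)
dq, dp = rng.normal(size=(N, 2)), rng.normal(size=(N, 2))
mu_rand = (dq**2).sum(axis=1) + (dp**2).sum(axis=1)
mu_rand /= mu_rand.sum()               # enforce the sum rule

print(W(mu_deloc), W(mu_loc), W(mu_rand))  # 1.0, 1/N, and a value in between
```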
The localization spectra for the two sets of perturbation vectors are shown in
Fig.~\ref{W} for a rather dense system of hard disks in two dimensions. One may infer from the figure that
localization is much stronger for the covariant vectors (red line) whose orientations in tangent space
are only determined by the tangent flow and are not constrained by periodic re-orthogonalization steps
as is the case for the Gram-Schmidt vectors (blue line). One further observes that the localization of the
covariant vectors becomes less and less pronounced the more the regime of coherent Lyapunov modes, located in the center
of the spectrum, is approached.
Next we turn our attention to the maximum Lyapunov exponent $\lambda_1$ and to the Kolmogorov-Sinai entropy.
Both quantities are indicators of dynamical chaos. The KS-entropy is the rate with which information about an initial state is generated,
if a finite-precision measurement at some later time is retraced backward along the stable manifold. According to
Pesin's theorem \cite{Pesin} it is equal to the sum of the positive Lyapunov exponents, $h_{KS} = \sum_{\lambda_{\ell} > 0} \lambda_{\ell}$.
It also determines the characteristic time for phase space mixing according to \cite{Arnold,Zaslav,DP_relax}
$\tau^{mix} = 1/h_{KS}$.
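Numerically, both quantities follow directly from a computed spectrum; a trivial sketch with made-up exponents:

```python
# Pesin's formula h_KS = sum of the positive exponents, and the associated
# mixing time tau_mix = 1/h_KS. The spectrum below is hypothetical (it
# merely respects the conjugate pairing rule), chosen only to illustrate
# the bookkeeping.
spectrum = [1.9, 1.2, 0.6, 0.1, -0.1, -0.6, -1.2, -1.9]
h_KS = sum(l for l in spectrum if l > 0)
tau_mix = 1.0 / h_KS
print(h_KS, tau_mix)                  # 3.8 and ~0.263
```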
In Fig.~\ref{dens} we compare the isothermal density dependence of the maximum Lyapunov exponent $\lambda_1$ (top panel), of the
smallest positive exponent $\lambda_{2N-3}$ (middle panel), and of the Kolmogorov-Sinai (KS) entropy per particle $h_{KS}/N$
(bottom panel) for planar WCA and HD fluids. Both systems contain $N = 375$ particles in a periodic box with aspect ratio 0.6 and with
a temperature $T = 1$. As expected, these quantities for the two fluids agree at small densities, but differ significantly
for large $\rho$.
\begin{figure}[t]
\centering{\includegraphics[width=0.3\textwidth,angle=-90]{figure_5a.pdf}}
\vfill
{\includegraphics[width=0.3\textwidth,angle=-90]{figure_5b.pdf}}
\vfill
{\includegraphics[width=0.3\textwidth,angle=-90]{figure_5c.pdf}}
\vspace{3mm}
\caption{Isothermal density dependence of the maximum Lyapunov exponent, $\lambda_1$ (top), of
the smallest positive exponent, $\lambda_{2N-3}$ (middle), and of the Kolmogorov-Sinai entropy
per particle, $h_{KS}/N$ (bottom), for hard and soft-disk systems.
From Ref.~\cite{FP_2005}.}
\label{dens}
\end{figure}
For hard disks, van Zon and van Beijeren \cite{Zon} and de Wijn \cite{Wijn} used kinetic theory to obtain expressions
for $\lambda_1$ and $h_{KS}/N$ to leading orders of $\rho$. They agree very well with computer simulations of
Dellago {\em et al.} \cite{BDPD,ZBD}. The regime of larger densities, however, is only qualitatively understood. $\lambda_1^{HD}$ and $h_{KS}^{HD}/N$
increase monotonically due to the increase of the collision frequency $\nu$. The (first-order) fluid-solid transition
shows up as a step between the freezing point of the fluid ($\rho_f^{HD} = 0.88$)
and the melting point of the solid ($\rho_m^{HD} = 0.91$ \cite{Stillinger}) (not shown in the figure). These steps disappear if, instead of the density, the collision frequency is plotted on the abscissa. $\lambda_1^{HD}$ and $h_{KS}^{HD}/N$ diverge at the close-packed density due to the divergence of $\nu$.
For the WCA fluid, both the maximum exponent and the Kolmogorov-Sinai entropy have a maximum as a function of density, and become very small when the density of the freezing point is approached, which happens for $\rho_f^{WCA} = 0.89 $ \cite{Tox}.
This behavior is not too surprising in view of the
fact that a harmonic solid is not even chaotic and its $\lambda_1$ vanishes. The maximum of $\lambda_1^{WCA}$ occurs
at a density of about $0.9 \rho_f^{WCA}$ and confirms earlier results for Lennard-Jones fluids \cite{PHH_1990}.
At this density chaos is most pronounced, possibly due to the onset of the cooperative dynamics characteristic of phase transitions.
At about the same density, mixing becomes most effective as is demonstrated by the maximum of the KS-entropy. The latter is related
to the mixing time in phase space according to $\tau^{mix} = 1 / h_{KS}$ \cite{Arnold,Zaslav}.
It is interesting to compare the Lyapunov time $\tau_{\lambda} = 1/\lambda_1$, which measures the time for the system to ``forget'' its past,
with the (independently-measured) time between collisions of a particle, $\tau_c = 1/\nu$. In Fig.~\ref{tau} such a comparison is shown for a
three-dimensional system of 108 hard spheres in a cubic box with periodic boundary conditions \cite{DP_relax,DP_3d}.
Also included is the behavior of $\tau_{KS} \equiv N / h_{KS}$. For small densities, we have $\tau_{\lambda} \ll \tau_c$,
and subsequent collisions are uncorrelated. This provides the basis for the validity of lowest-order kinetic theory (disregarding
correlated collisions). For densities $\rho > 0.4$ the Lyapunov time is progressively larger than the collision time and higher-order corrections such as ring collisions
become important. The lines for $\tau_{\lambda}$ and $\tau_{KS}$ cross even earlier, at a density of 0.1. This is a
consequence of the shape change of the Lyapunov spectrum with density \cite{DP_3d}: the small positive exponents grow faster with $\rho$ than the large exponents. It also follows that neither the Lyapunov time nor the mixing time determines the
correlation decay of, say, the particle velocities. For lower densities, for which ring collisions are not important, the decay of the
velocity autocorrelation function $z(t)$ is strictly dominated by the collision time. In Ref.~\cite{DP_relax}
we also demonstrate that for hard-particle systems the time $\tau_{\lambda}$ does not provide an upper bound for the time for which
correlation functions may be reliably computed.
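As a rough numerical illustration of how the three time scales are related, the sketch below computes them directly from their definitions. The input values are purely illustrative and are not data from the figure:

```python
# Sketch of the three time scales compared above (illustrative numbers
# only; they are NOT data from Fig. "tau").
def time_scales(lambda_1, h_ks_per_particle, nu):
    """Return (Lyapunov time, KS time, collision time)."""
    tau_lambda = 1.0 / lambda_1          # tau_lambda = 1/lambda_1
    tau_ks = 1.0 / h_ks_per_particle     # tau_KS = N/h_KS = 1/(h_KS/N)
    tau_c = 1.0 / nu                     # tau_c = 1/nu
    return tau_lambda, tau_ks, tau_c

# Low-density-like regime: Lyapunov time much smaller than collision time,
# so subsequent collisions decorrelate (basis of lowest-order kinetic theory).
tl, tks, tc = time_scales(lambda_1=5.0, h_ks_per_particle=8.0, nu=0.5)
assert tl < tc
```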
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figure_6.pdf}
\caption{(Color online) Comparison of the Lyapunov time $\tau_{\lambda} = 1/\lambda_1$, the time $ \tau_{KS} \equiv N/h_{KS}$, and the collision time
$\tau_c = 1/\nu$, as a function of the collision frequency $\nu$, for a system of 108 hard spheres in a cubic box with periodic boundaries.
The vertical lines indicate collision frequencies corresponding to the densities $\rho=0.1, 0.4,$ and $1.0$.
}
\label{tau}
\end{figure}
For later reference, we show in Fig.~\ref{symplectic} the time-dependence of the local exponents for $\ell = 1$ and
$\ell = D = 4N = 16$ of a system consisting of four smooth hard disks. The symplectic symmetry
as given by Eq.~(\ref{gsfwd}) is well obeyed.
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{figure_7.pdf}
\caption{(Color online) Test of the symplectic symmetry of Eq.~(\ref{gsfwd}) for a smooth hard-disk system
with $N = 4$. The phase space has 16 dimensions. The local GS exponents for $\ell = 1$ and $\ell = 16$ are
plotted as a function of time $t$ along a short stretch of trajectory.
}
\label{symplectic}
\end{figure}
\section{Lyapunov modes}
\label{hydro}
The very first simulations of the Lyapunov spectra for hard disks already revealed an interesting step-like structure
of the small positive and negative exponents in the center of the spectrum \cite{DPH_1996}, which is due to
degenerate exponents. A similar structure was also found for hard dumbbell systems \cite{MPH_1998}, to which we come
back in Sec.~\ref{dumbb}. The explanation for this behavior lies in the fact that the perturbation vectors for these
exponents are characterized by coherent sinusoidal patterns spread out over the whole physical space (the simulation cell).
We have referred to these collective patterns as Lyapunov modes.
The modes are interpreted as a consequence of a spontaneous breaking of continuous symmetries and, hence,
are intimately connected with the zero modes spanning the central manifold \cite{Szasz_Buch,Hthesis}.
They appear as sinusoidal modulations of the zero modes in space, with wave number $k \ne 0$. For $k \to 0$, the modes reduce to a linear superposition of the zero modes, and their
Lyapunov exponent vanishes. The experimentally observed wave vectors depend on the size of the system and the
nature of the boundaries (reflecting or periodic).
Our discovery of the Lyapunov modes triggered a number of theoretical approaches.
Eckmann and Gat were the first to provide theoretical arguments for the existence of the Lyapunov modes
in transversal-invariant systems \cite{Eck}. Their model did not have a dynamics in phase space but only an
evolution matrix in tangent space, which was modeled as a product of independent random
matrices. In another approach, McNamara and Mareschal isolated the six hydrodynamic fields related to the invariants of the binary
particle collisions and the vanishing exponents, and used a generalized Enskog theory to derive hydrodynamic evolution equations for
these fields. Their solutions are the Lyapunov modes \cite{McNamara}. In a more detailed
extension of this work restricted to small densities, a generalized Boltzmann equation is used for the
derivation \cite{Mare}. de Wijn and van Beijeren pointed out the analogy to the Goldstone mechanism of
constructing (infinitesimal) excitations of the zero modes and the Goldstone modes of fluctuating
hydrodynamics \cite{Goldstone,Forster}. They used this analogy to derive the modes
within the framework of kinetic theory \cite{Wijn_vB}. With a few exceptions, a rather good agreement with the simulation results was achieved, at least for low densities. Finally, Taniguchi, Dettmann, and Morriss approached the problem
from the point of view of periodic orbit theory \cite{TDM} and master equations \cite{TM_2002}.
The modes were observed for hard-ball systems in one, two, and three dimensions
\cite{DP_3d,Szasz_Buch,HPFDZ,TM_2003a,TM_2003b,Zabey}, for planar hard dumbbells \cite{Milano}, and also for one- and two-dimensional
soft particles \cite{Radons_Yang,Yang_Radons,FP_2005}.
A formal classification for smooth hard-disk systems has been given by Eckmann {\em et al.} \cite{Zabey}.
If the $4N$ components of a tangent vector are arranged according to
\begin{equation}
\delta {\bf \Gamma}\,=\,\left( \delta q_x,\delta q_y ; \delta p_x,\delta p_y \right), \nonumber
\end{equation}
the six orthonormal zero modes, which span the central manifold and are the
generators of the continuous symmetry transformations, are given by \cite{FHPH_2004,Zabey}
\begin{eqnarray}
\vec e_1 &=& \dfrac{1}{\sqrt{2 K}\,} \, ({p_x},{p_y};\,0,0) ,\nonumber \\
\vec e_2 &=& \dfrac{1}{\sqrt{N}\,} \, (1,0 ;\ 0,0) ,\nonumber \\
\vec e_3 &=& \dfrac{1}{\sqrt{N}\,} \, (0,1 ;\,0,0), \nonumber \\
\vec e_4 &=& \dfrac{1}{\sqrt{2 K}\,} \, (0,0 ;\,{p_x},{p_y} ) ,\nonumber \\
\vec e_5 &=& \dfrac{1}{\sqrt{N}\,} \, (0,0 ;\,1,0) ,\nonumber \\
\vec e_6 &=& \dfrac{1}{\sqrt{N}\,} \, (0,0 ;\,0,1) . \nonumber
\end{eqnarray}
$\vec e_1$ corresponds to a shift of the time origin,
$\vec e_4$ to a change of energy, $\vec e_2$ and $\vec e_3$ to a uniform translation
in the $x$ and $y$ directions, respectively, and $\vec e_5$ and $\vec e_6$ to a
shift of the total momentum in the $x$ and $y$ directions, respectively.
The six vanishing Lyapunov exponents associated with these modes have indices $2N-2 \le i \le 2N+3$ in the center of the spectrum.
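As a consistency check, the orthonormality of the six zero modes can be verified numerically. The sketch below (not part of the original analysis) assumes unit masses and a frame in which the total momentum vanishes, which is the standard setting in such simulations:

```python
import numpy as np

# Numerical check that the six zero modes e_1..e_6 are orthonormal,
# assuming unit masses and zero total momentum.
rng = np.random.default_rng(0)
N = 100
p = rng.normal(size=(N, 2))
p -= p.mean(axis=0)                  # enforce zero total momentum
K = 0.5 * (p ** 2).sum()             # total kinetic energy, m = 1

zeros = np.zeros(N)
ones = np.ones(N)
# Each mode is a 4N-vector arranged as (dq_x, dq_y; dp_x, dp_y).
e = np.array([
    np.concatenate([p[:, 0], p[:, 1], zeros, zeros]) / np.sqrt(2 * K),
    np.concatenate([ones, zeros, zeros, zeros]) / np.sqrt(N),
    np.concatenate([zeros, ones, zeros, zeros]) / np.sqrt(N),
    np.concatenate([zeros, zeros, p[:, 0], p[:, 1]]) / np.sqrt(2 * K),
    np.concatenate([zeros, zeros, ones, zeros]) / np.sqrt(N),
    np.concatenate([zeros, zeros, zeros, ones]) / np.sqrt(N),
])
gram = e @ e.T                       # Gram matrix of the six modes
assert np.allclose(gram, np.eye(6), atol=1e-12)
```

Note that $\vec e_1 \perp \vec e_2, \vec e_3$ (and $\vec e_4 \perp \vec e_5, \vec e_6$) only because the total momentum vanishes.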
Since, for small-enough $k$, the perturbation components of a particle obey
$\delta {\bf p} = C \delta {\bf q} $ for the unstable perturbations $(\lambda > 0)$, and
$\delta {\bf q} = - C \delta {\bf p} $ for the stable perturbations $(\lambda < 0)$, where $C > 0$ is a number, one may restrict the classification
to the $\delta {\bf q}$ part of the modes and, hence, to the basis vectors $\vec e_1, \vec e_2, \vec e_3$ \cite{Zabey}.
Now the modes with a wave vector ${\bf k}$ may be classified as follows:
\begin{enumerate}
\item {\bf Transverse modes} (T) are {\em divergence-free} vector fields consisting of a superposition of sinusoidal
modulations of $\vec e_2$ and $\vec e_3$.
\item {\bf Longitudinal modes} (L) are {\em irrotational} vector fields consisting of a superposition of sinusoidal modulations
of $\vec e_2$ and $\vec e_3$.
\item {\bf Momentum modes} (P) are vector fields consisting of sinusoidal modulations of $\vec e_1$. Due to the
random orientations of the particle velocities, a P mode is not easily recognized as a fundamental mode in a simulation.
However, it may be numerically transformed into an easily recognizable periodic shape \cite{Zabey,BP_2013}, as will
be shown in Fig.~\ref{P10} below.
\end{enumerate}
The subspaces spanned by these modes are denoted by $T ({\bf n})$, $L ({\bf n})$, and $P ({\bf n})$, respectively,
where the wave vector for a periodic box with sides $L_x, L_y$ is ${\bf k} = (2 \pi n_x / L_x, 2 \pi n_y/ L_y)$, and
${\bf n} \equiv (n_x, n_y)$, where $n_x$ and $n_y$ are integers.
For the same ${\bf k}$, the values of the Lyapunov exponents for the
longitudinal and momentum modes coincide. These modes actually belong to a combined
subspace $LP({\bf n}) \equiv L({\bf n}) \oplus P({\bf n})$. The dimensions of the subspaces $T ({\bf n})$ and
$LP({\bf n})$ determine the multiplicity of the exponents associated with the T and LP modes. For a periodic
boundary, say in the $x$ direction, the multiplicity is 2 for the T modes and 4 for the LP modes. For details we refer to
Ref.~\cite{Zabey}.
The transverse modes are stationary in space and time, but the L and P modes are not \cite{FHP_Congress,Zabey,CTM_2011}.
In the following, we only consider the case of periodic boundaries.
Any tangent vector from an LP subspace observed in a simulation
is a combination of a pure L and a pure P mode. The dynamics of such an LP pair may be represented as a rotation in a
two-dimensional space with a constant angular frequency proportional to the wave number of the mode,
\begin{equation}
\omega_{\bf n} = v k_{\bf n}, \nonumber
\end{equation}
where the L mode is continuously transformed into the P mode and vice versa.
Since the P mode is modulated with the velocity field of the particles, the experimentally observed mode pattern becomes periodically
blurred when its character is predominantly P. During each half period, in which all
spatial sine and cosine functions change sign,
the mode is offset in the direction of the wave vector ${\bf k}$, which gives it the appearance
of a traveling wave with an average phase velocity $v$. If reflecting boundaries are used, standing waves are obtained.
The phase velocity $v$ is about one third of the sound velocity \cite{FHP_Congress}, but otherwise seems to be unrelated to it.
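The rotation of an LP pair with angular frequency $\omega_{\bf n} = v k_{\bf n}$ can be sketched as follows. The phase velocity $v$ and box length are assumed inputs here, chosen for illustration only:

```python
import math

# Sketch of the LP-pair rotation: amplitudes (a_L, a_P) of an
# L(1,0)/P(1,0) pair rotating with omega = v * k (illustrative values).
L_x = 10.0
v = 1.0                                # assumed phase velocity
k = 2.0 * math.pi * 1 / L_x            # wave number of the (1,0) mode
omega = v * k

def lp_amplitudes(t, a0=1.0):
    """Amplitudes (a_L, a_P) of the rotating LP pair at time t."""
    return a0 * math.cos(omega * t), a0 * math.sin(omega * t)

period = 2.0 * math.pi / omega
a_L, a_P = lp_amplitudes(0.0)
assert (a_L, a_P) == (1.0, 0.0)        # pure L mode at t = 0
a_L, a_P = lp_amplitudes(period / 4.0)
assert abs(a_L) < 1e-12 and abs(a_P - 1.0) < 1e-12   # pure P a quarter period later
```

A quarter period after starting as a pure L mode, the pair has rotated into a pure P mode, consistent with the continuous transformation described above.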
The basis vectors spanning the subspaces of the T and LP modes may be reconstructed from the experimental data
with a least-squares method \cite{Zabey,BP_2010}. As an example, we consider the L(1,0) mode of a system
consisting of 198 hard disks at a density of 0.7 in a rectangular periodic box with an aspect ratio of 2/11.
The LP(1,0) subspace includes the tangent vectors for $388 \le \ell \le 391$ with four identical Lyapunov exponents.
The mutually orthogonal basis vectors spanning the corresponding four-dimensional subspace are viewed as
vector fields in position space and are given by \cite{Zabey}
\begin{equation}
L(1,0): \left( \begin{array}{l} 1\\ 0 \end{array} \right) \cos\left(\frac{2 \pi q_x}{ L_x }\right), \;\;
\left( \begin{array}{l} 1\\ 0 \end{array} \right) \sin\left(\frac{2 \pi q_x}{ L_x }\right),
\nonumber
\end{equation}
\begin{equation}
P(1,0): \left( \begin{array}{l} p_x\\ p_y \end{array} \right) \cos\left(\frac{2 \pi q_x}{ L_x }\right), \;\;
\left( \begin{array}{l} p_x\\ p_y \end{array} \right) \sin\left(\frac{2 \pi q_x}{ L_x }\right),
\nonumber
\end{equation}
where, for simplicity, we have omitted the normalization. If the Gram-Schmidt vectors are used for the reconstruction, the components corresponding to the cosine are shown for L(1,0) in Fig.~\ref{L10} and for P(1,0) in Fig.~\ref{P10}. In the latter case, the mode
structure becomes visible only after division by $p_x$ and $p_y$, as indicated, which also explains the larger scatter of the points.
The components proportional to the sine are analogous but are not shown. A related analysis may be carried out for the covariant vectors.
Recently, Morriss and coworkers \cite{CTM_2010,CTM_2011,MT_2013} considered in detail the tangent-space dynamics
of the zero modes and of the mode-forming perturbations over a time $\tau$, during which many collisions occur. In addition, they
enforced the mutual orthogonality of the mode subspaces and of their conjugate pairs (for which the Lyapunov exponents differ only in sign) with the central manifold, by invoking an (inverse) Gram-Schmidt procedure that starts with the zero modes and works outward
towards modes with larger exponents. With this procedure they were able to construct the modes in the limit of large $\tau$ and to obtain (approximate) values for the Lyapunov exponents for the first few Lyapunov modes (counted away from the null space) \cite{CTM_2010}. The agreement with simulation results is of the order of a few percent for the T modes, and slightly worse for the LP modes.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figure_8_top.pdf} \\
\includegraphics[width=0.47\textwidth]{figure_8_bottom.pdf}
\caption{(Color online) Reconstructed L(1,0) mode components proportional to $\cos (2 \pi q_x / L_x) $.
The system consists of 198 smooth hard disks in a periodic box. For details we refer to the main text.}
\label{L10}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figure_9_top.pdf}\\
\includegraphics[width=0.47\textwidth]{figure_9_bottom.pdf}
\caption{(Color online) Reconstructed P(1,0) mode components proportional to $\cos (2 \pi q_x / L_x) $
for the same LP subspace as in Fig.~\ref{L10}.}
\label{P10}
\end{figure}
For modes with a wavelength large compared to the inter-particle spacing, a continuum limit for the perturbation vectors leads to a set of partial differential equations for the perturbation fields,
whose solutions give analytical expressions for the modes in accordance with the boundary conditions applied. This procedure works for the stationary transverse modes, as well as for the time-dependent LP modes. For the latter, the partial differential equations for the continuous
perturbation fields assume the form of a wave equation with traveling wave solutions \cite{CTM_2011}.
For quasi-one-dimensional systems (with narrow boxes such that particles may not pass each other), the phase velocity is found to depend on the particle density via the mean particle separation and the mean time between collisions (of a particle). For fully two-dimensional models
of hard disks, the wave velocity becomes proportional to the collision frequency of a particle, and inversely proportional to the mean
squared distance of a particle to all its neighbors in the first coordination shell \cite{CTM_2011,MT_2013}.
This is an interesting result, since it provides a long-sought connection between a mode property -- the wave velocity -- and other density-dependent microscopic quantities of a fluid.
\section{Rototranslation}
\label{roto}
\subsection{Rough hard disks}
\label{roughd}
Up to now we have been concerned with simple fluids allowing only translational particle dynamics. As a next step
toward more realistic models, we consider fluids for which the particles may also rotate and exchange energy
between translational and rotational degrees of freedom. In three dimensions, the first model of that kind, rough hard spheres,
was already suggested by Bryan in 1894 \cite{bryan:1894}. Arguably, it constitutes the simplest model of a molecular
fluid. It was later treated by Pidduck \cite{pidduck:1922} and, in particular, by
Chapman and Cowling \cite{chapman:1953},
who derived the collision map in phase space for two colliding spheres with maximum possible roughness.
The latter requires that the relative surface velocity at the point of contact of the collision partners is reversed,
leaving their combined (translational plus rotational) energy, and their combined linear and angular momenta invariant
\cite{Allen}. The thermodynamics and
the dynamical properties of that model were extensively studied by O'Dell and Berne \cite{Berne_I}. They also
considered generalizations to partial roughness in-between smooth and maximally-rough spheres \cite{Berne_II,Berne_III,Berne_IV}.
Here we consider the simplest two-dimensional version of that model, maximally-rough hard disks in a planar
box with periodic boundaries. Explicit
expressions for the collision maps in phase space and in tangent space may be obtained from our previous work
\cite{vMP,BP_2013,Bdiss}. The coupling between translation and rotation is controlled by a single dimensionless parameter
$$\kappa = \frac{4I}{ m \sigma^2}, $$
where $I$ is the principal moment of inertia for a rotation around an axis through the center,
and $m$ and $\sigma$ denote the mass and diameter of a disk, respectively. $\kappa$ may take values between zero and one: $0$
if all the mass is concentrated in the center, $1/2$ for a uniform mass distribution, and $1$
if all the mass is located on the perimeter of the disk.
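The three limiting values of $\kappa$ follow directly from the familiar moments of inertia of a disk; a minimal check in reduced units (unit mass and diameter assumed) reads:

```python
# kappa = 4*I/(m*sigma^2) for the three limiting mass distributions
# quoted in the text (m = sigma = 1 assumed).
m, sigma = 1.0, 1.0

I_point = 0.0                          # all mass at the center
I_uniform = m * sigma**2 / 8.0         # homogeneous disk: I = m r^2 / 2, r = sigma/2
I_ring = m * (sigma / 2.0) ** 2        # all mass on the perimeter: I = m r^2

kappa = lambda I: 4.0 * I / (m * sigma**2)
assert kappa(I_point) == 0.0
assert kappa(I_uniform) == 0.5
assert kappa(I_ring) == 1.0
```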
The rotation requires adding the angular momentum $J_i$ to the other independent variables ${\bf q}_i, {\bf p}_i$ of a particle $i$.
The $J_i$ show up in
the collision map \cite{chapman:1953,vMP,BP_2013}, and their perturbations $\delta J_i$ affect the tangent-space dynamics
\cite{vMP,BP_2013}. If one is only interested in the global Lyapunov exponents, this is all that is needed, since it constitutes an autonomous system.
We refer to it as the $J$-version of the rough-disk model. Its phase-space dimension is $D = 5N$. Since $D$ may be odd, one obviously cannot ask questions about the symplectic nature of the model. For this it is necessary to include
also the disk orientations $\Theta_i$ conjugate to the angular momenta in the list of
independent variables. The angles do not show up in the collision map of the reference trajectory, but their perturbations,
$\delta \Theta_i$, affect the collision map in tangent space. Now the phase space has $6N$ dimensions. We refer to this representation as
the $J \Theta$-version.
Before entering the discussion of the Lyapunov spectra for large systems, let us first clarify the number of vanishing exponents.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{figure_10.pdf}
\caption{(Color online) Full Lyapunov spectra in $J \Theta$-representation for four rough disks $(\kappa = 0.5)$ and
smooth disks $(\kappa = 0)$ with boundary conditions as indicated. It is interesting to note that the equipartition
theorem does not hold for such small systems.
For $\kappa = 0.5$, the translational and rotational kinetic energies add according to $3.59 + 2.41 = 6.$
For the simulation with $\kappa = 0$, a total translational kinetic energy equal to two was used.
}
\label{zero_exponents}
\end{figure}
As an illuminating case, we plot in Fig.~\ref{zero_exponents} some spectra for the $J \Theta$-version of a
rough disk system with only 4 particles and 24 exponents. For positive $\kappa$, when translation and rotation
are intimately coupled, the number of vanishing exponents is determined by the intrinsic symmetries of
space and time-translation invariance and by the number of particles. For {\em periodic boundaries} in $x$ and $y$ directions
(full red dots in Fig.~\ref{zero_exponents}), space-translation invariance contributes four, time-translation
invariance another two vanishing exponents. Furthermore, the invariance of the collision map with respect to an arbitrary rotation
of a particle adds another $N$ zero exponents, four in our example. Altogether, this gives 10 vanishing
exponents, as is demonstrated in Fig.~\ref{zero_exponents} by the red points. For {\em reflecting} boundaries in $x$ and $y$
directions (small blue dots in Fig.~\ref{zero_exponents}) there is no space-translation invariance, and the total number
of vanishing exponents is limited to six. If $\kappa$ vanishes, the rotational energy also vanishes, and all variables connected with
rotation cease to be chaotic and contribute another $2N$ zeroes to the spectrum. Together with the
intrinsic-symmetry contributions, this amounts to
$2N+6 = 14$ vanishing exponents for periodic boundaries (open purple circles in Fig.~\ref{zero_exponents}),
and $2N + 2 = 10$ for reflecting boundaries (not shown). Note that we have plotted the full spectra including the
positive and negative branches in this figure. For the $J$-version and an odd number of particles, the discussion
is slightly more involved; for this we refer to Ref.~\cite{BP_2013}.
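The bookkeeping of the vanishing exponents for the $J\Theta$-version can be summarized in a small helper function, a sketch that simply encodes the counting rules stated above:

```python
# Zero-exponent counting for the J-Theta version of the rough-disk
# model, following the rules discussed in the text (periodic or
# reflecting boundaries in both directions).
def n_zero_exponents(N, periodic, kappa_positive):
    n = 2                        # time-translation invariance
    if periodic:
        n += 4                   # space-translation invariance in x and y
    if kappa_positive:
        n += N                   # rotation invariance of each disk
    else:
        n += 2 * N               # frozen rotational variables (J and Theta)
    return n

# The four cases discussed for N = 4:
assert n_zero_exponents(4, periodic=True, kappa_positive=True) == 10
assert n_zero_exponents(4, periodic=False, kappa_positive=True) == 6
assert n_zero_exponents(4, periodic=True, kappa_positive=False) == 14
assert n_zero_exponents(4, periodic=False, kappa_positive=False) == 10
```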
Next we turn to the discussion of large systems, for which deviations from equipartition are negligible. We use reduced units for which
$m$, $\sigma$, and the kinetic translational (and rotational) temperatures are unity. Lyapunov exponents are given in units of $\sqrt{K/Nm\sigma^2}$. The first system we consider consists of
$N = 88$ rough hard disks with a density $\rho = 0.7$ in a periodic box with an aspect ratio $A = 2/11$ \cite{BP_2013}.
For comparison, spectra for 400-particle systems are given in Ref.~\cite{vMP}. Since we are here primarily interested in the global exponents,
the simulations are carried out with the $J$-version of the model.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{figure_11.pdf}
\caption{(Color online) Lyapunov spectra (positive branches only) for a system of 88 rough hard disks
for various coupling parameters $\kappa$ as indicated by the labels \cite{BP_2013}.
The density $\rho = 0.7$. The periodic box has an aspect ratio of 2/11.
}
\label{spectrum_rough}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{figure_12.pdf}
\caption{(Color online) Enlargement of the regime of small Lyapunov exponents for some spectra of
Fig.~\ref{spectrum_rough}. Also negative exponents are displayed such that conjugate pairs, $\lambda_{\ell}$ and
$\lambda_{5N+1-\ell}$, share the same index $\ell$ on the abscissa.
}
\label{center_rough}
\end{figure}
In Fig.~\ref{spectrum_rough} we show the positive branches of the Lyapunov spectra for a few selected values of $\kappa$ \cite{BP_2013}. Although the exponents are only defined for integer $\ell$, smooth lines are drawn for clarity, and a reduced index
is used on the abscissa. There are $5N = 440$ exponents, of which the first half constitute the positive branch
shown in the figure. An enlargement of the small exponents near the center of the spectrum is
provided in Fig.~\ref{center_rough} for selected values of $\kappa$. There we have also included the negative branches in such a way as to emphasize conjugate pairing.
For $\kappa = 0$, which corresponds to freely rotating smooth disks without roughness, the Lyapunov spectrum is identical
to that of pure smooth disks without any rotation, if $N = 88$ vanishing exponents are added to the spectrum.
The simulation box is long enough (due to the small aspect ratio)
for Lyapunov modes to develop along the $x$ direction. This may be verified by the
step structure of the spectrum due to the small degenerate exponents. If $\kappa$ is increased, the Lyapunov modes quickly disappear,
and hardly any trace of them is left for $\kappa = 0.1$. For small positive $\kappa$ the spectrum is decomposed into a rotation-dominated
regime for $2N < \ell < 3N$ and a translation-dominated regime for $ \ell \le 2N$ and $\ell \ge 3N$.
It is perhaps surprising that, with the exception of the maximum exponent $\lambda_1$, even a small admixture
of rotational motion significantly reduces the translation-dominated exponents, whereas the rotation-dominated exponents
increase only gradually.
To study this further, we plot in the Figs.~\ref{max_kappa} and~\ref{KS_kappa} the isothermal $\kappa$-dependence of the
maximum exponent, $\lambda_1$, and of the
Kolmogorov-Sinai entropy per particle, $h_{KS}/N$, respectively, for various fluid densities as indicated by the labels.
These data are for a system consisting of 400 rough disks \cite{vMP}.
$\lambda_1$ and, hence, dynamical chaos tend to decrease with $\kappa$ for low densities, but always increase for large densities.
At the same time, the KS-entropy always decreases with $\kappa$: the more the rotational degrees of freedom interfere
with the translational dynamics, the less effective the mixing becomes and the longer the phase-space mixing time.
\begin{figure}[b]
\centering
\includegraphics[width=0.46\textwidth]{figure_13.pdf}
\caption{(Color online) Dependence of the maximum Lyapunov exponent on the coupling parameter $\kappa$
for the rough-disk fluid at various densities. From Ref.~\cite{vMP}.
}
\label{max_kappa}
\end{figure}
Before leaving the rough hard-disk case, a slightly disturbing observation for that system should be mentioned. In Fig.~\ref{symplectic}
we have demonstrated that the local GS Lyapunov exponents of the smooth hard-disk model display symplectic
symmetry. Unexpectedly, for rough hard disks this is not the case. This is shown in Fig.~\ref{rough_symplectic},
where the maximum and minimum local exponents do not show the expected symmetry, and neither do the other
conjugate pairs which are not included in the figure. If the collision map for two colliding particles in tangent space
is written in matrix form, $\delta {\bf \Gamma}' = {\cal M} \delta {\bf \Gamma},$ where $\delta {\bf \Gamma}$
and $\delta {\bf \Gamma}' $ are the perturbation vectors immediately before and after a collision, respectively,
the matrix ${\cal M}$ does not obey the symplectic condition ${\cal M}^{\dagger} {\cal S} {\cal M} = {\cal S}.$ Here,
${\cal S}$ is the antisymmetric symplectic matrix, and $^{\dagger}$ denotes transposition. For a more extensive
discussion we refer to Refs.~\cite{BP_2013,Bdiss}.
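In the standard form, a matrix ${\cal M}$ is symplectic if ${\cal M}^{\dagger}{\cal S}{\cal M} = {\cal S}$. A minimal numerical test of this condition, for a single degree of freedom and with purely illustrative matrices, reads:

```python
import numpy as np

# Sketch of a symplecticity test M^T S M = S for 2x2 matrices
# (one degree of freedom); the matrices are illustrative only.
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # antisymmetric symplectic matrix

def is_symplectic(M, tol=1e-12):
    return np.allclose(M.T @ S @ M, S, atol=tol)

theta = 0.3
rotation = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])   # harmonic evolution
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])                           # free flight
damped = 0.9 * rotation                 # contracts phase-space volume

assert is_symplectic(rotation)
assert is_symplectic(shear)
assert not is_symplectic(damped)
```

For $2\times 2$ matrices the condition reduces to $\det {\cal M} = 1$, which is why the volume-contracting matrix fails the test.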
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{figure_14.pdf}
\caption{(Color online) Dependence of the Kolmogorov-Sinai entropy per particle on the coupling parameter $\kappa$
for the rough-disk fluid at various densities. From Ref.~\cite{vMP}
}
\label{KS_kappa}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figure_15.pdf}
\caption{(Color online) Demonstration of the lack of symplectic symmetry (as formulated in Eq.~(\ref{gsfwd})) for a rough hard-disk system with $N = 4$ particles. The simulation is carried out with the $J \Theta$-representation of the model, which includes the disk orientations, and for which the phase space has 24 dimensions. The coupling parameter $\kappa = 0.5$. The local GS-exponents for
$\ell = 1$ and $\ell = 24$ are plotted as a function of time $t$ for a short stretch of trajectory.}
\label{rough_symplectic}
\end{figure}
\subsection{Hard dumbbells}
\label{dumbb}
A slightly more realistic model for linear molecules is provided by ``smooth'' hard dumbbells, whose geometry is shown in
Fig.~\ref{dumbbell_geometry}. As with hard disks, the dynamics is characterized by hard encounters with impulsive forces perpendicular to the surface. Between collisions, the particles translate and rotate freely, which
makes the simulation rather efficient \cite{AI_1987,TS_1980,Bellemans_1980}. In the following we restrict ourselves to the
planar version of this model, for which the ``molecule'' consists of two fused disks as shown in Fig.~\ref{dumbbell_geometry}.
The state space is spanned by the center-of-mass (CM) coordinates ${\bf q}_i$, the
translational momenta ${\bf p}_i$, the orientation angles $\Theta_i$, and the angular momenta $J_i$, $i = 1,2,\dots,N$,
and has $6N$ dimensions.
The Lyapunov spectra for this model were first computed by Milanovi{\'c} {\em et al.}
\cite{MPH_1998,MPH_chaos,Milano,Mdiss}. Related work for rigid diatomic molecules interacting with
soft repulsive site-site potentials was carried out by Borzs\'ak {\em et al.} \cite{Borzsak} and
Kum {\em et al.} \cite{Kum}.
\begin{figure}[b]
\centering
\includegraphics[width=0.35\textwidth]{figure_16.jpg}
\caption{ Geometry of a hard dumbbell diatomic. All quantities for this model are given in reduced
units, for which the disk diameter $\sigma$, the molecular mass $m$, and the total kinetic energy
per molecule, $K/N,$ are equal to unity. The molecular anisotropy is defined by $d/\sigma$.}
\label{dumbbell_geometry}
\end{figure}
Here we restrict the discussion to homogeneous dumbbells, for which the molecular mass $m$ is uniformly distributed
over the union of the two disks. The moment of inertia for rotation around the center of mass becomes
\begin{equation}
I(d \le \sigma) =
\frac{m \sigma^2}{4} \frac{ 3 d w + [d^2 + (\sigma^2/2)]
[\pi + 2 \arctan(d/w)]}{ 2 d w + \sigma^2
[\pi + 2 \arctan(d/w)]},
\nonumber
\end{equation}
and
\begin{equation}
I(d > \sigma) = \frac{m}{4}\left[\frac{\sigma^2}{2}+d^2\right], \nonumber
\end{equation}
where $w = \sqrt{\sigma^2 - d^2}$ is the molecular waist.
$I(d)$ increases monotonically with $d$. For $d\to 0$ it converges to that
of a single disk with mass $m$ and diameter $\sigma$, and with a homogeneous
mass density.
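The moment-of-inertia formula above can be implemented directly. The sketch below uses reduced units $m = \sigma = 1$ and `atan2` to handle the vanishing waist at $d = \sigma$; it checks the single-disk limit and the continuity of the two branches:

```python
import math

# Sketch of the dumbbell moment of inertia I(d) quoted above,
# in reduced units m = sigma = 1.
def moment_of_inertia(d, m=1.0, sigma=1.0):
    if d <= sigma:
        w = math.sqrt(sigma**2 - d**2)              # molecular waist
        ang = math.pi + 2.0 * math.atan2(d, w)      # pi + 2*arctan(d/w), safe at w = 0
        num = 3.0 * d * w + (d**2 + sigma**2 / 2.0) * ang
        den = 2.0 * d * w + sigma**2 * ang
        return m * sigma**2 / 4.0 * num / den
    return m / 4.0 * (sigma**2 / 2.0 + d**2)        # disjoint disks, d > sigma

assert abs(moment_of_inertia(0.0) - 1.0 / 8.0) < 1e-12   # single-disk limit
assert abs(moment_of_inertia(1.0) - 3.0 / 8.0) < 1e-12   # branches agree at d = sigma
# I(d) increases monotonically with the anisotropy d
ds = [i / 100.0 for i in range(200)]
Is = [moment_of_inertia(x) for x in ds]
assert all(a < b for a, b in zip(Is, Is[1:]))
```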
Results for other mass distributions may be found in Ref.~\cite{Milano}.
$d$ plays
a role similar to that of the coupling parameter $\kappa$ for the rough-disk model.
The results are summarized in Fig.~\ref{surface}, where the Lyapunov spectra, positive branches only,
for a system of 64 dumbbells in a periodic square box at an intermediate gas density of $0.5$ are shown for various
molecular anisotropies $d$. The system is too small for Lyapunov modes to be clearly visible
(although a single step due to a transverse mode may be observed for $d > 0.01$ even in this case).
Most conspicuous, however, is the widening gap in the spectra between the indices $l = 2N-3 = 125$ and $l = 126$
for $ d < d_c \approx 0.063.$ It separates the translation-dominated exponents ($1 \le l \le 125$) from the rotation-dominated exponents
($126 \le l \le 189$). The gap disappears for $d > d_c$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figure_17.jpg}
\caption{ Anisotropy dependence of the Lyapunov spectra for a system of 64 planar hard dumbbells
in a square box with periodic boundaries. $d$ is the molecular anisotropy (in reduced units). Only the positive branches of the
spectra with $3N = 192 $ exponents are shown. The number density is 0.5. The Lyapunov index is denoted by $l$.
(From Ref.~\cite{MPH_1998}).
}
\label{surface}
\end{figure}
It was observed in Ref.~\cite{MPH_1998} that for $d < d_c$ the perturbation
vectors associated with the translation- and rotation-dominated exponents predominantly point into the subspace
belonging to center-of-mass translation and molecular rotation, respectively, and very rarely rotate into a direction belonging to the other subspace. For anisotropies $d > d_c$, however, one finds
that the offset vectors for {\em all} exponents spend comparable times in both subspaces, which is taken as an indication of strong
translation-rotation coupling. $d_c$ is found to increase with the density. The anisotropy dependence
of some selected Lyapunov exponents is
shown in Fig.~\ref{aniso}. The horizontal lines for $l=1$ and $l=2N-3=125$ indicate the values
of the maximum and smallest positive exponents for a system of 64 hard disks with the same density,
to which the respective exponents of the dumbbell system converge with $d \to 0$. The smooth hard-disk data
were taken from Ref.~\cite{DPH_1996}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{figure_18.pdf}
\caption{ Anisotropy dependence of selected Lyapunov exponents, labelled by their indices $l$,
for the 64-dumbbell system summarized in Fig.~\ref{surface}. The horizontal lines indicate the values for corresponding exponents of a smooth hard-disk gas.
From Ref.~\cite{MPH_1998}.
}
\label{aniso}
\end{figure}
It was shown in Refs.~\cite{MPH_1998,Milano} that two ``phase transitions'' exist for hard dumbbells with an anisotropy of $d=0.25$:
the first at a number density $n_1 = 0.75$ from a fluid to a rotationally-disordered solid, and the second, at $n_2 \approx 0.775$, from the
orientationally-disordered solid to a crystal with long-range orientational order. Both transitions were observed by computing
orientational correlation functions. The first transition
at $n_1$ makes itself felt by large fluctuations of the Lyapunov exponents during the simulation and a slow convergence
\cite{Mdiss}, but does not show up in the density dependence of the Lyapunov exponents. The second transition at $n_2$ does give
rise to steps in the density-dependence curves of both $\lambda_1$ and $\lambda_{189}$ (the smallest positive exponent).
These steps are even more noticeable when viewed as a function of the collision frequency instead of the density,
and the step is significantly larger for $\lambda_{189}$ than for $\lambda_1$. This indicates that the collective long-wavelength perturbations are much more strongly affected by the locking of dumbbell orientations than the large exponents, which -- due to localization --
measure only localized dynamical events. Here, dynamical systems theory offers a new road to an understanding of collective phenomena
and phase transitions, which has not been fully exploited yet.
\begin{figure}[t]
\centering
{\vspace{-8mm}\includegraphics[width=0.4\textwidth]{figure_19.pdf}}
{\vspace{-15mm}
\caption{Magnification of the lower
part of the Lyapunov spectrum for a system consisting of $N=150$
homogeneous dumbbells with a molecular anisotropy
$d=0.25$. The density is $n=0.5$, and the aspect ratio of the box is $1/32$. The
Lyapunov exponents are given in units of
$\sqrt{(K/N)/(m\sigma^2)}$, where $K$ is the total (translational and rotational) kinetic
energy, $m$ is the dumbbell mass, and $\sigma$ is the diameter
of the two rigidly connected disks forming a dumbbell. The complete spectrum contains
900 exponents.}
\label{low}}
\end{figure}
Unlike the rough hard disks, the hard dumbbell systems generate well-developed Lyapunov modes for their small exponents.
Instructive examples for $N = 400$ homogeneous hard dumbbells
with an anisotropy $d = 0.25$ are given in Refs.~\cite{Milano,Mdiss}. The classification of the modes
in terms of transversal (T) and longitudinal-momentum (LP) modes is completely
analogous to that for hard disks \cite{Zabey}, although some modifications due to the presence of rotational degrees of freedom
have not yet been worked out in all detail. In Fig.~\ref{low}, we show, as an example, the smallest positive exponents of a
system of $N = 150$ dumbbells with a density $n = 0.5$ in an elongated (along the $x$ axis) periodic box with an aspect ratio $A = 1/32$
\cite{Mdiss}.
\begin{figure}[b]
\centering
\includegraphics[width=0.44\textwidth]{figure_20.pdf}
\caption{Transverse mode pattern for the perturbation vector $\delta {\bf \Gamma}_{446}$
for an instantaneous configuration of the 150-dumbbell system with the Lyapunov
spectrum shown in Fig.~\ref{low}. The circles denote the center-of-mass positions of the dumbbells, and the arrows indicate the position perturbations $(\delta x, \delta y)$ (top) and momentum perturbations $(\delta p_{x}, \delta p_y)$ (bottom) of the particles. The system is strongly contracted along the $x$ axis.}
\label{trans_446}
\end{figure}
Only modes with wave vectors parallel to $x$ develop, which facilitates the analysis. The spectrum displays the
typical step structure due to the mode degeneracies with multiplicity two, for the T modes, and four, for the LP modes.
For this system, typical instantaneous mode patterns are shown in
Figs.~\ref{trans_446} and~\ref{trans_443}, the former for a T mode with
$l = 446$, the latter for an LP mode with $l = 443$. In both cases, the wavelength is equal to the box length in $x$ direction.
Please note that the system appears strongly contracted along the $x$ axis.
\begin{figure}[t]
\centering
\includegraphics[width=0.44\textwidth]{figure_21.pdf}
\caption{As in Fig.~\ref{trans_446}, but for the longitudinal-momentum
mode belonging to the perturbation vector $\delta {\bf \Gamma}_{443}.$
}\label{trans_443}
\end{figure}
\section{Stationary particle systems out of equilibrium}
\label{nonequilibrium}
One of the most interesting -- and historically first -- studies of Lyapunov spectra for many-particle systems
is for stationary non-equilibrium states \cite{HHP_1987,PH88,PH_1989}. If a system
is driven away from equilibrium by an
external or thermal force, the irreversibly generated heat needs to be removed and transferred
to a heat bath. Otherwise the system would heat up and would never reach a steady state.
Such a scheme may assume the following equations of motion,
\begin{eqnarray}
\dot{{\bf q}}_i &=& {\bf p}_i/m_i + {\cal{Q}}(\{{\bf q}\},\{{\bf p}\}) X(t) \nonumber\\
\dot{{\bf p}}_i &=& - \partial \Phi/\partial {\bf q}_i + {\cal{P}}(\{{\bf q}\},\{{\bf p}\}) X(t) - s_i\zeta {\bf p}_i, \label{thermo}\\
\nonumber
\end{eqnarray}
where ${\cal{Q}} X$ and ${\cal{P}} X$ represent the driving, and $-\zeta {\bf p}_i$ the thermostat or heat bath
acting on a particle $i$ selected with the switch $s_i \in \{ 0,1\}$.
$\zeta$ fluctuates and may be positive or negative, depending on whether
kinetic energy has to be extracted from -- or added to -- the system. Averaged over a long trajectory,
$\left\langle \zeta \right\rangle$ needs to be positive.
$\zeta(\{{\bf q}\},\{{\bf p}\})$ may be a
Lagrange multiplier, which minimizes the thermostatting force according to Gauss' principle of least constraint
and keeps either the kinetic energy or the total internal energy a constant of the motion. Or it may be a single
independent variable in an extended phase space such as for the Nos\'e-Hoover thermostat.
There are excellent monographs describing these schemes \cite{EM_2008,HH_2012}.
They are rather flexible and allow for any number of heat baths. Although it is not possible to
construct such thermostats in the laboratory, they allow very efficient non-equilibrium molecular dynamics (NEMD)
simulations and are believed to provide an accurate description of the nonlinear transport involved \cite{Ruelle:1999}.
Simple considerations lead to the following thermodynamic relations for the stationary non-equilibrium state generated in this way after
initial transients have decayed:
\begin{eqnarray}
&&\left\langle \frac{ d \ln \delta V^{(D)} }{dt } \right\rangle = \left\langle \frac{\partial}{\partial {\bf \Gamma}} \cdot
\dot{{\bf \Gamma}} \right\rangle = \sum_{\ell =1}^{D} \lambda_{\ell}
= - d \sum_{i=1}^N s_i \left\langle \zeta \right\rangle \nonumber \\
&=& -\frac{1}{k_B T} \left\langle \frac{dQ}{dt} \right\rangle = - \frac{1}{k_B} \left\langle \frac{dS_{irr}}{dt}
\right\rangle < 0. \label{ne} \\ \nonumber
\end{eqnarray}
Here, $d$ is the dimension of the physical space, and $d \sum_i s_i $ is the number of thermostatted degrees of freedom.
$D$ is the phase space dimension, and $dQ/dt$ is the rate with which
heat is transferred to the bath and gives rise to a positive rate of entropy production, $ \left\langle dS_{irr} / dt \right\rangle$.
$T$ is the kinetic temperature enforced by the thermostat.
These relations show that an infinitesimal phase volume shrinks at an average rate that is
determined by the rate of heat transfer to the bath, which, in turn, is regulated by the thermostat variable $\zeta$.
As a consequence, there exists a multifractal attractor in phase space, to which all trajectories converge for
$t \to \infty$, and on which the natural measure resides. For Axiom A flows or even non-uniform hyperbolic systems,
these states have been identified with the celebrated Sinai-Ruelle-Bowen (SRB) states \cite{Ruelle:1999}.
This result is quite general for dynamically thermostatted systems and has been numerically confirmed for many transport processes
in many-body systems. The computation of the Lyapunov spectrum is essential for the understanding of this
mechanism \cite{HHP_1987,PH88,PH_1989} and, still, is the only practical way to compute the information dimension
of the underlying attractor.
Such a system is special in the sense that its stationary measure is
singular and resides on a fractal attractor with a vanishing phase space volume. The ensuing Gibbs entropy is diverging to minus infinity,
which explains the paradoxical situation that heat is continuously transferred to the bath giving rise to a positive rate of
irreversible entropy production $\dot{S}_{irr}$. The fact that there is a preferred direction for the heat flow is in accordance with
the Second Law of thermodynamics, which these systems clearly obey. The divergence of the Gibbs entropy also indicates that the
{\em thermodynamic temperature}, namely the derivative of the internal energy with respect to the entropy, is not defined
for such stationary non-equilibrium states.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figure_22.jpg}
\caption{(Color online) One-dimensional Frenkel-Kontorova conductivity model described in the main text.
Three Poincar\'e maps for the planes defining the first octant are shown. The system is in a stationary
non-equilibrium state. Reprinted from Ref.~\cite{Radons_Just}.}
\label{frenkel}
\end{figure}
We demonstrate such a fractal attractor for a one-dimensional Frenkel-Kontorova conductivity model of a charged
particle in a sinusoidal potential, which is subjected to a constant applied field $X$ and a Nos\'e-Hoover thermostat
\cite{HHP_1987,PHH_1990}. The equations of motion are
\begin{equation}
\dot{q} = p; \;\;\; \dot{p} = -\sin(q) + X - \zeta p; \;\;\; \dot{\zeta} = p^2 -1.
\nonumber
\end{equation}
The phase space is three-dimensional. In Fig.~\ref{frenkel} three Poincar\'e maps for mutually orthogonal planes
defining the first octant of the phase space are shown. The singularity spectrum for this model has been computed in Ref.~\cite{PHH_1990},
and the multifractal nature of the stationary phase-space measure has been established.
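These equations are straightforward to integrate numerically. The following sketch (our own illustration, not the code of Refs.~\cite{HHP_1987,PHH_1990}; the field strength, time step and initial condition are arbitrary choices) uses a fourth-order Runge-Kutta scheme and exploits the fact that the phase-space divergence of this flow is $\partial \dot{q}/\partial q + \partial \dot{p}/\partial p + \partial \dot{\zeta}/\partial \zeta = -\zeta$, so a positive time average of $\zeta$ directly demonstrates the phase-volume contraction expressed by Eq.~(\ref{ne}):

```python
import math

def deriv(state, X=0.5):
    """Right-hand side of the thermostatted Frenkel-Kontorova model:
    dq/dt = p,  dp/dt = -sin(q) + X - zeta*p,  dzeta/dt = p^2 - 1."""
    q, p, zeta = state
    return (p, -math.sin(q) + X - zeta * p, p * p - 1.0)

def rk4_step(state, dt, X=0.5):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state, X)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), X)
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), X)
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)), X)
    return tuple(s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def averaged_zeta(X=0.5, dt=0.01, n_steps=150_000, n_skip=30_000):
    """Time average of the thermostat variable zeta after a transient.
    Since div(flow) = -zeta, <zeta> > 0 signals phase-volume contraction."""
    state = (0.0, 1.0, 0.0)      # arbitrary initial condition
    acc = 0.0
    for i in range(n_steps):
        state = rk4_step(state, dt, X)
        if i >= n_skip:
            acc += state[2]
    return acc / (n_steps - n_skip)
```

For a driven run ($X>0$) the average comes out positive, consistent with the negative Lyapunov sum of Eq.~(\ref{ne}), while in equilibrium ($X=0$) $\zeta$ fluctuates around zero.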
The sum of all Lyapunov exponents becomes negative for stationary non-equilibrium states.
To demonstrate this effect, we show in Fig.~\ref{sllod}
\begin{figure}[b]
\centering
\includegraphics[width=0.47\textwidth]{figure_23.pdf}
\caption{(Color online) Lyapunov spectra of a system of 36 hard disks subjected to iso-energetic planar shear flow with shear rates
$\dot{\varepsilon}$ as indicated by the points.
For details we refer to Ref.~\cite{DPH_1996}. }
\label{sllod}
\end{figure}
the Lyapunov spectra of a system of 36 particles subjected to planar shear flow for various shear rates
$\dot{\varepsilon}$ as indicated by the labels. Homogeneous SLLOD mechanics \cite{EM_2008} in combination with
a Gaussian energy control has been used, which keeps the internal energy of the fluid a constant of the motion.
The figure emphasizes the conjugate pairing symmetry in the non-equilibrium stationary state:
Since all particles are homogeneously affected by the same thermostat, the pair sum of conjugate exponents
becomes negative and is the same for all pairs. With the Kaplan-Yorke formula, the information dimension is found to be smaller
than the dimension of the phase space \cite{DPH_1996}, a clear indication of a multi-fractal phase-space distribution.
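The Kaplan-Yorke estimate used here adds the ordered exponents while their partial sum stays non-negative and then interpolates linearly into the first exponent that turns it negative. A minimal sketch (the three-exponent spectrum in the example is purely illustrative, not data from Ref.~\cite{DPH_1996}):

```python
def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke dimension D_KY = j + (sum_{l<=j} lambda_l) / |lambda_{j+1}|,
    where j is the largest index for which the partial sum of the
    exponents is still non-negative.  `spectrum` must be sorted in
    decreasing order."""
    csum, j = 0.0, 0
    for lam in spectrum:
        if csum + lam >= 0.0:
            csum += lam
            j += 1
        else:
            return j + csum / abs(lam)
    return float(len(spectrum))   # no net contraction: D_KY = full dimension

# Illustrative spectrum: one positive, one vanishing, one strongly
# negative exponent -> dimension between 2 and 3.
d_ky = kaplan_yorke_dimension([0.9, 0.0, -14.5])
```

Here `d_ky` $\approx 2.06 < 3$, i.e. the attractor dimension falls below the phase-space dimension, which is the geometric signature of the dissipative steady state.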
\section{Final remarks}
\label{outlook}
Here we have described some of our studies of simple fluids, the results so far obtained, and the new questions
they raise. Most of our simulations are for planar systems due to the numerical effort required for the computation
of Lyapunov spectra, but the results are expected to carry over to three-dimensional systems.
What have these studies taught us about fluids? For equilibrium systems, the results about the density dependence
of the maximum Lyapunov exponents and of the mixing time in Sec.~\ref{hard_wca} clearly provide deeper and
quantitative insight into the foundation and limitations of more familiar models for dense fluids. Still, the behavior of the exponents
near phase transitions has not been satisfactorily exploited yet and should be the topic for further research.
It is interesting to note that time-dependent local Lyapunov exponents have been used to identify atypical trajectories
with very high or very low chaoticity. Such trajectories, although rare and difficult to locate in phase space, may be of
great physical significance. For example, in phase transitions, or in chemical reactions, they may lead over unstable saddle points
connecting one quasi-stable state, or chemical species, with another. Tailleur and Kurchan invented a very promising
algorithm, {\em Lyapunov weighted dynamics}, for that purpose \cite{Kurchan}, in which a swarm of phase points
evolves according to the natural dynamics. Periodically, trajectories are killed, or cloned, with a rate determined by their
local exponents $\Lambda_1$. As a consequence, trajectories are weighted according to their chaoticity.
A slightly modified version of this algorithm, {\em Lyapunov weighted path sampling}, has been suggested by
Geiger and Dellago \cite{Geiger}, and has been successfully applied to the isomerizing dynamics
of a double-well dimer in a solvent.
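To illustrate the clone-and-kill idea, the following toy sketch applies it to the chaotic logistic map rather than to a molecular system; the swarm size, bias strength and resampling period are arbitrary choices of ours. Each walker accumulates its local stretching $\log|f'(x)|$, and the swarm is periodically resampled with weights that grow with the running finite-time exponent, so atypically chaotic trajectories proliferate:

```python
import math
import random

def lyapunov_weighted_dynamics(n_walkers=200, n_steps=50, alpha=2.0, seed=1):
    """Toy Lyapunov-weighted dynamics on the logistic map f(x) = 4x(1-x).
    Walkers accumulate the local stretching log|f'(x)|; every 10 steps the
    swarm is resampled with weights exp(alpha * Lambda), where Lambda is
    the running finite-time exponent, so highly chaotic trajectories are
    cloned and regular ones are killed.  Returns the finite-time
    exponents of the surviving swarm."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_walkers)]
    s = [0.0] * n_walkers                      # accumulated log-stretching
    for step in range(1, n_steps + 1):
        for i in range(n_walkers):
            s[i] += math.log(max(abs(4.0 * (1.0 - 2.0 * x[i])), 1e-12))
            x[i] = 4.0 * x[i] * (1.0 - x[i])
        if step % 10 == 0:                     # clone/kill step
            w = [math.exp(alpha * si / step) for si in s]
            idx = rng.choices(range(n_walkers), weights=w, k=n_walkers)
            # small noise after cloning keeps clones from staying identical
            x = [min(max(x[i] + 1e-6 * rng.gauss(0.0, 1.0), 0.0), 1.0)
                 for i in idx]
            s = [s[i] for i in idx]
    return [si / n_steps for si in s]
```

With a positive bias $\alpha$ the surviving swarm concentrates on stretches that are more chaotic than a typical orbit; a negative bias instead selects the least chaotic trajectories, as envisaged in the original proposal~\cite{Kurchan}.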
Since the turn of the century, Lyapunov modes have attracted a lot of interest. But it has been surprisingly difficult
to connect them to other hydrodynamic properties \cite{Zabey,CTM_2011,MT_2013}. Most recently, however, some progress
has been made. But this problem will still occupy researchers for some time.
Another interesting and promising field is the study of systems with qualitatively different degrees of freedom, such as
translation and rotation. Our studies of the Lyapunov instability of rough disks and of linear dumbbell molecules have opened another
road to investigate translation-rotation coupling. Extensions of these studies to vibrating molecules and
to more complex systems with internal rotation, such as rotational isomer dynamics in butane \cite{PT_unpublished},
are promising topics for future research.
Arguably, the most important impact Lyapunov spectra have achieved is in non-equilibrium statistical mechanics. In combination with
dynamical thermostats, it is possible to generate stationary states far from equilibrium and to link the rate of entropy production with the
time-averaged friction coefficients and the logarithmic rate of phase-volume contraction; see Eq.~(\ref{ne}). The existence of a multi-fractal
phase-space distribution with an information dimension smaller than the phase space dimension provides a geometrical
interpretation of the Second Law of thermodynamics. The importance of this result is reflected in the voluminous literature on this
subject.
There are many other applications of local Lyapunov exponents and tangent vectors in meteorology, in
the geological sciences, and in other fields.
For more details we refer to Ref.~\cite{Cencini}, which is a recent collection of
review articles and applications.
\section{Acknowledgments}
We thank Prof. Christoph Dellago and Prof. William G. Hoover for many illuminating discussions.
We also thank Drs. Ljubo Milanovi\'c, Jacobus van Meel, Robin Hirschl and
Christina Forster, whose enthusiasm and dedication provided the basis for the present discussion.
One of us (HAP) is also grateful to the organizers of a workshop at KITPC in July 2013, whose invitation
provided the possibility to discuss various aspects of this work with other participants of that event.
\section{Selection of key technologies for aerial recharging}
Aerial recharging of \gls{iot} nodes raises significant challenges in terms of both the charging itself, which needs to be in a short time, and the localization of the node by the \gls{uav}.
In the following, we elaborate on the potential technological options for both tasks, and motivate our selection.
\subsection{Task~I: Charging the \gls{iot} node}
\label{charging}
The \gls{iot} node needs a continuous source of energy.
We propose to satisfy the energy need by frequently provisioning energy through \glspl{uav}. As the latter remain in the air during the process, the charging speed is of paramount importance and poses a challenge for the \gls{uav}. The speed of charging is predominantly determined by the energy storage and the charging method.
\subsubsection{Energy Storage}
Non-rechargeable batteries, commonly used in place-and-forget \gls{iot} nodes \cite{sather2017battery}, have high volumetric and gravimetric energy densities, making them attractive options for energy-constrained \gls{iot} nodes. Moreover, their self-discharge is relatively low compared to rechargeable batteries. Clearly, in our proposed system, a technological shift towards rechargeable-only solutions is needed. The selected battery should have a high recharge cycle count and a long shelf life, to ensure a long lifetime, and a low internal resistance to enable fast charging. A \gls{lto} battery, for instance, satisfies these requirements~\cite{han2014cycle}.
\subsubsection{Charging}
To charge the \gls{iot} node, the \gls{uav} should transfer energy to the node. Energy can be transferred either through physical contacts or wireless transmission.
Physical contacts are exposed to the elements, causing \eg corrosion. Furthermore, physical contacts require a more precise alignment of the \gls{uav} than \acrlong{wpt}. Hence, wireless transmission is preferred, as it does not require exposing circuitry outside the node's enclosure and allows for a less accurate alignment of the \gls{uav}.
Several candidates for \gls{wpt} are considered in this comparative study:
\begin{enumerate*}[label=(\arabic*)]
\item \acrlong{ipt},
\item \acrlong{cpt},
\item \acrlong{rfeh}.
\end{enumerate*}
In what follows, we provide a brief discussion on the appropriateness of these options for \gls{uav}-based charging.
\textbf{\Acrfull{ipt}.}
In \gls{ipt}, an inductive link is created by means of magnetic coupling between two coils, one at the transmitting and one at the receiving side of the energy transfer set-up, as illustrated in Fig.~\ref{fig:wpt}. The operation is similar to that of a transformer, whose primary and secondary coils are usually wound around a ferrite or laminated steel core.
In the aerial charging case, there is air between the two coils. Note that due to the low permeability of air, the self-inductance and the coupling factor $k$ between the two coils are low in comparison to set-ups that can rely on magnetically conductive materials.
The efficiency can be improved by using series or parallel connected capacitors on both \gls{wpt} coils to create resonance circuits. This reduces the leakage inductance created by the lower coupling factor over the air.
More than \SI{1}{\kilo\watt} is achievable through \gls{ipt}.
Nevertheless, the efficiency strongly depends on the coil alignment, the transfer distance, the size of the coils, and the magnetic losses~\cite{van2009inductive}.
For the aerial charging approach, provided the aforementioned challenges in efficiency are carefully designed for, \gls{ipt} is an interesting candidate technology.
\textbf{\Acrfull{cpt}.}
Instead of using a magnetic field, a \gls{cpt} system delivers energy wirelessly through an electric field between capacitor plates. When transmitter and receiver are brought close to each other, two coupling capacitors are effectively created. Both transmitter and receiver consist of two metal plates, which form a capacitively coupled system during the energy transfer. Different structures, such as the two-plate, four-plate parallel, and four-plate stacked structures, exist in combination with several circuit topologies~\cite{lu2017review}. Typical systems consist of an inverter, two compensation networks, a capacitive coupler and a rectifier. Due to the low capacitance between the plates, resonance is used to produce high voltages, resulting in sufficient electric fields for the energy transfer.
The switching frequency and plate voltage can be increased to exchange energy over larger distances. Similar to \gls{ipt}, literature shows that power delivery of more than \SI{1}{kW} is achievable with \gls{cpt}~\cite{lu2017review}.
\textbf{\Acrfull{rfeh}.}
Energy transmission by means of \gls{em} waves, called \gls{rfeh}, is based on a \gls{pa} with a transmit antenna on one side, and a receiving dipole or directive antenna with a matching circuit to charge the device on the other.
The available energy at the receiver depends on the distance, the frequency, the transmitted power and the gain of the antennas.
An easily accessible solution is to use the \gls{ism} band, for which the transmitted power and duty cycle are regulated by the corresponding \gls{ism} band regulation~\cite{etsi2012electromagnetic}.
\Gls{em}-waves based transfer is appealing as it does not rely on nearby coupling. However, it is important to note that the waves are affected by the path loss, which in free space increases with at least the square of the distance. The latter is not the case for \gls{ipt}/\gls{cpt}.
We provide a basic case study to illustrate what can (and cannot) be expected from RF-based energy harvesting.
Consider two antennas at a distance of \SI{30}{cm} and a transmit power of \SI{27}{dBm}. Using the near field transmission formula from~\cite{schantz2013simple}, the received power would be approximately \SI{10}{dBm}.
Hence, it would take around 100~seconds to transfer 1 joule of energy.
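The arithmetic behind this estimate is easily made explicit. A minimal sketch (assuming ideal, lossless rectification of the received power; the near-field link budget itself is taken from the figure quoted above, not recomputed here):

```python
def dbm_to_watt(p_dbm):
    """Convert a power level from dBm to watts: P[W] = 1 mW * 10^(P[dBm]/10)."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

def seconds_per_joule(received_dbm):
    """Charging time for one joule at a given received power level,
    assuming lossless rectification and storage."""
    return 1.0 / dbm_to_watt(received_dbm)

# 10 dBm received -> 10 mW -> about 100 s to transfer 1 J
t = seconds_per_joule(10.0)
```

At \SI{10}{dBm}, i.e. \SI{10}{\milli\watt}, one joule indeed takes about 100 seconds, which makes the technology unsuitable for the short charging windows targeted here.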
\textbf{Evaluation of \gls{ipt}, \gls{cpt} and \gls{rfeh}.} In view of technology selection, we compare the required power, size, proximity and efficiency of the above energy transfer solutions.
We consider a \textit{power} requirement derived from \SI{1}{\kilo\joule} to \SI{10}{\kilo\joule} transferred in \SI{5}{\minute}, yielding a \gls{wpt} link of \SI{3}{\watt} to \SI{33}{\watt}. \Gls{ipt} and \gls{cpt} are both suitable technologies to deliver this amount of power. An \gls{rf} link is not able to transfer sufficient energy (in accordance with the regulations of the \gls{ism} band) within the time constraint. Moreover, the \textit{efficiency} of an \gls{rf} system is low in comparison to a \gls{cpt} or \gls{ipt} system, the latter two achieving efficiencies up to \SI{90}{\percent}~\cite{lu2017review}. The \textit{transceiver size}
is relatively large for \gls{ipt} or \gls{cpt} systems compared to \gls{rf} systems (\eg a dipole antenna), due to the need for coils and capacitor plates. \textit{Alignment} requirements also vary widely among \gls{wpt} technologies. \Gls{rf} features non-critical alignment, whereas aligning \gls{cpt} systems is more critical. In particular, when using a four-plate parallel structure, all four plates must be aligned. A typical \gls{ipt} system requires two coils to be aligned within certain margins. For instance, when using the P9221 IC~\cite{P9221R3}, the power transfer efficiency is lowered from \SI{85}{\percent} to \SI{70}{\percent} by a misalignment of \SI{12}{\milli\meter} for a 32 by \SI{42}{\milli\meter} coil.
In conclusion, we select \gls{ipt} as the best-fit technology for aerial charging, considering the above discussed options, the current state of technology, and the typical constraints and energy needs of \gls{iot} nodes.
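The link budget quoted above follows directly from the energy requirement and the five-minute charging window; a minimal sketch:

```python
def required_link_power(energy_j, charge_time_s):
    """Average WPT link power needed to deliver `energy_j` joules
    within `charge_time_s` seconds."""
    return energy_j / charge_time_s

# 1 kJ ... 10 kJ delivered in 5 minutes -> roughly 3 W ... 33 W at the receiver
p_low = required_link_power(1e3, 300.0)
p_high = required_link_power(10e3, 300.0)
```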
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/inductive-capacitive.pdf}
\caption{}
\label{fig:wpt-inductive}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/rf.pdf}
\caption{}
\label{fig:wpt-rf}
\end{subfigure}
\caption{Illustration of \gls{uav}-based energy provisioning through \gls{ipt} or \gls{cpt}~(a) or RF~(b)}
\label{fig:wpt}
\end{figure}
\subsection{Task~II: Finding the \gls{iot} node}
Prior to charging the nodes, a \gls{uav} needs to find the location of the node and align itself in order to provide contactless energy transfer. We here discuss the technological options to achieve this, and motivate the benefits and shortcomings.
We assume that the \gls{uav} can locate itself with an on-board \gls{gnss} system and can fly automatically to get near the depleted node. This is a fair assumption, given that professional \glspl{uav} are by default equipped with \gls{gnss}.
This technology provides a first coarse positioning, which needs to be followed by a precise positioning and alignment step to enable efficient coupled energy transfer. We briefly discuss the technological options and selection for both steps here below.
\subsubsection{Coarse positioning}
During installation, the operator can enter the geographic coordinates of the node. Alternatively, the location is determined via geolocalization or an on-board \gls{gnss} receiver.
\textbf{Geolocalization.} An \gls{iot} node can position itself using the built-in radio: by using the localization option of many wireless \gls{iot} technologies (\eg Wi-Fi, Bluetooth, \gls{lpwan}), the position of the \gls{iot} node can be determined. The location is calculated in post-processing, based on the regular data communication packets (by applying techniques such as \gls{rssi}-based localization and trilateration). Geolocalization proves to be infeasible for \gls{uav} aerial recharging, as the currently obtainable accuracy is insufficient. For \gls{lpwan} technologies, for example, the best obtained accuracy is \SI{100}{\meter}~\cite{janssen2018outdoor, fargas2017gps}.
\textbf{\acrlong{gnss}.}
By adding the required hardware for \gls{gnss} reception, more accurate positioning can be achieved.
\gls{gps} receivers, for example, can achieve a localization accuracy of \SI{2}{\meter} in open areas and \SI{5}{\meter} in forested landscapes~\cite{ucar2014dynamic}. To achieve an even higher accuracy, the \gls{gps} \gls{rtk} technique can be used, improving the accuracy to up to \SI{1}{\centi\meter}.
With an extra communication link, the location error from a fixed predetermined base station is sent to the mobile \gls{gps} \gls{rtk} receiver~\cite{cordesses2000combine}. Because of the high accuracy that can be obtained, we consider \gls{gps} \gls{rtk} technology the prime candidate for localization of the \gls{uav}~\cite{stempfhuber2011precise}.
\subsubsection{Precise positioning and alignment}
As an accurate location of the \gls{iot} node cannot be guaranteed,
a fine-grained localization system should be foreseen to bridge the last meter down to centimeter-level alignment. Especially under bridges, in forests, etc., the \gls{uav} cannot rely on the \gls{gnss} system. Two types of implementations for the alignment can be considered, categorized as either passive or active systems. They are distinguished from each other by the need for extra components on the \gls{iot} node.
Existing deployments could benefit from a \textbf{passive} system, as no additional hardware is required at the node side. Moreover, there are no extra components that could drain the energy-constrained node. Consequently, finding the node is handled entirely by the \gls{uav}, without the need for feedback from the node.
An exemplary passive system works on the basis of a camera placed on the \gls{uav}, which is able to detect objects. Object recognition with computer vision requires compute resources and algorithms~\cite{druzhkov2011new} to process the incoming images of the camera.
Additional sensors can keep the drone at a safe distance from obstacles and assist the alignment. A disadvantage of the camera-based system is its limited operating range when the distance between \gls{uav} and node is large, in combination with the required line-of-sight.
In contrast, an \textbf{active} solution requires a sensor, actuator or transceiver on both the \gls{uav} and the node, i.e., supplementary hardware that must be powered by the node. Besides the extra cost, the node can become unreachable if the battery is completely depleted.
Several technologies have been proposed for precise nearby positioning, based on RF signals or sound sources, and using for example \gls{tdoa} or \gls{aoa} techniques~\cite{8692423}.
The additional energy consumed by an active solution can also be compensated by energy harvesting. \gls{rfeh} on the node side can power a small circuit with a sensor to locate the node. Hence, the node remains detectable, even when the battery is empty.
For example, through a hybrid RF-acoustic system~\cite{cox2020energy} having multiple speakers at the \gls{uav} side and a microphone on the node side, the node's position can be determined. The microphone is extended with an \gls{rf} energy harvester, able to receive enough power to send back the collected acoustic data via backscattering. Further alignment improvements, for example in \gls{ipt}, could rely on monitoring the link efficiency or on introducing multiple alignment coils~\cite{P9221R3}.
\section{Concept and Operating Principle}\label{concept}
Common battery-powered \gls{iot} nodes are typically restricted to an autonomy of a few years at best~\cite{thoen2019deployable, gawali2019energy, ashraf2015introducing}.
We propose to extend the battery lifetime of \gls{iot} nodes to a virtually limitless autonomy using a \gls{uav} aerial recharging concept (Fig.~\ref{fig:intro}). \gls{iot} use cases often deploy a swarm of \gls{iot} nodes for monitoring purposes (\eg environmental monitoring)~\cite{dafflon2021distributed}. Each \gls{iot} node is connected and can send measurements to a central server using a wireless connection (\eg \gls{lpwan} connectivity). To monitor the charge level of the node's battery, the \gls{soc} is a key metric. The \gls{soc} can be reported by sending \gls{soc} messages when the \gls{soc} falls below a certain threshold, or by regularly including the \gls{soc} parameter in the node's communication. The central server keeps an overview and predicts the \gls{soc} of each \gls{iot} node. Acting upon this information, it can send a \gls{uav}, equipped with the appropriate charging circuitry, to nodes that are (almost) running out of battery. By being able to predict \gls{soc} information, a more efficient route can be mapped out~\cite{liu2020wireless, su2020uav}.
The \gls{uav} responsible for charging a certain \gls{iot} node faces two main tasks: first, navigating to the node and aligning itself with respect to it; second, establishing the energy transfer link and charging the node's battery. These two distinct stages are studied in more depth in the remainder of this work.
By adopting this \gls{uav} recharging concept, each node will vastly prolong its autonomy. This results in several benefits. Firstly, as the lifespan of an \gls{iot} node is prolonged, fewer manual interventions will be required and fewer devices will need to be replaced. Secondly, \gls{iot} nodes require lower-capacity batteries, reducing the cost and the environmental footprint. Finally, regular \gls{uav} visits can enable proactive maintenance, both by enabling larger software updates and hardware swaps.
\section{Conclusion}
\label{conclusion}
We presented a \gls{uav}-based system to increase the autonomy of remote and difficult-to-access \gls{iot} nodes. Multiple advantages are associated with this concept, such as the eco-friendly benefits and proactive maintenance. Although this concept comes with challenges related to both the localization and the charging of the devices, the approach can outperform alternative solutions such as manual replacement/recharging or long-distance RF-based charging by orders of magnitude in terms of energy efficiency. Adequate technologies need to be selected for the energy storage and the energy transfer, respectively.
We proposed a low-complexity model to estimate the required battery capacity based on a number of predefined parameters.
We showed that the capacity can be reduced by increasing the charging speed; hence, energy storage technologies with a high C-rate or a low internal resistance should be considered. In our future work, we are optimizing an actual design, in which the \gls{uav} will recharge itself from a sustainable source.
\section{Gains: Lowering capacity while maintaining autonomy}
\label{gains}
\section{Modelling the Energy Storage and Provisioning Profile}
The number of \gls{uav} interventions or the overall autonomy depends on (i)~the node's energy consumption per day, (ii)~the energy storage technology and (iii)~the selected voltage conversion~\cite{callebaut2021art}. We model the energy consumption of a node equipped with a sensor, an \gls{lpwan} technology and a battery. Based on that model, the energy provisioning requirement is determined.
Eq.~\ref{eq:chgperinv} defines the maximum charged energy per intervention. This depends on the charge rate (CR) [1/h], the total battery capacity ($E_{\text{Battery}}$) [J] and the charge time ($T_{\text{Charged}}$) [s/intervention]. We assume a linear relationship between $T_{\text{Charged}}$ and $E_{\text{Charged}}$. In practice, the energy intake depends on the \acrlong{soc}: the \gls{soc} determines whether the battery is charged in \gls{cv} or \gls{cc} mode, resulting in nonlinear behavior.
We assume that the battery is almost fully depleted before recharging, and hence the majority of the charge time follows the linear behavior.
\begin{equation}
E_{\text{Charged}} = E_{\text{Battery}} \cdot \frac{\text{CR}}{3600} \cdot T_{\text{Charged}}
\label{eq:chgperinv}
\end{equation}
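As a minimal numerical sketch of Eq.~\ref{eq:chgperinv} (the figures below are illustrative assumptions, not design values from this work):

```python
# Sketch of Eq. (eq:chgperinv); all input values are illustrative assumptions.
def charged_energy(e_battery_j, charge_rate_per_h, t_charged_s):
    """Energy [J] stored during one intervention, assuming the linear CC regime."""
    return e_battery_j * (charge_rate_per_h / 3600.0) * t_charged_s

# Hypothetical example: a 2.88 Wh cell (10368 J), 1C charging, 5-minute visit.
print(charged_energy(10368.0, 1.0, 300.0))  # 864.0 J per intervention
```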
The required energy transfer per intervention is given in Eq.~\ref{eq:ereq} and takes into account the energy efficiency of the \gls{wpt}. Multiple components contribute to the overall energy losses per day ($E_{\text{Losses}}$) [J/day], which is the sum of self-discharge losses ($E_{\text{SD}}$), conversion losses ($E_{\text{Conv}}$) and leakage losses ($E_{\text{Leak}}$).
The minimum required energy, to bridge the days between two interventions, becomes
\begin{equation}
E_{\text{Req,min}} = (E_{\text{Consumed}} + E_{\text{Losses}}) \cdot \frac{365}{n}\ ,
\label{eq:ereq}
\end{equation}
with $n$ the number of interventions per year. Hence, in order to ensure continuous operation, $E_{\text{Battery}}$ should be higher than $E_{\text{Req,min}}$. If $E_{\text{Charged}} > E_{\text{Req,min}}$, a theoretically infinite autonomy is ensured. In this specific case, the charge time could be shortened or the number of interventions per year diminished to save energy on the \gls{uav} side. However, if $E_{\text{Charged}} < E_{\text{Req,min}}$, the autonomy $T_{\text{Aut}}$ [days] is limited and can be calculated with Eq.~\ref{eq:autonomy}.
\begin{equation}
T_{\text{Aut}} = \frac{365 \cdot E_{\text{Battery}}}{365 \cdot (E_{\text{Consumed}} + E_{\text{Losses}}) - E_{\text{Charged}} \cdot n}
\label{eq:autonomy}
\end{equation}
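Eq.~\ref{eq:autonomy} can be sketched numerically as follows (illustrative values, loosely inspired by the Tree node use case; not design figures):

```python
# Sketch of Eq. (eq:autonomy); a non-positive denominator means the battery is
# refilled faster than it is drained, i.e. a theoretically infinite autonomy.
def autonomy_days(e_battery, e_consumed, e_losses, e_charged, n):
    """Autonomy [days]; e_* in J, e_consumed/e_losses per day, n visits/year."""
    denom = 365.0 * (e_consumed + e_losses) - e_charged * n
    return float("inf") if denom <= 0 else 365.0 * e_battery / denom

# Hypothetical numbers: 1.80 Wh (6480 J) battery, 22.7 J/day consumption,
# no losses, 540 J stored per visit, 12 visits per year.
print(round(autonomy_days(6480.0, 22.7, 0.0, 540.0, 12)))  # 1310 days
```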
The condition presented in Eq.~\ref{eq:cond} must be met to achieve a virtually unlimited autonomy. It takes into account the minimal energy required to keep the node running between two interventions.
In addition, the battery capacity is limited. As a consequence, the energy stored per intervention $E_{\text{Charged}}$, constrained by $E_{\text{Battery}}$, needs to be higher than the energy required to guarantee the operation of the node, $E_{\text{Req,min}}$.
\begin{equation}\label{eq:cond}
E_{\text{Battery}} > \max\left( \frac{E_{\text{Req,min}} \cdot 3600}{\text{CR} \cdot T_{\text{Charged}}},E_{\text{Req,min}}\right)
\end{equation}
Eq.~\ref{eq:cond} allows the designer to select the battery capacity as low as possible, resulting in a more environmentally friendly device. To do so, one can increase $n$, CR or $T_{\text{Charged}}$, while maintaining a certain autonomy.
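This sizing rule can be sketched as follows (illustrative assumptions: a consumption of \SI{22.7}{\joule} per day, 12 interventions per year, 1C charging and a 5-minute charge time):

```python
# Sketch of Eq. (eq:cond): smallest battery energy [J] that both bridges the
# gap between two visits and can absorb enough energy during one visit.
def min_battery_capacity(e_req_min, charge_rate_per_h, t_charged_s):
    return max(e_req_min * 3600.0 / (charge_rate_per_h * t_charged_s), e_req_min)

e_req_min = 22.7 * 365.0 / 12.0            # ~690 J needed between two visits
print(min_battery_capacity(e_req_min, 1.0, 300.0))  # ~8285.5 J (~2.3 Wh)
```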
\section{Use case-based assessment}
We evaluate both a high-energy and a low-energy \gls{iot} use case in view of designing a solution for contactless energy provisioning by \glspl{uav}.
One low-power application measures the temperature difference at trees to determine their health~\cite{thoen2019deployable}. Another application uses a volatile gas sensor (Bosch BME680) to monitor the air quality. This gas sensor needs to be heated to a target temperature of \SI{320}{\celsius}, taking around \SI{92}{s}, yielding a high energy penalty per measurement. The low-power use case will further be denoted as the \textit{Tree node} application, while the more energy-demanding use case will be called the \textit{Gas node} application.
For a fair comparison, the same wireless transceiver, \gls{mcu} and batteries are used in the evaluation, as presented in Table~\ref{tab:usecasesenergycons}.
The \gls{mcu} sleep power is \SI{0.025}{mW}. During operation, the node wakes up, reads out the sensors and transmits the data via LoRaWAN to the cloud. We assume for this transmission a spreading factor~of~12 with a bandwidth of \SI{125}{kHz} and a payload size of \SI{20}{bytes}~\cite{thoen2019deployable}. This yields a power consumption of \SI{111.15}{mW} over a duration of \SI{1810}{ms}. To consider a realistic example, we assume a data transmission every 15~minutes. The energy consumption per sensor reading amounts to \SI{0.012}{J}~\cite{thoen2019deployable} for the low-power sensor and \SI{2.153}{J} for the high-power sensor. The consumed energy and duration for both nodes are summarized in Table~\ref{tab:usecasesenergycons}. In total, the \textit{Tree node} and \textit{Gas node} consume, respectively, \SI{22.7}{\joule} and \SI{227.9}{\joule} per day.
\begin{table}[btp]
\vspace{4pt}
\centering
\caption{Energy consumption for the two use cases}
\label{tab:usecasesenergycons}
\begin{tabular}{lrrrr}
\toprule
Energy Consumption & \multicolumn{2}{c}{Tree node} & \multicolumn{2}{c}{Gas node}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& mW & s & mW & s\\
\midrule
MCU sleep & \num{0.025} & - & \num{0.025} & - \\
LoRaWAN (SF12) & \num{111.1} & 1.81 & \num{111.1} & 1.81 \\
Sensor & \num{65.7} & 0.19 & \num{23.4} & 92.00 \\
\bottomrule
\end{tabular}
\vspace{10pt}
\end{table}
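The daily budgets can be reconstructed from the table values with a short sketch (illustrative; small residual differences with the quoted totals come from rounding):

```python
# Approximate reconstruction of the daily energy budgets from the table values.
SLEEP_W = 0.025e-3               # MCU sleep power [W], always on
LORA_W, LORA_S = 111.1e-3, 1.81  # LoRaWAN SF12 transmission power [W], time [s]
TX_PER_DAY = 24 * 4              # one measurement every 15 minutes

def daily_energy(sensor_w, sensor_s):
    per_measurement = LORA_W * LORA_S + sensor_w * sensor_s
    return SLEEP_W * 86400 + TX_PER_DAY * per_measurement

tree = daily_energy(65.7e-3, 0.19)  # ~22.7 J/day for the Tree node
gas = daily_energy(23.4e-3, 92.0)   # ~228 J/day for the Gas node
```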
In the following, both non-rechargeable and rechargeable batteries are evaluated as an energy source for these use cases.
\subsubsection{Non-rechargeable solution}
The power source of the \textit{Tree node} system consists of three series-connected alkaline batteries, with a usable energy of \SI{21}{\kilo\joule}. A theoretical autonomy of \SI{2.5}{year} can be achieved~\cite{thoen2019deployable}. The \textit{Gas node}, for a similar autonomy, needs a battery pack of \SI{208}{\kilo\joule} or, equivalently, \num{30}~alkaline cells on the node side. It is already clear that providing this amount of stored energy is infeasible and, not least, not environmentally friendly.
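As a rough cross-check (illustrative arithmetic only), the quoted pack sizes are consistent with the daily budgets:

```python
# Back-of-envelope check of the non-rechargeable figures.
tree_daily_j, gas_daily_j = 22.7, 227.9   # daily consumption [J/day]
tree_pack_j = 21e3                        # usable alkaline pack energy [J]

autonomy_days_tree = tree_pack_j / tree_daily_j   # ~925 days, i.e. ~2.5 years
gas_pack_j = gas_daily_j * autonomy_days_tree     # ~2.1e5 J, order of 208 kJ
```

The small gap with the quoted \SI{208}{\kilo\joule} stems from rounding the autonomy to 2.5 years.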
\subsubsection{Rechargeable solution}
To extend the autonomy, rechargeable energy storage is required. Furthermore, the battery capacity can then be drastically lowered, requiring less material for the same or even a longer autonomy.
In the following analysis, we assume a \textit{Tree node} consuming a fixed amount of energy every day, and demonstrate the importance of choosing the best-fitting storage technology. As a starting point, we take an \gls{lco} rechargeable battery with a nominal cell voltage of \SI{3.6}{\volt} and a variable capacity. The \gls{soc} degradation of the battery over time is shown in Fig.~\ref{fig:soc_lifetime}.
Assuming 12~interventions per year, a charge rate restriction of \SI{1}{C} and a charge time of \SI{5}{\minute}, a theoretically infinite autonomy is obtained.
Choosing the battery capacity too small quickly leads to a depleted node.
Continuing this example, the optimal battery capacity for powering the \textit{Tree node} lies between \SI{1.80}{Wh} and \SI{2.88}{Wh}.
\begin{figure}[!hbt]
\centering
\input{Images/soc_autonomy_type_battery}
\caption{The \acrlong{soc} degradation depends on the selected battery. A higher capacity will automatically give a higher autonomy.}
\label{fig:soc_lifetime}
\end{figure}
Secondly, the autonomy is related to the charge time. Increasing $T_{\text{Charged}}$ results in more energy stored during each intervention, as depicted in Fig.~\ref{fig:aut_chg_time}. Two battery capacities (\SI{0.36}{Wh} and \SI{1.80}{Wh}) are considered for different battery technologies with corresponding charge rates (\SI{1}{C}, \SI{2}{C}, \SI{5}{C} and \SI{10}{C}). The required charge time decreases when a battery technology with a higher charge rate is selected.
\begin{figure}[!hbt]
\centering
\input{Images/autonomychargetime}
\caption{Autonomy as a function of charge time, battery capacity and battery C-rate. The number of interventions is fixed at 12.}
\label{fig:aut_chg_time}
\end{figure}
Finally, the required battery capacity with respect to the number of interventions is assessed for the two use cases and is depicted in Fig.~\ref{fig:E_batt_vs_interventions}. We distinguish two lithium-based batteries: an \gls{lco} and an \gls{lto} battery with a charge rate of, respectively, \SI{1}{C} and \SI{10}{C}, giving four curves in total. Here, the charge time during an intervention is assumed to be restricted to \SI{5}{\minute}. Assuming that the interventions per year are predefined, selecting a \SI{10}{C}-capable battery allows the designer to reduce the battery capacity significantly compared with a \SI{1}{C} variant. Conversely, for the same battery capacity, more interventions are required when choosing a \SI{1}{C} battery over a \SI{10}{C} variant.
\begin{figure}[!hbt]
\centering
\input{Images/E_batt_vs_interventions}
\caption{The minimal required energy for the Tree node and Gas node in combination with two types of batteries.}
\label{fig:E_batt_vs_interventions}
\end{figure}
To reduce the battery capacity and electronic waste, \gls{lto} technology can be considered as energy storage for \gls{uav}-rechargeable nodes, as it provides a higher C-rate than alternatives. Furthermore, the higher number of charge cycles, wider operating-temperature range and higher chemical stability make it attractive in the considered scenarios. A disadvantage of this technology is the lower volumetric energy density and lower nominal voltage compared to an \gls{lco} battery~\cite{han2014cycle}. However, with recent developments, extremely efficient \glspl{smps} with low quiescent currents can convert the low nominal cell voltage to a fixed output voltage.
\section{Introduction}
\gls{iot} applications often require vast \glspl{wsn} to be deployed in remote and inaccessible locations. Sensor nodes can be deployed on agricultural fields, managing and optimizing the irrigation of the crops~\cite{khoa2019smart}. To accurately monitor the health and growth of trees, recent efforts include the continuous monitoring of stem sap flow \cite{thoen2019deployable,valentini2019new}. Most devices in a \gls{wsn} are typically battery powered. To be economically viable, manual maintenance possibilities (\eg replacing batteries) are limited. As such, energy efficiency and longevity of the \gls{iot} node are of paramount importance.
To improve the autonomy of an \gls{iot} node, several methods have been thoroughly discussed in literature. Reducing the battery drainage of a node is often the first approach~\cite{callebaut2021art}. The overall energy consumption can be reduced by means of energy optimization techniques such as choosing energy-efficient communication and introducing stand-by and sleep phases~\cite{callebaut2021art}. However, some accurate sensors and actuators (\eg motorized valves in irrigation systems or sensors requiring heaters) inherently use larger amounts of energy. In such devices, the energy-reducing possibilities are limited.
Another straightforward approach is using a battery with a larger capacity. This improves the autonomy, at the expense of node size, cost, and ecological footprint.
When external ambient energy sources are available in the use case at hand, the battery can be recharged using energy harvesting techniques, the most common being solar energy harvesting.
However, these energy sources tend to have a lower energy density \cite{harb2011energy} and fall short for \gls{iot} use cases with higher-power sensors or actuators. In other use cases, no or little energy harvesting possibilities exist, or the extra cost and size of the energy harvesting solution for each individual node is not viable. To combat these battery autonomy issues, we propose a \gls{wsn} charging method using \glspl{uav} and short-range wireless power: \glspl{wrsn}~\cite{yang2015wireless}. In this concept, each \gls{iot} node is equipped with a wireless charging mechanism, allowing the \gls{uav} to fly in close proximity and charge the \gls{iot} node. This concept is illustrated in Fig.~\ref{fig:intro}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Images/concept.pdf}
\caption{\Gls{uav} navigating to the \gls{iot} nodes. When the \gls{uav} is near the \gls{iot} device, it initiates the \acrlong{wpt}, charging the node's battery.}
\label{fig:intro}
\end{figure}
The concept of charging energy-constrained \gls{iot} devices has been proposed in literature~\cite{yao2019energy}. Recent studies explore this concept from the \gls{uav} point of view, comparing various scheduling algorithms to optimize the routing path of the \gls{uav}~\cite{wei2019energy, su2020uav}. Yet, in-depth research on how an \gls{iot} charging system using \glspl{uav} can be established from a technical node point of view is lacking.
\textbf{\textit{Contributions}} -- The main contribution of this work is the development of the technological concepts to achieve a theoretically infinite lifespan of an \gls{iot} node using a \gls{uav}-enabled aerial charging method.
We clarify and quantify the specific requirements imposed by the \gls{uav}-based charging approach, in particular concerning charging time. We further elaborate on the theoretical limitations and practical restrictions and solutions to achieve sufficiently efficient energy transfer, localization and alignment. Candidate technologies for the actual power transfer are assessed and compared. We present models quantifying the energy transfer, and based on those validate that the selected technologies allow upgrading existing \gls{lpwan} nodes with limited impact on design and form factor.
This paper is organized as follows. First, we elaborate on the \gls{uav} aerial charging concept and explore operating principles and restrictions. Secondly, we discuss technological options and selection for
\begin{enumerate*}[label=(\arabic*)]
\item charging the \gls{iot} node by \gls{uav}, and
\item localizing the \gls{iot} node for accurate alignment to start energy transfer.
\end{enumerate*}
Thirdly, we model energy storage and charging systems, resulting in a \gls{uav}-based provisioning profile. Fourthly, we apply our findings on two use cases. Finally, we summarize our main findings.
\textit{We consider an \gls{iot} node with a battery capacity between \SI{1}{\kilo\joule} and \SI{10}{\kilo\joule} that needs to be charged within a limited time of maximum \SI{5}{\minute}.}
\section{Introduction}\label{sec:introduction}
\subsection{Background}
Finding a complete and consistent theory that reconciles Quantum Mechanics (QM) and General Relativity (GR) has been an elusive goal since the very first attempts \cite{bronstein1936kvantovanie} made during the `miracle decades' of theoretical physics at the start of the last century.
Independently both theories have been remarkably successful, but to date no consistent theory exists that successfully unites both \cite{graf2018hawking}.
At the heart of the problem is a profound inconsistency between how space and time are treated.
For QM and Quantum Field Theory (QFT) space and time are transcendent quantities, input by hand into the theories, whereas GR {\sl is essentially} a theory of the geometry of spacetime.
Recently new theoretical models that capitalize on advances in network science have emerged that attempt to reconcile GR and QM using an emergent geometry.
These `Ising Geometry' (IG) models were originally proposed by Trugenberger \cite{trugenberger2015quantum}, subsequently extended and applied to quantum gravity \cite{trugenberger2016random,trugenberger2017combinatorial,kelly2019self,tee2020dynamics,tee2021quantum}, and are central to the Combinatorial Quantum Gravity (CQG) program.
A fundamental precept of these models is a discrete quantum structure, implicitly or explicitly assumed to be at the Planck length scale.
This approach can be viewed as building upon other theories that imply a discrete structure to spacetime, including Quantum Graphity \cite{konopka2006quantum,konopka2008quantum}, Causal Set theory \cite{bombelli1987space,dowker2006causal}, and Loop Quantum Gravity \cite{rovelli2014covariant,smolin2004invitation,smolin2011unimodular}.
Although not normally associated with discrete geometries, type IIB String theories also admit a `matrix model' formulation \cite{ishibashi1997large,klinkhamer2020iib}, which proposes a Lagrangian with an $SU(N)$ gauge symmetry where $N$ is very large.
In matrix models, conventional spacetime `emerges', with Lorentzian signature metrics, when one identifies the eigenvalues of the $N \times N$ gauge field matrices with spacetime locations, with the limit $N \rightarrow \infty$ recovering a smooth manifold.
Cao {\sl et al} \cite{cao2017space} pointed out that it is also possible to cause geometry to emerge from the quantum entanglement of a large number of interacting Hilbert spaces, yielding a discrete graph structure for the geometry.
Finally, and more recently, Wolfram {\sl et al} proposed an entirely abstract method of emerging a discrete spacetime based upon `branchial graphs' \cite{gorard2020some,gorard2020zx}; although entirely different from the models described here, it shares the commonality of representing the emerged geometry as a graph.
In this work we will focus upon the IG models, so named because they propose that spacetime emerges from a disordered and independent collection of quantum information bits subject to a ferromagnetic, Ising-like interaction.
For the precise details we refer the reader to the cited literature \cite{trugenberger2015quantum,trugenberger2016random,tee2020dynamics}, but in principle the interaction between information bits creates the fabric of a geometry expressed as a graph, with entangled spins sharing an edge.
To prevent a fully connected graph from emerging, anti-ferromagnetic terms are introduced into the Hamiltonians of these models that disfavor edges, and higher-order structures such as triangles, which would impair locality.
The models possess many attractive features, including a regular and highly local ground state, the emergence of a low preferential dimension as evidenced by a convergence of extrinsic and intrinsic dimensions, and the capacity to support the modelling of matter by the presence of stable defects in the mesh \cite{tee2020dynamics}.
These models do not yet amount to a complete and consistent theory of spacetime, not least because there is no accepted and established mechanism for the emergence of a causal structure, or even a distinct dimension that could be associated with time.
Indeed, it is often claimed, though disputed, that the discrete structure of the lattice prevents the model from exhibiting Lorentz covariance, as the $SO(3,1)$ symmetry is broken by the lattice.
We do not believe that this is a fatal flaw in discrete approaches and refer the reader to the work of Amelino-Camelia \cite{amelino2002relativity} and the comprehensive review by Hossenfelder \cite{hossenfelder2013minimal} for the details on how discrete geometries and Lorentz covariance can be reconciled.
Further, despite the simplicity and familiarity of the underlying physical model, we must consider the model and its refinements as essentially a ground-up proposal for a model of emergent spacetime with no explicit connection to the well-tested theories of QM and GR.
For these proposals to be taken seriously this connection with GR in particular is essential if any experimentally testable consequences are to be proposed.
It is to that question we will address ourselves in this paper.
The key insight will be to leverage recent results in the study of discrete curvature \cite{tee2021enhanced} that will allow us to analyze directly the Hamiltonian formulation of GR in a discrete setting.
We will seek to illustrate that by reinterpreting the emerged geometry as spatial, the Hamiltonians for the IG models can be inferred from a discretized formulation of Hamiltonian or Canonical GR (CGR), at least up to a choice of gauge or spacetime foliation.
In essence we decompose the spacetime manifold as $M=G(V,E) \otimes \mathbb{R}$, with $G(V,E)$ being the emerged spatial graph, and time $t \in \mathbb{R}$ regarded as a sequence of labels on the state of the spatial graph.
It should be noted that there is no requirement for time to be continuous, and we could equally well have chosen $M=G(V,E) \otimes \mathbb{Z}$.
Our argument rests upon the revolution in recent years in discrete differential geometry that now provides a rich framework of curvature measures analogous to the familiar Riemann-Ricci curvature of a smooth manifold.
This allows us to identify the induced curvature of a spatial slice with the discrete curvature of a spacetime graph, and the interpretation of spin-spin interactions as a kinetic energy term.
There are two options for discrete curvature that we will consider: the Forman-Ricci (FR) \cite{forman2003bochner} and Ollivier-Ricci (OR) \cite{ollivier2011visual} curvature.
It has been suggested that OR curvature has as its continuum limit \cite{van2021ollivier} the normal Ricci curvature of a manifold if one considers the graph as embedded in it.
However, OR curvature has the drawback of being difficult to work with, and it lacks the supporting structure of index theorems, differential forms and vector fields that FR curvature possesses.
In a recent result \cite{tee2021enhanced} it is proven that, for certain important classes of graphs, FR and OR curvature are simply related, allowing us to work with FR curvature and assume that in the low-energy continuum limit of the emerged geometry we recover the normal Ricci curvature.
Our claim in this paper is that starting with the Hamiltonian formulation of GR, we can propose a model that is broadly equivalent to the IG models, implying in turn that it may be possible to claim that such models have GR as a low energy limit.
We refer to the model derived from CGR as the Canonical Ising Model (CIM).
To investigate the properties of the CIM model we use numerical simulations to directly compute the ground states using Glauber dynamics \cite{muller2012neural}, and present the results.
We will show, in the case of the FR curvature, that the ground states are broadly comparable to those produced in the IG models, specifically the QMD version, having a stable and highly local geometry.
The OR curvature model though is more problematic.
Drawing on the result in \cite{tee2021enhanced}, to leading order the Hamiltonian for FR curvature is a polynomial in the average degree $k$, which, for a given value of the coupling constant, has a well-defined minimum energy at a specific value of $k$.
In the case of OR curvature, however, the leading-order approximation of the Hamiltonian is linear in $k$ and so has no such minimum.
The existence of a stable energy minimum requires the insertion of a power of $k$, which is precisely the equivalence of the two curvatures established in \cite{tee2021enhanced}.
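The contrast between the two leading-order behaviors can be made concrete with a toy sketch (the functional forms below are assumptions for illustration only, not the actual model Hamiltonians):

```python
# Toy illustration: an energy polynomial in the mean degree k has an interior
# minimum, while a linear one is only minimised at the boundary of the range.
def h_poly(k, g=10.0):
    return -g * k + k ** 2   # polynomial in k: analytic minimum at k = g/2

def h_lin(k, g=10.0):
    return -g * k            # linear in k: no interior minimum

ks = [0.5 * i for i in range(1, 41)]   # candidate mean degrees k in (0, 20]
print(min(ks, key=h_poly))             # interior minimum at k = 5.0
print(min(ks, key=h_lin))              # pushed to the boundary, k = 20.0
```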
Focusing on the ground states of the CIM using Forman curvature, we see that the ground states of the graphs mirror the QMD models at least down to $3$ dimensions, below which the graphs show very divergent behavior.
This includes an explosion in clustering, and divergence in both intrinsic and extrinsic measures of dimensionality, which indicates that there may be distinct phases in the topology of the ground states of the CIM model.
This phase transition could perhaps point to something special about the model at three spatial dimensions, corresponding to the familiar observed $3+1$ dimensionality of the Universe.
Specifically, when we compute both the intrinsic and extrinsic curvature of the ground state graphs using the techniques described in \cite{jonsson1992intrinsic,ambjorn1997quantum}, we see that there is a distinct divergence of dimension measures below a critical value that lies between $3$ and $4$.
The CIM ground state graphs have a chaotic and `crumpled' topology below this, although the precision of the dimension at which this occurs is not sharp due to the computational limits on the size of the graphs we can numerically simulate.
We also note a persistent negative spatial curvature of the model, which tends towards zero as dimension reduces.
The constraints from the canonical treatment of GR indicate that this negative spatial curvature is balanced by the extrinsic curvature and may provide a hint of the global topology of discrete spacetime.
\subsection{Organization of this paper}
We begin our treatment in \Cref{ssec:qmd} with an overview of the models previously studied, to enable detailed comparison later with the new discrete CIM model.
In particular we focus upon the QMD model introduced in \cite{tee2020dynamics}, as it has a simple Hamiltonian including a correction term to suppress triangles.
This correction term will prove to be important and can be reinterpreted as approximating discrete curvature in the Hamiltonian.
We present an overview of canonical GR, for completeness in \Cref{sec:discreteGR}, sufficient to be able to propose a discrete canonical model.
We operate in a specific choice of gauge, the `Gaussian' gauge, which simplifies the Hamiltonian.
Using these results, we briefly survey discrete approaches to curvature in \Cref{sec:curvature}, focusing on the Forman-Ricci and Ollivier-Ricci treatments.
To use these in our discrete model we need certain mean-field approximations, which we outline in \Cref{ssec:mean-field}.
We bring all of the elements together to propose our discrete canonical Ising model (CIM), in \Cref{ssec:discrete_hamiltonian}, using the FR curvature, and discuss how this compares to Ising models such as QMD.
We are then able to show analytically that the QMD Hamiltonian is, to first order in certain mean-field parameters, an approximation to CIM.
To explore the properties of the CIM model we describe the numerical computation of the ground states of this model in \Cref{sec:results}, and compare these to QMD.
We include the connectivity and locality properties of the models, but also investigate some topological properties not previously studied.
These include the discrete curvature and density of chordless cycles in the graph up to length $5$.
Additionally, it has also been proposed \cite{trugenberger2015quantum,tee2020dynamics} that the free parameter in the model, the coupling constant $g$, `runs' with energy.
As such we can consider the states of the model at different values of $g$ representing the evolution of the geometry from an origin event.
We discuss how our results could provide an alternative view of a `topological big bang' with spatial expansion that is initially rapid, slowing down as the spatial dimension of the model approaches $3$.
Finally in \Cref{sec:conclusion} we conclude with an assessment of the merits and deficiencies of the model we have proposed, and highlight potential further lines of inquiry.
\subsection{Overview of Ising emergent geometry}
\label{ssec:qmd}
The model originally introduced by Trugenberger \cite{trugenberger2015quantum} was proposed as an Ising model, with spin `qubits' located on the vertices of a graph, and edges being favored between spins that align due to their ferromagnetic interaction.
Operating in opposition to edge formation is a link frustration term, expressed as an anti-ferromagnetic Ising interaction between spin states defined on the edges.
The statement of the model \cite{tee2020dynamics} can be made precise by defining the Hilbert spaces for the edges and the $N$ vertices of a simple, undirected graph $G(V,E)$, where $V$ is the set of vertices and $E \subset V \times V$ the edges connecting any two distinct vertices.
It should be noted that throughout this paper we use graph theory notation consistent with standard texts such as Bollob\'as \cite{bollobas2013modern}.
On each vertex $v_i $ we associate the Hilbert space $\mathcal{H}_{i} = \mbox{span} \{ \ket{i,0}, \ket{i,1} \}$, and define a spin operator obeying $\hat{s}_{i} \ket{i,s}=s_i\ket{i,s}$ on this space.
We assert the usual anti-commutation relations and ladder operators consistent with spin $1/2$ fermions,
\begin{align}
\{ \hat{s}^{+}_i, \hat{s}^{+}_j \} = \{ \hat{s}^{-}_i, \hat{s}^{-}_j \} = 0, \quad \{ \hat{s}^{-}_i, \hat{s}^{+}_j \}&= \delta_{ij} \mbox{,} \\
\hat{s}^{+}_i \ket{ i,1} = 0, \hat{s}^{+}_i \ket{ i,0} &= \ket{i,1} \mbox{,} \\
\hat{s}^{-}_i \ket{ i,1} = \ket{i,0}, \hat{s}^{-}_i \ket{ i,0} &= 0 \mbox{.}
\end{align}
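These single-site relations are easy to check numerically. The sketch below is our own illustration (the basis ordering $\ket{i,0} = (1,0)^T$, $\ket{i,1} = (0,1)^T$ is an assumption made for concreteness); it represents the ladder operators as $2\times2$ matrices and verifies the same-site anticommutator:

```python
import numpy as np

# Assumed basis convention for illustration: |i,0> = (1,0)^T, |i,1> = (0,1)^T
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
s_plus  = np.array([[0, 0], [1, 0]])   # s^+ |i,0> = |i,1>,  s^+ |i,1> = 0
s_minus = np.array([[0, 1], [0, 0]])   # s^- |i,1> = |i,0>,  s^- |i,0> = 0

assert (s_plus @ ket0 == ket1).all() and not (s_plus @ ket1).any()
assert (s_minus @ ket1 == ket0).all() and not (s_minus @ ket0).any()
# Same-site anticommutator {s^-, s^+} = 1, as required for spin-1/2 fermions
assert (s_minus @ s_plus + s_plus @ s_minus == np.eye(2)).all()
```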
For the edges we proceed similarly with $\frac{1}{2}N(N-1)$ Hilbert spaces, $\mathcal{H}_{ij} = \mbox{span} \{ \ket{i,j,0}, \ket{i,j,1} \}$, with the state $\ket{i,j,1}$ indicating the presence of an edge, and $\ket{i,j,0}$ its absence.
In a parallel fashion to the vertices we define fermionic edge annihilation and creation operators,
\begin{align}
\{ \hat{a}^{\dagger}_{ij}, \hat{a}^{\dagger}_{kl} \} = \{ \hat{a}_{ij}, \hat{a}_{kl} \} = 0, \quad \{ \hat{a}_{ij}, \hat{a}^{\dagger}_{kl} \}&= \delta_{ik} \delta_{jl} \mbox{,}\\
\hat{a}^{\dagger}_{ij} \ket{ i, j, 1} = 0, \hat{a}^{\dagger}_{ij} \ket{ i, j, 0} &= \ket{i, j, 1} \mbox{,}\\
\hat{a}_{ij} \ket{ i, j, 1} = \ket{i,j, 0}, \hat{a}_{ij} \ket{ i, j, 0} &= 0 \mbox{.}
\end{align}
The adjacency matrix of the graph $A_{ij}$ (defined as having the value $1$ when there is an edge between vertices $i$ and $j$, and zeros elsewhere, with $A_{ii}=0$), can be represented using these edge annihilation/creation operators as $A_{ij} = \hat{a}^{\dagger}_{ij} \hat{a}_{ij}$.
With these definitions we are able to state the Hamiltonian of the model, termed the Dynamical Graph Model (DGM) described in \cite{trugenberger2015quantum} as,
\begin{equation}\label{eqn:trugenberger}
H_{DGM}=\frac{g^2}{2} \Bigg ( \sum\limits_{i \neq j}^N \sum\limits_{ k \neq i,j }^N A_{ik}A_{kj} \Bigg ) - \frac{g}{2} \sum\limits_{i,j} s_i A_{ij} s_j \mbox{.}
\end{equation}
It is possible, using numerical methods, to solve for the ground state graphs of this Hamiltonian at different values of $g$, and it was found that they possess certain attractive features:
\begin{description}
\item [Regular, Euclidean flat ground state] The ground state corresponds to a regular graph where nearly all of the nodes possess the same average degree (number of incident edges upon a given vertex) $k$. This configuration is referred to as a `large world' in network science, which means that the graph has a high degree of locality with few `short-cuts' between distant nodes.
\item [Low dimensionality] There are several ways to quantify the dimension of a graph, both intrinsic dimension (as would be measured by an observer confined to the graph), and extrinsic dimension measured from the point of view of a higher dimensional space in which the graph is embedded \cite{ambjorn1997quantum}. These two measures agree for the ground state until around $d=4$, at which point the extrinsic dimension does not reduce any further. This points to a `preferred' low dimension for the graph.
\item [Entropy area law] It is possible to demonstrate that the ground state graph possesses a measure of informational entropy which is related to the size of the boundary of the graph or any defects in it, rather than its bulk.
\end{description}
Despite these attractive features, the ground state obtained using Eq. \eqref{eqn:trugenberger} has persistent non-zero clustering.
The presence of clustering is highly undesirable as it amounts to a loss of locality in the emerged geometry.
Accordingly, the model was further refined by Trugenberger \cite{trugenberger2016random} through the insertion of perturbation terms in the third and fourth powers of the adjacency matrix, which penalize triangles and favor squares; it was later demonstrated that an even simpler Hamiltonian with one dimensionless coupling constant could achieve the same result without the use of additional parameters \cite{tee2020dynamics}.
This model is referred to as Quantum Mesh Dynamics (QMD), as it formed the basis of a dynamical theory of matter in the emerged ground state.
The Hamiltonian proposed in this model is,
\begin{equation}\label{eqn:qmd_hamiltonian}
H_{QMD}=\frac{g^2}{2} \Bigg ( \Tr (A^3) + \sum\limits_{i \neq j}^N \sum\limits_{ k \neq i,j }^N A_{ik}A_{kj} \Bigg ) - \frac{g}{2} \sum\limits_{i,j} s_i A_{ij} s_j \mbox{.}
\end{equation}
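For classical spin configurations $s_i = \pm 1$ the energy in Eq. \eqref{eqn:qmd_hamiltonian} can be evaluated directly from the adjacency matrix. The sketch below is our own illustrative code (not the simulation code used in \cite{tee2020dynamics}); it exploits the fact that the double sum over $A_{ik}A_{kj}$ is simply the off-diagonal sum of $A^2$:

```python
import numpy as np

def h_qmd(A, s, g):
    """Energy of the QMD Hamiltonian for classical spins s in {-1,+1}."""
    A = np.asarray(A, dtype=float)
    s = np.asarray(s, dtype=float)
    A2 = A @ A
    # sum_{i != j} sum_{k != i,j} A_ik A_kj = off-diagonal sum of A^2
    # (the k = i, j terms drop out automatically because A_ii = 0)
    two_paths = A2.sum() - np.trace(A2)
    triangles = np.trace(A2 @ A)          # Tr A^3
    spin_term = s @ A @ s                 # sum_{i,j} s_i A_ij s_j
    return (g**2 / 2) * (triangles + two_paths) - (g / 2) * spin_term

# Triangle graph with aligned spins: Tr A^3 = 6, off-diagonal sum of A^2 = 6,
# spin term = 6, so H = (1/2)(12) - (1/2)(6) = 3 at g = 1
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(h_qmd(K3, [1, 1, 1], 1.0))  # 3.0
```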
An intriguing feature of this Hamiltonian, and those proposed as extensions to DGM \cite{trugenberger2016random}, is the appearance of sequential powers of the adjacency matrix, which could suggest that they are an approximation to a more fundamental model.
In particular, do these powers of the adjacency matrix suggest that the Hamiltonian is in fact comprised of a kinetic term relating to neighbor spin interactions and a potential term relating to the structure and topology of the spacetime graph?
We will show in the following sections that indeed this could be the case, and the first term is an approximation of the spatial curvature of the graph.
When decomposed in this fashion we have a Hamiltonian that is strongly analogous to canonical GR, at least up to the choice of `lapse' and `shift' vectors.
The identification of $\sum\limits_{i,j} s_i A_{ij} s_j$ as the kinetic term was investigated in \cite{tee2020dynamics,tee2021quantum}, when one considers matter as excited defects in the ground state of the emerged geometry.
In that specific instance the precise formulation uses spin ladder operators and the graph Laplacian, but in form at least we consider the second term to be kinetic.
It is these observations, along with the numerical evidence presented, that underpin the central contribution of this work: that IG models and canonical GR are intimately related.
This result is intriguing as it could point to the possibility of GR emerging as a continuum limit to an emergent Ising geometry theory, similar to the models described in this paper.
\section{Discretizing Canonical Gravity}
\label{sec:discreteGR}
Our starting point is the Hamiltonian formulation of GR \cite{arnowitt1959dynamical,arnowitt2008republication,poisson2004relativist}, used as the basis for the canonical approach to quantum gravity \cite{dewitt1967quantum1,dewitt1967quantum2,dewitt1967quantum3}.
This approach to GR is well documented in the standard texts cited, and here we provide the minimum overview necessary to frame our proposed Hamiltonian.
Hamiltonian GR relies upon a decomposition of spacetime into a series of spacelike foliations $\Sigma_t$, at constant time $t$.
For the following discussion Greek indices $\alpha,\beta$ run from $0$ to $3$ and Latin indices $i,j,k$ from $1$ to $3$, and we will work in the `East Coast' $(-1,1,1,1)$ metric signature.
The foliation induces a metric on the hypersurfaces, which we denote as $h_{ij}$ to distinguish it from the $4$-metric $g_{\mu \nu}$.
We can decompose the $4$-metric into a combination of the induced metric $h_{ij}$ and two new quantities, the lapse $N$, and shift vector $N^i$, which we define below.
On the hypersurfaces we provide a local coordinate system $y^i$, and a transformation $x^{\alpha}=x^{\alpha}(t,y^i)$ that relates the local coordinates at time $t$ of the hypersurface to the general coordinates of spacetime.
The physical interpretation of lapse and shift vectors can be understood in terms of a congruence of curves $\gamma$ that intersect each spacelike foliation, such that the local coordinates $y^i$ of each intersection with each hypersurface are preserved.
The transformation from local to global coordinates at constant $t$, $e^{\alpha}_i=\left ( \pdv{x^{\alpha}}{y^i} \right )_t$, defines a natural set of basis vectors in the hypersurface.
Using these basis vectors, if $n^{\alpha}$ is the normal vector to the hypersurface, the tangent vector to a curve $\gamma$, $v^{\alpha}$ can be written as $v^{\alpha} = N n^{\alpha} + N^{a} e^{\alpha}_a$.
It is important to underline that the choice of $N$ and $N^a$ is arbitrary and can be thought of as a choice of gauge for a given foliation.
This freedom in the choice of lapse and shift vector means that they cannot be dynamic variables of the theory, and this results in certain constraints on the solutions of the resultant equations of motion, often referred to as the Hamiltonian and momentum constraints.
In terms of the induced metric, lapse and shift vector the $4$-metric can be decomposed as,
\begin{align}
g_{\mu \nu} &= \begin{pmatrix}
-N^2 + N_i N^i & N_i \\
N_j & h_{ij}
\end{pmatrix} \text{,} \label{eqn:decomposed_metric_cv}\\
g^{\mu \nu} &= \begin{pmatrix}
-N^{-2} & N^{-2} N^{i} \\
N^{-2} N^j & h^{ij} - N^{-2} N^i N^j
\end{pmatrix} \label{eqn:decomposed_metric_cont}\mbox{.}
\end{align}
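As a sanity check on Eqs. \eqref{eqn:decomposed_metric_cv}--\eqref{eqn:decomposed_metric_cont}, one can verify symbolically that the two block matrices are mutually inverse. The sketch below does this in one spatial dimension (a simplification we introduce purely for brevity), using $N_1 = h_{11} N^1$ to lower the shift index:

```python
import sympy as sp

# N = lapse, Nup = N^1 (contravariant shift), h = h_11 (induced metric component)
N, Nup, h = sp.symbols("N N1 h", positive=True)
Nlo = h * Nup                                   # lower the index: N_1 = h_11 N^1

g_lo = sp.Matrix([[-N**2 + Nlo * Nup, Nlo],
                  [Nlo,               h  ]])    # covariant 4-metric block form
g_up = sp.Matrix([[-1 / N**2,   Nup / N**2],
                  [Nup / N**2,  1 / h - Nup**2 / N**2]])  # contravariant form

# The product must reduce to the identity
assert (g_lo * g_up).applyfunc(sp.simplify) == sp.eye(2)
```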
Along with these definitions, it is then possible to define both an intrinsic and an extrinsic curvature that can be related back to the full Riemann tensor for spacetime using the Gauss-Codazzi equations.
The intrinsic tensor $~^3\!R_{ij}$ is interpreted as the curvature as experienced in the hypersurface, and the extrinsic curvature $K_{ij}$, the curvature of the foliations as embedded in the full spacetime.
In what follows we draw largely upon the standard text by Poisson \cite{poisson2004relativist}.
It is possible to express the theory of GR in terms of a Hamiltonian where the dynamic variables are the induced metric $h_{ij}$ and its time derivative $\dot{h}_{ij} = \partial_t h_{ij}$.
The canonical momentum $p^{ij}$ can be defined using the gravitational Lagrangian density $\mathcal{L}_G$ as,
\begin{equation}
p^{ij} = \pdv{ \left( \sqrt{ -g} \mathcal{L}_G \right ) }{\dot{h}_{ij}} = \frac{\sqrt{h}}{16 \pi } (K^{ij} -K h^{ij}) \mbox{,}
\end{equation}
where $K=h_{ij}K^{ij}$ is the extrinsic curvature scalar.
Using this expression for the canonical momentum we state the following expression for the gravitational Hamiltonian $H_G$,
\begin{equation}\label{eqn:gr_ham}
\begin{split}
(16 \pi ) H_G &= \int_{\Sigma_t} \left [ N( K^{ij} K_{ij} -K^2 - ~^3 R) -2 N_i (K^{ij} - K h^{ij})_{|j} \right ] \sqrt{h} ~ d^3 y\\
&\mbox{ + surface terms, }
\end{split}
\end{equation}
where $(\dots)_{|j}$ denotes covariant differentiation compatible with the induced metric $h_{ij}$.
We can express \Cref{eqn:gr_ham} in terms of the momentum $p^{ij}$ and its scalar contraction $p=h_{ij}p^{ij}$ as follows,
\begin{equation}
\begin{split}
(16 \pi ) H_G &= \int_{\Sigma_t} \left [ N \left ( \frac{(16 \pi )^2}{h} ( p^{ij}p_{ij} -\frac{1}{2}p^2 ) - ~^3 R \right ) -2N_i \left ( \frac{16 \pi }{\sqrt{h}}p^{ij} \right )_{|j} \right ] \sqrt{h} ~d^3 y\\
&\mbox{ + surface terms. }
\end{split}
\end{equation}
We now invoke the freedom to choose the value of $N$ and $N_a$ by setting $N=1$, and $N_a=0$, sometimes referred to as the ``Gaussian gauge''.
On each foliation $\Sigma_t$ with this choice of lapse and shift vector, one obtains a system of Gaussian normal coordinates, for which observers with $x^i = $ constant represent local surfaces of simultaneity (see \cite{thorne2000gravitation} sections $21,27$).
Alternatively, one can also think of each curve $\gamma$ intersecting each hypersurface orthogonally.
Discarding surface terms we are left with,
\begin{equation}\label{eqn:H_can}
(16 \pi ) H_G = \int_{\Sigma_t} \left [ \frac{(16 \pi )^2}{h} ( p^{ij}p_{ij} -\frac{1}{2}p^2 ) -~^3R \right ] \sqrt{h} ~d^3 y \mbox{.}
\end{equation}
In the standard interpretation of classical mechanics $H=T+V$, where $T$ is the kinetic energy and $V$ the potential, and indeed we see in this Hamiltonian that there is a term in the square of the canonical momenta that we can equate to $T$.
Making this association we interpret $( p^{ij}p_{ij} -\frac{1}{2}p^2 )$ as our kinetic energy $T$ and $(-~^3R)$ as the potential energy $V$, up to proportionality constants.
Our strategy is to replace these terms with their discrete analogs and propose a Hamiltonian that represents a discretized form of the CGR Hamiltonian $H_G$.
For the intrinsic curvature it is natural to substitute the discrete curvature of a graph that we will introduce in \Cref{sec:curvature}, and denote as $\kappa_{ij}$.
Discrete curvature in the context of a graph is defined on the edges, and the equivalent of the scalar curvature $~^3R$ at a point is the sum over all edges incident at a given vertex, $\sum\limits_{j \sim i} \kappa_{ij}$, with the notation $j \sim i$ indicating that the vertex $j$ is connected by an edge to the vertex $i$.
In this way we identify vertices of the graph with points in spacetime in the continuum limit.
We will make this much more precise in \Cref{sec:curvature}.
So much for the intrinsic curvature and $V$, but what can play the role of the kinetic energy term?
In \cite{tee2020dynamics} it was demonstrated that a particular choice of interaction Hamiltonian between opposite spins in the mesh was consistent with the non-relativistic wave equation in the continuum limit.
This required an interaction Hamiltonian of the form,
\begin{equation}\label{eqn:H_mod}
\hat{H}_{ij}^{I} = -\frac{g}{2\epsilon_m r^2_{ij} }\hat{s}_i^{+} ( 1 + L_{ij} ) \hat{s}_j^{-} \text{,}
\end{equation}
where $L_{ij}$ is the Laplacian matrix of the graph, defined as $L_{ij} = \Delta_{ij} -A_{ij}$, where $\Delta_{ij} = \delta^i_j k_i$ is the degree matrix.
For the remaining terms, $r_{ij}$ is the distance (i.e. shortest path length) between vertices $i$ and $j$, $\epsilon_m$ the excitation energy necessary to flip a spin, $\hat{s}^{\pm}_{i}$ the spin ladder operators at a given vertex, and $g$ the coupling constant.
It is precisely $L_{ij}$ that in the continuum limit gives rise to the momentum term, arising from the fact that $L_{ij}$ is associated with $-\nabla^2$ when discrete graph models are taken to a continuum limit \cite{chung1997spectral}.
The Laplacian measures how much of the total connectivity of a vertex is carried by each edge incident upon it.
In an unweighted graph this is of course evenly distributed amongst all edges.
It is this property that, in discrete dynamical models, measures the `flow' of relative influence in a particular direction in a graph away from a specific vertex.
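A small illustration (our own example, with hypothetical values) of the Laplacian acting as a discrete $-\nabla^2$: applied to a function that is linear along a path graph, it gives zero at the interior vertex, just as the second derivative of a linear function vanishes in the continuum:

```python
import numpy as np

# Path graph P3: three vertices in a line, edges (0,1) and (1,2)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A      # L = Delta - A, the graph Laplacian
x = np.array([0.0, 1.0, 2.0])       # a "field" linear along the path
print((L @ x)[1])                   # 0.0: the discrete second difference vanishes
```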
The structure of this interaction Hamiltonian models the dynamics of defect excitations in the ground state of IG models.
Fundamentally it compares spins at different vertices in the graph, and in the ground state this structure is captured by the adjacency matrix.
In fact, terms involving simply the adjacency matrix and vertex spin interactions such as $\sum\limits_{ij} \hat{s}_i A_{ij} \hat{s}_j$ are often associated as representing the `kinetic' energy of the Ising model when the model evolves in time \cite{creutz1986deterministic,marcjasz2017phase}.
In fact the Hamiltonians of the DGM and QMD models, \Cref{eqn:trugenberger} and \Cref{eqn:qmd_hamiltonian}, both contain such a term, capturing the tendency for aligned spins to generate an edge and therefore `propagate' influence in the graph.
Pursuing this analogy, we identify this $\sum\limits_{ij} \hat{s}_i A_{ij} \hat{s}_j$ term as representing `momentum' for the purposes of constructing our discrete Hamiltonian as realized in the graph $G(V,E)$.
To propose this Hamiltonian we therefore make the following substitutions in Eq. \eqref{eqn:H_can},
\begin{align}
\int_{\Sigma_t} \dots \sqrt{h} ~ d^3 y & \rightarrow l_p^3 \sum\limits_{i \in V} \mbox{;} \\
\frac{(16 \pi )^2}{h}(p^{ab}p_{ab} -\frac{1}{2}p^2) & \rightarrow -\frac{g}{2} \sum\limits_{i,j \in V} \hat{s}_i A_{ij} \hat{s}_j \mbox{;} \label{eqn:laplace_sub}\\
~^3R(y) & \rightarrow \sum\limits_{j \sim i} \kappa_{ij} \mbox{.}
\end{align}
In the first substitution we have added the $l^3_p$ factor, with the assumption that $l_p$ is the lattice spacing, for completeness.
This factor arises from the discretization of the measure; as it has no effect on our numerical analysis, it will be disregarded in what follows.
We have introduced a coupling constant $-\frac{g}{2}$ into our model; although any proportionality for the potential term could be absorbed into a single coupling constant in the Hamiltonian, we will use $-\frac{g}{2}$ for the kinetic term and its square $\frac{g^2}{4}$ for the potential term, allowing us to balance the two contributions when we simulate.
To be explicit, the simulations use $g$ as the free parameter of the model, which ultimately we will see is linked to the dimensionality of spacetime.
In our simulations we vary the value of $g$, and we find that for our Hamiltonian the range $0.04 < g < 0.14$ yields ground states that are particularly interesting.
As $g$ increases the ground state is obtained when the curvature potential energy balances the spin-spin kinetic energy at the minimum of the Hamiltonian.
For large values of $g$ the curvature term required to balance the kinetic contribution and produce this minimum can be smaller, which we shall see corresponds to a ground state graph that has an overall lower connectivity and more closely resembles a square lattice.
We can now finalize the prescription for obtaining a discretized CGR model, and propose our canonical Ising Hamiltonian $H_{CIM}$ as,
\begin{equation}\label{eqn:h_comb}
H_{CIM} = -\frac{g}{2}\sum\limits_i \sum\limits_j \hat{s}_i A_{ij} \hat{s}_j - \frac{g^2}{4} \sum\limits_i \sum\limits_j \kappa_{ij}\text{.}
\end{equation}
It will be noted that the negative sign in the $-^3R$ term carries over into the above expression to give an overall negative sign to the term in discrete curvature.
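Given a spin configuration and a precomputed edge-curvature matrix, the CIM energy of Eq. \eqref{eqn:h_comb} is straightforward to evaluate. The sketch below is illustrative only (classical spins, and a caller-supplied $\kappa_{ij}$ whose computation is the subject of \Cref{sec:curvature}); it mirrors the two terms directly:

```python
import numpy as np

def h_cim(A, s, kappa, g):
    """CIM energy: -(g/2) sum_ij s_i A_ij s_j - (g^2/4) sum_ij kappa_ij.
    kappa is a matrix with kappa[i, j] = kappa_ij on edges and 0 elsewhere."""
    A = np.asarray(A, dtype=float)
    s = np.asarray(s, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    return -(g / 2) * (s @ A @ s) - (g**2 / 4) * kappa.sum()

# Hypothetical two-vertex example: one edge, aligned spins, edge curvature 1
A = [[0, 1], [1, 0]]
kappa = [[0, 1], [1, 0]]
print(h_cim(A, [1, 1], kappa, 2.0))  # -(1)(2) - (1)(2) = -4.0
```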
In order to make use of Eq. \eqref{eqn:h_comb}, we need an expression for the discrete curvature of the spacetime graph.
We turn to the computation of this in \Cref{sec:curvature}.
\section{Discrete Curvature and the Discrete Canonical Hamiltonian}
\label{sec:curvature}
We will present a very brief overview in this section of two popular methods of computing graph curvature, the Forman-Ricci and Ollivier-Ricci curvature.
The principle aim of this section is to obtain an expression for the curvature in terms of graph primitives such as the adjacency matrix.
We will draw upon recent results regarding the mean field values of these curvatures \cite{tee2021enhanced}, and do not intend this to be a comprehensive treatment of what is a complex subject.
\subsection{Forman-Ricci}
\label{ssec:forman}
We begin with a brief overview of FR curvature, originally proposed by Robin Forman, as a measure of curvature on CW (closure-finite, weak topology) complexes \cite{forman2002combinatorial,forman2003bochner,forman2004topics}.
The work is of particular interest for applications to physics, as it includes the discrete analogs of differential forms, Morse theory and Ricci curvature to their counterparts in the differential geometry of smooth manifolds.
The definition of FR curvature draws upon an analogy to the identities developed by Bochner \cite{bochner1946vector} regarding the decomposition of the Riemannian-Laplace operator defined on $\Omega^p(M)$, the space of $p$ forms of a manifold $M$.
In this decomposition, the Laplace operator can be expressed as a sum of the square of the covariant derivative plus a curvature term.
This is the origin of the Bochner-Weitzenb{\"o}ck identity and in its discrete form it is used to derive the formula for the FR curvature.
Forman's analysis is conducted using the formalism of CW complexes (the reader is referred to standard texts on algebraic topology for more detail such as Hatcher \cite{hatcher2002algebraic}).
The concept is similar to simplicial complexes, which are perhaps more familiar from the standard theorems used to compute homology and homotopy groups of topological spaces \cite{nash1988topology,nakahara2003geometry}, but the formalism is more general.
The building blocks are the $p$-cells, where $p$ refers to the dimension of the cell; intuitively a $d$-dimensional CW complex is built by gluing $p \leq d$ cells along shared faces.
For example, a $0$-cell is a point $p_1$, a $1$-cell an edge between two points $\langle p_1 p_2\rangle$, and a $2$-cell could be a triangle, with the interior included, bounded by three $1$-cells $\langle p_0 p_1 \rangle , \langle p_1 p_2 \rangle , \langle p_2 p_0 \rangle$, or indeed longer cycles including their interiors.
For our purposes we will focus on cell complexes up to $p=2$, which are essentially equivalent to graphs, with the addition that chordless cycles in the graph are assumed to bound a $2$-cell.
We will define more precisely the concept of chordless cycles in \Cref{ssec:mean-field}.
An important concept is the boundary of a $p$-cell, being the set of $(p-1)$-cells that it contains.
Intuitively, for a $1$-cell $\langle p_0p_1 \rangle$ the boundary is the collection of points $p_0$ and $p_1$; for a general $p$-cell $\alpha_p$, we say it is a proper face of a $(p+1)$-cell $\beta$ if it is a member of the boundary set of $\beta$, and we write $\alpha_p < \beta_{p+1}$, or conversely $\beta_{p+1} > \alpha_p$.
A $p$-dimensional CW complex $M$ is then a collection of cells $\alpha_q$, $q \in \{0,\dots,p\}$, with the restriction that any two cells are joined along a common proper face, and all faces of every cell are contained in the complex.
The key concept used to define the curvature of cell complexes is the definition of the neighbors of complexes of arbitrary degree.
We reproduce the definition first stated by Forman here \cite{forman2002combinatorial,forman2003bochner},
\begin{definition}\label{def:neighbor}
Let $\alpha_1$ and $\alpha_2$ be $p$-cells of a complex $M$. Then $\alpha_1$ and $\alpha_2$ are neighbors if,
\begin{enumerate}
\item $\alpha_1$ and $\alpha_2$ share a $(p+1)$ cell $\beta$ such that $\beta > \alpha_1$ and $\beta > \alpha_2$, or
\item $\alpha_1$ and $\alpha_2$ share a $(p-1)$ cell $\gamma$ such that $\gamma < \alpha_1$ and $\gamma < \alpha_2$.
\end{enumerate}
\end{definition}
In addition, we can partition the set of neighbors of a complex into parallel and non-parallel neighbors.
We say that two $p$-cells $\alpha_1$,$\alpha_2$ are parallel, if one but not both of the conditions in \Cref{def:neighbor} are true, and we write $\alpha_1 \parallel \alpha_2$.
FR curvature is defined as a series of maps $\mathcal{F}_p : \alpha_p \rightarrow \mathbb{R}$, defined for each value of $p$.
In the case of an unweighted CW complex, this reduces to a simple form,
\begin{equation}\label{eqn:fr_simple}
\mathcal{F}_p (\alpha_p) = \# \{ \beta_{(p+1)} > \alpha_p \} + \# \{ \gamma_{(p-1)} < \alpha_p \} - \# \{ \epsilon_p \parallel \alpha_p \} \mbox{,}
\end{equation}
where $\epsilon_p$ is a $p$-cell that is a parallel neighbor of $\alpha_p$, and $\epsilon_p \neq \alpha_p$.
We can state this in words as the number of $(p-1)$-cells that bound $\alpha_p$, plus the number of $(p+1)$-cells that $\alpha_p$ is part of the boundary of, minus the number of $p$-cell parallel neighbors of $\alpha_p$.
The curvature function for $p=1$ is specifically identified as the analog of Ricci curvature, and defines the curvature of the edges of a graph, and we refer to this as the Forman-Ricci curvature.
We distinguish this particular value of the FR curvature for $p=1$ by the notation $\kappa_{ij}^f = \mathcal{F}_1(e_{ij})$.
Fortunately for a regular mesh \Cref{eqn:fr_simple} is highly simplified, as the vertices and edges constitute the $0$ and $1$-cells, and we assume that any chordless cycles in the graph constitute the $2$-cells.
The interpretation of \Cref{eqn:fr_simple} is often referred to as the `augmented' FR curvature.
It should be noted that many authors do not consider arbitrary length cycles, and only consider triangles.
This is not appropriate for our purposes as a flat square mesh would not have zero curvature.
Henceforth we will assume any reference to FR curvature refers to this augmented form.
At each vertex $v_i$, if $j \sim i$ indicates that the vertex $v_j$ is connected to $v_i$ by one edge, we define the analogy of the Ricci scalar curvature at $v_i$ as,
\begin{equation}\label{eqn:scalar_forman}
\kappa^f(v_i) = \sum\limits_{j \sim i} \kappa^f_{ij} \text{.}
\end{equation}
For the whole graph, $G(V,E)$, we can sum over all vertices, to compute the total curvature as follows,
\begin{equation}\label{eqn:scalar_forman_graph}
\kappa^f = \sum\limits_{i} \kappa^f(v_i) = 2\sum\limits_{e_{ij} \in E} \kappa^f_{ij} \mbox{,}
\end{equation}
with the factor of $2$ coming from the fact that the graph is undirected and each edge is counted twice.
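The mapping from Eq. \eqref{eqn:fr_simple} to graph primitives can be made concrete for the common special case in which triangles are the only $2$-cells, where it reduces to the well-known closed form $\kappa^f_{ij} = 4 - k_i - k_j + 3t_{ij}$, with $t_{ij}$ the number of triangles on the edge. The sketch below implements only this triangles-only form (it is not the full augmented curvature used in the text, which also counts longer chordless cycles):

```python
import numpy as np

def forman_ricci_triangles(A):
    """Forman-Ricci curvature per edge, counting only triangular 2-cells:
    kappa_ij = 4 - k_i - k_j + 3 * t_ij, t_ij = common neighbours of i and j."""
    A = np.asarray(A)
    k = A.sum(axis=1)                 # vertex degrees
    T = A @ A                         # T[i, j] = number of triangles on edge (i, j)
    return {(i, j): 4 - k[i] - k[j] + 3 * T[i, j]
            for i in range(len(A)) for j in range(i + 1, len(A)) if A[i, j]}

K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])    # one triangle
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],          # a 4-cycle, no triangles
               [0, 1, 0, 1], [1, 0, 1, 0]])
print(forman_ricci_triangles(K3)[(0, 1)])  # 3: 4 - 2 - 2 + 3
print(forman_ricci_triangles(C4)[(0, 1)])  # 0: 4 - 2 - 2 + 0
```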
\subsection{Ollivier-Ricci}
\label{ssec:ollivier}
The second form of curvature we consider is Ollivier-Ricci, first introduced by Yann Ollivier \cite{ollivier2007ricci,ollivier2009ricci,ollivier2011visual}.
This curvature has a close association to the conventional notion of curvature for a smooth manifold.
It is based upon the well known result from Riemannian geometry concerning small balls surrounding two points in a curved space.
Between the two points we consider a geodesic, and in flat space the average geodesic distance between all other points in the two balls is the same as the geodesic distance between the centers.
Positive curvature shortens this average distance and negative curvature lengthens it.
The starting point for OR curvature is to define the graph to be a metric space, with the metric provided by graph distance, and the role of the balls is replaced by unit normalized probability distributions on neighboring vertices.
The Wasserstein distance between `unit balls' centered around two points $i$ and $j$, denoted $b_i, b_j$, is compared against the geodesic distance between $i$ and $j$ to define the curvature.
This relies upon a unit normalized measure $\mu_i$ of `mass', and a transference plan that exchanges this `mass' between distributions on $b_i$ and $b_j$.
The Wasserstein distance is then defined to be the cost of the optimal such transference plan.
For a graph $G(V,E)$, with the graph distance metric, these concepts can be made more precise as follows.
The unit ball $b_i$ of a node $v_i$ corresponds to the set of nodes, including $v_i$ itself, that are connected by one edge to $v_i$, and the graph distance metric $d(v_i,v_j)$ is the shortest path length between two vertices $v_i$ and $v_j$.
Our metric space is therefore defined as $M=( G, d( v_i,v_j) )$.
The probability measure we use is called the graph counting measure, defined using the fractional cardinality of any subset of vertices $X \subset V$, simply stated as $\mu(X) = |X|/|V|$.
Applied to our unit balls $b_i$ we have $\mu(v_x^i)= 1 / |b_i|$, for $v_x^i \in b_i$.
This obeys the sum to unity constraint of a probability measure, $\sum_{v_x^i \in b_i} \mu( v_x^i ) = 1$, and we note for a node of degree $k_i$ that $\mu( v_x^i ) = 1/(k_i+1)$ is trivially valid as such a measure.
To obtain an optimal transport plan we need to calculate the minimum cost to carry the probability distribution from $b_i$ to $b_j$.
This is not a simple undertaking, but for regular graphs it is simplified by the symmetry of the neighborhoods of any given vertex.
As an example, in \Cref{fig:or_example} we depict such a graph, and the unit balls $b_i,b_j$ that form the neighborhood of $v_i$ and $v_j$ at a graph distance of $1$.
The transport cost $\pi(v_x,v_y)$ for each vertex $v_x^i$ in $b_i$ is the distance the node has to `transport' its probability $\mu(v_x^i)$, or a portion of it, to a node $v_y^j$ in $b_j$, multiplied by the value of the portion of the probability distribution transported.
The objective of the transport is to carry the probability distribution surrounding the source node onto that surrounding the destination node, which in turn may involve transfers of probability between all possible combinations of nodes.
For such a transport we have a cost of $\pi(v_x,v_y) = \mu(v_x^i) d( v_x, v_y )$.
For example, the node $v_a$ has a value of $\mu=1/5$, and it needs to move a distance of $3$ hops to $v_c$, yielding a cost of $3/5$.
We need to do the same with $v_d$, but the nodes $v_b$, $v_i$ and $v_j$ in $b_i$ do not need to move.
The specific collection of such moves is a transport plan, and the transference $\xi(b_i,b_j)$ between $v_i$ and $v_j$ is then the sum of these transport costs for a given transport plan $\Pi(b_i,b_j)$, formally,
\begin{equation}
\xi(b_i,b_j) = \sum\limits_{x \in b_i, y \in b_j, \pi( v_x, v_y ) \in \Pi(b_i,b_j)} \pi( v_x, v_y ) \text{.}
\end{equation}
For our example transport plan the transference is $6/5$, but in general there is a combinatorially large set of possible transports and therefore transport plans.
We are now ready to define the Wasserstein distance as the infimum of the transference over all possible transport plans,
\begin{equation}
W(b_i,b_j)= \inf\limits_{\Pi(b_i,b_j)} \xi( b_i,b_j) \text{.}
\end{equation}
Using this definition, the OR curvature $\kappa^o_{ij}$ of a given edge between $v_i$ and $v_j$ is then defined as,
\begin{equation}
\kappa^o_{ij} = 1 - \frac{ W(b_i,b_j) }{ d(v_i,v_j )} \text{.}
\end{equation}
In the case of our graph in \Cref{fig:or_example} the OR curvature of the edge $e_{ij}$ connecting $v_i$ and $v_j$ is therefore $\kappa^o_{ij} = 1-6/5 = -0.2$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[node distance=0.5cm]
\node [black] at (0,2) {\textbullet};
\node [black] at (1,1) {\textbullet};
\node [black] at (0,0) {\textbullet};
\node [black] at (2,2) {\textbullet};
\node [black] at (3,1) {\textbullet};
\node [black] at (4,2) {\textbullet};
\node [black] at (4,0) {\textbullet};
\draw [-] (0,0) -- (1,1);
\draw [-] (0,2) -- (1,1);
\draw [-] (1,1) -- (3,1);
\draw [-] (1,1) -- (2,2);
\draw [-] (2,2) -- (3,1);
\draw [-] (3,1) -- (4,2);
\draw [-] (3,1) -- (4,0);
\node at (1.1,0.75) {$v_i$};
\node at (2.85,0.75) {$v_j$};
\node at (0.25,2) {$v_a$};
\node at (2.25,2) {$v_b$};
\node at (4.25,2) {$v_c$};
\node at (0.25,0) {$v_d$};
\node at (4.25,0) {$v_e$};
\end{tikzpicture}
\caption{An example graph, upon which we will compute the OR curvature of the edge $e_{ij}$. For this graph $b_i=\{v_i, v_j, v_a,v_b,v_d\}$ and $b_j=\{v_j, v_i,v_b,v_c,v_e\}$.}
\label{fig:or_example}
\end{figure}
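The example computation above can be verified by brute force. Below is a minimal sketch (names ours, edge list transcribed from \Cref{fig:or_example}); since both balls carry uniform mass $1/5$, an optimal transport plan may be taken to be a bijection $b_i \to b_j$, so enumerating the $5! = 120$ bijections suffices:

```python
from fractions import Fraction
from itertools import permutations

# Edge list of the example graph, transcribed from the figure.
edges = [("d","i"), ("a","i"), ("i","j"), ("i","b"), ("b","j"), ("j","c"), ("j","e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def hops(src):
    """Breadth-first hop distances from src to every vertex."""
    dist, queue = {src: 0}, [src]
    while queue:
        u = queue.pop(0)
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# Unit balls: the vertex together with its neighbours, each point of mass 1/|b|.
b_i = sorted({"i"} | adj["i"])   # {a, b, d, i, j}
b_j = sorted({"j"} | adj["j"])   # {b, c, e, i, j}
dist = {u: hops(u) for u in b_i}

# Equal uniform masses reduce the transport to an assignment problem.
cost = min(sum(dist[u][v] for u, v in zip(b_i, perm))
           for perm in permutations(b_j))
W = Fraction(cost, len(b_i))     # Wasserstein distance
kappa = 1 - W                    # d(v_i, v_j) = 1
print(W, kappa)                  # 6/5 -1/5
```

The printed values confirm $W(b_i,b_j)=6/5$ and $\kappa^o_{ij}=-1/5$, as computed in the text.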
For an arbitrary graph without a high degree of symmetry and regularity, this computation is extremely involved and computationally expensive.
There are some exact calculations in closed form available \cite{kelly2019self}, but they rely upon some simplifying assumptions which may not hold in general.
However, for a regular graph, such as those known to be the ground states of the Ising emergent geometry models, these assumptions can be shown to hold.
We refer the reader to the cited literature for details.
\subsection{Mean-field results for discrete curvature}
\label{ssec:mean-field}
Fortunately the expressions for discrete curvature simplify dramatically when we constrain the types of graphs that we consider.
In particular the frequency of chordless cycles is critical.
A chordless cycles is used to represent the $2$-cells in the FR treatment of the graph, and indeed to compute the approximate values of OR curvature as described in \cite{kelly2019self,tee2021enhanced}.
Firstly, let us define a chordless cycle as a closed path in the graph that contains no bisecting chords.
More precisely, for a cycle of length $n$, being a collection of $n$ vertices connected path-wise by $n$ edges, the cycle is chordless if and only if each of its vertices has exactly two neighbors among the vertices of the cycle.
We present in \Cref{fig:chordless} two cycles of length $n=4$, to illustrate the difference.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[node distance=0.5cm]
\node [black] at (0,0) {\textbullet};
\node [black] at (0,1) {\textbullet};
\node [black] at (1,0) {\textbullet};
\node [black] at (1,1) {\textbullet};
\node [black] at (4,0) {\textbullet};
\node [black] at (4,1) {\textbullet};
\node [black] at (5,0) {\textbullet};
\node [black] at (5,1) {\textbullet};
\draw [-] (0,0) -- (0,1);
\draw [-] (0,1) -- (1,1);
\draw [-] (1,1) -- (1,0);
\draw [-] (1,0) -- (0,0);
\draw [-] (4,0) -- (4,1);
\draw [-] (4,1) -- (5,1);
\draw [-] (5,1) -- (5,0);
\draw [-] (5,0) -- (4,0);
\draw [-,red] (4,0) -- (5,1);
\node at (0,-0.25) {$v_1$};
\node at (1,-0.25) {$v_4$};
\node at (0,1.25) {$v_2$};
\node at (1,1.25) {$v_3$};
\node[text width=3cm] at (1.0,-1.0){Chordless cycle on $(v_1,v_2,v_3,v_4)$};
\node at (4,-0.25) {$v_1$};
\node at (5,-0.25) {$v_4$};
\node at (4,1.25) {$v_2$};
\node at (5,1.25) {$v_3$};
\node[text width=3cm] at (5.0,-1.0){$(v_1,v_3)$ Bisects cycle creating two Chordless cycles $(v_1,v_2,v_3),~(v_1,v_3,v_4)$};
\end{tikzpicture}
\caption{We depict here two cycles of length $4$ involving the vertices $(v_1,v_2,v_3,v_4)$. On the left hand side the cycle is chordless and would represent a proper $2$-cell in a CW complex for the computation of FR curvature or a square in the computation of OR curvature. The cycle on the right hand side however is bisected by the edge between $v_1$ and $v_3$. As such it would contribute two triangles to any curvature computation, and zero squares.}
\label{fig:chordless}
\end{figure}
Chordless cycles in a graph become increasingly rare as the length of the cycle increases.
This is easy to demonstrate for random graphs by noting that the link probability $p$ also defines the probability of a link not existing, namely $(1-p)$.
Given that in a cycle of length $n$ there are $n$ links present out of a possible $\frac{1}{2}n(n-1)$ available, the cycle is therefore chordless with probability $(1-p)^{\frac{1}{2}n(n-3)}$.
We denote the number of chordless cycles of length $n$ by $\square_n$, and cycles of length $n$ that are either chordless or not by $\boxtimes_n$.
For a graph of $N$ vertices we have,
\begin{align}
\square_n&=(1-p)^{\frac{1}{2}n(n-3)} \boxtimes_n \text{, and} \label{eqn:chordless_factor}\\
\boxtimes_n&=p^n\frac{N!}{n!(N-n)!} \text{.}
\end{align}
The last expression is obtained by calculating the number of potential ways in which $n$ vertices can be selected from a graph, which will have edges forming a cycle containing them with probability $p^n$.
Even though $\boxtimes_n$ gets combinatorially larger with increasing $N$ and $p$, the effect of increasing link probability is to reduce the number of those cycles that are chordless, making longer chordless cycles increasingly rare compared to shorter ones.
We plot in \Cref{fig:chordless_cycles} the theoretical count of chordless cycles in a random graph of $N=100$ nodes, for a range of link probabilities from $0.05$ to $0.85$.
The lower number is chosen to be above the critical value at which a fully connected graph emerges, given by the threshold $p \geq \frac{\log(N)}{N}$ \cite{barabasi2016network}.
Even for a graph of this size, the number of cycles becomes very large as $p$ increases, but for our graphs $p$ is much closer to $0.05$, at which point the most frequent chordless cycle is a square with a count of $85$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.43]{ChordlessProb_NM10.png}
\caption{For $N=100$ vertices we generate random Gilbert graphs \cite{barabasi2016network}, varying the link probability $p$. For each value of $p$ we compute the number of chordless cycles of length $n$, plotting the theoretical value for this number.}
\label{fig:chordless_cycles}
\end{figure}
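The dominance of short chordless cycles can be reproduced directly from the two counting formulae above. A small sketch (the function name is ours; note these formulae count vertex subsets, so absolute counts may differ from the plotted values):

```python
from math import comb

def expected_cycles(N, p, n):
    """Expected n-cycles in a random graph per the formulae above:
    boxtimes_n = p^n C(N, n); a fraction (1-p)^{n(n-3)/2} are chordless."""
    boxtimes = p**n * comb(N, n)
    chordless = (1 - p) ** (n * (n - 3) // 2) * boxtimes
    return chordless, boxtimes

# Chordless-cycle counts for N = 100, p = 0.05, cycle lengths 3..7.
counts = {n: expected_cycles(100, 0.05, n)[0] for n in range(3, 8)}
# Under these formulae the square (n = 4) is the most frequent chordless cycle.
assert max(counts, key=counts.get) == 4
```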
Regardless of the rarity of long chordless cycles, it is well established in lattice field theory that ascending orders of the derivative of a quantity defined at a vertex require the consideration of vertices at increasing edge distance \cite{creutz1983quarks}.
We make the simplifying assumption to restrict our analysis to a maximum cycle length of $n=5$, which captures vertices that are neighbors and next to nearest neighbors of a given vertex.
Considering the discrete calculus of a quantity defined at a vertex, this would permit the computation of a second derivative, which we consider sufficient for our analysis.
In principle any order of derivative could be included by extending the length of cycles included in our model.
The last simplifying assumption is the Independent Short Cycle (ISC) condition, the analog of the elastic particle assumption in the kinetic theory of gases, which we reproduce from \cite{tee2021enhanced}, extended to cycles of any length.
\begin{definition}{The independent short-cycle condition.}\label{def:hard_core}
The independent short-cycle condition is satisfied by a graph $G(V,E)$ on $N$ vertices if no two of its closed cycles share more than one edge. Let $\square_n (e_{ij})$ represent a closed cycle of length $n$ supported upon an edge $e_{ij}$, with $n \geq 3$.
The independent short-cycle condition is satisfied for the graph $G$ if and only if,
\begin{equation}
\bigcap\limits_{n=3}^{n \leq N} \square_n (e_{ij}) = e_{ij} \text{,~} \forall e_{ij} \in E \text{.}
\end{equation}
\end{definition}
With these conditions we can state the mean-field results from \cite{tee2021enhanced} for the discrete curvatures $\kappa^f$ and $\kappa^o$ in terms of the number of triangles $\triangle_{ij}$, squares $\square_{ij}$ and pentagons $\pentagon_{ij}$ incident upon an edge $e_{ij}$.
Denoting by $k$ the average degree of a node in the graph we have the following expressions for the curvatures,
\begin{align}
\kappa^{f}_{ij}&=4-2k +3 \triangle_{ij} + 2 \square_{ij} + \pentagon_{ij} \text{,} \label{eqn:exact_fr_expansion}\\
k \kappa^{o}_{ij} &= 4-2k +3 \triangle_{ij} + 2\square_{ij} +\pentagon_{ij}+ \delta_{ij} \label{eqn:exact_or_expansion} \text{.}
\end{align}
The correction to the mean field value, $\delta_{ij}$, is given by Eq. \eqref{eqn:mf_cases}.
\begin{equation}\label{eqn:mf_cases}
\delta_{ij} = \begin{cases}
0 &\text{if $k > 2+ \triangle_{ij}+\square_{ij}+\pentagon_{ij}$}\\
k-2-\triangle_{ij}-\square_{ij}-\pentagon_{ij} &\text{if $2+\triangle_{ij}+\square_{ij} < k < 2+ \triangle_{ij}+\square_{ij} + \pentagon_{ij}$} \\
2k-4-2\triangle_{ij}-2\square_{ij}-\pentagon_{ij} &\text{if $ k < 2+\triangle_{ij}+\square_{ij}$}
\end{cases}
\end{equation}
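These mean-field expressions are straightforward to encode. A sketch (function names ours), together with a sanity check that an edge of a two-dimensional square lattice, $k=4$ with two incident squares, is flat under both measures:

```python
def fr_curvature(k, tri, sq, pent):
    """Mean-field FR edge curvature, Eq. (exact_fr_expansion)."""
    return 4 - 2*k + 3*tri + 2*sq + pent

def or_curvature(k, tri, sq, pent):
    """Mean-field OR edge curvature, Eqs. (exact_or_expansion) and (mf_cases)."""
    if k > 2 + tri + sq + pent:
        delta = 0
    elif k > 2 + tri + sq:
        delta = k - 2 - tri - sq - pent
    else:
        delta = 2*k - 4 - 2*tri - 2*sq - pent
    return (4 - 2*k + 3*tri + 2*sq + pent + delta) / k

# Sanity check: a 2d square lattice edge (k = 4, two incident squares) is flat.
assert fr_curvature(4, 0, 2, 0) == 0
assert or_curvature(4, 0, 2, 0) == 0
```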
In the case of FR curvature we can relax the ISC condition in the mean-field approximation somewhat, as the formula for FR curvature in \Cref{eqn:exact_fr_expansion} only relies upon it to compute the number of parallel edges, using the assumption that each cycle incident upon an edge is independent.
This enters in the equation when we compute the number of parallel edges incident upon the $0$-cells (i.e. vertices) $i$ and $j$ of an edge $e_{ij}$, but not part of any cycle on $e_{ij}$, which we write as $P_0$.
We achieve this by subtracting from the average degree of the nodes at either end of an edge, two edges for each cycle incident upon the edge.
Specifically, for a mean field edge, the number of such parallel edges is $2k-2 - 2(\triangle_{ij}+\square_{ij}+\pentagon_{ij})$, on the assumption that each of the cycles are independent.
Therefore if we know the probability of independent edges we have an opportunity to further refine \Cref{eqn:exact_fr_expansion}.
Let us denote by $\rho_{ij}=(\triangle_{ij}+\square_{ij}+\pentagon_{ij})$ the sum of all cycles up to order $5$ incident upon the edge $e_{ij}$, and by $\xi$ the probability that {\sl all} of these $\rho_{ij}$ cycles are independent according to \Cref{def:hard_core}.
The number of edges consumed by cycles is at least $2\Theta[(1-\xi)\rho_{ij}]$, where $\Theta[\dots]$ is the Heaviside step function with $\Theta[ 0 ]=0$.
The factor of $(1-\xi)$ ensures that as $\xi \rightarrow 1$ all contributions to $P_0$ come uniquely from the cycles incident upon the edge and are accounted for by $\rho_{ij}$.
Bringing this together, we estimate the number of parallel edges $P_0$ that arise from edges incident upon the vertices $i$ and $j$ as,
\begin{equation}
P_0=2k-2-2\Theta[(1-\xi)\rho_{ij}] -2\xi\rho_{ij} \text{.}
\end{equation}
The probability $\xi$ is another mean field parameter of the curvature, and will simplify our computations later.
It should be noted that our estimate of $P_0$ is an upper bound, as some non-independent cycles will still consume edges from the $2k-2$ available ones at the vertices $i$ and $j$.
If we make the assumption that $\rho_{ij} > 0$, which for the ground states obtained for IG models is a reasonable one, we arrive at the following modified version of \Cref{eqn:exact_fr_expansion} that we will find useful in our numerical simulations,
\begin{equation}\label{eqn:non_isc_fr}
\kappa^f_{ij} = 4-2k + 2\Theta[(1-\xi)\rho_{ij}] + (1+2\xi)\triangle_{ij} + 2\xi \square_{ij} + (2\xi-1)\pentagon_{ij} \text{.}
\end{equation}
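A sketch of \Cref{eqn:non_isc_fr} as a function (names ours), with the pentagon coefficient taken as $+(2\xi-1)$ so that $\xi=1$, where $\Theta[0]=0$, recovers \Cref{eqn:exact_fr_expansion}:

```python
def step(x):
    """Heaviside step function with step(0) = 0."""
    return 1 if x > 0 else 0

def fr_curvature_isc(k, tri, sq, pent, xi):
    """FR edge curvature with the ISC relaxation, Eq. (non_isc_fr);
    xi is the probability that all incident cycles are independent."""
    rho = tri + sq + pent
    return (4 - 2*k + 2*step((1 - xi) * rho)
            + (1 + 2*xi)*tri + 2*xi*sq + (2*xi - 1)*pent)

# xi = 1 (ISC holds): recovers 4 - 2k + 3*tri + 2*sq + pent.
assert fr_curvature_isc(6, 1, 2, 1, 1) == 4 - 12 + 3 + 4 + 1
```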
A similar expression is not available for OR curvature.
The parameter $\xi$ can be estimated by computing the number of ISC violating edges in the graph.
In Kelly {\sl et al} \cite{kelly2019self} there is a simple method for identifying edges that violate the ISC constraint that rests upon searching the graph for certain motifs, a technique we make use of in our simulations.
If $|E|_{v}$ is the number of edges that violate the ISC constraint, then
\begin{equation*}
p_{isc}=1-\frac{|E|_{v}}{|E|} \text{,}
\end{equation*}
is the probability that an edge satisfies it, and we will make use of the following estimate for $\xi$, obtained from the average value of $\rho_{ij}$,
\begin{equation}
\xi=p_{isc}^{\expval{\rho_{ij}}} \text{.}
\end{equation}
Although mean field, the result in \Cref{eqn:non_isc_fr} is still exact with respect to the cycles.
As we intend to use the scalar values for the discrete curvature, we can further simplify by computing the average values of $\triangle_{ij}, \square_{ij}$ and $\pentagon_{ij}$ in terms of powers of the adjacency matrix, which we can obtain using the cycle count formulae originally derived by Harary {\sl et al} and Perepechko {\sl et al} \cite{harary1971number,perepechko2009number}.
If we denote $\triangle$, $\square$, and $\pentagon$ as the total number of $3,4$ and $5$ cycles in the graph, we state the following result due to Harary \cite{harary1971number},
\begin{align}
\triangle &= \frac{1}{6} \Tr ( A^3) \label{eqn:triangles} \mbox{,}\\
\square &= \frac{1}{8} \Tr (A^4) - \frac{\abs{E}}{4} - \frac{1}{4} \sum\limits_{i \neq j} \sum\limits_{k \neq i,j} A_{ik} A_{kj} \label{eqn:squares} \mbox{,} \\
\pentagon &= \frac{1}{10} \Tr (A^5) - \frac{1}{2}\Tr (A^3) -\frac{1}{2} \sum\limits_i \left ( \sum\limits_j A_{ij}-2 \right ) A^3_{ii} \label{eqn:pentagons} \mbox{.}
\end{align}
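These trace formulae can be checked on small graphs whose cycle counts are known by inspection. A self-contained sketch (pure-Python matrix powers, function names ours):

```python
def cycle_counts(A):
    """Triangles, squares and pentagons from the adjacency matrix via the
    trace formulae Eqs. (triangles)-(pentagons)."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    A2 = matmul(A, A); A3 = matmul(A2, A)
    A4 = matmul(A3, A); A5 = matmul(A4, A)
    tr = lambda M: sum(M[i][i] for i in range(n))
    E = sum(A[i][j] for i in range(n) for j in range(i + 1, n))
    # Ordered open triples: sum over i != j of (A^2)_{ij}; k = i, j terms vanish.
    triples = sum(A2[i][j] for i in range(n) for j in range(n) if i != j)
    deg = [sum(row) for row in A]
    tri = tr(A3) // 6
    sq = (tr(A4) - 2*E - 2*triples) // 8
    pent = (tr(A5) - 5*tr(A3)
            - 5*sum((deg[i] - 2) * A3[i][i] for i in range(n))) // 10
    return tri, sq, pent

# Checks: the 5-cycle C5 has exactly one pentagon; K4 has 4 triangles, 3 squares.
C5 = [[1 if abs(i - j) in (1, 4) else 0 for j in range(5)] for i in range(5)]
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
assert cycle_counts(C5) == (0, 0, 1)
assert cycle_counts(K4) == (4, 3, 0)
```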
Using these expressions for the total number of cycles in a graph with $\abs{E}$ edges, we can compute the average number of a given cycle incident upon a given edge by noting that a cycle of length $n$ contains any given edge with probability $n/\abs{E}$.
For each polygon we obtain,
\begin{equation}\label{eqn:mean_cycle}
\expval{\triangle_{ij}} = \frac{3 \triangle}{\abs{E}} \mbox{,}
\expval{\square_{ij}} = \frac{4 \square}{\abs{E}} \mbox{,}
\expval{\pentagon_{ij}} = \frac{5 \pentagon}{\abs{E}} \mbox{,}
\end{equation}
which we can substitute into Eqs. \eqref{eqn:exact_fr_expansion} and \eqref{eqn:exact_or_expansion} to obtain values of $\expval{\kappa^f_{ij}}$ and $\expval{\kappa^o_{ij}}$.
We can then compute $\expval{\kappa^f}$ and $\expval{\kappa^o}$ by recalling that $\expval{\kappa^f} = 2|E| \expval{\kappa^f_{ij}}$ with the parallel result for OR curvature.
It should be noted that Eqs. \eqref{eqn:triangles}, \eqref{eqn:squares} and \eqref{eqn:pentagons} all refer to the total number of cycles i.e. $\boxtimes_n$, not $\square_n$ and so when we come to use them in our simulations we will have to adjust using Eq. \eqref{eqn:chordless_factor}.
Before stating the expressions for curvature obtained by substituting in these mean field expressions for cycles into those for the curvature, it is instructive to examine the extremal properties of the Hamiltonians we obtain when using the prescription in Eq. \eqref{eqn:h_comb}.
If we again take $p$ to indicate the probability of an edge between two randomly chosen vertices, we note that for an average edge, after traversing the $k-1$ edges connected to one end of the edge in order to close cycles at the other end, we have,
\begin{align}
\expval{\triangle_{ij}} &= 2(k-1) p \text{,}\\
\expval{\square_{ij}} &= 2(k-1) p^2 \text{,} \label{eqn:exp_squares}\\
\expval{\pentagon_{ij}} &= 2(k-1) p^3 \text{.}
\end{align}
In the ground states of our models for a graph of size $N$ we find typically that $p \ll 1$, and indeed formally $p \rightarrow 0$ as $N \rightarrow \infty$, and so we can write our curvatures in the sparse graph limit of very small $p$ as,
\begin{align*}
\expval{\kappa^f} &=N k( 4-2k) + \mathcal{O}(p) \text{,} \\
\expval{\kappa^o} &=N( 4-2k) + \mathcal{O}(p) \text{.}
\end{align*}
We have made the assumption that the term in $2\Theta[(1-\xi)\rho_{ij}]$ does not contribute in the sparse graph approximation, as in the limit $p \rightarrow 0$ we have $\expval{\rho_{ij}}=0$.
Similarly for the kinetic term in the Hamiltonian we will pick up a contribution of $Nk$, assuming that all connected spins are aligned (which is precisely the form of ground state encountered in \cite{trugenberger2015quantum,tee2020dynamics}).
With this assumption we obtain the following expressions for the Hamiltonian as a function of $k$ for both curvature measures as,
\begin{align}
H^f(k) &= \frac{g^2}{4} Nk(2k-4) -\frac{g}{2}Nk + \mathcal{O}(p) \text{,} \label{eqn:fr_hamiltonian}\\
H^o(k) &= \frac{g^2}{4} N(2k-4) -\frac{g}{2}Nk + \mathcal{O}(p) \label{eqn:or_hamiltonian} \text{, }
\end{align}
noting that $\expval{\kappa^f}=2|E| \expval{\kappa^f_{ij}} = Nk\expval{\kappa^f_{ij}}$.
We immediately see that there is no well defined minimum of the energy for the OR curvature case by virtue of the missing factor of $k$.
It is entirely possible, however, that higher-order contributions in $p$ will bring in terms of at least $k^3$ that would allow a well defined minimum of \Cref{eqn:or_hamiltonian}.
Unfortunately, in our simulations all attempts to use OR curvature to find stable ground states for the CIM models have proven unsuccessful, and we will henceforth concentrate on the models utilizing FR curvature.
It should be stressed that the approximation used to argue the existence of a minimum for the Hamiltonian constructed from FR curvature, $H^f (k)$, is only valid for small link probability, which, although a safe approximation as we shall see in \Cref{sec:results}, does not reproduce the zero curvature of square lattices.
Putting aside temporarily this complication, we can identify the minimum of \Cref{eqn:fr_hamiltonian} as a function of $g$ using elementary calculus.
The minimizing value of $k$, which we denote $k_m$, is obtained as $k_m=\frac{1}{2g} + 1$.
In a $d$ dimensional regular lattice $k=2d$, and so the dimension of such a ground state graph corresponding to the minimum of \Cref{eqn:fr_hamiltonian}, which we denote by $d_k$, has the value,
\begin{equation}\label{eqn:naive_dimension}
d_k=\frac{1}{4g} + \frac{1}{2} \text{.}
\end{equation}
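The minimization can be confirmed numerically. A sketch (names ours) that grid-searches $H^f(k)$ from \Cref{eqn:fr_hamiltonian}, recovering $k_m$ and $d_k$, and exhibits the lack of an interior minimum for $H^o(k)$; $g=0.1$ is an illustrative assumption:

```python
N, g = 1000, 0.1

def H_f(k):
    """Sparse-limit FR Hamiltonian, Eq. (fr_hamiltonian), dropping O(p)."""
    return (g**2 / 4) * N * k * (2*k - 4) - (g / 2) * N * k

def H_o(k):
    """Sparse-limit OR Hamiltonian, Eq. (or_hamiltonian): linear in k."""
    return (g**2 / 4) * N * (2*k - 4) - (g / 2) * N * k

ks = [i / 100 for i in range(1, 2001)]    # k grid: 0.01 .. 20.00
k_m = min(ks, key=H_f)
assert abs(k_m - (1 / (2*g) + 1)) < 1e-9       # k_m = 6 for g = 0.1
assert abs(k_m / 2 - (1 / (4*g) + 0.5)) < 1e-9  # d_k = k_m / 2 = 3
assert min(ks, key=H_o) == ks[-1]              # H_o: no interior minimum
```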
If we extend the approximation to $\mathcal{O}(p^2)$ to include squares, this expression becomes more complex.
Specifically our expression for the curvature becomes $\expval{\kappa_{ij}}=4-2k +6(k-1)p + 4(k-1)p^2$.
If we define $\epsilon=3p+2p^2$, we can minimize the resultant expression for the Hamiltonian and obtain,
\begin{equation}
d_k = \frac{1}{1-\epsilon} \left \{ \frac{1}{4g} + \frac{1}{2} - \frac{\epsilon}{4} \right \} \text{.}
\end{equation}
As $\epsilon \rightarrow 0$ we recover the original expression \Cref{eqn:naive_dimension}, and it is instructive to consider how small $\epsilon$ typically is in the graphs we will encounter as ground states in \Cref{sec:results}.
For a graph of size $N=100$, which as we shall see becomes a highly regular lattice at around $k=6$, $\epsilon=0.185$.
Our dimension $d_k$ will be approximately $27\%$ bigger when we include terms to order $\mathcal{O}(p^2)$.
If we repeat that calculation for $N=1000$, this drops to a $0.45\%$ increase in the dimension.
Given that the assumption for our model of spacetime is that the ground state graphs are extremely large, we will accept the approximation in \Cref{eqn:naive_dimension}, and we will refer to $d_k$ as the `naive dimension' of the graph.
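A sketch of the two dimension estimates (function names ours), checking that the correction vanishes as $p \rightarrow 0$ and remains small at the link probabilities typical of large ground states:

```python
def naive_dimension(g):
    """Eq. (naive_dimension): d_k = 1/(4g) + 1/2."""
    return 1 / (4*g) + 0.5

def corrected_dimension(g, p):
    """O(p^2)-corrected dimension with eps = 3p + 2p^2."""
    eps = 3*p + 2*p*p
    return (1 / (1 - eps)) * (1 / (4*g) + 0.5 - eps / 4)

# p -> 0 recovers the naive dimension; small p only mildly inflates it.
assert abs(corrected_dimension(0.1, 0.0) - naive_dimension(0.1)) < 1e-12
assert 1.0 < corrected_dimension(0.1, 0.006) / naive_dimension(0.1) < 1.05
```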
\subsection{The discrete canonical Hamiltonian and comparison with the Ising models}
\label{ssec:discrete_hamiltonian}
To conclude the theoretical investigation of the CIM model let us capitalize upon the relations due to Harary in \Cref{eqn:triangles}, \Cref{eqn:squares} and \Cref{eqn:pentagons} to obtain an expression for the Hamiltonian in terms of the graph adjacency matrix.
First of all we note that by substituting in the expressions for the expected value of the short cycles stated in \Cref{eqn:mean_cycle} into our ISC corrected formula for the FR curvature of an edge \Cref{eqn:non_isc_fr}, we obtain,
\begin{equation}
\begin{split}
\expval{\kappa^f_{ij}}&=4-2k + 2\Theta[(1-\xi)\rho_{ij}] \\
& + \frac{1}{|E|}\left \{ 3(1+2\xi)\triangle + 8\xi \square + 5(2\xi-1)\pentagon \right \}\text{.}
\end{split}
\end{equation}
To obtain the expected FR scalar we multiply through again by $2|E|$ yielding,
\begin{equation}
\begin{split}
\expval{\kappa^f}&=2|E|(4 + 2\Theta[(1-\xi)\rho_{ij}] -2k) \\
& +6(1+2\xi)\triangle + 16\xi \square + 10(2\xi-1) \pentagon \text{,}
\end{split}
\end{equation}
which is the form we shall use in our simulations.
For completeness we can insert this directly into the expression for the $H_{CIM}$ Hamiltonian in \Cref{eqn:h_comb}, to obtain,
\begin{equation}\label{eqn:h_cim_ulation}
\begin{split}
H_{CIM}=&-\frac{g^2}{4} \left \{ 2|E|\Big\{4 + 2\Theta[(1-\xi)\rho_{ij}] -2k\Big\}+6(1+2\xi)\triangle + 16\xi \square + 10(2\xi-1) \pentagon \right \} \\ &-\frac{g}{2}\sum\limits_i \sum\limits_j \hat{s}_i A_{ij} \hat{s}_j \text{.}
\end{split}
\end{equation}
In the simulations we will directly compute this energy function for the graph, but it is instructive to go one step further and rewrite this expression in terms of the adjacency matrix of the graph.
This is accomplished by using Equations, \eqref{eqn:triangles}, \eqref{eqn:squares} and \eqref{eqn:pentagons}.
However we must also take into account the fact that not all pentagons and squares are chordless, and so we need to factor down these expressions using \Cref{eqn:chordless_factor}.
For a link probability of $p$, these factors are $p_4=(1-p)^2$ for a square, and $p_5=(1-p)^3$ for a pentagon.
After some algebra, substituting these factored expressions for graph cycles into \Cref{eqn:h_cim_ulation}, we obtain the following form of our Hamiltonian,
\begin{equation}\label{eqn:h_cim_adjacency_full}
\begin{split}
H_{CIM}&=\frac{g^2}{2} \Bigg \{ \left[ (5p_5-1)\xi - \frac{1}{2}(1+5p_5)\right ]\Tr (A^3) + 2p_4\xi \sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj}\\
&+ \frac{5}{2}p_5(2\xi-1) \sum\limits_i \left ( \sum\limits_j A_{ij}-2 \right ) A^3_{ii} +2|E|\Big \{k+ p_4 \xi - 2 - \Theta[(1-\xi)\rho_{ij}]\Big\}\\
&-p_4 \xi \Tr (A^4) - \frac{1}{2}p_5(2\xi-1) \Tr (A^5) \Bigg \} -\frac{g}{2}\sum\limits_i \sum\limits_j \hat{s}_i A_{ij} \hat{s}_j \text{.}
\end{split}
\end{equation}
This expression is complex, but necessary for the simulations that we describe in the next section.
Consider the case with $p_4 = p_5 \approx 1$ and $\xi=1$, which corresponds to an ideal graph with the ISC fully satisfied and most cycles chordless.
As $2\Theta[(1-\xi)\rho_{ij}]$ is zero for $\xi=1$, \Cref{eqn:h_cim_adjacency_full} simplifies to,
\begin{equation}\label{eqn:h_cim_adjacency}
\begin{split}
H_{CIM}&=\frac{g^2}{2} \Bigg \{ \Tr (A^3) + 2 \sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj} + \frac{5}{2} \sum\limits_i \left ( \sum\limits_j A_{ij}-2 \right ) A^3_{ii}\\
& +2|E|(k-1) - \Tr (A^4) - \frac{1}{2} \Tr (A^5) \Bigg \} -\frac{g}{2}\sum\limits_i \sum\limits_j \hat{s}_i A_{ij} \hat{s}_j \text{.}
\end{split}
\end{equation}
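As a consistency check, the full Hamiltonian \Cref{eqn:h_cim_adjacency_full} and the ideal-graph form \Cref{eqn:h_cim_adjacency} can be evaluated on a random graph and compared in the $p_4=p_5=\xi=1$ limit, where the $\Theta$ term vanishes. A sketch, with function names ours and the $\Theta$ term passed in as a parameter:

```python
import random

def graph_invariants(A):
    """Traces and sums entering Eqs. (h_cim_adjacency_full)/(h_cim_adjacency)."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    A2 = matmul(A, A); A3 = matmul(A2, A); A4 = matmul(A3, A); A5 = matmul(A4, A)
    tr = lambda M: sum(M[i][i] for i in range(n))
    E = sum(A[i][j] for i in range(n) for j in range(i + 1, n))
    triples = sum(A2[i][j] for i in range(n) for j in range(n) if i != j)
    deg = [sum(row) for row in A]
    pent_sum = sum((deg[i] - 2) * A3[i][i] for i in range(n))
    return tr(A3), tr(A4), tr(A5), E, triples, pent_sum

def H_cim(A, s, g, p4, p5, xi, theta=0):
    """Eq. (h_cim_adjacency_full); theta stands in for Theta[(1-xi) rho_ij]."""
    T3, T4, T5, E, triples, pent_sum = graph_invariants(A)
    n = len(A); k = 2 * E / n
    kinetic = sum(s[i] * A[i][j] * s[j] for i in range(n) for j in range(n))
    return (g**2 / 2) * (((5*p5 - 1)*xi - 0.5*(1 + 5*p5)) * T3
                         + 2*p4*xi*triples + 2.5*p5*(2*xi - 1)*pent_sum
                         + 2*E*(k + p4*xi - 2 - theta)
                         - p4*xi*T4 - 0.5*p5*(2*xi - 1)*T5) - (g/2)*kinetic

def H_cim_ideal(A, s, g):
    """Eq. (h_cim_adjacency): the p4 = p5 = xi = 1 limit."""
    T3, T4, T5, E, triples, pent_sum = graph_invariants(A)
    n = len(A); k = 2 * E / n
    kinetic = sum(s[i] * A[i][j] * s[j] for i in range(n) for j in range(n))
    return (g**2 / 2) * (T3 + 2*triples + 2.5*pent_sum + 2*E*(k - 1)
                         - T4 - 0.5*T5) - (g/2)*kinetic

random.seed(0)
n = 8
A = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.randint(0, 1)
s = [random.choice((-1, 1)) for _ in range(n)]
assert abs(H_cim(A, s, 0.5, 1, 1, 1) - H_cim_ideal(A, s, 0.5)) < 1e-9
```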
By comparing this expression for the Hamiltonian of CIM expanded in terms of the adjacency matrix and the Hamiltonian for QMD in \Cref{eqn:qmd_hamiltonian}, we note a similarity in form.
In particular, the leading two terms in the $\frac{g^2}{2}\{\cdots\}$ bracket are both present in the QMD Hamiltonian, as is the last term, which expresses the kinetic contribution.
However, in the $H_{CIM}$ Hamiltonian the $\sum A_{ik}A_{kj}$ term carries a factor of $2$.
In fact the $\sum A_{ik}A_{kj}$ term becomes less significant than the $2|E|(k-1)$ term, because $\xi$ is typically very small, but fortunately minimizing the $2|E|(k-1)$ term is equivalent to minimizing $\sum A_{ik}A_{kj}$.
To see this, let us consider the mean field approximation of $2|E|(k-1)$.
For a given node $i$ of average degree $k$, $\sum\limits_{mn} A_{mi}A_{in}$ counts the number of edge pairs, or open triples, with node $i$ as the central node.
In the mean field approximation there are $\frac{1}{2}k(k-1)$ such triples for each vertex.
As the summation is ordered over $i$ and $j$, each open triple is counted twice, and so,
\begin{equation*}
\sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj} = Nk(k-1)=2|E|(k-1) \text{,}
\end{equation*}
as $Nk=2|E|$.
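For a $k$-regular graph this identity is exact rather than merely mean-field; a quick check on the $5$-cycle ($N=5$, $k=2$):

```python
# Open-triple count on C5 equals N k (k-1) = 5 * 2 * 1 = 2|E|(k-1) = 10.
C5 = [[1 if abs(i - j) in (1, 4) else 0 for j in range(5)] for i in range(5)]
A2 = [[sum(C5[i][k] * C5[k][j] for k in range(5)) for j in range(5)]
      for i in range(5)]
triples = sum(A2[i][j] for i in range(5) for j in range(5) if i != j)
E = sum(C5[i][j] for i in range(5) for j in range(i + 1, 5))
assert triples == 2 * E * (2 - 1) == 10
```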
Going back to \Cref{eqn:h_cim_adjacency_full}, we can make this substitution for the $2|E|(k-1)$ term and write out explicitly the relationship between $H_{CIM}$ and $H_{QMD}$ as,
\begin{equation}\label{eqn:qmd_cim}
\begin{split}
H_{CIM}&=\frac{g^2}{2} \Bigg \{ \Tr (A^3) + \sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj} + \dots \Bigg \} \\ &-\frac{g}{2}\sum\limits_i \sum\limits_j \hat{s}_i A_{ij} \hat{s}_j\\
&+\frac{g^2}{2} \Bigg \{ [(5p_5-1)\xi -\frac{1}{2}(3+5p_5)] \Tr (A^3) \\
&+ 2p_4\xi \sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj} \Bigg\} \\
&= H_{QMD} \\
&+ f(p_5,\xi) \Tr (A^3) + 2p_4\xi \sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj} + \dots\text{,}\\
& \text{where~} f(p_5,\xi)=[(5p_5-1)\xi -\frac{1}{2}(3+5p_5)] \text{.}
\end{split}
\end{equation}
The $``\dots"$ in \Cref{eqn:qmd_cim} refers to the omitted terms in powers of $A_{ij}$ higher than $3$.
The similarity to the QMD Hamiltonian is now clear.
Indeed the additional terms all depend upon $p_4$, $p_5$ and $\xi$, which vary as the structure of the graph changes, specifically with the total number of edges.
In the limit of zero connectivity $p_4=p_5=\xi=1.0$, at which point $H_{QMD}=H_{CIM}$, but this is of course a trivial scenario.
For low values of $d_k$, corresponding to $p_4$ and $p_5$ approaching $1.0$, the behavior is complex.
Providing $\xi < 0.5$, the complete $\Tr (A^3)$ term, including the additional corrections, will change sign in $H_{CIM}$ and start favoring triangles as $d_k$ reduces.
This will be somewhat offset by an increased coefficient in the $\sum\limits_{i\neq j}\sum\limits_{k \neq i,j} A_{ik}A_{kj}$ term that disfavors open triples in the graph.
Indeed we note that $f(p_5,\xi)$ is bounded in the range $[-4,0]$, and that when $f(p_5,\xi) < -1$ the $\Tr (A^3)$ contribution will have an overall negative coefficient.
Accordingly we expect our CIM model to share many features with QMD, but not to yield exactly equivalent ground state graphs.
In \Cref{sec:results} we see precisely this behavior with ground state graphs for the two models that are quite similar.
The $H_{CIM}$ ground state, however, exhibits a reemergence of triangles and clustering for low values of $d_k$.
This would be consistent with the sign of the $\Tr (A^3)$ term reversing at low dimensions.
We believe this analysis justifies our interpretation of QMD as an approximation to the full CIM model.
The additional powers of the adjacency matrix that come from considerations of higher order cycles, represent corrections to the estimate of the discrete curvature, supporting the intuition that all the IG Hamiltonians are an approximation to a more physical model of emergent discrete geometry.
We will now proceed to explore the extent of this similarity by computing numerically the ground states of the CIM and QMD model in order to compare the structure of the obtained ground state graphs.
\section{Simulation results and discussion}
\label{sec:results}
\subsection{Computing the ground state}
Using the results for the Hamiltonian derived in \Cref{ssec:discrete_hamiltonian}, we can compute the ground state graphs of fixed size, varying the model's free parameter, $g$, the coupling constant.
We use the same technique of Glauber dynamics previously described in \cite{trugenberger2015quantum, tee2020dynamics}, which itself is adapted from techniques commonly used in certain classes of neural networks \cite{muller2012neural}.
The additional terms in powers of the adjacency matrix present in the Hamiltonian $H_{CIM}$, require a slight modification to accommodate them.
We adapt the method used for the QMD simulations described in \cite{tee2020dynamics} to account for the $\Tr (A^3)$ term present in the QMD model and also in the extensions to DGM described in \cite{trugenberger2016random}.
To summarize, we start with a fully randomized graph of $N$ vertices, where each vertex spin is set to $\ket{1}$ or $\ket{0}$ with equal probability, and an edge is placed between each pair of vertices with probability $p=0.5$.
Sequentially, a sample of the vertices and edges is chosen.
With even probability we flip the spin states, retaining a change if it lowers the energy by aligning the spin with more of the vertex's neighbors.
If the energy is not reduced we restore the previous spin state.
We then address the sample of edges, again randomly adding or removing edges between the vertices in the sample set.
For each change in the edge set we compute an edge energy function that estimates the effect the change will have on the overall Hamiltonian of the model.
If the random change reduces the value of the energy function the change is accepted.
The whole change set is then accepted or rejected if the total graph Hamiltonian is reduced.
By iterating this approach a configuration of the graph that minimizes the Hamiltonian is achieved, and the simulation is averaged over multiple runs to limit the influence of any individual run settling into a non-global minimum and distorting the results.
In detail, the process of minimization using Glauber dynamics starts with a random sample of the vertices, and we alter the value of the spin state with even probability to the opposite of the initial value.
At each time step, for each vertex we compute,
\begin{equation}\label{eqn:spin_itr}
h_i=\sum\limits_{j \in V } A_{ij} s_j \mbox{,}
\end{equation}
and if $h_i \geq 0$ set $s_i=+1$, otherwise $s_i=-1$.
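The spin sweep of \Cref{eqn:spin_itr} can be sketched as follows (a deterministic full sweep rather than the random sampling used in the actual simulations; names ours):

```python
def spin_step(A, s):
    """One sweep of the spin update, Eq. (spin_itr): align each spin with
    its local field h_i = sum_j A_ij s_j, with ties broken towards +1."""
    n = len(A)
    for i in range(n):
        h = sum(A[i][j] * s[j] for j in range(n))
        s[i] = 1 if h >= 0 else -1
    return s

# On a connected graph repeated sweeps align all spins.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # a triangle
s = [1, -1, 1]
for _ in range(3):
    s = spin_step(A, s)
assert s == [1, 1, 1]
```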
We now select a subset of vertex pairs $i$ and $j$, which may or may not have an edge between them.
For each of the potential edges $e_{ij}$ in our selected sample, we now randomly alter the edge state to the {\sl opposite} of what it was, recording the change in a variable $\delta$, which we set to $+1$ if we add an edge and $-1$ for removal.
We now seek to calculate the effect of the change on the total Hamiltonian by defining a measure of the edge energy difference $h_{ij}$ for a change of edge state.
To compute the contribution of this edge to the Hamiltonian, we make the approximation that for terms involving a power of the adjacency matrix such as $A^n_{ij}$ the contribution can be obtained by comparing the change in the trace of the $n^{th}$ power of the matrix at the vertices $i$ and $j$.
The addition or removal of an edge $e_{ij}$, will potentially change both $A^n_{ii}$ and $A^n_{jj}$, and so we average these changes to determine the contribution to $h_{ij}$.
Wherever terms such as $\Tr(A^n)$ appear in the Hamiltonian we substitute $\Delta(A^n)= \frac{1}{2}\{\Delta(A^n_{ii}) + \Delta(A^n_{jj})\}$ in the expression for the edge energy function $h_{ij}$.
For the $\sum A_{ik}A_{kj}$ term, an edge will contribute $\delta( k_i + k_j -1)$ to the edge energy difference, with $\delta$ recording the direction of the change.
We turn now to the $2|E|\Big \{k+ p_4 \xi - 2 - \Theta[(1-\xi)\rho_{ij}]\Big\}$ term, which, as we discussed in the prior section, is in the mean field approximation equivalent to an additional $\sum A_{ik}A_{kj}$ term.
For the change in edge state, $k$ changes by $\delta$, and the contribution to $h_{ij}$ arises from this variation in $k$, giving $2\delta|E|$.
As we are considering a single edge, we factor by the change to $|E|$ from both vertices $i$ and $j$, which is $\delta(k_i + k_j)$, recalling that $\sum_i k_i=2|E|$.
Bringing all of these together and noting that the contribution must be doubled as each edge is counted twice in $H_{CIM}$, we arrive at our final edge energy function below,
\begin{equation}
\begin{split}\label{eqn:energy_itr}
h_{ij}&= g^2 \Bigg \{ \left[ (5p_5-1)\xi - \frac{1}{2}(1+5p_5)\right ]\Delta(A^3) -2p_4\xi\Delta(A^4) \\
& + \delta(k_i+k_j) + 2\delta p_4\xi(k_i+k_j-2) - p_5(2\xi-1)\Delta(A^5)\\
& + 5p_5(2\xi-1) \sum\limits_{ij} (A_{ij}-2 ) \Delta(A^3) \Bigg \} \\
&-\delta g s_i s_j \text{.}
\end{split}
\end{equation}
If $h_{ij} \leq 0$ we retain the change, and if not it is discarded.
At the end of the evaluation of every link in the sample we compute the value of the Hamiltonian \Cref{eqn:h_cim_adjacency}.
If the Hamiltonian has reduced we retain the change set, otherwise we discard all of them.
Eventually we will arrive at a stable minimum, which we define as successive iterations not discovering a configuration of overall lower energy.
As we approach the minimum, we reduce the sample size of nodes and edges and lengthen the number of successive iterations without change required to indicate that the model has minimized its energy.
This allows a finer-grained calculation of the ground state graph configuration.
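The accept/reject procedure above can be sketched schematically as follows. This is an illustrative toy, not our simulation code: the function `H` is a stand-in Hamiltonian (edge count minus an Ising alignment term), and the per-edge test recomputes the global energy rather than using the closed-form edge energy $h_{ij}$.

```python
# Schematic sketch of the batched edge-flip minimisation (illustrative only:
# `H` is a toy stand-in Hamiltonian, and the per-edge test recomputes the
# global energy rather than using the closed-form edge energy h_ij).
import numpy as np

rng = np.random.default_rng(0)

def H(A, s, g=0.1):
    """Toy Hamiltonian: edge count minus an Ising alignment term."""
    return A.sum() / 2 - g * (s @ A @ s) / 2

def minimise(A, s, sweeps=50, sample=4):
    A = A.copy()
    best = H(A, s)
    n = len(s)
    for _ in range(sweeps):
        trial = A.copy()
        for _ in range(sample):                 # sample of vertex pairs
            i, j = rng.choice(n, size=2, replace=False)
            delta = -1 if trial[i, j] else +1   # flip to the opposite state
            before = H(trial, s)
            trial[i, j] += delta
            trial[j, i] += delta
            if H(trial, s) > before:            # keep only energy reductions
                trial[i, j] -= delta
                trial[j, i] -= delta
        if H(trial, s) <= best:                 # global test on the batch
            A, best = trial, H(trial, s)
    return A, best

A0 = (np.ones((6, 6)) - np.eye(6)).astype(int)  # start from a complete graph
s0 = np.array([1, -1, 1, -1, 1, -1])
A_min, E_min = minimise(A0, s0)
```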
For all ground states we repeat the calculation $10$ times and average the harvested metrics across each of the runs.
The choice of $10$ runs is dictated by computational time, with each computation of the ground state at $N=100$ nodes requiring $20$ to $30$ minutes to complete on mid-size server hardware.
Our results could be significantly improved by the use of more powerful hardware.
In addition to the computation of the ground states we also seek to investigate the emergence of a preferred low valued dimension, a feature of the QMD and DGM models.
We repeat the analysis conducted in earlier work, using the method described above for computing the ground state graphs of the CIM and QMD Hamiltonian.
We focus on the Hausdorff measure of extrinsic dimension, the spectral measure of intrinsic dimension \cite{tee2020dynamics,ambjorn1997quantum}, the `lattice' dimension described below, and finally the `naive dimension' $d_k$ described by \Cref{eqn:naive_dimension}.
To compute these dimensions we take a slightly different approach to that taken in \cite{tee2020dynamics,trugenberger2015quantum}, instead focusing on a smaller number of relatively larger graphs of $N=350$ nodes obtained by computing the ground state of our Hamiltonians.
Randomization of the results is obtained by repeated computations of the dimensions seeded from different starting nodes in the graphs.
The spectral dimension $d_S$ is problematic, as the computation relies upon the return probability $p_r$ of a random walk to a randomly chosen starting node after $t$ steps, which is known to vary as,
\begin{equation}\label{eqn:spectral_d}
p_r=t^{\frac{-d_S}{2}}\text{.}
\end{equation}
This result is only truly valid in the limit of an infinite graph, $N \rightarrow \infty$ \cite{ambjorn1997quantum}.
To overcome this limitation we conduct a series of random walks, of increasing time steps $t$, and then attempt a least squares fit to \Cref{eqn:spectral_d}, extracting from the best fit the value of $d_S$.
We plot the results of this computation in \Cref{fig:spectral} for the CIM model.
We can see that the spectral dimension establishes a plateau until the length of the walk exceeds a multiple of the size of the graph, which depends upon the value of $d_k$.
This demonstrates the unreliability of $d_S$ for large values of $t$: as $t \rightarrow \infty$ by definition $p_r \rightarrow 1$, which is evidenced by a sudden drop and then decay of the dimension to zero as $t$ increases.
The `cliff' occurs at lower values of $t$ for higher values of $d_k$, which is due to a more clustered ground state graph.
The value of $d_S$ of the largest non-zero plateau is used as the spectral dimension of the graph.
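To make the fitting procedure concrete, the following sketch (ours, not the simulation code) extracts $d_S$ from a least squares fit to the return-probability scaling; for simplicity it computes $p_r(t)$ exactly from powers of the walk transition matrix in place of Monte Carlo walks.

```python
# Sketch of the spectral-dimension estimate: return probabilities p_r(t)
# (computed here exactly from powers of the walk transition matrix, in
# place of Monte Carlo walks) and a least squares fit to p_r = t^(-d_S/2).
import numpy as np

def return_probability(A, t_max):
    """Mean over start nodes of the probability of being back at the
    start after t steps, for t = 1 .. t_max."""
    P = A / A.sum(axis=1)[:, None]       # walk transition matrix
    Pt = np.eye(len(A))
    p = []
    for _ in range(t_max):
        Pt = Pt @ P
        p.append(np.diag(Pt).mean())
    return np.array(p)

def spectral_dimension(A, t_min=2, t_max=20):
    p = return_probability(A, t_max)
    t = np.arange(1, t_max + 1)
    mask = (t >= t_min) & (p > 0)        # drop p_r = 0 odd steps of
                                         # bipartite graphs
    slope, _ = np.polyfit(np.log(t[mask]), np.log(p[mask]), 1)
    return -2.0 * slope                  # log p = -(d_S/2) log t

# Example: a large ring, whose spectral dimension is close to 1.
n = 50
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
d_S = spectral_dimension(A)
```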
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.43]{AllSpectralFR_g_0_038_0_167_N_350_steps_5.png}
\caption{We compute and plot the spectral dimension $d_S$ as a function of time $t$ for a graph of $N=350$ nodes. These are computed for a range of coupling constants $g$, represented by the equivalent naive dimension $d_k$ computed using \Cref{eqn:naive_dimension}.}
\label{fig:spectral}
\end{figure}
Hausdorff dimension $d_H$ is simpler, and is extracted by examining how the graph volume scales with distance.
We compute this by selecting a series of random nodes and then computing the scaling of the number of nodes at increasing shortest path distance to it.
This equates the volume $V$ with the cardinality of the subgraph at $r$ hops from the randomly selected node, and exploits the relation \cite{ambjorn1997quantum}
\begin{equation}\label{eqn:hausdorff}
r \propto V^{1/d_H} \text{,}
\end{equation}
to extract the value of the Hausdorff dimension.
The distance $r$ can easily be computed using the powers of the adjacency matrix to create a shortest distance matrix between all pairs of nodes in the graph.
We progressively raise the adjacency matrix to powers of increasing $n$.
The smallest value of $n$ that has a non-zero value at $A^n_{ij}$ is the shortest distance between vertices $i$ and $j$.
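The volume-scaling fit can be sketched as follows (an illustrative snippet, not the simulation code); for the distance computation we use breadth-first search, which for unweighted graphs is equivalent to the adjacency-power construction above.

```python
# Sketch of the Hausdorff-dimension estimate.  BFS shells give the volume
# V(r) of the ball of radius r around a node (equivalent, for unweighted
# graphs, to the adjacency-power distance matrix), and a least squares fit
# to r ~ V^(1/d_H) extracts d_H as the slope of log V against log r.
from collections import deque
import numpy as np

def ball_volumes(adj, start, r_max):
    """V(r): number of nodes within r hops of `start`, via BFS."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, r_max + 1)]

def hausdorff_dimension(adj, start, r_max):
    V = np.array(ball_volumes(adj, start, r_max), dtype=float)
    r = np.arange(1, r_max + 1)
    slope, _ = np.polyfit(np.log(r), np.log(V), 1)   # log V = d_H log r + c
    return float(slope)

# Example: a 20 x 20 periodic lattice; the fit tends towards d_H = 2
# (small-r corrections bias it somewhat below 2 at this size).
L = 20
adj = {(x, y): [((x + 1) % L, y), ((x - 1) % L, y),
                (x, (y + 1) % L), (x, (y - 1) % L)]
       for x in range(L) for y in range(L)}
d2 = hausdorff_dimension(adj, (0, 0), 8)
```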
Finally, we define the lattice dimension of the graph $d_l$ to be half the average node degree, that is $d_l=\frac{1}{2}\expval{k}$.
This is an exact result for a square lattice graph $\mathbb{Z}^{d_l}$, where it is equal numerically to both the extrinsic and intrinsic dimension of the graph.
We make use of lattice dimension in our numerical results to understand how the various measures of dimension compare to this idealized dimension.
In particular, the closer the naive dimension is to the lattice dimension, the more the graph will resemble a regular square lattice.
We also compute for the ground state graphs the Von Neumann Entropy (VNE), which arises directly from the eigenvalue spectrum of the matrices representing the graph structure.
In information theory it is possible to quantify the information in `bits' required to describe the structure of a graph, and from that construct an entropy measure similar to Shannon entropy \cite{shannon1948mathematical}.
The VNE for a graph was proposed as an analogy to the Von Neumann entropy of a quantum system, defined in terms of its density matrix \cite{von2018mathematical}.
If one interprets the graph as the entanglement of the quantum states of the vertices and edges, it is natural to propose the Laplacian matrix of the graph as a density matrix \cite{passerini2008neumann,anand2009entropy}.
Consequently the VNE is computed by solving the eigenvalue problem for $L_{ij}$, obtaining the set of $N$ eigenvalues $\lambda_i$.
These are then used to define the dimensionless entropy,
\begin{equation}\label{eqn:vonneuman}
S_G=-\sum\limits_{i=1}^{N} \lambda_i \log_2 \lambda_i \mbox{.}
\end{equation}
We compute this entropy for all of our simulations.
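A minimal sketch of this computation follows (illustrative, not the simulation code); it assumes the trace normalisation $\rho = L/\Tr L$ implicit in the density-matrix analogy, so that the eigenvalues sum to one.

```python
# Sketch of the Von Neumann entropy of a graph, assuming the trace
# normalisation rho = L / Tr(L) implicit in the density-matrix analogy.
import numpy as np

def von_neumann_entropy(A):
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    rho = L / np.trace(L)                 # unit-trace "density matrix"
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                # 0 log 0 = 0 convention
    return float(-(lam * np.log2(lam)).sum())

# Example: the complete graph K4 has rho eigenvalues {0, 1/3, 1/3, 1/3},
# so S_G = log2(3).
A = np.ones((4, 4)) - np.eye(4)
S = von_neumann_entropy(A)
```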
Finally for all of the ground state graphs we also compute degree distributions and the two-point degree correlation functions.
The two-point degree correlation is obtained by calculating the degree correlation matrix $e_{k k'}$ \cite{pastor2001dynamical,vazquez2002large,newman2003mixing}, which measures the probability of two randomly chosen nodes, connected by a single edge, having degree $k$ and $k'$.
From the degree correlation matrix we then compute the trace, which is the total probability of any randomly selected link connecting nodes of the same degree.
In a regular square lattice we would expect $\Tr e_{kk'}$ to be close to $1.0$, indicating a highly assortative graph where nodes tend to be connected to nodes of the same degree.
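The correlation matrix and its trace can be sketched as below (an illustrative snippet, not the simulation code):

```python
# Sketch of the degree-correlation matrix e_{kk'} and its trace: the
# probability that a randomly chosen edge joins two nodes of equal degree.
import numpy as np

def degree_correlation(A):
    k = A.sum(axis=1).astype(int)
    e = np.zeros((k.max() + 1, k.max() + 1))
    for i, j in np.argwhere(np.triu(A, 1) > 0):   # each undirected edge once
        e[k[i], k[j]] += 1
        e[k[j], k[i]] += 1                        # symmetrise
    return e / e.sum()

def same_degree_fraction(A):
    return float(np.trace(degree_correlation(A)))

# Example: every edge of a 2-regular ring joins equal-degree nodes,
# so the trace is exactly 1.
ring = np.zeros((6, 6), dtype=int)
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1
print(same_degree_fraction(ring))   # -> 1.0
```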
\subsection{Ground state metrics}
We present in \Cref{fig:FR-N100CoarseScale} metrics obtained from the ground state graphs of the CIM model defined by \Cref{eqn:h_cim_adjacency}.
We have varied the coupling constant $g$ from $0.04$ to $0.14$, corresponding to a naive dimension in the range of $d_k=2$ to $7$.
We plot the results against naive dimension, but it should be recalled that $d_k$ is inversely proportional to the coupling constant in the graph, and low degree corresponds to larger values of the coupling constant $g$.
In particular it has been suggested \cite{trugenberger2015quantum,trugenberger2017combinatorial,tee2020dynamics} that the coupling constant of Ising models could `run' with the energy or temperature of the system.
It is already visible from \Cref{fig:FR-H100} and \Cref{fig:QMD-H100} that the energy of the system reduces as dimension reduces, or conversely as the coupling constant $g$ increases, corresponding to the system cooling.
It is possible to interpret the simulations that we have run as representing the state of the spacetime graph at various points in the evolution of the universe from a hotter initial state to a cooler one.
We will address this specific point below.
On each of the graphs we have identified the point where the average node degree corresponds to a lattice dimension of $d_l=3$ and $4$, or alternatively $\expval{k}=6$ and $8$.
We determine these points by identifying the value of $d_k$ where the average degree of the ground state graphs is closest to these values, and plot a vertical line on the figures.
For the results presented in \Cref{fig:FR-N100CoarseScale,fig:QMD-N100CoarseScale,fig:scaled_metrics,fig:ground_state_topo}, the data is presented with the appropriate error bars.
For the majority of the figures the approach is to use the standard deviation of the results averaged across multiple simulation runs to produce the metric.
Assuming a normal distribution of these results, the bars plotted represent an approximate $90\%$ confidence interval either side of the stated metric.
The exceptions to this approach are the calculations for spectral and Hausdorff dimension, which are computed using a regression fit to the equations \Cref{eqn:spectral_d,eqn:hausdorff}.
For these results the error bars depict a measure dependent upon the $R^2$ coefficient of determination.
This value is identically $1.0$ if the fit is perfect, and our error bars are set to $(1-R^2)$, so a larger bar indicates a less reliable result.
In both of the models considered our coefficient of determination $R^2 > 0.95$, indicating reasonably strong regression fits to the equations defining the dimension measures.
We collect our observations on the ground state into the following remarks:
\begin{description}
\item[Locality] In \Cref{fig:CC100} we notice that the clustering coefficient of the obtained graphs is effectively zero for dimensions above $d_l=3$.
Below $d_l=3$ the clustering coefficient increases markedly.
As remarked in earlier work \cite{tee2020dynamics,trugenberger2015quantum}, significant clustering in the graph corresponds to violation of spatial locality.
This is because triangles introduce the small world property into the spatial graph, allowing points in space to be in contact with potentially distant and non-local ones.
Locality is a requirement of most theories of physics, at least classically, and it would appear that $d_l=3$ represents a boundary below which the emerged ground state is chaotic and non-local.
In \Cref{fig:Dgr100} we see that average degree decreases approximately linearly with $d_k$ as we would expect from \Cref{eqn:naive_dimension} which computes the estimated value of $\expval{k}$ as a function of $g$.
The fact that the observed value of $\expval{k}$ has a linear relationship with $d_k$ gives us confidence that the assumptions implicit in the calculation of $d_k$ have reasonable validity.
Just below both $d_l=3$ and $4$ we can detect a sudden drop in $\expval{k}$, which may indicate where our assumptions surrounding the computation of $d_k$ break down.
We speculate that, close to points of high regularity in $\expval{k}$, this could be caused by a topological phase transition involving a discontinuous change in the number of cycles in the graph.
Indeed in the plot of cycle density \Cref{fig:FR-polygons} we see that the number of squares incident upon an edge $\expval{\square_{ij}}$ exhibits noticeable discontinuities.
\item[Evolution] Inspecting \Cref{fig:FR-H100} we notice that the value of $H_{CIM}$ decreases for lower values of $d_k$.
As described earlier, this reduction in energy corresponds to the value of $g$ increasing, and also from the results in \Cref{fig:FR-VNE100} an overall increase in the entropy density of the ground state graphs.
How do we interpret this?
We have investigated the ground states of our model across a range of coupling constant values.
If we accept that the coupling constant changes as the universe evolves and cools from an original very hot state we arrive at an alternative interpretation of our results as reflecting the emergence of spacetime from an origin event.
We imagine a universe where energy reduces and entropy increases from such an origin event, in a similar way to the currently accepted standard model of cosmology in which the Universe has evolved from the `Big Bang'.
As the energy of the universe reduces, from an initial high energy and low entropy state, the dimensionality of it decreases.
We note that below $d_l=3$ this drop off in energy accelerates, but also from \Cref{fig:CC100} clustering dramatically increases, perhaps indicating that $3$ spatial dimensions is in some way the minimum dimension at which our model exhibits a highly local geometry.
This progression from a hot, high-dimensional and low entropy universe to the low dimension, $d_l=3$ state is a process that could be thought of as a `topological big bang'.
We assume that the universe has a fixed, very large, number of nodes $N$, consistent with the links having a length on the order of $l_p$. We can then consider how the spatial extent of the universe graph $r_U$ expands as $d_l$ reduces.
The quantity $r_U$ is the maximum shortest path in the graph, a quantity more usually referred to as the graph's diameter.
If we assume each link in the spacetime graph represents a `quantum' of distance, this diameter is then physically realized and we can consider it to measure the actual spatial diameter of the ground state graphs.
From our definition of extrinsic dimension $r_U \propto N^{1/d_l}$, we see that $r_U$ varies with $d_l$ in a highly non-linear fashion.
Specifically it implies an initially slow increase in $r_U$ as $d_l$ reduces from the initial very large values of $d_l$, that then accelerates rapidly as dimension reduces towards $3$.
If we assume that the spatial dimension of the universe does not reduce below $d_l=3$, this acceleration in size will slow down as the universe graph becomes more homogeneous.
The sudden expansion in the spatial extent of the universe, as $d_l$ reduces, is somewhat analogous to a period of rapid cosmic inflation.
To recap, interpreting the coupling constant as varying with the energy of the universe, we have an evolution model that starts with a hot, high dimensional and low entropy universe.
The universe then subsequently undergoes rapid spatial expansion that slows down as it cools, entropy increases and spatial dimension reduces to $d_l=3$.
\item[Regularity] In \Cref{fig:kStd100} we plot the standard deviation of the average node degree.
It is notable that in general this reduces with $d_k$ towards a minimum value at $d_l=3$, at which point it increases rapidly.
This indicates that the ground state graphs are steadily getting more regular and uniform up until that point.
This is consistent with the results in \Cref{fig:TwoPoint100} for the two-point correlation function $\Tr e_{kk'}$.
The value of $\Tr e_{kk'}$ measures the probability that a randomly chosen link connects nodes of equal degree.
The value of $\Tr e_{kk'}$ peaks around $d_l=3$, indicating that at this dimension the ground state graphs become maximally uniform.
The value, however, is not identically equal to $1.0$, which one would obtain for a perfectly regular lattice.
In \Cref{fig:kSkew100} we plot the third moment of the degree distribution, $\tilde{\mu}_3(k)$, the `skewness'.
In a small interval surrounding $d_l=3$ the distribution switches from a positive to a negative skew.
When the distribution has $\tilde{\mu}_3(k) < 0$ the distribution is described as `left-tailed' and indicates the population of nodes with degree below the mean value is larger than the population above the mean.
It is generally accepted that values of $\tilde{\mu}_3(k)$ in the range $[-0.5,0.5]$ are consistent with an approximately symmetrical distribution, and that the skew is only significant for $\tilde{\mu}_3(k) >1$ or $\tilde{\mu}_3(k) < -1$.
Using this interpretation, the switch from positive to negative skew in our ground states would appear significant.
This could potentially indicate the emergence of a boundary to the graph that would naturally entail a collection of nodes of lower degree than the bulk, particularly if the number of nodes with lower degree is significant and peaked around a second local maximum.
If this is not the case the skew could simply occur because of the presence of low connectivity defects.
For values of $d_k$ corresponding to positive skew, it could be a result of the presence of defects involving regions of higher connectivity.
A more detailed investigation of the structure of the graph is required to definitively identify the cause of this variability in skew, but this is the subject of future work.
\item[Curvature] We plot in \Cref{fig:KappaFr100} the average FR curvature of an edge in the ground state graphs.
In general the total curvature of the ground state graphs are negative, but as they lower in dimension the average edge curvature increases towards zero.
The computation in \Cref{sec:discreteGR} relied upon a choice of `gauge' for the canonical Hamiltonian, which amounted to setting the lapse and shift vectors for our foliation to $N=1$, and $N_i=0$.
As these parameters are not dynamical variables in the canonical GR formulation, this freedom comes with two constraints on the system, usually applied to the system's initial conditions.
These are referred to as the Hamiltonian and momentum constraints.
They are enforced through the definition of two quantities,
\begin{equation}
\begin{split}
\mathcal{C}&={}^3R+K^2-K^{ij}K_{ij}, \\
\mathcal{C}_i&=(K_i^j-K\delta_i^j)_{|j} \text{,}
\end{split}
\end{equation}
which must both be zero.
We recall that we identified $\kappa^f$ with ${}^3R$ to obtain our Hamiltonian.
Following that analogy, the first of these implies that as $\kappa^f < 0$, the terms involving the extrinsic curvature must satisfy $K^2-K^{ij}K_{ij} > 0$, and both will tend to zero as the dimension reduces and the graph lowers in energy.
We argue that in the continuum limit, assuming $\kappa^f$ corresponds to ${}^3R$, the relationship between the extrinsic and intrinsic curvatures implies that the spacetime Ricci curvature of the hypersurface is, to a first approximation, zero.
Indeed for smooth manifolds the Gauss-Codazzi equations \cite{poisson2004relativist} relate the Ricci scalar for hypersurfaces to $\mathcal{C}$ and terms in the second order covariant derivative of the normal vectors to the hypersurfaces $\Sigma_t$ we consider.
So if we stretch our analogy associating the FR curvature to the intrinsic curvature of spacelike hypersurfaces, our results imply that for low dimensions we have an approximately flat spacetime geometry.
This is consistent with current experimental observations of the curvature of the Universe \cite{xia2017revisiting,handley2021curvature}, with the current consensus being that the spatial curvature is either zero or weakly positive.
\end{description}
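The non-linear growth of the diameter $r_U \propto N^{1/d_l}$ discussed under the Evolution remark can be illustrated with a one-line computation; the value of $N$ here is an arbitrary illustrative stand-in, not an estimate of the actual node count of the universe graph.

```python
# Illustration of the non-linear growth of the diameter r_U = N^(1/d_l)
# at fixed N as d_l falls; N is an arbitrary illustrative value.
N = 1e60
for d_l in (30, 10, 5, 4, 3):
    print(d_l, N ** (1.0 / d_l))   # -> roughly 1e2, 1e6, 1e12, 1e15, 1e20
```

The growth is slow at large $d_l$ and accelerates sharply as $d_l$ approaches $3$.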
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_Clust.png}
\caption{Averaged clustering coefficient, $N=100$.}
\label{fig:CC100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_Degree.png}
\caption{Averaged node degree, $N=100$.}
\label{fig:Dgr100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_Entropy.png}
\caption{Averaged Von Neumann entropy density, $N=100$.}
\label{fig:FR-VNE100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_Hamiltonian.png}
\caption{Averaged Graph Hamiltonian, $N=100$.}
\label{fig:FR-H100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_kStd.png}
\caption{Standard deviation of average node degree, $N=100$.}
\label{fig:kStd100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_kSkew.png}
\caption{Skewness of degree distribution, $N=100$.}
\label{fig:kSkew100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_TwoPoint.png}
\caption{Averaged Trace of Degree Correlation $\Tr e_{kk'}$, $N=100$.}
\label{fig:TwoPoint100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{FR_DIM_N100_0_04_0_14_Kappa.png}
\caption{Averaged Forman-Ricci edge curvature, $N=100$.}
\label{fig:KappaFr100}
\end{subfigure}
\caption{Simulation of key ground state metrics for a universe of $N=100$ nodes, using the CIM Hamiltonian. The coupling constant is varied from $g=0.04$ to $g=0.14$, and the ground state is obtained by minimizing the Hamiltonian in Eq. \eqref{eqn:h_cim_adjacency}. We plot against the naive dimension $d_k$ obtained using \Cref{eqn:naive_dimension}.}
\label{fig:FR-N100CoarseScale}
\end{figure*}
We compare these results with those obtained by minimizing the QMD Hamiltonian defined in \Cref{eqn:qmd_hamiltonian}.
In general the results are mostly similar to those obtained using the CIM model.
Specifically we note:
\begin{description}
\item[Clustering, Connectivity and Regularity] In \Cref{fig:QMD-CC100} we plot the clustering coefficient, noting that the scale is considerably compressed compared to the results for the CIM model in \Cref{fig:CC100}.
In general the ground state is highly local, with a general trend towards lower clustering as $d_k$ reduces, but with periodic elevated clustering between regions of none.
The clustering coefficient, however, does not exhibit a sudden increase for values of $d_k<3$, as seen in the $H_{CIM}$ model.
We also note that the vertical lines placed at $d_l =3$, and $4$ correspond to regimes of effectively zero clustering.
Additionally in \Cref{fig:QMD-Dgr100} we note a much more coarse, but highly linear relationship between $\expval{k}$ and $d_k$.
The correspondence between $d_l$ and $d_k$ is almost exact, reflecting the fact that the computation of $d_k$ for the QMD model does not involve the approximations necessary to analytically minimize the CIM Hamiltonian.
The distribution of node degree demonstrates some important differences to CIM.
In \Cref{fig:QMD-kappaStd} the standard deviation of the degree distribution is considerably tighter in QMD than CIM, and again the lattice dimensions of $3$ and $4$ correspond to near zero values of standard deviation.
This is repeated in \Cref{fig:QMD-TwoPoint100} where the values of $\Tr (e_{kk'})$ approach unity at $d_l=3$.
The third moment plotted in \Cref{fig:QMD-kSkew100} is also highly negative, indicating that any deviation involves a small number of nodes with degree less than $\expval{k}$.
This could be interpreted as the presence of a boundary, but we believe that this interpretation is much weaker for QMD than for CIM, given how small the variation in $\expval{k}$ is in QMD.
Indeed this could simply be due to experimental error, if we take into account the average degree variation.
Comparing the degree variation in \Cref{fig:kStd100} and \Cref{fig:QMD-kStd100}, we see that the QMD model is much more uniform, and the presence of a boundary would require a degree distribution highly peaked around two values, $\expval{k}$ and $(\expval{k}-1)$.
Further investigation is required for both models.
\item[Evolution and Curvature] The results for entropy \Cref{fig:QMD-VNE100} and energy \Cref{fig:QMD-H100} are both comparable with the CIM model results.
Perhaps one could argue that the relationship is less smooth than in the CIM model, but this is a function of the stepped nature of the dependence of $\expval{k}$ on $d_k$.
We have already remarked that the CIM Hamiltonian is very similar to QMD, with the additional terms indicating that QMD is an approximation of CIM.
The coarseness of the QMD results, with the presence of step discontinuities in many of the metrics, could be a consequence of this.
This discontinuous variation is even more in evidence in the curvature distribution in \Cref{fig:QMD-KappaFr100}.
The curvature still tends upwards to zero with decreasing $d_k$ and we can make the same observations as were made for CIM, that the Hamiltonian constraints imply a Ricci flat ground state.
Indeed at $d_l=3$ the curvature of the ground state graph is near zero, indicating that the QMD ground states are spatially flat.
\end{description}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_Clust.png}
\caption{Averaged clustering coefficient, $N=100$.}
\label{fig:QMD-CC100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_Degree.png}
\caption{Averaged node degree, $N=100$.}
\label{fig:QMD-Dgr100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_Entropy.png}
\caption{Averaged Von Neumann entropy density, $N=100$.}
\label{fig:QMD-VNE100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_Hamiltonian.png}
\caption{Averaged Graph Hamiltonian, $N=100$.}
\label{fig:QMD-H100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_kStd.png}
\caption{Standard deviation of average node degree, $N=100$.}
\label{fig:QMD-kStd100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_kSkew.png}
\caption{Skewness of degree distribution, $N=100$.}
\label{fig:QMD-kSkew100}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_TwoPoint.png}
\caption{Averaged Trace of Degree Correlation $\Tr e_{kk'}$, $N=100$.}
\label{fig:QMD-TwoPoint100}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{H_QMD_DIM_N100_0_04_0_14_Kappa.png}
\caption{Averaged Forman-Ricci edge curvature, $N=100$.}
\label{fig:QMD-KappaFr100}
\end{subfigure}
\caption{Simulation of key ground state metrics for a universe of $N=100$ nodes, using the QMD Hamiltonian $H_{QMD}$. The coupling constant is varied from $g=0.04$ to $g=0.14$, and the ground state is obtained by minimizing the Hamiltonian in Eq. \eqref{eqn:qmd_hamiltonian}. We plot these against the naive dimension defined in \Cref{eqn:naive_dimension}. The vertical lines represent the value of $d_k$ for which the graph has lattice dimension $d_l=3$ and $d_l=4$.}
\label{fig:QMD-N100CoarseScale}
\end{figure*}
\subsection{Small scale effects}
Computational limitations prevent us from considering very large graphs in our numerical simulations.
We have investigated the scaling dependence of some of our metrics on graph size, and in \Cref{fig:scaled_metrics} we present a subset of the results for selected metrics for both models, varying graph size from $N=50$ to $100$.
The subset of metrics is selected to highlight those that are particularly sensitive to boundary features and scale effects.
In \Cref{fig:FR-ScaledKappa} and \Cref{fig:QMD-ScaledKappa} we plot the standard deviation of the edge FR curvature.
This is a key measure of regularity in the ground state and in both cases it reduces as the graphs increase in size.
The only exception to this is the CIM model for $d_k<3$, where the graph becomes more irregular.
We will comment more in the following section, but for low values of $d_k$ the ground state graph becomes chaotic, unstable and disordered.
This is characterized by a pronounced increase in clustering and a drop in regularity of the structure of the graph.
In \Cref{fig:FR-ScaledkStd} and \Cref{fig:QMD-ScaledkStd} we plot the standard deviation of node degree.
Both results are similar, and show increasingly regular graphs as their size increases.
Once more in the case of CIM, for $d_k < 3$ the graphs become more irregular, consistent with there being a transition to a much more disordered ground state below $d_k=3$.
The drop in the standard deviation of node degree with graph size could occur if there is a boundary feature in the topology of the graph, which would of course consume fewer nodes as a proportion of the total, as the graph scales.
Turning to degree correlation in \Cref{fig:FR-ScaledTwoPoint} and \Cref{fig:QMD-ScaledTwoPoint} we see a similar trend with nodes becoming more assortative and exhibiting higher degree correlation, with the exception of CIM for the graphs with $d_k<3$.
We conclude that the graph's regularity increases with size, which would support the expectation that conclusions we have drawn regarding the structure of our ground state graphs are likely to have validity for much larger graphs.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_FR_DIM_50_100_kappaStd.png}
\caption{Standard deviation of Forman-Ricci Curvature $H_{CIM}$.}
\label{fig:FR-ScaledKappa}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_QMD_FR_DIM_50_100_kappaStd.png}
\caption{Standard deviation of Forman-Ricci Curvature $H_{QMD}$.}
\label{fig:QMD-ScaledKappa}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_FR_DIM_50_100_kStd.png}
\caption{Standard deviation of node degree $H_{CIM}$.}
\label{fig:FR-ScaledkStd}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_QMD_FR_DIM_50_100_kStd.png}
\caption{Standard deviation of node degree $H_{QMD}$.}
\label{fig:QMD-ScaledkStd}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_FR_DIM_50_100_TwoPoint.png}
\caption{Two-point correlation function for $H_{CIM}$.}
\label{fig:FR-ScaledTwoPoint}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.39]{SCALED_QMD_FR_DIM_50_100_TwoPoint.png}
\caption{Two-point correlation function for $H_{QMD}$.}
\label{fig:QMD-ScaledTwoPoint}
\end{subfigure}
\caption{Using the Hamiltonians for the QMD and CIM models we present a selection of metrics, at fixed values of dimension, for graphs of sizes ranging from $N=50$ to $N=100$. The metrics are averaged over a collection of $10$ runs for each size of graph, and we plot the value of the metrics for a selection of naive dimensions $d_k=2$ to $6$. The vertical lines represent the value of $d_k$ for which the graph has lattice dimension $d_l=3$ and $d_l=4$.}
\label{fig:scaled_metrics}
\end{figure*}
\subsection{Topology, regularity and dimension}
In order to probe the structure and topology of the ground states we present in \Cref{fig:ground_state_topo} a selection of metrics for both the CIM and QMD models.
In \Cref{fig:FR-Dims} and \Cref{fig:QMD-Dims} we plot the different measures of dimension against the naive dimension $d_k$.
In both models the lattice dimension $d_l$ varies approximately linearly with $d_k$, although this linear relationship is much stronger with the QMD model.
This is to be expected, as the expression for $d_k$ in the CIM model is an approximation.
In both models there is a divergence of extrinsic and intrinsic measures of dimension as $d_k$ reduces below $4.5$.
This is consistent with earlier results \cite{trugenberger2015quantum,tee2020dynamics}, but the result for CIM is much more pronounced.
We have already remarked that for small values of $d_k$ the CIM model has a large amount of clustering and appears to produce a more chaotic ground state.
This more exaggerated divergence of dimensions in CIM is potential evidence of a `crumpled' phase of the ground state graph at low dimension, whereby instead of a smooth ground state a highly folded and non-local graph is obtained.
The non-locality will serve to increase the spectral dimension $d_S$ due to more rapid decay of return probability.
The folding will increase the Hausdorff dimension of the graph, reflecting the higher-dimensional space needed to embed it.
Our model should ideally produce a $d_k$ dimensional square lattice $\mathbb{Z}^{d_k}$, in which each edge bounds $2(d_k-1)$ squares and odd cycles such as triangles and pentagons are absent.
In \Cref{fig:FR-polygons} and \Cref{fig:QMD-polygons} we plot for a graph of $N=100$ nodes the average edge density of triangles, squares and pentagons.
For the squares we normalize by $2(d_k-1)$ and for visibility in \Cref{fig:QMD-polygons} the pentagon density is on a logarithmic scale.
In both cases squares dominate numerically in the graph, but critically between $d_l=3$ and $4$ the normalized value is approximately $1.0$.
A perfectly square lattice would have $\expval{\square_{ij}}/2(d_k-1) = 1.0$ identically, and we conclude that this regular geometry is only achieved at the same low dimension at which the intrinsic and extrinsic dimensions diverge.
The result is not exactly $1.0$ at either $d_l=3.0$ or $d_l=4.0$, but this could be due to finite-size effects in the graph.
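As a side check, the combinatorial fact quoted above, that each edge of $\mathbb{Z}^d$ bounds $2(d-1)$ squares, is easy to verify directly by counting 4-cycles through a single edge of a periodic lattice. The following is our own minimal sketch, not part of the model; the lattice size $n=5$ and the counting routine are illustrative choices:

```python
def squares_per_edge(d, n=5):
    """Count the 4-cycles through one edge of the periodic lattice Z_n^d.

    For a hypercubic lattice this should equal 2*(d-1): two squares for
    each of the d-1 directions orthogonal to the edge.
    """
    def neighbors(site):
        # the 2d nearest neighbors of a site, with periodic wrap-around
        for j in range(d):
            for s in (1, -1):
                yield site[:j] + ((site[j] + s) % n,) + site[j + 1:]

    u = (0,) * d
    v = (1,) + (0,) * (d - 1)            # a fixed edge along the first axis
    nu = set(neighbors(u)) - {v}
    # a square u -> v -> w -> x -> u needs x adjacent to both w and u
    return sum(len(set(neighbors(w)) & nu)
               for w in neighbors(v) if w != u)
```

For $d=2,3,4$ this returns $2,4,6$, matching $2(d-1)$.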
Accordingly in \Cref{fig:FR-sqDens} and \Cref{fig:QMD-sqDens} we plot the values of the edge square density for graphs of increasing size.
Across a range of naive dimensions from $d_k=2.0$ to $6.0$ we see for both models that $\expval{\square_{ij}}/2(d_k-1)$ reduces as the graph size increases.
It is difficult to draw a firm conclusion about the limiting behavior as the graph size increases, but we remark that the value for $d_k=3.0$ appears to be stabilizing around $1.0$, and for QMD the same occurs at $d_k=4.0$ and $5.0$.
Further investigations with much larger graphs are required to sharpen the result.
Finally, in \Cref{fig:FR-kappaStd} and \Cref{fig:QMD-kappaStd} we plot the variation with naive dimension of the standard deviation of the FR curvature on an edge, as a fraction of the mean value.
If the emergent discrete topology is a coarse-grained version of a smooth manifold, we would expect the topology of the graph to be highly uniform.
Consequently, the curvature of the edges should also be highly uniform for a given value of $d_k$.
In the QMD model the standard deviation is low and stable at $d_l=4$, but as the dimension reduces to $d_l=3$ the variation becomes much more pronounced.
In the case of CIM, the curvature is effectively uniform for $d_l>3$, but below this value is much less stable.
This result somewhat strengthens the hypothesis that our ground states are regular lattices, with the difference that the transition to a less regular and `crumpled' phase occurs at a lower dimension for CIM of around $d_l=3$.
We began our analysis with the goal of identifying whether the QMD and similar IG models are approximations of a more fundamental one, which in the low energy limit replicates the smooth spacetime of GR.
We believe the results presented here provide some support for this hypothesis, but with important caveats.
It is striking how similar the ground state properties of CIM and QMD are to each other.
In most regards the connectivity, topology and regularity of the ground states of CIM and QMD are almost identical.
We may speculate, however, that whereas $d_l=4$ appears to be the dimension of QMD that has the high regularity and topology of a regular lattice, for CIM this is slightly lower and potentially at a value of $d_l=3$.
If QMD is a coarse approximation of CIM, as indicated by our analysis resulting in \Cref{eqn:qmd_cim}, the refinements and extra terms in the Hamiltonian are responsible for this.
These originate directly from considerations of curvature, and have their origin in canonical GR.
Put another way, if our geometry is emergent and governed by a discrete form of GR at short scale, it would appear to prefer $3$ spatial dimensions.
The two major caveats are of course the non-rigorous nature of the analysis that took us from canonical GR to the CIM model, and the fact that the model is intrinsically non-relativistic.
Nowhere in our analysis have we factored in symmetry under the Lorentz group or demanded Lorentz covariance, and we stress that our model is by definition not a real-world model of emergent geometry until this is remedied.
We note in passing that the interaction previously proposed as a model of dynamics in IG models \cite{tee2020dynamics}, exhibits an inherent maximum speed of propagation.
This maximum speed of propagation could be interpreted as an approximate form of causality \cite{tee2021quantum}, and may indicate that a relativistic version of CIM is within reach.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{FR_Dims_g_0_038_to_0_167_N_350.png}
\caption{Intrinsic, extrinsic and naive dimension $N=350$, for $H_{CIM}$. Error bars are plotted but not visible as all values are below $0.05$.}
\label{fig:FR-Dims}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{QMD_Dims_g_0_04_to_0_143_N_350.png}
\caption{Intrinsic, extrinsic and naive dimension $N=350$, for $H_{QMD}$. Error bars are plotted but not visible as all values are below $0.05$.}
\label{fig:QMD-Dims}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{FR_DIM_N100_0_04_0_14_Polygons.png}
\caption{Cycle count versus naive dimension, for $H_{CIM}$ and $N=350$.}
\label{fig:FR-polygons}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{H_QMD_DIM_N100_0_04_0_14_Polygons.png}
\caption{Cycle count versus naive dimension, for $H_{QMD}$ and $N=350$.}
\label{fig:QMD-polygons}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{SCALED_FR_DIM_50_100_sqDens.png}
\caption{Normalized square cycle density versus naive dimension, for $H_{CIM}$ and $N=350$.}
\label{fig:FR-sqDens}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{SCALED_QMD_FR_DIM_50_100_sqDens.png}
\caption{Normalized square cycle density versus naive dimension, for $H_{QMD}$ and $N=350$.}
\label{fig:QMD-sqDens}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{FR_DIM_N100_0_04_0_14_kappaStd.png}
\caption{Standard deviation of Forman-Ricci curvature for $H_{CIM}$, $N=100$, as a fraction of the average edge curvature.}
\label{fig:FR-kappaStd}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[scale=0.38]{H_QMD_DIM_N100_0_04_0_14_kappaStd.png}
\caption{Standard deviation of Forman-Ricci curvature for $H_{QMD}$, $N=100$, as a fraction of the average edge curvature.}
\label{fig:QMD-kappaStd}
\end{subfigure}
\caption{Comparison of key topology metrics for the ground states of both $H_{CIM}$ and $H_{QMD}$. We plot all of these against the naive dimension \Cref{eqn:naive_dimension}. We compare the variation of dimensions, curvature regularity, polygon frequency and variation of polygon frequency with graph size. }
\label{fig:ground_state_topo}
\end{figure*}
\section{Concluding remarks}
\label{sec:conclusion}
In this paper we have taken as our starting point the canonical formulation of GR and used the Hamiltonian in the Gaussian gauge to propose a discrete model of emergent geometry that we call the Canonical Ising model.
This model, like earlier Ising models of emergent geometry, describes a universe that is comprised of `atoms' of spacetime that can be thought of as qubits on both vertices and edges.
These edge and vertex states, as modeled in the Hamiltonian, interact to form stable ground states resembling a regular lattice.
When we numerically compute the ground states of CIM, we find that they are highly regular and topologically similar to those of QMD and the related models described above.
We note however that in some ways CIM evolves with coupling in a smoother fashion than QMD.
The ground states are not spatially Forman-Ricci flat, but our freedom to choose a specific gauge or foliation in canonical GR brings with it the Hamiltonian and momentum constraints.
These require that $\mathcal{C}={}^3R +K^2-K^{ab}K_{ab}=0$, and so negative spatial curvature of our hypersurfaces simply requires that $K^2-K^{ab}K_{ab} >0$.
What kind of universe does our model describe?
We remarked in \Cref{sec:results} that there is something special about $d_l=3$; since our total manifold is $M=G(V,E) \otimes \mathbb{R}$, this leads to the pleasing conclusion that we are describing a $3+1$ dimensional total space, albeit with a Euclidean geometry.
However, the model also indicates that the higher energy regime has lower entropy and higher dimension.
As discussed in \Cref{sec:results}, it is tempting to interpret the results, as we increase the coupling constant from an initially small value, as representing the evolution of our model universe from a hot, low-entropy initial state of high connectivity and dimension.
From this point the universe gradually cools and topologically `unfolds' into a nearly flat, three-dimensional spatial geometry.
This mirrors our current understanding of the large scale geometry of the real Universe.
We speculate that this involves a topological `big bang' that progressed by the gradual reduction of the dimensionality of the universe, which can potentially provide an alternative interpretation of inflation and the uniformity of the cosmic microwave background.
In the initial state all points are local to all others, and by definition in causal contact.
As a consequence thermal equilibrium and uniformity are to be expected.
As dimensionality reduces, the spatial extent of the graph, $r_U$, does not increase smoothly.
Because of the relationship of $r_U$ to the dimension, $r_U=V^{1/d}$, the extent of the universe grows at an accelerating rate as $d$ reduces.
This growth is initially very slow for large $d$, but accelerates as $d$ approaches $3$.
The expansion continues while pockets of the graph have not yet reduced to $d=3$.
To summarize, we begin with a low-entropy, spatially `small', energetically hot universe in thermal equilibrium, which undergoes an accelerating increase in spatial extent as the dimensionality reduces with cooling.
If dimensional reduction comes to a halt as we approach $d=3$, this acceleration will slow down as more of the universe graph becomes uniform and of dimension $3$.
As all spacetime points are in causal contact prior to this topological `big bang', we would expect all points in the model to be in equilibrium, and the energy of the spatial points in the evolved model to be uniform.
This story is consistent with models of an inflationary universe, at least qualitatively.
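The scaling behind this picture can be illustrated in a few lines of arithmetic. This is our own minimal sketch of $r_U=V^{1/d}$ at a fixed, illustrative volume, not a computation from the model:

```python
# Illustration of r_U = V**(1/d): at fixed graph "volume" V (node count),
# the spatial extent r_U grows, and grows increasingly fast, as the
# dimension d is reduced toward 3.  V = 10**6 is an arbitrary choice.
V = 10**6
dims = [10, 8, 6, 5, 4, 3]                    # decreasing dimension
extents = [V ** (1.0 / d) for d in dims]      # r_U at each dimension
increments = [b - a for a, b in zip(extents, extents[1:])]
```

Here `extents` increases monotonically as `dims` decreases, and the successive increments themselves grow, which is the "accelerating expansion" described above.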
The CIM model though is still not complete and is currently in our view a toy model of emergent geometry, albeit at least related to GR by virtue of its formulation.
Conspicuously it is non-relativistic, and describes an effectively Euclidean universe.
For it to be taken seriously this has to be remedied.
It is well known that discrete subgroups of the Lorentz group exist \cite{tarakanov2012some,foldes2008lorentz}, and they have been speculated to provide a mechanism for a regular lattice to exhibit Lorentz symmetry, so constructing such a lattice would seem feasible.
Alongside this, Amelino-Camelia \cite{amelino2002relativity} has argued that a fundamental length scale can be reconciled with relativity in the `Doubly Special Relativity' model.
We believe that recasting the Ising structure of our model using Dirac spinors instead of qubits could provide a way of constructing a version of CIM that has Lorentz symmetry.
This is the subject of ongoing work.
What we believe we can safely conclude is that the well studied models of emergent geometry based upon Ising Hamiltonians may be approximate forms of a theory more closely related to general relativity, at least in a discretized form.
As the relationship between non-relativistic quantum mechanics and the dynamics of excitations in this model has already been shown in other work, this holds out the tantalizing prospect that GR itself is a low energy approximation of a quantum model of emergent geometry in which quantum mechanics is an integral feature.
If one admits the idea that the reconciliation of quantum mechanics and relativity requires one to be constructed from the other, perhaps this is a small hint that geometry, and therefore general relativity, could emerge from quantum mechanics.
A conclusive answer to this question is still far from being decided by the work presented here, but we believe that discrete emergent geometries will be an important part of the answer.
\clearpage
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:intro}
The interaction between a many-body quantum system
and its environment can give rise to exotic and
counterintuitive out-of-equilibrium behavior.
One of the most intriguing is the so-called quantum Zeno
effect~\cite{degasperis-1974,misra-1977,facchi-2002}:
As a consequence of the interaction with an environment,
for instance one performing some type of repeated measurement on the quantum system, the coherent Hamiltonian dynamics freezes.
In nonequilibrium settings, this effect has been shown to be responsible for suppression of transport in quantum systems \cite{bernard2018,carollo2018,popkov2018}.
On the other hand, dissipation can also be exploited to engineer desired quantum states~\cite{lin2013},
to perform quantum computation~\cite{verstraete-2009}, or even to
prepare topological states of matter~\cite{diehl-2011}.
The possibility of analyzing the
interplay between dissipation and quantum
criticality~\cite{vicari-2018,rossini-2019a,nigro-2019,rossini-2019,di-meglio-2020,rossini2021coherent}
is also particularly intriguing.
However, unfortunately,
modeling the system-environment interaction
within an analytic or numerical framework is in general
a daunting task.
In Markovian regimes, the Lindblad equation provides a well-defined mathematical framework to
treat open quantum systems~\cite{petruccione}. Still, exact results for the Lindblad equation are
rare~\cite{prosen-2008,prosen-2011,prosen-2014,prosen-2015,
znidaric-2010,znidaric-2011,medvedyeva-2016,ilievski2017dissipation,buca-2020,bastianello-2020,essler-2020,ziolkowska-2020},
with the notable exception of noninteracting systems with {\it linear} dissipators~\cite{prosen-2008}.
Interestingly, a perturbative field-theoretical treatment of the Lindblad equation is also possible~\cite{sieberer-2016}.
Furthermore, the recent discovery of Generalized Hydrodynamics~\cite{bertini-2016,olalla-2016} (GHD)
triggered a lot of interest in understanding whether the hydrodynamic framework could be
extended to open quantum systems~\cite{bouchoule-2020,bastianello-2020,Friedman_2020,deleeuw-2021,denardis2021}.
Remarkably, for simple free-fermion setups it is possible
to apply the so-called quasiparticle picture~\cite{calabrese-2005,fagotti-2008,alba-2017,alba-2018} to
describe the quantum information spreading~\cite{alba-2021,maity-2020}
in the presence of global gain/loss dissipation.
In this paper, we focus on the hydrodynamic description of the out-of-equilibrium dynamics of one-dimensional
free-fermion systems in the presence of localized dissipation, namely a dissipative impurity.
This setting is nowadays the focus of
growing interest~\cite{dolgirev-2020,jin-2020,maimbourg-2020,
froml-2019,tonielli-2019,froml-2020,krapivsky-2019,krapivsky-2020,rosso-2020,vernier-2020}, since this type of dissipation can also be engineered
in experiments with optical lattices~\cite{gericke-2008,brazhnyi-2009,zezyulin-2012,barontini-2013,patil-2015,labouvie-2016}.
Recent experiments also aim at investigating the effect of localized losses in quantum transport in fermionic systems~\cite{lebrat2019quantized,corman2019quantized}.
In particular, we consider here the case of localized gain and loss of fermions. Our work
takes inspiration from Ref.~\onlinecite{krapivsky-2019} (see also
Ref.~\onlinecite{krapivsky-2020} for similar results in a bosonic chain) which deals with the case of a fully-occupied noninteracting fermionic chain subject to losses. (The effects of losses on a uniform Fermi sea have also been studied in Ref.~\onlinecite{froml-2019}.) Here, we consider several homogeneous as well as inhomogeneous out-of-equilibrium initial states.
The actual setup of interest is illustrated in Fig.~\ref{fig0:cartoon}.
An infinite chain is subject to both gain and loss processes
with rates $\gamma^+$ and $\gamma^-$, respectively.
The dissipation acts at the center of the chain ($x=0$), removing or
adding fermions {\it incoherently}.
Here we consider the dynamics ensuing from a homogeneous initial state [see Fig.~\ref{fig0:cartoon}
(a)], such as a uniform Fermi sea with generic filling, or initial product states,
such as the fermionic N\'eel state, in which every other site of the chain
is occupied.
Furthermore, we also consider the dynamics from inhomogeneous initial states, as depicted in
Fig.~\ref{fig0:cartoon} (b). We take as initial state the one obtained
by joining two Fermi seas with different filling. This is a well-known
setup to study quantum transport in one-dimensional
systems. In the absence of dissipation it has been studied in Ref.~\onlinecite{viti-2016}.
If one of the two chains is empty, this becomes the
so-called geometric quench~\cite{mossel-2010}. If the left chain is fully-occupied
the setup is that of the domain-wall quench~\cite{antal-1999}.
In all these cases, we show that the evolution of the fermionic correlators
$G_{x,y}:=\langle c_x^\dagger c_y\rangle$ is fully captured
by a simple hydrodynamic picture, which we derive from the exact
solution of the microscopic Lindblad equation. The hydrodynamic regime holds in the
space-time scaling (or hydrodynamic) limit of large times and positions (see Fig.~\ref{fig0:cartoon})
$x,y,t\to\infty$ with their ratios $\xi_x:=x/(2t)$ and $\xi_y:=y/(2t)$ fixed.
Crucially, in the hydrodynamic limit the local dissipation acts as an effective
delta potential, with momentum-dependent reflection and transmission amplitudes
that depend on the dissipation rate.
This becomes manifest in the singular behavior at $x=0$ of the profile of
local observables. For arbitrary $\xi_x$ and $\xi_y$ the hydrodynamic result contains detailed information
about the model and the quench, and it can be derived easily only in a few cases. Interestingly,
for $\xi_x\approx\xi_y$ the hydrodynamic result can be expressed entirely in terms of the
initial fermionic occupations and the effective reflection and transmission amplitudes of the
dissipative impurity. This is reminiscent of what happens in the absence of
dissipation~\cite{viti-2016}.
Our findings demonstrate how a quantum Zeno effect~\cite{degasperis-1974,misra-1977}
arises quite generically in the strong dissipation limit.
In the presence of localized losses we show that the
depletion of a uniform state, both at equilibrium as well as out-of-equilibrium
after a quantum quench, is arrested for large dissipation rates. Similarly,
quantum transport between two unequal Fermi seas is inhibited. What happens is that
for strong dissipation, the central site is continuously subject to
particle injection or ejection, and this determines a constant projection of its state into the occupied or empty state. This projection effectively disconnects the central site from the rest of the chain. In turn, this effect hinders the depletion of the uniform state as well as the particle transport between the two halves of the chain \cite{carollo2018}. This interpretation can also be formalized by considering that for large rates $\gamma^{\pm}$, the Hamiltonian acts as a perturbative effect and the exchange of fermions between the central site and the rest can only take place at a rate $1/\gamma^{\pm}$ \cite{popkov2018}. This is a clear manifestation of a Zeno effect in dissipative nonequilibrium settings\cite{bernard2018,carollo2018,popkov2018}. Furthermore, in such a strong dissipation limit, the spatial profile
of the fermionic density is expressed in terms of the Wigner
semicircle law, reflecting that the scattering with the impurity is
``flat'' in energy.
Finally, we discuss the dynamics starting from two unequal Fermi seas
in the presence of balanced gain and loss dissipation, i.e., with $\gamma^+=\gamma^-$.
It is well-known that in the absence of dissipation a
Non-Equilibrium Steady State (NESS)~\cite{sabetta-2013,viti-2016} develops
around $x=0$. The NESS exhibits the
correlations of a boosted Fermi sea.
For balanced loss/gain dissipation, an interesting ``broken'' (piecewise homogeneous)
NESS appears. The corresponding density profile has a step-like
structure with a discontinuity at $x=0$,
reflecting once again that the local dissipation mimics an effective delta potential.
The manuscript is organized as follows. In section~\ref{sec:model} we
introduce the model, the Lindblad treatment of localized gain and
losses, and the different quench protocols. In section~\ref{sec:warm-up}
we focus on the effect of losses on homogeneous out-of-equilibrium
states. In subsection~\ref{sec:ferro} we consider the case of
localized losses in the fully-filled state, which was considered in
Ref.~\onlinecite{krapivsky-2019}. In subsection~\ref{sec:neel} we
discuss losses on the out-of-equilibrium state emerging after the
quench from the N\'eel state. In section~\ref{sec:ness} we
focus on the dynamics starting from inhomogeneous initial states.
In subsection~\ref{sec:DW} we generalize the results of section~\ref{sec:warm-up}
to the domain-wall quench. In subsection~\ref{sec:ness1}
we discuss the quench from the two Fermi seas. We conclude
and discuss future perspectives in section~\ref{sec:conc}. In Appendix~\ref{app:gain-losses}
we present details on how to derive the solution of the problem with both gain and loss
dissipation given the solution for dissipative loss only. In Appendix~\ref{sec:asy}
we derive the reflection amplitude for the effective delta potential describing
the dissipative impurity. In Appendix~\ref{sec:app-tech} we report the
derivation of the results of section~\ref{sec:ness1}. Finally,
in Appendix~\ref{sec:equal} we discuss the effect of losses on a uniform
Fermi sea.
\section{Noninteracting fermions with gain and loss: The protocols}
\label{sec:model}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{cartoon.pdf}
\caption{An infinite free-fermion chain with localized
gain and loss processes acting at the center of the chain. Here
$\gamma^\pm$ are the dissipation rates and $J=1$ the hopping.
In (a) the chain is initially prepared in a homogeneous state
$|\Psi\rangle$. Here we consider the case with $|\Psi\rangle$
being the fully occupied state $|F\rangle$, a Fermi sea with generic filling $k_F$ and the
fermionic N\'eel state. In (b) the initial state
is obtained by joining two semi-infinite homogeneous chains $L$ and
$R$, prepared in two different states.
}
\label{fig0:cartoon}
\end{figure}
In this paper, we consider the infinite free-fermion chain defined by the tight-binding Hamiltonian
\begin{equation}
\label{eq:ham}
H=\sum_{x=-\infty}^\infty(c_x^\dagger c_{x+1}+c^\dagger_{x+1}c_x)\, ,
\end{equation}
where $c_x^\dagger,c_x$ are creation and annihilation operators at the different sites $x$ of the chain. They obey canonical anticommutation relations. The Hamiltonian in Eq.~\eqref{eq:ham} becomes diagonal after taking a Fourier transform
with respect to $x$. One can indeed define the fermionic operators $b_k$ as
\begin{equation}
b_k:=\sum_{x=-\infty}^\infty e^{-ikx}c_x,\quad
c_x=\int_{-\pi}^\pi \frac{dk}{2\pi} e^{i k x}b_k\, ,
\end{equation}
and in terms of these operators, Eq.~\eqref{eq:ham} is equivalent to
\begin{equation}
\label{eq:ham-k}
H=\int_{-\pi}^\pi\frac{dk}{2\pi} \varepsilon_k b^\dagger_k b_k\, ,\quad
\varepsilon_k:=2\cos(k)\, .
\end{equation}
The Hamiltonian $H$ conserves the particle number. At a fixed density $n_f=k_F/\pi$, the ground state is obtained from the Fermi vacuum $|0\rangle$ by occupying the modes $b_k$ with quasimomenta $k\in[-k_F,k_F]$, where $k_F$
is the Fermi momentum. For $n_f=1$ ($k_F=\pi$) one has the
fully-filled state $|F\rangle$, which is a product
state. For $0<k_F<\pi$ the ground state of~\eqref{eq:ham} is instead critical,
i.e., with power-law decaying correlation functions.
For later convenience, we define here the group velocity $v_k$
of the fermions as
\begin{equation}
\label{eq:v-k}
v_k:=\frac{d\varepsilon_k}{dk}=-2\sin(k)\, .
\end{equation}
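As a quick numerical sanity check (our own, not part of the derivation), the dispersion $\varepsilon_k=2\cos(k)$ in Eq.~\eqref{eq:ham-k} can be recovered by diagonalizing the hopping matrix of a finite chain with periodic boundary conditions; the chain length $L=200$ is an arbitrary choice:

```python
import numpy as np

# Periodic tight-binding hopping matrix: h[x, x+1] = h[x+1, x] = 1.
L = 200
h = np.zeros((L, L))
for x in range(L):
    h[x, (x + 1) % L] = 1.0
    h[(x + 1) % L, x] = 1.0

# Its eigenvalues are eps_k = 2*cos(k) for quantized momenta k = 2*pi*n/L,
# so the band lies in [-2, 2]; the group velocity v_k = -2*sin(k)
# vanishes at the band edges k = 0 and k = pi.
evals = np.sort(np.linalg.eigvalsh(h))
ks = 2 * np.pi * np.arange(L) / L
expected = np.sort(2 * np.cos(ks))
```

Sorting and comparing `evals` against `expected` confirms the dispersion to machine precision.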
In addition to the Hamiltonian contribution, we consider a dynamics which is also affected by localized gain/loss processes at the center of the chain [see Fig.~\ref{fig0:cartoon}]. To account for these dissipative contributions, we exploit the formalism of quantum master equations \cite{petruccione}. The time-evolution of the system state $\rho_t$ is implemented by a Lindblad generator, through the following equation
\begin{equation}
\label{eq:lind}
\frac{d\rho_t}{dt}=-i[H,\rho_t]+\sum_{i=+,-}\left(L^{i}\rho_t L^{i\, \dagger}-\frac{1}{2}\{L^{i\,\dagger} L^{i},\rho_t\}\right)\, .
\end{equation}
Here, the so-called jump operators $L^{i}$ are given by $L^+=\sqrt{\gamma^+}c^\dagger_0$ and
$L^-=\sqrt{\gamma^-}c_0$ (see Fig.~\ref{fig0:cartoon} for a pictorial
definition), and account for gain and loss, with rates $\gamma^+$ and $\gamma^-$, respectively.
The relevant information about the system is contained in the fermionic two-point correlation functions
\begin{equation}
G_{x,y}(t):=\mathrm{Tr}(c^\dagger_x c_y\rho(t))\, .
\end{equation}
The dissipative dynamics of this {\it covariance matrix} is obtained as
\begin{equation}
\label{eq:G}
G(t)=
e^{t\Lambda}G(0)e^{t\Lambda^\dagger}+\int_0^t dz\, e^{(t-z)\Lambda}\Gamma^+ e^{(t-z)\Lambda^\dagger},
\end{equation}
with $G(0)$ being the matrix containing the initial correlations.
The matrix $\Lambda$ is defined as
\begin{equation}
\label{eq:lambda}
\Lambda=ih-\frac{1}{2}(\Gamma^++\Gamma^-),
\end{equation}
where $h_{x,y}=\delta_{|x-y|,1}$ implements the Hamiltonian contribution, while
$\Gamma^\pm_{x,y}=\gamma^\pm\delta_{x,0}\delta_{x,y}$ accounts for the localized dissipative effects.
The correlation functions $G_{x,y}$ in~\eqref{eq:G} satisfy the
linear system of equations (we drop the explicit time dependence when this does not generate confusion)
\begin{multline}
\label{eq:one}
\frac{d G_{x,y}}{dt}=i(G_{x+1,y}+G_{x-1,y}-G_{x,y+1}-G_{x,y-1})\\
-\frac{\gamma^++\gamma^-}{2}(\delta_{x,0}G_{x,y}+\delta_{y,0}G_{x,y})+
\gamma^+\delta_{x,0}\delta_{y,0}.
\end{multline}
We mainly consider the loss process, setting
$\gamma^+=0$ in~\eqref{eq:one} and~\eqref{eq:G}. This is not a severe
limitation since the knowledge of $G_{x,y}$ for
$\gamma^+=0$ is sufficient to reconstruct $G_{x,y}$
also in cases of a nonzero $\gamma^+$ (see Appendix~\ref{app:gain-losses}).
We also notice that equations of the type of~\eqref{eq:one} can be solved efficiently
by standard numerical methods, such as Runge-Kutta integration~\cite{press2007numerical}.
This is especially useful to treat the case of non-quadratic Liouvillians, for instance,
in the presence of dephasing or incoherent hopping~\cite{alba-2021}. In our case this is not
necessary because the solution of~\eqref{eq:one} is given by~\eqref{eq:G}. Indeed, Eq.~\eqref{eq:G} can
be evaluated numerically after noticing that, for a finite chain of $L$ sites, the matrix $\Lambda$ is $L\times L$ and can be diagonalized
with a computational cost ${\mathcal O}(L^3)$. This allows us to evaluate the integral in~\eqref{eq:G} efficiently.
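A minimal sketch of this numerical evaluation of Eq.~\eqref{eq:G} on a finite chain follows. The chain length, rates, and the simple rectangle quadrature of the gain term are our own illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

def correlator(G0, gamma_minus, gamma_plus, t):
    """Evaluate G(t) = e^{t L} G(0) e^{t L^dag} + integral term on a finite
    chain, with loss/gain acting on the central site only."""
    L = G0.shape[0]
    c = L // 2                                        # central site x = 0
    h = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    Gp = np.zeros((L, L)); Gp[c, c] = gamma_plus      # gain matrix Gamma^+
    Gm = np.zeros((L, L)); Gm[c, c] = gamma_minus     # loss matrix Gamma^-
    Lam = 1j * h - 0.5 * (Gp + Gm)                    # the matrix Lambda
    U = expm(t * Lam)
    G = U @ G0 @ U.conj().T
    if gamma_plus > 0:                                # inhomogeneous gain term
        zs, dz = np.linspace(0.0, t, 400, retstep=True)
        for z in zs:
            V = expm((t - z) * Lam)
            G += dz * (V @ Gp @ V.conj().T)
    return G
```

For pure loss from the fully-filled state (`G0` the identity), the density stays in $[0,1]$, the total particle number decreases, and with no dissipation the state is stationary.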
In the following sections we discuss the effect of gain/loss dissipation in several theoretically and
experimentally relevant
situations. We consider both equilibrium as well as out-of-equilibrium systems, i.e. after
a quantum quench~\cite{polkovnikov2011colloquium,eisert2015quantum,gogolin2016equilibration,dalessio2016quantum,calabrese-2016}. At equilibrium we are interested in understanding
how the local dissipation affects the critical correlations of a
homogeneous Fermi sea with arbitrary filling $0<k_F<\pi$. We also review the effect of losses
in the non-critical state $|F\rangle$, which was discussed in
Ref.~\onlinecite{krapivsky-2019}.
Furthermore, we consider the case in which the initial state is a product state that
is however not an eigenstate of the Hamiltonian~\eqref{eq:ham}.
In the absence of dissipation this is one of the paradigms of quantum quenches.
The generic out-of-equilibrium dynamics ensuing from an initial product
state is in fact highly nontrivial, as for instance reflected by the
ballistic growth of bipartite entanglement. Interestingly, for integrable systems, this growth is due to the propagation
of pairs of entangled quasiparticles~\cite{calabrese-2005,fagotti-2008,alba-2017,alba-2018}.
Our setting thus allows us to investigate the interplay between localized
dissipation and quench dynamics. For concreteness, we focus on the
situation in which the initial state is the fermionic N\'eel state
$|N\rangle:=\prod_{x\,\mathrm{even}} c^\dagger_x|0\rangle$, in which only every other site is occupied.
Finally, we consider quenches from inhomogeneous initial states
obtained by joining two homogeneous Fermi seas with different
Fermi levels $k_F^l$ and $k_F^r$ [see Fig.~\ref{fig0:cartoon}(b)].
The choice $k_F^l=\pi/2$ and $k_F^r=0$ corresponds
to the so-called geometric quench~\cite{mossel-2010}, whereas $k_F^l=\pi$ and $k_F^r=0$
to the domain-wall quench~\cite{antal-1999,gobert-2005,antal-2008,allegra-2016,gamayun-2020}. The case with
$k_F^l\ne k_F^r$ is particularly interesting since, in the absence of
dissipation and at long times, a non-equilibrium steady state (NESS) emerges
around the interface between the two parts of the chain. Such a NESS exhibits
the critical correlations of a boosted Fermi sea~\cite{sabetta-2013,viti-2016}.
Interestingly, the space-time profile of physical observables and of the
von Neumann entropy in these setups admit an elegant field theory description
in terms of a Conformal Field Theory in a curved space~\cite{allegra-2016,dubail-2017,brun-2017,dubail-2017a,brun-2018,ruggiero-2019,bastianello-2020,collura-2020}.
For generic integrable systems without dissipation,
similar inhomogeneous protocols can be studied by using the
recently-developed Generalized Hydrodynamics~\cite{bertini-2016,olalla-2016} (GHD).
\section{Homogeneous out-of-equilibrium states}
\label{sec:warm-up}
In this section we discuss the effect of losses on homogeneous
out-of-equilibrium states. To introduce the notation, we start by reviewing the quench from the fully-filled state~\cite{krapivsky-2019} in section~\ref{sec:ferro}. Note that in the absence of
dissipation there is no dynamics, since such a state is an eigenstate
of Eq.~\eqref{eq:ham}. We will obtain analogous results in the more general context
of section~\ref{sec:ness}. In order to study the interplay between
unitary and dissipative dynamics, in section~\ref{sec:neel} we
consider the quench from the fermionic N\'eel state.
\subsection{Fully-filled state}
\label{sec:ferro}
Let us consider the out-of-equilibrium dynamics starting from
the fully-filled state $|F\rangle$ defined as
\begin{equation}
|F\rangle:=\prod_{x=-\infty}^\infty c_x^\dagger|0\rangle\, .
\end{equation}
The above state is a product state, with
diagonal correlator $G_{x,y}$, given by
\begin{equation}
\label{eq:in-g}
G_{x,y}(0)=\delta_{x,y}\, .
\end{equation}
To solve~\eqref{eq:one} with $\gamma^+=0$ we employ
a product ansatz~\cite{krapivsky-2019} for
$G_{x,y}$. Specifically, we take $G_{x,y}$ of the form
\begin{equation}
\label{eq:ansatz}
G_{x,y}=\sum_{k=-\infty}^\infty S_{k,x} \bar S_{k,y}\, ,
\end{equation}
where the bar denotes complex conjugation.
A similar product ansatz will be used in section~\ref{sec:ness}.
The factorization as in~\eqref{eq:ansatz} arises naturally when
treating transport problems in free-fermion models~\cite{viti-2016}.
Eq.~\eqref{eq:ansatz} is consistent with~\eqref{eq:one} provided that
$S_{k,x}$ satisfies
\begin{equation}
\label{eq:S-eq}
\frac{dS_{k,x}}{dt}=i[S_{k,x+1}+S_{k,x-1}]-\frac{\gamma^-}{2}\delta_{x,0}S_{k,x}\, .
\end{equation}
From Eq.~\eqref{eq:in-g} we obtain as initial condition for $S_{k,x}$
\begin{equation}
\label{eq:S-in}
S_{k,x}(0)=\delta_{x,k}\, .
\end{equation}
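Before solving Eq.~\eqref{eq:S-eq} analytically, it is instructive to integrate it numerically. The following Python sketch (our illustration, not part of the original derivation; the finite open chain, the time step, and the fourth-order Runge--Kutta integrator are choices of ours) propagates $S_{k,x}$ from the initial condition~\eqref{eq:S-in} and monitors the total particle number $N(t)=\sum_{k,x}|S_{k,x}|^2$, which is conserved for $\gamma^-=0$ and decreases in the presence of losses:

```python
import numpy as np

def evolve_S(L=101, gamma=1.0, T=5.0, dt=0.01):
    """RK4 integration of dS_{k,x}/dt = i(S_{k,x+1} + S_{k,x-1})
    - (gamma/2) delta_{x,0} S_{k,x} on a finite open chain of L sites.
    Rows of S are labeled by k, columns by x; the dissipative site x = 0
    sits at the center, far from the boundaries for the times used here."""
    center = L // 2
    def rhs(S):
        dS = np.zeros_like(S)
        dS[:, 1:] += 1j * S[:, :-1]          # i S_{k,x-1}
        dS[:, :-1] += 1j * S[:, 1:]          # i S_{k,x+1}
        dS[:, center] -= 0.5 * gamma * S[:, center]
        return dS
    S = np.eye(L, dtype=complex)             # S_{k,x}(0) = delta_{x,k}
    for _ in range(int(T / dt)):
        k1 = rhs(S)
        k2 = rhs(S + 0.5 * dt * k1)
        k3 = rhs(S + 0.5 * dt * k2)
        k4 = rhs(S + dt * k3)
        S = S + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return S

# Total particle number N(t) = sum_x n_{x,t} = sum_{k,x} |S_{k,x}|^2
N_free = np.sum(np.abs(evolve_S(gamma=0.0))**2)   # no losses: stays at L = 101
N_loss = np.sum(np.abs(evolve_S(gamma=1.0))**2)   # decreases due to absorption
```

The same direct integration can be used to benchmark all the analytic scaling functions discussed below.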
Eq.~\eqref{eq:S-eq} is conveniently solved by a combination of
Laplace transform with respect to time and Fourier transform with
respect to the space coordinate $x$. Let us define the
Laplace transform $\widehat S_{k,x}(s)$ as
\begin{equation}
\label{eq:S-lap}
\widehat S_{k,x}(s)=\int_0^\infty dt e^{-st}S_{k,x}(t).
\end{equation}
This allows us to rewrite~\eqref{eq:S-eq} as
\begin{equation}
\label{eq:laplace}
s \widehat S_{k,x}-S_{k,x}(0)=
i[\widehat S_{k,x+1}+\widehat S_{k,x-1}]-\frac{\gamma^-}{2}\delta_{x,0}
\widehat S_{k,x}.
\end{equation}
We can now perform the Fourier transform with respect to
$x$, by defining
\begin{equation}
\label{eq:S-ft}
\widehat S_{k,q}=\sum_{x=-\infty}^\infty \widehat S_{k,x} e^{-iqx},
\end{equation}
with $q\in[-\pi,\pi]$ being the momentum. From now on, we will use $\widehat{S}_{k,q/p}$ to indicate the combined Laplace and Fourier transform of $S_{k,x}$, whereas $\widehat{S}_{k,x/y}$ will stand for the Laplace transform of $S_{k,x/y}$ only. After substituting in~\eqref{eq:laplace} and
using the initial condition~\eqref{eq:S-in}, we obtain
\begin{equation}
\label{eq:lap-ft}
s\widehat S_{k,q}-
e^{-i q k}=i\widehat S_{k,q}(e^{iq}+e^{-iq})-\frac{\gamma^-}{2}\widehat S_{k,x=0}.
\end{equation}
The solution of~\eqref{eq:lap-ft} is straightforward, yielding
\begin{equation}
\label{eq:s0}
\widehat S_{k,q}=\big[e^{-iqk}-\frac{\gamma^-}{2}\widehat
S_{k,x=0}\big]\frac{1}{s-2i\cos(q)}.
\end{equation}
We note that $\widehat S_{k,x=0}$ is conveniently written as
\begin{equation}
\widehat S_{k,x=0}=\frac{1}{2\pi}\int_{-\pi}^{\pi} dq \widehat S_{k,q}.
\end{equation}
We can now take the inverse Fourier transform in~\eqref{eq:s0}, and using that
\begin{equation}
\label{eq:ff-nota}
\frac{1}{2\pi}\int_{-\pi}^{\pi}dq\frac{e^{iqx}}{s-2i\cos(q)}= \frac{1}{\sqrt{s^2+4}}
\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|x|},
\end{equation}
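The transform pair~\eqref{eq:ff-nota} is easy to verify numerically. A minimal sketch (our illustration; the midpoint quadrature rule and the test values of $s$ and $x$ are arbitrary choices) compares a quadrature of the left-hand side with the closed form on the right-hand side:

```python
import numpy as np

def lhs(s, x, n=4096):
    """(1/2pi) int_{-pi}^{pi} dq e^{iqx} / (s - 2i cos q), computed with the
    midpoint rule, which converges spectrally for smooth periodic integrands."""
    q = -np.pi + (np.arange(n) + 0.5) * 2 * np.pi / n
    return np.mean(np.exp(1j * q * x) / (s - 2j * np.cos(q)))

def rhs(s, x):
    """Closed form (s^2+4)^{-1/2} [2i/(s + sqrt(s^2+4))]^{|x|}."""
    root = np.sqrt(s**2 + 4)
    return (2j / (s + root))**abs(x) / root

for s in (0.5, 1.0, 3.0):
    for x in (-3, 0, 5):
        assert abs(lhs(s, x) - rhs(s, x)) < 1e-10
```

For $x=0$ the identity reduces to the textbook integral $\frac{1}{2\pi}\int_{-\pi}^{\pi}dq\,(s-2i\cos q)^{-1}=(s^2+4)^{-1/2}$.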
we obtain
\begin{multline}
\label{eq:f-final}
\widehat S_{k,x}=
\frac{1}{\sqrt{s^2+4}}\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|k-x|}
\\-\frac{\gamma^-/2}{(\gamma^-/2+\sqrt{s^2+4})\sqrt{s^2+4}}
\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|k|+|x|}.
\end{multline}
Note the absolute value $|x|$ in the second term in~\eqref{eq:f-final}.
The last step is to take the inverse Laplace transform of~\eqref{eq:f-final}.
This is straightforward for the first term in Eq.~\eqref{eq:f-final}, which
accounts for the unitary part of the evolution, and gives
a term $J_{|x-k|}(2t)$, with $J_n(t)$ the Bessel function of
the first kind. The second term in Eq.~\eqref{eq:f-final} encodes the
effects of the losses. One can write~\cite{krapivsky-2019}
\begin{equation}
\label{eq:f-final-1}
S_{k,x}(t)=i^{|x-k|}J_{|x-k|}(2t)-\frac{\gamma^-}{2} i^{|x|+|k|}K_{|x|+|k|}(t).
\end{equation}
Here $K_{|x|+|k|}$ is the inverse Laplace transform of the second
term in Eq.~\eqref{eq:f-final}. To determine $K_{|k|+|x|}$ analytically,
one can use the inverse Laplace transform
\begin{multline}
\label{eq:dn}
D_{|x|}:={\mathcal L}^{-1}\Big(\frac{1}{\frac{\gamma^-}{2}+\sqrt{s^2+4}}
\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|x|}\Big)=
i^{|x|}J_{|x|}(2t)\\-i^{|x|}\frac{\gamma^-}{2}\int_{0}^t dz e^{-\gamma^-z/2}
\Big(\frac{t-z}{t+z}\Big)^{|x|/2}J_{|x|}(2\sqrt{t^2-z^2}),
\end{multline}
together with the fact that
\begin{equation}
{\mathcal L}^{-1}\Big(\frac{1}{{\sqrt{s^2+4}}}\Big)=J_0(2t).
\end{equation}
This allows us to obtain the inverse Laplace transform of the second
term in~\eqref{eq:f-final} as the convolution
\begin{multline}
{\mathcal L}^{-1}\Big(\frac{1}{(\frac{\gamma^-}{2}+\sqrt{s^2+4})\sqrt{s^2+4}}
\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|k|+|x|}\Big) =\\
\int_0^t d\tau J_0(2(t-\tau))D_{|x|+|k|}(\tau),
\end{multline}
with $D_{|x|+|k|}$ defined in Eq.~\eqref{eq:dn}.
We anticipate that we will also employ Eq.~\eqref{eq:dn} in
section~\ref{sec:ness}.
Here we are interested in the space-time
scaling limit $x,k,t\to\infty$, with their ratios fixed.
We define the two scaling variables $u,\xi_x$ as
\begin{equation}
u:=\frac{k}{2t},\quad \xi_x:=\frac{x}{2t}.
\end{equation}
Since the initial state is homogeneous and the dissipation acts
at $x=0$ we expect local observables, such as the fermionic
density, to be even functions of $x$. Thus, we can restrict
ourselves to $\xi_x>0$.
The asymptotic behavior of $J_{|x-k|}$ and $K_{|x|+|k|}$ was
derived analytically~\cite{krapivsky-2019} and is given by
\begin{multline}
\label{eq:asyJ}
J_{|x-k|}(2t)\simeq \\
\frac{ \cos \big[2t \sqrt{1 - (u -\xi_x)^2} - 2 t |u-\xi_x|
\arccos |u-\xi_x| -\frac{\pi}{4} \big]}
{\sqrt{\pi t}\, \left[1 - (u -\xi_x)^2\right]^{1/4}},
\end{multline}
which holds for $-1\le u-\xi_x\le1$. For $u,\xi_x$ outside of this interval the
asymptotic behavior of $J_{|x-k|}$ is subleading in the scaling limit.
Similarly, one can show that~\cite{krapivsky-2019}
\begin{multline}
\label{eq:asyK}
K_{|x| + |k|}(t)\simeq \\
\frac{ \cos \big[2t \sqrt{1 - (\xi_x +|u|)^2} - 2 t (\xi_x +|u|)
\arccos (\xi_x +|u|)
-\frac{\pi}{4} \big]}
{\sqrt{\pi t}\, \left[\gamma^-/2 + 2(\xi_x + |u|)\right]\left[1 -
(\xi_x + |u|)^2
\right]^{1/4}},
\end{multline}
which holds in the interval $-1\le |u|+\xi_x\le1$.
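Eqs.~\eqref{eq:asyJ} and~\eqref{eq:asyK} are Debye-type asymptotic expansions. The first one is straightforward to check against the exact Bessel function, as in the sketch below (our illustration; it assumes \texttt{scipy} is available, and the value of $t$, the orders $n=|x-k|$, and the tolerance are arbitrary choices):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def asy_J(n, t):
    """Right-hand side of eq:asyJ, written in terms of w = |u - xi_x| = n/(2t)."""
    w = n / (2.0 * t)
    phase = 2 * t * np.sqrt(1 - w**2) - 2 * t * w * np.arccos(w) - np.pi / 4
    return np.cos(phase) / (np.sqrt(np.pi * t) * (1 - w**2)**0.25)

t = 200.0
for n in (0, 40, 120, 240):          # n/(2t) well inside (-1, 1)
    assert abs(jv(n, 2 * t) - asy_J(n, t)) < 3e-3
```

The corrections to the leading asymptotics decay as $1/t$, so the agreement improves in the scaling limit.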
As will become clear in section~\ref{sec:ness}, the
scaling behavior of $G_{x,y}$ is given by a simple formula
in the limit $x,y,t\to\infty$ with $x/t$, $y/t$ fixed and
$|x-y|/t\to0$.
On the other hand, we should stress that by using~\eqref{eq:f-final-1} together
with the asymptotic expansions~\eqref{eq:asyJ} and~\eqref{eq:asyK}
it is possible to obtain the behavior of $G_{x,y}$ for large
$x,y,t$ with {\it arbitrary} fixed ratios $\xi_x=x/(2t),\xi_y=y/(2t)$. However, as is
clear from~\eqref{eq:asyJ} and~\eqref{eq:asyK}, the result contains
detailed information about the quench parameters and the dissipation.
Let us consider the dynamics of the density profile $n_{x,t}$. We have
\begin{equation}
\label{eq:den1}
n_{x,t}=\sum_{k=-\infty}^\infty |S_{k,x}|^2.
\end{equation}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{den_c_profile.pdf}
\caption{ Density profile $n_{x,t}$ in a free fermion chain with localized
losses. Here we plot $1-n_{x,t}$ versus $x/(2t)$,
with $x$ the position with respect to the center of the chain and
$t$ the time. The initial state of the chain is the fully occupied state
$|F\rangle$. The symbols are exact numerical data for ``strong''
loss rate $\gamma^-=5$ and ``weak'' loss rate $\gamma^-=0.5$.
Lines and the red circle at $x=0$ are the analytic results in
the hydrodynamic limit.
}
\label{fig1:ferro}
\end{figure}
The behavior of the density in the space-time scaling limit is
obtained by using~\eqref{eq:asyJ} and~\eqref{eq:asyK} in~\eqref{eq:den1}.
Let us assume $\xi_x>0$. We obtain
\begin{equation}
\label{eq:1f}
n_{x,t}=1-\frac{\gamma^-}{\pi}\int_{\arcsin(x/(2t))}^{\pi/2}dq\frac{|v_q|}
{(\frac{\gamma^-}{2}+|v_q|)^2}\, .
\end{equation}
In deriving~\eqref{eq:1f} we approximated the rapidly oscillating trigonometric
functions in~\eqref{eq:asyJ} and~\eqref{eq:asyK} with their time average.
An important remark is that the derivation above is not valid near the origin
at $x=0$, where the density profile exhibits a singularity.
For $x=0$ from~\eqref{eq:f-final} we obtain
\begin{multline}
S_{k,0}(t)=
i^{|k|}J_{|k|}(2t)\\
-i^{|k|}\frac{\gamma^-}{2}\int_{0}^t dz e^{-\gamma^-z/2}
\Big(\frac{t-z}{t+z}\Big)^{|k|/2}J_{|k|}(2\sqrt{t^2-z^2}).
\end{multline}
In the scaling limit $k,t\to\infty$ with $u=k/(2t)$ fixed we have
\begin{equation}
\Big(\frac{t-z}{t+z}\Big)^\frac{|k|}{2}\simeq e^{-2|u|z}.
\end{equation}
This implies
\begin{equation}
S_{k,0}(t)\simeq \frac{2i^{|k|}|u|}{\gamma^-/2+2|u|}J_{|k|}(2t).
\end{equation}
By using the asymptotic expansion of the Bessel function~\eqref{eq:asyJ} we
obtain
\begin{equation}
\label{eq:2f}
n_{0,t}=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2}
dq \frac{|v_q|^2}{\big(\frac{\gamma^-}{2}+|v_q|\big)^2}.
\end{equation}
We provide an alternative derivation of~\eqref{eq:1f} and~\eqref{eq:2f}
in section~\ref{sec:ness}.
It is interesting to observe that from~\eqref{eq:1f} in the limit of large loss
rate $\gamma^-$ one obtains the Wigner semi-circle law as
\begin{equation}
\label{eq:wigner}
n_{x,t}=1-\frac{1}{\gamma^-}\frac{8}{\pi}\sqrt{1-\frac{x^2}{4t^2}}.
\end{equation}
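The reduction of Eq.~\eqref{eq:1f} to Eq.~\eqref{eq:wigner} at large $\gamma^-$ is easy to confirm by quadrature. In the sketch below (our illustration) we take $v_q=2\sin(q)$ for the group velocity~\eqref{eq:v-k}; the value $\gamma^-=10^3$, the grid, and the tolerance are arbitrary choices:

```python
import numpy as np

def depletion(xi, gamma, n=20000):
    """1 - n_{x,t} from eq:1f, evaluated with the midpoint rule;
    xi = x/(2t) in (0, 1) and v_q = 2 sin(q)."""
    a, b = np.arcsin(xi), np.pi / 2
    q = a + (np.arange(n) + 0.5) * (b - a) / n
    v = 2 * np.sin(q)
    return (gamma / np.pi) * np.mean(v / (gamma / 2 + v)**2) * (b - a)

gamma = 1e3
for xi in (0.1, 0.5, 0.9):
    wigner = 8 / (gamma * np.pi) * np.sqrt(1 - xi**2)   # eq:wigner
    assert abs(depletion(xi, gamma) / wigner - 1) < 0.02
```

The relative deviation from the semicircle form is $O(1/\gamma^-)$, consistent with the expansion leading to Eq.~\eqref{eq:wigner}.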
The behavior in Eq.~\eqref{eq:wigner} appears also in the case of the
out-of-equilibrium dynamics from the N\'eel state (see section~\ref{sec:neel}) and
from inhomogeneous initial states (see section~\ref{sec:ness}).
Eq.~\eqref{eq:wigner} has a simple physical interpretation. Eq.~\eqref{eq:1f}
in the limit of large $\gamma^-$ can be rewritten as
\begin{equation}
\label{eq:wigner-1}
n_{x,t}=1-\frac{1}{\gamma^-\pi}\int_{-\pi}^{\pi}
dq\, v_q\Theta(v_q-x/t).
\end{equation}
Now the integral in~\eqref{eq:wigner-1} describes the number of holes (equivalently, the absorbed
fermions) that
are emitted at the origin and at time $t$ arrive at position $x$. Importantly,
since $dq\, v_q=d\epsilon$, this means that the holes are produced at a
rate $\propto 1/\gamma^-$ with a uniform distribution in energy, i.e., at
infinite temperature.
In the limit $\gamma^-\to\infty$ the density remains $n_{x,t}=1$.
For large but finite $\gamma^-$, the total number of fermions absorbed
up to time $t$ is given by
\begin{equation}
n_{a}:=\int_{-\infty}^\infty dx (1-n_{x,t})=\frac{8}{\gamma^-}t.
\end{equation}
The number of fermions that are lost at the origin increases linearly
with time. However, the rate goes to zero as $\gamma^-\to\infty$, which is consistent with the emergence of a Zeno effect. These results are checked in Fig.~\ref{fig1:ferro}.
The symbols are numerical data obtained by using~\eqref{eq:G} for $\gamma^-=0.5$ and
$\gamma^-=5$. The different symbols correspond to different times. To highlight
the scaling behavior we plot $1-n_{x,t}$ versus $x/(2t)$. All the data for
different times collapse on the same curve.
Note the singularity at $x=0$. Some corrections are visible
only for very short times. The continuous lines are the
analytical predictions~\eqref{eq:1f} and~\eqref{eq:2f},
and are in perfect agreement with the numerical data.
\subsection{Homogeneous N\'eel quench}
\label{sec:neel}
Let us now discuss the effect of losses on an out-of-equilibrium
state arising after the quantum quench from the fermionic N\'eel state.
The N\'eel state $|N\rangle$ is defined as
\begin{equation}
|N\rangle:=\prod_{x\,\mathrm{even}} c_x^\dagger|0\rangle.
\end{equation}
The initial correlation matrix reads as
\begin{equation}
G_{x,y}(0)=\delta_{x,y}\,\,\textrm{for even}\,\,x,
\end{equation}
and vanishes otherwise.
To proceed we impose that the solution of~\eqref{eq:one} is
factorized as in Eq.~\eqref{eq:ansatz}.
We obtain the same equation as in~\eqref{eq:S-eq}.
The initial condition for $S_{k,x}$ is
\begin{equation}
\label{eq:neel-ini}
S_{k,x}(0)=\delta_{x,k},\quad k\,\,\textrm{even}.
\end{equation}
After performing a Laplace transform with
respect to time we obtain~\eqref{eq:laplace}.
Now we can consider separately the cases of even and odd $k$.
Let us first consider the case of odd $k$.
It is straightforward to check that~\eqref{eq:S-eq} together with the
initial condition~\eqref{eq:neel-ini} implies that $\widehat S_{k,x}=0$
for odd $k$. Thus we can restrict ourselves to even $k$.
For $k$ even we have to distinguish the cases of even $x$ and
odd $x$. We define $S^{e/o}_{k,x}$, where now $x\in\mathbb{Z}$
labels the ``unit cell'' containing the sites $2x,2x+1$. These
$S^{e/o}_{k,x}$ satisfy the set of equations
\begin{align}
\label{eq:1}
s\widehat S_{k,x}^{e}-\delta_{k,2x}&=
i[\widehat S_{k,x}^o+\widehat S_{k,x-1}^o]-
\frac{\gamma^-}{2}\delta_{x,0}\widehat S^{e}_{k,x}\\
\label{eq:2}
s\widehat S_{k,x}^o&=i[\widehat S_{k,x}^e+\widehat S_{k,x+1}^e].
\end{align}
We define the Fourier transforms as
\begin{equation}
\label{eq:ft}
\widehat S_{k,q}^{e/o}=\sum_{x=-\infty}^\infty\widehat S^{e/o}_{k,x}e^{-iqx}.
\end{equation}
Taking the Fourier transform in~\eqref{eq:1} and~\eqref{eq:2}
we obtain
\begin{align}
\label{eq:lap-ff-1}
s\widehat S^e_{k,q}-
e^{-i q k/2}&=i\widehat S_{k,q}^o(1+e^{-iq})-\frac{\gamma^-}{2}\widehat S^e_{k,x=0}\\
\label{eq:lap-ff-2}
s\widehat S^o_{k,q}&=i\widehat S_{k,q}^e(1+e^{iq})\, .
\end{align}
Similar manipulations as in section~\ref{sec:ferro} yield
\begin{multline}
\label{eq:fin-3}
\widehat S^e_{k,x}=
\frac{1}{\sqrt{s^2+4}}\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|k-2x|}\\
-\frac{\gamma^-}{2}\frac{1}{(\gamma^-/2+\sqrt{s^2+4})\sqrt{s^2+4}}
\Big(\frac{2i}{s+\sqrt{s^2+4}}\Big)^{|k|+2|x|}\, ,
\end{multline}
and
\begin{equation}
\label{eq:fin-4}
S^o_{k,x}(t)=i\int_0^{t}d\tau\, [S^e_{k,x}(\tau)+
S^e_{k,x+1}(\tau)].
\end{equation}
Eq.~\eqref{eq:fin-3} is the same as for the quench from the
ferromagnetic state (cf.~\eqref{eq:f-final}) discussed
in section~\ref{sec:ferro}, after redefining $x\to 2x$, i.e.,
\begin{equation}
S^e_{k,x}=S_{k,2x}^{|F\rangle},
\end{equation}
with $S_{k,x}^{|F\rangle}$ given by~\eqref{eq:f-final}.
One can use~\eqref{eq:asyJ} and~\eqref{eq:asyK} to obtain the correlators
$G_{x,y}$ in the space-time scaling limit. We now discuss the dynamics of the
density profile. We restrict ourselves to even sites, i.e., to $n^e_{x,t}$.
This is because translation invariance is restored
by the dynamics at long times and we expect the result for odd sites
to be the same.
We now have
\begin{equation}
n^e_{x,t}=\sum_{k=-\infty}^\infty |S^e_{2k,x}|^2.
\end{equation}
One obtains
\begin{equation}
\label{eq:ne-den}
n^e_{x,t}=\frac{1}{2}-\frac{1}{2}\frac{\gamma^-}{\pi}\int_{\arcsin(x/(2t))}^{\pi/2}dq
\frac{|v_q|}{\big(\frac{\gamma^-}{2}+|v_q|\big)^2}.
\end{equation}
Note that this is the result obtained for the quench from the
fully-filled state (see section~\ref{sec:ferro}) divided by two. In addition, once again, the
density profile is singular at $x=0$.
The value of the density at $x=0$ is half of that found in~\eqref{eq:2f}.
In the absence of dissipation, i.e., for $\gamma^-=0$,
the fermionic density is uniform and is given as $n_{x,t}=1/2$.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{den_c_profile_Neel.pdf}
\caption{ Density profile $n_{x,t}$ in a free fermion chain with local
losses: Dynamics starting from the N\'eel state. We plot the shifted
density $1/2-n_{x,t}$ versus $x/(2t)$.
The symbols are exact numerical data for $\gamma^-=5$ and $t=98$ and
$t=100$. Note the strong oscillations with time.
Lines are the analytic results in the space-time scaling limit.
}
\label{fig2:Neel}
\end{figure}
For strong dissipation, instead,
one obtains
\begin{equation}
n^e_{x,t}=\frac{1}{2}-\frac{1}{\gamma^-}\frac{4}{\pi}\sqrt{1-
\frac{x^2}{4t^2}},
\end{equation}
which is reminiscent of the Wigner semicircle law in
Eq.~\eqref{eq:wigner}. It is useful to compare the results in~\eqref{eq:ne-den}
with numerical data.
We present some benchmarks in Fig.~\ref{fig2:Neel}.
In the figure we show $1/2-n_{x,t}$ versus $x/(2t)$. The symbols are
numerical results obtained by using~\eqref{eq:G}. We only show data for
$\gamma^-=5$. Strong oscillating corrections are present. They
disappear in the long time limit $t\to\infty$. Similar corrections are
also present in the unitary case, i.e., without dissipation. The continuous
line is the analytic result in the space-time scaling limit. Despite the strong
oscillations the agreement with the numerical data is satisfactory.
\section{Quenches from inhomogeneous initial states}
\label{sec:ness}
In this section we address the effect of losses in out-of-equilibrium
dynamics starting from inhomogeneous initial states. We first discuss
the so-called domain-wall quench in subsection~\ref{sec:DW}.
This can be straightforwardly treated by using the results of section~\ref{sec:warm-up}.
We then discuss the generic situation in which the initial state
is obtained by joining two Fermi seas with different fillings
in subsection~\ref{sec:ness1}. We show that both the fermionic
density and the correlation functions admit
a simple hydrodynamic picture in the space-time scaling limit.
\subsection{Domain-wall quench}
\label{sec:DW}
Let us consider the domain-wall initial state, in which the left part of the chain
is fully filled, and the right one is empty. This situation has
been intensely investigated in the
past~\cite{antal-1999,bettelheim-2012,gobert-2005,allegra-2016,dubail-2017,collura-2018}.
The full out-of-equilibrium dynamics ensuing from the domain-wall state
is obtained by a slight modification of the method employed in section~\ref{sec:warm-up}.
The same ansatz as in~\eqref{eq:ansatz} holds true, with $S_{k,x}$
satisfying~\eqref{eq:S-eq}. The initial condition for $S_{k,x}$ is now
\begin{equation}
\label{eq:dw-ini}
S_{k,x}(0)=\delta_{k,x}\Theta(-k),
\end{equation}
where the Heaviside theta function $\Theta(-k)$ takes into account that
at $t=0$ only the left part of the chain is fully occupied with fermions.
In taking the Laplace and Fourier transforms of~\eqref{eq:ansatz}, we
distinguish the case of $k\ge 0$ and $k<0$, obtaining
\begin{align}
s\widehat S_{k,q}-e^{-iqk}&=2i\widehat S_{k,q}\cos(q)-\frac{\gamma^-}{2}
\widehat S_{k,x=0}\quad\textrm{for}\, k<0\\
s\widehat S_{k,q}&=2i\widehat S_{k,q}\cos(q)-\frac{\gamma^-}{2} \widehat S_{k,x=0}
\quad\textrm{for}\, k\ge0.
\end{align}
The solution of the system above is straightforward and gives
\begin{align}
\widehat S_{k,x}=\widehat S_{k,x}^{|F\rangle}\quad&\textrm{for}\,k<0\, ,\\
\widehat S_{k,x}=0\quad&\textrm{for}\, k\ge 0,
\end{align}
where $\widehat S_{k,x}^{|F\rangle}$ (cf.~\eqref{eq:f-final})
is the same as for the quench from the fully-occupied state (see section~\ref{sec:ferro}).
Let us consider the density profile. For $x>0$ we obtain
\begin{equation}
\label{eq:e1}
n_{x,t}=\frac{1}{\pi}\int_{\arcsin(x/(2t))}^{\pi/2}dq
\frac{|v_q|^2}{\big(\frac{\gamma^-}{2}+|v_q|\big)^2}.
\end{equation}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{den_c_profile_DW.pdf}
\caption{ Density profile $n_{x,t}$ in a free fermion chain with localized
losses: Dynamics starting from the domain-wall state. We plot $n_{x,t}$ versus
$x/(2t)$. The symbols are exact
numerical data. Lines are the analytic results in the
space-time scaling limit.
}
\label{fig3:DW}
\end{figure}
For $x<0$ the density reads as
\begin{multline}
\label{eq:e2}
n_{x,t}=
\frac{1}{\pi}\int^{\pi/2}_{\arcsin(x/(2t))}dq\\+
\frac{(\gamma^-)^2}{4}\frac{1}{\pi}\int^{\pi/2}_{\arcsin(|x|/(2t))}dq\frac{1}
{(\frac{\gamma^-}{2}+|v_q|)^2}.
\end{multline}
Here $v_q$ is the fermion group velocity~\eqref{eq:v-k}.
Clearly, from~\eqref{eq:e1} and~\eqref{eq:e2}
for $\gamma^-\to0$ one recovers the expected result in the
absence of dissipation. This corresponds to the first term
in~\eqref{eq:e2}. Furthermore, in the limit of strong dissipation
$\gamma^-\to\infty$, the result is, again, reminiscent of the Wigner semicircle
law. In particular, for $x<0$, from~\eqref{eq:e2}, one obtains that
$n_{x,t}=1-8/(\gamma^-\pi)\sqrt{1-x^2/(4t^2)}$.
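As a further consistency check of Eqs.~\eqref{eq:e1} and~\eqref{eq:e2}, for $\gamma^-\to0$ both must reduce to the well-known dissipationless domain-wall profile $n_{x,t}=\arccos(x/(2t))/\pi$. The sketch below (our illustration, with $v_q=2\sin(q)$; grid and test points are arbitrary choices) verifies this by quadrature:

```python
import numpy as np

def midpoint(f, a, b, n=20000):
    """Midpoint-rule quadrature of f on [a, b]."""
    q = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.mean(f(q)) * (b - a)

def n_right(xi, gamma):
    """eq:e1, valid for xi = x/(2t) in (0, 1)."""
    v = lambda q: 2 * np.sin(q)
    return midpoint(lambda q: v(q)**2 / (gamma / 2 + v(q))**2,
                    np.arcsin(xi), np.pi / 2) / np.pi

def n_left(xi, gamma):
    """eq:e2, valid for xi in (-1, 0); the first integral has unit integrand."""
    v = lambda q: 2 * np.sin(q)
    free = (np.pi / 2 - np.arcsin(xi)) / np.pi
    diss = gamma**2 / 4 * midpoint(lambda q: 1 / (gamma / 2 + v(q))**2,
                                   np.arcsin(-xi), np.pi / 2) / np.pi
    return free + diss

for xi in (0.2, 0.6):
    assert abs(n_right(xi, 1e-8) - np.arccos(xi) / np.pi) < 1e-5
    assert abs(n_left(-xi, 1e-8) - np.arccos(-xi) / np.pi) < 1e-5
```

The same quadratures, at finite $\gamma^-$, reproduce the continuous lines in Fig.~\ref{fig3:DW}.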
In Fig.~\ref{fig3:DW} we compare~\eqref{eq:e1} and~\eqref{eq:e2}
with numerical results obtained from~\eqref{eq:G}. We report data for
strong dissipation $\gamma^-=5$ and weak one $\gamma^-=0.5$ and several
times. The data show a perfect agreement when plotting $n_{x,t}$
versus $x/(2t)$. The scaling functions are consistent with the
numerical results in the space-time scaling limit.
Similarly to the quench from the N\'eel state we should stress that by
using~\eqref{eq:asyJ} and~\eqref{eq:asyK} it is possible to obtain
the behavior of the generic fermionic correlator $G_{x,y}$ in the
space-time scaling limit with arbitrary $\xi_x=x/(2t)$ and $\xi_y=y/(2t)$.
Finally, we also stress that~\eqref{eq:e2} can be rederived as a particular
case of the double Fermi seas expansion that we will discuss in the
next section.
\subsection{Inhomogeneous Fermi seas}
\label{sec:ness1}
In this section we discuss the situation in which two semi-infinite
chains [see Fig~\ref{fig0:cartoon}(b)] are prepared in two Fermi
seas at different fillings $k_F^l$ and $k_F^r$.
The quench protocol is as follows.
The two chains are prepared in the ground state
of~\eqref{eq:ham} with different fermionic densities, and
with {\it periodic} boundary conditions.
At $t=0$ the two chains are joined together. Note that due to
the initial periodic boundary conditions on the two chains,
this involves a ``cut and glue''
operation. The situation in which the two initial systems
have open boundary conditions can be treated in a similar way,
although we expect the out-of-equilibrium dynamics not to be
dramatically affected by the choice of the boundary conditions.
In the absence of dissipation, the out-of-equilibrium
dynamics starting from two open chains that are joined together
was obtained in Ref.~\onlinecite{viti-2016}, in the space-time
scaling limit.
Note that by fixing $k_F^l=\pi$ and $k_F^r=0$, one obtains
the domain-wall quench (see section~\ref{sec:DW}). Instead,
for $k_F^r=0$ and $k_F^l=\pi/2$ one has the so-called geometric
quench~\cite{mossel-2010}, in which the ground state of a
chain is allowed to expand into the vacuum.
It is straightforward to derive the initial correlation matrix
as
\begin{multline}
\label{eq:G0}
G_{x,y}(0)=\frac{\sin(k_F^r(x-y))}{\pi(x-y)}\Theta(x)\Theta(y)\\
+\frac{\sin(k_F^l(x-y))}{\pi(x-y)}\Theta(-x)\Theta(-y),
\end{multline}
where $\Theta(x)$ is the Heaviside theta function.
Equation~\eqref{eq:G0} is conveniently rewritten as
\begin{multline}
\label{eq:G-par0}
G_{x,y}(0)=\frac{1}{2\pi}\int_{-k_F^l}^{k_F^l} dk e^{i k(x-y)}\Theta(-x)\Theta(-y)\\
+\frac{1}{2\pi}\int_{-k_F^r}^{k_F^r}dk e^{i k(x-y)}\Theta(x)\Theta(y).
\end{multline}
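The equivalence of Eq.~\eqref{eq:G0} and Eq.~\eqref{eq:G-par0} rests on the identity $\sin(k_F d)/(\pi d)=\frac{1}{2\pi}\int_{-k_F}^{k_F}dk\,e^{ikd}$, with $d=x-y$. This is elementary, but can also be confirmed by quadrature (a sketch of ours; the grid and test values are arbitrary):

```python
import numpy as np

def fermi_kernel(d, kF, n=4096):
    """(1/2pi) int_{-kF}^{kF} dk e^{ikd}, computed with the midpoint rule."""
    k = -kF + (np.arange(n) + 0.5) * 2 * kF / n
    return np.mean(np.exp(1j * k * d)) * kF / np.pi

for d in (1, 2, 7):
    for kF in (np.pi / 2, np.pi / 3):
        assert abs(fermi_kernel(d, kF) - np.sin(kF * d) / (np.pi * d)) < 1e-6
```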
Crucially, Eq.~\eqref{eq:G-par0} suggests that we can
parametrize $G_{x,y}$ as
\begin{equation}
\label{eq:G-par}
G_{x,y}=\frac{1}{2\pi}\int_{-k_F^l}^{k_F^l} dk S^l_{k,x}\bar S^l_{k,y}+
\frac{1}{2\pi}\int_{-k_F^r}^{k_F^r} dk S^r_{k,x}\bar S^r_{k,y},
\end{equation}
where $S_{k,x}^{l/r}$ have to be determined. Clearly, the
ansatz~\eqref{eq:G-par} is similar to the ansatz~\eqref{eq:ansatz} used in section~\ref{sec:warm-up}.
After substituting Eq.~\eqref{eq:G-par} in~\eqref{eq:one}, we
obtain that $S_{k,x}^{\scriptscriptstyle l/r}$ satisfy~\eqref{eq:S-eq}.
The initial conditions are given as
\begin{equation}
\label{eq:lr-ini}
S^{r/l}_{k,x}(0)=e^{i k x}\Theta(\pm x),
\end{equation}
where the plus and minus signs are for $S^r_{k,x}$ and $S^l_{k,x}$,
respectively. The Laplace and Fourier transforms of $S^{l/r}_{k,x}$ read
\begin{equation}
\label{eq:S-split}
\widehat S_{k,q}^{l/r}=\widehat S_{k,q}^{l/r,U}
+\widehat S_{k,q}^{l/r,D},
\end{equation}
where we separated the unitary part from the
contribution of the dissipation, as stressed by
the superscripts $U$ and $D$ in~\eqref{eq:S-split}.
Here we defined
\begin{align}
\label{eq:gq-un-1}
\widehat S_{k,q}^{l,U}&=
\frac{1}{s-2i\cos(q)}\frac{1}{1-e^{i(q-k+i0)}}\\
\label{eq:gq-un-2}
\widehat S_{k,q}^{r,U}&=
-\frac{1}{s-2i\cos(q)}\frac{1}{1-e^{i(q-k-i0)}}\\
\label{eq:gq-dis-1}
\widehat S_{k,q}^{l,D}&=
\int_{-\pi}^{\pi}
\frac{dp}{2\pi}\frac{Z(p)}{1-e^{i(p-k+i0)}}\\
\label{eq:gq-dis-2}
\widehat S_{k,q}^{r,D}&=
-\int_{-\pi}^{\pi}
\frac{dp}{2\pi}\frac{Z(p)}{1-e^{i(p-k-i0)}}.
\end{align}
The function $Z(p)$ is defined as
\begin{equation}
\label{eq:Zp}
Z(p)=-\frac{\frac{\gamma^-}{2}}{\frac{\gamma^-}{2}+\sqrt{s^2+4}}
\frac{\sqrt{s^2+4}}{s-2i\cos(p)}\frac{1}{s-2i\cos(q)}.
\end{equation}
Note that $Z(p)$ depends also on $q$ and $s$, although we leave this
dependence implicit.
The terms $\pm i0$ in the equations above are convergence factors, and
their sign is chosen to impose
the $\Theta(\pm x)$ in the initial conditions for $S_{k,x}^{l/r}$ (cf.~\eqref{eq:lr-ini}).
From~\eqref{eq:G-par} it is clear that in order to determine $G_{x,y}$
one has to compute the integrals ${\mathcal I}^l$ and ${\mathcal I}^r$ defined as
\begin{align}
\label{eq:int-l}
& {\mathcal I}^l=
\frac{1}{4\pi^2}\int_{-k^l_F}^{k^l_F}\frac{dk}{(1-e^{i(p-k+i0)})(1-e^{-i(q-k-i0)})}\\
\label{eq:int-r}
&{\mathcal I}^r=
\frac{1}{4\pi^2}\int_{-k^r_F}^{k^r_F}\frac{dk}{(1-e^{i(p-k-i0)})(1-e^{-i(q-k+i0)})}.
\end{align}
Similar integrals were discussed in Ref.~\onlinecite{viti-2016}.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{den_c_profile_FS.pdf}
\caption{ Density profile $n_{x,t}$ in a free fermion chain with localized
losses with rate $\gamma^-$: Dynamics starting from the
inhomogeneous state (see Fig.~\ref{fig0:cartoon})
obtained by joining two Fermi seas with $k_F^l=\pi/2$ and $k_F^r=\pi/3$.
The symbols are exact numerical data for $\gamma^-=0.5$. Lines are the
analytic results in the space-time scaling limit. Inset: The case without
dissipation. Note the formation of the Non Equilibrium Steady State (NESS)
at the interface between the two chains.
}
\label{fig4:two-FS}
\end{figure}
The integration over $k$ in Eqs.~\eqref{eq:int-l}-\eqref{eq:int-r} can be performed easily in the
complex plane. The derivation is as in Ref.~\onlinecite{viti-2016},
and we do not report it here. We obtain
\begin{multline}
\label{eq:inte}
4\pi^2{\mathcal I}^{l/r}=
\frac{i}{1-e^{i(p-q\pm i0)}}
\Big[\ln\frac{e^{i k_F^{l/r}}-e^{i q}}{e^{-i k_F^{l/r}}-e^{iq}}\\
-\ln\frac{e^{i k_F^{l/r}}-e^{i p}}{e^{-i k_F^{l/r}}-e^{i p}}\mp 2\pi i\chi^{l/r}(p)\Big],
\end{multline}
where the terms with $i0$ and $-i0$ in the exponential in~\eqref{eq:inte}
correspond to ${\mathcal I}^l$ and ${\mathcal I}^r$, respectively.
Here the function $\chi^{l/r}(p)$ is one if $p$ is in the interval
$[-k^{l/r}_F,k^{l/r}_F]$, while it is zero otherwise.
The next step is to determine the large $x$ behavior of
$S_{k,x}^{\scriptscriptstyle l/r,D}$.
This requires calculating the inverse Fourier transform of
$Z(p)$ with respect to $q$ (cf.~\eqref{eq:Zp}) and its
inverse Laplace transform with respect to $s$. More precisely,
one has to determine the asymptotic behavior for large $x,t$ of
\begin{equation}
\label{eq:F-def}
F_x^D:={\mathcal L}^{-1}\Big(\frac{1}{2\pi}\int_{-\pi}^{\pi} dq Z(p)e^{i q x}\Big)\, .
\end{equation}
The derivation is reported in Appendix~\ref{sec:asy}.
Here we quote the final result, which reads
\begin{equation}
\label{eq:ref-c}
F_x^D(p)=\chi_{x}e^{2it\cos(p)+i |x||p|}r(p),
\end{equation}
Here $\chi_{x}$ is defined as
\begin{equation}
\label{eq:chi-def}
\chi_x(p):=\Theta\Big(|v_p|-\frac{|x|}{t}\Big),
\end{equation}
with $v_p$ the group velocity of the fermions (cf.~\eqref{eq:v-k}).
In~\eqref{eq:ref-c} we defined the reflection amplitude
$r(p)$ as
\begin{equation}
\label{eq:r}
r(p):=-\frac{\gamma^-}{2}\frac{1}{\frac{\gamma^-}{2}+|v_p|}.
\end{equation}
Note that $r(p)$ appears in the scattering problem of a plane wave
with a delta potential with imaginary strength~\cite{burke-2020}.
Finally, we discuss the behavior of $G_{x,y}$ in the space-time
scaling limit for $t,x,y\to\infty$ with $x/(2t),y/(2t)$ fixed
and $|x-y|/t\to0$.
Let us start by discussing the different contributions
in Eqs.~\eqref{eq:gq-un-1}--\eqref{eq:gq-dis-2}.
We first consider the unitary contribution
\begin{multline}
\label{eq:statio}
\frac{1}{2\pi}\int_{-k_F^l}^{k_F^l} dk S_{k,x}^{l,U}\bar S_{k,y}^{l,U}=\\
\frac{1}{2\pi}\int_{-\pi}^\pi dp dq e^{2it\cos(p)-2it\cos{q}+i p x-i q y}\,{\mathcal I}^l(p,q).
\end{multline}
The analysis is essentially the same as in Ref.~\onlinecite{viti-2016}.
Let us first consider the case with $x,y>0$.
We employ the standard stationary phase approximation~\cite{wong}.
In the large $t,x,y$ limit the stationary points in the
double integral in~\eqref{eq:statio} satisfy the equations
\begin{align}
\label{eq:sp-1}
&-2t\sin(p)+x=0\\
\label{eq:sp-2}
&-2t\sin(q)+y=0
\end{align}
As is clear from~\eqref{eq:sp-1} and~\eqref{eq:sp-2},
in the space-time scaling, the integral~\eqref{eq:statio}
is dominated by the region with $p\to q$. Thus, we
define $K:=(p+q)/2$ and $Q:=(p-q)$. In the limit $Q\to0$ we have
\begin{equation}
\label{eq:I-simp}
{\mathcal I}^{l/r}(p,q)=\mp\frac{1}{2\pi i}\frac{1}{Q\pm i0}\Theta(k_F^{l/r}-|K|),
\end{equation}
where the term with $Q+i0$ refers to ${\mathcal I}^{l}$, and the one
with $Q-i0$ to ${\mathcal I}^{r}$. By combining~\eqref{eq:I-simp}
with the well-known formula
\begin{equation}
\label{eq:well-known}
\frac{1}{2\pi i}\int_{-\infty}^\infty dQ \frac{e^{i Q x}}{Q\mp i0}=\pm\Theta(\pm x)
\end{equation}
we obtain the relatively simple result
\begin{multline}
\label{eq:b-1}
\frac{1}{2\pi}\int_{-k_F^l}^{k_F^l} dk S_{k,x}^{l,U}\bar S_{k,y}^{l,U}=\\
\int_{-k_F^l}^{k_F^l}\frac{dK}{2\pi}
e^{i K(x-y)}\Theta\Big(2t\sin(K)-\frac{x+y}{2}\Big)\, .
\end{multline}
This coincides with the result in Ref.~\onlinecite{viti-2016}.
The derivation of the remaining terms entering in the definition
of $G_{x,y}$ (cf.~\eqref{eq:G-par} and~\eqref{eq:S-split})
is similar although more cumbersome due to the presence of the absolute
values $|p|$ and $|q|$ in the integrands. We illustrate the main steps of
the derivations in Appendix~\ref{sec:app-tech}.
We obtain
\begin{multline}
\label{eq:FS-final}
G_{x,y}(t)=
\int_{-k_F^l}^{k_F^l}\frac{dK}{2\pi}
\Big\{e^{i K(x-y)}\Theta\Big(2t\sin(K)-\frac{x+y}{2}\Big)\\+
e^{i K(|x|-|y|)}\Theta(K)(\Theta(x)+\Theta(y))
\chi_{x}\chi_{y}r\\
+\Theta(K)e^{i K(|x|-|y|)}
\chi_{x}\chi_{y}r^2
\Big\}+l\leftrightarrow r,
\end{multline}
with $\chi_x$ as defined in~\eqref{eq:chi-def}. Note that
$\chi_{x},\chi_{y}$ and $r$ are functions of $K$.
Here the last term $l\leftrightarrow r$ is obtained by changing
the integration boundaries as $k_F^l\to k_F^r$, and by replacing
$K\to-K$ and $x,y\to-x,-y$ in the integrand in~\eqref{eq:FS-final}.
Eq.~\eqref{eq:FS-final} holds only in the space-time (hydrodynamic) limit
$|x|,|y|,t\to\infty$ with fixed $x/(2t)\approx y/(2t)$.
Note that, similarly to the previous sections, the
correlation matrix $G_{x,y}$ is singular as $x,y\to0$.
This happens because of rapidly oscillating terms,
which can be neglected in the limit $x\to\infty$ but
not at $x\approx 0$. In the region $x/t,y/t\to0$ one obtains
\begin{multline}
\label{eq:FS-final-1}
G_{x,y}=\int_{0}^{k_F^l}\frac{dK}{2\pi}(e^{iKx}+r e^{i K|x|})
(e^{-iK y}+r e^{-i K|y|})\\
+\int_{-k_F^r}^0\frac{dK}{2\pi}(e^{iKx}+r e^{i K|x|})
(e^{-iK y}+r e^{-i K|y|}).
\end{multline}
Before discussing the numerical checks of~\eqref{eq:FS-final} it is
useful to address its physical interpretation. To this purpose we
focus on the dynamics of the fermionic density $G_{x,x}(t)$.
Equation~\eqref{eq:FS-final} is rewritten as
\begin{equation}
G_{x,x}=\int_{-\pi}^\pi \frac{dk}{2\pi}(n_{x,t}^l(k)+n_{x,t}^r(k)),
\end{equation}
where $n^{l/r}_{x,t}(k)$ describes the evolution of the fermions with
momentum $k$ originating from the initial left and right chains.
As is clear from~\eqref{eq:FS-final}, $n_{x,t}^l(k)$
is written as
\begin{multline}
\label{eq:den-simple}
n_{x,t}^l(k)=n_0^l(k)\Big[\Theta(-x)\Theta(k)(1+|r|^2\Theta(x+2t\sin(k)))\\
+\Theta(x)\Theta(k)\Theta(-x+2t\sin(k))|\tau|^2\\
+\Theta(-x)\Theta(-k)\Theta(-x+2t\sin(k))
\Big].
\end{multline}
In~\eqref{eq:den-simple} we defined the transmission amplitude $\tau(k)$ as
\begin{equation}
\label{eq:transmission}
\tau(k):=\frac{v_k}{\frac{\gamma^-}{2}+|v_k|},
\end{equation}
where $v_k$ is the fermion group velocity (cf.~\eqref{eq:v-k}).
Note that $|\tau|^2+|r|^2\ne1$, signaling that the evolution is not unitary.
Note that $\tau(k)$ coincides with the transmission amplitude for
the scattering with a delta potential with imaginary
strength~\cite{burke-2020}. In~\eqref{eq:den-simple}, $n_0^l$ is the
initial momentum distribution for the left chain $n_0^l=\Theta(k_F^l-|k|)$,
and $r$ the reflection amplitude defined in~\eqref{eq:r}.
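The structure of the scattering amplitudes can be made concrete with a minimal numerical sketch. The form of $\tau(k)$ is given in~\eqref{eq:transmission}; since~\eqref{eq:r} is not reproduced here, the sketch assumes the tight-binding group velocity $v_k=\sin(k)$ and the form $r(k)=-(\gamma^-/2)/(\gamma^-/2+|v_k|)$, chosen to satisfy $\tau-r=1$ for right-movers and the Zeno limit $r\to-1$ discussed below.

```python
import numpy as np

def amplitudes(k, gamma_minus):
    """Transmission and reflection amplitudes of the dissipative impurity.

    Illustrative assumptions: tight-binding group velocity v_k = sin(k)
    and r(k) = -(gamma/2)/(gamma/2 + |v_k|), consistent with the
    Zeno limit r -> -1, tau -> 0 for gamma -> infinity.
    """
    v = np.sin(k)
    tau = v / (gamma_minus / 2 + np.abs(v))
    r = -(gamma_minus / 2) / (gamma_minus / 2 + np.abs(v))
    return tau, r

tau, r = amplitudes(np.pi / 2, 0.5)   # band center: v_k = 1
print(tau, r)                          # -> 0.8, -0.2
# Non-unitarity: probability is lost at the impurity.
print(tau**2 + r**2)                   # ~0.68 < 1
# Quantum Zeno limit: strong losses decouple the two half-chains.
tau_z, r_z = amplitudes(np.pi / 2, 1e6)
print(tau_z, r_z)                      # ~0, ~-1
```

Note that $\tau^2+r^2<1$ at any finite $\gamma^-$, reflecting the loss of norm at the dissipative site.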
Now Eq.~\eqref{eq:den-simple} has a simple physical interpretation. The
first row in~\eqref{eq:den-simple} describes the fermions moving towards
the dissipative impurity and the scattered ones. The second row describes the
fermions that are transmitted to the chain on the right at $x>0$. Finally,
the last row accounts for the fermions that are in the left chain and
are moving with negative velocity.
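This row-by-row interpretation of~\eqref{eq:den-simple} can be checked with a few lines of code. The sketch below integrates the left-chain occupation over $k$ in the scaling limit, where $\Theta(\pm x+2t\sin k)\to\Theta(\pm\xi+\sin k)$ with $\xi=x/(2t)$; as before, the tight-binding velocity $v_k=\sin k$ and the form $r(k)=-(\gamma^-/2)/(\gamma^-/2+|v_k|)$ are illustrative assumptions rather than expressions taken from~\eqref{eq:r}.

```python
import numpy as np

def n_left(xi, k_F_l=np.pi / 2, gamma_minus=0.5, nk=20001):
    """Left-chain contribution to the density profile, eq. (den-simple),
    in the scaling limit xi = x/(2t).

    Illustrative assumptions: v_k = sin(k), with |r|^2 and |tau|^2
    built from r = -(gamma/2)/(gamma/2+|v|), tau = v/(gamma/2+|v|).
    """
    k = np.linspace(-k_F_l, k_F_l, nk)   # n_0^l(k) = Theta(k_F^l - |k|)
    v = np.sin(k)
    denom = gamma_minus / 2 + np.abs(v)
    r2 = (gamma_minus / 2) ** 2 / denom**2
    t2 = v**2 / denom**2
    # the three rows of eq. (den-simple)
    row1 = (xi < 0) * (k > 0) * (1.0 + r2 * (xi + v > 0))   # incoming + reflected
    row2 = (xi > 0) * (k > 0) * (-xi + v > 0) * t2          # transmitted
    row3 = (xi < 0) * (k < 0) * (-xi + v > 0) * 1.0         # left-movers
    dk = k[1] - k[0]
    return np.sum(row1 + row2 + row3) * dk / (2 * np.pi)

print(n_left(-2.0))  # outside the light cone: initial density k_F^l/pi = 0.5
print(n_left(2.0))   # far right: no transmitted fermions yet -> 0
print(n_left(-0.3))  # inside the light cone: depleted with respect to 0.5
```

Outside the light cone ($|\xi|>1$) the sketch reproduces the initial left-chain density, while inside the light cone the losses deplete it, in line with the depleted NESS discussed below.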
It is useful to check~\eqref{eq:FS-final} and~\eqref{eq:FS-final-1}
against exact numerical data.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{J_c_profile_FS.pdf}
\caption{Depleted Non-Equilibrium Steady State (NESS). We show the
dynamics of the off-diagonal correlator $\mathrm{Re}(G_{x,x+1})$ in the
free fermion chain with localized losses. The data are exact numerical
results for $\gamma^-=0.5$ for the infinite chain. We show results for the
inhomogeneous initial state obtained by joining two Fermi seas with Fermi
levels $k_F^l=\pi/2$ and $k_F^r=\pi/3$ (see Fig.~\ref{fig0:cartoon} (b)).
The curve is the result in the space-time scaling limit $t,x\to\infty$.
}
\label{fig5:two-FS}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{curr_c_profile_FS.pdf}
\caption{Depleted Non-Equilibrium Steady State (NESS). We show the
dynamics of the fermion current $\mathrm{Im}(G_{x,x+1})$ in the
free fermion chain with localized losses.
The initial state and the dissipation are the same as in Fig.~\ref{fig5:two-FS}.
The curve is the result in the space-time scaling limit $t,x\to\infty$.
}
\label{fig5b:two-FS}
\end{figure}
This is discussed in Fig.~\ref{fig4:two-FS}. We plot the fermionic
density $n_{x,t}$ versus the scaling variable $x/(2t)$. We consider the
case with $k_F^l=\pi/2$ and $k_F^r=\pi/3$. Interestingly,
in the absence of dissipation a Non-Equilibrium Steady State (NESS) emerges
at the interface between the two chains, with
a flat density profile for $-\sin(k_F^r)\le x/(2t)\le
\sin(k_F^r)$. The fermionic density in the flat region is the average density
$(k_F^l+k_F^r)/(2\pi)$. The case without dissipation is shown
in the inset of Fig.~\ref{fig4:two-FS}. As is clear from the
main figure, in the presence of losses the NESS is depleted. Moreover, the
density profile exhibits a clear asymmetry under $x\to-x$, with a
discontinuity at $x=0$. Cusp-like features are present at
$x/(2t)=\pm\sin(k_{F}^r)$. These are also present in the absence of dissipation~\cite{viti-2016}.
Finally, we report in the Figure the analytic result in the space-time
scaling limit~\eqref{eq:FS-final}. This is in perfect agreement with the
numerical data. Note that the agreement is also good for $x=0$. The
theoretical prediction for $x=0$ is given by~\eqref{eq:FS-final-1} and it is
reported as a circle in the Figure.
Deviations are present near the singularities related to the Fermi momenta,
similar to the non-dissipative case~\cite{viti-2016}, and are expected to
vanish in the limit $x,t\to\infty$. Finally, we should mention that
by imposing $k_F^l=k_F^r=k_F$ in~\eqref{eq:FS-final} one obtains the
space-time limit behavior of the correlator $G_{x,y}$ for the problem of
a uniform Fermi sea with Fermi level $k_F$. This is explicitly
discussed in Appendix~\ref{sec:equal}.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{den_c_profile_FS_gl.pdf}
\caption{Broken NESS with balanced gain and losses. We show the
fermionic density $n_{x,t}$ in the free fermion chain with
gains and losses. The data are exact numerical
results for $\gamma^-=\gamma^+=1$. The initial state is obtained by
joining two Fermi seas with Fermi levels $k_F^l=\pi/2$ and
$k_F^r=\pi/3$ (see Fig.~\ref{fig0:cartoon} (b)). The dashed line
is the NESS density $(k_F^l+k_F^r)/(2\pi)$ in the absence of dissipation. In the inset we
show the off-diagonal correlators $\mathrm{Re}(G_{x,x+1})$ and
$\mathrm{Im}(G_{x,x+1})$.
}
\label{fig6:FS-gl}
\end{figure}
In Fig.~\ref{fig5:two-FS} and Fig.~\ref{fig5b:two-FS} we show the behavior of the off-diagonal correlation
function $G_{x,x+1}$ in the space-time scaling limit. We present $\mathrm{Re}(G_{x,x+1})$ and $\mathrm{Im}(G_{x,x+1})$, which is the fermion current, separately.
Similar to the density (see Fig.~\ref{fig4:two-FS}), the exact numerical data
for $G_{x,x+1}$ obtained by numerically solving~\eqref{eq:one} collapse on the
same curve when plotted as a function of $x/(2t)$, at least for large enough $x,t$.
The scaling curve is perfectly described by the analytic result~\eqref{eq:FS-final}
and~\eqref{eq:FS-final-1}. Note that similar to Fig.~\ref{fig4:two-FS} a singularity
is present at $x=0$ in both Figures and the same cusp-like features at $x/(2t)=\sin(k_F^r)$ can be
observed. We observe that the current is zero for $|x|/(2t)\ge 1$,
as expected since for $|x|/(2t)\ge 1$ the system is at equilibrium. Note also that
$\mathrm{Im}(G_{x,x+1})<0$ for any $x$, and does not change sign across the singularity. Finally, we stress that upon increasing $\gamma^-$ the transport between
the two chains is suppressed, which is, again, a manifestation of the quantum Zeno
effect. This is nicely encoded in the values of the reflection
and transmission amplitudes: for $\gamma^-\to\infty$, we have $r(k)\to-1$ and $\tau(k)\to0$.
To conclude, we discuss an interesting effect that arises when the gain
processes are restored. We consider the case of balanced gain/loss dissipation,
i.e., with $\gamma^+=\gamma^-$. Our results are reported in Fig.~\ref{fig6:FS-gl}.
We focus on the density profile $n_{x,t}$ plotted versus $x/(2t)$. We fix
$k_F^l=\pi/2$, $k_F^r=\pi/3$, and $\gamma^-=\gamma^+=1$. Interestingly,
as is clear from Fig.~\ref{fig6:FS-gl}, the density profile now exhibits a
``broken'' NESS structure. Specifically, two flat regions are visible for
$-\sin(k^r_F)\le x\le0$ and $0<x\le\sin(k^r_F)$, with a step-like discontinuity
at $x=0$. The dashed line in the figure shows the NESS density in the absence of
dissipation. In the inset of Fig.~\ref{fig6:FS-gl} we report the behavior of the
off-diagonal correlation $\mathrm{Re}(G_{x,x+1})$ and $\mathrm{Im}(G_{x,x+1})$,
which show a nontrivial structure. We should mention that the behavior of the
correlator in the presence of both gain and loss can be derived analytically
by using the results in Appendix~\ref{app:gain-losses}, although we do not report
its explicit expression.
\section{Conclusions}
\label{sec:conc}
We have provided exact results on the out-of-equilibrium dynamics
of free-fermion systems subject to localized gain/loss dissipation, playing the role of a dissipative defect.
We considered different setups with both homogeneous and inhomogeneous initial states, and derived general results on the fermionic correlations $G_{x,y}(t)=\mathrm{Tr}(c^\dagger_x c_y\rho(t))$.
Our findings hold in the space-time scaling limit (hydrodynamic
limit) $x,y,t\to\infty$ with fixed ratios $\xi_x=x/(2t)$ and $\xi_y=y/(2t)$.
In this limit, we have shown that dissipation acts as an effective delta potential with
momentum-dependent reflection and transmission amplitudes.
For generic $\xi_x,\xi_y$, the fermionic correlation functions
depend on the details of the model. On the other hand, in the limit $\xi_x\approx\xi_y$, the
dynamics of fermionic correlations is completely characterized by the initial fermionic occupations and the
emergent reflection amplitude of the dissipative impurity.
Our results pave the way for several further studies. For instance, it would
be interesting to extend them to more complicated free-fermion models,
e.g., the transverse field Ising chain. Another interesting direction
concerns the investigation of the effect of localized dissipation in free-bosonic systems \cite{krapivsky-2020}.
An intriguing question is how local dissipation affects the
entanglement scaling at finite-temperature critical points. An ideal
setup to explore this is provided by the so-called quantum spherical model, for
which entanglement properties can be studied effectively~\cite{wald-2020,wald-2020a,alba2020c}.
Furthermore, it would be of interest to study how localized
gain/loss dissipations may affect entanglement spreading, for instance,
by studying the dynamics of the logarithmic negativity~\cite{vidal-2002,plenio-2005,calabrese-2012,shapourian-2019}
and comparing with the quasiparticle picture~\cite{alba2019quantum}. In the
absence of dissipation the entanglement dynamics has been investigated for both the geometric
quench~\cite{alba-2014,vicari-2012,nespolo-2013} and the domain-wall quench~\cite{sabetta-2013,dubail-2017,collura-2020}.
Moreover, it is important
to generalize our findings to the interacting case. Although
this is a challenging task, the
results in Ref.~\onlinecite{bouchoule-2020} provide first steps in this direction. It would be
also interesting to understand whether it is possible to incorporate the effects of
dissipation in the Conformal Field Theory framework put forward in Refs.~\onlinecite{allegra-2016,dubail-2017}
or in the quantum GHD~\cite{ruggiero-2020}. Finally, it would be important to clarify the correlation
structure of the broken NESS discussed in section~\ref{sec:ness1}, and to understand whether it can be
observed experimentally.
\section{Acknowledgements}
We would like to thank J\'er\^ome Dubail and Oleksandr Gamayun for useful discussions in
related projects.
V.A. acknowledges support from the European Research Council under ERC Advanced grant 743032 DYNAMINT. F.C. acknowledges support from the “Wissenschaftler-R\"uckkehrprogramm GSO/CZS” of the Carl-Zeiss-Stiftung and the German Scholars Organization e.V., as well as through the Deutsche Forschungsgemeinsschaft (DFG, German Research Foundation) under Project No. 435696605.
\section{Introduction}
The KArlsruhe Research Accelerator (KARA) is the storage ring of the accelerator test facility and synchrotron light source of the Karlsruhe Institute of Technology (KIT) in Germany.
A special short-bunch operation mode at \unit[1.3]{GeV} allows the reduction of the momentum compaction factor and therefore reduces the electron bunch length down to a few picoseconds.
The bunch-by-bunch feedback system~\cite{hertle_ipac14} enables custom filling patterns, from a single bunch to complex multi-bunch patterns.
Coherent synchrotron radiation (CSR) is emitted at wavelengths on the order of, or longer than, the length of the emitting structure. In the short-bunch operation mode, the effect of CSR plays an important role in the beam dynamics.
The compressed bunch length of a few picoseconds leads to the emission of CSR in the low-THz frequency range and, via the CSR impedance, to a modulation of the longitudinal phase space.
This modulation leads to substructures in the longitudinal particle distribution and is referred to as micro-bunching~\cite{stupakov2002}.
The time varying substructures lead to strong fluctuations of the emitted power in the THz range, the so-called bursting.
The minimum bunch-current at which this phenomenon occurs is called the bursting threshold. It depends strongly on the natural bunch length and therefore on various machine parameters~\cite{Bane_cai_stupakov2010}.
At KARA at KIT~\cite{asm_ipac13,brosi_ipac16} as well as at the MLS~\cite{ries_ipac12}, for very short bunches, bursting was additionally observed in a region below the main bursting threshold. Indications can also be seen in measurements at Diamond (see Fig.~6 of~\cite{diamond_bursting_2012}), though they were not discussed further there.
This instability is referred to as short bunch-length bursting (SBB) in the following.
\section{Theoretical description}\label{chap:theory}
The interaction of the electrons inside an electron bunch with their CSR radiation field, which causes the micro-bunching instability, can be described by the Vlasov-Fokker-Planck (VFP) equation~\cite{stupakov2002}.
The solution depends on how the effects of the conductive beam pipe are taken into account as boundary conditions for the emitted electromagnetic field.
In this contribution, the model to which the measurements will be compared treats the beam pipe as a pair of perfectly conducting parallel plates with a distance of $2h$.
The resulting solution for the main threshold of the instability was published in 2010 by Bane, Stupakov and Cai~\cite{Bane_cai_stupakov2010}. Using the dimensionless quantities $S_{\mathrm{CSR}}$ and $\Pi$ to parametrize CSR strength and shielding, the main threshold can be described by the simple linear scaling law~\cite{Bane_cai_stupakov2010}
\begin{align}
\left(S_{\mathrm{CSR}}\right)_{\mathrm{th}} &= 0.5 + 0.12\ \Pi \label{equ:threshold}\\
\mathrm{with~}
\Pi &= \frac{\sigma_{\mathrm{z},0} R^{1/2}}{ h^{3/2}}\label{equ:pi}\\
\mathrm{and~}
S_{\mathrm{CSR}} &= \frac{I_{\mathrm{n}} R^{1/3} }{ \sigma_{\mathrm{z},0}^{4/3}}\label{equ:S_csr}.
\end{align}
Here $\sigma_{\mathrm{z},0}$ is the natural bunch length, $R$ the bending radius, $h$ half of the spacing between the parallel plates and $I_{\mathrm{n}}$ the normalized bunch current:
$$I_{\mathrm{n}} = \frac{r_{\mathrm{e}} N_{\mathrm{b}}}{2 \pi \nu_{\mathrm{s},0} \gamma \sigma_{\delta,0}}=\frac{I_{\mathrm{b}} \sigma_{\mathrm{z},0}}{\gamma \alpha_{\mathrm{c}} \sigma_{\delta,0}^{2} I_{\mathrm{A}}}$$
with $N_{\mathrm{b}}$ the number of electrons, $I_{\mathrm{b}}$ the bunch current, $r_{\mathrm{e}}$ the classical electron radius, $\nu_{\mathrm{s},0}$ the nominal synchrotron tune, $\sigma_{\delta,0}$ the natural energy spread, $\alpha_{\mathrm{c}}$ the momentum compaction factor, $\gamma$ the Lorentz factor and $I_{\mathrm{A}}$ the Alfv\'en current\footnote{Alfv\'en current $I_{\mathrm{A}}=4\pi\varepsilon_{0}m_{\mathrm{e}}c^{3}/e= \unit[17045]{\mathrm{A}}$}.
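Combining Eqs.~(\ref{equ:threshold})--(\ref{equ:S_csr}) with the definition of $I_{\mathrm{n}}$, the scaling law can be inverted to estimate the threshold bunch current for given machine parameters. The sketch below does this numerically; the parameter values in the example ($\sigma_{\mathrm{z},0}$, $R$, $h$, $\gamma$, $\alpha_{\mathrm{c}}$, $\sigma_{\delta,0}$) are placeholder numbers for illustration only, not the measured KARA values.

```python
import numpy as np

def threshold_current(sigma_z0, R, h, gamma_lorentz, alpha_c, sigma_delta0,
                      I_A=17045.0):
    """Main bursting threshold from the linear scaling law
    (S_CSR)_th = 0.5 + 0.12 Pi, inverted for the bunch current.

    Lengths in meters, currents in ampere.  Note that the fit behind
    the scaling law works best for Pi > 3.
    """
    Pi = sigma_z0 * np.sqrt(R) / h**1.5                  # eq. (pi)
    S_th = 0.5 + 0.12 * Pi                               # eq. (threshold)
    I_n_th = S_th * sigma_z0 ** (4 / 3) / R ** (1 / 3)   # invert eq. (S_csr)
    # invert I_n = I_b sigma_z0 / (gamma alpha_c sigma_delta0^2 I_A)
    I_b_th = I_n_th * gamma_lorentz * alpha_c * sigma_delta0**2 * I_A / sigma_z0
    return Pi, I_b_th

# Placeholder parameters: sigma_z0 = 2 mm, R = 5.56 m, h = 16 mm,
# gamma ~ 2544 (1.3 GeV electrons), alpha_c = 5e-4, sigma_delta0 = 5e-4.
Pi, I_b = threshold_current(2e-3, 5.56, 16e-3, 2544.0, 5e-4, 5e-4)
print(Pi, I_b)   # threshold current on the sub-mA scale for these numbers
```

Note that $\Pi\propto h^{-3/2}$, so a wider chamber (weaker shielding) lowers the threshold in the dimensionless units.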
The natural bunch length is given by \cite{wiedemann_2013}
$$\sigma_{\mathrm{z},0}= \frac{\alpha_{\mathrm{c}}\, c\, \sigma_{\delta,0}}{2\pi f_{\mathrm{s}}}
\quad\mbox{with}\quad
f_{\mathrm{s}} = \sqrt{\frac{\alpha_{\mathrm{c}} \,f_{\mathrm{RF}} f_{\mathrm{rev}} \sqrt{e^{2}V_{\mathrm{RF}}^{2} - U_{0}^{2}}}{E\, 2 \pi}}$$
with the synchrotron frequency $f_\mathrm{s}$, the beam energy $E$, the RF frequency $f_{\mathrm{RF}}$,
the revolution frequency $f_{\mathrm{rev}}$, the RF peak voltage $V_{\mathrm{RF}}$ and
the radiated energy per particle and revolution $U_0$.
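The two formulas above translate directly into code. In the sketch below, $\alpha_{\mathrm{c}}$ and $V_{\mathrm{RF}}$ are taken from the values quoted later in the text, while $f_{\mathrm{rev}}$, $U_0$ and $\sigma_{\delta,0}$ are placeholder values for illustration; voltages and energies are expressed in eV so that $eV_{\mathrm{RF}}$ is numerically the RF voltage.

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light in m/s

def synchrotron_frequency(alpha_c, f_rf, f_rev, V_rf_eV, U0_eV, E_eV):
    """f_s from the formula above; voltages and energies in eV."""
    return np.sqrt(alpha_c * f_rf * f_rev
                   * np.sqrt(V_rf_eV**2 - U0_eV**2) / (2 * np.pi * E_eV))

def natural_bunch_length(alpha_c, sigma_delta0, f_s):
    """sigma_z0 = alpha_c * c * sigma_delta0 / (2 pi f_s)."""
    return alpha_c * C_LIGHT * sigma_delta0 / (2 * np.pi * f_s)

# alpha_c = 2.64e-4 and V_RF = 1100 kV are values quoted in the text;
# f_rev = 2.7 MHz, U0 = 30 keV and sigma_delta0 = 4.7e-4 are placeholders.
f_s = synchrotron_frequency(2.64e-4, 500e6, 2.7e6, 1.1e6, 3.0e4, 1.3e9)
sigma_z = natural_bunch_length(2.64e-4, 4.7e-4, f_s)
print(f_s)      # roughly 6.9 kHz with these illustrative numbers
print(sigma_z)  # sub-millimeter, i.e. a few picoseconds
```

The scalings are as expected from the formulas: $f_{\mathrm{s}}\propto\sqrt{\alpha_{\mathrm{c}}}$, so $\sigma_{\mathrm{z},0}\propto\sqrt{\alpha_{\mathrm{c}}}\,\sigma_{\delta,0}$ at fixed RF parameters.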
Equation~(\ref{equ:threshold}) was obtained from the results of a VFP-solver, based on an algorithm devised by Warnock and Ellison \cite{warnock_2000}.
It is the result of a linear fit to the simulated thresholds.
This linear scaling law fits best for high values of $\Pi$ ($\Pi > 3$); at lower values, the simulated thresholds are significantly higher than the fit.
Interestingly, for these low values of $\Pi$ the measured thresholds are described more accurately by the fit (see~\cite{brosi_prab16}) than by the thresholds obtained from the VFP simulations. This is the case for the VFP simulations in~\cite{Bane_cai_stupakov2010} as well as for the ones presented in the following, and can be attributed to the simplicity of the parallel-plates model.
The fit, leading to Eq.~(\ref{equ:threshold}), ignores a dip around $\Pi\approx0.7$, where the calculated thresholds deviate from the simple linear scaling law~\cite{Bane_cai_stupakov2010}.
A closer look~\cite{Bane_cai_stupakov2010} at the calculated energy spread at this dip
reveals a second unstable region in the bunch current, with a threshold below the one expected from the linear scaling law. For the values of $\Pi$ accessible at KARA, it is separated by a stable region from the main instability starting at the threshold described by Eq.~(\ref{equ:threshold}).
This new region corresponds to the short bunch-length bursting studied in the following.
The upper and lower limits of this additional region of instability are predicted to depend not only on the bunch length and thus the shielding parameter, but also on $\beta=1 / \left(2 \pi \, f_{\mathrm{s}}\, \tau_{\mathrm{d}}\right)$, which relates the synchrotron frequency $f_{\mathrm{s}}$ and the longitudinal damping time $\tau_{\mathrm{d}}$ \cite{Bane_cai_stupakov2010, Kuske_ipac13}. It is therefore termed a weak instability.
\section{Measurements}
\subsection{Measurement Principle}
The measurements presented in this paper were obtained with a broad-band quasi-optical Schottky barrier diode from ACST~\cite{acst_flyer}, operated at room temperature, which is sensitive in the spectral range from several \unit[10]{GHz} up to \unit[2]{THz} with peak sensitivity around \unit[80]{GHz}.
The THz radiation was detected at the Infrared2 Beamline, which provides synchrotron radiation from the entry edge of a dipole magnet~\cite{Mathis2003}.
To detect the fluctuations in the emitted THz radiation for each bunch in a multi-bunch filling pattern individually, the fast detector was combined with the ultra-fast DAQ system KAPTURE (KArlsruhe Pulse Taking and Ultrafast Readout Electronics)~\cite{caselle_ipac14}.
The KAPTURE system samples the detector response to the THz pulse of each bunch at four points~\cite{caselle_ibic14}.
In principle KAPTURE can sample the signal continuously with the rate of the RF frequency of KARA ($\approx$ \unit[500]{MHz}).
For this publication, the signal was recorded for every bunch at every 10th revolution during a period of one second, to limit the acquired data volume.
Figure~\ref{fig:decay} shows the characteristic patterns of the fluctuation frequencies of the emitted THz radiation for different bunch currents.
During the decay of the bunch current, the instability passes through different regimes and ends at the main bursting threshold (in Fig.~\ref{fig:decay} at $\approx$ \unit[0.2]{mA}).
\begin{figure}[!t]
\includegraphics*[width=0.5\textwidth]{figure1.png}
\caption{Spectrogram of the fluctuations of the THz intensity as a function of the decaying bunch current, showing the micro-bunching instability. It was obtained in a measurement lasting several hours while the bunch current decreased. No short bunch-length bursting occurs, as the bunch was not compressed strongly enough.}
\label{fig:decay}
\end{figure}
The combination of a custom filling pattern and a data acquisition system which facilitates the measurement of the THz signal of each bunch individually allows a reduction of the measurement time down to one second \cite{brosi_prab16}.
This so called snapshot measurement technique was used for measuring the bunch current dependence of the behavior as well as the threshold of the instability.
\subsection{Short Bunch-Length Bursting}
For most machine settings the beam is stable for all currents below the bursting threshold (see Fig.~\ref{fig:decay}).
Nevertheless, observations show that at KARA, for a momentum compaction factor $\alpha_{\mathrm{c}}\le2.64\times10^{-4}$ combined with high RF voltages ($V_{\mathrm{RF}}>\unit[1100]{kV}$), leading to a natural bunch length $\sigma_{\mathrm{z},0}\le\unit[0.723]{mm}\,\hat{=}\, \unit[2.43]{ps}$, an instability occurs again for bunch currents below the main bursting threshold, see Fig.~\ref{fig:snapshot}.
This spectrogram was obtained by a snapshot measurement within one second.
To compensate for the limited current resolution of this measurement method, the filling pattern was chosen in such a way that the region of interest with small bunch currents is sampled with a sufficient resolution.
This is visible in the limited bunch current resolution in the upper part of Fig.~\ref{fig:snapshot}.
The spectrogram shows the lower bound of the main bursting and the complete occurrence of the short bunch-length bursting.
This second region of instability occurred at bunch currents between \unit[0.038]{mA} and \unit[0.016]{mA} for the measured machine settings.
\begin{figure}[!tb]
\includegraphics*[width=0.5\textwidth]{figure2.png}
\caption{Snapshot spectrogram of the fluctuations of the THz intensity as a function of bunch current for a synchrotron frequency of \unit[6.55]{kHz}. Below the end of the micro-bunching instability (main bursting threshold) around \unit[0.052]{mA}, a second unstable region is clearly visible between \unit[0.038]{mA} and \unit[0.016]{mA}.
The current bins were chosen such that a high bunch current resolution in the region of the short bunch-length bursting was achieved. For this measurement approx. $115$ bunches were filled.}
\label{fig:snapshot}
\end{figure}
The frequencies of the intensity fluctuations are located below twice the synchrotron frequency ($2\times f_{\mathrm{s}} = 2\times \unit[6.55]{kHz}$ in Fig.~\ref{fig:snapshot}) and approach this frequency with decreasing bunch current. A second frequency line at the first harmonic of the intensity fluctuation is visible (below $4\times f_{\mathrm{s}}$).
\subsection{Results}
\begin{figure}[!bt]
\includegraphics*[width=0.5\textwidth]{figure3.png}
\caption{CSR strength vs. shielding for thresholds from snapshot measurements at different settings of the machine parameters, compared to the linear scaling law given by Eq.~(\ref{equ:threshold}) (line). The lower bound (orange discs) as well as the upper bound (blue triangles) of the short bunch-length bursting (SBB) are shown. The main bursting threshold is shown in red (squares) for machine settings where short bunch-length bursting occurred and in green (diamonds) for settings where it did not occur. The purple stars represent thresholds and bounds which were obtained from a full decay of a single bunch and not from snapshot measurements. The error bars display the one-standard-deviation uncertainties calculated from the measurement errors.}
\label{fig:Scsr_pi}
\end{figure}
Snapshot measurements of the lower current range, similar to Fig.~\ref{fig:snapshot}, were taken for different values of the momentum compaction factor and the natural bunch length by changing the magnet optics as well as the RF voltage. The scanned parameter range reached for $\alpha_{\mathrm{c}}$ from $9.94\times 10^{-3}$ down to $1.51\times 10^{-3}$ and for $V_\mathrm{RF}$ from $\unit[524]{kV}$ up to $\unit[1500]{kV}$.
The bunch currents at the lower and upper bound of the short bunch-length bursting as well as the main bursting threshold for each measurement are displayed in Fig.~\ref{fig:Scsr_pi} using the dimensionless parameters $S_{\mathrm{CSR}}$ and $\Pi$ (Eqs.~(\ref{equ:pi}) and (\ref{equ:S_csr})) following the notation of \cite{Bane_cai_stupakov2010}.
Bursting thresholds measured at machine settings, where no short bunch-length bursting occurs (more details see~\cite{brosi_prab16}), show that the main bursting threshold is described by Eq.~(\ref{equ:threshold}) and is independent of the occurrence of short bunch-length bursting.
The highest value of the shielding parameter $\Pi$ where the short bunch-length bursting occurs at KARA (right-most red square in Fig.~\ref{fig:Scsr_pi}) is $\Pi_{\mathrm{highest\ SBB}}=0.845\pm0.013$.
The smallest value of the shielding parameter where the short bunch-length bursting does not occur (left-most green diamond in Fig.~\ref{fig:Scsr_pi}) is at $\Pi_{\mathrm{no\ SBB}}=0.835\pm0.017$, and therefore smaller than $\Pi_{\mathrm{highest\ SBB}}$.
This small difference is expected and caused by the fact that the two values ($\Pi_{\mathrm{no\ SBB}}$ and $\Pi_{\mathrm{highest\ SBB}}$) were obtained for different combinations of momentum compaction factor and RF voltage with similar values of $\Pi$ but different values of $\beta$ ($\beta_{\mathrm{no\ SBB}}=2.59\times 10^{-3}$ and $\beta_{\mathrm{highest\ SBB}}=1.88\times 10^{-3}$).
As described in \cite{Kuske_ipac17}, the smaller $\beta$ is, the larger the range of $\Pi$ in which the weak instability occurs.
The overall limit agrees within uncertainties with the results obtained by Bane, Stupakov and Cai using their VFP-solver \cite{Bane_cai_stupakov2010}.
There, the authors observed a dip around $\Pi=0.7$, while the threshold for $\Pi=1$ is again on the theoretical calculated linear scaling law given by Eq.~(\ref{equ:threshold}).
Values below $\Pi=0.66$ were not accessible in our measurements, which precludes checking whether the short bunch-length bursting vanishes for even smaller values of the shielding parameter, as predicted by the calculations in \cite{Bane_cai_stupakov2010}.
\section{Simulations}
\begin{figure}[!bt]
\includegraphics*[width=0.45\textwidth]{figure4.png}
\caption{Simulated spectrogram showing the end of the micro-bunching instability (main bursting threshold) around \unit[54]{$\mu$A} as well as the short bunch-length bursting between \unit[36]{$\mu$A} and \unit[22]{$\mu$A}.
The frequencies are directly below two and four times the synchrotron frequency, similar to the frequencies observed in the corresponding measurements (see Fig.~\ref{fig:snapshot}).}
\label{fig:sim_spec}
\end{figure}
The upper and lower limits in bunch current of the short bunch-length bursting are expected to depend not only on the natural bunch length $\sigma_{\mathrm{z,0}}$ but also on $\beta=1 / \left(2 \pi \, f_{\mathrm{s}}\, \tau_{\mathrm{d}}\right)$ which relates the longitudinal damping time $\tau_{\mathrm{d}}$ and the synchrotron frequency $f_{\mathrm{s}}$. For the measurements presented here, the synchrotron frequency changes due to the different values of the RF-voltage and the momentum compaction factor, while the damping time stays constant. This means that $\beta$ is different for the different measurement points ranging in the presented measurements from
$\beta=1.13\times 10^{-3}$ to
$\beta=3.33\times 10^{-3}$. As the simulations in \cite{Bane_cai_stupakov2010} were carried out only for the fixed value $\beta=1.25\times 10^{-3}$, new simulations were done for each measurement point using exactly the parameters of the respective measurement.
The VFP-solver used for these additional simulations is presented in \cite{Kuske_ipac12} and a comparison between the simulation results and measurement done at MLS and BESSY is given in \cite{Kuske_ipac13}.
Figure~\ref{fig:sim_spec} shows a spectrogram calculated from the simulated phase space.
Similar to the measurements, a second region of instability corresponding to the short bunch-length bursting is visible between \unit[20]{$\mu$A} and \unit[41]{$\mu$A}, well below the main bursting threshold at \unit[54]{$\mu$A}. The dominant frequencies in this instability region are close to two and four times the synchrotron frequency, showing the same structure as the corresponding measurement (Fig.~\ref{fig:snapshot}).
The simulation also reveals that in the stable area between the short bunch-length bursting and the main bursting threshold, the energy spread equals the natural energy spread and is not increased as it is during the instability (see Fig.~\ref{fig:sim_spec}), confirming the first simulations done in \cite{Bane_cai_stupakov2010}.
The simulations yield thresholds which are higher by about $10\%$ in comparison to the linear scaling law~\cite{Bane_cai_stupakov2010} as discussed above (see Sec.~\ref{chap:theory}). The overall behavior is in good agreement with the measurements.
The CSR strength at the thresholds obtained from the VFP solver are shown in Fig.~\ref{fig:comp_Scsr_pi} (red triangles) as a function of the shielding parameter.
\section{Comparison}
\begin{figure*}[!tbh]
\includegraphics*[width=1\textwidth]{figure5.png}
\caption{CSR strength at the bursting thresholds as a function of the shielding parameter for measurements and VFP solver calculations for different machine settings. The measured area of instability is indicated as a light blue area and confined by the measured thresholds (blue discs, already shown in Fig.~\ref{fig:Scsr_pi}), with the error bars displaying the standard deviation error of each measurement. The red triangles show the results from the VFP solver calculations at the corresponding machine settings (red line to guide the eye). The gray line indicates the linear scaling law for the main bursting threshold given by Eq.~(\ref{equ:threshold}).
}
\label{fig:comp_Scsr_pi}
\end{figure*}
Figure~\ref{fig:comp_Scsr_pi} shows again clearly that, in the measurements as well as in the new VFP calculations, a range of $\Pi$ exists where unstable THz emission also occurs below the threshold given by the simple linear scaling law (Eq.~(\ref{equ:threshold})), as already shown in \cite{Bane_cai_stupakov2010}. Our measurements as well as the simulations show a stable area between the two regions of instability.
The range in $\Pi$ as well as in $S_{\mathrm{CSR}}$ where the short bunch-length bursting occurs depends on $\beta$. For the parameters of our measurements, the simulations give an upper limit for the occurrence of short bunch-length bursting of $\Pi=0.76$,
while the measurements show short bunch-length bursting up to $\Pi_{\mathrm{highest\ SBB}}=0.845\pm0.013$. There is thus a small range of $\Pi$ where short bunch-length bursting is observed in the measurements but not in the simulations.
Also the lower bound of the short bunch-length bursting in the CSR strength differs slightly between the calculations and the measurements. The measurements show instability at an even lower CSR strength (corresponding to a lower bunch current) than the calculations.
This could be related to the fact that, in general, the threshold values obtained by the simulation are systematically slightly higher than the measured ones. The average difference is \unit[7]{$\mu$A}, for main threshold currents ranging from \unit[40]{$\mu$A} to \unit[400]{$\mu$A}.
Lower threshold values in the measurements cannot be explained by insufficient sensitivity of the THz detectors, as this would result in an overestimation of the measured thresholds.
Also systematic influences on the measured thresholds due to multi bunch effects in the used snapshot measurements can be excluded, as thresholds measured in pure single-bunch decays agree with the ones from snapshot measurements (see Fig.~\ref{fig:Scsr_pi}).
A small discrepancy consistent with the one observed could also be caused by our measurement method:
in the presence of small fluctuations of the machine settings, the measurements yield the absolute floor of the corresponding thresholds,
while the simulation gives a value corresponding more to the average threshold for the machine settings.
Such fluctuations in the machine could occur in the RF-voltage and the current in the magnets, and thus the magnet optics and the momentum compaction factor.
Another potential source for deviations could come from assumptions used in the simulation. For example, the longitudinal damping time, an essential component in the solution of the VFP-equation, was obtained by beam dynamics calculations which did not include CSR.
Furthermore, as described above, the VFP calculations only consider the simple “parallel plates” model. The small discrepancy between the measured thresholds and the calculated ones could indicate additional impedance contributions.
For example, considering an additional geometric impedance for an aperture like a scraper leads to a slightly lower simulated threshold \cite{schoenfeldt_diss_18}. Also, the impedance of edge radiation is mainly resistive and would lead, if considered in the simulation, to a lower threshold.
Last but not least, a stronger CSR-interaction than expected from the simple circular orbit simulated could be caused by an interaction extending into the straights behind the dipoles.
\section{Summary}
The short bunch-length bursting observed at KARA for certain machine settings corresponds to the behavior observed in the results of the Vlasov-Fokker-Planck solver calculations first published by Bane, Stupakov and Cai~\cite{Bane_cai_stupakov2010}.
This second region of instability below the bursting threshold of the main micro-bunching instability occurred in the measurements for values of the shielding parameter smaller than $\Pi=0.85$. As the occurrence of short bunch-length bursting depends not only on the natural bunch length but also on the ratio of the synchrotron frequency and the longitudinal damping time, new VFP solver calculations were performed at the exact conditions of the measurements. This simulation result shows slightly higher values for the thresholds with an average difference of \unit[7]{$\mu$A} but the overall behavior is in good agreement with the measurements. The small deviations might be caused by the floor in the determination of the thresholds from the measurements, which would lead to an underestimation in the presence of small parameter fluctuations of the machine settings.
Another possible explanation is an additional impedance contribution or a stronger CSR interaction that is not covered by the simple model simulated. The latter could be caused by the interaction of the bunches with their CSR extending into the straights behind the dipoles.
\begin{acknowledgments}
We thank Karl Bane for his questions concerning the presence of this second region of instability at KARA.
We would like to thank the infrared group at KARA and in particular M. S\"upfle for their support during the beam times at the Infrared2 beam line. Further, we would like to thank the THz group for inspiring discussions.
This work has been supported by the German Federal Ministry of Education and Research (Grant No. 05K13VKA), the Helmholtz Association (Contract No. VH-NG-320) and by the Helmholtz International Research School for Teratronics (HIRST).
E. Blomley and J. Gethmann acknowledge the support by the DFG-funded Doctoral School ``Karlsruhe School of Elementary and Astroparticle Physics: Science and Technology''.
M. Brosi, J. Steinmann, and P. Sch\"onfeldt acknowledge the support of the Helmholtz International Research School for Teratronics (HIRST).
\end{acknowledgments}
\section{Introduction}
In life testing, progressively type-II censored order statistics serve to model component lifetimes in experiments where, upon each failure, a pre-fixed number of intact components is removed; see, e.g., \cite{BalAgg2000,BalCra2014}. Such withdrawals of operating components may be part of the experimental design for different reasons. For example, a researcher may wish to release capacities for another experiment or to save costs in situations where the ongoing operation of the components is expensive. For more motivational aspects of the model, we refer to \cite{Bal2007,Cra2017}.
A progressively type-II censored lifetime experiment is formally described as follows. Suppose that we put $n$ identical components on a lifetime test. At time of the $j$-th failure of some component, $1\leq j\leq m$, $R_j\in\mathbb{N}_0$ of the still operating components are (randomly) selected and removed from the experiment. Upon the $m$-th component failure, all remaining $R_m=n-m-\sum_{j=1}^{m-1}R_j$ components are removed. Thus, $m$ component failure times are recorded while $\sum_{j=1}^m R_j=n-m$ component failure times have been (progressively type-II) censored.
Note that conventional type-II right censoring is included in this setup by choosing $R_1=\dots=R_{m-1}=0$ and $R_m=n-m$. In that case, choosing $m=n$ corresponds to a complete sample from the underlying lifetime distribution.
Inferential results for the underlying lifetime distribution of progressively type-II censored order statistics have been widely studied. We refer to \cite{BalCra2014} for a summary providing many references; see also \cite{Cra2017}. Among others, confidence regions have been proposed for parameters, quantiles, and (point-wise) reliability of the lifetime distribution in different distribution families including exponential, Weibull, and Pareto models. Recent works on inference under progressive type-II censoring are provided, e.g., by \cite{BalHayLiuKia2015,DoeCra2019,MalKha2020} for the exponential distribution, and by \cite{BaiShiLiuLiu2019,MonKun2019} for other parametric distributions.
The scope of the present work is to develop, study, and compare confidence bands for the underlying cumulative distribution function (cdf) of progressively type-II censored order statistics. Such confidence bands are random sets in the two-dimensional plane, which contain the entire graph of the true cdf with a pre-specified probability. The bands may be interpreted as simultaneous confidence intervals around estimated survival probabilities, effectively addressing the multiple-testing problem. This is of particular relevance for applications in reliability, where a joint statistical statement about the failure probabilities of some unit at several time points is required.
Asymptotic confidence bands may be constructed based on a nonparametric estimator for the survival function (or cdf); see, e.g., \cite{Nai1984,HolMckYan1997}. However, in life testing, the acquisition of samples is typically costly, and small sample sizes may render asymptotic methods inapplicable while parametric models still allow for exact inference (provided they are properly specified). In this paper, we focus on a single sample from the two-parameter exponential distribution although the methodologies presented here also apply to the multi-sample case and other location-scale families. Explicit formulas are presented for various confidence bands satisfying a desired exact confidence level. Two of these confidence bands ($B_1$ and $B_4$) have been studied by \cite{SriKanWha1975} for the case of a complete sample of independent and identically distributed (iid) exponential random variables; here, the results are extended to a progressively type-II censored sample. Moreover, we show that the confidence band $B_4$ may be trimmed without reducing its coverage probability. Beyond that, we propose two more confidence bands ($B_2$ and $B_3$) and discuss their properties. As a by-product, our methodology also suggests a new explicit confidence region ($C''_4$) for the location-scale parameter of the exponential distribution.
The detailed outline of the paper is as follows.
In Section \ref{s:PCII}, the model of progressively type-II censored order statistics is introduced, and some properties are stated. Based on a sample of progressively type-II censored order statistics from a two-parameter exponential distribution, confidence bands for the underlying cdf are then proposed in Section \ref{s:bands}. Here, confidence bands are constructed via confidence regions for the parameters in Section \ref{ss:conf} and via Kolmogorov-Smirnov type statistics in Section \ref{ss:kst}. Explicit formulas for the boundaries and the coverage probabilities of the confidence bands are presented, and it is shown how to obtain the quantiles of the relevant distributions by simulation. In Section \ref{s:comp}, the bands are illustrated and compared with respect to band width and area for a classical data set. Extensions of the results are discussed in Section \ref{s:general}, where we focus on related models for ordered data in Section \ref{ss:sos} and consider other underlying location-scale families in Section \ref{ss:otherlsf}. Section \ref{s:conclusion} finally concludes by highlighting the main findings.
\section{Progressively type-II censored order statistics}\label{s:PCII}
For $m,n\in\mathbb{N}$ with $1<m\leq n$, we assume to have a sample $X_{1:m:n}\leq\dots\leq X_{m:m:n}$ of progressively type-II censored order statistics based on some absolutely continuous cdf $F$ with corresponding density function $f$ and the censoring scheme $(R_1,\dots,R_m)\in\mathbb{N}_0^m$ with $\sum_{j=1}^m R_j=n-m$. The joint density function of $X_{1:m:n},\dots, X_{m:m:n}$ is then given by
\begin{equation*}
f(x_1,\dots,x_m)\,=\,\prod_{j=1}^m \gamma_jf(x_j)[1-F(x_j)]^{R_j}
\end{equation*}
for $x_1\leq\dots\leq x_m$,
where
\begin{equation}\label{eq:gammas}
\gamma_j\,=\,\sum_{i=j}^m (R_i+1)\,,\quad 1\leq j\leq m\,,
\end{equation} denote (known) positive parameters; see, e.g., \cite{BalCra2014}, p. 22. Note that $\gamma_1=n$. To shorten notation, let $\boldsymbol{X}=(X_{1:m:n},\dots,X_{m:m:n})$ and $\boldsymbol{R}=(R_1,\dots,R_m)$.
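For illustration, such a sample can be generated via the well-known normalized-spacings representation: with $Z_1,\dots,Z_m$ iid standard exponential, $E_j=\sum_{i=1}^j Z_i/\gamma_i$, $1\leq j\leq m$, is a progressively type-II censored sample from the standard exponential distribution, and $F^{-1}(1-\mathrm{e}^{-E_j})$ transports it to $F$. A minimal Python sketch (the function names are ours, not part of the paper):

```python
import numpy as np

def gammas(R):
    """gamma_j = sum_{i=j}^m (R_i + 1) for a censoring scheme R."""
    return np.cumsum((np.asarray(R) + 1)[::-1])[::-1]

def progressive_sample(R, quantile, rng):
    """One sample X_{1:m:n} <= ... <= X_{m:m:n} via normalized spacings:
    E_j = sum_{i<=j} Z_i / gamma_i with Z_i iid Exp(1) is a progressively
    type-II censored Exp(1) sample; the quantile transform moves it to F."""
    g = gammas(R)
    E = np.cumsum(rng.exponential(size=len(g)) / g)
    return quantile(1.0 - np.exp(-E))

rng = np.random.default_rng(1)
R = [2, 0, 1, 0, 2]                               # m = 5 observed, n = 10 on test
mu, sigma = 1.0, 2.0
F_inv = lambda u: mu - sigma * np.log(1.0 - u)    # exponential quantile function
X = progressive_sample(R, F_inv, rng)
```

Note that $\gamma_1=n$ holds by construction, and the returned values are automatically ordered.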
Now, let us assume that the underlying cdf $F$ belongs to the location-scale family $\mathcal{F}=\{F_{\boldsymbol{\vartheta}}:\boldsymbol{\vartheta}\in\Theta\}$, $\Theta=\mathbb{R}\times(0,\infty)$, of exponential cdfs, i.e.,
\begin{equation}\label{eq:locscale}
F_{\boldsymbol{\vartheta}}(x)\,=\,1-\exp\left\{-\,\frac{x-\mu}{\sigma}\right\}\,,\quad x>\mu\,,
\end{equation}
for $\boldsymbol{\vartheta}=(\mu,\sigma)\in\Theta$. Throughout this article, the location-scale parameter $\boldsymbol{\vartheta}$ is supposed to be unknown. It is well known that the maximum likelihood estimator (MLE) $\hat{\boldsymbol{\vartheta}}=(\hat{\mu},\hat{\sigma})$ of $\boldsymbol{\vartheta}$ based on $\boldsymbol{X}$ is given by
\begin{eqnarray}
\hat{\mu}\,&=&\ X_{1:m:n}\label{eq:mudach}\\[1ex]
\text{and}\quad\hat{\sigma}\,&=&\,\frac{1}{m}
\sum_{j=2}^m \gamma_j(X_{j:m:n}-X_{j-1:m:n})\,,\label{eq:sigmadach}
\end{eqnarray}
where $\hat{\mu}$ and $\hat{\sigma}$ are independent with $\hat{\mu}\thicksim F_{(\mu,\sigma/n)}\in\mathcal{F}$ and $\hat{\sigma}\thicksim\Gamma(m-1,\sigma/m)$. Here, $\Gamma(a,b)$ denotes the gamma distribution with shape parameter $a>0$, scale parameter $b>0$, and mean $ab$. Moreover, $\hat{\boldsymbol{\vartheta}}$ is a complete sufficient statistic for $\boldsymbol{\vartheta}$, and the uniformly minimum variance unbiased estimators of $\mu$ and $\sigma$ are given by $\tilde{\mu}=\hat{\mu}-\tilde{\sigma}/n$ and $\tilde{\sigma}=m\hat{\sigma}/(m-1)$; for more details, see \cite{BalCra2014}, Section 12.
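The closed-form estimators and their stated marginal distributions are easy to verify numerically. The sketch below (variable names are ours) simulates exponential samples via the spacing representation and checks the means $\mathrm{E}\hat{\mu}=\mu+\sigma/n$ and $\mathrm{E}\hat{\sigma}=(m-1)\sigma/m$ implied by $\hat{\mu}\thicksim F_{(\mu,\sigma/n)}$ and $\hat{\sigma}\thicksim\Gamma(m-1,\sigma/m)$:

```python
import numpy as np

rng = np.random.default_rng(2)
R = [2, 0, 1, 0, 2]                            # censoring scheme, m = 5, n = 10
m, n = len(R), len(R) + sum(R)
g = np.cumsum((np.array(R) + 1)[::-1])[::-1]   # gamma_1, ..., gamma_m
mu, sigma = 1.0, 2.0

# Spacing representation: X_{j:m:n} = mu + sigma * sum_{i<=j} Z_i / gamma_i
reps = 200_000
E = np.cumsum(rng.exponential(size=(reps, m)) / g, axis=1)
X = mu + sigma * E

mu_hat = X[:, 0]                               # MLE of mu
sigma_hat = (np.diff(X, axis=1) @ g[1:]) / m   # MLE of sigma
```

Indeed, $\hat{\sigma}=(\sigma/m)\sum_{j=2}^m Z_j$ in this representation, which makes the $\Gamma(m-1,\sigma/m)$ law transparent.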
\section{Confidence bands for the baseline cdf}\label{s:bands}
Based on progressively type-II censored order statistics $\boldsymbol{X}$ with censoring scheme $\boldsymbol{R}$, we aim for constructing confidence bands for (the graph) of the underlying cdf $F_{\boldsymbol{\vartheta}}$ subject to a desired (exact) confidence level. For this, let $\boldsymbol{X}$ be formally defined on the probability space $(\Omega,\mathcal{A},P_{\boldsymbol{\vartheta}})$. A confidence band is then introduced as a random mapping $B=B(\boldsymbol{X})$ with values in the power set of $\mathbb{R}^2$, which satisfies the following two properties for all $\boldsymbol{\vartheta}\in\Theta$:
\begin{itemize}
\item[(i)] $\{\text{graph}\,F_{\boldsymbol{\vartheta}}\subseteq B\}\in\mathcal{A}$, where $\text{graph}\, F_{\boldsymbol{\vartheta}}=\{(t,F_{\boldsymbol{\vartheta}}(t)):t\in\mathbb{R}\}$ denotes the graph of $F_{\boldsymbol{\vartheta}}$, and
\item[(ii)] the sets $\{y:(t,y)\in B\}$, $t\in\mathbb{R}$, form (possibly degenerate) intervals $P_{\boldsymbol{\vartheta}}$-almost-surely.
\end{itemize}
A confidence band $B$ is then said to have (exact) confidence level $1-p\in(0,1)$ if for all $\boldsymbol{\vartheta}\in\Theta$
\begin{equation*}
P_{\boldsymbol{\vartheta}}(\text{graph}\,F_{\boldsymbol{\vartheta}}\subseteq B)\stackrel{(=)}{\geq} 1-p\,.
\end{equation*}
For the construction of confidence bands for $F_{\boldsymbol{\vartheta}}$, we focus on two parametric methods proposed in the literature. The first approach is presented in \cite{Kan1968b,Kan1968a} and is based on the availability of some confidence region for $\boldsymbol{\vartheta}$; see also \cite{MieBed2017}. The second method is developed in \cite{KanSri1972} and makes use of so-called Kolmogorov-Smirnov type statistics, i.e., parametric analogues of the nonparametric Kolmogorov-Smirnov statistic.
Finally, note that once having derived confidence bands for the underlying cdf, confidence bands for the corresponding reliability function and quantile function can be obtained by respective transformations. For example, if $B$ denotes a confidence band for $F_{\boldsymbol{\vartheta}}$ with (exact) confidence level $1-p$, then
\begin{equation*}
\overline{B}\,=\,\{(x,1-y):(x,y)\in B\}
\end{equation*}
forms a confidence band for the reliability function $1-F_{\boldsymbol{\vartheta}}$ with (exact) confidence level $1-p$.
\subsection{Confidence Bands based on Confidence Regions}\label{ss:conf}
Whenever there is some confidence region for $\boldsymbol{\vartheta}$ available, one may proceed as follows to obtain a confidence band for $F_{\boldsymbol{\vartheta}}$; see \cite{Kan1968b,Kan1968a}. Let $p\in(0,1)$ and $C=C(\boldsymbol{X})$ be a confidence region for $\boldsymbol{\vartheta}$ with exact confidence level $1-p$. Moreover, we assume that $C$ is path-connected $P_{\boldsymbol{\vartheta}}$-almost-surely for every $\boldsymbol{\vartheta}\in\Theta$. Then,
\[B_C\,=\,\bigcup_{\tilde{\boldsymbol{\vartheta}}\in C}\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\]
forms a confidence band for $F_{\boldsymbol{\vartheta}}$ provided that it meets the measurability condition (i). If so, it is obvious from the definition that $B_C$ has at least confidence level $1-p$. However, to prevent large and thus less informative bands, an exact confidence level of $1-p$ may be desired. For this, a sufficient condition is that $C$ satisfies the implication
\begin{equation}
\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\subseteq B_C\quad\Rightarrow\quad \tilde{\boldsymbol{\vartheta}}\in C \label{eq:exhaustive}
\end{equation}
$P_{\boldsymbol{\vartheta}}$-almost-surely for every $\boldsymbol{\vartheta}\in\Theta$. If condition \eqref{eq:exhaustive} holds, we say that the set $C$ is exhaustive. In that case, the measurability condition (i) on $B_C$ is trivially met, and we have equal coverage probabilities
\begin{equation*}
P_{\boldsymbol{\vartheta}}(\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\subseteq B_C)\,=\,P_{\boldsymbol{\vartheta}}(\tilde{\boldsymbol{\vartheta}}\in C),\quad \tilde{\boldsymbol{\vartheta}}\in\Theta\,,
\end{equation*}
for every $\boldsymbol{\vartheta}\in\Theta$; in particular, $B_C$ is then unbiased if and only if $C$ is unbiased.
Based on an iid sample from the general location-scale family, characterizations of exhaustive confidence regions are provided in \cite{MieBed2017} by making a case distinction on the supports of the underlying cdfs; the results therein are readily seen to remain true for a progressively type-II censored sample. In the present situation, we have left-bounded supports and thus conclude from the results in \cite{MieBed2017} that any compact (and path-connected) confidence region is exhaustive if and only if it is convex and comprehensive. Here, a set $A\subseteq\mathbb{R}^2$ is called comprehensive if $(x_1,x_2),(y_1,y_2)\in A$ and $(z_1,z_2)\in\mathbb{R}^2$ with $x_i\leq z_i\leq y_i$, $i=1,2$, imply that $(z_1,z_2)\in A$.
Two different confidence regions for $\boldsymbol{\vartheta}$ have been constructed in \cite{Wu2010} by combining the independent pivotal quantities
\begin{eqnarray*}
\frac{n(\hat{\mu}-\mu)}{\sigma}\thicksim F_{(0,1)}\,,\qquad \frac{2m\hat{\sigma}}{\sigma}\thicksim \chi^2(2m-2)
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{n(\hat{\mu}-\mu)}{m\hat{\sigma}/(m-1)}\thicksim \text{F}(2,2m-2)\,,\qquad
\frac{2(n(\hat{\mu}-\mu)+m\hat{\sigma})}{\sigma}\thicksim \chi^2(2m)\,,
\end{eqnarray*}
respectively. Here, $\chi^2(k)$ and $\text{F}(k_1,k_2)$ denote the chi-square distribution and F-distribution with $k$ and $k_1,k_2$ degrees of freedom, and $F_{(0,1)}\in\mathcal{F}$ is the standard exponential distribution, see formula \eqref{eq:locscale}. Allocating the overall confidence level uniformly to the intervals and tails, confidence regions with exact confidence level $1-p\in(0,1)$ are thus
\begin{eqnarray}
C_1\,=\,\Big\{(\mu,\sigma)\in\Theta\,:\,a_{q_1}(\sigma)\leq\mu\leq a_{q_2}(\sigma)\,,
\, \sigma_{q_2}\leq\sigma\leq\sigma_{q_1}\Big\}\label{eq:Wu1}
\end{eqnarray}
with
\begin{eqnarray*}
a_q(\sigma)\,&=&\,\hat{\mu}+\frac{\sigma\ln(q)}{n}\,,\quad\sigma>0\,,\\[1ex]
\text{and}\qquad \sigma_q\,&=&\,\frac{2m\hat{\sigma}}{\chi_{q}^2(2m-2)}\,,\quad q\in(0,1)\,,
\end{eqnarray*}
as well as
\begin{eqnarray}
C_2\,=\,\Big\{(\mu,\sigma)\in\Theta\,:\,\mu_{q_2}\leq\mu\leq\mu_{q_1}\,,\,b_{q_2}(\mu)\leq\sigma\leq b_{q_1}(\mu)\Big\}\label{eq:Wu2}
\end{eqnarray}
with
\begin{eqnarray*}
b_q(\mu)\,&=&\,\frac{2(n(\hat{\mu}-\mu)+m\hat{\sigma})}{\chi_{q}^2(2m)}\,,\quad\mu\in\mathbb{R}\,,\\[1ex]
\text{and}\qquad
\mu_q\,&=&\,\hat{\mu}-\frac{m\hat{\sigma}\text{F}_{q}(2,2m-2)}{(m-1)n}\,,\quad q\in(0,1)\,.
\end{eqnarray*}
Here, $\chi^2_\beta(k)$ and $\text{F}_{\beta}(k_1,k_2)$ denote the $\beta$-quantile of $\chi^2(k)$ and $\text{F}(k_1,k_2)$, respectively, and we set $q_1=(1-\sqrt{1-p})/2$ and $q_2=1-q_1$, for brevity.
The confidence regions $C_1$ and $C_2$ have trapezoidal shape in the $(\mu,\sigma)$-plane; see Figures \ref{fig:C1} and \ref{fig:C2} for an illustration. $C_1$ is the area enclosed by the horizontal lines
$\sigma=\sigma_{q_1}$ and $\sigma=\sigma_{q_2}$ parallel to the $\mu$-axis and the diagonal lines
\[\sigma=\frac{n(\mu-\hat{\mu})}{\ln(q_i)}\,,\qquad i=1,2\,,\]
with negative slopes. Similarly, $C_2$ is bounded by the vertical lines $\mu=\mu_{q_1}$ and $\mu=\mu_{q_2}$ parallel to the $\sigma$-axis and by the diagonal lines
$\sigma=b_{q_1}(\mu)$, $\mu\in\mathbb{R}$, and $\sigma=b_{q_2}(\mu)$, $\mu\in\mathbb{R}$,
with negative slopes. Hence, $C_1$ and $C_2$ are both comprehensive and thus exhaustive; see \cite{MieBed2017}. The corresponding confidence bands
\begin{eqnarray*}
B_i\,&\equiv&\, B_{C_i}\,=\,\bigcup_{\tilde{\boldsymbol{\vartheta}}\in C_i}\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\,,\qquad i=1,2\,
\end{eqnarray*}
for $F_{\boldsymbol{\vartheta}}$ therefore have exact confidence level $1-p$, as is the case for the underlying confidence regions.
To have explicit representations for $B_1$ and $B_2$ at hand, we state the lower and upper boundaries of both confidence bands. For $B_1$, the derivation is analogous to that in \cite{SriKanWha1975}, Section 2.2, for a complete exponential sample and is therefore omitted; cf. also \cite{Hay2012}. In the case of $B_2$, the proof can be performed similarly and is presented in the appendix.
\begin{theorem}\label{thm:Wu}
Let the confidence regions $C_1$ and $C_2$ in formulas (\ref{eq:Wu1}) and (\ref{eq:Wu2}) have exact confidence level $1-p\in(0,1)$.
Then,
\begin{eqnarray*}
B_i\,=\,\{(x,y)\in\mathbb{R}\times[0,1]:\,U_i(x)\leq y\leq O_i(x)\}
,\quad i=1,2\,,
\end{eqnarray*}
with
\begin{eqnarray*}
U_1\,(x)&=&\begin{cases} F_{(a_{q_2}(\sigma_{q_2}),\sigma_{q_2})}(x)\,, & x\leq \hat{\mu}\\
F_{(a_{q_2}(\sigma_{q_1}),\sigma_{q_1})}(x)\,, & x>\hat{\mu}\end{cases}\,,\\[1ex]
O_1(x)\,&=&\begin{cases} F_{(a_{q_1}(\sigma_{q_1}),\sigma_{q_1})}(x)\,, & x\leq \hat{\mu}\\
F_{(a_{q_1}(\sigma_{q_2}),\sigma_{q_2})}(x)\,, & x>\hat{\mu}\end{cases}\,,
\end{eqnarray*}
respectively
\begin{eqnarray*}
U_2(x)\,&=&\begin{cases} F_{(\mu_{q_1},b_{q_1}(\mu_{q_1}))}(x)\,, & x\leq \hat{\mu}+m\hat{\sigma}/n\\
F_{(\mu_{q_2},b_{q_1}(\mu_{q_2}))}(x)\,, & x>\hat{\mu}+m\hat{\sigma}/n\end{cases}\,,\\[1ex]
O_2(x)\,&=&\begin{cases} F_{(\mu_{q_2},b_{q_2}(\mu_{q_2}))}(x)\,, & x\leq \hat{\mu}+m\hat{\sigma}/n\\
F_{(\mu_{q_1},b_{q_2}(\mu_{q_1}))}(x)\,, & x>\hat{\mu}+m\hat{\sigma}/n\end{cases}\,,
\end{eqnarray*}
form confidence bands for $F_{\boldsymbol{\vartheta}}$ with exact confidence level $1-p$.
\end{theorem}
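The boundaries of $B_1$ above translate directly into code; the following sketch (our naming, using SciPy's chi-square quantile function) evaluates $U_1$ and $O_1$ on a grid:

```python
import numpy as np
from scipy.stats import chi2

def band_B1(x, mu_hat, sigma_hat, m, n, p):
    """Lower/upper boundaries (U1, O1) of the band B1 at exact level 1-p."""
    q1 = (1 - np.sqrt(1 - p)) / 2
    q2 = 1 - q1
    F = lambda x, mu, s: np.where(x > mu, 1 - np.exp(-(x - mu) / s), 0.0)
    a = lambda q, s: mu_hat + s * np.log(q) / n          # a_q(sigma)
    sig = lambda q: 2 * m * sigma_hat / chi2.ppf(q, 2 * m - 2)
    s1, s2 = sig(q1), sig(q2)                            # sigma_{q1} > sigma_{q2}
    U1 = np.where(x <= mu_hat, F(x, a(q2, s2), s2), F(x, a(q2, s1), s1))
    O1 = np.where(x <= mu_hat, F(x, a(q1, s1), s1), F(x, a(q1, s2), s2))
    return U1, O1
```

The piecewise switch at $x=\hat{\mu}$ leaves both boundaries continuous, since $O_1(\hat{\mu})=1-q_1^{1/n}$ and $U_1(\hat{\mu})=1-q_2^{1/n}$ in both branches.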
In recent years, confidence regions with smallest area for distribution parameters of progressively type-II censored order statistics have been proposed; see, for instance, \cite{Fer2014} and \cite{AsgFerAbd2017} for the Pareto and Rayleigh distributions. In \cite{BedKamLen2019}, a minimum area confidence region for an underlying location-scale parameter has been derived based on independent progressively type-II censored samples. For the one-sample exponential case, the finding yields that
\begin{eqnarray}
C_3\,&=&\,\Bigg\{(\mu,\sigma)\in(-\infty,\hat{\mu}]\times(0,\infty):\notag\\[1ex]
\quad&&(m+1)\ln\left(\frac{\hat{\sigma}}{\sigma}\right)
\,-\,\frac{n(\hat{\mu}-\mu)+m\hat{\sigma}}{\sigma}\,\geq\, c_p\Bigg\}\label{eq:Cmin}
\end{eqnarray}
has smallest area among all confidence regions for $\boldsymbol{\vartheta}$ with confidence level $1-p\in(0,1)$, which are based on the pivotal quantity $((\hat{\mu}-\mu)/\sigma,\hat{\sigma}/\sigma)$; for the iid case, see also \cite{Zha2018}. Here, $c_p\equiv c_p(m)$ denotes the $p$-quantile of the distribution of the random variable
\[(m+1)\ln(Y)-mY-Z\,,\]
where $Y\thicksim\Gamma(m-1,1/m)$ and $Z\thicksim F_{(0,1)}\in\mathcal{F}$ are independent; see Section \ref{s:PCII}. In this context, a confidence region $C=C(\boldsymbol{X})$ is said to be based on some pivotal quantity $\boldsymbol{T}=\boldsymbol{T}(\boldsymbol{\vartheta},\boldsymbol{X})$ if there exists a Borel set $A\subseteq\mathbb{R}^2$ with the property that $\boldsymbol{\vartheta}\in C(\boldsymbol{X})$ if and only if $\boldsymbol{T}(\boldsymbol{\vartheta},\boldsymbol{X})\in A$. It is evident that the class of all confidence regions based on $\boldsymbol{T}$ is invariant under measurable bijective transformations of $\boldsymbol{T}$. Hence, in the present situation, $C_3$ is seen to have smaller area than $C_1$ and $C_2$. Note that, in general, this relation does not necessarily transfer to the area of the corresponding confidence bands.
The algebraic structure of $C_3$ is seen to be the same as in the particular case of type-II right censoring; see formula (4) in \cite{LenBedKam2019}, Section 3. By inspecting the proof of Theorem 2 in \cite{LenBedKam2019}, we therefore find a more explicit representation for $C_3$, i.e.,
\begin{equation}\label{eq:C3g}
C_3\,=\,\left\{(\mu,\sigma)\in\Theta:\hat{\mu}+g(\sigma)\leq\mu\leq\hat{\mu}\,,\,Z_{-1}\leq\sigma\leq Z_0\right\}\
\end{equation}
with mapping $g:(0,\infty)\rightarrow\mathbb{R}$ defined by
\begin{equation}\label{eq:g}
g(z)\,=\,\frac{[c_p-(m+1)(\ln(\hat{\sigma})-\ln(z))]\,z+m\hat{\sigma}}{n}
\end{equation}
for $z>0$, and
\begin{equation*}
Z_i\,=\,-\,\frac{m\hat{\sigma}}{m+1}\,\left[ W_i\left(-\,\frac{m}{m+1}\exp\left\{\frac{c_p}{m+1}\right\}\right)\right]^{-1}
\end{equation*}
for $i\in\{-1,0\}$. Here, $W_{-1}$ and $W_0$ denote the real-valued branches of the Lambert W-function, i.e., $W_{-1}$ $(W_0)$ is the inverse function of the mapping $U(x)=x\exp\{x\}$, $x\leq-1\;(x\geq-1)$.
The corresponding confidence band $B_3\equiv B_{C_3}$ is now as follows; for the derivation, see the appendix.
\begin{theorem}\label{thm:minvol}
Let the confidence region $C_3$ in formulas (\ref{eq:Cmin}) and (\ref{eq:C3g}) have exact confidence level $1-p\in(0,1)$.
Then, $B_3$ has confidence level $1-p$ and is given by
\begin{eqnarray*}
B_3\,=\,\{(x,y)\in\mathbb{R}\times[0,1]:\,U_3(x)\leq y\leq O_3(x)\}
\,,
\end{eqnarray*}
with
\begin{equation*}
U_3(x)\,=\,F_{(\hat{\mu},Z_0)}(x)\,,\quad x\in\mathbb{R}\,,
\end{equation*}
and
\begin{eqnarray*}
O_3(x)\,=\,\begin{cases} F_{(\hat{\mu},Z_{-1})}(x)\,, & \sigma_x^*<Z_{-1}\\
F_{(\hat{\mu}+g(\sigma_x^*),\sigma_x^*)}(x)\,, & Z_{-1}\leq \sigma_x^*\leq Z_0\\
F_{(\hat{\mu},Z_0)}(x)\,, & \sigma_x^*>Z_0\end{cases}\,,
\end{eqnarray*}
where
\begin{equation*}
\sigma_x^*\,=\,\frac{n(\hat{\mu}-x)+m\hat{\sigma}}{m+1}\,.
\end{equation*}
\end{theorem}
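Evaluating $B_3$ requires $Z_{-1}$ and $Z_0$, which SciPy's Lambert-W implementation provides via its branch index $k$. A sketch of the boundary computation (function names and example values are ours):

```python
import numpy as np
from scipy.special import lambertw

def sigma_limits(sigma_hat, m, cp):
    """Scale bounds (Z_{-1}, Z_0) of C3 via the two real Lambert-W branches."""
    arg = -(m / (m + 1)) * np.exp(cp / (m + 1))
    c = m * sigma_hat / (m + 1)
    return -c / lambertw(arg, k=-1).real, -c / lambertw(arg, k=0).real

def band_B3(x, mu_hat, sigma_hat, m, n, cp):
    """Boundaries (U3, O3) of the band B3."""
    F = lambda x, mu, s: np.where(x > mu, 1 - np.exp(-(x - mu) / s), 0.0)
    g = lambda z: ((cp + (m + 1) * np.log(z / sigma_hat)) * z
                   + m * sigma_hat) / n
    Zm1, Z0 = sigma_limits(sigma_hat, m, cp)
    s_star = (n * (mu_hat - x) + m * sigma_hat) / (m + 1)
    s_mid = np.clip(s_star, Zm1, Z0)       # clip keeps log() well-defined
    U3 = F(x, mu_hat, Z0)
    O3 = np.where(s_star < Zm1, F(x, mu_hat, Zm1),
         np.where(s_star > Z0, F(x, mu_hat, Z0),
                  F(x, mu_hat + g(s_mid), s_mid)))
    return U3, O3
```

Both $Z_{-1}$ and $Z_0$ solve $(m+1)\ln(\hat{\sigma}/\sigma)-m\hat{\sigma}/\sigma=c_p$, and $g(Z_{-1})=g(Z_0)=0$, so the three branches of $O_3$ fit together continuously.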
The minimum area confidence region is depicted in Figure \ref{fig:C3}; see also \cite{LenBedKam2019} for the doubly type-II censored case. The figure indicates that $C_3$ is not comprehensive and, hence, not exhaustive. That is, there exist parameters $\tilde{\boldsymbol{\vartheta}}\notin C_3$ satisfying $\text{graph}\, F_{\tilde{\boldsymbol{\vartheta}}}\subseteq B_3$, and an example of this is highlighted in Figure \ref{fig:C3}. The confidence band $B_3$ therefore has a confidence level greater than that of $C_3$. More precisely, the exact confidence level of $B_3$ is given by that of the comprehensive convex hull of $C_3$, which is defined as the smallest comprehensive and convex confidence region containing $C_3$. For an illustration, see again Figure \ref{fig:C3}; cf. \cite{MieBed2017}. The coverage probability of this superset can be computed analytically; the derivation can be found in the appendix.
\begin{lemma}\label{La:CPMin}
Under the assumptions of Theorem \ref{thm:minvol}, the exact confidence level of $B_3$ is given by
\begin{eqnarray*}
\tau\,&=&\,1-p+\frac{e^{c_p}m^{m+1}}{2(m-2)!}\left(\frac{1}{y^2}-\frac{1}{z^2}\right)
+\left(\frac{z}{m+1}\right)^{m-1}\\[1ex]
&&\quad\times\left[G_{m-1}\left(\frac{(m+1)y}{z}\right)-G_{m-1}(m+1)\right]\,,
\end{eqnarray*}
where
\begin{eqnarray*}
y\,&=&\,-(m+1)W_0\left(-\,\frac{m}{m+1}\exp\left\{\frac{c_p}{m+1}\right\}\right)\,,\\[1ex]
z\,&=&\,m\exp\left\{1+\frac{c_p}{m+1}\right\}\,,
\end{eqnarray*}
and $G_k$ denotes the cdf of $\Gamma(k,1)$. In particular, $\tau$ depends only on $p$ and $m$.
\end{lemma}
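The formula in the lemma can be evaluated directly once $c_p$ is available; a sketch (our naming, with $c_p$ approximated by Monte Carlo as before):

```python
import math
import numpy as np
from scipy.special import lambertw
from scipy.stats import gamma

def exact_level_B3(p, m, cp):
    """Exact confidence level tau of B3 for given p, m, and c_p."""
    y = -(m + 1) * lambertw(-(m / (m + 1)) * math.exp(cp / (m + 1)), k=0).real
    z = m * math.exp(1 + cp / (m + 1))
    G = lambda k, t: gamma.cdf(t, k)          # cdf of Gamma(k, 1)
    return (1 - p
            + math.exp(cp) * m ** (m + 1) / (2 * math.factorial(m - 2))
              * (1 / y**2 - 1 / z**2)
            + (z / (m + 1)) ** (m - 1)
              * (G(m - 1, (m + 1) * y / z) - G(m - 1, m + 1)))

# c_p by Monte Carlo, as in the definition of C_3
rng = np.random.default_rng(3)
m, p = 10, 0.10
Y = rng.gamma(m - 1, 1.0 / m, size=10**6)
Z = rng.exponential(size=10**6)
cp = np.quantile((m + 1) * np.log(Y) - m * Y - Z, p)
tau = exact_level_B3(p, m, cp)
```

For $m=10$ and $1-p=90\%$, this reproduces a value of $\tau$ close to the one in Table \ref{table:B3CPTab}.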
Table \ref{table:B3CPTab} shows the exact confidence level $\tau$ of $B_3$ for different values of $m$, where the exact confidence level $1-p$ of the underlying confidence region $C_3$ is chosen as 90\% and 95\%, respectively. Here, the constant $c_p$ in formula (\ref{eq:Cmin}) is obtained numerically by a Monte Carlo simulation of size $10^7$ in each case.
\begin{table}[h!]
\centering
\begin{tabular}{cc|rrrrrrrr}
\toprule
&\multirow{2}{*}{$1-p$} &\multicolumn{8}{|c}{$m$}\\[1ex]
&& 2 & 3 & 4 & 5 & 10 & 25 & 50 & 100
\\ \midrule
&$90\%$ & 91.1 & 91.7 & 92.0 & 92.2 & 92.5 & 92.6 & 92.6 & 92.5\\[1ex]
&$95\%$ & 95.6 & 95.9 & 96.1 & 96.2 & 96.5 & 96.5 & 96.5 & 96.4 \\ \midrule
\end{tabular}
\caption{Exact confidence level $\tau$ of $B_3$ (in \%) for exact confidence level $1-p$ of $C_3$ and different values of $m$.}
\label{table:B3CPTab}
\end{table}
Likewise, we may invert the formula in Lemma \ref{La:CPMin} numerically and choose $p=p(\tau)$ in such a way that $B_3$ meets some desired exact confidence level $\tau\in(0,1)$. The resulting critical value $c_{p(\tau)}$ is presented in Table \ref{table:B3Quantiles} for $\tau\in\{90\%,95\%\}$ and values of $m$ as in Table \ref{table:B3CPTab}. Here, we use a Monte Carlo simulation to approximate the quantile function $p\mapsto c_p$.
\begin{table*}[h!]
\centering
\footnotesize
\begin{tabular}{cc|rrrrrrrr}
\toprule
& \multirow{2}{*}{$\tau$} & \multicolumn{8}{c}{$m$}\\[1ex]
& & 2 & 3 & 4 & 5 & 10 & 25 & 50 & 100 \\
\midrule
$1-p(\tau)$ & \multirow{2}{*}{90\%} & 88.8 & 88.0 & 87.6 & 87.4 & 86.9 & 86.8 & 86.9 & 87.0 \\
$c_{p(\tau)}$ & & -9.784 & -8.372 & -8.542 & -9.116 & -13.385 & -28.025 & -52.924 & -102.878 \\ \midrule
$1-p(\tau)$ & \multirow{2}{*}{95\%} & 94.4 & 93.9 & 93.6 & 93.4 & 93.1 & 93.0 & 93.1 & 93.1 \\
$c_{p(\tau)}$ & & -11.906 & -9.807 & -9.737 & -10.191 & -14.272 & -28.806 & -53.684 & -103.614 \\
\midrule
\end{tabular}
\caption{Exact confidence level $1-p(\tau)$ (in \%) and critical value $c_{p(\tau)}$ to choose for $C_3$ such that $B_3$ has exact confidence level $\tau$ (obtained by $10^7$ Monte Carlo simulations, each).}
\label{table:B3Quantiles}
\end{table*}
\subsection{Kolmogorov-Smirnov Type Bands}\label{ss:kst}
We now focus on another method for the construction of confidence bands for an underlying cdf, which was developed by \cite{KanSri1972} and makes use of so-called Kolmogorov-Smirnov type statistics. These statistics are of the form
\begin{equation*}
K_{\check{\boldsymbol{\vartheta}}}\,=\,\sup_{x\in\mathbb{R}}|F_{\check{\boldsymbol{\vartheta}}}(x)-F_{\boldsymbol{\vartheta}}(x)|\,,
\end{equation*}
where $\check{\boldsymbol{\vartheta}}=\check{\boldsymbol{\vartheta}}(\boldsymbol{X})$ denotes an estimator of $\boldsymbol{\vartheta}$ based on $\boldsymbol{X}$. If $\check{\boldsymbol{\vartheta}}$ is equivariant, i.e., if for all $a\boldsymbol{1}=(a,\dots,a)\in\mathbb{R}^m$ and $b>0$
\[\check{\boldsymbol{\vartheta}}(a\boldsymbol{1}+b\boldsymbol{X})=(a,0)+b\check{\boldsymbol{\vartheta}}(\boldsymbol{X})\,,\]
the statistic can be rewritten as
\begin{eqnarray}
K_{\check{\boldsymbol{\vartheta}}}\,&=&\,
\sup_{z\in\mathbb{R}} |F_{\check{\boldsymbol{\vartheta}}(\boldsymbol{X})}(\sigma z+\mu)-F_{(0,1)}(z)|\notag\\[1ex]
\,&=&\,
\sup_{z\in\mathbb{R}}|F_{\check{\boldsymbol{\vartheta}}((\boldsymbol{X}-\mu\boldsymbol{1})/\sigma)}(z)-F_{(0,1)}(z)|\,.\label{eq:KSTpivot}
\end{eqnarray}
Here, $(\boldsymbol{X}-\mu\boldsymbol{1})/\sigma$ is distributed as a progressively type-II censored sample with underlying cdf $F_{(0,1)}\in\mathcal{F}$, such that the distribution of $K_{\check{\boldsymbol{\vartheta}}}$ is found to be free of $\boldsymbol{\vartheta}$, i.e., $K_{\check{\boldsymbol{\vartheta}}}$ is a pivotal quantity.
The MLE $\hat{\boldsymbol{\vartheta}}$ of $\boldsymbol{\vartheta}$ is known to be equivariant; see, e.g., \cite{BalCra2014}, Section 12.1.1. This yields the following theorem, the proof of which is provided in the appendix; see also \cite{SriKanWha1975}, Section 2.1, for the complete sample case.
\begin{theorem}\label{thm:BandKST}
Let $p\in(0,1)$. Then,
\begin{equation}\label{eq:B_K}
B_4\,=\,\left\{(x,y)\in\mathbb{R}^2:\,|F_{\hat{\boldsymbol{\vartheta}}}(x)-y|\leq d_{p}\right\}
\end{equation}
forms a confidence band for $F_{\boldsymbol{\vartheta}}$ with exact confidence level $1-p$, where $d_p\equiv d_p(m,n)$ denotes the $(1-p)$-quantile of
\begin{equation}\label{eq:KSTrep}
K_{\hat{\boldsymbol{\vartheta}}}\,\stackrel{\text{d}}{=}\,\max\{U,V\}
\end{equation}
with random variables
\begin{eqnarray*}
U\,&=&\,1-\exp\{-S\}\,,\\[1ex]
V\,&=&\,|1-T|\,\exp\left\{\frac{S-T\ln(T)}{T-1}\right\}\,\left(\mathbbm{1}_{\{T<1\}}+\mathbbm{1}_{\{S<\ln(T)\}}\right)\,,
\end{eqnarray*}
where $S\thicksim F_{(0,1/n)}\in\mathcal{F}$ and $T\thicksim\Gamma(m-1,1/m)$ are independent. Here, $\stackrel{\text{d}}{=}$ means equality in distribution, and $\mathbbm{1}_A$ denotes the indicator function of the set $A$.
\end{theorem}
Note that Theorem \ref{thm:BandKST} allows for a simple numerical computation of the quantiles of $K_{\hat{\boldsymbol{\vartheta}}}$ by using Monte Carlo simulation. For some configurations, respective numerical values are presented in Table \ref{table:dp}.
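A sketch of such a simulation (our naming), drawing directly from the distributional representation $K_{\hat{\boldsymbol{\vartheta}}}\stackrel{\text{d}}{=}\max\{U,V\}$ as stated in the theorem:

```python
import numpy as np

def d_p(m, n, p, reps=10**6, seed=0):
    """Monte Carlo (1-p)-quantile of K = max{U, V}."""
    rng = np.random.default_rng(seed)
    S = rng.exponential(1.0 / n, size=reps)     # S ~ Exp with scale 1/n
    T = rng.gamma(m - 1, 1.0 / m, size=reps)    # T ~ Gamma(m-1, 1/m)
    U = 1.0 - np.exp(-S)
    V = np.zeros(reps)
    i = (T < 1) | ((T > 1) & (S < np.log(T)))   # indicator factor of V
    V[i] = np.abs(1 - T[i]) * np.exp((S[i] - T[i] * np.log(T[i])) / (T[i] - 1))
    return np.quantile(np.maximum(U, V), 1 - p)
```

Note that for $T<1$ the event $\{S<\ln(T)\}$ is impossible, so the indicator factor only takes the values $0$ and $1$.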
\begin{table}[h!]
\centering
\begin{tabular}{cc|rrrrrrr}
\toprule
&& \multicolumn{7}{c}{$n$}\\[1ex]
&& 3 & 4 & 5 & 10 & 15 & 20 & 50 \\
\midrule
\multirow{7}{*}{$m$} & 3 & .123 & .109 & .099 & .075 & .064 & .058 & .045 \\
& 4 & & .095 & .086 & .064 & .055 & .049 & .037 \\
& 5 & & & .078 & .058 & .049 & .044 & .033 \\
& 10 & & & & .045 & .038 & .034 & .024 \\
& 15 & & & & & .033 & .029 & .020 \\
& 20 & & & & & & .027 & .018 \\
& 50 & & & & & & & .014 \\
\midrule
\end{tabular}
\caption{Value of $d_p\equiv d_p(m,n)$ for $B_4$ to have exact confidence level $1-p=90\%$ for different configurations $m,n$ (obtained by $10^7$ Monte Carlo simulations, each).}
\label{table:dp}
\end{table}
$B_4$ in formula (\ref{eq:B_K}) has the same non-random vertical width $2d_{p}$ at every $x\in\mathbb{R}$, which implies that there will be points $(x,y)\in B_4$ that cannot be part of any graph of a cdf lying wholly inside the band. These points, however, may be removed from the band in a second step without affecting its confidence level. That is, if $B_4$ has exact confidence level $1-p$, the trimmed confidence band
\[B_4'\,=\,\bigcup_{\tilde{\boldsymbol{\vartheta}}\in\Theta:\,\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\subseteq B_4} \text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\]
has also exact confidence level $1-p$.
Note that $B_4'$ can be considered to be constructed from the confidence region
\begin{align}
C_4'
&=\{\tilde{\boldsymbol{\vartheta}}\in\Theta:\,\text{graph}\,F_{\tilde{\boldsymbol{\vartheta}}}\subseteq B_4 \} \notag\\[1ex]
&= \{ \tilde{\boldsymbol{\vartheta}}\in\Theta: \, \sup_{x\in\mathbb{R}}|F_{\tilde{\boldsymbol{\vartheta}}}(x)-F_{\hat{\boldsymbol{\vartheta}}}(x)| \leq d_p \}\label{eq:C4'}
\end{align}
in the sense of the first approach, i.e., $B_4'=B_{C_4'}$. Since $C_4'$ is exhaustive by definition, it follows that
\begin{equation*}
P_{\boldsymbol{\vartheta}}(\boldsymbol{\vartheta} \in C_4')\,=\,P_{\boldsymbol{\vartheta}}(\text{graph}\, F_{\boldsymbol{\vartheta}}\subseteq B_4') =1-p\,,\qquad\boldsymbol{\vartheta}\in\Theta\,,
\end{equation*}
such that $C_4'$ defines a confidence region for $\boldsymbol{\vartheta}$ with exact confidence level $1-p$. To compute $C_4'$, we may use that
\begin{eqnarray*}
\sup_{x\in\mathbb{R}}|F_{(\tilde{\mu},\tilde{\sigma})}(x) - F_{(\hat{\mu}, \hat{\sigma})}(x)|
\,=\, \sup_{x\in\mathbb{R}}|F_{(\frac{\tilde{\mu}-\hat{\mu}}{\hat{\sigma}},\frac{\tilde{\sigma}}{\hat{\sigma}})}(x) - F_{(0,1)}(x)|\,,
\end{eqnarray*}
and the latter quantity is calculated explicitly in the proof of Theorem \ref{thm:BandKST}.
The shape of $C_4'$ is depicted in Figure \ref{fig:C4}.
$C_4'$ can be trimmed even further without changing its confidence level by using that $\mu\leq\hat{\mu}$ $P_{\boldsymbol{\vartheta}}$-almost-surely. By defining
\begin{equation}\label{eq:C4''}
C_4''\,=\,\{(\mu,\sigma)\in C_4':\mu\leq\hat{\mu}\}\quad\text{and}\quad B_4''\,=\,B_{C_4''}\,,
\end{equation}
we have for every $\boldsymbol{\vartheta}\in\Theta$ that
\begin{eqnarray*}
1-p\,&=&\,P_{\boldsymbol{\vartheta}}(\boldsymbol{\vartheta}\in C_4')\,=\,P_{\boldsymbol{\vartheta}}(\boldsymbol{\vartheta}\in C_4'')\\[1ex]
\,&\leq&\,P_{\boldsymbol{\vartheta}}(\text{graph}\,F_{\boldsymbol{\vartheta}}\subseteq B_4'')
\,\leq\,P_{\boldsymbol{\vartheta}}(\text{graph}\,F_{\boldsymbol{\vartheta}}\subseteq B_4')\\[1ex]
\,&=&\,1-p\,.
\end{eqnarray*}
Hence, the confidence region $C_4''$ and the corresponding confidence band $B_4''$ both have exact confidence level $1-p$.
$C_4'$ and $C_4''$ can be stated explicitly.
\begin{theorem}\label{thm:C4}
The confidence regions $C'_4$ and $C''_4$ in formulas (\ref{eq:C4'}) and (\ref{eq:C4''}) admit the representations
\begin{eqnarray*}
C'_4 \,&=&\, \Big\{ (\mu,\sigma)\in \Theta:\, u(\tfrac{\hat{\sigma}}{\sigma}) \leq \frac{\hat{\mu}-\mu}{\sigma} \leq o(\tfrac{\hat{\sigma}}{\sigma}) \Big\}\,, \\[1ex]
C''_4 \,&=&\, \Big\{ (\mu,\sigma)\in \Theta:\, \max\{u(\tfrac{\hat{\sigma}}{\sigma}),0\} \leq \frac{\hat{\mu}-\mu}{\sigma} \leq o(\tfrac{\hat{\sigma}}{\sigma}) \Big\}\,,
\end{eqnarray*}
where
\begin{eqnarray*}
u(x) \,&=&\, \begin{cases}
\qquad h(x)\,, & x< 1-d_p\\
x\ln(1-d_p),& x\geq 1-d_p
\end{cases}\,, \\[1ex]
o(x) \,&= &\,\begin{cases}
-\ln(1-d_p),& x\leq\frac{1}{1-d_p} \\
\qquad h(x)\,,& x>\frac{1}{1-d_p}
\end{cases}\,,
\end{eqnarray*}
for $x>0$, where the function $h:(0,\infty)\rightarrow\mathbb{R}$ is defined by
\begin{align*}
h(x) = \ln\left( \frac{d_p}{|1-x|} \right) (x-1) + x\ln (x)\,,\quad x>0\,,x\neq1\,,
\end{align*}
and $h(1)=0$.
\end{theorem}
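For numerical use, the bounds in Theorem \ref{thm:C4} are straightforward to implement. The following Python sketch (function names hypothetical) evaluates $h$, $u$, and $o$ for a given $d_p$ and checks membership in $C_4'$ or $C_4''$:

```python
import math

def h(x, dp):
    """h(x) from Theorem thm:C4, with h(1) = 0 by convention."""
    if x == 1.0:
        return 0.0
    return math.log(dp / abs(1.0 - x)) * (x - 1.0) + x * math.log(x)

def u(x, dp):
    """Lower bound for (mu_hat - mu)/sigma at x = sigma_hat/sigma."""
    return h(x, dp) if x < 1.0 - dp else x * math.log(1.0 - dp)

def o(x, dp):
    """Upper bound for (mu_hat - mu)/sigma at x = sigma_hat/sigma."""
    return -math.log(1.0 - dp) if x <= 1.0 / (1.0 - dp) else h(x, dp)

def in_C4(mu, sigma, mu_hat, sigma_hat, dp, trimmed=False):
    """Membership in C4' (or in C4'' when trimmed=True)."""
    x = sigma_hat / sigma
    r = (mu_hat - mu) / sigma
    lower = max(u(x, dp), 0.0) if trimmed else u(x, dp)
    return lower <= r <= o(x, dp)
```

As a sanity check, $u$ is continuous at $x=1-d_p$ and $o$ is continuous at $x=1/(1-d_p)$, since $h(1-d_p)=(1-d_p)\ln(1-d_p)$ and $h(1/(1-d_p))=-\ln(1-d_p)$.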
The confidence regions $C'_4$ and $C''_4$ are depicted in Figure \ref{fig:C4} along with the corresponding confidence bands $B'_4$, $B''_4$ and the untrimmed confidence band $B_4$. Here, the form of $B'_4$ and $B''_4$ is determined by maximizing/minimizing $F_{\boldsymbol{\vartheta}}(x)$ numerically on every confidence region for each $x\in\mathbb{R}$.
\section{Data Example}\label{s:comp}
For illustration and to compare the bands in terms of their width and area, we apply the confidence bands proposed in Section \ref{s:bands} to a progressively type-II censored data set presented by \cite{VivBal1994} and generated from a real data set in \cite{Nel1982}, pp. 105, 228. The observations are shown in Table \ref{table:realdata} and consist of the times (in minutes) to breakdown of $m=8$ out of $n=19$ insulating fluids between electrodes at a voltage of 34 kilovolts. The corresponding censoring scheme is given by $\boldsymbol{R}=(0,0,3,0,3,0,0,5)$.
We assume that $x_{1:8:19},\dots,x_{8:8:19}$ in Table \ref{table:realdata} are realizations of progressively type-II censored order statistics from an exponential cdf $F_{\boldsymbol{\vartheta}}$ as in formula (\ref{eq:locscale}); see also \cite{VivBal1994} and \cite{BalCra2014}, p. 255. According to formulas (\ref{eq:mudach}) and (\ref{eq:sigmadach}), the MLEs of $\mu$ and $\sigma$ based on the data are given by $\hat{\mu}=.19$ and $\hat{\sigma}=8.635$. For an exact confidence level of 90.25\% $(=95\%\times95\%)$, the boundaries of the confidence bands $B_1,B_2,B_3$, $B_4$, $B_4'$, and $B_4''$ are depicted in Figures \ref{fig:C1}--\ref{fig:C4}. Here, for the underlying confidence regions $C_1$ and $C_2$, the exact confidence level is uniformly allocated to the intervals and tails, i.e., we set $q_1=2.5\%$ and $q_2=97.5\%$ in formulas (\ref{eq:Wu1}) and (\ref{eq:Wu2}), respectively. Moreover, to ensure that $B_3$ meets the exact confidence level of $\tau=90.25\%$, we choose the constant $c_p$ in formula (\ref{eq:Cmin}) as $c_{p(\tau)}$, which is determined using Lemma \ref{La:CPMin}, yielding $1-p(\tau)=87.3\%$ and $c_{p(\tau)}=-11.587$. The constant $d_{0.9025}=0.249$ in formula (\ref{eq:B_K}), in turn, is obtained numerically by sampling according to Theorem \ref{thm:BandKST} with $10^7$ simulations.
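The MLEs quoted above can be reproduced directly from Table \ref{table:realdata}. The following Python sketch assumes the usual progressive type-II censoring weights $\gamma_j=n-j+1-\sum_{i=1}^{j-1}R_i$ (formula (\ref{eq:gammas}) is not restated here) together with the spacing form of $\hat{\sigma}$ from formulas (\ref{eq:mudach}) and (\ref{eq:sigmadach}):

```python
# Insulating fluid data (Table table:realdata)
x = [0.19, 0.78, 0.96, 1.31, 2.78, 4.85, 6.50, 7.35]   # failure times in minutes
R = [0, 0, 3, 0, 3, 0, 0, 5]                            # censoring scheme
n, m = 19, len(x)

# gamma_j = n - j + 1 - (R_1 + ... + R_{j-1}),  j = 1, ..., m
gammas = [n - j - sum(R[:j]) for j in range(m)]

mu_hat = x[0]                                           # MLE of mu
sigma_hat = sum(gammas[j] * (x[j] - x[j - 1]) for j in range(1, m)) / m
```

This yields $\hat{\mu}=.19$ and $\hat{\sigma}=8.635$, matching the values reported above.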
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{img/C1.png}
\includegraphics[width=0.49\textwidth]{img/B1.png}
\caption{Confidence region $C_1$ and the resulting confidence band $B_1$ with exact confidence level 90.25\%, each, based on the insulating fluid data in Table \ref{table:realdata}.}
\label{fig:C1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{img/C2.png}
\includegraphics[width=0.49\textwidth]{img/B2.png}
\caption{Confidence region $C_2$ and the resulting confidence band $B_2$ with exact confidence level 90.25\%, each, based on the insulating fluid data in Table \ref{table:realdata}.}
\label{fig:C2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{img/C3.png}
\includegraphics[width=0.49\textwidth]{img/B3.png}
\caption{Confidence region $C_3$ with exact confidence level $1-p(\tau)=87.3\%$ and the resulting confidence band $B_3$ with exact confidence level $\tau=90.25\%$, based on the insulating fluid data in Table \ref{table:realdata}. $B_3$ covers the cdf associated with the marked alternative parameter, which is not contained in $C_3$ but part of its comprehensive convex hull.}
\label{fig:C3}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{img/C4.png}
\includegraphics[width=0.49\textwidth]{img/B4.png}
\caption{Uniform Kolmogorov-Smirnov type confidence band $B_4$ (right, light grey) and the resulting confidence regions $C_4'$ and $C_4''$ (left, grey and dark grey) with exact confidence level 90.25\% based on the insulating fluid data in Table \ref{table:realdata}. The corresponding confidence bands $B'_4$ and $B''_4$ are also depicted (right, grey and dark grey).}
\label{fig:C4}
\end{figure}
\begin{table*}[h!]
\centering
\begin{tabular}{cc|rrrrrrrr}
\toprule
&$i$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \midrule
&$x_{i:8:19}$ & .19 & .78 & .96 & 1.31 & 2.78 & 4.85 & 6.50 & 7.35\\[1ex]
&$R_i$ & 0 & 0 & 3 & 0 & 3 & 0 & 0 & 5\\
\midrule
\end{tabular}
\caption{Progressively type-II censored data set in \cite{VivBal1994}: times (in minutes) to breakdown of $m=8$ out of $n=19$ insulating fluids at 34 kilovolts with censoring scheme $\boldsymbol{R}=(R_1,\dots,R_8)$.}
\label{table:realdata}
\end{table*}
Numerical values for the maximum band width $W(B)$ and the area $A(B)$ of $B_1,\dots,B_4$, $B_4'$, and $B_4''$ are shown in Table \ref{table:realdataprops}. It gives the rankings
\begin{eqnarray*}
W(B_4'')\leq W(B_4')\leq W(B_4) \leq W(B_1) \leq W(B_2) \leq W(B_3)
\end{eqnarray*}
and
\begin{eqnarray*}
A(B_4'')\leq A(B_4')\leq A(B_3)\leq A(B_1) \leq A(B_2) \leq A(B_4)\,.
\end{eqnarray*}
In particular, the trimmed Kolmogorov-Smirnov type bands $B_4''$ and $B_4'$ are first and second best for both criteria; note that the untrimmed band $B_4$ has infinite area by construction. While the area of $B_3$, which is obtained from a minimum area confidence region for the underlying parameter, is also comparatively small, its maximum band width is worst among the confidence bands considered. Regarding the confidence bands based on the trapezoidal confidence regions, $B_1$ performs better than $B_2$.
\begin{table}[h!]
\centering
\begin{tabular}{cc|rrrrrr}
\toprule
&$B$ & $B_1$ & $B_2$ & $B_3$ & $B_4$ & $B_4'$ & $B_4''$ \\ \midrule
&$W(B)$ & .54 & .57 & .59 & .50 & .50 & .47\\
&$A(B) $ & 20.59& 27.53 & 18.87 & $\infty$ & 18.70 & 17.90\\ \midrule
\end{tabular}
\caption{Maximum band width $W(B)$ and area $A(B)$ of the confidence bands $B_1,\dots,B_4$, $B_4'$, and $B_4''$ with exact confidence level 90.25\% based on the insulating fluid data in Table \ref{table:realdata}.}
\label{table:realdataprops}
\end{table}
\section{Generalizations and Extensions}\label{s:general}
Generalizations and extensions of the preceding results are possible in several directions, which are pointed out in the following. First, we focus on related models for ordered data, in which the findings may be applied as well with minor changes. Then, we consider an underlying location-scale family of distributions that contains the exponential one as a particular case.
\subsection{Sequential Order Statistics}\label{ss:sos}
In the above derivations, the explicit structure of $\gamma_1,\dots,\gamma_m$ defined by formula (\ref{eq:gammas}) via the censoring scheme has not been used. Indeed, all findings remain true as long as the $\gamma$'s are given positive parameters. Here, the particular choice
\begin{equation}\label{eq:gammas2}
\gamma_j\,=\,(n-j+1)\,\alpha_j\,,\qquad 1\leq j\leq n\,,
\end{equation}
with known positive parameters $\alpha_1,\dots,\alpha_n$ is of some interest. In that case, $\boldsymbol{X}$ is distributed as the first $m$ (of $n$) sequential order statistics (SOSs) based on the cdfs
\begin{eqnarray*}
F_j(x)\,&=&\,1-(1-F_{\boldsymbol{\vartheta}}(x))^{\alpha_j}\\[1ex]
\,&=&\,1-\exp\left\{-\frac{\alpha_j(x-\mu)}{\sigma}\right\}\,,\qquad x>\mu\,,
\end{eqnarray*}
with corresponding hazard rates $\lambda_{F_j}=\alpha_j\lambda_{F_{\boldsymbol{\vartheta}}}=\alpha_j/\sigma$ on $(\mu,\infty)$, $1\leq j\leq n$, being proportional to that of $F_{\boldsymbol{\vartheta}}$; see \cite{Kam1995a,Kam1995b}. The model of SOSs allows for describing the component lifetimes of sequential $k$-out-of-$n$ systems, which operate as long as $k$ out of their $n$ components operate, and in which the failure of some component may have an impact on the residual lifetimes of the remaining components (here, $k=n-m+1$). In particular, SOSs may be used to model load-sharing effects arising in systems whose components share some total load. All $n$ components then start working at hazard rate $\alpha_1\lambda_{F_{\boldsymbol{\vartheta}}}$ and, upon the $j$-th failure, $1\leq j\leq m-1$, the hazard rate of the surviving components changes to $\alpha_{j+1}\lambda_{F_{\boldsymbol{\vartheta}}}$. The $m$-th SOS $X_m$ coincides with the system failure time, after which no further observations are recorded, such that the data is type-II right censored. Note that order statistics (based on $F_{\boldsymbol{\vartheta}}$), which model the component lifetimes of the common $k$-out-of-$n$ system, are contained in the model by setting $\alpha_1=\dots=\alpha_n\,(=1)$. In this context, confidence bands for the baseline cdf $F_{\boldsymbol{\vartheta}}$ may be useful to assess the quality of the individual components.
Since the $m$-th SOS $X_m$ also represents the lifetime of the system, a confidence band for its cdf is naturally of interest as well. For arbitrary choices of $\alpha_1,\dots,\alpha_n$, closed-form expressions for the marginal cdf $F_{\boldsymbol{\vartheta}}^{(m)}$ of $X_m$
are derived in \cite{CraKam2003} by using Meijer's $G$-functions. If $\alpha_1,\dots,\alpha_n$ are such that the $\gamma$'s in formula (\ref{eq:gammas2}) are pairwise distinct,
this formula simplifies to $F_{\boldsymbol{\vartheta}}^{(m)}=H(F_{\boldsymbol{\vartheta}})$ with function
\begin{equation*}
H(y)\,=\, 1-\left( \prod_{j=1}^m \gamma_j \right) \sum_{i=1}^m \frac{a_i}{\gamma_i} \left( 1-y \right)^{\gamma_i}\,,\qquad y\in[0,1]\,,
\end{equation*}
where
\[a_i \,=\, \prod_{j=1, j\neq i}^m \frac{1}{\gamma_j-\gamma_i}\,,\qquad 1\leq i\leq m\,;
\]
see, e.g., \cite{CraKam2001b}, Theorem 2.5.
In fact, $H$ coincides with the cdf on $[0,1]$ of the $m$-th SOS based on a standard uniform distribution and model parameters $\alpha_1,\dots,\alpha_n$ (i.e., based on the cdfs $F_j(u)=1-(1-u)^{\alpha_j}$, $u\in[0,1]$, $1\leq j\leq n$); $H:[0,1]\rightarrow[0,1]$ is therefore strictly increasing and, in particular, bijective. This transformation can be used for the construction of a confidence band for $F_{\boldsymbol{\vartheta}}^{(m)}$ as follows.
Suppose that we have already constructed a confidence band $B$, say, for $F_{\boldsymbol{\vartheta}}$ with (exact) confidence level $1-p\in(0,1)$. Let
\[\breve{B}\,=\,\{(x,y)\in\mathbb{R}\times[0,1]:\,(x,H^{-1}(y))\in B\}\,.\]
Then,
\begin{eqnarray*}
\text{graph}\, F_{\boldsymbol{\vartheta}}^{(m)} \subseteq \breve{B}\quad&\Leftrightarrow&\quad
(x,H(F_{\boldsymbol{\vartheta}}(x)))\in\breve{B}\;\;\forall x\in\mathbb{R}\\[1ex]
\quad&\Leftrightarrow&\quad
(x,F_{\boldsymbol{\vartheta}}(x))\in B\;\;\forall x\in\mathbb{R}\\[1ex]
\quad&\Leftrightarrow&\quad
\text{graph}\, F_{\boldsymbol{\vartheta}}\subseteq B\,.
\end{eqnarray*}
Hence, $\breve{B}$ forms a confidence band for $F_{\boldsymbol{\vartheta}}^{(m)}$ with (exact) confidence level $1-p$.
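Numerically, $H$ and $\breve{B}$ are easy to evaluate: since $H$ is strictly increasing on $[0,1]$, its inverse can be computed by bisection. A Python sketch (function names hypothetical; pairwise distinct $\gamma$'s assumed):

```python
import math

def H(y, gammas):
    """cdf on [0,1] of the m-th SOS based on U(0,1), pairwise distinct gammas."""
    s = 0.0
    for i, gi in enumerate(gammas):
        a_i = math.prod(1.0 / (gj - gi) for j, gj in enumerate(gammas) if j != i)
        s += a_i / gi * (1.0 - y) ** gi
    return 1.0 - math.prod(gammas) * s

def H_inv(z, gammas, tol=1e-12):
    """Inverse of H on [0,1] by bisection (H is strictly increasing)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if H(mid, gammas) < z:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A point $(x,y)$ then belongs to $\breve{B}$ exactly when $(x,H^{-1}(y))$ belongs to $B$. For ordinary order statistics ($\alpha_j\equiv1$, so $\gamma_j=n-j+1$), $H$ reduces to the cdf of the $m$-th of $n$ uniform order statistics; e.g., for $m=2$, $n=3$, one finds $H(y)=3y^2-2y^3$.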
More results on inference with SOSs can be found, for instance, in \cite{CraKam2001b} and \cite{Cra2016}. For recent works, see, e.g., \cite{BurCraGor2016} and \cite{MieBed2019}. Finally, notice that SOSs with proportional hazard rates are closely related to generalized order statistics; see \cite{Kam1995a,Kam1995b,Kam2016}. Some of the results presented here may therefore also be interpreted in light of other models of ordered random variables and may be applied, for example, to construct a confidence band for the exponential baseline cdf of Pfeifer record values.
\subsection{A Location-Scale Family of Distributions}\label{ss:otherlsf}
In the context of SOSs, various inferential results are shown in \cite{CraKam2001b} for the underlying cdf belonging to the location-scale family
\begin{equation}\label{eq:locscale2}
F_{\boldsymbol{\vartheta}}(x)\,=\,1-\exp\left\{-\,\frac{g(x)-\mu}{\sigma}\right\}\,,\quad x\geq g^{-1}(\mu)\,,
\end{equation}
for $\boldsymbol{\vartheta}=(\mu,\sigma)\in (\lim_{u\downarrow0}g(u),\infty)\times(0,\infty)$, where $g$ is a differentiable and strictly increasing function on $(0,\infty)$ satisfying $\lim_{x\rightarrow\infty}g(x)=\infty$. Note that the supports of the distributions in that family are bounded from the left. Exponential distributions and Pareto distributions are included in this setting and are obtained by choosing $g(x)=x$ and $g(x)=\ln(x)$, respectively; more examples can be found in \cite{CraKam2001b}, Section 6. When replacing assumption (\ref{eq:locscale}) by (\ref{eq:locscale2}) with a known function $g$, the MLEs of $\mu$ and $\sigma$ in formulas (\ref{eq:mudach}) and (\ref{eq:sigmadach}) change to
\begin{eqnarray*}
\hat{\mu}\,&=&\,g(X_{1:m:n})\\[1ex]
\text{and}\quad\hat{\sigma}\,&=&\,\frac{1}{m}
\sum_{j=2}^m \gamma_j(g(X_{j:m:n})-g(X_{j-1:m:n}))\,,
\end{eqnarray*}
where all distributional properties of $\hat{\mu}$ and $\hat{\sigma}$ are preserved; see, e.g., \cite{CraKam2001b}, Section 9. Since the construction principles and arguments in Section \ref{s:bands} do not use the explicit form of the MLEs, all findings remain true under this setup.
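Since the construction enters only through $g$, the computation of the MLEs generalizes verbatim. A Python sketch (function name hypothetical, with $\gamma_j=n-j+1-\sum_{i<j}R_i$ assumed as before): with the identity for $g$ it reproduces the exponential MLEs of Section \ref{s:comp}, and with $g=\ln$ it gives the Pareto-case MLEs.

```python
import math

def mle_locscale(x, R, n, g=lambda t: t):
    """MLEs of (mu, sigma) under F(x) = 1 - exp{-(g(x) - mu)/sigma}.

    Assumes gamma_j = n - j + 1 - sum_{i<j} R_i for the progressively
    type-II censored sample x with censoring scheme R."""
    m = len(x)
    gammas = [n - j - sum(R[:j]) for j in range(m)]
    gx = [g(t) for t in x]
    mu_hat = gx[0]
    sigma_hat = sum(gammas[j] * (gx[j] - gx[j - 1]) for j in range(1, m)) / m
    return mu_hat, sigma_hat
```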
\section{Conclusion}\label{s:conclusion}
Based on a progressively type-II censored sample from the two-parameter exponential distribution, various confidence bands are derived containing the entire graph of the underlying cumulative distribution function with a desired exact probability.
The bands are constructed via confidence regions for the location-scale parameter and Kolmogorov-Smirnov type statistics, and the relation between both approaches is highlighted.
Explicit formulas for the boundaries of the confidence bands, as well as instructions on how to obtain the quantiles of the relevant statistics by simulation, are provided, such that the bands may easily be computed in applications. The Kolmogorov-Smirnov type statistic can also be used to construct a novel confidence region for the location-scale parameter, and an explicit representation of the latter is found. By means of a data example, the bands are illustrated and compared in terms of maximum band width and area, where the trimmed Kolmogorov-Smirnov type band is found to perform well for both criteria.
The presented results also yield useful findings for related models of ordered random variables.
This comprises, for instance, confidence bands for the baseline and marginal cumulative distribution functions of sequential order statistics, which serve as a model for the component lifetimes of sequential $k$-out-of-$n$ systems.
Finally, the results are extended to some other location-scale families of distributions.
\section*{Appendix}
\subsection{Proof of Theorem \ref{thm:Wu}}
To find the boundaries of $B_2$, we proceed similarly as in \cite{SriKanWha1975}, Section 2.2. Let $x\in\mathbb{R}$ be fixed. The aim is to determine the parameters $\boldsymbol{\vartheta}\in C_2$ that maximize and minimize $F_{\boldsymbol{\vartheta}}(x)$. This question may be simplified to finding $(\mu,\sigma)\in C_2$ corresponding to extremal values of $(x-\mu)/\sigma$. Since the latter is a monotone function of $\sigma$ for fixed $\mu$, these points lie on the graphs of the functions $\sigma=b_{q_i}(\mu)$, $\mu\in[\mu_{q_2},\mu_{q_1}]$, $i=1,2$. There we have that
\begin{equation*}
\frac{x-\mu}{\sigma}\,=\,\frac{(x-\mu)\chi_{q_i}^2(2m)}{2(n(\hat{\mu}-\mu)+m\hat{\sigma})}
\end{equation*}
is strictly decreasing in $\mu$ if $x\leq\hat{\mu}+m\hat{\sigma}/n$, and strictly increasing in $\mu$ if $x>\hat{\mu}+m\hat{\sigma}/n$. Hence,
\begin{eqnarray*}
\operatorname*{arg\,min}_{\boldsymbol{\vartheta}\in C_2}\frac{x-\mu}{\sigma}
\,=\,\begin{cases} (\mu_{q_1},b_{q_2}(\mu_{q_1}))\,, & x\leq \mu_{q_1}\\[1ex]
(\mu_{q_1},b_{q_1}(\mu_{q_1}))\,, & \mu_{q_1}<x\leq \hat{\mu}+m\hat{\sigma}/n\\[1ex]
(\mu_{q_2},b_{q_1}(\mu_{q_2}))\,, & x>\hat{\mu}+m\hat{\sigma}/n\end{cases}\,,
\end{eqnarray*}
and
\begin{eqnarray*}
\operatorname*{arg\,max}_{\boldsymbol{\vartheta} \in C_2}\frac{x-\mu}{\sigma}\,=\,\begin{cases} (\mu_{q_2},b_{q_1}(\mu_{q_2}))\,, & x\leq \mu_{q_2}\\[1ex]
(\mu_{q_2},b_{q_2}(\mu_{q_2}))\,, & \mu_{q_2}<x\leq \hat{\mu}+m\hat{\sigma}/n\\[1ex]
(\mu_{q_1},b_{q_2}(\mu_{q_1}))\,, & x>\hat{\mu}+m\hat{\sigma}/n\end{cases}\,.
\end{eqnarray*}
This yields the representations of $U_2$ and $O_2$, where, in each case, the first two branches can be combined, since $F_{(\mu_{q_i},\sigma)}(x)=0$ for $x\leq\mu_{q_i}$, $\sigma>0$, and $i=1,2$.
\subsection{Proof of Theorem \ref{thm:minvol}}
Let $x\in\mathbb{R}$ be fixed. As in the proof of Theorem \ref{thm:Wu}, we have to find the extremal values of $(x-\mu)/\sigma$ with respect to $(\mu,\sigma)$ lying on the boundary of $C_3$, which is given by the vertical line $\{\hat{\mu}\}\times[Z_{-1},Z_0]$ and the curve $\{(\hat{\mu}+g(\sigma),\sigma):\sigma\in[Z_{-1},Z_0]\}$. Clearly, on the line, $(x-\mu)/\sigma=(x-\hat{\mu})/\sigma$ is extremal only at $\sigma=Z_{-1}$ and $\sigma=Z_0$. On the curve, we have
\begin{eqnarray*}
\frac{x-\mu}{\sigma}\,=\,\frac{x-\hat{\mu}}{\sigma}-\frac{(m+1)\ln(\sigma)}{n}-\frac{m\hat{\sigma}}{n\sigma}+d
\end{eqnarray*}
for some constant $d$ free of $\mu$ and $\sigma$, and, considered as a function of $\sigma>0$, the right-hand side is strictly increasing on $(0,\sigma_x^*]$ and strictly decreasing on $[\sigma_x^*,\infty)$. Together, these findings imply that, in any case, the minimum of $(x-\mu)/\sigma$ over the boundary of $C_3$ is attained on the line, and the respective maximum is attained on the curve. The case distinctions $x\leq\hat{\mu}$ and $x>\hat{\mu}$ for the minimum, and $\sigma_x^*<Z_{-1}$, $\sigma_x^*\in[Z_{-1},Z_0]$, and $\sigma_x^*>Z_0$ for the maximum then lead to the stated representation of $B_3$, where the first case can eventually be dropped again, since $F_{(\hat{\mu},\sigma)}(x)=0$ for $x\leq\hat{\mu}$ and $\sigma>0$.
\subsection{Proof of Lemma \ref{La:CPMin}}
For $z\in[Z_{-1},Z_0]$, the function $g$ in formula (\ref{eq:g}) is strictly decreasing/increasing with minimum at $Z_{\text{min}}=\hat{\sigma}\exp\{-1-c_p/(m+1)\}$ given by
\begin{equation*}
g(Z_{\text{min}})\,=\,\hat{\sigma}\left(\frac{m}{n}-\frac{m+1}{n}\exp\left\{-1-\frac{c_p}{m+1}\right\}\right)\,;
\end{equation*}
cf. \cite{LenBedKam2019}. Hence, the comprehensive convex hull of $C_3$ is given by the disjoint union $C_3\cup\Delta$, where
\begin{eqnarray*}
\Delta\,&=&\,\left\{(\mu,\sigma)\in\Theta:\hat{\mu}+g(Z_{\text{min}})\leq\mu<\hat{\mu}+g(\sigma)\,,\, Z_{\text{min}}\leq\sigma\leq Z_0\right\}\\[1ex]
\,&=&\,\left\{(\mu,\sigma)\in\Theta: (m+1)\ln\left(\frac{1}{m}\frac{m\hat{\sigma}}{\sigma}\right)-\frac{m\hat{\sigma}}{\sigma}-c_p\right.\\[1ex]
&&\qquad \left.<\frac{n(\hat{\mu}-\mu)}{\sigma}
\leq \left(\frac{m+1}{z}-1\right)\frac{m\hat{\sigma}}{\sigma}\;,\; y\leq \frac{m\hat{\sigma}}{\sigma}\leq z\right\}
\end{eqnarray*}
with $y$ and $z$ as stated (the set $\Delta$ is colored dark in Figure \ref{fig:C3}). Now, recall that the statistics $m\hat\sigma/\sigma\sim \Gamma(m-1,1)$ and $n(\hat\mu-\mu)/\sigma\sim \Gamma(1,1)$ are independent; see Section \ref{s:PCII}. Denoting by $g_k$ the density function of $\Gamma(k,1)$, we then obtain for $(\mu,\sigma)\in\Theta$
\begin{eqnarray*}
P_{(\mu,\sigma)}((\mu,\sigma)\in\Delta)\,&=&\,\int \mathbbm{1}_\Delta\,dP_{(\mu,\sigma)}\\[1ex]
\,&=&\,\int_y^z\int_{(m+1)\ln(v/m)-v-c_p}^{[(m+1)/z-1]v}
g_1(u)g_{m-1}(v)\,du\,dv\\[1ex]
\,&=&\,\frac{e^{c_p}m^{m+1}}{(m-2)!}\int_y^z v^{-3}\,dv\,-\,\frac{1}{(m-2)!}\int_y^z v^{m-2}e^{-(m+1)v/z}\,dv\,.
\end{eqnarray*}
A change of variables in the second integral then leads to the stated formula.
\subsection{Proof of Theorem \ref{thm:BandKST}}
By definition of $B_4$ and $d_p$, it is clear that for every $\boldsymbol{\vartheta}\in\Theta$
\begin{equation*}
P_{\boldsymbol{\vartheta}}(\text{graph}\, F_{\boldsymbol{\vartheta}}\subseteq B_4)\,=\,P_{\boldsymbol{\vartheta}}(K_{\hat{\boldsymbol{\vartheta}}}\leq d_p)\,=\,1-p\,.
\end{equation*}
Hence, we only have to verify formula (\ref{eq:KSTrep}). To this end, let $\boldsymbol{\vartheta}=(\mu,\sigma)\in\Theta$ and
\begin{eqnarray*}
K_{\boldsymbol{\vartheta}}\,&=&\,\sup_{x\in\mathbb{R}} |F_{\boldsymbol{\vartheta}}(x)-F_{(0,1)}(x)|\\[1ex]
&=&\,\sup_{x>\min\{\mu,0\}} |F_{\boldsymbol{\vartheta}}(x)-F_{(0,1)}(x)|\,.
\end{eqnarray*}
A case distinction on the sign of $\mu$, together with resolving the indicator functions, gives
\begin{eqnarray}
K_{\boldsymbol{\vartheta}}\,&=&\,\max\left\{1-\exp\left\{-\max\left\{\mu,-\,\frac{\mu}{\sigma}\right\}\right\}\,,\,\sup_{x>\max\{\mu,0\}} |\kappa(x)|\right\}\label{eq:Kmax}
\end{eqnarray}
with function
\begin{equation*}
\kappa(x)\,=\,\exp\{-x\}-\exp\left\{-\,\frac{x-\mu}{\sigma}\right\}\,,\qquad x\in\mathbb{R}.
\end{equation*}
Since
\begin{equation*}
\lim_{x\rightarrow\max\{\mu,0\}}|\kappa(x)|\,=\,1-\exp\left\{-\max\left\{\mu,-\,\frac{\mu}{\sigma}\right\}\right\}
\end{equation*}
and $\lim_{x\rightarrow\infty} |\kappa(x)|=0$, the aim is now to find all local extrema of $\kappa$ in $(\max\{\mu,0\},\infty)$. For $\sigma=1$ and $\mu\neq0$, such extrema do not exist, since $\kappa$ is then strictly monotone. Hence, we have $K_{\boldsymbol{\vartheta}}=1-\exp\{-\max\{\mu,-\mu/\sigma\}\}$ for $\sigma=1$. For $\sigma\neq1$, simple analysis shows that $\kappa$ is either increasing/decreasing or decreasing/increasing with a local extremum at
\begin{equation*}
x^*\,=\,\frac{\mu-\sigma\ln(\sigma)}{1-\sigma}\,.
\end{equation*}
The value
\begin{equation*}
|\kappa(x^*)|\,=\,|1-\sigma|\,\exp\left\{\frac{\mu-\sigma\ln(\sigma)}{\sigma-1}\right\}
\end{equation*}
thus has to be taken into account in the calculation of the maximum in formula (\ref{eq:Kmax}) if
\begin{eqnarray*}
x^*>\max\{\mu,0\}\qquad
&\Leftrightarrow&\qquad
\min\{x^*-\mu,x^*\}>0\\[1ex]
\qquad&\Leftrightarrow&\qquad\min\left\{\,\frac{\mu-\ln(\sigma)}{1-\sigma}\,,\,\frac{\mu-\sigma\ln(\sigma)}{1-\sigma}\right\}>0\\[1ex]
\qquad&\Leftrightarrow&\qquad\frac{\mu}{\ln(\sigma)}<\min\{\sigma,1\}\,.
\end{eqnarray*}
Hence, $K_{\boldsymbol{\vartheta}} = \max\{U(\boldsymbol{\vartheta}), V(\boldsymbol{\vartheta})\}$ for $\sigma\neq1$, where
\begin{eqnarray*}
U(\boldsymbol{\vartheta})\,&=&\,1-\exp\left\{-\max\left\{\mu,-\,\frac{\mu}{\sigma}\right\}\right\}\,,\\[1ex]
V(\boldsymbol{\vartheta})\,&=&\,|1-\sigma|\,\exp\left\{\frac{\mu-\sigma\ln(\sigma)}{\sigma-1}\right\}\mathbbm{1}_{\left\{\frac{\mu}{\ln(\sigma)}<\min\{\sigma,1\}\right\}}\,.
\end{eqnarray*}
By setting $V(\boldsymbol{\vartheta})=0$ for $\sigma=1$, the representation $K_{\boldsymbol{\vartheta}}=\max\{U(\boldsymbol{\vartheta}),V(\boldsymbol{\vartheta})\}$ then holds true for all $\boldsymbol{\vartheta}\in\Theta$.
Now, according to formula (\ref{eq:KSTpivot}), replacing $\boldsymbol{\vartheta}$ by
\begin{equation*}
\hat{\boldsymbol{\vartheta}}\left(\frac{\boldsymbol{X}-\mu\boldsymbol{1}}{\sigma}\right)\,=\,\left(\frac{\hat{\mu}(\boldsymbol{X})-\mu}{\sigma},\frac{\hat{\sigma}(\boldsymbol{X})}{\sigma}\right)\,\stackrel{d}{=}\,(S,T)
\end{equation*}
completes the proof, where the distributional properties of $S$ and $T$ follow from formulas (\ref{eq:mudach}) and (\ref{eq:sigmadach}). Here, the final representations of $U$ and $V$ are obtained by using that $S$ and $T$ are positive almost-surely.
\subsection{Proof of Theorem \ref{thm:C4}}
For $\boldsymbol{\vartheta}=(\mu,\sigma)\in\Theta$, we have from the proof of Theorem \ref{thm:BandKST} that
\begin{equation}\label{eq:Kequiv}
K_{\boldsymbol{\vartheta}}\leq d_p\quad\Leftrightarrow\quad \max\{U(\boldsymbol{\vartheta}),V(\boldsymbol{\vartheta})\}\leq d_p\,.
\end{equation}
First, note that
\begin{equation}\label{eq:U}
U(\boldsymbol{\vartheta})\leq d_p\quad\Leftrightarrow\quad
\sigma\ln(1-d_p)\leq\mu\leq-\ln(1-d_p)\,.
\end{equation}
Moreover, we introduce the function $h:(0,\infty)\rightarrow\mathbb{R}$ via
\begin{align*}
h(x) = \ln\left( \frac{d_p}{|1-x|} \right) (x-1) + x\ln (x)\,,\quad x>0\,,x\neq1\,,
\end{align*}
and $h(1)=0$, such that for $\sigma\leq1$
\begin{equation}\label{eq:V1}
V(\boldsymbol{\vartheta})\leq d_p\quad\Leftrightarrow\quad \mu\,\mathbbm{1}_{\{\mu>\sigma\ln(\sigma)\}}\geq h(\sigma)\,\mathbbm{1}_{\{\mu>\sigma\ln(\sigma)\}}\,,
\end{equation}
and for $\sigma>1$
\begin{equation}\label{eq:V2}
V(\boldsymbol{\vartheta})\leq d_p\quad\Leftrightarrow\quad \mu\,\mathbbm{1}_{\{\mu<\ln(\sigma)\}}\leq h(\sigma)\,\mathbbm{1}_{\{\mu<\ln(\sigma)\}}\,.
\end{equation}
Let us first find the lower bound $u(\sigma)$ for $\mu$ as a function of $\sigma$, such that the right-hand side of the above equivalence is true. For $\sigma>1$, condition (\ref{eq:V2}) does not yield any lower bound for $\mu$.
For $\sigma\in[1-d_p,1]$, condition (\ref{eq:V1}) is always true, since then $h(\sigma)\leq \sigma\ln(\sigma)$. Finally, for $\sigma\in(0,1-d_p)$, condition (\ref{eq:U}) implies that the indicator functions in condition (\ref{eq:V1}) are equal to 1. Moreover, we have that
\begin{equation*}
h(\sigma)\geq\sigma\ln(1-d_p)\,,
\end{equation*}
since the mapping $x\mapsto h(x)-x\ln(1-d_p)$ is decreasing/increasing on $(0,1)$ with minimum 0 at $x=1-d_p$. Combining these findings we have
\begin{equation*}
u(\sigma)\,=\,\begin{cases}
\qquad h(\sigma)\,,\quad&\sigma\in(0,1-d_p)\\[1ex]
\sigma\ln(1-d_p)\,,\quad&\sigma\in[1-d_p,\infty)
\end{cases}\,.
\end{equation*}
Now, let us derive the upper bound $o(\sigma)$ for $\mu$ as a function of $\sigma$, such that the right-hand side of the equivalence at the beginning of the proof is valid. For $\sigma\leq1$, condition (\ref{eq:V1}) does not yield any upper bound for $\mu$. For $\sigma\in(1,1/(1-d_p)]$, it holds that $h(\sigma)\geq\ln(\sigma)$ such that condition (\ref{eq:V2}) is always true. Finally, for $\sigma>1/(1-d_p)$, condition (\ref{eq:U}) implies that the indicator functions in condition (\ref{eq:V2}) are equal to 1. Moreover, $h$ is increasing/decreasing on $(1,\infty)$ with maximum $-\ln(1-d_p)$ at $x=1/(1-d_p)$. In summary, we have
\begin{equation*}
o(\sigma)\,=\,\begin{cases}
-\ln(1-d_p)\,,\quad&\sigma\in(0,1/(1-d_p)]\\[1ex]
\qquad h(\sigma)\,,\quad &\sigma\in(1/(1-d_p),\infty)
\end{cases}\,.
\end{equation*}
According to formula (\ref{eq:KSTpivot}) and as in the proof of Theorem \ref{thm:BandKST}, replacing $\boldsymbol{\vartheta}$ by $((\hat{\mu}-\mu)/\sigma,\hat{\sigma}/\sigma)$ then gives the representation for $C'_4$, from which the one for $C''_4$ is evident.
\bibliographystyle{tfnlm}
% arXiv:0904.1589
\section{Introduction}\label{sec:intro}
Counts of galaxy clusters are a potentially very powerful technique to probe
dark energy and the accelerating universe (e.g.\ \cite{FTH,sah09,mar06,voi05,pie03,bat03,ros02,hai01,hol01}). The idea
is an old one: count clusters as a function of redshift (and, potentially,
mass), and compare to theoretical predictions, which can be obtained either
analytically or numerically. Recently, Rozo et al.\ \cite{Rozo09} have
obtained very interesting constraints on $\sigma_8$ from Sloan Digital Sky
Survey (SDSS) cluster samples using the relation between mass and optical
richness (the number of red-sequence galaxies in the cluster above a
luminosity threshold). This follows recent dark energy constraints using
optical \cite{gla07} and X-ray observations of clusters \cite{Mantz07,Vikhlinin08,hen09}.
In this paper we calculate the potential of cluster counts to improve combined
constraints from the other three major probes of dark energy: baryon acoustic
oscillations (BAO), type Ia supernovae (SNIa) and weak gravitational lensing
(WL). We are motivated by the recently released report of the Figure of Merit
Science Working Group (FoMSWG; \cite{FoMSWG}) that studied and recommended
parametrizations and statistics best suited to addressing the power of
cosmological probes to measure properties of dark energy. While the FoMSWG
report was mainly aimed at figures of merit to be used in the upcoming
competition for the Joint Dark Energy Mission (JDEM) space telescope, the
applicability of its results and recommendations is general.
We address quantitatively how ongoing and upcoming cluster
surveys, in particular the South Pole Telescope (SPT, \cite{ruh04}) and the
Dark Energy Survey (DES, \cite{des05}), can strengthen the combined
``pre-JDEM'' constraints on dark energy considered in the FoMSWG report ---
that is, combined constraints expected around the year 2016. To model cluster
counts, we utilize recent results from Cunha \cite{cun08} which optimally
combine future optical and Sunyaev-Zel'dovich (SZ) observations of clusters to
estimate the constraints on dark energy.
\section{Information from cluster counts and clustering}\label{sec:counts}
The subject of deriving cosmological constraints from cluster number counts
and clustering of clusters has been treated extensively in the literature (see
e.g.\ \cite{cun08,lim04,lim05,lim07, wu08}). In this work we use
cross-calibration for two observable proxies for mass: Sunyaev-Zel'dovich flux
(henceforth SZ) and optical observations (henceforth OPT), which identify
clusters via their galaxy members.
While we focus on optical and SZ surveys, our results
are applicable to combinations of any cluster detection techniques. In
particular, planned X-ray surveys such as eRosita \cite{pre07}, WFXT
\cite{gia09}, and IXO \cite{vik09} will have mass sensitivity competitive
with, and complementary to, the SZ and optical surveys.
Our approach closely follows that in \cite{cun08} and we refer the reader to
that publication for basic details. In brief, cluster counts in a bin of the
observables are calculated by integrating the mass function $dn/dM$ over mass,
volume, and the observable proxy in the appropriate range.
We adopt the Jenkins mass function in this work, though results are weakly dependent
on this choice.
Clustering is given by the sample covariance of the mean counts in different redshift bins.
The contribution of clustering to the constraints is very small when cross-calibration
is used \cite{cun08}.
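The structure of this calculation can be sketched schematically. The snippet below is a toy numpy illustration, not the paper's pipeline: it uses a made-up falling mass function in place of the Jenkins mass function and omits the volume integration, but shows the key step of integrating $dn/dM$ against a lognormal observable kernel, and the resulting Eddington-type bias of counts above a threshold.

```python
import numpy as np

# NumPy 1.x exposes trapz, NumPy 2.x renames it to trapezoid:
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

def dn_dlnM(lnM, lnM_star=np.log(3e14)):
    """Schematic halo mass function per unit ln M (arbitrary normalization)."""
    return np.exp(-(lnM - np.log(1e14))) * np.exp(-np.exp(lnM - lnM_star))

def p_obs_given_true(lnMobs, lnM, sigma_lnM):
    """Lognormal mass-observable scatter about an unbiased mean relation."""
    return np.exp(-0.5 * ((lnMobs - lnM) / sigma_lnM) ** 2) / (
        np.sqrt(2.0 * np.pi) * sigma_lnM)

def counts_above(lnMobs_th, sigma_lnM, n=400):
    """Counts (arbitrary units) above an observable threshold."""
    lnM = np.linspace(np.log(1e13), np.log(1e16), n)
    lnMobs = np.linspace(lnMobs_th, np.log(3e16), n)
    # P(above threshold | M): integrate the lognormal kernel over Mobs
    kern = trapz(p_obs_given_true(lnMobs[None, :], lnM[:, None], sigma_lnM),
                 lnMobs, axis=1)
    return trapz(dn_dlnM(lnM) * kern, lnM)

# For a steeply falling mass function, up-scatter of abundant low-mass
# clusters wins, so counts above threshold grow with the scatter:
n_small = counts_above(np.log(10**14.2), sigma_lnM=0.1)
n_large = counts_above(np.log(10**14.2), sigma_lnM=0.5)
assert n_large > n_small > 0
```

This is why the scatter parameters below must be marginalized over: they directly modulate the counts.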
We allow for scatter in both the relation between mass and the observable
proxy, and the relation between true and estimated photometric redshifts.
Results from both simulations (e.g.\ \cite{sha07,kra06}) and observations
(e.g. \citep{ryk08b,evr08,ryk08a}) suggest that the mass-observable relations
can be parametrized in simple forms with lognormal scatter of the
mass-observable about the mean relation. Other works (see e.g.\ \cite{coh07})
suggest that the distribution of galaxies in halos may be more complicated.
We assume lognormal scatter for the mass-observable relation as well as for
the photometric redshift errors.
We have neglected any theoretical uncertainties in the mass function,
galaxy bias, and photometric redshifts, all of which must be independently
known to a few percent so as not to affect cosmological constraints \cite{cun09, lim07}.
We fix the photo-z scatter to $\sigma_{z} = 0.02$, the expected overall scatter of
cluster photo-z's in the Dark Energy Survey \citep{des05}. Our ``theorist's
observable'' quantity, which we feed into the Fisher matrix formalism to
obtain constraints on cosmological parameters, is the covariance of the counts
--- defined as the sample covariance plus the shot noise variance --- in different
redshift bins.
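The ``theorist's observable'' covariance is straightforward to write down; the following minimal sketch, with made-up numbers, simply encodes the definition of sample covariance plus shot-noise variance.

```python
import numpy as np

# Schematic covariance of binned cluster counts: sample covariance from
# large-scale structure plus Poisson shot noise on the diagonal.
def counts_covariance(mean_counts, sample_cov):
    """C = S + diag(N): sample covariance plus shot-noise variance."""
    mean_counts = np.asarray(mean_counts, dtype=float)
    return np.asarray(sample_cov, dtype=float) + np.diag(mean_counts)

N = [400.0, 250.0, 90.0]           # mean counts per redshift bin (made up)
S = 0.02 * np.outer(N, N)          # toy sample covariance
C = counts_covariance(N, S)
assert np.allclose(np.diag(C), 0.02 * np.array(N) ** 2 + N)
```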
We adopt the same surveys and parametrizations described in \cite{cun08},
namely, an SZ and an OPT survey on the same 4000 sq.\ deg.\ patch of sky.
Let $M_{\rm obs}$ be the observable proxy for mass (from either SZ or OPT survey).
For the SZ survey, we define the $M_{\rm obs}$ threshold for detection to be
$M^{\rm th}=10^{14.2}h^{-1}M_{\odot}$, complete up to $z=2$, based on the
projected sensitivity of the South Pole Telescope. We parametrize the mass bias
(the difference between the true mass and SZ $M_{\rm obs}$), and the variance in the
mass-observable relation, respectively as
\begin{eqnarray}
{\rm ln}M^{\rm bias}(z)&=&{\rm ln}M^{\rm bias}_0 + a_1\ln(1+z) \label{eqn:mbiasdefsz}\\[0.2cm]
\sigma_{\ln M}^2(z)&=&\sigma_{0}^2 + \sum_{i=1}^{3}b_iz^i
\label{eqn:msigdefsz}
\end{eqnarray}
Fiducial values of all nuisance parameters are zero, except for the
scatter which is set to $\sigma_{0}=0.25$ in the fiducial model. Our choice
of scatter is somewhat conservative given recent studies which suggest that
$\sigma_{0}<0.2$ (see e.g. \cite{mel06}). However, hydrodynamic simulations by
\cite{hal07} find a scatter of $(+32\%, -16\%)$ about the median for clusters
of $M\sim3.0\times 10^{14} M_{\odot}$, which matches our choice. In total, there
are six nuisance parameters for the mass bias and scatter (${\rm ln}M^{\rm bias}_0$,
$a_1$, $\sigma_{0}^2$, $b_i$).
\begin{figure}[!t]
\includegraphics[scale=0.30]{plots/1.pdf}
\vspace{-0.5cm}
\caption{First three principal components for the OPT+SZ cluster
survey with Planck priors. Solid lines refer
to the case with no prior on the 16 nuisance parameters, while the dashed
lines correspond to the case of perfectly known nuisance parameters.}
\label{fig:PC_clus}
\end{figure}
For the optical survey the mass threshold of the observable is set to
$M^{\rm th}=10^{13.5}h^{-1}M_{\odot}$ and the redshift limit is $z=1$, corresponding to
the projected sensitivity of the Dark Energy Survey. Different studies
suggest a wide range of scatter for optical observables, ranging from a
constant $\sigma_{\ln M}=0.5$ \citep{wu08} to a mass-dependent scatter in the range
$0.75 < \sigma_{\ln M} < 1.2$ \citep{bec07}. Using weak lensing and X-ray analysis
of MaxBCG selected optical clusters, Ref.~\cite{roz08a} estimated a lognormal
scatter of $\sim 0.45$ for $P(M|M_{\rm obs})$, where $M$ was determined using weak
lensing and $M_{\rm obs}$ was an optical richness estimate.
We choose a fiducial mass scatter of $\sigma_{\ln M}=0.5$ and allow for a cubic
evolution in redshift and mass:
\begin{eqnarray}
{\rm ln}M^{\rm bias}(M_{\rm obs},z)&=&{\rm ln}M^{\rm bias}_0 + a_1\ln(1+z)\nonumber \\[0.2cm]
&+&a_2({\rm ln}M_{\rm obs}-{\rm ln}M_{\rm pivot}) \label{eqn:mbiasdef}\\[0.0cm]
\sigma_{\ln M}^2(M_{\rm obs},z)&=&\sigma_{0}^2 + \sum_{i=1}^{3}b_iz^i \nonumber \\[-0.2cm]
&+& \sum_{i=1}^{3}c_i({\rm ln}M_{\rm obs}-{\rm ln}M_{\rm pivot})^i \label{eqn:msigdef}
\end{eqnarray}
\noindent We set $M_{\rm pivot}=10^{15}h^{-1} M_{\odot}$.
In all, we have 10 nuisance parameters for the optical mass errors
(${\rm ln}M^{\rm bias}_0$, $a_1$, $a_2$, $\sigma_{0}^2$, $b_i$, $c_i$).
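For concreteness, Eqs.~(\ref{eqn:mbiasdef}) and (\ref{eqn:msigdef}) can be transcribed directly; the SZ case of Eqs.~(\ref{eqn:mbiasdefsz}) and (\ref{eqn:msigdefsz}) is recovered by setting $a_2=c_i=0$. The parameter values below are placeholders, not fits.

```python
import numpy as np

LN_M_PIVOT = np.log(1e15)  # ln(M_pivot), with M_pivot = 1e15 h^-1 Msun

def ln_mass_bias(lnMobs, z, lnMbias0=0.0, a1=0.0, a2=0.0):
    """Mean bias of the mass-observable relation (optical; SZ has a2 = 0)."""
    return lnMbias0 + a1 * np.log(1.0 + z) + a2 * (lnMobs - LN_M_PIVOT)

def var_lnM(lnMobs, z, sigma0_sq=0.5**2, b=(0.0, 0.0, 0.0), c=(0.0, 0.0, 0.0)):
    """Variance of the scatter: cubic evolution in redshift and mass."""
    dm = lnMobs - LN_M_PIVOT
    return (sigma0_sq
            + sum(b[i] * z ** (i + 1) for i in range(3))
            + sum(c[i] * dm ** (i + 1) for i in range(3)))

# At the pivot mass and z = 0 the fiducial optical model gives sigma_lnM = 0.5:
assert np.isclose(np.sqrt(var_lnM(LN_M_PIVOT, 0.0)), 0.5)
assert ln_mass_bias(LN_M_PIVOT, 0.0) == 0.0
```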
There are few, if any, constraints on the number of parameters necessary to
realistically describe the evolution of the variance and bias with mass.
Ref.\ \cite{lim05} shows that a cubic evolution of the mass-scatter with
redshift captures most of the residual uncertainty when the redshift evolution
is completely free (as assumed in the Dark Energy Task Force (DETF) report
\cite{DETF}). Note too that we employ more nuisance parameters to describe
the optical survey than the SZ survey because the former is expected to have a
more complicated selection function. For the cross-calibration analysis, we
assume the correlation coefficient between optical and SZ scatter $\rho$,
defined in \cite{cun08}, to be fixed to zero; the same paper shows that the
cross-calibration results are insensitive to the value of $\rho$ for $\rho \in
[-1, 0.6]$.
In total, we use 6+10=16 nuisance parameters to describe the systematics of
the combined OPT+SZ cluster survey. While generous, this
parametrization assumes a lognormal distribution of the mass-observable relation
that may fail for low-masses.
We have also implicitly assumed that selection effects can be described by the
bias and scatter of the mass-observable relation. By the year 2016, we expect
significant progress in simulations of cluster surveys that will allow us to
better parametrize the cluster selection errors.
\begin{figure*}[!t]
\includegraphics[scale=0.30]{plots/2a.pdf}
\includegraphics[scale=0.30]{plots/2b.pdf}
\vspace{-0.5cm}
\caption{Left panel: First three principal components for the
SNIa+BAO+WL+Planck pre-JDEM combination alone (solid curves) and for the same combination
with the addition of clusters (dashed curves). For the latter we assume OPT+SZ cluster
survey with flat (i.e.\ uninformative) external priors on nuisance
parameters. Right panel: Uncertainty in eigencoefficients of $w(a)$,
$\sigma(\alpha_i)$, for the SNIa+BAO+WL+Planck pre-JDEM combination alone (circular points),
and for the same combination with the addition of clusters (square points). We also show the
ratio of the improvement in each eigencoefficient when clusters are added
(dashed line - scaled down by a factor of 10 for clarity). }
\label{fig:PC_combined}
\end{figure*}
\section{Complementary probes and Figures of Merit}\label{sec:jdemprobes}
To model the power of complementary probes of dark energy, we adopt the
pre-JDEM information (that is, combined information projected around year
2016) based on estimates of the Figure of Merit Science Working Group
\cite{FoMSWG}. These estimates include information from BAO, SNIa, WL and the
Planck CMB satellite. We use these probes in combination, both with and without
clusters. Note that
systematic errors have been included in all of these methods
(see Ref.~\cite{FoMSWG}).
The FoMSWG figures of merit are described in the FoMSWG paper \cite{FoMSWG}
and we review them here very briefly. There are a total of 45 cosmological
parameters, 36 of which describe the equation of state $w(z)$ while the others
are mostly standard cosmological parameters (plus a couple of nuisance ones
that have not been explicitly marginalized over). One figure-of-merit is the
area in the $w_0$-$w_a$ plane \cite{Huterer_Turner,DETF}, where
$w(a)=w_0+w_a(1-a)= w_p+w_a(a_p-a)$, and where $w_p$ and $a_p$ are the equation
of state and scale factor at the ``pivot'' point; we adopt ${\rm FoM}\equiv 1/(\sigma(w_p)\times
\sigma(w_a))$. The growth of density perturbations is described by a single
parameter, the growth index $\gamma$, which is a free parameter in the fitting
function for the linear growth of perturbations \cite{Linder_growth}. The
figure-of-merit in the growth index is simply its inverse marginalized error,
${\gamma \rm FoM}\equiv 1/\sigma(\gamma)$.
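The DETF figure of merit follows from any $2\times2$ $(w_0,w_a)$ covariance matrix. The sketch below uses the identity ${\rm FoM}=1/\sqrt{\det C}$ (at the pivot, $w_p$ and $w_a$ are uncorrelated); the correlation coefficient is our own illustrative choice, tuned to give $\sigma(w_p)\simeq 0.028$.

```python
import numpy as np

def detf_fom(cov_w0_wa):
    """DETF FoM = 1/(sigma(w_p) sigma(w_a)) from a (w0, wa) covariance."""
    C = np.asarray(cov_w0_wa, dtype=float)
    # decorrelating w0 from wa at the pivot leaves the residual variance:
    sigma_wp = np.sqrt(C[0, 0] - C[0, 1] ** 2 / C[1, 1])
    sigma_wa = np.sqrt(C[1, 1])
    fom = 1.0 / (sigma_wp * sigma_wa)
    # equivalent closed form: FoM = 1/sqrt(det C)
    assert np.isclose(fom, 1.0 / np.sqrt(np.linalg.det(C)))
    return fom

# Illustrative numbers near the pre-JDEM baseline (sigma(w0)=0.10,
# sigma(wa)=0.31); rho is an assumption chosen to give sigma(w_p) ~ 0.028:
rho = -np.sqrt(1.0 - (0.028 / 0.10) ** 2)
C = np.array([[0.10**2, rho * 0.10 * 0.31],
              [rho * 0.10 * 0.31, 0.31**2]])
fom = detf_fom(C)
```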
A much richer (and less prone to biases) description of the equation
of state is achieved through computing the principal components (PCs) of dark
energy \cite{Huterer_Starkman}, $e_i(z)$
\vspace{-0.2cm}
\begin{equation}
1+w(a) = \sum_{i=0}^{35} \alpha_i e_i(a),
\label{eq:PC_expansion}
\end{equation}
where $\alpha_i$ are coefficients, and $e_i(a)$ are the eigenvectors (see
\cite{FoMSWG} for details). The associated figure of merit consists of
presenting the shapes $e_i(z)$ in redshift and computing the associated
accuracies $\sigma(\alpha_i)$ with which the coefficients can be measured
\cite{FoMSWG}.
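Operationally, the PCs and the errors $\sigma(\alpha_i)$ follow from diagonalizing the Fisher matrix projected onto the 36 $w$ bins. The toy sketch below uses a random positive-definite stand-in for that Fisher matrix and shows the standard recipe $\sigma(\alpha_i)=1/\sqrt{\lambda_i}$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(36, 36))
F = A @ A.T + 36.0 * np.eye(36)   # symmetric positive-definite stand-in

lam, E = np.linalg.eigh(F)        # eigenvalues ascending
order = np.argsort(lam)[::-1]     # best-determined PC first
lam, E = lam[order], E[:, order]
sigma_alpha = 1.0 / np.sqrt(lam)  # errors on the PC coefficients

# The alpha_i are uncorrelated with these errors: E^T F^-1 E = Lambda^-1
cov_alpha = E.T @ np.linalg.inv(F) @ E
assert np.allclose(cov_alpha, np.diag(sigma_alpha**2))
```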
Combining the different cosmological probes is achieved by adding their
associated Fisher matrices. We add the $45\times 45$ Fisher matrix for
clusters (marginalized over the mass nuisance parameters) to the combined BAO+SNIa+WL+Planck
Fisher matrix and report the improvement in the figures of merit and
accuracies in the PCs as well as shapes of the new PCs.
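This combination step amounts to linear algebra: marginalize the cluster Fisher matrix over its nuisance block (a Schur complement) and add the result to the Fisher matrix of the independent probes. A toy-dimension sketch (3 cosmological + 2 nuisance parameters rather than the 45 + 16 used here):

```python
import numpy as np

def marginalize(F, n_cosmo):
    """Marginal Fisher matrix over trailing nuisance parameters (Schur)."""
    Fcc = F[:n_cosmo, :n_cosmo]
    Fcn = F[:n_cosmo, n_cosmo:]
    Fnn = F[n_cosmo:, n_cosmo:]
    return Fcc - Fcn @ np.linalg.solve(Fnn, Fcn.T)

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
F_clusters = A @ A.T + 5.0 * np.eye(5)   # toy cluster Fisher matrix
F_other = np.diag([4.0, 2.0, 1.0])       # toy BAO+SNIa+WL+Planck block

F_total = marginalize(F_clusters, 3) + F_other

# Schur complement agrees with invert-select-reinvert:
Cov = np.linalg.inv(F_clusters)
assert np.allclose(marginalize(F_clusters, 3), np.linalg.inv(Cov[:3, :3]))
```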
\section{Results}\label{sec:res}
\begin{figure*}[!t]
\includegraphics[scale=0.30]{plots/3a.pdf}
\includegraphics[scale=0.30]{plots/3b.pdf}
\vspace{-0.3cm}
\caption{Improvement in the figures of merit relative to the pre-JDEM
combination (BAO+SNIa+WL+Planck) when cluster information is added. In both
panels we add a uniformly increasing prior on the nuisance parameters as
explained in the text, and show (on the x-axis) the
effective resulting prior on the scatter in mass $\sigma_{\ln M}(M, z)$, that is,
the (square root of the) left-hand sides of Eqs.~(\ref{eqn:msigdefsz}) or
(\ref{eqn:msigdef}) that correspond to the prior values of the nuisance
parameters on the right-hand sides at $z=1.0$ and $M=10^{15}h^{-1} M_{\odot}$.
For the OPT+SZ case, the uncertainty in the optical scatter is shown on
the x-axis. The left panel shows the improvements in the (DETF) ${\rm FoM}$,
while the right panel shows improvements in the inverse error in the growth
index, ${\gamma \rm FoM}$. }
\label{fig:FoM_over_FoM0}
\end{figure*}
Our baseline is the combined pre-JDEM BAO+SNIa+WL+Planck case from the FoMSWG
report. The baseline uncertainties in various dark energy parameters, after
marginalization over all nuisance and cosmological parameters, are
$\sigma(w_0) = 0.10$, $\sigma(w_a) = 0.31$, $\sigma(w_p)=0.028$ ($z_p=0.40$),
${\rm FoM}=116$, and $\sigma(\gamma)=0.21$ (${\gamma \rm FoM}=4.8$). These constraints are
dominated by BAO+Planck which alone yield $\sigma(w_0) = 0.15$, $\sigma(w_a) =
0.44$, $\sigma(w_p)=0.037$,
${\rm FoM}=61$. In comparison, WL+Planck and SNIa+Planck yield ${\rm FoM}=9.8$ and
0.42, respectively.
We first consider the cluster information alone, with only a
Planck prior adopted from \cite{FoMSWG}. The constraints in this case,
assuming flat external priors on cluster nuisance parameters, are $\sigma(w_0)
= 0.10$, $\sigma(w_a) = 0.41$, $\sigma(w_p)=0.036$ ($z_p=0.28$), ${\rm FoM}=66$,
and $\sigma(\gamma)=0.17$ (${\gamma \rm FoM}=6.0$). These constraints are comparable
(and complementary) to the BAO+Planck constraints. Fig. \ref{fig:PC_clus}
shows the first three principal components for the combined (OPT+SZ) cluster
survey combined with Planck. Two cases are shown: completely unknown and
perfectly known nuisance parameters. In the first case, the first principal
component peaks at $z\sim 0.3$, which is not surprising given that most
clusters are at $z\lesssim 1$: while this peak sensitivity is at lower
redshifts than that for BAO surveys, it is at slightly larger $z$ than the
peak for SNIa.
As seen in Fig.~\ref{fig:PC_clus}, adding priors to nuisance parameters moves
the cluster PC weights to higher $z$. This is easy to understand: freedom in
the nuisance parameters has progressively more deleterious effects as redshift
increases, as can be deduced from
Eqs.~(\ref{eqn:mbiasdefsz})-(\ref{eqn:msigdef}). Thus, priors on the nuisance
parameters restore the ability of the survey to probe higher redshifts, and
push the principal components to higher $z$.
Next, we combine the cluster information with BAO+SNIa+WL+Planck. The amount
of information that clusters contribute is a strong function of the
systematics assumed, in our case, a function of the {\it external priors} on
the nuisance parameters. We find that clusters provide very significant
improvement in the figures of merit even with uninformative (flat) priors on
the cluster nuisance parameters. The new clusters+SNIa+WL+BAO+Planck figure
of merit is 206, which is nearly a factor of two better than
BAO+SNIa+WL+Planck alone. The pivot error is $\sigma(w_p)=0.022$ ($z_p=0.41$)
which is $\sim 25\%$ better. Constraints on $w_0$ and $w_a$ improve by about
50\% each, since with clusters $\sigma(w_0)=0.065$ and $\sigma(w_a)=0.214$.
Constraints on growth improve by more than a factor of 2, with
$\sigma(\gamma)=0.099$ (${\gamma \rm FoM}=10$). The main effect of
including clusters is to provide significant additional information on the
growth of density perturbations. Of the complementary techniques we
consider, only weak lensing probes the growth, but our fiducial cluster
model gives (slightly) stronger constraints on growth than the pre-JDEM
combination of BAO+SNIa+WL+Planck.
The left panel of Fig.~\ref{fig:PC_combined} shows the best-determined three
principal components for the fiducial pre-JDEM survey adopted from
\cite{FoMSWG} when clusters are added, i.e. clusters+SNIa+WL+BAO+Planck. For
the cluster survey we assume OPT+SZ with flat external priors on nuisance
parameters. We see that clusters make the total principal components look more
``cluster-like'' (compare to Fig.~\ref{fig:PC_clus}) since they add a lot of
information to the total.
For the same reason, clusters move the
weight of the best-determined PC toward lower redshifts.
The right panel of Fig.~\ref{fig:PC_combined} shows the accuracies
$\sigma(\alpha_i)$ with which the coefficients $\alpha_i$ of the principal
components can be measured. The contribution of clusters becomes more pronounced
for higher PC number, leading to a nearly constant fractional improvement
in error. With clusters, the fourth eigencoefficient is about as well
constrained as the second eigencoefficient in the baseline case without
clusters.
If external priors on the nuisance parameters are available, the full power of
cluster constraints is even more evident. To model such priors, we scale all
errors by the same fractional value --- for each nuisance parameter $p_i$, we
let $F_{ii}\rightarrow F_{ii}(1+\alpha)$ where $\alpha$ varies from zero (flat
prior) to infinity (sharp prior). Consequently, the additional information in
each nuisance parameter is a fixed fraction of the original (unmarginalized)
error in the parameter. While other choices are possible for adding priors, we
settle on this simple prescription to illustrate the effects of external
information on nuisance parameters.
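The prescription $F_{ii}\rightarrow F_{ii}(1+\alpha)$ is simple to implement; the sketch below, on a toy diagonal Fisher matrix, shows the two limiting cases of flat and nearly sharp priors.

```python
import numpy as np

def add_nuisance_priors(F, nuisance_idx, alpha):
    """Scale diagonal entries as F_ii -> F_ii (1 + alpha) for nuisance params."""
    F = np.array(F, dtype=float)
    for i in nuisance_idx:
        F[i, i] *= (1.0 + alpha)
    return F

F = np.diag([10.0, 4.0, 25.0])                         # toy Fisher matrix
F_flat = add_nuisance_priors(F, [1, 2], alpha=0.0)     # flat prior: unchanged
F_tight = add_nuisance_priors(F, [1, 2], alpha=1e6)    # nearly sharp prior
assert np.allclose(F_flat, F)
assert F_tight[2, 2] > F[2, 2]
```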
The left panel of Fig. \ref{fig:FoM_over_FoM0} shows the ratio of the figure
of merit that includes all probes, ${\rm FoM}$, and the ${\rm FoM}$ with clusters left
out (that is, ${\rm FoM}/{\rm FoM}_0$ where ${\rm FoM}_0=116$). We consider three cluster
survey scenarios: an optical survey, an SZ survey with optical follow-up for
photometric redshift measurements only \footnote{For SPT, optical follow-up is
expected from the DES, the Blanco Cosmology Survey (BCS) and the Magellan
Telescope.}, and the cross-calibrated OPT+SZ survey. On the x-axis we show
the effective prior on the scatter in mass (the quantity
$(\sigma_{\ln M}^2(z))^{1/2}$ from Eqs.~(\ref{eqn:msigdefsz}) and
(\ref{eqn:msigdef})) --- priors are added to other nuisance parameters as
well, we simply do not show them. The plot shows that the OPT+SZ combination
improves the total ${\rm FoM}$ by more than a factor of four if the scatter is
known to high precision. In the more realistic cases where mass scatter (and
other corresponding parameters) are known to finite accuracy from independent
measurements, we still see improvement by factors of $\sim 3$.
Priors on nuisance parameters contribute to the information content only if
they are substantially stronger than the intrinsic (``self-calibrated'')
uncertainties in these parameters; for the scatter in mass, for example, this
implies the knowledge of $(\sigma_{\ln M}^2(z))^{1/2}$ to better than $O(1)$ as
Fig.~\ref{fig:FoM_over_FoM0} shows. Comparing the OPT, SZ, and OPT+SZ
cases, we see that as prior information approaches zero (high values of
$\sigma_{\rm prior}$), the cross-calibration provides a lot of extra
information relative to OPT or SZ alone.
The right panel of Fig. \ref{fig:FoM_over_FoM0} shows the corresponding figure
of merit for the growth index $\gamma$. Even stronger improvements are now
seen, with the $\gamma$ figure of merit increasing between a factor of two
(flat priors) and ten (infinitely sharp priors). However, we caution that
simulations of modified gravity models need to be done to determine whether
the impact of modified structure growth on the cluster abundance is adequately
captured by the $\gamma$ parameter. Nevertheless, the right panel of Fig.
\ref{fig:FoM_over_FoM0} indicates that clusters appear to have at least as
much potential to improve the pre-JDEM constraints on the growth history of
the universe as they do for the expansion history (a similar conclusion has
been reached in Ref.~\cite{tang_weller} for a specific modified gravity
model). We also see that the SZ survey is more useful for improving ${\gamma \rm FoM}$
than the DETF ${\rm FoM}$; this is because SZ probes higher
redshifts, which allows for improved constraints on the redshift evolution
of the growth of structure and hence $\gamma$.
\section{Discussion: Implications of the Assumptions}\label{sec:disc}
In this section we discuss the validity of the assumptions we made and the
consequences of varying those assumptions. We divide our assumptions
into optimistic and pessimistic.
The assumptions we consider optimistic are:\\[-0.6cm]
\begin{itemize}
\item
The optical mass threshold ($M^{\rm th}=10^{13.5}h^{-1}M_{\odot}$); \\[-0.6cm]
\item
The SZ mass threshold ($M^{\rm th}=10^{14.2}h^{-1}M_{\odot}$);\\[-0.6cm]
\item
Perfect selection for both SZ and optical cluster finding;\\[-0.6cm]
\item
SPT area (4000 sq. deg.; could be less);\\[-0.6cm]
\item
Known functional form of the scatter in the mass-observable relation (lognormal);\\[-0.6cm]
\item
No mass dependence in the SZ mass-observable scatter (see
Eq.~(\ref{eqn:msigdefsz})).
\item
Perfect knowledge of photometric redshift errors.
\end{itemize}
The assumptions that are arguably pessimistic are:\\[-0.6cm]
\begin{itemize}
\item
No other cluster techniques (e.g. X-ray or weak lensing) are available to
further cross-calibrate cluster counts;\\[-0.6cm]
\item
Large fiducial value of scatter for both optical ($\sigma_0=0.5$) and SZ ($\sigma_0=0.25$);\\[-0.6cm]
\item
Area of DES (4,000 sq.\ deg.; could be as large as
10,000 sq. degrees);\\[-0.6cm]
\item
Low redshift range of optical cluster-finding ($z<1$);\\[-0.6cm]
\item
Cubic polynomial evolution of redshift scatter for optical and SZ and mass
evolution of optical scatter (Lima \& Hu, 2005 \cite{lim05} show that cubic
redshift evolution of the scatter yields near-maximal degradation of
cosmological parameters);\\[-0.6cm]
\item
Constraints are based on our current knowledge of cluster physics, while the
field is developing rapidly.
\end{itemize}
The first three optimistic assumptions are the most important. Since the
mass function falls rapidly with increasing mass, the lower mass bins contain
most of the clusters, and are therefore most relevant. Cunha (2009)
\cite{cun09} shows that cross-calibration decreases the sensitivity of the
constraints to the mass threshold somewhat. Here we have checked that
increasing the optical limit from $\log M^{\rm th} = 13.5$ to $13.7$, or the SZ
limit from $14.2$ to $14.5$ degrades the figures of merit by
$10$-$20\%$. Increasing both leads to $30\%$ degradation in the FoMs.
The importance of uncertainty in photometric redshift errors has been studied
extensively by \cite{lim07}.
For the surveys we consider here, it is not unreasonable to assume that large
enough training sets will be available to sufficiently constrain the evolution of the redshift
errors and characterize the survey selection.
Of the pessimistic assumptions, the first one is especially significant: for
example, if X-ray information is available (as expected from surveys
such as eRosita), then X-ray plus optical cross-calibration alone can lead to
excellent dark energy constraints even with the unexpected failure of one or
more of our SZ assumptions.
To test assumptions about the functional form of the scatter, we added another
Gaussian to both optical and SZ mass-observable relations\protect{\footnote{Data and
simulations (see e.g. Cohn \& White 2009 \cite{coh09}) suggest that the
double Gaussian is a good representation of projection effects.}}, for a
total of 27 new parameters (43 total); the new parameters describe the
evolution with redshift and mass of the mean and variance of the new Gaussian
(cf. Eqs.~\ref{eqn:mbiasdef} and \ref{eqn:msigdef}), the ratio between the two
Gaussians describing each mass-observable relation, and the correlation
coefficient between optical and SZ (see \cite{cun09}). The figures of merit
degrade by merely 15-20\%. The small additional degradation is a consequence
of the fact that the new nuisance parameters do not introduce significant new
degeneracies with cosmological parameters. If we instead add 4 parameters to
characterize the {\it mass dependence} of the SZ scatter and bias, the
degradations are even weaker, being $\lesssim 5\%$. Intuitively, adding
mass-dependent evolution of the SZ bias and scatter is not as important as the
functional form of the scatter because the SZ probes too narrow a range of
masses for the evolution to be significant.
The arguments and tests outlined in this section show that the assumptions
made in this paper are not overly optimistic, and that the unforeseen
systematic effects would have to be rather capricious in order to lead to
significant further degradations in the cosmological constraints.
\section{Conclusions}\label{sec:conc}
We have shown that galaxy clusters are a potentially powerful complement to
other probes of dark energy. Assuming optimally combined optical and SZ
cluster surveys based on fiducial DES and SPT expectations and allowing for a
generous set of systematic errors (a total of 16 nuisance parameters), we have
shown that the constraints on the figure of merit expected in 2016 from baryon
acoustic oscillations, type Ia supernovae, weak lensing, and Planck improves
by nearly a factor of two when clusters are added.
This improvement is achieved without any external prior knowledge on the
cluster mass-observable nuisance parameters (but also without explicitly allowing
for errors in the theoretically predicted mass function or cluster
selection).
We have further illustrated the cluster contribution to constraints by
computing, for the first time, the principal components of the equation of
state of dark energy for clusters alone and clusters combined with other
probes. We found that the first cluster principal component peaks at $z\simeq
0.3$, indicating the ``sweet spot'' of cluster sensitivity to dark
energy. This redshift increases slightly if external information on the
cluster nuisance parameters is available. Each eigencoefficient of the
principal component expansion is improved by about a factor of two when
clusters are added, indicating that the improvements extend to well beyond one
or two parameters.
Finally, we have shown that measurements of the growth index of linear
perturbations $\gamma$ (which is a proxy for testing modified gravity) improve
by a factor of several with cluster information. While this particular
calculation depends on assumptions about the modified gravity model, it
broadly illustrates the intrinsic power of clusters to measure growth and
distance separately and to obtain useful constraints on modified gravity
explanations for the accelerating universe.
We conclude that cross-calibrated cluster counts have enough intrinsic
information to significantly improve constraints on dark energy even if the
associated systematics are not precisely known.
\vspace{-0.8cm}
\acknowledgments
\vspace{-0.3cm}
CC and DH are supported by the DOE OJI grant under contract
DE-FG02-95ER40899, NSF under contract AST-0807564, and NASA under contract
NNX09AC89G. DH thanks the Galileo Galilei Institute in Firenze for good
coffee, and we thank an anonymous referee, Gus Evrard and Eduardo Rozo for comments.
0904.2388
\section{Introduction}
M82 is a nearby \citep[3.6 Mpc based on a Cepheid distance to M81
by][]{FreedmanHughesMadore1994} irregular (I0) galaxy with a very
active starburst in its nuclear region. It harbors many bright supernova
remnants in its central region, which have been studied extensively
for decades
\citep{MuxlowPedlarWilkinson1994,BeswickRileyMartiVidal2006,Fenech2008}.
\citet{vanBurenGreenhouse1994} estimate a supernova rate of 0.1
year$^{-1}$. However, no new radio supernova has been discovered.
\citet{Singer2004} reported a supernova in M82 (SN2004am) that was
classified as type-II \citep{Mattila2004}, but it has not been detected at
radio wavelengths \citep{Beswick2004}.
\citet{KronbergSramek1985} and \citet{KronbergSramekBirk2000} monitored
the flux densities of 24 radio sources in M82 from 1980 until 1992. Most
sources (75\%) remained surprisingly constant. There is some controversy about
how the fluxes of these compact radio sources can be stable.
Models of supernova remnants expanding into a dense medium may explain this
\citep{ChevalierFransson2001}. \citet{SeaquistStankovic2007} argue that the
radio emission could arise from wind-driven bubbles. Studying the evolution
of a young source could be very important for understanding these models.
The strongest source, 41.9+58.0, shows an exponential decay with a decay
rate of $\tau_d$=11.9 years \citep{KronbergSramekBirk2000}. Another source,
41.5+597, which was detected in 1981 with a flux density of $\sim$10~mJy,
faded within a few months to a flux density below 1 mJy
\citep{KronbergSramek1985,KronbergSramekBirk2000}. The nature of this
strongly variable source was never clarified.
Radio supernovae are rare events. So far only about two dozen have been detected
\citep{Weiler2002} and most of them were quite distant and rather weak. This
makes it difficult to study them in great detail. One notable exception is
SN1993J \citep{Schmidt1993} in M81, which has been studied extensively
\citep{Marcaide1997,Marceide2009, Bietenholz2001, Bietenholz2003, Perez2001,
Perez2002, Bartel2002, Bartel2007}. Thus, the detection of a new nearby
supernova would be highly desirable.
M82 is part of the M81 group of galaxies and it shows clear signs of tidal
interaction with M81 and NGC\,3077 \citep{YunHoLo1994}. This makes the M81
group an ideal system for studying galaxy interaction in great detail. With
this
motivation, we have initiated a project to measure the proper motions of M81
and M82 with VLBI astrometry. We are observing M81*, the nuclear radio source
in M81, bright water masers in M82, and three compact background
quasars. Based on our experience with measurements of proper motions in the
Local Group \citep{BrunthalerReidFalcke2005, BrunthalerReidFalcke2007}, we
expect a detection of the tangential motions of M81 and M82 relative to the
Milky Way within a few years. So far, we have observed at three epochs at 22
GHz with the High Sensitivity Array (including the Very Long Baseline Array,
the Very Large Array, and the Greenbank and Effelsberg telescopes).
Here, we report the detection of a new transient source in M82 based on the
data from the NRAO\footnote{The National Radio
Astronomy Observatory is a facility of the National Science Foundation
operated under cooperative agreement by Associated Universities, Inc.} Very
Large Array (VLA).
\section{Observations and data reduction}
M82 was observed with the Very Large Array (VLA) as part
of the High Sensitivity Array observations under projects BB229
and BB255 on 2007 January 28, 2008 May 03, and 2009 April
08. The total observing time in each epoch was 12 hours. We
used M81* as phase calibrator and switched between M81*, M82, and three
extragalactic background quasars every 50 seconds
in the cycle M81* -- 0945+6924 -- M81* -- 0948+6848 -- M81*
-- M82 -- 1004+6936 -- M81*, yielding an integration time of $\sim$100
minutes on M82. We observed with two frequency
bands of 50 MHz, each in dual circular polarization.
The data reduction was performed with the Astronomical
Image Processing System (AIPS) and involved standard steps.
On 2007 January 28 and 2009 April 08, we used 3C\,48 as
flux density calibrator. M81* was used as a gain and phase calibrator and
one round of phase self-calibration was performed on M82.
Unfortunately, no flux density calibrator was observed on 2008
May 03. Here, we assumed a flux density of 150 mJy for the highly variable
source M81*. This value was chosen since it was in the range of typical values
at cm wavelengths \citep[e.g.][]{BrunthalerBowerFalcke2006} and it yielded flux
densities for 0945+6924, 1004+6936, and 44.0+59.6 in M82 that were consistent
with their flux densities at the other two epochs.
M82 was also observed on 2008 March 24 with the VLA at 22 GHz for 10 minutes
($\sim$6 minutes integration time on M82) in spectral line mode. 3C\,48 was
used as flux density calibrator, and 1048+717 was used as gain and phase
calibrator. A total bandwidth of 9.18 MHz was observed. We also analyzed the
archival VLA data of 2007 October 29 at 4.8 GHz, which is the latest available
data before our observations. Here, 3C\,286 was used as flux density
calibrator and 1048+717 was used as phase calibrator. Eighteen minutes of
integration time on M82 was spread over 1.5 hours.
\section{Results}
A new bright source was detected on 2008 May 03 (see Fig.~\ref{Fig:detect},
middle and Fig.~\ref{Fig:detail}, bottom). With its flux density of $\sim90$
mJy, it was clearly the brightest radio source in the field (at least five
times brighter than the second brightest source). Almost one year later, on
2009 April 08, the source had faded to a flux density of $\sim11$ mJy,
i.e., it lost almost 90\% of its flux density (see Fig.~\ref{Fig:detect}, bottom). We
estimated its position to be
$\alpha_{J2000}$=09$^{h}$55$^{m}$51.551$^{s}\pm0.008^{s}$
$\delta_{J2000}$=69$^\circ$40$'$45.792$''\pm0.005''$. Following the
traditional convention of previous papers naming a source after the
position offset to $\alpha_{B1950}$=09$^{h}$55$^{m}$ and
$\delta_{B1950}$=69$^\circ40'$, our new detection would be 42.82+59.54. The
positions in the two observations are consistent and the uncertainty was
estimated by comparing the position of another clearly identified source
(44.0+59.6) with the positions from observations with the Multi-Element Radio
Linked Interferometer Network (MERLIN) by \citet{MuxlowPedlarWilkinson1994}.
\begin{figure}[!h]
\resizebox{0.5\textwidth}{!}
{\includegraphics[bb=0.0cm 1.8cm 25.1cm 51.2cm,clip,
angle=0]{2327fig1.eps}}
\caption{
{\bf Top:} VLA B-configuration image at 4.8 GHz of M82 on 2007
October 29. Contours start at 3 mJy and increase with factors of
2. The beamsize is 1.7$\times$1.1 arcseconds with a position angle
of -58$^\circ$. There is no detectable source at the position of our
new source. The peak flux in
the image is 45 mJy~beam$^{-1}$.
{\bf Middle:} VLA C-configuration image at 22 GHz of M82 on 2008 May
03. Contours start at 0.7 mJy and increase with factors of 2. The
beamsize is 0.90$\times$0.83 arcseconds at a position angle of
-73$^\circ$. The new source at position
$\alpha_{J2000}$=09$^{h}$55$^{m}$51.551$^{s}\pm0.008^{s}$
$\delta_{J2000}$=69$^\circ$40$'$45.792$''\pm0.005''$ is clearly
visible, with a peak flux of 90 mJy~beam$^{-1}$.
{\bf Bottom:} VLA B-configuration image at 22 GHz of M82 on 2009 April
08. Contours start at 0.7 mJy and increase with factors of 2. The
beamsize is 0.32$\times$0.29 arcseconds with a position angle of
-60$^\circ$. The new source is still clearly
visible, but its peak flux density has decreased to 11 mJy~beam$^{-1}$.
}
\label{Fig:detect}
\end{figure}
To constrain the start time of the flare, we reduced two earlier VLA
observations of M82. The spectral line observation on 2008 March 24 had lower
quality, since we observed with a narrow bandwidth and had only $\sim$6 minutes
integration time on M82. Nevertheless, we could easily confirm the detection
of the new source at the same position with a similar but slightly higher flux
density as in the 2008 May 03 observation. Next, we analyzed 4.8 GHz data from
2007 October 29. Here, no bright point source was discovered (see
Fig.~\ref{Fig:detect}, top), so we conclude that the start of the flare was
between 2007 October 29 and 2008 March 24. However, it is possible that the
source could be highly self-absorbed early in its development and thus might
have been detectable at high frequencies, but not at 4.8 GHz. In this case,
the flare might have started earlier, but not earlier than 2007 January 28
(the day of our first 22 GHz observation, see Fig.~\ref{Fig:detail}, top).
The comparison of flux densities from different epochs in M82 is difficult
due to the diffuse emission, in particular when comparing observations with
different spatial resolutions. In Table~\ref{tab:flux}, we list the flux
densities of the new source along with the flux densities of one additional
source in M82 (44.0+59.6), M81*, and three background quasars.
\section{Pre-flare observations at other wavelengths}
\citet{Matsumoto2001} present high-resolution (FWHM $\sim 0\rlap{.}''5$) X-ray
imaging of the central $1'\times1'$ (1.1 $\times$ 1.1 kpc) region of M82
with the High-Resolution Camera (HRC) aboard the Chandra X-ray Observatory.
These images, taken on 1999 October 28 and 2000 January 20, show a total of 9
sources within this area, some of which are highly variable. They do
not find a counterpart to our new radio source. Neither do \citet{Kong2007} in
a total of 12 datasets of a similar sized region taken with the Chandra HRC and
Advanced CCD Imaging Spectrometer Array (ACIS-1 and -2) taken between 1999
September 20 and 2005 August 18. These authors also present Hubble Space
Telescope (HST) $H-$band ($1.6~\mu$m) imaging of the region with
the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) that, again,
does not show a counterpart to our variable radio source.
\citet{KoerdingColbertFalcke2005} observed M82 with the VLA at 8.4
GHz eight times between 2003 June and October in order to detect possible
radio flares
from several ultraluminous X-ray sources (ULX). No significant emission was
found at the position of our new detection above a noise level of 70 $\mu$Jy
at each epoch. A very sensitive eight-day integration of M82 at 5 GHz
was performed with MERLIN between 2002 April 1 and 28
\citep{Fenech2008}. The resulting 40 milliarcsecond resolution images show no
significant emission at the position of our new source above the rms noise
level of $17~\mu$Jy~beam$^{-1}$. \citet{Tsai2009} present a 7 mm map from
the VLA from 2005 April 22, with no detection of more than 0.3 mJy at
the position of our source.
\begin{figure}[!h]
\resizebox{0.5\textwidth}{!}
{\includegraphics[angle=0]{2327fig2.eps}}
\caption{Contours and grey scale represent 22.2 GHz images of the central
region of M82 taken on 2007 January 28 ({\bf top}) and 2008 May 03 ({\bf
bottom}). The new bright source is very conspicuous in the latter image. Contour
values are -4, 4, 8, 16, 32, 64, 128, 256, and 512 times 0.2 mJy~beam$^{-1}$,
roughly the (comparable) rms noise level in both images. The star symbols mark
the positions of the X-ray sources discussed by \citet{Matsumoto2001} and in
the top panel bear these authors' nomenclature. The asterisk gives the position
of the radio kinematic center (R.K.C.) of M82 as determined by
\citet{WeliachevFomalontGreisen1984}. Both images were restored with a
circular beam of 1.5 arcseconds. Since the observation on 2007 January 28 was
made in C configuration, there was much more data from short baselines, which
results in better sensitivity to extended emission.
}
\label{Fig:detail}
\end{figure}
\section{Discussion}
The most straightforward explanation for this new source is a new radio
supernova. A quantitative analysis of the lightcurve is difficult due
to the small number of data points (3) we obtained with different angular
resolutions, the complication of diffuse emission in M82, and the uncertain
absolute flux scale in our observation of 2008 May 03. We extracted the flux
densities of our source by restricting the interferometer (u,v)-data to
$>$30~k$\lambda$ to remove most of the extended emission
(Table~\ref{tab:flux}).
We fitted the lightcurve of the source with an exponential decay,
$S(t)=S_0\,e^{(t_0-t)/\tau_d}$.
This yields a decay timescale of $\tau_d=0.46\pm0.03$ yr
(Fig.~\ref{Fig:light}). Since we do not know the exact time of the onset of the
flare, $t_0$, we get values of $S_0$ between 110 and 270 mJy for $t_0$ between
2008 March 24 and 2007 October 29. For the fit, we added in quadrature the
difference between peak and integrated flux density as one error estimate and
systematic (flux density scale) errors of
5\% for the first and third epochs, and 10\% for the second epoch, where no
proper flux density calibrator was observed. The resulting $\chi^2_{pdf}$ was
0.5. Based on this fit, we predict that the source flux density will drop to 5
mJy by mid-2009 and 1.5 mJy in early 2010. We also fitted a simple exponential
decay to the 22 GHz lightcurve of SN1993J in M81. Here we used the first year
of data after the peak in the lightcurve published in
\citet{WeilerWilliamsPanagia2007}. The fit yields $\tau_d=0.72\pm0.09$ yr,
indicating that the new source in M82 decays faster than SN1993J.
A power-law fit ($S(t)\propto(t-t_0)^{\alpha}$) is not consistent with the
lightcurve of the new source in M82 ($\chi^2_{pdf}=5$--$8$ for $t_0$ between
2007.8 and 2008). If one allows an earlier $t_0$ (e.g., if the source is highly
self-absorbed at lower frequencies), the power-law fit improves
($\chi^2_{pdf}=2$ for $t_0$=2007.08, one day after our 2007 January 28
observation).
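The decay timescale can be roughly reproduced from the integrated 22 GHz flux densities in Table~\ref{tab:flux}. The sketch below is an unweighted log-linear fit; the decimal-year epochs are approximate conversions of the observation dates, and unlike the fit in the text it ignores the flux-scale errors:

```python
import numpy as np

# Integrated 22 GHz flux densities of the transient (mJy) and approximate
# decimal-year epochs (2008 Mar 24, 2008 May 03, 2009 Apr 08).
t = np.array([2008.23, 2008.34, 2009.27])
S = np.array([99.6, 88.4, 9.2])

# S(t) = S0 * exp((t0 - t)/tau_d)  =>  ln S is linear in t, slope = -1/tau_d
slope, intercept = np.polyfit(t, np.log(S), 1)
tau_d = -1.0 / slope   # decay timescale in years
```

This crude fit gives $\tau_d\approx0.43$ yr, consistent within the errors with the weighted result of $0.46\pm0.03$ yr quoted above.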
\begin{figure}
\resizebox{0.5\textwidth}{!}
{\includegraphics[angle=-90]{2327fig3.ps}}
\caption{Lightcurve of the new radio transient from our
observations (squares). Also shown are an exponential-decay fit (solid
line) and power-law fits (dashed lines), assuming different peak times
of the flare: 2008.0 (lower line), 2007.8 (middle line), and 2007.08
(upper line).
}
\label{Fig:light}
\end{figure}
The position of the new source is marginally consistent with the position of
the kinematic center of M82 \citep{WeliachevFomalontGreisen1984}. This raises
the possibility that, rather than a supernova, we could have detected a
flare from a supermassive black hole in the center of M82. Flares in AGN
sources can often be fitted with exponential decays
\citep{ValtaojaLaehteenmaekiTeraesranta1999}. For
example, a strong radio flare in 1999 in the Seyfert galaxy III\,Zw\,2 has a
decay rate of $\tau_d$=0.73 year \citep{BrunthalerFalckeBower2005}.
Other explanations, such as luminous flares from quiescent supermassive black
holes induced by a close passage of a star that is torn apart by tidal forces,
are also possible. However, such flares show a power-law decay with
$\alpha=-5/3$ \citep{EvansKochanek1989, Ayal2000, Gezari2009}, which is not
consistent with our lightcurve.
Moreover, M82 has never shown evidence of a nuclear supermassive black hole
(which would be surprising for a small irregular galaxy). Since the progenitor
for our flare showed no X-ray emission, a stellar or intermediate mass black
hole is also not probable. Thus, based on the current data, a new radio
supernova seems to be the most likely explanation.
\begin{table}
\caption{Details of the detected compact sources in our VLA observations with
statistical errors on the 22.2 GHz flux densities.}
\label{tab:flux}
\centering
\begin{tabular}{cccc}
\hline\hline
Source & Date & Peak Flux & Integrated Flux\\
& & [mJy~beam$^{-1}$] & [mJy]\\
\hline
M82 transient & 24/03/2008 & 104.4$\pm$1.3 & 99.6$\pm$2.3 \\
(42.82+59.54) & 03/05/2008 & 89.9$\pm$0.1 & 88.4$\pm$0.2 \\
& 08/04/2009 & 11.1$\pm$0.1 & 9.2$\pm$0.2 \\
\\
44.0+59.6 & 28/01/2007 & 15.0$\pm$0.2 & 77.8$\pm$1.0 \\
& 03/05/2008 & 12.5$\pm$0.1 & 11.0$\pm$0.2 \\
& 08/04/2009 & 14.7$\pm$0.1 & 12.4$\pm$0.2 \\
\\
M81* & 28/01/2007 & 71$\pm$1 & 72$\pm$1 \\
& 03/05/2008 & 150 & 150 \\
& 08/04/2009 & 195$\pm$3 & 179$\pm$5 \\
\\
0945+6924 & 28/01/2007 & 14.3$\pm$0.2 & 14.4$\pm$0.3 \\
& 03/05/2008 & 14.8$\pm$0.1 & 14.7$\pm$0.2 \\
& 08/04/2009 & 13.8$\pm$0.2 & 12.2$\pm$0.3 \\
\\
0948+6848 & 28/01/2007 & 22.5$\pm$0.2 & 24.3$\pm$0.3 \\
& 03/05/2008 & 59.1$\pm$0.2 & 59.8$\pm$0.3 \\
& 08/04/2009 & 63.6$\pm$0.4 & 56.3$\pm$0.6 \\
\\
1004+6936 & 28/01/2007 & 31.0$\pm$0.2 & 30.8$\pm$0.3 \\
& 03/05/2008 & 30.9$\pm$0.2 & 30.4$\pm$0.2 \\
& 08/04/2009 & 34.6$\pm$0.3 & 30.8$\pm$0.4 \\
\hline
\end{tabular}
\end{table}
\begin{acknowledgements}
We thank the referee Dr. K. Weiler for critically reading the manuscript.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:introduction}
\setcounter{equation}{0}
The first measurements of the Sun's rotation rate were obtained by careful
tracking of the location of sunspots. In fact, in 1611, shortly after the
invention of the telescope, independent observations of the motion of sunspots
across the solar disk by Galileo Galilei and Christopher Scheiner proved
unequivocally that the Sun rotated. We now know that magnetic features are
not purely passive tracers. Detailed tracking of sunspots and other magnetic
features---such as pores and plages---has revealed that magnetized regions
rotate more quickly than the surrounding field-free plasma
\cite[e.g.,][]{Howard:1970, Golub:1978, Komm:1993}. Furthermore, the larger
the flux concentration the more rapid the rotation rate \citep{Ward:1966,
Howard:1984, Howard:1992}. Recent helioseismic analyses have shown that the
plasma within active regions as a whole also rotates more quickly
\cite[e.g.,][]{Braun:2004}. This superrotation extends to depths as great as
16 Mm \citep{Komm:2009} and there is evidence that the leading polarity might
rotate more rapidly than the trailing polarity \citep{Zhao:2004b, Svanda:2008b}. As in the
earlier studies involving the tracking of sunspots and plage, the helioseismic
studies find that the prograde rotation speed increases with increasing magnetic
flux density.
The tracking of magnetic features has produced less clear results for
meridional motions. We know from direct Doppler velocity measurements
\cite[e.g.,][]{LaBonte:1982, Hathaway:1996} and from helioseismic techniques
\cite[e.g.,][]{Giles:1997, Braun:1998, Basu:1999, Haber:2002, Zhao:2004a}
that within the surface layers, the meridional circulation is poleward
with roughly a speed of 20 m s$^{-1}$. Outside of active regions, correlation
tracking has found that magnetic flux advects towards the pole at roughly
the same speed, acting like a passive tracer \citep{Svanda:2007}. However,
sunspots do not appear to follow the same rules. While individual sunspots
may move substantially in latitude over their lifetime, sunspots as a
group lack systematic poleward motion. Instead, on average, sunspots appear
to slowly drift ($<2$ m s$^{-1}$) away from the center of the active latitude
belts \citep{Woehl:2001, Woehl:2002}.
In addition to their bulk motion, active regions have internal circulations.
As first revealed through the local helioseismic techniques of ring analysis
and time-distance helioseismology, within the surface layers there are large-scale
flows that stream into active regions. These flows typically occupy a layer
7 Mm deep below the photosphere and have amplitudes of 20--30 m s$^{-1}$
\citep{Haber:2001, Gizon:2001}. These same techniques have also demonstrated
that in deeper layers many, but not all, active regions possess strong outflows
with speeds reaching 50 m s$^{-1}$ \citep{Haber:2004, Zhao:2004a}. The largest
magnetic complexes inevitably evince these deep outflows, but many of the
smaller complexes fail to exhibit such behavior. Figure~\ref{fig:DensePack}
shows examples of these organized flows around a large active complex.
This simple circulation pattern (inflows at the surface, coupled to outflows
at depth) becomes more complicated when we consider the flows that are
observed around sunspots. The tracking of ``moving magnetic features" (MMFs)
has revealed an annular collar of outflow surrounding sunspots
\cite[for a review see ][]{Hagenaar:2005}. This ``moat" of flow has also been
observed helioseismically \citep{Lindsey:1996, Gizon:2000, Braun:2003}. Related
outflows have also been observed surrounding newly emerged active regions that
have yet to fray and form a region of extended plage \citep{Hindman:2006b,
Komm:2008, Kosovichev:2008}. Figure~\ref{fig:EmergingAR} shows the flows around
an active region that emerged just two days prior. The flow field during this
young stage in the active region's life is dominated by strong outflows from
the sunspots.
In this paper we perform detailed measurements of the flows in quiet sun
and in active regions using the local helioseismic technique of ring analysis
to detect flows within the upper 2 Mm of the solar convection zone. Our goal
is to understand the circulations that are typically established within active
regions and to assess how properties of the flow field---such as its divergence
and vorticity---differ between magnetized and nonmagnetized regions. In
\S\ref{sec:HRRA} we discuss the ring analysis procedure in detail. In \S\ref{sec:ARFlows}
we present two different schemes for analyzing the measured flow fields.
In \S\ref{sec:Discussion} we provide a comprehensive discussion of our findings,
and in \S\ref{sec:Conclusions} we present our conclusions.
\section{High-Resolution Ring Analysis (HRRA)}
\label{sec:HRRA}
\setcounter{equation}{0}
Ring analysis assesses the speed and direction of subsurface horizontal
flows by measuring the advection of ambient waves by the flow field. In
the presence of a flow, waves traveling in opposite directions have their
frequencies split by the Doppler effect, providing a direct measure of the
flow velocity in those layers where the waves have significant amplitude.
For this particular study, we have utilized only surface gravity waves
($f$ modes); therefore, we are able to probe a layer several Mm thick lying
directly below the photosphere. The frequency perturbation introduced by
the flow is $\Delta\omega = \bvec{k} \cdot \bvec{\hat{U}}$, where $\bvec{k}$
is the horizontal wavenumber and $\bvec{\hat{U}}$ is the integral over depth
of the horizontal flow velocity weighted by a kernel which is approximately
the kinetic energy density of the surface gravity wave.
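As an order-of-magnitude illustration of this splitting (the numbers below are assumptions chosen for scale, not values from this study): an $f$ mode with a 5 Mm horizontal wavelength advected by a 30 m s$^{-1}$ flow aligned with its wavevector is shifted by about 6 $\mu$Hz.

```python
import numpy as np

# Delta(omega) = k . U_hat: Doppler splitting of an advected wave.
# Illustrative values only: 5 Mm horizontal wavelength, 30 m/s flow
# aligned with the wavevector.
wavelength = 5.0e6                      # m
U = 30.0                                # m/s, depth-averaged flow along k
k = 2.0 * np.pi / wavelength            # rad/m
delta_omega = k * U                     # angular frequency shift, rad/s
delta_nu = delta_omega / (2.0 * np.pi)  # cyclic frequency shift, Hz
```

Waves traveling with and against the flow are split by twice this amount, which is readily measurable in a day-long power spectrum.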
The frequency splittings produced by flows are measured in the Fourier domain.
For a single analysis, a power spectrum is obtained of the wave field in a
localized region on the solar surface by Fourier transforms (two in space,
one in time) of a sequence of tracked, remapped, and apodized Dopplergrams
\citep{Bogart:1995, Haber:1998}. The mode power in the spectrum is distributed
along curved surfaces, which when cut at constant frequency appear as a set
of concentric rings, each corresponding to a mode of different radial order.
These rings are nearly circular in shape with centers displaced slightly from
the origin due to the splitting of the mode frequencies. The frequency splittings
$\Delta\omega$ are obtained as a function of wavenumber by carefully fitting
the $f$ modes in such power spectra with Lorentzian profiles
\cite[e.g.,][]{Haber:2000, Haber:2001}.
A single ring analysis is performed within a small region, or tile, on the
solar surface. The resulting flow is an average measure of the flow within
that tile \citep{Hindman:2005}. We map the flow field over the entire visible
disk by performing many such ring analyses on a mosaic of different locations
on the solar surface. Each day the analyses are repeated to obtain a sequence
of daily flow maps.
For this study we have used Dynamics Program data from MDI \citep{Scherrer:1995}
on the {\sl SOHO} spacecraft. We use the same tiling and tracking scheme discussed
at length in \cite{Hindman:2006a}. This scheme, known as high-resolution ring
analysis (HRRA), produces a daily map of the flow field formed from the analysis
of over $10^4$ tiles that are 2$^\circ$ in heliographic angle on a side. The
tiles overlap and their centers are separated by $0^\circ.9375$ in longitude and
latitude. In order to reduce the amount of image tracking and remapping that
is required, in practice, instead of tracking each of these tiles separately,
we track 189 larger regions that span 16$^\circ$ in angle on a side \citep{Haber:2002}.
The smaller 2$^\circ$ tiles are then extracted from these larger tracked regions.
Each of the larger tiles is tracked for 27.7 hours at the surface rotation rate
appropriate for the center of the tile \citep{Snodgrass:1984}. The large tiles
overlap each other and their centers are separated in longitude and latitude by
7.5$^\circ$.
Since the large tiles overlap, for any given location in the mosaic of 2$^\circ$
tiles there exist multiple realizations of that tile, each extracted from a
different neighboring large tile. In general, any given location will have four
separate tiles, and the flow determinations from these different realizations are
averaged together. Since not all of the neighboring large tiles are tracked at
the same rotation rate,
before averaging we must convert each of the flow realizations to a common rotation
rate. This is accomplished by subtracting from the zonal velocity a longitudinal
and temporal mean obtained from the small tiles from a given year. Therefore, the
reported zonal flows are measured relative to the differential rotation rate obtained
from the data itself.
As in \cite{Hindman:2006a}, the results for waves of different horizontal wavenumber
are averaged together to increase the ratio of signal to noise. A typical daily
flow map, comprised of roughly 10$^4$ measurements, is generated from $1.3\times 10^5$
separate flow determinations. Assuming that the errors are uncorrelated, this
averaging procedure produces a fractional uncertainty of roughly 20\% for any
single flow speed measurement in the daily map. Figures~\ref{fig:EmergingAR}
and \ref{fig:LargeAR} show flow maps obtained by this technique. Figure~\ref{fig:EmergingAR}
shows the flows around a newly emerged active region where the dominant flow
structure is an outflow from the sunspots. Figure~\ref{fig:LargeAR} shows the
flows near a large, mature active complex, which has persisted for
several solar rotations and undergone multiple flux emergence events.
The spatial resolution achieved by this technique is largely determined
by the size of the analysis tiles---which determines the wavelength of the
waves that are sampled; however, the shape of the apodization function, the
details of the fitting procedure and the damping length of the waves may
also play a role \citep{Hindman:2005, Birch:2007}. For the tiles used in
this study, we expect that the shape of the averaging kernel is essentially
the product of a vertical profile and a horizontal planform. The vertical
profile is provided by the standard $f$-mode eigenfunction and the horizontal
planform is a singly-humped function that vanishes at the edge of the tile
and peaks at the tile center \citep{Birch:2007}. Since the $f$ modes are
surface gravity waves, they are confined to a narrow layer just below the
solar surface. The exact depth to which the eigenfunction extends is proportional
to the horizontal wavelength. However, the small tiles used in this study
permit only a relatively narrow band of wavelengths to be measured. Thus,
the vertical eigenfunctions of the measured $f$ modes are rather similar
and the flow measurements are essentially a mean over a layer spanning the
first 2 Mm below the photosphere.
\section{Flows within Active Regions}
\label{sec:ARFlows}
\setcounter{equation}{0}
We have generated daily flow maps using MDI Dynamics Program data for three
periods of time in three subsequent years: 1 March to 26 May 2001, 11 January
to 21 May 2002 and 12 September to 15 November 2003. In total, due to gaps
in some of these periods, we have produced flow maps for 201 days of data.
We have analyzed the measured flow maps with two distinct procedures. The
first is the calculation of probability density functions (PDFs) for a variety
of flow parameters within both quiet sun and within regions of magnetism.
From these PDFs we examine how the mean properties of the flow vary with
magnetic activity as well as how the shapes of the distributions change. The
second analysis involves identifying active regions, measuring spatially
structured flows within those regions and computing average active region
flow structures.
\subsection{Probability Density Functions for Flow Properties}
\label{subsec:Distributions}
\setcounter{equation}{0}
We compute distribution functions for the zonal and meridional components
of the flow field as well as for the divergence and vorticity of the flow. Since
our HRRA procedure only produces estimates of the horizontal flow, the
divergence that we compute is only the horizontal divergence and the vorticity
is only the vertical component of the vorticity. Since we are interested in the
differences in the flow field between magnetized regions and quiet sun, we
need to analyze active pixels separately from quiet pixels. This requires
that we produce co-located flow maps and magnetograms with the same pixel spacing.
We achieve this by using MDI magnetograms and interpolating our HRRA flow maps
to the same spatial sampling via splines. Simultaneously, we compute the spatial
derivatives necessary for the divergence and the vorticity by utilizing the same
spline coefficients. PDFs are computed for quiet sun through the selection
of flow measurements from only those pixels with a field strength less than
50 G. Separately, PDFs for magnetized regions are calculated using pixels
with a field strength greater than 50 G. Furthermore, since we expect that
many of the flow properties may be functions of latitude, we compute separate
PDFs for different latitudinal bands. There are 11 bands in total, each
$10^\circ$ wide, with centers separated by 10$^\circ$ and evenly distributed
about the equator. Each quiet sun PDF is constructed of over 10$^5$ independent
flow measurements, whereas, due to their low filling factor, the PDFs for active
regions are composed of roughly 10$^4$ distinct measurements. As a test, we have
varied the width and number of latitudinal bands, finding little difference in
the results except for the expected changes in error estimates due to the number
of data points in each band. For simplicity, we have chosen to show only the
results for the 10$^\circ$ bands.
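The masking step described above reduces, in essence, to thresholding co-located maps at 50 G and histogramming each pixel population separately. The sketch below applies it to synthetic maps; the Gaussian statistics and the imposed 20 m s$^{-1}$ offset are assumptions for illustration, not MDI data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic co-located maps: |B| in gauss and zonal flow in m/s.
B = np.abs(rng.normal(0.0, 60.0, size=(128, 128)))
v = rng.normal(0.0, 80.0, size=(128, 128))
v[B > 50.0] += 20.0           # imposed superrotation of magnetized pixels

quiet = v[B <= 50.0]          # quiet-sun sample
active = v[B > 50.0]          # magnetized sample
bins = np.linspace(-400.0, 400.0, 81)
pdf_quiet, _ = np.histogram(quiet, bins=bins, density=True)
pdf_active, _ = np.histogram(active, bins=bins, density=True)
mean_shift = active.mean() - quiet.mean()   # recovers roughly +20 m/s
```

The two PDFs share a common core but are displaced by the imposed mean shift, mirroring the signature discussed below for the real measurements.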
Figures~\ref{fig:ZonMerPDFs} and \ref{fig:DivCurlPDFs} present the distribution
functions for active and quiet regions. For clarity, we have only shown
the distributions for the latitudinal bands centered at $30^\circ$ north and
south of the equator. Figure~\ref{fig:MeanFlows} shows the mean values of
these distributions as a function of latitude. From this set of figures one
can clearly see that, on average, magnetized regions rotate across the solar
disk more rapidly than quiet regions (by roughly 20 m s$^{-1}$), yet magnetized
regions appear to move poleward at the same rate as quiet regions. Furthermore,
the meridional circulation is poleward, with a nearly sinusoidal shape as a
function of latitude. We see no evidence for residual circulations in the
active bands as seen in some helioseismic studies \citep{Zhao:2004a, Gonzalez-Hernandez:2008}.
This may be the result of averaging the flows over three separate years and the
relatively small size of these residual circulations ($\approx 5$ m s$^{-1}$).
The distributions of the divergence reveal that active regions appear as zones
of converging flow, while
quiet sun is on average slightly divergent. Finally, magnetized regions possess
cyclonic vortical motions that increase linearly with latitude. These findings
will be discussed in more detail in \S\ref{sec:Discussion}.
In order to examine the shape of the distributions with a better signal-to-noise
ratio, we average the distributions over latitude. Since the PDFs for the
zonal flow and divergence are largely independent of latitude, we average
these over all latitudes. The meridional flow and the vorticity are antisymmetric
with latitude; therefore, we average those separately over each hemisphere.
The results are shown in Figure~\ref{fig:MeanPDFs}. The mean shifts between
magnetized and quiet regions are obvious in these figures. However, it is now
also clear that the shape of the distributions changes within active regions.
In particular, the distributions of all flow quantities within active and quiet
regions have similar cores, but those in active regions possess extended wings,
indicating that a wider range of speeds is present. Furthermore, in the
divergence distributions, the flows in the quiet sun are notably asymmetric,
with more area occupied by diverging flows than converging flows. On the other
hand, in active regions, the flows are quite symmetric. Typical widths for the
zonal and meridional flow distributions are 80 m s$^{-1}$, while the divergence
and vorticity have widths on the order of 20 $\mu$Hz and 10 $\mu$Hz, respectively.
\bigskip
\centerline{TABLE 1}
\centerline{\small\scshape Moments of the Distributions}
\begin{center}
{\footnotesize
\begin{tabular}{lccccc}
\hline\hline
{} & {} & \multicolumn{2}{c}{Quiet Regions} & \multicolumn{2}{c}{Magnetized Regions}\\
{Flow Property} & {Hemisphere} & {Mean$^1$} & {Variance$^1$} & {Mean$^1$} & {Variance$^1$}\\
\hline
Zonal Flow & Both & -1.1 & 81.6 & 19.0 & 87.8 \\
Meridional Flow & North & 14.0 & 82.0 & 10.0 & 94.6 \\
Meridional Flow & South & -20.4 & 82.0 & -19.8 & 79.2 \\
Divergence & Both & 0.2 & 17.0 & -4.4 & 17.1 \\
Curl & North & 0.005 & 9.6 & 0.4 & 15.7 \\
Curl & South & 0.01 & 8.9 & -0.7 & 12.1 \\
\hline
\multicolumn{6}{l}{$^1$The means and variances for the zonal and meridional flow are measured in}\\
\multicolumn{6}{l}{~~units of m s$^{-1}$. The divergence and vorticity are measured in units of $\mu$Hz.}\\
\multicolumn{6}{l}{~~The variance is defined, as usual, as the second central moment of the distribution.}
\end{tabular}
}
\end{center}
\subsection{Flow Structures within Active Regions}
\label{subsec:Structures}
\setcounter{equation}{0}
In order to assess the importance of organized flow structures within active
regions, we have chosen to average flow properties over different zones within
an active region. These zones are identified by various contour levels in
smoothed maps of the unsigned magnetic flux. Figure~\ref{fig:MagGram} shows
an MDI magnetogram of NOAA AR9433 (the same active region shown in
Figure~\ref{fig:LargeAR}). The overlying contours were obtained by smoothing
the modulus of the magnetogram with a Gaussian filter with a width of $2^\circ$
in heliographic angle---the same spatial resolution as the helioseismic flow
measurements. Any region with magnetic field strength greater than 50 G was
labeled active. Only regions within $60^\circ$ of disk center were considered
since the helioseismic measurements have a similar coverage. Figure~\ref{fig:DistARs}
shows distributions of the area and total magnetic flux of the resulting set
of flux concentrations. We further winnowed our sample of flux concentrations
by selecting only the subset with areas greater than $1\times10^4$ Mm$^2$. Over
the course of the 201 days of data, over 100 independent flux concentrations
meeting these criteria were identified.
The flows coincident with each of the magnetic contours within each of these
active regions are decomposed into an inflow component (perpendicular to the
contour and pointed inwards) and a circulation component (parallel to the
contour and pointed counterclockwise). For each active region, a line integral
was performed around each contour to compute the mean inflow speed and the
mean circulation speed. The results for each strength of magnetic contour
were then averaged over all active regions in the sample to form mean inflow and
circulation speeds as a function of the magnetic field strength $B$.
\begin{eqnarray}
v_{\rm inflow}(B) &\equiv& \sum_i \frac{w_i(B)}{L_i(B)}
\int_{C_i(B)} \bvec{v} \cdot \bvec{\hat{n}} \; dl\; , \\
v_{\rm circ}(B) &\equiv& \sum_i \frac{w_i(B)}{L_i(B)}
\int_{C_i(B)} \bvec{v} \cdot \bvec{\hat{t}} \; dl\; , \\
\nonumber L_i(B) \equiv \int_{C_i(B)} dl\; , &&\;\;\;
w_i(B) \equiv \frac{L_i(B)}{\sum_j L_j(B)}\; .
\end{eqnarray}
In the expressions above, the integrals are contour integrals around the
magnetic contour $C_i$ with magnetic flux density $B$ of the $i$th active region
in our sample. The unit vectors $\bvec{\hat{n}}$ and $\bvec{\hat{t}}$ are,
respectively, normal and tangential to the magnetic contour, with the normal
pointing inward toward higher field strength and the tangent pointing
counterclockwise around the contour. Each integral is divided by the length
of the contour $L_i$ to generate an average speed and the summation is a
weighted average over all active regions, with the weights, $w_i$, proportional
to the length of the contour.
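For a polygonal contour with velocities sampled at its vertices, the line integrals above can be discretized directly. The helper below is an illustrative sketch for a single contour $C_i$ (the $w_i$-weighted average over active regions would follow); the vertices are assumed ordered counterclockwise, so the left-hand normal of each segment points inward:

```python
import numpy as np

def contour_flow_averages(points, vx, vy):
    """Mean inflow and circulation speed around one closed contour.

    points : (N, 2) distinct vertices, ordered counterclockwise
    vx, vy : velocity components sampled at the vertices

    Returns (v_inflow, v_circ): length-weighted averages of v.n (n inward)
    and v.t (t counterclockwise), i.e. the contour integrals of the text
    divided by the contour length L_i.
    """
    d = np.roll(points, -1, axis=0) - points        # closing segment vectors
    seg_len = np.hypot(d[:, 0], d[:, 1])
    L = seg_len.sum()
    t_hat = d / seg_len[:, None]                    # CCW tangent
    n_hat = np.stack([-t_hat[:, 1], t_hat[:, 0]], axis=1)  # inward normal
    v = np.stack([vx, vy], axis=1)
    v_inflow = np.sum(np.einsum('ij,ij->i', v, n_hat) * seg_len) / L
    v_circ = np.sum(np.einsum('ij,ij->i', v, t_hat) * seg_len) / L
    return v_inflow, v_circ
```

A purely radial inflow returns a positive `v_inflow` and zero circulation, matching the sign conventions defined above.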
The above averaging process was performed separately for active regions whose flux
weighted centers lie in the northern hemisphere and those in the southern.
The results are shown in Figure~\ref{fig:ARflows}. The peripheries of active
regions possess positive inflows with an amplitude of 20--30 m s$^{-1}$ and
have a weak tendency for cyclonic circulation with a speed of 5 m s$^{-1}$.
As the field strength $B$ increases towards the interior of active regions,
this inflow turns into an outflow, presumably forming a downflow where
the two flows meet. The very cores of active regions, formed by sunspots, are
zones of strong outflow ($\approx 50$ m s$^{-1}$) and anticyclonic motion
with a rotational speed of approximately 10 m s$^{-1}$.
\section{Discussion}
\label{sec:Discussion}
\setcounter{equation}{0}
Using HRRA we have estimated the horizontal flow field within the upper 2 Mm
of the solar convection zone by measuring the Doppler shifts of $f$ modes.
Flow maps for 201 days of data with a horizontal resolution of 2$^\circ$ have
been produced and two separate analysis procedures applied. Firstly, from
these flow maps we have calculated PDFs of the zonal flow, the meridional
flow, the flow's divergence, and its vorticity. Secondly, we have computed mean
inflow rates and circulation speeds at various magnetic field strength levels
within active regions. From these analyses we deduce the following, and will
expand upon these findings in the subsequent subsections.
\begin{itemize}
\item Magnetized regions rotate more rapidly across the solar disk than
nonmagnetized regions (by roughly 20 m s$^{-1}$).
\item Magnetized regions are advected poleward by the meridional circulation
at the same rate as quiet regions.
\item On average, magnetized regions possess convergent cyclonic vortical
motions, whereas the flows in quiet sun are weakly divergent without
a measurable vortical preference.
\item The flows within active regions span a wider range of flow speeds
than those seen in quiet sun.
\item The divergence distribution in quiet sun peaks at zero, but is
asymmetric about this peak value, with more of the solar surface covered
by outflows than inflows. The divergence distribution in magnetized
regions peaks at a negative value (converging flows), and is symmetric
about its peak.
\item The periphery of an active region is a zone of inflow (20--30 m s$^{-1}$)
as well as a zone of cyclonic circulation ($\approx 5$ m s$^{-1}$).
\item The moat flows streaming from sunspots form anticyclones with a
mean rotational speed of roughly 10 m s$^{-1}$.
\end{itemize}
\subsection{Bulk Motion of Active Regions}
\label{subsec:BulkMotions}
We find that active regions, on average, rotate across the solar surface
with a speed that is roughly 20 m s$^{-1}$ faster than quiet sun at the same
latitude. Our observation that active regions are zones of superrotation
is consistent with the previous findings obtained through feature tracking,
direct Doppler velocity measurement and helioseismology. The rate of
superrotation (20 m s$^{-1}$) also agrees with previous findings if one
takes into consideration the spatial resolution of the various measurement
schemes. Surface tracking of magnetic features \cite[for a review see ][]{Howard:1996}
and high-resolution helioseismic measurements \citep{Braun:2004, Zhao:2004b}
indicate that sunspots rotate at a rate that is roughly 50 m s$^{-1}$ greater
than quiet sun, whereas the rotation rate of plages is markedly less. The
low-resolution ring-analysis procedure employed by \cite{Komm:2008} finds
that active regions as a whole superrotate at a rate of 4 m s$^{-1}$. This
low-resolution procedure has a spatial resolution of 15$^\circ$, and is thus
incapable of resolving sunspots. In fact, a single flow measurement averages
over sunspots, surrounding plage and even a significant portion of quiet sun.
Therefore, one would expect a dilution of the superrotation rate. To a lesser
degree the same averaging effect occurs here. Our horizontal spatial resolution
is 2$^\circ$, which is sufficient to resolve an active region, but still
incapable of resolving sunspots. Therefore, we would expect that our
superrotation rate should lie between that for sunspots and that for plage.
The meridional component of the flow field within active regions behaves
rather differently than the zonal component. Instead of moving at a rate
that differs from quiet sun, we find that the fluid within active regions
advects poleward at exactly the same speed as quiet sun. While this result
confirms previous surface measurements \cite[e.g.,][]{Svanda:2007}, this
is the first time that such a result has been reported for helioseismic
measurements. It is not entirely clear how this evident poleward advection
of the bulk of the active region can be reconciled with the observation that
sunspots lack systematic poleward motion \cite[e.g.,][]{Woehl:2001, Woehl:2002}.
When a rising flux rope first emerges through the solar surface, we expect
that the magnetized region will be marked by an upwelling that swells
horizontally---due to the decreasing gas pressure with height in the solar
atmosphere. At this point in the active region's evolution, the magnetic
field is dynamically important both at the surface and below. Thus, the
field at the solar surface is influenced by the flows all along the flux
rope. This may explain why magnetic regions rotate more quickly than the
quiet sun. The Sun's rotation rate increases with depth throughout the
upper 30 Mm of the convection zone and the magnetic field in the sunspot
might be grabbed and dragged by the fast moving subsurface layers
\cite[e.g.,][]{Gilman:1979, Hiremath:2002, Sivaraman:2003}. However, if
similar arguments are made for meridional advection of magnetic elements
a contradiction arises. Plage within active regions and small magnetic
elements outside of active regions are observed to passively advect with
the meridional circulation. This observation is consistent with the
helioseismic determinations of the meridional flow, since those measurements
indicate that the flow remains roughly constant with depth in the upper 20 Mm
of the convection zone \citep{Haber:2002, Zhao:2004a}. However, the lack of
systematic meridional motion by sunspots, by the same arguments, would appear
to indicate that the sunspots must be rooted very deeply, within the supposed
return flow in the meridional circulation. This is clearly in contradiction
with the helioseismic observations that fail to find such a return flow within
the near-surface shear layer, which is where we have presumed that the sunspots
are anchored in order to explain their superrotation.
This conundrum becomes even more complicated if we consider what might
happen as an active region ages. \cite{Fan:1994} have suggested that
the connection between the field at the surface and its underlying roots
may be broken through a dynamical disassociation process. Their original
suggestion involved the establishment of hydrostatic equilibrium throughout
the length of the tube once it emerges. Due to differences in entropy
stratification between the fluid within the tube and the external field-free
gas, at a layer roughly 10 Mm below the solar surface, the internal and
external gas pressure match. Therefore, lateral pressure balance requires
that the magnetic pressure vanish, the tube herniates and turbulent
convection shreds the tube because the magnetic field becomes dynamically
insignificant. The field above the herniation layer becomes disconnected
from the field below, thus enabling the surface field to be advected and
dispersed by near-surface flows, a process that is presently well-modelled
by surface transport models \cite[e.g.,][]{Wang:1989, van Ballegooijen:1998,
Schrijver:2001, Baumann:2004}.
\cite{Schuessler:2005} have pointed out that the establishment of hydrostatic
equilibrium along the entire length of the flux rope is much too slow
a process. Observations of the changing dynamics of magnetic structures in
the photosphere indicate that the field begins to disconnect a couple of days
after emergence. \cite{Schuessler:2005} have refined the dynamical disconnection
model by demonstrating that surface cooling in regions of intense magnetism
can drive downflows that both concentrate the field at the surface (through
evacuation and collapse) and enhance the dynamical disconnection at depth
by increasing the subsurface pressure where the deep upflow along the flux
rope and the surface driven downflow meet. This process operates quickly,
on a time scale of several days after emergence, and recent helioseismic
studies indicate that the flows within active complexes change from upflows
to downflows over such a period of time \citep{Komm:2008}. The mechanism may
explain the observation that young sunspots rotate more rapidly than old
sunspots \cite[e.g.,][]{Balthasar:1982, Svanda:2008a}. Initially, the sunspot is dragged
by rapidly rotating subsurface layers, but after disconnection the sunspot
slows due to viscous or turbulent drag. Similarly, the observed advection
of plage and weak field both inside and outside active regions is
well-explained. Dynamical disconnection allows the magnetic flux to advect
like a passive tracer since its roots have been severed. We once again find
that the sticking point is the lack of systematic meridional motion by sunspots.
If sunspots become disconnected from their roots in a matter of 2 or 3 days,
we would expect that the meridional circulation would exert its influence and
begin to advect the sunspots poleward.
\subsection{Steady Flows Established around Mature Active Regions}
\label{subsec:SteadyFlows}
In addition to the bulk motions of active regions, we have measured internal
flow structures. Two separate but related measurements (the mean divergence
and the mean inflow speed) show that within the surface layers, active regions
are zones of convergence. From the PDFs of the divergence we see that on average,
magnetized regions possess a negative divergence (see Figures~\ref{fig:MeanFlows}
and \ref{fig:MeanPDFs}). Note, however, that the variance about this mean value
for the divergence is much larger than the mean itself (see Table 1). Equivalently,
our measurements of the mean inflow speeds within the surface layers of active
regions show that almost all active regions possess a net inflow of 20--30 m s$^{-1}$
at their periphery. Conversely, the cores of active regions, formed by the presence
of sunspots, possess strong outflows. These outflows arise from the moat
flows that extend 10--20 Mm beyond the penumbra \cite[e.g.,][]{Sheeley:1972,
Harvey:1973, Gizon:2000, Hagenaar:2005}. The presence of inflows at the
periphery and outflows from sunspots dictates that somewhere within the
active region's plage, the two flows meet and downflow must occur.
Presumably, the downflow within the plage connects to the deep outflows
that are seen to emerge from active regions at depths greater than 10 Mm
in low-resolution helioseismic flow measurements \citep{Haber:2003,
Haber:2004}. This downflow may also partially supply the return flow
needed for the moat flow observed at the surface, although as of yet
helioseismology has not been able to tell us how deeply this return
flow may be rooted. Figure~\ref{fig:ARsideview} shows a cartoon sketch
of the inferred circulations. The flows indicated with the arrows outlined
in white have been directly observed through helioseismic techniques
\cite[e.g.,][]{Gizon:2000, Braun:2003, Haber:2003, Gizon:2005} as well
as through direct Doppler velocity measurements \cite[e.g.,][]{Sheeley:1972}
and magnetic feature tracking \cite[e.g.,][]{Brickhouse:1988}. The remaining
flows are a logical means of connecting the observed components
of the flow field; however, without direct helioseismic measurement of the
vertical component of the flow---which still remains elusive---the topology
of this circulation remains speculative.
We also find systematic circulations around active regions. From the PDFs
of the vertical component of the vorticity (see Figures~\ref{fig:MeanFlows}
and \ref{fig:MeanPDFs}) we deduce that magnetized regions have a tendency
to possess cyclonic vorticity. Consistently, we also measure a mean cyclonic
circulation speed of roughly 5 m s$^{-1}$ around the peripheries of active
regions. Just as the inflows transition to outflows as we move from the edge
of an active region toward the sunspots, the circulations transition from
cyclonic flows around the boundary to anticyclones at the location of the
sunspots.
\subsection{Source of the Inflows and Circulations}
\label{subsec:Source}
One possible mechanism for the generation of the surface inflows into active
regions has already been mentioned. Plage and faculae are bright and, therefore,
locations of radiative cooling \cite[e.g.,][]{Fontenla:2006}. Radiative cooling
in magnetized surface layers will generate downflows as cool, dense material
loses buoyancy and plunges into the solar interior. Such downflows draw
fluid from the surrounding surface layers, generating inflows into
magnetized regions. The effect of enhanced surface cooling may be further
augmented by a reduction in the convective energy flux within regions of
magnetism, arising from the systematic suppression and modification of granulation.
One consequence of such a model is that the inflow speed may well be a function
of the area occupied by the active region's plage. The radiative cooling and
hence the mass downflow rate should be proportional to the area of the plage,
whereas the mass flux into the active region is proportional to the inflow
speed and the circumference of the active region. If the mass supply rate
from the inflow is to equal the subduction rate, the inflow speed must increase
as the square root of the area of the plage. Furthermore, the inflow should
extend for some distance into the quiet sun around the active region. To date,
no attempt has been made to detect correlations between inflow speed and
active region size nor has the inflow been systematically measured in quiet
sun in the vicinity of active complexes. We plan to pursue such studies
in the near future.
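The mass-balance argument above can be written out explicitly. With $w$ the characteristic downflow speed, $A$ the plage area, and $P$ the active-region circumference (illustrative symbols not used elsewhere in the text), balancing the subduction rate against the supply rate from the inflow gives

```latex
% Sketch of the mass-balance scaling argument; w, A, and P are
% illustrative symbols (downflow speed, plage area, circumference).
\begin{equation*}
v_{\rm inflow}\, P \;\propto\; w\, A , \qquad P \propto \sqrt{A}
\quad\Longrightarrow\quad
v_{\rm inflow} \;\propto\; \frac{A}{\sqrt{A}} \;=\; \sqrt{A} .
\end{equation*}
```

That is, doubling the plage area should raise the inflow speed by a factor of $\sqrt{2}$, which is the correlation the proposed study would test.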
If the surface cooling mechanism is correct, the moat flows
that stream from sunspots would seem to indicate that sunspots do not participate
in the same radiative cooling, or perhaps the surface cooling within sunspots
is significantly weaker than it is in plage. However, the facts that moat flows
do not appear isotropically around all sunspots, and that the presence of moat
flows appears to be connected to the existence of penumbra \citep{Vargas Dominguez:2007,
Vargas Dominguez:2008}, provide evidence that another mechanism may be at work.
Depending on the mechanism, the depth of the return flow is likely to be very
different. Using time-distance helioseismology with $p$ modes (as opposed to $f$ modes),
\cite{Zhao:2001} have reported the observation of a returning inflow around a
sunspot that spans a depth range of 1.5--5 Mm. However, the connection that these
inflows may have to the moat flows is difficult to assess, since the moat flows
themselves are not detected within that study.
The circulations that are established around active regions seem to be well
correlated spatially with the observed inflows and outflows. In the region
where we observe inflows into active regions, we measure cyclonic motion
rotating around the active region. In those regions with outflows, we also
measure anticyclones. We suspect that this correlation is not accidental.
A likely explanation is that the circulations are caused by the deflection
of the inflows and outflows by the Coriolis force. Therefore, the ultimate
source of the circulations is the same mechanism that drives the inflows
at the periphery and the mechanism that produces the moat flow around
sunspots. If we assume that the flows are steady, and that the inflows are
driven by a pressure gradient, we may estimate the size of the circulation
speed from the momentum equation in a rotating coordinate system,
\begin{equation}
(\bvec{v}\cdot\bvec{\nabla})\bvec{v} = -\frac{1}{\rho}\bvec{\nabla}P - 2\bvec{\Omega}\times\bvec{v}\; ,
\label{eqn:Momentum}
\end{equation}
\noindent where we have explicitly dropped the partial derivative with respect
to time in the advective derivative because the flows are steady. The component
of this equation perpendicular to the pressure gradient is a balance purely
between the advective derivative and the Coriolis force. For simplicity,
we employ an $f$-plane approximation and consider an active region that is
circular in shape. In polar coordinates $(r,\phi)$ with the origin at the active
region's center, the circulation speed, $v_{\rm circ} = v_\phi$, is given by
the angular component of equation~\eqnref{eqn:Momentum}
\begin{equation}
v_r \frac{\partial v_\phi}{\partial r} = -2\Omega v_r \sin\theta \; .
\end{equation}
\noindent If we assume that the flow extends over a radial distance $H$,
then the circulation speed may be estimated by
\begin{equation}
v_{\rm circ} \sim 2 \Omega H \sin\theta \; .
\end{equation}
If we assume that $H$ equals the radius of a typical active region, 100 Mm,
and we use the Carrington rotation rate, $\Omega = 456$ nHz, we estimate a circulation
speed of 46 m s$^{-1}$ at a latitude of 30$^\circ$. Clearly, this estimate
depends critically on $H$, the distance over which the inflow extends into quiet
sun. But for reasonable values, the Coriolis force is strong enough to generate
circulation speeds on the order of 5 m s$^{-1}$. Furthermore, the effect should
increase with latitude and should be antisymmetric about the equator, as
is reproduced in Figure~\ref{fig:MeanFlows}.
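The estimate can be checked with a line of arithmetic. The sketch below simply reproduces the numbers quoted above ($\Omega$ entered as the quoted 456 nHz value, $H = 100$ Mm, latitude $30^\circ$):

```python
import math

# Order-of-magnitude estimate from the text: v_circ ~ 2 * Omega * H * sin(lat),
# with Omega the quoted Carrington rate and H the assumed inflow extent.
Omega = 456e-9        # s^-1, as quoted in the text (456 nHz)
H = 100e6             # m (100 Mm, a typical active-region radius)
lat = math.radians(30.0)

v_circ = 2.0 * Omega * H * math.sin(lat)   # ~46 m/s, as quoted
```

Even if $H$ is a few times smaller, the Coriolis deflection comfortably accounts for the measured 5 m s$^{-1}$ circulation.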
A similar mechanism was suggested by \cite{Spruit:2003} as the source of
the torsional oscillations. In his model, radiative cooling is enhanced in
the active region belts by reduced opacity within the small-scale magnetic
fibrils forming the plage \cite[e.g.,][]{Spruit:1977}. This induces a slight
decrease in temperature and pressure within the active latitudes. Coriolis
forces generate steady geostrophic flows around these low pressure regions
and the torsional oscillations are simply the resulting thermal wind. One
possible extension of Spruit's model would be to argue that geostrophic
balance is established around each active region separately as the radiative
cooling is localized to the plage. A longitudinal average of these geostrophic
circulations would result in a zone of faster rotation at the low-latitude
edge of the active region belt and a slow down at the high-latitude edge.
Since, in our case, we measure circulations around each active region that
possess the same handedness as the geostrophic flows in Spruit's model, and the
measured circulation speed ($\approx 5$ m s$^{-1}$) is comparable to the
amplitude of the torsional oscillations, it is reasonable to ask whether our
circulations are the source of the torsional oscillations. If this were so, the amplitude of
the torsional oscillation should be the mean active region circulation speed
multiplied by the fraction of longitudes occupied by magnetic activity. Since
the circulation speed measured here is only 5 m s$^{-1}$, a dilution by a
longitudinal filling factor would result in torsional oscillations with an
amplitude that is only a fraction of the observed value. Of course the extent
to which the flows protrude into quiet sun will reduce the dilution factor.
We should also point out that the torsional oscillation pattern extends to
high latitude during the quiet phase of the solar cycle when active
regions are largely absent \citep{Schou:1999, Basu:2003, Howe:2005, Howe:2006}.
This property suggests that another mechanism is at work either in isolation
or in conjunction with active region circulations.
\subsection{Convective Motions of Active Regions}
\label{subsec:Convection}
It has long been known that granulation is modified and perhaps suppressed
within regions of intense magnetism. Whether this suppression applies to
larger scales of convection is not as clear. We find here that for scales of
motion larger than supergranulation, the flows appear to be less organized
within magnetic active regions; they lack the regular tiling of convection
cells that is apparent within quiet sun (see Figure~\ref{fig:LargeAR}).
Despite the disruption of the cellular pattern, the flow speeds within active
regions are generally larger than in quiet sun. This increase in speeds is not
caused by a general broadening of the distribution. Instead, the PDFs
evince elevated tails for flow speeds in excess of 200 m s$^{-1}$.
Another noticeable difference between the PDFs in active and quiet regions
is the shape of the divergence distribution. In quiet sun, the distribution
is asymmetric with an enhanced wing corresponding to positive values of the
divergence. Therefore, a larger percentage of the solar surface is occupied
by divergent outflows than convergent inflows. This property is consistent
with the asymmetric nature of solar convection that is seen in numerical
simulations of solar convection, where the convection is composed of zones of
broad, upwelling outflows in the center of convection cells and the narrow,
inflowing downflows at the cell boundaries \cite[e.g.,][]{Stein:1989, Stein:1998,
Cattaneo:1991, Brummell:1996}. This asymmetry arises from the gravitational
stratification. Upflows expand as they rise because of the decreasing gas
pressure. Downflows contract into plumes as they descend for the same reason.
Note that the measured PDFs for the divergence do not exhibit the bimodal structure
seen in large-scale global numerical simulations \citep{Miesch:2008}. We suspect
that this arises because the observations under-resolve the narrow downflow lanes.
Interestingly, the PDFs do not exhibit the same asymmetry within magnetized
regions. In fact, the divergence distribution is rather symmetric, showing
no preference for either convergence or divergence. The magnetic activity
has changed the fundamental nature of the convection. Perhaps magnetoconvection
does not result in the same organized cellular structure that is so apparent
in both observations and simulations of quiet sun. This is certainly the
impression one receives when one examines our flow maps in regions of activity;
however, we have yet to quantify this property in a meaningful way.
\subsection{Production of Magnetic Shear}
\label{subsec:Shear}
We measure cyclonic rotation about the active region at the periphery of
the active region and anticyclonic motion within the cores of active regions.
Clearly such differential motion results in shear that imparts twist to the
magnetic field within active regions. Let us first consider the cyclonic inflow.
In an active region with a diameter of 300 Mm, a flow of 5 m s$^{-1}$ around
the periphery results in a circumnavigation time for a fluid parcel of roughly
2000 days. This is clearly a very slow windup that is unlikely to result in
significant magnetic shear over the lifetime of the active region. The anticyclonic
moat flows from sunspots, on the other hand, are more significant. Assuming
that the moat flow extends out to a radius of 20 Mm with a rotational speed
of 10 m s$^{-1}$ (see Figure~\ref{fig:ARflows}),
a complete winding of the field would occur in 145 days. Thus over the
course of a single Carrington rotation, significant shear can be introduced
into an otherwise stable magnetic configuration. Of course shear of this
nature may aid in the destabilization of the magnetic field, leading to
flares and coronal mass ejections. We suspect that the correlations between
flaring activity and the vorticity measured in low-resolution helioseismic
measurements \citep{Mason:2006} may result from the large-scale shearing motions
observed here.
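Both winding timescales follow from dividing the circumference of the sheared annulus by the rotational speed. A quick check with the numbers quoted above (the helper name is illustrative):

```python
import math

DAY = 86400.0  # seconds per day

def windup_time_days(radius_m, speed_ms):
    """Time for a fluid parcel to complete one circuit at the given
    rotational speed -- the winding timescale used in the text."""
    return 2.0 * math.pi * radius_m / speed_ms / DAY

# Periphery: 300 Mm diameter (150 Mm radius), 5 m/s cyclonic circulation
t_periphery = windup_time_days(150e6, 5.0)   # ~2180 days ("roughly 2000")

# Moat: 20 Mm radius, 10 m/s anticyclonic rotation
t_moat = windup_time_days(20e6, 10.0)        # ~145 days
```

The contrast between the two timescales is why the moat flows, not the peripheral circulation, dominate the shear budget over a Carrington rotation.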
We note that the measurement of systematic trends in the behavior of the flow
vorticity is difficult since the trends are weak; averaging over a substantial
number of active regions is required. The inflows and outflows, on the other
hand, are quite robust signatures that are easily observed even within a
single active region. This difference arises primarily because the circulations
have an amplitude 4--5 times smaller than the inflows and outflows. Since we
suspect that the systematic cyclonic and anticyclonic motions that produce
magnetic shear are caused by Coriolis deflection of the inflows and outflows,
there is a strong possibility that the inflows and outflows may be a better
predictor of flare activity, simply because of their greater amplitude.
\subsection{Surface Inflows and Flux Confinement}
\label{subsec:Confinement}
The surface inflows may play a very dramatic role in the evolution of active
regions. \cite{Hurlburt:2008} have suggested that the observed surface
inflows may inhibit the diffusion of magnetic flux out of active regions,
thereby prolonging the lifetime of an active region before it breaks up and
disperses. By their estimates, the advection of field due to an inflow with
a speed between 10 and 100 m s$^{-1}$ should be sufficient to balance the
outward transport of magnetic field by turbulent diffusion. This is exactly
the range in which our measured inflows fall. Therefore, flows on all scales
may be crucial in the decay of active complexes. Turbulent diffusion of the
field by supergranulation and smaller-scale flows works to disperse the field,
but is counteracted, at least partially, by coalescence of field arising from
inward advection by flows with spatial scales larger than supergranulation.
This argument is predicated on the assumption that the magnetic field is not
structured on the scale of the larger-scale flows. If, for example, the magnetic
flux is concentrated at the boundaries of giant cells, as it is for granules
and supergranules, spatial correlations between the magnetic flux and large-scale
flows would prevent the mean magnetic advection rate from simply being the
product of the mean inflow speed and the mean flux density. However, we have
tested this possibility and found it not to be the case. We compute the mean
field advection rate by the relation
\begin{eqnarray}
v_{\rm mag} &\equiv& \sum_i \frac{w_i(B)}{L_i(B) \; \bar{\Phi}_i(B)} \;
\int_{C_i(B)} |\bvec{\Phi}| \; \bvec{v} \cdot \bvec{\hat{n}} \; dl \; , \\
\nonumber \\
\bar{\Phi}_i(B) &\equiv& \frac{1}{L_i(B)} \int_{C_i(B)} |\bvec{\Phi}| \; dl \; ,
\end{eqnarray}
\noindent where $|\bvec{\Phi}|$ is the magnetic flux density and $\bar{\Phi}_i(B)$
is the mean field strength along a contour. Note that $B$ is the magnetic
field strength associated with the contour, which is derived from the smoothed
magnetograms; whereas $|\bvec{\Phi}|$ is the field strength within the full-resolution
magnetograms. We find that, within the observational errors, $v_{\rm mag}$ is identical
to the mean inflow rate shown in Figure~\ref{fig:ARflows}. Thus, the transport
of the magnetic field is dominated by advection by the large-scale flow
component. The consolidation of the field within active regions is therefore
an important mechanism that impedes the dissolution of active regions through
turbulent diffusion. In fact, the inward advection may be sufficiently strong
that flux sequestration becomes a difficulty for models of the evolution of the
sun's global magnetic field, which rely on the steady diffusion of flux from active
regions \citep{DeRosa:2006}. However, before we declare that the surface inflows
problematically inhibit the dispersal of magnetic flux from active regions, an
additional question must be answered. We must ascertain whether the surface
inflows vary over the lifespan of an active region. It is likely that young
active regions possess strong inflows and hold onto their magnetic flux rather
tightly; whereas older regions have weakened inflows, thus enabling the escape
of large amounts of magnetic flux. To date such studies have yet to be performed.
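The flux-weighted advection rate defined above can be discretized for a single contour in the same way as the plain inflow average. The helper below is an illustrative sketch; when $|\bvec{\Phi}|$ is uncorrelated with $\bvec{v}\cdot\bvec{\hat{n}}$ along the contour it reduces to the mean inflow speed, which is the result reported above:

```python
import numpy as np

def flux_weighted_advection(points, vx, vy, phi):
    """Flux-weighted normal advection speed along one closed CCW contour:
    the contour integral of |Phi| v.n, divided by L times the mean |Phi|,
    a discretized sketch of v_mag for a single contour."""
    d = np.roll(points, -1, axis=0) - points
    seg = np.hypot(d[:, 0], d[:, 1])
    L = seg.sum()
    t_hat = d / seg[:, None]
    n_hat = np.stack([-t_hat[:, 1], t_hat[:, 0]], axis=1)  # inward normal
    vn = vx * n_hat[:, 0] + vy * n_hat[:, 1]
    phi_bar = np.sum(np.abs(phi) * seg) / L
    return np.sum(np.abs(phi) * vn * seg) / (L * phi_bar)
```

Comparing this quantity with the unweighted inflow speed is the correlation test described in the text.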
\section{Conclusions}
\label{sec:Conclusions}
Using ring analysis, we have measured the flow field within the upper
2 Mm of the solar convection zone with a spatial resolution of 2$^\circ$.
From these measurements we have found that solar active regions have a
bulk motion with respect to quiet sun as well as large-scale circulation
cells associated with their presence. The bulk motion consists of a 20 m s$^{-1}$
prograde motion relative to quiet sun at the same latitude, with simultaneous
meridional advection that moves in lock step with the surrounding
field-free plasma.
The large-scale circulations that are established resemble, in many ways,
an inverted hurricane (see the schematic diagrams shown as Figures~\ref{fig:ARsideview}
and \ref{fig:ARtopview}). In a hurricane, evaporation from a warm sea
surface drives an upflow and a concomitant inflow to feed the upwelling.
The upflow eventually spreads high above the surface forming an outflow.
Coriolis forces act upon the surface inflows and spin up the storm until
a quasigeostrophic balance is achieved. For active regions, surface cooling
by enhanced radiative losses in plage plays the role of a warm sea surface.
However, instead of causing a warm upwelling, this cooling causes a descending
downdraft. This downdraft pulls in fluid from the surroundings causing an
inflow and the attendant cyclonic motion.
The existence of sunspots modifies this picture somewhat. It may be that
the surface cooling is weaker within sunspots and that sunspots represent
a hole in the downdraft caused by plage. The outflow from the sunspots
could simply be drawn to feed the annular downdraft. However, a more
likely explanation is that the outflows result from a different mechanism
entirely, one that depends crucially on the orientation of the penumbral
filaments. Whatever the source, Coriolis forces acting on these outflows
produce a measurable flow deflection, resulting in a net anticyclonic rotation
about the sunspot. While this rotation is fairly weak, over the lifetime
of an active region, the rotation should produce significant shear, perhaps
playing a role in destabilizing the coronal magnetic field overlying the
active region.
Superimposed on these circulations are convective flows. We find that quiet
sun exhibits a notable asymmetry where a larger percentage of the solar surface
is covered with outflows than inflows. For the spatial scales that our
helioseismic technique samples, we find that within active regions this
asymmetry disappears, and parity between convergence and divergence is attained.
Why this occurs is not obvious; however, it may have something to do with the
disruption or segmentation of the larger convective scales by the presence of
magnetism. Our measurement technique is only capable of measuring flows with a
spatial scale larger than 2$^\circ$ in heliographic angle ($\approx$ 24 Mm).
Therefore, we sample flows larger than supergranulation. It would be quite
useful to perform a similar study with finer resolution where supergranules
are explicitly resolved.
\acknowledgments
We are indebted to Richard S. Bogart for his substantial efforts in
tracking and remapping the MDI data for use in ring analyses. We gratefully
thank Greg Kuebler for using his artistic talents to produce Figures~\ref{fig:ARsideview}
and \ref{fig:ARtopview}. We acknowledge support from NASA through grants
NAG5-13520, NNG05GM83G, NNG06GD97G, NNX07AH82G, NNX08AJ08G, and NNX08AQ28G.
\input biblio.tex
\input figs.tex
\figone
\figtwo
\figthree
\figfour
\figfive
\figsix
\figseven
\figeight
\fignine
\figten
\figeleven
\figtwelve
\end{document}
|
1710.00190
|
\section{Introduction and motivation}
This work was conceived and carried out in the context of a task to reduce
respondent burden in a mental health study. We were interested in a range of outcome
variables, and our analytical goal was fitting regression models to explain the mental health outcomes.
The instrument collects demographic explanatory variables,
as well as scores from the Big Five Inventory \citep{john:sriv:1999:big5},
a commonly used set of five psychological traits that are often found to be correlated
with behaviors and outcomes. We expected that multiple matrix sampling would allow
us to reduce the instrument length from over an hour to about 20--25 minutes.
A key component of sampling design, sample size determination,
will be based on a linear regression power analysis.
However, complexities of regression analysis with missing data required custom
derivations of power analyses, which is what this technical paper addresses.
\section{Regression setting}
Consider a regression analysis problem where an outcome $y$ is predicted by a set of
explanatory variables $x_1, \ldots, x_p$:
\begin{equation}
y = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p + \varepsilon
\label{eq:regress}
\end{equation}
In the simplest possible case of no missing data and homoskedastic normal errors $\mathbb{V}[\varepsilon_i]=\sigma^2 \, \forall i$,
the maximum likelihood estimates are the OLS estimates
\begin{gather}
\hat\beta_{\rm OLS} = (X'X)^{-1} X'Y;
\quad \mathbb{V}[ \hat\beta_{\rm OLS} ] = \sigma^2 (X'X)^{-1};
\notag \\
v[ \hat\beta_{\rm OLS} ] = s^2 (X'X)^{-1};
\quad s^2 = \frac1n \sum_{i=1}^n (y_i - x_i'\hat\beta)^2
\label{eq:ols}
\end{gather}
Inference on regression coefficients is based on normality of coefficient estimates,
$\hat\beta \sim N(\beta,\sigma^2(X'X)^{-1})$.
An unbiased estimate of $\sigma^2$ can be obtained by changing the denominator (degrees of freedom)
from $n$ to $n-p$. The OLS estimates retain desirable properties in more general settings,
e.g., by dropping the normality requirement. When regressors $X$ are stochastic, the OLS estimates
and their variance estimates given above only have an asymptotic justification, and require
independence of regressors $X$ and errors $\varepsilon$. When the basic assumptions are violated,
sandwich-type or resampling variance estimates need to be used.
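As a concrete illustration of (\ref{eq:ols}), a minimal numpy sketch on hypothetical simulated data (variable names and the simulation setup are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical data: intercept plus two regressors.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                     # (X'X)^{-1} X'y
resid = y - X @ beta_hat
s2 = resid @ resid / n                           # ML estimate, denominator n
s2_unbiased = resid @ resid / (n - X.shape[1])   # divide by n minus number of columns
v_beta = s2_unbiased * XtX_inv                   # estimated covariance of beta_hat
```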
\subsection{Power analysis and sample size determination in regression setting}
\label{subsec:reg:power}
Power analysis and sample size determination are statistical tasks of addressing,
quantifying and controlling type I and type II errors. In a typical power analysis problem,
a null hypothesis, $H_0: \theta \in \Theta_0$, and an alternative, $H_1: \theta \in \Theta_1$,
are formulated; a test statistic $t(X)$ is selected, for which a critical region of level $\alpha$ is specified.
E.g., assuming that high values of the test statistic indicate disagreement between the data
and the null, as is typical with $\chi^2$ or $F$-statistic tests common in regression models,
the rejection region would have the form $T_\alpha = [c,+\infty)$ so that
$\mathrm{Prob} [t(X) > c] \le \alpha$ when the true value of the parameter $\theta \in \Theta_0$.
Finally, power analysis addresses the issue of Type II error, i.e., $\mathrm{Prob} [t(X)\le c]$ under
the alternative $\theta \in \Theta_1$. While the null hypothesis typically represents
a simple hypothesis $\theta=\theta_0$ or a subset of reasonably small dimension, the alternative
is necessarily complex. Hence researchers often formulate a measure of \textit{effect size}
$\delta$ and consider power analysis for parameter values under the alternative that are
at least $\delta$ away from the specific value $\theta_0$ or the subset $\Theta_0$ in an appropriate metric.
As the power to reject the null typically grows with the sample size $n$, the task
of sample size determination is to find the value $n_\beta$ that guarantees a given level
of power $1-\beta$. While the size of the test is often taken to be $\alpha=5\%$,
the traditional type II error rate is $\beta=20\%$ leading to $1-\beta=80\%$ power.
When testing the location parameter of two populations, for example, the natural hypotheses
might be $H_0: \mu_1 = \mu_2$ vs. the (two-sided) alternative $H_1: |\mu_1 - \mu_2| \ge \delta$,
with relatively straightforward testing based on the Student $t$ distribution, linear regression
models feature a variety of statistics that may be subject to testing and power analysis:
\begin{enumerate}
\item Test of overall fit: $H_0: \boldsymbol{\beta}=0$.
\begin{enumerate}
\item A version of this hypothesis can be formulated as $H_0: R^2=0$. Depending
on how easy or difficult it is to conduct inference on parameter estimates
or regression sums of squares, one or the other may be preferred in applications.
\end{enumerate}
\item An increase of overall fit: $H_0: R^2 \le R_0^2$ vs. $H_1: R^2 \ge R_0^2 + \delta$.
\item Specific regression coefficients: the coefficient of the $j$-th explanatory variable
is zero, $H_0: \beta_j = 0$ vs. $H_1: | \beta_j | \ge \delta$.
\item Linear hypothesis $H_0: R\beta = r_0$, which covers cases like:
\begin{enumerate}
\item Equality of two regression coefficients for the $j$-th and the $k$-th explanatory variables,
$H_0: \beta_j - \beta_k = 0$, so $R = (0, \ldots, 1, \ldots, -1)$, $r_0=0$
\item No impact of a set of variables $j_1, j_2, \ldots$: $H_0: \beta_{j_1}=0, \beta_{j_2}=0, \ldots$,
so that $R$ is a subset of rows of an identity matrix, and $r_0$ is a zero vector of conforming dimension.
\end{enumerate}
\item Tests on error variance $\sigma^2$, e.g. $H_0: \sigma^2 \le \sigma^2_0$ vs. $H_1: \sigma^2 \ge \sigma_0^2 + \delta$.
\end{enumerate}
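For the overall-fit test in item 1, a common approach approximates the distribution of the $F$ statistic under the alternative by a noncentral $F$ with noncentrality $\lambda = n R^2/(1-R^2)$. A sketch of the resulting sample size search (the function name and the use of scipy are illustrative assumptions, not from the paper):

```python
from scipy.stats import f as f_dist, ncf

def n_for_overall_test(p, R2, alpha=0.05, power=0.80):
    """Smallest n such that the overall F test of H0: beta_1=...=beta_p=0
    attains the target power, using noncentrality lambda = n * R2/(1-R2)."""
    f2 = R2 / (1.0 - R2)                    # Cohen's effect size f^2
    n = p + 2
    while n < 10**6:
        dfd = n - p - 1
        crit = f_dist.ppf(1.0 - alpha, p, dfd)
        if 1.0 - ncf.cdf(crit, p, dfd, n * f2) >= power:
            return n
        n += 1
    raise ValueError("no sample size found")
```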
\section{Multiple matrix sampling}
\textit{Multiple matrix sampling} of a survey questionnaire consists
of administering only a specific subset of items to a given respondent, out of all items
this respondent is potentially eligible to be asked. The name stems from representation of the data
with respondents as rows, and items as columns, so that matrix sampling concerns selecting
specific entries in the matrix to be administered, rather than the full row as is typically done.
The focus of the technique is on selecting items out of all the relevant ones that the respondent
could be asked, with the potential skip patterns already taken into account.
Similar or equivalent techniques are also known as \textit{partitioned designs}
and \textit{questionnaire splitting}. The method originated in educational testing
\citep{shoemaker:1972}, where it was first used to select items from a large pool
of available ones. Educational testing companies also implement
multiple matrix sampling to protect the integrity of their data products:
if students could be trained on a small subset of items known to appear on
standardized tests, the estimates of achievement would be biased.
Given the relatively esoteric nature of the method, the existing publications
have addressed some specific niche problems in matrix sampling (and the current paper
is no exception).
\citet{gonzalez:eltinge:2007} provided a review of matrix sampling
and applications in Consumer Expenditure Quarterly Survey.
\citet{chipperfield:steel:2009} put the problem of matrix sampling into a cost
optimization framework, where proper subsets of $K$ items can be administered on
up to $2^K-1$ forms at specific cost per item rates. They demonstrated that
with two items (or groups of items),
the split questionnaire best linear unbiased estimator (BLUE; also related to the GLS
estimator) provides modest efficiency gains over a design in which all items
are administered at once, and over a two-phase design in which all items are administered
to a fraction of respondents, and one subset of items is administered to the remainder of respondents.
\citet{merkouris:2010} extended their work to provide simplified composite estimation
using the estimates based on the form-specific subsamples, where compositing is based
on the second-order probabilities of selection and the way they are utilized
in estimating the variance of the Horvitz-Thompson estimator.
\citet{eltinge:2013} discussed connections to and relations with multiple frame
and multiple systems estimation methods (e.g., integration of survey and administrative
data, where administrative data may fill some of the survey items when available).
We add to this literature by providing the asymptotic variance-covariance matrix
of the coefficient estimates under matrix sampling of regressors, assuming
that the outcome is always collected. We also discuss implications for power analysis
and sample size determination.
\subsection{A simple example}
Consider the following matrix sampling design, in which
the outcome $y$ is collected on every form, while the explanatory variables
differ between forms.
\begin{table}[!h]
\centering
\caption{Three questionnaire forms for data collection: Design 1. \label{tab:forms3:design1}}
\begin{tabular}{l|ccc|c}
Form & $X_1$ & $X_2$ & $X_3$ & $n$ \\
\hline
1 & + & & & $n_1$ \\
2 & & + & & $n_2$ \\
3 & & & + & $n_3$ \\
\end{tabular}
\end{table}
With this design, summaries (means, totals) of all the variables
($x_1,x_2,x_3,y$) can be obtained, and the bivariate relations
between each of the regressors and the outcome $y$ can be analyzed.
However, estimation of a multiple regression model requires
estimability of all of the entries of the $(X'X)$ matrix,
which this specific matrix sampling design does not provide.
To conduct regression analysis, we need to observe the cross-entries
of the $X'X$ matrix, which necessitates the following matrix sampling design.
\begin{table}[!h]
\centering
\caption{Three questionnaire forms for data collection: Design 2. \label{tab:forms3:design2}}
\begin{tabular}{l|ccc|c}
Form & $X_1$ & $X_2$ & $X_3$ & $n$ \\
\hline
1 & & + & + & $n_1$ \\
2 & + & & + & $n_2$ \\
3 & + & + & & $n_3$ \\
\end{tabular}
\end{table}
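The estimability argument can be made concrete: an entry $(j,k)$ of $X'X$ is estimable only if some form observes both $x_j$ and $x_k$. A sketch (the set encoding of the two designs is illustrative):

```python
from itertools import product

# Each form is the set of regressors it observes (Designs 1 and 2 above).
design1 = [{1}, {2}, {3}]
design2 = [{2, 3}, {1, 3}, {1, 2}]

def estimable(design, variables=(1, 2, 3)):
    """Pairs (j, k) for which some form observes both x_j and x_k."""
    return {(j, k) for j, k in product(variables, repeat=2)
            if any({j, k} <= form for form in design)}
```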
\subsection{Parameter estimation under matrix sampling}
Since the components of $X'X$ and/or $X'y$ necessary to obtain the OLS regression
estimates may not be jointly available, more complex estimation strategies may need to
be employed. We study two such strategies.
One possibility is to utilize structural equation modeling (SEM) with missing data,
in which the marginal regression model of interest is formulated by using the regressors
as exogenous variables, the dependent variable is introduced as the only endogenous variable
explained by the model
\citep{bollen:selv:1989},
and the existing SEM estimation methods are applied
\citep{yuan:bentler:2000,savalei:2010}.
Alternatively, since the data are missing by design, and can be treated as MCAR, multiple imputation
\citep[MI]{rubin:1996,vanbuuren:2012}
can be used to fill in the missing values, with Rubin's variance formulae used to combine
MI estimates and provide inference. Of the several existing flavors of multiple imputation,
one of the simplest strategies is
imputation under multivariate normality (which we expect to behave in ways similar to
the estimation methods for SEM with missing data under multivariate normality).
A less model-dependent method is predictive mean matching \citep{little:1988}
in which a regression model is fit for each imputed variable, a linear prediction
is obtained for each case with missing variable, and an imputation is made by choosing
the value of the dependent variable from one of the nearest neighbors in terms of
the linear prediction score.
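A minimal sketch of the predictive mean matching idea (the helper `pmm_impute`, its signature, and the single-imputation simplification are illustrative assumptions, not the procedure of \citet{little:1988} verbatim):

```python
import numpy as np

def pmm_impute(y, X, missing, k=5, rng=None):
    """Predictive mean matching, sketched: fit OLS on complete cases,
    compute linear prediction scores for everyone, and fill each missing
    value with the observed value of one of the k nearest neighbors
    in terms of the prediction score."""
    rng = rng or np.random.default_rng()
    obs = ~missing
    beta = np.linalg.lstsq(X[obs], y[obs], rcond=None)[0]
    score = X @ beta
    y_imp = y.copy()
    for i in np.where(missing)[0]:
        donors = np.argsort(np.abs(score[obs] - score[i]))[:k]
        y_imp[i] = y[obs][rng.choice(donors)]
    return y_imp
```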
\section{Set up and notation}
All of the derivations in this paper concern the joint matrix of the first and second order moments of the data:
\begin{equation}
\Omega =
\mathbb{E}
\left[
\begin{pmatrix}
1 \\
\mathbf{x} \\
y
\end{pmatrix}
\begin{pmatrix}
1 &
\mathbf{x}' &
y
\end{pmatrix}
\right]
=
\begin{pmatrix}
1 & \mu_\mathbf{x}' & \mu_y \\
\mu_\mathbf{x} & \mathbb{E}[ \mathbf{x} \mathbf{x}'] & \mathbb{E}[\mathbf{x} y] \\
\mu_y & \mathbb{E}[\mathbf{x}' y] & \mathbb{E}[y^2]
\end{pmatrix}
\equiv
\begin{pmatrix}
\omega_{00} & \Omega_{0x} & \omega_{0y} \\
\Omega_{0x}' & \Omega_{xx} & \Omega_{xy} \\
\omega_{0y} & \Omega_{xy}' & \omega_{yy}
\end{pmatrix}
\label{eq:Omega}
\end{equation}
The maximum likelihood estimates of the coefficients in the regression of $y$ on $x$
(obtained, for instance, through SEM modeling using maximum likelihood estimates with multivariate normal missing data method;
or approximated through multiple imputation) are obtained as
\begin{equation}
\hat\beta_{\rm FIML} =
\begin{pmatrix}
\omega_{00} & \hat\Omega_{0x} \\
\hat\Omega_{0x}' & \hat\Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
\hat\omega_{0y} \\
\hat\Omega_{xy}
\end{pmatrix}
\label{eq:ml:regression}
\end{equation}
where $\hat\Omega$ is the maximum likelihood estimator of the joint parameter matrix:
\begin{equation}
\hat\Omega =
\begin{pmatrix}
\omega_{00} & \hat\Omega_{0x} & \hat\omega_{0y} \\
\hat\Omega_{0x}' & \hat\Omega_{xx} & \hat\Omega_{xy} \\
\hat\omega_{0y} & \hat\Omega_{xy}' & \hat\omega_{yy}
\end{pmatrix}
=
\begin{pmatrix}
\omega_{00} & \hat\omega_{01} & \ldots & \hat\omega_{0p} & \hat\omega_{0y} \\
\hat\omega_{01} & \hat\omega_{11} & \ldots & \hat\omega_{1p} & \hat\omega_{y1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\hat\omega_{0p} & \hat\omega_{1p} & \ldots & \hat\omega_{pp} & \hat\omega_{yp} \\
\hat\omega_{0y} & \hat\omega_{y1} & \ldots & \hat\omega_{yp} & \hat\omega_{yy}
\end{pmatrix}
\label{eq:hat:Sigma}
\end{equation}
where $x_0=1$ is the regression intercept by convention, so that $\omega_{00}\equiv 1$, $\hat\omega_{0j}=\hat\mu_j$
are the (estimated) means of the $j$-th explanatory variable, and $\hat\omega_{0y}=\hat\mu_y$ is the estimated mean of $y$.
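With complete data, the moment-matrix construction (\ref{eq:ml:regression}) reproduces OLS exactly; a numpy sketch on hypothetical simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=n)

Z = np.column_stack([X, y])       # rows are (1, x', y)
Omega_hat = Z.T @ Z / n           # empirical version of (eq:Omega)

A = Omega_hat[:p + 1, :p + 1]     # [[omega_00, Omega_0x], [Omega_0x', Omega_xx]]
b = Omega_hat[:p + 1, p + 1]      # (omega_0y, Omega_xy')'
beta_hat = np.linalg.solve(A, b)  # (eq:ml:regression)
```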
To derive the likelihood, we need the form-specific submatrices obtained by multiplying the
overall matrix by selector matrices. For instance, in Design 2 above, for the first form, the relevant covariance matrix is
\begin{equation}
{\rm Cov}(x_2, x_3, y) = F_1 \Omega F_1',
\quad
F_1 =
\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\label{eq:selector:1}
\end{equation}
Matrices necessary to form $F_2\Omega F_2'$ and
$F_3 \Omega F_3'$ are defined in a similar way.
Define the unit selector vector
that picks up the estimates of the means, $e_0=(1,0,\ldots,0)'$, the unit vector with 1
in the ``zeroth'' position corresponding to the intercept in the parameter matrix $\Omega$.
In addition to $e_0$ selecting the first order moments, define the unit selection vectors
$e_y=(0,0,0,0,1)'$ as the unit vector selecting the last row/column of $\Omega$ corresponding to the $y$-parameters,
and $e_j=(0,\ldots,0,1,0,\ldots)$ is a unit vector with 1 in the $j$-th position corresponding to the $j$-th variable
(with the convention of indexing starting at zero). Then we observe that
\begin{align}
F_1' & = (e_2, e_3, e_y) \notag \\
F_2' & = (e_1, e_3, e_y) \notag \\
F_3' & = (e_1, e_2, e_y)
\end{align}
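The selector matrices are straightforward to build programmatically; a sketch (the helper `selector` is an illustrative assumption, with index 0 the intercept, 1--3 the regressors of Design 2, and 4 the outcome):

```python
import numpy as np

def selector(indices, dim=5):
    """Selector matrix whose rows are the unit vectors e_j' for j in `indices`."""
    F = np.zeros((len(indices), dim))
    F[np.arange(len(indices)), indices] = 1.0
    return F

F1 = selector([2, 3, 4])   # form 1 of Design 2 observes (x_2, x_3, y)
F2 = selector([1, 3, 4])
F3 = selector([1, 2, 4])
```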
\section{Likelihood and derivatives}
\subsection{Likelihood}
Indexing the forms by $k$, and observations within forms by $i$, the likelihood can be written as
\begin{align}
\ln L(\omega;X) =
\sum_{k=1}^3 \sum_{i=1}^{n_k} &
\Bigl\{
- \frac12 \mathop{\rm tr} (F_k F_k') \ln(2\pi)
- \frac12 \ln \det ( F_k \Omega F_k' )
\notag \\ &
- \frac12 \bigl[ (x_i',y) - e_0' \Omega F_k' \bigr] (F_k \Omega F_k')^{-1}
\bigl[ (x_i', y)' - F_k \Omega e_0 \bigr]
\Bigr\}
\label{eq:log:lkhd}
\end{align}
where $n_k$ is the number of observations on which the $k$-th form is collected,
and $F_k$ is the selector matrix corresponding to the $k$-th form.
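A direct transcription of the per-observation term of (\ref{eq:log:lkhd}), with mean $e_0'\Omega F_k'$ and dispersion matrix $F_k\Omega F_k'$ exactly as written (a sketch, not the paper's implementation):

```python
import numpy as np

def loglik_obs(z, F, Omega):
    """One observation's contribution to the log-likelihood, with z the
    observed sub-vector corresponding to the selector matrix F."""
    e0 = np.zeros(Omega.shape[0])
    e0[0] = 1.0
    S = F @ Omega @ F.T          # dispersion matrix of the observed block
    m = F @ Omega @ e0           # mean vector e_0' Omega F', transposed
    d = z - m
    k = F.shape[0]               # tr(F F') = number of observed variables
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (k * np.log(2.0 * np.pi) + logdet + d @ np.linalg.solve(S, d))
```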
Derivations of the asymptotic properties of the MLE estimate $\widehat\Omega$
are based on the matrix differential \citep{magn:neud:1999}
\begin{align}
{\sf d} \Omega = &
{\sf d} \omega_{00} \, e_0 e_0'
+ \sum_{j=1}^p {\sf d} \omega_{0j} \, (e_0 e_j' + e_j e_0')
+ {\sf d} \omega_{0y} \, (e_0 e_y' + e_y e_0')
+ \sum_{j=1}^p {\sf d} \omega_{jj} \, e_j e_j'
\notag \\ &
+ \sum_{j=1}^p \sum_{i\neq j} {\sf d} \omega_{ij} \, (e_i e_j' + e_j e_i')
+ \sum_{j=1}^p {\sf d} \omega_{yj} \, (e_y e_j' + e_j e_y'),
\label{eq:omega:as:sum:of:e:crossed}
\end{align}
After some tedious algebra,
the following entries of the expected Hessian $\mathbb{E} \nabla^2 \ln L(\omega;X)$ result.
\begin{align}
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{0s}\partial \omega_{0t}} \Bigr]
& =
- \sum_{k=1}^3 n_k \tau^{(k)}_{st},
\label{eq:d2lnl:domega0:domega0}
\\
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{0s}\partial \omega_{uu}} \Bigr]
& = 0,
\label{eq:d2lnl:domega0:domega:uu}
\\
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{0s}\partial \omega_{uv}} \Bigr]
& = 0
\label{eq:d2lnl:domega0:domega:uv}
\end{align}
The zero expected cross-derivatives indicate that the estimates of the multivariate
normal means and the variance-covariance parameters are independent. (This may not be the case in general
if the missing data mechanism coded by the matrices $F_k$ is not MCAR, and instead related to the data values.)
\begin{align}
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{ss}\partial \omega_{uu}} \Bigr]
& = - \frac12 \sum_{k=1}^3 n_k \bigl[ \tau^{(k)}_{su} \bigr]^2
\label{eq:d2lnl:domega:ss:domega:uu}
\\
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{ss}\partial \omega_{uv}} \Bigr]
& = - \sum_{k=1}^3 n_k \tau^{(k)}_{su} \tau^{(k)}_{sv}
\label{eq:d2lnl:domega:ss:domega:uv}
\\
\mathbb{E} \Bigl[ \frac {\partial^2 \ln L(\omega;X)}{\partial \omega_{st}\partial \omega_{uv}} \Bigr]
& = - \sum_{k=1}^3 n_k \bigl[ \tau^{(k)}_{su} \tau^{(k)}_{tv} + \tau^{(k)}_{sv} \tau^{(k)}_{tu} \bigr]
\label{eq:d2lnl:domega:st:domega:uv}
\\
\tau^{(k)}_{st} & = e_s' F_k' (F_k \Omega F_k')^{-1} F_k e_t
\label{eq:tau:kst}
\end{align}
where
$\tau^{(k)}_{st}$ is the $(s,t)$-th entry of the inverse of the form-specific covariance matrix;
and indices $s,t,u,v$ can enumerate the explanatory variables $x_j$ and the response $y$.
As $x_i$ and $y$ are considered jointly multivariate normal at this point, there is no separation into
dependent and explanatory variables.
Putting these entries together into a matrix and applying standard maximum
likelihood estimation theory, we obtain the asymptotic variance
of the maximum likelihood estimates of $\mathop{\rm vech}\Omega$:
\begin{equation}
{\rm As.} \mathbb{V}[ \hat\omega ] = - \mathbb{E} \bigl[ \nabla^2 \ln L(\omega;X) \bigr]^{-1}
\label{eq:asvar:omega-hat}
\end{equation}
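The quantities $\tau^{(k)}_{st}$ of (\ref{eq:tau:kst}) can be sketched directly (a sketch; the helper name is illustrative):

```python
import numpy as np

def tau(F, Omega, s, t):
    """tau_{st} = e_s' F' (F Omega F')^{-1} F e_t, as in (eq:tau:kst)."""
    dim = Omega.shape[0]
    es, et = np.eye(dim)[s], np.eye(dim)[t]
    M = F @ Omega @ F.T
    return es @ F.T @ np.linalg.solve(M, F @ et)
```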
\subsection{The delta method derivation of the asymptotic variance of $\hat\beta$}
Let us now return to the task of estimating the coefficients of the regression equation
$$y=\beta'x + \epsilon$$
via (\ref{eq:ml:regression}). The asymptotic variance-covariance matrix of $\hat\beta_{\rm FIML}$
can be obtained from the asymptotic covariance matrix of $\hat\Omega$ using the delta-method,
i.e., linearization of the relation (\ref{eq:ml:regression}):
\begin{align}
{\sf d} \beta = &
-
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
0 & {\sf d} \Omega_{0x} \\
{\sf d} \Omega_{0x}' & {\sf d} \Omega_{xx}
\end{pmatrix}
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
\omega_{0y} \\
\Omega_{xy}
\end{pmatrix}
+
\notag \\
& \hspace{5cm} +
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
{\sf d} \omega_{0y} \\
{\sf d} \Omega_{xy}
\end{pmatrix}
\label{eq:diff:beta}
\end{align}
where the individual components of ${\sf d}\Omega$ can be obtained from
(\ref{eq:omega:as:sum:of:e:crossed}).
Thus
\begin{align}
\frac{\partial\beta}{\partial\omega_{0j}} = &
-
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
0 & e_j' \\
e_j & \underline{0}
\end{pmatrix}
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
\omega_{0y} \\
\Omega_{xy}
\end{pmatrix}
\notag \\
\frac{\partial\beta}{\partial\omega_{0y}} = &
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
1 \\
\vec{0}
\end{pmatrix}
\notag \\
\frac{\partial\beta}{\partial\omega_{jj}} = &
-
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
0 & \vec{0}' \\
\vec{0} & e_j e_j'
\end{pmatrix}
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
\omega_{0y} \\
\Omega_{xy}
\end{pmatrix}
\notag \\
\frac{\partial\beta}{\partial\omega_{ij}} = &
-
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
0 & \vec{0}' \\
\vec{0} & e_i e_j' + e_j e_i'
\end{pmatrix}
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
\omega_{0y} \\
\Omega_{xy}
\end{pmatrix}
\notag \\
\frac{\partial\beta}{\partial\omega_{yj}} = &
\begin{pmatrix}
\omega_{00} & \Omega_{0x} \\
\Omega_{0x}' & \Omega_{xx}
\end{pmatrix}^{-1}
\begin{pmatrix}
0 \\
e_j
\end{pmatrix}
\notag \\
\nabla_\omega \beta = & \Bigl(
\frac{\partial\beta}{\partial\omega_{01}}, \frac{\partial\beta}{\partial\omega_{02}}, \ldots,
\frac{\partial\beta}{\partial\omega_{0y}},
\frac{\partial\beta}{\partial\omega_{11}}, \frac{\partial\beta}{\partial\omega_{12}}, \ldots, \frac{\partial\beta}{\partial\omega_{y1}}, \frac{\partial\beta}{\partial\omega_{22}}, \ldots,
\frac{\partial\beta}{\partial\omega_{yp}}, 0 \Bigr)
\label{eq:dbeta:domega}
\end{align}
where the derivatives are with respect to the components of the vectorization $\mathop{\rm vech}\Omega$,
of which the last term is $\frac{\partial\beta}{\partial\omega_{yy}}=0$. By the standard multivariate
delta-method results \citep{newey:mcfadden:1994,vandervaart:1998},
\begin{equation}
{\rm As.} \mathbb{V}[ \hat\beta ] = \nabla_\omega \beta \, \mathbb{V}[ \hat\omega ] \, \nabla_\omega' \beta
\label{eq:asvar:beta-hat}
\end{equation}
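One entry of (\ref{eq:dbeta:domega}) is easy to verify numerically: $\beta$ is linear in $\omega_{0y}$, so a symmetric finite difference recovers $\partial\beta/\partial\omega_{0y} = A^{-1}e_0$ essentially exactly. A sketch on a hypothetical moment matrix:

```python
import numpy as np

def beta_from_Omega(Omega, p):
    A = Omega[:p + 1, :p + 1]
    return np.linalg.solve(A, Omega[:p + 1, p + 1])

# Hypothetical moment matrix for p = 2 regressors plus intercept and y.
rng = np.random.default_rng(6)
Z = np.column_stack([np.ones(500), rng.normal(size=(500, 3))])
Omega = Z.T @ Z / 500
p = 2

# Symmetric finite-difference step in omega_{0y} (both symmetric entries).
h = 1e-6
Op, Om = Omega.copy(), Omega.copy()
Op[0, p + 1] += h; Op[p + 1, 0] += h
Om[0, p + 1] -= h; Om[p + 1, 0] -= h
num_grad = (beta_from_Omega(Op, p) - beta_from_Omega(Om, p)) / (2 * h)

A = Omega[:p + 1, :p + 1]
analytic = np.linalg.solve(A, np.eye(p + 1)[:, 0])   # A^{-1} e_0
```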
\section{Example: Big Five Inventory}
In our application, we wanted to analyze the relation between mental health outcomes
and the Big Five personal traits:
\begin{itemize}
\item Openness to experience (inventive/curious vs. consistent/cautious)
\item Conscientiousness (efficient/organized vs. easy-going/careless)
\item Extraversion (outgoing/energetic vs. solitary/reserved)
\item Agreeableness (friendly/compassionate vs. challenging/detached)
\item Neuroticism (sensitive/nervous vs. secure/confident)
\end{itemize}
These personal traits have been found in numerous studies to be related to academic performance, disorders, general health,
and many other behaviors and outcomes. The standard Big Five scale consists of 44 items,
some of which are reverse worded and reverse scored to minimize the risk of straightlining, and with
items from different subscales mixed throughout the scales. Each item is a 5 point Likert scale with a clear midpoint.
In the population of interest, the Big Five traits are expected to have the following correlations, based
on preceding research:
\begin{equation}\label{eq:bigfive:corr}
{\rm Cov}[x] =
\begin{pmatrix}
1 & 0.26 & 0.47 & 0.20 & -0.16 \\
0.26 & 1 & 0.28 & 0.46 & -0.28 \\
0.47 & 0.28 & 1 & 0.20 & -0.35 \\
0.20 & 0.46 & 0.20 & 1 & -0.37 \\
-0.16 & -0.28 & -0.35 & -0.37 & 1
\end{pmatrix}
\equiv
\Sigma_{\rm Big 5}
\end{equation}
We thus consider a regression model
$$
y_i = \beta_0 + \beta_1 x_{i1} + \ldots + \beta_5 x_{i5} + \varepsilon_i
$$
where $x_{i1},\ldots,x_{i5}$ are subscale scores of the Big Five traits.
Measurement error in these scores is ignored, although more accurate methods are
available to account for it \citep{skrondal:laake:2001}.
A balanced multiple matrix sampling design would consist of ten forms,
each administering the outcome $y$ and two of the Big Five subscales:
\begin{table}[!h]
\centering
\caption{Multiple matrix sampling design with five explanatory variables. \label{tab:bigfive}}
\medskip
\begin{tabular}{l|cccccccccc}
Form & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
O $\vphantom{\Bigl|}$ & + & + & + & + & & & & & & \\
C $\vphantom{\Bigl|}$ & + & & & & + & + & + & & & \\
E $\vphantom{\Bigl|}$ & & + & & & + & & & + & + & \\
A $\vphantom{\Bigl|}$ & & & + & & & + & & + & & + \\
N $\vphantom{\Bigl|}$ & & & & + & & & + & & + & + \\
$y\vphantom{\Bigl|}$ & + & + & + & + & + & + & + & + & + & + \\
\end{tabular}
\end{table}
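The balance of Table \ref{tab:bigfive} can be checked programmatically: ten forms, one per pair of subscales, each subscale appearing on four forms (a sketch):

```python
from itertools import combinations

# One form per pair of subscales: C(5,2) = 10 forms in total.
subscales = "OCEAN"
forms = [set(pair) for pair in combinations(subscales, 2)]

appearances = {s: sum(s in form for form in forms) for s in subscales}
```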
\section{Simulation 1: Parameter space exploration}
\label{sec:explore}
In this simulation exercise, we explore the parameter space of
regression coefficients to gauge the degree of variability of sample size
determination results. Asymptotic variance resulting from
(\ref{eq:asvar:beta-hat}) is used to obtain the sample sizes
for the tasks outlined in Section \ref{subsec:reg:power}.
Simulation 1 consists of the following steps.
\begin{enumerate}
\item Population regression parameters are simulated from $\beta \sim N(\mathbf{0},I_5)$.
\item To provide the scale of the residual variance, the fraction of explained variance is set to
$R^2=0.15$, a moderate effect for behavioral and social science data, and the associated
residual variance $\sigma_\varepsilon^2$ is calculated based on this value of $R^2$.
\item The complete data variances stemming from (\ref{eq:ols}) are recorded.
\label{enum:explor:full:var}
\item The multiple-matrix-sampled data variances stemming from (\ref{eq:asvar:beta-hat}) are recorded.
\label{enum:explor:mcar:var}
\item Sample size to reject the test of overall significance $H_0: \beta_1 = \ldots = \beta_5 = 0$
at 5\% level with 80\% power is recorded.
\label{enum:explor:n:overall}
\item Sample size to detect an increase in $R^2$ by 0.01 (i.e., from 0.15 to 0.16), through a uniform
multiplicative increase in the values of the regression parameters, keeping the residual variance
$\sigma_\varepsilon^2$ constant, at 5\% level with 80\% power, is recorded.
\label{enum:explor:n:infl}
\item Sample size to detect an increase in $R^2$ by 0.01 (i.e., from 0.15 to 0.16), through an increase
in the value of the coefficient $\beta_j, j=1, \ldots, 5$, keeping the residual variance
$\sigma_\varepsilon^2$ constant and other regression parameters constant, at 5\% level with 80\% power, is recorded.
\label{enum:explor:n:poke}
\item Fraction of missing information (FMI) is computed as one minus the ratio
of the variance of regression parameter estimate with complete data (obtained in step \ref{enum:explor:full:var})
to the variance of regression parameter estimate with missing data (obtained in step \ref{enum:explor:mcar:var})
\label{enum:explor:fmi}
\end{enumerate}
1,000 Monte Carlo draws of the $\beta$ vector were performed, followed by analytical computation of the asymptotic variances and power.
Results are presented graphically. Figure \ref{fig:explore:n} presents the sample sizes obtained in steps
\ref{enum:explor:n:overall}--\ref{enum:explor:n:poke} of the parameter exploration. A striking feature of the plot
is wide variability of the sample sizes as a function of the specific configuration of parameters. While the lower limit
of the sample size necessary to detect an overall increase in $R^2$ by $0.01$ is about $n=82K$, the median value
is $n=110K$, the 95th percentile is $n=220K$, and the maximum (worst case scenario) identified in this simulation is
$n=400K$. The patterns of the coefficients of the worst case scenarios typically indicate large coefficients of opposite signs
of the positively correlated variables ($x_1$ through $x_4$), or large coefficients of similar size of one of the positively
correlated factors ($x_1$ through $x_4$) and a high value of factor $x_5$ that is negatively correlated with all other subscales.
This wide range of variability makes it difficult to provide a definite recommendation concerning the sample
size for the study to the stakeholders. A conservative value based on a high percentile (80\% or 90\%) can be recommended,
to protect against bad population values of regression parameters at the expense of a potentially unnecessary increase in costs.
Figure \ref{fig:explore:fmi} presents the exploration distribution of the fraction of missing information
due to the missing data. FMI for the intercept is generally low, below 0.2. FMI for the regression slopes is generally high,
in the range of about 70\% to 80\%. Given the structure of the missing data shown by the
multiple matrix sampling design in Table \ref{tab:bigfive},
each of the predictor variables is observed in 40\% of the data (informing the diagonal entries of the $X'X$ matrix),
and each pairwise combination of the regressors is observed in 10\% of the data (informing the off-diagonal entries).
This yields an expected information loss for the predictor variables somewhere between 60\% and 90\%.
\begin{figure}[!bh]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.09]{explor_n_overall.png}
&
\includegraphics[scale=0.09]{explor_n_infl01.png}
\\
(a) & (b) \\
\includegraphics[scale=0.09]{explor_n_poke1.png}
&
\includegraphics[scale=0.09]{explor_n_poke5.png}
\\
(c) & (d)
\end{tabular}
\caption{
\label{fig:explore:n}
Sample size to ensure the necessary detectable effect.
(a) Overall test $H_0: R^2=0$;
(b) $R^2$ increase due to overall explanatory power increase from $R^2=0.15$ by $0.01$;
(c) $R^2$ increase due to an increase in explanatory power from $R^2=0.15$ by $0.01$ due to $x_1$;
(d) $R^2$ increase due to an increase in explanatory power from $R^2=0.15$ by $0.01$ due to $x_5$.
}
\end{figure}
\begin{figure}[!bh]
\centering
\includegraphics[scale=0.09]{explor_fmi0.png}
(a)
\includegraphics[scale=0.09]{explor_fmi1.png}
(b)
\includegraphics[scale=0.09]{explor_fmi5.png}
(c)
\caption{
\label{fig:explore:fmi}
Fraction of missing information:
(a) intercept; (b) slope of $x_1$; (c) slope of $x_5$.
}
\end{figure}
\section{Simulation 2: Performance in finite samples}
To study the performance of estimation methods based on SEM estimation with missing data, and on multiple imputation procedures,
a simulation with microdata was also performed. For each simulation draw, the following steps were taken.
\begin{enumerate}
\item Sample size is set to $n=1,000$ (i.e., $100$ observations per form).
\item Multivariate non-normal factor scores are simulated:
\begin{enumerate}
\item The non-normal principal components of $x_1, \ldots, x_5$ are simulated as
\begin{align}
f_1 & = -\ln u_1-1, \quad u_1 \sim U[0,1] \\
f_2 & = (2 b-1) (-\ln u_2-1), \quad b \sim {\rm Bernoulli}(0.5), u_2 \sim U[0,1] \\
f_3, f_4, f_5 & \sim N(0,1)
\end{align}
so that each principal component has a mean of 0 and variance of 1,
with all the underlying random variables being drawn independently of each other.
The first component $f_1$ has a marginal exponential distribution with a heavy right tail,
ensuring the overall skewness of each factor. The second component has a bimodal distribution
with two exponential components and heavy tails. The remaining three components are normal.
\item The factor values are reconstructed as
\begin{equation}
\begin{pmatrix}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
\end{pmatrix}
=
\sum_{j=1}^5 \mathbf{u}_j \sqrt{\lambda_j} f_j
\label{eq:Sigma:big5:eigenproblem}
\end{equation}
where $\Sigma_{\rm Big 5} = U' \Lambda U$ is the eigenvalue decomposition of the target
covariance matrix (\ref{eq:bigfive:corr}) of the Big Five factors.
\end{enumerate}
\item The outcome is obtained as $y=0.3x_1 + 0.3x_4 + \varepsilon, \varepsilon \sim N(0,1.248602)$
where the specific value of the residual variance was chosen to ensure that $R^2=0.15$ in the population.
\item The regression model with the complete data is fit to obtain the benchmark for FMI calculation.
\item The values of regressors were deleted in accordance with
the multiple matrix sampling design in Table \ref{tab:bigfive}.
\item The normal theory based SEM model for missing data was fit; regression parameter estimates
and their asymptotic standard errors based on the inverse Hessian were recorded.
\item $M=50$ complete data sets were imputed using a multivariate normal imputation model.
\item The regression model was estimated using the first $M=5$ data sets, in accordance with the traditional
recommendation regarding the number of imputed data sets. Regression parameter estimates
and their asymptotic standard errors based on Rubin's rules were recorded.
\item The regression model was estimated using all of the $M=50$ data sets. Regression parameter estimates
and their asymptotic standard errors based on Rubin's rules were recorded.
\item $M=50$ complete data sets were imputed using a predictive mean matching (PMM) imputation model for each of the missing variables.
\item The regression model was estimated using the first $M=5$ data sets, in accordance with the traditional
recommendation regarding the number of imputed data sets. Regression parameter estimates
and their asymptotic standard errors based on Rubin's rules were recorded.
\item The regression model was estimated using all of the $M=50$ data sets. Regression parameter estimates
and their asymptotic standard errors based on Rubin's rules were recorded.
\end{enumerate}
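The data-generating steps above can be sketched in code as follows. This is a minimal sketch: the matrix \texttt{Sigma} below is an equicorrelation placeholder for the Big Five correlation matrix (\ref{eq:bigfive:corr}), the residual variance is re-derived from the $R^2=0.15$ target rather than hard-coded, and the deletion and imputation steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large n so sample moments are close to their targets

# Step 2(a): non-normal principal components, each with mean 0 and variance 1.
u1, u2 = rng.uniform(size=(2, n))
b = rng.integers(0, 2, size=n)           # Bernoulli(0.5)
f = np.empty((5, n))
f[0] = -np.log(u1) - 1                   # shifted exponential: skewed, heavy right tail
f[1] = (2 * b - 1) * (-np.log(u2) - 1)   # bimodal mixture of two exponential components
f[2:] = rng.standard_normal((3, n))      # remaining components are normal

# Step 2(b): rotate into correlated factors x = sum_j u_j sqrt(lambda_j) f_j,
# so that Cov(x) = U diag(lambda) U' = Sigma.  Sigma is a placeholder here.
Sigma = 0.3 * np.ones((5, 5)) + 0.7 * np.eye(5)
lam, U = np.linalg.eigh(Sigma)
x = U @ (np.sqrt(lam)[:, None] * f)      # 5 x n matrix of factor values

# Step 3: outcome with residual variance tuned so that R^2 = 0.15.
beta = np.array([0.3, 0.0, 0.0, 0.3, 0.0])
signal_var = beta @ Sigma @ beta
resid_var = signal_var * (1 - 0.15) / 0.15
y = beta @ x + rng.normal(scale=np.sqrt(resid_var), size=n)
```

With the paper's actual correlation matrix in place of the placeholder, the same construction reproduces the stated residual variance of $1.248602$.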
There were 1,200 Monte Carlo samples drawn.
\begin{figure}[!bh]
\centering
\includegraphics[scale=0.2]{simul_hist_b1_cens.png}
\caption{Sampling distributions of the parameter estimates $\hat\beta_1$ across different methods.}\label{fig:simul:b1}
\end{figure}
Figure \ref{fig:simul:b1} reports the simulated distributions of the estimates of parameter $\beta_1$.
The population value of 0.3 is shown as a vertical line on the plot. As expected, the complete data
regression model demonstrates higher efficiency. Estimates based on the multivariate normal methods
are biased up, while those based on MI with predictive mean matching are biased down.
Distributions of the estimates based on the multivariate normal methods are more spread out
than implied by the asymptotic variance (\ref{eq:asvar:beta-hat}), while those based
on PMM MI are less spread out, with apparent efficiency gains extracted from the higher moments of the data.
The plots in Figure \ref{fig:simul:b1} are truncated, with about 3\% of the Monte Carlo simulations
falling beyond the right edge of the plot (the value $\beta_1=0.6$), and about 1\% of the Monte Carlo simulations
falling beyond the left edge of the plot (the value $\beta_1=0$), for each of the methods based on multivariate normality.
Details for $\hat\beta_1$ and other regression coefficient estimates are provided in Table \ref{tab:simul:b1}.
\begin{table}[!th]
\centering
\caption{Monte Carlo means, [95\% confidence intervals] for the means and $\langle$standard deviations$\rangle$ for regression parameter estimates.}
\label{tab:simul:b1}
\begin{tabular}{l|ccccc}
Method & $\hat\beta_1$ & $\hat\beta_2$ & $\hat\beta_3$ & $\hat\beta_4$ & $\hat\beta_5$ \\
\hline
Complete & 0.3002 & 0.0016 & 0.0006 & 0.3002 & 0.0015 \\
data & [0.298,0.303] & [-0.001,0.004] & [-0.002,0.003] & [0.298,0.303] & [-0.001,0.004] \\
regression & $\langle$ 0.0418 $\rangle$ & $\langle$ 0.0408 $\rangle$ & $\langle$ 0.0440 $\rangle$ & $\langle$ 0.0414 $\rangle$ & $\langle$ 0.0413 $\rangle$
\medskip
\\
SEM with & 0.3277 & -0.0203 & -0.0130 & 0.3324 & 0.0096 \\
MVN & [0.320,0.336] & [-0.028,-0.013] & [-0.022,-0.004] & [0.324,0.340] & [0.003,0.017] \\
missing data & $\langle$ 0.1414 $\rangle$ & $\langle$ 0.1356 $\rangle$ & $\langle$ 0.1588 $\rangle$ & $\langle$ 0.1429 $\rangle$ & $\langle$ 0.1249 $\rangle$
\medskip
\\
MI using & 0.3369 & -0.0253 & -0.0173 & 0.3393 & 0.0091 \\
MVN model, & [0.329,0.345] & [-0.033,-0.017] & [-0.027,-0.008] & [0.331,0.347] & [0.002,0.016] \\
$M=5$ & $\langle$ 0.1390 $\rangle$ & $\langle$ 0.1415 $\rangle$ & $\langle$ 0.1645 $\rangle$ & $\langle$ 0.1435 $\rangle$ & $\langle$ 0.1259 $\rangle$
\medskip
\\
MI using & 0.3430 & -0.0314 & -0.0208 & 0.3466 & 0.0109 \\
MVN model, & [0.334,0.352] & [-0.040,-0.023] & [-0.031,-0.011] & [0.338,0.355] & [0.003,0.018] \\
$M=50$ & $\langle$ 0.1556 $\rangle$ & $\langle$ 0.1507 $\rangle$ & $\langle$ 0.1760 $\rangle$ & $\langle$ 0.1531 $\rangle$ & $\langle$ 0.1336 $\rangle$
\medskip
\\
MI using & 0.2661 & 0.0356 & 0.0261 & 0.2666 & -0.0056 \\
PMM model, & [0.262,0.270] & [0.032,0.039] & [0.022,0.030] & [0.263,0.271] & [-0.009,-0.002] \\
$M=5$ & $\langle$ 0.0707 $\rangle$ & $\langle$ 0.0679 $\rangle$ & $\langle$ 0.0758 $\rangle$ & $\langle$ 0.0707 $\rangle$ & $\langle$ 0.0631 $\rangle$
\medskip
\\
MI using & 0.2678 & 0.0361 & 0.0251 & 0.2671 & -0.0043 \\
PMM model, & [0.264,0.272] & [0.032,0.040] & [0.021,0.029] & [0.263,0.271] & [-0.008,-0.001] \\
$M=50$ & $\langle$ 0.0676 $\rangle$ & $\langle$ 0.0656 $\rangle$ & $\langle$ 0.0719 $\rangle$ & $\langle$ 0.0665 $\rangle$ & $\langle$ 0.0591 $\rangle$ \\
\hline
Population & 0.3 & 0 & 0 & 0.3 & 0 \\
& $\langle$ 0.0791 $\rangle$ & $\langle$ 0.0856 $\rangle$ & $\langle$ 0.0926 $\rangle$ & $\langle$ 0.0824 $\rangle$ & $\langle$ 0.0832 $\rangle$ \\
\end{tabular}
\end{table}
Figure \ref{fig:simul:se1} provides the Monte Carlo distributions of the standard errors reported for the missing data methods.
The dotted vertical line is the asymptotic standard error based on (\ref{eq:asvar:beta-hat}), 0.0791. The dashed lines are empirical
means of the standard errors. All distributions are skewed with heavy right tails. The distributions of the standard errors
based on the multivariate normal methods contain outliers outside the range of the plot (3\% of the SEM with missing data results; 6\% of
the results for MI using the multivariate normal model with $M=5$; 8\% of the results for MI using the multivariate normal model with $M=50$;
the range of the plots is from 0 to $3\times$ the asymptotic standard error, 0.0791). Distributions of the standard errors
for the multivariate normal methods are significantly higher than this asymptotic standard error, which reflects, to some extent,
the greater variability of the estimates observed above in Figure \ref{fig:simul:b1} and Table \ref{tab:simul:b1}.
Distributions of the standard errors
for the PMM MI method are significantly lower than the asymptotic standard error, which reflects, to some extent,
the lower variability of the estimates based on this method.
A higher number of multiple imputations $M=50$ vs. $M=5$ helps to stabilize the variance estimates, particularly in the case of PMM.
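Rubin's rules, used above to pool the per-imputation estimates and their standard errors, can be sketched as follows. The function name is ours, and the reported fraction of missing information is the large-$M$ approximation that omits the degrees-of-freedom adjustment.

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Combine M completed-data estimates by Rubin's rules.

    estimates, variances: length-M sequences of per-imputation point
    estimates and their squared standard errors.
    Returns (pooled estimate, pooled standard error, fraction of missing info).
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    M = est.size
    qbar = est.mean()                # pooled point estimate
    ubar = var.mean()                # within-imputation variance
    b = est.var(ddof=1)              # between-imputation variance
    t = ubar + (1 + 1 / M) * b       # total variance
    fmi = (1 + 1 / M) * b / t        # large-M fraction of missing information
    return qbar, np.sqrt(t), fmi
```

The $(1 + 1/M)$ inflation of the between-imputation variance is what makes a larger $M$ stabilize both the standard errors and the reported FMI.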
\begin{figure}[!bh]
\centering
\includegraphics[scale=0.2]{simul_se1_sep.png}
\caption{Sampling distributions of the standard errors of $\hat\beta_1$ across different methods.}\label{fig:simul:se1}
\end{figure}
Coverage of the nominal 95\% confidence intervals is analyzed in Table \ref{tab:simul:cover95}. Despite the shortcomings
of both the point estimates and the standard errors noted above, these deficiencies appear to offset one another, providing
confidence interval coverage fairly close to the target.
\begin{table}[!th]
\centering
\caption{Coverage of the nominal 95\% confidence intervals.}\label{tab:simul:cover95}
\begin{tabular}{l|ccccc}
Method & $\hat\beta_1$ & $\hat\beta_2$ & $\hat\beta_3$ & $\hat\beta_4$ & $\hat\beta_5$ \\
\hline
Complete data regression & 95.5\% & 95.4\% & 95.1\% & 95.8\% & 97.3\% \\
SEM with MVN missing data & 97.8\% & 98.6\% & 97.3\% & 98.7\% & 98.4\% \\
MI using MVN model, $M=5$ &93.3\% & 93.8\% & 93.0\% & 93.3\% & 96.8\% \\
MI using MVN model, $M=50$ &92.8\% & 93.4\% & 93.2\% & 93.1\% & 97.9\% \\
MI using PMM model, $M=5$ &94.4\% & 95.6\% & 94.5\% & 95.0\% & 94.2\% \\
MI using PMM model, $M=50$ &96.3\% & 96.6\% & 95.8\% & 96.8\% & 97.2\% \\
\end{tabular}
\end{table}
Estimated fractions of missing information reported by the software are shown in Figure \ref{fig:simul:fmi}.
The dotted line is the value based on the asymptotic variance, 73.6\%. Dashed lines show the empirical FMI,
based on the ratio of the Monte Carlo variance of $\hat\beta_1$ under a given missing data method
to the variance of $\hat\beta_1$ under the complete data. The latter empirical FMI is greater than
the theoretical one for the MI methods based on the multivariate normality assumption, and lower than
the theoretical one for the PMM MI methods. The methods based on multivariate normality appear to underestimate
the FMI, as the distributions of the reported FMI lie to the left of the true value (dashed line).
The FMI that come out of PMM MI appear to be more accurate. An increase in the number of completed data sets
from $M=5$ to $M=50$ helps to improve stability of the FMI estimates, making the distributions of the empirical FMI
more concentrated.
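The empirical FMI described above can be computed from the Monte Carlo draws as follows. This is a sketch under the usual convention $\mathrm{FMI} = 1 - V_{\rm complete}/V_{\rm method}$; the function name and this exact convention are our assumptions.

```python
import numpy as np

def empirical_fmi(est_method, est_complete):
    """Empirical fraction of missing information across Monte Carlo draws:
    1 - Var(complete-data estimates) / Var(missing-data-method estimates)."""
    v_method = np.var(est_method, ddof=1)
    v_complete = np.var(est_complete, ddof=1)
    return 1 - v_complete / v_method
```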
\begin{figure}[!bh]
\centering
\includegraphics[scale=0.2]{simul_fmi1_sep.png}
\caption{Reported fraction of missing information.}\label{fig:simul:fmi}
\end{figure}
\section{Concluding remarks}
This paper provides an analytical framework for analysis of regression models
(and, more generally, other statistical methods that are based on the covariance matrices of observed
items or scales)
that allows for quick power analyses while avoiding computationally intensive simulations.
Revisiting the initial motivation of burden reduction, the results are underwhelming.
Is burden really reduced by multiple matrix sampling in the example considered?
Out of five explanatory variables (based on approximately 8 survey items each) and one outcome,
only three variables are collected on each of the matrix sampled instrument forms.
This translates to about 50\% burden reduction per respondent. However,
given that the loss of information quantified by the fraction of missing information
(FMI) is about 75--80\%, the data collection sample sizes would
need to be about 4--5 times larger compared to the traditional data collection
of all items at once.
Unless the response rate drops sharply by a factor of more than two due to
the increase in questionnaire length, the total public burden is increased.
The sample sizes necessary to detect the required effect sizes in increased $R^2$
demonstrate long tails in the exploration of parameter spaces. These long tails
make it difficult to plan for the worst-case scenarios associated with ``unfortunate''
regression parameter configurations. Should a specific decision need to be made
based on parameter explorations akin to those undertaken in Section \ref{sec:explore},
the trade-off between the survey costs due to large sample sizes and the risk of
an underpowered study, should the coefficient estimates turn out to have an ``unfortunate''
configuration, should be carefully discussed with the survey stakeholders to find
the most appropriate course of action.
We conducted a finite sample simulation with non-normal data and several missing data methods,
and determined that the methods that assume multivariate normality generally perform poorly,
and generate a non-negligible proportion of severe outliers. In comparison,
semiparametric multiple imputation by predictive mean matching with a sufficiently large
number of imputed data sets seems to work best.
Our work can be extended in a number of directions. The derivations of asymptotic
variances are based on the working assumption of multivariate normality and on using
the inverse information matrix to estimate variances. With non-normal data, the problem
can be formulated in terms of estimating equations, and sandwich variance estimators
should be formed. As our simulation demonstrated, asymptotic standard errors based on
the inverse information matrix are inadequate for the analysis methods that we used,
leading to underestimates with misspecified normality-based methods, and overestimates with
a more accurate semiparametric method.
The current paper assumed independence of respondents. In practice, complex survey features
such as strata, clusters, unequal probabilities of selection, and weight calibration would
affect asymptotic properties of the estimates. In particular, the sandwich variance estimation
will be required. Many practical survey statistics issues may also interact with multiple
matrix sampling in unusual ways. How would differential nonresponse by form affect the results?
What should we do when a stratum has fewer than two cases of a given form? These and other
questions related to design-based inference would need to be answered when multiple matrix sampling
is applied in practice.
Finally, in terms of ensuring adequate measurement properties, we note
that psychometric properties are usually established and validated for full scales,
but not necessarily for the subscales that respondents are exposed to in multiple matrix sampling instruments.
In particular, if the order of the items, or the degree of mixing of items from the different
subscales of the Big Five Inventory is important for the validity of the scale and its subscales,
these properties may be violated when shorter subscales are administered that require
the respondent to answer similar questions more frequently.
\clearpage
\section{Introduction}
Let $A \in \mathbb{Z}^{d \times n}$ be an integral matrix. Its $ij$-th element is
denoted by $A_{i\,j}$, $A_{i\,*}$ is the $i$-th row of $A$, and $A_{*\,j}$ is the $j$-th column of $A$. The set of integers from $i$ to $j$ is denoted by $i:j = \{i,i+1,\dots,j\}$. Additionally, for subsets $I \subseteq \{1,\dots,d\}$ and $J \subseteq \{1,\dots,n\}$, $A_{I\,J}$ denotes the sub-matrix of $A$ generated by all rows with numbers in $I$ and all columns with numbers in $J$. Sometimes we will replace the symbol $I$ or $J$ by the symbol $*$, meaning that we take the set of all rows or columns, respectively. Let $\rank(A)$ be the rank of an integral matrix $A$. The \emph{lattice spanned by the columns} of $A$ is denoted by $\Lambda(A) = \{A t : t \in \mathbb{Z}^{n} \}$. Let $||A||_{\max}$ denote the maximal absolute value of the elements of $A$. We refer to \cite{CAS71,GRUB87,SIEG89} for mathematical introductions to lattices.
An algorithm parameterized by a parameter $k$ is called \emph{fixed-parameter tractable} (FPT-\emph{algorithm}) if its complexity can be expressed by a function from the class $f(k)\, n^{O(1)}$, where $n$ is the input size and $f(k)$ is a function that depends on $k$ only.
A computational problem parameterized by a parameter $k$ is called \emph{fixed-parameter tractable} (FPT-\emph{problem}) if it can be solved by an FPT-algorithm. For more information about parameterized complexity theory, see \cite{PARAM15,PARAM99}.
{\bf Shortest Lattice Vector Problem}
The Shortest Lattice Vector Problem (SVP) consists in finding $x \in \mathbb{Z}^n \setminus \{0\}$ minimizing $||H x||$, where $H \in \mathbb{Q}^{d \times n}$ is given as an input. The SVP is known to be NP-hard with respect to randomized reductions, see \cite{AJTAI96}. The first polynomial-time approximation algorithm for the SVP was proposed by A.~Lenstra, H.~Lenstra~Jr., and L.~Lov\'asz in the paper \cite{LLL82}. Shortly afterwards, U.~Fincke and M.~Pohst \cite{FP83,FP85}, and R.~Kannan \cite{KANN83,KANN87} described the first exact SVP solvers. Kannan's solver has complexity $2^{O(n\,\log n)} \poly(\size H)$. The first SVP solvers achieving complexity $2^{O(n)} \poly(\size H)$ were proposed by M.~Ajtai, R.~Kumar and D.~Sivakumar \cite{AJKSK01,AJKSK02}, and by D.~Micciancio and P.~Voulgaris \cite{MICCVOUL10}. The SVP solvers discussed above are designed for the Euclidean norm $l_2$. Recent results about SVP solvers for more general norms are presented in the papers \cite{BLNAEW09,DAD11,EIS11}. The paper of G.~Hanrot, X.~Pujol, and D.~Stehl\'e \cite{SVPSUR11} is a good survey of SVP solvers.
Recently, a novel polynomial-time approximation SVP solver was proposed by J.~Cheon and C.~Lee in the paper \cite{CHLEE15}. The algorithm is parameterized by the lattice determinant, and its complexity and approximation factor are the best known for lattices with bounded determinant.
In our work, we consider only integral lattices whose generating matrices are nearly square. The goal of Section 2 is the development of an exact FPT-algorithm for the SVP parameterized by the lattice determinant. Additionally, in Section 3 we develop an FPT-algorithm for lattices whose generating matrices have no singular sub-matrices. The proposed algorithms work for the $l_p$ norm for any finite $p \geq 1$.
{\bf Integer Linear Programming Problem}
The Integer Linear Programming Problem (ILPP) can be formulated as $\min\{ c^\top x : H x \leq b,\, x \in \mathbb{Z}^n\}$ for integral vectors $c,b$ and an integral matrix $H$.
There are several polynomial-time algorithms for solving linear programs. We mention L.~G.~Khachiyan's algorithm \cite{KHA80}, N.~Karmarkar's algorithm \cite{KAR84}, and Y.~E.~Nesterov's algorithm \cite{NN94,PAR91}. Unfortunately, it is well known that the ILPP is an NP-hard problem. Therefore, it would be interesting to reveal polynomially solvable cases of the ILPP. Recall that an integer matrix is called \emph{totally unimodular} if every minor of it is equal to $+1$, $-1$, or $0$. It is well known that all optimal solutions of any linear program with a totally unimodular constraint matrix are integer. Hence, for any primal linear program and the corresponding integer linear program with a totally unimodular constraint matrix, the sets of their optimal solutions coincide. Therefore, any polynomial-time linear optimization algorithm (such as the algorithms in \cite{KHA80,KAR84,NN94,PAR91}) is also an efficient algorithm for the ILPP.
The next natural step is to consider the \emph{bimodular} case, i.e., the ILPP with constraint matrices whose rank minors all have absolute values in the set $\{0, 1, 2\}$. The first paper to discover fundamental properties of the bimodular ILPP is that of S.~Veselov and A.~Chirkov \cite{VESCH09}. Very recently, using the results of \cite{VESCH09}, strongly polynomial-time solvability of the bimodular ILPP was proved by S.~Artmann, R.~Weismantel, and R.~Zenklusen in the paper \cite{AW17}.
More generally, it would be interesting to investigate the complexity of problems with constraint matrices having bounded minors. The maximum absolute value of the rank minors of an integer matrix can be interpreted as a measure of proximity to the class of unimodular matrices. Let the symbol ILPP$_{\Delta}$ denote the ILPP with a constraint matrix each rank minor of which has absolute value at most $\Delta$. A conjecture arises that for each fixed natural number $\Delta$ the ILPP$_{\Delta}$ can be solved in polynomial time \cite{SHEV96}. There are variants of this conjecture, where the augmented matrices $\dbinom{c^\top}{A}$ and $(A \; b)$ are considered \cite{AZ11,SHEV96}. Unfortunately, not much is known about the complexity of the ILPP$_{\Delta}$. For example, the complexity status of the ILPP$_{3}$ is unknown. A next step towards a clarification of the complexity was made by S.~Artmann, F.~Eisenbrand, C.~Glanzer, O.~Timm, S.~Vempala, and R.~Weismantel in the paper \cite{AE16}. Namely, it has been shown that if the constraint matrix, additionally, has no singular rank sub-matrices, then the ILPP$_{\Delta}$ can be solved in polynomial time. Some results about polynomial-time solvability of the Boolean ILPP$_{\Delta}$ were obtained in the papers \cite{AZ11,BOCK14,GRIBM17}. Additionally, the class ILPP$_{\Delta}$ has a number of interesting properties. In the papers \cite{GRIB13,GRIBV16}, it has been shown that any lattice-free polyhedron of the ILPP$_{\Delta}$ has relatively small width, i.e., the width is bounded by a function that is linear in the dimension and exponential in $\Delta$. Interestingly, due to \cite{GRIBV16}, the width of an empty lattice simplex can be estimated in terms of $\Delta$ in this case. In the paper \cite{GRIBC16}, it has been shown that the width of any simplex induced by a system with bounded minors can be computed by a polynomial-time algorithm.
An additional result of \cite{GRIBC16} states that any simple cone can be represented as a union of $n^{2 \log \Delta}$ unimodular cones, where $\Delta$ is the parameter bounding the minors of the cone's constraint matrix. As mentioned in \cite{AW17}, due to the results of E.~Tardos \cite{TAR86}, linear programs with constraint matrices whose minors are bounded by a constant $\Delta$ can be solved in strongly polynomial time. N.~Bonifas et al. \cite{BONY14} showed that polyhedra defined by a totally $\Delta$-modular constraint matrix have small diameter, i.e., the diameter is bounded by a polynomial in $\Delta$ and the number of variables. Very recently, F.~Eisenbrand and S.~Vempala \cite{EIS16} presented a randomized simplex-type linear programming algorithm whose running time is strongly polynomial even if all minors of the constraint matrix are bounded by any constant.
The second goal of our paper (Section 4) is to improve the results of the paper \cite{AE16}. Namely, we present an FPT-algorithm for the ILPP$_{\Delta}$ with the additional property that the problem's constraint matrix has no singular rank sub-matrices. Additionally, we improve some inequalities established in \cite{AE16}.
The authors consider this paper as a step towards the general aim of finding the critical values of parameters at which a given problem changes its complexity. For example, the integer linear programming problem is polynomial-time solvable on polyhedra with all-integer vertices, due to \cite{KHA80}. On the other hand, it is NP-complete in the class of polyhedra whose extreme points have denominators equal to $1$ or $2$, due to \cite{PAD89}. The famous $k$-satisfiability problem is polynomial for $k \leq 2$, but NP-complete for all $k > 2$. A theory of when an NP-complete graph problem becomes easier is investigated for families of hereditary classes in the papers \cite{A,ABKL,AKL,KLMT,M1,M2,M3,M4,MP,M5,MS}.
\section{FPT-algorithm for the SVP}
Let $H \in \mathbb{Z}^{d \times n}$. The SVP with respect to the $l_p$ norm can be formulated as follows:
\begin{equation}\label{ISVP}
\min\limits_{x \in \Lambda(H) \setminus \{0\} } ||x||_p,
\end{equation} or equivalently
\[
\min\limits_{x \in \mathbb{Z}^n \setminus \{0\} } ||H x||_p.
\]
Without loss of generality, we can assume that the following properties hold:
1) the matrix $H$ is already reduced to the Hermite normal form (HNF) \cite{SCHR98,STORH96,ZHEN05},
2) the matrix $H$ is a full rank matrix and $d \geq n$,
3) using additional permutations of rows and columns, the HNF of the matrix $H$ can be reduced to the following form:
\begin{equation} \label{HNF}
H = \begin{pmatrix}
1 & 0 & \dots & 0 & 0 & 0 & \dots & 0\\
0 & 1 & \dots & 0 & 0 & 0 & \dots & 0\\
\hdotsfor{8} \\
0 & 0 & \dots & 1 & 0 & 0 & \dots & 0\\
a_{1\,1} & a_{1\,2} & \dots & a_{1\,k} & b_{1\,1} & 0 & \dots & 0\\
a_{2\,1} & a_{2\,2} & \dots & a_{2\,k} & b_{2\,1} & b_{2\,2} & \dots & 0\\
\hdotsfor{8} \\
a_{s\,1} & a_{s\,2} & \dots & a_{s\,k} & b_{s\,1} & b_{s\,2} & \dots & b_{s\,s}\\
\bar a_{1\,1} & \bar a_{1\,2} & \dots & \bar a_{1\,k} & \bar b_{1\,1} & \bar b_{1\,2} & \dots & \bar b_{1\,s}\\
\hdotsfor{8} \\
\bar a_{m\,1} & \bar a_{m\,2} & \dots & \bar a_{m\,k} & \bar b_{m\,1} & \bar b_{m\,2} & \dots & \bar b_{m\,s}\\
\end{pmatrix},
\end{equation}
where $k + s = n$ and $k + s + m = d$.
Let $\Delta$ be the maximal absolute value of the $n \times n$ minors of $H$, and let $\delta = |\det(H_{1:n\,*})|$; let also $A \in \mathbb{Z}^{s \times k}$, $B \in \mathbb{Z}^{s \times s}$, $\bar A \in \mathbb{Z}^{m \times k}$, and $\bar B \in \mathbb{Z}^{m \times s}$ be the matrices defined by the elements $\{a_{i\,j}\}$, $\{b_{i\,j}\}$, $\{\bar a_{i\,j}\}$, and $\{\bar b_{i\,j}\}$, respectively. By construction, $B$ is lower triangular.
The following properties are standard for the HNF of any matrix:
1) $0 \leq a_{i\,j} \leq b_{i\,i}$ for any $i \in 1:s$ and $j \in 1:k$,
2) $0 \leq b_{i\,j} \leq b_{i\,i}$ for any $i \in 1:s$ and $j \in 1:i$,
3) $\Delta \geq \delta = \prod_{i=1}^s b_{i\,i}$, and hence $s \leq \log_2 \Delta$, since each $b_{i\,i} \geq 2$.
In the paper \cite{AE16}, it was shown that $||(\bar A \; \bar B)||_{\max} \leq B_q$, where $q = \lceil \log_2 \Delta \rceil$ and the sequence $\{B_i\}$ is defined for $i \in 0:q$ as follows:
\[
B_0 = \Delta,\quad B_i = \Delta + \sum_{j=0}^{i-1} B_j \Delta^{\log_2 \Delta} (\log_2 \Delta)^{(\log_2 \Delta /2)}.
\]
It is easy to see that $B_q = \Delta (\Delta^{\log_2 \Delta} (\log_2 \Delta)^{(\log_2 \Delta /2)}+1)^{\lceil \log_2 \Delta \rceil}$.
We will show that the estimate on $||(\bar A \; \bar B)||_{\max}$ can be significantly improved by a slightly more careful analysis than that of \cite{AE16}.
\begin{lemma}\label{HNFElem}
Let $j \in 1:m$, then the following inequalities are true:
\[
\bar b_{j\,i} \leq \frac{\Delta}{2} (3^{s-i} + 1)
\] for $i \in 1:s$, and
\[
\bar a_{j\,i} \leq \frac{\Delta}{2} (3^s + 1)
\] for $i \in 1:k$.
Hence, $||(\bar A \; \bar B)||_{\max} \leq \frac{\Delta}{2} (\Delta^{\log_2 3} + 1) < \Delta^{1 + \log_2 3}$.
\end{lemma}
\begin{proof}
The main idea and the skeleton of the proof is the same as in the paper \cite{AE16}.
Assume that $H$ has the form as in \eqref{HNF}. Let $c$ be any row of $\bar A$, let also $w$ be the row of $\bar B$ with the same row index as $c$.
Let $H_i$ denote the square sub-matrix of $H$ that consists of the first $n$ rows of $H$, except the $i$-th row, which is replaced by the row $(c\;w)$. Let also $b_i$ denote $b_{i\,i}$. Since $|\det(H_n)| = b_{1}\dots b_{s-1} |w_s|$, it follows that $|w_s| \leq \Delta$.
Following the reasoning of the paper \cite{AE16}, we consider two cases:
{\bf Case 1: $i > k$.}
We can express $|\det(H_i)|$ as follows:
\[
b_1 \dots b_{r-1} \cdot | \det \underbrace{\begin{pmatrix}
w_{r} & \hdotsfor{4} & w_s \\
* & b_{r+1} & & & & \\
* & * & \ddots & & & \\
\hdotsfor{4} & b_{s-1} & \\
\hdotsfor{5} & b_s \\
\end{pmatrix}}_{:= \bar H} |,
\]
where $r=i - k$.
Let ${\bar H}^j$ be the sub-matrix of $\bar H$ obtained by deletion of the first row and the column indexed by $j$. Then,
\[
\Delta \geq |\det \bar H| = \left| w_r \det {\bar H}^1 + \sum_{j=2}^{s-r+1} (-1)^{j+1} w_{r+j-1} \det {\bar H}^j \right|
\geq |w_r \det {\bar H}^1| - \left|\sum_{j=2}^{s-r+1} (-1)^{j+1} w_{r+j-1} \det {\bar H}^j\right|,
\]
and thus
\[
|w_r| \leq \frac{1}{|\det {\bar H}^1|} \left(\Delta + \sum_{j=2}^{s-r+1} |w_{r+j-1}| \, |\det {\bar H}^j| \right).
\]
Let $ \bar \delta = |\det {\bar H}^1| = b_{r+1}\dots b_s$.
We note that for any $2 \leq j \leq s-r+1$ the matrix ${\bar H}^j$ is a lower-triangular matrix with an additional over-diagonal. The over-diagonal is a vector whose first at most $j-2$ elements are nonzero, while the remaining elements are zero.
The following example illustrates the structure of the matrix ${\bar H}^5$:
\[
\begin{pmatrix}
\boldsymbol{*} &* & & & & & &\\
*& \boldsymbol{*} & * & & & & &\\
* & * & \boldsymbol{*} & * & & & &\\
* & * & * & \boldsymbol{*} & & & &\\
* & * & * & * & \boldsymbol{*} & & &\\
\hdotsfor{5} \\
* & \hdotsfor{6} &\boldsymbol{*} & \\
* & \hdotsfor{7} & \boldsymbol{*}\\
\end{pmatrix},
\]
where we can see three additional nonzero over-diagonal elements (the diagonal is bold).
It is easy to see that $|\det {\bar H}^j| \leq 2^{j-2} \bar \delta$ for $2 \leq j \leq s-r+1$. Hence, the recurrence for $w_r$ takes the following form:
\[
|w_r| \leq \Delta + \sum_{j=2}^{s-r+1} 2^{j-2} |w_{r+j-1}| = \Delta + \sum_{j=0}^{s-r-1} 2^{j} |w_{r+j+1}|.
\]
{\bf Case 2: $i \leq k$.}
Similar to the previous case, we can express $|\det H_i|$ as
\[
|\det \underbrace{\begin{pmatrix}
c_i & w_1 & \dots & \dots & w_s \\
* & b_1 \\
* & * & \ddots \\
\hdotsfor{3} & b_{s-1} \\
\hdotsfor{4} & b_s \\
\end{pmatrix}}_{:= \bar H}|.
\]
Again, let ${\bar H}^j$ be the sub-matrix of the matrix $\bar H$ obtained by deletion of the first row and the column indexed by $j$; then
\[
\Delta \geq |\det \bar H| = \left| c_i \delta + \sum_{j=2}^{s+1} (-1)^{j+1} w_{j-1} \det {\bar H}^j \right|
\geq |c_i \delta| - \left|\sum_{j=2}^{s+1} (-1)^{j+1} w_{j-1} \det {\bar H}^j\right|,
\]
and thus
\[
|c_i| \leq \frac{1}{\delta} \left(\Delta + \sum_{j=2}^{s+1} |w_{j-1}| \, |\det {\bar H}^j| \right).
\]
As in {\bf Case 1}, we have the inequality $|\det {\bar H}^j| \leq 2^{j-2} \delta$ for $2 \leq j \leq s+1$. Hence, we obtain the following inequality:
\[
|c_i| \leq \Delta + \sum_{j=2}^{s+1} 2^{j-2} |w_{j-1}| = \Delta + \sum_{j=0}^{s-1} 2^j |w_{j+1}|.
\]
Let $\{B_i\}_{i=0}^{s}$ be the sequence defined as follows:
\[
B_0 = \Delta,\quad B_i = \Delta + \sum_{j=0}^{i-1} 2^{i-j-1} B_j.
\]
Using the final inequality from Case 1, we obtain $|w_i| \leq B_{s-i}$ for any $i \in 1:s$; using the final inequality from Case 2, we obtain $|c_i| \leq B_s$ for any $i \in 1:k$.
For the sequence $\{B_i\}$ the following equalities are true:
\[
B_i = \Delta + B_{i-1} + \sum_{j=0}^{i-2} 2^{i-j-1} B_j = \Delta + B_{i-1} + 2(B_{i-1} - \Delta) = 3 B_{i-1} - \Delta.
\]
Finally,
\[
B_i = 3^i B_0 - \Delta \sum_{j=0}^{i-1} 3^j = \Delta( 3^i - \frac{3^i - 1}{2}) = \frac{\Delta}{2}(3^i + 1),
\]
and the lemma follows.
\end{proof}
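The recurrence for $\{B_i\}$ and the closed form derived in the proof above can be cross-checked with exact integer arithmetic (a sketch; note that $3^i + 1$ is always even, so the division by $2$ is exact):

```python
def B_recursive(i, delta):
    """B_0 = delta; B_i = delta + sum_{j=0}^{i-1} 2^(i-j-1) * B_j."""
    if i == 0:
        return delta
    return delta + sum(2 ** (i - j - 1) * B_recursive(j, delta) for j in range(i))

def B_closed(i, delta):
    """Closed form from the lemma: B_i = delta * (3^i + 1) / 2."""
    return delta * (3 ** i + 1) // 2
```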
\begin{theorem}\label{SimpleSVP}
If $n > \Delta^{1+m(1+\log_2 3)} + \log_{2} \Delta$, then there exists a polynomial-time algorithm to solve the problem \eqref{ISVP} with the bit-complexity $O(n \log n \log \Delta (m + \log \Delta))$.
\end{theorem}
\begin{proof}
If $n > \Delta^{1+m(1+\log_2 3)} + \log_2 \Delta$, then $k > \Delta^{1+m(1+\log_2 3)}$.
Consider the matrix $\bar H = \dbinom{A}{\bar A}$. By Lemma \ref{HNFElem}, there are strictly fewer than $\Delta^{1+m(1+\log_2 3)}$ possibilities to generate a column of $\bar H$, so if $k > \Delta^{1+m(1+\log_2 3)}$, then $\bar H$ has two equal columns. Hence, the lattice $\Lambda(H)$ contains a vector $v$ such that $||v||_p = \sqrt[p]{2}$ ($||v||_\infty = 1$). We can find equal columns using any sorting algorithm with $O(n \log n)$ comparisons, where the bit-complexity of comparing two vectors is $O(\log \Delta (m + \log \Delta))$. The lattice $\Lambda(H)$ contains a vector of norm $1$ (for $p \not= \infty$) if and only if the matrix $\bar H$ contains a zero column. Indeed, suppose that $\bar H$ has no zero columns and let $u$ be a vector of norm $1$ in the lattice $\Lambda(H)$. Then $u \in H_{*\,i} + H_{*\,(k+1):n}\, t$ for some integral nonzero vector $t$ and some $i \in 1:k$. Let $j \in 1:(n-k)$ be the first index such that $t_j \not= 0$. Since $H_{j\,j} > H_{j\,i}$, we have $u_j \not= 0$ and $||u||_p \geq \sqrt[p]{2}$, which is a contradiction.
\end{proof}
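The search for equal columns used in the proof above can be sketched as follows; we use hashing instead of sorting, which gives expected linear time in the number of columns (the sorting variant matches the $O(n \log n)$ comparison bound stated in the theorem):

```python
import numpy as np

def find_equal_columns(Hbar):
    """Return indices (i, j) of two equal columns of Hbar, or None.

    If columns i and j of Hbar = (A; Abar) coincide, then H(e_i - e_j)
    has exactly two nonzero entries (+1 and -1 in the identity block),
    i.e. the lattice Lambda(H) contains a vector of l_p norm 2^(1/p)."""
    seen = {}
    for j, col in enumerate(Hbar.T):
        key = col.tobytes()  # hashable fingerprint of the column
        if key in seen:
            return seen[key], j
        seen[key] = j
    return None
```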
In the case when $m = 0$ and $H$ is a square nonsingular matrix, we have the following trivial corollary:
\begin{corollary}
If $n \geq \Delta + \log_{2} \Delta$, then there exists a polynomial-time algorithm to solve the problem \eqref{ISVP} with the bit-complexity $O(n \log n \log^2 \Delta)$.
\end{corollary}
Let $x^*$ be an optimal vector of the problem \eqref{ISVP}. The classical Minkowski theorem in the geometry of numbers states that:
\[
||x^*||_p \leq 2 \left(\frac{\det \Lambda(H)}{\vol(B_p)}\right)^{1/n},
\]
where $B_p$ is the unit sphere for the $l_p$ norm.
Using the inequalities $\det \Lambda(H) = \sqrt{\det H^\top H} \leq \Delta \sqrt{\dbinom{d}{n}} \leq \Delta \left(\cfrac{e d}{n}\right)^{n/2}$, we can conclude that
\[
||x^*||_p \leq 2 \sqrt{\frac{e d}{n}} \sqrt[n]{\frac{\Delta}{\vol(B_p)}}.
\]
On the other hand, by Lemma \ref{HNFElem}, the last column of $H$ has norm at most $\Delta \sqrt[p]{m+1}$.
Let
\begin{equation}\label{MConst}
M = \min \Bigl\{\Delta \sqrt[p]{m+1},\, 2 \sqrt{\frac{e d}{n}} \sqrt[n]{\frac{\Delta}{\vol(B_p)}}\Bigr\}
\end{equation} be the minimum of these two estimates of the norm of a shortest vector.
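The two estimates entering \eqref{MConst} can be evaluated numerically; this sketch uses the standard Gamma-function formula for the volume of the unit $l_p$ ball in $\mathbb{R}^n$ (the function names are ours):

```python
import math

def unit_lp_ball_volume(n, p):
    """Volume of the unit l_p ball in R^n: 2^n * Gamma(1 + 1/p)^n / Gamma(1 + n/p)."""
    return 2.0 ** n * math.gamma(1 + 1 / p) ** n / math.gamma(1 + n / p)

def M_const(n, d, m, delta, p):
    """Minimum of the two shortest-vector estimates in the text:
    the last-column bound delta * (m+1)^(1/p), and the Minkowski-type
    bound 2 * sqrt(e*d/n) * (delta / vol(B_p))^(1/n)."""
    column_bound = delta * (m + 1) ** (1 / p)
    minkowski_bound = 2 * math.sqrt(math.e * d / n) \
        * (delta / unit_lp_ball_volume(n, p)) ** (1 / n)
    return min(column_bound, minkowski_bound)
```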
\begin{lemma}\label{SVBounds}
Let $x^* = \binom{\alpha}{\beta}$ be an optimal solution of \eqref{ISVP}, then:
1) $||\alpha||_1 \leq M^p$,
2) $|\beta_i| \leq 2^{i-1}(M^p + M/2)$, for any $i \in 1:s$,
3) $||\beta||_1 \leq 2^s (M^p + M/2) \leq \Delta (M^p + M/2) < 2 \Delta M^p$
and $||x^*||_1 \leq (1 + \Delta) M^p + \frac{\Delta M}{2} < 2 (1 + \Delta)M^p$.
\end{lemma}
\begin{proof}
The statement 1) is trivial.
For $\beta_1$ we have:
\[
|b_{1\,1} \beta_{1} + \sum_{i=1}^{k} a_{1\,i} \alpha_{i}| \leq M,
\]
\[
\sum_{i=1}^{k} a_{1\,i} \alpha_{i} - M \leq b_{1\,1} \beta_{1} \leq \sum_{i=1}^{k} a_{1\,i} \alpha_{i} + M,
\]
\[
- M^p - M/2 \leq \frac{1}{b_{1\,1}} (\sum_{i=1}^{k} a_{1\,i} \alpha_{i} - M) \leq \beta_{1} \leq \frac{1}{b_{1\,1}} (\sum_{i=1}^{k} a_{1\,i} \alpha_{i} + M) \leq M^p + M/2.
\]
For $\beta_j$ we have:
\[
|\sum_{i=1}^{j} b_{j\,i} \beta_{i} + \sum_{i=1}^{k} a_{j\,i} \alpha_{i}| \leq M,
\]
\[
|\beta_{j}| \leq \frac{1}{b_{j\,j}} \Bigl(\sum_{i=1}^{j-1} |b_{j\,i}| |\beta_{i}| + \bigl|\sum_{i=1}^{k} a_{j\,i} \alpha_{i}\bigr| + M\Bigr) \leq \sum_{i=1}^{j-1} |\beta_{i}| + M^p + M/2,
\]
\[
|\beta_{j}| \leq 2^{j-1} (M^p + M/2).
\]
The statement 3) follows from statement 2).
\end{proof}
Let $Prob(l,v,u,C)$ denote the following problem:
\begin{align}
&\sum_{i=1}^l |\alpha_i|^p + \sum_{j=1}^s \left| \sum_{i=1}^l a_{j\,i} \alpha_i + v_j \right|^p + \sum_{j=1}^m \left|\sum_{i=1}^l \bar a_{j\,i} \alpha_i + u_j \right|^p \to \min \label{Prob1}\\
&\begin{cases}
\alpha \in \mathbb{Z}^l \setminus \{0\} \\
||\alpha||_1 \leq C.\\
\end{cases} \notag
\end{align}
where $1 \leq l \leq k$, $1 \leq C \leq M^p$, $v \in \mathbb{Z}^s$, $u \in \mathbb{Z}^m$ and $||v||_\infty \leq 2 \Delta (1 + \Delta) M^p$, $||u||_\infty \leq 2 \Delta^{1 + \log_2 3} (1+\Delta) M^p$.
Let $\sigma(l,v,u,C)$ denote the optimal value of the objective function of $Prob(l,v,u,C)$. Then we trivially have
\begin{equation}\label{Sigma1}
\sigma(1,v,u,C) = \min \{ |z|^p + \sum_{i=1}^s \left| a_{i\,1} z + v_i \right|^p + \sum_{i=1}^m \left| \bar a_{i\,1} z + u_i \right|^p : z \in \mathbb{Z} \setminus \{0\},\, |z| \leq C \}.
\end{equation}
The following formula relates $\sigma(l,v,u,C)$ to $\sigma(l-1,v,u,C)$; its correctness can be checked directly:
\begin{equation}\label{Sigma2}
\sigma(l,v,u,C) = \min \{ f(\bar v,\bar u,z) : z \in \mathbb{Z},\, |z| \leq C,\, \bar v_i = v_i + a_{i\,l} z,\, \bar u_i = u_i + \bar a_{i\,l}z \},
\end{equation}
where
\[
f(v,u,z) =
\begin{cases}
\sigma(l-1,v,u,C),\,\text{ for }z = 0\\
|z|^p + \min\{ \sigma(l-1,v,u,C-|z|), ||v||_p^p+||u||_p^p \},\,\text{ for }z \not= 0\\
\end{cases}.
\]
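The recursions \eqref{Sigma1} and \eqref{Sigma2} translate directly into a memoized dynamic program. The following sketch (an illustration with hypothetical names, not an efficient implementation) cross-checks the recursion against exhaustive enumeration on a toy instance; $A$ and $\bar A$ are the $s \times k$ and $m \times k$ blocks of the objective.

```python
from itertools import product

def sigma(A, Abar, p, l, v, u, C, memo=None):
    # sigma(l, v, u, C): minimum of sum_i |a_i|^p + sum_j |(A a)_j + v_j|^p
    # + sum_j |(Abar a)_j + u_j|^p over nonzero integer a in Z^l, ||a||_1 <= C,
    # following the case analysis of the recursions (Sigma1)/(Sigma2).
    if memo is None:
        memo = {}
    if C < 1:
        return float('inf')  # no nonzero alpha fits the budget
    key = (l, v, u, C)
    if key in memo:
        return memo[key]
    best = float('inf')
    for z in range(-C, C + 1):  # z plays the role of alpha_l
        vb = tuple(v[j] + A[j][l - 1] * z for j in range(len(v)))
        ub = tuple(u[j] + Abar[j][l - 1] * z for j in range(len(u)))
        if z == 0:
            # remaining coordinates must be nonzero
            cand = sigma(A, Abar, p, l - 1, vb, ub, C, memo) if l > 1 else float('inf')
        else:
            # remaining coordinates are either all zero (tail) or recurse
            tail = sum(abs(x) ** p for x in vb) + sum(abs(x) ** p for x in ub)
            rec = (sigma(A, Abar, p, l - 1, vb, ub, C - abs(z), memo)
                   if l > 1 else float('inf'))
            cand = abs(z) ** p + min(rec, tail)
        best = min(best, cand)
    memo[key] = best
    return best

def sigma_bruteforce(A, Abar, p, l, v, u, C):
    # Exhaustive enumeration of all nonzero alpha with ||alpha||_1 <= C.
    best = float('inf')
    for a in product(range(-C, C + 1), repeat=l):
        if all(x == 0 for x in a) or sum(abs(x) for x in a) > C:
            continue
        val = sum(abs(x) ** p for x in a)
        val += sum(abs(sum(A[j][i] * a[i] for i in range(l)) + v[j]) ** p
                   for j in range(len(v)))
        val += sum(abs(sum(Abar[j][i] * a[i] for i in range(l)) + u[j]) ** p
                   for j in range(len(u)))
        best = min(best, val)
    return best
```

The recursion partitions the feasible set by the value $z = \alpha_l$, which is exactly the case analysis behind \eqref{Sigma2}.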
Let $\overline{Prob}(l,v,u,C)$ denote the following problem:
\begin{align}
&\sum_{i=1}^k |\alpha_i|^p + \sum_{j=1}^s \left|\sum_{i=1}^k a_{j\,i} \alpha_i + \sum_{i=1}^{\min\{j,l\}} b_{j\,i} \beta_i + v_j\right|^p + \notag\\ &+\sum_{j=1}^m \left|\sum_{i=1}^k \bar a_{j\,i} \alpha_i + \sum_{i=1}^{\min\{j,l\}} \bar b_{j\,i} \beta_i + u_j \right|^p \to \min \label{Prob2}\\
&\begin{cases}
\alpha \in \mathbb{Z}^k,\,\beta \in \mathbb{Z}^l \\
1 \leq ||\alpha||_1 + ||\beta||_1 \leq C.\\
\end{cases} \notag
\end{align}
where $1 \leq l \leq s$, $1 \leq C \leq 2 (\Delta+1) M^p$ and the values of $v,u$ are the same as in \eqref{Prob1}.
Let $\bar \sigma(l,v,u,C)$ denote the optimal value of the objective function of $\overline{Prob}(l,v,u,C)$.
Again, it is easy to see that
\begin{equation}\label{SigmaBar1}
\bar\sigma(1,v,u,C) = \min\{f(\bar v,\bar u,z) : z \in \mathbb{Z},\, |z| \leq C,\, \bar v_i = v_i + b_{i\,1} z,\, \bar u_i = u_i + \bar b_{i\,1} z \},
\end{equation}
where
\[
f(v,u,z) =
\begin{cases}
\sigma(k,v,u,\min\{C,M^p\}),\,\text{ for }z = 0\\
\min\{\sigma(k,v,u,\min\{C-|z|,M^p\}), ||v||_p^p + ||u||_p^p\},\,\text{ for }z \not= 0\\
\end{cases}.
\]
The following formula relates $\bar\sigma(l,v,u,C)$ to $\bar\sigma(l-1,v,u,C)$:
\begin{equation}\label{SigmaBar2}
\bar\sigma(l,v,u,C) = \min\{f(\bar v, \bar u, z) : z \in \mathbb{Z},\, |z| \leq C,\, \bar v_i = v_i + b_{i\,l} z,\, \bar u_i = u_i + \bar b_{i\,l} z \},
\end{equation}
where
\[
f(v,u,z) =
\begin{cases}
\bar\sigma(l-1,v,u,C),\,\text{ for }z = 0\\
\min\{\bar\sigma(l-1,v,u,C-|z|), ||v||_p^p + ||u||_p^p\},\,\text{ for }z \not= 0.\\
\end{cases}
\]
\begin{theorem}\label{SimpleSVP}
There is an algorithm that solves the problem \eqref{ISVP} and is polynomial in $n$, the size of $H$, and $\Delta$. Its bit-complexity is \\$O(n\, d\, M^{p(2+m+\log_2 \Delta)} \Delta^{3+4m+2\log_2 \Delta} \mult(\log \Delta))$, where $\mult(k)$ is the complexity of multiplying two $k$-bit integers. Since $M \leq \Delta \sqrt[p]{m+1}$ (see \eqref{MConst}), the problem \eqref{ISVP}, parameterized by $\Delta$, belongs to the complexity class FPT for fixed $m$ and $p$.
\end{theorem}
\begin{proof}
By Lemma \ref{SVBounds}, the optimal value of the objective function equals $\bar \sigma(s,0,0,2(1+\Delta)M^p)$. Using the recursive formula \eqref{SigmaBar2}, we can reduce instances of the type $\bar \sigma(s,\cdot,\cdot,\cdot)$ to instances of the type $\bar \sigma(1,\cdot,\cdot,\cdot)$. Using the formula \eqref{SigmaBar1}, we can reduce instances of the type $\bar \sigma(1,\cdot,\cdot,\cdot)$ to instances of the type $\sigma(k,\cdot,\cdot,\cdot)$. Using the formula \eqref{Sigma2}, we can reduce instances of the type $\sigma(k,\cdot,\cdot,\cdot)$ to instances of the type $\sigma(1,\cdot,\cdot,\cdot)$. Finally, an instance of the type $\sigma(1,\cdot,\cdot,\cdot)$ can be computed using the formula \eqref{Sigma1}. The bit-complexity of computing an instance $\sigma(1,v,u,C)$ is $O(C\, d\, \mult(\log \Delta) )$. The vector $v$ can be chosen in $(2\Delta(1+\Delta)M^p)^{s}$ ways and the vector $u$ in $(2 \Delta^{1+\log_2 3} (1 + \Delta) M^p)^m$ ways, hence the complexity of computing all instances of the type $\sigma(1,\cdot,\cdot,\cdot)$ is roughly
\[
O(d\, M^{p(2+m+\log_2 \Delta)} \Delta^{1+4m+2\log_2 \Delta} \mult(\log \Delta)).
\]
The reduction of $\sigma(l,\cdot,\cdot,\cdot)$ to $\sigma(l-1,\cdot,\cdot,\cdot)$ (and likewise for $\bar \sigma$) consists of $O(C)$ minimum computations and $O(d C)$ multiplications of integers of size $O(\log \Delta)$. So, the bit-complexity of computing instances of the type $\sigma(l,\cdot,\cdot,\cdot)$ for $1 \leq l \leq k$ can be roughly estimated as
\[
O(k\, d\, M^{p(2+m+\log_2 \Delta)} \Delta^{1+4m+2\log_2 \Delta} \mult(\log \Delta)),
\]
and the bit-complexity of computing instances of the type $\bar \sigma(l,\cdot,\cdot,\cdot)$ for $1 \leq l \leq s \leq \log_2 \Delta$ can be roughly estimated as
\[
O(\log_2 \Delta\, d\, M^{p(2+m+\log_2 \Delta)} \Delta^{3+4m+2\log_2 \Delta} \mult(\log \Delta)).
\]
Finally, the algorithm complexity can be roughly estimated as
\[
O(n\, d\, M^{p(2+m+\log_2 \Delta)} \Delta^{3+4m+2\log_2 \Delta} \mult(\log \Delta)).
\]
\end{proof}
\section{The SVP for a special class of lattices}
In this section, we consider the SVP \eqref{ISVP} for a special class of lattices, namely those induced by integral matrices without singular $n \times n$ sub-matrices. We inherit all the notation and special symbols of the previous section.
Suppose additionally that $H$ has no singular $n \times n$ sub-matrices. One of the results of the paper \cite{AE16} states that if $n \geq f(\Delta)$, then the matrix $H$ has at most $n+1$ rows, where $f(\Delta)$ is a function that depends only on $\Delta$. The paper \cite{AE16} gives a super-polynomial estimate on the value of $f(\Delta)$. Here, we show that a polynomial estimate exists.
\begin{lemma}\label{NRowsHNF}
If $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$, then $H$ has at most $n+1$ rows.
\end{lemma}
\begin{proof} Our proof follows the same structure and ideas as in the paper \cite{AE16}, with a small modification that uses Lemma \ref{HNFElem}.
Let the matrix $H$ be defined as in \eqref{HNF}. Recall that $H$ has no singular $n \times n$ sub-matrices. Aiming at a contradiction, assume that $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$ and that $H$ has precisely $n+2$ rows. As in the paper \cite{AE16}, let $\bar H$ be the matrix $H$ without the rows indexed by $i$ and $j$, where $i,j \leq k$ and $i \not= j$. Observe that
\[
|\det \bar H| = |\det \underbrace{\begin{pmatrix}
a_{1\,i} & a_{1\,j}& b_{1\,1} & & \\
\vdots &\vdots & & \ddots & \\
a_{s\,i}& a_{s\,j}& \hdotsfor{2} & b_{s\,s} \\
\bar a_{1\,i}& \bar a_{1\,j} & \hdotsfor{2} & \bar b_{1\,s} \\
\bar a_{2\,i}& \bar a_{2\,j} & \hdotsfor{2} & \bar b_{2\,s} \\
\end{pmatrix}}_{:={\bar H}^{ij}}|.
\]
The matrix ${\bar H}^{ij}$ is a nonsingular $(s+2)\times(s+2)$-matrix. This implies that the first two columns of ${\bar H}^{ij}$ must be different for all $i$ and $j$. By Lemma \ref{HNFElem} and the structure of the HNF, there are at most $\Delta \cdot \Delta^{2(1+\log_2 3)}$ possibilities for the first column of ${\bar H}^{ij}$. Since $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$, we have $k > \Delta^{3 + 2\log_2 3}$, so there must exist two indices $i \not= j$ for which the first two columns of ${\bar H}^{ij}$ coincide, whence $\det {\bar H}^{ij} = 0$. This is a contradiction.
\end{proof}
Using Lemma \ref{NRowsHNF} and Theorem \ref{SimpleSVP} of the previous section, we can develop an FPT-algorithm that solves the problem under consideration.
\begin{theorem}
Let $H$ be the matrix defined as illustrated in \eqref{HNF}. Let also $H$ have no singular $n \times n$ sub-matrices and $\Delta$ be the maximal absolute value of $n \times n$ minors of $H$. If $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$, then there is an algorithm with the complexity $O(n \log n \log^2 \Delta)$ that solves the problem \eqref{ISVP}.
\end{theorem}
\begin{proof}
If $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$, then, by Lemma \ref{NRowsHNF}, we have $m=1$ or $m=0$. In both cases, $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta > \Delta^{1+m(1+\log_2 3)} + \log_2 \Delta$. The last inequality meets the conditions of Theorem \ref{SimpleSVP}, and the theorem follows.
\end{proof}
\section{Integer linear programming problem (ILPP)}
Let $H \in \mathbb{Z}^{d \times n}$, $c \in \mathbb{Z}^n$, $b \in \mathbb{Z}^d$, $\operatorname{rank}(H) = n$, and let $\Delta$ be the maximal absolute value of $n \times n$ minors of $H$. Suppose also that all $n \times n$ sub-matrices of $H$ are nonsingular.
Consider the ILPP:
\begin{equation}\label{IPP}
\max\{c^\top x : H x \leq b,\, x \in \mathbb{Z}^n \}.
\end{equation}
\begin{theorem}
If $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$, then the problem \eqref{IPP} can be solved by an algorithm with complexity
\[
O( \log \Delta \cdot n^4 \Delta^5 (n+\Delta) \cdot \mult(\log \Delta + \log n + \log ||w||_\infty) ).
\]
\end{theorem}
\begin{proof}
By Lemma \ref{NRowsHNF}, for $n > \Delta^{3 + 2\log_2 3} + \log_2 \Delta$ the matrix $H$ has at most $n+1$ rows.
Let $v$ be an optimal solution of the linear relaxation of the problem \eqref{IPP}. Suppose also that $\Delta = |\det(H_{1:n\,*})| > 0$ and $H_{1:n\,*} v = b_{1:n}$. First of all, the matrix $H$ needs to be transformed into the HNF; suppose that it has the same form as in \eqref{HNF}.
Let us split the vectors $x$ and $c$ such that $x = \dbinom{\alpha}{\beta}$ and $c = \dbinom{c_\alpha}{c_\beta}$. We note that the sub-matrix $(\bar A \, \bar B)$ from $H$ is actually a row. The problem \eqref{IPP} takes the form:
\begin{align*}
& c_\alpha^\top \alpha + c_\beta^\top \beta \to \max\\
&\begin{cases}
\alpha \leq b_{1:k}\\
A \alpha + B \beta \leq b_{k+1:k+s}\\
\bar A \alpha + \bar B \beta \leq b_{d}\\
\alpha \in \mathbb{Z}^k,\, \beta \in \mathbb{Z}^s.\\
\end{cases}\\
\end{align*}
As in \cite{AE16}, the next step consists of the integral transformation $\alpha \to b_{1:k} - \alpha$ and the introduction of slack variables $y \in \mathbb{Z}_+^{s}$ for the rows with numbers in the range $k+1:k+s$. The problem becomes:
\begin{align*}
&- c_\alpha^\top b_{1:k} + c_\alpha^\top \alpha - c_\beta^\top \beta \to \min\\
&\begin{cases}
B \beta - A \alpha + y = \hat b\\
\bar B \beta - \bar A \alpha \leq \hat b_d\\
\alpha \in \mathbb{Z}_+^k,\, y \in \mathbb{Z}_+^s,\, \beta \in \mathbb{Z}^s,\\
\end{cases}
\end{align*}
where $\hat b = b_{k+1:k+s} - A\, b_{1:k}$ and $\hat b_d = b_d - \bar A b_{1:k}$.
We note that
\begin{equation} \label{TardoshT}
||\binom{\alpha}{y}||_\infty \leq n \Delta
\end{equation}
due to the classical theorem of Tardos (see \cite{SCHR98,TAR86}). Tardos's theorem states that if $z$ is an optimal integral solution of \eqref{IPP}, then $||z - v||_\infty \leq n \Delta$. Since $H_{1:n\,*} v = b_{1:n}$ for the optimal solution $v$ of the relaxed linear problem, the slack variables $\alpha$ and $y$ must be zero vectors when the $x$ variables are equal to $v$.
Now, using the formula $\beta = B^{-1} (\hat b + A \alpha - y)$, we can eliminate the $\beta$ variables from the last constraint and from the objective function:
\begin{align*}
& -c_\alpha^\top b_{1:k} -c_\beta^\top B^{-1} \hat b + (c_\alpha^\top - c_\beta^\top B^{-1} A) \alpha + c_\beta^\top B^{-1} y \to \min\\
&\begin{cases}
B \beta - A \alpha + y = \hat b\\
(\bar B B^{*} A - \Delta \bar A) \alpha - \bar B B^{*} y \leq \Delta \hat b_d - \bar B B^{*} \hat b\\
\alpha \in \mathbb{Z}_+^k,\, y \in \mathbb{Z}_+^s,\, \beta \in \mathbb{Z}^s,
\end{cases}
\end{align*}
where the last line was additionally multiplied by $\Delta$ to become integral, and where $B^* = \Delta B^{-1}$ is the adjoint matrix for $B$.
Finally, we transform the matrix $B$ into the Smith normal form (SNF) \cite{SCHR98,STORS96,ZHEN05} such that $B = P^{-1} S Q^{-1}$, where $P^{-1}$, $Q^{-1}$ are unimodular matrices and $S$ is the SNF of $B$. After making the transformation $\beta \to Q \beta$, the initial problem becomes equivalent to the following problem:
\begin{align}
& w^\top x \to \min \label{GroupMin}\\
&\begin{cases}
G x \equiv g \,(\text{mod}\, S)\\
h x \leq h_0 \\
x \in \mathbb{Z}_+^n,\, ||x||_{\infty} \leq n \Delta,
\end{cases}\notag
\end{align}
where $w^\top = (\Delta c_\alpha^\top - c_\beta^\top B^{*} A, \; c_\beta^\top B^{*})$, $G = (P \; -PA) \mod S$, $g = P \hat b \mod S$, $h = (\bar B B^{*} A - \Delta \bar A, \; - \bar B B^{*})$, and $h_0 = \Delta \hat b_d - \bar B B^{*} \hat b$. The inequalities $||x||_{\infty} \leq n \Delta$ are an additional tool to localize an optimal integral solution, following the argumentation based on Tardos's theorem (see \eqref{TardoshT}).
Trivially, $||(G\;g)||_{\max} \leq \Delta$. Since $||\bar A||_{\max} \leq \Delta^{1 + \log_2 3}$, $||A||_{\max} \leq \Delta$, and $||\bar B B^*||_{\max} \leq \Delta$, we have that $||h||_{\max} \leq \Delta^2( n + \Delta^{\log_2 3} )$.
Actually, the problem \eqref{GroupMin} is the classical Gomory's group minimization problem \cite{GOM65} (see also \cite{HU70}) with an additional linear constraint (the constraint $||x||_{\infty} \leq n \Delta$ only helps to localize the minimum). As in \cite{GOM65}, it can be solved using the dynamic programming approach.
To do that, let us define subproblems $Prob(l,\gamma,\eta)$:
\begin{align*}
& w_{1:l}^\top x \to \min\\
&\begin{cases}
G_{*\,1:l} x \equiv \gamma \,(\text{mod}\, S)\\
h_{1:l} x \leq \eta \\
x \in \mathbb{Z}_+^l,
\end{cases}
\end{align*}
where $l \in 1:n$, $\gamma \in \inth(G)\mod S$, $\eta \in \mathbb{Z}$, and $|\eta| \leq n^2 \Delta^3 (n+\Delta)$.
Let $\sigma(l,\gamma,\eta)$ be the optimal value of the objective function of $Prob(l,\gamma,\eta)$. When the problem $Prob(l,\gamma,\eta)$ is infeasible, we put $\sigma(l,\gamma,\eta) = +\infty$. Trivially, the optimum of \eqref{IPP} is $\sigma(n,g,\min\{h_0,\,n^2 \Delta^3 (n+\Delta)\})$.
The following formula gives the relation between $\sigma(l,*,*)$ and $\sigma(l-1,*,*)$:
\[
\sigma(l,\gamma,\eta) = \min \{\sigma(l-1,\gamma - z G_{*\,l},\eta-z h_{l}) +z w_l : z \in \mathbb{Z},\, 0 \leq z \leq n \Delta \}.
\]
The $\sigma(1,\gamma,\eta)$ can be computed using the following formula:
\[
\sigma(1,\gamma,\eta) = \min \{z w_1 : z G_{*\,1} \equiv \gamma \,(\text{mod } S),\, z h_1 \leq \eta,\, z \in \mathbb{Z},\, 0 \leq z \leq n \Delta \}.
\]
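These two formulas amount to a memoized dynamic program over the group elements modulo $S$. The sketch below (illustrative only, with hypothetical names; it folds the base case into $l=0$ and restricts $z$ to nonnegative values, as required by $x \in \mathbb{Z}_+^n$) solves a toy instance of \eqref{GroupMin}.

```python
def group_min(G, S, g, h, h0, w, bound):
    # Memoized DP for (GroupMin): minimize w.x subject to
    # G x = g (mod S), h.x <= h0, x in Z_+^n with x_i <= bound.
    # G is an s x n integer matrix; S holds the diagonal entries of the SNF.
    s, n = len(S), len(G[0])
    INF = float('inf')
    memo = {}

    def solve(l, gamma, eta):
        # sigma(l, gamma, eta): best value using variables x_1..x_l only.
        if l == 0:
            return 0 if all(gi == 0 for gi in gamma) and eta >= 0 else INF
        key = (l, gamma, eta)
        if key in memo:
            return memo[key]
        best = INF
        for z in range(bound + 1):  # z = value of x_l
            new_gamma = tuple((gamma[j] - z * G[j][l - 1]) % S[j] for j in range(s))
            cand = solve(l - 1, new_gamma, eta - z * h[l - 1]) + z * w[l - 1]
            best = min(best, cand)
        memo[key] = best
        return best

    return solve(n, tuple(gi % Si for gi, Si in zip(g, S)), h0)
```

For instance, with $G = (1\;2)$, $S = (3)$, $g = 1$, $h = (1,1)$, $h_0 = 4$, $w = (2,3)$ and bound $3$, the optimum is attained at $x = (1,0)$ with value $2$.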
Both the complexity of computing $\sigma(1,\gamma,\eta)$ and the complexity of reducing $\sigma(l,\gamma,\eta)$ to $\sigma(l-1,\cdot,\cdot)$, over all $\gamma$ and $\eta$, can be roughly estimated as:
\[
O( \log \Delta \cdot n^3 \Delta^5 (n+\Delta) \cdot \mult(\log \Delta + \log n + \log ||w||_\infty) ).
\]
The final complexity result is obtained by multiplying the last formula by $n$.
\end{proof}
\section*{Conclusion}
In this paper, we presented FPT-algorithms for SVP instances parameterized by the lattice determinant, on lattices induced by near-square matrices and on lattices induced by matrices with no singular sub-matrices. In the first case, the developed algorithm is applicable to the $l_p$ norm for any finite $p \geq 1$. In the second case, the algorithm is also applicable to the $l_\infty$ norm. Additionally, we presented an FPT-algorithm for ILPP instances whose constraint matrices have no singular sub-matrices.
In the full version of the paper, we are going to extend the results related to the SVP to more general classes of norms. Next, we are going to extend the results related to the ILPP to near-square constraint matrices. Finally, we will present an FPT-algorithm for the simplex width computation problem.
\section{Introduction}\label{sec:introduction}
Metasurfaces are engineered materials of subwavelength thickness $\left(\delta/\lambda_0\ll 1\right)$ consisting of a two-dimensional array of scattering particles. In the most general case, they are bianisotropic and nonperiodic~\cite{Kuester_AveragTransCond_2003, Karim_ms_suscp_syn_2015}, with effective material parameters that may vary both in space and time~\cite{Nima_simultan_st_spect_2018, Nima_spacetie_proces_ms2016, Caloz_spacetime_ms2016}. They have a myriad of applications, such as angular momentum conversion~\cite{Capasso_angular_mom_dielec_ms2017}, diffraction-free refraction~\cite{Lavigne_Refr_ms_no_spurious_diffr2018}, harmonic generation~\cite{Karim_2ndorder_nonlinear_ms2017}, antenna radomes~\cite{Eleftheriades_Leaky_ant_ms2017, Esmaeli_Artf_magnet_ms_ant2018}, remote processing~\cite{Karim_remote_proces_2016}, light extraction efficiency enhancement~\cite{Luzhou_Spont_emmision_2016} and solar sails~\cite{Karim_Solar_sail_2017}.
Metasurfaces may be modeled as sheets of zero thickness for simplified design and physical insight~\cite{Alu_wave_transform_grad_ms2016, Maci_Modulated_ms_ant2015,Achouri_birefringent_ms2016, Nima_Exact_polychrom_ms_design2016, Karim_ms_suscp_syn_2015, Xiao_spherical_ms_synthesis2017, Safari_cylinderical_ms2017}. However, no commercial software is available for such structures~\cite{Yousef_Comput_analysis_ms2018}. Therefore, specific numerical techniques have been recently developed for them, both in the frequency domain~\cite{Yousef_Comput_analysis_ms2018, GSTC_FDFD_Yousef_2016, GSTC_FEM_Kumar2017, Nima_SD_IE2017}, namely Finite-Difference Frequency-Domain (FDFD)~\cite{GSTC_FDFD_Yousef_2016, GSTC_FDFD_Yousef_APS2016}, Finite Element Method (FEM)~\cite{GSTC_FEM_Kumar2017} and Spectral Domain Integral Equation (SD-IE)~\cite{Nima_SD_IE2017}, and in the time domain, namely Finite-Difference Time-Domain (FDTD)~\cite{Shulabh_FDTD_broadband_Huygens_ms2017, Shulabh_FDTD_space_time_ms2017, Shulabh_Integr_GSTC_FDTD2017, Achouri_nonlinear_ms2017, Xiao_ms_analys_FDTD2018, GSTC-FDTD_2018_Yousef, Hosseini_PLRCFDTD_GSTC_ms2018}.
FDTD is particularly suited to the simulation of broadband, time-varying and dispersive structures~\cite{Susan_FDTD2005}. To date, only the FDTD scheme in~\cite{Hosseini_PLRCFDTD_GSTC_ms2018} has included a dispersive treatment of metasurfaces, based on the Piecewise Linear Recursive Convolution (PLRC) technique. However, the formulation in~\cite{Hosseini_PLRCFDTD_GSTC_ms2018} is tedious and computationally inefficient in terms of memory and speed, because it involves the inversion of a matrix at each time step.
Here, we present an FDTD scheme that is 1)~exact (no approximation in the equation discretization), 2)~efficient in terms of memory and speed, 3)~applicable to bianisotropic metasurfaces, and 4)~straightforwardly extensible to time-varying dispersive metasurfaces. This method is based on the Auxiliary Differential Equation (ADE) scheme~\cite{Susan_FDTD2005}. In contrast to the conventional ADE for bulk materials, it includes tensorial electric and magnetic polarizations due to bianisotropy. It is therefore more complete, but also leads to a more complicated system of equations.
The organization of the paper is as follows. Section~\ref{sec:disp_ms} describes the basic physics of dispersion in materials and provides the related Lorentz, Drude and Debye dispersive models. Section~\ref{sec:ms_synthesis} recalls the metasurface susceptibility GSTC synthesis equations. Section~\ref{sec:formulation} is the core of the paper; it establishes the ADE-dispersive FDTD metasurface analysis. Section~\ref{sec:examples} demonstrates this method via three illustrative examples. Finally, Sec.~\ref{sec:conclusion} draws conclusions.
\section{Dispersive Medium Modeling}\label{sec:disp_ms}
A temporal\footnote{A medium can also be dispersive in terms of the spatial frequency, $\ve{k}$, or spatially dispersive~\cite{Elect_conti_media_Landau_1984}.} frequency dispersive, or temporally dispersive, or, for short, dispersive, medium is a medium whose constitutive parameters depend on the temporal frequency, $\omega$~\cite{Jackson_Classical_Electd2012}. Dispersion is a consequence of causality, which states that any effect must be preceded by a cause~\cite{Nussensveig_caus_disp2012},
incarnated in the Kramers-Kronig relations~\cite{Jackson_Classical_Electd2012}. The major mechanism leading to dispersion in materials is electronic, atomic, molecular or domain polarizations, which may be macroscopically represented by the electric and magnetic polarization density vectors~\cite{Ishimaru_EM_Radia_Propag_Book1991}.
Since such polarizations are associated with electron, atom, molecule and domain motions in the medium, the dispersion parameters are found by solving the Newton equation of motion~\cite{Bladel_electromagnetic, Bladel_electromagnetics, Ishimaru_EM_Radia_Propag_Book1991, Jackson_Classical_Electd2012}. This generally leads to the following Lorentz-form dispersion relation in terms of medium susceptibility:
\begin{equation}\label{eq:Lorentz_disp}
\tilde{\chi}_\textrm{L}(\omega)=\frac{\omega_\textrm{p}^2}{\omega_\textrm{0}^2+2j\omega\gamma-\omega^2},
\end{equation}
where $\omega_\text{p}$ is the plasma frequency, $\omega_0$ is the resonant frequency and $\gamma$ is the damping factor. The real and imaginary parts of $\tilde{\chi}_\textrm{L}(\omega)$ are plotted versus frequency in Fig.~\ref{fig:disp_materials_2}.
In the case of conductors, no resonance occurs since the conduction electrons are not bound, and hence~\eqref{eq:Lorentz_disp} reduces to the Drude dispersion model, $\tilde{\chi}_\textrm{L}(\omega)=\omega_\textrm{p}^2/(2j\omega\gamma-\omega^2)$. In the case of highly lossy materials, such as for instance biological tissues at low frequency, we have $\omega^2\ll\omega\gamma$, and hence~\eqref{eq:Lorentz_disp} reduces to the Debye dispersion
\begin{equation}\label{eq:Debye_disp}
\tilde{\chi}_\textrm{D}(\omega)
=\chi_\infty+\frac{\Delta \chi}{1+j\omega\tau}
=\chi_\infty+\frac{\chi_\textrm{s}-\chi_\infty}{1+j\omega\tau},
\end{equation}
where $\Delta\chi=\chi_\textrm{s}-\chi_\infty$, with $\chi_\textrm{s}$ and $\chi_\infty$ the static and infinite-frequency susceptibilities, respectively [$\chi_\textrm{s}=(\omega_\text{p}/\omega_0)^2$ and $\chi_\infty=0$ in the reduction of~\eqref{eq:Lorentz_disp}], and $\tau=2\gamma/\omega_0^2$. The real and imaginary parts of $\tilde{\chi}_\textrm{D}(\omega)$ are plotted in Fig.~\ref{fig:disp_materials_1}.
\begin{figure}[!ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=0.98\columnwidth]{General_Lorentz_disp.pdf}
\caption{}\label{fig:disp_materials_2}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{General_Debye_disp}
\caption{}\label{fig:disp_materials_1}
\end{subfigure}
\caption{Complex dispersive susceptibility.~(a) Lorentz model.~(b) Debye model.}
\label{fig:disp_materials}
\end{figure}
In a metasurface, the scattering particles may include metals, dielectrics or a combination of the two. Since these particles are essentially resonators, they also typically exhibit Lorentz or Debye dispersion. Note that, while the bulk material susceptibility is unitless, the metasurface susceptibility has the unit of meter, as shown in the appendix of~\cite{Xiao_spherical_ms_synthesis2017}.
\section{GSTC Susceptibility Equations}\label{sec:ms_synthesis}
Figure~\ref{fig:ms} shows a metasurface structure, with key parameters and illustration of field transformation. The corresponding bianisotropic GSTC synthesis equations, assuming only tangential polarizations, are~\cite{Karim_ms_suscp_syn_2015, Kuester_AveragTransCond_2003, Idemen_GSTCBook_2011}
\begin{subequations}\label{eq:freq_GSTC}
\begin{align}\label{eq:freq_GSTC_1}
\left(\!\!\!
\begin{array}{c}
-\Delta \tilde{H}_y\\
\Delta \tilde{H}_x\\
\end{array}
\!\!\!\right) =&j\omega\varepsilon_0 \left(\!\!
\begin{array}{cc}\!\!
\tilde{\chi} _{\textrm{ee}}^{xx} & \tilde{\chi}_{\textrm{ee}}^{xy} \\
\tilde{\chi}_{\textrm{ee}}^{yx} & \tilde{\chi}_{\textrm{ee}}^{yy} \\
\end{array}
\!\!\right)
\left(\!\!
\begin{array}{c}
\tilde{E}_{x,\textrm{av}} \\
\tilde{E}_{y,\textrm{av}} \\
\end{array}
\!\!\right)\\\notag
&+j\omega\sqrt{\varepsilon_0 \mu_0} \left(\!\!
\begin{array}{cc}
\tilde{\chi}_{\textrm{em}}^{xx} & \tilde{\chi}_{\textrm{em}}^{xy} \\
\tilde{\chi}_{\textrm{em}}^{yx} & \tilde{\chi}_{\textrm{em}}^{yy} \\
\end{array}
\!\!\right)
\left(\!\!
\begin{array}{c}
\tilde{H}_{x,\textrm{av}} \\
\tilde{H}_{y,\textrm{av}} \\
\end{array}
\!\!\right),
\end{align}
\begin{align}\label{eq:freq_GSTC_2}
\left(\!\!
\begin{array}{c}
\Delta \tilde{E}_y \\
-\Delta \tilde{E}_x \\
\end{array}
\!\!\right) =&j\omega\mu_0 \left(\!\!
\begin{array}{cc}
\tilde{\chi}_{\textrm{mm}}^{xx} & \tilde{\chi}_{\textrm{mm}}^{xy} \\
\tilde{\chi}_{\textrm{mm}}^{yx} & \tilde{\chi}_{\textrm{mm}}^{yy} \\
\end{array}
\!\!\right)
\left(\!\!
\begin{array}{c}
\tilde{H}_{x,\textrm{av}} \\
\tilde{H}_{y,\textrm{av}} \\
\end{array}
\!\!\right)\\\notag
&+j\omega\sqrt{\varepsilon_0 \mu_0} \left(\!\!
\begin{array}{cc}
\tilde{\chi}_{\textrm{me}}^{xx} & \tilde{\chi}_{\textrm{me}}^{xy} \\
\tilde{\chi}_{\textrm{me}}^{yx} & \tilde{\chi}_{\textrm{me}}^{yy} \\
\end{array}
\!\!\right)
\left(\!\!
\begin{array}{c}
\tilde{E}_{x,\textrm{av}} \\
\tilde{E}_{y,\textrm{av}} \\
\end{array}
\!\!\right),
\end{align}
\end{subequations}
where $\Delta \tilde{\psi}=\tilde{\psi}^\textrm{t}-\left(\tilde{\psi}^\textrm{i}+\tilde{\psi}^\textrm{r}\right)$ and $\tilde{\psi}_\textrm{av}=[\tilde{\psi}^\textrm{t}+\left(\tilde{\psi}^\textrm{i}+\tilde{\psi}^\textrm{r}\right)]/2$, with $\tilde{\psi}$ representing any component of the $\tilde{\ve{E}}$ or $\tilde{\ve{H}}$ fields, and t, i and r denoting the transmitted, incident and reflected fields, respectively~\footnote{Based on the surface equivalence principle~\cite{Ishimaru_EM_Radia_Propag_Book1991}, any transformation can be represented by equivalent surface currents, leading to only transverse polarization densities. Thus, normal polarization densities lead to redundant solutions, unless one wishes to design a metasurface responding with different specified fields to different excitations.}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\columnwidth]{MSfig}
\caption{Metasurface sheet discontinuity transforming a given incident wave ($\ve{\psi}^\textrm{i}$) into a reflected wave ($\ve{\psi}^\textrm{r}$) and a transmitted wave ($\ve{\psi}^\textrm{t}$).}
\label{fig:ms}
\end{figure}
Equation~\eqref{eq:freq_GSTC} provides the susceptibilities required for a transformation in terms of the specified incident, reflected and transmitted fields. For a single, double or triple transformation, multiple solutions are possible, as discussed in~\cite{Karim_ms_suscp_syn_2015}. Since the solutions are necessarily causal, the metasurface is necessarily dispersive.
We assume that the susceptibilities follow the Lorentz or Debye dispersion models in the bandwidth of interest. Other dispersion models may be handled using expansions in terms of Lorentzian and/or Debye dispersion functions~\cite{VECTFIT_code2002,VECTFIT_method1999}.
\section{Dispersive Metasurface Analysis}\label{sec:formulation}
This section develops a GSTC-FDTD scheme for the simulation of metasurfaces represented by~\eqref{eq:freq_GSTC} with Lorentz~\eqref{eq:Lorentz_disp} or Debye~\eqref{eq:Debye_disp} dispersions. To avoid lengthy equations and tedious developments we consider, without loss of essential generality, a 1D-FDTD problem, i.e. a 0D (point) bianisotropic metasurface with nonzero fields restricted to $(E_y,H_x)\neq0$ and propagation direction $\tilde{\textbf{k}}=k_0\hat{z}$. The extension to the 2D and 3D problems involves a similar procedure, with just more complexity. We assume the general Lorentz dispersion
\begin{equation}\label{eq:ms_DeborLoren}
\tilde{\chi}_\textrm{ab}(\omega)=\frac{\omega_\textrm{p,ab}^2}{\omega_\textrm{0,ab}^2+2j\omega\gamma_\textrm{ab}-\zeta\omega^2},
\end{equation}
where a and b can each be either e or m [see \eqref{eq:freq_GSTC}]. The dimensionless coefficient $\zeta$ toggles between the Lorentz dispersion $\left(\zeta=1\right)$ and the Debye dispersion $\left(\zeta=0\right)$.
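As a numerical illustration (not part of the FDTD scheme itself), the unified model \eqref{eq:ms_DeborLoren} is straightforward to evaluate with complex arithmetic; the sketch below also checks that the $\zeta=0$ case reproduces the Debye form, with $\Delta\chi=(\omega_\textrm{p}/\omega_0)^2$ and $\tau=2\gamma/\omega_0^2$ obtained from the low-frequency reduction (function names are ours).

```python
def chi(omega, omega_p, omega_0, gamma, zeta=1.0):
    # Unified Lorentz (zeta = 1) / Debye (zeta = 0) susceptibility:
    # chi(w) = wp^2 / (w0^2 + 2j*w*gamma - zeta*w^2).
    return omega_p**2 / (omega_0**2 + 2j * omega * gamma - zeta * omega**2)

def chi_debye(omega, omega_p, omega_0, gamma):
    # Equivalent Debye form d_chi / (1 + j*w*tau), with
    # d_chi = (wp/w0)^2 and tau = 2*gamma/w0^2 (chi_inf = 0 here).
    d_chi = (omega_p / omega_0) ** 2
    tau = 2 * gamma / omega_0**2
    return d_chi / (1 + 1j * omega * tau)
```

At $\omega = 0$ both models reduce to the static susceptibility $(\omega_\textrm{p}/\omega_0)^2$, as expected.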
\subsection{FDTD Virtual Node}
The conventional 1D-FDTD equations, assuming magnetic and electric fields along the $x$ and $y$ directions, respectively, are~\cite{Susan_FDTD2005}
\begin{subequations}\label{eq:regularFDTD}
\begin{align}\label{eq:regularFDTD_1}
&H_x^{n+\frac{1}{2}}\left(i\right)=H_x^{n-\frac{1}{2}}\left(i\right)+\frac{\Delta t}{\mu_0\Delta z}\left[E_y^n\left(i+1\right)-E_y^n\left(i\right)\right],\\\label{eq:regularFDTD_2}
&E_y^n \left( i\right)=E_y^{n-1}\left( i\right)+\frac{\Delta t}{\varepsilon_0\Delta z}\left[H_x^{n-\frac{1}{2}}\left(i \right) -H_x^{n-\frac{1}{2}}\left( i-1\right)\right],
\end{align}
\end{subequations}
where $\Delta t$ and $\Delta z$ are the FDTD time step and mesh size, respectively, $n$ being the discrete time index and $i$ the discrete space index.
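For reference, the leapfrog \eqref{eq:regularFDTD} can be sketched in a few lines. The snippet below is a minimal illustration in normalized units ($\varepsilon_0=\mu_0=1$) with Courant number $\Delta t/\Delta z = 1$ and a hypothetical soft source; it mirrors the two update equations but is not the metasurface scheme developed next.

```python
import math

def fdtd_1d(nz, nt, source, src_pos):
    # Minimal 1D Yee leapfrog mirroring Eq. (eq:regularFDTD),
    # normalized units (eps0 = mu0 = 1) and magic time step dt = dz.
    eps0 = mu0 = dz = dt = 1.0
    Ey = [0.0] * nz
    Hx = [0.0] * nz
    for n in range(nt):
        # H-update: Hx^{n+1/2}(i) = Hx^{n-1/2}(i) + dt/(mu0*dz)*(Ey(i+1) - Ey(i))
        for i in range(nz - 1):
            Hx[i] += dt / (mu0 * dz) * (Ey[i + 1] - Ey[i])
        # E-update: Ey^{n}(i) = Ey^{n-1}(i) + dt/(eps0*dz)*(Hx(i) - Hx(i-1))
        for i in range(1, nz):
            Ey[i] += dt / (eps0 * dz) * (Hx[i] - Hx[i - 1])
        Ey[src_pos] += source(n)  # soft source injection
    return Ey, Hx
```

A typical run injects a Gaussian pulse, e.g. \texttt{fdtd\_1d(200, 120, lambda n: math.exp(-((n - 30) / 8) ** 2), 50)}, and the fields remain bounded at Courant number $1$ in 1D.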
In the FDTD grid, bulk 3D materials are terminated at either an $E$ or an $H$-field node, and are hence at least one grid cell $\left(\Delta z\right)$ thick, as shown in Fig.~\ref{fig:1DFDTD_1}. In contrast, a metasurface, which is ideally modeled as a zero thickness sheet, can be positioned neither at an $E$ nor at an $H$-field node. For this reason, following~\cite{FDTDGraphene} and~\cite{GSTC-FDTD_2018_Yousef}, we position the metasurface \emph{between} two neighboring cells, as shown in Fig.~\ref{fig:1DFDTD_2}.
\begin{figure}[!ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{1DFDTD_1}
\caption{}\label{fig:1DFDTD_1}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{1DFDTD}
\caption{}\label{fig:1DFDTD_2}
\end{subfigure}
\caption{Material positioning in the 1D FDTD grid. (a)~Bulk material, with minimum possible thickness $\Delta z$, positioned at an $E$-field node. (b)~Metasurface, with zero thickness, positioned between adjacent $E$ and $H$-field nodes. The small green circle and purple arrow represent the electric and magnetic virtual nodes placed just before ($z=0^-$) and just after ($z=0^+$) the metasurface, respectively. The metasurface is illuminated from the left in the $z-$direction.}
\label{fig:1DFDTD}
\end{figure}
Equation~\eqref{eq:regularFDTD} is applicable everywhere in the computational domain, except at the metasurface discontinuity nodes $k$ and $k+1$, whose update equations involve $E_y(k+1)$, $H_x(k+1)$ and $H_x(k)$ across the discontinuity. To address this issue, we introduce a magnetic \emph{virtual node} (small purple arrow in Fig.~\ref{fig:1DFDTD_2}). Then, we incorporate this node into~\eqref{eq:regularFDTD_2}, which yields
\begin{align}\label{eq:updateEy}
E_y^n \left( k+1\right)=&E_y^{n-1}\left( k+1\right)+\\\notag&\frac{\Delta t}{\varepsilon_0\Delta z}\left[H_x^{n-\frac{1}{2}}\left(k+1 \right) -H_x^{n-\frac{1}{2}}\left(0^+\right)\right].
\end{align}
Similarly, Eq.~\eqref{eq:regularFDTD_1} becomes at the metasurface discontinuity
\begin{equation}\label{eq:updateHx}
H_x^{n+\frac{1}{2}}\left(k\right)=H_x^{n-\frac{1}{2}}\left(k\right)+\frac{\Delta t}{\mu_0\Delta z}\left[E_y^n\left(0^-\right)-E_y^n\left(k\right)\right].
\end{equation}
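In code, these two interface updates take a simple form. The following sketch uses our own (hypothetical) array names, with $H_x^{n-\frac{1}{2}}(0^+)$ and $E_y^n(0^-)$ supplied by the GSTC computation described next:

```python
# Sketch of the updates at the metasurface interface nodes k and k+1.
# Ey, Hx are our own array names; H0p = H_x^{n-1/2}(0^+) and
# E0m = E_y^n(0^-) are the virtual-node values obtained from the GSTCs.
def update_Ey_interface(Ey, Hx, H0p, k, dt, dz, eps0=1.0):
    # E_y^n(k+1) = E_y^{n-1}(k+1) + dt/(eps0*dz) * [H_x^{n-1/2}(k+1) - H_x^{n-1/2}(0^+)]
    Ey[k + 1] += dt / (eps0 * dz) * (Hx[k + 1] - H0p)

def update_Hx_interface(Ey, Hx, E0m, k, dt, dz, mu0=1.0):
    # H_x^{n+1/2}(k) = H_x^{n-1/2}(k) + dt/(mu0*dz) * [E_y^n(0^-) - E_y^n(k)]
    Hx[k] += dt / (mu0 * dz) * (E0m - Ey[k])
```

When no metasurface is present, $H_x(0^+)=H_x(k)$ and $E_y(0^-)=E_y(k+1)$, and both updates reduce to the conventional scheme~\eqref{eq:regularFDTD}.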
Now, $H_x(0^+)$ and $E_y(0^-)$ are computed through the GSTCs~\eqref{eq:freq_GSTC}, which reduce here to
\begin{subequations}\label{eq:simp_GSTC}
\begin{align}
\Delta \tilde{H}_x &= j\omega\varepsilon_0 \tilde{\chi}_\textrm{ee}^{yy}\tilde{E}_{y,\textrm{av}}+ jk_0 \tilde{\chi}_\textrm{em}^{yx}\tilde{H}_{x,\textrm{av}},\label{eq:simp_GSTC_1} \\
\Delta \tilde{E}_y &= j\omega\mu_0 \tilde{\chi}_\textrm{mm}^{xx}\tilde{H}_{x,\textrm{av}}+ jk_0 \tilde{\chi}_\textrm{me}^{xy}\tilde{E}_{y,\textrm{av}},\label{eq:simp_GSTC_2}
\end{align}
\end{subequations}
with the $\chi$'s given in~\eqref{eq:ms_DeborLoren}.
Since the overall FDTD simulation is performed in the time domain, Eq.~\eqref{eq:simp_GSTC} must be converted into its time-domain counterpart. Such a conversion generally transforms simple products into convolution products, which are often problematic to handle. However, the convolution products may be avoided in particular cases, such as the one relevant here with the Lorentz dispersive function, as will be seen next.
\subsection{Auxiliary Functions}
Due to the staggered nature of the Yee grid, the discretized version of the time-domain equation~\eqref{eq:simp_GSTC} involves a mismatch between the space and time sampling. The conventional solution in bulk and non-bianisotropic dispersive media is the technique of Auxiliary Differential Equations (ADEs)~\cite{Susan_FDTD2005}. Here we extend this technique to bianisotropic metasurfaces, which requires a judicious selection of the half-integer and full-integer time steps. A trial-and-error search led to the following auxiliary polarization functions\footnote{Note that these functions are not trivial. They must satisfy two essential ADE requirements: 1)~their substitution into~\eqref{eq:simp_GSTC} should lead to a discretizable equation, and 2)~the corresponding ADE should be numerically stable. For example, we numerically found that the auxiliary functions
\begin{subequations}
\begin{align*}
\tilde{P}_\textrm{ee}^{yy} & =\varepsilon_0\tilde{\chi}^{yy}_\textrm{ee}\tilde{E}_{y,\textrm{av}}, \\
\tilde{P}_\textrm{em}^{yx} & =\frac{\tilde{\chi}^{yx}_\textrm{em}}{c_0} \tilde{H}_{x,\textrm{av}}, \\
\tilde{M}_\textrm{mm}^{xx} & =\mu_0\tilde{\chi}^{xx}_\textrm{mm}\tilde{H}_{x,\textrm{av}}, \\
\tilde{M}_\textrm{me}^{xy} & =\frac{\tilde{\chi}^{xy}_\textrm{me}}{c_0} \tilde{E}_{y,\textrm{av}},
\end{align*}
\end{subequations}
used in~\cite{Susan_FDTD2005} for the simulation of bulk dispersive materials, result in unstable update equations.}
\begin{subequations}\label{eq:PM_Lorentz}
\begin{align}
\tilde{P}_\textrm{ee}^{yy} & =j\omega\varepsilon_0\tilde{\chi}^{yy}_\textrm{ee}\tilde{E}_{y,\textrm{av}},\label{eq:PM_Lorentz_1} \\
\tilde{P}_\textrm{em}^{yx} & =jk_0\tilde{\chi}^{yx}_\textrm{em}\tilde{H}_{x,\textrm{av}},\label{eq:PM_Lorentz_2} \\
\tilde{M}_\textrm{mm}^{xx} & =j\omega\mu_0\tilde{\chi}^{xx}_\textrm{mm}\tilde{H}_{x,\textrm{av}},\label{eq:PM_Lorentz_3} \\
\tilde{M}_\textrm{me}^{xy} & =jk_0\tilde{\chi}^{xy}_\textrm{me}\tilde{E}_{y,\textrm{av}},\label{eq:PM_Lorentz_4}
\end{align}
\end{subequations}
whose form was inspired by -- but modified from -- the functions involved in the conventional ADE scheme, which uses electric polarization currents as the auxiliary functions~\cite{Susan_FDTD2005}.
Let us verify the validity of the auxiliary functions~\eqref{eq:PM_Lorentz}. Substituting them as an ansatz into~\eqref{eq:simp_GSTC} interestingly yields the coefficient-free relations
\begin{subequations}\label{eq:GSTC_Lorentz}
\begin{align}
\Delta \tilde{H}_x &= \tilde{P}_\textrm{ee}^{yy}+\tilde{P}_\textrm{em}^{yx},\label{eq:GSTC_Lorentz_1} \\
\Delta \tilde{E}_y &= \tilde{M}_\textrm{mm}^{xx}+\tilde{M}_\textrm{me}^{xy}.\label{eq:GSTC_Lorentz_2}
\end{align}
\end{subequations}
Inverse Fourier transforming~\eqref{eq:GSTC_Lorentz_1} and discretizing the result provides the time-domain quantity $H_x^{n - \frac{1}{2}}(0^+)$ as
\begin{align}\label{eq:Hz0p_Lorentz}
H_x^{n - \frac{1}{2}}\left(0^ +\right) = H_x^{n - \frac{1}{2}}(k) &+\frac{P_\textrm{ee}^{yy,n}+P_\textrm{ee}^{yy,n-1}}{2}+\\\notag &\frac{P_\textrm{em}^{yx,n}+P_\textrm{em}^{yx,n-1}}{2},
\end{align}
whose substitution into~\eqref{eq:updateEy} yields
\begin{align}\label{eq:Ey_update}
E_y^{n}&\left(k +1\right)=E_y^{n-1}\left( k+1\right)
+\frac{\Delta t}{\varepsilon_0\Delta z}\left[H_x^{n-\frac{1}{2}}\left(k+1\right)-H_x^{n-\frac{1}{2}}\left(k\right)\right]
\\\notag&-\frac{\Delta t}{\varepsilon_0\Delta z}\frac{P_\textrm{ee}^{yy,n}+P_\textrm{ee}^{yy,n-1}+P_\textrm{em}^{yx,n}+P_\textrm{em}^{yx,n-1}}{2}.
\end{align}
The first line of this equation is recognized as the conventional FDTD update equation~\eqref{eq:regularFDTD_2}, while the second-line term corresponds to the effect of the metasurface discontinuity.
As shown in Appendix~\ref{app:Lorentz_AF_descret}, the auxiliary functions $P_\textrm{ee}^{yy,n}$ and $P_\textrm{em}^{yx,n}$, or ADEs, are obtained from the discretization of~\eqref{eq:PM_Lorentz_1} and~\eqref{eq:PM_Lorentz_2}, respectively, as
\begin{subequations}\label{eq:Pee_n_Lorentz}
\begin{align}\label{eq:Pee_n_Lorentz_1}
P_\textrm{ee}^{yy,n}&=-\frac{\Delta t^2\omega^2_\textrm{0,ee}-2\zeta}{\Delta t\gamma_\textrm{ee} +\zeta}P_\textrm{ee}^{yy,n-1}-\frac{\zeta-\Delta t\gamma_\textrm{ee}}{\zeta+\Delta t\gamma_\textrm{ee}}P_\textrm{ee}^{yy,n-2}\\\notag&+\frac{\varepsilon_0\Delta t\omega^2_\textrm{p,ee}}{2(\gamma_\textrm{ee}\Delta t+\zeta)}\left[ E_{y,\textrm{av}}^n-E_{y,\textrm{av}}^{n-2}\right],
\end{align}
\begin{align}\label{eq:Pee_n_Lorentz_2}
P_\textrm{em}^{yx,n}&=-\frac{\Delta t^2\omega^2_\textrm{0,em}-2\zeta}{\Delta t\gamma_\textrm{em} +\zeta}P_\textrm{em}^{yx,n-1}-\frac{\zeta-\Delta t\gamma_\textrm{em}}{\zeta+\Delta t\gamma_\textrm{em}}P_\textrm{em}^{yx,n-2}\\\notag&+\frac{\Delta t\omega^2_\textrm{p,em}}{c_0(\gamma_\textrm{em}\Delta t+\zeta)}\left[ H_{x,\textrm{av}}^{n-\frac{1}{2}}-H_{x,\textrm{av}}^{n-\frac{3}{2}}\right].
\end{align}
\end{subequations}
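For reference, the $P_\textrm{ee}$ recursion~\eqref{eq:Pee_n_Lorentz_1} may be transcribed as follows. This is our own sketch; $\zeta$ is the discretization constant from Appendix~\ref{app:Lorentz_AF_descret}, kept here as a plain parameter:

```python
def step_Pee(P1, P2, Eav_n, Eav_nm2, dt, w0, wp, gamma, zeta, eps0=1.0):
    """One step of the P_ee ADE recursion: returns P_ee^{yy,n} from the two
    previous values P1 = P^{n-1}, P2 = P^{n-2} and the averaged E-field
    samples Eav_n = E_av^n, Eav_nm2 = E_av^{n-2}."""
    d = gamma * dt + zeta
    return (-(dt**2 * w0**2 - 2.0 * zeta) / d * P1
            - (zeta - gamma * dt) / (zeta + gamma * dt) * P2
            + eps0 * dt * wp**2 / (2.0 * d) * (Eav_n - Eav_nm2))
```

The $P_\textrm{em}$, $M_\textrm{mm}$ and $M_\textrm{me}$ recursions have the same structure, with the appropriate field samples and Lorentz constants.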
Updating $E_y^n(k+1)$ in~\eqref{eq:Ey_update} requires the knowledge of $P_\textrm{ee}^{yy,n}$, which, from~\eqref{eq:Pee_n_Lorentz_1}, itself depends on $E_y^n(k+1)$ via $E^n_{y,\text{av}}$ according to Fig.~\ref{fig:1DFDTD}. Substituting~\eqref{eq:Pee_n_Lorentz_1} into~\eqref{eq:Ey_update} and solving for $E_y^n(k+1)$ then yields
\begin{align}\label{eq:Ey_final_update}
E_y^n&(k+1)\left[1+\frac{\Delta t^2\omega^2_\textrm{p,ee}}{8\Delta z(\gamma_\textrm{ee}\Delta t+\zeta)}\right]=E_y^{n-1}\left( k+1\right)
\\\notag&+\frac{\Delta t}{\varepsilon_0\Delta z}\left[H_x^{n-\frac{1}{2}}\left(k+1\right)-H_x^{n-\frac{1}{2}}\left(k\right)\right]\\\notag&-\frac{\Delta t^2\omega^2_\textrm{p,ee}}{4\Delta z(\gamma_\textrm{ee}\Delta t+\zeta)}\left[ \frac{E_{y}^n(k)}{2}-E_{y,\textrm{av}}^{n-2}\right]+c_1P_\textrm{ee}^{yy,n-1}\\\notag &+c_2P_\textrm{ee}^{yy,n-2}-\frac{\Delta t}{\varepsilon_0\Delta z}\frac{P_\textrm{em}^{yx,n}+P_\textrm{em}^{yx,n-1}}{2},
\end{align}
where $c_1=\frac{\Delta t}{2\varepsilon_0\Delta z}\left[-1+\frac{\Delta t^2\omega^2_\textrm{0,ee}-2\zeta}{\gamma_\textrm{ee}\Delta t+\zeta}\right], c_2=\frac{\Delta t}{2\varepsilon_0\Delta z}\frac{-\gamma_\textrm{ee}\Delta t+\zeta}{\gamma_\textrm{ee}\Delta t+\zeta}$, and $P_\textrm{ee}^{yy,n-1}$ is found upon replacing $n$ by $n-1$ in~\eqref{eq:Pee_n_Lorentz_1}.
$E_y(0^-)$ in~\eqref{eq:updateHx} can be handled in an analogous manner using the time-domain version of~\eqref{eq:GSTC_Lorentz_2}, which leads to
\begin{align}\label{eq:Ey0n_Lorentz}
E_y^n\left(0^-\right) = E_y^n\left(k +1\right) &-\frac{M_\textrm{mm}^{xx,n+\frac{1}{2}}+M_\textrm{mm}^{xx,n-\frac{1}{2}}}{2}\\\notag &-\frac{M_\textrm{me}^{xy,n+\frac{1}{2}}+M_\textrm{me}^{xy,n-\frac{1}{2}}}{2},
\end{align}
whose substitution into~\eqref{eq:updateHx} yields
\begin{align}\label{eq:Hx_update}
H_x^{n+\frac{1}{2}}&\left(k\right)=H_x^{n-\frac{1}{2}}\left(k\right)+\frac{\Delta t}{\mu_0\Delta z}\left[E_y^n\left(k+1\right)-E_y^n\left(k\right)\right]-\\\notag &\frac{\Delta t}{\mu_0 \Delta z}\frac{M_\textrm{mm}^{xx,n+\frac{1}{2}}+M_\textrm{mm}^{xx,n-\frac{1}{2}}+ M_\textrm{me}^{xy,n+\frac{1}{2}}+M_\textrm{me}^{xy,n-\frac{1}{2}}}{2},
\end{align}
where the first line is the conventional FDTD update equation~\eqref{eq:regularFDTD_1}, while the second-line term corresponds to the effect of the metasurface. Similarly to~\eqref{eq:Pee_n_Lorentz}, the auxiliary functions $M_\textrm{mm}^{xx,n+\frac{1}{2}}$ and $M_\textrm{me}^{xy,n+\frac{1}{2}}$ are obtained from the discretization of~\eqref{eq:PM_Lorentz_3} and~\eqref{eq:PM_Lorentz_4}, respectively, as
\begin{subequations}\label{eq:Mmm_n_Lorentz}
\begin{align}\label{eq:Mmm_n_Lorentz_1}
&M_\textrm{mm}^{xx,n+\frac{1}{2}}=-\frac{\Delta t^2\omega^2_\textrm{0,mm}-2\zeta}{\Delta t\gamma_\textrm{mm} +\zeta}M_\textrm{mm}^{xx,n-\frac{1}{2}}-\\\notag&\frac{\zeta-\Delta t\gamma_\textrm{mm}}{\zeta+\Delta t\gamma_\textrm{mm}}M_\textrm{mm}^{xx,n-\frac{3}{2}}+\frac{\mu_0\Delta t\omega^2_\textrm{p,mm}}{2(\gamma_\textrm{mm}\Delta t+\zeta)}\left[ H_{x,\textrm{av}}^{n+\frac{1}{2}}-H_{x,\textrm{av}}^{n-\frac{3}{2}}\right],
\end{align}
\begin{align}\label{eq:Mmm_n_Lorentz_2}
&M_\textrm{me}^{xy,n+\frac{1}{2}}=-\frac{\Delta t^2\omega^2_\textrm{0,me}-2\zeta}{\Delta t\gamma_\textrm{me}+\zeta}M_\textrm{me}^{xy,n-\frac{1}{2}}\\\notag&-\frac{-\Delta t\gamma_\textrm{me}+\zeta}{\Delta t\gamma_\textrm{me} +\zeta}M_\textrm{me}^{xy,n-\frac{3}{2}}+\frac{\Delta t\omega^2_\textrm{p,me}}{c_0(\gamma_\textrm{me}\Delta t+\zeta)}\left[ E_{y,\textrm{av}}^{n}-E_{y,\textrm{av}}^{n-1}\right].
\end{align}
\end{subequations}
Then, updating $H_x^{n+\frac{1}{2}}\left(k\right)$ in~\eqref{eq:Hx_update} requires the knowledge of $M_\textrm{mm}^{xx,n+\frac{1}{2}}$, which, from~\eqref{eq:Mmm_n_Lorentz_1}, itself depends on $H_x^{n+\frac{1}{2}}\left(k\right)$. Substituting~\eqref{eq:Mmm_n_Lorentz_1} into~\eqref{eq:Hx_update} and solving for $H_x^{n+\frac{1}{2}}\left(k\right)$ finally yields
\begin{align}\label{eq:Hx_final_update}
&H_x^{n+\frac{1}{2}}\left(k\right)\left[1+\frac{\Delta t^2\omega^2_\textrm{p,mm}}{8\Delta z(\gamma_\textrm{mm}\Delta t+\zeta)}\right]=H_x^{n-\frac{1}{2}}\left(k\right)\\\notag&+\frac{\Delta t}{\mu_0\Delta z}\left[E_y^n\left(k+1\right)-E_y^n\left(k\right)\right]\\\notag &+c_3M_\textrm{mm}^{xx,n-\frac{1}{2}}-\frac{\Delta t^2\omega^2_\textrm{p,mm}}{4\Delta z(\gamma_\textrm{mm}\Delta t+\zeta)}\left[ \frac{H_{x}^{n+\frac{1}{2}}(k+1)}{2}-H_{x,\textrm{av}}^{n-\frac{3}{2}}\right]\\\notag&+c_4M_\textrm{mm}^{xx,n-\frac{3}{2}}-\frac{\Delta t}{\mu_0 \Delta z}\frac{M_\textrm{me}^{xy,n+\frac{1}{2}}+M_\textrm{me}^{xy,n-\frac{1}{2}}}{2},
\end{align}
where $c_3=\frac{\Delta t}{2\mu_0\Delta z}\left[-1+\frac{\Delta t^2\omega^2_\textrm{0,mm}-2\zeta}{\gamma_\textrm{mm}\Delta t+\zeta}\right], c_4=\frac{\Delta t}{2\mu_0\Delta z}\frac{-\gamma_\textrm{mm}\Delta t+\zeta}{\gamma_\textrm{mm}\Delta t+\zeta}$ and $M_\textrm{mm}^{xx,n-\frac{1}{2}}$ is found upon replacing $n+\frac{1}{2}$ with $n-\frac{1}{2}$ in~\eqref{eq:Mmm_n_Lorentz_1}.
Equations~\eqref{eq:Ey_final_update} and~\eqref{eq:Hx_final_update} are thus the final update equations, taking into account the effect of the metasurface. If the metasurface is absent ($\omega_\textrm{p,ee}=\omega_\textrm{p,mm}=0$), these equations reduce to the conventional FDTD equations.
In summary, the dispersive bianisotropic metasurface problem is solved in FDTD using the update equations~\eqref{eq:Ey_final_update} and~\eqref{eq:Hx_final_update}, which reduce to the conventional update equations~\eqref{eq:regularFDTD} away from the metasurface.
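As an illustration of how the scheme assembles, the following minimal sketch implements the conventional part of the leapfrog loop in the $\varepsilon_0=\mu_0=c_0=1$ normalization used in Sec.~\ref{sec:examples}. The grid sizes, probe and soft-source injection node are our own choices; a metasurface between nodes $k$ and $k+1$ would simply replace the two updates at those nodes by~\eqref{eq:Ey_final_update} and~\eqref{eq:Hx_final_update}, which reduce to the lines below when $\omega_\textrm{p,ee}=\omega_\textrm{p,mm}=0$:

```python
import math

def run_fdtd_1d(nz=400, nt=600, dt=0.05, dz=0.1, probe=200):
    """Vanilla 1D FDTD leapfrog loop (eps0 = mu0 = c0 = 1 normalization).
    Returns the final fields and the peak |E_y| recorded at the probe node."""
    Ey = [0.0] * nz
    Hx = [0.0] * nz
    t0, tau, w = 3.6, 1.0, 2.0 * math.pi   # source parameters of Sec. examples
    peak = 0.0
    for n in range(nt):
        t = n * dt
        # soft source: additive injection of the modulated Gaussian pulse
        Ey[5] += math.exp(-((t - t0) / tau) ** 2) * math.sin(w * t)
        for i in range(nz - 1):      # H_x^{n+1/2}(k) update, Eq. (regularFDTD_1)
            Hx[i] += dt / dz * (Ey[i + 1] - Ey[i])
        for i in range(1, nz):       # E_y^{n+1}(k+1) update, Eq. (regularFDTD_2)
            Ey[i] += dt / dz * (Hx[i] - Hx[i - 1])
        peak = max(peak, abs(Ey[probe]))
    return Ey, Hx, peak
```

With $\Delta t/\Delta z = 0.5$ the CFL condition is satisfied, so the loop is stable and the injected pulse propagates to the probe node.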
\section{Illustrative Simulation Results}\label{sec:examples}
All the forthcoming simulations use the normalization $\varepsilon_0=\mu_0=c_0=1$ and $f=1$~Hz, with the source $E^\textrm{inc}=e^{-(\frac{t-t_0}{\tau})^2}\sin(\omega t)$, plotted in Fig.~\ref{fig:Ey_source}, where $t_0=3.6$, $\tau=1$ and $\omega=2\pi f$, unless otherwise specified.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{Ey_source}
\caption{Waveform of the incident modulated Gaussian pulse.}
\label{fig:Ey_source}
\end{figure}
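A minimal transcription of this source waveform (the function name is our own):

```python
import math

def e_inc(t, t0=3.6, tau=1.0, f=1.0):
    """Incident modulated Gaussian pulse: exp(-((t-t0)/tau)^2) * sin(2*pi*f*t)."""
    return math.exp(-((t - t0) / tau) ** 2) * math.sin(2.0 * math.pi * f * t)
```

The Gaussian envelope peaks at $t=t_0$ and is negligible a few $\tau$ away from it.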
\begin{table}[h]
\caption{Summary of the three examples presented in this section. The dimension of the problem is one more than the metasurface dimension.}\label{tb:summary_ex}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Nb. & Dispersion & Scattering & Dimension and type \\\hline
1 & Debye & $R,T\neq0$ & 0, bianisotropic \\\hline
2 & Lorentz & $R=0,T\neq0$ & 0, bianisotropic \\\hline
3 & Lorentz & $R=0, T=T(y)$ & 1, two anisotropic \\
\hline
\end{tabular}
\end{table}
Table~\ref{tb:summary_ex} summarizes the other parameters of the metasurfaces. All the results will be compared with the analytic solutions, computed following the procedures described in~\cite{Karim_ms_suscp_syn_2015} and~\cite{Lavigne_Refr_ms_no_spurious_diffr2018} as
\begin{subequations}\label{eq:analytic}
\begin{align}\label{eq:analytic_1}
S_{11}&=\frac{2jk_0\left(\chi_\textrm{mm}^{xx}-\chi_\textrm{ee}^{yy}+\chi_\textrm{em}^{yx}-\chi_\textrm{me}^{xy}\right)}{2jk_0\left( \chi_\textrm{mm}^{xx}+\chi_\textrm{ee}^{yy}\right)+k_0^2\chi_\textrm{em}^{yx}\chi_\textrm{me}^{xy}+4-k_0^2 \chi_\textrm{mm}^{xx}\chi_\textrm{ee}^{yy}} \\\label{eq:analytic_2}
S_{12}&=\frac{k_0^2\chi_\textrm{mm}^{xx}\chi_\textrm{ee}^{yy}-\left(2j-k_0\chi_\textrm{em}^{yx}\right) \left(2j-k_0\chi_\textrm{me}^{xy}\right)}{2jk_0\left( \chi_\textrm{mm}^{xx}+\chi_\textrm{ee}^{yy}\right)+k_0^2\chi_\textrm{em}^{yx}\chi_\textrm{me}^{xy}+4-k_0^2 \chi_\textrm{mm}^{xx}\chi_\textrm{ee}^{yy}}
\end{align}
\end{subequations}
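As a numerical check, these coefficients may be evaluated directly (our own transcription of~\eqref{eq:analytic}). Note that the numerator of $S_{11}$ vanishes identically under the matching condition $\chi_\textrm{mm}^{xx}=\chi_\textrm{ee}^{yy}$, $\chi_\textrm{em}^{yx}=\chi_\textrm{me}^{xy}$ invoked below, and that all $\chi=0$ gives $S_{12}=1$ (full transmission):

```python
def s_params(k0, chi_ee, chi_mm, chi_em, chi_me):
    """Analytic reflection (S11) and transmission (S12) coefficients of a
    bianisotropic metasurface; the susceptibilities may be complex."""
    den = (2j * k0 * (chi_mm + chi_ee) + k0**2 * chi_em * chi_me
           + 4.0 - k0**2 * chi_mm * chi_ee)
    s11 = 2j * k0 * (chi_mm - chi_ee + chi_em - chi_me) / den
    s12 = (k0**2 * chi_mm * chi_ee
           - (2j - k0 * chi_em) * (2j - k0 * chi_me)) / den
    return s11, s12
```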
The first example (Tab.~\ref{tb:summary_ex}) involves the Debye dispersive metasurface susceptibilities $\tilde{\chi}_\textrm{me}^{xy}=\frac{2}{1+2j\omega}$ and $\tilde{\chi}_\textrm{mm}^{xx}=\tilde{\chi}_\textrm{ee}^{yy}=\tilde{\chi}_\textrm{em}^{yx}=\frac{2}{1+0.7j\omega}$. The simulation results are shown in Figs.~\ref{fig:Ex1_anisot_Debye_disp_1} and~\ref{fig:Ex1_anisot_Debye_disp}. Figure~\ref{fig:Ex1_anisot_Debye_disp_1} plots the fields in the different regions at $t=5.8$~s. According to~\eqref{eq:analytic_1}, the matching condition ($S_{11}=0$) for a bianisotropic metasurface is $\chi_\textrm{mm}^{xx}=\chi_\textrm{ee}^{yy}$ and $\chi_\textrm{em}^{yx}=\chi_\textrm{me}^{xy}$, which is not satisfied in this example. Therefore, the metasurface is mismatched and the reflected field is non-zero ($E_y^\textrm{r}\neq0$). The phase and amplitude of the Fourier transforms of the transmitted and reflected waves, shown in Fig.~\ref{fig:Ex1_anisot_Debye_disp}, are seen to be in agreement with the analytic results obtained from~\eqref{eq:analytic}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{Ey1_scrshot_anisot_Debye_disp}
\caption{Example 1 (Tab.~\ref{tb:summary_ex}): Simulated electric field at time $t=5.8$ s versus space.}\label{fig:Ex1_anisot_Debye_disp_1}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ey1_FDTD_Debye_disp_abs}
\caption{}\label{fig:Ex1_anisot_Debye_disp_2}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ey1_FDTD_Debye_disp_phase}
\caption{}\label{fig:Ex1_anisot_Debye_disp_3}
\end{subfigure}
\caption{Example 1 (Tab.~\ref{tb:summary_ex}): Fourier transform of the incident and reflected (right before the metasurface) and transmitted (right after the metasurface) electric field in Fig.~\ref{fig:Ex1_anisot_Debye_disp_1} and comparison with the exact result [Eq.~\eqref{eq:analytic}]. (a) Amplitudes. (b) Phases.}
\label{fig:Ex1_anisot_Debye_disp}
\end{figure}
The second example (Tab.~\ref{tb:summary_ex}) involves the Lorentz dispersive susceptibilities $\tilde{\chi}_\textrm{ee}^{yy}=\tilde{\chi}_\textrm{mm}^{xx}=\frac{2}{\omega_0^2+2j\omega\gamma-\omega^2}$ and $\tilde{\chi}_\textrm{em}^{yx}=\tilde{\chi}_\textrm{me}^{xy}=\frac{1}{\omega_0^2+2j\omega\gamma-\omega^2}$, where $\omega_0=2\pi\times20$ and $\gamma=8\omega_0$. Here the matching condition is satisfied and the reflection should therefore be zero. This is verified in Figs.~\ref{fig:Ex2_anisot_Lorentz_disp_1} and~\ref{fig:Ex2_anisot_Lorentz_disp}. The phase and amplitude of the transmitted and reflected fields are again in good agreement with the analytical results.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Ey2_scrshot_anisot_Lorentz_disp}
\caption{Example 2 (Tab.~\ref{tb:summary_ex}): Simulated electric field at time $t=3$ s.}\label{fig:Ex2_anisot_Lorentz_disp_1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ey2_FDTD_Lorentz_disp_abs}
\caption{}\label{fig:Ex2_anisot_Lorentz_disp_2}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ey2_FDTD_Lorentz_disp_phase}
\caption{}\label{fig:Ex2_anisot_Lorentz_disp_3}
\end{subfigure}
\caption{Example 2 (Tab.~\ref{tb:summary_ex}): Fourier transform of the incident and reflected (right before the metasurface) and transmitted (right after the metasurface) electric fields in Fig.~\ref{fig:Ex2_anisot_Lorentz_disp_1} and comparison with the exact result, [Eq.~\eqref{eq:analytic}]. (a) Amplitudes. (b) Phases.}
\label{fig:Ex2_anisot_Lorentz_disp}
\end{figure}
The third and last example (Tab.~\ref{tb:summary_ex}) involves two parallel space-varying anisotropic metasurfaces ($\tilde{\chi}_\textrm{em}^{yx}=\tilde{\chi}_\textrm{me}^{xy}=0$) with Lorentzian dispersion, excited by an incident plane wave. The metasurfaces are designed to exhibit the highest transmission at their center and zero transmission at their edges, while being matched, with $\tilde{\chi}_\textrm{ee}^{yy}=\tilde{\chi}_\textrm{mm}^{xx}=\frac{\omega_\textrm{p}^2}{\omega_0^2+2j\omega\gamma-\omega^2}$, where $\omega_\textrm{p}=2$ and $\omega_0=2\pi\times20$. To control the metasurface absorption coefficient, $\gamma$ is varied in space as shown in Fig.~\ref{fig:Ex3_gamma}.
It was numerically found that a single metasurface with Lorentz dispersion cannot absorb all of the incident field. Therefore, we stack two metasurfaces and tune their separation for total absorption, which is achieved at $0.1\lambda$, as shown in Fig.~\ref{fig:Ex3_anisot_Lorentz_disp_1}. It can be qualitatively observed that the metasurfaces exhibit the desired behavior. This behavior is quantified in Fig.~\ref{fig:Ex3_anisot_Lorentz_disp_2}, where the field distribution at $y=0$ shows almost full transmission with a phase rotation, whereas the transmitted field at $y=3.75\lambda_0$ is zero, as specified.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Ex3_gamma}
\caption{Example 3 (Tab.~\ref{tb:summary_ex}): Damping, $\gamma(y)$, profile for full absorption.} \label{fig:Ex3_gamma}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ex3_anisot_Lorentz_disp_1}
\caption{}\label{fig:Ex3_anisot_Lorentz_disp_1}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{Ex3_anisot_Lorentz_disp_2}
\caption{}\label{fig:Ex3_anisot_Lorentz_disp_2}
\end{subfigure}
\caption{Example 3 (Tab.~\ref{tb:summary_ex}): Two-metasurface full absorption with illumination in the $+z$-direction. (a)~Field distribution in space, with the metasurfaces, in white dashed lines, located at $z=-\lambda_0$ and $z=-0.9\lambda_0$. (b)~Field distribution in the $z$-direction for $y=0$ and $y=3.75\lambda_0$.}
\label{fig:Ex3_anisot_Lorentz_disp}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
We have presented a simple and efficient Finite-Difference Time-Domain (FDTD) scheme for simulating dispersive -- as well as time-varying and nonlinear -- bianisotropic metasurfaces, using judicious auxiliary polarization functions based on the Generalized Sheet Transition Conditions (GSTCs).
This scheme is a fundamental addition to FDTD. Moreover, it is physically insightful, computationally efficient and easy to implement. For these reasons, its integration into commercial software products, which currently do not effectively allow the simulation of such structures and other emerging complex two-dimensional materials, would be highly beneficial, and may hence become a reality in the coming years.
\bibliographystyle{IEEEtran}
\section{Introduction}
In \cite{FS}, Fuchs and Salce proved the equivalence of nine conditions for modules over commutative rings $R$ with perfect ring of quotients $Q$. The aim of this paper is to show that the equivalence of seven of their conditions also holds for noncommutative right and left Ore rings $R$ for which $\Fdim(Q_Q) = 0$. Here
$Q = R[S^{-1}] = [S^{-1}] R$, where $S$ is the set of regular elements of $R$. Notice that a commutative ring $Q$ is perfect if and only if $\Fdim(Q_Q) = 0$ \cite[pp.~466--468]{Bass}. Thus this paper is a genuine extension of part of the results by Fuchs and Salce.
The history of this line of research begins in the theory of abelian groups. The term {\em cotorsion} first appears in Harrison \cite{36}, who defined cotorsion abelian groups as those reduced groups $G$ for which every short exact sequence $0\to G\to A\to B\to0$ with $B$ torsion-free splits, that is, the reduced abelian groups $G$ for which $\operatorname{Ext}(B,G)=0$ for every torsion-free abelian group $B$. The theory of cotorsion abelian groups was then extended to modules over commutative rings $R$ by Matlis \cite{MatlisMAMS, MatlisTF, 22}. As Matlis says in \cite[Introduction, p.~3]{MatlisTF}: ``Without doubt
there are no ideas in the general theory of integral domains which are
more fundamental in nature than those of cotorsion modules and completions as well as the relations between them.'' Matlis' results were soon extended to the case of noncommutative rings $R$, for instance in the works of Sandomierski \cite{sando}, who obtained very elegant results in the case of rings $R$ with a semisimple maximal quotient ring $Q$. Cotorsion theory then received a great impulse from the proof of the so-called ``flat cover conjecture'': every module has a flat cover. This had been conjectured by Enochs \cite{7}, and was proved in two different ways by Bican, El Bashir and Enochs \cite{4}. One proof is an application of a theorem of Eklof and Trlifaj \cite{6} that guarantees the existence of ``enough projectives and injectives'' for suitable cotorsion theories.
\medskip
Fuchs and Salce \cite[Theorem~7.1]{FS} proved that if $R$ is an order in a commutative perfect ring $Q$, then the following conditions are
equivalent:
$(i)$ $R$ is an almost perfect ring.
$(ii)$
Flat $R$-modules are strongly flat.
$(iii)$ Matlis-cotorsion $R$-modules are Enochs-cotorsion.
$(iv)$ $R$-modules of w.d.$\le 1$ are of p.d.$\le 1$.
$(v)$ The cotorsion pairs $(\Cal P_1,\Cal D)$ and ($\Cal F_1, \Cal W\Cal I)$ are equal.
$(vi)$ Divisible $R$-modules are weak-injective.
$(vii)$ $h$-divisible $R$-modules are weak-injective.
$(viii)$ Homomorphic images of weak-injective $R$-modules are weak-injective.
$(ix)$ $R$ is $h$-local and $Q/R$ is semi-artinian.
\medskip
Here $\Cal P_1$ and $\Cal F_1$ denote the classes of all $R$-modules of projective dimension $\le1$, of weak dimension $\le 1$ respectively.
\medskip
We prove that if $R$ is a right and left Ore ring and $\Fdim(Q_Q) = 0$, where
$Q = R[S^{-1}] = [S^{-1}] R$, then
the following conditions are equivalent:
$(i)$ Flat right
$R$-modules are strongly flat.
$ (ii)$ Matlis-cotorsion right $R$-modules are Enochs-cotorsion.
$(iii) $ $h$-divisible right $R$-modules are weak-injective.
$(iv)$ Homomorphic images of weak-injective right $R$-modules are weak-injective.
$(v)$ Homomorphic images of injective right $R$-modules are weak-injective.
$(vi)$ Right $R$-modules of $\wed \le 1$ are of $\pd\le1$.
$(vii)$ The cotorsion pairs $(\Cal P_1,\Cal D)$ and $(\Cal F_1,\Cal W\Cal I)$ coincide.
$(viii)$ Divisible right $R$-modules are weak-injective.
\medskip
Our results in this paper are organized in sections with an increasing number of hypotheses on the extension of rings $R\subseteq Q$. In Section~\ref{0}, we assume that the inclusion $R\to Q$ is an epimorphism in the category of rings and that $\operatorname{Tor}_1^R(Q,Q)=0$. In Section~\ref{1}, the assumption $\operatorname{Tor}_1^R(Q,Q)=0$ is replaced by the stronger assumption that ${}_RQ$ is a flat left $R$-module. In Section~\ref{2}, we consider the case where the Gabriel topology $\Cal F$ consisting of all right ideals $I$ of $R$ with $IQ=Q$ has a basis of principal right ideals. In Section~\ref{4}, we consider the case of rings $R$ for which the set $S$ of all their regular elements is both a right denominator set and a left denominator set (right and left Ore ring), and we assume that $Q=R[S^{-1}]=[S^{-1}]R$ (the classical right and left ring of quotients of $R$) and that $\Fdim(Q_Q)=0$. Under these hypotheses, we prove the equivalence of seven of the nine conditions considered by Fuchs and Salce in \cite{FS}.
\smallskip
As far as notation and terminology are concerned, for a ring $Q$, $\Fdim(Q_Q) = 0$ means that every right $Q$-module has projective dimension $0$ or $\infty$. For a ring $Q$, $\Fdim(Q_Q) = 0$ if and only if $Q$ is right perfect and every simple right $Q$-module is a homomorphic image
of an injective module \cite[Theorem~6.3]{Bass}. For a commutative ring $Q$, $\Fdim(Q) = 0$ if and only if $Q$ is perfect. A right and left Ore ring is a ring $R$ such that, for all elements $x,y\in R$ with $x$ regular, there exist elements $u,v,u',v'$ with $v$ and $v'$ regular, $ux=vy$ and $xu'=yv'$. If $S$ denotes the set of all regular elements of $R$, the condition ``$R$ is a right and left Ore ring'' is equivalent to the existence of both classical rings of quotients $R[S^{-1}]$ and $[S^{-1}]R$. In this case, they necessarily coincide.
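As a simple sanity check (our own illustration, not taken from the literature), every commutative ring satisfies both Ore conditions:

```latex
% In a commutative ring R, given x regular and y arbitrary, the choices
% u = y, v = x (with v regular) and u' = y, v' = x (with v' regular) give
ux = yx = xy = vy, \qquad xu' = xy = yx = yv'.
```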
\section{Bimorphisms $R\to Q$ and the condition $\operatorname{Tor}_1^R(Q,Q)=0$}\label{0}
{\em In this section, $R$ and $Q$ are rings, $\varphi \colon R\to Q$ is a bimorphism in the category of rings, that is, $\varphi$ is both a monomorphism and an epimorphism, and $\operatorname{Tor}_1^R(Q,Q)=0$.} Set $K:=Q/\varphi(R)$. Then the pair $(Q,\varphi)$ has the following properties:
\begin{itemize}
\item[(1)] The mapping $\varphi$ is injective, and is a ring morphism, so that $R$ can be viewed as a subring of $Q$ via $\varphi$. We will always identify via $\varphi$ the isomorphic rings $R$ and $\varphi(R)$, so that $\varphi$ will be always seen as an inclusion.
\item[(2)] $\varphi$ is an epimorphism in the category of associative rings, that is, for every pair of ring morphisms $\psi,\omega\colon Q\to Q'$, $\psi\varphi=\omega\varphi$ implies $\psi=\omega$.
\item[(3)] $\operatorname{Hom}(M_R,N_R)=\operatorname{Hom}(M_Q,N_Q)$ for every pair of right $Q$-modules $M_Q,N_Q$ \cite[Proposition~XI.1.2(d)]{Stenstrom}.
\item[(4)] $\operatorname{Tor}_1^R(M_R,{}_RN)\cong \operatorname{Tor}_1^Q(M_Q,{}_QN)$ for every right $Q$-module $M_Q$ and every left $Q$-module ${}_QN$ \cite[Theorem 4.8]{Schofield}.
\item[(5)]\label{(5)} $\operatorname{Ext}^1_R(M_R,N_R)\cong\operatorname{Ext}^1_Q(M_Q,N_Q)$ for every pair of right $Q$-modules $M_Q,N_Q$ and $\operatorname{Ext}^1_R(_RM,{}_RN)\cong\operatorname{Ext}^1_Q(_QM,{}_QN)$ for every pair of left $Q$-modules $_QM,{} _QN$ \cite[Theorem 4.8]{Schofield}.
\item[(6)] The class of all right $R$-modules $M_R$ with $M\otimes_RQ=0$ is closed under homomorphic images, direct sums and extensions, and therefore it is the torsion class of a torsion theory for $\operatorname{Mod-\!} R$. We will denote by $t(M_R)$ the torsion submodule of any right $R$-module $M_R$ in this torsion theory. Throughout the paper, whenever we say ``torsion'' or ``torsion-free'', we refer to this torsion theory.
\item[(7)]\label{(7)} A right $R$-module $M_R$ is a right $Q$-module $M_Q$ if and only if $$\operatorname{Ext}^1
_R(K_R,M_R) = 0\quad\mbox{\rm and}\quad \operatorname{Hom}(K_R,M_R) = 0$$ (\cite[Theorem 2.6]{AS} and \cite[Proposition 4.12]{18}).
As a consequence of this, if a right $R$-module $M_R$ is a right $Q$-module $M_Q$, then its unique right $Q$-module structure is given by the canonical isomorphism $\operatorname{Hom}(Q_R,M_R) \to M_R$ \cite[Remark~2.7]{AS}.
\item[(8)] The $R$-$R$-bimodule $Q\otimes_R Q$ is isomorphic to the $R$-$R$-bimodule $Q$ via the canonical isomorphism induced by the multiplication $\cdot\colon Q\times Q\to Q$ of the ring $Q$ \cite[Proposition~XI.1.2]{Stenstrom}.
\item[(9)] Every right $Q$-module is a torsion-free $R$-module.
[Proof of (9): Let $M_Q$ be a right $Q$-module. In order to prove that $M_R$ is torsion-free, we must prove that for every right $R$-module $T_R$, $T\otimes_RQ=0$ implies $\operatorname{Hom}(T_R, M_R)=0$. Since $M_R$ is a $Q$-module, $\operatorname{Hom}(Q_R,M_R)\cong M_R$, as we have remarked in (7). Thus $$\operatorname{Hom}(T_R, M_R)\cong \operatorname{Hom}(T_R, \operatorname{Hom}(Q_R,M_R))\cong \operatorname{Hom}(T\otimes_RQ,M_R)=0.$$]
\item[(10)] $K\otimes_RQ=0$ and $\operatorname{Tor}_1^R(K_R, {}_RQ)=0$.
[Proof of (10): Consider the short exact sequence of $R$-$R$-bimodules \begin{equation*} 0\rightarrow
R\rightarrow Q\rightarrow K\rightarrow 0\end{equation*} and tensor it with the left module $_RQ$, obtaining the exact sequence of abelian groups $0\rightarrow \operatorname{Tor}_1^R(K_R, {}_RQ)\rightarrow Q\rightarrow Q\otimes_R Q\rightarrow K\otimes_R Q\rightarrow 0$. The central mapping $Q_R\rightarrow Q\otimes_R Q$, $q\mapsto 1\otimes q$, is clearly a right inverse of the isomorphism $Q\otimes_R Q\to Q$ considered in (8). Since any bijection has a unique right inverse, which is also its left inverse and is a bijection, it follows that the mapping $Q_R\rightarrow Q\otimes_R Q$ is a bijection. Hence its kernel $\operatorname{Tor}_1^R(K_R, {}_RQ)$ and its cokernel $K\otimes_R Q$ are both zero.]
\end{itemize}
\begin{remark}\label{VHOU}{\rm (a) Notice that our assumptions in this section on the extension $R\subseteq Q$ are left/right symmetric. Therefore all the definitions we give and all the results we prove in this section about right modules also hold, mutatis mutandis, for left $R$-modules.
(b) The ring epimorphisms $\varphi \colon R\to Q$ such that $\operatorname{Tor}^R_n(Q,Q)=0$ for all $n\ge 1$ are sometimes called {\em
homological ring epimorphisms} (\cite[Definition~2.3]{AS}, \cite{18}).}\end{remark}
\begin{Lemma}\label{ppp} The following conditions are equivalent for a right $R$-module $N_R$:
{\rm (1)} Every homomorphism $R_R\to N_R$ extends to a right $R$-module morphism $Q\to N_R$.
{\rm (2)} $N_R$ is a homomorphic image of a right $Q$-module.
{\rm (3)} $N_R$ is a homomorphic image of a direct sum of copies of $Q$.\end{Lemma}
\begin{Proof} (1)${}\Rightarrow{}$(3) The module $N_R$ is a homomorphic image of a direct sum of copies of $R_R$, so that there is an epimorphism $\pi\colon{}R^{(X)}_R\to N_R$. By (1), each restriction $\pi_x\colon R_R\to N_R$ extends to a right $R$-module morphism $\tilde\pi_x\colon Q_R\to N_R$. The morphism $Q^{(X)}_R\to N_R$ that restricts to $\tilde\pi_x$ on the $x$-th direct summand is then an epimorphism, since it extends $\pi$.
(3)${}\Rightarrow{}$(2) is trivial.
(2)${}\Rightarrow{}$(1) If $N_R$ is a homomorphic image of a right $Q$-module, $N_R$ is a homomorphic image of a free right $Q$-module, $N_R\cong Q_R^{(X)}/S$ say. Every homomorphism $R_R\to Q_R^{(X)}/S$ is left multiplication by an element $(q_x)_{x\in X}+S$ of $Q_R^{(X)}/S$. Left multiplication by the element $(q_x)_{x\in X}$ is a right $Q$-module morphism $Q_Q\to Q^{(X)}_R$, which composed with the canonical projection $Q^{(X)}_R\to {}Q^{(X)}/S\cong N_R$ yields the suitable extension $Q\to N_R$.
\end{Proof}
We say that a right $R$-module is {\em $h$-divisible} if it satisfies the equivalent conditions of Lemma~\ref{ppp}. Clearly, any direct sum of $h$-divisible right $R$-modules is $h$-divisible, homomorphic images of injective modules are $h$-divisible, and any right $R$-module $B_R$ contains a unique largest $h$-divisible submodule $h(B_R)$ that contains every $h$-divisible submodule of $B_R$. We will say that $B_R$ is {\em $h$-reduced} if $h(B_R)=0$ (equivalently, if $B_R$ has no nonzero $h$-divisible submodule, equivalently if $\operatorname{Hom}(Q_R,B_R)=0$).
\begin{Lemma} {\rm \cite[Lemma 3.3]{AS}} For every right $R$-module $M_R$, the image of the canonical morphism $\operatorname{Hom}(Q_R,M_R)\to M$ is the unique largest $h$-divisible submodule $h(M_R)$ of $M_R$.\end{Lemma}
\begin{proposition} \label{factor} {\rm \cite[Lemma 3.4]{AS}}. The following conditions are equivalent:
{\rm (1)} $h(M_R/h(M_R))=0$ for every right $R$-module $M_R$.
{\rm (2)} For every short exact sequence $ 0 \to A_R \to B_R \to C_R \to 0$ of right
$R$-modules, if $A_R$ and $C_R$ are $h$-divisible, then $B_R$ is also $h$-divisible.\end{proposition}
Several of our results can be stated in the following terminology and notation, as in \cite[\S~8]{andersonfuller}. Let~$\mathcal{U}$ denote a class of right $R$-modules and $\operatorname{Gen}(\mathcal{U})$ the class of all the right modules $M$ {\em generated by} $\Cal U$, that is,
for which there exist an indexed set $ (U_\alpha )_ {\alpha \in A}$
in $\mathcal{U}$ and an epimorphism $ \oplus_{\alpha \in A} U_ {\alpha} \to M$.
For any right module $M$, set Tr$_M (\mathcal{U}) := \sum \{\,f(U) \mid f\colon U \to M$
is a homomorphism for some $ U \in \mathcal{U}\, \}$. Thus $M \in{} \operatorname{Gen}(\mathcal{U})$ if and only if Tr$_M (\mathcal{U}) = M$. If $\mathcal{U}$ consists of a unique module $U$, we will write $\operatorname{Gen}({U})$ and Tr$_M ({U})$.
Thus ``$h$-divisible'' means ``generated by $Q_R$'', and we have that Tr$_M (Q_R)=h(M)$ for every right $R$-module $M$.
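As a familiar orientation point (the classical Matlis setting, which is an instance of, but not required by, the present hypotheses): take $R=\mathbb Z$ and $Q=\mathbb Q$. Then ``$h$-divisible'' coincides with the classical notion of divisible abelian group, since every divisible group is a direct sum of copies of $\mathbb Q$ and of Pr\"ufer groups, each of which is an epimorphic image of $\mathbb Q$:

```latex
\mathbb{Z}(p^\infty)\;\cong\;\mathbb{Q}/\mathbb{Z}_{(p)},
\qquad
\mathbb{Q}/\mathbb{Z}\;\cong\;\bigoplus_p \mathbb{Z}(p^\infty).
```

In this case $h(B)=\operatorname{Tr}_B(\mathbb Q)$ is the largest divisible subgroup of the abelian group $B$.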
\begin{Lemma}\label{pjb1}
Every right $R$-module generated by $K_R$ is torsion and $h$-divisible.\end{Lemma}
\begin{Proof} The right $R$-module $K_R$ is clearly $h$-divisible, and is torsion by (10). Both the classes of $h$-divisible modules and torsion modules are closed under direct sums and homomorphic images. Thus every module generated by $K_R$ is torsion and $h$-divisible.\end{Proof}
\begin{proposition} The right $R$-module $\operatorname{Hom}(K_R,M_R)$ is torsion-free for every right
$R$-module $M_R$.\end{proposition}
\begin{Proof} Apply the functor $\operatorname{Hom}(-,M_R)$ to the short exact sequence \begin{equation*} 0\rightarrow
R\rightarrow Q\rightarrow K\rightarrow 0\end{equation*} of $R$-$R$-bimodules, getting an exact sequence $$0\to\operatorname{Hom}(K_R,M_R)\to\operatorname{Hom}(Q_R,M_R)\to\operatorname{Hom}(R_R,M_R)$$ of
right $R$-modules. Now $\operatorname{Hom}(Q_R,M_R)$ is a right $Q$-module, hence it is a torsion-free right $R$-module by (9). In any
torsion theory, submodules of torsion-free modules are torsion-free. Thus the submodule
$\operatorname{Hom}(K_R,M_R)$ of the torsion-free right $R$-module $\operatorname{Hom}(Q_R,M_R)$ is torsion-free.\end{Proof}
A right module $M_R$ is {\em Matlis-cotorsion} if $\operatorname{Ext}^1_R(Q_R,M_R)=0$.
By (5), all right $Q$-modules are
Matlis-cotorsion right $R$-modules. More generally, let $ \mathcal {A}$ be a class of right $R$-modules. Set $ ^ \bot \mathcal {A}:= \{ \,B\in \operatorname{Mod-\!} R \mid \operatorname{Ext} ^1 (B, A) = 0$ for every $A \in \mathcal {A}\,\}$.
Similarly, $\mathcal {A}^ \bot := \{ \,B\in \operatorname{Mod-\!} R \mid\operatorname{Ext} ^1 (A, B) = 0$ for every $A \in \mathcal {A}\,\}$. Note that
$\mathcal {A} \subseteq{} ^ \bot (\mathcal {A}^ \bot)$ and $\mathcal {A} \subseteq ( ^\bot \mathcal {A} ) ^ \bot$ always.
If the class $\mathcal {A}$
consists of a single module, $A$ say, we will simply write $^ \bot A$ and $A ^ \bot$. Thus the class of Matlis-cotorsion modules is the class $Q^ \bot$.
\begin{theorem} \label{1.5} Let $ 0 \to A_R \to B_R \to C_R \to 0$ be a short exact sequence of right
$R$-modules. Then:
{\rm (1)} If $A_R$ and $C_R$ are $h$-reduced, then $B_R$ is also $h$-reduced.
{\rm (2)} If $A_R$ and $C_R$ are Matlis-cotorsion, then $B_R$ is also Matlis-cotorsion.
{\rm (3)} If $B_R$ is Matlis-cotorsion $h$-reduced, then $A_R$ is Matlis-cotorsion if and only if $C_R$ is $h$-reduced.\end{theorem}
\begin{Proof} Apply the functor $\operatorname{Hom}(Q_R,-)\colon\operatorname{Mod-\!} R\to{}$Ab to the given short exact sequence, getting the corresponding long exact sequence.\end{Proof}
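For the reader's convenience, the relevant portion of that long exact sequence is the following; statements (1)--(3) can be read off from it directly.

```latex
0 \to \operatorname{Hom}(Q_R,A_R) \to \operatorname{Hom}(Q_R,B_R)
  \to \operatorname{Hom}(Q_R,C_R) \to \operatorname{Ext}^1_R(Q_R,A_R)
  \to \operatorname{Ext}^1_R(Q_R,B_R) \to \operatorname{Ext}^1_R(Q_R,C_R)
```

For instance, for (3): if $\operatorname{Hom}(Q_R,B_R)=\operatorname{Ext}^1_R(Q_R,B_R)=0$, the sequence yields an isomorphism $\operatorname{Hom}(Q_R,C_R)\cong\operatorname{Ext}^1_R(Q_R,A_R)$, so that one of these two groups vanishes if and only if the other does.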
\begin{theorem} \label{1.6} The right $R$-module $\operatorname{Hom}({}_RK_R,M_R)$ is Matlis-cotorsion $h$-reduced for every right $R$-module $M_R$.\end{theorem}
\begin{Proof} We know that there is a canonical isomorphism $$\operatorname{Hom}(Q_R,\operatorname{Hom}({}_RK_R,M_R))\cong\operatorname{Hom}(Q\otimes_RK,M_R).$$ Since the hypothesis $\operatorname{Tor}_1^R(Q,Q)=0$ of this section is left/right symmetric, from (10) we know that $Q\otimes_RK=0$, so that $\operatorname{Hom}({}_RK_R,M_R)$ is $h$-reduced for every right $R$-module $M_R$. Now let $E_R$ be an injective right $R$-module containing $M_R$ and consider the exact sequence \begin{equation}0\to \operatorname{Hom}({}_RK_R,M_R)\to \operatorname{Hom}({}_RK_R,E_R)\to \operatorname{Hom}({}_RK_R,E/M).\label{VH}\end{equation} All the three right $R$-modules in this exact sequence are $h$-reduced by the first part of this proof.
Now $\operatorname{Tor}^R_1(Q_R,{}_RK)=0$ by (10).
For the module $\operatorname{Hom}({}_RK_R,E_R)$, we have that $\operatorname{Ext}^1_R(Q_R,\operatorname{Hom}({}_RK_R,E_R))\cong \operatorname{Hom}_R(\operatorname{Tor}_1^R(Q_R,{}_RK),E_R)$ \cite[Proposition~VI.5.1]{CartanEilenberg}. It follows that $\operatorname{Hom}({}_RK_R,E_R)$ is Matlis-cotorsion. Now apply Theorem~\ref{1.5}(3) to the exact sequence (\ref{VH}), getting that $\operatorname{Hom}({}_RK_R,M_R)$ is Matlis-cotorsion as well.
\end{Proof}
A {\em cotorsion pair} is a pair $\mathfrak C=(\Cal A, \Cal B)$ of classes of right modules over
the ring $R$ such that $\Cal A ={}^ \bot \Cal B$ and $\Cal B = \Cal A^ \bot$.
The class $\Cal A$ is always closed under arbitrary direct sums and contains
all projective right $R$-modules. Dually, the class $\Cal B$ is closed under direct products and contains all injective right $R$-modules.
\section{Left flat bimorphisms}\label{1}
{\em In this section, $R$ and $Q$ are rings, $\varphi \colon R\to Q$ is a bimorphism in the category of rings, that is, $\varphi$ is both a monomorphism and an epimorphism, and ${}_RQ$ is a flat left $R$-module.}
For examples of bimorphisms $R\subseteq Q$ with $\operatorname{Tor}^R_1(Q,Q)=0$ but $Q_R$ not flat, that is, bimorphisms satisfying the hypotheses of Section~\ref{0} but not those of Section~\ref{1}, see
\cite[Examples 3.11(1) and 4.17]{AS} and Example~\ref{right, but not left}. Note that now, in this section, the conditions on the extension $R\subseteq Q$ are not left/right symmetric anymore, so that we must now distinguish between the behaviour of right modules and that of left modules (cf.~Remark~\ref{VHOU}).
The pair $(Q,\varphi)$ has the following properties:
\begin{itemize}
\item[(11)] The inclusion of $R$ into its maximal right ring of quotients $Q_{\max}(R)$ factors through the mapping $\varphi$ \cite[Theorem~XI.4.1]{Stenstrom}.
\item[(12)] $\varphi\colon R\to Q$ is the canonical homomorphism of $R$ into its right localization $R_{\Cal F}$, where $\Cal F=\{\,I\mid I$ is a right ideal of $R$ and $\varphi(I)Q=Q\,\}$ is a Gabriel topology consisting of dense right ideals. Moreover, $\Cal F$ has a basis consisting of finitely generated right ideals \cite[Theorem~XI.2.1 and Proposition~XI.3.4]{Stenstrom}.
\item[(13)] Every right $Q$-module is isomorphic to $M_{\Cal F}\cong M\otimes_R Q$ for some right $R$-module $M_R$ \cite[Proposition~XI.3.4]{Stenstrom}.
\item[(14)] The full subcategory of $\operatorname{Mod-\!} R$ whose objects are all $\Cal F$-closed $R$-modules (that is, the modules $M_R$ such that, for every $I\in\Cal F$, every right $R$-module morphism $I\to M_R$ extends to a morphism $R_R\to M_R$ in a unique way) is equivalent to the category $\operatorname{Mod-\!} Q$ \cite[Proposition~XI.3.4(a)]{Stenstrom}.
\item[(15)]\label{(15)} For every right $R$-module $M_R$, the kernel of the canonical right $R$-module morphism $M_R\to M\otimes_R Q$ is the torsion submodule $t(M_R)$ of $M_R$ \cite[Proposition~XI.3.4(f)]{Stenstrom}.
\item[(16)]\label{(16)}
The torsion submodule $t(M_R)$ of any right $R$-module $M_R$ is isomorphic to $\operatorname{Tor}_1^R(M_R, {}_RK)$ \cite[Proposition~XI.1.2(e)]{Stenstrom}.
[Proof of (16): Consider the short exact sequence of $R$-$R$-bimodules $0\rightarrow
R\rightarrow Q\rightarrow K\rightarrow 0$ and tensor it with the right module $M_R$, getting the short exact sequence of right $R$-modules $0\rightarrow \operatorname{Tor}_1^R(M_R, {}_RK)\rightarrow M_R\rightarrow M_R\otimes_R Q\rightarrow M_R\otimes_R K\rightarrow 0$. Now (16) follows from (15).]
\item[(17)] Every right ideal of $Q$ is extended from a right ideal of $R$, that is, $I=\varphi^{-1}(I)Q$ for every right ideal $I$ of $Q$ \cite[Proposition~4(ii)]{Knight}.
\item[(18)] $\operatorname{Ext}^n_R(M_R,N_R)\cong\operatorname{Ext}^n_Q(M\otimes_R{}Q,N)$ for every $n\ge 0$, every right $R$-module $M_R$ and every right $Q$-module $N_Q$ (\cite[Page 232]{Stenstrom} or \cite[Page 118]{CartanEilenberg}).
\end{itemize}
It is possible to prove that, for any ring $R$, there exists a {\em maximal} left flat bimorphism $\overline{\varphi} \colon R\to \overline{Q}$, satisfying the following universal property: for any left flat bimorphism $\varphi \colon R\to Q$, there exists a unique ring morphism $\beta\colon Q\to\overline{Q}$ such that $\beta\varphi=\overline{\varphi}$. Since the maximal left flat bimorphism is the solution of a universal property, $\overline{Q}$ is unique up to isomorphism, in the following sense: if $\overline{\varphi} \colon R\to \overline{Q}$ and $\overline{\varphi_0} \colon R\to \overline{Q_0}$ are any two maximal left flat bimorphisms, there exists a unique ring isomorphism $\beta\colon \overline{Q}\to\overline{Q_0}$ such that $\beta\overline{\varphi}=\overline{\varphi_0}$.
\bigskip
For any ring $R$, we can consider the set $\Cal L_R$ of all subrings $Q$ of the maximal ring of quotients $Q_{\max}(R)$ such that the inclusion $R\to Q$ is a bimorphism and $_RQ$ is flat, and we can partially order $\Cal L_R$ by set inclusion. Then $\Cal L_R$ is a bounded complete lattice, where (1) the least element of $\Cal L_R$ is $R$, (2) the least upper bound of two elements $Q,Q'\in \Cal L_R$ is the ring generated by $Q$ and $Q'$, that is, the set of all finite sums of products of the form $q_1q'_1\dots q_nq'_n$, with $q_1,\dots,q_n\in Q$, $q'_1,\dots,q'_n\in Q'$, (3) the greatest lower bound of two elements $Q,Q'\in \Cal L_R$ is the union of all the subrings in $\Cal L_R$ contained in $Q\cap Q'$, and (4) the greatest element of $\Cal L_R$ is the ring $\overline{Q}$, where $\overline{Q}$ is the subring of $Q_{\max}(R)$ corresponding to the maximal left flat bimorphism $\overline{\varphi} \colon R\to \overline{Q}$ considered in the previous paragraph \cite[proof of Theorem~XI.4.1]{Stenstrom}.
\bigskip
Recall that a left $R$-module $_RD$ is {\em divisible} if $D=ID$ for every $I\in\Cal F$ (equivalently, if $M\otimes_RD=0$ for every torsion right $R$-module $M_R$ \cite[Proposition~VI.9.1]{Stenstrom}). Every $h$-divisible left $R$-module is divisible.
\begin{theorem}\label{2.2} {\rm (1)} For every right $R$-module $M_R$, there is a short exact sequence of right $R$-modules \begin{equation}
\xymatrix{
0 \ar[r] & M_R/t(M_R) \ar[r] & M\otimes_R Q \ar[r] & M\otimes_R K \ar[r] & 0.
}\label{a}\tag{a}\end{equation}
{\rm (2)} For every left $R$-module $_RB$, there are two short exact sequences of left $R$-modules \begin{equation}
\xymatrix{
0 \ar[r] & \operatorname{Hom}({}_RK_R, {}_RB) \ar[r] & \operatorname{Hom}({}_RQ_R, {}_RB) \ar[r] & h(_RB) \ar[r] & 0
}\label{b}\tag{b}\end{equation}
and \begin{equation}
\xymatrix{
0 \ar[r] & {}_RB/h(_RB) \ar[r] & \operatorname{Ext}^1_R({}_RK_R,{}_RB) \ar[r] & \operatorname{Ext}^1_R({}_RQ_R,{}_RB) \ar[r] & 0.
}\label{c}\tag{c}\end{equation}\end{theorem}
\begin{Proof} Consider the short exact sequence of $R$-$R$-bimodules \begin{equation} 0\rightarrow
R\rightarrow Q\rightarrow K\rightarrow 0\label{d}\tag{d}\end{equation} and tensor it with the right module $M_R$, getting the short exact sequence of right $R$-modules $0\rightarrow \operatorname{Tor}_1^R(M_R, {}_RK)\rightarrow M_R\rightarrow M_R\otimes_R Q\rightarrow M_R\otimes_R K\rightarrow 0$. This and properties (15) and (16) give short exact sequence (\ref{a}).
If we apply the contravariant functor $\operatorname{Hom}(-,{}_RB)\colon R\mbox{\rm -Mod}\to R\mbox{\rm -Mod}$ to exact sequence (\ref{d}), we obtain the exact sequence of left $R$-modules $$\begin{array}{l}0 \rightarrow \operatorname{Hom}({}_RK_R, {}_RB) \rightarrow \operatorname{Hom}({}_RQ_R, {}_RB) \stackrel{\beta}{\longrightarrow} {}_RB \rightarrow \\ \qquad\qquad\qquad\rightarrow\operatorname{Ext}^1_R({}_RK_R,{}_RB) \rightarrow \operatorname{Ext}^1_R({}_RQ_R,{}_RB) \rightarrow 0,\end{array}
$$ where $\beta$ is defined by $\beta(f)=f(1)$ for every $f\in \operatorname{Hom}({}_RQ, {}_RB)$. Now the image of $\beta$ is clearly $h({}_RB)$, and from this we get the two short exact sequences of left $R$-modules in (2). \end{Proof}
The next corollary shows that the class of torsion $h$-divisible right $R$-modules coincides with the class of right $R$-modules generated by $K_R$. More generally, Tr$_M ({K})=h(t(M_R))$ for any right $R$-module $M_R$.
\begin{corollary}\label{pjb}
A right $R$-module is torsion $h$-divisible if and only if it is generated by $K_R$. In particular, the right $R$-module $M\otimes_R{}K_R$ is a torsion $h$-divisible module for every right $R$-module $M_R$.\end{corollary}
\begin{Proof}
Right $R$-modules generated by $K$ are torsion $h$-divisible by Lemma~\ref{pjb1}.
Conversely, assume that $M$ is a torsion $h$-divisible module. To see that $M$ is generated by $K$, we must show that,
for every $h\in M$, there exist $n \geq 1$ and $f \colon K^n \to M$ with $h \in f(K^n)$. Fix an element $h\in M$.
By Lemma~\ref{ppp}(1), there exists $g \colon Q \to M$ such that $g (1) = h$.
Set $S := \ker(g)$. Since $M$ is torsion, its submodule $g(Q)\cong Q/S$ is torsion, so that $(Q/S)\otimes_RQ=0$. Tensoring the exact sequence $0\to S\to Q\to Q/S\to 0$ with ${}_RQ$ and using right exactness of the tensor product, we see that the canonical map $S\otimes_RQ\to Q\otimes_RQ\cong Q$ is surjective. Hence there exists $n \geq 1$ such that
$1 = \sum_{i= 1} ^n s_i q_i $, where $s_i \in S$ and $q_i \in Q$.
Define a map $\varphi \colon K ^{n} \to Q/S$ by setting, for all $t_1, \dots, t_n \in Q$,
$\varphi(t_1+R, \dots, t_n+R) = \sum _{i= 1}^n s_i t_i+S$.
If all the elements $t_i$ belong to $R$, then $\sum _{i = 1}^n s_it_i \in S$, so this map $\varphi$ is well defined and is an $R$-module homomorphism. The composite mapping of $\varphi \colon K ^{n} \to Q/S$ and the monomorphism $Q/S\to M$ induced by $g$ is the required mapping $f\colon K^n\to M$ whose image contains~$h$.
For the last part of the statement, apply the functor $-\otimes_R{}K_R\colon\operatorname{Mod-\!} R\to\operatorname{Mod-\!} R$ to an
epimorphism $R_R^{(X)}\to M_R$, where $R_R^{(X)}$ is a suitable free right $R$-module.
\end{Proof}
As a consequence of Corollary \ref{pjb}, we have the following.
\begin{corollary}\label{Zahra}
For every torsion right $R$-module $M_R$, the canonical mapping $$\pi\colon \operatorname{Hom}({}_RK_R, M_R)\otimes_RK \to h(M_R)$$ defined by $\pi(f\otimes x)=f(x)$ for every $f\in \operatorname{Hom}(K_R,M_R)$ and $x\in K$ is a right $R$-module epimorphism.\end{corollary}
\begin{Proof} The right $R$-module $\operatorname{Hom}({}_RK_R, M_R)\otimes_RK$ is a homomorphic image of the right $R$-module $\operatorname{Hom}({}_RK_R, M_R)\otimes_RQ$, which is a right $Q$-module. Thus the image of the canonical right $R$-module morphism $\operatorname{Hom}({}_RK_R, M_R)\otimes_RK \to M_R$ is contained in $h(M_R)$.
Conversely, note that $h(M) $ is torsion and $h$-divisible, so that $h(M) $ is generated by $K_R$ by Corollary~\ref{pjb}. Thus, if $x\in h(M)$, then there exists $n \geq 1$ such that
$x$ belongs to the image of a morphism $f\colon K_R^n\to M_R $. Let $y = (k_1, \cdots, k_n)$ be such that $x = f(y)$.
For each $i=1,\dots,n$, let $\iota _i \colon K \to K^n$ be the canonical map.
Then $\pi \left(\sum_{i=1}^n f \iota _i \otimes k_i \right) = \sum_{i=1}^n f(\iota_i(k_i)) = f(y) = x$. Thus $\pi$ is an epimorphism.
\end{Proof}
{\em Until the end of this section, we will consider {\em left} $R$-modules.}
Define the class of {\em Matlis-cotorsion} left $R$-modules by $_R\mathcal {MC}:={} _RQ ^ \bot$ and
the class of {\em strongly flat} left $R$-modules by $_R\mathcal {SF}:= {}^\bot(_R\mathcal {MC})$.
A left module $_RM$ will be said to be {\em Enochs-cotorsion} if
$\operatorname{Ext}_R^1 (_RF, {}_RM) = 0$ for all flat left $R$-modules $_RF$. Their class will be denoted
by $_R\mathcal{EC}$.
If $_R\mathcal {F}$ is the class of flat left $R$-modules, then
$({}_R\mathcal {F}, {}_R\mathcal{EC})$ is a cotorsion pair \cite[Lemma~7.1.4]{Jenda}.
Since $_RQ$ is flat, $Q^ \bot \supseteq {}_R\mathcal {F} ^\bot = {} _R\mathcal{EC}$, and therefore $_R\mathcal{SF} ={} ^\bot(Q^\bot) \subseteq{} ^\bot (_R\mathcal{EC}) = {}_R\mathcal{F}$.
That is, strongly flat modules are flat.
Notice that the concept of Enochs-cotorsion left $R$-module is an ``absolute concept'', in the sense that it depends only on the ring $R$, while the concept of Matlis-cotorsion left $R$-module is a ``relative concept'', in the sense that it also depends on the choice of the overring $Q$ of $R$ with $_RQ$ flat.
A class $\Cal C$ of left $R$-modules is {\em precovering} if, for each left module $_RM$,
there exists a morphism $f\in \operatorname{Hom}_R(_RC,\, {}_RM)$ with $C\in\Cal C$ such that each morphism
$f_0\in\operatorname{Hom}(_RC_0,{}_RM)$ with $C_0\in\Cal C$ factors through $f$. Such an $f$ is called a {\em $\Cal C$-precover} of
$_RM$.
A precovering class $\Cal C$ of modules is called {\em special precovering} if every left $R$-module $_RM$ has a $\Cal C$-precover $f\colon C\to M$ which is an epimorphism and with
$\ker(f)\in\Cal C^\bot$. Moreover, $\Cal C$ is called a {\em covering class} if every left $R$-module $M$ has a $\Cal C$-precover $f \colon C \to M$ with the property that for every endomorphism $g$ of $C$ with $fg = f$, the endomorphism $g$ is necessarily an automorphism
of $C$. Such a $\Cal C$-precover $f$ is then called a
$\Cal C${\em -cover} of $_RM$. Dually, we define {\em preenveloping, special preenveloping,} and
{\em enveloping} classes of modules.
A cotorsion pair $\mathfrak C=(\Cal A, \Cal B)$ is {\em complete} if $\Cal A$ is a special precovering class (equivalently, if $\Cal B$ is a special preenveloping class \cite{Sa}). For instance, every
cotorsion pair generated by a set of modules is complete.
Note that, by
\cite[Theorem~6.11] {approx}, $(_R\mathcal{SF} ,{}_R\mathcal{MC})$ is a
complete cotorsion pair. Thus every left $R$-module has a special
${}_R\mathcal{MC}$-preenvelope and every left $R$-module has a special ${}_R\mathcal{SF}$-precover.
Now recall that a left $R$-module $_RG$ is said to be {\em $\{Q\}$-filtered} if there exist an ordinal $\rho$ and
a well-ordered ascending chain $\{\,G_\sigma \mid \sigma \le \rho\,\}$
of submodules with $G_0=0$, $G_\rho={}_RG$, $G_{\sigma+1}/G_\sigma\cong Q$
for every ordinal $\sigma<\rho$, and $G_\sigma=\bigcup_{\gamma<\sigma}G_\gamma$ for every limit ordinal $\sigma\le\rho$.
By \cite[Corollary 6.13]{approx}, the class $_R\mathcal{SF} $ consists of
all summands of
modules $_RN$ such that $_RN$ fits into an exact sequence of the form
$$ 0 \to {}_RF \to {}_RN \to {}_RG \to 0$$
where ${}_RF$ is free and ${}_RG$ is $\{Q\}$-filtered.
From (5), we get that:
\begin{lemma}\label{ExSten}
If $M$ and $N$ are projective left $Q$-modules, then $\operatorname{Ext}^1_R(M, N) = 0$.
\end{lemma}
Lemma \ref{ExSten} implies that a $\{Q\}$-filtered left $R$-module is a free $Q$-module (see, for example,
\cite [paragraph before the statement of Lemma~6.15] {approx}).
Thus the module $_RG$ above is a free $Q$-module.
Therefore the class $_R\mathcal{SF}$ consists of
all summands of
modules $_RN$ such that $_RN$ fits into an exact sequence of the form
$$ 0 \to {}_RF \to {}_RN \to {}_RG \to 0$$
where ${}_RF$ is a free left $R$-module and ${}_RG$ is a free left $Q$-module.
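In the classical Matlis setting ($R$ a commutative integral domain, $Q$ its field of fractions), this recovers the well-known description of strongly flat modules: they are the direct summands of modules $N$ fitting into an exact sequence

```latex
0 \to R^{(X)} \to N \to Q^{(Y)} \to 0
```

for suitable sets $X$ and $Y$, since over such a ring the free $Q$-modules are exactly the $Q$-vector spaces $Q^{(Y)}$.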
\bigskip
It is well known that every left module has an Enochs-cotorsion envelope.
\begin{theorem}
If $Q$ is a left perfect ring, then every left $R$-module has an
$\mathcal{MC}$-envelope.
\end{theorem}
\begin{proof}
Let $\mathcal{P}$ be the class of all projective left $Q$-modules, and consider $\mathcal{P}$ as a class of left $R$-modules.
By Lemma~\ref{ExSten}, this class of left $R$-modules is closed under extensions.
Clearly $\mathcal{P} ^\bot = {Q} ^\bot $, because if, for some left $R$-module $M$, $\operatorname{Ext}^1_R(Q,M)=0$, then $\operatorname{Ext}^1_R(\bigoplus Q,M)\cong\prod \operatorname{Ext}^1_R(Q,M)=0$, and every projective left $Q$-module is a direct summand of a direct sum of copies of $Q$. On the other
hand, $Q$ is left perfect, and so, by Bass' theorem, every direct limit of
projective left
$Q$-modules is projective. Thus the class $\mathcal{P} $ is closed under
direct limits.
Now assume that $M$ is a left $R$-module. By \cite [Theorem 6.11]{approx},
there exists a short exact sequence $0 \to M \to P \to N \to 0$ where $ M \to P$ is a special $\mathcal{MC}$-preenvelope and
$P$ is the union of a continuous chain of submodules $\left\{\,P_ \alpha \mid \alpha < \lambda \,\right\}$
such that $P_0 = M$ and $P_{\alpha +1 } / P_\alpha$ is isomorphic to a direct sum of copies of $Q$
for each $\alpha < \lambda$. Since $N \cong P/M$, it follows that
$N$ is $\{Q\}$-filtered. Thus
$N$ is a free $Q$-module, as observed after Lemma~\ref{ExSten}, and so $N$ is an element of
$\mathcal{P}$. Therefore $M$ has an $\mathcal{MC}$-envelope \cite [Theorem 5.27]{approx}.
\end{proof}
A left $R$-module $_RM$ is called {\em weak-injective} if $\operatorname{Ext}^1_R(I, M) = 0$ for all modules $I$ of weak dimension $\leq 1$.
\begin{Lemma}\label{3.6} Weak-injective left $R$-modules are $h$-divisible and Matlis-cotorsion.\end{Lemma}
\begin{Proof} Let $_RM$ be a weak-injective module. Since $_RQ$ is flat, we have that $_RK$ has weak dimension $\leq 1$.
So $\operatorname{Ext}^1_R(K, M) = 0$. From Theorem~\ref{2.2}(c), we get that $M/h(M)=0$ and $\operatorname{Ext}^1(Q,M)=0$, that is, $M$ is $h$-divisible and Matlis-cotorsion.\end{Proof}
Therefore the class $\mathcal{WI}$ of weak-injective modules is a subclass of the class $_R\mathcal{HD}$ of $h$-divisible left $R$-modules. We denote by $_R {\mathcal{P}_1 },{}_R{\mathcal{F}_1}$ and $ {}_R{\mathcal{ D}}$ the classes of all left $R$-modules of projective dimension $\le 1$, of weak dimension $\le 1$, and of divisible left $R$-modules, respectively.
\begin{proposition}\label{4.4} Under the hypotheses of this section, the following conditions hold:
{\rm (i)} $^\bot(_R \mathcal{HD}) \subseteq{}_R {\mathcal{P}_1 } $.
{\rm (ii)} If $_R{\mathcal{F}_1} ={} ^\bot (_R{\mathcal{ D}})$, then $\Fdim(_QQ)= 0$ and so $Q$ is left perfect.
\end{proposition}
\begin{proof}
(i) Assume that $M \in{} ^\bot (_R \mathcal{HD})$.
Let $_RN$ be a left $R$-module and $E$ be its injective hull.
Consider the exact sequence $0 \to N \to E \to E/N \to 0$. Since $_RE$ is injective, $\operatorname{Ext}^1_R(M,E/N)\cong \operatorname{Ext}^2_R(M,N)$.
Now $E/N$ is $h$-divisible and $M \in{} ^\bot \mathcal{HD}$, so that $\operatorname{Ext}^1_R(M,E/N)=0$. Hence $\operatorname{Ext}^2 ({}_RM, {}_RN) = 0$.
(ii) It is enough to show that the $Q$-modules of projective dimension $\le 1$ are
projective. Let $_QM$ be a $Q$-module of p.d.~$\le 1$. Then there exists an exact sequence $0 \to P_1 \to P_0 \to M \to 0$
with $P_1$ and $P_0$ projective $Q$-modules. Thus $_QP_i$ is a direct summand of $_QQ^{(X)}$, hence $_RP_i$ is a direct summand of the flat $R$-module $_RQ^{(X)}$.
So $P_1$ and $P_0$ are flat $R$-modules, and hence $_RM$ is of weak dimension $\le1$. Since $_RM\in\Cal F_1={}^\bot(_R\Cal D)$ and $P_1\in{}_R\Cal H\Cal D\subseteq{}_R\Cal D$,
the short exact sequence $0 \to P_1 \to P_0 \to M \to 0$ splits in $R\mbox{\rm -Mod}$, and so in $Q\mbox{\rm -Mod}$. Therefore $_QM$ is projective.
\end{proof}
\section{$\Cal F$ is a 1-topology}\label{2}
As we have already said in (12), the Gabriel topology $\Cal F$ always has a basis consisting of finitely generated right ideals. Now we will suppose that the Gabriel topology $\Cal F$ is a 1-{\em topology}, that is, that $\Cal F$ has a basis consisting of principal right ideals \cite[Proposition~XI.6.1]{Stenstrom}. Thus $\Cal F$ is completely determined by the set $S:=\{\, s\in R\mid sR\in\Cal F\,\}$, which is a multiplicatively closed subset of $R$ satisfying: (1) If $a,b\in R$ and $ab\in S$, then $a\in S$. (2) If $s\in S$ and $a\in R$, then there are $t\in S$ and $b\in R$ such that $sb=at$ \cite[Proposition~VI.6.1]{Stenstrom}. Moreover, the elements of $S$ are not right zero-divisors in $R$, because $sR$ is a dense right ideal of $R$ for every $s\in S$, so $sR=(sR:1)$ has zero left annihilator \cite[Proposition~VI.6.4]{Stenstrom}, and $s$ is not a right zero-divisor.
\medskip
For instance, consider the following trivial example. Suppose $Q=R$. There is no doubt that the identity $R\to Q$ is a bimorphism and that ${}_RR$ is a flat left $R$-module, so that $R$ is the least element in $\Cal L_R$. The corresponding multiplicatively closed subset $S$ is then the set of all right invertible elements of $R$. The only torsion right $R$-module is the zero module. All right $R$-modules are torsion-free.
\medskip
Thus, in the rest of this section, {\em we will suppose that $R$ is a ring and $S$ is a multiplicatively closed subset of $R$ satisfying:} (1) {\em If $a,b\in R$ and $ab\in S$, then $a\in S$.} (2) {\em If $s\in S$ and $a\in R$, then there are $t\in S$ and $b\in R$ such that $sb=at$.} (3) {\em The elements of $S$ are not right zero-divisors.}
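For orientation, here is a simple commutative illustration of conditions (1)--(3) (an illustration of ours, not taken from the text):

```latex
R = \mathbb{Z}, \qquad S = \{\, \pm 2^{n} \mid n \ge 0 \,\}.
```

Condition (1) holds because a divisor of a power of $2$ is again of the form $\pm 2^{k}\in S$; condition (2) holds with $t=s$ and $b=a$, by commutativity; condition (3) holds because $S$ consists of nonzero elements of the domain $\mathbb Z$. The corresponding localization is $R_{\Cal F}=\mathbb Z[1/2]$, a subring of $Q_{\max}(\mathbb Z)=\mathbb Q$.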
\begin{Lemma} Let $\Cal F$ be the Gabriel topology consisting of all right ideals $I$ of $R$ such that $I\cap S\ne 0$, let $R_{\Cal F}$ be the localization and $\varphi \colon R\to R_{\Cal F}$ be the canonical mapping. Then $\Cal F$ consists of dense right ideals, $\varphi$ is a bimorphism and ${}_RR_{\Cal F}$ is a flat left $R$-module.\end{Lemma}
\begin{Proof} In order to show that the right ideal $sR$ is dense for every $s\in S$, we must prove that, for every $s\in S$ and $a\in R$, the right ideal $(sR:a)$ has zero left annihilator \cite[Proposition~VI.6.4]{Stenstrom}. Now if $s\in S$ and $a\in R$, then $(sR:a)\in\Cal F$ by \cite[Property~T3 on Page 144]{Stenstrom}, so that $(sR:a)$ contains an element $t\in S$. Hence $(sR:a)$ has zero left annihilator, because the element $t\in S$ is not a right zero-divisor, and therefore the right ideal $sR$ is dense.
\end{Proof}
It follows that the torsion submodule of a right $R$-module $M_R$ is the set of all elements $x\in M_R$ for which there exists an element $s\in S$ with $xs=0$. In particular, a right $R$-module $M_R$ is torsion-free if and only if right multiplication $\rho_s\colon M_R\to M_R$ by $s$ is an abelian group monomorphism for every $s\in S$. Dually, we will say that a right $R$-module $M_R$ is {\em divisible} if right multiplication $\rho_s\colon M_R\to M_R$ by $s$ is an abelian group epimorphism for every $s\in S$, that is, if $Ms=M$ for every $s\in S$. Every homomorphic image of a divisible right $R$-module is divisible. If $A$ is a submodule of a right $R$-module $B_R$ and if $A_R$ and $B/A$ are divisible, then $B$ is divisible. Any sum of divisible submodules is a divisible submodule, so that every right $R$-module $M_R$ contains a greatest divisible submodule, denoted by $d(M_R)$. A right $R$-module $M_R$ is {\em reduced} if $d(M_R)=0$. For every module $M_R$, $M_R/d(M_R)$ is reduced.
\begin{remark}{\em (a) It is very important to stress that all the concepts we have defined until now in Sections~\ref{1} and~\ref{2}, like divisible right $R$-module, reduced right $R$-module, $h$-divisible right or left $R$-module, and Matlis-cotorsion $R$-module are relative, in the sense that they depend on the fixed multiplicatively closed set $S$ (in Section~\ref{2}) or on the overring $Q$ of $R$ (in Section~\ref{1}). We have decided not to use a terminology like
$S$-divisible right $R$-module, $S$-reduced right $R$-module, $Q$-$h$-divisible right or left $R$-module, $Q$-Matlis-cotorsion $R$-module in order not to make the terminology itself too heavy.
(b) The localization $R_{\Cal F}$ is not the right ring of quotients $R[S^{-1}]$ of $R$ with respect to $S$ in general, as the following example shows.}\end{remark}
\begin{example} {\em Let $k$ be a division ring, $V_k$ an infinite dimensional right vector space over $k$ and
$R:= \operatorname{End}(V_k)$. Since $R$ is von Neumann regular, the only bimorphisms $R\to Q$ with $_RQ$ flat are isomorphisms \cite[Proposition~XI.1.4]{Stenstrom}. Thus, in this case, we have $\Cal {L}_R=\{R\}$, so that without loss of generality we can assume $Q=R$, $\Cal F=\{R\}$, $R_{\Cal F}=R$ and $S$ the set of all right invertible elements of $R$. The right invertible elements of $R$ are exactly the epimorphisms $V_k\to V_k$. Let us show that the right ring of quotients $R[S^{-1}]$ of $R$ with respect to $S$ does not exist. Suppose the contrary, and let $\varphi\colon R\to R[S^{-1}]$ denote the canonical morphism. Fix a direct-sum decomposition $V_k=U\oplus W$ with $U\cong W\cong V_k$. Then it is easy to construct epimorphisms $f,g\colon V_k\to V_k$ and monomorphisms $f',g'\colon V_k\to V_k$ with $ff'=1_V$, $gg'=1_V$, $fg'=0$, $gf'=0$ and $f'f+g'g=1_V$. As $f,g\in S$, it follows that $\varphi(f),\varphi(g)$ are invertible in $R[S^{-1}]$, with inverses $\varphi(f'),\varphi(g')$ respectively. Now $e:=f'f$ is an idempotent in $R$, with $1-e=g'g$. Thus $\varphi(f'f)=1$ in $R[S^{-1}]$, and similarly $\varphi(g'g)=1$ in $R[S^{-1}]$. It follows that $1=0$ in $R[S^{-1}]$, so that $R[S^{-1}]$ is the zero ring. Thus $\varphi(s)=0$ for every $s\in S$. Therefore $R$ is the zero ring as well, a contradiction. This proves that the localization $R_{\Cal F}=R$ is not the right ring of quotients $R[S^{-1}]$ of $R$ with respect to $S$.}\end{example}
\begin{proposition}\label{xyk} Suppose that the ring $Q$ is directly finite. Then:
(1) The elements of $S$ are regular elements of $R$, invertible in $Q$.
(2) The set $S$ is a right denominator set in $R$, and $Q$ is the right ring of quotients $R[S^{-1}]$ of $R$ with respect to $S$.
\end{proposition}
\begin{Proof} (1) If $s\in S$, then $sR\in\Cal F$, so $sQ=Q$. Thus $s$ is right invertible in~$Q$. But $Q$ is directly finite, so right invertible elements of $Q$ are invertible in $Q$. In particular, $s$ is regular in $R$.
(2) follows immediately from (1).
\end{Proof}
Proposition~\ref{xyk} will later be applied, in particular, to the case in which $Q$ is right (or left) perfect, hence semilocal, hence directly finite.
\begin{corollary}\label{3.3} Suppose that the ring $Q$ is directly finite. Then for every torsion right $R$-module $M_R$, the canonical mapping $$\pi\colon \operatorname{Hom}({}_RK_R, M_R)\otimes_RK \to h(M_R),$$ defined by $\pi(f\otimes x)=f(x)$ for every $f\in \operatorname{Hom}(K_R,M_R)$ and $x\in K$, is a right $R$-module isomorphism.\end{corollary}
\begin{Proof} We saw in Corollary~\ref{Zahra} that $\pi$ is surjective. As far as injectivity is concerned, notice that
every element of $\operatorname{Hom}({}_RK_R, M_R)\otimes_RK$ can be written in the form $$\sum_{i=1}^n f_i\otimes (q_i+R).$$ Now $q_i=r_is^{-1}$ for suitable elements $r_i\in R$ and a common denominator $s\in S$ \cite[Lemma~10.2(a)]{anintroductionGoodearl}, so that every element of $\operatorname{Hom}({}_RK_R, M_R)\otimes_RK$ can be written in the form $f\otimes (s^{-1}+R)$. Suppose that $f\otimes (s^{-1}+R)\in\ker\pi$, that is, $f(s^{-1}+R)=0$. Let $p \colon Q\to K$ denote the canonical projection, so that $f p \colon Q_R\to M_R$ is a morphism whose kernel contains $s^{-1}$. Compose this with left multiplication by $s^{-1}$
$$\lambda\colon Q_R\to Q_R,$$ which is an automorphism, getting a morphism $f p \lambda\colon Q_R\to M_R$ whose kernel contains $1$. Thus $f p \lambda$ factors through a suitable morphism $
g\colon K_R\to M_R$, so that $f p \lambda=gp $. If $\lambda'\colon Q_R\to Q_R$ is left multiplication by $s$, then $f p =gp \lambda'$, that is, $f(x+R)=g(sx+R)$ for every $x\in Q$. This proves that $f=gs$. Then $f\otimes (s^{-1}+R)=gs\otimes (s^{-1}+R)=g\otimes 0=0$. Therefore $\pi$ is also injective.
\end{Proof}
Let $M_R$ be a right $R$-module. For every element $x\in M_R$, there is a right $R$-module morphism $R_R\to M_R$, $1\mapsto x$. Tensoring with $_RK$, we get a right $R$-module morphism $\lambda_x\colon K_R\to M\otimes_RK$, defined by $\lambda_x(k)=x\otimes k$. The mapping $\lambda\colon M_R\to\operatorname{Hom}(K_R, M\otimes_RK)$, defined by $\lambda(x)=\lambda_x$ for every $x\in M_R$, is a right $R$-module morphism, as is easily checked. Here the right $R$-module structure on $\operatorname{Hom}(K_R, M\otimes_RK)$ is given by the multiplication defined, for every $f\in \operatorname{Hom}(K_R, M\otimes_RK)$ and $r\in R$, by $(fr)(k)=f(rk)$ for all $k\in K$.
\begin{theorem} Suppose that $Q$ is directly finite. Let $M_R$ be an $h$-reduced torsion-free right $R$-module. Then the canonical mapping $\lambda\colon M_R\to\operatorname{Hom}(K_R, M\otimes_RK)$ is injective and its cokernel is isomorphic to $\operatorname{Ext}_R^1(Q_R,M_R)$.\end{theorem}
\begin{Proof} The proof is organized in seven steps.
\smallskip
{\em Step 1: Every element of $M\otimes_RK$ can be written in the form $x\otimes (s^{-1}+R)$ for suitable $x\in M_R$ and $s\in S$.}
Any element of $M\otimes_RK$ is of the form $\sum_{i=1}^nx_i\otimes (r_is_i^{-1}+R)$. Reducing to the same denominator \cite[Lemma~4.21]{anintroductionGoodearl}, we find elements $r'_i\in R$ and $s\in S$ such that $s_ir'_i=s$ for every $i$. Multiplying by $s^{-1}$ on the right and by $s_i^{-1}$ on the left, we get that $r'_is^{-1}=s_i^{-1}$. Thus $\sum_{i=1}^nx_i\otimes (r_is_i^{-1}+R)=\sum_{i=1}^nx_i\otimes (r_ir'_is^{-1}+R)=\left(\sum_{i=1}^nx_ir_ir'_i\right)\otimes (s^{-1}+R)$ is of the form $x\otimes (s^{-1}+R)$.
\smallskip
{\em Step 2: Let $s$ be an element of $S$. The elements $y$ of $M\otimes_RK$ such that $ys=0$ are those that can be written in the form $x\otimes (s^{-1}+R)$ for a suitable $x\in M_R$.}
Clearly, $\left(x\otimes (s^{-1}+R)\right)s=0$. Conversely, let $y$ be an element of $M\otimes_RK$ such that $ys=0$. By Step 1, we have that $y=z\otimes (t^{-1}+R)$ for suitable elements $z\in M_R$ and $t\in S$. Reducing to a common denominator again, we get $a,b\in R$ and $u\in S$ with $sa=u$ and $tb=u$, so that $au^{-1}=s^{-1}$ and $bu^{-1}=t^{-1}$ in $Q$. Hence $y=z\otimes (t^{-1}+R)=zb\otimes (u^{-1}+R)$. From the short exact sequence (\ref{a}) in Theorem~\ref{2.2}, we see that the condition $ys=0$ implies that \begin{equation}zb\otimes (u^{-1}s)=x\otimes 1\label{**}\end{equation} in $M\otimes_RQ$ for some $x\in M_R$. Now $au^{-1}=s^{-1}$, so $au^{-1}s=1$. As $Q$ is directly finite, one-sided inverses are two-sided inverses, hence
$u^{-1}sa=1$. Thus, multiplying (\ref{**}) by $a$ on the right, we get that $zb\otimes 1=x\otimes a$, from which $zb-xa=0$. Thus $y=zb\otimes (u^{-1}+R)=xa\otimes (u^{-1}+R)=x\otimes (au^{-1}+R)=x\otimes (s^{-1}+R)$, as desired.
\smallskip
{\em Step 3: If $x\in M_R$, $r\in R$ and $s\in S$, then $x\otimes(rs^{-1}+R)=0$ in $M\otimes_RK$ if and only if $xr\in Ms$.}
From the short exact sequence (d) in Theorem~\ref{2.2}, we see that $x\otimes(rs^{-1}+R)=0$ in $M\otimes_RK$ if and only if there exists $y\in M_R$ such that $x\otimes(rs^{-1})=y\otimes 1$ in $M\otimes_RQ$, if and only if $x\otimes r=y\otimes s$, if and only if $xr-ys=0$. That is, $x\otimes(rs^{-1}+R)=0$ in $M\otimes_RK$ if and only if there exists $y\in M_R$ with $xr=ys$, that is, if and only if $xr\in Ms$.
\smallskip
{\em Step 4: $\lambda$ is injective.}
The submodule $\ker\lambda$ of $M_R$ is torsion-free because it is a submodule of the torsion-free module $M_R$. Let us show that $\ker\lambda$ is also divisible. Let $x$ be an element of $\ker\lambda$ and $s\in S$. Then $\lambda_x=0$, so that $x\otimes k=0$ for every $k\in K$. It follows that $x\otimes(rt^{-1}+R)=0$ in $M\otimes_RK$ for every $r\in R$ and $t\in S$. By Step 3, $xr\in Mt$ for every $r\in R$ and $t\in S$. In particular, $x\in Ms$, so that $x=ys$ for some $y\in M_R$. In order to conclude, it suffices to show that $y\in\ker\lambda$, that is, that $\lambda_y=0$, equivalently that $y\otimes K=0$ in $M\otimes_RK$. But $y\otimes K=y\otimes sK=ys\otimes K=x\otimes K=0$. Thus $y\in\ker\lambda$, and $\ker\lambda$ is a divisible submodule of $M_R$. As $\ker\lambda$ is both torsion-free and divisible, right multiplication by any element $s\in S$ is an automorphism of the abelian group $\ker\lambda$. Thus $\ker\lambda$ has a unique right $Q$-module structure that extends the right $R$-module structure. In particular, $\ker\lambda$ is $h$-divisible. But $M_R$ is $h$-reduced, so that $\ker\lambda=0$.
\smallskip
Thus we have a short exact sequence \begin{equation}0\rightarrow
M_R \stackrel{\lambda}{\longrightarrow} \operatorname{Hom}(K_R, M\otimes_RK)\rightarrow C_R\rightarrow 0,\label{e}\tag{e}\end{equation} where $C_R$ denotes the cokernel of $\lambda$.
\smallskip
{\em Step 5: $C_R$ is torsion-free.}
Suppose $f\in \operatorname{Hom}(K_R, M\otimes_RK)$, $s\in S$ and $fs\in\lambda(M_R)$. We must prove that $f\in\lambda(M_R)$. Now $fs\in\lambda(M_R)$ implies that there exists $x\in M_R$ with $fs=\lambda_x$, that is $f(sk)=x\otimes k$ for every $k\in K$. In particular, $0=f(1_Q+R)=f(s(s^{-1}+R))=x\otimes(s^{-1}+R)$. By Step 3, we get that $x\in Ms$. Hence there exists $y\in M_R$ with $x=ys$. It follows that $f(sk)=x\otimes k=ys\otimes k=y\otimes sk$. As $_RK$ is divisible, we get that $f(k)=y\otimes k$ for every $k\in K$, i.e., $f=\lambda_y\in \lambda(M_R)$, as desired.
\smallskip
{\em Step 6: $C_R$ is divisible.}
Assume that $f\in \operatorname{Hom}(K_R, M\otimes_RK)$ and $s\in S$. We must prove that there exist $g\in \operatorname{Hom}(K_R, M\otimes_RK)$ and $x\in M_R$ such that $f=gs+\lambda_x$. Now $f\in \operatorname{Hom}(K_R, M\otimes_RK)$ and $s\in S$ imply that $f(s^{-1}+R)$ is an element of $M\otimes_RK$ such that $(f(s^{-1}+R))s=0$. By Step 2, $f(s^{-1}+R)=x\otimes (s^{-1}+R)$ for a suitable $x\in M_R$. Thus $(f-\lambda_x)(s^{-1}+R)=0$. It follows that if $\pi\colon K_R=Q/R\to Q/s^{-1}R$ denotes the canonical projection, there exists a morphism $\overline{g}\colon Q/s^{-1}R\to M\otimes_RK$ such that $f-\lambda_x=\overline{g}\pi$. Similarly, if $\ell_s\colon K_R\to K_R$ denotes the right $R$-module morphism $\ell_s\colon k\mapsto sk$, there exists an isomorphism $\overline{\ell_s}\colon Q/s^{-1}R\to Q/R=K_R$ such that $\ell_s=\overline{\ell_s}\pi$. Set $g:=\overline g\circ(\overline{\ell_s})^{-1}$, so that $g\colon K_R\to M\otimes_RK$. Then $gs+\lambda_x=g\circ\ell_s+\lambda_x=\overline g\circ(\overline{\ell_s})^{-1}\circ\overline{\ell_s}\circ\pi+\lambda_x=\overline g\circ\pi+\lambda_x=f$, as desired.
\smallskip
{\em Step 7: $C_R\cong \operatorname{Ext}_R^1(Q_R,M_R)$.}
Apply the functor $\operatorname{Hom}(Q_R,-)$ to the short exact sequence (\ref{e}), getting an exact sequence \begin{equation}\begin{array}{l}\operatorname{Hom}(Q_R, \operatorname{Hom}(K_R, M\otimes_RK))\rightarrow \operatorname{Hom}(Q_R, C_R)\rightarrow \\ \qquad\qquad\rightarrow\operatorname{Ext}^1_R(Q_R, M_R) \rightarrow \operatorname{Ext}^1_R(Q_R, \operatorname{Hom}(K_R, M\otimes_RK)).\end{array}\label{f}\tag{f}\end{equation} The right $R$-module $ \operatorname{Hom}(K_R, M\otimes_RK)$ is Matlis-cotorsion and $h$-reduced by Theorem~\ref{1.6}, so that the first and the last module in the exact sequence (\ref{f}) are zero. It follows that $\operatorname{Hom}(Q_R, C_R)\cong \operatorname{Ext}^1_R(Q_R, M_R) $. Now $C_R$ is torsion-free and divisible (Steps 5 and 6), so that right multiplication by any element of $S$ is an automorphism of the abelian group $C$. It follows that $C$ has a unique right $Q$-module structure that extends the right $R$-module structure on $C_R$. In particular $C_Q\cong \operatorname{Hom}(Q_R, C_R)$ by (7). It follows that $C_R\cong \operatorname{Hom}(Q_R, C_R)\cong \operatorname{Ext}^1_R(Q_R, M_R) $, which concludes the proof of the theorem.
\end{Proof}
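It may help to record the classical commutative prototype of this theorem (this example is ours and plays no role in the noncommutative argument above). Take $R=\mathbb Z$, so that $Q=\mathbb Q$, $K=\mathbb Q/\mathbb Z$ and $S=\mathbb Z\setminus\{0\}$, and let $M_R=\mathbb Z$, which is torsion-free and $h$-reduced. Then $M\otimes_RK\cong\mathbb Q/\mathbb Z$ and $$\operatorname{Hom}(\mathbb Q/\mathbb Z,\mathbb Q/\mathbb Z)\cong\varprojlim_n\operatorname{Hom}(\mathbb Z/n\mathbb Z,\mathbb Q/\mathbb Z)\cong\varprojlim_n\mathbb Z/n\mathbb Z=\widehat{\mathbb Z},$$ so that $\lambda$ becomes the embedding of $\mathbb Z$ into its profinite completion, and the theorem asserts that $\widehat{\mathbb Z}/\mathbb Z\cong\operatorname{Ext}^1_{\mathbb Z}(\mathbb Q,\mathbb Z)$, as one checks directly by applying $\operatorname{Hom}(-,\mathbb Z)$ to the short exact sequence $0\to\mathbb Z\to\mathbb Q\to\mathbb Q/\mathbb Z\to0$.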
\begin{corollary}\label{3.5} Suppose that $Q$ is directly finite. Let $M_R$ be an $h$-reduced torsion-free Matlis-cotorsion right $R$-module. Then the canonical mapping $$\lambda\colon M_R\to\operatorname{Hom}(K_R, M\otimes_RK)$$ is an isomorphism.\end{corollary}
Thus we have generalized to our setting the Matlis category equivalence \cite[Corollary 2.4]{22}:
\begin{theorem}\label{main1} Suppose that $Q$ is directly finite. Then there is an equivalence of the category $\Cal C$ of $h$-reduced torsion-free
Matlis-cotorsion right $R$-modules with the category $\Cal T$ of $h$-divisible torsion right $R$-modules, given by $$-\otimes_RK\colon\Cal C\to\Cal T\qquad\mbox{\rm and}\qquad \operatorname{Hom}(K_R,-)\colon\Cal T\to~\Cal C.$$\end{theorem}
\begin{Proof} Corollaries~\ref{3.3} and~\ref{3.5}.\end{Proof}
\section{Left and right flat bimorphisms, and $1$-topologies}\label{4}
In this section, {\em $R$ and $Q$ are rings, $\varphi \colon R\to Q$ is a bimorphism in the category of rings, and the $R$-$R$-bimodule ${}_RQ_R$ is flat both as a left $R$-module and as a right $R$-module.} Therefore $\varphi\colon R\to Q$ is the canonical homomorphism of $R$ both into its right localization $R_{\Cal F}$, where $\Cal F=\{\,I\mid I$ is a right ideal of $R$ and $\varphi(I)Q=Q\,\}$ is a right Gabriel topology, and into its
left localization $R_{\Cal G}$, where $\Cal G=\{\,J\mid J$ is a left ideal of $R$ and $Q\varphi(J)=Q\,\}$ is a Gabriel topology consisting of dense left ideals and with a basis consisting of finitely generated left ideals. In order to apply the results of Section~\ref{2}, we will {\em suppose that $\Cal F$ and $\Cal G$ are 1-topologies and that $\Fdim(Q_Q) = 0$.} In particular, $Q$ is right perfect, and hence directly finite.
Corresponding to $\Cal F$ and $\Cal G$, we have the two sets $S:=\{\, s\in R\mid sQ=Q\,\}$ and $T:=\{\, t\in R\mid Qt=Q\,\}$, so that $S$ (resp.~$T$) consists of all the elements of $R$ that are right invertible (resp.~left invertible) in $Q$. But $Q$ is directly finite, which implies that $S=T$ consists of regular elements of $R$ and $Q=R[S^{-1}]=[S^{-1}]R$. Moreover, as the elements of $S$ are regular, the ring $Q$ is contained in the classical right ring of quotients $R[S^{-1}_{\reg}]$ of $R$, where $S_{\reg}$ denotes the set of all regular elements of $R$ \cite[p.~52]{Stenstrom}. Since every element $s\in S_{\reg}$ is invertible in the directly finite ring $Q$, it follows that $Q=R[S^{-1}_{\reg}]$. Similarly, $Q=[S^{-1}_{\reg}]R$, and $S=S_{\reg}$.
Thus the situation now is the following. We have, in this section, {\em a ring $R$ for which the set $S$ of all its regular elements is both a right denominator set and a left denominator set (right and left Ore ring), and we assume that $Q=R[S^{-1}]=[S^{-1}]R$ (the classical right and left ring of quotients of $R$) and that $\Fdim(Q_Q)=0$.}
\begin{remark}\label{right, but not left}{\rm Notice that if $R$ is a right Ore domain that is not left Ore, then the right field of quotients $Q$ of $R$ is flat as a left $R$-module, but is not flat as a right $R$-module \cite[paragraph after the proof of Proposition~0.8.6]{CohnFIRandLoc}. Thus the results of the previous section apply to this extension $Q$ of $R$, but the results in this section do not.}\end{remark}
We are now finally ready to prove, in the noncommutative case, the result, due to Fuchs and Salce in the commutative case, which we mentioned in the Introduction. In \cite{FS}, Fuchs and Salce proved the equivalence of the nine conditions listed in the Introduction for modules over commutative rings $R$ with perfect quotient ring $Q$. Now we prove that the equivalence of seven of their conditions also holds for noncommutative right and left Ore rings $R$ for which $\Fdim(Q_Q) = 0$. Here
$Q = R[S^{-1}] = [S^{-1}] R$, where $S$ is the set of all regular elements of $R$. Notice that a commutative ring $Q$ is perfect if and only if $\Fdim(Q_Q) = 0$ \cite[pp.~466--468]{Bass}.
\begin{theorem}\label{7.1} Assume that $R$ is a right and left Ore ring and $\Fdim(Q_Q) = 0$, where
$Q = R[S^{-1}] = [S^{-1}] R$. Then
the following conditions are equivalent:
(i) Flat right
$R$-modules are strongly flat.
(ii) Matlis-cotorsion right $R$-modules are Enochs-cotorsion.
(iii) $h$-divisible right $R$-modules are weak-injective.
(iv) Homomorphic images of weak-injective right $R$-modules are weak-injective.
(v) Homomorphic images of injective right $R$-modules are weak-injective.
(vi) Right $R$-modules of $\wed \le 1$ are of $\pd\le1$.
(vii) The cotorsion pairs $(\Cal P_1,\Cal D)$ and $(\Cal F_1,\Cal W\Cal I)$ coincide.
(viii) Divisible right $R$-modules are weak-injective.
\end{theorem}
\begin{proof}
$(i) {}\Leftrightarrow{} (ii)$ is clear, because
$({}_R\mathcal{SF} , {}_R{\mathcal{MC}})$ and $ ({}_R\mathcal{F}, {}_R\mathcal{EC})$
are cotorsion pairs, ${}_R\mathcal{SF} \subseteq {}_R\mathcal{F}$ and ${}_R{\mathcal{MC}}\supseteq {}_R{\mathcal{EC}}$.
$(ii) {}\Rightarrow{} (iii)$ Let $D_R$ be an $h$-divisible module. By sequence (\ref{b}) of Theorem~\ref{2.2}, we have an exact sequence of right $R$-modules
$$ 0 \to\operatorname{Hom}(K, D) \to\operatorname{Hom}(Q, D) \to D \to 0.$$
Let $M_R$ be a module of weak dimension $\le1$. In order to prove $(iii)$, we must show that $\operatorname{Ext}^1_R(M, D) = 0.$
We have the exact sequence \begin{equation}\operatorname{Ext}^1_R(M,\operatorname{Hom}(Q, D)) \to \operatorname{Ext}^1_R(M, D) \to \operatorname{Ext}^2(M,\operatorname{Hom}(K, D)).\label{ooo}\end{equation} Now
$\operatorname{Hom}(Q, D)$ is a right $Q$-module and so $\operatorname{Ext}^1_R(M,\operatorname{Hom}(Q, D)) \cong\operatorname{Ext}^1_Q(M\otimes Q,\operatorname{Hom}(Q, D))$ by (18). The module $M_R$ has weak dimension $\le1$, and $_RQ$ is flat, so that the $Q$-module $M\otimes_RQ$ has weak dimension $\le1$. But $Q$ is perfect, so that the $Q$-module $M\otimes_RQ$ has projective dimension $\le1$.
Since $\Fdim(Q_Q) = 0$, $M \otimes Q$ is projective, and so $\operatorname{Ext}^1_Q(M\otimes Q,\operatorname{Hom}(Q, D)) = 0$. By (18), the first $\operatorname{Ext}$ in the sequence (\ref{ooo}) is zero.
On the other hand, $M_R$ has weak dimension $\le1$, so that there exists an exact sequence $0 \to N_R \to P_R \to M_R \to 0 $, where $N_R$ is flat and $P_R$ is projective. Applying to this exact sequence the functor $\operatorname{Hom}(-,\operatorname{Hom}(K, D))$, we get an exact sequence $\operatorname{Ext}^1_R(N,\operatorname{Hom}(K, D))\to \operatorname{Ext}^2_R(M,\operatorname{Hom}(K, D))\to \operatorname{Ext}^2_R(P,\operatorname{Hom}(K, D))$. The last module is zero because $P$ is projective, and the first module is also zero because
$\operatorname{Hom}(K, D)$ is Matlis-cotorsion by Theorem~\ref{1.6}, and so Enochs-cotorsion by $(ii)$. This implies that
$\operatorname{Ext}^2_R(M,\operatorname{Hom}(K, D)) = 0$. From the exact sequence (\ref{ooo}), we get that $\operatorname{Ext}^1_R(M, D) = 0$, as desired.
$(iii) {}\Rightarrow{} (iv)$ follows from Lemma~\ref{3.6}, and $(iv) {}\Rightarrow{} (v)$ is trivial.
$(v) {}\Rightarrow{} (vi)$. Let $M$ be a right $R$-module and $A$ be a right $R$-module of weak dimension $\le1$.
We want to show that $\operatorname{Ext}^2(A, M) = 0$. Let $E$ be the injective hull of $M$ and consider the exact sequence
$0 \to M \to E \to E/M \to 0$. We get the exact sequence
$\operatorname{Ext}^1 (A, E/M) \to\operatorname{Ext}^2 (A, M) \to\operatorname{Ext}^2(A, E) $, where the first $\operatorname{Ext}^1$ is zero, because $E/M$ is weak-injective by $(v)$, and the last $\operatorname{Ext}^2$ is zero because $E$ is injective. So $A$ is of projective dimension $\le1$.
$(vi) {}\Rightarrow{} (vii)$.
First of all we show that $({\Cal P}_1, \mathcal{HD})$ is a cotorsion pair.
Let $M$ be a right $Q$-module and $P_R\in {\Cal P}_1$. As the module $P_R$ has projective dimension $\le1$, and $_RQ$ is flat, the $Q$-module $P\otimes_RQ$ has projective dimension $\le1$. But $\Fdim(Q_Q) = 0$, so $P \otimes Q$ is projective, and thus, from (18), we get that $\operatorname{Ext}^1_R(P,M) \cong \operatorname{Ext}^1_Q(P \otimes Q,M) = 0$. This shows that $\operatorname{Ext}^1_R(P,M) =0$ for every right $Q$-module $M$ and every $P_R\in {\Cal P}_1$. If $N$ is an $h$-divisible $R$-module, there is an exact sequence $0\to N'\to M\to N\to 0$ for some $Q$-module $M$ and some submodule $N'$ of $M$. From this sequence, we get the exact sequence $\operatorname{Ext}^1(P,M)\to \operatorname{Ext}^1(P,N)\to \operatorname{Ext}^2(P,N')$. The first module is zero because $M$ is a $Q$-module, and the last module is zero because $P$ is in $\Cal P_1$. We have thus proved that $\operatorname{Ext}^1_R(P,N) = 0$ for every module $P$ in $\Cal P_1$ and every $h$-divisible $R$-module $N$. This proves that $\Cal H\Cal D \subseteq {\Cal P}_1 ^\bot $ and
${\Cal P}_1 \subseteq{} ^\bot \Cal H \Cal D$. We also know that $^\bot \Cal H\Cal D \subseteq {\Cal P}_1$ (Proposition~\ref{4.4}(i)) and that
${\Cal P}_1^ \bot ={\Cal F}_1^ \bot = \mathcal{WI} \subseteq \mathcal{HD}$ (by $(vi)$ and Lemma~\ref{3.6}). Therefore
$({\Cal P}_1, \mathcal{HD})$ is a cotorsion pair.
Now $\pd(K_R) \leq 1$ by $(vi)$, so that the functor $\operatorname{Ext}^2(K,-)$ is zero. From the exact sequence of bimodules $ 0\rightarrow
R\rightarrow Q\rightarrow K\rightarrow 0$, we get that $\operatorname{Ext}^2(Q,-)\cong \operatorname{Ext}^2(K,-)$, and so $\pd(Q_R) \le 1$.
Therefore $\mathcal{HD}_R = \mathcal{D}_R $ by \cite[Corollary 4.14]{AS}.
$(vi) {}\Rightarrow{} (ii)$ First, we show that $Q^\bot$ is closed under homomorphic images. Let $M \in Q^\bot$ and let $N$ be a submodule of $M$.
From the exact sequence $0 \to N \to M \to M/N \to 0$, we get the exact sequence $\operatorname{Ext}^1(Q, M) \to \operatorname{Ext}^1(Q, M/N) \to \operatorname{Ext}^2(Q, N)$.
The first $\operatorname{Ext}$ is zero because $M \in Q^\bot$, and the third $\operatorname{Ext}$ is also zero, because $Q$ is of projective dimension $\le1$ by $(vi)$. So $M/N \in Q^\bot$.
In order to prove $(ii)$, we must show that $\operatorname{Ext}^1_R(F, C) = 0$ for every $C \in Q^\bot $ and every flat right $R$-module $F$.
For any $h$-divisible module $H$, we have that $H\in\Cal D$, so that $H\in \Cal W\Cal I$ by $(vi){}\Rightarrow{}(vii)$.
Also, $F$, which is flat, belongs to $\Cal F_1$. Therefore $\operatorname{Ext}^1_R(F, H) = 0$. Thus we can assume that $C$ is not $h$-divisible. Consider the exact sequence
$0 \to h(C) \to C \to C/h(C) \to 0 $. We have the exact sequence $\operatorname{Ext}^1(F, h(C)) \to \operatorname{Ext}^1(F, C) \to \operatorname{Ext}^1(F, C/h(C))$.
The first $\operatorname{Ext}$ is zero as we have just seen, and $C/h(C) \in Q^\bot$ because $Q^\bot$ is closed under homomorphic images. We want to show that $\operatorname{Ext}^1(F, C/h(C))=0$. Apply \cite[Theorem~3.5]{AS} to the injective ring epimorphism $R\to Q$. As we have already seen, the projective dimension of $Q$ is $\le1$, so that condition (1) in \cite[Theorem~3.5]{AS} holds. Thus condition (4) holds, that is, the class $K^\bot $ is the class of modules generated by $Q$, that is, the class of $h$-divisible modules. Thus
the class $K^\bot $ is closed under extensions. Hence we can apply Proposition~\ref{factor}, and get that
$C/h(C)$ is $h$-reduced. So we may assume that $C$ is $h$-reduced.
From the short exact sequence $0 \to R \to Q \to K \to 0$, we obtain the exact sequence
$0 \to F \to F\otimes Q \to F \otimes K \to 0$. Thus we have the exact sequence
$\operatorname{Ext}^1_R(F\otimes Q, C) \to \operatorname{Ext}^1_R(F, C) \to \operatorname{Ext}^2_R(F \otimes K, C)$.
The first $\operatorname{Ext}^1_R$ is zero:
$\operatorname{Ext}^1_R(F\otimes Q, C) \cong \operatorname{Ext}^1_R(F, \operatorname{Hom}(Q, C))$ by \cite[Lemma~2.3]{FL}, and $\operatorname{Hom}(Q, C)=0$ because $C$ is $h$-reduced. Finally, let us prove that the last $\operatorname{Ext}^2_R$ is also zero. We have the short exact sequence of right $R$-modules $0\to F\otimes R\to F\otimes Q\to F\otimes K\to 0$. The right $R$-module $F\otimes Q$ is flat, and therefore $F\otimes K$ is of weak dimension $\le1$. By $(vi)$,
$F \otimes K$ is of projective dimension $\le1$. Thus $\operatorname{Ext}^2_R(F \otimes K, C)=0$.
$(vii) {}\Rightarrow{} (viii)$ and $(viii) {}\Rightarrow{} (iii)$ are obvious.
\end{proof}
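The hypotheses can be illustrated in the simplest case (an illustration of ours, not drawn from \cite{FS}): for $R=\mathbb Z$, the set $S=\mathbb Z\setminus\{0\}$ of regular elements is a left and right denominator set and $Q=\mathbb Q$ is a field, so that $\Fdim(Q_Q)=0$. Condition $(vi)$ holds automatically because $\mathbb Z$ is hereditary, so that every $\mathbb Z$-module has projective dimension $\le1$; the theorem then recovers the classical facts that flat $\mathbb Z$-modules are strongly flat and that divisible abelian groups are weak-injective (indeed injective).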
\section{Introduction}
\label{sec:Intro}
An interesting resolution of the gauge hierarchy problem is offered by
warped models based on the original Randall-Sundrum (RS) geometry
\cite{RS}, which is a slice of 5D anti-de Sitter (AdS$_5$)
spacetime. This geometry, characterized by a curvature scale $k$, is
bounded by flat walls, often referred to as the UV and IR branes. In
these models the fundamental 5D scales of the theory, perhaps near the
4D Planck scale $M_P \sim 10^{19}$~GeV, redshift to the weak scale of
order 1~TeV as one goes from the UV brane to the IR brane. The
redshift is caused by an exponential warp factor and a Planck-weak
hierarchy of order $10^{16}$ can be achieved if the size $L$ of the
extra dimension satisfies $k L \gtrsim 30$. In addition, placing the
Standard Model (SM) gauge \cite{Davoudiasl:1999tf,Pomarol:1999ad} and
fermion \cite{Grossman:1999ra} content in the 5D bulk can yield a realistic
explanation of 4D flavor \cite{Gherghetta:2000qt}, where heavy
fermions are localized towards the source of the electroweak symmetry
breaking (EWSB) on the IR brane, while light fermions are UV-brane
localized, so that their small masses arise from an exponentially
small wavefunction overlap with the IR brane.
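The quoted condition on $kL$ follows from simple arithmetic: demanding that the warp factor redshift the Planck scale to a TeV, $M_P \, e^{-kL} \sim 1$~TeV with $M_P\sim 10^{19}$~GeV, i.e.\ a hierarchy of order $10^{16}$, gives
\beqa
kL \;\sim\; \ln\!\left(10^{16}\right) \;=\; 16\ln 10 \;\approx\; 36.8~,
\eeqa
consistent with $kL \gtrsim 30$.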
While many features of the SM can be accommodated and explained in
warped scenarios, quite often the scale $m_W \sim 10^2$~GeV of EWSB is
simply assumed to be set by the IR-brane physical scale
$\tilde{k}\equiv k \, e^{- k L}$. Since various bounds from precision
and collider phenomenology favor $\tilde{k}\gtrsim 1$~TeV, this often
reintroduces some level of fine-tuning into the models. It would then
be interesting to find a dynamical mechanism whereby EWSB sets $kL$
near the requisite value and naturally leads to a separation
between $m_{W}$ and $\tilde{k}$. As the size of the extra dimension
is controlled by the dynamics of the radion scalar, one would like to
generate an appropriate potential for the radion through EWSB.
In fact, such a model was proposed in Ref.~\cite{Bai:2008gm}, where
the condensation of IR-localized top quarks of the SM is triggered by
strong interactions mediated by Kaluza-Klein (KK) gluons. This
condensation breaks the electroweak symmetry and through its
radion-dependence can also stabilize the size of the fifth dimension.
In this top-condensation scenario, the IR-brane scale is set by
$\tilde{k}\sim 10$~TeV, which has the added benefit of avoiding the most
severe constraints on warped models, while providing a GeV-scale
radion as its low energy signature
\cite{Bai:2008gm,Davoudiasl:2009xz}. Due to the dynamical nature of
the condensation, the separation between the IR-brane scale and the
weak scale is natural. However, one has to assume that other
contributions to the radion potential would not overwhelm the
electroweak contribution. In particular, one may naively expect that
the corrections from zero-point energies of bulk fields to the radion
potential (which contain the calculable Casimir energy) are of order
$\tilde{\Lambda}^4/(16 \pi^2)$, where the effective theory
cutoff-scale $\tilde{\Lambda} \sim 4\pi \tilde{k}$.
Given the above situation, either one has to resort to a severe
fine-tuning, or else find a way to suppress such cutoff scale effects.
In this work, we adopt the second possibility and examine how
supersymmetry (SUSY) can be used to protect the radion potential from
too large cutoff dominated contributions. Since in the limit of exact
SUSY all vacuum energies cancel out, in supersymmetric scenarios the
full radion potential is expected to be far less sensitive to the
cutoff. We will consider soft supersymmetry breaking by localized
mass terms on both the UV and the IR branes. In addition, we allow
for the presence of brane kinetic terms (BKTs), which are expected to
be induced radiatively, and that are known to have a strong effect on
the KK spectrum~\cite{Davoudiasl:2002ua, Carena:2002dz,
Carena:2004zn}. We find that when SUSY is broken on the UV brane,
this vacuum energy contribution is of order $\tilde{k}^{2}
m_{\lambda}^{2}/(16\pi^{2} kL)$, where $m_{\lambda}$ is the physical
mass of the (would-be zero-mode) gaugino. Hence, there is a
suppression from the dependence on soft masses, as well as an
additional volume-suppression. This result suggests that a
supersymmetric version of the scenario presented in
Ref.~\cite{Bai:2008gm} may be realizable without the need for
fine-tuning. Even though dynamical EWSB scenarios, which will be
pursued in Ref.~\cite{HDEP}, provide our basic motivation for this
study, we will also consider the effect of a supersymmetric bulk in
other contexts. In particular, in models where the Casimir energy is
responsible for generating the radion potential
\cite{Garriga:2000jb,Goldberger:2000dv,Garriga:2002vf,Maru:2010ap},
the success of the mechanism depends on the size and sign of certain
contributions that are in general not calculable. We show how a
supersymmetric framework with SUSY broken at a high scale may realize
the radion stabilization mechanism by gauge fields proposed in
Ref.~\cite{Garriga:2002vf}. The above radion stabilization mechanisms
are distinct from stabilization by bulk fields that acquire vacuum
expectation values (VEV's) at tree level, as in
Ref.~\cite{Goldberger:1999uk}. Related work has appeared
in Refs.~\cite{Flachi:2003bb, Gregoire:2004nn, Katz:2006mva}.
The structure of this paper is as follows. In the next section, we
consider the case of a bulk scalar field, with brane localized masses
and kinetic terms, and evaluate the radion potential generated by the
Casimir effect from the scalar. The results are sufficiently general
to be applied to fields of arbitrary spin that obey arbitrary boundary
conditions. We provide such a generalization to particles of
different spin in Section~\ref{sec:OtherSpins}. We apply our results
to study scenarios in which supersymmetry is broken on the UV or IR
branes, as may be relevant in different models, in
Section~\ref{sec:susy}. We present our conclusions in
Section~\ref{sec:conclusions}, followed by
Appendices~\ref{app:scalars} (conventions for KK reduction) and
\ref{app:AnalyticApprox} (approximate expressions for the radion
potential in various limits).
\section{Radion Potential from Scalar Fields}
\label{sec:RadionPotential}
In this section, we summarize the results for the radion potential at
one-loop order in warped backgrounds. Since this potential arises
through the radion-dependence of the KK masses associated with bulk
fields propagating in compact dimensions, we focus on describing the
spectrum of such fields. The spectrum depends on the quadratic terms
in the action (including both bulk and brane-localized operators), as
well as on the boundary conditions. Since the results for fields in
different spin representations of the Lorentz group can be expressed
in terms of those of a scalar field, we start with the real scalar case.
For reference, further details regarding the boundary conditions, the
KK decomposition, orthonormality relations, etc. are relegated to
Appendix~\ref{app:scalars}.
\subsection{Bulk Fields with Brane-Localized Terms}
\label{sec:scalars}
We consider a slice of AdS$_{5}$:
\beqa
ds^2 = e^{-2ky} \eta_{\mu\nu} dx^{\mu} dx^{\nu} - dy^{2}~,
\label{metric}
\eeqa
where $k$ is the AdS curvature and $y \in [0,L]$ parametrizes the
fifth dimension. The action for a free real scalar field is
\beqa
S = \int \! d^{4}x \int_{0}^{L} \! dy \, \sqrt{g}
\left\{ \frac{1}{2} g^{MN} \partial_{M} \Phi \partial_{N} \Phi -
\frac{1}{2} M^{2} \Phi^{2}
+ 2\delta(y) {\cal L}_{0} +
2\delta(y-L) {\cal L}_{L} \right\}~,
\label{Action}
\eeqa
where the brane-localized terms are given by~\footnote{For simplicity,
we restrict to brane kinetic terms involving only $\mu$ derivatives,
but the same methods can be easily generalized to include arbitrary
BKTs by using the appropriate eigenvalue equation. This may require
an appropriate classical renormalization when $\partial_{5}$
derivatives are involved~\cite{delAguila:2003bh}.}
\beqa
{\cal L}_{0} &=& \frac{1}{2} r_{UV} \, g^{\mu\nu} \partial_{\mu} \Phi \partial_{\nu} \Phi - \frac{1}{2} M_{0} \Phi^{2}~,
\\[0.5em]
{\cal L}_{L} &=& \frac{1}{2} r_{IR} \, g^{\mu\nu} \partial_{\mu} \Phi \partial_{\nu} \Phi - \frac{1}{2} M_{L} \Phi^{2}~.
\eeqa
The kinetic coefficients $r_{UV}$ and $r_{IR}$ have mass
dimension~$-1$, while $M_{0}$ and $M_{L}$ have mass dimension~$1$. It
is convenient to introduce a dimensionless parameter $\alpha$ (that we
will call the ``index'' of the field) and new mass parameters $m_{UV}$
and $m_{IR}$ by writing the bulk and localized masses as
\beqa
M^{2} &=& \left(\alpha^{2} - 4 \right) k^{2}~,
\label{Mdef}
\eeqa
\vspace{-7mm}
\beqa
M_{0} &=& - \left(\alpha - 2 \right) k + m_{UV} ~,
\label{m0def}
\hspace{1cm}
M_{L} ~=~ \left(\alpha - 2 \right) k + m_{IR} ~.
\label{mLdef}
\eeqa
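To see the content of this parametrization, note that Eq.~(\ref{Mdef}) inverts to
\beqa
\alpha &=& \sqrt{4 + M^{2}/k^{2}}~,
\eeqa
taking the positive root, so that a massless bulk scalar corresponds to $\alpha=2$, while $\alpha$ remains real down to $M^{2}=-4k^{2}$, the Breitenlohner--Freedman stability bound in AdS$_{5}$.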
Performing the KK decomposition in the usual manner (see
Appendix~\ref{app:scalars}), one finds that the KK spectrum, $m_{n}$,
is determined by~\footnote{$J_{\alpha}$ and $Y_{\alpha}$
($I_{\alpha}$ and $K_{\alpha}$) are the (modified) Bessel functions of
the first and second kind, respectively.}
\beqa
F(m_{n}/k) \equiv \tilde{J}^{IR}_{\alpha} \! \left(\frac{m_{n}}{k} e^{kL} \right)
\tilde{Y}^{UV}_{\alpha} \! \left(\frac{m_{n}}{k} \right)
- \tilde{J}^{UV}_{\alpha} \! \left(\frac{m_{n}}{k} \right)
\tilde{Y}^{IR}_{\alpha} \! \left(\frac{m_{n}}{k} e^{kL} \right) &=& 0~,
\label{Feigen}
\eeqa
where $L$ is the radion VEV. Here, we have introduced the functions
\beqa
\tilde{J}^{UV}_{\alpha}(z) &=& z J_{\alpha-1}(z) + b_{UV}(z) J_{\alpha}(z)~,
\label{JUVtilde}
\\ [0.5em]
\tilde{J}^{IR}_{\alpha}(z) &=& z J_{\alpha-1}(z) - b_{IR}(z) J_{\alpha}(z)~,
\label{JIRtilde}
\eeqa
with analogous expressions for $\tilde{Y}^{UV}_{\alpha}(z)$ and
$\tilde{Y}^{IR}_{\alpha}(z)$, and defined
\beqa
b_{i}(z) &=& \hat{r}_{i} z^{2} - \hat{m}_{i}~,
\hspace{1cm}
i = UV, IR
\label{bUVIR}
\eeqa
by using the dimensionless parameters $\hat{m}_{i} = m_{i}/k$,
$\hat{r}_{i} = k \, r_{i}$.
For $\hat{m}_{UV} = \hat{m}_{IR} = 0$ (the SUSY limit) there is a
scalar zero-mode with an exponential profile controlled by the
dimensionless parameter $\alpha$ (see Appendix~\ref{app:LightStates}
for details). This profile coincides with that of a 5D fermion with a
bulk Dirac mass term written as $M = c k$, where $\alpha = c +
\frac{1}{2}$~\cite{Grossman:1999ra}. We will refer to fields with
$\alpha < 1$ as ``IR localized fields'', those with $\alpha = 1$ as
``flat fields'' and those with $\alpha > 1$ as ``UV localized
fields''. When $\hat{m}_{UV}$ and/or $\hat{m}_{IR}$ deviate from
zero, the scalar zero-mode is lifted: for positive $\hat{m}_{UV, IR}$
it becomes massive, while for negative $\hat{m}_{UV, IR}$ a tachyonic
mode appears. Note also that one can interpolate between Neumann-type
boundary conditions (setting $m_{i} = 0$) and Dirichlet boundary
conditions ($m_{i} \to \infty$) on the $i^{\rm th}$ ($=UV$ or $IR$)
brane, and therefore our results are sufficiently general to capture
the dependence of the radion potential on the choice of boundary
conditions for the bulk field.
It is also worth pointing out that, when $\hat{r}_{i} = \hat{m}_{i} =
0$, the massive KK spectrum is identical for fields with index $\alpha
= 1 + \delta\alpha$ and fields with index $\alpha = 1 - \delta\alpha$,
for any $\delta\alpha$, as can be easily checked from
Eq.~(\ref{Feigen}). However, non-vanishing $\hat{r}_{i}$ and/or
$\hat{m}_{i}$ introduce a distinction between the KK spectra of UV
versus IR localized fields.
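In practice, the eigencondition Eq.~(\ref{Feigen}) is solved numerically. A minimal sketch (our illustration, not from the paper: the parameter values are arbitrary, and $kL$ is taken unrealistically small so that the Bessel argument $\frac{m_n}{k}e^{kL}$ stays modest):

```python
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

# Illustrative parameters (NOT a physical benchmark point).
alpha, kL = 1.0, 5.0       # "flat" field index; small kL for numerics
r_uv = r_ir = 0.0          # dimensionless brane kinetic coefficients r_hat
m_uv = m_ir = 0.0          # dimensionless brane masses m_hat (SUSY limit)

def b(z, r_hat, m_hat):    # b_i(z) = r_hat_i z^2 - m_hat_i
    return r_hat * z**2 - m_hat

def J_uv(z): return z * jv(alpha - 1, z) + b(z, r_uv, m_uv) * jv(alpha, z)
def Y_uv(z): return z * yv(alpha - 1, z) + b(z, r_uv, m_uv) * yv(alpha, z)
def J_ir(z): return z * jv(alpha - 1, z) - b(z, r_ir, m_ir) * jv(alpha, z)
def Y_ir(z): return z * yv(alpha - 1, z) - b(z, r_ir, m_ir) * yv(alpha, z)

def F(x):                  # eigenvalue condition, x = m_n / k
    return J_ir(x * np.exp(kL)) * Y_uv(x) - J_uv(x) * Y_ir(x * np.exp(kL))

# Bracket sign changes of F on a grid, then refine each root with brentq.
xs = np.linspace(1e-6, 0.2, 4000)
vals = [F(x) for x in xs]
roots = [brentq(F, xs[i], xs[i + 1])
         for i in range(len(xs) - 1) if vals[i] * vals[i + 1] < 0]
# KK masses are m_n = k * roots
```

With these Neumann-type boundary conditions the low-lying roots come out approximately evenly spaced, with spacing close to $\pi k\, e^{-kL}$, the expected KK-tower spacing in a warped background.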
\subsection{One-Loop Effective Potential}
\label{sec:RadionPotentialResults}
The presence of fields propagating in a $D$-dimensional spacetime with
$D-4$ compact dimensions generically leads to a vacuum energy that
depends on the overall volume of the extra dimensions as well as on
the various shape moduli. For a single compact extra dimension one
simply gets a potential for its size $L$, given at one-loop order by
\beqa
V_{\rm eff}(L) &=& \frac{1}{2} \sum_{n} \int \! \frac{d^{4} p_{E}}{(2\pi)^{4}} \, \ln\left( \frac{p^{2}_{E} + m^{2}_{n}}{\mu^2} \right)~,
\label{EffPot}
\eeqa
where the radion dependence enters through the spectrum, $m_{n}(L)$.
Here the integration is over Euclidean momentum and $\mu$ is the
renormalization scale. Eq.~(\ref{EffPot}) is the contribution due to
a bulk real scalar, but we will consider other types of fields in
Section~\ref{sec:OtherSpins}. In this work, we will assume that the
backreaction due to the one-loop vacuum energy is small, so that the
AdS$_{5}$ metric of Eq.~(\ref{metric}) gives a good approximation.
The effective potential contains divergent pieces that correspond to a
renormalization of the 5D cosmological constant, as well as of the IR
and UV brane tensions. The first two lead to radion-dependent terms
that behave like $e^{-4kL}$. Additionally, there are calculable
terms, usually referred to as the Casimir energy, that can have a more
complicated radion dependence and can sometimes stabilize the distance
between the branes~\cite{Garriga:2002vf}. In order to exhibit the
various radion-dependent contributions, one can evaluate
Eq.~(\ref{EffPot}) using standard methods based on $\zeta$-function
regularization. The \textit{regularized} radion potential has been
computed in
Refs.~\cite{Garriga:2000jb,Goldberger:2000dv,Garriga:2002vf,Maru:2010ap},
and takes the form
\beqa
V_{\rm eff}(L) &=& \frac{k^{4}}{16\pi^{2}} \left[ {\cal I}_{UV} + e^{-4kL} {\cal I}_{IR}
\right] + V_{\rm Casimir}(L)~,
\label{FinalPot}
\eeqa
where the ``Casimir energy'' is given by~\footnote{Throughout this
work, the designation {\it calculable} refers to effects that are
insensitive to the physics near or above the cutoff of the 5D
effective theory. Here, we will refer to Eq.~(\ref{CasimirPot}) as
the ``Casimir energy'', even though it can contain terms that scale
like $e^{-4kL}$, exactly as the IR brane tension contribution, and
that should not be considered calculable in generic theories.}
\beqa
V_{\rm Casimir}(L) &=& \frac{k^{4} e^{-4kL}}{16\pi^{2}} \int_{0}^{\infty} \! dt \, t^{3} \ln \left| 1 - \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)} \frac{\tilde{I}^{UV}_{\alpha}(t e^{-kL})}{\tilde{K}^{UV}_{\alpha}(t e^{-kL})} \right|~,
\label{CasimirPot}
\eeqa
and the ``generalized'' Bessel functions are defined by
\beqa
\tilde{K}^{UV}_{\alpha}(z) &=& z K_{\alpha-1}(z) - \tilde{b}_{UV}(z) K_{\alpha}(z)~,
\label{tildeKUV}
\\ [0.5em]
\tilde{I}^{UV}_{\alpha}(z) &=& z I_{\alpha-1}(z) + \tilde{b}_{UV}(z) I_{\alpha}(z)~,
\label{tildeIUV}
\\ [0.5em]
\tilde{K}^{IR}_{\alpha}(z) &=& z K_{\alpha-1}(z) + \tilde{b}_{IR}(z) K_{\alpha}(z)~,
\label{tildeKIR}
\\ [0.5em]
\tilde{I}^{IR}_{\alpha}(z) &=& z I_{\alpha-1}(z) - \tilde{b}_{IR}(z) I_{\alpha}(z)~.
\label{tildeIIR}
\eeqa
Here, $\tilde{b}_{i}(z) \equiv - \hat{r}_{i} z^{2} - \hat{m}_{i}$ has
an additional minus sign in the first term compared to
Eq.~(\ref{bUVIR}) as a result of the Wick rotation involved in
obtaining Eq.~(\ref{FinalPot}).
The terms proportional to ${\cal I}_{UV}$ and ${\cal I}_{IR}$ in
Eq.~(\ref{FinalPot}) correspond to UV and IR brane tension
renormalizations, respectively. (${\cal I}_{UV}$ depends only on the
UV quantities $\hat{m}_{UV}$ and $\hat{r}_{UV}$, while ${\cal I}_{IR}$
depends only on the IR quantities $\hat{m}_{IR}$ and $\hat{r}_{IR}$.)
Such contributions are in general uninteresting, since there are
additional incalculable (and most likely larger) contributions that
renormalize the brane tensions. Hence, we will focus on the Casimir
term in what follows.
\subsection{The Casimir Energy}
\label{CasimirEnergy}
The Casimir energy contribution to the radion potential,
Eq.~(\ref{CasimirPot}), which contains the 5D non-local, and therefore
calculable, terms has been computed in the
literature~\cite{Garriga:2000jb,Goldberger:2000dv,Garriga:2002vf}.
Although the expression given in Eq.~(\ref{CasimirPot}) can be easily
evaluated numerically for any value of $L$ and for fixed values of the
Lagrangian parameters $\alpha = c+1/2$, $m_{i}$ and $r_{i}$ ($i=UV$ or
$IR$), it is useful to have analytic approximations
that make the radion dependence more explicit. We derive general
expressions for the case that $e^{-k L} \ll 1$ in
Appendix~\ref{app:AnalyticApprox}. These result from the fact that
the radion dependence is fully contained in the ``UV factor''
$\tilde{I}^{UV}_{\alpha}(t e^{-kL})/\tilde{K}^{UV}_{\alpha}(t
e^{-kL})$, which can legitimately be approximated by a small argument
expansion of the Bessel functions. Thus, the functional dependence of
the Casimir energy on $L$ is determined by the UV quantities. In
particular, one finds that $\hat{m}_{UV}$ defines a characteristic
scale $L_{T}$ according to~\footnote{For $\alpha < 0$, one has $kL_{T}
\ll 1$, unless $\hat{m}_{UV}$ is exponentially large.}
\beqa
\begin{array}{lcl}
e^{2kL_{T}}\hat{m}_{UV} = 1~,
& & \textrm{for } \alpha \geq 1~, \\ [0.2em]
& \textrm {or} & \\ [0.2em]
e^{2\alpha kL_{T}}\hat{m}_{UV} = 1~,
& & \textrm{for } \alpha < 1~.
\end{array}
\label{LT}
\eeqa
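Eq.~(\ref{LT}) can be solved for $L_{T}$ in closed form, which makes the transition scale easy to evaluate numerically. The following is a minimal stdlib-only sketch (the value of $\hat{m}_{UV}$ is purely illustrative, not taken from any model in this paper):

```python
import math

def kL_T(m_uv_hat, alpha):
    """Transition scale of Eq. (LT) in units of 1/k, for alpha > 0:
    exp(2 k L_T) m_uv_hat = 1        for alpha >= 1,
    exp(2 alpha k L_T) m_uv_hat = 1  for alpha < 1."""
    if alpha >= 1:
        return -0.5 * math.log(m_uv_hat)
    return -math.log(m_uv_hat) / (2.0 * alpha)

# Illustrative choice: a tiny dimensionless UV-brane soft mass.
m_uv_hat = 1e-12
print(kL_T(m_uv_hat, alpha=1.5))   # ~13.8
print(kL_T(m_uv_hat, alpha=0.5))   # ~27.6
```

For $\alpha \geq 1$ the result is independent of $\alpha$, while for $0 < \alpha < 1$ the transition is pushed to larger $L$, consistent with the two branches of Eq.~(\ref{LT}).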
As shown in Appendix~\ref{app:LightStates}, the above transition
signals whether a KK state parametrically lighter than the KK
scale, $k \, e^{-kL}$, exists or not. For $L \gg L_{T}$ (``large
$m_{UV}$'') and $\alpha > 0$, one finds
\beqa
V_{\rm Casimir} &\approx& \frac{k^{4} e^{-2(2+\alpha)kL}}{16\pi^{2}} \frac{2^{1-2\alpha}(\hat{m}_{UV} - 2 \alpha) }{\hat{m}_{UV} \Gamma(\alpha) \Gamma(\alpha + 1)}
\int_{0}^{\infty} \! dt \, t^{3+2\alpha} \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)}~,
\label{VCasLargemUV}
\eeqa
while the expressions for $\alpha = 0$ and $\alpha < 0$ are given in
Eqs.~(\ref{VCasimirApproxLargemUValpha0}) and
(\ref{VCasLargemUVNegalpha}) of Appendix~\ref{app:AnalyticApprox},
respectively. For $L \ll L_{T}$ (``small $m_{UV}$''), which is the
case with a light would-be zero mode in the spectrum, but $kL \gg 1$,
one has to distinguish several cases. For instance, for moderate
localization of this mode near the IR brane, one finds
\beqa
V_{\rm Casimir} &\approx& \frac{k^{4} e^{-4kL}}{16\pi^{2}}
\int_{0}^{\infty} \! dt \, t^{3} \ln\left| 1 - \frac{2 \sin(\pi\alpha)}{\pi} \, \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)} \right|~,
\hspace{1cm} \textrm{for } 0 < \alpha < 1~.
\label{VCasSmallmUV0alpha1}
\eeqa
For the flat case, $\alpha = 1$:
\beqa
V_{\rm Casimir} &\approx& - \frac{k^{4} e^{-4kL}}{16\pi^{2}} \, \frac{1}{k L + \hat{r}_{UV}}
\int_{0}^{\infty} \! dt \, t^{3} \frac{\tilde{K}^{IR}_{\alpha=1}(t)}{\tilde{I}^{IR}_{\alpha=1}(t)}~,
\label{VCasSmallmUValpha1}
\eeqa
where, for simplicity, we have ignored the numerically small Euler
constant, as well as a $\ln(t/2)$ term that makes a relatively small
difference [see third line in Eq.~(\ref{UVRatioSmallmUV}) of
Appendix~\ref{app:AnalyticApprox}]. Finally, for moderate
localization near the UV brane one finds
\beqa
V_{\rm Casimir} &\approx& - \frac{k^{4} e^{-2(\alpha+1)kL}}{16\pi^{2}} \, \frac{2^{3 - 2 \alpha}}{\Gamma(\alpha) \left[ \Gamma(\alpha - 1) + 2\hat{r}_{UV} \Gamma(\alpha) \right]}
\int_{0}^{\infty} \! dt \, t^{1+2\alpha} \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)}~,
\hspace{5mm} \textrm{for } 1 < \alpha < 2~.
\nonumber \\
\label{VCasSmallmUV1alpha2}
\eeqa
The case of more extreme IR localization (with $\alpha \leq 0$) is
given in Eq.~(\ref{VCasLargemUVNegalpha}), while for more extreme UV
localization (with $\alpha \geq 2$) the result can be easily obtained
from Eq.~(\ref{UVRatioSmallmUV}) of Appendix~\ref{app:AnalyticApprox}.
In Eqs.~(\ref{VCasSmallmUV0alpha1}), (\ref{VCasSmallmUValpha1}) and
(\ref{VCasSmallmUV1alpha2}), we have ignored terms subleading in
$\hat{m}_{UV}$. In SUSY theories that are softly broken by UV brane
localized masses, the above leading contributions cancel out within a
given supermultiplet, and one should keep the terms subleading in
$\hat{m}_{UV}$. We postpone such an application to
Section~\ref{sec:susy}. For the time being we simply make a few
observations that follow from the previous results. First, when
$\hat{m}_{UV}$ is large in the sense defined above, the Casimir energy
is suppressed by a factor $e^{-2\alpha k L}$ on top of the naive
$e^{-4kL}$ expected from the scaling of the KK masses. This happens
for any $\alpha > 0$ [see Eq.~(\ref{VCasLargemUV})]. Second, for
small $\hat{m}_{UV}$, fields localized near the IR brane give a radion
dependence identical to the one arising from an IR brane tension, up
to exponentially small terms [see Eq.~(\ref{VCasSmallmUV0alpha1})].
Third, as pointed out in Ref.~\cite{Garriga:2002vf}, flat fields
produce only a volume suppression [see Eq.~(\ref{VCasSmallmUValpha1})]
that allows their contribution to be plausibly comparable to the IR
brane tension contribution, and in certain cases can lead to radion
stabilization. And finally, fields localized towards the UV brane
give a contribution that is exponentially small compared to that of
the IR brane tension [see Eq.~(\ref{VCasSmallmUV1alpha2})].
In applications, one often finds fields obeying Dirichlet boundary
conditions on both branes. The contribution to the Casimir energy
from such fields can be easily obtained by taking the limits
$\hat{m}_{UV} \to \infty$ and $\hat{m}_{IR} \to \infty$ [e.g.~in
Eq.~(\ref{VCasLargemUV})]. For the Casimir energy part, one finds
that for $\alpha > 0$:
\beqa
V^{(-,-)}_{\rm Casimir} &\approx& \frac{B^{(-,-)}_{IR} k^{4}}{16\pi^{2}} \, e^{-2(2+\alpha)kL}~,
\label{VCasminusFields}
\eeqa
where
\beqa
B^{(-,-)}_{IR} &=& - \frac{2^{1 - 2 \alpha}}{\Gamma(\alpha) \Gamma(\alpha + 1)}
\int_{0}^{\infty} \! dt \, t^{3+2\alpha} \frac{K_{\alpha}(t)}{I_{\alpha}(t)}~
\label{BIR_mmFields_alphaPos}
\eeqa
depends only on the localization of the field. We plot
$B^{(-,-)}_{IR}$ in Fig.~\ref{fig:BIRvsralpha}, which shows that it is
always negative. Eq.~(\ref{VCasminusFields}) shows that these effects
are exponentially suppressed compared to the IR brane tension
contribution. The exception occurs for $\alpha \approx 0$, in which
case one should use [see Eq.~(\ref{VCasimirApproxLargemUValpha0}) of
Appendix~\ref{app:AnalyticApprox}]
\beqa
B^{(-,-)}_{IR} &\approx& - \frac{1}{kL} \int_{0}^{\infty} \! dt \, t^{3}
\frac{K_{\alpha = 0}(t)}{I_{\alpha = 0}(t)}~,
\label{BIR_mmFields_alpha0}
\eeqa
instead of Eq.~(\ref{BIR_mmFields_alphaPos}), setting $\alpha = 0$ in
Eq.~(\ref{VCasminusFields}).
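The sign of $B^{(-,-)}_{IR}$ can be cross-checked with elementary numerics. The sketch below is our own stdlib-only implementation (the power-series and integral representations of the Bessel functions, as well as all quadrature cutoffs, are our choices); it confirms that the coefficient of Eq.~(\ref{BIR_mmFields_alphaPos}) is negative for representative values of $\alpha$:

```python
import math

def bessel_i(a, t):
    # Modified Bessel I_a(t) from its power series; adequate for 0 < t <~ 25.
    return sum((t / 2.0) ** (2 * k + a) / (math.gamma(k + 1.0) * math.gamma(k + a + 1.0))
               for k in range(80))

def bessel_k(a, t, umax=12.0, n=600):
    # K_a(t) = int_0^inf exp(-t cosh u) cosh(a u) du, by Simpson's rule.
    h = umax / n
    s = 0.0
    for j in range(n + 1):
        w = 1.0 if j in (0, n) else (4.0 if j % 2 else 2.0)
        u = j * h
        s += w * math.exp(-t * math.cosh(u)) * math.cosh(a * u)
    return s * h / 3.0

def B_IR_dirichlet(alpha, tmax=25.0, n=1000):
    # Eq. (BIR_mmFields_alphaPos), alpha > 0.  The integrand falls off like
    # t^(3+2 alpha) exp(-2t), so the tail beyond tmax is exponentially small.
    h = (tmax - 0.01) / n
    integral = 0.0
    for j in range(n + 1):
        t = 0.01 + j * h
        w = 1.0 if j in (0, n) else (4.0 if j % 2 else 2.0)
        integral += w * t ** (3 + 2 * alpha) * bessel_k(alpha, t) / bessel_i(alpha, t)
    integral *= h / 3.0
    return -2.0 ** (1 - 2 * alpha) / (math.gamma(alpha) * math.gamma(alpha + 1)) * integral

for alpha in (0.5, 1.0, 1.5):
    print(alpha, B_IR_dirichlet(alpha))  # negative for all alpha > 0
```

The overall sign is manifest: both $K_{\alpha}(t)$ and $I_{\alpha}(t)$ are positive for $t>0$, so the integral is positive and the explicit prefactor makes $B^{(-,-)}_{IR}$ negative, as Fig.~\ref{fig:BIRvsralpha} shows.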
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5 \textwidth]{Figures/BIR_mmfields.eps}
\caption{$B^{(-,-)}_{IR}$ of Eq.~(\ref{BIR_mmFields_alphaPos}) as a
function of $\alpha$. This determines the contribution to the
effective potential due to fields obeying $(-,-)$ boundary conditions
according to Eq.~(\ref{VCasminusFields}). At very small $\alpha$ one
should use Eq.~(\ref{BIR_mmFields_alpha0}), but this detail is
imperceptible in the figure when $kL \gg 1$ since $B^{(-,-)}_{IR}$ is
very small.}
\label{fig:BIRvsralpha}
\end{center}
\end{figure}
For $\alpha < 0$, one finds [see Eq.~(\ref{VCasLargemUVNegalpha}) of
Appendix~\ref{app:AnalyticApprox}, with $\hat{m}_{IR} \to \infty$]:
\beqa
V^{(-,-)}_{\rm Casimir} &\approx& \frac{k^{4} e^{-4kL}}{16\pi^{2}}
\int_{0}^{\infty} \! dt \, t^{3} \ln \left|1 + \frac{2 \sin(\pi\alpha)}{\pi} \, \frac{K_{\alpha}(t)}{I_{\alpha}(t)} \right|~.
\label{BIR_mmFields_alphaNeg}
\eeqa
In fact, in the case of vanishing BKT's, the $(-,-)$ contribution of a
field with index $\alpha$ is identical to that of a field with
vanishing localized masses and index $\alpha+1$ (see
Section~\ref{sec:OtherSpins}). The results of this section will be
used in Sections~\ref{sec:OtherSpins} and \ref{sec:susy}.
\section{Higher Spins and SUSY Multiplets}
\label{sec:OtherSpins}
In this section we give a brief summary of the contributions to the
radion potential from 5D fields of spin-$1/2$, $1$, $3/2$ and $2$, and
then give the results for the net effective potential from various
SUSY multiplets. In general, these contributions can be simply
expressed in terms of the effective potential due to a real scalar
field, which was evaluated above. One finds
\beqa
V_{\rm eff}^{s \geq 0}(L) &=& (-1)^{2s}\, n\, V^{\textrm{Real scalar}}_{\rm eff}(L) ~,
\label{Veff_General}
\eeqa
where $V^{\textrm{Real scalar}}_{\rm eff}(L) = V_{\rm eff}(L)$ is
given in Eq.~(\ref{FinalPot}) and $n$ is the number of physical
degrees of freedom of a \textit{massive} 4D field of spin $s$ (taking
into account whether the field is real or complex, in the bosonic
case). Deriving the result above is straightforward for complex
scalars or fermions, but requires adding a gauge-fixing term to the
action for higher spins. One also has to include the associated
Faddeev-Popov determinant, which simply cancels the spurious
contributions to the potential from the gauge degrees of freedom.
Here we just use Eq.~(\ref{Veff_General}) in our applications below
(with $n=2$ for complex scalars, $n=4$ for fermions, $n=3$ for (real)
gauge bosons, $n=5$ for the graviton and $n=8$ for the gravitino).
Based on this, we can put together the total contributions to the
radion potential due to (softly broken) SUSY multiplets. Since the
compactification breaks the 5D $N=1$ SUSY (4D $N=2$) down to 4D $N=1$
SUSY, it is useful to classify the field content in 4D $N=1$ language.
Starting with hypermultiplets, and listing only the on-shell fields,
these consist of two 4D $N=1$ chiral multiplets: $(\Phi_{1},\psi_{1})$
and $(\Phi_{2},\psi_{2})$. If the pair $(\Phi_{1},\psi_{1})$ has
``localization parameter'' (or index) $\alpha$, then the second pair
of fields, $(\Phi_{2},\psi_{2})$, has index $\alpha - 1$. Both
$\Phi_{2}$ and $\psi_{2}$ obey Dirichlet boundary conditions, so that
they do not have zero-modes. For vanishing kinetic terms,
$\hat{r}_{UV} = \hat{r}_{IR} = 0$, and arbitrary UV and/or IR
localized masses for the complex scalar $\Phi_{1}$ (recall that
$\Phi_{2}$ vanishes on the branes) the effective potential is:
\beqa
\left. \rule{0mm}{5mm} V^{\rm Hyper}_{\rm eff} \right|_{\hat{r}_{i} = 0}
&=& 2 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha;\hat{r}_{i} = 0;
\hat{m}_{i}} + 2 \times \left. V^{(-,-)}_{\rm eff}(L) \right|_{\alpha-1}
- 4 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha;\hat{r}_{i} = 0;
\hat{m}_{i}=0}~,
\label{Veff_Hyper_nori}
\eeqa
where $V_{\rm eff}(L)$ is given in Eq.~(\ref{FinalPot}), and
$V^{(-,-)}_{\rm eff}(L)$, which depends only on the index $\alpha$, is
the effective potential for a real scalar obeying Dirichlet boundary
conditions on both branes [the (approximate) Casimir energy part is
given in Eqs.~(\ref{BIR_mmFields_alphaPos}),
(\ref{BIR_mmFields_alpha0}) or (\ref{BIR_mmFields_alphaNeg}),
according to the value of $\alpha$]. We have also indicated the index
$\alpha$ of the ``generalized'' Bessel functions [see
Eqs.~(\ref{tildeKUV})--(\ref{tildeIIR})] to be used in each term. The
fermions $\psi_{1}$ and $\psi_{2}$ contribute together as a KK tower
of Dirac fermions through the last term in
Eq.~(\ref{Veff_Hyper_nori}). The total effective potential above
vanishes in the SUSY limit, i.e.~when $\hat{m}_{i} = 0$. Indeed, in
this case the approximate expressions~(\ref{VCasSmallmUV0alpha1}),
(\ref{VCasSmallmUValpha1}) or (\ref{VCasSmallmUV1alpha2}) for $V_{\rm
Casimir}$ hold, depending on the value of $\alpha$, and one can
explicitly see that the Casimir energy contributions from the
components of the hypermultiplet cancel out. More generally, the
exact cancellations follow from
\beqa
\left. \frac{\tilde{K}^{i}_{\alpha}(z)}{\tilde{I}^{i}_{\alpha}(z)} \right|_{\hat{r}_{i} = 0;\hat{m}_{i}=0} &=&
\left. \frac{\tilde{K}^{i}_{\alpha-1}(z)}{\tilde{I}^{i}_{\alpha-1}(z)} \right|_{\hat{r}_{i} = 0;\hat{m}_{i}=\infty}~,
\hspace{1.5cm}\textrm{for}~i=UV,IR~,
\eeqa
which implies that the Casimir energies, Eq.~(\ref{CasimirPot}), are
the same for a field with index $\alpha-1$ that obeys Dirichlet
boundary conditions (obtained by taking $\hat{m}_{i} \to \infty$), and
for a field with $\hat{m}_{i}=0$ and index $\alpha$ ({\it i.e.}~the KK
spectra are identical in these two cases).
When the brane kinetic terms are non-zero one has to be more careful,
since supersymmetry requires that the action for the ``odd'' scalar
field, $\Phi_{2}$, contain special singular terms on the branes.
These can be derived by writing the SUSY action in 4D $N=1$ language,
following the formalism of
Refs.~\cite{ArkaniHamed:2001tb,Marti:2001iw}, together with
brane-localized minimal K\"{a}hler terms for the ``even'' chiral
superfield, $(\Phi_{1}, \psi_{1})$, and then integrating out the
auxiliary fields. Besides brane localized terms (with
$\partial_{\mu}$ derivatives) for $\Phi_{1}$ and $\psi_{1}$, one finds
that the bulk kinetic term for $\Phi_{2}$ should be replaced by
\beqa
|\partial_{M} \Phi_{2}|^{2} \to |\partial_{\mu} \Phi_{2}|^{2} - \frac{1}{1+\sum_{i=UV,IR} r_{i} \delta(y-y_{i})} |\partial_{5} \Phi_{2}|^{2}~,
\label{SingularBKTs_OddScalar}
\eeqa
where $y_{UV} = 0$ and $y_{IR} = L$, while $r_{i}$ are the brane
kinetic coefficients for $\Phi_{1}$ and $\psi_{1}$. This is
reminiscent of the singular terms first pointed out in
Ref.~\cite{Mirabelli:1997aj}, and implies that the equation of motion
for $\Phi_{2}$ is identical to the second order differential equation
of motion obeyed by $\psi_{2}$. In particular, the spectrum of
$\Phi_{2}$ at the massive level is identical to the fermion one, as
shown in Ref.~\cite{delAguila:2003bh}. The latter one coincides with
that of a scalar with $\hat{m}_{i}=0$ and arbitrary brane kinetic
coefficients $r_{i}$~\cite{Carena:2004zn}. Note that although
$\Phi_{2}$ obeys Dirichlet boundary conditions on both branes (hence
no brane mass terms for $\Phi_{2}$ are allowed), the singular terms of
Eq.~(\ref{SingularBKTs_OddScalar}) were not taken into account in the
derivation of the effective potential due to real scalars above. In
particular, the contribution to $V_{\rm eff}$ from $\Phi_{2}$ does not
simply correspond to the limit $\hat{m}_{i} \to \infty$ of
Eq.~(\ref{FinalPot}) (except when $\hat{r}_{i}=0$). However, since
the one-loop effective potential, Eq.~(\ref{EffPot}), depends only on
the KK spectrum, we can immediately write down the contribution from
$\Phi_{2}$ as $2 \times \left. V_{\rm eff}(L)
\right|_{\hat{m}_{i}=0}$, where $V_{\rm eff}(L)$ is given in
Eq.~(\ref{FinalPot}). Therefore, we can generalize
Eq.~(\ref{Veff_Hyper_nori}) to
\beqa
V^{\rm Hyper}_{\rm eff} &=& 2 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha;\hat{r}_{i};\hat{m}_{i}} - 2 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha;\hat{r}_{i};\hat{m}_{i}=0}~,
\label{Veff_Hyper}
\eeqa
where the contribution from the ``odd'' scalar precisely cancels half
of the fermion contribution.
For the 5D gauge supermultiplet, consisting of a vector superfield
$(A_{\mu}, \lambda_{1})$ with index $\alpha = 1$, and a chiral
superfield $(\Sigma+i A_{5},\lambda_{2})$ with index $\alpha=0$, one
has
\beqa
V^{\rm Vector}_{\rm eff} &=& 4 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha=1;\hat{r}_{i};\hat{m}_{i}=0} - 4 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha=1;\hat{r}_{i};\hat{m}_{i}}~,
\label{Veff_Vector}
\eeqa
where the second term comes from the gauginos (here $\hat{m}_{i}$
denote brane localized Majorana masses), while the first one arises
from $A_{M}$ (including $A_{5}$ and the associated Faddeev-Popov ghost
contributions) plus the term arising from the odd (real) scalar
$\Sigma$. In the presence of BKT's, the action for the latter field
(after integrating out the auxiliary components) contains singular
terms of the form (\ref{SingularBKTs_OddScalar}) that ensure that the
KK spectrum of the $\Sigma$ field is identical to the gaugino KK
spectrum with $\hat{m}_{i} = 0$ (see Ref.~\cite{delAguila:2003bh} for
details), and its contribution is therefore identical to the gauge
boson one for any $\hat{r}_{i}$.
The 5D supergravity multiplet has the following propagating degrees of
freedom: the f\"unfbein $e^{A}_{M}$, a (Dirac) gravitino $\Psi_{M}$,
and a vector field $B_{M}$ (the graviphoton)~\cite{D'Auria:1981kq}
(see also Ref.~\cite{Altendorfer:2000rr}). The fields containing
zero-modes are $(e_{\mu}^{a}, e_{5}^{\hat{5}}, B_{5}, \psi^{+}_{\mu},
\psi^{-}_{5})$, and their KK wavefunctions have index $\alpha = 2$.
The remaining fields, $(e_{5}^{a}, e_{\mu}^{\hat{5}}, B_{\mu},
\psi^{-}_{\mu}, \psi^{+}_{5})$, obey Dirichlet boundary conditions and
have index $\alpha=1$. Here $\hat{5}$ denotes the fifth tangent space
index, and $\psi_{M}^{\pm} = \frac{1}{\sqrt{2}} \left( \psi^{1}_{M}
\pm \psi^{2}_{M} \right)$ where $\psi^{i}_{M}$ are the two Weyl
components of $\Psi_{M}$. At each KK level $n\neq 0$, the KK
gravitons, $g_{\mu\nu}^{n}$, and KK graviphotons, $B_{\mu}^{n}$, gain
mass by eating $g_{\mu5}^{n}$, $g_{55}^{n}$ and $B_{5}^{n}$, while the
KK gravitinos $\Psi_{\mu}^{n}$ gain mass by eating
$\Psi_{5}^{n}$~\cite{Bagger:2001ep,DeCurtis:2003hs}. Thus, the
effective potential from the gravity supermultiplet is
\beqa
V^{\rm SUGRA}_{\rm eff} &=& 8 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha=2;\hat{r}_{i};\hat{m}_{i}=0} - 8 \times \left. \rule{0mm}{5mm} V_{\rm eff}(L) \right|_{\alpha=2;\hat{r}_{i};\hat{m}_{i}}~,
\label{Veff_Gravity}
\eeqa
where the second term corresponds to the gravitino contribution, while
the first term includes both the KK towers of 4D massive gravitons and
4D massive graviphotons (as in the cases discussed above, the
$B^{n}_{\mu}$, which obey Dirichlet boundary conditions and have
$\alpha=1$, contribute like a $(+,+)$ field with $\alpha=2$ and
vanishing brane mass terms).
Finally, we note that the radion multiplet, which contains the radion
scalar itself, as well as $B_5^0$, and $\Psi_5^0$, consists, before
stabilization, of massless zero modes. As we are mainly interested in
stabilization scenarios based on bulk field quantum effects, we take
the radion multiplet to be massless, at leading order. Therefore, it
does not contribute to the one-loop potential computed above, which only
accounts for the effects of massive KK modes. The contribution of the
radion multiplet to the radion scalar potential is then of higher order
in our treatment and hence ignored. With these results at hand, we
turn to the question of radiative radion stabilization in SUSY
theories in the next section.
\section{Application to SUSY Theories in RS}
\label{sec:susy}
We now consider an application to supersymmetric theories that are
softly broken by boundary masses. As we have seen in the previous
section, the contribution to the radion potential from a pair of
superpartners (that have identical brane-kinetic terms, but may have
different boundary masses) is proportional to $\Delta V_{\rm
eff}(L;\hat{r}_{i},\hat{m}_{i}) = \left. V_{\rm
eff}(L)\right|_{\hat{r}_{i},\hat{m}_{i}=0} - \left. V_{\rm eff}(L)
\right|_{\hat{r}_{i},\hat{m}_{i}}$,~\footnote{\label{signconvention}The
sign corresponds to a gauge-gaugino (or gravity-gravitino) multiplet.
For a scalar-fermion multiplet the sign is opposite, since the soft
mass affects the scalar, which contributes with a plus sign to the
radion potential.} where $V_{\rm eff}(L)$ is given in
Eq.~(\ref{FinalPot}). This takes the form
\beqa
\Delta V_{\rm eff}(L;\hat{r}_{i},\hat{m}_{i}) &=&
\frac{\Lambda^2 k^{2} e^{-4kL}}{16\pi^{2}} u \, \hat{m}_{IR} +
\Delta V_{\rm Casimir}(L) + \textrm{const.}
\label{DeltaVeff}
\eeqa
Here we included a quadratically divergent contribution,\footnote{The
quadratic divergence is proportional to the supertrace of the square
mass matrix, ${\rm sTr}{\cal M}^{2}$, which is non-vanishing in the
observable sector~\cite{Choi:1994xg}. If the model in question
satisfies ${\rm sTr}{\cal M}^{2} = 0$, then only a logarithmic
sensitivity to the cutoff remains~\cite{Intriligator:2007cp}.} where
one can estimate that the cutoff $\Lambda \lesssim 4\pi k$, with $u$
an incalculable dimensionless coefficient that might be expected to be
of order one [we absorb here the term proportional to ${\cal I}_{IR}$
in Eq.~(\ref{FinalPot})]. The first term can be expected to dominate
over $\Delta V_{\rm Casimir}(L)$ unless either $\Lambda \sim k$, or
one is willing to fine-tune $u$ to small values. However, notice that
when $\hat{m}_{IR} = 0$, the first term in Eq.~(\ref{DeltaVeff})
vanishes, and the radion potential is cutoff-independent at one-loop
order. We consider three different scenarios: \\
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5 \textwidth]{Figures/beta1_vs_rIR.eps}
\caption{
$\beta_{1} \equiv \int_{0}^{\infty} \! dt \, t^{3} \left[ \tilde{K}^{IR}_{\alpha=1}(t)/\tilde{I}^{IR}_{\alpha=1}(t) \right]$ as a function of $\hat{r}_{IR}$ for three values of $\hat{m}_{IR}$.
}
\label{fig:beta1vsrIR}
\end{center}
\end{figure}
$\bullet$ \textit{High-scale SUSY breaking on the UV brane}: We
consider a region where $L > L_{T}$, where $L_{T}$ is defined in
Eq.~(\ref{LT}), so that Eq.~(\ref{VCasLargemUV}) applies for the
superpartner with a non-vanishing UV mass term. Taking for
concreteness the case $\alpha = 1$, so that
Eq.~(\ref{VCasSmallmUValpha1}) applies to the superpartner with no
mass terms (e.g. the gauge field), we see that
\beqa
\Delta V_{\rm Casimir}(L) &\approx& \left. V_{\rm Casimir}(L) \right|_{\hat{r}_{i},\hat{m}_{i} = 0}
~\approx~ - \frac{k^{4} e^{-4kL}}{16\pi^{2}} \, \frac{1}{k L + \hat{r}_{UV}}
\int_{0}^{\infty} \! dt \, t^{3} \frac{\tilde{K}^{IR}_{\alpha=1}(t)}{\tilde{I}^{IR}_{\alpha=1}(t)}~,
\label{Casimir_App1}
\eeqa
since the term $\left. -V_{\rm Casimir}(L)
\right|_{\hat{r}_{i},\hat{m}_{i}}$ gives an exponentially small
contribution (suppressed by $e^{-2kL}$, as remarked in the paragraph
after Eq.~(\ref{VCasSmallmUV1alpha2}) of
Subsection~\ref{CasimirEnergy}). This is essentially the result
derived in Ref.~\cite{Garriga:2002vf}, although here we allow for a
non-vanishing $\hat{r}_{UV}$. If SUSY is unbroken on the IR brane,
the first term in Eq.~(\ref{DeltaVeff}) vanishes, and the radion is
not stabilized. Depending on the size of the IR BKT, which can change
the sign of the integral in Eq.~(\ref{Casimir_App1}), the branes can
collapse towards each other or ``run away''.
However, if SUSY is also broken on the IR brane, one can realize the
scenario envisioned in Ref.~\cite{Garriga:2002vf}. In this case, the
radion potential, Eq.~(\ref{FinalPot}), has an extremum at $kL \approx
\beta_{1}/\Delta - \hat{r}_{UV}$, provided $\beta_{1}/\Delta > 0$,
where $\Delta = (\Lambda/k)^{2} u \, \hat{m}_{IR}$, and $\beta_{1}
\equiv \int_{0}^{\infty} \! dt \, t^{3} \left[
\tilde{K}^{IR}_{\alpha=1}(t)/\tilde{I}^{IR}_{\alpha=1}(t) \right]$.
This extremum is a minimum when both $\beta_{1}$ and $\Delta$ are
negative. The calculable coefficient $\beta_{1}$ can indeed be
negative if there is a non-negligible $\hat{r}_{IR}$ or a sizable
$\hat{m}_{IR}$, as shown in Fig.~\ref{fig:beta1vsrIR}. Obtaining a
minimum with $kL \sim 30$ requires that $|\Delta| \sim 1/30$, which
can arise from a suppression due to $\hat{m}_{IR} \ll 1$. Note that
having $\hat{m}_{IR} \ll 1$ does not mean that the superpartner ({\it
e.g.}~the gaugino) is light, since we are assuming that there is a
``large'' source of SUSY breaking on the UV brane (more precisely,
$e^{2kL}\hat{m}_{UV} \gg 1$). In fact, Table~\ref{Zero-mode_masses}
of Appendix~\ref{app:scalars} indicates that the physical soft
breaking mass is of order the KK scale, $M_{\rm KK} \sim 2\times
k\,e^{-kL}$, although there is a region where the soft mass can be
parametrically smaller, of order $k\,e^{-kL}/\sqrt{kL}$. \\
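The location of the extremum quoted above is simple arithmetic, and can be illustrated with sample numbers (the values of $\beta_{1}$, $\Delta$ and $\hat{r}_{UV}$ below are our illustrative choices, not derived quantities; $\Delta$ in particular contains the incalculable coefficient $u$):

```python
# Illustrative inputs: beta1 < 0 can be arranged for sizable r_IR_hat or
# m_IR_hat (see Fig. beta1 vs r_IR); Delta = (Lambda/k)^2 u m_IR_hat is
# incalculable and here simply set to -1/30.
beta1 = -1.0
Delta = -1.0 / 30.0
r_uv_hat = 0.0

# Extremum of the radion potential in this regime: kL ~ beta1/Delta - r_UV_hat,
# a minimum when both beta1 and Delta are negative.
assert beta1 < 0 and Delta < 0
kL_min = beta1 / Delta - r_uv_hat
print(kL_min)  # ~30
```

This makes explicit the statement in the text: a minimum at $kL \sim 30$ requires $|\Delta| \sim 1/30$, which can arise from $\hat{m}_{IR} \ll 1$.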
$\bullet$ \textit{Low-scale SUSY breaking from the UV brane}: We now
consider SUSY breaking on the UV brane only ({\it i.e.} $\hat{m}_{IR}
= 0$, so that the incalculable contribution to the radion potential
vanishes), and focus on the case that $kL \gg 1$ with $L \ll L_{T}$,
as defined in Eq.~(\ref{LT}) ($\hat{m}_{UV}$ so small that there is a
light state in the spectrum). As remarked in the discussion after
Eq.~(\ref{VCasSmallmUV1alpha2}), the leading order contributions
cancel within a given supermultiplet. We can therefore immediately
obtain the leading effect in $\hat{m}_{UV}$ from
Eqs.~(\ref{UVRatioSmallmUV}) and (\ref{VCasimirApproxSmallmUV}) of
Appendix~\ref{app:AnalyticApprox}, for the various possible
localizations of the would-be zero mode. For instance, for IR
localization, $0 < \alpha < 1$:
\beqa
\Delta V_{\rm eff}(L;\hat{r}_{i},\hat{m}_{UV},\hat{m}_{IR} = 0) &\approx& - \frac{k^{4} e^{-2(2-\alpha)kL}}{16\pi^{2}} \, \frac{4^{\alpha} \hat{m}_{UV} \sin^{2}(\pi\alpha) \Gamma(\alpha)^{2}}{\pi^{2}}
\int_{0}^{\infty} \! dt \, t^{3-2\alpha} \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)}~,
\label{VCasSmallmUV0alpha1SUSY}
\eeqa
for the flat case, $\alpha = 1$:
\beqa
\Delta V_{\rm eff}(L;\hat{r}_{i},\hat{m}_{UV},\hat{m}_{IR} = 0) &\approx& - \frac{k^{4} e^{-2kL}}{16\pi^{2}} \, \frac{\hat{m}_{UV}}{(k L + \hat{r}_{UV})^{2}}
\int_{0}^{\infty} \! dt \, t \, \frac{\tilde{K}^{IR}_{\alpha=1}(t)}{\tilde{I}^{IR}_{\alpha=1}(t)}~,
\label{VCasSmallmUValpha1SUSY}
\eeqa
and for ``moderate'' UV localization, $1 < \alpha < 2$:
\beqa
\Delta V_{\rm eff}(L;\hat{r}_{i},\hat{m}_{UV},\hat{m}_{IR} = 0) &\approx&
\nonumber \\ [0.5em]
& & \hspace{-5cm}
- \frac{k^{4} e^{-2\alpha kL}}{16\pi^{2}} \, \frac{4^{2 - \alpha} (\alpha - 1) \hat{m}_{UV}}{\Gamma(\alpha) \left[ \Gamma(\alpha - 1) + 2\hat{r}_{UV} \Gamma(\alpha) \right] [1 + 2 \hat{r}_{UV} (\alpha - 1)]}
\int_{0}^{\infty} \! dt \, t^{2\alpha-1} \frac{\tilde{K}^{IR}_{\alpha}(t)}{\tilde{I}^{IR}_{\alpha}(t)}~.
\label{VCasSmallmUV1alpha2SUSY}
\eeqa
For $\alpha \geq 2$, one also obtains $\Delta V_{\rm
eff}(L;\hat{r}_{i},\hat{m}_{UV},\hat{m}_{IR} = 0) \sim e^{-2\alpha
kL}$. Note that the warp factor dependence is the same for $\alpha =
1 + \delta\alpha$ and $\alpha = 1 - \delta\alpha$, for any
$\delta\alpha$, although the coefficients do not exhibit this
symmetry. In the above formulas it is understood that $\hat{m}_{IR} =
0$ in $\tilde{K}^{IR}_{\alpha}$ and $\tilde{I}^{IR}_{\alpha}$. It is
clear that these potentials do not have a minimum by themselves.
Nevertheless, note that from the point of view of the warp factor
dependence, the case $\alpha = 1$ gives the largest contribution.
Furthermore, notice the ``double-volume'' suppression in
Eq.~(\ref{VCasSmallmUValpha1SUSY}). These properties are important
when there are other quantum mechanical sources of radion
stabilization~\cite{HDEP}. We also point out that the coefficient of
Eq.~(\ref{VCasSmallmUValpha1SUSY}) can have either sign, since for
instance
\beqa
\int_{0}^\infty \! dt \, t \, \frac{\tilde{K}^{IR}_{\alpha = 1}(t)}{\tilde{I}^{IR}_{\alpha = 1}(t)} &\longrightarrow&
\left\{
\begin{array}{lcl}
+ 0.63 & & \textrm{for } \hat{r}_{IR}=0 \\[0.5em]
- 0.10 & & \textrm{for } \hat{r}_{IR}=1
\end{array}
\right.~.
\eeqa
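The two sample values above can be reproduced with elementary numerics. The following stdlib-only sketch (our own Bessel-function implementations; quadrature cutoffs are our choices) evaluates the integral for $\hat{m}_{IR}=0$ and the two quoted values of $\hat{r}_{IR}$:

```python
import math

def bessel_i(a, t):
    # Modified Bessel I_a(t) from its power series; adequate for 0 < t <~ 25.
    return sum((t / 2.0) ** (2 * k + a) / (math.gamma(k + 1.0) * math.gamma(k + a + 1.0))
               for k in range(80))

def bessel_k(a, t, umax=12.0, n=600):
    # K_a(t) = int_0^inf exp(-t cosh u) cosh(a u) du, by Simpson's rule.
    h = umax / n
    s = 0.0
    for j in range(n + 1):
        w = 1.0 if j in (0, n) else (4.0 if j % 2 else 2.0)
        u = j * h
        s += w * math.exp(-t * math.cosh(u)) * math.cosh(a * u)
    return s * h / 3.0

def alpha1_integral(r_ir_hat, tmax=25.0, n=1000):
    # int_0^inf dt t tildeK^IR_1(t)/tildeI^IR_1(t) at m_IR_hat = 0, where
    # (with b_IR = -r_ir_hat t^2 after Wick rotation)
    # tildeK^IR_1(t) = t K_0(t) - r_ir_hat t^2 K_1(t),
    # tildeI^IR_1(t) = t I_0(t) + r_ir_hat t^2 I_1(t).
    h = (tmax - 0.01) / n
    s = 0.0
    for j in range(n + 1):
        t = 0.01 + j * h
        w = 1.0 if j in (0, n) else (4.0 if j % 2 else 2.0)
        num = t * bessel_k(0.0, t) - r_ir_hat * t * t * bessel_k(1.0, t)
        den = t * bessel_i(0.0, t) + r_ir_hat * t * t * bessel_i(1.0, t)
        s += w * t * num / den
    return s * h / 3.0

print(alpha1_integral(0.0))  # ~ +0.63
print(alpha1_integral(1.0))  # ~ -0.10
```

The sign change with $\hat{r}_{IR}$ arises because the brane kinetic term flips the sign of $\tilde{K}^{IR}_{1}(t)$ at moderate $t$, where most of the integral accumulates.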
We now turn to the last scenario we wish to consider:
\\
$\bullet$ \textit{SUSY breaking from the IR brane only}: When
$\hat{m}_{UV} = 0$, but $\hat{m}_{IR} \ll 1$, the effective potential
is determined by the term linear in $\hat{m}_{IR}$ of
Eqs.~(\ref{VCasSmallmUV0alpha1}), (\ref{VCasSmallmUValpha1}) and
(\ref{VCasSmallmUV1alpha2}). For instance, in the case of a vector multiplet with $\alpha = 1$, and for $kL \gg 1$, we have
\beqa
\Delta V_{\rm eff}(L;\hat{r}_{i},\hat{m}_{UV} = 0,\hat{m}_{IR}) &\approx&
\frac{k^{4} e^{-4kL}}{16\pi^{2}} \left[ \frac{\Lambda^{2}}{k^{2}} u \, \hat{m}_{IR} - \frac{B_{IR}\, \hat{m}_{IR} }{kL + \hat{r}_{UV}} \right]~,
\label{VeffmIRalpha1SUSY}
\eeqa
where [we set $\hat{m}_{IR} = 0$ in $\tilde{I}^{IR}_{1}(t)^{2}$]
\beqa
B_{IR} &=& \int_{0}^\infty \! dt \, \frac{t^{3}}{\tilde{I}^{IR}_{1}(t)^{2}} ~\longrightarrow ~
\left\{
\begin{array}{ccc}
1.26 & \textrm{for } \hat{r}_{IR}=0 \\[0.5em]
0.48 & \textrm{for } \hat{r}_{IR}=1
\end{array}
\right.
\eeqa
is manifestly positive. In fact, for arbitrary $\hat{m}_{IR}$ the
Casimir energy part of the potential (i.e. the \textit{difference}
between Eq.~(\ref{VCasSmallmUValpha1}) at $\hat{m}_{IR}=0$ and the
same expression at arbitrary $\hat{m}_{IR}$) is always negative.
Thus, if $u < 0$ this potential does not have an extremum in the
physical (positive) region for $L$. However, if $u > 0$ the effective
potential exhibits a \textit{maximum}. It follows that the
contribution due to matter hypermultiplets with $\alpha = 1$ (which
has an overall minus sign relative to the gauge case, see
footnote~\ref{signconvention}) has a minimum. Hence, a model with more
matter degrees of freedom (with $\alpha \approx 1$) than gauge degrees
of freedom can stabilize the size of the extra dimension through these
quantum effects. However, for this minimum to be at $kL \gg 1$ (for
self-consistency with the above approximations), one must have $u
\Lambda^{2}/k^{2} \ll 1$. For small $kL$, where fields with different
localization parameters $\alpha$ can give comparable contributions to
the radion potential, stabilization may be possible as in the flat
$S^{1}/Z_{2}$ orbifold case~\cite{Ponton:2001hq}.
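For $u > 0$, the location of this maximum is easy to check numerically. The sketch below locates the extremum of the profile $e^{-4kL}\left[a - b/(kL + \hat{r}_{UV})\right]$; the values of $a = u\, \hat{m}_{IR}\, \Lambda^{2}/k^{2}$ and $b = B_{IR}\, \hat{m}_{IR}$ are illustrative choices of ours (with $u \Lambda^{2}/k^{2} \ll 1$), not numbers fixed by the text, and the overall prefactor $k^{4}/16\pi^{2}$ is dropped.

```python
import numpy as np

# Profile of Eq. (VeffmIRalpha1SUSY) for u > 0; a, b, r are illustrative
# assumptions, not values fixed by the text.
a = 0.01   # u * mhat_IR * Lambda^2/k^2 (small, so the extremum sits at kL >> 1)
b = 1.26   # B_IR * mhat_IR, using the quoted B_IR for rhat_IR = 0
r = 0.0    # rhat_UV

def V(x):
    """Radion potential profile as a function of x = kL (prefactor dropped)."""
    return np.exp(-4.0 * x) * (a - b / (x + r))

x = np.linspace(1.0, 300.0, 30000)
v = V(x)
i = int(np.argmax(v))
# for r = 0, dV/dx = 0 gives 4*a*x^2 - 4*b*x - b = 0, i.e.
# x* = (b + sqrt(b^2 + a*b)) / (2a)
print(f"maximum at kL ~ {x[i]:.2f}")
```

With these numbers the maximum sits at $kL \approx b/a \simeq 126$, illustrating how $u \Lambda^{2}/k^{2} \ll 1$ pushes the extremum to $kL \gg 1$.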
\section{Conclusions}
\label{sec:conclusions}
In this work, we examined the use of supersymmetry to control vacuum
energy contributions to the radion potential. Our analysis included
UV- and IR-brane-localized mass terms, to implement soft supersymmetry
breaking, as well as localized kinetic terms. We found that UV-brane
breaking of supersymmetry leads to a diminished Casimir energy
contribution to the radion potential, both through the size of the
soft terms and through an additional volume suppression. This
finding may be useful in constructing models where dynamical EWSB
induces a non-trivial radion potential that is protected from unwanted
large corrections, thus tying the Planck-weak scale hierarchy to the
dynamics of EWSB itself~\cite{HDEP}. As another possibility, we
studied the question of quantum radion stabilization by bulk fields
(without VEV's). Stabilizing the radion at $kL \gg 1$ through the
quantum effects studied in this paper requires that SUSY be broken on
the IR brane. If in addition SUSY is also broken on the UV brane, the
radion stabilization mechanism by flat fields of
Ref.~\cite{Garriga:2002vf} can be easily realized, provided the
\textit{sign} of an incalculable contribution is favorable. Our
results are sufficiently general to be applicable to fields of
arbitrary spin, and obeying arbitrary boundary conditions, and can be
expected to be useful also outside the SUSY context.
\vspace{5mm}
\noindent \textbf{Note added:} After posting the initial version of
this work, we found that some contributions were missing in our
brane-tension renormalization. These contributions signal that,
absent exact supersymmetry, quantum corrections to brane-tensions are
not well-defined in our effective theory, as may also be inferred on
general grounds. We have clarified this point in the new version and
removed discussions dealing with brane-tension renormalization, since
they do not affect our main conclusions outlined in the abstract.
\vspace{5mm}
\noindent
\subsection*{Acknowledgements}
We thank Markus Luty for useful discussions. The work of H.D. is
supported by the United States Department of Energy under Grant
Contract DE-AC02-98CH10886. E.P. is supported by DOE under contract
DE-FG02-92ER-40699.
\section{Introduction and Preliminaries}
Throughout this paper, $\mathbb{N}$ denotes the set of natural numbers. Bakhtin \cite{3} and Czerwik \cite{4} generalized the concept of metric spaces and introduced the notion of $b$-metric spaces as follows:
\begin{definition}
\emph{Let $X$ be a nonempty set. A $b$-metric is a function $d: X \times X \rightarrow [0, \infty)$ such that for all $x,y,z \in X$ and a constant $s \geq 1$, the following conditions are satisfied:}
\emph{(i) $d(x,y)=0$ if and only if $x=y$, }
\emph{(ii) $d(x,y)=d(y,x)$,}
\emph{(iii) $d(x,y) \leq s[d(x,z)+d(z,y)]$.}\\
\emph{The pair $(X,d)$ is called a $b$-metric space and the number $s$ is called the coefficient of $(X,d)$. If we take $s=1$, then $(X,d)$ reduces to a metric space.}
\end{definition}
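The relaxed triangle inequality in (iii) is what separates a $b$-metric from a metric. The following is a minimal numerical sanity check (not a proof) of the standard example $d(x,y)=|x-y|^{2}$, which is a $b$-metric with $s=2$ but not a metric:

```python
import random

def d(x, y):
    # standard example: d(x, y) = |x - y|^2 is a b-metric with s = 2
    return abs(x - y) ** 2

s = 2.0
random.seed(0)
for _ in range(10000):
    x, y, z = (random.uniform(-10.0, 10.0) for _ in range(3))
    assert d(x, y) == d(y, x)                         # (ii) symmetry
    assert d(x, y) <= s * (d(x, z) + d(z, y)) + 1e-9  # (iii) with s = 2

# the ordinary (s = 1) triangle inequality fails, so d is not a metric:
assert d(0, 2) > d(0, 1) + d(1, 2)   # 4 > 1 + 1
```

The bound with $s=2$ follows from $(u+v)^{2} \leq 2u^{2}+2v^{2}$; the random sampling merely illustrates it.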
Let $X$ be a nonempty set equipped with a binary relation $\preccurlyeq$. Then the relation $\preccurlyeq$ is
(i) \emph{reflexive}: if $x \preccurlyeq x$ for all $x \in X$,
(ii) \emph{antisymmetric}: if $x \preccurlyeq y$ and $y \preccurlyeq x$, then $x=y$ for all $x,y \in X$,
(iii) \emph{transitive}: if $x \preccurlyeq y$ and $y \preccurlyeq z$, then $x \preccurlyeq z$ for all $x,y,z \in X$.
A binary relation $\preccurlyeq$ is said to be a \emph{preorder} if it is reflexive and transitive. A preorder is said to be a \emph{partial order} if it is antisymmetric. Let $(X,d,\preccurlyeq)$ be a $b$-metric space with coefficient $s \geq 1$ equipped with a binary relation $\preccurlyeq$. Motivated by Kamihigashi and Stachurski \cite{7} we define the following axiom:
$d$ is $s$-\emph{regular} if for all $x,y,z \in X$ we have
$$x \preccurlyeq y \preccurlyeq z \quad \mbox{implies} \quad \max\{d(x,y),d(y,z)\} \leq s^2 d(x,z).$$
Evidently, if $d$ is regular in the sense of Kamihigashi and Stachurski \cite{7}, then $d$ is $s$-regular but the converse is not true as illustrated by the following example:
Let $X=[0,\infty)$ and define $d:X \times X \rightarrow [0, \infty)$ by $$d(x,y)=\left\{\begin{array}{cc}
(x+y)^2,& \mbox{if}\thinspace \thinspace x \neq y,\\
0, &\mbox{if} \thinspace \thinspace x=y.
\end{array}
\right.$$
Then $(X,d)$ is a $b$-metric space with coefficient $s=2$. Suppose that $X$ is equipped with a preorder relation defined by: for $x,y \in X$, $x \preccurlyeq y$ if and only if $x \leq y$. It can be easily observed that if $x \preccurlyeq y \preccurlyeq z$, then $\max\{d(x,y),d(y,z)\} \leq 4 d(x,z)$ for all $x,y,z \in X$. This implies that $d$ is $s$-regular. Also, $1 \preccurlyeq 2 \preccurlyeq 3$ but $\max \{d(1,2),d(2,3)\}=25 \nleq d(1,3)$. Therefore, $d$ is not regular.
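This example can also be checked numerically. The sketch below samples triples from $[0,\infty)$ and verifies the $b$-metric inequality with $s=2$, the $s$-regularity bound $\max\{d(x,y),d(y,z)\} \leq s^{2}d(x,z)$, and the failure of regularity at the triple $(1,2,3)$:

```python
import random

def d(x, y):
    # the example above: d(x, y) = (x + y)^2 for x != y, and 0 otherwise
    return 0.0 if x == y else (x + y) ** 2

random.seed(1)
for _ in range(10000):
    a, b, c = (random.uniform(0.0, 10.0) for _ in range(3))
    assert d(a, b) <= 2.0 * (d(a, c) + d(c, b)) + 1e-9    # b-metric with s = 2
    x, y, z = sorted((a, b, c))                           # x <= y <= z
    assert max(d(x, y), d(y, z)) <= 4.0 * d(x, z) + 1e-9  # s-regularity, s^2 = 4

# ... but d is not regular: at 1 <= 2 <= 3,
assert max(d(1, 2), d(2, 3)) == 25 and d(1, 3) == 16      # 25 > 16
```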
Kamihigashi and Stachurski \cite{7} and Batsari et al. \cite{2} established fixed point results of order-preserving self mappings in spaces equipped with a transitive binary relation and some distance measures. Recently, Fomenko and Podoprikhin \cite{5,9} generalized the results of Arutyunov \cite{11,1} to the case of families of multivalued mappings in partially ordered sets. They \cite{6,9} introduced the notion of order homotopy and proved that common fixed point and coincidence point properties are preserved under homotopies of families of mappings in ordered sets.
In this paper we establish results on the preservation of coincidence points and common fixed points under order homotopies of families of mappings. The main objective of the paper is to develop a new approach and obtain sufficient conditions that guarantee the existence of coincidence and common fixed points under order homotopies of families of mappings in preordered $s$-regular $b$-metric spaces. We generalize the results of \cite{6,8} to the case of preordered $s$-regular $b$-metric spaces and preordered regular metric spaces. This approach enables us to replace the classical completeness assumption on the underlying space by a preorder together with a $b$-metric satisfying the axiom of $s$-regularity. As a consequence we establish coincidence and common fixed point results under order homotopic families of mappings in the context of metric spaces and $b$-metric spaces from a unified point of view. An example demonstrating the utility of the results obtained is also provided.
\section{Coincidence point results}
In this section we prove some coincidence point results under a homotopy of families of mappings in preordered $s$-regular $b$-metric spaces. As a consequence, we formulate coincidence and fixed point results in preordered regular metric spaces.
For formulating the results, we recall some definitions and notions introduced by Fomenko and Podoprikhin \cite{6,8}.
\begin{definition}
\emph{Let $(X,\preccurlyeq)$ be a preordered set. A mapping $T:X \rightarrow X$} covers (covers from above) \emph{a mapping $S:X \rightarrow X$ on a set $A \subset X$ if and only if for any $x \in A$ such that $S(x) \preccurlyeq T(x)$ ($S(x) \succcurlyeq T(x)$) there exists an element $y \in X$ satisfying $y\preccurlyeq x$ ($y \succcurlyeq x$) and $T(y)=S(x)$. If $A=X$, then we say that $T$} covers (covers from above) \emph{$S$.}
\end{definition}
Let $(X,\preccurlyeq)$ be a preordered set. A mapping $T:X \rightarrow X$ is said to be \emph{isotone} if for all $x, y \in X$
$$x\preccurlyeq y \quad \mbox{implies} \quad T(x) \preccurlyeq T(y).$$
Let $T,S : X \rightarrow X$ be a pair of self mappings on a preordered set $(X,\preccurlyeq)$. Let $\mathcal{C}(T,S,\preccurlyeq)$ denote the set of chains $C$ such that for all $x,y \in C$ we have
(i) $T(x) \preccurlyeq S(x)$,
(ii) $x \prec y$ implies $S(x) \preccurlyeq T(y)$,
(iii) $S(C) \subset T(X)$.
Let $\mathcal{C}^*(T,S,\preccurlyeq)$ denote the set of chains $C$ such that for all $x,y \in C$ we have
(i) $T(x) \succcurlyeq S(x)$,
(ii) $x \prec y$ implies $T(x) \preccurlyeq S(y)$,
(iii) $S(C) \subset T(X)$.
Recall that the dual order of $\preccurlyeq$ is the order $\preccurlyeq^*$ defined by: $x \preccurlyeq y$ if and only if $y \preccurlyeq^* x$. It can be easily observed that $\mathcal{C}^*(T,S,\preccurlyeq)=\mathcal{C}(T,S,\preccurlyeq^*)$. For a fixed element $x_0 \in X$ define the set
\begin{align*}
O_X(x_0)&:= \{x \in X: x\preccurlyeq x_0\},\\
O_X^*(x_0)&:=\{x \in X: x \succcurlyeq x_0\},\\
\mathcal{C}(x_0,T,S,\preccurlyeq)&:=\{C \in \mathcal{C}(T,S,\preccurlyeq):C \subset O_X(x_0) \thinspace \thinspace \mbox{and} \thinspace \thinspace S(C) \subset T(O_X(x_0)) \},\\
\mathcal{C}^*(x_0,T,S,\preccurlyeq)&:=\{C \in \mathcal{C}^*(T,S,\preccurlyeq):C \subset O_X^*(x_0) \thinspace \thinspace \mbox{and} \thinspace \thinspace S(C) \subset T(O_X^*(x_0)) \},\\
\mbox{Coin}(T,S)&:=\{x \in X: Tx=Sx\},\\
\mbox{Fix}(T)&:=\{ x \in X: Tx=x \}.
\end{align*}
Recall that an element $x \in X$ is said to be a \emph{minimal} (\emph{maximal}) element of $X$ if there is no $y \in X$ such that $y \prec x$ ($x \prec y$).
\begin{theorem}\label{theorem1}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $T,S:X \rightarrow X$ be self mappings such that $T(x_0) \preccurlyeq S(x_0)$ for some $x_0 \in X$. Suppose that the following conditions are satisfied:
(i) the mapping $T$ is isotone,
(ii) the mapping $S$ covers the mapping $T$ on the set $O_X(x_0)$,
(iii) any chain $C \in \mathcal{C}(x_0,T,S,\preccurlyeq)$ has a lower bound $w \in X$ satisfying $w \preccurlyeq T(w)$ and there exists $z \in X$ such that
\begin{align*}
&S^i(z) \preccurlyeq S(w) \preccurlyeq T(w) \quad \mbox{for all} \thinspace \thinspace \thinspace i \in \mathbb{N},\\
&d(T^i(w),S^i(z)) \rightarrow 0 \quad \mbox{as} \thinspace \thinspace \thinspace i \rightarrow \infty.
\end{align*}
Then the set $ \mbox{Coin}(T,S) \cap O_X(x_0)$ is nonempty and contains a minimal element.
\end{theorem}
\begin{proof}
The proof is divided into four steps: the existence of an element of $ \mbox{Coin}(T,S) \cap O_X(x_0)$ is proven in Steps $1$ to $3$ and its minimality is established in Step $4$. \\
\textbf{Step 1} Set
$$V=\{x \in O_X(x_0): T(x)\preccurlyeq S(x), \thinspace S(x) \in T(O_X(x_0)) \}.$$
We show that $V$ is a nonempty set. Since $\preccurlyeq$ is reflexive, $x_0 \in O_X(x_0)$. Also, $T(x_0) \preccurlyeq S(x_0)$. By (ii) there exists $x_1 \in X$ satisfying
$$x_1 \preccurlyeq x_0 \quad \mbox{and} \quad T(x_0)=S(x_1).$$
Since $T$ is isotone, $T(x_1) \preccurlyeq S(x_1)$. Also, $S(x_1)=T(x_0) \in T(O_X(x_0))$, which implies that $x_1 \in V$. Define an order relation on $V$ as follows: $v_1 \sqsubseteq v_2$ if either $v_1=v_2$, or $v_1 \prec v_2$ and $S(v_1) \preccurlyeq T(v_2)$. It is easily seen that $\sqsubseteq$ is reflexive and antisymmetric. Suppose that $v_1,v_2,v_3 \in V$, $v_1 \sqsubseteq v_2$ and $v_2 \sqsubseteq v_3$. If $v_1=v_2$ or $v_2=v_3$, then evidently $v_1 \sqsubseteq v_3$. If $v_1 \prec v_2$, $v_2 \prec v_3$, $S(v_1) \preccurlyeq T(v_2)$ and $S(v_2) \preccurlyeq T(v_3)$, then $v_1 \prec v_3$. Also, $v_2 \in V$ gives $T(v_2) \preccurlyeq S(v_2)$, which implies that $S(v_1) \preccurlyeq T(v_3)$. Therefore, $v_1 \sqsubseteq v_3$, so $\sqsubseteq$ is a partial order on $V$. By the Hausdorff maximal principle, there exists a maximal chain $C \subset V$ with respect to the order $\sqsubseteq$.\\
\textbf{Step 2} We show that $C \in \mathcal{C}(x_0,T,S,\preccurlyeq)$. By the definition of $V$ we have $C \subset O_X(x_0)$, $S(C) \subset T(O_X(x_0))$ and $T(x) \preccurlyeq S(x)$ for all $x \in C$. Let $x,y \in C$ and $x \prec y$. Since $C$ is a chain, $x \sqsubseteq y$, which gives $S(x) \preccurlyeq T(y)$. Therefore, $C \in \mathcal{C}(x_0,T,S,\preccurlyeq)$. By (iii), there exists a lower bound $w \in X$ of $C$ with respect to the order $\preccurlyeq$.\\
\textbf{Step 3} It is to be shown that $w \in \mbox{Coin}(T,S) \cap O_X(x_0)$. Since $w$ is a lower bound of $C$, $w \preccurlyeq x$ for all $x \in C$. Also, $C \subset O_X(x_0)$ and $\preccurlyeq$ is transitive. Therefore, $w \preccurlyeq x_0$ which implies that $w \in O_X(x_0)$. Consider
\begin{align}\label{equation1}
d(T(w),S(w)) & \leq sd(T(w),T^i(w))+sd(T^i(w),S(w)) \nonumber \\
&\leq s^2 d(T(w),S^i(z))+2s^2d(T^i(w),S^i(z))+s^2d(S^i(z),S(w)).
\end{align}
Since $S^i(z) \preccurlyeq S(w) \preccurlyeq T(w)$ for all $i \in \mathbb{N}$ and $d$ is $s$-regular, $d(S^i(z),S(w)) \leq s^2 d(S^i(z),T(w))$. Therefore, (\ref{equation1}) becomes
\begin{align}\label{equation2}
d(T(w),S(w)) \leq (s^4+s^2)d(S^i(z),T(w))+2s^2d(T^i(w),S^i(z)).
\end{align} Also, since $w \preccurlyeq T(w)$ and $T$ is isotone, $T(w) \preccurlyeq T^2(w)$. Continuing likewise we get $w \preccurlyeq T(w) \preccurlyeq T^i(w)$ for all $i \in \mathbb{N}$. Therefore, $S^i(z) \preccurlyeq T(w) \preccurlyeq T^i(w)$, which gives $d(S^i(z),T(w)) \leq s^2 d(S^i(z),T^i(w))$. Using (\ref{equation2}) we have
\begin{align*}
d(T(w),S(w)) \leq (s^6+s^4+2s^2)d(T^i(w),S^i(z)).
\end{align*}
Letting $i \rightarrow \infty$ we get $T(w)=S(w)$. Therefore, $w \in \mbox{Coin}(T,S) \cap O_X(x_0)$.\\
\textbf{Step 4} We claim that $w$ is a minimal element of the set $ \mbox{Coin}(T,S) \cap O_X(x_0)$. Assume on the contrary that there exists $u \in \mbox{Coin}(T,S) \cap O_X(x_0) $ satisfying $u \prec w$. Note that $u \in V$, since $T(u) \preccurlyeq S(u)$ and $S(u)=T(u) \in T(O_X(x_0))$. As $w$ is a lower bound of $C$, $u \prec w \preccurlyeq x$ for every $x \in C$. Since $T$ is isotone, $S(u)=T(u) \preccurlyeq T(w) \preccurlyeq T(x)$, which gives $u \sqsubseteq x$ for all $x \in C$. Hence $C \cup \{u\}$ is a chain in $V$ with respect to $\sqsubseteq$. As $C$ is a maximal chain, $u \in C$, which gives $w \preccurlyeq u$, a contradiction. Hence, $w$ is a minimal element of the set $ \mbox{Coin}(T,S) \cap O_X(x_0)$.
\end{proof}
The dual version of Theorem \ref{theorem1} can be stated as follows:
\begin{theorem}\label{theorem3}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $T,S:X \rightarrow X$ be self mappings such that $T(x_0) \succcurlyeq S(x_0)$ for some $x_0 \in X$. Suppose that the following conditions hold:
(i) the mapping $T$ is isotone,
(ii) the mapping $S$ covers the mapping $T$ from above on the set $O_X^{*}(x_0)$,
(iii) any chain $C \in \mathcal{C}^*(x_0,T,S,\preccurlyeq)$ has an upper bound $w \in X$ satisfying $w \succcurlyeq T(w)$ and there exists $z \in X$ such that
\begin{align*}
&S^i(z) \succcurlyeq S(w) \succcurlyeq T(w) \quad \mbox{for all} \thinspace \thinspace i \in \mathbb{N},\\
&d(T^i(w),S^i(z)) \rightarrow 0 \quad \mbox{as} \thinspace \thinspace \thinspace i \rightarrow \infty.
\end{align*}
Then the set $\mbox{Coin}(T,S) \cap O_X^*(x_0)$ is nonempty and contains a maximal element.
\end{theorem}
Podoprikhin and Fomenko \cite{8} generalized the notion of order homotopy of isotone mappings introduced by Walker \cite{10} as follows:
Let $(X,\preccurlyeq)$ be a preordered set. A pair of mappings $T,S:X \rightarrow X$ is said to be \emph{order homotopic} if there exists a finite sequence of mappings $H_i:X \rightarrow X$ $(i=0,1,\ldots,n)$ such that
$$T=H_0\preccurlyeq H_1 \succcurlyeq H_2 \preccurlyeq \ldots \succcurlyeq H_n=S,$$
where $H_i \preccurlyeq H_j$ if and only if $H_i(x) \preccurlyeq H_j (x)$ for all $x \in X$.
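As a toy illustration (our own, not taken from the cited papers), a pair of maps on $X=[0,1]$ with the usual order can be connected by a one-step fence $T = H_0 \preccurlyeq H_1 \succcurlyeq H_2 = S$, where the pointwise comparisons are easy to verify numerically:

```python
import numpy as np

# illustrative maps on X = [0, 1] with the usual order (our own toy example)
H0 = lambda x: x / 2.0    # T
H1 = lambda x: x          # intermediate map of the homotopy
H2 = lambda x: x ** 2     # S

xs = np.linspace(0.0, 1.0, 1001)
assert np.all(H0(xs) <= H1(xs))   # H0 ≼ H1 pointwise, since x/2 <= x on [0,1]
assert np.all(H2(xs) <= H1(xs))   # H1 ≽ H2 pointwise, since x^2 <= x on [0,1]
# hence T = H0 and S = H2 are order homotopic with n = 2
```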
\begin{theorem}\label{theorem2}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $(T,S)$ and $(\Tilde{T},\Tilde{S})$ be pairs of self mappings on $X$. Suppose that there are homotopies $\{H_t\}_{0 \leq t \leq n}$ and $\{K_t\}_{0 \leq t \leq n}$ between these pairs of mappings such that
\begin{align*}
T=H_0 \preccurlyeq H_1 \succcurlyeq H_2 \preccurlyeq \ldots \succcurlyeq H_n=\Tilde{T},\\
S=K_0 \succcurlyeq K_1 \preccurlyeq K_2 \succcurlyeq \ldots \preccurlyeq K_n=\Tilde{S}.
\end{align*}
Suppose that $x_0 \in \mbox{Coin}(T,S)$ and the following conditions are satisfied:
(i) for each odd $t$, $1 \leq t \leq n$, $K_t$ is isotone, the mapping $H_t$ covers the mapping $K_t$ and any chain $C \in \mathcal{C}(x_0,K_t,H_t,\preccurlyeq)$ has a lower bound $w \in X$ satisfying $w \preccurlyeq K_t(w)$ and there exists $z \in X$ such that for all $i \in \mathbb{N}$ we have $$H_t^i(z) \preccurlyeq H_t(w) \preccurlyeq K_t(w) \quad \mbox{and} \quad d(K_t^i(w),H_t^i(z)) \rightarrow 0 \thinspace \thinspace \mbox{as} \thinspace \thinspace i \rightarrow \infty,$$
(ii) for each even $t$, $1 \leq t \leq n$, $H_t$ is isotone, the mapping $K_t$ covers the mapping $H_t$ and any chain $C' \in \mathcal{C}(x_0,H_t,K_t,\preccurlyeq)$ has a lower bound $w' \in X$ satisfying $w' \preccurlyeq H_t(w')$ and there exists $z' \in X$ such that for all $i \in \mathbb{N}$ we have $$K_t^i(z') \preccurlyeq K_t(w') \preccurlyeq H_t(w') \quad \mbox{and} \quad d(H_t^i(w'),K_t^i(z')) \rightarrow 0 \thinspace \thinspace \mbox{as} \thinspace \thinspace i \rightarrow \infty.$$
Then there exists a chain
$$x_0\succcurlyeq x_1 \succcurlyeq x_2 \succcurlyeq \ldots \succcurlyeq x_n$$
such that for every $t$, $1 \leq t \leq n$, $x_t \in \mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $\mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$.
\end{theorem}
\begin{proof}
Since $x_0 \in \mbox{Coin}(H_0,K_0)$, $K_1(x_0) \preccurlyeq K_0 (x_0)=H_0(x_0) \preccurlyeq H_1(x_0)$ which implies that $K_1(x_0) \preccurlyeq H_1(x_0)$. Using (i) we infer that all the conditions of Theorem \ref{theorem1} are satisfied. Therefore, there exists $x_1 \in \mbox{Coin}(H_1,K_1)\cap O_X(x_0)$ such that $x_1$ is a minimal element of the set $ \mbox{Coin}(H_1,K_1) \cap O_X(x_0)$. Consider $H_2(x_1) \preccurlyeq H_1(x_1)=K_1(x_1) \preccurlyeq K_2(x_1)$ which gives $H_2(x_1) \preccurlyeq K_2(x_1)$. As $\mathcal{C}(x_1,H_2,K_2, \preccurlyeq) \subset \mathcal{C}(x_0,H_2,K_2, \preccurlyeq)$ and using (ii) we deduce that all the conditions of Theorem \ref{theorem1} are satisfied. Therefore, there exists $x_2 \in \mbox{Coin}(H_2,K_2) \cap O_X(x_1)$ such that $x_2$ is a minimal element of the set $ \mbox{Coin}(H_2,K_2) \cap O_X(x_1)$. Repeating this process we obtain a chain $$x_0\succcurlyeq x_1 \succcurlyeq x_2 \succcurlyeq \ldots \succcurlyeq x_n,$$ where $x_t \in \mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $ \mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$, $1 \leq t \leq n$.
\end{proof}
The following result is an immediate consequence of Theorem \ref{theorem1} and Theorem \ref{theorem3}:
\begin{theorem}\label{theorem4}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $(T,S)$ and $(\Tilde{T},\Tilde{S})$ be pairs of self mappings on $X$. Suppose that there are homotopies $\{H_t\}_{0 \leq t \leq n}$ and $\{K_t\}_{0 \leq t \leq n}$ between these pairs of mappings such that
\begin{align*}
T=H_0 \preccurlyeq H_1 \succcurlyeq H_2 \preccurlyeq \ldots \succcurlyeq H_n=\Tilde{T},\\
S=K_0 \succcurlyeq K_1 \preccurlyeq K_2 \succcurlyeq \ldots \preccurlyeq K_n=\Tilde{S}.
\end{align*}
Suppose that the mappings $H_t$ are isotone, $1 \leq t \leq n$, $x_0 \in \mbox{Coin}(T,S)$ and the following conditions are satisfied:
(i) for each odd $t$, $1 \leq t \leq n$, the mapping $K_t$ covers the mapping $H_t$ from above and any chain $C \in \mathcal{C}^*(H_t,K_t,\preccurlyeq)$ has an upper bound $w \in X$ satisfying $w \succcurlyeq H_t(w) $ and there exists $z \in X$ such that for all $i \in \mathbb{N}$ we have $$K_t^i(z) \succcurlyeq K_t(w) \succcurlyeq H_t(w) \quad \mbox{and} \quad d(H_t^i(w),K_t^i(z)) \rightarrow 0 \thinspace \thinspace \mbox{as} \thinspace \thinspace i \rightarrow \infty,$$
(ii) for each even $t$, $1 \leq t \leq n$, the mapping $K_t$ covers the mapping $H_t$ and any chain $C \in \mathcal{C}(H_t,K_t,\preccurlyeq)$ has a lower bound $w' \in X$ satisfying $w' \preccurlyeq H_t(w')$ and there exists $z' \in X$ such that for all $i \in \mathbb{N}$ we have $$K_t^i(z') \preccurlyeq K_t(w') \preccurlyeq H_t(w') \quad \mbox{and} \quad d(H_t^i(w'),K_t^i(z')) \rightarrow 0 \thinspace \thinspace \mbox{as} \thinspace \thinspace i \rightarrow \infty.$$
Then there exists a fence
$$x_0 \preccurlyeq x_1 \succcurlyeq x_2 \preccurlyeq \ldots \succcurlyeq x_n$$
such that for each odd $t$, $1 \leq t \leq n$, $x_t \in \mbox{Coin}(H_t,K_t) \cap O_X^*(x_{t-1})$ and $x_t$ is a maximal element of the set $\mbox{Coin}(H_t,K_t) \cap O_X^*(x_{t-1})$ and for each even $t$, $1 \leq t \leq n$, $x_t \in \mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $ \mbox{Coin}(H_t,K_t) \cap O_X(x_{t-1})$.
\end{theorem}
Evidently, the identity mapping $I:X \rightarrow X$ covers (covers from above) any mapping $T:X \rightarrow X$. Therefore, we have the following fixed point result under a homotopy of self mappings:
\begin{theorem}\label{theorem5}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $T, \Tilde{T}:X \rightarrow X$ be isotone mappings and let $\{H_t\}_{0\leq t \leq n}$ be a homotopy between $T$ and $\Tilde{T}$ such that
$$T=H_0 \preccurlyeq H_1 \succcurlyeq H_2 \preccurlyeq \ldots \succcurlyeq H_n=\Tilde{T}.$$
Suppose that $x_0 \in \mbox{Fix}(T)$ and the following conditions are satisfied:
(i) for each odd $t$, $1 \leq t \leq n$, any chain $C \in \mathcal{C}^*(H_t,I, \preccurlyeq)$ has an upper bound $w \in X$ and there exists $z \in X$ such that for all $i \in \mathbb{N}$ we have $$z \succcurlyeq w \succcurlyeq H_t(w) \quad \mbox{and} \quad d(H_t^i(w),z) \rightarrow 0,$$
(ii) for each even $t$, $1 \leq t \leq n$, any chain $C \in \mathcal{C}(H_t,I,\preccurlyeq)$ has a lower bound $w' \in X$ and there exists $z' \in X$ such that for all $i \in \mathbb{N}$ we have $$z' \preccurlyeq w' \preccurlyeq H_t(w') \quad \mbox{and} \quad d(H_t^i(w'),z') \rightarrow 0.$$
Then there exists a fence
$$x_0 \preccurlyeq x_1 \succcurlyeq x_2 \preccurlyeq \ldots \succcurlyeq x_n$$
such that for each odd $t$, $1 \leq t \leq n$, $x_t \in \mbox{Fix}(H_t) \cap O_X^*(x_{t-1})$ and $x_t$ is a maximal element of the set $\mbox{Fix}(H_t) \cap O_X^*(x_{t-1})$ and for each even $t$, $1 \leq t \leq n$, $x_t \in \mbox{Fix}(H_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $\mbox{Fix}(H_t) \cap O_X(x_{t-1})$.
\end{theorem}
\begin{remark}
\emph{Theorems \ref{theorem1}--\ref{theorem5} also hold if $(X,d,\preccurlyeq)$ is assumed to be a preordered regular metric space.}
\end{remark}
We conclude this section with the following example to illustrate the efficiency of our results:
\begin{example}
\emph{Let $X=[0,2]$ and define $d: X \times X \rightarrow [0, \infty)$ by $d(x,y)=(x-y)^2$. Then $(X,d)$ is a $b$-metric space with coefficient $s=2$ and $d$ is $2$-regular. Suppose that $X$ is equipped with a preorder given by: for $x,y \in X$, $x \preccurlyeq y$ if and only if $x \leq y$. Define $T,H,\Tilde{T}: X \rightarrow X$ by
\begin{equation*}
\begin{split}
T(x)&= \frac{x^3}{5},\\
H(x)&=\left\{\begin{array}{ll}
0.2,& \mbox{if}\thinspace \thinspace 0 \leq x \leq 0.2,\\
x, &\mbox{if} \thinspace \thinspace 0.2 \leq x \leq 1,\\
\frac{x^2+3}{4}, &\mbox{if} \thinspace \thinspace 1 \leq x \leq 2,
\end{array}
\right.\\
\Tilde{T}(x)&=\left\{\begin{array}{ll}
\frac{x}{2}+0.1,& \mbox{if}\thinspace \thinspace 0 \leq x \leq 0.2,\\
x, &\mbox{if} \thinspace \thinspace 0.2 \leq x \leq 0.5,\\
\frac{2x+1}{4}, &\mbox{if} \thinspace \thinspace 0.5 \leq x \leq 2.
\end{array}
\right.
\end{split}
\end{equation*}
Then $T \preccurlyeq H \succcurlyeq \Tilde{T}$.
Clearly, $x_0=0 \in \mbox{Fix}(T)$ and $O_X^*(x_0)=[0,2]$. Also, every chain $C \in \mathcal{C}^*(H,I,\preccurlyeq)$ has an upper bound $w=1$ and there exists $z=1+\frac{1}{i} \in X$, $i \in \mathbb{N}$ such that $$ 1+\frac{1}{i} \succcurlyeq 1 \succcurlyeq H(1)$$ and $$d(H^i(w),z)=d\Big(1,1+\frac{1}{i} \Big)=\frac{1}{i^2} \rightarrow 0 \quad \mbox{as} \thinspace \thinspace \thinspace i \rightarrow \infty.$$ Also, every chain $C \in \mathcal{C}(\Tilde{T},I,\preccurlyeq)$ has a lower bound $w'=0.2$ and $z'=0.2-\frac{i}{2i^2+4} \in X$, $i \in \mathbb{N}$ such that $$0.2-\frac{i}{2i^2+4} \preccurlyeq 0.2 \preccurlyeq \Tilde{T}(0.2)$$ and $$d(\Tilde{T}^i(w'),z')=d\Big(0.2,0.2-\frac{i}{2i^2+4}\Big)=\frac{i^2}{(2i^2+4)^2} \rightarrow 0 \quad \mbox{as} \thinspace \thinspace \thinspace i \rightarrow \infty.$$ Then all the conditions of Theorem \ref{theorem5} are satisfied for a preordered $2$-regular $b$-metric space. Therefore, we get a fence $x_0 \preccurlyeq x_1 \succcurlyeq x_2$, where $x_0=0$, $x_1=1$ and $x_2=0.2$. It is observed that $x_1 \in \mbox{Fix}(H) \cap O_X^*(x_0)$ and $x_1$ is a maximal element of the set $\mbox{Fix}(H) \cap O_X^*(x_0)$. Also, $O_X(x_1)=[0,1]$, $x_2 \in \mbox{Fix}(\Tilde{T}) \cap O_X(x_1)$ and $x_2$ is a minimal element of the set $\mbox{Fix}(\Tilde{T}) \cap O_X(x_1)$.}
\end{example}
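The pointwise inequalities $T \preccurlyeq H \succcurlyeq \Tilde{T}$ and the fixed points forming the fence $0 \preccurlyeq 1 \succcurlyeq 0.2$ in this example can be double-checked with a short script (a sanity check, not part of the argument):

```python
def T(x):
    return x ** 3 / 5.0

def H(x):
    if x <= 0.2:
        return 0.2
    if x <= 1.0:
        return x
    return (x ** 2 + 3.0) / 4.0

def T_tilde(x):
    if x <= 0.2:
        return x / 2.0 + 0.1
    if x <= 0.5:
        return x
    return (2.0 * x + 1.0) / 4.0

xs = [2.0 * k / 2000 for k in range(2001)]          # grid on X = [0, 2]
assert all(T(x) <= H(x) + 1e-12 for x in xs)        # T ≼ H pointwise
assert all(T_tilde(x) <= H(x) + 1e-12 for x in xs)  # H ≽ T~ pointwise
# fixed points along the fence x0 = 0, x1 = 1, x2 = 0.2:
assert T(0.0) == 0.0
assert H(1.0) == 1.0
assert abs(T_tilde(0.2) - 0.2) < 1e-12
```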
\section{Common Fixed Point Results}
In this section we prove some common fixed point results under a homotopy of families of mappings in preordered $s$-regular $b$-metric spaces.
Fomenko and Podoprikhin \cite{5} introduced the notion of concordantly isotone mappings as follows:
Let $(X,\preccurlyeq)$ be a preordered set, $\mathcal{I}$ a nonempty index set and $\mathcal{F}=\{f_{\alpha}\}_{\alpha \in \mathcal{I}}$ a family of mappings, where $f_{\alpha}:X \rightarrow X$ for all $\alpha \in \mathcal{I}$. The family $\mathcal{F}$ is said to be \emph{concordantly isotone} if for all $x,y \in X$
$$x \prec y \quad \mbox{implies} \quad f_{\alpha}(x) \preccurlyeq f_{\beta}(y) \thinspace \thinspace \thinspace \mbox{for all} \thinspace \thinspace \alpha,\beta \in \mathcal{I}.$$
Let $\mathcal{C}_1(\mathcal{F},\preccurlyeq)$ denote the set of chains $C \subset \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(X)$ such that
(i) $f_{\alpha}(x) \preccurlyeq x$ for all $x \in C$ and $\alpha \in \mathcal{I}$,
(ii) $x \prec y$ implies $x \preccurlyeq f_{\alpha}(y)$ for all $x,y \in C$ and $\alpha \in \mathcal{I}$.
Let $\mathcal{C}_1^*(\mathcal{F},\preccurlyeq)$ denote the set of chains $C \subset \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(X)$ such that
(i) $f_{\alpha}(x) \succcurlyeq x$ for all $x \in C$ and $\alpha \in \mathcal{I}$,
(ii) $x \prec y$ implies $f_{\alpha}(x) \preccurlyeq y$ for all $x,y \in C$ and $\alpha \in \mathcal{I}$.
Observe that $\mathcal{C}_1^*(\mathcal{F},\preccurlyeq)= \mathcal{C}_1(\mathcal{F},\preccurlyeq^*)$. For a fixed element $x_0 \in X$ define the set
\begin{align*}
\mathcal{C}_1(x_0, \mathcal{F},\preccurlyeq)&:=\{C \in \mathcal{C}_1(\mathcal{F},\preccurlyeq):C \subset O_X(x_0) \cap \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(O_X(x_0)) \},\\
\mathcal{C}_1^*(x_0, \mathcal{F},\preccurlyeq)&:=\{C \in \mathcal{C}_1^*(\mathcal{F},\preccurlyeq):C \subset O_X^*(x_0) \cap \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(O_X^*(x_0)) \},\\
\mbox{ComFix}(\mathcal{F})&:= \{ x \in X: f_{\alpha}(x)=x \thinspace \thinspace \mbox{for all} \thinspace \thinspace \alpha \in \mathcal{I} \}.
\end{align*}
\begin{theorem}\label{theorem6}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space in which a point $x_0$ is fixed and let $\mathcal{F}=\{f_{\alpha}\}_{\alpha \in \mathcal{I}}$ be a concordantly isotone family of mappings satisfying $f_{\alpha}(x_0) \preccurlyeq x_0$ for all $\alpha \in \mathcal{I}$. Suppose that for any chain $C \in \mathcal{C}_1(x_0,\mathcal{F},\preccurlyeq)$ there exists $w \in X$ such that $w$ is a common lower bound of the chains $f_{\alpha}(C)$ for each $\alpha \in \mathcal{I}$ and there exist $z \in X$ and $\beta \in \mathcal{I}$ satisfying
\begin{align*}
&f_{\alpha}(w) \preccurlyeq w \preccurlyeq f_{\beta}^i(z) \quad \mbox{for all} \thinspace \thinspace i \in \mathbb{N},\\
&d(f_{\alpha}^i(w), f_{\beta}^i(z)) \rightarrow 0 \quad \mbox{as} \thinspace \thinspace i \rightarrow \infty.
\end{align*}
Then the set $\mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$ is nonempty and contains a minimal element.
\end{theorem}
\begin{proof}
The proof is divided into three steps: the existence of an element of $\mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$ is established in Steps $1$ and $2$ and its minimality is proven in Step $3$. \\
\textbf{Step 1} We show that $\mathcal{C}_1(x_0, \mathcal{F},\preccurlyeq)$ is nonempty. Since $f_{\alpha} (x_0) \preccurlyeq x_0$, $$f_{\alpha} (x_0) \in O_X(x_0) \cap \bigcup\limits_{ \alpha \in \mathcal{I}} f_{\alpha} (O_X(x_0)).$$ Also, since $f_{\alpha}( x_0) \preccurlyeq x_0$ and $\mathcal{F}$ is concordantly isotone, $f_{\beta}(f_{\alpha}(x_0)) \preccurlyeq f_{\alpha}(x_0)$ for all $\beta \in \mathcal{I}$. Therefore, $\{f_{\alpha}(x_0)\} \in \mathcal{C}_1(x_0,\mathcal{F},\preccurlyeq)$, so $\mathcal{C}_1(x_0,\mathcal{F},\preccurlyeq)$ is nonempty. Define an order relation on $\mathcal{C}_1(x_0, \mathcal{F},\preccurlyeq)$ as follows: $C_1 \trianglerighteq C_2$ if and only if $C_1 \subset C_2$. It is easily seen that $\trianglerighteq$ is a partial order on $\mathcal{C}_1(x_0, \mathcal{F},\preccurlyeq)$. By the Hausdorff maximal principle, there exists a maximal chain $C$ in $\mathcal{C}_1(x_0, \mathcal{F}, \preccurlyeq)$ containing $\{f_{\alpha}(x_0)\}$ with respect to the order $\trianglerighteq$. By our assumption there exists $w \in X$ such that $w$ is a common lower bound of the chains $f_{\alpha}(C)$ for each $\alpha \in \mathcal{I}$ and there exist $z \in X$ and $\beta \in \mathcal{I}$ satisfying
\begin{align*}
&f_{\alpha}(w) \preccurlyeq w \preccurlyeq f_{\beta}^i(z)\\
&d(f_{\alpha}^i(w), f_{\beta}^i(z)) \rightarrow 0.
\end{align*}
\textbf{Step 2} In this step we show that $w \in \mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$. Since $w$ is a common lower bound of the chains $f_{\alpha}(C)$ for all $\alpha \in \mathcal{I}$, we have $w \preccurlyeq f_{\alpha}(x)$ for all $x \in C$ and $\alpha \in \mathcal{I}$. As $f_{\alpha}(x_0) \in C$,
$$w \preccurlyeq f_{\alpha}(f_{\alpha}(x_0)) \preccurlyeq f_{\alpha}(x_0) \preccurlyeq x_0.$$
Then transitivity of $\preccurlyeq$ implies that $w \in O_X(x_0)$. Consider
\begin{equation}\label{equation3}
d(f_{\alpha}(w),w) \leq s d(f_{\alpha}(w), f_{\beta}^i(z))+sd(f_{\beta}^i(z),w).
\end{equation}
Since $f_{\alpha}(w) \preccurlyeq w$ and $\mathcal{F}$ is concordantly isotone, $f_{\alpha}(f_{\alpha}(w))\preccurlyeq f_{\alpha}(w)$. Proceeding likewise we have $f_{\alpha}^i(w) \preccurlyeq f_{\alpha}(w) \preccurlyeq w$ for all $i \in \mathbb{N}$. Therefore, for all $i \in \mathbb{N}$
$$f_{\alpha}^i(w) \preccurlyeq f_{\alpha}(w) \preccurlyeq f_{\beta}^i(z).$$
As $d$ is $s$-regular, $d(f_{\alpha}(w),f_{\beta}^i(z)) \leq s^2 d(f_{\alpha}^i(w),f_{\beta}^i(z))$. Also, $f_{\alpha}^i(w) \preccurlyeq w \preccurlyeq f_{\beta}^i(z)$ and $d$ is $s$-regular gives $d(w, f_{\beta}^i(z)) \leq s^2d(f_{\alpha}^i(w),f_{\beta}^i(z))$. Therefore, (\ref{equation3}) becomes
$$d(f_{\alpha}(w),w) \leq 2s^3 d(f_{\alpha}^i(w),f_{\beta}^i(z)).$$
Letting $i \rightarrow \infty$ we get $f_{\alpha}(w)=w$ for all $\alpha \in \mathcal{I}$. Therefore, $w \in \mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$.\\
\textbf{Step 3} We claim that $w$ is a minimal element of the set $\mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$. Assume on the contrary that there exists $v \in \mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$ satisfying $v \prec w$. As $v=f_{\alpha}(v) \in \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(O_X(x_0))$, $v \in O_X(x_0) \cap \bigcup\limits_{\alpha \in \mathcal{I}}f_{\alpha}(O_X(x_0))$. Since $\preccurlyeq$ is reflexive, $f_{\alpha}(v) \preccurlyeq v$ for all $\alpha \in \mathcal{I}$. Therefore, $f_{\alpha}(x) \preccurlyeq x$ for all $x \in C \cup \{v\}$ and $\alpha \in \mathcal{I}$. As $w$ is a common lower bound of the chains $f_{\alpha}(C)$ and $v \prec w$, $v \preccurlyeq w \preccurlyeq f_{\alpha}(x) \preccurlyeq x$ for all $x \in C$ and $\alpha \in \mathcal{I}$. Then the transitivity of $\preccurlyeq$ implies that $v \preccurlyeq x$ for all $x \in C$. Since $\mathcal{F}$ is concordantly isotone, $f_{\beta}(v) \preccurlyeq f_{\alpha}(x)$, which gives $v \preccurlyeq f_{\alpha}(x)$ for all $x \in C$ and $\alpha \in \mathcal{I}$. Therefore, $C \cup \{v\}$ is a chain in $\mathcal{C}_1(x_0, \mathcal{F},\preccurlyeq)$, which contradicts the maximality of $C$. Hence, $w$ is a minimal element of the set $\mbox{ComFix}(\mathcal{F}) \cap O_X(x_0)$.
\end{proof}
The dual version of Theorem \ref{theorem6} can be stated as follows:
\begin{theorem}\label{theorem7}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space in which a point $x_0$ is fixed and let $\mathcal{F}=\{f_{\alpha}\}_{\alpha \in \mathcal{I}}$ be a concordantly isotone family of mappings verifying $f_{\alpha}(x_0) \succcurlyeq x_0$ for all $\alpha \in \mathcal{I}$. Suppose that for any chain $C \in \mathcal{C}_1^*(x_0,\mathcal{F},\preccurlyeq)$ there exists $w \in X$ such that $w$ is a common upper bound of the chains $f_{\alpha}(C)$ for each $\alpha \in \mathcal{I}$ and there exist $z \in X$ and $\beta \in \mathcal{I}$ satisfying
\begin{align*}
&f_{\alpha}(w) \succcurlyeq w \succcurlyeq f_{\beta}^i(z),\\
&d(f_{\alpha}^i(w), f_{\beta}^i(z)) \rightarrow 0.
\end{align*}
Then the set $\mbox{ComFix}(\mathcal{F}) \cap O_X^*(x_0)$ is nonempty and contains a maximal element.
\end{theorem}
\begin{theorem}\label{theorem8}
Let $(X,d,\preccurlyeq)$ be a preordered $s$-regular $b$-metric space. Let $\mathcal{F}=\{ f_{\alpha} \}_{\alpha \in \mathcal{I}}$ and $\mathcal{G}=\{g_{\alpha}\}_{\alpha \in \mathcal{I}}$ be a pair of families of self-mappings on $X$. Suppose that $\{H_{t,\alpha} \}_{0 \leq t \leq n}$ is a homotopy between $f_{\alpha}$ and $g_{\alpha}$ such that
$$f_{\alpha}=H_{0,\alpha} \preccurlyeq H_{1,\alpha} \succcurlyeq H_{2,\alpha} \preccurlyeq \ldots \succcurlyeq H_{n, \alpha}=g_{\alpha}.$$
Suppose that $x_0 \in \mbox{ComFix}(\mathcal{F})$ and the following conditions are satisfied:
(i) for each $t$, $1 \leq t \leq n$, the family $\mathcal{H}_t=\{H_{t,\alpha}\}_{\alpha \in \mathcal{I}}$ is concordantly isotone,
(ii) for each odd $t$, $1 \leq t \leq n$, for any chain $C \in \mathcal{C}_1^*(\mathcal{H}_t,\preccurlyeq)$ there exists $w \in X$ such that $w$ is a common upper bound of the chains $H_{t,\alpha}(C)$ for all $\alpha \in \mathcal{I}$ and there exists $z \in X$ and $\beta \in \mathcal{I}$ satisfying
\begin{align*}
&H_{t,\alpha}(w) \succcurlyeq w \succcurlyeq H_{t,\beta}^i(z)\\
&d(H_{t,\alpha}^i(w),H_{t,\beta}^i(z)) \rightarrow 0,
\end{align*}
(iii) for each even $t$, $1 \leq t \leq n$, for any chain $C \in \mathcal{C}_1(\mathcal{H}_t,\preccurlyeq)$ there exists $w' \in X$ such that $w'$ is a common lower bound of the chains $H_{t,\alpha}(C)$ for all $\alpha \in \mathcal{I}$ and there exists $z' \in X$ and $\gamma \in \mathcal{I}$ satisfying
\begin{align*}
&H_{t,\alpha}(w') \preccurlyeq w' \preccurlyeq H_{t,\gamma}^i(z')\\
&d(H_{t,\alpha}^i(w'),H_{t,\gamma}^i(z')) \rightarrow 0.
\end{align*}
Then there exists a fence
$$x_0 \preccurlyeq x_1\succcurlyeq x_2 \preccurlyeq \ldots \succcurlyeq x_n$$
such that for each odd $t$, $1 \leq t \leq n$, $x_t \in \mbox{ComFix}(\mathcal{H}_t) \cap O_X^*(x_{t-1})$ and $x_t$ is a maximal element of the set $\mbox{ComFix}(\mathcal{H}_t) \cap O_X^*(x_{t-1})$ and for each even $t$, $1 \leq t \leq n$, $x_t \in \mbox{ComFix}(\mathcal{H}_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $\mbox{ComFix}(\mathcal{H}_t) \cap O_X(x_{t-1})$.
\end{theorem}
\begin{proof}
Since $x_0 \in \mbox{ComFix}(\mathcal{F})$, $H_{1,\alpha}(x_0) \succcurlyeq H_{0,\alpha}(x_0)=f_{\alpha}(x_0)= x_0$ for all $\alpha \in \mathcal{I}$. This gives $H_{1, \alpha}(x_0)\succcurlyeq x_0$ for all $\alpha \in \mathcal{I}$. Using $\mathcal{C}_1^*(x_0,\mathcal{H}_1,\preccurlyeq) \subset \mathcal{C}_1^*(\mathcal{H}_1,\preccurlyeq)$, (i) and (ii) we deduce that all the conditions of Theorem \ref{theorem7} are satisfied. Therefore, there exists $x_1 \in \mbox{ComFix}(\mathcal{H}_1) \cap O_X^*(x_0)$ such that $x_1$ is a maximal element of the set $\mbox{ComFix}(\mathcal{H}_1) \cap O_X^*(x_0)$. Also, $H_{2,\alpha} (x_1)\preccurlyeq H_{1,\alpha}(x_1)=x_1$ for all $\alpha \in \mathcal{I}$ and $\mathcal{C}_1(x_1,\mathcal{H}_2,\preccurlyeq) \subset \mathcal{C}_1(\mathcal{H}_2,\preccurlyeq)$. Using (i) and (iii) we conclude that all the conditions of Theorem \ref{theorem6} are satisfied. Therefore, there exists $x_2 \in \mbox{ComFix}(\mathcal{H}_2) \cap O_X(x_1)$ such that $x_2$ is a minimal element of the set $\mbox{ComFix}(\mathcal{H}_2) \cap O_X(x_1)$. Proceeding likewise we get a fence
$$x_0 \preccurlyeq x_1 \succcurlyeq x_2 \preccurlyeq \ldots \succcurlyeq x_n,$$
where for each odd $t$, $1 \leq t \leq n$, $x_t \in \mbox{ComFix}(\mathcal{H}_t) \cap O_X^*(x_{t-1})$ and $x_t$ is a maximal element of the set $\mbox{ComFix}(\mathcal{H}_t) \cap O_X^*(x_{t-1})$ and for each even $t$, $1 \leq t \leq n$, $x_t \in \mbox{ComFix}(\mathcal{H}_t) \cap O_X(x_{t-1})$ and $x_t$ is a minimal element of the set $\mbox{ComFix}(\mathcal{H}_t) \cap O_X(x_{t-1})$.
\end{proof}
\begin{remark}
\emph{It is observed that Theorems \ref{theorem6}-\ref{theorem8} hold if we take $(X,d,\preccurlyeq)$ to be a preordered regular metric space.}
\end{remark}
\section*{Acknowledgements}
$^*$corresponding author\\
The corresponding author is supported by UGC Non-NET fellowship (Ref.No. Sch/139/Non-NET/Math./Ph.D./2017-18/1028).
2205.14892
\section{Introduction}\label{sec:introduction}
Traditionally, machine learning treats the world as a \textit{closed} and \textit{static} space. In particular, for classification, domain data is assumed to comprise pre-defined classes with stationary class-conditional distributions. Moreover, the datasets used to fit models before deployment are assumed to be available in a single chunk. Practitioners develop such models under controlled lab conditions, where they nowadays rely on tremendous computational resources.
This scarcely applies to many real-world applications, as the world is an \textit{open} space in many facets. For instance, classifiers might be confronted with classes unseen during training. Also, distributions of pre-trained classes might be non-stationary, or models shall learn novel classes within operation mode. These aspects often occur simultaneously, as in image classification, where unknown image categories should be distinguished from known ones showing \textit{concept drifts} (e.g., new data captured with different cameras). It is also in the very nature of biometric systems like face or writer identification to be confronted with known subjects showing concept drifts (e.g., due to aging or environmental changes), novel subjects to enroll, and unknown subjects. There is also a steady quest for making the respective algorithms computationally efficient enough to be applicable on edge devices with limited resources.
\Gls{owr} as formalized by Bendale and Boult~\cite{bendale2015openworld} addresses such constraints and includes three subtasks.
\begin{enumerate*}
\item \emph{Recognize} new samples as either \emph{known} or \emph{unknown}.
\item \emph{Label} new samples either by approving the recognition or defining a new known class.
\item \emph{Adapt} the current model by exploiting updated labels.
\end{enumerate*}
The recognition subtask constitutes an independent research area termed \gls{osr}~\cite{scheirer2012openset} and has received a lot of interest in applications like face recognition~\cite{gunther2017opensetface}, novelty and intrusion detection~\cite{bendale2016opensetnn, henrydoss2017ievm, prijatelj2021novelty}, and forensics~\cite{lorch2020jpeg, maier2020bnn, lorch2021gps}.
Currently \gls{evm} models as proposed by Rudd \emph{et al}\onedot~\cite{rudd2017evm} are state of the art in \gls{osr}.
\Glspl{evm} predict unnormalized class-wise probabilities for query samples to be included in the respective known classes.
Model fitting depends on class negatives; hence, it adapts well to imbalanced data, which is a common problem in incremental learning~\cite{ditzler2010learn++unb, wu2019largeir}.
However, fitting and prediction scale poorly with large datasets, making their use on resource-limited platforms difficult.
Model adaptability can be achieved by cyclic retraining. However, this model-agnostic approach is computationally inefficient, and all data needs to be organized in a single chunk. \textit{Incremental learning} aims at making adaptations effective and efficient by batch-wise or sample-wise incorporation of novel data. This raises different challenges: on the one hand, data undergoes concept drifts that shall be learned. On the other hand, the stability-plasticity dilemma~\cite{carpenter1987stabilityplasticity} could lead either to maximum predictive power on previously learned classes (i.e., high stability) or on novel classes (i.e., high plasticity). A good tradeoff between both border cases is desired for well-generalizing models.
Although there are several incremental formulations of popular classifiers~\cite{bifet2009adaptivedt, cauwenberghs2001isvc} or deep learning architectures~\cite{rebuffi2017icarl, castro2018endir, wu2019largeir}, these approaches assume closed sets of known classes in their prediction phase. In principle, probabilistic models like the \gls{evm} can handle batch-wise data, but their actual behaviour in incremental learning under an open world regime is still largely unexplored. In this paper, we show that simple ad-hoc applications of existing \gls{evm} approaches in \gls{owr} lead to suboptimal stability-plasticity tradeoffs.
The contribution of this paper can be summarized as follows:
\begin{enumerate*}
\item A partial model fitting algorithm that prevents costly Weibull estimations by neglecting unaffected space during an update. This reduces the incremental training time by a factor of $28$.
\item A model reduction technique using weighted maximum $K$-set cover, providing fixed-size model complexities, which is fundamental for memory-constrained systems.
This approach is up to $4\times$ faster than existing methods and improves recognition rates by about \SI{12}{\percent}.
\item Two novel open world protocols that can be adapted to vary the task complexity in terms of openness.
\item The framework is evaluated on these protocols with varying difficulty and dimensional complexity for applications such as image classification and face recognition.
\end{enumerate*}
\section{Related Work}\label{sec:related-work}
\subsubsection{Incremental Learning}
Popular classifiers such as \glspl{svm}, decision trees, linear discriminant analysis, and ensemble techniques are modified to allow efficient model adaptations~\cite{domingos2000hoeffdingtrees, cauwenberghs2001isvc, polikar2001learnpp, kim2007incrementallda, bifet2009adaptivedt}.
Curriculum and self-paced learning are concepts to sequentially incorporate samples into a model in a meaningful order~\cite{bengio2009curriculum, kumar2010spl, lin2017activeselfpaced}.
iCaRL~\cite{rebuffi2017icarl} and EEIL~\cite{castro2018endir} use distillation or bias correction~\cite{wu2019largeir} to counter catastrophic forgetting.
Zhang \emph{et al}\onedot~\cite{zhang2021fewshotil} proposed a pseudo incremental learning paradigm by decoupling the feature and classification learning stages.
However, the adaptation of underlying \glspl{dnn} on embedded hardware, as required in many open world applications~\cite{bendale2015openworld}, is far from being efficient.
Additionally, these incremental strategies are not designed for \gls{osr}.
\subsubsection{Open Set Recognition}
Early approaches~\cite{tax2008growing,bartlett2008classification,grandvalet2008svm,cevikalp2012efficient} define threshold-based unknown detection rules for closed-set classifier outputs.
More recent methods focus on the \gls{evt} to consider negative class samples for the estimation of rejection probabilities.
Scheirer \emph{et al}\onedot~\cite{scheirer2014wsvm} developed the \gls{wsvm} that combines a one-class and a binary \gls{svm}, where decision scores are calibrated via Weibull distributions. Jain \emph{et al}\onedot~\cite{jain2014pisvm} proposed the \gls{pisvm} to calibrate the outputs of a \acrshort{rbf} \gls{svm} to unnormalized posterior probabilities.
The related OpenMax~\cite{bendale2016opensetnn} calibration is used for class activations of \glspl{dnn} to model the probability of samples being unknown.
Unfortunately, such re-calibrations do not support incremental learning off-the-shelf.
GANs also allow sharpening open set models with adversarial samples~\cite{ge2017genopenmax, neal2018counterfactual, kong2021opengan, yue2021counterfactualzf}.
Recent novelty detection approaches focus on the uncertainty expressiveness of classifiers that can be used to perform novelty or unknown detection, such as Bayesian neural networks~\cite{blundell2015bnn}, Bayesian logistic regression~\cite{lorch2020jpeg}, and Gaussian processes~\cite{lorch2021gps}.
While these methods commonly require multiple computationally demanding Monte Carlo draws to calculate the predictive uncertainty,
Sun \emph{et al}\onedot~\cite{sun2021react} propose a non-incremental post hoc approach to handle model overconfidence.
\subsubsection{Open World Recognition}
\Gls{nn}-based classifiers are open world capable, as they typically have no actual training step.
The \gls{osnn}~\cite{junior2017osnn} defines the open space via a threshold on the ratio of similarity scores of the two most similar classes.
Bendale and Boult~\cite{bendale2015openworld} derived the \gls{nno} algorithm from the \gls{ncm} classifier~\cite{mensink2013ncm, ristin2014incm}.
\gls{nno} rejects samples that are not in the range of any class center, where distances are measured by a learned Mahalanobis metric.
However, these approaches are purely distance-based and do not take distributional information into account.
Joseph \emph{et al}\onedot~\cite{joseph2021ore} proposed an open world object detection method that includes fine-tuning of a \gls{dnn} which is typically too costly for embedded hardware.
To overcome the limitations of \glspl{nn}, Rudd \emph{et al}\onedot~\cite{rudd2017evm} introduced the \gls{evm} that defines sample-wise inclusion probabilities in dependence of their neighborhood of other classes.
Since this approach is based on a \gls{nn}-like data structure, they propose a model reduction technique to keep the most relevant data points, similar to the support vectors of \glspl{svm}, to reduce the memory footprint.
The \gls{evm} has achieved state-of-the-art results in intrusion detection~\cite{henrydoss2017ievm} and open set face recognition~\cite{gunther2017opensetface}.
The C-EVM~\cite{henrydoss2020cevm} performs a clustering prior to the actual \gls{evm} fitting to reduce the dataset size.
These centroids are then used to fit the \gls{evm}.
However, the clustering does not ensure a reduced model size and especially for small batches, it can cause computational overhead.
In contrast, our proposed method adequately detects unaffected space in incremental updates and prevents redundant parameter estimations.
Additionally, we provide a computationally more efficient model reduction using weighted maximum $K$-set cover, which reduces the model size to a fixed, user-set value.
\section{Background: Extreme Value Theory}\label{sec:background}
The \gls{evm} estimates per-sample inclusion probabilities.
Let $\bm{x}_i$ be a feature vector of class $y_i$, referred to as an anchor sample. Given $(\bm{x}_i, y_i)$, we select the $\tau$ nearest negative neighbors $\bm{x}_j$, $j = 1, \ldots, \tau$, from different classes $y_j \neq y_i$ according to a distance $d(\bm{x}_i, \bm{x}_j)$, where $\tau$ denotes the tail size.
The inclusion probability of a sample $\bm{x}$ for class $y_i$ is given by the cumulative Weibull distribution:
\begin{equation}
\Psi_i(\bm{x}) = \Psi(\bm{x}; \theta_i) = \exp{\left(- \left( \frac{d(\bm{x}_i, \bm{x})}{\lambda_i} \right)^{\kappa_i} \right)} \enspace \text{,}
\end{equation}
where $\theta_i = \{\kappa_i, \lambda_i\}$ denotes the Weibull parameters, $\kappa_i$ is the \emph{shape}, and $\lambda_i$ is the \emph{scale} associated with $\bm{x}_i$.
Given labeled training data $\mathcal{N} = \left\{ (\bm{x}_1, y_1), \ldots, (\bm{x}_N, y_N) \right\}$, each feature vector $\bm{x}_i$ with class label $y_i$ becomes an anchor. Fitting the \gls{evm} amounts to estimating the parameters $\theta_i$ for each anchor.
A query sample $\bm{x}$ is assigned to the class $y_i$ with maximum probability $\max_{i \in N} \Psi_i(\bm{x})$.
This probability shall reach a threshold $\delta$ to distinguish knowns and unknowns according to:
\begin{equation}
y =
\begin{cases}
y_i & \text{if } \max_{i \in N} \Psi_i(\bm{x}) \geq \delta \enspace \text{,} \\
\text{``unknown''} & \text{otherwise} \enspace \text{.}
\end{cases}
\end{equation}
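The prediction rule defined by the two equations above can be sketched as follows (an illustrative NumPy-based sketch; the function and variable names are our own, not taken from the authors' implementation):

```python
import numpy as np

def psi(x, anchor, kappa, lam):
    # Cumulative Weibull inclusion probability of x under the given anchor.
    d = float(np.linalg.norm(np.asarray(anchor) - np.asarray(x)))
    return float(np.exp(-((d / lam) ** kappa)))

def predict(x, anchors, labels, kappas, lams, delta=0.5):
    # Assign x to the class of the most probable anchor, or reject as unknown
    # when even the best inclusion probability stays below the threshold delta.
    probs = [psi(x, a, k, l) for a, k, l in zip(anchors, kappas, lams)]
    i = int(np.argmax(probs))
    return labels[i] if probs[i] >= delta else "unknown"
```

Queries close to an anchor yield inclusion probabilities near one; queries far from every anchor fall below $\delta$ and are rejected.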
A baseline approach keeps all $\theta_i$, which is expensive in terms of prediction time complexity and memory footprint.
Rudd~\emph{et al}\onedot~\cite{rudd2017evm} proposed a model reduction such that only informative $\theta_i$, \emph{\glspl{ev}}, are kept, since samples within the same class might be redundant.
It can be expressed as a set cover problem~\cite{karp1972setcover}: find a minimum number of samples that \emph{cover} all other samples.
Redundancies are determined by inclusion probabilities $\Psi_i(\bm{x}_j)$ within the $N_c$ samples of a class $c$ ($y_i = y_j \, \forall i,j \in \{1,\ldots,N_c\}$).
A sample $\bm{x}_j$ is discarded if it is covered by $\theta_i$, i.e., $\Psi_i(\bm{x}_j) \geq \zeta$, where $\zeta$ denotes the coverage threshold.
This can be formulated as the minimization~problem:
\begin{align}
\text{minimize} \enspace \sum_{i=1}^{N_c} I(\theta_i) \label{eq:rudd-optimize}
\enspace \text{subject to} \enspace I(\theta_i) \Psi_i(\bm{x}_j) \geq \zeta \enspace \text{,}
\end{align}
where the indicator function $I(\theta_i)$ is given by:
\begin{equation}\label{eq:rudd-indicator-function}
I(\theta_i) =
\begin{cases}
1 & \text{if } \Psi_i(\bm{x}_j) \geq \zeta \text{ for any } j \in \{1, \ldots, N_c\} \enspace \text{,} \\
0 & \text{otherwise}\enspace \text{.}
\end{cases}
\end{equation}
Rudd~\emph{et al}\onedot~\cite{rudd2017evm} determine approximate solutions in $\mathcal{O}(N_c^2)$ using greedy iterations, where in each iteration the samples that cover the most other samples are selected.
This approach does not constrain the number of \glspl{ev}, which might be necessary for memory-limited systems.
To this end, a bisection search for a suitable $\zeta$ \emph{per class} can be performed.
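This greedy set-cover reduction can be sketched as follows (our own illustrative code under the stated assumptions, not the reference implementation); `psi` is assumed to be a precomputed matrix of pairwise inclusion probabilities for one class:

```python
import numpy as np

def greedy_set_cover(psi, zeta):
    # psi[i, j] = Psi_i(x_j): inclusion probability of sample j under anchor i.
    # Greedily keep anchors until every sample of the class is covered by some
    # kept anchor with probability >= zeta.
    n = psi.shape[0]
    covers = psi >= zeta
    np.fill_diagonal(covers, True)  # every anchor covers itself
    uncovered = np.ones(n, dtype=bool)
    kept = []
    while uncovered.any():
        # Number of still-uncovered samples each candidate would cover.
        gain = (covers & uncovered).sum(axis=1)
        i = int(np.argmax(gain))
        kept.append(i)
        uncovered &= ~covers[i]
    return kept
```

To approach a fixed budget of extreme vectors per class, this routine would be wrapped in the bisection over `zeta` described above.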
\section{Incremental Extreme Value Learning}\label{sec:incremental-learning}
During online learning new data points arise and may interfere with the current \glspl{ev}' Weibull distribution estimates.
\subsubsection{Incremental Learning Framework}\label{ssec:incremental-framework}
\gls{evm} learning involves two subtasks:
\begin{enumerate*}
\item \emph{Model fitting} to adapt the model to new data and
\item \emph{model reduction} that bounds the model's computational complexity and required resources.
\end{enumerate*}
In \gls{owr}, both steps need to handle training data arriving batch-wise over consecutive epochs.
We perform incremental learning over epochs using new arriving training batches $\mathcal{N}^t$, where $t$ denotes the epoch index.
For an incremental formulation, let $\Theta_E^t = \{\theta_1^t, \ldots, \theta_E^t\}$ be a model of $E$ \glspl{ev} determined either at the previous epoch or learned from scratch at the first epoch.
The fit function incorporates the new batch $\mathcal{N}^t$ to the current model $\Theta_E^t$ to obtain a new intermediate model $\Theta^{t+1}$.
The reduction squashes $\Theta^{t+1}$ to a given budget by selecting the most informative \glspl{ev}, considering both previous and new samples. This yields the consolidated model $\Theta_{E}^{t+1} \subseteq \Theta^{t+1}$.
Our framework alternates between the fit and reduction functions once per epoch.
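The alternation of both subtasks can be sketched as a generic driver loop; `fit` and `reduce_fn` are hypothetical placeholders for the partial fit and budgeted reduction functions:

```python
def incremental_learn(model, batches, fit, reduce_fn, budget):
    # Alternate partial fitting and budgeted model reduction per epoch t.
    for batch in batches:                 # batch N^t arriving at epoch t
        model = fit(model, batch)         # intermediate model Theta^{t+1}
        model = reduce_fn(model, budget)  # consolidated model Theta_E^{t+1}
    return model
```

With toy callables (append as fit, truncation as reduction), the loop keeps the model within the budget after every epoch.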
\subsubsection{Partial Model Fitting}
\label{ssec:partial-model-fit}
\begin{figure}[tb]
\centering
\subfloat[No update required.]{\input{figures/incremental-updates1}\label{sfig:incremental-a}}
\qquad
\subfloat[Update required.]{\input{figures/incremental-updates2}\label{sfig:incremental-b}}
\caption{Incremental update illustration with $\tau=4$. The Weibull distribution of the \acrfull{ev}~(\protect\tikzextremevec) is estimated on the $\tau$ nearest samples~(\protect\tikznormalsample). The blueish hypersphere with radius~$d_\tau$ is derived from the farthest sample. The new sample~(\protect\tikznewsample) in \cref{sfig:incremental-a} lies outside the sphere and can be ignored. Once a new sample lies within the sphere, cf.\ \cref{sfig:incremental-b}, an update is required.}
\label{fig:incremental}%
\end{figure}%
For model fitting, we process samples in new arriving batches $\mathcal{N}^t$ independently to incorporate them into the current model $\Theta_E^t$.
A new sample $\bm{x}^{t+1}$ might fall into the neighborhood of any \gls{ev}'s feature vector $\bm{x}_e^t$, which would invalidate the corresponding Weibull parameters in $\theta_e^t$, where $\theta_e^t \in \Theta_E^t$.
A naive approach is to re-estimate a new Weibull distribution for each \gls{ev} including nearest negative neighbor search and tail construction.
We argue that this is highly inefficient, since a new sample typically influences only a few \glspl{ev}; most estimates would result in the same Weibull parameters as before.
We extend the \gls{evm} model by an automatically derivable (i.e., non-user-set) value, namely the \emph{maximum tail distance}~$d_\tau$, which corresponds to the maximum distance within a tail, such that $\theta_e^t = \{\kappa_e^t, \lambda_e^t, d_{\tau,e}^t\}$.
This parameter operates as a threshold and controls the model update.
It can be described by a hypersphere centered at an \gls{ev} with radius $d_{\tau}$ as depicted in \cref{fig:incremental}.
Whenever a sample falls into this hypersphere, the corresponding \gls{ev} must be updated and the hypersphere shrinks.
To perform partial fits, we need to compute the distances between $\bm{x}^{t+1}$ and all $\bm{x}_e^t$ and estimate the Weibull parameters for $\bm{x}^{t+1}$.
Using these distances, we define the update rule for the \gls{ev}:
\begin{equation}
\theta_e^{t+1} =
\begin{cases}
\text{update}(\theta_e^{t}) & \text{if } d(\bm{x}_e^t, \bm{x}^{t+1}) < d_{\tau,e}^t \enspace \text{,} \\
\theta_e^{t} & \text{otherwise} \enspace \text{,}
\end{cases}
\end{equation}
where $\text{update}(\cdot)$ denotes tail update, re-estimation of Weibull parameters, and storage of new maximum tail distances.
This allows computationally efficient partial fits and leads to exactly the same result as cyclic retraining, as long as no model reduction is carried out.
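Assuming each extreme vector stores its anchor and maximum tail distance (the dictionary fields below are hypothetical, and `refit` stands for the actual tail update and Weibull re-estimation), the update rule translates to:

```python
import numpy as np

def partial_fit_step(evs, x_new, refit):
    # Re-estimate only those EVs whose tail hypersphere contains the new sample;
    # all other Weibull parameters remain valid and are kept untouched.
    out = []
    for ev in evs:
        d = float(np.linalg.norm(ev["x"] - np.asarray(x_new)))
        out.append(refit(ev, x_new) if d < ev["d_tau"] else ev)
    return out
```

Only extreme vectors whose hypersphere is hit by the new sample are re-estimated; the rest pass through unchanged.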
\input{tables/mnist-update-ratio}%
In \cref{tab:mnist-update-ratio}, we exemplify the gain of this approach.
We incrementally fit an \gls{evm} on a subset of MNIST and store all samples as \glspl{ev}.
The update ratio determines the fraction of \glspl{ev} that require an update in subsequent epochs.
It follows that the smaller the batches and the tail size, the fewer updates are necessary.
The benefit can become very substantial at small batch and tail sizes with an update ratio of only \SI{0.56}{\percent}.
\subsubsection{Model Reduction}\label{ssec:model-reduction}
\renewcommand{\thefigure}{2}
\begin{figure*}[tb]
\setlength{\fboxsep}{0pt}%
\centering
\newcommand{.18}{.18}
\subfloat[EVM ($\infty$-SC)~\cite{rudd2017evm}]{\includegraphics[width=.18\linewidth]{figures/no-reduce-r100.pdf}\label{sfig:reduce-none2}}%
\hfill%
\subfloat[EVM ($10$-SC)~\cite{rudd2017evm}]{\includegraphics[width=.18\linewidth]{figures/original-r10.pdf}\label{sfig:reduce-orig2}}%
\hfill%
\subfloat[iEVM ($10$-wSC)]{\includegraphics[width=.18\linewidth]{figures/ours-r10.pdf}\label{sfig:reduce-ours2}}%
\hfill%
\subfloat[C-EVM ($\infty$-SC)~\cite{henrydoss2020cevm}]{\includegraphics[width=.18\linewidth]{figures/c-evm-r100.pdf}\label{sfig:c-evm2}}%
\hfill%
\subfloat[C-iEVM ($10$-wSC)]{\includegraphics[width=.18\linewidth]{figures/c-ievm-r10.pdf}\label{sfig:c-ievm2}}%
\caption{Decision boundaries of different \gls{evm} reductions on a $3$-class toy dataset. Solid dots correspond to the \acrfullpl{ev} and colored areas belong to the related class where the inclusion probability is visualized via the opacity.
In~(a), no reduction is performed, i.e., the \glspl{ev} match the training data. The set cover (SC) reduction and our weighted (wSC) are shown in~(b)
and~(c),
respectively. In~(d),
the C-EVM is shown and
(e)~presents~the~C-EVM with our wSC.}%
\label{fig:overview-reductions2}%
\end{figure*}%
\input{algorithms/gmc-greedy-short}%
In our incremental learning framework, the aim of a class-wise model reduction $g$ is to find a subset $\Theta_{E_c}^t \subseteq \Theta_c^t$ that is budgeted w.r.t.\ the number of resulting \glspl{ev}.
\paragraph{Problem Statement}
For the sake of simplicity, let us drop the batch count $t$ and class index $c$, unless it is necessary.
We denote our model reduction by a function $g \colon \Theta \to \Theta_{E}$, where $\Theta_{E}$ is subject to the constraint $|\Theta_{E}| \leq K\leq |\Theta| = N$ and $K$ denotes the budget of \glspl{ev} that can be kept for a certain class with $N$ samples.
The intuition behind the design of $g$ is three-fold:
\begin{enumerate*}
\item We aim at selecting \glspl{ev} that best cover others according to pair-wise inclusion probabilities.
\item While pair-wise inclusion probabilities are not symmetric in general, i.e., $\Psi_i(\bm{x}_j) \neq \Psi_j(\bm{x}_i)$, high bilateral coverage is common and would introduce a bias towards selecting \glspl{ev} very close to class centroids; hence, selecting both $\Psi_i(\bm{x}_j)$ and $\Psi_j(\bm{x}_i)$ shall be penalized.
\item At most $K$ \glspl{ev} shall be selected.
\end{enumerate*}
We propose to formulate $g$ as a weighted maximum $K$-set cover problem~\cite{cohen2008gmc}.
Let us define a collection of sets $\mathcal{S} = \{ \mathcal{S}_1, \ldots, \mathcal{S}_N \}$, where $\mathcal{S}_i = \{ (w_{kl}, w_{lk}) \, | \, 1 \leq k \leq i < l \leq N \}$ models a single~\gls{ev}.
A pair $(w_{kl}, w_{lk}) \in \mathcal{S}_i$ contains two weights given by the inclusion probabilities $w_{kl} = \Psi_k(\bm{x}_l)$ and $w_{lk} = \Psi_l(\bm{x}_k)$.
We determine $g$ according to the integer linear program:
\begin{align}
\text{maximize} \enspace & \sum_{i=1}^{N} \sum_{j=i+1}^{N} \beta_{ij} \Psi_i(\bm{x}_j) + \beta_{ji} \Psi_j(\bm{x}_i) \label{eq:mst-optimization-3} \\*
\text{subject to} \enspace & \sum_{i=1}^{N} \gamma_i \leq K \enspace \text{,} \label{eq:mst-constraint-1-3} \\*
& \beta_{ij} + \beta_{ji} \leq 1 \enspace \text{,}
\label{eq:mst-constraint-2-3}
\end{align}
where $\beta_{ij} \in \{0, 1\}$ selects covered elements ($\beta_{ij} = 1 \Leftrightarrow (w_{ij}, w_{ji})~\text{is covered by}~\mathcal{S}_i$) and $\gamma_i \in \{0, 1\}$ selects kept \glspl{ev} ($\gamma_i = 1 \Leftrightarrow \mathcal{S}_i~\text{is kept}$).
The objective in \cref{eq:mst-optimization-3} is optimized w.r.t.\ $\beta$ and $\gamma$ to maximize the value of the coverage.
The constraint in~\cref{eq:mst-constraint-1-3} limits the amount of \glspl{ev} to the budget~$K$ and \cref{eq:mst-constraint-2-3} penalizes the selections of bilateral coverage.
\paragraph{Incremental Algorithm}
We solve \crefrange{eq:mst-optimization-3}{eq:mst-constraint-2-3} by greedy iterations as depicted in Algorithm~\ref{alg:gmc}.
Our algorithm facilitates incremental learning by reusing intermediate results from the model reduction of the previous epoch, where $\Theta$ denotes the intermediate model of a class from the partial fit function and $K$ is the \gls{ev} budget.
Line $3$ limits the number of iterations to the desired budget $K$.
In each iteration, we first compute, for each sample, the sum of inclusion probabilities from all other samples toward it (line $4$).
The element with the highest sum is selected as an \gls{ev} (lines $5$--$6$).
Finally, the reduced model $\Theta_E$ is returned.
Note that summations in line $4$ do not need to be recomputed in every iteration.
We provide additional implementation details for Algorithm~\ref{alg:gmc} in the supplementary~material.
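The greedy iterations can be sketched as follows. This is a loose reading of the algorithm above (our own code; tie-breaking and the exact bookkeeping for the bilateral penalty may differ from the actual implementation):

```python
import numpy as np

def weighted_k_set_cover(psi, K):
    # psi[i, j] = Psi_i(x_j). Greedily keep at most K anchors that maximize the
    # summed coverage. Zeroing row and column i spends anchor i's coverage and
    # prevents re-counting pairs that involve an already-kept anchor.
    remaining = np.array(psi, dtype=float)
    np.fill_diagonal(remaining, 0.0)
    kept = []
    for _ in range(min(K, remaining.shape[0])):
        gain = remaining.sum(axis=1)  # summed coverage per candidate (line 4)
        i = int(np.argmax(gain))
        if gain[i] <= 0.0:
            break                     # nothing left to cover
        kept.append(i)                # select as extreme vector (lines 5-6)
        remaining[i, :] = 0.0
        remaining[:, i] = 0.0
    return kept
```

On a toy matrix with two clusters, the sketch picks one representative per cluster within the budget.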
\paragraph{Relationship to Previous Works~\cite{rudd2017evm, henrydoss2020cevm}} \label{sssec:relationship}
Our weighted maximum $K$-set cover formulation in \crefrange{eq:mst-optimization-3}{eq:mst-constraint-2-3} generalizes the conventional set cover model reduction of Rudd~\emph{et al}\onedot~\cite{rudd2017evm}.
To formulate \cite{rudd2017evm} in our framework, we need to substitute $\Psi_i(\bm{x}_j)$ and $\Psi_j(\bm{x}_i)$ in \cref{eq:mst-optimization-3} by $I(\theta_i)$ and $I(\theta_j)$, i.e., the indicator function of \cref{eq:rudd-indicator-function}.
Thus, all samples with coverage probabilities $\geq \zeta$ are weighted uniformly.
The C-EVM~\cite{henrydoss2020cevm} uses class-wise DBSCAN clustering~\cite{ester1996dbscan} and generates centroids from these clusters.
This preconditioning reduces the training set size before the actual EVM is fitted to the centroids.
However, this does not enforce a specific amount of \glspl{ev}.
This is sub-optimal in memory-limited applications, e.g., on edge devices, where fixed model sizes are preferred.
In \cref{fig:overview-reductions2}, we compare different reduction techniques on example data, where $K$-set cover ($K$-SC) represents Rudd's method~\cite{rudd2017evm} and $K$-wSC our weighted variant.
It can be observed that $K$-SC leads to scattered decision boundaries and is sensitive to outliers.
Our stand-alone \gls{ievm} is robust against outliers and empowers the open space, cf.\ \cref{sfig:reduce-ours2}.
The C-EVM generates new centroids but does not guarantee a certain number of \glspl{ev}.
Therefore, we extend it with our $K$-wSC and bilateral coverage regularization.
This selects \glspl{ev} that accurately describe the underlying distributions of known classes.
We argue that the iEVM and C-iEVM represent different levels of the stability-plasticity tradeoff.
The iEVM strictly bounds the decision boundaries to dense class centers and leaves more open space, which makes it stable under concept drift.
In contrast, the C-iEVM enables more plasticity, as outliers have a high impact on the generated centroids.
The hard thresholding of Rudd~\emph{et al.}~\cite{rudd2017evm} also comes at the cost of embedding their set cover into a bisection search to determine a coverage threshold $\zeta$ that provides the desired number of \glspl{ev}. Given a bisection termination tolerance of $\epsilon$, the overall model reduction has a time complexity of $\mathcal{O}(\log(\epsilon^{-1})N^2)$ for a single class comprising $N$ samples. In contrast, our model reduction method avoids thresholding and considers the given budget on the number of \glspl{ev} in a single pass with time complexity $\mathcal{O}(N^2)$.
This is an important factor for implementations on resource limited devices.
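The difference between the two weighting schemes can be sketched as follows (a toy illustration, not part of either method's implementation; `cov` stands for the pairwise inclusion probabilities $\Psi_i(\bm{x}_j)$, and the function names are ours):

```python
import numpy as np

def rudd_weights(coverage, zeta):
    """Hard thresholding of Rudd et al.: every pair with coverage
    probability >= zeta counts uniformly as 1, all others as 0."""
    return (coverage >= zeta).astype(float)

def weighted_k_sc_weights(coverage):
    """The weighted variant keeps the raw coverage probabilities,
    so well-covered samples contribute more to a candidate EV."""
    return coverage

# toy pairwise coverage probabilities for 4 samples
cov = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.3, 0.0],
                [0.2, 0.3, 1.0, 0.6],
                [0.1, 0.0, 0.6, 1.0]])

hard = rudd_weights(cov, zeta=0.5)   # binary, threshold-dependent
soft = weighted_k_sc_weights(cov)    # graded, threshold-free
print(hard.sum(axis=1))              # per-sample cover counts
print(soft.sum(axis=1))              # per-sample coverage mass
```

The hard scheme additionally requires searching for a $\zeta$ that yields the desired budget, whereas the graded weights can be consumed directly by a budgeted selection.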
\section{Open World Evaluation Protocols}\label{sec:protocols}
We introduce our two open world evaluation protocols.
The first protocol describes the very general real-world online learning environment, where new classes are learned and old classes are updated by new samples.
The second protocol is a specialization of the first one, where subsequent epochs contain only new classes.
\subsubsection{Protocol~I\xspace}
This protocol reflects the realization of a newly deployed \gls{owr} application.
While others start with a large initial training phase~\cite{bendale2015openworld}, we argue that this is an unrealistic assumption in real-world scenarios, as the exact environmental conditions, e.g., sensors and lighting, are unknown at deployment time.
We start with a minimum of $2$ classes and incrementally learn new classes, while incorporating new samples of previous classes.
This introduces two types of concept drifts, termed \emph{direct} and \emph{implicit} concept drift.
Direct concept drift applies to a single changing class, e.g., the aging of a person.
Implicit concept drift denotes the mutual impact of neighboring classes competing for transitional feature space.
Here, the occurrence of a new class can have a high impact on previously learned classes, as both may share parts of the feature space, e.g., leopards and jaguars.
Implicit concept drift thus occurs whenever an altering class influences the learned concepts of other classes.
\renewcommand{\thefigure}{4}
\begin{figure*}[t]
\centering
\vspace{1ex}%
\tikzsetnextfilename{classinc-lfw-macro}
\includegraphics[width=.99\linewidth]{figures/classinc-lfw-75-macro.tikz}
\vspace{-1ex}%
\caption{Averaged results over $3$ runs of Protocol~II\xspace on LFW. Set cover and our weighted maximum $K$-set cover reduction to $K$ \acrfullpl{ev} are denoted as $K$-SC and $K$-wSC, respectively. Our reduction achieves comparable results \emph{while} reducing the model complexity by factor $4$.}%
\label{fig:lfw-classinc-macro}%
\end{figure*}%
Our protocol allows controlling its complexity via an initial \emph{openness}~\cite{scheirer2012openset}.
According to this openness, classes are divided into two disjoint sets of knowns~$\mathcal{C}_\text{K}$ and unknowns~$\mathcal{C}_\text{U}$, with $|\cdot|$ denoting the cardinality.
The first epoch contains $2$ classes of $\mathcal{C}_\text{K}$.
The following epochs comprise a single new class of $\mathcal{C}_\text{K}$ as well as samples of classes seen in previous epochs.
Hence, all classes in $\mathcal{C}_\text{K}$ are known at epoch $|\mathcal{C}_\text{K}| - 1$.
Each learning epoch is followed by an evaluation on a fixed test set.
Note that, although the test set is fixed, the number of unknowns decreases over the epochs.
Thus, the openness decreases from epoch number $1$ to $|\mathcal{C}_\text{K}| - 1$.
This reduces the complexity of unknown detection while increasing the difficulty for the classification of knowns.
To further investigate the models' incremental adaptability at a steady openness, we continue the epoch-wise training after $|\mathcal{C}_\text{K}| - 1$ with batches of $\mathcal{C}_\text{K}$.
\subsubsection{Protocol~II\xspace}
This protocol specializes the first one for applications with few samples per class.
Due to the limited amount of training samples, we derive a pure class-incremental evaluation, where each epoch contains a certain number of new classes.
No previously learned classes are directly updated by new samples in subsequent epochs; they are only updated implicitly by newly occurring classes, leading to the aforementioned implicit concept drift.
We split the classes w.r.t.\ a predefined openness into knowns and unknowns.
The unknowns are put in the test set together with a subset of samples for each of the known classes.
The known classes are split into batches, where each batch contains all remaining samples of a certain number of classes.
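The split described above can be sketched as follows (our own illustration, not the authors' code; the toy numbers match the ICDAR17 setting evaluated later: $720$ classes, \SI{30}{\percent} unknowns, $56$ classes per batch):

```python
import random

def protocol2_split(classes, unknown_ratio, classes_per_batch, seed=0):
    """Sketch of the Protocol II class split: a fixed fraction of
    classes becomes unknown (test-only), the remaining known classes
    are chunked into class-incremental training batches."""
    rng = random.Random(seed)
    shuffled = classes[:]
    rng.shuffle(shuffled)
    n_unknown = int(round(unknown_ratio * len(shuffled)))
    unknowns, knowns = shuffled[:n_unknown], shuffled[n_unknown:]
    batches = [knowns[i:i + classes_per_batch]
               for i in range(0, len(knowns), classes_per_batch)]
    return unknowns, batches

unknowns, batches = protocol2_split(list(range(720)), 0.3, 56)
print(len(unknowns), len(batches))  # 216 9
```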
\subsubsection{Performance Measures}
The \gls{dir} at certain \glspl{far} serves as evaluation metric, as is common in open set face recognition~\cite{gunther2017opensetface}.
The \gls{far} determines the fraction of misclassified unknowns.
The threshold to obtain a certain \gls{far} can be derived from the evaluated dataset.
The \gls{dir} is the fraction of knowns that are both correctly detected \emph{and} correctly classified.
A high \gls{dir} at low \gls{far} is favorable.
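The computation of this metric can be sketched as follows (a minimal illustration; the exact thresholding convention may differ from \cite{gunther2017opensetface}, and we assume the acceptance threshold is set from the unknown scores so that a fraction `far` of unknowns is wrongly accepted):

```python
import numpy as np

def dir_at_far(known_scores, known_correct, unknown_scores, far):
    """DIR at a given FAR: the threshold is the (1 - far)-quantile of
    the unknown scores; DIR counts knowns that are accepted above the
    threshold AND carry a correct class prediction."""
    thresh = np.quantile(unknown_scores, 1.0 - far)
    accepted = known_scores > thresh
    return float(np.mean(accepted & known_correct))

rng = np.random.default_rng(0)
unknown = rng.uniform(0.0, 0.6, 1000)   # unknowns tend to score low
known = rng.uniform(0.4, 1.0, 1000)     # knowns tend to score high
correct = rng.random(1000) < 0.9        # 90% correctly classified
print(dir_at_far(known, correct, unknown, far=0.1))
```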
\section{Experiments and Results}\label{sec:experiments}
We evaluate our \gls{ievm} in different \gls{owr} applications.
The \gls{evm}, \gls{osnn}, and \gls{tnn} serve as baselines.
We also extend the C-EVM by our incremental framework, where clustering is applied prior to model fitting.
The method notations are adopted from Section~\ref{sssec:relationship}.
Model reductions are performed at every epoch.
\subsubsection{Image Classification}
The open world performance of our approach is evaluated with Protocol~I\xspace on \cifar{100}~\cite{krizhevsky09cifar}.
This dataset comprises \num{50000} training and \num{10000} test samples of \num{100} classes.
The randomized split into knowns and unknowns is \SI{50}{\percent}, which results in an openness range from \SI{80.2}{\percent} for the first batch to \SI{18.4}{\percent} for batch $49$ and the following ones.
We evaluate $100$ epochs using a batch size of $24$ and benchmark all models on the whole test set after each epoch.
We repeat the protocol $3$ times using different random orders in the creation and processing of batches.
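The reported openness values follow the definition of Scheirer~\emph{et al.}~\cite{scheirer2012openset}; a small sketch reproducing the numbers above (assuming the target classes are those trained so far and the testing classes are all $100$ \cifar{100} classes):

```python
from math import sqrt

def openness(n_train, n_test, n_target):
    """Openness as defined by Scheirer et al. (2012):
    1 - sqrt(2 * |training classes| / (|testing| + |target| classes))."""
    return 1.0 - sqrt(2.0 * n_train / (n_test + n_target))

# CIFAR-100, 50/50 known/unknown split, 100 test classes:
print(round(100 * openness(2, 100, 2), 1))    # first batch: 80.2
print(round(100 * openness(50, 100, 50), 1))  # batch 49 onwards: 18.4
```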
\paragraph{Implementation Details}
For feature extraction, we use EfficientNet-B6~\cite{tan2019efficientnet} pre-trained on ImageNet~\cite{deng2009imagenet} and fine-tuned on a \cifar{100} training split via categorical cross-entropy loss and a bottleneck layer of size $1024$.
All \glspl{evm} use the same parameters: $\tau = 75$ and $\alpha = 0.5$.
For the clustering in the C-EVM and C-iEVM, we adopt the parameters reported in~\cite{henrydoss2020cevm}.
Methods that employ a model reduction reduce the number of \glspl{ev} to $K = 10$.
We report additional results with alternative parameters in the supplementary material.
\paragraph{Results}
Averaged results of $3$ repetitions of Protocol~I\xspace are shown in \cref{fig:cifar100-openworld-group}.
We depict the \gls{dir} over the amount of samples at different \glspl{far}.
All \glspl{evm} perform similarly for the first \num{250} samples and achieve an initial \gls{dir} of about \SI{95}{\percent} at a \gls{far} of \SI{10}{\percent}.
In later epochs, our \gls{ievm} and C-iEVM clearly outperform the competing methods for high and medium \glspl{far} (\SI{10}{\percent} and \SI{1}{\percent}), while at very small \gls{far} (\SI{0.1}{\percent}) all methods perform comparably.
However, our methods begin to recover once the openness remains constant.
In the case that the training samples within a class are widely spread, the original set cover model reduction struggles to find the most important \glspl{ev}.
This leads to a continuous decrease in the \gls{dir} even after the openness stays constant.
Similarly, DBSCAN in the C-EVM fails to generate meaningful centroids resulting in almost identical outputs as the baseline \gls{evm}.
We noticed that DBSCAN achieves only average reductions of about \SI{3}{\percent} and the model contains \num{2294}~\glspl{ev} after the last epoch.
Our weighted $K$-set cover easily selects the most important \glspl{ev} and achieves the best results in the C-iEVM and iEVM while storing only \num{500}~\glspl{ev} ($10$ per class).
The number of \glspl{ev} influences not only the memory footprint but also the inference time.
The reduced models take about \SI{2.4}{s} to evaluate the test set, while the others require about \SI{14.7}{s}, i.e., a factor of about $6$.
Further, our model reduction is, averaged over all epochs, by a factor $4.2$ faster than the conventional one.
\subsubsection{Face Recognition}
To evaluate our method in open world face recognition, we apply Protocol~II\xspace to the \gls{lfw}~\cite{huang2007lfw1, huang2014lfw2} dataset.
We adopt the training and the $O3$~test split of~\cite{gunther2017opensetface}, where the training set consists of \num{2900} samples from \num{1680} unbalanced classes with either $1$ or $3$ images.
We divide this split into $10$ batches with $168$ classes each.
After each epoch the test set is evaluated.
Since the test set is highly unbalanced with \numrange{1}{527} samples per class, we report the \emph{macro} average \gls{dir} at certain \glspl{far}.
This prevents the suppression of misclassified underrepresented classes and is therefore a better representation of the global performance on this dataset.
The protocol is repeated $3$ times.
\paragraph{Implementation Details}
For feature extraction we use the ResNet50,
pre-trained on MS-Celeb-1M~\cite{guo2016msceleb} and fine-tuned on VGGFace2~\cite{cao2018vggface2}, with an embedding size of \num{128}.
We adopt the \gls{evm} parameters $\tau=75$ and $\alpha=0.5$ from~\cite{gunther2017opensetface}.
Additionally, our methods with model reduction perform the contraction to a single \gls{ev} per class, i.e., $K = 1$.
\paragraph{Results}
We present the averaged \gls{dir} at several \glspl{far} in \cref{fig:lfw-classinc-macro}.
Surprisingly, the \gls{osnn} achieves better recognition scores in this protocol than in the previous one.
The C-EVM and \gls{osnn} perform comparably, while the \gls{osnn} loses precision at the lowest \gls{far} (\SI{0.1}{\percent}).
Our C-iEVM and iEVM achieve comparable results \emph{while} reducing the model complexity by a factor of $4$.
The computational efficiency of our incremental framework is presented in \cref{fig:lfw-runtime}.
Here, partial fitting reduces the average training time by a factor of $28$.
In particular, performance gains are substantial at late epochs, where the EVM requires \SI{27}{s} to learn the final classes, while the iEVM takes \SI{0.7}{s}.
Our model reduction is, averaged over all epochs, by a factor of $3.7$ faster than the conventional set cover approach.
\renewcommand{\thefigure}{5}
\begin{figure}[t]
\centering
\tikzsetnextfilename{lfw-runtime}%
\includegraphics[width=.99\linewidth]{figures/lfw-runtime.tikz}%
\vspace{-1ex}%
\caption{Averaged runtime of the training step (left) and model reduction (right) from the evaluation of Protocol~II\xspace and LFW. Our partial fit reduces the average training time by a factor of $28$. Our model reduction, averaged over all epochs, is faster than the conventional set cover by a factor of $3.7$.}
\label{fig:lfw-runtime}%
\end{figure}%
\subsubsection{Additional Experiments}
The supplementary material contains additional details about the proposed reduction and the evaluation on an additional dataset~\cite{fiel2017icdar2017} using Protocol~II\xspace.
\section{Conclusion}\label{sec:conclusion}
We introduced an incremental learning framework for the \gls{evm}.
Our partial model fitting neglects unaffected space during an update and prevents costly Weibull estimates.
The proposed weighted maximum $K$-set cover model reduction guarantees a fixed-size model complexity with less computational effort than the conventional set cover approach.
Our reduction leads to dense class centers and filters out outliers.
The proposed modifications outperform the original EVM and the C-EVM on novel open world protocols in terms of efficacy and efficiency.
In future work, we will investigate the method on larger datasets to better understand the advantages of our model reduction and put more effort into applications with harsh constraints on low \acrlongpl{far}.
\subsection{Algorithm Details}
Algorithm~\ref{alg:gmc2} provides additional details of the proposed weighted maximum $K$-set cover model reduction for the \gls{evm}.
Recall that this is a class-wise reduction technique.
Thus, the number of \glspl{ev} in a single class is denoted as $E$. The number of samples within a batch of this class is denoted as $N$.
The summations of the inclusion probabilities for each \gls{ev} are given in $\bm{p}$.
The \gls{evm} model $\Theta_E^t$ represents the \glspl{ev} of the previous epoch, $\Theta_N^{t+1}$ the estimated Weibull parameters of the current data batch, and $K$ determines the \gls{ev} budget.
The reduction comprises four steps:
\begin{enumerate}
\item Update the inclusion probability sums of the old \glspl{ev} w.r.t.\ the new batch (lines \numrange{2}{4}).
\item Sum up the inclusion probabilities of the new samples w.r.t.\ each other (lines \numrange{6}{9}).
This step has a time complexity of $\mathcal{O}(N \cdot (E + N))$, which is $\mathcal{O}(N^2)$ for large batches (i.e., $N \gg E$) and $\mathcal{O}(NE)$ otherwise.
\item Line $10$ performs the greedy search for the \glspl{ev}. Details of Algorithm~\ref{alg:greedy} follow in the next paragraph.
\item Update $\bm{p}$ according to the new \glspl{ev} (lines \numrange{11}{15}).
If the two conditions $N > E$ and $E > (N - E)$ hold, it is more efficient to skip line $11$, i.e., not to reset $\bm{p}$.
Then, we can use the modified $\bm{p}$ of Algorithm~\ref{alg:greedy} and incrementally subtract and remove non-\gls{ev} samples, similar to the regularization in Algorithm~\ref{alg:greedy}.
This has a time complexity of $\mathcal{O}((N - E) \cdot E) = \mathcal{O}(NE)$, since we only need to update the elements in $\bm{p}$ that are part of $\Theta_E^{t+1}$.
\end{enumerate}
The greedy iteration algorithm is depicted in Algorithm~\ref{alg:greedy} and requires the summations $\bm{p}$, the combined model $\Theta$, and the budget $K$.
The number of iterations is limited by $K$ (line~$3$).
In line~$4$, we take the sample with the highest sum of inclusion probabilities and store it in the \gls{ev} model (line~$5$).
Then follows the bilateral coverage regularization, which removes the inclusion probability of the selected \gls{ev} from the other samples (lines \numrange{6}{8}).
In lines \numrange{9}{10}, we remove the \gls{ev} from $\bm{p}$ and $\Theta$.
In the end, we obtain the \gls{evm} model $\Theta_E$ containing only the \glspl{ev}.
Note that for the special case mentioned in step~$4$ above, we also need to return the modified $\bm{p}$ and $\Theta$.
The total asymptotic runtime of the proposed weighted maximum $K$-set cover algorithm is $\mathcal{O}(N^2)$.
It does not depend on a bisection search, unlike the set cover of Rudd~\emph{et al.}~\cite{rudd2017evm}, which has a complexity of $\mathcal{O}(\log(\epsilon^{-1}) N^2)$ with termination tolerance $\epsilon$.
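The greedy loop can be sketched as follows (our own simplification of Algorithm~\ref{alg:greedy}: `psi` stands for the pairwise inclusion probabilities, and we read the bilateral regularization as subtracting the selected \gls{ev}'s inclusion probabilities in both directions; the exact update may differ):

```python
import numpy as np

def greedy_k_wsc(psi, k):
    """Sketch of the greedy weighted maximum K-set cover iteration.
    psi[i, j] is the inclusion probability of sample j under the
    Weibull model of sample i; p[i] is the coverage a candidate
    extreme vector i still provides."""
    psi = np.asarray(psi, dtype=float)
    p = psi.sum(axis=1).copy()
    remaining = list(range(len(p)))
    selected = []
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda i: p[i])  # line 4: argmax
        selected.append(best)                      # line 5: store EV
        remaining.remove(best)
        for i in remaining:
            # bilateral coverage regularization (lines 6-8): discount
            # what the chosen EV already covers, in both directions
            p[i] -= psi[i, best] + psi[best, i]
    return selected

# two clusters (samples 0-2 dense, samples 3-4); without the
# regularization the dense cluster would dominate the selection,
# with it one EV per cluster is chosen (psi is symmetric here)
psi = np.array([[1.0, 0.9, 0.9, 0.0, 0.0],
                [0.9, 1.0, 0.9, 0.0, 0.0],
                [0.9, 0.9, 1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 1.0, 0.8],
                [0.0, 0.0, 0.0, 0.8, 1.0]])
print(greedy_k_wsc(psi, 2))  # [0, 3]
```

The single pass over at most $K$ selections with an $\mathcal{O}(N)$ regularization step per pick is what keeps the reduction at $\mathcal{O}(N^2)$ overall, with no outer bisection.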
\input{algorithms/gmc-greedy-long}%
\input{algorithms/gmc-greedy-iter}%
\renewcommand{\thefigure}{6}
\begin{figure*}[tb]
\centering
\vspace{1ex}
\tikzsetnextfilename{protocol1-cifar-params}
\includegraphics[width=.99\linewidth]{figures/supmat/protocol1-cifar-params}
\vspace{-1ex}%
\caption{Different parameterizations of our \acrfull{ievm}. Averaged results over $3$ runs of Protocol~I\xspace and \cifar{100}. The vertical dashed line determines the batch at which the openness remains constant.}
\label{fig:cifar100-ievm-params}%
\end{figure*}%
\renewcommand{\thefigure}{7}
\begin{figure*}[tb]
\centering
\tikzsetnextfilename{protocol2-icdar-ievm-params}
\includegraphics[width=.99\linewidth]{figures/supmat/protocol2-icdar-ievm-params}
\vspace{-1ex}%
\caption{Different tail size $\tau$ parameterizations of our \acrfull{ievm}. Averaged results over $3$ runs of Protocol~II\xspace and ICDAR\num{17}\xspace.}
\label{fig:icdar-ievm-params}%
\end{figure*}%
\subsection{Additional Experiments}
In this section we present further experiments of the evaluation with Protocol~I\xspace and \cifar{100}.
Furthermore, we evaluated the writer identification dataset ICDAR\num{17}\xspace~\cite{fiel2017icdar2017} with Protocol~II\xspace.
\subsubsection{Protocol~I\xspace\ -- \cifar{100}}
In the main text, we show the result of the \gls{ievm} on Protocol~I\xspace and \cifar{100} with parameters $\tau = 75$ and the reduction to $K=10$.
Here, we want to present further parameterizations in \cref{fig:cifar100-ievm-params}. As in the main text, the left, middle, and right plots show the \gls{dir} at \glspl{far} of \SI{10}{\percent}, \SI{1}{\percent}, and \SI{0.1}{\percent}.
When comparing the accuracies for different values of $\tau$ at identical $K$, it turns out that the tail size $\tau$ has almost no influence on the models' accuracy.
This is similar to what Günther~\emph{et al.}~\cite{gunther2017opensetface} reported on the \gls{lfw} dataset.
A larger value of $K$ may lead to worse results, as can be seen in the case of iEVM ($\tau = 75$, $50$-wSC).
This may be counter-intuitive at first glance, considering that classification should perform better with more data.
However, storing more data implies less plasticity and more stability which can interfere with the incremental training adaptability.
\subsubsection{Protocol~II\xspace\ -- ICDAR\num{17}\xspace}
Another \gls{owr} task is writer identification.
Here, we apply Protocol~II\xspace to the dataset ICDAR\num{17}\xspace~\cite{fiel2017icdar2017}.
It contains handwritten pages from the $13^\text{th}$ to $20^\text{th}$ century.
Since the feature extraction is trained on the training set of ICDAR\num{17}\xspace, the subsequent classification training and evaluation on the same set would be biased.
Therefore, we take only the test set into account with $5$ pages for each of the $720$ writers.
\SI{30}{\percent} of the classes are selected as unknowns and left in the test split.
For each of the known classes, we leave $1$ sample in the test split, i.e., the training split has $4$ samples for each of the $504$ known classes.
The knowns are split into $9$ batches with $56$ classes and trained incrementally.
This protocol yields an openness ranging from \SIrange{62}{9.3}{\percent}.
The results are averaged over $3$ protocol repetitions.
\paragraph{Implementation Details}
The feature set consists of the \num{6400}-dimensional activation of the penultimate layer of a ResNet20.
It was trained in a self-supervised fashion~\cite{Christlein17ICDAR}.
The training uses SIFT descriptors that are calculated on patches of $32\times32$ pixels at SIFT keypoints.
The SIFT descriptors are clustered using $k$-means.
Then, the ResNet20 is trained using cross-entropy loss where the patches are used as input and the targets are the cluster center IDs of the patches.
\renewcommand{\thefigure}{8}
\begin{figure*}[tb]
\centering
\vspace{1ex}
\tikzsetnextfilename{protocol2-icdar2}
\includegraphics[width=.99\linewidth]{figures/supmat/protocol2-icdar2}
\vspace{-1ex}%
\caption{Averaged results over $3$ runs of Protocol~II\xspace on ICDAR\num{17}\xspace. Set cover and our weighted maximum $K$-set cover reduction to $K$ \acrfullpl{ev} are denoted as $K$-SC and $K$-wSC, respectively.}
\label{fig:icdar}%
\end{figure*}%
\paragraph{Hyperparameter Evaluation}
The experiments on \cifar{100} and Protocol~I\xspace show, similar to the previous work of G{\"u}nther~\emph{et al.}~\cite{gunther2017opensetface}, that the tail size parameter $\tau$ has only a minor impact on the results.
However, we noticed that this does not apply to Protocol~II\xspace and ICDAR\num{17}\xspace as visualized in \cref{fig:icdar-ievm-params}.
The experiments show that a small tail size ($\tau \in \{5, 10\}$) achieves a better \gls{dir} at a high \gls{far} of \SI{10}{\percent}.
This difference diminishes over the class-wise increments at medium and small \glspl{far} of \SI{1}{\percent} and \SI{0.5}{\percent}.
Rudd~\emph{et al.}~\cite{rudd2017evm} state that a larger tail size leads to higher coverage.
This implies that for ICDAR\num{17}\xspace, a high coverage and little open space are less favorable, and a steep decision boundary is beneficial.
\paragraph{Results}
The comparison to the other baseline methods follows in \cref{fig:icdar}.
All \glspl{evm} use a tail size $\tau = 5$.
The C-iEVM without model reduction performs comparably to the \gls{osnn}, and both outperform the conventional \gls{evm}.
The boundary case of a model reduction to a single \gls{ev} per class does not lead to an improvement in this evaluation.
In contrast to this result, we note that the evaluation of Protocol I on \cifar{100} performed much better with model reduction.
However, the representation of a class via a single sample is challenging and heavily depends on the class distribution.
\section{Introduction}
\IEEEPARstart
{H}{yperspectral} (HS) imaging aims to capture the continuous electromagnetic spectrum of real-world scenes/objects. Benefiting from such dense spectral resolution, HS images are widely applied in numerous areas, such as agriculture \cite{park2015hyperspectral,lu2020recent}, military \cite{shimoni2019hypersectral,jia2020status}, and environmental monitoring \cite{banerjee2020uav,mishra2017close}.
Unfortunately, due to limited sensor resolution, it is hard to acquire HS images with both high spatial and spectral resolution via single-shot HS imaging devices. The inevitable trade-off between spatial and spectral resolution results in much lower spatial resolution than that of traditional RGB images, which may limit the performance of downstream HS image-based applications.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{figure/params_psnr.pdf}
\caption{Comparison of the number of parameters and average reconstruction quality (PSNR/SSIM) of our PDE-Net and state-of-the-art deep learning-based methods, including 3DFCNN \cite{Mei2017Hyperspectral}, 3DGAN \cite{Li2020Hyperspectral}, ERCSR \cite{Li2021Exploring}, MCNet \cite{Li2020Mixed}, and SSPSR \cite{Jiang2020Learning}, on
the Harvard dataset for $4\times$ super-resolution.
}
\label{fig:params-psnr}
\end{figure}
Instead of relying on the development of hardware, computational methods known as super-resolution have been proposed for reconstructing high-resolution (HR) HS images from low-resolution (LR) ones
\cite{Jiang2020Learning,Dong2016Hyperspectral,yi2018hyperspectral,Mei2017Hyperspectral,Li2020Hyperspectral,Li2020Mixed,Li2021Exploring}.
Specifically, the early works explicitly formulate HS image super-resolution as constrained optimization problems regularized by prior knowledge,
such as sparsity \cite{Akhtar2015Bayesian,xu2019nonlocal,Han2018Self}, non-local similarity \cite{Dian2017Hyperspectral,Dian2019Learning}, and low-rankness \cite{dian2019hyperspectral,xue2021spatial}. Besides, auxiliary information \cite{Dian2019Learning,Dong2016Hyperspectral,vicinanza2014pansharpening,fei2019convolutional}, e.g., HR RGB and panchromatic images, was incorporated to improve reconstruction quality. However,
the limited representation ability of these optimization-based methods is insufficient to model such a severely ill-posed problem,
leaving the quality of the reconstructed HR-HS images unsatisfactory.
Owing to the powerful representational ability, recent deep learning-based HS image super-resolution methods have improved the reconstruction quality significantly
\cite{Xie2019Multispectral,yao2020cross,qu2018unsupervised,Zhu2021Hyperspectral,Mei2017Hyperspectral,Li2020Mixed,Jiang2020Learning}. For deep learning-based HS image super-resolution, one of the critical issues is how to effectively and efficiently extract/embed the high-dimensional spatial-spectral information.
Most of the existing methods design the feature extraction/embedding module by empirically combining some common convolutions in a dense or residual fashion, such as separately convolving on the spatial and spectral domains \cite{Li2021Exploring,Xie2018Rethinking}, directly utilizing 3D convolution \cite{Mei2017Hyperspectral}, or using both 2D and 3D convolutional layers \cite{Li2020Mixed}. Such network architectures may not be optimal, thus compromising performance.
In contrast to existing methods that adopt empirically-designed convolutional modules to embed the high-dimensional spatial-spectral information of HS images,
we propose to cast this process as an approximation to the posterior distribution of a set of carefully-defined HS embedding events, including layer-wise spatial-spectral feature extraction and network-level feature aggregation. We then incorporate the proposed feature embedding scheme into a source-consistent spatial super-resolution framework that is built upon the degradation process of LR-HS images from HR-HS ones and is thus physically interpretable. This leads to the lightweight PDE-Net, in which a coarse HR-HS image is first initialized and then iteratively refined by learning residual maps from the differences between the input LR-HS image and the pseudo-LR-HS image re-degraded from the reconstructed HR-HS image.
Extensive experiments on three common benchmark datasets demonstrate the significant superiority of the proposed PDE-Net over multiple state-of-the-art methods.
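The source-consistent refinement described above can be sketched as follows (a schematic only, with stand-in operators that are our own names, not the paper's: `degrade` for the known LR degradation, `upsample` for the coarse estimate, and `refine` for a learned residual branch):

```python
import numpy as np

def source_consistent_sr(lr, degrade, upsample, refine, n_stages=3):
    """Schematic refinement loop: start from a coarse HR estimate,
    re-degrade it, and learn a residual from the difference to the
    input LR image at every stage."""
    hr = upsample(lr)                      # coarse initial HR estimate
    for _ in range(n_stages):
        pseudo_lr = degrade(hr)            # re-degraded pseudo-LR image
        residual = refine(lr - pseudo_lr)  # residual map from difference
        hr = hr + residual                 # source-consistent update
    return hr

# toy 1-D "image" with scale factor 2 and averaging degradation
degrade = lambda x: x.reshape(-1, 2).mean(axis=1)
upsample = lambda x: np.repeat(x, 2)
refine = lambda r: 0.5 * np.repeat(r, 2)   # stand-in for the network
lr = np.array([1.0, 2.0])
print(source_consistent_sr(lr, degrade, upsample, refine))
```

The update rule makes every stage consistent with the observation model: re-degrading the output reproduces the input LR image up to the residual that is still being learned.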
In summary, our contributions are two-fold:
\begin{itemize}
\item
we formulate the embedding of the high-dimensional spatial-spectral information of HS images from the probabilistic perspective and propose a generic HS feature embedding scheme; and
\item we incorporate the proposed feature embedding scheme into a physically-interpretable deep framework to construct a compact and end-to-end HS image super-resolution method and experimentally demonstrate its advantages over state-of-the-art ones.
Besides, the probabilistic nature of the method
brings an additional benefit, i.e., the uncertainty of the outputs.
\end{itemize}
The rest of this paper is organized as follows. Section~\ref{sec:Re} briefly reviews existing works. Section~\ref{sec:proposed} presents the proposed framework in detail, followed by extensive experiments and analysis in Section~\ref{sec:experiments}.
Finally, Section~\ref{sec:con} concludes this paper.
\section{Related Work}
\label{sec:Re}
\subsection{Single HS Image Super-resolution}
The early works explicitly formulate HS image super-resolution as constrained optimization problems, in which some priors are explored to regularize the solution space. For example, Wang \emph{et al.} \cite{Wang2017Hyperspectral} modeled the three characteristics of HS images, i.e., the global correlation in the spectral domain, the non-local self-similarity in the spatial domain, and the local smooth structure across both spatial and spectral domains. Huang \emph{et al.} \cite{huang2014super} utilized the low-rank and group-sparse modeling to spatially super-resolve single HS images. Zhang \emph{et al.} \cite{zhang2012super} proposed a maximum a posterior-based HS image super-resolution algorithm.
Recently, many deep learning-based methods for single HS image super-resolution have been proposed, which improve the reconstruction quality of traditional optimization-based methods dramatically. For example,
Yuan \emph{et al.} \cite{Yuan2017Hyperspectral} designed a transfer learning model to recover HR-HS image by utilizing the knowledge from the natural image and enforcing collaborations between LR and HR-HS images via non-negative matrix factorization. Li \emph{et al.} \cite{Li2018Single} proposed a grouped deep recursive residual network
with a grouped recursive module embedded to effectively formulate the ill-posed
mapping function from LR- to HR- HS images.
To simultaneously explore spatial and spectral information, Hu \emph{et al.} \cite{Hu2020Hyperspectral} proposed an intrafusion network to jointly learn the spatial information, spectral information, and spectral difference. Jiang \emph{et al.} \cite{Jiang2020Learning} designed a spatial-spectral prior network with progressive upsampling and grouped convolutions with shared parameters.
Mei \emph{et al.} \cite{Mei2017Hyperspectral} proposed a 3D fully convolutional neural network (CNN) to explore both the spatial context and the spectral correlation.
Li \textit{et al}. \cite{Li2020Hyperspectral} presented a 3D generative adversarial network with a band attention mechanism to alleviate the spectral distortion problem.
However, 3D convolution usually incurs high computational and memory complexity. Inspired by the separable 3D CNN model \cite{Xie2018Rethinking}, Li \emph{et al.} \cite{Li2020Mixed} proposed a mixed convolutional module, including 2D convolution and separable 3D convolution, to fully extract spatial and spectral features. In \cite{Li2021Exploring},
the relationship between 2D and 3D convolution was explored to achieve HS image super-resolution.
Although
various network architectures/convolutions were designed to fully and efficiently exploit the high-dimensional spatial-spectral information for achieving high reconstruction quality,
they were empirically designed based on human knowledge, which may not be optimal, thus limiting performance.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figure/HSISR-Framework-012.png}
\caption{Illustration of the flowchart of the proposed PDE-Net for HS image super-resolution. Our PDE-Net consists of coarse estimation and multi-stage source-consistent HS refinement. More importantly, PDE-Net adaptively learns the architecture of the basic HS embedding unit ($\mathcal{H}^{(j)}$) as well as the connection between different units from the probabilistic perspective, which is fundamentally different from existing methods with empirically-designed architectures.
}
\label{fig:hsisr-framework}
\end{figure*}
\subsection{Fusion-based HS Image Super-resolution}
Different from single HS image super-resolution, fusion-based HS image super-resolution methods employ additional data, e.g., HR RGB images, to improve performance.
Many traditional methods have been proposed,
such as Bayesian inference-based \cite{Akhtar2015Bayesian, Akhtar2016Hierarchical}, matrix factorization-based \cite{Lanaras2015Hyperspectral, Dian2017Hyperspectral, Dian2019Learning}, and sparse representation-based \cite{Akhtar2014Sparse, Dong2016Hyperspectral, Han2018Self}. To be specific,
under the assumption that each spectrum can be linearly represented with multiple spectral atoms, Dian \emph{et al.} \cite{Dian2019Learning} proposed a matrix factorization-based approach. Han \emph{et al.} \cite{Han2018Self} designed a self-similarity constrained sparse representation approach to form the global-structure groups and local-spectral super-pixels.
The recent deep learning-based methods \cite{Xie2019Multispectral, Wang2019Deep, Zhang2020Unsupervised, Zhu2021Hyperspectral} improve the performance of fusion-based HS image super-resolution significantly. For example, Xie \emph{et al.} \cite{Xie2019Multispectral} constructed a deep network, which mimics the iterative algorithm for solving the explicitly formed fusion model,
to merge an HR multispectral image and an LR-HS image to generate an HR-HS image. Wang \emph{et al.} \cite{Wang2019Deep} proposed a deep blind iterative fusion network to iteratively optimize the estimation of the observation model and fusion process. Zhu \emph{et al.} \cite{Zhu2021Hyperspectral} designed a progressive zero-centric residual network with the spectral-spatial separable convolution to enhance the performance of HS image reconstruction.
Although fusion-based methods have achieved remarkable performance, the above-mentioned methods rely heavily on additional co-registered HR images, which may be difficult to obtain. Recently, to tackle the registration challenge,
Qu \emph{et al.} \cite{Qu2022Unsupervised} presented a registration-free and unsupervised mutual Dirichlet-Net, namely $u^2$-MDN.
\section{Proposed Method}
\label{sec:proposed}
\subsection{Problem Statement and Overview}
\label{sec:Pfam}
Given an LR-HS image denoted as $\mathbf{X}$ $\in\mathbb{R}^{B\times hw}$ with $h \times w$ being the spatial dimensions and $B$ being the number of spectral bands, we aim to recover an HR-HS image denoted as $\mathbf{Y}$ $\in\mathbb{R}^{B\times HW}$ ($H=\alpha h$ and $W=\alpha w$ where $\alpha>1$ is the scale factor).
The degradation process of $\mathbf{X}$ from $\mathbf{Y}$ can be generally written as
\begin{equation}
\label{equ:1}
\mathbf{X} = \mathbf{YD} + \mathbf{N}_{z},
\end{equation}
where $\mathbf{D}$ $\in\mathbb{R}^{HW\times hw}$ is the degradation matrix composed of the blurring and down-sampling operators
and $\mathbf{N}_{z}\in \mathbb{R}^{B\times hw}$ stands for the noise. To tackle such an ill-posed reconstruction problem, inspired by the great success of deep CNNs in image/video processing applications, we will consider a deep learning-based framework named PDE-Net, as illustrated in Fig. \ref{fig:hsisr-framework}.
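To make the observation model in Eq. (\ref{equ:1}) concrete, the following is a minimal NumPy sketch; the $\alpha\times\alpha$ box blur used here as $\mathbf{D}$, the function name \texttt{degrade}, and the Gaussian noise model are illustrative assumptions, not the exact operators used in our experiments.

```python
import numpy as np

def degrade(Y, B, H, W, alpha, noise_std=0.0, rng=None):
    """Sketch of X = Y D + N: blur + alpha-fold down-sampling + additive noise.

    Y: (B, H*W) HR-HS image; returns X of shape (B, (H//alpha)*(W//alpha)).
    A box average over each alpha x alpha block stands in for the true kernel.
    """
    rng = rng or np.random.default_rng(0)
    cube = Y.reshape(B, H, W)
    h, w = H // alpha, W // alpha
    # Average each alpha x alpha spatial block (blur + down-sampling in one step).
    X = cube.reshape(B, h, alpha, w, alpha).mean(axis=(2, 4)).reshape(B, h * w)
    return X + noise_std * rng.standard_normal(X.shape)
```

With $\alpha=4$, a $B\times 8\times 8$ cube is mapped to a $B\times 2\times 2$ one, matching the $\mathbb{R}^{B\times HW}\!\to\!\mathbb{R}^{B\times hw}$ shapes above.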
Note that instead of designing a new overall framework to achieve performance improvement, we focus on the efficient and effective feature embedding manner for capturing the high-dimensional characteristics of HS images. To be specific, motivated by iterative back-projection refinement works \cite{romano2015boosting,Tao2017Zero,Wang2019Deep},
we propose a source-consistent reconstruction framework, in which a coarse HR-HS image is first initialized and then iteratively refined by learning residual maps from the differences between the input LR-HS image and the pseudo-LR-HS image re-degenerated from the reconstructed HR-HS image. More importantly, to explore the high-dimensional spatial-spectral information of HS images efficiently and effectively, we propose posterior distribution-based HS embedding, the core module of our PDE-Net for feature embedding, which models the process of embedding HS images as an approximation of posterior distributions.
Owing to the explicit problem formulation, the proposed PDE-Net is physically-interpretable and lightweight.
In what follows, we will first provide the overall framework before presenting the proposed probability-based feature embedding scheme.
\if0
\subsection{Initializing the Gradient Position}
\label{ssec:initialization}
This module is proposed to initialize an appropriate gradient position $\mathbf{Z}^{(0)}$ for the formed gradient descent progress.
Taking an LR-HS image $\mathbf{Z}_l$ $\in\mathbb{R}^{B\times hw}$ as input, we firstly expand it to four dimensions by an unsqueeze operation, and a 3D convolution operation to match the requirement of our designed densely-connected 3D-CNN based spectral-spatial block, more about the spectral-spatial network will be introduced in Sec.~\ref{ssec:ssfs}.
After the spectral-spatial module and the skip connection, the extracted spatial-spectral information of $\mathbf{Z}_l$ is further fed into a deconvolution layer to obtain the desired scale feature map. Then, a 3D convolution operation and a squeeze operation are utilized to reduce the dimensions of the upscaled feature map. Finally, the appropriate gradient position $\mathbf{Z}^{(0)}$ is obtained with the addition of the bicubic-upscaled feature map and deconvolution-upscaled feature map.
\subsection{Mining Efficient Gradient Descent Path}
\label{ssec:amendedg}
After initializing an appropriate gradient position $\mathbf{Z}^{(0)}$, in this module, we further mine the efficient gradient descent path by learning an adjusted gradient.
The spatial degradation function $\mathbf{D}$ and its transpose $\mathbf{D^T}$ in Eq.~\ref{equ:4} can be described in a linear way in pixel-wise. Therefore, for better back
projection, we perform $\mathbf{D}$ with a convolution layer, referred as $f_D(\cdot,\theta_D^{(k)})$ , and $\mathbf{D^T}$ with a deconvolution layer, referred as $f_{D^T}(\cdot,\theta_{D^T}^{(k)})$.
$\theta_D^{(k)}$ and $\theta_{D^T}^{(k)}$ represent the learnable parameters of these two linear projection. Then, we adjust the gradient at each step of the iteration process. The Eq.~\ref{equ:4} at the $k$-th step can be described as follows:
\begin{equation}
\begin{split}
\label{equ:adjustedgradient1}
\eta\Delta_{\mathcal{H}}(\mathbf{\hat{Z}}^{(k)}) &= f_{D^T}( \eta\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)}) + \eta\Gamma^{(k)}, \theta_{D^T}^{(k)}) \\
\eta\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)}) &= f_{D}(\eta\Delta_{\mathcal{G}}(\mathbf{\hat{Z}}^{(k)}), \theta_{D}^{(k)}) \\
&= \mathbf{Z}_l - f_D(\mathbf{\hat{Z}}^{(k)}, \theta_{D}^{(k)})
\end{split}
\end{equation}
where $\mathbf{\hat{Z}}^{(k)}$ represents the intermediate HS image recovered at $k$-th stage. $\Delta_{\mathcal{H}}(\mathbf{\hat{Z}}^{(k)})$ is the adjusted gradient.
$\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)})$ is the basic gradient. $\eta\Gamma^{(k)}$ is the increment gradient, which can be directly learned from the basic gradient $\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)})$ by using the densely-connected 3D-CNN based spectral-spatial block, referred as $\mathcal{N}^{(k)}(\cdot,\theta^{(k)})$.
This progress is formulated as:
\begin{equation}
\label{equ:incrementgradien}
\eta\Gamma^{(k)} = \mathcal{N}^{(k)}(\eta\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)}), \theta^{(k)})
\end{equation}
where $\theta^{(k)}$ denotes the learnable parameters of the densely-connected 3D-CNN based spectral-spatial block at $k$-th stage. Specially, this spectral-spatial block applies the same architecture as that in the initializing the gradient position progress. Then, the adjusted gradient $\Delta_{\mathcal{H}}(\mathbf{\hat{Z}}^{(k)})$ in Eq.~\ref{equ:adjustedgradient1} can be further derived as:
\begin{equation}
\begin{split}
\label{equ:adjustedgradient2}
\eta\Delta_{\mathcal{H}}&(\mathbf{\hat{Z}}^{(k)}) = \\
&f_{D^T}(\eta\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)}) + \mathcal{N}^{(k)}(\eta\Delta_{\mathcal{J}}(\mathbf{\hat{Z}}^{(k)}), \theta^{(k)}), \theta_{D^T}^{(k)}))
\end{split}
\end{equation}
Finally, we obtain the adjusted gradient descent process as follows:
\begin{equation}
\label{equ:gdprogress}
\mathbf{Z}^{(k)} = \mathbf{Z}^{(k-1)} + \eta \Delta_{\mathcal{H}}(\mathbf{Z}^{(k-1)}),
\end{equation}
During the adjusted gradient descent process, we utilize the error $\mathbf{E}^{(k)} = \mathbf{Z}_l - f_D(\mathbf{\hat{Z}}^{(k)}, \theta_{D}^{(k)})$ to calculate the differences between recovered HS image and the ground-truth LR-HS image. Specially, we share the convolution parameters of each degradation linear projection layer $f_D(\cdot,\theta_D^{(k)})$ across all stages. When the $\mathbf{E}^{(k)}$ achieves zero, the adjusted gradient $\Delta_{\mathcal{H}}(\mathbf{\hat{Z}}^{(k)})$ will thus reach zero as well. This makes the update of HR HS image to be finished, which means that the proposed framework has mined an efficient gradient descent path and recovered an appropriate HS image with regard to Eq.~\ref{equ:1}.
\fi
\subsection{Source-consistent Reconstruction}
According to Eq. (\ref{equ:1}), it can be deduced that if the reconstructed HR-HS image $\widehat{\mathbf{Y}}\in\mathbb{R}^{B\times HW}$ via a typical method approximates $\mathbf{Y}$ well, the re-degenerated LR-HS image $\widehat{\mathbf{X}}\in\mathbb{R}^{B\times hw}$ from $\widehat{\mathbf{Y}}$ via Eq. (\ref{equ:1}) should be very close to $\mathbf{X}$.
Equivalently, the difference between $\widehat{\mathbf{X}}$ and $\mathbf{X}$ indicates the deviation of $\widehat{\mathbf{Y}}$ from $\mathbf{Y}$.
Based on this deduction, as illustrated in Fig. \ref{fig:hsisr-framework}, we propose a source-consistent reconstruction framework,
composed of two modules, i.e., coarse estimation and iterative refinement.
\subsubsection{Coarse estimation} In this module, we estimate a coarse HR-HS image denoted as $\widehat{\mathbf{Y}}^{(1)}\in\mathbb{R}^{B\times HW}$ from $\mathbf{X}$ in a residual learning manner, i.e.,
\begin{equation}
{\widehat{\mathbf{Y}}^{(1)}} = \mathcal{G}^{(1)}\left(\mathbf{X}\right) + \mathcal{I}\left(\mathbf{X}\right),
\label{eq:coarse}
\end{equation}
where
$\mathcal{G}^{(1)}(\cdot)$ stands for the process of regressing residuals from its input,
the details of which are provided in Section \ref{Sec:PDHSE},
and $\mathcal{I}(\cdot)$ denotes the bicubic interpolation operator.
\subsubsection{Iterative refinement} Let $\mathcal{D}(\cdot)$ be a single convolutional layer with stride $\alpha$ that mimics the degradation process in Eq. (\ref{equ:1}), i.e., $\widehat{\mathbf{X}}=\mathcal{D}(\widehat{\mathbf{Y}})$; the kernel size is set to $5$ (resp. $9$) when $\alpha=4$ (resp. $\alpha=8$). We design a multi-stage structure to iteratively refine the coarse estimation by exploring the differences between $\mathbf{X}$ and $\widehat{\mathbf{X}}$. At the $t$-th ($t=2,\cdots,T$) stage, the refinement process is written as
\begin{equation}
{\widehat{\mathbf{Y}}^{(t)}} = \mathcal{G}^{(t)}\left(\mathbf{X}- \mathcal{D}\left(\widehat{\mathbf{Y}}^{(t-1)}\right)\right) + \widehat{\mathbf{Y}}^{(t-1)},
\label{eq:refine}
\end{equation}
where $\mathcal{G}^{(t)}(\cdot)$ is the set of HS embedding events involved in the $t$-th stage.
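The coarse estimation and iterative refinement steps above can be sketched as follows; the nearest-neighbour upsampling (for $\mathcal{I}$), the block-averaging degradation (for $\mathcal{D}$), and the placeholder regressor \texttt{G} (for $\mathcal{G}^{(t)}$) are illustrative stand-ins for the learned components.

```python
import numpy as np

def upsample(X, B, h, w, alpha):
    """Nearest-neighbour upsampling standing in for the bicubic operator I."""
    cube = X.reshape(B, h, w)
    up = cube.repeat(alpha, axis=1).repeat(alpha, axis=2)
    return up.reshape(B, h * alpha * w * alpha)

def degrade(Yhat, B, H, W, alpha):
    """Block-averaging stand-in for the learned degradation layer D."""
    cube = Yhat.reshape(B, H, W)
    h, w = H // alpha, W // alpha
    return cube.reshape(B, h, alpha, w, alpha).mean(axis=(2, 4)).reshape(B, h * w)

def reconstruct(X, B, h, w, alpha, G, T=3):
    """Coarse estimation followed by T-1 source-consistent refinement stages.

    G maps an LR-sized residual (B, h*w) to an HR-sized residual (B, H*W).
    """
    H, W = alpha * h, alpha * w
    Y = G(X) + upsample(X, B, h, w, alpha)              # coarse estimation
    for _ in range(2, T + 1):
        residual = X - degrade(Y, B, H, W, alpha)       # source consistency
        Y = G(residual) + Y                             # refinement stage
    return Y
```

With a zero-residual \texttt{G}, the refinement is a fixed point: the re-degenerated image matches the input exactly, so no further update occurs.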
\subsection{Posterior Distribution-based HS Embedding}
\label{Sec:PDHSE}
Learning representative embeddings from high-dimensional HS images is a crucial issue for deep learning-based HS image processing methods.
As an HS image is a 3D cube,
the 3-D convolution is an intuitive choice for feature extraction, which has demonstrated its effectiveness \cite{Mei2017Hyperspectral}. However, compared with 1-D and 2-D convolutions, the 3-D convolution results in a significant increase of network parameters, which may potentially cause over-fitting and consume huge computational resources. By analogy with the approximation of a high-dimensional filter with multiple low-dimensional filters in the field of signal processing,
one can perform multiple low-dimensional convolutions along one or two out of three dimensions separately, and then aggregate them together to cover all the three dimensions.
However, some questions naturally arise: ``(1) how to select convolutional patterns? and (2) how to effectively and efficiently aggregate those convolutional layers together?'' Based on human prior knowledge, previous works \cite{dong2019deep}, \cite{wang2020spatial}, \cite{Li2020Mixed}, \cite{Li2021Exploring} empirically combine some low-dimensional convolutional layers, such as 1-D convolutions in the spectral dimension and 2-D convolutions in the spatial dimension,
which may not be optimal, thus compromising performance.
By contrast, from the probabilistic view,
we formulate HS embedding as the optimization of the distribution $\mathcal{P}(\widehat{\mathbf{Y}}|\mathbf{X},\mathbb{T})$,
where
$\mathbb{T}=\{(\mathbf{X}_v, \mathbf{Y}_v)\}_{v=1}^V$
is a set of paired training samples. Let $\mathcal{G}=\{\mathcal{G}^{(t)}\}_{t=1}^T$
be the set of feasible events for HS embedding at each stage, including network architectures and corresponding weights. By marginalizing over $\mathcal{G}$,
we can rewrite this process as
\begin{equation}
\label{equ:gdprogress}
\mathcal{P}(\widehat{\mathbf{Y}}|\mathbf{X},\mathbb{T}) = \int \mathcal{P}(\widehat{\mathbf{Y}}|\mathbf{X},\mathcal{G}) \mathcal{P}(\mathcal{G}|\mathbb{T}) ~~ d\mathcal{G},
\end{equation}
where $\mathcal{P}(\widehat{\mathbf{Y}}|\mathbf{X},\mathcal{G})$ is the model likelihood, which could be calculated via a single inference process,
and the posterior distribution $\mathcal{P}(\mathcal{G}|\mathbb{T})$ captures the distribution of a set of plausible models for the dataset $\mathbb{T}$.
Thus, to achieve HS embedding, we can optimize a distribution $\mathcal{Q}(\mathcal{G})$ to approximate the intractable posterior distribution $\mathcal{P}(\mathcal{G}|\mathbb{T})$.
\label{ssec:ssfs}
Specifically, to model the distribution $\mathcal{Q}(\mathcal{G})$,
we first define the set of plausible HS embedding events $\mathcal{G}$.
Let $\mathcal{G}^{(t)} = \left\{ \{(\mathbf{K}^{(j)},\mathcal{H}^{(j)})\}_{j=1}^J,
~\mathcal{C}_h,~\mathcal{U}\right\}$,
where $\mathbf{K}^{(j)} \in \{0,1\}^{j-1}$
is a binary vector of length $j-1$,
indicating whether the features from the previous $j-1$ units are used (i.e., if an element of $\mathbf{K}^{(j)}$ equals 1, the features of the corresponding unit are used), $\mathcal{H}^{(j)}$ stands for a unit to extract high-level HS features, $\mathcal{C}_h(\cdot)$ is a convolutional layer serving as a projection head that increases the number of channels of the input feature maps, and $\mathcal{U}(\cdot)$ is a spatial upsampling layer transforming the LR feature maps into an HR-HS image.
Moreover, to handle the high-dimensionality of HS images efficiently, we further introduce spatial and spectral separable convolutional layers
for local HS feature extraction, i.e., $\mathcal{H}^{(j)} = \left\{ \mathbf{L}^{(j)},~\mathcal{C}_{spe}^{(j)}(\cdot),~ \mathcal{C}_{spa}^{(j)}(\cdot) \right\}$, where $\mathcal{C}_{spe}^{(j)}(\cdot)$ and $\mathcal{C}_{spa}^{(j)}(\cdot)$ denote convolutional layers in spectral and spatial domains, respectively, and
$\mathbf{L}^{(j)} \in \{0,1\}^{2}$
is a binary vector of length 2,
indicating whether the spectral or spatial embedding layer is used (i.e., if an element of $\mathbf{L}^{(j)}$ equals 1, the corresponding layer is used). We name the network built upon $\mathcal{G}^{(t)}$ with all elements of $\{\mathbf{K}^{(j)}\}_{j=1}^{J}$ and $\{\mathbf{L}^{(j)}\}_{j=1}^{J}$ fixed to 1 the template network (Template-Net). Next, we demonstrate how to learn the distribution $\mathcal{Q}(\mathcal{G})$.
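For illustration, the structure variables in $\mathcal{G}^{(t)}$ can be sketched as plain arrays; the dictionary layout and function name below are assumptions for exposition only, with the all-ones setting corresponding to the Template-Net.

```python
import numpy as np

def template_structure(J):
    """Binary structure variables of one stage, all set to 1 (Template-Net).

    K[j] (length j-1) gates which previous units feed unit j;
    L[j] (length 2) gates the spectral / spatial layers inside unit j.
    """
    K = {j: np.ones(j - 1, dtype=int) for j in range(1, J + 1)}
    L = {j: np.ones(2, dtype=int) for j in range(1, J + 1)}
    return K, L
```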
Considering that introducing the dropout operation into CNNs could objectively minimize the Kullback–Leibler divergence between an approximate distribution $\mathcal{Q}(\mathcal{G})$ and the model posterior $\mathcal{P}(\mathcal{G}|\mathbb{T})$ \cite{gal2016dropout}, we
model the distribution $\mathcal{Q}(\mathcal{G})$ via training a template network whose binary vectors
are replaced with variables following independent learnable Bernoulli distributions. Specifically, as shown in Fig. \ref{fig:Aggregation}, both
the path for feature aggregation ($\{\mathbf{K}^{(j)}\}_{j=1}^{J}$) and the local feature embedding pattern ($\{\mathbf{L}^{(j)}\}_{j=1}^{J}$) are replaced with masks of logits $\epsilon \sim \mathcal{B}(p)$, where $\mathcal{B}(p)$ denotes the Bernoulli distribution with probability $p$. However, the classic sampling process does not provide a differentiable link between the sampling results and the probability, which hinders the gradient descent-based optimization of CNNs. Besides, such dense aggregations in CNNs result in a huge number of feature embeddings, which makes the CNNs hard to optimize. Thus, we also need an efficient and effective way to aggregate those features masked by binary logits. In what follows, we discuss how to deal with these two aspects.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth,height=0.8\linewidth]{figure/HSISR-Framework-Local-002.pdf}
\caption{Illustration of a set of possible aggregation unit $\mathcal{H}^{(j)}$ for HS image embedding. \textbf{Left}: the network-level aggregation pattern controlled by $\mathbf{K}^{(j)}$; \textbf{Right:} the feasible feature embedding patterns in the unit $\mathcal{H}^{(j)}$ controlled by $\mathbf{L}^{(j)}$.}
\label{fig:Aggregation}
\end{figure}
To obtain a differentiable sampling manner of logits $\epsilon$, we use the Gumbel-softmax \cite{jang2016categorical} to relax the discrete Bernoulli distribution to continuous space. Mathematically,
we formulate this process as
\begin{align}
\mathcal{M}(p) = \mathsf{Sigmoid} \Big\{ \frac{1}{\tau} \big( &\log p - \log (1 - p) \notag \\
&+ \log (-\log {r_1}) - \log (-\log {r_2}) \big) \Big\},
\label{con:eqution3}
\end{align}
where $\mathsf{Sigmoid}(\cdot)$ refers to the sigmoid function; $r_1$ and $r_2$ are random variables drawn from the standard uniform distribution on $(0,1)$; $p$ is a learnable parameter encoding the probability of aggregations in the neural network; and the temperature $\tau > 0$ controls the similarity between $\mathcal{M}(p)$ and $\mathcal{B}(1-p)$, i.e.,
as $\tau \to 0$, the distribution of $\mathcal{M}(p)$ approaches $\mathcal{B}(1-p)$, while as $\tau \to \infty$, $\mathcal{M}(p)$ tends to a uniform distribution.
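The relaxed sampling above can be sketched in a few lines of NumPy; this uses the standard Gumbel terms $\log(-\log r)$ and a clipping safeguard against overflow at very small $\tau$, both of which are implementation assumptions rather than details fixed by our model.

```python
import numpy as np

def gumbel_sigmoid(p, tau, rng):
    """Differentiable relaxation of a Bernoulli mask via the Gumbel trick.

    p: probability parameter in (0, 1); tau: temperature; rng: numpy Generator.
    Returns a soft mask in (0, 1) that becomes nearly binary as tau -> 0.
    """
    r1, r2 = rng.uniform(size=2)
    logits = (np.log(p) - np.log(1.0 - p)
              + np.log(-np.log(r1)) - np.log(-np.log(r2))) / tau
    logits = np.clip(logits, -50.0, 50.0)  # numerical safeguard for tiny tau
    return 1.0 / (1.0 + np.exp(-logits))
```

At a low temperature the soft masks concentrate near 0 or 1, while the learnable $p$ still receives gradients through the sigmoid.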
To aggregate the features efficiently and effectively, we design the network architecture at both network and layer levels.
Specifically, according to Eq. (\ref{con:eqution3}), we approximate the discrete variable $\mathbf{K}^{(j)}$ by applying Gumbel-softmax $\mathcal{M}(\cdot)$ to continuous learnable variables $\widetilde{\mathbf{K}}^{(j)}$. Thus, we could formulate network-level feature aggregation as
\begin{align}
&\widetilde {\mathbf{F}}^{(j)} = { \mathcal{C}_{1 \times 1}}\left( { { \mathbf{T}^{(0,j)} , \cdots, \mathbf{T}^{(j-1,j)}}} \right),\nonumber\\
&{\rm with} ~~ \mathbf{T}^{(k,j)} = {\mathbf{F}}^{(k)} \times \mathcal{M}\left(\widetilde{\mathbf{K}}^{(j)}_{(k)}\right),
\label{con:eqution4}
\end{align}
where $\widetilde{\mathbf{F}}^{(j)}~(1 \leq j \leq J)$ denotes the aggregated feature fed into $\mathcal{H}^{(j)}$; $\mathbf{F}^{(k)}~(1 \leq k \leq j-1)$ is the feature from the ${k}$-th embedding unit $\mathcal{H}^{(k)}$; $\mathbf{F}^{(0)}$ denotes an HS embedding extracted from the input of $\mathcal{G}^{(t)}(\cdot)$ by a single linear convolutional layer;
$\widetilde{\mathbf{K}}^{(j)}_{(k)}$ indicates the ${k}$-th element of the vector $\widetilde{\mathbf{K}}^{(j)}$, which lies in the range $[0,1]$ in accordance with its meaning as the sampling probability of $\mathbf{K}^{(j)}$;
and $\mathcal{C}_{1 \times 1}(\cdot)$ is a convolutional layer with $1 \times 1$ kernels that compresses the feature embeddings and activates them with the rectified linear unit (ReLU).
In analogy to the network-level design, we also introduce the continuous learnable weights $\widetilde{\mathbf{L}}^{(j)}$ with Gumbel-softmax to approximate the Bernoulli distribution of $\mathbf{L}^{(j)}$ in each feature embedding unit as
\begin{align}
&{\mathbf{F}^{(j + 1)}} = \mathbf{O}^{(j)} + {\mathcal{C}_{spa}^{(j)}}\left( \mathbf{O}^{(j)} \right) \times \mathcal{M}\left(\widetilde{\mathbf{L}}^{(j)}_{(2)}\right),\nonumber \\
&{\rm with} ~~ \mathbf{O}^{(j)} = \widetilde {\mathbf{F}}^{(j)} + \mathcal{C}_{spe}^{(j)}\left(\widetilde {\mathbf{F}}^{(j)}\right) \times \mathcal{M}\left(\widetilde{\mathbf{L}}^{(j)}_{(1)}\right).
\label{con:eqution5}
\end{align}
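The two gated steps above (network-level aggregation and the unit-level residual embedding) can be sketched as follows; the doubling map standing in for the convolutional layers and the \texttt{compress} callable standing in for $\mathcal{C}_{1\times1}$ are illustrative assumptions.

```python
import numpy as np

def aggregate(features, masks, compress):
    """Network-level aggregation: gate each previous feature by its soft mask,
    concatenate, then compress (stand-in for the 1x1 convolution)."""
    gated = [F * m for F, m in zip(features, masks)]
    return compress(np.concatenate(gated, axis=0))

def embed_unit(F_in, m_spe, m_spa, C_spe, C_spa):
    """Unit-level embedding: residual spectral then spatial branches,
    each gated by its Gumbel-sigmoid mask."""
    O = F_in + C_spe(F_in) * m_spe
    return O + C_spa(O) * m_spa
```

Setting both masks to 0 reduces the unit to an identity mapping, while masks of 1 recover the full spectral-then-spatial residual path of the Template-Net.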
Training such a masked Template-Net yields the approximation $\mathcal{Q}(\mathcal{G})$ of the posterior distribution $\mathcal{P}(\mathcal{G}|\mathbb{T})$. In the next section, we discuss the inference process of the proposed model and the model epistemic uncertainty.
\if 0
\textcolor{magenta}{\emph{Remarks.} Note that
our posterior distribution-based embedding
is different from the network architecture search (NAS) technique. More specifically, NAS aims to automatically find the best neural architecture from the given search space under the guidance of performance. However, ours
formulates the process of embedding HS images as an approximation of posterior distributions, from which we can sample different models to reconstruct HR-HS images. See the quantitative comparison in Table \ref{tab:nasvspde}.
}
\fi
\subsection{Model Inference \& Epistemic Uncertainty}
\label{Sec:Inference}
As aforementioned, given an input LR-HS image, the proposed PDE-Net predicts the distribution of an HR-HS image, i.e., $\mathcal{P}(\mathbf{Y}|\mathbf{X},\mathbb{T})$. Thus, we have to obtain its expectation to objectively compare reconstruction results. Specifically, we adopt the Monte Carlo (MC) sampling method to randomly sample $N$ models from $\mathcal{P}(\mathcal{G}|\mathbb{T})$, which output reconstructed HR-HS images denoted as $\widehat{\mathbf{Y}}_1,~\widehat{\mathbf{Y}}_2,~\cdots,~\widehat{\mathbf{Y}}_N\in\mathbb{R}^{B\times HW}$, and then calculate $\widehat{\mathbf{Y}}=\frac{1}{N}\sum_{n=1}^N\widehat{\mathbf{Y}}_n$.
Note that thanks to the
parallelism of deep neural networks, we can realize the MC sampling efficiently in a batched inference manner: we simply feed $N$ copies of an input LR-HS image as a mini-batch and average the super-resolved HR-HS images over the batch dimension. See Fig. \ref{fig:sampling} for
the effect of the hyperparameter $N$ on quantitative reconstruction quality.
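The MC estimate of the expectation can be sketched as follows; for clarity the sketch loops over samples instead of batching them, and the stochastic callable \texttt{sample\_model} stands in for one draw from $\mathcal{Q}(\mathcal{G})$ followed by a forward pass.

```python
import numpy as np

def mc_reconstruct(X, sample_model, N=8):
    """Estimate E[Y | X, T] by averaging N stochastic forward passes.

    sample_model(X) draws one model realization (here: any stochastic
    callable) and returns one reconstructed HR-HS image.
    """
    samples = np.stack([sample_model(X) for _ in range(N)], axis=0)
    return samples.mean(axis=0), samples
```

In practice the $N$ passes run as one mini-batch, as described above; the sample stack is kept because it is reused for the epistemic uncertainty.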
Based on the probabilistic characteristic of our PDE-Net,
we can also quantify the uncertainty of the reconstruction by measuring how much the sampled reconstructions deviate from their expectation.
To measure the variation of the reconstructed $\widehat{\mathbf{Y}}$, we first discretize the continuous space of the network output in the range of $[0,1]$ with an interval of $\frac{1}{255}$. Then we define the epistemic uncertainty of a pixel as
\begin{align}
&\mathcal{S}\left( \widehat{y} \right) = \sum_{n=1}^{N}{ \mathbf{I}_n } / {N} \times 100\%, \nonumber \\
&{\rm with} ~~ \mathbf{I}_n = \left\{
\begin{aligned}
1 ~~ & {\rm if} ~\mathcal{Z}(\widehat{y}_n) \neq \mathcal{Z}(\widehat{y}), \\
0 ~~ & {\rm otherwise},
\end{aligned}
\right.
\label{eq:Reconstruction}
\end{align}
where $\widehat{y}_n$ and $\widehat{y}$ are typical pixels of $\widehat{\mathbf{Y}}_n$ and $\widehat{\mathbf{Y}}$, respectively, and $\mathcal{Z}(\widehat{y})=\mathsf{round}(\widehat{y}\times 255)/255$ is the discretization function with $\mathsf{round}(\cdot)$ being the rounding operation.
Note that we do not require the ground-truth pixel value when calculating the epistemic uncertainty. Thus, we can measure the model epistemic uncertainty during both training and testing phases.
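The per-pixel uncertainty measure above is straightforward to compute from the MC sample stack; a minimal sketch:

```python
import numpy as np

def discretize(y):
    """Quantize to the 1/255 grid: Z(y) = round(y * 255) / 255."""
    return np.round(y * 255.0) / 255.0

def epistemic_uncertainty(samples, mean):
    """Percentage of MC samples whose discretized value differs from Z(mean).

    samples: (N, ...) stack of reconstructions; mean: their average.
    """
    disagree = discretize(samples) != discretize(mean)[None, ...]
    return disagree.mean(axis=0) * 100.0
```

A pixel on which all samples quantize identically gets 0\% uncertainty; a pixel on which one of four samples deviates by more than half a quantization step gets 25\%.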
\subsection{Loss Function}
\label{ssec:lossfunction}
Following previous single-image and HS image super-resolution works \cite{Kim2016Accurate,Lim2017Enhanced,Zhang2018Image,Dai2019Second,Li2020Mixed,Jiang2020Learning}, we train PDE-Net by minimizing the $\ell_1$ distance between $\widehat{\mathbf{Y}}$ and $\mathbf{Y}$:
\begin{equation}
\label{equ:l1loss}
\mathcal{L}_1(\mathbf{\widehat{Y}}, \mathbf{Y}) = \frac{1}{B\times HW}\left\|\mathbf{\widehat{Y}}- \mathbf{Y} \right\|_1.
\end{equation}
Besides, we also promote $\mathcal{D}(\mathbf{\widehat{Y}})$ to be close to $\mathbf{X}$ to regularize $\mathbf{\widehat{Y}}$, i.e.,
\begin{equation}
\label{equ:l2loss}
\mathcal{L}_2(\mathbf{\widehat{Y}}, \mathbf{X}) = \frac{1}{B\times hw}\left\|\mathcal{D}(\mathbf{\widehat{Y}})- \mathbf{X} \right\|_F^2,
\end{equation}
where $\|\cdot\|_F$ is the Frobenius norm of a matrix.
Thus, the overall loss function for training our PDE-Net is written as
\begin{equation}
\label{equ:loverall}
\mathcal{L}(\mathbf{\widehat{Y}}, \mathbf{Y}, \mathbf{X}) = \mathcal{L}_1(\mathbf{\widehat{Y}}, \mathbf{Y}) + \lambda\mathcal{L}_2(\mathbf{\widehat{Y}}, \mathbf{X}),
\end{equation}
where the hyper-parameter $\lambda$ balances the two terms and is empirically set to 1.
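The two training terms and their combination can be sketched directly from the definitions above; the identity degradation used in place of $\mathcal{D}$ below is an illustrative assumption.

```python
import numpy as np

def l1_loss(Y_hat, Y):
    """Mean absolute error over all B x HW entries."""
    return np.abs(Y_hat - Y).mean()

def source_loss(Y_hat, X, D):
    """Squared Frobenius distance between D(Y_hat) and X, averaged over B x hw."""
    diff = D(Y_hat) - X
    return (diff ** 2).sum() / diff.size

def total_loss(Y_hat, Y, X, D, lam=1.0):
    """Overall objective: L1 fidelity plus lam-weighted source consistency."""
    return l1_loss(Y_hat, Y) + lam * source_loss(Y_hat, X, D)
```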
\section{Experiments}
\label{sec:experiments}
\subsection{Experiment Settings}
\subsubsection{Datasets}
We employed 3 common HS image datasets to evaluate the performance of our PDE-Net, i.e., CAVE\footnote{http://www.cs.columbia.edu/CAVE/databases/} \cite{Yasuma2010CAVE}, Harvard\footnote{http://vision.seas.harvard.edu/hyperspec/} \cite{Chakrabarti2011Harvard},
and NCALM\footnote{http://www.grss-ieee.org/community/technical-committees/data-fusion/2018-ieee-grss-data-fusion-contest/} \cite{xu2019advanced},
whose details are listed as follows.
\begin{itemize}
\item The CAVE dataset contains 32 HS images of spatial dimensions $512\times512$ and spectral dimension 31, which were collected by a generalized assorted pixel camera ranging from 400 to 700 nm.
We randomly selected 20 HS images for training, and the remaining 12 HS images for testing.
\item The Harvard dataset consists of 50 HS images of spatial dimensions $1040\times1392$ and spectral dimension 31, which were gathered by a Nuance FX, CRI Inc. camera covering the wavelength range from 420 to 720 nm. We randomly selected 40 HS images as the training set, and the rest as the testing set.
\item The NCALM dataset used for the IEEE GRSS Data Fusion Contest contains only one HS image of spatial dimensions $1202\times4172$, which covers a 380-1050 nm spectral range with 48 bands. For this image, we cropped four regions of $512\times512$ spatial dimensions from the left part for testing and used the rest for training.
\end{itemize}
During training, we cropped overlapped patches of spatial dimensions $128\times128$, and utilized rotation and flipping for data augmentation. Following previous works \cite{Jiang2020Learning,Li2020Mixed,Li2021Exploring}, we used the bicubic down-sampling method to generate LR-HS images.
\subsubsection{Implementation details} We implemented the proposed method with PyTorch, where the ADAM optimizer \cite{kingma2014adam} with the exponential decay rates $\beta_1=0.9$ and $\beta_2=0.999$ was utilized. We initialized the learning rate as $5\times10^{-4}$, which was halved
every 25 epochs.
We set the batch size to 4 for all three datasets. The total training process contained 100 warm-up and 100 training epochs. During the warm-up phase, we set all elements of $\{\mathbf{K}^{(j)},~\mathbf{L}^{(j)}\}_{j=1}^J$ to 1 to warm up the Template-Net. To increase the flexibility of our model, we defined the probability space channel-wise, i.e., assigning a different probability to each channel (convolutional kernel).
\subsubsection{Evaluation metrics} Following previous works
\cite{Li2020Mixed}, \cite{Li2021Exploring}, we adopted three widely-used metrics to evaluate the quality of reconstructed HR-HS images quantitatively, i.e.,
\begin{itemize}
\item Mean Peak Signal-to-Noise Ratio (MPSNR):
\begin{equation}
\mathsf{MPSNR}(\mathbf{Y},\widehat{\mathbf{Y}}) = -\frac{10}{B} \sum_{u=1}^{B} \log_{10}\left(\mathsf{MSE}(\mathbf{Y}_{(u)},\widehat{\mathbf{Y}}_{(u)})\right),
\end{equation}
where $\widehat{\mathbf{Y}}_{(u)}\in\mathbb{R}^{H\times W}$ and $\mathbf{Y}_{(u)}\in\mathbb{R}^{H\times W}$ are the $u$-th ($1\leq u\leq B$) spectral bands of $\widehat{\mathbf{Y}}$ and $\mathbf{Y}$, respectively, and $\mathsf{MSE}(\cdot, \cdot)$ returns the mean squared error between its inputs. The larger, the better.
\item Mean Structural Similarity Index (MSSIM):
\begin{equation}
\mathsf{MSSIM}(\mathbf{Y},\widehat{\mathbf{Y}}) = \frac{1}{B} \sum_{u=1}^{B}\mathsf{SSIM}(\mathbf{Y}_{(u)},\widehat{\mathbf{Y}}_{(u)}),
\end{equation}
where $\mathsf{SSIM}(\cdot,\cdot)$ \cite{Zhou2002SSIM} computes the SSIM value of a given spectral band. The larger, the better.
\item Spectral Angle Mapper (SAM) \cite{Yuhas1992Discrimination}:
\begin{equation}
\mathsf{SAM}(\mathbf{Y},\widehat{\mathbf{Y}}) = \frac{1}{HW} \sum_{{m}=1}^{HW}\arccos{\left(\frac{\widehat{\mathbf{y}}^\textsf{T}_{(m)}\mathbf{y}_{(m)}}{\|\widehat{\mathbf{y}}_{(m)}\|_2\|\mathbf{y}_{(m)}\|_2}\right)},
\end{equation}
where $\widehat{\mathbf{y}}_{(m)}\in \mathbb{R}^{B}$ and $\mathbf{y}_{(m)}\in \mathbb{R}^{B}$ are the spectral signatures of the $m$-th ($1\leq m\leq HW$) pixels of $\widehat{\mathbf{Y}}$ and $\mathbf{Y}$, respectively, and $\|\cdot\|_2$ is the $\ell_2$ norm of a vector. The smaller, the better.
\end{itemize}
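For reference, MPSNR and SAM follow directly from the definitions above (MSSIM is omitted here since SSIM involves windowed statistics); the sketch assumes a pixel range of $[0,1]$ and HS images stored as $B\times HW$ arrays with spectra in columns.

```python
import numpy as np

def mpsnr(Y, Y_hat):
    """Band-wise PSNR (pixel range [0, 1]) averaged over the B bands."""
    mse = ((Y - Y_hat) ** 2).mean(axis=1)          # per-band MSE, shape (B,)
    return float((-10.0 * np.log10(mse)).mean())

def sam(Y, Y_hat, eps=1e-12):
    """Mean spectral angle (radians) over all HW pixels; columns are spectra."""
    num = (Y * Y_hat).sum(axis=0)
    den = np.linalg.norm(Y, axis=0) * np.linalg.norm(Y_hat, axis=0) + eps
    return float(np.arccos(np.clip(num / den, -1.0, 1.0)).mean())
```

A uniform error of $0.1$ in every pixel yields an MPSNR of $20$ dB, and the spectral angle is invariant to a per-pixel scaling of the spectrum.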
\subsection{Comparison with State-of-the-Art Methods}
\begin{table}[t]
\caption{Quantitative Comparisons Of Different Methods Over The CAVE Dataset. The best results of all methods and the best results of existing methods are highlighted in bold and underline, respectively. ``$\uparrow$'' (resp. ``$\downarrow$'') means the larger (resp. smaller), the better.}
\vspace{-0.2cm}
\centering
\label{tab:caveresults}
\begin{tabu}{c|c|c|c|c|c}
\tabucline[1pt]{*}
Methods &Scale &\#Params &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline\hline
BI &4 &- &36.533 &0.9479 &4.230 \\
3DFCNN\cite{Mei2017Hyperspectral} &4 &0.04M &38.061 &0.9565 &3.912 \\
3DGAN\cite{Li2020Hyperspectral} &4 &0.59M &39.947 &0.9645 &3.702 \\
SSPSR\cite{Jiang2020Learning} &4 &26.08M &40.104 &0.9645 &3.623 \\
MCNet\cite{Li2020Mixed} &4 &2.17M &40.658 &0.9662 &3.499 \\
ERCSR\cite{Li2021Exploring} &4 &1.59M &\underline{40.701} &\underline{0.9662} &\underline{3.491} \\ \hline
Template-Net &4 &2.29M &40.911 &0.9666 &3.514 \\
PDE-Net &4 &2.30M &\textbf{41.236} &\textbf{0.9672} &\textbf{3.455} \\ \hline\hline
BI &8 &- &32.283 &0.8993 &5.412 \\
3DFCNN\cite{Mei2017Hyperspectral} &8 &0.04M &33.194 &0.9131 &5.019 \\
3DGAN\cite{Li2020Hyperspectral} &8 &0.66M &34.930 &0.9293 &4.888 \\
SSPSR\cite{Jiang2020Learning} &8 &28.44M &34.992 &0.9273 &4.680 \\
MCNet\cite{Li2020Mixed} &8 &2.96M &35.518 &0.9328 &4.519 \\
ERCSR\cite{Li2021Exploring} &8 &2.38M &\underline{35.519} &\underline{0.9338} &\underline{4.498} \\ \hline
Template-Net &8 &2.32M &35.781 &0.9341 &4.442 \\
PDE-Net &8 &2.33M &\textbf{36.021} &\textbf{0.9363} &\textbf{4.312} \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\begin{table}[t]
\caption{Quantitative Comparisons Of Different Methods Over The Harvard Dataset. The best results of all methods and the best results of existing methods are highlighted in bold and underline, respectively. ``$\uparrow$'' (resp. ``$\downarrow$'') means the larger (resp. smaller), the better.} \vspace{-0.2cm}
\centering
\label{tab:harvardresults}
\begin{tabu}{c|c|c|c|c|c}
\tabucline[1pt]{*}
Methods &Scale &\#Params &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline\hline
BI &4 &- &37.255 &0.8977 &2.574 \\
3DFCNN\cite{Mei2017Hyperspectral} &4 &0.04M &38.110 &0.9101 &2.527 \\
3DGAN\cite{Li2020Hyperspectral} &4 &0.59M &38.781 &0.9189 &2.520 \\
SSPSR\cite{Jiang2020Learning} &4 &26.08M &39.397 &\underline{0.9287} &\underline{2.433} \\
MCNet\cite{Li2020Mixed} &4 &2.17M &\underline{39.412} &0.9268 &2.445 \\
ERCSR\cite{Li2021Exploring} &4 &1.59M &39.395 &0.9265 &2.440 \\ \hline
Template-Net &4 &2.29M &39.595 &0.9295 &2.473 \\
PDE-Net &4 &2.30M &\textbf{40.021} &\textbf{0.9346} &\textbf{2.427} \\ \hline\hline
BI &8 &- &33.597 &0.8129 &3.076 \\
3DFCNN\cite{Mei2017Hyperspectral} &8 &0.04M &34.155 &0.8251 &2.984 \\
3DGAN\cite{Li2020Hyperspectral} &8 &0.66M &34.799 &0.8321 &3.047 \\
SSPSR\cite{Jiang2020Learning} &8 &28.44M &35.094 &0.8410 &\textbf{2.871} \\
MCNet\cite{Li2020Mixed} &8 &2.96M &\underline{35.264} &\underline{0.8414} &2.937 \\
ERCSR\cite{Li2021Exploring} &8 &2.38M &35.207 &0.8402 &2.928 \\ \hline
Template-Net &8 &2.32M &35.242 &0.8413 &2.983 \\
PDE-Net &8 &2.33M &\textbf{35.382} &\textbf{0.8438} &\underline{2.924} \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\begin{table}[!t]
\caption{Quantitative Comparisons Of Different Methods Over The NCALM Dataset. The best results of all methods and the best results of existing methods are highlighted in bold and underline, respectively. ``$\uparrow$'' (resp. ``$\downarrow$'') means the larger (resp. smaller), the better.}\vspace{-0.2cm}
\centering
\label{tab:ieeecontestresults}
\begin{tabu}{c|c|c|c|c|c}
\tabucline[1pt]{*}
Methods &Scale &\#Params &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline\hline
BI &4 &- &43.618 &0.9646 &2.504 \\
3DFCNN\cite{Mei2017Hyperspectral} &4 &0.04M &44.300 &0.9703 &2.390 \\
3DGAN\cite{Li2020Hyperspectral} &4 &0.59M &45.239 &0.9761 &2.267 \\
SSPSR\cite{Jiang2020Learning} &4 &12.88M &45.271 &0.9754 &2.221 \\
MCNet\cite{Li2020Mixed} &4 &2.17M &45.578 &0.9764 &2.156 \\
ERCSR\cite{Li2021Exploring} &4 &1.59M &\underline{45.683} &\underline{0.9768} &\underline{2.132} \\ \hline
Template-Net &4 &2.29M &45.920 &0.9780 &2.155 \\
PDE-Net &4 &2.30M &\textbf{46.533} &\textbf{0.9810} &\textbf{1.927} \\ \hline\hline
BI &8 &- &38.699 &0.9079 &4.530 \\
3DFCNN\cite{Mei2017Hyperspectral} &8 &0.04M &39.128 &0.9142 &4.409 \\
3DGAN\cite{Li2020Hyperspectral} &8 &0.66M &39.527 &0.9190 &4.272 \\
SSPSR\cite{Jiang2020Learning} &8 &15.23M &39.799 &0.9221 &4.150 \\
MCNet\cite{Li2020Mixed} &8 &2.96M &39.809 &0.9217 &4.153 \\
ERCSR\cite{Li2021Exploring} &8 &2.38M &\underline{39.999} &\underline{0.9233} &\underline{4.103} \\ \hline
Template-Net &8 &2.32M &40.007 &0.9225 &4.244 \\
PDE-Net &8 &2.33M &\textbf{40.286} &\textbf{0.9265} &\textbf{3.976} \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\begin{table}[!t]
\scriptsize
\caption{Results of the ablation study towards the computational efficiency over the CAVE dataset. $N$ refers to the MC sampling times. } \vspace{-0.2cm}
\centering
\label{tab:flops-results}
\setlength{\tabcolsep}{0.5mm}{
\begin{tabu}{c|c|c|c|c|c|c} \tabucline[1pt]{*}
Methods &Scale
&Inference time &\#FLOPs &Scale &Inference time &\#FLOPs \\ \hline
3DFCNN\cite{Mei2017Hyperspectral} &4 &0.197s &0.321T &8 &0.195s &0.321T \\
3DGAN\cite{Li2020Hyperspectral} &4 &0.382s &1.300T &8 &0.337s &1.233T \\
SSPSR\cite{Jiang2020Learning} &4 &0.429s &3.029T &8 &0.251s &1.818T \\
MCNet\cite{Li2020Mixed} &4 &0.578s &4.489T &8 &0.327s &10.220T \\
ERCSR\cite{Li2021Exploring} &4 &0.430s &4.463T &8 &0.266s &10.429T \\ \hline
PDE-Net ($N=1$) &4 &0.641s &1.275T &8 &0.189s &0.604T \\
PDE-Net ($N=5$) &4 &0.704s &6.375T &8 &0.258s &3.020T \\
\tabucline[1pt]{*}
\end{tabu}}
\end{table}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figure/sr_comparisons3.pdf}\vspace{-0.2cm}
\caption{Visual comparisons of different methods with
$\alpha=4$.
For ease of comparison, we visualized the reconstructed HS images in the form of RGB images, which were generated via employing the commonly-used spectral response function of Nikon-D700
\cite{jiang2013space}.
}
\label{fig:Harvard-errormap}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figure/spectral_response.pdf}\vspace{-0.3cm}
\caption{Visual comparison of the spectral signatures of pixels reconstructed by different methods.
The positions of the corresponding pixels are marked by green dots in the RGB images. The spectral signatures produced by our PDE-Net are much closer to the ground-truth ones than those of the other compared methods, especially in the $1^{st}$ and $4^{th}$ columns.}
\label{fig:spectral-intensity}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{figure/UrbanRealTest.pdf}\vspace{-0.3cm}
\caption{Visual comparison of different methods on the HS image from the Urban dataset ($\alpha=4$).}
\label{fig:realtest}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{figure/uncertainty.pdf} \vspace{-0.3cm}
\caption{Visual illustration of the uncertainty estimation.
(a) Zoomed-in patch in the red frame of the ground-truth HS images; (b) and (d): reconstructed HS images from two MC samplings; and (c): uncertainty maps. It can be observed that higher uncertainties correspond to larger variations of textures and higher risks of reconstruction errors.}
\label{fig:uncertainty}
\end{figure*}
We compared the proposed PDE-Net with 5 state-of-the-art deep learning-based methods, i.e., 3DFCNN \cite{Mei2017Hyperspectral}, 3DGAN \cite{Li2020Hyperspectral}, SSPSR \cite{Jiang2020Learning}, MCNet \cite{Li2020Mixed}, and ERCSR \cite{Li2021Exploring}. We also provided the results of bi-cubic interpolation (BI) as a baseline. For a fair comparison,
we retrained all the compared methods with the same training data as ours by using the codes released by the authors with suggested settings. Besides, we applied the same data pre-processing to all methods.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figure/relationship_u_e.pdf}\vspace{-0.3cm}
\caption{Visualization of the relationship between the uncertainty and reconstruction error of pixels.
It can be observed that the average reconstruction error is approximately proportional to the pixel uncertainty.}
\label{fig:relationship_u_e}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.95\linewidth]{figure/Sampling.pdf} \vspace{-0.3cm}
\caption{Illustration of the performance of our PDE-Net and Template-Net with different MC sampling times on three datasets ($\alpha=4$), i.e., (a) CAVE, (b) Harvard, and (c) NCALM.
}
\label{fig:sampling}
\end{figure*}
Tables \ref{tab:caveresults}, \ref{tab:harvardresults}, and \ref{tab:ieeecontestresults} show the quantitative results of different methods on the three datasets, where
it can be observed that
\begin{itemize}
\item our PDE-Net consistently achieves the best performance in terms of all three metrics on all three datasets for $\alpha=4$ and $8$, except the SAM value on the Harvard dataset for the $8\times$ super-resolution. In particular, our PDE-Net improves the MPSNR of the best existing methods by $0.53$ dB, $0.61$ dB, and $0.85$ dB (resp. $0.50$ dB, $0.12$ dB, and $0.29$ dB) over the CAVE, Harvard, and NCALM datasets, respectively, when $\alpha=4$ (resp. 8). The superiority of SSPSR \cite{Jiang2020Learning} over our PDE-Net in terms of SAM for the $8\times$ super-resolution is likely attributable to its huge number of network parameters and its spectral attention mechanism;
\item the proposed Template-Net also obtains better reconstruction quality than most of the compared methods, demonstrating the superiority of our source-consistent HS images reconstruction framework to some extent;
\item our PDE-Net further improves the Template-Net on the three datasets under all scenarios,
validating the effectiveness and advantage of our posterior distribution-based HS embedding method; and
\item
regarding ERCSR \cite{Li2021Exploring}, which always achieves the best or second-best performance among the compared methods: although it has fewer network parameters than our PDE-Net, increasing its number of parameters does not bring an obvious performance improvement and may even worsen performance \cite{Li2021Exploring}, due to the limitations of its network architecture.
Besides, as listed in Table~\ref{tab:stagesresults}, our PDE-Net with a number of parameters comparable to ERCSR still achieves better performance than ERCSR \cite{Li2021Exploring}.
\end{itemize}
Besides, Fig. \ref{fig:Harvard-errormap} visually compares the results of different methods, where we can observe that most high-frequency details are lost in the super-resolved images of the compared methods. By contrast, our PDE-Net produces results with sharper textures that are closer to the ground truth, which further demonstrates its advantage.
In addition, Fig. \ref{fig:spectral-intensity} illustrates the spectral signatures of several pixels of the HR-HS images reconstructed by different methods, where it can be seen that the shapes of the spectral signatures of all methods are generally consistent with the ground-truth ones. Moreover, the spectral signatures produced by our PDE-Net are closer to the ground truth than those of the other methods, demonstrating the advantage of our method.
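The SAM metric reported in the tables quantifies exactly this spectral fidelity: the average angle between reconstructed and ground-truth pixel spectra (lower is better). A minimal NumPy sketch of such a metric (an illustrative implementation, not the evaluation code used in our experiments) is:

```python
import numpy as np

def spectral_angle_mapper(ref, rec, degrees=True):
    """Mean spectral angle between two HS images of shape (H, W, C)."""
    ref = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    rec = rec.reshape(-1, rec.shape[-1]).astype(np.float64)
    dot = np.sum(ref * rec, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(rec, axis=1)
    # Clip to guard against round-off outside arccos' domain.
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    angles = np.arccos(cos)
    return float(np.degrees(angles).mean()) if degrees else float(angles.mean())
```

Because the angle is invariant to per-pixel intensity scaling, SAM isolates spectral shape errors from brightness errors, which is why it complements MPSNR/MSSIM in the comparisons above.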
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figure/probability_distribution.pdf}\vspace{-0.3cm}
\caption{Visualization of network-level and layer-level probability space corresponding to the channels and layers of PDE-Net on the CAVE dataset for $4\times$ SR. (a) and (b) denote the network-level and layer-level distribution, respectively.
}
\label{fig:probability_distribution}
\end{figure}
To demonstrate the robustness and generalization ability of our PDE-Net in practice, we also conducted an experiment in a real scenario, in which the input LR-HS image is directly acquired by a typical sensor rather than simulated by spatially downsampling the corresponding HR-HS image. Specifically, we utilized an HS image of spatial dimensions $307\times307$ and spectral dimension 210, ranging from 400 to 2500 nm,
from the Urban\footnote{https://rslab.ut.ac.ir/data} dataset collected by the HYDICE hyperspectral system.
Due to the limitation of computing resources, we only selected a region of size $128 \times 128$ from the HS image for testing.
Fig. \ref{fig:realtest} visually compares the results of different methods
trained with the NCALM dataset,
where it can be seen that
the super-resolved image produced by our method shows clearer and sharper textures, demonstrating the advantage of our method. Note that the corresponding ground-truth HR-HS image is not available, making it impossible to quantitatively compare different methods here.
Finally, we compared the computational efficiency of different methods, measured by the inference time and the number of floating-point operations (\#FLOPs), in Table \ref{tab:flops-results}.
It can be seen that PDE-Net consumes less inference time and far fewer FLOPs than most compared methods when performing MC sampling only once ($N=1$). Although the \#FLOPs grows linearly with the number of MC samplings $N$, we note that the MC-sampling process can be parallelized, as mentioned in Section \ref{Sec:Inference}, and thus with more GPU nodes the inference time of $N$ MC samplings could be comparable to that of a single one.
Besides, as illustrated in Fig. \ref{fig:sampling}, the reconstruction quality increases relatively rapidly over the first 5 MC samplings but only marginally thereafter. Thus, in practice, one can limit MC sampling to at most 5 passes to save computational cost with only a slight sacrifice in reconstruction quality.
\subsection{Ablation Study}
\subsubsection{The number of stages}
To explore how the number of stages involved in our PDE-Net affects performance, we evaluated the PDE-Net with various numbers of stages, i.e., $T=2,~3,~4$ and $5$.
From Table \ref{tab:stagesresults}, we can see that increasing the number of stages appropriately is able to improve the performance of
both PDE-Net and Template-Net,
demonstrating the rationality of
the iterative refinement strategy on our source-consistent reconstruction framework. Especially, the PDE-Net is consistently better than
Template-Net under all scenarios,
which further indicates the effectiveness of our posterior distribution-based HS embedding method.
Based on this study, we set the number of stages of our PDE-Net to 4 in all the remaining experiments of this paper.
\subsubsection{The $\mathcal{L}_2$ loss}
Table \ref{tab:wol2loss} lists the reconstruction quality of our PDE-Net trained with and without the $\mathcal{L}_2$ loss, from which it can be concluded that the $\mathcal{L}_2$ loss contributes to the reconstruction quality of our PDE-Net.
The reason is that the $\mathcal{L}_2$ loss not only regularizes the reconstructed HR-HS image, but also guarantees that the residual between the pseudo-LR-HS image and the input LR-HS image is minimized progressively.
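The loss composition described above can be sketched as follows; the fidelity term, the variable names, and the weight $\lambda$ are illustrative assumptions, not the exact training objective of PDE-Net:

```python
import numpy as np

def training_loss(hr_pred, hr_gt, pseudo_lr, lr_input, lam=0.1):
    """Reconstruction loss plus an L2 term tying the re-degraded
    prediction (pseudo-LR) back to the actual LR input, enforcing
    source consistency. lam is a hypothetical weighting factor."""
    rec = np.mean(np.abs(hr_pred - hr_gt))        # fidelity to ground truth
    l2 = np.mean((pseudo_lr - lr_input) ** 2)     # source-consistency term
    return rec + lam * l2
```

A prediction whose re-degraded version matches the LR input incurs no penalty from the second term, which is the behavior the ablation in Table \ref{tab:wol2loss} probes.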
\begin{table}[!t]
\caption{Results of the ablation study towards the number of stages ($T$) over the CAVE dataset ($\alpha=4$).
}\vspace{-0.2cm}
\centering
\label{tab:stagesresults}
\begin{tabu}{c|c|c|c|c|c}
\tabucline[1pt]{*}
Stages &Methods &\#Params &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline
\multirow{2}{*}{2} &Template-Net &1.147M &40.785 &0.9663 &3.556 \\
&PDE-Net &1.151M &40.997 &0.9668 &3.477 \\ \hline
\multirow{2}{*}{3} &Template-Net &1.721M &40.879 &0.9667 &3.520 \\
&PDE-Net &1.726M &41.145 &0.9671 &3.457 \\ \hline
\multirow{2}{*}{4} &Template-Net &2.295M &40.911 &0.9666 &3.514 \\
&PDE-Net &2.300M &41.236 &0.9672 &3.455 \\ \hline
\multirow{2}{*}{5} &Template-Net &2.868M &41.047 &0.9666 &3.509 \\
&PDE-Net &2.877M &41.241 &0.9672 &3.437 \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\begin{table}[!t]
\caption{Results of the ablation study towards the $\mathcal{L}_2$ loss over the CAVE dataset ($\alpha=4$).} \vspace{-0.2cm}
\centering
\label{tab:wol2loss}
\begin{tabu}{c|ccc}
\tabucline[1pt]{*}
Methods &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline
PDE-Net w/o $\mathcal{L}_2$ &41.083 &0.9670 &3.467 \\
PDE-Net &41.236 &0.9672 &3.455 \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\begin{table}[!t]
\caption{Comparison of the proposed posterior distribution-based and NAS-based embedding schemes on the CAVE dataset ($\alpha=4$).} \vspace{-0.2cm}
\centering
\label{tab:nasvspde}
\begin{tabu}{c|ccc}
\tabucline[1pt]{*}
Methods &MPSNR$\uparrow$ &MSSIM$\uparrow$ &SAM$\downarrow$ \\ \hline
NAS-based &41.085 &0.9668 &3.490 \\
PDE-Net &41.236 &0.9672 &3.455 \\
\tabucline[1pt]{*}
\end{tabu}
\end{table}
\subsubsection{Illustration of the epistemic uncertainty}
As shown in Figs. \ref{fig:uncertainty} and \ref{fig:relationship_u_e}, as expected, high uncertainty always occurs in regions with highly volatile textures and large reconstruction errors.
Therefore, such epistemic uncertainty maps can help identify the regions that are hard to handle, so that additional efforts or more advanced super-resolution techniques can be applied to improve them. Moreover, the predicted uncertainty may also provide confidence estimates for network outputs in other HS image-based high-level applications, such as HS image classification (assigning pixel-wise object categories to HS images) \cite{mou2017deep,hong2020graph} and object detection/tracking \cite{liang2018material,zhang2018salient,uzkent2017aerial,tochon2017object}.
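A hedged sketch of how such per-pixel epistemic uncertainty maps can be derived from MC sampling; here `forward_pass` is a hypothetical stand-in for one stochastic architecture sample of the network, not the actual PDE-Net code:

```python
import numpy as np

def mc_reconstruct(lr_image, forward_pass, n_samples=5):
    """Run N stochastic forward passes; return their mean as the final
    reconstruction and the per-pixel std as the uncertainty map."""
    samples = np.stack([forward_pass(lr_image) for _ in range(n_samples)])
    mean_hr = samples.mean(axis=0)       # final super-resolved estimate
    uncertainty = samples.std(axis=0)    # high std = low model confidence
    return mean_hr, uncertainty
```

Regions where the sampled reconstructions disagree receive a large standard deviation, matching the correlation between uncertainty and reconstruction error observed in Fig. \ref{fig:relationship_u_e}.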
\subsubsection{MC sampling} We validated how the number of MC samplings affects the performance of our PDE-Net.
Specifically, we calculated the mean value and standard deviation of the MPSNRs/SAMs obtained via multiple MC samplings.
As shown in Fig. \ref{fig:sampling}, the PDE-Net consistently outperforms the Template-Net over all three datasets. As the number of MC samplings grows, the average of the samples gradually approaches the expectation of the distribution; thus, the performance of PDE-Net gradually rises and finally stabilizes.
\subsubsection{Visualization of the learned posterior distribution}
To have an intuitive understanding of our HS embedding architecture adaptively learned from the probabilistic perspective, we visualized the learned network-level and layer-level distributions in Fig. \ref{fig:probability_distribution}, where we can observe that the layer-level distribution is generally more complex than the network-level distribution, which is credited to the need of spatial-spectral diversities of local feature embedding.
\subsubsection{Posterior Distribution-based vs. Network Architecture Search (NAS)-based embedding schemes}
NAS-based schemes learn the network topology via maximum a posteriori estimation \cite{Liu2019DARTS}, resulting in a deterministic network architecture.
Although such an optimization scheme may produce the most probable model in the whole feasible set, compared with the proposed posterior distribution-based embedding it discards a great number of plausible architectures, which may also fit the training samples well and contribute to performance improvement.
To quantitatively compare the two embedding schemes, we constructed an NAS-based counterpart: with the same training data as ours and well-tuned hyperparameters, we trained our framework with the NAS strategy to optimize the topology of the set of feasible HS embedding events $\mathcal{G}$. As shown in Table \ref{tab:nasvspde}, our PDE-Net surpasses the NAS-based method in terms of all three metrics, demonstrating the advantage of our posterior distribution-based embedding scheme.
\section{Conclusion}
\label{sec:con}
We have proposed PDE-Net, a novel end-to-end learning-based framework for HS image super-resolution.
We built PDE-Net on the basis of the intrinsic degradation relationship between LR and HR-HS images, thus making it physically-interpretable and compact.
More importantly,
we formulated HS embedding, a core module contained in the PDE-Net, from the probabilistic perspective to extract the high-dimensional spatial-spectral information efficiently and effectively.
By conducting extensive experiments on three common datasets, we demonstrated the significant superiority of our PDE-Net over state-of-the-art methods both quantitatively and qualitatively. Besides, we provided comprehensive ablation studies to have a better understanding of the proposed PDE-Net.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
The nuclear symmetry energy plays an important role in the properties of nuclei and neutron stars~\cite{Latti2001,Latti2004,Stein2005,Anna2006,Yakov2004}. To a good approximation, it can be written as
\begin{equation}
E_{sym}=S(\rho)\delta^2,
\end{equation}
where $\delta=(\rho_n-\rho_p)/(\rho_n+\rho_p)$ is the isospin asymmetry, $\rho_{n}$ and $\rho_{p}$ are the neutron and proton densities, and $S(\rho)$ describes the density dependence of the symmetry energy. Theoretical predictions for $S(\rho)$ from microscopic nucleon-nucleon interactions show large uncertainties, especially in the region of suprasaturation density \cite{Brown91,BALi08}. Constraining the density dependence of the symmetry energy has become one of the main goals in nuclear physics and has stimulated many theoretical and experimental studies \cite{BALi08,Danie02,Fuch06,Garg04,HSXu00,Tsang01,Shett04,Tsang04,LWCh04,qfli05,qfli06,TXLiu07,BALi05,Fami06,BALi97,BALi06,BALi00,BALi04,Yong06,Tsang09,Gior10,Napo10,zhang05,zhang08}. Heavy Ion Collisions (HIC) with asymmetric nuclei provide a unique opportunity for laboratory studies of the density dependence of the symmetry energy, because a large range of densities can be momentarily reached during HICs. In theoretical studies with transport models, isospin ratio observables constructed from the isospin contents of emitted nucleons or fragments, such as Y(n)/Y(p) and DR(n/p) for emitted nucleons\cite{Fami06,BALi97,zhang08}, the isospin transport ratio $R_i$ constructed from the isospin asymmetry of projectile residues (or the emitting source)\cite{Tsang04,LWCh04,Tsang09,zhang12}, and $R^{mid}_{yield}$, constructed from the LCP yields in the mid-rapidity and projectile regions\cite{Kohley}, have been proven to be primarily sensitive to the density dependence of the symmetry energy. By comparing theoretical predictions to experimental data, the sought-after constraints can be obtained.
One frequently utilized transport model for describing heavy ion collisions is the Boltzmann-Uehling-Uhlenbeck (BUU) equation, whose solution approximates the Wigner transform of the one-body density matrix\cite{Bertsch88}.
\begin{eqnarray}
\frac{\partial f}{\partial t}+v \cdot \nabla_{\mathbf{r}}f-\nabla_{\mathbf{r}}U \cdot \nabla_{\mathbf{p}}f && =
-\frac{1}{(2\pi)^6}\int d^3p_2d^3p_{2'}d\Omega\frac{d\sigma}{d\Omega}v_{12}\nonumber\\
&&\times \{ [ff_2(1-f_{1'})(1-f_{2'})]-[f_{1'}f_{2'}(1-f)(1-f_{2})]\nonumber\\
&&\times (2\pi)^3\delta^{3}(\mathbf{p}+\mathbf{p}_2-\mathbf{p}_{1'}-\mathbf{p}_{2'})\} \label{buueq}
\end{eqnarray}
The l.h.s. of this equation is the total time derivative of $f$ in the presence of a potential $U$. Usually a Skyrme parametrization of the real part of the G-matrix or a Skyrme-like energy density functional is employed as the nucleonic potential, which describes the influence of the different isospin-asymmetric nuclear equations of state (asy-EOS). Stochastic extensions of these mean-field based approaches have been introduced (see \cite{Chomaz04} and references therein). For instance, in the so-called Stochastic Mean Field (SMF) model \cite{Baran02}, fluctuations are injected in coordinate space by agitating the spatial density profile. The r.h.s. of Eq. (\ref{buueq}) is a Boltzmann collision integral, which describes the influence of binary hard-core collisions and is realized via the test particles.
Another frequently utilized approach is the Quantum Molecular Dynamics (QMD) model, which represents individual nucleons as Gaussian ``wave packets'' whose mean values move according to the Ehrenfest theorem, i.e., Hamilton's equations\cite{Aiche87}.
\begin{eqnarray}
\dot{\mathbf{r}}_i=\frac{\mathbf{p}_i}{m}+\nabla_{\mathbf{p}_i}\sum_{j}\langle V_{ij} \rangle=\nabla_{\mathbf{p}_i}\langle H \rangle,\\
\dot{\mathbf{p}}_i=-\nabla_{\mathbf{r}_i}\sum_{j}\langle V_{ij} \rangle=-\nabla_{\mathbf{r}_i}\langle H \rangle.
\end{eqnarray}
The expectation value of the total Hamiltonian, $\langle H\rangle$, is obtained from the real part of the G-matrix or from the Skyrme energy density functional, and it describes the influence of the different asy-EOS. The collision part in QMD models is handled in the same way as in BUU-type models, but for nucleons rather than for test particles.
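As an illustration of how the centroid equations of motion above are typically propagated numerically, here is a minimal velocity-Verlet sketch; the force routine `grad_V` is a generic placeholder, not the nuclear interaction implemented in ImQMD05:

```python
import numpy as np

def propagate(r, p, m, grad_V, dt, steps):
    """Velocity-Verlet integration of Hamilton's equations
    dr/dt = p/m, dp/dt = -grad V(r) for the wave-packet centroids."""
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(r)   # half kick
        r = r + dt * p / m             # drift
        p = p - 0.5 * dt * grad_V(r)   # half kick
    return r, p
```

For a harmonic test potential this symplectic scheme conserves the total energy to $O(dt^2)$ over long times, the standard requirement for stable QMD propagation.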
In this work, we simulate nuclear collisions with the code ImQMD05 developed at the China Institute of Atomic Energy (CIAE); details of this code are described in Refs. \cite{zhang05,zhang08,zhang06,zhang07}. We study several isospin ratio observables, such as DR(n/p), the isospin transport ratio $R_i$, and the ratio $R^{mid}_{yield}$ of LCP yields between the mid-rapidity and projectile regions, and their relation to the fragmentation mechanism. For brevity, we limit our discussion here to the parameterization of the symmetry energy used in our calculations, which is of the form
\begin{equation}
S(\rho)=\frac{1}{3}\frac{\hbar^2}{2m}\rho^{2/3}_{0}\left(\frac{3\pi^2}{2}\frac{\rho}{\rho_{0}}\right)^{2/3}+\frac{C_{s}}{2}\left(\frac{\rho}{\rho_{0}}\right)^{\gamma_{i}},
\label{srho}
\end{equation}
where $m$ is the nucleon mass and the symmetry coefficient is $C_s=35.19$ MeV. With this particular parameterization, the symmetry energy at subsaturation densities increases with decreasing $\gamma_i$, while the opposite is true at supranormal densities. In general, the EoS is labeled as stiff-asy for $\gamma_i>1$ and as soft-asy for $\gamma_i<1$. Finally, we also briefly discuss recent comparisons between ImQMD05 and SMF calculations, to further understand the theoretical issues in describing the reaction mechanism of heavy ion collisions at low and intermediate energies.
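The parameterization of Eq. (\ref{srho}) is easy to evaluate directly; the short sketch below (assuming the conventional values $\hbar^2/2m \approx 20.72$ MeV fm$^2$ and $\rho_0 = 0.16$ fm$^{-3}$, which are not fixed by the text) reproduces the soft/stiff ordering described above:

```python
import numpy as np

HBAR2_2M = 20.72   # MeV fm^2, hbar^2/(2 m); assumed conventional value
RHO0 = 0.16        # fm^-3, saturation density; assumed conventional value
CS = 35.19         # MeV, symmetry coefficient from the text

def S(rho, gamma_i):
    """Symmetry energy S(rho) of Eq. (srho): Fermi-gas kinetic term
    plus a power-law potential term."""
    kinetic = (HBAR2_2M / 3.0) * RHO0 ** (2.0 / 3.0) \
              * (1.5 * np.pi ** 2 * rho / RHO0) ** (2.0 / 3.0)
    potential = 0.5 * CS * (rho / RHO0) ** gamma_i
    return kinetic + potential
```

At $\rho=0.5\rho_0$ the soft choice $\gamma_i=0.5$ yields a larger $S$ than the stiff $\gamma_i=2.0$, the ordering reverses above saturation, and all curves coincide at $\rho_0$, exactly as stated in the text.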
\section{Isospin ratios from nucleons to fragments}
Isospin ratios constructed from the isospin contents of fragments, such as $R(n/p)=Y(n)/Y(p)$ (the so-called n/p ratio), DR(n/p) from neutron-rich and neutron-poor systems, the isospin transport ratio $R_i$, and $R^{mid}_{yield}=2 Y_{LCP}$($y^0<0.5$)/$Y_{LCP}$(0.5$<y^0<$1.5), are sensitive to the density dependence of the symmetry energy. In this section, we examine their sensitivities to the density dependence of the symmetry energy and extract information on the symmetry energy from them.
\subsection{n/p ratio and DR(n/p) ratio}
The neutron to proton ratio $R_{n/p}=Y(n)/Y(p)$ of pre-equilibrium emitted neutron over proton spectra was considered a sensitive observable of the density dependence of the symmetry energy\cite{BALi97}, because it has a straightforward link to the symmetry energy. In order to reduce the sensitivity to uncertainties in the neutron detection efficiencies and to relative uncertainties in the energy calibrations of neutrons and protons, the double ratio
\begin{equation}
DR(n/p)=R_{n/p}(A)/R_{n/p}(B)
\end{equation}
was measured by Famiano et al. and compared with transport model predictions\cite{Fami06,BALi97}.
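The practical appeal of the double ratio is that multiplicative systematic factors, such as a neutron detection efficiency common to both systems, cancel. A one-line sketch with made-up yields:

```python
def double_ratio(yn_a, yp_a, yn_b, yp_b):
    """DR(n/p) = R_{n/p}(A) / R_{n/p}(B) from neutron/proton yields.
    A detection-efficiency factor applied to the neutron yields of
    both systems cancels, leaving DR(n/p) unchanged."""
    return (yn_a / yp_a) / (yn_b / yp_b)
```

This cancellation is precisely why DR(n/p) is preferred over the single ratios when comparing with data.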
We performed ImQMD05 calculations of collisions at an impact parameter of $b=2$ fm and an incident energy of 50 MeV per nucleon for two systems, $A=^{124}Sn+^{124}Sn$ and $B=^{112}Sn+^{112}Sn$, to study the $DR(n/p)$ ratio for emitted nucleons\cite{zhang08}. The shaded regions in the left panel of Fig.~\ref{fig-dr} show the range, determined by uncertainties in the simulations, of predicted double ratios $DR(n/p)=R_{n/p}(A)/R_{n/p}(B)$ of the nucleons emitted between $70^{\circ}$ and $110^{\circ}$ in the center-of-mass frame, as a function of the center-of-mass nucleon energy, for $\gamma_{i}=0.5$ and $2.0$. The double ratios $DR(n/p)$ are higher for the EOS with the weaker symmetry-energy density dependence ($\gamma_{i}=0.5$) than for $\gamma_{i}=2.0$, because in intermediate-energy HICs the nucleons are mainly emitted from the lower-density region. Compared with the data on $DR(n/p)$ for emitted nucleons (solid stars), the general trend of the $DR(n/p)$ data is qualitatively reproduced, and the data seem to be closer to the calculation employing the EOS with $\gamma_{i}=0.5$.
The right panel of Fig.~\ref{fig-dr} shows the coalescence-invariant double ratio, constructed by including all neutrons and protons emitted at a given velocity, regardless of whether they are emitted free or within a cluster. The data are shown as open stars and the calculation results as a shaded region. Here, the measurement and simulation results illustrate that fragments with $Z\ge2$ contribute mainly to the low-energy spectra and do not affect the high-energy $DR(n/p)$ data very much.
\begin{figure}[htbp]
\centering
\includegraphics[angle=270,width=0.40\textwidth]{fig-dr.eps}
\setlength{\abovecaptionskip}{40pt}
\caption{\label{fig-dr}(Color online) (Left) DR(n/p) ratios for emitted free nucleons and (Right) coalescence-invariant $DR(n/p)$ from the ImQMD simulations, plotted as shaded regions.}
\setlength{\belowcaptionskip}{10pt}
\end{figure}
In order to constrain the range of $\gamma_{i}$ from the published $DR(n/p)$ data, a series of calculations for the two systems, $A=^{124}Sn+^{124}Sn$ and $B=^{112}Sn+^{112}Sn$, was performed with $\gamma_{i}=0.35, 0.5, 0.75, 1.0$ and $2.0$\cite{Tsang09}. Since the emitted nucleons mainly come from subnormal densities at these energies, the n/p ratios of emitted nucleons are associated with the values of the symmetry energy at subnormal density. Therefore, the $DR(n/p)$ ratio should increase with decreasing $\gamma_{i}$. However, in the limit of very small $\gamma_{i}\ll0.35$, the finite system completely disintegrates, and the $DR(n/p)$ ratio decreases and approaches the limit of the reaction system, $(N/Z)_{124}/(N/Z)_{112}=1.2$. As a consequence of these two competing effects, the double-ratio values peak around $\gamma_{i}=0.7$. Despite the large experimental uncertainties of the higher-energy data, these comparisons definitely rule out very soft ($\gamma_{i}=0.35$) and very stiff ($\gamma_{i}=2.0$) density dependences of the symmetry energy. A $\chi^2$ analysis suggests that, within a $2\sigma$ uncertainty, $\gamma_{i}$ falls in the range $0.4\le\gamma_{i}\le1.05$ for $C_{s}=35.2$ MeV.
\subsection{Isospin transport ratio}
When the projectile and target nuclei come into contact, there can be exchange of nucleons between them. If the neutron to proton ratios of the projectile and target differ greatly, the net nucleon flux can cause a diffusion of the asymmetry $\delta$ reducing the difference between the asymmetries of two nuclei. This isospin diffusion process, which depends on the magnitude of the symmetry energy, affects the isospin asymmetry of the projectile and target residues in peripheral HICs. The isospin transport ratio $R_i$ has been introduced \cite{Tsang04} to quantify the isospin diffusion effects,
\begin{equation}
R_i=\frac{2X-X_{aa}-X_{bb}}{X_{aa}-X_{bb}}, \label{Ridef}
\end{equation}
where $X$ is an isospin observable and the subscripts $a$ and $b$ represent the neutron-rich and neutron-poor nuclei. In this work, we use $a$ and $b$ to denote the projectile (first index) and target (second index) combination, with $\mathrm{a=^{124}Sn}$ and $\mathrm{b=^{112}Sn}$. We obtain the value of $R_{i}$ by comparing three reaction systems, $\mathrm{a+a}$, $\mathrm{b+b}$ and $\mathrm{a+b}$ (or $\mathrm{b+a}$). Constructing the transport ratio minimizes the influence of effects other than isospin diffusion on the fragment yields, such as pre-equilibrium emission and secondary decay, by rescaling the observable $X$ for the asymmetric a+b system by its values for the neutron-rich and neutron-deficient symmetric systems, which do not experience isospin diffusion. Based on Eq. (\ref{Ridef}), one expects $R_i=\pm1$ in the absence of isospin diffusion and $R_i\sim0$ if isospin equilibrium is achieved. Eq. (\ref{Ridef}) also dictates that two different observables $X$ will give the same result if they are linearly related.
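Two useful properties of Eq. (\ref{Ridef}), the limits $R_i=\pm1$ without diffusion and the invariance under linear rescaling of $X$, can be checked with a few lines (the numbers are purely illustrative):

```python
def isospin_transport_ratio(x_ab, x_aa, x_bb):
    """R_i = (2X - X_aa - X_bb) / (X_aa - X_bb), Eq. (Ridef)."""
    return (2.0 * x_ab - x_aa - x_bb) / (x_aa - x_bb)
```

If $X$ for the mixed system equals its value for the symmetric neutron-rich system (no diffusion), $R_i = 1$; at the midpoint (full isospin mixing), $R_i = 0$; and replacing $X$ by $cX+d$ leaves $R_i$ unchanged, which is why linearly related observables give identical transport ratios.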
In one experiment, $\mathrm{X}$ was taken as the isoscaling parameter, $\alpha$, obtained from the yield of the light particles near the projectile rapidity\cite{Tsang01}, to measure the isospin diffusion ability in heavy ion collisions. In transport models \cite{Tsang04,LWCh04}, the isospin asymmetry $\delta$ of the projectile residues (emitting source) has been used to compute $R_i(\delta)$ because it is linearly related to the isoscaling parameters $\alpha$\cite{TXLiu07,Tsang01,Ono03}.
We analyze the amount of isospin diffusion with ImQMD05 by constructing a tracer from the isospin asymmetry of all emitted nucleons (N) and fragments (frag), including the heavy residue if it exists, with the velocity cut $v^{N,frag}_z>0.5v^{c.m.}_{beam}$ (nearly identical results are obtained with the higher velocity cut $v^{N,frag}_z> 0.7v^{c.m.}_{beam}$). This represents the full projectile-like emitting source and should be comparable to what has been measured in experiments. Fig. 2 shows the isospin transport ratios $R_i(X=\delta_{N,frag})$ (upright triangles) as a function of the impact parameter for a soft symmetry case ($\gamma_i=0.5$, open symbols) and a stiff symmetry case ($\gamma_i=2.0$, closed symbols). The $R_i$ values obtained in the soft case are smaller than those obtained in the stiff case. This is consistent with the expectation that a higher symmetry energy at subnormal density leads to larger isospin diffusion effects (smaller $R_i$ values).
\begin{figure}[htbp]
\centering
\includegraphics[angle=90,width=0.40\textwidth]{fig7-final.eps}
\setlength{\abovecaptionskip}{20pt}
\caption{(Color online) Isospin transport ratios as a function of impact parameter with two tracers for a soft symmetry case ($\gamma_i=0.5$, open symbols) and a stiff symmetry case ($\gamma_i=2.0$, closed symbols). Upright triangle symbols are for the tracer defined by the isospin asymmetry of all fragments and unbound nucleons with velocity cut ($v^{N,frag}_z>0.5v^{c.m.}_{beam}$), $X=\delta_{N,frag}$. Circles are for the tracer defined by the heaviest fragment with $Z_{max} > 20$ in projectile region, $X=\delta_{Z_{max} > 20}$.}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
$R_i$ depends weakly on impact parameter over a range extending from central ($b=3fm$) to mid-peripheral ($b=8fm$) collisions. Interestingly, neither isospin equilibrium nor global thermal equilibrium is reached even for central collisions. Our results show that neither is the effective interaction sufficiently strong nor are the collisions sufficiently frequent (most of them are Pauli suppressed) to mix the projectile and target nucleons completely. These two effects prevent the combined system from attaining isospin equilibrium even in central collisions. As the impact parameter increases beyond $b=5fm$, the overlap region, and thus the number of nucleons transferred between projectile and target, decreases, causing the $R_i$ values to increase.
In peripheral collisions, most often a large residue remains. If it decouples from the full emitting source before equilibration, it may experience a different amount of diffusion than the full emitting source examined with $X=\delta_{N,frag}$. To examine this, we constructed a tracer from the isospin asymmetry of the heaviest fragment with charge $Z_{max}>20$ in the projectile region. This tracer is mainly relevant for peripheral collisions, since central collisions are dominated by multifragmentation and very few large projectile fragments survive. The dependence of $R_i(X=\delta_{Z_{max}>20})$ on impact parameter for $b\geq5fm$ is shown as open and closed circles in Fig. 2. The isospin transport ratios constructed from the different isospin tracers take different values, especially in the case of $\gamma_i=0.5$. Stronger isospin equilibration (smaller $R_i$ values) is observed for $R_i(X=\delta_{N,frag})$, constructed from nucleons and fragments, than for $R_i(X=\delta_{Z_{max}>20})$, constructed from the heaviest fragments with $Z_{max} > 20$. Since isospin diffusion mainly occurs through the low-density neck region, and the system breaks up before isospin equilibrium is reached, the asymmetries of the projectile and target residues do not equilibrate, and larger $R_i(X=\delta_{Z_{max}>20})$ values result. In contrast, there is more mixing of nucleons from the target and projectile in the neck region due to isospin diffusion. Consequently, rupture of the neutron-rich neck is predicted to result in the production of neutron-rich fragments at mid-rapidity.
Since fragments are formed at all rapidities, we can examine the rapidity dependence of $R_i$ to obtain more information about the reaction dynamics. Fig. 3 shows $R_i$ as a function of the scaled rapidity $y/y_{beam}$. The symbols in the leftmost panel are experimental data obtained in Ref. \cite{TXLiu07} for three centrality gates. This transport ratio was generated using the isospin tracer $X=\ln(Y(^7Li)/Y(^7Be))$, where $Y(^7Li)/Y(^7Be)$ is the yield ratio of the mirror nuclei $^7Li$ and $^7Be$ \cite{TXLiu07}. As expected, the values of $R_i$ obtained from peripheral collisions (solid stars) are larger than those obtained in central collisions (open stars). For comparison, the ImQMD05 calculations of $R_i(X=\delta_{N,frag})$ are plotted as lines in the middle and right panels for a range of impact parameters. The middle panel contains the results for the soft symmetry potential ($\gamma_i=0.5$), while the right panel shows the results for the stiff symmetry potential ($\gamma_i=2.0$). The impact parameter trends and the magnitude of the data are more similar to the calculations with the soft symmetry potential ($\gamma_i=0.5$) for peripheral collisions.
We have performed a $\chi^2$ analysis for both observables, $R_i$ and $R_i(y)$, to constrain the density dependence of the symmetry energy. Using the same 2$\sigma$ criterion, the analysis brackets the region $0.45\le\gamma_i\le0.95$, consistent with the previous analysis of DR(n/p). However, the experimental trend of $R_i$ gated on the most central collisions (open stars) is not reproduced by the calculations. The experimental data indicate more equilibration for central collisions near mid-rapidity, while the transport model indicates more transparency. The apparent equilibration in the E/A = 50 MeV data may be the result of the impact parameter determination from charged-particle multiplicity, wherein the most central collisions are assumed to be the ones with the highest charged-particle multiplicity. For the most central events, a gate on the highest multiplicity may select events in which more nucleon-nucleon collisions occur, rather than strictly selecting the most central impact parameters.
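The 2$\sigma$ bracketing can be sketched as follows, assuming the standard one-parameter criterion $\Delta\chi^2 = 4$ for 2$\sigma$; the grid of $\gamma_i$ values and the $\chi^2$ values below are hypothetical, for illustration only:

```python
import numpy as np

def two_sigma_interval(gammas, chi2):
    """Bracket the parameter values lying within 2 sigma of the chi^2 minimum.
    For a single fitted parameter, 2 sigma corresponds to delta chi^2 = 4."""
    chi2 = np.asarray(chi2, dtype=float)
    accepted = chi2 <= chi2.min() + 4.0
    g = np.asarray(gammas, dtype=float)[accepted]
    return g.min(), g.max()

# Hypothetical chi^2 values on a grid of gamma_i stiffness parameters
gammas = [0.35, 0.5, 0.75, 1.0, 2.0]
chi2   = [9.0,  1.5, 2.0,  6.5, 30.0]
lo, hi = two_sigma_interval(gammas, chi2)
assert (lo, hi) == (0.5, 0.75)
```

In the actual analysis the $\chi^2$ values come from comparing the calculated $R_i$ and $R_i(y)$ with the data at each $\gamma_i$, and the resulting bracket is the quoted $0.45\le\gamma_i\le0.95$.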
\begin{figure}[htbp]
\centering
\includegraphics[angle=90,width=0.45\textwidth] {fig8_Ri_rap_b.ps}
\setlength{\abovecaptionskip}{20pt}
\caption{(Color online) (Left panel) Experimental $R_i$ as a function of rapidity for three centrality gates [16]. (Middle panel) The calculated results of $R_i(X=\delta_{N,frag})$ as a function of rapidity for $b= 2, 4, 6, 8 fm$ for $\gamma_i=0.5$ and (Right panel) $\gamma_i=2.0$.}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
\subsection{ $R^{mid}_{yield}$ ratios for light charged particles}
The yield ratio of LCPs between the mid-rapidity and projectile regions, $R^{mid}_{yield}$, is defined as
\begin{equation}
R^{mid}_{yield}=\frac{2\cdot Yield(0.0\leq Y_r \leq0.5)}{Yield(0.5\leq Y_r \leq1.5)},
\end{equation}
where $Y_r=\frac{Y_{c.m.}}{Y^{c.m.}_{proj}}$ is the reduced rapidity.
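A minimal sketch of evaluating this ratio from a sample of reduced rapidities of one LCP species (the rapidity values below are hypothetical):

```python
import numpy as np

def r_mid_yield(y_reduced, weights=None):
    """Evaluate R^mid_yield = 2 * Y(0 <= Y_r <= 0.5) / Y(0.5 <= Y_r <= 1.5)
    from a sample of reduced rapidities Y_r of one LCP species.
    The closed intervals follow the definition in the text."""
    y = np.asarray(y_reduced, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    mid  = w[(y >= 0.0) & (y <= 0.5)].sum()   # mid-rapidity yield
    proj = w[(y >= 0.5) & (y <= 1.5)].sum()   # projectile-region yield
    return 2.0 * mid / proj

# Hypothetical reduced rapidities of, e.g., 6He fragments
assert r_mid_yield([0.1, 0.2, 0.4, 0.6, 0.9, 1.1, 1.4]) == 1.5
```

The factor of 2 compensates for the mid-rapidity interval being half as wide as the projectile interval, so $R^{mid}_{yield}=1$ for a flat rapidity distribution.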
This ratio reflects the isospin migration ability and has been measured by Kohley \textit{et al.} for $\mathrm{^{70}Zn+^{70}Zn}$, $\mathrm{^{64}Zn+^{64}Zn}$ and $\mathrm{^{64}Ni+^{64}Ni}$ at a beam energy of 35 MeV/nucleon for mid-peripheral collisions \cite{Kohley}. The data show a clear preference for emission around the mid-rapidity region for the more neutron-rich LCPs, resulting from the isospin migration mechanism through the neck region between the projectile and target \cite{Baran04, zhang05}.
A theoretical study with the SMF model \cite{Rizzo08} demonstrated that $R^{mid}_{yield}$ is sensitive to the density dependence of the symmetry energy, and the experimental trends were reproduced by the SMF model. However, the largest discrepancies occur in the reduced rapidity distributions of the proton and $^3He$ yields, and also in their values of $R^{mid}_{yield}$, as mentioned in Ref. \cite{Kohley}. These discrepancies were thought to be related to the statistical decay of the QP at later stages of the reaction \cite{Kohley}. From the point of view of reaction dynamics, different fragmentation mechanisms in transport model simulations also lead to different rapidity distributions for LCPs, in addition to the effects of secondary decay. Thus, it is instructive to study the rapidity distributions of LCPs with ImQMD05.
In Fig.\ref{ref-Fig.3}(a), we present the multiplicity distribution of fragments with $Z\geq 3$ for $\mathrm{^{70}Zn+^{70}Zn}$ at $E_{beam}$=35 MeV/nucleon, impact parameter b=4fm and $\gamma_i=2.0$. We find that half of the events belong to the multi-fragmentation process, defined by the multiplicity of fragments with charge $Z\geq 3$, i.e., $M(Z\geq 3)>3$. The rest are binary ($M(Z\geq 3)=2$) and ternary ($M(Z\geq 3)=3$) fragmentation events. This suggests that binary and ternary fragmentation coexist with multi-fragmentation around 35 MeV/nucleon. In Fig.\ref{ref-Fig.3}(b) and (c), we plot the reduced rapidity distributions of the $^3He$ and $^6He$ yields obtained for the three kinds of fragmentation process: binary (square symbols), ternary (circle symbols) and multi-fragmentation (triangle symbols), selected by $M(Z\geq 3)=2$, $3$ and $>3$, respectively. The yields of $^3He$ and $^6He$ in Fig.\ref{ref-Fig.3} are normalized per event. It is clear that the binary events produce more $^3He$ and $^6He$ at mid-rapidity than the multi-fragmentation events. For the $\gamma_i=2.0$ case, the yield of $^3He$ at $Y_r=0$ obtained from binary fragmentation events is 35\% larger than that from multi-fragmentation events. For a neutron-rich LCP such as $^6He$, the yield at $Y_r=0$ obtained from binary fragmentation events is 70\% larger than that from multi-fragmentation events, due to isospin migration.
\begin{figure}[htbp]
\centering
\includegraphics[angle=270,scale=0.45]{pm-the6.eps}
\setlength{\abovecaptionskip}{45pt}
\caption{\label{ref-Fig.3}(Color online) (a) The multiplicity distribution for fragments with $Z\geq 3$ ($M(Z\geq 3)$). (b) is the reduced rapidity ($Y_r$) distribution for the yield of $^3He$ with binary (square symbols), ternary (circle symbols) and multi-fragmentation (triangle symbols) process. (c) is for $^6He$. All of those results are for $\mathrm{^{70}Zn+^{70}Zn}$ at E=35 MeV/u for b=4fm and $\gamma_i=2.0$.}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
Fig.\ref{ref-Fig.4} shows the calculated results for the rapidity distribution of light charged particles \textit{p}, \textit{d}, \textit{t}, $^{3}He$, $^{4}He$ and $^{6}He$ for $^{64}Ni+^{64}Ni$ at b=4fm with 100,000 events.
The distributions are normalized to the yield at $Y_r=0$ for comparison with the data in Ref. \cite{Kohley}, in order to probe the fragmentation mechanism, which mainly determines the shape of the LCP rapidity distributions. The open circles are for $\gamma_i=0.5$ and the solid symbols for $\gamma_i=2.0$. The data are taken from Ref. \cite{Kohley} and plotted as stars. Both our calculations and the data show that the width of the distribution decreases with increasing LCP mass. Comparing the rapidity distributions of $^3H$ and $^3He$, the width for $^3H$ is smaller than that for $^3He$ due to isospin migration. Comparing the simulated results to the data, the ImQMD05 calculations with the stiffer symmetry energy reproduce the data well in the forward rapidity region ($Y_r>0$) for all of \textit{p}, \textit{d}, \textit{t}, $^3He$, $^4He$ and $^6He$. In the backward rapidity region ($Y_r<0$), there are obvious differences between the ImQMD05 results and the data because the detection efficiency for LCPs at backward angles \cite{Kohley} is not included in these ImQMD05 calculations.
\begin{figure*}[htbp]
\centering
\includegraphics[angle=270,scale=0.4]{Fig.4.eps}
\setlength{\abovecaptionskip}{35pt}
\caption{\label{ref-Fig.4}(Color online) Reduced rapidity ($Y_r$) distributions for \emph{p}, \emph{d}, \emph{t}, $^{3}He$, $^{4}He$ and $^{6}He$ fragments from the 35 MeV/nucleon $\mathrm{^{64}Ni+^{64}Ni}$ reaction at impact parameter b=4fm. The experimental data are shown as stars. The ImQMD05 calculations for $\gamma_i=0.5$ are shown as open circles and those for $\gamma_i=2.0$ as solid circles. Each distribution is normalized to the yield at $Y_r=0$.}
\setlength{\belowcaptionskip}{0pt}
\end{figure*}
In order to constrain the symmetry energy using the rapidity distributions of LCPs, we further calculate $R_{yield}^{mid}$ for a series of stiffness parameters, $\gamma_i=0.5, 0.75, 1.0, 2.0$. In Fig.\ref{ref-Fig.5}, we present the results for $R^{mid}_{yield}$ as a function of ZA (charge times mass) of the emitted particles for the three reaction systems $\mathrm{^{64}Zn+^{64}Zn}$, $\mathrm{^{64}Ni+^{64}Ni}$ and $\mathrm{^{70}Zn+^{70}Zn}$ at b=4fm. The open symbols are the results for $\gamma_i=0.5, 0.75, 1.0$ and $2.0$; the solid stars are the data from \cite{Kohley}. Since isospin migration occurs in the neutron-rich neck region, $R^{mid}_{yield}$ increases with the isospin asymmetry of the LCPs for a given element: the more neutron-rich the LCP, the larger the $R^{mid}_{yield}$. Furthermore, the calculated values of $R^{mid}_{yield}$ for neutron-rich isotopes are sensitive to the density dependence of the symmetry energy. The calculations with the stiffer symmetry energy predict larger values of $R^{mid}_{yield}$ due to stronger isospin migration effects, the same conclusion as obtained with the SMF model \cite{Kohley}. As shown in Fig.\ref{ref-Fig.5}, the ImQMD05 calculations with a stiffer symmetry energy ($\gamma_i\ge0.75$) reasonably reproduce the $R^{mid}_{yield}$ data as a function of ZA for $\mathrm{^{64}Zn+^{64}Zn}$. However, our calculations underestimate the $R^{mid}_{yield}$ values of neutron-rich light charged particles, such as $^6He$, for the neutron-rich reaction systems $\mathrm{^{64}Ni+^{64}Ni}$ and $\mathrm{^{70}Zn+^{70}Zn}$. This could arise from the lack of fine-structure effects of neutron-rich nuclei (such as the neutron skin and the stability of light neutron-rich nuclei), and from impact parameter smearing effects in the transport model simulations.
Even though our calculations reproduce the $R^{mid}_{yield}$ data for $^{64}Zn+^{64}Zn$, definitive constraints on the symmetry energy from the $R^{mid}_{yield}$ data cannot be obtained until the problems in the theoretical predictions of $R^{mid}_{yield}$ for the neutron-rich reaction systems are resolved.
\begin{figure}[htbp]
\centering
\includegraphics[angle=270,scale=0.5]{Fig.5.EPS}
\setlength{\abovecaptionskip}{45pt}
\caption{\label{ref-Fig.5}(Color online) $R_{yield}^{mid}$ values as a function of charge times mass (ZA) for \textit{p} (ZA=1), \textit{d} (ZA=2), \textit{t} (ZA=3), $^3He$ (ZA=6), $^4He$ (ZA=8) and $^6He$ (ZA=12). The open symbols are the results obtained with ImQMD05 for $\gamma_i=0.5, 0.75, 1.0$ and $2.0$. The solid stars are the data from \cite{Kohley}.}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
\subsection{Remarks on the comparison between ImQMD05 and SMF}
To date, several constraints on the symmetry energy have been obtained from isospin-sensitive observables, such as DR(n/p) ratios, isospin transport ratios $R_i$ and $R^{mid}_{yield}$ ratios, using different types of transport models, such as QMD-type and BUU-type models \cite{Tsang09,LWChen05,Rizzo08}. The symmetry energy constraints from the different approaches overlap, but they differ in detail. Thus, further understanding of the issues in QMD-type and BUU-type models is crucial for improving the theoretical constraints on the symmetry energy.
At the code level, both BUU and QMD models propagate particles classically under the influence of a mean-field potential, which is calculated self-consistently from the positions and momenta of the particles, and allow scattering by nucleon-nucleon collisions due to the residual interaction. The Pauli principle in both approaches is enforced by the application of Pauli blocking factors. These similarities in implementation have led to similar predictions for many collision observables \cite{Aich89}.
There are also significant differences between these approaches. In the BUU equations, each nucleon is represented by 200-1000 test particles that generate the mean field and undergo the collisions; in QMD, there is one test particle per nucleon. A-body correlations and cluster formation are not native to the original BUU approach, which is supposed to provide the Wigner transform of the one-body density matrix. On the other hand, many-body correlations and fluctuations can arise from the A-body dynamics of the QMD approach. Such A-body correlations are suppressed in the BUU approach, but correlations can arise in both approaches from the amplification of mean-field instabilities in the spinodal region \cite{Chomaz04}. Collision algorithms in the QMD approach modify the momenta of individual nucleons, while in the BUU approach only the momenta of test particles are modified. Depending on the details of the in-medium cross sections that are implemented, the blocking of collisions can also be more restrictive for QMD than for BUU, leading to fewer collisions and therefore greater transparency.
Fragments can be formed in QMD approaches due to the A-body correlations, and these correlations are mapped onto the asymptotic final fragments by a minimum spanning tree algorithm. Several different methods have been developed to allow BUU codes to calculate cluster production. In the Stochastic Mean Field (SMF) approach (a BUU-type model), the time evolution of the one-body phase-space distribution $f$ is governed by the nuclear mean field, two-body scattering, and a fluctuating (stochastic) term which causes the fragmentation \cite{Baran02,Baran04, Rizzo08,Colonna98}.
Since there are typically more than 100 test particles per nucleon, collision-induced fluctuations are smaller in BUU than in QMD, possibly suppressing the fragment formation rates.
As an example, in Fig.\ref{ref-zhangsmf} we present the average charge number of fragments with $Z\ge3$ as a function of rapidity, obtained with the ImQMD05 and SMF models for $^{124}Sn+^{124}Sn$ at $E_{beam}=50$ AMeV and b=6, 8fm. The solid lines are the results from ImQMD05, and the dashed lines are from SMF calculations \cite{zhmari}; squares are for b=6fm and circles for b=8fm. In the QMD simulations, the average charge number for $Z\ge3$ increases with rapidity up to $y/y^{c.m.}_{beam}\sim1$ in the forward region, and two peaks appear around the projectile and target rapidities, respectively.
In the SMF simulations, a peak appears at mid-rapidity in addition to the two peaks around the projectile and target rapidities. This clearly shows that the strict Pauli blocking in the QMD simulations leads to greater transparency than in the SMF-type simulations. The larger fluctuations in the QMD models lead to more emitted fragments and light particles than the SMF predictions.
As a result, the fragments are distributed over the whole rapidity region, and the heavier the fragment, the larger its velocity.
In SMF calculations, the fragmentation process is less effective due to
the reduced amplitude of fluctuations and many-body correlations.
This enhances the appearance of binary and ternary processes in semi-peripheral heavy ion reactions, reflecting the prominent role of the mean-field dynamics. As a consequence, the intermediate mass fragments tend to be distributed at mid-rapidity. This also causes the average charge number for $Z\ge3$ to peak around mid-rapidity and produces a narrower rapidity distribution of light clusters than in the QMD models.
\begin{figure}[htbp]
\centering
\includegraphics[angle=270,width=0.35\textwidth] {smf-imqmd.eps}
\setlength{\abovecaptionskip}{30pt}
\caption{\label{ref-zhangsmf}(Color online) The average charge number for $Z\ge3$ as a function of rapidity for $^{124}Sn+^{124}Sn$ at b=6,8fm with $\gamma_i=2.0$. The solid lines are the results from ImQMD05, and dashed lines are from SMF.}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
\section{Summary}
In summary, we have investigated the influence of the density dependence of the symmetry energy on several different isospin ratio observables, such as the DR(n/p) ratio, the isospin transport ratio $R_i$, its rapidity dependence $R_i(y)$, and the $R^{mid}_{yield}$ ratios (the yield ratios of LCPs between the mid-rapidity and projectile regions), with ImQMD05. The study shows that these isospin ratio observables are sensitive to the density dependence of the symmetry energy. This conclusion is similar to conclusions reached using BUU approaches over the range of symmetry energies studied here. By comparing the calculated results to data, the very soft ($\gamma_i=0.35$) and very stiff ($\gamma_i=2.0$) symmetry energies are ruled out.
Cluster formation is important for intermediate energy heavy ion collisions, and it modifies the spectral double ratios at $E_{c.m.}<40$ MeV. We also tested different tracers by constructing the corresponding isospin transport ratios for different symmetry energies. For weakly density-dependent symmetry energies (small $\gamma_i$), which are large at sub-saturation densities, the values of $R_i$ for the residue tracer $X=\delta_{Z_{max}>20}$ are larger than those extracted from the entire emitting source, i.e., $X=\delta_{N,frag}$. The difference between these two tracers can be examined experimentally as a new probe of the symmetry energy and the reaction mechanism.
By studying the reaction systems $\mathrm{^{64}Zn+^{64}Zn}$, $\mathrm{^{64}Ni+^{64}Ni}$ and $\mathrm{^{70}Zn+^{70}Zn}$ at a beam energy of 35 MeV per nucleon and b=4fm within the framework of ImQMD05, we find that half of the events belong to the multi-fragmentation mechanism, while the other half are binary and ternary fragmentation events. The binary and ternary events produce more light charged particles at mid-rapidity, and the multi-fragmentation events broaden the reduced rapidity distributions of the LCP yields. Both the data and our calculations indicate that the reaction systems show considerable transparency, with many fragments and light particles emitted. As a result, the data on the reduced rapidity distributions of the LCP yields and on $R_{yield}^{mid}$ as a function of ZA for $^{64}Ni+^{64}Ni$ are well reproduced by the ImQMD05 calculations. For the neutron-rich reaction systems $\mathrm{^{64}Ni+^{64}Ni}$ and $\mathrm{^{70}Zn+^{70}Zn}$, our calculations underestimate the $R^{mid}_{yield}$ values of neutron-rich light charged particles, such as $^6He$; this could be caused by the lack of fine-structure effects of light nuclei in the transport models and by impact parameter smearing effects.
\section{Acknowledgments}
This work has been supported by the Chinese National Science Foundation under Grants 11075215, 10979023, 10875031, 11005022,11005155, 10235030, and the national basic research program of China No. 2007CB209900. We wish to acknowledge the support of the National Science Foundation Grants No. PHY-0606007.
\section*{References}
\section{Introduction}
The notion of a transmutation operator relating two linear differential
operators was introduced in 1938 by J. Delsarte \cite{Delsarte} and nowadays
represents a widely used tool in the theory of linear differential equations
(see, e.g., \cite{BegehrGilbert}, \cite{Carroll}, \cite{LevitanInverse},
\cite{Marchenko}, \cite{Sitnik}, \cite{Trimeche}). Very often in the literature
the transmutation operators are called transformation operators. Here we
keep the original term introduced by Delsarte and Lions
\cite{DelsarteLions1956}. It is well known that under certain regularity
conditions the transmutation operator transmuting the operator $A=-\frac
{d^{2}}{dx^{2}}+q(x)$ into $B=-\frac{d^{2}}{dx^{2}}$ is a Volterra integral
operator with good properties. Its integral kernel can be obtained as a
solution of a certain Goursat problem for the Klein-Gordon equation with a
variable coefficient. There exist very few examples of the transmutation
kernels available in a closed form (see \cite{KrT2012}).
In the present work we obtain a representation for the kernels of the
transmutation operators for regular Sturm-Liouville operators (with complex
valued coefficients) in the form of a functional series with the exact
formulae for the terms of the series. The result is based on several new
observations. We use our recent result on the construction of the kernel of
the transmutation operator corresponding to a Darboux associated
Schr\"{o}dinger operator \cite{KrT2012} to find out that a bicomplex-valued
function whose one complex component is the transmutation kernel $K_{1}(x,t)$
(for a Schr\"{o}dinger operator $\frac{d^{2}}{dx^{2}}-q_{1}(x)$, $q_{1}\in
C[-b,b]$) and the other complex component is $K_{2}(x,t)$ (for a
Schr\"{o}dinger operator $\frac{d^{2}}{dx^{2}}-q_{2}(x)$, with $q_{2}$
obtained from $q_{1}$ by a Darboux transformation) is a solution of a certain
hyperbolic Vekua equation of a special form (for the theory of elliptic Vekua
equations see \cite{Vekua} as well as \cite{Berskniga} and \cite{APFT}).
In spite of a recent progress reported in \cite{KKT}, \cite{APFT}, \cite{KRT},
\cite{KT2010JMAA} the theory of hyperbolic Vekua equations is considerably
less developed than the theory of classical (elliptic) Vekua equations. For
example, as was shown in \cite{KRT} (see also \cite{APFT}) the construction of
so-called formal powers (solutions of the Vekua equation generalizing the
usual complex powers $(z-z_{0})^{n}$) can be performed using the
definitions completely analogous to those introduced by L. Bers
\cite{Berskniga}. Nevertheless, neither an expansion theorem nor a result on the
completeness of the obtained system of hyperbolic formal powers (a Runge-type
theorem) was available. In this work we apply the results from \cite{KRT}
for constructing the formal powers for the hyperbolic Vekua equation arising
in relation with the kernels $K_{1}$ and $K_{2}$ as well as the observation
from \cite{CKT} establishing that they are the result of the transmutation of
the usual powers of the hyperbolic variable. We obtain an expansion theorem
and prove a completeness result. Moreover, we obtain explicit formulae for the
expansion coefficients which leads to the functional series representation for
the kernel $K_{1}$ (and also for $K_{2}$). We give several examples of
application of this result including a numerical computation. Finally, we
propose an alternative method for approximate construction of the
transmutation kernel based on the same hyperbolic formal powers but instead of
the expansion theorem using the obtained completeness result, and show that it leads to an efficient method for solving Sturm-Liouville spectral problems.
The paper is structured as follows. In Section \ref{SectHPFT} we introduce the hyperbolic and bicomplex numbers and the Vekua equation one of whose solutions is the bicomplex function $K_{1}-\mathbf{j}K_{2}$. We present several properties of that Vekua equation, construct an infinite system of its solutions called formal powers, and explain its relation to generalized wave polynomials and to the spectral parameter power series (SPPS) representation for solutions of Sturm-Liouville equations. We consider a Goursat problem for the hyperbolic Cauchy-Riemann system and obtain certain related results important for what follows. In Section \ref{SectTO} we recall some long known facts about the transmutation operators and give some recent and new results concerning the relation between $K_{1}$ and $K_{2}$ and the commutation and mapping properties of the corresponding transmutation operators. In Section \ref{SectGGTO} we introduce an operator transforming the boundary data of the Goursat problem for the wave equation into the boundary data of the Goursat problem for the operator $\square-q(x)$. Using its properties we obtain a result on a functional series representation for solutions of the equation $\left( \square-q\right) u=0$ in terms of generalized wave polynomials. In Section \ref{SectTransmutVekua} we introduce a transmutation operator relating the hyperbolic Cauchy-Riemann operator with the hyperbolic Vekua operator of interest. The main observation here is that this transmutation operator, which possesses important boundedness properties,
maps usual powers of the hyperbolic variable $z$ into the formal powers as
well as the usual derivatives with respect to $z$ into the generalized
derivatives in the sense of L. Bers. Here we mainly use the ideas from
\cite{CKM AACA} where similar observations were made in the elliptic setting.
Section \ref{SectETh} is dedicated to the expansion and the Runge-type theorems corresponding to the special Vekua equation under consideration. We show that
a generalized Taylor formula for its solutions is valid under certain
conditions on their Goursat data. The formula involves the formal powers
constructed and studied in preceding sections. The obtained Runge-type theorem
establishes that even when the conditions required for the validity of the
Taylor expansion are not fulfilled any solution of the studied hyperbolic
Vekua equation can be approximated arbitrarily closely by a formal polynomial.
In Section \ref{SectKernelAsSolVekua} we show that $K_{1}$ and $K_{2}$ are necessarily the conjugate components of a solution of the Vekua equation. Using this we give explicit formulae expressing $K_{2}$ in terms of $K_{1}$ and vice versa. Section \ref{SectFIK} contains the main result of this work -- the representation of the transmutation kernel in terms of a generalized Taylor series in formal powers. For the corresponding expansion coefficients we obtain both recurrent and direct exact formulae. Several examples are given illustrating their application. In Section \ref{SectApproxKernel} we show how the obtained results can be used for the numerical computation of the transmutation kernels. Here, together with the approach based on the generalized Taylor expansion theorem, we propose a method based on the Runge-type theorem. When sufficiently many derivatives
of the potential evaluated at the origin are not available, this second method is
more convenient. A detailed numerical example illustrates its excellent
computational performance. Finally, in Section \ref{SectASL} we show that the developed theory can be applied to the efficient numerical solution of Sturm-Liouville spectral problems and allows one to obtain thousands of eigenvalues whose errors are essentially of the same order.
\section{Elements of hyperbolic pseudoanalytic function theory}\label{SectHPFT}
\subsection{Hyperbolic and bicomplex numbers}
By $\mathbf{j}$ we denote the hyperbolic imaginary unit: $\mathbf{j}^{2}=1$
and consider the algebra of hyperbolic numbers, also called duplex numbers
(see, e.g., \cite{Sobczyk})
\[
\mathbb{D}:=\big\{c=a+b\mathbf{j}\ :\ \mathbf{j}^{2}=1,\ a,b\in\mathbb{R}%
\big\}\cong\mathrm{Cl}_{\mathbb{R}}(0,1).
\]
The algebra $\mathbb{D}$ is commutative and contains zero divisors. Additional
information on the hyperbolic numbers can be found in \cite{Lavrentyev and
Shabat}, \cite{MotterRosa} and \cite{Sobczyk}.
We will consider the variable $z=x+t\mathbf{j}$ where $x$ and $t$ are real
variables and the corresponding differential operators
\[
\partial_{z}=\frac{1}{2}\left( {\partial_{x}+\mathbf{j}\partial_{t}}\right)
\mbox{ and }\partial_{\bar{z}}=\frac{1}{2}\left( {\partial_{x}-\mathbf{j}%
\partial_{t}}\right) .
\]
As in the case of complex numbers, we have $\partial_{\bar{z}}z=0$ which
explains the choice of the minus sign in the definition of $\partial_{\bar{z}%
}$.
Let $\mathbb{B}$ denote the algebra of bicomplex numbers which can be defined
as follows
\[
\mathbb{B}:=\big\{w=u+v\mathbf{j}\ :\ u,v\in\mathbb{C}\big\}
\]
and the complex imaginary unit $\mathbf{i}$ commutes with $\mathbf{j}$. More
on bicomplex numbers can be found, e.g., in \cite{CKr2012}, \cite{CastaKr2005}%
, \cite{APFT} and \cite{RochonTrembl}.
The conjugate of a bicomplex number $w$ with respect to $\mathbf{j}$ we denote
by $\overline{w}$, i.e., $\overline{w}=u-v\mathbf{j}$. The corresponding
conjugation operator is denoted by $C$, $Cw=\overline{w}$.
$\mathbb{C}$-valued functions will be also called scalar. For the scalar
components of $w$ we introduce the notations
\[
\mathcal{R}(w)=u=\frac{1}{2}(w+\overline{w})\quad\text{and\quad}%
\mathcal{I}(w)=v=\frac{1}{2\mathbf{j}}(w-\overline{w}).
\]
In order not to overload the text by excessively many different notations we
will use the notation $\mathcal{R}$ and $\mathcal{I}$ also in the operational
sense as projection operators projecting $w\in\mathbb{B}$ onto the respective
scalar components, $\mathcal{R=}\frac{1}{2}(I+C)$ and $\mathcal{I}=\frac
{1}{2\mathbf{j}}(I-C)$ where $I$ is the identity operator.
It is convenient to introduce the pair of idempotents $P^{+}=\frac{1}%
{2}(1+\mathbf{j})$ and $P^{-}=\frac{1}{2}(1-\mathbf{j})$ satisfying $\left(
P^{\pm}\right) ^{2}=P^{\pm}$ and $P^{+}P^{-}=P^{-}P^{+}=0$. Then for any
$w\in\mathbb{B}$ there exist the unique numbers $w^{+}$, $w^{-}\in\mathbb{C}$
such that $w=P^{+}w^{+}+P^{-}w^{-}$ which are related with the components of
$w$ in the following way
\begin{equation}
w^{\pm}=\mathcal{R}\left( w\right) \pm\mathcal{I}(w). \label{W+-}%
\end{equation}
A nonzero element $w\in\mathbb{B}\ $belongs to the set of zero divisors
$\sigma(\mathbb{B)}$ iff $w=P^{+}w^{+}$ or $w=P^{-}w^{-}$.
We will use the following norm (see \cite{CKr2012}) in $\mathbb{B}$,
\begin{equation}
\left\vert w\right\vert =\frac{1}{2}\left( \left\vert w^{+}\right\vert
_{\mathbb{C}}+\left\vert w^{-}\right\vert _{\mathbb{C}}\right) ,
\label{bicom_norm}%
\end{equation}
where $\left\vert \cdot\right\vert _{\mathbb{C}}$ is the usual norm in
$\mathbb{C}$.
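The algebraic facts above are easy to verify numerically. The following sketch
(our illustration, not part of the paper) represents a bicomplex number by its
components $(u,v)$ and checks that multiplication acts componentwise on the
idempotent coordinates $w^{\pm}$ of (\ref{W+-}), that $P^{\pm}$ are idempotent
and annihilate each other, and evaluates the norm (\ref{bicom_norm}).

```python
# Minimal bicomplex arithmetic: w = u + v*j with u, v complex and j**2 = +1.
# Caution: Python's literal suffix `j` denotes the COMPLEX unit i of the
# scalar components, not the hyperbolic unit j of the algebra.

class Bicomplex:
    def __init__(self, u, v):
        self.u = complex(u)  # scalar part R(w)
        self.v = complex(v)  # I(w), the coefficient of the hyperbolic unit

    @property
    def plus(self):          # w+ = R(w) + I(w), cf. (W+-)
        return self.u + self.v

    @property
    def minus(self):         # w- = R(w) - I(w)
        return self.u - self.v

    def __mul__(self, other):
        # (u1 + v1 j)(u2 + v2 j) = (u1 u2 + v1 v2) + (u1 v2 + v1 u2) j
        return Bicomplex(self.u * other.u + self.v * other.v,
                         self.u * other.v + self.v * other.u)

    def conj(self):          # conjugation C with respect to j
        return Bicomplex(self.u, -self.v)

    def norm(self):          # |w| = (|w+| + |w-|)/2, cf. (bicom_norm)
        return 0.5 * (abs(self.plus) + abs(self.minus))

# the idempotents P+ = (1 + j)/2 and P- = (1 - j)/2
Pp = Bicomplex(0.5, 0.5)
Pm = Bicomplex(0.5, -0.5)
```

In the idempotent coordinates the product is componentwise,
$(ab)^{\pm}=a^{\pm}b^{\pm}$, which makes the zero-divisor structure
transparent: $P^{+}P^{-}=0$ although both factors are nonzero.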
\subsection{Hyperbolic Vekua equation}
We will consider functions $w\colon\mathbb{D}\rightarrow\mathbb{B}$ and
Vekua-type equations of the form%
\begin{equation}
\partial_{\bar{z}}w=aw+b\overline{w}\label{Vekua hyper}%
\end{equation}
where $a$ and $b$ are bicomplex functions of the variable $z\in\mathbb{D}$. In
\cite{KKT}, \cite{APFT}, \cite{KRT} it was shown that many results from
pseudoanalytic function theory \cite{Berskniga} remain valid in the hyperbolic
situation. We will not give here the definitions and properties corresponding
to the general Vekua equation (\ref{Vekua hyper}), referring the reader to
\cite{APFT}. Instead, since we are interested in a very special form of the
coefficients $a$ and $b$, we restrict ourselves to that particular case and
obtain several results, such as an expansion theorem and a Runge-type theorem,
which are still unavailable in the general situation. Thus, let us consider the
following hyperbolic Vekua equation
\begin{equation}
\partial_{\bar{z}}W-\frac{\partial_{\bar{z}}f}{f}\overline{W}%
=0\label{Vekua main hyper}%
\end{equation}
where $f$ is a scalar, non-vanishing function in the domain of interest. Vekua
equations of this form, i.e., with $a\equiv0$ and $b$ being a logarithmic
derivative of a scalar function are called Vekua equations of the main type or
main Vekua equations. They arise from the factorization of the stationary
Schr\"{o}dinger operator (in the elliptic case) \cite{Krpseudoan}, \cite{APFT}
and of the variable mass Klein-Gordon operator (in the hyperbolic case)
\cite{KRT}, \cite{APFT}. Moreover, for the purposes of the present paper it is
sufficient to consider the case when $f$ depends only on $x$ and hence
(\ref{Vekua main hyper}) can also be written in the form
\begin{equation}
\partial_{\bar{z}}W-\frac{f^{\prime}(x)}{2f(x)}\overline{W}%
=0.\label{Vekua main x}%
\end{equation}
It is equivalent to the system
\begin{equation}
f{\partial_{x}}\left( \frac{1}{f}u\right) ={\partial_{t}v,}\label{Vek1}%
\end{equation}%
\begin{equation}
\frac{1}{f}{\partial_{x}}\left( fv\right) ={\partial_{t}u,}\label{Vek2}%
\end{equation}
where $u$, $v$: $\mathbb{R}^{2}\rightarrow\mathbb{C}$ and $W=u+v\mathbf{j}$.
Introducing the notation $\varphi=u/f$ and $\psi=fv$ one can rewrite this
system also in the following form%
\begin{equation}
{\partial_{t}}\varphi=\frac{1}{f^{2}}{\partial_{x}}\psi\quad\text{and\quad
}{\partial_{x}}\varphi=\frac{1}{f^{2}}{\partial_{t}}\psi.\label{hyp p analyt}%
\end{equation}
In the case when all the involved functions are real valued, this system was
studied in \cite{PA1983}, \cite{PA1983 2} and later on in \cite{KRT},
\cite{APFT}.
Equation (\ref{Vekua main hyper}) admits a corresponding generating pair $F=f$
and $G=\mathbf{j}/f$. We recall that a generating pair corresponding to a
Vekua equation is a pair of its solutions $F$ and $G$, independent in the
sense that an arbitrary function $\omega$, in this case $\mathbb{B}$-valued,
can be represented in the form
\begin{equation}
\omega=\varphi F+\psi G \label{omega}%
\end{equation}
where $\varphi$ and $\psi$ are scalar functions. That is, $F$ and $G$ should
satisfy the condition $\mathcal{I}(\overline{F}G)\neq0$ everywhere in the
domain of interest. In relation to the generating pair $(F,G)=(f,\mathbf{j}%
/f)$ observe that the scalar functions $\varphi$ and $\psi$ satisfy
(\ref{hyp p analyt}) iff the $\mathbb{B}$-valued function $\omega=\varphi
f+\psi\mathbf{j}/f$ is a solution of (\ref{Vekua main hyper}).
The knowledge of a generating pair allows one to define the $(F,G)$-derivative
in the sense of L. Bers. For a function $\omega$ written in the form
(\ref{omega}) its $(F,G)$-derivative has the form
\[
\overset{\circ}{\omega}:=\frac{d_{(F,G)}\omega}{dz} := \left( \partial
_{z}\varphi\right) F+\left( \partial_{z}\psi\right) G.
\]
Whenever $W$ satisfies (\ref{Vekua main hyper}) its $(F,G)$-derivative (with
$(F,G)=(f,\mathbf{j}/f)$ ) is a solution of the Vekua equation \cite{APFT}
\begin{equation}
\partial_{\bar{z}}w+\frac{\partial_{z}f}{f}\overline{w}=0.
\label{Vekua successor}%
\end{equation}
Notice that for $f=f(x)$ this equation can be written as follows
\[
\partial_{\bar{z}}w+\frac{f^{\prime}(x)}{2f(x)}\overline{w}=0
\]
or%
\begin{equation}
\partial_{\bar{z}}w-\frac{\left( f^{-1}(x)\right) ^{\prime}}{2f^{-1}%
(x)}\overline{w}=0. \label{Vekua succ}%
\end{equation}
Thus, the coefficient in the equation has the same structure as in
(\ref{Vekua main x}) but the role of $f$ is played by $1/f$. Hence a
generating pair for (\ref{Vekua succ}) can be chosen as $(F_{1},G_{1}%
)=(1/f,f\mathbf{j})$. The second derivative of a solution of
(\ref{Vekua main x}) can be defined in the form $\overset{\circ\circ}{W}$
$=\overset{\circ}{w}$ where $w=\overset{\circ}{W}=\varphi_{1}F_{1}+\psi
_{1}G_{1}=\varphi_{1}/f+\psi_{1}f\mathbf{j}$, $\varphi_{1}$ and $\psi_{1}$ are
scalar functions, and $\overset{\circ}{w}=\left( \partial_{z}\varphi
_{1}\right) /f+\left( \partial_{z}\psi_{1}\right) f\mathbf{j}$. Obviously,
the next generating pair can be chosen again as $(F_{2},G_{2})=(f,\mathbf{j}%
/f)$ and in this way the derivative of any order in the sense of Bers can
be defined. We denote the derivative of $n$-th order by $W^{[n]}$. In the
theory of pseudoanalytic functions such a sequence of generating pairs is
known as a periodic generating sequence (of period two). We have
$(F_{n},G_{n})=(f,\mathbf{j}/f)$ when $n$ is even, and $(F_{n},G_{n}%
)=(1/f,f\mathbf{j})$ when $n$ is odd.
For what follows it is important to observe that if $W$ is a solution of
(\ref{Vekua main x}), its $n$-th derivative in the sense of Bers (if it exists)
has an especially simple form
\begin{equation}
W^{[n]}=\mathbf{j}^{n}{\partial_{t}^{n}W.} \label{nth Der}%
\end{equation}
Indeed, for $W=\varphi f+\psi\mathbf{j}/f$ we have $\overset{\circ}{W}=\left(
\partial_{z}\varphi\right) f+\left( \partial_{z}\psi\right) \mathbf{j}/f$
where $\varphi$ and $\psi$ are solutions of (\ref{hyp p analyt}). Consider
\[
\partial_{z}\varphi=\frac{1}{2}\left( {\partial_{x}\varphi+\mathbf{j}%
\partial_{t}}\varphi\right) =\frac{1}{2}\left( \frac{1}{f^{2}}{\partial_{t}%
}\psi{+\mathbf{j}\partial_{t}}\varphi\right) =\frac{{\mathbf{j}}}%
{2f}{\partial_{t}}\left( \varphi f+\psi\frac{{\mathbf{j}}}{f}\right)
=\frac{{\mathbf{j}}}{2f}{\partial_{t}W}%
\]
and similarly,
\[
\partial_{z}\psi=\frac{1}{2}\left( {\partial_{x}\psi+\mathbf{j}\partial_{t}%
}\psi\right) =\frac{1}{2}\left( f^{2}{\partial_{t}}\varphi{+\mathbf{j}%
\partial_{t}}\psi\right) =\frac{f}{2}{\partial_{t}}\left( \varphi
f+\psi\frac{{\mathbf{j}}}{f}\right) =\frac{f}{2}{\partial_{t}W.}%
\]
Thus, $\overset{\circ}{W}=\mathbf{j}{\partial_{t}W}$. As this reasoning does
not change if $f$ is substituted everywhere by $1/f$ we obtain that
$\overset{\circ\circ}{W}$ $=\mathbf{j}{\partial_{t}}\overset{\circ}{{W}%
}=\mathbf{j}^{2}{\partial_{t}^{2}W}$ and hence (\ref{nth Der}) is valid both
for odd and for even values of $n$.
Below we extend some of the results on the relationship between hyperbolic
pseudoanalytic functions and solutions of the Klein-Gordon equations to the
bicomplex case. Since all the proofs are essentially the same, we refer the
reader to \cite[Chapter 13]{APFT}, \cite{KRT} for details. The operator
$\partial_{\bar{z}}$ applied to a scalar function $\varphi$ can be regarded as
a kind of gradient. If $\partial_{\bar{z}}\varphi=\Phi$, where $\Phi$ is a
$\mathbb{B}$-valued function defined on a simply connected domain $\Omega$
with $\Phi_{1}=\mathcal{R}(\Phi)$ and $\Phi_{2}=\mathcal{I}(\Phi)$ such that
\[
\partial_{t}\Phi_{1}+\partial_{x}\Phi_{2}=0,\quad\forall\,(x,t)\in\Omega,
\]
then we can construct $\varphi$ up to an arbitrary complex constant $c$.
Indeed, let $\Gamma\subset\Omega$ be a rectifiable curve leading from
$(x_{0},t_{0})$ to $(x,t)$, then the integral
\begin{equation}
\overline{A}_{h}[\Phi](x,t):=2\left( \int_{\Gamma}\Phi_{1}\,dx-\Phi
_{2}\,dt\right) \label{ABarGamma}%
\end{equation}
is path-independent, and all $\mathbb{C}$-valued solutions $\varphi$ of the
equation $\partial_{\bar{z}}\varphi=\Phi$ in $\Omega$ have the form
$\varphi(x,t)=\overline{A}_{h}[\Phi(x,t)]+c$ where $c$ is an arbitrary complex
constant. In other words, the operator $\overline{A}_{h}$ performs the
well-known operation of reconstructing a potential function from its gradient.
In Section \ref{SectKernelAsSolVekua} we need a particular case of
\eqref{ABarGamma} when $\Gamma$ is the path consisting of two segments, the
first going from $(x_{0},t_{0})$ to $(x,t_{0})$ and the second going from
$(x,t_{0})$ to $(x,t)$. Assuming that this path belongs to the domain of
interest $\Omega$, formula \eqref{ABarGamma} reads as follows
\begin{equation}
\overline{A}_{h}[\Phi](x,t)=2\left( \int_{x_{0}}^{x}\Phi_{1}(\eta
,t_{0})\,d\eta-\int_{t_{0}}^{t}\Phi_{2}(x,\xi)\,d\xi\right) .
\label{ABarKernel}%
\end{equation}
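As a sanity check of \eqref{ABarKernel}, the following sketch (ours; the
trapezoid quadrature and the test potential are our choices) recovers a scalar
function $\varphi$ from $\Phi=\partial_{\bar{z}}\varphi$, whose components are
$\Phi_{1}=\partial_{x}\varphi/2$ and $\Phi_{2}=-\partial_{t}\varphi/2$.

```python
# Recovering phi from Phi = d/dz-bar of phi via formula (ABarKernel), with
# Phi1 = (phi_x)/2 and Phi2 = -(phi_t)/2; plain trapezoid quadrature.
import math

def quad(g, a, b, n=2000):
    """Composite trapezoid rule for the integral of g over [a, b]."""
    step = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * step) for k in range(1, n))
    return s * step

def A_bar(Phi1, Phi2, x, t, x0=0.0, t0=0.0):
    # two-segment path: (x0, t0) -> (x, t0) -> (x, t), cf. (ABarKernel)
    return 2.0 * (quad(lambda e: Phi1(e, t0), x0, x)
                  - quad(lambda s: Phi2(x, s), t0, t))

# test potential phi(x, t) = x^2 t + sin t, so phi(0, 0) = 0
phi = lambda x, t: x ** 2 * t + math.sin(t)
Phi1 = lambda x, t: x * t                         # phi_x / 2
Phi2 = lambda x, t: -(x ** 2 + math.cos(t)) / 2   # -phi_t / 2
```

The path integral reproduces $\varphi(x,t)-\varphi(x_{0},t_{0})$ up to
quadrature error, illustrating the path-independence stated above.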
The following theorems explain the relationship between the main Vekua equation
and variable mass Klein-Gordon equations. We use the notation $\Box
:=\partial_{x}^{2}-\partial_{t}^{2}$.
\begin{theorem}
[\cite{KRT}]Let $f=f(x)$ be a non-vanishing scalar function and $W$ be a
solution of \eqref{Vekua main x}. Then $u=\mathcal{R}(W)$ is a solution of the
equation
\begin{equation}
\label{KG1}\bigl(\Box-q_{1}(x)\bigr)u=0,\quad\text{where}\quad q_{1} =
\frac{f^{\prime\prime}}{f},
\end{equation}
and $v = \mathcal{I}(W)$ is a solution of the equation
\begin{equation}
\label{KG2}\bigl(\Box-q_{2}(x)\bigr)v=0,\quad\text{where}\quad q_{2} =
\frac{(f^{-1})^{\prime\prime}}{f^{-1}} = 2\Bigl(\frac{f^{\prime}}{f}%
\Bigr)^{2}-q_{1}.
\end{equation}
\end{theorem}
\begin{theorem}
[\cite{KRT}]\label{ThDarboux} Let $u$ be a scalar solution of the Klein-Gordon
equation \eqref{KG1} in a simply connected domain $\Omega$. Then a scalar
solution $v$ of the associated Klein-Gordon equation \eqref{KG2} such that
$u+\mathbf{j}v$ is a solution of \eqref{Vekua main x} in $\Omega$, can be
constructed according to the formula
\[
v = -f^{-1} \overline{A}_{h}\bigl[\mathbf{j} f^{2} \partial_{\bar{z}}(f^{-1}
u)\bigr].
\]
It is unique up to an additive term $cf^{-1}$ where $c$ is an arbitrary
complex constant.
Vice versa, given a solution $v$ of \eqref{KG2}, the corresponding solution
$u$ of \eqref{KG1} such that $u+\mathbf{j}v$ is a solution of
\eqref{Vekua main x}, has the form
\[
u = -f \overline{A}_{h}\bigl[\mathbf{j} f^{-2} \partial_{\bar{z}}(f v)\bigr],
\]
up to an additive term $cf$.
\end{theorem}
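Theorem \ref{ThDarboux} can be tried on a concrete example (ours, not from the
paper). Take $f(x)=e^{x}$, so $q_{1}=q_{2}=1$, and $u=e^{x}t$, a solution of
\eqref{KG1}. A short hand computation gives $\mathbf{j}f^{2}\partial_{\bar{z}%
}(f^{-1}u)=-e^{2x}/2$, which is scalar, so $\Phi_{1}=-e^{2x}/2$ and $\Phi
_{2}=0$; the formula of the theorem should then produce $v=\sinh x$, which
indeed solves \eqref{KG2}.

```python
# Darboux-type formula of the theorem for f(x) = e^x, u = e^x t:
# the integrand components Phi1 = -e^{2x}/2, Phi2 = 0 were computed by hand,
# and v = -f^{-1} A_bar[Phi] should return sinh x (a solution of v_xx - v_tt = v).
import math

def quad(g, a, b, n=4000):
    step = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * step) for k in range(1, n))
    return s * step

f = math.exp
Phi1 = lambda x, t: -math.exp(2 * x) / 2   # scalar part of j f^2 dz-bar(f^{-1} u)
Phi2 = lambda x, t: 0.0

def v(x, t):
    # A_bar evaluated along the two-segment path from the origin
    A = 2.0 * (quad(lambda e: Phi1(e, 0.0), 0.0, x)
               - quad(lambda s: Phi2(x, s), 0.0, t))
    return -A / f(x)
```

The normalization $v(0,\cdot)=0$ corresponds to a particular choice of the
free additive term $cf^{-1}$ mentioned in the theorem.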
\subsection{Integration and formal powers}
Let $(F,G)$ be a generating pair. Its adjoint generating pair $(F,G)^{\ast
}=(F^{\ast},G^{\ast})$ is defined by the formulae
\[
F^{\ast}=-\frac{2\overline{F}}{F\overline{G}-\overline{F}G},\qquad G^{\ast}=
\frac{2\overline{G}}{F\overline{G}-\overline{F}G}.
\]
For our special case of generating pairs we have $(f, \mathbf{j}/f)^{\ast}=
(f\mathbf{j}, 1/f)$ and $(1/f, f\mathbf{j})^{\ast}= (\mathbf{j}/f, f)$.
The $(F,G)$-integral is defined as
\[
\int_{\Gamma}Wd_{(F,G)}z=\frac{1}{2}\left( F(z_{1})\,\mathcal{R}\left(
\int_{\Gamma}G^{\ast}W\,dz\right) +G(z_{1})\,\mathcal{R}\left( \int_{\Gamma
}F^{\ast}W\,dz\right) \right)
\]
where $\Gamma$ is a rectifiable curve leading from $z_{0}$ to $z_{1}$.
If $W=\varphi F+\psi G$ is a bicomplex $(F,G)$-pseudoanalytic function, where
$\varphi$ and $\psi$ are scalar functions, then
\[
\int_{z_{0}}^{z}\overset{\circ}{W}\,d_{(F,G)}z=W(z)-\varphi(z_{0}%
)F(z)-\psi(z_{0})G(z),
\]
and this integral is path-independent and represents the $(F,G)$%
-antiderivative of $\overset{\circ}{W}$.
\begin{definition}
[\cite{Berskniga}]\label{DefFormalPower_bi}The formal power $Z_{m}%
^{(0)}(a,z_{0};z)$ with center at $z_{0}$, coefficient $a$ and exponent $0$ is
defined as the linear combination of the generators $F_{m}$, $G_{m}$ with
scalar constant coefficients $\lambda$, $\mu$ chosen so that $\lambda
F_{m}(z_{0})+\mu G_{m}(z_{0})=a$. The formal powers with exponents
$n=0,1,2,\ldots$ are defined by the recursion formula
\[
Z_{m}^{(n+1)}(a,z_{0};z)=(n+1)\int_{z_{0}}^{z}Z_{m+1}^{(n)}(a,z_{0}%
;\zeta)\,d_{(F_{m},G_{m})}\zeta.
\]
\end{definition}
This definition implies the following properties.
\begin{enumerate}
\item $Z_{m}^{(n)}(a,z_{0};z)$ is an $(F_{m},G_{m})$-bicomplex pseudoanalytic
function of $z$, i.e., it satisfies the hyperbolic Vekua equation determined
by $(F_{m},G_{m})$ (see \cite{APFT}).
\item If $a^{\prime}$ and $a^{\prime\prime}$ are scalar constants, then
\[
Z_{m}^{(n)}(a^{\prime}+\mathbf{j}a^{\prime\prime},z_{0};z)=a^{\prime}%
Z_{m}^{(n)}(1,z_{0};z)+a^{\prime\prime}Z_{m}^{(n)}(\mathbf{j},z_{0};z).
\]
\item The formal powers satisfy the differential relations
\[
\frac{d_{(F_{m},G_{m})}Z_{m}^{(n)}(a,z_{0};z)}{dz}=nZ_{m+1}^{(n-1)}%
(a,z_{0};z).
\]
\item The asymptotic formulae
\[
Z_{m}^{(n)}(a,z_{0};z)\sim a(z-z_{0})^{n},\quad z\rightarrow z_{0}%
\]
hold.
\end{enumerate}
In the following we denote $Z_{0}^{(n)}(a,z_{0};z)$ by $Z^{(n)}(a,z_{0};z)$.
Since we are primarily interested in the hyperbolic Vekua equation
(\ref{Vekua main x}), observe that as was explained above, a corresponding
generating sequence is periodic having the form $(F_{n},G_{n})=(f,\mathbf{j}%
/f)$ when $n$ is even, and $(F_{n},G_{n})=(1/f,f\mathbf{j})$ when $n$ is odd.
Using the construction from \cite{Berskniga} given there for an analogous
elliptic situation (see also \cite[Sect. 4.2]{APFT}) we obtain that the
corresponding formal powers admit the following elegant representation. We
consider the formal powers with center at the origin and for simplicity
assume that $f(0)=1$. Define the following systems of functions
\begin{align}
X^{(0)}(x) & \equiv\widetilde{X}^{(0)}(x)\equiv1,\label{X1}\\
X^{(n)}(x) & =n\int_{0}^{x}X^{(n-1)}(s)\left( f^{2}(s)\right) ^{(-1)^{n}%
}\,\mathrm{d}s,\label{X2}\\
\widetilde{X}^{(n)}(x) & =n\int_{0}^{x}\widetilde{X}^{(n-1)}(s)\left(
f^{2}(s)\right) ^{(-1)^{n-1}}\,\mathrm{d}s.\label{X3}%
\end{align}
Then the formal powers corresponding to \eqref{Vekua main x} have the
following form. For $\alpha=\alpha^{\prime}+\mathbf{j}\alpha^{\prime\prime}$
we have
\begin{equation}
Z^{(n)}(\alpha,0;z)=f(x)\mathcal{R}\bigl(\vphantom{Z^{(n)}}_{\ast}%
Z^{(n)}(\alpha,0;z)\bigr)+\frac{\mathbf{j}}{f(x)}\mathcal{I}%
\bigl(\vphantom{Z^{(n)}}_{\ast}Z^{(n)}(\alpha,0;z)\bigr),\label{Zn}%
\end{equation}
where
\begin{equation}
_{\ast}Z^{(n)}(\alpha,0;z)=\alpha^{\prime}\sum_{k=0}^{n}\binom{n}{k}%
X^{(n-k)}\mathbf{j}^{k}t^{k}+\mathbf{j}\alpha^{\prime\prime}\sum_{k=0}%
^{n}\binom{n}{k}\widetilde{X}^{(n-k)}\mathbf{j}^{k}t^{k}\qquad\text{for an odd
}n\label{Znodd}%
\end{equation}
and
\begin{equation}
_{\ast}Z^{(n)}(\alpha,0;z)=\alpha^{\prime}\sum_{k=0}^{n}\binom{n}{k}%
\widetilde{X}^{(n-k)}\mathbf{j}^{k}t^{k}+\mathbf{j}\alpha^{\prime\prime}%
\sum_{k=0}^{n}\binom{n}{k}X^{(n-k)}\mathbf{j}^{k}t^{k}\qquad\text{for an even
}n.\label{Zneven}%
\end{equation}
For any $\alpha\in\mathbb{B}$ and $n\in\mathbb{N}_{0}=\mathbb{N}\cup\left\{
0\right\} $ the formal power (\ref{Zn}) is a solution of (\ref{Vekua main x}).
\begin{remark}
Formulae \eqref{Zn}--\eqref{Zneven} clearly generalize the binomial
representation for the analytic powers $\alpha z^{n}$. If one chooses
$f\equiv1$ then $Z^{(n)}(\alpha,0;z)=\alpha z^{n}$.
\end{remark}
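The remark can be checked directly (our sketch, not from the paper): for
$f\equiv1$ the recursion \eqref{X1}--\eqref{X3} gives $X^{(n)}=\widetilde
{X}^{(n)}=x^{n}$, and the formal power $Z^{(n)}(1,0;z)$ of
\eqref{Znodd}--\eqref{Zneven} must collapse to the hyperbolic binomial
$(x+\mathbf{j}t)^{n}$, computed below by repeated multiplication with
$\mathbf{j}^{2}=1$.

```python
# Formal powers for f = 1: X^{(m)} = X~^{(m)} = x^m, so Z^{(n)}(1,0;z)
# must equal z^n = (x + jt)^n with the hyperbolic unit j, j**2 = +1.
from math import comb

def hyp_mul(a, b):
    # (a0 + a1 j)(b0 + b1 j) with j**2 = +1
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def z_pow(x, t, n):
    w = (1.0, 0.0)
    for _ in range(n):
        w = hyp_mul(w, (x, t))
    return w

def formal_power(x, t, n):
    # (Znodd)/(Zneven) with alpha = 1 and X^{(m)} = X~^{(m)} = x^m:
    # even powers of j feed the scalar part, odd powers the j-part
    s0 = sum(comb(n, k) * x ** (n - k) * t ** k for k in range(0, n + 1, 2))
    s1 = sum(comb(n, k) * x ** (n - k) * t ** k for k in range(1, n + 1, 2))
    return (s0, s1)
```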
The formal powers $Z^{(n)}(\alpha,0;z)$ can be written with the use of
generalized wave polynomials \cite{KKTT}. Consider two families of functions
$\left\{ \varphi_{k}\right\} _{k=0}^{\infty}$ and $\left\{ \psi
_{k}\right\} _{k=0}^{\infty}$ constructed from \eqref{X1}, \eqref{X2} and
\eqref{X3} by
\begin{equation}
\varphi_{k}(x)=%
\begin{cases}
f(x)X^{(k)}(x), & k\text{\ odd},\\
f(x)\widetilde{X}^{(k)}(x), & k\text{\ even},
\end{cases}
\label{phik}%
\end{equation}
and
\begin{equation}
\psi_{k}(x)=%
\begin{cases}
\dfrac{\widetilde{X}^{(k)}(x)}{f(x)}, & k\text{\ odd,}\\
\dfrac{X^{(k)}(x)}{f(x)}, & k\text{\ even}.
\end{cases}
\label{psik}%
\end{equation}
The following functions
\begin{equation}
u_{0}=\varphi_{0}(x),\quad u_{2m-1}(x,t)=\sum_{\text{even }k=0}^{m}\binom
{m}{k}\varphi_{m-k}(x)t^{k},\quad u_{2m}(x,t)=\sum_{\text{odd }k=1}^{m}%
\binom{m}{k}\varphi_{m-k}(x)t^{k}, \label{um}%
\end{equation}
and
\begin{equation}
v_{0}=\psi_{0}(x),\quad v_{2m-1}(x,t)=\sum_{\text{even }k=0}^{m}\binom{m}%
{k}\psi_{m-k}(x)t^{k},\quad v_{2m}(x,t)=\sum_{\text{odd }k=1}^{m}\binom{m}%
{k}\psi_{m-k}(x)t^{k}, \label{vm}%
\end{equation}
are called generalized wave polynomials (the wave polynomials are introduced
below, in Remark \ref{Rem Wave polynomials}). It is easy to see that
\begin{align}
Z^{(0)}(\alpha,0;z) & =\alpha^{\prime}u_{0}(x,t)+\mathbf{j}\alpha
^{\prime\prime}v_{0}(x,t), & & \label{ZasWavePol0}\\
Z^{(n)}(\alpha,0;z) & =\alpha^{\prime}u_{2n-1}(x,t)+\alpha^{\prime\prime
}u_{2n}(x,t)+\mathbf{j}\bigl(\alpha^{\prime}v_{2n}(x,t)+\alpha^{\prime\prime
}v_{2n-1}(x,t)\bigr), & & n\geq1. \label{ZasWavePol}%
\end{align}
The following parity relations hold for generalized wave polynomials.
\begin{equation}
u_{0}(x,-t)=u_{0}(x,t),\qquad u_{2n-1}(x,-t)=u_{2n-1}(x,t),\qquad
u_{2n}(x,-t)=-u_{2n}(x,t). \label{umParity}%
\end{equation}
It is worth mentioning the following result obtained in \cite{KrCV08} (for
additional details and a simpler proof see \cite{APFT} and \cite{KrPorter2010}),
which establishes the relation of the systems of functions $\left\{
\varphi_{k}\right\} _{k=0}^{\infty}$ and $\left\{ \psi_{k}\right\}
_{k=0}^{\infty}$ to the Sturm-Liouville equation.
\begin{theorem}
[\cite{KrCV08}]\label{ThGenSolSturmLiouville} Let $q$ be a continuous
complex-valued function of an independent real variable $x\in\lbrack a,b]$ and
$\lambda$ be an arbitrary complex number. Suppose there exists a solution $f$
of the equation
\begin{equation}
f^{\prime\prime}-qf=0 \label{SLhom}%
\end{equation}
on $(a,b)$ such that $f\in C^{2}(a,b)\cap C^{1}[a,b]$ and $f(x)\neq0$\ for any
$x\in\lbrack a,b]$. Then the general solution $g\in C^{2}(a,b)\cap C^{1}[a,b]$
of the equation
\[
g^{\prime\prime}-qg=\lambda g
\]
on $(a,b)$ has the form
\[
g=c_{1}g_{1}+c_{2}g_{2}%
\]
where $c_{1}$ and $c_{2}$ are arbitrary complex constants,
\begin{equation}
g_{1}=\sum_{k=0}^{\infty}\frac{\lambda^{k}}{(2k)!}\varphi_{2k}\qquad
\text{and}\qquad g_{2}=\sum_{k=0}^{\infty}\frac{\lambda^{k}}{(2k+1)!}%
\varphi_{2k+1} \label{u1u2}%
\end{equation}
and both series converge uniformly on $[a,b]$ together with the series of the
first derivatives which have the form
\begin{multline}
\displaybreak[2]g_{1}^{\prime}=f^{\prime}+\sum_{k=1}^{\infty}\frac{\lambda
^{k}}{(2k)!}\left( \frac{f^{\prime}}{f}\varphi_{2k}+2k\,\psi_{2k-1}\right)
\qquad\text{and}\label{du1du2}\\
g_{2}^{\prime}=\sum_{k=0}^{\infty}\frac{\lambda^{k}}{(2k+1)!}\left(
\frac{f^{\prime}}{f}\varphi_{2k+1}+\left( 2k+1\right) \psi_{2k}\right) .
\end{multline}
The series of the second derivatives converge uniformly on any segment
$\lbrack a_{1},b_{1}]\subset(a,b)$.
\end{theorem}
Representations \eqref{u1u2} and \eqref{du1du2} are known as the SPPS
(spectral parameter power series) method, an efficient and highly competitive
technique for solving a variety of spectral and scattering problems related to
Sturm-Liouville equations; see \cite{CKOR}, \cite{ErbeMertPeterson2012},
\cite{KKRosu}, \cite{KiraRosu2010}, \cite{KrPorter2010}, \cite{KT Obzor} and
the references therein.
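The series \eqref{u1u2} are straightforward to implement. The sketch below
(ours; the grid size and the trapezoid rule are our choices) runs the
recursion \eqref{X1}--\eqref{X3} on a uniform grid for the simplest potential
$q\equiv0$, with the particular solution $f\equiv1$ of \eqref{SLhom}; for
$\lambda=1$ the sums must approximate $\cosh x$ and $\sinh x$, the solutions
of $g''=g$.

```python
# SPPS for q = 0 on [0, 1]: f = 1, the quadratures give X^{(n)} = X~^{(n)} = x^n,
# and (u1u2) with lambda = 1 must sum to cosh x and sinh x.
import math

N = 2000
h = 1.0 / N
f = [1.0] * (N + 1)            # non-vanishing solution of f'' - q f = 0 for q = 0

def cumint(vals):
    """Cumulative trapezoid integral from 0 to each grid point."""
    out = [0.0]
    for k in range(1, len(vals)):
        out.append(out[-1] + (vals[k - 1] + vals[k]) * h / 2)
    return out

def build_phi(n_max):
    X, Xt = [1.0] * (N + 1), [1.0] * (N + 1)
    phi = [list(f)]            # phi_0 = f * X~^{(0)} = f
    for n in range(1, n_max + 1):
        # (X2)-(X3): the integrand carries the factor (f^2)^{(+-1)}
        X = [n * v for v in cumint([X[k] * f[k] ** (2 * (-1) ** n) for k in range(N + 1)])]
        Xt = [n * v for v in cumint([Xt[k] * f[k] ** (2 * (-1) ** (n - 1)) for k in range(N + 1)])]
        src = X if n % 2 else Xt   # (phik): odd k uses X^{(k)}, even k uses X~^{(k)}
        phi.append([f[k] * src[k] for k in range(N + 1)])
    return phi

phi = build_phi(21)
lam = 1.0
g1 = sum(lam ** k * phi[2 * k][-1] / math.factorial(2 * k) for k in range(11))
g2 = sum(lam ** k * phi[2 * k + 1][-1] / math.factorial(2 * k + 1) for k in range(11))
```

For a nonconstant $q$ one would first compute a non-vanishing $f$ numerically;
the rest of the recursion is unchanged.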
\subsection{The Goursat problem for the hyperbolic Cauchy-Riemann system and a
Runge-type theorem}
For the construction of the kernels of the transmutation operators we are
interested in solving the corresponding Goursat problems. Our results
concerning the Goursat problems for equation (\ref{Vekua main x}), which arise
as a natural step in that construction, are based on the following simplest case.
Consider the hyperbolic Cauchy-Riemann system%
\begin{equation}
\partial_{\bar{z}}w=0\quad\Longleftrightarrow\quad%
\begin{cases}
u_{x}=v_{t}\\
u_{t}=v_{x}%
\end{cases}
\label{hCR}%
\end{equation}
studied in a considerable number of publications (see, e.g., \cite{Lavrentyev
and Shabat}, \cite{MotterRosa}, \cite{Wen}). It is easy to see that its
general solution can be written as follows%
\begin{equation}
\label{gensolhCR}%
\begin{split}
w(x,t) & =\frac{1}{2}\left( \Phi\left( \frac{x+t}{2}\right) +\Psi\left(
\frac{x-t}{2}\right) +\mathbf{j}\left( \Phi\left( \frac{x+t}{2}\right)
-\Psi\left( \frac{x-t}{2}\right) \right) \right) \\
& =P^{+}\Phi\left( \frac{x+t}{2}\right) +P^{-}\Psi\left( \frac{x-t}%
{2}\right)
\end{split}
\end{equation}
where $\Phi$ and $\Psi$ are arbitrary continuously differentiable scalar functions.
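The representation \eqref{gensolhCR} is easy to test numerically (our sketch):
with $P^{\pm}=(1\pm\mathbf{j})/2$ the components are $u=(\Phi+\Psi)/2$ and
$v=(\Phi-\Psi)/2$ evaluated in the characteristic variables, and central
differences confirm $u_{x}=v_{t}$, $u_{t}=v_{x}$ for sample waves $\Phi$, $\Psi$.

```python
# Finite-difference check of (gensolhCR): u = (Phi + Psi)/2, v = (Phi - Psi)/2
# in the characteristic variables satisfy u_x = v_t and u_t = v_x.
import math

Phi = lambda s: math.sin(3 * s)   # sample C^1 data on the characteristics
Psi = lambda s: math.exp(s)

def u(x, t): return 0.5 * (Phi((x + t) / 2) + Psi((x - t) / 2))
def v(x, t): return 0.5 * (Phi((x + t) / 2) - Psi((x - t) / 2))

def d(fun, x, t, wrt, step=1e-5):
    # central difference in x or in t
    if wrt == 'x':
        return (fun(x + step, t) - fun(x - step, t)) / (2 * step)
    return (fun(x, t + step) - fun(x, t - step)) / (2 * step)
```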
Considering the values of $w$ on the characteristics $x=t$ and $x=-t$ we see
that
\begin{equation}
w(x,x)=P^{+}\Phi\left( x\right) +P^{-}\Psi\left( 0\right) \quad
\text{and\quad}w(x,-x)=P^{+}\Phi\left( 0\right) +P^{-}\Psi\left( x\right)
. \label{condG}%
\end{equation}
Thus, we arrive at the following well-posed statement of the Goursat problem.
\emph{Find a solution }$w$\emph{ of \eqref{hCR} in the closed square }$\overline{R}$\emph{ with
vertices }$(\pm2b,0)$\emph{ and }$(0,\pm2b)$\emph{, }$b>0$\emph{, satisfying the
conditions \eqref{condG}, where }$\Phi$\emph{ and }$\Psi$\emph{ are arbitrary scalar
functions such that }$\Phi,\Psi\in C^{1}[-b,b]$\emph{.}
\begin{proposition}
\label{Prop Goursat hCR} The unique solution of the Goursat problem
\eqref{hCR}, \eqref{condG} has the form \eqref{gensolhCR}.
\end{proposition}
\begin{proof}
The fact that $w$ is a solution of (\ref{hCR}) follows from (\ref{gensolhCR}).
The conditions (\ref{condG}) are obviously satisfied. To prove the uniqueness
one considers the Goursat problem with the homogeneous conditions and finds
out that $\Phi$ and $\Psi$ from the representation (\ref{gensolhCR}) of the
general solution of (\ref{hCR}) satisfy the equations $P^{+}\Phi(x)+P^{-}%
\Psi(0)\equiv0$ and $P^{+}\Phi(0)+P^{-}\Psi(x)\equiv0$, $x\in\lbrack-b,b]$
from which both $\Phi$ and $\Psi$ are necessarily trivial.
\end{proof}
This proposition is not entirely new. As a kind of analogue of the Cauchy
integral formula for analytic functions it was discussed, e.g., in
\cite{MotterRosa} in the case $w\colon\mathbb{D}\rightarrow\mathbb{D}$. For our
purposes it is more convenient to consider this simple representation of any
hyperbolic analytic function via its values on the characteristics from the
point of view of the Goursat problem.
The following result is a direct corollary of Proposition
\ref{Prop Goursat hCR}.
\begin{proposition}
\label{Prop Goursat polynom} Let $\Phi$ and $\Psi$ be scalar functions defined
on $[-b,b]$ and admitting there the uniformly convergent power series
expansions%
\[
\Phi\left( x\right) =\sum\limits_{n=0}^{\infty}\alpha_{n}x^{n}%
\quad\text{and\quad}\Psi\left( x\right) =\sum\limits_{n=0}^{\infty}\beta
_{n}x^{n}.
\]
Then the unique solution of the Goursat problem \eqref{hCR}, \eqref{condG} has
the form
\begin{equation}
w=\sum\limits_{n=0}^{\infty}a_{n}z^{n}, \label{power series}%
\end{equation}
where
\begin{equation}
\label{power series coeffs}a_{n}=\frac{1}{2^{n}}\left( P^{+}\alpha_{n}%
+P^{-}\beta_{n}\right) .
\end{equation}
\end{proposition}
\begin{proof}
Due to Proposition \ref{Prop Goursat hCR} we have
\begin{align*}
w(x,t) & =P^{+}\sum\limits_{n=0}^{\infty}\alpha_{n}\left( \frac{x+t}%
{2}\right) ^{n}+P^{-}\sum\limits_{n=0}^{\infty}\beta_{n}\left( \frac{x-t}%
{2}\right) ^{n}\\
& =\sum\limits_{n=0}^{\infty}\left( \frac{\alpha_{n}}{2^{n}}P^{+}\left(
x+t\right) ^{n}+\frac{\beta_{n}}{2^{n}}P^{-}\left( x-t\right) ^{n}\right)
.
\end{align*}
Notice that $P^{\pm}z=P^{\pm}\left( x\pm t\right) $ and hence $P^{\pm}%
z^{n}=P^{\pm}\left( x\pm t\right) ^{n}$. Thus,
\[
w=\sum\limits_{n=0}^{\infty}\left( \frac{\alpha_{n}}{2^{n}}P^{+}z^{n}
+\frac{\beta_{n}}{2^{n}}P^{-}z^{n}\right) =\sum\limits_{n=0}^{\infty}\frac
{1}{2^{n}}\left( P^{+}\alpha_{n}+P^{-}\beta_{n}\right) z^{n}.
\]
\end{proof}
\begin{remark}
Under the conditions of Proposition \ref{Prop Goursat polynom}, the unique
solution $w$ of the Goursat problem \eqref{hCR}, \eqref{condG} can be
represented as a Taylor series \eqref{power series}, uniformly convergent in
$\overline{R}$, where
\begin{equation}
a_{n}=\frac{w^{[n]}(0)}{n!}. \label{Taylor coefs}%
\end{equation}
Indeed, let us prove this formula for the coefficients. Due to \eqref{nth Der}
we have that $w^{[n]}=\mathbf{j}^{n}{\partial_{t}^{n}w}$ and due to
\eqref{gensolhCR},
\[
\mathbf{j}^{n}{\partial_{t}^{n}w=}\frac{1}{2^{n}}\left( P^{+}\mathbf{j}%
^{n}\Phi^{(n)}\left( \frac{x+t}{2}\right) +P^{-}\mathbf{j}^{n}\left(
-1\right) ^{n}\Psi^{(n)}\left( \frac{x-t}{2}\right) \right) .
\]
Now we notice that $P^{+}\mathbf{j}^{n}=P^{+}$ and $P^{-}\mathbf{j}^{n}\left(
-1\right) ^{n}=P^{-}$ for any $n=0,1,\ldots$. Thus, $w^{[n]}(0)=\frac
{n!}{2^{n}}\left( P^{+}\alpha_{n}+P^{-}\beta_{n}\right) =n!a_{n}$.
\end{remark}
\begin{remark}
\label{Rem Wave polynomials}It is possible to give another representation for
the solution of the Goursat problem, namely, a representation as a series in
wave polynomials, cf. \cite[Proposition 1]{KKTT}. Recall that the following
polynomials
\[
p_{0}(x,t)=1,\quad p_{2m-1}(x,t)=\mathcal{R}\bigl((x+\mathbf{j}t)^{m}%
\bigr),\quad p_{2m}(x,t)=\mathcal{I}\bigl((x+\mathbf{j}t)^{m}\bigr),\ m\geq1,
\]
are called wave polynomials. They may also be written as
\begin{equation}
p_{0}(x,t)=1,\quad p_{2m-1}(x,t)=\sum_{\mathrm{even}\text{ }k=0}^{m}\binom
{m}{k}x^{m-k}t^{k},\quad p_{2m}(x,t)=\sum_{\mathrm{odd}\text{ }k=1}^{m}%
\binom{m}{k}x^{m-k}t^{k}, \label{WavePolynomials}%
\end{equation}
cf.\ the definition of the generalized wave polynomials \eqref{um} and
\eqref{vm}. Since $z^{n}=p_{2n-1}(x,t)+\mathbf{j}p_{2n}(x,t)$ for $n\geq1$, we
obtain from \eqref{power series coeffs} that
\begin{multline*}
w(x,t)=(P^{+}\alpha_{0}+P^{-}\beta_{0})p_{0}(x,t)\\
+\sum_{n=1}^{\infty}
\frac{P^{+}\alpha_{n}+P^{-}\beta_{n}}{2^{n}}p_{2n-1}(x,t)+\sum_{n=1}^{\infty
}\frac{P^{+}\alpha_{n}-P^{-}\beta_{n}}{2^{n}}p_{2n}(x,t).
\end{multline*}
\end{remark}
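The identity $z^{n}=p_{2n-1}(x,t)+\mathbf{j}p_{2n}(x,t)$ used in the remark
can be verified directly (our sketch), comparing the binomial sums
\eqref{WavePolynomials} with powers of $z=x+\mathbf{j}t$ computed by repeated
hyperbolic multiplication.

```python
# Check z^n = p_{2n-1} + j p_{2n}: powers of z = x + jt (j**2 = +1) versus
# the binomial sums (WavePolynomials).
from math import comb

def hyp_pow(x, t, n):
    a, b = 1.0, 0.0 if n else 0.0
    a, b = 1.0, 0.0
    for _ in range(n):            # (a + bj)(x + tj) with j**2 = +1
        a, b = a * x + b * t, a * t + b * x
    return a, b

def p(m, x, t):
    # wave polynomial p_m: p_{2n-1} sums over even k, p_{2n} over odd k
    if m == 0:
        return 1.0
    n = (m + 1) // 2
    ks = range(0, n + 1, 2) if m % 2 else range(1, n + 1, 2)
    return sum(comb(n, k) * x ** (n - k) * t ** k for k in ks)
```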
Obviously, an arbitrary continuously differentiable solution of (\ref{hCR})
cannot be represented globally as a uniformly convergent power series of the
form (\ref{power series}). Nevertheless the following analogue of the Runge
approximation theorem from classical complex analysis is a corollary of the
preceding proposition.
\begin{proposition}
\label{Prop RungeHCR} Let $w\in C^{1}(\overline{R})$ be a solution of
\eqref{hCR} in $R$. Then there exists a sequence of polynomials $P_{N}%
=\sum\limits_{n=0}^{N}a_{n}z^{n}$ uniformly convergent to $w$ in $\overline
{R}$.
\end{proposition}
\begin{proof}
We need to prove that for any $\varepsilon>0$ there exist a number $N$ and
coefficients $a_{n}$, $n=0,1,\ldots,N$, such that $\left\vert w(x,t)-P_{N}(x,t)\right\vert
<\varepsilon$ for any point $(x,t)\in\overline{R}$. Let the scalar functions
$\Phi$ and $\Psi$ defined on $[-b,b]$ be such that (\ref{condG}) hold. As was
shown above, such scalar functions always exist, and under the condition of the
proposition they obviously belong to $C^{1}[-b,b]$. Choose $\varepsilon_{1},
\varepsilon_{2}>0$ such that $\varepsilon=(\varepsilon_{1}+\varepsilon_{2})/2$.
According to the Weierstrass approximation theorem there exist a number $N$ and
polynomials $p_{1}$ and $p_{2}$ of degree not greater than $N$ such that
$\left\vert \Phi\left( x\right) -p_{1}(x)\right\vert _{\mathbb{C}}
<\varepsilon_{1}$ and $\left\vert \Psi\left( x\right) -p_{2}(x)\right\vert
_{\mathbb{C}}<\varepsilon_{2}$ for $-b\leq x\leq b$. Due to Proposition
\ref{Prop Goursat polynom} the unique
solution $\widetilde{w}$ of the Goursat problem for equation (\ref{hCR}) with
the boundary conditions
\[
\widetilde{w}(x,x)=P^{+}p_{1}\left( x\right) +P^{-}p_{2}\left( 0\right)
,\qquad\widetilde{w}(x,-x)=P^{+}p_{1}\left( 0\right) +P^{-}p_{2}\left(
x\right)
\]
has the form $\widetilde{w}=P_{N}$ where $P_{N}=\sum_{n=0}^{N}\frac{1}{2^{n}%
}\left( P^{+}\alpha_{n}+P^{-}\beta_{n}\right) z^{n}$ with $\alpha_{n}$ and
$\beta_{n}$ being the coefficients of the polynomials $p_{1}$ and $p_{2}$ respectively.
Consider
\[%
\begin{split}
\left\vert w(x,t)-\widetilde{w}(x,t)\right\vert & =\left\vert w(x,t)-P_{N}
(x,t)\right\vert \\
& =\left\vert P^{+}\Phi\left( \frac{x+t}{2}\right) +P^{-}\Psi\left(
\frac{x-t}{2}\right) -P^{+}P_{N}^{+}(x,t)-P^{-}P_{N}^{-}(x,t)\right\vert \\
& =\frac{1}{2}\left( \left\vert \Phi\left( \frac{x+t}{2}\right) -P_{N}
^{+}(x,t)\right\vert _{\mathbb{C}}+\left\vert \Psi\left( \frac{x-t}
{2}\right) -P_{N}^{-}(x,t)\right\vert _{\mathbb{C}}\right) ,
\end{split}
\]
where we have used \eqref{W+-} and \eqref{bicom_norm}. Notice that $P_{N}%
^{+}(x,t)=\sum_{n=0}^{N}\alpha_{n}\left( \frac{x+t}{2}\right) ^{n}%
=p_{1}\left( \frac{x+t}{2}\right) $ and $P_{N}^{-}(x,t)=\sum_{n=0}^{N}%
\beta_{n}\left( \frac{x-t}{2}\right) ^{n}=p_{2}\left( \frac{x-t}{2}\right)
$. Thus, $\left\vert w(x,t)-P_{N}(x,t)\right\vert <\frac{1}{2}(\varepsilon
_{1}+\varepsilon_{2})=\varepsilon$.
\end{proof}
\section{Transmutation operators}\label{SectTO}
We give a definition of a transmutation operator from \cite{KT Obzor} which is
a modification of the definition given by Levitan \cite{LevitanInverse}
adapted to the purposes of the present work. Let $E$ be a linear topological
space and $E_{1}$ its linear subspace (not necessarily closed). Let $A$ and
$B$ be linear operators: $E_{1}\rightarrow E$.
\begin{definition}
\label{DefTransmut} A linear invertible operator $T$ defined on the whole $E$
such that $E_{1}$ is invariant under the action of $T$ is called a
transmutation operator for the pair of operators $A$ and $B$ if it fulfills
the following two conditions.
\begin{enumerate}
\item Both the operator $T$ and its inverse $T^{-1}$ are continuous in $E$;
\item The following operator equality is valid
\begin{equation}
AT=TB \label{ATTB}%
\end{equation}
or, equivalently,
\[
A=TBT^{-1}.
\]
\end{enumerate}
\end{definition}
Our main interest concerns the situation when $A=-\frac{d^{2}}{dx^{2}}+q(x)$,
$B=-\frac{d^{2}}{dx^{2}}$, \ and $q$ is a continuous complex-valued function.
Hence for our purposes it will be sufficient to consider the functional space
$E=C[a,b]$ with the topology of uniform convergence and its subspace $E_{1}$
consisting of functions from $C^{2}\left[ a,b\right] $. One of the
possibilities to introduce a transmutation operator on $E$ was considered by
Lions \cite{Lions57} and later on in other references (see, e.g.,
\cite{Marchenko}), and consists in constructing a Volterra integral operator
corresponding to the midpoint of the segment of interest. Since we begin with
this transmutation operator, it is convenient to consider a symmetric segment
$[-b,b]$ and hence the functional space $E=C[-b,b]$. It is worth mentioning
that other well known ways to construct the transmutation operators (see,
e.g., \cite{LevitanInverse}, \cite{Trimeche}) imply imposing initial
conditions on the functions and consequently lead to transmutation operators
satisfying (\ref{ATTB}) only on subclasses of $E_{1}$.
Thus, consider the space $E=C[-b,b]$. In \cite{CKT} and \cite{KrT2012} a
parametrized family of transmutation operators for the operators $A$ and $B$
defined above was studied. Operators of this family can be realized in the form of the
Volterra integral operator
\begin{equation}
\mathbf{T}_{h} u(x)=u(x)+\int_{-x}^{x}\mathbf{K}(x,t;h)u(t)dt \label{Tmain}%
\end{equation}
where $\mathbf{K}(x,t;h)=\mathbf{H}\big(\frac{x+t}{2},\frac{x-t}{2};h\big)$,
$h$ is a complex parameter, $|t|\le|x|\le b$ and $\mathbf{H}$ is the unique
solution of the Goursat problem
\begin{equation}
\frac{\partial^{2}\mathbf{H}(u,v;h)}{\partial u\,\partial v}=q(u+v)\mathbf{H}%
(u,v;h), \label{GoursatTh1}%
\end{equation}%
\begin{equation}
\mathbf{H}(u,0;h)=\frac{h}{2}+\frac{1}{2}\int_{0}^{u}q(s)\,ds,\qquad
\mathbf{H}(0,v;h)=\frac{h}{2}. \label{GoursatTh2}%
\end{equation}
If the potential $q$ is continuously differentiable, the kernel $\mathbf{K}$
itself is a solution of the Goursat problem
\begin{equation}
\label{GoursatKh1}\left( \frac{\partial^{2}}{\partial x^{2}}-q(x)\right)
\mathbf{K}(x,t;h)=\frac{\partial^{2}}{\partial t^{2}}\mathbf{K}(x,t;h),
\end{equation}
\begin{equation}
\label{GoursatKh2}\mathbf{K}(x,x;h)=\frac{h}{2}+\frac{1}{2}\int_{0}%
^{x}q(s)\,ds,\qquad\mathbf{K}(x,-x;h)=\frac{h}{2}.
\end{equation}
If the potential $q$ is $n$ times continuously differentiable, the kernel
$\mathbf{K}(x,t;h)$ is $n+1$ times continuously differentiable with respect to
both independent variables.
\begin{remark}
In the case $h=0$ the operator $\mathbf{T}_{h}$ coincides with the
transmutation operator studied in \cite[Chap. 1, Sect. 2]{Marchenko}. In
\cite{LevitanInverse}, \cite{Lions57}, \cite{Trimeche} it was established that
in the case $q\in C^{1}[-b,b]$ the Volterra-type integral operator
\eqref{Tmain} is a transmutation in the sense of Definition \ref{DefTransmut}
on the space $C^{2}[-b,b]$ if and only if the integral kernel $\mathbf{K}%
(x,t)$ satisfies the Goursat problem \eqref{GoursatKh1}, \eqref{GoursatKh2}.
\end{remark}
The following proposition shows that it is sufficient to know the
transmutation operator $\mathbf{T}_{h_{1}}$ or its integral kernel
$\mathbf{K}_{h_{1}}$ for some particular parameter $h_{1}$ to be able to
construct transmutation operators $\mathbf{T}_{h}$ or their integral kernels
$\mathbf{K}_{h}$ for arbitrary values of the parameter $h$.
\begin{proposition}
[\cite{CKT}, \cite{KT Obzor}]\label{PropChangeOfH} The operators
$\mathbf{T}_{h_{1}}$ and $\mathbf{T}_{h_{2}}$ are related by the expression
\[
\mathbf{T}_{h_{2}}u=\mathbf{T}_{h_{1}}\bigg[u(x)+\frac{h_{2}-h_{1}}2\int
_{-x}^{x} u(t)\,dt\bigg]
\]
valid for any $u\in C[-b,b]$.
The integral kernels $\mathbf{K}(x,t;h_{1})$ and $\mathbf{K}(x,t;h_{2})$ are
related by the expression
\[
\mathbf{K}(x,t;h_{2})=\frac{h_{2}-h_{1}}{2}+\mathbf{K}(x,t;h_{1})+\frac
{h_{2}-h_{1}}{2}\int_{t}^{x}\big( \mathbf{K}(x,s;h_{1})-\mathbf{K}%
(x,-s;h_{1})\big) \,ds.
\]
\end{proposition}
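As a quick illustration (a consistency check for the trivial potential $q\equiv0$): in this case $\mathbf{T}_{0}$ is the identity operator, so $\mathbf{K}(x,t;0)=0$, and the second relation of Proposition \ref{PropChangeOfH} with $h_{1}=0$, $h_{2}=h$ gives
\[
\mathbf{K}(x,t;h)=\frac{h}{2}+0+\frac{h}{2}\int_{t}^{x}\left( 0-0\right)\,ds=\frac{h}{2},
\]
which indeed satisfies the Goursat conditions \eqref{GoursatKh2} for $q\equiv0$.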
The following theorem states that the operators $\mathbf{T}_{h}$ are indeed
transmutations in the sense of Definition \ref{DefTransmut}.
\begin{theorem}
\label{Th Transmutation} Let $q\in C[-b,b]$. Then the operator $\mathbf{T}%
_{h}$ given by \eqref{Tmain} satisfies the equality
\begin{equation}
\left( -\frac{d^{2}}{dx^{2}}+q(x)\right) \mathbf{T}_{h}[u]=\mathbf{T}%
_{h}\left[ -\frac{d^{2}}{dx^{2}}(u)\right] \label{ThTransm}%
\end{equation}
for any $u\in C^{2}[-b,b]$.
\end{theorem}
\begin{remark}
This theorem was proved under additional assumptions on $q$ in \cite{CKT} and
\cite{KrT2012}. In \cite{KT Obzor} it was shown that \eqref{ThTransm} holds
for any $h$ whenever it holds for some particular value $h_{1}$.
\end{remark}
\begin{proof}
It follows from Proposition \ref{PropChangeOfH} that it is sufficient to prove
\eqref{ThTransm} for one particular $h$, see \cite[Proof of Theorem 5.6]{KT
Obzor} for details. We take $h=0$ and the operator $\mathbf{T}:=\mathbf{T}%
_{0}$. Let $\mathbf{K}$ denote the integral kernel of $\mathbf{T}$. There
exists a sequence of potentials $\{q_{n}\}_{n\in\mathbb{N}}\subset
C^{1}[-b,b]$ such that $q_{n}\to q$ uniformly on $[-b,b]$ as $n\to\infty$. Let
$\mathbf{K}_{q_{n}}$ and $\mathbf{T}_{q_{n}}$ be the integral kernels and the
transmutation operators corresponding to the potentials $q_{n}$. Then
$\mathbf{K}_{q_{n}}\to\mathbf{K}$ uniformly and $\mathbf{T}_{q_{n}}\to\mathbf{T}$
in the operator norm, see, e.g., \cite[Chap. 1, Sect. 2]{Marchenko}. Take a
function $u\in C^{2}[-b,b]$. For the operators $\mathbf{T}_{q_{n}}$ the
following equality holds \cite[Theorem 6]{KrT2012}
\[
\left( -\frac{d^{2}}{dx^{2}}+q_{n}(x)\right) \mathbf{T}_{q_{n}%
}[u]=\mathbf{T}_{q_{n}}\left[ -\frac{d^{2}}{dx^{2}}(u)\right] ,
\]
which with the use of \eqref{Tmain} may be rewritten as
\[
\frac{d^{2}}{dx^{2}}\biggl(\int_{-x}^{x} \mathbf{K}_{q_{n}}%
(x,t)u(t)\,dt\biggr) = -u^{\prime\prime}(x) + q_{n}(x) \mathbf{T}_{q_{n}%
}[u](x) + \mathbf{T}_{q_{n}}[u^{\prime\prime}](x).
\]
Let $y_{n}(x) := \int_{-x}^{x} \mathbf{K}_{q_{n}}(x,t)u(t)\,dt$ and $z_{n} :=
-u^{\prime\prime}+q_{n} \mathbf{T}_{q_{n}}[u] + \mathbf{T}_{q_{n}}%
[u^{\prime\prime}]$. Then $y_{n}\to y := \int_{-x}^{x} \mathbf{K}%
(x,t)u(t)\,dt$ and $z_{n} \to z:= -u^{\prime\prime}+ q \mathbf{T}[u] +
\mathbf{T}[u^{\prime\prime}]$ as $n\to\infty$. Moreover, the functions $y_{n}$
satisfy the initial conditions $y_{n}(0)=y_{n}^{\prime}(0)=0$. Now
\eqref{ThTransm} follows from the fact that the operator $\partial_{x}^{2}$
with the domain $\{ y\in C^{2}[-b,b] : y(0)=y^{\prime}(0)=0\}$ is closed, see,
e.g., \cite{Kato}.
\end{proof}
The following theorem summarizes the mapping properties of the operators
$\mathbf{T}_{h}$, establishing that there exists a value of the parameter $h$
such that $\mathbf{T}_{h}$ maps $x^{k}$ to $\varphi_{k}$ defined by
(\ref{phik}).
\begin{theorem}
[\cite{CKT}, \cite{KrT2012}]\label{Th Transmute}Let $q$ be a continuous
complex-valued function of an independent real variable $x\in\lbrack-b,b]$ for
which there exists a particular solution $f$ of \eqref{SLhom} such that $f\in
C^{2}[-b,b]$, $f\neq0$ on $[-b,b]$, normalized so that $f(0)=1$. Denote
$h:=f^{\prime}(0)\in\mathbb{C}$. Suppose $\mathbf{T}_{h}$ is the operator
defined by \eqref{Tmain} and $\varphi_{k}$, $k\in\mathbb{N}_{0}$ are functions
defined by \eqref{phik}. Then
\begin{equation}
\label{mapping powers 1}\mathbf{T}_{h} x^{k} = \varphi_{k}(x) \qquad\text{for
any}\ k\in\mathbb{N}_{0}.
\end{equation}
Moreover, $\mathbf{T}_{h}$ maps a solution $v$ of an equation $v^{\prime
\prime}+\omega^{2}v=0$, where $\omega$ is a complex number, into a solution
$u$ of the equation $u^{\prime\prime}-q(x)u+\omega^{2}u=0$ with the following
correspondence of the initial values
\begin{equation}\label{mapping IC}
u(0)=v(0),\qquad u^{\prime}(0)=v^{\prime}(0)+hv(0).
\end{equation}
\end{theorem}
\begin{remark}
The mapping property \eqref{mapping powers 1} of the transmutation operator
allows one to see that the SPPS representations \eqref{u1u2} from Theorem
\ref{ThGenSolSturmLiouville} are nothing but the images of Taylor expansions
of the functions $\cosh\sqrt{\lambda}x$ and $\frac{1}{\sqrt{\lambda}}
\sinh\sqrt{\lambda}x$. Moreover, equality \eqref{mapping powers 1} is behind a
new method for solving Sturm-Liouville problems proposed in \cite{KT MMET
2012}.
\end{remark}
In what follows we assume that $f\in C^{2}[-b,b]$, $f\neq0$ on $[-b,b]$,
$f(0)=1$ and denote $h:=f^{\prime}(0)\in\mathbb{C}$. Any such function is
associated with an operator $\mathbf{T}_{h}$. For convenience, from now on we
will write $T_{f}$ instead of $\mathbf{T}_{h}$ and the integral kernel of
$T_{f}$ will be denoted by $\mathbf{K}_{f}$. Together with the function $f$
let us consider the function $1/f$. Notice that if $f$ is a solution of
(\ref{SLhom}) with the potential $q_{f}=f^{\prime\prime}/f$, then the function
$1/f$ is a solution of the equation
\[
\left( -\frac{d^{2}}{dx^{2}}+q_{1/f}(x)\right) u=0
\]
with the Darboux associated potential
\[
q_{1/f}=-q_{f}+2\Bigl(\frac{f^{\prime}}{f}\Bigr)^{2}.
\]
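Indeed, this can be verified by a direct computation:
\[
\Bigl(\frac{1}{f}\Bigr)^{\prime\prime}=\Bigl(-\frac{f^{\prime}}{f^{2}}\Bigr)^{\prime}=-\frac{f^{\prime\prime}}{f^{2}}+\frac{2(f^{\prime})^{2}}{f^{3}}=\frac{1}{f}\biggl(-q_{f}+2\Bigl(\frac{f^{\prime}}{f}\Bigr)^{2}\biggr)=\frac{q_{1/f}}{f}.
\]
Note also that $(1/f)(0)=1$ and $(1/f)^{\prime}(0)=-f^{\prime}(0)=-h$.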
Consider the transmutation operator $T_{1/f}$ which satisfies the equality
\[
\left( -\frac{d^{2}}{dx^{2}}+q_{1/f}(x)\right) {T}_{1/f}[u]=T_{1/f}\left[ -\frac{d^{2}}{dx^{2}}(u)\right]
\]
for any $u\in C^{2}[-b,b]$ and transforms $x^{k}$ into the functions $\psi
_{k}(x)$, $k\in\mathbb{N}_{0}$ defined by (\ref{psik}), i.e.
\begin{equation}
{T}_{1/f}x^{k}=\psi_{k}(x)\qquad\text{for any}\ k\in\mathbb{N}_{0}.
\label{mapping powers 2}%
\end{equation}
It follows from \eqref{mapping powers 1}, \eqref{mapping powers 2} and
definitions \eqref{um}, \eqref{vm} and \eqref{WavePolynomials} that
generalized wave polynomials are images of wave polynomials under the action
of operators $T_{f}$ and $T_{1/f}$, i.e.
\begin{equation}
\label{Mapping Wave Polynomial}T_{f} p_{n} = u_{n} \quad\text{and}\quad
T_{1/f} p_{n} = v_{n}\quad\text{for any }n\ge0.
\end{equation}
In \cite{KrT2012} explicit formulas were obtained for the kernel
$\mathbf{K}_{1/f}(x,t)$ in terms of $\mathbf{K}_{f}(x,t)$. In order to obtain
a simpler expression for the integral kernel $\mathbf{K}_{1/f}(x,t)$ we
assumed that the original integral kernel $\mathbf{K}_{f}(x,t)$ is known in a
larger domain than required by definition \eqref{Tmain}. Namely, suppose
that the function $\mathbf{K}_{f}(x,t)$ is known and is continuously
differentiable in the domain $\bar{\Pi}:\ -b\leq x\leq b,\ -b\leq t\leq b$. It
is worth mentioning that such a continuation of the integral kernel to the
domain $\bar{\Pi}$ is always possible due to \eqref{GoursatTh1},
\eqref{GoursatTh2} and the general theory of Goursat problems. We refer the
reader to \cite{KrT2012} for further details. In Corollary \ref{Cor K Vekua},
based on pseudoanalytic function theory, we obtain another expression for the
integral kernel $\mathbf{K}_{1/f}(x,t)$, suitable in the case when the integral
kernel $\mathbf{K}_{f}(x,t)$ is known only in the natural domain $|t|\le|x|\le
b$.
\begin{proposition}
[\cite{KrT2012}]\label{PropTT_D} Under the conditions of Theorem
\ref{Th Transmute} the transmutation operators $T_{f}$ and $T_{1/f}$ are
related by the expressions
\[
T_{1/f}[u](x)=\frac{1}{f(x)}\bigg(\int_{0}^{x}f(\eta)T_{f}[u^{\prime}%
](\eta)\,d\eta+u(0)\bigg)
\]
and
\[
T_{f}[u](x)=f(x)\bigg(\int_{0}^{x}\frac{1}{f(\eta)}T_{1/f}[u^{\prime}%
](\eta)\,d\eta+u(0)\bigg),
\]
valid for any $u\in C^{1}[-b,b]$, and their integral kernels $\mathbf{K}%
_{f}(x,t):=\mathbf{K}_{f}(x,t;h)$ and $\mathbf{K}_{1/f}(x,t):=\mathbf{K}%
_{1/f}(x,t;-h)$ are related by the expressions
\begin{equation}
\mathbf{K}_{1/f}(x,t)=-\frac{1}{f(x)}\bigg(\int_{-t}^{x}\partial_{t}%
\mathbf{K}_{f}(s,t)f(s)\,ds+\frac{f^{\prime}(0)}{2}f(-t)\bigg) \label{K2}%
\end{equation}
and
\begin{equation}
\mathbf{K}_{f}(x,t)=-f(x)\bigg(\int_{-t}^{x}\partial_{t}\mathbf{K}%
_{1/f}(s,t)\frac1{f(s)}\,ds-\frac{f^{\prime}(0)}{2f(-t)}\bigg) . \label{K1}%
\end{equation}
\end{proposition}
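As an elementary check, take $u\equiv1$: then $u^{\prime}\equiv0$, and the two relations give
\[
T_{1/f}[1](x)=\frac{1}{f(x)},\qquad T_{f}[1](x)=f(x).
\]
Combined with \eqref{mapping powers 1} and \eqref{mapping powers 2} this shows that $\varphi_{0}=f$ and $\psi_{0}=1/f$.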
The following commutation relations immediately follow from Proposition
\ref{PropTT_D}.
\begin{corollary}
[\cite{KrT2012}]\label{Cor Commutation Relations}The following operator
equalities hold on $C^{1}[-b,b]$:
\begin{align*}
\partial_{x}fT_{1/f} & =fT_{f}\partial_{x}\\
\partial_{x}\frac{1}{f}T_{f} & =\frac{1}{f}T_{1/f}\partial_{x}.
\end{align*}
\end{corollary}
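Indeed, for any $u\in C^{1}[-b,b]$, differentiation of the first relation of Proposition \ref{PropTT_D} gives
\[
\partial_{x}\bigl(f(x)T_{1/f}[u](x)\bigr)=\partial_{x}\biggl(\int_{0}^{x}f(\eta)T_{f}[u^{\prime}](\eta)\,d\eta+u(0)\biggr)=f(x)T_{f}[u^{\prime}](x),
\]
which is the first equality; the second follows in the same way from the second relation.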
\begin{example}
[\cite{KrT2012}]\label{ModelExample} Take $f(x)=x+1$ and some segment
$[-b,b]\subset(-1,1)$. Then $q_{f}=0$ and $q_{1/f} = \frac2{(x+1)^{2}}$. For
the potential $q_{f}$ the transmutation operator $\mathbf{T}_{0}$ is obviously
the identity operator, hence $\mathbf{K}(x,t;0) = 0$ and from Proposition
\ref{PropChangeOfH} we obtain
\[
\mathbf{K}_{f}(x,t) := \mathbf{K}(x,t;1) = 1/2.
\]
Hence we get from \eqref{K2} that
\[
\mathbf{K}_{1/f}(x,t) = \frac{t-1}{2(x+1)}.
\]
It is easy to see that the integral kernels $\mathbf{K}_{f}$ and
$\mathbf{K}_{1/f}$ satisfy the Goursat problems \eqref{GoursatKh1},
\eqref{GoursatKh2} and relation \eqref{K1} holds.
For more examples of transmutation integral kernels we refer the reader to
\cite{KrT2012}.
\end{example}
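The computations of this example are also easy to verify by machine. The following SymPy sketch (an illustrative check of ours, assuming only the formulas displayed in this example) confirms symbolically that the kernel $\mathbf{K}_{1/f}$ above transmutes $-\frac{d^{2}}{dx^{2}}$ into $-\frac{d^{2}}{dx^{2}}+q_{1/f}$ on the test function $u(x)=x^{2}$, and that relation \eqref{K2} reproduces $\mathbf{K}_{1/f}$ from $\mathbf{K}_{f}=1/2$.

```python
# Symbolic verification of the Example (f(x) = x + 1) with SymPy.
# Checks: (i) the kernel K_{1/f}(x,t) = (t-1)/(2(x+1)) satisfies the
# transmutation relation (-d^2/dx^2 + q_{1/f}) T[u] = T[-u''] for u = x^2;
# (ii) relation (K2) reproduces K_{1/f} from K_f = 1/2 and f'(0) = 1.
import sympy as sp

x, t, s = sp.symbols('x t s')
f = x + 1
q = 2 / (x + 1)**2                      # q_{1/f}, the Darboux associated potential
K = (t - 1) / (2 * (x + 1))             # K_{1/f} from the Example

def T(u):                               # T_{1/f}u(x) = u(x) + int_{-x}^{x} K(x,t) u(t) dt
    return u.subs(t, x) + sp.integrate(K * u, (t, -x, x))

u = t**2
lhs = -sp.diff(T(u), x, 2) + q * T(u)   # (-d^2/dx^2 + q_{1/f}) T[u]
rhs = T(-sp.diff(u, t, 2))              # T[-u'']
assert sp.simplify(lhs - rhs) == 0

# Relation (K2): K_{1/f}(x,t) = -(1/f(x)) ( int_{-t}^{x} d_t K_f(s,t) f(s) ds
#                                           + (f'(0)/2) f(-t) ),  with K_f = 1/2
Kf = sp.Rational(1, 2)
K2 = -(sp.integrate(sp.diff(Kf, t) * f.subs(x, s), (s, -t, x))
       + sp.Rational(1, 2) * f.subs(x, -t)) / f      # f'(0)/2 = 1/2
assert sp.simplify(K2 - K) == 0
```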
\section{Goursat-to-Goursat transmutation operators}\label{SectGGTO}
Let $\widetilde{u}$ and $u$ be solutions of the equations $\square
\widetilde{u}=0$ and $\left( \square-q(x)\right) u=0$, respectively, in $\overline
{\mathbf{R}}$, where $\mathbf{R}$ is the square whose diagonal has the endpoints $(-b,-b)$ and
$(b,b)$, and suppose that $u=T_{f}\widetilde{u}$. Consider the
operator $T_{G}$ mapping the Goursat data corresponding to $\widetilde{u}$
into the Goursat data corresponding to $u$,
\[
T_{G}:\quad\binom{\widetilde{u}(x,x)}{\widetilde{u}(x,-x)}\quad\longmapsto
\quad\binom{u(x,x)}{u(x,-x)}.
\]
Denote $\varphi(x):=\widetilde{u}(x,x)$, $\psi(x):=\widetilde{u}(x,-x)$,
$\Phi(x):=u(x,x)$, $\Psi(x):=u(x,-x)$. We have $\varphi(0)=\psi(0)$ and
$\Phi(0)=\Psi(0)$. The solution $\widetilde{u}$ of the wave equation is
represented via its Goursat data as follows: $\widetilde{u}(x,t)=\varphi
(\frac{x+t}{2})+\psi(\frac{x-t}{2})-\varphi(0)$. Application of the operator
$T_{f}$ gives us the equalities%
\[
T_{f}\varphi\Bigl(\frac{x+t}{2}\Bigr)=\varphi\Bigl(\frac{x+t}{2}%
\Bigr)+\int_{-x}^{x}\mathbf{K}_{f}(x,\tau)\varphi\Bigl(\frac{\tau+t}%
{2}\Bigr)d\tau,
\]
\[
T_{f}\psi\Bigl(\frac{x-t}{2}\Bigr)=\psi\Bigl(\frac{x-t}{2}\Bigr)+\int_{-x}%
^{x}\mathbf{K}_{f} (x,\tau)\psi\Bigl(\frac{\tau-t}{2}\Bigr)d\tau
\]
and $T_{f}[\varphi(0)]=\varphi(0)f$. Evaluating these
expressions at $t=x$ and at $t=-x$ and making obvious changes of variables,
we obtain the following relations%
\[
\Phi(x)=\varphi(x)+2\int_{0}^{x}\mathbf{K}_{f}(x,2t-x)\varphi(t)dt+\psi
(0)+2\int_{-x}^{0}\mathbf{K}_{f}(x,2t+x)\psi(t)dt-\varphi(0)f(x)
\]
and%
\[
\Psi(x)=\varphi(0)+2\int_{-x}^{0}\mathbf{K}_{f}(x,2t+x)\varphi(t)dt+\psi
(x)+2\int_{0}^{x}\mathbf{K}_{f}(x,2t-x)\psi(t)dt-\varphi(0)f(x).
\]
Due to the boundedness of the operators $T_{f}$ and $T_{f}^{-1}$, as well as to
the continuous dependence of the solutions of the Goursat problems under
consideration on their respective Goursat data (see, e.g., \cite[Sect.
15]{Vladimirov}), the operators $T_{G}$ and $T_{G}^{-1}$ are bounded
on vector functions from $C^{1}[-b,b]\times C^{1}[-b,b]$ equipped with a
suitable norm, for example, the maximum norm.
It is easy to establish the following mapping properties of $T_{G}$,%
\begin{equation}
T_{G}:\quad2^{n-1}\binom{x^{n}}{x^{n}}\quad\longmapsto\quad\binom
{u_{2n-1}(x,x)}{u_{2n-1}(x,x)} \label{mapping1}%
\end{equation}
and
\begin{equation}
T_{G}:\quad2^{n-1}\binom{x^{n}}{-x^{n}}\quad\longmapsto\quad\binom
{u_{2n}(x,x)}{-u_{2n}(x,x)}. \label{mapping2}%
\end{equation}
Indeed, consider the wave polynomial $p_{2n-1}(x,t)$, $n=1,2,\ldots$, see
\eqref{WavePolynomials}. Its Goursat data have the form $p_{2n-1}%
(x,x)=p_{2n-1}(x,-x)=2^{n-1}x^{n}$. The image of $p_{2n-1}$ under the action
of $T_{f}$ is $u_{2n-1}$, see \eqref{Mapping Wave Polynomial}. Using the
property \eqref{umParity} we obtain (\ref{mapping1}). Analogously,
consideration of $p_{2n}$ leads to (\ref{mapping2}).
\begin{proposition}
\label{Prop Unif Conv u} Let $u$ be a regular solution of the equation
$\left( \square-q(x)\right) u=0$ in $\overline{\mathbf{R}}$ such that its
Goursat data admit the following series expansions%
\begin{equation}
\frac{1}{2}\left( u(x,x)+u(x,-x)\right) =c_{0}u_{0}(x,x)+\sum_{n=1}^{\infty
}c_{n}u_{2n-1}(x,x), \label{+1}%
\end{equation}
and
\begin{equation}
\frac{1}{2}\left( u(x,x)-u(x,-x)\right) =\sum_{n=1}^{\infty}b_{n}%
u_{2n}(x,x), \label{-1}%
\end{equation}
both uniformly convergent on $[-b,b]$. Then for any $(x,t)\in\overline
{\mathbf{R}}$,
\begin{equation}
u(x,t)=c_{0}u_{0}(x,t)+\sum_{n=1}^{\infty}\left( c_{n}u_{2n-1}(x,t)+b_{n}%
u_{2n}(x,t)\right) \label{u}%
\end{equation}
and the series converges uniformly in $\overline{\mathbf{R}}$.
\end{proposition}
\begin{proof}
Notice that if (\ref{u}) is valid then (\ref{+1}) and (\ref{-1}) hold. Indeed,
due to \eqref{umParity}
\[%
\begin{split}
\frac{1}{2}\left( u(x,x)+u(x,-x)\right) & =c_{0}u_{0}(x,x)\\
& +\frac{1}{2} \sum_{n=1}^{\infty}\left( c_{n}\left( u_{2n-1}%
(x,x)+u_{2n-1}(x,x)\right) +b_{n}\left( u_{2n} (x,x)-u_{2n}(x,x)\right)
\right) \\
& =c_{0}u_{0}(x,x)+\sum_{n=1}^{\infty}c_{n}u_{2n-1}(x,x)
\end{split}
\]
and similarly (\ref{-1}) can be verified.
Suppose that (\ref{+1}) and (\ref{-1}) are valid. Then we have
\[
\binom{u(x,x)}{u(x,-x)}=c_{0}\binom{u_{0}(x,x)}{u_{0}(x,x)}+\sum_{n=1}%
^{\infty}\left( c_{n}\binom{u_{2n-1}(x,x)}{u_{2n-1}(x,x)}+b_{n}\binom
{u_{2n}(x,x)}{-u_{2n}(x,x)}\right) .
\]
From (\ref{mapping1}) and (\ref{mapping2}) we obtain
\begin{equation}
\label{phipsi}%
\begin{split}
\binom{\varphi(x)}{\psi(x)} & := T_{G}^{-1}\binom{u(x,x)}{u(x,-x)}%
=c_{0}\binom{1}{1}+\sum_{n=1}^{\infty}2^{n-1}\left( c_{n}\binom{x^{n}}{x^{n}%
}+b_{n}\binom{x^{n}}{-x^{n}}\right) \\
& =\binom{c_{0}+\sum_{n=1}^{\infty}2^{n-1}\left( c_{n}+b_{n}\right) x^{n}%
}{c_{0}+\sum_{n=1}^{\infty}2^{n-1}\left( c_{n}-b_{n}\right) x^{n}}.
\end{split}
\end{equation}
Under the conditions of the proposition and due to the boundedness of
$T_{G}^{-1}$ we have that both series in the last equality are uniformly
convergent on $[-b,b]$. Then applying Proposition 1 from \cite{KKTT} we obtain
that the unique solution $\widetilde{u}$ of the Goursat problem for the wave
equation in $\overline{\mathbf{R}}$ and with the Goursat data $\binom
{\varphi(x)}{\psi(x)}$ has the form
\begin{equation}
\widetilde{u}(x,t)=c_{0}p_{0}(x,t)+\sum_{n=1}^{\infty}\left( c_{n}%
p_{2n-1}(x,t)+b_{n}p_{2n}(x,t)\right) \label{util}%
\end{equation}
and the series converges uniformly in $\overline{\mathbf{R}}$. Since
$u=T_{f}\widetilde{u}$, $T_{f}p_{n}=u_{n}$ and $T_{f}$ is bounded we obtain
the uniform convergence of the series (\ref{u}).
\end{proof}
\section{Transmutations for the Vekua operator}\label{SectTransmutVekua}
By analogy with \cite{CKM AACA} we introduce the following pair of operators
\begin{equation}
\mathbf{V}_{1}=T_{f}\mathcal{R}+\mathbf{j}T_{1/f}\mathcal{I} \label{Tbold1}%
\end{equation}
and
\begin{equation}
\mathbf{V}_{2}=T_{1/f}\mathcal{R}+\mathbf{j}T_{f}\mathcal{I}. \label{Tbold2}%
\end{equation}
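Directly from these definitions, the operators act separately on the scalar part and the $\mathbf{j}$-part; for example, for a $\mathbb{C}$-valued function $u$,
\[
\mathbf{V}_{1}u=T_{f}u,\qquad\mathbf{V}_{1}[\mathbf{j}u]=\mathbf{j}T_{1/f}u,
\]
and similarly for $\mathbf{V}_{2}$ with the roles of $T_{f}$ and $T_{1/f}$ interchanged.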
\begin{proposition}
\label{Prop Transm dz} The following equalities hold for any $\mathbb{B}%
$-valued, continuously differentiable function $w$ defined on $\overline
{\mathbf{R}}$.
\[
\left( \partial_{\overline{z}}-\frac{\partial_{\overline{z}}f}{f}C\right)
\mathbf{V}_{1}w=\mathbf{V}_{2}\left( \partial_{\overline{z}}w\right)
,\qquad\left( \partial_{\overline{z}}+\frac{\partial_{z}f}{f}C\right)
\mathbf{V}_{2}w=\mathbf{V}_{1}\left( \partial_{\overline{z}}w\right) .
\]%
\[
\left( \partial_{z}-\frac{\partial_{z}f}{f}C\right) \mathbf{V}%
_{1}w=\mathbf{V}_{2}\left( \partial_{z}w\right) ,\qquad\left( \partial
_{z}+\frac{\partial_{\overline{z}}f}{f}C\right) \mathbf{V}_{2}w=\mathbf{V}%
_{1}\left( \partial_{z}w\right) .
\]
\end{proposition}
\begin{proof}
The proof consists in a direct calculation with the aid of the commutation
relations from Corollary \ref{Cor Commutation Relations}. For example, for
$w=u+\mathbf{j}v$ we have
\begin{align*}
\left( \partial_{\overline{z}}-\frac{\partial_{\overline{z}}f}{f}C\right)
\mathbf{V}_{1}w & =\frac{1}{2}\left( f\partial_{\overline{z}}\left(
\frac{1}{f}\mathcal{R}\left( \mathbf{V}_{1}w\right) \right) +\mathbf{j}%
\frac{1}{f}\partial_{\overline{z}}\left( f\mathcal{I}\left( \mathbf{V}%
_{1}w\right) \right) \right) \\
& =\frac{1}{2}\left( f\partial_{x}\left( \frac{1}{f}T_{f}u\right)
-\mathbf{j}\partial_{t}T_{f}u+\mathbf{j}\frac{1}{f}\partial_{x}\left(
fT_{1/f}v\right) -\partial_{t}T_{1/f}v\right) \\
& =\frac{1}{2}\left( T_{1/f}\partial_{x}u-\mathbf{j}T_{f}\partial
_{t}u+\mathbf{j}T_{f}\partial_{x}v-T_{1/f}\partial_{t}v\right) =\mathbf{V}%
_{2}\left( \partial_{\overline{z}}w\right) .
\end{align*}
\end{proof}
An immediate corollary of Proposition \ref{Prop Transm dz} is the fact that
the operators $\mathbf{V}_{1}$ and $\mathbf{V}_{2}$ map the hyperbolic
analytic functions satisfying (\ref{hCR}) into the solutions of
(\ref{Vekua main hyper}) and (\ref{Vekua successor}) respectively. Moreover,
they map powers of the variable $z$ into the corresponding formal powers.
\begin{proposition}
\label{PropMapComplexPowers}For any $z\in\overline{\mathbf{R}}$,
$n\in\mathbb{N}_{0}$ and $a\in\mathbb{B}$ the following equalities are valid
\[
\mathbf{V}_{1}[az^{n}]=Z^{(n)}(a,0;z)\quad\text{and}\quad\mathbf{V}_{2}%
[az^{n}]=Z_{1}^{(n)}(a,0;z).
\]
\end{proposition}
\begin{proof}
The proof consists in the observation that for $a=a^{\prime}+\mathbf{j}%
b^{\prime}$ and $z=x+\mathbf{j}t$ one has
\[
az^{n}=\left( a^{\prime}+\mathbf{j}b^{\prime}\right) \sum_{m=0}^{n}
\binom{n}{m}x^{n-m}\mathbf{j}^{m}t^{m}%
\]
and the result follows from the formulas \eqref{Zn}, \eqref{Znodd},
\eqref{Zneven} by application of the mapping properties
\eqref{mapping powers 1}, \eqref{mapping powers 2}.
\end{proof}
Notice that both operators $\mathbf{V}_{1}$ and $\mathbf{V}_{2}$ are bounded
on the space of continuous functions. Indeed, consider
\begin{align*}
\left\vert \mathbf{V}_{1}w\right\vert & =\frac{1}{2}\left( \left\vert
\left( \mathbf{V}_{1}w\right) ^{+}\right\vert _{\mathbb{C}}+\left\vert
\left( \mathbf{V}_{1}w\right) ^{-}\right\vert _{\mathbb{C}}\right) \\
& =\frac{1}{2}\left( \left\vert \mathcal{R}\left( \mathbf{V}_{1}w\right)
+\mathcal{I}\left( \mathbf{V}_{1}w\right) \right\vert _{\mathbb{C}%
}+\left\vert \mathcal{R}\left( \mathbf{V}_{1}w\right) -\mathcal{I}\left(
\mathbf{V}_{1}w\right) \right\vert _{\mathbb{C}}\right) \\
& \leq\left\vert \mathcal{R}\left( \mathbf{V}_{1}w\right) \right\vert
_{\mathbb{C}}+\left\vert \mathcal{I}\left( \mathbf{V}_{1}w\right)
\right\vert _{\mathbb{C}}=\left\vert T_{f}u\right\vert _{\mathbb{C}%
}+\left\vert T_{1/f}v\right\vert _{\mathbb{C}}.
\end{align*}
From the boundedness of the operators $T_{f}$ and $T_{1/f}$ we have
\[
\max\left\vert \mathbf{V}_{1}w\right\vert \leq M\left( \max\left\vert
u\right\vert _{\mathbb{C}}+\max\left\vert v\right\vert _{\mathbb{C}}\right)
\]
where $M=\max\left\{ \left\Vert T_{f}\right\Vert ,\left\Vert T_{1/f}%
\right\Vert \right\} $. \ Since $\left\vert u\right\vert _{\mathbb{C}}%
\leq\frac{1}{2}\left( \left\vert w^{+}\right\vert _{\mathbb{C}}+\left\vert
w^{-}\right\vert _{\mathbb{C}}\right) $ and $\left\vert v\right\vert
_{\mathbb{C}}\leq\frac{1}{2}\left( \left\vert w^{+}\right\vert _{\mathbb{C}%
}+\left\vert w^{-}\right\vert _{\mathbb{C}}\right) $, we obtain
$\max\left\vert \mathbf{V}_{1}w\right\vert \leq2M\max\left\vert w\right\vert
$. The boundedness of $\mathbf{V}_{2}$ is proved analogously.
The inverse operators $\mathbf{V}_{1}^{-1}$ and $\mathbf{V}_{2}^{-1}$ have the
form
\[
\mathbf{V}^{-1}_{1}=T^{-1}_{f}\mathcal{R}+\mathbf{j}T^{-1}_{1/f}%
\mathcal{I}\qquad\text{and}\qquad\mathbf{V}^{-1}_{2}=T^{-1}_{1/f}%
\mathcal{R}+\mathbf{j}T^{-1}_{f}\mathcal{I}%
\]
and clearly are bounded. For the explicit formulae for $T^{-1}_{f}$ and
$T^{-1}_{1/f}$ we refer to \cite[Theorem 10]{KrT2012}.
The following statement establishes a relation between the derivatives of
hyperbolic analytic functions and the generalized derivatives of their images
under the action of the transmutation operator.
\begin{proposition}
\label{PropDerivatives}Let $w$ be a hyperbolic analytic function in
$\mathbf{R}$ and $W=\mathbf{V}_{1}w$ be a corresponding solution of
\eqref{Vekua main hyper}. Then whenever a derivative of $w$ of some order
exists, the derivative of the same order in the sense of Bers of the
function $W$ exists as well, and vice versa, and the following relations are valid
\begin{equation}
\mathbf{V}_{1}\left( \partial_{z}^{(2n)}w\right) =W^{\left[ 2n\right]
}\quad\text{and\quad}\mathbf{V}_{2}\left( \partial_{z}^{(2n-1)}w\right)
=W^{\left[ 2n-1\right] }\text{,\quad}n=1,2,\ldots
.\label{Relations for derivatives}%
\end{equation}
\end{proposition}
\begin{proof}
From Proposition \ref{Prop Transm dz} we have%
\begin{equation}
\overset{\circ}{W}=\mathbf{V}_{2}\left( \partial_{z}w\right) . \label{45}%
\end{equation}
$\overset{\circ}{W}$ is a solution of the succeeding Vekua equation
(\ref{Vekua successor}). Denote $W_{1}=\overset{\circ}{W}$. Any solution of
(\ref{Vekua successor}) is the image of a bicomplex analytic function under
the action of the operator $\mathbf{V}_{2}$, so $W_{1}=\mathbf{V}_{2}w_{1}$.
Due to Proposition \ref{Prop Transm dz} we have $\overset{\circ}{W}%
_{1}=\mathbf{V}_{1}\left( \partial_{z}w_{1}\right) $. Thus, $\overset
{\circ\circ}{W}=\mathbf{V}_{1}\left( \partial_{z}^{2}w\right) $ because from
(\ref{45}) $w_{1}=\partial_{z}w$. Now (\ref{Relations for derivatives}) can be
easily proved by induction.
\end{proof}
\section{Expansion theorem}\label{SectETh}
\begin{theorem}
\label{Th W series} A solution $W$ of \eqref{Vekua main x} can be represented
as a uniformly convergent series
\begin{equation}
W(z)=u(z)+\mathbf{j}v(z)=\sum_{n=0}^{\infty} Z^{(n)}(a_{n},0;z)
\label{W=Taylor}%
\end{equation}
in $\overline{\mathbf{R}}$ if and only if the functions $u(x,x)$ and $u(x,-x)$ admit the
uniformly convergent series expansions on $[-b,b]$ of the form \eqref{+1} and
\eqref{-1} where the coefficients of the expansions are related by the
equalities
\[
a_{n}=c_{n}+\mathbf{j}b_{n},\quad n=0,1,\ldots
\]
and the coefficient $b_{0}$ defines the value of $v$ at the origin.
\end{theorem}
\begin{proof}
Suppose that $W$ is a solution of (\ref{Vekua main x}) such that
(\ref{W=Taylor}) holds. Denote $c_{n}:=\mathcal{R}(a_{n})$ and $b_{n}%
:=\mathcal{I}(a_{n})$, $n=0,1,\ldots$. By \eqref{ZasWavePol} we have
\[
Z^{(n)}(1,0;z)=u_{2n-1}(x,t)+\mathbf{j}v_{2n}(x,t),\quad n=1,2,\ldots
\]
and
\[
Z^{(n)}(\mathbf{j},0;z) =u_{2n}(x,t)+\mathbf{j}v_{2n-1}(x,t),\quad
n=1,2,\ldots.
\]
Then
\[
W(z)=c_{0}u_{0}(x,t)+b_{0}\mathbf{j}v_{0}(x,t)+\sum_{n=1}^{\infty}\left(
c_{n}\left( u_{2n-1}(x,t)+\mathbf{j}v_{2n}(x,t)\right) +b_{n}\left(
u_{2n}(x,t)+\mathbf{j}v_{2n-1}(x,t)\right) \right) .
\]
Hence
\[
u:=\mathcal{R}(W)=c_{0}u_{0}+\sum_{n=1}^{\infty}\left( c_{n}u_{2n-1}%
+b_{n}u_{2n}\right)
\]
and
\[
v:=\mathcal{I}(W)=b_{0}v_{0}+\sum_{n=1}^{\infty}\left( c_{n}v_{2n}%
+b_{n}v_{2n-1}\right) .
\]
Thus,
\[
u(x,x)=c_{0}u_{0}(x,x)+\sum_{n=1}^{\infty}\left( c_{n}u_{2n-1}(x,x)+b_{n}%
u_{2n}(x,x)\right)
\]
and
\[
u(x,-x)=c_{0}u_{0}(x,x)+\sum_{n=1}^{\infty}\left( c_{n}u_{2n-1}%
(x,x)-b_{n}u_{2n}(x,x)\right) .
\]
We obtain that (\ref{+1}) and (\ref{-1}) hold.
Now let us assume that $W$ is a solution of (\ref{Vekua main x}) in
$\overline{\mathbf{R}}$ such that for $u=\mathcal{R}(W)$ the equalities
(\ref{+1}) and (\ref{-1}) hold. Then due to Proposition \ref{Prop Unif Conv u}
the function $u$ admits a series expansion (\ref{u}), uniformly convergent in
$\overline{\mathbf{R}}$ and at the same time $\widetilde{u}=T_{f}^{-1}u$ is a
solution of a Goursat problem for the wave equation with the Goursat data
$\binom{\varphi}{\psi}$ defined by (\ref{phipsi}). $\widetilde{u}$ has the
form (\ref{util}).
Now, let us consider the Goursat problem (\ref{hCR}), (\ref{condG}) with
\[
\Phi(x):=2\varphi(x)-c_{0}+b_{0}\quad\text{and}\quad\Psi(x):=2\psi
(x)-c_{0}-b_{0}%
\]
where $b_{0}\in\mathbb{C}$ is an arbitrary number. Thus,
\[
\Phi(x)=\sum_{n=0}^{\infty}2^{n}\left( c_{n}+b_{n}\right) x^{n}%
\quad\text{and}\quad\Psi(x)=\sum_{n=0}^{\infty}2^{n}\left( c_{n}%
-b_{n}\right) x^{n}.
\]
Due to Proposition \ref{Prop Goursat hCR}, the unique solution of this Goursat
problem has the form $\widetilde{W}(z)=\sum\limits_{n=0}^{\infty}a_{n}z^{n}$
where
\[
a_{n}=P^{+}\left( c_{n}+b_{n}\right) +P^{-}\left( c_{n}-b_{n}\right)
=c_{n}+\mathbf{j}b_{n},\quad n=0,1,\ldots.
\]
Application of the operator $\mathbf{V}_{1}$ to $\widetilde{W}$ gives us a
solution $W$ of (\ref{Vekua main x}) in $\overline{\mathbf{R}}$ in the form of
a uniformly convergent series (\ref{W=Taylor}) with $u=\mathcal{R}(W)$.
\end{proof}
\begin{theorem}
\label{Th Taylor Vekua} Let $W$ be a solution of \eqref{Vekua main x}
admitting in $\overline{\mathbf{R}}$ a uniformly convergent series expansion
\eqref{W=Taylor}. Then there exist its derivatives in the sense of Bers of any
order and the Taylor formula for the expansion coefficients holds,
\begin{equation}
a_{n}=\frac{W^{[n]}(0)}{n!}. \label{TaylorCoef}%
\end{equation}
\end{theorem}
\begin{proof}
Under the conditions of the theorem we obtain that $w=\mathbf{V}_{1}^{-1}W$ has
the form (\ref{power series}) with the expansion coefficients
(\ref{Taylor coefs}). The derivatives of $w$ of any order exist because they
reduce to the differentiation of power series expansions of $\Phi$ and $\Psi$
from Proposition \ref{Prop Goursat hCR}. Application of the operator
$\mathbf{V}_{1}$ to the even derivatives and of $\mathbf{V}_{2}$ to the odd
ones, due to Proposition \ref{PropDerivatives}, gives us the corresponding
derivatives of $W$ in the sense of Bers as well as the formula
(\ref{TaylorCoef}).
\end{proof}
Even if the conditions of Theorem \ref{Th W series} are not fulfilled and a
solution of (\ref{Vekua main x}) cannot be represented globally as a uniformly
convergent formal power series of the form (\ref{W=Taylor}), it may be
arbitrarily closely approximated by finite linear combinations of the formal
powers. The following analogue of the Runge approximation theorem from
classical complex analysis is a corollary of the existence of the
transmutation operator $\mathbf{V}_{1}$.
\begin{proposition}
\label{Prop RungeZ} Let $W$ be a solution of \eqref{Vekua main x} in
$\mathbf{R}$. Then there exists a sequence of polynomials in formal powers
$P_{N}=\sum_{n=0}^{N}Z^{(n)}(a_{n}, 0; z)$ uniformly convergent to $W$ in
$\overline{\mathbf{R}}$.
\end{proposition}
\begin{proof}
Consider $w = \mathbf{V}_{1}^{-1}W$. By Proposition \ref{Prop Transm dz} we
have $\mathbf{V}_{2}(\partial_{\bar z} w) = 0$, hence $w$ is a solution of
\eqref{hCR} in $\overline{\mathbf{R}}$. Although the domain $\overline
{\mathbf{R}}$ is smaller than the domain $\overline{R}$, the proof of
Proposition \ref{Prop RungeHCR} may be applied without changes to the function
$w$. Hence for an arbitrary $\varepsilon>0$ there exists a polynomial
$p_{N}=\sum_{n=0}^{N} a_{n}z^{n}$ such that for any
$(x,t)\in\overline{\mathbf{R}}$
\[
|w(x,t) - p_{N}(x,t)|<\frac{\varepsilon}{\|\mathbf{V}_{1}\|},
\]
where $\|\mathbf{V}_{1}\|<\infty$ is the norm of the transmutation operator
$\mathbf{V}_{1}$. Applying $\mathbf{V}_{1}$ and Proposition
\ref{PropMapComplexPowers} we obtain that
\[
\left| W(x,t) - \sum_{n=0}^{N} Z^{(n)}(a_{n},0;z)\right|
\le\|\mathbf{V}_{1}\|\cdot\max_{(x,t)\in\overline{\mathbf{R}}}|w(x,t) -
p_{N}(x,t)| <\varepsilon.
\]
\end{proof}
\section{Integral kernels as solutions of Vekua equations}\label{SectKernelAsSolVekua}
In Section \ref{SectTransmutVekua} we showed that the
combinations \eqref{Tbold1} and \eqref{Tbold2} of the transmutation operators
$T_{f}$ and $T_{1/f}$ are related to Vekua operators and formal powers. In
this section we consider the combination
\begin{equation}
\label{KVekua}\mathbf{K} = \mathbf{K}_{f} - \mathbf{j} \mathbf{K}_{1/f}%
\end{equation}
of the integral kernels of the operators $T_{f}$ and $T_{1/f}$.
\begin{theorem}
\label{Th K Vekua} The function $\mathbf{K}$ given by \eqref{KVekua} is a
solution of the hyperbolic Vekua equation \eqref{Vekua main x}.
\end{theorem}
\begin{proof}
The proof immediately follows from relations \eqref{K2} and \eqref{K1} and
conditions \eqref{Vek1}, \eqref{Vek2}.
\end{proof}
The way of constructing the integral kernel $\mathbf{K}_{1/f}$ from the
integral kernel $\mathbf{K}_{f}$ given by \eqref{K2} requires the knowledge of
the kernel $\mathbf{K}_{f}(x,t)$ not only on the natural domain $|t|\leq
|x|\leq b$, sufficient for defining the transmutation operator, but on the
larger domain $|t|\leq b$, $|x|\leq b$. Pseudoanalytic function theory allows
us to obtain another method of reconstructing $\mathbf{K}_{1/f}$ from the
known $\mathbf{K}_{f}$ and vice versa. The resulting formulas are simpler than
those mentioned in \cite[Remark 21]{KrT2012}.
\begin{corollary}
\label{Cor K Vekua} The integral kernels $\mathbf{K}_{f}$ and $\mathbf{K}%
_{1/f}$ of the transmutation operators $T_{f}$ and $T_{1/f}$ are related by
the expressions
\begin{multline}
\label{K2Vekua}\mathbf{K}_{1/f}(x,t) = -\frac{1}{f(x)} \int_{0}^{x} f(\eta)
\partial_{t} \mathbf{K}_{f}(\eta,0)\,d\eta \\
- \int_{0}^{t} \partial_{x}
\mathbf{K}_{f}(x,\xi)\,d\xi+ \frac{f^{\prime}(x)}{f(x)}\int_{0}^{t}
\mathbf{K}_{f}(x,\xi)\,d\xi- \frac{f^{\prime}(0)}{2f(x)}%
\end{multline}
and
\begin{multline}
\label{K1Vekua}\mathbf{K}_{f}(x,t) = -f(x) \int_{0}^{x} \frac{1}{f(\eta)}
\partial_{t} \mathbf{K}_{1/f}(\eta,0)\,d\eta\\
- \int_{0}^{t} \partial_{x}
\mathbf{K}_{1/f}(x,\xi)\,d\xi- \frac{f^{\prime}(x)}{f(x)}\int_{0}^{t}
\mathbf{K}_{1/f}(x,\xi)\,d\xi+ \frac{f^{\prime}(0)}{2}f(x).
\end{multline}
\end{corollary}
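For the data of Example \ref{ModelExample} ($f(x)=x+1$, $\mathbf{K}_{f}=1/2$, $\mathbf{K}_{1/f}=(t-1)/(2(x+1))$, $f^{\prime}(0)=1$) both relations can be checked symbolically; the following SymPy sketch of ours performs this verification.

```python
# Symbolic check of relations (K2Vekua) and (K1Vekua) on the data of
# Example ModelExample: f(x) = x + 1, K_f = 1/2, K_{1/f} = (t-1)/(2(x+1)),
# f'(0) = 1.  Each formula must reproduce one kernel from the other.
import sympy as sp

x, t, eta, xi = sp.symbols('x t eta xi')
f = x + 1
Kf = sp.Rational(1, 2)
K1f = (t - 1) / (2 * (x + 1))

# (K2Vekua): K_{1/f} reconstructed from K_f
K2V = (-sp.integrate(f.subs(x, eta) * sp.diff(Kf, t).subs({x: eta, t: 0}),
                     (eta, 0, x)) / f
       - sp.integrate(sp.diff(Kf, x).subs(t, xi), (xi, 0, t))
       + sp.diff(f, x) / f * sp.integrate(Kf.subs(t, xi), (xi, 0, t))
       - sp.Rational(1, 2) / f)         # f'(0)/(2 f(x)) with f'(0) = 1
assert sp.simplify(K2V - K1f) == 0

# (K1Vekua): K_f reconstructed from K_{1/f}
K1V = (-f * sp.integrate(sp.diff(K1f, t).subs({x: eta, t: 0}) / f.subs(x, eta),
                         (eta, 0, x))
       - sp.integrate(sp.diff(K1f, x).subs(t, xi), (xi, 0, t))
       - sp.diff(f, x) / f * sp.integrate(K1f.subs(t, xi), (xi, 0, t))
       + sp.Rational(1, 2) * f)         # (f'(0)/2) f(x) with f'(0) = 1
assert sp.simplify(K1V - Kf) == 0
```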
\begin{proof}
Suppose first that $q_{f}:= \frac{f^{\prime\prime}}{f} \in C^{1}[-b,b]$. In
this case we also have $q_{1/f}= 2\Bigl(\frac{f^{\prime}}{f}\Bigr)^{2} -
q_{f}\in C^{1}[-b,b]$, and the integral kernels $\mathbf{K}_{f}$ and
$\mathbf{K}_{1/f}$ are twice continuously differentiable with respect to both
variables and satisfy equation \eqref{GoursatKh1} with the potentials $q_{f}$ and
$q_{1/f}$, respectively. Hence we obtain from Theorems \ref{ThDarboux} and
\ref{Th K Vekua} for the function $\mathbf{K}_{f}$ using the operator
$\overline{A}_{h}$ given by \eqref{ABarKernel} that the function
$\mathbf{K}_{1/f}$ differs from the function
\[
-v = -\frac{1}{f(x)} \int_{0}^{x} f(\eta) \partial_{t} \mathbf{K}_{f}%
(\eta,0)\,d\eta- \int_{0}^{t} \partial_{x} \mathbf{K}_{f}(x,\xi)\,d\xi+
\frac{f^{\prime}(x)}{f(x)}\int_{0}^{t} \mathbf{K}_{f}(x,\xi)\,d\xi
\]
by the term $cf^{-1}$, where $c$ is a constant. From the condition
$\mathbf{K}_{1/f}(0,0) = -\frac{f^{\prime}(0)}2$ (condition
\eqref{GoursatKh2}) we find that $c=-\frac{f^{\prime}(0)}2$.
Suppose now that $q_{f}\in C[-b,b]$. We proceed as follows. Consider a
sequence of polynomials $\{f_{n}\}_{n\in\mathbb{N}}$ such that $f_{n}(0)=1$
and $f_{n}^{\prime\prime}\to f^{\prime\prime}$, $f_{n}^{\prime}\to f^{\prime}$
and $f_{n}\to f$ uniformly on $[-b,b]$ as $n\to\infty$. See, e.g., \cite[Proof
of Theorem 11]{CKT} for a possible construction. Since the function $f$ does
not vanish on $[-b,b]$, we may additionally assume that all the functions $f_{n}$
do not vanish on $[-b,b]$. It is easy to see that $q_{n}:=\frac{f_{n}^{\prime\prime}}{f_{n}}\to q_{f}$ uniformly and that $q_{n}\in C^{1}[-b,b]$. Every
$f_{n}$ is associated with the transmutation operators $T_{f_{n}}$ and
$T_{1/f_{n}}$ with the integral kernels $\mathbf{K}_{f_{n}}$ and
$\mathbf{K}_{1/f_{n}}$. Consider functions $\mathbf{H}_{f_{n}}(u,v) =
\mathbf{K}_{f_{n}}(u+v, u-v; f^{\prime}_{n}(0))$. It is well known (e.g.,
\cite{Vladimirov}) that the Goursat problem \eqref{GoursatTh1},
\eqref{GoursatTh2} is equivalent to either of the systems of integral
equations
\[
\begin{cases}
\mathbf{H}(u,v) = \frac h2 + \int_{0}^{u} \mathbf{G}(u^{\prime}, v)
\,du^{\prime},\\
\mathbf{G}(u,v) = \frac12 q(u)+\int_{0}^{v} q(u+v^{\prime}) \mathbf{H}(u,
v^{\prime})\,dv^{\prime}%
\end{cases}
\]
and
\[
\begin{cases}
\mathbf{H}(u,v) = \frac h2 + \frac12\int_{0}^{u} q(s)\,ds+\int_{0}^{v}
\mathbf{F}(u, v^{\prime}) \,dv^{\prime},\\
\mathbf{F}(u,v) = \int_{0}^{u} q(u^{\prime}+v) \mathbf{H}(u^{\prime},
v)\,du^{\prime},
\end{cases}
\]
where $\mathbf{G}=\partial_{u} \mathbf{H}$ and $\mathbf{F}=\partial_{v}
\mathbf{H}$, from which we conclude that the functions $\mathbf{H}_{f_{n}}$,
$\partial_{u} \mathbf{H}_{f_{n}}$ and $\partial_{v} \mathbf{H}_{f_{n}}$ are
uniformly convergent to the functions $\mathbf{H}_{f}$, $\partial
_{u}\mathbf{H}_{f}$ and $\partial_{v}\mathbf{H}_{f}$, respectively. Hence the
partial derivatives of $\mathbf{K}_{f_{n}}$ are uniformly convergent as well,
and taking the limit as $n\to\infty$ in \eqref{K2Vekua} we finish the proof.
The second formula \eqref{K1Vekua} easily follows from \eqref{K2Vekua} by
changing $f$ to $1/f$.
\end{proof}
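The equivalence with the systems of integral equations used in the proof is easy to experiment with numerically. The following sketch (the representation of polynomials as coefficient dictionaries is our own choice) applies successive approximations to the first system under the assumptions $q\equiv1$ and $h=0$, and recovers the coefficients of $\mathbf{H}(u,v)=\frac12\sum_{k\ge0}\frac{u^{k+1}v^{k}}{k!\,(k+1)!}$, which corresponds to the kernel $\mathbf{K}_{\cosh}$ of Example \ref{ExampleCosh}.

```python
from fractions import Fraction

# Successive approximations for the first system of integral equations
# equivalent to the Goursat problem, assuming q(x) = 1 and h = 0:
#   H(u,v) = h/2 + int_0^u G(u',v) du',
#   G(u,v) = q(u)/2 + int_0^v q(u+v') H(u,v') dv'.
# Polynomials in (u, v) are stored as {(i, j): coefficient of u^i v^j}.

def int_u(p):  # antiderivative in u vanishing at u = 0
    return {(i + 1, j): c / (i + 1) for (i, j), c in p.items()}

def int_v(p):  # antiderivative in v vanishing at v = 0
    return {(i, j + 1): c / (j + 1) for (i, j), c in p.items()}

def add(p, q):
    r = dict(p)
    for key, c in q.items():
        r[key] = r.get(key, Fraction(0)) + c
    return r

H = {}  # zeroth approximation: H = h/2 = 0
for _ in range(10):
    G = add({(0, 0): Fraction(1, 2)}, int_v(H))  # q = 1
    H = int_u(G)

# H now carries the exact coefficients 1/(2 k! (k+1)!) of u^{k+1} v^k
# up to k = 9, matching the kernel K_cosh of Example ExampleCosh.
```

Each iteration produces one further exact term of the series, since integration raises the total degree; ten iterations therefore fix all coefficients up to $k=9$.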
\begin{example}
\label{Example K Vekua} For the functions $\mathbf{K}_{f}$ and $\mathbf{K}%
_{1/f}$ from Example \ref{ModelExample} we get $\mathbf{K} = \frac12 -
\mathbf{j} \frac{t-1}{2(x+1)}$ and
\[
\partial_{\bar z} \mathbf{K} = \frac12 \left( \mathbf{j} \frac{t-1}%
{2(x+1)^{2}} + \frac1{2(x+1)}\right) = \frac1{2(x+1)}\mathbf{\overline{K}} =
\frac{f^{\prime}(x)}{2f(x)}\mathbf{\overline{K}}.
\]
Formulae \eqref{K2Vekua} and \eqref{K1Vekua} can be verified as well.
\end{example}
\section{Formulae for integral kernels of transmutation operators}\label{SectFIK}
It follows from Theorems \ref{Th Taylor Vekua} and \ref{Th K Vekua} and from
\eqref{nth Der} that the knowledge of the derivatives $\partial_{t}%
^{n}\mathbf{K}(0,0)$ allows one to obtain the generalized Taylor coefficients
\eqref{TaylorCoef} for $W = \mathbf{K}$. In this section we present several
formulae for computing these derivatives whenever the derivatives $q^{(m)}%
(0)$, $m\le n-1$ are known.
First we illustrate the expansion theorem with an example.
\begin{example}
For the function $\mathbf{K}$ from Example \ref{Example K Vekua} by
\eqref{nth Der} we obtain
\[
\mathbf{K}^{[0]}(0,0) = \frac12+\frac12\mathbf{j},\quad\mathbf{K}^{[1]}(0,0) =
\left. -\frac{1}{2(x+1)}\right| _{(0,0)}=-\frac12+0\mathbf{j},\quad
\mathbf{K}^{[n+1]} = \mathbf{j}\partial_{t}\mathbf{K}^{[n]} = 0,\ n\ge1.
\]
Hence the expansion coefficients are $a_{0}= \frac12+\frac12\mathbf{j}$,
$a_{1} = -\frac12+0\mathbf{j}$ and $a_{n}=0$ for $n\ge2$.
The first two functions of the systems $\{\varphi_{k}\}$ and $\{\psi_{k}\}$
are equal to
\begin{align*}
\varphi_{0} & = x+1, & \psi_{0} & = \frac1{x+1},\\
\varphi_{1} & = x, & \psi_{1} & = \frac{x^{3}+3x^{2}+3x}{3(x+1)},
\end{align*}
the first two formal powers are given by
\begin{align*}
Z^{(0)}(a,0; z) & = \mathcal{R}(a)\cdot\varphi_{0}+\mathbf{j}\mathcal{I}%
(a)\cdot\psi_{0},\\
Z^{(1)}(a,0; z) & = \mathcal{R}(a)\cdot(\varphi_{1}+\mathbf{j}t\psi
_{0})+\mathbf{j}\mathcal{I} (a)\cdot(\psi_{1}+\mathbf{j}t\varphi_{0})
\end{align*}
and we find that indeed,
\[
\mathbf{K}(x,t) = Z^{(0)}\Bigl(\frac12+\frac12\mathbf{j}, 0; z\Bigr)+Z^{(1)}%
\Bigl(-\frac12+0\mathbf{j}, 0; z\Bigr).
\]
\end{example}
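The final identity of the example can be verified numerically. A minimal sketch, assuming that hyperbolic numbers $a+\mathbf{j}b$ (with $\mathbf{j}^{2}=1$) may be represented as pairs $(a,b)$; the helper names are not from the text.

```python
# Check of K = Z^(0)(a0,0;z) + Z^(1)(a1,0;z) for f(x) = x + 1, where
# K = 1/2 - j(t-1)/(2(x+1)), a0 = 1/2 + (1/2)j, a1 = -1/2 + 0j.

def hmul(p, q):  # (a + j b)(c + j d) = (ac + bd) + j(ad + bc), since j^2 = 1
    return (p[0] * q[0] + p[1] * q[1], p[0] * q[1] + p[1] * q[0])

def hadd(p, q):
    return (p[0] + q[0], p[1] + q[1])

a0, a1 = (0.5, 0.5), (-0.5, 0.0)

for (x, t) in [(0.3, 0.1), (-0.2, 0.7), (1.5, -0.4)]:
    phi0, psi0 = x + 1, 1 / (x + 1)
    phi1, psi1 = x, (x**3 + 3 * x**2 + 3 * x) / (3 * (x + 1))
    # Z^(0)(a,0;z) = Re(a)*phi0 + j Im(a)*psi0
    Z0 = (a0[0] * phi0, a0[1] * psi0)
    # Z^(1)(a,0;z) = Re(a)*(phi1 + j t psi0) + j Im(a)*(psi1 + j t phi0)
    Z1 = hadd(hmul((a1[0], 0.0), (phi1, t * psi0)),
              hmul((0.0, a1[1]), (psi1, t * phi0)))
    K = hadd(Z0, Z1)
    expected = (0.5, -(t - 1) / (2 * (x + 1)))
    assert abs(K[0] - expected[0]) < 1e-12
    assert abs(K[1] - expected[1]) < 1e-12
```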
We are able to calculate the expansion coefficients $a_{n}$ whenever we know
the values of the derivatives $\partial_{t}^{n}\mathbf{K}(0,0)$. Recall that the
function $\mathbf{K}_{f}$ is the integral kernel of a transmutation operator
if and only if the function $\mathbf{H}_{f}(u,v) = \mathbf{K}_{f}(u+v,u-v)$ is
the solution of the Goursat problem \eqref{GoursatTh1}, \eqref{GoursatTh2}. We
have
\begin{equation}
\label{dtKdudvH}\partial_{t}^{n}\mathbf{K}_{f}(x,t) = \partial_{t}%
^{n}\mathbf{H}_{f}\Bigl(\frac{x+t}2, \frac{x-t}2\Bigr) = \frac1{2^{n}%
}(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}(u,v)\Bigr|_{u=\frac
{x+t}2,\,v=\frac{x-t}2}.
\end{equation}
Moreover, it follows from \eqref{GoursatTh2} that
\begin{equation}
\label{duH(0)}\partial_{u}^{n}\mathbf{H}_{f}(u,0) =
\begin{cases}
\frac{f^{\prime}(0)}2 + \frac12\int_{0}^{u} q_{f}(s)\,ds, & n=0,\\
\frac12 q_{f}^{(n-1)}(u), & n\ge1,
\end{cases}
\end{equation}
and
\begin{equation}
\label{dvH(0)}\partial_{v}^{n}\mathbf{H}_{f}(0,v) =
\begin{cases}
\frac{f^{\prime}(0)}2, & n=0,\\
0, & n\ge1.
\end{cases}
\end{equation}
Hence to calculate the value of $\partial_{t}^{n}\mathbf{K}_{f}(0,0)$ it is
sufficient to transform the terms $\partial_{u}^{k}\partial_{v}^{n-k}%
\mathbf{H}_{f}$, where $1\le k\le n-1$, into the terms involving only
derivatives of the form $\partial_{u}^{\ell}\mathbf{H}_{f}$ or $\partial
_{v}^{m} \mathbf{H}_{f}$ and some derivatives of the potential $q_{f}$. Such
transformation may be performed using equation \eqref{GoursatTh1} which allows
us to reduce the orders $k$ and $n-k$ in the term $\partial_{u}^{k}%
\partial_{v}^{n-k}\mathbf{H}_{f}$ by 1. After several applications of equation
\eqref{GoursatTh1} and the Leibniz rule we obtain a finite sum of terms of the
form
\begin{equation}
\label{dtHGeneralTerm}q_{f}^{(n_{1})}(u+v)\cdot\ldots\cdot q_{f}^{(n_{\ell}%
)}(u+v) \partial^{d}_{u/v}\mathbf{H}_{f}(u,v).
\end{equation}
It is easy to see that the number $\ell$ of factors $q_{f}^{(n_{i})}(u+v)$
satisfies the inequality $0\le\ell\le\min(k,n-k)$ and that $n_{1}%
+\ldots+n_{\ell}+2\ell+d=n$.
Moreover, consider a term of the form \eqref{dtHGeneralTerm} obtained as a
result of application to $\mathbf{H}_{f}$ of some derivatives $\partial_{u}$
and $\partial_{v}$, equation \eqref{GoursatTh1} and the Leibniz rule. If
during this process we replace every derivative $\partial_{u}$ with
$-\partial_{v}$ and every derivative $\partial_{v}$ with $-\partial_{u}$, we
obtain a term of exactly the same form, with the only difference that the
last derivative $\partial^{d}_{u/v}\mathbf{H}_{f}$ changes into $\partial
^{d}_{v/u}\mathbf{H}_{f}$, and the coefficient $(-1)^{n}$ appears. Hence, we
may group all the terms obtained after the transformation of the expression
$(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}(u,v)$ into the pairs of the
form
\begin{equation}
\label{dtHGeneralTermPairs}\bigl(\partial_{u}^{d}+(-1)^{n}\partial^{d}%
_{v}\bigr)\mathbf{H}_{f}(u,v) \cdot\prod_{i=1}^{\ell}q_{f}^{(n_{i})}(u+v).
\end{equation}
Now we derive a recurrence relation for the derivative $(\partial
_{u}-\partial_{v})^{n}\mathbf{H}_{f}$. We may parametrize each term of the
form \eqref{dtHGeneralTermPairs} by the following parameters.
\begin{enumerate}
\item The number $\ell$ of factors $q_{f}^{(n_{i})}(u+v)$. Since we need two
derivatives to obtain one factor $q(u+v)$ with the use of \eqref{GoursatTh1},
the number $\ell$ satisfies
\begin{equation}
\label{ParamEllIneq}0\le\ell\le\frac n2.
\end{equation}
\item The number $d$ of derivatives of $\mathbf{H}_{f}$ satisfying
\begin{equation}
\label{ParamDIneq}0\le d\le n-2\ell.
\end{equation}
\item An ordered sequence of non-negative numbers $n_{1}\le n_{2}\le\ldots\le
n_{\ell}$ (orders of derivatives of the factors $q_{f}(u+v)$) satisfying
\begin{equation}
\label{ParamNiIneq}n_{1}+n_{2}+\ldots+n_{\ell}+2\ell+d = n.
\end{equation}
\end{enumerate}
We denote by $(-1)^{\ell}S^{n}_{\ell; d; (n_{1},\ldots,n_{\ell})}$ the
coefficient of the term \eqref{dtHGeneralTermPairs} in the final
representation of $(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}$ after the
described transformation. The factor $(-1)^{\ell}$ is used to make the number
$S^{n}_{\ell; d; (n_{1},\ldots,n_{\ell})}$ non-negative. We assume that
$S^{n}_{\ell; d; (n_{1},\ldots,n_{\ell})} = 0$ whenever either of the
conditions \eqref{ParamEllIneq}--\eqref{ParamNiIneq} fails.
\begin{lemma}
The number of different lists of parameters $\ell; d; (n_{1},\ldots,n_{\ell})$
satisfying conditions \eqref{ParamEllIneq}--\eqref{ParamNiIneq} for a fixed
$n$ coincides with the number $p(n)$ of partitions of $n$, i.e., the number of
ways $n$ may be represented as a sum of positive integer terms without regard to their order, see,
e.g., \cite{Charal2002}, \cite{Comtet1974}. The number $p(n)$ is also known as
the number of different Young or Ferrers diagrams.
\end{lemma}
\begin{proof}
Consider a list of parameters $\ell;d;(n_{1},\ldots,n_{\ell})$ and put into
correspondence to this list a partition $\underbrace{1,1,\ldots,1}_{d},
n_{1}+2,n_{2}+2,\ldots, n_{\ell}+2$. Due to the condition \eqref{ParamNiIneq}
this partition is a partition of $n$. It is easy to see that the described
correspondence is one-to-one.
\end{proof}
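The bijection used in the proof can be confirmed by direct enumeration; a small self-contained sketch (the helper names are ad hoc):

```python
from functools import lru_cache

# Check of the lemma: the number of parameter lists (l; d; (n_1 <= ... <= n_l))
# with 0 <= l <= n/2, 0 <= d <= n - 2l and n_1 + ... + n_l + 2l + d = n
# equals the partition number p(n).

def p(n):  # partition numbers via the standard DP over part sizes
    table = [1] + [0] * n
    for part in range(1, n + 1):
        for s in range(part, n + 1):
            table[s] += table[s - part]
    return table[n]

@lru_cache(maxsize=None)
def count_sorted(total, length, minval):
    # non-decreasing sequences of `length` integers >= minval summing to `total`
    if length == 0:
        return 1 if total == 0 else 0
    return sum(count_sorted(total - v, length - 1, v)
               for v in range(minval, total + 1))

def count_lists(n):
    return sum(count_sorted(n - 2 * l - d, l, 0)
               for l in range(n // 2 + 1)
               for d in range(n - 2 * l + 1))

for n in range(1, 15):
    assert count_lists(n) == p(n)
```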
Note that $(\partial_{u}-\partial_{v})q_{f}(u+v)=0$, hence in the case $d=0$
we have
\begin{multline}
\label{ReccurStepD0}(\partial_{u}-\partial_{v}) \biggl(\bigl(\mathbf{H}%
_{f}(u,v)+(-1)^{n}\mathbf{H}_{f}(u,v)\bigr)\prod_{i=1}^{\ell}q_{f}^{(n_{i}%
)}(u+v)\biggr)\\
=\bigl(1+(-1)^{n}\bigr)\bigl(\partial_{u}+(-1)^{n+1}\partial_{v}%
\bigr)\mathbf{H}_{f}(u,v)\cdot\prod_{i=1}^{\ell}q_{f}^{(n_{i})}(u+v),
\end{multline}
and in the case $d\ne0$ we have
\begin{multline}
\label{ReccurStepD}(\partial_{u}-\partial_{v}) \biggl(\bigl(\partial^{d}%
_{u}+(-1)^{n}\partial^{d}_{v}\bigr)\mathbf{H}_{f}(u,v)\prod_{i=1}^{\ell}%
q_{f}^{(n_{i})}(u+v)\biggr)\\
\displaybreak[2]
=\Bigl(\bigl(\partial^{d+1}_{u}+(-1)^{n+1}\partial^{d+1}_{v}\bigr)\mathbf{H}%
_{f}(u,v) -\bigl(\partial^{d-1}_{u}+(-1)^{n+1}\partial^{d-1}_{v}%
\bigr)\bigl(q_{f}(u+v)\mathbf{H}_{f}(u,v)\bigr)\Bigr)\prod_{i=1}^{\ell}%
q_{f}^{(n_{i})}(u+v)\\
\displaybreak[2]
=\bigl(\partial^{d+1}_{u}+(-1)^{n+1}\partial^{d+1}_{v}\bigr)\mathbf{H}%
_{f}(u,v)\cdot\prod_{i=1}^{\ell}q_{f}^{(n_{i})}(u+v)\\
-\sum_{k=0}^{d-1} \binom{d-1}{k} q_{f}^{(k)}(u+v)\bigl(\partial^{d-1-k}%
_{u}+(-1)^{n+1}\partial^{d-1-k}_{v}\bigr)\mathbf{H}_{f}(u,v)\cdot\prod
_{i=1}^{\ell}q_{f}^{(n_{i})}(u+v).
\end{multline}
Also note that the coefficient of the term $(\partial_{u}^{n}+(-1)^{n}\partial^{n}_{v})\mathbf{H}_{f}(u,v)$ is equal to $1/2$ for $n=0$ and to $1$ for
$n\ge1$; hence $S^{0}_{0;0;()}=1/2$ and $S^{n}_{0; n; ()}=1$ for $n\ge1$. From
the relations \eqref{ReccurStepD0} and \eqref{ReccurStepD} and the latter
initial condition we obtain the following statement.
\begin{proposition}
\label{Prop S reccurrence} The coefficients $S^{n+1}_{\ell; d;(n_{1}%
,\ldots,n_{\ell})}$ with the lists of parameters $\ell; d;(n_{1}%
,\ldots,n_{\ell})$ satisfying conditions
\eqref{ParamEllIneq}--\eqref{ParamNiIneq} can be calculated by means of the
following recurrence relations
\begin{equation}
\label{S0reccurr}%
\begin{split}
S^{n+1}_{\ell; d;(n_{1},\ldots,n_{\ell})} = & \bigl(1+(-1)^{n}%
\bigr)S^{n}_{\ell; d-1;(n_{1},\ldots,n_{\ell})}\\
& +\sum_{n_{k}\in\{n_{1},\ldots,n_{\ell}\}} \binom{d+n_{k}}{n_{k}}S^{n}%
_{\ell-1; d+n_{k}+1;(n_{1},\ldots,n_{k-1},n_{k+1},\ldots,n_{\ell})}%
\qquad\text{if } d=1,
\end{split}
\end{equation}
\begin{equation}
\label{Sreccurr}
\begin{split}S^{n+1}_{\ell; d;(n_{1},\ldots,n_{\ell})} = & S^{n}_{\ell;
d-1;(n_{1},\ldots,n_{\ell})}\\
& +\sum_{n_{k}\in\{n_{1},\ldots,n_{\ell}\}}
\binom{d+n_{k}}{n_{k}}S^{n}_{\ell-1; d+n_{k}+1;(n_{1},\ldots,n_{k-1}%
,n_{k+1},\ldots,n_{\ell})}\qquad\text{if } d\ne1,
\end{split}
\end{equation}
where $\sum_{n_{k}\in\{n_{1},\ldots,n_{\ell}\}}$ means that only distinct
values of $n_{i}$ are used in the sum, with the initial conditions
\begin{equation}
\label{SreccurrInit}S^{0}_{0;0;()}=1/2,\qquad S^{n}_{0;n;()} = 1\quad
\text{for}\ n\ge1.
\end{equation}
\end{proposition}
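These recurrence relations are straightforward to implement. A memoized sketch (ad-hoc names; parameter lists violating \eqref{ParamEllIneq}--\eqref{ParamNiIneq} give $0$, as assumed above), checked against several of the values for $n\le6$ listed in the example below:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

# Recurrence for the coefficients S^n_{l;d;(n_1,...,n_l)}; ns is the
# non-decreasing tuple (n_1, ..., n_l).

@lru_cache(maxsize=None)
def S(n, l, d, ns):
    if l < 0 or d < 0 or len(ns) != l or 2 * l > n \
            or d > n - 2 * l or sum(ns) + 2 * l + d != n:
        return Fraction(0)  # constraints violated
    if l == 0:
        # initial conditions: S^0_{0;0;()} = 1/2 and S^n_{0;n;()} = 1, n >= 1
        return Fraction(1, 2) if n == 0 else Fraction(1)
    # factor (1 + (-1)^{n-1}) applies exactly when d = 1
    first = ((1 + (-1) ** (n - 1)) if d == 1 else 1) * S(n - 1, l, d - 1, ns)
    # sum over distinct values n_k, each removed once from the tuple
    second = sum(comb(d + nk, nk)
                 * S(n - 1, l - 1, d + nk + 1,
                     tuple(sorted(ns[:i] + ns[i + 1:])))
                 for i, nk in enumerate(ns) if nk not in ns[:i])
    return first + second

# Values from the example below (n <= 6):
assert S(3, 1, 1, (0,)) == 3
assert S(5, 2, 1, (0, 0)) == 10
assert S(6, 2, 2, (0, 0)) == 15
assert S(6, 3, 0, (0, 0, 0)) == 10
```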
The following proposition presents a direct formula for the coefficients
$S^{n}_{\ell; d; (n_{1},\ldots,n_{\ell})}$ which does not involve recurrence relations.
\begin{proposition}
\label{Prop S direct} The following formula holds for the coefficient
$S^{n}_{\ell; d;(n_{1},\ldots,n_{\ell})}$, where $\ell\ge1$ and the list of
parameters $\ell; d;(n_{1},\ldots,n_{\ell})$ satisfies conditions
\eqref{ParamEllIneq}--\eqref{ParamNiIneq}:
\begin{multline}
\label{Sdirect}S^{n}_{\ell; d;(n_{1},\ldots,n_{\ell})} = \sum_{(\sigma
_{1},\ldots,\sigma_{\ell})\in\mathfrak{S}(n_{1},\ldots,n_{\ell})} \sum
_{d_{1}=0}^{d} \sum_{d_{2}=0}^{d_{1}+\sigma_{1}+1}\cdots\sum_{d_{\ell}%
=0}^{d_{\ell-1}+\sigma_{\ell-1}+1}\\
\prod_{i=1}^{\ell}\binom{d_{i}+\sigma_{i}}{\sigma_{i}}\bigl(1+(-1)^{\sigma
_{i}+\ldots+\sigma_{\ell}}\delta(d_{i})\bigl(1-\delta(d)\delta
(i-1)\bigr)\bigr),
\end{multline}
where $(\sigma_{1},\ldots,\sigma_{\ell})\in\mathfrak{S}(n_{1},\ldots,n_{\ell
})$ means that the sum is taken over all distinct permutations of the numbers
$n_{1},\ldots,n_{\ell}$, and $\delta(t)=1$ if $t=0$ and $\delta(t)=0$ otherwise.
\end{proposition}
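The variable-depth nested sums in \eqref{Sdirect} can be realized recursively. In the sketch below the index $i$ is $0$-based, so the factor $\delta(i-1)$ of \eqref{Sdirect} becomes $\delta(i)$; the parameter $n$ is kept only for readability, being determined by condition \eqref{ParamNiIneq}.

```python
from math import comb
from itertools import permutations

def delta(t):
    return 1 if t == 0 else 0

def S_direct(n, l, d, ns):
    # direct formula (Sdirect) for S^n_{l;d;(n_1,...,n_l)}, l >= 1;
    # n is redundant: n = n_1 + ... + n_l + 2l + d
    total = 0
    for sigma in set(permutations(ns)):  # distinct permutations only
        def inner(i, bound):
            # sum over d_i = 0..bound of the i-th factor times the deeper sums
            if i == l:
                return 1
            s = 0
            for di in range(bound + 1):
                factor = comb(di + sigma[i], sigma[i]) * (
                    1 + (-1) ** sum(sigma[i:]) * delta(di)
                    * (1 - delta(d) * delta(i)))
                s += factor * inner(i + 1, di + sigma[i] + 1)
            return s
        total += inner(0, d)
    return total

# Spot checks against the values for n <= 6 tabulated in the example below:
assert S_direct(3, 1, 1, (0,)) == 3
assert S_direct(5, 2, 1, (0, 0)) == 10
assert S_direct(6, 2, 0, (1, 1)) == 5
```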
\begin{proof}
To arrive from the term $S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}$ to a term
$S^{\tilde n}_{0;\tilde n;()}$ using the recurrence relations \eqref{S0reccurr} and
\eqref{Sreccurr}, we have to perform exactly $\ell$ times the step of removing
one of the $n_{k}$'s from the list $(n_{1},\ldots,n_{\ell})$. Denote by
$d_{1},\ldots,d_{\ell}$ the values of the parameter $d$ on each of these steps
and by $\sigma_{1},\ldots,\sigma_{\ell}$ the number $n_{k}$ removed from the
list $(n_{1},\ldots,n_{\ell})$ on the corresponding step $1,\ldots,\ell$. Then
the numbers $d_{1},\ldots,d_{\ell}$ satisfy the following inequalities
\begin{align*}
0 \le\, & d_{1}\le d,\\
0 \le\, & d_{2}\le1+d_{1}+\sigma_{1},\\
& \, \vdots\\
0\le\, & d_{\ell}\le1+d_{\ell-1}+\sigma_{\ell-1}.
\end{align*}
Each distinct permutation of the numbers $n_{1},\ldots,n_{\ell}$ and each list
of parameters $d_{1},\ldots,d_{\ell}$ give us a different way of getting from
the term $S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}$ to the term $S^{\tilde
n}_{0;\tilde n;()}$. Note that the factor $1+(-1)^{n^{\prime}}$ for some
$n^{\prime}\le n$ appears in \eqref{S0reccurr} only when we pass through
the coefficient $S^{n^{\prime}}_{\ell^{\prime};0;(n^{\prime}_{1},\ldots,n^{\prime}_{\ell^{\prime}})}$ with $d^{\prime}=0$. Note also that if
$d=0$ initially, then $d_{1}=0$ and only \eqref{Sreccurr} is applicable, hence
the factor $1+(-1)^{n}$ does not appear in this case. So the factor
$1+(-1)^{n^{\prime}}$ appears only in one of the following cases: 1) $d\ne0$
and $d_{1}=0$, 2) $d_{i}=0$, $i\ge2$. It follows from condition
\eqref{ParamNiIneq} that in each of these cases $n^{\prime}=2\ell^{\prime
}+n_{1}^{\prime}+\ldots+n^{\prime}_{\ell^{\prime}}$ therefore $(-1)^{n^{\prime
}} = (-1)^{n_{1}^{\prime}+\ldots+n^{\prime}_{\ell^{\prime}}}$. Hence we
conclude that the following relation holds
\begin{multline*}
S^{n}_{\ell; d;(n_{1},\ldots,n_{\ell})} = \sum_{(\sigma_{1},\ldots
,\sigma_{\ell})\in\mathfrak{S}(n_{1},\ldots,n_{\ell})} \sum_{d_{1}=0}^{d}
\binom{d_{1}+\sigma_{1}}{\sigma_{1}} \bigl(1+(-1)^{\sigma_{1}+\ldots
+\sigma_{\ell}}\cdot\delta(d_{1})(1-\delta(d))\bigr)\\
\times\sum_{d_{2}=0}^{d_{1}+\sigma_{1}+1}\binom{d_{2}+\sigma_{2}}{\sigma_{2}}
\bigl(1+(-1)^{\sigma_{2}+\ldots+\sigma_{\ell}}\delta(d_{2})\bigr)\cdots
\sum_{d_{\ell}=0}^{d_{\ell-1}+\sigma_{\ell-1}+1}\binom{d_{\ell}+\sigma_{\ell}%
}{\sigma_{\ell}} \bigl(1+(-1)^{\sigma_{\ell}}\delta(d_{\ell})\bigr),
\end{multline*}
where the first sum is taken over all distinct permutations of the numbers
$n_{1},\ldots,n_{\ell}$, and the function $\delta(\cdot)$ is used to encode
the conditions 1) and 2) mentioned above. Finally, note that the conditions
1) and 2) may be jointly represented by the expression $\delta(d_{i})\bigl(1-\delta(d)\delta(i-1)\bigr)$.
\end{proof}
As a result we obtain the following representation for the derivative
$(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}(u,v)$, $n\ge1$.
\begin{proposition}
\label{Prop Repres dudvH} Let $n\ge1$ and $q_{f}\in C^{(n-1)}[-b,b]$. Then
\begin{multline*}
(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}(u,v) = \bigl(\partial^{n}%
_{u}+(-1)^{n}\partial_{v}^{n}\bigr)\mathbf{H}_{f}(u,v)\\
+\sum_{\ell=1}^{[n/2]} (-1)^{\ell}\sum_{d=0}^{n-2\ell}\bigl(\partial_{u}%
^{d}+(-1)^{n}\partial^{d}_{v}\bigr)\mathbf{H}_{f}(u,v) \sum_{\substack{n_{1}%
+\ldots+n_{\ell}=n-2\ell-d,\\0\le n_{1}\le\ldots\le n_{\ell}}} S^{n}%
_{\ell;d;(n_{1},\ldots,n_{\ell})} \prod_{i=1}^{\ell}q_{f}^{(n_{i})}(u+v),
\end{multline*}
where the coefficients $S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}$ are given by
Proposition \ref{Prop S reccurrence} or Proposition \ref{Prop S direct} and do
not depend on $q_{f}$.
\end{proposition}
The following proposition is a corollary of \eqref{dtKdudvH}, \eqref{duH(0)},
\eqref{dvH(0)} and Proposition \ref{Prop Repres dudvH}.
\begin{proposition}
Let $n\ge1$ and $q_{f}\in C^{(n-1)}[-b,b]$. Then
\begin{multline}
\label{dtK0}\partial_{t}^{n}\mathbf{K}_{f}(0,0) = \frac1{2^{n+1}}
\biggl( q_{f}^{(n-1)}(0) + \sum_{\ell=1}^{[n/2]} (-1)^{\ell}\sum
_{d=0}^{n-2\ell} \bigl(1+(-1)^{n}\delta(d)\bigr)q_{f}^{(d-1)}(0)\\
\times\sum_{\substack{n_{1}+\ldots+n_{\ell}=n-d-2\ell,\\0\le n_{1}\le\ldots\le
n_{\ell}}}S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}\prod_{i=1}^{\ell}%
q_{f}^{(n_{i})}(0)\biggr),
\end{multline}
where we set $q_{f}^{(-1)}(0):=h=f^{\prime}(0)$, the coefficients $S^{n}%
_{\ell;d;(n_{1},\ldots,n_{\ell})}$ are given by Proposition
\ref{Prop S reccurrence} or Proposition \ref{Prop S direct} and do not depend
on $q_{f}$, and $\delta(d)=1$ if $d=0$ and $\delta(d)=0$ otherwise.
Formula \eqref{dtK0} also holds for $n=0$ for any $q_{f}\in C[-b,b]$.
\end{proposition}
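Formula \eqref{dtK0} can be prototyped by combining the recurrence relations for the coefficients $S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}$ with an enumeration of the admissible tuples $(n_{1},\ldots,n_{\ell})$. The following sketch (our own reimplementation, not the Matlab program used in Example \ref{ExampleCosh}) reproduces the values $2^{n+1}\partial_{t}^{n}\mathbf{K}_{\cosh}(0,0)$ tabulated there.

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, prod

# Sketch of formula (dtK0); S implements the recurrence relations of the
# proposition above, with invalid parameter lists giving 0.

@lru_cache(maxsize=None)
def S(n, l, d, ns):
    if l < 0 or d < 0 or len(ns) != l or 2 * l > n \
            or d > n - 2 * l or sum(ns) + 2 * l + d != n:
        return Fraction(0)
    if l == 0:
        return Fraction(1, 2) if n == 0 else Fraction(1)
    first = ((1 + (-1) ** (n - 1)) if d == 1 else 1) * S(n - 1, l, d - 1, ns)
    second = sum(comb(d + nk, nk)
                 * S(n - 1, l - 1, d + nk + 1,
                     tuple(sorted(ns[:i] + ns[i + 1:])))
                 for i, nk in enumerate(ns) if nk not in ns[:i])
    return first + second

def sorted_tuples(total, length, minval=0):
    # non-decreasing tuples of `length` integers >= minval summing to `total`
    if length == 0:
        return [()] if total == 0 else []
    return [(v,) + rest for v in range(minval, total + 1)
            for rest in sorted_tuples(total - v, length - 1, v)]

def dtK(n, q):
    # q(m) = q_f^{(m)}(0) for m >= 0 and q(-1) = h = f'(0)
    if n == 0:
        return Fraction(q(-1), 2)
    total = Fraction(q(n - 1))
    for l in range(1, n // 2 + 1):
        for d in range(n - 2 * l + 1):
            factor = (1 + (-1) ** n) if d == 0 else 1
            if factor == 0 or q(d - 1) == 0:
                continue  # the whole term vanishes
            inner = sum(S(n, l, d, ns) * prod(q(m) for m in ns)
                        for ns in sorted_tuples(n - d - 2 * l, l))
            total += (-1) ** l * factor * q(d - 1) * inner
    return total / 2 ** (n + 1)

q_cosh = lambda m: 1 if m == 0 else 0  # q = 1, h = 0 (the kernel K_cosh)
assert [2 ** (n + 1) * dtK(n, q_cosh)
        for n in (1, 3, 5, 7, 9)] == [1, -3, 10, -35, 126]
```

The same function, fed the derivatives of $q_{\sech}$ at the origin, reproduces the $\mathbf{K}_{\sech}$ column of the tables in Example \ref{ExampleCosh}.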
As a consequence of Theorems \ref{Th W series}, \ref{Th Taylor Vekua} and
\ref{Th K Vekua} we obtain the following representation of the integral kernel
$\mathbf{K}_{f}$.
\begin{theorem}
\label{Th Formula K} Let $q_{f}\in C^{\infty}[-b,b]$ be a complex-valued
function and $f$ be a particular solution of \eqref{SLhom} such that $f\neq0$
on $[-b,b]$ and normalized as $f(0)=1$. Denote $h:=f^{\prime}(0)\in\mathbb{C}%
$. Suppose that the functions $g_{1}(x):=\frac12\bigl(\mathbf{K}%
_{f}(x,x)+\mathbf{K}_{f}(x,-x)\bigr)=\frac h2+\frac14\int_{0}^{x}
q_{f}(s)\,ds$ and $g_{2}(x):=\frac12\bigl(\mathbf{K}_{f}(x,x)-\mathbf{K}%
_{f}(x,-x)\bigr)=\frac14\int_{0}^{x} q_{f}(s)\,ds$ admit uniformly convergent
on $[-b,b]$ series expansions
\[
g_{1}(x) = c_{0} u_{0}(x,x) + \sum_{n=1}^{\infty}c_{n} u_{2n-1}(x,x)
\]
and
\[
g_{2}(x) = \sum_{n=1}^{\infty}b_{n} u_{2n}(x,x).
\]
Then the coefficients $\{c_{n}\}_{n\ge0}$ and $\{b_{n}\}_{n\ge1}$ may be found
by the formulas
\[
c_{n} =\frac{\partial_{t}^{n}\mathbf{K}_{f}(0,0)}{n!},\qquad b_{n} =
-\frac{\partial_{t}^{n}\mathbf{K}_{1/f}(0,0)}{n!},\quad\text{if } n\ \text{is
even}%
\]
and
\[
c_{n} =-\frac{\partial_{t}^{n}\mathbf{K}_{1/f}(0,0)}{n!},\qquad b_{n} =
\frac{\partial_{t}^{n}\mathbf{K}_{f}(0,0)}{n!},\quad\text{if } n\ \text{is
odd},
\]
where the derivatives $\partial_{t}^{n}\mathbf{K}_{f}(0,0)$ and $\partial
_{t}^{n}\mathbf{K}_{1/f}(0,0)$ are given by \eqref{dtK0} for the potentials
$q_{f}$ and $q_{1/f}=-q_{f}+2\bigl(f^{\prime}/f\bigr)^{2}$, respectively.
Also, for any $(x,t)\in\overline{\mathbf{R}}$
\begin{equation}
\label{FormulaK}\mathbf{K}_{f}(x,t) = c_{0}u_{0}(x,t)+\sum_{n=1}^{\infty
}\left( c_{n}u_{2n-1}(x,t)+b_{n}u_{2n}(x,t)\right) ,
\end{equation}
and the series converges uniformly in $\overline{\mathbf{R}}$.
\end{theorem}
\begin{example}
For the potentials $q_{f}$ and $q_{1/f}$ from Example \ref{ModelExample} we
have
\[
q_{f}^{(-1)}(0) := f^{\prime}(0) = 1,\qquad q_{f}^{(n)}(0) = 0,\quad n\ge0,
\]
and
\[
q_{1/f}^{(-1)}(0) :=(1/f)^{\prime}(0)= -1,\qquad q_{1/f}^{(n)}(0) =
2(-1)^{n}(n+1)!,\quad n\ge0.
\]
By Proposition \ref{Prop S reccurrence} the first coefficients $S^{n}%
_{\ell;d;(n_{1},\ldots,n_{\ell})}$ for $n\le6$ have the following values
\begin{gather*}
S^{1}_{0;1;()}=1;\\
S^{2}_{0;2;()}=1,\quad S^{2}_{1;0;(0)}=1;\\
S^{3}_{0;3;()}=1;\quad S^{3}_{1;0;(1)}=1,\quad S^{3}_{1;1;(0)}=3;\\
S^{4}_{0;4;()}=1;\quad S^{4}_{1;0;(2)}=1,\quad S^{4}_{1;1;(1)}=2;\quad
S^{4}_{1;2;(0)}=4,\quad S^{4}_{2;0;(0,0)}=3;\\
S^{5}_{0;5;()}=1;\ S^{5}_{1;0;(3)}=1,\ S^{5}_{1;1;(2)}=5;\ S^{5}%
_{1;2;(1)}=5,\ S^{5}_{1;3;(0)}=5,\ S^{5}_{2;0;(0,1)}=6,\ S^{5}_{2;1;(0,0)}%
=10;\\
S^{6}_{0;6;()}=1;\quad S^{6}_{1;0;(4)}=1,\quad S^{6}_{1;1;(3)}=4;\quad
S^{6}_{1;2;(2)}=11,\quad S^{6}_{1;3;(1)}=9,\quad S^{6}_{1;4;(0)}=6,\\
S^{6}_{2;0;(0,2)}=10,\quad S^{6}_{2;0;(1,1)}=5,\quad S^{6}_{2;1;(0,1)}%
=15,\quad S^{6}_{2;2;(0,0)}=15,\quad S^{6}_{3;0;(0,0,0)}=10.
\end{gather*}
We can check that formula \eqref{dtK0} works, e.g.,
\begin{multline*}
\displaybreak[2] \partial_{t}^{5}\mathbf{K}_{1/f}(0,0) = \frac1{64}\Bigl(240+
(-1)^{1}\bigl(1\cdot(1+(-1)^{5})\cdot48 + 5\cdot24 + 5\cdot16+5\cdot24\bigr)\\
+(-1)^{2}\bigl(6\cdot(1+(-1)^{5})\cdot8+10\cdot8\bigr)\Bigr)=0,
\end{multline*}
and
\begin{multline*}
\displaybreak[2] \partial_{t}^{6}\mathbf{K}_{1/f}(0,0) = \frac1{128}%
\Bigl(-1440+ (-1)^{1}\bigl(1\cdot(1+(-1)^{6})\cdot(-240) + 4\cdot(-96) +
11\cdot(-48)\\
\displaybreak[2]
+9\cdot(-48)+6\cdot(-96)\bigr)
+(-1)^{2}\bigl(10\cdot(1+(-1)^{6})\cdot(-24)+5\cdot
(1+(-1)^{6})\cdot(-16)\\
+15\cdot(-16)+15\cdot(-16)\bigr)+(-1)^{3}\bigl(10\cdot(1+(-1)^{6})\cdot(-8)\bigr)\Bigr)=0.
\end{multline*}
\end{example}
Now consider an example in which all the coefficients $b_{n}$, $c_{n}$ from
\eqref{FormulaK} may be easily calculated explicitly from \eqref{Sdirect} and \eqref{dtK0}.
\begin{example}
Consider $q_{f}=c^{2}$ for some constant $c$, taking the function $f = e^{cx}$
as a non-vanishing solution of the equation $f^{\prime\prime}-q_{f} f=0$. Then
the Darboux transformed potential has the form $q_{1/f} = 2(f^{\prime}%
/f)^{2}-q_{f} = c^{2}$.
Consider the function $\mathbf{H}_{f}$ from \eqref{dtKdudvH}. Equation
\eqref{GoursatTh1} for $q_{f}=c^{2}$ reads as $\partial_{u}\partial
_{v}\mathbf{H}_{f} = c^{2}\mathbf{H}_{f}$, hence
\begin{equation}
\label{ExampleHconst}%
\begin{split}
(\partial_{u}-\partial_{v})^{n}\mathbf{H}_{f}(u,v) & = \sum_{k=0}^{n}
(-1)^{n-k}\binom{n}{k}\partial_{u}^{k}\partial_{v} ^{n-k}\mathbf{H}_{f}(u,v)\\
& = (\partial_{u}^{n} + (-1)^{n}\partial_{v}^{n})\mathbf{H}_{f}(u,v) \\
& +\sum_{k=1}^{[n/2]-1} (-1)^{n-k} c^{2k} \binom{n}{k}\partial_{v}^{n-2k}%
\mathbf{H}_{f}(u,v)\\
& + \sum_{k=[n/2]}^{n-1} (-1)^{n-k} c^{2(n-k)} \binom{n}{k}\partial
_{u}^{2k-n}\mathbf{H}_{f}(u,v).
\end{split}
\end{equation}
It follows from \eqref{duH(0)} and \eqref{dvH(0)} that at the point $u=v=0$
every term in \eqref{ExampleHconst} involving a derivative of $\mathbf{H}_{f}$
of order at least 2 with respect to $u$ or at least 1 with respect to $v$
equals zero. Hence from \eqref{dtKdudvH} for any even $n=2m$ we obtain
\begin{equation}
\label{ExampleConstK2m}\partial_{t}^{2m}\mathbf{K}_{f}(0,0)=\frac1{2^{2m}%
}(-1)^{m} c^{2m}\binom{2m}{m}\mathbf{H}_{f}(0,0) = \frac{(-1)^{m} c^{2m+1}%
}{2^{2m+1}}\binom{2m}{m},
\end{equation}
and for any odd $n=2m+1$ we obtain
\begin{equation}
\label{ExampleConstK2m+1}\partial_{t}^{2m+1}\mathbf{K}_{f}(0,0)=\frac
1{2^{2m+1}}(-1)^{m} c^{2m}\binom{2m+1}{m}\partial_{u}\mathbf{H}_{f}(0,0) =
\frac{(-1)^{m} c^{2m+2}}{2^{2m+2}}\binom{2m+1}{m}.
\end{equation}
Similarly for the kernel $\mathbf{K}_{1/f}$ we have
\[
\partial_{t}^{2m}\mathbf{K}_{1/f}(0,0)=-\frac{(-1)^{m} c^{2m+1}}{2^{2m+1}%
}\binom{2m}{m},\qquad\partial_{t}^{2m+1}\mathbf{K}_{1/f}(0,0)= \frac{(-1)^{m}
c^{2m+2}}{2^{2m+2}}\binom{2m+1}{m}.
\]
Hence by Theorem \ref{Th Formula K} the expansion coefficients have the form
\begin{align*}
c_{2m} & = \frac{(-1)^{m} c^{2m+1}}{2^{2m+1} (m!)^{2}}, & b_{2m} & =
\frac{(-1)^{m} c^{2m+1}}{2^{2m+1} (m!)^{2}},\\
c_{2m+1} & = \frac{(-1)^{m+1} c^{2m+2}}{2^{2m+2} m! (m+1)!}, & b_{2m+1} &
= \frac{(-1)^{m} c^{2m+2}}{2^{2m+2} m! (m+1)!}.
\end{align*}
Now we show that it is possible to obtain the same coefficients from
\eqref{Sdirect} and \eqref{dtK0}. Note that any term in \eqref{dtK0} involving
a positive order derivative of $q_{f}$ is equal to zero. Hence $n_{1}%
=n_{2}=\ldots=n_{\ell}=0$ and $n=2\ell+d$. Moreover, $d\le1$ and is equal
either to 0 or 1 depending on the parity of $n$. So we have to calculate only
one coefficient, $S^{2m}_{m;0;(0,\ldots,0)}$ for $n=2m$ and $S^{2m+1}%
_{m;1;(0,\ldots,0)}$ for $n=2m+1$. Formula \eqref{Sdirect} for the coefficient
$S^{2m}_{m;0;(0,\ldots,0)}$ takes the form
\[
S^{2m}_{m;0;(0,\ldots,0)} = \sum_{d_{2}=0}^{1} \bigl(1+\delta(d_{2}%
)\bigr)\sum_{d_{3}=0}^{d_{2}+1}\bigl(1+\delta(d_{3})\bigr)\cdots\sum_{d_{m}%
=0}^{d_{m-1}+1}\bigl(1+\delta(d_{m})\bigr).
\]
We claim that
\begin{equation}
\label{ExampleConstCoeff}\sum_{d_{m-k+1}=0}^{d_{m-k}+1} \bigl(1+\delta
(d_{m-k+1})\bigr) \cdots\sum_{d_{m}=0}^{d_{m-1}+1}\bigl(1+\delta
(d_{m})\bigr) = \binom{2k+1+d_{m-k}}{k}.
\end{equation}
Indeed, for $k=1$ we have $\sum_{d_{m}=0}^{d_{m-1}+1}\bigl(1+\delta
(d_{m})\bigr) = d_{m-1}+3 = \binom{3+d_{m-1}}{1}$. Assume that the claim is
correct for some $k$, then for $k+1$ we obtain
\begin{multline*}
\sum_{d_{m-k}=0}^{d_{m-k-1}+1}\bigl(1+\delta(d_{m-k})\bigr)\cdot
\binom{2k+1+d_{m-k}}{k} \\= 2\binom{2k+1}{k}+\binom{2k+2}{k}+\ldots
+\binom{2k+2+d_{m-k-1}}{k},
\end{multline*}
and the induction step follows from the equality $2\binom{2k+1}{k}%
+\binom{2k+2}{k} = \binom{2k+3}{k+1}$ and the well-known combinatorial
identity $\binom{n+1}{k+1}=\binom{n}{k}+\binom{n-1}{k}+\ldots+\binom{n-s}%
{k}+\binom{n-s}{k+1}$, see, e.g., \cite{Comtet1974}. For $k=m-1$ we obtain
from \eqref{ExampleConstCoeff} that $S^{2m}_{m;0;(0,\ldots,0)} = \binom
{2m-1}{m-1}$ and $\partial_{t}^{2m}\mathbf{K}_{f}(0,0) = \frac{(-1)^{m}%
}{2^{2m+1}}2c\binom{2m-1}{m-1}(c^{2})^{m} = \frac{(-1)^{m} c^{2m+1}}{2^{2m+1}%
}\binom{2m}{m}$.
Similarly, for $n=2m+1$ we obtain from \eqref{ExampleConstCoeff} for $k=m$
that $S^{2m+1}_{m;1;(0,\ldots,0)} = \binom{2m+1}{m}$ and $\partial_{t}%
^{2m+1}\mathbf{K}_{f}(0,0) = \frac{(-1)^{m}}{2^{2m+2}}c^{2}\binom{2m+1}%
{m+1}(c^{2})^{m} = \frac{(-1)^{m} c^{2m+2}}{2^{2m+2}}\binom{2m+1}{m}$, exactly
as in \eqref{ExampleConstK2m} and \eqref{ExampleConstK2m+1}.
\end{example}
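The closed forms $S^{2m}_{m;0;(0,\ldots,0)}=\binom{2m-1}{m-1}$ and $S^{2m+1}_{m;1;(0,\ldots,0)}=\binom{2m+1}{m}$ obtained in the example admit an independent check against the recurrence relations; a sketch (an ad-hoc reimplementation of the recurrence, not from the text):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, l, d, ns):
    # recurrence relations for S^n_{l;d;(n_1,...,n_l)}; invalid lists give 0
    if l < 0 or d < 0 or len(ns) != l or 2 * l > n \
            or d > n - 2 * l or sum(ns) + 2 * l + d != n:
        return Fraction(0)
    if l == 0:
        return Fraction(1, 2) if n == 0 else Fraction(1)
    first = ((1 + (-1) ** (n - 1)) if d == 1 else 1) * S(n - 1, l, d - 1, ns)
    second = sum(comb(d + nk, nk)
                 * S(n - 1, l - 1, d + nk + 1,
                     tuple(sorted(ns[:i] + ns[i + 1:])))
                 for i, nk in enumerate(ns) if nk not in ns[:i])
    return first + second

for m in range(1, 9):
    zeros = (0,) * m
    assert S(2 * m, m, 0, zeros) == comb(2 * m - 1, m - 1)
    assert S(2 * m + 1, m, 1, zeros) == comb(2 * m + 1, m)
```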
In the following example we calculate the expansion coefficients $b_{n}$ and
$c_{n}$, $n\le22$, for a reflectionless potential in the one-dimensional
quantum scattering theory, $q_{\sech}(x) = 1-2\sech^{2} x$, see, e.g.,
\cite{KrT2012}, \cite{Lamb}. For this potential, in contrast to the previous
examples, neither the higher derivatives of $q_{f}$ nor the higher
derivatives with respect to $t$ of the transmutation kernel vanish identically.
\begin{example}
\label{ExampleCosh} The potential $q_{\sech}(x) = 1-2\sech^{2} x$ can be
obtained as a result of the Darboux transformation of the equation
$u^{\prime\prime}=u$ with the potential $q_{\cosh}\equiv1$ with respect to the
solution $f(x)=\cosh x$. The transmutation operator for the operator
$A_{\cosh}=\partial_{x}^{2}-1$ was calculated in \cite[Example 3]{CKT}. Its
kernel is given by the expression
\[
\mathbf{K}_{\cosh}(x,t)=\frac{1}{2}\frac{\sqrt{x^{2}-t^{2}}I_{1}(\sqrt
{x^{2}-t^{2}})}{x-t},
\]
where $I_{1}$ is the modified Bessel function of the first kind. Even though
in \cite{KrT2012} a formula for the integral kernel $\mathbf{K}_{\sech}$ was
presented, it is not well suited for calculating higher order derivatives with
respect to $t$. Using \eqref{K2Vekua}, with the help of the Maple 12 software,
we obtain another representation
\begin{multline}
\label{Ksech}\mathbf{K}_{\sech}(x,t) =\frac12\biggl(I_{1}(x) - I_{0}(x)\tanh x
+\tanh x\int_{0}^{t}\frac{\sqrt{x^{2}-s^{2}} I_{1}(\sqrt{x^{2}-s^{2}})}%
{x-s}ds\\
+\int_{0}^{t} \frac{(xs-x^{2})I_{0}(\sqrt{x^{2}-s^{2}}) +\sqrt{x^{2}-s^{2}}
I_{1}(\sqrt{x^{2}-s^{2}})}{(x-s)^{2}}ds \biggr).
\end{multline}
Although the integral kernel $\mathbf{K}_{\sech}$ cannot be evaluated
explicitly, it is possible to expand it into a Taylor series at the point
$t=0$. Expanding the kernels $\mathbf{K}_{\cosh}$ and $\mathbf{K}_{\sech}$
into the corresponding Taylor series and taking the limit as $x\to0$ we obtain
(again with the help of Maple 12 software) that
\begin{multline*}
\mathbf{K}_{\cosh}(0, t) = \frac{1}{4}\,t - \frac{1}{32} \,t^{3} + \frac
{1}{768} \,t^{5} - \frac{1}{36864} \,t^{7} + \frac{ 1}{2949120} \,t^{9} -
\frac{1}{353894400} \,t^{ 11} \\
+ \frac{1}{59454259200} \,t^{13}
- \frac{1}{13317754060800} \,t^{15} + \frac{1}{3835513169510400} \,t^{17} \\
- \frac{1}{1380784741023744000} \,t^{19}
+ \frac{1}{607545286050447360000}\,t^{21} + o(t^{22})
\end{multline*}
and
\begin{multline*}
\mathbf{K}_{\sech}(0, t) =- \frac{1}{4} \,t + \frac{1}{96 } \,t^{3} - \frac
{1}{3840} \,t^{5} + \frac{1}{258048} \,t^{7} - \frac{1}{26542080} \,t^{9} +
\frac{1}{ 3892838400} \,t^{11} \\
- \frac{1}{772905369600} \,t^{13} + \frac{1}{199766310912000} \,t^{15} - \frac{1}{65203723881676800} \,t^{17} \\
+ \frac{1}{26234910079451136000} \,t^{19}
- \frac{1}{12758451007059394560000} \,t^{21} + o(t^{22}).
\end{multline*}
According to Theorem \ref{Th Formula K} we obtain the following table of the
expansion coefficients for the integral kernel $\mathbf{K}_{\sech}$. Since all
even coefficients are equal to 0, we do not include them in the table. To save
space, we represent all odd coefficients in the form of a fraction
$\frac{x_{n}}{2^{n+1}n!}$.
\begin{center}
\noindent%
\begin{tabular}
[c]{ccccccc}\hline
$n$ & $1$ & $3$ & $5$ & $7$ & $9$ & $11$\\\hline
$b_{n}$ & $-\frac{1}{2^{2} 1!}$ & $\frac{1}{2^{4} 3!}$ & $-\frac{2}{2^{6} 5!}$
& $\frac{5}{2^{8} 7!}$ & $-\frac{14}{2^{10} 9!}$ & $\frac{42}{2^{12} 11!}$\\
\hline
$c_{n}$ & $-\frac{1}{2^{2} 1!}$ & $\frac{3}{2^{4} 3!}$ & $-\frac{10}{2^{6}
5!}$ & $\frac{35}{2^{8} 7!}$ & $-\frac{126}{2^{10} 9!}$ & $\frac{462}{2^{12}
11!}$\\\hline
\end{tabular}
\noindent%
\begin{tabular}
[c]{cccccc}\hline
$n$ & $13$ & $15$ & $17$ & $19$ &
$21$\\\hline
$b_{n}$ & $-\frac{132}{2^{14} 13!}$ & $\frac{429}{2^{16} 15!}$ & $-\frac{1430}{2^{18}
17!}$ & $\frac{4862}{2^{20} 19!}$ & $-\frac{16796}{2^{22} 21!}$\\\hline
$c_{n}$ & $-\frac{1716}{2^{14} 13!}$ & $\frac{6435}{2^{16} 15!}$ &
$-\frac{24310}{2^{18} 17!}$ & $\frac{92378}{2^{20} 19!}$ & $-\frac
{352716}{2^{22} 21!}$\\\hline
\end{tabular}
\end{center}
For $f = \cosh x$ we have $h=f^{\prime}(0)=0$. The derivatives at the point
$x=0$ of the potential $q_{1}\equiv1$ are equal to $q_{1}^{(0)}(0)=1$,
$q_{1}^{(n)}(0)=0$, $n\ge1$, the non-zero values of the derivatives of the
potential $q(x) = 1-2\sech^{2}(x)$ at the same point are given in the
following table.
\begin{center}
\noindent%
\begin{tabular}
[c]{cccccccc}\hline
$n$ & $0$ & $2$ & $4$ & $6$ & $8$ & $10$ & $12$\\\hline
$q^{(n)}(0)$ & $-1$ & $4$ & $-32$ & $544$ & $-15872$ & $707584$ &
$-44736512$\\\hline
\end{tabular}
\noindent%
\begin{tabular}
[c]{ccccc}\hline
$n$ & $14$ & $16$ & $18$ & $20$\\\hline
$q^{(n)}(0)$ & $3807514624$ & $-419730685952$ & $58177770225664$ &
$-9902996106248192$\\\hline
\end{tabular}
\end{center}
The table below presents the values of the expression in parentheses appearing
in \eqref{dtK0}, obtained from the derivatives of the potentials presented
above with the coefficients $S^{n}_{\ell;d;(n_{1},\ldots,n_{\ell})}$
calculated by Proposition \ref{Prop S reccurrence}. All calculations were
performed with the help of Matlab 7 and the results agree with the ones
obtained from the explicit formulae for the integral kernels (see the table
for $b_{n}$ and $c_{n}$ above). The value $n=21$ is chosen only to keep the
size of the table reasonable. Our Matlab program calculated exact values of
the derivatives $\partial_{t}^{n}\mathbf{K}_{\cosh}(0,0)$ for $n\le30$ and
exact values of the derivatives $\partial_{t}^{n}\mathbf{K}_{\sech}(0,0)$ for
$n\le26$. To calculate the derivatives for larger values of $n$ one has to
use arbitrary-precision arithmetic to overcome the limitations of machine precision.
\begin{center}
\noindent%
\begin{tabular}
[c]{ccccccc}\hline
$n$ & $1$ & $3$ & $5$ & $7$ & $9$ & $11$ \\\hline
$2^{n+1}\partial_{t}^{n}\mathbf{K}_{\cosh}$ & $1$ & $-3$ & $10$ & $-35$ &
$126$ & $-462$\\\hline
$2^{n+1}\partial_{t}^{n}\mathbf{K}_{\sech}$ & $-1$ & $1$ & $-2$ & $5$ & $-14$
& $42$\\\hline
\end{tabular}
\noindent%
\begin{tabular}
[c]{cccccc}\hline
$n$ & $13$ & $15$ & $17$ & $19$ &
$21$\\\hline
$2^{n+1}\partial_{t}^{n}\mathbf{K}_{\cosh}$ & $1716$ & $-6435$ & $24310$ & $-92378$ & $352716$\\\hline
$2^{n+1}\partial_{t}^{n}\mathbf{K}_{\sech}$ & $-132$ & $429$ & $-1430$ & $4862$ & $-16796$\\\hline
\end{tabular}
\end{center}
\end{example}
\section{Approximate construction of integral kernels}\label{SectApproxKernel}
In this section we discuss two methods of approximate construction of integral
kernels of transmutation operators. The first method is based on the expansion
theorem and computation of the generalized Taylor coefficients by Theorem
\ref{Th Formula K} and formula \eqref{dtK0}. The second method is based on the
completeness result of hyperbolic formal powers and of generalized wave
polynomials (Proposition \ref{Prop RungeZ} and \cite[Theorem 22]{KKTT}) and on
approximating the Goursat data for the integral kernel by generalized wave
polynomials. However, we present only a description of the main ideas of
these approximate methods and do not carry out a detailed analysis. Such
analysis, involving the study of the approximation properties of the functions
$\{u_{n}(x,x)\}$, convergence rate estimates and a detailed comparison with
existing numerical techniques, e.g., successive approximations and the series
expansions from \cite{Boumenir2006}, goes beyond the scope of the present
article. The authors hope to carry out such an analysis and present the
results shortly.
The first method is applicable when we know the derivatives of the
function $f\in C^{n}[-b,b]$ at the point $x=0$ of all orders up to some order
$n$, $n\geq2$, i.e.\ when we know the coefficients in the Taylor formula
\[
f(x)=\sum_{k=0}^{n}f_{k}x^{k}+o(x^{n}).
\]
Suppose additionally that the function $f$ does not vanish on $[-b,b]$ and is
normalized so that $f(0)=1$, i.e.\ $f_{0}=1$. Then we can calculate the Taylor
coefficients up to order $n$ of the function $1/f$. For example, as
follows from \cite[Theorem 11.7]{Charal2002} or directly from the Fa\`{a} di
Bruno formula \cite[Theorem 11.4]{Charal2002},
\[
\frac{1}{f(x)}=1+\sum_{k=1}^{n}\tilde{f}_{k}x^{k}+o(x^{n}),
\]
where
\begin{equation}
\tilde{f}_{k}=\sum_{\substack{m_{1}+2m_{2}+\ldots+km_{k}=k\\m_{1}\geq
0,\ldots,m_{k}\geq0}}(-1)^{m_{1}+\ldots+m_{k}}\binom{m_{1}+\ldots+m_{k}}%
{m_{1},\ldots,m_{k}}\prod_{j=1}^{k}f_{j}^{m_{j}}, \label{fInverseTaylor}%
\end{equation}
and $\binom{m_{1}+\ldots+m_{k}}{m_{1},\ldots,m_{k}}=\frac{(m_{1}+\ldots
+m_{k})!}{m_{1}!\cdot\ldots\cdot m_{k}!}$ are multinomial coefficients.
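For illustration, formula \eqref{fInverseTaylor} can be implemented directly; the following Python sketch (the function names are ours) uses exact rational arithmetic to avoid rounding errors:

```python
from fractions import Fraction
from math import factorial

def multiplicity_vectors(k):
    """All tuples (m_1, ..., m_k) with m_j >= 0 and m_1 + 2*m_2 + ... + k*m_k = k."""
    def rec(j, rem):
        if j == 0:
            if rem == 0:
                yield ()
            return
        for m in range(rem // j + 1):
            for head in rec(j - 1, rem - j * m):
                yield head + (m,)
    yield from rec(k, k)

def inverse_taylor(f):
    """Taylor coefficients of 1/f from those of f = [f_0, f_1, ..., f_n],
    with f_0 = 1, computed by the multinomial formula (fInverseTaylor)."""
    assert f[0] == 1
    g = [Fraction(1)]
    for k in range(1, len(f)):
        s = Fraction(0)
        for ms in multiplicity_vectors(k):
            m = sum(ms)
            term = Fraction((-1) ** m * factorial(m))
            for mj in ms:
                term /= factorial(mj)           # multinomial coefficient
            for j, mj in enumerate(ms, start=1):
                term *= Fraction(f[j]) ** mj    # product of f_j^{m_j}
            s += term
        g.append(s)
    return g
```

Feeding in the Taylor coefficients of $\cosh x$ reproduces the known expansion of $\sech x$.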
Having the coefficients $f_{k}$ and $\tilde f_{k}$, $k\le n$, we easily
calculate the Taylor coefficients up to order $n-2$ of the potentials $q_{f} =
f^{\prime\prime}/f$ and $q_{1/f} = f\cdot(1/f)^{\prime\prime}$. By
\eqref{dtK0} we obtain the values of the derivatives $\partial_{t}%
^{m}\mathbf{K}_{f}(0,0)$ and $\partial_{t}^{m}\mathbf{K}_{1/f}(0,0)$, $m\le
n-1$, and by Theorem \ref{Th Formula K} we find the coefficients $b_{k}$ and
$c_{k}$, $k\le n-1$, in the representation \eqref{FormulaK}. The obtained
truncated series is an approximation of the integral kernel $\mathbf{K}_{f}$.
If, conversely, the Taylor coefficients of the potential $q_{f}$ and the
value of the parameter $h=f^{\prime}(0)$ are known, the reconstruction of the
Taylor coefficients of the solution $f$ also presents no difficulty.
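The passage from the Taylor coefficients of $f$ and $1/f$ to those of the potentials amounts to termwise differentiation followed by a Cauchy product; a minimal Python sketch (the helper names are ours):

```python
from fractions import Fraction

def series_mul(a, b):
    """Cauchy product of two truncated Taylor series given as coefficient lists."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def series_d2(a):
    """Coefficients of the second derivative of a truncated Taylor series."""
    return [(k + 2) * (k + 1) * a[k + 2] for k in range(len(a) - 2)]

def potential_coeffs(f, f_inv):
    """Taylor coefficients of q_f = f''/f = f'' * (1/f), up to order n - 2,
    given the coefficients of f and of 1/f."""
    return series_mul(series_d2(f), f_inv)
```

For $f=\cosh x$ this yields the constant potential $q_{f}\equiv1$, and swapping the arguments gives the Taylor coefficients of $q_{1/f}=1-2\sech^{2}x$, in agreement with the derivative table in Example \ref{ExampleCosh}.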
\begin{example}
\label{ExampleCosh2} Consider the function $f=\cosh x$ as in Example
\ref{ExampleCosh}. One can easily verify that the first Taylor coefficients
for the function $1/f$ calculated by \eqref{fInverseTaylor} coincide with the
coefficients in the known expansion
\[
\frac{1}{\cosh x}=\sech x=1-\frac{1}{2}x^{2}+\frac{5}{24}x^{4}-\frac{61}%
{720}x^{6}+\frac{277}{8064}x^{8}+O(x^{10}),
\]
and the calculated Taylor coefficients for the potentials $q_{f}$ and
$q_{1/f}$ coincide with the values presented in Example \ref{ExampleCosh}.%
\begin{figure}[tbh]
\centering
\includegraphics[
natheight=2.303800in,
natwidth=3.135700in,
height=2.93in,
width=3.975in
]%
{SechKernel4.png}
\caption{The integral kernel $\mathbf{K}_{\sech}$ in the domain
$\overline{\mathbf{R}}$ given by $b=4$.}
\end{figure}
We compare the exact integral kernel $\mathbf{K}_{\sech}$ given by
\eqref{Ksech} with its approximations based on the generalized Taylor
coefficients calculated in Example \ref{ExampleCosh}. To illustrate the
dependence of the approximation error on the domain $\overline{\mathbf{R}}$
and on the number of coefficients used, we present the resulting approximation
error for three different domains corresponding to the values $b=1$, $b=2$ and
$b=4$, and for different numbers of generalized Taylor coefficients. The functions
$\varphi_{k}$, defined by \eqref{phik} and necessary for constructing the
generalized wave polynomials, are calculated using two Matlab routines from
the Spline Toolbox: on each step the integrand is approximated by a spline
with 5000 knots using the command \texttt{spapi} and then it is integrated
using \texttt{fnint}. The resulting kernel $\mathbf{K}_{\sech}$ and its
approximations were computed on the mesh of $100\times100$ equally spaced
points in $\overline{\mathbf{R}}$. The obtained results, together with the
graphs of the integral kernel $\mathbf{K}_{\sech}$ and of the approximation
error, are given below.
\begin{center}
\noindent%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=1$}\\\hline
$N$ & Approx. error\\\hline
$1$ & $0.12833$\\
$3$ & $0.021458$\\
$5$ & $0.0017866$\\
$7$ & $8.9155\cdot10^{-5}$\\
$9$ & $2.9655\cdot10^{-6}$\\
$11$ & $7.0469\cdot10^{-8}$\\
$13$ & $1.2562\cdot10^{-9}$\\
$15$ & $7.3683\cdot10^{-11}$\\
$17$ & $7.3991\cdot10^{-11}$\\
$19$ & $7.3989\cdot10^{-11}$\\\hline
\end{tabular}
\qquad%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=2$}\\\hline
$N$ & Approx. error\\\hline
$5$ & $0.21204$\\
$7$ & $0.043909$\\
$9$ & $0.0059553$\\
$11$ & $0.00057228$\\
$13$ & $4.1076\cdot10^{-5}$\\
$15$ & $2.288\cdot10^{-6}$\\
$17$ & $1.0182\cdot10^{-7}$\\
$19$ & $3.7047\cdot10^{-9}$\\
$21$ & $3.3373\cdot10^{-10}$\\
$23$ & $3.3373\cdot10^{-10}$\\\hline
\end{tabular}
\qquad%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=4$}\\\hline
$N$ & Approx. error\\\hline
$13$ & $1.8261$\\
$15$ & $0.39987$\\
$17$ & $0.070023$\\
$19$ & $0.010037$\\
$21$ & $0.0011998$\\
$23$ & $0.00012146$\\
$25$ & $1.055\cdot10^{-5}$\\
$27$ & $7.9493\cdot10^{-7}$\\
$29$ & $5.2453\cdot10^{-8}$\\
$31$ & $3.0562\cdot10^{-9}$\\\hline
\end{tabular}
\end{center}
\begin{figure}[tbh]
\centering
\includegraphics[
natheight=2.303800in,
natwidth=3.135700in,
height=2.93in,
width=3.975in
]%
{ErrorSechKernel4_31.png}%
\caption{The error of approximation of the integral kernel $\mathbf{K}_{\sech}$
by the generalized wave polynomial of order $N=31$ in the domain $\overline{\mathbf{R}}$ given by $b=4$.}
\end{figure}
In order to estimate the approximation error, the exact formula
\eqref{Ksech} for the kernel $\mathbf{K}_{\sech}$ was used, with the
corresponding integrals computed numerically. As can be seen from
\eqref{Ksech}, for $t$ close to $x$ the integrands involve subtraction of
close values and division by values close to zero. For this reason the kernel
$\mathbf{K}_{\sech}$ computed via \eqref{Ksech} is not exact. This explains
why, in the above tables, the reported approximation error stops decreasing
after a certain value of $N$: it is limited by the computational error of
$\mathbf{K}_{\sech}$.
In contrast to $\mathbf{K}_{\sech}$, the integral kernel $\mathbf{K}%
_{\cosh}$ does not involve integration and may be computed more precisely. The
approximations of the integral kernel $\mathbf{K}_{\cosh}$ were computed for
$b=2$ and the resulting absolute errors obtained for different numbers of
expansion coefficients are presented in the table below.
\begin{center}
\noindent%
\begin{tabular}
[c]{|c|c||c|c||c|c|}\hline
\multicolumn{6}{|c|}{$b=2$}\\\hline
$N$ & Approx. error & $N$ & Approx. error & $N$ & Approx. error\\\hline
$1$ & $1.7878$ & $11$ & $0.00074903$ & $21$ & $1.316\cdot10^{-10}$\\
$3$ & $1.088$ & $13$ & $5.2024\cdot10^{-5}$ & $23$ & $3.3386\cdot10^{-12}$\\
$5$ & $0.33724$ & $15$ & $2.8243\cdot10^{-6}$ & $25$ & $7.5051\cdot10^{-14}$\\
$7$ & $0.063779$ & $17$ & $1.2312\cdot10^{-7}$ & $27$ & $6.9944\cdot10^{-15}%
$\\
$9$ & $0.0081416$ & $19$ & $4.4042\cdot10^{-9}$ & $29$ & $6.6613\cdot10^{-15}%
$\\\hline
\end{tabular}
\end{center}
\end{example}
The first method of approximate construction of the integral kernels requires
the knowledge of higher order derivatives of the potentials $q_{f}$ and
$q_{1/f}$ at the origin. When such derivatives are unknown or hard
to obtain analytically, instead of performing numerical differentiation we can
approximate the integral kernel by approximating its Goursat data given by
\eqref{GoursatKh2} with truncated series of the form \eqref{+1} and \eqref{-1}.
Indeed, suppose that the numbers $c_{0},\ldots,c_{N}$ and $b_{1},\ldots,b_{N}$
are such that
\[
\biggl|\frac{1}{2}\bigl(\mathbf{K}_{f}(x,x)+\mathbf{K}_{f}(x,-x)\bigr)-c_{0}%
u_{0}(x,x)-\sum_{n=1}^{N}c_{n}u_{2n-1}(x,x)\biggr|<\varepsilon_{1}%
\]
and
\[
\biggl|\frac{1}{2}\bigl(\mathbf{K}_{f}(x,x)-\mathbf{K}_{f}(x,-x)\bigr)-\sum
_{n=1}^{N}b_{n}u_{2n}(x,x)\biggr|<\varepsilon_{2}%
\]
for every $x\in\lbrack-b,b]$. Consider the functions
\begin{equation}
K(x,t)=c_{0}u_{0}(x,t)+\sum_{n=1}^{N}c_{n}u_{2n-1}(x,t)+\sum_{n=1}^{N}%
b_{n}u_{2n}(x,t) \label{Kapprox}%
\end{equation}
and $\widetilde{\mathbf{K}}_{f}=T_{f}^{-1}\mathbf{K}_{f}$ and $\widetilde
{K}=T_{f}^{-1}K$. Then by the definition of the Goursat-to-Goursat
transmutation operator
\[%
\begin{pmatrix}
\widetilde{\mathbf{K}}_{f}(x,x)\\
\widetilde{\mathbf{K}}_{f}(x,-x)
\end{pmatrix}
=T_{G}^{-1}%
\begin{pmatrix}
\mathbf{K}_{f}(x,x)\\
\mathbf{K}_{f}(x,-x)
\end{pmatrix}
\quad\text{and}\quad%
\begin{pmatrix}
\widetilde{K}(x,x)\\
\widetilde{K}(x,-x)
\end{pmatrix}
=T_{G}^{-1}%
\begin{pmatrix}
K(x,x)\\
K(x,-x)
\end{pmatrix}
,
\]
hence due to the boundedness of the operator $T_{G}^{-1}$
\begin{multline*}
\displaybreak[2]
\max\biggl(\max_{x\in\lbrack-b,b]}\bigl|\widetilde{\mathbf{K}}_{f}%
(x,x)-\widetilde{K}(x,x)\bigr|,\max_{x\in\lbrack-b,b]}\bigl|\widetilde
{\mathbf{K}}_{f}(x,-x)-\widetilde{K}(x,-x)\bigr|\biggr)\\
\displaybreak[2]
\leq\Vert T_{G}^{-1}\Vert\max\biggl(\max_{x\in\lbrack-b,b]}\bigl|\mathbf{K}%
_{f}(x,x)-K(x,x)\bigr|,\max_{x\in\lbrack-b,b]}\bigl|\mathbf{K}_{f}%
(x,-x)-K(x,-x)\bigr|\biggr)\\
\displaybreak[2]
\leq\Vert T_{G}^{-1}\Vert\max_{x\in\lbrack-b,b]}\biggl(\biggl|\frac{1}%
{2}\bigl(\mathbf{K}_{f}(x,x)+\mathbf{K}_{f}(x,-x)\bigr)-\frac{1}%
{2}\bigl(K(x,x)+K(x,-x)\bigr)\biggr|\\
+\biggl|\frac{1}{2}\bigl(\mathbf{K}_{f}(x,x)-\mathbf{K}_{f}(x,-x)\bigr)-\frac
{1}{2}\bigl(K(x,x)-K(x,-x)\bigr)\biggr|\biggr)<\Vert T_{G}^{-1}\Vert
(\varepsilon_{1}+\varepsilon_{2}),
\end{multline*}
where we have used equalities $\frac{1}{2}\bigl(K(x,x)+K(x,-x)\bigr)=c_{0}%
u_{0}(x,x)+\sum_{n=1}^{N}c_{n}u_{2n-1}(x,x)$ and $\frac{1}{2}%
\bigl(K(x,x)-K(x,-x)\bigr)=\sum_{n=1}^{N}b_{n}u_{2n}(x,x)$. We obtain from the
proof of \cite[Theorem 3]{KKTT} that for every $(x,t)\in\overline{\mathbf{R}%
}$
\[
\bigl|\widetilde{\mathbf{K}}_{f}(x,t)-\widetilde{K}(x,t)\bigr|\leq3\Vert
T_{G}^{-1}\Vert(\varepsilon_{1}+\varepsilon_{2}),
\]
hence for every $(x,t)\in\overline{\mathbf{R}}$
\[
\bigl|\mathbf{K}_{f}(x,t)-K(x,t)\bigr|\leq3\Vert T_{f}\Vert\cdot\Vert
T_{G}^{-1}\Vert(\varepsilon_{1}+\varepsilon_{2}).
\]
That is, if we approximate the function $g_{1}(x):=\frac{1}{2}\bigl(\mathbf{K}%
_{f}(x,x)+\mathbf{K}_{f}(x,-x)\bigr)=\frac{h}{2}+\frac{1}{4}\int_{0}^{x}%
q_{f}(s)\,ds$ by the functions $\{u_{0}(x,x)\}\cup\{u_{2n-1}(x,x)\}_{n\geq1}$
and the function $g_{2}(x):=\frac{1}{2}\bigl(\mathbf{K}_{f}(x,x)-\mathbf{K}%
_{f}(x,-x)\bigr)=\frac{1}{4}\int_{0}^{x}q_{f}(s)\,ds$ by the functions
$\{u_{2n}(x,x)\}_{n\geq1}$, we obtain an approximation having the form
\eqref{Kapprox} of the integral kernel. It is worth mentioning that for this
method we do not need to know the Darboux-associated potential $q_{1/f}$, nor
do we impose additional smoothness conditions on the potential $q_{f}$. The
approximation of the functions $g_{1}$ and $g_{2}$ by the corresponding
combinations of the functions $u_{n}(x,x)$ can be done in several ways. We may
use the least squares method to obtain a good, though generally not the best,
approximation. We may reformulate the approximation problem as a linear
programming problem and solve it. Under the additional assumption that the
corresponding systems of the functions $u_{n}(x,x)$ satisfy the Haar
condition, we may use the Remez algorithm. We refer the reader to
\cite[Section 6]{KKTT} and to \cite[\S 7]{Meinardus}, \cite[Chapter 6]{Rice}
and \cite[Chapter 1]{Rivlin} for further details.
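For instance, the least-squares variant reduces to a single linear solve. In the Python sketch below (ours; it uses NumPy), the basis would consist of the traces $u_{n}(x,x)$ in the method above, but any callables work:

```python
import numpy as np

def lsq_coefficients(g, basis, xs):
    """Least-squares coefficients c_n such that sum_n c_n * basis[n](x)
    approximates g(x) on the sample points xs."""
    A = np.array([[u(x) for u in basis] for x in xs])  # design matrix
    b = np.array([g(x) for x in xs])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

Here $g$ would be one of the Goursat data functions $g_{1}$ or $g_{2}$ introduced above.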
\begin{example}
We approximated the integral kernels $\mathbf{K}_{\sech}$ and $\mathbf{K}%
_{\cosh}$ from Example \ref{ExampleCosh2} by the second method described above. All
the parameters are taken as in the previous example. The Goursat data were
approximated by corresponding functions $u_{n}(x,x)$ with the help of the
Remez simple exchange algorithm, see e.g., \cite[\S 7]{Meinardus}.%
\begin{figure}[tbh]
\centering
\includegraphics[
natheight=2.342900in,
natwidth=3.135700in,
height=2.93in,
width=3.975in
]%
{ErrorCoshRemez2_19.png}%
\caption{The error of approximation of the integral kernel $\mathbf{K}_{\cosh}$
by the generalized wave polynomial of order $N=19$ in the domain
$\overline{\mathbf{R}}$ given by $b=2$. The generalized wave polynomial is
obtained with the use of the Remez simple exchange algorithm.}
\end{figure}
\begin{center}
\noindent%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=2$, $\mathbf{K}_{\sech}$ kernel}\\\hline
$N$ & Approx. error\\\hline
$5$ & $0.0045993$\\
$9$ & $9.3687\cdot10^{-6}$\\
$13$ & $4.1549\cdot10^{-9}$\\
$17$ & $3.3354\cdot10^{-10}$\\\hline
\end{tabular}
\qquad%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=4$, $\mathbf{K}_{\sech}$ kernel}\\\hline
$N$ & Approx. error\\\hline
$13$ & $7.4042\cdot10^{-5}$\\
$17$ & $2.342\cdot10^{-7}$\\
$21$ & $2.789\cdot10^{-10}$\\
$25$ & $4.9467\cdot10^{-11}$\\\hline
\end{tabular}
\qquad%
\begin{tabular}
[c]{|c|c|}\hline
\multicolumn{2}{|c|}{$b=2$, $\mathbf{K}_{\cosh}$ kernel}\\\hline
$N$ & Approx. error\\\hline
$5$ & $0.0052907$\\
$9$ & $1.2563\cdot10^{-5}$\\
$13$ & $6.6227\cdot10^{-9}$\\
$17$ & $1.1813\cdot10^{-12}$\\
$19$ & $1.0325\cdot10^{-14}$\\\hline
\end{tabular}
\end{center}
\end{example}
\section{Applications to Sturm-Liouville spectral problems}\label{SectASL}
The solution of Sturm-Liouville spectral problems is a straightforward application of the developed theory. In this section we briefly explain the convenience of the approximation \eqref{Kapprox} of the integral kernel in the simplest setting of Dirichlet boundary conditions. The separate paper \cite{KrT2013} contains further details.
Consider the spectral problem for the equation
\begin{equation}\label{EqSLproblem}
-u''+qu=\omega^2 u
\end{equation}
on the interval $[0,b]$ with the boundary conditions
\begin{equation}\label{EqSLBC}
u(0)=u(b)=0.
\end{equation}
Let $\mathbf{T}_h$ be a transmutation operator for \eqref{EqSLproblem}, where we assume that the potential $q$ is extended onto $[-b,b]$. From \eqref{mapping IC} we have that the function $s(x;\omega):=\mathbf{T}_h(\sin\omega x)$ is a solution of \eqref{EqSLproblem} satisfying the first of the conditions \eqref{EqSLBC}. Thus, the characteristic equation of the problem \eqref{EqSLproblem}, \eqref{EqSLBC} has the form
\begin{equation*}
\sin\omega b +\int_{-b}^{b}\mathbf{K}(b,t;h)\sin\omega t\,dt=0.
\end{equation*}
Let $f$ be a non-vanishing solution of the equation $-u''+qu=0$ on $[-b,b]$ normalized as $f(0)=1$ and $h:=f'(0)$. Consider an approximation $K_N(x,t)$ of the integral kernel $\mathbf{K}(x,t;h)$ of the form \eqref{Kapprox} and define $s_N(x;\omega):=\sin\omega x+\int_{-x}^x K_N(x,t)\sin\omega t\,dt$. By the definition of the generalized wave polynomials \eqref{um} and the parity relations \eqref{umParity} we have
\begin{equation}\label{sN}
s(x;\omega) \cong s_{N}(x;\omega) = \sin \omega x + \sum_{n=1}^{N}b_{n}\sum_{\text{odd }k=1}^{n}\binom{n}{k}\varphi
_{n-k}(x)\int_{-x}^{x}t^{k}\sin \omega t\,dt.
\end{equation}
The integrals in \eqref{sN} can be easily evaluated explicitly, hence \eqref{sN} provides a convenient approximation of the solution $s(x;\omega)$. Moreover, the error of this approximation can be uniformly bounded for all real $\omega$: if $|\mathbf{K}(x,t;h)-K_N(x,t)|\le\varepsilon$ for $0\le|t|\le|x|\le b$, then
\begin{equation*}
|s(x;\omega)-s_N(x;\omega)|\le\int_{-x}^{x}|\mathbf{K}(x,t;h)-K_N(x,t)|\cdot|\sin\omega t|\,dt\le \varepsilon \int_{-x}^{x}|\sin\omega t|\,dt\le 2\varepsilon x.
\end{equation*}
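The explicit evaluation of the integrals $\int_{-x}^{x}t^{k}\sin\omega t\,dt$ appearing in \eqref{sN} reduces, via integration by parts, to a short two-term recursion; a Python sketch (ours):

```python
import math

def moment_sin(k, x, w):
    """I_k = integral of t^k sin(w t) over [-x, x]; zero for even k by parity."""
    if k % 2 == 0:
        return 0.0
    # boundary term of integration by parts; t^k cos(w t) is odd for odd k
    return -2.0 * x**k * math.cos(w * x) / w + (k / w) * moment_cos(k - 1, x, w)

def moment_cos(k, x, w):
    """J_k = integral of t^k cos(w t) over [-x, x]; zero for odd k by parity."""
    if k % 2 == 1:
        return 0.0
    if k == 0:
        return 2.0 * math.sin(w * x) / w
    return 2.0 * x**k * math.sin(w * x) / w - (k / w) * moment_sin(k - 1, x, w)
```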
Thus, the proposed method for solving \eqref{EqSLproblem}, \eqref{EqSLBC} consists of the following steps: 1) compute a non-vanishing solution $f$ of $-u''+qu=0$; 2) compute the $N+1$ functions $\varphi_k$; 3) find the coefficients $b_n$, $n=1,\ldots,N$, of the approximation of the integral kernel $\mathbf{K}(x,t;h)$; 4) find the zeros of $\left. s_N(x;\omega)\right|_{x=b}$ as a function of $\omega$.
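Step 4 is a standard scalar root search; a minimal Python sketch (ours) scans a grid for sign changes and refines each bracket by bisection:

```python
import math

def bisect_zeros(F, w_min, w_max, n_grid=2000, tol=1e-12):
    """Zeros of a continuous function F on [w_min, w_max]: scan a grid for
    sign changes, then refine each bracketing interval by bisection."""
    ws = [w_min + i * (w_max - w_min) / n_grid for i in range(n_grid + 1)]
    zeros = []
    for lo, hi in zip(ws, ws[1:]):
        if F(lo) * F(hi) < 0.0:
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            zeros.append(0.5 * (lo + hi))
    return zeros
```

As a sanity check, for the trivial potential $q\equiv0$ and $b=\pi$ the characteristic function is $\sin\omega\pi$, whose positive zeros are the integers.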
\begin{example}
Consider the spectral problem \eqref{EqSLproblem}, \eqref{EqSLBC} with $q(x)=e^x$ and $b=\pi$ (see \cite[Appendix A]{Pryce}). The exact characteristic equation of the problem is given by $I_{2 i \omega }(2)I_{-2 i \omega }\big(2
\sqrt{e^\pi}\big)-I_{-2 i \omega }(2) I_{2 i
\omega }\big(2 \sqrt{e^\pi}\big)=0$, allowing one to calculate an arbitrary number of eigenvalues for comparison with numerical results.
We chose $f(x)=I_0\big(2e^{x/2}\big)/I_0(2)$ as a non-vanishing particular solution and approximated the integral kernel by approximating its Goursat data as described in Section \ref{SectApproxKernel} with $N=30$. In the following table we present approximations of the first 1000 eigenvalues obtained by the proposed method, together with the corresponding absolute and relative errors. Observe that the absolute errors remain essentially of the same order, while the relative errors approach the machine-precision bound. The computation takes only seconds.
\begin{center}
\noindent%
\begin{tabular}{|c|c|c|c|}\hline
$n$ & Obtained $\omega_n^2$ & Abs. error & Rel. error\\\hline
$1$ & 4.89666937996891 & $1.2\cdot10^{-12}$ & $2.5\cdot10^{-13}$\\
$2$ & 10.0451898932577 & $4.0\cdot10^{-12}$& $4.0\cdot10^{-13}$\\
$3$ & 16.0192672505157 & $2.3\cdot10^{-11}$& $1.5\cdot10^{-12}$\\
$5$ & 32.2637070458132 & $8.7\cdot10^{-12}$& $2.7\cdot10^{-13}$\\
$10$ & 107.116676138236 & $3.2\cdot10^{-11}$& $3.0\cdot10^{-13}$\\
$20$ & 407.065235267218 & $1.2\cdot10^{-10}$& $3.0\cdot10^{-13}$\\
$50$ & 2507.05043440902 &$1.2\cdot10^{-10}$& $4.9\cdot10^{-14}$\\
$100$ & 10007.0483099952 &$2.4\cdot10^{-11}$& $2.4\cdot10^{-15}$\\
$200$ & 40007.0477785361 &$3.6\cdot10^{-11}$& $9.1\cdot10^{-16}$\\
$500$ & 250007.047629702 &$1.2\cdot10^{-10}$& $4.7\cdot10^{-16}$\\
$1000$ & 1000007.04760844 &$2.3\cdot10^{-10}$& $2.3\cdot10^{-16}$\\
\hline
\end{tabular}
\end{center}
\end{example}
\section{Introduction}
Linear response theory (LRT) has been a cornerstone of statistical mechanics ever since its introduction in the 1960s. When valid, it allows us to express the average of some observable under small perturbations from an unperturbed state -- the system's so-called {\em{response}} -- entirely in terms of statistical information from the unperturbed system. In essence, linear response theory relies on the smoothness of the invariant measure with respect to a perturbation, in the sense that there exists a Taylor expansion of the perturbed invariant measure around the unperturbed equilibrium measure.
The development of the theory occurred in statistical mechanics in the context of thermostatted Hamiltonian systems \cite{Kubo66,Balescu1,Zwanzig,MarconiEtAl08} but found applications far beyond this realm; recent years have seen an increased interest in LRT and its applications. In particular, climate scientists have resorted to LRT to study the timely question of how certain observables such as the global mean temperature or local rainfall intensities behave upon increasing the ${\rm{CO}}_2$ concentration in the atmosphere. LRT has been successfully applied to several situations with macroscopic observables in various atmospheric toy models \cite{MajdaEtAl10,LucariniSarno11,AbramovMajda07,AbramovMajda08,CooperHaynes11,CooperEtAl13}, barotropic models \cite{Bell80,GritsunDymnikov99,AbramovMajda09}, quasi-geostrophic models \cite{DymnikovGritsun01}, atmospheric models \cite{NorthEtAl93,CionniEtAl04,GritsunEtAl02,GritsunBranstator07,GritsunEtAl08,RingPlumb08,Gritsun10} and in coupled climate models \cite{LangenAlexeev05,KirkDavidoff09,FuchsEtAl14,RagoneEtAl15}.
In a separate strand of research, mathematicians have tried to obtain rigorous results extending the validity of LRT to deterministic dynamical systems. There was initial success by Ruelle \cite{Ruelle97,Ruelle98,Ruelle09a,Ruelle09b} in the case of uniformly hyperbolic Axiom A systems; however, the works of Baladi and colleagues undermined hopes that LRT typically holds in dynamical systems \cite{BaladiSmania08,BaladiSmania10,Baladi14,BaladiEtAl15,DeLimaSmania15}. They showed that simple dynamical systems such as the logistic map do not obey LRT; rather, their invariant measure changes non-smoothly with respect to the perturbation (even when only chaotic parameter values are considered). This poses a conundrum: how can LRT seem to be typically valid in high-dimensional systems for macroscopic observables when structural obstacles to its validity are likely to be present in its microscopic constituents?
To justify the validity of LRT in high-dimensional systems, scientists often invoke the {\em{chaotic hypothesis}} of Gallavotti-Cohen \cite{GallavottiCohen95a,GallavottiCohen95b} according to which a high-dimensional system behaves for all practical purposes as an Axiom A system. This invocation, however, is unjustified: even if the hypothesis is true, it does not address how the equivalent Axiom A systems of the unperturbed and the perturbed system relate to each other, which is crucial for any statement on LRT.
In a recent paper \cite{GottwaldEtAl16} we showed that a breakdown of LRT might not be detectable using uncertainty quantification when analyzing time series unless the time series is very long (exceeding one million data points even for simple one-dimensional systems such as the logistic map) and/or the observables are sensitive to the non-smooth change of the invariant measure. Consequently, the apparent observed validity of LRT in climate science might be a finite-size effect.
Here we follow a different avenue, drawing on the fact that linear response theory can be justified \cite{Haenggi78,HairerMajda10} for stochastic dynamical systems. We argue here that certain deterministic chaotic systems have stochastic limits for macroscopic observables which implies that they are amenable to LRT. Statistical limit laws of deterministic dynamical systems have recently been proven for slow variables in multi-scale systems \cite{MelbourneStuart11,GottwaldMelbourne13c,KellyMelbourne14} and for resolved degrees of freedom in high-dimensional weakly coupled systems \cite{FordEtAl65,Zwanzig73,FordEtAl87,StuartWarren99,KupfermanEtAl02,GivonEtAl04}. In both cases the diffusive limit of the macroscopic observables relies on the central limit theorem via a summation of infinitely many weakly dependent variables. We treat here the case of weak coupling whereby distinguished resolved degrees of freedom are weakly coupled to a large {\em{heat bath}} of unresolved, dissipative microscopic degrees of freedom. The central limit theorem can be justified in this situation either for sufficiently chaotic dynamics (the case we consider here) or for a collection of randomly chosen initial conditions. We introduce here a judiciously chosen toy model which considers the worst case scenario where both the resolved and the unresolved dynamics violate LRT, when considered on their own. Our main finding is that LRT can be assured in high-dimensional systems of weak coupling type, when the macroscopic resolved variables exhibit effective stochastic dynamics and when additionally the microscopic dynamics is spatially heterogeneous.\\
The paper is organized as follows. Section~\ref{s.LRT} briefly reviews LRT. In Section~\ref{s.model} we introduce the high-dimensional weak coupling model under consideration. Section~\ref{s.homo} considers the case when the resolved scales exhibit a diffusive limit in the thermodynamic limit of an infinite-dimensional microscopic sub-system, and we show that LRT is valid. Section~\ref{s.det} treats the case when the thermodynamic limit is deterministic and LRT is not valid for infinitely many degrees of freedom. We will see, however, that for large but finite system sizes, linear response is valid for some, albeit small, range of perturbations, and the breakdown of LRT might not be detectable in typical time series for an increasing range of perturbations. We conclude with a discussion and an outlook in Section~\ref{s.discussion}.
\section{Linear response theory}
\label{s.LRT}
Consider a family of dynamical systems $f_\varepsilon:D \to D$ on some space $D$ where the map $f_\varepsilon$ depends smoothly on the parameter $\varepsilon$ and where for each $\varepsilon$ the dynamical system admits a unique invariant physical measure $\mu_\varepsilon$. An ergodic measure is called physical if for a set of initial conditions of nonzero Lebesgue measure the temporal average of a continuous observable converges to the spatial average over this measure. LRT is concerned with the change of the average of an observable $\Psi:D\to{\mathbb{R}}$,
\begin{align*}
{\mathbb{E}}^\varepsilon [\Psi] = \int_D \Psi\, d\mu_\varepsilon
\end{align*}
upon varying $\varepsilon$. A system exhibits {\em{linear response}} at ${\varepsilon}={\varepsilon}_0$, if the derivative
\begin{align*}
{\mathbb{E}}^{\varepsilon_0} [\Psi]^\prime := \frac{\partial}{\partial\varepsilon} {\mathbb{E}}^\varepsilon [\Psi]|_{\varepsilon_0}
\end{align*}
exists. One can then express the average of an observable of the perturbed state with ${\varepsilon}={\varepsilon}_0+\delta{\varepsilon}$ up to $o({\varepsilon})$ as
\begin{align*}
{\mathbb{E}}^\varepsilon [\Psi] \approx {\mathbb{E}}^{\varepsilon_0} [\Psi] +\delta {\varepsilon}\, {\mathbb{E}}^{\varepsilon_0} [\Psi]^\prime ,
\end{align*}
which may be determined entirely in terms of the statistics of the unperturbed system and its invariant measure $\mu_{{\varepsilon}_0}$ using so-called linear response formulae \cite{Ruelle09a,Ruelle98,Baladi14}.
A sufficient condition for linear response is therefore that the invariant measure $\mu_\varepsilon$ is differentiable with respect to $\varepsilon$. If this derivative does not exist, we say that there is a breakdown of linear response. We assume that the observable captures sufficient dynamic information about the dynamical system; for example, an odd observable on a system symmetric about $0$ would have an identically vanishing average regardless of whether the system exhibits linear response or not.
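For a system that does obey LRT, the derivative can be estimated by finite differences of long-time averages. As a purely illustrative sketch (the AR(1) example is ours, not one of the systems studied below), the process $x_{n+1}=a x_n+\varepsilon+\sigma\xi_n$ with i.i.d.\ standard Gaussian $\xi_n$ has stationary mean $\varepsilon/(1-a)$, so the response of $\Psi(x)=x$ at $\varepsilon_0=0$ equals $1/(1-a)$:

```python
import random

def mean_response(a=0.5, sigma=0.1, delta=1e-2, n=200_000, seed=1):
    """Central-difference estimate of d E[x]/d eps at eps = 0 for the AR(1)
    process x_{k+1} = a x_k + eps + sigma xi_k, using common random numbers
    (the same noise realization for both perturbed runs)."""
    def time_average(eps):
        rng = random.Random(seed)
        x, total = 0.0, 0.0
        for _ in range(n):
            x = a * x + eps + sigma * rng.gauss(0.0, 1.0)
            total += x
        return total / n
    return (time_average(delta) - time_average(-delta)) / (2.0 * delta)
```

With common random numbers the noise cancels in the difference, and the estimate converges to $1/(1-a)$ up to an $O(1/n)$ transient correction.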
\section{The model}
\label{s.model}
\change{
We introduce an exemplary high-dimensional toy system in which no individual component obeys linear response. We consider the case of a single resolved macroscopic degree of freedom $Q$ weakly coupled to $M$ unresolved microscopic degrees of freedom $q^{(j)}, j = 1, \ldots, M$. The microscopic dynamics is assumed to evolve independently of the macroscopic dynamics and independently of each other. Further, we make the natural assumption that the microscopic dynamics is heterogeneous in the sense that each microscopic variable $q^{(j)}$ evolves with its own parameter $a^{(j)}$, drawn from a smooth distribution $\nu$. To study the linear response of macroscopic observables $\Psi(Q)$ we consider perturbations of the parameters of the microscopic dynamics of the form $a^{(j)} = a_0^{(j)} + \epsilon a_1^{(j)}$. Figure~\ref{f.model} provides a graphical illustration of our setup.}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth,clip=true,angle=90]{figures/schematic}
\caption{Sketch of the toy model setup of a macroscopic resolved variable $Q$ which is observed with the observable $\Psi(Q)$. $Q$ is weakly coupled to unresolved microscopic variables $q^{(j)}$, $j=1,\ldots,M$. The microscopic sub-system is heterogeneous, with each microscopic variable evolving independently according to its own randomly drawn parameter, as indicated by the different coloured shadings.}
\label{f.model}
\end{figure}
\change{
To illustrate how a high-dimensional system can exhibit linear response for macroscopic observables even if their microscopic constituents do not obey LRT, we make the worst case assumption that neither the microscopic variables nor the macroscopic variables obey linear response when viewed in isolation. For the purposes of this paper we use the prototypical example of a logistic-type map for a dynamical system which violates LRT \cite{BaladiSmania08,BaladiSmania10,Baladi14,BaladiEtAl15,DeLimaSmania15}.}
\change{
To be specific, the macroscopic variable $Q$ evolves according to a logistic map
\begin{align}
Q_{n+1} = A\,Q_n(1-Q_n),
\label{e.Qn}
\end{align}
with parameter
\begin{align*}
A = A_0 +A_1 Z_n
\end{align*}
driven by the unresolved, microscopic variables through the coupling term
\begin{align}
Z_n = \frac{1}{M^\gamma}\sum_{j=1}^{M}\phi^{(j)}_n,
\label{e.Zn}
\end{align}
with scaling parameter $\gamma \geq \tfrac{1}{2}$. Here
\begin{align*}
\phi^{(j)}_n = \phi(q^{(j)}_n;a^{(j)})
\end{align*}
is a H\"older continuous function of the microscopic variable $q^{(j)}$. The $M$ unresolved microscopic degrees of freedom $q^{(j)}$ evolve according to modified logistic maps of the form
\begin{align}
{\small{
\left( q_{n+1}^{(j)},r_{n+1}^{(j)}\right)
=
\begin{cases}
\left( q_{n}^{(j)},2r_{n}^{(j)}\right) & r_n^{(j)}<\tfrac{1}{2}\\
\left(a^{(j)}\,q_n^{(j)}(1-q_n^{(j)}),2 r_n^{(j)}-1\right) & r_n^{(j)}\ge\tfrac{1}{2}
\end{cases},
}}
\label{e.qn}
\end{align}
each with their particular parameter $a^{(j)}$. The modification of the logistic map as a cocycle over a mixing doubling map for $r_n$ assures that the overall dynamics is mixing (thereby avoiding any periodic dynamics of the microscopic variables $q^{(j)}$). The modified map is constructed such that its marginal invariant measure of $q^{(j)}$ recovers exactly the physical measure of the standard logistic map with the same parameter $a^{(j)}$. Hence the microscopic dynamics (\ref{e.qn}) violates LRT while being chaotic.}
\change{
We study perturbations of the form
\[ a^{(j)} = a_0^{(j)} + \epsilon a_1^{(j)}, \]
where
the $a_0^{(j)}$ are sampled from a $C^1$ compactly supported distribution $\nu(a_0) da_0$ and the $a_1^{(j)}$ are sampled from a compactly supported distribution $\nu(a_1|a_0) da_1$ depending smoothly on $a_0$. As we argue below, smoothness of $\nu$ is crucial for the existence of linear and higher-order response. For concreteness, we choose $\nu(a_0) da_0$ to be the raised cosine distribution supported on the interval $[3.8,3.9]$:
\[ \nu(x) = \frac{\mathbf{1}_{[3.8,3.9]}(x) }{0.1} \left(1 + \cos \frac{x-3.85}{0.05}\pi \right), \] which is depicted in Figure \ref{f.raisedcosine}. The raised cosine distribution is bell-shaped and has a second derivative with bounded variation. This degree of smoothness implies that our model exhibits cubic (third-order) response, as will be shown in the next section. Furthermore, for the numerical simulations in Sections~\ref{s.homo} and \ref{s.det} we choose $a_1^{(j)} = 1$ for all $j$.
}
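Sampling the parameters $a_0^{(j)}$ from this density is straightforward by rejection; the sketch below (illustrative, not the authors' code) uses the standard raised-cosine normalisation $1/(2s)$ with half-width $s=0.05$, under which the density integrates to one:

```python
# Rejection sampling of logistic-map parameters a0 from the raised cosine
# distribution on [3.8, 3.9]; normalisation 1/(2s), s = 0.05 (half-width).
import math
import random

MU, S = 3.85, 0.05  # centre and half-width of the support [3.8, 3.9]

def nu(x):
    """Raised cosine density on [MU - S, MU + S]."""
    if abs(x - MU) > S:
        return 0.0
    return (1.0 + math.cos(math.pi * (x - MU) / S)) / (2.0 * S)

def sample_a0(rng):
    """Rejection sampling under the flat envelope max(nu) = 1/S on the support."""
    while True:
        x = MU + S * (2.0 * rng.random() - 1.0)   # uniform proposal on the support
        if rng.random() * (1.0 / S) <= nu(x):
            return x
```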
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/raisedcosine}
\caption{Probability density $\nu(x)$ of the raised cosine distribution supported on $[3.8,3.9]$.}
\label{f.raisedcosine}
\end{figure}
\change{
We shall consider two cases, $\gamma=\tfrac{1}{2}$ and $\gamma=1$ corresponding to a diffusive scaling limit and a deterministic scaling limit, respectively. In the thermodynamic limit $M\to \infty$, we will see that in the former case the microscopic driving term $Z_n$ converges to a stochastic process $\zeta_n$ in the macroscopic dynamics, whereas in the latter case $Z_n$ converges to a constant. This, in conjunction with heterogeneously distributed microscopic parameters $a^{(j)}$, leads asymptotically to macroscopic linear response in the former case, and a failure of linear response in the latter.
}
\section{$\gamma=\frac{1}{2}$: Weak coupling with diffusive limit}
\label{s.homo}
We now justify LRT for the high-dimensional system (\ref{e.Qn})-(\ref{e.qn}) with $\gamma=\tfrac{1}{2}$. This is done in two steps. We first show that the dynamics of the macroscopic variable $Q$ is diffusive. The invariant measure of this diffusive process depends on the integrated effect of the microscopic variables for a specific configuration of the parameters $a^{(j)}$. In a second step we establish conditions on the parameter distribution $\nu(a)$ of the logistic map parameters of the microscopic sub-system which ensure that expectation values of observables of the resolved state vary smoothly with the perturbation size ${\varepsilon}$.
We begin by considering the unperturbed case ${\varepsilon}=0$ and show that the macroscopic variable $Q$ asymptotically satisfies a stochastic limit system in the thermodynamic limit $M\to\infty$ when $\gamma=\tfrac{1}{2}$. We consider driving terms $Z_n$ with mean-zero functions $\phi(\cdot;a)$, \change{${\mathbb{E}}[\phi(\cdot;a)]=0$, where the average is with respect to the invariant measure of the unresolved microscopic variable for fixed parameter $a=a_0^{(j)}+{\varepsilon} a_1^{(j)}$. (Whenever we consider averages for fixed parameters rather than for fixed ${\varepsilon}$ we drop the superscript of ${\mathbb{E}}$.)} The driving term $Z_n$ is, for each time $n$, a sum of independent identically distributed random variables. Hence, for $\gamma=\tfrac{1}{2}$, the central limit theorem ensures that the driving term $Z_n$ converges to a Gaussian random variable $\zeta_n\sim{\mathcal{N}}(0,\sigma^2)$ with $\sigma^2=\langle {\mathbb{E}}[\phi^2]\rangle$, where the angular brackets denote the average over the measure $\nu(da)$ of the logistic map parameters.
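For the particular observable used in our simulations below, $\phi(x;a)=x^2-(ax(1-x))^2$, the mean-zero property holds along every orbit, since $ax(1-x)$ is the next iterate and the Birkhoff sum telescopes. This can be checked numerically; the code is an illustrative sketch (names ours):

```python
# Check that phi(x; a) = x^2 - (a x (1-x))^2 has zero time average along
# logistic-map orbits: the Birkhoff sum telescopes to (x_0^2 - x_N^2)/N.
import random

def time_average_phi(a, n_steps=100000, seed=0):
    rng = random.Random(seed)
    x = rng.random()
    total = 0.0
    for _ in range(n_steps):
        x_next = a * x * (1.0 - x)
        total += x * x - x_next * x_next   # phi(x; a)
        x = x_next
    return total / n_steps
```

Because the sum telescopes, the time average is bounded by $1/N$ in absolute value regardless of the parameter or initial condition.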
Moreover, in discrete time the $\zeta_n$ define a stationary Gaussian stochastic process -- a moving average process of infinite order -- which is (subject to continuity assumptions \cite{KarlinTaylor75}) uniquely defined by its mean and its covariance $R(m)$. The covariance is readily determined as
\begin{align}
R(m)={\mathrm{cov}}(\zeta_n,\zeta_{n+m})
=\lim_{M\to\infty}\frac{1}{M}\sum_{j=1}^M {\mathbb{E}}[\phi^{(j)}_0\phi^{(j)}_m]
= \langle {\mathbb{E}}[\phi_0\phi_m] \rangle .
\label{e.covzetan}
\end{align}
The process $Q_n$ hence converges weakly to the stochastic process defined by
\begin{align}
{\mathcal{Q}}_{n+1} = (A_0+A_1\zeta_n)\,{\mathcal{Q}}_n(1-{\mathcal{Q}}_n).
\label{e.Qnstoch}
\end{align}
Figure~\ref{f.g0p5_pdf} illustrates the convergence of the deterministic map (\ref{e.Qn})-(\ref{e.qn}) to the stochastic limit system (\ref{e.Qnstoch}) in distribution by comparing the respective empirical measures for several values $M$ of the size of the microscopic sub-system. The microscopic dynamics is run unperturbed with ${\varepsilon}=0$. Here we chose the mean-zero (conditional on the parameter $a$) functions $\phi(x;a)=x^2- \left( a x (1-x)\right)^2 $ to generate the driving sum $Z_n$. We used a time series of length $N=4\times 10^7$ and determined the empirical measure of the full system (\ref{e.Qn})-(\ref{e.qn}) by binning using $1000$ bins. Details on how to determine the statistics of the limiting diffusive system (\ref{e.Qnstoch}) are given in Appendix \ref{appb}. It is remarkable that with only $M=16$ microscopic variables the eye can barely distinguish the empirical density from the density of the diffusive limit equation (\ref{e.Qnstoch}). We further show convergence of the first four moments of $Q$ when increasing $M$ in Figure~\ref{f.g0p5_mom}. It is seen that for accurate convergence of higher-order moments to the values of their stochastic limiting equation (\ref{e.Qnstoch}) larger system sizes $M$ are required.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/figure1-4}
\includegraphics[width=0.7\linewidth]{figures/figure1-16}
\caption{Empirical probability density $\rho_Q(x)$ (orange) of the macroscopic variable $Q$ for $\gamma=\tfrac{1}{2}$ as estimated from simulations of the original deterministic system (\ref{e.Qn})-(\ref{e.qn}) for different values of the size $M$ of the microscopic sub-system. Top: $M=4$. Bottom: $M=16$. The continuous black line depicts the invariant density of the stochastic limit system (\ref{e.Qnstoch}). We used $A_0=3.91$, $A_1=0.05$ and ${\varepsilon}=0$.}
\label{f.g0p5_pdf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/figure2}
\caption{First four centred moments $\mu_{i}$, $i=1,\cdots,4$, of the macroscopic variable $Q$ for $\gamma=\tfrac{1}{2}$ as estimated from simulations of the original deterministic system (\ref{e.Qn})-(\ref{e.qn}) for fixed time $n=6$ for several values of the size of the microscopic sub-system: $M=4$ (blue triangles), $M=16$ (orange diamonds) and $M=1024$ (green dots). We depict the moments scaled by the respective moments of the stochastic limit system (\ref{e.Qnstoch}) so that the asymptotic limit is $1$ for all moments. Parameters as in Fig.~\ref{f.g0p5_pdf}.}
\label{f.g0p5_mom}
\end{figure}
After having established that the dynamics of the macroscopic variable $Q$ is diffusive, we now establish in a second step that the associated invariant measure and expectation values of macroscopic observables depend smoothly on ${\varepsilon}$. It is pertinent to stress that the mere existence of a stochastic limit does not imply LRT. \change{We remark that the existence of the stochastic limit is in line with the Gallavotti-Cohen hypothesis; however, this is insufficient for LRT.} Consider, for example, the case when each unresolved microscopic variable $q^{(j)}$ evolves according to the logistic map with the same parameters $a^{(j)}\equiv {\rm{const}}$, differing only in the initial conditions drawn from the invariant measure. The limit system would still be a stochastic system due to the randomness in the initial conditions, but LRT would not be valid when homogeneously perturbing the unresolved scales. Crucial for the validity of LRT is that the parameters $a_0^{(j)}, a_1^{(j)}$ are independent and identically distributed ({\em i.i.d.}) random variables, sampled from a distribution $\nu(da_0,da_1)$ with a regularity property that we now derive.
For microscale variables with parameter $a$, consider the expectation of an observable $\Phi_a={\mathbb{E}}[\phi_0]$ or $\Phi_a={\mathbb{E}}[\phi_0\phi_m]$, and consider its average over the microscopic dynamics $\langle \Phi\rangle_{{\varepsilon}}=\int \Phi_{a_0+{\varepsilon} a_1}\, \nu(a_0,a_1)\,da_0\,da_1$. Changing variables $\alpha=a_0+{\varepsilon} a_1$ we find $\langle \Phi\rangle_{{\varepsilon}}=\int \Phi_\alpha\, \nu(\alpha-{\varepsilon} a_1,a_1)\, d\alpha\, da_1$, and hence
\begin{align*}
\frac{d}{d{\varepsilon}} \langle \Phi\rangle_{\varepsilon} = -\int a_1 \Phi_{a_0+{\varepsilon} a_1}\,\frac{\partial}{\partial a_0}\nu(a_0,a_1)\, da_0 da_1 .
\end{align*}
This is well-defined provided that $a_1 \frac{\partial}{\partial a_0} \nu(a_0,a_1)$ is integrable: if so, the statistics of $\zeta_n$ vary smoothly with respect to ${\varepsilon}$. A particular case of this is when $a_0$ and $a_1$ are independently distributed and the marginal density of $a_0$ is of bounded variation. It is readily seen that to achieve higher-order response, say of order $\ell$, (weak) derivatives of order $\ell$ must exist. This can be achieved if $a_0$ and $a_1$ are drawn independently from a distribution $\nu$ with a marginal density $\nu(a_0)$ in the Sobolev space $W^{\ell,1}$.
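The change-of-variables argument above can be checked numerically for a smooth stand-in observable. In the sketch below, $\Phi(\alpha)=\sin\alpha$ is purely hypothetical (the paper's $\Phi_a$ is an expectation over the microscopic dynamics) and $a_1=1$, as in our simulations; we compare a finite difference of $\langle\Phi\rangle_{\varepsilon}$ with the integration-by-parts formula:

```python
# Check: d/d(eps) of <Phi>_eps = int Phi(a0+eps) nu(a0) da0 equals
# -int Phi(a0+eps) nu'(a0) da0 (boundary terms vanish since nu = 0 at the edges).
# Phi = sin is a hypothetical smooth stand-in observable; a1 = 1.
import math

MU, S = 3.85, 0.05  # raised cosine centre and half-width

def nu(x):
    return (1.0 + math.cos(math.pi * (x - MU) / S)) / (2.0 * S) if abs(x - MU) <= S else 0.0

def nu_prime(x):
    return -math.pi * math.sin(math.pi * (x - MU) / S) / (2.0 * S * S) if abs(x - MU) <= S else 0.0

def trapz(f, a, b, n=4000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

Phi = math.sin  # hypothetical smooth observable of the parameter

def avg_Phi(eps):
    return trapz(lambda a0: Phi(a0 + eps) * nu(a0), MU - S, MU + S)

# derivative at eps = 0 via the integration-by-parts formula from the text
deriv_formula = -trapz(lambda a0: Phi(a0) * nu_prime(a0), MU - S, MU + S)
```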
We present in Figure~\ref{f.g0p5_LRT} results of the linear response for an observable $\Psi(Q)=Q$. The microscopic sub-system is perturbed homogeneously with $a_1^{(j)}=1$ for all $j$. It is clearly seen that the perturbation ${\varepsilon}$ induces a smooth change in the observable for large $M$, indicative of the validity of LRT. We employ here the test for linear response developed in \cite{GottwaldEtAl16} and report the $p$-values testing the null hypothesis of linear response. We compute averages for several values of ${\varepsilon}$ from long simulations of length $N=5\times 10^6$. The error bars shown in Figure~\ref{f.g0p5_LRT} are estimated from $K=200$ realizations differing in the initial conditions of the microscopic variables. For completeness we provide a description and justification of the test in Appendix \ref{appa}. For the small value $M=16$ the $p$-value is $\mathcal{O}(10^{-5})$, rejecting the null hypothesis of linear response, whereas for $M=2^{10}$ the $p$-value is $0.27$, consistent with linear response. We also show results of the linear response for the stochastic limit system (\ref{e.Qnstoch}), illustrating that the thermodynamic limit implies linear response with a $p$-value of $p=0.54$. Note that although the invariant density of the resolved degree of freedom $Q$ has sufficiently converged to the invariant density of the stochastic limit system (\ref{e.Qnstoch}) for $M=16$ (cf. Figure~\ref{f.g0p5_pdf}), this size is not sufficiently large to ensure linear response.
In Figure \ref{f.app1} in Appendix \ref{appa} we present results for cubic response for the same simulations which gave rise to Figure~\ref{f.g0p5_LRT}. Cubic response is valid for the stochastic limiting system (\ref{e.Qnstoch}) because the raised cosine distribution, which was chosen for the distribution $\nu$ in the simulation, lies in $W^{3,1}$.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/figure3}
\caption{Linear response of an observable $\Psi(Q)=Q$ for the deterministic system (\ref{e.Qn})-(\ref{e.qn})
for $\gamma=\tfrac{1}{2}$ for different values of the size $M$ of the microscopic sub-system. (a): $M=16$. (b): $M=1024$. (c) $M=32768$. (d): Stochastic limit system (\ref{e.Qnstoch}). All experiments used a time series of length $N=2\times10^5$. The error bars were estimated from $K=200$ realizations differing in the initial conditions. We used $A_0=3.91$, $A_1=0.05$.}
\label{f.g0p5_LRT}
\end{figure*}
\section{$\gamma=1$: Weak coupling with deterministic limit}
\label{s.det}
We begin again by considering the unperturbed case ${\varepsilon}=0$. In the case $\gamma=1$ we consider a driving term $Z_n$ generated by a function $\phi$ with non-vanishing mean; specifically, we take $\phi(x;a)=x^2$. Since each unresolved degree of freedom is distributed according to its invariant measure, for $\gamma=1$ the driving variable $Z_n$ converges by the law of large numbers to the constant $C=\langle {\mathbb{E}}[\phi] \rangle$.
In the thermodynamic limit the limiting equation is therefore the deterministic logistic map
\begin{align}
{\mathcal{Q}}_{n+1} = A\,{\mathcal{Q}}_n(1-{\mathcal{Q}}_n)
\label{e.Qndet}
\end{align}
with $A = A_0 + C A_1$. Figure~\ref{f.g1_pdf} illustrates the convergence of the invariant measure of the deterministic map (\ref{e.Qn})-(\ref{e.qn}) to the averaged deterministic limit system (\ref{e.Qndet}) in distribution upon increasing the size $M$ of the microscopic sub-system. We used again a time series of length $N=4\times 10^7$ and determined the empirical measure by binning using $1000$ bins. We see that for $M=1024$ convergence to the rough limiting invariant measure of the deterministic logistic map (\ref{e.Qndet}) with its narrow peaks has not been fully achieved. This is due to finite sample size $M$. \change{There are two averages requiring the limit $M\to\infty$: the average ${\mathbb{E}}[\cdot]$ with respect to the invariant density of the microscopic logistic dynamics and the average $\langle \cdot\rangle$ with respect to the distribution of the parameters $a^{(j)}$ of the modified logistic dynamics. Each of these averages is associated with its own finite-size correction, which is captured by the central limit theorem. Up to $\mathcal{O}(1/\sqrt{M})$ we have
\begin{align}
Z_n &= \frac{1}{M}\sum_{j=1}^{M}\phi^{(j)}(q_n^{(j)})= \frac{1}{M}\sum_{j=1}^{M}{\mathbb{E}}[\phi^{(j)}] + \frac{1}{\sqrt{M}}\zeta_n
\nonumber \\
&=\langle {\mathbb{E}}[\phi] \rangle + \frac{1}{\sqrt{M}}\eta + \frac{1}{\sqrt{M}}\zeta_n.
\label{e.Zndet}
\end{align}
Here $\zeta_n$ is again the mean-zero Gaussian process with covariance function $R(m)$ defined in (\ref{e.covzetan}), and for fixed ${\varepsilon}$, $\eta$ is a Gaussian variable with $\eta\sim{\mathcal{N}}(0,\langle{\mathbb{E}}^{\varepsilon}[\phi]^2\rangle - \langle {\mathbb{E}}^{\varepsilon}[\phi]\rangle^2)$. In the context where ${\varepsilon}$ varies, $\eta$ in (\ref{e.Zndet}) can be understood as a random function of ${\varepsilon}$, having a mean-zero Gaussian distribution with covariance
\[\langle \eta^{\varepsilon} \eta^{{\varepsilon}'} \rangle = \langle {\mathbb{E}}^{\varepsilon}[\phi] {\mathbb{E}}^{{\varepsilon}'}[\phi] \rangle - \langle {\mathbb{E}}^{\varepsilon}[\phi] \rangle \langle {\mathbb{E}}^{{\varepsilon}'}[\phi] \rangle.\]
In general, $\eta$ is non-differentiable in ${\varepsilon}$, which implies that LRT is violated for macroscopic observables $\Psi(Q)$, even for the random finite-size driver $Z_n$ given by (\ref{e.Zndet}). However, if the variation in ${\mathbb{E}}[\phi]$ over the parameter values sampled by $\nu$ is small by comparison with the typical variance $R(0)={\mathbb{E}}[(\phi-{\mathbb{E}}[\phi])^2]$ for these parameters (e.g. if the support of $\nu$ is sufficiently small), then the small, rough contribution of $\frac{1}{\sqrt{M}}\eta$ to the response of $\Psi(Q)$ is dominated by the (linear) response generated by $\langle {\mathbb{E}}[\phi] \rangle + \frac{1}{\sqrt{M}}\zeta_n$. We remark, however, that if the support of $\nu$ is too small and the parameters are therefore less heterogeneous, LRT is only valid for a small range of perturbation sizes ${\varepsilon}$.
}
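The law-of-large-numbers collapse of $Z_n$ and the $\mathcal{O}(1/\sqrt{M})$ finite-size corrections in (\ref{e.Zndet}) can be illustrated numerically; the sketch below is ours (uniform parameter draws stand in for the raised cosine, and the burn-in length is an illustrative choice):

```python
# Illustrative check of the 1/sqrt(M) finite-size correction for gamma = 1:
# the spread of Z = (1/M) sum_j phi(q_j; a_j), with phi(x; a) = x^2, shrinks
# like M^{-1/2} as the number M of microscopic maps grows.
import random
import statistics

def sample_Z(M, rng, burn=200):
    vals = []
    for _ in range(M):
        a = 3.8 + 0.1 * rng.random()   # stand-in for the raised cosine draw
        x = rng.random()
        for _ in range(burn):          # relax onto the attractor
            x = a * x * (1.0 - x)
        vals.append(x * x)             # phi(x; a) = x^2
    return sum(vals) / M

def spread(M, reps=80, seed=3):
    rng = random.Random(seed)
    return statistics.stdev(sample_Z(M, rng) for _ in range(reps))
```

For $M$ increased by a factor of $16$ the spread should drop by roughly a factor of $4$.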
\change{To illustrate the role of finite size effects, we present in Figure~\ref{f.g1_pdf} also results of simulations of the logistic map (\ref{e.Qn}) with $Z_n$ stochastically generated by (\ref{e.Zndet}),
mimicking random finite size effects in approximating the deterministic limit $Z_n = \langle {\mathbb{E}} [\phi] \rangle$. It is seen that for finite $M$ the peaks are smoothed by sampling noise, and the random logistic map reproduces the invariant density of the macroscopic variable $Q$ of the full deterministic model driven by the microscopic dynamics.}\\
Given that the thermodynamic limit system is deterministic, one might be tempted to conclude that linear response is not valid. Figure~\ref{f.g1_LRT} shows the linear response as a function of perturbation ${\varepsilon}$ for several values of the microscopic sub-system size $M$. For small values of $M$ LRT is clearly violated with a $p$-value of $\mathcal{O}(10^{-3})$, as expected. For very large values of $M=2^{15}$ LRT is violated with a $p$-value of $\mathcal{O}(10^{-40})$, consistent with the LRT-violating deterministic limit system (\ref{e.Qndet}). Remarkably and maybe surprisingly, decreasing the size $M$ from these very large values to intermediate values of $M=1024$ we observe that the numerical results are consistent with LRT and the $p$-value increases dramatically to around $0.16$. \change{This can be explained by the finite size corrections (\ref{e.Zndet}) to the deterministic limit $Z_n = \langle {\mathbb{E}}[\phi] \rangle$ provided by the central limit theorem. We note that the $p$-value for $M=1024$ indicates marginal evidence in favour of breakdown of LRT associated with the (small) contribution of the non-differentiable $\eta$ term.}
Just as in the $\gamma = \tfrac{1}{2}$ case, for LRT to hold at finite sample size it is necessary that the parameters $a^{(j)}$ be heterogeneously distributed with a sufficiently smooth distribution $\nu(a)$.\\
In \cite{GottwaldEtAl16} it was found that even if a system does not obey linear response one might not be able to reject the null hypothesis of linear response with sufficient statistical significance when the data length $N$ of the time series is not sufficiently long. In Figure~\ref{f.g1_LRT_finitesize} we show the linear response as a function of ${\varepsilon}$ for a microscopic sub-system of size $M=16$ for $N=2\times 10^4$. While for $N=2\times 10^5$ linear response was rejected with $p=7.2\times 10^{-3}$, linear response is now consistent with the given data with a $p$-value of $p=0.21$. It is found that decreasing the length of the time series allows for a larger range in the perturbation size ${\varepsilon}$ for which linear response is consistent with the data.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/figure4-16}
\includegraphics[width=0.7\linewidth]{figures/figure4-1024}
\caption{Empirical probability density $\rho_Q(x)$ (orange line) of the macroscopic variable $Q$ for $\gamma=1$ as estimated from simulations of the original deterministic system (\ref{e.Qn})-(\ref{e.qn}) for different values of the size $M$ of the microscopic sub-system. Top: $M=16$. Bottom: $M=1024$. The continuous black line depicts the invariant density of the deterministic logistic map limit system (\ref{e.Qndet}); the thin dotted lines, which are indistinguishable from $\rho_Q(x)$,
represent invariant densities of the logistic map (\ref{e.Qn}) with the stochastic driving $Z_n$ given by (\ref{e.Zndet}) for various realisations of $\eta$.
We used $A_0=3.847$, $A_1=0.147$ and ${\varepsilon}=0$.}
\label{f.g1_pdf}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/figure5}
\caption{Linear response of an observable $\Psi(Q)=Q$ for the deterministic system (\ref{e.Qn})-(\ref{e.qn}) for $\gamma=1$ for different values of the size $M$ of the microscopic sub-system. (a): $M=16$. (b): $M=1024$. (c) $M=32768$. (d): Deterministic limit system (\ref{e.Qndet}). All experiments used a time series of length $N=2\times10^5$. The error bars were estimated from $K=200$ realizations differing in the initial conditions. We used $A_0=3.847$ and $A_1=0.147$.}
\label{f.g1_LRT}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/figure6}
\caption{Linear response of an observable $\Psi(Q)=Q$ for the deterministic system (\ref{e.Qn})-(\ref{e.qn}) for $\gamma=1$ with $M=16$ estimated from a time series of length $N=2\times 10^4$. The error bars were estimated from $K=200$ realizations differing in the initial conditions. We used $A_0=3.847$ and $A_1=0.147$.}
\label{f.g1_LRT_finitesize}
\end{figure}
\section{Discussion and outlook}
\label{s.discussion}
We have shown that macroscopic observables in high-dimensional deterministic dynamical systems which consist of unresolved microscopic variables weakly coupled to macroscopic resolved variables may obey linear response theory even if each of the microscopic units individually violates LRT. We showed that in the case when the thermodynamic limit of an infinitely large microscopic sub-system leads to a stochastic limit equation for the macroscopic resolved variables, linear response theory can be justified for macroscopic observables. In the case when the thermodynamic limit is deterministic we showed that for a finite microscopic sub-system, the limiting dynamics has a stochastic correction which again allows for linear response. We established that the existence of a stochastic limit system is not sufficient to assure LRT, and an additional assumption on the distribution of the parameters of the microscopic sub-system is needed in the case when the microscopic variables do not respect linear response. In this case, we require the parameters of the microscopic sub-system to be heterogeneous with a smooth distribution of their parameters. The degree of the smoothness directly determines the polynomial order of the response. For example, if the parameters of the unresolved degrees of freedom $q^{(j)}$ were chosen to be all equal and the initial conditions $q_0^{(j)}$ were chosen from the invariant measure, the macroscopic variable $Q$ would still obey a stochastic limit for $\gamma=\tfrac{1}{2}$, but LRT would clearly be violated upon a homogeneous perturbation of the microscopic sub-system. If the microscopic variables obey linear response, for example with uniformly expanding maps, this condition on the parameter distribution is not necessary.
We considered here the worst case scenario where the dynamics of both the macroscopic and the unresolved degrees of freedom on their own violate LRT. In the numerical simulations we studied the effect of perturbing the parameters of the unresolved microscopic variables. \change{We remark that if perturbations of the macroscopic variable $Q$ were considered with $A=A_0+\varepsilon \delta A$, LRT would be valid for $\gamma=\tfrac{1}{2}$ since the limiting system is stochastic \cite{HairerMajda10} (and also for $\gamma=1$ when finite size effects are non-negligible).} Rather than considering a macroscopic variable weakly coupled to a microscopic sub-system consisting of non-conservative logistic maps, one may instead consider the case of a traditional heat bath consisting of an infinite collection of harmonic oscillators with randomly chosen initial conditions which are weakly coupled to a distinguished resolved degree of freedom. The limiting stochastic properties of the associated $Z_n$ were established rigorously in \cite{FordEtAl65,Zwanzig73,FordEtAl87,StuartWarren99,KupfermanEtAl02,GivonEtAl04} using trigonometric approximation of Gaussian noise \cite{Kahane85}. In this case, if weakly coupled to the macroscopic variable $Q$ which evolves according to the logistic dynamics (\ref{e.Qn}), we would obtain results similar to those for the case considered here.
We have treated here the case of weakly coupled systems. It is well known that stochastic limit systems also occur in multi-scale dynamics where the central limit theorem is realized by summing up many fast chaotic degrees of freedom in one slow time unit \cite{GivonEtAl04,Dolgopyat04,MelbourneStuart11,GottwaldMelbourne13c,KellyMelbourne14,DeSimoiLiverani15,DeSimoiLiverani16}. We expect analogous results in this case. As in the case of weak coupling considered here, the heterogeneity in the parameter distribution of the fast system is essential.
\section*{Acknowledgements}
GAG is partially supported by ARC, grant DP180101385. CW is supported by an Australian Government Research Training Program (RTP) Scholarship.
1902.00978
\section{Introduction}
We say that a permutation $\pi$ on $n$ symbols has a descent at position $i$ if $\pi(i) > \pi(i+1)$, and we let $d(\pi)$ denote the number of descents of $\pi$. For example, the permutation $1\underline{4}23\underline{6}5$ has descents at positions $2$ and $5$, and has $d(\pi)=2$. Descents appear in numerous parts of mathematics. For examples, see Knuth \cite{Knuth} for connections of descents with the theory of sorting and the theory of runs in permutations, and see Bayer and Diaconis \cite{BD} for applications of descents to card shuffling. The number $A(n,k)$ of permutations on $n$ symbols with $k$ descents is called an Eulerian number, and there is an entire book devoted to their study \cite{Pet}.
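The descent statistic is straightforward to compute; a small illustrative helper (names ours, not from the paper) makes the definition concrete. For instance, $\pi = 142365$ has descents exactly at positions $2$ and $5$:

```python
# Descent counting for a permutation given as a sequence of values
# (positions are 1-indexed, as in the text).
def descents(perm):
    """Positions i (1-indexed) with perm(i) > perm(i+1)."""
    return [i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]]

def num_descents(perm):
    """d(pi): the number of descents of the permutation."""
    return len(descents(perm))
```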
It is well known that the distribution of descents is asymptotically normal with mean $(n-1)/2$ and variance $(n+1)/12$. There are many proofs of this:
\begin{enumerate}
\item Pitman \cite{Pit} uses real-rootedness of the Eulerian polynomials
\begin{align*}
A_n(t) = \sum_{\pi \in S_n} t^{d(\pi)+1}
\end{align*}
\item David and Barton \cite{DB} use the method of moments.
\item Tanny \cite{Tan} uses the fact that if $U_1,\ldots,U_n$ are independent uniform $[0,1]$ random variables, then for all integers $k$,
\begin{align*}
\operatorname{\mathbb{P}}\left(k \leq \sum_{i=1}^n U_i < k+1 \right) = A(n,k)/n!
\end{align*}
\item Fulman \cite{Fstein} uses Stein's method.
\end{enumerate}
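Tanny's uniform-sum identity in item 3 can be checked numerically; the sketch below (illustrative code of ours, not from \cite{Tan}) compares a brute-force Eulerian count with a seeded Monte Carlo estimate for $n=4$, $k=1$, where $A(4,1)/4! = 11/24$:

```python
# Check P(k <= U_1 + ... + U_n < k+1) = A(n,k)/n! by brute force plus Monte Carlo.
import itertools
import math
import random

def eulerian(n, k):
    """Number of permutations of {1..n} with exactly k descents (brute force)."""
    return sum(
        1 for p in itertools.permutations(range(1, n + 1))
        if sum(p[i] > p[i + 1] for i in range(n - 1)) == k
    )

def uniform_sum_prob(n, k, trials=200000, seed=4):
    """Monte Carlo estimate of P(k <= U_1 + ... + U_n < k + 1)."""
    rng = random.Random(seed)
    hits = sum(
        k <= sum(rng.random() for _ in range(n)) < k + 1 for _ in range(trials)
    )
    return hits / trials
```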
There is also interesting literature on the joint distribution of descents and cycles. Gessel and Reutenauer \cite{GR} use symmetric function theory to enumerate permutations with a given cycle structure and descent set, and Diaconis, McGrath, and Pitman \cite{DMP} interpret this in the context of card shuffling. We regard these exact results as a miracle, and they enable one to write down an exact (but quite complicated) generating function for descents of permutations in a given conjugacy class. These exact generating functions make it possible to prove central limit theorems for the number of descents in fixed conjugacy classes of the symmetric group. Fulman \cite{FulCLT} proved a central limit theorem when the conjugacy classes consist of large cycles. Almost twenty years later, Kim \cite{Kim1} proved a central limit theorem for descents in random fixed-point-free involutions. Quite recently, Kim and Lee \cite{KimLee} proved a central limit theorem for arbitrary conjugacy classes. These
results would be very difficult to obtain without exact generating functions.
Given the above discussion, it is natural to ask if there are other permutation statistics for which there is exact information about the joint distribution with cycle structure. In their work on casino shuffling machines, Diaconis, Fulman, and Holmes \cite{DFH} discovered that there is a lovely exact generating function for the number of peaks of a permutation enumerated according to cycle structure. Let us describe their result. We say that a permutation $\pi \in S_n$ has a peak at position $1<i<n$ if $\pi(i-1)< \pi(i) > \pi(i+1)$, and let $p(\pi)$ be the number of peaks of $\pi$. Thus $\pi = 1 \underline{4} 2 6 \underline{7} 5 3$ has peaks at positions 2 and $5$, so that $p(\pi)=2$. Letting $\lambda$ be a partition of $n$ with $n_i$ parts of size $i$, Corollary 3.8 of \cite{DFH} gives that
\begin{align}
\label{eq:peakgen_cycletype}
\sum_{\pi \in \mathcal{C}_{\lambda}}
\left( \frac{4t}{(1+t)^2} \right)^{p(\pi)+1}
= 2 \left( \frac{1-t}{1+t} \right)^{n+1} \sum_{a \geq 1} t^a \prod_i
[x_i^{n_i}] \left( \frac{1+x_i}{1-x_i} \right)^{f_{a,i}}.
\end{align}
Here, $\mathcal{C}_{\lambda}$ denotes the elements of $S_n$ of cycle type $\lambda$, and $[x_i^{n_i}] g(x_i)$ denotes the coefficient of $x_i^{n_i}$ in the function $g(x_i)$, and
\begin{align*}
f_{a,i} = \frac{1}{2i} \sum_{\substack{d \mid i \\ d \text{ odd}}} \mu(d) (2a)^{i/d},
\end{align*}
where $\mu$ is the M\"obius function of elementary number theory. (The result of \cite{DFH} actually deals with valleys rather than peaks, but the joint generating function with cycle structure is the same as can be seen by conjugating by the longest permutation $n \cdots 21$). The reader will agree that the generating function (\ref{eq:peakgen_cycletype}) looks hard to deal with (it need not be real-rooted), and our main insight is that we can adapt the methods of Kim and Lee \cite{KimLee} to analyze it.
To close the introduction, we mention that the number of peaks of a permutation is a feature of interest. The paper \cite{DFH} uses peaks to analyze casino shelf-shuffling machines. The number of peaks is classically used as a test of randomness for time series; see Warren and Seneta \cite{WS} and their references, which also include a central limit theorem for the number of peaks for a uniform random permutation. Permutations with no peaks are called unimodal (usually unimodal refers to no valleys but these are equivalent for our purposes), and are of interest in social choice theory through Coombs's ``unfolding hypothesis'' (see Chapter 6 of \cite{Dibook}). They also appear in dynamical systems and magic tricks (see Chapter 5 of \cite{DG}).
Finally, we note that peaks have been widely studied by combinatorialists; see Petersen \cite{Petpeak}, Stembridge \cite{Stempeak}, Nyman \cite{Nypeak}, Schocker \cite{Sch} and a paper of Billey, Burdzy, and
Sagan \cite{BBS}, for a small sample of combinatorial work on peaks.
\subsection{Main results}
To motivate the reader, we first present a numerical simulation. Figure 1 is a histogram of the number of peaks of $10^5$ permutations drawn from the conjugacy class $\mathcal{C}_{2^{250} 4^{125}} \subset S_{1000}$.
\begin{center}
\begin{minipage}{.7\linewidth}
\centering
\includegraphics[width=.75\linewidth]{fig_histogram.pdf}\\
\small Figure 1. Histogram of peaks of $10^5$ samples drawn from $\mathcal{C}_{2^{250}4^{125}} \subset S_{1000}$.
\end{minipage}
\end{center}
The histogram suggests that the peaks of permutations in $\mathcal{C}_{2^{250} 4^{125}}$ are normally distributed, and indeed, the p.d.f.\ of $\mathcal{N} \left(\frac{n-2}{3}, \frac{2(n+1)}{45}\right)$ with $n = 1000$ fits very well. This suggests that the behavior of peaks for a particular conjugacy class is mostly the same as that of peaks for $S_n$. This does turn out to be true for conjugacy classes with no fixed points, as the following main theorem states that the asymptotic distribution of peaks in conjugacy classes is normal, where the asymptotic mean and variance depend only on the density of fixed points.
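The mean $(n-2)/3$ appearing here is exact for uniform permutations: each interior position is a peak with probability $1/3$, since of the six equally likely relative orders of three consecutive values, two place the maximum in the middle. This can be verified by brute force for small $n$; the check below is illustrative code of ours:

```python
# Exact check over S_7 that the mean number of peaks of a uniform random
# permutation is (n - 2)/3.
import itertools
import math
from fractions import Fraction

def num_peaks(perm):
    """Number of positions 1 < i < n (1-indexed) with perm(i-1) < perm(i) > perm(i+1)."""
    return sum(perm[i - 1] < perm[i] > perm[i + 1] for i in range(1, len(perm) - 1))

n = 7
total = sum(num_peaks(p) for p in itertools.permutations(range(1, n + 1)))
mean_peaks = Fraction(total, math.factorial(n))  # exact rational mean
```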
\begin{theorem} \label{mainthm}
Let $C_n$ be a conjugacy class of $S_n$ for each $n \geq 1$. Denote by $\alpha_1(C_n)$ the fraction of fixed points of each element of $C_n$. Suppose that $\pi_n$ is chosen uniformly at random from $C_n$ and that $\alpha_1(C_n)$ converges to some $\alpha \in [0, 1] $ as $n\to\infty$. Then, as $n\to\infty$,
\begin{align*}
\frac{p(\pi_n) - \frac{1-\alpha_1(C_n)^3}{3}n}{\sqrt{n}}
\quad \text{converges in distribution to} \quad
\mathcal{N}\left(0, \tfrac{2}{45} + \tfrac{1}{9}\alpha^3 - \tfrac{3}{5}\alpha^5 + \tfrac{4}{9}\alpha^6 \right).
\end{align*}
\end{theorem}
Our main strategy is to adopt the modified Curtiss' theorem from~\cite{KimLee}, which relates convergence in distribution of random variables to the pointwise convergence of their moment generating functions on an open set. In this regard, the main theorem is a direct consequence of the following technical theorem:
\begin{theorem} \label{maintechthm}
For each $s > 0$, there exists a universal constant $C = C(s) > 0$, depending only on $s$, such that the following is true: Let $\mathcal{C}_{\lambda} \subseteq S_n$ be the conjugacy class of cycle type $\lambda = 1^{n_1} 2^{n_2} \cdots$ and $\pi$ be chosen uniformly at random from $\mathcal{C}_{\lambda}$. Denote by $\alpha_1 = n_1 / n$ the density of fixed points. Then,
\begin{align*}
\operatorname{\mathbb{E}} \left[ e^{-s p(\pi)/\sqrt{n}} \right]
= \exp\left\{ -\frac{1-\alpha_1^3}{3}s\sqrt{n} + \left( \frac{1}{45} + \frac{\alpha_1^3}{18} - \frac{3\alpha_1^5}{10} + \frac{2\alpha_1^6}{9} \right)s^2 + E_{\lambda,s} \right\},
\end{align*}
where $|E_{\lambda,s}| \leq Cn^{-1/4}$.
\end{theorem}
This theorem is interesting in its own right, because the uniform estimate allows us to readily extend the scope of the main theorem to a more general class of sequences $(C_n)$. More precisely, the statement of Theorem \ref{mainthm} readily extends to the case where each $C_n$ is simply a conjugacy-invariant subset of $S_n$ such that every element of $C_n$ has the same number of fixed points. For example, if we consider the set of all elements of $S_n$ with zero fixed points, we would obtain a central limit theorem for peaks of derangements.
\section{Central limit theorem for peaks of a random permutation in $S_n$}
Denoting the peak generating function by
\begin{align*}
W_n(t) = \sum_{\pi \in S_n} t^{p(\pi) + 1},
\end{align*}
it is well known \cite[p.~779]{Stempeak} that $A_n(t)$ and $W_n(t)$ are related by the identity
\begin{align}
\label{eq:peakgen}
W_n\left(\frac{4t}{(1+t)^2}\right) = \left(\frac{2}{1+t}\right)^{n+1}A_n(t).
\end{align}
Our aim in this section is to identify the asymptotic distribution of peaks of a random permutation in $S_n$ using \eqref{eq:peakgen}.
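As a sanity check, \eqref{eq:peakgen} can be verified exactly for small $n$ by brute force. The following Python sketch assumes the normalizations $W_n(t) = \sum_{\pi} t^{p(\pi)+1}$ with $p(\pi)$ counting interior peaks (positions $1 < i < n$ with $\pi_{i-1} < \pi_i > \pi_{i+1}$) and $A_n(t) = \sum_{\pi} t^{\operatorname{des}(\pi)+1}$, the conventions consistent with the formulas in this section.

```python
from fractions import Fraction
from itertools import permutations

def descents(p):
    return sum(p[i] > p[i+1] for i in range(len(p) - 1))

def peaks(p):
    # interior peaks: positions i with p[i-1] < p[i] > p[i+1]
    return sum(p[i-1] < p[i] > p[i+1] for i in range(1, len(p) - 1))

# check W_n(4t/(1+t)^2) == (2/(1+t))^{n+1} A_n(t) exactly at a rational t
n, t = 5, Fraction(1, 3)
u = 4 * t / (1 + t)**2
lhs = sum(u**(peaks(p) + 1) for p in permutations(range(n)))
rhs = (Fraction(2) / (1 + t))**(n + 1) * sum(t**(descents(p) + 1) for p in permutations(range(n)))
assert lhs == rhs
```

Exact rational arithmetic via \texttt{Fraction} makes the check an identity of rational numbers rather than a floating-point approximation.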
\subsection{Computing mean and variance of peaks in $S_n$}
We begin by calculating the derivatives of $A_n(t)$ at $1$ up to the fourth order.
\begin{lemma}
\label{lemma:eulerian_derivatives}
We have
\begin{align*}
A_n^{(0)}(1) &= n! , \\
A_n^{(1)}(1) &= n! \cdot \frac{n+1}{2} \mathbf{1}_{\{ n \geq 1\}}, \\
A_n^{(2)}(1) &= n! \cdot \frac{3n^2 + n - 2}{12} \mathbf{1}_{\{ n \geq 2 \}}, \\
A_n^{(3)}(1) &= n! \cdot \frac{n^3 - 2n^2 - n + 2}{8} \mathbf{1}_{\{ n \geq 3\}}, \text{ and} \\
A_n^{(4)}(1) &= n! \cdot \frac{15n^4 - 90n^3 + 125n^2 + 78n - 152}{240} \mathbf{1}_{\{ n \geq 4\}}.
\end{align*}
\end{lemma}
\begin{proof}
It is well known that the Eulerian polynomials satisfy the identity
\begin{align*}
A_n(t) = (1 - t)^{n+1} \sum_{a\geq 1} a^n t^a.
\end{align*}
Recall that the Stirling numbers of the second kind $\stirlingii{n}{k}$ count the number of partitions of an $n$-element set into $k$ blocks. Plugging the expansion $a^n = \sum_{k=0}^{n} \stirlingii{n}{k} \frac{a!}{(a-k)!}$ into the expression above, we see that
\begin{align*}
A_n(t)
&= (1-t)^{n+1} \sum_{a=0}^{\infty} \left( \sum_{k=0}^{n} \stirlingii{n}{k} \frac{a!}{(a-k)!} \right) t^a \\
&= (1-t)^{n+1} \sum_{k=0}^{n} \stirlingii{n}{k} \frac{k!t^k}{(1-t)^{k+1}} \\
&= \sum_{k=0}^{n} k! \stirlingii{n}{k} t^k (1 - t)^{n-k} \\
&= \sum_{k=0}^{n} (n-k)! \stirlingii{n}{n-k} t^{n-k} (1 - t)^{k}.
\end{align*}
Now one can compute $A_n^{(p)}(1)$ by plugging the above identity into $A_n^{(p)}(1) = p![s^p]A_n(1+s)$. More specifically, if $p > n$, then $A_n(1+s)$ has degree $n$, and so, $A_n^{(p)}(1) = 0$. If $p \leq n$, then
\begin{align*}
A_n^{(p)}(1)
= p![s^p]A_n(1+s)
&= p![s^p]\sum_{k=0}^{n} (-1)^k (n-k)! \stirlingii{n}{n-k} (1+s)^{n-k} s^{k} \\
&= p! \sum_{k=0}^{p} (-1)^k (n-k)! \stirlingii{n}{n-k} \binom{n-k}{p-k}.
\end{align*}
For each given $p$, the last sum can be computed by calculating $\stirlingii{n}{n-k}$'s for $k = 0, \cdots, p$. For instance, $\stirlingii{n}{n} = 1$ and $\stirlingii{n}{n-1} = \binom{n}{2}$, and for larger values of $k$, they can be systematically computed by utilizing the relationship between the Stirling numbers of the second kind and Eulerian numbers of the second kind (see equation (6.43) of \cite{GKP}). The $\stirlingii{n}{n-k}$'s relevant to us are
\begin{align*}
\stirlingii{n}{n-2} &= 2 \binom{n}{4} + \binom{n+1}{4}, \\
\stirlingii{n}{n-3} &= 6 \binom{n}{6} + 8 \binom{n+1}{6} + \binom{n+2}{6}, \text{ and}\\
\stirlingii{n}{n-4} &= 24 \binom{n}{8} + 58 \binom{n+1}{8} + 22 \binom{n+2}{8} + \binom{n+3}{8}.
\end{align*}
Plugging these back into the formula for $A_n^{(p)}(1)$ provides the desired lemma.
\end{proof}
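The five evaluations in Lemma \ref{lemma:eulerian_derivatives} can be double-checked by brute force for small $n$ (a Python sketch, assuming the convention $A_n(t) = \sum_{\pi} t^{\operatorname{des}(\pi)+1}$ used throughout this section):

```python
from itertools import permutations
from math import factorial, perm  # perm(j, p) = j!/(j-p)!, and 0 when p > j

def eulerian_coeffs(n):
    # c[j] = #{pi in S_n : des(pi) + 1 = j}, so that A_n(t) = sum_j c[j] t^j
    c = [0] * (n + 2)
    for p in permutations(range(n)):
        c[sum(p[i] > p[i+1] for i in range(n - 1)) + 1] += 1
    return c

def A_deriv(n, p):
    # p-th derivative of A_n at t = 1: sum_j c[j] * j(j-1)...(j-p+1)
    return sum(cj * perm(j, p) for j, cj in enumerate(eulerian_coeffs(n)))

for n in range(4, 8):
    f = factorial(n)
    assert 2 * A_deriv(n, 1) == f * (n + 1)
    assert 12 * A_deriv(n, 2) == f * (3*n**2 + n - 2)
    assert 8 * A_deriv(n, 3) == f * (n**3 - 2*n**2 - n + 2)
    assert 240 * A_deriv(n, 4) == f * (15*n**4 - 90*n**3 + 125*n**2 + 78*n - 152)
```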
Next, \eqref{eq:peakgen} relates $W_n^{(p)}(1)$ to the derivatives of $A_n(t)$ up to order $2p$ evaluated at~$1$. Differentiating both sides of \eqref{eq:peakgen} gives us
\begin{align*}
- \frac{4(t-1)}{(1+t)^3}W_n'\left(\frac{4t}{(1+t)^2}\right)
= -\frac{(n+1)2^{n+1}}{(1+t)^{n+2}}A_n(t) + \frac{2^{n+1}}{(1+t)^{n+1}}A_n'(t),
\end{align*}
and by multiplying $- \frac{(1+t)^3}{4(t-1)}$ to both sides and simplifying, we see that
\begin{align*}
W_n'\left(\frac{4t}{(1+t)^2}\right) = \left(\frac{2}{1+t}\right)^{n-1} \frac{(n+1)A_n(t) - (t+1)A_n'(t)}{t-1}.
\end{align*}
This formula cannot be evaluated directly at $ t = 1$, but we can use L'H\^opital's rule to get
\begin{align*}
W_n'(1)
&= \lim_{t \to 1} W_n'\left(\frac{4t}{(1+t)^2}\right)
= \lim_{t \to 1} \frac{(n+1)A_n(t) - (t+1)A_n'(t)}{t-1} \\
&= n A_n'(1) - 2A_n''(1)
= n! \cdot \frac{n+1}{3}, \qquad \text{if } n \geq 2.
\end{align*}
The last step is a consequence of Lemma \ref{lemma:eulerian_derivatives}. The second derivative $W_n''(1)$ can be computed in similar fashion. By differentiating both sides of \eqref{eq:peakgen} twice and simplifying, we obtain an identity relating $W_n''$ to the derivatives of $A_n$:
\begin{align*}
W_n''\left(\frac{4t}{(1+t)^2}\right) = \left(\frac{2}{1+t}\right)^{n-3} \frac{P_n(t)}{(t-1)^3},
\end{align*}
where $P_n(t)$ is given by
\begin{align*}
P_n(t) = (n+1)(nt-n+2)A_n(t) - 2(t+1)(nt-n+1)A_n'(t) + (t-1)(t+1)^2 A_n''(t).
\end{align*}
Similarly as before, we find $W_n''(1)$ by using L'H\^opital's rule:
\begin{align*}
W_n''(1)
&= \lim_{t \to 1} W_n''\left(\frac{4t}{(1+t)^2}\right)
= \lim_{t \to 1} \frac{P_n(t)}{(t-1)^3}
= \frac{P_n^{(3)}(1)}{6} \\
&= \frac{(3n^2-9n+6) A_n^{(2)}(1) - (10n-20) A_n^{(3)}(1) + 8 A_n^{(4)}(1)}{6} \\
&= n! \cdot \frac{(5n-8)(n+1)}{45}, \qquad \text{if } n \geq 4,
\end{align*}
where the last step follows from Lemma \ref{lemma:eulerian_derivatives}. Finally, since $W_n'(1) = n! \operatorname{\mathbb{E}} [p(\pi) + 1]$ and $W_n''(1) = n! \operatorname{\mathbb{E}} [(p(\pi) + 1)p(\pi)]$, we have
\begin{align*}
\operatorname{\mathbb{E}}[p(\pi)] = \frac{W_n'(1)}{n!} - 1 = \frac{n-2}{3} \qquad \text{if } n \geq 2,
\end{align*}
and
\begin{align*}
\operatorname{Var}(p(\pi)) = \frac{W_n''(1)}{n!} + \frac{W_n'(1)}{n!} - \left( \frac{W_n'(1)}{n!} \right)^2 = \frac{2(n+1)}{45} \qquad \text{if } n \geq 4.
\end{align*}
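These exact values of the mean and variance can be confirmed by enumerating $S_n$ for small $n$ (a Python sketch, assuming the interior-peak definition of $p(\pi)$ from the introduction):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def peaks(p):
    # interior peaks: positions i with p[i-1] < p[i] > p[i+1]
    return sum(p[i-1] < p[i] > p[i+1] for i in range(1, len(p) - 1))

for n in range(4, 8):
    vals = [peaks(p) for p in permutations(range(n))]
    mean = Fraction(sum(vals), factorial(n))
    var = Fraction(sum(v * v for v in vals), factorial(n)) - mean**2
    assert mean == Fraction(n - 2, 3)
    assert var == Fraction(2 * (n + 1), 45)
```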
At this point, it is worth noting that \eqref{eq:peakgen} implies that, like $A_n(t)$, $W_n(t)$ has only real roots, and so, by Harper's method~\cite{Harper}, we can obtain a central limit theorem for peaks of a random permutation in $S_n$. In the next subsection, we give a new proof of this central limit theorem by using analytic combinatorics, and we will go further to prove a central limit theorem for peaks in arbitrary conjugacy classes of $S_n$, where the mean and variance depend only on the density of fixed points in the conjugacy classes.
\subsection{Establishing the asymptotic normality of peaks in $S_n$}
Kim and Lee~\cite{KimLee} proved the following modification of Curtiss' theorem:
\begin{theorem}
\label{theorem:curtiss}
Let $X_n$ be random vectors in $\mathbb{R}^d$ for each $n \in \mathbb{N} \cup \left\{ \infty \right\}$ and let $M_{X_n}(s) = \operatorname{\mathbb{E}}[e^{\langle s, X_n \rangle}]$ be the moment generating function (m.g.f\@.) of $X_n$. Suppose that there is a non-empty open subset $U \subseteq \mathbb{R}^d$ such that $\lim_{n \to \infty} M_{X_n}(s) = M_{X_\infty}(s)$ for all $s \in U$. Then, $X_n$ converges in distribution to $X_\infty$.
\end{theorem}
This theorem will be used in this subsection to prove a central limit theorem for peaks of permutations chosen uniformly at random from $S_n$, and in Section 3 to prove an analogous theorem for permutations chosen uniformly at random from arbitrary conjugacy classes, where the asymptotic mean and variance are functions only of $\alpha$, the density of fixed points in the conjugacy classes.
\begin{theorem}
\label{thm:clt_peaks_Sn}
Let $\pi_n$ be chosen uniformly at random from $S_n$. Then $p(\pi_n)$ is asymptotically normal with mean $\frac{n-2}{3}$ and variance $\frac{2(n+1)}{45}$. More precisely, as $n \to \infty$,
\begin{align*}
\frac{p(\pi_n) - \tfrac{n-2}{3}}{\sqrt{n}}
\quad \text{converges in distribution to} \quad \mathcal{N}\left(0, \tfrac{2}{45}\right).
\end{align*}
\end{theorem}
\begin{proof}
Let $X_n = \left( p(\pi_n) - \frac{n-2}{3} \right) / \sqrt{n}$ denote the normalized peaks. In view of Theorem \ref{theorem:curtiss}, it suffices to show that $M_{X_n}(s)$ converges pointwise to the m.g.f.\ of $\mathcal{N} \left(0, \frac{2}{45}\right)$ on some open interval. Let $0 < t < 1$. By a simple comparison, it follows that
\begin{align*}
t \cdot \frac{n!}{\log^{n+1}(1/t)}
= \int_{0}^{\infty} a^{n} t^{a+1} \, \mathrm{d}a
\leq \sum_{a\geq 1} a^{n} t^{a}
\leq \int_{0}^{\infty} a^{n} t^{a-1} \, \mathrm{d}a
= \frac{1}{t} \cdot \frac{n!}{\log^{n+1}(1/t)}.
\end{align*}
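The two-sided comparison above is easy to check numerically for a particular pair $(n, t)$ (a quick Python sketch with illustrative values; the truncated tail of the series is numerically negligible here):

```python
from math import factorial, log

n, t = 6, 0.37
S = sum(a**n * t**a for a in range(1, 2000))   # truncated series; tail is ~0
base = factorial(n) / log(1/t)**(n + 1)        # n! / log^{n+1}(1/t)
assert t * base <= S <= base / t               # the two-sided bound
```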
Plugging this into \eqref{eq:peakgen}, we obtain
\begin{align*}
\frac{1}{n!} W_n\left(\frac{4t}{(1+t)^2}\right)
= \frac{1}{n!} \left(\frac{2(1 - t)}{1+t}\right)^{n+1} \left( \sum_{a\geq 1} a^n t^a \right)
= e^{\mathcal{O}(\log t)} \left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1}.
\end{align*}
Now, fix $s > 0$ and choose $t$ as the unique solution of $\frac{4t}{(1+t)^2} = e^{-s/\sqrt{n}}$ in the range $(0, 1)$, which is given by
\begin{align}
\label{eq:t_asymp}
t
= \frac{1 - \sqrt{1 - e^{-s/\sqrt{n}}}}{1 + \sqrt{1 - e^{-s/\sqrt{n}}}}
= 1 - \frac{2s^{1/2}}{n^{1/4}} + \frac{2 s}{n^{1/2}} - \frac{3 s^{3/2}}{2n^{3/4}} + \frac{s^2}{n} + \mathcal{O}\left(n^{-5/4}\right),
\end{align}
where the implicit bound of the error term depends only on $s$. From this expansion, we have both $\log(t) = \mathcal{O}\left(n^{-1/4}\right)$ and $\log\left(\frac{2(1-t)}{(1+t)\log(1/t)} \right) = -\frac{s}{3\sqrt{n}} + \frac{s^2}{45n} + \mathcal{O}(n^{-5/4}) $. Plugging these into $M_{X_n}(-s)$, we see that
\begin{align*}
M_{X_n}(-s)
= \frac{1}{n!} W_n\left(e^{-s/\sqrt{n}}\right) e^{\frac{n+1}{3\sqrt{n}}s}
= e^{\frac{s^2}{45} + \mathcal{O}\left(n^{-1/4}\right)}.
\end{align*}
Since $s > 0$ was arbitrary, $M_{X_n}$ converges pointwise on the open interval $(-\infty, 0)$ to the m.g.f.\ of $\mathcal{N}\left( 0, \frac{2}{45} \right)$, namely $u \mapsto e^{u^2/45}$, and the desired conclusion follows from Theorem \ref{theorem:curtiss}.
\end{proof}
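The expansion \eqref{eq:t_asymp} can be sanity-checked by comparing the closed form of $t$ with its four-term series for a large $n$ (a Python sketch; \texttt{expm1} is used to avoid catastrophic cancellation when forming $1 - e^{-s/\sqrt{n}}$):

```python
from math import expm1, sqrt

def t_exact(s, n):
    # unique root of 4t/(1+t)^2 = exp(-s/sqrt(n)) in (0, 1)
    r = sqrt(-expm1(-s / sqrt(n)))   # r = sqrt(1 - e^{-s/sqrt(n)}), computed stably
    return (1 - r) / (1 + r)

def t_series(s, n):
    # the four-term expansion of t in powers of n^{-1/4}
    return 1 - 2*sqrt(s)/n**0.25 + 2*s/sqrt(n) - 1.5*s**1.5/n**0.75 + s**2/n

s, n = 1.0, 10**8
# the remaining error should be O(n^{-5/4}), roughly 1e-10 for these values
assert abs(t_exact(s, n) - t_series(s, n)) < 1e-8
```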
\section{Central limit theorem for peaks of a random permutation in a fixed conjugacy class of $S_n$}
Let $\mathcal{C}_{\lambda}$ denote the set of all permutations of $S_n$ of cycle type $\lambda = 1^{n_1} 2^{n_2} \cdots $ of $n$. Recall that the peak generating function over $\mathcal{C}_{\lambda}$ has an explicit formula \eqref{eq:peakgen_cycletype}, which involves the quantity~$f_{a,i}$ defined in the introduction. In the course of proving the main theorem, it will be important to have a precise estimate of $f_{a,i}$. Define $g_{a,i}$ by the following relation
\begin{align*}
f_{a,i} = \frac{(2a)^i}{2i} g_{a,i}.
\end{align*}
The main reason for introducing $g_{a,i}$ is that $f_{a,i}$ is expected to behave much like $(2a)^i/(2i)$, and so, it is the relative difference between the two that must be estimated precisely. The following lemma serves this purpose.
\begin{lemma}
\label{lemma:fai_est}
There exists a universal constant $c_1 > 0$ such that
\begin{align*}
e^{-c_1 (2a)^{-2i/3}}
\leq g_{a,i}
\leq e^{c_1 (2a)^{-2i/3}}
\end{align*}
for all $a \geq 1$ and $ i \geq 1$. Consequently, we have $e^{-(c_1/4) / a^2} \leq g_{a,i} \leq e^{(c_1/4) / a^2}$.
\end{lemma}
Although the intermediate step of the proof will show that the explicit choice $c_1 = 4$ works, we prefer to leave it as a named constant. This is because its value is not important for the argument and its presence will clarify the way we utilize this lemma.
\begin{proof}
Recall that $f_{a,i} = \frac{1}{2i} \sum \mu(d) (2a)^{i/d}$, where the sum is over $d$, the positive odd divisors of $i$. From this, we see that $g_{a,i} = 1$ when $i$ is either $1$ or $2$, and so, it suffices to assume that $i \geq 3$. For such $i \geq 3$,
\begin{align*}
(2a)^i \left| g_{a,i} - 1 \right|
\leq \sum_{\substack{ d \mid i \\ d \text{ odd, } d \neq 1 }} (2a)^{i/d}
\leq \sum_{k=1}^{\lfloor i/3 \rfloor} (2a)^k
= \frac{2a}{2a-1} \left( (2a)^{\lfloor i/3 \rfloor} - 1 \right)
\leq 2 (2a)^{i/3}.
\end{align*}
Rearranging, it follows that
\begin{align*}
1 - 2(2a)^{-2i/3} \leq g_{a,i} \leq 1 + 2(2a)^{-2i/3}.
\end{align*}
Since $a \geq 1$ and $i \geq 3$, we have $2(2a)^{-2i/3} \leq \frac{1}{2}$. Then, applying the inequalities $e^{-2x} \leq 1-x$ and $ 1+x \leq e^{2x}$, which are valid for $0 \leq x \leq \frac{1}{2}$, proves the claim with the choice $c_1 = 4$. The remaining assertion is a simple consequence of the fact that $(2a)^{-2i/3} \leq a^{-2}$ for $i \geq 3$.
\end{proof}
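Both the integrality of $f_{a,i}$ (see the remark below) and the bounds of Lemma \ref{lemma:fai_est} with the explicit choice $c_1 = 4$ can be verified directly for small parameters (a Python sketch, using the formula $f_{a,i} = \frac{1}{2i} \sum_{d \mid i,\, d \text{ odd}} \mu(d) (2a)^{i/d}$ recalled in the proof):

```python
from math import exp

def mobius(d):
    # Moebius function by trial division
    k, nprimes, f = d, 0, 2
    while f * f <= k:
        if k % f == 0:
            k //= f
            if k % f == 0:
                return 0          # square factor
            nprimes += 1
        else:
            f += 1
    if k > 1:
        nprimes += 1
    return -1 if nprimes % 2 else 1

def f_ai(a, i):
    s = sum(mobius(d) * (2*a)**(i // d) for d in range(1, i + 1) if i % d == 0 and d % 2 == 1)
    assert s % (2 * i) == 0        # f_{a,i} is an integer
    return s // (2 * i)

for a in range(1, 6):
    for i in range(1, 11):
        g = f_ai(a, i) * 2 * i / (2 * a)**i          # g_{a,i} = f_{a,i} * 2i / (2a)^i
        assert exp(-4 * (2*a)**(-2*i/3)) <= g <= exp(4 * (2*a)**(-2*i/3))
```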
\begin{remark}
The quantity $f_{a,i}$ is a positive integer. In the special case when $a$ is a power of $2$, this follows from Lemma 1.3.16 of \cite{FNP}, which enumerates monic, irreducible, self-conjugate polynomials of degree $2i$ over a finite field of size $2a$.
For general $a$, the quantity $f_{a,i}$ enumerates what Victor Reiner calls ``nowhere-zero primitive twisted necklaces'' with values in
\begin{align*}
A = \{+1,-1,+2,-2,\cdots,+a,-a \}
\end{align*}
having $i$ entries. To define this notion, let the cyclic group $C_{2i}$ act on $i$-tuples of words $(b_1,\cdots,b_i)$ where the $b_k$'s take values in $A$, and the generator of $C_{2i}$ acts by
\begin{align*}
g(b_1,\cdots,b_i) = (b_2,\cdots,b_i,-b_1).
\end{align*}
An orbit $P$ of this action is called a twisted necklace, and $P$ primitive means that the $C_{2i}$ action is free (i.e. no non-trivial group element fixes any vector in the orbit $P$). Arguing as in the proof of Theorem 4.2 of \cite{Reiner} shows that $f_{a,i}$ does indeed enumerate nowhere-zero primitive twisted necklaces. We thank Victor Reiner for this observation.
\end{remark}
\subsection{Heuristics and main idea}
We begin by focusing on the product of coefficients appearing in the formula of the peak generating function \eqref{eq:peakgen_cycletype}. More specifically, we seek to find a formula of each coefficient that is more manageable for estimation. Applying the generalized binomial theorem to expand the function, we get
\begin{align}
[x_{i}^{n_i}] \left( \frac{1+x_i}{1-x_i} \right)^{f_{a,i}}
&= [x_{i}^{n_i}] \left( \left( 1+x_i \right)^{f_{a,i}} \left( 1-x_i \right)^{-f_{a,i}} \right) \nonumber \\
&= \sum_{k=0}^{\infty} \binom{f_{a,i}}{k} \binom{f_{a,i} - 1 + n_i - k}{f_{a,i} - 1}
= \frac{(2f_{a,i})^{n_i}}{n_i!} \mathsf{K}_{a,i}, \label{eq:kai}
\end{align}
where $\mathsf{K}_{a,i}$ is defined by
\begin{align*}
\mathsf{K}_{a,i}
= \sum_{\nu = 0}^{n_i} \frac{1}{2^{n_i}} \binom{n_i}{\nu} \frac{(f_{a,i} - \nu + n_i - 1)!}{(f_{a,i} - \nu)! f_{a,i}^{n_i - 1}}.
\end{align*}
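The identity \eqref{eq:kai} is a finite algebraic identity and can be verified in exact rational arithmetic (a Python sketch; here $f$ plays the role of $f_{a,i}$ and $m$ the role of $n_i$):

```python
from fractions import Fraction
from math import comb, factorial, perm

def coeff(f, m):
    # [x^m] ((1+x)/(1-x))^f = sum_k C(f,k) * C(f-1+m-k, f-1)
    return sum(comb(f, k) * comb(f - 1 + m - k, f - 1) for k in range(m + 1))

def K(f, m):
    # K_{a,i} with f = f_{a,i} and m = n_i; perm(n, k) = n!/(n-k)!
    return sum(Fraction(comb(m, v), 2**m) * Fraction(perm(f - v + m - 1, m - 1), f**(m - 1))
               for v in range(m + 1))

for f in range(2, 7):
    for m in range(1, f + 1):
        assert coeff(f, m) == Fraction((2 * f)**m, factorial(m)) * K(f, m)
```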
To apply \eqref{eq:kai}, note that the term $t^a \prod_i [x_{i}^{n_i}] \left( (1+x_i)/(1-x_i) \right)^{f_{a,i}}$ in \eqref{eq:peakgen_cycletype} appears to contribute to the sum meaningfully only when $a$ is comparable to $n^{5/4}$. Also, the $\mathsf{K}_{a,i}$'s are approximately $1$ if $f_{a,i}$ is considerably larger than $n_i$. If all these observations hold simultaneously, one may argue heuristically that
\begin{align*}
\frac{1}{|\mathcal{C}_{\lambda}|}\sum_{\pi \in \mathcal{C}_{\lambda}} \left( \frac{4t}{(1+t)^2} \right)^{p(\pi) + 1}
&\stackrel{?}{\approx} \left( \frac{\prod_i n_i! i^{n_i}}{n!} \right) \cdot 2 \left( \frac{1-t}{1+t} \right)^{n+1}
\int_{0}^{\infty} t^{x} \prod_{i} \frac{\left( (2x)^{i} / i \right)^{n_i}}{n_i!} \, \mathrm{d}x \\
&= \frac{1}{n!} \left( \frac{2(1-t)}{1+t} \right)^{n+1} \int_{0}^{\infty} t^x x^n \, \mathrm{d}x \\
&= \left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1}.
\end{align*}
The final result is the same as what appears in the proof of the asymptotic normality of peaks over $S_n$. This leads to a naive guess that the peaks over $\mathcal{C}_{\lambda}$ have asymptotically the same normal distribution as the peaks over $S_n$. Of course, we must test the validity of this claim. One main concern is that the alleged asymptotic behavior of \eqref{eq:kai} may not be valid for small $i$'s. Such a phenomenon was already observed in the case of descents \cite{KimLee}, where the asymptotic distribution of descents for a fixed cycle type is parametrized by the density of fixed points. And indeed, we will find that corrections are also needed for the peak distribution due to the presence of fixed points. In summary, we need to
\begin{itemize}
\item precisely control error terms appearing in various approximations, and
\item investigate how the presence of fixed points affects the asymptotic formula for the peak generating function.
\end{itemize}
From this point forward, let $s > 0$ be a fixed positive real number. Then, $t$ is chosen as in \eqref{eq:t_asymp}, which is the unique solution of $\frac{4t}{(1+t)^2} = e^{-s/\sqrt{n}}$ in the interval $(0, 1)$. As a first step toward making the heuristic rigorous, we mimic the computation above without using approximations. Applying \eqref{eq:kai} to the peak generating function \eqref{eq:peakgen_cycletype}, we get
\begin{align*}
\frac{1}{|\mathcal{C}_{\lambda}|}\sum_{\pi \in \mathcal{C}_{\lambda}} e^{-\frac{s}{\sqrt{n}}(p(\pi) + 1)}
&= \frac{2}{n!} \left( \frac{1-t}{1+t} \right)^{n+1}
\sum_{a \geq 1} t^{a} \prod_{1 \leq i \leq n} n_i! i^{n_i} [x_{i}^{n_i}] \left( \frac{1+x_i}{1-x_i} \right)^{f_{a,i}} \\
&= \frac{2}{n!} \left( \frac{1-t}{1+t} \right)^{n+1}
\sum_{a \geq 1} t^{a} \prod_{1 \leq i \leq n} (2a)^{in_i} g_{a,i}^{n_i} \mathsf{K}_{a,i} \\
&= \left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1}
\left[ \frac{\log^{n+1}(1/t)}{n!} \sum_{a \geq 1} a^n t^{a} \prod_{1 \leq i \leq n} g_{a,i}^{n_i} \mathsf{K}_{a,i} \right].
\end{align*}
For the sake of conciseness, define $\mathsf{L}_{\bullet}$ by
\begin{align*}
\mathsf{L}_{A} := \frac{\log^{n+1}(1/t)}{n!} \sum_{a \in A \cap \mathbb{N}} a^n t^{a} \prod_{1 \leq i \leq n} g_{a,i}^{n_i} \mathsf{K}_{a,i}
\end{align*}
for all $A \subseteq \mathbb{R}$. Then, the above computation simplifies to
\begin{align}
\label{eq:peakgen_cycletype_simple}
\frac{1}{|\mathcal{C}_{\lambda}|}\sum_{\pi \in \mathcal{C}_{\lambda}} e^{-\frac{s}{\sqrt{n}}(p(\pi) + 1)}
= \left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1} \mathsf{L}_{[1,\infty)}.
\end{align}
As in the heuristic computation, $\mathsf{L}_{\bullet}$ will be approximated by its integral analogue. In doing so, it is convenient to split the sum into two parts at a certain threshold. The primary reason is that the aforementioned approximation tends to fail for small $a$, and so, that case deserves to be handled separately. To describe this threshold, let
\begin{align}
\label{eq:delta}
\delta_0 = \left[ \sup_{n \geq 1} \left( n^{1/4} \log(1/t) e^{(c_1/4)+1} \right) \right]^{-1}
\end{align}
and fix any $\delta \in (0, \delta_0)$. In view of \eqref{eq:t_asymp}, $\log(1/t) = 2\sqrt{s}n^{-1/4} + \mathcal{O}(n^{-3/4})$ for large~$n$. This guarantees that $\delta_0$ is bounded away from $0$, and so the choice of $\delta$ makes sense. Then, the sum~$\mathsf{L}_{[1,\infty)}$ will be split into $\mathsf{L}_{[1, \delta n^{5/4}]} + \mathsf{L}_{(\delta n^{5/4},\infty)}$, and we will call the former term the \emph{small range} and the latter term the \emph{large range}.
\subsection{Estimation of small range}
We will focus on the range $a \leq \delta n^{5/4}$, where $\delta$ will be chosen from $(0, \delta_0)$. The main goal in this section is to show that the contribution arising from this range is negligible. The precise statement is as follows.
\begin{lemma}
\label{lemma:small_range}
For each $\delta \in (0, \delta_0)$ and $\rho \in (\delta/\delta_0, 1)$, there exists a constant $c_3 = c_3(\delta, \rho) > 0$, depending only on $\delta$ and $\rho$, such that
\begin{align*}
\mathsf{L}_{[0, \delta n^{5/4}]} \leq c_3 \rho^{n+1}.
\end{align*}
\end{lemma}
We begin by producing a simple upper bound for the product of the $\mathsf{K}_{a,i}$'s.
\begin{lemma}
\label{lemma:kai_asymp_small}
Let $\delta > 0$. Then, there exists a constant $c_2 = c_2(\delta) > 0$, depending only on $\delta$, such that
\begin{align}
\prod_{1 \leq i \leq n} \mathsf{K}_{a,i} \leq \left( \frac{\delta n^{5/4}}{a} \right)^{n} e^{c_2 n^{3/4}}
\end{align}
whenever $a \leq \delta n^{5/4}$ holds.
\end{lemma}
\begin{proof}
Assume that $a \leq \delta n^{5/4}$. If $0 \leq \nu \leq n_i$, then
\begin{align*}
\frac{(f_{a,i} - \nu + n_i - 1)!}{(f_{a,i} - \nu)! f_{a,i}^{n_i - 1}}
&= \prod_{k=1}^{n_i - 1} \left( 1 + \frac{k - \nu}{f_{a,i}} \right)
\leq \left( 1 + \frac{n_i}{f_{a,i}} \right)^{n_i}.
\end{align*}
Plugging this to the definition of $\mathsf{K}_{a,i}$, we obtain $\mathsf{K}_{a,i} \leq \left(1 + (n_i / f_{a,i}) \right)^{n_i}$. This bound will be further simplified depending on whether $i = 1$ or $i \geq 2$. For the sake of brevity, we write $r = \delta n^{5/4} / a$. By assumption, we have $r \geq 1$. Now, when $i = 1$, plug $f_{a,1} = a$ and proceed as
\begin{align*}
\mathsf{K}_{a,1}
\leq \left( 1 + \frac{n_1}{a} \right)^{n_1}
\leq r^{n_1} \left( 1 + \frac{n_1}{ra} \right)^{n_1}
\leq r^{n_1} e^{n_1^2 / ra }
\leq r^{n_1} e^{(1/\delta) n^{3/4}}.
\end{align*}
In the third and fourth steps, inequalities $1+x \leq e^x$ and $n_1 \leq n$ are utilized, respectively. Likewise, when~$i \geq 2$, we apply Lemma \ref{lemma:fai_est} and proceed as in the previous case to get
\begin{align*}
\mathsf{K}_{a,i}
\leq \left( 1 + 2e^{c_1} \frac{in_i}{(2a)^i} \right)^{n_i}
\leq r^{in_i} \left( 1 + 2e^{c_1} \frac{in_i}{(2ra)^i} \right)^{n_i}
\leq r^{in_i} e^{2e^{c_1} in_i^2 / (2ra)^i }
\leq r^{in_i} e^{(e^{c_1}/\delta^2) in_i / n^{3/2} }.
\end{align*}
In the last step, the obvious inequality $in_i \leq n$ is used, along with $(2ra)^i \geq (2ra)^2$. Combining altogether and utilizing the identity $\sum_{i \geq 2} i n_i = n - n_1$, we see that
\begin{align*}
\prod_{1 \leq i \leq n} \mathsf{K}_{a,i}
\leq \left( r^{n_1} e^{(1/\delta) n^{3/4}} \right) \left( r e^{(e^{c_1}/\delta^2) / n^{3/2} } \right)^{n-n_1}
\leq r^{n} e^{c_2 n^{3/4}},
\end{align*}
where $c_2$ can be chosen as $c_2 = (1/\delta) + (e^{c_1}/\delta^2)$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:small_range}]
By Lemmas \ref{lemma:fai_est} and \ref{lemma:kai_asymp_small}, we see that
\begin{align*}
\mathsf{L}_{[0, \delta n^{5/4}]}
&\leq \frac{\log^{n+1}(1/t)}{n!} \sum_{1 \leq a \leq \delta n^{5/4}} (\delta n^{5/4})^{n} t^{a} e^{c_2 n^{3/4}} e^{(c_1/4) n/ a^2} \\
&\leq \frac{\log^{n+1}(1/t)}{n!} (\delta n^{5/4})^{n+1} e^{c_2 n^{3/4}} e^{(c_1/4) n}.
\end{align*}
Here, the last step follows by bounding each summand using $t^a \leq 1$ and $a \geq 1$, and then multiplying by the number of summands, which is at most $\delta n^{5/4}$. Now, by the definition of $\delta_0$, we have $n^{1/4}\log(1/t)e^{(c_1/4)+1} \leq 1/\delta_0$. Moreover, a quantitative form of Stirling's formula \cite{Robbins} tells us that $n! \geq \sqrt{2\pi} n^{n+1/2}e^{-n}$, and so,
\begin{align*}
\mathsf{L}_{[0, \delta n^{5/4}]}
&\leq \frac{1}{(2\pi)^{1/2}n^{n+1/2}e^{-n}} \left( \frac{1}{\delta_0 n^{1/4} e^{(c_1/4)+1}} \right)^{n+1} (\delta n^{5/4})^{n+1} e^{c_2 n^{3/4}} e^{(c_1/4) n} \\
&= \rho^{n+1} \cdot \left(\frac{\delta}{\delta_0 \rho} \right)^{n+1} \frac{n^{1/2} e^{c_2 n^{3/4}}}{(2\pi)^{1/2} e^{(c_1/4)+1}}.
\end{align*}
If $\rho \in (\delta/\delta_0, 1)$, then the factor $(\delta/\rho\delta_0)^{n+1} n^{1/2}e^{c_2 n^{3/4}} $ is bounded, and hence, the claim follows.
\end{proof}
\subsection{Estimation of large range}
We now turn our attention to the range $a > \delta n^{5/4}$, where we recall that $\delta$ is a fixed number chosen from the interval $(0, \delta_0)$, with $\delta_0$ as in \eqref{eq:delta}. We begin by proving the following lemma, which resolves the contribution of the~$\mathsf{K}_{a,i}$'s for~$i \geq 2$.
\begin{lemma}
\label{lemma:kai_asymp_large}
There exists a universal constant $c_4 > 0$ such that
\begin{align*}
e^{ -c_4 n^2 / a^2}
\leq \prod_{i \geq 2} \mathsf{K}_{a,i}
\leq e^{ c_4 n^2 / a^2}
\end{align*}
whenever $a \geq \delta n^{5/4} \geq e^{c_1} n$. Here, $c_1$ is chosen as in Lemma \ref{lemma:fai_est}.
\end{lemma}
\begin{proof}
Assume that $a \geq \delta n^{5/4} \geq e^{c_1} n$. When $i \geq 2$, Lemma \ref{lemma:fai_est} gives us that $f_{a,i} \geq e^{-c_1}\frac{(2a)^{2}}{2i} \geq \frac{2na}{i} \geq 2e^{c_1}n \geq 2n_i$. Now, letting $0 \leq \nu \leq n_i$, we have, as in the beginning of the proof of Lemma \ref{lemma:kai_asymp_small},
\begin{align*}
\left( 1 - \frac{n_i}{f_{a,i}} \right)^{n_i}
\leq \mathsf{K}_{a,i}
\leq \left( 1 + \frac{n_i}{f_{a,i}} \right)^{n_i}.
\end{align*}
Since $\frac{n_i}{f_{a,i}} \leq \frac{1}{2}$, we may apply inequalities $-2x \leq \log(1-x) $ and $ \log(1+x) \leq 2x$, which are valid for $0 \leq x \leq \frac{1}{2}$, to further simplify the above bounds, which results in
\begin{align*}
-e^{c_1} \frac{i n_i^2}{a^2}
\leq -\frac{2n_i^2}{f_{a,i}}
\leq \log(\mathsf{K}_{a,i})
\leq \frac{2n_i^2}{f_{a,i}}
\leq e^{c_1} \frac{i n_i^2}{a^2}.
\end{align*}
Finally, by summing this inequality for $i = 2, \cdots, n$ and utilizing the bound $\sum_{i} i n_i^2 \leq n^2$, the desired conclusion follows with $c_4 = e^{c_1}$.
\end{proof}
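The two-sided bound on $\mathsf{K}_{a,i}$ used in the proof above, $(1 - n_i/f_{a,i})^{n_i} \leq \mathsf{K}_{a,i} \leq (1 + n_i/f_{a,i})^{n_i}$, can be checked in exact arithmetic for moderate parameters (a Python sketch, with $f$ for $f_{a,i}$ and $m$ for $n_i$):

```python
from fractions import Fraction
from math import comb, perm

def K(f, m):
    # K_{a,i} with f = f_{a,i} and m = n_i, evaluated exactly
    return sum(Fraction(comb(m, v), 2**m) * Fraction(perm(f - v + m - 1, m - 1), f**(m - 1))
               for v in range(m + 1))

for f, m in ((40, 5), (200, 12), (1000, 7)):
    k = K(f, m)
    assert (1 - Fraction(m, f))**m <= k <= (1 + Fraction(m, f))**m
```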
Next, we establish a detailed asymptotic expansion of $\mathsf{K}_{a,1}$.
\begin{lemma}
\label{lemma:ka1_asymp_large}
Let $\delta \in (0, \delta_0)$. Then,
\begin{align*}
\mathsf{K}_{a,1} = \exp\left\{ \frac{n_1^3}{12 a^2} - \frac{3 n_1^5}{160 a^4} + \mathcal{O} \left(n^{-1/4}\right) \right\}
\end{align*}
holds in the range $a \geq \delta n^{5/4} \geq 2n$. Moreover, the implicit bound of the error term depends only on $s$ and~$\delta$.
\end{lemma}
\begin{proof}
It is convenient to separate the case of small $n_1$ from the general argument. Letting $0 \leq \nu \leq n_1$ and using the fact that $1+x = e^{x + \mathcal{O}(x^2)}$ near $x = 0$, we get
\begin{align*}
\frac{(a - \nu + n_1 - 1)!}{(a - \nu)! a^{n_1 - 1}}
= \prod_{k=1}^{n_1 - 1} \left( 1 + \frac{k-\nu}{a} \right)
= \exp\left\{ \sum_{k=1}^{n_1 - 1} \left( \frac{k-\nu}{a} + \mathcal{O}\left( \frac{n_1^2}{a^2} \right) \right) \right\}.
\end{align*}
So, if $N$ is a random variable having binomial distribution with parameters $n_1$ and $\frac{1}{2}$, then
\begin{align*}
\mathsf{K}_{a,1}
= \operatorname{\mathbb{E}} \left[ \frac{(a - N + n_1 - 1)!}{(a - N)! a^{n_1 - 1}} \right]
= e^{\mathcal{O}(n_1^3/a^2)} \operatorname{\mathbb{E}} \left[ \exp\left\{ \frac{n_1 - 1}{a} \left( \frac{n_1}{2} - N \right) \right\} \right]
\end{align*}
and
\begin{align*}
\operatorname{\mathbb{E}} \left[ \exp\left\{ \frac{n_1 - 1}{a} \left( \frac{n_1}{2} - N \right) \right\} \right]
= \cosh^{n_1} \left( \frac{n_1 - 1}{2a} \right)
= e^{\mathcal{O}(n_1^3/a^2)},
\end{align*}
where we utilized the fact that $\cosh(x) = e^{\mathcal{O}(x^2)}$ near $x = 0$. In particular, if we set $\beta = \frac{3}{4}$ and assume that $n_1 \leq n^{\beta}$, then $n_1^3/a^2 \leq \delta^{-2} n^{-1/4}$, and so, the conclusion of the lemma holds. Again, we prefer to use the named variable $\beta$ rather than the actual value in order to emphasize how it is employed in each step of the proof.
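The binomial m.g.f.\ identity used above, $\operatorname{\mathbb{E}}\left[e^{\theta(n_1/2 - N)}\right] = \cosh^{n_1}(\theta/2)$ for $N \sim \mathrm{Bin}(n_1, \frac{1}{2})$, is easy to confirm numerically (a Python sketch):

```python
from math import comb, cosh, exp, isclose

def binom_mgf(n1, theta):
    # E[exp(theta * (n1/2 - N))] for N ~ Binomial(n1, 1/2), by direct summation
    return sum(comb(n1, k) / 2**n1 * exp(theta * (n1/2 - k)) for k in range(n1 + 1))

for n1 in (3, 8, 15):
    for theta in (0.1, 0.7):
        assert isclose(binom_mgf(n1, theta), cosh(theta / 2)**n1, rel_tol=1e-12)
```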
The previous computation reduces matters to the case $n_1 \geq n^{\beta}$ with $\beta = \frac{3}{4}$. In such case, we will write
\begin{align*}
\mathsf{K}_{a,1} = \sum_{\nu=0}^{n_1} p(\nu),
\quad \text{where} \quad
p(\nu) = \frac{1}{2^{n_1}} \binom{n_1}{\nu} \frac{(a + n_1 - 1 - \nu)!}{(a - \nu)! a^{n_1 - 1}}.
\end{align*}
We adopt the idea of Laplace's method to estimate $\mathsf{K}_{a,1}$. That is, we will argue by showing that $p(\nu)$ is approximately a Gaussian density. Our goal is to establish a rigorous version of this claim and then draw the desired estimate from it.
We first obtain a global upper bound of $p$. Identify the factorial $n!$ with the gamma function $\Gamma(n+1)$ so that $p(\nu)$ is defined as an analytic function of $\nu$ on $[0, n_1]$. It is well known that the second derivative of the log-gamma function satisfies $(\log \Gamma(z+1))'' = \sum_{n=1}^{\infty} (n+z)^{-2}$, and so,
\begin{align*}
(\log p(\nu))''
&= -\sum_{n=1}^{\infty} \left( \frac{1}{(\nu + n)^2} + \frac{1}{(n_1 - \nu + n)^2} \right) - \sum_{k=1}^{n_1 - 1} \frac{1}{(a-\nu+k)^2} \\
&\leq -\sum_{n=1}^{\infty} \frac{2}{(\frac{n_1}{2} + n)^2}
\leq - \int_{1}^{\infty} \frac{2}{(\frac{n_1}{2} + x)^2} \, \mathrm{d}x
= - \frac{4}{n_1 + 2}.
\end{align*}
In particular, $(\log p(\nu))'$ is strictly decreasing on $[0, n_1]$. Moreover, there exists a unique solution $\nu = \tilde{\nu}_{0}$ of the equation $\log p(\nu+1) - \log p(\nu) = 0$ on $[0, n_1]$, which is explicitly given by
\begin{align}
\label{eq:nutilde0_asymp}
\tilde{\nu}_{0}
= \frac{a + n_1 - 1 - \sqrt{a^2 + n_1^2 - 1}}{2}
= \frac{n_1}{2} - \frac{n_1^2}{4a} + \frac{n_1^4}{16a^3} + \mathcal{O}\left(1\right).
\end{align}
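Since $p(\nu+1)/p(\nu) = \frac{(n_1-\nu)(a-\nu)}{(\nu+1)(a+n_1-1-\nu)}$ by the factorial recurrences, the closed form of $\tilde{\nu}_0$ in \eqref{eq:nutilde0_asymp} can be checked numerically (a Python sketch with illustrative values of $a$ and $n_1$):

```python
from math import isclose, sqrt

def ratio(a, n1, nu):
    # p(nu+1)/p(nu) for p(nu) = 2^{-n1} C(n1, nu) (a+n1-1-nu)!/((a-nu)! a^{n1-1})
    return (n1 - nu) * (a - nu) / ((nu + 1) * (a + n1 - 1 - nu))

for a, n1 in ((50, 10), (1000, 30)):
    nu0 = (a + n1 - 1 - sqrt(a*a + n1*n1 - 1)) / 2   # the claimed root
    assert isclose(ratio(a, n1, nu0), 1.0, rel_tol=1e-12)
```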
Then, by the mean-value theorem, there exists $\nu_0 \in [\tilde{\nu}_{0},\tilde{\nu}_{0}+1]$ at which $(\log p(\nu))'$ vanishes, and $\nu_0$ is unique by the strict monotonicity. Integrating twice, we get
\begin{align}
\label{eq:pnu_upper_bound}
p(\nu)
= p(\nu_{0}) \exp\left\{ \int_{\nu_0}^{\nu} (\nu - t) (\log p(t))'' \, \mathrm{d} t \right\}
\leq p(\nu_{0}) \exp\left\{ -\frac{2}{n_1 + 2} (\nu - \nu_{0})^2 \right\}.
\end{align}
Next, we claim that this upper bound is the correct asymptotic formula for $p(\nu)$, which amounts to providing a lower bound similar to \eqref{eq:pnu_upper_bound}. However, one minor issue is that such a lower bound cannot hold on all of $[0, n_1]$ in general. To circumvent this, we notice that $p(\nu)/p(\nu_0)$ becomes small if $|\nu - \nu_0|$ is sufficiently large compared to $\sqrt{n_1}$. This suggests that we may focus on the range $|\nu - \nu_{0}| \leq n^{\gamma}\sqrt{n_1}$, where $\gamma$ is chosen as $\gamma = \frac{\beta}{2}-\frac{1}{4} = \frac{1}{8} $. In this range, we want to obtain a Gaussian lower bound for $p$. Focusing on the second derivative of $\log p(\nu)$ as before, we obtain
\begin{align*}
(\log p(\nu))''
&= -\left( \frac{1}{\nu} + \mathcal{O}\left(\frac{1}{\nu^2}\right) + \frac{1}{n_1 - \nu} + \mathcal{O}\left(\frac{1}{(n_1 - \nu)^2}\right) \right) + \mathcal{O}\left( \frac{n_1}{a^2} \right) \\
&= -\frac{n_1}{\nu( n_1 - \nu)} + \mathcal{O}\left( n^{-2\beta} \right) + \mathcal{O}\left( n^{-3/2} \right),
\end{align*}
where both estimates $\sum_{n=1}^{\infty} \frac{1}{(n+x)^2} = \frac{1}{x} + \mathcal{O}\left(\frac{1}{x^2}\right)$ uniformly in $x > 0$ and $\left|\frac{1}{a-\nu+k}\right| \leq \frac{2}{a}$ are exploited in the first step. To simplify further, we note that
\begin{align*}
\left| \nu - \frac{n_1}{2} \right|
\leq \left|\nu - \nu_0 \right| + \left| \nu_0 - \frac{n_1}{2} \right|
\leq n^{\gamma}\sqrt{n_1} + \mathcal{O}\left( \frac{n_1^2}{a} \right)
\leq \mathcal{O} \left( \frac{n_1}{n^{1/4}} \right).
\end{align*}
In the last step, we made use of the bounds $n_1/a = \mathcal{O}(n^{-1/4})$ and $n^{\gamma}/\sqrt{n_1} \leq n^{\gamma-\beta/2} = n^{-1/4}$. So it follows that
\begin{align*}
\frac{n_1}{\nu( n_1 - \nu)}
= \frac{4}{n_1} \cdot \frac{1}{1 - \left( \frac{\nu - (n_1/2)}{n_1/2} \right)^2}
= \frac{4}{n_1} \left(1 + \mathcal{O}\left( n^{-1/2} \right) \right)
= \frac{4}{n_1} + \mathcal{O}\left( n^{-\beta-1/2} \right).
\end{align*}
Plugging this into the asymptotic formula of $(\log p(\nu))''$ and combining all the error terms into a single one, we end up with
\begin{align*}
(\log p(\nu))''
&= -\frac{4}{n_1} + \mathcal{O}\left( n^{-5/4} \right).
\end{align*}
Given this asymptotic formula, we can proceed as in \eqref{eq:pnu_upper_bound} to obtain
\begin{align*}
p(\nu)
&= p(\nu_0) \exp \left\{ -\frac{2}{n_1}(\nu - \nu_{0})^2 + \mathcal{O}\left( n^{2\gamma-5/4} \right) \right\}.
\end{align*}
From this, we have
\begin{align*}
\sum_{\nu : |\nu - \nu_0| \leq n^{\gamma}\sqrt{n_1}} \frac{p(\nu)}{p(\nu_0)}
&= e^{\mathcal{O}(n^{-1/4})} \int_{|t| \leq n^{\gamma}\sqrt{n_1}} e^{-\frac{2}{n_1}t^2} \, \mathrm{d}t \\
&= e^{\mathcal{O}\left( n^{-1/4} \right)} \left( \int_{\mathbb{R}} e^{-\frac{2}{n_1}t^2} \, \mathrm{d}t - \int_{|t| > n^{\gamma}\sqrt{n_1}} e^{-\frac{2}{n_1}t^2} \, \mathrm{d}t \right) \\
&= e^{\mathcal{O}\left( n^{-1/4} \right)} \sqrt{\frac{\pi n_1}{2}} + \mathcal{O}\left( e^{-2n^{\gamma}} \right).
\end{align*}
The first step follows by noting that $-\frac{2}{n_1}(t - \nu_0)^2 = -\frac{2}{n_1} (\nu - \nu_0)^2 + \mathcal{O}\left(n^{\gamma-\beta/2}\right) $ if $|t - \nu| \leq 1$ and~$\gamma - \beta/2 = -1/4$. Also, in the last step, we utilized the tail estimate $\int_{x}^{\infty} e^{-t^2/2} \, \mathrm{d}t < e^{-x^2/2}/x$, which is valid for $x > 0$, to produce a stretched-exponential decay. Similar reasoning shows that
\begin{align*}
\sum_{\nu : |\nu - \nu_0| > n^{\gamma}\sqrt{n_1}} \frac{p(\nu)}{p(\nu_0)}
\leq \mathcal{O} \left( \int_{|t| > n^{\gamma}\sqrt{n_1}} e^{-\frac{2}{n_1+2}t^2} \, \mathrm{d}t \right)
\leq \mathcal{O}\left( e^{-2n^{\gamma}} \right).
\end{align*}
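The Gaussian tail bound invoked above can be sanity-checked numerically. The sketch below (standard library only; the sample points are illustrative) expresses the tail integral through the complementary error function, $\int_x^{\infty} e^{-t^2/2}\,\mathrm{d}t = \sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt{2})$:

```python
import math

def upper_tail(x):
    # integral_x^infty exp(-t^2/2) dt, written via the complementary error function
    return math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

# Check the bound used above: the tail is strictly below exp(-x^2/2)/x for x > 0.
for x in [0.5, 1.0, 3.0, 10.0]:
    assert upper_tail(x) < math.exp(-x * x / 2) / x
```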
Putting all the estimates together, we obtain
\begin{align}
\label{eq:ka1_est_1}
\mathsf{K}_{a,1}
= \sqrt{\frac{\pi n_1}{2}} e^{\mathcal{O}\left( n^{-1/4} \right)} p(\nu_{0}).
\end{align}
In view of \eqref{eq:ka1_est_1}, it remains to estimate $p(\nu_0)$. Since $\nu_{0} - \tilde{\nu}_0 = \mathcal{O}(1)$, it follows that $\nu_{0}$ satisfies the same asymptotic formula as in \eqref{eq:nutilde0_asymp}. Write $\mu = \nu_{0} - \frac{n_1}{2}$. We know that $\mu = o(n_1)$, or more precisely, $\mu/n_1 = \mathcal{O}(n^{-1/4})$. Then, by using Stirling's approximation \cite{Robbins}
\begin{align*}
\log(n!)
= \left(n + \frac{1}{2}\right)\log n - n + \log\sqrt{2\pi} + \mathcal{O}\left(\frac{1}{n}\right),
\end{align*}
we obtain
\begin{align*}
\log \left[ \frac{1}{2^{n_1}} \binom{n_1}{\nu_{0}} \right]
%
&= -n_1 \log 2 + \log (n_1 !) - \log \left( \frac{n_1}{2} + \mu \right)! - \log \left( \frac{n_1}{2} - \mu \right)! \\
%
%
&= - \left(\frac{n_1}{2} + \mu + \frac{1}{2}\right)\log \left( 1 + \frac{2\mu}{n_1} \right)
- \left(\frac{n_1}{2} - \mu + \frac{1}{2}\right)\log \left( 1 - \frac{2\mu}{n_1} \right) \\
& \qquad + \log 2 - \frac{1}{2}\log n_1 - \log\sqrt{2\pi} + \mathcal{O}\left(\frac{1}{n_1}\right) \\
%
&= \frac{n_1}{2} \left[ \left( \frac{1}{n_1} - 1 \right) \left( \frac{2\mu}{n_1} \right)^2 + \left( \frac{1}{2n_1} - \frac{1}{6} \right) \left( \frac{2\mu}{n_1} \right)^4 + \mathcal{O} \left( \frac{2\mu}{n_1} \right)^6 \right] \\
&\qquad + \frac{1}{2}\log\left(\frac{2}{\pi n_1} \right) + \mathcal{O} \left( n^{-\beta} \right).
\end{align*}
This can be further simplified by noting that $\frac{\mu}{n_1} = - \frac{n_1}{4a} + \frac{n_1^3}{16a^3} + \mathcal{O}\left(\frac{1}{n^{1/4}n_1}\right) = \mathcal{O}(n^{-1/4})$, and the result is
\begin{align}
\log \left[ \frac{1}{2^{n_1}} \binom{n_1}{\nu_{0}} \right]
&= \frac{1}{2}\log\left(\frac{2}{\pi n_1} \right) - \frac{2\mu^2}{n_1} - \frac{4\mu^4}{3n_1^3} + \mathcal{O} \left(n^{-1/4}\right) \nonumber \\
&= \frac{1}{2}\log\left(\frac{2}{\pi n_1} \right) - \frac{n_1^3}{8a^2} + \frac{11n_1^5}{192a^4} + \mathcal{O} \left(n^{-1/4}\right). \label{eq:ka1_est_2}
\end{align}
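The expansion in \eqref{eq:ka1_est_2} can be checked numerically for moderate parameters. The sketch below compares the exact value of $\log\left[2^{-n_1}\binom{n_1}{n_1/2+\mu}\right]$, computed via the log-gamma function, against the first line of \eqref{eq:ka1_est_2}; the particular values of $n_1$ and $\mu$ are illustrative only, chosen so that $\mu/n_1$ is small as in the regime of the proof:

```python
import math

def exact(n1, mu):
    # log of 2^{-n1} * binom(n1, n1/2 + mu), via lgamma for numerical stability
    return (-n1 * math.log(2) + math.lgamma(n1 + 1)
            - math.lgamma(n1 / 2 + mu + 1) - math.lgamma(n1 / 2 - mu + 1))

def expansion(n1, mu):
    # First line of the asymptotic formula: (1/2)log(2/(pi n1)) - 2mu^2/n1 - 4mu^4/(3 n1^3)
    return (0.5 * math.log(2 / (math.pi * n1))
            - 2 * mu**2 / n1 - 4 * mu**4 / (3 * n1**3))

n1, mu = 10_000, 25.0
assert abs(exact(n1, mu) - expansion(n1, mu)) < 1e-3
```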
We estimate the remaining factor as follows.
\begin{align*}
\log \left[ \frac{(a + n_1 - 1 - \nu_0)!}{(a - \nu_0)! a^{n_1 - 1}} \right]
%
&= \log \left[ \frac{(a + n_1 - \nu_0)!}{(a - \nu_0)! a^{n_1}} \right] + \log \left[ \frac{a}{a + n_1 - \nu_0} \right] \\
%
&= - n_1 + \left(a + \frac{1}{2} + \frac{n_1}{2} - \mu \right) \log \left( 1 + \frac{\frac{n_1}{2} - \mu}{a} \right) \\
&\qquad - \left(a + \frac{1}{2} - \frac{n_1}{2} - \mu\right) \log \left( 1 - \frac{\frac{n_1}{2} + \mu}{a} \right) + \mathcal{O}\left(n^{-1/4}\right)
\end{align*}
After some painful expansion, we end up with
\begin{align}
\label{eq:ka1_est_3}
\log \left[ \frac{(a + n_1 - 1 - \nu_0)!}{(a - \nu_0)! a^{n_1 - 1}} \right]
&= \frac{5 n_1^3}{24 a^2} - \frac{73 n_1^5}{960 a^4} + \mathcal{O}\left(n^{-1/4}\right).
\end{align}
Therefore, the conclusion follows by combining \eqref{eq:ka1_est_1}, \eqref{eq:ka1_est_2}, and \eqref{eq:ka1_est_3}.
\end{proof}
\subsection{Estimation of the peak generating function}
\begin{lemma}
\label{lemma:large_range}
Let $\delta \in (0, \delta_0)$ and write $\alpha_1 = n_1 / n$ for the density of fixed points. Then
\begin{align*}
\mathsf{L}_{(\delta n^{5/4}, \infty)} = \exp\left\{ \frac{\alpha_1^3}{3} s\sqrt{n} + \left( \frac{\alpha_1^3}{18} - \frac{3\alpha_1^5}{10} + \frac{2\alpha_1^6}{9} \right) s^2 + \mathcal{O}\left(n^{-1/4}\right) \right\}
\end{align*}
holds in the range $\delta n^{5/4} \geq \max\{e^{c_1}, 2\}n$. Moreover, the implied constant in the error term depends only on~$\delta$ and~$s$.
\end{lemma}
Following Kim and Lee's method \cite{KimLee}, we will utilize Laplace's method to approximate the sum by the integral of a certain Gaussian density function and show that the relative error due to this approximation can be controlled in an explicit and uniform manner. The following simple lemma is useful for this purpose.
\begin{lemma}
\label{lemma:approx_gaussian}
Define $f_n: \mathbb{R} \to \mathbb{R}$ by $f_n(x) = \left( 1 + \frac{x}{\sqrt{n}} \right)^n e^{-\sqrt{n}x} \mathbf{1}_{[-\sqrt{n},\infty)}(x)$. Then
\begin{enumerate}[topsep=0.25em,itemsep=0em,label={(\arabic*)}]
\item If $x \geq 0$ and $l > n > 0$, then $f_l(x) \leq f_n(x) \leq (2/\sqrt{e})^n e^{-\sqrt{n}x/2}$.
\item If $x \leq 0$ and $l > n > 0$, then $f_n(x) \leq f_l(x) \leq e^{-x^2/2}$.
\item $f_n(x) \to e^{-x^2/2}$ pointwise as $n\to\infty$.
\end{enumerate}
\end{lemma}
The estimation of $f_n$ is a recurring tool in previous works (see Lemma 4.3 of \cite{KimLee} and the proof therein, for instance) and requires only basic calculus. Nevertheless, we include the proof for completeness.
\begin{proof}
Let $h(t, x) = t \log\left( 1 + \frac{x}{\sqrt{t}}\right) - \sqrt{t}x$. It is easy to check that
\begin{itemize}[topsep=0.25em,itemsep=0em]
\item $x \mapsto h(t, x)$ is concave on $(0, \infty)$ for each $t \in (0, \infty)$,
\item $t \mapsto h(t, x)$ is decreasing on $(0, \infty)$ for each $x \geq 0$,
\item $t \mapsto h(t, x)$ is increasing on $(x^2, \infty)$ for each $x \leq 0$, and
\item $h(t, x) \to -x^2/2$ as $t\to\infty$ for each $x \in \mathbb{R}$.
\end{itemize}
From $f_n(x) = e^{h(n, x)}$, assertions (2) and (3) follow immediately. Moreover, we may exploit the concavity of $x \mapsto h(t, x)$ to bound $h(t, x) \leq h(t, \sqrt{n}) + \frac{\partial h}{\partial x}(t, \sqrt{n})(x - \sqrt{n})$, which gives (1).
\end{proof}
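The three assertions of Lemma \ref{lemma:approx_gaussian} are easy to probe numerically; the following sketch (standard library only, with illustrative values of $n$, $l$, and $x$) checks the monotonicity bounds and the pointwise convergence $f_n(x) \to e^{-x^2/2}$:

```python
import math

def f(n, x):
    # f_n(x) = (1 + x/sqrt(n))^n * exp(-sqrt(n) x) on [-sqrt(n), infinity)
    if x < -math.sqrt(n):
        return 0.0
    return math.exp(n * math.log1p(x / math.sqrt(n)) - math.sqrt(n) * x)

gauss = lambda x: math.exp(-x * x / 2)

# (1): for x >= 0 and l > n, f_l <= f_n <= (2/sqrt(e))^n * exp(-sqrt(n) x / 2)
for x in [0.5, 1.0, 2.0]:
    bound = (2 / math.sqrt(math.e)) ** 100 * math.exp(-10 * x / 2)
    assert f(400, x) <= f(100, x) <= bound

# (2): for x <= 0 and l > n, f_n <= f_l <= exp(-x^2/2)
for x in [-0.5, -1.0]:
    assert f(100, x) <= f(400, x) <= gauss(x)

# (3): pointwise convergence to the Gaussian
assert abs(f(10**6, 1.3) - gauss(1.3)) < 1e-3
```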
Now we return to the proof of the main claim of this section.
\begin{proof}[Proof of Lemma \ref{lemma:large_range}]
Assume that $\delta n^{5/4} \geq \max\{e^{c_1}, 2\}n$ holds. Then, by Lemmas \ref{lemma:fai_est}, \ref{lemma:kai_asymp_large}, and \ref{lemma:ka1_asymp_large}, we have
\begin{align*}
\mathsf{L}_{(\delta n^{5/4}, \infty)}
= e^{\mathcal{O}(n^{-1/4})} \frac{\log^{n+1}(1/t)}{n!} \sum_{a > \delta n^{5/4}} t^a a^n \exp\left\{ \frac{n_1^3}{12a^2} - \frac{3n_1^5}{160a^4} \right\}
\end{align*}
Next, we approximate the sum in the right-hand side by its integral analogue. If $x \in \mathbb{R}$ and $a > \delta n^{5/4}$ are such that $|x - a| \leq 1$, then
\begin{itemize}
\item $t^x = t^{a}e^{\mathcal{O}(\log t)} = t^a e^{\mathcal{O}(n^{-1/4})}$,
\item $x^n = a^n e^{n\log(x/a)} = a^n e^{\mathcal{O}(n/a)} = a^n e^{\mathcal{O}(n^{-1/4})}$, \text{ and}
\item for each $k \geq 0$ given, $\frac{n_1^{k+1}}{x^k} = \frac{n_1^{k+1}}{a^k}\left(1 + \mathcal{O}\left(\frac{1}{a}\right)\right)^k = \frac{n_1^{k+1}}{a^k} + \mathcal{O}\left(\frac{n_1}{a}\right)^{k+1} = \frac{n_1^{k+1}}{a^k} + \mathcal{O}(n^{-1/4})$. The implicit error bound now depends on $k$ as well. However, it will be used only for $k = 2$ and $k =4$, and so, this causes no harm for our objective of retaining error bounds depending only on $s$ and $\delta$.
\end{itemize}
This allows us to approximate the sum by its integral analogue at the expense of the relative error~$e^{\mathcal{O}(n^{-1/4})}$, yielding
\begin{align}
\label{eq:integral_to_est}
\mathsf{L}_{(\delta n^{5/4}, \infty)}
= e^{\mathcal{O}(n^{-1/4})} \frac{\log^{n+1}(1/t)}{n!} \mathsf{J}, \qquad \text{where }
\mathsf{J} = \int_{\delta n^{5/4}}^{\infty} t^x x^n \exp\left\{ \frac{n_1^3}{12x^2} - \frac{3n_1^5}{160x^4} \right\} \, \mathrm{d}x
\end{align}
So it remains to estimate $\mathsf{J}$. To this end, we substitute $x = \frac{n}{\log (1/t)} \left( 1 + \frac{w}{\sqrt{n}} \right)$. For the sake of brevity, we also write $c_5 = c_5(n) = 1-\delta n^{1/4}\log(1/t)$. Although $c_5$ depends on $n$ and $s$, the choice of $\delta$ and \eqref{eq:delta} tell us that $c_5$ is uniformly away from $0$ and $1$, which will be sufficient for our purpose. Then,
\begin{align*}
\mathsf{J}
&= \int_{-c_5\sqrt{n}}^{\infty} \exp\Bigg\{ \left( n \log \left( \frac{n}{\log(1/t)} \right) - n \right) + \left( n \log \left( 1 + \frac{w}{\sqrt{n}} \right) - \sqrt{n}w \right) \\
&\hspace{8em} + \frac{\alpha_1^3 n \log^2(1/t)}{12\left(1+(w/\sqrt{n})\right)^2} - \frac{3\alpha_1^5 n \log^4(1/t)}{160\left( 1 + (w/\sqrt{n}) \right)^4} \Bigg\} \frac{\sqrt{n}}{\log(1/t)} \, \mathrm{d}w.
\end{align*}
The first two grouped terms in the exponent of the integrand are easily controlled, as they originated from the `unperturbed term' $t^x x^n$. So, it suffices to study the effect of the `perturbation terms'. Taking advantage of the explicit formula of the perturbation term, one may expand
\begin{align*}
\frac{\alpha_1^3 n \log^2(1/t)}{12\left(1+(w/\sqrt{n})\right)^2}
&= \frac{\alpha_1^3 n \log^2(1/t)}{12} \left( 1 - \frac{(w/\sqrt{n})\left(2+(w/\sqrt{n})\right)}{\left(1+(w/\sqrt{n})\right)^2} \right)
\end{align*}
Plugging this back in, the integral takes the form
\begin{align*}
\mathsf{J}
= \frac{n^{n+1/2}e^{-n}}{\log^{n+1} (1/t)} \exp\left\{ \frac{\alpha_1^3}{12} n \log^2(1/t) \right\} \int_{-c_5\sqrt{n}}^{\infty} f_n(w) e^{g_n(w)} \, \mathrm{d}w,
\end{align*}
where $f_n$ is as in Lemma \ref{lemma:approx_gaussian} and $g_n$ is defined by
\begin{align*}
g_n(w) = - \frac{\alpha_1^3 \sqrt{n}\log^2(1/t) w(2 + (w/\sqrt{n}))}{12(1 + (w/\sqrt{n}))^2} - \frac{3\alpha_1^5 n \log^4(1/t)}{160\left( 1 + (w/\sqrt{n}) \right)^4}.
\end{align*}
As mentioned before, $c_5$ is uniformly away from $1$, meaning that $\sup_{n \geq 1} c_5(n) < 1$ holds. Then $g_n(w) \leq 0$ for $w \geq 0$ and $g_n(w) \leq -c_6 w$ for $w \in [-c_5\sqrt{n}, 0]$, where $c_6 > 0$ is a constant depending only on $s$. Now using the tail estimates in Lemma \ref{lemma:approx_gaussian}, we can check that
\begin{align*}
\int\limits_{\substack{w \geq -c_5\sqrt{n} \\ |w|\geq \log n} } f_n(w) e^{g_n(w)} \, \mathrm{d}w
= \mathcal{O}\left(n^{-1/4}\right).
\end{align*}
Moreover, if $|w| \leq \log n$, then using $\log(1/t) = \frac{2\sqrt{s}}{n^{1/4}} + \frac{s^{3/2}}{6n^{3/4}} + \mathcal{O}\left(n^{-5/4}\right)$,
\begin{align*}
\log f_n(w) = -\frac{w^2}{2} + \mathcal{O}\left(\frac{\log^3 n}{\sqrt{n}}\right), \qquad
g_n(w) = - \frac{2\alpha_1^3}{3} s w - \frac{3\alpha_1^5}{10} s^2 + \mathcal{O}\left(\frac{\log n}{\sqrt{n}} \right).
\end{align*}
Plugging this back to $\mathsf{L}_{(\delta n^{5/4}, \infty)}$ and utilizing Stirling's formula,
\begin{align*}
\mathsf{L}_{(\delta n^{5/4}, \infty)}
&= \frac{1}{\sqrt{2\pi}} \exp\left\{ \frac{\alpha_1^3}{3} s\sqrt{n} + \left(\frac{\alpha_1^3}{18} - \frac{3\alpha_1^5}{10}\right) s^2 + \mathcal{O}(n^{-1/4}) \right\} \\
&\hspace{4em} \times \left( \int_{|w|\leq\log n} \exp\left\{ -\frac{w^2}{2}- \frac{2\alpha_1^3}{3} s w \right\} \, \mathrm{d}w + \mathcal{O}\left(n^{-1/4}\right) \right) \\
&= \exp\left\{ \frac{\alpha_1^3}{3}s\sqrt{n} + \left( \frac{\alpha_1^3}{18} - \frac{3\alpha_1^5}{10} + \frac{2\alpha_1^6}{9} \right)s^2 + \mathcal{O}\left(n^{-1/4}\right) \right\}
\end{align*}
as required.
\end{proof}
With all the ingredients ready, we immediately obtain the proof of Theorem \ref{maintechthm}.
\begin{proof}[Proof of Theorem \ref{maintechthm}]
In the proof of Theorem \ref{thm:clt_peaks_Sn}, we checked that
\begin{align*}
\left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1} = \exp\left\{ -\frac{s}{3}\sqrt{n} + \frac{1}{45}s^2 + \mathcal{O}(n^{-1/4}) \right\}.
\end{align*}
Moreover, if we fix $\delta \in (0, \delta_0)$, by Lemma \ref{lemma:small_range}, we can choose $\rho \in (0, 1)$, independent of $n$ and $\lambda$, so that $\mathsf{L}_{[1,\delta n^{5/4}]} = \mathcal{O}(\rho^{n})$. Also, if $n$ is sufficiently large so that $\delta n^{5/4} \geq \max\{e^{c_1}, 2\}n$, Lemma \ref{lemma:large_range} gives a uniform estimate on $\mathsf{L}_{(\delta n^{5/4}, \infty)}$. Finally, if $\pi$ is chosen uniformly at random from $\mathcal{C}_{\lambda}$, then
\begin{align*}
\operatorname{\mathbb{E}}\left[ e^{-s p(\pi) / \sqrt{n}} \right]
= e^{s/\sqrt{n}} \left( \frac{2(1-t)}{(1+t)\log(1/t)} \right)^{n+1} \mathsf{L}_{[1, \infty)}.
\end{align*}
Plugging in all the estimates and taking advantage of the fact that $\mathsf{L}_{[1,\delta n^{5/4}]} = \mathcal{O}(\rho^n)$ can be absorbed into the relative error $\mathcal{O}(n^{-1/4})$, we have
\begin{align*}
\operatorname{\mathbb{E}}\left[ e^{-s p(\pi) / \sqrt{n}} \right]
= \exp\left\{ \left( -\frac{s}{3}\sqrt{n} + \frac{1}{45}s^2 \right) + \left( \frac{\alpha_1^3}{3} s\sqrt{n} + \left( \frac{\alpha_1^3}{18} - \frac{3\alpha_1^5}{10} + \frac{2\alpha_1^6}{9} \right) s^2 \right) + \mathcal{O}\left(n^{-1/4}\right) \right\}.
\end{align*}
This provides the desired bound for the term $E_{\lambda,s}$ appearing in the statement of Theorem \ref{maintechthm}, completing the proof.
\end{proof}
\section*{Acknowledgement}
Fulman was supported by Simons Foundation Grant 400528.
\section{Introduction}
\label{sec:intro}
\input{sections/introduction.tex}
\section{Methodology}
\label{sect:method}
\input{sections/method.tex}
\section{Experimental Validations}
\label{sect:results}
\input{sections/results.tex}
\section{Conclusion}
\label{sect:conc}
\input{sections/conclusion.tex}
\bibliographystyle{IEEEtran}
\subsection{Binary Extended Gaussian Image (BEGI)}
The extended Gaussian image (EGI) \cite{horn1984extended} has been extensively used as a shape descriptor for object surface normals \cite{lowekamp2002exploring,nayar1990specular,little1985extended}. It represents the distribution of surface normals in a spherical coordinate system by mapping them onto a unit sphere, also denoted as the 2-sphere or $\bm{S}^2$. Given a vector $\bm{n}=(n_x,n_y,n_z)$ in the Euclidean space $\mathbb{R}^3$, the corresponding spherical coordinates $(r, \theta, \phi)$ are computed as:
\begin{equation}\label{eq:spherical_coordinates}
\begin{gathered}
r = \sqrt{n_x^2 + n_y^2 + n_z^2} \quad \quad \theta = \arctan{\frac{\sqrt{n_x^2 + n_y^2}}{n_z}} \\
\mkern-38mu \phi = \arctan(\frac{n_y}{n_x})
\end{gathered}
\end{equation}
Physically, these coordinates represent the norm, longitude and latitude. When $\bm{n}$ is unitary, \textit{i.e.}, $r = 1$, the pair $(\theta, \phi)$ is sufficient to locate the vector on the unit sphere. To numerically compute distributions in spherical coordinates, a discrete sphere is required. We use the following discretization along longitude and latitude to represent object surface normals on $\bm{S}^2$: $\theta_j = \frac{\pi(2j + 1)}{4B}$ and $\phi_k = \frac{\pi k}{B}$, $(j, k) \in \mathbb{N}$ with the constraint $0 \leq j, k < 2B$. Here, $B$ is the bandwidth and acts like a low-pass filter: smaller $B$ values lead to coarser distributions, while higher values provide more accurate representations, as seen in Fig. \ref{fig:BEGI_bands}. While conventional EGIs use histograms to represent the distribution of surface normals on a unit sphere, our method uses binary values for each cell parameterised by the indices $(j,k)$. This corresponds to the existence of at least one surface normal in the direction of $(\theta_j, \phi_k)$. Additionally, we store the list of point coordinates of surface normals corresponding to the cell.
Fig. \ref{fig:BEGI_bands} illustrates object surface normals represented as BEGI for three different bandwidths. Finer representations on the sphere are obtained with higher bandwidth, \textit{i.e.}, $B=16$. Points belonging to adjacent cells when $B = 8$ are merged into a single cell at a lower bandwidth, $B = 4$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig2.pdf}
\caption{BEGIs with different bandwidths for the above ``Hammer'' object point cloud. The one with the highest bandwidth, \textit{i.e.}, $B=16$, provides a finer distribution, whereas the lower bandwidth ones show coarse distributions.}
\label{fig:BEGI_bands}
\end{figure}
Given a point cloud with surface normals $\mathcal{PC}_n = \{(\bm{p},\bm{n}) \in \mathbb{R}^3\times \bm{S}^2 \}$ and a bandwidth $B$, the BEGI can be constructed using \eqref{eq:begi_representation}.
\begin{equation}\label{eq:begi_representation}
\begin{gathered}
f(\theta_j, \phi_k) = v_{jk} \\
\mathcal{P}_{jk} = \left\{ p_i \in \mathbb{R}^3 \mid n_i = (\theta_j, \phi_k) \right\}
\end{gathered}
\end{equation}
With $0 \leq j, k < 2B$, $v_{jk} \in \{0, 1\}$. $v_{jk} = 1$ if there exists a surface normal $n_i =(\theta_j, \phi_k)$ in $\mathcal{PC}_n$ and $v_{jk} = 0$, otherwise. $f(\theta_j, \phi_k)$ is the BEGI function and $\mathcal{P}_{jk}$ is the BEGI point set at $(j,k)$. Any point cloud with surface normals can be reconstructed up to a degree of precision using its BEGI by specifying an appropriate value for the bandwidth $B$.
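As a concrete illustration, the construction in \eqref{eq:begi_representation} can be sketched in a few lines. The grid convention follows the discretization $\theta_j = \pi(2j+1)/(4B)$, $\phi_k = \pi k/B$ given above; the function names and the choice of storing point indices (rather than coordinates) are our own simplifications, not part of any released implementation:

```python
import math

def normal_to_cell(n, B):
    """Nearest BEGI cell (j, k) of a unit normal, on the equiangular grid
    theta_j = pi(2j+1)/(4B), phi_k = pi*k/B with 0 <= j, k < 2B."""
    nx, ny, nz = n
    theta = math.atan2(math.hypot(nx, ny), nz)    # colatitude in [0, pi]
    phi = math.atan2(ny, nx) % (2.0 * math.pi)    # azimuth in [0, 2*pi)
    j = min(max(round(2 * B * theta / math.pi - 0.5), 0), 2 * B - 1)
    k = round(B * phi / math.pi) % (2 * B)
    return j, k

def build_begi(normals, B):
    """v[j][k] = 1 iff some normal falls in cell (j, k) (the binary BEGI
    function); cells maps (j, k) to the indices of the contributing normals."""
    v = [[0] * (2 * B) for _ in range(2 * B)]
    cells = {}
    for i, n in enumerate(normals):
        j, k = normal_to_cell(n, B)
        v[j][k] = 1
        cells.setdefault((j, k), []).append(i)
    return v, cells
```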
\subsection{Spherical Harmonics}
Spherical harmonics are used to provide a decomposition of functions defined on the unit sphere \cite{muller2006spherical}. They are the spherical equivalent of the Fourier transform in Euclidean space and form an orthonormal basis on the sphere. In computer vision, they have been used for 3D model recognition and visual servoing \cite{cohen2018spherical,marturi2016image}. Any square-integrable function $f :\bm{S}^2 \rightarrow \mathbb{C} $ ($f \in L^2(\bm{S}^2)$) can be represented by its spherical harmonic expansion as:
\begin{equation}
f(\theta, \phi) = \sum_{l=0}^{l_{max}}\sum_{m=-l}^{l}\hat{f_l^m}Y_l^m(\theta, \phi)
\label{eq:spherical_harmonics}
\end{equation}
where, $l \in \mathbb{N}$ and $m \in \mathbb{Z}$ with $|m| \leq l$, $l_{max} > 0$ is the maximum degree of expansion, $Y_l^m$ is the spherical harmonic of degree $l$ and order $m$, and $\hat{f_l^m}$ is the corresponding harmonic coefficient. The spherical harmonics $Y_l^m$ are computed as:
\begin{equation}
Y_l^m(\theta, \phi) = (-1)^m\sqrt{\frac{(2l+1)(l-m)!}{4\pi(l+m)!}}P_l^m(\cos\theta)e^{im\phi}
\label{eq:spherical_harmonics_yml}
\end{equation}
In the literature, the factor $(-1)^m$ is sometimes integrated with the definition of Legendre polynomials, $P_l^m$.
The harmonic coefficients $\hat{f_l^m}$ are computed by the inner product between $f$ and $Y_l^m$ over $\bm{S}^2$:
\begin{equation}\label{eq:harmonic_coeffs}
\begin{aligned}
\hat{f_l^m} &= \int_{w \in \bm{S}^2} f(w)\overline{Y_l^m(w)} \,dw \\
&= \int_0^{2\pi}d\phi\int_0^{\pi}d\theta\sin\theta f(\theta, \phi)\overline{Y_l^m(\theta,\phi)}
\end{aligned}
\end{equation}
where, $\overline{Y_l^m}$ is the complex conjugate of $Y_l^m$. Using \eqref{eq:spherical_harmonics}, the harmonic decomposition of a BEGI function $f : \bm{S}^2 \rightarrow [0,1] \subset \mathbb{C}$ is computed. This means that the harmonic functions can be used to evaluate the correlation of two BEGIs.
The harmonic coefficients described in \eqref{eq:harmonic_coeffs} are computed numerically by using the previously presented discretization of the sphere. By setting a value for $B$, the angles $\theta$ and $\phi$ can be sampled using an equiangular $2B\times2B$ grid.
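To make \eqref{eq:harmonic_coeffs} concrete, the standalone sketch below evaluates the coefficient integral by a simple midpoint rule, with the low-degree harmonics written out explicitly. This illustrates the quadrature idea only; it is not the exact sampling-theorem quadrature used by fast spherical transform libraries, and all names are ours:

```python
import cmath
import math

def Y(l, m, theta, phi):
    """Explicit low-degree spherical harmonics (Condon--Shortley convention)."""
    if (l, m) == (0, 0):
        return 0.5 / math.sqrt(math.pi) + 0j
    if (l, m) == (1, 0):
        return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta) + 0j
    if (l, m) == (1, 1):
        return -math.sqrt(3.0 / (8.0 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)
    raise NotImplementedError

def coeff(f, l, m, N=200):
    """Midpoint-rule approximation of the coefficient integral
    int_{S^2} f(w) * conj(Y_l^m(w)) dw, with sin(theta) area weight."""
    total = 0.0 + 0j
    for a in range(N):                      # theta in (0, pi)
        theta = (a + 0.5) * math.pi / N
        row = 0.0 + 0j
        for b in range(2 * N):              # phi in (0, 2*pi)
            phi = (b + 0.5) * math.pi / N
            row += f(theta, phi) * Y(l, m, theta, phi).conjugate()
        total += math.sin(theta) * row
    return total * (math.pi / N) ** 2

# A function that is exactly Y_1^0 has coefficient ~1 at (1,0) and ~0 elsewhere.
f = lambda theta, phi: Y(1, 0, theta, phi)
assert abs(coeff(f, 1, 0) - 1) < 1e-3
assert abs(coeff(f, 0, 0)) < 1e-3
```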
\subsection{Spectral Correlation of Functions on \texorpdfstring{$\bm{\mathrm{SO}(3)}$}{}}
Let $f$ and $g$ be two functions with bandwidth $B$ defined on $\bm{S}^2$, and let $g_r = g(\mathcal{R}(\alpha, \beta, \gamma)) = g(\alpha, \beta, \gamma)$ be the rotated version of $g$ by a rotation $\mathcal{R} \in \bm{\mathrm{SO}(3)}$\footnote{$\mathcal{R}$ is taken as $zyz$ Euler angles $\alpha, \beta, \gamma$; $\alpha, \gamma \in [0, 2\pi[$ and $\beta \in [0, \pi]$.}. The correlation $\mathcal{C}(\mathcal{R})$ between $f$ and $g_r$ is computed as
\begin{equation}
\mathcal{C}(\mathcal{R}) = \int_{w \in S^2} f(w)\overline{g_r(w)} \,dw
\label{eq:correlation}
\end{equation}
\eqref{eq:correlation} evaluates the degree of similarity between $f$ and $g_r$. By computing the correlation for a set of rotations, a correlation density function is obtained as shown in Fig. \ref{fig:begi_correlation}(d).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Fig3.png}
\caption{Illustration of the BEGI correlation process. Initially, surface normals of both gripper fingers and object are converted to BEGIs. These are then correlated using spherical harmonics, resulting in a correlation density map. Here we show the values $\geq 0.5$. The cross-section along $(\alpha, \beta, \gamma) = (\pi/4, \pi/2, \pi)$ shows an in-depth view of the correlation density map. Blue dots correspond to rotations with low correlation values, while red dots are locations with high correlation.
}
\label{fig:begi_correlation}
\end{figure*}
In the harmonic domain, rotations are expressed via \textit{Wigner D-matrices}, $D_{mm'}^l$ \cite{wigner2012group}.
Consequently, the rotated pattern BEGI $g_r$ is represented in the spectral domain as:
\begin{equation} \label{eq:wigner_D_matrix}
\begin{gathered}
g(\alpha, \beta, \gamma) = \sum_{l=0}^{l_{max}} \sum_{m=-l}^{l} \sum_{m'=-l}^{l} D_{mm'}^l(\alpha, \beta, \gamma)\hat{g_l^{m'}}Y_l^m \\
\mbox{with}\quad D_{mm'}^l(\alpha, \beta, \gamma) = e^{-im\alpha}d_{mm'}^l(\beta)e^{-im'\gamma}
\end{gathered}
\end{equation}
where, $d_{mm'}^l$ are the \textit{Wigner d-functions}. Replacing $f$ and $g_r$ in \eqref{eq:correlation} by their harmonic expansions
and using the orthonormality of the spherical harmonics, \textit{i.e.}, $\int_{w \in S^2} Y_l^m(w)\overline{Y_{l'}^{m'}(w)} \,dw = 1$ if $l=l'$ and $m=m'$ and $0$ otherwise, we get:
\begin{equation}
\mathcal{C}(\mathcal{R}) = \sum_{l=0}^{l_{max}}\sum_{m=-l}^{l}\sum_{m'=-l}^{l} \hat{f_l^m}\overline{\hat{g_l^{m'}}} \overline{D_{mm'}^l(\mathcal{R})}
\label{eq:correlation_simplified}
\end{equation}
\eqref{eq:correlation_simplified} can be evaluated using FFTs defined on $\bm{\mathrm{SO}(3)}$. We refer the readers to \cite{kostelec2008ffts} for more details on spectral correlation.
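For intuition, \eqref{eq:correlation} can also be evaluated by brute force for a single rotation, without the spectral machinery. The toy sketch below represents each binary EGI by its list of occupied unit directions, builds the $zyz$ rotation matrix, and counts matched directions as a stand-in for the inner product; all names and the angular matching tolerance are hypothetical illustration choices:

```python
import math

def rot_zyz(alpha, beta, gamma):
    """3x3 rotation matrix R_z(alpha) R_y(beta) R_z(gamma) for zyz Euler angles."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [[ca*cb*cg - sa*sg, -ca*cb*sg - sa*cg, ca*sb],
            [sa*cb*cg + ca*sg, -sa*cb*sg + ca*cg, sa*sb],
            [-sb*cg,            sb*sg,            cb]]

def correlate(f_dirs, g_dirs, R, tol=0.05):
    """Fraction of directions of f matched (within tol radians) by some
    rotated direction of g -- a brute-force analogue of the correlation."""
    hits = 0
    for u in f_dirs:
        for v in g_dirs:
            Rv = [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
            dot = max(-1.0, min(1.0, sum(u[i] * Rv[i] for i in range(3))))
            if math.acos(dot) <= tol:
                hits += 1
                break
    return hits / len(f_dirs)
```

For example, if `f_dirs` is exactly `g_dirs` rotated by $\mathcal{R}$, the correlation at $\mathcal{R}$ is maximal, while other rotations score lower.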
\subsection{Contact Sampling by Spectral Correlation}
A grasp is defined by the location of each finger of the gripper on an object and a pose defining the wrist location in space. For a gripper with $N_f$ fingers, the grasp configuration $\mathcal{G}$ can be defined as:
\begin{equation}
\mathcal{G} = (p_1\cdots p_{N_f}, n_1\cdots n_{N_f}, M)
\label{eq:grasp_conf}
\end{equation}
where, $p_i, n_i \in \mathbb{R}^3$ are respectively the object contact point and surface normals for the finger $i$, $M \in \bm{\mathrm{SE}(3)}$ is the wrist pose.
Achieving static equilibrium when grasping requires that the force applied by each finger on the object stays within the friction cone \cite{nguyen1988constructing}. In this sense, we attempt to select finger positions applying minimal tangential forces on the object during grasping. This is accomplished by finding contact points maximizing the dot product between finger and object surface normal vectors. Correlating the BEGIs corresponding to object and gripper fingers
provides an indication of the location of such points in $\bm{\mathrm{SO}(3)}$. Rotations with high correlation values can then be sampled from the correlation density function, while the BEGI point set map $\mathcal{P} = \{\mathcal{P}_{jk}\}$ is used to extract, for each finger, those contact points. The following contact sampling algorithm is then devised:
\begin{itemize}
\item Given an object point cloud and a hand configuration parametrized by the gripper's joint configuration $\bm{q}$, we compute the BEGIs of point cloud and gripper fingers using \eqref{eq:begi_representation}.
\item The correlation between these two BEGIs is then computed using \eqref{eq:correlation_simplified}. This process is depicted in Fig. \ref{fig:begi_correlation}.
\item Rotations with correlation values greater than a threshold $t_{corr}$ are then sampled. The BEGI function of the gripper is then rotated and for each finger, object points located at the index $(j,k)$ of the finger are extracted.
\item The set $\mathcal{P}^r_{N_f} = \{\mathcal{P}_{(1)}, \mathcal{P}_{(2)}\cdots \mathcal{P}_{(N_f)}\}$ can then be constructed for each rotation where $\mathcal{P}_{(i)}$ is the set of points extracted for the finger $i$.
\item Contact point sets with surface normals $(p_1 \cdots p_{N_f},\allowbreak n_1\cdots n_{N_f})$ can then be extracted from $\mathcal{P}^r_{N_f}$.
\item An additional filtering step is required in order to remove the contacts that do not satisfy the force closure principle and that are out of range of the gripper kinematics.
\item Wrist poses $\{M\}$ leading to collision-free grasps can then be sampled using the kinematics of the gripper.
\end{itemize}
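The core of the sampling loop above can be sketched schematically, with the correlation map, the object BEGI point sets, and the per-rotation finger cells supplied as plain dictionaries; all names and data shapes are our own illustration (the kinematic and force-closure filters are omitted):

```python
def sample_contacts(corr, obj_cells, finger_cells_per_rot, t_corr=0.1):
    """Keep rotations whose correlation exceeds t_corr and collect, per finger,
    the object points stored at the finger's rotated BEGI cell."""
    candidates = []
    for R, c in corr.items():
        if c <= t_corr:
            continue                               # low-correlation rotation
        sets = [obj_cells.get(cell, []) for cell in finger_cells_per_rot[R]]
        if all(sets):                              # every finger needs contact points
            candidates.append((R, sets))
    return candidates
```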
\subsection{Grasp Ranking}
Grasp candidates obtained from the previous step need to be ranked such that the one with maximum likelihood is selected and executed on the robot. For this purpose, we use our previously developed LoCoMo metric \cite{adjigble2018model}, which provides a similarity score between gripper fingers and object at the contact points. Due to the generic nature of SpectGRASP, using alternative ranking metrics will not limit its performance. The benefits of using the LoCoMo metric are twofold: first, it has been proven to perform well in practice and, second, it provides a common basis to objectively evaluate the proposed method against the one presented in \cite{adjigble2018model}. LoCoMo grasp ranking $\mathcal{Q}$ is computed as in \eqref{eq:locomo}.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{figures/Fig4.png}
\caption{Successful grasps for various objects from the YCB object set using the proposed SpectGRASP method. Top row shows the computed top-ranked grasp candidates and bottom row shows the same grasps executed on the simulated robot. Detailed results can be found in the supplementary video.}
\label{fig:single_grasps}
\end{figure*}
\begin{equation}
\label{eq:locomo}
\begin{aligned}
\mathcal{M}_{i} &= \frac{1}{N_{s}} \sum_{j=1}^{\kappa}\left((2\pi)^{\kappa} | \Sigma| \right)^{\frac{1}{2}}\mathcal{N}\left(\varepsilon_j; \vec{0},\Sigma \right) \\
\mathcal{Q} &= \rho\prod_{i=1}^{N_{f}}\mathcal{M}_{i}^{\omega_{i}}
\end{aligned}
\end{equation}
where, $\mathcal{M}_{i}$ is the contact moment (CoMo) of finger $i$ computed at a contact point, $N_s$ is a normalizing term, $\kappa$ is the number of point cloud patches sampled around the location of the finger, $\mathcal{N}(\varepsilon_j; \vec{0},\Sigma )$ is the multivariate Gaussian density function centered at $\vec{0}$ with covariance $\Sigma$, $\varepsilon_j$ is the difference of gripper and object zero-moment shift vectors. Finally, $\omega_i$ and $\rho$ are weighting factors. As presented in \cite{adjigble2019assisted}, $\rho$ can be dynamically updated to re-rank grasps based on factors such as the distance between robot's end effector and closest grasps. More details on the derivation of \eqref{eq:locomo} can be found in \cite{adjigble2018model}.
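Under two simplifying assumptions (that the prefactor $((2\pi)^{\kappa} | \Sigma|)^{1/2}$ cancels the Gaussian density's normalizing constant, so each patch contributes $\exp(-\frac{1}{2}\varepsilon_j^{\top}\Sigma^{-1}\varepsilon_j)$, and that $\Sigma$ is diagonal), \eqref{eq:locomo} reduces to the short sketch below. This is our reading of the formula for illustration, not the reference implementation:

```python
import math

def como_score(eps_vectors, inv_sigma_diag, Ns):
    """CoMo M_i: normalized sum over kappa patches of exp(-0.5 eps^T Sigma^{-1} eps),
    assuming the prefactor cancels the Gaussian normalizer and Sigma is diagonal
    (inverse entries inv_sigma_diag)."""
    total = 0.0
    for e in eps_vectors:
        total += math.exp(-0.5 * sum(x * x * w for x, w in zip(e, inv_sigma_diag)))
    return total / Ns

def locomo_rank(M, omega, rho=1.0):
    """Ranking score Q = rho * prod_i M_i^{omega_i}."""
    q = rho
    for m, w in zip(M, omega):
        q *= m ** w
    return q
```

Zero shift-vector differences (a perfect local shape match) maximize each $\mathcal{M}_i$, and hence the product $\mathcal{Q}$.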
\subsection{Simulation Setup Description}
All experiments are performed using the PyBullet physics simulator. It is a Python wrapper for the Bullet physics engine, which has been proven to be an efficient open-source simulation tool for robotic models \cite{coumans2016pybullet}. Our robotic setup consists of a 7 degrees of freedom (DoF) articulated robot arm fitted with a parallel-jaw gripper.
For experiments, we have selected 16 different objects from the YCB object set \cite{calli2015ycb} that are suitable for our gripper model.
In order to make our system as realistic as possible, we have created a virtual RGB-D camera for point cloud acquisition and attached it to the robot's end-effector. We use a point cloud generated by registering multiple clouds acquired by moving the robot to four different viewpoints around the objects. This registration is merely cloud stitching as the point clouds are acquired by a camera whose exact pose in the robot base frame is known \cite{marturi2019dynamic}. Object CAD models downloaded from YCB website are used for simulations. In order to reach the grasp poses, a full 6-DoF end-effector pose controller has been implemented to generate the robot trajectory. Moreover, as previously stated, kinematically infeasible configurations are discarded from the full list of generated candidates. For all experiments, an empirically selected threshold $t_{corr}=0.1$ is used to generate grasps.
We report both single and multiple objects grasping results following the evaluation protocol described in \cite{BenchmarkERL}. Similar to the setup described in \cite{BenchmarkERL}, we have considered a circular task-space with a radius of $25~\mathrm{cm}$ within the robot workspace.
\aboverulesep = 0.2mm \belowrulesep = 0.2mm
\begin{table*}
\smaller
\caption{Single object grasping results for both proposed and LoCoMo-based methods at test location-1.}
\label{tab:single_test_l1}
\centering
\begin{threeparttable}
\begin{tabularx}{\textwidth}{>{\hsize=0.14\hsize\raggedright\arraybackslash}X|
>{\hsize=0.13\hsize\centering\arraybackslash}X|
>{\hsize=0.08\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.13\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X}
\toprule
\multirow{2}{*}{\textbf{Object}} & \multicolumn{5}{c|}{\textbf{SpectGRASP}} & \multicolumn{5}{c}{\textbf{LoCoMo}} \\
\cmidrule{2-11}
& \textbf{Total Grasps\tnote{1} (Feasible)} & \textbf{Time [s]} & \textbf{Lift (\%)} & \textbf{Rotation (\%)} & \textbf{Shake (\%)} & \textbf{Total Grasps (Feasible)} & \textbf{Time [s]} & \textbf{Lift (\%)} & \textbf{Rotation (\%)} & \textbf{Shake (\%)} \\
\midrule
Cleanser & 1995 (265) & 2.55 &100 &100 &100 &7916 (379) &9.93 &100 &100 &100\\
Cup & 790 (116) &0.42 &100 &100 &100 &3421 (90) &1.66 &100 &100 &100\\
F. Screwdriver &2016 (125) &0.86 &100 &100 &100 &13251 (1172) &4.60 &100 &0 &NA\tnote{2}\\
Golf Ball &3514 (119) &0.98 &100 &100 &100 &30479 (1684) &9.63 &100 &100 &100\\
Hammer &3173 (537) &1.58 &100 &100 &100 &24179 (4213) &10.49 &100 &0 &NA\\
Large Clamp &47 (12) &0.15 &100 &0 &NA &1361 (173) &1.93 &100 &0 &NA\\
Mug &72 (4) &0.31 &100 &0 &NA &261 (9) &4.19&100 &33 &0 \\
P. Screwdriver &1828 (5) &0.64 &100 &100 &100 &14281 (276) &4.7 &100 &0 &NA\\
Potted Meat &300 (6) &0.34 &100 &100 &100 &1046 (28) &0.92 &100 &100 &100\\
Power Drill &2410 (300) &1.48 &100 &0 &NA &28503 (1626) &17.62 &100 &100 &0\\
Racquet Ball &614 (65) &0.53 &100 &0 &NA &16064 (2821) &10.26 &100 &100 &100\\
Softball &7406 (52) &2.07 &100 &100 &100 &64510 (1576) &20.25 &0 &NA &NA\\
Strawberry &5446 (16) &1.57 &100 &100 &100 &36521 (639) &12.16 &100 &100 &0\\
Woodblock &6001 (380) &3.33 &100 &100 &100 &13269 (782) &10.82 &100 &100 &100\\
Mustard &4638 (440) &3.66 &100 &100 &100 &13438 (1647) &21.56 &100 &100 &100\\
\bottomrule
\end{tabularx}
\begin{tablenotes}
\item[1] Total number of grasps computed and in parenthesis are the total feasible grasps after IK filtering.
\item[2] NA -- Test not performed as previous test failed.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\aboverulesep = 0.2mm \belowrulesep = 0.2mm
\begin{table*}
\smaller
\caption{Single object grasping results for both proposed and LoCoMo-based methods at test location-2.}
\label{tab:single_test_l2}
\centering
\begin{tabularx}{\textwidth}{>{\hsize=0.14\hsize\raggedright\arraybackslash}X|
>{\hsize=0.13\hsize\centering\arraybackslash}X|
>{\hsize=0.08\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.13\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X}
\toprule
\multirow{2}{*}{\textbf{Object}} & \multicolumn{5}{c|}{\textbf{SpectGRASP}} & \multicolumn{5}{c}{\textbf{LoCoMo}} \\
\cmidrule{2-11}
& \textbf{Total (Feasible)} & \textbf{Time} & \textbf{Lift} & \textbf{Rotation} & \textbf{Shake} & \textbf{Total (Feasible)} & \textbf{Time} & \textbf{Lift} & \textbf{Rotation} & \textbf{Shake} \\
\midrule
Cleanser & 146 (33) &0.32 &100 &100 &100 &8972 (1921) &10.93 &100 &100 &100\\
Cup & 617 (9) &0.43 &100 &100 &100 &1525 (10) &1.1 &100 &100 &0\\
F. Screwdriver & 1869 (106) &0.75 &100 &0 &NA &13210 (694) &4.65 &100 &0 &NA\\
Golf Ball & 5674 (321) &1.46 &100 &100 &100 &38481 (1891) &12.64 &100 &100 &100\\
Hammer & 2678 (31) &1.25 &100 &100 &100 &14505 (231) &7.94 &100 &100 &100\\
Large Clamp & 161 (5) &0.24 &100 &0 &NA &1897 (69) &1.86 &100 &0 &NA\\
Mug & 90 (14) &0.41 &0 &NA &NA &312 (38) &4.3 &100 &100 &100\\
P. Screwdriver& 1502 (228) &0.58 &100 &100 &100 &11995 (1108) &4.02 &100 &100 &100\\
Potted Meat & 118 (8) &0.33 &100 &100 &100 &1203 (94) &1.06 &0 &NA &NA\\
Power Drill & 2767 (577) &1.57 &100 &100 &100 &28078 (1869) &20.6 &100 &0 &NA\\
Racquet Ball & 505 (79) &0.45 &100 &100 &100 &14425 (1084) &8.64 &0 &NA &NA\\
Softball & 8489 (327) &2.45 &100 &100 &100 &68615 (4286) &21.58 &100 &0 &NA\\
Strawberry & 5164 (419) &1.54 &100 &100 &100 &34740 (1613) &11.47 &0 &NA &NA\\
Woodblock & 5152 (707) &2.93 &100 &100 &100 &12262 (1155) &11.87 &100 &100 &100\\
Mustard & 3428 (928) &3.35 &100 &100 &100 &13080 (1865) &21.68 &100 &100 &100\\
\bottomrule
\end{tabularx}
\vspace{-5pt}
\end{table*}
\subsection{Grasping Individual Objects}
For this experiment, two different test locations on the circular task-space are selected. For location-1, the objects are placed at the origin of the circular task-space, \textit{i.e.}, the point on the ground plane onto which the centre of the tool is projected when the robot is in the 90-90 configuration. Refer to \cite{BenchmarkERL} for more details. For location-2, we move the object $25~\mathrm{cm}$ in the negative-$y$ direction and apply a rotation of $-90^{\circ}$ around the $z$-axis. After candidate generation, filtering, and sequential ranking, the top-ranked grasp is executed on the robot. A force of $50~\mathrm{N}$ is used to grasp and hold the object. Thereafter, a series of three tests, as in \cite{BenchmarkERL}, is performed to check grasp stability: (i) the lift test, where the object is lifted $20~\mathrm{cm}$ off the table at a speed of $10~\mathrm{cm/s}$; (ii) the rotation test, where the object is rotated $90^{\circ}$ and $-90^{\circ}$ around the $y$-axis at a speed of $45~\mathrm{deg/s}$; and (iii) the shake test, where the robot shakes the object in a sinusoidal pattern ($0.25~\mathrm{m}$ amplitude, $10~\mathrm{m/s^2}$ acceleration) for 10 seconds. These tests are performed sequentially, \textit{i.e.}, if any of them fails, the grasp is considered a failure and the subsequent tests are not performed. A test is considered a failure if the object slips out of the fingers. For each object, with both the proposed and LoCoMo-based methods, we repeated the test 3 times at each location and report the average of those three trials.
Fig.~\ref{fig:single_grasps} shows screenshots of successful grasps generated and executed for 7 different objects. Tables \ref{tab:single_test_l1} and \ref{tab:single_test_l2} summarise the results for test locations 1 and 2, respectively. The results show that the proposed method outperforms the LoCoMo-based grasp planner in terms of grasp generation time: SpectGRASP takes on average $\bm{1.28~\mathrm{seconds}}$ to generate grasps, which is $\bm{7\times}$ faster than the LoCoMo-based planner, which takes $9.5~\mathrm{seconds}$ on average. Although SpectGRASP generates fewer grasp candidates than LoCoMo, it achieves higher success rates than the latter. For location 1, the average success rates of SpectGRASP for the lifting, rotation and shaking tests are $\bm{100\%}$, $\bm{73\%}$ and $\bm{73\%}$, respectively, while for LoCoMo they are $93\%$, $62\%$ and $47\%$. For location 2, the average success rates of SpectGRASP are $\bm{93\%}$, $\bm{80\%}$ and $\bm{80\%}$, respectively, while for LoCoMo they are $80\%$, $53\%$ and $47\%$. These results suggest that SpectGRASP generates more stable grasps than the LoCoMo-based method.
The difference in the number of grasps generated is due to the fact that LoCoMo samples contact points from all possible point pairs of a given point cloud. This yields a large number of grasp candidates, but at a higher generation time. Nevertheless, in practical real-world applications only the top-ranked grasps are considered, making SpectGRASP the more suitable grasp planner. It is worth noting that for both methods the success rates of the rotation and shake tests are lower than that of the lifting test. This is mainly due to the inaccuracy of the physics simulator in modelling interactions between objects. We believe these success rates would improve on a real robot system; see, e.g., the results for the LoCoMo method in \cite{adjigble2018model} and \cite{BenchmarkERL}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig5.png}
\caption{Grasping objects in clutter with proposed method. Top row shows generated top-ranked grasps and the bottom row shows the same grasps executed on the robot. Objects are grasped one by one until either the scene is cleared, or the method returns no feasible grasps. Detailed results can be found in the supplementary video.}
\label{fig:clutter_grasps}
\end{figure*}
\subsection{Grasping Objects in Clutter}
The second set of experiments evaluates the performance of the proposed method in clearing cluttered scenes containing multiple objects. For these tests, 6 different objects are randomly positioned, and we evaluate the performance on 3 such randomly generated scenes. The task is to successively grasp, lift, rotate and shake one object at a time until the scene is successfully cleared. In each iteration, a new scene point cloud is captured, from which the grasps are generated. As before, the generated grasps are filtered and ranked, and the top grasp is executed by the robot. The experiment is repeated until all objects are successfully removed or the algorithm fails to find any feasible grasp. For these tests, we also report the results following the protocol in \cite{BenchmarkERL} for groups of objects.
Fig.~\ref{fig:clutter_grasps} shows successive images from this test, where a cluttered scene is cleared using the proposed method. Table \ref{tab:clutter_table} summarises the results for clearing three different clutters. SpectGRASP successfully cleared all 3 scenes. The pickup order varies due to the random positioning of the objects, but in all cases the first 4 objects were picked up on the first attempt. For the 6\textsuperscript{th} object of scene 2, the ``Mustard'' bottle was lifted successfully but fell back on the table during the rotation test; a second attempt was required to complete this trial. Likewise, two attempts were needed to clear the ``Screwdriver'' in scene 3. Overall, across all trials, it took on average $18.02~\mathrm{seconds}$ to generate the grasps required to clear a scene of 6 objects. This corresponds to $\approx3~\mathrm{seconds}$ per object, slightly higher than in the single-object case, because cluttered scenes are composed of multiple objects and therefore yield more grasp candidates, leading to longer generation times.
\setlength{\textfloatsep}{5pt }
\aboverulesep = 0.2mm \belowrulesep = 0.2mm
\begin{table}
\smaller
\caption{Clutter clearance results with proposed method.}
\label{tab:clutter_table}
\centering
\begin{threeparttable}
\begin{tabularx}{\columnwidth}{>{\hsize=0.07\hsize\centering\arraybackslash}X|
>{\hsize=0.18\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.07\hsize\centering\arraybackslash}X|
>{\hsize=0.1\hsize\centering\arraybackslash}X|
>{\hsize=0.07\hsize\centering\arraybackslash}X}
\toprule
Scene & Object & Pickup order & Time (s) & Lift & Rotation & Shake\\
\midrule
\multirow{6}{*}{\textbf{1}} & Woodblock& 2 & 3.96 &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} \\
& Golfball & 4 & 2.37 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Dishwash & 1 & 7 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Lemon & 3 & 2.67 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Screwdriver& 5 & 0.75 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Mustard& 6 & 0.14 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
\midrule
\multicolumn{3}{r}{Total grasp generation time: }& \multicolumn{4}{l}{16.89 seconds}\\
\midrule
\multirow{6}{*}{\textbf{2}} & Woodblock & 1 & 5.18 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Golfball & 3 & 2.11 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Dishwash & 2 & 3.4 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Lemon & 4 & 1.09 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Screwdriver & 5 & 0.63 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Mustard & 6 & 0.25 & \textcolor{Plum}{\small{\ding{52}}} \tnote{3} & \textcolor{Plum}{\small{\ding{52}}} & \textcolor{Plum}{\small{\ding{52}}} \\
\midrule
\multicolumn{3}{r}{Total grasp generation time: }& \multicolumn{4}{l}{12.66 seconds}\\
\midrule
\multirow{6}{*}{\textbf{3}} & Woodblock & 2 & 5.59 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} \\
& Golfball & 6 & 1.66 &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} \\
& Dishwash & 1 & 6.45 & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
& Lemon & 4 & 3.34 &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} \\
& Screwdriver & 5 & 2.96 & \textcolor{Plum}{\small{\ding{52}}} & \textcolor{Plum}{\small{\ding{52}}} & \textcolor{Plum}{\small{\ding{52}}} \\
& Mustard & 3 & 4.51 &\textcolor{Green}{\small{\ding{52}}} &\textcolor{Green}{\small{\ding{52}}} & \textcolor{Green}{\small{\ding{52}}} \\
\midrule
\multicolumn{3}{r}{Total grasp generation time: }& \multicolumn{4}{l}{24.51 seconds}\\
\midrule
\end{tabularx}
\begin{tablenotes}
\item[3] Purple tick mark: Success after second try.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Discussion}\label{sec:discussions}
\subsubsection*{\bfseries Complexity analysis}
Experimental results demonstrate the efficiency of SpectGRASP method in generating grasps for various objects with a variety of shapes. This is mainly supported by rapid computation of the correlation density map that is used to sample contact points. For a parallel jaw gripper, naively sampling contact points on a point cloud with $N$ points results in a $\mathcal{O}(N^2)$ complexity. As the number of points increases, the cost of sampling contacts becomes high. Also, this complexity drastically increases when hands with more fingers are used.
SpectGRASP, however, uses the Fourier transform on $\bm{\mathrm{SO}(3)}$, which has a complexity of $\mathcal{O}(B^4)$, with $B$ being the bandwidth \cite{kostelec2008ffts}. %
Suppose $N_r$ rotations are sampled from the correlation density map and at most $K$ points are extracted from each BEGI cell; then, in the worst case, sampling contacts has an $\mathcal{O}(N_rK^{N_f})$ complexity, with $N_f$ being the number of fingers. Notably, the complexity of SpectGRASP does not depend directly on the number of points in the point cloud, so the method remains equally efficient even for dense point clouds. Additionally, the value of $B$ can be adjusted to tune the grasp generation time. Lower values of $B$ lead to a higher number of points per cell but a lower number of possible rotations, which is desirable for applications requiring many grasp candidates. Higher values of $B$ tend to produce fewer points per cell with a higher number of rotations, leading to fewer grasp candidates with more precise matching; applications such as dynamic grasping could greatly benefit from this type of fine matching. The contact sampling complexity can be further reduced by clustering the points in each BEGI cell.
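To make the scaling argument concrete, the following minimal counting sketch contrasts the two bounds; all numbers (the point-cloud sizes, $N_r=100$, $K=8$, $N_f=2$) are illustrative assumptions, not measurements from the experiments.

```python
def naive_pairs(n_points: int) -> int:
    """Candidate contact pairs for a parallel-jaw gripper when naively
    sampling all point pairs of an N-point cloud: O(N^2)."""
    return n_points * (n_points - 1) // 2

def begi_candidates(n_rotations: int, k_per_cell: int, n_fingers: int) -> int:
    """Worst-case contact combinations for the BEGI-based scheme:
    O(N_r * K^N_f), independent of the point-cloud size N."""
    return n_rotations * k_per_cell ** n_fingers

# Doubling the cloud density roughly quadruples the naive cost ...
assert naive_pairs(20000) / naive_pairs(10000) > 3.9
# ... while the BEGI-based bound is unaffected by cloud density.
assert begi_candidates(100, 8, 2) == 6400
```

The second function makes explicit why dense point clouds do not slow the method down: only the sampled rotations and the per-cell point budget enter the bound.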
\subsubsection*{\bfseries Applicability to real systems} In this work, we validated our approach through a substantial number of virtual experiments with a simulated 7-DoF robot and a parallel-jaw gripper grasping a variety of objects. The selected robot and gripper configurations match the real KUKA iiwa robot arm and Schunk PG-70 gripper models, the same setup as presented in \cite{adjigble2018model}. The full pipeline presented in this paper can be ported directly to the real robot, as it does not assume away any kinematic constraints: grasp configurations for which there is no IK solution are discarded. Our method relies only on the point clouds generated by the simulated virtual camera mounted on the robot's wrist. This camera was constructed using the parameters of a real depth camera, so the virtually generated point clouds are similar to those of a real-world depth sensor. As previously stated, our autonomous grasping technique does not rely on any prior knowledge of the object, e.g., a CAD model. Furthermore, our approach does not use any information about a given unknown object that would not be available in a real-world scenario. We only assume that the object is placed in a reachable location within the robot's workspace, is viewable from the camera, and is of a size the gripper can manipulate. Moreover, our approach is agnostic to the robot system used, provided that: (i) there is a scene- or robot-mounted vision system to acquire the point cloud; (ii) the robot's end-effector is a multi-finger hand with at least two fingers; and (iii) the hand kinematic model is known. Hence, we believe that our method is applicable to real systems.
% --- arXiv:2107.12416 ---
\section{Introduction}
Zeroth-order optimization (ZOO) algorithms solve optimization problems with implicit function information by estimating gradients via zeroth-order observations (function evaluations) at judiciously chosen points \cite{Spall05}. They have been extensively employed as model-free reinforcement learning (RL) algorithms for black-box off-line optimization \cite{Nesterov17}, online optimization \cite{Flaxman04}, neural network training \cite{Chen17}, and model-free linear quadratic regulator (LQR) problems \cite{Fazel18,Malik19,Furieri20,Mohammadi20}. Although ZOO algorithms are applicable to a broad range of problems, they suffer from high variance and slow convergence, especially when the problem is of a large size. To improve the performance of ZOO algorithms, besides the classical one-point feedback \cite{Flaxman04,Malik19}, other estimation schemes have been proposed, e.g., averages of multiple one-point feedbacks \cite{Fazel18}, two-point feedback \cite{Agarwal10,Nesterov17,Chen17,Shamir17,Malik19}, averages of multiple two-point feedbacks \cite{Mohammadi20}, and one-point residual feedback \cite{Zhang20}. These methods have been shown to reduce variance and increase the convergence rate efficiently. However, all of them are still based on evaluating the global optimization objective, which may cause significant errors and high variance in the gradient estimates for large problems with high-dimensional variables.
Multi-agent networks are among the most representative systems that have broad applications and usually induce large-size optimization problems \cite{sharma2021survey}. In recent years, distributed zeroth-order convex and non-convex optimization on multi-agent networks has been extensively studied, e.g., \cite{Sun19,Hajine19,Gratton20,Tang20,Akhavan21}, all of which decompose the original cost function into multiple functions and assign them to the agents. Unfortunately, the variable dimension for each agent is the same as that of the original problem. As a result, the agents are expected to eventually reach consensus on the optimization variables. Such a setting indeed divides the global cost into local costs, but the high variance of the gradient estimate caused by the high variable dimension remains unchanged. In \cite{Li19}, each agent has a lower-dimensional variable. However, to estimate the local gradient, the authors employ a consensus protocol for each agent to estimate the global cost function value. Such a framework maintains a gradient estimate similar to that of the centralized ZOO algorithm, which still suffers from high variance when the network is of a large scale.
One class of efficient algorithms for large-size optimization problems is block coordinate descent (BCD) algorithms due to the low per-iteration cost \cite{nesterov2012efficiency,peng2016coordinate}. The underlying idea is to solve convex or non-convex optimizations by updating only partial components (one block) of the entire variable in each iteration, \cite{canutescu2003cyclic,nesterov2012efficiency,richtarik2014iteration,wright2015coordinate,peng2016coordinate,xu2017globally}. Recently, zeroth-order BCD algorithms have been proposed to craft adversarial attacks for neural networks \cite{Chen17,cai2021zeroth}, where the convergence analysis was given under the convexity assumption on the objective function in \cite{cai2021zeroth}. A non-convex optimization was addressed by a zeroth-order BCD algorithm in \cite{yu2019zeroth}, whereas the feasible domain was assumed to be convex.
In this paper, we consider a general stochastic locally smooth non-convex optimization with a possibly non-convex feasible domain, and propose a distributed ZOO algorithm based on the BCD method. In contrast to the aforementioned settings, we design local cost functions involving only partial agents by utilizing the network structure embedded in the optimization objective over a multi-agent network. This is reasonable in model-free control problems because, although the dynamics of each agent is unknown, the coupling structure between different agents may be known, and the control objective with an inherent network structure is usually artificially designed. Our algorithm allows each agent to update its optimization variable by evaluating a local cost independently, without requiring agents to reach consensus on their optimization variables. This formulation is applicable to distributed LQR control and many multi-agent cooperative control problems such as formation control \cite{Oh15}, network localization \cite{fang2020angle} and cooperative transportation \cite{ebel2018distributed}, where the variable of each agent converges to a desired local set corresponding to a part of the minimizer of the global cost function. Our main contributions are listed as follows.
\begin{itemize}
\item We propose a distributed accelerated zeroth-order algorithm with asynchronous sample and update schemes (Algorithm \ref{alg:as}). The multi-agent system is divided into multiple clusters, where agents in different clusters evaluate their local costs asynchronously, and the evaluations for different agents are completely independent. It is shown that with appropriate parameters, any convergent point of the algorithm is an approximate stationary point of the optimization with high probability. The sample complexity is polynomial in the reciprocal of the convergence error, and is dominated by the number of clusters (see Theorem \ref{th as}). Compared to ZOO based on global cost evaluation, our algorithm produces gradient estimates with lower variance and thus converges faster, thanks to the local cost design (see the variance analysis in Subsection \ref{subsec: as variance}).
\item We further consider a model-free multi-agent distributed LQR problem, where multiple agents with decoupled dynamics\footnote{Our algorithm applies to multi-agent systems with coupled dynamics. Please refer to Remark \ref{re coupled dynamics} for more details.} cooperatively minimize a cost function coupling all the agents. The optimal LQR controller is desired to satisfy a structural constraint that the control of each agent involves only its neighbors. We describe the coupling in the cost function and the structural constraint by a \emph{cost graph} and a \emph{sensing graph}, respectively. To solve the problem in a distributed fashion, we design a local cost function for each agent, and introduce a \emph{learning graph} that describes the required agent-to-agent interaction relationship for local cost evaluations. Specifically, the local cost function for each agent is designed such that its gradient w.r.t. the local control gain is the same as that of the global cost function. The learning graph is determined by the cost and sensing graphs and is needed only during the learning stage. By implementing our asynchronous RL algorithm, the agents optimize their local cost functions and are able to learn a distributed controller corresponding to a stationary point of the optimization distributively. The design of local costs and the learning graph plays an important role in enabling distributed learning. The learning graph is typically denser than the cost and sensing graphs, which is a trade-off for not using a consensus algorithm.
\end{itemize}
The comparisons between our work and existing related results are listed as follows.
\begin{itemize}
\item In comparison to centralized ZOO \cite{Flaxman04,Agarwal10,Nesterov17,Chen17,Zhang20} and distributed ZOO \cite{Sun19,Hajine19,Gratton20,Tang20,Akhavan21}, we reduce the variable dimension for each agent and construct local costs involving only partial agents. Such a framework avoids the influence of the convergence error and convergence rate of a consensus algorithm, and results in reduced variance and high scalability to large-scale networks. Moreover, since the agents do not need to reach agreement on any global index, our algorithm is beneficial for privacy preservation.
\item In comparison to ZOO algorithms for LQR, e.g., \cite{Fazel18,Malik19,Mohammadi20}, our problem has a structural constraint on the desired control gain, thus the cost function is no longer gradient dominated.
\item Existing BCD algorithms in \cite{canutescu2003cyclic,nesterov2012efficiency,richtarik2014iteration,wright2015coordinate,peng2016coordinate,xu2017globally} always require global smoothness of the objective function. When only zeroth-order information is available, existing BCD algorithms require convexity of either the objective function \cite{cai2021zeroth} or the feasible domain \cite{yu2019zeroth}. In our work, we propose a zeroth-order BCD algorithm for a stochastic non-convex optimization with a locally smooth objective and a possibly non-convex feasible set. A clustering strategy and a cluster-wise update scheme are proposed as well, which further significantly improve the algorithm convergence rate.
\item The distributed LQR problem has been shown to be very complicated because both the objective function and the feasible set are non-convex, and the number of locally optimal solutions can grow exponentially in system dimension \cite{Feng19,Bu19to}. Existing results for distributed LQR include model-based centralized approaches \cite{Borrelli08,Bu19}, model-free centralized approaches \cite{Furieri20,Jing21}, and model-free distributed approaches \cite{Li19,Jingtcns}. All of these results focus on seeking a sub-optimal distributed controller for an approximate problem or solving for a stationary point of the optimization via policy gradient. Our algorithm is a derivative-free distributed policy gradient algorithm, which seeks a stationary point of the problem.
\item The sample complexity of our algorithm is higher than that in \cite{Fazel18,Malik19,Mohammadi20} because the optimization problems in all of them have the gradient domination property, and \cite{Fazel18,Malik19} assume that the infinite horizon LQR cost can be obtained perfectly. The sample complexity of our algorithm for LQR is similar to that in \cite{Li19}. However, two consensus algorithms are employed in \cite{Li19} for distributed sampling and the global cost estimation, respectively, which may slow down the gradient estimation process, and are not needed in our algorithms. Moreover, the gradient estimation in \cite{Li19} is essentially based on global cost evaluation, while our algorithms are based on local cost evaluation, which are significantly more scalable to large-scale networks.
\item Distributed RL has also been extensively studied via a Markov decision process (MDP) formulation, e.g., \cite{kar2013cal,omidshafiei2017deep,lowe2017multi,zhang2018fully,chen2021communication}. However, in comparison to our work, they require more information for each agent or are essentially based on global cost evaluation. More specifically, in \cite{kar2013cal,lowe2017multi,zhang2018fully,chen2021communication}, the global state is assumed to be available for all the agents. It has been shown in \cite{lowe2017multi} that naively applying policy gradient methods to multi-agent settings exhibits high variance gradient estimates. \cite{omidshafiei2017deep} describes a distributed learning framework with partial observations, but only from an empirical perspective.
\end{itemize}
This paper is structured as follows. Section \ref{sec:problem} describes the optimization problem studied in this paper. Section \ref{sec: as} presents our zeroth-order BCD algorithm and shows the convergence result. Section \ref{sec MAS} introduces the model-free multi-agent LQR problem, and shows how it can be applied with the proposed zeroth-order BCD algorithm. Section \ref{sec: sim} shows the simulation results of a formation control example. Section \ref{sec: con} concludes the whole paper.
\textbf{Notations.} Given a function $f(x)$ with $x=(x_1^{\top},...,x_N^{\top})^{\top}$, denote $f(y_i,x_{-i})=f(x')$ where $x'=({x'}_1^{\top},...,{x'}_N^{\top})^\top$, $x'_i=y_i$ and $x'_j=x_j$ for all $j\neq i$. Let $\mathbb{R}^{n}$ and $\mathbb{R}_{\geq0}$ denote the $n$-dimensional Euclidean space and the set of nonnegative real numbers, respectively. The norm $||\cdot||$ denotes the $\ell_2$-norm for vectors and the spectral norm for matrices; $||\cdot||_F$ is the Frobenius norm. The pair $\mathcal{G}=(\mathcal{V},\mathcal{E})$ denotes a directed or undirected graph, where $\mathcal{V}$ is the set of vertices and $\mathcal{E}\subset\mathcal{V}^2$ is the set of edges. When $\mathcal{G}$ is undirected, $(i,j)\in\mathcal{E}$ if and only if $(j,i)\in\mathcal{E}$. A path from vertex $i$ to vertex $j$ in $\mathcal{G}$ is a sequence of pairs $(i,i_1)$, $(i_1,i_2)$, ..., $(i_s,j)$, each of which belongs to $\mathcal{E}$. The symbol $\mathbf{E}[\cdot]$ denotes expectation, and $\mathrm{Cov}(x)$ denotes the covariance matrix of a vector $x$. Given a set $S$, $\text{Uni}(S)$ represents the uniform distribution on $S$. Given two matrices $X$ and $Y$, let $\langle X,Y\rangle=\text{trace}(X^{\top}Y)$ be the Frobenius inner product.
\section{Stochastic ZOO via Multi-Agent Networks}\label{sec:problem}
In this section, we review the zeroth-order stochastic non-convex optimization problem and describe the optimization we aim to solve in this paper.
\subsection{Stochastic ZOO}
Consider a general optimization problem
\begin{equation}\label{optimization}
\mathop{\text{minimize}}\limits_{x\in\mathcal{X}}f(x),
\end{equation}
where \begin{equation}
f(x)=\mathbf{E}_{\xi\sim\mathcal{D}}[h(x,\xi)],
\end{equation}
where $x\in\mathbb{R}^q$ is the optimization variable, $\mathcal{X}=\{x\in\mathbb{R}^q:f(x)<\infty\}$ is the feasible domain (possibly non-convex), $\xi\in\mathbb{R}^p$ is a random variable with distribution $\mathcal{D}$, which may model noisy data in real applications, and $h(\cdot,\cdot): \mathbb{R}^q\times\mathbb{R}^{p}\rightarrow\mathbb{R}_{\geq0}$ is a mapping such that the following assumptions hold:
\begin{assumption}\label{as Lips f}
The function $f$ is $(\lambda_x,\zeta_x)$ locally Lipschitz in $\mathcal{X}$, i.e., for any $x\in\mathcal{X}$, there exist $\lambda_x$, $\zeta_x>0$ such that if $||x'-x||\leq\zeta_x$ for $x'\in\mathbb{R}^q$, then
\begin{equation}
|f(x')-f(x)|\leq\lambda_x||x'-x||.
\end{equation}
\end{assumption}
\begin{assumption}\label{as Lips g}
The function $f$ has a $(\phi_x,\beta_x)$ locally Lipschitz gradient in $\mathcal{X}$, i.e., for any $x\in\mathcal{X}$, there exist $\beta_x, \phi_x>0$, such that if $||x'-x||\leq \beta_x$ for $x'\in\mathbb{R}^q$, it holds that
\begin{equation}
||\nabla_x f(x')-\nabla_xf(x)||\leq \phi_x||x'-x||.
\end{equation}
\end{assumption}
The stochastic zeroth-order algorithm aims to solve the stochastic optimization (\ref{optimization}) in the bandit setting, where one only has access to a noisy observation, i.e., the value of $h(x,\xi)$, while the detailed form of $h(x,\xi)$ is unknown. Since the gradient of $f(x)$, i.e., $\nabla_{x}f(x)$, can no longer be computed directly, it will be estimated based on the function value.
In the literature, the gradient can be estimated by the noisy observations with one-point feedback \cite{Flaxman04,Fazel18,Malik19} or two-point feedback \cite{Agarwal10,Nesterov17,Chen17}. In what follows, we introduce the one-point estimation approach, based on which we will propose our algorithm. The work in this paper is trivially extendable to the two-point feedback case.
Define the unit ball and the $(d-1)$-dimensional sphere (surface of the unit ball) in $\mathbb{R}^d$ as
\begin{equation}
\mathbb{B}_d=\{y\in\mathbb{R}^d:||y||\leq1\},
\end{equation}
and
\begin{equation}
\mathbb{S}_{d-1}=\{y\in\mathbb{R}^d:||y||=1\},
\end{equation}
respectively. Given a sample $v\sim\uni(\mathbb{B}_{q})$, define
\begin{equation}
\hat{f}(x)=\mathbf{E}_{v\in\mathbb{B}_{q}}[f(x+r v)],
\end{equation}
where $r>0$ is called the smoothing radius.
It is shown in \cite{Flaxman04} that
\begin{equation}\label{central g}
\nabla_{x}\hat{f}(x)=\mathbf{E}_{u\in\mathbb{S}_{q-1}}[f(x+r u)u]q/r,
\end{equation}
which implies that $f(x+ru)uq/r$ is an unbiased estimate for $\nabla_{x}\hat{f}(x)$. Based on this estimation, a first-order stationary point of (\ref{optimization}) can be obtained by using gradient descent.
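As an illustration of this estimator, the sketch below runs plain gradient descent on the one-point estimate $f(x+ru)\,u\,q/r$ for a toy quadratic objective. The objective, step size, smoothing radius, and iteration count are assumptions chosen for the example, not parameters from the analysis in this paper.

```python
import numpy as np

def one_point_grad(f, x, r, rng):
    """One-point zeroth-order estimate of the smoothed gradient,
    g = (q/r) * f(x + r*u) * u, with u uniform on the sphere S_{q-1}."""
    q = x.size
    u = rng.standard_normal(q)
    u /= np.linalg.norm(u)                 # u ~ Uni(S_{q-1})
    return (q / r) * f(x + r * u) * u

# Toy smooth objective (an illustrative assumption): f(x) = ||x - x*||^2.
x_star = np.array([1.0, -2.0, 0.5])
f = lambda z: np.sum((z - x_star) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(50000):
    # Descend along the noisy one-point estimate; no analytic gradient used.
    x = x - 1e-3 * one_point_grad(f, x, r=0.5, rng=rng)

assert np.linalg.norm(x - x_star) < 0.2
```

For this quadratic objective, $\nabla\hat f=\nabla f$ exactly, so the smoothing radius $r$ only trades off the variance of the estimate; in general a smaller $r$ reduces the smoothing bias at the cost of higher variance, which is why the one-point scheme typically needs many iterations and small steps.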
\subsection{Multi-Agent Stochastic Optimization}
In this paper, we consider the scenario where optimization (\ref{optimization}) is formulated based on a multi-agent system with $N$ agents $\mathcal{V}=\{1,...,N\}$. More specifically, let $x=(x_1^{\top},...,x_N^{\top})^{\top}$, where $x_i\in\mathbb{R}^{q_i}$ is the state of each agent $i$, and $\sum_{i=1}^Nq_i=q$. Similarly, let $\xi=(\xi_1^{\top},...,\xi_N^{\top})^{\top}$, where $\xi_i\in\mathbb{R}^{p_i}$ denotes the noisy exploratory input applied to agent $i$, and $\sum_{i=1}^Np_i=p$. Different agents may have different dimensional states and noise vectors, and each agent $i$ only has access to a local observation $h_i(x_{\mathcal{N}_i},\xi_{\mathcal{N}_i})$, $i=1,...,N$. Here $x_{\mathcal{N}_i}=\{x_j,j\in\mathcal{N}_i\}$ and $\xi_{\mathcal{N}_i}=\{\xi_j,j\in\mathcal{N}_i\}$ are the vectors composed of the state and noise information of the agents in $\mathcal{N}_i$, respectively, where $\mathcal{N}_i=\{j\in\mathcal{V}:(j,i)\in\mathcal{E}\}$ contains its neighbors determined by the graph\footnote{Here graph $\mathcal{G}$ can be either undirected or directed. In Section \ref{sec MAS} where our algorithm is applied to the LQR problem, graph $\mathcal{G}$ corresponds to the learning graph.} $\mathcal{G}=(\mathcal{V},\mathcal{E})$. Note that each vertex in graph $\mathcal{G}$ is considered to have a self-loop, i.e., $i\in\mathcal{N}_i$. At this moment we do not impose other conditions for $\mathcal{G}$, instead, we give Assumption \ref{as local cost} on those local observations, based on which we propose our distributed RL algorithm. In Section \ref{sec MAS}, when applying our RL algorithm to a model-free multi-agent LQR problem, the details about how to design the inter-agent interaction graph for validity of Assumption \ref{as local cost} will be introduced.
\begin{assumption}\label{as local cost}
There exist a constant $c>0$ and local cost functions $f_i(x)=\mathbf{E}_{\xi\sim\mathcal{D}}[h_i(x_{\mathcal{N}_i},\xi_{\mathcal{N}_i})]$ such that for any $i\in\mathcal{V}$ and $x\in\mathcal{X}$, $h_i(\cdot,\cdot): \mathbb{R}^{\sum_{j\in\mathcal{N}_i}q_j}\times \mathbb{R}^{\sum_{j\in\mathcal{N}_i}p_j}\rightarrow\mathbb{R}_{\geq0}$ satisfies
\begin{equation}\label{hi<cfi}
h_i(x_{\mathcal{N}_i},\xi_{\mathcal{N}_i})\leq cf_i(x), \quad \text{almost surely~(a.s.)},
\end{equation}
and
\begin{equation}\label{grhi}
\nabla_{x_i}f(x)=\nabla_{x_i}f_i(x).
\end{equation}
\end{assumption}
\begin{remark}
Inequality (\ref{hi<cfi}) relates $h_i(x,\xi)$ to its expectation w.r.t. $\xi$. If $h_i(x,\xi)$ is continuous in $\xi$ for any $x\in\mathcal{X}$, then condition (\ref{hi<cfi}) holds whenever the random variable $\xi$ is bounded. When $\xi$ is unbounded, for example, when $\xi$ follows a sub-Gaussian distribution, (\ref{hi<cfi}) may hold with high probability; see \cite{Rudelson13}. If $\xi$ follows a standard Gaussian distribution, a truncation approach can be used when evaluating the local costs to ensure boundedness of the observation.
\end{remark}
To show that Assumption \ref{as local cost} is reasonable, we give a class of feasible examples. In multi-agent coordination control, $x_i$ may represent the state of agent $i$ that needs to be stabilized to a desired set; it plays the role of the ``policy'' in RL and of the ``state'' in our optimization formulation. Consider
\begin{equation}\label{eq:example}
h(x,\xi)=\sum_{(i,j)\in\mathcal{E}}F_{ij}(x_i,x_j,\xi_i,\xi_j), ~~~~\xi_i\sim\mathcal{D}_i,
\end{equation}
where $F_{ij}(\cdot,\cdot,\cdot,\cdot): \mathbb{R}^{q_i}\times\mathbb{R}^{q_j}\times\mathbb{R}^{p_i}\times\mathbb{R}^{p_j}\rightarrow\mathbb{R}_{\geq0}$ is locally Lipschitz continuous and has a locally Lipschitz continuous gradient w.r.t. $x_i$ and $x_j$, $\mathcal{E}$ is the edge set of a graph characterizing the inter-agent coupling relationship in the objective function, and $\mathcal{D}_i$ for each $i\in\mathcal{V}$ is a bounded distribution with zero mean. By setting $h_i(x_{\mathcal{N}_i},\xi_{\mathcal{N}_i})=\sum_{j\in\mathcal{N}_i}F_{ij}(x_i,x_j,\xi_i,\xi_j)$, Assumption \ref{as local cost} is satisfied. The formulation \eqref{eq:example} is applicable to many multi-agent coordination problems such as consensus \cite{Qin16} and formation control \cite{Oh15}. Similar formulations have been considered in \cite{Chen14}.
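As an illustration, the following sketch instantiates \eqref{eq:example} with quadratic edge costs on a path graph (our own choice of $F_{ij}$; all names are hypothetical) and lets the local cost $h_i$ collect only the edges that touch agent $i$. Since edges not touching agent $i$ do not depend on $x_i$, the partial gradients of $h_i$ and of the global cost w.r.t. $x_i$ coincide, which is exactly condition \eqref{grhi} of Assumption \ref{as local cost}:

```python
import numpy as np

# Hypothetical quadratic edge cost F_ij(x_i, x_j, xi_i, xi_j) = ||x_i - x_j + xi_i - xi_j||^2.
def F(xi, xj, ni, nj):
    d = xi - xj + ni - nj
    return float(d @ d)

edges = [(1, 2), (2, 3), (3, 4)]           # inter-agent coupling edges
agents = [1, 2, 3, 4]

def local_cost(i, x, noise):               # h_i: edge costs involving agent i
    return sum(F(x[a], x[b], noise[a], noise[b])
               for (a, b) in edges if i in (a, b))

def global_cost(x, noise):                 # h: sum over all edges
    return sum(F(x[a], x[b], noise[a], noise[b]) for (a, b) in edges)
```

A finite-difference check confirms that, with the noise set to zero, the gradient of $h_i$ w.r.t. $x_i$ equals that of the global cost.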
A direct consequence of Assumption \ref{as local cost} is that $f_i(x)$ is locally Lipschitz continuous and has a locally Lipschitz continuous gradient w.r.t. $x_i$; moreover, the two Lipschitz constants are the same as those of $f(x)$.
In this paper, with the help of Assumption \ref{as local cost}, we propose a novel distributed zeroth-order algorithm in which the local gradient of each agent is estimated directly from the local observations $h_i$.
The problem we aim to solve is summarized as follows:
\begin{problem}
Under Assumptions \ref{as Lips f}-\ref{as local cost}, given an initial state $x^0\in\mathcal{X}$, design a distributed algorithm for each agent $i$ based on the local observation\footnote{Since $x_{\mathcal{N}_i}$ is composed of partial elements of $x$, for symbol simplicity, we write the local cost function for each agent $i$ as $h_i(x,\xi)$ and treat $h_i(\cdot,\cdot)$ as a mapping from $\mathbb{R}^q\times\mathbb{R}^p$ to $\mathbb{R}_{\geq0}$ while keeping in mind that $h_i(x,\xi)$ only involves agent $i$ and its neighbors.} $h_i(x,\xi)$ such that by implementing the algorithm, the entire state $x$ converges to a stationary point of $f(x)$.
\end{problem}
\section{Distributed ZOO with Asynchronous Samples and Updates} \label{sec: as}
In this section, we propose a distributed zeroth-order algorithm with asynchronous samples and updates based on an accelerated zeroth-order BCD algorithm.
\subsection{Block Coordinate Descent}
A BCD algorithm solves for the minimizer $x=(x_1^{\top},...,x_N^{\top})^{\top}\in\mathbb{R}^q$ by updating only one block $x_i\in\mathbb{R}^{q_i}$ in each iteration. More specifically, let $b_k\in\mathcal{V}$ be the block to be updated at step $k$, and $x^k$ be the value of $x$ at step $k$. An accelerated BCD algorithm is shown below:
\begin{equation}\label{BCD}
\left\{
\begin{array}{lrlr}
x_i^{k+1}=x_i^{k}, & i\neq b_k,\\
x_i^{k+1}=\hat{x}_i^k-\eta\nabla_{x_i} f(\hat{x}_i^k,x_{-i}^k), & i=b_k,
\end{array}
\right.
\end{equation}
where $\eta>0$ is the step-size, $\hat{x}_i^k$ is the extrapolation determined by
\begin{equation}\label{xhati}
\hat{x}_i^k=x_i^k+w_i^k(x_i^k-x_i^{k_{prev}}),
\end{equation}
where $w_i^k\geq0$ is an extrapolation weight to be determined later, and $x_i^{k_{prev}}$ is the value of $x_i$ before it was updated to $x_i^k$.
Note that $w_i^k$ can simply be set to 0. However, it has been shown in \cite{wright2015coordinate,xu2017globally} that appropriate positive extrapolation weights significantly accelerate the convergence of the BCD algorithm.
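A minimal numerical sketch of the update \eqref{BCD}-\eqref{xhati}, assuming first-order information is available and using a cyclic block order and a toy quadratic objective of our own choosing:

```python
import numpy as np

# Accelerated BCD on f(x) = 0.5 * ||x||^2 (scalar blocks; the block gradient is x_i).
def bcd(f_grad_block, x0, eta, w, steps, n_blocks):
    x = x0.copy()
    x_prev = x0.copy()                      # stores x_i^{k_prev} per block
    for k in range(steps):
        i = k % n_blocks                    # cyclic choice of block b_k
        x_hat = x[i] + w * (x[i] - x_prev[i])   # extrapolation, cf. \eqref{xhati}
        x_prev[i] = x[i]
        x[i] = x_hat - eta * f_grad_block(x, i, x_hat)  # cf. \eqref{BCD}
    return x

# Block gradient of 0.5*||x||^2 evaluated at the extrapolated point.
grad = lambda x, i, x_hat: x_hat
```

Running `bcd(grad, x0, eta=0.5, w=0.1, steps=60, n_blocks=3)` drives every block to 0, the unique minimizer of this toy objective.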
To avoid using the first-order information, which is usually absent in reality, we estimate $\nabla_{x_i} f(\hat{x}_i^k,x_{-i}^k)$ in (\ref{BCD}) based on the observation $f_i(\hat{x}_i^k,x_{-i}^k)$, see the next subsection.
\subsection{Gradient Estimation via Local Cost Evaluation}
In this subsection, we introduce how, given $x\in\mathcal{X}$, to estimate $\nabla_{x_i} f(x)$ based on $f_i(x)$. In the ZOO literature, $\nabla_{x}f(x)$ is estimated by perturbing the state $x$ with a vector randomly sampled from $\mathbb{S}_{q-1}$. In order to achieve distributed learning, we expect different agents to sample their own perturbation vectors independently. Based on Assumption \ref{as local cost}, we have
\begin{equation}\label{fi}
\nabla_{x}f(x)=\begin{pmatrix}
\nabla_{x_1}f(x)\\
\vdots\\
\nabla_{x_N}f(x)
\end{pmatrix}=\begin{pmatrix}
\nabla_{x_1}f_1(x)\\
\vdots\\
\nabla_{x_N}f_N(x)
\end{pmatrix}.
\end{equation}
Let
\begin{equation}
\begin{split}
\hat{f}_i(x)&=\mathbf{E}_{v_i\in\mathbb{B}_{q_i}}[f_i(x_i+r_i v_i,x_{-i})]\\ &=\frac{\int_{r_i\mathbb{B}_{q_i}}f_i(x_i+ v_i,x_{-i})dv_i}{V(r_i\mathbb{B}_{q_i})},
\end{split}
\end{equation}
where $V(r_i\mathbb{B}_{q_i})$ is the volume of $r_i\mathbb{B}_{q_i}$. Here $\hat{f}_i(x)$ is always differentiable even when $f_i(x)$ is not differentiable.
To approximate $\nabla_{x_i}f_i(x)$ for each agent $i$, we approximate $\nabla_{x_i}\hat{f}_i(x)$ by the following one-point feedback:
\begin{equation}\label{gixuxi}
g_i(x,u_i,\xi)=\frac{q_i}{r_i}h_i(x_i+r_iu_i,x_{-i},\xi)u_i,
\end{equation}
where $u_i\sim\uni(\mathbb{S}_{q_i-1})$ and $\xi\in\mathbb{R}^p$ is a random variable following the distribution $\mathcal{D}$. Note that, by the definition of the local cost function $h_i(x,\xi)$, $g_i(x,u_i,\xi)$ may be affected only by some components of $\xi$.
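The following sketch checks this construction numerically on a toy quadratic local cost with no noise (our own choice of $f_i$ and parameters). For a quadratic, the ball-smoothed $\hat f_i$ has the same gradient as $f_i$, so the sample average of the one-point estimate \eqref{gixuxi} should approach the true partial gradient $2x_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
x_i = np.array([0.5, -1.0, 2.0])
q_i, r_i, n = x_i.size, 0.05, 1_000_000

U = rng.normal(size=(n, q_i))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # u_i ~ Uni(S_{q_i-1})
H = np.sum((x_i + r_i * U) ** 2, axis=1)        # h_i(x_i + r_i u_i) with xi = 0
G = (q_i / r_i) * H[:, None] * U                # one-point estimates g_i
g_bar = G.mean(axis=0)                          # should approach 2 * x_i
```

The single-sample variance scales like $(q_i/r_i)^2$, which is why a large number of samples (or the averaging used later in our algorithm analysis) is needed for an accurate estimate.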
The following lemma shows that $g_i(x,u_i,\xi)$ is an unbiased estimate of $\nabla_{x_i}\hat{f}_i(x)$.
\begin{lemma}\label{le fhat=Eg}
Given $r_i>0$, $i=1,...,N$, the following holds
\begin{equation}
\nabla_{x_i}\hat{f}_i(x)=\mathbf{E}_{u_i\in\mathbb{S}_{q_i-1}}\mathbf{E}_{\xi\sim\mathcal{D}}[g_i(x,u_i,\xi)].
\end{equation}
\end{lemma}
Although $\nabla_{x_i}\hat{f}_i(x)\neq\nabla_{x_i}f(x)$ in general, the gap between them can be quantified using the smoothness of $f(x)$, as shown in Lemma~\ref{le fhat error} below.
\begin{lemma}\label{le fhat error}
Given a point $x\in\mathbb{R}^{q}$, if $r_i\leq\beta_x$, then
\begin{equation}\label{g=gradf}
||\nabla_{x_i}\hat{f}_i(x)-\nabla_{x_i}f(x)||\leq\phi_xr_i.
\end{equation}
\end{lemma}
\begin{remark}
Compared with gradient estimation based on one-point feedback, the two-point feedback $$g_i(x,u_i,\xi_i)=\frac{q_i}{r_i}\left[h_i(x_i+r_iu_i,x_{-i},\xi_i)-h_i(x_i-r_iu_i,x_{-i},\xi_i)\right]u_i$$ is recognized as a more robust estimator with a smaller variance and a faster convergence rate \cite{Agarwal10,Nesterov17,Chen17}. In this work, we mainly focus on solving an optimization problem over a network by a distributed zeroth-order BCD algorithm. Since it has been shown in \cite{Nesterov17} that the expectation of the one-point feedback equals that of the two-point feedback, our algorithm is extendable to the two-point feedback case.
\end{remark}
\subsection{Distributed ZOO Algorithm with Asynchronous Samplings}
In this subsection, we propose a distributed ZOO algorithm with asynchronous sample and update schemes based on the BCD algorithm (\ref{BCD}) and the gradient approximation for each agent $i$. According to (\ref{gixuxi}), we have the following approximation for each agent $i$ at step $k$:
\begin{equation}\label{gixhat}
g_i(\hat{x}_i^k, x_{-i}^k, u_i^k,\xi^k)=\frac{q_i}{r_{i}}h_i(\hat{x}_i^k+r_{i}u_i^k,x_{-i}^k,\xi^k)u_i^k,
\end{equation}
where $u_i^k$ is uniformly randomly sampled from $\mathbb{S}_{q_i-1}$.
In fact, since $h_i(x,\xi)$ only involves the agents in $\mathcal{N}_i$, it suffices to keep the variables of the agents in $\mathcal{N}_i\setminus\{i\}$ fixed when estimating the gradient of agent $i$. That is, in our problem, two agents are allowed to update their variables simultaneously if they are not neighbors. This is different from a standard BCD algorithm, where only one block of the entire variable is updated in each iteration.
To achieve simultaneous update for non-adjacent agents, we decompose the set of agents $\mathcal{V}$ into $s$ independent clusters ($s$ has an upper bound depending on the graph), i.e., $\mathcal{V}=\cup_{j=1}^s\mathcal{V}_j$, and
\begin{equation}
\mathcal{V}_{j_1}\cap\mathcal{V}_{j_2}=\varnothing, ~~\forall \text{ distinct } j_1,j_2\in\{1,...,s\}
\end{equation}
such that the agents in the same cluster are not adjacent in the interaction graph, i.e., $(i_1,i_2)\notin\mathcal{E}$ for any $i_1,i_2\in\mathcal{V}_{j}$, $j\in\{1,...,s\}$. Note that whether graph $\mathcal{G}$ is directed or undirected, any two agents must lie in different clusters if there is a link from one to the other. Without loss of generality, we assume $\mathcal{G}$ is undirected, and let $\mathcal{N}_i$ be the neighbor set of agent $i$. In the case when $\mathcal{G}$ is directed, we define $\mathcal{N}_i$ as the set of agents $j$ such that $(i,j)\in\mathcal{E}$ or $(j,i)\in\mathcal{E}$. A simple procedure for computing such a clustering is shown in Algorithm \ref{alg:clustering}. Note that the number of clusters obtained by implementing Algorithm \ref{alg:clustering} may differ from run to run. In practical applications, Algorithm \ref{alg:clustering} can be modified to solve for a clustering with a maximum or minimum number of clusters.
\begin{algorithm}[htbp]
\small
\caption{Non-Adjacent Agents Clustering}\label{alg:clustering}
\textbf{Input}: $\mathcal{V}$, $\mathcal{N}_i$ for $i=1,...,N$.\\
\textbf{Output}: $s$, $\mathcal{V}_j$, $j=1,...,s$.
\begin{itemize}
\item[1.] Set $s=0$, $\mathcal{C}=\varnothing$.
\item[2.] \textbf{while} $\mathcal{C}\neq\mathcal{V}$
\item[3.] Set $s\leftarrow s+1$, $\mathcal{V}_s=\varnothing$, $\mathcal{B}=\mathcal{C}$, \textbf{while} $\mathcal{B}\neq\mathcal{V}$
\item[4.] Randomly select $i$ from $\mathcal{V}\setminus\mathcal{B}$, set $\mathcal{V}_s\leftarrow\mathcal{V}_s\cup\{i\}$, $\mathcal{B}\leftarrow\mathcal{B}\cup\mathcal{N}_i$.
\item[5.] \textbf{end}
\item[6.] $\mathcal{C}\leftarrow\mathcal{C}\cup\mathcal{V}_s$.
\item[7.] \textbf{end}
\end{itemize}
\end{algorithm}
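Algorithm \ref{alg:clustering} can be transcribed directly. The following sketch (with our own variable names) assumes each neighbor set contains the agent itself, matching the self-loop convention above:

```python
import random

# Greedily grow clusters of mutually non-adjacent agents until all are assigned.
# nbrs[i] must include i itself (self-loop).
def cluster(V, nbrs, seed=0):
    rng = random.Random(seed)
    covered = set()                         # C: agents already clustered
    clusters = []
    while covered != V:
        blocked = set(covered)              # agents excluded from the new cluster
        cur = set()                         # V_s
        while blocked != V:
            i = rng.choice(sorted(V - blocked))
            cur.add(i)
            blocked |= nbrs[i]              # i and its neighbors cannot join V_s
        clusters.append(cur)
        covered |= cur
    return clusters
```

By construction the clusters partition $\mathcal{V}$ and no two agents in the same cluster are adjacent.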
Based on the clustering obtained by Algorithm \ref{alg:clustering}, we propose Algorithm \ref{alg:as} as the asynchronous distributed zeroth-order algorithm. Algorithm \ref{alg:as} can be viewed as a distributed RL algorithm in which different clusters take actions asynchronously, while the agents within one cluster take actions simultaneously and independently. In Algorithm \ref{alg:as}, step 5 can be viewed as {\it policy evaluation} for agent $i$, while step 6 corresponds to {\it policy iteration}. Moreover, the local observation $h_{i}(\hat{x}_i^k+r_iu_{i}^k,x_{-i}^k,\xi^k)$ can be viewed as the reward returned by the environment to agent $i$.
\begin{algorithm}[htbp]
\small
\caption{Distributed Zeroth-Order Algorithm with Asynchronous Samplings}\label{alg:as}
\textbf{Input}: Step-size $\eta$, smoothing radius $r_i$ and variable dimension $q_i$, $i=1,...,N$, clusters $\mathcal{V}_j$, $j=1,...,s$, iteration number $T$, update order $z_k$ and extrapolation weight $w_i^k$, $k=0,...,T-1$, initial point $x_0\in\mathcal{X}$.\\
\textbf{Output}: $x(T)$.
\begin{itemize}
\item[1.] \textbf{for} $k=0,1,...,T-1$ \textbf{do}
\item[2.] ~~~~Sample $\xi^k\sim\mathcal{D}$.
\item[3.] ~~~~\textbf{for} all $i\in\mathcal{V}$ \textbf{do} (Simultaneous Implementation)
\item[4.]~~~~~~~~\textbf{if} $i\in\mathcal{V}_{z_k}$ \textbf{do}
\item[5.] ~~~~~~~~~~~~Agent $i$ computes $\hat{x}_i^k$ by (\ref{xhati}), samples $u_{i}^k$ randomly from $\mathbb{S}_{q_i-1}$ and observes $h_{i}(\hat{x}_i^k+r_iu_{i}^k,x_{-i}^k,\xi^k)$.
\item[6.]~~~~~~~~~~~~Agent $i$ computes the estimated local gradient $g_i(\hat{x}_i^k,x_{-i}^k,u_i^k,\xi^k)$ according to (\ref{gixhat}), and then updates its policy:
\begin{equation}
x_i^{k+1}=\hat{x}_i^k-\eta g_i(\hat{x}_i^k,x_{-i}^k,u_i^k,\xi^k).
\end{equation}
\item[7.]~~~~~~~~\textbf{else}
\item[8.]\begin{equation}
x_i^{k+1}=x_i^k.
\end{equation}
\item[9.]~~~~~~~~\textbf{end}
\item[10.] ~~~~\textbf{end}
\item[11.] \textbf{end}
\end{itemize}
\end{algorithm}
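To illustrate Algorithm \ref{alg:as}, the following toy run (all numerical choices are ours) uses four scalar agents on a path graph with quadratic edge costs plus an anchor term on the first agent, zero measurement noise, no extrapolation ($w_i^k=0$), and averages several one-point estimates per update to tame their variance:

```python
import numpy as np

rng = np.random.default_rng(2)
E = [(0, 1), (1, 2), (2, 3)]               # path graph, agents indexed from 0

def f(x):                                  # global cost (anchor on agent 0)
    return sum((x[a] - x[b]) ** 2 for (a, b) in E) + x[0] ** 2

def h(i, x):                               # local cost: edges touching agent i
    c = sum((x[a] - x[b]) ** 2 for (a, b) in E if i in (a, b))
    return c + (x[0] ** 2 if i == 0 else 0.0)

clusters = [[0, 2], [1, 3]]                # non-adjacent agents update together
x = np.array([2.0, -1.0, 3.0, 0.0])
eta, r, m = 0.05, 1.0, 400
for k in range(300):                       # clusters alternate: cyclic order z_k
    for i in clusters[k % 2]:
        g = 0.0
        for _ in range(m):
            u = rng.choice([-1.0, 1.0])    # u_i ~ Uni(S_0) since q_i = 1
            xp = x.copy()
            xp[i] += r * u
            g += (1.0 / r) * h(i, xp) * u  # one-point estimate, cf. \eqref{gixhat}
        x[i] -= eta * g / m                # averaged zeroth-order update
```

The run drives the global cost from its initial value of 38 to near its minimum, with each agent only ever evaluating its own local cost.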
In the literature, BCD algorithms have been studied with different update orders such as deterministically cyclic \cite{canutescu2003cyclic,wright2015coordinate} and randomly shuffled \cite{richtarik2014iteration,yu2019zeroth}. In this paper, we adopt an ``essentially cluster cyclic update" scheme, which includes the standard cyclic update as a special case, and is a variant of the essentially cyclic update scheme in \cite{wright2015coordinate,xu2017globally}, see the following assumption.
\begin{assumption}\label{as cyclic}(Essentially Cluster Cyclic Update)
Given integer $T_0\geq s$, for any cluster $j\in\{1,...,s\}$ and any two steps $k_1$ and $k_2$ such that $k_2-k_1=T_0-1$, there exists $k_0\in [k_1,k_2]$ such that $z_{k_0}=j$.
\end{assumption}
Assumption \ref{as cyclic} implies that each cluster of agents updates its states at least once during every window of $T_0$ consecutive steps. When $|\mathcal{V}_i|=1$ for $i=1,...,s$ and $s=N$, Assumption \ref{as cyclic} reduces to the essentially cyclic update in \cite{wright2015coordinate,xu2017globally}.
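Assumption \ref{as cyclic} can be checked mechanically for any given update order; a small sketch (our own helper) verifies that every window of $T_0$ consecutive steps contains every cluster index:

```python
# z is the list of cluster indices z_k, s the number of clusters,
# T0 the window length from Assumption \ref{as cyclic}.
def essentially_cluster_cyclic(z, s, T0):
    return all(set(range(1, s + 1)) <= set(z[k:k + T0])
               for k in range(len(z) - T0 + 1))
```

For instance, the strictly alternating order $1,2,1,2,\dots$ satisfies the assumption with $T_0=2$, while $1,1,2,1,1,2,\dots$ requires $T_0=3$.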
To better understand Algorithm \ref{alg:as}, let us look at a multi-agent coordination example.
\begin{example}\label{ex1}
Suppose that the global function to be minimized is (\ref{eq:example}) with $\mathcal{E}=\{(1,2),(2,3),(3,4)\}$ being the edge set. Then the local cost function for each agent is:
\begin{equation}
\begin{split}
&h_1=F(x_1,x_2,\xi_{12}),~~ h_2=F(x_1,x_2,\xi_{12})+F(x_2,x_3,\xi_{23}),\\
&h_3=F(x_2,x_3,\xi_{23})+F(x_3,x_4,\xi_{34}), ~~h_4=F(x_3,x_4,\xi_{34}),
\end{split}
\end{equation}
where $\xi_{ij}$ is a bounded zero-mean noise. By implementing Algorithm \ref{alg:clustering}, the two clusters obtained are $\mathcal{V}_1=\{1,3\}$ and $\mathcal{V}_2=\{2,4\}$. By implementing Algorithm \ref{alg:as} with the two clusters updating alternately, the diagram for two consecutive iterations is shown in Fig. \ref{fig diagram as}, where the agents in $\mathcal{V}_1$ and $\mathcal{V}_2$ take their actions successively. In multi-agent coordination, $F(x_i,x_j,\xi_{ij})$ can be set as $(||x_i-x_j+\xi_{ij}||^2-d_{ij}^2)^2$ for distance-based formation control ($d_{ij}$ is the desired distance between agents $i$ and $j$), and as $||x_i-x_j+\xi_{ij}-h_{ij}||^2$ for displacement-based formation control ($h_{ij}$ is the desired displacement between agents $i$ and $j$).
\begin{figure}
\centering
\includegraphics[width=9cm]{architc2}
\caption{The architecture of distributed RL via asynchronous actions during two consecutive iterations.} \label{fig diagram as}
\end{figure}
\end{example}
\subsection{Convergence Result}
We study the convergence of Algorithm \ref{alg:as} by focusing on $x\in\mathbb{X}$, where $\mathbb{X}$ is defined as
\begin{equation}\label{Xx}
\mathbb{X}=\{x\in\mathbb{R}^{q}:f(x)\leq\alpha f(x^0), f_i(x)\leq\alpha_if_i(x^0), \forall~ i\in\mathcal{V}\}\subseteq \mathcal{X},
\end{equation}
where $\alpha, \alpha_i>1$ for $i\in\mathcal{V}$, and $x^0\in\mathbb{X}$ is the given initial state.
Since $f(x)$ is continuous, the set $\mathbb{X}$ is compact. Then we are able to find uniform parameters feasible for $f(x)$ over $\mathbb{X}$:
\begin{equation}
\phi_0=\sup_{x\in\mathbb{X}}\phi_x,~~\lambda_0=\sup_{x\in\mathbb{X}}\lambda_x, ~~\rho_0=\inf_{x\in\mathbb{X}}\min\{\beta_x,\zeta_x\}.
\end{equation}
The following theorem shows an approximate convergence result based on establishing the probability of the event $\{x^k\in\mathbb{X}\}$ for $k=0,...,T-1$. Let $N_i$ denote the number of agents in the cluster containing agent $i$. For notation simplicity, we denote $N_0=\max_{i\in\mathcal{V}}N_i=\max_{j\in\{1,...,s\}}|\mathcal{V}_j|$, $q_+=\max_{i\in\mathcal{V}}q_i$, $r_-=\min_{i\in\mathcal{V}}r_i$, $f_0(x^0)=\max_{i\in\mathcal{V}}\alpha_if_i(x^0)$.
\begin{theorem}\label{th as}
Under Assumptions \ref{as Lips f}-\ref{as local cost}, given positive scalars $\epsilon$,
$\nu$, $\gamma$, and $\alpha\geq 2+\gamma+\frac{1}{\nu}+\nu\gamma$, and an initial state $x^0\in\mathcal{X}$, let $\{x^k\}_{k=0}^{T-1}$ be the sequence of states obtained by implementing Algorithm \ref{alg:as} for $k=0,...,T-1$. Suppose that
\begin{equation}\label{conditions}
\begin{split}
T=\lceil \frac{2\alpha \nu f(x^0)}{\eta\epsilon}\rceil, ~~~\eta\leq\min\{\frac{\rho_0}{2\delta\sqrt{N_0}},\frac{2\alpha f(x^0)}{\gamma\epsilon},\frac{\gamma\epsilon}{2\alpha N_0(\phi_0\delta^2+4\phi_0^2+4+\phi_0)}\},\\
w_i^k\leq\frac{1}{||x_i^k-x_i^{k_{prev}}||}\min\{\eta^{3/2},\frac{\rho_0}{2\sqrt{N_i}}\},~~~ r_i\leq \min\{\frac{\rho_0}{2},\frac{1}{2\phi_0}\sqrt{\frac{\gamma\epsilon}{\alpha N_0}}\}, i\in\mathcal{V},
\end{split}
\end{equation}
where $\delta=\frac{q_+}{r_-}c\left[f_0(x^0)+\lambda_0\rho_0\right]$ is the uniform bound on the estimated gradient (as shown in Lemma \ref{le E[g]}). The following statements hold.
(i). The following inequality holds with probability at least $1-\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma)$:
\begin{equation}
\frac{1}{T}\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(x^k)||^2< \epsilon.
\end{equation}
(ii). Under Assumption \ref{as cyclic}, suppose that there exist $\bar{x}\in\mathbb{R}^q$, a step $\bar{T}\leq T-T_0$, and an error $\bar{\epsilon}\leq\rho_0$ such that $||x^k-\bar{x}||<\bar{\epsilon}$ for all $k\in[\bar{T},T-1]$. If the conditions in (\ref{conditions}) hold with $x^0$ replaced by $x^{\bar{T}}$ in the conditions on $T$, $\eta$ and $r_i$, then the following holds with probability at least $1-\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma)$:
\begin{equation}
||\nabla_xf(\bar{x})||^2<\hat{\epsilon},
\end{equation}
where $\hat{\epsilon}=T_0(2\epsilon+2\phi_0^2\bar{\epsilon}^2)$.
\end{theorem}
\textbf{Approximate Convergence.} Given positive scalars $\epsilon$, $\alpha$, $\gamma$, and $\nu$, Theorem \ref{th as} (i) implies that Algorithm \ref{alg:as} generates a sequence $\{x^k\}$ along which, at each step, the partial gradient w.r.t. the updated cluster is close to 0 with high probability. When $s=1$, Algorithm \ref{alg:as} reduces to an accelerated zeroth-order gradient descent algorithm, and Theorem \ref{th as} (i) implies convergence to a stationary point with high probability. Theorem \ref{th as} (ii) implies that under the ``essentially cluster cyclic update'' scheme, once Algorithm \ref{alg:as} converges to a point with an error bounded by $\bar{\epsilon}$, the convergent point is an $\hat{\epsilon}$-accurate stationary point.
\textbf{Convergence Probability.} The convergence probability can be controlled by adjusting the parameters $\alpha$, $\gamma$ and $\nu$. For example, setting $\alpha=20$, $\gamma=1$, $\nu=1$ yields a probability of at least $1-1/4=3/4$. To achieve a high probability of convergence, it is desirable to have large $\alpha$ and $\nu$ and small $\gamma$, which implies that the performance of the algorithm can be enhanced at the price of a larger number of samples.
\textbf{Sample Complexity.} To achieve high accuracy of the convergent result, we consider $\epsilon$ as a small positive scalar such that $\epsilon\ll1$. As a result, the sample complexity for convergence of Algorithm \ref{alg:as} is $T\sim\mathcal{O}(q_+^2N_0^2T_0^3/\hat{\epsilon}^3)$. This implies that the required iteration number mainly depends on the highest dimension of the variable for one agent, the largest size of one cluster, and the number of clusters (because $T_0\geq s$). Note that even when the number of agents increases, $q_+$ may remain the same, implying high scalability of our algorithm to large-scale networks. Moreover, $N_0$ may increase as the number of clusters $s$ decreases. Hence, there is a trade-off between minimizing the largest cluster size and minimizing the number of clusters. When the network is of large scale, $T_0$ dominates the sample complexity, which makes minimizing the number of clusters the optimal clustering strategy.
\subsection{Variance Analysis}\label{subsec: as variance}
In this subsection, we analyze the variance of our gradient estimation strategy and make comparisons with gradient estimation via global cost evaluation. Without loss of generality, we analyze the estimation variance for the $i$-th agent. Let $u_i\sim\text{Uni}(\mathbb{S}_{q_i-1})$, $z\sim\text{Uni}(\mathbb{S}_{q-1})$, and let $z_i\in\mathbb{R}^{q_i}$ be the sub-vector of $z$ corresponding to agent $i$. Under the same smoothing radius $r>0$, single-sample gradient estimates based on the local cost $h_i$ and the global cost $h$ are
\begin{equation}\label{g_las}
g_l=\frac{q_i}{r}h_i(x_i+ru_i,x_{-i},\xi)u_i,
\end{equation}
and
\begin{equation}\label{g_g}
g_{g}=\frac{q}{r}h(x+rz,\xi)z_i,
\end{equation}
respectively.
\begin{lemma}\label{le cov g as g}
The covariance matrices for the gradient estimates (\ref{g_las}) and (\ref{g_g}) are
\begin{equation}\label{eq:cov1}
\mathrm{Cov}(g_l) = \frac{q_i}{r^2}\left[\mathbf{E}[h_i^2(x, \xi)]I_{q_i} - \mathbf{E}[\nabla_{x_i} h_i(x,\xi)]\mathbf{E}^{\top} [\nabla_{x_i}h_i(x,\xi)]+\mathcal{O}(r^2)\right],
\end{equation}
and \begin{equation}
\mathrm{Cov}(g_g) = \frac{q}{r^2}\left[\mathbf{E}[h^2(x,\xi)]I_{q_i} - \mathbf{E}[\nabla_{x_i}h(x,\xi)]\mathbf{E}^{\top}[\nabla_{x_i}h(x,\xi)]+\mathcal{O}(r^2)\right], \label{eq:cov2}
\end{equation}
respectively.
\end{lemma}
Due to Assumption \ref{as local cost}, the difference between two covariance matrices is
\begin{equation}\label{variance difference}
\mathrm{Cov}(g_g)-\mathrm{Cov}(g_l)=\frac{1}{r^2}\left[\sum_{j\in\mathcal{V}\setminus\{i\} }q_j\mathbf{E}[h^2(x,\xi)]I_{q_i}+q_i\mathbf{E}[h^2(x,\xi)-h_i^2(x,\xi)]I_{q_i}+\mathcal{O}(r^2)\right],
\end{equation}
where the first term is positive definite as long as there is more than one agent in the network, the second term is usually positive semi-definite, and the third term is negligible if $r$ is much smaller than $h(x,\xi)$.
Observe that the first two terms may have extremely large traces when the network is of large scale. This implies that the local gradient estimate (\ref{g_las}) makes our algorithm substantially more scalable to large-scale network systems than the global estimate (\ref{g_g}).
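A quick Monte Carlo experiment illustrates the gap between the two covariances. We use toy quadratic edge costs on a path of four scalar agents with no noise; all numerical choices are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
E = [(0, 1), (1, 2), (2, 3)]
x = np.array([2.0, -1.0, 3.0, 0.5])
r, i, n = 0.1, 1, 50000                    # smoothing radius, agent index, samples

def h_glb(y):                              # global cost
    return sum((y[a] - y[b]) ** 2 for (a, b) in E)

def h_loc(y):                              # local cost: edges touching agent i
    return sum((y[a] - y[b]) ** 2 for (a, b) in E if i in (a, b))

e_i = np.eye(4)[i]
U = rng.choice([-1.0, 1.0], size=n)        # u_i ~ Uni(S_0): scalar block
g_l = np.array([(1.0 / r) * h_loc(x + r * u * e_i) * u for u in U])

Z = rng.normal(size=(n, 4))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # z ~ Uni(S_3)
g_g = np.array([(4.0 / r) * h_glb(x + r * z) * z[i] for z in Z])
```

Both estimators are centered near the true partial derivative, but the sample variance of the global estimate is several times larger than that of the local one, in line with \eqref{variance difference}.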
\section{Application to Distributed RL of Model-Free Distributed Multi-Agent LQR}\label{sec MAS}
In this section, we show how Algorithm \ref{alg:as} can be applied to learning, in a distributed manner, a sub-optimal distributed controller for a linear multi-agent system with unknown dynamics.
\subsection{Multi-Agent LQR}
Consider the following MAS\footnote{Although we assume the agents have decoupled dynamics, our results are extendable to the case of coupled dynamics. In that case, the coupling relationship in agents' dynamics has to be taken into account in the design of the local cost function and the learning graph in next subsection. The details are explained in Remark \ref{re coupled dynamics}.}:
\begin{equation}\label{MAS}
x_i(t+1)=A_ix_i(t)+B_iu_i(t), ~~~~i=1,..., N
\end{equation}
where $x_i\in\mathbb{R}^n$, $u_i\in\mathbb{R}^m$, and $A_i$ and $B_i$ are assumed to be unknown. With $\mathcal{A}=\diag\{A_1,...,A_N\}$ and $\mathcal{B}=\diag\{B_1,...,B_N\}$, the entire system dynamics become
\begin{equation}\label{hom MAS}
x(t+1)=\mathcal{A}x(t)+\mathcal{B}u(t).
\end{equation}
By considering random agents' initial states, we study the following LQR problem:
\begin{equation}\label{LQR no noise}
\begin{split}
\min_{K}~~~~&J(K)=\mathbf{E}\left[\sum_{t=0}^\infty\gamma^t x^{\top}(t)(Q+K^{\top}RK)x(t)\right]\\
\text{s.t.}&~~~~x(t+1)=(\mathcal{A}-\mathcal{B}K)x(t), ~~~~x(0)\sim\mathcal{D},
\end{split}
\end{equation}
where $0<\gamma\leq1$, $\mathcal{D}$ is a distribution such that $x(0)$ is bounded, has zero mean, and has covariance matrix $\Sigma_x\succ0$, $Q=G\odot\tilde{Q}$ with $G\in\mathbb{R}^{N\times N}\succeq0$ and $\tilde{Q}\in\mathbb{R}^{nN\times nN}\succeq0$, where the symbol $\odot$ denotes the Khatri-Rao product, and $R=\diag\{R_1,...,R_N\}\succ0$. Note that while the agents' dynamics are assumed unknown, the matrices $Q$ and $R$ are considered known. This is reasonable because in practice $Q$ and $R$ are usually designed by the user. It has been shown in \cite{Malik19} that when $0<\gamma<1$, the formulation (\ref{LQR no noise}) is equivalent to the LQR problem with fixed agents' initial states and additive noise in the dynamics, where the noise follows the distribution $\mathcal{D}$. Hence, our results are extendable to LQR with noisy dynamics.
Let $\mathbb{K}_s$ be the set of stabilizing gains. That is,
\begin{equation}
\mathbb{K}_s=\{K\in\mathbb{R}^{mN\times nN}: \mathcal{A}-\mathcal{B}K \text{ is Schur stable}\}.
\end{equation}
It has been shown in \cite{Bu19} that under the formulation in (\ref{LQR no noise}), it holds that $\mathbb{K}_s=\{K\in\mathbb{R}^{mN\times nN}: J(K)<\infty\}$, which is non-convex, compact, and exactly the feasible domain of the optimization.
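Membership in $\mathbb{K}_s$ can be checked via the spectral radius of the closed-loop matrix; the following small sketch uses toy system matrices of our own choosing:

```python
import numpy as np

# K is stabilizing iff A - B K is Schur stable, i.e., its spectral radius is < 1.
def is_stabilizing(A, B, K):
    return np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0

# Toy discrete-time double-integrator-like system (our choice).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K_good = np.array([[0.5, 1.0]])            # closed-loop eigenvalues 0.5 +/- 0.5i
K_zero = np.zeros((1, 2))                  # open loop: eigenvalues on the unit circle
```

Here `K_good` places the closed-loop eigenvalues strictly inside the unit circle, while the zero gain leaves the marginally unstable open loop unchanged.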
From \cite[Lemmas 4, 5]{Malik19}, the cost function $J(K)$ in (\ref{LQR no noise}) has the following properties.
\begin{lemma}\label{lem:a2a3}
For any $K\in\mathbb{K}_s$, there exist positive scalars $\lambda_K$, $\zeta_K$, $\phi_K$ and $\beta_K$ such that the cost function $J(K)$ in (\ref{LQR no noise}) is $(\lambda_K,\zeta_K)$ locally Lipschitz continuous and has a $(\phi_K,\beta_K)$ locally Lipschitz continuous gradient.
\end{lemma}
Based on the matrix $G$, we define the {\it cost graph} interpreting inter-agent coupling relationships in the cost function.
\begin{definition}(\textbf{Cost Graph})
The cost graph $\mathcal{G}_C=(\mathcal{V},\mathcal{E}_C)$ is an undirected graph such that $G_{ij}\neq0$ if and only if $(i,j)\in\mathcal{E}_C$. The neighbor set of agent $i$ in the cost graph is defined as $\mathcal{N}_C^i=\{j\in\mathcal{V}: (i,j)\in\mathcal{E}_C\}$.
\end{definition}
Distributed control implies that each agent only needs to sense state information of its local neighbors. Next we define {\it sensing graph} interpreting required inter-agent sensing relationships for distributed control.
\begin{definition}(\textbf{Sensing Graph})
The sensing graph $\mathcal{G}_S=(\mathcal{V},\mathcal{E}_S)$ is a directed graph with each agent having a self-loop. The neighbor set for each agent $i$ in graph $\mathcal{G}_S$ is defined as $\mathcal{N}_S^i=\{j\in\mathcal{V}: (j,i)\in\mathcal{E}_S\}$, where $(j,i)\in\mathcal{E}_S$ implies that agent $i$ has access to $x_j$.
\end{definition}
\textbf{Notes for the cost graph and the sensing graph}.
\begin{itemize}
\item The cost graph $\mathcal{G}_C$ is determined by the prespecified cost function, and is always undirected because $Q$ is symmetric.
\item We assume $\mathcal{G}_C$ is connected. Note that if $\mathcal{G}_C$ is disconnected, then the performance index in (\ref{LQR no noise}) can be naturally decomposed according to those connected components, and the LQR problem can be transformed to smaller sized LQR problems.
\item In real applications, the sensing graph is designed based on the sensing capability of each agent. It need not even be weakly connected.
\item Here the cost graph $\mathcal{G}_C$ and the sensing graph $\mathcal{G}_S$ are defined independently. In specific applications, they can be either related to or independent of each other.
\end{itemize}
Let $X(i,j)\in\mathbb{R}^{m\times n}$ be the submatrix of $X\in\mathbb{R}^{mN\times nN}$ consisting of the elements of $X$ in rows $(i-1)m+1$ through $im$ and columns $(j-1)n+1$ through $jn$. The space of distributed controllers is then defined as
\begin{equation}\label{K space}
\mathbb{K}_d=\{X\in\mathbb{R}^{mN\times nN}: X(i,j)=\mathbf{0}_{m\times n}~\text{if}~j\notin\mathcal{N}_S^i,~ i,j\in\mathcal{V}\}.
\end{equation}
We make the following assumption to guarantee that the distributed LQR problem is solvable.
\begin{assumption}\label{as solvable}
$\mathbb{K}_d\cap\mathbb{K}_s\neq\varnothing$.
\end{assumption}
We aim to design a distributed RL algorithm for the agents to learn a sub-optimal distributed controller $K^*\in\mathbb{K}_d$ such that during the learning process, each agent only requires information from a subset of the agents (according to the sensing graph), and takes actions based on the obtained information.
\subsection{Local Cost Function and Learning Graph Design}
We have verified Assumptions \ref{as Lips f} and \ref{as Lips g} in Lemma \ref{lem:a2a3}.
To apply Algorithm \ref{alg:as}, it suffices to find local cost functions such that Assumption \ref{as local cost} holds. In this subsection, we propose an approach to designing such local cost functions.
Note that the cost function can be written as a function of $K$:
\begin{equation}
\begin{split}
J(K)&=\mathbf{E}\left[\sum_{t=0}^\infty\gamma^t x^{\top}(t)(Q+K^{\top}RK)x(t)\right]\\
&=\mathbf{E}\left[\sum_{t=0}^\infty\gamma^t x^{\top}(0)\left((\mathcal{A}-\mathcal{B}K)^t\right)^{\top}(Q+K^{\top}RK)(\mathcal{A}-\mathcal{B}K)^tx(0)\right].
\end{split}
\end{equation}
Let $K=[K_1^{\top},...,K_N^{\top}]^{\top}\in\mathbb{R}^{mN\times nN}$. Then $K_i\in\mathbb{R}^{m\times nN}$ is the local gain matrix to be designed for agent $i$. Based on the definition of $\mathbb{K}_d$ in (\ref{K space}), the distributed controller for each agent $i$ has the form:
\begin{equation}\label{u_i}
u_i=-K_ix=-\tilde{K}_ix_{\mathcal{N}_S^i},
\end{equation}
where $\tilde{K}_i\in\mathbb{R}^{m\times n_i}$ with $n_i=|\mathcal{N}_S^i|n$.
We now view the control gain $K_i$ for each agent $i$ as the optimization variable. According to Assumption \ref{as local cost}, we need to find a local cost $J_i(K)$ for each agent $i$ such that its gradient is the same as the gradient of the global cost w.r.t. $K_i$. That is, $\nabla_{K_i}J_i(K)=\nabla_{K_i}J(K)$.
To design the local cost $J_i$ for each agent $i$, we define the following set including all the agents whose control inputs and states will be affected by agent $i$ during the implementation of the distributed controller:
\begin{equation}
\mathcal{V}_S^i=\{j\in\mathcal{V}: \text{A path from}~ i~ \text{to}~ j~ \text{exists in } \mathcal{G}_S\}.
\end{equation}
Since different agents are coupled in the cost function, when extracting the local cost function involving an agent $j\in\mathcal{V}_S^i$ from the entire cost function, all of its neighbors in the cost graph (i.e., $\mathcal{N}_C^j$) should be taken into account. Based on the set $\mathcal{V}_S^i$ for each agent $i$, we formulate the following feasibility problem for each agent $i\in\mathcal{V}$:
\begin{equation}\label{SDP grad}
\begin{split}
&~~~\text{find}~~~ M_i\in\mathbb{R}^{N\times N}\\
\text{s.t.}~M_i[j,k]&=G_{jk}, \text{ for all } k\in\mathcal{N}_C^j, j\in\mathcal{V}_S^i, \\
M_i[j,k]&=0, \text{ for all } j\in\mathcal{V},~ k\in\mathcal{V}\setminus\cup_{l\in\mathcal{V}_S^i}\mathcal{N}_C^l,
\end{split}
\end{equation}
where $M_i[j,k]$ is the element of matrix $M_i$ on the $j$-th row and $k$-th column.
The solution $M_i$ to (\ref{SDP grad}) must satisfy
\begin{equation}
\frac{\partial (x^{\top}(M_i\odot \tilde{Q})x)}{\partial x_j}=\frac{\partial (x^{\top}(G\odot\tilde{Q})x)}{\partial x_j} \text{ for all } j\in\mathcal{V}_S^i.
\end{equation}
Moreover, we observe that the solution $M_i\in\mathbb{R}^{N\times N}$ is the matrix whose principal submatrix associated with $\cup_{j\in\mathcal{V}_S^i}\mathcal{N}_C^j$ coincides with that of $G$, and whose remaining elements are zeros. Then $M_i\succeq0$, since any principal submatrix of the positive semidefinite matrix $G$ is positive semidefinite, and padding it with zero rows and columns preserves positive semidefiniteness.
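Given the cost matrix $G$, the reachable set $\mathcal{V}_S^i$ and the cost neighborhoods $\mathcal{N}_C^j$, the solution just described can be assembled directly. The following Python sketch (the function name and the dictionary layout of the neighborhoods are our own) copies the relevant principal submatrix of $G$ and zeros everything else:

```python
import numpy as np

def solve_M_i(G, V_S_i, N_C):
    """Closed-form solution of the feasibility problem: keep the principal
    submatrix of G indexed by S = union of N_C[j] over j in V_S_i, and set
    every other entry of M_i to zero."""
    S = sorted({k for j in V_S_i for k in N_C[j]})
    M = np.zeros_like(G, dtype=float)
    M[np.ix_(S, S)] = G[np.ix_(S, S)]
    return M
```

Since any principal submatrix of the positive semidefinite $G$ is itself positive semidefinite, the returned $M_i$ is positive semidefinite as well.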
Now, based on the cost graph $\mathcal{G}_C$ and the sensing graph $\mathcal{G}_S$, we give the definition for the communication graph required in distributed learning.
\begin{definition}(\textbf{Learning Graph})\label{de learning graph}
The learning-required communication graph $\mathcal{G}_L=(\mathcal{V},\mathcal{E}_L)$ is a directed graph with the edge set $\mathcal{E}_L$ defined as
\begin{equation}
\mathcal{E}_L=\{(k,i)\in\mathcal{V}\times\mathcal{V}: k\in\cup_{j\in\mathcal{V}_S^i}\mathcal{N}_C^j, i\in\mathcal{V}\}.\label{eq:learning_graph}
\end{equation}
The neighbor set for each agent $i$ in graph $\mathcal{G}_L$ is defined as $\mathcal{N}_L^i=\{k\in\mathcal{V}: (k,i)\in\mathcal{E}_L\}$, where $(k,i)\in\mathcal{E}_L$ implies that there is an oriented link from $k$ to $i$.
\end{definition}
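Computationally, $\mathcal{N}_L^i$ follows from a reachability search on $\mathcal{G}_S$ followed by a union of cost neighborhoods; a minimal sketch with our own helper names (adjacency lists assumed):

```python
from collections import deque

def reachable(adj, i):
    """V_S^i: all nodes j with a directed path from i to j in the sensing
    graph (adj[u] lists out-neighbors; the self-loop keeps i itself in)."""
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def learning_neighbors(adj_S, N_C, i):
    """N_L^i: union of the cost neighborhoods N_C^j over j in V_S^i."""
    return {k for j in reachable(adj_S, i) for k in N_C[j]}
```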
To better understand the three different graphs, we summarize their definitions in Fig. \ref{fig graph relationship}, and show three examples demonstrating the relationships between $\mathcal{G}_S$, $\mathcal{G}_C$ and $\mathcal{G}_L$ in Fig. \ref{fig graph example}.
\begin{figure}
\centering
\includegraphics[width=10cm]{graphrelationship}
\caption{Summary of definitions for the cost, sensing, and learning graphs.} \label{fig graph relationship}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=14cm]{graphs}
\caption{With the same cost graph, three different sensing graphs result in three different learning graphs. Each node in each graph has a self-loop, which is omitted in this figure.} \label{fig graph example}
\end{figure}
\begin{remark}
There are two points we would like to note for the learning graph. First, since all the agents have self-loops, by Definition \ref{de learning graph}, each edge of graph $\mathcal{G}_C$ must be an edge of $\mathcal{G}_L$. Second, when $\mathcal{G}_S$ is strongly connected, $\mathcal{G}_L$ is a complete graph because in this case $\mathcal{V}_S^i=\mathcal{V}$ for all $i\in\mathcal{V}$. As a result, if we regard each node of a sensing graph in Fig. \ref{fig graph example} as a strongly connected component, then the resulting learning graph $\mathcal{G}_L$ still has the same structure as shown in Fig. \ref{fig graph example}, where each node denotes a fully connected component, and an edge from node $a$ to another node $b$ denotes edges from all agents in component $a$ to all agents in component $b$.
\end{remark}
\begin{remark}\label{re coupled dynamics}
When the multi-agent system has coupled dynamics, there will be another graph $\mathcal{G}_{CO}=(\mathcal{V},\mathcal{E}_{CO})$ describing the inter-agent coupling in dynamics. We assume that $\mathcal{G}_{CO}$ is known, which is reasonable in many model-free scenarios because only the coupling relationships between agents are needed, not the exact dynamics. In this scenario, to construct the learning graph, the sensing graph used in defining (\ref{eq:learning_graph}) should be replaced by a new graph $\mathcal{G}_{SC}=(\mathcal{V},\mathcal{E}_{SC})$ with $\mathcal{E}_{SC}=\mathcal{E}_{CO}\cup\mathcal{E}_S$, where $\mathcal{E}_S$ is the edge set of the graph $\mathcal{G}_S$ describing the desired structure of the distributed control gain. If $\mathcal{E}_{CO}\subseteq\mathcal{E}_S$, then the learning graph is the same for multi-agent networks with coupled and decoupled dynamics.
\end{remark}
Let $\{M_i\}_{i=1}^N$ be solutions to (\ref{SDP grad}) for $i=1,...,N$. Also let $\hat{Q}_i=M_i\odot\tilde{Q}$ and $\hat{R}_i=\sum_{j\in\mathcal{N}_L^i}(e_j^{\top}\otimes I_m)R_j(e_j\otimes I_m)$. By collecting the parts of the entire cost function involving agents in $\mathcal{N}_L^i$, we define the local cost $J_i$ as
\begin{equation}\label{local cost}
\begin{split}
J_i(K)
&=\mathbf{E}\left[\sum_{t=0}^\infty\gamma^t (x^{\top}\hat{Q}_ix+x^{\top}K^{\top}\hat{R}_iKx)\right]\\
&=\mathbf{E}\left[\sum_{t=0}^{\infty}\gamma^t(x_{\mathcal{N}_L^i}^{\top}\bar{Q}_ix_{\mathcal{N}_L^i}+ u_{\mathcal{N}_L^i}^{\top}\bar{R}_iu_{\mathcal{N}_L^i})
\right],
\end{split}
\end{equation}
where $\bar{Q}_i$ is the maximum nonzero principal submatrix of $M_i\odot\tilde{Q}$ and $\bar{R}_i=\diag\{R_j\}_{j\in\mathcal{N}_L^i}$.
\begin{remark}
Each agent $i$ computes its local cost $J_i(x_{\mathcal{N}_L^i},u_{\mathcal{N}_L^i})$ based on $x_j$ and $u_j^{\top}R_ju_j$, $j\in\mathcal{N}_L^i$. In practical applications, agent $i$ may compute a finite-horizon cost value to approximate the infinite-horizon cost. The state information $x_j$ is sensed from its neighbors in the learning graph, and $u_j^{\top}R_ju_j$ is obtained from its neighbors via communication. Note that $u_{\mathcal{N}_L^i}$ may involve state information of agents in $\mathcal{V}\setminus\mathcal{N}_L^i$. However, it is not necessary for agent $i$ to have such state information; instead, it obtains $u_j^{\top}R_ju_j$ by communicating with agent $j\in\mathcal{N}_L^i\setminus\{i\}$.
\end{remark}
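The finite-horizon approximation mentioned in this remark is a single discounted sum over the observed neighborhood trajectories; a minimal sketch (the function name is ours):

```python
import numpy as np

def local_cost_estimate(x_traj, u_traj, Q_bar, R_bar, gamma):
    """Finite-horizon discounted approximation of the local cost J_i,
    computed from the observed neighborhood trajectories
    (x_traj[t], u_traj[t]) of the agents in N_L^i."""
    return sum(
        (gamma ** t) * (x @ Q_bar @ x + u @ R_bar @ u)
        for t, (x, u) in enumerate(zip(x_traj, u_traj))
    )
```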
Next we verify the validity of Assumption \ref{as local cost}. Let $H_i(K,x(0))=\sum_{t=0}^\infty\gamma^t x^{\top}(t)(\hat{Q}_i+K^{\top}\hat{R}_iK)x(t)$. The next proposition shows that the local cost functions we constructed satisfy Assumption \ref{as local cost}.
\begin{proposition}\label{pr LQR J_i}
The local cost $J_i(K)$ constructed in (\ref{local cost}) has the following properties.
(i). There exists a scalar $c>0$ such that $H_i(K,x(0))\leq cJ_i(K)$ for any $K\in\mathbb{K}_s$ and $x(0)\sim\mathcal{D}$.
(ii). $\nabla_{K_i}J_i(K)=\nabla_{K_i} J(K)$ for any $K\in\mathbb{K}_s$.
\end{proposition}
\subsection{Distributed RL Algorithms for Multi-Agent LQR}
Due to Proposition \ref{pr LQR J_i}, we are able to apply Algorithm \ref{alg:as} to the distributed multi-agent LQR problem. Let $\mathbf{K}_i=\text{vec}(\tilde{K}_i)\in\mathbb{R}^{q_i}$, where $\tilde{K}_i$ is defined in (\ref{u_i}), $q_i=mn_i=mn|\mathcal{N}_S^i|$, $i=1,...,N$. From the degree sum formula, we know $$\sum_{i=1}^Nq_i=mn\sum_{i=1}^N|\mathcal{N}_S^i|=2mn|\mathcal{E}_S|\triangleq q.$$
Define $\mathbf{K}=(\mathbf{K}_1^{\top},...,\mathbf{K}_N^{\top})^{\top}\in\mathbb{R}^{q}$. There exists a unique control gain matrix $K\in\mathbb{R}^{mN\times nN}$ corresponding to $\mathbf{K}$. Let $\mathcal{M}_K(\cdot): \mathbb{R}^q\rightarrow\mathbb{K}_d$ be the mapping transforming $\mathbf{K}$ to a distributed stabilizing gain.
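The mapping $\mathcal{M}_K$ is a structured reshaping of $\mathbf{K}$ into the sparsity pattern of $\mathbb{K}_d$. A sketch of one direction (the column-stacking vec convention and the sorted neighbor ordering are our assumptions):

```python
import numpy as np

def gain_from_vec(parts, N_S, m, n, N):
    """Sketch of M_K: assemble K in R^{mN x nN} from per-agent vectors
    K_i = vec(K~_i), scattering the m x n blocks of K~_i into the columns
    of K that correspond to agent i's sensed neighbors."""
    K = np.zeros((m * N, n * N))
    for i, (vec_i, nbrs) in enumerate(zip(parts, N_S)):
        K_tilde = vec_i.reshape(m, n * len(nbrs), order='F')  # undo vec
        for c, j in enumerate(sorted(nbrs)):
            K[m*i:m*(i+1), n*j:n*(j+1)] = K_tilde[:, n*c:n*(c+1)]
    return K
```

The identity $||\mathbf{K}||=||\mathcal{M}_K(\mathbf{K})||_F$ used later holds because the same entries are merely rearranged.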
To apply Algorithm \ref{alg:as} to the multi-agent LQR problem, we first implement Algorithm \ref{alg:clustering} based on the learning graph $\mathcal{G}_L$ to achieve a clustering $\{\mathcal{V}_j, j=1,...,s\}$. Then the asynchronous RL algorithm for problem (\ref{LQR no noise}) is given in Algorithm \ref{alg:lqr1}. Note that the algorithm requires a stabilizing distributed control gain as the initial policy. One way to obtain such a gain is to let each agent learn a policy stabilizing its own dynamics, which has been studied in \cite{Lamperski20}.
\begin{algorithm}[htbp]
\small
\caption{Asynchronous Distributed Learning for Multi-Agent LQR}\label{alg:lqr1}
\textbf{Input}: Step-size $\eta$, smoothing radius $r_i$ and variable dimension $q_i$, $i=1,...,N$, clusters $\mathcal{V}_j$, $j=1,...,s$, iteration numbers $T_K$ and $T_J$ for the controller variable and the cost, respectively, update sequence $z_k$ and extrapolation weight $w_i^k$, $k=0,...,T_K-1$, $\mathbf{K}^0$ such that $\mathcal{M}_K(\mathbf{K}^0)\in\mathbb{K}_s$.\\
\textbf{Output}: $\mathbf{K}^*$.
\begin{itemize}
\item[1.] \textbf{for} $k=0,1,...,T_K-1$ \textbf{do}
\item[2.] ~~~~~~~~Sample $x_0\sim\mathcal{D}$.
\item[3.]~~~~~~~~\textbf{for} $i\in\mathcal{V}$ \textbf{do} (Simultaneous Implementation)
\item[4.] ~~~~~~~~~~~~\textbf{if} $i\in\mathcal{V}_{z_k}$ \textbf{do}
\item[5.] ~~~~~~~~~~~~~~~~Set $x(0)=x_0$.
\item[6.] ~~~~~~~~~~~~~~~~Agent $i$ samples $\mathbf{D}_i(k)\in\mathbb{R}^{q_i}$ randomly from $\mathbb{S}_{q_i-1}$, and computes $\hat{\mathbf{K}}_i^k=\mathbf{K}_i^k+w_i^k(\mathbf{K}_i^k-\mathbf{K}_i^{k_{prev}})$.
\item[7.] ~~~~~~~~~~~~~~~~Agent $i$ implements its controller $$u_i(t,k)=-\text{vec}^{-1}(\hat{\mathbf{K}}_i^k+r_i\mathbf{D}_i(k))x_{\mathcal{N}_S^i}(t),$$
~~~~~~~~~~~~~~~~while each agent $j\notin \mathcal{V}_{z_k}$ implements
$$u_{j}(t,k)=-\text{vec}^{-1}(\mathbf{K}_{j}^k)x_{\mathcal{N}_S^{j}}(t)$$
~~~~~~~~~~~~~~~~for $t=0,...,T_J-1$, and observes
\begin{equation}
J_{i,T_J}(K^{i,k})=\sum_{t=0}^{T_J-1}\gamma^t\left[x_{\mathcal{N}_L^i}^{\top}(t)\bar{Q}_ix_{\mathcal{N}_L^i}(t)+ u_{\mathcal{N}_L^i}^{\top}(t,k)\bar{R}_iu_{\mathcal{N}_L^i}(t,k)\right].\label{eq:finite_Cost}
\end{equation}
\item[8.] ~~~~~~~~~~~~~~~~Agent $i$ computes the estimated gradient: $$\mathbf{G}_i^{T_J}(k)=\frac{q_i}{r_i}J_{i,T_J}(K^{i,k})\mathbf{D}_i(k),$$
~~~~~~~~~~~~~~~~then updates its policy:
\begin{equation}
\mathbf{K}_i^{k+1}=\hat{\mathbf{K}}_i^k-\eta \mathbf{G}_i^{T_J}(k).
\end{equation}
\item[9.]~~~~~~~~~~~~\textbf{else}
\begin{equation}
\mathbf{K}_i^{k+1}=\mathbf{K}_i^k.
\end{equation}
\item[10.] ~~~~~~~~~~~~\textbf{end}
\item[11.]~~~~~~~~\textbf{end}
\item[12.] \textbf{end}
\item[13.] Set $\mathbf{K}^*=\mathbf{K}^{k+1}$.
\end{itemize}
\end{algorithm}
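Steps 6-8 of Algorithm \ref{alg:lqr1} for a single updating agent can be sketched as follows, where `cost_oracle` is a placeholder for rolling out the perturbed policy and observing the finite-horizon local cost (\ref{eq:finite_Cost}):

```python
import numpy as np

def zoo_step(K_i, K_i_prev, w, r, eta, cost_oracle, rng):
    """One update of agent i: extrapolate, perturb on the sphere
    r * S_{q_i - 1}, observe the local cost, form the one-point gradient
    estimate G_i = (q_i / r) * J * D, and take a gradient step."""
    q = K_i.size
    D = rng.standard_normal(q)
    D /= np.linalg.norm(D)              # uniform sample on the unit sphere
    K_hat = K_i + w * (K_i - K_i_prev)  # extrapolation (acceleration)
    J = cost_oracle(K_hat + r * D)      # rollout of the perturbed policy
    G = (q / r) * J * D                 # one-point gradient estimate
    return K_hat - eta * G
```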
\subsection{Convergence Analysis}
In this subsection, we show the convergence result of Algorithm \ref{alg:lqr1}. Throughout this subsection, we adopt $J_i(K)$ in (\ref{local cost}) as the local cost function for agent $i$. Let $Q_i^K=\hat{Q}_i+K^{\top}\hat{R}_iK$, $J_i^{T_J}(K)=\mathbf{E}[\sum_{t=0}^{T_J-1}x^{\top}(t)Q_i^Kx(t)]$, $\mathbf{G}=(\mathbf{G}_1^{\top},...,\mathbf{G}_N^{\top})^{\top}$, $\mathbf{G}_i(k)=\frac{q_i}{r_i}J_i(K^{i,k})\mathbf{D}_i(k)$, $\mathbf{G}^{T_J}=((\mathbf{G}_1^{T_J})^{\top},...,(\mathbf{G}_N^{T_J})^{\top})^{\top}$, where $\mathbf{G}_i$ and $\mathbf{G}_i^{T_J}$ are the ideal and the actual estimates, respectively, of the gradient of the local cost function for agent $i$.
The essential difference between Algorithm \ref{alg:as} and Algorithm \ref{alg:lqr1} is that the computation of the cost function in an LQR problem is inaccurate because~\eqref{eq:finite_Cost} only approximates the infinite-horizon cost~\eqref{local cost}. This means that we need to take into account the estimation error of each local cost function. In this case, the ideal and actual gradient estimates for agent $i$ at step $k$ are $\mathbf{G}_i(k)=\frac{q_i}{r_i}J_i(K^{i,k})\mathbf{D}_i(k)$ and $\mathbf{G}_i^{T_J}(k)=\frac{q_i}{r_i}J_{i,T_J}(K^{i,k})\mathbf{D}_i(k)$, respectively, where $K^{i,k}=\mathcal{M}_K(\mathbf{K}^{i,k})$, $\mathbf{K}^{i,k}=(\mathbf{K}_1^{k\top},...,(\hat{\mathbf{K}}_i^k+r_i\mathbf{D}_i(k))^{\top},...,\mathbf{K}_N^{k\top})^{\top}$. Note that for any vector $\mathbf{K}\in\mathbb{R}^q$, it holds that $||\mathbf{K}||=||\mathcal{M}_K(\mathbf{K})||_F$.
We study the convergence by focusing on $K\in\mathbb{K}_\alpha$, where $\mathbb{K}_{\alpha}$ is defined as
\begin{equation}\label{X}
\mathbb{K}_\alpha=\{K\in\mathbb{K}_d: J(K)\leq\alpha J(K^0),J_i(K)\leq\alpha_iJ_i(K^0)\}\subseteq \mathbb{K}_s,
\end{equation}
where $\alpha,\alpha_i>1$, and $K^0\in\mathbb{K}_d\cap\mathbb{K}_s$ is the given initial stabilizing control gain.
Owing to the continuity of $J(K)$, the following parameters are well defined for $K\in\mathbb{K}_\alpha$:
\begin{equation}
\phi_0=\sup_{K\in\mathbb{K}_\alpha}\phi_K,~~\lambda_0=\sup_{K\in\mathbb{K}_\alpha}\lambda_K, ~~\rho_0=\inf_{K\in\mathbb{K}_\alpha}\{\beta_K,\zeta_K\}.
\end{equation}
The following lemma evaluates the gradient estimation in the LQR problem.
\begin{lemma}\label{le LQR g as}
Given $\epsilon'>0$ and $K^{k}\in\mathbb{K}_\alpha$, if $r_i\leq \frac{\rho_0}{2}$, $w_i^k\leq\frac{\rho_0}{2||K_i^k-K_i^{k_{prev}}||_F}$ and
\begin{equation}\label{T_J bound}
T_J\geq\max_{i\in\mathcal{V}}\frac{q_i(J_i(K^{i,k})+J_0(K^{i,k}))^2(||\hat{Q}_i+I_{nN}||+||\hat{R}_i||||K^{i,k}||^2)}{\epsilon'\gamma\lambda_{\min}(\Sigma_x)\sigma_{\min}^2(\hat{Q}_i+I_{nN})},
\end{equation}
where $J_0(K)=\mathbf{E}[\sum_{t=0}^{\infty}\gamma^tx^{\top}(t)x(t)]$, then
\begin{equation}\label{J error}
J_i(K^{i,k})-J_{i,T_J}(K^{i,k})\leq\epsilon', ~~~~i\in\mathcal{V},
\end{equation}
\begin{equation}\label{LQR g error}
||\mathbf{E}[\mathbf{G}_i^{T_J}(k)]-\nabla_{\mathbf{K}_i}J(K^{i,k})||\leq \frac{q_i\epsilon'}{r_i}+\phi_0r_i,
\end{equation}
\begin{equation}\label{LQR bound g}
||\mathbf{G}_i^{T_J}(k)||\leq \frac{q_i}{r_i}c\left[\alpha_i J_i(K^0)+\lambda_0\rho_0+\epsilon'\right],
\end{equation}
where $J_{i,T_J}$ is in (\ref{eq:finite_Cost}).
\end{lemma}
Let $J_0(K^0)=\max_{i\in\mathcal{V}}\alpha_i J_i(K^0)$. Based on Lemma \ref{le LQR g as} and Theorem \ref{th as}, we have the following convergence result for Algorithm \ref{alg:lqr1}.
\begin{theorem}\label{th as LQR}
Under Assumptions \ref{as Lips f}-\ref{as local cost} and \ref{as solvable}, let positive scalars $\epsilon$, $\nu$, $\gamma$, and $\alpha\geq 2+\gamma+\frac{1}{\nu}+\nu\gamma$ be given, and let $K^0\in\mathbb{K}_s\cap\mathbb{K}_d$. Let $\{\mathbf{K}^k\}_{k=0}^{T-1}$ be the sequence of gain vectors obtained by implementing Algorithm \ref{alg:lqr1} for $k=0,...,T-1$. Suppose that $T_J$ satisfies (\ref{T_J bound}) for each $k$ ($T_J$ can be different for different steps), and
\begin{equation}\label{lqrconditions}
\begin{split}
T_K=\lceil \frac{2\alpha \nu J(\mathbf{K}^0)}{\eta\epsilon}\rceil, ~~~\eta\leq\min\{\frac{\rho_0}{2\delta\sqrt{N_0}},\frac{2\alpha J(\mathbf{K}^0)}{\gamma\epsilon},\frac{\gamma\epsilon}{2\alpha N_0(\phi_0\delta_2^2+4\phi_0^2+4+\phi)}\},~~~ \epsilon'\leq\frac{r_-}{4q_+}\sqrt{\frac{\gamma\epsilon }{\alpha N_0}}, \\
w_i^k\leq\frac{1}{||\mathbf{K}_i^k-\mathbf{K}_i^{k_{prev}}||}\min\{\eta^{3/2},\frac{\rho_0}{2\sqrt{N_i}}\},~~~ r_i\leq \min\{\frac{\rho_0}{2},\frac{1}{4\phi_0}\sqrt{\frac{\gamma\epsilon}{\alpha N_0}}\}, i\in\mathcal{V},
\end{split}
\end{equation}
where $\delta=\frac{q_+}{r_-}c\left[J_0(K^0)+\lambda_0\rho_0+\epsilon'\right]$.
(i). The following holds with probability at least $1-\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma)$:
\begin{equation}
\frac{1}{T}\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{\mathbf{K}_i}J(\mathbf{K}^k)||^2< \epsilon.
\end{equation}
(ii). Under Assumption \ref{as cyclic}, suppose that there exist a step $\bar{T}\leq T-T_0$ and an error $\bar{\epsilon}\leq\rho_0$ such that $||\mathbf{K}^k-\bar{\mathbf{K}}||<\bar{\epsilon}$ for all $k\in[\bar{T},T-1]$. If the conditions in (\ref{lqrconditions}) hold with $\mathbf{K}^0$ replaced by $\mathbf{K}^{\bar{T}}$ in the conditions on $T_K$, $\eta$ and $r_i$, then the following holds with probability at least $1-\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma)$:
\begin{equation}
||\nabla_{\mathbf{K}}J(\bar{\mathbf{K}})||^2<\hat{\epsilon},
\end{equation}
where $\hat{\epsilon}=T_0(2\epsilon+2\phi_0^2\bar{\epsilon}^2)$.
\end{theorem}
From Theorem \ref{th as LQR} (ii), the sample complexity for convergence of Algorithm \ref{alg:lqr1} is $T_KT_J=\mathcal{O}(q_+^3N_0^2T_0^4/\hat{\epsilon}^4)$, which is higher than that of Algorithm \ref{alg:as} because of the error in evaluating each local cost function in the LQR problem. This sample complexity has the same order in the convergence accuracy as that in \cite{Li19}. Compared with \cite{Li19}, our algorithm has two main advantages: (i) the sample complexity in \cite{Li19} is affected by the convergence rate of the consensus algorithm, the number of agents, and the dimension of the entire state variable, whereas the sample complexity of our algorithm depends only on the local optimization problem of each agent (the number of agents in one cluster and the dimension of one agent's variable), rendering our algorithm highly scalable to large-scale networks; (ii) the algorithm in \cite{Li19} requires each agent to estimate the global cost at each iteration, whereas our algorithm relies only on local cost evaluation, which benefits variance reduction and privacy preservation.
\section{Simulation Experiments}\label{sec: sim}
\subsection{Optimal Tracking of Multi-Robot Formation}
In this section, we apply Algorithm \ref{alg:lqr1} to a multi-agent formation control problem. Consider a group of $N=10$ robots modeled by the following double integrator dynamics:
\begin{equation}\label{agent dynamics}
\begin{split}
r_i(t+1)&=r_i(t)+v_i(t),\\
v_i(t+1)&=v_i(t)+C_iu_i(t), ~~i=1,...,10,
\end{split}
\end{equation}
where $r_i, v_i, u_i\in\mathbb{R}^2$ are the position, velocity, and control input of agent $i$, respectively, and $C_i\in\mathbb{R}^{2\times2}$ is a coupling matrix in the dynamics of agent $i$. Let $x_i=(r_i^{\top},v_i^{\top})^{\top}$, $A_i=(\begin{smallmatrix}
I_2&I_2\\
\mathbf{0}_{2\times2}&I_2
\end{smallmatrix})$, and $B_i=(\mathbf{0}_{2\times2},C_i^{\top})^{\top}$; then the dynamics (\ref{agent dynamics}) can be rewritten as
\begin{equation}
x_i(t+1)=A_ix_i(t)+B_iu_i(t).
\end{equation}
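A quick sketch of the per-robot matrices $A_i$, $B_i$ and a one-step rollout of these dynamics:

```python
import numpy as np

def double_integrator(C_i):
    """Per-robot matrices A_i, B_i of the discrete-time double integrator
    with input coupling matrix C_i."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    A = np.block([[I2, I2], [Z2, I2]])   # positions integrate velocities
    B = np.vstack([Z2, C_i])             # input enters the velocity only
    return A, B
```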
The control objective is to make the robots learn their own optimal controllers so that the whole group forms a circular formation, tracks a moving target, keeps as close as possible to the circular formation during the tracking process, and consumes minimum control energy. The target has the following dynamics:
\begin{equation}
r_0(t+1)=r_0(t)+v_0,
\end{equation}
where the velocity $v_0\in\mathbb{R}^2$ is fixed. Let $x_0=(r_0^{\top},v_0^{\top})^{\top}$ be the state vector of the target, and $d_i(t)=x_0(t)+(\cos\theta_i,\sin\theta_i,0,0)^{\top}$ with $\theta_i=\frac{2\pi i}{N}$ be the desired time-varying state of robot $i$. Suppose that the initial state $x_i$ of each robot $i$ is a random variable with mean $d_i$, which implies that each agent is randomly perturbed from its desired state. Then the control objective can be written as
\begin{equation}
\min_{u} J=\mathbf{E}_{(x(0)-d)\sim\mathcal{D}}\left[\sum_{t=0}^\infty \left(\sum_{(i,j)\in\mathcal{E}_C}||x_i(t)-x_j(t)-(d_i-d_j)||^2+\sum_{i\in\mathcal{V}_r}||x_i-d_i||^2+\sum_{i=1}^N||u_i(t)||^2\right)\right],
\end{equation}
where $\mathcal{V}_r$ is the leader set consisting of robots that do not sense information from others but only chase their own target trajectories. In the formation control literature, the sensing graph via which each robot senses its neighbors' state information is usually required to have a spanning tree. Here we assume that the sensing graph has a spanning forest rooted at the leader set $\mathcal{V}_r$. Different from a pure formation control problem, the main goal of the LQR formulation is to optimize the transient relative positions between robots while minimizing the required control effort of each agent. Fig. \ref{fig formation graphs} shows the sensing graph, the cost graph and the resulting learning graph for the formation control problem. The leader set is $\{1,3,5,7,9\}$.
\begin{figure}
\centering
\includegraphics[width=12cm]{formationgraph}
\caption{The sensing graph $\mathcal{G}_S$, cost graph $\mathcal{G}_C$ and the resulting learning graph $\mathcal{G}_L$ for the formation control problem.} \label{fig formation graphs}
\end{figure}
Define the new state variable $y_i=x_i-d_i$ for each robot $i$. Using the property $A_id_i(t)=d_i(t+1)$, we obtain dynamics of $y_i$ as $y_i(t+1)=A_iy_i(t)+B_iu_i(t)$. The objective becomes
\begin{equation}
\min_{u} J=\mathbf{E}_{y(0)\sim\mathcal{D}}\left[\sum_{t=0}^\infty \gamma^t(y^{\top}(t)(L+\Lambda)y(t)+u^{\top}(t)u(t))\right],
\end{equation}
where $y=(y_1^{\top},...,y_N^{\top})^{\top}$, $L$ is the Laplacian matrix corresponding to the cost graph, $\Lambda$ is a diagonal matrix with $\Lambda_{ii}=1$ if $i\in\mathcal{V}_r$ and $\Lambda_{ii}=0$ otherwise.
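Reading the stacked quadratic form block-wise, the weight on $y$ acts as $(L+\Lambda)\otimes I_4$; the sketch below assembles it from the cost edges and the leader set (the Kronecker lift is our reading of the stacked notation):

```python
import numpy as np

def formation_weight(N, E_C, leaders, n=4):
    """(L + Lambda) kron I_n: L is the Laplacian of the undirected cost
    graph with unit edge weights; Lambda marks the leaders in V_r."""
    L = np.zeros((N, N))
    for i, j in E_C:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    Lam = np.diag([1.0 if i in leaders else 0.0 for i in range(N)])
    return np.kron(L + Lam, np.eye(n))
```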
Let $C_i=\frac{i}{i+1}I_2$. The initial stabilizing gain is given by $\mathbf{K}^0=\mathcal{M}_K^{-1}(K_0)$, where $K_0=I_N\otimes \tilde{K}$ and $\tilde{K}=(I_2,1.5I_2)$, which is stabilizing for each robot. By implementing Algorithm \ref{alg:clustering} based on the learning graph in Fig. \ref{fig formation graphs} (c), the following clustering is obtained:
\begin{equation}\label{formation cluster}
\mathcal{V}_1=\{1,4,6,8\}, \mathcal{V}_2=\{2,5,9\}, \mathcal{V}_3=\{3,7,10\},
\end{equation}
which means that three clusters update asynchronously, while the agents in each cluster update their variables via independent local cost evaluations at each step. It will be shown that, compared with the traditional BCD algorithm where only one block coordinate is updated at each step, our algorithm is more efficient.
\subsection{Simulation Results}
\begin{figure}
\centering
\includegraphics[width=18cm]{simulations}
\caption{(a) The group performance evolution of a 10-agent formation under the centralized ZOO algorithm, Algorithm \ref{alg:lqr1} without clustering and acceleration, without clustering but with acceleration, with clustering but without acceleration, and with both clustering and acceleration. The shaded areas denote the performance corresponding to the controllers obtained by perturbing the current control gain with 50 random samplings. (b) The trajectories of robots under the controller learned by Algorithm \ref{alg:lqr1} with $s=3$, $w_i^k=0.5$. (c) The performance evolution of a 100-agent formation under Algorithm \ref{alg:lqr1} with different parameters.} \label{fig simulations}
\end{figure}
We compare our algorithm with the centralized one-point ZOO algorithm in \cite{Fazel18}, where $\mathbf{K}$ is updated by taking an average of multiple repeated global cost evaluations. Fig. \ref{fig simulations} shows the simulation results, where the performance is evaluated with given initial states of all the robots. When implementing the algorithms, each component of the initial states is randomly generated from a truncated normal distribution $\mathcal{N}(0,1)$. Moreover, we set $\eta=10^{-6}$, $r=1$, $T_K=1000$, $T_J=50$ for both Algorithm \ref{alg:lqr1} and the centralized algorithm. In Fig. \ref{fig simulations}, we show the performance trajectory of the controller generated by each algorithm. When $s=10$, Algorithm \ref{alg:lqr1} is a BCD algorithm without clustering, i.e., each cluster only contains one robot, while $s=3$ corresponds to the clustering strategy in (\ref{formation cluster}). The extrapolation weight $w_i^k$ is set uniformly over all agents $i$ and steps $k$; thus $w_i^k=0$ implies that Algorithm \ref{alg:lqr1} is not accelerated. To compare the variance of the controller performance across algorithms, we conduct gradient estimation 50 times in each iteration and plot the range of performance induced by all the estimated gradients, as shown in the shaded areas of Fig. \ref{fig simulations} (a). Each solid trajectory in Fig. \ref{fig simulations} corresponds to a controller updated by using the average of the estimated gradients. Fig. \ref{fig simulations} (b) shows the trajectories of the agents under the convergent controller generated by Algorithm \ref{alg:lqr1} with $s=3$ and $w_i^k=0.5$, $i\in\mathcal{V}$, $k=0,...,T_K-1$. From the simulation results, we have the following observations:
\begin{itemize}
\item Algorithm \ref{alg:lqr1} always converges faster and has a lower performance variance than the centralized zeroth-order algorithm.
\item Algorithm \ref{alg:lqr1} with the cluster-wise update scheme converges faster than with the agent-wise update scheme. This is because the number of clusters dominates the sample complexity, as analyzed below Theorem \ref{th as LQR}.
\item Appropriate extrapolation weights not only accelerate Algorithm \ref{alg:lqr1}, but also decrease the performance variance.
\end{itemize}
Note that the performance of the optimal centralized controller is 345.271. The reason why Algorithm \ref{alg:lqr1} converges to a controller with a cost value larger than the optimal one in Fig. \ref{fig simulations} (a) is that the distributed LQR problem has a structure constraint for the control gain, which results in a feasible set $\mathbb{K}_s\cap\mathbb{K}_d$, whereas the feasible set of the centralized LQR problem is $\mathbb{K}_s$.
\textbf{Scalability to Large-Scale Networks.} Even if we further increase the number of robots, the local cost of each robot always involves only 5 robots as long as the structures of the sensing and cost graphs are maintained. That is, the magnitude of each local cost does not change as the network scale grows. In contrast, for global cost evaluation, the problem size severely influences the estimation variance. The difference between the variances of these two methods has been analyzed in Subsection \ref{subsec: as variance}. To show the advantage of our algorithm, we further deal with a case with 100 robots, where the cost, sensing and learning graphs have the same structure as in the 10-robot case, and implementing Algorithm \ref{alg:clustering} still results in 3 clusters. Simulation results show that the centralized algorithm fails to solve the problem. The performance trajectories of Algorithm \ref{alg:lqr1} with different settings are shown in Fig. \ref{fig simulations} (c), from which we observe that Algorithm \ref{alg:lqr1} performs well, and both clustering and extrapolation are efficient in improving the convergence rate. Note that the performance of the centralized optimal control gain for the 100-agent case is 4119.87.
\section{Conclusion}\label{sec: con}
We have proposed a novel distributed RL (zeroth-order accelerated BCD) algorithm with asynchronous sample and update schemes. The algorithm was applied to distributed learning of the model-free distributed LQR by designing the specific local cost functions and the interaction graph required by learning. A simulation example of formation control has shown that our algorithm has significantly lower variance and faster convergence rate compared with the centralized ZOO algorithm. In the future, we will look into how to reduce the sample complexity of the proposed distributed ZOO algorithm.
\section{Appendix}\label{sec: app}
\subsection{Analysis for the Asynchronous RL Algorithm}
{\it Proof of Lemma \ref{le fhat=Eg}: } It suffices to show that
\begin{equation}\label{estimate for gi}
\mathbf{E}_{u_i^l\in\mathbb{S}_{n-1}}[f_i(x_i+r_i u_i^l,x_{-i})u_i^l]=\frac{r_i}{q_i}\nabla_{x_i}\hat{f}_i(x),
\end{equation}
for $i=1,...,N$.
This has been proved in \cite[Lemma 1]{Flaxman04}.
\QEDA
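In one dimension the unit sphere $\mathbb{S}_0$ is just $\{-1,+1\}$, so the identity (\ref{estimate for gi}) can be checked exactly for a toy quadratic (our own example):

```python
def one_point_identity_1d(a, x, r):
    """For f(t) = a t^2 and q = 1, the sphere expectation is
    E_{u in {-1,+1}}[f(x + r u) u] = (f(x+r) - f(x-r)) / 2, and the
    smoothed function f_hat(x) = (1/(2r)) * integral of f over [x-r, x+r]
    equals a (x^2 + r^2 / 3), so the lemma predicts the value
    (r / q) * f_hat'(x) = r * 2 a x."""
    f = lambda t: a * t * t
    lhs = (f(x + r) - f(x - r)) / 2.0   # exact sphere expectation
    rhs = r * (2.0 * a * x)             # (r_i / q_i) * gradient of f_hat
    return lhs, rhs
```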
{\it Proof of Lemma \ref{le fhat error}:} Denote $v^i=(0,...,0,v_i^{\top},0,...,0)^{\top}\in\mathbb{R}^q$ with $v_i\in\mathbb{R}^{q_i}$. The following holds:
\begin{equation}
\begin{split}
&||\nabla_{x_i}\hat{f}_i(x)-\nabla_{x_i}f(x)||\\
&=||\nabla_{x_i}\mathbf{E}_{v_i\in\mathbb{B}_{q_i}}[f_i(x_i+r_i v_i,x_{-i})]-\nabla_{x_i}f(x)||\\
&=||\mathbf{E}_{v_i\in\mathbb{B}_{q_i}}\left[\nabla_{x_i}f_i(x_i+r_i v_i,x_{-i})-\nabla_{x_i}f(x)\right]||\\
&\leq \mathbf{E}_{v_i\in\mathbb{B}_{q_i}}||\nabla_{x_i}f_i(x_i+r_i v_i,x_{-i})-\nabla_{x_i}f(x)||\\
&=\mathbf{E}_{v^i\in\mathbb{B}_{q}}||\nabla_{x_i}f(x+r_i v^i)-\nabla_{x_i}f(x)||\\
&\leq \mathbf{E}_{v^i\in\mathbb{B}_{q}}||\nabla_{x}f(x+r_i v^i)-\nabla_{x}f(x)||\\
&\leq \phi_xr_i,
\end{split}
\end{equation}
where the second equality follows from the smoothness of $f_i(x)$, and the first inequality used Jensen's inequality.
\QEDA
Before proving Theorem \ref{th as}, we show that once $x^k$ is restricted to $\mathbb{X}$, we are able to give a uniform bound on the estimated gradient $g_i(\hat{x}_i,x_{-i}^k, u,\xi)$ at each step; see the following lemma.
\begin{lemma}\label{le E[g]}
If $r_i\leq \rho_0/2$ and $w_i^k\leq \frac{\rho_0}{2||x_i^k-x_i^{k_{prev}}||}$, then for any $x^k\in\mathbb{X}$, the estimated gradient satisfies
\begin{equation}
||g_i(\hat{x}_i^k,x_{-i}^k,u_i,\xi)||\leq\frac{q_i}{r_i}c\left[\alpha_i f_i(x^0)+\lambda_0\rho_0\right].
\end{equation}
\end{lemma}
{\it Proof of Lemma \ref{le E[g]}:} Using the definition of $g_i(x,u_i,\xi)$ in (\ref{gixuxi}), we have
\begin{equation}
\begin{split}
||g_i(\hat{x}_i^k,x_{-i}^k,u_i,\xi)||
&=\frac{q_i}{r_i}h_i(\hat{x}_i^k+r_iu_i,x_{-i}^k,\xi)\\
&\leq\frac{q_i}{r_i}cf_i(\hat{x}_i^k+r_iu_i,x_{-i}^k)\\
&\leq\frac{q_i}{r_i}c\left[f_i(x^k)+\lambda_0\rho_0\right]\\
&\leq\frac{q_i}{r_i}c\left[\alpha_i f_i(x^0)+\lambda_0\rho_0\right],
\end{split}
\end{equation}
where the first inequality used Assumption \ref{as local cost}, the second inequality used the local Lipschitz continuity of $f(x)$ over $\mathbb{X}$ because
\begin{equation}
||\hat{x}_i^k+r_iu_i-x_i^k||\leq||\hat{x}_i^k-x_i^k||+r_i=w_i^k||x_i^k-x_i^{k_{prev}}||+r_i\leq\rho_0,
\end{equation}
and the last inequality holds because $x^k\in\mathbb{X}$.
\QEDA
{\it Proof of Theorem \ref{th as}:} To facilitate the proof, we give some notations first. Let $\mathcal{K}_i^k=\{j: i\in\mathcal{V}_{z_j}, j\leq k\}$ be the set of iterations that $x_i$ is updated before step $k$, $d_i^k=|\mathcal{K}_i^k|$ be the number of times that $x_i$ has been updated until step $k$. Let $\tilde{x}_i^j$ be the value of $x_i$ after it is updated $j$ times, then $x_i^k=\tilde{x}^{d_i^k}_i$. We use $g_i(\hat{x}_i^k,x_{-i}^k)$ as the shorthand of $g_i(\hat{x}_i^k,x_{-i}^k,u,\xi)$. Let $\mathcal{F}_k$ denote the $\sigma$-field containing all the randomness in the first $k$ iterations. Moreover, we use the shorthand $\mathbf{E}^k[\cdot]=\mathbf{E}[\cdot|\mathcal{F}_k]$.
(i). Suppose $x^k\in\mathbb{X}$. Since $f(x)$ has a $(\phi_0,\rho_0)$ Lipschitz continuous gradient at $x^k$,
and
\begin{equation}
\begin{split}
||x^{k+1}-x^k||^2&=\sum_{i\in\mathcal{V}_{z_k}}||x_i^{k+1}-x_i^k||^2\\
&\leq \sum_{i\in\mathcal{V}_{z_k}}\left[w_i^k||x_i^k-x_i^{k_{prev}}||+\eta||g_i(\hat{x}_i^k,x_{-i}^k)||\right]^2\\
&\leq N_i(\frac{\rho_0}{2\sqrt{N_i}}+\frac{\rho_0}{2\sqrt{N_i}})^2 \leq\rho_0^2,
\end{split}
\end{equation}
it holds that
\begin{equation}\label{fxk+1-fxk}
\begin{split}
f(x^{k+1})-f(x^k)&\leq \sum_{i\in\mathcal{V}_{z_k}}\left[\langle\nabla_{x_i}f(x^k),x^{k+1}_i-x^k_i\rangle +\frac{\phi_0}{2}||x^{k+1}_i-x^k_i||^2\right]\\
&\leq\sum_{i\in\mathcal{V}_{z_k}}\left[\langle\nabla_{x_i} f_i(x^k),w_i^k(x_i^k-x_i^{k_{prev}})-\eta g_i(\hat{x}_i^k,x_{-i}^k)\rangle+\frac{\phi_0}{2}\eta^2||g_i(\hat{x}_i^k,x_{-i}^k)||^2+\frac{\phi_0}{2}(w_i^k)^2||x_i^k-x_i^{k_{prev}}||^2\right]\\
&=\sum_{i\in\mathcal{V}_{z_k}}\left[-\eta||\nabla_{x_i} f_i(x^k)||^2+\frac{\phi_0}{2}\eta^2||g_i(\hat{x}_i^k,x_{-i}^k)||^2+\eta\Delta_i^k+\Theta_i^k\right],
\end{split}
\end{equation}
where $\Delta_i^k=||\nabla_{x_i}f_i(x^k)||^2-\langle\nabla_{x_i} f_i(x^k),g_i(\hat{x}_i^k,x_{-i}^k)\rangle$, $\Theta_i^k=\langle\nabla_{x_i}f(x^k), w_i^k(x_i^k-x_i^{k_{prev}})\rangle+\frac{\phi_0}{2}(w_i^k)^2||x_i^k-x_i^{k_{prev}}||^2$.
Define the first iteration step at which the iterate escapes from $\mathbb{X}$:
\begin{equation}
\tau=\min\{k: x^k\notin\mathbb{X}\}.
\end{equation}
Next we analyze $\mathbf{E}^k\left[(f(x^k)-f(x^{k+1}))1_{\tau>k}\right]$. Under the condition $\tau>k$, both $||\nabla_{x_i} f_i(\hat{x}_i^k,x_{-i}^k)-\mathbf{E}^k[g_i(\hat{x}_i^k,x_{-i}^k)]||$ and $||g_i(\hat{x}_i^k,x_{-i}^k)||$ are uniformly bounded. For notational simplicity in the rest of the proof, according to Lemma \ref{le fhat error} and Lemma \ref{le E[g]}, we adopt $\theta=\phi_0\max_{i\in\mathcal{V}}r_i$ as the uniform upper bound of
$||\nabla_{x_i} f_i(\hat{x}_i^k,x_{-i}^k)-\mathbf{E}^k[g_i(\hat{x}_i^k,x_{-i}^k)]||$, and $\delta=\frac{q_+}{r_-}c\left[\alpha f_0(x^0)+\lambda_0\rho_0\right]$ as the uniform upper bound of $||g_i(\hat{x}_i^k,x_{-i}^k)||$.
According to Assumption \ref{as local cost}, $\nabla_{x_i}f_i(x)$ is $(\phi_0,\rho_0)$ locally Lipschitz continuous. Then we are able to bound $\mathbf{E}^k[\Delta_i^k]$ if $\tau>k$:
\begin{equation}\label{BoundDelta}
\begin{split}
\mathbf{E}^k[\Delta_i^k]&=\langle\nabla_{x_i} f_i(x^k),\nabla_{x_i} f_i(x^k)-\nabla_{x_i} f_i(\hat{x}_i^k,x_{-i}^k)\rangle+\langle\nabla_{x_i} f_i(x^k),\nabla_{x_i} f_i(\hat{x}_i^k,x_{-i}^k)-\mathbf{E}^k[g_i(\hat{x}_i^k,x_{-i}^k)]\rangle\\
&\leq ||\nabla_{x_i} f_i(x^k)||\phi_0||x_i^k-\hat{x}_i^k||+\theta ||\nabla_{x_i} f_i(x^k)||\\
&=w_i^k\phi_0||\nabla_{x_i} f_i(x^k)||||\tilde{x}_i^{d_i^k}-\tilde{x}_i^{d_i^k-1}||+\theta ||\nabla_{x_i} f_i(x^k)||\\
&\leq \frac{1}{8}||\nabla_{x_i}f_i(x^k)||^2+2(w_i^k)^2\phi_0^2||\tilde{x}_i^{d_i^k}-\tilde{x}_i^{d_i^k-1}||^2+\theta ||\nabla_{x_i} f_i(x^k)||\\
&\leq \frac{3}{8}||\nabla_{x_i}f_i(x^k)||^2+2\phi_0^2\eta^3+\theta^2,
\end{split}
\end{equation}
where the first inequality used $||x_i^k-\hat{x}_i^k||=||w_i^k(x_i^k-x_i^{k_{prev}})||\leq\rho_0$ and the Lipschitz continuity of $\nabla_{x_i}f_i(x_i^k,x_{-i}^k)$, the second equality used (\ref{xhati}), and the second and last inequalities used Young's inequality together with $w_i^k\leq\eta^{3/2}/||x_i^k-x_i^{k_{prev}}||$.
Similarly, we bound $\mathbf{E}^k[\Theta_i^k]$ for $\tau>k$ as follows.
\begin{equation}\label{BoundTheta}
\mathbf{E}^k[\Theta_i^k]\leq \frac18\eta||\nabla_{x_i}f_i(x^k)||^2+2\eta^2+\frac{\phi_0}{2}\eta^3,
\end{equation}
where we used $w_i^k\leq\frac{\eta^{3/2}}{||x_i^k-x_i^{k_{prev}}||}$.
Combining the inequalities (\ref{fxk+1-fxk}), (\ref{BoundDelta}) and (\ref{BoundTheta}), we have
\begin{equation}\label{fk+1-fk}
\begin{split}
&\mathbf{E}^k[(f(x^{k+1})-f(x^k))1_{\tau>k}]\leq-\frac12\sum_{i\in\mathcal{V}_{z_k}}\eta\mathbf{E}^k\left[||\nabla_{x_i}f(x^k)||^21_{\tau>k}\right]+\frac{\eta Z}{2},
\end{split}
\end{equation}
where $Z=N_0(\phi_0\delta^2\eta+4\phi_0^2\eta^3+2\theta^2+4\eta+\phi\eta^2)$.
Note that
\begin{equation}
\begin{split}
\mathbf{E}^k\left[f(x^{k+1})1_{\tau>k+1}\right]&\leq\mathbf{E}^k \left[f(x^{k+1})1_{\tau>k}\right]= \mathbf{E}^k\left[f(x^{k+1})\right] 1_{\tau>k},
\end{split}
\end{equation}
where the equality holds because all the randomness before the $(k+1)$th iteration has been accounted for in $\mathbf{E}^k[\cdot]$, so the indicator $1_{\tau>k}$ is already determined at step $k$.
It follows that
\begin{equation}\label{succ}
\mathbf{E}^k\left[(f(x^k)-f(x^{k+1}))1_{\tau>k}\right]
\leq\mathbf{E}^k\left[f(x^k)1_{\tau>k}-f(x^{k+1})1_{\tau>k+1}\right].
\end{equation}
Summing (\ref{fk+1-fk}) over $k$ from $0$ to $T-1$ and utilizing (\ref{succ}) yields
\begin{equation}
\begin{split}
\mathbf{E}[f(x^{T})1_{\tau>T}]-f(x^0)\leq-\frac12\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}\eta\mathbf{E}\left[||\nabla_{x_i}f(x^k)||^2\right]+T\eta Z/2.
\end{split}
\end{equation}
It follows that
\begin{equation}\label{gradnorm}
\frac1T\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}\mathbf{E}[||\nabla_{x_i}f(x^k)||^2]\leq \frac2{\eta T}f(x^0)+ Z.
\end{equation}
Now we analyze the probability $\mathbf{P}(\tau< T)$. Define the process
\begin{equation}
Y(k)=f(x^{\min\{k,\tau\}})+\frac{\eta}{2}(T-k)Z, ~~k=0,...,T-1,
\end{equation}
which is non-negative and almost surely bounded under the given conditions. Next we show that $Y(k)$ is a supermartingale by considering the following two cases.
Case 1, $\tau>k$.
\begin{equation}
\begin{split}
\mathbf{E}^k[Y(k+1)]&=\mathbf{E}^k[f(x^{k+1})]+\frac{\eta}{2}(T-k-1)Z\\
= f(x^k)+&\mathbf{E}^k[f(x^{k+1})-f(x^k)]+\frac{\eta}{2}(T-k-1)Z\\
\leq f(x^k)+&\frac{\eta}{2}(T-k)Z=Y(k),
\end{split}
\end{equation}
where the inequality used (\ref{fk+1-fk}).
Case 2, $\tau\leq k$.
\begin{equation}
\mathbf{E}^k[Y(k+1)]=f(x^\tau)+\frac{\eta}{2}(T-k-1)Z\leq f(x^\tau)+\frac{\eta}{2}(T-k)Z= Y(k).
\end{equation}
Therefore, $Y(k)$ is a supermartingale. Invoking Doob's maximal inequality for non-negative supermartingales yields
\begin{equation}
\begin{split}
\mathbf{P}(\tau< T) &\leq\mathbf{P}(\max_{k=0,...,T-1}f(x^k)>\alpha f(x^0))\\&\leq\mathbf{P}(\max_{k=0,...,T-1}Y(k)>\alpha f(x^0))\\
&\leq\frac{\mathbf{E}[Y(0)]}{\alpha f(x^0)}=\frac{f(x^0)+\eta TZ/2}{\alpha f(x^0)}=\frac{2+\nu\gamma}{\alpha}.
\end{split}
\end{equation}
where the last equality used $\eta\leq\frac{2\alpha f(x^0)}{\gamma\epsilon}$, $T\leq\frac{2\alpha f(x^0)}{\epsilon\eta}\nu+1$, and $Z\leq \gamma\epsilon/\alpha$. The upper bound on $Z$ is obtained by noting that our conditions on $r_i$ and $\eta$ ensure that $2\theta^2\leq\frac{\gamma\epsilon}{2\alpha N_0}$ and $\phi_0\delta^2\eta+4\phi_0^2\eta^3+4\eta+\phi\eta^2\leq(\phi_0\delta^2+4\phi_0^2+4+\phi)\eta\leq \frac{\gamma\epsilon}{2\alpha N_0}$ (we assume $\eta\leq1$ by default).
It follows that
\begin{equation}
\begin{split}
\mathbf{P}(\frac{1}{T}\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(x^k)||^2\geq\epsilon)
&\leq\mathbf{P}(\frac{1}{T}\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(x^k)||^21_{\tau\geq T}\geq\epsilon)+\mathbf{P}(\tau< T)\\
&\leq\frac{1}{\epsilon}\mathbf{E}\left[\frac{1}{T}\sum_{k=0}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(x^k)||^21_{\tau\geq T}\right]+\mathbf{P}(\tau< T)\\
&\leq \frac{1}{\alpha}(\gamma+\frac{1}{\nu})+\frac{2+\nu\gamma}{\alpha}=\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma),
\end{split}
\end{equation}
where the last inequality used (\ref{gradnorm}) and the conditions on $\eta$ and $Z$.
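The combination of the supermartingale property and Doob's maximal inequality used in the argument above can be illustrated on a toy non-negative supermartingale (a product of i.i.d. factors with mean less than one; all parameters are illustrative):

```python
import random

def doob_check(n_paths=20000, n_steps=30, a=2.0, seed=1):
    """Empirical frequency of {max_k Y(k) > a} for the non-negative
    supermartingale Y(k) = product of i.i.d. Uniform(0, 1.9) factors, Y(0) = 1.
    Doob's maximal inequality bounds the true probability by E[Y(0)]/a = 1/a."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_paths):
        y = peak = 1.0
        for _ in range(n_steps):
            y *= rng.uniform(0.0, 1.9)   # multiplicative factor with mean 0.95 < 1
            peak = max(peak, y)
        exceed += peak > a
    return exceed / n_paths

p_hat = doob_check()   # empirical frequency; the Doob bound here is 1/a = 0.5
```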
(ii). For any $k\geq \bar{T}$ with $x^k\in\mathbb{X}$, the smoothness of $f(x)$ at $x^k$ implies that
\begin{equation}
||\nabla_{x}f(\bar{x})-\nabla_{x}f(x^k)||\leq\phi_0\bar{\epsilon}.
\end{equation}
It follows that
\begin{equation}
\begin{split}
\frac{1}{T-\bar{T}}||\nabla_xf(\bar{x})||^2&=\frac{1}{T-\bar{T}}\sum_{j=1}^s\sum_{i\in\mathcal{V}_j}||\nabla_{x_i}f(\bar{x})||^2\\
&\leq\frac{1}{T-\bar{T}}\sum_{k=\bar{T}}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(\bar{x})||^2\\
&\leq \frac{1}{T-\bar{T}}\sum_{k=\bar{T}}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}\left(2||\nabla_{x_i}f(x^k)||^2+ 2||\nabla_{x_i}f(x^k)-\nabla_{x_i}f(\bar{x})||^2\right)\\
&\leq \frac{1}{T-\bar{T}}\sum_{k=\bar{T}}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}2||\nabla_{x_i}f(x^k)||^2+2\phi_0^2\bar{\epsilon}^2.
\end{split}
\end{equation}
Summing (\ref{fk+1-fk}) over $k$ from $\bar{T}$ to $T-1$ and following similar lines to the proof of statement (i), we obtain
\begin{equation}
\mathbf{P}(\frac{1}{T-\bar{T}}\sum_{k=\bar{T}}^{T-1}\sum_{i\in\mathcal{V}_{z_k}}||\nabla_{x_i}f(x^k)||^2\geq\epsilon)\leq\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma).
\end{equation}
Hence, with probability at least $1-\frac{1}{\alpha}(2+\gamma+\frac{1}{\nu}+\nu\gamma)$, the following holds:
\begin{equation}
\frac{1}{T-\bar{T}}||\nabla_{x}f(\bar{x})||^2< 2\epsilon+2\phi_0^2\bar{\epsilon}^2.
\end{equation}
Let $\hat{T}=T-T_0$; then $\hat{T}$ is a feasible choice of $\bar{T}$. Setting $\bar{T}=\hat{T}$ completes the proof.
\QEDA
{\it Proof of Lemma \ref{le cov g as g}:} For notational simplicity, when the expectation is taken over all random variables in a formula, we write it simply as $\mathbf{E}$. Moreover, we use $\nabla_i$ as shorthand for $\nabla_{x_i}$.
During the derivation we use the first-order Taylor expansion (valid for sufficiently small $r$), i.e.,
\begin{equation}
h_i(x_i+ru_i,x_{-i},\xi) = h_i(x_i,x_{-i}, \xi) + \nabla_i h_i(x_i,x_{-i}, \xi)ru_i+\mathcal{O}(r^2).
\end{equation}
For $g_l$, it holds that
\begin{equation}
\begin{split}
\mathbf{E}[g_lg_l^{\top}]&=\mathbf{E}[\frac{q_i}{r}h_i(x_i+ru_i,x_{-i},\xi)u_iu_i^{\top}\frac{q_i}{r}h_i(x_i+ru_i,x_{-i},\xi)] \\
&=\frac{q_i^2}{r^2}\mathbf{E}[h^2_i(x_i+ru_i,x_{-i},\xi)u_iu_i^{\top}]\\
&= \frac{q_i^2}{r^2} \mathbf{E}\left[(h_i(x, \xi) + ru^{\top}_i\nabla_i h_i(x, \xi)+\mathcal{O}(r^2))^2u_iu_i^{\top} \right]\\
&=\frac{q_i^2}{r^2}\mathbf{E}_{\xi}[h_i^2(x, \xi) +\mathcal{O}(r^2)]\mathbf{E}_{u_i}[u_iu_i^{\top}]\quad \\
&= \frac{q_i}{r^2}\mathbf{E}[h_i^2(x, \xi)+\mathcal{O}(r^2)]I_{q_i} ,
\end{split}
\end{equation}
where the fourth equality holds because the odd (third-order) moments of $u_i$ vanish by symmetry, and the last equality used $\mathbf{E}[u_iu_i^{\top}]=\frac{1}{q_i}I_{q_i}$ since $u_i\sim\text{Uni}(\mathbb{S}_{q_i-1})$.
On the other hand,
\begin{equation}
\mathbf{E}[\frac{q_i}{r}h_i(x_i+ru_i,x_{-i},\xi)u_i] = \frac{q_i}{r}\mathbf{E}\left[(h_i(x, \xi) + ru_i^{\top}\nabla_{x_i} h_i(x,\xi)+\mathcal{O}(r^2))u_i\right]= \mathbf{E}[\nabla_{x_i}h_i(x,\xi)]+\mathcal{O}(r).
\end{equation}
As a result, $\mathrm{Cov}(g_l)$ is in (\ref{eq:cov1}).
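The mean computation above can be sanity-checked by Monte Carlo, using a hypothetical quadratic function in place of $h_i$ (for a quadratic, the $\mathcal{O}(r)$ bias term vanishes exactly, so the sample mean of the one-point estimator should match the gradient; all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x0 = np.array([1.0, -1.0])                  # for h(x) = x^T A x / 2, grad h(x0) = A x0
q, r, n = 2, 0.05, 2_000_000

u = rng.standard_normal((n, q))
u /= np.linalg.norm(u, axis=1, keepdims=True)          # u ~ Uni(S^{q-1})
pts = x0 + r * u
h_vals = 0.5 * np.einsum('ni,ij,nj->n', pts, A, pts)   # h(x0 + r u) for each sample
g = (q / r) * h_vals[:, None] * u                      # one-point estimator samples
g_mean = g.mean(axis=0)                                # approximates grad h(x0)
```

Note that the per-sample variance scales like $q^2h^2/r^2$, consistent with the covariance computed above, which is why a large number of samples is needed for a small $r$.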
Similarly,
\begin{align}
\mathbf{E}[g_g]& = \mathbf{E}[\frac{q}{r}h(x+rz,\xi)z_i]\\
&= \frac{q}{r}\mathbf{E}[h(x,\xi)z_i + rz^{\top}\nabla_{x} h(x,\xi)z_i+\mathcal{O}(r^2)]\\
&=q\mathbf{E}[z_iz^{\top}] \mathbf{E}[\nabla_xh(x,\xi)]+\mathcal{O}(r)\\ &=\left[\mathbf{0}_{q_i\times q_1}~,\cdots,I_{q_i},\cdots,\mathbf{0}_{q_i\times q_N}\right] \mathbf{E}[\nabla_xh(x,\xi)]+\mathcal{O}(r)\\
&=\mathbf{E}[\nabla_{x_i}h(x,\xi)]+\mathcal{O}(r),
\end{align}
where the fourth equality used $\mathbf{E}[zz^{\top}]=\frac1qI_q$ since $z\sim\text{Uni}(\mathbb{S}_{q-1})$. Also
\begin{align}
\mathbf{E}[g_gg_g^{\top}] &= \frac{q^2}{r^2}\mathbf{E}\left[h(x+rz,\xi)z_iz_i^{\top}h(x+rz,\xi)\right]\\
&=\frac{q^2}{r^2}\mathbf{E}\left[(h(x,\xi)+rz^{\top}\nabla_{x} h(x,\xi)+\mathcal{O}(r^2))^2z_iz_i^{\top}\right] \\ &=\frac{q}{r^2}\mathbf{E}[h^2(x,\xi)+\mathcal{O}(r^2)]I_{q_i}.
\end{align}
Thus, $\mathrm{Cov}(g_g)$ is in (\ref{eq:cov2}).
\QEDA
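The block-selection identity $q\,\mathbf{E}[z_iz^{\top}]=\left[\mathbf{0}_{q_i\times q_1},\cdots,I_{q_i},\cdots,\mathbf{0}_{q_i\times q_N}\right]$ used in the proof above can be verified by Monte Carlo sampling; a sketch with two illustrative blocks:

```python
import numpy as np

rng = np.random.default_rng(1)
q1, q2 = 2, 3                     # illustrative block sizes
q = q1 + q2
n = 500_000

z = rng.standard_normal((n, q))
z /= np.linalg.norm(z, axis=1, keepdims=True)           # z ~ Uni(S^{q-1})
z_i = z[:, :q1]                                         # block belonging to agent 1
M = q * np.einsum('na,nb->ab', z_i, z) / n              # Monte Carlo q E[z_i z^T]
selector = np.hstack([np.eye(q1), np.zeros((q1, q2))])  # [I, 0] block selector
```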
\subsection{Analysis for Application to Multi-Agent LQR}
{\it Proof of Proposition \ref{pr LQR J_i}:} (i). Due to the boundedness of $x(0)$, there must exist $c>0$ such that for any $x(0)\sim\mathcal{D}$, it holds that $x(0)x^{\top}(0)\preceq c\mathbf{E}[x(0)x^{\top}(0)]=c\Sigma_x$. Let $Q_{i,K}^\infty=\sum_{t=0}^\infty\gamma^t(\mathcal{A}-\mathcal{B}K)^{t\top}(\hat{Q}_i+K^{\top}\hat{R}_iK)(\mathcal{A}-\mathcal{B}K)^{t}$. It follows that
\begin{equation}
H_i(K,x(0))=\langle Q_{i,K}^\infty,x(0)x^{\top}(0)\rangle\leq c\langle Q_{i,K}^\infty,\Sigma_x\rangle=c\mathbf{E}_{x(0)\sim\mathcal{D}}[H_i(K,x(0))]=cJ_i(K).
\end{equation}
(ii). Consider two control gains $$K=(K_1^{\top},...,K_i^{\top},...,K_N^{\top})^{\top},$$ $$K'=(K_1^{\top},...,{K'_i}^{\top},...,K_N^{\top})^{\top}.$$ Given the initial state $x(0)$, let $x(t)$ and $x'(t)$ be the system state trajectories resulting from implementing the controllers $u=-Kx$ and $u=-K'x$, respectively. It holds for any $j\in\mathcal{V}$ that
\begin{equation}
x_j(t+1)=A_jx_j(t)-B_jK_jx(t)=A_jx_j(t)-B_j\tilde{K}_jx_{\mathcal{N}_S^j}(t).
\end{equation}
Note that for all $j\in\mathcal{V}\setminus\mathcal{N}_L^i$, it must hold that $\mathcal{N}_C^j\cap\mathcal{V}_S^i=\varnothing$. From the definition of $\mathcal{V}_S^i$, we have $x_j(t)=x'_j(t)$ and $x_{\mathcal{N}_S^j}(t)=x'_{\mathcal{N}_S^j}(t)$ for all $j\in\mathcal{V}\setminus\mathcal{N}_L^i$. It follows that $x^{\top}(Q-\hat{Q}_i)x=x'^{\top}(Q-\hat{Q}_i)x'$, and for any $j\in\mathcal{V}\setminus\mathcal{N}_L^i$, we have
$$x^{\top}K_j^{\top}R_jK_jx=x_{\mathcal{N}_S^j}^{\top}\tilde{K}_j^{\top}R_j\tilde{K}_jx_{\mathcal{N}_S^j}= x^{'\top}_{\mathcal{N}_S^j}\tilde{K}_j^{\top}R_j\tilde{K}_jx'_{\mathcal{N}_S^j}.$$
Therefore,
\begin{equation}
\begin{split}
&J(K)-J(K')\\&=\mathbf{E}\left[\sum_{t=0}^{\infty}x^{\top}(Q+K^{\top}RK)x\right]-\mathbf{E}\left[\sum_{t=0}^{\infty}{x'}^{\top}(Q+{K'}^{\top}RK')x'\right]\\
&=\mathbf{E}\left[\sum_{t=0}^{\infty}x^{\top}(\hat{Q}_i+\sum_{j\in\mathcal{N}_L^i}K_j^{\top}R_jK_j)x\right]-\mathbf{E}\left[\sum_{t=0}^{\infty}{x'}^{\top}(\hat{Q}_i+\sum_{j\in\mathcal{N}_L^i}{K'}_j^{\top}R_jK'_j)x'\right]\\
&=J_i(K)-J_i(K').
\end{split}
\end{equation}
This means that any perturbation of $K_i$ results in the same change in $J(K)$ and $J_i(K)$. The proof is completed.
\QEDA
{\it Proof of Lemma \ref{le LQR g as}:} For notational simplicity, we use $K$ to denote $K^{i,k}$. Let $\hat{Q}'_i=\hat{Q}_i+I_{nN}$. We consider the following problem for each agent $i$:
\begin{equation}
\begin{split}
\min_{K^{i,k}}~~~~&J'_i(K^{i,k})=\mathbf{E}\left[\sum_{t=0}^\infty\gamma^t x^{\top}(t)(\hat{Q}'_i+K^{i,k\top}\hat{R}_iK^{i,k})x(t)\right]=J_i(K^{i,k})+J_0(K^{i,k})\\
s.t.&~~~~x(t+1)=(\mathcal{A}-\mathcal{B}K^{i,k})x(t), ~~x(0)\sim\mathcal{D},
\end{split}
\end{equation}
where $J_0(K^{i,k})=\mathbf{E}[\sum_{t=0}^{\infty}\gamma^tx^{\top}(t)x(t)]$.
Let $y(t)=\gamma^{t/2}x(t)$. Then the discounted LQR problem is equivalent to
\begin{equation}
\begin{split}
\min_{K^{i,k}}~~~~&J'_i(K^{i,k})=\mathbf{E}\left[\sum_{t=0}^\infty y^{\top}(t)(\hat{Q}'_i+K^{i,k\top}\hat{R}_iK^{i,k})y(t)\right]\\
s.t.&~~~~y(t+1)=\sqrt{\gamma}(\mathcal{A}-\mathcal{B}K^{i,k})y(t),~~ y(0)=x(0), ~~x(0)\sim\mathcal{D},
\end{split}
\end{equation}
where $y(0)=x(0)$ has zero mean and covariance matrix $\Sigma_x$. Let $J_{i,T_J}^{'}(K^{i,k})=\mathbf{E}\left[\sum_{t=0}^{T_J-1}\gamma^t x^{\top}(t)(\hat{Q}'_i+K^{i,k\top}\hat{R}_iK^{i,k})x(t)\right]$ and $J_{0,T_J}=\mathbf{E}[\sum_{t=0}^{T_J-1}\gamma^tx^{\top}(t)x(t)]$. By \cite[Lemma 26]{Fazel18}, under the inequality (\ref{J error}), we have $J'_i(K^{i,k})-J_{i,T_J}'(K^{i,k})\leq\epsilon'$. Note that
\begin{equation}
J'_i(K^{i,k})-J_{i,T_J}'(K^{i,k})=J_i(K^{i,k})-J_{i,T_J}(K^{i,k})+J_0(K^{i,k})-J_{0,T_J}(K^{i,k}).
\end{equation}
Together with the fact $(J_0(K^{i,k})-J_{0,T_J}(K^{i,k}))>0$, we have
\begin{equation}
\begin{split}
J_i(K^{i,k})-J_{i,T_J}(K^{i,k}) &= J'_i(K^{i,k})-J'_{i,T_J}(K^{i,k}) - (J_0(K^{i,k})-J_{0,T_J}(K^{i,k}))\\
&\leq \epsilon' - \left(J_0(K^{i,k})-J_{0,T_J}(K^{i,k})\right)\leq \epsilon'.
\end{split}
\end{equation}
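The discount-absorbing substitution used above (equivalently, scaling the closed-loop dynamics by $\sqrt{\gamma}$, so that $y(t)=\gamma^{t/2}x(t)$) can be checked numerically on a toy closed-loop system; the matrices below are illustrative and unrelated to the actual LQR instance:

```python
import numpy as np

gamma = 0.9
Acl = np.array([[0.8, 0.1], [0.0, 0.7]])   # a stable closed-loop matrix A - B K
Q = np.eye(2)
x0 = np.array([1.0, 2.0])
T = 200

# discounted cost: sum_t gamma^t x(t)^T Q x(t), with x(t+1) = Acl x(t)
x, J_disc = x0.copy(), 0.0
for t in range(T):
    J_disc += gamma**t * x @ Q @ x
    x = Acl @ x

# undiscounted cost of the sqrt(gamma)-scaled system: y(t+1) = sqrt(gamma) Acl y(t)
y, J_scaled = x0.copy(), 0.0
for t in range(T):
    J_scaled += y @ Q @ y
    y = np.sqrt(gamma) * Acl @ y
```

The two truncated sums agree term by term, since $y^{\top}(t)Qy(t)=\gamma^t x^{\top}(t)Qx(t)$.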
Next we prove (\ref{LQR g error}). For any $i\in\mathcal{V}$, we have
\begin{equation}
\begin{split}
||\mathbf{E}[\mathbf{G}_i(k)-\mathbf{G}_i^{T_J}(k)]||&\leq|\frac{q_i}{r_i}(J_i(K^{i,k})-J_{i,T_J}(K^{i,k}))| \cdot\max_{\mathbf{D}_i\in\mathbb{S}_{q_i-1}}||\mathbf{D}_i||\\
&\leq \frac{q_i}{r_i}\epsilon'.
\end{split}
\end{equation}
According to Lemma \ref{le fhat error}, when $r_i\leq\rho_0\leq\beta_K$, we have
\begin{equation}
||\mathbf{E}[\mathbf{G}_i(k)]-\nabla_{\mathbf{K}_i}J(K^{i,k})||\leq \phi_0r_i.
\end{equation}
Using the triangle inequality yields (\ref{LQR g error}).
Next we bound $||\mathbf{G}^{T_J}_i(k)||$:
\begin{equation}
\begin{split}
||\mathbf{G}_i^{T_J}(k)||&\leq\frac{q_i}{r_i}c(J_{i}(K^{i,k})+\epsilon')\\
&\leq \frac{q_i}{r_i}c\left[J_i(K^{k})+\lambda_0\rho_0+\epsilon'\right]
\\
&\leq \frac{q_i}{r_i}c\left[\alpha_i J_i(K^0)+\lambda_0\rho_0+\epsilon'\right],
\end{split}
\end{equation}
where the first inequality used the first statement of Proposition \ref{pr LQR J_i}, the second inequality used $$||\mathbf{K}^{i,k}-\mathbf{K}^k||\leq||\hat{\mathbf{K}}_i^k-\mathbf{K}_i^k||+r_i=w_i||\mathbf{K}_i^k-\mathbf{K}_i^{k_{prev}}||+r_i\leq \rho_0,$$ and the third inequality used $J_i(K^k)\leq \alpha_i J_i(K^0)$.
\QEDA
arXiv:1511.06698
\section{Introduction}
\label{intro_Bugaev}
The hadron resonance gas model (HRGM) \cite{Andronic:05} is traditionally used to extract the parameters of chemical freeze-out (CFO) from the measured hadronic yields. Its version with the multicomponent hard-core repulsion
\cite{Oliinychenko:12,HRGM:13,SFO:13}
allowed one for the first time to successfully describe the most problematic ratios $K^+/\pi^+$ with $\chi^2/dof = 3.9/14$ and
$\Lambda/\pi^-$ with $\chi^2/dof = 10.2/12$ without spoiling all other hadron yield ratios \cite{Sagun,Sagun2}.
Fig.~\ref{fig_Bugaev_Horn} demonstrates the present fit quality of these traditionally problematic ratios.
The achieved high quality $\chi^2/dof \simeq 0.95$ \cite{Sagun,Sagun2} of data description of 111 independent hadron yield ratios measured at midrapidity in central nucleus-nucleus collisions at the center of mass energies $\sqrt s_{NN}=2.7, 3.1, 3.6, 4.3, 4.9, 6.3, 7.6, 8.8, 9.2, 12.3, 17.3, 62.4, 130, 200$ GeV
proves that the multicomponent version of the HRGM is a precise and sensitive tool of heavy ion collision phenomenology.
Using the multicomponent version of HRGM it was possible to reveal a few novel irregularities at CFO.
The most remarkable of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, total pion number per baryon, and thermal pion number per baryon which were found at the center of mass energies 3.6-4.9 GeV and 7.6-10 GeV \cite{Bugaev:SA1} and the sharp peak of the trace anomaly found at the center of mass energy 4.9 GeV \cite{Bugaev:SA2}. The low energy set of quasi-plateaus was predicted a long time ago \cite{KAB:89a,KAB:90,KAB:91} as a signal of the anomalous thermodynamic properties inside the quark-gluon-hadron mixed phase. Unfortunately, the generalized shock-adiabat model cannot be safely applied to the central nuclear collisions
at $\sqrt s_{NN} \ge 7.6$ GeV \cite{KAB:89a}. Therefore, in order to correctly interpret the high energy quasi-plateaus here we
use the results of meta-analysis \cite{Metaanalisys} of the quality of data description (QDD) of 10 existing event generators of nucleus-nucleus collisions along with the thorough analysis of irregularities in the existing experimental hadron yield ratios.
The work is organized as follows. In Sect. 2 we recall the basic elements of the HRGM with multicomponent hard-core repulsion. A brief description of the meta-analysis suggested in \cite{Metaanalisys} is presented in Sect. 3 along with a discussion of existing hadron multiplicity data which help to shed light on the problem of the formation of two quark-gluon-hadron mixed phases. In Sect. 4 our conclusions are formulated.
\begin{figure}[h]
\centering
\mbox{\includegraphics[width=70mm,clip]{Bugaev_horn.jpg}\hspace*{0.2mm}\includegraphics[width=70mm,clip]{Bugaev_La_pi_min.jpg}}
\caption{Collision energy dependence of $K^+/\pi^+$ and $\Lambda/\pi^-$ hadron yield ratios which traditionally were the most problematic ones. }
\label{fig_Bugaev_Horn}
\end{figure}
\section{ HRGM with multicomponent hard-core repulsion}
\label{sect2_Bugaev}
The HRGM is based on the assumption of local thermal and chemical equilibrium at CFO.
Hence the hadron yields produced in the collisions of large atomic nuclei can be found using
the grand canonical variables, i.e. the temperature $T$ and the baryonic $\mu_B$, strange $\mu_S$ and third isospin projection $\mu_{I3}$ chemical potentials. As usual, the chemical potential $\mu_S$ is fixed by the condition of zero total strange charge. A possible deviation of the strange charge from full chemical equilibrium is taken into account by the parameter $\gamma_s$ \cite{Rafelski}.
It changes the thermal density $\varphi_j$ of hadron sort $j$ as
$\varphi_j\rightarrow\gamma_s^{S_j}\varphi_j$,
where $S_j$ is the total number of strange valence quarks and antiquarks in such a hadron.
The main difference of the present version of HRGM from the ones developed by other authors is that in our HRGM
several sorts of hadrons have individual hard-core radii. Thus, it employs
different hard-core radii for pions, $R_{\pi}$, kaons, $R_K$, $\Lambda$-hyperons, $R_\Lambda$,
other mesons, $R_m$, and baryons, $R_b$.
The best global fit of 111 independent hadronic multiplicities measured
in the whole collision energy range from $\sqrt{s_{NN}} =2.7$
GeV to $\sqrt{s_{NN}} = 200$ GeV
was found for $R_b$ = 0.355 fm, $R_m$ = 0.4 fm, $R_{\pi}$ = 0.1 fm, $R_K$ = 0.37 fm
and $R_\Lambda = 0.11$ fm with the quality $\chi^2/dof \simeq 0.95$ \cite{Sagun,Sagun2}.
The second virial coefficient between the hadrons of hard-core radii $R_i$ and $R_j$ is defined as $b_{ij}=\frac{2\pi}{3}(R_i+R_j)^3$.
Taking from the thermodynamic code THERMUS \cite{Thermus} such characteristics of hadrons of sort $i$
as the spin-isospin degeneracy $g_i$, the mass $m_i$ and the width $\Gamma_i$, one can find the set of partial pressures
$p_i$ for each hadronic
component ($p=\sum_i p_i$ is the total pressure) from the system
\begin{equation}
\label{EqI}
p_i=T\varphi_i\exp\left[\frac{\mu_i-2\sum_j p_jb_{ji}+\sum_{jl}p_j b_{jl}p_l/p}{T}\right] \,.
\end{equation}
Here $\mu_i=Q^B_i\mu_B+Q^{I3}_i\mu_{I3}+Q^S_i\mu_S$ is the full chemical potential of the $i$-th hadronic sort expressed via its charges $\{Q^A_i\}$ and the corresponding chemical potentials $\{\mu_A\}$ (with $A \in \{B, I3, S \}$).
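The coupled transcendental system above can be solved by a simple fixed-point iteration. A toy two-component sketch (the thermal densities, chemical potentials and hard-core radii below are illustrative dimensionless inputs, not the fitted HRGM values):

```python
import numpy as np

T = 1.0                                # temperature (dimensionless toy value)
phi = np.array([0.05, 0.03])           # thermal densities phi_i (illustrative)
mu = np.array([0.2, 0.0])              # full chemical potentials mu_i (illustrative)
R = np.array([0.40, 0.35])             # hard-core radii (illustrative)
b = (2.0 * np.pi / 3.0) * (R[:, None] + R[None, :]) ** 3   # b_ij = 2pi/3 (R_i+R_j)^3

p = phi.copy()                         # initial guess for the partial pressures
for _ in range(200):                   # fixed-point iteration on the system above
    ptot = p.sum()
    expo = (mu - 2.0 * (p @ b) + (p @ b @ p) / ptot) / T
    p = T * phi * np.exp(expo)
```

For dilute inputs such as these the map is a contraction and the iteration converges rapidly to the self-consistent partial pressures.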
In the Boltzmann approximation the thermal density of the $i$-th hadron sort reads
\begin{equation}
\label{EqII}
\varphi_i=\gamma_s^{S_i}g_i \hspace*{0.0cm}\int\limits_{M_i}^\infty \hspace*{0.cm}dm \, f(m,m_i,\Gamma_i) \hspace*{-0.cm}
\int \hspace*{-0.cm}
\frac{k^2 d k}{2\pi^2} \hspace*{0.1cm}e^{- \frac{\sqrt{m^2+{k}^2} }{T} } \,,
\end{equation}
where
$M_i$ is the threshold of the dominant decay channel of the $i$-th hadron sort and $f$ is the normalized Breit-Wigner mass attenuation function.
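For a zero-width resonance the momentum integral above has the closed form $\int_0^\infty \frac{k^2dk}{2\pi^2}\,e^{-\sqrt{m^2+k^2}/T}=\frac{m^2T}{2\pi^2}K_2(m/T)$, which offers a quick numerical cross-check (pion-like mass and an illustrative temperature):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

T = 0.150    # GeV, illustrative CFO temperature
m = 0.140    # GeV, pion-like mass
g = 3        # illustrative spin-isospin degeneracy

# momentum integral of the Boltzmann factor (upper limit 20 GeV is effectively infinity)
integrand = lambda k: k**2 / (2.0 * np.pi**2) * np.exp(-np.sqrt(m**2 + k**2) / T)
phi_numeric, _ = quad(integrand, 0.0, 20.0)
phi_closed = m**2 * T / (2.0 * np.pi**2) * kv(2, m / T)   # m^2 T K_2(m/T) / (2 pi^2)
phi = g * phi_numeric                                     # zero-width thermal density
```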
Thermal multiplicities $N_i^{th}=V\frac{\partial p}{\partial\mu_i}$ (where $V$ is the effective volume at CFO) should be
corrected for hadron decays after the CFO according to the branching ratios $Br_{l\rightarrow i}$, which define
the probability of particle $l$ decaying into particle $i$. Hence the ratio of full multiplicities can be written as
\begin{equation}
\label{EqIII}
\frac{N^{tot}_i}{N^{tot}_j}=
\frac{p_i+\sum_{l\neq i}p_lBr_{l\rightarrow i}}{p_j+\sum_{l\neq j}p_lBr_{l\rightarrow j}}\,.
\end{equation}
With the help of (\ref{EqIII}) we obtained the high quality fit of experimental hadron ratios measured at AGS for energies $\sqrt s_{NN}=2.7, 3.1, 3.6, 4.3, 4.9$ GeV \cite{AGS1,AGS2,AGS2b,AGS_p2,AGS3,AGS4,AGS5,AGS6,AGS7,AGS8}, at SPS energies $\sqrt s_{NN}=6.3, 7.6, 8.8, 12.3, 17.3$ GeV measured by the NA49 Collaboration \cite{SPS1,SPS2,SPS3,SPS4,SPS5,SPS6,SPS7,SPS8,SPS9} and at
RHIC energies $\sqrt s_{NN}=9.2, 62.4, 130, 200$ GeV measured by the STAR Collaboration \cite{RHIC}.
As described in \cite{SFO:13,HRGM:13}, from these data we constructed 111 independent ratios measured at 14 values of collision energies. The most important results are shown in Figs.~\ref{fig_Bugaev_Horn} and \ref{fig_Bugaev_plateaus}.
The left panel of Fig.~\ref{fig_Bugaev_plateaus} shows
the highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon $s/\rho_B$, total pion number per baryon $\rho_{\pi}^{tot}/\rho_B$, and thermal pion number per baryon $\rho_{\pi}^{th}/\rho_B$ at laboratory energies 6.9--11.6 GeV (i.e. $\sqrt{s_{NN}} = 3.6-4.9$ GeV) which were found in \cite{Bugaev:SA1}. As one can see from the left panel of Fig.~\ref{fig_Bugaev_plateaus}, a clear plateau is demonstrated by the thermal pion number per baryon
while the other quantities show quasi-plateaus. Nevertheless, all these quasi-plateaus are important, since their strong
correlation with the plateau in the thermal pion number per baryon
allows one to determine their common width in collision energy \cite{Bugaev:SA1,Bugaev:SA2}.
\begin{figure}[h]
\centering
\mbox{\includegraphics[width=80mm,clip]{Bugaev_Plateaus.jpg}\hspace*{-4.4mm}\includegraphics[width=65mm,clip]{Bugaev_TraceFO.jpg}}
\caption{Left: The correlated quasi-plateaus at CFO found in \cite{Bugaev:SA1} (see details in the text).
Right: Trace anomaly as function of collision energy at CFO established in \cite{Bugaev:SA2}. }
\label{fig_Bugaev_plateaus}
\end{figure}
Note that these low energy quasi-plateaus were predicted about 25 years ago \cite{KAB:89a,KAB:90,KAB:91}
as a manifestation of the anomalous thermodynamic properties of quark-gluon-hadron mixed phase.
In contrast to media with normal thermodynamic properties, in a medium with anomalous ones the adiabatic compressibility of matter increases with increasing pressure. In normal media (pure gaseous or liquid phases) the repulsion between the constituents at short distances leads to the opposite behavior of the adiabatic compressibility. Therefore, the appearance
of these quasi-plateaus is a signal of quark-gluon-hadron mixed phase formation \cite{KAB:89a,KAB:90,KAB:91}. Such a conclusion is strongly supported by the existence of the sharp peak of the trace anomaly $\delta =
\frac{\varepsilon - 3p}{T^4}$ (here $\varepsilon$ is energy density) at $\sqrt{s_{NN}} = 4.9$ GeV \cite{Bugaev:SA2}
(see the right panel of Fig.~\ref{fig_Bugaev_plateaus}). This peak is important, since in lattice QCD
an inflection point or a maximum of the trace anomaly is used to determine the pseudo-critical temperature of the cross-over transition \cite{lQCD}. One may think that the sharp peak of $\delta$ at CFO is exclusively generated by the peak of baryonic density, which in our HRGM also occurs at $\sqrt{s_{NN}} = 4.9$ GeV. However, the real situation is more complicated. Writing the trace anomaly as
\begin{eqnarray}\label{EqIV}
\delta = \frac{\varepsilon - 3p}{T^4} = \frac{Ts + \mu_B \rho_B + \mu_{I3} \rho_{I3} - 4 p}{T^4} \simeq \frac{s}{T^3} \left( 1 + \frac{\mu_B}{T} \frac{\rho_B}{s} \right) - 4 \frac{ p}{T^4} \,,
\end{eqnarray}
where in the last step the small contribution $\mu_{I3} \rho_{I3}$ related to the charge of the third isospin projection is neglected. From (\ref{EqIV}) one can easily conclude that the strong increase of $\delta$ over the collision energy
interval $\sqrt{s_{NN}} = [4.3; 4.9]$ GeV is provided by a strong jump of the effective number of degrees of freedom
${s}/{T^3}$ on this interval \cite{Bugaev:SA2,NonsmFO}. Note that despite the existence of a baryon density peak on this interval of collision energy, the entropy per baryon ${s}/{\rho_B}$ is constant on it as one can see from the left panel of Fig.~\ref{fig_Bugaev_plateaus}. Now it is evident that without a strong jump of the effective number of degrees of freedom
${s}/{T^3}$ the sharp peak of trace anomaly at CFO would not exist. At higher collision energies the trace anomaly $\delta$ decreases mainly because the ratio $\mu_B/T$ strongly decreases, while the inverse entropy per baryon decreases slowly.
It is important to mention that the sharp peak of $\delta$ is seen if the finite width of all hadronic resonances is included in the HRGM \cite{NonsmFO}, while for the HRGM with zero width of hadron resonances such a peak is washed out.
The physical origin of the trace anomaly sharp peak (and, hence, of a strong jump of the effective number of degrees of freedom ${s}/{T^3}$) at CFO found at $\sqrt{s_{NN}} = 4.9$ GeV is rooted in the trace anomaly peak existing at the shock adiabat \cite{Bugaev:SA2}.
The shock adiabat model reasonably well describes the hydrodynamic and thermodynamic quantities of the initial state formed in the central nucleus-nucleus collisions in the laboratory energy range $1$ GeV $ \le E_{lab} \le $ 30 GeV \cite{Bugaev:SA2}, while at higher collision energies, i.e. for $\sqrt{s_{NN}} \ge 7.6$ GeV, it can be used for qualitative estimates.
In \cite{Bugaev:SA2} it was found that the peak of $\delta$ at the shock adiabat appears at the collision energy corresponding exactly to the boundary between the quark gluon plasma (QGP) and quark-gluon-hadron mixed phase and, therefore, the trace anomaly sharp peak at CFO is a signal of QGP formation.
In this respect it is interesting that
in the right panel of Fig.~\ref{fig_Bugaev_plateaus} one can see a second peak of the trace anomaly located at $\sqrt{s_{NN}} = 9.2$ GeV. Although the second peak of $\delta$ is less pronounced than the first one, it is also associated with the high energy set of quasi-plateaus
shown in the left panel of Fig.~\ref{fig_Bugaev_plateaus} in the collision energy interval $E_{lab} = [30; 44]$ GeV
($\sqrt{s_{NN}} = [7.6; 9.2]$ GeV). Therefore, future experiments at RHIC, NICA and FAIR will have to find out whether the high energy peak of the trace anomaly has any physical meaning.
\section{Meta-analysis of quality of data description}
The main objects of the meta-analysis suggested in \cite{Metaanalisys} are the mean squared deviation of a quantity $A^{model,h}$ of model M from the data $A^{data,h}$ per number of data points $n_d$ for a particle type $h$,
%
\begin{eqnarray}\label{EqV}
%
\langle\chi^2/n\rangle^h_A\biggl|_{M} = \frac{1}{n_d} \sum\limits_{k=1}^{n_d} \left[ \frac{A^{data,h}_k -A^{model,h}_k }{\delta A_k^{data,h}}\right]^2 \biggl|_{M} \,,
\end{eqnarray}
and its error which is defined according to the rule of indirect measurements \cite{Taylor82} as
\begin{eqnarray}\label{EqVI}
%
\Delta_{A}\langle\chi^2/n\rangle^h_A\biggl|_{M} &\equiv & \left[ \sum\limits_{k=1}^{n_d} \left[\delta A_k^{data,h} \, \frac{\partial \langle\chi^2/n\rangle^h_A\biggl|_{M}}{\partial ~A^{data,h}_k} \right]^2 \right]^\frac{1}{2}
\equiv \frac{2}{\sqrt{n_d}} \sqrt{\langle\chi^2/n\rangle^h_A\biggl|_{M}} \,,~
\end{eqnarray}
where $\delta A_k^{data,h}$ is the experimental error of the quantity $A^{data,h}_k$ and the summation in Eqs. (\ref{EqV}) and (\ref{EqVI}) runs over all $n_d$ data points at a given collision energy.
For convenience the quantity defined in (\ref{EqV}) is called the quality of data description (QDD).
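The closed form of the error in (\ref{EqVI}) follows from the indirect-measurement (error propagation) rule; the collapse of the propagated sum to $\frac{2}{\sqrt{n_d}}\sqrt{\langle\chi^2/n\rangle}$ can be checked numerically on arbitrary illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_d = 12                                      # illustrative number of data points
data = rng.uniform(1.0, 2.0, n_d)             # A^data (arbitrary values)
model = data + rng.normal(0.0, 0.2, n_d)      # A^model (arbitrary values)
err = rng.uniform(0.05, 0.15, n_d)            # experimental errors delta A^data

chi2n = np.mean(((data - model) / err) ** 2)  # the QDD of Eq. (5)
# propagate: sum_k [ err_k * d<chi2/n>/d(A^data_k) ]^2,
# with d<chi2/n>/d(A^data_k) = (2/n_d)(A^data_k - A^model_k)/err_k^2
delta_sum = np.sqrt(np.sum((err * (2.0 / n_d) * (data - model) / err**2) ** 2))
delta_closed = 2.0 / np.sqrt(n_d) * np.sqrt(chi2n)
```

The two expressions agree exactly, since the propagated sum equals $\frac{4}{n_d^2}\sum_k\chi^2_k=\frac{4}{n_d}\langle\chi^2/n\rangle$.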
To get the most complete picture of the dynamics of nuclear collisions, one has to compare the available data on
the transverse mass ($m_T$) distributions $A= \frac{1}{m_T}\,\frac{d^2 N(m_T, y)}{d m_T dy}$, the longitudinal rapidity ($y$) distributions $A=\frac{d N(y)}{dy}$ and the hadronic yields (Y) measured at midrapidity $A=\frac{d N(y=0)}{dy}$ and/or the total ones, i.e. measured within the 4$\pi$ solid angle,
since precisely these observables are traditionally believed to be sensitive to the equation of state properties \cite{Rafelski2,Shuryak}.
The QDD of strange hadrons was found for two types of models \cite{Metaanalisys}:
%
\begin{itemize}
\item {\bf The hadron gas (HG) models} are as follows: ARC \cite{Arc}, RQMD2.1(2.3) \cite{RQMD}, HSD \cite{HSDgen,HSDgen2,S5.4a}, UrQMD1.3(2.0, 2.1, 2.3) \cite{Bass98}, statistical hadronization model (SHM) \cite{SHMgen} and AGSHIJET\_N* \cite{HIJETa,S5.4b}. These models do not include the QGP formation in the process of A+A collisions.
\item {\bf The QGP models} are as follows: Quark Combination (QuarkComb) model \cite{QComb}, 3-fluid dynamics (3FD) model \cite{3FDgenA,3FDgen,3FDII,3FDIII}, PHSD model \cite{PHSDgen,PHSDgen2,Phsd} and Core-Corona model \cite{CoreCorMa,CoreCorMb}.
These generators explicitly assume the QGP formation in A+A collisions.
\end{itemize}
{A short description of these models along with the criteria of their selection can be found in the Appendix of \cite{Metaanalisys}.}
The main idea of the meta-analysis \cite{Metaanalisys} is based on the assumption that the HG models of heavy ion collisions should provide a worse description of the data above the QGP threshold energy, whereas below this threshold they should reproduce the experimental data better than (or at least as well as) the QGP models.
Furthermore, it is assumed that both kinds of models should provide an equally good QDD at the
energy of mixed phase production. Hence, the energy of mixed phase formation should be located below the energy
at which the equal QDD of the two model types changes into an essentially worse QDD of the HG models.
\begin{figure}[h]
\centering
\includegraphics[width=140mm,clip]{Bugaev_Chi2_AA_new.jpg}
\caption{Comparison of $\langle\chi^2/n\rangle^{\overline{\{h\}}}_{\overline{\{A\}}}\biggl|_{\overline{HG}}$
(black symbols and dashed curve) and
$\langle\chi^2/n\rangle^{\overline{\{h\}}}_{\overline{\{A\}}}\biggl|_{\overline{QGP}}$ (red symbols and solid curve) as functions of collision energy obtained for the arithmetic averaging. The symbols of different hadrons which correspond to the same collision energy are slightly spread around the energy value for better perception.
The symbols are connected by lines to guide the eye. The numbers following the short name of a model indicate the version(s) used in the meta-analysis. The short-dashed ovals indicate the regions of possible mixed phase formation.}
\label{fig_Bugaev_Chi}
\end{figure}
Based on these assumptions the experimental data measured at the collision energies $\sqrt{s_{NN}}\,=$ 3.1, 3.6, 4.2, 4.9, 5.4, 6.3, 7.6, 8.8, 12.3 and 17.3 GeV were analyzed in \cite{Metaanalisys}. The collision energies $\sqrt{s_{NN}}\,\le $ 4.9 GeV correspond to Au+Au reactions studied at AGS. At $\sqrt{s_{NN}}\,=$ 5.4 GeV the reactions Pb+Si, Si+Si and
Si+Al were also investigated at AGS, while higher collision energies correspond to Pb+Pb reactions studied at SPS.
Using the definitions (\ref{EqV}) and (\ref{EqVI}), at each collision energy the descriptions of the transverse mass distributions, the longitudinal rapidity distributions and the hadronic yields (midrapidity and/or total) obtained by a given model were arithmetically averaged for each kind of analyzed strange hadron. The QDDs and their errors obtained for each energy and for each hadron were then arithmetically averaged over the models belonging to the same type.
Then the QDDs and their errors found in this way for the models of the same type were arithmetically averaged
for each hadron and antihadron, if available, in order to reduce the number of data for comparison.
Finally, the resulting QDDs and their errors of the same type of model were arithmetically averaged over all hadronic species. More details can be found in \cite{Metaanalisys}. The averaged QDDs
of HG models $\langle\chi^2/n\rangle^{\overline{\{h\}}}_{\overline{\{A\}}}\biggl|_{\overline{HG}}$ and the ones of QGP models $\langle\chi^2/n\rangle^{\overline{\{h\}}}_{\overline{\{A\}}}\biggl|_{\overline{QGP}}$ were found in this way together with their errors. The results are shown in Fig.~\ref{fig_Bugaev_Chi}.
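For concreteness, the order of these nested arithmetic averagings can be sketched as follows; the numbers are invented placeholders, and the actual values and error treatment are given in \cite{Metaanalisys}:

```python
# Illustrative sketch of the nested arithmetic averaging of the quality of
# data description (QDD), i.e. chi^2/n, used in the meta-analysis.  The
# numbers below are invented placeholders, not values from the cited work.

def average(values):
    """Plain arithmetic mean."""
    return sum(values) / len(values)

# chi2/n per observable (m_T spectra, rapidity spectra, yields), one entry
# per model, grouped as {hadron: {model: [chi2/n per observable]}}.
hg_models = {
    "K+":     {"modelA": [1.9, 2.4, 1.7], "modelB": [2.2, 2.8, 2.0]},
    "Lambda": {"modelA": [1.5, 1.8, 1.6], "modelB": [1.7, 2.1, 1.9]},
}

# Step 1: average over observables for each model and hadron.
# Step 2: average over the models of the same type (here: HG).
# Step 3: average over hadronic species.
per_hadron = {
    h: average([average(obs) for obs in models.values()])
    for h, models in hg_models.items()
}
qdd_hg = average(list(per_hadron.values()))
print(f"<chi2/n> averaged over HG models and hadrons: {qdd_hg:.3f}")
```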
From Fig.~\ref{fig_Bugaev_Chi} one can see that the meta-analysis of work
\cite{Metaanalisys} leads to an independent conclusion that the mixed phase exists at the same collision energy range
$\sqrt{s_{NN}} = [4.3; 4.9]$ GeV
which was originally found in \cite{Bugaev:SA1,Bugaev:SA2}.
This result is important not only to validate the entire framework of shock adiabat model used in \cite{Bugaev:SA1,Bugaev:SA2}, but also to justify the jump of the effective number of degrees of freedom $s/T^3$ at CFO and a sharp peak of the trace anomaly $\delta$ at CFO as reliable signals of QGP formation.
In addition, the meta-analysis of QDD \cite{Metaanalisys} predicts that the most probable collision energy range of the second phase transition is $\sqrt{s_{NN}} =10-13.5$ GeV. Thus, the meta-analysis of QDD supports an interpretation of the
high energy set of correlated quasi-plateaus as an indicator of a phase transition, although it shifts this transition to slightly higher collision energies.
Unfortunately, at present it is impossible to distinguish between two possible explanations of this phenomenon.
The first possible explanation is that with increasing collision energy the initial states of thermally equilibrated matter formed in nucleus-nucleus collisions move first from the hadron gas into the mixed phase, then from the mixed phase to QGP and then again they return to the same mixed phase, but at higher initial temperature and lower
baryonic density.
Such a scenario corresponds to the case in which the QCD phase diagram has a critical endpoint \cite{Metaanalisys}. An alternative explanation \cite{Metaanalisys} corresponds to the QCD phase diagram with a tricritical endpoint. In such a case the second phase transition is the second order phase transition of (partial) chiral symmetry restoration or a transition between quarkyonic matter and QGP \cite{QYON}. It is necessary to stress that, despite the lack of a single interpretation of the second phase transition possibly existing at $\sqrt{s_{NN}} =10-13.5$ GeV, there are strong arguments \cite{Metaanalisys} that the (tri)critical endpoint of the QCD phase diagram may be located at $\sqrt{s_{NN}} =12-14$ GeV.
\begin{figure}[h]
\centering
\mbox{\includegraphics[width=75mm,clip]{Bugaev_Lambda_p.jpg}\hspace*{-4.2mm}\includegraphics[width=72mm,clip]{Bugaev_K+_Lambda.jpg}}
\caption{The collision energy dependence of the $\Lambda/p$ (left) and $K^+/\Lambda$ (right) ratios obtained within the present HRGM. The lines are given to guide the eye. More explanations are given in the text. }
\label{fig_Bugaev_Lamd}
\end{figure}
The combined conclusions obtained from inspecting the two sets of correlated quasi-plateaus at CFO, the two peaks of the trace anomaly at CFO, and
the ones found by the meta-analysis led us to a thorough analysis of the $\Lambda/p$ and $K^+/\Lambda$ ratios.
From the left panel of Fig.~\ref{fig_Bugaev_Lamd} one can see that there are three regimes in the
energy dependence of the $\Lambda/p$
ratio: at $\sqrt{s_{NN}}=4.3$ GeV the slope of this ratio clearly increases, while above
$\sqrt{s_{NN}}=8.8$ GeV it nearly saturates. The sudden increase of $\Lambda/p$ slope
at $\sqrt{s_{NN}}=4.3$ GeV can be
naturally explained by the idea of work \cite{Rafelski:82} that the mixed phase formation can be
identified by a rapid increase in the number of strange quarks per light quark.
Evidently, the $\Lambda/p$ ratio is a convenient indicator because at low collision energies $\Lambda$ hyperons
are generated in collisions of nucleons. Moreover, such a ratio does not depend on baryonic chemical potential, since both the protons and $\Lambda$ hyperons have the same baryonic charge.
As seen from the left panel of Fig.~\ref{fig_Bugaev_Lamd}, this mechanism works up to $\sqrt{s_{NN}}=4.3$ GeV,
while the appearance of the mixed phase should lead to an increase
of the number of strange quarks and antiquarks due to the annihilation of light quark-antiquark and gluon pairs.
Clearly, this simple picture fits well with the prediction that the mixed phase can be reached at $\sqrt{s_{NN}}=4.3$ GeV,
while QGP is formed at $\sqrt{s_{NN}} \ge 4.9$ GeV. The dramatic decrease of the slope of the experimental $\Lambda/p$
ratio at $\sqrt{s_{NN}} > 8.8$ GeV seen in Fig.~\ref{fig_Bugaev_Lamd} can be evidence for the second
phase transformation discussed above.
It is appropriate to say
a few words about the experimental data of hadron yields shown in both panels of Fig.~\ref{fig_Bugaev_Lamd}.
For the AGS collision energies $\sqrt{s_{NN}}=$ 2.7, 3.1, and 4.3 GeV the yields of
protons and kaons were taken from Refs.\ \cite{AGS8} and \cite{AGS_p2}, respectively, whereas for $\Lambda$ hyperons they were taken from Ref.\ \cite{AGS3}.
Experimental yields measured at the highest AGS energy $\sqrt{s_{NN}}=$
4.9 GeV for protons and kaons were taken from Refs.\ \cite{AGS2,AGS8}, while
for $\Lambda$ they were given in Ref.\ \cite{AGS7}.
The mid-rapidity yields of protons, kaons and lambdas
measured at the SPS energies $\sqrt{s_{NN}}=$
6.3, 7.6, 8.8, 12.3, and 17.3 GeV are provided by the NA49 collaboration in Refs.\
\cite{SPS2,SPS3,SPS5,SPS6,SPS7}.
For comparison, in Fig.~\ref{fig_Bugaev_Lamd} we also show the value, with large error bars, found from
two other ratios, $\Lambda/\pi^-$ and $p/\pi^-$, for $\sqrt{s_{NN}}=3.6$ GeV.
It is interesting that the energy dependence of the $K^+/\Lambda$ ratio shows a change of slopes at the same energies
as the $\Lambda/p$ ratio. This can be seen from the right panel of Fig.~\ref{fig_Bugaev_Lamd}. Note that in the dominant hadronic reactions the positive
kaons and $\Lambda$ hyperons are born simultaneously. Since both of these hadrons carry strange charge, the logic of
work \cite{Rafelski:82} is inapplicable to their ratio. Therefore, in contrast to the increase of the slope of the $\Lambda/p$ ratio on the interval $\sqrt{s_{NN}}= [4.3; 8.8]$ GeV, the $K^+/\Lambda$ ratio has a flattening of the slope on this collision energy interval. In the leading order this ratio is defined via the kaon mass $m_K$, the $\Lambda$ mass $m_\Lambda$ and two chemical potentials as $K^+/\Lambda \simeq \sqrt{ \left[ \frac{m_K}{m_\Lambda}\right]^3}\exp\left[ \frac{m_\Lambda - m_K + 2\mu_S - \mu_B}{T} \right]$. Therefore, a small slope of this ratio on the interval $\sqrt{s_{NN}}= [4.3; 8.8]$ GeV indicates a cancellation between the energy dependences of the strange and baryonic chemical potentials, i.e. $m_\Lambda - m_K + 2\mu_S - \mu_B \simeq \mathrm{const}$ for $\sqrt{s_{NN}}= [4.3; 8.8]$ GeV. The increase of the $K^+/\Lambda$ ratio at higher collision energies can be mainly explained by the fast decrease of the baryonic chemical potential.
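Evaluating this leading-order expression with illustrative (not fitted) thermal parameters shows the scale of the ratio; the temperature and chemical potential values below are placeholders chosen only for demonstration:

```python
import math

# Leading-order Boltzmann estimate of the K+/Lambda ratio quoted in the text:
#   K+/Lambda ~ sqrt((m_K/m_Lambda)^3) * exp[(m_Lambda - m_K + 2*mu_S - mu_B)/T]
# The thermal parameters below are illustrative placeholders, not fit results.
m_K, m_Lambda = 493.7, 1115.7          # masses in MeV
T, mu_B, mu_S = 150.0, 400.0, 100.0    # temperature and chemical potentials, MeV

ratio = math.sqrt((m_K / m_Lambda) ** 3) * math.exp(
    (m_Lambda - m_K + 2.0 * mu_S - mu_B) / T
)
print(f"K+/Lambda (leading order) ~ {ratio:.2f}")
```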
\begin{figure}[h]
\centering
\sidecaption
\includegraphics[width=84mm,clip]{Bugaev_DLam_Dp.jpg}
\caption{Most recent predictions for the collision energy dependence of the $\frac{\Delta \Lambda}{\Delta\, p}$ ratio. The triangles depict the ratio of total multiplicities, while the circles correspond to the ratio of thermal multiplicities. The lines are given to guide the eye. }
\label{fig_Bugaev_DLam}
\end{figure}
Very recently a better description of the $\Lambda/p$ ratio was achieved in \cite{Bugaev15a}, when more data were analyzed.
Note that this result was obtained not at the expense of worsening other hadron yield ratios. Based on this new fit of hadron yield ratios, predictions for the $\frac{\Delta \Lambda}{\Delta\, p} = \frac{\Lambda - \bar \Lambda}{p - \bar p}$ ratio were made. As one can see from Fig.~\ref{fig_Bugaev_DLam}, this ratio demonstrates even more dramatic changes in the collision energy dependence.
Indeed, at the narrow collision energy interval $\sqrt{s_{NN}} =4.3-4.9$ GeV this ratio has a strong jump, while
at $\sqrt{s_{NN}} = 9.2$ GeV it shows a change of slope.
Our educated guess is that the collision energy dependence of the $\frac{\Delta \Lambda}{\Delta\, p}$ ratio is an indicator of two phase transformations \cite{Bugaev15a}. Since the observed jump of this ratio is located in the collision energy region of the mixed phase formation (i.e. with a first order phase transition), a change of its slope at $\sqrt{s_{NN}} = 9.2$ GeV can be naturally associated with a weak first order or a second order phase transition. Note that such a hypothesis is well supported by the results of the meta-analysis \cite{Metaanalisys} briefly summarized above.
\section{Conclusions}
From the discussions given in previous sections it is clear that a development of the multicomponent version of HRGM in 2012 led to a real breakthrough in our understanding of the thermodynamics at CFO. With the help of HRGM it was possible for the first time
to describe the Strangeness Horn with the highest accuracy \cite{HRGM:13,SFO:13,Sagun2}, including the topmost point. The new concept of separate CFOs for strange and non-strange hadrons, with conservation laws connecting them, allows one to naturally explain the appearance of apparent chemical non-equilibrium of strange particles \cite{SFO:13}. Furthermore, a thorough analysis of the entropy per baryon and the (thermal and total) pion number per baryon at CFO led to the discovery of two sets of strongly correlated quasi-plateaus \cite{Bugaev:SA1,Bugaev:SA2}. Since the low energy set of quasi-plateaus was predicted in \cite{KAB:89a,KAB:90, KAB:91} as a signal of the quark-gluon-hadron mixed phase formation, it was necessary to give a physical interpretation of the high energy set of quasi-plateaus.
A good hint for interpreting the appearance of the high energy set of quasi-plateaus is provided by the meta-analysis \cite{Metaanalisys} of QDD. Since the QDD meta-analysis gave independent evidence for the quark-gluon-hadron mixed phase formation in the narrow region of collision energy $\sqrt{s_{NN}} = 4.3-4.9$ GeV, its predictions for the
possible existence of another mixed phase at collision energies $\sqrt{s_{NN}} = 10-13.5$ GeV led us to a more thorough inspection of existing hadron multiplicity ratios. As one can see from the left panel of Fig.~\ref{fig_Bugaev_Lamd}
the $\Lambda/p$ ratio exhibits three different regimes in the collision energy dependence: at $\sqrt{s_{NN}}=4.3$ GeV the slope of this ratio suddenly increases, while above $\sqrt{s_{NN}}=8.8$ GeV it nearly saturates. As we argued above
a strong increase of $\Lambda/p$ slope at $\sqrt{s_{NN}}=4.3$ GeV can be
naturally explained by the idea of work \cite{Rafelski:82} that the mixed phase formation can be
identified by a rapid increase in the number of strange quarks per light quark, while the dramatic decrease of the $\Lambda/p$ slope at $\sqrt{s_{NN}} > 8.8$ GeV can be evidence for the second phase transformation.
It is remarkable that the $K^+/\Lambda$ ratio shows the change of slopes at the same energies
as the $\Lambda/p$ ratio, although the $K^+/\Lambda$ ratio involves two strange particles and (in general) it strongly depends on the baryonic chemical potential. Also the trace anomaly at CFO shows two peaks at the collision energies $\sqrt{s_{NN}}= 4.9$ GeV and
$\sqrt{s_{NN}}= 9.2$ GeV. The low energy trace anomaly peak can be explained within the shock adiabat model \cite{Bugaev:SA2} as a signal of QGP formation, whereas the existence of the high energy peak requires further confirmation by better experimental data. If confirmed, it will serve as a new indicator of a second phase transition.
Nevertheless, already now all the irregularities at CFO discussed here, together with the results of the QDD meta-analysis, form a coherent picture of the possible observation of two mixed phases in nucleus-nucleus collisions.
Therefore, the Beam Energy Scan program at RHIC has a unique chance to experimentally verify the above signals and to discover the mixed phases before the start of NICA and FAIR programs. The new observable, the $\frac{\Delta \Lambda}{\Delta\, p}$ ratio suggested in \cite{Bugaev15a}, will be very helpful for this because of its high sensitivity. However, to reach such goals the RHIC experiments should provide much smaller error bars, especially at low energies. Hence, the experiments in a fixed target mode are absolutely necessary for the success of the Beam Energy Scan program at RHIC.
\begin{acknowledgement} The authors are thankful to D. B. Blaschke, T. Galayuk, R. A. Lacey, I. N. Mishustin, D. H. Rischke, K. Redlich, L. M. Satarov, A.V. Taranenko, K. Urbanowski and Nu Xu for interesting discussions and valuable comments. K.A.B., A.I.I., V.V.S. and G.M.Z. acknowledge the partial support of the program ``On perspective fundamental research in high-energy and nuclear physics'' launched by the Section of Nuclear Physics of NAS of Ukraine. D.R.O. acknowledges a support of Deutsche Telekom Stiftung. K.A.B. is very thankful to all organizers of the ICNFP2015 for providing him with a chance to present and to discuss these results at the Conference and for a warm hospitality in OAC.
\end{acknowledgement}
\section{Introduction} \label{sec:intro}
Prefetching is a well-studied speculation technique that predicts the addresses of long-latency memory requests and fetches the corresponding data from main memory to on-chip caches before the \rbc{program executing on the processor} demands it.
\rbc{A program's repeated accesses over its data structures create patterns in its memory request addresses}.
A prefetcher tries to identify such memory access patterns from past memory requests to predict \rbc{the addresses of} future memory requests. To quickly identify a memory access pattern, a prefetcher typically uses some program context information to examine only \rbc{a subset} of memory requests. We call this program context a \emph{feature}.
\rbc{The prefetcher associates a memory access pattern with a feature and generates prefetches following the same pattern \rbc{when} the feature reoccurs \rbc{during program execution}}.
\rbc{Past research has} proposed numerous prefetchers that consistently pushed the limits of prefetch coverage (i.e., the fraction of memory requests predicted by the prefetcher) and accuracy (i.e., the fraction of \rbc{prefetch} requests that are \rbc{actually} demanded by the program) by exploiting various program features, e.g., program counter (\texttt{PC}), cacheline address (\texttt{Address}), page offset of a cacheline (\texttt{Offset}), or \rbc{a simple} combination of such features using simple operations like concatenation (\texttt{+}) ~\cite{stride,streamer,baer2,stride_vector,jouppi_prefetch,ampm,fdp,footprint,sms,sms_mod,spp,vldp,sandbox,bop,dol,dspatch,bingo,mlop,ppf,ipcp}.
For example, a PC-based stride prefetcher~\cite{stride,stride_vector,jouppi_prefetch} uses PC as the feature to learn the constant stride between two consecutive memory accesses \rbc{caused} by the same PC. VLDP~\cite{vldp} and SPP~\cite{spp} use a sequence of cacheline address deltas as the feature to predict the next cacheline \rbc{address} delta. Kumar and Wilkerson~\cite{footprint} use \texttt{PC+Address} of the first access \rbc{in} a \rbc{memory} region as the feature to predict the \rbc{spatial} memory access footprint \rbc{in} the entire memory region. SMS~\cite{sms} empirically finds \texttt{PC+Offset} of the first access \rbc{in a memory region} to be a better feature to predict the \rbc{memory access} footprint. Bingo~\cite{bingo} combines the features from~\cite{footprint} and SMS and uses \rbc{\texttt{PC+Address} and \texttt{PC+Offset}} \rbc{as its features}.
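As an illustration of such feature combinations, the sketch below folds a concatenated \texttt{PC+Offset} feature into a table index; the bit widths and the folding scheme are our own assumptions, not taken from any of the cited prefetchers:

```python
# Minimal sketch of how concatenated program features (e.g. PC+Offset) can be
# folded into a pattern-table index.  Bit widths and the folding scheme are
# illustrative assumptions, not taken from any of the cited prefetchers.

def feature_pc_offset(pc: int, cacheline_addr: int, page_bits: int = 12,
                      line_bits: int = 6, table_bits: int = 10) -> int:
    """Concatenate the PC with the page offset of the cacheline, then fold
    the result down to a table index."""
    # Page offset of the cacheline: bits [page_bits-1 : line_bits] of the address.
    offset = (cacheline_addr >> line_bits) & ((1 << (page_bits - line_bits)) - 1)
    concatenated = (pc << (page_bits - line_bits)) | offset   # the "+" operation
    return concatenated % (1 << table_bits)                   # fold into table range

idx = feature_pc_offset(pc=0x400123, cacheline_addr=0x7ffe1240)
print(f"pattern-table index: {idx}")
```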
\rbc{Accurate and timely prefetch requests reduce} the long memory access latency \rbc{experienced by} the \rbc{processor}, \rbc{thereby} improving overall \rbc{system} performance. \rbc{However}, \rbc{speculative prefetch requests} \rbc{can} cause \rbc{undesirable} effects on the system (e.g., increased memory bandwidth consumption, cache pollution, \rbc{memory access interference,} etc.), which can reduce or negate the performance improvement gained by hiding memory access latency~\cite{fdp,ebrahimi2009coordinated}. \rbc{Thus, a good prefetcher aims to maximize its benefits while minimizing its undesirable effects on the system.}
\rbc{Even though there is a large number of prefetchers proposed in the literature, we observe
three key shortcomings in almost every prior prefetcher design that significantly limit \rbc{its} performance benefits over a wide range of workloads and system configurations:
(1) \rbc{the use of \rbc{mainly} a single program feature for prefetch prediction}, (2) lack of inherent system awareness, and (3) \rbc{lack of ability to customize \rbc{the} prefetcher design to seamlessly adapt to a wide range of workloads \rbc{and system configurations}}}.
\begin{sloppypar}
\textbf{Single-feature prefetch prediction.} \rbc{Almost every prior prefetcher relies on \emph{only one} program feature to correlate with the program memory access pattern and generate prefetch requests~\cite{stride,streamer,baer2,stride_vector,jouppi_prefetch,ampm,fdp,footprint,sms,sms_mod,spp,vldp,sandbox,bop,dol,dspatch,mlop,ppf,ipcp}. As a result, a prefetcher typically \rbc{provides} good \rbc{(or poor)} performance benefits in \rbc{mainly} those workloads where the correlation between the feature used by the prefetcher and the program's memory access pattern is dominantly present \rbc{(or absent)}.}
\rbc{To demonstrate this, we show the coverage and overpredictions (i.e., \rbc{prefetched} memory requests that do \emph{not} get demanded by the processor) of two recently proposed prefetchers, SPP~\cite{spp} and Bingo~\cite{bingo}, and our new proposal Pythia (\cref{sec:design}) for six \rbc{example} workloads (\cref{sec:methodology} discusses our experimental methodology) in Fig.~\ref{fig:intro_perf_cov_acc}(a). Fig.~\ref{fig:intro_perf_cov_acc}(b) shows the performance of SPP, Bingo and Pythia on the same workloads.}
As we see in Fig.~\ref{fig:intro_perf_cov_acc}(a), Bingo provides higher prefetch coverage than SPP in \texttt{sphinx3}, \texttt{PARSEC-Canneal}, and \texttt{PARSEC-Facesim}, where the correlation exists between the first access \rbc{in} a \rbc{memory} region and the other accesses \rbc{in} the same region. \rbc{As a result, Bingo performs better than SPP in these workloads (Fig.~\ref{fig:intro_perf_cov_acc}(b)).}
\rbc{In contrast}, for workloads like \texttt{GemsFDTD} that have regular access patterns within a physical page, SPP's \emph{sequence of deltas} feature \rbc{provides} better coverage and performance than Bingo.
\end{sloppypar}
\begin{figure}[!h]
\vspace{-0.8em}
\centering
\includegraphics[width=3.3in]{figures/intro_perf_cov_acc.pdf}
\vspace{-0.5em}
\caption{Comparison of \rbc{(a) coverage, overprediction, and (b) performance of two recently-proposed prefetchers, SPP~\cite{spp} and Bingo~\cite{bingo}, and our new proposal, Pythia.}}
\label{fig:intro_perf_cov_acc}
\vspace{-1em}
\end{figure}
\begin{sloppypar}
\textbf{Lack of inherent system awareness.}
\rbc{All prior prefetchers either completely neglect their undesirable effects on the system (e.g., memory bandwidth usage, cache pollution, memory access interference, system energy consumption, etc.)~\cite{stride,streamer,baer2,stride_vector,jouppi_prefetch,ampm,footprint,sms,sms_mod,spp,vldp,sandbox,bop,dol,bingo,mlop,ppf,ipcp} or incorporate system awareness as \rbc{an afterthought \rbc{(i.e., a separate control component)}} to the underlying system-unaware prefetch algorithm~\cite{fdp,ebrahimi2009coordinated,ebrahimi2009techniques,ebrahimi_paware,dspatch,bapi,pa_dram,mutlu2005,wflin,zhuang,wflin2,charney}. Due to the lack of inherent system awareness,}
a prefetcher often loses its performance gain in resource-constrained scenarios.
\rbc{For example, as shown in} Fig.~\ref{fig:intro_perf_cov_acc}(a), Bingo achieves similar \rbc{prefetch} coverage in \texttt{Ligra-CC} as compared to \texttt{PARSEC-Canneal}, while generating significantly lower overpredictions in \texttt{Ligra-CC} than \texttt{PARSEC-Canneal}.
However,
Bingo loses performance in \texttt{Ligra-CC} by $1.9$\% compared to a no-prefetching baseline, whereas it improves performance by $6.4$\% in \texttt{PARSEC-Canneal} (Fig.~\ref{fig:intro_perf_cov_acc}(b)).
\rbc{
This contrasting outcome is due to Bingo's lack of awareness \rbc{of} the memory bandwidth usage.
\rbc{Without prefetching, }\texttt{Ligra-CC} consumes higher memory bandwidth than \texttt{PARSEC-Canneal}. As a result,
\rbc{each overprediction}
made by Bingo in \texttt{Ligra-CC} wastes more precious \rbc{memory} bandwidth and is more detrimental to performance than
\rbc{that in}
\texttt{PARSEC-Canneal}.}
\end{sloppypar}
\textbf{Lack of online prefetcher design customization.}
\rbc{The high design complexity of architecting a multi-feature, system-aware prefetcher has traditionally compelled architects to statically select only one program feature at design time. With every new prefetcher, architects design new rigid hardware structures to exploit the selected program feature.}
To exploit a new program feature for higher performance benefits, one must design a new prefetcher from scratch \rbc{and} extensively evaluate and verify \rbc{it both \rbc{in} pre-silicon and post-silicon realization}.
Due to the rigid design-time decisions, the hardware structures proposed by prior prefetchers cannot be customized \rbc{online} in silicon either to exploit any other program feature or to change the prefetcher’s objective (\rbc{e.g.}, to increase/decrease coverage, accuracy, or timeliness) \rbc{so that it can seamlessly adapt to} varying workloads and system configurations.
\vspace{0.3em}
\emph{\textbf{Our goal}} in this work is to design a single prefetching framework that (1) can holistically learn to prefetch using both \emph{multiple different types of program features} and \emph{system-level feedback} \rbc{information that is inherent to the design}, and (2) can be \emph{easily customized} in silicon via simple configuration registers to exploit
\rbc{different types of program features and/or}
to change the objective of the prefetcher \rbc{(e.g., increasing/decreasing coverage, accuracy, or timeliness)} without any changes to the underlying hardware.
\vspace{0.3em}
\textbf{Key ideas.} To this end, we propose Pythia,\footnote{Pythia, according to Greek mythology, is the oracle of Delphi who is known for accurate prophecies~\cite{pythia}.} which formulates hardware prefetching as a reinforcement learning problem. Reinforcement learning (RL)~\cite{rl_bible,rlmc} is a machine learning paradigm that studies how an autonomous agent can learn to take optimal actions that maximize a reward function by interacting with a stochastic environment.
We formulate Pythia as an RL-agent that autonomously learns to prefetch by interacting with the processor and the memory subsystem.
For every new demand request, Pythia extracts a set of program features.
It uses the set of features as \emph{state} information to take a prefetch \emph{action} based on its prior experience. For every prefetch action (including \rbc{\emph{not to prefetch}}), Pythia receives a numerical \emph{reward}
\rbc{which} evaluates the accuracy and timeliness of the prefetch action
\rbc{given various system-level feedback \rbc{information}. While Pythia's framework is general enough to incorporate any type of system-level feedback \rbc{information} \rbc{into its decision making}, in this paper we demonstrate Pythia using \rbc{one} major system-level \rbc{information for prefetching}: memory bandwidth usage.}
Pythia uses the reward received for a prefetch action to reinforce the \rbc{correlations} between various program features and the prefetch action and \rbc{learn from} experience \rbc{how} to generate accurate, timely, and system-aware prefetches in the future.
\rbc{The types of program feature used by Pythia and the reward level values can be easily customized in silicon via configuration registers.}
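To make the reward structure concrete, the following sketch assigns bandwidth-dependent reward levels to prefetch outcomes; the outcome names, the bandwidth threshold, and the numeric values are our own illustrative placeholders, not Pythia's actual configuration:

```python
# Illustrative sketch of a bandwidth-aware reward scheme in the spirit of the
# text: the reward depends on whether a prefetch was accurate/timely and on
# the current memory bandwidth usage.  Level names and numeric values are
# placeholders, not Pythia's actual configuration.

REWARDS = {
    ("accurate_timely", "low_bw"):  20,
    ("accurate_timely", "high_bw"): 12,
    ("accurate_late",   "low_bw"):  12,
    ("accurate_late",   "high_bw"):  5,
    ("inaccurate",      "low_bw"):  -4,
    ("inaccurate",      "high_bw"): -14,
    ("no_prefetch",     "low_bw"):  -2,   # missed prefetch opportunity
    ("no_prefetch",     "high_bw"): -1,
}

def reward(outcome: str, bw_usage_pct: float) -> int:
    # Placeholder threshold separating the two bandwidth levels.
    level = "high_bw" if bw_usage_pct > 50.0 else "low_bw"
    return REWARDS[(outcome, level)]

# An inaccurate prefetch is punished more when bandwidth is scarce.
print(reward("inaccurate", bw_usage_pct=80.0))   # -14
print(reward("inaccurate", bw_usage_pct=20.0))   # -4
```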
\textbf{Novelty and Benefits.}
Pythia's RL-based design approach requires an architect to only specify \emph{which} of the possible program features \emph{might} be useful to design a good prefetcher and \emph{what} performance goal the prefetcher should target, rather than spending time on designing and implementing a new \rbc{(likely rigid)} prefetch algorithm and accompanying rigid hardware that describes \emph{precisely how} the prefetcher should exploit the selected features to achieve that performance goal.
This \rbc{approach} provides two unique advantages over prior prefetching proposals.
First, using the RL framework, Pythia can holistically learn to prefetch using \rbc{both} \rbc{\emph{multiple program features}} and \emph{system-level feedback} \rbc{information} \rbc{inherent to its design}.
\rbc{Second, Pythia can be easily customized in silicon via simple configuration registers to exploit different types of program features and/or change the objective of the prefetcher. This gives Pythia the unique benefit of providing \rbc{even higher} performance \rbc{improvements} for a wide variety of workloads \rbc{and changing system configurations}, without any changes \rbc{to} the underlying hardware.}
\textbf{Results Summary.} We evaluate Pythia using a diverse set of memory-intensive workloads spanning \texttt{SPEC CPU2006}~\cite{spec2006}, \texttt{SPEC CPU2017}~\cite{spec2017}, \texttt{PARSEC 2.1}~\cite{parsec}, \texttt{Ligra}~\cite{ligra}, and \texttt{Cloudsuite}~\cite{cloudsuite} \rbc{benchmarks}.
\rbc{\rbc{We demonstrate four key results.} First, Pythia outperforms \rbc{two state-of-the-art prefetchers (MLOP~\cite{mlop} and Bingo~\cite{bingo}) by \rbc{$3.4$}\% and \rbc{$3.8$}\% in single-core and \rbc{$7.7$}\% and $9.6$\% in twelve-core configurations.}
This is because Pythia generates lower overpredictions, while simultaneously providing higher prefetch coverage than the prior prefetchers.
Second, Pythia's performance benefits increase in bandwidth-constrained system configurations. For example, in a server-like configuration, where a core \rbc{can have} only $\frac{1}{16}\times$ \rbc{of the} bandwidth of a single-channel DDR4-$2400$~\cite{ddr4} DRAM controller, Pythia outperforms MLOP and Bingo by $16.9$\% and $20.2$\%.
Third, Pythia can be customized further via \rbc{simple} configuration registers \rbc{to target} workload suites to provide even higher performance benefits.
We demonstrate that by simply changing the numerical rewards, Pythia provides up to $7.8$\% ($1.9$\% on average) more performance improvement across all \texttt{Ligra} \rbc{graph processing} workloads over the \rbcb{basic} Pythia configuration.
}
Fourth, Pythia's performance benefits come with \rbc{only} modest area and power \rbc{overheads}.
\rbc{Our functionally-verified hardware synthesis for Pythia shows that}
Pythia only incurs an area and power overhead of $1.03$\% and $0.37$\% over a $4$-core desktop-class processor.
\vspace{0.5em}
\noindent We make the following contributions in this paper:
\begin{itemize}
\item \rbc{We observe three key shortcomings in prior prefetchers that significantly limit their performance benefits:
(1) \rbc{the use of only a single program feature for prefetch prediction, (2) lack of inherent system awareness, and (3) lack of ability to customize \rbc{the} prefetcher design to seamlessly adapt to a wide range of workloads \rbc{and system configurations}}.
}
\item \rbc{We introduce a new prefetcher called Pythia}. Pythia formulates the prefetcher as a reinforcement learning (RL) agent, which takes adaptive prefetch decisions \rbc{by autonomously learning using both multiple program features and system-level feedback information \rbc{inherent to its design}} (\cref{sec:key_idea_formulation}).
\item \rbc{We provide a low-overhead, practical implementation of Pythia's \rbc{RL-based} algorithm in hardware, which uses no more complex structures than simple tables (\cref{sec:ke_design}). \rbc{This design can potentially be used for other hardware structures that can benefit from RL principles.}}
\item By extensive evaluation, we show that Pythia outperforms prior state-of-the-art prefetchers over a \rbc{wide} variety of workloads in a wide range of system configurations.
\item \rbc{We open source Pythia and all the workload traces used for performance modeling in our GitHub repository: \url{https://github.com/CMU-SAFARI/Pythia}.}
\end{itemize}
\section{Background}\label{sec:background}
\rbc{We first} briefly review the basics of reinforcement learning~\cite{rl_bible, rlmc}. We then \rbc{describe} why reinforcement learning is a good \rbc{framework} for designing a hardware prefetcher \rbc{that fits our goals}.
\subsection{Reinforcement Learning}\label{sec:background_rl}
Reinforcement learning (RL)~\cite{rl_bible,rlmc}, in its simplest form, is the algorithmic approach to learn how to take \rbc{an} \emph{action} in a given \emph{situation} to maximize a numerical \emph{reward} signal.
A typical RL system \rbc{comprises} two main components: \emph{the agent} and \emph{the environment}, as shown in Fig.~\ref{fig:rl_basics}. The agent is the entity \rbc{that takes actions}.
\rbc{The agent resides in the environment and interacts with it in discrete timesteps.}
At each timestep $t$, the agent observes the current \textbf{\emph{state}} of the environment \textbf{$S_t$} and takes \textbf{\emph{action}} \textbf{$A_t$}. Upon receiving the action, the environment transitions to a new state $\mathbf{S_{t+1}}$, and emits an immediate \textbf{\emph{reward}} $R_{t+1}$, which is \rbc{immediately or} later \rbc{delivered} to the agent. The reward scheme encapsulates \rbc{the} agent's objective and drives the agent \rbc{towards taking} optimal actions.
\begin{figure}[!h]
\vspace{-0.5em}
\centering
\includegraphics[scale=0.3]{figures/RL-basics.pdf}
\vspace{-1em}
\caption{Interaction between an agent and the environment in a reinforcement learning system.}
\vspace{-1em}
\label{fig:rl_basics}
\end{figure}
\rbc{The \textbf{\emph{policy}} of the agent dictates which action it takes in a given state. \emph{The agent's goal is to find the optimal policy that maximizes the cumulative reward collected from the environment over time.}}
The expected cumulative reward by taking an action $A$ in a given state $S$ is defined as the \textbf{\emph{Q-value}} of the state-action pair (denoted as $Q(S,A)$).
\rbc{At} every timestep $t$, the agent iteratively optimizes its policy in two steps: (1) the agent updates the Q-value of a state-action pair using \rbc{the reward} collected in the current timestep, and (2) the agent \rbc{optimizes its current policy} using the newly updated Q-value.
\textbf{Updating \rbc{Q-values}.} If at a given timestep $t$ the agent observes a state $S_t$ and takes an action $A_t$, while the environment transitions to a new state $S_{t+1}$ and emits a reward $R_{t+1}$, and the agent takes action $A_{t+1}$ in the new state, then the Q-value of the old state-action pair $Q(S_t, A_t)$ is iteratively optimized using \rbc{the} SARSA~\cite{sarsa, rl_bible} \rbc{algorithm,} as shown in Eqn.~\eqref{eq:sarsa}\rbc{:}
\begin{equation}\label{eq:sarsa}
\normalsize
\begin{aligned}
Q\left(S_t, A_t\right) & \gets Q\left(S_t, A_t\right)\\
&+ \alpha\left[R_{t+1}+\gamma Q\left(S_{t+1}, A_{t+1}\right) - Q\left(S_t, A_t\right)\right]
\end{aligned}
\end{equation}
$\alpha$ is the \emph{learning rate} parameter \rbc{that} controls the convergence rate of Q-values.
$\gamma$ is the \emph{discount factor}, \rbc{which is used} to assign more weight to the immediate reward received by the agent at any given timestep than to the delayed future rewards. A $\gamma$ value closer to 1 gives a ``far-sighted" planning capability to the agent, i.e., the agent can trade off a low immediate reward to gain higher rewards in the future. This is particularly useful in creating an autonomous agent that can anticipate the \rbc{long-term} \rbc{effects} of taking an action to optimize its policy \rbc{that gets closer to optimal over time}.
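A tabular implementation of the SARSA update in Eqn.~\eqref{eq:sarsa} can be sketched as follows; the state/action encodings and hyperparameter values are illustrative, not the ones used by Pythia:

```python
from collections import defaultdict

# Tabular SARSA update, Eqn. (1):
#   Q(S_t,A_t) += alpha * (R_{t+1} + gamma*Q(S_{t+1},A_{t+1}) - Q(S_t,A_t))
# Hyperparameter values are illustrative placeholders.
ALPHA, GAMMA = 0.65, 0.55

Q = defaultdict(float)  # Q-table: (state, action) -> Q-value, initialized to 0

def sarsa_update(s_t, a_t, r_next, s_next, a_next):
    td_target = r_next + GAMMA * Q[(s_next, a_next)]
    Q[(s_t, a_t)] += ALPHA * (td_target - Q[(s_t, a_t)])

# One illustrative transition: the agent saw state "s0", prefetched with
# delta +1, received reward 20, then chose delta +2 in the next state "s1".
sarsa_update("s0", +1, 20.0, "s1", +2)
print(Q[("s0", +1)])   # first update moves Q from 0 toward the reward
```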
\textbf{\rbc{Optimizing} policy.} To find a policy that maximizes the cumulative reward collected over time, a purely-greedy agent always exploits the action $A$ in a given state $S$ that provides the highest Q-value $Q(S,A)$. However, greedy exploitation can leave the state-action space under-explored.
Thus, in order to strike a balance between exploration and exploitation, an $\epsilon$-greedy agent \emph{stochastically} takes a random action with a low probability of $\epsilon$ (called \emph{exploration rate}); otherwise, it selects the action that provides the highest Q-value~\cite{rl_bible}.
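The $\epsilon$-greedy selection step can be sketched as follows. The flat Q-table and action list are illustrative placeholders, not Pythia's QVStore.

```python
import random

def select_action(Q, state, actions, epsilon=0.002):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.choice(actions)  # explore: random action
    # exploit: action with the highest Q-value in this state
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```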
In short, the Q-value serves as the foundational cornerstone of reinforcement learning.
By iteratively learning Q-values of state-action pairs, an RL-agent \rbc{continuously optimizes its policy} to take actions \rbc{that get closer to optimal over time}.
\vspace{-5pt}
\subsection{Why \rbc{is} RL a Good Fit for Prefetching?} \label{sec:background_rl_prefetching}
The RL framework \rbc{has} recently been successfully demonstrated \rbc{on} complex problems like mastering human-like control \rbc{in} Atari games~\cite{deepmind_atari} and Go~\cite{alpha_go, alpha_zero}.
\rbc{We argue that the RL framework is an inherent fit to model a hardware prefetcher for three key reasons.}
\textbf{Adaptive learning in \rbc{a complex state space}.}
As we state in \cref{sec:intro}, the \rbc{benefits} of a prefetcher depend not only on its coverage and accuracy but also on its \rbc{undesirable effects} on the system, like \rbc{memory bandwidth usage}.
In other words, \emph{it is not sufficient for a prefetcher only to make highly accurate predictions}. Instead, a prefetcher should be \emph{performance-driven}.
A prefetcher should have the capability to adaptively trade-off coverage for higher accuracy (and vice-versa) depending on its impact on the overall system to provide a robust performance improvement with varying \rbc{workloads and system configurations}. This adaptive and performance-driven nature of prefetching \rbc{in a complex state space} makes RL a good fit for modeling a prefetcher \rbc{as an autonomous agent that learns to prefetch by interacting with the system.}
\textbf{Online learning.}
\rbc{An RL agent \emph{does not} require an expensive offline training phase. Instead, it can \emph{continuously} learn \emph{online} by iteratively optimizing its policy using the rewards received from the environment. A hardware prefetcher, similar to an RL agent, also needs to continuously learn from the changing workload behavior and system \rbc{conditions} to provide consistent performance benefits. The online learning requirement of prefetching makes RL an inherent fit to model a hardware prefetcher.}
\begin{sloppypar}
\textbf{Ease of implementation.}
Prior works have evaluated many sophisticated \rbc{machine} learning models like simple neural \rbc{networks}~\cite{peled2018neural}, \rbc{LSTMs}~\cite{hashemi2018learning, shineural}, and Graph Neural \rbc{Networks} (GNNs)~\cite{shi2019learning} as \rbc{models for} hardware \rbc{prefetching}. \rbc{Even though} these techniques show encouraging results in accurately predicting memory accesses, they fall short \rbc{especially} in two major aspects. First, these models' \rbc{sizes} often exceed even the \rbc{largest} \rbc{caches in traditional processors}~\cite{peled2018neural,hashemi2018learning,shineural,shi2019learning}, making them impractical \rbc{(or at best very difficult) to implement}.
Second, due to the vast amount of computation \rbc{they require for inference}, these models’ inference latency is much higher than an acceptable latency of a prefetcher at any cache level.
On the other hand, we can efficiently implement an RL-based model, as we \rbc{demonstrate} in this paper \rbc{(\cref{sec:design})}, that can \emph{quickly} make predictions and can be \rbc{relatively} easily adopted in a real processor.
\end{sloppypar}
\section{Pythia: Design} \label{sec:design}
Fig.~\ref{fig:overview} shows a \rbc{high-level} \rbc{overview} of Pythia.
Pythia is mainly comprised of two hardware structures: \emph{Q-Value Store} (QVStore) and \emph{Evaluation Queue} (EQ).
The purpose of QVStore is to \rbc{record} Q-values for all state-action pairs that are \rbc{observed} by Pythia.
The purpose of EQ is to maintain a first-in-first-out list of Pythia's recently-taken actions.\footnote{Pythia keeps track of recently-taken actions because it cannot always \emph{immediately} assign a reward to an action, as the \rbc{usefulness} of the generated prefetch request (i.e., \rbc{if and when} the prefetched address is demanded by the processor) is not immediately known while the action is being taken.
During EQ residency, \rbc{if the address of a demand request matches with the prefetch address stored in an EQ entry}, the corresponding action is considered to \rbc{have generated a useful prefetch request}.
}
\rbc{Every EQ entry holds three \rbc{pieces of} information: \rbc{(1)} the \rbc{taken} action, \rbc{(2)} the prefetch address generated for the corresponding action, and \rbc{(3)} a \emph{filled} bit. A set filled bit indicates that the prefetch request has been filled into the cache.}
For every new demand request, Pythia first checks the EQ with the demanded memory address (\circled{1}). \rbc{If the address is present in the EQ (i.e., Pythia has issued a prefetch request for this address in the past)},
\rbc{it signifies that the prefetch action corresponding to the EQ entry \rbc{has generated a useful prefetch request}. As such, Pythia assigns a reward (either $\mathcal{R}_{AT}$ or $\mathcal{R}_{AL}$) to the EQ entry, based on whether \rbc{or not} the EQ entry's filled bit is set.}
\rbc{Next, Pythia extracts the state-vector from the attributes of the demand request (e.g., PC, address, cacheline delta, etc.) (\circled{2}) and looks up QVStore to find the action with the maximum Q-value for the given state-vector (\circled{3}).}
\rbc{Pythia selects the action with the maximum Q-value to generate a prefetch request and issues the request to the memory hierarchy (\circled{4}).}
\rbc{At the same time, Pythia inserts the selected prefetch action, its corresponding prefetched memory address, and the state-vector into EQ (\circled{5}).}
Note that a \emph{no-prefetch} action or an action that prefetches an address beyond the current physical page
is also inserted into EQ. The reward for such \rbc{an} action is instantaneously assigned to the EQ entry.
\rbc{When an EQ entry gets evicted,} the state-action pair and the reward stored in the evicted EQ entry are used to update the Q-value in the QVStore (\circled{6}).
For every prefetch fill in cache, Pythia looks up EQ with the prefetch address \rbc{and sets the \emph{filled} bit in the matching EQ entry indicating that the prefetch request has been filled \rbc{into the} cache (\circled{7}).}
\rbc{Pythia uses this filled bit in \circled{1} to classify actions that generated timely or late prefetches.}\footnote{In this paper, we define prefetch timeliness as a binary value due to its measurement simplicity. One can easily make the definition non-binary by storing three timestamps per EQ entry: (1) when the prefetch is issued ($t_{issue}$), (2) when the prefetch is filled ($t_{fill}$), and (3) when a demand is generated for the same prefetched address ($t_{demand}$).}
\begin{figure}[!h]
\vspace{-1.2em}
\centering
\includegraphics[width=3.3in]{figures/overview2.pdf}
\vspace{-1em}
\caption{Overview of Pythia.}
\vspace{-2em}
\label{fig:overview}
\end{figure}
\begin{algorithm*}[h]
\footnotesize
\caption{Pythia's \rbc{reinforcement learning} based prefetching algorithm}\label{algo:Pythia}
\begin{algorithmic}[1]
\Procedure{Initialize}{}
\State initialize QVStore: $Q(S,A)\gets \frac{1}{1-\gamma}$
\State clear EQ
\EndProcedure
\State
\Procedure{Train\_and\_Predict}{Addr} \hspace{32pt} \textit{/* Called for every demand request */}
\State $entry \gets search\_EQ(Addr)$ \hspace{46pt} \textit{/* For a demand request to $Addr$, search EQ with the demand address */}
\If{entry is valid}
\If {$entry.filled == true$}
\State $entry.reward \gets \mathcal{R}_{AT}$ \hspace{40pt} \textit{/* If the filled bit is set, i.e., the demand access came \emph{after} the prefetch fill, assign reward $\mathcal{R}_{AT}$ */}
\Else
\State $entry.reward \gets \mathcal{R}_{AL}$ \hspace{40pt} \textit{/* Otherwise, assign $\mathcal{R}_{AL}$ */}
\EndIf
\EndIf
\State $S\gets get\_state()$ \hspace{80pt} \textit{/* Extract the state-vector from the attributes of current demand request */}
\If{$rand()\le \epsilon$}
\State $action\gets get\_random\_action()$ \hspace{20pt} \textit{/* Select a random action with a low probability $\epsilon$ to explore the state-action space */}
\Else
\State $action\gets \argmax_a Q(S,a)$ \hspace{38pt} \textit{/* Otherwise, select the action with the highest Q-value */}
\EndIf
\State $prefetch(Addr+Offset[action])$ \hspace{20pt} \textit{/* Add the selected prefetch offset to the current demand address to generate prefetch address */}
\State $entry \gets create\_EQ\_entry(S,action,Addr+Offset[action])$ \hspace{1.5em} \textit{/* Create new EQ entry using the current state-vector, the selected action, and the prefetch address */}
\If {no prefetch action}
\State $entry.reward \gets \mathcal{R}_{NP}^{H}$ or $\mathcal{R}_{NP}^{L}$ \hspace{23pt} \textit{/* In case of no-prefetch action, immediately assign reward $\mathcal{R}_{NP}^{H}$ or $\mathcal{R}_{NP}^{L}$ based on current memory bandwidth usage */}
\ElsIf{out-of-page prefetch}
\State $entry.reward \gets \mathcal{R}_{CL}$ \hspace{50pt} \textit{/* In case of out-of-page prefetch action, immediately assign reward $\mathcal{R}_{CL}$ */}
\EndIf
\State $evict\_entry\gets insert\_EQ(entry)$ \hspace{25pt} \textit{/* Insert the entry. Get the evicted EQ entry. */}
\If{$has\_reward(evict\_entry) == false$}
\State $evict\_entry.reward \gets \mathcal{R}_{IN}^{H}$ or $\mathcal{R}_{IN}^{L}$ \hspace{2em} \textit{/* If the evicted entry does not have a reward yet, assign the reward $\mathcal{R}_{IN}^{H}$ or $\mathcal{R}_{IN}^{L}$ based on current memory bandwidth usage */}
\EndIf
\State $R \gets evict\_entry.reward$ \hspace{116pt} \textit{/* Get the reward stored in the evicted entry */}
\State $S_1\gets evict\_entry.state;\ \ A_1\gets evict\_entry.action$ \hspace{42pt} \textit{/* Get the state-vector and the action from the evicted EQ entry */}
\State $S_2\gets EQ.head.state;\ \ A_2\gets EQ.head.action$ \hspace{46pt} \textit{/* Get the state-vector and the action from the entry at the head of the EQ */}
\State $Q(S_1, A_1)\gets Q(S_1, A_1)+\alpha[R+\gamma Q(S_2, A_2)-Q(S_1, A_1)]$ \hspace{2em} \textit{/* Perform the SARSA update */}
\EndProcedure
\State
\Procedure{Prefetch\_Fill}{Addr}
\State $search\_and\_mark\_EQ(Addr,\ \texttt{FILLED})$ \hspace{2em} \textit{/* For every prefetch fill, search the address in EQ and mark the corresponding EQ entry as filled */}
\EndProcedure
\end{algorithmic}
\end{algorithm*}
\subsection{RL-based Prefetching Algorithm}
Algorithm~\ref{algo:Pythia} shows Pythia's RL-based prefetching algorithm. \rbc{Initially,} all entries in QVStore are
reset to the highest possible Q-value ($\frac{1}{1-\gamma}$) and the EQ is cleared (lines 2-3).
For every demand \rbc{request} to \rbc{a} cacheline address $Addr$, Pythia searches for $Addr$ in EQ (line $6$). If a matching entry is found, Pythia \rbc{assigns a reward (either $\mathcal{R}_{AT}$ or $\mathcal{R}_{AL}$) based on the \emph{filled} bit in the EQ entry (lines 8-11)}. Pythia then extracts the state-vector to \emph{stochastically} select a prefetching action (Sec.~\ref{sec:background}) that provides the highest Q-value (lines 13-16). \rbc{Pythia uses the selected action to generate the prefetch request (line 17) and creates a new EQ entry with the current state-vector, the selected action, and its corresponding prefetched address (line 18).}
\rbc{In case of a no-prefetch action, or an action that prefetches beyond the current physical page, Pythia immediately assigns the reward to the newly-created EQ entry (lines 19-22).}
\rbc{The EQ entry is then inserted, which evicts an entry from EQ. If the evicted EQ entry does not already have a reward assigned (\rbc{indicating that} the corresponding prefetch address has \emph{not} been demanded \rbc{by} the processor \rbc{so far}), Pythia assigns the reward $\mathcal{R}_{IN}^{H}$ or $\mathcal{R}_{IN}^{L}$ based on the current memory bandwidth usage (line 25).}
\rbc{Finally, the Q-value of the evicted state-action pair is updated \rbc{via} the SARSA algorithm (Sec.~\ref{sec:background}), using the reward stored in the evicted EQ entry and the Q-value of the state-action pair in the head of the EQ-entry (lines 26-29).}
\subsection{Detailed Design of Pythia}
\rbc{We describe the organization of QVStore \rbc{(\cref{sec:ke_design})}, how Pythia searches QVStore to get the action with the maximum Q-value for a given state-vector (\circled{3}) \rbc{(\cref{sec:design_config_pipeline})}, and how Pythia assigns rewards to each taken action and updates Q-values (\circled{6}) \rbc{(\cref{sec:ke_update})}.}
\subsubsection{\textbf{Organization of QVStore}} \label{sec:ke_design}
\rbc{The purpose of QVStore is to record Q-values for all state-action pairs that Pythia observes.}
Unlike prior real-world applications of RL~\cite{deepmind_atari,alpha_go,alpha_zero}, which use deep neural networks to \emph{approximately} \rbc{store} Q-values of every state-action pair, we propose a \rbc{new}, table-based, hierarchical QVStore organization that is \rbc{custom-designed for} our RL-agent.
\rbc{Fig.~\ref{fig:ke_design}(a) shows the high-level organization of QVStore and how the Q-value is retrieved from QVStore for a given state $S$ (which is a $k$-dimensional vector of program features, $\{\phi_{S}^{1}, \phi_{S}^{2}, \ldots, \phi_{S}^{k}\}$) and an action $A$. As the state space grows rapidly with the state-vector dimension ($k$) and the bits used to represent each feature, we employ a hierarchical organization for QVStore.
We organize QVStore in $k$ partitions, each \rbc{of which} we call a \emph{vault}. Each vault corresponds to one constituent feature of the state-vector and records the Q-values for the feature-action pair, $Q(\phi_{S}^{i},A)$.
}
During the Q-value retrieval for a given state-action pair $Q(S,A)$, Pythia queries each vault in parallel to retrieve the Q-values of constituent feature-action pairs $Q(\phi_{S}^{i},A)$. The final Q-value of the state-action pair $Q(S,A)$ is computed as the \emph{maximum} of all constituent feature-action Q-values \rbc{(as} Eqn.~\ref{eq:q_value} shows).
\rbc{The maximum operation ensures that the state-action Q-value is driven by the constituent feature of the state-vector that has the highest feature-action Q-value.}
\rbc{The vault organization enables QVStore to efficiently scale up \rbc{to} higher state-vector \rbc{dimensions: one} can increase the state-vector dimension by simply adding a new vault to the QVStore.}
\begin{equation} \label{eq:q_value}
Q(S, A) = \max_{i \in \{1, \ldots, k\}} Q(\phi_{S}^{i}, A)
\end{equation}
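A minimal functional sketch of Eqn.~\ref{eq:q_value} follows; each vault is modeled as a dictionary from feature-action pairs to Q-values, and the names are ours, not the hardware's.

```python
def q_value(vaults, state_vector, action):
    """Q(S, A) = max over constituent features of Q(phi_i, A).
    `vaults` is a list of dicts, one per feature, mapping
    (feature_value, action) -> feature-action Q-value."""
    return max(vault.get((phi, action), 0.0)
               for vault, phi in zip(vaults, state_vector))
```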
\begin{figure}[!h]
\vspace{-0.5em}
\centering
\includegraphics[width=3.3in]{figures/ke_design.pdf}
\vspace{-0.5em}
\caption{(a) The QVStore is comprised of multiple vaults. (b) Each vault is comprised of multiple planes. (c) Index generation from feature value.}
\label{fig:ke_design}
\vspace{-1em}
\end{figure}
\rbc{\rbc{Fig.~\ref{fig:ke_design}(a) shows the organization of QVStore as a collection of multiple vaults.}
The purpose of a vault is to record Q-values of all feature-action pairs that Pythia observes for a specific feature type.} A vault can be conceptually visualized as a monolithic two-dimensional table (as shown in Fig.~\ref{fig:ke_design}(a)), indexed by the feature and action values, that stores Q-value for every feature-action pair.
\rbc{However, the key challenge in \rbc{implementing} a vault as a monolithic table}
is that \rbc{the size of the table increases exponentially with a} linear increase in the number of bits used to represent the feature.
This not only makes the monolithic table \rbc{organization} impractical for implementation but also increases the \rbc{design} complexity to satisfy \rbc{its} latency and power requirements.
One way to address this challenge is \rbc{to quantize} the feature space into \rbc{a} small number of tiles.
\rbc{Even though} feature space quantization can achieve a drastic reduction in the monolithic table size,
\rbc{it requires a compromise between the resolution of a feature value and the generalization of feature values.}
We draw inspiration from \emph{tile coding}~\cite{rl_bible, cmac, rlmc} to strike a balance between resolution and generalization.
\rbc{Tile coding uses} \emph{multiple overlapping} hash functions to quantize a feature value into smaller \emph{tiles}. The quantization achieves generalization of similar feature values, whereas multiple hash functions increase resolution to represent a feature value.
We leverage the idea of tile coding to organize a vault as a collection of $N$ small two-dimensional tables, each \rbc{of which} we call a \emph{plane}.
Each plane entry stores a \emph{partial} Q-value of a feature-action pair.\footnote{\rbc{Our application of tile coding is similar to that used in \rbc{the} self-optimizing memory controller (RLMC)~\cite{rlmc}. The key difference is that RLMC uses a hybrid combination of feature and action values to index \emph{single-dimensional} planes, whereas Pythia uses feature and action values \rbc{\emph{separately}} to index \emph{two-dimensional} planes.}} As \rbc{Fig.~\ref{fig:ke_design}(c)} shows, \rbc{to retrieve a feature-action Q-value $Q(\phi_{S}^{i},A)$,} the given feature is first shifted by a shifting constant \rbc{(which is randomly selected at design time)}, followed by a hashing to get the feature index for the given plane. This feature index, along with the action index, is used to \rbc{retrieve} the partial Q-value from the plane. The final feature-action Q-value is computed \rbc{as the \emph{sum of all}} the partial Q-values from \rbc{all} planes, \rbc{as shown in Fig.~\ref{fig:ke_design}(b)}.
\rbc{The use of tile coding provides two key advantages to Pythia. First, the tile coding of a feature enables the sharing of partial Q-values between similar feature values, which shortens prefetcher training time. Second, multiple planes reduce the chance of sharing partial Q-values between widely different feature values.}
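The per-plane lookup in Fig.~\ref{fig:ke_design}(c) can be sketched as follows. The shift constants and the hash function here are placeholders; the actual index functions are fixed at design time.

```python
NUM_ENTRIES = 128  # feature-index range per plane (illustrative size)

def feature_action_q(planes, shifts, feature, action):
    """Tile coding: sum partial Q-values across planes.
    Each plane is a dict indexed by (shifted+hashed feature, action)."""
    total = 0.0
    for plane, shift in zip(planes, shifts):
        idx = hash(feature >> shift) % NUM_ENTRIES  # placeholder hash
        total += plane.get((idx, action), 0.0)      # partial Q-value
    return total
```

Similar feature values map to the same tile in at least some planes (generalization), while differing shift constants across planes keep distinct values distinguishable (resolution).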
\subsubsection{\textbf{Pipelined Organization of QVStore Search}}\label{sec:design_config_pipeline}
\rbc{To generate a} prefetch request, Pythia has to \rbc{(1)} look up the QVStore with the state-vector extracted from the current demand request, and (2) search for the action that has the maximum state-action Q-value (\circled{3} in Fig.~\ref{fig:overview}). As a result, the search operation lies on Pythia's critical path and directly impacts Pythia's prediction latency. To improve the prediction latency, we pipeline the search operation.
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[scale=0.26]{figures/pipeline.pdf}
\vspace{-1em}
\caption{\rbc{Pipelined} organization of QVStore search operation. The illustration depicts three program features, each having three planes.
}
\label{fig:pythia_pipeline}
\vspace{-2em}
\end{figure}
\rbc{The Q-value search operation is implemented in the following way. For a given state-vector, Pythia iteratively retrieves the Q-value of each action. Pythia also maintains a variable, $Q_{max}$, that tracks the maximum Q-value found so far. $Q_{max}$ gets compared \rbc{to} every retrieved Q-value. The search operation concludes when Q-values for \emph{all} \rbc{possible} actions \rbc{have} been retrieved.}
We pipeline the search operation into five stages \rbc{as Fig.~\ref{fig:pythia_pipeline} shows}.
Pythia first computes the index for each plane and each constituent feature \rbc{of the given state-vector} \rbc{(Stage 0)}. \rbc{\rbc{In \rbc{Stage} 1}, Pythia uses the feature indices} and an action index to retrieve \rbc{the} partial Q-values from each plane. In \rbc{Stage 2}, Pythia \rbc{sums up} the partial Q-values to get the feature-action Q-value for each constituent feature. \rbc{In Stage 3}, Pythia computes \rbc{the maximum of all feature-action Q-values to get the state-action Q-value}. In \rbc{Stage} 4, \rbc{the maximum state-action Q-value found so far} is compared against the retrieved state-action Q-value, and the maximum Q-value is updated. Stage 2 (i.e., the partial Q-value summation) \rbc{is} the longest stage \rbc{of the pipeline and thus it} dictates the pipeline's throughput.
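Ignoring pipelining, the search reduces to an argmax loop over the action list while tracking $Q_{max}$. The sketch below is a functional model, not the hardware implementation.

```python
def search_max_action(vaults, state_vector, actions):
    """Return (best_action, Q_max) for the given state-vector.
    Hardware overlaps these steps across the five pipeline stages."""
    q_max, best = float('-inf'), None
    for a in actions:
        # Per-action Q-value: max over per-feature Q-values (Eqn. 2)
        q = max(v.get((phi, a), 0.0) for v, phi in zip(vaults, state_vector))
        if q > q_max:           # track the running maximum
            q_max, best = q, a
    return best, q_max
```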
\rbc{We accurately measure the area and power overhead of the pipelined implementation of the search operation by modeling Pythia using the Chisel~\cite{chisel} hardware design language and synthesizing the model using the Synopsys design compiler~\cite{synopsys_dc} and a 14-nm library from GlobalFoundries~\cite{global_foundries} (\cref{sec:eval_overhead}).}
\subsubsection{\textbf{Assigning Rewards and Updating Q-values}} \label{sec:ke_update}
\rbc{
To track the usefulness of prefetched requests, Pythia maintains in EQ a first-in-first-out list of recently-taken actions, along with their corresponding prefetch addresses.
\emph{Every} prefetch action
is inserted into EQ. A reward gets assigned to every EQ entry before or \rbc{when} it gets evicted from EQ.
During eviction, the reward and the state-action pair \rbc{associated with} the evicted EQ entry are used to update \rbc{the corresponding} Q-value in QVStore (\circled{6} in Fig.~\ref{fig:overview}).
}
\rbc{We describe how Pythia appropriately assigns rewards to each EQ entry.
We divide the reward assignment into three classes based on \emph{when} the reward gets assigned to an entry:
(1) immediate reward assignment during EQ insertion, (2) reward assignment during EQ residency, and (3) reward assignment during EQ eviction.
}
\rbc{If Pythia selects \rbc{the} action \rbc{\emph{not to prefetch}} or one that generates a prefetch request beyond the current physical page, Pythia immediately assigns \rbc{a} reward to the EQ entry. \rbc{For an} out-of-page prefetch action, Pythia assigns $\mathcal{R}_{CL}$. \rbc{For the action \emph{not to prefetch}}, Pythia assigns $\mathcal{R}_{NP}^{H}$ or $\mathcal{R}_{NP}^{L}$, \rbc{based on whether} the current system memory bandwidth usage is high or low.
If \rbc{the} address of a demand request matches with the prefetch address stored in an EQ entry during its residency,
Pythia assigns $\mathcal{R}_{AT}$ or $\mathcal{R}_{AL}$ based on the \emph{filled} bit of the EQ entry. If the filled bit is set, it indicates that the demand request is generated \emph{after} the prefetch fill. Hence the prefetch is accurate and timely, and Pythia assigns the reward $\mathcal{R}_{AT}$. Otherwise, Pythia assigns the reward $\mathcal{R}_{AL}$.
If a reward does not get assigned to an EQ entry \rbc{until it is going to be evicted}, it signifies that the corresponding prefetch address is not yet demanded by the processor. Thus, Pythia assigns a reward $\mathcal{R}_{IN}^{H}$ or $\mathcal{R}_{IN}^{L}$ to the entry during eviction \rbc{based on whether} the current system memory bandwidth usage is high or low.
}
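The three reward-assignment classes can be summarized functionally as follows. The reward values are taken from Table~\ref{tab:pythia_config}; the function form and event names are our illustration.

```python
# Reward levels from the basic Pythia configuration (Table 2).
R_AT, R_AL, R_CL = 20, 12, -12
R_IN_H, R_IN_L = -14, -8
R_NP_H, R_NP_L = -2, -4

def assign_reward(event, bw_high=False, filled=False):
    """Map a prefetch outcome to a reward level.
    event: 'no_prefetch' | 'out_of_page' (on EQ insertion),
           'demand_hit' (during EQ residency), 'evicted' (on EQ eviction)."""
    if event == 'no_prefetch':
        return R_NP_H if bw_high else R_NP_L
    if event == 'out_of_page':
        return R_CL
    if event == 'demand_hit':
        return R_AT if filled else R_AL   # timely vs. late prefetch
    if event == 'evicted':
        return R_IN_H if bw_high else R_IN_L  # inaccurate prefetch
    raise ValueError(event)
```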
\subsection{Automated Design-Space Exploration}\label{sec:tuning}
\rbc{We} propose an automated, performance-driven approach to systematically explore Pythia's vast design space and
derive a \rbcb{basic} configuration\footnote{Using a compute-grid with ten $28$-core machines, the automated exploration across $150$ workload traces (mentioned in detail in \cref{sec:methodology}) takes $44$ hours to complete.} with appropriate program features, action set, reward and hyperparameters.
\rbc{Table~\ref{tab:pythia_config} shows the \rbcb{basic} configuration.}
\begin{table}[h]
\vspace{-0.5em}
\centering
\footnotesize
\caption{\rbc{Basic} Pythia \rbc{configuration} derived from \rbc{our} automated design-space exploration}
\vspace{-0.5em}
\small
\begin{tabular}{m{9.5em}m{18em}}
\thickhline
\textbf{Features} & \texttt{PC+Delta}, \texttt{Sequence of last-4 deltas}\\
\hline
\textbf{Prefetch Action List} & \{-6,-3,-1,0,1,3,4,5,10,11,12,16,22,23,30,32\} \\
\hline
\textbf{Reward Level Values} & $\mathcal{R}_{AT}$=$20$, $\mathcal{R}_{AL}$=$12$, $\mathcal{R}_{CL}$=$-12$, $\mathcal{R}_{IN}^{H}$=$-14$, $\mathcal{R}_{IN}^{L}$=$-8$, $\mathcal{R}_{NP}^{H}$=$-2$, $\mathcal{R}_{NP}^{L}$=$-4$\\
\hline
\textbf{Hyperparameters} & $\alpha=0.0065$, $\gamma=0.556$, $\epsilon=0.002$ \\
\thickhline
\end{tabular}
\label{tab:pythia_config}
\vspace{-1em}
\end{table}
\subsubsection{\textbf{Feature Selection}}\label{sec:tuning_feature_selection}
\rbc{We derive a list of possible program features for feature-space exploration in \rbc{four} steps. First, we derive a list of $4$ control-flow components and $8$ data-flow components, which are mentioned in Table~\ref{table:list_of_features}. \rbc{Second,} we combine each control-flow component with each data-flow component \rbc{using the} concatenation operation, to \rbc{obtain} a total of $32$ possible program features.}
\rbc{Third,} we use \rbc{the} linear regression technique~\cite{lr1,lr2,lr3} to create any-one, any-two, and any-three feature-combinations from the set of $32$ initial features, each providing a different state-vector. \rbc{Fourth,} we run Pythia with every state-vector across all single-core workloads (\cref{sec:methodology}) and select the \rbc{winning} state-vector that provides the highest performance gain over the no-prefetching baseline. \rbc{As Table~\ref{tab:pythia_config} shows}, the two constituent features of the winning state-vector are \texttt{PC+Delta} and \texttt{Sequence of last-4 deltas}.
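The first two steps can be sketched as follows; the component names are paraphrased from Table~\ref{table:list_of_features}.

```python
from itertools import product

control_flow = ["PC", "PC-path", "PC^branch-PC", "None"]
data_flow = ["cacheline addr", "page number", "page offset", "delta",
             "last-4 offsets", "last-4 deltas", "offset^delta", "None"]

# Concatenate every control-flow component with every data-flow component.
features = [f"{c}+{d}" for c, d in product(control_flow, data_flow)]
```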
\begin{table}[htbp]
\centering
\footnotesize
\caption{List of program control-flow and data-flow components used to derive the list of features for exploration}
\vspace{-1em}
\small
\begin{tabular}{m{13.8em}m{13.8em}}
\toprule
\textbf{\footnotesize Control-flow Component} & \textbf{\footnotesize Data-flow Component} \\
\midrule
\begin{minipage}{13.7em}
\footnotesize
\vskip 14pt
\begin{enumerate}[leftmargin=1.8em]
\item PC of load request
\item PC-path (XOR-ed last-3 PCs)
\item PC XOR-ed branch-PC
\item None
\end{enumerate}
\vskip 4pt
\end{minipage} &
\begin{minipage}{13.7em}
\footnotesize
\vskip 1pt
\begin{enumerate}[leftmargin=1.8em]
\item Load cacheline address
\item Page number
\item Page offset
\item Load address delta
\item Sequence of last-4 offsets
\item Sequence of last-4 deltas
\item Offset XOR-ed with delta
\item None
\end{enumerate}
\vskip 4pt
\end{minipage}\\
\bottomrule
\end{tabular}%
\label{table:list_of_features}%
\vspace{-1em}
\end{table}%
\textbf{Rationale behind the \rbc{winning} state-vector.} The \rbc{winning} state-vector is intuitive as \rbc{its} constituent features \texttt{PC+Delta} and \texttt{Sequence of last-4 deltas} closely match with the program features exploited by \rbc{two prior state-of-the-art prefetchers}, Bingo~\cite{bingo} and SPP~\cite{spp}, respectively.
However, concurrently running SPP and Bingo \rbc{as a hybrid prefetcher does not provide} the same \rbc{performance} benefit as Pythia, as we show in \cref{sec:eval_perf_1T}. This is because combining SPP with Bingo not only \rbc{improves} their prefetch coverage, but also combines \rbc{their} prefetch overpredictions, \rbc{leading to performance degradation, especially in resource-constrained systems}. \rbc{In contrast}, Pythia's RL-based learning strategy \rbc{that inherently uses} the same two features successfully \rbc{increases} prefetch coverage, while maintaining \rbc{high prefetch accuracy}. As a result, \emph{Pythia not only outperforms SPP and Bingo individually, but also outperforms \rbc{the} combination of \rbc{the two} prefetchers}.
\subsubsection{\textbf{Action Selection}}\label{sec:tuning_action_selection}
In a system with \rbc{conventionally}-sized $4$KB pages and $64$B cachelines, Pythia's list of actions \rbc{(i.e., the list of possible prefetch offsets)} contains \emph{all} prefetch offsets in the range of $[-63,63]$. \rbc{However, such} a long action list poses two drawbacks. First, a long action list requires \rbc{more online exploration} to find the \rbc{best} prefetch offset \rbc{given a state-vector, \rbc{thereby} reducing Pythia's performance benefits}.
Second, a longer \rbc{action} list increases Pythia's storage \rbc{requirements}.
\rbc{To avoid these problems, we prune the action list. We drop each action individually from the full action list $[-63,63]$ and measure the performance improvement relative to the performance improvement with the full action list, across all single-core workload traces. We prune any action that \emph{does not} have any significant impact on the performance.}
\rbc{Table~\ref{tab:pythia_config} shows} the final pruned action list.
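The leave-one-out pruning can be sketched as follows. The `evaluate` function is a hypothetical stand-in for a full simulation campaign, and the significance threshold is arbitrary.

```python
def prune_actions(actions, evaluate, threshold=0.001):
    """Drop each action whose removal does not significantly hurt
    performance. `evaluate(action_list)` returns the mean speedup of
    Pythia with that action list (hypothetical simulation stand-in)."""
    baseline = evaluate(actions)
    kept = []
    for a in actions:
        reduced = [x for x in actions if x != a]
        if baseline - evaluate(reduced) > threshold:
            kept.append(a)   # removing `a` hurts performance -> keep it
    return kept
```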
\subsubsection{\textbf{Reward and Hyperparameter Tuning}}\label{sec:tuning_reward_hyp_selection}
\begin{sloppypar}
We separately tune \rbc{seven} reward \rbc{level values (i.e., $\mathcal{R}_{AT}$, $\mathcal{R}_{AL}$, $\mathcal{R}_{CL}$, $\mathcal{R}_{IN}^{H}$, $\mathcal{R}_{IN}^{L}$, $\mathcal{R}_{NP}^{H}$, and $\mathcal{R}_{NP}^{L}$) and three hyperparameters (i.e., learning rate $\alpha$, discount factor $\gamma$, and exploration rate $\epsilon$)}
in three steps.
\rbc{First,} we create a test trace suite by \emph{randomly} selecting $10$ \rbc{workload} traces from \rbc{all of our} $150$ \rbc{workload} traces (\cref{sec:methodology}).
\rbc{Second}, \rbc{we create a list of tuning configurations} using \rbc{the} uniform grid search technique~\cite{grid_search1,grid_search2}. \rbc{To do so,} we first define a value range for each parameter to be tuned and divide the \rbc{value} range into uniform grids. For example, each of the three hyperparameters ($\alpha$, $\gamma$, and $\epsilon$) can take \rbc{a} value in the range of $[0,1]$. We divide each hyperparameter \rbc{range} into ten \rbc{exponentially}-sized grids (i.e., $10^{0}$, $10^{-1}$, $10^{-2}$, etc.) to \rbc{obtain} $10\times10\times10=1000$ possible tuning configurations.
\rbc{For each tuning configuration,} we run Pythia on the test trace suite and select the top-$25$ highest-performing configurations for the third step.
\rbc{Third}, we run Pythia on all single-core \rbc{workload} traces using each of the $25$ selected configurations.
We select the \rbc{winning} configuration that provides the highest \rbc{average} performance gain.
\rbc{Table~\ref{tab:pythia_config} provides}
\rbc{reward level and hyperparameter values of \rbc{the \rbcb{basic} Pythia}.}
\end{sloppypar}
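The grid construction in step two above can be sketched as follows (exponentially spaced grid points, as described in the text):

```python
from itertools import product

# Ten exponentially spaced grid points per hyperparameter in [0, 1].
grid = [10.0 ** -i for i in range(10)]   # 1e0, 1e-1, ..., 1e-9

# 10 x 10 x 10 = 1000 candidate (alpha, gamma, epsilon) configurations.
configs = list(product(grid, grid, grid))
```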
\subsection{Storage Overhead}
\rbc{Table~\ref{table:pythia_storage} shows the storage overhead of Pythia in its \rbcb{basic} configuration. Pythia requires only $25.5$KB of metadata storage. QVStore consumes $24$KB to store all Q-values. The EQ consumes only $1.5$KB.}
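The totals in Table~\ref{table:pythia_storage} follow directly from the listed parameters:

```python
# QVStore: 2 vaults x 3 planes x (128 x 16) entries x 16-bit Q-values
qvstore_bits = 2 * 3 * (128 * 16) * 16
qvstore_kb = qvstore_bits / 8 / 1024          # 24.0 KB

# EQ: 256 entries x (21 + 5 + 5 + 1 + 16) bits per entry
eq_bits = 256 * (21 + 5 + 5 + 1 + 16)
eq_kb = eq_bits / 8 / 1024                    # 1.5 KB

total_kb = qvstore_kb + eq_kb                 # 25.5 KB
```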
\begin{table}[h]
\centering
\footnotesize
\caption{Storage overhead of Pythia}
\vspace{-0.5em}
\small
\begin{tabular}{|m{4em}||m{18.7em}|c|}
\hline
\textbf{\footnotesize Structure} & \textbf{\footnotesize Description} & \textbf{\footnotesize Size} \\
\hline
\hline
\begin{minipage}{4em}
\footnotesize
\vspace{11pt}
\textbf{QVStore}
\end{minipage}
&
\begin{minipage}{19em}
\footnotesize
\vskip 2pt
\begin{itemize}[leftmargin=1em]
\item \# vaults = $2$
\item \# planes in each vault = $3$
\item \# entries in each plane = feature dimension ($128$) $\times$ action dimension ($16$)
\item Entry size = Q-value width ($16$b)
\end{itemize}
\vskip 2pt
\end{minipage} &
\begin{minipage}{3em}
\footnotesize
\vspace{16pt}
\textbf{24 KB}
\end{minipage}
\\
\hline
\begin{minipage}{3em}
\footnotesize
\vspace{8pt}
\textbf{EQ}
\end{minipage} &
\begin{minipage}{19em}
\footnotesize
\vskip 2pt
\begin{itemize}[leftmargin=1em]
\item \# entries = 256
\item Entry size = state ($21$b) + action index ($5$b) + reward ($5$b) + filled-bit ($1$b) + address ($16$b)
\end{itemize}
\vskip 2pt
\end{minipage} &
\begin{minipage}{3em}
\footnotesize
\vspace{10pt}
\textbf{1.5 KB}
\end{minipage} \\
\hline
\hline
\textbf{\footnotesize Total} & & \textbf{\footnotesize 25.5 KB} \\
\hline
\end{tabular}%
\label{table:pythia_storage}%
\vspace{-2em}
\end{table}
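The sizes in Table~\ref{table:pythia_storage} follow directly from the listed parameters; a quick arithmetic cross-check:

```python
# Cross-check of the metadata sizes (all quantities taken from the table)
BITS_PER_KB = 8 * 1024

# QVStore: 2 vaults x 3 planes x (128 features x 16 actions) entries,
# each entry holding a 16-bit Q-value
qvstore_kb = 2 * 3 * (128 * 16) * 16 / BITS_PER_KB

# EQ: 256 entries x (21b state + 5b action index + 5b reward
#                    + 1b filled-bit + 16b address)
eq_kb = 256 * (21 + 5 + 5 + 1 + 16) / BITS_PER_KB

total_kb = qvstore_kb + eq_kb  # 24 + 1.5 = 25.5 KB
```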
\subsection{Differences \rbc{from} Prior Work} \label{sec:compare_with_context}
The key idea of using RL in prefetching has been previously explored by \rbc{the} \rbc{\emph{context prefetcher} (CP)}~\cite{peled_rl}. Pythia significantly differs from it both in terms of (1) design \rbc{principles} (i.e., the reward, state, and action definition) and (2) the implementation.
\textbf{Reward.} \rbc{CP} naively defines the agent's reward as a continuous function of prefetch timeliness. Pythia not only considers coverage, accuracy, and timeliness but also system-level feedback like \rbc{memory} bandwidth usage to define discrete reward levels. This reward definition provides two key advantages to Pythia. First, unlike \rbc{CP}, Pythia can adaptively \rbc{trade off} prefetch coverage for accuracy (and \rbc{vice versa}) based on memory bandwidth usage.
Second, one can easily customize \rbc{Pythia's} objective
by changing the reward \rbc{values} via configuration registers to extract \rbc{even} higher performance.
\textbf{State.} \rbc{CP} relies on \rbc{compiler-generated} hints \rbc{in its} state information. In contrast, Pythia extracts program features purely from hardware (e.g., PC, cacheline delta).
\rbc{Thus, Pythia requires no changes to software and it is easier to adopt into existing microprocessors.}
\textbf{Action.} Unlike Pythia, \rbc{CP} uses a full cacheline address as the agent's action.
\rbc{Using the full cacheline address as the action dramatically increases} the action space, \rbc{which results in higher storage cost, longer training time, and reduced performance benefits.}
\begin{sloppypar}
\textbf{Implementation.}
Pythia's implementation differs \rbc{substantially} from CP in two major ways.
First, \rbc{CP} uses the contextual-bandit (CB) algorithm~\cite{chu2011contextual}, a simplified version of RL.
The key difference between CB and RL is that a CB-solver cannot take \rbc{its} actions' \rbc{\emph{long-term}} consequences into account \rbc{when} selecting an action.
In contrast, RL-based Pythia weighs each probable prefetch action not only based on the expected immediate reward but also its long-term consequences (e.g., increased bandwidth usage or reduced prefetch accuracy in future)~\cite{rl_bible}. \rbc{As such,} Pythia provides more robust \rbc{and \rbc{far-sighted}} predictions than the \rbc{myopic} CB-based \rbc{CP}.
Second, Pythia organizes the Q-value storage \rbc{into} multiple vaults, each consisting of multiple planes. This hierarchical QVStore structure (1) enables pipelining the Q-value lookup to achieve high-throughput and low-latency prediction, and (2) easily scales out to support longer state-vectors by \rbc{simply} adding more vaults.
\end{sloppypar}
\section{Pythia: Key Idea}
In this work, we formulate prefetching as a reinforcement learning problem, as shown in Fig.~\ref{fig:rl_as_prefetcher}. Specifically, we formulate Pythia as an RL-agent that learns to make accurate, timely, and system-aware prefetch decisions by interacting with the environment, i.e., the processor and the memory subsystem.
Each timestep corresponds to a new demand request seen by Pythia. With every new demand request, Pythia observes the state of the processor and the memory subsystem and takes a prefetch action.
\rbc{For every prefetch action (including \rbc{\emph{not to prefetch}}), Pythia receives a numerical reward \rbc{that} evaluates the accuracy and timeliness of the prefetch action \rbc{taking into account} various system-level feedback information.
\emph{Pythia's goal is to find the optimal prefetching policy that would maximize the number of accurate and timely prefetch requests, \rbc{taking} system-level feedback information \rbc{into account}}. While Pythia's framework is general enough to incorporate any type of system-level feedback \rbc{into its decision making}, in this paper we demonstrate Pythia using \rbc{\emph{memory bandwidth usage}} as the system-level feedback information.}
\begin{figure}[!h]
\vspace{-0.5em}
\centering
\includegraphics[scale=0.3]{figures/Rl-as-prefetcher.pdf}
\vspace{-1em}
\caption{Formulating \rbc{the} prefetcher as an RL-agent.}
\label{fig:rl_as_prefetcher}
\vspace{-1.2em}
\end{figure}
\subsection{Formulation of the RL-based Prefetcher}\label{sec:key_idea_formulation}
We formally define the three pillars of our RL-based prefetcher: the state \rbc{space}, \rbc{the} actions, and the \rbc{reward} scheme.
\textbf{State.}
We define the state as a $k$-dimensional vector of program features.
\begin{equation}\label{eq:state}
S \equiv \{ \phi_{S}^{1}, \phi_{S}^{2}, \ldots, \phi_{S}^{k} \}
\end{equation}
Each program feature is composed of \rbc{at most} two components: (1) program control-flow component, and (2) program data-flow component. \rbc{The} control-flow component is further made \rbc{up} of simple information like \rbc{load-PC (i.e., the PC of a load instruction)} or \rbc{branch-PC (i.e., the PC of a branch instruction that immediately precedes a load instruction)}, and a history that denotes whether this information is extracted only from the current demand request or a series of past demand requests.
Similarly, the data-flow component is made \rbc{up} of simple information like cacheline address, \rbc{physical} page \rbc{number}, page offset, cacheline delta, and its corresponding history.
Table~\ref{table:features} shows some example program features.
\rbc{Although} Pythia can \rbc{theoretically} learn to prefetch \rbc{using} \emph{any} number of such program features, we fix the state-vector dimension (i.e., $k$) at design time \rbc{given a limited storage budget in hardware}. However, the exact selection of $k$ program features out of all possible program features is configurable \rbc{online} using simple configuration registers.
In \cref{sec:tuning_feature_selection}, we \rbc{provide} an \emph{automated feature selection method} to find a vector of program features \rbc{to be used at design time}.
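A minimal sketch of how a $k=2$ state vector could be formed from two example features of Table~\ref{table:features}. The folding that maps a feature value into the plane's $128$-entry feature dimension is an assumption for illustration only, not Pythia's actual hardware hash:

```python
def state_vector(pc_history, delta_history, dim=128):
    """Illustrative k=2 state from two example features:
    'PC+Delta' (current PC, current delta) and 'Last 4-deltas'.
    The folds below are stand-in hashes, not Pythia's hardware hash."""
    pc, delta = pc_history[-1], delta_history[-1]
    # Feature 1: combine control-flow (PC) and data-flow (delta) info
    pc_plus_delta = (pc ^ ((delta & 0x3F) << 7)) % dim
    # Feature 2: fold the history of the last 4 cacheline deltas
    last4 = 0
    for d in delta_history[-4:]:
        last4 = (last4 * 131 + (d & 0x7F)) % dim
    return [pc_plus_delta, last4]
```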
\begin{table}[htbp]
\centering
\footnotesize
\caption{Example program features}
\vspace{-0.5em}
\begin{tabular}{m{8.8em}C{3em}C{4em}C{6.7em}C{4em}}
\toprule
\multicolumn{1}{l}{\multirow{2}[4]{*}{\textbf{Feature}}} & \multicolumn{2}{c}{\textbf{Control-flow}} & \multicolumn{2}{c}{\textbf{Data-flow}} \\
\cmidrule{2-5} & \textbf{Info.} & \textbf{History} & \textbf{Info.} & \textbf{History} \\
\midrule
Last 3-PCs & PC & last 3 & \ding{54} & \ding{54} \\
Last 4-deltas & \ding{54} & \ding{54} & Cacheline delta & last 4 \\
PC+Delta & PC & current & Cacheline delta & current \\
Last 4-PCs+Page \rbc{no.} & PC & last 4 & Page \rbc{no.} & current \\
\bottomrule
\end{tabular}%
\label{table:features}%
\vspace{-2em}
\end{table}%
\textbf{Action.}
We define the action of the RL-agent as selecting a \rbc{\emph{prefetch offset}} (i.e., \rbc{a} delta, \rbc{``O'' in Fig.~\ref{fig:rl_as_prefetcher}}, between the predicted and the \rbc{demanded} cacheline address) from a set of candidate prefetch offsets.
As every post-L1\rbc{-cache} prefetcher generates prefetch requests within a physical page~\cite{ampm,fdp,footprint,sms,sms_mod,spp,vldp,sandbox,bop,dol,dspatch,bingo,mlop,ppf,ipcp}, the list of prefetch offsets only contains values in the range of $[-63,63]$ for a system with a traditionally-sized $4$KB page and $64$B cacheline.
Using prefetch offsets as \rbc{actions} (instead of full cacheline addresses)
drastically reduces the action space size. We further reduce the action space size by fine tuning, as described in \cref{sec:tuning_action_selection}.
\rbc{A prefetch offset of} zero means no \rbc{prefetch is generated}.
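The offset-based action semantics above can be illustrated as follows (a sketch assuming the $4$KB page and $64$B cacheline stated above; \texttt{prefetch\_address} is an illustrative helper, not from Pythia's source):

```python
PAGE_SIZE = 4096   # traditionally-sized physical page
LINE_SIZE = 64     # cacheline size; 64 lines/page -> offsets in [-63, 63]

def prefetch_address(demand_addr, offset):
    """Apply a prefetch offset (in cachelines) to a demanded address.
    Returns None when the offset is 0 (no prefetch) or when the target
    would leave the demanded physical page."""
    if offset == 0:
        return None
    line = demand_addr // LINE_SIZE + offset
    if (line * LINE_SIZE) // PAGE_SIZE != demand_addr // PAGE_SIZE:
        return None  # crossing the page boundary: drop the prefetch
    return line * LINE_SIZE
```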
\textbf{Reward.}
The reward structure defines the prefetcher's objective.
We define five different reward levels as follows.
\begin{itemize}
\item \textbf{\emph{Accurate and timely}} ($\mathcal{R}_{AT}$). This reward is assigned to an action whose corresponding prefetch address gets demanded \emph{after} the prefetch fill.
\item \textbf{\emph{Accurate but \rbc{late}}} ($\mathcal{R}_{AL}$). This reward is assigned to an action whose corresponding prefetch address gets demanded \emph{before} the prefetch fill.
\item \textbf{\emph{Loss of coverage}} ($\mathcal{R}_{CL}$). This reward is assigned to an action whose corresponding prefetch address \rbc{is to a different physical page than the demand access that led to the prefetch.}
\item \textbf{\emph{Inaccurate}} ($\mathcal{R}_{IN}$). This reward is assigned to an action whose corresponding prefetch address does \emph{not} get demanded in a temporal window. The reward is classified into two sub-levels: inaccurate given low bandwidth \rbc{usage} ($\mathcal{R}_{IN}^{L}$) and inaccurate given high bandwidth \rbc{usage} ($\mathcal{R}_{IN}^{H}$).
\item \textbf{\emph{No-prefetch}} ($\mathcal{R}_{NP}$). This reward is assigned when Pythia decides not to prefetch. This reward level is also classified into two sub-levels: no-prefetch given low bandwidth \rbc{usage} ($\mathcal{R}_{NP}^{L}$) and no-prefetch given high bandwidth \rbc{usage} ($\mathcal{R}_{NP}^{H}$).
\end{itemize}
\rbc{By increasing (decreasing) a reward level value, we encourage (discourage) Pythia to collect such rewards from the environment in the future.}
$\mathcal{R}_{AT}$ and $\mathcal{R}_{AL}$ \rbc{are} used to guide Pythia to generate more accurate and timely prefetch requests.
$\mathcal{R}_{CL}$ is used to guide Pythia to generate prefetches within the physical page of the triggering demand request.
$\mathcal{R}_{IN}$ and $\mathcal{R}_{NP}$ are used to define Pythia's prefetching strategy with respect to memory bandwidth usage feedback.
In \cref{sec:tuning_reward_hyp_selection}, we \rbc{provide} an \emph{automated method} to configure the reward values.
The reward values can be easily customized further for target workload suites to extract higher performance gains (\cref{sec:eval_pythia_custom}).
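The reward structure can be sketched as a small lookup table. The numeric values below are purely illustrative placeholders (Pythia's tuned values are set via configuration registers), chosen only to preserve the ordering described above:

```python
# Illustrative reward values only -- NOT Pythia's tuned values.
# They preserve the intended ordering: accurate prefetches are
# reinforced, and not prefetching is safer than an inaccurate
# prefetch under high bandwidth usage.
REWARDS = {
    "accurate_timely":              20,   # R_AT
    "accurate_late":                12,   # R_AL
    "loss_of_coverage":            -12,   # R_CL
    ("inaccurate", "low_bw"):       -4,   # R_IN^L
    ("inaccurate", "high_bw"):      -8,   # R_IN^H
    ("no_prefetch", "low_bw"):      -2,   # R_NP^L
    ("no_prefetch", "high_bw"):      0,   # R_NP^H
}

def reward(outcome, bw_level=None):
    # bw_level selects the bandwidth-usage sub-level of R_IN / R_NP
    key = outcome if bw_level is None else (outcome, bw_level)
    return REWARDS[key]
```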
\section{Methodology} \label{sec:methodology}
We use \rbc{the trace-driven} ChampSim simulator~\mbox{\cite{champsim}} to evaluate Pythia \rbc{and compare it to five} prior prefetching proposals.
We simulate an Intel Skylake~\mbox{\cite{skylake}}-like multi-core processor that supports \rbc{up to} 12 cores. \rbc{Table~\ref{table:sim_params} provides the \rbc{key system} parameters.}
For single-core simulations ($1C$), we warm up the core using $100$~M instructions \rbc{from each workload} and simulate the next $500$~M instructions. For multi-core multi-programmed simulations ($nC$), we use $50$~M and $150$~M instructions \rbc{from each workload} respectively to warm up and simulate. If a core finishes early, the workload is replayed \rbc{until} every core finishes simulating $150$~M instructions.
We also \rbc{implement} Pythia using \rbc{the} Chisel~\cite{chisel} \rbc{hardware design language (HDL)} and functionally verify the resultant \rbc{register transfer logic (RTL) design} to accurately measure Pythia's \rbc{chip} area and power overhead.
The source-code of Pythia is freely available at~\cite{pythia_github}.
\begin{table}[h]
\centering
\footnotesize
\caption{\rbc{Simulated system} parameters}
\vspace{-0.5em}
\small
\begin{tabular}{m{5.1em}m{22.7em}}
\thickhline
\textbf{\footnotesize Core} & {\footnotesize 1-12 cores, 4-wide OoO, 256-entry ROB, \rbc{72/56-entry LQ/SQ}}\\
\hline
\textbf{\footnotesize Branch \rbc{Pred.}} & {\footnotesize Perceptron-based~\cite{perceptron}, 20-cycle misprediction penalty} \\
\hline
\textbf{\footnotesize L1/L2 Caches} & {\footnotesize Private, 32KB/256KB, 64B line, 8 way, LRU, 16/32 MSHRs, 4-cycle/14-cycle round-trip latency} \\
\hline
\textbf{\footnotesize LLC} & {\footnotesize 2MB/core, 64B line, 16 way, SHiP~\cite{ship}, 64 MSHRs per LLC Bank, 34-cycle round-trip latency}\\
\hline
\begin{minipage}{7em}
\footnotesize
\vspace{8pt}
\textbf{Main Memory}
\end{minipage}
&
\begin{minipage}{22.8em}
\footnotesize
\vspace{2pt}
\textbf{1C:} Single channel, 1 rank/channel; \textbf{4C:} Dual channel, 2 ranks/channel; \textbf{8C:} Quad channel, 2 ranks/channel; \\ 8 banks/rank, 2400 MTPS, 64b data-bus/channel, 2KB row buffer/bank, tRCD=15ns, tRP=15ns, tCAS=12.5ns
\vspace{2pt}
\end{minipage}\\
\thickhline
\end{tabular}
\label{table:sim_params}
\end{table}
\subsection{Workloads}
\begin{sloppypar}
\rbc{We evaluate Pythia using a diverse set of memory-intensive workloads} spanning \texttt{SPEC CPU2006}~\cite{spec2006}, \texttt{SPEC CPU2017}~\cite{spec2017}, \texttt{PARSEC 2.1}~\cite{parsec}, \texttt{Ligra}~\cite{ligra}, and \texttt{Cloudsuite}~\cite{cloudsuite} \rbc{benchmark suites}.
\rbc{For \texttt{SPEC CPU2006} and \texttt{SPEC CPU2017} workloads, we reuse the instruction traces provided by \rbc{the} 2nd and \rbc{the} 3rd data prefetching championships (DPC)~\cite{dpc2, dpc3}. For \texttt{PARSEC} and \texttt{Ligra} workloads, we collect the instruction traces using \rbc{the} Intel Pin dynamic binary instrumentation tool~\cite{intel_pin}.
We do not consider workload traces that \rbc{have} lower than 3 last-level cache misses per kilo instructions (MPKI) in the baseline system with no prefetching. In all, we present results for $150$ workload traces spanning $50$ workloads. Table~\ref{table:workloads} shows a categorized view of all the workloads \rbc{evaluated} in this paper.}
For multi-core multi-programmed simulations, we create \rbc{both homogeneous and heterogeneous} trace mixes from our single-core trace list. For an $n$-core homogeneous trace mix, we run $n$ copies of \rbc{a} trace from our single-core trace list, one in each core. For \rbc{a} heterogeneous trace mix, we \emph{randomly} select $n$ traces from our single-core trace list and run one trace in every core. All the single-core traces and multi-programmed trace mixes used in our evaluation are freely available online~\cite{pythia_github}.
\end{sloppypar}
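The mix-generation procedure can be sketched as below (an illustrative sketch; sampling traces without replacement for heterogeneous mixes is our assumption):

```python
import random

def homogeneous_mix(trace, n):
    # n copies of one single-core trace, one per core
    return [trace] * n

def heterogeneous_mix(trace_list, n, seed=0):
    # n randomly chosen traces from the single-core list, one per core
    # (distinct traces, sampled without replacement; seeded so that the
    #  same mixes can be regenerated)
    return random.Random(seed).sample(trace_list, n)
```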
\begin{table}[htbp]
\vspace{-0.5em}
\centering
\footnotesize
\caption{Workloads used for evaluation}
\vspace{-0.8em}
\small
\begin{tabular}{m{3em}C{5em}C{2em}m{13em}}
\toprule
\textbf{\footnotesize Suite} & \multicolumn{1}{l}{\textbf{\footnotesize \# Workloads}} & \multicolumn{1}{l}{\textbf{\footnotesize \# Traces}} & \textbf{\footnotesize Example Workloads} \\
\midrule
{\footnotesize SPEC06} & {\footnotesize 16} & {\footnotesize 28} & {\footnotesize gcc, mcf, cactusADM, lbm, ...} \\
{\footnotesize SPEC17} & {\footnotesize 12} & {\footnotesize 18} & {\footnotesize gcc, mcf, pop2, fotonik3d, ...} \\
{\footnotesize PARSEC} & {\footnotesize 5} & {\footnotesize 11} & {\footnotesize canneal, facesim, raytrace, ...} \\
{\footnotesize Ligra} & {\footnotesize 13} & {\footnotesize 40} & {\footnotesize BFS, PageRank, Bellman-ford, ...} \\
{\footnotesize Cloudsuite} & {\footnotesize 4} & {\footnotesize 53} & {\footnotesize cassandra, cloud9, nutch, ...} \\
\bottomrule
\end{tabular}%
\label{table:workloads}%
\vspace{-1em}
\end{table}%
\subsection{Prefetchers}
\begin{sloppypar}
We \rbc{compare} Pythia \rbc{to five state-of-the-art} prior prefetchers: SPP~\cite{spp}, SPP+PPF~\cite{ppf}, SPP+DSPatch~\cite{dspatch}, Bingo~\cite{bingo}, and MLOP~\cite{mlop}.
We model each competing prefetcher using the source-code provided by their respective authors and fine-tune them in our environment to extract the \rbc{highest} performance gain \rbc{across} all single-core traces.
\rbc{Table~\mbox{\ref{table:pref_params}} shows the parameters of all evaluated prefetchers.}
Each prefetcher is trained on \rbc{L1-cache} misses and fills prefetched lines into L2 and LLC.
\end{sloppypar}
We also \rbc{compare} Pythia against multi-level prefetchers found in commercial processors (e.g., stride prefetcher at L1-cache and streamer at L2~\cite{intel_prefetcher}) and IPCP~\cite{ipcp} in \cref{sec:eval_multi_level}. \rbc{For fair comparison, we add a simple PC-based stride prefetcher~\cite{stride,stride_vector,jouppi_prefetch} \rbc{at the} L1 \rbc{level}, along with Pythia at \rbc{the} L2 \rbc{level} for such \rbc{multi-level comparisons}.}
\begin{table}[h]
\centering
\footnotesize
\caption{Configuration of evaluated prefetchers}
\vspace{-0.5em}
\small
\begin{tabular}{m{6em}|m{16.8em}||m{3em}}
\thickhline
{\footnotesize \textbf{SPP}~\cite{spp}} & {\footnotesize 256-entry ST, 512-entry 4-way PT, 8-entry GHR} & \textbf{\footnotesize 6.2~KB} \\
\hline
{\footnotesize \textbf{Bingo}~\cite{bingo}} & {\footnotesize 2KB region, 64/128/4K-entry FT/AT/PHT} & \textbf{\footnotesize 46~KB} \\
\hline
{\footnotesize \textbf{MLOP}~\cite{mlop}} & {\footnotesize 128-entry AMT, 500-update, 16-degree} & \textbf{\footnotesize 8~KB} \\
\hline
{\footnotesize \textbf{DSPatch}~\cite{dspatch}} & {\footnotesize Same configuration as in~\cite{dspatch}} & \textbf{\footnotesize 3.6~KB} \\
\hline
{\footnotesize \textbf{PPF}~\cite{ppf}} & {\footnotesize Same configuration as in~\cite{ppf}} & \textbf{\footnotesize 39.3~KB} \\
\thickhline
{\footnotesize \textbf{Pythia}} & {\footnotesize 2 features, 2 vaults, 3 planes, 16 actions} & \textbf{\footnotesize 25.5~KB}\\
\thickhline
\end{tabular}
\label{table:pref_params}
\vspace{-1em}
\end{table}
\section{Results}\label{sec:evaluation}
\subsection{Coverage and Overprediction in Single-core}\label{sec:eval_cov_acc}
Fig.~\ref{fig:cov_acc} shows the coverage and overprediction of each prefetcher in \rbc{the} single-core \rbc{system}, \rbc{as} measured at the LLC-main memory boundary.
The key takeaway is that Pythia improves prefetch coverage, while simultaneously reducing overprediction \rbc{compared to} state-of-the-art prefetchers. On average, Pythia provides $6.9$\%, $8.8$\%, and $14$\% \rbc{higher} coverage than MLOP, Bingo, and SPP respectively, while generating $83.8$\%, $78.2$\%, and $3.6$\% \rbc{fewer} overpredictions.
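For reference, the two metrics in Fig.~\ref{fig:cov_acc} can be computed as follows (a sketch of the standard definitions, with both metrics normalized to baseline LLC misses as in the figure):

```python
def coverage(baseline_misses, misses_with_prefetcher):
    # Fraction of baseline LLC misses eliminated by the prefetcher
    return (baseline_misses - misses_with_prefetcher) / baseline_misses

def overprediction(useless_prefetches, baseline_misses):
    # Prefetches that are never demanded, normalized to baseline misses
    return useless_prefetches / baseline_misses
```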
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[width=3.3in]{figures/cov_acc.pdf}
\vspace{-0.5em}
\caption{Coverage and overprediction with respect to the baseline LLC misses in \rbc{the} single-core \rbc{system}.
}
\label{fig:cov_acc}
\vspace{-1em}
\end{figure}
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.2257]{figures/scalability.pdf}
\vspace{-2em}
\caption{Average performance improvement of prefetchers in \rbc{systems} with varying (a) number of cores, (b) DRAM \rbc{million transfers per second (MTPS)}, (c) LLC size, and (d) prefetching level. Each DRAM bandwidth configuration roughly matches MTPS/core of various commercial processors~\cite{intel_xeon_gold,zen_epyc,zen_threadripper}. The baseline bandwidth/LLC configuration is marked in red.}
\label{fig:scalability_master}
\vspace{-0.8em}
\end{figure*}
\subsection{Performance Overview}\label{sec:eval_scalability}
\subsubsection{\textbf{Varying Number of Cores}}
Figure~\mbox{\ref{fig:scalability_master}}(a) shows \rbc{the} performance improvement of all prefetchers averaged across all traces in single-core to 12-core \rbc{systems}.
\rbc{To realistically model} modern commercial multi-core processors, we simulate $1$-$2$ core, $4$-$6$ core, and $8$-$12$ core \rbc{systems} with one, two, and four DDR4-$2400$ DRAM~\cite{ddr4} channels, respectively.
We make two key observations from Figure~\mbox{\ref{fig:scalability_master}}(a). First, Pythia consistently outperforms MLOP, Bingo, and SPP in \emph{all} \rbc{system} configurations. Second, Pythia's performance improvement over prior prefetchers increases \rbc{as} core count \rbc{increases}.
In \rbc{the} single-core \rbc{system}, Pythia outperforms MLOP, Bingo, SPP, and an aggressive SPP with perceptron filtering (PPF~\mbox{\cite{ppf}}) by $3.4$\%, $3.8$\%, $4.3$\%, and $1.02$\% respectively.
In four (and twelve) core \rbc{systems}, Pythia outperforms MLOP, Bingo, SPP, and SPP+PPF by $5.8$\% ($7.7$\%), $8.2$\% ($9.6$\%), $6.5$\% ($6.9$\%), and $3.1$\% ($5.2$\%), respectively.
\subsubsection{\textbf{Varying DRAM Bandwidth}}
To evaluate Pythia in bandwidth-constrained, highly-\rbc{multi-}threaded commercial server-class processors, where each core \rbc{can have} only a fraction of a channel's bandwidth, we simulate the single-core single-channel configuration by scaling the DRAM bandwidth (Figure~\mbox{\ref{fig:scalability_master}}(b)).
Each bandwidth configuration roughly \rbc{corresponds} to the available \rbc{per-core} DRAM bandwidth \rbc{in} various commercial processors \rbc{(e.g., Intel Xeon Gold~\cite{intel_xeon_gold}, AMD EPYC Rome~\cite{zen_epyc}, and AMD Threadripper~\cite{zen_threadripper})}.
The key takeaway is that Pythia \emph{consistently} outperforms all competing prefetchers in \emph{every} DRAM bandwidth configuration from $\frac{1}{16}\times$ to $4\times$ bandwidth of the baseline system.
Due to \rbc{their large overprediction \rbc{rates}}, \rbc{the} performance \rbc{gains} of MLOP and Bingo \rbc{reduce} sharply \rbc{as} DRAM bandwidth \rbc{decreases}. \rbc{By} actively trading off prefetch coverage for higher accuracy based on \rbc{memory} bandwidth usage, Pythia outperforms MLOP, Bingo, SPP, and SPP+PPF by $16.9$\%, $20.2$\%, $3.7$\%, and $9.5$\% respectively in the most bandwidth-constrained configuration with 150 million transfers per second (MTPS).
In \rbc{the} $9600$-MTPS configuration, every prefetcher enjoys ample DRAM bandwidth. Pythia \rbc{still} outperforms MLOP, Bingo, SPP, and SPP+PPF by $3$\%, $2.7$\%, $4.4$\%, and $0.8$\%, respectively.
\subsubsection{\textbf{Varying LLC Size}}
\begin{sloppypar}
Fig.~\ref{fig:scalability_master}(c) shows performance of all prefetchers averaged across all traces in \rbc{the} single-core \rbc{system} \rbc{while} varying \rbc{the} LLC size from $\frac{1}{8}\times$ to $2\times$ of the baseline 2MB LLC.
\rbc{The key takeaway is that Pythia consistently outperforms all prefetchers in \emph{every} LLC size configuration.}
For $256$KB (and $4$MB) LLC, Pythia outperforms MLOP, Bingo, SPP, and SPP+PPF by $3.6$\% ($3.1$\%), $5.1$\% ($3.4$\%), $2.7$\% ($4.8$\%), and $1.2$\% ($0.8$\%), respectively.
\end{sloppypar}
\subsubsection{\textbf{Comparison to Multi-level Prefetching Schemes}}\label{sec:eval_multi_level}
Figure~\mbox{\ref{fig:scalability_master}}(d) shows \rbc{the performance comparison of Pythia in the single-core system with varying DRAM bandwidth against two state-of-the-art \emph{multi-level} prefetching schemes}: (1) stride prefetcher~\cite{stride,stride_vector,jouppi_prefetch} at L1 and streamer~\cite{streamer} at L2 cache found in commercial Intel processors~\cite{intel_prefetcher}, and (2) IPCP, the winner of \rbc{the} third data prefetching championship~\cite{dpc3}. For \rbc{a fair} comparison, we add a stride prefetcher in \rbc{the} L1 cache along with Pythia in \rbc{the} L2 cache for this experiment and measure performance over the no prefetching baseline.
\rbc{The key takeaway is that \rbc{Stride+Pythia} consistently outperforms stride+streamer and IPCP in \emph{every} DRAM bandwidth configuration. Stride+Pythia outperforms \rbc{Stride+Streamer} and IPCP by $6.5$\% and $14.2$\% in \rbc{the} $150$-MTPS configuration and \rbc{by} $2.3$\% and $1.0$\% in \rbc{the} $9600$-MTPS configuration, respectively.}
\subsection{Performance Analysis}\label{sec:eval_perf}
\subsubsection{\textbf{Single-core}}\label{sec:eval_perf_1T}
Fig.~\ref{fig:perf_1T}(a) \rbc{shows} the performance improvement of each individual prefetcher in each workload category in the single-core \rbc{system}. We make two major observations.
First, Pythia improves performance by $22.4$\% on average over a no-prefetching baseline. Pythia outperforms MLOP, Bingo, and SPP by $3.4$\%, $3.8$\%, and $4.3$\% on average, respectively.
Second, Bingo is the only prefetcher that outperforms Pythia, and only in \rbc{the} \texttt{PARSEC} suite, by $2.3$\%. However, Bingo's performance comes at \rbc{the} cost of \rbc{a} high overprediction \rbc{rate}, which hurts performance in multi-core \rbc{systems (see \cref{sec:eval_perf_4T})}.
To demonstrate \rbc{the novelty of Pythia's RL-based prefetching approach using multiple program features}, Fig.~\ref{fig:perf_1T}(b) compares Pythia's performance improvement with the performance improvement of \rbc{various} combinations of prior prefetchers.
Pythia not only outperforms \rbc{all prefetchers (stride, SPP, Bingo, DSPatch, and MLOP)} individually, but also outperforms \rbc{their combination} by $1.4$\% on average, with less than half of the combined storage size \rbc{of the five prefetchers}.
\rbc{We conclude that Pythia's RL-based prefetching approach using multiple program features under one single framework provides \rbc{higher} performance \rbc{benefit} than combining multiple prefetchers, each exploiting only one program feature.}
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/perf_1T.pdf}
\vspace{-1.2em}
\caption{Performance improvement in single-core workloads. St=Stride, S=SPP, B=Bingo, D=DSPatch, and M=MLOP.}
\label{fig:perf_1T}
\vspace{-1em}
\end{figure}
\subsubsection{\textbf{Four-core}}\label{sec:eval_perf_4T}
Fig.~\ref{fig:perf_4T}(a) \rbc{shows} the performance improvement of each individual prefetcher in each workload category in the four-core \rbc{system}. We make \rbc{two} major observations.
First, Pythia provides significant performance improvement over all prefetchers in \emph{every} workload category in \rbc{the} four-core \rbc{system}. On average, Pythia outperforms MLOP, Bingo, and SPP by $5.8$\%, $8.2$\%, and $6.5$\% respectively.
Second, unlike \rbc{in the} single-core \rbc{system}, Pythia outperforms Bingo in \texttt{PARSEC} by $3.0$\% in \rbc{the} four-core \rbc{system}. \rbc{This is due to Pythia's ability to dynamically increase prefetch accuracy during high DRAM bandwidth usage.}
\rbc{Fig.~\ref{fig:perf_4T}(b) shows that} Pythia outperforms the combination of stride, SPP, Bingo, DSPatch, and MLOP prefetchers by $4.9$\% on average.
Unlike \rbc{in the} single-core system, combining \rbc{more} prefetchers on top of stride+SPP in the four-core system lowers the overall performance gain.
This is due to the additive increase in the overpredictions \rbc{made} by each individual prefetcher, \rbc{which leads to performance degradation in the bandwidth-constrained four-core system.} Pythia's RL-based framework holistically learns to prefetch using multiple program features \rbc{and} \rbc{generates} \rbc{fewer} overpredictions, outperforming \rbc{all} combinations of all individual prefetchers.
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/perf_4T.pdf}
\vspace{-2em}
\caption{Performance in \rbc{the} four-core \rbc{system}.
}
\label{fig:perf_4T}
\vspace{-2em}
\end{figure}
\subsubsection{\textbf{Benefit of \rbc{Memory} Bandwidth \rbc{Usage} Awareness}} \label{sec:eval_bw_awareness}
To demonstrate the benefit of Pythia's awareness \rbc{of} system memory bandwidth usage, we compare the performance of the full-blown Pythia with a new version of Pythia that is oblivious to system memory bandwidth usage. We create this bandwidth-oblivious version of Pythia by \rbc{setting the high and low bandwidth usage variants of the rewards $\mathcal{R}_{IN}$ and $\mathcal{R}_{NP}$ to the same value (\rbc{i.e.,} essentially removing the bandwidth usage distinction from the reward values). More specifically, we set $\mathcal{R}_{IN}^{H}=\mathcal{R}_{IN}^{L}=-8$ and $\mathcal{R}_{NP}^{H}=\mathcal{R}_{NP}^{L}=-4$.}
Fig.~\ref{fig:bw_awareness} shows the performance benefit of the \rbc{memory} bandwidth-oblivious Pythia normalized to the \rbcb{basic} Pythia \rbc{as we vary the DRAM bandwidth}. \rbc{The key takeaway is that}
\rbc{the bandwidth-oblivious Pythia loses performance by up to $4.6$\% on average across all \rbc{single-core} traces when the available memory bandwidth is low ($150$-MTPS to $600$-MTPS configuration).}
However, \rbc{when the available memory bandwidth is high ($1200$-MTPS to $9600$-MTPS)}, the \rbc{memory} bandwidth-oblivious Pythia \rbc{provides} similar performance improvement \rbc{to} the \rbcb{basic} Pythia.
We conclude that memory bandwidth awareness gives Pythia the ability to provide robust performance \rbc{benefits} across a wide range of system \rbc{configurations}.
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[width=\columnwidth]{figures/bw_awareness.pdf}
\vspace{-2em}
\caption{Performance of \rbc{memory} bandwidth-oblivious Pythia \rbc{versus} the \rbcb{basic} Pythia.}
\label{fig:bw_awareness}
\vspace{-2em}
\end{figure}
\subsection{Performance \rbc{on Unseen} Traces} \label{sec:eval_unknown_traces}
To demonstrate Pythia's ability to \rbc{provide performance gains} across \rbc{workload traces that are not used \rbc{at all} to tune Pythia}, we evaluate Pythia using an additional $500$ traces from the second value prediction championship~\mbox{\cite{cvp2}} \rbc{on} both single-core and four-core \rbc{systems}. These traces \rbc{are classified into} floating-point, integer, crypto, and server \rbc{categories} and each of them has at least 3 LLC MPKI in \rbc{the} baseline \rbc{without prefetching}.
No prefetcher, including Pythia, has been tuned on these traces.
\rbc{As Fig.~\ref{fig:perf_cvp1} shows, in \rbc{the} single-core system, Pythia outperforms MLOP, Bingo, and SPP on average by $8.3$\%, $3.5$\%, and $4.9$\%, respectively, across these traces. In \rbc{the} four-core system, Pythia outperforms MLOP, Bingo, and SPP on average by $9.7$\%, $5.4$\%, and $6.7$\%, respectively.}
\rbc{We conclude that Pythia, tuned on a set of workload traces, provides \rbc{equally high} (or even better) performance benefits \rbc{on} unseen traces \rbc{for} which it has not \rbc{been} tuned.}
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[scale=0.26]{figures/perf_cvp1.pdf}
\vspace{-2em}
\caption{Performance on unseen traces.
}
\label{fig:perf_cvp1}
\vspace{-2em}
\end{figure}
\subsection{Understanding Pythia Using a Case Study}\label{sec:eval_case_study}
\begin{sloppypar}
\rbc{We} delve deeper into \rbc{an example} workload trace, \texttt{459.GemsFDTD-1320B}, from the \texttt{SPEC CPU2006} suite \rbc{to provide more insight into Pythia's prefetching strategy and benefits}.
In this trace, the top two most selected \rbc{prefetch} offsets by Pythia are $+23$ and $+11$, which cumulatively account for nearly $72$\% of all offset selections.
For each of these offsets, we examine the program feature \rbc{value} for which that offset is selected \rbc{the most}. For simplicity, we only focus on the \texttt{PC+Delta} feature here.
The \texttt{PC+Delta} feature \rbc{values} \texttt{0x436a81+0} and \texttt{0x4377c5+0} select the \rbc{offsets} $+23$ and $+11$ the most, respectively. \rbc{Fig.~\ref{fig:deepdive_gems}(a) and (b)} show the Q-value curves of different actions for these feature values. The x-axis shows the number of Q-value updates to the corresponding feature. Each color-coded line represents the Q-value of the respective action.
\end{sloppypar}
\begin{sloppypar}
As Fig.~\ref{fig:deepdive_gems}(a) shows, \rbc{the} Q-value of action $+23$ for feature \rbc{value} \texttt{0x436a81+0} consistently stays higher than \rbc{that of all} other actions (only three other \rbc{representative} actions are shown in Fig.~\ref{fig:deepdive_gems}(a)). This means Pythia actively \rbc{prefers} to prefetch \rbc{using the} $+23$ offset whenever the PC \texttt{0x436a81} generates the first load to a physical page \rbc{(hence the delta $0$)}. By dumping the program trace, we indeed find that whenever PC \texttt{0x436a81} generates the first load to a physical page, there is only one more \rbc{address demanded} in that page, \rbc{which} is $23$ cachelines ahead of the first loaded cacheline. \rbc{In} this case, the positive reward for generating \rbc{a} correct prefetch with offset $+23$ drives the Q-value of $+23$ \rbc{much} higher than \rbc{those of} other offsets, and Pythia \rbc{successfully uses} the offset $+23$ for \rbc{prefetch request generation given} the feature \rbc{value} \texttt{0x436a81+0}.
We see \rbc{a} similar trend for the feature \rbc{value} \texttt{0x4377c5+0} with offset $+11$ (Fig.~\ref{fig:deepdive_gems}(b)).
\end{sloppypar}
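The dynamics of Fig.~\ref{fig:deepdive_gems}(a) can be reproduced qualitatively with a basic tabular temporal-difference update. The hyperparameter and reward values below are illustrative only, not Pythia's tuned values:

```python
ALPHA, GAMMA = 0.15, 0.5  # illustrative learning rate / discount factor

def td_update(Q, action, r, q_next=0.0):
    # One temporal-difference update of a (feature, action) Q-value
    Q[action] += ALPHA * (r + GAMMA * q_next - Q[action])

# For feature value 0x436a81+0, only offset +23 is ever demanded
# in-page, so +23 keeps earning a positive (accurate) reward while
# the other offsets keep earning an inaccurate-prefetch penalty.
Q = {23: 0.0, 11: 0.0, 1: 0.0, -8: 0.0}
for _ in range(50):
    for offset in Q:
        td_update(Q, offset, 20 if offset == 23 else -8)

best_offset = max(Q, key=Q.get)  # +23 dominates, as in the figure
```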
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[scale=0.22]{figures/deepdive_GemsFDTD.pdf}
\vspace{-1.8em}
\caption{Q-value curves of \texttt{PC+Delta} feature \rbc{values} (a) \texttt{0x436a81+0} and (b) \texttt{0x4377c5+0} in \texttt{459.GemsFDTD-1320B}.}
\label{fig:deepdive_gems}
\vspace{-1.5em}
\end{figure}
\subsection{Performance Benefits via Customization}\label{sec:eval_pythia_custom}
\rbc{In this section, we show two examples of Pythia's \rbc{online} customization ability to extract \rbc{even} higher performance gain than the baseline Pythia configuration in target workload suites. First, we customize Pythia's reward level values for \rbc{the} \texttt{Ligra} \rbc{graph processing} workloads. Second, we customize the program features used by Pythia for \rbc{the} \texttt{SPEC CPU2006} workloads.}
\subsubsection{\textbf{Customizing Reward Levels}}\label{sec:eval_pythia_custom_reward}
\begin{sloppypar}
For workloads from \rbc{the} Ligra \rbc{graph processing} suite, we observe a general trend that a prefetcher with higher prefetch accuracy typically provides higher performance benefits. This is because any incorrect prefetch request wastes precious main memory bandwidth, which is already heavily used by the \rbc{demand requests of the} workload.
Thus, to improve Pythia's performance benefit in \rbc{the} Ligra suite, we create a new \emph{strict configuration} of Pythia that favors \rbc{\emph{not prefetching}} \rbc{over} \rbc{generating inaccurate prefetches}. We create this strict configuration by simply \rbc{reducing} the reward level values for inaccurate prefetch (i.e., $\mathcal{R}_{IN}^{H}=-22$ and $\mathcal{R}_{IN}^{L}=-20$) and \rbc{increasing the reward level values for} no prefetch (i.e., $\mathcal{R}_{NP}^{H}=\mathcal{R}_{NP}^{L}=0$).
\end{sloppypar}
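The reward-level change above can be sketched as a simple configuration-table edit. The strict values ($-22$, $-20$, $0$, $0$) come from the text; the basic-configuration values shown for contrast are placeholders, not Pythia's actual defaults.

```python
# Hedged sketch: deriving the strict configuration from a basic reward table.
# Basic values below are assumed placeholders; strict values are from the text.

basic = {"R_IN_H": -14, "R_IN_L": -12, "R_NP_H": -2, "R_NP_L": -4}

# Strict configuration: penalize inaccurate prefetches more heavily and make
# "no prefetch" reward-neutral, so the agent prefers abstaining over guessing.
strict = dict(basic, R_IN_H=-22, R_IN_L=-20, R_NP_H=0, R_NP_L=0)

# Under strict rewards, not prefetching strictly dominates an inaccurate
# prefetch at both bandwidth-usage levels (0 > -22 and 0 > -20).
assert strict["R_NP_H"] > strict["R_IN_H"]
assert strict["R_NP_L"] > strict["R_IN_L"]
```

Because these levels live in configuration registers, such a change requires no hardware modification, which is the point of the customization example.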
\rbc{Fig.~\ref{fig:deepdive_ligra_cc} shows the percentage of the total runtime the workload spends in different bandwidth usage buckets on the primary y-axis and the overall performance improvement on the secondary y-axis for each competing prefetcher \rbc{in one example workload} from \rbc{the} \texttt{Ligra} suite, \texttt{Ligra-CC}.}
\rbc{We make two key observations.}
\rbc{First, with MLOP and Bingo prefetchers enabled, \texttt{Ligra-CC} spends \rbc{a much} higher percentage of runtime consuming more than half of the peak DRAM bandwidth than in the no prefetching baseline. As a result, MLOP and Bingo underperform the no prefetching baseline by $11.8\%$ and $1.8\%$, respectively. In contrast, \rbcb{basic} Pythia \rbc{leads to only} a modest memory bandwidth usage overhead, and outperforms the no prefetching baseline by $6.9$\%.}
\rbc{Second,} in \rbc{the} strict configuration, Pythia \rbc{incurs even lower} \rbc{memory} bandwidth \rbc{usage} overhead, and provides $3.5$\% \rbc{higher} performance \rbc{than} the \rbcb{basic} Pythia configuration ($10.4$\% over \rbc{the no prefetching} baseline), without any hardware changes.
Fig.~\ref{fig:ligra} shows the performance \rbc{benefits} of the \rbcb{basic} and strict Pythia configurations for all \rbc{workloads from} \texttt{Ligra}. The key takeaway is that by simply changing the reward level values via configuration registers on the silicon, \rbc{strict} Pythia \rbc{provides} up to $7.8$\% ($2.0$\% on average) \rbc{higher} performance \rbc{than} \rbcb{basic} Pythia.
\rbc{We conclude that the objectives of Pythia can be easily customized via simple configuration registers for target workload suites to extract \rbc{even} higher performance benefits, without any changes to the underlying hardware.}
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[width=3.3in]{figures/ligra-cc.pdf}
\vspace{-1em}
\caption{Performance and main memory bandwidth usage of prefetchers in \texttt{Ligra-CC}.}
\label{fig:deepdive_ligra_cc}
\vspace{-1em}
\end{figure}
\begin{figure}[!h]
\vspace{-1em}
\centering
\includegraphics[width=3.3in]{figures/ligra.pdf}
\vspace{-1em}
\caption{Performance of the \rbc{basic} and strict \rbc{Pythia} configurations \rbcb{on} the \texttt{Ligra} workload suite.}
\label{fig:ligra}
\vspace{-1em}
\end{figure}
\subsubsection{\textbf{\rbc{Customizing Feature Selection}}}\label{sec:eval_pythia_custom_feature}
To maximize the performance \rbc{benefits} of Pythia on \rbc{the} \texttt{SPEC CPU2006} workload suite, we evaluate all one- and two-feature combinations of program features from the initial set of $32$ supported features. For each workload, we fine-tune Pythia using the feature combination that provides the highest performance \rbc{benefit}. We call this the \emph{\rbc{feature-}optimized configuration} of Pythia for the \texttt{SPEC CPU2006} suite. Fig.~\ref{fig:spec} shows the performance \rbc{benefits} of \rbcb{the basic} and optimized configurations of Pythia for all \texttt{SPEC CPU2006} workloads. The key takeaway is \rbc{that} by simply fine-tuning the program feature selection, Pythia delivers up to $5.1$\% ($1.5$\% on average) performance improvement on top of the \rbcb{basic} Pythia configuration.
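The sweep over one- and two-feature combinations described above can be sketched with `itertools.combinations`. The feature names are placeholders; only the set size ($32$ supported features) comes from the text.

```python
# Hedged sketch: enumerating all one- and two-feature candidate
# configurations from the initial set of 32 supported program features.
# Feature names are placeholders, not Pythia's actual feature list.

from itertools import combinations

features = [f"feature_{i}" for i in range(32)]  # 32 supported features

candidates = list(combinations(features, 1)) + list(combinations(features, 2))

# 32 single features + C(32, 2) = 496 pairs = 528 candidate configurations,
# each of which would be simulated to find the best-performing combination.
print(len(candidates))  # → 528
```

Each candidate would then be evaluated per workload, and the highest-performing combination kept as the feature-optimized configuration.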
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/spec06.pdf}
\vspace{-1em}
\caption{Performance \rbc{of} \rbc{the basic} and \rbc{feature-optimized} Pythia \rbcb{on} \rbc{the} \texttt{SPEC CPU2006} suite.}
\label{fig:spec}
\vspace{-1.2em}
\end{figure}
\subsection{Overhead Analysis}\label{sec:eval_overhead}
\begin{sloppypar}
\rbc{To accurately estimate Pythia's chip area and power \rbc{overheads},} we implement \rbc{the full-blown} Pythia, including all fixed-point adders, multipliers, and the pipelined QVStore search operation (\cref{sec:design_config_pipeline}), using \rbc{the} Chisel~\cite{chisel} \rbc{hardware design language (HDL). \rbc{We} extensively verify the functional correctness of the resultant register-transfer-level (RTL) design \rbc{and} synthesize the RTL design using Synopsys Design Compiler~\cite{synopsys_dc} and a 14-nm library from GlobalFoundries~\cite{global_foundries} to estimate Pythia's area and power overhead.}
Pythia \rbc{consumes} $0.33$~mm\textsuperscript{2} of area and $55.11$~mW of power in each core.
\rbc{The QVStore component}
consumes $90.4$\% and $95.6$\% of the \rbc{total} area and power \rbc{of Pythia,} respectively.
\rbc{With respect to the overall die area and power consumption of a}
$4$-core desktop-class Skylake processor with \rbc{the} \rbc{lowest} TDP budget~\cite{skylake-4c}, \rbc{and a $28$-core server-class Skylake processor with the highest TDP budget}, \rbc{Pythia (implemented in all cores) incurs \rbc{area \& power} overheads of only $1.03$\% \& $0.4$\%, \rbc{and $1.33$\% \& $0.75$\%,} respectively.
We conclude that Pythia's performance benefits come at a \rbc{very modest} cost in area and power \rbc{overheads} across a variety of commercial processors.
}
\end{sloppypar}
\begin{table}[htbp]
\vspace{-0.8em}
\centering
\footnotesize
\caption{Area and power \rbc{overhead} of Pythia}
\vspace{-1.2em}
\begin{tabular}{p{17em}C{4.2em}C{4.2em}}
\multicolumn{3}{c}{\textbf{Pythia's area}: $0.33$ mm\textsuperscript{2}/core; \textbf{Pythia's power}: $55.11$ mW/core\vspace{0.5em}} \\
\toprule
\textbf{Overhead \rbc{compared to real systems}} & \textbf{Area} & \textbf{Power} \\
\midrule
4-core Skylake D-2123IT, 60W TDP~\cite{skylake-4c} & 1.03\% & 0.37\% \\
18-core Skylake 6150, 165W TDP~\cite{skylake-18c} & 1.24\% & 0.60\% \\
28-core Skylake 8180M, 205W TDP~\cite{skylake-28c} & 1.33\% & 0.75\% \\
\bottomrule
\end{tabular}%
\label{tab:overhead_analysis}%
\vspace{-1.4em}
\end{table}%
\section{Other Related Works} \label{related_work}
\rbc{To our knowledge, Pythia is the first RL-based customizable prefetching framework that can learn to prefetch using multiple different program features and system-level feedback information inherent to its design, to provide performance benefits across a wide range of workloads and changing system configurations.}
\rbc{We already compare Pythia against five state-of-the-art prefetching proposals~\cite{spp,bingo,dspatch,ppf,mlop} in \cref{sec:evaluation}.}
In this section, we qualitatively compare \rbc{Pythia against other prior prefetching techniques.}
\begin{sloppypar}
\textbf{Traditional Prefetchers.}
We divide the traditional prefetching algorithms \rbc{into} three broad categories: \rbc{precomputation}-based, temporal, and spatial. \rbc{Precomputation}-based prefetchers (e.g., runahead~\cite{dundas,mutlu2003runahead,mutlu2003runahead2, mutlu2005techniques,mutlu2006efficient,hashemi2016continuous,mutlu2005address,hashemi2015filtered,vector_runahead,mutlu2005reusing,iacobovici2004effective} and helper-thread execution~\cite{ssmt,precompute,software_preexecute,sohi_slice,helper_thread,bfetch,runahead_threads,helper_thread2,zhang2007accelerating,dubois1998assisted,collins2001speculative,solihin2002using}) pre-execute program code to generate future memory requests. These prefetchers can generate highly-accurate prefetches even when no \rbc{recognizable} pattern exists in \rbc{program} memory requests. However, \rbc{precomputation}-based prefetchers \rbc{usually} have high design complexity. Pythia \rbc{is not a} \rbc{precomputation}-based \rbc{proposal}. \rbc{It} finds patterns in past memory request \rbc{addresses} to generate prefetch \rbc{requests}.
\end{sloppypar}
Temporal prefetchers~\cite{markov,stems,somogyi_stems,wenisch2010making,domino,isb,misb,triage,wenisch2005temporal,chilimbi2002dynamic,chou2007low,ferdman2007last,hu2003tcp,bekerman1999correlated,cooksey2002stateless,karlsson2000prefetching}
\rbc{memorize} long sequences of cacheline addresses \rbc{demanded by the processor}.
\rbc{When} a previously-seen address is encountered \rbc{again}, a temporal prefetcher \rbc{issues prefetch requests} \rbc{to} addresses that previously followed the currently-seen cacheline address.
However, temporal prefetchers \rbc{usually} have \rbc{high storage requirements (often multi-megabytes of \rbc{metadata} storage, which necessitates storing metadata in memory~\cite{stems,somogyi_stems,isb}).}
Pythia requires only $25.5$KB of storage, \rbc{which can easily fit inside a core.}
Spatial prefetchers~\cite{stride,streamer,baer2,jouppi_prefetch,ampm,fdp,footprint,sms,spp,vldp,sandbox,bop,dol,dspatch,bingo,mlop,ppf,ipcp} predict a cacheline delta or \rbc{spatial} bit-pattern by learning program access patterns over \rbc{different} spatial \rbc{memory} \rbc{regions}. Spatial prefetchers provide high-accuracy prefetches, \rbc{usually} with \rbc{lower} storage overhead \rbc{than temporal prefetchers}.
We already compare Pythia with other spatial prefetchers~\cite{mlop,bingo,spp,ppf,dspatch} and \rbc{show higher performance benefits}.
\textbf{Machine Learning (ML) in Computer Architecture.}
\rbc{Prior works \rbc{apply} ML techniques in computer architecture in two \rbc{major} ways: (1) to design adaptive, data-driven algorithms, and (2) to explore \rbc{the large} microarchitectural design-space.}
\rbc{Researchers have proposed ML-based algorithms for various microarchitectural tasks like memory scheduling~\cite{rlmc,morse}, cache \rbc{management}~\cite{glider, imitation,rl_cache,teran2016perceptron,balasubramanian2021accelerating}, branch prediction~\cite{tarsa,branchnet,perceptron,garza2019bit,BranchNetV2_2020,Zouzias2021BranchPA,jimenez2016multiperspective,tarjanTaco2005,jimenez2003fast,jimenez2002neural}, \rbc{address translation~\cite{margaritovTranslationNIPS18}} and hardware prefetching~\cite{peled2018neural,peled_rl,hashemi2018learning,shineural,shi2019learning,shineural_asplos,zeng2017long}.
\rbc{Pythia provides three key advantages over prior ML-based prefetchers. First, Pythia can learn to prefetch from multiple program features and system-level feedback information inherent to its design.
Second, Pythia can be customized online.
Third, Pythia incurs low hardware overhead.}
}
\rbc{Researchers have also used ML techniques to explore the large microarchitectural design space, e.g., NoC design~\cite{fettes2018dynamic,zheng2019energy,lin2020deep,yin2020experiences,ebrahimi2012haraq,ditomaso2016dynamic,clark2018lead,ditomaso2017machine,van2018extending,yin2018toward}, chip placement optimization~\cite{mirhoseini2021}, and hardware resource assignment for accelerators~\cite{kao2020confuciux}.
These works are orthogonal to Pythia.}
\section{Conclusion} \label{conclusion}
We introduce Pythia, \rbc{the first} customizable prefetching framework that formulates prefetching as a reinforcement learning (RL) problem. Pythia autonomously learns to prefetch using multiple program features and system-level feedback \rbc{information} to predict memory accesses.
Our extensive evaluations show that Pythia not only outperforms five \rbc{state-of-the-art} prefetchers but also provides robust performance \rbc{benefits} across a wide range of \rbc{workloads} and system configurations.
\rbc{Pythia's benefits come with very modest area and power overheads.}
\rbc{We believe \rbc{and hope} that Pythia \rbc{would} encourage the next generation of data-driven autonomous prefetchers that automatically learn far-sighted prefetching policies by interacting with the system. Such prefetchers \rbc{can} not only improve performance and efficiency under a wide variety of workloads and system configurations, but also reduce the system architect's burden in designing sophisticated \rbc{prefetching mechanisms}.}
\section{Artifact Appendix} \label{sec:artifact}
\subsection{Abstract}
We implement Pythia using the ChampSim simulator~\cite{champsim}. In this artifact, we provide the source code of Pythia and the necessary instructions to reproduce its key performance results. We identify four key results to demonstrate Pythia's novelty:
\begin{itemize}
\item Workload category-wise performance speedup of all competing prefetchers in the single-core configuration (Fig.~\ref{fig:perf_1T}(a)).
\item Workload category-wise coverage and overpredictions of all competing prefetchers (Fig.~\ref{fig:cov_acc}).
\item Geomean performance comparison with varying DRAM bandwidth from $150$-MTPS to $9600$-MTPS (Fig.~\ref{fig:scalability_master}(b)).
\item Workload category-wise performance speedup of all competing prefetchers in the four-core configuration (Fig.~\ref{fig:perf_4T}(a)).
\end{itemize}
The artifact can be executed on any machine with a general-purpose CPU and $52$ GB of disk space. However, we strongly recommend running the artifact on a compute cluster with \texttt{slurm}~\cite{slurm} support for bulk experimentation.
\subsection{Artifact Check-list (Meta-information)}
\small
\begin{itemize}
\item {\bf Compilation: } G++ v6.3.0 or above.
\item {\bf Data set: } Download traces using the supplied script.
\item {\bf Run-time environment: } Perl v5.24.1
\item {\bf Metrics: } IPC, prefetcher's coverage, and accuracy.
\item {\bf Experiments: } Generate experiments using supplied scripts.
\item {\bf How much disk space required (approximately)?: } $52$GB
\item {\bf How much time is needed to prepare workflow (approximately)?: } $\sim 2$ hours. Mostly depends on the time to download traces.
\item {\bf How much time is needed to complete experiments (approximately)?: } 3-4 hours using a compute cluster with $480$ cores.
\item {\bf Publicly available?: } Yes.
\item {\bf Code licenses (if publicly available)?: } MIT
\item {\bf Archived (provide DOI)?: } \url{https://doi.org/10.5281/zenodo.5520125}
\end{itemize}
\subsection{Description}
\subsubsection{How to Access}
The source code can be downloaded either from GitHub (\url{https://github.com/CMU-SAFARI/Pythia}) or from Zenodo (\url{https://doi.org/10.5281/zenodo.5520125}).
\subsubsection{Hardware Dependencies}
Pythia can be run on any system with a general-purpose CPU and at least $52$ GB of free disk space.
\subsubsection{Software Dependencies}
\begin{sloppypar}
Pythia requires \texttt{GCC v6.3.0} and \texttt{Perl v5.24.1}. Optionally, Pythia requires \texttt{megatools v1.9.98} to download a few traces, and Microsoft Excel (tested on v16.51) to reproduce the results as presented in the paper.
\end{sloppypar}
\subsubsection{Data Sets}
The ChampSim traces required to evaluate Pythia can be downloaded using the supplied script. Our implementation of Pythia is fully compatible with prior ChampSim traces that are used in previous cache replacement (CRC-2~\cite{crc2}), data prefetching (DPC-3~\cite{dpc3}) and value-prediction (CVP-2~\cite{cvp2}) championships. We are also releasing a new set of ChampSim traces extracted from Ligra~\cite{ligra} and PARSEC-2.1~\cite{parsec} suites.
\subsection{Installation}
\begin{enumerate}
\item Clone Pythia from GitHub repository:
\vspace{0.4em}
\shellcmd{git clone https://github.com/CMU-SAFARI/Pythia.git}
\vspace{0.4em}
\item Clone Bloomfilter library inside Pythia home and build:
\vspace{0.4em}
\shellcmd{cd Pythia/}
\shellcmd{git clone https://github.com/mavam/libbf.git libbf/}
\shellcmd{cd libbf/}
\shellcmd{mkdir build \&\& cd build/ \&\& cmake ../}
\shellcmd{make clean \&\& make}
\vspace{0.4em}
\item Build Pythia for single-core and four-core configurations:
\vspace{0.4em}
\shellcmd{cd \$PYTHIA\_HOME}
\shellcmd{./build\_champsim.sh multi multi no 1}
\shellcmd{./build\_champsim.sh multi multi no 4}
\vspace{0.4em}
\item Please make sure to set environment variables as:
\vspace{0.4em}
\shellcmd{source setvars.sh}
\end{enumerate}
\subsection{Experiment Workflow}
This section describes the steps to generate and execute the necessary experiments. We recommend the reader follow the README file to learn more about each script used in this section.
\subsubsection{Preparing Traces}
\begin{enumerate}
\item Download necessary traces as follows:
\vspace{0.4em}
\shellcmd{mkdir \$PYTHIA\_HOME/traces}
\shellcmd{cd \$PYTHIA\_HOME/scripts}
\shellcmd{perl download\_traces.pl --csv artifact\_traces.csv \\ --dir \$PYTHIA\_HOME/traces/}
\vspace{0.4em}
\item If the traces are downloaded to a different path, please update the full path in \texttt{MICRO21\_1C.tlist} and \texttt{MICRO21\_4C.tlist} inside the \\ \texttt{\$PYTHIA\_HOME/experiments} directory appropriately.
\end{enumerate}
\subsubsection{Launching Experiments}\label{sec:artifact_launching_experiments}
The following instructions will launch all experiments required to reproduce the key results on a local machine. We \textbf{strongly} recommend using a compute cluster with \texttt{slurm} support to efficiently launch experiments in bulk. To launch experiments using \texttt{slurm}, please provide \texttt{--local 0} (tested using \texttt{slurm v16.05.9}).
\begin{enumerate}
\item Launch single-core experiments as follows:
\vspace{0.4em}
\shellcmd{cd \$PYTHIA\_HOME/experiments}
\shellcmd{perl \$PYTHIA\_HOME/scripts/create\_jobfile.pl --exe \\ \$PYTHIA\_HOME/bin/perceptron-multi-multi-no-ship-1core \\--tlist MICRO21\_1C.tlist --exp MICRO21\_1C.exp --local 1 > jobfile.sh}
\shellcmd{cd experiments\_1C}
\shellcmd{source ../jobfile.sh}
\vspace{0.4em}
\item Launch four-core experiments as follows:
\vspace{0.4em}
\shellcmd{cd \$PYTHIA\_HOME/experiments}
\shellcmd{perl \$PYTHIA\_HOME/scripts/create\_jobfile.pl --exe \\ \$PYTHIA\_HOME/bin/perceptron-multi-multi-no-ship-4core \\--tlist MICRO21\_4C.tlist --exp MICRO21\_4C.exp --local 1 > jobfile.sh}
\shellcmd{cd experiments\_4C}
\shellcmd{source ../jobfile.sh}
\vspace{0.4em}
\item Please make sure the paths used in \texttt{tlist} and \texttt{exp} files are appropriately changed before creating the experiment files.
\end{enumerate}
\subsubsection{Rolling-up Statistics}
We use the \texttt{rollup.pl} script to roll up statistics from the outputs of all experiments. To automate the process, use the following instructions. This will create three comma-separated value (CSV) files in the experiments directory, which will be used for evaluation in \cref{sec:artifact_evaluation}.
\vspace{0.4em}
\shellcmd{cd \$PYTHIA\_HOME/experiments}
\shellcmd{bash automate\_rollup.sh}
\subsection{Evaluation}\label{sec:artifact_evaluation}
For single-core baseline configuration experiments, we evaluate three metrics: performance, coverage, and overprediction of each prefetcher. For single-core experiments varying DRAM bandwidth and four-core experiments, we evaluate only performance. The performance, coverage, and overprediction of a prefetcher $X$ are measured by the following equations:
\[ Perf_X = \frac{IPC_X}{IPC_{nopref}} \]
\[ Coverage_X = \frac{LLC\_load\_miss_{nopref}-LLC\_load\_miss_X}{LLC\_load\_miss_{nopref}} \]
\[ Overprediction_X = \frac{LLC\_read\_miss_{X}-LLC\_read\_miss_{nopref}}{LLC\_read\_miss_{nopref}} \]
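The three metrics above translate directly into code. The sketch below is a straightforward transcription of the equations; the counter values fed in are made-up examples chosen to land near the expected results reported later ($1.22\times$ speedup, $71$\% coverage, $27$\% overprediction).

```python
# Direct transcription of the three evaluation metrics defined above.
# Input counter values are illustrative, not measured data.

def perf(ipc_x, ipc_nopref):
    """Speedup of prefetcher X over the no-prefetching baseline."""
    return ipc_x / ipc_nopref

def coverage(llc_load_miss_nopref, llc_load_miss_x):
    """Fraction of baseline LLC load misses eliminated by prefetcher X."""
    return (llc_load_miss_nopref - llc_load_miss_x) / llc_load_miss_nopref

def overprediction(llc_read_miss_x, llc_read_miss_nopref):
    """Extra LLC read misses (useless prefetches) relative to baseline."""
    return (llc_read_miss_x - llc_read_miss_nopref) / llc_read_miss_nopref

print(round(perf(1.22, 1.00), 2))            # → 1.22
print(round(coverage(1000, 290), 2))         # → 0.71
print(round(overprediction(1270, 1000), 2))  # → 0.27
```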
To easily calculate the metrics, we provide a Microsoft Excel template to post-process the rolled-up CSV files. The template has four sheets, three of which bear the same names as the rolled-up CSV files. Each sheet is already populated with our collected results, necessary formulas, pivot tables, and charts to reproduce the results presented in the paper.
Please follow the instructions to reproduce the results from your own CSV statistics files:
\begin{enumerate}
\item Copy and paste each CSV file in the corresponding sheet's top left corner (i.e., cell A1).
\item Immediately after pasting, convert the comma-separated rows into columns by going to Data -> Text-to-Columns and selecting comma as the delimiter. This will replace the existing data in the sheet with the newly collected data.
\item \emph{Refresh each pivot table in each sheet} by clicking on them and then clicking Pivot-Table-Analyse -> Refresh.
\end{enumerate}
The reader can also use any other data processor (e.g., Python pandas) to reproduce the same result.
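As an example of the alternative mentioned above, the speedup metric can be computed from a rolled-up CSV with plain Python. The column names and values here are assumptions for illustration; the real rolled-up CSV layout may differ.

```python
# Hedged sketch: computing per-trace speedup from a rolled-up CSV without
# Excel. Column names (trace, prefetcher, ipc) are assumed, not the artifact's
# actual schema.

import csv
import io

# Stand-in for one of the rolled-up CSV files produced by rollup.pl.
rolled_up = io.StringIO(
    "trace,prefetcher,ipc\n"
    "mcf-1,nopref,0.50\n"
    "mcf-1,pythia,0.62\n"
)

rows = list(csv.DictReader(rolled_up))
ipc = {(r["trace"], r["prefetcher"]): float(r["ipc"]) for r in rows}

# Speedup = IPC with prefetcher / IPC of the no-prefetching baseline.
speedup = ipc[("mcf-1", "pythia")] / ipc[("mcf-1", "nopref")]
print(round(speedup, 2))  # → 1.24
```

The same pattern extends to the coverage and overprediction columns, replacing the pivot-table refresh step entirely.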
\subsection{Expected Results}
\begin{itemize}
\item In single-core experiments, Pythia should achieve $22\%$ performance improvement over the no prefetching baseline, with $71\%$ prefetch coverage and $27$\% overpredictions. The next best prefetcher MLOP should achieve $19$\% performance improvement, with $64$\% coverage and $110$\% overpredictions.
\item In single-core experiments with DRAM bandwidth scaling, Pythia should achieve the following:
\begin{itemize}
\item In the $150$-MTPS configuration, Pythia should achieve $0.89$\% performance improvement over the no-prefetching baseline, whereas MLOP should \emph{underperform} the baseline by $16$\%.
\item In the $9600$-MTPS configuration, Pythia should achieve $22.7$\% performance improvement over the no-prefetching baseline, whereas MLOP should achieve $19.5$\%.
\end{itemize}
\item In four-core experiments, Pythia should achieve $30$\% performance improvement over the no-prefetching baseline, whereas MLOP should achieve $24$\%.
\end{itemize}
\subsection{Experiment Customization}
\begin{itemize}
\item The configuration of every prefetcher can be customized by changing the \texttt{ini} files inside the \texttt{config} directory.
\item The \texttt{exp} files can be customized to run new experiments with different prefetcher combinations. More experiment files can be found inside \texttt{experiments/extra} directory. One can use the same instructions mentioned in \cref{sec:artifact_launching_experiments} to launch experiments.
\end{itemize}
\subsection{Methodology}
Submission, reviewing and badging methodology:
\begin{itemize}
\item \url{https://www.acm.org/publications/policies/artifact-review-badging}
\item \url{http://cTuning.org/ae/submission-20201122.html}
\item \url{http://cTuning.org/ae/reviewing-20201122.html}
\end{itemize}
\section{Extended Results}
\subsection{Detailed Performance Analysis}
\subsubsection{\textbf{Single-core}}
\begin{sloppypar}
Fig.~\ref{fig:perf_1T_scurve} shows the performance line graph of all prefetchers \rbcc{for the $150$} single-core workload traces. The workload traces are sorted in ascending order of performance improvement \rbcc{of} Pythia over the baseline without prefetching. We make three key observations. First, Pythia outperforms the no-prefetching baseline in every single-core trace, except \texttt{623.xalancbmk-592B} (where it underperforms the baseline by $2.1$\%). \texttt{603.bwaves-2931B} enjoys the highest performance improvement of $2.2\times$ over the baseline. \rbcc{Performance of the} top $80$\% of traces improves by at least $4.2$\% \rbcc{over the baseline}. Second, Pythia underperforms Bingo in workloads like \texttt{libquantum}
due to \rbcc{the heavy streaming} nature of memory accesses. As \texttt{libquantum} streams through all physical pages, Bingo simply prefetches all cachelines of a page \rbcc{at once} just by seeing the first access \rbcc{to} the page. As a result, Bingo achieves higher timeliness and higher performance than Pythia. Third, Pythia significantly outperforms every competing \rbcc{prefetcher} in workloads with irregular access patterns (e.g., \texttt{mcf}, \texttt{pagerank}).
We conclude that Pythia provides consistent performance \rbcc{gains} over the no-prefetching baseline and multiple prior state-of-the-art prefetchers over a wide range of workloads. We share a table \rbcc{depicting} the single-core performance of every competing prefetcher considered in this paper over the no-prefetching baseline in our GitHub repository: \url{https://github.com/CMU-SAFARI/Pythia}.
\end{sloppypar}
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/perf_1T_scurve.pdf}
\caption{Performance line graph of 150 single-core traces.}
\label{fig:perf_1T_scurve}
\end{figure}
\subsubsection{\textbf{Four-core}}
Fig.~\ref{fig:perf_4T_scurve} shows the performance line graph of all prefetchers \rbcc{for 272} four-core workload trace mixes (including both homogeneous and heterogeneous mixes). The workload mixes are sorted in ascending order of performance improvement \rbcc{of} Pythia over the baseline without prefetching. We make two key observations. First, Pythia outperforms the baseline without prefetching in \rbcc{all but one} four-core trace mix. Pythia provides the highest performance gain in \texttt{437.leslie3d-271B} ($2.1\times$) and the lowest performance gain in \texttt{429.mcf-184B} (-$3.5$\%) over the no-prefetching baseline. Second, Pythia also outperforms (or matches the performance of) all competing prefetchers in \rbcc{the majority of} trace mixes. Pythia underperforms Bingo in \rbcc{the} \texttt{462.libquantum} homogeneous trace mix due to \rbcc{the very} regular streaming access pattern. On the other hand, Pythia significantly outperforms \rbcc{Bingo} in \texttt{Ligra} workloads (e.g., \texttt{pagerank}) due to its adaptive prefetching strategy that trades off coverage for accuracy under high memory bandwidth usage. We conclude that Pythia provides a consistent performance gain over multiple prior state-of-the-art prefetchers over a wide range of workloads even in bandwidth-constrained \rbcc{multi-core} systems.
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/perf_4T_scurve.pdf}
\caption{Performance line graph of 272 four-core trace mixes.}
\label{fig:perf_4T_scurve}
\end{figure}
\subsection{Performance with Different Features}
Fig.~\ref{fig:feature_sel} shows the performance, coverage, and overprediction of Pythia averaged across all single-core traces with different feature combinations during automated feature selection (\cref{sec:tuning_feature_selection}). For brevity, we show results for all experiments with any-one and any-two combinations of $20$ features taken from the full list of $32$ features. Both graphs are sorted in ascending order of performance improvement \rbcc{of Pythia} over the baseline without prefetching. We make three key observations. First, Pythia's performance gain over the no-prefetching baseline improves from $20.7$\% to $22.4$\% \rbcc{by varying} the feature combination. We select the feature combination that provides the highest performance gain as the basic Pythia configuration (Table~\ref{tab:pythia_config}). Second, Pythia's coverage and overprediction also change significantly with \rbcc{varying} feature \rbcc{combinations}. Pythia's coverage improves from $66.2$\% to $71.5$\%, whereas overprediction improves from $32.2$\% to $26.7$\% by changing the feature \rbcc{combination}. Third, Pythia's performance gain positively correlates \rbcc{with} Pythia's coverage in the single-core configuration. \rbcc{We conclude that automatic design-space exploration can significantly optimize Pythia's performance, coverage, and overpredictions.}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/feature_selection_exps.pdf}
\caption{Performance, coverage, and overprediction of Pythia with different feature combinations. \rbcc{The x-axis shows experiments with different feature combinations.}}
\label{fig:feature_sel}
\end{figure}
\subsection{Performance Sensitivity to Hyperparameters}
Fig.~\ref{fig:hyp_sensitivity}(a) shows Pythia's performance sensitivity \rbcc{to the} exploration rate ($\epsilon$), averaged across all single-core traces. The key takeaway from \rbcc{Fig.~}\ref{fig:hyp_sensitivity}(a) is that Pythia's performance improvement drops sharply if the underlying RL agent heavily \emph{explores} the state-action space \rbcc{as opposed to exploiting the learned policy}. Changing the $\epsilon$-value from $0.002$ to $1.0$ \rbcc{reduces Pythia's} performance improvement by $16.0$\%.
Fig.~\ref{fig:hyp_sensitivity}(b) shows Pythia's performance sensitivity \rbcc{to the} learning rate \rbcc{parameter} ($\alpha$), averaged across all single-core traces. The key takeaway from Fig.~\ref{fig:hyp_sensitivity}(b) is \rbcc{that Pythia's performance improvement reduces when the learning rate parameter is either increased or decreased. Increasing the learning rate reduces the hysteresis in Q-values (i.e., Q-values change significantly with the immediate reward received by Pythia), which reduces Pythia's performance improvement. Similarly, decreasing the learning rate also reduces Pythia's performance, as it increases the hysteresis in Q-values.}
Pythia achieves optimal performance improvement for $\alpha=0.0065$.
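The exploration-rate sensitivity discussed above reflects standard $\epsilon$-greedy action selection: with probability $\epsilon$ the agent picks a random action, otherwise it exploits the highest-Q action. The sketch below is a generic illustration of that mechanism, not Pythia's hardware logic; only $\epsilon=0.002$ is taken from the text, and the Q-values are placeholders.

```python
# Generic epsilon-greedy selection sketch. Q-values are placeholders;
# epsilon = 0.002 is the basic-configuration value mentioned in the text.

import random

EPSILON = 0.002

def select_action(q_values, epsilon=EPSILON, rng=random):
    """Explore a random action with probability epsilon; else exploit."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))       # explore
    return max(q_values, key=q_values.get)      # exploit the learned policy

q = {+23: 1.8, +11: 0.4, -4: -0.6}
# With epsilon this small, the selected offset is almost always +23;
# epsilon = 1.0 would make every selection random, hurting performance.
```

This makes the sensitivity curve intuitive: pushing $\epsilon$ toward $1.0$ turns the prefetcher into a random-offset generator, which explains the sharp performance drop.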
\begin{figure}[!h]
\centering
\includegraphics[width=3.3in]{figures/hyperparameter_sensitivity.pdf}
\caption{Performance sensitivity of Pythia towards (a) the exploration rate ($\epsilon$), and (b) the learning rate ($\alpha$) hyperparameter values. The values in basic Pythia configuration are marked in red.}
\label{fig:hyp_sensitivity}
\end{figure}
\subsection{Comparison to the Context Prefetcher}
As we discuss in Section~\ref{sec:compare_with_context}, unlike Pythia, the context prefetcher (CP~\cite{peled_rl}) relies on both hardware and software contexts. A tailor-made compiler needs to encode the software contexts using \rbcc{special} NOP instructions, which are decoded by the core front-end to pass the context to the CP.
For a fair comparison, we implement the context prefetcher using hardware contexts (CP-HW) and show the performance comparison \rbcc{of Pythia and CP-HW} in Figure~\ref{fig:perf_context_pref}. \rbcc{The key takeaway is that} Pythia outperforms the CP-HW prefetcher by $5.3$\% and $7.6$\% in single-core and four-core configurations, respectively. \rbcc{Pythia's performance improvement over CP-HW mainly comes from two key aspects: (1) Pythia's ability to take memory bandwidth usage into consideration while taking prefetch actions, and (2) the far-sighted predictions made by Pythia as opposed to myopic predictions by CP-HW.}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/context.pdf}
\caption{Performance of Pythia vs. the context prefetcher~\cite{peled_rl} using hardware contexts.}
\label{fig:perf_context_pref}
\end{figure}
\subsection{Comparison to \rbcc{the IBM POWER7} Adaptive Prefetcher}
\rbcc{Fig.~\ref{fig:perf_power7} compares Pythia against the IBM POWER7 adaptive prefetcher~\mbox{\cite{ibm_power7}}. The POWER7 prefetcher dynamically tunes its prefetch aggressiveness (e.g., selecting prefetch depth, enabling stride-based prefetching) by monitoring program performance.}
We make two observations from Fig.~\ref{fig:perf_power7}. First, Pythia outperforms \rbcc{the} POWER7 prefetcher by $4.5$\% in the single-core system. This is mostly due to Pythia's ability \rbcc{to} \rbcc{capture} different types of address patterns beyond just streaming/stride patterns. Second, Pythia outperforms the POWER7 prefetcher by $6.5$\% in four-core and $6.1$\% in eight-core systems (not \rbcc{plotted}), respectively. The increase in performance improvement from the single-core to the four-core (or eight-core) configuration suggests \rbcc{that} Pythia \rbcc{is more adaptive} than \rbcc{the} POWER7 prefetcher.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/power7.pdf}
\caption{Performance comparison against IBM POWER7 prefetcher~\cite{ibm_power7}.}
\label{fig:perf_power7}
\end{figure}
\subsection{Performance Sensitivity to Number of Warmup Instructions}
\rbcd{Fig.~\ref{fig:warmup_sen} shows the performance sensitivity of all prefetchers \rbcc{to the} number of warmup instructions, averaged across all single-core traces. Our baseline simulation configuration uses $100$ million warmup instructions. The key takeaway from Fig.~\ref{fig:warmup_sen} is that Pythia consistently outperforms prior prefetchers across a wide range of simulation configurations with different numbers of warmup instructions. In the baseline simulation configuration with $100$M warmup instructions, Pythia outperforms MLOP, Bingo, and SPP by $3.4$\%, $3.8$\%, and $4.4$\%, respectively. In a simulation configuration with no warmup instructions, Pythia continues to outperform MLOP, Bingo, and SPP by $2.8$\%, $3.7$\%, and $4.2$\%, respectively. We conclude that Pythia quickly learns to prefetch from a program's memory access pattern and provides higher performance than other heuristics-based prefetching techniques over a wide range of simulation configurations with different numbers of warmup instructions.}
\begin{figure}[!h]
\vspace{-0.5em}
\centering
\includegraphics[width=\columnwidth]{figures/warmup_sensitivity.pdf}
\vspace{-2em}
\caption{Performance sensitivity of all prefetchers to number of warmup instructions.}
\label{fig:warmup_sen}
\vspace{-1em}
\end{figure}
\section{Introduction}
In the present paper we offer a contribution to the general problem of understanding the interaction between energy and dissipation terms in variational approaches to gradient-flow type evolutions from the standpoint of minimizing movements (see also e.g.~\cite{donfremie,fle,flesav} for related work).
Implicit Euler schemes are a well-established tool to prove existence and approximation for evolution equations
with a gradient-flow structure. We follow De Giorgi's formalization \cite{deg}, which has allowed such schemes to be used as a basis for the definition and study of
gradient flows in metric spaces \cite{ambgigsav}.
Given an initial datum $u^0$ and a functional $\phi$ defined on a metric space $(S,d)$,
for fixed $\tau>0$ we denote by $\{u^\tau_n\}$ a discrete orbit satisfying $u^\tau_0=u^0$ and such that
$u^\tau_n$ is a minimizer of
\begin{equation}
u\mapsto \phi(u)+\frac{d^2(u,u_{n-1}^{\tau})}{2\tau}.
\end{equation}
Any limit of a subsequence of the piecewise-constant interpolations $u^\tau(t)=u^\tau_{\lceil t/\tau\rceil}$ is called a {\em minimizing movement for $\phi$}.
Such a limit exists under very mild conditions on $\phi$, and, under proper differentiability assumptions on $\phi$, is a curve of maximal slope for $\phi$, which is a generalization of the definition of a solution of the gradient-flow equation
\begin{equation}
u'=-\nabla \phi(u)
\end{equation}
(see \cite{ambgigsav} Chapter 2).
In this paper we perturb the scheme above both considering a family of energies $\phi_\varepsilon$ depending on an additional parameter $\varepsilon$ in place of a single $\phi$, and a perturbation by varying multiplicative coefficients $\{a_n^\tau\}$ of the squared-distance term (dissipation).
In this case the discrete orbits depend on $\varepsilon$ and $\tau$ and are defined by successive minimization requiring that $u^{\tau,\varepsilon}_n$ be a minimizer of
\begin{equation}\label{mingen}
u\mapsto \phi_\varepsilon(u)+a_n^\tau\frac{d^2(u,u_{n-1}^{\tau,\varepsilon})}{2\tau}.
\end{equation}
By letting $\varepsilon$ and $\tau$ tend to $0$ at the same time, we then define the {\em$\{a^\tau\}$-perturbed minimizing movements along $\big\{\phi_\varepsilon\big\}$ at scale $\tau$} as all possible limits of subsequences of the corresponding piecewise-constant interpolations $u^{\tau,\varepsilon}(t)=u^{\tau,\varepsilon}_{\lceil t/\tau\rceil}$.
In the case of a single function $\phi_\varepsilon=\phi$, the resulting {\em$\{a^\tau\}$-perturbed minimizing movements for $\phi$} have been analyzed in \cite{tri}, showing on the one hand that, under proper local-summability assumptions on $\{1/a^\tau_n\}_n$, the resulting minimizing movements are perturbed curves of maximal slope with rate $a^*$, which again extend the notion of a solution of the gradient-flow equation
\begin{equation}
a^* u'=-\nabla \phi(u).
\end{equation}
Here, $1/a^*$ is a weak limit of the piecewise-constant interpolations of $\{1/a^\tau_n\}_n$. On the other hand, if the local-summability assumptions on $\{1/a^\tau_n\}_n$ fail, the resulting minimizing movement may be discontinuous and may be used to explore different energy wells.
When varying energies $\phi_\varepsilon$ but no perturbations are considered (i.e., $a_n^\tau=1$ for all $\tau$ and $n$), the scheme above has been analyzed in \cite{bra2,bracolgobsol}, showing that in general the resulting {\em minimizing movements along $\big\{\phi_\varepsilon\big\}$ at scale $\tau$} do depend on how $\tau$ and $\varepsilon$ tend to $0$, even if we assume that $\phi_\varepsilon$ $\Gamma$-converge to some limit $\phi$ (which is not restrictive by a compactness argument). If equi-coerciveness assumptions on $\phi_\varepsilon$ hold, then diagonal arguments show that
we may identify the limit motions in `fast-converging $\varepsilon$-$\tau$ regimes'. More precisely, if $\varepsilon$ converges sufficiently fast to $0$ with respect to $\tau$ then the limit is a minimizing movement for $\phi$, while if conversely $\tau$ converges sufficiently fast to $0$ with respect to $\varepsilon$ then it is a limit of minimizing movements for $\phi_\varepsilon$ as $\varepsilon\to 0$. It follows that, varying from fast-converging $\tau$ to fast-converging $\varepsilon$ we may encounter some $\tau(\varepsilon)$ (or $\varepsilon(\tau)$), which we call $\varepsilon$-$\tau$ {\it critical regimes}, for which the minimizing movements are different from those in the two fast-converging $\varepsilon$-$\tau$ regimes, and in general are in a sense an interpolation of the two extreme cases. All regimes give the same result if some general conditions envisaged by Colombo and Gobbino are satisfied by $\{\phi_\varepsilon\}$ \cite{bracolgobsol}, which in particular hold in the `trivial' case of convex energies but are forbidden by fast-oscillating energies. These conditions can be related to the previous seminal work by Sandier and Serfaty on limits of gradient flows \cite{SS}.
In the case of $\phi_\varepsilon$ $\Gamma$-converging to some limit $\phi$, the presence of many local minima may result in a pinning phenomenon (i.e., orbits may be trapped in energy wells). The addition of the perturbations $\{a^\tau_n\}$ has the effect of allowing for a wider exploration of local energy wells, while maintaining a fixed overall effect on the limit continuum rate $a^*$. We prove that general $\{a^\tau\}$-perturbed minimizing movements along $\big\{\phi_\varepsilon\big\}$ interpolate between the fast-converging regimes given now by $\{a^\tau\}$-perturbed minimizing movements for $\phi$ and limits of minimizing movements for $\phi_\varepsilon$ as $\varepsilon\to 0$, and that the Colombo-Gobbino conditions still provide a `commutability result'. The effects in the critical regimes are examined in three sets of examples. First, we deal with one-dimensional discretizations of the simple energy $\phi(u)=-u$, showing that different perturbations with the same $a^*$ may give different pinning effects at the microscopic level, influencing the final homogenized velocity. The second example deals with one-dimensional wiggly energies, related to gradient flows of the type
\begin{equation}
u'=-F'\Bigl({u\over\varepsilon}\Bigr),
\end{equation}
with oscillating $F$.
A minimizing-movement-based study of such energies has been performed in \cite{ansbrazim}, showing pinning phenomena in terms of the ratio $\varepsilon/\tau$. Here, we prove a general homogenization formula for the effective velocity, and an explicit description of the effect of the perturbations $\{a^\tau_n\}$ on the pinning threshold. Finally, a third example is given of a perturbed crystalline motion derived from lattice energies of ferromagnetic type as in \cite{bragelnov}, showing the dependence of the velocity and consequently of the pinning threshold on the values of the perturbations.
\section{Perturbed minimizing movements along a family of functionals}\label{pertMM}
Following the notation in \cite{ambgigsav}, we consider a complete metric space $(S,d)$, and a Hausdorff topology $\sigma$ on $S$, weaker than the one induced by the metric $d$ and such that $d$ is $\sigma$-lower semicontinuous.
We consider a time-discretization parameter $\tau>0$. For every $\tau$ we consider a sequence $(a_n^\tau)_{n\ge1}$, such that $a_n^\tau>0$ for every $n\ge1$. We call this family of ($\tau$-parameterized) sequences ``perturbations'', and we will use the notation
$$a^\tau:(0,+\infty)\to(0,+\infty),\quad a^\tau(t):=a_{\lceil t/\tau\rceil}^\tau$$
to denote the corresponding piecewise-constant interpolation, where $\lceil s\rceil$ denotes the upper integer part of $s$.
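As a small illustrative sketch (ours, not part of the paper), the piecewise-constant interpolation $a^\tau(t)=a_{\lceil t/\tau\rceil}^\tau$ can be written directly; the sequence $a_n=n$ and the name \texttt{interpolate} are hypothetical choices for the example.

```python
import math

def interpolate(a_seq, tau):
    """Piecewise-constant interpolation a^tau(t) = a_{ceil(t/tau)} for t > 0."""
    def a_tau(t):
        # upper integer part; clamp to 1 so a_tau is defined at t -> 0+
        return a_seq(max(1, math.ceil(t / tau)))
    return a_tau

# Hypothetical sequence a_n = n with tau = 0.5:
a_tau = interpolate(lambda n: n, 0.5)
```

Note that the value at the right endpoint of each interval, e.g. $t=\tau$, belongs to the first interval since $\lceil\cdot\rceil$ is the upper integer part.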
For each $\varepsilon>0$ we will consider a proper functional $\phi_\varepsilon:S\to(-\infty,+\infty]$ with the corresponding domains denoted by $D(\phi_\varepsilon)$.
With given $\varepsilon$ and $\tau$ we consider families of sequences $\{u_n^{\tau,\varepsilon}\}_n$ satisfying
\begin{equation}
\label{problem}
\begin{cases}
u_0^{\tau,\varepsilon}\in D(\phi_\varepsilon)\\
u_n^{\tau,\varepsilon}\in\argmin{u\in S}\bigg\{\phi_\varepsilon(u)+a_n^\tau\frac{d^2(u,u_{n-1}^{\tau,\varepsilon})}{2\tau}\bigg\}& \hbox{ if }n\ge1.
\end{cases}
\end{equation}
If such a family exists then we say that it {\em solves the Euler iterated minimization scheme along the sequence of functionals $\{\phi_\varepsilon\}$ at time discretization scale $\tau$ perturbed by $\{a^\tau\}$}. This scheme is a modified formulation of the one presented in \cite{ambgigsav}; in that case $\{a^\tau\}$ take the constant value $1$ and $\phi_\varepsilon$ are all equal to a single $\phi$. If, for fixed $\tau$ and $\varepsilon$, every step $u_n^{\tau,\varepsilon}$ of scheme (\ref{problem}) exists, then the sequence is called a {\em discrete solution} or a {\em discrete orbit} for (\ref{problem}), and is identified with the curve
$$u^{\tau,\varepsilon}:[0,+\infty)\to S,\quad u^{\tau,\varepsilon}(t):=u_{\lceil t/\tau\rceil}^{\tau,\varepsilon}.$$
We define gradient-flow type motions as the limits of $u^{\tau,\varepsilon}$ for $\varepsilon$ and $\tau$ tending to $0$.
The two parameters $\tau$ and $\varepsilon$ are thought of as related, in the sense that one depends on the other, and the limit motion may depend on their relation. In order to highlight this, depending on the situation we will write $\tau(\varepsilon)$ or $\varepsilon(\tau)$, and we will sometimes refer to these relations as ``$\tau$-$\varepsilon$ regimes''.
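As an illustrative numerical sketch (ours, under assumed choices), in the scalar case $S=\mathbb{R}$ with $d(u,v)=|u-v|$ the iterated scheme (\ref{problem}) can be approximated by minimizing over a fine grid; the functional, grid, and parameter values below are arbitrary.

```python
def perturbed_euler_orbit(phi, a, u0, tau, n_steps, grid):
    """Discrete orbit of scheme (problem) on S = R with d(u, v) = |u - v|:
    u_n minimizes phi(u) + a_n * |u - u_{n-1}|^2 / (2 * tau) over `grid`."""
    orbit = [u0]
    for n in range(1, n_steps + 1):
        prev = orbit[-1]
        orbit.append(min(grid, key=lambda u: phi(u) + a(n) * (u - prev) ** 2 / (2 * tau)))
    return orbit

# Unperturbed convex test case: phi(u) = u^2 / 2 and a_n = 1, so each step is
# the implicit Euler step u_n = u_{n-1} / (1 + tau), up to the grid resolution.
grid = [i / 1000 for i in range(-2000, 2001)]
orbit = perturbed_euler_orbit(lambda u: u * u / 2, lambda n: 1.0, 1.0, 0.1, 5, grid)
```

In this convex case the minimizer at each step is unique; the grid search only serves to mimic the abstract minimization in \eqref{problem} without assuming differentiability.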
\begin{definition}
A curve $u:[0,+\infty)\to S$ is called a {\em$\{a^\tau\}$-perturbed minimizing movement along $\big\{\phi_\varepsilon\big\}$} if there exist two sequences $(\tau_k), (\varepsilon_k)$ both tending to $0$ as $k\to+\infty$ such that discrete solutions $u^{\tau_k,\varepsilon_k}$ exist for every $k$ and pointwise converge to $u$ in the topology~$\sigma$.
\end{definition}
For perturbations $\{a^\tau\}$ taking the constant value $1$, this definition is the same as that of minimizing movement along the sequence of functionals $\{\phi_\varepsilon\}$ at scale $\tau$ given in \cite{bracolgobsol}.
For non-constant $\{a^\tau\}$ and a single functional this definition has been used in \cite{tri}. Reworking the arguments therein, which are themselves an elaboration of those in \cite{ambgigsav}, we have the properties contained in the following remark.
\begin{remark}[{\bf Assumptions for the existence of perturbed minimizing movements}]\label{epmm}
Following the case of a single (unperturbed) functional in \cite{ambgigsav}, we consider the following conditions:
\begin{enumerate}
\item (lower semicontinuity) $\phi_\varepsilon$ are $\sigma$-lower semicontinuous for every $\varepsilon>0$
\item (equicoerciveness) there exists $u^*\in S$ such that for all $c>0$ $$\inf_{u\in S, \varepsilon>0}\big\{\phi_\varepsilon(u)+c\, d^2(u,u^*)\big\}>-\infty$$
\item (equicompactness) for all $c>0$ there exists a $\sigma$-compact $K_c$ such that $$\bigcup_{\varepsilon>0}\big\{u\in S\ \big\vert \ d(u,u^*)<c,\,\phi_\varepsilon(u)<c\big\}\subset K_c$$
\item (control of initial data) there exists a constant $C_0$ such that for all $\tau,\,\varepsilon>0$, $d(u_0^{\tau,\varepsilon},u^*)\le C_0$ and $\phi_\varepsilon(u_0^{\tau,\varepsilon})\le C_0$
\item (local uniform integrability) the family $\{1/a^\tau\}$ is uniformly integrable in $[0,T]$ for all $T>0$.
\end{enumerate}
These hypotheses imply (see \cite{tri} Section 2 for details) that for any $T>0$ there exists a constant $C_T$ depending on the perturbations such that
\begin{equation}\label{precomp}
d(u^{\tau,\varepsilon}(t),u^*) \le C_T, \quad \phi_\varepsilon(u^{\tau,\varepsilon}(t)) \le C_0,\text{ for every }t\in[0,T];
\end{equation}
i.e., the $\sigma$-precompactness of discrete orbits, and a regularity of discrete solutions
\begin{equation}\label{regularity}
d(u^{\tau,\varepsilon}(t),u^{\tau,\varepsilon}(s)) \le c\,\theta_T(t+\tau,s),\,\hbox{ for all }t,s\in[0,T]
\end{equation}
where $c$ is a positive constant and
$$\theta_T(t,s)=\Big(\sup_{\tau>0} \int_s^t \frac{1}{a^\tau(\xi)}d\xi\Big)^{\frac{1}{2}}$$
defines a modulus of continuity. Applying a variant of the Ascoli-Arzel\`a Theorem (see Proposition 3.3.1 \cite{ambgigsav}) we obtain the existence of (at least) one perturbed minimizing movement $u\in AC_{\rm loc}(0,+\infty;S)$ as the limit of a sequence $u^{\tau_k,\varepsilon_k}$.
Moreover, the increments of the discrete solutions
\begin{equation}\label{discder}
|(u^{\tau,\varepsilon})'|(t):=\frac{d(u_n^{\tau,\varepsilon},u_{n-1}^{\tau,\varepsilon})}{\tau},\text{ if }t\in((n-1)\tau,n\tau]
\end{equation}
weakly converge (up to subsequences) in $L^1_{\rm loc}(0,+\infty)$ to a function $A$, which is an upper bound for the {\em metric derivative} of $u$ (for its definition, see for instance \cite{ambgigsav} Theorem 1.1.2); i.e.,
\begin{equation}\label{upbound}
|u'|(t):=\lim_{s\to t}\frac{d(u(t),u(s))}{|t-s|}\le A(t),\hbox{ a.e. in }(0,+\infty),
\end{equation}
and, defining (as in \cite{ambgigsav} Definition 3.2.1) for every $\tau, \varepsilon$ the {\em De Giorgi interpolants}
\begin{equation}\label{Gte}
\begin{aligned}
&\tilde{u}^{\tau,\varepsilon}(t) \in \argmin{u\in S}\bigg\{\phi_\varepsilon(u)+a^\tau_n\frac{d^2(u,u_{n-1}^{\tau,\varepsilon})}{2\delta}\bigg\} \\
&G_{\tau,\varepsilon}(t) = a_n^\tau\frac{d(\tilde{u}^{\tau,\varepsilon}(t),u_{n-1}^{\tau,\varepsilon})}{\delta}
\end{aligned},\text{ if }t=(n-1)\tau+\delta
\end{equation}
we also obtain the following discrete energy estimate
\begin{equation}\label{eneest}
\frac{1}{2}\int_0^{n\tau}a^\tau(\xi)|(u^{\tau,\varepsilon})'|^2(\xi)d\xi+\frac{1}{2}\int_0^{n\tau}\frac{1}{a^\tau(\xi)}G_{\tau,\varepsilon}^2(\xi)d\xi=\phi_\varepsilon(u_0^{\tau,\varepsilon})-\phi_\varepsilon(u_n^{\tau,\varepsilon})
\end{equation}
for all $n\ge1$, which will yield convergence of the energies.
\end{remark}
\begin{remark}[Curves of maximal slope with perturbed velocity]
In \cite{ambgigsav} it is proved that minimizing movements for a single functional $\phi$ at the scale $\tau$ are curves of maximal slope with respect to $|\partial^-\phi|(u)$, the {\em relaxed local slope} of $\phi$, which is defined as the $\sigma$-lower semicontinuous envelope of
$$|\partial\phi|(u)=\liminf_{v\to u}\frac{\big(\phi(u)-\phi(v)\big)_+}{d(u,v)},$$
under the assumption that $|\partial^-\phi|(u)$ be a {\em strong upper gradient} (see Definition 1.2.1 \cite{ambgigsav}). In the perturbed case, we need a generalization of the concept of curve of maximal slope for a functional in metric spaces to obtain the analogous result (Theorem 3.9 \cite{tri}) that $\{a^\tau\}$-perturbed minimizing movements for $\phi_\varepsilon$ are curves of maximal slope for some $\phi$ with a perturbed velocity.
\begin{definition}[Definition 3.2 \cite{tri}] Let $g:S\to[0,+\infty]$ be a strong upper gradient for $\phi$; that is, for any $v\in AC(a,b;S)$, $g\circ v$ is Borel and
$$|\phi(v(t))-\phi(v(s))|\le\int_s^t g(v(\xi))|v'|(\xi)d\xi,\hbox{ for any }a<s<t<b.$$
If $\lambda:(a,b)\to(0,+\infty)$ is a measurable function, $u\in AC(a,b;S)$ is a {\em curve of maximal slope for $\phi$ with respect to a strong upper gradient $g$ of rate $\lambda$} if $\phi\circ u$ equals almost everywhere a non-increasing map (still denoted by $\phi\circ u$) and for all $a<s<t<b$
\begin{equation}\label{cmp}
\phi(u(t))-\phi(u(s))\le-\frac{1}{2}\int_s^t\frac{1}{\lambda(\xi)}|u'|^2(\xi)d\xi-\frac{1}{2}\int_s^t\lambda(\xi)g(u(\xi))^2d\xi.
\end{equation}
If $\lambda$ is constant then $u$ is a curve of maximal slope for $\phi$ with respect to $g$ according to the definition given in \cite{ambgigsav}.
\end{definition}
\end{remark}
\subsection{The conditions of Colombo-Gobbino}
As in \cite{bracolgobsol}, we prove that if the functionals converge in a strong way (the conditions of Colombo-Gobbino below), then we have a ``commutability result''.
\begin{definition}
We say that a sequence of functionals $\big\{\phi_\varepsilon\big\}$ {\em converges to a functional $\phi$ according to the conditions of Colombo-Gobbino} if for every sequence $\varepsilon_k\to0$ and for all $v_k\xrightarrow{\sigma}v$, such that $\sup_k\big\{|\phi_{\varepsilon_k}(v_k)|,|\partial\phi_{\varepsilon_k}|(v_k)\big\}<+\infty$ we have
\begin{equation}\label{colgob}
\lim_k\phi_{\varepsilon_k}(v_k)=\phi(v),\quad\liminf_k|\partial\phi_{\varepsilon_k}|(v_k)\ge|\partial\phi|(v).
\end{equation}
\end{definition}
With this condition we have the following result.
\begin{theorem}\label{CGthm}
If assumptions from $1$ to $5$ of Remark {\rm\ref{epmm}} hold (so that there exists at least one perturbed minimizing movement), if a finite $a^*$ exists such that the functions $\{1/a^\tau\}$ weakly converge to $1/a^*$ in $L^1_{\rm loc}(0,+\infty)$ (which is always satisfied up to subsequences by assumption $5$ of Remark {\rm\ref{epmm}}), if
\begin{itemize}
\item[{\rm (i)}] $\phi_\varepsilon$ converges to $\phi$ according to the conditions of Colombo-Gobbino
\item[{\rm (ii)}] the local slope $|\partial\phi|$ is a strong upper gradient for $\phi$
\end{itemize}
then every $\{a^\tau\}$-perturbed minimizing movement along $\big\{\phi_\varepsilon\big\}$ of problem \eqref{problem} is a curve of maximal slope for $\phi$ with respect to $|\partial\phi|$ of rate $1/a^*$.
\end{theorem}
\begin{remark}
The finiteness assumption on $a^*$ is only technical and can be avoided by a more precise definition of a curve of maximal slope of given rate (as the one in \cite{tri}), which we omit here for the sake of simplicity. Note moreover that the set $E=\{t\,|\,a^*(t)=+\infty\}$, i.e., the set on which $1/a^\tau\rightharpoonup0$, corresponds to the set of times at which the curve $u$ has zero velocity.
\end{remark}
The proof of this theorem follows the one in \cite{ambgigsav}, with the help of two additional technical results presented with all the details in \cite{tri} and recalled below. The first one can be proved by using test functions smaller than $\liminf_{\tau\to0}G_{\tau,\varepsilon}^2$ in the weak convergence of $1/a^\tau$. The second one can be obtained by slightly modifying the $\Gamma$-convergence result for Dirichlet energy functionals on Sobolev spaces (Theorem 2.35 and Example 2.36 \cite{bra1}).
\begin{lemma}\label{fatoul} Let $\tilde{u}^{\tau,\varepsilon}$ and $G_{\tau,\varepsilon}$ be defined as in \eqref{Gte}. Then
for every $t>0$ we have
$$\liminf_{\tau,\varepsilon\to0}\int_0^{\lceil\frac{t}{\tau}\rceil\tau}\frac{1}{a^\tau(\xi)}G_{\tau,\varepsilon}^2(\xi)d\xi \ge \int_0^t\frac{1}{a^*(\xi)}\liminf_{\tau,\varepsilon\to0}G_{\tau,\varepsilon}^2(\xi)d\xi.$$
\end{lemma}
Since $|\partial\phi_\varepsilon|(\tilde{u}^{\tau,\varepsilon}(t)) \le G_{\tau,\varepsilon}(t)$ (Lemma 3.1.3 \cite{ambgigsav}) Lemma \ref{fatoul} implies
\begin{equation}\label{fatou}
\liminf_{\tau,\varepsilon\to0}\int_0^{\lceil\frac{t}{\tau}\rceil\tau}\frac{1}{a^\tau(\xi)}G_{\tau,\varepsilon}^2(\xi)d\xi \ge \int_0^t\frac{1}{a^*(\xi)}\liminf_{\tau,\varepsilon\to0}|\partial\phi_\varepsilon|^2(\tilde{u}^{\tau,\varepsilon}(\xi))d\xi.
\end{equation}
\begin{lemma}\label{omol}
Let $(u^{\tau,\varepsilon})'$ be defined as in \eqref{discder} and $A$ as in \eqref{upbound}. Let $\tau_k,\varepsilon_k$ be sequences such that $u^{\tau_k,\varepsilon_k}$, $(u^{\tau_k,\varepsilon_k})'$ and $a^{\tau_k}$ converge respectively to $u$, $A$ and $a^*$; then there exists a subsequence (not relabeled) such that for every $t>0$
\begin{equation}\label{omo}
\liminf_{k}\int_0^{\lceil\frac{t}{\tau_k}\rceil\tau_k} a^{\tau_k}(\xi)|(u^{\tau_k,\varepsilon_k})'|^2(\xi)d\xi \ge \int_0^t a^*(\xi)A^2(\xi)d\xi.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Theorem {\rm\ref{CGthm}}]
Taking the limit in the energy estimate (\ref{eneest}), thanks to the conditions of Colombo-Gobbino, and inequalities (\ref{fatou}) and (\ref{omo}), we have
\begin{eqnarray*}
\phi(u(0))&=&\lim_k\phi_{\varepsilon_k}(u_0^{\tau_k,\varepsilon_k})\\
&\ge&\liminf_k\frac{1}{2}\int_0^{n\tau_k}a^{\tau_k}(\xi)|(u^{\tau_k,\varepsilon_k})'|^2(\xi)d\xi+\frac{1}{2}\int_0^{n\tau_k}\frac{1}{a^{\tau_k}(\xi)}G_{\tau_k,\varepsilon_k}^2(\xi)d\xi\\
&&\quad\quad+\phi_{\varepsilon_k}(u^{\tau_k,\varepsilon_k}(t))\\
&\ge&\frac{1}{2}\int_0^ta^*(\xi)A^2(\xi)d\xi+\frac{1}{2}\int_0^t\frac{1}{a^*(\xi)}\liminf_k|\partial\phi_{\varepsilon_k}|^2(\tilde{u}^{\tau_k,\varepsilon_k}(\xi))d\xi\\
&&\quad\quad+\phi(u(t))\\
&\ge&\frac{1}{2}\int_0^ta^*(\xi)|u'|^2(\xi)d\xi+\frac{1}{2}\int_0^t\frac{1}{a^*(\xi)}|\partial\phi|^2(u(\xi))d\xi+\phi(u(t))
\end{eqnarray*}
and the result follows.
\end{proof}
\subsection{Fast-converging sequences}
Now, we treat the case when we only make the assumption of $\Gamma$-convergence of the sequence of the functionals $\phi_\varepsilon$, which always holds up to subsequences in separable metric spaces. In what follows we set $\phi=\Gamma$-$\lim_\varepsilon\phi_\varepsilon$.
Under these weaker hypotheses not in every $\tau$-$\varepsilon$ regime do we have commutation between the $\Gamma$-limit and the minimizing movement, as shown by the following result, which is a readjustment to the perturbed case of the result in \cite{bra2} (Theorem 8.1).
\begin{theorem}\label{fastconv}
Let conditions from $1$ to $5$ of Remark $\ref{epmm}$ hold, and let the family of functionals $\big\{\phi_\varepsilon\big\}$ be equi-mildly-coercive; that is, for every $c>0$ there exists a $d$-compact set $K_c$ such that for all $\varepsilon>0$
$$\inf_{u\in S}\big\{\phi_\varepsilon(u)+c\,d^2(u,u^*)\big\}=\inf_{u\in K_c}\big\{\phi_\varepsilon(u)+c\,d^2(u,u^*)\big\}$$
where $u^*$ is the same as in condition {\rm 2} of Remark {\rm\ref{epmm}}. Then, if $u_0^{\tau,\varepsilon}=u_0^\varepsilon$ we have that
\begin{itemize}
\item[\rm(i)] there exists a scale $\varepsilon(\tau)$ such that if $\varepsilon\le\varepsilon(\tau)$ every $\{a^\tau\}$-perturbed minimizing movement along $\big\{\phi_\varepsilon\big\}$ is a $\{a^\tau\}$-perturbed minimizing movement with respect to $\phi$;
\item[\rm(ii)] there exists a scale $\tau(\varepsilon)$ such that if $\tau\le\tau(\varepsilon)$ every $\{a^\tau\}$-perturbed minimizing movement along $\big\{\phi_\varepsilon\big\}$ is a limit curve of the sequence $\big\{u^\varepsilon\big\}$, where, for every fixed $\varepsilon$, $u^\varepsilon$ is a $\{a^\tau\}$-perturbed minimizing movement with respect to the single functional $\phi_\varepsilon$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) By assumption 4 of Remark \ref{epmm} it is not restrictive to assume that $u_0^\varepsilon\to u_0$.
With fixed $\tau$, for any sequence $\varepsilon_n\to0$ and any $v_n\to v$, the term $d^2(v_n,u_0^{\tau,\varepsilon_n})$ converges to $d^2(v,u_0)$, which implies that
$$\Gamma\text{-}\lim_{\varepsilon\to0}\bigg(\phi_\varepsilon(u)+a_1^\tau\frac{d^2(u,u_0^{\tau,\varepsilon})}{2\tau}\bigg) = \phi(u)+a_1^\tau\frac{d^2(u,u_0)}{2\tau}$$
and, by the equicoerciveness assumption, we have the convergence of the minima
$$\lim_{\varepsilon\to0}\min_{u\in S}\bigg\{\phi_\varepsilon(u)+a_1^\tau\frac{d^2(u,u_0^{\tau,\varepsilon})}{2\tau}\bigg\} = \min_{u\in S}\bigg\{\phi(u)+a_1^\tau\frac{d^2(u,u_0)}{2\tau}\bigg\}.$$
Hence, every minimizer $u_1^{\tau,\varepsilon}$ converges to a minimizer $u_1^\tau$ for the corresponding minimum problem with respect to $\phi$ when $\varepsilon$ tends to $0$. Repeating the same argument every $u_n^{\tau,\varepsilon}$ converges to the corresponding $u_n^\tau$ for every $n\ge1$ and so we have the convergence of the discrete solutions
$$\lim_{\varepsilon\to0}u^{\tau,\varepsilon}=u^\tau.$$
Since $u^\tau$ converges to a $\{a^\tau\}$-perturbed minimizing movement with respect to $\phi$, a diagonal argument defines $\varepsilon(\tau)$.
(ii) With fixed $\varepsilon>0$, we have convergence of the discrete solutions to $u^{\varepsilon}$, and these perturbed minimizing movements are equicompact and equicontinuous. This follows by passing to the limit in (\ref{precomp}) and (\ref{regularity}) of Remark \ref{epmm}, since these properties depend only on the perturbations, which do not depend on $\varepsilon$. Hence, the result follows from the Ascoli-Arzel\`a Theorem.
\end{proof}
\begin{remark}\label{rmk-fastconv}
In the sequel, we will use the notation $u^0$ and $u^\infty$ to indicate perturbed minimizing movements obtained under condition (i) and (ii), respectively, of Theorem \ref{fastconv}.
\end{remark}
\section{Examples of critical regimes} From Theorem \ref{fastconv} we infer that, when the evolutions given by opposite types of fast-converging sequences differ, varying between those we may encounter one or more {\em critical $\varepsilon$-$\tau$ regimes}, at which the minimizing movements describe an effective motion different from the extreme ones. Various types of critical regimes have already been studied in \cite{bra2} in the case of unperturbed minimizing movements. Here we highlight some effects of the perturbations with two simple examples.
\smallskip
On the real line, we consider the functions
$$\phi_\varepsilon(t)=
\begin{cases}
-t&t\in\varepsilon\mathbb{Z}\\
+\infty&\text{otherwise}
\end{cases}$$
as a prototype of multi-well energies with different well depths. It is not restrictive to take
$u_0^{\tau,\varepsilon}\equiv0$. The perturbations $\{a^\tau\}$ are assumed to satisfy assumption 5 of Remark \ref{epmm}.
All the hypotheses of Remark \ref{epmm} are satisfied, so that $\{a^\tau\}$-perturbed minimizing movements along $\big\{\phi_\varepsilon\big\}$ are well-defined curves $u:[0,+\infty)\to\mathbb{R}$.
In this case, the iterated minimization algorithm \eqref{problem} takes the form
$$
u_n^{\tau,\varepsilon}\in\argmin{u\in\varepsilon\mathbb{Z}}\bigg\{-u+a_n^\tau\frac{|u-u_{n-1}^{\tau,\varepsilon}|^2}{2\tau}\bigg\},
$$
so that $u_n^{\tau,\varepsilon}$ is the point of $\varepsilon\mathbb{Z}$ closest to the minimum of the parabola; that is, $u_{n-1}^{\tau,\varepsilon}+\tau/a_n^\tau$.
Note that if $\tau/a_n^\tau\in\varepsilon(\mathbb{Z}+1/2)$ there are two such points, so that we have
$$u_n^{\tau,\varepsilon}=
\begin{cases}
u_{n-1}^{\tau,\varepsilon}+\tau/a_n^\tau\pm\varepsilon/2&\text{if }\tau/a_n^\tau\in\varepsilon(\mathbb{Z}+1/2)\\
u_{n-1}^{\tau,\varepsilon}+\varepsilon\lfloor\tau/(\varepsilon a_n^\tau)+1/2\rfloor&\text{otherwise.}
\end{cases}$$
The cases of double minimizers can be treated separately, so for simplicity we consider the assumption
\begin{equation}\label{nob}
\frac{1}{a_n^\tau}\not\in\frac{\varepsilon}{\tau}\bigg(\mathbb{Z}+\frac{1}{2}\bigg)
\end{equation}
in which case $\{u_n^{\tau,\varepsilon}\}$ is defined iteratively by
\begin{equation}\label{ste}
u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}+\varepsilon\bigg\lfloor\frac{\tau}{\varepsilon a_n^\tau}+\frac{1}{2}\bigg\rfloor.
\end{equation}
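The recursion \eqref{ste} is immediate to implement; the following sketch (ours, with arbitrary parameter values chosen to satisfy \eqref{nob}) computes a few steps of the discrete orbit starting from $u_0^{\tau,\varepsilon}=0$.

```python
import math

def discrete_orbit(a, tau, eps, n_steps):
    """Iterates (ste): u_n = u_{n-1} + eps * floor(tau / (eps * a_n) + 1/2)."""
    u = [0.0]
    for n in range(1, n_steps + 1):
        u.append(u[-1] + eps * math.floor(tau / (eps * a(n)) + 0.5))
    return u

# tau = 0.1, eps = 0.03, constant a_n = 1: each step jumps by 3 lattice points
# (the parabola vertex sits at distance tau / a_n = 0.1 from u_{n-1}).
moving = discrete_orbit(lambda n: 1.0, 0.1, 0.03, 5)
# With the much larger a_n = 10 the floor term vanishes and the orbit stays at 0.
still = discrete_orbit(lambda n: 10.0, 0.1, 0.03, 5)
```

The second orbit illustrates the trapping effect of a strong dissipation coefficient, discussed in the next subsection.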
\subsection{Pinning}
We say that the perturbed motion is {\em pinned} if there exists a constant $c>0$ such that $u\equiv u_0$ whenever $a^\tau>c$. We define the infimum of such constants (which may also be $0$) as the {\em pinning threshold} of the motion.
Define $\gamma=\gamma(\tau,\varepsilon)=\varepsilon/\tau$. Condition (\ref{nob}), which ensures the uniqueness of the minima, is
$$\frac{1}{a_n^\tau}\not\in\gamma\bigg(\mathbb{Z}+\frac{1}{2}\bigg).$$
By (\ref{ste}), if $a_n^\tau<2/\gamma$ then $u_n^{\tau,\varepsilon}>u_{n-1}^{\tau,\varepsilon}$, otherwise we have $u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}$. Hence, if $a^\tau>2/\gamma$ the motion is pinned; i.e., $u(t)\equiv0$ and $2/\gamma$ is the pinning threshold.
Note that $a^\tau>2/\gamma$ is a sufficient condition in order to obtain a pinned motion, but not necessary. In fact, consider the set $I_{\gamma,\tau}(t):=\{\xi\in[0,t]\,|\,a^\tau(\xi)\le2/\gamma\}$. By \eqref{ste} the discrete solution is
$$u^{\tau,\varepsilon}(t)=\varepsilon\sum_{n\tau\in I_{\gamma,\tau}(t)}\left\lfloor\frac{\tau}{\varepsilon a_n^{\tau}}+\frac{1}{2}\right\rfloor=\gamma\int_{I_{\gamma,\tau}(t)}\bigg\lfloor\frac{1}{a^{\tau}(\xi)\gamma}+\frac{1}{2}\bigg\rfloor d\xi$$
and we obtain the estimate
$$\big|I_{\gamma,\tau}(t)\big|\le u^{\tau,\varepsilon}(t)\le\int_{I_{\gamma,\tau}(t)}\frac{1}{a^{\tau}(\xi)}d\xi+\frac{\gamma}{2}\big|I_{\gamma,\tau}(t)\big|.$$
Hence, if the following condition over the perturbations $\{a^\tau\}$ is satisfied
\begin{equation}\label{pinning}
\lim_k\big|I_{\gamma_k,\tau_k}(t)\big|=0 \hbox{ for all } t\ge0,
\end{equation}
where $\gamma_k=\gamma(\varepsilon_k,\tau_k)$, we have a pinned motion $u=\lim_k u^{\tau_k,\varepsilon_k}$. Otherwise, if for some $t_0\ge0$
$$\limsup_k\big|I_{\gamma_k,\tau_k}(t_0)\big|>0$$
taking the limit along a suitable subsequence of $(\tau_k)$ we obtain $u(t_0)>0$.
\begin{remark}
In the case of $N$-periodic perturbations, we have that condition (\ref{pinning}) is satisfied if and only if
\begin{equation}\label{pinningth}
\alpha:=\inf_{1\le n\le N}a_n^\tau>\frac{2}{\gamma}.
\end{equation}
So in this case, the pinned perturbed motions are characterized by the pinning threshold;
i.e., if $\alpha>2/\gamma$ the motion is pinned, otherwise it is not.
\end{remark}
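A quick numerical check of criterion \eqref{pinningth} (our own sketch; the values of $\alpha$, $\beta$, $\tau$, $\varepsilon$ are arbitrary and chosen to satisfy \eqref{nob}):

```python
import math

def periodic_step_sizes(alpha, beta, tau, eps):
    """Lattice jumps (k_alpha, k_beta) of the iteration (ste) for the
    2-periodic perturbation a_n = alpha (n odd), beta (n even)."""
    k = lambda a: math.floor(tau / (eps * a) + 0.5)
    return k(alpha), k(beta)

# gamma = eps / tau = 1, so the pinning threshold 2 / gamma equals 2:
# inf{1.5, 3} = 1.5 < 2 -> the orbit moves on odd steps;
# inf{2.5, 3} = 2.5 > 2 -> both jumps vanish and the motion is pinned.
```

This matches \eqref{pinningth}: only the infimum $\alpha$ of the periodic values decides whether the orbit is pinned, even when the other value $\beta$ exceeds the threshold.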
\subsection{Fast-convergences}\label{fastconvergences}
In this case, the scales defined in (i) and (ii) of Theorem \ref{fastconv} can be chosen as $\tau(\varepsilon)=o(\varepsilon)$ and $\varepsilon(\tau)=o(\tau)$.
Indeed, consider $\varepsilon(\tau)=o(\tau)$: by (\ref{ste}) we have, for every $t\ge0$,
$$u^{\tau,\varepsilon}(t)=\sum_{n=1}^{\lceil t/\tau\rceil}\varepsilon\bigg\lfloor\frac{\tau}{\varepsilon a_n^{\tau}}+\frac{1}{2}\bigg\rfloor$$
so taking the limit for $\tau\to0$ in
$$\sum_{n=1}^{\lceil t/\tau\rceil}\frac{\tau}{a_n^\tau}-\bigg\lfloor\frac{t}{\tau}\bigg\rfloor\varepsilon\le u^{\tau,\varepsilon}(t)\le\sum_{n=1}^{\lceil t/\tau\rceil}\frac{\tau}{a_n^\tau}+\bigg\lfloor\frac{t}{\tau}\bigg\rfloor\varepsilon$$
we obtain
\begin{equation}\label{fast-motion}
u^0(t)=\int_0^t\frac{1}{a^*(\xi)}d\xi,
\end{equation}
a $\{a^\tau\}$-perturbed minimizing movement with respect to $\phi(t)=-t=\Gamma$-$\lim_{\varepsilon\to0}\phi_\varepsilon(t)$.
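As a numerical illustration of this fast-converging regime (our own sketch, with arbitrary parameter choices), take $\varepsilon=\tau^2$ (so $\varepsilon=o(\tau)$) and the $2$-periodic perturbation $a_n\in\{1,2\}$: the discrete solution at $t=1$ approaches $1/a^*=(1/1+1/2)/2=3/4$.

```python
import math

def orbit_value(a, tau, eps, t):
    """u^{tau,eps}(t) from (ste): cumulative sum of eps * floor(tau/(eps*a_n) + 1/2)."""
    n = math.ceil(t / tau)
    return sum(eps * math.floor(tau / (eps * a(i)) + 0.5) for i in range(1, n + 1))

# eps = tau^2 = o(tau); 2-periodic perturbation a_n = 1 (n odd), 2 (n even).
tau = 0.001
u1 = orbit_value(lambda n: 1.0 if n % 2 == 1 else 2.0, tau, tau ** 2, 1.0)
```

The per-step error of the sandwich estimate above is at most $\varepsilon$, so the total deviation from $t/a^*$ is of order $\lfloor t/\tau\rfloor\varepsilon=t\tau\to0$.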
Now, let $\tau(\varepsilon)=o(\varepsilon)$, so that $1/\gamma(\varepsilon)\to0$ as $\varepsilon\to0$. Assumption 5 of Remark \ref{epmm} implies that
$$\big|I_{\gamma,\tau}(t)\big|\le\frac{2}{\gamma}\int_{I_{\gamma,\tau}(t)}\frac{1}{a^\tau(\xi)}d\xi\le\frac{2}{\gamma}\bigg\Vert\frac{1}{a^\tau}\bigg\Vert_{L^1(0,t)}\le\frac{2}{\gamma}C_{0,t}.$$
Hence, the pinning condition (\ref{pinning}) is satisfied; i.e., $u^\infty(t)\equiv0$ in these regimes. Moreover, for every fixed $\varepsilon$ the perturbed minimizing movements $u^\varepsilon$ are identically $0$, because $\phi_\varepsilon$ has a discrete domain, so the result follows.
This shows that the critical regimes are such that $\varepsilon(\tau)=\gamma(\tau)\tau$, with $\gamma(\tau)$ a bounded function with $\inf_\tau\gamma(\tau)>0$. Without loss of generality consider the regimes
$$\varepsilon=\gamma\tau.$$
In what follows, we use the notation $u^\gamma$ for the $\{a^\tau\}$-perturbed minimizing movements along $\{\phi_{\gamma\tau}\}$.
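The two extreme regimes can be checked numerically. The following minimal sketch (an illustration, not part of the argument) iterates the update rule of (\ref{ste}), $u_n=u_{n-1}+\varepsilon\lfloor\tau/(\varepsilon a_n)+1/2\rfloor$, for $2$-periodic perturbations with the illustrative values $\alpha=1$, $\beta=2$:

```python
import math

def u_T(tau, eps, t_end, a_of_n):
    # iterate u_n = u_{n-1} + eps*floor(tau/(eps*a_n) + 1/2), n = 1, ..., t_end/tau
    steps = round(t_end / tau)
    u = 0.0
    for n in range(1, steps + 1):
        u += eps * math.floor(tau / (eps * a_of_n(n)) + 0.5)
    return u

alpha, beta = 1.0, 2.0
a = lambda n: alpha if n % 2 == 1 else beta   # 2-periodic perturbations

# eps = o(tau): every step is ~ tau/a_n, so u(1) -> (1/alpha + 1/beta)/2 = 0.75
u_fast = u_T(tau=1e-3, eps=1e-5, t_end=1.0, a_of_n=a)

# tau = o(eps): every rounded step vanishes, so the motion is pinned
u_pinned = u_T(tau=1e-5, eps=1e-3, t_end=1.0, a_of_n=a)

print(u_fast, u_pinned)   # ~0.75 and 0.0
```

For $\varepsilon=o(\tau)$ the value $u(1)$ matches $\int_0^1 1/a^*\,d\xi=(1/\alpha+1/\beta)/2$, while for $\tau=o(\varepsilon)$ the motion is pinned, as claimed above.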
\subsection{Periodic perturbations}
Now, given $0<\alpha<\beta$, choose general periodic perturbations
$$a_n^\tau=
\begin{cases}
\alpha&n\text{ odd}\\
\beta&n\text{ even.}
\end{cases}$$
The reciprocals of such perturbations weakly converge to $1/a^*=(1/\alpha+1/\beta)/2$; that is, $a^*$ is the harmonic mean of $\alpha$ and $\beta$. Hence, from \eqref{fast-motion} and the analysis of the pinning phenomenon performed above we have
$$u^0(t)=\frac{1}{a^*}t,\quad u^\infty(t)\equiv0,$$
where we have used the notation introduced in Remark \ref{rmk-fastconv}.
In the critical regimes, we have different perturbed minimizing movements depending on $\gamma$, chosen according to condition \eqref{nob}. Define $k_\alpha=\lfloor1/(\alpha\gamma)+1/2\rfloor$ and $k_\beta=\lfloor1/(\beta\gamma)+1/2\rfloor$. By \eqref{ste} we have
$$u_n^{\tau,\varepsilon}=
\begin{cases}
u_{n-1}^{\tau,\varepsilon}+k_\alpha\varepsilon&n\text{ odd}\\
u_{n-1}^{\tau,\varepsilon}+k_\beta\varepsilon&n\text{ even}
\end{cases}
=\bigg\lceil\frac{n}{2}\bigg\rceil k_\alpha\varepsilon+\bigg\lfloor\frac{n}{2}\bigg\rfloor k_\beta\varepsilon$$
and so the discrete solution is
$$u^{\tau,\varepsilon}(t)=\bigg\lceil\frac{t}{2\tau}\bigg\rceil k_\alpha\varepsilon+\bigg\lfloor\frac{t}{2\tau}\bigg\rfloor k_\beta\varepsilon=\bigg\lceil\frac{t}{2\tau}\bigg\rceil k_\alpha\gamma\tau+\bigg\lfloor\frac{t}{2\tau}\bigg\rfloor k_\beta\gamma\tau$$
and taking the limit we have
$$u^\gamma(t)=\frac{1}{a_\gamma}t,\text{ with }\frac{1}{a_\gamma}:=\gamma\frac{k_\alpha+k_\beta}{2}.$$
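As a sanity check (an illustration, not needed for the proof), the critical-regime slope $\gamma(k_\alpha+k_\beta)/2$ can be compared with a direct iteration of the scheme; the values $\gamma=0.8$, $\alpha=1$, $\beta=2$ below are illustrative and satisfy condition (\ref{nob}):

```python
import math

gamma, alpha, beta = 0.8, 1.0, 2.0
tau = 1e-3
eps = gamma * tau

k_alpha = math.floor(1 / (alpha * gamma) + 0.5)   # jump size (in units of eps) at odd steps
k_beta = math.floor(1 / (beta * gamma) + 0.5)     # jump size at even steps

# iterate u_n = u_{n-1} + k_{a_n}*eps for N steps
N = 2000
u = 0.0
for n in range(1, N + 1):
    u += (k_alpha if n % 2 == 1 else k_beta) * eps

slope = u / (N * tau)
predicted = gamma * (k_alpha + k_beta) / 2        # the slope 1/a_gamma of u^gamma
print(slope, predicted)
```

With these values $k_\alpha=k_\beta=1$, so the limit velocity is $0.8$, which is larger than $1/a^*=0.75$.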
\begin{figure}[htp]
\includegraphics[height=6 cm]{Fig1}\includegraphics[height=6 cm]{Fig2}
\caption[]{The dark line represents the graph of $\gamma\mapsto1/a^\gamma$, the light line the constant $1/a^*$. On the left $\alpha>\beta/2$, so the sup is attained at $\gamma_1^\beta$; on the right $\alpha<\beta/2$ and the sup is attained at $\gamma_1^\alpha$.}
\label{fig1}
\end{figure}
\begin{remark}
By varying $\gamma$ we obtain different perturbed minimizing movements depending on the value $1/a_\gamma$. The function $\gamma\mapsto 1/a_\gamma$ is piecewise linear, with jumps at the bifurcation values; i.e., by (\ref{nob}), at
$$\gamma^\alpha_j:=\frac{2}{(2j-1)\alpha},\quad\gamma^\beta_j:=\frac{2}{(2j-1)\beta},$$
and $1/a_\gamma=0$ for all $\gamma>\gamma_1^\alpha$, as stated by condition (\ref{pinningth}). Moreover $1/a_\gamma\to1/a^*$ as $\gamma\to0$ (see Figure \ref{fig1}). In fact $1/a_\gamma$ can be seen as an approximation of $1/a^*$, the inverse of the harmonic mean of $\alpha$ and $\beta$, taking values in $\frac{\gamma}{2}\mathbb{Z}$.
We now determine the largest velocity that the motion can reach. To do this it suffices to evaluate $1/a^\gamma$ at the right endpoints of the continuity intervals, since the function is increasing on each of them. This corresponds to considering the greatest jump value at $\gamma_j^\alpha$ and $\gamma_j^\beta$ for all $j\ge1$. These are
\begin{align*}
\frac{1}{a^{\gamma_j^\alpha}} &= \frac{j}{(2j-1)\alpha}+\frac{\gamma_j^\alpha}{2}\bigg\lfloor\frac{1}{\beta\gamma_j^\alpha}+\frac{1}{2}\bigg\rfloor=\frac{1}{(2j-1)\alpha}\left(j+\left\lfloor\frac{(2j-1)\alpha}{2\beta}+\frac{1}{2}\right\rfloor\right) \\
\frac{1}{a^{\gamma_j^\beta}} &= \frac{\gamma_j^\beta}{2}\bigg\lfloor\frac{1}{\alpha\gamma_j^\beta}+\frac{1}{2}\bigg\rfloor+\frac{j}{(2j-1)\beta}=\frac{1}{(2j-1)\beta}\left(\left\lfloor \frac{(2j-1)\beta}{2\alpha}+\frac{1}{2} \right\rfloor+j\right).
\end{align*}
Since $\alpha<\beta$, the value $(2j-1)\alpha/(2\beta)+1/2$ is less than $j$, so its integer part is at most $j-1$, which yields $1/a^{\gamma_j^\alpha}\le1/\alpha=1/a^{\gamma_1^\alpha}$. When $\alpha\ge\beta/2$ we have $(2j-1)\beta/(2\alpha)+1/2\le2j-1/2$, so $1/a^{\gamma_j^\beta}\le1/\beta+j/((2j-1)\beta)\le2/\beta=1/a^{\gamma_1^\beta}$. Now, since $1/\alpha\ge1/a^{\gamma_1^\beta}=2/\beta$ if and only if $\alpha\le\beta/2$, we have that
$$\sup_{\gamma>0}\frac{1}{a^\gamma}=\max_{j\ge1}\left\{\frac{1}{a^{\gamma_j^\alpha}},\frac{1}{a^{\gamma_j^\beta}}\right\}=\begin{cases}
1/\alpha&\alpha\le\beta/2 \\
2/\beta&\alpha\ge\beta/2.\end{cases}$$
Therefore, the largest velocity of $u^\gamma$, which is attained at $\gamma=2/\alpha$ if $\alpha\le\beta/2$ and at $\gamma=2/\beta$ otherwise, depends on the ratio $\alpha/\beta$ (Figure \ref{fig1}).
\end{remark}
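The maximal-velocity formula of the remark can be tested numerically from the closed forms above; the sketch below (with illustrative values of $\alpha,\beta$ on either side of $\alpha=\beta/2$) evaluates $1/a^{\gamma_j^\alpha}$ and $1/a^{\gamma_j^\beta}$ and takes the maximum:

```python
import math

def sup_inv_a(alpha, beta, jmax=200):
    # evaluate 1/a^gamma at the jump points gamma_j^alpha, gamma_j^beta
    # via the closed forms above, and return the largest value
    vals = []
    for j in range(1, jmax + 1):
        va = (j + math.floor((2 * j - 1) * alpha / (2 * beta) + 0.5)) / ((2 * j - 1) * alpha)
        vb = (math.floor((2 * j - 1) * beta / (2 * alpha) + 0.5) + j) / ((2 * j - 1) * beta)
        vals += [va, vb]
    return max(vals)

sup1 = sup_inv_a(alpha=1.0, beta=2.7)   # alpha < beta/2: expected 1/alpha
sup2 = sup_inv_a(alpha=2.0, beta=2.7)   # alpha > beta/2: expected 2/beta
print(sup1, sup2)   # ~1.0 and ~0.7407
```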
The behavior of $1/a^\gamma$ when $\gamma\to0,+\infty$ gives a compatibility property for the perturbed minimizing movements. Indeed
$$\lim_{\gamma\to0}u^\gamma=u^0,\quad\lim_{\gamma\to+\infty}u^\gamma=u^\infty,$$
uniformly on compact subsets of $(0,+\infty)$.
\subsection{Different flows generated by perturbations with the same harmonic mean}
If we slightly modify the functionals, the situation becomes more complicated. Consider the sets $Z_\varepsilon:=3\varepsilon\mathbb{Z}\cup\varepsilon(3\mathbb{Z}+1)$ and the functionals
$$\phi_\varepsilon(t)=\begin{cases}-t&t\in Z_\varepsilon\\+\infty&\text{otherwise}\end{cases}$$
with initial data $u_0^{\tau,\varepsilon}\equiv0$, and perturbations satisfying assumption 5 of Remark \ref{epmm}. As in the previous subsections, when $\tau=o(\varepsilon)$ or $\varepsilon=o(\tau)$ we have fast convergences, so we study the case $\varepsilon=\gamma\tau$, with $\gamma\in(0,+\infty)$.
The $n$-th step of the discrete solution is the projection of $u_{n-1}^{\tau,\varepsilon}+\tau/a_n^\tau$ onto $Z_\varepsilon$. Note that the points of the domain are not equally spaced. We define the projection onto $Z_\varepsilon$ by
\begin{equation}\label{ste2}
P_{Z_\varepsilon}(t)=\begin{cases}
t\pm\varepsilon/2 &\text{if }t\in\varepsilon(3\mathbb{Z}+1/2) \\
t\pm\varepsilon &\text{if }t\in\varepsilon(3\mathbb{Z}+2) \\
\text{the point of }Z_\varepsilon\text{ nearest to }t &\text{otherwise}\end{cases}
\end{equation}
and $u_n^{\tau,\varepsilon}=P_{Z_\varepsilon}(u_{n-1}^{\tau,\varepsilon}+\tau/a_n^\tau)$. Note that by (\ref{ste2}), condition (\ref{nob}), which ensures the absence of bifurcations, is replaced by
$$\frac{1}{a_n^\tau}\not\in\gamma\left(\left(\mathbb{Z}+\frac{1}{2}\right)\cup\left(\mathbb{Z}+2\right)\right)$$
and there are two critical values of the perturbations which affect the motion:
\begin{itemize}
\item[(i)] if $a_n^\tau>2/\gamma$ \emph{total pinning}; i.e., $u_n^{\tau,\varepsilon}=u_{n+1}^{\tau,\varepsilon}$;
\item[(ii)] if $1/\gamma<a_n^\tau<2/\gamma$ \emph{partial pinning}; i.e., if $u_{n-1}^{\tau,\varepsilon}+\varepsilon\not\in Z_\varepsilon$ then $u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}$ otherwise $u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}+\varepsilon$
\item[(iii)] if $a_n^\tau<1/\gamma$ then $u_n^{\tau,\varepsilon}>u_{n-1}^{\tau,\varepsilon}.$
\end{itemize}
Consider $\gamma\in(1/2,1)$. We present two perturbations oscillating between the values 1 and 2 with different periods, having the same harmonic mean, which generate two different motions. According to the above conditions, when $a_n^\tau=2$ we have partial pinning, while when $a_n^\tau=1$ we have no pinning. Consider first
$$a_n^\tau=
\begin{cases}
1&n\text{ odd}\\
2&n\text{ even.}
\end{cases}$$
Except for an initial phase displacement, when $n$ is odd we have that $u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}+2\varepsilon$, and when $n$ is even we have $u_n^{\tau,\varepsilon}=u_{n-1}^{\tau,\varepsilon}+\varepsilon$.
Hence, for large $n$ we have
$$u_n^{\tau,\varepsilon}=o(1)+\left\lceil\frac{n}{2}\right\rceil2\varepsilon+\left\lfloor\frac{n}{2}\right\rfloor\varepsilon$$
and the discrete solution is
$$u^{\tau,\varepsilon}(t)=o(1)+\gamma\left(\left\lceil\frac{1}{2}\left\lceil\frac{t}{\tau}\right\rceil\right\rceil2\tau+\left\lfloor\frac{1}{2}\left\lceil\frac{t}{\tau}\right\rceil\right\rfloor\tau\right).$$
By taking the limit as $\tau\to0$, we have
$$u^\gamma(t)=\gamma\frac{3}{2}t.$$
Now, consider the perturbations
$$a_n^\tau=
\begin{cases}
1&n\in\big(4\mathbb{N}+1\big)\cup(4\mathbb{N}+2)\\
2&n\in\big(4\mathbb{N}+3\big)\cup4\mathbb{N}.
\end{cases}$$
After an initial phase displacement, we have the following periodic situation. Let $n\in4\mathbb{N}$; we always have that $u_n^{\tau,\varepsilon}=\varepsilon(3k+1)$ for some integer $k$, and
\begin{itemize}
\item[(i)] $a_{n+1}^\tau=1$, so there is no pinning and $u_{n+1}^{\tau,\varepsilon}=u_n^{\tau,\varepsilon}+2\varepsilon$;
\item[(ii)] $a_{n+2}^\tau=1$, so again there is no pinning and $u_{n+2}^{\tau,\varepsilon}=u_n^{\tau,\varepsilon}+3\varepsilon$;
\item[(iii)] $a_{n+3}^\tau=2$, so we have partial pinning; since $u_{n+2}^{\tau,\varepsilon}+\varepsilon\not\in Z_\varepsilon$, we get $u_{n+3}^{\tau,\varepsilon}=u_{n+2}^{\tau,\varepsilon}$;
\item[(iv)] $a_{n+4}^\tau=2$ and, as above, $u_{n+4}^{\tau,\varepsilon}=u_{n+2}^{\tau,\varepsilon}.$
\end{itemize}
Hence, for all large $n$ we have $u_{n+4}^{\tau,\varepsilon}=u_n^{\tau,\varepsilon}+3\varepsilon$; that is, $u_{4n}^{\tau,\varepsilon}=o(1)+3n\varepsilon$, and the discrete solution is
$$u^{\tau,\varepsilon}(t)=o(1)+\gamma\left(\left\lceil\frac{1}{4}\left\lceil\frac{t}{\tau}\right\rceil\right\rceil\tau+\left\lfloor\frac{1}{4}\left\lceil\frac{t}{\tau}\right\rceil\right\rfloor2\tau\right).$$
Taking the limit we obtain
$$u^\gamma(t)=\gamma\frac{3}{4}t.$$
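Both computations can be reproduced by a direct simulation of the scheme $u_n^{\tau,\varepsilon}=P_{Z_\varepsilon}(u_{n-1}^{\tau,\varepsilon}+\tau/a_n^\tau)$, taking the point of $Z_\varepsilon$ nearest to the target, with ties broken upward (the energy $-t$ prefers larger values). The sketch below (an illustration, with the illustrative value $\gamma=0.75$) recovers the two different slopes:

```python
import math

def project(s):
    # nearest point of 3Z U (3Z+1) to s (working in units of eps);
    # ties are broken upward, since the energy -t prefers larger values
    m = math.floor(s / 3)
    candidates = [3 * k + d for k in (m - 1, m, m + 1) for d in (0, 1)]
    return min(candidates, key=lambda c: (abs(c - s), -c))

def slope(a_of_n, gamma, steps=4000, burn=1000):
    # run u_n = P(u_{n-1} + tau/a_n) with eps = gamma*tau and tau = 1,
    # and measure the average slope after a burn-in transient
    tau = 1.0
    eps = gamma * tau
    s = 0.0
    history = []
    for n in range(1, steps + 1):
        s = project(s + tau / (a_of_n(n) * eps))
        history.append(s * eps)
    return (history[-1] - history[burn]) / (steps - 1 - burn)

gamma = 0.75
sA = slope(lambda n: 1 if n % 2 == 1 else 2, gamma)       # period 2: 1,2,1,2,...
sB = slope(lambda n: 1 if n % 4 in (1, 2) else 2, gamma)  # period 4: 1,1,2,2,...
print(sA, sB)   # ~ 3*gamma/2 = 1.125 and 3*gamma/4 = 0.5625
```

The two perturbations have the same harmonic mean, yet the measured velocities differ by a factor $2$.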
\section{Oscillating energies}
In this section we study the homogenization of perturbed gradient flows along wiggly energies, which has already been treated in its unperturbed formulation in \cite{ansbrazim} and in \cite{bra2} (Example 8.2). Consider a constant $T>0$ and let $W\in C^1(\mathbb{R})$ be a $1$-periodic even function with Lipschitz derivative, $\Vert W'\Vert_\infty=1$, and zero average. Now, consider the energies
$$\phi_\varepsilon(t)=\varepsilon\, W\bigg(\frac{t}{\varepsilon}\bigg)+Tt,$$
with initial data $u_0^{\tau,\varepsilon}=u_0^\varepsilon$, and assume that all the hypotheses of Remark \ref{epmm} are satisfied, so that we have perturbed gradient flows along $\phi_\varepsilon$.
In \cite{ansbrazim} it is proved that the critical regimes for this motion are those in which the ratio $\varepsilon/\tau$ is bounded with positive infimum, and the case $\varepsilon=\gamma\tau$ is studied. In such critical regimes there exists a pinning threshold, and minimizing movements are linear functions with a homogenized velocity. We will prove that analogous results hold for perturbed minimizing movements.
\subsection{Fast convergences}
We prove that in the regimes $\varepsilon(\tau)=o(\tau)$ and $\tau(\varepsilon)=o(\varepsilon)$ we have fast convergences.
Consider $\varepsilon(\tau)=o(\tau)$. Denote by $\phi(u)=Tu$ the $\Gamma$-limit of $\phi_\varepsilon$. In order to lighten the notation, for every $n$ and $\tau$ we define $F(u):=\phi(u)+a_n^\tau(u-u_{n-1}^{\tau,\varepsilon})^2/2\tau$ and $F_\varepsilon(u):=\phi_\varepsilon(u)+a_n^\tau(u-u_{n-1}^{\tau,\varepsilon})^2/2\tau$. By a direct computation we can write $F_\varepsilon(u)=F(u)+\varepsilon\, W(u/\varepsilon)$. Now, denote
$$v:=\argmin{u\in\mathbb{R}}F(u)=u_{n-1}^{\tau,\varepsilon}-\frac{\tau}{a_n^\tau}T.$$
Since $\big|\varepsilon W(t/\varepsilon)\big|\le\varepsilon/2$, we have that
\begin{equation}\label{cond}
F(u_n^{\tau,\varepsilon})<F(v)+\varepsilon.
\end{equation}
Indeed, otherwise $F_\varepsilon(u_n^{\tau,\varepsilon}) \ge F(u_n^{\tau,\varepsilon})-\varepsilon/2 \ge F(v)+\varepsilon/2 > F_\varepsilon(v)$, which contradicts the minimality of $u_n^{\tau,\varepsilon}$. By the minimality of $v$ we have that $F(u) = a_n^\tau(u-v)^2/2\tau+F(v)$, so that, by \eqref{cond}, we have that
\begin{equation}\label{dist}
|u_n^{\tau,\varepsilon}-v| \le \left(\frac{2\tau\varepsilon}{a_n^\tau}\right)^{1\over 2}.
\end{equation}
Now take $\{n\},\{m\}$ two families of integers (depending on $\tau$) such that $n>m$ and both $n\tau$ and $m\tau$ converge to $t\ge0$. By \eqref{dist}, applying a discrete H\"older inequality we have that
$$\left|u_n^{\tau,\varepsilon}-u_m^{\tau,\varepsilon}+T\sum_{i=m+1}^{n}\frac{\tau}{a_i^\tau}\right| \le \sum_{i=m+1}^{n}\left(2\frac{1}{a_i^\tau}\tau\varepsilon\right)^{1\over2} \le \bigg(2\tau\varepsilon(n-m)\sum_{i=m+1}^n\frac{1}{a_i^\tau}\bigg)^{\frac{1}{2}}$$
and dividing both sides by $(n-m)\tau$ we get
$$\bigg|\frac{u_n^{\tau,\varepsilon}-u_m^{\tau,\varepsilon}}{(n-m)\tau}+T\frac{1}{n-m}\sum_{i=m+1}^{n}\frac{1}{a_i^\tau}\bigg| \le \bigg(2\frac{\varepsilon}{\tau}\frac{1}{n-m}\sum_{i=m+1}^n\frac{1}{a_i^\tau}\bigg)^{\frac{1}{2}}.$$
Taking the limit as $\tau\to0$, by Lebesgue's Theorem (up to subsequences) we have that
$$u'(t)=-\frac{1}{a^*(t)}T,\text{ for almost every } t\ge0;$$
hence, $u$ is a $\{a^\tau\}$-perturbed minimizing movement with respect to $\phi$.
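The estimate above can also be observed numerically: each minimizing-movement step stays within $\sqrt{2\tau\varepsilon/a_n^\tau}$ of the unperturbed minimizer, so for $\varepsilon=o(\tau)$ the slope approaches $-T/a^*$. Below is a sketch (not from the paper), with the assumed concrete choice $W(s)=\cos(2\pi s)/(2\pi)$ and a crude grid minimization:

```python
import math

def W(s):
    # 1-periodic, even, zero average, |W'| <= 1 (an assumed concrete choice)
    return math.cos(2 * math.pi * s) / (2 * math.pi)

def step(prev, a, tau, eps, T):
    # grid-minimize eps*W(u/eps) + T*u + a*(u-prev)^2/(2*tau) near the
    # unperturbed minimizer v = prev - tau*T/a; by the estimate above the
    # true minimizer lies within sqrt(2*tau*eps/a) of v
    center = prev - tau * T / a
    radius = 5 * math.sqrt(2 * tau * eps / a) + 2 * eps
    h = eps / 20
    best_u, best_val = center, float("inf")
    k = int(radius / h)
    for i in range(-k, k + 1):
        u = center + i * h
        val = eps * W(u / eps) + T * u + a * (u - prev) ** 2 / (2 * tau)
        if val < best_val:
            best_u, best_val = u, val
    return best_u

tau, eps, T = 1e-2, 1e-4, 2.0
u, n_steps = 0.0, 100
for n in range(1, n_steps + 1):
    a = 1.0 if n % 2 == 1 else 2.0      # 2-periodic perturbations
    u = step(u, a, tau, eps, T)

slope = u / (n_steps * tau)             # expected close to -T*(1/1 + 1/2)/2 = -1.5
print(slope)
```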
In order to show that the other extreme regime is $\tau(\varepsilon)=o(\varepsilon)$, we first characterize $u^\infty$.
For the single functional $\phi_\varepsilon$, perturbed minimizing movements are the solutions to
\begin{equation}\label{ode}
(u^\varepsilon)'(t)=-\frac{1}{a^*(t)}\bigg(T+W'\bigg(\frac{u^\varepsilon(t)}{\varepsilon}\bigg)\bigg) \hbox{ for all }t\ge0.
\end{equation}
First, note that, for $T\le1$, the set of constant solutions $\big\{x\in\mathbb{R}\,\big|\,T+W'(x/\varepsilon)=0\big\}$ becomes increasingly dense as $\varepsilon\to0$, so that for every initial value $u_0^\varepsilon$ (converging to some $u_0$) the solution $u^\varepsilon$ remains in an interval of length $\varepsilon$; hence $u^\infty(t)\equiv u_0$ for any $T\le1$.
For $T>1$, instead, integrating (\ref{ode}) from $t_1$ to $t_2$ we obtain
$$\int_{u^\varepsilon(t_1)}^{u^\varepsilon(t_2)} \frac{1}{T+W'(s/\varepsilon)}ds=-\int_{t_1}^{t_2}\frac{1}{a^*(t)}dt.$$
Now, $1/(T+W'(s))$ is a summable $1$-periodic function, so the integrand on the left-hand side converges weakly in $L^1$ to the average $\int_0^1 1/(T+W'(s))ds$. We define the function
$$f(T)=\begin{cases}
0 & \text{if }T\le1\\ \displaystyle
\Big(\int_0^1 {1\over T+W'(s)}ds\Big)^{-1} & \text{if }T>1.
\end{cases}$$
Taking the limit for $\varepsilon\to0$, and $t_1\to0, t_2\to t$ we have
$$u^\infty(t)=u_0-f(T)\int_0^t\frac{1}{a^*(\xi)}d\xi \qquad\hbox{ for all } t\ge0.$$
If $\tau(\varepsilon)=o(\varepsilon)$ the same argument applied at the time-discrete level shows that $u^{\tau,\varepsilon}\to u^\infty$ as $\varepsilon\to0$.
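For a concrete $W$ the homogenized velocity $f(T)$ has a closed form. With the assumed choice $W(s)=\cos(2\pi s)/(2\pi)$ (which satisfies all the hypotheses) one has $W'(s)=-\sin(2\pi s)$ and $\int_0^1 ds/(T-\sin(2\pi s))=1/\sqrt{T^2-1}$ for $T>1$, so $f(T)=\sqrt{T^2-1}$; a short numerical check:

```python
import math

def f(T, n=50_000):
    # midpoint rule for the average of 1/(T + W'(s)); the integrand is smooth
    # and periodic, so the rule converges very fast
    total = sum(1.0 / (T - math.sin(2 * math.pi * (i + 0.5) / n)) for i in range(n))
    return n / total

T = 2.0
print(f(T), math.sqrt(T ** 2 - 1))   # both ~1.7320508
```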
\subsection{Critical regimes}
We now study the critical regimes. It is not restrictive to suppose that $\varepsilon=\gamma\tau$, with $\gamma>0$. Since
$$\argmin{u\in\mathbb{R}}\bigg\{\phi_\varepsilon(u)+a_n^\tau\frac{(u-u_{n-1}^{\tau,\varepsilon})^2}{2\tau}\bigg\}=\argmin{u\in\mathbb{R}}\bigg\{\frac{1}{\varepsilon}\phi_\varepsilon(u)+\frac{1}{\varepsilon}a_n^\tau\frac{(u-u_{n-1}^{\tau,\varepsilon})^2}{2\tau}\bigg\},$$
we reduce to the following rescaled problem by taking $y=u/\varepsilon$
\begin{equation}\label{rescaledp}
\begin{cases}
y_0^\tau=y_0\in\mathbb{R},\\ \displaystyle
y_n^\tau\in\argmin{y\in\mathbb{R}}\Big\{\phi_1(y)+{a_n^\tau\over 2}\gamma(y-y_{n-1}^\tau)^2\Big\}.
\end{cases}
\end{equation}
We obtain that $u_n^{\tau,\varepsilon}=\varepsilon y_n^\tau$, provided that $y_0=u_0^\varepsilon/\varepsilon$. The rescaled discrete solutions $y^\tau$ in general depend on $\tau$ because of the perturbations. If one takes periodic perturbations, as in the following, the dependence on $\tau$ disappears.
First, we recall a useful result, presented and proved in \cite{ansbrazim}, Proposition 3.1.
\begin{lemma}\label{mp}
Let $\psi_1,\psi_2:\mathbb{R}\to\mathbb{R}$ be functions and let $\beta>0$ be a constant. For any $x_1,\,x_2\in\mathbb{R}$, if
$$y_1\in\argmin{t\in\mathbb{R}}\{\psi_1(t)+\beta(t-x_1)^2\},\quad y_2\in\argmin{t\in\mathbb{R}}\{\psi_2(t)+\beta(t-x_2)^2\}$$
then $\psi_1(y_1)-\psi_1(y_2)+\psi_2(y_2)-\psi_2(y_1)\le2\beta(x_1-x_2)(y_1-y_2)$.
\end{lemma}
{
By mimicking the argument of the proof of the previous lemma, we obtain the following useful result.
\begin{lemma}\label{pinninglem}
Let $\psi:\mathbb{R}\to\mathbb{R}$ be a function and let $0<\alpha<\beta$ be constants. For any $y_0\in\mathbb{R}$, if
$$y^\alpha\in\argmin{t\in\mathbb{R}}\{\psi(t)+\alpha(t-y_0)^2\},\quad y^\beta\in\argmin{t\in\mathbb{R}}\{\psi(t)+\beta(t-y_0)^2\}$$
then $(y^\alpha-y^\beta)$ has the same sign as $(y^\alpha-y_0)$.
\end{lemma}
\begin{proof}
By the minimality of $y^\alpha$ and $y^\beta$ we can write
\begin{align*}
\psi(y^\alpha)+\alpha(y^\alpha-y_0)^2 &\le \psi(y^\beta)+\alpha(y^\beta-y_0)^2 \\
\psi(y^\beta)+\beta(y^\beta-y_0)^2 &\le \psi(y^\alpha)+\beta(y^\alpha-y_0)^2
\end{align*}
and summing them up we get
$$(\alpha-\beta)(y^\alpha-y_0)^2\le(\alpha-\beta)(y^\beta-y_0)^2$$
which implies, on the one hand, that $|y^\alpha-y_0|\ge|y^\beta-y_0|$ and, on the other hand, that
$$0\le(y^\alpha-y^\beta)((y^\beta-y_0)+(y^\alpha-y_0)).$$
Since $(y^\beta-y_0)+(y^\alpha-y_0)$ has the sign of $y^\alpha-y_0$, the claim follows.
\end{proof}
}
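Lemma \ref{pinninglem} is easy to test numerically on a sample wiggly energy; the function $\psi$, the point $y_0$ and the constants below are arbitrary illustrative choices:

```python
import math

def argmin_grid(f, lo, hi, h=5e-4):
    # crude global grid search, sufficient for this illustration
    best_t, best_v = lo, f(lo)
    n = int((hi - lo) / h)
    for i in range(1, n + 1):
        t = lo + i * h
        v = f(t)
        if v < best_v:
            best_t, best_v = t, v
    return best_t

psi = lambda t: math.cos(3 * t)          # an arbitrary wiggly energy
y0, al, be = 0.2, 0.5, 2.0               # 0 < alpha < beta

y_al = argmin_grid(lambda t: psi(t) + al * (t - y0) ** 2, y0 - 10, y0 + 10)
y_be = argmin_grid(lambda t: psi(t) + be * (t - y0) ** 2, y0 - 10, y0 + 10)

# per the lemma: y_alpha - y_beta has the same sign as y_alpha - y0,
# and y_alpha is at least as far from y0 as y_beta
print(y_al, y_be)
```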
The monotonicity of unperturbed discrete solutions follows from Lemma \ref{mp}, as shown in \cite{ansbrazim}. In the perturbed case we have no monotonicity of the discrete solutions. Nevertheless, an important property remains; namely, the monotone behavior with respect to initial data. This is ensured by the following result (see \cite{ansbrazim} Proposition 4.2), which still holds in the perturbed case with the same proof.
\begin{proposition}\label{monbeh}
Given two initial data $y_0,z_0\in\mathbb{R}$, and two positive constants $S\ge T>0$. Consider $\{y_n^\tau\}$ solutions of \eqref{rescaledp} starting from $y_0$ with $\phi_1(t)=W(t)+Tt$, and $\{z_n^\tau\}$ starting from $z_0$ with $\phi_1(t)=W(t)+St$. If $z_0\le y_0$ then $z_n^\tau\le y_n^\tau$ for all $n\ge1$ and $\tau>0$.
\end{proposition}
{
\begin{remark}\label{energylevel}
For $\{y_n^\tau\}$ solution of \eqref{rescaledp} we always have $y_n^\tau\le y_{n-1}^\tau+1$, since $$\phi_1(t)+{a_n^\tau\over 2}\gamma(t-y_{n-1}^\tau)^2<\phi_1(t+1)+{a_n^\tau\over 2}\gamma(t+1-y_{n-1}^\tau)^2\hbox{ if }y_{n-1}^\tau\le t\le y_{n-1}^\tau+1.
$$
Moreover, $y_{n+1}^\tau\le y_{n-1}^\tau+1$. Indeed, otherwise $y_{n-1}^\tau+1<y_{n+1}^\tau\le y_{n-1}^\tau+2$ by the observation above. As \begin{equation}\label{elineq1}
\phi_1(y_{n+1}^\tau-1)<\phi_1(y_{n+1}^\tau)<\phi_1(y_n^\tau)<\phi_1(t)
\end{equation}
for any $y_{n-1}^\tau<t<y_n^\tau$ and therefore
\begin{equation}\label{elineq2}
y_n^\tau<y_{n+1}^\tau-1\le y_{n-1}^\tau<y_{n+1}^\tau,
\end{equation}
\eqref{elineq1} and \eqref{elineq2} lead to a contradiction. Reasoning by induction, we get that $y_{n+k}^\tau\le y_{n-1}^\tau+1$ for any $k\ge0$, $n\ge1$. This means that, even if the motion is in general not monotone, once it reaches an energy well it either proceeds further, decreasing the energy, or it remains in that well.
\end{remark}
}
Consider the case of $N$-periodic perturbations; that is,
\begin{equation}\label{periodicpert}
a^\tau_n=a_i,\text{ if }n=kN+i\text{ for some }k\in\mathbb{N},
\end{equation}
for any $\tau$. The solutions of \eqref{rescaledp} with such perturbations do not depend on $\tau$.
\begin{proposition}\label{homvel}
Consider $\{a_n^\tau\}$ as in \eqref{periodicpert} and $\{y_n\}$ a solution of \eqref{rescaledp} with $\phi_1(t)=W(t)+Tt$. Then there exists the limit
$$f_\gamma(T,\{a_n\})=\lim_{n\to\infty} \frac{y_0-y_n}{n}\ge0$$
and it is independent of $y_0$. Moreover $T\mapsto f_\gamma\big(T,\{a_n\}\big)$ is an increasing map.
\end{proposition}
\begin{proof}
First, notice that, by the periodicity of $W$, for any integer $l$ the solution of \eqref{rescaledp} from $y_0+l$ is $y_n^l=y_n+l$. Indeed
\begin{align*}
y_1^l &\in \argmin{t\in\mathbb{R}}\bigg\{W(t)+Tt+a_1^\tau\frac{\gamma}{2}\big(t-(y_0+l)\big)^2\bigg\} \\
&=\argmin{t\in\mathbb{R}}\bigg\{W(t-l)+T(t-l)+Tl+a_1^\tau\frac{\gamma}{2}\big((t-l)-y_0\big)^2\bigg\} \\
&= \argmin{s\in\mathbb{R}}\bigg\{W(s)+Ts+a_1^\tau\frac{\gamma}{2}\big(s-y_0\big)^2\bigg\}+l,
\end{align*}
and the claim follows by induction. We first consider the case $y_0=0$. For any $k\ge1$ let $h=\lfloor y_{kN}\rfloor$, so that $0\le y_{kN}-h< 1$. If $z_0=y_{kN}-h$, then $z_i=y_{kN+i}-h$ by the $N$-periodicity of $\{a_n\}$. By Proposition \ref{monbeh} with $T=S$, since $0\le z_0<1$, we have that $y_i\le z_i\le y_i+1$, which reads as $y_i\le y_{kN+i}-h\le y_i+1$, and by the definition of $h$ we have
\begin{equation}\label{ineq}
y_i+y_{kN}-1\le y_{kN+i}\le y_i+y_{kN}+1.
\end{equation}
For all $m<n$, denote $m'=\lfloor m/N\rfloor N$ and write $n=km'+i$ with $0\le i< m'$. Since $m'$ is a multiple of $N$, the second inequality in (\ref{ineq}) applies and we get
$$\frac{y_n}{n}\le\frac{y_{km'}+y_i+1}{km'+i}\le \frac{ky_{m'}+y_i+k}{km'+i}\le\frac{y_{m'}}{m'}+\frac{1}{m'}+\frac{y_i}{km'}.$$
Taking the limit as $n\to\infty$ and keeping $m$ fixed we have
$$\limsup_{n\to\infty}\frac{y_n}{n}\le\frac{y_{m'}+1}{m'}.$$
Now, consider $0\le j<N$ such that $m=\lfloor m/N\rfloor N+j=m'+j$. By the first inequality in \eqref{ineq} we have $y_m\ge y_{m'}+y_j-1$, so that $(y_{m'}+1)/m'\le(y_m+2-y_j)/m'$. Taking the limit as $m\to\infty$ we have that
$$\limsup_{n\to\infty}\frac{y_n}{n}\le\liminf_{m\to\infty}\frac{y_m}{m}$$
which yields the existence of the limit, which is non-negative from Remark \ref{energylevel}.
Now, consider any $y_0\not=0$ and let $h=\lfloor y_0\rfloor$, so that $0\le y_0-h<1$. We have
$$y_n^0-1 \le y_n-y_0 \le y_n^0+1,$$
where $\{y_n^0\}$ is the discrete solution of \eqref{rescaledp} starting from $0$. This implies that $f_\gamma\big(T,\{a_n\}\big)$ is independent of the initial datum $y_0$. Moreover, let $\{z_n\}$ be the rescaled discrete solution for the linear energy component $S\ge T$. By Proposition \ref{monbeh} we have $-z_n\ge -y_n$, which implies the monotonicity of $f_\gamma$ in $T$.
\end{proof}
If $a_n\equiv c$ then, following the notation in \cite{ansbrazim}, we have $f_\gamma(T,\{a_n\})=f_{c\gamma}(T)$.
\begin{theorem}
Let $\{a_n\}$ be as in \eqref{periodicpert}; then for all $\gamma\in(0,+\infty)$, and for any initial data $u_0\in\mathbb{R}$, the $\{a^\tau\}$-perturbed minimizing movement along $\{\phi_{\gamma\tau}\}$ is
$$u^\gamma(t)=u_0-\gamma f_\gamma\big(T,\{a_n\}\big)t\quad\hbox{ for all }t\ge0.$$
\end{theorem}
\begin{proof}
For every $t>0$, the pointwise convergence of discrete solutions implies
$$\frac{u^\gamma(t)-u_0}{t} = \lim_k \frac{u^{\tau_k,\varepsilon_k}(t)-u_0^{\varepsilon_k}}{t}.$$
Consider $n=\lceil t/\tau_k\rceil$, which depends on $t$ and $k$. For every $t>0$, we have $n\to\infty$ as $k\to\infty$. We then obtain
$$\frac{u^\gamma(t)-u_0}{t} = \gamma \lim_k \frac{y_n-y_0}{n}=-\gamma f_\gamma\big(T,\{a_n\}\big)$$
by multiplying and dividing by $\varepsilon_k$.
\end{proof}
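The limit $f_\gamma(T,\{a_n\})$ of Proposition \ref{homvel} can be approximated by iterating the rescaled scheme \eqref{rescaledp}. The sketch below (an illustration with an assumed concrete $W$, illustrative $T$, $\gamma$ and $2$-periodic perturbations) also checks independence of the initial datum and monotonicity in $T$:

```python
import math

def W(s):
    return math.cos(2 * math.pi * s) / (2 * math.pi)   # assumed concrete choice

def step(prev, a, gamma, T, h=2e-3):
    # grid-minimize W(y) + T*y + a*gamma*(y-prev)^2/2; any stationary point
    # satisfies y - prev = -(T + W'(y))/(a*gamma), so the window below
    # contains the global minimizer
    lo = prev - (T + 1) / (a * gamma) - 1
    hi = prev + 1
    best_y, best_v = prev, float("inf")
    n = int((hi - lo) / h)
    for i in range(n + 1):
        y = lo + i * h
        v = W(y) + T * y + a * gamma * (y - prev) ** 2 / 2
        if v < best_v:
            best_y, best_v = y, v
    return best_y

def f_gamma(T, y0, gamma=1.0, steps=150):
    y = y0
    for n in range(1, steps + 1):
        a = 1.0 if n % 2 == 1 else 2.0     # 2-periodic perturbations
        y = step(y, a, gamma, T)
    return (y0 - y) / steps

f_a = f_gamma(T=2.0, y0=0.0)
f_b = f_gamma(T=2.0, y0=0.37)   # the limit should not depend on y0
f_c = f_gamma(T=3.0, y0=0.0)    # f is increasing in T
print(f_a, f_b, f_c)
```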
\subsection{The pinning threshold}
We extend the definition of the pinning threshold given in \cite{ansbrazim} (see Definition 5.2) to the perturbed case, still working with periodic perturbations as in \eqref{periodicpert}.
\begin{definition}
For any $\gamma>0$, the {\em pinning threshold} with perturbations $\{a_n\}$ at regime $\gamma$ is
$$T_\gamma(\{a_n\}):=\sup\{T>0\,|\,f_\gamma(T,\{a_n\})=0\}.$$
\end{definition}
Notice that, by the monotonicity of $T\mapsto f_\gamma(T,\{a_n\})$, we have $f_\gamma(T,\{a_n\})=0$ for every $T<T_\gamma(\{a_n\})$.
If $a_n\equiv c$, following the notation in \cite{ansbrazim}, we have $T_\gamma(\{a_n\})=T_{c\gamma}$.
{
\begin{proposition}
Let $\{a_n\}$ be as in \eqref{periodicpert} and set $\alpha=\min_{1\le i\le N}a_i$. Then
$$T_\gamma(\{a_n\})=T_{\alpha\gamma}.$$
\end{proposition}
\begin{proof}
Consider $T<T_{\alpha\gamma}$ and denote by $\{y_n^\alpha\}$ a solution of \eqref{rescaledp} starting from $y_0$ corresponding to $a_n\equiv\alpha$. We have that $(y_0-y_n^\alpha)/n\to0$ by the definition of the pinning threshold. As already noted, the unperturbed discrete solutions are monotone, and so is $\{y_n^\alpha\}$. Assume that $y_1^\alpha\le y_0$. By Lemma \ref{pinninglem}, $y_1^\alpha\le y_1$. Consider
$$z\in\argmin{t\in\mathbb{R}}\left\{\phi_1(t)+a_2\frac{\gamma}{2}(t-y_1^\alpha)^2\right\}.$$
On the one hand, from Proposition \ref{monbeh} we get that $z\le y_2$; on the other hand, by applying Lemma \ref{pinninglem}
again we obtain $y_2^\alpha\le z$, and by induction we get $y_n^\alpha\le y_n$. Hence, $0\le(y_0-y_n)/n\le(y_0-y_n^\alpha)/n\to 0$; that is, $f_\gamma(T,\{a_n\})=0$. Analogously, if $y_1^\alpha\ge y_0$. This yields that $T_{\alpha\gamma}\le T_\gamma(\{a_n\})$.
Now take $T>T_{\alpha\gamma}$. Since $f_{\alpha\gamma}(T)>0$, the sequence $\{y_n^\alpha\}$ is decreasing. It is not restrictive to suppose that $a_1=\alpha$. From Remark \ref{energylevel} we infer that $y_N\le y_1^\alpha+1$, which yields that $y_{N+1}\le y_2^\alpha+1$. Analogously, $y_{2N}\le y_2^\alpha+1$, and therefore $y_{2N+1}\le y_3^\alpha+1$. Hence, we have that $y_{kN+1}\le y_{k+1}^\alpha+1$, which implies that $(y_0-y_{k+1}^\alpha)/(k+1)\le(y_0-y_{kN+1}+1)/(k+1)$. By taking the limit as $k\to\infty$ we obtain $0<f_{\alpha\gamma}(T)\le Nf_\gamma(T,\{a_n\})$; in particular $T\ge T_\gamma(\{a_n\})$, and the claim follows.
\end{proof}
}
\section{Perturbed motion of discrete interfaces}
Since the interest in studying the perturbation method defined in Section \ref{pertMM} relies on the competition between an energy and a dissipation term, it can also be applied to the well-known scheme of geometric minimizing movements introduced in the pioneering work \cite{almtaywan}.
In this section, we analyze the effect of the perturbations on the motion of two-dimensional discrete interfaces studied in \cite{bragelnov}.
\subsection{Setting of the problem}
We briefly recall the setting of the problem; for any $\varepsilon>0$ we consider the lattice $\varepsilon\mathbb{Z}^2$. For any set of indices $\mathcal{I}\subset\varepsilon\mathbb{Z}^2$ we define
$$P_\varepsilon(\mathcal{I})=\varepsilon\#\{(i,j)\,|\,i\in\mathcal{I},\,j\notin\mathcal{I},\,|i-j|=\varepsilon\}.$$
We generalize these functionals to any set $E\subset\mathbb{R}^2$ which is a union of squares with side length $\varepsilon$ centered at points of the lattice $\varepsilon\mathbb{Z}^2$; that is,
\begin{equation}\label{latticeset}
E=\bigcup_{i\in\mathcal{I}}Q_\varepsilon(i),
\end{equation}
where $Q_\varepsilon(i)=[i_1-\varepsilon/2,i_1+\varepsilon/2]\times[i_2-\varepsilon/2,i_2+\varepsilon/2]$, and denote with ${\mathcal D}_\varepsilon$ the family of such sets. With a slight abuse of notation we write $P_\varepsilon(E)=P_\varepsilon(\mathcal{I})$ for any set as in \eqref{latticeset}.
If we consider the family of sets of finite perimeter $S=\{E\subset\mathbb{R}^2\,\vert\,\mathcal{H}^1(\partial^*E)<\infty\}$, where $\partial^*E$ denotes the reduced boundary of $E$, endowed with the Hausdorff distance, such functionals correspond to
$$P_\varepsilon(E)=\begin{cases}
\mathcal{H}^1(\partial E) & E\in {\mathcal D}_\varepsilon\\
+\infty&\text{otherwise,}
\end{cases}$$
defined in $S$, with domain ${\mathcal D}(P_\varepsilon)={\mathcal D}_\varepsilon$. As already noted in \cite{bragelnov} and proved for instance in \cite{alibracic},
the functionals $P_\varepsilon$ approximate (in the sense of $\Gamma$-convergence) the crystalline perimeter
$$P(E)=\int_{\partial^*E}\Vert \nu\Vert_1 d\mathcal{H}^1.$$
\begin{remark}
These functionals could also be seen as interface energies on spin systems. Given $u:\mathbb{Z}^2\to\{\pm1\}$ one defines the interaction energies
$$E_\varepsilon(u)=\frac{\varepsilon}{4}\sum_{\substack{i,j\in\mathbb{Z}^2\\ |i-j|=1}}|u_i-u_j|^2$$
and $E_\varepsilon(u)=P_\varepsilon(\{i\in\varepsilon\mathbb{Z}^2\,|\,u(i/\varepsilon)=1\})$.
\end{remark}
The dissipations $D_\varepsilon(F,E)$ are defined as follows (according to the notation in \cite{bragelnov}). For any $\mathcal{I}\subset\varepsilon\mathbb{Z}^2$ and $E$ as in \eqref{latticeset}, we define the {\em discrete distance from $\partial E$}
of $x\in Q_\varepsilon(i)$ for some $i\in\varepsilon\mathbb{Z}^2$ as
$$
d_\infty^\varepsilon(x,\partial E)=\begin{cases}{\varepsilon\over2}+
\text{dist}_\infty(i,\mathcal{I})&i\notin\mathcal{I} \\{\varepsilon\over2}+
\text{dist}_\infty(i,\varepsilon\mathbb{Z}^2\backslash\mathcal{I})&i\in\mathcal{I}
\end{cases}
$$
(the term $\varepsilon/2$ comes from the fact that the minimal distance of a point in $\varepsilon\mathbb{Z}^2$ from $\partial E$ is $\varepsilon/2$, so that
in this way $d_\infty^\varepsilon(x,\partial E)\in\varepsilon\mathbb{Z}$).
Then, for any $E,F\in {\mathcal D}_\varepsilon$
$$D_\varepsilon(F,E)=\int_{E\triangle F}d_\infty^\varepsilon(x,\partial E)dx,$$
where $E\triangle F$ denotes the symmetric difference between $E$ and $F$.
Finally, we can set the following minimization scheme
\begin{equation}\label{discretemot}
\begin{cases}
E_0^{\tau,\varepsilon}\in S \\ \displaystyle
E_n^{\tau,\varepsilon}\in\argmin{E\in S}\Big\{P_\varepsilon(E)+{a_n^\tau\over\tau} D_\varepsilon(E,E_{n-1}^{\tau,\varepsilon})\Big\},
\end{cases}
\end{equation}
and again denote as $E^{\tau,\varepsilon}(t)=E^{\tau,\varepsilon}_{\lceil t/\tau\rceil}$ its discrete solutions. Any limit in the Hausdorff metric of $\{E^{\tau,\varepsilon}\}$ is a \emph{$\{a^\tau\}$-perturbed geometric minimizing movement at regime $\tau$-$\varepsilon$}.
In the following, we study the perturbed scheme \eqref{discretemot} in the special case in which $E_0^{\tau,\varepsilon}=E_0^\varepsilon$ are coordinate rectangles converging in the Hausdorff metric as $\varepsilon\to0$, because this case already provides an interesting example of the effects of the perturbation. We also consider $\{a^\tau\}$ satisfying assumption 5 of Remark \ref{epmm}.
We focus our analysis on the regimes $\varepsilon=\gamma\tau$, which have been proved to be the critical ones in the unperturbed case.
The fast-converging cases are obtained as limit cases of the critical ones, cf.~Remark \ref{relica}.
\subsection{Motions of coordinate rectangles}
In \cite{bragelnov} Theorem 1 it has been proved that motions $\{E^{\tau,\varepsilon}\}$ starting from a coordinate rectangle $E^\varepsilon$ keep the rectangular shape. We confine the analysis to the evolution of the lengths $L_{1,n}^{\tau,\varepsilon}$, $L_{2,n}^{\tau,\varepsilon}$ of the sides of $E^{\tau,\varepsilon}_n$.
In the proof of Theorem 1 in \cite{bragelnov} it is shown that, if $E$ is a coordinate rectangle in ${\mathcal D}_\varepsilon$ with sides of length $L_1, L_2$, the minimizer
$$E'\in\argmin{F\in {\mathcal D}_\varepsilon}\{P_\varepsilon(F)+\eta D_\varepsilon(F,E)/\varepsilon\}$$
is a coordinate rectangle centered in the center of $E$ and with sides of length $L_1', L_2'$ which satisfies
$$\frac{L'_1-L_1}{\varepsilon}=-2\left\lfloor\frac{2}{\eta L_2}\right\rfloor+O(\varepsilon^2),\quad \frac{L'_2-L_2}{\varepsilon}=-2\left\lfloor\frac{2}{\eta L_1}\right\rfloor+O(\varepsilon^2),$$
except when $2/\eta L_1$ or $2/\eta L_2$ is in a neighborhood of an integer of amplitude which is infinitesimal with respect to $\varepsilon$. {
In order to simplify the exposition, we omit these cases, as their treatment does not
vary from that of the corresponding cases in \cite{bragelnov}.} By taking $\eta=\gamma a_n^\tau$, and $E=E^{\tau,\varepsilon}_{n-1}$, for any $n\ge 1$ we have
\begin{equation}\label{increments}
\begin{aligned}
\frac{L_{1,n}^{\tau,\varepsilon}-L_{1,n-1}^{\tau,\varepsilon}}{\tau} &=-2\gamma\left\lfloor\frac{2}{\gamma a_n^\tau L_{2,n-1}^{\tau,\varepsilon}}\right\rfloor+O(\varepsilon^2) \\
\frac{L_{2,n}^{\tau,\varepsilon}-L_{2,n-1}^{\tau,\varepsilon}}{\tau} &=-2\gamma\left\lfloor\frac{2}{\gamma a_n^\tau L_{1,n-1}^{\tau,\varepsilon}}\right\rfloor+O(\varepsilon^2).
\end{aligned}
\end{equation}
Note that, as in Remark \ref{epmm}, equation \eqref{regularity} reads
$$D_\varepsilon(E^{\tau,\varepsilon}(t),E^{\tau,\varepsilon}(s))\le c\,\theta_T(t+\tau,s)$$
for any $t,s\in[0,T]$. This means that (up to subsequences) $L_1^{\tau,\varepsilon}, L_2^{\tau,\varepsilon}$ converge pointwise to some absolutely continuous functions $L_1, L_2$, as $\tau\to0$. This implies the pointwise convergence, in the Hausdorff metric, of the discrete solutions to the coordinate rectangle $E(t)$ with sides of length $L_1(t), L_2(t)$; i.e., $d_{\mathcal{H}}(E^{\tau_k,\varepsilon_k}(t),E(t))\to 0$ for some infinitesimal sequences $\{\tau_k\}, \{\varepsilon_k\}$. In particular, $E(t)$ satisfies the following system of coupled ordinary differential equations
\begin{equation}\label{syst}
\begin{cases}
L'_{1}(t)=-2\gamma v_{1}(t) \\
L'_{2}(t)=-2\gamma v_{2}(t),
\end{cases}
\end{equation}
where $v_1$ and $v_2$ are, respectively, the weak limit in $L^1(0,T)$ of $\lfloor 2/(\gamma a^\tau(t)L_2^{\tau,\varepsilon}(t))\rfloor$ and $\lfloor 2/(\gamma a^\tau(t)L_1^{\tau,\varepsilon}(t))\rfloor$.
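As an illustration (our addition, not part of the original analysis), the discrete scheme \eqref{increments} can be iterated numerically for a constant perturbation $a_n\equiv a$; the function name \texttt{evolve} and all parameter values below are arbitrary choices, and the run exhibits the pinning threshold $2/(\gamma a)$ discussed in the next subsection.

```python
# Illustrative sketch: iterate the discrete side-length updates of eq. (increments)
# for a constant perturbation a_n == a,
#   L_{1,n} = L_{1,n-1} - 2*gamma*tau*floor(2/(gamma*a*L_{2,n-1})), and symmetrically.
import math

def evolve(L1, L2, a, gamma, tau, steps):
    """Run the coupled discrete evolution; stop if a side becomes nonpositive."""
    for _ in range(steps):
        d1 = math.floor(2.0 / (gamma * a * L2))
        d2 = math.floor(2.0 / (gamma * a * L1))
        L1, L2 = L1 - 2 * gamma * tau * d1, L2 - 2 * gamma * tau * d2
        if L1 <= 0 or L2 <= 0:
            return 0.0, 0.0  # extinction
    return L1, L2

gamma, tau, a = 0.1, 0.01, 1.0
# Both sides above the pinning threshold 2/(gamma*a) = 20: nothing moves.
print(evolve(25.0, 30.0, a, gamma, tau, 1000))  # (25.0, 30.0)
# Both sides below the threshold: the rectangle shrinks to extinction.
print(evolve(5.0, 4.0, a, gamma, tau, 1000))
```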
\subsection{The case of periodic perturbations} We can find the explicit form of $v_1$ and $v_2$, for instance, in the case of $N$-periodic perturbations $\{a_n\}$ as in \eqref{periodicpert}. For any $1\le i\le N$ consider the functions
$$\chi^\tau_i(s):=\begin{cases}
1 & s\in((kN+i-1)\tau,(kN+i)\tau],\, k\in\mathbb{N} \\
0 & \text{otherwise,}
\end{cases}$$
which weakly converge to $1/N$ in $L^1_{\rm loc}$. We can rewrite the first of \eqref{increments} as
\begin{align*}
L^{\tau,\varepsilon}_{1,n} &= L^{\tau,\varepsilon}_{1,n-1}-2\gamma\tau \left\lfloor\frac{2}{\gamma a_n}\frac{1}{L^{\tau,\varepsilon}_{2,n-1}}\right\rfloor = L^{\tau,\varepsilon}_{1,0}-2\gamma\tau\sum_{k=1}^n \left\lfloor\frac{2}{\gamma a_k}\frac{1}{L^{\tau,\varepsilon}_{2,k-1}}\right\rfloor \\
&= L^{\tau,\varepsilon}_{1,0} - 2\gamma\tau \sum_{i=1}^N \sum_{k=0}^{n/N-1} \left\lfloor\frac{2}{\gamma a_i}\frac{1}{L^{\tau,\varepsilon}_{2,kN+i-1}}\right\rfloor \\
&= L^{\tau,\varepsilon}_{1,0}-2\gamma \sum_{i=1}^N \int_0^{n\tau} \left\lfloor\frac{2}{\gamma a_i}\frac{1}{L^{\tau,\varepsilon}_{2}(s)}\right\rfloor\chi^\tau_i(s)ds\,,
\end{align*}
and, taking the limit as $\tau\to0$, by the weak convergence of $\chi^\tau_i$ and the local uniform convergence of $L^{\tau,\varepsilon}_{2}$, we obtain that $v_1=\frac1N\sum_{i=1}^N \lfloor 2/(\gamma a_i L_2)\rfloor$. We argue analogously for $v_2$; hence, \eqref{syst} reads as
\begin{equation}\label{systper}
\begin{cases}\displaystyle
L'_{1}(t)=-2\gamma{1\over N}\sum_{i=1}^N \Biggl\lfloor {2\over\gamma a_i L_2(t)}\Biggr\rfloor
\\ \\ \displaystyle
L'_{2}(t)=-2\gamma{1\over N}\sum_{i=1}^N \Biggl\lfloor {2\over\gamma a_i L_1(t)}\Biggr\rfloor.
\end{cases}
\end{equation}
Note that in the perturbed case the {\em pinning condition} changes; indeed, setting $\alpha=\min_{1\le i\le N}a_i$, if
$$L_{1,0}>\frac{2}{\gamma\alpha},$$
then $L'_2(t)=0$ for $t>0$ small enough; the same holds for $L'_1$ if $L_{2,0}>2/(\gamma\alpha)$. As in Theorem 2 of \cite{bragelnov}, the motion is characterized by three cases:
\begin{itemize}
\item[(i)] \emph{total pinning}; i.e., $E(t)\equiv E_0$, if $L_{1,0}, L_{2,0}>2/(\gamma\alpha)$;
\item[(ii)] if $L_{1,0}\le2/(\gamma\alpha)$ and $L_{2,0}<2/(\gamma\alpha)$, then $E(t)$ shrinks up to an extinction time;
\item[(iii)] \emph{partial pinning}, if $L_{1,0}>2/(\gamma\alpha)$ and $L_{2,0}<2/(\gamma\alpha)$ with $2/(\gamma a_i L_{2,0})\not\in\mathbb{N}$ for any $i$: only one pair of sides moves, up to a time $t_0$ such that $L_{1}(t_0)\le2/(\gamma\alpha)$; afterwards $E(t)$ follows the motion described in the previous case.
\end{itemize}
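For a quick numerical sanity check of the modified pinning condition (our illustration, with arbitrary parameter values; the helper \texttt{v} is not from the paper), one can evaluate the averaged velocity appearing in \eqref{systper}:

```python
import math

def v(L, a_vals, gamma):
    """Averaged velocity factor (1/N) * sum_i floor(2/(gamma*a_i*L)) from (systper)."""
    return sum(math.floor(2.0 / (gamma * ai * L)) for ai in a_vals) / len(a_vals)

gamma = 0.1
a_vals = [1.0, 2.0, 4.0]                 # N = 3 periodic values, alpha = min = 1.0
threshold = 2.0 / (gamma * min(a_vals))  # pinning threshold 2/(gamma*alpha) = 20
print(v(threshold + 1.0, a_vals, gamma)) # 0.0: the opposite sides are pinned
print(v(3.0, a_vals, gamma))             # positive: the opposite sides move
```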
\begin{remark}[Limit cases]\label{relica}
As noted above, in the unperturbed case we have fast convergence both when $\tau(\varepsilon)=o(\varepsilon)$ and when $\varepsilon(\tau)=o(\tau)$. In the general case of perturbed motion of discrete interfaces the situation is slightly more involved and must be treated separately. Nevertheless, in the periodic case we can make some interesting observations.
When $\tau(\varepsilon)=o(\varepsilon)$ we have total pinning of the motion; indeed, from \eqref{increments},
$$L_{1,1}^{\tau(\varepsilon),\varepsilon}=L_{1,0}^\varepsilon-2\varepsilon\left\lfloor\frac{2\tau}{\varepsilon a_1 L_{2,0}^\varepsilon}\right\rfloor=L_{1,0}^\varepsilon$$
for $\varepsilon$ sufficiently small, and the same holds for $L_2^\varepsilon$ and at every subsequent step.
The case $\varepsilon(\tau)=o(\tau)$, also for non-periodic perturbations, reduces to the study of the $\Gamma$-limit of the functionals $P_\varepsilon/a^{\tau(\varepsilon)}$. We do not prove such a limit result in this work, but one can observe that, taking the limit in \eqref{systper} as $\gamma\to0$, we get
$$\begin{cases}\displaystyle
L_1'(t)=-{2\over a^*L_2(t)} \\ \displaystyle
L_2'(t)=-{2\over a^*L_1(t)},
\end{cases}$$
where $a^*$ is the harmonic mean of $\{a_n\}$. This can be regarded as the \emph{flat flow} perturbed by $a^*$, with respect to the crystalline perimeter $P$ defined above (see \cite{almtay} for the study of the flat flow in the unperturbed case;
see also \cite{bra2}, Section 9.4).
\end{remark}
\section*{Acknowledgments} The authors acknowledge the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.
1902.10138
\section{Introduction}
The simplest form of the Poincar\'e inequality in an open set $B\subset \rn n$ can be stated as follows: if $1\leq p<n$,
there exists $C(B,p)>0$ such that for any
(say) smooth function $u$ on $\rn n$ there exists a constant $c_u$ such that
$$
\|u-c_u\|_{L^q(B)} \leq C(B,p)\,\|\nabla u\|_{L^p(B)}
$$
provided $\frac{1}{p}-\frac{1}{q}=\frac{1}{n}$. The Sobolev inequality is very similar, but there we
deal with compactly supported functions, so that the constant $c_u$ can be dropped. It is well
known (by the Federer \& Fleming theorem, \cite{federer_fleming}) that for $p=1$ the Sobolev inequality is equivalent to
the classical isoperimetric inequality (whereas the Poincar\'e inequality corresponds to the classical {\sl relative}
isoperimetric inequality).
Let us restrict for a while to the case $B=\rn n$, to investigate generalizations of these inequalities to differential forms.
It is easy to see that the Sobolev and Poincar\'e inequalities are equivalent to the following problem:
we ask whether, given a closed differential $1$-form $\omega$ in $L^p(\mathbb R^n)$, there exists a $0$-form $\phi$ in $L^q(\mathbb R^n)$ with
$\frac{1}{p}-\frac{1}{q}=\frac{1}{n}$ such that
\begin{equation}\label{intro 1}
d\phi=\omega\qquad\mbox{and}\qquad \|\phi\|_q\leq C(n,p,h)\,\|\omega\|_p.
\end{equation}
Clearly, this problem can be formulated in general for $h$-forms $\omega$ in $L^p(\mathbb R^n)$, and we are led to
look for $(h-1)$-forms $\phi$ in $L^q(\mathbb R^n)$ such that \eqref{intro 1} holds. This is the problem we have
in mind when we speak about the Poincar\'e inequality for differential forms.
When we speak about the Sobolev inequality, we have in mind compactly supported differential forms.
The case $p>1$ has been fully understood on bounded convex sets by Iwaniec \& Lutoborski (\cite{IL}). On the other hand,
in the {full} space $\rn n$ an easy proof consists in putting $\phi=d^*\Delta^{-1}\omega$. Here, $\Delta^{-1}$ denotes the inverse of the Hodge Laplacian $\Delta=d^*d+dd^*$
and $d^*$ is the formal $L^2$-adjoint of $d$. The operator $d^*\Delta^{-1}$ is given by convolution with a homogeneous kernel of type $1$ in the terminology of \cite{folland_lectures} and \cite{folland_stein}, hence it is bounded from $L^p$ to $L^q$ if $p>1$.
Unfortunately, this argument does not suffice for $p=1$ since, by \cite{folland_stein}, Theorem 6.10, $d^*\Delta^{-1}$ maps $L^1$ only into the weak Marcinkiewicz space $L^{n/(n-1), \infty}$. Upgrading from $L^{n/(n-1), \infty}$ to $L^{n/(n-1)}$ is possible for functions
(see \cite{LN}, \cite{FGaW}, \cite{FLW_grenoble}), but the trick does not seem to generalize to differential forms.
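To make the obstruction concrete, here is a standard heuristic (our addition, not taken from the cited works): applied to an approximate point mass, $d^*\Delta^{-1}$ produces the profile $|x|^{1-n}$, which lies in $L^{n/(n-1),\infty}$ but not in $L^{n/(n-1)}$.

```latex
% Heuristic: for u close to a Dirac mass at the origin, one has
\[
|d^*\Delta^{-1} u|(x) \;\sim\; c_n\, |x|^{1-n},
\]
% and the function |x|^{1-n} satisfies
\[
\bigl|\{\,|x|^{1-n} > t\,\}\bigr| = c_n'\, t^{-\frac{n}{n-1}},
\qquad
\int_{\mathbb R^n} \bigl(|x|^{1-n}\bigr)^{\frac{n}{n-1}}\,dx
= \int_{\mathbb R^n} \frac{dx}{|x|^{n}} = +\infty,
\]
% so it belongs to L^{n/(n-1),infinity} but not to L^{n/(n-1)}.
```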
Since the case $p=1$ is the most relevant from a geometric point of view, we focus on that case. First of all,
we notice that the Poincar\'e inequality with $p=1$ fails in top degree unless a global integral condition is satisfied.
Indeed, for $h=n$, forms belonging to $L^1$ and with nonvanishing integral cannot be differentials of
$L^{n/(n-1)}$ forms, see \cite{Tripaldi}. In arbitrary degree, a similar integral obstruction takes the form
$\int\omega\wedge\beta=0$ for every constant-coefficient form $\beta$ of complementary degree.
Therefore we introduce the subspace $L_0^1$ of $L^1$-differential forms satisfying these conditions.
However, in degree $n$, assuming that the integral constraint is satisfied does not suffice, as we shall see in Section \ref{parabolicity}. On the other hand, it follows for instance from
\cite{BB2007} that the Poincar\'e inequality holds in degree $n-1$.
We refer the reader to \cite{BFP3} for a discussion, in particular in connection with van Schaftingen's \cite{vS2014} and Lanzani \& Stein's \cite{LS} results.
We can now state our main results.
\begin{theorem}[Global Poincar\'e and Sobolev inequalities]
Let $h=1,\ldots,n-1$ and set $q=n/(n-1)$. For every closed $h$-form $\alpha\in L_0^1(\mathbb R^n)$, there exists an $(h-1)$-form $\phi\in L^{q}(\mathbb R^n)$, such that
$$
d\phi=\alpha\qquad\mbox{and}\qquad \|\phi\|_{q}\leq C\,\|\alpha\|_{1}.
$$
Furthermore, if $\alpha$ is compactly supported, so is $\phi$.
\end{theorem}
We also prove a local version of this inequality.
\begin{corollary}
For $h=1,\ldots,n-1$, let $q=n/(n-1)$. Let $B\subset \rn n$ be a {bounded open} convex set, and let $B'$ be an open
set, $B\Subset B'$.
Then there exists $C=C(n,B,B')$ with the following property:
\begin{enumerate}
\item Interior Poincar\'e inequality.
For every closed $h$-form $\alpha$ in $L^1(B')$, there exists an $(h-1)$-form $\phi\in L^q(B)$, such that
$$
d\phi=\alpha_{|B}, \qquad\mbox{and}\qquad \|\phi\|_{L^q(B)}\leq C\,\|\alpha\|_{L^1(B')}.
$$
\item Sobolev inequality. For every closed $h$-form $\alpha\in L^1$ with support in $B$, there exists an $(h-1)$-form $\phi\in L^q$, with support in $B'$, such that
$$
d\phi=\alpha \qquad\mbox{and}\qquad \|\phi\|_{L^q(B')}\leq C\,\|\alpha\|_{L^1(B)}.
$$
\end{enumerate}
We shall refer to the above inequalities as interior Poincar\'e and interior Sobolev inequality, respectively. The
word ``interior'' is meant to stress the loss of domain from $B'$ to $B$.
\end{corollary}
Remarkably, most of the techniques developed here can be adapted, in combination with other
{\sl ad hoc} arguments, to deal with Poincar\'e and Sobolev inequalities in the Rumin complex
of Heisenberg groups (see \cite{BFP3}).
\section{Kernels}
\medskip
Throughout the present note our setting will be the Euclidean space
$\rn n$ with $n>2$.
\bigskip
If $f$ is a real function defined in $\mathbb R^n$, we denote
by ${\vphantom i}^{\mathrm v}\!\, f$ the function defined by ${\vphantom i}^{\mathrm v}\!\, f(p):=
f(-p)$, and, if $T\in\mathcal D'(\mathbb R^n)$, then ${\vphantom i}^{\mathrm v}\!\, T$
is the distribution defined by $\Scal{{\vphantom i}^{\mathrm v}\!\, T}{\phi}
:=\Scal{T}{{\vphantom i}^{\mathrm v}\!\,\phi}$ for any test function $\phi$.
We remind also that the convolution $f\ast g$ is well defined
when $f,g\in\mathcal D'(\rn n)$, provided at least one of them
has compact support. In this case the following identities
hold
\begin{equation}\label{convolutions var}
\Scal{f\ast g}{\phi} = \Scal{g}{{\vphantom i}^{\mathrm v}\!\, f\ast\phi}
\quad
\mbox{and}
\quad
\Scal{f\ast g}{\phi} = \Scal{f}{\phi\ast {\vphantom i}^{\mathrm v}\!\, g}
\end{equation}
for any test function $\phi$.
Following \cite{folland_lectures}, Definition 5.3, we recall now the notion of {\it kernel of type $\mu$}
and some properties stated below in Proposition \ref{kernel}.
\begin{definition}\label{type} A kernel of type $\mu$ is a
distribution $K\in \mathcal S'(\mathbb R^n)$, homogeneous of degree $\mu-n$
that is smooth outside of the origin.
The convolution operator with a kernel of type $\mu$
$$
f\to f\ast K
$$
is still called an operator of type $\mu$.
\end{definition}
\begin{proposition}\label{kernel}
Let $K\in\mathcal S'(\rn n)$ be a kernel of type $\mu$ and let $D_j$ denote the $j$-th partial derivative in $\rn n$.
\begin{itemize}
\item[i)] ${\vphantom i}^{\mathrm v}\!\, K$ is again a kernel of type $\mu$;
\item[ii)] $D_jK$ and $KD_j $ are associated with kernels of type $\mu-1$ for
$j=1,\dots,n$;
\item[iii)] If $\mu>0$, then $K\in L^1_{\mathrm{loc}}(\mathbb R^n)$.
\end{itemize}
\end{proposition}
\begin{lemma}
Let $g$ be a kernel of type $\mu>0$,
and let $\psi\in \mathcal D(\rn n)$ be a test function.
Then $\psi \ast g $ is smooth on $\rn n$.
If, in addition, $R=R(D)$ is a homogeneous
polynomial of degree $\ell\ge 0$ in $D:=(D_1,\dots,D_n)$,
we have
$$
R( \psi\ast g)(p)= O(|p|^{\mu-n-\ell})\quad\mbox{as }|p|\to\infty.
$$
\end{lemma}
In particular, if $\psi\in \mathcal D(\rn n)$ and $K$ is a kernel of type
$\mu <n$, then both $\psi \ast K$
and all its derivatives belong to $L^\infty(\rn n)$.
{
\begin{corollary}\label{closed ex bis}
If $K$ is a kernel of type $\mu\in (0,n)$,
$u\in L^1(\mathbb R^n)$ and
$\psi\in \mathcal D(\mathbb R^n)$, then
\begin{equation}
\Scal{u\ast K}{ \psi} = \Scal{ u}{\psi\ast{\vphantom i}^{\mathrm v}\!\, K}.
\end{equation}
\end{corollary}
In this equation, the left hand side is the action of a distribution on a test function (see formula \eqref{revised 1}), while the right hand side is the inner product of an $L^1$ function with an $L^\infty$ function.
}
{ \begin{remark}
The conclusion of Corollary \ref{closed ex bis} still holds
if we assume $K\in L^1_{\mathrm{loc}}(\rn n)$, provided $u\in L^1(\mathbb R^n)$ is compactly
supported.
\end{remark}
}
\begin{lemma}\label{anuli}
Let $K$ be a kernel of type $\alpha\in (0,n)$, then for any $f\in L^1(\rn n)$ such that
$$
\int_{\rn n} f(y)\, dy = 0,
$$
we have:
$$
R^{-\alpha} \int_{B(0,2R)\setminus B(0,R)} |K\ast f| \, dx \longrightarrow 0\qquad\mbox{as $R\to\infty$.}
$$
\end{lemma}
\begin{proof}
If $R>1$ we have:
\begin{equation*}\begin{split}
R^{-\alpha} & \int_{R< | x | <2R} |K\ast f| \, dx
=
R^{-\alpha}\int_{R< | x | <2R} \,dx \Big| \int \, K(x-y) f(y) \, dy\Big|
\\&
= R^{-\alpha}\int_{R< | x | <2R} \,dx \Big| \int \, \big[ K(x-y)-K(x)\big] f(y) \, dy\Big|
\\&
\le
R^{-\alpha} \int \, | f(y)| \Big( \int_{R< | x | <2R} \Big| K(x-y)-K(x) \Big| \,dx \Big) \, dy
\\&
=R^{-\alpha} \int_{ | y | <\frac12 R} \, | f(y)| \big( \cdots \big) \, dy
+
R^{-\alpha} \int_{4R> | y | >\frac12 R} \, | f(y)| \big( \cdots \big) \, dy
\\&
+
R^{-\alpha} \int_{ | y | > 4 R} \, | f(y)| \big( \cdots \big) \, dy
\\&
=: R^{-\alpha} I_{1}(R) + R^{-\alpha} I_{2}(R) + R^{-\alpha} I_{3}(R).
\end{split}\end{equation*}
Consider first the third term above. By homogeneity we have
$$
I_{3}(R) \le C_K\, \int_{ | y | >4 R} \, | f(y)| \big( \int_{R< | x | <2R} ( | x-y | ^{-n+\alpha} + | x | ^{-n+\alpha} ) \,dx \big) \, dy
$$
Notice now that, if $ | y | >4R$ and $ R< | x | <2R$, then $ | x-y | \ge | y | - | x |
\ge 4 R -2R = 2R \ge | x | $. Therefore
\begin{equation*}\begin{split}
| x-y | ^{-n+\alpha} & + | x | ^{-n+\alpha} \le 2\, | x | ^{-n+\alpha},
\end{split}\end{equation*}
and then
$$
\int_{R< | x | <2R} ( | x-y | ^{-n+\alpha} + | x | ^{-n+\alpha} ) \,dx \le C_\alpha \, R^\alpha.
$$
Thus
$$
R^{-\alpha} I_{3}(R) \le C_{K,\alpha}\, \int_{ | y | >4R} \, | f(y)| \, dy \longrightarrow 0
$$
as $R\to\infty$.
Consider now the second term. Again we have
$$
I_{2}(R) \le C_K\, \int_{\frac12 R< | y | <4 R} \, | f(y)| \big( \int_{R< | x | <2R} ( | x-y | ^{-n+\alpha} + | x | ^{-n+\alpha} ) \,dx \big) \, dy.
$$
Obviously, as above,
$$
\int_{R< | x | <2R} | x | ^{-n+\alpha} \,dx \le C R^\alpha.
$$
Notice now that, if $\dfrac12 R < | y | <4R$ and $ R< | x | <2R$, then $ | x-y | \le | x | + | y |
\le 6R$. Hence
$$
\int_{\frac12 R< | y | <4 R} \, | f(y)| \big( \int_{ | x-y | <6R} | x-y | ^{-n+\alpha} \,dx \big) \, dy \le CR^\alpha.
$$
Therefore
$$
R^{-\alpha}I_{2}(R) \le C_K\, \int_{\frac12 R< | y | <4 R} \, | f(y)|\, dy \longrightarrow 0
$$
as $R\to\infty$.
Finally, if $ | y | < \frac{R}2$ and $ R< | x | <2R$ we have $ | y | < \frac12 | x | $ so that, by \cite{folland_stein}
Proposition 1.7 and Corollary 1.16,
\begin{equation*}\begin{split}
R^{-\alpha} I_{1}(R) & \le C_K\, \int_{ | y | <\frac12 R} \, | f(y)| \big( \int_{R< | x | <2R} \frac{ | y | }{ | x | ^{n-\alpha+1}} \,dx \big) \, dy
\\&
= C_K\, \int_{\rn n} \, | f(y)| | y | \chi_{[0,\frac12 R]}( | y | ) \big(R^{-\alpha} \int_{R< | x | <2R} \frac{1}{ | x | ^{n-\alpha+1}} \,dx \big) \, dy
\\&
\le C_K\, \int_{\rn n} \, | f(y)| | y | \chi_{[0,\frac12 R]}( | y | )R^{-1} \, dy=: C_K\, \int_{\rn n} \, | f(y)|H_R( | y | ) \, dy.
\end{split}\end{equation*}
Obviously, for any fixed $y\in \rn n $ we have $H_R ( | y | )\to 0$ as $R\to\infty$. On the other hand,
$ | f(y)|H_R( | y | ) \le \frac12 |f(y)|$, so that, by the dominated convergence theorem,
$$
R^{-\alpha} I_{1}(R) \longrightarrow 0
$$
as $R\to\infty$.
This completes the proof of the lemma.
\end{proof}
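The decay in Lemma \ref{anuli} can be illustrated numerically. The sketch below is our addition: the paper works in $\rn n$ with $n>2$, but the scaling argument is dimension-free, so we use a one-dimensional analogue with $K(x)=|x|^{\alpha-1}$ and a mean-zero $f$; all names and data are illustrative.

```python
import math

ALPHA = 0.5  # kernel of type alpha in dimension one: K(x) = |x|^{alpha-1}

def conv(x):
    """(K*f)(x) for f = +1 on [0,1], -1 on [1,2], in closed form for x outside
    [0,2], using int_a^b (x-y)^{-1/2} dy = 2(sqrt(x-a)-sqrt(x-b)) for x > b."""
    if x > 2:
        return 2.0 * (math.sqrt(x) - 2.0 * math.sqrt(x - 1.0) + math.sqrt(x - 2.0))
    if x < 0:
        return 2.0 * (2.0 * math.sqrt(1.0 - x) - math.sqrt(-x) - math.sqrt(2.0 - x))
    raise ValueError("closed form valid only outside the support of f")

def annulus_avg(R, m=2000):
    """R^{-alpha} * int_{R<|x|<2R} |K*f| dx, midpoint rule with m nodes per side."""
    h = R / m
    total = sum(abs(conv(R + (k + 0.5) * h)) + abs(conv(-(R + (k + 0.5) * h)))
                for k in range(m)) * h
    return total / R**ALPHA

print(annulus_avg(10.0), annulus_avg(1000.0))  # the quantity decays as R grows
```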
\begin{definition} Let $f$ be a measurable function on $\rn n$. If $t>0$ we set
$$
\lambda_f(t) = |\{|f|>t\}|.
$$
If $1\le p\le\infty$ and
$$
\sup_{t>0} t^p\lambda_f(t) <\infty,
$$
we say that $f\in L^{p,\infty}(\rn n)$.
\end{definition}
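As a concrete one-dimensional example (our addition): $f(x)=1/x$ on $(0,\infty)$ belongs to $L^{1,\infty}$ but not to $L^1$, since $\lambda_f(t)=|\{x>0 : 1/x>t\}|=1/t$. A quick numerical check:

```python
import math

lam = lambda t: 1.0 / t  # distribution function of f(x) = 1/x on (0, infinity)
# sup_t t*lambda_f(t) over a logarithmic grid of t: finite (~1.0)
weak_norm = max(t * lam(t) for t in (10.0**k for k in range(-6, 7)))
print(weak_norm)
# yet f is not integrable: int_eps^1 dx/x = log(1/eps) blows up as eps -> 0
print(math.log(1.0 / 1e-6))
```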
\begin{definition}\label{M}
Following \cite{BBC}, Definition A.1, if $1<p<\infty$, we set
$$
\| u\|_{M^p} : = \inf \{C\ge 0 \, ; \, \int_K |u| \, dx \le C |K|^{1/p'}\;
\mbox{for all Lebesgue measurable sets $K\subset {\mathbb R^{n}}$}\}.
$$
\end{definition}
By \cite{BBC}, Lemma A.2, we obtain
\begin{lemma}
If $1<p<\infty$, then
$$
\dfrac{(p-1)^p}{p^{p+1}} \| u\|_{M^p }^p \le \sup_{\lambda >0} \{\lambda^p | \{|u|>\lambda\} |\, \} \le \| u\|_{M^p } ^p.
$$
In particular, if $1<p<\infty$, then $M^p = L^{p,\infty}(\rn n)$.
\end{lemma}
\begin{corollary}\label{marc alternative coroll} If $1\le s <p$, then $M^p \subset L^s_{\mathrm{loc}} (\rn n)\subset L^1_{\mathrm{loc}} (\rn n)$.
\end{corollary}
\begin{proof} If $u\in M^p$ then $|u|^s\in M^{p/s}$, and we can conclude
thanks to Definition \ref{M}.
\end{proof}
\begin{lemma}\label{convolutions} Let $E$ be a kernel of type $\alpha\in (0,n)$. Then for all $f\in L^1(\rn n)$ we have $f\ast E\in M^{n/(n-\alpha)} $
and there exists $C>0$ such that
$$
\| f\ast E\|_{M^{n/(n-\alpha)}} \le C\|f \|_{L^1({\rn n})}
$$
for all $f\in L^1(\rn n)$. In particular, by Corollary \ref{marc alternative coroll}, $f\ast E\in L^1_{\mathrm{loc}}$.
\end{lemma}
As in \cite{BFP2}, Lemma 4.4 and Remark 4.5, we have:
\begin{remark}
Suppose $0< \alpha<n$.
If $K$ is a kernel of type $\alpha$
and $\psi \in \mathcal D(\rn n)$, $\psi\equiv 1$ in a neighborhood of the origin, then
the statements of Lemma \ref{convolutions} still
hold if we replace $K$ by $(1-\psi )K$ or by $\psi K$.
\end{remark}
\section{Differential forms and currents} Let $(dx_1,\dots, dx_{n} )$ be the canonical basis of $(\rn n)^*$ and
indicate as $\scalp{\cdot}{\cdot}{} $ the
inner product in $(\rn n)^*$ that makes $(dx_1,\dots, dx_{n} )$
an orthonormal basis. We put
$ \bigwedge^0 (\rn n) := \mathbb R $
and, for $1\leq h \leq n$,
\begin{equation*}
\begin{split}
\bigwedge^h(\rn n)& :=\mathrm {span}\{ dx_{i_1}\wedge\dots \wedge dx_{i_h}:
1\leq i_1< \dots< i_h\leq n\}
\end{split}
\end{equation*}
the linear space of alternating $h$-forms on $\rn n$.
If $I:= ({i_1},\dots , {i_h})$ with
$1\leq i_1< \dots< i_h\leq n$, we set $|I|:=h$ and
$$
dx^I:= dx_{i_1}\wedge\dots \wedge dx_{i_h}.
$$
We indicate as $\scalp{\cdot}{\cdot}{} $ also the
inner product in $\bigwedge^h(\rn n)$ that makes $\{dx^I \,:\, |I|=h\}$
an orthonormal basis.
By translation, $\bigwedge^h(\rn n)$ defines a fibre bundle over $\rn n$, still denoted by $\bigwedge^h(\rn n)$. A differential
form on $\rn n$ is a section of this fibre bundle.
Throughout this Note, if $0\le h\le n$ and $\mathcal U\subset \rn n$ is an open set, we denote by $\Omega^h(\mathcal U)$ the space of
differential $h$-forms on $\mathcal U$, and by $d:\Omega^h(\mathcal U)\to\Omega^{h+1}(\mathcal U)$
the exterior differential. Thus $(\Omega^\bullet(\mathcal U), d)$ is the de Rham complex
in $\mathcal U$ and any $u\in \Omega^h$ can be written as
$$
u= \sum_{|I|=h} u_I dx^I.
$$
\begin{definition}
If $\mathcal U\subset\rn n$ is an open set and $0\le h\le n$,
we say that $T$ is an $h$-current on $\mathcal U$
if $T$ is a continuous linear functional on $\mathcal D(\mathcal U, \bigwedge^h(\rn n))$
endowed with the usual topology. We write $T\in \mathcal D'(\mathcal U, \bigwedge^h(\rn n))$.
If
$ u \in L^1_{\mathrm{loc}}(\mathcal U, \bigwedge^h(\rn n))$, then $u$ can be
identified canonically with an $h$-current $T_u$ through the
formula
\begin{equation*}
\Scal{T_u}{\varphi}:=\int_{\mathcal U} u\wedge \ast \varphi
= \int_{\mathcal U} \scal{u}{\varphi}\, dx
\end{equation*} for any $\varphi\in \mathcal D(\mathcal U, \bigwedge^h(\rn n))$.
\end{definition}
From now on, if there is no danger of misunderstanding, and
$ u \in L^1_{\mathrm{loc}}(\mathcal U, \bigwedge^h(\rn n))$, we shall write $u$ instead of $T_u$.
Suppose now $u$ is sufficiently smooth (take for instance $u\in\mathcal C^\infty(\rn n, \bigwedge^h(\rn n))$). If $\phi\in\mathcal D(\rn n, \bigwedge^{h+1}(\rn n))$, then,
by the Green formula,
$$
\int_{\rn n} \scal{d u}{\phi}\, dx = \int_{\rn n} \scal{u}{d ^*\phi}\, dx.
$$
Thus, if $T\in \mathcal D'(\rn n, \bigwedge^h(\rn n))$, it is natural to set
\begin{equation*}\begin{split}
\Scal{d T}{\phi} = \Scal{T}{d ^*\phi}
\end{split}\end{equation*}
for any $\phi\in \mathcal D(\rn n, \bigwedge^{h+1}(\rn n))$.
Analogously, if $T\in \mathcal D'(\rn n, \bigwedge^h(\rn n))$, we set
\begin{equation*}\begin{split}
\Scal{d ^*T}{\phi} = \Scal{T}{d \phi}
\end{split}\end{equation*}
for any $\phi\in \mathcal D(\rn n, \bigwedge^{h-1}(\rn n))$.
Notice that, if $u \in L^1_{\mathrm{loc}}(\rn n, \bigwedge^{h}(\rn n))$, then
$$
\Scal{u}{d ^*\phi} = \int_{\rn n} u\wedge \ast d ^*\phi
= (-1)^{h+1} \int_{\rn n} u\wedge d (\ast\phi).
$$
A straightforward approximation {argument} yields the following identity:
{ \begin{lemma}\label{june 6 eq:1}
Let
$u\in L^1(\rn n, \bigwedge^{h}(\rn n))$ be a closed form, and let $K$ be a kernel of type $\mu\in (0,n)$.
If $\psi\in \mathcal D(\rn n, \bigwedge^{h+1}(\rn n))$, then
\begin{equation}\label{closed eq:1}
\int \scal{u}{ d^*(\psi \ast K)}\, dx=0.
\end{equation}
\end{lemma}
}
\begin{definition}
In $\rn n$, we define
the Laplace-Beltrami operator $\Delta_{h}$ on $\Omega^h$ {by}
\begin{equation*}
\Delta_{h}=
dd^*+d^*d
\end{equation*}
\end{definition}
Notice that $-\Delta_{0} = \sum_{j=1}^{n}\partial_j^2$ is the usual Laplacian of
$\rn n$.
\begin{proposition}[see e.g. \cite{jost} (2.1.28)]
If $u= \sum_{|I|=h} u_I dx^I$, then
$$
\Delta u = - \sum_{|I|=h} (\Delta u_I) dx^I.
$$
\end{proposition}
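For the reader's convenience, here is the elementary low-dimensional computation behind this Proposition in the case $n=2$, $h=1$; this verification is our addition, using the sign conventions fixed above.

```latex
% For u = a\,dx_1 + b\,dx_2 on \mathbb R^2:
\[
d^*u = -(\partial_1 a + \partial_2 b), \qquad
du = (\partial_1 b - \partial_2 a)\, dx_1\wedge dx_2, \qquad
d^*(f\, dx_1\wedge dx_2) = \partial_2 f\, dx_1 - \partial_1 f\, dx_2,
\]
% hence
\[
(dd^* + d^*d)\,u
= -\bigl(\partial_1^2 a + \partial_2^2 a\bigr)\, dx_1
  -\bigl(\partial_1^2 b + \partial_2^2 b\bigr)\, dx_2
= -(\Delta a)\, dx_1 - (\Delta b)\, dx_2 .
\]
```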
For sake of simplicity, since a basis of $\bigwedge^h(\rn n)$
is fixed, the operator $\Delta_{h}$ can be identified with a diagonal matrix-valued map, still denoted
by $\Delta_{h}$,
\begin{equation}\label{matrix form}
\Delta_{h} = -(\delta_{ij}\Delta)_{i,j=1,\dots,\mathrm{dim}\, \bigwedge^h(\rn n)}: \mathcal D'(\rn{n}, \bigwedge^h(\rn n))\to \mathcal D'(\rn{n}, \bigwedge^h(\rn n)),
\end{equation}
where $\mathcal D'(\rn{n}, \bigwedge^h(\rn n))$ is the space of vector-valued distributions on $\rn n$.
If we denote by $\Delta_{h}^{-1}$ the matrix-valued kernel
\begin{equation}\label{matrix form inverse}
\Delta_{h}^{-1} = -(\delta_{ij}\Delta^{-1})_{i,j=1,\dots,\mathrm{dim}\, \bigwedge^h(\rn n)}: \mathcal D'(\rn{n}, \bigwedge^h(\rn n))\to \mathcal D'(\rn{n}, \bigwedge^h(\rn n)),
\end{equation}
then $\Delta_{h}^{-1} $ is a matrix-valued kernel of type 2 and
$$
\Delta_{h}^{-1} \Delta_{h}\alpha = \Delta_{h}\Delta_{h}^{-1} \alpha = \alpha \qquad\mbox{for all $\alpha\in\mathcal D(\rn n,\bigwedge^h(\rn n))$.}
$$
We notice that, if $n>1$, since $ \Delta_h^{-1}$ is associated with a kernel of type 2,
$ \Delta_h^{-1}f $ is well defined when $f\in L^1(\rn{n}, \bigwedge^h(\rn n))$. More precisely, by
Lemma \ref{convolutions} we have:
\begin{lemma}
If $1\le h< n$ and $R=R(D)$ is a homogeneous
polynomial of degree $1$ in $D_1,\dots,D_n$,
we have:
$$
\| f\ast R(D) \Delta_{h}^{-1}\|_{M^{n/(n-1)}} \le C\|f \|_{L^1(\rn n)}
$$
for all $f\in L^1(\rn n, \bigwedge^h(\rn n))$.
By Corollary \ref{marc alternative coroll}, $ f\ast R(D) \Delta_{h}^{-1}\in L^1_{\mathrm{loc}}(\rn n,\bigwedge^h(\rn n))$.
In particular, the map
\begin{equation}\label{L1-L1}
\Delta_{ h}^{-1}: L^1(\rn n, \bigwedge^h(\rn n))\longrightarrow L^1_{\mathrm{loc}}(\rn n, \bigwedge^h(\rn n))
\end{equation}
is continuous.
\end{lemma}
\begin{remark}\label{closed bis} By Corollary \ref{closed ex bis}, if
$u\in L^1(\rn n, \bigwedge^{h+1}(\rn n))$ and
$\psi\in \mathcal D(\rn n, \bigwedge^h(\rn n))$, then
\begin{equation}\label{revised 1}
\Scal{\Delta^{-1}_{h} u}{ \psi} = \Scal{ u}{ \Delta^{-1}_{h}\psi}.
\end{equation}
\end{remark}
In this equation, the left hand side is the action of a matrix-valued distribution on a vector-valued test function (see formula \eqref{matrix form inverse}),
whereas the right hand side is the inner product of an $L^1$ vector-valued function with an $L^\infty$ vector-valued function.
A standard argument yields the following identities:
\begin{lemma}[see \cite{BFP2}, Lemma 4.11]\label{comm}
If $\alpha\in\mathcal D(\rn n, \bigwedge^h(\rn n))$, then
\begin{itemize}
\item[i)]$
d \Delta^{-1}_{ h}\alpha = \Delta^{-1}_{ h+1} d\alpha$, \qquad $h=0,1,\dots, n-1$,
\item[ii)]$d^* \Delta^{-1}_{h}\alpha = \Delta^{-1}_{h-1} d^* \alpha$,
\qquad $ h=1,\dots, n$.
\end{itemize}
\end{lemma}
\begin{lemma}
If $\alpha\in L^1(\rn n, \bigwedge^h(\rn n))$, then $\Delta^{-1}_{h}\alpha$
is well defined and belongs to
$ L^1_{\mathrm{loc}}(\rn n, \bigwedge^h(\rn n))$. If in addition $d \alpha=0$ in the distributional sense,
then
$$
d \Delta^{-1}_{ h}\alpha = 0.
$$
\end{lemma}
\begin{proof} Let $\phi\in \mathcal D(\rn n, \bigwedge^{h+1}(\rn n))$ be arbitrarily given.
By Lemma \ref{comm}, $d^* \Delta^{-1}_{h+1}\phi = \Delta^{-1}_{h}d^* \phi$.
Thus Remark \ref{closed bis} and Lemma \ref{june 6 eq:1} yield
$$
\Scal{d \Delta^{-1}_{ h}\alpha}{\phi} = \Scal{\Delta^{-1}_{ h}\alpha}{d^* \phi}
= \Scal{\alpha}{\Delta^{-1}_{h}d^* \phi} =
\Scal{\alpha}{d^* \Delta^{-1}_{ h+1}\phi}
= 0.
$$
\end{proof}
\section{$n$-parabolicity}
\label{parabolicity}
Recall that a noncompact Riemannian manifold $M$ is \emph{$p$-parabolic} if for every compact subset $K$ and every $\epsilon>0$, there exists a smooth compactly supported function $\chi$ on $M$ such that $\chi\geq 1$ on $K$ and
$$
\int_{M}|d\chi|^p <\epsilon.
$$
It is well known that Euclidean $n$-space is $n$-parabolic (the relevant functions $\chi$ can be taken to be piecewise affine functions of $\log r$, where $r$ is the distance to the origin). It follows that Sobolev inequality in $L^n$ cannot hold, and, as we saw in the introduction, that the Poincar\'e inequality on $n$-forms fails as well.
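The claim about the cutoffs can be made quantitative. With $\chi$ equal to $1$ on $B(R_1)$, $0$ outside $B(R_2)$ and affine in $\log r$ in between, one has $|d\chi| = (r\log(R_2/R_1))^{-1}$ on the annulus, so $\int |d\chi|^n\,dx = \omega_{n-1}(\log(R_2/R_1))^{1-n}\to 0$. The following numerical sketch (our illustration; the helper name and values are arbitrary) confirms this:

```python
import math

def cutoff_energy(n, R1, R2, m=100000):
    """int_{R1<|x|<R2} |d chi|^n dx for the log-affine cutoff: midpoint rule in
    the radial variable, returned together with the closed form omega*lam^{1-n}."""
    omega = 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)  # area of S^{n-1}
    lam = math.log(R2 / R1)
    h = (R2 - R1) / m
    numeric = omega * h * sum(
        (R1 + (k + 0.5) * h) ** (n - 1) * ((R1 + (k + 0.5) * h) * lam) ** (-n)
        for k in range(m))
    return numeric, omega * lam ** (1 - n)

print(cutoff_energy(3, 1.0, 1e3))  # numeric and closed form agree, both small
print(cutoff_energy(3, 1.0, 1e6))  # energy decreases as R2/R1 grows
```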
Here, we explain another consequence of $n$-parabolicity.
\begin{proposition}
\label{descends}
Let $\omega$ be a $k$-form in $L^1(\mathbb R^n)$. Assume that $\omega=d\phi$ where $\phi\in L^{n/(n-1)}(\mathbb R^n)$. Then, for every constant-coefficient $(n-k)$-form $\beta$,
$$
\int_{\mathbb R^n}\omega\wedge\beta=0.
$$
\end{proposition}
\begin{proof}
Let $\chi_R$ be a smooth compactly supported function on $\mathbb R^n$ such that $\chi_R=1$ on $B(R)$ and $\int|d\chi_R|^n \leq \frac{1}{R}$. Let $\omega_R=d(\chi_R\phi)$. Then, since $\chi_R\phi\wedge\beta$ is compactly supported,
$$
\int_{\mathbb R^n}\omega_R\wedge\beta=\int_{\mathbb R^n}d(\chi_R\phi\wedge\beta)=0.
$$
Write $\omega_R=d\chi_R\wedge\phi+\chi_R\omega$. Since
$$
|\int_{\mathbb R^n}d\chi_R\wedge\phi\wedge\beta|
\leq \|d\chi_R\|_n\|\phi\|_{n/(n-1)}\|\beta\|_\infty
\leq\frac{C}{R^{1/n}}
$$
tends to $0$,
\begin{align*}
\int_{\mathbb R^n}\omega\wedge\beta
&=\lim_{R\to\infty}\int_{\mathbb R^n}\chi_R\omega\wedge\beta\\
&=-\lim_{R\to\infty}\int_{\mathbb R^n}\omega_R\wedge\beta=0.
\end{align*}
\end{proof}
In other words, the vanishing of all integrals $\int\omega\wedge\beta$ is a necessary condition for an $L^1$ $k$-form to be the differential of an $L^{n/(n-1)}$ $(k-1)$-form.
\section{Main results}
The following estimate provides primitives for globally defined closed $L^1$-forms, and
can be derived from the Lanzani \& Stein inequality \cite{LS}, approximating closed forms in $L^1_0(\rn n,\bigwedge^h(\rn n))$
by means of closed compactly supported smooth forms. The convergence of the approximation is guaranteed by Lemma \ref{anuli}.
\begin{proposition}
Denote by $L^1_0(\rn n,\bigwedge^h(\rn n))$ the subspace of $L^1(\rn n,\bigwedge^h(\rn n))$
of forms with vanishing average, and by $\mathcal H^1(\rn n)$ the classical real Hardy space (see \cite{Stein}, Chapter 3). We have:
\begin{itemize}
\item[i)] if $h < n$, then
$$
\|d^* \Delta_h^{-1} u\|_{L^{n/(n-1)}(\rn n)}\le C \|u\|_{L^{1}(\rn n)}\qquad\mbox{for all $u\in L^1_0(\rn n,\bigwedge^h(\rn n))\cap \ker d $};
$$
\item[ii)] if $h= n$, then
$$
\|d^* \Delta_{n}^{-1} u\|_{L^{n/(n-1)}(\rn n)}\le C \|u\|_{\mathcal H^{1}(\rn n)}\qquad\mbox{for all $u\in \mathcal H^1(\rn n)\cap \ker d $}
$$
\end{itemize}
We stress that the vanishing average assumption is necessary
(see Proposition \ref{descends}).
\end{proposition}
A standard approximation argument (akin to that of the {classical} Meyer \& Serrin Theorem) yields
the following density result.
\begin{lemma}
Let $B \subset \rn n$ be an open set. If $0\le h\le n$, we set
$$
(L^1\cap d ^{-1}L^1)(B,\bigwedge^h(\rn n)):=\{\alpha\in L^1(B,\bigwedge^h(\rn n))\,;\,d \alpha\in L^1(B,\bigwedge^{h+1}(\rn n))\},
$$
endowed with the graph norm. Then $C^\infty(B,\bigwedge^h(\rn n))$ is dense in $(L^1\cap d ^{-1}L^1)(B,\bigwedge^h(\rn n))$.
\end{lemma}
Again through an approximation argument we can prove the following two lemmata:
\begin{lemma}\label{L1 boundedness of Kd new}
If $K=d ^*\Delta^{-1}$, then:
\begin{itemize}
\item $K$ is a kernel of type $1$;
\item if $\chi$ is a smooth function with compact support in $B$, then the identity
$$
\chi=d K\chi+Kd \chi
$$
holds on the space $(L^1\cap d ^{-1}L^1)(B,\bigwedge^\bullet(\rn n))$.
\end{itemize}
\end{lemma}
\begin{lemma}
If $1\le h<n$, let
$\psi\in L^1(\rn n,\bigwedge^{h}(\rn n))$ be a compactly supported form with $d \psi \in L^1(\rn n,\bigwedge^{h+1}(\rn n))$,
and let $\xi\in {\bigwedge^{n-h-1}(\rn n)}$ be a constant-coefficient form. Then
$$
\int_{\rn{n} } d \psi\wedge \xi = 0.
$$
\end{lemma}
We are able now to prove the following (approximate) homotopy formula for closed forms.
\begin{proposition}\label{smoothing}
Let $B\Subset B'$ be open sets in $\rn n$. For $h=1,\ldots,n-1$, take $q=n/(n-1)$. Then there exists a smoothing operator
$S:L^1(B',\bigwedge^{h}(\rn n))\to W^{s,q}(B,\bigwedge^{h}(\rn n))$
for every $s\in\mathbb N$, and a bounded operator $T:L^1(B',\bigwedge^{h}(\rn n))\to L^q(B,\bigwedge^{h-1}(\rn n))$ such that, for closed $L^1$-forms $\alpha$ on $B'$,
\begin{equation}\label{homotopy formula for closed forms}
\alpha=d T\alpha+S\alpha\qquad \mbox{on $B$}.
\end{equation}
In particular, $S\alpha$ is closed.
Furthermore, $T$ and $S$ merely enlarge by a small amount the support of compactly supported differential forms.
\end{proposition}
\begin{proof}
{ Let us fix two open sets $B_0$ and $B_1$ with
\begin{equation*}
B\Subset B_0 \Subset B_1\Subset B',
\end{equation*}
and a cut-off function $\chi \in \mathcal D(B_1)$,
$\chi\equiv 1$ on $B_0$. If $\alpha \in ( L^1 \cap d ^{-1}L^1)(B', \bigwedge^\bullet(\rn n))$, we set $\alpha_0= \chi\alpha$, extended by zero outside $B_1$.
Denote by $k$ the kernel associated with $K$ in Lemma \ref{L1 boundedness of Kd new}.
We consider a cut-off function $\psi_R$ supported in a $R$-neighborhood
of the origin, such that $\psi_R\equiv 1$ near the origin. Then we can write
$k=k\psi_R + (1-\psi_R)k$. Thus, let us denote by $K_{R}$
the convolution operator associated with $\psi_R k$.
By Lemma \ref{L1 boundedness of Kd new},
\begin{equation}\begin{split}\label{apr 20 eq:1}
\alpha_0 &=d K\alpha_0+Kd \alpha_0\\
&= d K_{R} \alpha_0 + K_{R}d \alpha_0 + S_0\alpha_0,
\end{split}\end{equation}
where $S_0$ is defined by
$$
S_0\alpha_0 := d ( (1-\psi_R)k\ast \alpha_0) + (1-\psi_R)k \ast d \alpha_0.
$$
If $\beta\in L^1(B_1,\bigwedge^h(\rn n))$, we set
$$
T_1 \beta:= K_{R}(\chi\beta)_{\big|_{B}}, \qquad S_1\beta:= (S_0(\chi\beta))_{\big|_{B}}.
$$
We notice that, provided $R>0$ is small enough, the values of $T_1\beta$ do not depend on the continuation of $\beta$ outside $B_1$.
Moreover
$$
{K_{R} d \alpha_0}_{\big|_B} = K_{R} d (\chi \alpha)_{\big|_B} = K_{R} (\chi d \alpha)_{\big|_B} =T_1(d \alpha),
$$
since $ d (\chi \alpha)\equiv \chi d \alpha$ on $B_0$.
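Indeed, this identity is just the Leibniz rule combined with the fact that $\chi$ is constant on $B_0$:
$$
d (\chi\alpha) = d \chi\wedge\alpha + \chi\, d \alpha = \chi\, d \alpha \qquad\mbox{on $B_0$},
$$
since $d \chi\equiv 0$ where $\chi\equiv 1$.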
Thus, by \eqref{apr 20 eq:1},
$$
\alpha = d T_1\alpha + T_1d \alpha + S_1\alpha \qquad\mbox{in $B$}.
$$
Assume now that $d \alpha=0$.
Then
$$
\alpha = d T_1\alpha + S_1\alpha \qquad\mbox{in $B$}.
$$
Write $\phi=T_1\alpha\in L^1(B_0, \bigwedge^{h-1}(\rn n))$. By difference, $d \phi=\alpha-S_1\alpha\in L^1(B_0, \bigwedge^{h}(\rn n))$.
}
The next step consists of proving that $\phi\in L^q(B_0, \bigwedge^{h-1}(\rn n))$,
``iterating'' the previous argument. Let us sketch how this iteration will work:
let $\zeta$ be a cut-off function supported in $B_0$, identically equal to $1$ in a neighborhood $\mathcal U$ of $B$, and set $\omega=d (\zeta\phi)$.
Obviously, the forms $\zeta\phi$ and $\omega$ are defined on all of $\rn n$ and are compactly
supported in $B_0$. In addition, $\omega$ is closed.
Suppose for a while we are able to prove that
\begin{itemize}
\item[a)] $\omega \in L^1(\rn n, \bigwedge^{h}(\rn n))$;
\item[b)]
$\| K_{R}\omega\|_{L^q(\rn n, \bigwedge^{h-1}(\rn n))} \le C\|\alpha\|_{L^1(B', \bigwedge^{h}(\rn n))}$,
\end{itemize}
and let us show how the argument can be carried out.
First we stress that, if $R$ is small enough, then when $x\in B$, $K_{R}\omega(x)$
depends only on the restriction of $d \phi$ to $\mathcal U$, so that the map
$$
\alpha \to K_{R}\omega \big|_{B}
$$
is linear.
In addition, notice that $\omega=\chi\omega$, so that, by \eqref{apr 20 eq:1},
$$
d (\zeta\phi) = \omega = d K_{R}\omega + S\omega.
$$
Therefore in $B$
$$
\alpha-S_1\alpha = d \phi = d (\zeta\phi) = d K_{R}\omega + S\omega,
$$
and then in $B$
\begin{equation*}\begin{split}
\alpha &= d (K_{R}\omega \big|_{B}) + S_1\alpha \big|_{B}+ S\omega \big|_{B}
\\&=: d (K_{R}(\chi\omega) \big|_{B}) + S\alpha = d T\alpha + S\alpha.
\end{split}\end{equation*}
First notice that the map $\alpha\to \omega=\omega(\alpha)$ is linear, and hence $T$ and
$S$ are linear maps. In addition, by b),
\begin{equation*}\begin{split}
\| T\alpha\|_{L^q(B, \bigwedge^{h-1}(\rn n))} \le \| K_{R}(\chi\omega)\|_{L^q(\rn n, \bigwedge^{h-1}(\rn n))} = \| K_{R}\omega\|_{L^q(\rn n, \bigwedge^{h-1}(\rn n))}
\le C\, \|\alpha\|_{L^1(B', \bigwedge^{h}(\rn n))}.
\end{split}\end{equation*}
As for the map $\alpha \to S\alpha$, it suffices to point out that, when $x\in B$, $S\alpha(x)$
can be written as the convolution of $\alpha_0$ with a smooth kernel having bounded derivatives
of every order. This completes the proof.
\end{proof}
Interior Poincar\'e and Sobolev inequalities follow now from the approximate homotopy formula for closed forms
\eqref{homotopy formula for closed forms}.
\begin{corollary}[Interior Poincar\'e and Sobolev inequalities]
Let $B\Subset B'$ be open sets in $\rn n$, and assume $B$ is convex. For $h=1,\ldots,n-1$, let $q=n/(n-1)$. Then for every closed form
$\alpha\in L^1(B', \bigwedge^{h}(\rn n))$, there exists an $(h-1)$-form $\phi\in L^q(B, \bigwedge^{h-1}(\rn n))$, such that
$$
d \phi=\alpha_{|B}\qquad\mbox{and}\qquad \|\phi\|_{L^q(B, \bigwedge^{h-1}(\rn n))}\leq C\,\|\alpha\|_{L^1(B', \bigwedge^{h}(\rn n))}.
$$
Furthermore, if $\alpha$ is compactly supported, so is $\phi$.
\end{corollary}
\begin{proof}
By Proposition \ref{smoothing}, the $h$-form $S\alpha$ defined in \eqref{homotopy formula for closed forms} is closed and
belongs to $L^{q}(B,\bigwedge^{h}(\rn n))$, with norm controlled by the $L^1$-norm of $\alpha$. Thus we can
apply Iwaniec \& Lutoborski's homotopy (\cite{IL}, Proposition 4.1) to obtain a differential $(h-1)$-form $\gamma$ on $B$
whose $W^{1,q}(B, \bigwedge^{h-1}(\rn n))$-norm is controlled by the $L^q$-norm of $S\alpha$, and therefore by the
$L^1$-norm of $\alpha$. Set $\phi:= T\alpha + \gamma$. Clearly
$$
d\phi = dT\alpha + d\gamma =dT\alpha + S\alpha = \alpha.
$$
Then, by Proposition \ref{smoothing},
$$
\| \phi\|_{L^q(B, \bigwedge^{h-1}(\rn n))} \le C \big(\| \alpha \|_{L^1(B', \bigwedge^{h}(\rn n))} + \| S\alpha\|_{L^q(B, \bigwedge^{h}(\rn n))}\big)
\le C \| \alpha \|_{L^1(B', \bigwedge^{h}(\rn n))}.
$$
\end{proof}
\section*{Acknowledgments}{A. B. and B. F. are supported by the University of Bologna, funds for selected research topics, and by MAnET Marie Curie
Initial Training Network, by
GNAMPA of INdAM (Istituto Nazionale di Alta Matematica ``F. Severi''), Italy, and by PRIN of the MIUR, Italy.
\\
P.P. is supported by MAnET Marie Curie
Initial Training Network, by Agence Nationale de la Recherche, ANR-10-BLAN 116-01 GGAA and ANR-15-CE40-0018 SRGI. P.P. gratefully acknowledges the hospitality of Isaac Newton Institute, of EPSRC under grant EP/K032208/1, and of Simons Foundation.}
\bibliographystyle{amsplain}
\section*{Abstract}
{\bf
QCD predictions for final states with multiple jets in hadron collisions
make use of multi-jet merging methods, which allow one to combine
consistently the contributions from hard scattering matrix elements with different parton multiplicities and
parton showers. In this article I consider a multi-jet merging method that
has recently been proposed to take into account the effects of transverse
momentum dependent (TMD) evolution and parton shower, and present
studies focusing on the application of this method to jets associated with
Drell-Yan (DY) production in the region of high masses.
}
\section{Introduction}
\label{sec:intro}
Drell-Yan (DY) plus jets final states are important at the LHC for tests of the standard model as well as for backgrounds to Higgs studies and beyond-standard-model searches. Baseline predictions for these processes are obtained from perturbative fixed-order calculations combined with parton showers in Monte Carlo event generators. Such predictions employ PDFs, which describe initial-state partonic structure and evolution within the collinear approximation.
In certain regions of the DY plus jets phase space, however, it becomes important to treat initial state QCD effects by including transverse momentum dependent (TMD) distributions \cite{Angeles-Martinez:2015sea}, going beyond collinear PDFs. TMD effects in DY plus jets have been the subject of recent studies \cite{Martinez:2022wrf, Chien:2022wiq, Yang:2022qgk, Buonocore:2022mle, Martinez:2021chk, Chien:2019gyf, Sun:2018icb, Buffing:2018ggv}.
In this article I apply the TMD jet merging method \cite{Martinez:2021chk} to study the differential jet rates (DJRs) associated with DY production as a function of the DY mass. In particular, I consider the region of high DY masses, and investigate the behavior of the TMD merging method and the merging scale in this new mass region.
This article is organized as follows. I introduce the theoretical framework for TMD evolution and TMD merging in section \ref{sec:theory}. In section \ref{sec:sliding} I present numerical calculations of the DJRs and discuss their main features. In section \ref{sec:conclusion} I give conclusions.
\section{Parton Branching method and TMD merging} \label{sec:theory}
The parton branching (PB) method developed in \cite{Hautmann:2017xtx, Hautmann:2017fcj} describes the evolution of TMDs by means of real-emission splitting functions, Sudakov form factors and angular ordering phase space constraints. Nonperturbative distributions at the initial scale of order 1 GeV are determined from fits to experimental data \cite{BermudezMartinez:2018fsv}. For applications, the evolved TMD distributions are matched with fixed-order hard-scattering matrix elements \cite{Yang:2022qgk,Martinez:2019mwt}. The TMD distributions and corresponding TMD parton shower are implemented in the Monte Carlo event generator \textsc{Cascade3} \cite{Baranov:2021uol}.
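As a rough numerical illustration of the ingredients just mentioned, the sketch below evaluates a toy Sudakov form factor, i.e. the no-emission probability between a starting scale $Q_0$ and a hard scale $\mu$. It assumes a fixed coupling, only the $q\to qg$ splitting function, and a crude cutoff $z_M = 1 - Q_0/q$ standing in for the angular-ordering phase-space constraint; these are illustrative simplifications, not the actual PB setup (which uses a running coupling and the full set of splitting kernels).

```python
import math

CF = 4.0 / 3.0
ALPHA_S = 0.118          # fixed strong coupling (simplification)
Q0 = 1.0                 # starting evolution scale in GeV (assumption)

def zmax(q):
    # soft-gluon cutoff z_M = 1 - Q0/q, a crude stand-in for the
    # angular-ordering phase-space constraint of the PB method
    return 1.0 - Q0 / q

def pqq_integral(zm):
    # analytic integral of P_qq(z) = C_F (1+z^2)/(1-z) from 0 to zm,
    # using (1+z^2)/(1-z) = -(1+z) + 2/(1-z)
    return CF * (-(zm + 0.5 * zm**2) - 2.0 * math.log(1.0 - zm))

def sudakov(mu, steps=2000):
    """Delta(mu^2, Q0^2): probability of no resolvable emission
    between the scales Q0 and mu (midpoint rule in ln q^2)."""
    lo, hi = math.log(Q0**2), math.log(mu**2)
    dlnq2 = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        q = math.exp(0.5 * (lo + (i + 0.5) * dlnq2))
        if zmax(q) > 0.0:
            total += (ALPHA_S / (2.0 * math.pi)) * pqq_integral(zmax(q)) * dlnq2
    return math.exp(-total)

for mu in (5, 20, 91, 800):
    print(f"mu = {mu:4d} GeV  Delta = {sudakov(mu):.3f}")
```

As expected, the no-emission probability decreases monotonically as the hard scale grows, which is the mechanism driving the scale dependence discussed below.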
Applications to DY transverse momentum spectra, using PB TMDs matched with next-to-leading-order (NLO) DY matrix elements via MC@NLO \cite{Alwall:2014hca}, have been studied in \cite{Martinez:2019mwt} for LHC energies and in \cite{BermudezMartinez:2020tys} for lower energies. Applications to jets have been studied in \cite{Abdulhamid:2021xtt} by the same method.
It is found in \cite{Martinez:2019mwt} that NLO-matched PB TMD predictions provide a very good description of experimental LHC measurements of the Z-boson transverse momentum $q_T$ distribution in the low $q_T$ region and middle $q_T$ region ($q_T \lesssim M_Z$) while a deficit is observed in the prediction compared to experimental data in the high $q_T$ region. The low $q_T$ region is dominated by soft-gluon radiation emitted through TMD evolution. The high $q_T$ region is dominated by hard, perturbative emissions, and the deficit points to the lack of higher orders beyond the first hard emission in an NLO calculation.
A method to include higher-order emissions corresponding to Z + n partons has been devised in \cite{Martinez:2021chk} based on multi-jet merging. The method extends the MLM merging \cite{Mangano:2006rw, Alwall:2007fs, Mrenna:2003if, Mangano:2002bhl} to the case of TMD parton branching, rather than collinear parton branching. It allows one to combine consistently, without double counting or missing events, high-multiplicity matrix elements with the TMD parton showers and TMD parton distributions. Using the TMD merging, a good description of the Z-boson $q_T$ spectrum is obtained not only at low $q_T$ but also at high $q_T$. This observation was also made in the recent CMS analysis \cite{CMS:2022ilp}.
The key features of TMD jet merging, compared to collinear MLM merging, follow from taking into account the transverse momentum in the initial-state parton cascade: i) a better description of high-multiplicity jet final states is achieved, and ii) the systematic uncertainties due to the merging parameters are reduced in the theoretical predictions.
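The matching step at the core of MLM-type merging can be caricatured as follows: every hard-scattering parton must be matched to a distinct jet above the merging scale, and leftover hard jets cause the event to be rejected (except in the highest-multiplicity sample). The jet representation, the $\Delta R$ matching criterion and the acceptance logic here are deliberately simplified illustrative assumptions, not the TMD algorithm of \cite{Martinez:2021chk}.

```python
import math

def delta_r(a, b):
    """Rapidity-azimuth distance between (pt, y, phi) objects."""
    dphi = abs(a[2] - b[2])
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return math.hypot(a[1] - b[1], dphi)

def mlm_accept(partons, jets, et_clus=23.0, r_match=1.0, highest_mult=False):
    """Toy MLM-style matching: each hard parton must match a distinct
    jet above the merging scale; extra hard jets are vetoed unless this
    is the highest-multiplicity sample."""
    hard = [j for j in jets if j[0] > et_clus]
    unmatched = list(hard)
    for p in partons:
        match = next((j for j in unmatched if delta_r(p, j) < r_match), None)
        if match is None:
            return False        # a hard parton was not reconstructed as a jet
        unmatched.remove(match)
    if unmatched and not highest_mult:
        return False            # extra hard jet: would double-count a
                                # configuration covered by a higher-multiplicity sample
    return True
```

The veto on unmatched hard jets is what removes the double counting between samples of different parton multiplicity, while the `highest_mult` exception lets the shower populate jets beyond the largest matrix-element multiplicity.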
In refs. \cite{Martinez:2021chk,Martinez:2021dwx}, multi-jet observables in Z + jets production are studied using the TMD merging in the region near the Z boson mass.
In the next section, calculations with TMD merging away from the Z-boson mass are shown to explore the behavior of the TMD merging approach and its merging scale in the region of high masses.
\section{DJRs in DY + jets at high masses}
\label{sec:sliding}
Differential jet rates (DJRs) are the distributions of the variable $d_{n,n+1}$, which is the square of the energy scale at which an $n$-jet configuration is resolved as an $(n+1)$-jet configuration, with jets defined according to the $k_T$ jet algorithm \cite{CATANI1993187,Ellis:1993tq}. They are sensitive to the consistency of any jet merging method, which makes them appropriate quantities to validate the merging algorithm.
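To make the definition concrete, the sketch below runs a toy exclusive $k_T$ clustering on massless pseudo-particles given as $(p_T, y, \phi)$ triples and records $d_{n,n+1}$, the distance at which the multiplicity drops from $n+1$ to $n$ objects. The simple $p_T$-weighted recombination is an assumption made for brevity and differs from the full four-momentum recombination schemes used in actual analyses.

```python
import math

def kt_djr(particles, R=1.0):
    """Exclusive kT clustering of (pt, y, phi) tuples.
    Returns {n: d_{n,n+1}}, the squared scale at which an
    (n+1)-object configuration is reduced to n objects."""
    jets = list(particles)
    djr = {}
    while jets:
        best, pair = None, None
        for i, (pti, yi, phii) in enumerate(jets):
            dib = pti**2                       # beam distance d_iB = pt_i^2
            if best is None or dib < best:
                best, pair = dib, (i, None)
            for j in range(i + 1, len(jets)):  # pairwise kT distances
                ptj, yj, phij = jets[j]
                dphi = abs(phii - phij)
                dphi = min(dphi, 2.0 * math.pi - dphi)
                dij = min(pti, ptj)**2 * ((yi - yj)**2 + dphi**2) / R**2
                if dij < best:
                    best, pair = dij, (i, j)
        i, j = pair
        if j is None:                 # merged with the beam: drop the jet
            jets.pop(i)
        else:                         # recombine i and j (toy pt-weighted scheme)
            pti, yi, phii = jets[i]
            ptj, yj, phij = jets[j]
            pt = pti + ptj
            jets[j] = (pt, (pti * yi + ptj * yj) / pt,
                       (pti * phii + ptj * phij) / pt)
            jets.pop(i)
        djr[len(jets)] = best         # scale of the (n+1) -> n transition
    return djr

# example: three pseudo-particles (pt [GeV], rapidity, phi)
print(kt_djr([(50.0, 0.1, 0.0), (45.0, -0.2, 3.0), (20.0, 1.0, 1.5)]))
```

Histogramming $\sqrt{d_{n,n+1}}$ (or its logarithm) over many events yields the DJR distributions shown in the figures.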
The calculational set-up is as follows. With MadGraph5\_aMC@NLO \cite{Alwall:2014hca} I generate Les Houches Event (LHE) files at leading order (LO) containing hard-scattering events (representing the matrix elements) for pp collisions into Z + 0, 1, 2, 3 jet final states at a centre-of-mass energy of $\sqrt{s} = 13$ TeV. A generation cut of 16 GeV sets the lower limit on the transverse momentum of the partons emitted in these events. The transverse momentum at the hard scale is generated by forward evolution \cite{Hautmann:2017xtx} using the PB-TMD Set 2 \cite{BermudezMartinez:2018fsv} provided by the TMDlib library \cite{Abdulov:2021ivr, Hautmann:2014kza}. Parton shower emissions are generated in the TMD shower in \textsc{Cascade} \cite{Baranov:2021uol}, following the PB evolution dynamics backwards. The TMD merging algorithm \cite{Martinez:2021chk} at LO is implemented within the event generator. Hadronization is turned off, since it enters the calculation only after the merging and would not affect the DJRs.
The merging scale in a merged calculation (denoted $E_\perp^{\text{clus}}$) represents the minimal transverse energy a jet must have to pass the merging algorithm. The effect on the DJRs is that the region below the merging scale is dominated by the TMD and TMD shower, while the region above the merging scale is dominated by the matrix element.
We investigate DJRs at different hard interaction scales to study the behavior of the merging scale when the hard scale varies.
For di-lepton mass around the $Z$-boson mass,
a merging scale of 23 GeV has been applied~\cite{Martinez:2021chk}.
This works well for TMD merging of Z+jets as observed in \cite{Martinez:2021dwx}. Figure \ref{fig:MZetclus23} shows that the DJRs are smooth in this regime.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.95\textwidth]{DJR_Z+j+jj+jjj_13TeV/histDiffRate0antikt}
\label{DJR01-MZ_etclus23}
\caption{$d_{01}$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{DJR_Z+j+jj+jjj_13TeV/histDiffRate1antikt}
\label{DJR12-MZ_etclus23}
\caption{$d_{12}$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{DJR_Z+j+jj+jjj_13TeV/histDiffRate2antikt}
\label{DJR23-MZ_etclus23}
\caption{$d_{23}$}
\end{subfigure}
\caption{DJRs for Z+jets around the Z boson mass with a merging scale $E_\perp^{\text{clus}}=23$ GeV.}
\label{fig:MZetclus23}
\end{figure}
In Figures \ref{fig:etclus23} and \ref{fig:etclus70}, we show the DJRs obtained in the scenario where the minimal mass of the di-lepton pair is set to 800 GeV. At the merging scale of 23 GeV ($\log_{10}(23)\approx 1.36$), a discontinuity occurs (Fig. \ref{fig:etclus23}). To resolve this discontinuity, higher merging scales have been applied to the calculation. With a merging scale of 70 GeV ($\log_{10}(70)\approx 1.85$), the discontinuity observed at 23 GeV disappears (Fig. \ref{fig:etclus70}).
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.95\textwidth]{mmll800_etclus23/histDiffRate0antikt}
\label{DJR01_mmll800_etclus23}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{mmll800_etclus23/histDiffRate1antikt}
\label{DJR12_mmll800_etclus23}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{mmll800_etclus23/histDiffRate2antikt}
\label{DJR23_mmll800_etclus23}
\end{subfigure}
\caption{DJRs for a minimal DY mass $M_{ll}^{\text{min}}=800$ GeV and a merging scale $E_\perp^{\text{clus}}=23$ GeV.}
\label{fig:etclus23}
\end{figure}
By applying a larger merging scale to events with high di-lepton masses, the low-multiplicity samples contribute more at higher scales. For example, $d_{01}$ is smoother when the $2\rightarrow 2$ sample contributes substantially up to 70 GeV instead of falling off rapidly above 23 GeV.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.95\textwidth]{mmll800_etclus70/histDiffRate0antikt}
\label{DJR01_mmll800_etclus70}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{mmll800_etclus70/histDiffRate1antikt}
\label{DJR12_mmll800_etclus70}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{mmll800_etclus70/histDiffRate2antikt}
\label{DJR23_mmll800_etclus70}
\end{subfigure}
\caption{DJRs for a minimal DY mass $M_{ll}^{\text{min}}=800$ GeV and a merging scale $E_\perp^{\text{clus}}=70$ GeV.}
\label{fig:etclus70}
\end{figure}
\newpage
\section{Conclusion} \label{sec:conclusion}
Transverse momentum recoils in the initial-state shower,
taken into account via TMD distributions, affect the
theoretical systematics of predictions
for jet final
states \cite{Dooling:2012uw,Hautmann:2012dw,Hautmann:2013fla}.
Focusing on the
case of DJR variables in DY + jets production, in
this article I have investigated, within a PB TMD framework
with multi-jet merging, the dependence of predictions
on the merging scale as a function of the DY mass scale.
I have shown in particular that the results support the
possibility of a merging scale increasing with mass.
It will be of interest to explore this possibility further,
also in the context of current developments on TMD
parton branching, such as the ongoing determination \cite{Barzani:2022msy} of
PB TMD distributions via
xFitter~\cite{xFitterDevelopersTeam:2022koz,Alekhin:2014irh}
including effects of dynamical
soft-gluon resolution scales~\cite{Hautmann:2019biw} and
angular ordering~\cite{Hautmann:2017fcj,Catani:1990rr,Marchesini:1987cf},
and the extension of the branching evolution to incorporate
TMD splitting functions~\cite{Hautmann:2022xuc,Catani:1994sq}. Besides, it will be interesting to compare the behavior
of the TMD merging algorithm with that of collinear merging algorithms
when the DY mass scale is varied.
\section*{Acknowledgements}
I thank Nestor Perez Armesto and the other organizers of the DIS-2022 workshop for the possibility to present these results at this interesting conference. Many thanks to Francesco Hautmann and Armando Bermudez Martinez for collaboration and interesting discussions.
\bibliographystyle{mybibstyle}
\section*{Acknowledgments}
\vspace{-1mm}
This work is supported by the National Science Foundation grants CSR-1526237, TWC-1564009 and BD Spokes 1636879.
\vspace{-2mm}
\bibliographystyle{ACM-Reference-Format}
\renewcommand*{\bibfont}{\footnotesize}
\section{Conclusion and Future Work}
\label{sec:Conclusion}
We presented Pible: a battery-free mote for perpetual indoor BLE applications built out of commercial off-the-shelf components.
Pible leverages ambient light and a power management algorithm to maximize the quality of service while sustaining perpetual operation. We tested Pible in a real-world environment, showing that it maintained continual operation for 15 days across 5 different lighting conditions. As future work, we will explore different prediction mechanisms.
\vspace{-2mm}
\section{Experimental Results}
\label{sec:Experimental}
We report experimental results of Pible motes deployed in a real building. To avoid cold-start operation (see Figure \ref{fig:CircuitMOS}), we use a 1F super-capacitor for all our experiments.
\begin{comment}
\subsection{Controlled Experiments}
\label{sec:algo}
\begin{figure*}[ht!]
\centering
\includegraphics[width=\linewidth]{./Graphs/Power-Management-2.eps}
\vspace{-5mm}
\caption{Power Management and Prediction Algorithm Result: QoS Adaptation during super-cap discharging and light-off (a), super-cap charging and light-on (b) and super-cap fully charged and light-on (c).}
\label{fig:PowerMan}
\vspace{-2mm}
\end{figure*}
Figure \ref{fig:PowerMan} shows three main moments in which the power management algorithm adapts the QoS based light availability and voltage level for the \textit{One Sensing} application. Dots in the figure represent the acquisition of a sensor sample by the Base Station.
Figure \ref{fig:PowerMan}(a) reports the discharge of the super-cap since light is off: the 6 samples are collected in an hour, meaning that the QoS is set to the minimum available to increase node-lifetime.
Figure \ref{fig:PowerMan}(b) shows Pible charge level with stable light availability. Right after the voltage increases, the sensor sampling are closer meaning due to increasing QoS. If the voltage remains constant, the algorithm reduces the QoS to recharge the super-cap.
Figure \ref{fig:PowerMan}(c) shows the behavior of Pible right before the super-cap reaches its maximum level. Light is on and stable in intensity. The sampling rate increases after the voltage reaches the maximum voltage allowed: at this point the power management increases the QoS state up to the maximum and decreases it only if voltage drops.
\end{comment}
\vspace{-2mm}
\subsection{Pible in the Wild}
\label{wild}
We first describe the specifics of the sensors used.
\textbf{1 Sensor}:
We configured Pible for sensing pressure.
\textbf{5 Sensors}:
We configured Pible for sensing 5 sensors: light, ambient-temperature, object-temperature, pressure and humidity.
\textbf{PIR Detection}:
To compare its performance, we placed a battery powered PIR node near Pible.
\textbf{Broadcasting BLE Advertisements}:
To measure the performance, an external Base Station queries the nodes for light, QoS State, and super-cap voltage levels every 10 minutes.
Table \ref{tab:light-energy} quantifies the average-QoS per day achieved by placing 20 Pible-nodes on 5 different indoor locations.
For each node, we collect data for > 15 days.
\begin{comment}
\begin{table*}[ht]
\centering
\footnotesize
\caption{Position-Application-QoS Trade-Off for 15 Pible-nodes}
\label{tab:light-energy}
\begin{tabular}{c|cc|cc|cc|cc}
\toprule
\small{} & \multicolumn{2}{c }{One Sensor} & \multicolumn{2}{c }{5 Sensors} & \multicolumn{2}{ c }{Advertising} & \multicolumn{2}{ c }{PIR} \\
\cline{2-9}
\small{} & \small{Avg-Light} & \small{Avg-QoS} & \small{Avg-Light} & \small{Avg-QoS} & \small{Avg-Light} & \small{Avg-QoS}& \small{Avg-Light} & \small{Events}\\
\small{} & \small{[lux]} & \small{[s]} & \small{[lux]} & \small{[s]}& \small{[lux]} & \small{[s]}& \small{[lux]} & \small{Detected[\%]}\\ \midrule
Door & \small{121$\pm 168$} & \small{337$\pm 241$}& \small{112$\pm 163$} & \small{564$\pm 110$} & \small{175$\pm 123$} & \small{1.9$\pm 0.36$} & \small{116$\pm 51$} & \small{71}\\ \hline
Center of Office & \small{246$\pm 293$} & \small{128$\pm 191$}& \small{227$\pm 326$} & \small{251$\pm 235$}& \small{312$\pm 352$} & \small{0.98$\pm 0.55$} & \small{395$\pm 210$} & \small{94}\\ \hline
Window & \small{7595$\pm 15862$} & \small{79$\pm 151$}& \small{9109$\pm 14079$} & \small{95$\pm 164$}& \small{6379$\pm 18853$} & \small{1.3$\pm 0.83$} & \small{8197$\pm 16578$} & \small{87} \\ \hline
Stairs Access & \small{235$\pm 56$} & \small{94$\pm 76$}& \small{238$\pm 58$} & \small{160$\pm 113$}& \small{241$\pm 55$} & \small{0.64$\pm 0.1$} & \small{150$\pm 3$} & \small{32} \\ \hline
Conference Room & \small{1085$\pm 1035$} & \small{75$\pm 124$}& \small{427$\pm 751$} & \small{286$\pm 253$} & \small{1456$\pm 1156$} & \small{0.85$\pm 0.8$} & \small{1390$\pm 1138$} & \small{97}\\
\bottomrule
\end{tabular}
\vspace{-2mm}
\end{table*}
\end{comment}
\begin{table}[ht]
\centering
\footnotesize
\vspace{-3mm}
\caption{Position-Application-QoS Trade-Off for 15 Pible-nodes. Data are averaged per day.}
\vspace{-3mm}
\label{tab:light-energy}
\begin{tabular}{c|cc|cc|cc|cc}
\toprule
\small{} & \multicolumn{2}{c }{1 Sensor} & \multicolumn{2}{c }{5 Sensors} & \multicolumn{2}{ c }{Advertising} & \multicolumn{2}{ c }{PIR} \\
\cline{2-9}
\small{} & \small{Light} & \small{QoS} & \small{Light} & \small{QoS} & \small{Light} & \small{QoS}& \small{Light} & \small{Event[\%]}\\
\small{} & \small{[lux]} & \small{[s]} & \small{[lux]} & \small{[s]}& \small{[lux]} & \small{[s]}& \small{[lux]} & \small{Detect}\\ \midrule
Door & \small{121} & \small{337}& \small{112} & \small{564} & \small{175} & \small{1.9} & \small{116} & \small{71}\\ \hline
Center & \small{246} & \small{128}& \small{227} & \small{251}& \small{312} & \small{0.9} & \small{395} & \small{94}\\
Office & \small{} & \small{}& \small{} & \small{}& \small{} & \small{} & \small{} & \small{}\\ \hline
window& \small{7k} & \small{79}& \small{9k} & \small{95}& \small{6k} & \small{1.3} & \small{8k} & \small{87} \\ \hline
Stairs & \small{235} & \small{94}& \small{238} & \small{160}& \small{241} & \small{0.6} & \small{} & \small{32} \\
Access & \small{} & \small{}& \small{} & \small{}& \small{} & \small{} & \small{} & \small{} \\ \hline
Confer & \small{1k} & \small{75}& \small{427} & \small{286} & \small{1k} & \small{0.8} & \small{1k} & \small{97}\\
Room & \small{} & \small{}& \small{} & \small{} & \small{} & \small{} & \small{} & \small{}\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{table}
The application executed and the placement of the node play a fundamental role in the QoS achieved.
Near a window, Pible sends 5-sensor data every 95 seconds on average and 1-sensor data every 79 seconds.
The \textit{Door} case has an average light of 112 lux and sends 5-sensor data every 564 s.
\begin{comment}
\begin{figure*}[ht]
\vspace{-3mm}
\centering
\includegraphics[width=\linewidth]{./Graphs/Real1-4-2.eps}
\vspace{-5mm}
\caption{Pible in the Wild for Pressure Sensing: (left) Pible-node in the center of a room, (right) Pible-node close to a door}
\label{fig:Real1}
\vspace{-5mm}
\end{figure*}
\end{comment}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{./Graphs/Real2-4-2.png}
\vspace{-5mm}
\caption{Pible in the Wild: (left) Advertising in a Stair Access Building Hall, (right) 5 Sensors in a Conference Room}
\label{fig:Real2}
\vspace{-5mm}
\end{figure}
\begin{comment}
Figure \ref{fig:Real1} shows the results of placing Pible in 2 different locations for Pressure Sensing: (left) in the middle of an office, and (right) close to a door. Each dot represents a communication-event.
Both the nodes are able to adapt the QoS and achieve continuous operations even though they are subject to very different light conditions: the node on the left has an average of light per day of 246 lux, while the one on the right has 121 lux.
\end{comment}
Figure \ref{fig:Real2}-left shows Pible used for BLE advertising and placed in a stairway where internal lights are always on for security reasons. Even though the average light per day is similar to other locations (e.g. \textit{center of office}), the QoS is better due to the consistent light availability. The power management algorithm adapts the QoS by fully charging the super-capacitor and then increasing the QoS up to the maximum allowed. This validates the usefulness of the lookup table.
Figure \ref{fig:Real2}-right shows results for Pible placed in a conference room while sensing 5 sensors. There are no windows in the room, and light availability is intermittent because it depends on the presence of people; we therefore placed the node close to an internal light (about 2500 lux), otherwise it would not have sustained continual operation.
Applications such as advertising and 1-sensor sensing perform well even in the presence of low light (Table \ref{tab:light-energy}).
For PIR event detection, Pible placed in the center of an office or in a conference room detects occupancy events with an accuracy of 94\% and 97\%, respectively. In these conditions, the light available is adequate to recharge the super-capacitors and most events are detected.
The stairs-access case achieves only 32\%, since too many people pass through the area and the time between events is not enough to recharge the super-capacitor.
\vspace{-2mm}
\subsection{Comparison with State of the Art Solutions}
\label{Piblecompare}
The QoS depends on the application: for periodic sensing (e.g. light) it is the sensing rate; for event-driven sensors (e.g. PIR) it is the success rate of event capture.
We compare the QoS achieved by Pible with battery-powered systems~\cite{paper:yuvraj_1} and a pure solar energy harvesting powered architecture (PEH) for buildings~\cite{paper:campbell_2}, since
they represent the extremes of our design space. We consider the location \textit{Center of Room} and emulate the performance of the baselines.
\begin{table}[ht]
\footnotesize
\centering
\vspace{-2mm}
\caption{Pible QoS Comparison with State-of-the-Art Solutions: Battery System and Pure Energy Harvesting Architecture}
\vspace{-2mm}
\label{tab:QoS_comparison}
\begin{tabular}{cccccc}
\toprule
System & QoS [s] & QoS [s] & PIR Events & QoS [s] & Working\\
& 1 Sensor & 5 Sensor & Detect [\%] & Advertise & Operations\\
\midrule
Battery & 60 & 60 & 100 & 0.1 to 0.9 & few months\\
Pure EH \cite{paper:campbell_2} & 139 & 309 & 96 & NA & when light-on\\
Pible & 128 & 251 & 94 & 0.6 to 0.8 & Perpetual\\
\bottomrule
\end{tabular}
\vspace{-2mm}
\end{table}
\vspace{-1mm}
\subsubsection{Quality of Service}
\label{Quality_of_service}
\textbf{Sensing 1 Sensor:} PEH~\cite{paper:campbell_2} uses a 1.56 mJ storage capacitor to
sense and send a sensor reading without any standard communication protocol.
Pible needs 3.20 mJ to sense and send a reading using BLE. Since Pible uses the light sensor to monitor the node, we simulate a PEH system that senses and sends 2 sensor readings (i.e. 6.40 mJ).
Furthermore, we compare Pible to a battery system from MicroDAQ that sends sensor data every minute using BLE. Results are reported in Table \ref{tab:QoS_comparison}.
The PEH sends data every 139 s on average and Pible every 128 s. The results are better for Pible since the PEH stops sending data when the light is off.
\textbf{Sensing 5 Sensors:}
In this case, the PEH needs to accumulate 8 mJ for the 5 sensors.
Table \ref{tab:QoS_comparison} shows that the PEH can send data every 309 s on average per day, while Pible does so every 251 s.
\textbf{PIR Detection:}
A PIR detection together with a BLE transmission of light and QoS requires 5.12 mJ.
Comparing the time-stamps and data reported by the battery-powered system, 96\% of the events were captured. The PEH provides good detection accuracy since most of the events happen when light is on.
\textbf{Occupancy Detection:} State-of-the-art BLE localization systems exploit advertisement rates between 0.1 s and 0.9 s \cite{link:advrate}.
Table \ref{tab:light-energy} shows that the average QoS per day spans between 0.6 and 0.9 when Pible is placed in a center of office, window or conference room.
We posit that occupancy based triggers can tolerate latencies of up to 1 second and the Pible QoS is acceptable for these applications.
\vspace{-2mm}
\subsection{Limitations}
\label{limitations}
\vspace{-1mm}
\subsubsection{Manual creation of lookup table and thresholds:} We manually write the QoS for each sensing application given the super-capacitor voltage level (Table \ref{tab:QoS}).
As a future work, we will explore machine learning methods to automatically configure the sensors to different lighting and application demands.
\vspace{-2mm}
\subsubsection{Operations with no light}
With no light and using the lowest QoS, Pible maintains operation for 19 hours when advertising, 27 hours for the 5-sensor application and 31 hours for the 1-sensor application. During our 15-day experiment, we always achieved perpetual operation since the presence of people was constant.
If light is absent for longer than these times, the node stops working. This can be avoided by (i) moving Pible closer to a light source, or (ii) using a bigger super-capacitor or a bigger solar panel.
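A back-of-the-envelope sketch of the energy budget behind these endurance figures follows. It assumes the 1 F super-capacitor is used between 2.1 V and 3.6 V (the charging window quoted for the lookup-table design; treated here as an assumption) and ignores regulator losses and leakage.

```python
# Rough energy-budget sketch for a super-capacitor powered node.
C = 1.0        # super-capacitor size [F] (value used in the experiments)
V_MAX = 3.6    # assumed fully-charged voltage [V]
V_MIN = 2.1    # assumed cutoff voltage [V]

def usable_energy(c=C, v_max=V_MAX, v_min=V_MIN):
    """Energy available between the two voltage levels:
    E = C (Vmax^2 - Vmin^2) / 2, in joules."""
    return 0.5 * c * (v_max**2 - v_min**2)

def lifetime_hours(avg_power_w):
    """Hours of operation sustained by the usable energy at a given
    average power draw, with no harvesting."""
    return usable_energy() / avg_power_w / 3600.0

E = usable_energy()
print(f"usable energy: {E:.2f} J")
# average power implied by the 31-hour, 1-sensor endurance figure
print(f"implied avg power for 31 h: {E / (31 * 3600) * 1e6:.0f} uW")
```

For the stated values this gives roughly 4.3 J of usable energy, so the reported 19-31 hour dark-time endurance corresponds to average power draws in the tens of microwatts, consistent with the very low duty cycles of the lowest QoS states.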
\begin{comment}
\vspace{-2mm}
\subsection{Guidelines for Practitioners}
\label{practitioners}
Table \ref{tab:prat1}
shows the time in hours needed to charge different super-capacitor sizes from 2.1 to 3.6V based on different light conditions, for different applications and with different solar panels. We consider for the periodical sensing application reported in \ref{tab:prat1} the sensing and transmitting of a data every 10 minutes (i.e. the lowest QoS), while for occupancy detection applications we considered an average detection of 30 times per day for the PIR motion sensing and an advertisement of 1s for BLE beaconing.
\begin{table}[ht]
\footnotesize
\centering
\vspace{-4mm}
\caption{Hours needed to recharge different super-capacitor sizes while operating Sensing Applications with different light conditions and different solar panels}
\vspace{-4mm}
\label{tab:prat1}
\begin{tabular}{c|c|c|c|c|c|c|c|c|}
\toprule
Solar Panel& \multicolumn{2}{ c |}{35 x 13.9}& \multicolumn{2}{ c |}{41.6 x 26.3} & \multicolumn{2}{ c |}{35 x 13.9}& \multicolumn{2}{ c |}{41.6 x 26.3} \\
Size [mm]& \multicolumn{2}{ c |}{}& \multicolumn{2}{ c |}{} & \multicolumn{2}{ c |}{}& \multicolumn{2}{ c |}{} \\
\midrule
SuperCap & 0.5F & 1F & 1F & 1.5F & 0.5F & 1F & 1F & 1.5F \\
\midrule
Luminance & \multicolumn{4}{c|}{Sensing One Sensor} & \multicolumn{4}{c|}{Sensing 5 Sensors} \\
lux& \multicolumn{4}{c|}{every 10 min} & \multicolumn{4}{c|}{every 10 min} \\
\midrule
200 & NA & NA & 47 & 70 & NA & NA & 83 & 124 \\
400 & 36 & 73 & 16 & 24 & 112 & 225 & 19 & 29 \\
600 & 17 & 33 & 10 & 15 & 25 & 49 & 11 & 16 \\
1000 & 8.2 & 16 & 5.6 & 8.4 & 9.7 & 19 & 5.9 & 8.9 \\
2000 & 4.3 & 8.6 & 3.2 & 4.8 & 4.7 & 9.3 & 3.3 & 4.9 \\
6000 & 1.3 & 2.7 & 1.0 & 1.6 & 1.4 & 2.7 & 1.1 & 1.6 \\
\bottomrule
\end{tabular}
\vspace{-4mm}
\end{table}
We observe that by using a solar panel of size 41.6mm x 26.3mm, we can recharge both the 1F and 1.5F super-capacitors in all the lighting conditions. However, to achieve perpetual operations we need to place the node in a spot in which it is possible to charge the super-capacitor during the time of a day. In the same table we can also notice that a smaller solar panel (i.e. 35mm x 13.9mm) subject to a low light intensity such as 200 lux, will never support continuous operations since the energy required for the node to operate is bigger than the energy that the solar panel can gather.
\begin{table}[ht]
\centering
\caption{Hours needed to recharge different super-capacitor sizes while operating Occupancy Applications with different light conditions and different solar panels}
\vspace{-3mm}
\label{tab:prat2}
\begin{tabular}{c|c|c|c|c|c|c|c|c|}
\toprule
Solar Panel& \multicolumn{2}{ c |}{35 x 13.9}& \multicolumn{2}{ c |}{41.6 x 26.3} & \multicolumn{2}{ c |}{35 x 13.9}& \multicolumn{2}{ c |}{41.6 x 26.3} \\
Size [mm]& \multicolumn{2}{ c |}{}& \multicolumn{2}{ c |}{} & \multicolumn{2}{ c |}{}& \multicolumn{2}{ c |}{} \\
\midrule
SuperCap & 0.5F & 1F & 1F & 1.5F & 0.5F & 1F & 1F & 1.5F \\
\midrule
Luminance & \multicolumn{4}{c|}{PIR Detected} & \multicolumn{4}{c|}{BLE Advertising} \\
lux& \multicolumn{4}{c|}{30 times in a day } & \multicolumn{4}{c|}{every 1 sec} \\
\midrule
200 & NA & NA & 49 & 73 & NA & NA & NA & NA \\
400 & 39 & 78 & 17 & 25 & NA & NA & 90 & 134 \\
600 & 18 & 35 & 10 & 15 & NA & NA & 20 & 30 \\
1000 & 8.3 & 17 & 5.6 & 8.5 & 43 & 85 & 7.8 & 12 \\
2000 & 4.3 & 8.7 & 3.2 & 4.8 & 7.5 & 15 & 3.8 & 5.6 \\
6000 & 1.3 & 2.7 & 1.0 & 1.6 & 1.5 & 3.1 & 1.1 & 1.6 \\
\bottomrule
\end{tabular}
\vspace{-1em}
\end{table}
Similar design decisions can be made with Table \ref{tab:prat2} that reports the time in hours needed to recharge different super-capacitor sizes by using different intensity of light and different solar panels while operating occupancy applications. For the BLE Advertising case, we can notice that several combinations of solar panels and light conditions are not possible. In this case, one solution could be to use multiple solar panels to increase the energy intake.
\end{comment}
\vspace{-2mm}
\section{Power Management Algorithm}
\label{sec:Algorithm}
A mote with a 1F super-capacitor
that advertises every 100ms
lasts 1.9 hours without energy harvesting. Our algorithm dynamically changes the advertising rate depending on energy availability to increase lifetime.
It uses a simple sensor-specific lookup table and a lighting-availability prediction to set the sensing rate. All algorithmic decisions are made on Pible itself and are the following:
\begin{sloppypar}
\textbf{Setting the Sensing Quality}:
We use the super-capacitor voltage level for a coarse adaptation of the QoS. We divide the usable voltage range (from 3.6V to 2.1V) into 7 states to keep MCU memory requirements low.
Table \ref{tab:QoS} shows the relation between the voltage level and QoS selected for the sensors in Pible.
The different sensing periods are manually assigned to different QoS levels based on the power measurements in Table \ref{tab:Energy} and empirical observations and requirements given in indoor sensing literature \cite{paper:campbell_2}.
Event-driven sensors need to operate continuously to send a packet as soon as an event occurs. To save power, we turn off the sensors for a fixed period right after an event occurs. The longer the switch-off period, the more likely an event is missed, but the more power is saved.
For advertising applications, BLE indoor localization systems exploit advertisement rates between 0.1s and 0.9s~\cite{link:advrate}. We assign QoS levels
to best meet this requirement.
\end{sloppypar}
\textbf{Light Intensity Prediction}:
Prediction of light intensity is used to refine the next QoS. The system stores and compares the last 5 light intensity readings: if the reading is close to 0 (i.e. light off) or decreasing, the algorithm decreases the QoS state, while if the light intensity is increasing, the QoS state is increased.
\textbf{Super-Cap Voltage Level Prediction}:
We store the last 5 voltage values of the super-capacitor; if the voltage level decreases or remains stable over time, the algorithm lowers the QoS by one level, while if it is increasing, the system raises the QoS.
5 levels of voltage and light intensity help us capture the short term trends while keeping the MCU compute and memory requirements low.
Between two sensing intervals, the system goes to sleep.
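The adaptation loop above can be sketched as follows. This is a minimal Python sketch: the function names, the history handling, and the exact trend test are our own illustrative choices; only the voltage bands and the raise/lower rules come from Table \ref{tab:QoS} and the text.

```python
# Hypothetical sketch of Pible's QoS adaptation loop (the firmware itself is
# not reproduced here). It keeps the last 5 light and voltage samples and
# nudges the coarse QoS state derived from the super-capacitor voltage.
from collections import deque

# Voltage bands from Table "QoS" (state 7 = highest quality of service).
VOLTAGE_BANDS = [(3.4, 7), (3.2, 6), (3.0, 5), (2.8, 4), (2.6, 3), (2.4, 2), (2.1, 1)]

def base_qos(voltage):
    """Coarse QoS state from the super-capacitor voltage."""
    for lower, state in VOLTAGE_BANDS:
        if voltage >= lower:
            return state
    return 1  # below 2.1 V the node would not be powered anyway

def trend(samples):
    """Sign of the short-term trend over the 5 stored samples."""
    return samples[-1] - samples[0]

def next_qos(voltage, light_hist, volt_hist):
    """Refine the coarse QoS with the light and voltage trend predictions."""
    qos = base_qos(voltage)
    # Light prediction: light off or decreasing -> lower QoS; increasing -> raise it.
    if light_hist[-1] == 0 or trend(light_hist) < 0:
        qos -= 1
    elif trend(light_hist) > 0:
        qos += 1
    # Voltage prediction: flat or falling -> lower QoS; rising -> raise it.
    if trend(volt_hist) <= 0:
        qos -= 1
    else:
        qos += 1
    return max(1, min(7, qos))

light_hist = deque([200, 180, 150, 120, 100], maxlen=5)  # lux, decreasing
volt_hist = deque([3.3, 3.3, 3.25, 3.2, 3.2], maxlen=5)  # V, falling
print(next_qos(3.2, light_hist, volt_hist))  # base state 6, lowered twice -> 4
```

Between wake-ups the node sleeps, so the whole decision costs only a handful of comparisons and two 5-entry buffers.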
\begin{table}
\footnotesize
\centering
\caption{Relation between Voltage Level and QoS to enable applications requiring sensing and user-position.}
\label{tab:QoS}
\vspace{-4mm}
\begin{tabular}{ccccc}
\toprule
QoS & Voltage & QoS & QoS-PIR & QoS\\
State & [V] & Sensing [s] & Detection [s] & Advertising [s] \\
\midrule
7 & 3.6 - 3.4& 20 & 10 & 0.1\\
6 & 3.4 - 3.2& 40 & 20 &0.2\\
5 & 3.2 - 3.0& 60 & 30 &0.4\\
4 & 3.0 - 2.8& 120 & 60& 0.64\\
3 & 2.8 - 2.6& 240 &120& 0.9\\
2 & 2.6 - 2.4& 300 &300& 2\\
1 & 2.4 - 2.1& 600 &600& 5\\
\bottomrule
\vspace{-7mm}
\end{tabular}
\end{table}
\begin{comment}
\begin{algorithm}[ht]
\caption{Power Management and Prediction Algorithm (Put Pseudo-code)}
\label{alg:one}
\SetAlgoNoLine
\begin{algorithmic}[1]
\STATE \textbf{Input:} vector$<int>$Light(5); vector$<int>$Voltage(5), voltage = 0, light=0, index = 0
\STATE \textbf{Output:} Next-QoS
\WHILE{true}
\STATE voltage = read SC Voltage
\STATE light = read Light Intensity
\IF{(index == 0) \text{or} (voltage = max)}
\STATE Next-QoS = based on voltage and Table~\ref{tab:QoS}
\STATE index++
\ENDIF
\STATE update Light[] with light
\STATE update Voltage[] with voltage
\IF{(light == 0) \text{or} (Light trend $<$ 0)}
\STATE - -Next-QoS
\ELSE
\STATE ++ Next-QoS
\ENDIF
\IF{(Voltage trend $=<$ 0) \text{and} (voltage $\neq$ max)}
\STATE - -Next-QoS
\ELSE
\STATE ++Next-QoS
\ENDIF
\STATE update QoS with the calculated Next-QoS
\STATE sleep until next wake up event\;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\end{comment}
\vspace{-2mm}
\section{Design-Space and Architecture}
\label{sec:Architecture}
We select a common set of applications
in smart buildings
and assess their power budgets. We select commercial off-the-shelf components to support their QoS requirements while ensuring perpetual operation in typical indoor environments without a battery.
\vspace{-2mm}
\subsection{Indoor Monitoring Applications}
We categorize indoor applications as periodic or event-driven:
\textbf{Sensing Environmental Conditions:}
\textit{(i) 1 Sensor}: We test whether Pible can operate perpetually for applications with a minimal energy budget, such as reading a single sensor. Results
can be extended to sensors with a similar power budget.
\textit{(ii) Multiple Sensors}:
We extend the power budget to motes that monitor several sensors at once.
\textbf{Occupancy Detection:}
\textit{(i) PIR Motion Sensor}, \textit{Door Sensor}~\cite{paper:yuvraj_1}.
\textit{(ii) Bluetooth Low Energy (BLE) Beacons}~\cite{paper:Cosero}.
Table \ref{tab:Energy} reports the power consumption of Pible's main components and operations using a super-capacitor storage element charged at 3V. For the sensing operations (e.g. Read-Temp), the reported power includes both the reading and the BLE transmission.
\vspace{-2mm}
\begin{table}[ht]
\caption{Power consumption of Pible key operations executed with a super-capacitor charged at 3V. The values reported are averaged on a one minute execution.}
\centering
\vspace{-3mm}
\footnotesize
\label{tab:Energy}
\begin{tabular}{c c c c}
\toprule
Operation & Power[\si\micro W] & Operation & Power[\si\micro W]\\
\midrule
\small{MCU-Sleep} & \small{19} & \small{PIR Detection} & \small{32} \\
\small{Read-Hum} & \small{51} & \small{Advertising-5s} & \small{69} \\
\small{Read-Temp} & \small{54} & \small{Advertising-2s} & \small{86} \\
\small{Read-Bar} & \small{54} & \small{Advertising-1s} & \small{106}\\
\small{Read-Light} & \small{47} & \small{Advertising-500ms} & \small{171}\\
\small{MCU+PIR Sleep} & \small{22}& \small{Advertising-100ms} & \small{648}\\
\bottomrule
\end{tabular}
\vspace{-5mm}
\end{table}
\vspace{-1mm}
\subsection{Pible-Architecture}
We use a general energy harvesting architecture \cite{paper:EH_review, Scaling_my}: an energy harvester transfers power to a storage element through an energy management board (EMB). Once the energy stored reaches a usable level, the EMB powers the micro-controller that starts its operations.
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./Pictures/Architecture.eps}
\caption{Pible prototype's Block Diagram}
\vspace{-5mm}
\label{fig:Components}
\end{figure}
\end{comment}
\vspace{-2mm}
\subsubsection{Platform System on Chip, Antenna and Sensors}
We select the TI CC2650 chip that supports multiple communication protocols (e.g. 6LoWPAN, BLE) and consumes 1 \si\micro A in standby mode.
We equip our board with temperature, light, humidity, pressure, reed switch, accelerometer, gyroscope, and a PIR motion sensor.
\vspace{-2mm}
\subsubsection{Energy Storage}
\label{battery}
Batteries have a short cycle-life of about 1000 recharges. To increase lifetime, we adopt super-capacitors, which support up to 1 million recharges~\cite{link:supercap_2}. However,
choosing a super-capacitor is not trivial since:
\textit{(i)} Energy stored
drops linearly on discharge~\cite{link:supercap_2}.
\textit{(ii)} At a constant charging current, the charging time is proportional to the capacitance.
\textit{(iii)} Large super-capacitors increase node dimensions leading to packaging and aesthetic issues.
\textit{(iv)} The leakage current increases with size.
We tested 3 super-capacitors with capacitance of 0.22F, 0.44F and 1F at 3.6V.
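As a sanity check on these choices (our own back-of-the-envelope arithmetic, assuming the 3.6V--2.1V usable window used throughout the paper and the 648~\si\micro W advertising power from Table \ref{tab:Energy}), the usable energy of the 1F element is

```latex
E_{\rm use} = \tfrac{1}{2} C \big( V_{\max}^2 - V_{\min}^2 \big)
            = \tfrac{1}{2}\,(1\,\mathrm{F})\,(3.6^2 - 2.1^2)\,\mathrm{V}^2
            \approx 4.3\,\mathrm{J},
\qquad
t \approx \frac{4.3\,\mathrm{J}}{648\,\mu\mathrm{W}} \approx 1.8\,\mathrm{h},
```

which is consistent with the roughly 1.9 hours of battery-free 100ms advertising quoted in Section \ref{sec:Algorithm}.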
\vspace{-2mm}
\subsubsection{Energy Harvester}
We use solar light since it has a high power density
compared with other energy sources~\cite{paper:EH_review} and it is available in buildings. To match our worst-case scenario (Table \ref{tab:Energy}), we use the indoor solar panel AM-1454, which harvests
71 \si\micro W at 300lux.
\vspace{-2mm}
\subsubsection{Energy Management Board (EMB)}
For our design, we select the BQ25570 from TI since it includes two programmable DC/DC converters: (i) an ultra-low-power boost converter (V$_{BAT}$)
and (ii) a nano-power buck converter (V$_{buck}$) that
can support up to 110mA output current. The BQ25570 has a programmable power good output signal (V$_{BAT\_ok}$) that indicates when the super-capacitor reaches a user-set voltage level. We set this signal to 2.1V.
The V$_{BAT}$ converter
is highly efficient when the storage element voltage is above 1.8V, but inefficient below this threshold (the `cold-start' regime). Cold-start can be frequent when energy availability is low, and can occur when adding or moving sensor nodes.
In Figure \ref{fig:ChargeColdStart}-left, we show the charging time of the 3 super-capacitor sizes
using V$_{BAT}$ under 750 lux. It takes 1 day for the 220mF super-capacitor to charge from 0 to 3.6V,
and almost the entire charging time is spent exiting cold-start operation (0 to 1.6V).
The second DC-DC converter (V$_{buck}$) charges the super-capacitor at a much faster rate and the 1F element exits cold-start operations after 2.2 hours at 750 lux (Figure \ref{fig:ChargeColdStart}-right).
However, V$_{buck}$ is more energy consuming after cold-start operations are over. Hence, we switch between the two charging modes using the circuit in Figure \ref{fig:CircuitMOS}-right.
\begin{figure}[ht]
\vspace{-2mm}
\centering
\includegraphics[width=\linewidth]{./Graphs/Charging-Cold-Buck-Final-8.pdf}
\vspace{-3mm}
\caption{Charging different Super-Capacitors Size: Left- 750 lux with VBAT, Right: 750 lux with VBuck}
\label{fig:ChargeColdStart}
\vspace{-2mm}
\end{figure}
\begin{figure}[ht]
\vspace{-2mm}
\centering
\includegraphics[width=0.7\linewidth]{./Graphs/CircuitMOS_final-2.png}
\vspace{-3mm}
\caption{Our circuitry to switch between V$_{BAT}$ and V$_{Buck}$.}
\label{fig:CircuitMOS}
\vspace{-3mm}
\end{figure}
When the super-capacitor voltage level is low, the EMB raises V$_{buck}$ and V$_{BAT}$. Since the V$_{BAT\_ok}$ signal is low, the inverter (P-2 and N-3) blocks P-1 while N-2 is conducting and charges the super-capacitor through V$_{buck}$. With a gate-source threshold voltage of 1.7V, N-2 charges the super-capacitor up to 2.1V when V$_{Buck}$ is set to 3.8V.
At this point, the V$_{BAT\_ok}$ signal turns on and connects V$_{BAT}$ to the storage element through P-1.
Finally, N-1 powers the MCU once the super-capacitor reaches 2.1V (Figure~\ref{fig:CircuitMOS}-left).
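The hand-off can be summarized as follows. This is an illustrative Python sketch of the behavior described above; the function and constant names are ours, and in Pible the decision is made in analog hardware by P-1, N-2 and the inverter, not in software.

```python
# Illustrative state sketch of the V_buck / V_BAT hand-off (thresholds from
# the text; "P-1"/"N-2" refer to the circuit in Figure "CircuitMOS").
VBAT_OK_THRESHOLD = 2.1  # V, user-programmed "power good" level

def charging_path(supercap_voltage):
    """Select which converter charges the super-capacitor, and whether the MCU is powered."""
    vbat_ok = supercap_voltage >= VBAT_OK_THRESHOLD
    if not vbat_ok:
        # V_BAT_ok low: the inverter blocks P-1, N-2 conducts -> fast V_buck charging.
        path = "V_buck"
    else:
        # V_BAT_ok high: P-1 connects the efficient V_BAT boost converter.
        path = "V_BAT"
    mcu_powered = vbat_ok  # N-1 powers the MCU once the cap reaches 2.1 V
    return path, mcu_powered

print(charging_path(1.5))  # ('V_buck', False): still in cold start
print(charging_path(3.0))  # ('V_BAT', True)
```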
\vspace{-1mm}
\subsubsection{Wireless Communication Protocol}
We use Bluetooth Low Energy (BLE) as it offers advantages for indoor environments~\cite{paper:Cosero}:
\textit{(i)} BLE is more energy efficient than other technologies (e.g. ZigBee) at transmitting small data packets called advertisements.
\textit{(ii)} Advertisements can be used to perform indoor occupancy detection~\cite{paper:Cosero}.
\vspace{-2mm}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{./Pictures/Pible.jpg}
\vspace{-2mm}
\caption{Pible: a Perpetual Indoor BLE Mote}
\label{fig:Pible}
\vspace{-2mm}
\end{figure}
\vspace{-1mm}
\subsection{Wireless Sensor Network Architecture}
Pible-nodes send a data packet to the closest Base Station (BS) that stores and sends the data to the cloud for post-processing.
We used a Raspberry PI equipped with a BLE USB dongle as a Base Station. To monitor the nodes' status, each Pible node sends to the BS information such as sensor data, QoS state, and voltage level.
\vspace{-2mm}
\section{Background and Related Work}
\label{sub:relatedwork}
Existing works in energy harvesting can be categorized into:
\textit{(i) Application Specific:}
Application specific indoor systems have been proposed that use energy harvesting~\cite{paper:DoubleDip}.
The company EnOcean~\cite{link:enocean} produces battery-less systems
but their design is application specific. We use a generic design suitable for most applications.
\textit{ (ii) Energy Harvesting + Rechargeable Battery}: energy harvesting can be used to extend batteries lifetime~\cite{paper:DoubleDip}.
However, the life of rechargeable batteries is limited to a few hundred cycles~\cite{link:supercap_2}.
Hence, recent works and our Pible node exploit super-capacitors as they can support up to a million charging cycles.
\textit{(iii) Communication Protocol}:
Prior energy harvesting works do not support standard protocols since these require more energy~\cite{paper:campbell_2}. To facilitate integration with existing technologies, we set this as a requirement.
Works in \cite{paper:EH-ZigBee-6406192,link:enocean,paper:SorberFlickerHester:2017:FRP:3131672.3131674} adopt standard protocols but either lack perpetual operation or are application specific.
\textit{(iv) Power Management:} The work in \cite{paper:Mani-power-management} adopts a power management scheme
to achieve energy-neutral operation. We use a simple application-specific look-up table, and we show that Pible achieves perpetual operation under varying lighting conditions.
\textit{(v) Intermittent:} Campbell et al.~\cite{paper:campbell_1, paper:campbell_2} designed an indoor sensor-node
that stores only the amount of energy needed to read and transmit a single data packet: the system is continuously working when light is available
but it stops during dark periods.
Results show that door-open detection achieves only 66\% accuracy.
Table \ref{tab:Introduction} shows the gaps between related work and battery-powered motes, and compares them with Pible.
We categorize them into 3 buckets: (i) battery powered systems, (ii) battery and energy harvesting systems and (iii) systems using only harvested energy.
Battery-powered systems achieve high application QoS at the expense of periodic battery replacement. Adding an energy harvesting solution to a battery-powered device increases the lifetime, but the batteries still need to be replaced. Prior energy harvesting nodes promise no battery replacement but compromise on QoS, do not support standard protocols, or are application specific. Flicker~\cite{paper:SorberFlickerHester:2017:FRP:3131672.3131674} does provide these functionalities, but has not been evaluated indoors, where light availability is limited.
\vspace{-3mm}
\begin{table}[ht]
\centering
\footnotesize
\caption{Pible comparison with State of the Art Solutions}
\vspace{-3mm}
\label{tab:Introduction}
\begin{tabular}{c |c |c|c|c|c}
\toprule
Platform & Bat/ & Replace & Quality & Standard & Application\\
& Har & Battery & of & Commun & \\
& Power & & Service & Protocol & \\
\midrule
Synergy\cite{paper:yuvraj_1} & Bat & No & $\sim$GT & Zigbee & Occupancy\\
\midrule
Trinity \cite{paper:trinity} & Bat-Har & No & $\sim$GT & Zigbee & Air-Flow\\
DoubleDip\cite{paper:DoubleDip} & Bat-Har & No & 98-65\% & No & Water-Flow\\
\midrule
Enocean \cite{link:enocean} & Har & Yes & NA & BLE & Specific\\
Buzz \cite{paper:campbell_2} & Har & Yes & 66\% & No & Door, Light\\
Flicker\cite{paper:SorberFlickerHester:2017:FRP:3131672.3131674} & Har & Yes & NA & BLE & Periodic Sense\\
\midrule
Pible & Har & Yes & $\sim$GT & BLE & Sense/Event\\
& & & & & Occupancy\\
\midrule
\multicolumn{6}{ c }{GT = Ground Truth; Bat = Battery; Har = Harvesting}\\\hline
\end{tabular}
\end{table}
\vspace{-5mm}
\section{Introduction}
Buildings are integrated with thousands of sensors: temperature sensors provide feedback to HVAC systems, smoke sensors provide fire safety, etc.
These sensing systems are designed around wired communication and power at building-construction time.
Even a minor change requires domain expertise in the building's wiring infrastructure
and can be prohibitively expensive, e.g.,
\$2500 \cite{link:retrofitting-cost}.
Wireless sensors have emerged as the answer to this
problem. With low-power
communication protocols (e.g. ZigBee, 6LoWPAN),
wireless sensors can be deployed with a multi-year battery lifetime
and be used for applications such as
occupancy based control~\cite{paper:yuvraj_1}.
But these nodes are powered by batteries that require periodic manual replacement.
As we scale to large deployments, the manual replacement of batteries becomes a bottleneck.
Battery replacements can be mitigated using energy harvesting, e.g., DoubleDip measures water flow by powering itself using temperature difference~\cite{paper:DoubleDip}.
However, few commercial devices use indoor energy harvesting solutions. We highlight 3 limitations that inhibit adoption: (i) they are designed for specific applications, (ii) they do not support standard protocols, and (iii) the application quality of service (QoS) is inadequate.
In this paper, we analyze the feasibility of overcoming these limitations using commercial off the shelf components. We explore the design space of a generic energy harvesting sensor node for indoor monitoring applications with the objective of perpetual operations \cite{paper:campbell_1}.
We designed and built \emph{Pible}, a Perpetual Indoor BLE sensor node.
We show trade-offs between QoS, lifetime and harvested energy that enables our prototype sensor node to work in different indoor lighting conditions.
We introduce hardware solutions to increase charging efficiency and overcome cold-start operations that limit
usability. Finally, we propose a local sensor-node power management solution that maximizes the application-QoS and node-lifetime.
We evaluate Pible by deploying twenty nodes under five different lighting conditions, for a general set of building sensing applications covering periodic sensor measurements, e.g. temperature, and event-driven sensing, e.g. PIR. In a 15-day experiment we demonstrate continuous operation for all applications under every light condition tested. Results show that Pible can broadcast sensor data with an average period of 94 seconds at an average luminance of 235 lux per day.
\vspace{-2mm}
\section*{Nomenclature}
\addcontentsline{toc}{section}{Nomenclature}
\subsection{Set and Indices}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}]
\raggedright
\item[$\mathcal{I}, i$] Set and index of distribution bus, $i \in \mathcal{I}$
\item[$\ell =ij$] Distribution line connecting bus $i$ to bus $j$
\item[$\mathcal{L}$] Set of distribution lines, $\ell =ij \in \mathcal{L}$
\item[$\mathcal{SW}$] Set of lines which have a tie switch
\end{IEEEdescription}
\subsection{Parameters}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}]
\raggedright
\item[$r_\ell$,$x_\ell$] Resistance/reactance of line $\ell$
\item[$\overline{V}_{i}$, $\underline{V}_{i}$] Maximum/minimum voltage magnitude at bus $i$
\item[$P_{i}^d, Q_{i}^d$] Real/reactive power demands at bus $i$
\item[$VOLL$] Value of lost load of customer
\item[$\delta_{\ell}$] Binary parameter, set to 0 if line $\ell$ is on outage, otherwise 1
\end{IEEEdescription}
\subsection{Variables}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,V_3$}]
\raggedright
\item[$P_{ij}$] Real power flow from bus $i$ to bus $j$
\item[$Q_{ij}$] Reactive power flow from bus $i$ to bus $j$
\item[$P^{\sf grid}_{}$] Total real power imported from the main grid
\item[$Q^{\sf grid}_{}$] Total reactive power imported from the main grid
\item[$\nu_{i}$] Square voltage at node $i$
\item[$\nu_{i}^\ell$] Auxiliary variable associated with squared voltage at node $i$ on line $\ell$
\item[$I_{\ell}^{sq}$] Square current on line $\ell$
\item[$\alpha_{\ell}$] Binary variable representing the switching action, set to 1 if line $\ell$ is closed, otherwise 0
\item[$\beta_{ij}$] Variable representing the parent-child relationship between node $i$ and node $j$, set to 1 if node $i$ is a child of node $j$, otherwise 0
\item[$P_{i}^{ds}$] Real power load demand served at bus $i$
\item[$P_{i}^c$] Curtailed real power load demand at bus $i$
\item[$Q_{i}^{ds}$] Reactive power load demand served at bus $i$
\item[$Q_{i}^c$] Curtailed reactive power load demand at bus $i$
\item[$R_{\ell},T_{\ell}$] Auxiliary variables associated with line $\ell$ to formulate the conic constraint
\end{IEEEdescription}
}
\section{Introduction}
There have been tremendous advances in quantum computing in recent years.
Quantum computers have grown beyond 100 qubits, promising significant computational capabilities compared with classical computers.
There is also significant work on mathematical foundations and algorithm design leveraging quantum processors.
Applications of quantum computing to power systems have also been investigated recently, e.g., solving DC power flow \cite{eskandarpour2021experimental} and fast-decoupled AC power flow equations \cite{feng2021quantum}.
However, since these problems can be tackled in polynomial time with high accuracy on classical computers, quantum computing can there only be expected to improve the computational time, without concerns about solution quality.
Many problems in power grids are NP-hard combinatorial optimization problems, where both computational time and solution accuracy are of concern.
There have been several efforts to extend quantum computing algorithms to NP-hard problems, such as combinatorial optimization, utilizing qubits' representation of binary variables \cite{qaoa, qaoa_performance}.
While there are some promising results, it is worth noting that the mathematical foundation of these algorithms is heuristic.
Therefore, examining the performance of these algorithms on power system applications is needed, especially to inform quantum researchers on specific solution designs for power grid.
To this end, this paper constructs combinatorial optimization problems arising from distribution network reconfiguration.
These include two mixed-integer conic programs, extended from the bus injection and branch flow models \cite{Jabr2012reconfi, Lowc2013convex}, for fault isolation and network reconfiguration.
Our models, which effectively capture the line flows of connected and disconnected lines without the Big-M formulation used in \cite{ShanShan2018-NR, Mehdi2020-NR}, can serve as a benchmark for evaluating existing quantum combinatorial algorithms \cite{qaoa}.
The paper is organized as follows.
Section II presents and compares two
equivalent formulations of network reconfiguration, which are mixed integer conic programs.
Section III introduces basics
of quantum physics and state-of-the-art tools for quantum
optimization. Then, we present how we can tailor quantum
computing algorithms to solve the network reconfiguration
problem.
Numerical results are presented in Section IV; they indicate that the heuristic nature of existing quantum algorithms for combinatorial optimization might need to be considered when applying them to power system domains.
Section V concludes the paper.
\section{Network Reconfiguration}
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]{images/Fault_Isolation_Restoration.jpg}
\vspace{-0.2pt}
\caption{Fault Isolation and Restoration Procedure}
\label{fig: FISR}
\end{figure}
A radial power distribution network can be represented as a connected graph with no loops $\mathcal{G} = \langle \mathcal{I}, \mathcal{L} \rangle$,
where $\mathcal{I}$ and $\mathcal{L}$ are sets of distribution nodes and lines.
$\mathcal{L}$ includes normally-closed lines and tie switches (normally-open).
Once a fault occurs in a line, the normally-closed sectionalizing switches in the faulted line will open to isolate the fault. Then, normally-open tie switches close to re-energize the healthy parts of the network.
This reconfiguration procedure as shown in Fig. \ref{fig: FISR} is to minimize out-of-service loads respecting the network radiality structure.
Let $\delta_\ell$ represent status of the normally-closed lines, either connected or disconnected due to fault and $\alpha_\ell$ represent the switching status (close or open) of lines with tie-switches.
If the line $\ell$ is connected (i.e., there is no fault, or its tie switch is closed), the line power flows are:
\begin{subequations}
\label{linepowerflow}
\begin{eqnarray}
P_{ij}= \frac{r_{ij}}{(r_{ij}^2+x_{ij}^2)}V_i^2 - \frac{r_{ij}}{(r_{ij}^2+x_{ij}^2)} V_i V_j \cos\theta_{ij} \label{Rab_Pij_1} \notag \\
+ \frac{x_{ij}}{(r_{ij}^2+x_{ij}^2)} V_i V_j \sin\theta_{ij}\\
Q_{ij}= \frac{x_{ij}}{(r_{ij}^2+x_{ij}^2)}V_i^2 - \frac{x_{ij}}{(r_{ij}^2+x_{ij}^2)} V_i V_j \cos\theta_{ij} \label{Rab_Qij_1} \notag \\
- \frac{r_{ij}}{(r_{ij}^2+x_{ij}^2)} V_i V_j \sin\theta_{ij}, \\
~~\textrm{if}~~ \delta_\ell =1~ \textrm{and/or} ~ \alpha_\ell =1 \nonumber
\end{eqnarray}
where $P_{ij}$ and $Q_{ij}$ are the real and reactive powers leaving node $i$ toward node $j$ over the line $\ell = ij$ with impedance $z_{ij} = r_{ij} + j x_{ij}$, as shown in Figure \ref{fig: Impedance}.
Also, ${\mathop V\limits^\to}_i= V_{i} \angle \theta_i$ and ${\mathop V\limits^\to}_j = V_{j} \angle \theta_j$ are the nodal voltages at $i$ and $j$, where $V$ and $\theta$ denote voltage magnitude and angle, respectively, and $\theta_{ij} = \theta_i - \theta_j$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{images/Impedance.png}
\vspace{-0.2pt}
\caption{Power flow on a distribution line $\ell = ij$}
\label{fig: Impedance}
\end{figure}
If the line is disconnected, i.e., either by a fault ($\delta_\ell =0$) or opening the tie switch ($\alpha_\ell =0$), the line flows should be zero:
\begin{eqnarray}
P_{ij} =0 , Q_{ij} =0 ~~\textrm{if}~~ \delta_\ell =0~ \textrm{or} ~ \alpha_\ell =0
\end{eqnarray}
\end{subequations}
In general, the network reconfiguration model must capture the line flow equations conditioned on the connection status of each line (\ref{linepowerflow}) while respecting the network's radial structure.
This is generally a mixed-binary nonlinear optimization problem, due to the binary variables representing the switching actions and to the non-convex nature of the power flow constraints.
However, for radial networks and under mild assumptions, the problem can be formulated as a mixed integer conic program (MICP), which can be solved to optimality by commercial solvers such as CPLEX and MOSEK on classical computers.
Therefore, it can be an ideal benchmark for evaluating new computing methods in power systems, particularly the emerging trend of quantum computing.
This paper first develops novel MICP formulations for radial distribution network reconfiguration to examine the performance of quantum combinatorial optimization algorithms.
Specifically, we consider two formulations, one is based on existing bus injection model \cite{Jabr2012reconfi}, which employs non-physical auxiliary variables, and the other developed in this paper, which is extended from the branch flows model \cite{Lowc2013convex}.
\subsection{The bus injection based reconfiguration model}
The bus injection model \cite{Jabr2012reconfi} uses two auxiliary variables:
\begin{eqnarray}
\label{rab_equality}
R_{\ell} = V_i V_j \cos\theta_{ij},~ T_{\ell} = V_i V_j \sin\theta_{ij}, ~~\forall \ell =ij \in \mathcal{L}
\end{eqnarray}
which yields the following conditions:
\begin{eqnarray}
(R_{\ell})^2 + (T_{\ell})^2 = (V_i)^2 (V_j)^2 , ~ R_{\ell} \geq 0, \quad \forall \ell = ij \in \mathcal{L} \label{Rab_RotatedCone}.
\end{eqnarray}
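Note that the equality \eqref{Rab_RotatedCone} is nonconvex. In the model below it is kept only as the inequality \eqref{Rab_convex}, which is a rotated second-order cone constraint and can equivalently be written in standard conic form as

```latex
\left\| \big( 2R_{\ell},\; 2T_{\ell},\; \nu_i^{\ell} - \nu_j^{\ell} \big) \right\|_2
\;\leq\; \nu_i^{\ell} + \nu_j^{\ell},
\qquad \forall \ell = ij \in \mathcal{L},
```

since $4R_{\ell}^2 + 4T_{\ell}^2 \leq 4\nu_i^{\ell}\nu_j^{\ell} = (\nu_i^{\ell}+\nu_j^{\ell})^2 - (\nu_i^{\ell}-\nu_j^{\ell})^2$. This relaxation is what makes the overall problem a mixed integer \emph{conic} program.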
Then, the network configuration is as follows:
\begin{subequations}
\label{jabr}
\begin{eqnarray}
\min \Bigg( P^{\sf grid} + VOLL \sum_{i=2}^n P^c_i \Bigg). \label{objrab}
\end{eqnarray}
\end{subequations}
Subject to:
\begin{itemize}
\item{Radiality constraints}
\begin{subequations}
\begin{eqnarray}
\beta_{1j}=0, \quad j \in \mathcal{N}(1) \label{beta_v1}\\
%
\beta_{ij} \in \left[0,1\right], \quad \forall i \in \mathcal{I}, ~j\in \mathcal{N}(i) \label{beta_v2}\\
%
\sum_{j\in \mathcal{N}(i)} \beta_{ij}=1, \quad \forall i \in \mathcal{I} \label{beta_v3}\\
%
\beta_{ij} + \beta_{ji} = \alpha_{\ell}, \quad \forall \ell = ij \in \mathcal{SW} \label{beta_v4}\\
%
\beta_{ij} + \beta_{ji} = \delta_{\ell}, \quad \forall \ell = ij \in \mathcal{L} \backslash \mathcal{SW} \label{beta_v5}
\end{eqnarray}
\end{subequations}
\item{Incorporating line connection statuses}
\begin{subequations}
\begin{eqnarray}
0 \leq \nu_{i}^\ell \leq (\overline{V_i})^2 \alpha_{\ell} \label{alpha_v1}, ~~ \forall \ell=ij \in \mathcal{SW}\\
%
0 \leq \nu_{i} - \nu_{i}^\ell \leq (\overline{V_i})^2(1 - \alpha_{\ell}) \label{alpha_v2}, ~~ \forall \ell=ij \in \mathcal{SW}\\
%
0 \leq \nu_{j}^\ell \leq (\overline{V_j})^2 \alpha_{\ell} \label{alpha_v3}, ~~ \forall \ell=ij \in \mathcal{SW}\\
%
0 \leq \nu_{j} - \nu_{j}^\ell \leq (\overline{V_j})^2(1 - \alpha_{\ell}) \label{alpha_v4}, ~~ \forall \ell=ij \in \mathcal{SW}\\
%
0 \leq \nu_{i}^\ell \leq (\overline{V_i})^2 \delta_{\ell} \label{delta_v1}, ~~ \forall \ell=ij \in \mathcal{L} \backslash \mathcal{SW}\\
%
0 \leq \nu_{i} - \nu_{i}^\ell \leq (\overline{V_i})^2(1 - \delta_{\ell}) \label{delta_v2}, ~~ \forall \ell=ij \in \mathcal{L} \backslash \mathcal{SW}\\
%
0 \leq \nu_{j}^\ell \leq (\overline{V_j})^2 \delta_{\ell} \label{delta_v3}, ~~ \forall \ell=ij \in \mathcal{L} \backslash \mathcal{SW}\\
%
0 \leq \nu_{j} - \nu_{j}^\ell \leq (\overline{V_j})^2(1 - \delta_{\ell}) \label{delta_v4}, ~~ \forall \ell=ij \in \mathcal{L} \backslash \mathcal{SW}
\end{eqnarray}
\end{subequations}
\item{Line power flow constraints}
\begin{subequations}
\begin{eqnarray}
(R_{\ell})^2 + (T_{\ell})^2 \leq \nu_i^\ell \nu_j^\ell \label{Rab_convex}, ~ R_{\ell} \geq 0, \quad \forall \ell = ij \in \mathcal{L}\\
P_{ij} = \frac{1}{r^2_{\ell} + x^2_\ell} \left( r_{\ell} \nu_i^\ell - r_\ell R_{\ell} + x_\ell T_{\ell} \right), \label{Rab_Pij} \forall \ell = ij \in \mathcal{L}\\
%
Q_{ij} = \frac{1}{r^2_{\ell} + x^2_\ell} \left( x_{\ell} \nu_i^\ell - x_\ell R_{\ell} - r_\ell T_{\ell} \right) , \label{Rab_Qij} \forall \ell = ij \in \mathcal{L}\\
%
P_{ji} = \frac{1}{r^2_{\ell} + x^2_\ell} \left( r_{\ell} \nu_j^\ell - r_\ell R_{\ell} - x_\ell T_{\ell} \right) \label{Rab_Pji}, \forall \ell = ij \in \mathcal{L} \\
%
Q_{ji} = \frac{1}{r^2_{\ell} + x^2_\ell} \left( x_{\ell} \nu_j^\ell - x_\ell R_{\ell} + x_\ell T_{\ell} \right) \label{Rab_Qji}, \forall \ell = ij \in \mathcal{L}\\
%
I^{sq}_{\ell} = \frac{1}{r^2_{\ell} + x^2_\ell} \left( \nu_i^\ell + \nu_j^\ell - 2R_{\ell} \right) \geq 0 \label{Rab_Isq}, \forall \ell = ij \in \mathcal{L},
\end{eqnarray}
\end{subequations}
\item {Operational constraints based on the bus injection model}
\begin{subequations}
\begin{eqnarray}
(\underline{V}_i)^2 \leq \nu_i \leq (\overline{V}_i)^2, \label{Rab_vollim} \quad \forall i \in \mathcal{I}\\
%
P^{d}_i = P^{ds}_i + P^{c}_i \label{Rab_Prelax}, \quad \forall i \in \mathcal{I} \\
%
Q^{d}_i = Q^{ds}_i + Q^{c}_i \label{Rab_Qrelax}, \quad \forall i \in \mathcal{I} \\
%
P^{ds}_i, P^{c}_i, Q^{ds}_i, Q^{c}_i \geq 0, ~~ \forall i \in \mathcal{I} \label{Rab_Nonneg}\\
%
P^{ds}_{i} + \sum_{j \in \mathcal{N} \left(i \right)} P_{ij} = 0 \label{Rab_Pbalance}, \quad \forall i \in \mathcal{I} \\
%
Q^{ds}_{i} + \sum_{j \in \mathcal{N} \left( i \right)} Q_{ij} = 0 \label{Rab_Qbalance}, \quad \forall i \in \mathcal{I} \\
%
P^{\sf grid} = \sum\limits_{j \in \mathcal{N}(1)} P_{1j}, ~
Q^{\sf grid} = \sum\limits_{j \in \mathcal{N}(1)} Q_{1j} \label{Rab_SlackPQ}.
%
\end{eqnarray}
\end{subequations}
\end{itemize}
The objective function maximizes the amount of customers' demand that can be served in the network by reconfiguring the network in response to the fault; in other words, it minimizes the total quantity of curtailed power, penalized by the value of lost load (VOLL).
The set of radiality constraints \eqref{beta_v1}-\eqref{beta_v4} maintains the radial structure of the network.
Here, we use the parent-child relation-based method, also called the spanning tree condition \cite{Jabr2012reconfi}.
It requires every node, apart from the substation node, to have exactly one parent, as expressed in \eqref{beta_v1} and \eqref{beta_v2}, where $\beta_{ij}$ and $\beta_{ji}$ represent the relation between nodes $i$ and $j$.
If $\beta_{ij}$ equals 1, node $i$ is the parent of node $j$, and 0 otherwise.
Here, we model $\beta_{ij}$ as continuous variables constrained in $\left[0, 1\right]$; they nevertheless only take values of 0 or 1, as proved in Claim 1 of \cite{taylor2012convex}.
Therefore, the set of radiality constraints in this paper saves $(|\mathcal{I}|^2 - 2|\mathcal{L}|)$ binary variables compared to the original formulation in \cite{Jabr2012reconfi}.
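As a rough numerical illustration of the saving (assuming, for example, a feeder with 33 nodes and 37 lines, as in the IEEE 33-bus test system; these figures are illustrative, not fixed by the formulation):

```python
# Count the binary variables saved by modeling the parent-child
# variables beta as continuous in [0, 1] instead of binary.
# Assumed example figures: a 33-node, 37-line feeder.
def binaries_saved(num_nodes: int, num_lines: int) -> int:
    # Original spanning-tree formulation: |I|^2 binary beta variables;
    # here only the 2|L| beta's on actual lines remain, and they are continuous.
    return num_nodes ** 2 - 2 * num_lines

print(binaries_saved(33, 37))  # 1015
```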
In order to model the power flows over the connected and disconnected lines, we need to incorporate the line connection statuses in the computation of the power flows.
To this end, for each line $\ell$ we define the two auxiliary variables $\nu^\ell_i$ and $\nu^\ell_j$ such that
\begin{equation}
\label{connection_status}
\begin{cases}
\nu^\ell_i = \nu_i, ~\nu^\ell_j = \nu_j, & \text{if $\ell$ is connected}\\
\nu^\ell_i = \nu^\ell_j = 0, & \text{if $\ell$ is disconnected}
\end{cases}
\end{equation}
where $\nu_i, \nu_j$ represent the squared nodal voltages.
The connection status condition (\ref{connection_status}) is captured by
constraints \eqref{alpha_v1}-\eqref{delta_v4} where the connection is based on the values of $\delta_\ell$ and $\alpha_\ell$.
The real and reactive power flow constraints are shown in (\ref{Rab_convex})-(\ref{Rab_Isq}).
When a line is connected, i.e., $\delta_\ell =1$ or $\alpha_\ell =1$, then $\nu^\ell_i = \nu_i$ and $\nu^\ell_j = \nu_j$, and (\ref{Rab_Pij})-(\ref{Rab_Isq}) indeed represent the original line flow and line current formulation as in (\ref{linepowerflow}).
On the other hand, if the line is disconnected, i.e., $\delta_\ell =0$ or $\alpha_\ell =0$, then $\nu_i^\ell$ and $\nu_j^\ell$ are forced to zero by \eqref{alpha_v1}-\eqref{delta_v4}.
Consequently, $R_\ell =0$ because of (\ref{Rab_convex}), so the values of $P_{ij}, Q_{ij}, P_{ji}$, and $Q_{ji}$ calculated in (\ref{Rab_Pij})-(\ref{Rab_Qji}) are also zero.
Additionally, the line current $I^{sq}_\ell$ equals zero.
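The on/off logic of \eqref{alpha_v1}-\eqref{alpha_v4} can be checked numerically. The following minimal sketch (with an assumed voltage bound of 1.1 pu) verifies that the bounds leave $\nu^\ell_i = \nu_i$ as the only feasible choice when $\alpha_\ell = 1$, and $\nu^\ell_i = 0$ when $\alpha_\ell = 0$:

```python
# Numerical check of the indicator bounds:
#   0 <= nu_l <= Vmax^2 * alpha   and   0 <= nu - nu_l <= Vmax^2 * (1 - alpha)
def feasible(nu, nu_l, alpha, vmax2=1.1 ** 2):
    return (0 <= nu_l <= vmax2 * alpha) and (0 <= nu - nu_l <= vmax2 * (1 - alpha))

# Connected line (alpha = 1): only nu_l == nu survives both bounds.
assert feasible(1.0, 1.0, 1)
assert not feasible(1.0, 0.5, 1)

# Disconnected line (alpha = 0): only nu_l == 0 survives.
assert feasible(0.9, 0.0, 0)
assert not feasible(0.9, 0.1, 0)
print("indicator bounds behave as intended")
```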
Finally, constraint \eqref{Rab_vollim} represents the nodal voltage limits. The real and reactive power load served at each node are modeled in \eqref{Rab_Prelax}-\eqref{Rab_Qrelax}, where $P^{ds}_i, Q^{ds}_i$ and $P^c_i, Q^c_i$ are the amounts of demand served and curtailed at node $i$, which must be nonnegative as in (\ref{Rab_Nonneg}).
Additionally, the real and reactive power balances at each node are given in (\ref{Rab_Pbalance}) and (\ref{Rab_Qbalance}).
The overall problem is an MICP in which the rotated cone (\ref{Rab_convex}) is the convex relaxation of the original equality condition (\ref{rab_equality}); this relaxation is generally exact for radial networks.
However, the formulation contains auxiliary variables $R_\ell$ and $T_\ell$ that do not carry any physical meaning.
Also, as discussed in \cite{KPEC_ACOPF}, the rotated cone (\ref{Rab_convex}) can exhibit numerical ill-conditioning due to the very small gap $|\nu_i^\ell - \nu_j^\ell|$ arising when MICP solvers transform it into the equivalent quadratic cone.
\subsection{Distribution network reconfiguration formulation based on the branch flows model}
We aim to develop a more compact and numerically stable formulation of network reconfiguration, without auxiliary variables that lack physical meaning, by extending the branch flows model \cite{Lowc2013convex} with line connection statuses.
The model is as follows:
\begin{subequations}
\label{Low}
\begin{equation}
\min \Bigg( P^{\sf grid} + VOLL \sum_{i=2}^n P^c_i \Bigg) \label{objLow}
\end{equation}
Subject to:
\begin{itemize}
\item{Radiality constraints}
\eqref{beta_v1}-\eqref{beta_v4}
\item{Incorporating line connection statuses}
\eqref{alpha_v1}-\eqref{delta_v4}
\item{Operational constraints based on the branch flows model}
\begin{eqnarray}
%
P_{ij}^2 + Q_{ij}^2 \leq \nu_i^\ell I^{sq}_{\ell}, \label{Low_convex}~~ \forall\ell=ij\in \mathcal{L}\\
%
\nu_j^\ell = \nu_i^\ell - 2(r_{\ell} P_{ij} + x_{\ell} Q_{ij})+(r_{\ell}^2 + x_{\ell}^2)I^{sq}_{\ell}, \label{Low_Isq} \notag \\
\forall ij=\ell \in \mathcal{L}\\
(\underline{V}_{i})^2 \leq \nu_i \leq (\overline{V}_{i})^2 \label{Low_Vollim}, \quad \forall i \in \mathcal{I} \\
P^{d}_i = P^{ds}_i + P^{c}_i \label{Low_Prelax}, \quad \forall i \in \mathcal{I} \\
%
Q^{d}_i = Q^{ds}_i + Q^{c}_i \label{Low_Qrelax}, \quad \forall i \in \mathcal{I} \\
%
\sum_{k: k \to i}\big(P_{ki} - r_{ki} I^{sq}_{ki} \big) - \sum_{j: i \to j } P_{ij} = P^{ds}_{i} \label{Low_Pij}, ~~ \forall i \in \mathcal{I} \\
%
\sum_{k: k \to i}\big(Q_{ki} - x_{ki} I^{sq}_{ki} \big) - \sum_{j: i \to j } Q_{ij} = Q^{ds}_{i} \label{Low_Qij}, ~~ \forall j \in \mathcal{I} \\
%
P^{\sf grid} = \sum\limits_{j: 1 \to j} P_{1j},~~ Q^{\sf grid} = \sum\limits_{j: 1\to j } Q_{1j} \label{Low_SlackPQ} \\
%
I^{sq}_{\ell} \geq 0, \quad \forall \ell \in \mathcal{L}, \nonumber\\
~ P^{ds}_i, P^{c}_i, Q^{ds}_i, Q^{c}_i \geq 0, ~~ \forall i \in \mathcal{I}. \label{Low_Nonneg}
\end{eqnarray}
\end{itemize}
\end{subequations}
The objective function, radiality constraints, and line connection constraints are the same as in the previous bus injection model.
Operational constraints (\ref{Low_convex})-\eqref{Low_Nonneg} are constructed following the branch flows model.
Here, the second-order cone relaxation is constructed from the circular relation between the real and reactive line powers $P_{ij}, Q_{ij}$.
When lines are connected, these operational constraints model the original power flow equations over the distribution lines as in \cite{Lowc2013convex}.
When line $\ell= ij$ is disconnected, the values of $\nu^\ell_i, \nu^\ell_j$ are forced to zero.
Consequently, both $P_{ij}$ and $Q_{ij}$ are forced to zero by (\ref{Low_convex}).
Also, the voltage drop equation (\ref{Low_Isq}) enforces $I^{sq}_\ell$ to be zero, i.e., there is no current over the disconnected line.
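The disconnection logic can be illustrated on a single line with assumed per-unit data. The sketch below treats the cone as tight for the connected case, and shows that zero auxiliary voltages leave zero flow and zero current as the only consistent solution:

```python
# Sanity check of the branch flow relations (assumed per-unit line data).
r, x = 0.02, 0.04                                # line impedance, pu

# Connected line: take the cone P^2 + Q^2 <= v_i * I_sq as tight,
# then apply the voltage-drop equation.
v_i, P, Q = 1.0, 0.5, 0.2
I_sq = (P ** 2 + Q ** 2) / v_i
v_j = v_i - 2 * (r * P + x * Q) + (r ** 2 + x ** 2) * I_sq
assert v_j < v_i                                 # voltage drops toward the load

# Disconnected line: nu^l_i = nu^l_j = 0 forces P = Q = 0 via the cone;
# the voltage-drop equation then admits only I_sq = 0.
P = Q = 0.0
I_sq_disc = (0.0 - 0.0 + 2 * (r * P + x * Q)) / (r ** 2 + x ** 2)
assert I_sq_disc == 0.0
print("branch flow relations consistent")
```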
\subsection{Related fault isolation and network reconfiguration model}
Compared to the bus injection model (\ref{objrab})-(\ref{Rab_SlackPQ}), the extended branch flows model exhibits better numerical stability, arising from a different representation of the rotated cones as discussed in \cite{KPEC_ACOPF}.
\begin{comment}
Specifically, MICP solvers such as CPLEX and MOSEK will convert rotated cones \eqref{Rab_convex} and \eqref{Low_convex} into equivalently quadratically second order cones:
\begin{subequations}
\label{quad}
\begin{eqnarray}
\sqrt{R^2_{\ell} + T^2_{\ell} + \left( \frac{\nu_{i}^{\ell} - \nu_{j}^{\ell}}{2} \right)^2} \leq \frac{\nu_{i}^{\ell} + \nu_{j}^{\ell}}{2}, \label{quad1} \\
\sqrt{P^2_{ij}+Q^2_{ij} + \Bigg(\frac{\nu_i^{\ell} - I_{\ell}^{sq}}{2} \Bigg)^2} \leq \frac{\nu_i^{\ell} + I_{\ell}^{sq}}{2}, \label{Low_rotatedcone} \label{quad2} \\
\forall\ell=ij\in \mathcal{L} \notag
\end{eqnarray}
\end{subequations}
In particular, if the line is connected, the nodal voltages are close to 1pu whereas the line currents are much smaller than 1pu. Consequently, \eqref{Rab_convex} can have some numerically ill-condition since $\nu^\ell_i \approx \nu^\ell_j$ leading to $\nu^\ell_i - \nu^\ell_j \approx 0$ in its quadratically form (\ref{quad1}).
The cone \eqref{Low_convex}, however, is more numerically stable
since $V^{sq}_i >> I^{sq}_\ell$ in pu.
\end{comment}
Compared to other models in the literature \cite{ShanShan2018-NR, Mehdi2020-NR}, our model does not need a Big-M term to decouple the line power flows and nodal voltages on disconnected lines.
Choosing the value of Big-M is not a trivial task.
If M is chosen too large, it leads to numerical instability.
If M is chosen too small, it might fail to properly model the decoupling of the physical variables, i.e., line flows and voltages, on disconnected lines.
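The pitfall can be sketched for the generic decoupling constraint $|P_{ij}| \leq M \alpha_\ell$, a common device in Big-M formulations (not the model proposed here): a too-small $M$ cuts off the flow the network actually needs.

```python
# Generic Big-M decoupling |P| <= M * alpha for a connected line (alpha = 1).
def flow_allowed(P, alpha, M):
    return abs(P) <= M * alpha

P_needed = 2.5                              # flow required by the loads (assumed, pu)
assert flow_allowed(P_needed, 1, M=10.0)    # large M: the true solution is feasible
assert not flow_allowed(P_needed, 1, M=1.0) # too-small M cuts off the true solution
print("too-small M excludes the required flow")
```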
\section{Quantum Computing based Solution Approaches}
This section discusses the background of quantum computing and how we leverage quantum combinatorial optimization algorithms to solve the network reconfiguration problem, which is an MICP, on a real quantum computer.
\subsection{Quantum Computing Basics}
Quantum computing has logic operations based on the principles of quantum mechanics, which differs from classical computing.
The fundamental unit of information in quantum computation is the qubit.
The state of a qubit is a unit vector in a two-dimensional complex vector space:
\[
| \psi \rangle = a | 0 \rangle + b | 1 \rangle : a, b \in \mathbb{C} \wedge |a|^2 + |b|^2 = 1.
\]
The coefficients $a$ and $b$ are complex numbers known as probability amplitudes that satisfy the constraint $|a|^2 + |b|^2 = 1$.
A single qubit can be in a superposition of two basis states $|0\rangle$ and $|1\rangle$.
Therefore, a system of $n$ qubits can be in an arbitrary superposition of $2^n$ basis states \cite{BurVerify_Qiskit_Circuit}.
Moreover, all $2^n$ superposed basis states are encoded in the same $n$ qubits.
For instance, if an optimization task involves a search space of $2^7 = 128$ states, a quantum computer can hold a superposition over all of them using only 7 qubits.
Hence, a quantum computer is expected to be more powerful than a classical computer for certain search algorithms.
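As a small arithmetic sketch of this counting argument (pure Python, no quantum SDK involved):

```python
import math

# A uniform superposition over n qubits assigns amplitude 1/sqrt(2^n)
# to each of the 2^n basis states, so 7 qubits span 128 states at once.
n = 7
num_states = 2 ** n
amplitude = 1 / math.sqrt(num_states)

assert num_states == 128
assert abs(num_states * amplitude ** 2 - 1.0) < 1e-12  # probabilities sum to 1
```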
The recent development of quantum computing platforms, services, and development kits, such as IBM Quantum, Azure Quantum, Amazon Braket, D-Wave Systems, IonQ, Q\#, and Qiskit, provides strong motivation to seek speed-ups from quantum computing solutions applied to combinatorial optimization problems.
However, existing quantum computing algorithms for combinatorial optimization employ a relaxation of the binary constraints using quantum states.
As indicated in the foundational paper \cite{qaoa}, this approach is heuristic; it is therefore worth examining its performance on combinatorial optimization problems arising from power system operations, e.g., distribution network reconfiguration.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{images/Qiskit_Workflows.jpg}
\vspace{-0.2pt}
\caption{Solution Techniques Workflow}
\label{fig: Qiskit_Workflow}
\end{figure}
\subsection{Workflows for working with quantum computers}
In this paper, we use Qiskit - an open-source SDK \cite{qiskit} to test the network reconfiguration formulation with quantum computing enabled by IBM Quantum services \cite{ibmq}.
The overall workflow is shown in Fig. \ref{fig: Qiskit_Workflow}.
First, the MICP-based network reconfiguration problem must be expressed in a syntax compatible with the quantum optimization algorithms.
There are two methods: a direct and an indirect one.
In the direct method, the program is coded directly following the structures and syntax required by Qiskit.
In the indirect method, we formulate the MICP in a conventional mathematical modeling language such as docplex or gurobipy, which is then converted into a ``quadratic program''.
Here, a ``quadratic program'' in quantum computing refers to an optimization problem that contains quadratic constraints and binary variables, where each binary variable can be approximated by a quantum state.
Once the program is built or converted, an appropriate solver should be selected based on its characteristics.
If the program contains only binary variables, i.e., a purely integer program in classical optimization, GroverOptimizer, MinimumEigenOptimizer, or RecursiveMinimumEigenOptimizer should be chosen.
If the program also contains continuous variables, i.e., a mixed-binary optimization program as in the network reconfiguration problem, the ADMMOptimizer should be selected.
\subsection{Alternating Direction Method of Multipliers for Convex Optimization}
The alternating direction method of multipliers (ADMM)-based hybrid quantum-classical computing (QCC) algorithm is used to solve a class of mixed-binary convex optimization problems \cite{ADMM}.
The MICP is decomposed into a quadratic unconstrained binary optimization (QUBO) problem containing only the binary variables, and convex sub-problems containing only the continuous variables, in which the relaxed binary variables are fixed.
The QUBO problem is solved by the quantum approximate optimization algorithm (QAOA), which relaxes the binary variables via quantum states, whereas the convex sub-problems are solved by conventional solvers such as CPLEX, GUROBI, or COBYLA.
The accuracy of the quantum computing results depends on the inexactness of the binary sub-problem solutions \cite{ADMM} and on the noise of real quantum devices \cite{Quantum_noise_2022}.
Consequently, ADMM cannot guarantee convergence to the optimal solution due to the non-smooth nature of the mixed-binary problem.
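The decomposition can be mimicked classically on a toy problem. The following sketch alternates between a closed-form continuous subproblem and an enumerated binary subproblem (standing in for the QUBO/QAOA step), omitting the dual updates of the full ADMM in \cite{ADMM}:

```python
# Toy alternating minimization in the spirit of the mixed-binary splitting:
#   minimize (u - 3)^2 + (u - 2*b)^2  with b in {0, 1} and u continuous.
def solve_u(b):
    # Continuous subproblem: closed-form minimizer for fixed b.
    return (3 + 2 * b) / 2

def solve_b(u):
    # Binary subproblem by enumeration (the role the QUBO step plays).
    return min((0, 1), key=lambda b: (u - 2 * b) ** 2)

b = 0
for _ in range(10):            # alternate until the pair stabilizes
    u = solve_u(b)
    b = solve_b(u)

assert (b, u) == (1, 2.5)      # global optimum of this toy instance
```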
\subsection{Quantum Approximate Optimization Algorithm}
The quantum approximate optimization algorithm (QAOA) is particularly appropriate for problems containing only binary variables.
Specifically, the $N$ binary variables of the original integer program are encoded as $N$-bit strings $x_1,x_2,...,x_N$.
QAOA converts the original integer optimization problem into a search for decision variables, in the form of a bitstring, that maximize the following approximation ratio \cite{qaoa}:
\begin{equation}
R = \frac{f(x)}{\max_x f(x)}, \label{ratio}
\end{equation}
where $f(x)$ denotes the objective of the original integer program and $R$ is called the approximation ratio.
Obviously, $0 \leq R \leq 1$.
However, $\max_x f(x)$ in \eqref{ratio} is the optimal value of the original integer optimization problem, which is exactly what we seek, and is thus unknown to the algorithm.
On the other hand, the theoretical hardness limits of the approximation ratio $R$ are known (e.g., the approximability limit for the MaxCut problem is $16/17 \approx 0.94117$) \cite{arora1998proof}.
Therefore, the key point is to find $x$ such that the ratio $f(x)/f(\hat{x})$ improves and approaches its theoretical limit, where $\hat{x}$ denotes the rounding of the relaxed $x \in [0,1]$ back to the $\{0,1\}$ domain.
In QAOA, the search procedure utilizes the quantum-state relaxation of the binary variables, making it a heuristic algorithm \cite{qaoa}.
However, QAOA is one of the leading candidate algorithms, as it takes advantage of quantum computing power in finding solutions to optimization problems \cite{qaoa_performance}.
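The ratio $R$ can be evaluated by brute force on a toy instance. The sketch below uses an assumed triangle-graph MaxCut problem, for which the optimal cut value is 2:

```python
from itertools import product

# Brute-force evaluation of R = f(x) / max_x f(x) for a tiny MaxCut
# instance on a triangle graph (assumed toy example).
edges = [(0, 1), (1, 2), (0, 2)]

def cut_value(x):
    # Number of edges crossing the partition defined by bitstring x.
    return sum(x[i] != x[j] for i, j in edges)

best = max(cut_value(x) for x in product((0, 1), repeat=3))
x_candidate = (0, 1, 1)          # e.g., a rounded relaxed solution
R = cut_value(x_candidate) / best

assert best == 2 and R == 1.0    # a triangle can cut at most 2 of 3 edges
```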
\section{Numerical Results}
We examine the numerical performance of three approaches: (i) and (ii) solving the MICP-based network reconfiguration (the bus injection model and the developed branch flows model, respectively) on a classical computer using a classical optimizer, i.e., CPLEX, and (iii) solving the branch flows-based network reconfiguration model on a real quantum computer.
Specifically, we use the $ibm\_oslo$, a 7-qubit machine provided by IBM depicted in Figure \ref{fig: Decouple_map}, to solve the constructed MICP.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{images/Decop_CNOT.png}
\vspace{-0.2pt}
\caption{Coupling map of $ibm\_oslo$ server}
\label{fig: Decouple_map}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\linewidth]{images/Case33bw_scenarios.jpg}
\vspace{-0.2pt}
\caption{Fault scenarios and obtained network reconfiguration}
\label{fig: Scenario}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{images/voltage_comparison.jpg}
\vspace{-0.2pt}
\caption{Voltage magnitude (in pu)}
\label{fig: voltage}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{images/current_comparison.jpg}
\vspace{-0.2pt}
\caption{Current magnitude (in pu)}
\label{fig: current}
\end{figure}
We conduct the numerical experiments on the IEEE 33-node system, considering three random fault scenarios on the network as shown in Fig. \ref{fig: Scenario}.
The three methods result in the same switching actions, i.e., the exact binary solutions of the classical solver and the quantum results after rounding to binary values coincide.
Figs. \ref{fig: voltage} and \ref{fig: current} show that the voltage magnitude and current magnitude results of both formulations on the classical PC are equivalent, with insignificant differences.
On the other hand, the quantum computing solutions exhibit some differences from the true optimal ones.
This can be explained by the heuristic nature of the algorithms.
Another reason is the large number of inequalities in the problem, which have to be converted into equalities, making convergence difficult \cite{qaoa}.
Additionally, the accuracy of the final quantum computing solution relies heavily on the quality of the QUBO solution and on the noise of the quantum device.
\section{Conclusions}
This paper presents two mixed integer conic program formulations for the fault isolation and network reconfiguration problem, namely the existing bus injection model and a new branch flows model.
The branch flows-based network reconfiguration does not require auxiliary variables without physical meaning or a Big-M term to model the line flows over connected and disconnected lines.
Its compact and numerically stable form can be used to evaluate new computing methods proposed for power system operations.
Our experiment with the only known mixed-binary combinatorial optimization algorithm on quantum computers, namely ADMM-QCC, shows that quantum computing can be a promising approach for large-scale problems.
However, its results are not as accurate as those of classical optimizers on classical computers, because of its heuristic nature.
Therefore, the application of quantum computing to critical operations of the power grid requires further investigation of the mathematical foundations behind quantum algorithms.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recent advances in quantum computing have been driving intense research in the development of quantum algorithms that offer significant advantage over their classical counterparts. In particular, quantum algorithms are used for studying interacting many-electron systems that fundamentally govern the properties of materials and molecules~\cite{collaborators*_hartree-fock_2020, huggins_unbiasing_2022}. In quantum chemistry and materials modeling, a quantum advantage~\cite{lloyd_universal_1996} could be achieved either by offering a significant, potentially exponential acceleration of conventional methods to (approximately) solve the electronic Schr\"odinger equation, or by improving accuracy by incorporating a better description of the many-body effects of strongly correlated electronic systems~\cite{bauer_hybrid_2016}. However, universal fault-tolerant quantum hardware~\cite{shor_fault-tolerant_1996} is required to harness the full potential of quantum computing, which is expected to be deployed only within the next decade. Currently available experimental devices, so-called Noisy Intermediate-Scale Quantum computers (NISQ), are limited by their inherent circuit noise and their decoherence time, posing strong constraints with respect to the number of qubits, the circuit depth, and the number of gate operations which can be executed within quantum algorithms~\cite{preskill_quantum_2018}.
Due to these constraints, the execution of algorithms like quantum phase estimation or quantum Fourier transform are impractical on NISQs, and most methods that produce quantum circuits executable on available hardware are centered around hybrid quantum-classical algorithms. For example, the variational quantum eigensolver (VQE)~\cite{peruzzo_variational_2014, tilly_variational_2021, cerezo_variational_2021} uses a quantum computer to store a parametrized wave function and measure its energy, while a classical, external minimization of the energy through variational parameters provides an approximation to the ground-state of the system. Although the classical optimization in a VQE is challenging due to the presence of local minima and barren plateaus~\cite{mcclean_barren_2018,wang_noise-induced_2021,cerezo_variational_2021}, its utility has already been experimentally demonstrated for small molecular systems~\cite{collaborators*_hartree-fock_2020}.
In addition to the ground states, many applications require the assessment of the thermal state at finite temperatures (Gibbs state) or the properties of excited states in a system. The Gibbs state minimizes the Helmholtz free energy $F = E-TS$ at an inverse temperature $\beta=1/T$ (in units of $k_B^{-1}$) with the energy $E$ and the entropy $S$. This state is mixed and can be formulated in terms of the eigenstates $\ket{\varphi_i}$ and eigenenergies $\epsilon_i$ of the Hamiltonian:
\begin{align}
\hat{\rho}_\text{Gibbs} & = \sum_{i=0} p_i\ket{\varphi_i}\bra{\varphi_i},
\label{eq:definition_Gibbs_state}
\end{align}
with
\begin{align}
p_i = \frac{e^{-\beta \epsilon_i}}{Z},\qquad Z &= \sum_{i} e^{-\beta \epsilon_i}.
\label{eq:partition_function}
\end{align}
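For a toy spectrum (assumed eigenenergies), the probabilities $p_i$ and their limits can be checked directly:

```python
import math

# Boltzmann probabilities p_i = exp(-beta * e_i) / Z for a toy spectrum
# (assumed eigenenergies), checking normalization and the low-T limit.
def gibbs_probs(energies, beta):
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

p = gibbs_probs([0.0, 1.0, 2.0], beta=1.0)
assert abs(sum(p) - 1.0) < 1e-12          # probabilities are normalized

p_cold = gibbs_probs([0.0, 1.0, 2.0], beta=50.0)
assert p_cold[0] > 0.999                  # ground state dominates as T -> 0
```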
To date, two classes of algorithms have been developed to compute the Gibbs state. The first class is based on computing each eigenstate separately and subsequently mixing them according to their probabilities $p_i$ in equation~\eqref{eq:definition_Gibbs_state}. Such algorithms usually start out by computing the ground-state using a VQE, followed by successively computing the excited (eigen-) states and projecting them out, or penalizing the already computed eigenstates~\cite{kuroiwa_penalty_2020,higgott_variational_2018}.
The second class prepares the Gibbs state itself on the quantum device, which involves the explicit treatment of the entropy $S$. The thermofield-double-states method, for example, doubles the system and collapses it in order to introduce entropy into the system~\cite{wu_variational_2018,wang_variational_2020,sagastizabal_variational_2021}. However, the measurement of the entropy is far from trivial: a quantum computer can measure the expectation value of an operator $\hat{O}$ on a state described by a density matrix $\hat{\rho}$ by taking the trace $\text{Tr}(\hat{O}\hat{\rho})$, but the expression for the entropy $\hat{\rho}\ln(\hat{\rho})$ is not linear in $\hat{\rho}$, rendering the corresponding measurement more demanding. On the other hand, using imaginary time evolution to obtain the Gibbs state requires complex quantum circuits which are challenging to implement on NISQ devices~\cite{motta_determining_2020,nishi_implementation_2021}. Verdon \textit{et al.}~\cite{verdon_quantum_2019} recently introduced a combination of a machine-learning algorithm with a variational quantum circuit, called hybrid variational quantum thermalizer (hVQT), which generalizes the VQE towards finite temperatures and involves a neural network that learns the entropic probability distribution, while a quantum circuit prepares the eigenstates of the Hamiltonian (see also Ref.~\onlinecite{guo_thermal_2021}). However, this approach leads to an intimate interaction of classical and quantum computation, which typically results in longer runtimes, and has thus motivated the development of alternate algorithms~\cite{foldager_noise-assisted_2022}.
To alleviate the above issues of the hVQT and to enable the algorithm to maximally benefit from a possible advantage of quantum machine learning~\cite{Perdomo_Ortiz_2018,Alcazar_2020}, we develop an algorithm which transfers the generation of the entropic probability distribution directly onto the quantum computer, thereby allowing the Gibbs state to be fully assessed on the quantum device. This approach, which we call quantum-VQT (qVQT), offers significant advantages over the hVQT: it minimizes the communication between the classical and quantum computer and achieves accurate results using significantly fewer measurements by evaluating them according to the probability distribution. In the remainder of this manuscript we present the qVQT algorithm in detail and demonstrate its performance by applying it to a 1-dimensional Heisenberg chain, and by computing the complete, temperature-dependent phase diagram of a 2-dimensional J1-J2 Heisenberg model.
\section{Method}
\begin{figure}
\includegraphics[scale=0.3]{Schaltplan.png}
\caption{\label{fig:FVQT} A schematic illustration of the qVQT algorithm, showing in blue the part which is executed on the classical computer (CPU) and includes the calculation of energy $E$ and entropy $S$ to form the free energy $F$ as well as a classical optimization routine. The red part shows the quantum circuit which is evaluated using the quantum computer (QPU). It consists of two variational circuits with an intermediate measurement for obtaining the entropy and a final measurement to obtain the energy.}
\end{figure}
\subsection{Principles of the qVQT}
The flowchart and the relevant components of the qVQT algorithm are shown in Fig.~\ref{fig:FVQT}. The fundamental idea of the qVQT is to use two separate variational quantum circuits (VQC) and an intermediate measurement to obtain a mixed state on a quantum computer (red block ``QPU'' in Fig.~\ref{fig:FVQT}). A classical optimization determines the parameters for which this mixed state represents the Gibbs state (blue block ``CPU'' in Fig.~\ref{fig:FVQT}). Different flavors of VQC have been proposed in the literature, e.g., hardware efficient VQC, particle number conserving VQC, variational Hamilton Ansatz (VHA), etc., which can also be used in qVQT~\cite{tilly_variational_2021,fedorov_vqe_2022,gard_efficient_2020}. We denote the parameters of the first VQC ($\text{VQC}_1$) with $\vec{\displaystyle \phi}$, while the parameters of the second VQC ($\text{VQC}_2$) are referred to as $\vec{\displaystyle\theta}$.
The first variational circuit $\text{VQC}_1\big(\vec{\displaystyle \phi}\big)$ and the intermediate measurement generate a classical distribution (see ``QPU'' in Fig.~\ref{fig:FVQT}). Specifically, the superposition of the basis states $\ket{b_i}$ produced by $\text{VQC}_1\big(\vec{\displaystyle \phi}\big)$ collapses to a probability distribution. $\hat{\rho}_{\text{VQC}_1}$ in equation~\eqref{eq:states_VQC1_midterm_a} represents the density matrix after the first variational circuit $\text{VQC}_1\big(\vec{\displaystyle\phi}\big)$, while $\hat{\rho}_{mm}$ in equation~\eqref{eq:states_VQC1_midterm} is the density matrix after the intermediate measurement:
\begin{align}
\hat{\rho}_{\text{VQC}_1} &= \left(\sum_{i} a_i\big(\vec{\displaystyle\phi}\big) \ket{b_i}\right)\left(\sum_{i} a_i^*\big(\vec{\displaystyle\phi}\big) \bra{b_i}\right)
\label{eq:states_VQC1_midterm_a}
\\
\hat{\rho}_{mm} &= \sum_{i} \left|a_i\big(\vec{\displaystyle\phi}\big)\right|^2 \ket{b_i}\bra{b_i}
\label{eq:states_VQC1_midterm}.
\end{align}
The second variational circuit $\text{VQC}_2\big(\vec{\displaystyle\theta}\big)$ maps the basis states $\ket{b_i}$ to a superposition of these basis states while preserving the orthogonality, and prepares the state
\begin{align}
\hat{\rho}_{\text{VQC}_2} &= \sum_{i} \left|a_i\big(\vec{\displaystyle\phi}\big)\right|^2 \ket{\psi_i\big(\vec{\displaystyle\theta}\big)}\bra{\psi_i\big(\vec{\displaystyle\theta}\big)}
\label{eq:states_VQC2}
\end{align}
which has the same form as equation~\eqref{eq:definition_Gibbs_state}.
We obtain the free energy $F$ by minimizing its value over the parameter set $(\vec{\displaystyle\phi},\vec{\displaystyle\theta})$, a task which is performed classically using an arbitrary (local) optimizer (see ``CPU'' in Fig.~\ref{fig:FVQT}). The energy $E$ is obtained by measuring the expectation value of the Hamilton operator after the second variational circuit $\text{VQC}_2$, and the entropy $S$ is obtained by the intermediate measurement of $\text{VQC}_1$. From the probabilities $p_i$ of measuring the basis state $\ket{b_i}$ we obtain the entropy:
\begin{align}
S = -\sum_{i} p_i\ln(p_i). \label{eq:definition_entropy}
\end{align}
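A minimal classical emulation of the state in equation~\eqref{eq:states_VQC2}, using an assumed $2\times 2$ rotation as $\text{VQC}_2$ and an assumed measured distribution, illustrates how $E$, $S$, and $F$ are assembled:

```python
import math

# Mixed-state sketch: orthonormal states |psi_i> (a 2x2 rotation, assumed
# ansatz) mixed with probabilities p_i from the intermediate measurement.
theta = 0.3
psi = [(math.cos(theta), math.sin(theta)),     # VQC_2 maps |0>, |1> to
       (-math.sin(theta), math.cos(theta))]    # orthogonal superpositions
p = [0.8, 0.2]                                 # measured distribution (assumed)

H = [[0.0, 0.0], [0.0, 1.0]]                   # toy diagonal Hamiltonian

def expval(v):
    # <v|H|v> for a real state vector v.
    return sum(v[a] * H[a][b] * v[b] for a in range(2) for b in range(2))

E = sum(pi * expval(v) for pi, v in zip(p, psi))   # energy of the mixed state
S = -sum(pi * math.log(pi) for pi in p)            # entropy from p_i alone
F = E - 1.0 * S                                    # free energy at T = 1

assert 0.0 < E < 1.0 and S > 0.0
```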
\subsection{Computational Cost}
To assess the resource cost and scaling of the qVQT we first discuss the error estimate as a function of the number of measurements in section~\ref{sec:precision}, then determine the required memory resources in section~\ref{sec:memory}, and finally analyze the complexity of the qVQT in section~\ref{sec:complexity}.
\subsubsection{Measurement Precision}\label{sec:precision}
Drawing $N$ samples from a random distribution with standard deviation $\sigma$ yields a standard error of $\Delta = \frac{\sigma}{\sqrt{N}}$.
Hence, an algorithm which measures all eigenvalues $\epsilon_i$ of a Hamiltonian yields a standard error of $\Delta \epsilon_i = \sigma_i/\sqrt{N/2^n}$ for each of these measurements, where $\sigma_i$ is the standard deviation of the measurement of the eigenvalue $\epsilon_i$ and $n$ is the size of the system. The factor $2^n$ arises from splitting the number of measurements among all $2^n$ eigenstates.
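The $1/\sqrt{N}$ scaling of the standard error can be confirmed empirically with a quick Monte Carlo sketch:

```python
import random
import statistics

# Empirical check that the standard error of a sample mean scales as 1/sqrt(N):
# quadrupling N should roughly halve the spread of the sample means.
random.seed(0)

def std_error(N, trials=2000):
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(N))
             for _ in range(trials)]
    return statistics.stdev(means)

ratio = std_error(100) / std_error(400)   # should be close to sqrt(4) = 2
assert 1.6 < ratio < 2.4
```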
The energy of the Gibbs state
\begin{align}
E = \frac{1}{Z}\sum_{i} e^{-\beta \epsilon_i}\epsilon_i = \sum_{i=0} p_i \epsilon_i
\end{align}
will then have an approximate error of
\begin{align}
\Delta E = \frac{1}{\sqrt{N}}\sqrt{2^n\sum_{i} \sigma_i^2p_i^2\left(1-\beta \epsilon_i(1-p_i)\right)^2},\label{eq:error_a}
\end{align}
where the partition function and probabilities are computed from the energy measurements.
Within the qVQT, we measure the expectation value of the thermal state directly with $N$ measurements. The precision depends on the measurement error $\Delta p_i$ of the probabilities and the measurement error $\Delta \epsilon_i$ of the eigenstates. The measurement error of $p_i$ can be calculated from the associated standard deviation, while the error of the energy eigenstate is given by the number of its measurements $p_iN$:
\begin{equation}
\begin{aligned}
\Delta p_i &= \frac{\sqrt{p_i(1-p_i)}}{N}\\
\Delta \epsilon_i &= \frac{\sigma_i}{\sqrt{p_iN}}.
\end{aligned}
\end{equation}
This leads to an error of the energy of the Gibbs state
\begin{align}
\Delta E = \frac{1}{\sqrt{N}}\sqrt{\sum_{i} \left[\sigma_i^2p_i+\frac{p_i(1-p_i)}{N}\epsilon_i^2\right]}\label{eq:error_b}
\end{align}
A detailed derivation of the above relations can be found in Appendix~\ref{sec:DervMeasPrec}.
When comparing the two error estimates from equations~\eqref{eq:error_a} and~\eqref{eq:error_b}, we see that in both cases the leading order is $1/\sqrt{N}$. However, the pre-factor in~\eqref{eq:error_a} shows an additional energy term and an additional factor of $2^np_i$, which can increase the error dramatically. For example, in the limit of $\beta = 0$ we see that both cases yield the same pre-factor of the leading order. On the other hand, in the limit of $T=0$, the error in equation~\eqref{eq:error_b} decays exponentially compared to~\eqref{eq:error_a}, illustrating the advantage of the qVQT.
\subsubsection{Memory Requirements}\label{sec:memory}
Computing the entropy requires storing all probabilities. Since the number of states grows exponentially with the system size, the memory requirement $M$ grows exponentially as well, up to the point where the number of measurements $N$ limits the number of states that are measured. This could be the case if there are excited states with probabilities comparable to $1/N$. Since the states and counts are stored in a dictionary, states which do not occur in the $N$ measurements do not require any memory; all other states require memory $M(N)$ to store an integer smaller than $N$.
The qVQT can circumvent this issue if we allow the first variational circuit to split the system into $n/n_s$ independent subsystems of size $n_s$. In this case, we can determine the total entropy from the entropies of the individual subsystems, so that the memory requirement scales linearly with the system size:
\begin{equation}
\begin{aligned}
M &= \frac{n}{n_s} 2^{n_s} \cdot M(N)
\end{aligned}
\end{equation}
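A tiny sketch of this memory count (the function name is ours; it counts the stored probability entries, each occupying $M(N)$):

```python
def memory_cells(n, n_s):
    """Number of stored probability entries when VQC_1 splits n qubits
    into n/n_s independent subsystems of n_s qubits each."""
    assert n % n_s == 0, "n_s must divide n"
    return (n // n_s) * 2**n_s
```

With $n_s=n$ (no splitting) this recovers the exponential count $2^n$, while at fixed $n_s$ it grows only linearly in $n$.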
\subsubsection{Complexity}\label{sec:complexity}
The goal of the qVQT is to approximate the density matrix of a mixed state, which for an $n$-qubit system is a $2^n\times 2^n$ Hermitian matrix with trace $1$. Therefore, the qVQT has $2^{2n}-1$ degrees of freedom. An equivalent VQE only tries to find a pure state and hence has $2^{n+1}-2$ degrees of freedom. We assess in Appendix~\ref{sec:apx_scaling_chain} whether the computational effort is comparable to a VQE with twice as many qubits or whether it adds an exponential pre-factor to the cost of a VQE. For the rather small examples used in our numerical experiments we find that the necessary number of parameters, and therefore the cost of the optimization, depends strongly on the variational circuits, and we do not find evidence for an exponential scaling compared to the VQE.
\section{Results and Discussions}
We demonstrate the utility of the qVQT by investigating two model systems: a 1-dimensional Heisenberg chain and a 2-dimensional J1-J2 Heisenberg model. For this purpose, we implement the qVQT algorithm using the toolchain provided by qiskit~\cite{Qiskit} and its associated quantum simulator. For all numerical experiments we consider three performance metrics (similar to Ref.~\onlinecite{verdon_quantum_2019}). The first one is the difference between the numerically computed and the exact free energy, while the second one is based on the fidelity, defined as
\begin{align}
f(\hat{\rho}_1,\hat{\rho}_2) = \left(\text{Tr}\left(\sqrt{\sqrt{\hat{\rho}_1}\hat{\rho}_2\sqrt{\hat{\rho}_1}}\right)\right)^2.
\end{align}
To obtain a criterion which vanishes as the density matrix $\hat{\rho}$ approaches the Gibbs state $\hat{\rho}_\text{Gibbs}$, we use the metric $f_m=1-f(\hat{\rho},\hat{\rho}_\text{Gibbs})$.
The third metric we use is the trace distance:
\begin{align}
Td(\hat{\rho},\hat{\rho}_\text{Gibbs}) = \frac{1}{2}\text{Tr}\left(\sqrt{(\hat{\rho}-\hat{\rho}_\text{Gibbs})^\dagger(\hat{\rho}-\hat{\rho}_\text{Gibbs})}\right).
\end{align}
All three metrics vanish in the limit of ideal performance.
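For reference, both density-matrix metrics can be evaluated with plain numpy. This is an illustrative sketch (our own helper names), using an eigendecomposition-based matrix square root, which is well defined for Hermitian positive semidefinite density matrices:

```python
import numpy as np

def _psd_sqrt(rho):
    # Matrix square root of a Hermitian PSD matrix via eigendecomposition;
    # tiny negative eigenvalues from round-off are clipped to zero.
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def fidelity(rho1, rho2):
    # f(rho1, rho2) = (Tr sqrt(sqrt(rho1) rho2 sqrt(rho1)))^2
    s = _psd_sqrt(rho1)
    return float(np.real(np.trace(_psd_sqrt(s @ rho2 @ s)))) ** 2

def trace_distance(rho1, rho2):
    # Td = (1/2) Tr |rho1 - rho2|; the difference is Hermitian, so the
    # absolute eigenvalues give the singular values.
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2))))
```

For example, a pure state versus the maximally mixed single-qubit state gives fidelity $1/2$ and trace distance $1/2$.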
\subsection{1D Heisenberg Chain with Transverse Fields}\label{sec:1dHB}
The first model system we investigate is the 1D Heisenberg chain with transverse fields and nearest neighbor hopping, given by the Hamiltonian:
\begin{align}
H = \sum_{\braket{ij}} J\left[\sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z\right] + \sum_i \left[J_x\sigma_i^x+J_z\sigma_i^z\right],
\end{align}
where $\sigma_i^{\{x,y,z\}}$ denotes a Pauli $\{x,y,z\}$ operator on qubit $i$ and the first sum runs over all pairs $\braket{ij}$ of nearest neighbors.
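For concreteness, this Hamiltonian can be assembled as a dense matrix via Kronecker products. This is an illustrative sketch, not the qiskit implementation used in the paper; we assume open boundary conditions for the chain, and the helper names are ours:

```python
import numpy as np
from functools import reduce

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_on(site, op, n):
    # Embed a single-qubit Pauli `op` at position `site` of an n-qubit register.
    return reduce(np.kron, [op if k == site else I2 for k in range(n)])

def heisenberg_chain(n, J=-1.0, Jx=0.3, Jz=0.2):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):  # nearest-neighbor couplings (open chain assumed)
        for s in (SX, SY, SZ):
            H += J * pauli_on(i, s, n) @ pauli_on(i + 1, s, n)
    for i in range(n):      # transverse fields
        H += Jx * pauli_on(i, SX, n) + Jz * pauli_on(i, SZ, n)
    return H
```

With the parameters of the text ($n=4$, $J=-1$, $J_x=0.3$, $J_z=0.2$) this yields a $16\times16$ Hermitian, traceless matrix.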
Verdon \textit{et al.}~\cite{verdon_quantum_2019} analyzed this particular model for 4 qubits with $J=-1$, $J_z=0.2$ and $J_x=0.3$ using the hVQT. To allow a direct comparison, we employ the same parameters and use the qVQT to calculate the thermal state at the inverse temperature $\beta =1.3$, which Verdon \textit{et al.} pointed out to be the most challenging.
\subsubsection{Required Circuit Depth\label{sec:reqcirc}}
\begin{figure}[h!]
\includegraphics[scale=0.6]{20perc_numvar_r3.pdf}
\caption{\label{fig:4q_20perc_numvar} Results for a 4-qubit 1D Heisenberg chain with transverse fields at $\beta=1.3$, showing the best \nth{20} percentile of the three performance metrics as a function of the total number of parameters. The difference between the computed and exact values of the free energy, the $f_m$-fidelity measure between the computed and exact density matrix, and the trace distance between the computed and exact density matrix are shown in green, red, and blue, respectively.}
\end{figure}
For a qVQT with a minimal entropy circuit, i.e., one where the circuit $\text{VQC}_1$ only contains a Pauli $x$ rotation on each qubit, we already obtain accurate results with rapid convergence. Increasing the number of variational parameters of the energy circuit $\text{VQC}_2$ further improves the accuracy. We perform a statistical analysis for different total numbers of variational parameters by conducting 100 runs starting from random initial parameters, using the limited-memory Broyden--Fletcher--Goldfarb--Shanno bound optimizer (L\_BFGS\_B) \cite{Byrd_BFGS_1995,Zhu_BFGS_1997} as implemented in qiskit with a target gradient tolerance of \num{1e-3}. The three performance metrics of these statistical experiments are shown in Fig.~\ref{fig:4q_20perc_numvar}, illustrating that the qVQT yields an approximation to the density matrix whose accuracy can be improved by increasing the circuit depth and the associated computational cost. The corresponding scaling with respect to the required number of variational parameters is linear, as suggested by our numerical experiments (see appendix~\ref{sec:apx_scaling_chain}).
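The multi-start protocol described above can be sketched as follows, assuming scipy's L-BFGS-B implementation in place of the qiskit-wrapped optimizer; the cost function stands in for the free-energy objective, and the helper name is ours:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(cost, n_params, n_runs=100, gtol=1e-3, seed=0):
    """Restart L-BFGS-B from random initial parameters and keep every run,
    mimicking the statistical protocol described in the text."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_runs):
        x0 = rng.uniform(0.0, 2.0 * np.pi, size=n_params)  # random angles
        res = minimize(cost, x0, method="L-BFGS-B", options={"gtol": gtol})
        results.append(res.fun)
    return np.sort(np.array(results))  # best run first; percentiles follow
```

Sorting the final cost values makes the best-run and best-20th-percentile statistics used in the figures immediate.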
\subsubsection{Temperature Dependence}
In Fig.~\ref{fig:crit_beta} we show the dependence of the three performance metrics on the inverse temperature. In the low-temperature limit ($\beta\gg 1$) the ground state is dominant and the accuracy improves, as the algorithm does not need to calculate the excited states very precisely. In the high-temperature limit ($\beta\ll 1$), on the other hand, the splitting of the eigenstates becomes less important in comparison to the entropy. Clearly, temperatures around $\beta=1$ are the most challenging for the algorithm, as both classical and quantum-mechanical correlations need to be captured correctly.
Since the qVQT algorithm is similar to the hVQT and mainly differs in the method used to produce the classical probability distribution, the temperature dependence above is comparable to the results of Verdon \textit{et al.}~\cite{verdon_quantum_2019}. Most importantly, however, by switching from the hVQT to the qVQT a significantly smaller number of quantum circuits needs to be executed on the quantum device, thanks to the intermediate measurement; this is a key advantage of the qVQT.
\begin{figure}
\includegraphics[scale=0.6]{crit_beta_r3.pdf}
\caption{\label{fig:crit_beta} The temperature dependence for a 4-qubit 1D Heisenberg chain with transverse fields, showing the three performance metrics as a function of the inverse temperature. The color coding corresponds to the one in Fig.~\ref{fig:4q_20perc_numvar}. The bottom line within each criterion corresponds to the best result obtained from 100 runs starting from random parameters, using the L\_BFGS\_B optimizer with a gradient tolerance of \num{1e-3}. The dark shaded region denotes the range of the best 20-percentile of all runs, while the light shaded region denotes the interval up to the average over all runs. The dashed vertical line denotes the value of $\beta=1.3$ used in Sec.~\ref{sec:reqcirc}.}
\end{figure}
\subsection{2D J1-J2 Heisenberg Model}
The second model system we investigate is the Heisenberg model with nearest and next-nearest neighbor interactions. Such systems have been extensively used to simulate and better understand the behavior of real magnetic materials that can be mapped to a spin model~\cite{noauthor_thermodynamic_2016}. The Hamiltonian of this model is given by:
\begin{equation}
H = \sum_{\braket{i,j}} J_1\vec{S}_i\vec{S}_j+\sum_{\braket{\braket{i,j}}} J_2\vec{S}_i\vec{S}_j,
\end{equation}
where the first and second summations run over nearest and next-nearest neighbors, respectively. It is well known that this system develops three phases, depending on the relative interaction parameter $\alpha$, defined by $J_1 = \sin(\alpha)$ and $J_2=\cos(\alpha)$, which implies the normalization $J_1^2+J_2^2=1$. When $J_2>0$ and the next-nearest neighbor interactions are stronger than the nearest neighbor interactions, the spins form a \textit{stripe} configuration. When the nearest neighbor interactions dominate or $J_2<0$, the system becomes ferromagnetic (FM) or antiferromagnetic (AFM) for $J_1<0$ and $J_1>0$, respectively.
To distinguish these three phases we construct two correlation functions that serve as order parameters:
\begin{align}
c_0 &= \frac{1}{N_{nn}}\sum_{\braket{i,j}} \braket{\sigma_i^z\sigma_j^z}\nonumber\\
c_1 &= \frac{1}{N_{nnn}}\sum_{\braket{\braket{i,j}}} \braket{\sigma_i^z\sigma_j^z},
\end{align}
where $c_0$ is the nearest neighbor correlation function averaged over all $N_{nn}$ pairs of nearest neighbors, and, analogously, $c_1$ is the next-nearest neighbor correlation function. Note that the Hamiltonian is invariant under global spin rotations, i.e., for every eigenstate the rotated states are also eigenstates of the Hamiltonian with the same energy. For the terms used to calculate the correlation functions this translates to $\braket{\sigma_i^z\sigma_j^z} = \braket{\vec{S}_i\vec{S}_j}/3$. Because the eigenvalues of $\vec{S}_i\vec{S}_j$ range between $-3$ and $1$, the correlation functions can assume values in the interval $[-1,1/3]$. A correlation function greater than zero means that the ferromagnetic contribution dominates the antiferromagnetic one, while for a negative correlation function the antiferromagnetic contribution is dominant.
We perform qVQT experiments for a 4-qubit, two-dimensional J1-J2 Heisenberg lattice at a range of parameter angles $\alpha$ and an inverse temperature of $\beta=1$ to obtain the eigenstates of the Hamiltonian and, from those, the phase diagram. At each value of $\alpha$ we perform 100 runs, starting with random initial parameters, again using the L\_BFGS\_B optimizer with a gradient convergence tolerance of \num{1e-3}. Our qVQT algorithm uses two hardware-efficient variational circuits with depths of 2 and 7 for $\text{VQC}_1$ and $\text{VQC}_2$, respectively, which results in a total of 76 variational parameters.
The exact correlation functions and the results obtained by the qVQT experiments at $\beta = 1$ are shown in Fig.~\ref{fig:corelpolar2}. We clearly see the three phases together with the corresponding phase transitions. For almost all parameter angles the numerical experiments match the exact values very well, except for the region near the phase transition between the ferromagnetic and the antiferromagnetic phase at $J_1=0$ and $J_2=-1$. Remarkably, these increased errors are of similar order on both sides of the transition and therefore do not influence the transition angle where the correlation function $c_0$ becomes zero, the point that we identify with the phase transition. Two possible explanations for these errors are that (a) the approximation of the classical probability distribution breaks down in this regime, or (b) the splitting of the energy eigenstates cannot be performed precisely when the ferromagnetic and antiferromagnetic states lie in the same energy domain.
\begin{figure}
\includegraphics[width=0.90\columnwidth]{corel_polar2_MA.pdf}
\caption{\label{fig:corelpolar2} Nearest neighbor $c_0$ and next-nearest neighbor correlation function $c_1$ of a 4-qubit 2D J1-J2 Heisenberg model as a function of the parameter angle $\alpha$ at inverse temperature $\beta=1$. The lines are the results obtained by exact diagonalization, while the crosses denote the qVQT-results.}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=1.0\columnwidth]{phasediag_T_theoexp_r2_MA2.pdf}
\caption{\label{fig:phasediag_T_theoexp} The phase diagram of a 4-qubit 2D Heisenberg model, obtained from a qVQT experiment (dashed red) together with the exact ground truth (solid yellow). The phase transitions are given by the angles where the correlation functions vanish. The colormap indicates the combined deviation of the qVQT correlation functions from the exact results, $\Delta_c = |c_0^{theo}-c_0^{qVQT}|+|c_1^{theo}-c_1^{qVQT}|$, with linear interpolation between all data points. The explicit qVQT results are calculated at the white crosses at $T=1$ and classically extended to temperatures $T\neq1$.}
\end{figure}
Next, we compute the temperature-dependent phase diagram of the Heisenberg model. For this purpose we use the results from our qVQT calculation above at $\beta=1$, monitor the energy spectrum, and calculate the correlation functions of each eigenstate independently. The correlation functions $c_0$ and $c_1$ are then obtained by mixing the correlation functions of the eigenstates according to the probabilities given in equation \eqref{eq:definition_Gibbs_state}. This approach has the advantage that the temperature-dependent results can be estimated without explicitly performing an optimization at each temperature, but comes with the drawback that the precision decreases when the temperature differs significantly from the original optimization temperature.
This behavior is reflected in Fig.~\ref{fig:phasediag_T_theoexp}, which shows the phase boundaries as a function of temperature computed from our numerical experiments and the exact results obtained by diagonalization of the Hamiltonian. Note that the agreement is excellent for $\beta\approx1$, while the accuracy decreases as we deviate from this inverse temperature. Nevertheless, the overall phase behavior is captured correctly, demonstrating that our extension from the qVQT-calculation at $\beta=1$ captures the physics accurately.
\section{Conclusions}
In summary, we present a new variational quantum algorithm, called qVQT, which extends the VQE to finite temperatures. Our approach expands on the idea of the hVQT, but implements both the entropic and the energetic contribution to the free energy on a quantum circuit. In this way we effectively reduce the communication between the classical and quantum devices, as well as the number of quantum circuits that must be executed to compute an accurate Gibbs state.
We demonstrate the utility of the qVQT by performing extensive numerical experiments on quantum simulators for two model systems and show that our algorithm is well suited to calculate finite temperature properties or excited states on a quantum computer. The resource requirements as well as the scaling behavior are comparable to VQEs (see also appendix~\ref{sec:apx_scaling_FVQT}), and we expect our algorithm to perform equally well for a given problem size. Hence, the qVQT provides a powerful tool to study quantum systems at finite temperature, producing useful results with resources available on current NISQ devices for a wide range of applications.
\section{Acknowledgements}
We thank the members of the MANIQU consortium for valuable expert discussions. JS, MA and TE gratefully acknowledge support from the German Federal Ministry of Education and Research (BMBF) under project No. 13N15574.
\section{#1}}
\setcounter{footnote}{1}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\newcommand{\stackrel{\rm def}{=}}{\stackrel{\rm def}{=}}
\newcommand{{\mathcal A}}{{\mathcal A}}
\newcommand{{\mathcal B}}{{\mathcal B}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathcal D}}{{\mathcal D}}
\newcommand{{\mathcal E}}{{\mathcal E}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathcal H}}{{\mathcal H}}
\newcommand{{\mathcal I}}{{\mathcal I}}
\newcommand{{\mathcal J}}{{\mathcal J}}
\newcommand{{\mathcal K}}{{\mathcal K}}
\newcommand{{\mathcal L}}{{\mathcal L}}
\newcommand{{\mathcal M}}{{\mathcal M}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\mathcal O}}{{\mathcal O}}
\newcommand{{\mathcal P}}{{\mathcal P}}
\newcommand{{\mathcal Q}}{{\mathcal Q}}
\newcommand{{\mathcal R}}{{\mathcal R}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal U}}{{\mathcal U}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal W}}{{\mathcal W}}
\newcommand{{\mathcal X}}{{\mathcal X}}
\newcommand{{\mathcal Y}}{{\mathcal Y}}
\newcommand{{\mathcal Z}}{{\mathcal Z}}
\newcommand{\bH} {{\mathbb{H}}}
\newcommand{{\mathbb{T}}}{{\mathbb{T}}}
\newcommand{\sbm}[1]{\left[\begin{smallmatrix} #1
\end{smallmatrix}\right]}
\newcommand{{\boldsymbol \iota}}{{\boldsymbol \iota}}
\newcommand{{\boldsymbol{\mathcal L}}}{{\boldsymbol{\mathcal L}}}
\newcommand{{\boldsymbol{\Theta}}}{{\boldsymbol{\Theta}}}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{{\mathbb D}}{{\mathbb D}}
\newcommand{{\underline{T}}}{{\underline{T}}}
\newcommand{{\boldsymbol{\Delta}}}{{\boldsymbol{\Delta}}}
\newtheorem{thm}{Theorem}[section]
\newtheorem{conj}[thm]{Conjecture}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{rslt}[thm]{Result}
\newtheorem{obs}[thm]{Observation}
\newtheorem{fact}[thm]{Fact}
\newtheorem{proposition}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{theorem}{Theorem}
\newtheorem{definition}[thm]{Definition}
\newtheorem{remark}[thm]{Remark}
\newtheorem{note}[thm]{Note}
\newtheorem{example}[thm]{Example}
\newcounter{tmp}
\numberwithin{equation}{section}
\def\textmatrix#1&#2\\#3&#4\\{\bigl({#1 \atop #3}\ {#2 \atop #4}\bigr)}
\def\dispmatrix#1&#2\\#3&#4\\{\left({#1 \atop #3}\ {#2 \atop #4}\right)}
\begin{document}
\title{Bounded analytic functions on certain symmetrized domains}
\author{Mainak Bhowmik}
\address{Department of Mathematics\\
Indian Institute of Science\\
Bangalore 560012, India}
\email{mainakb@iisc.ac.in}
\author{Poornendu Kumar}
\address{Department of Mathematics\\
Indian Institute of Science\\
Bangalore 560012, India}
\email{poornendukumar@gmail.com, poornenduk@iisc.ac.in}
\thanks{2020 {\em Mathematics Subject Classification}: 47A48, 47A68, 47A56, 32A17, 32E30.\\
{\em Key words and phrase}: Approximation, Pick Interpolation, Rational inner functions, Realization formula, Symmetrized bidisc.}
\maketitle
\begin{abstract}
In this note, we first give a canonical structure of scalar-valued rational inner functions on the symmetrized polydisc. Then we generalize Caratheodory's approximation result to the setting of matrix-valued holomorphic functions on the symmetrized bidisc, $\mathbb{G}$. We approximate matrix-valued holomorphic functions with sup-norm not greater than one by rational iso-inner or coiso-inner functions uniformly on compact subsets of $\mathbb{G}$ using the interpolation results. En route, we also prove that any solvable data with initial nodes on the symmetrized bidisc and the final nodes in the operator norm unit ball of $M\times N$ matrices has a rational iso-inner or coiso-inner solution. Finally, we give necessary and sufficient conditions for matrix-valued Schur class functions on $\mathbb{G}$ to have a Schur class factorization on $\mathbb{G}$ in several situations.
\end{abstract}
\section{Introduction and Preliminaries}
\subsection{Rational Inner functions}
A rational function on the open unit disc $\mathbb{D}$ of the complex plane with poles outside the closed unit disc $\overline{\mathbb{D}}$ is called a \textit{rational inner function} if it maps the unit circle $\mathbb{T}$ to itself. Such functions have been studied extensively for their usefulness. For example, a classical theorem of Caratheodory says that any holomorphic self-map of $\mathbb{D}$ can be approximated uniformly on compact subsets of $\mathbb{D}$ by rational inner functions. It is well known that every map of the form $$ e^{i\theta} \prod_{j=1}^n \frac{z-a_j}{1-\bar{a_j}z}$$ is rational inner, and that these are all the rational inner functions on $\mathbb{D}$. A bounded holomorphic map on a bounded domain $\Omega\subset\mathbb{C}^d$ is said to be {\em inner} if it sends almost every point of the distinguished boundary $b\Omega$ (the Shilov boundary with respect to the uniform algebra $A(\Omega)$ of functions holomorphic in $\Omega$ and continuous in $\overline{\Omega}$) to the unit circle. We say that $f$ is \textit{rational inner} if it is both rational and inner. It is well known that the distinguished boundary of $\mathbb{D}^d$ is $\mathbb{T}^d$. Various attempts have been made to find the structure of rational inner functions on domains in $\mathbb{C}^d$. None of them is more striking than the following result of Rudin from \cite{Rudin}.
\begin{thm}\label{rudin-thm}
Given a rational inner function $f$ on $\mathbb{D}^d$, there exist a $d$-tuple of positive integers $n=(n_1, n_2, \dots, n_d)$, a unimodular constant $\tau\in\mathbb{T}$ and a polynomial $\xi$ with no zeros in $\mathbb{D}^d$ such that
\begin{align}\label{poly-inner}
f(z)=\tau z^{n}\frac{\widetilde{\xi}(z)}{\xi(z)}.
\end{align}
Conversely, any rational function of the form \eqref{poly-inner} is inner. Moreover, every inner function $f\in A(\mathbb{D}^d)$ is a rational function of the form \eqref{poly-inner} with the additional property that $\xi$ has no zero in $\overline{\mathbb{D}^d}$.
\end{thm}
In the above, for a polynomial $p \in \mathbb{C}[z_1,\dots,z_d]$ of degree $n =(n_1,\dots,n_d)$, the polynomial $\tilde{p}$ is defined by $\tilde{p}(z) = z^n \overline{p\left( \frac{1}{\bar{z}} \right)}$, where $z^n = \prod_{j=1}^d z_j^{n_j}$ and $\frac{1}{\overline{z}}=(\frac{1}{\overline{z_1}}, \frac{1}{\overline{z_2}},\dots, \frac{1}{\overline{z_d}})$.
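To ground this definition, the reflection $p \mapsto \tilde{p}$ can be checked symbolically. The sketch below (our own, using sympy) is restricted to polynomials with real coefficients, so the complex conjugations can be dropped:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def reflect(p, zs, n):
    """ptilde(z) = z^n * p(1/z) for a real-coefficient polynomial p of
    degree n = (n_1, ..., n_d); conjugation omitted (real coefficients)."""
    q = p
    for z in zs:
        q = q.subs(z, 1 / z)          # substitute z_j -> 1/z_j
    mono = sp.prod([z**k for z, k in zip(zs, n)])
    return sp.expand(sp.cancel(q * mono))  # clear denominators
```

Reflecting twice recovers $p$, and the reflection is multiplicative on products, consistent with the algebraic lemmas proved later for this operation.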
In this article, domains of our interest are the symmetrized bidisc $$\mathbb{G}:=\{(z+w, zw):(z,w)\in\mathbb{D}^2\}$$ and the symmetrized polydisc
$$\mathbb{G}_{d}:=\{\pi_d(z_1,z_2,\dots,z_d) \in \mathbb{C}^d : (z_1,z_2,\dots,z_d)\in\mathbb{D}^d \} $$
where the symmetrization map $\pi_d:\mathbb{D}^d\rightarrow\mathbb{C}^d$ is defined by
$$\pi_d(z_1,z_2,\dots, z_d)=\left(\sum_{j=1}^d z_j,\sum_{i< j}z_iz_j,\dots, \prod_{j=1}^d z_j\right).$$
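Concretely, $\pi_d$ collects the elementary symmetric polynomials of its arguments; here is a small sympy sketch (the helper name is ours):

```python
import itertools
import sympy as sp

def symmetrize(zs):
    """pi_d: the elementary symmetric polynomials (s_1, ..., s_{d-1}, p)
    of the coordinates zs."""
    d = len(zs)
    return tuple(
        sp.expand(sum(sp.prod(c) for c in itertools.combinations(zs, k)))
        for k in range(1, d + 1)
    )
```

For instance, $\pi_3(2,3,4) = (9, 26, 24)$, and $\pi_2(z,w) = (z+w, zw)$ recovers the coordinates of $\mathbb{G}$.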
These domains are non-convex but polynomially convex; see \cite{Edigarian-Zwonek}. They have attracted a good amount of interest in recent years; see \cite{COMM-LIFT, SYM_GEO, SYM_Real, ay-jot, BDSIMRN, Bhattacharyya-Pal, Tirtha-Hari-JFA, SR, Costara, Edigarian-Zwonek, DKS, MSRZ}. A typical member of the symmetrized polydisc is denoted by $(s_1, s_2,\dots,s_{d-1}, p)$. Note that $\mathbb{G}_2=\mathbb{G}$. The distinguished boundary of the symmetrized polydisc is $b\mathbb{G}_d = \pi_d\left( \mathbb{T}^d \right)$. From \cite{SR}, we have the following description:
$$b\mathbb{G}_d = \left\lbrace (s_1, \dots, s_{d-1}, p) : |p|=1, s_j=\overline{s_{d-j}}p \,\, \text{and}\,\, \left(\frac{d-1}{d}s_1, \frac{d-2}{d} s_2, \dots, \frac{1}{d}s_{d-1} \right)\in \overline{\mathbb{G}}_{d-1} \right\rbrace.$$
Rudin \cite{Rudin} proved the Caratheodory type approximation result for $\mathbb{D}^d$. We observe that from his result the Caratheodory type approximation on $\mathbb{G}_d$ can be derived; hence a structure theorem for rational inner functions on $\mathbb{G}_d$ is essential. The first main result of this paper is the following.
\begingroup
\setcounter{tmp}{\value{thm}}
\setcounter{thm}{0}
\renewcommand\thethm{\Alph{thm}}
\begin{thm}\label{Theorem A}
Given a rational inner function $f$ on $\mathbb{G}_d$, there exist a non-negative integer $k$, $\tau\in\mathbb{T}$ and a polynomial $\xi$ with no zero in $\mathbb{G}_d$ such that
\begin{align}\label{RIF-G}
f(s_1, s_2, \dots, s_{d-1}, p)=\tau p^k\frac{\overline{{\xi}\left(\frac{\overline{s_{d-1}}}{\overline{p}}, \frac{\overline{s_{d-2}}}{\overline{p}}, \dots, \frac{\overline{s_1}}{\overline{p}}, \frac{{1}}{\overline{p}} \right)}}{\xi(s_1, s_2, \dots, s_{d-1}, p)}.
\end{align}
Conversely, any rational function of the form \eqref{RIF-G} is inner. Moreover, any inner function $f\in A(\mathbb{G}_d)$ is a rational function of the form \eqref{RIF-G} with the additional property that $\xi$ has no zero in $\overline{\mathbb{G}_d}$.
\end{thm}
\endgroup
\subsection{Realization and interpolation}
Given a Hilbert space ${\mathcal H}$ and an isometry $\left[\begin{smallmatrix}
A & B \\
C & D
\end{smallmatrix}\right]$ on $\mathbb{C}\oplus{\mathcal H}$, the function
\begin{align}\label{Rel}
f(z)=A +zB(I-zD)^{-1}C
\end{align}
is clearly holomorphic on $\mathbb{D}$ with sup-norm not greater than $1$. It is remarkable that for every holomorphic function $f$ on $\mathbb{D}$ with sup-norm no greater than $1$, there exist a Hilbert space ${\mathcal H}$ and an isometry $\left[\begin{smallmatrix}
A & B \\
C & D
\end{smallmatrix} \right]$ on $\mathbb{C}\oplus{\mathcal H}$, such that
$f$ can be expressed as in \eqref{Rel}. The expression \eqref{Rel} is referred to as the {\em realization formula} and sometimes we call it {\em TFR} of $f$ and we say that $f$ has {\em finite dimensional realization} if $\mathcal{H}$ is finite dimensional. If $\left[\begin{smallmatrix}
A & B \\
C & D
\end{smallmatrix} \right]$ is a contraction, then we say that \eqref{Rel} is a {\em contractive} TFR. The realization formula has been generalized to the case of an annulus \cite{Dritschel-Maccu}, the complex unit ball \cite{Ball-Bolo}, the polydisc \cite{Agler} and to more general test functions \cite{Dritschel-Maccu, Dri-Mar- Macc}. Recently, the realization formula has also been obtained in the case of the symmetrized bidisc \cite{SYM_Real, Tirtha-Hari-JFA}.
\setcounter{thm}{\thetmp}
\begin{thm}[Realization Formula (Agler and Young)]\label{thm3.1}
Let $f:\mathbb G\to{\mathcal B}({\mathcal E},{\mathcal F})$ be an analytic function. Then $f$ is in the unit ball of $H^\infty(\mathbb G)$ if and only if there exist a Hilbert space $\mathcal{H}$ and unitary operators
\begin{align*}
\tau:\mathcal{H}\to \mathcal{H} \quad \text {and} \quad
U=\begin{bmatrix}
A & B\\
C & D
\end{bmatrix}
:\begin{bmatrix}{\mathcal E}\\\mathcal{H}\end{bmatrix}\to \begin{bmatrix}{\mathcal F}\\\mathcal{H}\end{bmatrix}
\end{align*}
such that
\begin{align*}
f(s,p)= A+B\varphi(\tau,s,p)(I-D\varphi(\tau,s,p))^{-1}C,
\end{align*}
where $\varphi(\tau, s,p)= (2p\tau-s)(2-s\tau)^{-1}$.
\end{thm}
We shall denote the set of all $M\times N$ matrices with complex entries by $ \mathbb{M}_{M\times N}(\mathbb{C})$, and when $M=N$ we denote it by $\mathbb{M}_N{(\mathbb{C})}$.
\begin{definition}
Let $\Omega$ be a bounded domain in $\mathbb{C}^2$. We say that a rational map $\Phi=((\Phi_{ij})):\Omega \rightarrow \mathbb{M}_{M\times N}(\mathbb{C})$ is
\begin{enumerate}
\item iso-inner if \begin{align*}
\Phi(z)^*\Phi(z)=I_{N}\quad \quad \text { a.e. } z\in b\Omega,
\end{align*}
\item coiso-inner if \begin{align*}
\Phi(z)\Phi(z)^*=I_{M}\quad \quad \text{ a.e. } z\in b\Omega.
\end{align*}
\end{enumerate}
For a rational function $f=\frac{f_1}{f_2}$ on $\Omega$, the degree $\operatorname{deg}f_1 =(d_1,d_2)$ of the numerator will be referred to as the degree of $f$.
\end{definition}
In a recent work, Knese \cite{Knese} proved that any rational iso-inner (or coiso-inner) function has a finite dimensional realization. We observe that the same holds in the case of $\mathbb{G}$; see Theorem \ref{T:SymRatInn}.
Let $\lambda_1, \lambda_2,\dots,\lambda_n$ be points in $\Omega\subset\mathbb{C}^2$ and $A_1, A_2,\dots, A_n$ be points in the closed operator-norm unit ball of $\mathbb{M}_{M\times N}(\mathbb{C})$. The Pick-Nevanlinna interpolation problem is concerned with the study of functions $\Psi$ which are analytic on $\Omega$ with $\|\Psi(z)\|\leq{1}$ for all $z\in\Omega$ and satisfy the interpolation conditions
$$\Psi(\lambda_j)=A_j\quad \quad\quad \quad (j=1,2,\dots, n).$$
When $\Omega=\mathbb{D}$, the problem was solved over a century ago \cite{PICK, Nev}. Moreover, it was also shown that solvable data on $\mathbb{D}$ always have a rational inner solution. Agler and McCarthy studied the Pick-Nevanlinna problem on $\mathbb{D}^2$ and gave a necessary and sufficient condition for such functions to exist. They also proved that a solvable matrix Nevanlinna--Pick problem with initial nodes on the bidisc and final nodes in the closed operator-norm unit ball of $M\times N$ complex matrices has a rational iso-inner or coiso-inner solution; see \cite{AM-Bidisc, AM-Book}. Recently, the Pick-Nevanlinna problem has been carried out in the setting of $\mathbb{G}$ \cite{SYM_Real, Tirtha-Hari-JFA}. In \cite{DKS}, it was shown that a solvable Pick-Nevanlinna problem with initial nodes on $\mathbb{G}$ and final nodes in the closed operator-norm unit ball of complex square matrices has a rational inner solution. We observe that this theorem can also be proved when the final nodes are in the closed operator-norm unit ball of $M\times N$ complex matrices. With the help of this theorem, we prove a Caratheodory type approximation theorem for holomorphic functions on $\mathbb{G}$ taking values in the operator-norm unit ball of $\mathbb{M}_{M\times N}(\mathbb{C})$.
The second main theorem of this paper is the following.
\begingroup
\setcounter{tmp}{\value{thm}}
\setcounter{thm}{1}
\renewcommand\thethm{\Alph{thm}}
\begin{thm}\label{Main2}
A solvable matrix Nevanlinna-Pick problem with initial nodes in $\mathbb{G}$ and final nodes in the closed operator-norm unit ball of $M\times N$ complex matrices has a rational iso-inner or coiso-inner solution.
\end{thm}
\endgroup
\subsection{Factorization}
The \textit{Schur class} on $\Omega$, a bounded domain in $\mathbb{C}^d$, is the set of all $N\times N$ matrix-valued holomorphic functions $\theta$ on $\Omega$ such that $ \lVert \theta(z)\rVert \leq 1$ for each $z\in \Omega$. We denote it by $\mathcal{S}(\Omega, \mathbb{M}_N(\mathbb{C}))$. By a {\em factorization} of a Schur class function $\theta$, we mean $\theta=\theta_1\theta_2$ for some $\theta_1$ and $\theta_2$ in the Schur class. This is a very old theme, which appears in many places, for instance in invariant subspace theory and in the theory of characteristic functions (see \cite{Nagy-Foias, Brodskii, Brod}). For factorizations of scalar-valued Schur class functions on $\mathbb{D}$ and $\mathbb{D}^2$, Debnath and Sarkar \cite{Ramlal-Jaydeb-1} gave necessary and sufficient conditions in terms of the colligation operator. Their technique can also be applied to a special class of Schur functions on $\mathbb{D}^d$. Taking a cue from these, in the final section we give necessary and sufficient conditions in terms of the colligation operator for factorization of matrix-valued Schur class functions on $\mathbb{G}$ in several situations. We quote a sample result here.
\setcounter{thm}{\thetmp}
\begingroup
\setcounter{tmp}{\value{thm}}
\setcounter{thm}{2}
\renewcommand\thethm{\Alph{thm}}
\begin{thm}
Let $\theta \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ with $\theta(0,0) = A$, an invertible $N\times N$ matrix. Then $\theta = \psi_1 \psi_2$ for some $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ if and only if there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$, a unitary operator $\tau $ on $ \mathcal{H}_1 \oplus \mathcal{H}_2$ such that $\mathcal{H}_1,\mathcal{H}_2$ are invariant under $\tau$ and an isometric colligation $$
V= \left[ \begin{array}{c| c c}
A & B_1 & B_2 \\
\hline
C_1 & D_1 & D_2 \\
C_2 & 0 & D_3
\end{array}\right] : \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right) \to \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right)$$
such that \begin{align} \label{cond-2}
D_2= C_1 A^{-1} B_2 , \quad A = A_1 A_2 \quad \text{and} \quad A_2^*A_2 = A^*A + C_1^*C_1
\end{align}
for some $ A_1,A_2 \in \mathbb{M}_N(\mathbb{C}) $
and $$ \theta(s,p)= A + [B_1,B_2] \varphi(\tau,s,p)\left( I_{\mathcal{H}_1 \oplus \mathcal{H}_2} - \begin{bmatrix}
D_1 & D_2\\
0 & D_3
\end{bmatrix} \varphi(\tau,s,p) \right)^{-1} \begin{bmatrix}
C_1 \\
C_2
\end{bmatrix}. $$
\end{thm}
\endgroup
\setcounter{thm}{\thetmp}
\section{Rational Inner Functions on $\mathbb{G}_d$ }
The goal of this section is to study rational inner functions on $\mathbb{G}_d$. First we describe the structure of rational inner functions on $\mathbb{G}_d$; then we prove an approximation result on $\mathbb{G}_d$. Throughout this section, $z$ often denotes the tuple $(z_1, z_2,\dots, z_d)$.
\subsection{Structure of Rational Inner Functions}
We shall start with a few algebraic lemmas which will be used in the proof of the main result of this subsection.
\begin{lemma}\label{L1}
Let $\xi $ and $\eta $ be two polynomials in $\mathbb C[z_1,z_2,\dots,z_d]$ such that $\xi(0,0,\dots,0)\neq 0$ and $\eta(0,0,\dots,0)\neq 0$. Then the following hold:
\begin{enumerate}
\item $\widetilde{\widetilde{\xi}} = \xi $ and
\item $\widetilde{\xi \eta} = \widetilde{\xi}\widetilde{\eta}$.
\end{enumerate}
Moreover, if $\xi$ and $\eta$ are two distinct irreducible polynomials, then the following are also true:
\begin{enumerate}
\item[(3)] If $\xi$ divides $\widetilde{\xi}$, then $\xi$ is a unimodular scalar multiple of $\widetilde{\xi}$ and
\item[(4)] If $\xi$ divides $\widetilde{\eta}$, then $\eta$ is a non-zero scalar multiple of $\widetilde{\xi}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of $(1)$ is obvious. To prove $(2)$, let $n=(n_1,n_2,\dots, n_d)$ and $m= (m_1, m_2,\dots, m_d)$ be the degrees of the polynomials $\xi$ and $\eta$ respectively. Since both $\xi(0,0,\dots,0)$ and $\eta (0,0,\dots,0)$ are non-zero, $\operatorname{deg}(\widetilde{\xi})= n$ and $\operatorname{deg}(\widetilde{\eta})= m $. Thus $\operatorname{deg} (\xi \eta) = n+m =\operatorname{deg} (\widetilde{\xi \eta})$.
Now,\begin{align*}
\widetilde{\xi\eta }(z)&= z^{n+m}\overline{ \xi\eta \left(\frac{1}{\overline{z}}\right)}
=z^n\overline{\xi\left(\frac{1}{\overline{z}}\right)}z^m\overline{\eta\left(\frac{1}{\overline{z}}\right)}
=\widetilde{\xi}(z)\widetilde{\eta}(z).
\end{align*}
Now we prove part $(3)$. Suppose $\xi$ divides $\widetilde{\xi}$. Then there exists $\psi\in\mathbb{C}[z_1,z_2,\dots,z_d]$ such that $\widetilde{\xi}=\psi\xi$. Reflecting once more, $\widetilde{\widetilde{\xi}}=\widetilde{\psi}\widetilde{\xi}$, and hence, by part $(1)$, $\xi=\widetilde{\psi}\widetilde{\xi}=\widetilde{\psi}\psi\,\xi$.
Thus $\widetilde{\psi}\psi=1$, so $\psi$ is a constant $c$ with $\widetilde{\psi}=\overline{c}$ and $|c|=1$; that is, $\psi$ is a unimodular scalar.\\
Finally, suppose that $\xi$ divides $\widetilde{\eta}$. Then there exists $\psi\in\mathbb{C}[z_1,z_2,\dots,z_d]$ such that $\widetilde{\eta}= \psi\xi$. This gives $$\eta=\widetilde{\widetilde{\eta}}=\widetilde{\psi}\widetilde{\xi}.$$ Since $\eta$ is irreducible, $\psi$ must be a non-zero scalar. This proves $(4)$.
\end{proof}
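To illustrate the reflection operation in the simplest case $d=1$, take $\xi(z) = 2-z$, so that $n=1$ and
$$
\widetilde{\xi}(z) = z\,\overline{\xi\left(\tfrac{1}{\overline{z}}\right)} = z\left(2-\tfrac{1}{z}\right) = 2z-1 .
$$
Reflecting once more gives $\widetilde{\widetilde{\xi}}(z) = z\left(\tfrac{2}{z}-1\right) = 2-z = \xi(z)$, in accordance with part $(1)$, and $\widetilde{\xi}/\xi = (2z-1)/(2-z) = \frac{z-\frac{1}{2}}{1-\frac{z}{2}}$ is a Blaschke factor, of modulus one on $\mathbb{T}$.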
Now, we shall prove an elementary lemma about symmetric rational functions on $\mathbb{D}^d$. Let $S_d$ be the symmetric group on $d$ symbols. For a function $f$ defined on $\mathbb{D}^d$ and $\sigma \in S_d$, we define $\sigma(f)(z_1,z_2,\dots, z_d) = f\left(z_{\sigma(1)}, z_{\sigma(2)},\dots, z_{\sigma(d)}\right)$.
\begin{lemma}\label{L2}
Let $f$ be a symmetric rational function on $\mathbb{D}^d$. Then $f$ can be expressed as a quotient of two symmetric polynomials.
\end{lemma}
\begin{proof}
If $f$ is identically zero, then there is nothing to prove. So we assume $f$ to be non-zero. Since $f$ is a rational function, there exist co-prime polynomials $\xi$ and $\eta$ in $\mathbb{C}[z_1, z_2, \dots, z_d]$ such that
$$f=\frac{\xi}{\eta}.$$
Let $\sigma\in S_d$. Then $\sigma(f)=\frac{\sigma(\xi)}{\sigma(\eta)}$. Since $f$ is symmetric, we have
\begin{align}\label{Sym}
\sigma(f)=f=\frac{\xi}{\eta}=\frac{\sigma(\xi)}{\sigma(\eta)}.
\end{align}
Thus, $\xi\sigma(\eta)=\eta\sigma(\xi)$. Since $\xi$ and $\eta$ are co-prime, $\xi$ divides $\sigma(\xi)$ and $\eta$ divides $\sigma(\eta)$. Also note that the total degrees of $\psi$ and $\sigma(\psi)$ are the same for any polynomial $\psi\in\mathbb{C}[z_1, z_2,\dots, z_d]$. Thus,
$$\sigma(\xi)=\lambda_{\sigma}\xi \quad \text{ and } \quad \sigma(\eta)=\mu_{\sigma}\eta$$
for some non-zero scalars $\lambda_{\sigma}$ and $\mu_{\sigma}$. From equation \eqref{Sym}, it follows that $\lambda_{\sigma}=\mu_{\sigma}$.
The map $\sigma\mapsto \lambda_{\sigma}$ defines a group homomorphism from $S_{d}$ to $\mathbb{C}^*$. If we show that this homomorphism is trivial, then $\sigma(\xi)=\xi$ and $\sigma(\eta)=\eta$ for every $\sigma$, which proves that $\xi$ and $\eta$ are symmetric polynomials. Since transpositions generate $S_d$, it suffices to show that $\lambda_\sigma=1$ for every transposition. If possible, suppose there exists a transposition $\sigma=(i, j)\in S_d$ such that $\lambda_{\sigma}\neq{1}$.
Now $\sigma(\xi) = \lambda_{\sigma}\xi$ gives
$$
\xi(z_1,\dots, z_i ,\dots, z_j,\dots, z_d) = \lambda_{\sigma}\xi(z_1,\dots, z_j ,\dots, z_i,\dots, z_d).
$$
Putting $z_i = z_j$, we have
$$
\xi(z_1,\dots, z_j ,\dots, z_j,\dots, z_d) = \lambda_{\sigma}\xi(z_1,\dots, z_j ,\dots, z_j,\dots, z_d).
$$
Since $\lambda_{\sigma}\neq 1$, $\xi(z_1,\dots, z_j ,\dots, z_j,\dots, z_d) =0 $ that is, $z_i - z_j$ divides $\xi$. Similarly, we can show that $z_i - z_j$ divides $\eta$. But this leads to a contradiction as $\xi$ and $\eta$ are co-prime. Hence the above group homomorphism has to be trivial.
\end{proof}
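For instance,
$$
f(z_1,z_2)=\frac{z_1}{2-z_2}+\frac{z_2}{2-z_1}
=\frac{2(z_1+z_2)-(z_1^2+z_2^2)}{(2-z_1)(2-z_2)}
$$
is a symmetric rational function on $\mathbb{D}^2$, and combining the two terms over a common denominator exhibits it as a quotient of two symmetric polynomials, as the lemma guarantees.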
\begin{lemma}\label{L3}
Let $\xi$ be a polynomial in $\mathbb{C}[z_1,z_2,\dots,z_d]$. Then $\frac{\tilde{\xi}}{\xi}$ can be written as
$$ \tau \frac{\tilde{\xi_{j_1}} \tilde{\xi_{j_2}}\dots\tilde{\xi_{j_l}}}{\xi_{j_1}\xi_{j_2}\dots \xi_{j_l}}$$
for some $\tau\in\mathbb{T}$ and irreducible factors $\xi_{j_1}, \xi_{j_2},\dots, \xi_{j_l}$ of $\xi$ such that $ \tilde{\xi_{j_1}} \tilde{\xi_{j_2}}\dots\tilde{\xi_{j_l}}$ and $\xi_{j_1}\xi_{j_2}\dots \xi_{j_l}$ are co-prime.
\end{lemma}
\begin{proof}
Write $\xi$ as $\xi_1\xi_2\dots \xi_r$ for some irreducible polynomials $\xi_1,\dots, \xi_r\in\mathbb{C}[z_1,z_2,\dots,z_d]$. Then, by part $(2)$ of Lemma \ref{L1}, we have
$$\tilde{\xi}=\tilde{\xi_1} \tilde{\xi_2}\dots \tilde{\xi_r}.$$
So,
\begin{align*}
\frac{\tilde{\xi}}{\xi}=\frac{\tilde{\xi_1} \tilde{\xi_2}\dots\tilde{\xi_r}}{\xi_1 \xi_2\dots \xi_r}.
\end{align*}
Note that $\tilde{\xi}$ and $\xi$ can have common factors. If $\xi_j$ divides $\tilde{\xi_j}$ for some $j$, then by part $(3)$ of Lemma \ref{L1}, $\frac{\tilde{\xi_j}}{\xi_j} \in \mathbb{T}$.
Since the $\xi_j$'s are irreducible, any common factor of $\tilde{\xi}$ and $\xi$ is divisible by $\xi_i$ for some $i$. Without loss of generality, assume that $\xi_1$ divides $\tilde{\xi_2}$. Then by part $(4)$ of Lemma \ref{L1}, $\xi_2 = \beta \tilde{\xi_1}$ for some non-zero scalar $\beta$, and hence $\tilde{\xi_2} = \bar{\beta} \xi_1$. So, in this case we have
$$
\frac{\tilde{\xi}}{\xi} = \frac{\tilde{\xi_1} ( \bar{\beta} \xi_1 )\dots \tilde{\xi_r}}{\xi_1 (\beta \tilde{\xi_1}) \dots \xi_r} = \frac{\bar{\beta}}{\beta} \frac{\tilde{\xi_3}\dots\tilde{\xi_r}}{\xi_3 \dots \xi_r}.
$$
After such cancellations, we end up with
$$
\frac{\tilde{\xi}}{\xi}= \tau \frac{\tilde{\xi_{j_1}}\tilde{\xi_{j_2}}\dots\tilde{\xi_{j_l}}}{\xi_{j_1}\xi_{j_2}\dots \xi_{j_l}}
$$
for some $\tau\in\mathbb{T}$ and $j_1, j_2,\dots, j_l$ such that $ \tilde{\xi_{j_1}}\tilde{\xi_{j_2}}\dots\tilde{\xi_{j_l}}$ and $\xi_{j_1}\xi_{j_2}\dots \xi_{j_l}$ are co-prime. This completes the proof.
\end{proof}
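As a simple one-variable illustration of the cancellation above, take $\xi(z)=(2-z)(1-2z)$ with irreducible factors $\xi_1(z) = 2-z$ and $\xi_2(z)=1-2z$. Here $\tilde{\xi_1}(z)=2z-1=-\xi_2(z)$ and $\tilde{\xi_2}(z)=z-2=-\xi_1(z)$, so the two factors cancel completely and
$$
\frac{\tilde{\xi}}{\xi}=\frac{\tilde{\xi_1}\,\tilde{\xi_2}}{\xi_1\xi_2}=\frac{(-\xi_2)(-\xi_1)}{\xi_1\xi_2}=1,
$$
that is, $\tau=1$ and the product over $j_1,\dots,j_l$ is empty.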
Now, we are ready to prove the main theorem about the structure of rational inner functions on $\mathbb{G}_d$. This is Theorem \ref{Theorem A}. We restate it for the reader's convenience.
\begin{thm}
Given a rational inner function $f$ on $\mathbb{G}_d$, there exist a non-negative integer $k$, $\tau\in\mathbb{T}$ and a polynomial $\xi$ with no zero in $\mathbb{G}_d$ such that
\begin{align}\label{THM_RIF-G}
f(s_1, s_2, \dots, s_{d-1}, p)=\tau p^k\frac{\overline{{\xi}\left(\frac{\overline{s_{d-1}}}{\overline{p}}, \frac{\overline{s_{d-2}}}{\overline{p}}, \dots, \frac{\overline{s_1}}{\overline{p}}, \frac{{1}}{\overline{p}} \right)}}{\xi(s_1, s_2, \dots, s_{d-1}, p)}.
\end{align}
Conversely, any rational function of the form \eqref{THM_RIF-G} is inner. Moreover, any inner function $f\in A(\mathbb{G}_d)$ is a rational function of the form \eqref{THM_RIF-G} with the additional property that $\xi$ has no zero in $\overline{\mathbb{G}_d}$.
\end{thm}
\begin{proof}
Let $f$ be a rational inner function on $\mathbb{G}_d$. Then $f\circ \pi : \mathbb{D}^d \rightarrow \overline{\mathbb{D}}$ is a symmetric rational inner function on $\mathbb{D}^d$. For notational simplicity we write $\pi$ in place of $\pi_d$ for the symmetrization map. Invoke Theorem \ref{rudin-thm} to get a polynomial $\xi$ with no zero in $\mathbb{D}^d$, a $\tau_1\in\mathbb{T}$ and $n=(n_1,n_2,\dots,n_d)$ such that
\begin{align}\label{RIF-poly}
f\circ \pi (z)= \tau_1 z^{n}\frac{\widetilde{\xi}(z)}{\xi(z)}.
\end{align}
Note that $f\circ \pi$ is a symmetric rational function. Thus by Lemma \ref{L3}, we can find a polynomial $\psi$ such that
$$
f \circ \pi = \tau z^n \frac{\widetilde{\psi}}{\psi}
$$
for some $\tau\in\mathbb{T}$, where $\widetilde{\psi}$ and $\psi$ are co-prime.
Since $\xi$ has no zero in $\mathbb{D}^d$, $\psi$ cannot have any zero in $\mathbb{D}^d$. Therefore $z^n \widetilde{\psi}$ and $\psi$ are co-prime. By applying Lemma \ref{L2}, we conclude that $z^n \widetilde{\psi}$ and $\psi$ are both symmetric polynomials. It is easy to see that the degree of a symmetric polynomial is of the form $(r,r,\dots, r)$ for some non-negative integer $r$. If the degree of the polynomial $\psi$ is $ l = ( l_1,l_1,\dots, l_1)$, then the degree of $z^n \widetilde{\psi}$ is $(n_1 + l_1, n_2 + l_1, \dots , n_d +l_1)$. Since $z^n \widetilde{\psi}$ is a symmetric polynomial, $n_i +l_1 = n_j + l_1$, that is, $n_i= n_j$ for all $1\leq i, j \leq d $. So $n=(n_1,n_1,\dots, n_1)$ for some non-negative integer $n_1$. Since $\psi$ is a symmetric polynomial, there exists a polynomial $g$ such that $\psi=g\circ\pi$. Now,
\begin{align*}
f\circ\pi(z_1, z_2,\dots, z_d)&= \tau z^n\frac {\widetilde{g\circ\pi}(z_1, z_2,\dots, z_d)}{g\circ\pi(z_1, z_2\dots, z_d)}\\
&=\tau z^n z^l\frac {\overline{{g\circ\pi}(\frac{1}{\overline{z_1}}, \frac{1}{\overline{z_2}},\dots, \frac{1}{\overline{z_d}})}}{g\circ\pi(z_1, z_2\dots, z_d)}\\
&=\tau z^n z^l\frac {\overline{g\left(\sum_{j}\frac{1}{\overline{z_j}}, \sum_{i<j}\frac{1}{\overline{z_i}}\frac{1}{\overline{z_j}}, \dots, \frac{1}{\overline{z_1z_2\dots z_d}}\right)}}{g\left(s_1, s_2,\dots,s_{d-1}, p\right)}\\
&=\tau z^{n+l}\frac {\overline{g\left(\frac{\overline{s_{d-1}}}{\bar{p}}, \frac{\overline{s_{d-2}}}{\bar{p}}, \dots, \frac{\overline{s_1}}{\bar{p}}, \frac{1}{\bar{p}} \right)}}{g\left(s_1, s_2,\dots,s_{d-1}, p\right)}.
\end{align*}
Therefore,
$$f(s_1, s_2, \dots, s_{d-1}, p)=\tau p^{n_1+l_1}\frac {\overline{g\left(\frac{\overline{s_{d-1}}}{\bar{p}}, \frac{\overline{s_{d-2}}}{\bar{p}}, \dots, \frac{\overline{s_1}}{\bar{p}}, \frac{1}{\bar{p}} \right)}}{g\left(s_1, s_2,\dots,s_{d-1}, p\right)}.$$
Conversely, consider a function $f$ on $\mathbb{G}_d$ of the form \eqref{THM_RIF-G}. From \cite[Theorem 5.2.5]{Rudin}, it can be seen that $\xi$ has no zero in $\mathbb{G}_d \cup b\mathbb{G}_d$ except possibly on a measure zero subset of $b\mathbb{G}_d$. Every point $(s_1,s_2,\dots, s_{d-1},p)\in b\mathbb{G}_d$ satisfies $\overline{s_j}= s_{d-j} \overline{p}$ for $1\leq j\leq d-1$ and $|p|=1$. Therefore, $|f(s_1, s_2, \dots, s_{d-1}, p)| =1$ for almost every point $(s_1,s_2,\dots, s_{d-1},p)$ in $b\mathbb{G}_d$. Hence $f$ is inner.
Finally, suppose $f\in A(\mathbb{G}_d)$ is inner. Then $f \circ \pi\in A(\mathbb{D}^d)$ is an inner function. Thus, by Theorem \ref{rudin-thm}, $f\circ \pi$ is of the form \eqref{poly-inner} with $\xi$ having no zero in $\overline{\mathbb{D}^d}$. Applying the same argument as above, we get that $f$ has the form \eqref{THM_RIF-G} and the polynomial in the denominator has no zero in $\overline{\mathbb{G}_d}$. This completes the proof.
\end{proof}
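As a concrete instance of \eqref{THM_RIF-G} for $d=2$, take $\xi(s,p)=2-s$, which has no zero in $\mathbb{G}_2=\mathbb{G}$ since $|s|<2$ there, and $k=1$, $\tau=1$. Then
$$
f(s,p)=p\,\frac{\overline{\xi\left(\frac{\overline{s}}{\overline{p}},\frac{1}{\overline{p}}\right)}}{\xi(s,p)}
=p\,\frac{2-\frac{s}{p}}{2-s}=\frac{2p-s}{2-s}.
$$
On $b\mathbb{G}$ one has $\overline{s}=s\overline{p}$ and $|p|=1$, whence $|2p-s|=|p|\,|2-s\overline{p}|=|2-\overline{s}|=|2-s|$, confirming that $f$ is inner.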
The Carath\'eodory-type approximation result on $\mathbb{G}_d$ is the following:
\begin{proposition}
Any holomorphic function $f:{\mathbb{G}}_d\rightarrow{\overline{\mathbb{D}}}$ can be approximated (uniformly on compact subsets) by rational inner functions on $\mathbb{G}_d$.
\end{proposition}
The proof of the proposition is similar to Rudin's proof of Theorem 5.5.1 in \cite{Rudin} along with a symmetry argument. Hence we omit the proof.
\section{ Rational Iso-inner and coiso-inner functions on $\mathbb{G}$}
This section deals with the rational iso-inner and coiso-inner functions from $\mathbb{G}$ to $\mathbb{M}_{M\times N}(\mathbb{C})$. We shall start this section by stating a recent result by Knese in \cite{Knese}:
\begin{thm}\label{T:BDisk}
Let $\Psi$ be an $M\times N$ matrix-valued analytic function on $\mathbb D^2$. Then the following are equivalent:
\begin{enumerate}
\item[(\text{RI})] $\Psi$ is a rational iso-inner function;
\item[(\text{AD})]There exist matrix functions $F_1$ and $F_2$ such that
$$
I-\Psi(w)^*\Psi(z)=(1-\overline w_1z_1)F_1(w)^*F_1(z)+(1-\overline w_2z_2)F_2(w)^*F_2(z);
$$
\item[(\text{TFR})] There exist positive integers $d_1,d_2$, and an isometric matrix
$$
T=\begin{bmatrix} A&B\\C&D \end{bmatrix}=\begin{bmatrix} A&B_1&B_2\\C_1&D_{11}&D_{12}\\C_2&D_{21}&D_{22} \end{bmatrix}:
\begin{bmatrix} \mathbb C^N\\ \mathbb C^{d_1}\\ \mathbb C^{d_2}\end{bmatrix}\to \begin{bmatrix} \mathbb C^M\\ \mathbb C^{d_1}\\ \mathbb C^{d_2}\end{bmatrix}
$$such that
$$
\Psi(z)=A+B{\mathcal E}_{z}(I-D{\mathcal E}_z)^{-1}C
$$where ${\mathcal E}_z=z_1P_1+z_2P_2$ and $P_1,P_2$ are orthogonal projections of $\mathbb C^{d_1+d_2}$ onto $\mathbb C^{d_1}$ and $\mathbb C^{d_2}$, respectively.
\end{enumerate}
\end{thm}
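For example, the coordinate function $\Psi(z)=z_1$ (with $M=N=1$) admits such a transfer function realization with $d_1=d_2=1$ and the unitary matrix
$$
T=\begin{bmatrix} 0&1&0\\1&0&0\\0&0&1\end{bmatrix},
$$
for which $A=0$, $B=\begin{bmatrix}1&0\end{bmatrix}$, $C=\begin{bmatrix}1\\0\end{bmatrix}$, $D=\begin{bmatrix}0&0\\0&1\end{bmatrix}$ and ${\mathcal E}_z=\begin{bmatrix}z_1&0\\0&z_2\end{bmatrix}$, so that $A+B{\mathcal E}_z(I-D{\mathcal E}_z)^{-1}C=z_1$.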
The above theorem is known for $M=N$ from \cite{Kummert}. A subset $\mathbb F$ of $\mathbb D^2$ is said to be {\em symmetric} if $(z_2,z_1)\in\mathbb F$ whenever $(z_1,z_2)\in\mathbb F$. Given a symmetric subset $\mathbb F$ of $\mathbb D^2$, a function $g$ on $\mathbb F\times\mathbb F$ is said to be {\em doubly symmetric} if it is symmetric in both coordinates, i.e., denoting $z^\sigma:=(z_2,z_1)$ for $z=(z_1,z_2)$,
$$g(z,w)=g(z^\sigma, w)= g(z, w^\sigma)=g(z^\sigma,w^\sigma)$$ for all $z,w\in\mathbb F$.
Now we state, without proof, a lemma which is known in the scalar-valued case from \cite{SYM_Real} and in the operator-valued case from \cite{DKS}.
\begin{lemma}\label{L:DoubSym}
Let $\mathbb F$ be a symmetric subset of $\mathbb D^2$ and $G:\mathbb F\times\mathbb F\to \mathbb{M}_N (\mathbb{C})$ be a doubly symmetric function for which there exist positive integers $d_1,d_2$ and functions $F_j:\mathbb F\to {\mathcal B}(\mathbb C^N,\mathbb{C}^{d_j})$ such that
\begin{align*}
G(z, w)= (1-\overline{w_1}z_1)F_1(w)^*F_1(z)+ (1-\overline{w_2}z_2)F_2(w)^*F_2(z)
\end{align*}
for all $z,w\in\mathbb F$. Then there exist a unitary operator $\tau$ on $\mathbb{C}^{d_1+d_2}$ and a function $F:\pi(\mathbb{F})\rightarrow{\mathcal B}(\mathbb C^N,\mathbb{C}^{d_1+d_2})$ such that for all $z,w\in\mathbb F$,
\begin{align*}
G(z, w)=F(t,q)^*(I-\varphi(\tau,t,q)^*\varphi(\tau,s,p))F(s,p),
\end{align*}
where $(s,p)=\pi(z)$ and $(t,q)=\pi(w)$.
Moreover, the unitary $\tau$ satisfies
\begin{align*}
\tau^*: \begin{bmatrix} F_1(z)-F_1(z^\sigma)\\ F_2(z^\sigma)- F_2(z) \end{bmatrix}\xi\mapsto \begin{bmatrix} z_1 F_1(z)-z_2F_1(z^\sigma)\\ z_1F_2(z^\sigma)- z_2F_2(z) \end{bmatrix}\xi
\end{align*}for every vector $\xi\in\mathbb C^N$ and $z\in\mathbb F$, the function $F':\mathbb F\to{\mathcal B}(\mathbb C^N,\mathbb C^{d_1+d_2})$ given by
\begin{align*}
F':(z_1,z_2)\mapsto (\tau^* - z_2I)^{-1} \begin{bmatrix} F_1(z_1,z_2)\\ F_2(z_2,z_1)\end{bmatrix}
\end{align*}is symmetric, and $F:\mathbb F\to{\mathcal B}(\mathbb C^N,\mathbb C^{d_1+d_2})$ is given by
\begin{align*}
F(z_1+z_2,z_1z_2)=\left(I-\frac{1}{2}(z_1+z_2)\tau\right)F'(z_1,z_2).
\end{align*}
\end{lemma}
The following theorem is known for square matrix-valued rational inner functions, see \cite{DKS}. Using the same technique, with the help of Theorem \ref{T:BDisk}, we observe that their result generalizes to rectangular matrix-valued rational iso-inner or coiso-inner functions. The proof follows the same lines, hence we omit it.
We state the result only for rational iso-inner functions; the analogous statement holds for coiso-inner functions.
\begin{proposition}\label{T:SymRatInn}
Let $\Psi$ be an $M\times N$ matrix-valued analytic function on $\mathbb G$ such that $M\geq N$. Then the following are equivalent:
\begin{enumerate}
\item[(\text{RI})]$\Psi$ is a rational iso-inner function;
\item[(AD)]There exist a positive integer $d$, a unitary operator $\tau:\mathbb{C}^d\rightarrow\mathbb{C}^d$,
and an analytic function $F:\mathbb G\to {\mathcal B}(\mathbb C^N,\mathbb C^{d})$ such that
\begin{align}\label{AD}
I-\Psi(t,q)^*\Psi(s,p)= F(t,q)^*(I-\varphi(\tau,t,q)^*\varphi (\tau,s,p))F(s,p);
\end{align}
\item[(\text{TFR})] There exist a positive integer $d$, a unitary operator $\tau:\mathbb{C}^d\rightarrow\mathbb{C}^d$ and an isometric matrix
\begin{align*}
V=\begin{bmatrix} A & B\\ C & D \end{bmatrix}:
\begin{bmatrix} \mathbb C^N\\ \mathbb C^{d}\end{bmatrix}\to \begin{bmatrix} \mathbb C^M\\ \mathbb C^{d}\end{bmatrix}
\end{align*}such that
\begin{align}\label{SymPsi}
\Psi(s,p)=A+B\varphi(\tau,s,p)(I-D\varphi(\tau,s,p))^{-1}C.
\end{align}
\end{enumerate}
\end{proposition}
For a matrix-valued holomorphic function $\Psi$ with a finite TFR, we consider $\tilde{\Psi}(s,p):= \Psi(\overline{s}, \overline{p})^* $. The following proposition shows that $\tilde{\Psi}$ also has a finite TFR.
\begin{proposition}
Let $\Psi:\mathbb{G}\rightarrow \mathbb{M}_{M\times N}(\mathbb{C})$ be a function with a finite contractive TFR given via
a matrix
$$T=\begin{bmatrix} A&B\\C&D \end{bmatrix},$$
and a unitary matrix $\tau$.
Set $\tilde{\Psi}(s,p)=\Psi(\overline{s}, \overline{p})^*$. Then, $\tilde{\Psi}$ has a finite contractive
TFR given via matrix $T^*$ and unitary $\tau^*$.
\end{proposition}
\begin{proof}
First note that
\begin{align*}
\varphi(\tau, \overline{s}, \overline{p})^*&=[ {\big(2\overline{p}\tau-\overline{s}I\big)}{\big(2I-\overline{s}\tau\big)^{-1}}]^*\\
&={\big(2I-{s}\tau^*\big)^{-1}}{\big(2{p}\tau^*-sI\big)}\\
&= \varphi(\tau^*, s, p).
\end{align*}
Now with ${\Psi}(s,p)=A+B\varphi(\tau,s,p)\big(I-D\varphi(\tau,s,p)\big)^{-1}C$ we have
\begin{align*}
\tilde{\Psi}(s,p)&=\Psi(\overline{s},\overline{p})^*\\
&=[A+B\varphi(\tau,\overline{s},\overline{p})\big(I-D\varphi(\tau,\overline{s},\overline{p})\big)^{-1}C]^*\\
&= A^*+C^*\big(I-\varphi(\tau,\overline{s},\overline{p})^*D^*\big)^{-1}\varphi(\tau,\overline{s},\overline{p})^*B^*\\
&=A^*+C^*\big(I-\varphi(\tau^*,s,p)D^*\big)^{-1}\varphi(\tau^*,s,p)B^*\\
&=A^*+C^*\varphi(\tau^*,s,p)\big(I-D^*\varphi(\tau^*,s,p)\big)^{-1}B^*.
\end{align*}
Thus, $\tilde{\Psi}$ has a finite contractive
TFR given by matrix $T^*$ and unitary $\tau^*$.
\end{proof}
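In the scalar case $\mathcal{H}=\mathbb{C}$, $\tau=e^{i\alpha}$, the identity $\varphi(\tau,\overline{s},\overline{p})^*=\varphi(\tau^*,s,p)$ used above reads simply
$$
\overline{\varphi(e^{i\alpha},\overline{s},\overline{p})}
=\overline{\left(\frac{2\overline{p}e^{i\alpha}-\overline{s}}{2-\overline{s}e^{i\alpha}}\right)}
=\frac{2pe^{-i\alpha}-s}{2-se^{-i\alpha}}
=\varphi(e^{-i\alpha},s,p),
$$
where $e^{-i\alpha}=(e^{i\alpha})^*$.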
Let $\Psi:\mathbb{G}\rightarrow \mathbb{M}_{M\times N}$ be a holomorphic map. Suppose $\Psi$ possesses a finite contractive transfer function realization given by equation \eqref{SymPsi}. Let $d$ be the dimension of the Hilbert space on which the operator $D$ acts. Then $d$ is called the {\em size} of the TFR.
The next proposition shows how a matrix-valued rational iso-inner or coiso-inner function with a TFR of size $d$ is related to a rational inner function with a TFR of the same size.
\begin{proposition}
Let $\Psi:\mathbb{G}\rightarrow \mathbb{M}_{M\times N}$ be a function which has a finite contractive TFR of size $d$. Then there exist $L\geq \max\{M,N\}$ and an $L\times L$ matrix-valued rational inner function $\tilde{\Psi}$ on $\mathbb{G}$ with a finite-dimensional unitary transfer
function realization of size $d$ such that $\Psi$ is a sub-matrix of $\tilde{\Psi}$.
\end{proposition}
\begin{proof}
Since $\Psi:\mathbb{G}\rightarrow \mathbb{M}_{M\times N}$ has a finite contractive TFR of size $d$, there exist a unitary matrix $\tau:\mathbb{C}^d\rightarrow\mathbb{C}^d$ and a contractive matrix
\begin{align*}
U=\begin{bmatrix} A&B\\C&D \end{bmatrix}:
\begin{bmatrix} \mathbb C^N\\ \mathbb C^{d}\end{bmatrix}\to \begin{bmatrix} \mathbb C^M\\ \mathbb C^{d}\end{bmatrix}
\end{align*}such that
\begin{align}
\Psi(s,p)=A+B\varphi(\tau,s,p)(I-D\varphi(\tau,s,p))^{-1}C.
\end{align}
Now use the fact that every contractive matrix is a sub-matrix of a unitary matrix. Suppose $W$ is a unitary matrix such that $U$ is a sub-matrix of $W$ and (rearranging some rows and columns if necessary)
$$W=\begin{bmatrix}
A & X & B \\
Y_1 & Y_2 & Y_3 \\
C & Z & D
\end{bmatrix}.$$
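For instance, one such unitary is given by the Halmos dilation: for a contraction $U$, the operator matrix
$$
\begin{bmatrix}
U & (I-UU^*)^{1/2}\\
(I-U^*U)^{1/2} & -U^*
\end{bmatrix}
$$
is unitary and contains $U$ as its upper-left block; permuting rows and columns appropriately then produces a unitary of the above form.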
Now, define $\tilde{\Psi}:\mathbb{G}\rightarrow \mathbb{M}_{L\times L}(\mathbb{C})$ as
$$\tilde{\Psi}(s,p)=\begin{bmatrix}
A & X \\
Y_1 & Y_2
\end{bmatrix}+\begin{bmatrix}
B \\
Y_3
\end{bmatrix}\varphi(\tau,s,p)(I-D\varphi(\tau,s,p))^{-1}\begin{bmatrix}
C & Z
\end{bmatrix}.$$
Then by Theorem 3.4 of \cite{DKS}, $\tilde{\Psi}(s,p)$ is a rational inner function on $\mathbb{G}.$ It is easy to see that
$${\Psi}(s,p)=\begin{bmatrix}
I & 0
\end{bmatrix}\tilde{\Psi}(s,p)\begin{bmatrix}
I \\ 0
\end{bmatrix}.$$ This completes the proof.
\end{proof}
The two propositions above are motivated by \cite{Knese}. In \cite{DKS}, it is shown that any solvable data with initial nodes in $\mathbb{G}$ and final nodes in the set of square matrices has a rational inner solution. We notice that the same type of outcome is obtained if rectangular matrices are used as final nodes. Specifically, we prove Theorem \ref{Main2}, restated below.
\begin{thm}\label{Rational-Isoinner_Solution}
A solvable matrix Nevanlinna--Pick problem with initial nodes in $\mathbb{G}$ and final nodes in the closed operator-norm unit ball of $M\times N$ complex matrices has a rational iso-inner or coiso-inner solution.
\end{thm}
\begin{proof}
Suppose that $M\geq N$. Consider an $n$-point solvable Nevanlinna--Pick problem with initial data $\lambda_1,\lambda_2,\dots,\lambda_n$ in $\mathbb G$, where $\lambda_i = (s_i, p_i)$ for $i=1,\dots,n$, and final data $A_1,A_2,\dots,A_n$ in the closed operator-norm unit ball of $M\times N$ complex matrices. Let $z_i\in\mathbb D^2$ be such that $\pi(z_i)=\lambda_i$ for $i=1,2,\dots,n$. Consider the at most $2n$-point matrix Nevanlinna--Pick interpolation problem $\{z_i\rightarrow A_i\}_{i=1}^n$ and $\{z_i^\sigma\rightarrow A_i\}_{i=1}^n$ on $\mathbb D^2$.
By hypothesis, the above interpolation problem is solvable. Thus by Corollary 11.54 in \cite{AM-Book}, there exists a rational iso-inner function $\Psi:\mathbb{D}^2\rightarrow\mathbb{M}_{M\times N}(\mathbb{C})$ such that
\begin{align}\label{symPsi}
\Psi(z_i)=A_i=\Psi(z_i^\sigma) \text{ for each }i=1,2,\dots, n.
\end{align}
Invoke Theorem \ref{T:BDisk}, to get positive integers $d_1,d_2$ and functions $F_{j}:\mathbb{D}^2\rightarrow {\mathcal B}(\mathbb{C}^{N}, \mathbb{C}^{d_j})$ for $j=1,2$ such that
$$
I-\Psi(w)^*\Psi(z)=(1-\overline w_1z_1)F_1(w)^*F_1(z)+(1-\overline w_2z_2)F_2(w)^*F_2(z).
$$Consider the finite subset $\mathbb F=\{z_i,z_i^\sigma:i=1,2,\dots,n\}$ of $\mathbb D^2$. Define $G:\mathbb F\times\mathbb F\to\mathbb{M}_N(\mathbb{C})$ by
$$
G(z,w)=I-\Psi(w)^*\Psi(z).
$$By \eqref{symPsi}, it follows that $G$ is doubly symmetric. So, apply Lemma \ref{L:DoubSym} to the doubly symmetric function $G$ to get a unitary $\tau$ on $\mathbb{C}^{d_1+d_2}$ and a function $F:\pi(\mathbb F)\rightarrow{\mathcal B}(\mathbb{C}^{N},\mathbb{C}^{d_1+d_2})$ such that
$$I-A_i^*A_j=F(\lambda_i)^*(I-\varphi(\tau,\lambda_i)^*\varphi(\tau,\lambda_j))F(\lambda_j)$$
for each $i,j=1,2,\dots,n$. Rearrange the above equation to obtain
\begin{align}\label{Condition-Solv}
I+F(s_i,p_i)^*\varphi(\tau, s_i,p_i)^* \varphi(\tau, s_j,p_j) F(s_j,p_j)&=A_i^*A_j+F(s_i,p_i)^*F(s_j,p_j).
\end{align}
From the above equation the map
\begin{align}\label{Isometry}
\begin{bmatrix}
I_{\mathbb C^N}\\\varphi(\tau, \lambda_i)F(\lambda_i)
\end{bmatrix}\xi\mapsto\begin{bmatrix}
A_i\\
F(\lambda_i)
\end{bmatrix}\xi
\end{align}
defines an isometry from ${\mathcal M}$ to ${\mathcal N}$, where
$${\mathcal M}:=\operatorname{span}\left\{
\begin{bmatrix}
I\\\varphi(\tau, \lambda_i)F(\lambda_i)
\end{bmatrix}\xi:\xi\in\mathbb C^N,i=1,2,\dots,n\right\}\subset \begin{bmatrix}\mathbb{C}^N\\ \mathbb C^{d_1+d_2}\end{bmatrix},$$and
$${\mathcal N}:=
\operatorname{span}\left\{
\begin{bmatrix}
A_i\\
F(\lambda_i)
\end{bmatrix}\xi:\xi\in\mathbb C^N,i=1,2,\dots,n\right\}\subset \begin{bmatrix}\mathbb{C}^M\\ \mathbb C^{d_1+d_2}\end{bmatrix}.
$$
Extend this isometry to an isometry from $ \begin{bmatrix}\mathbb{C}^N\\ \mathbb C^{d_1+d_2}\end{bmatrix}$ to $ \begin{bmatrix}\mathbb{C}^M\\ \mathbb C^{d_1+d_2}\end{bmatrix}$, denoted by
$$
\begin{bmatrix} A&B\\C&D \end{bmatrix}:\begin{bmatrix}\mathbb C^N\\ \mathbb C^{d_1+d_2}\end{bmatrix}\to \begin{bmatrix}\mathbb C^M\\ \mathbb C^{d_1+d_2}\end{bmatrix}.
$$We define $\tilde{\Psi}:\mathbb{G}\rightarrow\mathbb{M}_{M\times N}(\mathbb{C})$ by
$$\tilde{\Psi}(s,p)=A+B\varphi(\tau,s,p)(I-D\varphi(\tau,s,p))^{-1}C.$$
By Proposition \ref{T:SymRatInn}, the function $\tilde{\Psi}$ is rational iso-inner, and equation \eqref{Isometry} gives that $\tilde{\Psi}$ interpolates the data. Similarly, we can prove that if $N\geq M$, then the above data has a rational coiso-inner solution.
\end{proof}
It is folklore that the Pick interpolation problem on the disc implies the Carath\'eodory approximation theorem, which says that any holomorphic function $f:\mathbb{D}\to \overline{\mathbb{D}}$ can be approximated uniformly (on compact subsets) by rational inner functions. The same technique, with the help of the fact that any solvable data in the disc and bidisc has a rational iso-inner solution, can be applied in the matrix-valued setting as well. An alternative operator-theoretic proof of the same can be found in \cite{B_J_K}.
Having Theorem \ref{Rational-Isoinner_Solution} on $\mathbb{G}$ in hand, we have the following approximation result.
\begin{corollary}
Any holomorphic function $f:\mathbb{G}\rightarrow \mathbb{M}_{M\times N}(\mathbb{C})$ with $\|f(s,p)\|\leq{1}$ for all $(s,p)\in\mathbb{G}$, can be approximated (uniformly on compact subsets) by matrix-valued rational iso-inner or coiso-inner functions.
\end{corollary}
\begin{proof}
We sketch the proof. Choose a countable dense subset $\{ \lambda_1, \lambda_2, \dots \}$ of $\mathbb{G}$; for each $n \geq 1$, set up the interpolation problem $\{ \lambda_j \to f(\lambda_j) \}_{j=1}^n$, get rational iso-inner or coiso-inner solutions $\{ f_n \}$ and apply Montel's theorem. An application of the Arzel\`a--Ascoli theorem completes the proof.
\end{proof}
\section{Factorization of matrix-valued Schur class functions on $\mathbb{G}$}
Recall that, given any $\theta \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$, there exist a Hilbert space $\mathcal{H}$ and a unitary operator $\tau$ on $\mathcal{H}$ such that
$$
\theta(s,p)= A+ B\varphi(\tau,s,p)\left( I_{\mathcal{H}} - D \varphi(\tau,s,p) \right)^{-1} C
$$
where $$ \begin{bmatrix}
A & B\\
C & D
\end{bmatrix} : \mathbb{C}^N \oplus \mathcal{H} \to \mathbb{C}^N \oplus \mathcal{H}
$$
is an isometry, known as the \textit{colligation operator} for $\theta$.
In this section, we find necessary and sufficient conditions for a function $\theta$ in $\mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ to have a factorization $\theta = \psi_1 \psi_2$ for some $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$. The results of this section are motivated by \cite{Ramlal-Jaydeb-1}.
First we consider the case when $\theta$ vanishes at $(0,0)$ and one of its factors is a self-adjoint invertible matrix at $(0,0)$.
\begin{thm}\label{fact-1}
Let $\theta \in {\mathcal S}\left(\mathbb{G}, \mathbb{M}_N(\mathbb{C})\right)$ be such that $\theta(0,0) = 0$. Then $\theta = \psi_1 \psi_2$ for some $\psi_1,\psi_2 \in {\mathcal S}\left(\mathbb{G}, \mathbb{M}_N(\mathbb{C})\right)$ with $\psi_2(0,0)=A$, a self-adjoint invertible $N\times N$ matrix, if and only if there exist Hilbert spaces ${\mathcal H}_1,{\mathcal H}_2$, a unitary operator
$$ \tau : {\mathcal H}_1 \oplus {\mathcal H}_2 \to {\mathcal H}_1 \oplus {\mathcal H}_2 $$ such that both ${\mathcal H}_1$ and ${\mathcal H}_2$ are invariant under $\tau$, and an isometric colligation
$$ V= \left[\begin{array}{c|c c}
0 & B_1 & 0 \\
\hline
C_1 & D_1 & D_2 \\
C_2 & 0 & D_3
\end{array}\right] \,\,:\mathbb{C}^N \oplus ({\mathcal H}_1 \oplus {\mathcal H}_2) \to \mathbb{C}^N \oplus ({\mathcal H}_1 \oplus {\mathcal H}_2) $$
with \begin{align}\label{cond_1}
C_1 A^{-2}C_1^* D_2 = D_2, \,\, C_1^*C_1 = A^2
\end{align}
such that
$$\theta (s,p) = B \varphi(\tau , s,p) (I_{{\mathcal H}_1 \oplus {\mathcal H}_2}- D \varphi(\tau,s,p))^{-1} C,$$
for all $(s,p)\in \mathbb{G}$, where $$ B=\begin{bmatrix}
B_1 & 0
\end{bmatrix},\,\, C= \begin{bmatrix}
C_1\\
C_2
\end{bmatrix} ,\,\, D= \begin{bmatrix}
D_1 & D_2 \\
0 & D_3
\end{bmatrix}.$$
\end{thm}
\begin{proof}
Suppose $\theta = \psi_1\psi_2$ and $\psi_2(0,0) = A$, where $A$ is self-adjoint and invertible. Then $\psi_1(0,0)=0$. Since $\psi_1$ is in ${\mathcal S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$, by the realization formula there exist a unitary operator $\tau_1$ on a Hilbert space ${\mathcal H}_1$ and an isometric colligation $$ V_1 = \left[\begin{array}{c|c}
0 & B_1 \\
\hline \vspace{2mm}
C_1 & D_1
\end{array}\right] $$ such that $$\psi_1(s,p) = B_1\varphi(\tau_1, s,p) (I - D_1\varphi(\tau_1,s,p))^{-1} C_1. $$
Similarly, for $\psi_2$, there exist a unitary operator $\tau_2$ on a Hilbert space ${\mathcal H}_2$ and an isometric colligation $$ V_2 =\left[\begin{array}{c|c}
A & B_2 \\
\hline \vspace{1mm}
C_2 & D_2
\end{array}\right] $$ such that $$ \psi_2(s,p) = A + B_2 \varphi(\tau_2,s,p)(I- D_2\varphi(\tau_2,s,p))^{-1}C_2.$$
Now we define, $$ \tau := \begin{bmatrix}
\tau_1 & 0\\
0 & \tau_2
\end{bmatrix} : {\mathcal H}_1 \oplus {\mathcal H}_2 \to {\mathcal H}_1 \oplus {\mathcal H}_2.$$
Clearly, $\tau$ is a unitary operator and both ${\mathcal H}_1,{\mathcal H}_2$ are invariant under $\tau$. So,
$$ \varphi(\tau,s,p)= \begin{bmatrix}
\varphi(\tau_1, s,p) & 0\\
0 & \varphi(\tau_2,s,p)
\end{bmatrix}. $$
Set,
$$ V= \left[\begin{array}{c|c c}
0 & B_1 & 0 \\
\hline
C_1 & D_1 & 0 \\
0 & 0 & I_{{\mathcal H}_2}
\end{array}\right]
\left[\begin{array}{c|c c}
A & 0 & B_2 \\
\hline
0 & I_{{\mathcal H}_1} & 0 \\
C_2 & 0 & D_2
\end{array}\right]
=
\left[\begin{array}{c|c c}
0 & B_1 & 0 \\
\hline
C_1 A & D_1 & C_1 B_2 \\
C_2 & 0 & D_2
\end{array}\right].
$$
Since $V_1$ and $V_2$ are isometries, it follows that $V$ is also an isometry.
Let
\begin{align*}
\Phi(s,p) &= \begin{bmatrix}
B_1 & 0
\end{bmatrix} \left[\begin{smallmatrix}
\varphi(\tau_1, s,p) & 0\\
0 & \varphi(\tau_2,s,p)
\end{smallmatrix}\right] \left(I_{{\mathcal H}_1 \oplus {\mathcal H}_2} -\left[ \begin{smallmatrix}
D_1 & C_1 B_2 \\
0 & D_2
\end{smallmatrix}\right] \left[ \begin{smallmatrix}
\varphi(\tau_1, s,p) & 0\\
0 & \varphi(\tau_2,s,p)
\end{smallmatrix}\right] \right) ^{-1} \begin{bmatrix}
C_1 A\\
C_2
\end{bmatrix} \\
&= \begin{bmatrix}
B_1\varphi (\tau_1 , s,p) & 0
\end{bmatrix} \begin{bmatrix}
\left ( I_{{\mathcal H}_1} - D_1\varphi (\tau_1, s,p) \right )^{-1} & Z \\
0 & \left ( I_{{\mathcal H}_2} - D_2\varphi (\tau_2, s,p) \right )^{-1}
\end{bmatrix} \begin{bmatrix}
C_1 A\\
C_2
\end{bmatrix}
\end{align*}
where $$ Z= \left (I_{{\mathcal H}_1} - D_1\varphi(\tau_1,s,p)\right )^{-1} C_1B_2\varphi(\tau_2 ,s,p) \left(I_{{\mathcal H}_2} - D_2\varphi(\tau_2,s,p) \right)^{-1}.$$ A straightforward calculation gives the following
\begin{align*}
\Phi(s,p) &=\left[ B_1\varphi(\tau_1,s,p) \left( I_{{\mathcal H}_1} -D_1\varphi(\tau_1,s,p)\right)^{-1} C_1 A\right ] +\\
& \quad\quad\quad \quad \quad \left [ B_1\varphi(\tau_1,s,p) \left( I_{{\mathcal H}_1} -D_1\varphi(\tau_1,s,p)\right)^{-1} C_1 B_2 \varphi(\tau_2,s,p) \left ( I_{{\mathcal H}_2} - D_2\varphi (\tau_2, s,p) \right )^{-1} C_2 \right ] \\
&= \psi_1(s,p)\psi_2(s,p)\\
&= \theta(s,p).
\end{align*}
Suppose we write the isometry $$ V = \left[\begin{array}{c|c c}
0 & \tilde{B_1} & 0 \\
\hline
\tilde{C_1} & \tilde{D_1} & \tilde { D_2} \\
\tilde{C_2} & 0 &\tilde{D_3}
\end{array}\right].$$ Then $\tilde{B_1} = B_1,\,\tilde{C_1}= C_1 A,\, \tilde{C_2} = C_2,\, \tilde{D_1}= D_1,\, \tilde { D_2} = C_1 B_2,\,\text { and } \tilde{D_3} = D_2 $.\\
Since $V_1$ is an isometry,
$$
V_1^*V_1 = \begin{bmatrix}
0 & C_1^* \\
B_1^* & D_1^*
\end{bmatrix} \begin{bmatrix}
0 & B_1\\
C_1 & D_1
\end{bmatrix} = \begin{bmatrix}
I_{\mathbb{C}^N} & 0 \\
0 & I_{{\mathcal H}_1}
\end{bmatrix}
$$ which gives
\begin{align} \label{iso-1}
C_1^*C_1 =I_{\mathbb{C}^N}, \quad
C_1^* D_1 = 0 \quad \text{ and } \quad
B_1^* B_1 + D_1^* D_1 = I_{{\mathcal H}_1}.
\end{align}
Similarly, since $V_2$ is an isometry, we have
\begin{align}\label{iso-2}
A^*A + C_2^*C_2 = I_{\mathbb{C}^N},\quad
A^* B_2 + C_2^*D_2 = 0 \quad\text{and}\quad
& B_2^*B_2 + D_2^*D_2 = I_{\mathcal{H}_2}.
\end{align}
Now,
\begin{align*}
\tilde{C_1} A^{-2}\tilde{C_1}^* \tilde{D_2} &= ( C_1 A)A^{-2}(C_1 A)^*C_1B_2\\&= C_1 A^{-1}A^* C_1^* C_1 B_2\\&= C_1 B_2\\&=\tilde{D_2}\quad\left(\text{since}\,\, C_1^*C_1 =I_{\mathbb{C}^N} \,\,\text{and } A= A^* \right)
\end{align*}
Also, using equation \eqref{iso-1}, we get the following
$$ \tilde{C_1}^*\tilde{C_1} = (C_1 A)^*(C_1 A) = A^*A = A^2. $$
Therefore, the isometry $V$ satisfies the condition \eqref{cond_1}.
Conversely, suppose that there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$ and a unitary operator $\tau$ on $\mathcal{H}_1 \oplus \mathcal{H}_2$ such that
$$\tau \left( \mathcal{H}_1 \oplus \{0\} \right) \subseteq \mathcal{H}_1 \oplus \{0\} \quad \text{and}\quad \tau\left( \{0\} \oplus \mathcal{H}_2 \right) \subseteq \{0\} \oplus \mathcal{H}_2. $$
Then $\tau$ has the following operator matrix form:
$$\tau = \begin{bmatrix}
\tau|_{{\mathcal H}_1} & 0 \\
0 & \tau|_{{\mathcal H}_2}
\end{bmatrix}
: \mathcal{H}_1 \oplus \mathcal{H}_2 \to \mathcal{H}_1 \oplus \mathcal{H}_2 .$$
Set $\tau_1 = \left(\begin{smallmatrix}
\tau|_{{\mathcal H}_1} & 0\\
0 & I_{{\mathcal H}_2}
\end{smallmatrix}\right)$ and $\tau_2 =\left(\begin{smallmatrix}
I_{{\mathcal H}_1} & 0\\
0 & \tau|_{{\mathcal H}_2}
\end{smallmatrix}\right)$; both are unitary operators on ${\mathcal H}_1\oplus {\mathcal H}_2$.
Suppose $$
V= \left[ \begin{array}{c| c c}
0 & B_1 & 0 \\
\hline
C_1 & D_1 & D_2 \\
C_2 & 0 & D_3
\end{array} \right]:
\mathbb{C}^N\oplus\mathcal{H}_1\oplus\mathcal{H}_2\rightarrow \mathbb{C}^N\oplus\mathcal{H}_1\oplus\mathcal{H}_2$$
is an isometric colligation such that
$$ \theta(s,p)= B \varphi(\tau , s,p) (I_{{\mathcal H}_1 \oplus {\mathcal H}_2}- D \varphi(\tau,s,p))^{-1} C,$$
for all $(s,p)\in \mathbb{G}$, where $$ B=\begin{bmatrix}
B_1 & 0
\end{bmatrix},\quad C= \begin{bmatrix}
C_1\\
C_2
\end{bmatrix}\quad\text{and}\quad D= \begin{bmatrix}
D_1 & D_2 \\
0 & D_3
\end{bmatrix},$$
and such that $V$ satisfies condition~\eqref{cond_1} for some self-adjoint and invertible matrix $A$. Set
$$
V_1 = \left[ \begin{matrix}
0 & B_1 \\
C_1 A^{-1} & D_1
\end{matrix} \right ]\quad\text{and}\quad
V_2 = \left[ \begin{matrix}
A & B_2 \\
C_2 & D_3
\end{matrix}\right ]
$$
where $B_2 = A^{-1} C_1^* D_2 $. Since $V$ is an isometry,
$$
V^*V = \left[ \begin{matrix}
C_1^*C_1 + C_2^*C_2 & C_1^*D_1 & C_1^*D_2 + C_2^*D_3 \\
D_1^*C_1 & B_1^*B_1 + D_1^*D_1 & D_1^*D_2 \\
D_2^*C_1 + D_3^*C_2 & D_2^*D_1 & D_2^*D_2 + D_3^*D_3
\end{matrix} \right]
= \begin{bmatrix}
I_{\mathbb{C}^N} & 0 & 0 \\
0 & I_{\mathcal{H}_1} & 0 \\
0 & 0 & I_{\mathcal{H}_2}
\end{bmatrix},
$$
which gives
\begin{align*}
&C_1^*D_1=0=D_1^*D_2,\quad C_1^*D_2 + C_2^*D_3= 0,\quad B_1^*B_1 + D_1^*D_1 = I_{\mathcal{H}_1},\quad
D_2^*D_2 + D_3^*D_3 = I_{\mathcal{H}_2},\\
&\text{and}\quad C_1^*C_1 + C_2^*C_2 = I_{\mathbb{C}^N}, \text{ which, by \eqref{cond_1}, implies } A^2 + C_2^*C_2 = I_{\mathbb{C}^N}.
\end{align*}
Using the above relations together with
\begin{align*}
B_2^*B_2 + D_3^*D_3 &= D_2^* C_1 A^{-2} C_1^* D_2 + D_3^*D_3\\
&= D_2^* D_2 + D_3^* D_3 \quad (\text{by \eqref{cond_1}})\\
& = I_{\mathcal{H}_2},
\end{align*}
we find that
$$V_1^*V_1 = \begin{bmatrix}
A^{-1} C_1^*C_1 A^{-1} & A^{-1} C_1^*D_1 \\
D_1^*C_1 A^{-1} & B_1^*B_1 +D_1^*D_1
\end{bmatrix}
= \begin{bmatrix}
I_{\mathbb{C}^N} & 0 \\
0 & I_{\mathcal{H}_1}
\end{bmatrix}
$$
and
$$
V_2^*V_2 = \begin{bmatrix}
A^2 + C_2^*C_2 & A^* B_2 + C_2^*D_3 \\
B_2^ *A + D_3^*C_2 & B_2^*B_2 + D_3^*D_3
\end{bmatrix}
= \begin{bmatrix}
I_{\mathbb{C}^N} & 0 \\
0 & I_{\mathcal{H}_2}
\end{bmatrix}.
$$
As $V_1$ and $V_2$ are isometries, the operators
$$
\tilde{V_1} := \left[\begin{array}{c| c c}
0 & B_1 & 0 \\
\hline
C_1 A^{-1} & D_1 & 0 \\
0 & 0 & I_{\mathcal{H}_2}
\end{array} \right] \,\,\, \text{and} \,\,
\tilde{V_2}:=
\left[\begin{array}{c| c c}
A & 0 & B_2 \\
\hline
0 & I_{\mathcal{H}_1} & 0 \\
C_2 & 0 & D_3
\end{array}\right]\quad
$$
on $ \mathbb{C}^N\oplus (\mathcal{H}_1 \oplus \mathcal{H}_2)$ are isometries. Using the relation $C_1 A^{-1} B_2= D_2$, we get $\tilde{V_1}\tilde{V_2}=V$.
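For completeness, this block multiplication can be carried out explicitly:
$$
\tilde{V_1}\tilde{V_2}
= \left[\begin{array}{c| c c}
0 & B_1 & 0 \\
\hline
C_1 A^{-1} & D_1 & 0 \\
0 & 0 & I_{\mathcal{H}_2}
\end{array} \right]
\left[\begin{array}{c| c c}
A & 0 & B_2 \\
\hline
0 & I_{\mathcal{H}_1} & 0 \\
C_2 & 0 & D_3
\end{array}\right]
= \left[\begin{array}{c|c c}
0 & B_1 & 0 \\
\hline
C_1 & D_1 & C_1 A^{-1} B_2 \\
C_2 & 0 & D_3
\end{array}\right] = V,
$$
since $C_1 A^{-1} B_2 = C_1 A^{-2} C_1^* D_2 = D_2$ by the definition of $B_2$ and condition \eqref{cond_1}.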
Now,
\begin{align*}
\theta(s,p)&= [B_1,0] \begin{bmatrix}
\varphi(\tau_1,s,p) & 0 \\
0 & \varphi(\tau_2,s,p)
\end{bmatrix} \left( \begin{bmatrix}
I_{\mathcal{H}_1} & 0 \\
0 & I_{\mathcal{H}_2}
\end{bmatrix}- \begin{bmatrix}
D_1 & C_1 A^{-1}B_2 \\
0 & D_3
\end{bmatrix} \begin{bmatrix}
\varphi(\tau_1,s,p) & 0 \\
0 & \varphi(\tau_2,s,p)
\end{bmatrix}\right)^{-1} \begin{bmatrix}
C_1\\
C_2
\end{bmatrix}\\
&= [B_1\varphi(\tau_1,s,p), 0 ] \begin{bmatrix}
I_{\mathcal{H}_1} - D_1\varphi(\tau_1,s,p) & - C_1 A^{-1} B_2\varphi(\tau_2,s,p) \\
0 & I_{\mathcal{H}_2}- D_3\varphi(\tau_2,s,p)
\end{bmatrix}^{-1} \begin{bmatrix}
C_1\\
C_2
\end{bmatrix}\\
&=\psi_1(s,p) \psi_2(s,p)
\end{align*}
where $$
\psi_1(s,p)= B_1\varphi(\tau_1,s,p)\left( I_{\mathcal{H}_1}- D_1\varphi(\tau_1,s,p) \right)^{-1} C_1 A^{-1}
$$
and $$
\psi_2(s,p)=A + B_2\varphi(\tau_2,s,p)\left( I_{\mathcal{H}_2}- D_3\varphi(\tau_2,s,p) \right)^{-1}C_2.
$$
Clearly, $\psi_1(0,0) = 0$ and $\psi_2(0,0) = A$. Moreover, $V_1$ and $V_2$ are isometric colligations for $\psi_1$ and $\psi_2$, respectively. Therefore, by Theorem \ref{thm3.1}, $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$.
\end{proof}
Now we shall consider the case when $\psi_j(0,0)=0$ for $j=1,2$. Since the proof is similar to that of the previous theorem, we omit it.
\begin{thm}
Let $\theta \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ with $\theta(0,0)=0$. Then there exist $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G},\mathbb{M}_N(\mathbb{C}))$ such that $\theta = \psi_1 \psi_2$ and $\psi_1(0,0)=0=\psi_2(0,0)$ if and only if there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$, a unitary $\tau$ on $\mathcal{H}_1 \oplus \mathcal{H}_2$ under which $\mathcal{H}_1$ and $\mathcal{H}_2$ are invariant, and an isometric colligation $$
V= \left[ \begin{array}{c| c c}
0 & B_1 & 0 \\
\hline
0 & D_1 & D_2 \\
C_2 & 0 & D_3
\end{array}\right] : \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right) \to \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right)
$$
such that $$
\theta(s,p)= [B_1,0]\varphi(\tau,s,p)\left( I_{\mathcal{H}_1 \oplus \mathcal{H}_2} - \left[\begin{smallmatrix}
D_1 & D_2 \\
0 & D_3
\end{smallmatrix}\right]\varphi(\tau,s,p) \right)^{-1} \left[ \begin{smallmatrix}
0\\
C_2
\end{smallmatrix}\right]
$$
with $X^*D_1 = 0$ and $D_2 = XY$ for some $Y\in \mathcal{B}(\mathcal{H}_2,\mathbb{C}^N)$ and some isometry $X$.
\end{thm}
In the above two theorems we assumed that $\theta(0,0)=0$. We now give a necessary and sufficient condition for a factorization of $\theta$ when $\theta(0,0)$ is an invertible $N\times N$ matrix.
\begin{thm}
Let $\theta \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ with $\theta(0,0) = A$, an invertible $N\times N$ matrix. Then $\theta = \psi_1 \psi_2$ for some $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G}, \mathbb{M}_N(\mathbb{C}))$ if and only if there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$, a unitary operator $\tau $ on $ \mathcal{H}_1 \oplus \mathcal{H}_2$ such that $\mathcal{H}_1,\mathcal{H}_2$ are invariant under $\tau$ and an isometric colligation $$
V= \left[ \begin{array}{c| c c}
A & B_1 & B_2 \\
\hline
C_1 & D_1 & D_2 \\
C_2 & 0 & D_3
\end{array}\right] : \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right) \to \mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right)$$
such that \begin{align} \label{cond-2}
D_2= C_1 A^{-1} B_2 , \quad A = A_1 A_2 \quad \text{and} \quad A_2^*A_2 = A^*A + C_1^*C_1
\end{align}
for some $ A_1,A_2 \in \mathbb{M}_N(\mathbb{C}) $
and $$ \theta(s,p)= A + [B_1,B_2] \varphi(\tau,s,p)\left( I_{\mathcal{H}_1 \oplus \mathcal{H}_2} - \begin{bmatrix}
D_1 & D_2\\
0 & D_3
\end{bmatrix} \varphi(\tau,s,p) \right)^{-1} \begin{bmatrix}
C_1 \\
C_2
\end{bmatrix}. $$
\end{thm}
\begin{proof}
Suppose $\psi_1,\psi_2 \in \mathcal{S}(\mathbb{G},\mathbb{M}_N(\mathbb{C}))$ such that $\theta = \psi_1 \psi_2$. Then by the Realization formula, there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$ and unitaries $\tau_1,\tau_2$ acting on $\mathcal{H}_1,\mathcal{H}_2$ respectively with isometric colligations $V_1,V_2$ as follows:
$$V_1= \begin{bmatrix}
A_1 & B_1 \\
C_1 & D_1
\end{bmatrix}: \mathbb{C}^N\oplus \mathcal{H}_1 \to \mathbb{C}^N \oplus \mathcal{H}_1
$$
and $$ V_2 = \begin{bmatrix}
A_2 & B_2 \\
C_2 & D_2
\end{bmatrix}: \mathbb{C}^N\oplus \mathcal{H}_2 \to \mathbb{C}^N \oplus \mathcal{H}_2
$$
such that $$
\psi_1(s,p)= A_1 + B_1\varphi(\tau_1,s,p)\left( I_{\mathcal{H}_1} - D_1\varphi(\tau_1,s,p)\right)^{-1}C_1
$$
and $$ \psi_2(s,p)= A_2 + B_2\varphi(\tau_2,s,p)\left( I_{\mathcal{H}_2} - D_2\varphi(\tau_2,s,p)\right)^{-1}C_2.
$$
Define $$
V = \left[\begin{array}{c| c c}
A_1 & B_1 & 0\\
\hline
C_1 & D_1 & 0 \\
0 & 0 & I_{\mathcal{H}_2}
\end{array}\right] \left[\begin{array}{c| c c}
A_2 & 0 & B_2 \\
\hline
0 & I_{\mathcal{H}_1} & 0 \\
C_2 & 0 & D_2 \\
\end{array}\right] = \left[\begin{array}{c|c c}
A_1A_2 & B_1 & A_1B_2 \\
\hline
C_1 A_2 & D_1 & C_1B_2 \\
C_2 & 0 & D_2
\end{array}\right]
$$
and $$
\tau = \begin{bmatrix}
\tau_1 & 0 \\
0 & \tau_2
\end{bmatrix} : {\mathcal H}_1 \oplus {\mathcal H}_2 \to {\mathcal H}_1 \oplus {\mathcal H}_2 .$$
Clearly, $V$ is an isometry and $\tau$ is a unitary. A straightforward computation shows that the transfer function realization corresponding to the isometry $V$ and the unitary $\tau$ is $\theta$. Also, it is easy to check that the isometry $V$ satisfies condition \eqref{cond-2}.
Conversely, suppose that there exist Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$, a unitary operator $\tau$ on $ \mathcal{H}_1 \oplus \mathcal{H}_2$ such that $\mathcal{H}_1,\mathcal{H}_2$ are invariant under $\tau$ and an isometric colligation $$
V= \left[ \begin{array}{c| c c}
A & B_1 & B_2 \\
\hline
C_1 & D_1 & D_2\\
C_2 & 0 & D_3
\end{array}\right] : \mathbb{C}^N\oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right) \to \mathbb{C}^N\oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right)
$$ satisfying \eqref{cond-2} and $$ \theta(s,p)= A + [B_1,B_2] \varphi(\tau,s,p)\left( I_{\mathcal{H}_1 \oplus \mathcal{H}_2} - \left[\begin{smallmatrix}
D_1 & D_2\\
0 & D_3
\end{smallmatrix}\right] \varphi(\tau,s,p) \right)^{-1} \left[\begin{smallmatrix}
C_1 \\
C_2
\end{smallmatrix}\right].$$
By our assumption we can write $
\tau = \left[\begin{smallmatrix}
\tau_1 & 0 \\
0 & \tau_2
\end{smallmatrix}\right] $ on $\mathcal{H}_1 \oplus \mathcal{H}_2.$ Set
$$
V_1 = \begin{bmatrix}
A_1 & B_1 \\
C_1 A_2^{-1} & D_1
\end{bmatrix} \quad \text{and} \quad
V_2 = \begin{bmatrix}
A_2 & A_1^{-1} B_2 \\
C_2 & D_3
\end{bmatrix}.
$$
Now we shall prove that $V_1$ and $V_2$ are isometries. Since $V$ is an isometry, we have
\begin{align}\label{eq-1}
A^*A + C_1^*C_1 + C_2^*C_2 = I_{\mathbb{C}^N}, \quad A^* B_1 + C_1^*D_1 = 0 , \quad A^*B_2 + C_1^*D_2+ C_2^*D_3 =0
\end{align}
\begin{align}\label{eq-2}
B_1^*B_1 + D_1^*D_1=I_{\mathcal{H}_1},\quad B_1^*B_2 + D_1^*D_2=0 ,\quad B_2^*B_2+ D_2^*D_2 + D_3^*D_3 = I_{\mathcal{H}_2}.
\end{align}
A matrix computation gives $$
V_1^*V_1 = \begin{bmatrix}
A_1^*A_1 + {A_2^*}^{-1}C_1^*C_1 A_2^{-1} & A_1^*B_1 + {A_2^*}^{-1}C_1^*D_1 \\
B_1^* A_1 + D_1^*C_1 A_2^{-1} & B_1^*B_1 + D_1^*D_1
\end{bmatrix}
$$
and
$$
V_2^*V_2 = \begin{bmatrix}
A_2^*A_2 + C_2^*C_2 & A_2^* {A_1}^{-1}B_2+ C_2^*D_3 \\
B_2^* {A_1^*}^{-1}A_2 + D_3^*C_2 & B_2^*{A_1^*}^{-1}A_1^{-1} B_2 + D_3^*D_3
\end{bmatrix}.
$$
The first equation in \eqref{eq-1}, together with our assumption $A_2^*A_2 = A^*A + C_1^*C_1$, shows that the $(1,1)$ entry of $V_1^*V_1$ equals $I_{\mathbb{C}^N}$. From equation \eqref{eq-1}, we have $B_1= -{A^*}^{-1} C_1^* D_1$, which shows that the $(1,2)$ and $(2,1)$ entries of $V_1^*V_1$ are $0$. Finally, from equation \eqref{eq-2}, the $(2,2)$ entry is $I_{{\mathcal H}_1}$.
Using \eqref{eq-1} and \eqref{cond-2}, we get that the $(1,1)$ entry of $V_2^*V_2$ is $I_{\mathbb{C}^N}$. Now from \eqref{eq-1} and \eqref{cond-2}, we have
$$ 0 = A_2^*A_1^* B_2 + C_1^*C_1 A^{-1} B_2 + C_2^*D_3 = A_2^*A_1^* B_2 +(A_2^*A_2 - A^*A)A^{-1}B_2 + C_2^*D_3= A_2^* A_1^{-1} B_2 + C_2^*D_3 .$$ Hence the $(1,2)$ and $(2,1)$ entries of $V_2^*V_2$ are zero. A simple manipulation of the last equation in \eqref{eq-2} with the help of \eqref{cond-2} shows that the $(2,2)$ entry of $V_2^*V_2$ equals $I_{\mathcal{H}_2}$. Hence $V_1$ and $V_2$ are both isometries, which in turn implies that the operators
$$ \tilde{V_1}= \left[\begin{array}{c| c c}
A_1 & B_1 & 0\\
\hline
C_1 A_2^{-1} & D_1 & 0 \\
0& 0 & I_{\mathcal{H}_2}
\end{array}\right] \quad \text{and}\quad \tilde{V_2}=\left[\begin{array}{c| c c}
A_2 & 0 & A_1^{-1} B_2\\
\hline
0 & I_{\mathcal{H}_1}& 0 \\
C_2& 0 & D_3
\end{array}\right]
$$
are isometries on $\mathbb{C}^N \oplus \left( \mathcal{H}_1 \oplus \mathcal{H}_2 \right)$.
Let $\psi_1$ and $\psi_2$ be the transfer function realizations corresponding to the pairs of isometry and unitary $(\tilde{V_1}, \tau_1 \oplus I_{\mathcal{H}_2})$ and $(\tilde{V_2}, I_{\mathcal{H}_1} \oplus \tau_2)$, respectively. A computation similar to the one in the proof of Theorem \ref{fact-1} gives
$$
\theta(s,p)= \psi_1(s,p) \psi_2(s,p) \quad \text{for all } (s,p) \in \mathbb{G}.
$$
This completes the proof.
\end{proof}
\vspace{0.1in} \noindent\textbf{Acknowledgement:} The authors are thankful to their research supervisor Prof. Tirthanakar Bhattacharyya for fruitful discussions and suggestions. The second author is also thankful to Dr. Haripada Sau for some discussion in the beginning of this work. The first author's work is supported by the Prime Minister's Research Fellowship PMRF-21-1274.03. The second author is supported by Int. Ph.D. scholarship from Indian Institute of Science.
\section{Funding Disclosure}
This work was fully funded by NVIDIA.
\section{Continuous-Time Diffusion Models and Probability Flow ODE Sampling} \label{app:background_extended}
Here, we provide additional background on denoising diffusion models (DDMs). In Sec.~\ref{sec:background}, we introduced DDMs in the ``discrete-time'' setting, where we have a fixed number $T$ of diffusion and denoising steps~\cite{ho2020ddpm,sohl2015deep}. However, DDMs can also be expressed in a continuous-time framework, in which the fixed forward diffusion and the generative denoising process gradually perturb and denoise the data, respectively, in a continuous manner~\cite{song2020score}. In this formulation, these processes can be described by stochastic differential equations (SDEs). In particular, the fixed forward diffusion process is given by (this is the ``variance-preserving'' SDE~\cite{song2020score}, which we use; other diffusion processes are possible~\cite{song2020score,vahdat2021score,dockhorn2022score}):
\begin{equation} \label{eq:sde_forward}
d{\mathbf{x}}_{t} = - \frac{1}{2} \beta_{t} {\mathbf{x}}_{t} \, d{t} + \sqrt{\beta_{t}} \, d{\mathbf{w}}_{t},
\end{equation}
where time $t\in[0,1]$ and ${\mathbf{w}}_{t}$ is a standard Wiener process. In the continuous-time formulation, we usually consider times $t\in[0,1]$, while in the discrete-time setting it is common to consider discrete time values $t\in\{0,...,T\}$ (with $t=0$ corresponding to no diffusion at all). This is just a convention, and we can easily translate between them as $t_{\textrm{cont.}} = \frac{t_{\textrm{disc.}}}{T}$. We always take care of these conversions when appropriate, without explicitly noting this, to keep the notation concise. The function $\beta_{t}$ in Eq.~(\ref{eq:sde_forward}) above is a continuous-time generalization of the set of $\beta_t$'s used in the discrete formulation (denoted as variance schedule in Sec.~\ref{sec:background}). Usually, the $\beta_t$'s in the discrete-time setting are generated by discretizing an underlying continuous function $\beta_t$ (in our case, $\beta_t$ is simply a linear function of $t$), which is now used directly in Eq.~(\ref{eq:sde_forward}) above.
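To make this concrete, the variance-preserving SDE admits the closed-form marginal $q_t({\mathbf{x}}_t|{\mathbf{x}}_0)={\mathcal{N}}(\sqrt{\bar\alpha_t}\,{\mathbf{x}}_0, (1-\bar\alpha_t){\bm{I}})$ with $\bar\alpha_t = e^{-\int_0^t \beta_s\,ds}$, so diffused samples can be drawn directly without simulating the SDE. The following minimal Python sketch illustrates this for a linear schedule; the endpoint values `beta_min` and `beta_max` are illustrative assumptions, since the text only states that $\beta_t$ is linear in $t$:

```python
import math
import random

def beta(t, beta_min=0.1, beta_max=20.0):
    # Linear variance schedule beta_t (endpoint values are assumptions).
    return beta_min + t * (beta_max - beta_min)

def alpha_bar(t, beta_min=0.1, beta_max=20.0):
    # exp(-int_0^t beta_s ds), evaluated in closed form for the linear schedule.
    integral = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2
    return math.exp(-integral)

def diffuse(x0, t, rng=random):
    # Sample x_t ~ N(sqrt(abar_t) * x_0, (1 - abar_t) I), the marginal of the
    # variance-preserving SDE dx = -1/2 beta_t x dt + sqrt(beta_t) dw.
    abar = alpha_bar(t)
    return [math.sqrt(abar) * xi + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for xi in x0]
```

At $t=0$ the marginal reduces to the data itself ($\bar\alpha_0=1$), while at $t=1$ it is nearly a standard Gaussian, matching the discussion above.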
It can be shown that a corresponding reverse diffusion process exists that effectively inverts the forward diffusion from Eq.~(\ref{eq:sde_forward})~\cite{song2020score,anderson1982,haussmann1986time}:
\begin{equation} \label{eq:probability_flow_sde}
d{\mathbf{x}}_t = -\frac{1}{2} \beta_{t} \left[{\mathbf{x}}_t + 2\nabla_{{\mathbf{x}}_t} \log q_t({\mathbf{x}}_t) \right]dt + \sqrt{\beta_t}\, d{\mathbf{w}}_t.
\end{equation}
Here, $q_t({\mathbf{x}}_t)$ is the marginal diffused data distribution after time $t$, and $\nabla_{{\mathbf{x}}_t} \log q_t({\mathbf{x}}_t)$ is the \textit{score function}. Hence, if we had access to this score function, we could simulate this reverse SDE in reverse time direction, starting from random noise ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$, and thereby invert the forward diffusion process and generate novel data. Consequently, the problem reduces to learning a model for the usually intractable score function. This is where the discrete-time and continuous-time frameworks connect: Indeed, the objective in Eq.~(\ref{eq:ddpm_obj}) for training the denoising model also corresponds to \textit{denoising score matching}~\cite{vincent2011,song2019generative,ho2020ddpm}, i.e., it represents an objective to learn a model for the score function. We have
\begin{equation} \label{eq:score_func}
\nabla_{{\mathbf{x}}_t}\log q_t({\mathbf{x}}_t)\approx-\frac{{\bm{\epsilon}}_{\bm{\theta}}({\mathbf{x}}_t,t)}{\sigma_t}.
\end{equation}
However, we trained ${\bm{\epsilon}}_{\bm{\theta}}({\mathbf{x}}_t,t)$ for $T$ discrete steps only, rather than for continuous times $t$. In principle, the objective in Eq.~(\ref{eq:ddpm_obj}) can be easily adapted to the continuous-time setting by simply sampling continuous time values rather than discrete ones.
In practice, $T=1000$ steps, as used in our models, represents a fine discretization of the full integration interval and the model generalizes well when queried at continuous $t$ ``between'' steps, due to the smooth cosine-based time step embeddings.
A unique advantage of the continuous-time framework based on differential equations is that it allows us to construct an ordinary differential equation (ODE) which, when simulated with samples from the same random noise distribution ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$ as inputs (where $t=1$ denotes the end of the diffusion for continuous $t\in[0,1]$), produces the same marginal distributions along the reverse diffusion process and can therefore also be used for synthesis~\cite{song2020score}:
\begin{equation} \label{eq:probability_flow_ode}
d{\mathbf{x}}_t = -\frac{1}{2} \beta_{t} \left[{\mathbf{x}}_t + \nabla_{{\mathbf{x}}_t} \log q_t({\mathbf{x}}_t) \right]dt.
\end{equation}
This is an instance of continuous normalizing flows~\cite{chen2018neuralode,grathwohl2019ffjord} and is often called the \textit{probability flow ODE}. Plugging in our score function estimate, we have
\begin{equation} \label{eq:probability_flow_ode2}
d{\mathbf{x}}_t = -\frac{1}{2} \beta_{t} \left[{\mathbf{x}}_t -\frac{{\bm{\epsilon}}_{\bm{\theta}}({\mathbf{x}}_t,t)}{\sigma_t} \right]dt,
\end{equation}
which we refer to as the \textit{generative ODE}. Given a sample from ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$, the generative process of this generative ODE is fully deterministic. Similarly, we can also use this ODE to encode given data into the DDM's own prior distribution ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$ by simulating the ODE in the other direction.
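The deterministic generation just described amounts to numerically integrating the generative ODE from $t=1$ down to $t\approx0$. The following Python sketch does this with a simple explicit Euler scheme; the callables `eps_model`, `beta`, and `sigma` stand in for the trained denoiser and the noise schedule and are assumptions of this sketch, not LION's actual implementation (which may use a higher-order solver):

```python
def generative_ode_step(x, t, dt, eps_model, beta, sigma):
    # One explicit Euler step of dx/dt = -1/2 beta_t [x - eps(x, t) / sigma_t],
    # integrated backwards in time (dt < 0).
    score_term = [xi - ei / sigma(t) for xi, ei in zip(x, eps_model(x, t))]
    return [xi + dt * (-0.5 * beta(t)) * si for xi, si in zip(x, score_term)]

def sample(x1, eps_model, beta, sigma, n_steps=100):
    # Deterministic generation: integrate the generative ODE from the prior
    # sample x_1 ~ N(0, I) at t = 1 down to t ~ 0.
    x, dt = list(x1), -1.0 / n_steps
    for k in range(n_steps):
        t = 1.0 + k * dt
        x = generative_ode_step(x, t, dt, eps_model, beta, sigma)
    return x
```

Running `sample` in the opposite time direction (positive `dt`, starting from data) gives the encoding direction mentioned above.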
These properties allow us to perform interpolation: due to the deterministic generation process with the generative ODE, smoothly changing an encoding ${\mathbf{x}}_1$ will result in a similarly smoothly changing generated output ${\mathbf{x}}_0$. We are using this for our interpolation experiments (see Sec.~\ref{subsec:extensions} and App.~\ref{app:exp_detail_interpol}).
\section{Technical Details on LION's Applications and Extensions}
In this section, we provide additional methodological details on the different applications and extensions of LION that we discussed in Sec.~\ref{subsec:extensions} and demonstrated in our experiments.
\subsection{Diffuse-Denoise} \label{app:exp_detail_dd}
Our \textit{diffuse-denoise} technique is essentially a tool to inject diversity into the generation process in a controlled manner and to ``clean up'' imperfect encodings when working with encoders operating on noisy or voxelized data (see Sec.~\ref{subsec:extensions} and App.~\ref{app:exp_detail_guided}). It is related to similar methods that have been used for image editing~\cite{meng2022sdedit}.
Specifically, assume we are given an input shape ${\mathbf{x}}$ in the form of a point cloud. We can now use LION's encoder networks to encode it into the latent spaces of LION's autoencoder and obtain the shape latent encoding ${\mathbf{z}}_0$ and the latent points ${\mathbf{h}}_0$. Now, we can diffuse those encodings for $\tau<T$ steps (using the Gaussian transition kernel defined in Eq.~(\ref{eq:ddpm1})) to obtain intermediate ${\mathbf{z}}_\tau$ and ${\mathbf{h}}_\tau$ along the diffusion process. Next, we can denoise them back to new $\bar{\mathbf{z}}_0$ and $\bar{\mathbf{h}}_0$ using the generative stochastic sampling defined in Eq.~(\ref{eq:ddpm_sampling}), starting from the intermediate ${\mathbf{z}}_\tau$ and ${\mathbf{h}}_\tau$. Note that we first need to generate the new $\bar{\mathbf{z}}_0$, since denoising ${\mathbf{h}}_\tau$ is conditioned on $\bar{\mathbf{z}}_0$ according to LION's hierarchical latent DDM setup.
The forward diffusion of DDMs progressively destroys more and more details of the input data. Hence, diffusing LION's latent encodings only for small $\tau$, and then denoising again, results in new $\bar{\mathbf{z}}_0$ and $\bar{\mathbf{h}}_0$ that have changed only slightly compared to the original ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$. In other words, for small $\tau$, the diffused-denoised $\bar{\mathbf{z}}_0$ and $\bar{\mathbf{h}}_0$ will be close to the original ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$. This observation was also made by \citet{meng2022sdedit}. Similarly, we find that when $\bar{\mathbf{z}}_0$ and $\bar{\mathbf{h}}_0$ are sent through LION's decoder network, the corresponding point cloud $\bar{\mathbf{x}}$ closely resembles the input point cloud ${\mathbf{x}}$ in overall shape and differs only in its details. Diffusing for more steps, i.e., larger $\tau$, corresponds to resampling the shape more globally (with $\tau=T$ meaning that an entirely new shape is generated), while using smaller $\tau$ implies that the original shape is preserved more faithfully (with $\tau=0$ meaning that the original shape is preserved entirely). Hence, we can use this technique to inject diversity into any given shape and resample different details in a controlled manner (as shown, for instance, in Fig.~\ref{fig:gen_v2_mesh}).
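The diffuse-denoise procedure can be summarized in a few lines of Python; here `alpha_bar` and `denoise_step` are hypothetical stand-ins for the trained latent DDM's schedule and for one step of ancestral sampling, not LION's actual API:

```python
import math
import random

def diffuse_denoise(z0, tau, T, alpha_bar, denoise_step, rng=random):
    # Diffuse a clean latent z0 for tau <= T steps using the closed-form
    # Gaussian kernel q(z_tau | z_0), then run the learned reverse chain from
    # step tau back to 0. Small tau resamples details only; tau = T
    # regenerates the shape from scratch.
    assert 0 <= tau <= T
    abar = alpha_bar(tau)
    z = [math.sqrt(abar) * zi + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
         for zi in z0]
    for t in range(tau, 0, -1):
        z = denoise_step(z, t)  # one ancestral sampling step of the DDM
    return z
```

In LION the routine is applied first to the shape latent and then to the latent points, since denoising the latent points is conditioned on the freshly denoised shape latent.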
We can use this \textit{diffuse-denoise} approach not only for resampling different details from clean shapes, but also to ``clean up'' poor encodings. For instance, when LION's encoders operate on very noisy or coarsely voxelized input point clouds (see Sec.~\ref{subsec:extensions} and App.~\ref{app:exp_detail_guided}), the predicted shape encodings may be poor. The encoder networks may roughly recognize the overall shape but not capture any details due to the noise or voxelizations. Hence, we can perform some \textit{diffuse-denoise} to essentially partially discard the poor encodings and regenerate them from the DDMs, which have learnt a model of clean detailed shapes, while preserving the overall shape. This allows us to perform multimodal generation when using voxelized or noisy input point clouds as guidance, because we can sample various different plausible versions using \textit{diffuse-denoise}, while always approximately preserving the overall input shape (see examples in Figs.~\ref{fig:voxel_to_mesh}, \ref{fig:noise_exp}, \ref{fig:more_voxel_exp_3cls}, and \ref{fig:more_denoise_3cls}).
\subsection{Encoder Fine-Tuning for Voxel-Conditioned Synthesis and Denoising} \label{app:exp_detail_guided}
A crucial advantage of LION's underlying VAE framework with latent DDMs is that we can adapt the encoder neural networks for different relevant tasks, as discussed in Sec.~\ref{subsec:extensions} and demonstrated in our experiments. For instance, a digital artist may have a rough idea about the shape they desire to synthesize and they may be able to quickly put together a coarse voxelization according to whatever they imagine. Or similarly, a noisy version of a shape may be available and the user may want to guide LION's synthesis accordingly.
To this end, we propose to fine-tune LION's encoder neural networks for such tasks: In particular, we take clean shapes ${\mathbf{x}}$ from the training data and voxelize them or add noise to them. Specifically, we test three different noise types as well as voxelization (see Figs.~\ref{fig:more_denoise_3cls} and~\ref{fig:more_voxel_exp_3cls}): we either perturb the point cloud with uniform noise, Gaussian noise, outlier noise, or we voxelize it. We denote the resulting coarse or noisy shapes as $\tilde{\mathbf{x}}$ here. Given a fully trained LION model, we can fine-tune its encoder networks to ingest the perturbed $\tilde{\mathbf{x}}$, instead of the clean ${\mathbf{x}}$. For that, we are using the following ELBO-like (maximization) objective:
\begin{equation}\label{eq:lion_finetune_obj}
\mathcal{L}_\textrm{finetune}({\bm{\phi}})=\mathcal{L}_\textrm{reconst}({\bm{\phi}}) - \mathbb{E}_{p(\tilde{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}})}\left[\lambda_{\mathbf{z}} D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}})|p({\mathbf{z}}_0)\right) + \lambda_{\mathbf{h}} D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{h}}_0|\tilde{\mathbf{x}},{\mathbf{z}}_0)|p({\mathbf{h}}_0)\right)\right].
\end{equation}
When training the encoder to denoise uniform or Gaussian noise added to the point cloud, we use the same reconstruction objective as during original LION training, i.e.,
\begin{equation}\label{eq:lion_finetune_obj2}
\mathcal{L}^{L_1}_\textrm{reconst}({\bm{\phi}})=\mathbb{E}_{p(\tilde{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{h}}_0|\tilde{\mathbf{x}},{\mathbf{z}}_0)}\log p_{\bm{\xi}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0).
\end{equation}
However, when training with voxelized inputs or outlier noise, there is no good point-wise correspondence with which to define the reconstruction loss via the Laplace distribution (corresponding to an $L_1$ loss). Therefore, in these cases we instead rely on the Chamfer Distance (CD) and Earth Mover Distance (EMD) for the reconstruction term:
\begin{equation}\label{eq:lion_finetune_obj3}
\mathcal{L}^{\textrm{CD/EMD}}_\textrm{reconst}({\bm{\phi}})=\mathbb{E}_{p(\tilde{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{h}}_0|\tilde{\mathbf{x}},{\mathbf{z}}_0)}\left[\mathcal{L}^{\textrm{CD}}\left(\boldsymbol{\mu}_{\bm{\xi}}({\mathbf{h}}_0,{\mathbf{z}}_0),{\mathbf{x}}\right)+\mathcal{L}^{\textrm{EMD}}\left(\boldsymbol{\mu}_{\bm{\xi}}({\mathbf{h}}_0,{\mathbf{z}}_0),{\mathbf{x}}\right)\right]
\end{equation}
Here, $\mathcal{L}^{\textrm{CD}}$ and $\mathcal{L}^{\textrm{EMD}}$ denote CD and EMD losses:
\begin{align}
\mathcal{L}^{\textrm{CD}}({\mathbf{x}},{\mathbf{y}})&= \sum_{x\in{\mathbf{x}}}\min_{y\in{\mathbf{y}}}||x-y||_1 + \sum_{y\in{\mathbf{y}}}\min_{x\in{\mathbf{x}}}||x-y||_1, \\
\mathcal{L}^{\textrm{EMD}}({\mathbf{x}},{\mathbf{y}})&= \min_{\gamma:{\mathbf{x}}\rightarrow{\mathbf{y}}}\sum_{x\in{\mathbf{x}}} ||x-\gamma(x)||_2,
\end{align}
where $\gamma$ denotes a bijection between the point clouds ${\mathbf{x}}$ and ${\mathbf{y}}$ (with the same number of points). Note that we are using an $L_1$ loss for the distance calculation in the CD, which we found to work well and corresponds to the $L_1$ loss we are relying on during original LION training.
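For reference, the two losses above can be transcribed directly into Python. This is an illustrative sketch only: the brute-force search over bijections in the EMD is feasible solely for tiny point clouds, whereas practical implementations use approximate matching solvers:

```python
from itertools import permutations

def chamfer_l1(X, Y):
    # L1 Chamfer distance between point clouds given as lists of coordinate
    # tuples, matching the L^CD definition above with the L1 point distance.
    d = lambda a, b: sum(abs(ai - bi) for ai, bi in zip(a, b))
    return (sum(min(d(x, y) for y in Y) for x in X)
            + sum(min(d(x, y) for x in X) for y in Y))

def emd_l2(X, Y):
    # Earth Mover Distance for equal-size clouds: the minimum over bijections
    # gamma of sum_x ||x - gamma(x)||_2, found here by exhaustive search.
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(sum(d(x, Y[j]) for x, j in zip(X, perm))
               for perm in permutations(range(len(Y))))
```

Note that, as in the paper, the Chamfer term sums over both directions, so it is symmetric in its arguments, and the EMD is invariant to reordering the points of either cloud.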
\begin{wrapfigure}{r}{0.6\textwidth}
\vspace{-0.4cm}
\begin{center}
\includegraphics[scale=0.155, clip=True]{sections/fig/pipeline_figs/pipeline_boxonly_new2.jpg}
\end{center}
\vspace{-0.1cm}
\caption{\small LION's encoder networks can be fine-tuned to process voxel or noisy point cloud inputs, which can provide guidance to the generative model.}
\vspace{-0.3cm}
\label{fig:finetune}
\end{wrapfigure}
Furthermore, $\boldsymbol{\mu}_{\bm{\xi}}({\mathbf{h}}_0,{\mathbf{z}}_0)$ in Eq.~(\ref{eq:lion_finetune_obj3}) formally denotes the deterministic decoder output given the latent encodings ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$, that is, the mean of the Laplace distribution $p_{\bm{\xi}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0)$. The weights for the Kullback-Leibler (KL) terms in Eq.~(\ref{eq:lion_finetune_obj}), $\lambda_{\mathbf{z}}$ and $\lambda_{\mathbf{h}}$, are generally kept the same as during original LION training. Training with the above objectives ensures that the encoder maps perturbed inputs $\tilde{\mathbf{x}}$ to latent encodings that will decode to the original clean shapes ${\mathbf{x}}$.
This is because maximizing the ELBO (or an adaptation like above) with respect to the encoders $q_{\bm{\phi}}({\mathbf{z}}_0 | \tilde{\mathbf{x}})$ and $q_{\bm{\phi}}({\mathbf{h}}_0 |\tilde{\mathbf{x}}, {\mathbf{z}}_0)$ while keeping the decoder fixed is equivalent to minimizing the KL divergences $D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{z}}_0 | \tilde{\mathbf{x}})|p_{\bm{\xi}}({\mathbf{z}}_0 | {\mathbf{x}})\right)$ and $\mathbb{E}_{q_{\bm{\phi}}({\mathbf{z}}_0 | \tilde{\mathbf{x}})} [D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{h}}_0 | \tilde{\mathbf{x}}, {\mathbf{z}}_0)|p_{\bm{\xi}}({\mathbf{h}}_0 |{\mathbf{x}}, {\mathbf{z}}_0)\right)]$ with respect to the encoders, where $p_{\bm{\xi}}({\mathbf{z}}_0| {\mathbf{x}})$ and $p_{\bm{\xi}}({\mathbf{h}}_0 | {\mathbf{x}}, {\mathbf{z}}_0)$ are the true posterior distributions given \textit{clean} shapes ${\mathbf{x}}$ \cite{kingma2014vae,neal1998view,kingma2019vaeintro}. Consequently, the fine-tuned encoders are trained to use the noisy or voxel inputs $\tilde{\mathbf{x}}$ to predict the true posterior distributions over the latent variables ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$ given the clean shapes ${\mathbf{x}}$.
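This equivalence follows from the standard identity (written here for a single latent variable ${\mathbf{z}}_0$ for simplicity):
\begin{equation*}
\log p_{\bm{\xi}}({\mathbf{x}}) = \underbrace{\mathbb{E}_{q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}})}\left[\log \frac{p_{\bm{\xi}}({\mathbf{x}},{\mathbf{z}}_0)}{q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}})}\right]}_{\textrm{ELBO}} + D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{z}}_0|\tilde{\mathbf{x}})\,\|\,p_{\bm{\xi}}({\mathbf{z}}_0|{\mathbf{x}})\right);
\end{equation*}
since the left-hand side does not depend on the encoder parameters ${\bm{\phi}}$, maximizing the ELBO over ${\bm{\phi}}$ with the decoder fixed is the same as minimizing the KL term.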
A user can therefore utilize these fine-tuned encoders to reconstruct clean shapes from noisy or voxelized inputs. Importantly, once the fine-tuned encoder predicts an encoding we can further refine it, clean up imperfect encodings, and sample different shape variations by a few steps of \textit{diffuse-denoise} in latent space (see previous App.~\ref{app:exp_detail_dd}). This allows for multimodal denoising and voxel-driven synthesis (also see Fig.~\ref{fig:voxel_to_mesh}).
One question that naturally arises is regarding the processing of the noisy or voxelized input shapes. Our PVCNN-based encoder networks can easily process noisy point clouds, but not voxels. Therefore, given a voxelized shape, we uniformly distribute points over the voxelized shape's surface, such that it can be consumed by LION's point cloud processing networks (see details in App.~\ref{app:exp_details_voxelguided}).
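A generic way to distribute points over a voxelized shape's surface is to repeatedly pick an occupied cell, pick one of its axis-aligned faces, and sample a point uniformly on that face. The function below is a simplified stand-in for this step (LION's actual point placement is described in App.~\ref{app:exp_details_voxelguided} and may, e.g., restrict sampling to exterior faces):

```python
import random

def sample_voxel_surface_points(voxels, voxel_size, n_points, rng=random):
    # voxels: list of (i, j, k) integer grid coordinates of occupied cells.
    # Returns n_points xyz tuples lying on faces of the occupied cells.
    points = []
    for _ in range(n_points):
        i, j, k = rng.choice(voxels)       # pick an occupied cell
        axis = rng.randrange(3)            # pick a face-normal axis
        offs = [rng.random(), rng.random(), rng.random()]
        offs[axis] = rng.choice([0.0, 1.0])  # snap to one of the two faces
        points.append(tuple((c + o) * voxel_size
                            for c, o in zip((i, j, k), offs)))
    return points
```

The resulting point set can then be fed to the fine-tuned PVCNN-based encoders like any other point cloud.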
We would like to emphasize that LION supports these applications easily without re-training the latent DDMs due to its VAE framework with additional encoders and decoders, in contrast to previous works that train DDMs on point clouds directly~\cite{zhou2021shape,luo2021diffusion}.
For instance, PVD~\cite{zhou2021shape} operates directly on the voxelized or perturbed point clouds with its DDM. Because of that, PVD needs to perform many steps of \textit{diffuse-denoise} to remove all the noise from the input---there is no encoder that can help with that. However, this has the drawback of inducing significant shape variations that do not correspond well to the original noisy or voxelized inputs (see experiments and discussion in Sec.~\ref{sec:exps_guided}).
\subsection{Shape Interpolation} \label{app:exp_detail_interpol}
Here, we explain in detail how exactly we perform shape interpolation. It may be instructive to take a step back first and motivate our approach. Of course, we cannot simply linearly interpolate two point clouds, that is, the points' $xyz$-coordinates, directly. This would result in unrealistic outputs along the interpolation path. Rather, we should perform interpolation in a space where semantically similar point clouds are mapped near each other. %
One option that comes to mind is to use the latent space of LION's point cloud VAE, that is, both the shape latent space and the latent points. We could interpolate two point clouds' encodings, and then decode back to point cloud space. However, we do not have any guarantees in this situation either, due to the VAE's prior hole problem~\cite{vahdat2021score,tomczak2018VampPrior,takahashi2019variational,bauer2019resampled,vahdat2020nvae,aneja2020ncpvae,sinha2021d2c,rosca2018distribution,hoffman2016elbo}, that is, the problem that the distribution of all encodings of the training data will not perfectly form a Gaussian, even though it was regularized towards one during VAE training (see Eq.~(\ref{eq:lion_elbo})). Hence, when simply interpolating directly in the VAE's latent space, we would pass through regions in latent space for which the decoder does not produce realistic samples. %
This would result in poor outputs.
Therefore, we rather interpolate in the prior spaces of our latent DDMs themselves, that is, the spaces that emerge at the end of the forward diffusion processes. Since the diffusion process of DDMs by construction perturbs all data points into almost perfectly Gaussian ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$ (where $t=1$ denotes the end of the diffusion for continuous $t\in[0,1]$), DDMs do not suffer from any prior hole challenges---the denoising model is essentially well trained for all possible ${\mathbf{x}}_1\sim{\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$. Hence, given two ${\mathbf{x}}_1^A$ and ${\mathbf{x}}_1^B$, in DDMs we can safely interpolate them according to
\begin{equation}\label{eq:spherical_interpol}
{\mathbf{x}}_1^{s} = \sqrt{s}\,{\mathbf{x}}_1^A + \sqrt{1-s}\,{\mathbf{x}}_1^B
\end{equation}
for $s\in[0,1]$ and expect meaningful outputs when generating the corresponding denoised samples.
But why do we choose the square root-based interpolation? Since we are working in a very high-dimensional space, we know from the Gaussian annulus theorem that both ${\mathbf{x}}_1^A$ and ${\mathbf{x}}_1^B$ almost certainly lie on a thin (high-dimensional) spherical shell that supports almost all probability mass of $p_1({\mathbf{x}}_1)\approx {\mathcal{N}}({\mathbf{x}}_1;\bm{0}, {\bm{I}})$. Furthermore, since ${\mathbf{x}}_1^A$ and ${\mathbf{x}}_1^B$ are almost certainly orthogonal to each other, again due to the high dimensionality, our interpolation in Eq.~(\ref{eq:spherical_interpol}) between ${\mathbf{x}}_1^A$ and ${\mathbf{x}}_1^B$ corresponds to performing \textit{spherical interpolation} along the spherical shell where almost all probability mass concentrates. In contrast, linear interpolation would leave this shell, which led to poorer results in our experiments, because the model is not well trained for denoising samples outside the typical set. Note that we found spherical interpolation to be crucial (in DDMs of images, linear interpolation tends to still work decently; for our latent point DDM, however, linear interpolation performed very poorly).
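The geometric argument above can be checked numerically. The following minimal numpy sketch (our own illustration, not part of the LION code) verifies that the square root-based interpolation of Eq.~(\ref{eq:spherical_interpol}) stays on the Gaussian shell, while linear interpolation dips inside it:

```python
import numpy as np

def sqrt_interp(x_a, x_b, s):
    # Square root-based interpolation between two prior samples (Eq. above).
    return np.sqrt(s) * x_a + np.sqrt(1.0 - s) * x_b

rng = np.random.default_rng(0)
d = 100_000  # high-dimensional prior, as for the latent DDMs
x_a, x_b = rng.standard_normal(d), rng.standard_normal(d)

# Gaussian annulus theorem: both samples lie near radius sqrt(d) and are
# nearly orthogonal, so sqrt-interpolation approximately preserves the norm.
mid_sqrt = np.linalg.norm(sqrt_interp(x_a, x_b, 0.5)) / np.sqrt(d)
mid_lin = np.linalg.norm(0.5 * x_a + 0.5 * x_b) / np.sqrt(d)
print(round(mid_sqrt, 2))  # ~1.0: stays on the shell
print(round(mid_lin, 2))   # ~0.71: leaves the typical set
```

At the midpoint, linear interpolation shrinks the norm by a factor of roughly $1/\sqrt{2}\approx0.707$, exactly the "leaving the shell" effect described above.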
In LION, we have two DDMs operating on the shape latent variables ${\mathbf{z}}_0$ and the latent points ${\mathbf{h}}_0$. Concretely, for interpolating two shapes ${\mathbf{x}}^A$ and ${\mathbf{x}}^B$ in LION, we first encode them into ${\mathbf{z}}_0^A$ and ${\mathbf{h}}_0^A$, as well as ${\mathbf{z}}_0^B$ and ${\mathbf{h}}_0^B$. Now, using the generative ODE (see App.~\ref{app:background_extended}) we further encode these latents into the DDMs' prior distributions, resulting in encodings ${\mathbf{z}}_1^A$ and ${\mathbf{h}}_1^A$, as well as ${\mathbf{z}}_1^B$ and ${\mathbf{h}}_1^B$ (note that we need to correctly capture the conditioning when using ${\bm{\epsilon}}_{\bm{\psi}}({\mathbf{h}}_t,{\mathbf{z}}_0,t)$ in the generative ODE for ${\mathbf{h}}_t$). Next, we first interpolate the shape latent DDM encodings ${\mathbf{z}}_1^{s} = \sqrt{s}{\mathbf{z}}_1^A + \sqrt{1-s}{\mathbf{z}}_1^B$ and use the generative ODE to deterministically generate all ${\mathbf{z}}_0^{s}$ along the interpolation path. Then, we also interpolate the latent point DDM encodings ${\mathbf{h}}_1^{s} = \sqrt{s}{\mathbf{h}}_1^A + \sqrt{1-s}{\mathbf{h}}_1^B$ and, conditioned on the corresponding ${\mathbf{z}}_0^{s}$ along the interpolation path, also generate deterministically all ${\mathbf{h}}_0^{s}$ along the interpolation path using the generative ODE. Finally, we can decode all ${\mathbf{z}}_0^{s}$ and ${\mathbf{h}}_0^{s}$ along the interpolation $s\in[0,1]$ back to point cloud space and obtain the interpolated point clouds ${\mathbf{x}}^{s}$, which we can optionally convert into meshes with SAP.
Note that instead of using given shapes and encoding them into the VAE's latent space and further into the DDMs' prior, we can also directly sample novel encodings in the DDM priors and interpolate those.
In practice, to solve the generative ODE both for encoding and generation, we use an adaptive step size Runge--Kutta 4(5)~\cite{song2020score,dormand1980odes} solver with error tolerances of $10^{-5}$. Furthermore, we do not actually solve the ODE all the way to exactly $0$, but only up to a small time $10^{-5}$ for numerical reasons (hence, the actual integration interval for the ODE solver is $[10^{-5},1]$). We generally rely on our LION models whose latent DDMs were trained with 1000 discrete time steps (see objectives Eqs.~(\ref{eq:latent_ddm_1}) and~(\ref{eq:latent_ddm_2})) and found them to generalize well to the continuous-time setting where the model is also queried for intermediate times $t$ (see discussion in App.~\ref{app:background_extended}).
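As an illustration of this solver setup, the sketch below integrates the probability flow ODE of a variance-preserving diffusion with scipy's adaptive RK45 (Dormand--Prince 4(5)) solver over $[10^{-5},1]$ with $10^{-5}$ tolerances. Everything here is a simplified stand-in: \texttt{eps\_model} is a hypothetical analytic placeholder (exact only for data concentrated at the origin), not LION's trained latent DDM, and the linear $\beta$ schedule is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

BETA0, BETA1 = 0.1, 20.0  # assumed variance-preserving noise schedule

def alpha_bar(t):
    # alpha_bar(t) = exp(-int_0^t beta(s) ds) for beta(t) linear in t.
    return np.exp(-0.5 * t**2 * (BETA1 - BETA0) - t * BETA0)

def eps_model(x, t):
    # Placeholder noise-prediction network: exact for data concentrated at
    # the origin. LION would call its trained latent DDM here instead.
    return x / np.sqrt(1.0 - alpha_bar(t))

def probability_flow_rhs(t, x):
    # Probability flow ODE drift: dx/dt = -0.5 beta(t) (x + score(x, t)),
    # with the score expressed via the noise-prediction parametrization.
    beta_t = BETA0 + t * (BETA1 - BETA0)
    score = -eps_model(x, t) / np.sqrt(1.0 - alpha_bar(t))
    return -0.5 * beta_t * (x + score)

rng = np.random.default_rng(1)
x1 = rng.standard_normal(16)  # sample from the DDM prior at t = 1
# Adaptive RK45 over [1e-5, 1], tolerances 1e-5, integrating t: 1 -> 1e-5,
# mirroring the solver settings described above.
sol = solve_ivp(probability_flow_rhs, (1.0, 1e-5), x1,
                method="RK45", rtol=1e-5, atol=1e-5)
x0 = sol.y[:, -1]  # (deterministic) encoding/generation endpoint
```

Running the ODE in the other direction, $(10^{-5}, 1)$, gives the deterministic encoding used for interpolation above.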
\subsection{Mesh Reconstruction with Shape As Points} \label{app:exp_detail_sap}
Before explaining in App.~\ref{app:sap_in_lion} how we incorporate Shape As Points~\cite{peng2021SAP} into LION to reconstruct smooth surfaces, we first provide background on Shape As Points in App.~\ref{app:sap_background}.
\subsubsection{Background on Shape As Points} \label{app:sap_background}
Shape As Points (SAP) \cite{peng2021SAP} reconstructs 3D surfaces from points by finding an indicator function $\chi: \mathbb{R}^3 \rightarrow \mathbb{R}$ whose zero level set corresponds to the reconstructed surface. To recover $\chi$, SAP first densifies the input point cloud $X = \{x_i \in \mathbb{R}^3\}^N_{i=1}$
by predicting $k$ offset points and normals for each input point---such that in total we have additional points $X' = \{x_i'\}_{i=1}^{kN}$ and normals $N' = \{n'_i\}_{i=1}^{kN}$---using a neural network $f_\theta(X)$ with parameters $\theta$ conditioned on the input point cloud $X$.
After upsampling the point cloud and predicting normals, SAP solves a Poisson partial differential equation (PDE) to recover the function $\chi$ from the densified point cloud. Casting surface reconstruction as a Poisson problem is a widely used approach first introduced by \citet{kazhdan2006poisson}. Unlike \citet{kazhdan2006poisson}, which encodes $\chi$ as a linear combination of sparse basis functions and solves the PDE using a finite element solver on an octree, SAP represents $\chi$ in a discrete Fourier basis on a dense grid and solves the problem using a spectral solver. This spectral approach has the benefits of being fast and differentiable, at the expense of cubic (with respect to the grid size) memory consumption.
To train the upsampling network $f$, SAP minimizes the $L_2$ distance between the predicted indicator function $\chi$ (sampled on a dense, regular grid) and a pseudo-ground-truth indicator function $\chi_\text{gt}$ recovered by solving the same Poisson PDE on a dense set of points and normals. Denoting the differentiable Poisson solve as $\chi = \text{Poisson}(X', N')$, we can write the loss minimized by SAP as
\begin{equation}
\mathcal{L}(\theta) = \mathbb{E}_{(X_i, \chi_i) \sim \mathcal{D}} \|\text{Poisson}(f_\theta(X_i)) - \chi_i\|_2^2
\end{equation} where $\mathcal{D}$ is the training data distribution of indicator functions $\chi_i$ for shapes and point samples on the surface of those shapes $X_i$.
Since the ideas from Poisson Surface Reconstruction (PSR) \cite{kazhdan2006poisson} lie at the core of Shape As Points, we give a brief overview of the Poisson formulation for surface reconstruction: Given input points $X = \{x_i \in \mathbb{R}^3\}_{i=1}^N$ and normals $N = \{n_i \in \mathbb{R}^3\}_{i=1}^N$, we aim to recover an indicator function $\chi : \mathbb{R}^3 \rightarrow \mathbb{R}$ such that the reconstructed surface $S$ is the zero level set of $\chi$, i.e., $S = \{x : \chi(x) = 0\}$.
Intuitively, we would like the recovered $\chi$ to change sharply between a positive value and a negative value at the surface boundary along the direction orthogonal to the surface. Thus, PSR treats the surface normals $N$ as noisy samples of the gradient of $\chi$. In practice, PSR first constructs a smoothed vector field $\vec{V}$ from $N$ by convolving the normals with a filter (e.g. a Gaussian), and recovers $\chi$ by minimizing
\begin{equation}\label{eq:integrate_poisson}
\min_\chi \|\nabla \chi - \vec{V}\|_2^2
\end{equation}
over the input domain. Observe that applying the (linear) divergence operator to the problem in Eq.~(\ref{eq:integrate_poisson}) does not change the solution. Thus, we can apply the divergence operator to Eq.~(\ref{eq:integrate_poisson}) to transform it into a Poisson problem
\begin{equation}
\Delta \chi = \nabla \cdot \vec{V},
\end{equation}
which can be solved using standard numerical methods for elliptic PDEs. Since PSR is effectively integrating $\vec{V}$ to recover $\chi$, the solution is ambiguous up to an additive constant. To remedy this, PSR subtracts the mean value of $\chi$ at the input points, i.e., $\frac{1}{N} \sum_{i=1}^N \chi(x_i)$, yielding a unique solution.
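To make the spectral approach concrete, here is a minimal numpy sketch (our own illustration, not the SAP implementation) of solving the Poisson problem on a periodic grid with FFTs; the zero-frequency mode is precisely the undetermined additive constant discussed above:

```python
import numpy as np

def solve_poisson_fft(rhs, lengths):
    """Solve Laplacian(chi) = rhs spectrally on a periodic grid.

    In Fourier space the Poisson equation becomes -|k|^2 chi_hat = rhs_hat.
    The k = 0 mode (the additive constant) is undetermined and set to zero
    here; PSR instead anchors it by subtracting the mean of chi at the
    input points.
    """
    ks = np.meshgrid(*[2 * np.pi * np.fft.fftfreq(n, d=L / n)
                       for n, L in zip(rhs.shape, lengths)], indexing="ij")
    k2 = sum(k**2 for k in ks)
    rhs_hat = np.fft.fftn(rhs)
    chi_hat = np.zeros_like(rhs_hat)
    chi_hat[k2 > 0] = -rhs_hat[k2 > 0] / k2[k2 > 0]
    return np.real(np.fft.ifftn(chi_hat))

# Sanity check on a 2D grid with known solution chi = sin(2 pi x) cos(2 pi y):
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n, endpoint=False),
                   np.linspace(0, 1, n, endpoint=False), indexing="ij")
chi_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
rhs = -8 * np.pi**2 * chi_true  # analytic Laplacian of chi_true
chi = solve_poisson_fft(rhs, lengths=(1.0, 1.0))
```

For band-limited inputs such as this test function, the spectral solve is exact up to floating-point error, which is what makes it both fast and differentiable; the dense grid is also the source of the cubic memory cost mentioned above.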
\subsubsection{Incorporating Shape As Points in LION} \label{app:sap_in_lion}
LION is primarily set up as a point cloud generative model. However, an artist may prefer a mesh as the model's output, because meshes are still the most commonly used shape representation in graphics software. Therefore, we augment LION with mesh reconstruction, leveraging SAP. In particular, given a point cloud generated by LION, we use SAP to predict an indicator function $\chi$ defining a smooth shape surface in $\mathbb{R}^3$ as its zero level set, as explained in detail in the previous section. Then, we extract polygonal meshes from $\chi$ via marching cubes~\cite{lorensen1987marching}.
SAP is commonly trained using slightly noise-perturbed point clouds as input to its neural network $f_\theta$~\cite{peng2021SAP}. This results in robustness and generalization to noisy shapes during inference. The point clouds generated by LION are likewise not perfectly clean and smooth but subject to some noise.
In principle, to make our SAP model ideally suited for reconstructing surfaces from LION's generated point clouds, it would be best to train SAP using inputs that are subject to the same noise as generated by LION. Although we do not know the exact form of LION's noise, we propose to nevertheless specialize the SAP model for LION: Specifically, we take SAP's clean training data (i.e. densely sampled point clouds from which accurate pseudo-ground-truth indicator functions can be calculated via PSR; see previous App.~\ref{app:sap_background}) and encode it into LION's latent spaces ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$. Then, we perform a few \textit{diffuse-denoise} steps in latent space (see App.~\ref{app:exp_detail_dd}) that create small shape variations of the input shapes when decoded back to point clouds. However, when doing these \textit{diffuse-denoise} steps, we are exactly using LION's generation mechanism, i.e., the stochastic sampling in Eq.~(\ref{eq:ddpm_sampling}), to generate the slightly perturbed encodings. Hence, we are injecting the same noise that is also seen in generation. Therefore, the correspondingly generated point clouds can serve as slightly noisy versions of the original clean point clouds before encoding, diffuse-denoise, and decoding, and we can use this data to train SAP. We found experimentally that this LION-specific training of SAP can indeed improve SAP's performance when reconstructing meshes from LION's generated point clouds. We investigate this experimentally in App.~\ref{app:abl_num_step_SAP}.
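To make the perturbation procedure concrete, here is a simplified, unconditional numpy sketch of few-step \textit{diffuse-denoise} (function names and the dummy noise-prediction network are our own stand-ins for LION's trained latent DDMs, which additionally condition the latent points model on the shape latent):

```python
import numpy as np

def diffuse_denoise(z0, k, betas, eps_model, rng):
    """Diffuse a clean latent z0 for k forward steps (in closed form), then
    denoise it back with k ancestral DDPM sampling steps. Small k yields
    mild shape variations of z0; larger k yields larger variations."""
    alphas = 1.0 - betas
    alphas_bar = np.cumprod(alphas)
    # Forward diffusion directly to step k:
    a_bar_k = alphas_bar[k - 1]
    z = np.sqrt(a_bar_k) * z0 \
        + np.sqrt(1.0 - a_bar_k) * rng.standard_normal(z0.shape)
    # Reverse (ancestral) denoising from step k back to 0:
    for t in range(k, 0, -1):
        a, a_bar = alphas[t - 1], alphas_bar[t - 1]
        eps = eps_model(z, t)
        mean = (z - (1.0 - a) / np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a)
        z = mean + (np.sqrt(1.0 - a) * rng.standard_normal(z.shape)
                    if t > 1 else 0.0)
    return z

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)       # standard DDPM beta schedule
z0 = rng.standard_normal(128)               # encoded latent (stand-in)
eps_hat = lambda z, t: np.zeros_like(z)     # dummy untrained network
z_var = diffuse_denoise(z0, 100, betas, eps_hat, rng)
```

Because the reverse steps use exactly the stochastic sampling of Eq.~(\ref{eq:ddpm_sampling}), the injected noise matches what the decoder sees during generation, which is the point of this SAP training-data construction.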
Note that in principle an even tighter integration of SAP with LION would be possible. In future versions of LION, it would be interesting to study joint end-to-end LION and SAP training, where LION's decoder directly predicts a dense set of points with normals that is then matched to a pseudo-ground-truth indicator function using differentiable PSR. However, we are leaving this to future research. To the best of our knowledge, LION is the first point cloud generative model that directly incorporates modern surface and mesh reconstruction at all. In conclusion, using SAP we can convert LION into a mesh generation model, while under the hood still leveraging point clouds, which are ideal for DDM-based modeling.
\section{Implementation}\label{app:impl}
In Fig.~\ref{fig:architecture}, we plot the building blocks used in LION:
\begin{itemize}
\item Multilayer perceptron (MLP), point-voxel convolution (PVC), set abstraction (SA), and feature propagation (FP) represent the building modules for our PVCNNs. The Grouper block (in SA) consists of the sampling layer and grouping layer introduced by PointNet++~\cite{qi2017pointnetplusplus}.
\item PVCNN visualizes a typical network used in LION. The latent points encoder, the decoder, and the latent points prior all share this high-level architecture design, which is modified from the base network of PVD\footnote{\url{https://github.com/alexzhou907/PVD}}~\cite{zhou2021shape}. It consists of several set abstraction levels and feature propagation levels. The details of these levels can be found in PointNet++~\cite{qi2017pointnetplusplus}.
\item ResSE denotes a ResNet block with squeeze-and-excitation (SE)~\cite{hu2018squeeze} layers.
\item AdaGN is the adaptive group normalization (GN) layer that is used for conditioning on the shape latent.
\end{itemize}
\subsection{VAE Backbone}
Our VAE backbone consists of two encoder networks, and a decoder network.
The PVCNNs we used are based on PointNet++~\cite{qi2017pointnetplusplus} with point-voxel convolutions~\cite{liu2019pvcnn}. %
\input{sections/figTexts/architecture}
We show the details of the shape latent encoder in Tab.~\ref{tab:architecture_global_encoder}, the latent points encoder in Tab.~\ref{tab:architecture_local_encoder}, and the details of the decoder in Tab.~\ref{tab:architecture_local_decoder}.
We use a dropout probability of 0.1 for all dropout layers in the VAE. All group normalization layers in the
latent points encoder as well as in the decoder are replaced by adaptive group normalization (AdaGN) layers to condition on the shape latent. For the AdaGN layers, we initialize the weight of the linear layer scaled by $0.1$. The bias for the output scale is set to $1.0$ and the bias for the output shift is set to $0.0$. The AdaGN layer is also shown in Fig.~\ref{fig:architecture}.
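A minimal numpy sketch of such an AdaGN layer with this initialization (our own simplification; LION's actual layer is a network module inside the PVCNNs): a linear layer maps the shape latent to a per-channel scale and shift applied after group normalization.

```python
import numpy as np

class AdaGN:
    """Adaptive group normalization sketch: group-normalize the features,
    then apply a scale and shift predicted from the shape latent z.
    Per the initialization above, the linear weight is scaled by 0.1, the
    scale-bias starts at 1.0 and the shift-bias at 0.0, so the layer
    initially behaves like plain group normalization."""

    def __init__(self, channels, groups, z_dim, rng):
        self.c, self.g = channels, groups
        self.W = 0.1 * rng.standard_normal((2 * channels, z_dim))
        self.b = np.concatenate([np.ones(channels), np.zeros(channels)])

    def __call__(self, x, z, eps=1e-5):
        # x: (channels, n_points) features, z: (z_dim,) shape latent
        h = x.reshape(self.g, self.c // self.g, -1)
        h = (h - h.mean(axis=(1, 2), keepdims=True)) / np.sqrt(
            h.var(axis=(1, 2), keepdims=True) + eps)
        h = h.reshape(self.c, -1)
        scale, shift = np.split(self.W @ z + self.b, 2)
        return scale[:, None] * h + shift[:, None]

rng = np.random.default_rng(0)
layer = AdaGN(channels=8, groups=2, z_dim=4, rng=rng)
x = rng.standard_normal((8, 2048))
out = layer(x, np.zeros(4))  # z = 0: reduces to plain group norm
```

With a nonzero shape latent, the predicted scale and shift modulate the per-channel statistics, which is how the shape latent conditions the latent points networks.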
\textbf{Model Initialization.} We initialize our VAE model such that it acts as an identity mapping between the input, the latent space, and reconstructed points at the beginning of training. We achieve this by scaling down the variances of encoders and by weighting the skip connections accordingly.
\textbf{Weighted Skip Connection.} We add skip connections in several places to improve information propagation. In the latent points encoder, the predicted mean of the latent point coordinates (in 3 channels) is multiplied by $0.01$, and the clean input point cloud coordinates (in 3 channels) are then added to it. In the decoder, the predicted output point coordinates (in 3 channels) are likewise multiplied by $0.01$ before the sampled latent point coordinates are added.
\textbf{Variance Scaling.} We subtract a constant value from the log standard deviation of the posterior normal distribution. This constant offset helps push the variance of the posterior towards zero when the LION model is initialized. In our experiments, we set this offset value to $6$.
With the above techniques, at the beginning of training the input point cloud is effectively copied into the latent point cloud and then directly decoded back to point cloud space, and the shape latent variables are not active. This prevents diverging reconstruction losses at the beginning of training.
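The combined effect of the weighted skip connections and the variance scaling can be sketched as follows (a numpy illustration with our own hypothetical stand-in networks, not LION's actual architecture):

```python
import numpy as np

def encode_latent_points(x, net_mu, net_logstd, log_std_offset=6.0):
    # Weighted skip: the predicted mean is scaled by 0.01 before the clean
    # input coordinates are added; the log-std is shifted down by a constant
    # offset (6), so at init the posterior is nearly a delta centered on x.
    mu = x + 0.01 * net_mu(x)
    log_std = net_logstd(x) - log_std_offset
    return mu, log_std

def decode(h, net_out):
    # Decoder-side weighted skip: 0.01x the predicted offsets plus the
    # sampled latent point coordinates.
    return h + 0.01 * net_out(h)

rng = np.random.default_rng(0)
x = rng.standard_normal((2048, 3))   # a "clean" input point cloud
f = np.tanh                          # untrained stand-ins with O(1) outputs
mu, log_std = encode_latent_points(x, f, f)
h = mu + np.exp(log_std) * rng.standard_normal(mu.shape)  # reparametrization
x_rec = decode(h, f)
print(np.abs(x_rec - x).max())  # small: the VAE is a near-identity at init
```

Regardless of the (untrained) networks' outputs, the reconstruction stays close to the input at initialization, which is exactly the behavior that prevents diverging reconstruction losses early in training.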
\input{sections/table/architecture}
\subsection{Shape Latent DDM Prior}
We show the details of the shape latent DDM prior in Tab.~\ref{tab:architecture_global_prior}. We use a dropout probability of 0.3, 0.3, and 0.4 for the airplane, car, and chair category, respectively. The time embeddings are added to the features of each ResSE layer.
\subsection{Latent Points DDM Prior}
We show the details of the latent points DDM prior in Tab.~\ref{tab:architecture_local_prior}. We use a dropout probability of 0.1 for all dropout layers in this DDM prior. All group normalization layers are replaced by adaptive group normalization layers to condition on the shape latent variable. The time embeddings are concatenated with the point features for the inputs of each SA and FP layer.
Note that both latent DDMs use a \textit{mixed denoising score network} parametrization, directly following \citet{vahdat2021score}. In short, the DDM's denoising model is parametrized as the analytically ideal denoising network under the assumption of a normal data distribution, plus a neural network-based correction. This can be advantageous if the distribution that is modeled by the DDM is close to normal. This is indeed the case in our situation, because during the first training stage all latent encodings were regularized towards a standard normal distribution by the VAE objective's Kullback--Leibler regularization. Our implementation of the mixed denoising score network technique directly follows \citet{vahdat2021score} and we refer the reader there for further details.
\subsection{Two-stage Training }
The training of LION consists of two stages:
\textbf{First Stage Training.} LION optimizes the modified ELBO of Eq.~(\ref{eq:lion_elbo}) with respect to the two encoders and the decoder as shown in the main paper. We use the same value for $\lambda_{\mathbf{z}}$ and $\lambda_{\mathbf{h}}$. These KL weights, starting at $10^{-7}$, are annealed linearly for the first 50\% of the maximum number of epochs. Their final value is set to $0.5$ at the end of the annealing process.
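The KL annealing schedule just described can be written compactly as follows (a sketch; function and variable names are our own):

```python
def kl_weight(epoch, max_epochs, w_start=1e-7, w_end=0.5):
    """Linear KL annealing: ramp the KL weight from w_start to w_end over
    the first 50% of training epochs, then hold it at the final value.
    Used for both lambda_z and lambda_h, which share the same value."""
    frac = min(epoch / (0.5 * max_epochs), 1.0)
    return w_start + frac * (w_end - w_start)
```

For example, with 100 total epochs the weight rises from $10^{-7}$ at epoch 0 to $0.5$ at epoch 50 and stays there afterwards.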
\textbf{Second Stage Training.} In this stage, the encoders and the decoder are frozen, and only the two DDM prior networks are trained using the objectives in Eqs.~(\ref{eq:latent_ddm_1}) and~(\ref{eq:latent_ddm_2}).
During training, we first encode the clean point clouds ${\mathbf{x}}$ with the encoders and sample ${\mathbf{z}}_0 \sim q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}}), \ {\mathbf{h}}_0 \sim q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0) $.
We then draw the time steps $t$ uniformly from $U\{1,...,T\}$ and sample the diffused shape latent ${\mathbf{z}}_t$ and latent points ${\mathbf{h}}_t$.
Our shape latent DDM prior takes ${\mathbf{z}}_t$ with $t$ as input,
and the latent points DDM prior takes $({\mathbf{z}}_0, t, {\mathbf{h}}_t)$ as input.
We use the un-weighted training objective (i.e., $w(t)=1$).
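One evaluation of this unweighted second-stage objective can be sketched as follows (a simplified numpy illustration for a single latent; the dummy noise-prediction network and the linear beta schedule are our own stand-ins):

```python
import numpy as np

def ddm_training_loss(z0, alphas_bar, eps_model, rng):
    """One unweighted (w(t) = 1) training loss evaluation: draw
    t ~ U{1,...,T}, diffuse z0 to z_t in closed form, and regress the
    injected noise with the denoising network."""
    T = len(alphas_bar)
    t = int(rng.integers(1, T + 1))
    a_bar = alphas_bar[t - 1]
    eps = rng.standard_normal(z0.shape)
    z_t = np.sqrt(a_bar) * z0 + np.sqrt(1.0 - a_bar) * eps
    return np.mean((eps_model(z_t, t) - eps) ** 2)

rng = np.random.default_rng(0)
alphas_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))
z0 = rng.standard_normal(10_000)          # encoded latent (stand-in)
eps_hat = lambda z, t: np.zeros_like(z)   # dummy untrained network
loss = ddm_training_loss(z0, alphas_bar, eps_hat, rng)
```

In LION, the shape latent DDM receives $({\mathbf{z}}_t, t)$ and the latent points DDM additionally conditions on ${\mathbf{z}}_0$, but the loss structure is the same.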
During second stage training, we regularize the prior DDM neural networks by adding spectral normalization (SN)~\cite{miyato2018spectral} and a group normalization (GN) loss similar to \citet{vahdat2021score}. Furthermore, we record the exponential moving average (EMA) of the latent DDMs' weight parameters, and use the parameter EMAs during inference when calling the DDM priors.
\section{Experiment Details} \label{app:exp_details}
\subsection{Different Datasets} \label{app:datasets}
For the unconditional 3D point cloud generation task, we follow previous works and use the ShapeNet~\cite{shapenet2015} dataset, as pre-processed and released by
PointFlow~\cite{yang2019pointflow}. Also following previous works~\cite{yang2019pointflow,zhou2021shape,luo2021diffusion} and to be able to compare with many different baseline methods, we train on three categories: \textit{airplane}, \textit{chair} and \textit{car}.
The ShapeNet dataset released by PointFlow consists of 15k points for each shape. During training, 2,048 points are randomly sampled from the 15k points at each iteration. The training set consists of 2,832, 4,612, and 2,458 shapes for airplane, chair and car, respectively. The sample quality metrics are reported with respect to the standard reference set, which consists of 405, 662, and 352 shapes for airplane, chair and car, respectively.
During training, we use the same normalization as in PointFlow~\cite{yang2019pointflow} and PVD~\cite{zhou2021shape}, where the data is normalized globally across all shapes. We compute the means for each axis across the whole training set, and one standard deviation across all axes and the whole training set. \textit{Note that there is a typo in the caption of Tab.~\ref{tab:gen_v1_global_norm} in the main text: In fact, this kind of global normalization using standard deviation does not result in $[-1,1]$ point coordinate bounds, but the coordinate values usually extend beyond that.}
When reproducing the baselines on the ShapeNet dataset released by PointFlow~\cite{yang2019pointflow}, we found that some methods~\cite{li2021spgan,zhang2021learning,cai2020eccv,hui2020pdgn} require per-shape normalization, where the mean is computed for each axis for each shape, and the scale is computed as the maximum length across all axes for each shape. As a result, the $xyz$-values of the point coordinates will be bounded within $[-1,1]$.
We train and evaluate LION following this convention~\cite{cai2020eccv} when comparing it to these methods. Note that these different normalizations imply different generative modeling problems. Therefore, it is important to carefully distinguish these different setups for fair comparisons.
When training the SAP model, we follow \citet{peng2021SAP,mescheder2019occupancy} and also use their data splits and data pre-processing to get watertight meshes.
Watertight meshes are required to properly determine whether points are in the interior of the meshes or not,
and to define signed distance fields (SDFs) for volumetric supervision, which the PointFlow data does not offer. More details of the data processing can be found in \citet{mescheder2019occupancy} (Sec. 1.2 in the Supplementary Material).
This dataset variant is denoted as \textit{ShapeNet-vol}. This data is per-shape normalized, i.e., the points' coordinates are bounded by $[-1,1]$.
To combine LION and SAP, we also train LION on the same data used by the SAP model. Therefore, we report sample quality of LION as well as the most relevant baselines DPM, PVD, and also IM-GAN (which synthesizes shapes as SDFs) also on this dataset variant. The number of training shapes is 2,832, 1,272, 1,101, 5,248, 4,746, 767, 1,624, 1,134, 1,661, 2,222, 5,958, 737, and 1,359 for airplane, bench, cabinet, car, chair, display, lamp, loudspeaker, rifle, sofa, table, telephone, and watercraft, respectively. The number of shapes in the reference set is 404, 181, 157, 749, 677, 109, 231, 161, 237, 317, 850, 105, and 193 for airplane, bench, cabinet, car, chair, display, lamp, loudspeaker, rifle, sofa, table, telephone, and watercraft, respectively.
\subsection{Evaluation Metrics} \label{app:metrics}
Different metrics to quantitatively evaluate the generation performance of point cloud generative models have been proposed, and some of them suffer from certain drawbacks. Given a generated set of point clouds $S_g$ and a reference set $S_r$, the most popular metrics are (we are following \citet{yang2019pointflow}):\looseness=-1
\begin{itemize}
\item \textbf{Coverage (COV)}:
\begin{equation}
\textrm{COV}(S_g,S_r)=\frac{|\{\textrm{arg min}_{Y\in S_r} D(X,Y)|X\in S_g\}|}{|S_r|},
\end{equation}
where $D(\cdot,\cdot)$ is either the Chamfer distance (CD) or earth mover distance (EMD). COV measures the number of reference point clouds that are matched to at least one generated shape. COV can quantify diversity and is sensitive to mode dropping, but it does not quantify the quality of the generated point clouds. Even low-quality but diverse generated point clouds can achieve high coverage scores.
\item \textbf{Minimum matching distance (MMD)}:
\begin{equation}
\textrm{MMD}(S_g,S_r)=\frac{1}{|S_r|}\sum_{Y\in S_r}\min_{X\in S_g} D(X,Y),
\end{equation}
where $D(\cdot,\cdot)$ is again either CD or EMD. The idea behind MMD is to calculate the average distance between the point clouds in the reference set and their closest neighbors in the generated set. However, MMD is not sensitive to low-quality point clouds in $S_g$, as they are most likely not matched to any shapes in $S_r$. Therefore, it is also not a reliable metric to measure overall generation quality, and it also does not quantify diversity or mode coverage.
\item \textbf{1-nearest neighbor accuracy (1-NNA)}: To overcome the drawbacks of COV and MMD, \citet{yang2019pointflow} proposed to use 1-NNA as a metric to evaluate point cloud generative models:
\begin{equation}\label{eq:1nna_def}
\textrm{1-NNA}(S_g,S_r)=\frac{\sum_{X\in S_g}\mathbb{I}[N_X\in S_g]+\sum_{Y\in S_r}\mathbb{I}[N_Y\in S_r]}{|S_g|+|S_r|},
\end{equation}
where $\mathbb{I}[\cdot]$ is the indicator function and $N_X$ is the nearest neighbor of $X$ in the set $S_r \cup S_g - \{X\}$ (i.e., the union of the sets $S_r$ and $S_g$, but without the particular point cloud $X$). Hence, 1-NNA represents the leave-one-out accuracy of the 1-NN classifier defined in Eq.~(\ref{eq:1nna_def}). More specifically, this 1-NN classifier classifies each sample as belonging to either $S_r$ or $S_g$ based on the set membership of its nearest neighbor (nearest neighbors can again be computed based on either CD or EMD). If the generated $S_g$ matches the reference $S_r$ well, this classification accuracy will be close to 50\%. Hence, this accuracy can be used as a metric to quantify point cloud generation performance. Importantly, 1-NNA directly quantifies distribution similarity between $S_r$ and $S_g$ and measures both quality and diversity.
\end{itemize}
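The 1-NN classifier of Eq.~(\ref{eq:1nna_def}) can be implemented directly from a pairwise distance matrix; the sketch below (our own illustration, agnostic to whether the distances are CD or EMD) computes the leave-one-out accuracy:

```python
import numpy as np

def one_nna(D, n_g):
    """1-NNA from a full pairwise distance matrix D over S_g + S_r
    (generated samples listed first, n_g of them). Each sample is
    classified by the set membership of its nearest neighbor,
    excluding itself (leave-one-out)."""
    D = D.astype(float).copy()
    np.fill_diagonal(D, np.inf)          # exclude each sample itself
    labels = np.arange(len(D)) < n_g     # True = generated, False = reference
    nn = D.argmin(axis=1)
    return float(np.mean(labels == labels[nn]))

# Toy check: two well-separated clusters are perfectly distinguishable,
# so 1-NNA is 1.0 (far from the ideal 0.5).
rng = np.random.default_rng(0)
g = rng.normal(0.0, 0.1, size=(20, 3))
r = rng.normal(5.0, 0.1, size=(20, 3))
pts = np.concatenate([g, r])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
acc = one_nna(D, n_g=20)
print(acc)  # 1.0
```

If the two sets were drawn from the same distribution instead, nearest neighbors would fall in either set with roughly equal probability and the accuracy would approach 50\%.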
Following \citet{yang2019pointflow}, we conclude that COV and MMD are potentially unreliable metrics to quantify point cloud generation performance, and 1-NNA seems like a more suitable evaluation metric. The more recent and highly relevant PVD~\cite{zhou2021shape} also follows this and uses 1-NNA as its primary evaluation metric. Note that the Jensen--Shannon divergence (JSD) is also sometimes used to quantify point cloud generation performance. However, it measures only the ``average shape'' similarity by marginalizing over all point clouds from the generated and reference set, respectively. This makes it an almost meaningless metric to quantify individual shape quality (see discussion in \citet{yang2019pointflow}).
In conclusion, we follow \citet{yang2019pointflow} and \citet{zhou2021shape} and use 1-NNA as our primary evaluation metric to quantify point cloud generation performance; we generally evaluate it using both CD and EMD distances, according to the following standard definitions:
\begin{align}
\textrm{CD}(X,Y)&= \sum_{x\in X}\min_{y\in Y}||x-y||^2_2 + \sum_{y\in Y}\min_{x\in X}||x-y||^2_2, \\
\textrm{EMD}(X,Y)&= \min_{\gamma:X\rightarrow Y}\sum_{x\in X} ||x-\gamma(x)||_2,
\end{align}
where $\gamma$ is a bijection between point clouds $X$ and $Y$ with the same number of points.
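A direct numpy implementation of the CD definition above (a brute-force reference sketch for small point clouds; in practice the CUDA implementations linked below are used for speed):

```python
import numpy as np

def chamfer_distance(X, Y):
    # Squared-L2 Chamfer distance, term by term as in the definition above:
    # for each point, the squared distance to its nearest neighbor in the
    # other cloud, summed over both directions.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # (|X|,|Y|) pairwise
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Y = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(chamfer_distance(X, Y))  # 2.0
```

Unlike EMD, CD requires no bijection between the clouds, which is why it is cheap to compute but less sensitive to differences in point density.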
We use released codes to compute CD\footnote{\url{https://github.com/ThibaultGROUEIX/ChamferDistancePytorch} (MIT License)} and EMD\footnote{\url{https://github.com/daerduoCarey/PyTorchEMD}}.
Since COV and MMD are still widely used in the literature, though, we are also reporting COV and MMD for all our models in App.~\ref{app:extra_exps}, even though they may be unreliable as metrics for generation quality. Note that for the more meaningful 1-NNA metric, LION generally outperforms all baselines in all experiments.
For fair comparisons and to quantify LION's performance in isolation without SAP-based mesh reconstruction, all metrics are computed directly on LION's generated point clouds, not meshed outputs. However, we also do calculate generation performance after the SAP-based mesh reconstruction in a separate ablation study (see App.~\ref{app:abl_num_step_SAP}). In those cases, we sample points from the SAP-generated surface to create the point clouds for evaluation metric calculation. Similarly, when calculating metrics for the IM-GAN~\cite{Chen_2019_CVPR} baseline we sample points from the implicitly defined surfaces generated by IM-GAN. Analogously, for the GCA~\cite{zhang2021learning} baseline we sample points from the generated voxels' surfaces.
\subsection{Details for Unconditional Generation}
We list the hyperparameters used for training the unconditional generative LION models in Tab.~\ref{tab:hyper_1}. The hyperparameters are the same for both the single-class and the many-class model. Notice that we do not perform any hyperparameter tuning on the many-class model; hence, it is likely that the many-class LION can be further improved with some tuning of the hyperparameters.%
\input{sections/table/hyper_param}
When tuning the model for unconditional generation, we found that the dropout probability and the hidden dimension for the shape latent DDM prior have the largest impact on the model performance. The other hyperparameters, such as the size of the encoders and decoder, matter less.%
\subsection{Details for Voxel-guided Synthesis} \label{app:exp_details_voxelguided}
\textbf{Setup.} We use a voxel size of 0.6 for both training and testing. During training, the training data (after normalization) are first voxelized, and the six faces of all voxels are collected. The faces that are shared by two or more voxels are discarded. To create point clouds from the voxels, we randomly sample points within the remaining voxel faces. In our experiments, 2,048 points are sampled from the voxel surfaces for each shape, with approximately the same number of points per face.
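The face-sampling step can be sketched as follows (our own illustration; the \texttt{faces} array layout, storing each exterior face center plus the index of the axis the face is orthogonal to, is an assumption of this sketch):

```python
import numpy as np

def sample_points_on_faces(faces, voxel_size=0.6, n_points=2048, rng=None):
    """Uniformly sample points on exterior voxel faces.

    faces: (F, 4) array, each row a face center (x, y, z) plus the index
    (0/1/2) of the axis the face is orthogonal to. Faces shared by two or
    more voxels are assumed to have been discarded already."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(faces), size=n_points)  # ~equal points per face
    offsets = rng.uniform(-voxel_size / 2, voxel_size / 2,
                          size=(n_points, 3))
    normal_axis = faces[idx, 3].astype(int)
    offsets[np.arange(n_points), normal_axis] = 0.0   # stay in the face plane
    return faces[idx, :3] + offsets

# Two exterior faces of a single voxel centered at the origin:
faces = np.array([[0.3, 0.0, 0.0, 0],   # +x face, orthogonal to axis 0
                  [0.0, 0.3, 0.0, 1]])  # +y face, orthogonal to axis 1
pts = sample_points_on_faces(faces, rng=np.random.default_rng(0))
```

Each sampled point keeps the face's coordinate along its normal axis and is jittered uniformly within the square face in the two in-plane axes.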
\textbf{Encoder Fine-Tuning.} For encoder fine-tuning, we initialize the model weights from the LION model trained on the same categories with clean data. Both the shape latent encoder and the latent points encoder are fine-tuned on the voxel inputs, while the decoder and the latent DDMs are frozen. We set the maximum number of training epochs to 10,000 and perform early stopping when the reconstruction loss on the validation set reaches a minimum value. In our experiments, training usually stops early after around 500 epochs. For example, our models on the airplane, chair, and car categories are stopped at 129, 470, and 189 epochs, respectively. All other hyperparameters are the same as for the unconditional generation experiments. The training objective can be found in Eq.~(\ref{eq:lion_finetune_obj}) and Eq.~(\ref{eq:lion_finetune_obj3}).
In Fig.~\ref{fig:rec_noise1} and Fig.~\ref{fig:rec_noise2}, we report the reconstruction of input points and IOU of the voxels on the test set. We also evaluate the output shape quality by having the models encode and decode the whole training set, and compute the sample quality metrics with respect to the reference set.
Note that we also tried fine-tuning the encoder of the DPM baseline~\cite{luo2021diffusion}; however, the results did not substantially change. Hence, we kept using standard DPM models.
\textbf{Multimodal Generation.} When performing multimodal generation for the voxel-guided synthesis experiments, we encode the voxel inputs into the shape latent ${\mathbf{z}}_0$ and the latent points ${\mathbf{h}}_0$, and run the forward diffusion process for a few steps to obtain their diffused versions. The diffused shape latent ${\mathbf{z}}_\tau$ is then denoised by the shape latent DDM. The diffused latent points ${\mathbf{h}}_\tau$ are denoised by the latent points DDM, conditioned on the shape latent generated by the shape latent DDM (also see App.~\ref{app:exp_detail_dd}). The number of \textit{diffuse-denoise} steps can be found in Figs.~\ref{fig:voxel_exp},~\ref{fig:rec_noise1}, and~\ref{fig:rec_noise2}.
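The diffuse-denoise mechanism can be sketched for a single latent vector as follows. This is a simplified per-dimension version; `eps_model` stands in for a trained latent DDM's noise-prediction network and is hypothetical:

```python
import math
import random

def diffuse(z0, tau, betas):
    """Forward diffusion q(z_tau | z_0), sampled in closed form."""
    alpha_bar = 1.0
    for t in range(tau):
        alpha_bar *= 1.0 - betas[t]
    z_tau = [math.sqrt(alpha_bar) * z
             + math.sqrt(1.0 - alpha_bar) * random.gauss(0.0, 1.0)
             for z in z0]
    return z_tau, alpha_bar

def denoise(z_tau, tau, betas, eps_model):
    """DDPM ancestral sampling from step tau back to 0."""
    # Precompute the cumulative products alpha_bar_t.
    alpha_bars, ab = [], 1.0
    for b in betas:
        ab *= 1.0 - b
        alpha_bars.append(ab)
    z = list(z_tau)
    for t in reversed(range(tau)):
        beta, ab = betas[t], alpha_bars[t]
        eps = eps_model(z, t)  # predicted noise at step t
        z = [(zi - beta / math.sqrt(1.0 - ab) * ei) / math.sqrt(1.0 - beta)
             for zi, ei in zip(z, eps)]
        if t > 0:  # no noise is added at the final step
            z = [zi + math.sqrt(beta) * random.gauss(0.0, 1.0) for zi in z]
    return z
```

Running `diffuse` for a small `tau` keeps the encoded latent close to the input, so each `denoise` pass produces a different but still input-consistent variation.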
\subsection{Details for Denoising Experiments}
\textbf{Setup.} We perturb the input data using different types of noise and show how well different methods denoise the inputs. The experimental setting for each noise type is listed below: \begin{itemize}
\item \textit{Normal Noise}: for each coordinate of a point, we first sample the standard deviation of the noise uniformly from $[0, 0.25]$; then, we perturb the coordinate with noise drawn from a normal distribution with zero mean and the sampled standard deviation.
\item \textit{Uniform Noise}: for each coordinate of a point, we add noise sampled from the uniform distribution $U(0,0.25)$.
\item \textit{Outlier Noise}: for a shape consisting of $N$ points, we replace 50\% of its points with points drawn uniformly from the 3D bounding box of the original shape. The remaining 50\% of the points are kept at their original location.
\end{itemize}
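These three perturbations can be sketched in plain Python. This is an illustrative reimplementation: the parameter defaults follow the description above, everything else is our assumption:

```python
import random

def normal_noise(points, max_std=0.25):
    # Per coordinate: draw a std uniformly from [0, max_std], then add
    # zero-mean Gaussian noise with that std.
    return [tuple(c + random.gauss(0.0, random.uniform(0.0, max_std)) for c in p)
            for p in points]

def uniform_noise(points, high=0.25):
    # Per coordinate: additive noise drawn from U(0, high).
    return [tuple(c + random.uniform(0.0, high) for c in p) for p in points]

def outlier_noise(points, ratio=0.5):
    # Replace a `ratio` fraction of the points with uniform samples
    # from the shape's axis-aligned bounding box; keep the rest.
    lo = [min(p[d] for p in points) for d in range(3)]
    hi = [max(p[d] for p in points) for d in range(3)]
    idx = set(random.sample(range(len(points)), int(len(points) * ratio)))
    return [tuple(random.uniform(lo[d], hi[d]) for d in range(3)) if i in idx else p
            for i, p in enumerate(points)]
```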
Similar to the encoder fine-tuning for voxel-guided synthesis (App.~\ref{app:exp_details_voxelguided}), when fine-tuning LION's encoder networks for the different denoising experiments, we freeze the latent DDMs and the decoder and only update the weights of the shape latent encoder and the latent points encoder. The maximum number of epochs is set to 4,000 and the training process is stopped early based on the reconstruction loss on the validation set. The other hyperparameters are the same as for the unconditional generation experiments. %
To get different generations from the same noisy inputs, we again diffuse and denoise in the latent space. The operations are the same as for the multimodal generation during voxel-guided synthesis (App.~\ref{app:exp_details_voxelguided}).
\subsection{Details for Fine-tuning SAP on LION} \label{app:exp_detail_sap_finetune}
\textbf{Training the Original SAP.} We first train the SAP model on the clean data with normal noise injected, following the practice in SAP~\cite{peng2021SAP}. We set the standard deviation of the noise to 0.005.
\textbf{Data Preparation.} The training data for SAP fine-tuning is obtained by having LION encode the whole training set, diffuse and denoise in the latent space for some steps, and then decode the point cloud using the decoder. We ablate the number of steps for the diffuse-denoise process in App.~\ref{app:abl_num_step_SAP}. In our experiments, we randomly sample the number of steps from $\{20,30,35,40,50\}$.
The number of points used in this preparation process is 3,000, since the SAP model takes 3,000 points as input (LION is constructed only from PointNet-based and convolutional networks, so it can be run with any number of points). To prevent SAP from overfitting to the sampled points, we generate 4 different samples for each shape, with the same number of diffuse-denoise steps. During fine-tuning, SAP randomly draws one sample as input.
\textbf{Fine-Tuning.} When fine-tuning SAP, we use the same learning rate, batch size, and other hyperparameters as during training of the original SAP model, except that we change the input and reduce the maximum number of epochs to 1,000.
\subsection{Training Times} \label{app:sampling_times}
For single-class LION models, the total training time is $\approx 550$ GPU hours ($\approx 110$ GPU hours for training the backbone VAE; $\approx 440$ GPU hours for training the two latent diffusion models). Sampling time analyses can be found in App.~\ref{sec:ddim_sampling}.
\subsection{Used Codebases} \label{sup:license}
Here, we list all external codebases and datasets we use in our project.
To compare to baselines, we use the following codes:
\begin{itemize}
\item r-GAN, l-GAN~\cite{achlioptas2018learning}: \url{https://github.com/optas/latent_3d_points} (MIT License)
\item PointFlow~\cite{yang2019pointflow}: \url{https://github.com/stevenygd/PointFlow} (MIT License)
\item SoftFlow~\cite{kim2020softflow}: \url{https://github.com/ANLGBOY/SoftFlow} %
\item Set-VAE~\cite{Kim_2021_CVPR}: \url{https://github.com/jw9730/setvae} (MIT License)
\item DPF-NET~\cite{klokov2020discrete}: \url{https://github.com/Regenerator/dpf-nets} %
\item DPM~\cite{luo2021diffusion}: \url{https://github.com/luost26/diffusion-point-cloud} (MIT License)
\item PVD~\cite{zhou2021shape}: \url{https://github.com/alexzhou907/PVD} (MIT License)
\item ShapeGF~\cite{cai2020eccv}: \url{https://github.com/RuojinCai/ShapeGF} (MIT License)
\item SP-GAN~\cite{li2021spgan}: \url{https://github.com/liruihui/sp-gan} (MIT License)
\item PDGN~\cite{hui2020pdgn}: \url{https://github.com/fpthink/PDGN} (MIT License)
\item IM-GAN~\cite{Chen_2019_CVPR}: \url{https://github.com/czq142857/implicit-decoder} (MIT license) and \url{https://github.com/czq142857/IM-NET-pytorch} (MIT license)
\item GCA~\cite{zhang2021learning}: \url{https://github.com/96lives/gca} (MIT license)
\end{itemize}
We use further codebases in other places:
\begin{itemize}
\item We use the MitSuba renderer for visualizations~\cite{Mitsuba}: \url{https://github.com/mitsuba-renderer/mitsuba2} (License: \url{https://github.com/mitsuba-renderer/mitsuba2/blob/master/LICENSE}), and the code to generate the scene description files for MitSuba~\cite{yang2019pointflow}: \url{https://github.com/zekunhao1995/PointFlowRenderer}.%
\item We rely on SAP~\cite{peng2021SAP} for mesh generation with the code at \url{https://github.com/autonomousvision/shape_as_points} (MIT License).
\item For calculating the evaluation metrics, we use the implementation for CD at {\url{https://github.com/ThibaultGROUEIX/ChamferDistancePytorch}} (MIT License) and for EMD at \url{https://github.com/daerduoCarey/PyTorchEMD}.%
\item We use Text2Mesh~\cite{michel2021text2mesh} for per-sample text-driven texture synthesis: \url{https://github.com/threedle/text2mesh} (MIT License)
\end{itemize}
We also rely on the following datasets:
\begin{itemize}
\item ShapeNet~\cite{shapenet2015}. Its terms of use can be found at \url{https://shapenet.org/terms}.
\item The Cars dataset~\cite{cardataset} from \url{http://ai.stanford.edu/~jkrause/cars/car_dataset.html} with ImageNet License: \url{https://image-net.org/download.php}.
\item The TurboSquid data repository, \url{https://www.turbosquid.com}. We obtained a custom license from TurboSquid to use this data.
\item Redwood 3DScan Dataset~\cite{Choi2016}: {\url{https://github.com/isl-org/redwood-3dscan}} (Public Domain)
\item Pix3D~\cite{pix3d}: \url{https://github.com/xingyuansun/pix3d}. (Creative Commons Attribution 4.0 International License).
\end{itemize}
\subsection{Computational Resources} \label{app:compute}
The total amount of compute used in this research project is roughly 340,000 GPU hours. We used an in-house GPU cluster of V100 NVIDIA GPUs.
\section{Additional Experimental Results} \label{app:extra_exps}
Overview:
\begin{itemize}
\item In App.~\ref{app:abl_std_arch}, we present an \textbf{ablation study} on LION's hierarchical architecture.
\item In App.~\ref{app:abl_backbone}, we present an \textbf{ablation study} on the point cloud processing backbone neural network architecture.
\item In App.~\ref{app:abl_std_latent_points}, we present an \textbf{ablation study} on the extra dimensions of the latent points.
\item In App.~\ref{app:abl_num_step_SAP}, we show an \textbf{ablation study} on the number of diffuse-denoise steps used during SAP fine-tuning.
\item In App.~\ref{app:singleclass}, we provide additional experimental results on \textbf{single-class unconditional generation}. We show MMD and COV metrics, and also incorporate additional baselines in the extended tables. Furthermore, in App.~\ref{app:visual_singleclass} we visualize additional samples from the LION models.
\item In App.~\ref{app:manyclass}, we provide additional experimental results for the \textbf{13-class unconditional generation} LION model.
In App.~\ref{app:visual_manyclass} we show more samples from our many-class LION model. Additionally, in App.~\ref{app:tsne} we analyze LION's shape latent space via a two-dimensional t-SNE projection~\cite{van2008visualizing}.
\item In App.~\ref{app:many_class_55}, we provide additional experimental results for the \textbf{55-class unconditional generation} LION model.
\item In App.~\ref{app:mug_bottle}, we provide additional experimental results for the LION models trained on ShapeNet's \textbf{Mug and Bottle classes}.
\item In App.~\ref{app:animals}, we provide additional experimental results for the LION model trained on 3D \textbf{animal shapes}.
\item In App.~\ref{app:exp_guided}, we provide additional results on \textbf{voxel-guided synthesis and denoising} for the chair and car categories.
\item In App.~\ref{app:exp_autoencode}, we quantify LION's \textbf{autoencoding} performance and compare to various baselines, all of which we outperform.
\item In App.~\ref{sec:ddim_sampling}, we provide additional results on significantly \textbf{accelerated DDIM}-based synthesis in LION~\citep{song2020denoising}.
\item In App.~\ref{app:text2mesh}, we use \textbf{Text2Mesh}~\cite{michel2021text2mesh} to generate textures based on text prompts for synthesized LION samples.
\item In App.~\ref{app:svr_textdriven}, we condition LION on CLIP embeddings of the shapes' rendered images, following CLIP-Forge~\cite{sanghi2021clipforge}. This allows us to perform \textbf{text-driven 3D shape generation} and \textbf{single view 3D reconstruction}.
\item In App.~\ref{app:more_shape_interpolations}, we demonstrate more \textbf{shape interpolations} using the three single-class and also the 13-class LION models and we also show shape interpolations of the relevant PVD~\cite{zhou2021shape} and DPM~\cite{luo2021diffusion} baselines.
\end{itemize}
\subsection{Ablation Studies}\label{app:ablations}
\subsubsection{Ablation Study on LION's Hierarchical Architecture}
\label{app:abl_std_arch}
We perform an ablation experiment on the car category over the different components of LION's architecture. We consider three settings:
\begin{itemize}
\item LION model without shape latents, but still with latent points and a corresponding latent points DDM prior.
\item LION model without latent points, but still with shape latents and a corresponding shape latent DDM.
\item LION model without any latent variables at all, \textit{i.e.}, a DDM operates on the point clouds directly (this is somewhat similar to PVD~\cite{zhou2021shape}).
\end{itemize}
When simply dropping the different architecture components, the model ``loses'' parameters. Hence, a decrease in performance could also simply be due to the smaller model rather than an inferior architecture. Therefore, we also increase the model sizes in the above ablation study (by scaling up the channel dimensions of all networks), such that all models have approximately the same number of parameters as our main LION model that has all components. The results on the car category can be found in Tab.~\ref{tab:ablation_archi}. The results show that the full LION setup with both shape latents and latent points performs best on all metrics, sometimes by a large margin. Furthermore, for the models with no or only one type of latent variables, increasing model size does not compensate for the loss of performance due to the different architectures. This ablation study demonstrates the unique advantage of the hierarchical setup with both shape latent variables and latent points, and two latent DDMs. We believe that the different latent variables complement each other---the shape latent variables model overall global shape, while the latent points capture details. This interpretation is supported by the experiments in which we keep the shape latent fixed and only observe small shape variations due to different local point latent configurations (Sec.~\ref{sec:exp_many_class} and Fig.~\ref{fig:many_class_same_latents}).
\input{sections/table/ablation_maintext}
\subsubsection{Ablation Study on the Backbone Point Cloud Processing Network Architecture}
\label{app:abl_backbone}
We ablate different point cloud processing neural network architectures used for implementing LION's encoder, decoder and the latent points prior. Results are shown in Tab.~\ref{tab:ablation_backbone_encdec} and Tab.~\ref{tab:ablation_backbone_prior}, using the LION model on the car category as in the other ablation studies. We choose three different popular backbones used in the point cloud processing literature: Point-Voxel CNN (PVCNN)~\cite{liu2019pvcnn}, Dynamic Graph CNN (DGCNN)~\cite{dgcnn} and PointTransformer~\cite{zhao2021point}.
For the ablation on the encoder and decoder backbones, we train LION's VAE model (without prior) with different backbones, and compare the reconstruction performance for different backbones. We select the PVCNN as it provides the strongest performance (Tab.~\ref{tab:ablation_backbone_encdec}).
For the ablation on the prior backbone, we first train the VAE model with the PVCNN architecture, as in all of our main experiments, and then train the prior with different backbones and compare the generation performance. Again, PVCNN performs best as network to implement the latent points diffusion model (Tab.~\ref{tab:ablation_backbone_prior}). In conclusion, these experiments support choosing PVCNN as our point cloud neural network backbone architecture for implementing LION.
Note that all ablations were run with similar hyperparameters and the neural networks were generally set up in such a way that different architectures consumed the same GPU memory.
\input{sections/table/ablation_backbone}
\subsubsection{Ablation Study on Extra Dimensions for Latent Points}
\label{app:abl_std_latent_points} %
Next, we ablate the extra dimension $D_{\mathbf{h}}$ for the latent points in Tab.~\ref{tab:ablation_num_dim}, again using LION models on the car category. We see that $D_{\mathbf{h}} = 1$ provides the overall best performance. With a relatively large number of extra dimensions, the 1-NNA scores generally degrade. We use $D_{\mathbf{h}} = 1$ for all other experiments.
\input{sections/table/ablation}
\subsubsection{Ablation Study on SAP Fine-Tuning} %
\label{app:abl_num_step_SAP}
After applying SAP to extract meshes from the generated point clouds, it is possible to again sample points from the meshed surface and evaluate the points' quality with the generation metrics that we used for unconditional generation. We call this process \textit{resampling}.
See Tab.~\ref{tab:gen_sap_ablate} for an ablation over the results of \textit{resampling} from SAP with or without fine-tuning. It also contains the ablation over different numbers of diffuse-denoise steps used to generate the training data for the SAP fine-tuning. Without fine-tuning, the reconstructed mesh has slightly lower quality according to 1-NNA, presumably since the noise within the generated points is different from the noise which the SAP model is trained on. For the ``mixed'' number of steps entry in the table, SAP randomly chooses one number of diffuse-denoise steps from the above five values at each iteration when producing the training shapes. This setting tends to give an overall good sample quality in terms of the 1-NNA evaluation metrics. We use this setting in all experiments.
To visually demonstrate the improvement in SAP's mesh reconstruction performance due to fine-tuning, we show the reconstructed meshes before and after fine-tuning in Fig.~\ref{fig:sap:before_after_finetune}. The original SAP is trained with clean point clouds augmented with small Gaussian noise. As a result, SAP can handle small-scale Gaussian noise in the point clouds. However, it is less robust to generated points whose noise differs from the Gaussian noise SAP was trained with.
With our proposed fine-tuning, SAP produces smoother surfaces and becomes more robust to the noise distribution in the point clouds generated by LION.
\input{sections/figTexts/sap_wo_finetune}
\input{sections/table/gen_v2_resample}
\subsection{Single-Class Unconditional Generation} \label{app:singleclass}
For our three single-class LION models, we show the full evaluation metrics for different dataset splits, and different data normalizations, in Tab.~\ref{tab:gen_v1_global_norm_full}, Tab.~\ref{tab:gen_v1_individual_norm_full} and Tab.~\ref{tab:gen_v2_full}. Under all settings and datasets, LION achieves state-of-the-art performance on the 1-NNA metrics, and is competitive on the MMD and COV metrics, which, however, can be unreliable with respect to quality (see discussion in App.~\ref{app:metrics}).
\input{sections/table/gen_v1g_full}
\input{sections/table/gen_v1i_full}
\input{sections/table/gen_v2_full}
\subsubsection{More Visualizations of the Generated Shapes}\label{app:visual_singleclass}
More visualizations of the generated shapes from the LION models trained on the airplane, chair and car classes can be found in Fig.~\ref{fig:more_gen_air}, Fig.~\ref{fig:more_gen_chair} and Fig.~\ref{fig:more_gen_car}. LION generates high-quality samples with high diversity. We visualize both point clouds and meshes generated with the SAP model that is fine-tuned on the VAE-encoded training set.
\subsection{Unconditional Generation of 13 ShapeNet Classes} \label{app:manyclass}
See Tab.~\ref{tab:gen_13cls_full} for the evaluation metrics of the sample quality of LION and other baselines, trained on the 13-class dataset. To evaluate the models, we sub-sample 1,000 shapes from the reference set and sample 1,000 shapes from the models. We can see that LION is better than all baselines under this challenging setting. The results are also consistent with our observations on the single-class models. For baseline comparisons, we picked PVD~\cite{zhou2021shape} and DPM~\cite{luo2021diffusion}, because they are also DDM-based and most relevant. We also picked TreeGAN~\cite{shu20193d}, as it is also trained on diverse data in their original paper, and DPF-Net~\cite{klokov2020discrete}, as it represents a modern competitive flow-based method that we could train relatively quickly. We did not run all other baselines that we ran for the single-class models due to limited compute resources.
\input{sections/table/gen_v2_all_class}
\subsubsection{More Visualizations of the Generated Shapes} \label{app:visual_manyclass}
See Fig.~\ref{fig:more_gen_all} for more visualizations of the generated shapes from LION trained on the 13-class data. We visualize both point clouds and meshes generated with the SAP model that is fine-tuned on the VAE-encoded training set. LION is again able to generate diverse and high-quality shapes even when trained in the challenging 13-class setting. We also show in Fig.~\ref{fig:more_on_free_global} additional samples from the 13-class LION model with fixed shape latent variables, where only the latent points are sampled, similar to the experiments in Sec.~\ref{sec:exp_many_class} and Fig.~\ref{fig:many_class_same_latents}. We again see that the shape latent variables seem to capture overall shape, while the latent points are responsible for generating different details.
\subsubsection{Shape Latent Space Visualization} \label{app:tsne}
We project the shape latent variables learned by LION's 13-class VAE into the 2D plane and create a t-SNE~\cite{van2008visualizing} plot in Fig.~\ref{fig:tsne_global_latnet}. Many categories are clearly separated, such as the rifle, car, watercraft, airplane, telephone, lamp, and display classes. Categories that are hard to distinguish, such as bench and table, mix somewhat, which is also reasonable. This indicates that LION's shape latent learns to represent category information, presumably capturing overall shape, as also supported by our experiments in Sec.~\ref{sec:exp_many_class} and Fig.~\ref{fig:many_class_same_latents}. Potentially, this also means that the representation learned by the shape latents could be leveraged for downstream tasks, such as shape classification, similar to~\citet{luo2021diffusion}.
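Such a projection can be produced with standard tools, e.g., scikit-learn. Below is a minimal sketch, assuming PCA pre-reduction before t-SNE; the function name, dimensionalities, and perplexity value are illustrative, not necessarily our exact settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project_latents(latents, n_pca=80, perplexity=30.0, seed=0):
    """Reduce shape latents with PCA first (for speed and noise
    suppression), then embed the result in 2D with t-SNE."""
    z = PCA(n_components=n_pca, random_state=seed).fit_transform(np.asarray(latents))
    return TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(z)
```

The resulting 2D coordinates can be scatter-plotted, colored by the shapes' class labels.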
\subsection{Unconditional Generation of all 55 ShapeNet Classes}\label{app:many_class_55}
We train a LION model \textit{jointly without any class conditioning} on all 55\footnote{the 55 classes are \textit{airplane, bag, basket, bathtub, bed, bench, birdhouse, bookshelf, bottle, bowl, bus, cabinet, camera, can, cap, car, cellphone, chair, clock, dishwasher, earphone, faucet, file, guitar, helmet, jar, keyboard, knife, lamp, laptop, mailbox, microphone, microwave, monitor, motorcycle, mug, piano, pillow, pistol, pot, printer, remote control, rifle, rocket, skateboard, sofa, speaker, stove, table, telephone, tin can, tower, train, vessel, washer}} different categories from ShapeNet. The total number of training samples is 35,708.
Training a single model without conditioning over such a large number of categories is challenging, as the data distribution is highly complex and multimodal. Note that we purposely did not use class-conditioning, in order to explore LION's scalability to such complex and multimodal datasets. Furthermore, the number of training samples across different categories is imbalanced in this setting: 15 categories have fewer than 100 training samples and 5 categories have more than 2,000 training samples. We adopt the same model hyperparameters as for the single-class LION models here, without any tuning.
We show LION's generated samples in Fig.~\ref{fig:gen_55_cat}: LION synthesizes high-quality and diverse shapes. It can even generate samples from the \textit{cap} class, which contributes only 39 training samples, indicating that LION has excellent mode coverage that even includes very rare classes. Note that we did not train an SAP model on the 55-class data; hence, we only show the generated point clouds in Fig.~\ref{fig:gen_55_cat}.
This experiment is run primarily as a qualitative scalability test of LION; due to limited compute resources, we did not train baselines here. Furthermore, to the best of our knowledge, no previous 3D shape generative models have demonstrated satisfactory generation performance for such diverse and multimodal 3D data without relying on conditioning information. That said, to make sure future works can compare to LION, we report the generation performance over 1,000 samples in Tab.~\ref{tab:gen_55}. We would like to emphasize that hyperparameter tuning and using larger LION models with more parameters would almost certainly improve the results even further; we simply used the single-class training settings out of the box.
\input{sections/table/gen_55}
\begin{figure}\centering\includegraphics[width=0.8\linewidth]{sections/fig/gen_mesh/all55_mitsuba_full.jpg}\caption{Generated shapes from our LION model that was trained jointly on all 55 ShapeNet classes without class-conditioning.} \label{fig:gen_55_cat}\end{figure}
\subsection{Unconditional Generation of ShapeNet's Mug and Bottle Classes} \label{app:mug_bottle}
Next, we explore whether LION can also be trained successfully on very small datasets. To this end, we train LION on the Mug and Bottle classes in ShapeNet. The number of training samples is 149 and 340, respectively, which is much smaller than for common classes like chair, car and airplane. All hyperparameters are the same as for the models trained on single classes. We show generated shapes in Fig.~\ref{fig:gen_mug} and Fig.~\ref{fig:gen_bottle} (to extract meshes from the generated point clouds, for convenience we use the SAP model that was trained for the 13-class LION experiment). We find that LION is able to generate correct mugs and bottles even in this very small training data regime. We report the performance of the generated samples in Tab.~\ref{tab:gen_mug_bottle}, such that future work can compare to LION on this task.
\begin{figure}\centering\includegraphics[width=0.9\linewidth]{sections/fig/gen_mesh/mug_mitsuba_full.jpg}\caption{Generated shapes from the LION model trained on ShapeNet's Mug category.} \label{fig:gen_mug}\end{figure}
\begin{figure}\centering\includegraphics[width=0.9\linewidth]{sections/fig/gen_mesh/bottle_mitsuba_full.jpg}\caption{Generated shapes from the LION model trained on ShapeNet's Bottle category.} \label{fig:gen_bottle}\end{figure}
\input{sections/table/gen_mug_bottle}
\subsection{Unconditional Generation of Animal Shapes}\label{app:animals}
Furthermore, we also train LION on 553 animal assets from the TurboSquid data repository.\footnote{\url{https://www.turbosquid.com}; we obtained a custom license from TurboSquid to use this data.} The animal data includes shapes of cats, bears, goats, etc. All hyperparameters are again the same as for the models trained on single classes. See Fig.~\ref{fig:gen_animal} for visualizations of the generated shapes from LION trained on the animal data. We visualize both point clouds and meshes. For simplicity, the meshes are again generated with the SAP model that was trained on the ShapeNet 13-class data. LION is able to generate diverse and high-quality shapes even in this challenging low-data setting.
\begin{figure} \centering\includegraphics[width=\linewidth]{sections/fig/gen_mesh/animal_full.jpg} \caption{Generated shapes from the animal class LION model.} \label{fig:gen_animal}\end{figure}
\subsection{Voxel-guided Synthesis and Denoising}\label{app:exp_guided}
We additionally report results for the voxel-guided synthesis and denoising experiments on the chair and car categories. In Fig.~\ref{fig:rec_noise1_chair} and Fig.~\ref{fig:rec_noise1_car}, we show the reconstruction metrics for different types of input: voxelized input, input with outlier noise, input with uniform noise, and input with normal noise. LION outperforms the two baselines (PVD and DPM), especially for the voxelized input and the input with outliers, similar to the results presented in the main paper on the airplane class (Sec.~\ref{sec:exps_guided}). In Fig.~\ref{fig:rec_noise2_chair} and Fig.~\ref{fig:rec_noise2_car}, we show the output quality metrics and the voxel IOU for voxel-guided synthesis on the chair and car categories, respectively. LION achieves high output quality while obeying the voxel input constraint well.
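A simple way to compute such a voxel IOU is to voxelize both the guiding input and the generated point cloud at the same resolution and compare the occupied-cell sets. The sketch below is our assumption of how such a metric can be implemented, not necessarily the exact evaluation code:

```python
def voxel_iou(points_a, points_b, voxel_size=0.6):
    """Intersection-over-union of the occupancy grids of two point
    clouds: voxelize both at the same resolution and compare the sets
    of occupied cells."""
    va = {tuple(int(c // voxel_size) for c in p) for p in points_a}
    vb = {tuple(int(c // voxel_size) for c in p) for p in points_b}
    return len(va & vb) / len(va | vb)
```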
\input{sections/figTexts/more_plot_rec}
\textbf{More on Multimodal Visualization.}
In Fig.~\ref{fig:more_voxel_exp_3cls}, we show visualizations of multimodal voxel-guided synthesis on different classes. As discussed, we generate various plausible shapes using different numbers of diffuse-denoise steps. We show two different plausible shapes (with the corresponding latent points, and reconstructed meshes) given the same input at each row under different settings. LION is able to capture the structure indicated by the voxel grids: the shapes obey the voxel grid constraints. For example, the tail of the airplane, the legs of the chair, and the back part of the car are consistent with the input. Meanwhile, LION generates diverse and reasonable details in the output shapes.
See Fig.~\ref{fig:noise_exp} for denoising experiments, with comparisons to other baselines. In Fig.~\ref{fig:more_denoise_3cls}, we also show visualizations for different classes. LION handles different types of input noise and generates reasonable and diverse details given the same input. See the car examples in the first column for the normal noise, uniform noise and outlier noise.
Notice that we applied the SAP model here only for visualizing the meshed output shapes. The SAP model is not fine-tuned on voxel or noisy input data. This is potentially one reason why some reconstructed meshes do not have high quality.
\input{sections/figTexts/noise_exp}
\input{sections/figTexts/more_cond_gen}
\input{sections/figTexts/more_denoising}
\subsubsection{LION vs. Deep Marching Tetrahedra on Voxel-guided Synthesis} \label{app:dmtet_comp}
We additionally compare LION's performance on voxel-guided synthesis to Deep Marching Tetrahedra~\cite{shen2021dmtet} (DMTet) on the airplane category (see Tab.~\ref{tab:exp_voxel_dmt}). We train and evaluate DMTet with the same data as was used in our voxel-guided shape synthesis experiments (see Sec.~\ref{sec:exps_guided} and Apps.~\ref{app:exp_detail_guided} and~\ref{app:exp_details_voxelguided}). To compute the evaluation metrics on the DMTet output, we randomly sample points on DMTet's output meshes. LION achieves reconstruction results of similar or slightly better quality than DMTet. However, note that DMTet was specifically designed for such reconstruction tasks and is not a general generative model that could synthesize novel shapes from scratch without any guidance signal, unlike LION, which is a highly versatile general 3D generative model. Furthermore, as we demonstrated in the main paper, LION can generate multiple plausible de-voxelized shapes, while DMTet is fully deterministic and can only generate a single reconstruction.
\input{sections/table/voxel_guided_exp}
\subsection{Autoencoding} \label{app:exp_autoencode}
We report the auto-encoding performance of LION and other baselines in Tab.~\ref{tab:autoencode} for single-class models, computing the reconstruction performance of LION's VAE component. Additional results for the LION model trained on many classes can be found in Tab.~\ref{tab:autoencode_13}. LION achieves much better reconstruction performance than all other baselines: the hierarchical latent space is expressive enough for the model to perform high-quality reconstruction. At the same time, as we have shown above, LION also achieves state-of-the-art generation quality. Moreover, as shown in App.~\ref{app:tsne}, its shape latent space is also still semantically meaningful.
\input{sections/table/auto_encode}
\input{sections/figTexts/generation_mesh_full_single_class}
\input{sections/figTexts/more_freeze_global}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{sections/fig/tsne/mean_pca80_p30_N5000_s5_niter2000_seed0.pdf}
\caption{Shape latent t-SNE plot for the many-class LION model.}
\label{fig:tsne_global_latnet}
\end{figure}
\subsection{Synthesis Time and DDIM Sampling} \label{sec:ddim_sampling}
Our main results in the paper are all generated using standard 1,000-step DDPM-based ancestral sampling (see Sec.~\ref{sec:background}). Generating a point cloud sample (with 2,048 points) from LION takes $\approx 27.12$ seconds, where $\approx 4.04$ seconds are used in the shape latent diffusion model and $\approx 23.05$ seconds in the latent points diffusion model. Optionally running SAP for mesh reconstruction requires an additional $\approx 2.57$ seconds.
A simple and popular way to accelerate sampling in diffusion models is the \textit{Denoising Diffusion Implicit Models} (DDIM) framework~\citep{song2020denoising}. We show DDIM-sampled shapes in Fig.~\ref{fig:ddim_samples} and the generation performance in Tab.~\ref{tab:ddim_sample}. For all DDIM sampling, we use $\eta=0.5$ as stochasticity hyperparameter and the quadratic time schedule, as proposed in DDIM~\citep{song2020denoising}. We also tried deterministic DDIM sampling, but it performed worse (for 50-step sampling). We find that we can produce good-looking shapes in under one second with only 25 synthesis steps. Performance significantly degrades when using $\leq 10$ steps.
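For reference, a single DDIM update and a quadratically spaced timestep subset can be sketched as follows. This is a simplified per-dimension version; the exact spacing formula is one common variant and an assumption on our side:

```python
import math
import random

def quadratic_timesteps(num_steps, T=1000):
    """Quadratically spaced subset of the T training timesteps
    (one simple variant: denser near t=0, sparser near t=T)."""
    c = (T - 1) / (num_steps - 1) ** 2
    return [round(c * i ** 2) for i in range(num_steps)]

def ddim_step(x, eps, ab_t, ab_prev, eta=0.5):
    """One DDIM update from alpha_bar_t to alpha_bar_prev, given the
    predicted noise eps. eta = 0 is fully deterministic; larger eta
    injects more stochasticity."""
    # Predict x_0 from the current sample and the predicted noise.
    x0 = [(xi - math.sqrt(1.0 - ab_t) * ei) / math.sqrt(ab_t)
          for xi, ei in zip(x, eps)]
    sigma = eta * math.sqrt((1.0 - ab_prev) / (1.0 - ab_t)) \
                * math.sqrt(1.0 - ab_t / ab_prev)
    coeff = math.sqrt(1.0 - ab_prev - sigma ** 2)
    return [math.sqrt(ab_prev) * x0i + coeff * ei + sigma * random.gauss(0.0, 1.0)
            for x0i, ei in zip(x0, eps)]
```

Iterating `ddim_step` over the 25 timesteps returned by `quadratic_timesteps(25)` (from the largest to the smallest) replaces the full 1,000-step ancestral sampling loop.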
\input{sections/figTexts/ddim_sample}
\input{sections/table/ddim}
\subsection{Per-sample Text-driven Texture Synthesis}\label{app:text2mesh}
To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh\footnote{\url{https://github.com/threedle/text2mesh}}~\cite{michel2021text2mesh} on some generated meshes from LION to additionally synthesize textures in a text-driven manner, leveraging CLIP~\cite{radford2021clip}. Optionally, Text2Mesh can also locally refine the mesh and displace vertices for enhanced visual effects. See Fig.~\ref{fig:text2mesh:share_mat} for results where we show different objects made of \textit{snow} and \textit{potato chips}, respectively. In Fig.~\ref{fig:text2mesh:share_obj}, we apply different text prompts on the same generated airplane. We show more diverse results on other categories in Fig.~\ref{fig:text2mesh:more_obj}. Note that this is only possible because of our SAP-based mesh reconstruction.
\input{sections/figTexts/text2mesh}
\subsection{Single View Reconstruction and Text-driven Shape Synthesis} \label{app:svr_textdriven}
Although our main goal in this work was to develop a strong generative model of 3D shapes, here we qualitatively show how to extend LION to also allow for single view reconstruction (SVR) from RGB data. We rendered 2D images from the 3D ShapeNet shapes, extracted the images' CLIP~\cite{radford2021clip} image embeddings, and trained LION's latent diffusion models while conditioning on the shapes' CLIP image embeddings. At test time, we then take a single view 2D image, extract its CLIP image embedding, and generate corresponding 3D shapes, thereby effectively performing SVR. We show SVR results from real RGB data in Fig.~\ref{fig:svr}, Fig.~\ref{fig:svr_car} and Fig.~\ref{fig:svr_car_single}.
The RGB images of the chairs are from Pix3D\footnote{We downloaded the data from \url{https://github.com/xingyuansun/pix3d}. The Pix3D dataset is licensed under a Creative Commons Attribution 4.0 International License.}~\cite{pix3d} and Redwood 3DScan dataset\footnote{We downloaded the Redwood 3DScan dataset (public domain) from \url{https://github.com/isl-org/redwood-3dscan}.}~\cite{Choi2016}, respectively.
The RGB images of the cars are from the Cars dataset\footnote{We downloaded the Cars dataset from \url{http://ai.stanford.edu/~jkrause/cars/car_dataset.html}. The Cars dataset is licensed under the ImageNet License: \url{https://image-net.org/download.php}}~\cite{cardataset}.
For each input image, LION is able to generate different plausible shapes, demonstrating LION's ability to perform multimodal generation. Qualitatively, our results appear to be of similar quality as the results of PVD~\cite{zhou2021shape} for this task, and at least as good as, or better than, the results of AutoSDF~\cite{autosdf2022}. Note that this approach only requires RGB images. In contrast, PVD requires RGB-D images, including depth. Hence, our approach can be considered more flexible.
Using CLIP's text encoder, our method additionally allows for text-guided generation, as demonstrated in Fig.~\ref{fig:clipforge} and Fig.~\ref{fig:clipforge_car}. Overall, this approach is inspired by CLIP-Forge~\cite{sanghi2021clipforge}. Note that this is a simple qualitative demonstration of LION's extensibility. We did not perform any hyperparameter tuning here and believe that these results could be improved with more careful tuning and training.
\input{sections/figTexts/clip}
\subsection{More Shape Interpolations}\label{app:more_shape_interpolations}
We show more shape interpolation results for single-class LION models in Figs.~\ref{fig:more_interp1}, \ref{fig:more_interp1_chair}, and \ref{fig:more_interp1_car}, and for the many-class LION model in Figs.~\ref{fig:more_interp2} and \ref{fig:more_interp3}. We can see that LION is able to smoothly interpolate two shapes from different classes. For example, when interpolating a chair and a table, it gradually widens the chair and removes its back. When interpolating an airplane and a chair, it first makes the wings more chair-like and reduces the size of the rest of the body. The shapes in the middle of the interpolation provide a smooth and reasonable transition.
\input{sections/figTexts/more_interp}
\subsubsection{Shape Interpolation with PVD and DPM}\label{app:interp_pvd_dpm}
To be able to better judge the performance of LION's shape interpolation results, we now also show shape interpolations with PVD~\cite{zhou2021shape} and DPM~\cite{luo2021diffusion} in Fig.~\ref{fig:more_interp_pvd} and Fig.~\ref{fig:more_interp_dpm}, respectively. We apply the \textit{spherical interpolation} (see Sec.~\ref{app:exp_detail_interpol}) on the noise inputs for both PVD and DPM. DPM leverages a Normalizing Flow, which already offers deterministic generation given the noise inputs of the Flow's normal prior. For PVD, just like for LION, we again use the diffusion model's ODE formulation to obtain deterministic generation paths. In other words, to avoid confusion, in both cases we are interpolating in the normal prior distribution, just like for LION.
Although PVD is also able to interpolate two shapes, the transition from the source shapes to the target shapes appears less smooth than for LION; see, for example, the chair interpolation results of PVD. Furthermore, DPM's generated shape interpolations appear fairly noisy. When interpolating very different shapes using the 13-class models, both PVD and DPM essentially break down and no longer produce sensible outputs. All shapes along the interpolation paths appear noisy.
In contrast, LION generally produces coherent interpolations, even when using the multimodal model that was trained on 13 ShapeNet classes (see Figs.~\ref{fig:more_interp1}, \ref{fig:more_interp1_chair}, \ref{fig:more_interp1_car} and \ref{fig:more_interp2} for LION interpolations for reference).
\input{sections/figTexts/interpo_baselines}
\section{Hierarchical Latent Point Diffusion Models}
\vspace{-2mm}
We first formally introduce LION, then discuss various applications and extensions in Sec.~\ref{subsec:extensions}, and finally recapitulate its unique advantages in Sec.~\ref{subsec:advantages}. See Fig.~\ref{fig:pipeline} for a visualization of LION.
We model point clouds ${\mathbf{x}}\in\mathbb{R}^{3\times N}$, consisting of $N$ points with $xyz$-coordinates in $\mathbb{R}^{3}$. LION is set up as a hierarchical VAE with DDMs in latent space. It uses a vector-valued global shape latent ${\mathbf{z}}_0\in\mathbb{R}^{D_{\mathbf{z}}}$ and a point cloud-structured latent ${\mathbf{h}}_0\in\mathbb{R}^{(3+D_{\mathbf{h}})\times N}$. Specifically, ${\mathbf{h}}_0$ is a \textit{latent point cloud} consisting of $N$ points with $xyz$-coordinates in $\mathbb{R}^{3}$, where each latent point additionally carries $D_{\mathbf{h}}$ latent features. LION is trained in two stages---first as a regular VAE with standard Gaussian priors, and then by fitting the latent DDMs to the latent encodings.
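To make the latent structure concrete, the following snippet lays out the tensor shapes involved; the specific values of $N$, $D_{\mathbf{z}}$, and $D_{\mathbf{h}}$ are illustrative placeholders, not necessarily the hyperparameters used in our experiments.

```python
import numpy as np

# Illustrative sizes only; actual hyperparameters may differ.
N, D_z, D_h = 2048, 128, 1

x = np.zeros((3, N))         # point cloud: N points with xyz coordinates
z0 = np.zeros(D_z)           # vector-valued global shape latent
h0 = np.zeros((3 + D_h, N))  # latent points: xyz plus D_h features per point
```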
\textbf{First Stage Training.} Initially, LION is trained by maximizing a modified variational lower bound on the data log-likelihood (ELBO) with respect to the encoder and decoder parameters ${\bm{\phi}}$ and ${\bm{\xi}}$~\cite{kingma2014vae,rezende2014stochastic}:
\begin{equation}\label{eq:lion_elbo}
\begin{split}
\mathcal{L}_\textrm{ELBO}({\bm{\phi}},{\bm{\xi}})&=\mathbb{E}_{p({\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0)}\bigl[\log p_{\bm{\xi}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0) \\ &- \lambda_{\mathbf{z}} D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}})|p({\mathbf{z}}_0)\right) - \lambda_{\mathbf{h}} D_\textrm{KL}\left(q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0)|p({\mathbf{h}}_0)\right)\bigr].
\end{split}
\end{equation}
Here, the global shape latent ${\mathbf{z}}_0$ is sampled from the posterior distribution $q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}})$, which is parametrized by factorial Gaussians whose means and variances are predicted by an encoder network. The point cloud latent ${\mathbf{h}}_0$ is sampled from a similarly parametrized posterior $q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0)$, while also conditioning on ${\mathbf{z}}_0$ (${\bm{\phi}}$ denotes the parameters of both encoders). Furthermore, $p_{\bm{\xi}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0)$ denotes the decoder, parametrized as a factorial Laplace distribution with predicted means and fixed unit scale parameter (corresponding to an $L_1$ reconstruction loss). $\lambda_{\mathbf{z}}$ and $\lambda_{\mathbf{h}}$ are hyperparameters balancing reconstruction accuracy and Kullback-Leibler regularization (note that only for $\lambda_{\mathbf{z}}=\lambda_{\mathbf{h}}=1$ do we optimize a rigorous ELBO). The priors $p({\mathbf{z}}_0)$ and $p({\mathbf{h}}_0)$ are %
${\mathcal{N}}(\bm{0}, {\bm{I}})$. Also see Fig.~\ref{fig:pipeline} again.
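The first-stage objective can be sketched in a few lines of NumPy, with the KL divergences evaluated in closed form for factorial Gaussian posteriors. This is a simplified single-sample illustration of (the negative of) Eq.~(\ref{eq:lion_elbo}), written as a loss to be minimized; it is not our training code, and all names are ours.

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0)

def first_stage_loss(x, x_recon, mu_z, logvar_z, mu_h, logvar_h,
                     lam_z=1.0, lam_h=1.0):
    # Negative modified ELBO: L1 reconstruction (unit-scale Laplace decoder,
    # up to additive constants) plus the two weighted KL terms.
    recon = np.sum(np.abs(x - x_recon))
    return (recon + lam_z * kl_to_standard_normal(mu_z, logvar_z)
                  + lam_h * kl_to_standard_normal(mu_h, logvar_h))
```

Only for `lam_z = lam_h = 1.0` does this correspond to a rigorous (negative) ELBO; other weightings trade off reconstruction accuracy against latent regularization.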
\input{sections/figTexts/generation_mesh_and_pts_all_cats}
\textbf{Second Stage Training.}
In principle, we could use the VAE's priors to sample encodings and generate new shapes. However, the simple Gaussian priors will not accurately match the encoding distribution from the training data and therefore produce poor samples (\textit{prior hole problem}~\cite{vahdat2021score,tomczak2018VampPrior,takahashi2019variational,bauer2019resampled,vahdat2020nvae,aneja2020ncpvae,sinha2021d2c,rosca2018distribution,hoffman2016elbo}). This motivates training highly expressive latent DDMs. In particular, in the second stage
we freeze the VAE's encoder and decoder networks and train two latent DDMs
on the encodings ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$ sampled from $q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}})$ and $q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0)$, minimizing score matching (SM) objectives similar to Eq.~(\ref{eq:background_scorematching}):
\begin{align} \label{eq:latent_ddm_1}
\mathcal{L}_{\textrm{SM}^{\mathbf{z}}}({\bm{\theta}}) & = \mathbb{E}_{t\sim U\{1,T\},p({\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}}),{\bm{\epsilon}}\sim{\mathcal{N}}(\bm{0}, {\bm{I}})}||{\bm{\epsilon}}-{\bm{\epsilon}}_{\bm{\theta}}({\mathbf{z}}_t,t)||_2^2, \\ \label{eq:latent_ddm_2}
\mathcal{L}_{\textrm{SM}^{\mathbf{h}}}({\bm{\psi}}) & = \mathbb{E}_{t\sim U\{1,T\},p({\mathbf{x}}),q_{\bm{\phi}}({\mathbf{z}}_0|{\mathbf{x}}),q_{\bm{\phi}}({\mathbf{h}}_0|{\mathbf{x}},{\mathbf{z}}_0),{\bm{\epsilon}}\sim{\mathcal{N}}(\bm{0}, {\bm{I}})}||{\bm{\epsilon}}-{\bm{\epsilon}}_{\bm{\psi}}({\mathbf{h}}_t,{\mathbf{z}}_0,t)||_2^2,
\end{align}
where ${\mathbf{z}}_t=\alpha_t {\mathbf{z}}_0 + \sigma_t {\bm{\epsilon}}$ and ${\mathbf{h}}_t=\alpha_t {\mathbf{h}}_0 + \sigma_t {\bm{\epsilon}}$ are the diffused latent encodings. Furthermore, ${\bm{\theta}}$ denotes the parameters of the global shape latent DDM ${\bm{\epsilon}}_{\bm{\theta}}({\mathbf{z}}_t,t)$, and ${\bm{\psi}}$ refers to the parameters of the conditional DDM ${\bm{\epsilon}}_{\bm{\psi}}({\mathbf{h}}_t,{\mathbf{z}}_0,t)$ trained over the latent point cloud (note the conditioning on ${\mathbf{z}}_0$).
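A single-sample Monte Carlo estimate of these two objectives can be sketched as follows. Here `eps_theta` and `eps_psi` are stand-ins for the two score networks, `alpha` and `sigma` are arrays holding the diffusion schedule, and the sketch is illustrative rather than our actual training loop.

```python
import numpy as np

def latent_sm_losses(z0, h0, eps_theta, eps_psi, alpha, sigma, T, rng):
    # One Monte Carlo estimate of the two score-matching losses: draw a
    # step t, diffuse the (frozen-VAE) encodings, and regress the noise.
    t = int(rng.integers(1, T + 1))
    nz = rng.standard_normal(z0.shape)
    nh = rng.standard_normal(h0.shape)
    z_t = alpha[t] * z0 + sigma[t] * nz   # diffused shape latent
    h_t = alpha[t] * h0 + sigma[t] * nh   # diffused latent points
    loss_z = np.sum((nz - eps_theta(z_t, t)) ** 2)
    loss_h = np.sum((nh - eps_psi(h_t, z0, t)) ** 2)  # conditioned on z0
    return loss_z, loss_h
```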
\textbf{Generation.} With the latent DDMs, we can formally define a hierarchical generative model $p_{{\bm{\xi}},{\bm{\psi}},{\bm{\theta}}}({\mathbf{x}},{\mathbf{h}}_0,{\mathbf{z}}_0)=p_{{\bm{\xi}}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0)p_{{\bm{\psi}}}({\mathbf{h}}_0|{\mathbf{z}}_0)p_{{\bm{\theta}}}({\mathbf{z}}_0)$, where $p_{{\bm{\theta}}}({\mathbf{z}}_0)$ denotes the distribution of the global shape latent DDM, $p_{{\bm{\psi}}}({\mathbf{h}}_0|{\mathbf{z}}_0)$ refers to the DDM modeling the point cloud-structured latents, and $p_{{\bm{\xi}}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0)$ is LION's decoder. We can hierarchically sample the latent DDMs following Eq.~(\ref{eq:ddpm_sampling}) and then translate the latent points back to the original point cloud space with the decoder.
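The hierarchical sampling order can be summarized in a short structural sketch, with all three components abstracted as callables (this illustrates only the factorization $p_{{\bm{\xi}}}({\mathbf{x}}|{\mathbf{h}}_0,{\mathbf{z}}_0)p_{{\bm{\psi}}}({\mathbf{h}}_0|{\mathbf{z}}_0)p_{{\bm{\theta}}}({\mathbf{z}}_0)$, not any implementation detail):

```python
def generate_shape(sample_z_prior, sample_h_given_z, decode):
    # Ancestral sampling through LION's hierarchy; all three callables
    # are stand-ins for the trained DDMs and the decoder.
    z0 = sample_z_prior()        # z_0 ~ p_theta(z_0)
    h0 = sample_h_given_z(z0)    # h_0 ~ p_psi(h_0 | z_0)
    return decode(h0, z0)        # x   ~ p_xi(x | h_0, z_0)
```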
\textbf{Network Architectures and DDM Parametrization.} Let us briefly summarize key implementation choices. The encoder networks, as well as the decoder and the latent point DDM, operating on point clouds ${\mathbf{x}}$, are all implemented based on Point-Voxel CNNs (PVCNNs)~\cite{liu2019pvcnn}, following \citet{zhou2021shape}. PVCNNs efficiently combine the point-based processing of PointNets~\cite{qi2017pointnet,qi2017pointnetplusplus} with the strong spatial inductive bias of convolutions. The DDM modeling the global shape latent uses a ResNet~\cite{he2016deeprl} structure with fully-connected layers (implemented as $1{\times}1$-convolutions). All conditionings on the global shape latent are implemented via adaptive Group Normalization~\cite{wu2018groupnorm} in the PVCNN layers. Furthermore, following \citet{vahdat2021score} we use a \textit{mixed score parametrization} in both latent DDMs. This means that the score models are parametrized to predict a residual correction to an analytic standard Gaussian score. This is beneficial since the latent encodings are regularized towards a standard Gaussian distribution during the first training stage (see App.~\ref{app:impl} for all details).
\subsection{Applications and Extensions} \label{subsec:extensions}
Here, we discuss how LION can be used and extended for different relevant applications.
\textbf{Multimodal Generation.}
We can synthesize different variations of a given shape, enabling multimodal generation in a controlled manner: Given a shape, i.e., its point cloud ${\mathbf{x}}$, we encode it into latent space. Then, we diffuse its encodings ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$ for a small number of steps $\tau<T$ towards intermediate ${\mathbf{z}}_\tau$ and ${\mathbf{h}}_\tau$ along the diffusion process such that only local details are destroyed. Running the reverse generation process from this intermediate $\tau$, starting at ${\mathbf{z}}_\tau$ and ${\mathbf{h}}_\tau$, leads to variations of the original shape with different details (see, for instance, Fig.~\ref{fig:gen_v2_mesh}). We refer to this procedure as \textit{diffuse-denoise} (details in App.~\ref{app:exp_detail_dd}). Similar techniques have been used for image editing~\cite{meng2022sdedit}.
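The \textit{diffuse-denoise} procedure can be sketched as follows; names and signatures are ours, and `denoise_z` / `denoise_h` stand in for running the learned reverse chains from the intermediate step $\tau$.

```python
import numpy as np

def diffuse_denoise(z0, h0, tau, alpha, sigma, denoise_z, denoise_h, rng):
    # Diffuse the encodings for tau < T steps, destroying only local
    # detail, then run the learned reverse chains from step tau.
    z_tau = alpha[tau] * z0 + sigma[tau] * rng.standard_normal(z0.shape)
    h_tau = alpha[tau] * h0 + sigma[tau] * rng.standard_normal(h0.shape)
    z_new = denoise_z(z_tau, tau)          # variation of the shape latent
    h_new = denoise_h(h_tau, z_new, tau)   # conditioned on the new z
    return z_new, h_new
```

The smaller $\tau$ is chosen, the more the output stays faithful to the input shape; larger $\tau$ yields more diverse variations.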
\input{sections/figTexts/pipe_voxel_latent_pts_mesh}
\textbf{Encoder Fine-tuning for Voxel-Conditioned Synthesis and Denoising.} In practice, an artist using a 3D generative model may have a rough idea of the desired shape. For instance, they may be able to quickly construct a coarse voxelized shape, to which the generative model then adds realistic details.
In LION, we can support such applications: using a similar ELBO as in Eq.~(\ref{eq:lion_elbo}), but with a frozen decoder, we can fine-tune LION's encoder networks to take voxelized shapes as input (we simply place points at the voxelized shape's surface) and map them to the corresponding latent encodings ${\mathbf{z}}_0$ and ${\mathbf{h}}_0$ that reconstruct the original non-voxelized point cloud. Now, a user can utilize the fine-tuned encoders to encode voxelized shapes and generate plausible detailed shapes. Importantly, this can be naturally combined with the \textit{diffuse-denoise} procedure to clean up imperfect encodings and to generate different possible detailed shapes (see Fig.~\ref{fig:voxel_to_mesh}).
Furthermore, this approach is general. Instead of voxel-conditioned synthesis, we can also fine-tune the encoder networks on noisy shapes to perform multimodal shape denoising, also potentially combined with \textit{diffuse-denoise}. %
LION supports these applications easily without re-training the latent DDMs due to its VAE framework with additional encoders and decoders, in contrast to previous works that train DDMs on point clouds directly~\cite{zhou2021shape,luo2021diffusion}. See App.~\ref{app:exp_detail_guided} for technical details.
\textbf{Shape Interpolation.}
LION also enables shape interpolation: We can encode different point clouds into LION's hierarchical latent space and use the \textit{probability flow ODE} (see App.~\ref{app:background_extended}) to further encode into the latent DDMs' Gaussian priors, where we can safely perform spherical interpolation and expect valid shapes along the interpolation path. We can use the intermediate encodings to generate the interpolated shapes
(see Fig.~\ref{fig:interpolation_2}; details in App.~\ref{app:exp_detail_interpol}).
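The spherical interpolation in the Gaussian priors is a standard slerp, sketched below; the fallback to linear interpolation for nearly parallel inputs is a numerical safeguard of this sketch, not a detail of our method.

```python
import numpy as np

def slerp(u, v, t):
    # Spherical interpolation between two prior samples u and v; unlike
    # linear interpolation, it preserves norm statistics typical of a
    # Gaussian prior, so intermediate encodings remain plausible.
    un, vn = u / np.linalg.norm(u), v / np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(un, vn), -1.0, 1.0))
    if np.isclose(np.sin(theta), 0.0):
        return (1 - t) * u + t * v   # (anti)parallel: fall back to lerp
    return (np.sin((1 - t) * theta) * u
            + np.sin(t * theta) * v) / np.sin(theta)
```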
\input{sections/figTexts/generation_mesh_with_pts}
\textbf{Surface Reconstruction.}
While point clouds are an ideal 3D representation for DDMs, artists may prefer meshed outputs. Hence, we propose to combine LION with modern geometry reconstruction methods (see Figs.~\ref{fig:gen_v2_mesh}, \ref{fig:voxel_to_mesh} and~\ref{fig:gen_v2_mesh_with_pts}).
We use \textit{Shape As Points} (SAP)~\cite{peng2021SAP},
which is based on differentiable Poisson surface reconstruction and can be trained to extract smooth meshes from noisy point clouds. Moreover, we fine-tune SAP on training data generated by LION's autoencoder to better adjust SAP to the noise distribution in point clouds generated by LION. Specifically, we take clean shapes, encode them into latent space, run a few steps of \textit{diffuse-denoise} that only slightly modify some details, and decode back. The \textit{diffuse-denoise} in latent space results in noise in the generated point clouds similar to what is observed during unconditional synthesis (details in App.~\ref{app:exp_detail_sap}).
\vspace{-2mm}
\subsection{LION's Advantages}
\label{subsec:advantages}
\vspace{-2mm}
We now recapitulate LION's unique advantages. LION's structure as a hierarchical VAE with latent DDMs is inspired by latent DDMs on images~\cite{rombach2021highresolution,vahdat2021score,sinha2021d2c}. This framework has key benefits:
\textbf{(i) Expressivity:} First training a VAE that regularizes the latent encodings to approximately fall under standard Gaussian distributions, which are also the DDMs' equilibrium distributions towards which the diffusion processes converge, results in an easier modeling task for the DDMs: They have to model only the remaining mismatch between the actual encoding distributions and their own Gaussian priors~\cite{vahdat2021score}. This translates into improved expressivity, which is further enhanced by the additional decoder network. However, point clouds are, in principle, an ideal representation for the DDM framework, because they can be diffused and denoised easily and powerful point cloud processing architectures exist. Therefore, LION uses \textit{point cloud latents} that combine the advantages of both latent DDMs and 3D point clouds. Our point cloud latents can be interpreted as smoothed versions of the original point clouds that are easier to model (see Fig.~\ref{fig:pipeline}). Moreover, the hierarchical VAE setup with an additional global shape latent increases LION's expressivity even further and results in natural disentanglement between overall shape and local details captured by the shape latents and latent points (Sec.~\ref{sec:exp_many_class}).\looseness=-1
\textbf{(ii) Flexibility:} Another advantage of LION's VAE framework is that its encoders can be fine-tuned for various relevant tasks, as discussed previously, and it also enables easy shape interpolation. Other 3D point cloud DDMs operating on point clouds directly~\cite{luo2021diffusion,zhou2021shape} do not offer simultaneously as much flexibility and expressivity out-of-the-box (see quantitative comparisons in Secs.~\ref{sec:uncond_gen} and \ref{sec:exps_guided}).
\textbf{(iii) Mesh Reconstruction:} As discussed, while point clouds are ideal for DDMs, artists likely prefer meshed outputs. As explained above, we propose to use LION together with modern surface reconstruction techniques~\cite{peng2021SAP}, again combining the best of both worlds---a point cloud-based VAE backbone ideal for DDMs, and smooth geometry reconstruction methods operating on the synthesized point clouds to generate practically useful smooth surfaces, which can be easily transformed into meshes.\looseness=-1
\section{Background} \label{sec:background}
\vspace{-2mm}
Traditionally, DDMs were introduced in a discrete-step fashion: Given samples ${\mathbf{x}}_0 \sim q({\mathbf{x}}_0)$ from a data distribution, DDMs use a Markovian fixed forward diffusion process defined as~\cite{sohl2015deep,ho2020ddpm}
\begin{equation} \label{eq:ddpm1}
q({\mathbf{x}}_{1:T}|{\mathbf{x}}_0):=\prod_{t=1}^T q({\mathbf{x}}_{t}|{\mathbf{x}}_{t-1}),\qquad q({\mathbf{x}}_{t}|{\mathbf{x}}_{t-1}):=\mathcal{N}({\mathbf{x}}_t;\sqrt{1-\beta_t}{\mathbf{x}}_{t-1},\beta_t{\bm{I}}),
\end{equation}
where $T$ denotes the number of steps and $q({\mathbf{x}}_{t}|{\mathbf{x}}_{t-1})$ is a Gaussian transition kernel, which gradually adds noise to the input with a variance schedule $\beta_1,...,\beta_T$. The $\beta_t$ are chosen such that the chain approximately converges to a standard Gaussian distribution after $T$ steps, $q({\mathbf{x}}_T){\approx}{\mathcal{N}}({\mathbf{x}}_T;\bm{0}, {\bm{I}})$. DDMs learn a parametrized reverse process (model parameters ${\bm{\theta}}$) that inverts the forward diffusion: \vspace{-0mm}
\begin{equation} \label{eq:background_scorematching}
p_{\bm{\theta}}({\mathbf{x}}_{0:T}):=p({\mathbf{x}}_T)\prod_{t=1}^T p_{\bm{\theta}}({\mathbf{x}}_{t-1}|{\mathbf{x}}_{t}),\qquad p_{\bm{\theta}}({\mathbf{x}}_{t-1}|{\mathbf{x}}_{t}):=\mathcal{N}({\mathbf{x}}_{t-1};\mathbf{\mu}_{\bm{\theta}}({\mathbf{x}}_{t},t),\rho_t^2{\bm{I}}).
\end{equation}
This generative reverse process is also Markovian with Gaussian transition kernels, which use fixed variances $\rho_t^2$. DDMs can be interpreted as latent variable models, where ${\mathbf{x}}_1,...,{\mathbf{x}}_T$ are latents, and the forward process $q({\mathbf{x}}_{1:T}|{\mathbf{x}}_0)$ acts as a fixed approximate posterior, to which the generative $p_{\bm{\theta}}({\mathbf{x}}_{0:T})$ is fit. DDMs are trained by minimizing the variational upper bound on the negative log-likelihood of the data ${\mathbf{x}}_0$ under $p_{\bm{\theta}}({\mathbf{x}}_{0:T})$. Up to irrelevant constant terms, this objective can be expressed as~\cite{ho2020ddpm}
{\small
\begin{equation} \label{eq:ddpm_obj}
\min_{\bm{\theta}} \mathbb{E}_{t\sim U\{1,T\},{\mathbf{x}}_0\sim p({\mathbf{x}}_0),{\bm{\epsilon}}\sim{\mathcal{N}}(\bm{0}, {\bm{I}})}\left[w(t)||{\bm{\epsilon}}-{\bm{\epsilon}}_{\bm{\theta}}(\alpha_t {\mathbf{x}}_0 + \sigma_t {\bm{\epsilon}},t)||_2^2 \right],\;\; w(t)=\frac{\beta_t^2}{2\rho_t^2(1-\beta_t)(1-\alpha_t^2)},
\end{equation}}\noindent
where $\alpha_t=\sqrt{\prod_{s=1}^t(1-\beta_s)}$ and $\sigma_t=\sqrt{1-\alpha_t^2}$ are the parameters of the tractable diffused distribution after $t$ steps, $q({\mathbf{x}}_t|{\mathbf{x}}_0)={\mathcal{N}}({\mathbf{x}}_t;\alpha_t{\mathbf{x}}_0,\sigma_t^2{\bm{I}})$. Furthermore, Eq.~(\ref{eq:ddpm_obj}) employs the widely used parametrization $\mathbf{\mu}_{\bm{\theta}}({\mathbf{x}}_t,t):=\tfrac{1}{\sqrt{1-\beta_t}}\left({\mathbf{x}}_t-\tfrac{\beta_t}{\sqrt{1-\alpha_t^2}}{\bm{\epsilon}}_{\bm{\theta}}({\mathbf{x}}_t,t)\right)$. It is common practice to set $w(t)=1$ instead of using the weight in Eq.~(\ref{eq:ddpm_obj}), which often promotes perceptual quality of the generated output. With the objective of Eq.~(\ref{eq:ddpm_obj}), the model ${\bm{\epsilon}}_{\bm{\theta}}$ is effectively trained to predict, for all possible steps $t$ along the diffusion process, the noise vector ${\bm{\epsilon}}$ that is necessary to denoise an observed diffused sample ${\mathbf{x}}_t$. After training, the DDM can be sampled with ancestral sampling in an iterative fashion:
\begin{equation} \label{eq:ddpm_sampling}
{\mathbf{x}}_{t-1}=\tfrac{1}{\sqrt{1-\beta_t}}({\mathbf{x}}_t -\tfrac{\beta_t}{\sqrt{1-\alpha_t^2}}{\bm{\epsilon}}_{\bm{\theta}}({\mathbf{x}}_t,t)) +\rho_t \boldsymbol{\eta},
\end{equation}
where $\boldsymbol{\eta}\sim{\mathcal{N}}(\boldsymbol{\eta};\bm{0}, {\bm{I}})$. This sampling chain is initialized from a random sample ${\mathbf{x}}_T\sim{\mathcal{N}}({\mathbf{x}}_T;\bm{0}, {\bm{I}})$. Furthermore, the noise injection in Eq.~(\ref{eq:ddpm_sampling}) is usually omitted in the last sampling step.
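The full ancestral sampling loop of Eq.~(\ref{eq:ddpm_sampling}) can be sketched as follows; `eps_model` stands in for the trained network, `rho` holds the fixed per-step standard deviations, and the sketch is illustrative rather than an actual implementation.

```python
import numpy as np

def ancestral_sample(eps_model, betas, rho, shape, rng):
    # DDPM ancestral sampling: start from x_T ~ N(0, I) and iterate the
    # learned reverse kernel; the noise injection is omitted in the final
    # step, following common practice.
    alpha = np.sqrt(np.cumprod(1.0 - betas))  # alpha_t for t = 1..T
    T = len(betas)
    x = rng.standard_normal(shape)
    for t in range(T, 0, -1):
        beta_t, alpha_t = betas[t - 1], alpha[t - 1]
        mean = (x - beta_t / np.sqrt(1.0 - alpha_t**2)
                * eps_model(x, t)) / np.sqrt(1.0 - beta_t)
        x = mean + (rho[t - 1] * rng.standard_normal(shape)
                    if t > 1 else 0.0)
    return x
```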
DDMs can also be expressed with a continuous-time framework~\cite{song2020score,kingma2021variational}. In this formulation, the diffusion and reverse generative processes are described by differential equations.
In particular, it allows for deterministic sampling and encoding schemes based on ordinary differential equations (ODEs). We make use of this framework in Sec.~\ref{subsec:extensions} and review it in more detail in App.~\ref{app:background_extended}.\looseness=-1
\section{Related Work} \label{sec:related}
\vspace{-2mm}
We are building on DDMs~\cite{ho2020ddpm,sohl2015deep,song2019generative,song2020score}, which have been used most prominently for image~\cite{ho2020ddpm,ho2021cascaded,nichol2021improved,dhariwal2021diffusion,rombach2021highresolution,vahdat2021score,nichol2021glide,preechakul2021diffusion,dockhorn2022score,xiao2022DDGAN,ramesh2022dalle2} and speech synthesis~\cite{chen2021wavegrad,kong2021diffwave,jeong2021difftts,chen2021wavegrad2,popov2021gradtts,liu2022diffgantts}. We train DDMs in latent space, an idea that has been explored for image~\cite{rombach2021highresolution,vahdat2021score,sinha2021d2c} and music~\cite{mittal2021symbolic} generation, too. However, these works did not train separate conditional DDMs.
Hierarchical DDM training has been used for generative image upsampling~\cite{ho2021cascaded}, text-to-image generation~\cite{ramesh2022dalle2,saharia2022imagen}, and semantic image modeling~\cite{preechakul2021diffusion}. Most relevant among these works is \citet{preechakul2021diffusion}, which extracts a high-level semantic representation of an image with an auxiliary encoder and then trains a DDM that adds details directly in image space.
We are the first to explore related concepts for 3D shape synthesis and we also train both DDMs in latent space. Furthermore, DDMs and VAEs have also been combined in such a way that the DDM improves the output of the VAE~\cite{pandey2022diffusevae}.\looseness=-1
Most related to LION
are ``Point-Voxel Diffusion'' (PVD)~\cite{zhou2021shape} and ``Diffusion Probabilistic Models for 3D Point Cloud Generation'' (DPM)~\cite{luo2021diffusion}. PVD trains a DDM directly on point clouds, and our decision to use PVCNNs is inspired by this work. DPM, like LION, uses a shape latent variable, but models its distribution with Normalizing Flows~\cite{rezende2015normalizing,dinh2017realnvp}, and then trains a weaker point-wise conditional DDM directly on the point cloud data (this allows DPM to learn useful representations in its latent variable, but sacrifices generation quality). As we show below, neither PVD nor DPM easily enables applications such as multimodal voxel-conditioned synthesis and denoising. Furthermore, LION achieves significantly stronger generation performance. Finally, neither PVD nor DPM reconstructs meshes from the generated point clouds. Point cloud and 3D shape generation have also been explored with other generative models:
PointFlow~\cite{yang2019pointflow}, DPF-Net~\cite{klokov2020discrete}
and SoftFlow~\cite{kim2020softflow} rely on
Normalizing Flows~\cite{rezende2015normalizing,dinh2017realnvp,chen2018neuralode,grathwohl2019ffjord}. %
SetVAE~\cite{Kim_2021_CVPR} treats point cloud synthesis as set generation and uses
VAEs. ShapeGF~\cite{cai2020eccv}
learns distributions over gradient fields that model shape surfaces. Both IM-GAN~\cite{Chen_2019_CVPR}, which models shapes
as neural fields, and l-GAN~\cite{achlioptas2018learning} train GANs over latent variables that encode the shapes, similar to other works~\cite{li2018point}, while r-GAN~\cite{achlioptas2018learning} generates point clouds directly. PDGN~\cite{hui2020pdgn} proposes progressive deconvolutional networks within a point cloud GAN.
SP-GAN~\cite{li2021spgan}
uses a spherical point cloud prior.
Other progressive~\cite{wen2021learning,ko2021rpg} and graph-based architectures~\cite{valsesia2019learning,shu20193d} have been used, too. Also generative cellular automata (GCAs) can be employed for voxel-based 3D shape generation~\cite{zhang2021learning}.
In orthogonal work, point cloud DDMs have been used for generative shape completion~\cite{zhou2021shape,lyu2022a}.\looseness=-1
\input{sections/figTexts/interp}
Recently, image-driven~\cite{niemeyer2020GIRAFFE,Schwarz2020NEURIPS,Liao2020CVPR,piGAN2021,chan2021eg3d,orel2021styleSDF,gu2022stylenerf,zhou2021CIPS3D,pavllo2021textured3dgan,chao2021unsupervised} training of 3D generative models as well as text-driven 3D generation~\cite{sanghi2021clipforge,michel2021text2mesh,jain2021dreamfields,khalid2022texttomesh} have received much attention. These are complementary directions to ours; in fact, augmenting LION with additional image-based training or including text-guidance are promising future directions. Finally, we are relying on SAP~\cite{peng2021SAP} for mesh generation. Strong alternative approaches for reconstructing smooth surfaces from point clouds exist~\cite{groueix2018,mescheder2019occupancy,peng2020convoccnet,williams2021neuralsplines,williams2021nkf}.
\section{Introduction}
\vspace{-2mm}
Generative modeling of 3D shapes has extensive applications in 3D content creation
and has become an active area of research~\cite{wu2016learning,achlioptas2018learning,li2018point,valsesia2019learning,huang20193d,shu20193d,Chen_2019_CVPR,niemeyer2020GIRAFFE,Schwarz2020NEURIPS,Liao2020CVPR,piGAN2021,chan2021eg3d,orel2021styleSDF,gu2022stylenerf,zhou2021CIPS3D,pavllo2021textured3dgan,zhang2021image,ibing2021shape,li2021spgan,luo2021surfgen,chen2021decor,wen2021learning,hao2021GANcraft,litany2017deformable,tan2018variational,mo2019structurenet,gaosdmnet2019,gao2020tmnet,Kim_2021_CVPR,autosdf2022,yang2019pointflow,kim2020softflow,klokov2020discrete,sanghi2021clipforge,sun2018pointgrow,nash2020polygen,ko2021rpg,ibing2021octree,xie2018learning,xie2021GPointNet,shen2021dmtet,yin2021_3DStyleNet,zhang2021learning,chao2021unsupervised,cai2020eccv,zhou2021shape,luo2021diffusion,deng2021deformed,michel2021text2mesh,jain2021dreamfields,khalid2022texttomesh,hui2020pdgn}. However, to be useful as a tool for digital artists, generative models of 3D shapes have to fulfill several criteria: \textbf{(i)} Generated shapes need to be realistic and of high-quality without artifacts. \textbf{(ii)} The model should enable flexible and interactive use and refinement: For example, a user may want to refine a generated shape and synthesize versions with varying details. Or an artist may provide a coarse or noisy input shape, thereby guiding the model to produce multiple realistic high-quality outputs. Similarly, a user may want to interpolate different shapes. \textbf{(iii)} The model should output smooth meshes, which are the standard representation in most graphics software. %
Existing 3D generative models build on various frameworks, including generative adversarial networks (GANs)~\cite{wu2016learning,achlioptas2018learning,li2018point,valsesia2019learning,huang20193d,shu20193d,Chen_2019_CVPR,niemeyer2020GIRAFFE,Schwarz2020NEURIPS,Liao2020CVPR,piGAN2021,chan2021eg3d,orel2021styleSDF,gu2022stylenerf,zhou2021CIPS3D,pavllo2021textured3dgan,zhang2021image,ibing2021shape,li2021spgan,luo2021surfgen,chen2021decor,wen2021learning,hao2021GANcraft}, variational autoencoders (VAEs)~\cite{litany2017deformable,tan2018variational,mo2019structurenet,gaosdmnet2019,gao2020tmnet,Kim_2021_CVPR,autosdf2022}, normalizing flows~\cite{yang2019pointflow,kim2020softflow,klokov2020discrete,sanghi2021clipforge}, autoregressive models~\cite{sun2018pointgrow,nash2020polygen,ko2021rpg,ibing2021octree}, and more~\cite{xie2018learning,xie2021GPointNet,shen2021dmtet,yin2021_3DStyleNet,zhang2021learning,chao2021unsupervised}. Most recently, denoising diffusion models (DDMs) have emerged as powerful generative models, achieving outstanding results not only on image synthesis~\cite{ho2020ddpm,ho2021cascaded,nichol2021improved,dhariwal2021diffusion,rombach2021highresolution,vahdat2021score,nichol2021glide,preechakul2021diffusion,dockhorn2022score,xiao2022DDGAN,ramesh2022dalle2,saharia2022imagen} but also for point cloud-based 3D shape generation~\cite{cai2020eccv,zhou2021shape,luo2021diffusion}. In DDMs, the data is gradually perturbed by a diffusion process, while a deep neural network is trained to denoise. This network can then be used to synthesize novel data in an iterative fashion when initialized from random noise~\cite{ho2020ddpm,sohl2015deep,song2019generative,song2020score}.
However, existing DDMs for 3D shape synthesis struggle with simultaneously satisfying all criteria discussed above for practically useful 3D generative models.
Here, we aim to develop a DDM-based generative model of 3D shapes overcoming these limitations.
We introduce the
\textit{Latent Point Diffusion Model (LION)} for 3D shape generation (see Fig.~\ref{fig:pipeline}). Similar to previous 3D DDMs, LION operates on point clouds, but it is constructed as a VAE with DDMs in latent space.
LION comprises a hierarchical latent space with a vector-valued global shape latent and another point-structured latent space. The latent representations are predicted with point cloud processing encoders, and two latent DDMs are trained in these latent spaces. Synthesis in LION proceeds by drawing novel latent samples from the hierarchical latent DDMs and
decoding back to the original point cloud space. Importantly, we also demonstrate how to augment LION with modern surface reconstruction methods~\cite{peng2021SAP} to synthesize smooth shapes as desired by artists. LION has multiple advantages:\looseness=-1
\begin{figure} [t!]
\vspace{-0.8cm}
\begin{minipage}[c]{0.73\textwidth}
\includegraphics[width=1.0\textwidth]{sections/fig/pipeline_figs/pipeline_new3.jpg}
\end{minipage}\hfill
\begin{minipage}[c]{0.23\textwidth}
\caption{\small \textit{LION} is set up as a hierarchical point cloud VAE with denoising diffusion models over the shape latent and latent point distributions. Point-Voxel CNNs (PVCNN) with adaptive Group Normalization (Ada. GN) are used as neural networks. The latent points can be interpreted as a smoothed version of the input point cloud. \textit{Shape As Points} (SAP) is optionally used for mesh reconstruction.} \label{fig:pipeline}
\end{minipage}
\vspace{-6.5mm}
\end{figure}
\textbf{Expressivity:} By mapping point clouds into regularized latent spaces, the DDMs in latent space are effectively tasked with learning a smoothed distribution. This is easier than training on potentially complex point clouds directly~\cite{vahdat2021score}, thereby improving expressivity. However, point clouds are, in principle, an ideal representation for DDMs. Because of that, we use \textit{latent points}, that is, we keep a point cloud structure for our main latent representation. %
Augmenting the model with an additional global shape latent variable in a hierarchical manner further boosts expressivity.
We validate LION on several popular ShapeNet benchmarks and achieve state-of-the-art synthesis performance.
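The two-stage synthesis procedure described above can be sketched as follows. This is a minimal NumPy illustration, not LION's actual implementation: \texttt{shape\_prior}, \texttt{point\_prior}, and \texttt{decoder} are hypothetical stand-ins for the trained PVCNN-based networks, and the noise schedule and dimensions are illustrative.

```python
import numpy as np

def ddpm_sample(eps_model, shape, betas, rng, cond=None):
    """Standard DDPM ancestral sampling loop (Ho et al., 2020)."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)
    for t in reversed(range(len(betas))):
        eps = eps_model(x, t, cond)
        # posterior mean of x_{t-1} given x_t and the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

def sample_shape(shape_prior, point_prior, decoder, n_points=2048, z_dim=128, steps=50):
    """Two-stage hierarchical sampling: draw the global shape latent first,
    then the point-structured latent conditioned on it, then decode."""
    rng = np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, steps)
    z0 = ddpm_sample(shape_prior, (z_dim,), betas, rng)           # global shape latent
    h0 = ddpm_sample(point_prior, (n_points, 3), betas, rng, z0)  # latent points, conditioned on z0
    return decoder(z0, h0)                                        # decode back to a point cloud

# Toy zero-noise-predicting stand-ins for the trained networks:
pts = sample_shape(lambda x, t, c: np.zeros_like(x),
                   lambda x, t, c: np.zeros_like(x),
                   lambda z, h: h)
```

The decoded point cloud would then optionally be passed through SAP-based surface reconstruction to obtain a mesh.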
\textbf{Varying Output Types:} Extending LION with \textit{Shape As Points} (SAP)~\cite{peng2021SAP} geometry reconstruction allows us to also output smooth meshes. Fine-tuning SAP on data generated by LION's autoencoder reduces synthesis noise and enables us to generate high-quality geometry. LION combines (latent) point cloud-based modeling, ideal for DDMs, with surface reconstruction, desired by artists.
\textbf{Flexibility:} Since LION is set up as a VAE, it can be easily adapted for different tasks without re-training the latent DDMs: We can efficiently fine-tune LION's encoders on voxelized or noisy inputs, which a user can provide for guidance. This enables
multimodal voxel-guided synthesis and shape denoising.
We also leverage LION's latent spaces for shape interpolation and autoencoding. Optionally training the DDMs conditioned on CLIP embeddings enables image- and text-driven 3D generation.\looseness=-1
In summary, we make the following contributions: \textbf{(i)} We introduce LION, a novel generative model for 3D shape synthesis, which operates on point clouds and is built on a hierarchical VAE framework with two latent DDMs. \textbf{(ii)} We validate LION's high synthesis quality by reaching state-of-the-art performance on widely used ShapeNet benchmarks. \textbf{(iii)} We achieve high-quality and diverse 3D shape synthesis with LION even when trained jointly over many classes without conditioning. \textbf{(iv)} We propose to combine LION with SAP-based surface reconstruction. \textbf{(v)} We demonstrate the flexibility of our framework by adapting it to relevant tasks such as multimodal voxel-guided synthesis.
\section{Conclusions} \label{sec:conclusions}
\vspace{-2mm}
We introduced LION, a novel generative model of 3D shapes. LION uses a VAE framework with hierarchical DDMs in latent space and can be combined with SAP for mesh generation. LION achieves state-of-the-art shape generation performance and enables applications such as voxel-conditioned synthesis, multimodal shape denoising, and shape interpolation.
LION is currently trained on 3D point clouds only and can not directly generate textured shapes.
A promising extension would be to include image-based training by incorporating neural or differentiable rendering~\cite{zhang2021image,chen2019dibr,Laine2020diffrast,mildenhall2020nerf,chen2021dibrpp,tewari2021advances} and to also synthesize textures~\cite{pavllo2021textured3dgan,OechsleICCV2019,chen2022AUVNET,siddiqui2022texturify}. Furthermore, LION currently focuses on single object generation only. It would be interesting to extend it to full 3D scene synthesis. Moreover, synthesis could be further accelerated by building on works on accelerated sampling from DDMs~\cite{dockhorn2022score,xiao2022DDGAN,song2020score,song2020denoising,watson2021arxiv,kong2021arxiv,jolicoeur2021gotta,watson2022learning,liu2022pseudo,salimans2022progressive,bao2022analyticdpm}.\looseness=-1
\textbf{Broader Impact.} We believe that LION can potentially improve 3D content creation and assist the workflow of digital artists. We designed LION with such applications in mind and hope that it can grow into a practical tool enhancing artists' creativity.
Although we do not see any immediate negative use-cases for LION, practitioners should apply an abundance of caution to mitigate impacts, given that generative modeling more generally can also be used for malicious purposes, as discussed for instance in~\citet{vaccari2020deepfakes,nguyen2021deep,mirsky2021deepfakesurvey}.
\section{Experiments}
\vspace{-2mm}
We provide an overview of our most interesting experimental results in the main paper. All experiment details and extensive additional experiments can be found in App.~\ref{app:exp_details} and App.~\ref{app:extra_exps}, respectively.
\input{sections/table/gen_combined}
\begin{wrapfigure}{r}{0.28\textwidth}
\vspace{-1.2cm}
\begin{center}
\includegraphics[scale=0.34, clip=True]{sections/fig/pipeline_figs/many_class_same_latents.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small Samples from our unconditional 13-class model: In each column, we use the same global shape latent ${\mathbf{z}}_0$.} %
\vspace{-0.3cm}
\label{fig:many_class_same_latents}
\end{wrapfigure}%
\vspace{-3mm}
\subsection{Single-Class 3D Shape Generation} \label{sec:uncond_gen}
\vspace{-2mm}
\textbf{Datasets.} To compare LION against existing methods, we use ShapeNet~\cite{shapenet2015}, the most widely used dataset to benchmark 3D shape generative models. Following previous works~\cite{yang2019pointflow,zhou2021shape,luo2021diffusion}, we train on three categories: \textit{airplane}, \textit{chair}, \textit{car}. Also like previous methods, we primarily rely on PointFlow's~\cite{yang2019pointflow} dataset splits and preprocessing. It normalizes the data globally across the whole dataset. However, some baselines require per-shape normalization~\cite{li2021spgan,zhang2021learning,cai2020eccv,hui2020pdgn}; hence, we also train on such data. Furthermore, training SAP requires signed distance fields (SDFs) for volumetric supervision, which the PointFlow data does not offer. Hence, for simplicity we follow \citet{peng2021SAP,peng2020convoccnet} and also use their data splits and preprocessing, which includes SDFs.%
We train LION, DPM, PVD, and IM-GAN (which synthesizes shapes as SDFs) also on this dataset version (denoted as \textit{ShapeNet-vol} here). This data is also per-shape normalized. Dataset details in App.~\ref{app:datasets}.
\input{sections/table/gen_v2_all_class_1nna}
\textbf{Evaluation.} Model evaluation follows previous works~\cite{yang2019pointflow,zhou2021shape}. Various metrics to evaluate point cloud generative models exist, with different advantages and disadvantages, discussed in detail by \citet{yang2019pointflow}. Following recent works~\cite{yang2019pointflow,zhou2021shape}, we use \textit{1-NNA} (with both Chamfer distance (CD) and earth mover distance (EMD)) as our main metric. It quantifies the distributional similarity between generated shapes and validation set and measures both quality and diversity~\cite{yang2019pointflow}. For fair comparisons, all metrics are computed on point clouds, not meshed outputs (App.~\ref{app:metrics} discusses different metrics; further results on coverage (COV) and minimum matching distance (MMD) in App.~\ref{app:singleclass}).\looseness=-1
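As a concrete illustration, 1-NNA can be computed as the leave-one-out accuracy of a 1-NN classifier separating generated from reference shapes; 50\% means the two sets are indistinguishable (ideal), while values near 100\% indicate poor samples. A brute-force sketch with the Chamfer distance (illustrative only; the official evaluation follows the protocol of \citet{yang2019pointflow}):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two point clouds (N,3) and (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def one_nna(gen, ref):
    """1-NNA: leave-one-out 1-NN accuracy of a classifier separating
    generated shapes from reference shapes."""
    clouds = gen + ref
    labels = [0] * len(gen) + [1] * len(ref)
    n = len(clouds)
    D = np.full((n, n), np.inf)          # inf diagonal excludes self-matches
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = chamfer(clouds[i], clouds[j])
    correct = sum(labels[i] == labels[int(np.argmin(D[i]))] for i in range(n))
    return correct / n

rng = np.random.default_rng(0)
ref = [rng.standard_normal((64, 3)) for _ in range(8)]
gen = [rng.standard_normal((64, 3)) for _ in range(8)]       # same distribution: near 0.5
far = [rng.standard_normal((64, 3)) + 5.0 for _ in range(8)] # shifted distribution: near 1.0
```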
\input{sections/figTexts/generation_all_cats_only}
\textbf{Results.} Samples from LION are shown in Fig.~\ref{fig:gen_v1_pts} and quantitative results in Tabs.~\ref{tab:gen_v1_global_norm}-\ref{tab:gen_v2} (see Sec.~\ref{sec:related} for details about baselines---to reduce the number of baselines to train, we are focusing on the most recent and competitive ones). LION outperforms all baselines and achieves state-of-the-art performance on all classes and dataset versions. Importantly, we outperform both PVD and DPM, which also leverage DDMs, by large margins. Our samples are diverse and appear visually pleasing.
\textbf{Mesh Reconstruction.} As explained in Sec.~\ref{subsec:extensions}, we combine LION with mesh reconstruction, to directly synthesize practically useful meshes. We show generated meshes in Fig.~\ref{fig:gen_v2_mesh}, which are smooth and of high quality. In Fig.~\ref{fig:gen_v2_mesh}, we also visually demonstrate how we can vary the local details of synthesized shapes while preserving the overall shape with our \textit{diffuse-denoise} technique (Sec.~\ref{subsec:extensions}). Details about the number of diffusion steps for all \textit{diffuse-denoise} experiments
are in App.~\ref{app:exp_details}.
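The \textit{diffuse-denoise} operation can be sketched as follows: perturb the latent points with $k$ forward diffusion steps using the closed-form Gaussian marginal, then run the learned reverse chain back down to step 0. The noise-prediction network below is a toy stand-in and the schedule is illustrative:

```python
import numpy as np

def diffuse_denoise(h0, eps_model, betas, k, rng):
    """Run k forward diffusion steps on a latent point set, then denoise
    back with the reverse process. Small k keeps the overall shape and only
    resamples local details; large k approaches a fresh sample."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    # forward: jump directly to step k via the closed-form marginal q(x_k | x_0)
    x = (np.sqrt(alpha_bars[k - 1]) * h0
         + np.sqrt(1.0 - alpha_bars[k - 1]) * rng.standard_normal(h0.shape))
    # reverse: ancestral denoising from step k back to 0
    for t in reversed(range(k)):
        eps = eps_model(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(1)
betas = np.linspace(1e-4, 0.02, 100)
h0 = rng.standard_normal((256, 3))   # latent points of an encoded shape
h_varied = diffuse_denoise(h0, lambda x, t: np.zeros_like(x), betas, 20, rng)
```

With few steps ($k=20$ of 100 here), the output stays strongly correlated with the input, which is exactly the behavior exploited for detail variation.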
\textbf{Shape Interpolation.} As discussed in Sec.~\ref{subsec:extensions}, LION also enables shape interpolation, potentially useful for shape editing applications. We show this in Fig.~\ref{fig:interpolation_2}, combined with mesh reconstruction. The generated shapes are clean and semantically plausible along the entire interpolation path. In App.~\ref{app:interp_pvd_dpm}, we also show interpolations from PVD~\cite{zhou2021shape} and DPM~\cite{luo2021diffusion} for comparison.
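Interpolation between two encoded shapes can be sketched with spherical interpolation of their latent codes, a common choice for approximately Gaussian latents (whether the experiments use exactly this scheme is an implementation detail; the helper below is illustrative):

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors; preferred over
    linear interpolation for (approximately) Gaussian latents because it
    better preserves the norm statistics the decoder was trained on."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))
    if omega < 1e-8:                      # (near-)parallel vectors: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```

Intermediate latents `slerp(z_a, z_b, t)` for $t \in [0,1]$ are then decoded (optionally after a few \textit{diffuse-denoise} steps) to obtain the interpolation path.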
\vspace{-0mm}
\subsection{Many-class Unconditional 3D Shape Generation} \label{sec:exp_many_class}
\vspace{-1mm}
\textbf{13-Class LION Model.} We train a LION model \textit{jointly without any class conditioning} on 13 different categories (\textit{airplane, chair, car, lamp, table, sofa, cabinet, bench, telephone, loudspeaker, display, watercraft, rifle}) from ShapeNet (ShapeNet-vol version).
Training a single model without conditioning over such diverse
shapes is challenging, as the data distribution is highly complex and multimodal. %
We show LION's generated samples in Fig.~\ref{fig:gen_v2_all_cats_mesh_and_pts}, including meshes: LION synthesizes high-quality and diverse plausible shapes even when trained on such complex data. We report the model's quantitative generation performance in Tab.~\ref{tab:gen_13cls_1nna}, and we also trained various strong baseline methods under the same setting for comparison. We find that LION significantly outperforms all baselines by a large margin.
We further observe that the hierarchical VAE architecture of LION becomes crucial: The shape latent variable ${\mathbf{z}}_0$ captures global shape, while the latent points ${\mathbf{h}}_0$ model details. This can be seen in Fig.~\ref{fig:many_class_same_latents}: we show samples when fixing the global shape latent ${\mathbf{z}}_0$ and only sample ${\mathbf{h}}_0$ (details in App.~\ref{app:manyclass}).\looseness=-1
\textbf{55-Class LION Model.} Encouraged by these results, we also trained a LION model, again \textit{jointly without any class conditioning}, on all 55 categories from ShapeNet. Note that we purposely did not use class-conditioning in these experiments, to create a difficult 3D generation task and thereby explore LION's scalability to highly complex and multimodal datasets. We show generated point cloud samples in Fig.~\ref{fig:gen_v2_all_cats55_pts} (we did not train an SAP model on the 55-class data): LION synthesizes high-quality and diverse shapes. It can even generate samples from the \textit{cap} class, which contributes only 39 training samples, indicating that LION has excellent mode coverage that extends even to very rare classes. To the best of our knowledge, no previous 3D shape generative model has demonstrated satisfactory generation performance on such diverse and multimodal 3D data without relying on conditioning information (details in App.~\ref{app:many_class_55}). In conclusion, we observe that LION easily scales out-of-the-box to highly complex multi-category shape generation.
\input{sections/figTexts/generation_mug_bottle}
\input{sections/figTexts/voxel_exp}
\vspace{-0.2cm}
\subsection{Training LION on Small Datasets}\label{sec:small_data}
\vspace{-0.1cm}
Next, we explore whether LION can also be trained successfully on very small datasets. To this end, we train models on the Mug and Bottle ShapeNet classes, with only 149 and 340 training samples, respectively, much fewer than for common classes such as chair, car, and airplane. Furthermore, we also train LION on 553 animal assets from the TurboSquid data repository. Generated shapes from the three models are shown in Fig.~\ref{fig:gen_mug_main}. LION is able to generate correct mugs and bottles as well as diverse and high-quality animal shapes. We conclude that LION performs well even when trained in the challenging low-data setting (details in Apps.~\ref{app:mug_bottle} and~\ref{app:animals}).\looseness=-1
\vspace{-0.2cm}
\subsection{Voxel-guided Shape Synthesis and Denoising with Fine-tuned Encoders}\label{sec:exps_guided}
\vspace{-0.1cm}
Next, we test our strategy for multimodal voxel-guided shape synthesis (see Sec.~\ref{subsec:extensions}) using the \textit{airplane} class LION model (experiment details in App.~\ref{app:exp_details}, more experiments in App.~\ref{app:exp_guided}). We first voxelize our training set and fine-tune our encoder networks to produce the correct encodings to decode back the original shapes. When processing voxelized shapes with our point-cloud networks, we sample points on the surface of the voxels. As discussed, we can use different numbers of \textit{diffuse-denoise} steps in latent space to generate various plausible shapes and correct for poor encodings. Instead of voxelizations, we can also consider different noisy inputs (we use \textit{normal}, \textit{uniform}, and \textit{outlier} noise, see App.~\ref{app:exp_guided}) and achieve multimodal denoising with the same approach. The same tasks can be attempted with the important DDM-based baselines PVD and DPM, by directly---not in a latent space---diffusing and denoising voxelized (converted to point clouds) or noisy point clouds.%
\input{sections/figTexts/plot_rec}
\input{sections/figTexts/text2mesh_main}
Fig.~\ref{fig:rec_noise1} shows the reconstruction performance of LION, DPM and PVD for different numbers of \textit{diffuse-denoise} steps (we voxelized or noised the validation set to measure this). We see that for almost all inputs---voxelized or different noises---LION performs best. PVD and DPM perform acceptably for normal and uniform noise, which is similar to the noise injected during training of their DDMs, but perform very poorly for outlier noise or voxel inputs, which is the most relevant case to us, because voxels can be easily placed by users. It is LION's unique framework with additional fine-tuned encoders in its VAE and only latent DDMs that makes this possible. Performing more \textit{diffuse-denoise} steps means that more independent, novel shapes are generated. These will be cleaner and of higher quality, but also correspond less to the noisy or voxel inputs used for guidance. In Fig.~\ref{fig:rec_noise2}, we show this trade-off for the voxel-guidance experiment (other experiments in App.~\ref{app:exp_guided}), where \textit{(top)} we measured the outputs' synthesis quality by calculating 1-NNA with respect to the validation set, and \textit{(bottom)} the average intersection over union (IOU) between the input voxels and the voxelized outputs. We generally see a trade-off: More diffuse-denoise steps result in lower 1-NNA (better quality), but also lower IOU. LION strikes the best balance by a large gap: Its additional encoder network directly generates plausible latent encodings from the perturbed inputs that are both high quality and also correspond well to the input. This trade-off is visualized in Fig.~\ref{fig:voxel_exp} for LION, DPM, and PVD, where we show generated point clouds and voxelizations (note that performing no diffuse-denoise at all for PVD and DPM corresponds to simply keeping the input, as these models' DDMs operate directly on point clouds). 
We see that running 50 \textit{diffuse-denoise} steps to generate diverse outputs for DPM and especially PVD results in a significant violation of the input voxelization. In contrast, LION generates realistic outputs that also obey the driving voxels.
Overall, LION outperforms these previous DDM-based point cloud generative models by large margins, both in this task and in unconditional generation.
We conclude that LION not only offers state-of-the-art 3D shape generation quality, but is also very versatile. Note that guided synthesis can also be combined with mesh reconstruction, as shown in Fig.~\ref{fig:voxel_to_mesh}.\looseness=-1
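The voxel IOU used above to measure how well outputs obey the input voxelization can be sketched as follows (grid resolution and the $[-1,1]^3$ normalization range are assumptions for illustration):

```python
import numpy as np

def voxelize(points, res=32, lo=-1.0, hi=1.0):
    """Occupancy grid from a point cloud assumed to lie in [lo, hi]^3."""
    idx = np.clip(((points - lo) / (hi - lo) * res).astype(int), 0, res - 1)
    grid = np.zeros((res, res, res), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def voxel_iou(a, b):
    """Intersection-over-union between two occupancy grids."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```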
\begin{wrapfigure}{r}{0.23\textwidth}
\vspace{-1.3cm}
\begin{minipage}[c]{0.23\textwidth}
\renewcommand{\figsize}{0.5\linewidth}
\scalebox{0.90}{\setlength{\tabcolsep}{1pt}
\begin{tabular}{cc}
\includegraphics[align=c, width=\figsize]{sections/fig/vis_ddim_sample_trim/render_dense_yml256_m0_H500W600/ddim_airplane_25/mesh/0006_0.00.jpg}
& \includegraphics[align=c, height=\figsize]{sections/fig/vis_ddim_sample_trim/render_dense_yml256_m0_H500W600/ddim_chair_25/mesh/0020_0.00.jpg} \\
\includegraphics[align=c, width=\figsize]{sections/fig/vis_ddim_sample_trim/render_dense_yml256_m0_H500W600/ddim_car_25/mesh/0008_0.00.jpg}
& \includegraphics[align=c, width=\figsize]{sections/fig/vis_ddim_sample_trim/render_dense_yml256_m0_H500W600/ddim_all13_25/mesh/0010_0.00.jpg}
\end{tabular}
}
\end{minipage}
\vspace{-0.1cm}
\caption{\small 25-step DDIM~\citep{song2020denoising} samples (0.89 seconds per shape).}
\vspace{-0.6cm}
\label{fig:ddim_samples_main}
\end{wrapfigure}%
\vspace{-2mm}
\subsection{Sampling Time}
\vspace{-2mm}
While our main experiments use 1,000-step DDPM-based synthesis, which takes $\approx 27.12$ seconds, we can markedly accelerate generation without significant loss in quality. Using DDIM-based sampling~\citep{song2020denoising}, we can generate high-quality shapes in under one second (Fig.~\ref{fig:ddim_samples_main}), which would enable real-time interactive applications. More analyses in App.~\ref{sec:ddim_sampling}.
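DDIM's deterministic update, which allows sampling over a short sub-sequence of the training timesteps, can be sketched as follows (toy denoiser and illustrative schedule; LION's actual networks and latent conditioning are omitted):

```python
import numpy as np

def ddim_sample(eps_model, shape, alpha_bars, taus, rng):
    """Deterministic DDIM sampling (eta = 0) over a short timestep
    sub-sequence `taus`, e.g. 25 of the 1000 training steps."""
    x = rng.standard_normal(shape)
    for i in reversed(range(len(taus))):
        t = taus[i]
        ab_t = alpha_bars[t]
        ab_prev = alpha_bars[taus[i - 1]] if i > 0 else 1.0
        eps = eps_model(x, t)
        # predict x_0, then jump directly to the previous kept timestep
        x0_pred = (x - np.sqrt(1.0 - ab_t) * eps) / np.sqrt(ab_t)
        x = np.sqrt(ab_prev) * x0_pred + np.sqrt(1.0 - ab_prev) * eps
    return x

betas = np.linspace(1e-4, 0.02, 1000)
alpha_bars = np.cumprod(1.0 - betas)
taus = np.linspace(0, 999, 25).astype(int)   # 25-step schedule
x = ddim_sample(lambda x, t: np.zeros_like(x), (2048, 3),
                alpha_bars, taus, np.random.default_rng(0))
```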
\vspace{-2mm}
\subsection{Overview of Additional Experiments in Appendix}
\vspace{-2mm}
\textbf{(i)} In App.~\ref{app:ablations}, we perform various ablation studies. The experiments quantitatively validate LION's architecture choices and the advantage of our hierarchical VAE setup with conditional latent DDMs. \textbf{(ii)} In App.~\ref{app:exp_autoencode}, we measure LION's autoencoding performance.
\textbf{(iii)} To demonstrate the value of directly outputting meshes, in App.~\ref{app:text2mesh} we use Text2Mesh~\cite{michel2021text2mesh} to generate textures based on text prompts for synthesized LION samples (Fig.~\ref{fig:text2mesh:share_obj_main}). This would not be possible if we only generated point clouds.
\textbf{(iv)} To qualitatively show that LION can be adapted easily to other relevant tasks, in App.~\ref{app:svr_textdriven} we condition LION on CLIP embeddings of the shapes' rendered images, following CLIP-Forge~\cite{sanghi2021clipforge} (Fig.~\ref{fig:clipforge_main}). This enables text-driven 3D shape generation and single view 3D reconstruction (Fig.~\ref{fig:svr_car_main}).
\textbf{(v)} We also show many more samples (Apps.~\ref{app:singleclass}-\ref{app:animals}) and shape interpolations (App.~\ref{app:more_shape_interpolations}) from our models, more examples of voxel-guided and noise-guided synthesis (App.~\ref{app:exp_guided}), and we further analyze our 13-class LION model (App.~\ref{app:tsne}).\looseness=-1
\input{sections/figTexts/text2shape_main}
\input{sections/figTexts/svr_main}
2102.08366
\section{Introduction}
Biomedical question-answering (QA) aims to provide users with succinct answers to their queries by analysing large-scale scientific literature.
It enables clinicians, public health officials and end-users
to quickly access the rapid flow of specialised knowledge that is continuously produced.
This has directed the research community's effort towards developing specialised models and tools for biomedical QA and
assessing their performance on benchmark datasets such as BioASQ \cite{Tsatsaronis15}.
Producing such data is time-consuming and requires involving domain experts, making it an expensive process. As a result, high-quality biomedical QA datasets are a scarce resource.
The recently released CovidQA collection \cite{tang20}, the first manually curated dataset on COVID-19 related issues, provides only 127 question-answer pairs.
Even one of the largest available biomedical QA datasets, BioASQ, contains only a few thousand questions.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{bem_2_sentence_example-crop.pdf}
\caption{An excerpt of a sentence masked via the BEM strategy, where the masked words were chosen through a biomedical named entity recognizer. In contrast, BERT \cite{devlin19} would randomly select the words to be masked, without attention to the relevant concepts characterizing a technical domain.}
\label{fig:bem_sent_masked}
\end{figure}
There have been attempts to fine-tune pre-trained large-scale language models
for general-purpose QA tasks \cite{rajpurkar16, liu19, raffel20} and then use them directly for biomedical QA.
Furthermore, there has also been increasing interest in developing domain-specific
language models, such as BioBERT \cite{Lee19} or RoBERTa-Biomed \cite{gururangan20}, leveraging the vast medical literature available.
While achieving state-of-the-art results on the QA task, these models come with a high computational cost: BioBERT needs ten days on eight GPUs to train \cite{Lee19}, making it prohibitive for
researchers with no access to massive computing resources.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.96\textwidth]{bem_diagram-crop.pdf}
\caption{A schematic representation of the main steps involved in fine-tuning masked language models for the QA task through the biomedical entity-aware masking (BEM) strategy.}
\label{fig:bem_diagram}
\end{figure*}
An alternative approach to incorporating external knowledge into pre-trained language models is to drive the LM to focus on pivotal entities characterising the domain at hand during the fine-tuning stage. Similar ideas were explored by~\citet{zhang19} and \citet{Sun20}, who proposed the ERNIE model. However, their adaptation strategy was designed to generally improve the LM representations rather than adapt the model to a particular
domain, and it requires additional objective functions and memory.
In this work we aim to enrich existing general-purpose LMs (e.g. BERT~\cite{devlin19}) with knowledge related to key medical concepts.
In addition, we want domain-specific
LMs (e.g. BioBERT) to re-encode the already acquired information around the medical entities of interest for a particular topic or theme
(e.g. literature relating to COVID-19).
Therefore, to facilitate further domain adaptation, we propose a simple yet unexplored approach based on a novel masking strategy to fine-tune a LM. Our approach introduces a \textit{biomedical entity-aware masking} (BEM) strategy encouraging masked language models (MLMs) to learn entity-centric knowledge (\textsection \ref{subsect:bem}). We first identify a set of entities characterising the domain at hand using a domain-specific
entity recogniser (SciSpacy \cite{neumann19}), and then employ a subset of those entities to drive the masking strategy while fine-tuning (Figure \ref{fig:bem_sent_masked}).
The resulting BEM strategy is
applicable to a vast variety of MLMs and does not require additional memory or components in the neural architectures. Experimental results show performance on a par with state-of-the-art models for biomedical QA tasks
(\textsection \ref{sec:exp_results}) on several biomedical QA datasets. A further qualitative assessment provides insight into how QA pairs benefit from the proposed approach.
\begin{table*}[!htb]
\centering
\scalebox{0.9}{
\begin{tabular*}{0.83\textwidth}{@{\extracolsep{\fill}}*{8}{>{\smallernormal}l}}
\toprule
\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{\textbf{CovidQA}} & \multicolumn{3}{c}{\textbf{BioASQ 7b}} \\ \cline{3-5} \cline{6-8}
& & P@1 & R@3 & MRR & SAcc & LAcc & MRR \\ \toprule
1 $\ \ $ & \textbf{BERT} & 0.081$^*$ & 0.117$^*$ & 0.159$^*$ & 0.012 & 0.032 & 0.027 \\
2 & $\ \ $ \texttt{+ BioASQ} & 0.125 & 0.177 & 0.206 & 0.226 & 0.317 & 0.262 \\
3 & $\ \ $ \texttt{+ STM + BioASQ} & 0.132 & 0.195 & 0.218 & 0.233 & 0.325 & 0.265 \\
4 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.145 & 0.278 & 0.269 & 0.241 & 0.341 & 0.288 \\ \hline
5 & \textbf{RoBERTa} & 0.068 & 0.115 & 0.122 & 0.023 & 0.041 & 0.036 \\
6 & $\ \ $ \texttt{+ BioASQ} & 0.106 & 0.155 & 0.178 & 0.278 & 0.324 & 0.294 \\
7 & $\ \ $ \texttt{+ STM + BioASQ} & 0.112 & 0.167 & 0.194 & 0.282 & 0.333 & 0.300 \\
8 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.125 & 0.198 & 0.236 & 0.323 & 0.374 & 0.325 \\ \hline
9 & \textbf{RoBERTa-Biomed} & 0.104 & 0.163 & 0.192 & 0.028 & 0.044 & 0.037 \\
10 & $\ \ $ \texttt{+ BioASQ} & 0.128 & 0.355 & 0.315 & 0.415 & 0.398 & 0.376 \\
11 & $\ \ $ \texttt{+ STM + BioASQ} & 0.136 & 0.364 & 0.321 & 0.423 & 0.410 & 0.397 \\
12 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.143 & 0.386 & 0.347 & \textbf{0.435} & 0.443 & 0.398 \\ \hline
13 & \textbf{BioBERT} & 0.097$^*$ & 0.142$^*$ & 0.170$^*$ & 0.031 & 0.046 & 0.039 \\
14 & $\ \ $ \texttt{+ BioASQ} & 0.166 & 0.419 & 0.348 & 0.410$^\dag$ & 0.474$^\dag$ & 0.409$^\dag$ \\
15 & $\ \ $ \texttt{+ STM + BioASQ} & 0.172 & 0.432 & 0.385 & 0.418 & 0.482 & 0.416 \\
16 & $\ \ $ \texttt{+ BEM + BioASQ} $\ \ \ $ & \textit{0.179}&\textbf{0.458} & \textit{0.391} & 0.421 & \textbf{0.497} & \textbf{0.434} \\ \hline
17 & \textbf{T5 LM} \\
18 & $\ \ $ \texttt{+ MS-MARCO} & \textbf{0.282}$^*$ & 0.404$^*$ & \textbf{0.415}$^*$ & --- & --- & --- \\
\bottomrule
\end{tabular*}}
\caption{Performance of language models on the CovidQA and BioASQ 7b1 dataset. Values referenced with {\small*} come from the \citet{tang20} work and with \dag $\,$ from \citet{Yoon20}.}
\vspace{-6pt}
\label{tb:qa_results}
\end{table*}
\section{BEM: A Biomedical Entity-Aware Masking Strategy}
The fundamental principle of a masked language model (MLM) is to generate word representations that can be used to predict the missing tokens of an input text.
While this general principle is adopted in the vast majority of MLMs, the particular way in which the tokens to be masked are chosen can vary considerably. We thus first analyse the random masking strategy adopted in BERT \cite{devlin19}, which has
inspired most existing approaches, and then introduce the biomedical entity-aware masking strategy used to fine-tune MLMs in the biomedical domain.
\noindent \paragraph{BERT Masking Strategy.}
The masking strategy adopted in BERT
randomly replaces a predefined proportion of words with a special \texttt{[MASK]} token, and the model is required to predict them.
In BERT, 15\% of the tokens are chosen uniformly at random; of these, 10\% are swapped with random tokens (thus resulting in an overall 1.5\% of the tokens being randomly swapped). This introduces a rather limited amount of noise, with the aim of making the predictions more robust to trivial associations between the masked tokens and the context.
Another 10\% of the selected tokens are kept unchanged, while the remaining 80\% are replaced with the \texttt{[MASK]} token.
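The 80\%/10\%/10\% rule above can be sketched as follows (toy vocabulary and token list; illustrative only):

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "of", "virus", "cell", "protein"]   # toy vocabulary

def bert_mask(tokens, p=0.15, rng=None):
    """BERT-style random masking: select a proportion p of tokens; of those,
    80% become [MASK], 10% become a random token, 10% are left unchanged.
    Returns the corrupted sequence and the positions/targets to predict."""
    rng = rng or random.Random(0)
    out, targets = list(tokens), {}
    for i in range(len(tokens)):
        if rng.random() < p:
            targets[i] = tokens[i]
            r = rng.random()
            if r < 0.8:
                out[i] = MASK                 # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)    # 10%: random token
            # remaining 10%: keep the original token
    return out, targets

tokens = ["tok%d" % i for i in range(1000)]
corrupted, targets = bert_mask(tokens)
```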
\paragraph{Biomedical Entity-Aware Masking Strategy}
\label{subsect:bem}
We describe an entity-aware masking strategy which only masks biomedical entities detected by a domain-specific
named entity recogniser (SciSpacy\footnote{https://scispacy.apps.allenai.org/}).
Compared to the random masking strategy
described above, which is used to pre-train masked language models,
the introduced entity-aware masking strategy is adopted to boost the fine-tuning process on biomedical documents. In this phase, rather than randomly choosing the tokens to be masked, we inform the model of the relevant tokens to pay attention to, and encourage the model
to refine its representations using the new surrounding context.
\paragraph{Replacing Strategy}
We decompose the BEM strategy into two steps: (1) \textit{recognition} and (2) \textit{sub-sampling and substitution}.
During the \textit{recognition} phase, a set of biomedical entities $\mathcal{E}$ is identified in advance over a training corpus.
Then, at the \textit{sub-sampling and substitution} stage, we first sample a proportion $\rho$ of the biomedical entities, obtaining $\mathcal{E_s} \subset \mathcal{E}$. The entity subset $\mathcal{E_s}$ is dynamically computed at batch time, in order to introduce a diverse and flexible spectrum of masked entities during training. For consistency, we use the same tokeniser for the documents $d_i$ in the batch and the entities $e_j \in \mathcal{E}$. We then substitute all the $k$ entity mentions $w_{e_j}^{k}$ in $d_i$ with the special token \texttt{[MASK]}, making sure that no consecutive entities are replaced.
The substitution takes place at batch time, making it a downstream process suitable for a wide variety of MLMs.
A diagram summarising the involved steps is shown in Figure \ref{fig:bem_diagram}.
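The two BEM steps can be sketched as follows, assuming for simplicity that entity mentions are single tokens (real mentions may span several word pieces) and using a hypothetical pre-recognised entity set in place of the SciSpacy output:

```python
import random

def bem_mask(tokens, entities, rho=0.5, rng=None):
    """Biomedical entity-aware masking: sub-sample a proportion `rho` of the
    pre-recognised entity set per batch, then replace every mention of a
    sampled entity with [MASK], skipping a mention whenever the previous
    token is already masked (no consecutive entity replacements)."""
    rng = rng or random.Random(0)
    k = max(1, int(rho * len(entities)))
    sampled = set(rng.sample(sorted(entities), k))   # per-batch entity subset E_s
    out = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in sampled and (i == 0 or out[i - 1] != "[MASK]"):
            out[i] = "[MASK]"
    return out

sent = "hypertension increases risk of severe covid infection".split()
masked = bem_mask(sent, {"hypertension", "covid", "infection"}, rho=1.0)
```

Note how "infection" is left unmasked because the preceding entity mention "covid" was already replaced.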
\begin{table*}[htbp]
\centering
\setlength\extrarowheight{2pt}
\resizebox{\linewidth}{!}{%
\begin{tabular}{cl}
\hline\hline
\textbf{BERT with STM} & \multicolumn{1}{c}{\textbf{BERT with BEM}} \\
\hline\hline
\multicolumn{2}{c}{\textit{What is the \textbf{OR }for severe infection in COVID-19 patients with hypertension?} } \\
\cmidrule(lr){1-2}
\multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{-} There were significant correlations between COVID-19 severity~\\and~[..],~diabetes [OR=2.67],~coronary heart~disease~[OR=2.85].\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{-} There were significant correlations between COVID-19 severity~\\and [..],~diabetes [OR=2.67],~coronary heart disease~[OR=2.85].\end{tabular} \\
\multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{-} Compared with the non-severe patient, the pooled odds ratio of\\hypertension, respiratory system disease, cardiovascular disease in\\severe patients were (OR 2.36, ..), (OR 2.46, ..) and (OR 3.42, ..).\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{-} Compared with the non-severe patient, the pooled odds ratio of\\hypertension, respiratory system disease, cardiovascular disease in\\severe patients were (OR 2.36, ..), (OR 2.46, ..) and (OR 3.42, ..).\end{tabular} \\
\cmidrule(r){1-2}
\multicolumn{2}{c}{\textit{What is the \textbf{HR }for severe infection in COVID-19 patients with hypertension?}} \\
\cmidrule(r){1-2}
- - - - & \begin{tabular}[c]{@{}l@{}}\textbf{-} After adjusting for age and smoking status, patients with COPD~\\(HR 2.681), diabetes (HR 1.59), and malignancy (HR 3.50) were \\ more likely to reach to the composite endpoints than those without.\end{tabular} \\
\cmidrule(r){1-2}
\multicolumn{2}{c}{\textit{What is the \textbf{RR }for severe infection in COVID-19 patients with hypertension?}} \\
\cmidrule(lr){1-2}
- - - - & \begin{tabular}[c]{@{}l@{}}\textbf{-} In univariate analyses, factors significantly associated with severe~\\COVID-19 were male sex (14 studies; pooled RR=1.70, ...),~hyper-\\tension (10 studies 2.74 ...),diabetes~(11 studies ...), and CVD (..).\end{tabular} \\
\bottomrule
\end{tabular}
}
\caption{Examples of questions and retrieved answers using BERT fine-tuned either with its original masking approach or with the biomedical entity-aware masking (BEM) strategy.}
\vspace{-7pt}
\label{tab:qa_pairs_codvidqa}
\end{table*}
\section{Evaluation Design}
\noindent\textbf{Biomedical Reading Comprehension}.
We represent a document as $d_i := (s_{0}^i,\ .\ .\ ,s_{j-1}^i)$
\ml{,} a sequence of sentences, \ml{in turn} defined as $s_j := (w_{0}^j,\ .\ .\ ,w_{k-1}^j)$, with $w_k$ a word occurring in $s_j$.
Given a question $q$, the task is to retrieve the span $w_{s}^j,\ .\ .\ ,w_{s+t}^j$ from a document $d_j$ that can answer the question.
We assume the extractive QA setting where the answer span to be extracted lies entirely within one, or more than one document $d_i$.
In addition, for consistency with \ml{the} CovidQA dataset and \ml{to compare with}
results in \citet{tang20}, we consider a further and sightly modified setting in which the task consists of retrieving the sentence $s_{j}^i$ that most likely contain\ml{s} the exact answer. This sentence level QA task mitigates the non-trivial ambiguities intrinsic to the definition of the exact span for an answer, an issue particularly relevant in the medical domain and well-know in the literature \cite{Voorhees99}\footnote{Consider, for instance, the following QA pair: \textit{``What is the incubation period of the virus?"}, \textit{``6.4 days (95\% 175 CI 5.3 to 7.6)"}, where a model returning just \textit{``6.4 days"} would be considered wrong.}.
\noindent\textbf{Datasets}. We assess the performance of the proposed masking strategies on two biomedical datasets: CovidQA and BioASQ.
\noindent\textbf{CovidQA} \cite{tang20} is a manually curated dataset based on the AI2's COVID-19 Open Research Dataset \cite{Wang20}. \ml{It} consists of 127 question-answer pairs with 27 questions and 85 unique \ml{related articles}
~\ek{This dataset is} too small for supervised training, but is a valuable resource for zero-shot evaluation to assess the unsupervised and transfer capability of models.
\noindent\textbf{BioASQ}~\cite{Tsatsaronis15} is one of the larger biomedical QA datasets available with over 2000 question-answer pairs. To use it within the extractive questions answering framework, we convert the questions into the SQuAD dataset format~\cite{rajpurkar16}, consisting of question-answer pairs and the corresponding \textit{passages,}
medical articles containing the answers or clues with a length varying from a sentence to a paragraph.
When multiple passages are available for a single question, we form additional question-context pairs combined subsequently in a postprocessing step to choose the answer with highest probability, similarly to \citet{Yoon20}.
For consistency with the CovidQA dataset, we report our evaluation exclusively on the factoid questions of the BioASQ 7b Phase B1.
\noindent\textbf{Baselines}. \ek{We use the following unsupervised neural models as baselines:} the out-of-the-box BERT \cite{devlin19} and RoBERTa \cite{liu19}, as well as their variants BioBERT \cite{Lee19} and RoBERTa-Biomed \cite{gururangan20} fine-tuned on medical and scientific corpora.
To highlight the impact of different fine-tuning strategies, we examine several configurations depending on the data and the masking strategy adopted.
We experiment using the BioASQ QA training pairs during the fine-tuning stage and denote the models using them with \texttt{+BioASQ}.
When we fine-tune the models on the corpus \ml{consisting}
of PubMed articles referred within the BioASQ and AI2's COVID-19 Open Research dataset, we compare two masking strategies denoted as \texttt{+STM} and \texttt{+BEM}, where \texttt{+STM} indicates the standard masking strategy of the model at hand and \texttt{+BEM} is our proposed strategy.
We additionally report the T5 \cite{raffel20} performance over
CovidQA, which constitutes the current state-of-the-art~\cite{tang20}\footnote{We attach supplementary results in Appx.~\ref{sec:appendix} on SQuAD (Tab. \ref{tb:qa_results_full}) and the \textit{perplexity} of MLMs when fine-tuned on the medical collection with different masking strategies (Fig.~\ref{fig:perplexity_lms})}.
\noindent\textbf{Metrics}. To facilitate comparisons, we adopt the same evaluation scores used in \citet{tang20} to assess the models on the CovidQA dataset, i.e. mean reciprocal rank (MRR), precision at rank one (P@1), and recall at rank three (R@3); similarly, for the BioASQ dataset, we use the strict accuracy (SAcc), lenient accuracy (LAcc) and MRR, the BioASQ challenge's official metrics.
\section{Experimental Results and Discussion}
\label{sec:exp_results}
We report the results on the QA tasks in Table \ref{tb:qa_results}.
Among the unsupervised models, BERT achieves slightly better performance than RoBERTa on CovidQA, yet the situation is reversed on
BioASQ (rows 1,5). The low precision of the two models (especially on the BioASQ dataset) confirms the difficulties in generalising to the biomedical domain.
Specialised language models such as RoBERTa-Biomed and BioBERT show a significant improvement on the CovidQA dataset, but a rather limited one on BioASQ (rows 9,13), highlighting the importance of having larger medical corpora to assess the
model's effectiveness.
A general boost in performance is \ek{shared} across models fine-tuned on the QA tasks, with a large benefit from the BioASQ QA. \ek{The performance gains obtained by the specialised models (BioBERT and RoBERTa-Biomed) suggest} the importance of transferring not only the domain knowledge but also the ability to perform the QA task itself (rows 9,10; 13,14).
A further fine-tuning step before the training over the QA pairs has been proven beneficial for all of the models. The BEM masking strategy has significantly amplified the model's
\ml{generalisability}, with an increased adaptation to the biomedical themes shown by the notable improvement in R@3 and MRR; with the R@3 outperforming the state-of-the-art results of T5 fine-tuned on MS-MARCO \cite{bajaj18} and proving the effectiveness of the BEM strategy.
Table \ref{tab:qa_pairs_codvidqa} reports questions from the CovidQA related to three statistical
\ml{indices} (i.e. Odds Ratio, Hazard Ratio and Relative Risk) to assess the risk of an event occurring in a group (e.g. infections or death). We notice that
\ml{even though the indices are mentioned as abbreviations, }BERT fine-tuned with the STM is able to retrieve sentences with the exact answer for just one of three questions.
\ml{By contrast}, BERT fine-tuned with the BEM strategy succeeds in retrieving at least one correct sentence for each question.
This example suggests the importance of placing the emphasis on the entities, which might be overlooked by LMs during the training process despite being available.
\section{Related Work}
Our work is closely related to two lines of research: the design of masking strategies for LMs and the development of specialized models for the biomedical domain.
\noindent \textbf{Masking strategies.} Building on top of the BERT's masking strategy \cite{devlin19}, a wide variety of approaches has been proposed \cite{liu19, Yang19, jiang20}.
A family of masking approaches aimed at leveraging entity and phrase occurrences in text. SpanBERT, \citet{Joshi20} proposed to mask and predict whole spans rather than standalone tokens and to make use of an auxiliary objective function.
ERNIE \cite{zhang19} is instead developed to mask well-known named entities and phrases to improve the external knowledge encoded. Similarly, KnowBERT \cite{peters19} explicitly model entity spans and use an entity linker to an external knowledge base to form knowledge enhanced entity-span representations.
However, despite the analogies with the BEM approach, the above masking strategies were designed to generally improve the LM representations rather than adapting them to particular domains, requiring additional objective functions and memory.
\noindent \textbf{Biomedical LMs.}
Particular attention has been devoted to the adaptation of LMs to the medical domain, with different corpora and tasks requiring tailored methodologies.
BioBERT \cite{Lee19} is a biomedical language model based on BERT-\textit{Base} with additional pre-training on biomedical documents from the PubMed and PMC collections using the same training settings adopted in BERT.
BioMed-RoBERTa \cite{gururangan20} is instead based on RoBERTa-\textit{Base} \cite{liu19} using a corpus of 2.27M articles from the Semantic Scholar dataset \cite{ammar18}.
SciBERT \cite{beltagy19} follows the BERT's masking strategy to pre-train the model from scratch using a scientific corpus composed of papers from Semantic Scholar \cite{ammar18}. Out of the 1.14M papers used, more than $80\%$ belong to the biomedical domain.
\section{Conclusion}
We presented BEM, a biomedical entity-aware masking strategy to boost LM adaptation to low-resource biomedical QA. It uses an entity-drive\ml{n} masking strategy to fine-tune LMs and effectively lead them in learning entity-centric knowledge based on the pivotal entities characterizing the domain at hand. Experimental results have shown the benefits of such an approach on several metrics for biomedical QA tasks.
\section*{Acknowledgements}
This work is funded by the EPSRC (grant no. EP/T017112/1, EP/V048597/1). YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (UKRI) (grant no. EP/V020579/1).
\section{Introduction}
Biomedical question-answering (QA) aims to provide users with succinct answers to their queries by analysing a large-scale scientific literature.
It enables clinicians, public health officials and end-users to quickly access the rapid flow of specialised knowledge continuously produced.
This has led the research community's effort towards developing specialised models and tools for biomedical QA and assessing their performance on benchmark datasets such as BioASQ \cite{Tsatsaronis15}.
Producing such data is time-consuming and requires involving domain experts, making it an expensive process. As a result, high-quality biomedical QA datasets are a scarce resource.
The recently released CovidQA collection \cite{tang20}, the first manually curated dataset about COVID-19 related issues, provides only 127 question-answer pairs.
Even one of the largest available biomedical QA datasets, BioASQ, only contains a few thousand
questions.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{bem_2_sentence_example-crop.pdf}
\caption{An excerpt of a sentence masked via the BEM strategy, where the masked words were chosen through a biomedical named entity recognizer. In contrast, BERT \cite{devlin19} would randomly select the words to be masked, without attention to the relevant concepts characterizing a technical domain.}
\label{fig:bem_sent_masked}
\end{figure}
There have been attempts to fine-tune pre-trained large-scale language models
for general-purpose QA tasks \cite{rajpurkar16, liu19, raffel20} and then use them directly for biomedical QA.
Furthermore, there has also been increasing interest in developing domain-specific
language models, such as BioBERT \cite{Lee19} or RoBERTa-Biomed \cite{gururangan20}, leveraging the vast medical literature available.
While achieving state-of-the-art results on the QA task, these models come with a high computational cost: BioBERT needs ten days on eight GPUs to train \cite{Lee19}, making it prohibitive for
researchers with no access to massive computing resources.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.96\textwidth]{bem_diagram-crop.pdf}
\caption{A schematic representation of the main steps involved in fine-tuning masked language models for the QA task through the biomedical entity-aware masking (BEM) strategy.}
\label{fig:bem_diagram}
\end{figure*}
An alternative approach to incorporating external knowledge into pre-trained language models is to drive the LM to focus on pivotal entities characterising the domain at hand during the fine-tuning stage. Similar ideas were explored in the works of \citet{zhang19} and \citet{Sun20}, which proposed the ERNIE model. However, their adaptation strategy was designed to generally improve the LM representations rather than adapting them to a particular
domain, requiring additional objective functions and memory.
In this work we aim to enrich existing general-purpose LMs (e.g. BERT~\cite{devlin19}) with knowledge related to key medical concepts.
In addition, we want domain-specific
LMs (e.g. BioBERT) to re-encode the already acquired information around the medical entities of interest for a particular topic or theme
(e.g. literature relating to COVID-19).
Therefore, to facilitate further domain adaptation, we propose a simple yet unexplored approach based on a novel masking strategy to fine-tune a LM. Our approach introduces a \textit{biomedical entity-aware masking} (BEM) strategy encouraging masked language models (MLMs) to learn entity-centric knowledge (\textsection \ref{subsect:bem}). We first identify a set of entities characterising the domain at hand using a domain-specific
entity recogniser (SciSpacy \cite{neumann19}), and then employ a subset of those entities to drive the masking strategy while fine-tuning (Figure \ref{fig:bem_sent_masked}).
The resulting BEM strategy is applicable to a vast variety of MLMs and does not require additional memory or components in the neural architectures. Experimental results show performance on a par with state-of-the-art models for biomedical QA tasks
(\textsection \ref{sec:exp_results}) on several biomedical QA datasets. A further qualitative assessment provides an insight into how QA pairs benefit from the proposed approach.
\begin{table*}[!htb]
\centering
\scalebox{0.9}{
\begin{tabular*}{0.83\textwidth}{@{\extracolsep{\fill}}*{8}{>{\smallernormal}l}}
\toprule
\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{\textbf{CovidQA}} & \multicolumn{3}{c}{\textbf{BioASQ 7b}} \\ \cline{3-5} \cline{6-8}
& & P@1 & R@3 & MRR & SAcc & LAcc & MRR \\ \toprule
1 $\ \ $ & \textbf{BERT} & 0.081$^*$ & 0.117$^*$ & 0.159$^*$ & 0.012 & 0.032 & 0.027 \\
2 & $\ \ $ \texttt{+ BioASQ} & 0.125 & 0.177 & 0.206 & 0.226 & 0.317 & 0.262 \\
3 & $\ \ $ \texttt{+ STM + BioASQ} & 0.132 & 0.195 & 0.218 & 0.233 & 0.325 & 0.265 \\
4 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.145 & 0.278 & 0.269 & 0.241 & 0.341 & 0.288 \\ \hline
5 & \textbf{RoBERTa} & 0.068 & 0.115 & 0.122 & 0.023 & 0.041 & 0.036 \\
6 & $\ \ $ \texttt{+ BioASQ} & 0.106 & 0.155 & 0.178 & 0.278 & 0.324 & 0.294 \\
7 & $\ \ $ \texttt{+ STM + BioASQ} & 0.112 & 0.167 & 0.194 & 0.282 & 0.333 & 0.300 \\
8 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.125 & 0.198 & 0.236 & 0.323 & 0.374 & 0.325 \\ \hline
9 & \textbf{RoBERTa-Biomed} & 0.104 & 0.163 & 0.192 & 0.028 & 0.044 & 0.037 \\
10 & $\ \ $ \texttt{+ BioASQ} & 0.128 & 0.355 & 0.315 & 0.415 & 0.398 & 0.376 \\
11 & $\ \ $ \texttt{+ STM + BioASQ} & 0.136 & 0.364 & 0.321 & 0.423 & 0.410 & 0.397 \\
12 & $\ \ $ \texttt{+ BEM + BioASQ} & 0.143 & 0.386 & 0.347 & \textbf{0.435} & 0.443 & 0.398 \\ \hline
13 & \textbf{BioBERT} & 0.097$^*$ & 0.142$^*$ & 0.170$^*$ & 0.031 & 0.046 & 0.039 \\
14 & $\ \ $ \texttt{+ BioASQ} & 0.166 & 0.419 & 0.348 & 0.410$^\dag$ & 0.474$^\dag$ & 0.409$^\dag$ \\
15 & $\ \ $ \texttt{+ STM + BioASQ} & 0.172 & 0.432 & 0.385 & 0.418 & 0.482 & 0.416 \\
16 & $\ \ $ \texttt{+ BEM + BioASQ} $\ \ \ $ & \textit{0.179}&\textbf{0.458} & \textit{0.391} & 0.421 & \textbf{0.497} & \textbf{0.434} \\ \hline
17 & \textbf{T5 LM} \\
18 & $\ \ $ \texttt{+ MS-MARCO} & \textbf{0.282}$^*$ & 0.404$^*$ & \textbf{0.415}$^*$ & --- & --- & --- \\
\bottomrule
\end{tabular*}}
\caption{Performance of language models on the CovidQA and BioASQ 7b1 dataset. Values referenced with {\small*} come from the \citet{tang20} work and with \dag $\,$ from \citet{Yoon20}.}
\vspace{-6pt}
\label{tb:qa_results}
\end{table*}
\section{BEM: A Biomedical Entity-Aware Masking Strategy}
The fundamental principle of a masked language model (MLM) is to generate word representations that can be used to predict the missing tokens of an input text.
While this general principle is adopted in the vast majority of MLMs, the particular way in which the tokens to be masked are chosen can vary considerably. We thus first analyse the random masking strategy adopted in BERT \cite{devlin19}, which has
inspired most of the existing approaches, and then introduce the biomedical entity-aware masking strategy used to fine-tune MLMs in the biomedical domain.
\noindent \paragraph{BERT Masking strategy.}
The masking strategy adopted in BERT randomly replaces a predefined proportion of words with a special \texttt{[MASK]} token, and the model is required to predict them.
In BERT, 15\% of tokens are chosen uniformly at random; of these, 80\% are replaced with the \texttt{[MASK]} token, 10\% are swapped with random tokens (resulting overall in 1.5\% of the tokens being randomly swapped), and the remaining 10\% are kept unchanged.
The random swaps introduce a rather limited amount of noise, with the aim of making the predictions more robust to trivial associations between the masked tokens and the context.
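As a rough illustration, the selection and 80/10/10 replacement rule above can be sketched as follows (a minimal sketch, not BERT's actual implementation; the toy vocabulary and whitespace tokenisation are illustrative assumptions):

```python
import random

MASK = "[MASK]"
# toy vocabulary for the 10% random-swap case (illustrative assumption)
VOCAB = ["cell", "virus", "protein", "risk", "study"]

def bert_mask(tokens, select_p=0.15, seed=0):
    """Sketch of BERT's masking: each token is selected with probability
    select_p; of the selected tokens, 80% become [MASK], 10% are swapped
    with a random vocabulary token, and 10% are kept unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    for i in range(len(out)):
        if rng.random() < select_p:
            r = rng.random()
            if r < 0.8:
                out[i] = MASK               # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)  # 10%: random swap
            # remaining 10%: token kept as-is, but still predicted
    return out

tokens = "the incubation period of the virus is six days".split()
masked = bert_mask(tokens)
```

Note that the tokens to mask are drawn independently of their content: nothing in this procedure favours domain-relevant words.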
\paragraph{Biomedical Entity-Aware Masking Strategy}
\label{subsect:bem}
We describe an entity-aware masking strategy which only masks biomedical entities detected by a domain-specific
named entity recogniser (SciSpacy\footnote{https://scispacy.apps.allenai.org/}).
While the random masking strategy described above is used to pre-train masked language models,
the introduced entity-aware masking strategy is adopted to boost the fine-tuning process on biomedical documents. In this phase, rather than randomly choosing the tokens to be masked, we inform the model of the relevant tokens to pay attention to, and encourage it
to refine its representations using the new surrounding context.
\paragraph{Replacing strategy}
We decompose the BEM strategy into two steps: (1) \textit{recognition} and (2) \textit{sub-sampling and substitution}.
During the \textit{recognition phase}, a set of biomedical entities $\mathcal{E}$ is identified in advance over a training corpus.
Then, at the \textit{sub-sampling and substitution} stage, we first sample a proportion $\rho$ of the biomedical entities, obtaining $\mathcal{E}_s \subset \mathcal{E}$. The resulting entity subset $\mathcal{E}_s$ is dynamically computed at batch time, in order to introduce a diverse and flexible spectrum of masked entities during training. For consistency, we use the same tokeniser for the documents $d_i$ in the batch and the entities $e_j \in \mathcal{E}$. We then substitute all the $k$ entity mentions $w_{e_j}^{k}$ in $d_i$ with the special token \texttt{[MASK]}, making sure that no consecutive entities are replaced.
The substitution takes place at batch time, making it a downstream process suitable for a wide range of MLMs.
A diagram summarising the steps involved is reported in Figure \ref{fig:bem_diagram}.
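The two steps above can be sketched as follows (a minimal sketch operating on whitespace tokens; in practice the entity set $\mathcal{E}$ comes from the SciSpacy recogniser and substitution operates on the model tokeniser's output, so the `entities` set, `rho` value and toy sentence here are illustrative assumptions):

```python
import random

MASK = "[MASK]"

def bem_mask(tokens, entities, rho=0.5, seed=0):
    """Sketch of the BEM sub-sampling and substitution step:
    sample a proportion rho of the pre-computed entity set E at batch
    time, then replace each mention of a sampled entity with [MASK],
    skipping any replacement that would mask two consecutive tokens."""
    rng = random.Random(seed)
    pool = sorted(entities)
    sampled = set(rng.sample(pool, max(1, int(rho * len(pool)))))
    out = list(tokens)
    for i, tok in enumerate(out):
        if tok in sampled and (i == 0 or out[i - 1] != MASK):
            out[i] = MASK
    return out

tokens = "hypertension raises the risk of severe covid infection".split()
masked = bem_mask(tokens, {"hypertension", "covid", "infection"}, rho=1.0)
# "infection" stays unmasked: it directly follows the masked "covid"
```

Re-sampling $\mathcal{E}_s$ per batch is what makes the set of masked entities vary over training rather than being fixed once.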
\begin{table*}[htbp]
\centering
\setlength\extrarowheight{2pt}
\resizebox{\linewidth}{!}{%
\begin{tabular}{cl}
\hline\hline
\textbf{BERT with STM} & \multicolumn{1}{c}{\textbf{BERT with BEM}} \\
\hline\hline
\multicolumn{2}{c}{\textit{What is the \textbf{OR }for severe infection in COVID-19 patients with hypertension?} } \\
\cmidrule(lr){1-2}
\multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{-} There were significant correlations between COVID-19 severity~\\and~[..],~diabetes [OR=2.67],~coronary heart~disease~[OR=2.85].\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{-} There were significant correlations between COVID-19 severity~\\and [..],~diabetes [OR=2.67],~coronary heart disease~[OR=2.85].\end{tabular} \\
\multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{-} Compared with the non-severe patient, the pooled odds ratio of\\hypertension, respiratory system disease, cardiovascular disease in\\severe patients were (OR 2.36, ..), (OR 2.46, ..) and (OR 3.42, ..).\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{-} Compared with the non-severe patient, the pooled odds ratio of\\hypertension, respiratory system disease, cardiovascular disease in\\severe patients were (OR 2.36, ..), (OR 2.46, ..) and (OR 3.42, ..).\end{tabular} \\
\cmidrule(r){1-2}
\multicolumn{2}{c}{\textit{What is the \textbf{HR }for severe infection in COVID-19 patients with hypertension?}} \\
\cmidrule(r){1-2}
- - - - & \begin{tabular}[c]{@{}l@{}}\textbf{-} After adjusting for age and smoking status, patients with COPD~\\(HR 2.681), diabetes (HR 1.59), and malignancy (HR 3.50) were \\ more likely to reach to the composite endpoints than those without.\end{tabular} \\
\cmidrule(r){1-2}
\multicolumn{2}{c}{\textit{What is the \textbf{RR }for severe infection in COVID-19 patients with hypertension?}} \\
\cmidrule(lr){1-2}
- - - - & \begin{tabular}[c]{@{}l@{}}\textbf{-} In univariate analyses, factors significantly associated with severe~\\COVID-19 were male sex (14 studies; pooled RR=1.70, ...),~hyper-\\tension (10 studies 2.74 ...),diabetes~(11 studies ...), and CVD (..).\end{tabular} \\
\bottomrule
\end{tabular}
}
\caption{Examples of questions and retrieved answers using BERT fine-tuned either with its original masking approach or with the biomedical entity-aware masking (BEM) strategy.}
\vspace{-7pt}
\label{tab:qa_pairs_codvidqa}
\end{table*}
\section{Evaluation Design}
\noindent\textbf{Biomedical Reading Comprehension}.
We represent a document as $d_i := (s_{0}^i, \dots, s_{j-1}^i)$, a sequence of sentences, in turn defined as $s_j := (w_{0}^j, \dots, w_{k-1}^j)$, with $w_k$ a word occurring in $s_j$.
Given a question $q$, the task is to retrieve the span $w_{s}^j, \dots, w_{s+t}^j$ from a document $d_i$ that can answer the question.
We assume the extractive QA setting, where the answer span to be extracted lies entirely within one or more documents $d_i$.
In addition, for consistency with the CovidQA dataset and to compare with the
results in \citet{tang20}, we consider a further, slightly modified setting in which the task consists of retrieving the sentence $s_{j}^i$ that most likely contains the exact answer. This sentence-level QA task mitigates the non-trivial ambiguities intrinsic to the definition of the exact span for an answer, an issue particularly relevant in the medical domain and well-known in the literature \cite{Voorhees99}\footnote{Consider, for instance, the following QA pair: \textit{``What is the incubation period of the virus?"}, \textit{``6.4 days (95\% CI 5.3 to 7.6)"}, where a model returning just \textit{``6.4 days"} would be considered wrong.}.
\noindent\textbf{Datasets}. We assess the performance of the proposed masking strategies on two biomedical datasets: CovidQA and BioASQ.
\noindent\textbf{CovidQA} \cite{tang20} is a manually curated dataset based on the AI2's COVID-19 Open Research Dataset \cite{Wang20}. It consists of 127 question-answer pairs with 27 questions and 85 unique related articles.
This dataset is too small for supervised training, but is a valuable resource for zero-shot evaluation to assess the unsupervised and transfer capability of models.
\noindent\textbf{BioASQ}~\cite{Tsatsaronis15} is one of the larger biomedical QA datasets available, with over 2000 question-answer pairs. To use it within the extractive question answering framework, we convert the questions into the SQuAD dataset format~\cite{rajpurkar16}, consisting of question-answer pairs and the corresponding \textit{passages}:
medical articles containing the answers or clues, with a length varying from a sentence to a paragraph.
When multiple passages are available for a single question, we form additional question-context pairs, which are subsequently combined in a postprocessing step to choose the answer with the highest probability, similarly to \citet{Yoon20}.
For consistency with the CovidQA dataset, we report our evaluation exclusively on the factoid questions of the BioASQ 7b Phase B1.
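The multi-passage aggregation step can be sketched as follows (the function name and per-passage probabilities are hypothetical; in practice the probabilities would come from the extractive QA model's span scores):

```python
def best_answer(per_passage_predictions):
    """Keep the highest-probability answer among those extracted
    independently from each (question, passage) pair."""
    return max(per_passage_predictions, key=lambda p: p[1])[0]

# hypothetical per-passage (answer, probability) pairs for one question
preds = [
    ("6.4 days", 0.42),
    ("5 to 7 days", 0.31),
    ("6.4 days (95% CI 5.3 to 7.6)", 0.63),
]
answer = best_answer(preds)
```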
\noindent\textbf{Baselines}. We use the following unsupervised neural models as baselines: the out-of-the-box BERT \cite{devlin19} and RoBERTa \cite{liu19}, as well as their variants BioBERT \cite{Lee19} and RoBERTa-Biomed \cite{gururangan20}, fine-tuned on medical and scientific corpora.
To highlight the impact of different fine-tuning strategies, we examine several configurations depending on the data and the masking strategy adopted.
We experiment using the BioASQ QA training pairs during the fine-tuning stage and denote the models using them with \texttt{+BioASQ}.
When we fine-tune the models on the corpus consisting
of PubMed articles referred to within the BioASQ and AI2's COVID-19 Open Research datasets, we compare two masking strategies, denoted \texttt{+STM} and \texttt{+BEM}, where \texttt{+STM} indicates the standard masking strategy of the model at hand and \texttt{+BEM} is our proposed strategy.
We additionally report the T5 \cite{raffel20} performance on
CovidQA, which constitutes the current state-of-the-art~\cite{tang20}\footnote{We attach supplementary results in Appx.~\ref{sec:appendix} on SQuAD (Tab. \ref{tb:qa_results_full}) and the \textit{perplexity} of MLMs when fine-tuned on the medical collection with different masking strategies (Fig.~\ref{fig:perplexity_lms}).}.
\noindent\textbf{Metrics}. To facilitate comparisons, we adopt the same evaluation scores used in \citet{tang20} to assess the models on the CovidQA dataset, i.e. mean reciprocal rank (MRR), precision at rank one (P@1), and recall at rank three (R@3); similarly, for the BioASQ dataset, we use the strict accuracy (SAcc), lenient accuracy (LAcc) and MRR, the BioASQ challenge's official metrics.
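For concreteness, the CovidQA scores can be computed as follows (a minimal sketch using common definitions of these metrics; the official evaluation scripts may differ in details such as tie handling or the exact R@3 normalisation):

```python
def mrr(queries):
    """Mean reciprocal rank over (ranking, gold_set) pairs."""
    total = 0.0
    for ranking, gold in queries:
        for rank, item in enumerate(ranking, start=1):
            if item in gold:
                total += 1.0 / rank
                break
    return total / len(queries)

def precision_at_1(queries):
    """Fraction of queries whose top-ranked item is a gold answer."""
    return sum(ranking[0] in gold for ranking, gold in queries) / len(queries)

def recall_at_3(queries):
    """Per-query fraction of gold answers found in the top three, averaged."""
    return sum(len(set(ranking[:3]) & gold) / len(gold)
               for ranking, gold in queries) / len(queries)

# two toy queries: each pairs a ranked candidate list with its gold set
queries = [(["s2", "s1", "s4"], {"s1"}),
           (["s7", "s9", "s3"], {"s7", "s3"})]
```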
\section{Experimental Results and Discussion}
\label{sec:exp_results}
We report the results on the QA tasks in Table \ref{tb:qa_results}.
Among the unsupervised models, BERT achieves slightly better performance than RoBERTa on CovidQA, yet the situation is reversed on
BioASQ (rows 1,5). The low precision of the two models (especially on the BioASQ dataset) confirms the difficulties in generalising to the biomedical domain.
Specialised language models such as RoBERTa-Biomed and BioBERT show a significant improvement on the CovidQA dataset, but a rather limited one on BioASQ (rows 9,13), highlighting the importance of having larger medical corpora to assess the
model's effectiveness.
A general boost in performance is shared across models fine-tuned on the QA tasks, with a large benefit from the BioASQ QA pairs. The performance gains obtained by the specialised models (BioBERT and RoBERTa-Biomed) suggest the importance of transferring not only the domain knowledge but also the ability to perform the QA task itself (rows 9,10; 13,14).
A further fine-tuning step before the training over the QA pairs proves beneficial for all of the models. The BEM masking strategy significantly amplifies the models'
generalisability, with an increased adaptation to the biomedical themes shown by the notable improvement in R@3 and MRR; the R@3 even outperforms the state-of-the-art results of T5 fine-tuned on MS-MARCO \cite{bajaj18}, proving the effectiveness of the BEM strategy.
Table \ref{tab:qa_pairs_codvidqa} reports questions from CovidQA related to three statistical
indices (i.e. Odds Ratio, Hazard Ratio and Relative Risk) used to assess the risk of an event occurring in a group (e.g. infection or death). We notice that,
even though the indices are mentioned as abbreviations, BERT fine-tuned with the STM is able to retrieve sentences with the exact answer for just one of the three questions.
By contrast, BERT fine-tuned with the BEM strategy succeeds in retrieving at least one correct sentence for each question.
This example suggests the importance of placing the emphasis on the entities, which might be overlooked by LMs during the training process despite being available.
\section{Related Work}
Our work is closely related to two lines of research: the design of masking strategies for LMs and the development of specialized models for the biomedical domain.
\noindent \textbf{Masking strategies.} Building on top of BERT's masking strategy \cite{devlin19}, a wide variety of approaches has been proposed \cite{liu19, Yang19, jiang20}.
One family of masking approaches aims at leveraging entity and phrase occurrences in text. With SpanBERT, \citet{Joshi20} proposed to mask and predict whole spans rather than standalone tokens, and to make use of an auxiliary objective function.
ERNIE \cite{zhang19} is instead developed to mask well-known named entities and phrases to improve the external knowledge encoded. Similarly, KnowBERT \cite{peters19} explicitly models entity spans and uses an entity linker to an external knowledge base to form knowledge-enhanced entity-span representations.
However, despite the analogies with the BEM approach, the above masking strategies were designed to generally improve the LM representations rather than adapting them to particular domains, requiring additional objective functions and memory.
\noindent \textbf{Biomedical LMs.}
Particular attention has been devoted to the adaptation of LMs to the medical domain, with different corpora and tasks requiring tailored methodologies.
BioBERT \cite{Lee19} is a biomedical language model based on BERT-\textit{Base} with additional pre-training on biomedical documents from the PubMed and PMC collections using the same training settings adopted in BERT.
BioMed-RoBERTa \cite{gururangan20} is instead based on RoBERTa-\textit{Base} \cite{liu19} using a corpus of 2.27M articles from the Semantic Scholar dataset \cite{ammar18}.
SciBERT \cite{beltagy19} follows BERT's masking strategy to pre-train the model from scratch on a scientific corpus composed of papers from Semantic Scholar \cite{ammar18}. Out of the 1.14M papers used, more than $80\%$ belong to the biomedical domain.
\section{Conclusion}
We presented BEM, a biomedical entity-aware masking strategy to boost LM adaptation to low-resource biomedical QA. It uses an entity-driven masking strategy to fine-tune LMs and effectively guide them in learning entity-centric knowledge based on the pivotal entities characterising the domain at hand. Experimental results have shown the benefits of such an approach on several metrics for biomedical QA tasks.
\section*{Acknowledgements}
This work is funded by the EPSRC (grant no. EP/T017112/1, EP/V048597/1). YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (UKRI) (grant no. EP/V020579/1).
\section{Introduction} \label{sec:intro}
Circumstellar disks around newly born stars evolve into protoplanetary disks (PPDs), where planets are formed.
Recent interferometric observations revealed
directly the shape and the chemical structure
of PPDs \citep{Walsh:2015, Ansdell:2016, Johnston:2020}.
Near-infrared observations also show that the disk fraction in a star-forming region decreases with the age of the central star, suggesting that PPDs disappear in 3--6 Myr \citep[e.g.,][]{Haisch:2001, Meyer:2007, Hernandez:2007, Mamajek:2009, Bayo:2012, Ribas:2014}.
Photoevaporation is suggested to be an important disk dispersal mechanism.
High-energy photons such as far-ultraviolet (FUV; $6{\rm \, eV} \lesssim h\nu < 13.6{\rm \, eV} $), extreme-ultraviolet (EUV; $13.6{\rm \, eV} \lesssim h\nu \lesssim 100{\rm \, eV}$), and X-ray ($100{\rm \, eV} \lesssim h\nu \lesssim 10{\rm \, keV}$) emitted from the central star
heat a large portion of a PPD through various
radiative and photo-chemical processes.
It is generally found that EUV photons are mainly absorbed by abundant neutral hydrogen close to the central star, while FUV and X-ray photons penetrate deep into the disk, to drive dense photoevaporative flows \citep[e.g.,][]{RichlingYorke:2000, Ercolano:2009, GortiHollenbach:2009}.
FUV photons heat the gas through the photoelectric effect on dust grains in the disk, whereas EUV and X-ray photons do so through ionization of hydrogen, helium, and other heavy elements.
These processes altogether raise the gas temperature of the disk surface and generate photoevaporative flows.
PPDs typically have a complicated, multi-layer structure
in which various chemical species exist in different regions.
It has been suggested by both observations and
numerical simulations that \ce{H2} molecules in the neutral regions absorb the radiation from the central star and affect the observed spectral energy distribution \citep{Herczeg:2004, Nomura:2005, Schindhelm:2012}.
The so-called UV pumping of \ce{H2} molecules (\ce{H2} pumping hereafter) can be an important heating process in the region where a substantial amount of \ce{H2} molecules are continuously formed and destroyed.
The net heating process can affect the thermal structure of a photoevaporating disk with a sufficiently high FUV \citep{WangGoodman:2017}.
\ce{H2} pumping is a two-step process that Lyman-Werner photons (LW photons; $11.2{\rm \, eV} \lesssim h\nu < 13.6{\rm \, eV} $) excite \ce{H2} molecules to the electronically excited state followed by de-excitation to the vibrationally excited state of the ground electronic state \citep{TielensHollenbach:1985}.
Collisional de-excitation to the vibrationally ground state
results in heating the gas by distributing the de-excitation energy
to the surrounding atoms and molecules.
Note that, while the FUV photoelectric heating rate roughly scales with the local amount of small grains, \ce{H2} pumping may operate even in regions with little dust content.
The disk dust properties are characterised by the dust-to-gas mass ratio and the size distribution.
Since these quantities may be inhomogeneous in time and in space within the disk, the relative importance of the heating processes can also significantly change according to the disk evolution.
Dust grains undergo physical processing during the disk evolution.
Numerical calculations show that small dust grains are entrained by photoevaporative flows \citep{Hutchison:2016}.
Dust settling may occur in PPDs, as inferred from
infrared observations and SED modelling \citep{DAlessio:1999,DAlessio:2006,Grant:2018}.
Grown dust can move inward by radial drift, likely making the dust disk radius apparently smaller than the gas disk radius, as suggested by ALMA observations \citep{DeGregorioMonsalvo:2013,Ansdell:2018}.
Millimeter observations suggest that the dust mass decreases as the system age increases \citep{Mathews:2012, Ansdell:2016, Pascucci:2016, Ansdell:2017}.
Dust sedimentation can reduce the local dust-to-gas mass ratio of the disk surface significantly.
The total dust-to-gas mass ratio of a disk likely varies through its evolutionary phases.
Dust growth changes the total surface area of grains. It strongly affects the disk opacity, photoelectric heating, dust-gas collisions, and the formation of molecules, which means that dust growth also affects photoevaporation and disk lifetimes \citep{Gorti:2015}.
Although the actual dust-to-gas mass ratio is uncertain, it therefore appears worthwhile to study the evolution of PPDs over a wide range of dust-to-gas mass ratios.
In this paper, we perform hydrodynamics simulations to study the disk photoevaporation and the dispersal.
\ce{H2} pumping is incorporated to closely examine the effect of \ce{H2} pumping on the disk temperature.
We vary the dust-to-gas mass ratio of the disk, $\mathcal{D}$, in the range of $10^{-8}$--$10^{-1}$ to study how the disk thermochemical distribution changes.
Finally, we examine if \ce{H2} pumping is an important heating process around a massive star.
We increase the stellar mass from $M_*=1{\rm \, M_\odot}$ to $M_*=7{\rm \, M_\odot}$ and also increase the central luminosity at the same time.
We explain our methods in Section 2, present our results in Section 3, and discuss them in Section 4.
Section 5 gives a summary.
\section{Numerical simulations}
We perform disk photoevaporation simulations including hydrodynamics, radiative transfer and non-equilibrium chemistry in a coupled and self-consistent manner.
We use the open source code PLUTO \citep{Mignone:2007}
suitably modified for simulations of PPDs.
The details of the implemented physics are found in \citet{Nakatani:2018a,Nakatani:2018b} and \citet{Komaki:2021}.
We assume the disk is axisymmetric around the rotational axis and adopt the 2D spherical coordinates ($r, \theta$).
We consider three components of the gas velocity, $\bm{v}=(v_r, v_{\theta}, v_{\phi})$.
We first run our fiducial case, where the stellar mass $M_*=1{\rm \, M_\odot}$ and $\mathcal{D} = 10^{-2}$, with a detailed treatment of \ce{H2} pumping.
In the case of $\mathcal{D}=10^{-2}$, we calculate the dust temperature in the same way as \cite{Komaki:2021}, where we calculate and tabulate the dust temperature as a function of the distance from the central star and the column density, based on the results of more costly radiative transfer calculations.
In our runs with other dust-to-gas mass ratios, we solve the dust temperature in a self-consistent manner
by incorporating both the direct and diffuse radiation.
We perform ray-tracing for the direct component and adopt the flux-limited-diffusion (FLD) approximation for the diffused component \citep{Kuiper:2010, KuiperKlessen:2013, Kuiper:2020}.
We incorporate EUV/X-ray photoionization heating, FUV photoelectric heating \citep{BakesTielens:1994}, and heating by \ce{H2} photodissociation and by \ce{H2} pumping.
We also incorporate dust-gas collisional cooling \citep{YorkeWelz:1996}, fine-structure cooling of \ion{C}{2} and \ion{O}{1} \citep{HollenbachMcKee:1989, Osterbrock:1989, Santoro:2006}, molecular line cooling of $\ce{H2}$ and $\ce{CO}$ \citep{Galli:1998, Omukai:2010}, hydrogen Lyman $\alpha$ line cooling \citep{Anninos:1997}, and radiative recombination cooling \citep{Spitzer:1978} as cooling sources.
The chemical reaction network follows the abundances of eleven chemical species: \ion{H}{1}, \ion{H}{2}, \ce{H-}, \ion{He}{1}, \ce{H2}, \ce{H2+}, \ce{H2^*}, \ce{CO}, \ion{O}{1}, \ion{C}{2} and electrons in the run with $M_*=1{\rm \, M_\odot}$, $\mathcal{D}=10^{-2}$.
We denote the vibrationally excited state of \ce{H2} ($v = 6$) as \ce{H2^*}
and treat it as a distinct chemical species \citep{TielensHollenbach:1985}.
\ce{H2} has hundreds of excited states, but the pseudo-level of $v=6$ approximately represents the excited levels altogether, which then reduces the computational cost significantly.
We also use the pseudo-level of \ce{H2^*} to study the effect of \ce{H2} pumping on the disk thermal structure.
We assume that \ion{C}{1} is quickly ionized to become \ion{C}{2} after the dissociation of \ce{CO} \citep{NelsonLanger:1997, RichlingYorke:2000}.
\ce{H2} pumping is the process where LW photons excite \ce{H2} molecule to the electronically excited state and then de-excite to a vibrationally excited state of the ground electronic state \citep{TielensHollenbach:1985}.
In the run with $M_*=1{\rm \, M_\odot}$, $\mathcal{D}=10^{-2}$, we incorporate \ce{H2^*} to calculate both the excitation rate and the de-excitation rate.
We calculate the \ce{H2} pumping rate, $R_{\rm{pump}}$ as
\[
R_{\rm{pump}}=3.4\times10^{-10}\beta G_0 e^{-2.5A_{\rm{v}}}{\rm \, s}^{-1},
\]
following \cite{TielensHollenbach:1985}.
We define $\beta$, $G_0$ and $A_{\rm{v}}$ as the shielding factor, the intensity of the incident FUV field and the visual extinction respectively.
We determine the shielding factor by \ce{H2} following \cite{DraineBertordi:1996}.
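For concreteness, the pumping rate above can be evaluated directly. The values of $\beta$, $G_0$, and $A_{\rm v}$ in the sketch below are illustrative placeholders, not inputs of our simulations:

```python
import math

def pumping_rate(G0, beta=1.0, Av=0.0):
    """FUV pumping rate of H2 per molecule [s^-1],
    following Tielens & Hollenbach (1985)."""
    return 3.4e-10 * beta * G0 * math.exp(-2.5 * Av)

# Sample values (illustrative only): an unattenuated field G0 = 1
# at the disk surface, with no self-shielding.
print(pumping_rate(G0=1.0))          # 3.4e-10 s^-1
print(pumping_rate(G0=1.0, Av=1.0))  # suppressed by e^{-2.5}
```

The rate scales linearly with the attenuated FUV field, so the pumping region closely tracks the penetration depth of LW photons.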
The vibrationally excited \ce{H2} molecules de-excite and fluoresce.
We incorporate the collisional de-excitation by \ion{H}{1} or \ce{H2} as the de-excitation reactions.
We also consider the direct dissociation of \ce{H2^*} by FUV radiation and by spontaneous radiation \citep{TielensHollenbach:1985}.
We define $k_{\rm{de}}(\rm{H})$, $k_{\rm{de}}(\rm{H_2})$, $R_{\rm{de}}$, and $A(\rm{H_2^*})$ as the rates of collisional de-excitation by \ion{H}{1}, collisional de-excitation by \ce{H2}, direct dissociation by FUV radiation, and spontaneous radiative decay, respectively.
The reaction rates are written as
\[
\begin{split}
k_{\rm{de}}(\rm{H}) &\simeq 1.8\times10^{-13}{\rm \, cm}^3{\rm \, s}^{-1}\\
&\times\left(\frac{T_{\rm{gas}}}{{\rm \,K}}\right)^{1/2}\exp\left(-\frac{1000{\rm \,K}}{T_{\rm{gas}}}\right)\\
k_{\rm{de}}(\rm{H_2}) &\simeq 2.3\times10^{-13}{\rm \, cm}^3{\rm \, s}^{-1}\\
&\times\left(\frac{T_{\rm{gas}}}{{\rm \,K}}\right)^{1/2}\exp\left(-\frac{1800{\rm \,K}}{(T_{\rm{gas}}+1200{\rm \,K})}\right)\\
R_{\rm{de}} &= 10^{-11}\beta G_0 e^{-2.5A_{\rm{v}}}{\rm \, s}^{-1}\\
A(\rm{H_2^*}) &= 2.0\times10^{-7}{\rm \, s}^{-1},
\end{split}
\]
where $T_{\rm{gas}}$ represents the gas temperature.
We follow \cite{WangGoodman:2017} to obtain reaction rates.
Each collisional de-excitation process deposits $2.6{\rm \, eV}$ and each de-excitation by direct FUV radiation deposits $0.4{\rm \, eV}$ \citep{TielensHollenbach:1985}.
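A minimal sketch of the de-excitation branching and the resulting energy deposition per excited molecule, using the rates above; the sample density and temperature are illustrative, not taken from the simulations:

```python
import math

def k_de_H(T):   # collisional de-excitation by H I [cm^3 s^-1]
    return 1.8e-13 * math.sqrt(T) * math.exp(-1000.0 / T)

def k_de_H2(T):  # collisional de-excitation by H2 [cm^3 s^-1]
    return 2.3e-13 * math.sqrt(T) * math.exp(-1800.0 / (T + 1200.0))

A_H2STAR = 2.0e-7   # spontaneous radiative decay [s^-1]
EV = 1.602e-12      # erg

def heating_per_H2star(T, n_H, n_H2, R_de=0.0):
    """Mean energy deposition rate per excited molecule [erg s^-1]:
    2.6 eV per collisional de-excitation, 0.4 eV per FUV dissociation,
    and nothing for spontaneous (radiated) decay."""
    coll = k_de_H(T) * n_H + k_de_H2(T) * n_H2
    return coll * 2.6 * EV + R_de * 0.4 * EV

# Illustrative dense inner region (T = 3000 K, n_H = 1e6 cm^-3):
# collisions beat spontaneous decay, so pumping heats efficiently.
print(k_de_H(3000.0) * 1e6 > A_H2STAR)   # True
```

Whether the deposited 2.6 eV actually heats the gas therefore hinges on the collision rate exceeding $A(\rm{H_2^*})$, which is the origin of the critical density discussed below.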
We explicitly treat \ce{H2^*} as a distinct chemical species in the fiducial run where $M_{*}=1{\rm \, M_\odot}$
and $\mathcal{D}=10^{-2}$.
For the other runs, we calculate \ce{H2} pumping without explicitly including the excited molecular hydrogen, as in \citet{Rollig:2006}, \citet{Gressel:2020} and \citet{Nakatani:2021}:
\begin{equation}
\frac{{\rm d}e}{{\rm d}t}= n_{\ce{H2}}\frac{\chi P_{\rm{tot}}\Delta E_{\rm{eff}}}{1+[A_{\rm{eff}}+\chi D_{\rm{eff}}]/[\gamma_{\rm{eff}}n]},
\label{eq:heat_without_H2star}
\end{equation}
where $\chi$ is the FUV radiation intensity in units of the local interstellar radiation field, $P_{\rm{tot}}$ and $n$ are the formation rate of \ce{H2^*} and the particle number density, respectively.
The coefficients $\Delta E_{\rm{eff}},A_{\rm{eff}}, D_{\rm{eff}},\gamma_{\rm{eff}}$ express the average excitation energy, the effective spontaneous emission rate, the effective dissociation rate, and the effective collisional de-excitation rate assuming that molecular hydrogen has two levels, the ground state and the pseudo-excited level.
We set $P_{\rm{tot}}\Delta E_{\rm{eff}}=9.4\times10^{-22}{\rm \, erg}{\rm \, s}^{-1}$, $A_{\rm{eff}}=1.9\times10^{-6}{\rm \, s}^{-1}$, $D_{\rm{eff}}=4.7\times10^{-10}{\rm \, s}^{-1}$ and $\gamma_{\rm{eff}}=5.4\times10^{-13}\sqrt{T_{\rm{gas}}}{\rm \, cm}^{3}{\rm \, s}^{-1}$ following \citet{Rollig:2006} and \citet{Gressel:2020}.
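Equation (\ref{eq:heat_without_H2star}) with these coefficients can be sketched as follows; the density, temperature, and FUV intensity below are illustrative sample values only:

```python
import math

# Effective two-level coefficients (Rollig et al. 2006; Gressel et al. 2020):
PTOT_DE = 9.4e-22   # P_tot * Delta E_eff [erg s^-1]
A_EFF   = 1.9e-6    # effective spontaneous emission rate [s^-1]
D_EFF   = 4.7e-10   # effective dissociation rate [s^-1]

def gamma_eff(T):
    """Effective collisional de-excitation rate coefficient [cm^3 s^-1]."""
    return 5.4e-13 * math.sqrt(T)

def pumping_heating(n_H2, n, chi, T):
    """Volumetric H2-pumping heating rate de/dt [erg cm^-3 s^-1]
    without tracking H2* explicitly (Equation 1)."""
    return n_H2 * chi * PTOT_DE / (1.0 + (A_EFF + chi * D_EFF) / (gamma_eff(T) * n))

# In the high-density limit the bracket tends to 1, so de/dt approaches
# the maximal value n_H2 * chi * PTOT_DE:
dense = pumping_heating(n_H2=1e8, n=1e12, chi=100.0, T=1000.0)
print(dense / (1e8 * 100.0 * PTOT_DE))   # ~ 1
```

At low density the denominator suppresses the heating, reproducing the behavior of the explicit \ce{H2^*} treatment where spontaneous decay radiates the excitation energy away.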
We have checked that using this formula does not change the resulting mass-loss profiles.
Using this formula reduces the computational time by a factor of $\sim1.5$ compared to the run that explicitly tracks \ce{H2^*}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\linewidth,clip]{Snapshot_Z-0_FEXM1_withpump.pdf}
\includegraphics[width=\linewidth,clip]{Snapshot_Z-0_FEXM1_chemistry_withpump.pdf}
\caption{(top) The time-averaged disk structure of the run incorporating \ce{H2} pumping.
The color map on the left portion shows the gas temperature, $T_{\rm{gas}}$, and the right portion shows the density distribution, $\rho$. The white line shows the location of the base, which satisfies $N_{\rm{H_2}}=10^{20}{\rm \, cm}^{-2}$. The navy contour lines on the left side represent where the gas temperature is $100{\rm \,K}$, $300{\rm \,K}$, $1000{\rm \,K}$, and $3000{\rm \,K}$. The streamline represents the poloidal velocity of the gas, which is defined as $v_p=\sqrt{v_r^2+v_{\theta}^2}$.
For clarity, we do not plot the streamlines where the velocity is less than $0.1{\rm km\,s^{-1}}$, which are mostly seen in the optically thick disk region.
(bottom) The time-averaged disk chemical structure of the run incorporating \ce{H2} pumping. The color map on the left portion shows the specific heating rate for \ce{H2} pumping. The right portion shows the abundance of \ce{H2} and the distribution of H-bearing species. The yellow line on the right side represents the ionization front where the abundance of \ion{H}{2} is 0.5, and the brown line indicates the dissociation front where the abundance of \ce{H2} is 0.25.
}
\label{fig:snapshots}
\end{figure*}
The size distribution of dust grains is assumed to follow ${\rm d} n(a)\propto a^{-3.5} {\rm d}a$ with $a$ denoting dust radii \citep{DraineLee:1984}.
\cite{BakesTielens:1994} show that small grains are crucial for efficient photoelectric heating.
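The dominance of small grains can be seen by integrating the size distribution: the geometric cross-section per unit dust mass is set by the small end. The size limits below are assumed MRN-like values for illustration, not parameters of our model:

```python
import math

def area_to_mass_ratio(a_min, a_max, p=3.5):
    """Relative geometric cross-section per unit dust mass for a power-law
    size distribution dn ~ a^{-p} da (constant prefactors dropped)."""
    area = (a_min**(3.0 - p) - a_max**(3.0 - p)) / (p - 3.0)   # ~ int a^2 a^-p da
    mass = (a_max**(4.0 - p) - a_min**(4.0 - p)) / (4.0 - p)   # ~ int a^3 a^-p da
    return area / mass

# For p = 3.5 and a_max >> a_min, area/mass ~ 1/sqrt(a_min * a_max):
# removing the smallest grains sharply reduces photoelectric heating.
r1 = area_to_mass_ratio(0.005e-4, 0.25e-4)  # a_min = 50 A, a_max = 0.25 um (cm)
r2 = area_to_mass_ratio(0.05e-4, 0.25e-4)   # grow a_min by a factor of 10
print(r1 / r2)   # close to sqrt(10) ~ 3.2
```

Growing the minimum grain size by a factor of ten thus cuts the heating surface area per unit dust mass by about $\sqrt{10}$, even at fixed total dust mass.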
In practice, dust dynamics such as sedimentation, radial drift and/or turbulent mixing can change the dust distribution across the disk.
However, for simplicity, we set the dust-to-gas mass ratio to constant throughout the computational domain.
The underlying assumption is that the dust grains responsible for photoelectric heating are well mixed within the gas so that we treat the mixture of dust and gas as a single fluid.
We perform a set of simulations with different constant values of the dust-to-gas mass ratio.
Note that the dust amount near the disk surface, where photoevaporative flows are launched, critically determines the disk mass-loss rate.
We perform disk evolution simulations varying the dust-to-gas mass ratio of the disk, $\mathcal{D}$,
in the range of $10^{-8}$--$10^{-1}$ as a parameter to examine how the effective heating and cooling processes vary with $\mathcal{D}$.
In our simulations, the dust opacity, the photoelectric heating rate, the dust-gas collisional cooling rate, and the rate of \ce{H2} formation catalyzed by grains are scaled with $(\mathcal{D}/0.01)$.
Our fiducial dust-to-gas mass ratio is $\mathcal{D}=10^{-2}$, which corresponds to the local ISM value.
The gas-phase elemental abundance of carbon is set to $y_{\rm{C}}=0.927\times10^{-4}$ and that of oxygen to $y_{\rm{O}}=3.568\times10^{-4}$ \citep{Pollack:1994}.
We assume $y_{\rm C}$ and $y_{\rm O}$ to be constant even when we set different $\mathcal{D}$.
We take into account FUV, EUV and X-ray radiations as the high-energy radiation from the central star.
As in our previous paper \citet{Komaki:2021}, we choose the EUV photon emission rate and the FUV and X-ray luminosities, $\phi_{\rm{EUV}}$, $L_{\rm{FUV}}$, and $L_{\rm{X}}$, following Table~1 in \cite{GortiHollenbach:2009}, which lists typical luminosities of $1{\,\rm Myr}$ pre-main-sequence stars.
The computational domain covers the polar angle range [0, $\pi /2$] and the radial range [$0.1r_{\rm g}$, $20r_{\rm g}$].
The gravitational radius $r_{\rm g}$ for a fully ionized gas with $T_{\rm gas} = 10^4{\rm \,K}$
is
\[
r_{\rm g} = \frac{GM_*}{(10{\rm km\,s^{-1}})^2} \simeq 8.87 {\rm \,au} \left(\frac{M_*}{1{\rm \, M_\odot}}\right),
\]
where $G$ is the gravitational constant.
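The quoted value can be checked directly in cgs units:

```python
G    = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33    # solar mass [g]
AU   = 1.496e13    # astronomical unit [cm]

def r_g(M_star_msun, v=10e5):
    """Gravitational radius G M* / v^2 in au, for an ionized gas
    with a sound speed of ~10 km/s (v given in cm/s)."""
    return G * M_star_msun * MSUN / v**2 / AU

print(r_g(1.0))   # ~ 8.87 au
```

Since $r_{\rm g}$ scales linearly with $M_*$, the computational domain expands proportionally for the more massive stars considered later.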
The gravitational radius sets the estimate of the effective boundary where the photoevaporative flows can escape out of the potential well of the host star.
However, hydrodynamic simulations show that photoevaporative flows are also excited inside the gravitational radius, from the critical radius $\sim 0.2 r_{\rm g}$ \citep{Liffman:2003, Font:2004}.
Motivated by these findings, we set the computational range to include the region inside the gravitational radius.
We use $r_{\rm g}$ as a reference physical length scale in this paper.
We start our simulations setting the initial disk condition to be a 1 million year system, when the mass loss by accretion becomes lower \citep{Clarke:2001, Alexander:2006, Owen:2010, Suzuki:2016, Kunitomo:2020}.
We assume that the disk mass, $M_{\rm{disk}}$, is 3$\%$ of the central stellar mass \citep{AndrewsWilliams:2005}.
We assume the initial surface density profile, $\Sigma$, and the initial temperature distribution, $T_{\rm{ini}}$, following \cite{Komaki:2021}, to be
\[
\begin{split}
\Sigma (R) &= 27.1{\rm \, g}{\rm \, cm}^{-2}\times (R/r_{\rm g})^{-1}\\
T_{\rm{ini}} &= 100{\rm \,K}\times (R/0.1r_{\rm g})^{-1/2}.
\end{split}
\]
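As a consistency check, integrating this $\Sigma$ profile over the radial domain recovers the assumed disk mass of 3\% of the stellar mass (the sketch below uses the $M_*=1{\rm \, M_\odot}$ value of $r_{\rm g}$):

```python
import math

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13
r_g = 8.87 * AU   # gravitational radius for M* = 1 Msun [cm]

def sigma(R):
    """Initial surface density [g cm^-2]."""
    return 27.1 * (R / r_g)**-1

def t_ini(R):
    """Initial temperature [K]."""
    return 100.0 * (R / (0.1 * r_g))**-0.5

# With Sigma ~ R^-1, the mass integral is analytic:
# M_disk = int 2 pi R Sigma dR = 2 pi * 27.1 * r_g * (20 - 0.1) * r_g
M_disk = 2.0 * math.pi * 27.1 * r_g * (20.0 - 0.1) * r_g
print(M_disk / MSUN)   # ~ 0.03, i.e. 3% of a 1 Msun star
```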
We run the simulations for $8.4\times 10^{3}(M_*/{\rm \, M_\odot})$ yr.
This is 10 times the gas Keplerian rotational period around the central star at $r=r_{\rm g}$ and is long enough for the time-averaged mass-loss rate to converge.
\section{Results} \label{sec:results}
In this section, we present the results of our simulations.
We first describe the effect of \ce{H2} pumping on the temperature and the chemical structure in the run with $M_*=1{\rm \, M_\odot}$ and $\mathcal{D}=10^{-2}$ in \secref{sec:result1}.
We then calculate the mass-loss rate of this run in \secref{sec:result2}.
Finally in \secref{sec:result4}, we describe how the disk thermal structure changes when varying the dust-to-gas mass ratio.
\subsection{Thermochemical Structure}\label{sec:result1}
\figref{fig:snapshots} shows the time-averaged snapshot of the simulation with $M_*=1{\rm \, M_\odot}$ and $\mathcal{D}=10^{-2}$ and with incorporating \ce{H2^*} and \ce{H2} pumping.
To make this plot, we average the outputs from $840{\,\rm yr}$ to $8400{\,\rm yr}$ to avoid the initial transient and to smooth out temporally fluctuating features.
Photoevaporative flows are launched from the surface of the disk, where the column density satisfies $N_{\rm{H_2}}\approx10^{20}{\rm \, cm}^{-2}$
(white line in the top panel).
We define this surface as the base of the photoevaporative flows.
EUV radiation heats the gas near the central star and at high latitudes,
shaping the polar \ion{H}{2} region where $y_{\rm{HII}}>0.5$.
The outgoing gas cools by adiabatic cooling
and has a temperature of $T_{\rm{gas}}\sim3000{\rm \,K}$.
The bottom portion of \figref{fig:snapshots} shows the \ce{H2} pumping heating rate $\Gamma_{{\rm H}_2^*}$ and also the molecular fraction $y_{\rm H2}$.
The \ce{H2^*} is most abundant near the dissociation front
with a number fraction $\sim0.04$, where $\Gamma_{{\rm H}_2^*}$
reaches $\sim10^{4}{\rm \, erg}{\rm \, s}^{-1}{\rm \, g}^{-1}$.
We note that the \ce{H2} dissociation front is located at
a lower polar angle than the base.
In the dense inner region at $r\lesssim 30{\rm \,au}$,
\ce{H2^*} molecules get de-excited mainly by collisions with \ion{H}{1}.
There, the gas is largely heated by this \ce{H2} pumping process.
The critical density for \ce{H2^*} spontaneous radiation is $\rho_{\rm c} \simeq5.0\times10^{-20}{\rm \, g}{\rm \, cm}^{-3}$ assuming $T_{\rm{gas}}=3000{\rm \,K}$.
In our simulation, the density at $r\gtrsim 30{\rm \,au}$ is lower than
$\rho_{\rm c}$, and thus
the spontaneous radiation primarily de-excites the \ce{H2^*} molecules there, and the resulting gas heating rate is small.
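The quoted critical density follows from equating the collisional de-excitation rate with the spontaneous decay rate; the sketch below assumes de-excitation dominated by collisions with atomic hydrogen and a mean particle mass of one hydrogen atom:

```python
import math

A_H2STAR = 2.0e-7     # spontaneous decay rate of H2* [s^-1]
M_H      = 1.673e-24  # hydrogen atom mass [g]

def k_de_H(T):
    """Collisional de-excitation by H I [cm^3 s^-1]."""
    return 1.8e-13 * math.sqrt(T) * math.exp(-1000.0 / T)

# Critical density: k_de(H) * n_crit = A(H2*).
T = 3000.0
n_crit   = A_H2STAR / k_de_H(T)   # [cm^-3]
rho_crit = n_crit * M_H           # atomic-H gas assumed
print(rho_crit)   # ~ 5e-20 g cm^-3
```

Above this density the 2.6 eV per de-excitation is deposited as heat; below it the excitation energy escapes as fluorescence.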
We note that, in our simulations, the timescale for \ce{H2^*} de-excitation is shorter than the local hydrodynamical timescale by a factor of $\sim 10^{2}$,
so the \ce{H2^*} molecules almost immediately get de-excited at the location where they are excited by the incident radiation.
We have run an additional simulation that does not explicitly
treat \ce{H2^*} molecules. By comparing the outputs, we find that
the resulting temperature structure is almost identical in all regions. This result is expected from the shorter timescales of \ce{H2^*}-involved (photo)chemical processes than the local dynamical timescale.
We thus confirm that it is sufficient to implement the effect of \ce{H2} pumping by considering its heating rate
using Equation (\ref{eq:heat_without_H2star}).
\begin{figure*}
\centering
\includegraphics[width=\linewidth,clip]{HeatingCooling_DGMR.pdf}
\caption{The time-averaged specific heating and cooling rates
measured at the base in our runs with different $\mathcal{D}$. The solid lines show FUV photoelectric heating (orange), \ce{H2} photodissociation heating (yellow), \ce{H2} pumping heating (brown), and X-ray heating (magenta). The dotted lines are for dust-gas collisional cooling (purple), \ce{H2} cooling (cyan), and \ion{O}{1} cooling (green).}
\label{fig:HeatingCoolingDGMR}
\end{figure*}
\subsection{The Mass-loss Rate}\label{sec:result2}
The disk gas is heated by the radiation from the central star,
and photoevaporative flows are launched from the disk.
The disk mass-loss rate is determined by the heating process around the base.
In our fiducial run,
FUV photoelectric heating is the main heating source at the base,
and its rate is greater than the heating rate of \ce{H2} pumping by a factor of $\sim 10^3$ throughout the base.
The resulting temperature distribution is almost the same as in the run without \ce{H2} pumping at the base, indicating that the photoevaporative flows are driven mainly by FUV photoelectric heating.
We use the simulation outputs to calculate the mass-loss rate
\begin{equation}
\dot{M} = \int_{S, \eta >0} \rho \bm{v}\cdot d\bm{S}, \label{eq:massloss}
\end{equation}
where $d\bm{S}$ represents the spherical surface unit area at $r=100{\rm \,au}$.
We define $\eta$ as the Bernoulli function given by
\[
\eta = \frac{1}{2}v_{p}^{2}+\frac{1}{2}v_{\phi}^{2}+\frac{\gamma}{\gamma -1}c_{\rm{s}}^{2} - \frac{GM_{*}}{r}.
\]
We define $v_p=\sqrt{v_r^2+v_{\theta}^2}$ as the poloidal velocity, $\gamma$ as the specific heat ratio, and $c_{\rm{s}}$ as the sound velocity.
We assume that only the gas satisfying $\eta>0$ has sufficient mechanical energy to escape from the system eventually.
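The escape criterion can be sketched as follows; the velocities, sound speeds, and the constant specific-heat ratio below are illustrative sample values (in the simulations these are taken from the local gas state):

```python
GAMMA = 1.4   # illustrative specific-heat ratio (set by the gas state in the runs)
G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13

def bernoulli(v_p, v_phi, c_s, r, M_star=MSUN):
    """Bernoulli function eta [erg g^-1]; gas with eta > 0 is counted
    as escaping in the mass-loss integral (all velocities in cm/s)."""
    return (0.5 * v_p**2 + 0.5 * v_phi**2
            + GAMMA / (GAMMA - 1.0) * c_s**2 - G * M_star / r)

# Slow, cold gas deep in the potential is bound ...
print(bernoulli(1e5, 5e5, 1e5, 1.0 * AU) < 0)   # True
# ... while hot gas with c_s ~ 10 km/s beyond r_g is not.
print(bernoulli(1e6, 1e5, 1e6, 10.0 * AU) > 0)  # True
```

The enthalpy term $\gamma c_{\rm s}^2/(\gamma-1)$ is what allows even subsonic but hot surface gas to qualify as outflow, which is why the heating rate at the base controls the mass-loss rate.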
The mass-loss rate slightly fluctuates in time mainly because of spurious reflection of subsonic gas at the outer computational boundary.
The averaged mass-loss rate (from $840{\,\rm yr}$ to $8400{\,\rm yr}$) is
\[
\dot{M}=2.7\times 10^{-9} {\rm \, M_\odot}{\,\rm yr}^{-1}.
\]
The magnitude of the fluctuation is from $-13\%$ to $+10\%$ compared to the averaged value.
The mass-loss rate with \ce{H2} pumping is almost the same within the fluctuation as that without \ce{H2} pumping ($\dot{M}=2.6\times10^{-9}{\rm \, M_\odot}{\,\rm yr}^{-1}$).
This is again because \ce{H2} pumping heats the gas mainly in the vicinity of the \ce{H2} dissociation front and does not affect the temperature structure in the photoelectric-heated region.
Hence, the effect of \ce{H2} pumping on driving photoevaporative flows is limited if \ce{H2}-rich flow is excited by photoelectric heating.
\subsection{Dependence on the Dust-to-Gas Mass Ratio}\label{sec:result4}
We perform a series of photoevaporation simulations with varying the dust-to-gas ratio in the range $\mathcal{D}=10^{-8}$--$10^{-1}$.
\figref{fig:HeatingCoolingDGMR} shows the specific heating and cooling rates at the base in each simulation.
In the case of $\mathcal{D}=10^{-1}$, since the dust optical depth is high, FUV photons cannot deeply penetrate into the disk compared to the fiducial case ($\mathcal{D} = 10^{-2}$).
Photoevaporation is caused by FUV photoelectric heating at $N_{\rm{H_2}}=10^{19}{\rm \, cm}^{-2}$, which is one order of magnitude lower than the $\mathcal{D}=10^{-2}$ case.
(We define this surface as the base for this run.)
The mass-loss rate decreases because of the lower base density (\figref{MassLossDGMR}).
The FUV photoelectric heating rate approximately follows $\Gamma_{\rm{FUV}}\propto \mathcal{D}$ at the base.
For $\mathcal{D}=10^{-3}$, it gets lower to be comparable to heating rates for X-ray heating and \ce{H2} pumping.
The temperature of the \ce{H2} region cannot get sufficiently high to have $\eta > 0$. This is in contrast to the fiducial run, where molecular flow is observed. The mass-loss rate is accordingly lower than that of the fiducial run.
For $\mathcal{D}=10^{-4}$--$10^{-8}$, the FUV photoelectric heating is ineffective, and the thermochemical structure differs from the higher-$\mathcal{D}$ runs.
EUV photons heat the gas at high latitudes, whereas
X-ray heating is dominant in the \ion{H}{1} region
instead of FUV photoelectric heating.
The gas temperature is lowered to $300{\rm \,K}$ by adiabatic cooling and \ion{O}{1} line emission there.
\ce{H2} pumping is generally dominant only near the dissociation front, and the other \ce{H2} region is heated by X-ray up to $N_{\rm H} \sim 10^{22} {\rm \, cm}^{-2}$.
\figref{fig:HeatingCoolingDGMR} shows the specific heating and cooling rates along the base.
The relative contributions of the heating processes vary with the distance from the host star.
\ce{H2} pumping and X-ray heating are dominant at radii inside and outside $7$--$12r_{\rm g}$, respectively. The inner regions are favorable for \ce{H2} pumping since the density exceeds the critical value.
The gas temperature is $200$--$300{\rm \,K}$ in the molecular region at $r=10r_{\rm g}$.
It is lower by a factor of two than would have been achieved by photoelectric heating.
The same is true at other radii, and thus the $\eta = 0$ boundary is present at an order of magnitude smaller column density, $N_{\rm{H_2}}=10^{19}{\rm \, cm}^{-2}$, than the $\mathcal{D}=10^{-2}$ case.
The flow accordingly has a low density, which results in a significant decrease in the mass-loss rate compared to the $\mathcal{D} \geq 10^{-3}$ runs (see \figref{MassLossDGMR}).
\begin{figure}
\centering
\includegraphics[width=\linewidth,clip]{MassLossRate_100au_yearrange_DGMR.pdf}
\caption{The time-averaged mass-loss rate for each dust-to-gas mass ratio. The blue dots represent the simulation results. The dotted line is a fit.}
\label{MassLossDGMR}
\end{figure}
In order to extract the contribution of EUV-driven flows from the total mass-loss rates in the low-$\mathcal{D}$ cases,
we count only the mass flux of the ionized outflows ($y_{\rm{HII}}\geq0.5$) using \eqnref{eq:massloss}.
The mass-loss rates of the ionized gas are found to be roughly an order of magnitude smaller than the total, meaning that the contribution of EUV-driven flows is limited.
This small contribution is mainly due to the fact that \ce{H2} pumping increases the local scale height of the neutral gas at the innermost radii, and the vertically extended gas shields EUV that would otherwise have reached the outer radii.
As a result, the \ion{H}{2} region is confined within a conical volume at low polar angles.
The total mass-loss rates are measured in the same manner as in \secref{sec:result2} and are fitted as a function of $\mathcal{D}$. We change the fitting functions at $\mathcal{D}=10^{-3}$, where the main heating process alternates. We get
\[
\dot{M} = \left\{
\begin{array}{l}
1.0\times10^{-9}\times10^{-0.15(\log_{10}\mathcal{D})^2 - 0.49\log_{10}\mathcal{D}}{\rm \, M_\odot}{\,\rm yr}^{-1}
\,(\mathcal{D}\geq10^{-3})\\
5.6\times10^{-10}{\rm \, M_\odot}{\,\rm yr}^{-1}\,(\mathcal{D}\leq10^{-5})
\end{array}
\right. .
\]
For $\mathcal{D}=10^{-6}$--$10^{-8}$, the mass-loss rate converges to a constant value.
Since the abundance of \ce{H2} does not strongly depend on the already small dust amount, we expect that the mass-loss rate remains constant towards even lower $\mathcal{D}$ than shown in \figref{MassLossDGMR}.
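The fit above can be evaluated as follows; we read the exponent as a polynomial in $x=\log_{10}\mathcal{D}$, which recovers a value close to the measured fiducial rate:

```python
import math

def mdot_fit(D):
    """Fitted mass-loss rate [Msun/yr] as a function of the dust-to-gas
    mass ratio D, with the exponent read in x = log10(D)."""
    if D >= 1e-3:
        x = math.log10(D)
        return 1.0e-9 * 10.0**(-0.15 * x**2 - 0.49 * x)
    if D <= 1e-5:
        return 5.6e-10   # converged floor at low dust content
    raise ValueError("fit not specified for 1e-5 < D < 1e-3")

print(mdot_fit(1e-2))   # ~ 2.4e-9, close to the measured 2.7e-9
print(mdot_fit(1e-7))   # 5.6e-10
```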
\cite{Nakatani:2018b} investigated the dependence of the mass-loss rate on the gas metallicity $Z$.
They varied $Z$ with setting the dust-to-gas mass ratio as $\mathcal{D}=0.01\times(Z/Z_{\odot})$.
The gas-phase elemental abundances of metals are also scaled in proportion to $(Z/Z_{\odot})$.
On the other hand, we fix the gas-phase elemental abundances of carbon and oxygen constant regardless of $\mathcal{D}$ in the present study.
Our run with $\mathcal{D}=10^{-2}$ corresponds to the $Z/Z_{\odot}=1$ case of \cite{Nakatani:2018b},
which assumes $\sim10$ times higher luminosities for FUV and EUV, and a $\sim 2.5$ times higher X-ray luminosity. The overall stronger radiation yields higher mass-loss rates over the entire metallicity range. The difference between the low-$Z$ runs of \cite{Nakatani:2018b} and our low-$\mathcal{D}$ runs appears in the major cooling process. For example, with small $\mathcal{D}$, \ion{O}{1} line cooling is more effective than dust-gas collisional cooling, which was not observed in \cite{Nakatani:2018b}.
\section{Discussion}
While the results presented in the previous section have a variety of interesting implications, there are still uncertainties originating from the model assumptions.
In this section, we further discuss possible variations of our results and show caveats.
\subsection{Dust Evolution}
It is expected that dust sedimentation
causes the local dust-to-gas mass ratio to vary in the vertical direction.
Dust grains can also drift radially
toward the inner region.
In the present study,
we set the dust-to-gas mass ratio $\mathcal{D}$
as a convenient parameter to quantify and examine the effect of dust abundance on disk photoevaporation.
We have shown that the mass-loss rate and the mass-loss profile change sensitively depending on
$\mathcal{D}$.
Infrared and millimeter observations suggest that PPDs contain dust grains with various sizes \citep{Acke:2004, Pascual:2016, Davies:2018}.
The dust size distribution and its variation have considerable effects on disk heating and the resulting mass-loss rate.
Small grains have large surface areas per mass,
and thus contribute predominantly to the net photoelectric heating.
If a PPD achieves a state deficient in small dust grains,
it is less susceptible to FUV photoelectric heating.
\cite{Hutchison:2016} show that the dust size differs between wind regions and the disk region because small dust grains with a few $\rm{\mu m}$ diameter are entrained by photoevaporating flows.
We have shown that the mass-loss profile is largely determined by the heating at the disk surface,
and thus it is unlikely that dust transport by photoevaporating flows affects the mass-loss rate. However, mixing of gas and dust near the base may be a complicated process, and thus ideally detailed hydrodynamics simulations with explicit treatment of dust size evolution and dynamical gas-dust (de)coupling would be needed to fully address this issue.
\subsection{X-ray Luminosity Dependence}
Recent observations suggest that EUV, FUV, and X-ray luminosities differ considerably even among stars of the same spectral type \citep{Gullbring:1998,Flaccomio:2003,France:2012,Vidotto:2014,France:2018}.
\cite{Kunitomo:2021} consider the variation of the emissivity of the high-energy radiation together with
stellar evolution.
For example, for a central star with mass $M_*\geq1.5{\rm \, M_\odot}$, the X-ray luminosity decreases by a factor of $\sim 10^{4}$ at the age of $1$--$10{\,\rm Myr}$.
To examine the impact of X-ray radiation, we perform an additional set of photoevaporation simulations with X-ray luminosities of $L_{\rm{X}}=2.51\times10^{29}{\rm \, erg}{\rm \, s}^{-1}$ and $L_{\rm{X}}=2.51\times10^{31}{\rm \, erg}{\rm \, s}^{-1}$, which are 0.1 and 10 times our fiducial value, $L_{\rm{X,f}}$, respectively.
We fix the stellar mass at $M_*=1{\rm \, M_\odot}$ and consider dust-to-gas mass ratios of $\mathcal{D}=10^{-2}$ and $10^{-6}$.
\begin{table}[]
\centering
\begin{tabular}{ccc}
$\mathcal{D}$ & $L_{\rm{X}}$ [${\rm \, erg}{\rm \, s}^{-1}$] & $\dot{M}$ [$10^{-9}{\rm \, M_\odot}{\,\rm yr}^{-1}$]\\ \hline \hline
$10^{-2}$ & $2.5\times10^{31}$ & $5.2$\\
$10^{-2}$ & $2.5\times10^{30}$ & $2.7$\\
$10^{-2}$ & $2.5\times10^{29}$ & $2.5$\\ \hline
$10^{-6}$ & $2.5\times10^{31}$ & $1.6$\\
$10^{-6}$ & $2.5\times10^{30}$ & $0.62$\\
$10^{-6}$ & $2.5\times10^{29}$ & $0.5$\\ \hline
\end{tabular}
\caption{The mass-loss rates in the runs with different $\mathcal{D}$ and $L_{\rm{X}}$.}
\label{tab:Xraydependence}
\end{table}
The resulting mass-loss rates are listed in \tabref{tab:Xraydependence}.
In the low X-ray cases, \ce{H2} pumping dominates the gas heating throughout the neutral regions, even at outer radii $r\geq10r_{\rm g}$.
The temperature structure does not differ from that of the fiducial run. The disk mass-loss rate does not depend sensitively on $L_{\rm{X}}$: an orders-of-magnitude increase in $L_{\rm{X}}$ enhances it by only a factor of a few.
This insensitivity indicates that both \ce{H2} pumping and X-ray radiation contribute to the photoevaporation in the low-$\mathcal{D}$ cases.
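The weak dependence can be quantified by the effective power-law index $d\log\dot{M}/d\log L_{\rm X}$ implied by the table above. Below is a minimal Python sketch; the rates are taken from the table, while the function name and the dictionary layout are our own illustrative choices:

```python
import math

# Mass-loss rates from the table: {D: {L_X [erg/s]: Mdot [1e-9 Msun/yr]}}
runs = {
    1e-2: {2.5e29: 2.5, 2.5e31: 5.2},
    1e-6: {2.5e29: 0.5, 2.5e31: 1.6},
}

def powerlaw_index(rates):
    """Effective slope d(log Mdot)/d(log L_X) between the lowest and highest L_X."""
    (l_lo, m_lo), (l_hi, m_hi) = sorted(rates.items())
    return math.log10(m_hi / m_lo) / math.log10(l_hi / l_lo)

for d, rates in runs.items():
    print(f"D = {d:.0e}: effective index {powerlaw_index(rates):.2f}")
```

Between the lowest and highest $L_{\rm X}$, the index stays below $\sim0.3$ for both values of $\mathcal{D}$, i.e., two orders of magnitude in $L_{\rm X}$ change $\dot{M}$ by only a factor of a few.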
\subsection{PPDs where $\rm{H_2}$ pumping is important}
In \cite{Komaki:2021}, we performed a large set of PPD simulations varying the central stellar mass from $M_{*}=1{\rm \, M_\odot}$ to $M_{*}=7{\rm \, M_\odot}$.
More massive stars with higher UV luminosities generate stronger photoevaporative flows.
We expect that \ce{H2} pumping heats the gas at the disk surface to further promote photoevaporation
around massive stars.
We have run an additional simulation with $M_*=7{\rm \, M_\odot}$ to examine the effect of \ce{H2} pumping on a disk around a massive star.
We set the dust-to-gas mass ratio to the fiducial value of $\mathcal{D}=10^{-2}$.
The simulation yields a mass-loss rate of $\dot{M}=5.9\times10^{-7}{\rm \, M_\odot}{\,\rm yr}^{-1}$.
We find that the mass-loss rate is twice as large as in the run without \ce{H2} pumping.
The base is heated by both the FUV photoelectric effect and \ce{H2} pumping, generating strong outflows.
Finally, we discuss the effect of \ce{H2} pumping in an extremely metal/dust-poor disk.
This is relevant to the interesting question of how \ce{H2} pumping affects the dispersal of primordial-gas disks.
Our simulation with $\mathcal{D}=10^{-8}$ can be regarded as almost a primordial-gas case (\figref{fig:HeatingCoolingDGMR}).
The result shows that a primordial disk around a low-mass star is heated by \ce{H2} pumping and X-ray radiation and loses its gas mass by photoevaporation at a rate of $\sim 5 \times 10^{-10} {\rm \, M_\odot}{\,\rm yr}^{-1}$.
In our future study (Komaki et al., in preparation), we examine the structure, stability and the dispersal of primordial-gas disks.
\section{Summary}
Recent millimeter observations suggest that the dust amount in a PPD decreases as the disk evolves. Although the gas mass is uncertain, the dust abundance and its long-term variation
may play an important role in disk photoevaporation, and hence in disk dispersal and planet formation.
In the present paper, we have performed a set of
radiation-hydrodynamics simulations with varying dust-to-gas mass ratios to quantify the effects of \ce{H2} pumping on the photoevaporation of dust-deficient disks.
In the standard case with $M_*=1{\rm \, M_\odot}$ and $\mathcal{D}=10^{-2}$, \ce{H2} pumping heats the evaporating gas driven by FUV photoelectric heating. Therefore, \ce{H2} pumping does not directly contribute to the mass loss but only affects the temperature structure of the outflow.
However, we have also shown that \ce{H2} pumping heating is effective at the inner region within several gravitational radii where the density is higher than the critical density for
collisional de-excitation of \ce{H2} molecules.
In relatively dust-rich cases with $\mathcal{D}\geq 10^{-3}$,
the disks are heated mainly by the FUV photoelectric process, and the outer molecular region evaporates efficiently.
In the remaining, dust-deficient disks with $\mathcal{D}< 10^{-3}$, the dominant heating processes are \ce{H2} pumping and X-ray irradiation.
Since the dust amount can vary on a long timescale as the disk evolves, the temperature structure, the mass-loss profile, and the chemical composition would also change.
In future work, we shall study the long-term dispersal process of a PPD by incorporating the formation, growth, and destruction of dust grains.
\acknowledgments
R.N. acknowledges support from the Special Postdoctoral Researcher program at RIKEN and from Grant-in-Aid for Research Activity Start-up (19K23469).
R.K. acknowledges financial support via the Emmy Noether and Heisenberg Research Grants funded by the German Research Foundation (DFG) under grant no.~KU 2849/3 and 2849/9.
Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan.
\vspace{5mm}
\section{Introduction}
Millimeter-wave (mmWave) has been widely recognized as a key technology in future wireless communication systems, since the abundant bandwidth resources can significantly increase the throughput \cite{heath_JSTSP, GZ_WC}.
Moreover, to mitigate the severe propagation loss in the mmWave band, massive multiple-input multiple-output (MIMO) is usually adopted to perform beamforming \cite{LULU_JSTSP, CM_beam}.
However, the fully-digital massive MIMO architecture gives rise to an unaffordable hardware cost and power consumption, where a dedicated radio frequency (RF) chain is required for each antenna.
In order to circumvent the technical hurdle and facilitate the deployment of mmWave massive MIMO systems in practice, the phase shift network (PSN) based hybrid MIMO architecture has been widely adopted to achieve a large array gain with a much smaller number of RF chains \cite{YuWei_JSTSP, MJN_WCL, LY_JSTSP}.
To fully capitalize on the large spatial degrees of freedom in mmWave massive MIMO systems, channel state information (CSI) at the base station (BS) is essential,
since beamforming, signal detection, and interference alignment heavily rely on accurate CSI at the BS \cite{SWQ_TVT}.
As for time-division duplexing (TDD) systems, estimating the high-dimensional uplink mmWave massive MIMO channels from a limited number of RF chains at the BS suffers from an excessively high pilot overhead \cite{LY_TWC}.
As for frequency-division duplexing (FDD) systems, the downlink high-dimensional channel is first obtained at the users using very few RF chains, and is then fed back to the BS.
In this case, the prohibitively high channel estimation and feedback overhead problem is more severe~\cite{GZ_TSP}.
\subsection{Related Work}
To this end, by exploiting the sparsity of massive MIMO channels in the angle-domain and/or delay-domain, several low overhead channel estimation and feedback solutions have been proposed \cite{SWQ_TVT, low_frequency_massive_MIMO1, GZ_TSP, LAW_Tcom, LY_TWC, WZW_TVT, LXC_TWC}.
Specifically, by exploiting the temporal correlation of time-varying channels, the authors of \cite{SWQ_TVT} proposed a differential channel estimation and feedback scheme for FDD massive MIMO systems with reduced overhead, and a structured compressive sampling matching pursuit (S-CoSaMP) algorithm to acquire a reliable CSI at the BS.
In \cite{GZ_TSP}, the authors proposed a spatially common sparsity based adaptive channel estimation and feedback scheme for FDD massive MIMO systems, which adapted the training overhead and pilot design to reliably estimate and feed back the downlink CSI with reduced overhead.
Moreover, by introducing an enhanced Newtonized orthogonal matching pursuit (eNOMP) algorithm, the authors of \cite{low_frequency_massive_MIMO1} proposed an efficient downlink channel reconstruction-based transceiver for FDD massive MIMO systems.
However, these schemes \cite{SWQ_TVT, low_frequency_massive_MIMO1, GZ_TSP} were mainly proposed for low-frequency massive MIMO systems using a fully-digital array.
As for the mmWave hybrid MIMO,
by exploiting the channel sparsity in both angle and delay domains, a closed-loop sparse channel estimation scheme for TDD systems was proposed in \cite{LAW_Tcom}, which utilized ESPRIT-type algorithms to acquire super-resolution estimates of the angle of arrivals/departures (AoAs/AoDs) and of the delays of multipath components with low overhead.
In \cite{LY_TWC}, two high-resolution channel estimation schemes based on the ESPRIT algorithm were proposed for broadband mmWave massive MIMO systems.
By exploiting the mmWave channels' sparsity, the authors of \cite{WZW_TVT} proposed a compressive sensing (CS) greedy algorithm based channel estimation solution for reducing the channel estimation overhead.
Additionally, by exploiting the 3-D clustered structure exhibited in the virtual AoA-AoD-delay domain, an approximate message passing (AMP) with the nearest neighbor pattern learning algorithm was proposed to estimate the broadband mmWave massive MIMO-OFDM channels in \cite{LXC_TWC}.
Although the overhead is reduced, the computational complexity of ESPRIT techniques \cite{LAW_Tcom, LY_TWC} and greedy algorithms \cite{WZW_TVT} can be prohibitively high due to the matrix inversion and singular value decomposition (SVD) operations.
Besides, although AMP algorithms based solutions \cite{LXC_TWC} can reduce the computational complexity, they heavily rely on {\it{a priori}} models, which would lead to performance degradation since {\it{a priori}} models may not always be consistent with the actual systems.
\subsection{Motivations}
Recently, the successful application of deep learning in various fields, particularly in computer science, has gained major attention in the communication community, and has promoted an increasing interest in applying it to address communication and signal processing problems \cite{QZJ_WC, GUI_WC, QZJ_TCCN, Marco_Tcom}.
The deep learning based intelligent communication paradigm has attained manifold accomplishments, including channel coding \cite{decoder_design}, random access \cite{random_access}, beamforming design \cite{beam_design1,HHJ_hybrid_precoding, beamforming_ICC, YuWei_Globecom, YuWei_TWC},
activity and signal detection \cite{activity_detection, signal_detection}, autoencoder-based end-to-end communication system \cite{autoencoder_system}, CSI feedback \cite{CSI_feedback1, CSI_feedback2, CSI_feedback3}, and channel estimation \cite{LY_JSTSP, channel_estimation1, MXS_TVT}, etc.
To be specific, pure data-driven deep learning based solutions often employ deep neural networks (DNNs), including fully-connected neural networks and/or convolutional neural networks (CNNs), as a black box to design communication signal processing modules without any {\it{a priori}} model information.
Moreover, a large amount of training data samples are required to optimize the neural network through a customized loss function and learning strategy.
In particular, in order to overcome the high-computational complexity and fully exploit the spatial information, the authors of \cite{HHJ_hybrid_precoding} proposed a deep-learning-enabled mmWave massive MIMO framework for effective hybrid precoding, in which each selection of precoders for obtaining the optimized decoder is regarded as a mapping relation in the DNN.
In addition, the authors of \cite{YuWei_Globecom} proposed a DNN-based approach for channel sensing and downlink hybrid analog-digital beamforming, which was generalizable for any numbers of users by decomposing the deep learning architecture into multiple parallel independent single-user DNNs.
By considering the multiuser channel estimation and feedback problem as a distributed source coding problem, the authors of \cite{YuWei_TWC} proposed a joint design of pilots and a novel DNN architecture, which mapped the feedback bits from all the users directly into the precoding matrix at the BS.
By exploiting an unsupervised machine learning (ML) model, i.e., an autoencoder, the authors of \cite{beamforming_ICC} presented a linear autoencoder-based beamformer and combiner design, which maximizes the achievable rates over a mmWave channel.
Moreover, in order to address the overwhelming feedback overhead of FDD massive MIMO systems, the authors of \cite{CSI_feedback1} proposed a CS-ReNet framework, where the CSI was first compressed at the users based on CS methods and then reconstructed at the BS using a deep learning-based recovery solver.
However, in practical scenarios, there exists various interference and non-linear effects.
Therefore, the authors of \cite{CSI_feedback3} designed a deep learning-based denoising network, called DNNet, to improve the performance and robustness of channel feedback.
Additionally, by exploiting the spatial, temporal, and frequency correlations, the authors of \cite{LY_JSTSP} employed a CNN to address the channel estimation problem for mmWave massive hybrid MIMO systems.
However, the performance of these data-driven approaches heavily depends on the quantity and quality of the training data samples; good data sets are usually difficult to obtain in practice, and practical issues such as over-fitting during training can degrade the generalization capability of the system.
In addition, data-driven approaches lack the interpretability and trustworthiness that are major strengths of model-driven signal processing.
Moreover, compared with conventional model-based methods, model-driven approaches have better denoising capabilities, since they can exploit the powerful data processing abilities of neural networks.
Therefore, different from pure data-driven and conventional model-based approaches, model-driven deep learning (MDDL)-based approaches construct the network structure by exploiting {\it{a priori}} knowledge from known physical mechanisms, such as well-developed channel models and transmission protocols.
Note that MDDL-based approaches retain some of the advantages of conventional model-based iterative methods: by exploiting some {\it a priori} information, e.g., the structured sparsity of the mmWave channels, they can be trained with fewer trainable parameters and fewer data samples.
Moreover, they can further retain the learning ability of deep learning methods and avoid the performance degradation caused by the mismatch between the predetermined parameters (based on an assumed model) and the true optimal parameters (based on empirical data samples).
By leveraging some {\it{a priori}} information, model-driven methods require fewer parameters to be learned and fewer samples for training as compared to pure data-driven deep learning solutions \cite{Marco_VTM, channel_estimation2, LAMP_TSP, HHT_WC, AMP_WCL}.
Specifically, the authors of \cite{channel_estimation2} proposed an MDDL-based downlink channel reconstruction scheme for FDD massive MIMO systems, where a powerful neural network, named You Only Look Once (YOLO), was introduced to enable a rapid estimation process of the model parameters.
Moreover, \cite{AMP_WCL} proposed a novel AMP-based network with deep residual learning, referred to as LampResNet, to estimate the beamspace channel for mmWave massive MIMO systems.
\subsection{Our Contributions}
This paper proposes an MDDL-based channel estimation and feedback scheme for wideband mmWave massive hybrid MIMO systems, where the angle-delay domain channels' sparsity is exploited for reducing the overhead.
First, we consider the uplink channel estimation for TDD systems.
To reduce the uplink pilot overhead for estimating the high-dimensional channels from a limited number of RF chains at the BS,
we propose to jointly train the PSN and the channel estimator as an auto-encoder.
Particularly, by learning the integrated trainable parameters from data samples and exploiting the channels' structured sparsity from an {\it{a priori}} model,
the proposed multiple-measurement-vectors learned approximate message passing (MMV-LAMP) network with the devised redundant dictionary can jointly recover multiple subcarriers' channels with significantly enhanced performance.
Moreover, we consider the downlink channel estimation and feedback for FDD systems.
Similarly, the pilots at the BS and channel estimator at the users can be jointly trained as an encoder and a decoder, respectively.
Besides, to further reduce the channel feedback overhead, only the received pilots on part of the subcarriers are fed back to the BS, which can exploit the MMV-LAMP network to reconstruct the spatial-frequency channel matrix.
Simulations are conducted to demonstrate the effectiveness of the proposed MDDL-based channel estimation and feedback scheme over the conventional approaches.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item Operations in the complex domain are not well supported by most deep learning frameworks.
Moreover, the beamforming/combining matrix is complex-valued and must satisfy the constant modulus constraint due to the RF PSN adopted in the hybrid MIMO architecture.
To this end, we design a novel fully-connected channel compression network (CCN) as the encoder to compress the high-dimensional channels, whose network parameters are defined as the real-valued phases of the PSN (i.e., of the beamforming/combining matrix in channel estimation).
\item To reliably reconstruct the channels from the compressed measurements,
we propose a channel reconstruction network (CRN) based on a developed MMV-LAMP network with the devised redundant dictionary as the decoder, which can exploit the {\it{a priori}} model and learn the optimal parameters from data to jointly recover multiple subcarriers' channels with significantly enhanced performance.
\item To effectively estimate the channels from compressed feedback signals, a feedback based channel reconstruction network (FCRN) is proposed. The FCRN consists of a feedback reconstruction sub-network (FRSN) and a CRN. The FRSN is based on the MMV-LAMP network and can exploit the delay-domain sparsity of the channels to reliably reconstruct the compressed channel.
\item Due to the mismatch between the continuous AoAs/AoDs and the limited angle resolution of the spatial-angular transform matrix, the resulting power leakage can weaken the channel sparsity in the angle domain. Hence, by quantizing the angles with a finer resolution, we design a redundant dictionary to further improve the sparse channel estimation performance.
\item To evaluate the superiority of the proposed solution that jointly trains the pilots and channel estimator, we further consider scenarios with fixed scattering environments.
Simulation results verify that, by learning the characteristics of the data samples with fixed scattering, the optimized CCN and MMV-LAMP network can well match the channel environments with improved performance.
\end{itemize}
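To illustrate the redundant-dictionary idea from the contribution list above for a ULA, the virtual angles can be quantized $\rho$ times more finely than the $N_{\rm BS}$-point DFT grid. The sketch below is ours; the function name \texttt{make\_dictionary} and the grid convention are illustrative choices, not the paper's implementation:

```python
import numpy as np

def make_dictionary(n_bs: int, rho: int) -> np.ndarray:
    """Redundant dictionary for a ULA: rho*n_bs steering vectors on a fine
    grid of virtual spatial angles; rho = 1 recovers the usual DFT-like matrix."""
    g = rho * n_bs
    # Virtual spatial angles uniformly quantized in [-1/2, 1/2)
    virt = (np.arange(g) - g / 2) / g
    m = np.arange(n_bs)[:, None]
    return np.exp(2j * np.pi * m * virt[None, :]) / np.sqrt(n_bs)

A = make_dictionary(64, 4)   # 64 x 256 redundant dictionary, unit-norm columns
assert A.shape == (64, 256)
assert np.allclose(np.linalg.norm(A, axis=0), 1.0)
```

With $\rho>1$ the dictionary has more columns than antennas, so off-grid AoAs fall closer to some grid point and the power leakage that weakens angle-domain sparsity is reduced.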
\textit{Notations}: Throughout this paper, scalar variables are denoted by normal-face letters, while boldface lower and upper-case symbols denote column vectors and matrices, respectively.
Superscripts ${( \cdot )^{\text{T}}}$, ${( \cdot )^ * }$ and ${( \cdot )^{\text{H}}}$ denote the transpose, conjugate and Hermitian transpose operators, respectively.
${\left\| {\mathbf{a}} \right\|_0}$ and ${\left\| {\mathbf{A}} \right\|_F}$ denote the ${\ell}_{0}$-norm of ${\mathbf{a}}$ and the Frobenius norm of ${\mathbf{A}}$, respectively.
${\left[ {\mathbf{a}} \right]_m}$ and ${\left[ {\mathbf{A}} \right]_{m,n}}$ are the $m$-th element of ${\mathbf{a}}$ and the $m$-th row and the $n$-th column element of ${\mathbf{A}}$, respectively.
${\mathbf{A}}( {m,:} )$ and ${\mathbf{A}}( {:,n} )$ denote the $m$-th row vector and the $n$-th column vector of ${\mathbf{A}}$, respectively.
${\left. {\mathbf{A}} \right|_{\mathbf{\Omega }}}$ denotes a sub-matrix by selecting the rows of ${\mathbf{A}}$ according to the ordered set ${\bm{\Omega}}$ and ${\left\{ {\bm{\Omega }} \right\}_m}$ is the $m$-th element of the set ${\bm{\Omega }}$.
${{\mathbf{e}}^{{\text{j}}\left[ {\bm{\Xi }} \right]}}$ denotes a complex matrix with its element being ${\left[ {{{\mathbf{e}}^{{\text{j}}\left[ {\bm{\Xi }} \right]}}} \right]_{m,n}} = {e^{{\text{j}}{{\left[ {\bm{\Xi }} \right]}_{m,n}}}}$, and ${\bm{\Xi }}$ is a real matrix.
Finally, $\partial ( \cdot )$ is the first-order partial derivative operation.
\section{System Model}
Consider a mmWave massive MIMO system with hybrid beamforming,
where the BS is equipped with a uniform linear array (ULA) comprising ${N_{{\rm{BS}}}}$ antennas
and ${N_{{\rm{RF}}}}$ RF chains, and serves $U$ single-antenna users.
At the BS, the PSN is employed to connect a large number of antennas with a much fewer number of RF chains (i.e., ${{N_{{\text{BS}}}} \gg {N_{{\text{RF}}}}}$), and orthogonal frequency division multiplexing (OFDM) with $K$ subcarriers is adopted to combat the frequency selective fading of the mmWave channels.
\subsection{Uplink Channel Estimation for TDD Systems}
Firstly, we consider the uplink channel estimation for TDD systems.
The uplink channel estimation stage includes $Q$ OFDM symbols (i.e., $Q$ time slots) dedicated for channel estimation.
For a certain user$\footnote{For uplink multi-user channel estimation, if the $U$ users adopt mutually orthogonal pilot signals, the pilot signals associated with different users can be distinguished and then processed separately.}$,
in order to estimate the $k$-th subcarrier's channel, the received baseband signal vector ${{\mathbf{y}}'_{{\text{UL}}}}\left[ {k,q} \right] \in {\mathbb{C}^{{N_{{\text{RF}}}} \times 1}}$ at the BS in the $q$-th time slot can be expressed as
\begin{equation}\label{uplink received signal q}
{{{\mathbf{y}}}'_{{\text{UL}}}}\left[ {k,q} \right] = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}\left[ q \right]{{\mathbf{h}}_{{\text{UL}}}}\left[ k \right]x\left[ {k,q} \right] + {{{\mathbf{\bar n'}}}_{{\text{UL}}}}\left[ {k,q} \right],
\end{equation}
where $1 \le q \le Q$, $1 \le k \le K$,
${{\mathbf{F}}_{{\text{UL}}}}\left[ {q} \right] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times {N_{{\text{RF}}}}}}$ denotes the uplink combining matrix at the BS,
${{\mathbf{h}}_{{\text{UL}}}}\left[ k \right] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times 1}}$ is the uplink $k$-th subcarrier channel,
$x\left[ {k,q} \right] \in \mathbb{C}$ is the transmitted pilot symbol,
and ${{{\mathbf{\bar n'}}}_{{\text{UL}}}}\left[ {k,q} \right] \sim \mathcal{C}\mathcal{N}( {0,\sigma _n^2{\mathbf{I}}_{N_{\text{RF}}}} )$ is the effective noise modeled at the receiver (front-end) level.
Then, the received baseband signal is post-processed by multiplying it by ${x^ * }\left[ {k,q} \right]$, i.e.,
\begin{align}\label{received_signal_tran}
{{\mathbf{y}}_{{\text{UL}}}}\left[ k,q \right] = {{\mathbf{y}}'_{{\text{UL}}}}\left[ k,q \right]{x^ * }\left[ {k,q} \right]
= {\mathbf{F}}_{{\text{UL}}}^{\text{H}}\left[ q \right]{{\mathbf{h}}_{{\text{UL}}}}\left[ k \right] + {{{ {\mathbf{n}}}}_{{\text{UL}}}}\left[ {k,q} \right],
\end{align}
where we assume that $x\left[ {k,q} \right]{x^ * }\left[ {k,q} \right] = 1$
and ${{{ {\mathbf{n}}}}_{{\text{UL}}}}\left[ {k,q} \right] = {{{\bar {\mathbf{n}}}'}_{{\text{UL}}}}\left[ {k,q} \right]{x^ * }\left[ {k,q} \right]$.
Note that, due to the constant modulus constraint of the adopted fully-connected RF PSN at the BS,
the uplink combining matrix ${{\mathbf{F}}_{{\text{UL}}}}\left[ {q} \right]$, $\forall q$, can be expressed as
${\big[ {{{\mathbf{F}}_{{\text{UL}}}}\left[ q \right]} \big]_{m,n}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}{e^{{\text{j}}{{\left[ {{{\bm{\Xi }}_{{\text{UL}}}}} \right]}_{m,n}}}}$ for $1 \le m \le N_{\text{BS}}$, $1 \le n \le N_{\text{RF}}$,
and $\left[{{{{{\bm{\Xi }}_{{\text{UL}}}}}}}\right]_{m,n}$ denotes the phase value connecting the $m$-th antenna and the $n$-th RF chain$\footnote{Note that changing the phase values of the PSN does not require a re-synchronization of the whole system. This is because: i) The phase values of the phase shifts will be changed in the guard interval before each pilot OFDM symbol; ii) The synchronization of frame and symbol can be obtained based on the preambles transmitted before the pilot symbols; iii) When to adjust the phase shifts can be exactly calculated according to the synchronization information and the predefined signal frame structure, and the adjustment of the phase shifts can be controlled according to the system clock.}$.
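As a hedged illustration of this parameterization, the combining matrix can be generated from a real-valued phase matrix, so that every entry automatically satisfies the constant modulus constraint. The dimensions and random phases below are illustrative assumptions, not values from the paper:

```python
import numpy as np

N_BS, N_RF = 64, 4

# Real-valued phases Xi_UL: this is the kind of quantity the CCN would learn;
# here they are drawn at random purely for illustration.
rng = np.random.default_rng(0)
Xi_UL = rng.uniform(0.0, 2.0 * np.pi, size=(N_BS, N_RF))

# Combining matrix F_UL[q]: every entry has modulus 1/sqrt(N_BS) by construction.
F_UL = np.exp(1j * Xi_UL) / np.sqrt(N_BS)

assert np.allclose(np.abs(F_UL), 1.0 / np.sqrt(N_BS))
```

Optimizing the real matrix `Xi_UL` instead of the complex matrix `F_UL` is what lets standard real-valued training machinery respect the PSN hardware constraint.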
By collecting ${{\mathbf{y}}_{{\text{UL}}}}\left[ {k,q} \right]$ for $1 \le q \le Q$ together,
the aggregate received signals ${{\mathbf{y}}_{{\text{UL}}}}\left[ k \right] \in {\mathbb{C}^{M \times 1}}$ ($M = Q{N_{{\text{RF}}}}$) can be written as
\begin{equation}\label{uplink received signal Q}
{{\mathbf{y}}_{{\text{UL}}}}\left[ k \right] = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}{{\mathbf{h}}_{{\text{UL}}}}\left[ k \right] + {{{ {\mathbf{n}}}}_{{\text{UL}}}}\left[ {k} \right],
\end{equation}
where ${{\mathbf{y}}_{{\text{UL}}}}\left[ k \right] = {\big[ {{\mathbf{y}}_{{\text{UL}}}^{\text{T}}\left[ {k,1} \right], \cdots ,{\mathbf{y}}_{{\text{UL}}}^{\text{T}}\left[ {k,Q} \right]} \big]^{\text{T}}}$,
${{\mathbf{F}}_{{\text{UL}}}} = \big[ {{{\mathbf{F}}_{{\text{UL}}}}\left[ {1} \right], \cdots ,{{\mathbf{F}}_{{\text{UL}}}}\left[ {Q} \right]} \big] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times M}}$,
and ${{{ {\mathbf{n}}}}_{{\text{UL}}}}\left[ k \right] = {\big[ {{ {\mathbf{n}}}_{{\text{UL}}}^{\text{T}}\left[ {k,1} \right], \cdots ,{ {\mathbf{n}}}_{{\text{UL}}}^{\text{T}}\left[ {k,Q} \right]} \big]^{\text{T}}} \in {\mathbb{C}^{M \times 1}}$.
Finally, by stacking ${{\mathbf{y}}_{{\text{UL}}}}\left[ k \right]$ from all subcarriers, the received signals ${{\mathbf{y}}_{{\text{UL}}}}\left[ k \right]$ for $1 \le k \le K$ can be further expressed as
\begin{equation}\label{uplink received signal K}
{{\mathbf{Y}}_{{\text{UL}}}} = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}} + { {\mathbf{N}}}_{\text{UL}},
\end{equation}
where ${{\mathbf{Y}}_{{\text{UL}}}} = \big[ {{{\mathbf{y}}_{{\text{UL}}}}\left[ 1 \right], \cdots ,{{\mathbf{y}}_{{\text{UL}}}}\left[ K \right]} \big] \in {\mathbb{C}^{M \times K}}$,
${\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}} = \big[ {{{\mathbf{h}}_{{\text{UL}}}}\left[ 1 \right],\cdots ,{{\mathbf{h}}_{{\text{UL}}}}\left[ K \right]} \big] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times K}}$ denotes the uplink spatial-frequency domain channel matrix,
and ${{{ {\mathbf{N}}}}_{{\text{UL}}}} = \big[ {{{ {\mathbf{n}}}}_{{\text{UL}}}}\left[ 1 \right], \cdots ,{{{{\mathbf{n}}}}_{{\text{UL}}}}\left[ K \right] \big] \in {\mathbb{C}^{M \times K}}$.
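Putting the uplink pieces together, (\ref{uplink received signal K}) is a plain linear compression of the spatial-frequency channel matrix. Below is a minimal numpy sketch; all sizes, random draws, and variable names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

N_BS, N_RF, Q, K = 64, 4, 8, 16
M = Q * N_RF  # stacked measurements per subcarrier

rng = np.random.default_rng(1)
# Constant-modulus combining matrices for the Q slots, stacked: N_BS x M
Xi = rng.uniform(0, 2 * np.pi, size=(N_BS, M))
F_UL = np.exp(1j * Xi) / np.sqrt(N_BS)

# Spatial-frequency channel H^sf (N_BS x K) and effective noise (M x K)
H_sf = (rng.standard_normal((N_BS, K)) + 1j * rng.standard_normal((N_BS, K))) / np.sqrt(2)
N = 0.01 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))

# Y_UL = F_UL^H H^sf + N -- the M x K input handed to the channel estimator
Y_UL = F_UL.conj().T @ H_sf + N
assert Y_UL.shape == (M, K)
```

Since $M = Q N_{\rm RF} \ll N_{\rm BS}$ for small $Q$, recovering $\mathbf{H}^{\rm sf}_{\rm UL}$ from $\mathbf{Y}_{\rm UL}$ is an underdetermined problem, which is why the structured sparsity exploited by the MMV-LAMP network is needed.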
\subsection{Downlink Channel Estimation and Feedback for FDD Systems}
Moreover, we consider the downlink channel estimation and feedback for FDD systems.
Specifically, the downlink pilot signals transmitted by the BS can be denoted as ${{\mathbf{f}}_{{\text{DL}}}}\left[ {q} \right]{{s}}\left[ {k,q} \right] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times 1}}$ for $1 \le q \le Q$,
where ${{\mathbf{f}}_{{\text{DL}}}}\left[ {q} \right]$ is the RF pilot signal
and ${{s}}\left[ {k,q} \right]$ is the baseband pilot signal.
Mathematically, the received signal in the $q$-th time slot associated with the $k$-th subcarrier at the user can be written as
\begin{equation}\label{received_signal_1}
{{y}'_{{\text{DL}}}}\left[ {k,q} \right] = {\mathbf{h}}_{{\text{DL}}}^{\text{T}}\left[ k \right]{{\mathbf{f}}_{{\text{DL}}}}\left[ {q} \right]s\left[ {k,q} \right] + {{\bar {n}}_{{\text{DL}}}}\left[ {k,q} \right],
\end{equation}
where ${{\mathbf{h}}_{{\text{DL}}}}\left[ k \right] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times 1}}$ is the downlink $k$-th subcarrier's channel,
and ${{{\bar {n}}}_{{\text{DL}}}}\left[ {k,q} \right]$
is the complex noise.
Similar to (\ref{received_signal_tran}), the received signal can be further post-processed to obtain
\begin{align}\label{down_received_signal_tran}
{y_{{\text{DL}}}}\left[ {k,q} \right] = {y_{{\text{DL}}}'}\left[ {k,q} \right]{s^ * }\left[ {k,q} \right]
= {\mathbf{h}}_{{\text{DL}}}^{\text{T}}\left[ k \right]{{\mathbf{f}}_{{\text{DL}}}}\left[ {q} \right] + {{{n}}_{{\text{DL}}}}\left[ {k,q} \right],
\end{align}
where we assume that $s\left[ {k,q} \right]{s^ * }\left[ {k,q} \right] = 1$
and ${{{n}}_{{\text{DL}}}}\left[ {k,q} \right] = {{\bar{n}}_{{\text{DL}}}}\left[ {k,q} \right]{s^ * }\left[ {k,q} \right]$.
Similarly, due to the constant modulus constraint of the adopted RF PSN, the RF pilot signal ${{\mathbf{f}}_{{\text{DL}}}}\left[ {q} \right]$, $\forall q$, can be expressed as
${\big[ {{{\mathbf{f}}_{{\text{DL}}}}\left[ q \right]} \big]_{m}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}{e^{{\text{j}}{{\left[ {{{\bm{\Xi }}_{{\text{DL}}}}} \right]}_{m}}}}$ for $1 \le m \le N_{\text{BS}}$,
and $\left[{{{{{\bm{\Xi }}_{{\text{DL}}}}}}}\right]_{m}$ denotes the phase value connecting the $m$-th antenna and the activated RF chain.
By collecting the received signals from $Q$ time slots, the aggregate received signals can be expressed as
\begin{equation}\label{received_signal_rewritten}
{{\mathbf{y}}_{{\text{DL}}}}\left[ k \right] = {\mathbf{F}}_{{\text{DL}}}^{\text{T}}{{\mathbf{h}}_{{\text{DL}}}}\left[ k \right] + {{ {\mathbf{n}}}_{{\text{DL}}}}\left[ k \right],
\end{equation}
where ${{\mathbf{y}}_{{\text{DL}}}}\left[ k \right] = {\big[ {{y_{{\text{DL}}}}\left[ {k,1} \right],\cdots ,{y_{{\text{DL}}}}\left[ {k,Q} \right]} \big]^{\text{T}}} \in {\mathbb{C}^{Q \times 1}}$,
${{\mathbf{F}}_{{\text{DL}}}} = \big[ {{{\mathbf{f}}_{{\text{DL}}}}\left[ {1} \right], \cdots ,{{\mathbf{f}}_{{\text{DL}}}}\left[ {Q} \right]} \big] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times Q}}$, and
${{ {\mathbf{n}}}_{{\text{DL}}}}\left[ k \right] = {\big[ {{{{n}}_{{\text{DL}}}}\left[ {k,1} \right],\cdots ,{{{n}}_{{\text{DL}}}}\left[ {k,Q} \right]} \big]^{\text{T}}} \in {\mathbb{C}^{Q \times 1}}$.
Similar to (\ref{uplink received signal K}), we can collect ${{\mathbf{y}}_{{\text{DL}}}}\left[ k \right]$ for $1 \le k \le K$ from $K$ subcarriers to obtain
\begin{equation}\label{downlink received signal K}
{{\mathbf{Y}}_{{\text{DL}}}} = {\mathbf{F}}_{{\text{DL}}}^{\text{T}}{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}} + {{ {\mathbf{N}}}_{{\text{DL}}}},
\end{equation}
where ${\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}} = \big[ {{{\mathbf{h}}_{{\text{DL}}}}\left[ 1 \right], \cdots ,{{\mathbf{h}}_{{\text{DL}}}}\left[ K \right]} \big] \in {\mathbb{C}^{{N_{{\text{BS}}}} \times K}}$ denotes the downlink spatial-frequency domain channel matrix,
${{\mathbf{Y}}_{{\text{DL}}}} = \big[ {{{\mathbf{y}}_{{\text{DL}}}}\left[ 1 \right], \cdots ,{{\mathbf{y}}_{{\text{DL}}}}\left[ K \right]} \big] \in {\mathbb{C}^{Q \times K}}$,
and
${{ {\mathbf{N}}}_{{\text{DL}}}} = \big[ {{{ {\mathbf{n}}}_{{\text{DL}}}}\left[ 1 \right], \cdots ,{{ {\mathbf{n}}}_{{\text{DL}}}}\left[ K \right]} \big] \in {\mathbb{C}^{Q \times K}}$.
\subsection{Channel Model}
According to typical mmWave channel models \cite{heath_JSTSP, LY_JSTSP, LY_TWC, GZ_TSP, LAW_Tcom},
the downlink delay-domain continuous channel vector ${\mathbf{h}}_{\text{DL}}( \tau ) \in {\mathbb{C}^{{N_{{\text{BS}}}} \times 1}}$ can be expressed as
\begin{equation}\label{delay channel}
{\mathbf{h}}_{\text{DL}}( \tau ) = \sqrt {\frac{{{N_{{\text{BS}}}}}}{L}} \sum\limits_{l = 1}^L {{\beta _l}p( {\tau - {\tau _l}} ){\mathbf{a}}( {{\varphi _l}} )} ,
\end{equation}
where ${\beta _l} \sim \mathcal{C}\mathcal{N}( {0,\sigma _\alpha ^2} )$ and ${\tau _l}$ denote the propagation gain and delay corresponding to the $l$-th path, respectively,
$p( \tau )$ is the pulse shaping filter,
and ${{\varphi _l}}$ is the angle-of-departure (AoD) of the $l$-th path at the BS.
Moreover, the frequency-domain channel ${\mathbf{h}}_{\text{DL}}\left[ k \right]$ at the $k$-th subcarrier can be expressed as
\begin{equation}\label{frequency channel}
{\mathbf{h}}_{\text{DL}}\left[ k \right] = \sqrt {\frac{{{N_{{\text{BS}}}}}}{L}} \sum\limits_{l = 1}^L {{\beta _l}{e^{ - {\text{j}}\frac{{2\pi k{f_s}{\tau _l}}}{K}}}{\mathbf{a}}( {{\varphi _l}} )},
\end{equation}
where ${{f_s}}$ is the system sampling rate.
Since the BS is equipped with a ULA, the corresponding array steering vector ${\mathbf{a}}( \theta ) \in {\mathbb{C}^{{N_{{\text{BS}}}} \times 1}}$ can be written as
\begin{equation}\label{steeting vector}
{\mathbf{a}}( \theta ) = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}{\left[ {1,{e^{ - {\text{j}}\frac{{2\pi d}}{\lambda }\sin ( \theta )}}, \cdots ,{e^{ - {\text{j}}\frac{{2\pi d}}{\lambda }( {{N_{{\text{BS}}}} - 1} )\sin ( \theta )}}} \right]^{\text{T}}},
\end{equation}
where $\lambda$ is the carrier wavelength, and $d$ is the adjacent antenna spacing usually satisfying $d = \lambda / 2$.
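To make the channel model concrete, the following NumPy sketch generates the steering vector of (\ref{steeting vector}) and the frequency-domain channel of (\ref{frequency channel}); it is an illustrative implementation (function names and the example parameters are our own assumptions, not the authors' code).

```python
import numpy as np

def steering_vector(theta, n_bs, d_over_lambda=0.5):
    """ULA steering vector a(theta) of the steering-vector equation, with d = lambda/2."""
    n = np.arange(n_bs)
    return np.exp(-1j * 2 * np.pi * d_over_lambda * n * np.sin(theta)) / np.sqrt(n_bs)

def freq_channel(betas, taus, phis, n_bs, K, f_s):
    """Frequency-domain channel h_DL[k] for k = 1..K, summing L paths."""
    L = len(betas)
    H = np.zeros((n_bs, K), dtype=complex)
    for k in range(1, K + 1):
        for beta, tau, phi in zip(betas, taus, phis):
            # gain * delay phase rotation * array response, scaled by sqrt(N_BS/L)
            H[:, k - 1] += (np.sqrt(n_bs / L) * beta
                            * np.exp(-1j * 2 * np.pi * k * f_s * tau / K)
                            * steering_vector(phi, n_bs))
    return H
```

Each steering vector has unit norm, so the channel power is governed by the path gains $\beta_l$.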
\section{MDDL-Based TDD Uplink Channel Estimation}
In this section, we first propose an improved frame structure design for optimizing the channel estimation duration.
Second, we propose to jointly train the RF PSN and the channel estimator as an auto-encoder.
Finally, we develop an MMV-LAMP network, which can both exploit the structured sparsity from an {\it{a priori}} model and adaptively learn the trainable parameters from the data samples.
\subsection{The Proposed Transmit Frame Structure Design}
The proposed frame structure is illustrated in Fig. \ref{frame structure}, where cyclic prefix (CP)-OFDM is employed to combat time-dispersive channels and the time-frequency radio resources are divided into multiple resource elements to convey the pilot signals and payload data. Specifically, a frame comprising $T$ time slots is divided into two phases in the time domain, where the first $Q$ time slots (i.e., the pilot phase) are used to transmit pilot signals and the remaining $(T-Q)$ time slots (i.e., the data transmission phase) are reserved for payload data transmission only. In the pilot phase, we denote the OFDM DFT length as $P_{L} = {N_{{\text{cp}}}}$, where ${N_{{\text{cp}}}}$ is the length of the CP. Therefore, the subcarrier spacing is ${B_s}/{P_L}$ and each CP-OFDM symbol duration is $( {{N_{{\text{cp}}}} + {P_L}} )/{B_s}$, where ${B_s}$ is the system bandwidth. On the other hand, in the data transmission phase, we consider an OFDM DFT length ${D_L} \gg {P_L}$, so that each CP-OFDM symbol duration is $( {{N_{{\text{cp}}}} + {D_L}} )/{B_s}$.
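For concreteness, the subcarrier spacing and symbol durations above can be computed as follows; the numeric values of $B_s$, $N_{\text{cp}}$, and $D_L$ are illustrative assumptions, not values from the paper.

```python
# Illustrative parameters (assumptions, not values from the paper).
B_s = 100e6        # system bandwidth [Hz]
N_cp = 64          # CP length
P_L = N_cp         # pilot-phase DFT length, P_L = N_cp
D_L = 2048         # data-phase DFT length, D_L >> P_L

subcarrier_spacing = B_s / P_L                  # B_s / P_L
pilot_symbol_duration = (N_cp + P_L) / B_s      # (N_cp + P_L) / B_s
data_symbol_duration = (N_cp + D_L) / B_s       # (N_cp + D_L) / B_s
```

Because $D_L \gg P_L$, each data-phase CP-OFDM symbol is much longer than a pilot-phase symbol, which keeps the pilot phase short.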
\subsection{The Developed MMV-LAMP Network}
In this subsection, we detail the developed MMV-LAMP network.
Without loss of generality, we consider a typical MMV CS problem
\begin{equation}\label{MMV_problem}
{\mathbf{Y}} = {\mathbf{AX}} + {\mathbf{N}},
\end{equation}
where ${\mathbf{Y}} \in {\mathbb{C}^{M \times K}}$ is a noisy measurement,
${\mathbf{A}} \in {\mathbb{C}^{M \times N}}$ is a measurement matrix,
${\mathbf{X}}\in {\mathbb{C}^{N \times K}}$ is a sparse matrix whose columns $\big\{ {{\mathbf{X}}( {:,i} )} \big\}_{i = 1}^K$ share a common sparsity, and ${\mathbf{N}} \in {\mathbb{C}^{M \times K}}$ is the additive white Gaussian noise (AWGN).
To solve the MMV CS problem in (\ref{MMV_problem}) efficiently, the developed MMV-LAMP network has two features: i) it fully exploits an {\it{a priori}} model, i.e., the structured sparsity of ${\mathbf{X}}$; ii) by integrating the trainable parameters into the unfolded iterations of conventional AMP algorithms, it can adaptively learn and optimize the network from data samples.
Specifically, as for the $t$-th layer $( {1 \le t \le T} )$ of the developed MMV-LAMP network, the key procedure includes
\begin{subequations}
\begin{align}
{{\mathbf{R}}_t} &= {\mathbf{\widehat X}}_{t - 1} + {{\mathbf{B}}}{{\mathbf{V}}_{t-1}} , \\
{\mathbf{\widehat X}}_t &= {{\eta }}( {{{\mathbf{R}}_t};{{\bm{\theta }}},{\sigma _t}} ), \\
{{\mathbf{V}}_t} &= {\mathbf{Y}} - {\mathbf{A\widehat X}}_{t} + {b_t}{{\mathbf{V}}_{t - 1}}, \label{residual}
\end{align}
\end{subequations}
where ${{\mathbf{V}}_0} = \mathbf{Y}$, ${\mathbf{\widehat X}}_0 = \mathbf{0}$, and
\begin{align}
{\sigma _t} &= \frac{1}{{\sqrt {MK} }}{\left\| {{{\mathbf{V}}_{t - 1}}} \right\|_F}, \label{epsilon_t} \\
{b_t}{\mathbf{I}} &= \frac{1}{M}\sum\limits_{j = 1}^N {\frac{{\partial {{\left[ {{{\eta}} ( {{{\mathbf{R}}_t};{{\bm{\theta}}},{\sigma _t}} )} \right]}_j}}}{\partial \left[ {{{\mathbf{R}}_t}( {j,:} )} \right]}}. \label{b_t}
\end{align}
Note that the residual ${{\mathbf{V}}_t}$ in (\ref{residual}) includes the ``Onsager correction'' term ${b_{t}}{{\mathbf{V}}_{t - 1}}$, which is introduced into the conventional AMP algorithm to accelerate convergence \cite{LAMP_TSP}.
Moreover, the shrinkage function $\eta ( { \cdot ; \cdot } )$ can be expressed as
\begin{equation}\label{eta}
{\left[ {\eta ( {{{\mathbf{R}}_t};{{\bm{\theta }}},{\sigma _t}} )} \right]_j} = \frac{{{{\mathbf{r}}_{t,j}}}}{{{\pi _t}\left[ {1 + \exp ( {{\psi _t} - \frac{{{\mathbf{r}}_{t,j}^{\text{H}}{{\mathbf{r}}_{t,j}}}}{{2\sigma _t^2{\pi _t}}}})} \right]}},
\end{equation}
where ${{\mathbf{r}}_{t,j}} = {{{\mathbf{R}}_t}( {j,:} )}$ denotes the $j$-th row of ${{\mathbf{R}}_t}$, and ${\pi _t}$ and ${\psi _t}$ are given by
\begin{align}
{\pi _t} &= 1 + \frac{{\sigma _t^2}}{{{\theta _{1}}}}, \label{pi} \\
{\psi _t} &= K\log ( {1 + \frac{{{\theta _{1}}}}{{\sigma _t^2}}} ) + {\theta _{2}}. \label{psi}
\end{align}
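The shrinkage step (\ref{eta})--(\ref{psi}) can be sketched in NumPy as follows; this is an illustrative implementation under our own naming, not the authors' code.

```python
import numpy as np

def mmv_shrinkage(R, theta, sigma):
    """Row-wise shrinkage eta(R; theta, sigma) of the shrinkage equation.

    R: N x K matrix of pseudo-observations; theta = (theta_1, theta_2).
    Each row r_j is scaled by a common gain, preserving the row structure.
    """
    theta1, theta2 = theta
    K = R.shape[1]
    pi_t = 1.0 + sigma**2 / theta1                        # pi_t
    psi_t = K * np.log1p(theta1 / sigma**2) + theta2      # psi_t
    energy = np.sum(np.abs(R)**2, axis=1, keepdims=True)  # r_j^H r_j per row
    arg = np.clip(psi_t - energy / (2 * sigma**2 * pi_t), -60.0, 60.0)
    return R / (pi_t * (1.0 + np.exp(arg)))
```

Rows with larger energy $ \mathbf{r}_{t,j}^{\text{H}}\mathbf{r}_{t,j}$ are attenuated less, so the function acts as a soft row-selection operator, matching the common sparsity across the $K$ columns.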
Note that, different from the learned denoising-based approximate message passing (LDAMP) network [36], where the denoiser module ${D_{{{\hat \sigma }^l}}}( \cdot )$ of the DAMP algorithm is replaced with a denoising convolutional neural network (DnCNN), we derive the shrinkage function $\eta ( { \cdot ; \cdot } )$ in closed form, and it plays the role of the nonlinear activation function in deep learning.
Moreover, instead of processing each scalar element $r$ as in the existing schemes \cite{LAMP_TSP, HHT_WC,HHT_LDAMP,AMP_WCL},
the developed MMV-LAMP network processes the row vector ${{\mathbf{r}}_j}$ for $1 \le j \le N$, thereby fully exploiting the structured sparsity of ${\mathbf{X}}$ from the {\it{a priori}} model.
The derivation of the developed MMV-LAMP network is shown in Appendix.
Additionally, in order to avoid the performance degradation caused by the mismatch between the continuous angles and the discrete dictionary, we integrate the redundant dictionary into the CRN (i.e., the decoder) to improve the channel estimation performance.
Fig. \ref{LAMP} illustrates the $t$-th layer architecture of the developed MMV-LAMP network.
In Fig. \ref{LAMP}, the inputs are ${{\mathbf{\widehat X}}_{t - 1}} \in {\mathbb{C}^{N \times K}}$, ${{\mathbf{V}}_{t - 1}} \in {\mathbb{C}^{M \times K}}$, and ${\mathbf{Y}} \in {\mathbb{C}^{M \times K}}$, where ${{\mathbf{\widehat X}}_{t - 1}}$ and ${{\mathbf{V}}_{t - 1}}$ are the outputs of the previous $( {t - 1} )$-th layer, and $\mathbf{Y}$ is the noisy measurement in (\ref{MMV_problem}).
Moreover, at the network training stage, we define ${\mathbf{B}} \in {\mathbb{C}^{N \times M}}$ and ${\bm{\theta }} = \left\{ {{\theta _1},{\theta _2}} \right\}$ as the trainable parameters of the MMV-LAMP network, which are identical for all $T$ layers.
\begin{figure}[t]
\centering
\includegraphics[width=252pt]{Fig/Fig_1.eps}
\caption{The proposed frame structure for the communication transmission.}
\label{frame structure}
\end{figure}
\subsection{MMV-LAMP Network Based Uplink Channel Estimation}
The block diagram of the proposed MMV-LAMP network based uplink channel estimation scheme is depicted in Fig. \ref{CE_LAMP}, which contains the CCN and CRN.
The CCN corresponds to the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ (encoder) in (\ref{uplink received signal K}), and the CRN corresponds to the channel estimator (decoder) at the BS.
Specifically, the input and output of the CCN are the uplink spatial-frequency domain channel matrix ${\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}$ and the noiseless pilot signals received at the BS.
Note that the parameters of the CCN, $\left\{ {\bm{\Xi }}_{\text{UL}} \right\}$, correspond to the phase values of the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ in (\ref{uplink received signal K}).
Moreover, the CRN consists of $T$ layers and each has the same network structure and trainable parameters $\left\{ {{\mathbf{B}}_{\text{UL}},{\bm{\theta }}}_{\text{UL}} \right\}$ as shown in Fig. \ref{LAMP}, whose output is the estimated angle-frequency domain channel matrix $\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}$.
At the offline training stage, we jointly train the overall network parameters $\left\{ {\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}} \right\}$ as an auto-encoder in an end-to-end approach.
Finally, the estimated spatial-frequency domain channel matrix ${\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}}$ can be obtained by multiplying a devised redundant dictionary matrix.
\subsubsection{Fully-Connected CCN}
Consider ${{\mathbf{Y}}_{{\text{UL}}}} = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}} + { {\mathbf{N}}}_{\text{UL}}$ in (\ref{uplink received signal K}). In order to mimic the linear compression of the high-dimensional channels, the combining matrix ${\mathbf{F}}_{{\text{UL}}}$ can be well modeled as a CCN realized by a fully-connected layer without biases or a nonlinear activation function.
Note that the combining matrix ${\mathbf{F}}_{{\text{UL}}}$ is a complex-valued matrix and satisfies the constant modulus constraint due to the RF PSN adopted in the hybrid MIMO architecture, and the expression of the combining matrix ${\mathbf{F}}_{{\text{UL}}}$ is given by
\begin{align}\label{W}
{{\mathbf{F}}_{{\text{UL}}}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}\exp( {{\text{j}}{{\bm{\Xi }}_{{\text{UL}}}}} )
= \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}\left[ {\cos ( {{\bm{\Xi }}_{\text{UL}}} ) + {\text{j}}\sin ( {{\bm{\Xi }}_{\text{UL}}} )} \right],
\end{align}
where ${\text{j}} = \sqrt { - 1}$ and ${\left[ {{\bm{\Xi }}_{\text{UL}}} \right]_{m,n}} \in \left[ {0,2\pi } \right)$.
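The constant-modulus construction in (\ref{W}) can be sketched as follows; since only the real phase matrix is a free parameter, the modulus constraint holds by construction (the function name is our own).

```python
import numpy as np

def combining_matrix(Xi):
    """Constant-modulus F_UL of eq. (W) built from real-valued phases Xi.

    Xi: N_BS x Q real matrix with entries in [0, 2*pi). Only Xi would be
    trainable, so every entry of F_UL keeps modulus 1/sqrt(N_BS).
    """
    n_bs = Xi.shape[0]
    return (np.cos(Xi) + 1j * np.sin(Xi)) / np.sqrt(n_bs)
```

This mirrors the trick described below: the network trains the real phases $\bm{\Xi}_{\text{UL}}$ rather than the complex matrix itself.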
Since complex-valued weights are not well supported by most deep learning frameworks (e.g., TensorFlow, PyTorch), it is difficult to directly train the complex-valued combining matrix ${\mathbf{F}}_{{\text{UL}}}$.
Hence, for the fully-connected CCN, we choose to train the real-valued PSN phases $\left\{ {\bm{\Xi}}_{\text{UL}} \right\}$; in other words, we define the real-valued PSN phases as the real-valued trainable parameters of the fully-connected CCN.
Moreover, the structure of the proposed fully-connected CCN is shown in Fig. \ref{dimension reduction network}, where the trainable parameter of the CCN is $\left\{ {\bm{\Xi}}_{\text{UL}} \right\}$ and the corresponding weight matrix of the CCN is $\exp ( {{\text{j}}{\bm{\Xi} _{{\text{UL}}}}} )/\sqrt {{N_{{\text{BS}}}}} $.
Hence, the parameters of the fully-connected layer are regarded as the phases of PSN and can be learned at the deep learning training stage.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{Fig/Fig_2.eps}
\caption{The $t$-th layer architecture of the developed MMV-LAMP network with the trainable parameters $\left\{ {{\mathbf{B}},{\bm{\theta }}} \right\}$.}
\label{LAMP}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=6.68in]{Fig/Fig_3.eps}
\caption{The block diagram of the proposed MDDL-based uplink channel estimation solution, which includes a CCN and an MMV-LAMP network based CRN.}
\label{CE_LAMP}
\end{figure*}
\subsubsection{CRN Based on MMV-LAMP Network}
First, we detail the devised redundant dictionary matrix.
Specifically, massive MIMO channels are sparse in the angle domain, and the accuracy of the CRN depends heavily on this sparsity, which may be weakened by the power leakage \cite{WZW_TVT}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{Fig/Fig_4.eps}
\caption{The proposed fully-connected CCN with the trainable parameters $\left\{ {\bm{\Xi}}_{\text{UL}} \right\}$, which correspond to the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$.}
\label{dimension reduction network}
\end{figure}
Therefore, we design a redundant dictionary matrix ${\mathbf{D}}$ with a finer angular resolution to transform the spatial-frequency domain channel matrix ${{\mathbf{H}}^{{\text{sf}}}}$ into the angle-frequency domain channel matrix ${{\mathbf{H}}^{{\text{af}}}}$, which can be expressed as
\begin{equation}\label{angular domain channel}
{{\mathbf{H}}^{\text{sf}}} = {{\mathbf{D}}^{\text{H}}}{{\mathbf{H}}^{\text{af}}},
\end{equation}
where the redundant dictionary matrix ${\mathbf{D}} \in {\mathbb{C}^{G \times {N_{{\text{BS}}}}}}$ is composed of $G$ steering vectors ${\mathbf{a}}( {{\phi _g}} )$ for $1 \le g \le G$, i.e., ${\mathbf{D}} = {\left[ {{\mathbf{a}}( {{\phi _1}} ),{\mathbf{a}}( {{\phi _2}} ), \cdots ,{\mathbf{a}}( {{\phi _G}} )} \right]^{\text{T}}}$, with ${\mathbf{a}}( {{\phi _g}} )$ the array steering vector in (\ref{steeting vector}). The range of AoDs is quantized into $G$ grids by defining $\sin ( {{\phi _g}} ) = - 1 + 2( {g - 1})/G$ for $g = 1,2, \cdots ,G$.
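The dictionary construction can be sketched as a few lines of NumPy (an illustrative implementation; the function name and default $d/\lambda$ are assumptions):

```python
import numpy as np

def redundant_dictionary(G, n_bs, d_over_lambda=0.5):
    """Redundant dictionary D of the angle-domain transform, G x N_BS.

    Row g is the steering vector a(phi_g)^T with
    sin(phi_g) = -1 + 2(g-1)/G, quantizing the AoD range into G grids.
    """
    g = np.arange(1, G + 1)
    sin_phi = -1.0 + 2.0 * (g - 1) / G     # uniform grid over sin(AoD)
    n = np.arange(n_bs)
    return np.exp(-1j * 2 * np.pi * d_over_lambda
                  * np.outer(sin_phi, n)) / np.sqrt(n_bs)
```

Choosing $G > N_{\text{BS}}$ yields a finer angular resolution, which mitigates the power-leakage effect at the price of a larger sparse recovery problem.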
Therefore, estimating ${\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}$ in (\ref{uplink received signal K}) is equivalent to estimating ${\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}$ represented in the angle-domain redundant dictionary $\mathbf{D}$, i.e.,
\begin{align}\label{LAMP_CE_problem}
{{\mathbf{Y}}_{{\text{UL}}}} = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}{{\mathbf{D}}^{\text{H}}}{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}} + {{{\mathbf{ N}}}_{{\text{UL}}}}
= {\mathbf{A}}_{\text{UL}}{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}} + {{{\mathbf{ N}}}_{{\text{UL}}}},
\end{align}
where ${\mathbf{A}}_{\text{UL}} = {\mathbf{F}}_{{\text{UL}}}^{\text{H}}{{\mathbf{D}}^{\text{H}}}$ is the effective measurement matrix.
It is worth noting that the $k$-th column of ${\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}$, i.e., ${\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )$, is a sparse vector, and $\big\{ {{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )} \big\}_{k = 1}^K$ share a common sparsity \cite{GZ_TSP}.
Consequently, the sparse channel estimation problem can be formulated as an MMV sparse matrix recovery problem in CS. Note that, given the received signals, the channel matrix ${\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}$ can be estimated by solving the following optimization problem
\begin{align}\label{CS_optimization}
\mathop {\min }\limits_{{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}} &{\left( {\sum\limits_{k = 1}^K {\left\| {{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )} \right\|_0^2} } \right)^{1/2}} \\
{\text{s}}{\text{.t}}{\text{.}} &{{\left\| {{{\mathbf{Y}}_{{\text{UL}}}} - {{\mathbf{A}}_{{\text{UL}}}}{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}} \right\|_F} \le \delta}, \nonumber \\
{\text{and}} {\ } {\big\{ {{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )} \big\}_{k = 1}^K}{\ }&{\text{share the common sparse support set}}, \nonumber
\end{align}
where ${\left\| {{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )} \right\|_0}$ is the number of non-zero elements of ${{\mathbf{H}}_{{\text{UL}}}^{{\text{af}}}( {:,k} )}$ and $\delta$ is the error tolerance parameter.
By replacing the ${l_0}$-norm with the ${l_1}$-norm, various CS algorithms can be utilized to solve the problem, such as the simultaneous orthogonal matching pursuit (SOMP) algorithm \cite{GZ_WC2} and the MMV-AMP algorithm \cite{KML_TSP}.
However, these conventional CS algorithms cannot achieve satisfactory channel estimation accuracy.
To efficiently solve the MMV CS problem in (\ref{CS_optimization}), we further develop an MMV-LAMP network with $T$ layers as illustrated in Fig. \ref{reconstruction network} and summarized in Algorithm \ref{CE_LAMP_Algorithm}, which can reconstruct the high-dimensional angle-frequency domain channel matrix $\widehat {{\mathbf{H}}}_{{\text{UL}}}^{{\text{af}}}$ from the low-dimensional received signals ${{\mathbf{Y}}_{{\text{UL}}}}$.
Specifically, the input is the received signals ${{\mathbf{Y}}_{{\text{UL}}}}$ and the output of the $t$-th layer is the estimated angle-frequency domain channel matrix $\widehat {\mathbf{H}}_{{\text{UL}},t}^{{\text{af}}}$.
In addition, the initial values $\widehat {\mathbf{H}}_{{\text{UL}},0}^{{\text{af}}}$ and ${{\mathbf{V}}_0}$ are denoted as $\widehat {\mathbf{H}}_{{\text{UL}},0}^{{\text{af}}} = {\mathbf{0}}$ and ${{\mathbf{V}}_0} = {{\mathbf{Y}}_{{\text{UL}}}}$, respectively.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.71]{Fig/Fig_5.eps}
\caption{The proposed CRN based on MMV-LAMP network with the trainable parameters $\left\{ {{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}.$}
\label{reconstruction network}
\end{figure*}
\begin{algorithm}[t]
\caption{MMV-LAMP network based CRN}
\label{CE_LAMP_Algorithm}
\begin{algorithmic}[1]
\REQUIRE The received signal ${\mathbf{Y}_{\text{UL}}}$, the measurement matrix ${\mathbf{A}}_{\text{UL}}$, the number of layers $T$.
\ENSURE The output of the $T$-th layer MMV-LAMP network
${\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}} = {{\mathbf{D}}^{\text{H}}}{\widehat {\mathbf{H}}^{{\text{af}}}_{{\text{UL}},T}}$.
\STATE Initialization: ${{\mathbf{V}}_0} = {\mathbf{Y}_{\text{UL}}}$, ${\widehat {\mathbf{H}}_{{\text{UL,}}0}^{{\text{af}}}} = {\mathbf{0}}$, ${\mathbf{B}}_{\text{UL}} = {{\mathbf{A}}_{\text{UL}}^{\text{H}}}$, ${\bm{\theta }}_{\text{UL}} = \left\{ {1,1} \right\}$.
\FOR {$t = 1,2, \cdots ,T$}
\STATE ${{\mathbf{R}}_t} = {\widehat {\mathbf{H}}_{{\text{UL,}}t-1}^{{\text{af}}}}+ {\mathbf{B}}_{\text{UL}}{{\mathbf{V}}_{t - 1}}$
\STATE ${\sigma _t} = \frac{1}{{\sqrt {QK} }}{\left\| {{{\mathbf{V}}_{t - 1}}} \right\|_F}$
\STATE ${\widehat {\mathbf{H}}_{{\text{UL,}}t}^{{\text{af}}}} = \eta ( {{{\mathbf{R}}_t};{\bm{\theta }}_{\text{UL}},{\sigma _t}} )$
\STATE ${b_t} = \frac{1}{Q}\sum\limits_{j = 1}^G {\frac{{\partial {{\left[ {{{\eta}} ( {{{\mathbf{R}}_t};{{\bm{\theta}}}_{\text{UL}},{\sigma _t}} )} \right]}_j}}}{\partial \left[ {{{\mathbf{R}}_t}( {j,:} )} \right]}}$
\STATE ${{\mathbf{V}}_t} = {\mathbf{Y}_{\text{UL}}} - {\mathbf{A}}_{\text{UL}}{\widehat {\mathbf{H}}_{{\text{UL,}}t}^{{\text{af}}}} + {b_t}{{\mathbf{V}}_{t - 1}}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
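A minimal NumPy sketch of Algorithm \ref{CE_LAMP_Algorithm} with the untrained initial values ${\mathbf{B}}_{\text{UL}} = {\mathbf{A}}_{\text{UL}}^{\text{H}}$ and ${\bm{\theta}}_{\text{UL}} = \{1,1\}$ is given below. Two points are assumptions on our part: the function names are illustrative, and the Onsager coefficient $b_t$ of (\ref{b_t}) is approximated by a Monte-Carlo divergence estimate rather than the analytic derivative derived in the Appendix.

```python
import numpy as np

def mmv_shrinkage(R, theta, sigma):
    """Row-wise shrinkage eta(R; theta, sigma)."""
    theta1, theta2 = theta
    K = R.shape[1]
    pi_t = 1.0 + sigma**2 / theta1
    psi_t = K * np.log1p(theta1 / sigma**2) + theta2
    energy = np.sum(np.abs(R)**2, axis=1, keepdims=True)
    arg = np.clip(psi_t - energy / (2 * sigma**2 * pi_t), -60.0, 60.0)
    return R / (pi_t * (1.0 + np.exp(arg)))

def mmv_lamp_crn(Y, A, D, T=5, theta=(1.0, 1.0), eps=1e-4, seed=0):
    """Untrained forward pass of the MMV-LAMP CRN (Algorithm 1 sketch).

    Y: Q x K pilots, A: Q x G measurement matrix, D: G x N_BS dictionary.
    b_t is a Monte-Carlo surrogate for the analytic derivative (assumption).
    """
    Q, K = Y.shape
    G = A.shape[1]
    rng = np.random.default_rng(seed)
    B = A.conj().T                                  # B_UL = A_UL^H
    X = np.zeros((G, K), dtype=complex)
    V = Y.astype(complex).copy()
    for _ in range(T):
        R = X + B @ V                                       # step 3
        sigma = max(np.linalg.norm(V) / np.sqrt(Q * K), 1e-12)  # step 4
        X = mmv_shrinkage(R, theta, sigma)                  # step 5
        probe = rng.standard_normal(R.shape)                # divergence probe
        dX = mmv_shrinkage(R + eps * probe, theta, sigma) - X
        b = np.real(np.sum(probe * dX)) / (eps * Q * K)     # step 6 (surrogate)
        V = Y - A @ X + b * V                               # step 7 (Onsager)
    return D.conj().T @ X                                   # H^sf = D^H H^af_T
```

In the trained network, $\mathbf{B}$ and $\bm{\theta}$ are learned per the strategy of Algorithm 2 instead of being fixed at these initial values.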
As for the $t$-th layer, the trainable parameter ${\mathbf{B}}_{\text{UL}}$ can be expressed as
\begin{equation}\label{B}
{\mathbf{B}}_{\text{UL}} = \operatorname{Re} \left\{ {\mathbf{B}}_{\text{UL}} \right\} + {\text{j}}\operatorname{Im} \left\{ {\mathbf{B}}_{\text{UL}} \right\},
\end{equation}
where $\operatorname{Re} \left\{ {\mathbf{B}}_{\text{UL}} \right\}$ and $\operatorname{Im} \left\{ {\mathbf{B}}_{\text{UL}} \right\}$ denote the real and imaginary parts of the trainable parameter ${\mathbf{B}}_{\text{UL}}$, respectively.
In other words, in order to achieve mathematical complex-valued processing in the MMV-LAMP network, we define two real-valued trainable parameters $\operatorname{Re} \left\{ {\mathbf{B}}_{\text{UL}} \right\}$ and $\operatorname{Im} \left\{ {\mathbf{B}}_{\text{UL}} \right\}$ to form the complex-valued trainable parameters $\left\{ {\mathbf{B}}_{\text{UL}} \right\}$.
Finally, the final estimated spatial-frequency domain channel matrix based on the output of the $T$-th layer is given by
\begin{equation}\label{estimated spatial-frequency domain channel}
{\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}} = {{\mathbf{D}}^{\text{H}}}\widehat {\mathbf{H}}_{{\text{UL}},T}^{{\text{af}}}.
\end{equation}
\subsubsection{Learning Strategy}
Inspired by the auto-encoder, we propose a novel layer-by-layer learning strategy to
jointly train the CCN (encoder) and CRN (decoder).
Specifically, at the offline training stage, we first generate the training data set $\big\{ {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf,}}n}} \big\}_{n = 1}^{{N_{{\text{train}}}}}$ according to (\ref{frequency channel}),
where ${{N_{{\text{train}}}}}$ is the number of channel samples in the training set,
and ${{{\mathbf{H}}^{{\text{sf}},n}_{\text{UL}}}}$ is not only the input of the CCN, but also the corresponding target output.
In order to jointly optimize the trainable parameters of the CCN and CRN, i.e., $\left\{ {\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}} \right\}$,
we define the normalized mean square error (NMSE) between the target value ${{{\mathbf{H}}^{{\text{sf}},n}_{\text{UL}}}}$ and the estimated spatial-frequency channel matrix of the $t$-th layer MMV-LAMP network $\widehat {\mathbf{H}}_{{\text{UL,}}t}^{{\text{sf}},n} = {{\mathbf{D}}^{\text{H}}}\widehat {\mathbf{H}}_{{\text{UL,}}t}^{{\text{af}},n}$ as the loss function of the $t$-th layer\footnote{The ultimate goal of the proposed MDDL-based channel estimation scheme is to obtain better NMSE performance; hence, we choose the NMSE rather than the MSE as the loss function.}, i.e.,
\begin{align}\label{NMSE_t}
{L_{\text{UL},t}} &\Big( {\big\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \big\}_t} \Big) \nonumber \\
&= \sum\limits_{n = 1}^N {\frac{{\left\| {\widehat {\mathbf{H}}_{{\text{UL}},t}^{{\text{sf}},n} - {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n}} \right\|_F^2}}{{\left\| {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n}} \right\|_F^2}}} \nonumber \\
&= \sum\limits_{n = 1}^N {\frac{{\left\| {{{\mathbf{D}}^{\text{H}}}}{{f_t}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n},{\big\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \big\}_t}} ) - {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n}} \right\|_F^2}}{{\left\| {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n}} \right\|_F^2}}},
\end{align}
where $N$ is the number of data samples in each batch of the training set, ${\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n}$ is the $n$-th uplink spatial-frequency domain channel sample,
and $\widehat {\mathbf{H}}_{{\text{UL}},t}^{{\text{af}},n} = {{f_t}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}},n},{\big\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \big\}_t}} )}$ is the output of the $t$-th layer MMV-LAMP network.
Note that ${f_t}( { \cdot , \cdot } )$ indicates the proposed uplink channel estimation solution including the CCN and CRN, where the CRN is iterated $t$ times in the $t$-th layer, i.e., ${f_t}( { \cdot , \cdot } ) = f^{{\text{CRN}}}( { \cdots f^{{\text{CRN}}}( {f^{{\text{CCN}}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf,}}n},{{\bm{\Xi }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} )$.
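As a sketch, the NMSE loss of (\ref{NMSE_t}) over a batch of $N$ channel samples can be computed as follows (the array shapes and function name are illustrative assumptions):

```python
import numpy as np

def nmse_loss(H_hat, H):
    """Batch NMSE loss: sum over samples of ||H_hat - H||_F^2 / ||H||_F^2.

    H_hat, H: arrays of shape (N, N_BS, K) holding the estimated and
    target spatial-frequency domain channel matrices of a batch.
    """
    err = np.sum(np.abs(H_hat - H)**2, axis=(1, 2))  # per-sample squared error
    ref = np.sum(np.abs(H)**2, axis=(1, 2))          # per-sample channel power
    return float(np.sum(err / ref))
```

Normalizing by each sample's channel power makes the loss insensitive to per-sample path-gain scaling, which is why the NMSE is preferred over the plain MSE here.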
The proposed layer-by-layer learning strategy is summarized in Algorithm \ref{learning parameters}, where the Adam optimizer with a learning rate of 0.001 is adopted \cite{Adam}.
Specifically, for the $1$-st layer, the input data is the target data ${{{\mathbf{H}}^{{\text{sf}}}_{\text{UL}}}}$ and the output is $\widehat {\mathbf{H}}_{{\text{UL,}}1}^{{\text{sf}}} = {{\mathbf{D}}^{\text{H}}}{f^{{\text{CRN}}}}( {{f^{{\text{CCN}}}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}},{{\bm{\Xi }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} )$, where ${f^{{\text{CCN}}}}( { \cdot , \cdot } )$ and ${f^{{\text{CRN}}}}( { \cdot , \cdot , \cdot } )$ denote the proposed CCN and CRN structures, respectively.
Therefore, we aim to optimize the $1$-st layer's trainable parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_1}$ by minimizing the loss function of the $1$-st layer ${L_{\text{UL},1}} \big( {\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_1} \big)$.
For the 2-nd layer, the input data is ${{{\mathbf{H}}^{{\text{sf}}}_{\text{UL}}}}$, but the output is $\widehat {\mathbf{H}}_{{\text{UL,2}}}^{{\text{sf}}} = {{\mathbf{D}}^{\text{H}}}{f^{{\text{CRN}}}}( {{f^{{\text{CRN}}}}( {{f^{{\text{CCN}}}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}},{{\bm{\Xi }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} )$.
And the trainable parameters and the loss function of the $2$-nd layer are ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_2}$ and ${L_{\text{UL},2}} \big( {\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_2} \big)$, respectively.
Note that, the initial values of the $2$-nd layer's trainable parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_2}$ are the obtained parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_1}$ from the $1$-st layer's training.
Similarly, for $T$-th layer, the input data is still the target data ${{{\mathbf{H}}^{{\text{sf}}}_{\text{UL}}}}$, and the output is $\widehat {\mathbf{H}}_{{\text{UL,}}T}^{{\text{sf}}}$, where $\widehat {\mathbf{H}}_{{\text{UL,}}T}^{{\text{sf}}} = {{\mathbf{D}}^{\text{H}}}{f^{{\text{CRN}}}}( { \cdots {f^{{\text{CRN}}}}( {{f^{{\text{CCN}}}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}},{{\bm{\Xi }}_{{\text{UL}}}}} ),{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} ), {{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} )$.
Also, the trainable parameters and the loss function of the $T$-th layer are ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_T}$ and ${L_{\text{UL},T}} \big( {\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_T} \big)$, respectively.
Note that, the initial values of the $T$-th layer's trainable parameters are the obtained parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_{T-1}}$ from the $(T-1)$-th layer's training.
In other words, the proposed layer-by-layer training strategy combines the conventional layer-by-layer and all-layer training strategies.
Considering the $t$-th layer's training $( {1 \le t \le T} )$, we adopt the all-layer training strategy, i.e., the trainable parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}},{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}_{t}}$ are jointly optimized; while from the perspective of the $T$ training rounds with increasing layer number $t$, it is a modified layer-by-layer training strategy.
After the trainable parameters $\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_{T}$ of the $T$-th layer are optimized,
we can obtain the complex-valued PSN and the MMV-LAMP network simultaneously, which can be adopted to design the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ and the channel estimator at the online channel estimation stage.
\begin{figure*}[t]
\centering
\includegraphics[width=483pt]{Fig/Fig_6.eps}
\caption{The block diagram of the proposed MDDL-based downlink channel estimation and feedback solution, where the green and yellow block diagrams represent that the modules are processed at the users and the BS, respectively.}
\label{feedback_overall}
\end{figure*}
\begin{algorithm}[t]
\caption{Learning strategy to jointly train CCN's parameters ${\left\{ {{{\bm{\Xi }}_{{\text{UL}}}}} \right\}}$ and CRN's parameters ${\left\{ {{{\mathbf{B}}_{{\text{UL}}}},{{\bm{\theta }}_{{\text{UL}}}}} \right\}}$}
\label{learning parameters}
\begin{algorithmic}[1]
\STATE Initialization: ${\mathbf{B}}_{\text{UL}} = {{\mathbf{A}}_{\text{UL}}^{\text{H}}}$, ${\bm{\theta }}_{\text{UL}} = \left\{ {1,1} \right\}$, ${\left[ {\mathbf{\Xi }}_{\text{UL}} \right]_{i,j}} \in \left[ {0,2\pi } \right)$, ${\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_0}=\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}$.
\FOR {$t = 1,2, \cdots ,T$}
\STATE Initialize ${\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_t}$ as $\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_{t-1}$
\STATE Learn ${\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_t}$ to minimize the loss function of $t$-th layer ${L_{\text{UL},t}}( {\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_t} )$
\ENDFOR
\ENSURE $\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\} = {\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}_T}.$
\end{algorithmic}
\end{algorithm}
\section{MDDL-Based FDD Downlink Channel Estimation and Feedback}
In this section, we first extend the proposed MDDL-based TDD uplink channel estimation scheme to the FDD downlink channel estimation.
Moreover, since the uplink/downlink channel reciprocity does not hold in FDD systems,
we further propose an MMV-LAMP network based channel feedback solution, whereby the channels' delay-domain sparsity is exploited for reducing the feedback overhead.
As shown in Fig. \ref{feedback_overall}, the block diagram of the proposed downlink channel estimation and feedback solution contains a CCN at the BS, a feedback compression network (FCN) at the users, and an FCRN at the BS, where the FCRN further consists of an FRSN and a CRN (this CRN is the same as that in Fig. \ref{CE_LAMP}).
\subsection{MMV-LAMP Network Based Downlink Channel Estimation}
Similar to Section III, estimating the ${\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}$ in (\ref{downlink received signal K}) is equivalent to estimating ${\mathbf{H}}_{{\text{DL}}}^{{\text{af}}}$ represented in the angle-domain redundant dictionary ${{\mathbf{D}}}$, i.e.,
\begin{align}\label{downlink LAMP_CE_problem}
{\mathbf{Y}_{\text{DL}}} = {{\mathbf{F}}_{\text{DL}}^{\text{T}}}{{\mathbf{D}}^{\text{H}}}{{\mathbf{H}}^{{\text{af}}}_{\text{DL}}} + {{\mathbf{N}}}_{\text{DL}}
= {\mathbf{A}}_{\text{DL}}{{\mathbf{H}}^{{\text{af}}}_{\text{DL}}} + { {\mathbf{N}}}_{\text{DL}},
\end{align}
where ${\mathbf{A}}_{\text{DL}} = {{\mathbf{F}}_{\text{DL}}^{\text{T}}}{{\mathbf{D}}^{\text{H}}} \in {\mathbb{C}^{Q \times G}}$ is the measurement matrix in CS.
We can observe that (\ref{LAMP_CE_problem}) and (\ref{downlink LAMP_CE_problem}) share a similar expression. Hence, the proposed MMV-LAMP network for uplink channel estimation at the BS can be reused for downlink channel estimation at the users.
Specifically, at the downlink channel estimation stage, the input of the CCN is the downlink spatial-frequency domain channel matrix ${{\mathbf{H}}^{{\text{sf}}}_{\text{DL}}}$ and the output is the received signals ${\mathbf{Y}_{\text{DL}}}$ at the user.
Similar to Section III, the trainable parameters of the CCN can be regarded as the real-valued phases of PSN, and the corresponding expression of the complex-valued beamforming matrix ${\mathbf{F}}_{\text{DL}}$ is given by
\begin{align}\label{F}
{\mathbf{F}}_{\text{DL}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}{{\mathbf{e}}^{{\text{j}}\left[ {{{\bm{\Xi }}_{{\text{DL}}}}} \right]}}
= \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}\left[ {\cos ( {{{\bm{\Xi }}_{{\text{DL}}}}} ) + {\text{j}}\sin ( {{{\bm{\Xi }}_{{\text{DL}}}}} )} \right].
\end{align}
On the other hand, for the CRN at the users, we replace the inputs ${\mathbf{Y}}_{\text{UL}}$ and ${\mathbf{A}}_{\text{UL}}$ with ${\mathbf{Y}}_{\text{DL}}$ and ${\mathbf{A}}_{\text{DL}}$, respectively, and the other terms are the same as in Algorithm \ref{CE_LAMP_Algorithm}.
As for the learning strategy, we train the trainable parameters $\left\{ {{\bm{\Xi }}_{\text{DL}},{\mathbf{B}}_{\text{DL}},{\bm{\theta }}_{\text{DL}}} \right\}$ in the downlink channel estimation based on Algorithm \ref{learning parameters} by replacing the inputs ${\mathbf{A}}_{\text{UL}}$ and $\left\{ {{\bm{\Xi }}_{\text{UL}},{\mathbf{B}}_{\text{UL}},{\bm{\theta }}_{\text{UL}}} \right\}$ with ${\mathbf{A}}_{\text{DL}}$ and $\left\{ {{\bm{\Xi }}_{\text{DL}},{\mathbf{B}}_{\text{DL}},{\bm{\theta }}_{\text{DL}}} \right\}$, respectively.
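As a rough numerical sketch of the phase-to-beamformer mapping (\ref{F}) and of the resulting measurement matrix ${\mathbf{A}}_{\text{DL}} = {{\mathbf{F}}_{\text{DL}}^{\text{T}}}{{\mathbf{D}}^{\text{H}}}$, the following Python snippet builds a constant-modulus beamforming matrix from random PSN phases. The reduced dimensions and the DFT-like dictionary grid are illustrative assumptions, not the trained quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
N_BS, Q, G = 64, 10, 256  # reduced from the paper's 256 / 40 / 1024 for a quick check

# Trainable CCN parameters: real-valued PSN phases Xi_DL in [0, 2*pi)
Xi_DL = rng.uniform(0.0, 2.0 * np.pi, size=(N_BS, Q))

# Constant-modulus beamforming matrix F_DL = exp(j * Xi_DL) / sqrt(N_BS)
F_DL = np.exp(1j * Xi_DL) / np.sqrt(N_BS)

# Hypothetical redundant dictionary D (G x N_BS): oversampled DFT-like angle grid
grid = np.arange(G) / G
D = np.exp(-2j * np.pi * np.outer(grid, np.arange(N_BS))) / np.sqrt(N_BS)

# Measurement matrix A_DL = F_DL^T D^H of the downlink CS model
A_DL = F_DL.T @ D.conj().T
print(A_DL.shape)  # (Q, G)
```

Note that every entry of $\mathbf{F}_{\text{DL}}$ has modulus $1/\sqrt{N_{\text{BS}}}$, which is exactly the hardware constraint imposed by the phase shifter network.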
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{Fig/Fig_7.eps}
\caption{The proposed FRSN based on MMV-LAMP network with the trainable parameters $\left\{ {{\mathbf{B}'},{\bm{\theta }}'} \right\}$.}
\label{FB_LAMP}
\end{figure*}
\subsection{MMV-LAMP Network Based Channel Feedback}
Given the received signal ${\mathbf{Y}}_{{\text{DL}}}$ in (\ref{downlink received signal K}) at the users, it can be rewritten as
\begin{align}\label{FB_received_signal_Q}
{\mathbf{Y}}_{{\text{DL}}}^{\text{T}} = {\mathbf{ H}}_{{\text{DL}}}^{{\text{fs}}}{{\mathbf{F}}_{{\text{DL}}}} + {\mathbf{N}}_{{\text{DL}}}^{\text{T}}
= {\mathbf{H}} _{{\text{DL}}}^{{\text{freq}}} + {\mathbf{N}}_{{\text{DL}}}^{\text{T}},
\end{align}
where ${\mathbf{ H}}_{{\text{DL}}}^{{\text{fs}}} = {( {{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}} )^{\text{T}}} \in {\mathbb{C}^{K \times {N_{{\text{BS}}}}}}$ denotes the frequency-spatial domain channel matrix and ${\mathbf{H}}_{{\text{DL}}}^{{\text{freq}}} = {\mathbf{H}}_{{\text{DL}}}^{{\text{fs}}}{{\mathbf{F}}_{{\text{DL}}}} \in {\mathbb{C}^{K \times Q}}$ is a frequency-domain channel matrix after spatial-domain compression.
In order to accurately acquire the CSI at the BS with reduced feedback overhead, we propose an FCN and an FCRN based on the MMV-LAMP network, which operate in two steps.
In the first step, by exploiting the channels' delay-domain sparsity, we compress the received pilots at the users by feeding back the received pilot signals on only a subset of the $K$ subcarriers.
In the second step, the compressed feedback signals received at the BS are regarded as the input of the FRSN to reconstruct the frequency-domain channel matrix $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{freq}}}$,
which is then input to the CRN (already well trained at the uplink channel estimation stage), so that the spatial-frequency domain channel matrix ${{\mathbf{H}}_{\text{DL}}^{{\text{sf}}}}$ is finally reconstructed at the BS.
\begin{algorithm}[t]
\caption{MMV-LAMP network based FRSN}
\label{FB_LAMP_Algorithm}
\begin{algorithmic}[1]
\REQUIRE The feedback signal ${\widetilde {\mathbf{Y}}}_{\text{DL}}$, the measurement matrix \\ \qquad ${\widetilde {\mathbf{U}}}$, the number of layers $T'$.
\ENSURE The output of the $T'$-th layer MMV-LAMP network \\ \qquad
${\widehat { {\mathbf{H}}}_{{\text{DL}}}^{\text{freq}}} = {\mathbf{U}}\widehat { {\mathbf{H}}}_{{\text{DL}},T'}^{{\text{delay}}}.$
\STATE Initialization: ${{\mathbf{V}}_0} = {\widetilde {\mathbf{Y}}}_{\text{DL}}$, ${\widehat { {\mathbf{H}}}_{{\text{DL,}}{0}}^{\text{delay}}} = {\mathbf{0}}$, ${\mathbf{B}'} = {\widetilde {\mathbf{U}}^{\text{H}}}$, ${\bm{\theta }'} = \left\{ {1,1} \right\}$.
\FOR {$t = 1,2, \cdots ,T'$}
\STATE ${{\mathbf{R}}_t} = {\widehat { {\mathbf{H}}}_{{\text{DL,}}{t-1}}^{\text{delay}}} + {\mathbf{B}'}{{\mathbf{V}}_{t - 1}}$
\STATE ${\sigma _t} = \frac{1}{{\sqrt {K_cQ} }}{\left\| {{{\mathbf{V}}_{t - 1}}} \right\|_F}$
\STATE ${\widehat { {\mathbf{H}}}_{{\text{DL,}}{t}}^{\text{delay}}} = \eta ( {{{\mathbf{R}}_t};{\bm{\theta }'},{\sigma _t}} )$
\STATE ${b_t} = \frac{1}{K_c}\sum\limits_{j = 1}^K {\frac{{\partial {{\left[ {{{\eta}} ( {{{\mathbf{R}}_t};{{\bm{\theta}'}},{\sigma _t}} )} \right]}_j}}}{{\partial \left[ {{{\mathbf{R}}_t}( {j,:} )} \right]}}}$
\STATE ${{\mathbf{V}}_t} = {\widetilde {\mathbf{Y}}}_{\text{DL}} - {\widetilde {\mathbf{U}}}{\widehat { {\mathbf{H}}}_{{\text{DL,}}{t}}^{\text{delay}}} + {b_t}{{\mathbf{V}}_{t - 1}}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Learning strategy to train FRSN's parameters $\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}$}
\label{FB learning parameters}
\begin{algorithmic}[1]
\STATE Initialization: ${\mathbf{B}'} = {\widetilde{ {\mathbf{U}} }^{\text{H}}}$, ${\bm{\theta }'} = \left\{ {1,1} \right\}$, ${\left\{ {\mathbf{B}'},{\bm{\theta }}' \right\}_0} = \left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}$.
\FOR {$t = 1,2, \cdots ,T'$}
\STATE Initialize ${\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}_t}$ as ${\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}_{t - 1}}$
\STATE Learn ${\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}_t}$ to minimize ${L_t'}\big( {{{\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}}_t}} \big)$
\ENDFOR
\ENSURE ${\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}} = {\left\{ {\mathbf{B}'},{\bm{\theta }'} \right\}_{T'}}.$
\end{algorithmic}
\end{algorithm}
\subsubsection{Feedback Compression Network at Users}
To reduce the channel feedback overhead at the users, we compress the dimensionality of the received pilot signals by exploiting the channels' delay-domain sparsity.
Specifically, we transform the frequency-spatial domain channel matrix ${ {\mathbf{H}}_{{\text{DL}}}^{{\text{fs}}}}$ into the delay-spatial domain channel matrix ${\mathbf{H}}_{{\text{DL}}}^{{\text{ds}}}$ via a DFT matrix ${\mathbf{U}} \in {\mathbb{C}^{K \times K}}$, and (\ref{FB_received_signal_Q}) can be expressed as
\begin{align}\label{FB_delay_received_signal}
{\mathbf{Y}}_{{\text{DL}}}^{\text{T}} = {\mathbf{U}}{\mathbf{H}}_{{\text{DL}}}^{{\text{ds}}}{\mathbf{F}}_{\text{DL}} + {\mathbf{N}}_{{\text{DL}}}^{\text{T}}
= {\mathbf{U}}{ {\mathbf{H}}_{{\text{DL}}}^{{\text{delay}}}} + {\mathbf{N}}_{{\text{DL}}}^{\text{T}},
\end{align}
where ${ {\mathbf{H}}_{{\text{DL}}}^{{\text{delay}}}} = {\mathbf{H}}_{{\text{DL}}}^{{\text{ds}}}{\mathbf{F}}_{\text{DL}} \in {\mathbb{C}^{K \times Q}}$ is a delay-domain channel matrix after spatial-domain compression.
The compressed pilot signals fed back to the BS can be expressed as
\begin{equation}\label{FB_signal}
{\widetilde {\mathbf{Y}}_{{\text{DL}}}} = {\left. {{\mathbf{Y}}_{{\text{DL}}}^{\text{T}}} \right|_{\mathbf{\Omega }}} = {\mathbf{\widetilde U}}{ {\mathbf{H}}_{{\text{DL}}}^{{\text{delay}}}} + {\widetilde {\mathbf{N}}_{{\text{DL}}}},
\end{equation}
where
${\mathbf{\widetilde U}} = {\left. {\mathbf{U}} \right|_{\mathbf{\Omega }}} \in {\mathbb{C}^{{K_c} \times K}}$ is a partial DFT matrix, ${\widetilde {\mathbf{N}}_{{\text{DL}}}} = {\left. {{\mathbf{N}}_{{\text{DL}}}^{\text{T}}} \right|_{\mathbf{\Omega }}}$,
and the ${K_c}$ elements of the subcarrier index set ${\mathbf{\Omega }}$ are randomly selected without repetition \cite{GZ_EL}.
\subsubsection{Feedback Based Channel Reconstruction Network at BS}
The FCRN consists of an FRSN and a CRN.
Different from the $T$-layer MMV-LAMP network used in the CRN,
we only need to train an FRSN based on an MMV-LAMP network with $T'$ layers to obtain the channel matrix $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{freq}}}$,
which is illustrated in Fig. \ref{FB_LAMP} and summarized in Algorithm \ref{FB_LAMP_Algorithm}.
Specifically, the input is the feedback pilot signals ${\widetilde {\mathbf{Y}}_{{\text{DL}}}}$,
and the initial values $\widehat { {\mathbf{H}}}_{{\text{DL,}}0}^{{\text{delay}}}$ and ${{\mathbf{V}}_0}$ of Algorithm \ref{FB_LAMP_Algorithm} are denoted as $\widehat { {\mathbf{H}}}_{{\text{DL,}}0}^{{\text{delay}}} = {\mathbf{0}}$ and ${{\mathbf{V}}_0} = {\widetilde {\mathbf{Y}}_{{\text{DL}}}}$, respectively.
In FRSN, the measurement matrix is the partial DFT matrix ${\mathbf{\widetilde U}}$
and the trainable parameters are $\left\{ {{\mathbf{B}'},{\bm{\theta }'}} \right\}$.
After the BS receives the compressed feedback signals ${\widetilde {\mathbf{Y}}_{{\text{DL}}}}$ in (\ref{FB_signal}),
we first exploit the proposed FRSN to reconstruct the channel matrix $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{freq}}} = {\mathbf{U}}\widehat { {\mathbf{H}}}_{{\text{DL}},T'}^{{\text{delay}}}$,
which is then passed to the CRN to reconstruct $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{sf}}}$ based on MMV-LAMP network.
Moreover, the training strategy of the trainable parameters $\left\{ {{\mathbf{B}'},{\bm{\theta }'}} \right\}$ of the FRSN is summarized in Algorithm \ref{FB learning parameters}, where the corresponding loss function of the $t$-th layer ($1 \le t \le T'$) is given by
\begin{align}\label{FB_loss_function}
{ L_t'}\Big( {\big\{ {{\mathbf{B}}',{\bm{\theta}}'} \big\}_t} \Big) &= \sum\limits_{n = 1}^{N'} {\frac{{\left\| {\widehat { {\mathbf{H}}}_{{\text{DL}},t}^{{\text{freq}},n} - {\mathbf{H}}_{{\text{DL}}}^{{\text{freq}},n}} \right\|_F^2}}{{\left\| { {\mathbf{H}}_{{\text{DL}}}^{{\text{freq}},n}} \right\|_F^2}}} \nonumber \\
&= \sum\limits_{n = 1}^{ N'} {\frac{{\left\| {{\mathbf{U}}}{{{ f}_t'}( { {\widetilde {\mathbf{Y}}_{{\text{DL}}}^n},{\big\{ {{\mathbf{B}}',{\bm{\theta}}'} \big\}_t}} ) - {\mathbf{H}}_{{\text{DL}}}^{{\text{freq}},n}} \right\|_F^2}}{{\left\| { {\mathbf{H}}_{{\text{DL}}}^{{\text{freq}},n}} \right\|_F^2}}},
\end{align}
where $N'$ is the number of data samples in each batch of the training set, and ${f_t'}( { \cdot , \cdot } )$ denotes the MMV-LAMP network based FRSN.
Specifically, we first generate the training data set $\big\{ {{\mathbf{H}}_{{\text{DL}}}^{{\text{freq,}}n}} \big\}_{n = 1}^{N'}$ according to (\ref{FB_received_signal_Q}).
Then, for the $t$-th layer, we aim to optimize the trainable parameters $\left\{ {{\mathbf{B}'},{\bm{\theta }'}} \right\}_{t}$ by minimizing the loss function of the $t$-th layer ${L_t'}( {\big\{ {{\mathbf{B'}},{\bm{\theta '}}} \big\}_t} )$ for $1 \le t \le T'$.
Finally, after the trainable parameters $\left\{ {{\mathbf{B}'},{\bm{\theta }'}} \right\}_{T'}$ of the $T'$-th layer are optimized,
we can obtain $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{delay}}}$ and $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{freq}}} = {\mathbf{U}}\widehat { {\mathbf{H}}}_{{\text{DL}}}^{{\text{delay}}}$.
Also, we can find that ${ {\mathbf{H}}}_{{\text{DL}}}^{{\text{freq}}}$ in (\ref{FB_received_signal_Q}) can also be expressed as
\begin{align}\label{FB_2CE}
{( { {\mathbf{H}}_{{\text{DL}}}^{{\text{freq}}}} )^{\text{T}}} = {{\mathbf{F}}_{\text{DL}}^{\text{T}}}{{\mathbf{H}}_{\text{DL}}^{\text{sf}}} = {{\mathbf{F}}_{\text{DL}}^{\text{T}}}{{\mathbf{D}}^{\text{H}}}{{\mathbf{H}}_{\text{DL}}^{\text{af}}}
= {\mathbf{A}}_{\text{DL}}{{\mathbf{H}}_{\text{DL}}^{{\text{af}}}},
\end{align}
which indicates that we can exploit the following CRN to reconstruct the final estimation $\widehat { {\mathbf{H}}}_{{\text{DL}}}^{\text{sf}}$.
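The per-layer loss in (\ref{FB_loss_function}) is simply a batch sum of squared Frobenius errors, each normalized by the true channel's energy. A minimal sketch, with the network output replaced by a precomputed batch of estimates:

```python
import numpy as np

def layer_loss(H_hat_batch, H_batch):
    """Per-layer NMSE-style training loss: sum over the batch of squared
    Frobenius errors, each normalized by the true channel's energy."""
    return float(sum(
        np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2
        for H_hat, H in zip(H_hat_batch, H_batch)
    ))
```

A perfect reconstruction gives zero loss, and an all-zero estimate gives a loss equal to the batch size, since each normalized term then equals one.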
\section{Simulation Results}
In this section, we provide numerical results to verify the effectiveness of the proposed MDDL-based channel estimation and feedback scheme.
First, we elaborate on the implementation details and parameters adopted in our simulation settings.
Then, since the TDD uplink channel estimation and FDD downlink channel estimation share the same processing mechanism, we take the downlink channel estimation and feedback for FDD systems as examples to evaluate the performance.
Finally, we investigate the performance of the proposed MDDL-based scheme in scenarios with fixed scattering environments.
\subsection{Simulation Setup}
In our simulations, we consider that the BS is equipped with a ULA with ${N_{{\text{BS}}}} = 256$ antennas and ${N_{{\text{RF}}}} = 4$ RF chains.
The number of OFDM subcarriers in the channel estimation phase is set to $K = 64$. A redundant dictionary with an oversampling ratio of $G/{N_{{\text{BS}}}} = 4$ is considered, i.e., the number of quantized angle grids is set to $G = 1024$.
In addition, the experiments are performed in PyCharm Community Edition (Python 3.6 environment and Tensorflow 1.13.1) on a computer with dual Intel Xeon 8280 CPU (2.6GHz) and dual Nvidia GeForce GTX 2080Ti GPUs.
The proposed MMV-LAMP network based CRN is composed of $T = 5$ layers, where each layer has the same network structure with the trainable parameters $\left\{ {{\bm{\Xi }}_{\text{DL}},{\mathbf{B}}_{\text{DL}},{\bm{\theta }}_{\text{DL}}} \right\}$.
The proposed MMV-LAMP based FRSN is composed of $T' = 2$ layers, whose trainable parameters are $\left\{ {{\mathbf{B}'},{\bm{\theta }'}} \right\}$.
For the training of MMV-LAMP network, we generate a training set including ${S_{{\text{tr}}}} = 5000$ spatial-frequency domain channel samples according to the channel model in (\ref{frequency channel}), so that the dimension of the input channel samples is $( {{S_{{\text{tr}}}},{N_{{\text{BS}}}},K} )$.
Similarly, the parameters of the validation set and the test set are ${S_{{\text{va}}}} = 2000$ and ${S_{{\text{te}}}} = 1000$, respectively.
We choose the NMSE as the metric for performance evaluation, which is defined as
\begin{equation}\label{NMSE}
{\text{NMSE}}( {{\mathbf{H}},\widehat {\mathbf{H}}} ) = 10{\text{log}}_{10}( {{\mathbb{E}}\left[ {\frac{{\left\| {{\mathbf{H}} - {\mathbf{\widehat H}}} \right\|_F^2}}{{\big\| {\mathbf{H}} \big\|_F^2}}} \right]} ).
\end{equation}
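The NMSE metric (\ref{NMSE}) can be sketched as follows, with the expectation replaced by a sample average over channel realizations stacked along the first axis:

```python
import numpy as np

def nmse_db(H, H_hat):
    """NMSE in dB: per-realization normalized squared error, averaged over
    the realizations stacked along axis 0, then converted to dB."""
    err = np.sum(np.abs(H - H_hat) ** 2, axis=(1, 2))
    pwr = np.sum(np.abs(H) ** 2, axis=(1, 2))
    return 10.0 * np.log10(np.mean(err / pwr))
```

As a sanity check, an all-zero estimate yields exactly 0 dB, and halving every entry of the channel yields $10\log_{10}(1/4) \approx -6$ dB.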
\subsection{MDDL-Based FDD Downlink Channel Estimation}
As shown in Fig. \ref{pilot} $\footnote{The simulated results for $\{M = 40, Q = 10, N_{\text{RF}} = 4\}$ are equivalent to the uplink channel estimation results for $\{M = 40, Q = 40, N_{\text{RF}} = 1\}$, $\{M = 40, Q = 20, N_{\text{RF}} = 2\}$, and $\{M = 40, Q = 5, N_{\text{RF}} = 8\}$, as long as $M = QN_{\text{RF}}$.}$, we plot the NMSE performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}} )$ of the different schemes as a function of signal-to-noise ratio (SNR), including the proposed MDDL-based channel estimation scheme using the MMV-LAMP network, the data-driven deep learning based channel estimation scheme \cite{MXS_TVT}, the LAMP-based channel estimation scheme \cite{LAMP_TSP},
and two model-based channel estimation schemes using the MMV-AMP algorithm \cite{KML_TSP} and the SOMP algorithm \cite{GZ_WC2}.
The number of propagation paths is $L = 8$.
Note that the AMP-based channel estimation scheme requires the measurement matrix's elements to be independent and identically distributed, so we do not consider the redundant dictionary matrix (i.e., $G = {N_{{\text{BS}}}} = 256$).
\begin{figure}[t]
\centering
\includegraphics[width=244pt]{Fig/Fig_8.eps}
\caption{NMSE performance comparison of different channel estimation schemes versus SNRs.}
\label{pilot}
\end{figure}
We can observe that the proposed channel estimation scheme outperforms the other channel estimation schemes even with a smaller pilot overhead.
This observation indicates that the proposed channel estimation scheme can achieve better NMSE performance while keeping the pilot overhead to a low level.
This is because the network architecture (i.e., the cascaded CCN and MMV-LAMP-based CRN) of the MDDL-based approach is constructed based on known physical mechanisms and some {\it{a priori}} model knowledge, which can reduce the number of trainable parameters to be learned and can fully take advantage of both model-based algorithms and deep learning methods.
We also observe that the proposed channel estimation scheme can significantly improve the NMSE performance in the low SNR regime compared with other channel estimation schemes.
Therefore, the proposed scheme can reliably reconstruct the high-dimensional channel with a much reduced pilot overhead.
To quantify the pilot overhead reduction relative to state-of-the-art algorithms, we compare the proposed scheme with four such algorithms at SNR = 0 dB and SNR = 5 dB, as shown in Table \ref{pilotoverhead}.
We can observe that the proposed scheme with $Q=40$ outperforms the MMV-AMP algorithm, the LAMP network, and the data-driven deep learning method with $Q=80$ over the whole SNR range.
Therefore, we conclude that the proposed scheme can reduce the pilot overhead by at least $50\%$ while achieving the same or even better channel estimation NMSE performance.
\begin{figure}[t]
\centering
\includegraphics[width=244pt]{Fig/Fig_9.eps}
\caption{NMSE performance comparison of the proposed scheme versus the number of multipath $L$.}
\label{path}
\end{figure}
\renewcommand\thetable{\Roman{table}}
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.5}
\caption{NMSE in {\upshape dB}}\label{pilotoverhead}
\begin{tabular}{c|c|c|c|c}
\Xhline{1.2pt}
\multirow{2}*{Channel Estimation Schemes} & \multicolumn{2}{c}{SNR=0dB} & \multicolumn{2}{|c}{SNR=5dB} \\
\cline{2-5}
& {{\it Q}=40} & {{\it Q}=80} & {{\it Q}=40} & {{\it Q}=80}\\
\Xhline{1.2pt}
MMV-AMP & 1.15 & -0.31 & -0.37 & -7.62 \\
LAMP & -1.37 & -3.13 & -2.73 & -5.61 \\
Data Driven Deep Learning & -2.02 & -4.49 & -3.72 & -8.16 \\
SOMP & -3.14 & -4.39 & -6.82 & -8.48 \\
Proposed & {\textbf{-6.06}} & & {\textbf{-9.21}} & \\
\Xhline{1.2pt}
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=244pt]{Fig/Fig_10.eps}
\caption{NMSE performance comparison of the proposed scheme trained at the different SNRs.}
\label{snr}
\end{figure}
We further investigate the robustness of the proposed channel estimation scheme as a function of the number of multipath $L$ in Fig. \ref{path}.
Note that the proposed MMV-LAMP based CRN is trained during the offline training stage, which is based on the channel samples with $L = 8$ multipath components.
However, at the online estimation stage, we observe that the proposed scheme can robustly estimate multipath channels with $L \ne 8$, without having to retrain the entire network architecture.
In Fig. \ref{snr}, we show the NMSE performance of the proposed scheme when it is trained at different SNR values. For example, the setup denoted by ``Proposed, SNR=-10dB, $G=1024$, $Q=40$" means that the proposed scheme was trained at SNR = $-10$ dB but tested over the whole SNR range, and the setup denoted by ``Proposed, $G=1024$, $Q=40$" means that the proposed scheme was trained and tested at the same SNR values.
We can observe that the proposed scheme trained at SNR=-5dB is robust in the low SNR regime, and its NMSE performance deteriorates only at SNR=10dB.
Therefore, the proposed MMV-LAMP based channel estimator enjoys a better robustness and generalization capability to different channel conditions.
\begin{figure}[t]
\centering
\includegraphics[width=244pt]{Fig/Fig_11.eps}
\caption{NMSE performance comparison of different channel estimation schemes versus SNRs.}
\label{128antennas}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=244pt]{Fig/Fig_12.eps}
\caption{NMSE performance comparison of the proposed scheme versus the number of subcarrier $K$.}
\label{carriers_number}
\end{figure}
As for the number of antennas $N_{\text{BS}}$, we further investigate the channel estimation NMSE performance with $N_{\text{BS}} = 128$ in Fig. \ref{128antennas}.
Specifically, since the dimension of the beamforming/combining matrix is related to the number of antennas, changing the number of antennas requires retraining the CCN and CRN.
In order to meet the same compression ratio, i.e., $Q/{N_{\text{BS}}}=40/256$, the adopted pilot overhead is $Q = 20$.
We can observe that the proposed channel estimation scheme outperforms the other channel estimation schemes in this case.
As for different numbers of subcarriers $K$, as shown in Fig. \ref{carriers_number}, we investigate the channel estimation NMSE performance with $K = 64, 128, 256, 512$.
Specifically, since the proposed channel estimation scheme was trained with $K=64$ subcarriers, we divide the subcarriers evenly into multiple sub-groups (each with 64 subcarriers) to directly apply the trained channel estimation network.
For instance, for the number of subcarriers $K=128$, we divide the subcarriers into $128/64 = 2$ sub-groups, i.e., the subcarrier indices of each sub-group are respectively $\left\{ {1,3,5, \cdots ,127} \right\}$ and $\left\{ {2,4,6, \cdots ,128} \right\}$.
We can observe that the proposed scheme trained with 64 subcarriers can be effectively applied to the case $K=$ 128, 256, and 512, which further proves the robustness of the proposed scheme.
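The interleaved sub-grouping above can be sketched as follows; `subcarrier_subgroups` is a hypothetical helper name, and the 1-based indices match the $K=128$ example of sub-groups $\{1,3,\ldots,127\}$ and $\{2,4,\ldots,128\}$:

```python
K_TRAINED = 64  # number of subcarriers the network was trained with

def subcarrier_subgroups(K):
    """Split K subcarriers into K/64 interleaved sub-groups of 64 subcarriers
    each, returned as 1-based index lists (assumes K is a multiple of 64)."""
    n = K // K_TRAINED
    return [list(range(g + 1, K + 1, n)) for g in range(n)]

groups = subcarrier_subgroups(128)  # two interleaved groups of 64 subcarriers
```

Each sub-group is then fed independently through the network trained for 64 subcarriers.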
As mentioned above, we design a redundant dictionary matrix to combat the power leakage problem by quantizing the angles with a finer resolution.
Therefore, in Fig. \ref{dictionary},
we also compare the performance of the proposed MMV-LAMP based channel estimator without using redundant dictionary matrix, i.e., $G = {N_{{\text{BS}}}} = 256$, to demonstrate the effectiveness of the redundant dictionary matrix.
We observe that both the proposed MMV-LAMP based and the SOMP-based channel estimators improve the sparse channel estimation performance by utilizing the redundant dictionary matrix to cope with the power leakage problem.
\begin{figure}[t]
\centering
{\subfigure[]{\label{dictionary}
\centering
\includegraphics[width=244pt]{Fig/Fig_13_a.eps}
}}
{\subfigure[]{\label{carrier}
\centering
\includegraphics[width=244pt]{Fig/Fig_13_b.eps}
}}
\caption{NMSE performance comparison of the proposed MDDL-based channel estimation scheme versus SNRs, where (a) the effectiveness of the devised redundant dictionary matrix, (b) the effectiveness of the proposed scheme using multi-carrier channel samples to train.}
\end{figure}
Note that the effectiveness of the proposed MMV-LAMP based channel estimator for OFDM systems relies on a training dataset of multi-carrier channel samples, which consist of the channels of all subcarriers from different channel realizations.
To verify the training effectiveness of multi-carrier channel samples, we further compare the performance of the proposed scheme based on single carrier channel samples, as shown in Fig. \ref{carrier}.
We can observe that the proposed MMV-LAMP based channel estimator trained with multi-carrier channel samples achieves better performance.
We further investigate the computational complexity. For offline training, the computational complexity is not a major concern, because the required training time is usually not strictly limited.
We therefore discuss the computational complexity of the different channel estimation schemes in the testing stage as follows.
\begin{itemize}
\item As for the data driven deep learning based channel estimation scheme, its main steps in testing stage include: i) two fully-connected operations with computational complexity $\mathcal{O}( 2MG )$; ii) $N_{\text{re}}$ convolutional operations with computational complexity $\mathcal{O}( {G{\beta ^2}\sum\limits_{i = 1}^{{N_{{\text{re}}}}} {{n_{i - 1}}{n_i}} } )$, where $\beta$ is the side length of the convolutional filters, $n_{i-1}$ and $n_i$ denote the numbers of input and output feature maps of the $i$-th convolutional layer $(1 \le i \le {N_{{\text{re}}}})$, respectively.
Therefore, the computational complexity of the data driven deep learning based channel estimation is $\mathcal{O}( {2MG + G{\beta ^2}\sum\limits_{i = 1}^{{N_{{\text{re}}}}} {{n_{i - 1}}{n_i}} } )$.
\item The proposed channel estimation scheme is developed from the MMV-AMP algorithm, which mainly requires matrix multiplication operations. Therefore, the MMV-AMP algorithm, the LAMP network, and the proposed MMV-LAMP network share similar computational complexities, i.e., $\mathcal{O}( {M{N_{{\text{BS}}}}K + TMGK} )$ in the case of $K$ subcarriers.
\item As for the SOMP algorithm, we denote the number of iterations as $I$ and its main steps include: i) correlation operation with computational complexity ${\mathcal{O}}(M{G^2}KI)$; ii) project subspace operation with computational complexity ${\mathcal{O}}(\frac{1}{4}{I^2}{(I + 1)^2} + \frac{1}{3}MI(I + 1)(2I + 1) + \frac{1}{2}MKI(I + 1))$; iii) update residual operation with computational complexity ${\mathcal{O}} ( {\frac{1}{2}\rho {N_{{\rm{BS}}}}KI(I + 1)} )$.
Therefore, the computational complexity of the SOMP algorithm is ${\mathcal{O}}(M{G^2}KI$ $ + \frac{1}{4}{I^2}{(I + 1)^2}$ $+ \frac{1}{3}MI(I + 1)(2I + 1) + \frac{1}{2}MKI(I + 1) + {\frac{1}{2}\rho {N_{{\rm{BS}}}}KI(I + 1)})$.
\end{itemize}
Finally, we provide the computational complexity analysis of the different channel estimation schemes as shown in Table \ref{CC}.
\begin{table}[t]
\centering
\renewcommand\arraystretch{2.5}
\caption{Computational Complexity of Different Channel Estimation Schemes}\label{CC}
\begin{tabular}{c|c}
\Xhline{1.2pt}
{\bf{Schemes}} & {\bf{Complexity}} \\
\Xhline{1.2pt}
Data driven deep learning & $\mathcal{O}( {2MG + G{\beta ^2}\sum\limits_{i = 1}^{{N_{{\text{re}}}}} {{n_{i - 1}}{n_i}} } )$ \\
\hline
SOMP & \makecell[c]{${\mathcal{O}}(M{G^2}KI$ $ + \frac{1}{4}{I^2}{(I + 1)^2}$ \\ $+ \frac{1}{3}MI(I + 1)(2I + 1)$ \\ $+ \frac{1}{2}MKI(I + 1) + {\frac{1}{2}\rho {N_{{\rm{BS}}}}KI(I + 1)})$} \\
\hline
LAMP & $\mathcal{O}( {M{N_{{\text{BS}}}}K + TMGK} )$\\
\hline
MMV-AMP & $\mathcal{O}( {M{N_{{\text{BS}}}}K + TMGK} )$ \\
\hline
Proposed & $\mathcal{O}( {M{N_{{\text{BS}}}}K + TMGK} )$ \\
\Xhline{1.2pt}
\end{tabular}
\end{table}
\subsection{Channel Estimation Under Non-Ideal Hardware Constraints}
In practical systems, the phase of each combining and beamforming matrix coefficient cannot take arbitrary continuous values.
Therefore, we further investigate the performance of the proposed MDDL-based channel estimation scheme under the constraint of PSN with a finite phase resolution.
Specifically, after the offline training, the continuous complex-valued combining matrix ${\mathbf{F}}_{\text{UL}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}\exp \left[ {{\text{j}}( {\bm{\Xi }}_{\text{UL}} )} \right]$ in (\ref{W}) and beamforming matrix ${\mathbf{F}}_{\text{DL}} = \frac{1}{{\sqrt {{N_{{\text{BS}}}}} }}\exp \left[ {{\text{j}}( {\bm{\Xi }}_{\text{DL}} )} \right]$ in (\ref{F}) of the CCN are quantized to the elements in the set ${\mathbf{\Delta }}$ according to the minimum Euclidean distance criterion. That is to say, after quantization, the entries of ${{\mathbf{\Xi }}_{\text{UL}}}$ and ${{\mathbf{\Xi}}_{\text{DL}}}$ take values in ${\mathbf{\Delta }}$,
and ${\mathbf{\Delta }}$ is the quantized phase set of the PSN whose resolution is ${B^{{\text{ps}}}}$, given by
\begin{equation}\label{phase quan eq}
{\mathbf{\Delta }} = \left\{ {0,\frac{{2\pi }}{{{2^{{B^{{\text{ps}}}}}}}},2 \cdot \frac{{2\pi }}{{{2^{{B^{{\text{ps}}}}}}}}, \cdots ,2\pi - \frac{{2\pi }}{{{2^{{B^{{\text{ps}}}}}}}}} \right\}.
\end{equation}
The network architecture and training strategy of the CRN remain unchanged$\footnote{Since the quantized phase shifters result in non-differentiable operations, it is not feasible to directly use the Adam algorithm.
To avoid this challenge, we consider a simple method in which the offline training is performed assuming infinite-precision phase resolution. At the online test stage, the combining/beamforming matrices are quantized according to the minimum Euclidean distance criterion. Note that the authors of \cite{beamforming_ICC} proposed a sub-optimum quantization method to solve this issue, which may be integrated into the proposed scheme for better performance in our future work.}$.
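The minimum-distance mapping of PSN phases onto the $2^{B^{\text{ps}}}$-level set ${\mathbf{\Delta }}$ in (\ref{phase quan eq}) can be sketched as follows; rounding with wrap-around at $2\pi$ realizes the minimum circular distance criterion:

```python
import numpy as np

def quantize_phases(Xi, B_ps):
    """Quantize phases to the uniform 2^B_ps-level set
    {0, 2*pi/2^B_ps, ..., 2*pi - 2*pi/2^B_ps} by nearest-level rounding,
    with wrap-around so that values near 2*pi map back to 0."""
    step = 2.0 * np.pi / (2 ** B_ps)
    return np.mod(np.round(np.asarray(Xi) / step), 2 ** B_ps) * step
```

For example, with $B^{\text{ps}}=2$ the levels are $\{0, \pi/2, \pi, 3\pi/2\}$, and a phase of $6.2$ rad is quantized to $0$ since it is circularly closest to $2\pi$.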
As shown in Fig. \ref{phase quan}, we plot the NMSE performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}} )$ of the proposed channel estimation scheme as a function of the SNR, where the resolution of the PSN is set to ${B^{{\text{ps}}}} = 2, 3$ bits.
We observe that the proposed MMV-LAMP based channel estimator works well in the case of 3-bit quantization and suffers only a slight performance loss in the case of 2-bit quantization.
This observation further demonstrates the robustness of the proposed MMV-LAMP based channel estimator under non-ideal PSN with limited resolution.
\begin{figure}[t]
\centering
{\subfigure[]{\label{phase quan}
\centering
\includegraphics[width=244pt]{Fig/Fig_14_a.eps}
}}
{\subfigure[]{\label{ADC quan}
\centering
\includegraphics[width=244pt]{Fig/Fig_14_b.eps}
}}
\caption{NMSE performance comparison of the proposed MDDL-based channel estimation scheme versus SNRs, where (a) phase shift quantization, (b) analog-to-digital converter (ADC) quantization.}
\end{figure}
Moreover, considering the limited resolution of the analog-to-digital converter (ADC) at the BS, the downlink received signals are first quantized by the ADC in the time domain, so the received frequency-domain pilot signals after time-domain quantization can be expressed as
\begin{equation}\label{ADC quan eq}
{ {{\mathbf{Y}}_{{\text{DL}}}^{{\text{quan}}}} } = {{\lambda ^{{\text{quan}}}}{( {\mathbf{Y}}_{{\text{DL}}}{\mathbf{U}} )}}{{{\mathbf{U}}^{\text{H}}}},
\end{equation}
where ${{\mathbf{Y}}_{{\text{DL}}}{\mathbf{U}}}$ and ${{\lambda ^{{\text{quan}}}}{( {\mathbf{Y}}_{{\text{DL}}}{\mathbf{U}} )}}$ are the received time-domain signals before the ADC and after the ADC, respectively,
and $\lambda ^{{\text{quan}}}( \cdot )$ is the complex-valued quantization function. This quantization function is applied to the received signals element-wise, and the real and imaginary parts are quantized separately. Here, we consider a uniform codebook for quantization as
\begin{equation}\label{quan function}
C = \left\{ { - \frac{{{2^{{B^{{\text{adc}}}}}} - 1}}{2}\varepsilon , \cdots ,\frac{{{2^{{B^{{\text{adc}}}}}} - 1}}{2}\varepsilon } \right\},
\end{equation}
where ${B^{{\text{adc}}}}$ is the number of quantization bits, $\varepsilon = \left( {y_{\max } - y_{\min }} \right)\big/{2^{{B^{{\text{adc}}}}}}$ is the quantization step size, and $y_{\max }$ and $y_{\min }$ are the maximum and minimum values over both the real and imaginary parts of ${\mathbf{Y}}_{{\text{DL}}}{\mathbf{U}}$, respectively.
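The element-wise quantizer $\lambda^{\text{quan}}(\cdot)$ with the uniform codebook (\ref{quan function}) can be sketched as follows, applied here to a generic complex matrix with the real and imaginary parts quantized separately:

```python
import numpy as np

def adc_quantize(Y, B_adc):
    """Element-wise uniform ADC quantization: real and imaginary parts are
    mapped separately to the nearest level of the 2^B_adc-level codebook
    C = {-(2^B-1)/2 * eps, ..., +(2^B-1)/2 * eps} with spacing eps."""
    parts = np.concatenate([Y.real.ravel(), Y.imag.ravel()])
    eps = (parts.max() - parts.min()) / (2 ** B_adc)
    levels = (np.arange(2 ** B_adc) - (2 ** B_adc - 1) / 2.0) * eps

    def nearest(x):  # map each sample to its closest codebook level
        return levels[np.argmin(np.abs(x[..., None] - levels), axis=-1)]

    return nearest(Y.real) + 1j * nearest(Y.imag)
```

In the paper's setup this quantizer acts on the time-domain signal ${\mathbf{Y}}_{\text{DL}}{\mathbf{U}}$ before the inverse transform in (\ref{ADC quan eq}).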
Specifically, we first train the CCN's parameters ${\left\{ {{{\bm{\Xi }}_{{\text{DL}}}}} \right\}}$ and CRN's parameters ${\left\{ {{{\mathbf{B}}_{{\text{DL}}}},{{\bm{\theta }}_{{\text{DL}}}}} \right\}}$ based on the ADC with infinite resolution,
then we adopt the above quantization method to obtain the quantized frequency-domain signal ${ {{\mathbf{Y}}_{{\text{DL}}}^{{\text{quan}}}} }$.
Finally, we input the quantized ${ {{\mathbf{Y}}_{{\text{DL}}}^{{\text{quan}}}} }$ directly into the previously trained CRN to reconstruct the channel matrix ${{\widehat{{\mathbf{H}}}}_{\text{DL}}^{{\text{sf}}}}$.
As shown in Fig. \ref{ADC quan}, we plot the NMSE performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}} )$ of the proposed channel estimation scheme as a function of the SNR, where the number of quantization bits is set to ${B^{{\text{adc}}}} = 2, 3$.
We observe that the performance of the MMV-AMP-based and the SOMP-based channel estimation schemes degrades under the low-resolution ADC quantization.
However, the proposed channel estimation scheme can still work even in the low SNR regime. Considering that the SNR is usually low in most mmWave systems at the channel estimation stage, the proposed scheme is effective in estimating the channels with non-ideal ADC at the receiver.
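For completeness, the NMSE metric reported in these comparisons can be sketched as below. The Frobenius-norm ratio in dB is a common convention; the paper's exact averaging over channel samples is an assumption.

```python
import numpy as np

def nmse_db(h_true, h_est):
    """NMSE(H, H_hat) = ||H - H_hat||_F^2 / ||H||_F^2, reported in dB."""
    err = np.linalg.norm(h_true - h_est) ** 2
    ref = np.linalg.norm(h_true) ** 2
    return 10.0 * np.log10(err / ref)
```

A zero estimate gives $0$ dB, and scaling the true channel by $0.9$ gives $-20$ dB, so lower (more negative) values mean better reconstruction.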
\begin{figure}[t]
\centering
\includegraphics[width=252pt]{Fig/Fig_15.eps}
\caption{Channel reconstruction NMSE performance comparison of the proposed MDDL-based channel feedback scheme versus SNRs.}
\label{CS_radio}
\end{figure}
\subsection{MDDL-Based FDD Downlink Channel Feedback}
In this section, we investigate the channel reconstruction performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}} )$ of the proposed channel feedback scheme.
Specifically, as shown in Fig. \ref{dimension reduction network}, the CCN at the BS is the same as that used at the downlink channel estimation stage,
then the noisy frequency-domain received signal ${\mathbf{Y}}_{{\text{DL}}}^{\text{T}}$ is obtained at the user.
Moreover, in order to compress the feedback overhead, the user only feeds ${\mathbf{Y}}_{{\text{DL}}}^{\text{T}}$ on $K_c$ of $K$ subcarriers back to the BS.
Finally, the BS exploits the proposed MMV-LAMP based FRSN and the CRN to recover the spatial-frequency domain channel matrix $\widehat {\mathbf{H}}_{{\text{DL}}}^{{\text{sf}}}$.
In addition, we define the feedback compression ratio as $\rho = {K_c}/K$.
As shown in Fig. \ref{CS_radio}, we can observe that the proposed MMV-LAMP based FCRN (including the FRSN and the following CRN) with $\rho = 0.25$, i.e., $K_c = 16$, even outperforms the SOMP-based channel feedback scheme with $\rho = 0.5$, i.e., $K_c = 32$.
Therefore, the effectiveness of the proposed MDDL-based channel feedback scheme is verified.
\subsection{Channel Estimation Based on Fixed Scattering Environments}
In this section, the TDD uplink channel estimation NMSE performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}} )$ is investigated.
To evaluate the superiority of the proposed learning strategy that jointly trains the pilots and channel estimator, we consider the scenario with fixed scattering environments.
The fixed scattering environment is shown in Fig. \ref{envir_channel}, in which the positions of the BS, the user, and the scatterers are marked by blue, red, and green, respectively.
The red solid line represents the line-of-sight (LoS) link between the BS and the user, and the black dotted line indicates the non-line-of-sight (NLoS) link via the scatterer.
Note that for the sake of simplifying the channel generation, we only consider a single-bounce NLoS channel model.
As for the array setting, we take the BS side as an example, i.e., the bold blue solid line represents the antenna array of the BS, whose normal direction is marked as the black arrow.
The antenna array setting at the users is the same as that at the BS.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.4]{Fig/Fig_16.eps}
\caption{The schematic diagram of the fixed scattering environment.}
\label{envir_channel}
\end{figure*}
Next, we describe how to generate the channel samples based on the fixed scattering environment.
Specifically, as shown in Fig. \ref{envir_channel}, the scenario consists of a BS and a number of users geographically distributed in a certain outdoor environment, in which the scatterers are randomly distributed.
For the given fixed scattering environment, we can generate the channel samples based on the channel parameters (including the AoAs/AoDs, path loss), which can be calculated based on the geometric characteristics between the BS and the user.
More specifically,
\begin{itemize}
\item \textbf{AoAs/AoDs:}
As for the uplink channel estimation, the AoA ${\phi _{{\text{BS}}}}$ at the BS is the angle relative to the horizontal axis.
For the user, the AoD ${\phi _{{\text{UE}}}}$, defined as the angle between the normal direction of the user array and the horizontal axis, follows the uniform distribution on $\left[ {0,2\pi } \right)$.
\item \textbf{Path loss:}
Taking the NLoS link as an example, the large-scale fading gain $G_l$ can be modeled based on the free-space path loss of Friis' formula as
\begin{equation}
{G_l} = 20{\log _{10}}( {\frac{{4\pi d_{l,1}}}{{{\lambda _c}}}} ) + 20{\log _{10}}( {\frac{{4\pi d_{l,2}}}{{{\lambda _c}}}} ) + {G_s},
\end{equation}
where $d_{l,1}$ ($d_{l,2}$) denotes the communication distance between the user and the $l$-th scatterer (the $l$-th scatterer and the BS), ${\lambda _c}$ is the carrier wavelength, and ${G_s}$ denotes the path loss via the $l$-th scatterer.
\end{itemize}
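The two-hop path-loss model above can be evaluated directly; `nlos_gain_db` is an illustrative helper name, not code from the paper:

```python
import numpy as np

def nlos_gain_db(d1, d2, wavelength, scatter_loss_db):
    """Large-scale NLoS gain G_l: free-space Friis loss over the two hops,
    user -> l-th scatterer (d1) and l-th scatterer -> BS (d2), plus the
    loss G_s incurred at the scatterer, all in dB."""
    fspl = lambda d: 20.0 * np.log10(4.0 * np.pi * d / wavelength)
    return fspl(d1) + fspl(d2) + scatter_loss_db
```

As a sanity check, when each hop distance equals $\lambda_c/(4\pi)$ the free-space terms vanish and only the scatterer loss remains.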
Based on the fixed scattering environment discussed above,
we generate 10000 channel samples, referred to as the fixed scattering environment (FSE) data set, by randomly drawing the user location and the normal direction of the user array.
In simulations, we divide the 10000 channel samples into 8000, 1000, and 1000, corresponding to the training, validation, and test sets, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=252pt]{Fig/Fig_17.eps}
\caption{NMSE performance comparison of different channel estimation schemes versus SNRs.}
\label{Environment}
\end{figure}
As shown in Fig. \ref{Environment}, we plot the uplink channel estimation performance ${\text{NMSE}}( {{\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}},\widehat {\mathbf{H}}_{{\text{UL}}}^{{\text{sf}}}} )$ of the different schemes as a function of the SNR, including the proposed scheme trained by using the FSE data set, the proposed scheme trained in Section V-B (i.e., not trained with the FSE data set), and the SOMP algorithm using random combining matrix, where we consider $G=1024$, $Q=40$.
Note that, at the NMSE performance evaluation phase, the input channel samples for these schemes come from the test set of the FSE data set.
We can observe that the proposed channel estimation scheme (including the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ and the MMV-LAMP network based channel estimator) trained with FSE data set can effectively learn the channel environment characteristics using less pilot overhead.
Moreover, we can utilize the trained combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ in the uplink as the beamforming ${\mathbf{F}}_{{\text{DL}}}^{\text{T}}$ in the downlink channel estimation for improving the receive SNR at the users as verified in Fig. \ref{E1}.
Fig. \ref{E1} depicts the received SNR distributions at the users using two different CCN designs in the downlink channel estimation phase.
Specifically, ``CCN trained by FSE Data Set" indicates that the BS adopts the beamforming matrix ${\mathbf{F}}_{{\text{DL}}}^{\text{T}}$ or CCN being the same as the combining matrix ${\mathbf{F}}_{{\text{UL}}}^{\text{H}}$ trained in Fig. \ref{Environment}, while ``CCN with random phases" indicates that the phases of the beamforming matrix ${\mathbf{F}}_{{\text{DL}}}^{\text{T}}$ adopted by the BS are random.
In Fig. \ref{E1}, under the fixed scattering environment, we can observe that more users can achieve a larger received SNR by exploiting the proposed CCN trained with the FSE data set than the CCN with random phases.
\begin{figure}[t]
\centering
\includegraphics[width=252pt]{Fig/Fig_18.eps}
\caption{The received SNR distributions at the user side using different CCNs.}
\label{E1}
\end{figure}
\section{Conclusions}
In this paper, we have proposed an MDDL-based channel estimation and feedback scheme for wideband mmWave massive hybrid MIMO systems, where the angle-delay domain channels' sparsity is exploited for reduced overhead.
First, we have considered the uplink channel estimation for TDD systems.
To reduce the uplink pilot overhead for estimating the high-dimensional channels from a limited number of RF chains at the BS,
we have proposed to jointly train the PSN and the channel estimator as an auto-encoder.
Specifically, by exploiting channels' structured sparsity from an {\it{a priori}} model and learning the integrated trainable parameters from the data samples, the MMV-LAMP network with the devised redundant dictionary has been proposed to jointly recover multiple subcarriers' channels with significantly enhanced performance.
Moreover, we have considered the downlink channel estimation and feedback for FDD systems.
Similarly, the pilots at the BS and channel estimator at the users can be jointly trained as an encoder and a decoder, respectively.
To further reduce the channel feedback overhead, only the received pilots on part of the subcarriers are fed back to the BS, which can exploit the proposed MMV-LAMP network to reconstruct the spatial-frequency channel matrix.
Further, we have considered generating the data samples from scenarios with fixed scattering environments, and optimizing the combining/beamforming matrix and the MMV-LAMP network by learning and perceiving the characteristics of the channel environments for improved performance.
Simulation results have verified that the proposed MDDL-based solution can achieve a significant improvement in channel estimation and feedback performance over the conventional schemes.
\section{INTRODUCTION} \label{intro}
Nowadays, perception is a crucial task for autonomous vehicles. Research in this field demands accurate sensors and
algorithms to perform safe and precise navigation. LiDAR stands as an ideal candidate to directly describe the scene geometry by a dense point cloud representation.
Despite the recent increment in the amount of labeled data, public datasets may not be sufficient to train models able to grasp a complete understanding of the situations that they may encounter in operation due to the domain shift problem. Variations in sensor positions, device specifications (e.g., number and distribution of planes), or even the geographic region \cite{train_germ_usa} can lead to a significant performance drop in supervised learning approaches. Moreover, the annotation type (point-wise, 3D boxes, etc.) could also differ between source and target samples, and collecting well-annotated data for custom applications is prohibitively expensive.
Hence, synthetic data stands as an enticing option to provide on-demand and accurate data which can be modified and extended almost infinitely. Despite the realism of current simulators' sensor and world models available today, algorithms trained with these data usually fail to generalize in a real environment.
Domain adaptation (DA) techniques have been explored to bridge the aforementioned gaps between domains. Therefore, some approaches have attempted to directly adapt raw LiDAR information to other data distributions \cite{pointgan1}. Nevertheless, due to the sparsity, irregularity, and unstructured distribution of LiDAR data and the high number of points contained in each cloud, on-board perception applications often use efficient projections such as the range view (RV) and the bird's eye view (BEV), for which DA alternatives have also been proposed \cite{epointda, bevda1}.
In many of these works, CycleGAN \cite{cyclegan} and its cycle consistency mechanism have reported an excellent performance on image-level domain adaptation and content preservation for these 2D projection-based representations. Whilst this method can produce realistic adaptations of big and medium-sized vehicles, we argue that further refinement \cite{cycada, semgan} can help preserve scarcely represented objects, which are normally vulnerable road users such as pedestrians and cyclists.
This work proposes an approach to enhance the style transfer of BEV representations from a synthetic scalable source domain, generated using a simulator, to a real target domain.
The conversion, shown in Fig.~\ref{fig:init_diagram}, makes use of an adversarial generative network adapted to BEV representations and endowed with semantic segmentation consistency to help preserve object instances in the scene.
To the best of our knowledge, this is the first method addressing unsupervised domain adaptation between unpaired BEV images using a CycleGAN with multi-class semantic regularization.
\begin{figure}[thb]
\centering
\includegraphics[width=\linewidth]{images/cycleseg_init_2.pdf}
\caption{On the left, raw synthetic BEV recorded in the CARLA simulator. On the right, the result of the proposed domain adaptation. Zoomed regions are provided to better observe the differences.}
\label{fig:init_diagram}
\end{figure}
The goal is to avoid, or at least reduce, the need for labeled samples from the target domain, thus enabling the deployment of high-complexity models on custom setups. In particular, to validate the effectiveness of the approach, we use adapted synthetic data to train a state-of-the-art BEV-based 3D object detection method, BirdNet+ \cite{birdnet+}, which is later deployed and tested on the KITTI object detection benchmark \cite{kitti}.
\section{Related Work} \label{soa}
The advent of deep learning has led to a significant research interest in domain adaptation (DA).
Among deep DA methods, adversarial-based ones generally use a domain classifier to extract domain-invariant features through the confusion of the source and target domain boundaries \cite{dagans}. The optimum result from these networks is to minimize the domain distance to maximize the domain discriminator error, producing data that the discriminator cannot distinguish from real \cite{gans, improvedgans}. On the other hand, in the reconstruction-based DA category, \cite{johnsonstyle} combines different losses to recombine style and content from two separate images.
CycleGAN \cite{cyclegan} combines both solutions using an adversarial loss and a reconstruction loss (cycle consistency loss) to address the image-to-image translation problem when paired training data is not available (unsupervised DA or UDA).
Although the CycleGAN reconstruction task shows promising results in a wide variety of scenarios, CYCADA \cite{cycada} extends its capabilities using both image space alignment and latent representation space alignment. Besides, it incorporates a task to encourage content consistency enforcing relevant semantics to match before and after adaptation.
This semantic consistency has proven vital in multimodal scenarios because the invertibility provided by the cycle does not necessarily preserve the arrangement of the classes from the original source domain, as shown in \cite{semgan}. However, unlike our proposal, this method requires labels of both the source and target domains, as it operates in a supervised fashion.
All
these methods are designed for 2D vision tasks where RGB images are the protagonists. However, when it comes to LiDAR point cloud representations,
some adaptations are required.
In order to work with point clouds, the most straightforward alternative to preserve all the LiDAR information is to use raw clouds to perform point-wise DA and set-level DA \cite{pointgan1}. By the same token, PointDAN \cite{pointdan} studies local-level and global-level point cloud alignments by the use of self-adaptive attention nodes.
Although such methods are able to preserve all the LiDAR information, their execution time and memory requirements make them inefficient when it comes to a full point cloud. In this context, projection-based methods gain popularity due to their adaptability to the well-studied 2D approaches. Unfortunately, this also entails the inevitable loss of spatial information.
In this field, ePointDA \cite{epointda} uses range
view representations from simulation and real domains to bridge the domain gap at pixel-level and feature-level.
LiDARNet \cite{lidarnet} combines multiple tasks such as boundary extraction, cycle consistency, and domain invariance to address a full-scene semantic segmentation task on real range view images. BEV-Seg \cite{bev-seg} uses multiple camera angles, with RGB and depth images from a simulator to create a semantically enriched point cloud to find BEV semantic segmentation.
Regarding the BEV projection, \cite{bevda1} generates realistic scenarios from simulation data to transfer annotations between domains, and \cite{bevda} shows the capabilities of a similar method on a BEV-based detector.
As can be seen, many of the previous works focus their efforts on simulation-to-real domain adaptation (SRDA). The main reason is to avoid the very challenging annotation task, which is time- and money-consuming. Considering this issue, simulators such as CARLA \cite{carla}, which includes multiple sensor models, or LiDAR-based datasets such as GTA-V \cite{gtav} and SynthCity \cite{synthcity} have been developed.
Furthermore, some works improve existing synthetic data adding well-modeled obstacles where needed \cite{syntheticlidar}.
\section{Proposed Method} \label{method_head}
This section provides a detailed explanation of the proposed approach, which is depicted in Fig.~\ref{fig:network_diagram}. Two different sets of bird's eye view (BEV) images, encoding LiDAR information, are used as input to carry out a transformation between unpaired synthetic BEV point cloud data and real data.
This fact will make it possible to use annotations from synthetic data in place of real data, and therefore, expand and improve the diversity of the objects for the detection task.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.9\linewidth]{images/cycleseg_6.pdf}
\caption{Overview of the proposed approach. We depict the adversarial loss, to transform the domains, in purple, the cycle consistency mechanism in blue, the identity consistency in red, and the semantic constraint in orange.
Blocks $G$ and $F$ stand for the generators $G: X \rightarrow Y$ and $F: Y \rightarrow X$, $D_Y$ for the discriminator of the real domain, and CLS for the semantic segmentation network. For clarity reasons, only losses where the synthetic representation is involved are shown. The real cycle is designed to mirror the synthetic one.}
\label{fig:network_diagram}
\end{figure*}
\subsection{Input Representation} \label{input_repr}
Our BEV representation follows the one proposed in \cite{birdnet+}; however, we dispense with the LiDAR intensity data, for which realistic values are difficult to obtain. Thus, three distance-invariant channels are used: maximum height, normalized density within each cell, and binary occupancy.
In our experiments, we use a voxel size of $10$~cm to obtain handleable representations and a data range of $50$~m forward and $22.5$~m to each side in order to represent the area where the majority of annotations are available in the KITTI dataset.
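A rasterization consistent with the description above can be sketched as follows. The density normalization (by the per-image maximum) and the treatment of negative heights are assumptions, since the paper does not specify them; `points_to_bev` is a hypothetical helper name:

```python
import numpy as np

def points_to_bev(points, res=0.1, x_max=50.0, y_half=22.5):
    """Rasterize an (N, 3) LiDAR cloud (x forward, y left, z up) into the
    three-channel BEV described in the text: maximum height, normalized
    density per cell, and binary occupancy. Grid: 500 x 450 cells at 10 cm.
    Empty cells and negative heights are floored at 0 in this sketch."""
    h, w = int(x_max / res), int(2 * y_half / res)
    keep = (points[:, 0] >= 0) & (points[:, 0] < x_max) & \
           (np.abs(points[:, 1]) < y_half)
    pts = points[keep]
    rows = (pts[:, 0] / res).astype(int)
    cols = ((pts[:, 1] + y_half) / res).astype(int)
    bev = np.zeros((3, h, w))
    np.maximum.at(bev[0], (rows, cols), pts[:, 2])  # max height per cell
    np.add.at(bev[1], (rows, cols), 1.0)            # raw point count
    if bev[1].max() > 0:
        bev[1] /= bev[1].max()                      # normalized density
    bev[2] = (bev[1] > 0).astype(float)             # binary occupancy
    return bev
```

The unbuffered `np.maximum.at` / `np.add.at` calls are needed so that repeated hits on the same cell accumulate correctly.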
For the generation of the synthetic BEV images, we rely on the CARLA simulator \cite{carla}. This simulator provides multiple realistic scenarios, agent models, and sensors generated by the graphics engine Unreal.
We use a semantically enhanced LiDAR modeled after the device used in the KITTI dataset and let the domain adaptation stage model the sensor noise.
\subsection{Architecture Description} \label{arch_desc}
Our adversarial-based approach, which provides a translation of both representations guided by cycle, identity, and strong pixel-level semantic-aware consistencies, is built upon the CycleGAN architecture \cite{cyclegan}.
\medskip
\noindent \textbf{Adversarial network.}
Given the source domain $X$ (synthetic BEV images) and the target domain $Y$ (real data), an adversarial network aims to map the data distributions of each domain, $x \sim p_d(x)$ and $y \sim p_d(y)$, to the other.
First, the architecture is composed of two classifiers named domain discriminators, $D_X$ and $D_Y$, that learn the characteristic features of each domain separately. Then a set of generators $G$ and $F$ is designed to translate $G: X \rightarrow Y$ and $F: Y \rightarrow X$. Finally, the discriminators $D_X$ and $D_Y$ provide the necessary feedback on the ongoing mappings $G(x)$ and $F(y)$ until they cannot distinguish between the domains.
Generators $G$ and $F$, following the architecture in \cite{johnsonstyle}, are organized as follows: first, the encoder module reduces the initial resolution smoothly by a downsampling factor of~$4$.
Afterward, the transformer, which leads the conversion between domains, is built with nine residual blocks that follow a structure similar to that of the encoder,
but to which strong dropout is applied during training to provide noise \cite{patchgan_pix2pix}.
Finally, the decoder mimics the encoder structure, but it contains fractionally strided convolutions and one convolution to map features to RGB values, followed by a $\tanh$ activation function.
On the other hand, discriminator networks are inspired in the PatchGAN architecture \cite{patchgan_pix2pix}, where four sets of convolution + instance normalization + LeakyReLU layers downsample the initial resolution and then, two non-strided convolutions predict on $70 \times 70$-pixel overlapping image patches the domain to which they belong.
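The $70 \times 70$-pixel patch size corresponds to the receptive field of the discriminator's convolution stack, which can be checked with a short back-propagating computation. The stride layout below ($4\times4$ kernels, three stride-2 stages followed by two stride-1 convolutions, matching the standard PatchGAN) is an assumption, since the exact strides are not listed in the text, but it reproduces the cited patch size:

```python
def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride) conv layers,
    walking from a single output pixel back to the input."""
    rf = 1
    for k, s in reversed(layers):
        rf = rf * s + (k - s)
    return rf

# Assumed PatchGAN stride layout: 4x4 kernels, strides 2,2,2,1,1.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

With this layout, `receptive_field(patchgan)` evaluates to 70, i.e., each discriminator output classifies a $70 \times 70$ image patch.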
\medskip
\noindent \textbf{Cycle consistency.} Although adversarial training produces some resemblance between each domain results, it still lacks a mechanism to preserve the structure from the original domain after the conversion. Therefore, in \cite{cyclegan}, a cyclical training is proposed to prevent $G$ and $F$ from contradicting each other; thus, the mapping $G( F (y) )$ will attempt to reproduce the content in $y$ and $F( G (x) )$ the content in $x$.
\medskip
\noindent \textbf{Identity consistency.} One more constraint is imposed on the generators to minimize their distance to an identity mapping when real samples of the opposite domain are provided: $y \approx G(y)$ and $x \approx F(x)$. This idea generally better preserves the information contained in each channel, which may otherwise suffer from undesirable blending under adversarial training \cite{cyclegan}.
Working with a BEV representation requires retaining the per-pixel structure of the channels so that crucial information, such as object height or cell density, is not modified in the process.
\medskip
\noindent \textbf{Semantic consistency.}
Unlike \cite{bevda}, Sem-GAN \cite{semgan} demonstrates that the previous mechanisms (i.e., cycle and identity consistency) do not necessarily maintain object identities, but focus on the adaptation of the whole image instead. This implies that small details in a BEV representation, such as poles and pedestrians, could disappear during the transformation.
In order to encourage source voxels to keep their structure while being translated to the target domain, we follow the approach in CYCADA~\cite{cycada}. The idea consists of training a semantic segmentation network on the synthetic domain beforehand and using its predictions to ensure high semantic consistency after conversion, thus preserving the fine-grained content and the style of the input.
As our method is unsupervised, we only use synthetic labels in the process. In addition, for the semantic segmentation task, we chose a network that has previously provided state-of-the-art results on LiDAR projection-based representations, namely SalsaNext \cite{salsanext}. Although not dedicated to operating on BEV representations, its predecessor was able to use both representations indistinctly.
This network, arranged in an encoder-decoder fashion, is composed of a contextual module that stacks three residual blocks to fuse features of two different receptive fields. Afterward, the network follows a U-Net-based structure concatenating residual blocks $i$ with the $n - i$ blocks.
Similarly, the decoder utilizes a sequence that mimics the encoder; however, it is preceded by a pixel-shuffle layer.
\subsection{Loss Functions}
\label{loss_head}
To train the proposed adversarial network, two different cost functions are minimized. On the one hand, each discriminator $D$ tries to reduce, independently of the generator, an adversarial loss $\mathcal{L}_{\text{adv}_D}$; hence, for $D_Y$:
\begin{equation}
\begin{split}
\mathcal{L}_{\text{adv}_{D_Y}} &= \mathbb{E}_{y \sim p_d(y)} [(D_Y(y) - 1 )^2] + \\ &\hphantom{{}={}} \mathbb{E}_{x \sim p_d(x)} [ D_Y( G(x) )^2 ]
\end{split}
\label{eq:advloss_D_XY}
\end{equation}
In this way, the classifier is trained to distinguish between its domain ($D_Y(y) \approx 1$) and the domain representation created by the opposite generator ($D_Y(G(x)) \approx 0$).
On the other hand, the generators' multi-task training loss is given by the following equation, where each component has a weight $\lambda$ and is computed in both directions; i.e., $X\rightarrow Y$ and $Y\rightarrow X$:
\begin{equation}
\mathcal{L} = \mathcal{L}_{\text{adv}_\text{gen}} + \lambda_\text{cyc}\mathcal{L}_\text{cyc} + \lambda_\text{idt}\mathcal{L}_\text{idt} + \lambda_\text{sem}\mathcal{L}_\text{sem}
\label{eq:totalloss}
\end{equation}
The generators attempt to minimize the adversarial loss $\mathcal{L}_{\text{adv}_\text{gen}}$, where the usual negative log-likelihood or binary cross-entropy loss has been modified by a least-squares loss, as in \cite{cyclegan}:
\begin{equation}
\begin{split}
\mathcal{L}_{\text{adv}_\text{gen}} &= \mathbb{E}_{x \sim p_d(x)} [( D_Y( G(x) ) - 1 )^2 ] + \\ &\hphantom{{}={}} \mathbb{E}_{y \sim p_d(y)} [( D_X( F(y) ) - 1 )^2 ]
\end{split}
\label{eq:advloss_G_XY}
\end{equation}
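The two least-squares objectives above amount to the following; this is a minimal numpy sketch with hypothetical function names, where the arguments are the discriminator outputs on the corresponding batches:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Least-squares discriminator objective: push real outputs toward 1
    and outputs on generated samples toward 0."""
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def g_adv_loss(dy_on_gx, dx_on_fy):
    """Least-squares generator objective: make both discriminators
    output 1 on the translated samples G(x) and F(y)."""
    return np.mean((dy_on_gx - 1.0) ** 2) + np.mean((dx_on_fy - 1.0) ** 2)
```

A perfectly fooled pair of discriminators (all outputs at 1 on generated samples) drives the generator term to zero, while the discriminator term is zero only when real and fake batches are scored 1 and 0, respectively.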
The first reconstruction component, namely the cycle consistency loss $\mathcal{L}_\text{cyc}$, is an L1 penalty between the initial sample from one domain and its final representation after both translations, $x \rightarrow G( x ) \rightarrow F( G( x ) ) \approx x$ (and the analogous for domain $Y$):
\begin{equation}
\begin{split}
\mathcal{L}_\text{cyc} &= \mathbb{E}_{x \sim p_d(x)} [\norm{F(G(x)) - x}_1] + \\ &\hphantom{{}={}} \mathbb{E}_{y \sim p_d(y)} [\norm{G(F(y)) - y}_1]
\end{split}
\label{eq:cycloss}
\end{equation}
Secondly, the identity loss is defined as the mean absolute error (L1 loss) to ensure that, when a generator is fed a sample from the opposite domain, it can produce a representation of its own domain:
\begin{equation}
\begin{split}
\mathcal{L}_\text{idt} &= \mathbb{E}_{y \sim p_d(y)} [\norm{G(y) - y}_1] + \\ &\hphantom{{}={}} \mathbb{E}_{x \sim p_d(x)} [\norm{F(x) - x}_1]
\end{split}
\label{eq:idtloss}
\end{equation}
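Both reconstruction penalties reduce to mean absolute errors over pairs of images; a minimal sketch with the generators passed in as callables (hypothetical function names, batch dimensions omitted):

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return np.mean(np.abs(a - b))

def cycle_loss(x, y, G, F):
    """Cycle consistency: x -> G(x) -> F(G(x)) should return to x,
    and symmetrically for y."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

def identity_loss(x, y, G, F):
    """Identity mapping: feeding a generator a sample of its own
    output domain should be (near) a no-op."""
    return l1(G(y), y) + l1(F(x), x)
```

Note that a pair of generators that merely shift all pixel values by a constant still achieves zero cycle loss, while the identity term penalizes it, which is exactly why both constraints are combined for the per-pixel BEV channels.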
Finally, regarding the semantic loss, we use the SalsaNext \cite{salsanext} semantic segmentation outputs as a noisy labeler to keep as much context as possible after translation:
\begin{equation} \label{eq:loss_G_sem}
\begin{split}
\mathcal{L}_\text{sem} &= \mathcal{L}_\text{wCE}(\text{CLS}(G(x)), \argmax{(\text{CLS}(x))} ) + \\
&\hphantom{{}={}} \mathcal{L}_\text{wCE}(\text{CLS}(F(y)), \argmax{(\text{CLS}(y))} )
\end{split}
\end{equation}
where $\mathcal{L}_\text{wCE}$ represents a weighted cross-entropy loss, which is computed over the pixels of the semantic prediction ($\text{CLS}$) for $G(x)$ and $F(y)$ with the predictions from the source BEVs ($x$ and $y$, respectively) as weak labels. Only cells with non-zero values in both inputs contribute to the loss in order to preserve both the class and location of the points. Weights are used to increase the importance, by a $2\times$ factor, of the categories of interest (i.e., cars, pedestrians, and cyclists) over the rest of classes (e.g., roads or buildings), which are still included to maintain the geometric consistency of the complete scene.
As stated above, we use a SalsaNext model as semantic predictor ($\text{CLS}$). This model is trained beforehand with synthetic data through the usual weighted multi-category cross-entropy and Lovász-Softmax losses.
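The masked, weighted cross-entropy described above can be sketched as follows. The class axis, the per-image mean, and the helper name `masked_weighted_ce` are assumptions for illustration; `weak_labels` plays the role of $\argmax(\text{CLS}(x))$ and `mask` keeps only cells that are non-zero in both inputs:

```python
import numpy as np

def masked_weighted_ce(logits, weak_labels, mask, class_w):
    """Weighted cross-entropy over valid cells only.

    logits:      (C, H, W) class scores for the translated BEV, CLS(G(x)).
    weak_labels: (H, W) integer argmax predictions on the source BEV.
    mask:        (H, W) 1 where both inputs are non-zero, else 0.
    class_w:     (C,) per-class weights (2x for cars/pedestrians/cyclists).
    """
    z = logits - logits.max(axis=0, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    h, w = weak_labels.shape
    nll = -np.log(p[weak_labels, np.arange(h)[:, None], np.arange(w)])
    weighted = class_w[weak_labels] * nll * mask
    return weighted.sum() / max(mask.sum(), 1)            # mean over valid cells
```

In the full objective, this quantity would be evaluated twice, once for $G(x)$ against the prediction on $x$ and once for $F(y)$ against the prediction on $y$, matching the two terms of the semantic loss.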
\section{Experimental Results} \label{experiments_head}
In this section, we present a set of experiments to validate the performance of our domain adaptation approach from synthetic BEV data to real BEV representations. Our implementation will be evaluated on the well-known KITTI object dataset using the 3D detector specified below.
\subsection{3D Obstacle Detection for Evaluation} \label{det_3d}
Assessing the feasibility of a domain adaptation method on a 3D detector is a common practice nowadays. The main advantage lies in the fact that the results will offer a good estimate of the resemblance of the generated data to the real dataset to be emulated.
With this task in mind, we have chosen a 3D detector that uses enriched BEV inputs to provide the object's location, shape, and category in a two-stage end-to-end fashion. The first stage of the BirdNet+ architecture \cite{birdnet+} is built upon a residual-based encoder with per-level skips to preserve global and local content (ResNet-50 and a feature pyramid network). These features are fed into a region proposal network that classifies and refines a default anchor estimation. These non-axis-aligned proposals are dimensionally normalized by an ROI Align layer and forwarded to a second stage composed of a sequence of multiple fully-connected layers, which finally estimate the 3D object parameters.
\subsection{Experimental Setup} \label{train_dets}
As in the original CycleGAN approach \cite{cyclegan}, the adversarial network is trained from scratch, with the weights of every layer initialized from a normal distribution $\mathcal{N}(0,\,0.02)$. Training data are randomly augmented using different techniques: horizontal flip, point dropout, and additive Gaussian noise with a distribution similar to that of our input representation. Following \cite{improvedgans}, real labels in (\ref{eq:advloss_D_XY}) and (\ref{eq:advloss_G_XY}) are randomly softened and kept between $0.7$ and $1$. Additionally, the last $50$ generated samples are used to compute the discriminators' losses, which provides better stability.
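The label softening and the 50-sample history used for the discriminator updates can be sketched as below; the pop-oldest/random-draw policy is an assumption (CycleGAN draws randomly from a pool of past generations), and the names are hypothetical:

```python
import random

def smooth_real_label():
    """Softened 'real' target, drawn uniformly from [0.7, 1]."""
    return random.uniform(0.7, 1.0)

class ImageHistory:
    """Buffer of the last generated samples used when updating the
    discriminators: push the newest sample, drop the oldest beyond
    `size`, and draw a random element for the loss computation."""
    def __init__(self, size=50):
        self.size, self.buf = size, []

    def push_and_draw(self, img):
        self.buf.append(img)
        if len(self.buf) > self.size:
            self.buf.pop(0)
        return random.choice(self.buf)
```

Feeding the discriminator a mix of current and past generator outputs keeps its gradient signal from chasing the most recent generator state, which is the stability effect referred to in the text.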
For our experiments, we fix $\lambda_\text{cyc} = \lambda_\text{idt} = 10$ and $\lambda_\text{sem} = 0.5$ in (\ref{eq:totalloss}) to weigh all the losses. We use the Adam solver for the optimization with momentum $[0.5, 0.99]$ and train up to $50$ epochs. The number of epochs, batch size, learning rate (LR), and learning rate decay of all networks involved in this work are indicated in Table~\ref{tab:train_params}.
\begin{table}[htbp]
\caption{Training Parameters for the Different Models.}
\label{tab:train_params}
\begin{center}
\begin{tabular}{l r c l l}
\toprule
\textbf{Network} & \textbf{Epochs} & \textbf{Batch} & \textbf{LR} & \textbf{LR decay}\\
\midrule
DA network & 50 & 1 & 0.0001 & None\\
SalsaNext & 100 & 1 & 0.01 & 0.01 every epoch\\
BirdNet+ & $\sim$12 & 4 & 0.01 & 0.1 at the 10\textsuperscript{th} epoch\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Our synthetic data have been extracted from five different towns
delivered by the CARLA simulator. Between $1000$ and $1500$ samples per town were extracted, with a delay of $0.5$ simulated seconds between captures, to provide more variability in our data. It is worth noting that in one of the maps, we only use cyclists and pedestrians to deal with the unbalanced data.
The scenes contain a limited and approximately constant number of agents, which are spawned and destroyed outside the field of view of our BEV representation to avoid inconsistencies.
Besides, we divide the CARLA vehicle class into more fine-grained categories (i.e., car, motorcycle, and bicycle), matching the KITTI labeling criteria for two-wheelers (so that they include both the vehicle and the rider).
Additionally, 3D labels for parked vehicles, which are not reported by CARLA, are estimated using the dense BEV semantic representation provided by a top-view camera.
In total, $6878$ synthetic BEV images, endowed with semantic and 3D box labels, are extracted. They are used to train both SalsaNext and the proposed domain adaptation framework (together with the KITTI training set as target). Finally, the 3D detection model is trained with the adapted images (considering only the car, pedestrian, and cyclist categories) and tested on the KITTI validation set, defined as in \cite{birdnet+}.
\subsection{Overall assessment} \label{results_head}
To shed light on the importance of the proposed consistency in the domain adaptation, Fig.~\ref{fig:da_comparison} shows a sample from the CARLA simulator, the output after the adversarial training proposed in~\cite{cyclegan}, and the output with our model, including all the losses described in Sec.~\ref{loss_head}.
\begin{figure}
\subfloat{%
\includegraphics[width=0.31\linewidth]{images/DA/239000124_s.png}} \hfill
\subfloat{%
\includegraphics[width=0.31\linewidth]{images/DA/239000124_da.png}} \hfill
\subfloat{%
\includegraphics[width=0.31\linewidth]{images/DA/239000124_das.png}} \setcounter{subfigure}{0} \\
\subfloat[Synthetic]{%
\includegraphics[width=0.31\linewidth]{images/DA/439000010_s.png}\label{fig:1s}} \hfill
\subfloat[Original DA]{%
\includegraphics[width=0.31\linewidth]{images/DA/439000010_da.png}\label{fig:2da}} \hfill
\subfloat[Ours]{%
\includegraphics[width=0.31\linewidth]{images/DA/439000010_das.png}\label{fig:3das}} \hfill
\caption{Domain adaptation results with the original DA (CycleGAN) and the proposed approach.} \label{fig:da_comparison}
\end{figure}
The noise introduced by the original DA approach (Fig.~\ref{fig:2da}) adds unrealistic LiDAR measurements and changes the pixel values of some areas. On the other hand, our method (Fig.~\ref{fig:3das}) seems to eliminate some ground points that may impact the segmentation task, but it preserves the semantic identity of each individual pixel significantly better while modeling the noise of the real LiDAR.
For the evaluation of the performance of the BirdNet+ detector trained with the adapted data, we follow the 3D and BEV detection tasks from the KITTI object detection benchmark \cite{kitti}.
The strong IoU requirements between detections and labels imposed by the official evaluation do not suit the localization uncertainty exhibited by models trained only on synthetic data; therefore, we employ less strict thresholds: $50\%$ (cars) and $30\%$ (pedestrians and cyclists).
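For illustration, the relaxed matching criterion can be sketched with axis-aligned BEV boxes (a simplification, since the official KITTI evaluation uses rotated boxes; all names here are ours):

```python
def iou_bev(a, b):
    """IoU of two axis-aligned BEV boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

# Relaxed thresholds: 50% for cars, 30% for pedestrians and cyclists.
IOU_THRESHOLDS = {"car": 0.5, "pedestrian": 0.3, "cyclist": 0.3}

def is_true_positive(cls, det, gt):
    """A detection counts as correct if its IoU with the ground-truth
    box reaches the per-class threshold."""
    return iou_bev(det, gt) >= IOU_THRESHOLDS[cls]
```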
Table~\ref{tab:bev3d_da_comparison} shows the considerable performance gap caused by the domain shift between synthetic and real data in our representations. It is worth noting that the original DA method focuses on medium and large-sized obstacles (i.e., cars) and structures rather than small objects, which end up partially occluded by the generated noise. In our approach, the disappearance or modification of these elements penalizes the generator, which becomes more aware of the semantic context of each point. Thus, our method better preserves the details in the scene, easing the task of 3D object detection in BEV representations.
\begin{table}[htb]
\caption{BirdNet+ Detection Performance (AP BEV \% and AP 3D \%) on the KITTI Val Set for the Different Training Data Sources.}
\label{tab:bev3d_da_comparison}
\centering
\begin{tabular}{l c c c c c c }
\toprule
& \multicolumn{2}{c}{Car} & \multicolumn{2}{c}{Pedestrian} & \multicolumn{2}{c}{Cyclist} \\
\cmidrule(lr){2-3} \cmidrule(l){4-5} \cmidrule(l){6-7}
& BEV & 3D & BEV & 3D & BEV & 3D \\
\midrule
Oracle (KITTI) & 81.94 & 67.04 & 50.17 & 43.90 & 42.74 & 39.89 \\
\midrule
Synthetic & 52.91 & 46.82 & 18.11 & 17.91 & 22.37 & 21.85 \\
Original DA & \textbf{61.53} & 48.08 & 10.58 & 07.45 & 18.82 & 16.56 \\
Ours & 53.79 & \textbf{48.61} & \textbf{26.21} & \textbf{25.81} & \textbf{29.88} & \textbf{29.75} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[thb]
\captionsetup[subfloat]{farskip=4pt}
\def0.325{0.325}
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007232_s.png}%
}\hfill
\setcounter{subfigure}{0}%
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007232_da.png}\label{fig:7232_qual}%
}\hfill
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007232_das.png}%
}\\
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007277_s.png}%
}\hfill
\setcounter{subfigure}{1}%
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007277_da.png}\label{fig:7277_qual}%
}\hfill
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007277_das.png}%
}\\
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007464_s.png}%
}\hfill
\setcounter{subfigure}{2}%
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007464_da.png}\label{fig:7464_qual}%
}\hfill
\subfloat{%
\includegraphics[width=0.325\linewidth]{images/Det/007464_das.png}%
}
\caption{Qualitative results on the KITTI validation set produced by BirdNet+ using different training data. From left to right: raw synthetic data, cycle-consistent DA, cycle- and semantic-consistent DA.} \label{fig:kitti_qual}
\end{figure*}
The validity of the method is further confirmed in the qualitative results depicted in Fig.~\ref{fig:kitti_qual}.
As can be seen, our method adjusts the elevation of cars (first row) and pedestrians (third row) more accurately, since it preserves pixel values better than the alternatives while generating fewer artifacts. In addition, it provides higher recall on small classes (second and third rows) such as pedestrians and cyclists, although it occasionally fails to distinguish between them. Although detection capabilities are naturally limited by the domain gap, our method demonstrates a significant improvement over its predecessor and the synthetic-only approach in the 3D detection task.
\section{CONCLUSIONS}
In this work, it has been shown that enforcing semantic consistency in GAN-based domain adaptation of BEV projections helps preserve the original layout of the elements in the synthetic scene during style transfer. The performance of the presented framework has been assessed using a state-of-the-art object detection network on the challenging KITTI benchmark. Unlike the baseline method, our results improve by a wide margin those obtained when training with raw synthetic data, with the difference being especially significant in the detection of smaller road participants.
In future works, a lossless LiDAR style transfer will be studied so that any kind of object detection network can be used regardless of its input representation. To this aim, two different approaches will be explored: first, by means of several simultaneous generators per domain dedicated to different projections, which will ultimately allow reconstructing the LiDAR point cloud; and second, by designing a method able to perform domain adaptation over the raw 3D information.
\section*{ACKNOWLEDGMENT}
Research conducted within the project PEAVAUTO-CM-UC3M, funded by the call “Programa de apoyo a la realización de proyectos interdisciplinares de I+D para jóvenes investigadores de la Universidad Carlos III de Madrid 2019--2020” under the frame of the Convenio Plurianual Comunidad de Madrid--Universidad Carlos III de Madrid.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
\bibliographystyle{IEEEtran}
\section{Definitions and Proofs} \label{ap:proofs}
\subsection{Definitions}
\begin{definition}[Assigns-to set $\assset(S)$] \label{def:assset}
$\assset(S)$ is the set that contains the names of global variables that have been assigned to within the statement $S$. It is defined recursively as follows: \vspace{-10pt}
\begin{multicols}{2}\noindent
$\assset(x[E_1]\dots[E_n] = E) = \{x\}$ \\
$\assset(\{T x; S\}) = \assset(S) \setminus \{x\}$\\
$\assset(S_1; S_2) = \assset(S_1) \cup \assset(S_2) $ \\
$\assset(\kw{skip}) = \emptyset $ \\
$\assset(\kw{if}(E)\; S_1 \;\kw{else}\; S_2) = \assset(S_1)\cup \assset(S_2) $
$\assset(\kw{for}(x\;\kw{in}\;E_1:E_2)\;S) = \assset(S) \setminus \{x\}$
\end{multicols}\vspace{-12pt}
\end{definition}
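As an executable reading of this definition, statements can be encoded as tuples and the assigns-to set computed recursively (our own toy encoding, not part of SlicStan; index expressions are omitted since they contribute no assigned names):

```python
def assset(S):
    """Assigns-to set W(S), following the recursive definition above."""
    tag = S[0]
    if tag == "assign":              # x[E1]...[En] = E
        return {S[1]}
    if tag == "decl":                # {T x; S'}: local x is removed
        return assset(S[2]) - {S[1]}
    if tag == "seq":                 # S1; S2
        return assset(S[1]) | assset(S[2])
    if tag == "skip":
        return set()
    if tag == "if":                  # if(E) S1 else S2
        return assset(S[1]) | assset(S[2])
    if tag == "for":                 # for(x in E1:E2) S': loop var removed
        return assset(S[2]) - {S[1]}
    raise ValueError(tag)
```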
\begin{definition}[Reads set $\readset(S)$] \label{def:readset}
$\readset(S)$ is the set that contains the names of global variables that have been read within the statement $S$. It is defined recursively as follows: \vspace{-10pt}
\begin{multicols}{2}\noindent
$\readset(x) = \{x\}$ \\
$\readset(c) = \emptyset$ \\
$\readset([E_1,\dots,E_n]) = \bigcup_{i=1}^n\readset(E_i)$ \\
$\readset(E_1[E_2]) = \readset(E_1) \cup \readset(E_2) $ \\
$\readset(f(E_1,\dots,E_n)) = \bigcup_{i=1}^n\readset(E_i)$\\
$\readset(F(E_1,\dots,E_n)) = \bigcup_{i=1}^n\readset(E_i)$\\
$\readset(x[E_1]\dots[E_n] = E) = \bigcup_{i=1}^n\readset(E_i) \cup \readset(E)$ \\
$\readset(\{T x; S\}) = \readset(S) \setminus \{x\}$\\
$\readset(S_1; S_2) = \readset(S_1) \cup \readset(S_2) $ \\
$\readset(\kw{skip}) = \emptyset $ \\
$\readset(\kw{if}(E)\; S_1 \;\kw{else}\; S_2) = \readset(E)\cup\readset(S_1)\cup \readset(S_2) $
$\readset(\kw{for}(x\;\kw{in}\;E_1:E_2)\;S) = \readset(E_1) \cup \readset(E_2) \cup \readset(S) \setminus \{x\}$
\end{multicols}\vspace{-12pt}
\end{definition}
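Under the same toy tuple encoding, the reads set can be computed for expressions and statements (a sketch under our own encoding; we read the $\setminus \{x\}$ in the \kw{for} case as applying to the whole union, and we fold the $f$ and $F$ cases into one \texttt{call} constructor):

```python
def readset_e(E):
    """Reads set of an expression, encoded as tuples."""
    tag = E[0]
    if tag == "var":                 # x
        return {E[1]}
    if tag == "const":               # c
        return set()
    if tag == "array":               # [E1,...,En]
        return set().union(*(readset_e(e) for e in E[1:]))
    if tag == "call":                # ("call", f, [E1,...,En]); f and F alike
        return set().union(*(readset_e(e) for e in E[2]))
    if tag == "index":               # E1[E2]
        return readset_e(E[1]) | readset_e(E[2])
    raise ValueError(tag)

def readset_s(S):
    """Reads set of a statement."""
    tag = S[0]
    if tag == "assign":              # ("assign", x, [index exprs], rhs)
        return set().union(*(readset_e(e) for e in S[2]), readset_e(S[3]))
    if tag == "decl":                # {T x; S'}
        return readset_s(S[2]) - {S[1]}
    if tag == "seq":
        return readset_s(S[1]) | readset_s(S[2])
    if tag == "skip":
        return set()
    if tag == "if":                  # ("if", E, S1, S2)
        return readset_e(S[1]) | readset_s(S[2]) | readset_s(S[3])
    if tag == "for":                 # ("for", x, E1, E2, S')
        return (readset_e(S[2]) | readset_e(S[3]) | readset_s(S[4])) - {S[1]}
    raise ValueError(tag)
```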
\begin{definition}[Type of expression $E$ in $\Gamma$] \label{def:lvaluetypes}
$\Gamma(E)$ is the type of the expression $E$ with respect to $\Gamma$:\\
\noindent
$\Gamma(x) = (\tau, \ell)$ for $x: (\tau, \ell) \in \Gamma$ \\
$\Gamma(c) = (\kw{ty}(c), \lev{data})$ \\
$\Gamma([E_1,\dots,E_n]) = (\tau[], \ell_1 \sqcup \dots \sqcup \ell_n )$ if $\Gamma(E_i) = (\tau, \ell_i)$ for $i \in 1..n$, with $\ell' \sqcup \ell''$ denoting the least upper bound of $\ell'$ and $\ell''$; it is undefined otherwise. \\
$\Gamma(E_1[E_2]) = (\tau, \ell \sqcup \ell')$ if $\Gamma(E_1)=(\tau[], \ell)$ and $\Gamma(E_2)=(\kw{int}, \ell')$, and it is undefined otherwise. \\
$\Gamma(f(E_1,\dots, E_n)) = (\tau, \ell)$ if $\Gamma(E_i) = (\tau_i, \ell_i)$ for $i \in 1..n$, and $f:(\tau_1, \ell_1), \dots, (\tau_n, \ell_n) \rightarrow (\tau, \ell)$.
\end{definition}
The \textit{elaboration relation} transforms SlicStan statements and expressions to Core Stan statements and expressions. Thus, throughout this document, we use the terms ``elaborated statement'' and ``elaborated expression'' to mean a Core Stan statement and a Core Stan expression respectively.
\begin{definition}[Vectorising functions $v_{\Gamma}, v_E, v_S$] \label{def:vector} ~
\begin{enumerate}
\item $v_{\Gamma}(\Gamma, n) \deq \{x:(\tau[n], \ell)\}_{x:(\tau, \ell) \in \Gamma}$, for any typing environment $\Gamma$.
\item $v_E(x, \Gamma, E)$ is defined for a variable $x$, typing environment $\Gamma$, and an elaborated expression $E$:
\vspace{-8pt}
\begin{multicols}{2}
$v_E(x, \Gamma, x') = \begin{cases} x'[x] &\text{if } x' \in \dom (\Gamma) \\ x' &\text{if } x' \notin \dom(\Gamma) \end{cases}$ \\
$v_E(x, \Gamma, c) = c$ \\
$v_E(x, \Gamma, [E_1,\dots, E_n]) = [v_E(E_1),\dots,v_E(E_n)]$ \\
$v_E(x, \Gamma, E_1[E_2]) = v_E(E_1)[v_E(E_2)]$ \\
$v_E(x, \Gamma, f(E_1,\dots,E_n)) = f(v_E(E_1),\dots,v_E(E_n))$
\end{multicols} \vspace{-6pt}
\item $v_{S}(x, \Gamma, S)$ is defined for a variable $x$, typing environment $\Gamma$, and an elaborated statement $S$:
\vspace{-8pt}
\begin{multicols}{2}
$v_{S}(x, \Gamma, L = E) = (v_E(L) = v_E(E))$ \\
$v_{S}(x, \Gamma, S_1; S_2) = v_{S}(x, \Gamma, S_1);v_{S}(x, \Gamma, S_2) $ \\
$v_{S}(x, \Gamma, \kw{if}(E)\; S_1\; \kw{else}\; S_2) =$ \\
$\text{ }\qquad\kw{if}(v_E(E))\; v_{S}(S_1)\; \kw{else}\; v_{S}(S_2)$ \\
$v_{S}(x, \Gamma, \kw{for}(x'\;\kw{in}\;E_1:E_2)\;S') =$ \\
$\text{ }\qquad\kw{for}(x'\;\kw{in}\;v_E(E_1):v_E(E_2))\;v_{S}(S')$ \\
$v_{S}(x, \Gamma, \kw{skip}) = \kw{skip}$
\end{multicols} \vspace{-6pt}
\end{enumerate}
\end{definition}
\begin{definition}[$\readset_{\Gamma \vdash \ell}(S)$]\label{def:read_level_set}
$\readset_{\Gamma \vdash \ell}(S)$ is the set that contains the names of global variables that have been read at level $\ell$ with respect to $\Gamma$ within the statement S. It is defined recursively as follows: \vspace{-10pt}
\begin{multicols}{2}\noindent
$\readset_{\Gamma \vdash \ell}(x) = \emptyset $ \\
$\readset_{\Gamma \vdash \ell}(x[E_1]\dots[E_n]) = \begin{cases}
\bigcup_{i=1}^n\readset(E_i) & \text{if}~\Gamma(x) = \ell \\
\emptyset & \text{otherwise}
\end{cases}$ \\
$\readset_{\Gamma \vdash \ell}(L = E) =
\begin{cases}
\readset_{\Gamma \vdash \ell}(L) \cup \readset(E) & \text{if}~\Gamma(L) = \ell \\
\emptyset & \text{otherwise}
\end{cases}$ \\
$\readset_{\Gamma \vdash \ell}(S_1; S_2) = \readset_{\Gamma \vdash \ell}(S_1) \cup \readset_{\Gamma \vdash \ell}(S_2) $ \\
$\readset_{\Gamma \vdash \ell}(\kw{skip}) = \emptyset $ \\
%
For $S = \kw{if}(E)\; S_1 \;\kw{else}\; S_2$, and \\ $A = \readset_{\Gamma \vdash \ell}(S_1) \cup \readset_{\Gamma \vdash \ell}(S_2)$: \\
$\readset_{\Gamma \vdash \ell}(S) =
\begin{cases}
\readset(E) \cup A & \text{if}~A \neq \emptyset \\
\emptyset & \text{otherwise}
\end{cases}$ \\
%
For $S = \kw{for}(x\;\kw{in}\;E_1:E_2)\;S'$, and\\
$A = \readset_{\Gamma \vdash \ell}(S')$\\
$\readset_{\Gamma \vdash \ell}(S) =
\begin{cases}
\readset(E_1) \cup \readset(E_2) \cup A & \text{if}~A \neq \emptyset \\
\emptyset & \text{otherwise}
\end{cases}$
\end{multicols}\vspace{-12pt}
\end{definition}
\begin{definition}[$\assset_{\Gamma \vdash \ell}(S)$]\label{def:write_level_set}
$\assset_{\Gamma \vdash \ell}(S) \deq \{x \in \assset(S) \mid \Gamma(x) = (\tau, \ell) \text{ for some } \tau\}$
\end{definition}
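A direct transcription of this filter (a sketch; $\Gamma$ is modelled as a dict from names to (type, level) pairs, and the assigns-to set is passed in precomputed):

```python
def assset_at_level(assigned, gamma, level):
    """W_{Gamma |- level}(S): keep only those assigned variables x with
    Gamma(x) = (tau, level) for some type tau."""
    return {x for x in assigned if gamma[x][1] == level}
```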
\begin{definition}[Merging Stan programs $\stan_1; \stan_2$] \label{def:stanmerge} \\
Let $\stan_1$ and $\stan_2$ be two Stan programs, such that for $i = 1, 2$:
\begin{align*}
\stan_i = \squad\, &\databl{\,\Gamma_d^{(i)}\,} \\ &\kw{transformed data}\{\,\Gamma_d^{(i)'}\hquad S_d^{(i)'}\,\} \\ &\paramsbl{\,\Gamma_m^{(i)}\,} \\ &\kw{transformed parameters}\{\,\Gamma_m^{(i)'}\hquad S_m^{(i)'}\,\} \\ &\modelbl{\,\Gamma_m^{(i)''}\hquad S_m^{(i)''}\,} \\ &\kw{generated quantities}\{\,\Gamma_q^{(i)}\hquad S_q^{(i)}\,\}
\end{align*}
The sequence of $\stan_1$ and $\stan_2$, written $\stan_1; \stan_2$ is then defined as:
\begin{align*}
\stan_1; \stan_2 = \squad\, &\databl{\,\Gamma_d^{(1)},\Gamma_d^{(2)}\,} \\ &\kw{transformed data}\{\,\Gamma_d^{(1)'},\Gamma_d^{(2)'}\hquad S_d^{(1)'};S_d^{(2)'}\,\} \\ &\paramsbl{\,\Gamma_m^{(1)},\Gamma_m^{(2)}\,} \\ &\kw{transformed parameters}\{\,\Gamma_m^{(1)'},\Gamma_m^{(2)'}\hquad S_m^{(1)'};S_m^{(2)'}\,\} \\ &\modelbl{\,\Gamma_m^{(1)''},\Gamma_m^{(2)''}\hquad S_m^{(1)''};S_m^{(2)''}\,} \\ &\kw{generated quantities}\{\,\Gamma_q^{(1)},\Gamma_q^{(2)}\hquad S_q^{(1)};S_q^{(2)}\,\}
\end{align*}
\end{definition}
\subsection{Proof of Semantic Preservation of Shredding} \label{ap:proofshred}
\begin{lemma}\label{lemma:dom}
If $s \models \Gamma$ then $\dom(s)=\dom(\Gamma)$.
\end{lemma}
\begin{proof}
By inspection of the definition of $s \models \Gamma$.
\end{proof}
\begin{lemma}\label{lemma:assset}
If $S$ is well-typed in some environment $\Gamma$, $x \in \dom(s)$ and $(s, S) \Downarrow s'$ and $x \notin \assset(S)$ then $s(x)=s'(x)$.
\end{lemma}
\begin{proof}
By induction on the derivation $(s, S) \Downarrow s'$.
\end{proof}
\begin{lemma} \label{lem:same_effect}
If $(s_1, S) \Downarrow s_1'$ and $(s_2, S) \Downarrow s_2'$ for some $s_1, s_1', s_2, s_2'$, and $s_1(x) = s_2(x)$ for all $x \in A$, where $A \supseteq \readset(S)$, then $s_1'(y) = s_2'(y)$ for all $y \in A\cup\assset(S)$.
\end{lemma}
\begin{proof}
By induction on the structure of $S$.
\end{proof}
\begin{restate}{Lemma~\ref{lem:shredisleveled}(Shredding produces single-level statements)}
$$S \shred[\Gamma] \shredded \implies \singlelevelS{\lev{data}}{S_D} \wedge \singlelevelS{\lev{model}}{S_M} \wedge \singlelevelS{\lev{genquant}}{S_Q}$$
\end{restate}
\begin{proof}
By rule induction on the derivation of $S \shred \shredded$.
\end{proof}
\begin{restate}{Lemma~\ref{lem:reorder} (Statement Reordering)}
For statements $S_1$ and $S_2$ that are well-typed in $\Gamma$, if $\readset(S_1)\cap\assset(S_2) = \emptyset$, $\assset(S_1)\cap\readset(S_2) = \emptyset$, and $\assset(S_1)\cap\assset(S_2) = \emptyset$ then $S_1;S_2 \eveq S_2; S_1$.
\end{restate}
\begin{proof}
Let $R_i = \readset(S_i)$ and $W_i = \assset(S_i)$ for $i = 1,2$.
Take any state $s$ and assume that $s \models \Gamma$. Suppose that
$(s, S_1) \Downarrow s_1$,
$(s, S_2) \Downarrow s_2$,
$(s_1, S_2) \Downarrow s_{12}$, and
$(s_2, S_1) \Downarrow s_{21}$.
We want to prove that $s_{12}=s_{21}$.
By Theorem~\ref{th:eval} and Lemma~\ref{lemma:dom}, we have $\dom(\Gamma)=\dom(s)=\dom(s_1)=\dom(s_2)=\dom(s_{12})=\dom(s_{21})$.
%
Now, as $S_1$ writes only to $W_1$, by Lemma~\ref{lemma:assset}, we have that for all variables $x \in \dom(\Gamma)$:
\begin{equation}\label{eq:1}
x \notin W_1 \implies s(x) = s_1(x) \wedge s_2(x) = s_{21}(x)
\end{equation}
But $R_2$ and $W_1$ are disjoint, and $W_2$ and $W_1$ are disjoint, therefore $x \notin W_1$ for all $x \in R_2 \cup W_2$, and hence by Lemma~\ref{lemma:assset}:
\begin{equation} \label{eq:2}
x \in R_2 \cup W_2 \implies s(x) = s_1(x) \wedge s_2(x) = s_{21}(x)
\end{equation}
If two states are equal up to all variables in $\readset(S_2)$, then $S_2$ has the same effect on them (Lemma~\ref{lem:same_effect}).
Combining this with (\ref{eq:2}) gives us:
\begin{equation} \label{eq:4}
x \in R_2 \cup W_2 \implies s_2(x) = s_{12}(x)
\end{equation}
Next, combining (\ref{eq:2}) and (\ref{eq:4}), gives us:
\begin{equation} \label{eq:5}
x \in R_2 \cup W_2 \implies s_2(x) = s_{12}(x) = s_{21}(x)
\end{equation}
Applying the same reasoning, but starting from $S_2$, we also obtain:
\begin{align} \label{eq:6}
x \notin W_2 \implies s(x)=s_2(x) \wedge s_1(x) = s_{12}(x)
\\ \label{eq:7}
x \in R_1 \cup W_1 \implies s_1(x) = s_{21}(x) = s_{12}(x)
\end{align}
Finally, we have:
\begin{itemize}
\item $\forall x\in R_1\cap W_2. s_{12}(x)=s_{21}(x)$, as
$R_1\cap W_2 = \emptyset$;
\item $\forall x\in W_1\cap R_2. s_{12}(x)=s_{21}(x)$, as
$W_1\cap R_2 = \emptyset$;
\item $\forall x\in W_1\cap W_2. s_{12}(x)=s_{21}(x)$, as
$W_1\cap W_2 = \emptyset$;
\item $\forall x\notin W_1 \cup W_2. s_{12}(x)=s_{21}(x)$, by combining (\ref{eq:1}) with (\ref{eq:6});
\item $\forall x\in R_2\cup W_2. s_{12}(x)=s_{21}(x)$, by (\ref{eq:5});
\item $\forall x\in R_1\cup W_1. s_{12}(x)=s_{21}(x)$, by (\ref{eq:7});
\end{itemize}
This covers all possible cases for $x \in \dom(\Gamma)$, therefore $\forall x \in \dom(\Gamma). s_{12}(x) = s_{21}(x)$. But $\dom(\Gamma) = \dom(s_{12}) = \dom(s_{21})$, thus $s_{12} = s_{21}$.
\end{proof}
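A concrete instance of the reordering lemma, with states modelled as Python dicts and the two statements encoded as state-updating functions whose read and write sets are disjoint (our own toy encoding):

```python
def run(stmts, state):
    """Execute a sequence of statements on a copy of `state`."""
    s = dict(state)
    for stmt in stmts:
        stmt(s)
    return s

def S1(s):          # reads a, writes b
    s["b"] = s["a"] + 1

def S2(s):          # reads c, writes d
    s["d"] = s["c"] * 2

s0 = {"a": 1, "b": 0, "c": 3, "d": 0}
s12 = run([S1, S2], s0)   # S1; S2
s21 = run([S2, S1], s0)   # S2; S1
assert s12 == s21 == {"a": 1, "b": 2, "c": 3, "d": 6}
```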
\begin{lemma}[] \label{lem:vars_in_exps}
If $~\Gamma(x) = (\tau, \ell)$, $\Gamma \vdash E :(\tau', \ell')$, and $x \in \readset(E)$ then $\ell \leq \ell'$.
\end{lemma}
\begin{proof}
By induction on the structure of the derivation $\Gamma \vdash E:(\tau', \ell')$.
\end{proof}
\begin{lemma}[] \label{lem:exp_in_singleleveled_s}
If $~\Gamma(E) = (\tau, \ell)$, $\Gamma \vdash \ell'(S)$, and $E$ occurs in $S$, then $\ell \leq \ell'$.
\end{lemma}
\begin{proof}
By induction on the structure of S.
\end{proof}
\begin{restate}{Lemma~\ref{lem:halfway}} ~
If $~\Gamma \vdash \ell_1(S_1)$, $\Gamma \vdash \ell_2(S_2)$ and $\ell_1 < \ell_2$ then $\readset(S_1) \cap \assset(S_2) = \emptyset$ and $\assset(S_1) \cap \assset(S_2) = \emptyset$.
\end{restate}
\begin{proof} Suppose $\Gamma \vdash \ell_1(S_1)$, $\Gamma \vdash \ell_2(S_2)$ and $\ell_1 < \ell_2$. Then, directly from Definition~\ref{def:singlelev}, we have that $\assset(S_1) \cap \assset(S_2) = \emptyset$.
Next, we prove by contradiction that $\readset(S_1) \cap \assset(S_2) = \emptyset$.
%
Suppose that for some $x$, $x\in \readset(S_1)$ and $x \in \assset(S_2)$. From $x \in \assset(S_2)$ and $\Gamma \vdash \ell_2(S_2)$, we have $\Gamma(x) = (\tau, \ell_2)$ for some $\tau$.
%
From the definition of $\readset(S_1)$ and $x \in \readset(S_1)$, there must be an expression $E$ in $S_1$, such that $x \in \readset(E)$.
Suppose $\Gamma(E) = (\tau_E, \ell_E)$.
%
Now, $\Gamma(x) = (\tau, \ell_2)$, $x \in \readset(E)$, and $\Gamma \vdash E:(\tau_E, \ell_E)$, therefore $\ell_2 \leq \ell_E$ (Lemma~\ref{lem:vars_in_exps}).
%
But $\Gamma \vdash \ell_1(S_1)$, and $E$ occurs in $S_1$, therefore $\ell_E \leq \ell_1$ (Lemma~\ref{lem:exp_in_singleleveled_s}).
We have $\ell_2 \leq \ell_E \leq \ell_1$. But $\ell_1 < \ell_2$, which is a contradiction.
\end{proof}
\begin{restate}{Lemma~\ref{lem:same_sets_when_singlelevel}} ~
If $~\Gamma \vdash \ell(S)$, then $\readset_{\Gamma \vdash \ell}(S) = \readset(S)$ and $\assset_{\Gamma \vdash \ell}(S) = \assset(S)$.
\end{restate}
\begin{proof}
Suppose $\Gamma \vdash \ell(S)$. The first equality,
$\readset_{\Gamma \vdash \ell}(S) = \readset(S)$, follows by structural induction on $\readset_{\Gamma \vdash \ell}(S)$.
Furthermore, by definition of single-level statements, if $x \in \assset(S)$, then $\Gamma(x) = (\tau, \ell)$ for some $\tau$. Thus, by definition of $\assset_{\Gamma \vdash \ell}(S)$, we have that $\assset_{\Gamma \vdash \ell}(S) = \assset(S)$.
\end{proof}
\begin{lemma}[Associativity of $\eveq$] \label{lem:assoc}
$S_1;(S_2;S_3) \eveq (S_1;S_2);S_3$ for any $S_1$, $S_2$, and $S_3$.
\end{lemma}
\begin{proof}
By expanding the definition of statement equivalence (Definition~\ref{def:equiv}).
\end{proof}
\begin{lemma}[Congruence of $\eveq$] \label{lem:congr}
If $S_1 \eveq S_2$ then $S_1; S \eveq S_2; S$ and $S;S_1 \eveq S;S_2$ for any $S$.
\end{lemma}
\begin{proof}
By expanding the definition of statement equivalence (Definition~\ref{def:equiv}).
\end{proof}
\begin{lemma} \label{lem:forloops}
For any two states $s, s'$, and a statement $\kw{for}(x\;\kw{in}\;g_1:g_2)\;S$, suppose $n_1 = s(g_1)$, $n_2 = s(g_2)$ and $n_1 \leq n_2$. Then $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S) \Downarrow s'$ if and only if there exists $s_x$, such that $(s, x=n_1;S;x=(n_1+1);S\dots x=n_2;S) \Downarrow s_x$ and $s' = s_x[-x]$.
\end{lemma}
\begin{proof}
By induction on $n = n_2 - n_1$.
\end{proof}
\begin{lemma} \label{lem:moreassignments}
If $x \notin \assset(S_1)$ then $(x=n; S_1;S_2) \eveq (x=n; S_1; x=n; S_2)$.
\end{lemma}
\begin{proof}
By expanding the definition of statement equivalence (Definition~\ref{def:equiv}).
\end{proof}
\begin{lemma} \label{lem:loopreorder}
If $\,\Gamma \vdash \ell_1(S_1)$, $\Gamma \vdash \ell_2(S_2)$, $\ell_1 < \ell_2$, $\Gamma \vdash S_2;S_1 : \lev{data}$, and $x \notin \assset(S_2;S_1)$ then $x=i;S_2; x=j;S_1 \eveq x=j;S_1;x=i;S_2;x=j$ for all integers $i$ and $j$.
\end{lemma}
\begin{proof}
By expanding the definition of statement equivalence (Definition~\ref{def:equiv}), and using Lemma~\ref{lem:congr} and Lemma~\ref{lem:moreassignments}.
\end{proof}
\begin{restate}{Lemma~\ref{lem:commutativity}(Commutativity of sequencing single-level statements)} ~
If $~\singlelevelS{\ell_1}{S_1}$, $\singlelevelS{\ell_2}{S_2}$, $\Gamma \vdash S_2;S_1 : \lev{data}$ and $\ell_1 < \ell_2$ then $S_2; S_1; \eveq S_1; S_2;$
\end{restate}
\begin{proof}
Since $\Gamma \vdash S_2;S_1 : \lev{data}$, and $\lev{data} \leq \ell_1 < \ell_2$, it must be that $\readset_{\Gamma\vdash\ell_2}(S_2) \cap \assset_{\Gamma\vdash\ell_1}(S_1) = \emptyset$. By Lemma~\ref{lem:same_sets_when_singlelevel}, $\readset_{\Gamma\vdash\ell_2}(S_2) = \readset(S_2)$ and $\assset_{\Gamma\vdash\ell_1}(S_1) = \assset(S_1)$, as $S_1$ and $S_2$ are single-level of level $\ell_1$ and $\ell_2$ respectively. Therefore, $\readset(S_2) \cap \assset(S_1) = \emptyset$.
%
From $\Gamma \vdash \ell_1(S_1)$, $\Gamma \vdash \ell_2(S_2)$, $\ell_1 < \ell_2$ and by Lemma~\ref{lem:halfway}, we have $\readset(S_1) \cap \assset(S_2) = \emptyset$ and $\assset(S_1) \cap \assset(S_2) = \emptyset$.
Therefore, by Lemma~\ref{lem:reorder}, $S_2; S_1 \eveq S_1; S_2$.
\end{proof}
\begin{restate}{Theorem~\ref{th:shred} (Semantic Preservation of $\shred$)} ~
If $~\Gamma \vdash S:\lev{data} $ and $ S \shred[\Gamma] \shredded $ then $ \log p^*_{\Gamma \vdash S}(s) = \log p^*_{\Gamma \vdash (S_D; S_M; S_Q)}(s)$, for all $s \models \Gamma$.
\end{restate}
\begin{proof}
Note that if $S \eveq S'$ then $\log p^*_{\Gamma \vdash S}(s) = \log p^*_{\Gamma \vdash S'}(s)$ for all states $s \models \Gamma$.
Semantic preservation then follows from proving the stronger result $$S \shred \shredded \implies \Gamma \vdash S:\lev{data} \implies S \eveq (S_D; S_M; S_Q)$$ by rule induction on $S \shred \shredded$.
%
Let $$\Phi(S, S_D, S_M, S_Q) \deq
S \shred \shredded \implies \Gamma \vdash S:\lev{data} \implies S \eveq S_D;S_M;S_Q$$
Take any $S$, $S_D, S_M, S_Q$, and assume $S \shred \shredded$ and $\Gamma \vdash S: \lev{data}$.
\begin{itemize}
\item[\ref{Shred Skip}]
\begin{math}
\copyrule{}{\kw{skip} \shred (\kw{skip}, \kw{skip}, \kw{skip})}
\end{math}
For all $s$, $(s, \kw{skip}) \Downarrow s$, and also $(s, \kw{skip}; \kw{skip}; \kw{skip}) \Downarrow s$. Thus $\kw{skip} \eveq \kw{skip};\kw{skip};\kw{skip}$.
\item[\ref{Shred DataAssign}]
\begin{math}
\copyrule
{\Gamma(L) = (\_,\lev{data})}
{ L = E \shred (L = E, \kw{skip}, \kw{skip})}
\end{math} \\
For any state $s$, if $(s, L=E) \Downarrow s'$, then $(s, L=E;\kw{skip};\kw{skip}) \Downarrow s'$, and vice versa. Thus, $\Phi(L=E, L=E, \kw{skip}, \kw{skip})$ holds.
\item[\ref{Shred ModelAssign}]
\begin{math}
\copyrule
{\Gamma(L) = (\_,\lev{model})}
{ L = E \shred (\kw{skip}, L = E, \kw{skip})}
\end{math} \\
For any state $s$, if $(s, L=E) \Downarrow s'$, then $(s, \kw{skip};L=E;\kw{skip}) \Downarrow s'$, and vice versa. Thus, $\Phi(L=E, \kw{skip}, L=E, \kw{skip})$ holds.
\item[\ref{Shred GenQuantAssign}]
\begin{math}
\copyrule
{\Gamma(L) = (\_,\lev{genquant})}
{ L = E \shred (\kw{skip}, \kw{skip}, L = E)}
\end{math} \\
For any state $s$, if $(s, L=E) \Downarrow s'$, then $(s, \kw{skip};\kw{skip};L=E) \Downarrow s'$, and vice versa. Thus, $\Phi(L=E, \kw{skip}, \kw{skip}, L=E)$ holds.
\item[\ref{Shred Seq}] $S = (S_1; S_2)$. Suppose that $\Phi(\Gamma, S_1)$ and $\Phi(\Gamma, S_2)$, and assume $S_1 \shred \shredded[1]$, $S_2 \shred \shredded[2]$. Thus, $S_1 \eveq \shreddedseqsingle[1]$ and $S_2 \eveq \shreddedseqsingle[2]$. Now:
\begin{align*}
S_1; S_2 &\eveq (\shreddedseqsingle[1]);(\shreddedseqsingle[2]) && \text{from } \Phi(\Gamma, S_1) \text{ and } \Phi(\Gamma, S_2)\\
&\eveq (S_{D_1};S_{M_1});S_{D_2};S_{Q_1};(S_{M_2};S_{Q_2}) && \text{by Lemmas~\ref{lem:shredisleveled}, \ref{lem:commutativity}, \ref{lem:assoc} and~\ref{lem:congr}} \\
&\eveq (S_{D_1});S_{D_2};S_{M_1};(S_{Q_1};S_{M_2};S_{Q_2}) && \text{by Lemmas~\ref{lem:shredisleveled}, \ref{lem:commutativity}, \ref{lem:assoc} and~\ref{lem:congr}} \\
&\eveq (S_{D_1};S_{D_2};S_{M_1});S_{M_2};S_{Q_1};(S_{Q_2}) && \text{by Lemmas~\ref{lem:shredisleveled}, \ref{lem:commutativity}, \ref{lem:assoc} and~\ref{lem:congr}} \\
&\eveq (S_{D_1};S_{D_2});(S_{M_1};S_{M_2});(S_{Q_1};S_{Q_2}) &&
\end{align*}
As by \ref{Shred Seq}, $S \shred (S_{D_1};S_{D_2}), (S_{M_1};S_{M_2}), (S_{Q_1};S_{Q_2})$, it follows that $\Phi(\Gamma, S_1;S_2)$.
\item[\ref{Shred If}] $S = (\kw{if}(g)\; S_1\; \kw{else}\; S_2)$. Suppose that $\Phi(\Gamma, S_1)$ and $\Phi(\Gamma, S_2)$, and assume $S_1 \shred \shredded[1]$, $S_2 \shred \shredded[2]$. Thus, by \ref{Shred If}:
$$\kw{if}(g)\; S_1\; \kw{else}\; S_2 \shred
(\kw{if}(g)\; S_{D_1}\; \kw{else}\; S_{D_2}),
(\kw{if}(g)\; S_{M_1}\; \kw{else}\; S_{M_2}),
(\kw{if}(g)\; S_{Q_1}\; \kw{else}\; S_{Q_2})$$
Now take any two states $s$ and $s'$, such that $s \models \Gamma$ and $(s, S) \Downarrow s'$. Given that $\Gamma \vdash S : \lev{data}$, $\Gamma(g) = (\kw{bool}, \_)$ by \ref{If}. Therefore $s(g) = \kw{true}$ or $s(g) = \kw{false}$.
\begin{enumerate}
\item If $s(g) = \kw{true}$, it must be that $(s, S_1) \Downarrow s'$. Then:
\begin{align*}
& (s, \kw{if}(g)\; S_1\; \kw{else}\; S_2) \Downarrow s' && \text{by } \ref{Eval IfTrue} \\
& (s, (\shreddedseqsingle[1])) \Downarrow s' && \text{from } \Phi(\Gamma, S_1)\\
& (s, (\kw{if}(g)\; S_{D_1}\,\kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\,\kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\,\kw{else}\; S_{Q_2})) \Downarrow s' && 3 \times \ref{Eval IfTrue} \\
\end{align*}
\item If $s(g) = \kw{false}$, it must be that $(s, S_2) \Downarrow s'$. Then:
\begin{align*}
& (s, \kw{if}(g)\; S_1\; \kw{else}\; S_2) \Downarrow s' && \text{by } \ref{Eval IfFalse} \\
& (s, (\shreddedseqsingle[2])) \Downarrow s' && \text{from } \Phi(\Gamma, S_2)\\
& (s, (\kw{if}(g)\; S_{D_1}\,\kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\,\kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\,\kw{else}\; S_{Q_2})) \Downarrow s' && 3 \times \ref{Eval IfFalse} \\
\end{align*}
\end{enumerate}
Thus, $(s, \kw{if}(g)\; S_1\; \kw{else}\; S_2)) \Downarrow s' \implies (s, (\kw{if}(g)\; S_{D_1}\; \kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\; \kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\; \kw{else}\; S_{Q_2})) \Downarrow s'$. For the implication in the opposite direction:
\begin{enumerate}
\item If $s(g) = \kw{true}$, take any $s'$ such that $(s, (\kw{if}(g)\; S_{D_1}\,\kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\,\kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\,\kw{else}\; S_{Q_2})) \Downarrow s'$. Then:
\begin{align*}
& (s, (\shreddedseqsingle[1])) \Downarrow s' && \text{by } 3 \times \ref{Eval IfTrue} \\
& (s, S_1) \Downarrow s' && \text{from } \Phi(\Gamma, S_1)\\
& (s, \kw{if}(g)\; S_1\; \kw{else}\; S_2) \Downarrow s' && \text{by } \ref{Eval IfTrue}
\end{align*}
\item If $s(g) = \kw{false}$, take any $s'$ such that $(s, (\kw{if}(g)\; S_{D_1}\,\kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\,\kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\,\kw{else}\; S_{Q_2})) \Downarrow s'$. Then:
\begin{align*}
& (s, (\shreddedseqsingle[2])) \Downarrow s' && \text{by } 3 \times \ref{Eval IfFalse} \\
& (s, S_2) \Downarrow s' && \text{from } \Phi(\Gamma, S_2)\\
& (s, \kw{if}(g)\; S_1\; \kw{else}\; S_2) \Downarrow s' && \text{by } \ref{Eval IfFalse}
\end{align*}
\end{enumerate}
Thus, $(s, (\kw{if}(g)\; S_{D_1}\; \kw{else}\; S_{D_2};
\kw{if}(g)\; S_{M_1}\; \kw{else}\; S_{M_2};
\kw{if}(g)\; S_{Q_1}\; \kw{else}\; S_{Q_2})) \Downarrow s' \implies (s, \kw{if}(g)\; S_1\; \kw{else}\; S_2)) \Downarrow s'$.
Therefore, $\kw{if}(g)\; S_1\; \kw{else}\; S_2 \eveq (\kw{if}(g)\; S_{D_1}\; \kw{else}\; S_{D_2}); (\kw{if}(g)\; S_{M_1}\; \kw{else}\; S_{M_2}); (\kw{if}(g)\; S_{Q_1}\; \kw{else}\; S_{Q_2})$, and $\Phi(\Gamma, \kw{if}(g)\; S_1\; \kw{else}\; S_2)$.
\item[\ref{Shred For}] Suppose $S = (\kw{for}(x\;\kw{in}\;g_1:g_2)\;S') = LHS$. Then: \\
\begin{math}
\copyrule
{ S' \shred (S_D', S_M', S_Q') }
{LHS \shred (\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D'),
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M'),
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q')}
\end{math}\\
Take $RHS = (\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D');
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M');
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q')$
We must show $\Phi(S', S_D', S_M', S_Q') \implies LHS \eveq RHS$.
Assume $\Phi(S', S_D', S_M', S_Q')$, and consider $s, s'$, such that $(s, LHS) \Downarrow s'$ to show $(s, RHS) \Downarrow s'$.
Suppose $n_1 = s(g_1)$ and $n_2 = s(g_2)$. Then either $n_1 \leq n_2$ or $n_1 > n_2$:
\begin{enumerate}
\item Case $n_1 \leq n_2$.
Using Lemma~\ref{lem:forloops}, we have that there exists $s_x$, such that $(s, x=n_1;S';x=(n_1+1);S'\dots x=n_2;S') \Downarrow s_x$ and $s' = s_x[-x]$.
As $\Phi(S', S_D', S_M', S_Q')$, $S' \shred (S_D', S_M', S_Q')$ and $\Gamma \vdash S':\lev{data}$ (by \ref{For}), we have that $S' \eveq S_D';S_M';S_Q'$. Combined with the result from above and using Lemma~\ref{lem:congr}, this gives us $(s, x=n_1;S_D';S_M';S_Q';x=(n_1+1);S_D';S_M';S_Q'\dots x=n_2;S_D';S_M';S_Q') \Downarrow s_x$ and $s' = s_x[-x]$.
By Lemma~\ref{lem:moreassignments}, we then have $(s, x=n_1;S_D';x=n_1;S_M';x=n_1;S_Q';\dots x=n_2;S_D';x=n_2;S_M';x=n_2;S_Q') \Downarrow s_x$ and $s' = s_x[-x]$.
By Lemma~\ref{lem:shredisleveled} $\Gamma \vdash \lev{data}(S_D')$, $\Gamma \vdash \lev{model}(S_M')$, and $\Gamma \vdash \lev{genquant}(S_Q')$. Thus, we apply Lemma~\ref{lem:loopreorder} to get $(s, x=n_1;S_D'; \dots x=n_2; S_D'; x=n_1;S_M'; \dots x=n_2; S_M'; x=n_1;S_Q';\dots x=n_2;S_Q') \Downarrow s_x$ and $s' = s_x[-x]$.
By applying \ref{Eval Seq}, we split this into:
\begin{itemize}
\item $(s, x=n_1;S_D'; \dots x=n_2; S_D') \Downarrow s_{xd}$
\item $(s_{xd}, x=n_1;S_M'; \dots x=n_2; S_M') \Downarrow s_{xm}$
\item $(s_{xm},x=n_1;S_Q';\dots x=n_2;S_Q') \Downarrow s_x$
\end{itemize}
for some $s_{xd}$ and $s_{xm}$. By taking $s_d = s_{xd}[-x]$ and $s_m = s_{xm}[-x]$, and applying Lemma~\ref{lem:forloops}, we get:
\begin{itemize}
\item $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D') \Downarrow s_d$
\item $(s_{xd}, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M') \Downarrow s_m$
\item $(s_{xm}, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q') \Downarrow s'$
\end{itemize}
As $x \notin \readset(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M')$, $x \notin \readset(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q')$, $s_d = s_{xd}[-x]$ and $s_m = s_{xm}[-x]$, we also have:
\begin{itemize}
\item $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D') \Downarrow s_d$
\item $(s_d, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M') \Downarrow s_m$
\item $(s_m, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q') \Downarrow s'$
\end{itemize}
Therefore, by \ref{Eval Seq}: $(s, (\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D');
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M');
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q')) \Downarrow s'$, so $(s, LHS) \Downarrow s' \implies (s, RHS) \Downarrow s'$.
\item Case $n_1 > n_2$. By \ref{Eval ForFalse} $s' = s$. Also, $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D') \Downarrow s$, $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M') \Downarrow s$ and $(s, \kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q') \Downarrow s$. So by \ref{Eval Seq}, $(s, RHS) \Downarrow s$, and thus $(s, LHS) \Downarrow s' \implies (s, RHS) \Downarrow s'$.
\end{enumerate}
By assuming instead $s$ and $s'$, such that $(s, RHS) \Downarrow s'$, and reversing this reasoning, we also obtain $(s, RHS) \Downarrow s' \implies (s, LHS) \Downarrow s'$.
Therefore $(s, LHS) \Downarrow s' \iff (s, RHS) \Downarrow s'$, so $LHS \eveq RHS$.
\end{itemize}
\end{proof}
\section{Further discussion on semantics} \label{ap:sem}
\subsection{Semantics of Generated Quantities} \label{ap:gq_semantics}
In addition to defining random variables to be sampled using HMC, Stan also supports sampling using pseudo-random number generators. For example, a standard normal parameter $x$ can be sampled in two ways:
\begin{enumerate}
\item By declaring $x$ to be a parameter of the model, and giving it a prior:
\begin{lstlisting}
parameters { real x; }
model { x ~ normal(0, 1); }
\end{lstlisting}
\item Treating $x$ as a generated quantity and using a pseudo-random number generator:
\begin{lstlisting}
generated quantities { x = normal_rng(0, 1); }
\end{lstlisting}
\end{enumerate}
Option (1) will sample $x$ using HMC, which is not needed in this case. Option (2) is a much more efficient solution. Thus, a Stan user can explicitly optimise their program by specifying how HMC should be composed with forward (ancestral) sampling.
In the density-based semantics presented in this paper, we do not formalise this usage of pseudo-random number generators. We treat the function \lstinline|normal_rng(mu, sigma)| as any other function, ignoring its random nature. We define the semantics of a Stan program to be the unnormalised log posterior over parameters only --- $\log p^*(\params \mid \data)$. However, this semantics can be extended to cover the generated quantities $\mathbf{g}$ as well: $\log p^*(\params, \mathbf{g} \mid \data)$.
The easiest way to do that is simply to treat \lstinline|normal_rng| as another derived form:
\begin{display}[.5]{}
\clause{L = \kw{d_rng}(E_1, \dots E_n) \deq L \sim \kw{d}(E_1, \dots E_n)}{random number generation}
\end{display}
However, this causes a discrepancy with the current information-flow type system. Perhaps a more suitable treatment is as an assignment to another reserved variable, which holds a different density to that of \lstinline|target|:
\begin{display}[.5]{}
\clause{L = \kw{d_rng}(E_1, \dots E_n) \deq \kw{gen} = \kw{gen} + \kw{d_lpdf}(L, E_1, \dots E_n)}{random number generation}
\end{display}
The density-based semantics of a Stan program can then be defined as:
$$\log p^*(\params, \mathbf{g} \mid \data) = \log p^*(\params \mid \data) + \log p(\mathbf{g} \mid \params, \data)$$
where $\log p^*(\params \mid \data)$, as before, is given by the \lstinline|target| variable, and $\log p(\mathbf{g} \mid \params, \data)$ is given by \lstinline|gen|.
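To make the decomposition concrete, here is a minimal Python sketch of the two accumulators for a hypothetical program with one parameter $\theta \sim \normal(0,1)$ and one generated quantity \lstinline|g = normal_rng(theta, 2)|; the helper \lstinline|normal_lpdf| and all concrete values are our own illustration, not part of Stan or SlicStan:

```python
import math

def normal_lpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) at x
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

def log_densities(theta, g):
    # target accumulates log p*(theta | data);
    # gen accumulates log p(g | theta, data)
    target = 0.0
    gen = 0.0
    target += normal_lpdf(theta, 0.0, 1.0)  # theta ~ normal(0, 1)
    gen += normal_lpdf(g, theta, 2.0)       # g = normal_rng(theta, 2)
    return target, gen

target, gen = log_densities(0.5, 1.0)
log_joint = target + gen  # log p*(theta, g | data)
```

The sum of the two accumulators is exactly the extended density $\log p^*(\params, \mathbf{g} \mid \data)$ above.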
An interesting direction for future work is to extend the semantics and type system of SlicStan, so that modelling statements, such as \lstinline|x ~ normal(0, 1)| can be treated either as modifying the \lstinline|target| density, or random number generation, depending on the level of the variable \lstinline|x|. This can allow SlicStan programs to be optimised by automatically determining the most efficient way to compose HMC and forward sampling, based on the concrete model.
\subsection{Relation of Density-based Semantics to Sampling-based Semantics} \label{ap:sampling_semantics}
The density-based semantics of Stan and SlicStan given in this paper is inspired by the sampling-based semantics that \citet{HurNRS15} give to the imperative language \textsc{Prob}.
This section outlines the differences between the two semantics.
\subsubsection{Operational semantics relations }
The intuition behind the density-based semantics of Stan is that the relation $(s, S) \Downarrow s'$ specifies what the value of the (unnormalised) posterior is at a specific point in the parameter space. For $\params \subset s$ and $\data \subset s$, $p^*(\params \mid \data) = s'(\kw{target})$.
The intuition behind the operational semantics relation by \citet{HurNRS15}, $(s, S) \Downarrow^t (s', p)$, is that there is probability $p$ for the program $S$, started in initial state $s$, to sample a trace $t$ and terminate in state $s'$.
For programs with no observed values, and single probabilistic assignment (\lstinline|x ~ d1(...); x ~ d2(...)| is not allowed), we conjecture that if $(s, S) \Downarrow^t (s', p)$, then $((s\cup t), S) \Downarrow s'[\kw{target} \mapsto p]$, and $\params = t$.
\subsubsection{Difference in the meaning of $\sim$ }
In Stan, a model statement such as \lstinline|x ~ normal(0,1)| does not denote sampling, but a change to the target density. The value of $x$ remains the same; we only compute the standard normal density at the current value of $x$. This is also similar to the score operator of \citet{Staton2016}.
\begin{display}{Operational Density-based Semantics of Model Statements (Derived Rule)}
\quad
\staterule{Eval Model}
{ (s, E) \Downarrow V \quad (s, E_i) \Downarrow V_i \quad \forall i \in 1..n \quad V' = s(\kw{target}) + \kw{d_lpdf}(V, V_1, \dots, V_n)}
{ (s, E \sim \kw{d}(E_1,\dots,E_n)) \Downarrow s[\kw{target} \mapsto V']}
\end{display}
In the sampling-based semantics of \citet{HurNRS15}, on the other hand, \lstinline|x ~ normal(0,1)| is understood as ``we sample a standard normal variable and assign its value to $x$.''
\begin{display}{Operational Sampling-based Semantics of Model Statements \cite{HurNRS15}}
\quad
\staterule{Sampling Model}
{ v \in \text{Val} \quad p = \kw{Dist}(s(\overline{\theta}))(x)}
{ (s, x \sim \kw{Dist}(\overline{\theta})) \Downarrow^{x\mapsto [v]} (s[x \mapsto v], p)}
\end{display}
In this sampling-based semantics, variables can be sampled more than once, and we keep track of the entire trace of samples. In Stan's density-based semantics, modelling a variable more than once would mean modifying the target density more than once. For example, consider the program:
\begin{lstlisting}
x ~ normal(-5, 1);
x ~ normal(5, 1);
\end{lstlisting}
The difference between the density-based and sampling-based semantics is then as follows:
\begin{itemize}
\item \textbf{Density-based:} the program denotes the unnormalised density ${p^*(x) = \normal(x \mid -5, 1)\normal(x \mid 5, 1)}$.
\item \textbf{Sampling-based:} the program denotes the unnormalised density $p^*(x^{(1)}, x^{(2)}) = \normal(x^{(1)} \mid -5, 1)\normal(x^{(2)} \mid 5, 1)$, with $x^{(1)}$ and $x^{(2)}$ being variables denoting the value of $x$ we sampled the first and second time in the program respectively.
\end{itemize}
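The contrast can be made concrete with a small Python sketch (an illustration under our reading of the two semantics, not an implementation of either system):

```python
import math

def normal_pdf(x, mu, sigma):
    # density of Normal(mu, sigma) at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Density-based: both statements score the *same* value of x,
# so the unnormalised density is over a single variable.
def density_based(x):
    return normal_pdf(x, -5.0, 1.0) * normal_pdf(x, 5.0, 1.0)

# Sampling-based: each ~ draws a fresh sample, so the trace
# contains two values and the density is over both of them.
def sampling_based(x1, x2):
    return normal_pdf(x1, -5.0, 1.0) * normal_pdf(x2, 5.0, 1.0)
```

Under the density-based reading the mode lies at $x = 0$, between the two components, whereas under the sampling-based reading the mode is at $(x^{(1)}, x^{(2)}) = (-5, 5)$.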
\subsubsection{Difference in the meaning of \kw{observe}}
As mentioned previously, we presume that for \textit{single probabilistic assignment} programs that contain only \textit{unobserved} parameters, our density-based semantics is equivalent to the sampling-based semantics of \citet{HurNRS15}. However, the two semantics treat observations differently.
Consider the following SlicStan program, where $y$ is an observed variable:
\begin{lstlisting}
real mu ~ normal(0, 1);
data real y ~ normal(mu, 1);
\end{lstlisting}
The density-based semantics of this program is a function of $\mu$: $$p^*(\mu \mid y = v) = \normal(\mu \mid 0, 1)\normal(v \mid \mu, 1)$$ where $v$ is some concrete value for the data $y$, which is supplied externally.
The corresponding \textsc{Prob} program is:
\begin{lstlisting}
double mu ~ normal(0, 1);
double y ~ normal(mu, 1);
observe(y = v);
\end{lstlisting}
The data $v$ is encoded in the program, and the sampling-based semantics is a function of $\mu$ and $y$: $$p^*(\mu, y) = \begin{cases}
\normal(\mu \mid 0, 1)\normal(y \mid \mu, 1), & \text{if}\ y=v \\
0, & \text{otherwise}
\end{cases}$$
Intuitively, the operational sampling-based semantics defines how to sample the variables $\mu$ and $y$, and then reject the run if $y \neq v$. This introduces a zero-probability conditioning problem when working with continuous variables, and fails in practice.
The operational density-based semantics of this paper puts SlicStan's observations closer to the idea of soft constraints. Using the \textit{score} operator of \citet{Staton2016}, we can write:
\begin{lstlisting}
let x ~ normal(0,1) in
score(density_normal(y, (x, 1)));
\end{lstlisting}
Once again, $y$ has some concrete value $v$, and the score operator calculates the density of $y$ at $v$. \citet{Staton2016} make the score operator part of their metalanguage, while we build it into the density-based semantics itself.
\section{Elaborating and shredding if or for statements} \label{ap:shred}
This section demonstrates with an example the elaboration and shredding of \lstinline|if| and \lstinline|for| statements.
Consider the following SlicStan program:
\begin{lstlisting}
data real x;
real +++data+++ d;
real +++model+++ m;
if(x > 0){
d = 2 * x;
m ~ normal(d, 1);
}
\end{lstlisting}
The body of the \lstinline|if| statement contains an assignment to a \lev{data} variable (\lstinline|d = 2 * x|), and a model statement (\lstinline|m ~ normal(d, 1)|). The former belongs to the \lstinline|transformed data| block of a Stan program, while the latter belongs to the \lstinline|model| block. We need to shred the entire body of the \lstinline|if| statement into several \lstinline|if| statements, each of which contains statements of a single level only.
Firstly, the elaboration step ensures that the guard of each \lstinline|if| statement (and the bounds of each \lstinline|for| loop) is a fresh boolean variable, \lstinline|g|, which is not modified anywhere in the program:
\begin{lstlisting}
g = (x > 0);
if(g){
d = 2 * x;
m ~ normal(d, 1);
}
\end{lstlisting}
Then the shredding step can copy the \lstinline|if| statement at each level, without changing the meaning of the original program:
\begin{lstlisting}
$S_D =~\,$g = (x > 0);
if(g){ d = 2 * x; }
$S_M =~$if(g){ m ~ normal(d, 1); }
$S_Q =~$skip
\end{lstlisting}
Finally, this translates to the Stan program:
\begin{lstlisting}
data {
real x;
}
transformed data {
real d;
bool g = (x > 0);
if(g){ d = 2 * x; }
}
parameters {
real m;
}
model {
if(g){ m ~ normal(d, 1); }
}
\end{lstlisting}
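The shredding step itself can be thought of as a partition-by-level pass over the elaborated program. The following Python sketch uses a hypothetical representation of an elaborated program as a list of (level, statement-text) pairs; it illustrates the idea only, and is not the actual type-directed transformation:

```python
# Levels, in the order of the corresponding Stan blocks.
LEVELS = ("data", "model", "genquant")

def shred(stmts):
    """Partition a statement sequence into one slice per level,
    preserving the original statement order within each slice."""
    slices = {lev: [] for lev in LEVELS}
    for lev, text in stmts:
        slices[lev].append(text)
    return slices["data"], slices["model"], slices["genquant"]

# The elaborated example from above, with the if copied at each level.
program = [
    ("data", "g = (x > 0);"),
    ("data", "if(g){ d = 2 * x; }"),
    ("model", "if(g){ m ~ normal(d, 1); }"),
]
s_d, s_m, s_q = shred(program)
```

Here \lstinline|s_d|, \lstinline|s_m| and \lstinline|s_q| correspond to the statements that end up in the \lstinline|transformed data|, \lstinline|model| and \lstinline|generated quantities| blocks respectively.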
\section{Non-centred Reparameterisation} \label{ap:ncp}
\emph{Reparameterising} a model means expressing it in terms of different parameters, so that the original parameters can be recovered from the new ones. Reparameterisation plays a key role in optimising some models for MCMC inference, as it can transform a posterior distribution that is difficult to sample from in practice into a flatter distribution that is easier to sample from.
To show the importance of one such reparameterisation technique, the non-centred reparameterisation, consider the pathological \emph{Neal's Funnel} example, which was chosen by \citet{Funnel} to demonstrate the difficulties Metropolis--Hastings runs into when sampling from a distribution with strong non-linear dependencies. The model defines a density over variables $x$ and $y$,\footnotemark such that:
\footnotetext{For simplicity, we consider a 2-dimensional version of the funnel, as opposed to the original 10-dimensional version.}
$$y \sim \normal(0,3) \qquad\qquad x \sim \normal(0, e^{\frac{y}{2}})$$
The density has the form of a funnel (thus the name \emph{``Neal's Funnel''}), with a very sharp neck, as shown in \autoref{fig:neals}. We can implement the model in a straightforward way (centred parameterisation), as shown on the left below.
\vspace{-8pt}\begin{multicols}{2}
\textbf{Centred parameterisation in Stan}
\vspace{1cm}
\begin{lstlisting}[basicstyle=\small]
parameters {
real y;
real x;
}
model {
y ~ normal(0, 3);
x ~ normal(0, exp(y/2));
}
\end{lstlisting}
\vspace{4cm}
\textbf{Non-centred parameterisation in Stan}
\begin{lstlisting}[basicstyle=\small]
parameters {
real y_std;
real x_std;
}
transformed parameters {
real y = 3.0 * y_std;
real x = exp(y/2) * x_std;
}
model {
y_std ~ normal(0, 1); // \!implies y $\sim \normal(0, 3)$
x_std ~ normal(0, 1); // \!implies x $\sim \normal(0, e^{(y/2)})$
}
\end{lstlisting}
\end{multicols}\vspace{-8pt}
However, in that case, Stan's sampler has trouble obtaining samples from the neck of the funnel, because there exists a strong non-linear dependency between $x$ and $y$, and the posterior geometry is difficult for the sampler to explore well (see \autoref{fig:nealsineff}).
\begin{figure}[!b]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{plots/neals_naive.png}
\caption{$24,000$ samples obtained using Stan (default settings) for the \emph{inefficient} form of Neal's Funnel.}
\label{fig:nealsineff}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{plots/neals.png}
\caption{$24,000$ samples obtained using Stan (default settings) for the \emph{efficient} form of Neal's Funnel.}
\label{fig:nealseff}
\end{subfigure}
\vspace{8pt} \\
\begin{subfigure}[b]{0.7\textwidth}
\includegraphics[width=\textwidth]{plots/neals_actual.png}
\caption{The actual log density of Neal's Funnel. Dark regions are of high density (log density greater than $-8$).}
\label{fig:neals}
\end{subfigure}
\caption{Sampling from Neal's Funnel: samples from the centred parameterisation (a), samples from the non-centred parameterisation (b), and the actual log density (c).}
\end{figure}
Alternatively, we can reparameterise the model, so that the model parameters are changed from $x$ and $y$ to the standard normal parameters $x^{(std)}$ and $y^{(std)}$, and the original parameters are recovered using shifting and scaling:
$$y^{(std)} \sim \normal(0, 1) \qquad\qquad x^{(std)} \sim \normal(0, 1) \qquad\qquad y = y^{(std)} \times 3 \qquad\qquad x = x^{(std)} \times e^{\frac{y}{2}}$$
This reparameterisation, called non-centred parameterisation, is a special case of a more general transform introduced by \citet{Reparam}.
As shown on the right above, the non-centred model is longer and less readable. However, it performs much better than the centred one. \autoref{fig:nealseff} shows that by reparameterising the model, we are able to explore the tails of the density better than with the straightforward implementation.
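The relationship between the two parameterisations can be sketched in a few lines of Python (a forward-sampling illustration; the helper names are ours, and this is not how Stan itself evaluates the model):

```python
import math
import random

def noncentred(y_std, x_std):
    # Recover the original parameters from the standard normal ones.
    y = 3.0 * y_std
    x = math.exp(y / 2.0) * x_std
    return y, x

def centred_sample(rng):
    # Direct (centred) forward sampling: y ~ N(0,3), x ~ N(0, exp(y/2)).
    y = rng.gauss(0.0, 3.0)
    x = rng.gauss(0.0, math.exp(y / 2.0))
    return y, x

def noncentred_sample(rng):
    # Non-centred forward sampling: scale and shift standard normals.
    return noncentred(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))

rng = random.Random(0)
samples = [noncentred_sample(rng) for _ in range(1000)]
```

Both samplers target the same joint distribution over $(y, x)$; the non-centred one simply moves the difficult geometry into a deterministic transform of well-behaved standard normal parameters.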
Neal's Funnel is a typical example of the dependencies that priors in hierarchical models could have. The example demonstrates that in some cases, especially when there is little data available, using non-centred parameterisation could be vital to the performance of the inference algorithm. The centred to non-centred parameterisation transformation is therefore common to Stan models, and is extensively described by the \citet{StanManual} as a useful technique.
\section{Examples} \label{ap:examples}
\subsection{Neal's Funnel}
We continue with the Neal's funnel example from Appendix~\ref{ap:ncp}, to demonstrate the usage of user-defined functions in SlicStan.
Reparameterising a model that involves a (centred) Gaussian variable $x \sim \normal(\mu, \sigma)$ requires introducing a new parameter $x^{(std)}$. Therefore, a non-centred reparameterisation function cannot be defined in Stan, as Stan user-defined functions cannot declare new parameters. In SlicStan, on the other hand, reparameterising the model to a non-centred parameterisation can be implemented by simply calling the function \lstinline|my_normal|.
Below is the Neal's funnel in SlicStan (left) and Stan (right).
\begin{minipage}[t][6cm][t]{\linewidth}
\vspace{1pt}
\begin{multicols}{2}
\centering
\textbf{``Neal's Funnel'' in SlicStan}
\vspace{1cm}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
real my_normal(real m, real s) {
real std ~ normal(0, 1);
return s * std + m;
}
real y = my_normal(0, 3);
real x = my_normal(0, exp(y/2));
\end{lstlisting}
\vspace{4cm}
\textbf{``Neal's Funnel'' in Stan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
parameters {
real y_std;
real x_std;
}
transformed parameters {
real y = 3.0 * y_std;
real x = exp(y/2) * x_std;
}
model {
y_std ~ normal(0, 1);
x_std ~ normal(0, 1);
}
\end{lstlisting}
\end{multicols}
\end{minipage}
The non-centred SlicStan program (left) is longer than its centred version only because of the function definition. In comparison, Stan requires defining the new parameters \lstinline|x_std| and \lstinline|y_std| (lines 2,3), moving the declarations of \lstinline|x| and \lstinline|y| to the \lstinline|transformed parameters| block (lines 6,7), defining them in terms of the parameters (lines 8,9), and changing the definition of the joint density accordingly (lines 12,13).
We also present the ``translated'' Neal's Funnel model, as it would be output by an implemented compiler. We notice a major difference between the two Stan programs --- in one case the variables of interest $x$ and $y$ are defined in the \lstinline|transformed parameters| block, while in the other they are defined in the \lstinline|generated quantities| block. In an intuitive, centred parameterisation of this model, $x$ and $y$ are in fact the parameters. Therefore, it is much more natural to think of those variables as transformed parameters when using a non-centred parameterisation. However, as shown in \autoref{tab:blocks}, variables declared in the \lstinline|transformed parameters| block are re-evaluated at every leapfrog step, while those declared in the \lstinline|generated quantities| block are re-evaluated once per sample. This means that even though it is more intuitive to think of $x$ and $y$ as transformed parameters (original Stan program), declaring them as generated quantities where possible results in a better optimised inference algorithm in the general case.
The original Stan code has some advantages that the code translated from SlicStan does not have.
The original version is considerably shorter than the translated one. This is because it lacks the additional variables \lstinline|m|, \lstinline|mp|, \lstinline|s|, \lstinline|sp|, \lstinline|ret|, and \lstinline|retp|, which are a consequence of statically unrolling the function calls in the elaboration step. When using SlicStan, the produced Stan program acts as an intermediate representation of the probabilistic program, meaning that the reduced readability of the translation is not necessarily problematic. However, the presence of the additional variables can also, in some cases, lead to slower inference. This problem can be tackled by introducing standard optimising compiler techniques, such as variable and common subexpression elimination.
\begin{minipage}[t][14cm][t]{\linewidth}
\vspace{12pt}
\textbf{``Neal's Funnel'' translated to Stan}
\vspace{6pt}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor}]
transformed data {
real m;
real mp;
m = 0;
mp = 0;
}
parameters {
real x_std;
real x_stdp;
}
model{
x_std ~ normal(0, 1);
x_stdp ~ normal(0, 1);
}
generated quantities {
real s;
real sp;
real ret;
real retp;
real x;
real y;
s = 3;
ret = s * x_std + m;
y = ret;
sp = exp(y * 0.5);
retp = (sp * x_stdp + mp);
x = retp;
}
\end{lstlisting}
\end{minipage}
Moreover, we notice the names of the new parameters in the translated code: \lstinline|x_std| and \lstinline|x_stdp|. The names are important, as they are part of the output of the sampling algorithm. Unlike in Stan, with the user-defined-function version of Neal's Funnel in SlicStan, the programmer does not have control over the names of the newly introduced parameters. One can argue that the user was not interested in those parameters in the first place (as they are solely used to reparameterise the model for more efficient inference), so it does not matter that their names are not descriptive. However, if the user wants to debug their model, the output from the original Stan model would be more useful than that of the translated one.
\newpage
\subsection{Cockroaches} \label{ssec:roaches}
The ``Cockroaches'' example is described by \citet[p.~161]{GelmanHill}, and it concerns measuring the effects of integrated pest management on reducing cockroach numbers in apartment blocks. They use \emph{Poisson regression} to model the number of caught cockroaches $y_i$ in a single apartment $i$, with exposure $u_i$ (the number of days that the apartment had cockroach traps in it), and regression predictors:
\begin{itemize}
\item the pre-treatment cockroach level $r_i$;
\item whether the apartment is in a senior building (restricted to the elderly), $s_i$; and
\item the treatment indicator $t_i$.
\end{itemize}
In other words, with $\beta_0, \beta_1, \beta_2, \beta_3$ being the regression parameters, we have:
$$y_i \sim Poisson(u_i \exp(\beta_0 + \beta_1r_i + \beta_2s_i + \beta_3t_i))$$
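For concreteness, the event rate and the log likelihood contribution of a single apartment can be written out as follows (a Python sketch with hypothetical inputs; the helper names are ours):

```python
import math

def rate(u_i, r_i, s_i, t_i, beta):
    # Poisson rate for apartment i: exposure times exp of the linear predictor.
    b0, b1, b2, b3 = beta
    return u_i * math.exp(b0 + b1 * r_i + b2 * s_i + b3 * t_i)

def poisson_lpmf(y, lam):
    # log P(Y = y) for Y ~ Poisson(lam)
    return y * math.log(lam) - lam - math.lgamma(y + 1)

# Example: exposure of 3 trap-days, all predictors zero, intercept log(2).
lam = rate(3.0, 0.0, 0.0, 0.0, (math.log(2.0), 0.0, 0.0, 0.0))
```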
After specifying their model this way, Gelman and Hill simulate a replicated dataset $\mathbf{y}_{rep}$, and compare it to the actual data $\mathbf{y}$ to find that the variance of the simulated dataset is much lower than that of the real dataset. In statistics, this is called \emph{overdispersion}, and it is often encountered when fitting models based on single-parameter distributions,\footnotemark~such as the Poisson distribution. A better model for this data would be one that includes an \emph{overdispersion parameter} $\boldsymbol{\lambda}$ that can account for the greater variance in the data.
\footnotetext{In a distribution specified by a single parameter $\alpha$, the mean and variance both depend on $\alpha$, and are thus not independent.}
The next page shows the ``Cockroaches'' example before (ignoring the red lines) and after (including the red lines) adding the overdispersion parameter, in both SlicStan (left) and Stan (right). Similarly to before, SlicStan gives us more flexibility as to where the statements accounting for overdispersion can be added.
Stan, on the other hand, has to introduce an entirely new block to this program --- \lstinline|transformed parameters|.
\newpage
\begin{minipage}[t][14cm][t]{\linewidth}
\vspace{1pt}
\begin{multicols}{2} \label{roaches}
\centering
\textbf{``Cockroaches'' in SlicStan}
\vspace{1cm}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor}]
data int N;
data real[N] exposure2;
data real[N] roach1;
data real[N] senior;
data real[N] treatment;
real[N] log_expo$\,$=$\,$log(exposure2);
real[4] beta;
data int[N] y
~ poisson_log$\footnotemark$(log_expo + beta[1]
$\,$+ beta[2] * roach1
$\,$+ beta[3] * treatment
$\,$+ beta[4] * senior);
\end{lstlisting}
\vspace{4cm}
\textbf{``Cockroaches'' in Stan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor}]
data {
int N;
real[N] exposure2;
real[N] roach1;
real[N] senior;
real[N] treatment;
int y[N];
}
transformed data {
real[N] log_expo$\,$=$\,$log(exposure2);
}
parameters {
real[4] beta;
}
model {
y ~ poisson_log$^{\ref{note2}}$(log_expo + beta[1]
$\,$+ beta[2] * roach1
$\,$+ beta[3] * treatment
$\,$+ beta[4] * senior);
}
\end{lstlisting}
\end{multicols} \vspace{-18pt}
\begin{center}Example adapted from \url{https://github.com/stan-dev/example-models/}.\end{center}
\end{minipage}
\footnotetext{\label{note2}Stan's \lstinline|poisson_log| is a numerically stable way to model a Poisson variable where the event rate is $e^{\alpha}$ for some $\alpha$.}
\newpage
\subsection{Seeds}
Next, we take the ``Seeds'' example introduced by \citet[p.~300]{BUGSBook} in ``\emph{The BUGS Book}''. In this example, we have $I$ plates, with plate $i$ having a total of $N_i$ seeds on it, $n_i$ of which have germinated. Moreover, each plate $i$ has one of two types of seed $x_1^{(i)}$, and one of two types of root extract $x_2^{(i)}$. We are interested in modelling the number of germinated seeds based on the type of seed and root extract, which we do in two steps. Firstly, we model the number of germinated seeds with a Binomial distribution, whose success probability is the probability of a single seed germinating:
$$n_i \sim Binomial(N_i, p_i)$$
We model the probability of a single seed on plate $i$ germinating as the output of a logistic regression, with the type of seed and root extract as input variables:
$$p_i = \sigma(\alpha_0 + \alpha_1x_1^{(i)} + \alpha_2x_2^{(i)} + \alpha_{12}x_1^{(i)}x_2^{(i)} + \beta^{(i)})$$
In the above, $\alpha_0, \alpha_1, \alpha_2, \alpha_{12}$ and $\beta^{(i)}$ are parameters of the model, with $\beta^{(i)}$ allowing for \emph{over-dispersion} (see \autoref{ssec:roaches}).
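The second step is an ordinary logistic regression; as a Python sketch (the helper names and any parameter values are ours, for illustration only):

```python
import math

def sigmoid(z):
    # logistic function: sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def germination_prob(x1, x2, a0, a1, a2, a12, b):
    # Success probability for one plate, including the
    # per-plate overdispersion term b.
    return sigmoid(a0 + a1 * x1 + a2 * x2 + a12 * x1 * x2 + b)
```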
The next page shows the ``Seeds'' model written in SlicStan (left) and in Stan (right). The Stan code was adapted from the example models listed on Stan's GitHub page.
As before, we see that SlicStan's code is shorter than that of Stan. It also allows for more flexibility in the order of declarations and definitions, making it possible to keep related statements together (e.g.\ lines 14 and 15 of the example written in SlicStan). Once again, SlicStan provides more abstraction, as the programmer does not have to specify how each variable of the model should be treated by the underlying inference algorithm. Instead, the compiler determines this automatically when translating the program to Stan.
\newpage
\begin{minipage}[t][14cm][t]{\linewidth}
\begin{multicols}{2} \label{seeds}
\centering
\textbf{``Seeds'' in SlicStan}
\vspace{3cm}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor}]
data int I;
data int[I] n;
data int[I] N;
data real[I] x1;
data real[I] x2;
real[I] x1x2 = x1 .* x2;
real alpha0 ~ normal(0.0,1000);
real alpha1 ~ normal(0.0,1000);
real alpha2 ~ normal(0.0,1000);
real alpha12 ~ normal(0.0,1000);
real tau ~ gamma(0.001,0.001);
real sigma = 1.0 / sqrt(tau);
real[I] b ~ normal(0.0, sigma);
n ~ binomial_logit$\footnotemark$(N, alpha0
$\qquad\qquad$ + alpha1 * x1
$\qquad\qquad$ + alpha2 * x2
$\qquad\qquad$ + alpha12 * x1x2
$\qquad\qquad$ + b);
\end{lstlisting}
\vspace{5cm}
\textbf{``Seeds'' in Stan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor}]
data {
int I;
int n[I];
int N[I];
real[I] x1;
real[I] x2;
}
transformed data {
real[I] x1x2 = x1 .* x2;
}
parameters {
real alpha0;
real alpha1;
real alpha12;
real alpha2;
real tau;
real[I] b;
}
transformed parameters {
real sigma = 1.0 / sqrt(tau);
}
model {
alpha0 ~ normal(0.0,1000);
alpha1 ~ normal(0.0,1000);
alpha2 ~ normal(0.0,1000);
alpha12 ~ normal(0.0,1000);
tau ~ gamma(0.001,0.001);
b ~ normal(0.0, sigma);
n ~ binomial_logit$^{\ref{note1}}$(N, alpha0
$\qquad\qquad$ + alpha1 * x1
$\qquad\qquad$ + alpha2 * x2
$\qquad\qquad$ + alpha12 * x1x2
$\qquad\qquad$ + b);
}
\end{lstlisting}
\end{multicols}
\begin{center}Example adapted from \url{https://github.com/stan-dev/example-models/}.\end{center}
\end{minipage}
\footnotetext{\label{note1}Stan's \lstinline|binomial_logit| distribution is a numerically stable way to use a logistic sigmoid in combination with a Binomial distribution.}
\section{Introduction}
\subsection{Background: Probabilistic Programming Languages and Stan}
Probabilistic programming languages \cite{GordonPP} are a concise notation for specifying probabilistic models, while abstracting the underlying inference algorithm.
There are many such languages, including BUGS \cite{BUGS}, JAGS \cite{JAGS}, Anglican \cite{Anglican}, Church \cite{Church}, Infer.NET \cite{InferNET}, Venture \cite{Venture}, Edward \cite{Edward} and many others.
Stan \cite{StanJSS}, with nearly 300,000 downloads of its R interface \cite{RStan}, is perhaps the most widely used probabilistic programming language.
Stan's syntax is designed to enable automatic compilation to an efficient \emph{Hamiltonian Monte Carlo} (HMC) inference algorithm \cite{HMCfirst}, which allows programs to scale to real-world projects in statistics and data science.
(For example, the forecasting tool Prophet \cite{Prophet} uses Stan.)
This efficiency comes at a price: Stan's syntax lacks the compositionality of other similar languages and systems, such as Edward \cite{Edward} and PyMC3 \cite{PyMC3}.
The design of Stan assumes that the programmer needs to organise their model into separate blocks, which correspond to different stages of the inference algorithm (preprocessing, sampling, postprocessing).
This compartmentalised syntax affects the usability of Stan: related statements may be separated in the source code,
and functions are restricted to only acting within a single compartment.
It is difficult to write complex Stan programs and encapsulate distributions and sub-model structures into re-usable libraries and routines.
\subsection{Goals and Key Insight}
Our goals are (1) to examine the principles and assumptions behind the probabilistic programming language Stan that help it bridge the gap between probabilistic modelling and black-box inference; and (2) to design a suitable abstraction that captures the statistical meaning of Stan's compartments, but allows for compositional and more flexible probabilistic programming language syntax.
Our key insight is that \emph{the essence of a probabilistic program in Stan is in fact a deterministic imperative program} that can be automatically sliced into the different compartments used in the current syntax for Stan.
It may come as a surprise that a probabilistic program is deterministic, but when performing Bayesian inference by sampling parameters, the probabilistic program serves to compute a deterministic score at a specific point of the parameter space.
An implication of this insight is that standard forms of procedural abstraction are easily adapted to Stan.
\subsection{The Insight by Example}
As a demonstration, and as a candidate for a future re-design of Stan, we present SlicStan\footnotemark --- a compositional, Stan-like language, which supports first-order functions.
\footnotetext{SlicStan (pronounced \emph{slick-Stan}) stands for ``Slightly Less Intensely Constrained Stan''.}
Below, we show an example of a Stan program (right), and the same program written in SlicStan (left). In both cases, the goal is to obtain samples from the joint distribution of the variables $y \sim \normal(0, 3)$ and $x \sim \normal(0, \exp(y/2))$, using auxiliary standard normal variables for performance. Working with such auxiliary variables, instead of defining the model in terms of $x$ and $y$ directly, can facilitate inference and is a standard technique. We give more details in \autoref{ssec:encaps} and Appendix~\ref{ap:ncp}.
\vspace{-8pt}
\begin{multicols}{2}
\centering
\textbf{SlicStan}
\begin{lstlisting}
real my_normal(real m, real s) {
real std ~ normal(0, 1);
return s * std + m;
}
real y = my_normal(0, 3);
real x = my_normal(0, exp(y/2));
\end{lstlisting}
\vspace{2.5cm}
\textcolor{white}{.}\\
\textbf{Stan}
\begin{lstlisting}
parameters {
real y_std;
real x_std;
}
transformed parameters {
real y = 3 * y_std;
real x = exp(y/2) * x_std;
}
model {
y_std ~ normal(0, 1);
x_std ~ normal(0, 1);
}
\end{lstlisting}
\end{multicols}
\vspace{-15pt}
In both programs, the aim is to obtain samples for the random variables $x$ and $y$, which are defined by scaling and shifting standard normal variables.
In SlicStan we do so by calling the function \lstinline|my_normal| twice, which defines a local parameter \lstinline|std| and encapsulates the transformation of each variable.
Stan, on the other hand, does not support functions that declare new parameters, because all parameters must be declared inside the \lstinline|parameters| block. We need to write out each transformation explicitly, also explicitly declaring each auxiliary parameter (\lstinline|x_std| and \lstinline|y_std|).
The SlicStan code is a conventional deterministic imperative program, where
the model statement \lstinline|std ~ normal(0,1)| is a derived form of an assignment to a reserved variable that holds the score at a particular point of the parameter space.
Due to the absence of blocks, SlicStan's syntax is compositional and more compact. Statements that would belong to different blocks of a Stan program can be interleaved, and no understanding of the performance implications of the different compartments is required.
Via an information-flow analysis, we automatically translate the program on the left to the one on the right.
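The reparameterisation that \lstinline|my_normal| encapsulates can be sketched in ordinary Python (our illustration, not part of either language): scaling and shifting a standard normal draw by $s$ and $m$ yields a draw from $\normal(m, s)$, so calling the function twice reproduces the joint distribution of $y$ and $x$ above.

```python
import random

def my_normal(rng, m, s):
    # std ~ normal(0, 1): the auxiliary standard normal parameter
    std = rng.gauss(0.0, 1.0)
    # return s * std + m: the non-centered transformation
    return s * std + m

rng = random.Random(0)
ys = [my_normal(rng, 0.0, 3.0) for _ in range(100_000)]
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
```

The empirical mean and variance of \lstinline|ys| approach $0$ and $9$, matching $y \sim \normal(0, 3)$ (the normal distribution here is parameterised by its standard deviation, as in Stan).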
\subsection{Core Contributions and Outline}
In short, this paper makes the following contributions:
\begin{itemize}
\item We formalise the syntax and semantics of a core subset of Stan (\autoref{sec:stan}). To the best of our knowledge, this is the first formal treatment of Stan, despite the popularity of the language.
\item We design SlicStan --- a compositional Stan-like language with first-order functions. We formalise an information-flow type system that captures the essence of Stan's compartmentalised syntax, give the formal semantics of SlicStan, and prove standard results for the calculus, including noninterference and type-preservation properties (\autoref{sec:slicstan}).
\item We give a formal procedure for translating SlicStan to Stan, and prove that it is semantics preserving (\autoref{sec:translate-slicstan-to-stan}).
\item We examine the usability of SlicStan compared to that of Stan, using examples (\autoref{sec:demo}).
\end{itemize}
\iftoggle{LONG}{
This paper also includes an appendix, which provides additional details, discussion and examples.
} { }
\iftoggle{SHORT}{
The extended version of this paper \cite{SlicStanArxiv} includes an appendix (referred to as Appendix here) with additional details, discussion and examples.
} { }
\section{Core Stan}\label{sec:stan}
Stan \cite{StanJSS} is a probabilistic programming language, whose syntax is similar to that of BUGS \cite{BUGS,BUGSBook}, and aims to be close to the model specification conventions used in the statistics community. This section gives the syntax (\autoref{ssec:stanstatements} and \autoref{ssec:stansyntax}) and semantics (\autoref{ssec:stanop} and \autoref{ssec:stansem}) of Core Stan, a core subset of the Stan language. The subset omits, for example, constrained data types, \lstinline|while| loops, random number generators, recursive user-defined functions, and local variables.
To the best of our knowledge, our work is the first to give a formal semantics to the core of Stan.
A full descriptive language specification can be found in the official reference manual \cite{StanManual}.
\subsection{Syntax of Core Stan Expressions and Statements} \label{ssec:stanstatements}
The building blocks of a Stan statement are expressions.
In Core Stan, expressions cover most of what the Stan manual specifies, including variables, constants, arrays and array elements, and function calls (of builtin functions).
We let $x$ range over variables.
L-values are expressions limited to array elements $x[E_1]\dots[E_n]$, where the case $n=0$ corresponds to a variable $x$.
Statements cover the core functionality of Stan, with the exception of \lstinline|while| statements, which we omit to make \emph{shredding} of SlicStan possible (see \autoref{ssec:slicsyntax} and \autoref{ssec:shred}). \vspace{-8pt}
\begin{multicols}{2}
\begin{display}[.35]{Core Stan Syntax of Expressions:}
\Category{E}{expression}\\
\entry{x}{variable}\\
\entry{c}{constant}\\
\entry{[E_1,...,E_n]}{array}\\
\entry{E_1[E_2]}{array element}\\
\entry{f(E_1,\dots,E_n)}{function call\footnotemark}\\
\clause{L ::= x[E_1]\dots[E_n] \quad n \geq 0}{L-value}
\end{display}
\begin{display}[.35]{Core Stan Syntax of Statements:}
\Category{S}{statement}\\
\entry{L = E}{assignment}\\
\entry{S_1; S_2}{sequence}\\
\entry{\kw{for}(x\;\kw{in}\;E_1:E_2)\;S}{for loop} \\
\entry{\kw{if}(E)\;S_1\,\kw{else}\;S_2}{if statement} \\
\entry{\kw{skip}}{skip}\\
\end{display}
\end{multicols} \vspace{-8pt}
\footnotetext{If $f$ is a binary operator, e.g.\,``$+$'', we write it in infix.}
We assume a set of builtin functions, ranged over by $f$.
We also assume a set of standard builtin continuous or discrete distributions, ranged over by \lstinline|d|.
Each continuous distribution \lstinline|d| has a corresponding builtin function \lstinline|d_lpdf|, which defines its log probability density function. In this paper, we omit discrete random variables for simplicity.
Defined like this, the syntax of Stan statements is one of a standard imperative language. What makes the language \emph{probabilistic} is the reserved variable \lstinline|target|, which holds the logarithm\footnotemark of the probability density function defined by the program (up to an additive constant),
evaluated at the point specified by the current values of the program variables.
For example, to define a Stan model with random variables, $\mu$ and $x$, where we assume the variables are normally distributed and $\mu \sim \normal(0, 1)$ and $x \sim \normal(\mu, 1)$, we write:
\footnotetext{Stan evaluates the unnormalised density in the $\log$ domain to ensure numerical stability and to simplify internal computations. We follow this style throughout the paper, and define the semantics in terms of $\log p^*$, instead of $p^*$.}
\vspace{-2pt}\begin{lstlisting}
target = normal_lpdf(mu, 0, 1) + normal_lpdf(x, mu, 1);$\footnotemark$
\end{lstlisting}\vspace{-4pt}
\footnotetext{We treat \lstinline|target| as a mutable program variable for simplicity. This slightly differs from the actual implementation of Stan, where \lstinline|target| does not allow for general lookup and update, but it is a special bit of state that can only be incremented.}
Here, \lstinline|normal_lpdf| is the log density of the normal distribution: $\log \normal(x|\,\mu, \sigma) = -\frac{(x-\mu)^2}{2\sigma^2} -\frac{1}{2}\log 2\pi\sigma^2$.
The value of \kw{target} is equal to the logarithm of the joint density over $\mu$ and $x$, $\log p(\mu, x)$, evaluated at the current values of the program variables \kw{mu} and \kw{x}.
Suppose $x$ is some known \textit{data}, and $\mu$ is an unknown \textit{parameter} of the model. We are interested in computing the \textit{posterior distribution} of $\mu$ given $x$, $p(\mu \mid x) \propto p(\mu, x) = \normal(x \mid \mu, 1)\normal(\mu \mid 0, 1)$. Stan directly encodes a function that calculates the value of the log posterior density (up to an additive constant), and stores it in \kw{target}.
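To make concrete that such a program is an ordinary deterministic computation, the following Python sketch (our illustration; the function mirrors the mathematical definition of \lstinline|normal_lpdf| above) evaluates the same score for fixed values of \lstinline|mu| and \lstinline|x|:

```python
import math

def normal_lpdf(x, mu, sigma):
    # log N(x | mu, sigma), as defined in the text
    return -((x - mu) ** 2) / (2 * sigma ** 2) \
           - 0.5 * math.log(2 * math.pi * sigma ** 2)

# target = normal_lpdf(mu, 0, 1) + normal_lpdf(x, mu, 1);
# once mu and x are fixed, the right-hand side is just a number:
mu, x = 0.5, 1.2
target = normal_lpdf(mu, 0.0, 1.0) + normal_lpdf(x, mu, 1.0)
```

Different values of \lstinline|mu| and \lstinline|x| give different scores; inference explores the parameter space by repeatedly evaluating this deterministic function.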
Thus, in addition to Stan's core statement syntax, we have a derived form for modelling statements:
\begin{display}[.58]{Derived Form for Model Statements:}
\clause{E \sim \kw{d}(E_1, \dots E_n) \deq \kw{target} = \kw{target} + \kw{d_lpdf}(E, E_1, \dots E_n)}{model statement}
\end{display}
In Stan, ``$\sim$'' is \emph{not} considered to mean ``draw a sample from'', but rather ``modify the joint distribution over parameters and data.''
This is also reflected by the semantics given in \autoref{ssec:stansem}.
\subsection{Operational Semantics of Stan Statements} \label{ssec:stanop}
Next, we define a standard big-step operational semantics for Stan expressions and statements:
\begin{display}[.2]{Big-step Relation}
\clause{ (s, E) \Downarrow V }{expression evaluation} \\
\clause{ (s, S) \Downarrow s'}{statement evaluation}
\end{display}
Here, $s$ and $s'$ are states, and values $V$ are the expressions conforming to the following grammar:
\begin{display}[0.4]{Values and States:}
\Category{V}{value}\\
\entry{c}{constant}\\
\entry{[V_1,\dots,V_n]}{array}\\
\clause{s ::= x_1 \mapsto V_1, \dots, x_n \mapsto V_n \quad x_i\textrm{ distinct}}{state (finite map from variables to values)}
\end{display}
The relation $\Downarrow$ is deterministic but partial, as we do not explicitly handle error states.
The purpose of the operational semantics is to define a density function in \autoref{ssec:stansem}, and any errors lead to the density being undefined.
In the rest of the paper, we use the notation for states $s = x_1 \mapsto V_1, \dots, x_n \mapsto V_n$:
\begin{itemize}
\item $s[x \mapsto V]$ is the state $s$, but where the value of $x$ is updated to $V$ if $x \in \dom(s)$, or the element $x \mapsto V$ is added to $s$ if $x \notin \dom(s)$.
\item $s[-x]$ is the state $s$, but where $x$ is removed from the domain of $s$ (if it was present).
\end{itemize}
We also define lookup and update operations on values:
\begin{itemize}
\item If $U$ is an $n$-dimensional array value for $n \geq 0$
and $c_1$, \dots, $c_n$ are suitable indexes into $U$,
then the \emph{lookup} $U[c_1]\dots[c_n]$ is the value in $U$ indexed by $c_1$, \dots, $c_n$.
\item If $U$ is an $n$-dimensional array value for $n \geq 0$
and $c_1$, \dots, $c_n$ are suitable indexes into $U$,
then the \emph{update} $U[c_1]\dots[c_n] := V$ is the array that is the same as $U$ except that the
value indexed by $c_1$, \dots, $c_n$ is $V$.
\end{itemize}
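For concreteness, the two operations on nested array values can be sketched in Python (our illustration; note that Python indices are 0-based, whereas Stan's are 1-based):

```python
def lookup(U, idx):
    # U[c1]...[cn]: read the element of a (possibly nested) array value;
    # with idx = () (the n = 0 case), the value U itself is returned
    for c in idx:
        U = U[c]
    return U

def update(U, idx, V):
    # U[c1]...[cn] := V: the array that is the same as U except that
    # the element indexed by c1, ..., cn is V (built functionally)
    if not idx:
        return V
    c, rest = idx[0], idx[1:]
    return U[:c] + [update(U[c], rest, V)] + U[c + 1:]

U = [[1, 2], [3, 4]]
U2 = update(U, (1, 0), 9)
```

Because \lstinline|update| is functional, the original value \lstinline|U| is left unchanged, matching the definition of update as "the array that is the same as $U$ except" at the given index.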
\vspace{3pt}
\begin{display}{Operational Semantics of Expressions:}
\squad
\staterule{Eval Const}
{ }
{ (s, c) \Downarrow c }\qquad
\staterule{Eval Var}
{ V = s(x) \quad x \in \dom(s)}
{ (s, x) \Downarrow V }\qquad
\staterule{Eval Arr}
{ (s, E_i) \Downarrow V_i \quad \forall i \in 1.. n }
{ (s, [E_1, \dots, E_n]) \Downarrow [V_1, \dots, V_n]}\qquad
\\[1.3ex]\squad
\staterule{Eval ArrEl}
{ (s, E_1) \Downarrow V \quad (s, E_2) \Downarrow c}
{ (s, E_1[E_2]) \Downarrow V[c]}\qquad
\staterule{Eval PrimCall}
{ (s, E_i) \Downarrow V_i \quad \forall i \in 1 \dots n \quad V = f(V_1, \dots, V_n)\footnotemark}
{ (s, f(E_1, \dots, E_n)) \Downarrow V}
\end{display}
\footnotetext{$f(V_1, \dots, V_n)$ means applying the builtin function $f$ on the values $V_1, \dots, V_n$.}
\begin{display}{Operational Semantics of Statements:}
\staterule[~(where $L=x[E_1{]}\dots[E_n{]}$)]{Eval Assign}
{ (s,E_i) \Downarrow V_i \quad \forall i \in 1..n \quad (s,E) \Downarrow V \quad
U = s(x) \quad
U' = (U[V_1]\dots[V_n] := V) }
{ (s, L=E) \Downarrow (s[x \mapsto U'])}
\\[1.3ex]
\staterule{Eval Seq}
{ (s, S_1) \Downarrow s' \quad (s', S_2) \Downarrow s''}
{ (s, S_1;S_2) \Downarrow s''}\qquad
\staterule{Eval IfTrue}
{ (s, E) \Downarrow \kw{true} \quad (s, S_1) \Downarrow s'}
{ (s, \kw{if}(E)\; S_1\; \kw{else}\; S_2) \Downarrow s'}\qquad
\staterule{Eval IfFalse}
{ (s, E) \Downarrow \kw{false} \quad (s, S_2) \Downarrow s'}
{ (s, \kw{if}(E)\; S_1\; \kw{else}\; S_2) \Downarrow s'}\qquad
\\[1.3ex]\squad
\staterule[\footnotemark]{Eval ForTrue}
{ (s, E_1) \Downarrow c_1 \quad (s, E_2) \Downarrow c_2 \quad c_1 \leq c_2 \quad (s[x \mapsto c_1], S) \Downarrow s' \quad (s'[-x], \kw{for}(x\;\kw{in}\;(c_1+1):c_2)\;S) \Downarrow s''}
{ (s, \kw{for}(x\;\kw{in}\;E_1:E_2)\;S) \Downarrow s'' } \qquad
\\[1.3ex]\squad
\staterule{Eval ForFalse}
{ (s, E_1) \Downarrow c_1 \quad (s, E_2) \Downarrow c_2 \quad c_1 > c_2}
{ (s, \kw{for}(x\;\kw{in}\;E_1:E_2)\;S) \Downarrow s } \qquad
\staterule{Eval Skip}
{ }
{ (s, \kw{skip}) \Downarrow s }\qquad
\end{display}
\footnotetext{To make shredding to Stan possible, Core Stan only supports \lstinline|for|-loops where the loop bounds do not change during execution: $E_2$ does not contain any variables that $S$ writes to. This differs from the more flexible loops implemented in Stan.}
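The rules above can be transcribed almost directly into an interpreter. The following Python sketch (our illustration; the statement constructors are hypothetical, expressions are represented as functions from states to values, and only the $n = 0$ variable case of \ref{Eval Assign} is covered) implements the relation $(s, S) \Downarrow s'$:

```python
def evalS(s, S):
    """Big-step evaluation of Core Stan statements: (s, S) => s'."""
    tag = S[0]
    if tag == "skip":                            # Eval Skip
        return s
    if tag == "assign":                          # Eval Assign (variables only)
        _, x, E = S
        s2 = dict(s); s2[x] = E(s); return s2
    if tag == "seq":                             # Eval Seq
        _, S1, S2 = S
        return evalS(evalS(s, S1), S2)
    if tag == "if":                              # Eval IfTrue / IfFalse
        _, E, S1, S2 = S
        return evalS(s, S1 if E(s) else S2)
    if tag == "for":                             # Eval ForTrue / ForFalse
        _, x, E1, E2, body = S
        for c in range(E1(s), E2(s) + 1):        # bounds evaluated up front
            s2 = dict(s); s2[x] = c              # s[x -> c]
            s2 = evalS(s2, body)
            s = {k: v for k, v in s2.items() if k != x}   # s'[-x]
        return s
    raise ValueError("unknown statement: %r" % (tag,))

# target = 0; for (i in 1:3) target = target + i
prog = ("seq",
        ("assign", "target", lambda s: 0.0),
        ("for", "i", lambda s: 1, lambda s: 3,
         ("assign", "target", lambda s: s["target"] + s["i"])))
out = evalS({}, prog)
```

Array-element assignment would additionally use the update operation defined earlier in this subsection. Note that, as in \ref{Eval ForTrue}, the loop variable is bound only within the body and removed from the state afterwards.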
\subsection{Syntax of Stan} \label{ssec:stansyntax}
A full Stan program consists of six program blocks,
each of which is optional. When present, the blocks must appear in the order shown below.
Each block has a different purpose and can reference variables declared in itself or previous blocks. Formally, we define a Stan program as a sequence of six blocks, each containing variable declarations or Stan statements, as shown next. We also present an example Stan program that contains all six blocks in \autoref{ssec:expert}.
\vspace{16pt}
\begin{display}[0.5]{Stan program:}
\Category{P}{Stan Program}\\
\begin{lstlisting}
data { $\decls{d}$ }
transformed data { $\decls{td}, \cmds{td}$ }
parameters { $\decls{p}$ }
transformed parameters { $\decls{tp}, \cmds{tp}$ }
model { $\cmds{m}$ }
generated quantities { $\decls{gq}, \cmds{gq}$ }
\end{lstlisting}
\end{display}
\vspace{-3pt}
Arrays in Stan are sized, but we do not include any static checks on array sizes in this paper.
\begin{display}[0.5]{Stan Types and Type Environment:}
\clause{\Gamma ::= x_1:\tau_1, \dots, x_n:\tau_n \quad \forall i \in 1\dots n \hquad x_i\textrm{ distinct}}{declarations} \\
\clause{\tau ::= \kw{real} \; | \; \kw{int} \; | \; \kw{bool} \; | \; \tau [n] }{type} \\
\clause{n}{size}
\end{display}
The size of an array, $n$, can be a number or a variable.
For simplicity, we treat $n$ as decorative and do not include checks on the sizes in the type system of Stan. However, the system can be extended to a lightweight dependent type system, similarly to Tabular as extended by \citet{MarcinDiss}.
Each program block in Stan has a different purpose as follows:
\begin{itemize}
\item \lstinline|data|: declarations of the input data.
\item \lstinline|transformed data|: definition of known constants and \textit{preprocessing} of the data.
\item \lstinline|parameters|: declarations of the parameters of the model.
\item \lstinline|transformed parameters|: declarations and statements defining transformations of the data and parameters.
\item \lstinline|model|: statements defining the distributions of random variables in the model.
\item \lstinline|generated quantities|: declarations and statements that do not affect inference, used for \textit{postprocessing}, or predictions for unseen data.
\end{itemize}
We define a conformance relation on states $s$ and typing environments $\Gamma$. A state $s$ conforms to an environment $\Gamma$, whenever $s$ provides values of the correct types for the variables given in $\Gamma$:
\vspace{-8pt}\begin{multicols}{2}
\begin{display}[-0.03]{Conformance Relation:}
\\
\clause{ s \models \Gamma }{state $s$ conforms to environment $\Gamma$}\\[2.5pt]
\end{display}
\begin{display}[.2]{Rule for the Conformance Relation:}
\hquad
\staterule{Stan State}
{ V_i \models \tau_i \quad \forall i \in I}
{(x_i \mapsto V_i)^{i \in I} \models (x_i : \tau_i)^{i \in I}}
\end{display}
\end{multicols} \vspace{-8pt}
Here, $V \models \tau$ denotes that the value $V$ is of type $\tau$, and has the following definition:
\begin{itemize}
\item $c \models \kw{int}$, if $c \in \mathbb{Z}$, $c \models \kw{real}$, if $c \in \mathbb{R}$, and $c \models \kw{bool}$ if $c \in \{\kw{true}, \kw{false}\}$.
\item $[V_1,\dots,V_n] \models \tau[m]$, if $\forall i \in 1\dots n. V_i \models \tau$.
\end{itemize}
We do not include any checks on array sizes in this paper; thus, we do not assume $n$ and $m$ are the same in this definition.
The evaluation relation is not defined on initial states that lead to array out-of-bounds errors.
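The definition of $V \models \tau$ translates directly into a checker. A Python sketch (our illustration; array types are represented by their element type only, since sizes are decorative):

```python
def conforms(V, tau):
    # V |= tau; an array type tau[n] is represented as ("array", tau),
    # dropping the size n, which is decorative and not checked
    if tau == "int":
        return isinstance(V, int) and not isinstance(V, bool)
    if tau == "real":
        # a constant in Z is also in R
        return isinstance(V, (int, float)) and not isinstance(V, bool)
    if tau == "bool":
        return isinstance(V, bool)
    if isinstance(tau, tuple) and tau[0] == "array":
        return isinstance(V, list) and all(conforms(v, tau[1]) for v in V)
    return False
```

The conformance of a whole state to an environment $\Gamma$ (rule \ref{Stan State}) is then the pointwise conjunction of these checks over the variables of $\Gamma$.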
\subsection{Density-based Semantics of Stan} \label{ssec:stansem}
Finally, we give the semantics of Stan in terms of the big-step relation from \autoref{ssec:stanop}. As the \citet{StanManual} explain:
\begin{quote}
A Stan program defines a statistical model through a conditional probability function $p(\theta \mid y,x)$, where $\theta$ is a sequence of modeled unknown values (e.g., model parameters, latent variables, $\dots$), $y$ is a sequence of modeled known values, and $x$ is a sequence of unmodeled predictors and constants (e.g., sizes, hyperparameters). (p.~22)
\end{quote}
More specifically, a Stan program is executed to evaluate a function on the data and parameters $\log p^*(\params \mid \data)$, for some given (and fixed) values of $\data$ and $\params$. This function encodes the log joint density of the data and parameters $\log p(\params, \data)$ up to an additive constant, and also equals
the log density of the posterior $\log p(\params \mid \data)$ up to an additive constant:
$$ \log p(\params \mid \data) = \log p(\params, \data) - \log p(\data) \propto \log p(\params, \data) \propto \log p^*(\params \mid \data)$$
The return value of $\log p^*(\params \mid \data)$ is stored in the reserved variable $\kw{target}$. We give the semantics of Core Stan through this \textit{unnormalised log posterior density function}.
Consider a Core Stan program $P$ defined as previously, and the statement $S = S_{td}; S_{tp}; S_{m}; S_{gq}$.
The semantics of $P$ is the unnormalised log posterior density function $\log p^*$ on parameters $\params$ given data $\data$ (where $\params \models \Gamma_p$ and $\data \models \Gamma_d$):
$$\log p^*\left( \params \mid \data \right) \deq s'[\kw{target}] \text{ if there is }s'\text{ such that }((\data, \params, \kw{target} \mapsto 0), S) \Downarrow s'$$
If there is no such $s'$ then the log density is undefined. Observe also that if such an $s'$ exists, it is unique, because the operational semantics is deterministic.
For example, suppose that $P$ specifies a simple model for the data array \lstinline|y|:
\begin{lstlisting}
data { int N; real[N] y; }
parameters { real mu; real sigma; }
model {
mu ~ normal(0, 1);
sigma ~ normal(0, 1);
for(i in 1:N){ y[i] ~ normal(mu, sigma); }
}
\end{lstlisting}
Suppose also that $\params = (\kw{mu} \mapsto \mu, \kw{sigma} \mapsto \sigma)$ and $\data = (\kw{N} \mapsto n, \kw{y} \mapsto \mathbf{y})$, for some $\mu$, $\sigma$, $n$, and a vector $\mathbf{y}$ of length $n$. The statement $S = S_{td}; S_{tp}; S_{m}; S_{gq}$ is then the body of the \lstinline|model| block as specified above.
Then $((\data, \params, \kw{target} \mapsto 0), S) \Downarrow s'$, with
$s'[\kw{target}] = \log\normal(\mu \mid 0, 1) + \log\normal(\sigma \mid 0, 1) + \sum_{i=1}^n\log\normal(y_i \mid \mu, \sigma)$.
This is precisely the log joint density of \lstinline|mu|, \lstinline|sigma| and \lstinline|y|, which equals the log posterior of \lstinline|mu| and \lstinline|sigma| given \lstinline|y| up to an additive constant.
The function $\log p^*\left( \params \mid \data \right)$ is \emph{not} a (log) density, but rather it encodes the logarithm of the density of the posterior up to an additive constant. Such unnormalised log density uniquely defines the log density $\log p\left( \params \mid \data \right)$:
$$\log p\left( \params \mid \data \right) = \log p^*\left( \params \mid \data \right) - \log Z(\data)\, \text{ where }\, Z(\data) = \int p^*(\params \mid \data)\,d\params$$
The value $Z(\data)$ is called the \emph{normalising constant} (it is a constant with respect to the variables $\params$ that the density is defined on). Computing $Z(\data)$ is in most cases intractable. Thus, many inference algorithms (including those of Stan) are designed to successfully approximate the posterior, relying only on being able to evaluate a function proportional to it: an \emph{unnormalised density function}, such as $\log p^*\left( \params \mid \data \right)$ above.
The goal of this paper is to formalise the statistical meaning of a Stan program, as given by the quotation from the reference manual above. This semantics concentrates on defining the unnormalised log posterior of parameters given data, but omits the fact that the values of the \lstinline|transformed parameters| and \lstinline|generated quantities| blocks are also part of the observable state.
Transformed parameters and generated quantities can be seen as variables that are generated using the function $g(\params, \data) \deq s'[\dom(\Gamma_{tp} \cup \Gamma_{gq})]$ for $s'$ defined as previously.
Appendix~\ref{ap:gq_semantics} discusses generated quantities in more detail, and we leave their full treatment for future work.
Moreover, Appendix~\ref{ap:sampling_semantics} discusses how this density-based semantics relates to other imperative probabilistic languages semantics, such as the sampling-based semantics of \textsc{Prob} \cite{HurNRS15}.
\subsection{Inference}
Executing a Stan program consists of generating samples from the \textit{posterior distribution} $p(\params \mid \data)$, as a way of performing \textit{Bayesian inference}.
The primary algorithm that Stan uses is the asymptotically exact Hamiltonian Monte Carlo (HMC) algorithm \cite{HMCfirst}, and more specifically, an enhanced version of the No-U-Turn Sampler (NUTS) \cite{NUTS, HMCConceptual}, an extension of HMC that adapts the path length.
HMC is a Markov Chain Monte Carlo (MCMC) method (see \citet{MCMCIain} for a review of MCMC). Similarly to other MCMC methods, it obtains samples $\{\params_i\}_{i=1}^{\infty}$ from the target distribution by using the latest sample $\params_n$ and a carefully designed transition function $\delta$ to generate a new sample $\params_{n+1} = \delta(\params_n)$.
When sampling from the posterior $p(\params \mid \data)$, HMC evaluates the unnormalised density $p^*(\params \mid \data)$ at several points in the parameter space at each step $n$.
To improve performance, HMC also uses the gradient of $\log p^*(\params \mid \data)$ with respect to $\params$.
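The gradient enters through the leapfrog integrator at the heart of each HMC transition. A minimal Python sketch of one leapfrog step (our illustration, not Stan's implementation; it assumes a gradient function for $\log p^*$, which Stan obtains by automatic differentiation):

```python
def leapfrog(theta, r, eps, grad_log_p):
    # half-step on the momentum r, full step on the position theta,
    # then another half-step on r
    r = [ri + 0.5 * eps * gi for ri, gi in zip(r, grad_log_p(theta))]
    theta = [ti + eps * ri for ti, ri in zip(theta, r)]
    r = [ri + 0.5 * eps * gi for ri, gi in zip(r, grad_log_p(theta))]
    return theta, r

# standard normal target: log p*(theta) = -theta^2 / 2, gradient -theta
grad = lambda th: [-t for t in th]

def hamiltonian(theta, r):
    # potential energy -log p* plus kinetic energy of the momentum
    return sum(t * t for t in theta) / 2 + sum(x * x for x in r) / 2

theta, r = [1.0], [0.5]
h0 = hamiltonian(theta, r)
for _ in range(100):
    theta, r = leapfrog(theta, r, 0.01, grad)
h1 = hamiltonian(theta, r)
```

The near-conservation of the Hamiltonian along the simulated trajectory is what gives HMC its high acceptance rates, and it is why each leapfrog requires evaluating the (gradient of the) unnormalised log density.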
\section{SlicStan} \label{sec:slicstan}
This section outlines the second key contribution of this work --- the design and semantics of SlicStan.
SlicStan is a probabilistic programming language, which aims to provide a more compositional alternative to Stan, while retaining Stan's efficiency and statement syntax natural to the statistics community. Thus, we design the language so that:
\begin{enumerate}
\item SlicStan statements are a superset of the Core Stan statements given in \autoref{sec:stan},
\item SlicStan programs contain no program blocks, and allow the interleaving of statements that would belong to different program blocks if the program was written in Stan, and
\item SlicStan supports first-order non-recursive functions.
\end{enumerate}
This results in a flexible syntax, that allows for better encapsulation and code reuse, similarly to Edward \cite{Edward} and PyMC3 \cite{PyMC3}.
The key idea behind SlicStan is to use \textit{information flow analysis} to optimise and transform the program to Stan code.
Secure information flow analysis has a long history, summarised by \citet{InfoFlowSurvey}, and \citet{InfoFlowSmithPrinciples}. It concerns systems where variables have one of several \textit{security levels}, and the aim is to disallow the flow of high-level information to a low-level variable, but allow other flows of information. For example, consider two security levels, \lev{public} and \lev{secret}. We want to forbid \lev{public} information to depend on \lev{secret} information.
Formally, the levels form a \textit{lattice} $\left(\{\lev{public}, \lev{secret}\}, <\right)$ with $\lev{public} < \lev{secret}$.
Secure information flow analysis is used to ensure that information flows only upwards in the lattice. This is formalized as the \textit{noninterference property} \cite{goguen1982security}: confidential data may not interfere with public data.
Looking back to the description of Stan's program blocks in \autoref{ssec:stansyntax}, as well as the Stan Manual, we identify three information levels in Stan: \lev{data}, \lev{model}, and \lev{genquant}. We assign one of these levels to each program block, as summarised by \autoref{tab:blocks}. `Chain', `sample' and `leapfrog' refer to stages of the Hamiltonian Monte Carlo sampling algorithm. Usually, Stan runs several chains to perform inference, where there are many samples per chain, and many leapfrogs per sample.
\begin{table}[h]
\centering
\begin{tabular}{@{}lll@{}}\toprule[1.2pt]
Block & Execution & Level \\\midrule
\lstinline|data| & --- & \lev{data} \\
\lstinline|transformed data| & per chain & \lev{data}\\
\lstinline|parameters| & --- & \lev{model} \\
\lstinline|transformed parameters| & per leapfrog & \lev{model} \\
\lstinline|model| & per leapfrog & \lev{model} \\
\lstinline|generated quantities| & per sample & \lev{genquant}\\
\bottomrule[1.2pt]
\end{tabular}
\caption{Program blocks in Stan. Adapted from \citet{StanTutorial}.}
\label{tab:blocks}
\vspace{-21pt}
\end{table}
Even though our insight about the three information levels comes from Stan, they are not tied to Stan's peculiarities. Variables at level \lev{data} are the known quantities in the statistical inference problem, that is, the data. Computations at this level can be seen as a form of \textit{preprocessing}.
Variables at level \lev{model} are unknown --- they are the quantities we wish to infer. Changing the \lev{model} variables or the dependencies between them changes the statistical model we are working with, which can have a huge effect on the quality of inference.
Finally, generated quantities are variables that \lev{data} and \lev{model} variables do not depend on, and computing them can be seen as a form of \textit{postprocessing}. All three are fundamental concepts of statistical inference and are not specific to Stan.
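These three levels and their ordering can be captured in a few lines of Python (our illustration of the lattice, not part of SlicStan's implementation):

```python
LEVELS = {"data": 0, "model": 1, "genquant": 2}

def leq(l1, l2):
    # l1 <= l2 in the lattice data < model < genquant
    return LEVELS[l1] <= LEVELS[l2]

def lub(l1, l2):
    # least upper bound: the level of a computation combining l1 and l2
    return l1 if LEVELS[l1] >= LEVELS[l2] else l2

def flow_ok(source, target):
    # information may flow only upwards in the lattice
    return leq(source, target)
```

For instance, a quantity computed from both data and a parameter is itself at level \lev{model}, and may not be assigned to a \lev{data}-level variable.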
The rest of this section defines the SlicStan language.
The syntax of SlicStan statements (\autoref{ssec:slicsyntax}) extends that of the Core Stan statements from \autoref{sec:stan}, and its type system (\autoref{ssec:slictyping}) assumes level types \lev{data}, \lev{model} and \lev{genquant} on variables. The typing rules ensure that, in well-typed SlicStan programs, information flows only from level \lev{data}, through \lev{model}, to \lev{genquant}. Every Core Stan program can be turned into an equivalent SlicStan program by concatenating the statements and declarations in its compartments.
Next, we give the semantics of a SlicStan program, much as we did for Core Stan, as an unnormalised log density function on parameters and data (\autoref{ssec:slicsem}), and show some examples (\autoref{ssec:slicexamples}).
To do so, we \textit{elaborate} SlicStan's statements to Core Stan statements by statically unrolling user-defined function calls and bringing all variable declarations to the top level (\autoref{ssec:slicelab}).
The main purpose of elaboration is to identify all parameters statically so as to give the semantics as a function on the parameters.
Elaboration also serves as a first step in translating SlicStan to Stan (\autoref{sec:translate-slicstan-to-stan}).
\subsection{Syntax} \label{ssec:slicsyntax}
A SlicStan program is a sequence of function definitions $F_i$, followed by top-level statement $S$.
\vspace{-2pt}
\begin{display}{Syntax of a SlicStan Program}
\clause{F_1, \dots, F_n, S \quad n \geq 0}{SlicStan program}
\end{display}
SlicStan's user-defined functions are not recursive (a call to $F_i$ can only occur in the body of $F_j$ if $i < j$).
Functions are specified by a return type $T$, arguments with their types $a_i:T_i$, and a body $S$.
There is a reserved variable \kw{ret_g} associated with each function, to hold the return value.
\vspace{-2pt}
\begin{display}{Syntax of Function Definitions}
\clause{F ::= T\;g(T_1\,a_1, \dots, T_n\,a_n)\;S}{function definition (signature $g: T_1,\dots,T_n \to T$)}
\end{display}
SlicStan's expressions and statements extend those of Stan with user-defined function calls $g(E_1,\dots,E_n)$ and variable declarations $T\,x;\;S$ (shown in italics below).
In both declarations $T\,x;\;S$ and loops $\kw{for}(x\;\kw{in}\;E_1:E_2)\;S$, the variable $x$ is locally bound with scope $S$. We identify statements up to consistent renaming of bound variables. Note that occurrences of variables in L-values are free.
\begin{multicols}{2}
\begin{display}[.2]{SlicStan Syntax of Expressions:}
\Category{E}{expression}\\
\entry{x}{variable}\\
\entry{c}{constant}\\
\entry{[E_1,...,E_n]}{array}\\
\entry{E_1[E_2]}{array element}\\
\entry{f(E_1,\dots,E_n)}{builtin function call}\\
\entry{\mathit{g(E_1,\dots,E_n)}}{\emph{user-defined fun. call}}\\
\Category{L}{L-value}\\
\entry{x}{variable}\\
\entry{x[E_1]\dots[E_n]}{array element}
\end{display}
\begin{display}[.25]{SlicStan Syntax of Statements:}
\\
\Category{S}{statement}\\
\entry{L = E}{assignment}\\
\entry{S_1; S_2}{sequence}\\
\entry{\kw{for}(x\;\kw{in}\;E_1:E_2)\;S}{for loop} \\
\entry{\kw{if}(E)\;S_1\,\kw{else}\;S_2}{if statement} \\
\entry{\kw{skip}}{skip} \\
\entry{\mathit{T\,x;\;S}}{\emph{declaration}}\\
\\
\end{display}
\end{multicols} \vspace{-12pt}
We constrain the language to support only \lstinline|for| loops, disallowing the loop guard from depending on the body of the loop. As described in later subsections, in order to give the semantics of a SlicStan program, as well as to translate it to Stan, we need to \textit{elaborate} its statements to Core Stan statements (\autoref{ssec:slicelab}), statically unrolling user-defined functions and extracting variable declarations to the top level. Extending the language to support \lstinline|while| loops (or recursive functions) would risk a non-terminating elaboration step, and a potentially inefficient resulting Stan program.
This design choice is only a small restriction on usability and on the range of expressible models compared to Stan: Stan models can only have a fixed number of parameters, and accordingly, the overwhelming majority of examples in the official Stan repository use \lstinline|for| loops only.
We define derived forms for data declarations, modelling statements, and return statements.
Any user-defined function \lstinline|D_lpdf| can be used as the log density function of a user-defined distribution \lstinline|D| on the first argument of \lstinline|D_lpdf|.
For the sake of simplicity, we assume the body of a user-defined function \lstinline|g| contains \textit{at most one} return statement, at the end,
and we treat it as an assignment to the return variable \lstinline|ret_g|.
\vspace{-1pt}
\begin{display}[.52]{Derived Forms}
\clause{\kw{data }\textrm{ }\tau\,x; S \deq (\tau, \lev{data})\,x; S}{data declaration} \\
\clause{E \sim \kw{d}(E_1, \dots E_n) \deq \kw{target} = \kw{target} + \kw{d_lpdf}(E, E_1, \dots E_n)}{model, builtin distribution} \\
\clause{E \sim \kw{D}(E_1, \dots E_n) \deq \kw{target} = \kw{target} + \kw{D_lpdf}(E, E_1, \dots E_n)}{model, user-defined distribution} \\
\clause{\kw{return}\; E \deq \kw{ret_g} = E}{return}
\end{display}
\vspace{-10pt}
\subsection{Typing of SlicStan} \label{ssec:slictyping}
Next, we present SlicStan's type system.
We define a lattice $\left(\{\lev{data}, \lev{model}, \lev{genquant}\}, <\right)$ of level types, where $\lev{data} < \lev{model} < \lev{genquant}$.
Types $T$ in SlicStan range over pairs $(\tau, \ell)$ of a base type $\tau$, and a level type $\ell$ --- one of \lev{data}, \lev{model}, or \lev{genquant}.
Arrays are sized, with $n \geq 0$.
Each builtin function $f$ has a family of signatures $f:(\tau_1,\ell),\dots,(\tau_n,\ell) \to (\tau,\ell)$, one for each level $\ell$.
\vspace{-1pt}
\begin{display}{Types, and Typing Environment:}
\Category{\ell}{level type}\\
\entry{\lev{data}}{data, transformed data}\\
\entry{\lev{model}}{parameters, transformed parameters}\\
\entry{\lev{genquant}}{generated quantities}\\
\clause{n}{size} \\
\clause{\tau ::= \kw{real} \; | \; \kw{int} \; | \; \kw{bool} \; | \; \tau [n] }{base type} \\
\clause{T ::= (\tau,\ell)}{type: base type and level} \\
\clause{\Gamma ::= x_1:T_1, \dots, x_n:T_n \quad x_i\textrm{ distinct}}{typing environment}
\end{display}
(While builtin functions of our formal system are level polymorphic, user-defined functions are monomorphic.
This design choice was made to keep the system simple, and we see no challenges to polymorphism that are unique to SlicStan.)
We assume the type of the reserved \kw{target} variable to be $(\kw{real}, \lev{model})$: this variable can only be accessed within the \lstinline|model| block in Stan, thus its level is \lev{model}.
Each function $g$ is associated with a single return variable \lstinline|ret_g| matching the return type of the function.
\begin{display}[.2]{Reserved variables}
\clause{\kw{target} : (\kw{real}, \lev{model})}{log joint probability density function}\\
\clause{\kw{ret_g} : T}{return value of a function $T\;g(T_1\,a_1, \dots, T_n\,a_n)\;S$}
\end{display}
We present the full set of declarative typing rules, inspired by those of the secure information flow calculus defined by \citet{InfoFlowVolpano}, and more precisely, its summary by \citet{InfoFlowAbadi}. The information flow constraints are enforced by the subsumption rules \ref{ESub} and \ref{SSub}, which together allow information to flow only upwards in the $\lev{data} < \lev{model} < \lev{genquant}$ lattice.
Intuitively, we need to associate each expression $E$ with a level type to prevent lower-level variables from \textit{directly} depending on higher-level variables, such as in the case \lstinline|d = m + 1|, for \lstinline|d| of level \lev{data} and \lstinline|m| of level \lev{model}. We also need to associate each statement $S$ with a level type to prevent lower-level variables from \textit{indirectly} depending on higher-level variables, such as in the case \lstinline|if(m > 0)$\,$d = 2|.
\vspace{3pt}
\begin{display}[.2]{Judgments of the Type System:}
\clause{ \Gamma \vdash E : (\tau, \ell) }{expression $E$ has type $(\tau, \ell)$ and reads only level $\ell$ and below}\\
\clause{ \Gamma \vdash S : \ell }{statement $S$ assigns only to level $\ell$ and above}\\
\clause{ \vdash F }{function definition $F$ is well-typed}
\end{display}
\vspace{3pt}
The function \lstinline|ty(c)| maps constants to their types (for example \lstinline|ty(5.5) = real|).
\vspace{3pt}
\begin{display}{Typing Rules for Expressions:}
\squad
\staterule{ESub}
{ \Gamma \vdash E : (\tau,\ell) \quad \ell \leq \ell'}
{ \Gamma \vdash E : (\tau,\ell') }
\quad\,
\staterule{Var}
{ }
{ \Gamma, x:T \vdash x:T} \quad\,
\staterule{Const}
{ \kw{ty}(c) = \tau }
{ \Gamma \vdash c : (\tau,\lev{data}) }\quad\,
\staterule{Arr}
{\Gamma \vdash E_i : (\tau,\ell) \quad \forall i \in 1..n}
{\Gamma \vdash [E_1,...,E_n] : (\tau [n],\ell)}
\\[\GAP]\squad
\staterule{ArrEl}
{\Gamma \vdash E_1 : (\tau[n], \ell) \quad \Gamma \vdash E_2 : (\kw{int}, \ell)}
{\Gamma \vdash E_1[E_2] : (\tau,\ell)}\quad
\staterule[($f: T_1,\dots,T_n \to T$)]
{PrimCall}
{ \Gamma \vdash E_i : T_i \quad \forall i \in 1..n}
{ \Gamma \vdash f(E_1,\dots,E_n) : T } \squad
\staterule[($g: T_1,\dots,T_n \to T$)]
{FCall}
{ \Gamma \vdash E_i : T_i \quad \forall i \in 1..n}
{ \Gamma \vdash g(E_1,\dots,E_n) : T }
\end{display}
\vspace{3pt}
Here and throughout, we make use of several functions on the language building blocks:
\begin{itemize}
\item $\assset(S)$ (Definition~\ref{def:assset}) is the set of variables that are assigned to in $S$: $\assset(x=2*y) = \{x\}$.
\item $\readset(S)$ (Definition~\ref{def:readset}) is the set of variables read by $S$: $\readset(x=2*y) = \{y\}$.
\item $\Gamma(L)$ (Definition~\ref{def:lvaluetypes}) is the type of the L-value $L$ in the context $\Gamma$: \\ $\Gamma(x[0]) = (\kw{real}, \lev{data})$ for $x:(\kw{real}[], \lev{data}) \in \Gamma$.
\end{itemize}
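The first two of these functions can be sketched as a small recursive traversal. The following Python sketch uses a hypothetical tuple encoding of statements (ours, not the paper's formal development), and covers only assignment, sequencing, conditionals, and \lstinline|skip|.

```python
def reads_expr(e):
    """Variables read by an expression (strings are variables, numbers constants)."""
    if isinstance(e, str):
        return {e}
    if isinstance(e, (int, float)):
        return set()
    tag, *args = e              # e.g. ("mul", 2, "y"); the tag itself is not a read
    out = set()
    for a in args:
        out |= reads_expr(a)
    return out

def assigns(s):
    """assigns(S): variables assigned to anywhere in S."""
    tag = s[0]
    if tag == "assign":         # ("assign", x, E)
        return {s[1]}
    if tag == "seq":            # ("seq", S1, S2)
        return assigns(s[1]) | assigns(s[2])
    if tag == "if":             # ("if", E, S1, S2)
        return assigns(s[2]) | assigns(s[3])
    if tag == "skip":
        return set()
    raise ValueError(tag)

def reads(s):
    """reads(S): variables read anywhere in S, including guards."""
    tag = s[0]
    if tag == "assign":
        return reads_expr(s[2])
    if tag == "seq":
        return reads(s[1]) | reads(s[2])
    if tag == "if":
        return reads_expr(s[1]) | reads(s[2]) | reads(s[3])
    if tag == "skip":
        return set()
    raise ValueError(tag)

# x = 2 * y  ==>  assigns = {x}, reads = {y}
s = ("assign", "x", ("mul", 2, "y"))
print(assigns(s), reads(s))
```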
The rule \ref{Decl} for a variable declaration $(\tau, \ell)\,x; S$ has a side-condition ($x \notin \dom(\Gamma)$), where $\Gamma$ is the local typing environment,
that enforces that the variable $x$ is globally unique, that is, there is no other declaration of $x$ in the program.
The condition $x \notin \assset(S)$ in \ref{For} enforces that the loop index $x$ is immutable inside the body of the loop. In \ref{Seq}, we make sure that the sequence $S_1; S_2$ is \textit{shreddable}, through the predicate $\shreddable(S_1, S_2)$ (Definition~\ref{def:shreddable}). This imposes a restriction on the range of well-typed programs, which is needed both to allow translation to Stan (see \autoref{ssec:shred}), and to allow interpreting the program in terms of preprocessing, inference and postprocessing.
Using the rules for expressions and statements, we can also obtain rules for the derived statements.
\vspace{1pt}
\begin{display}{Typing Rules for Statements:}
\squad
\staterule{SSub}
{ \Gamma \vdash S : \ell' \quad \ell \leq \ell'}
{ \Gamma \vdash S : \ell }\quad
\staterule{Assign}
{ \Gamma(L) = (\tau, \ell) \quad \Gamma \vdash E : (\tau,\ell)}
{ \Gamma \vdash (L = E) : \ell }\qquad
\staterule{Decl}
{\Gamma, x:(\tau, \ell) \vdash S : \ell' \quad x \notin \dom(\Gamma)
}
{\Gamma \vdash (\tau, \ell)\,x; S : \ell'}\qquad
\\[\GAP]\squad
\staterule{If}
{ \Gamma \vdash E : (\kw{bool},\ell) \quad \Gamma \vdash S_1 : \ell \quad \Gamma \vdash S_2 : \ell}
{ \Gamma \vdash \kw{if}(E)\;S_1 \;\kw{else}\; S_2: \ell }\qquad
\staterule{Seq}
{ \Gamma \vdash S_1 : \ell \quad \Gamma \vdash S_2 : \ell \quad \shreddable(S_1, S_2)}
{ \Gamma \vdash (S_1; S_2) : \ell }\qquad
\staterule{Skip}
{ }
{ \Gamma \vdash \kw{skip} : \ell } \qquad
\\[\GAP]\squad
\staterule{For}
{ \Gamma \vdash E_1 : (\kw{int},\ell) \quad \Gamma \vdash E_2 : (\kw{int},\ell) \quad \Gamma, x:(\kw{int}, \ell) \vdash S : \ell \quad x \notin \dom(\Gamma) \quad x \notin \assset(S)}
{ \Gamma \vdash \kw{for}(x\;\kw{in}\;E_1:E_2)\;S : \ell } \qquad
\end{display}
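The statement rules above can be rendered algorithmically. The following Python sketch (our own toy encoding, not part of the formal development) checks the information-flow discipline for assignments, sequencing, and conditionals; the explicit \lstinline|pc| argument, tracking the level of enclosing guards, is a standard algorithmic rendering of the interplay between \ref{If} and \ref{SSub}.

```python
LEVELS = {"data": 0, "model": 1, "genquant": 2}

def expr_level(gamma, e):
    """Smallest level at which E can be typed; ESub allows any higher level."""
    if isinstance(e, str):
        return LEVELS[gamma[e]]
    if isinstance(e, (int, float)):
        return LEVELS["data"]       # constants are data level (Const)
    tag, *args = e
    return max((expr_level(gamma, a) for a in args), default=0)

def check_stmt(gamma, s, pc=0):
    """True iff s is well-typed; pc is the level of enclosing guards."""
    tag = s[0]
    if tag == "skip":
        return True
    if tag == "assign":             # Assign: reads (and guards) flow into the L-value
        lhs = LEVELS[gamma[s[1]]]
        return max(expr_level(gamma, s[2]), pc) <= lhs
    if tag == "seq":
        return check_stmt(gamma, s[1], pc) and check_stmt(gamma, s[2], pc)
    if tag == "if":                 # If: both branches checked under the guard's level
        g = expr_level(gamma, s[1])
        return (check_stmt(gamma, s[2], max(pc, g))
                and check_stmt(gamma, s[3], max(pc, g)))
    raise ValueError(tag)

gamma = {"d": "data", "m": "model"}
print(check_stmt(gamma, ("assign", "m", ("add", "d", 1))))          # data -> model: ok
print(check_stmt(gamma, ("assign", "d", ("add", "m", 1))))          # model -> data: rejected
print(check_stmt(gamma, ("if", "m", ("assign", "d", 2), ("skip",))))  # implicit flow: rejected
```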
\vspace{1pt}
\begin{display}{Derived Typing Rules}
\squad
\staterule{DataDecl}
{ \Gamma, x: (\tau, \lev{data}) \vdash S:\ell \quad x \notin \dom(\Gamma)}
{\Gamma \vdash \kw{data }\textrm{ }\tau\,x; S : \ell}\qquad
\staterule[($\kw{d_lpdf}: T, T_1,\dots,T_n \to (\kw{real}, \lev{model})$)]{PrimModel}
{ \Gamma \vdash E : T \quad \Gamma \vdash E_i : T_i \quad \forall i \in 1..n}
{ \Gamma \vdash E \sim \kw{d}(E_1, \dots E_n) : \lev{model} }\qquad
\\[\GAP]\squad
\staterule{Return}
{ \Gamma \vdash \kw{ret_g}:(\tau,\ell) \quad \Gamma \vdash E : (\tau, \ell) }
{ \Gamma \vdash \kw{return}\;E : \ell }\qquad
\staterule[($\kw{D_lpdf}: T, T_1,\dots,T_n \to (\kw{real}, \lev{model})$)]{FModel}
{ \Gamma \vdash E : T \quad \Gamma \vdash E_i : T_i \quad \forall i \in 1..n}
{ \Gamma \vdash E \sim \kw{D}(E_1, \dots E_n) : \lev{model} }\qquad
\end{display}
Finally, we complete the three judgments of the type system with the rule \ref{FDef} for checking the well-formedness of a function definition.
The condition $\ell_i \leq \ell$ ensures that the level of the result of a function call is no smaller than the level of its arguments.
\vspace{1pt}
\begin{display}{Typing Rule for Function Definitions:}
\squad
\staterulealt{FDef}
{ a_1:T_1,...,a_n:T_n, \kw{ret_g}:(\tau,\ell) \vdash S : \ell \quad T_i = (\tau_i,\ell_i) \quad \ell_i \leq \ell \quad \forall i \in 1..n}
{ \vdash (\tau,\ell)\;g(T_1\,a_1, \dots, T_n\,a_n)\;S }
\end{display}
In our formal development, we implicitly assume a fixed program with well-typed functions $\vdash F_1$, \dots, $\vdash F_N$.
More precisely, we assume a given well-formed program defined as follows.
\vspace{1pt}
\begin{display}{Well-Formed SlicStan Program:}
\squad
A program $F_1,\dots F_N, S$ is \emph{well-formed} iff $\vdash F_1$, \dots, $\vdash F_N$, and $\emptyset \vdash S : \lev{data}$.
\end{display}
SlicStan statements are, by design, a superset of Core Stan statements. Thus, we can treat any Core Stan statement as a SlicStan statement with big-step operational semantics defined as in \autoref{sec:stan}. By extending the conformance relation $s \models \Gamma$ to correspond to a SlicStan typing environment, we can prove type preservation of the operational semantics, with respect to SlicStan's type system.
\begin{display}[.2]{Rule of the Conformance Relation:}
\squad
\staterulealt{State}
{V_i \models \tau_i \quad \forall i \in I}
{(x_i \mapsto V_i)^{i \in I} \models (x_i : (\tau_i, \ell_i))^{i \in I}}
\end{display}
\begin{theorem}[Type Preservation for $\Downarrow$]\label{th:eval}~
For a Core Stan statement $S$ and a Core Stan expression $E$:
\begin{enumerate}
\item If $s \models \Gamma$ and $\Gamma \vdash E : (\tau, \ell)$ and $(s,E) \Downarrow V$ then $ V \models \tau$.
\item If $s \models \Gamma$ and $\Gamma \vdash S : \ell$ and $(s,S) \Downarrow s'$ then $s' \models \Gamma$.
\end{enumerate}
\end{theorem}
\begin{proof}
By induction on the size of the derivations of the judgments $(s,E) \Downarrow V$ and $(s,S) \Downarrow s'$.
\end{proof}
Finally, we state a termination-insensitive noninterference result.
Intuitively, the result means that (observed) data cannot depend on the model parameters, and that generated quantities do not affect the log density defined by the model.
\begin{definition}[$\ell$-equal states] Given a typing environment $\Gamma$, states $s_1 \models \Gamma$ and $s_2 \models \Gamma$ are $\ell$-equal for some level $\ell$ (written $s_1 \approx_{\ell} s_2$), if they differ only for variables of a level strictly higher than $\ell$:
$$s_1 \approx_{\ell} s_2 \deq \forall x:(\tau, \ell') \in \Gamma. \left( \ell' \leq \ell \implies s_1(x) = s_2(x) \right)$$
\end{definition}
\begin{theorem}[Noninterference] \label{th:noninterf} Suppose $s_1 \models \Gamma$, $s_2 \models \Gamma$, and $s_1 \approx_{\ell} s_2$ for some $\ell$. Then for Core Stan statements $S$ and Core Stan expressions $E$:
\begin{enumerate}
\item If $~\Gamma \vdash E:(\tau,\ell)$ and $(s_1, E) \Downarrow V_1$ and $(s_2, E) \Downarrow V_2$ then $V_1 = V_2$.
\item If $~\Gamma \vdash S:\ell$ and $(s_1, S) \Downarrow s_1'$ and $(s_2, S) \Downarrow s_2'$ then $s_1' \approx_{\ell} s_2'$.
\end{enumerate}
\end{theorem}
\begin{proof} (1)~follows by rule induction on the derivation $\Gamma \vdash E:(\tau, \ell)$, and using that if $\Gamma \vdash E:(\tau, \ell)$, $x \in \readset(E)$ and $\Gamma(x) = (\tau', \ell')$, then $\ell' \leq \ell$. (2)~follows by rule induction on the derivation $\Gamma \vdash S:\ell$ and using (1).
\end{proof}
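Noninterference can be illustrated empirically on a toy straight-line interpreter. In the Python sketch below (our own illustration, not a proof), the program is well-typed at level \lev{data}: the data-level assignment reads only data, while the model-level assignment may read both. Two states that are $\lev{data}$-equal remain $\lev{data}$-equal after execution.

```python
def run(state, prog):
    """Execute a straight-line program; each step assigns x := f(current state)."""
    st = dict(state)
    for x, f in prog:
        st[x] = f(st)
    return st

# d is data level, m is model level; the program is well-typed:
# the assignment to d depends only on d, while m may read both d and m.
prog = [("d", lambda s: s["d"] + 1),
        ("m", lambda s: s["m"] + s["d"])]

s1 = {"d": 3, "m": 10}
s2 = {"d": 3, "m": 99}   # data-equal to s1; differs only at level model
r1, r2 = run(s1, prog), run(s2, prog)
assert r1["d"] == r2["d"]   # the data-level result is unaffected by m
print(r1, r2)
```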
\subsection{Elaboration of SlicStan} \label{ssec:slicelab}
Similarly to Stan, a SlicStan program defines a probabilistic model, through an unnormalised log density function on the model parameters and data. That is, the semantics of SlicStan is in terms of a fixed (or data-dependent) number of variables. Therefore, to give the semantics formally, we need to statically unroll calls to user-defined functions, and pull all variable declarations to the top level.
(We discuss the difficulties of directly specifying the semantics of SlicStan without elaboration in \autoref{ssec:difficulty}.)
We call this static unrolling step \textit{elaboration}, and we formalise it through the elaboration relation $\elab$. Intuitively, to elaborate a program $F_1,\dots F_N, S$, we elaborate its main body $S$ by unrolling any calls to $F_1, \dots, F_N$ (as specified by \ref{Elab FCall}), and move all variable declarations to the top level (as specified by \ref{Elab Decl}). The result is an elaborated SlicStan statement $S'$ and a list of variable declarations $\Gamma$.
As mentioned previously, to avoid notational clutter, we assume a top-level SlicStan program $F_1,\dots, F_N, S$.
Since the syntax of a SlicStan statement differs from that of a Core Stan statement only by the presence of user-defined function calls and variable declarations, an \textit{elaborated SlicStan} statement is also a well-formed \textit{Core Stan} statement.
\vspace{-8pt}
\begin{multicols}{2}
\begin{display}[.20]{Elaboration Relation}
\clause{P \elab[\emptyset] \elpair{\ctx}{S'}}{program elaboration}\\
\clause{S \elab \elpair{\ctx}{S'}}{statement elaboration} \\
\clause{E \elab \elpair{\ctx}{S.E'}}{expression elaboration} \\
\clause{F \elab \elpair{r\!:\!T,A,\ctx}{S}}{fun. definition elaboration}
\end{display}
\begin{display}{Elaboration Rule for a SlicStan Program}
\quad
\staterule{Elab SlicStan}
{S \elab[\emptyset] \elpair{\ctx}{S'}}
{F_1,\dots F_N, S \elab[\emptyset] \elpair{\ctx}{S'}} \\[-2.7pt]
\end{display}
\end{multicols}\vspace{-8pt}
The \textit{unrolling} rule \ref{Elab FCall} assumes a call to a user-defined function $g$ with definition $F=T\;g(T_1\,a_1, \dots, T_n\,a_n)\;S$, which elaborates as described by \ref{Elab FDef}.
\begin{display}{Elaboration Rules for Expressions:}
\staterule{Elab Var}
{ }
{ x \elab \elpair{\emptyset}{\kw{skip}.x}}\qquad
\staterule{Elab Const}
{ }
{ c \elab \elpair{\emptyset}{\kw{skip}.c}}\qquad
\staterule{Elab ArrEl}
{ E_1 \elab \elpair{\ctx_1}{S_1.E_1'} \quad
E_2 \elab \elpair{\ctx_2}{S_2.E_2'} \quad
\ctx_1 \cap \ctx_2 = \emptyset}
{ E_1[E_2] \elab \elpair{\ctx_1 \cup \ctx_2}{S_1; S_2.(E_1'[E_2'])}}\qquad
\\[\GAP]
\staterule{Elab Arr}
{ E_i \elab \elpair{\ctx_i}{S_i.E_i'} \quad \forall i \in 1..n \quad \bigcap_{i=1}^n \ctx_i = \emptyset}
{[E_1, ..., E_n] \elab \elpair{\bigcup_{i=1}^n \ctx_i}{S_1;...;S_n.([E_1', ..., E_n'])}}\quad
\staterule{Elab PrimCall}
{ E_i \elab \elpair{\ctx_i}{S_i.E_i'} \quad \forall i \in 1..n \quad \bigcap_{i=1}^n \ctx_i = \emptyset}
{ f(E_1, ..., E_n) \elab \elpair{\bigcup_{i=1}^n \ctx_i}{S_1;...;S_n. f(E_1',...,E_n')}}
\\[\GAP]
\staterule[~(where $F$ is the definition for function $g$)]{Elab FCall}
{ E_i \elab \elpair{\ctx_{E_i}}{S_{E_i}.E_i'} \quad \forall i \in 1..n \quad
F \elab \elpair{r_F:T_F, A_F,\ctx_F}{S_F} \\
A_F = \set{a_i:T_i \given i \in 1..n} \quad
\{r_F:T_F\}\cap A_F\cap(\bigcap_{i=1}^n \ctx_i)\cap\ctx_F = \emptyset }
{ g(E_1, ..., E_n) \elab \elpair{\{r_F:T_F\} \cup A_F\cup(\bigcup_{i=1}^n \ctx_i)\cup\ctx_F}{(S_{E_1}; a_1=E_1';...;S_{E_n};a_n=E_n'; S_F.r_F)}}
\end{display}
\begin{display}{Elaboration Rule for Function Definitions:}
\squad
\staterule{Elab FDef}
{ S \elab[\set{r:T} \cup \Gamma_A \cup \Gamma] \elpair{\ctx'}{S'} \quad
\Gamma_A = \set{a_1:T_1,...,a_n:T_n} \quad
\set{r} \cap \dom(\Gamma_A) \cap \dom(\ctx') = \emptyset }
{ T\;g(T_1\,a_1,...,T_n\,a_n)\;S \elab \elpair{r:T,\Gamma_A,\ctx' }{S'}}
\end{display}
As we identify statements up to $\alpha$-conversion, $T\,x;\;x = 1$ elaborates to $\elpair{\{x_1:T\}}{x_1=1}$, but also to $\elpair{\{x_2:T\}}{x_2=1}$, and so on. The \ref{Elab Decl} rule simply extracts a variable declaration to the top level. Other than recursively applying $\elab$ to sub-parts of the statement, the \ref{Elab If} and \ref{Elab For} rules transform the guards of the respective compound statement into the fresh variables $g$ or $g_1$, $g_2$ respectively (as opposed to unrestricted expressions). This is a preparation step needed for the program to be correctly translated to Stan later (see \autoref{ssec:shred} and Appendix~\ref{ap:shred}).
\begin{display}{Elaboration Rules for Statements:}
\staterule{Elab Decl}
{ S \elab[\{x:T\} \cup \Gamma] \elpair{\ctx'}{S'} \quad x \notin \dom(\ctx')}
{ T\,x; S \elab \elpair{\set{x:T}\cup\ctx'}{S'}} \qquad
\staterule{Elab Skip}
{}
{\kw{skip} \elab \elpair{\emptyset}{\kw{skip}}}\qquad
\\[\GAP]\squad
\staterule{Elab Assign}
{ L \elab \elpair{\ctx_L}{S_L.L'} \quad E \elab \elpair{\ctx_E}{S_E.E'} \quad \ctx_L \cap \ctx_E = \emptyset}
{ L = E \elab \elpair{\ctx_L \cup \ctx_E}{S_L; S_E; L' = E' }}\quad
\staterule{Elab Seq}
{ S_1 \elab \elpair{\ctx_1}{S_1'} \quad
S_2 \elab \elpair{\ctx_2}{S_2'} \quad
\ctx_1 \cap \ctx_2 = \emptyset}
{ S_1; S_2 \elab \elpair{\ctx_1 \cup \ctx_2}{S_1';S_2'} } \qquad
\\[\GAP]\squad
\staterule[~(where $\Gamma \vdash E : T$)]{Elab If}
{ E \elab \elpair{\ctx_E}{S_E.E'} \quad
S_1 \elab \elpair{\ctx_1}{S_1'} \quad
S_2 \elab \elpair{\ctx_2}{S_2'} \quad
\{g:T\} \cap \ctx_E \cap \ctx_{1} \cap \ctx_{2} = \emptyset }
{ \kw{if}(E)\; S_1\; \kw{else}\; S_2 \elab \elpair{\{g:T\} \cup \ctx_E \cup \ctx_{1} \cup \ctx_{2}}{(S_E; g=E'; \kw{if}(g)\; S_1'\; \kw{else}\; S_2')}} \qquad
\\[\GAP]\squad
\staterule[~(where $\Gamma \vdash E_1 : T_1$ and $\Gamma \vdash E_2 : T_2$)]{Elab For}
{ E_1 \elab \elpair{\ctx_1}{S_1.E_1'} \quad
E_2 \elab \elpair{\ctx_2}{S_2.E_2'} \quad
S \elab[\ctx \cup \{x:(\kw{int}, \lev{data})\}] \elpair{\ctx_S}{S'} \\
\ctx_V = v_{\Gamma}(\ctx_S, n) \quad
\{g_1:T_1, g_2:T_2, n:(\kw{int}, \lev{data})\} \cap \ctx_1 \cap \ctx_2 \cap \ctx_V = \emptyset}
{ \kw{for}(x\;\kw{in}\;E_1:E_2)\;S \elab \elpair{\{g_1:T_1, g_2:T_2, n:(\kw{int}, \lev{data})\} \cup \ctx_1 \cup \ctx_2 \cup \ctx_V}{\\S_1; S_2; g_1=E_1'; g_2=E_2'; n = g_2 - g_1 + 1; \kw{for}(x\;\kw{in}\;g_1:g_2)\;v_S(x,\ctx_V,S')} } \qquad
\end{display}
In some cases when elaborating a \lstinline|for| loop, $\Gamma_S$ will not be empty (in other words, the body of the loop will declare new variables). Thus, as \ref{Elab For} shows, variables in $\Gamma_S$ are upgraded to arrays, and then accessed by the index of the loop. We use the function $v_S$ (Definition~\ref{def:vector}) which takes a variable $x$, a typing environment $\Gamma$, and a statement $S$, and returns a statement $S'$, where any mention of a variable $x' \in \dom(\Gamma)$ is substituted with $x'[x]$. For example, consider the statement
\lstinline|for(i in 1:N){real +++model+++ x ~ normal(0,1); y[i] ~ normal(x,1);}| and an environment $\Gamma$, such that $\Gamma \vdash N:(\kw{int}, \lev{data})$. The body of the loop declares a new variable $x$, thus it elaborates to $\elpair{\Gamma_S}{S'}$, where
$\Gamma_S = \{x:(\kw{real}, \lev{model})\}$, and \lstinline|$S' =$ {x ~ normal(0,1); y[i] ~ normal(x,1);}|.
By \ref{Elab For},
$S \elab \langle \{\kw{g1}:(\kw{int}, \lev{data}), \kw{g2}:(\kw{int}, \lev{data})\}\cup\Gamma_V,~$\lstinline|for(i in g1:g2){$S''$}|$\rangle$
where:
\begin{align*}
\Gamma_V &= v_{\Gamma}(\Gamma_S, N) = \{x:(\kw{real[N]}, \lev{model})\} \\
S'' &= v_S(i, \Gamma_V, S') = \kw{x[i] ~ normal(0,1); y[i] ~ normal(x[i],1);}
\end{align*}\vspace{-10pt}
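The substitution performed by $v_S$ can be sketched as a recursive rewrite over a hypothetical tuple encoding of statements (ours, for illustration only); for brevity, the already-indexed \lstinline|y[i]| is kept as a flat string.

```python
def v(node, loop_locals, i):
    """v_S sketch: rewrite every loop-local variable x' into the access x'[i]."""
    if isinstance(node, str):
        return node + "[" + i + "]" if node in loop_locals else node
    if isinstance(node, tuple):
        return tuple(v(child, loop_locals, i) for child in node)
    return node                     # numeric literals are left unchanged

# The loop body x ~ normal(0,1); y[i] ~ normal(x,1), with loop-local x:
body = ("seq",
        ("sample", "x", ("normal", 0, 1)),
        ("sample", "y[i]", ("normal", "x", 1)))
print(v(body, {"x"}, "i"))
```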
Next, we state and prove type preservation of the elaboration relation.
\begin{theorem}[Type preservation of $\elab$] \label{th:elab} For SlicStan statements $S$, SlicStan expressions $E$, and SlicStan function definitions $F$:
~
\begin{enumerate}
\item If $ \Gamma \vdash E:(\tau,\ell)$ and $E \elab \elpair{\Gamma'}{S'.E'}$ then $\Gamma, \Gamma' \vdash S':\lev{data}$ and $\Gamma, \Gamma' \vdash E':(\tau, \ell)$.
\item If $\Gamma \vdash S:\ell$ and $S \elab \elpair{\Gamma'}{S'} $ then $ \Gamma, \Gamma' \vdash S':\ell $
\item If $F \elab \elpair{\Gamma'}{S'.\kw{ret}} $ then $ \Gamma, \Gamma' \vdash S':\lev{data}$
\end{enumerate}
\end{theorem}
\begin{proof}
By induction on the size of the derivations of the judgments $E \elab \elpair{\Gamma'}{S'.E'}$, $S \elab \elpair{\Gamma'}{S'}$, and $F \elab \elpair{\Gamma'}{S'.\kw{ret}}$.
\end{proof}
\subsection{Semantics of SlicStan} \label{ssec:slicsem}
We now show how SlicStan's type system allows us to specify the semantics of the probabilistic program as an unnormalised posterior density function. This shows how the semantics of SlicStan connects to that of Stan, and demonstrates that explicitly encoding the roles of program variables into the block syntax of the language is not needed.
We specify the semantics --- the unnormalised density $\log p^*_{F_1, \dots, F_N, S}(\params \mid \data)$ --- in two steps.
\subsubsection{Semantics of (elaborated) SlicStan statements} \label{ssec:sem_statement}
Consider an elaborated SlicStan statement $S$ such that $\Gamma \vdash S : \lev{data}$. The semantics of $S$ is the function $\log p^*_{\Gamma \vdash S}$, such that
for any state $s \models \Gamma$:
$$\log p^*_{\Gamma \vdash S} (s) \deq
s'[\kw{target}] \text{ if there is } s' \text{ such that } ((s,\kw{target} \mapsto 0), S) \Downarrow s'$$
\subsubsection{Semantics of SlicStan programs} \label{ssec:sem_prog_density}
Consider a well-formed SlicStan program $F_1, \dots, F_N, S$ and suppose that $S \elab[\emptyset] \elpair{\Gamma'}{S'}$.
(Observe that $\Gamma'$ and $S'$ are uniquely determined, up to $\alpha$-conversion, by $F_1, \dots, F_N, S$.)
Suppose also that:
\begin{itemize}
\item $\Gamma_{\data}$ corresponds to \emph{data variables}, $\Gamma_{\data} = \{ x:\ell \in \Gamma' \mid \ell=\lev{data} \wedge x \notin \assset(S') \}$, and
\item $\Gamma_{\theta}$ corresponds to \emph{model parameters}, $\Gamma_{\theta} = \{ x:\ell \in \Gamma' \mid \ell=\lev{model} \wedge x \notin \assset(S')\}$.
\end{itemize}
Similarly to Stan (\autoref{ssec:stansem}),
the semantics of a SlicStan program $S$ is the unnormalised log posterior density function $\log p^*_{F_1, \dots, F_N, S}$ on parameters $\params$ given data $\data$
(with $\params \models \Gamma_{\theta}$ and $\data \models \Gamma_{\data}$):
\begin{equation}\label{eq:slicposterior}
\log p^*_{F_1, \dots, F_N, S}\left( \params \mid \data \right) \deq \log p^*_{\Gamma' \vdash S'}(\params, \data)
\end{equation}
\subsection{Examples} \label{ssec:slicexamples}
Next, we give two examples of SlicStan programs, their elaborated versions, and their semantics in the form of an unnormalised log density function. Here, we specify the levels of variables in SlicStan programs explicitly. In \autoref{sec:demo} we describe how type inference can be implemented to infer optimal levels for program variables, thus making explicit declaration of levels unnecessary.
\subsubsection{Simple Example}
Consider a SlicStan program $\emptyset,S$
($\emptyset$ denotes no function definitions), where we simply model the distribution of a data array $\mathbf{y}$:
\begin{lstlisting}
$S =\;$ real +++model+++ mu ~ normal(0, 1);
real +++model+++ sigma ~ normal(0, 1);
int +++data+++ N;
real +++data+++ y[N];
for(i in 1:N){ y[i] ~ normal(mu, sigma); }
\end{lstlisting}
We define the semantics of $S$ in three steps:
\begin{enumerate}
\item Elaboration: $S \elab[\emptyset] \elpair{\Gamma'}{S'}$, where: \vspace{-20pt}
\begin{multicols}{2}
\begin{align*}
\Gamma' =& \kw{mu}:(\kw{real}, \lev{model}), \kw{sigma}:(\kw{real}, \lev{model}), \\
&\kw{y}:(\kw{real[N]}, \lev{data}), \kw{N}:(\kw{int}, \lev{data})
\end{align*}
\vspace{5pt}
\begin{lstlisting}
$S' =$ mu ~ normal(0, 1);
sigma ~ normal(0, 1);
for(i in 1:N){
y[i] ~ normal(mu, sigma); }
\end{lstlisting}
\end{multicols}\vspace{-14pt}
\item Semantics of $S'$: For any state $s \models \Gamma'$, $\log p^*_{\Gamma' \vdash S'}(s) = s'[\kw{target}]$, where $(s, S') \Downarrow s'$. Thus: \vspace{-1pt}
$$\textstyle \log p^*_{\Gamma' \vdash S'} (s) = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + \sum_{i=1}^{N}\log \mathcal{N}(y_i, \mu, \sigma)$$
\item Semantics of $S$: We derive $\Gamma_{\data} = \{ x:\ell \in \Gamma' \mid \ell=\lev{data} \wedge x \notin \assset(S') \} = \{N, y\}$, and $\Gamma_{\theta} = \{ x:\ell \in \Gamma' \mid \ell=\lev{model} \wedge x \notin \assset(S')\} = \{\mu, \sigma\}$.
Therefore, the semantics of $S$ is the unnormalised density on the parameters $\mu$ and $\sigma$, given data $N$ and $y$: \vspace{-1pt}
$$\textstyle \log p_S^*(\mu, \sigma \mid y, N) = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + \sum_{i=1}^{N}\log \mathcal{N}(y_i, \mu, \sigma)$$
\end{enumerate}
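The density above can be evaluated by mimicking how $S'$ accumulates \kw{target}. The Python sketch below (our own illustration; the function names are ours) starts from $\kw{target}=0$ and adds the log density of each \lstinline|~| statement in program order.

```python
import math

def log_normal(x, mu, sigma):
    """log N(x | mu, sigma) for the univariate Gaussian density."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))

def log_p_star(mu, sigma, y):
    """Accumulate target exactly as S' does: priors first, then the loop over y."""
    target = 0.0
    target += log_normal(mu, 0, 1)      # mu ~ normal(0, 1)
    target += log_normal(sigma, 0, 1)   # sigma ~ normal(0, 1)
    for yi in y:                        # for(i in 1:N){ y[i] ~ normal(mu, sigma); }
        target += log_normal(yi, mu, sigma)
    return target

print(log_p_star(0.5, 1.2, [0.3, 0.7, -0.1]))
```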
\subsubsection{User-defined Functions Example} \label{sssec:udex}
Next, we look at an example that includes a user-defined function. Here, the function \lstinline|my_normal| is a reparameterising function (\autoref{ssec:encaps}), that defines a Gaussian random variable, by scaling and shifting a standard Gaussian variable:
\begin{lstlisting}
$S\,$ = real +++model+++ my_normal(real +++model+++ m, real +++model+++ s){
real +++model+++ x_std ~ normal(0, 1);
return m + x_std * s;
}
real +++model+++ mu ~ normal(0, 1);
real +++model+++ sigma ~ normal(0, 1);
int +++data+++ N;
real +++genquant+++ x[N];
for(i in 1:N) { x[i] = my_normal(mu, sigma); }
\end{lstlisting}
\vspace{-2pt}
\begin{enumerate}
\item Elaboration: $S \elab[\emptyset] \elpair{\Gamma'}{S'}$, where: \vspace{-10pt}
\begin{multicols}{2}
\noindent
\begin{align*}
\Gamma' =& \kw{mu}:(\kw{real}, \lev{model}), \kw{sigma}:(\kw{real}, \lev{model}), \\
&\kw{m}:(\kw{real}, \lev{model}), \kw{s}:(\kw{real}, \lev{model}), \\ &\kw{x_std}:(\kw{real[N]}, \lev{model}), \\
&\kw{x}:(\kw{real[N]}, \lev{genquant}), \kw{N}:(\kw{int}, \lev{data})
\end{align*}
\vspace{1.5cm}
\begin{lstlisting}
$S' =$ mu ~ normal(0, 1);
sigma ~ normal(0, 1);
for(i in 1:N){
m = mu; s = sigma;
x_std[i] ~ normal(0, 1);
x[i] = m + x_std[i] * s; }
\end{lstlisting}
\end{multicols}\vspace{-20pt}
\item Semantics of $S'$: Consider any $s \models \Gamma'$.
Then: \vspace{-1pt}
$$\textstyle \log p^*_{\Gamma' \vdash S'} (s) = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + \sum_{i=1}^{N}\log \mathcal{N}(x^{\mathrm{std}}_i, 0, 1)$$
\item Semantics of $S$: We derive $\Gamma_{\data} = \{N\}$, and $\Gamma_{\theta} = \{\mu, \sigma, \mathbf{x}^{\mathrm{std}}\}$.
The semantics of the program $S$ is the unnormalised density on the parameters $\mu$, $\sigma$, and $\mathbf{x}^{\mathrm{std}}$, given data $N$: \vspace{-1pt}
$$\textstyle \log p_S^*(\mu, \sigma, \mathbf{x}^{\mathrm{std}} \mid N) = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + \sum_{i=1}^{N}\log \mathcal{N}(x^{\mathrm{std}}_i, 0, 1)$$
\end{enumerate}
\subsection{Difficulty of Specifying Direct Semantics Without Elaboration} \label{ssec:difficulty}
Specifying the direct semantics $\log p^*_{\emptyset \vdash S}(s)$, without an elaboration step, is not simple.
SlicStan's user-defined functions are flexible enough to allow new model parameters to be declared inside the body of a function. Having some of the parameters declared this way means that it is not obvious what the complete set of parameters is, unless we elaborate the program.
Consider the program from \autoref{sssec:udex}. Its semantics is ${\log p^*_{S} (\mu, \sigma, \mathbf{x}^{\mathrm{std}} \mid N)} = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + \sum_{i=1}^{N}\log \mathcal{N}(x^{\mathrm{std}}_i, 0, 1)$.
This differs from ${\log p^*_{S} (\mu, \sigma, x^{\mathrm{std}} \mid N)} = \log \mathcal{N}(\mu, 0, 1) + \log \mathcal{N}(\sigma, 0, 1) + N \times \log \mathcal{N}(x^{\mathrm{std}}, 0, 1)$, which would be the accumulated log density if we did not unroll the \lstinline|my_normal| call, and instead implemented direct semantics. In one case, the model has $N+2$ parameters: $\mu, \sigma, x^\mathrm{std}_1, \dots, x^\mathrm{std}_N$. In the other, the model has only 3 parameters: $\mu, \sigma, x^{\mathrm{std}}$.
\section{Translation of SlicStan to Stan} \label{sec:translate-slicstan-to-stan}
Translating SlicStan to Stan happens in two steps: \emph{shredding} (\autoref{ssec:shred}) and \emph{transformation} (\autoref{ssec:trans}). In this section, we formalise these steps and show that the semantics, seen as an unnormalised log posterior density function on parameters given data, is preserved in the translation.
\subsection{Shredding} \label{ssec:shred}
The first step in translating an elaborated SlicStan program to Stan is \textit{shredding} (or \textit{slicing}) by level. SlicStan allows statements that assign to variables of different levels to be interleaved. Stan, on the other hand, requires all \lev{data} level statements to come first (in the \lstinline|data| and \lstinline|transformed data| blocks), then all \lev{model} level statements (in the \lstinline|parameters|, \lstinline|transformed parameters| and \lstinline|model| blocks), and finally, the \lev{genquant} level statements (in the \lstinline|generated quantities| block).
Therefore, we define the shredding relation $\shred$ on an elaborated SlicStan statement $S$ and triples of \textit{single-level statements} $\shredded$ (Definition~\ref{def:singlelev}). That is, $\shred$ shreds a statement into three elaborated SlicStan statements $S_D$, $S_M$ and $S_Q$, where $S_D$ only assigns to variables of level $\lev{data}$, $S_M$ only assigns to variables of level $\lev{model}$, and $S_Q$ only assigns to variables of level $\lev{genquant}$. We formally state and prove this result in Lemma~\ref{lem:shredisleveled}.
\begin{display}[.3]{Shredding Relation}
\clause{S \shred \shredded}{statement shredding}
\end{display}
Currently, Stan can only assign to \lev{data} variables inside the \lstinline|transformed data| block, to \lev{model} variables inside the \lstinline|transformed parameters| block, and to generated quantities inside the \lstinline|generated quantities| block. Therefore, in Stan it is not possible to write an \lstinline|if| statement or a \lstinline|for| loop which assigns to variables of different levels inside its body. The \ref{Shred If} and \ref{Shred For} rules resolve this by copying the entire body of the \lstinline|if| statement or \lstinline|for| loop on each of the three levels. Notice that we restrict the \lstinline|if| and \lstinline|for| guards to be variables (as opposed to any expression), which we have ensured is the case after the elaboration step (\ref{Elab If} and \ref{Elab For}).
For example, consider the SlicStan program $S$, as defined below. It elaborates to $S'$ and $\Gamma'$, and it is then shredded to the single-level statements $\shredded$:
\vspace{-8pt}\begin{multicols}{3}
\begin{lstlisting}
$S=$ real +++data+++ d;
$\hspace{1pt}$real +++model+++ m;
$\hspace{1pt}$if(d > 0){
d = 1;
m = 2;
}
\end{lstlisting}
\vspace{9pt}
\begin{align*}\hspace{-14pt}
\Gamma' =\;\{&\kw{d}:(\kw{real}, \lev{data}),\\
&\kw{m}:(\kw{real}, \lev{model}),\\
&\kw{g}:(\kw{bool}, \lev{data})\}
\end{align*}
\vspace{-18pt}
\begin{lstlisting}
$S' =$ g = (d > 0);
if(g){d=1; m=2;}
\end{lstlisting}
\begin{lstlisting}
$S_D =$ g = (d > 0);
$\hspace{3pt}$if(g){d = 1;}
$S_M =$ if(g){m = 2;}
$S_Q =$ skip;
\end{lstlisting}
\end{multicols}
%
\begin{display}{Shredding Rules for Statements:}
\squad
\staterule{Shred DataAssign}
{\Gamma(L) = (\_,\lev{data})}
{ L = E \shred L = E, \kw{skip}, \kw{skip}}\quad\hquad
\staterule{Shred ModelAssign}
{ \Gamma(L) = (\_,\lev{model}) }
{ L = E \shred \kw{skip}, L = E, \kw{skip}}\quad\hquad
\staterule{Shred GenQuantAssign}
{ \Gamma(L) = (\_,\lev{genquant})}
{ L = E \shred \kw{skip}, \kw{skip}, L = E}\qquad
\\[\GAP]\squad
\staterule{Shred Seq}
{ S_1 \shred S_{D_1}, S_{M_1}, S_{Q_1} \quad
S_2 \shred S_{D_2}, S_{M_2}, S_{Q_2}}
{ S_1; S_2 \shred (S_{D_1};S_{D_2}), (S_{M_1};S_{M_2}), (S_{Q_1};S_{Q_2}) } \qquad
\staterule{Shred Skip}
{}
{\kw{skip} \shred (\kw{skip}, \kw{skip}, \kw{skip})}\qquad
\\[\GAP]\squad
\staterule{Shred If}
{ S_1 \shred \shredded[1] \quad
S_2 \shred \shredded[2] \quad}
{ \kw{if}(g)\; S_1\; \kw{else}\; S_2 \shred
(\kw{if}(g)\; S_{D_1}\; \kw{else}\; S_{D_2}),
(\kw{if}(g)\; S_{M_1}\; \kw{else}\; S_{M_2}),
(\kw{if}(g)\; S_{Q_1}\; \kw{else}\; S_{Q_2})} \qquad
\\[\GAP]\squad
\staterule{Shred For}
{ S \shred \shredded }
{ \kw{for}(x\;\kw{in}\;g_1:g_2)\;S \shred
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_D),
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_M),
(\kw{for}(x\;\kw{in}\;g_1:g_2)\;S_Q)} \qquad
\end{display}
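For straight-line code, the assignment and sequencing rules amount to routing each assignment to the slice of its assigned variable's level, preserving relative order within each slice. The following Python sketch (a toy encoding of ours, ignoring \lstinline|if| and \lstinline|for|) illustrates this on the example above.

```python
LEVELS = ("data", "model", "genquant")

def shred(stmts, level_of):
    """Shred a sequence of assignments (L, E) into (S_D, S_M, S_Q):
    each assignment goes to the slice of its assigned variable's level
    (Shred DataAssign / ModelAssign / GenQuantAssign), and relative
    order within a slice is preserved (Shred Seq)."""
    slices = {lvl: [] for lvl in LEVELS}
    for lhs, rhs in stmts:
        slices[level_of[lhs]].append((lhs, rhs))
    return slices["data"], slices["model"], slices["genquant"]

# g = (d > 0); d = 1; m = 2, with g, d at level data and m at level model:
level_of = {"g": "data", "d": "data", "m": "model"}
s_d, s_m, s_q = shred([("g", "d > 0"), ("d", "1"), ("m", "2")], level_of)
print(s_d, s_m, s_q)
```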
In the rest of this section, we show that shredding a SlicStan program preserves its semantics (Theorem~\ref{th:shred}), in the sense that an elaborated program $S$ has the same meaning as the sequence of its shredded parts $S_D; S_M; S_Q$. We do so by:
\begin{enumerate}
\item Proving that shredding produces \textit{single-level statements} (Definition~\ref{def:singlelev} and Lemma~\ref{lem:shredisleveled}).
\item Defining a notion of \textit{statement equivalence} (Definition~\ref{def:equiv}) and specifying what conditions need to hold to change the order of two statements (Lemma~\ref{lem:reorder}).
\item Showing how to extend the type system of SlicStan in order for the language to fulfil the criteria from (2) (Definition~\ref{def:shreddable}, Lemma~\ref{lem:commutativity}).
\end{enumerate}
Intuitively, a single-level statement of level $\ell$ is one that updates only variables of level $\ell$.
\begin{definition}[Single-level Statement $\Gamma \vdash \ell(S)$] \label{def:singlelev}
$S$ is a single-level statement of level $\ell$ with respect to $\Gamma$ (written $\Gamma \vdash \ell(S)$) if and only if,
$\Gamma \vdash S : \ell$ and $\forall x \in \assset(S)$ there is some $\tau$, s.t. $x:(\tau, \ell) \in \Gamma$.
\end{definition}
\begin{lemma}[Shredding produces single-level statements] \label{lem:shredisleveled}
$$S \shred[\Gamma] \shredded \implies \singlelevelS{\lev{data}}{S_D} \wedge \singlelevelS{\lev{model}}{S_M} \wedge \singlelevelS{\lev{genquant}}{S_Q}$$
\end{lemma}
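For example, consider a hypothetical elaborated statement over variables \lstinline|d2|, \lstinline|m| and \lstinline|q| of levels \lev{data}, \lev{model} and \lev{genquant} respectively (a sketch of the intended behaviour of shredding, rather than a derivation from the rules):
\begin{lstlisting}
d2 = d + 1;        // data-level statement
m ~ normal(d2, 1); // model-level statement
q = m * m;         // genquant-level statement
\end{lstlisting}
Shredding this sequence yields $S_D = $ \lstinline|d2 = d + 1;|, $S_M = $ \lstinline|m ~ normal(d2, 1);| and $S_Q = $ \lstinline|q = m * m;|, each of which is a single-level statement in the sense of Definition~\ref{def:singlelev}.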
The core of proving Theorem~\ref{th:shred} is that if we take a statement $S$ that is well-typed in $\Gamma$, and reorder its building blocks according to $\shred$, the resulting statement $S'$ will be \textit{equivalent} to $S$.
\begin{definition}[Statement equivalence] \label{def:equiv}
$S \eveq S' \deq \left( \forall s, s'. (s, S) \Downarrow s' \iff (s, S') \Downarrow s' \right)$
\end{definition}
In the general case, to swap the order of executing $S_1$ and $S_2$, it is enough for each statement not to assign to a variable that the other statement reads or assigns to:
\begin{lemma}[Statement Reordering]
\label{lem:reorder}
For statements $S_1$ and $S_2$ that are well-typed in $\Gamma$, if $\readset(S_1)\cap\assset(S_2) = \emptyset$, $\assset(S_1)\cap\readset(S_2) = \emptyset$, and $\assset(S_1)\cap\assset(S_2) = \emptyset$ then $S_1;S_2 \eveq S_2; S_1$.
\end{lemma}
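For instance, for suitably declared variables (a hypothetical snippet):
\begin{lstlisting}
a = x + 1; b = y * 2; // reads and writes are disjoint: the two commute
a = x; x = 0;         // the first reads x, the second assigns to it
\end{lstlisting}
The first pair satisfies all three conditions of the lemma, so the statements can be swapped; the second violates $\readset(S_1)\cap\assset(S_2) = \emptyset$ and cannot be reordered.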
Shredding produces single-level statements; therefore, we only ever need to reorder single-level statements of distinct levels. Thus, two of the three conditions needed for reordering already hold.
\begin{lemma}[] \label{lem:halfway} If $~\Gamma \vdash \ell_1(S_1)$, $\Gamma \vdash \ell_2(S_2)$ and $\ell_1 < \ell_2$ then $\readset(S_1) \cap \assset(S_2) = \emptyset$ and $\assset(S_1) \cap \assset(S_2) = \emptyset$.
\end{lemma}
To reorder the sequence $S_2;S_1$ according to Lemma~\ref{lem:reorder}, we need to satisfy one more condition, which is
$\readset(S_2) \cap \assset(S_1) = \emptyset$. We achieve this through the predicate $\shreddable$ in the \ref{Seq} typing rule.
One way to define $\shreddable(S_2, S_1)$ is so that it directly reflects this condition: $\shreddable(S_2, S_1) \deq \left(\readset(S_2) \cap \assset(S_1) = \emptyset\right)$. This corresponds to a form of a single-assignment system, where variables become immutable once they are read.
We adopt a more flexible strategy, where we enforce variables of level $\ell$ to become immutable only once they have been \textit{read at a level higher than} $\ell$. We define:
\begin{itemize}
\item $\readset_{\Gamma \vdash \ell}(S)$: the set of variables $x$ that are read at level $\ell$ in $S$. For example, if $y$ is of level $\ell$, then $x\in \readset_{\Gamma \vdash \ell}(y=x)$. (Definition~\ref{def:read_level_set}).
\item $\assset_{\Gamma \vdash \ell}(S)$: the set of variables $x$ of level $\ell$ that have been assigned to in $S$ (Definition~\ref{def:write_level_set}).
\end{itemize}
Importantly, if $\Gamma \vdash \ell(S)$, then the sets $\readset_{\Gamma \vdash \ell}(S)$ and $\assset_{\Gamma \vdash \ell}(S)$ are the same as $\readset(S)$ and $\assset(S)$:
\begin{lemma} \label{lem:same_sets_when_singlelevel}
If $~\Gamma \vdash \ell(S)$, then $\readset_{\Gamma \vdash \ell}(S) = \readset(S)$ and $\assset_{\Gamma \vdash \ell}(S) = \assset(S)$.
\end{lemma}
Finally, we give the formal definition of $\shreddable$:
\begin{definition}[Shreddable sequence] \label{def:shreddable} $\shreddable(S_1, S_2) \deq \forall \ell_1,\ell_2. (\ell_2 < \ell_1) \implies \readset_{\Gamma \vdash \ell_1}(S_1) \cap \assset_{\Gamma \vdash \ell_2}(S_2) = \emptyset$
\end{definition}
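For example, suppose \lstinline|x| has level \lev{data} and \lstinline|m| has level \lev{model} (a hypothetical snippet):
\begin{lstlisting}
x = 1;
x = 2;            // allowed: x has so far only been read at level data, if at all
m ~ normal(x, 1); // x is now read at level model
// a further assignment to x here would make the sequence not shreddable
\end{lstlisting}
A \lev{data}-level assignment to \lstinline|x| after the \lev{model}-level read would fall into $\readset_{\Gamma \vdash \lev{model}}(S_1) \cap \assset_{\Gamma \vdash \lev{data}}(S_2)$, violating Definition~\ref{def:shreddable}.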
\begin{lemma}[Commutativity of sequencing single-level statements] \label{lem:commutativity} ~
If $~\singlelevelS{\ell_1}{S_1}$, $\singlelevelS{\ell_2}{S_2}$, $\Gamma \vdash S_2;S_1 : \lev{data}$ and $\ell_1 < \ell_2$ then $S_2; S_1 \eveq S_1; S_2$.
\end{lemma}
\begin{theorem}[Semantic Preservation of $\shred$] \label{th:shred} ~
If $~\Gamma \vdash S:\lev{data} $ and $ S \shred[\Gamma] \shredded $ then $ \log p^*_{\Gamma \vdash S}(s) = \log p^*_{\Gamma \vdash (S_D; S_M; S_Q)}(s)$, for all $s \models \Gamma$.
\end{theorem}
\begin{proof}
Note that if $S \eveq S'$ then $\log p^*_{\Gamma \vdash S}(s) = \log p^*_{\Gamma \vdash S'}(s)$ for all states $s \models \Gamma$.
Semantic preservation then follows from proving the stronger result $\Gamma \vdash S:\lev{data} \wedge S \shred[\Gamma] \shredded \implies S \eveq (S_D; S_M; S_Q)$ by structural induction on the structure of $S$.
We give the full proof, together with proofs of Lemmas~\ref{lem:shredisleveled}, \ref{lem:reorder}, \ref{lem:halfway} and \ref{lem:same_sets_when_singlelevel}, in Appendix~\ref{ap:proofshred}.
\end{proof}
\subsection{Transformation} \label{ssec:trans}
The last step of translating SlicStan to Stan is \textit{transformation}. We formalise how a shredded SlicStan program $\elpair{\Gamma}{\shredded}$ transforms to a Stan program $P$, through the transformation relations:
\vspace{4pt}
\begin{display}[.2]{Transformation Relations}
\clause{\Gamma \ctxtostan \stan}{variable declarations transformation} \\
\clause{S \dtostan \stan}{\lev{data} statement transformation} \\
\clause{S \mtostan \stan}{\lev{model} statement transformation} \\
\clause{S \qtostan \stan}{\lev{genquant} statement transformation} \\
\clause{\elpair{\Gamma}{S} \tostan \stan}{top-level transformation}
\end{display}
Intuitively, a shredded program $\elpair{\Gamma}{\shredded}$ transforms to Stan in four steps:
\begin{enumerate}
\item The declarations $\Gamma$ are split into blocks, depending on the level of variables and whether or not they have been assigned to inside of $S_D$, $S_M$ or $S_Q$.
\item The \lev{data}-levelled statement $S_D$ becomes the body of the \lstinline|transformed data| block.
\item The \lev{model}-levelled statement $S_M$ is split into the \lstinline|transformed parameters| and \lstinline|model| blocks, depending on whether or not substatements assign to the \lstinline|target| variable.
\item The \lev{genquant}-levelled statement $S_Q$ becomes the body of the \lstinline|generated quantities| block.
\end{enumerate}
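For example, step (3) would split a hypothetical \lev{model}-levelled slice as follows (a sketch of the intended block placement):
\begin{lstlisting}
// S_M:
sigma = pow(tau, -0.5);  // does not assign to target
                         // -> transformed parameters block
target = target + normal_lpdf(y | 0, sigma);  // assigns to target
                                              // -> model block
\end{lstlisting}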
This is formalised by the \ref{Trans Prog} rule below. The Stan program $P_1;P_2$ is the Stan programs $P_1$ and $P_2$ merged by composing together the statements in each program block (Definition~\ref{def:stanmerge}).
\vspace{12pt}
\begin{display}{Top-level Transformation Rule}
\squad\staterule{Trans Prog}
{ S \shred[\Gamma] \shredded \quad \Gamma \ctxtostan[(S_D;S_M;S_Q)] \stan \quad S_D \dtostan \stan_D \quad S_M \mtostan \stan_M \quad S_Q \qtostan \stan_Q}
{ \elpair{\Gamma}{S} \tostan \stan ; \stan_D; \stan_M; \stan_Q}
\end{display}
\begin{display}{Transformation Rules for Declarations:}
\squad
\staterule{Trans Data}
{ \Gamma \ctxtostan P \quad \quad x \notin \assset(S)}
{ \Gamma, x:(\tau, \lev{data}) \ctxtostan \databl{x:\tau} ; P} \qquad
\staterule{Trans TrData}
{ \Gamma \ctxtostan P \quad \quad x \in \assset(S)}
{ \Gamma, x:(\tau, \lev{data}) \ctxtostan \trdatabl{x:\tau} ; P}
\\[\GAP]\squad
\staterule{Trans Param}
{ \Gamma \ctxtostan P \quad \quad x \notin \assset(S)}
{ \Gamma, x:(\tau, \lev{model}) \ctxtostan \paramsbl{x:\tau} ; P} \quad
\staterule{Trans TrParam}
{ \Gamma \ctxtostan P \quad \quad x \in \assset(S)}
{ \Gamma, x:(\tau, \lev{model}) \ctxtostan \trparamsbl{x:\tau} ; P}
\\[\GAP]\squad
\staterule{Trans GenQuant}
{ \Gamma \ctxtostan P}
{ \Gamma, x:(\tau, \lev{genquant}) \ctxtostan \genquantbl{x:\tau} ; P} \qquad
\staterule{Trans Empty}{}{\emptyset \ctxtostan \emptyprog}
\end{display}
\vspace{-8pt}\begin{multicols}{2}
\begin{display}{Transformation Rule for Data Statements:}
\squad
\staterule{Trans Data}
{ }
{ S_D \dtostan \trdatabl{S_D} }\qquad
\\[-8.5pt]
\end{display}
\begin{display}{Transformation Rule for GenQuant Statements:}
\squad
\staterule{Trans GenQuant}
{ }
{ S_Q \qtostan \genquantbl{S_Q} }\qquad
\end{display}
\end{multicols}\vspace{-8pt}
The rules \ref{Trans ParamIf}, \ref{Trans ModelIf}, \ref{Trans ParamFor}, and \ref{Trans ModelFor} might produce a Stan program that does not compile in the current version of Stan. This is because Stan restricts the \lstinline|transformed parameters| block to only assign to transformed parameters, and the \lstinline|model| block to only assign to the \lstinline|target| variable. However, a \lstinline|for| loop, for example, can assign to both kinds of variables in its body:
\begin{lstlisting}
for(i in 1:N){
sigma[i] = pow(tau[i], -0.5);
y[i] ~ normal(0, sigma[i]); }
\end{lstlisting}
To the best of our knowledge, this limitation is an implementation detail of the current version of the Stan compiler, and does not affect the semantics of the language.\footnotemark{} Therefore, we assume Core Stan to be a slightly more expressive version of Stan that allows transformed parameters to be assigned in the \lstinline|model| block.
\footnotetext{Moreover, there is an ongoing discussion amongst Stan developers to merge the parameters, transformed parameters and model blocks in future versions of Stan \url{http://andrewgelman.com/2018/02/01/stan-feature-declare-distribute/}.}
\begin{display}{Transformation Rules for Model Statements:}
\squad
\staterule{Trans ParamAssign}
{ L \neq \kw{target}}
{ L = E \mtostan \trparamsbl{L = E} }\quad
\staterule{Trans Model}
{ }
{ \kw{target} = E \mtostan \modelbl{\kw{target} = E} }\quad
\staterule{Trans ParamSeq}
{ S_1 \mtostan \stan_1 \quad S_2 \mtostan \stan_2}
{ S_1;S_2 \mtostan \stan_1 ; \stan_2} \quad
\\[\GAP]\squad
\staterule{Trans ParamIf}
{ \kw{target} \notin \assset(S_1)\cup\assset(S_2) }
{ \kw{if}(E)\; S_1\; \kw{else}\; S_2 \mtostan \trparamsbl{\kw{if}(E)\; S_1\; \kw{else}\; S_2}}\quad
\staterule{Trans ModelIf}
{ \kw{target} \in \assset(S_1)\cup\assset(S_2) }
{ \kw{if}(E)\; S_1\; \kw{else}\; S_2 \mtostan \modelbl{\kw{if}(E)\; S_1\; \kw{else}\; S_2}}\quad
\\[\GAP]\squad
\staterule{Trans ParamFor}
{ \kw{target} \notin \assset(S) }
{ \kw{for}(x\;\kw{in}\;E_1:E_2)\;S \mtostan \trparamsbl{\kw{for}(x\;\kw{in}\;E_1:E_2)\;S}}\qquad
\staterule{Trans ParamSkip}
{}
{\kw{skip} \mtostan \emptyprog}\qquad
\\[\GAP]\squad
\staterule{Trans ModelFor}
{ \kw{target} \in \assset(S) }
{ \kw{for}(x\;\kw{in}\;E_1:E_2)\;S \mtostan \modelbl{\kw{for}(x\;\kw{in}\;E_1:E_2)\;S}}\qquad
\end{display}
\begin{theorem}[Semantic Preservation of $\tostan$]\label{th:trans}
Consider a well-formed SlicStan program $F_1, \dots, F_n, S$, such that $S \elab[\emptyset] \elpair{\Gamma'}{S'}$.
Consider also a Core Stan program $P$, such that $\elpair{\Gamma'}{S'} \tostan P$.
Then for any $\data \models \{ (x: (\tau, \lev{data})) \in \Gamma' \mid x \notin \assset(S') \}$ and $\params \models \{ (x: (\tau, \lev{model})) \in \Gamma' \mid x \notin \assset(S')\}$:
$$\log p^*_{F_1, \dots, F_n, S}(\params \mid \data) = \log p^*_{P}(\params \mid \data)$$
\end{theorem}
\begin{proof}
By rule induction on the derivation of $\elpair{\Gamma'}{S'} \tostan \stan$, and equation \ref{eq:slicposterior} from \autoref{ssec:sem_prog_density}.
\end{proof}
\section{Examples and Discussion} \label{sec:demo}
In this section, we demonstrate and discuss the functionality of SlicStan. We compare several Stan code examples, from Stan's Reference Manual \cite{StanManual} and Stan's GitHub repositories \cite{StanGitHub}, with their equivalent written in SlicStan, and analyse the differences.
All examples presented in this section have been tested using a preliminary implementation of SlicStan, developed by \citet{SlicStanPPS, SlicStanStanCon}, although in this paper we use \lstinline|for| loops where that work uses vectorised notation.
First, we assume a type inference strategy for level types, which allows us to remove the explicit specification of levels from the language (\autoref{ssec:typeinference}).
Next, we show that SlicStan allows the user to better follow the principle of locality --- related concepts can be kept closer together (\autoref{ssec:expert}). We then demonstrate the advantages of the more compositional syntax when code refactoring is needed (\autoref{ssec:refactor}). The last comparison point shows the usage of more flexible user-defined functions, and points out a few limitations of SlicStan (\autoref{ssec:encaps}). More examples and a further discussion of the usability of the languages are presented in Appendix~\ref{ap:examples}.
\subsection{Type Inference} \label{ssec:typeinference}
Going back to \autoref{ssec:stansyntax} and \autoref{tab:blocks}, we identify that different Stan blocks are executed different numbers of times, which gives us another ordering on the level types: a performance ordering.
Code associated with variables of level \lev{data} is executed only once, as a \textit{preprocessing} step before inference. Code associated with variables of level \lev{genquant} is executed once per sample, right after inference has completed, as these quantities can be \emph{generated} from the already obtained samples of the model parameters (in other words, this is a \textit{postprocessing} step). Finally, code associated with \lev{model} variables is needed at each step of the inference algorithm itself. In the case of HMC, this means such code is executed once per leapfrog step (many times per sample).
Thus, there is a \emph{performance ordering} of level types, $\lev{data} \leq \lev{genquant} \leq \lev{model}$: it is cheaper for a variable to be \lev{data} than \lev{genquant}, and cheaper for it to be \lev{genquant} than \lev{model}.
We can implement type inference following the rules from \autoref{ssec:slictyping}, to infer the level type of each variable in a SlicStan program, so that:
\begin{itemize}
\item the hard constraint on the information flow direction $\lev{data} < \lev{model} < \lev{genquant}$ is enforced;
\item the choice of levels is optimised with respect to the performance ordering $\lev{data} \leq \lev{genquant} \leq \lev{model}$.
\end{itemize}
We have implemented type inference for a preliminary version of SlicStan. In the rest of this section, we assume that no level type annotations are necessary in SlicStan, except for what the data of the probabilistic model is (specified using the derived form \lstinline|data|$~\tau~x;~S$), and that the optimal level type of each variable is inferred as part of the translation process.
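As a hypothetical illustration of the inference, consider the snippet below, where only \lstinline|y| is marked as data:
\begin{lstlisting}
data real y;
real mu ~ normal(0, 1); // contributes to the density: inferred model
real mu2 = 2 * mu;      // never read by the density: model or genquant
y ~ normal(mu, 1);
\end{lstlisting}
The information flow constraint forces \lstinline|mu| to be \lev{model}, while \lstinline|mu2| could soundly be either \lev{model} or \lev{genquant}; the performance ordering selects \lev{genquant}, so \lstinline|mu2| is computed once per sample rather than once per leapfrog step.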
\subsection{Locality} \label{ssec:expert}
With the first example, we demonstrate that SlicStan's blockless syntax makes it easier to follow good software development practices, such as declaring variables close to where they are used, and to write out models that follow a \textit{generative story}. It abstracts away some of the specifics of the underlying inference algorithm, and thus writing optimised programs requires less mental effort.
Consider an example adapted from \cite[p.~101]{StanManual}. We are interested in inferring the mean $\mu_y$ and variance $\sigma_y^2$ of the independent and identically distributed variables $\mathbf{y} \sim \normal(\mu_y, \sigma_y)$. The model parameters are $\mu_y$ (the mean of $\mathbf{y}$), and $\tau_y = 1 / \sigma_y^2$ (the precision of $\mathbf{y}$).
Below, we show this example written in SlicStan (left) and Stan (right).
\begin{minipage}[t][11.7cm][t]{\linewidth}
\vspace{2pt}
\begin{multicols}{2} \label{shred2}
\centering
\textbf{SlicStan}
\vspace{-1pt}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
real alpha = 0.1;
real beta = 0.1;
real tau_y ~ gamma(alpha, beta);
data real mu_mu;
data real sigma_mu;
real mu_y ~ normal(mu_mu, sigma_mu);
real sigma_y = pow(tau_y, -0.5);
real variance_y = pow(sigma_y, 2);
data int N;
data real[N] y;
for(i in 1:N){ y[i] ~ normal(mu_y, sigma_y); }
\end{lstlisting}
\vspace{4.5cm}
\textbf{Stan}
\vspace{-4pt}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
data {
real mu_mu;
real sigma_mu;
int N;
real y[N];
}
transformed data {
real alpha = 0.1;
real beta = 0.1;
}
parameters {
real mu_y;
real tau_y;
}
transformed parameters {
real sigma_y = pow(tau_y,-0.5);
}
model {
tau_y ~ gamma(alpha, beta);
mu_y ~ normal(mu_mu, sigma_mu);
for(i in 1:N){ y[i] ~ normal(mu_y, sigma_y); }
}
generated quantities {
real variance_y = pow(sigma_y,2);
}
\end{lstlisting}
\end{multicols}
\end{minipage}
The lack of blocks in SlicStan makes it more flexible in terms of the order of statements. The code here is written to follow the \textit{generative story} more closely than the Stan version: we first define the prior distribution over the parameters, and then specify how we believe the data was generated from them. We also keep declarations of variables close to where they are used: for example, \lstinline{sigma_y} is defined right before it is used in the definition of \lstinline|variance_y|. This model can be expressed in SlicStan using any order of the statements, provided that variables are not used before they are declared. In Stan this is not always possible, and closely related statements may end up located far away from each other.
With SlicStan there is no need to understand when different statements are executed in order to perform inference. The SlicStan code is translated to the hand-optimised Stan code, as specified by the manual, without any annotations from the user, apart from what the input data to the model is. In Stan, however, an inexperienced Stan programmer might have attempted to define the \lstinline|transformed data| variables \lstinline|alpha| and \lstinline|beta| in the \lstinline|data| block, which would result in a syntactic error. Even more subtly, they could have defined \lstinline|alpha|, \lstinline|beta| and \lstinline|variance_y| all in the \lstinline|transformed parameters| block, in which case the program will compile to a less efficient, semantically equivalent model.
\subsection{Code Refactoring} \label{ssec:refactor}
The next example is adapted from \cite[p.~202]{StanManual}, and shows how the absence of program blocks can lead to easier to refactor code. We start from a simple model, standard linear regression, and show what changes need to be made in both SlicStan and Stan, in order to change the model to account for measurement error.
The initial model is a simple Bayesian linear regression with $N$ predictor points $\mathbf{x}$, and $N$ outcomes $\mathbf{y}$. It has 3 parameters --- the intercept $\alpha$, the slope $\beta$, and the amount of noise $\sigma$. In other words,
$\mathbf{y} \sim \normal(\alpha \mathbf{1} + \beta \mathbf{x}, \sigma I)$.
If we want to account for measurement noise, we need to introduce another vector of variables $\mathbf{x}_{meas}$, which represents the \emph{measured} predictors (as opposed to the true predictors $\mathbf{x}$). We postulate that the values of $\mathbf{x}_{meas}$ are noisy (with standard deviation $\tau$) versions of $\mathbf{x}$:
$\mathbf{x}_{meas} \sim \normal(\mathbf{x}, \tau I)$.
The next page shows these two models written in SlicStan (left) and Stan (right). Ignoring all the lines/corrections in red gives us the initial regression model, the one \emph{not} accounting for measurement errors. The entire code, including the red corrections, gives us the second regression model, the one that \emph{does} account for measurement errors. Transitioning from model one to model two requires the following corrections:
\begin{itemize}
\item \textbf{In SlicStan:}
\begin{itemize}
\item Delete the \lstinline|data| keyword for $\mathbf{x}$ (line 2).
\item Introduce \emph{anywhere} in the program statements declaring the measurements $\mathbf{x}_{meas}$, their deviation $\tau$, the now parameter $\mathbf{x}$, and its hyperparameters $\mu_x, \sigma_x$ (lines 11--17).
\end{itemize}
\item \textbf{In Stan:}
\begin{itemize}
\item Move $\mathbf{x}$'s declaration from \lstinline|data| to \lstinline|parameters| (line 5 and line 9).
\item Declare $\mathbf{x}_{meas}$ and $\tau$ in \lstinline|data| (lines 3--4).
\item Declare $\mathbf{x}$'s hyperparameters $\mu_x$ and $\sigma_x$ in \lstinline|parameters| (lines 10--11).
\item Add statements modelling $\mathbf{x}$ and $\mathbf{x}_{meas}$ in \lstinline|model| (lines 18--19).
\end{itemize}
\end{itemize}
Performing the code refactoring requires the same amount of code in SlicStan and Stan. However, in SlicStan the changes interfere much less with the code already written. We can add statements extending the model anywhere (as long as variables are declared before they are used). In Stan, on the other hand, we need to modify each block separately. This example demonstrates a successful step towards our aim of making Stan more compositional --- composing programs is easier in SlicStan.
\begin{minipage}[t][10.7cm][t]{\linewidth}
\vspace{-8pt}
\begin{multicols}{2} \label{refactor}
\centering
\textbf{Regression in SlicStan}
\vspace{1cm}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
data int N;
$\hbox{\stkw{data}}$ real[N] x;
real alpha ~ normal(0, 10);
real beta ~ normal(0, 10);
real sigma ~ cauchy(0, 5);
data real[N] y;
for(i in 1:N){
y[i] ~ normal(alpha + beta*x[i], sigma);
}
\end{lstlisting}
\vspace{4cm}
\textbf{Regression in Stan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
data {
int N;
$\hbox{\stkw{real}}$$\hbox{\stlst{[N] x;}}$
real[N] y;
}
parameters {
real alpha;
real beta;
real sigma;
}
model {
alpha ~ normal(0, 10);
beta ~ normal(0, 10);
sigma ~ cauchy(0, 5);
for(i in 1:N){
y[i] ~ normal(alpha + beta*x[i], sigma); }
}
\end{lstlisting}
\end{multicols}
\end{minipage}
\vspace{-8pt}
\subsection{Code Reuse} \label{ssec:encaps}
Finally, we demonstrate the usage of more flexible functions in SlicStan, which allow for better code reuse, and therefore can lead to shorter, more readable code.
In the introduction of this paper, we presented a transformation that is commonly used when specifying hierarchical models --- the \textit{non-centred parametrisation} of a normal variable.
In brief, an MCMC sampler may have difficulty exploring a posterior density well if there are strong non-linear dependencies between variables. In such cases, we can \emph{reparameterise} the model: we express it in terms of different parameters, from which the original parameters can be recovered. In the case of a normal variable $x \sim \normal(\mu, \sigma)$, we define it as $x = \mu + \sigma x'$, where $x' \sim \normal(0,1)$. We explain the usage of the non-centred parametrisation in more detail in Appendix~\ref{ap:ncp}.
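This reparameterisation is justified by the affine transformation property of the Gaussian (here $\sigma$ denotes a standard deviation, as in Stan's parameterisation):
$$x' \sim \normal(0, 1) \quad\Longrightarrow\quad \mu + \sigma x' \sim \normal(\mu, \sigma),$$
so sampling $x'$ and computing $x = \mu + \sigma x'$ yields the same distribution for $x$, while the sampler explores the better-conditioned standard normal variable $x'$.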
In this section, we show the ``Eight Schools'' example \cite[p.~119]{Gelman2013}, which also uses non-centred parametrisation in order to improve performance.
Eight schools study the effects of their SAT-V coaching program. The input data is the estimated effects $\mathbf{y}$ of the program for each of the eight schools, and their shared standard deviation $\boldsymbol{\sigma}$. The task is to specify a model that accounts for errors, by considering the observed effects to be noisy estimates of the \emph{true effects} $\boldsymbol{\theta}$. Assuming a Gaussian model for the effects and the noise, we have $\mathbf{y} \sim \normal(\boldsymbol{\theta}, \boldsymbol{\sigma} I)$ and $\boldsymbol{\theta} \sim \normal(\mu\mathbf{1}, \tau I)$.
Below is this model written in SlicStan (left) and Stan (right, adapted from Stan's GitHub repository \cite{StanGitHub}). In both cases, we use non-centred reparameterisation to improve performance: in Stan, the coaching effect for the \lstinline|i|\textsuperscript{th} school, \lstinline|theta[i]|, is declared as a transformed parameter obtained from the standard normal variable \lstinline|theta_std[i]|; in SlicStan, we can once again make use of the non-centred reparameterisation function \lstinline|my_normal|.
\begin{minipage}[t][8.7cm][t]{\linewidth}
\vspace{1pt}
\begin{multicols}{2} \label{schools}
\centering
\textbf{``Eight Schools'' in SlicStan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
real my_normal(real m, real v){
real std ~ normal(0, 1);
return v * std + m;
}
data real[8] y;
data real[8] sigma;
real[8] theta;
real mu;
real tau;
for (i in 1:8){
theta[i] = my_normal(mu, tau);
y[i] ~ normal(theta[i], sigma[i]);
}
\end{lstlisting}
\vspace{3cm}
\textbf{``Eight Schools'' in Stan}
\begin{lstlisting}[numbers=left,numbersep=\numbdist,numberstyle=\tiny\color{\numbcolor},basicstyle=\small]
data {
real y[8];
real sigma[8];
}
parameters {
real mu;
real tau;
real theta_std[8];
}
transformed parameters {
real theta[8];
for (j in 1:8){theta[j] = mu + tau * theta_std[j];}
}
model {
for (j in 1:8){
y[j] ~ normal(theta[j], sigma[j]);$\footnotemark$
theta_std[j] ~ normal(0, 1);
}
}
\end{lstlisting}
\end{multicols}
\end{minipage}
\footnotetext{In the full version of Stan these statements can be ``vectorised'' for efficiency, e.g. \lstinline|y ~ normal(theta,sigma);|}
\vspace{6pt}
One advantage of the original Stan code compared to SlicStan is the flexibility the user has to name all model parameters. In Stan, the auxiliary standard normal variables \lstinline|theta_std| are named by the user, while in SlicStan, the names of parameters defined inside of a function are automatically generated, and might not correspond to the names of transformed parameters of interest.
All parameter names are important, as they are part of the output of the sampling algorithm, which is shown to the user. Even though in this case the auxiliary parameters were introduced solely for performance reasons, inspecting their values in Stan's output can be useful for debugging purposes.
\section{Related Work}
\label{sec:related}
There exists a range of probabilistic programming languages and systems. Stan's syntax is inspired by that of BUGS \cite{BUGS}, which uses Gibbs sampling to perform inference. Other languages include Anglican \cite{Anglican}, Church \cite{Church} and Venture \cite{Venture}, which focus on expressiveness of the language and range of supported models. They provide clean syntax and formalised semantics, but use less efficient, more general-purpose inference algorithms.
The Infer.NET framework \cite{InferNET} uses an efficient inference algorithm called expectation propagation, but supports a limited range of models. Turing \cite{Turing} allows different inference techniques to be used for different sub-parts of the model, but requires the user to explicitly specify which inference algorithms to use, as well as their hyperparameters.
More recently, \textit{deep probabilistic programming} has emerged, in the form of Edward \cite{Edward, Edward2} and Pyro \cite{Pyro}, which focus on using deep learning techniques for probabilistic programming. Edward and Pyro are built on top of the deep learning libraries TensorFlow \cite{Tensorflow} and PyTorch \cite{PyTorch} respectively, and support a range of efficient inference algorithms. However, they lack the conciseness and formalism of some of the other systems, and in many cases require a sophisticated understanding of inference.
Other languages and systems include Hakaru \cite{Hakaru}, Figaro \cite{Figaro}, Fun \cite{Fun}, Greta \cite{Greta} and many others.
The rest of this section addresses related work done mostly within the programming languages community, which focuses on the semantics (\autoref{ssec:relatedsem}), static analysis (\autoref{ssec:relatedstatic}), and usability (\autoref{ssec:relatedusab}) of probabilistic programming languages. A more extensive overview of the connection between probabilistic programming and programming language research is given by \citet{GordonPP}.
\subsection{Formalisation of Probabilistic Programming Languages} \label{ssec:relatedsem}
There has been extensive work on formalising the syntax and semantics of probabilistic programming languages. A widely accepted denotational semantics formalisation is that of \citet{Kozen81}. Other work includes a domain-theoretic semantics \cite{Plotkin},
measure-theoretic semantics \cite{Fun, ScibiorPPMonads, Toronto2015}, operational semantics \cite{Dal2012, MarcinICFP2016, Staton2016}, and more recently, categorical formalisation for higher-order probabilistic programs \cite{HeunenStaton}. Most previous work specifies either a measure-theoretic denotational semantics, or a sampling-based operational semantics. Some work \cite{Staton2016, Huang2016, HurNRS15} gives both denotational and operational semantics, and shows a correspondence between the two.
The density-based semantics we specify for Stan and SlicStan is inspired by the work of \citet{HurNRS15}, who give an operational sampling-based semantics to the imperative language \textsc{Prob}. Intuitively, the difference between the two styles of operational semantics is:
\begin{itemize}
\item Operational \textit{density-based} semantics specifies how a program $S$ is executed to evaluate the (unnormalised) posterior density $p^*(\params \mid \data)$ at some specific point $\params$ of the parameter space.
\item Operational \textit{sampling-based} semantics specifies how a program $S$ is executed to evaluate the (unnormalised) probability $p^*(\mathbf{t})$ of the program generating some specific trace of samples $\mathbf{t}$.
\end{itemize}
Refer to Appendix~\ref{ap:sampling_semantics} for examples and further discussion of the differences between density-based and sampling-based semantics.
\subsection{Static Analysis for Probabilistic Programming Languages} \label{ssec:relatedstatic}
Work on static analysis for probabilistic programs includes several papers that focus on improving efficiency of inference. R2 \cite{R2} applies a semantics-preserving transformation to the probabilistic program, and then uses a modified version of the Metropolis--Hastings algorithm that exploits the structure of the model. This results in more efficient sampling, which can be further improved by \textit{slicing} the program to only contain parts relevant to estimating a target probability distribution \cite{Slicing}.
\citet{DataFlow2013} present a new inference algorithm that is based on data-flow analysis.
Hakaru \cite{Hakaru} is a relatively new probabilistic programming language embedded in Haskell, which performs automatic and semantic-preserving transformations on the program, in order to calculate conditional distributions and perform exact inference by computer algebra.
The PSI system \cite{PSI2016} analyses probabilistic programs using a symbolic domain, and outputs a simplified expression representing the posterior distribution.
The Julia-embedded language Gen \cite{Gen} uses type inference to automatically generate inference tactics for different sub-parts of the model. Similarly to Turing, the user then combines the generated tactics to build a model-specific inference algorithm.
With the exception of the work on slicing \cite{Slicing}, which is shown to work with Church and Infer.NET, each of the above systems either uses its own probabilistic language or the method is applicable only to a restricted type of models (for example boolean probabilistic programs). SlicStan is different in that it uses information flow analysis and type inference in order to self-optimise to Stan --- a scalable probabilistic programming language with a large user-base.
\subsection{Usability of Probabilistic Programming Languages} \label{ssec:relatedusab} \label{ssec:usab}
This paper also relates to the line of work on usability of probabilistic programming languages.
\citet{Tabular} implement a schema-driven language, Tabular, which allows probabilistic programs to be written as annotated relational schemas. Fabular \cite{Fabular} extends this idea by incorporating syntax for hierarchical linear regression inspired by the lme4 package \cite{lmer}. BayesDB \cite{BayesDB} introduces BQL (Bayesian Query Language), which can be used to answer statistical questions about data, through SQL-like queries.
Other work includes visualisation of probabilistic programs, in the form of graphical models \cite{BUGS, Vibes, GorinovaIDE}, and
more data-driven approaches, such as synthesising programs from relational datasets \cite{PPSynth2015, PPSynth2017}.
\section{Conclusion} \label{sec:conc}
Probabilistic inference is a challenging task. As a consequence, existing probabilistic languages are forced to trade off efficiency of inference for range of supported models and usability. For example, Stan, an increasingly popular probabilistic programming language, makes efficient scalable automatic inference possible, but sacrifices compositionality of the language.
This paper formalises the syntax of a core subset of Stan and gives its operational \textit{density-based semantics}; it introduces a new, compositional probabilistic programming language, SlicStan; and it gives a semantic-preserving procedure for translating SlicStan to Stan.
SlicStan adopts an \emph{information-flow type system} that captures the taxonomy classes of the variables of the probabilistic model. The classes can be inferred to
automatically optimise the program for probabilistic inference.
To the best of our knowledge, this work is the first formal treatment of the Stan language.
We show that the use of static analysis and formal language treatment can facilitate efficient black-box probabilistic inference, and improve usability.
Looking forward, it would be interesting to formalise the usage of pseudo-random number generators inside Stan. Variables in the \lstinline|generated quantities| block can be generated using pseudo-random number generators. In other words, the user can explicitly compose Hamiltonian Monte Carlo with forward (ancestral) sampling to improve inference performance. SlicStan can be extended to automatically determine the most efficient way to sample each variable,
which could significantly improve usability.
Another interesting future direction would be to adapt the sampling-based semantics of \citet{HurNRS15} to SlicStan and establish how the density-based semantics of this paper corresponds to it.
\newpage
\begin{acks}
We thank Bob Carpenter and the Stan team for insightful discussions, and George Papamakarios for useful comments.
Maria Gorinova was supported by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.
\end{acks}
\section{Introduction}
\label{sec-intro}
Over the last few decades fundamental physics has been dominated by fine-tuning problems associated with the small scales of the cosmological constant, $\Lambda$, and the weak interactions, $v$. Small scales can arise from different origins: naturally from symmetries, or by fine-tuning through environmental selection in a multiverse that scans the parameters that determine the scale. Which mechanisms are at work for $\Lambda$ and $v$?
A key difference is that no symmetries are known that could account for $\Lambda$, but supersymmetry or other new physics could stabilize the weak scale $v$. The leading goal of the Large Hadron Collider (LHC) has been to search for a symmetry origin of $v$. Yet the discovery of a weakly coupled Higgs boson, without any signal of physics beyond the Standard Model so far, points to the absence of such a symmetry. If this situation persists in coming runs at the LHC, it will become more plausible that not only $\Lambda$ but also $v$ is anthropically selected in a multiverse.
In addition to these fine-tuning problems, fundamental physics must grapple with a variety of coincidence problems. Two such problems are paramount in understanding the contents of the universe. Matter and vacuum energy densities evolve differently, and yet they are comparable today: Why Now? Moreover, while dark matter and baryon energy densities, $\rho_D$ and $\rho_B$, evolve according to the same power law, they have different origins and depend on different combinations of low-energy parameters. The two densities could easily differ by dozens of orders of magnitude, so it is remarkable that they are numerically so close~\cite{Planck13}:
\begin{equation}
\zeta\equiv \frac{\rho_D}{\rho_B} = \frac{n_D m_D}{n_B m_B} \approx 5.5~.
\label{eq:XoverB}
\end{equation}
Why Comparable?
New symmetries~\cite{Nussinov:1985xr, Gelmini:1986zz, Barr:1990ca, Barr:1991qn,Hooper:2004dc, Kaplan:2009ag} may explain why the number densities are comparable, $n_D/n_B \approx 1$. But the observed coincidence is in the mass density, so further work is required to link the dark matter and baryon masses, $m_D$ and $m_B$. Meanwhile, cosmological observations and early LHC data increasingly favor multiverse solutions of the hierarchy problem, the cosmological constant problem, and the Why Now coincidence. Here we will argue that the multiverse can explain the Why Comparable coincidence, independently of the mass and nature of the dark matter particle.
The probability distribution in the multiverse for observing a dark matter to baryon energy density $\zeta$ can be written as
\begin{equation}
dP = f(\zeta) \, \frac{d\zeta}{\zeta}\, \alpha(\zeta)\, M_b(\zeta)
\label{eq:genarg}
\end{equation}
where $f$ captures the distribution of the parameter $\zeta$ among the different metastable vacua in the landscape, $M_b(\zeta)$ is the total baryonic mass in regions with dark matter to baryon ratio $\zeta$, and $\alpha(\zeta)$ is the number of observations per unit baryonic mass in these regions. Because of the self-reproducing gravitational dynamics of metastable de~Sitter vacua, the total baryonic mass in the multiverse (and indeed, the total amount of any type of object or event whose probability is nonzero) diverges for any $\zeta$ and must be regulated. This is the measure problem of eternal inflation. It arises whenever the theory contains at least one stable or metastable de~Sitter vacuum, such as (apparently) our own. Here we assume that the landscape has a very large number of vacua, at least enough to solve the cosmological constant problem~\cite{BP}.
The causal patch measure~\cite{Bou06,BouFre06a} is a theoretically well-motivated proposal that robustly solves the Why Now problem and predicts a value of $\Lambda>0$ in excellent agreement with observation~\cite{BouHar07}.\footnote{While entropic considerations were used in~\cite{BouHar07}, the resolution of the Why Now problem first identified there turned out to be independent of how observers are modeled. Only the causal patch measure plays an essential role.} These results follow from the geometry of the causal patch. They are insensitive to specific anthropic assumptions involving, say, the disruption of galaxy formation or other dynamical effects~\cite{BouHar10}. The causal patch weights by the number of observations within a single event horizon. If this region is mostly empty, as would be the case due to exponential dilution if the cosmological constant dominates before the era when observers live, then very little probability is assigned to the corresponding parameter range.
Also working with the causal patch measure, Freivogel~\cite{Fre08} discovered the important result
\begin{equation}
M_b(\zeta) \propto \frac{1}{1+\zeta} ~.
\label{eq:gCD}
\end{equation}
He applied this result to axion dark matter, where the vacuum misalignment angle varies with a flat distribution, and found the observed dark matter abundance to be fairly typical. Previously, again studying axion dark matter, Wilczek showed that by {\it assuming\/} a weighting of the form (\ref{eq:gCD}), the Why Comparable coincidence could be addressed~\cite{Wil04}.\footnote{The baryon fraction $f_b=(1+\zeta)^{-1}$ also appears in an early application~\cite{CliFre07} of the causal entropic principle~\cite{BouHar07} (and thus, in particular, of the causal patch measure). The example studied in Ref.~\cite{CliFre07} implicitly assumed a flat prior over $f_b$. This may be difficult to motivate, since $f_b\in [0,1]$ has finite range and does not depend simply on fundamental parameters. In terms of $\zeta$, this prior takes the form $dN/d\zeta\propto (1+\zeta)^{-2}$. This is tantamount to {\em assuming\/} comparability of baryonic and dark densities: the prior already favors $\zeta\sim O(1)$. The measure factor and the catastrophic (entropic) boundary implicit in Ref.~\cite{CliFre07} sharpen this assumed preference.}
Here we argue that the causal patch measure allows a general solution of the Why Comparable problem, independently of the particle nature of dark matter. The crucial point is that by Eq.~(\ref{eq:gCD}), the measure factor $M_b$ undergoes a sharp change in behavior in the vicinity of $\zeta = 1$. The prior distribution $f$ is expected to be smooth in this region; the absence of a special scale dictates that $f \propto \zeta^n$. This is the Why Comparable puzzle in the language of the landscape. But the suppression of $M_b$ for $\zeta> O(1)$ leads to a maximum in $f M_b$ near $\zeta = 1$, if $0<n<1$. Satisfyingly, the ``anthropic factor'' $\alpha(\zeta)$ is not needed in this argument and can be set to a constant. If the number of observers per baryon drops for large or small values of $\zeta$, as has been argued~\cite{Lin88,TegAgu05,HelWal05,Hall:2011jd}, this will only improve an already satisfactory prediction.
The causal patch measure thus provides a unified and robust understanding of both the Why Now and Why Comparable coincidences: baryons, and therefore observations, must avoid being diluted by excess vacuum or dark matter energy density. Because the causal patch measure is defined geometrically and hence determined by the gravity of matter, it directly explains the coincidence of energy densities and not of number densities. Conventional anthropic assumptions are not needed.
\paragraph{Outline} In Sec.~\ref{sec-whycomp} we propose a multiverse explanation of the Why Comparable coincidence in a completely general setting. We combine a prior distribution over $\zeta$ with no special features at $\zeta\sim O(1)$ with the baryonic mass factor obtained from the causal patch measure. We obtain probability distributions with a broad peak around $\zeta\sim O(1)$. The observed value is typical in these distributions. We consider an interesting class of competing measure proposals and find that they do not lead to the same conclusion; so the causal patch is favored by our result.
In Sec.~\ref{sec-freeze} we specialize to the case where dark matter is a long-lived particle with an abundance determined by thermal freeze-out in the early universe. We review the analysis linking its abundance to a mass scale near the weak scale. Our solution of the Why Comparable problem generates this mass scale in the low-energy theory of observed vacua, independently of the weak scale and of the possible existence of new symmetries at that scale. We thus relate the dark matter mass parametrically to the baryon abundance. It is only accidentally close to the weak scale, and generically slightly higher.
In Sec.~\ref{sec-little}, we specialize to an overlapping but different set of assumptions: that dark matter is the lightest supersymmetric particle (LSP). We assume that the overall scale of supersymmetry breaking, $\tilde{m}$, is the only relevant scanning parameter, and study how the probability distribution for this scale is affected by the statistical preference for comparability, $\zeta\sim O(1)$. We consider both freeze-out relics and gravitinos. In both cases, we find that $\tilde m$ may not be far above the TeV scale even though it is unrelated to the weak scale. Solving the Why Comparable coincidence yields a fundamental reason for a little hierarchy.
\section{Solving the Why Comparable Coincidence}
\label{sec-whycomp}
In this section, we address the Why Comparable coincidence at a very general level. We do not assume the existence of catastrophic dynamical boundaries; for example, we do not assume that galaxy formation is adversely impacted when $\zeta$ becomes greater than some critical value. Parameters other than $\zeta$ will not enter the analysis; hence they can either be regarded as fixed to their observed values, or marginalized over. (Here we choose the latter option; in subsequent sections we will choose the former.) In Sec.~\ref{sec-cpm}, we show that the total baryonic mass $M_b$ depends on $\zeta$ in a very simple way determined by the geometry of the causal patch measure. In Sec.~\ref{sec-peak}, we combine this with the prior distribution over $\zeta$ among landscape vacua and show that under weak assumptions, the multiverse probability distribution over $\zeta$ has support mainly for $\zeta\sim O(1)$. We find in Sec.~\ref{sec-other} that this is not the case for other interesting measures.
\subsection{Suppression from the Causal Patch Measure}
\label{sec-cpm}
Consider a class of observers in the multiverse that exist at a fixed\footnote{Of course, the observed value $t_{\rm obs}\sim 10^{61}$ should not be very unlikely, if $t_{\rm obs}$ is allowed to scan. Perhaps the number of vacua in the landscape determines this value~\cite{BouFre10d}. If $t_{\rm obs}$ is correlated with $\zeta$, then marginalizing over $t_{\rm obs}$ will modify the prior probability distribution over $\zeta$. Our argument that $\zeta\sim O(1)$ can still be made, if appropriate versions of our (weak) assumptions on the prior, $f$, and on the number of observers per baryon, $\alpha$, are made directly on the overall probability distribution $f(\zeta) \alpha(\zeta) M(t_\Lambda, t_{\rm c})$, where $f$ is the marginalized prior distribution, $\alpha$ is the marginalized expected number of observers per baryon, and $M(t_\Lambda, t_{\rm c})$ is the weighted average matter mass after integrating out $t_{\rm obs}$.} time $t_{\rm obs}$ in a flat or open FRW universe, as would naturally be produced from the decay of a parent vacuum in the landscape, followed by a period of slow-roll inflation. One could set $t_{\rm obs}=13.7$ Gyr but this is not important for our argument.
We will assume that the observers consist of baryonic matter. We make no further assumptions about them, such as the need for galaxies, or carbon, etc.; we will assume instead that the number of observations per baryon at the time $t_{\rm obs}$ is fixed. That is, the total number of observations $N(\zeta)$ of a particular value of $\zeta$ is proportional to the total baryonic mass, $M_b(\zeta)$, inside the causal patch at the time $t_{\rm obs}$:
\begin{equation}
N(\zeta)=\alpha M_b(\zeta)~.
\end{equation}
This will be sufficient to explain the Why Comparable coincidence, assuming only that the number of observers per baryon, $\alpha$, does not {\em increase\/} dramatically for large $\zeta$.
Previous multiverse analyses argued that observed values of $\zeta$ are limited by certain catastrophic dynamical boundaries, such as the failure to cool halos or to form stars~\cite{HelWal05,TegAgu05}. It would be interesting if such a catastrophic boundary were close to the observed value of $\zeta \simeq 5.5$, especially if it provided an upper bound on $\zeta$. However, there is no clear argument that such a nearby boundary exists. Requiring early formation of halos with sufficient baryons to make at least one star yields a distant boundary, $\zeta \leq 10^5 - 10^6$, and the requirement of disk fragmentation is highly uncertain, with a boundary plausibly in the range $\zeta \leq 10 - 10^4$, depending on the assumptions made. These arguments are thus currently insufficient for explaining the Why Comparable coincidence. Here we show that they are not necessary either. The inclusion of presumed catastrophic boundaries in our analysis would sharpen the probability distribution without changing the main result, and/or allow us to further relax our (already weak) assumptions about the prior distribution over $\zeta$ in the landscape.
The class of vacua we consider is extremely broad: we will allow not only $\zeta$, but all low-energy parameters, to vary away from their observed values. There are some technical conditions we impose, which do not significantly impact the generality of our approach:
\begin{itemize}
\item There exists a matter-dominated era.
\item The cosmological constant is not zero. (This is a technical assumption since the causal patch cutoff is not well-defined in vacua with $\Lambda=0$.\footnote{In fact, the causal patch and other leading cutoff proposals do not appear to be reliable in the causal neighborhood of big crunch singularities~\cite{BouLei09,BouFre10e}, so it may be appropriate to assume $\Lambda>0$. However, this is not necessary for our analysis.})
\item The time when observations are made, $t_{\rm obs}$, occurs no earlier than in the matter era. It may lie in later eras (e.g., curvature or vacuum dominated), but we do not consider observers that exist during radiation domination.
\item We compute the probability distribution over the value of $\zeta$ at the time $t_{\rm obs}$. Dark matter that decays before $t_{\rm obs}$, or dark matter that is produced after $t_{\rm obs}$, is not constrained by our arguments.
\end{itemize}
As a matter of principle, it is important to understand that the above conditions are not assumptions; our conclusions do not hinge on their universal validity. Perhaps there are observers that are not made from baryons, or which live in some other era. Here we compute conditional probabilities: what are typical observations made by observers in the class we have specified. Since this class includes ourselves, our approach could be falsified if we find that our observations are very atypical among such observers. Thus, our prediction that $\zeta\sim 1$ is a nontrivial success.
In fact, it would be legitimate to be far more restrictive, and to limit our attention to vacua that differ from ours only through the dark matter to baryon density ratio, $\zeta$. In the later sections of this paper, where we discuss concrete models of dark matter, we will indeed take this viewpoint. At the level of explaining the Why Comparable coincidence, however, considering a broader class of vacua does not complicate our task. It allows us to claim a more general result that applies to all vacua with baryons and a matter-dominated era. Any additional parameters can be scanned and integrated out. Our assumptions about the prior distribution for $\zeta$ refer to the marginalized distributions.
The key observation relevant to resolving the Why Comparable coincidence is extremely simple. The baryonic mass within the causal patch is given by
\begin{equation}
M_b=\frac{1}{1+\zeta} M(t_{\rm obs})~,
\label{eq-mbxil}
\end{equation}
where $M(t_{\rm obs})$ is the total matter mass in the causal patch at the time when observations are made. As reviewed in the Appendix, $M$ depends on $t_{\rm obs}$, on the cosmological constant, and on the time of curvature domination (if there is such an era). But in vacua satisfying the above rather weak conditions, {\em $M$ does not depend on $\zeta$}.
This is the central point of our argument. It holds because the size of the causal patch is determined by computing the past light-cone of a point on the future boundary of the spacetime. Varying $\zeta$ could affect the time of equal matter and radiation density, but we have assumed that observations are made after this time. Because the patch is constructed from the future back, its size at $t_{\rm obs}$ is thus independent of $\zeta$. In other words, $M$ depends only on parameters that are uncorrelated with $\zeta$. (For spatial curvature this could be considered a mild assumption.) Marginalizing over these parameters is thus trivial. $M$ does not depend on other particle physics parameters, such as the number of baryons per photon, which could introduce an implicit additional $\zeta$-dependence. Thus we find that the baryonic mass inside the patch depends on the dark matter to baryon ratio as
\begin{equation}
M_b\propto \frac{1}{1+\zeta}~.
\label{eq-mbxi}
\end{equation}
This equation is central to our solution of the Why Comparable problem. We will now show that for a wide range of prior probability distributions with no special features near $\zeta\sim 1$, Eq.~(\ref{eq-mbxi}) leads to the prediction that $\zeta\sim O(1)$, independently of detailed anthropic assumptions.
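For completeness, Eq.~(\ref{eq-mbxil}) is simply the statement that baryons constitute a fraction $1/(1+\zeta)$ of the total matter mass:
\begin{equation}
\frac{M_b}{M} = \frac{\rho_B}{\rho_B+\rho_D} = \frac{1}{1+\rho_D/\rho_B} = \frac{1}{1+\zeta}~.
\end{equation}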
\subsection{The Peak at $\zeta\sim 1$}
\label{sec-peak}
The prior distribution is the probability distribution over $\zeta$ in the ensemble of vacua produced by eternal inflation in the causal patch. Let
\begin{equation}
y=\log\zeta
\label{eq-mf1}
\end{equation}
and
\begin{equation}
f(y)=\frac{dN}{dy}~,
\label{eq-mf2}
\end{equation}
where $dN$ is the number of vacua for which $\log\zeta$ lies in the interval $(y,y+dy)$. The derivative
\begin{equation}
F(y)=\frac{d\log f}{dy}
\label{eq-mf3}
\end{equation}
is called the prior multiverse force on $y$.
We work with logarithmic variables because they make it simple to implement the requirement that no scale should be special in the prior distribution,\footnote{This is the weakest assumption one can start with. If a special scale were introduced into $f$ by hand, it would be easy to obtain $\zeta\sim 1$ but impossible to {\em explain} this coincidence.} by setting the prior multiverse force to a constant:
\begin{equation}
F=n~.
\end{equation}
Alternatively, one can think of this equation as a Taylor expansion of $\log f$ around some point of interest; in our case $y=0$ ($\zeta=1$). There may be corrections to the linear approximation, in violation of our earlier assumption that no scale is special in the prior distribution. A more minimal assumption is then that deviations from the linear approximation are not so drastic as to overcome the vast suppression of the probability distribution we obtain for the regime far from $y=0$.
We will now show that for nearly any value of $n$ in the interval $(0,1)$, the overall probability distribution over $\zeta$ is peaked for values of order unity, explaining the coincidence that baryons and dark matter have comparable density. We will also show that the distribution is quite broad, so that the observed value, $\zeta=5.5$, is typical, i.e., in agreement with the theory. There is neither a surprisingly large nor a surprisingly small amount of dark matter in our universe.
The probability distribution over $\zeta$ that is relevant to comparing theory with observation is not the prior, because values that are favored by the prior may not contain many observers. Rather, the probability distribution is proportional to the expected number of observations of the various possible values of $\zeta$ that are made inside the causal patch. This is given by the product of the prior---the probability that a particular value will actually appear as a vacuum region in the causal patch---with the number of observers inside the patch in such a vacuum, which is proportional to $M_b(\zeta)$. By Eq.~(\ref{eq-mbxi})
\begin{equation}
\frac{dp}{d\zeta}\propto\frac{\zeta^{n-1}}{1+\zeta}~,
\label{eq-zetadist}
\end{equation}
or in terms of the logarithmic variable:
\begin{equation}
\frac{dp}{dy}\propto\frac{e^{ny}}{1+e^y}~.
\end{equation}
This distribution is shown for three values of $n\in (0,1)$ in Fig.~\ref{fig-xiprobs}.
The distribution has a peak near $y=0$ ($\zeta\sim 1$). For large negative values of $y$, the multiverse force is positive, $F=n$, favoring more dark matter. But for large positive values it is negative, $F=n-1$, favoring baryons. This is easily understood: the prior distribution is rising for $y<0$, whereas the measure factor $(1+\zeta)^{-1}$ is nearly constant and so does not affect the prior much in this regime. But for positive values of $y$ ($\zeta\gg 1$), the measure factor becomes important, and overwhelms the prior. Because the probability density drops off rapidly away from $y=0$, the values of $\zeta$ that are most likely to be observed are of order unity. This explains the Why Comparable coincidence.
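As an illustrative numerical check (ours, not part of the original analysis), the tail probability $P(\zeta>5.5)$ implied by Eq.~(\ref{eq-zetadist}) can be computed directly. For the axion-motivated case $n=1/2$ discussed below, the distribution also normalizes in closed form, $P(\zeta>z)=1-\tfrac{2}{\pi}\arctan\sqrt{z}$, and the observed value sits comfortably in the bulk:

```python
import math

def p_unnorm(y, n):
    # Unnormalized density dp/dy ∝ exp(n*y)/(1 + exp(y)),
    # written in an overflow-safe form for large |y|.
    if y > 0:
        return math.exp((n - 1.0) * y) / (1.0 + math.exp(-y))
    return math.exp(n * y) / (1.0 + math.exp(y))

def integrate(f, a, b, steps=100000):
    # Simple trapezoid rule; ample accuracy for this smooth integrand.
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return total * h

def prob_zeta_above(z, n, ycut=60.0):
    # P(zeta > z) under dp/dy ∝ e^{ny}/(1 + e^y), with y = log(zeta).
    tail = integrate(lambda y: p_unnorm(y, n), math.log(z), ycut)
    total = integrate(lambda y: p_unnorm(y, n), -ycut, ycut)
    return tail / total

# Closed form for n = 1/2: P(zeta > z) = 1 - (2/pi) * arctan(sqrt(z))
exact = 1.0 - (2.0 / math.pi) * math.atan(math.sqrt(5.5))
print(prob_zeta_above(5.5, 0.5), exact)  # both about 0.26
```

The same routine, scanned over $n$, reproduces the qualitative behavior shown in Fig.~\ref{fig-nrange}.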
\begin{figure}[tbp]\centering
\includegraphics[width=.9 \textwidth]{zetaprobs.pdf}
\caption{Probability distribution over $y=\log\zeta$, for prior multiverse forces $n=.2$ (left), $n=.5$ (middle), and $n=.8$ (right). The left slope is due to the prior distribution favoring more dark matter; the right downslope is caused by the dilution of baryons in the measure factor. The observed value, $y\approx 1.7$, shown as a red spike, is quite typical in any of the three distributions.}
\label{fig-xiprobs}
\end{figure}
The above argument applies to any value of $n\in (0,1)$, but for values very close to the boundary, the smallness of $n$ or of $(1-n)$ introduces a new small parameter into the problem, and the peak need no longer lie at $\zeta\sim 1$. One finds, however, that the observed value of $\zeta$ is typical (i.e., within the central $2\sigma$ of the distribution) for nearly all values of $n\in (0,1)$; see Fig.~\ref{fig-nrange}.
\begin{figure}[tbp]\centering
\includegraphics[width=.9 \textwidth]{nrange.pdf}
\caption{As a function of the prior multiverse force $n$, the probability is shown for observers to find themselves in vacua with more dark matter than the observed value, $\zeta>5.5$. For our vacuum to lie within the central $1\sigma$ of the probability distribution over $\zeta$ (horizontal lines), $n$ must lie between about $.38$ and $.91$. For nearly all values of the multiverse force in the range shown, our observation of $\zeta=5.5$ is at least within the central $2\sigma$ of the predicted distribution. For values of $n$ outside the interval $(0,1)$, the probability distribution would peak at $\zeta\to 0$ or $\zeta\to \infty$, and our observation would be very unlikely unless explicit anthropic assumptions are introduced into the model.}
\label{fig-nrange}
\end{figure}
A prior multiverse force $n<0$ would imply that most vacua have very little dark matter; a prior force $n>1$ would mean that such a large fraction of vacua have large $\zeta$ that the measure factor cannot overcome this pressure. In either case, the observed value would be very unlikely unless one assumes explicit catastrophic boundaries that cut off the probability distribution, such as a failure to form galaxies~\cite{TegAgu05}. Moreover, one cannot understand the Why Comparable coincidence in this way.
For $n\in (0,1)$, remarkably, no such assumptions are needed, $\zeta=5.5$ is typical, and the Why Comparable coincidence is explained. Moreover, it is not difficult to obtain $n\in (0,1)$ from plausible assumptions about the landscape. A particularly compelling example is due to Freivogel~\cite{Fre08}: a natural (GUT or Planck scale) QCD axion~\cite{PecQui77}, which contributes a dark matter abundance of
\begin{equation}
\zeta\sim \left(\frac{f_a}{10^{12}~\mbox{GeV}}\right)^{7/6} \langle\theta_i\rangle^2~.
\label{eq-axion}
\end{equation}
The main assumption in this case is a low scale of slow-roll inflation in the relevant class of vacua. The energy scale of inflation must be lower than the Peccei-Quinn symmetry breaking scale, $f_a$. (It must also be low enough to evade constraints on isocurvature perturbations.) Then the axion misalignment angle $\theta_i$ at the end of inflation is random and constant over the scale of the horizon at $t_{\rm obs}$. Thus, the prior distribution over $\theta_i$ is flat, and by Eq.~(\ref{eq-axion}), the prior over $\zeta$ is
\begin{equation}
\frac{dN}{d\zeta}\propto \frac{d\theta}{d\zeta}\propto \frac{1}{\sqrt{\zeta}}~.
\end{equation}
By Eqs.~(\ref{eq-mf1})--(\ref{eq-mf3}), this corresponds to a prior multiverse force $n=\frac{1}{2}\in (0,1)$.
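Explicitly, the change of variables $y=\log\zeta$ gives
\begin{equation}
f(y)=\frac{dN}{dy}=\frac{dN}{d\zeta}\,\frac{d\zeta}{dy}\propto \frac{1}{\sqrt{\zeta}}\,\zeta=e^{y/2}~,\qquad
F=\frac{d\log f}{dy}=\frac{1}{2}~.
\end{equation}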
This result is important for two reasons. It renders a high-scale QCD axion viable both as dark matter and as a solution to the strong CP problem, without the need for the controversial assumption that too much dark matter would prevent efficient structure formation~\cite{TegAgu05,HelWal05}. Just as importantly (though this aspect was not emphasized in Ref.~\cite{Fre08}), it explains the Why Comparable coincidence, if the dark matter is an axion.
From the viewpoint we have developed above, these two features will be shared by any landscape model with prior pressure $n\in (0,1)$ towards large dark matter abundance. Such models would overproduce dark matter, were it not for the measure factor suppressing large $\zeta$. And with the measure factor taken into account, overproduction is averted and the Why Comparable coincidence is explained. In the following sections, we will illustrate this point by considering the dark matter as a freeze-out relic, and the dark matter as the LSP.
We note briefly that our solution of the Why Comparable problem generalizes if dark matter has multiple components. Each component has a density parameter $\zeta_i = \rho_{Di}/\rho_B$, distributed according to some prior distribution $dN = f(\zeta_i)\, d \log \zeta_i$ with corresponding force $F_i = d \log f / d \log \zeta_i = n_i$. The force for $\zeta = \Sigma \, \zeta_i$ is then given by $F=n\equiv \Sigma' n_i$, where the prime indicates that only terms with positive $n_i$ are included in the sum. Our solution to the Why Comparable problem requires $n<1$, so it must be unlikely that there are a large number of dark matter components having $n_i>0$. However, there could certainly be a few such components, and these typically have roughly comparable $\rho_{Di}$. For example, if dark matter has both an axion and a freeze-out component then $n=0.5 + n_{FO}$, where $0<n_{FO}<0.5$ is the force on the freeze-out relic density, which could result from a force on the dark matter mass between 0 and 1.
\subsection{Other Measures}
\label{sec-other}
Before moving on, we note that the above result is specific to the causal patch measure. Other popular measures do not lead to Eq.~(\ref{eq-mbxi}). As an example, consider the fat geodesic cutoff~\cite{BouFre08b}, which is closely related to an interesting class of local~\cite{BouMai12} and global~\cite{DesGut08a,Bou12b} measures. We may analyze this measure in its local formulation. Consider an ensemble of geodesics orthogonal to a fiducial initial volume in the early universe. Each geodesic is thickened by a fixed infinitesimal physical volume $\delta V$. (For equivalence to the scale factor cutoff, this volume must be representative of the attractor regime of eternal inflation, but the present analysis holds for more general initial conditions.) The fat geodesic measure contains baryonic mass $M_b^{\rm FG}=\rho_b\delta V$, where $\rho_b$ is the density of baryons at the time of observation, $t_{\rm obs}$.
The causal patch is the largest causally connected region in the universe and effectively averages $\rho_b$ over a large volume; it depends on the total mass within the event horizon, not how it is distributed. But the fat geodesic is sensitive to the density of matter at its own location.\footnote{For the same reason, this class of measures does not solve the Why Now problem~\cite{BouFre08b}, for positive values of $\Lambda$.} Timelike geodesics behave like collisionless dark matter and thus trace the dark matter. They will end up in structures with virial density $\rho_{\rm vir} \sim Q^3 T_{\rm eq}^4$. Here $Q\sim 10^{-5}$ is the primordial density contrast, and $T_{\rm eq}\sim \xi_b+\xi_D=\xi_b(1+\zeta)$ is the temperature at matter-radiation equality; $\xi_b$ and $\xi_D$ are the baryonic and dark matter mass per photon. Hence the baryonic mass inside the fat geodesic at $t_{\rm obs}$ is
\begin{equation}
M_b^{\rm FG}\propto \frac{\rho_{\rm vir} }{1+\zeta}\propto Q^3\xi_b^4 (1+\zeta)^3 ~.
\label{eq-mfg}
\end{equation}
(Marginalizing over the cosmological constant~\cite{BouFre08b} would contribute a further factor of $\rho_{\rm vir} \sim Q^3 \xi_b^4 (1 + \zeta)^4$ to the above formula, providing an even stronger force toward large $\zeta$.) Since $Q$ and $\xi_b$ might depend on $\zeta$, further analysis requires more assumptions than for the causal patch measure. In fact, we have already assumed implicitly that structure forms, which was not necessary for the causal patch.
It will suffice to show in a particularly natural setting that the fat geodesic measure cannot predict $\zeta\sim O(1)$. Suppose that the physics of baryogenesis and inflation is held fixed, so that $Q$ and $\xi_b$ are independent of $\zeta$. This is natural when studying the abundances of axion dark matter as in Ref.~\cite{Fre08}, or of the lightest supersymmetric particles, as in Sec.~\ref{sec-little}. By Eq.~(\ref{eq-mfg}), large dark matter abundance is strongly favored, because it increases the baryonic density near the fat geodesic. This cannot be compensated by assuming that vacua with large dark matter abundance are sufficiently rare ($n$ sufficiently small in the notation of the previous subsection): if the prior were strong enough to overcome a force towards large $\zeta$ which is strongest for $\zeta\gg 1$, then it would push right past $\zeta\sim O(1)$. Unless more specific anthropic assumptions are added by invoking catastrophic dynamical boundaries, the fat geodesic predicts that $\zeta$ should either be much less than or much greater than unity, for any prior without special features at $\zeta\sim O(1)$.
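The contrast can be made concrete with a small numerical sketch (again ours, not the paper's). Holding $Q$ and $\xi_b$ fixed and writing $y=\log\zeta$, the logarithmic slope of the probability density is $n-e^{y}/(1+e^{y})$ for the causal patch but $n+3\,e^{y}/(1+e^{y})$ for the fat geodesic. The first changes sign, producing an interior peak; the second stays positive for all $y$ whenever $n>0$, so the distribution runs away to large $\zeta$:

```python
import math

def sigmoid(y):
    # e^y / (1 + e^y), overflow-safe for large y
    return 1.0 / (1.0 + math.exp(-y)) if y < 50 else 1.0

def log_slope_cp(y, n):
    # causal patch: dp/dy ∝ e^{n y} / (1 + e^y)
    return n - sigmoid(y)

def log_slope_fg(y, n):
    # fat geodesic (Q, xi_b fixed): dp/dy ∝ e^{n y} (1 + e^y)^3
    return n + 3.0 * sigmoid(y)

n = 0.5
ys = [-10.0 + 0.25 * i for i in range(81)]  # y from -10 to 10
signs_cp = {math.copysign(1.0, log_slope_cp(y, n)) for y in ys}
signs_fg = {math.copysign(1.0, log_slope_fg(y, n)) for y in ys}
print(signs_cp)  # slope changes sign: interior peak near y = 0
print(signs_fg)  # slope always positive: runaway to large zeta
```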
\section{Origin of the Mass Scale of Freeze-Out Dark Matter }
\label{sec-freeze}
A cosmological relic from thermal freeze-out is frequently called a Weakly Interacting Massive Particle, or WIMP, because the mass that yields the observed abundance appears to be close to the weak scale, and because the hierarchy problem might be solved by assuming a new symmetry at this scale. In this section, we show that the mass of a freeze-out relic is connected, via our solution to the Why Comparable problem, to the baryon density of the universe. Its proximity to the weak scale is accidental and, by construction, unrelated to the hierarchy problem. For natural choices of couplings, values in the multi-TeV domain, somewhat above the weak scale, appear to be favored.
\subsection{The Mass Scale of a Thermal Relic}
\label{sec-relic}
We begin with a brief review of thermal freeze-out, following Ref.~\cite{Fen10}. We assume a particle with lifetime exceeding $t_{\rm obs}$ and mass $m_X \leq T_r$, the reheat temperature after inflation, having sufficient interactions that it is brought into thermal equilibrium by temperature $T \sim m_X$. As the temperature drops below $m_X$, as long as annihilation is efficient, the number density becomes Boltzmann suppressed:
\begin{equation}
n \sim (m_X T)^{3/2} e^{-m_X/T}~.
\label{eq-numd}
\end{equation}
The particles become too dilute to annihilate when the mean free time to annihilation becomes longer than the Hubble time (``freeze-out''):
\begin{equation}
n \langle \sigma_A v \rangle \sim H~,
\end{equation}
where $\sigma_A$ is the annihilation cross-section and $v$ is the velocity of dark matter particles. The thermally averaged annihilation cross-section is of the form
\begin{equation}
\langle\sigma_Av\rangle = \frac{c}{m_X^2}~,
\end{equation}
where $c$ involves a product of coupling constants, and may depend on mass ratios. With $H\sim T^2/M_{\rm P}$, the freeze-out temperature satisfies
\begin{equation}
\left(\frac{T_f}{m_X}\right)^{1/2} e^{m_X/T_f} \sim c \frac{M_{\rm P}}{m_X}~.
\label{eq-tf}
\end{equation}
Equivalently, with $x_f\equiv m_X/T_f$,
\begin{equation}
x_f - \frac{1}{2}\log x_f\sim \log\left( \frac{cM_{\rm P}}{m_X}\right)~.
\label{eq-xf}
\end{equation}
Note that $x_f$ depends very weakly on the dark matter mass. For $m_X$ within one or two orders of magnitude of the TeV scale, justified below, and typical values of $c$, one finds $x_f\approx 20$.
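The weak dependence of $x_f$ on $m_X$ is easy to check numerically. The sketch below solves Eq.~(\ref{eq-xf}) by fixed-point iteration; the parameter values ($M_{\rm P}$, $\alpha_{\rm eff}$) are illustrative, and $O(1)$ and degrees-of-freedom factors are dropped, so the absolute value of $x_f$ is only indicative:

```python
import math

M_P = 1.2e19  # Planck mass in GeV (illustrative; O(1) factors dropped)

def freeze_out_xf(m_X, c, iters=50):
    """Solve x_f - (1/2) log x_f = log(c M_P / m_X) by fixed-point iteration."""
    x = 20.0  # initial guess
    for _ in range(iters):
        x = math.log(c * M_P / m_X) + 0.5 * math.log(x)
    return x

c = 4 * math.pi * 0.01**2  # c = 4 pi alpha_eff^2 with alpha_eff = 0.01
xf_values = {m_X: freeze_out_xf(m_X, c) for m_X in (1e2, 1e3, 1e4)}  # GeV
```

Varying $m_X$ from $10^2$ to $10^4$ GeV changes $x_f$ by only about 15\%, which is the weak logarithmic dependence used below.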
It is useful to express the dark matter abundance in terms of an approximately conserved quantity, such as the dark matter mass per photon:
\begin{equation}
\xi_X\sim \frac{m_X n}{T^3}=\frac{m_X n_f}{T_f^3}~.
\end{equation}
With Eqs.~(\ref{eq-numd}) and (\ref{eq-tf}) holding at freeze-out, one finds the well-known result for the relic abundance from thermal freeze-out
\begin{equation}
\xi_X \sim\frac{x_f }{\langle\sigma_Av\rangle M_{\rm P}} \sim\frac{x_f }{cM_{\rm P}} m_X^2~.
\label{eq-zetaFO}
\end{equation}
\subsection{The Why Comparable Coincidence Sets the Scale}
\label{sec-setting}
For our purposes, it is crucial to compare this to the baryon mass per photon, which is given at late times by
\begin{equation}
\xi_b \sim m_p \eta~,
\end{equation}
where $\eta=n_b/n_\gamma\approx 6\times 10^{-10}$ is the baryon asymmetry and $m_p$ is the proton mass. The dark matter to baryon ratio is thus given by
\begin{equation}
\zeta=\frac{\xi_X}{\xi_b}\sim \frac{x_f}{c\,\eta}
\frac{m_X^2 }{M_{\rm P }m_p}~.
\label{eq-zetaFO2}
\end{equation}
The ratio grows quadratically with $m_X$, apart from the weak logarithmic dependence determined by the transcendental Eq.~(\ref{eq-xf}).
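A rough numerical illustration of Eq.~(\ref{eq-zetaFO2}) (with illustrative parameter values and $O(1)$ factors dropped, so only the parametric behavior is meaningful) shows both that $\zeta$ grows essentially quadratically with $m_X$ and that $\zeta\sim O(1)$ is reached for $m_X$ in the TeV domain:

```python
import math

M_P = 1.2e19   # Planck mass in GeV (illustrative)
m_p = 0.94     # proton mass in GeV
eta = 6e-10    # baryon asymmetry n_b / n_gamma
c = 4 * math.pi * 0.01**2  # c = 4 pi alpha_eff^2 with alpha_eff = 0.01

def xf(m_X, iters=50):
    """Freeze-out x_f = m_X / T_f from the transcendental relation."""
    x = 20.0
    for _ in range(iters):
        x = math.log(c * M_P / m_X) + 0.5 * math.log(x)
    return x

def zeta(m_X):
    """Parametric estimate zeta ~ (x_f / (c eta)) * m_X^2 / (M_P m_p)."""
    return xf(m_X) / (c * eta) * m_X**2 / (M_P * m_p)
```

For $m_X = 1$ TeV this crude estimate already lands within an order of magnitude of $\zeta = 1$, and increasing $m_X$ tenfold increases $\zeta$ by a factor of nearly $100$, softened only by the logarithm in $x_f$.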
We now assume that $m_X$ varies in the landscape, with a probability distribution that has no preferred scale and is described by a probability force $n/2$, with $n\in (0,1)$ favoring large values of $m_X$. Then $\zeta$ has a probability force $n$ and, taking other parameters fixed, by the analysis of the previous section the distribution for $\zeta$ is peaked near unity. As shown in Fig.~\ref{fig-xiprobs}, the observed dark matter abundance, $\zeta=5.5$, is quite typical.
This result leads to a statistical prediction for $m_X^2$:
\begin{equation}
m_X^2 \sim x_f^{-1} M_{\rm P} m_p \, c\, \eta .
\label{eq-FOscaling}
\end{equation}
With order-one factors restored and the dependence on $\zeta\sim O(1)$ made explicit, this prediction becomes
\begin{equation}
m_X = 0.5 \left( \frac{\alpha_{\rm eff}}{0.01} \right) \sqrt{\zeta} \; \mbox{TeV},~~\zeta\sim O(1),
\label{eq-mXFO}
\end{equation}
where we have defined an effective coupling strength by $c= 4 \pi \alpha_{\rm eff}^2$, and normalized to the value that results for freeze-out of a vector-like electroweak doublet fermion annihilating to gauge bosons, $\alpha_{\rm eff} = 0.01$.
Of course, inserting the observed value, $\zeta\approx 5.5$, in (\ref{eq-mXFO}) gives the usual thermal freeze-out result, $m_X\approx 1$ TeV. The key point is that we have explained {\em why\/} $\zeta\sim O(1)$, the Why Comparable coincidence. We conclude that the mass scale of the freeze-out relic,
\begin{equation}
m_X \sim \alpha_{\rm eff} \, \sqrt{\eta \, M_{\rm P} m_p }
\end{equation}
is set by the statistical preference for $\zeta\sim O(1)$. It is parametrically unrelated (though accidentally close) to the weak scale. In Sec.~\ref{sec-LSP}, we will apply this result to explain why the SUSY breaking scale may be close to, but somewhat above, the weak scale.
\section{The Origin of a Little Supersymmetric Hierarchy}
\label{sec-little}
In this section, we explore our explanation of the Why Comparable coincidence in theories with supersymmetric dark matter. We will begin, in Sec.~\ref{sec-hierarchy}, by considering the implications of the hierarchy problem for the expected scale of SUSY breaking in the landscape. We will argue that, without taking dark matter into account, there are only two simple possibilities: either the weak scale is natural, in which case SUSY should already have been discovered, or SUSY is broken far above the weak scale and will remain out of reach. If dark matter is the LSP, however, we find that the baryon dilution factor $(1+\zeta)^{-1}$ can make it statistically likely for SUSY to be broken near the weak scale without rendering it natural. We consider two classes of models in detail: thermal freeze-out of a Standard Model superpartner, in Sec.~\ref{sec-LSP}, and gravitino LSPs produced by various mechanisms in Sec.~\ref{sec-gravitino}. In all cases, one expects a small hierarchy between the weak scale and the scale of observable superpartners.
\subsection{Naturalness and the Prior for $\tilde m$ }
\label{sec-hierarchy}
In a wide class of supersymmetric theories, the scale of weak interactions, $v$, is related to the overall mass scale of Standard Model superpartners, $\tilde{m}$, by
\begin{equation}
v^2 = (C_1 + C_2 + ...) \, \tilde{m}^2.
\label{eq-vtm}
\end{equation}
The parameters $C_i$ depend on details of the model, such as coupling constants and mass ratios. But generically they should be of order unity. Absent fine-tuning of $\sum C_i$, supersymmetry can stabilize the weak scale against radiative corrections only if the superpartners have mass near the observed weak scale.
In this section, we assume that the mass scale $\tilde{m}$ scans in the landscape, with a prior distribution
\begin{equation}
\frac{dp}{d \log \tilde{m}} \; \propto \; \tilde{m}^q
\end{equation}
with $q>0$. This preference for large values of $\tilde{m}$ in the prior distribution need not lead to a prediction of large {\em observed} values. With all other model parameters held fixed, increasing $\tilde m$ drives up the weak scale, by Eq.~(\ref{eq-vtm}). All compound nuclei are unstable if $v$ exceeds the value we observe by more than 60\% \cite{AgrBar97}. We assume that this suppresses the abundance of observers dramatically, so that $v\sim 1.6\, v_o$, with $v_o$ the observed value, can be regarded as a catastrophic boundary.
However, one expects that parameters of the supersymmetric model do vary in the landscape, in such a way that one or more of the $C_i$ scan. Then $\tilde{m}\gg v$ can occur, provided a cancellation allows $(C_1 + C_2 + ...) =v^2/\tilde m^2\ll 1$. The prior probability for such a cancellation (i.e., the probability that it occurs in a randomly chosen vacuum) can be estimated by noting that $\sum C_i=0$ should not be a special point in the probability distribution, because the sum is over unrelated positive and negative terms of order unity. Hence, we may Taylor-expand the probability distribution around this point. For small $\sum C_i\ll 1$, it suffices to keep only the leading (constant) term~\cite{Wei87}. Thus, the prior probability for $\sum C_i\leq \epsilon\ll 1$ is of order $\epsilon$.
We now integrate over these additional scanning parameters, and we require that $v$ remains below the catastrophic value for stability of nuclei beyond hydrogen. This modifies the prior distribution over $\tilde m$, yielding a marginalized distribution
\begin{equation}
\frac{dp}{d \log \tilde{m}} \; \propto \left\{ \begin{array}{ll} \tilde m^q & (\tilde m \ll v) \\
\tilde{m}^{q-2} & (\tilde{m} \gg v) \end{array} \right.
\label{eq-distabovev}
\end{equation}
that exhibits two different regimes. For $\tilde m \ll v$, the prior distribution is unmodified, with a probability force $q$ towards large $\tilde m$. But for $\tilde m \gg v$, the probability force is decreased to $q-2$, because of the statistical price that must be paid for fine-tuning the weak scale.
Hence we identify two behaviors:
\begin{enumerate}
\item If $q<2$, then the prior favoring large $\tilde{m}$ is too weak to overcome the statistical cost of fine-tuning (the accidental cancellation among the $C_i$ required for $\tilde{m} \gg v$). In this case, we would expect to discover natural supersymmetry ($\tilde m \sim v$, i.e., superpartners with mass of order the weak scale $v$).
\item If $q>2$, then the multiverse force is sufficiently strong to favor runaway behavior for $\tilde{m}$, with superpartners many orders of magnitude above $v$, so that we expect to discover a finely-tuned theory of the weak scale.
\end{enumerate}
The first of these options is severely challenged by the failure, so far, of the LHC and other experiments to discover particles beyond the Standard Model. The second option would imply that we will never discover any, since $\tilde m \gg v$.
This analysis ignores the possible production of LSP dark matter. This may be appropriate. For example, the LSP might be unstable, or it might have a mass larger than the reheat temperature. Then the LSP would not contribute to the dark matter, and the above analysis would hold. One would then expect that the superpartners are out of reach of present experiments, and that the dark matter is associated with entirely different physics, such as an axion.
If LSP dark matter is produced, however, then our analysis so far is incomplete. The dark matter abundance will generally depend on $\tilde m$:
\begin{equation}
\zeta=\zeta(\tilde m)~.
\end{equation}
Thus, the probability distribution over $\tilde m$ will be modified by the baryon dilution factor of the causal patch measure of Eq.~(\ref{eq-mbxi}), yielding
\begin{equation}
\frac{dp}{d \log \tilde{m}} \; \propto \left\{ \begin{array}{ll} \tilde m^q/[1+\zeta(\tilde m)] & (\tilde m \ll v) \\
\tilde{m}^{q-2}/[1+\zeta(\tilde m)] & (\tilde{m} \gg v) \end{array} \right.~~.
\label{eq-di}
\end{equation}
Below we show that baryon dilution creates a third regime---effectively, a catastrophe---whose threshold can set $\tilde m$. We will find that this scale is parametrically unrelated to the weak scale, but accidentally lies nearby.
\subsection{LSP Freeze-Out with Only $\tilde{m}$ Scanning }
\label{sec-LSP}
Here we specialize to a large class of supersymmetric theories where dark matter arises from freeze-out of the LSP. We take the overall scale of superpartner masses, $\tilde{m}$, to scan, while keeping all other parameters fixed. In particular the mass of superpartner $i$ is given by $\tilde{m}_i = A_i \, \tilde{m}$ with $A_i$ fixed.\footnote{If the $A_i$ are scanned, our analysis favors a hierarchy between $\tilde m$ and $m_X$, since this would allow the SUSY breaking scale to become large without incurring a penalty from the baryon dilution factor. The extent to which this hierarchy is realized depends on the prior distribution over the $A_i$, which is not known. Similarly, if the coupling strength $c=4\pi\alpha_{\rm eff}^2$ is scanned, it will be driven up to the unitarity bound~\cite{Griest:1989wd} unless the prior distribution disfavors this sufficiently.} This class includes theories where the superpartners are at a single scale, with $A_i \approx 1$, and theories with a split spectrum. For example, split supersymmetry \cite{ArkaniHamed:2004fb} has the fermionic superpartners much lighter than the scalar superpartners, so $A_{\rm ferm} \ll A_{\rm scal} \approx 1$. With the discovery of a Higgs boson near 125 GeV, a spectrum based on anomaly mediation for gaugino masses \cite{Giudice:1998xp,Wells:2003tf} has become popular, variously called Spread \cite{Hall:2011jd, Hall:2012zp}, Pure Gravity Mediation \cite{Ibe:2011aa}, Mini-Split \cite{Arvanitaki:2012ps} and Simply Unnatural \cite{ArkaniHamed:2012gw}. In these theories the gaugino masses have $A_a = (b_a g_a / 16 \pi^2) \, ( M_*/\sqrt{3} M_{Pl})$, where $b_a$ and $g_a$ are the beta function coefficients and gauge couplings for gauge group $a$, and $M_*$ is a high mediation scale. By taking $A_a$ to be fixed, the analysis of this sub-section also applies to these theories.
In the previous section, we showed that the mass of a freeze-out relic is proportional to the square root of the dark matter to baryon energy density ratio, by Eq.~(\ref{eq-zetaFO}): $m_X\propto\sqrt{\zeta}$. Our present assumptions link this mass to the SUSY breaking scale:
\begin{equation}
m_X = \tilde{m}_{LSP} = A_{LSP} \, \tilde{m}~.
\end{equation}
Hence,
\begin{equation}
\zeta \propto \tilde{m}^2~,
\end{equation}
and the probability distribution of Eq.~(\ref{eq-di}), which includes the baryon dilution factor of Eq.~(\ref{eq-mbxi}), takes the form
\begin{equation}
\frac{dp}{d \log \zeta} \; \propto \;\;
\frac{\zeta^{q/2-1}}{1 + \zeta} \hspace{0.3in} \mbox{for} \;\; \tilde{m} \gg v~.
\end{equation}
This is identical to the distribution (\ref{eq-zetadist}) studied in Sec.~\ref{sec-whycomp}, with $n=q/2-1$. Thus the Why Comparable problem is solved for most values of $q$ between 2 and 4. The distribution is then peaked near $\zeta =1$, as shown in Fig.~\ref{fig-xiprobs} and Fig.~\ref{fig-nrange}.
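The location of the peak of this distribution can be confirmed with a short numerical sketch; the value $n = 1/2$ (i.e., $q = 3$) is illustrative:

```python
import math

n_force = 0.5  # probability force n = q/2 - 1, here for q = 3 (illustrative)

def density(log10_zeta):
    """Unnormalized dp/dlog(zeta), proportional to zeta^n / (1 + zeta)."""
    z = 10.0 ** log10_zeta
    return z**n_force / (1.0 + z)

grid = [-4.0 + 8.0 * i / 10000 for i in range(10001)]
vals = [density(u) for u in grid]
zeta_peak = 10.0 ** grid[vals.index(max(vals))]
# analytically the maximum sits at zeta = n / (1 - n), i.e. at zeta = 1 for n = 1/2
```

For any $n$ bounded away from $0$ and $1$ the peak $\zeta = n/(1-n)$ is of order unity, and the observed value $\zeta = 5.5$ carries a density only modestly below the maximum.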
The mass of the LSP, $m_X=\tilde m_{LSP}$, is set by the freeze-out analysis of the previous section, and with our prediction $\zeta\sim O(1)$, is parametrically unrelated to the weak scale. By Eq.~(\ref{eq-FOscaling}),
\begin{equation}
\tilde m_{LSP}^2 \sim x_f^{-1} M_{\rm P} m_p \, c\, \eta .
\label{eq-FOscaling2}
\end{equation}
With order-one factors restored and the dependence on $\zeta\sim O(1)$ made explicit, this leads to a statistical, environmental determination of the scale of supersymmetry breaking $\tilde{m} = \tilde{m}_{LSP} /A_{LSP}$:
\begin{equation}
\tilde m = \frac{0.5\, \mbox{TeV}}{A_{LSP}} \left( \frac{\alpha_{\rm eff}}{0.01} \right) \sqrt{\zeta},~~\zeta\sim O(1).
\label{eq-mXFO2}
\end{equation}
If the spectrum is not highly split, $A_{LSP}\sim O(1)$, this predicts a Little Hierarchy with superpartners in the multi-TeV domain, somewhat above the weak scale. In Split (or Spread) Supersymmetry, $A_{LSP}\ll 1$, only the fermionic superpartners (gauginos), including the dark matter particle, remain near the TeV domain, while the scalar superpartners are considerably heavier. Either way, there are superpartners with masses in the TeV domain that are parametrically unrelated to the weak scale and, in a variety of theories, these superpartners are accessible to the LHC and future colliders.
\subsection{Gravitino LSP Dark Matter}
\label{sec-gravitino}
The mass scale of the Standard Model superpartners is given by $\tilde{m} \sim F/M_{\rm mess}$ where $\sqrt{F}$ is the primordial scale of supersymmetry breaking and $M_{\rm mess}$ is the messenger scale. All supersymmetric theories contain a gravitino of mass $F/M_{Pl}$; so unless $M_{\rm mess} \sim M_{Pl}$, the gravitino is expected to be the LSP. (Special circumstances evade this conclusion~\cite{Hall:2011jd, Hall:2012zp, Ibe:2011aa, Arvanitaki:2012ps, ArkaniHamed:2012gw}.)
Given the genericity of the gravitino as the LSP, it is important to investigate the implications for the observable superpartner mass scale $\tilde{m}$, if it is controlled not by the requirement of a natural weak scale as has long been assumed, but by $\zeta\sim O(1)$ as we have argued.
Recently, it was shown that, under weak assumptions, if gravitinos are the dark matter, $\tilde{m}$ cannot lie far from the TeV domain even if SUSY does not solve the hierarchy problem~\cite{Hall:2013uga}. For simplicity we take all Standard Model superpartners to be comparable in mass, so that the relevant parameter space is three-dimensional, $(T_r, \tilde{m}, m_{3/2})$. To implement our solution of the Why Comparable problem we take $\tilde{m}$ to scan, and although $T_r$ and $m_{3/2}$ do not scan, we allow them to take a wide range of values. We take $m_{3/2} < \tilde{m}$ so the gravitino is the LSP, and $T_r > \tilde{m}$ so that the superpartners are cosmologically interesting. We take $m_{3/2}$ sufficiently large, certainly greater than a keV, so that gravitinos could account for the dark matter. With these restrictions, $(T_r, \tilde{m}, m_{3/2})$ range over many orders of magnitude.
Although the gravitinos are sufficiently weakly interacting that they never reach thermal equilibrium, they are produced by three processes. In Freeze-Out and Decay (FO\&D) the lightest Standard Model superpartner undergoes freeze-out, and at some later era decays to gravitinos; in Freeze-In (FI), when the temperature is of order $\tilde{m}$, Standard Model superpartners occasionally decay to give gravitinos; and finally gravitinos can be produced by scattering at high temperatures of order $T_r$ (UV). For the usual Freeze-Out mechanism we found in (\ref{eq-FOscaling}) the parametric scaling behavior $m_X^2 \sim ( M_{\rm P} m_p \eta) \, \zeta$. For gravitino dark matter, if the production is dominated by (FO\&D, FI, UV) we find the scaling behaviors
\begin{equation}
\left( \tilde{m} m_{3/2}, \frac{\tilde{m}^3}{m_{3/2}}, \frac{\tilde{m}^2 T_r}{m_{3/2}} \right) \; \sim \; ( M_{\rm P} m_p \eta) \, \zeta.
\label{eq-gravDMscaling}
\end{equation}
Including numerical factors, these results can be used to predict the scale $\tilde{m}$ analogous to the freeze-out prediction of (\ref{eq-mXFO}). When $r = m_{3/2} / \tilde{m}$ is not far below unity, FO\&D dominates giving
\begin{equation}
\tilde{m} = \frac{0.5}{\sqrt{r}} \left( \frac{\alpha_{\rm eff}}{0.01} \right) \sqrt{\zeta} \; \mbox{TeV},
\label{eq-mFOD}
\end{equation}
with the TeV scale again emerging from $\sqrt{\eta \, M_{\rm P} m_p }$. As $r$ drops, $\tilde{m}$ increases, but at $r=r_c$ the prediction for $\tilde{m}$ reaches a maximum, since either FI or UV production dominates for $r < r_c$, giving
\begin{equation}
\tilde{m} = 25 \sqrt{ \frac{\alpha_{\rm eff}}{0.01} } \sqrt{ \frac{r}{r_c} }
\left( \frac{\tilde{m}}{T_r} \right)^{1/4} \sqrt{\zeta} \; \mbox{TeV},
\label{eq-mFIUV}
\end{equation}
with $r_c = 5 \cdot 10^{-4} (\alpha_{\rm eff}/0.01)( T_r/\tilde{m})^{1/4}$. For the FI case $T_r$ should be set equal to $\tilde{m}$. Thus for $\alpha_{\rm eff} = 0.01$ the maximum value for $\tilde{m}$ is $25 (\tilde{m}/T_r)^{1/4} \sqrt{\zeta} $ TeV, and this drops as $\sqrt{r}$ as $m_{3/2}$ is reduced further.
Thus scanning of the supersymmetry breaking scale, $\tilde{m}$, not only solves the Why Comparable problem but leads to a large class of theories, having a wide range of $T_r$ and $m_{3/2}$, with superpartners that may be accessible to LHC and future colliders.
\section{Introduction}\label{sec1}
Consider the linear model
\begin{equation}
\label{model} Y = X \theta+ \varepsilon,
\end{equation}
where $X$ is an $n \times p$ matrix, $\theta\in\mathbb R^p$, potentially
$p>n$, and where $\varepsilon$ is a $n\times1$ vector consisting of
i.i.d. Gaussian noise independent of $X$, with mean zero and known
variance standardised to one. To develop the main ideas, let us assume
for the moment that the matrix $X$ consists of i.i.d. $N(0,1)$ Gaussian
entries $(X_{ij})$, reflecting a prototypical high-dimensional model,
such as those encountered in compressive sensing; our main results hold
for more general design assumptions that we introduce and discuss in
detail below.
We denote by $P_\theta$ the law of $(Y, X)$, by $E_\theta$ the
corresponding expectation, and will omit the subscript $\theta$ when no
confusion may arise. For the asymptotic analysis we shall let $\min
(n,p)$ tend towards infinity, and the $o,O$-notation is to be
understood accordingly. Let $B_0(k)$ be the $\ell^0$-``ball'' of radius
$k$ in $\mathbb R^p$, that is, the set of all vectors in $\mathbb R^p$ with at most $k
\le p$ nonzero entries. As is common in the literature on high-dimensional
models, we shall consider $p$ potentially greater than $n$ but signals
$\theta$ that are \textit{sparse} in the sense that $\theta\in B_0(k)$
for some $k$ significantly smaller than $p$. We parameterise $k$ as
\[
k\equiv k(\beta) \sim p^{1-\beta},\qquad 0<\beta<1.
\]
The parameter $\beta$ measures the sparsity of the signal: if $\beta$
is close to one, only very few of the $p$ coefficients of $\theta$ are
nonzero. If $\beta\in(0,1/2]$, one speaks of the moderately sparse case
and for $\beta\in(1/2,1]$ of the highly sparse case. We include the
case $\beta=1$ where, by convention, $k \equiv \operatorname{const}
\times p^{0} = \operatorname{const}$.
A sparse adaptive estimator $\hat\theta\equiv\hat\theta_{np} = \hat\theta(Y,X)$
for $\theta$ achieves, for every $n$ and every $k \le p$, with high
$P_\theta$-probability and for some universal constant $c$, the risk bound
\begin{equation}
\label{spast} \|\hat\theta- \theta\|^2 \le c \log p \times
\frac{k}{n},
\end{equation}
uniformly for all $\theta\in B_0(k)$. Here $\|\cdot\| \equiv\|\cdot\|
_2$ denotes the standard Euclidean norm on $\mathbb R^p$, with inner
product $\langle\cdot, \cdot\rangle$. Such estimators exist (see
Corollary~\ref{spadest} below, for example)---they attain the risk of an
estimator that would know the positions of the $k$ nonzero
coefficients, with the typically mild penalty of $\log p$. The
literature on such estimators is abundant; see, for instance,
\citet{CT07}, \citet{BRT09}, and the monograph
\citet{BvdG2011}, where many further references can be found.
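As a concrete (illustrative) instance of such an estimator, the sketch below computes the Lasso by proximal gradient descent (ISTA) on simulated data with $X_{ij}\sim N(0,1)$; the dimensions and the tuning constant in $\lambda \sim \sqrt{\log p / n}$ are chosen for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 500, 5
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:k] = 2.0                                # a k-sparse signal
Y = X @ theta + rng.standard_normal(n)         # linear model with unit noise

lam = 2.0 * np.sqrt(2.0 * np.log(p) / n)       # lambda of order sqrt(log p / n)
L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant of the gradient
step = 1.0 / L

th = np.zeros(p)
for _ in range(1000):
    grad = X.T @ (X @ th - Y) / n              # gradient of (1/2n)||Y - X th||^2
    z = th - step * grad
    th = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

err = float(np.sum((th - theta) ** 2))
```

On this draw the squared error $\|\hat\theta - \theta\|^2$ is far below $\|\theta\|^2 = 20$, consistent with a bound of order $k \log p / n$ as in (\ref{spast}).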
We are interested in the question of whether one can construct a
confidence set for $\theta$ that takes inferential advantage of
sparsity as in (\ref{spast}). Most of what follows applies as well to
the related problem of constructing confidence sets for $X\theta$---we
discuss this briefly at the end of the \hyperref[sec1]{Introduction}. A confidence set
$C\equiv C_{np}$ is a~random subset of $\mathbb R^p$---depending only
on the sample $Y$, $X$ and on a significance level $0<\alpha<1$---that
we require to contain the true parameter $\theta$ with at least a
prescribed probability $1-\alpha$. Our positive results rely on the
in many ways natural universal assumption $\theta\in B_0(k_1)$, with $k_1$
a minimal sparsity degree such that consistent estimation is possible.
Formally,
\[
k_1 \sim p^{1-\beta_1},\qquad \beta_1 \in(0,1);\qquad
k_1 = o(n/\log p),
\]
so that the risk bound in (\ref{spast}) converges to zero for $k=k_1$.
Our statistical procedure should have coverage over signals that are at
least $k_1$-sparse. Given $0<\alpha<1$, a~level $1-\alpha$ confidence set
$C$ should then be \textit{honest} over $B_0(k_1)$,
\begin{equation}
\label{honest} \liminf_{\min(n,p) \to\infty}\ \inf_{\theta\in B_0(k_1)}
P_{\theta}(\theta\in C) \ge1- \alpha.
\end{equation}
Moreover, the Euclidean diameter $|C|_2$ of $C$ should satisfy that for
every $\alpha'>0$ there exists a universal constant $L$ such that for
every $0<k \le k_1$,
\begin{equation}
\label{adapt} \limsup_{\min(n,p) \to\infty}\ \sup_{\theta\in B_0(k)}
P_\theta \biggl(|C|^2_2 > L \log p \times
\frac{k}{n} \biggr) \le\alpha'.
\end{equation}
Such a confidence set would cover the true $\theta$ with prescribed
probability and would shrink at an optimal rate for $k$-sparse signals
without requiring knowledge of the position of the $k$ nonzero coefficients.
A first attempt to construct such a confidence set, inspired by
\citet{L89}, \citet{BD98}, \citet{B04} in nonparametric
regression problems, is based on estimating the accuracy of estimation
in (\ref{spast}) directly via sample splitting. Heuristically the idea
is to compute a sparse estimator $\tilde\theta$ based on the first
subsample of $(Y,X)$ and to construct a confidence set centred at
$\tilde\theta$ based on the risk estimate
\[
\frac{1}{n}(Y-X \tilde\theta){}^T(Y-X\tilde\theta)-1
\]
computed from the $(Y,X)$ of the other subsample.
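The heuristic rests on the identity $E_\theta[n^{-1}\|Y - X\tilde\theta\|^2] = \|\tilde\theta - \theta\|^2 + 1$ when $X_{ij}\sim N(0,1)$ is independent of $\tilde\theta$. A small simulation (sizes, seed, and the stand-in value of $\tilde\theta$ are illustrative) checks this unbiasedness:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 50
theta = np.zeros(p); theta[0] = 1.0
theta_tilde = np.zeros(p); theta_tilde[0] = 0.5   # stand-in for the first-subsample estimate
true_risk = float(np.sum((theta_tilde - theta) ** 2))  # = 0.25

reps = 2000
est = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((n, p))               # fresh second subsample
    Y = X @ theta + rng.standard_normal(n)
    est[r] = float(np.sum((Y - X @ theta_tilde) ** 2)) / n - 1.0

mean_est = float(est.mean())
```

The average of the risk estimate over replications reproduces $\|\tilde\theta-\theta\|^2$ up to fluctuations of order $n^{-1/2}$, which is the source of the extra $n^{-1/2}$ term in Theorem~\ref{n4}.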
\begin{theorem} \label{n4}
Consider the model (\ref{model}) with i.i.d. Gaussian design $X_{ij}
\sim N(0,1)$ and assume $k_1=o(n/\log p)$. There exists a confidence
set $C$ that is honest over $B_0(k_1)$ in the sense of (\ref{honest})
and which satisfies, for any $k \le k_1$, and uniformly in $\theta\in
B_0(k)$,
\[
|C|_2^2 = O_P \biggl(\log p \times
\frac{k}{n} + n^{-1/2} \biggr).
\]
\end{theorem}
In fact, we prove Theorem~\ref{n4} for general correlated designs
satisfying Condition~\ref{subgauss} below. As a consequence, in such
situations full adaptive inference is possible as long as the rate of sparse
estimation in (\ref{spast}) is not required to be faster than~$n^{-1/4}$.
One may next look for estimates of $\|\tilde\theta-\theta\|$ that have
a better accuracy than just of order $n^{-1/4}$. In nonparametric
estimation problems this has been shown to be possible; see
\citet{HL02}, \citet{JL03}, \citet{CL06},
\citet{RV06}, \citet{BN12}. Translated to high-dimensional
linear models, the accuracy of these methods can be seen to be of order
$p^{1/4}/\sqrt n$, which for $p \ge n$ is of larger order of magnitude
than $n^{-1/4}$ and hence of limited interest.
Indeed, our results below will show that the rate $n^{-1/4}$ is
intrinsic to high-dimensional models: for $p \ge n$ a confidence set
that simultaneously satisfies (\ref{honest}) and adapts at \textit{any}
rate $\sqrt{(k \log p)/n} = o(n^{-1/4})$ in (\ref{adapt}) does not
exist. Rather one then needs to remove certain `critical regions' from
the parameter space in order to construct confidence sets. This is so
despite the existence of estimators satisfying (\ref{spast});
\textit{the construction of general sparse confidence sets is thus a
qualitatively different problem than that of sparse estimation.}
To formalise these ideas, we take the separation approach to adaptive
confidence sets introduced in \citet{GN10}, \citet{HN11},
\citet{BN12} in the framework of nonparametric function
estimation. We shall attempt to make honest inference over maximal
subsets of $B_0(k_1)$ where $k_1$ is given a priori as above, in a way
that is adaptive over the submodel of sparse vectors $\theta$ that
belong to $B_0(k_0)$,
\[
k_0 \sim p^{1-\beta_0},\qquad k_0 < k_1,\qquad \beta_0>\beta_1.
\]
By tracking constants in our proofs, we could include $\beta_0 = \beta
_1$ too if $k_0 \le ck_1$ for $c>0$ a small constant without changing
our findings. However, assuming $k_0=o(k_1)$ results in a considerably
cleaner mathematical exposition.
We shall remove those $\theta\in B_0(k_1)$ that are too close in
Euclidean distance to $B_0(k_0)$, and consider
\begin{equation}
\label{sepclass} \tilde B_0(k_1, \rho) = \bigl\{\theta\in
B_0(k_1)\dvtx \bigl\|\theta- B_0(k_0)\bigr\|
\ge\rho \bigr\},
\end{equation}
where $\rho= \rho_{np}$ is a separation sequence and where $\|\theta
-Z\|= \inf_{z \in Z}\|\theta-z\|$ for any $Z \subset\mathbb R^p$. Thus,
if $\theta\notin B_0(k_0)$, we remove the $k_0$ coefficients $\theta_j$
with largest modulus $|\theta_j|$ from $\theta$, and require a lower
bound on the $\ell^2$-norm of the remaining subvector. In other words,
if $|\theta_{(1)}| \le\cdots\le|\theta_{(j)}| \le\cdots\le
|\theta_{(p)}|$ are any order statistics of $\{|\theta_j|\dvtx j=1,
\ldots, p\}$, then
\[
\bigl\|\theta-B_0(k_0)\bigr\|^2 = \sum_{j=1}^{p-k_0} \theta_{(j)}^2
\]
needs to exceed $\rho^2$. Defining the new model
\[
\Theta(\rho) = B_0(k_0) \cup\tilde B_0(k_1,
\rho),
\]
we now require, instead of (\ref{honest}) and (\ref{adapt}), the weaker
coverage property
\begin{equation}
\label{h2} \liminf_{\min(n,p) \to\infty}\ \inf_{\theta\in\Theta(\rho_{np})}
P_{\theta}(\theta\in C_{np}) \ge1- \alpha
\end{equation}
for any $0<\alpha<1$, as well as, for some finite constant $L>0$,
\begin{equation}
\label{ad1} \limsup_{\min(n,p) \to\infty}\ \sup_{\theta\in B_0(k_0)}
P_\theta \biggl(|C_{np}|^2_2 > L \log
p \times\frac{k_0}{n} \biggr) \le\alpha'
\end{equation}
and
\begin{equation}
\label{ad2} \limsup_{\min(n,p) \to\infty}\ \sup_{\theta\in\tilde B_0(k_1, \rho
_{np}) }
P_\theta \biggl(|C_{np}|^2_2 > L \log
p \times\frac{k_1}{n} \biggr) \le\alpha'
\end{equation}
and search for minimal assumptions on the separation sequence $\rho
_{np}$. Note that any confidence set $C$ that satisfies (\ref{honest})
and (\ref{adapt}) also satisfies the above three conditions for any
$\rho\ge0$, so if one can prove the necessity of a lower bound on the
sequence~$\rho_{np}$, then one disproves in particular the existence of
adaptive confidence sets in the stronger sense of (\ref{honest}) and
(\ref{adapt}).
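The distance $\|\theta - B_0(k_0)\|$ appearing in (\ref{sepclass}) is elementary to compute: the best $k_0$-sparse approximation keeps the $k_0$ entries of largest modulus. A short sketch (the example vector is illustrative):

```python
def dist_sq_to_B0(theta, k0):
    """Squared Euclidean distance from theta to the l0-ball B_0(k0):
    zero out the k0 largest-modulus entries and sum the remaining squares."""
    mags = sorted((abs(t) for t in theta), reverse=True)
    return sum(m * m for m in mags[k0:])

# example: theta = (5, 4, 3, 2, 1, 0, ..., 0) in R^100
theta = [5.0, 4.0, 3.0, 2.0, 1.0] + [0.0] * 95
```

Here $\|\theta - B_0(2)\|^2 = 3^2 + 2^2 + 1^2 = 14$, while $\theta$ itself lies in $B_0(5)$, so the distance to $B_0(5)$ is zero.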
The following result describes our findings under the conditions of
Theorem~\ref{n4}, but now requiring adaptation to $B_0(k_0)$ at
estimation rate $\sqrt{(k_0 \log p)/n}$ faster than $n^{-1/4}$ or, what
is the same, assuming
\[
k_0 = o(\sqrt n / \log p).
\]
When specialising to the high-dimensional case $p \ge n$ this
automatically forces $\beta_0>1/2$. We require coverage over moderately
sparse alternatives ($\beta_1\le1/2$); the cases $\beta_1 >1/2$ and $p\le n$,
as well as more general design assumptions, will be considered below.
\begin{theorem} \label{gauss}
Consider the model (\ref{model}) with i.i.d. Gaussian design $X_{ij}
\sim N(0,1)$ and $p \ge n$. For $0<\beta_1 \le1/2<\beta_0 \le1$ and
$k_0< k_1$ as above assume
\[
k_0 = o(\sqrt n / \log p),\qquad k_1= o (n/\log p).
\]
An honest adaptive confidence set $C_{np}$ over $\Theta(\rho_{np})$ in
the sense of (\ref{h2}), (\ref{ad1}), (\ref{ad2}) exists if and only if
$\rho_{np}$ exceeds, up to a multiplicative universal constant,
$n^{-1/4}$, which is the minimax rate of testing between the composite
hypotheses
\begin{equation}
\label{compo} H_0\dvtx \theta\in B_0(k_0)\quad
\mbox{vs.}\quad H_1\dvtx \theta\in\tilde B_0(k_1,
\rho_{np}).
\end{equation}
\end{theorem}
The question arises whether insisting on exact rate adaptation in
(\ref{ad1}) is crucial in Theorem~\ref{gauss} or whether some mild
`penalty' for adaptation (beyond $\log p$) could be paid to avoid
separation conditions ($\rho>0$). The proof of Theorem~\ref{gauss}
implies that requiring $|C|_2^2$ in (\ref{ad1}) to shrink at any rate
$o(n^{-1/2})$ that is possibly slower than $(k_0 \log p)/n$ but still
$o((k_1 \log p)/n)$ does not alter the conclusion of necessity of
separation at rate $\rho\simeq n^{-1/4}$ in Theorem~\ref{gauss}. In
particular, for $p \ge n$, Theorem~\ref{n4} cannot be improved if one
wants adaptive confidence sets that are honest over all of $B_0(k_1)$.
Theorem~\ref{gauss} and our further results below show that sparse
$o(n^{-1/4})$-adaptive confidence sets exist precisely over those
parameter subspaces of $B_0(k_1)$ for which the degree of sparsity is
asymptotically detectable. Sparse adaptive confidence sets solve the
composite testing problem (\ref{compo}) in a minimax way, either
implicitly or explicitly. Theorem~\ref{gauss} reiterates the findings
in \citet{HN11} and \citet{BN12} that adaptive confidence
sets exist over parameter spaces for which the structural property one
wishes to adapt to---in the present case, sparsity---can be detected
from the sample.
The paper \citet{ITV10}, where the testing problem (\ref{compo})
is considered with simple $H_0\dvtx \theta=0$, is instrumental for our
lower bound results. Our upper bounds show that a minimax test for
the composite problem (\ref{compo}) exists without requiring stronger
separation conditions than those already needed in the case of
$H_0\dvtx \theta=0$, and under general correlated design assumptions.
In the setting of Theorem~\ref{gauss} the tests are based on rejecting
$H_0$ if the statistic $T_n$ defined by
\begin{equation}
\label{testdef} t_n\bigl(\theta'\bigr) =
\frac{1}{\sqrt{2n}} \sum_{i=1}^n \bigl[
\bigl(Y_i-\bigl(X\theta '\bigr)_i
\bigr)^2-1\bigr],\qquad T_n = \inf_{\theta' \in B_0(k_0)}
\bigl|t_n\bigl(\theta'\bigr)\bigr|
\end{equation}
exceeds a critical value. In practice, the computation of $T_n$
requires a convex relaxation of the minimisation problem as is standard
in the construction of sparse estimators. The
proofs that such minimum tests are minimax optimal are based on ratio
empirical process techniques, particularly Lemmas
\ref{AssumptionBlemma} and~\ref{peel} below, which are of independent
interest.
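To make the definition (\ref{testdef}) concrete, the following minimal numerical sketch (an illustration only, not part of the formal development) evaluates $T_n$ by exhaustive search over supports in a toy dimension; the function name and all parameter values are illustrative, and in practice one would use a convex relaxation as noted above.

```python
import itertools
import numpy as np

def T_n_bruteforce(y, X, k0):
    """Brute-force T_n = inf_{theta' in B_0(k0)} |t_n(theta')|.

    On a fixed support S, t_n(theta') = (RSS(theta') - n)/sqrt(2n), and
    RSS ranges continuously over [RSS_min(S), infinity) as theta' moves
    over the span of S; hence the per-support infimum of |t_n| equals
    max(RSS_min(S) - n, 0)/sqrt(2n)."""
    n, p = X.shape
    best = np.inf
    for S in itertools.combinations(range(p), k0):
        XS = X[:, list(S)]
        coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
        rss_min = ((y - XS @ coef) ** 2).sum()
        best = min(best, max(rss_min - n, 0.0) / np.sqrt(2 * n))
    return best

rng = np.random.default_rng(0)
n, p, k0 = 200, 8, 2
X = rng.standard_normal((n, p))
theta_true = np.zeros(p)
theta_true[:k0] = 1.0                      # theta in B_0(k0): H_0 holds
y = X @ theta_true + rng.standard_normal(n)
Tn = T_n_bruteforce(y, X, k0)
print(Tn)                                  # small under H_0
```

Under $H_1$ the residual sum of squares on every $k_0$-sparse support stays well above $n$, so $T_n$ is large; this is precisely the separation the proofs below quantify.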
Our results give the weakest possible conditions on the regions of the
parameter space that have to be removed from consideration in order to
obtain sparse adaptive confidence sets for $\theta$. Another separation condition
that may come to mind would be a lower bound $\gamma_{np}$ on the
smallest nonzero entry of $\theta\in B_0(k_1)$. Then
\[
\bigl\|\theta-B_0(k_0)\bigr\|^2 \ge(k_1-k_0)
\gamma^2_{np}
\]
and if one considers, for example, moderately sparse $\beta_1<1/2$, and
$p \ge n, k_0 = o(k_1)$, the lower bound required on $\gamma_{np}$ for
Theorem~\ref{gauss} to apply is in fact $o(n^{-1/2})$. A sparse
estimator will not be able to detect nonzero coefficients of such size;
rather, one needs tailor-made procedures such as those presented here, similar
in spirit to results in sparse signal detection [\citet{ITV10}, \citet{ACCP11}].
Our results concern confidence sets for the parameter vector $\theta$
itself in the Euclidean norm $\|\cdot\|$. Often, instead of $\theta$
itself, inference on $Z\theta$ is of interest, where $Z$~is an $m \times p$
prediction matrix. If
\[
\|Z\theta\| \ge c \|\theta\|\qquad \forall\theta\in B_0(k_1)
\]
with high probability, including the important case $Z=X$ under the
usual coherence assumptions on the design matrix $X$, then any honest
confidence set for $Z\theta$ can be used to solve the testing problem
(\ref{contr}) below, so that lower bounds for sparse confidence sets
for $\theta$ carry over to lower bounds for sparse confidence sets for
$Z\theta$. In contrast, for regular fixed linear functionals of $\theta
$ such as low-dimensional projections, the situation may be different:
for instance, in the recent papers of \citet{ZZ11},
\citet{vdGBR2013} and \citet{JM13} one-dimensional confidence
intervals for a fixed element $\theta_j$ in the vector $\theta$ are
constructed.
\section{Main results}
A heuristic summary of our findings for all parameters simultaneously
is as follows: if the rate of estimation in the submodel $B_0(k_0)$ of
$B_0(k_1)$ one wishes to adapt to is faster than
\begin{equation}
\label{unifrat} \rho\simeq\min \biggl(n^{-1/4}, \frac{p^{1/4}}{\sqrt n}, \sqrt{
\frac
{k_1 \log p}{n}} \biggr),
\end{equation}
then separation is necessary for adaptive confidence sets to exist at
precisely this rate $\rho$. For $p \ge n$ this simply reduces to
requiring that the rate of adaptive estimation in $B_0(k_0)$ beats
$n^{-1/4}$---the natural condition expected in view of Theorem~\ref{n4}, which proves existence of honest adaptive confidence sets
when the estimation rate is $O(n^{-1/4})$.
We consider the following conditions on the design matrix $X$.
\begin{condition} \label{uncor}
Consider the model (\ref{model}) with independent and identically
distributed $(X_{ij})$ satisfying $EX_{ij}=0$, $EX^2_{ij}=1\ \forall
i,j$.
\begin{longlist}[(a)]
\item[(a)] For some $h_0>0$,
\[
\max_{1 \le j \le l \le p} E\bigl(\exp(hX_{1j}X_{1l})
\bigr) = O(1)\qquad\forall |h|\le h_0.
\]
\item[(b)] $|X_{ij}| \le b$ for some $b>0$ and all $i,j$.
\end{longlist}
\end{condition}
Let next $\hat\Sigma:= X^T X / n $ denote the Gram matrix and let
$\Sigma:= E \hat\Sigma$. We will sometimes write $\| X \theta\|_n^2:=
\theta^T \hat\Sigma\theta$ to expedite notation.
\begin{condition}\label{subgauss}
In the model (\ref{model}) assume the
following:
\begin{longlist}[(a)]
\item[(a)] The matrix $X$ has independent rows, and for each $i
\in\{1,
\ldots, n\} $ and each $u \in\mathbb{R}^p$ with $u^T \Sigma u
\le1$, the random variable $(Xu)_i$ is sub-Gaussian with constants
$\sigma_0$ and $\kappa_0$:
\[
\kappa_0^2 \bigl( E \exp\bigl[ \bigl|(Xu)_i
\bigr|^2 / \kappa_0^2 \bigr] -1 \bigr) \le
\sigma_0^2\qquad \forall u^T \Sigma u \le1.
\]
\item[(b)] The smallest eigenvalue $\Lambda_{\min}^2
\equiv\Lambda_{\min, p}^2$ of $\Sigma$ satisfies $\inf_p
\Lambda_{\min, p}^2 >0$.
\end{longlist}
\end{condition}
Condition~\ref{uncor}(a) could be replaced by a fixed design assumption
as in Remark~4.1 in \citet{ITV10}. Condition~\ref{uncor}(b)
clearly implies Condition~\ref{uncor}(a); it also implies Condition
\ref{subgauss} with $\Sigma=I$ and universal constants $\kappa_0,
\sigma _0$: we have $(Xu)_i = \sum_{m=1}^p X_{im} u_m$ with mean zero
and independent summands bounded in absolute value by $b|u_m|$, so that
by Hoeffding's inequality $(Xu)_i$ is sub-Gaussian,
\[
P\bigl(\bigl|(Xu)_i\bigr| \ge t \bigr) \le2 e^{-t^2/(2b^2 \|u\|_2^2)}
\]
and Condition~\ref{subgauss} follows, integrating tail probabilities.
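The Hoeffding step in this argument is easy to check numerically. The sketch below (an illustration under a Rademacher-design assumption satisfying Condition~\ref{uncor}(b) with $b=1$; all constants illustrative) compares the empirical tail of $(Xu)_i$ with Hoeffding's bound $2\exp(-s^2/(2b^2\|u\|_2^2))$ evaluated at thresholds $s = t\|u\|_2$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, b = 50, 1.0
u = rng.standard_normal(p)
# Rademacher design entries: bounded by b = 1, mean zero, unit variance
X = rng.choice([-1.0, 1.0], size=(100_000, p))
Z = X @ u                                  # samples of (Xu)_i

pairs = []
for t in (2.0, 4.0, 6.0):
    emp = np.mean(np.abs(Z) >= t * np.linalg.norm(u))
    # Hoeffding bound at threshold s = t * ||u||_2 with b = 1
    bound = 2.0 * np.exp(-t**2 / 2.0)
    pairs.append((emp, bound))
    print(t, emp, bound)
```

The empirical tails sit well below the bound, reflecting the looseness of Hoeffding's inequality for Gaussian-like sums; sub-Gaussianity in the sense of Condition~\ref{subgauss}(a) then follows by integrating these tails.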
\subsection{\texorpdfstring{Adaptation to sparse signals when \mbox{$p\! \ge n$}}
{Adaptation to sparse signals when p >= n}}
We first give a version of Theorem~\ref{gauss} for general (not
necessarily Gaussian) design matrices. The proofs imply that part~(B)
actually holds also for $p \le n$ and for $0<\beta_1 < \beta_0 \le1$.
\begin{theorem}[(Moderately sparse case)]\label{sp0}
Let $p \ge n$,
$0<\beta_1 \le1/2 < \beta_0 \le1$ and let $k_0 \sim p^{1-\beta_0}<k_1
\sim p^{1-\beta_1}$ such that $k_0 = o(\sqrt n / \log p)$.
\begin{longlist}[(A)]
\item[(A) Lower bound.] Assume Condition~\ref{uncor}\textup{(a)} and
that $\log^3 p = o(n)$. Suppose for some separation sequence $\rho_{np}
\ge0$ and some $0<\alpha$, $\alpha' <1/3$, the confidence set
$C_{np}$ is both honest over $\Theta(\rho_{np})$ and adapts to
sparsity in the sense of (\ref{ad1}), (\ref{ad2}). Then necessarily
\[
\liminf_{n,p} \frac{\rho_{np}}{n^{-1/4}}>0.
\]
\item[(B) Upper bound.] Assume Condition~\ref{subgauss} and $k_1 =
o (n/\log p)$. Then for every $0<\alpha$, $\alpha'<1$ there exists a
sequence $\rho_{np} \ge0$ satisfying
\[
\limsup_{n,p} \frac{\rho_{np}}{n^{-1/4}}<\infty
\]
and a level $\alpha$-confidence set $C_{np}$ that is honest over
$\Theta (\rho_{np})$ and that adapts to sparsity in the sense of
(\ref{ad1}), (\ref{ad2}).
\end{longlist}
\end{theorem}
We next consider restricting the maximal parameter space itself to
highly sparse $\theta\in B_0(k_1), \beta_1 > 1/2$. If the rate of
estimation in $B_0(k_1)$ accelerates beyond $n^{-1/4}$, then one can
take advantage of this fact, although separation of $B_0(k_0)$ and
$B_0(k_1)$ is still necessary to obtain sparse adaptive confidence
sets. The following result holds also for $p \le n$.
\begin{theorem}[(Highly sparse case)]\label{sp3}
Let $1/2 < \beta_1 <
\beta_0 \le1$ and let $k_0 \sim p^{1-\beta_0}<k_1 \sim p^{1-\beta_1}$
such that $k_0 = o(\sqrt n / \log p)$.
\begin{longlist}[(A)]
\item[(A) Lower bound.] Assume Condition~\ref{uncor}\textup{(a)} and
that $\log ^3 p = o(n)$. Suppose for some separation sequence $\rho_{np}
\ge0$ and some $0<\alpha, \alpha' <1/3$, the confidence set
$C_{np}$ is both honest over $\Theta(\rho_{np})$ and adapts to
sparsity in the sense of (\ref{ad1}), (\ref{ad2}). Then necessarily
\[
\liminf_{n,p} \frac{\rho_{np}}{\min (\sqrt{\log p \times
({k_1}/{n})}, n^{-1/4} )}>0.
\]
\item[(B) Upper bound.] Assume Condition~\ref{subgauss} and that
$k_1= o (n/\log p)$. Then for every $0<\alpha', \alpha<1$ there
exists a sequence $\rho_{np} \ge0$ satisfying
\[
\limsup_{n,p} \frac{\rho_{np}}{\min (\sqrt{\log p \times
({k_1}/{n})}, n^{-1/4} )}<\infty
\]
and a level $\alpha$-confidence set $C_{np}$ that is honest over
$\Theta (\rho_{np})$ and that adapts to sparsity in the sense of
(\ref{ad1}), (\ref{ad2}).
\end{longlist}
\end{theorem}
\subsection{\texorpdfstring{The case $p\le n$---approaching nonparametric models}
{The case p <= n---approaching nonparametric models}}
\label{plen}
The case of highly sparse alternatives and $p \le n$ was already
considered in Theorem~\ref{sp3}, explaining the presence of $\sqrt
{(k_1 \log p)/n}$ in (\ref{unifrat}). We thus now restrict to $0 <\beta
_1 \le1/2$ and, moreover, to highlight the main ideas, also to $\beta
_0 > 1/2$ corresponding to the most interesting highly sparse
null-models. We now require from any confidence set $C_n$ the
conditions (\ref{h2}), (\ref{ad1}), (\ref{ad2}) with the
infimum/supremum there intersected with
\[
B_r(M)= \Biggl\{\theta\in\mathbb R^p\dvtx \|\theta
\|^r_r = \sum_{j=1}^p
|\theta_j|^r \le M^r \Biggr\}.
\]
Let us denote the new conditions by (\ref{h2}$'$), (\ref{ad1}$'$),
(\ref{ad2}$'$).
\begin{theorem} \label{sp4} Assume $p \le n$, let $0 < \beta_1 \le1/2
< \beta_0 \le1, 0<M<\infty$, and let $k_0 \sim p^{1-\beta_0}<k_1 \sim
p^{1-\beta_1}$.
\begin{longlist}[(A)]
\item[(A) Lower bound.] Assume Condition~\ref{uncor}\textup{(a)}, and
suppose for some separation sequence $\rho_{np} \ge0$ and some
$0<\alpha, \alpha'<1/3$, the confidence set $C_{np}$ is both honest
over $\Theta (\rho_{np}) \cap B_r(M)$ and adapts to sparsity in the
sense of (\ref{ad1}$'$), (\ref{ad2}$'$). If $r=2$ or if $r=1$ and
$p = O(n^{2/3})$, then necessarily
\[
\liminf_{n,p} \frac{\rho_{np}}{p^{1/4}n^{-1/2}}>0.
\]
\item[(B) Upper bound.] Assume Condition~\ref{uncor}\textup{(b)} holds
and
either $r=1$, $k_1= o (n/\log p)$ or $r=2$, $\beta_0=1$, $k_1= o (\sqrt{n/\log p})$. Then for every $0<\alpha, \alpha'<1$ there exists a
sequence $\rho_{np} \ge0$ satisfying
\[
\limsup_{n,p} \frac{\rho_{np}}{p^{1/4}n^{-1/2}}<\infty
\]
and a level $\alpha$-confidence set $C\equiv C(n,p, b,M)$ that is
honest over $\Theta(\rho_{np}) \cap\{\theta\dvtx \|\theta\|_r \le M\}$
and that adapts to sparsity in the sense of (\ref{ad1}$'$),
(\ref{ad2}$'$).
\end{longlist}
\end{theorem}
The rate $\rho$ in the previous theorems is related to the results in
\citet{BN12} and approaches, for $p = \operatorname{const}$, the
parametric theory, where the separation rate equals, quite naturally,
$1/\sqrt n$. This is in line with the findings in \citet{P09},
\citet{PS11} in the $p \le n$ setting, who point out that a class
of specific but common sparse estimators cannot reliably be used for
the construction of confidence sets.
\section{Proofs}
All lower bounds are proved in Section~\ref{lbsec}. The proofs of
existence of confidence sets are given in Section~\ref{comp}. Theorem
\ref{n4} is proved at the end, after some auxiliary results that are
required throughout.
\subsection{\texorpdfstring{Proof of Theorems \protect\ref{gauss} (necessity), \protect\ref{sp0}\textup{(A)},
\protect\ref{sp3}\textup{(A)}, \protect\ref{sp4}\textup{(A)}}
{Proof of Theorems 2 (necessity), 3(A), 4(A), 5(A)}} \label{lbsec}
The necessity part of Theorem~\ref{gauss} follows from Theorem
\ref{sp0}(A) since any i.i.d. Gaussian matrix satisfies
Condition~\ref{uncor}(a), and since its assumptions imply the growth
condition $\log^3 p = o(n)$. Except for the $\ell^r$-norm restrictions
of Theorem~\ref{sp4} discussed at the end of the proof, Theorems
\ref{sp0}(A) and~\ref{sp4}(A) can be joined into a single statement
with separation sequence $\min(p^{1/4}n^{-1/2}, n^{-1/4})$, valid for
every $p$. We thus have to consider, for all values of $p$, two cases:
the moderately sparse case $\beta_1<1/2$ with separation lower bound
$\min(p^{1/4}n^{-1/2}, n^{-1/4})$, and the highly sparse case $\beta_1
> 1/2$ with separation lower bound $\min((\log p \times(k_1/n))^{1/2},
n^{-1/4})$. Depending on the case considered,\vspace*{-1pt} denote by
$\rho^*=\rho^*_{np}$ either $\min((\log p \times(k_1/n))^{1/2},
n^{-1/4})$ or $\min (p^{1/4} n^{-1/2}, n^{-1/4} )$.
The main idea of the proof follows the mechanism introduced in
\citet{HN11}. Suppose by way of contradiction that $C$ is a
confidence set as in the relevant theorems, for some sequence
$\rho=\rho_{np}$ such that
\[
\liminf_{n,p} \frac{\rho}{\rho^*}=0.
\]
By passing to a subsequence, we may replace the $\liminf$ by a proper
limit, and we shall in what follows only argue along this subsequence
$n_k\equiv n$. We claim that we can then find a further sequence $\bar
\rho_{np} \equiv\bar\rho$, $\rho^*_{np} \ge\bar\rho_{np} \ge\rho
_{np}$, such that
\begin{equation}
\label{squeeze} \sqrt{\log p \times\frac{k_0}{n}} = o(\bar\rho),\qquad \bar
\rho=o\bigl(\rho^*\bigr),
\end{equation}
that is, $\bar\rho$ can be taken to be squeezed between the rate of
adaptive estimation in the submodel $B_0(k_0)$ and the separation rate
$\rho^*$ that we want to establish as a lower bound. To check that this
is indeed possible, we need to verify that $(\log p \times
(k_0/n))^{1/2}$ is of smaller order than any of the three terms
\[
\sqrt{\log p \times\frac{k_1}{n}},\qquad p^{1/4} n^{-1/2},\qquad n^{-1/4}
\]
appearing in $\rho^*$. This is obvious for the first in view of the
definition of $k_0, k_1$ ($\beta_1 < \beta_0$); follows for the second
from $\beta_0>1/2$; and follows for the third from our assumption $k_0
= o(\sqrt n / \log p)$ [automatically verified in Theorem~\ref{sp4}(A)
as $p \le n$, $\beta_0>1/2$].
For such a sequence $\bar\rho$ consider testing
\[
H_0\dvtx \theta= 0\quad\mbox{vs.}\quad H_1\dvtx \theta\in\tilde
B_0(k_1, \bar\rho).
\]
Using the confidence set $C$, we can test $H_0$ by $\Psi= 1\{C \cap H_1
\neq\varnothing\}$---we reject $H_0$ if $C$ contains any of the
alternatives. The type two errors satisfy
\[
\sup_{\theta\in H_1} E_\theta(1-\Psi) =\sup
_{\theta\in H_1} P_\theta (C \cap H_1 = \varnothing)
\le\sup_{\theta\in H_1} P_\theta(\theta \notin C) \le\alpha+ o(1)
\]
by coverage of $C$ over $H_1 \subset\Theta(\rho)$ (recall $\bar\rho
\ge\rho$). For the type one errors we have, again by coverage, since $0
\in B_0(k_0)$ for any $k_0$, using adaptivity (\ref{ad1}) and
(\ref{squeeze}), that
\[
E_0 \Psi= P_0(C \cap H_1 \neq\varnothing) \le
P_0 \bigl(0 \in C, |C|_2 \ge \bar\rho\bigr) +\alpha+ o(1) \le
\alpha'+ \alpha+o(1).
\]
We conclude from $\max(\alpha', \alpha)<1/3$ that
\begin{equation}
\label{contr} E_0\Psi+ \sup_{\theta\in H_1}
E_\theta(1-\Psi) \le\alpha'+2 \alpha + o(1) < 1 +o(1).
\end{equation}
On the other hand, we now show
\begin{equation}
\label{contra} \liminf_{n,p} \inf_{\Psi}
\Bigl(E_0\Psi+ \sup_{\theta\in H_1} E_\theta (1-\Psi)
\Bigr)\ge1,
\end{equation}
a contradiction, so that
\[
\liminf_{n,p} \frac{\rho}{\rho^*}>0
\]
necessarily must be true. Our argument proceeds by deriving
(\ref{contra}) from Theorem~4.1 in\vspace*{-3pt} \citet{ITV10}. Let $0<c<1$, $b=
\frac{\bar \rho}{c \sqrt{k_1}}$, $h= \frac{c k_1}{p}$, and note that
\begin{equation}
\label{relat} b^2ph =\frac{\bar\rho^2}{c} \ge\bar\rho^2,
\qquad b^2k_0 = o \bigl(b^2ph\bigr)
\end{equation}
using that $k_0 = o(k_1)$. Consider a product prior $\pi$ on $\theta$
with marginal coefficients $\theta_j = b \varepsilon_j$, $j=1, \ldots,
p$, where the $\varepsilon_j$ are i.i.d. with $P(\varepsilon_j=0)=1-h,
P(\varepsilon_j = 1)=P(\varepsilon_j=-1)=h/2$. We show that this prior
asymptotically concentrates on our alternative space $H_1=\tilde
B_0(k_1, \bar\rho)$. Let $Z_j = \varepsilon^2_{j}$ and denote by
$Z_{(j)}$ the corresponding order statistics (counting ties in any
order, for instance, ranking numerically by dimension), then for any
$\delta>0$ and $n$ large enough, using (\ref{relat}),
\begin{eqnarray*}
\pi\bigl(\bigl\|\theta-B_0(k_0)\bigr\|^2 < (1+\delta)
\bar\rho^2\bigr) &=& P \Biggl(b^2\sum
_{j=1}^{p-k_0} Z_{(j)} < (1+\delta)\bar
\rho^2 \Biggr)
\\
&\le& P \Biggl(b^2\sum_{j=1}^{p}
Z_{(j)} < (1+\delta)\bar\rho^2 - b^2k_0
\Biggr)
\\
&\le& P \Biggl(b^2\sum_{j=1}^{p}
\varepsilon_j^2< \bar\rho^2 \Biggr) = \pi
\bigl(\|\theta\|^2 < \bar\rho^2\bigr),
\end{eqnarray*}
which by the proof of Lemma 5.1 in \citet{ITV10} converges to $0$
as $\min(n,p) \to\infty$. Moreover, that lemma also contains the proof
that $\pi(\theta\in B_0(k_1)) \to1$ (identifying $k$ there with our
$k_1$), which thus implies $\pi(\tilde B_0(k_1, \bar\rho)) \to1$ as
$\min(n,p) \to\infty$. The testing lower bound based on this prior,
derived in Theorem 4.1 in \citet{ITV10} (cf. particularly page
1487), then implies (\ref{contra}), which is the desired contradiction.
Finally, for Theorem~\ref{sp4}, note that the above implies immediately
that $\theta\sim\pi$ asymptotically concentrates on any fixed $\ell
^2$-ball. Moreover, $E_\pi\|\theta\|_1 = bph = o(1)$ under the
hypotheses of Theorem~\ref{sp4} when $p = O(n^{2/3})$, and likewise
$\operatorname{Var}_\pi(\|\theta\|_1) = b^2ph$, so we conclude as in
the proof of Lemma 5.1 in \citet{ITV10} that the prior
asymptotically concentrates on any fixed $\ell^1$-ball in this
situation.
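The concentration of the prior $\pi$ used in this lower bound can be illustrated by simulation. The sketch below (toy parameters, all values illustrative) draws from the product prior with $b=\bar\rho/(c\sqrt{k_1})$, $h=ck_1/p$ and checks that draws are $k_1$-sparse and $\bar\rho$-separated from zero with overwhelming frequency; separation from zero is only a proxy for the distance to $B_0(k_0)$ controlled in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
p, k1, c = 5_000, 100, 0.5
rho_bar = 1.0
b, h = rho_bar / (c * np.sqrt(k1)), c * k1 / p   # as in the proof above

n_draws = 1000
# coefficients theta_j = b * eps_j with P(eps=0)=1-h, P(eps=+-1)=h/2
eps = rng.choice([0.0, 1.0, -1.0], size=(n_draws, p), p=[1 - h, h / 2, h / 2])
theta = b * eps

sparsity = (theta != 0).sum(axis=1)              # number of nonzero coords
norms2 = (theta**2).sum(axis=1)                  # ||theta||^2 = b^2 * sparsity

frac_in_B0k1 = np.mean(sparsity <= k1)           # ~ pi(theta in B_0(k_1))
frac_separated = np.mean(norms2 >= rho_bar**2)   # ~ pi(||theta||^2 >= rho_bar^2)
print(frac_in_B0k1, frac_separated)
```

The binomial sparsity concentrates around $ph = ck_1$, well below $k_1$, and $\|\theta\|^2$ concentrates around $b^2ph=\bar\rho^2/c>\bar\rho^2$, matching the two concentration claims derived from Lemma 5.1 in \citet{ITV10}.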
\subsection{\texorpdfstring{Proofs of upper bounds: Theorems \protect\ref{gauss}
(sufficiency), \protect\ref{sp0}\textup{(B)}, \protect\ref{sp3}\textup{(B)},
\protect\ref{sp4}\textup{(B)}}
{Proofs of upper bounds: Theorems 2 (sufficiency), 3(B), 4(B), 5(B)}} \label{comp}
We first note that sufficiency in Theorem~\ref{gauss} follows from
Theorem~\ref{sp0}(B) as i.i.d. Gaussian design satisfies Condition
\ref{subgauss}. The main idea, which is the same for all theorems,
follows \citet{HN11}, \citet{BN12} to solve the composite
testing problem
\begin{equation}
\label{ctest} H_0\dvtx \theta\in B_0(k_0)\quad
\mbox{vs.}\quad H_1\dvtx \theta\in\tilde B_0(k_1,
\rho)
\end{equation}
under the parameter constellations of $k_0, k_1, \rho, p, n$ relevant
in Theorems~\ref{sp0}(B), \ref{sp3}(B), \ref{sp4}(B) [and in the last
case with both hypotheses intersected with $B_r(M)$, suppressed in the
notation in what follows]. Once a minimax test $\Psi$ is available for
which type one and type two errors
\begin{equation}
\label{mtest} \sup_{\theta\in H_0} E_\theta\Psi_n
+ \sup_{\theta\in H_1} E_\theta (1-\Psi_n) \le\gamma
\end{equation}
can be controlled, for $n$ large enough, at any level $\gamma>0$, one
takes $\tilde\theta$ to be the estimator from (\ref{spest}) below with
$\lambda$ chosen as in Lemma~\ref{AssumptionDalemma}, and constructs
the confidence set
\begin{eqnarray*}
C_n=\cases{ \displaystyle\biggl\{\theta\dvtx \|\theta-\tilde\theta\|_2
\le L'\sqrt{\log p \frac{k_0}{n}} \biggr\},&\quad if $
\Psi_n=0$,
\vspace*{4pt}\cr
\displaystyle\biggl\{\theta\dvtx \|\theta-\tilde\theta\|_2
\le L'\sqrt{\log p \frac{k_1}{n}} \biggr\},&\quad if $
\Psi_n=1$.}
\end{eqnarray*}
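The construction of $C_n$ amounts to a ball around $\tilde\theta$ whose radius switches with the test outcome. A minimal sketch of this switching rule (the function name, the placeholder estimator and all parameter values are illustrative assumptions):

```python
import numpy as np

def confidence_set(theta_tilde, psi_n, k0, k1, n, p, L_prime):
    """Centre and radius of the ball C_n: the radius switches between
    the k0-rate (test accepts H_0) and the k1-rate (test rejects)."""
    k = k0 if psi_n == 0 else k1
    radius = L_prime * np.sqrt(np.log(p) * k / n)
    return theta_tilde, radius

# toy usage: a placeholder sparse estimate and both test outcomes
n, p, k0, k1, L_prime = 500, 1000, 5, 50, 2.0
theta_tilde = np.zeros(p)
centre, r0 = confidence_set(theta_tilde, 0, k0, k1, n, p, L_prime)
_, r1 = confidence_set(theta_tilde, 1, k0, k1, n, p, L_prime)
print(r0, r1)   # adaptive (small) vs conservative (large) radius
```

The diameter is thus always at most $L'\sqrt{\log p \times k_1/n}$, and it shrinks to the adaptive $k_0$-rate exactly when the test accepts the sparse submodel, mirroring the honesty and adaptivity arguments that follow.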
Assuming (\ref{mtest}), we now prove that $C_n$ is honest for $B_0(k_0)
\cup\tilde B_0(k_1, \rho_{np})$ if we choose the constant $L'$ large
enough: for $\theta\in B_0(k_0)$ we have from Corollary~\ref{spadest}
below, for $L'$ large,
\begin{eqnarray*}
\inf_{\theta\in B_0(k_0)} P_\theta \{\theta\in C_n \}&
\ge& 1- \sup_{\theta\in B_0(k_0)} P_\theta \biggl\{ \|\tilde\theta-
\theta\| _2 > L'\sqrt{\log p\frac{k_0}{n}} \biggr\}
\to1
\end{eqnarray*}
as $n \to\infty$. When $\theta\in\tilde B_0(k_1, \rho_{np})$, we
have that $P_\theta \{\theta\in C_n \}$ exceeds
\[
1 - \sup_{\theta\in B_0(k_1)}P_\theta \biggl\{ \|\tilde\theta-\theta
\| _2 > L'\sqrt{\log p \frac{k_1}{n}} \biggr\} -
\sup_{\theta\in\tilde
B_0(k_1, \rho_{np})}P_\theta\{\Psi_n =0\}.
\]
The first subtracted term converges to zero for $L'$ large enough, as
before. The second subtracted term can be made less than $\gamma=\alpha
$, using (\ref{mtest}). This proves that $C_n$ is honest. We now turn
to sparse adaptivity of $C_n$: by the definition of $C_n$ we always
have $|C_n| \le L'\sqrt{\log p \times k_1/n} $, so the case $\theta
\in\tilde B_0(k_1, \rho_{np})$ is proved. If $\theta\in B_0(k_0)$,
then
\[
P_\theta \biggl\{|C_n| > L'\sqrt{\log p
\frac{k_0}{n}} \biggr\} =P_\theta\{\Psi_n =1\} \le
\alpha'
\]
by the bound on the type one errors of the test, completing the
reduction of the proof to (\ref{mtest}).
\subsubsection{\texorpdfstring{Proof of Theorem \protect\ref{sp0}\textup{(B)}}
{Proof of Theorem 3(B)}}
Throughout this subsection we impose the assumptions from Theorem
\ref{sp0}---in fact, without the restriction $p \ge n$---and with $\rho
_{np} \ge L_0 n^{-1/4}$ for some sufficiently large constant $L_0$ to be
chosen below. By the arguments from the previous subsection, it suffices to
solve the testing problem (\ref{mtest}) with this choice of $\rho$, for
any $\gamma>0$. Define $t_n(\theta')$, $T_n$ as in (\ref{testdef}) and
the test $\Psi_n = 1\{T_n \ge u_\gamma\}$ where $u_\gamma$ is a
suitable fixed quantile constant such that, for every $\theta\in
B_0(k_0)$, the type one error $E_\theta\Psi_n$ is bounded by
\begin{eqnarray}
\label{tone} \qquad P_\theta(T_n \ge u_\gamma) &\le&
P_\theta\bigl(\bigl|t_n(\theta)\bigr| \ge u_\gamma\bigr) =
P_\theta \Biggl(\frac{1}{\sqrt{2n}} \sum_{i=1}^n
\bigl(\varepsilon_i^2-1\bigr) \ge u_\gamma
\Biggr) \le\gamma.
\end{eqnarray}
For the type two errors $\theta\in H_1$, let $\theta^*$ be a minimiser
in $T_n$ (if the infimum is not attained, the argument below requires
obvious modifications). Then
\begin{eqnarray*}
\sqrt{2n} t_n\bigl(\theta^*\bigr) &=&\sum
_{i=1}^n \bigl[\bigl(Y_i-\bigl(X
\theta^*\bigr)_i\bigr)^2-1\bigr]
\\
&=&\sum_{i=1}^n \bigl[
\bigl(Y_i-(X\theta)_i + (X\theta)_i-\bigl(X
\theta^*\bigr)_i\bigr)^2-1\bigr]
\\
&=& \sum_{i=1}^n \bigl[
\bigl(Y_i-(X\theta)_i\bigr)^2-1\bigr] + 2
\bigl\langle Y-X\theta, X\bigl(\theta -\theta^*\bigr) \bigr\rangle+ \bigl\|X\bigl(
\theta-\theta^*\bigr)\bigr\|^2,
\end{eqnarray*}
so the type two errors $E_\theta(1-\Psi_n)$ are controlled by
\begin{eqnarray}
\label{split} && P_\theta \Biggl(\Biggl\llvert \sum
_{i=1}^n \bigl[\bigl(Y_i-(X
\theta)_i\bigr)^2-1\bigr]+ 2\bigl\langle Y-X\theta, X
\bigl(\theta-\theta^*\bigr) \bigr\rangle\nonumber
\\[-5pt]
&&\hspace*{160pt}{} + \bigl\|X\bigl(\theta-\theta^*\bigr)
\bigr\|^2 \Biggr\rrvert < \sqrt{2n} u_\gamma \Biggr)\nonumber
\nonumber\\[-8pt]\\[-8pt]
&&\qquad \le P_\theta \Biggl(\Biggl\llvert \sum_{i=1}^n
\bigl(\varepsilon_i^2-1\bigr) \Biggr\rrvert >
\frac{\|X(\theta-\theta^*)\|^2}{2} - \sqrt{n} u_\gamma \Biggr)
\nonumber
\\
&&\quad\qquad{} + P_\theta \biggl(\bigl\llvert 2\bigl\langle\varepsilon, X\bigl(
\theta-\theta^*\bigr) \bigr\rangle\bigr\rrvert > \frac{\|X(\theta-\theta^*)\|^2}{2} - \sqrt{n}
u_\gamma \biggr).\nonumber
\end{eqnarray}
Since $\theta^* \in B_0(k_0), \theta\in\tilde B_0(k_1, \rho)$ and
$k_0+k_1 = o(n/ \log p)$, we have, from Corollary
\ref{AssumptionAcorollary} below with $t= (k_0+k_1) \log p$ that, for
$n$ large enough and with probability at least $1-4e^{-(k_0+k_1) \log
p} \to1$,
\begin{eqnarray}
\label{ripl} \bigl\|X\bigl(\theta-\theta^*\bigr)\bigr\|^2 &\ge&\inf
_{\theta' \in H_0} \bigl\|X\bigl(\theta-\theta '\bigr)
\bigr\|^2 \ge c(\Lambda_{\min}) n\rho_{np}^2
\ge L' \sqrt n
\end{eqnarray}
for every $L'>0$ (choosing $L_0$ large enough). We thus restrict to
this event. The probability in the last but one line of (\ref{split})
is then bounded by
\[
P_\theta \Biggl(\Biggl\llvert \sum_{i=1}^n
\bigl(\varepsilon_i^2-1\bigr) \Biggr\rrvert > \sqrt {n}
\bigl(L'-u_\gamma\bigr) \Biggr)
\]
for $n$ large enough, which can be made as small as desired by choosing
$L' \ge4u_\gamma$, as in (\ref{tone}). Likewise, the last probability
in the display (\ref{split}) is bounded, for $n$ large enough, by
\[
P_\theta \biggl(\bigl\llvert 2\bigl\langle\varepsilon, X\bigl(\theta-
\theta^*\bigr) \bigr\rangle \bigr\rrvert > \frac{\|X(\theta-\theta^*)\|^2}{4} \biggr) \le
P_\theta \biggl(\sup_{\theta' \in H_0} \frac{2\llvert \langle\varepsilon, X(\theta
-\theta') \rangle\rrvert }{\|X(\theta-\theta')\|^2}>
\frac{1}{4} \biggr),
\]
which converges to zero for large enough separation constant $L_0$,
uniformly in $\tilde B_0(k_1, \rho)$, proved in Lemma
\ref{AssumptionBlemma} below [using the lower bound (\ref{ripl}) for
$\| X(\theta-\theta')\|$ and that $\sqrt{k_0 \log p /n}=o(n^{-1/4})$].
\subsubsection{\texorpdfstring{Proof of Theorem \protect\ref{sp3}\textup{(B)}}{Proof of Theorem 4(B)}}
Throughout this subsection we impose the assumptions from Theorem
\ref{sp3}(B), with $\rho_{np}$ exceeding $L_0 \sqrt{(k_1/n) \log p}$
for some sufficiently large constant $L_0$ to be chosen below (the
$n^{-1/4}$-regime was treated already in Theorem~\ref{sp0}(B), whose
proof holds for all $p$). By the arguments from the beginning of
Section~\ref{comp}, it suffices to solve the testing problem
(\ref{mtest}) with this choice of $\rho$, for any level $\gamma>0$. Let
$\tilde\theta$ be the estimator from (\ref{spest}) below with $\lambda$
chosen as in Corollary~\ref{spadest} below, and define the test
statistic
\[
T_n = \inf_{\theta\in B_0(k_0)} \|\tilde\theta- \theta
\|^2,\qquad\Psi_n = 1 \biggl\{T_n \ge D \log p
\frac{k_1}{n} \biggr\}
\]
for $D$ to be chosen. The type one errors satisfy, uniformly in $\theta
\in H_0$, for $D$ large enough,
\[
E_\theta\Psi_n \le P_\theta \biggl(\|\tilde\theta-
\theta\|^2 \ge D\log p \frac{k_1}{n} \biggr) \to0
\]
as $\min(p,n) \to\infty$, by Corollary~\ref{spadest}. Likewise, we
bound $E_\theta(1-\Psi_n)$ under $\theta\in\tilde B_0(k_1, \rho)$, for
some $\theta^* \in B_0(k_0)$, by the triangle inequality,
\begin{eqnarray*}
P_\theta \biggl(\bigl\|\tilde\theta- \theta^*\bigr\|_2^2
< D\log p \frac{k_1}{n} \biggr) &\le& P_\theta \biggl(\|\tilde\theta-
\theta\| > \bigl\|\theta ^*-\theta\bigr\| - \sqrt{D\log p \frac{k_1}{n}} \biggr)
\\
& \le& P_\theta \biggl(\|\tilde\theta- \theta\|^2
\ge(L_0-\sqrt D)^2\log p \frac{k_1}{n} \biggr) \to0
\end{eqnarray*}
for $L_0$ large enough, again by Corollary~\ref{spadest} below.
\subsubsection{\texorpdfstring{Proof of Theorem \protect\ref{sp4}\textup{(B)}}{Proof of Theorem 5(B)}}
Throughout this subsection we impose the assumptions from Theorem
\ref{sp4}(B), with $\rho_{np} \ge L_0 p^{1/4}/\sqrt n$ for some
sufficiently large constant $L_0$ to be chosen below. By the arguments from the
beginning of Section~\ref{comp}, it suffices to solve the testing
problem (\ref{mtest}) [with both hypotheses there intersected with
$B_r(M)$] for this choice of $\rho$ and any level $\gamma>0$. For
$\theta' \in \mathbb R^p$ we define the $U$-statistic
\[
U_n\bigl(\theta'\bigr) = \frac{2}{n(n-1)} \sum
_{i<k} \sum_{j=1}^p
\bigl(Y_iX_{ij}-\theta_j'\bigr)
\bigl(Y_k X_{kj}-\theta_j'
\bigr),
\]
which equals $\|n^{-1}X^TY - \theta'\|^2$ with diagonal terms ($i=k$)
removed. Then
\begin{eqnarray}\label{exp}
\frac{1}{n}E_\theta X^TY &=&
E_\theta \biggl(\frac{1}{n}X^TX \biggr) \theta =
\theta,
\nonumber\\[-8pt]\\[-8pt]
E_\theta Y_1X_{1j} &=&
\theta_j,\qquad E_\theta U_n\bigl(
\theta'\bigr) = \bigl\| \theta-\theta'\bigr\|^2\nonumber
\end{eqnarray}
and we define the test statistic and test as
\[
T_n = \inf_{\theta' \in B_0(k_0)} \bigl|U_n\bigl(
\theta'\bigr)\bigr|,\qquad \Psi_n = 1 \biggl\{T_n
\ge u_\gamma\frac{\sqrt p}{n} \biggr\}
\]
for $u_\gamma$ quantile constants specified below. For the type one
errors we have, uniformly in $H_0$, by Chebyshev's inequality
\begin{equation}
\label{varest} \qquad E_\theta\Psi_n = P_\theta
\biggl(T_n \ge u_\gamma\frac{\sqrt
p}{n} \biggr) \le
P_\theta \biggl(\bigl|U_n(\theta)\bigr| \ge u_\gamma
\frac{\sqrt
p}{n} \biggr) \le\frac{\operatorname{Var}(U_n(\theta))}{u^2_\gamma} \frac{n^2}{p}.
\end{equation}
Under $P_\theta$ the $U$-statistic $U_n(\theta)$ is fully centered [cf.
(\ref{exp})], and by standard \mbox{$U$-}statistic arguments the variance can
be bounded by $\operatorname{Var}_\theta(U_n(\theta)) \le D p/n^2$ for
some constant $D$ depending only on $M$ and\vspace*{1pt} $\max_{1\le j \le
p}EX^4_{1j} \le b^4$; see, for instance, display (6.6) in
\citet{ITV10} and the arguments preceding it. We can thus choose
$u_\gamma= u_\gamma(M, b)$ to control the type one errors in~(\ref{varest}).
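The unbiasedness identity $E_\theta U_n(\theta') = \|\theta-\theta'\|^2$ from (\ref{exp}) can be checked by Monte Carlo. The sketch below (toy dimensions, illustrative parameters) computes $U_n(\theta')$ via the per-coordinate identity $\sum_{i\neq k} a_i a_k = (\sum_i a_i)^2 - \sum_i a_i^2$ and averages over repeated draws with Gaussian design.

```python
import numpy as np

def U_n(y, X, theta_prime):
    """U-statistic ||n^{-1} X^T y - theta'||^2 with diagonal (i = k)
    terms removed, using, for each coordinate j,
    sum_{i != k} A_ij A_kj = (sum_i A_ij)^2 - sum_i A_ij^2."""
    n = len(y)
    A = y[:, None] * X - theta_prime[None, :]   # A[i, j] = Y_i X_ij - theta'_j
    col_sum = A.sum(axis=0)
    col_sq = (A**2).sum(axis=0)
    return ((col_sum**2 - col_sq).sum()) / (n * (n - 1))

rng = np.random.default_rng(3)
n, p = 100, 20
theta = np.zeros(p)
theta[:3] = 1.0                 # true parameter, ||theta - 0||^2 = 3
theta_prime = np.zeros(p)

# Monte Carlo check of E_theta U_n(theta') = ||theta - theta'||^2
vals = []
for _ in range(400):
    X = rng.standard_normal((n, p))
    y = X @ theta + rng.standard_normal(n)
    vals.append(U_n(y, X, theta_prime))
print(np.mean(vals), np.sum((theta - theta_prime) ** 2))   # both near 3
```

Removing the diagonal is what makes $U_n(\theta)$ fully centered under $P_\theta$, which is exactly the property exploited in the variance bound (\ref{varest}).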
We now turn to the type two errors and assume $\theta\in\tilde B_0(k_1,
\rho)$: let $\theta^*$ be a minimiser in $T_n$, then $U_n(\theta^*)$
has Hoeffding decomposition $U_n(\theta^*) = U_n(\theta ) +
2L_n(\theta^*) + \|\theta^*-\theta\|^2$ with the linear term
\[
L_n\bigl(\theta'\bigr) = \frac{1}{n} \sum
_{i=1}^n \sum_{j=1}^p
(\theta_j - Y_iX_{ij}) \bigl(
\theta_j - \theta_j'\bigr).
\]
We can thus bound the type two errors $E_\theta(1-\Psi_n)$ as follows:
\begin{eqnarray*}
P_\theta \biggl(T_n < u_\gamma\frac{\sqrt p}{n}
\biggr) & \le& P_\theta \biggl(\bigl|U_n(\theta)\bigr| +
2\bigl|L_n\bigl(\theta^*\bigr)\bigr| \ge\bigl\|\theta-\theta^*\bigr\|^2 -
u_\gamma\frac{\sqrt p}{n} \biggr)
\\
& \le& P_\theta \biggl(\bigl|U_n(\theta)\bigr| \ge\frac{\|\theta-\theta^*\|^2}{2} -
u_\gamma\frac{\sqrt p}{2n} \biggr)
\\
&&{} +P_\theta \biggl(\bigl|L_n\bigl(\theta^*\bigr)\bigr| \ge
\frac{\|\theta-\theta^*\|
^2}{4} - u_\gamma\frac{\sqrt p}{4n} \biggr).
\end{eqnarray*}
By hypothesis on $\rho_{np}$ we can find $L_0$ large enough such that
$\|\theta-\theta^*\|^2 \ge\inf_{\theta' \in H_0} \|\theta-\theta'\|^2
\ge L \sqrt p/n$ for any $L>0$, so that the first probability in the
previous display can be bounded by $P_\theta(|U_n(\theta)| > u_\gamma
\sqrt p/n)$, which involves a~fully centered $U$-statistic and can thus
be dealt with as in the case of type one errors. The critical term is
the linear term, which, by the above estimate on $\|\theta-\theta^*\|$,
is less than or equal to
\[
P_\theta \biggl(\bigl|L_n\bigl(\theta^*\bigr)\bigr| \ge
\frac{\|\theta-\theta^*\|^2}{8} \biggr) \le P_\theta \biggl(\sup_{\theta' \in H_0}
\frac{|L_n(\theta')|}{\|
\theta-\theta'\|^2} > \frac{1}{8} \biggr).
\]
The process $L_n(\theta')$ can be written as
\begin{eqnarray*}
\bigl\langle\theta- n^{-1}X^TY, \theta-
\theta' \bigr\rangle &=& \bigl\langle\theta- n^{-1}X^TX
\theta, \theta- \theta' \bigr\rangle- \bigl\langle n^{-1}
X^T\varepsilon, \theta- \theta' \bigr\rangle
\\
& =& \frac{1}{n} \bigl\langle\bigl(E_\theta X^TX -
X^TX\bigr)\theta, \theta- \theta' \bigr\rangle-
\frac{1}{n}\bigl\langle\varepsilon, X\bigl(\theta- \theta'
\bigr) \bigr\rangle
\\
& \equiv & L^{(1)}_n\bigl(\theta'\bigr) +
L_n^{(2)}\bigl(\theta'\bigr)
\end{eqnarray*}
and we can thus bound the last probability by
\begin{equation}
\label{2terms} P_\theta \biggl(\sup_{\theta' \in H_0}
\frac{|L^{(1)}_n(\theta')|}{\|
\theta-\theta'\|^2} > \frac{1}{16} \biggr) + P_\theta \biggl(\sup
_{\theta
' \in H_0} \frac{|L^{(2)}_n(\theta')|}{\|\theta-\theta'\|^2} > \frac
{1}{16} \biggr).
\end{equation}
To show that the probability involving the second process approaches
zero, it suffices to show that
\begin{equation}
P_\theta \biggl(\sup_{\theta' \in H_0} \frac{\llvert \varepsilon^TX(\theta
-\theta')/n \rrvert }{\|X(\theta-\theta')\|^2/n} >
\frac{1}{16 \Lambda
} \biggr)
\end{equation}
converges to zero, using that $\sup_{v \in B_0(k_1)}\|Xv\|_2^2/(n\|v\|
_2^2) \le\Lambda$ for some $0<\Lambda<\infty$, on events of probability
approaching one, by Lemma~\ref{AssumptionAlemma} [noting $k_0+k_1 =
o(n/\log p)$]. By Lemma~\ref{AssumptionBlemma}\vadjust{\goodbreak} this last probability
approaches zero as \mbox{$\min(n,p)\to\infty$,} for $L_0$ large enough, noting
that the lower bound on $R_t$ there is satisfied for our separation
sequence $\rho_{np}$, by Corollary~\ref{AssumptionAcorollary} and
since $(k_0/n) \log p =o(p^{1/2}/n)$ in view of $\beta_0>1/2$.
Likewise, using the preceding arguments with Lemma~\ref{peel} instead
of Lemma~\ref{AssumptionBlemma}, the probability involving the first
process also converges to zero, which completes the proof.
\subsection{Remaining proofs}
\begin{lemma} \label{AssumptionAlemma} Assume Condition~\ref{subgauss}\textup{(a)} and denote by $P$ the law of $X$. Let $\theta\in
B_0(k_1)$ and $k \in\{ 1, \ldots, p \}$. Then for some constants
$\sigma$ and $\kappa$ depending only on $\sigma_0$ and $\kappa_0$,
$C_{k,k_1,p}\equiv(k+k_1+1) \log(25p)$ and for all $t >0$,
\begin{eqnarray*}
&& P \biggl( \sup_{\theta^{\prime} \in B_0(k), ( \theta^{\prime} -
\theta){}^T\Sigma( \theta^{\prime} - \theta) \neq0 } \biggl|{ (\theta^{\prime} - \theta){}^T \hat\Sigma(\theta^{\prime} -
\theta) \over
(\theta^{\prime} - \theta){}^T\Sigma(\theta^{\prime} - \theta)
} -1 \biggr|
\\
&&\hspace*{75pt}{} \ge
4 \sigma\sqrt{ t + C_{k,k_1,p} \over n }+ 4 \kappa{ t +
C_{k,k_1,p} \over n} \biggr) \le4
\exp[-t].
\end{eqnarray*}
\end{lemma}
\begin{corollary}\label{AssumptionAcorollary}
Let $X$ satisfy Conditions~\ref{subgauss}\textup{(a)} and~\ref{subgauss}\textup{(b)}. Let
$\sigma$, $\kappa$, $\theta$, $k$, $k_1$, $C_{k,k_1,p}$ be defined as in Lemma
\ref{AssumptionAlemma}. Suppose that $k$, $k_1$ and $t>0$ are such that
\[
\biggl( { 8C_{k,k_1,p}\over n } \vee{8 t \over n } \biggr) \le \biggl(
{1 \over4 ( \sigma\vee\kappa) } \wedge1 \biggr).
\]
Then for all $\theta\in B_0(k_1)$,
\[
P_{\theta} \biggl( \bigl(\theta^{\prime} - \theta
\bigr){}^T \hat\Sigma\bigl(\theta^{\prime} - \theta\bigr) \ge
{1 \over2} \bigl\| \theta^{\prime} - \theta\bigr\|^2
\Lambda_{\min}^2\ \forall \theta^{\prime} \in
B_0(k) \biggr) \ge1- 4 \exp[-t].
\]
\end{corollary}
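As a numerical aside (not part of the proofs), the following Monte Carlo sketch illustrates the conclusions of Lemma~\ref{AssumptionAlemma} and Corollary~\ref{AssumptionAcorollary} in the simplest identity-covariance case $\Sigma = I_p$ (so that $\Lambda_{\min}=1$): over sparse directions $u$, the empirical quadratic form $u^T\hat\Sigma u$ stays within a relative error of order $\sqrt{(k+k_1)\log p/n}$ of $u^T\Sigma u$. All sizes are illustrative; this replaces the proof's chaining argument by direct sampling of sparse vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k_tot = 2000, 50, 5            # k_tot plays the role of k + k_1 (illustrative)
X = rng.standard_normal((n, p))      # rows ~ N(0, I_p), so Sigma = I_p
Sigma_hat = X.T @ X / n

# Sample many k_tot-sparse directions u and track |u' Sigma_hat u / u' Sigma u - 1|.
worst = 0.0
for _ in range(200):
    S = rng.choice(p, size=k_tot, replace=False)
    u = np.zeros(p)
    u[S] = rng.standard_normal(k_tot)
    ratio = (u @ Sigma_hat @ u) / (u @ u)   # u' Sigma u = ||u||^2 since Sigma = I
    worst = max(worst, abs(ratio - 1.0))

print(f"worst relative error over sampled sparse u: {worst:.3f}")
# Predicted scale sqrt(k_tot * log(p) / n) is about 0.10 here, comfortably
# below the 1/2 needed for the corollary's lower bound.
```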
\begin{pf*}{Proof of Lemma~\ref{AssumptionAlemma}}
The vector $\theta
'-\theta$ has at most $k+k_1$ nonzero entries; in the lemma we may thus
replace $\theta'-\theta$ by a fixed vector in $B_0(k+k_1)$ and take the
supremum over all $k+k_1$-sparse nonzero vectors. In abuse of notation
let us still write $\theta'$ for any such vector, and fix a set $S
\subset\{ 1, \ldots, p \}$ with cardinality $|S|=k+k_1$. Let
$\mathbb{R}_S^p:= \{ \theta\in\mathbb{R}^p\dvtx \theta_j = 0\ \forall
j \notin S \} $. We will show, for $\bar C(t,n) \equiv(t + 2(k+k_1)
\log5)/ n$, that
\[
P \biggl( \sup_{\theta^{\prime} \in\mathbb{R}_S^p, ( \theta^{\prime}){}^T
\Sigma\theta^{\prime} \neq0 } \biggl| { ( \theta^{\prime} ){}^T \hat\Sigma\theta^{\prime} \over
( \theta^{\prime} ){}^T \Sigma\theta^{\prime}} -1 \biggr| \ge 4
\sigma\sqrt{\bar C(t,n) } + 4\kappa{\bar C(t,n)} \biggr) \le4 \exp[-t].
\]
Since there are ${ p \choose(k+k_1)} \le p^{(k+k_1)} $ sets $S$ of
cardinality $k+k_1$, the result then follows from the union bound. To
establish the inequality in the last display, it suffices to show
\begin{equation}
\label{toshowequation} P \Bigl( \sup_{\theta^{\prime} \in\mathcal{B}_S } \bigl| \bigl(
\theta^{\prime} \bigr){}^T \Phi\theta^{\prime} \bigr| \ge 4
\sigma\sqrt{\bar C(t,n) } + 4 \kappa{\bar C(t,n)} \Bigr) \le4 e^{-t},
\end{equation}
where $\mathcal{B}_S:= \{ \theta^{\prime} \in\mathbb{R}_S^p\dvtx
(\theta^{\prime}){}^T \Sigma\theta^{\prime} \le1 \}$ and $\Phi:= \hat
\Sigma- \Sigma$.\vadjust{\goodbreak}
We use\vspace*{-1pt} the notation $\| X u \|_{\Sigma} ^2:= u^T \Sigma u $, $u \in
\mathbb{R}^p$, and we let for $0< \delta< 1 $, $\{ X \theta_S^l
\}_{l=1}^{N(\delta)}$ be a minimal $\delta$-covering of $(\{ X
\theta^{\prime}\dvtx \theta^{\prime} \in\mathcal{B}_S \}, \| \cdot
\|_{\Sigma} )$. Thus, for every $\theta^{\prime} \in\mathcal{B}_S$
there is a $\theta^l = \theta_S^l (\theta^{\prime}) $ such that $ \| X
( \theta^{\prime} - \theta^l ) \|_{\Sigma} \le\delta$. Note that $\{
\theta_S^l \} \subset\mathbb{R}_S^p$. Following an idea of
\citet{loh2012}, we then have
\[
\sup_{\theta^{\prime} \in\mathcal{B}_S } \bigl| \bigl( \theta^{\prime} -
\theta_S^l \bigl(\theta^{\prime}\bigr)
\bigr){}^T \Phi\bigl( \theta^{\prime} - \theta_S^l
\bigl(\theta^{\prime} \bigr) \bigr) \bigr| \le\delta^2 \sup
_{\vartheta\in\mathcal{B}_S } \vartheta^T \Phi\vartheta
\]
and also that $ \sup_{\theta^{\prime}
\in\mathcal{B}_S } | ( \theta^{\prime} - \theta_S^l (\theta^{\prime} )
){}^T \Phi\theta | \le\delta\sup_{\vartheta\in\mathcal{B}_S } |
\vartheta^T \Phi \vartheta| $. This implies with $\delta= 1/3$ that
\[
\sup_{\theta' \in \mathcal B_S} \bigl|\bigl(\theta'\bigr){}^T\Phi \theta'\bigr| \le (9/2) \max_{l=1, \ldots, N(1/3)} \bigl|\bigl(\theta^l_S\bigr){}^T\Phi \theta_S^l\bigr|.
\]
Condition~\ref{subgauss}(a) ensures that for some constants $\sigma$
and $\kappa$ depending only on $\sigma_0$ and $\kappa_0$, for any $u$
with $\| X u \| _\Sigma\le1 $, and
any $t >0$, it holds that
\[
P \biggl( \bigl| u^T \Phi u \bigr| \ge\sigma\sqrt{t \over n} +
\kappa{t
\over n} \biggr) \le2 \exp[-t].
\]
This follows from the fact that the $((Xu)_i)$ are
sub-Gaussian; hence the squares $((Xu)_i^2)$ are
sub-exponential. Bernstein's inequality can therefore be used [e.g.,
\citet{BvdG2011}, Lemma 14.9]. Finally, the covering number of a
ball in $k+k_1$-dimensional space is well known. Apply, for example,
Lemma 14.27 in \citet{BvdG2011}: $N(\delta) \le( ( 2+ \delta)/
\delta)^{k+k_1} $. If we take $\delta=1/3$, this gives $N(1/3)
\le 9^{k+k_1}$. The union bound then proves (\ref{toshowequation}).
\end{pf*}
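The bookkeeping behind the constant $C_{k,k_1,p}=(k+k_1+1)\log(25p)$ in the last proof can be checked directly: the union bound pays for at most $p^m$ support sets of size $m=k+k_1$ and, per support, a $1/3$-net of size at most $9^m$, and $p^m\cdot 9^m = (9p)^m \le (25p)^{m+1} = \exp(C_{k,k_1,p})$. The following sketch (an illustration only, not needed for the proof) verifies this numerically.

```python
import math

# Check that choose(p, m) support sets times a 1/3-net of size <= 9^m
# fit inside exp(C_{k,k1,p}) with C_{k,k1,p} = (m + 1) log(25 p), m = k + k1.
def budget_ok(p, m):
    supports = math.comb(p, m)        # number of supports, <= p^m
    net = 9 ** m                      # covering number bound N(1/3) <= 9^m
    return supports * net <= math.exp((m + 1) * math.log(25 * p))

assert all(budget_ok(p, m) for p in (10, 100, 1000) for m in (1, 2, 5, 8))
print("union-bound budget check passed")
```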
\subsubsection{\texorpdfstring{A ratio-bound for $\theta' \mapsto\varepsilon^TX(\theta -\theta')$}
{A ratio-bound for theta' -> epsilon TX(theta - theta')}}
\begin{lemma} \label{AssumptionBlemma} Suppose that $\varepsilon\sim
N (0, I)$ is independent of $X$. Let $\delta>0$. Then for any $t \ge
\max(1/\delta, 1)$, and for $R_t = t C_0 \sqrt{k_0 \log p / n} $ where
$C_0$ is a universal constant, we have for some universal constants
$C_1$ and $C_2$,
\begin{eqnarray*}
&& P \biggl( \sup_{\theta^{\prime} \in B_0 (k_0), \| X
(\theta- \theta ^{\prime} ) \|_n > R_t} { |\varepsilon^T X( \theta-
\theta^{\prime} )| / n \over\| X ( \theta- \theta^{\prime} )\|_n^2 }
\ge\delta \bigg| X \biggr)
\\
&&\qquad \le C_1 \exp \biggl[- { t^2 \delta^2 k_0 \log p \over C_2 } \biggr].
\end{eqnarray*}
\end{lemma}
\begin{pf}
Let $\mathcal{G}_R (\theta):= \{ \theta^{\prime}\dvtx \| X ( \theta-
\theta^{\prime} )\|_n \le R, \theta^{\prime} \in B_0 (k_0) \} $. Then,
using the bound $\log{ p \choose k_0 } \le k_0 \log p $ and, for
example, Lemma 14.27 in \citet{BvdG2011}, we have, for $H(u, B,
\|\cdot\|) = \log N(u, B, \|\cdot\|)$ the logarithm of the usual
$u$-covering number of a subset $B$ of a normed space
\begin{eqnarray}
H\bigl( u, \bigl\{ X \bigl( \theta- \theta^{\prime} \bigr)\dvtx
\theta^{\prime} \in\mathcal {G}_R (\theta) \bigr\}, \| \cdot
\|_n \bigr) \le(k_0+1) \log \biggl( { 2R +u
\over u}
\biggr) + k_0 \log p,\nonumber
\\[-4pt]
\eqntext{u > 0.}
\end{eqnarray}
Indeed, if we fix the locations of the zeros, say, $\theta^{\prime} \in
B_0^{\prime} (k_0):= \{ \vartheta\dvtx \vartheta_j= 0\ \forall j >
k_0 \} $, then $\{ X \theta^{\prime}\dvtx \theta^{\prime} \in
B_0^{\prime } (k_0) \}$ is a $k_0$-dimensional linear space, so
\[
H\bigl( u, \bigl\{ X \theta^{\prime}\dvtx \theta^{\prime} \in
B_0^{\prime} (k_0), \bigl\| X \theta^{\prime}
\bigr\|_n \le R \bigr\}, \| \cdot\|_n \bigr) \le
k_0 \log \biggl( { 2R + u \over u} \biggr),\qquad u > 0.
\]
Furthermore, the vector $X \theta$ is fixed,
so that $\mathcal{G}_R (\theta) $ is a subset of a ball with radius $R$
in the $(k_0 +1)$-dimensional linear space spanned by $\{ X_j
\}_{j=1}^{k_0}, X \theta$.
By Dudley's bound [see \citet{dudley1967sizes} or more recent
references such as \citet{VW96}, \citet{vandeGeer00}],
applied to the (conditional on $X$) Gaussian process $\theta'
\mapsto\varepsilon ^TX(\theta-\theta')$, and using $\int_0^c
\sqrt{\log(c/x) } \,dx = c \int_0^1 \sqrt{\log(1/x) } \,dx = c A $,
where $A$ is the constant $A=\int_0^1 \sqrt{\log(1/x) } \,dx$, we
obtain
\begin{eqnarray*}
E \Bigl[ \sup_{\theta^{\prime} \in\mathcal{G}_R (\theta) }
\bigl|\varepsilon^T X\bigl( \theta- \theta^{\prime} \bigr)\bigr| \vert X \Bigr]
&\le& C^{\prime} \int _0^R \sqrt{ nH\bigl( u, \mathcal{G}_R (\theta), \|
\cdot \|_n \bigr)} \,du
\\
&\le& C \sqrt{ 2 k_0 \log p} \sqrt n R
\end{eqnarray*}
for some universal constants $C \ge1$ and $C^{\prime}$. By the
Borell--Sudakov--Cirelson Gaussian concentration inequality [e.g.,
\citet{BLM13}], we therefore have for all $u>0$,
\[
P \biggl( \sup_{\theta^{\prime} \in\mathcal{G}_R (\theta) } \bigl|\varepsilon^T X\bigl(
\theta- \theta^{\prime} \bigr)\bigr|/n \ge C R \sqrt{2k_0
\log p \over n } + R
\sqrt{2u \over n} \bigg| X \biggr) \le\exp[-u].
\]
Substituting $u=v^2 k_0 \log p $ gives that for all $v>0$,
\[
P \biggl( \sup_{\theta^{\prime} \in\mathcal{G}_R (\theta) } \bigl|\varepsilon^T X\bigl(
\theta- \theta^{\prime} \bigr)\bigr|/n \ge(C +v) R \sqrt {2k_0 \log p \over n }
\bigg| X \biggr) \le\exp\bigl[-v^2 k_0 \log p \bigr],
\]
which implies that for all $v \ge1$,
\[
P \biggl( \sup_{\theta^{\prime} \in\mathcal{G}_R (\theta) } \bigl|\varepsilon^T X\bigl(
\theta- \theta^{\prime} \bigr)\bigr|/n \ge2v CR \sqrt{2k_0
\log p \over n }
\bigg| X \biggr) \le\exp\bigl[-{v^2 k_0 \log p } \bigr].
\]
Now apply the peeling device [see \citet{Alexander85}, the
terminology coming from \citet{vandeGeer00}, Section~5.3]. Let
$R_t:= 8 C t \sqrt{2 k_0 \log p / n } $. We then have
\begin{eqnarray*}
&& P \biggl( \sup_{\theta^{\prime} \in B_0 (k_0), \| X (\theta-
\theta ^{\prime} ) \|_n > R_t} { |\varepsilon^T X( \theta-
\theta^{\prime} )| / n \over\| X ( \theta- \theta^{\prime}) \|_n^2 }
\ge\delta \bigg| X \biggr)
\\
&&\qquad \le\sum_{s=1}^{\infty} P \Bigl(\sup
_{\theta^{\prime} \in\mathcal
{G}_{2^s R_t} (\theta) } \bigl|\varepsilon^T X \bigl(\theta-
\theta^{\prime} \bigr)\bigr|/n \ge \delta2^{2(s-1)} R_t^2
\Big| X \Bigr)
\\
&&\qquad = \sum_{s=1}^{\infty} P \biggl(\sup
_{\theta^{\prime} \in\mathcal
{G}_{2^s R_t} (\theta) } \bigl|\varepsilon^T X \bigl(\theta-
\theta^{\prime} \bigr)\bigr|/n \ge 2^s R_t \times2C
\bigl(2^s t \delta\bigr) \sqrt{ 2 k_0 \log p \over n } \bigg| X
\biggr)
\\
&&\qquad \le\sum_{s=1}^{\infty} \exp\bigl[ -
2^{2s} t^2 \delta^2 k_0 \log p
\bigr] \le C_1 \exp \biggl[- { t^2 \delta^2 k_0 \log p \over C_2 } \biggr]
\end{eqnarray*}
for some universal constants $C_1$ and $C_2$, completing the proof.
\end{pf}
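The final step of the peeling argument sums the shell-wise bounds into the series $\sum_{s\ge1}\exp[-2^{2s}a]$ with $a=t^2\delta^2 k_0\log p$, which is dominated by a single exponential $C_1\exp[-a/C_2]$. The sketch below (illustration only) checks this domination numerically for the particular constants $C_1=2$, $C_2=1/4$, which work for the range of $a$ shown; the lemma only needs existence of some universal $C_1, C_2$.

```python
import math

# Sum of the peeled shell bounds: sum_{s>=1} exp(-4^s * a).
def peeled_sum(a, smax=50):
    return sum(math.exp(-(4 ** s) * a) for s in range(1, smax + 1))

# Check domination by a single exponential C_1 exp(-a / C_2), C_1 = 2, C_2 = 1/4.
for a in (0.1, 0.5, 1.0, 5.0, 20.0):
    assert peeled_sum(a) <= 2.0 * math.exp(-4.0 * a)
print("geometric peeling sum dominated by a single exponential")
```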
\subsubsection{\texorpdfstring{A ratio-bound for $\theta' \mapsto L_n^{(1)}(\theta')
\equiv\langle(E_\theta X^TX - X^TX)\theta, \theta- \theta' \rangle$}
{A ratio-bound for theta' -> L n (1)(theta') equivalent to <(E theta X TX - X TX)theta,
theta - theta'>}}
\begin{lemma} \label{peel}
We have, for every $\delta>0$, $R_t= tD_1 \sqrt{k_0 \log p/n}, t \ge
1$, some positive constants $D_1, D_2, D_3, D_4, D_5$ depending on
$\delta$, that
\[
\sup_{\theta\in B_r(M)} P_\theta \biggl(\sup_{\theta' \in B_0(k_0)\dvtx \|
\theta-\theta'\|>R_t}
\frac{|L^{(1)}_n(\theta')|}{\|\theta-\theta'\|^2} > \delta \biggr) \le B(t,p,n),
\]
where $B(t,p,n)=D_2e^{-D_3t^2\delta^2k_0 \log p}$ under the assumptions
of Theorem~\ref{sp4}\textup{(B)}, \mbox{$r=1$}, and $B(t,p,n)=D_4 e^{-D_5 t
\delta\sqrt{n \log p}/k_1}$ under the assumptions of Theorem~\ref{sp4}\textup{(B)}, $r=2$.
\end{lemma}
\begin{pf}
The process in question is of the form
\begin{equation}
\quad L_n^{(1)}\dvtx \theta' \mapsto\frac{1}{n}
\sum_{i=1}^n \sum
_{j=1}^p (Z_{ij}-EZ_{ij}) \bigl(
\theta_j - \theta_j'\bigr),\qquad
Z_{ij} = \sum_{m=1}^p \theta
_m X_{im}X_{ij}.
\end{equation}
Since the $X_{ij}$ are uniformly bounded by $b$, we conclude that the
summands in $i$ of this process are uniformly bounded by
\begin{equation}
\label{env} 2b^2 \sum_{j=1}^p\bigl|
\theta_j-\theta_j'\bigr| \sum
_{m=1}^p |\theta_m|
\end{equation}
and the weak variances $n\operatorname{Var}_\theta (L_n^{(1)}(\theta')
)$ equal, for $\delta_{mj}$ the Kronecker delta,
\begin{eqnarray}
\label{varpeel}
\qquad && E\sum_{j, l} (Z_{ij}-EZ_{ij})
(Z_{il}-EZ_{il}) \bigl(\theta_j-\theta
_j'\bigr) \bigl(\theta_l-
\theta_l'\bigr)
\nonumber
\\
&&\qquad = E\sum_{j,l,m,m'} (X_{im}X_{ij}-
\delta_{mj}) (X_{im'}X_{il}-\delta
_{m'l}) \theta_m \theta_{m'} \bigl(
\theta_j-\theta_j'\bigr) \bigl(
\theta_l - \theta_l'\bigr)
\\
&&\qquad= \sum_{j,l,m,m'} D_{mjm'l}
\theta_m \theta_{m'} \bigl(\theta_j-\theta
_j'\bigr) \bigl(\theta_l -
\theta_l'\bigr) \le c\|\theta\|_2^2
\bigl\|\theta-\theta'\bigr\|_2^2,\nonumber
\end{eqnarray}
where we have used, by the design assumptions, that $D_{mjm'l}\le1$
whenever the indices $m,j,m',l$ match exactly to two distinct values,
$D_{mjm'l}\le EX^4_{11}$ if $m=l=j=m'$, and $D_{mjm'l}=0$ in all other
cases, as well as the Cauchy--Schwarz inequality. So $L_n^{(1)}$ is a
uniformly bounded empirical process $\{(P_n-P)(f_{\theta'})\}_{\theta'
\in H_0}$ given by
\[
\frac{1}{n}\sum_{i=1}^n
\bigl(f_{\theta'}(Z_i)-Ef_{\theta'}(Z_i)
\bigr),\qquad f_{\theta'}(Z_i) = \sum
_{j=1}^p\sum_{m=1}^p
\theta_m X_{im}X_{ij}\bigl(\theta_j-
\theta_j'\bigr)
\]
with variables $Z_i = (X_{i1}, \ldots, X_{ip}){}^T \in\mathbb R^p$.
Define $\mathcal F_s \equiv\{f=f_{\theta'}\dvtx \theta' \in H_0,
\|\theta '-\theta\|^2 \le2^{s+1} \}$. We know $R_t < \|\theta-\theta'\|
\le \sqrt C$ so the first probability in (\ref{2terms}) can be bounded,
for $c'>0$ a small constant, by
\begin{eqnarray*}
&& P_\theta \biggl(\max_{s \in\mathbb Z\dvtx c'R_t^2 \le2^s \le C} \sup
_{\theta' \in H_0, 2^s < \|\theta-\theta'\|^2 \le2^{s+1}} \frac
{|L^{(1)}_n(\theta')|}{\|\theta-\theta'\|^2} > \delta \biggr)
\\
&&\qquad \le\sum_{s \in\mathbb Z\dvtx c' R_t^2 \le2^s \le C} P_\theta \Bigl(\sup
_{\theta' \in H_0, \|\theta-\theta'\|^2 \le2^{s+1}} \bigl|L^{(1)}_n\bigl(
\theta'\bigr)\bigr|> 2^s\delta \Bigr),
\end{eqnarray*}\vspace*{-5pt}
which equals
\[
\sum_{s \in\mathbb Z\dvtx c' R_t^2 \le2^s \le C} P_\theta \bigl(\|
P_n-P\|_{\mathcal F_s}- E\|P_n-P\|_{\mathcal F_s}>
2^s \delta- E\| P_n-P\|_{\mathcal F_s} \bigr).
\]
Moreover, $\mathcal F_s$ varies in a linear space of measurable
functions of dimension $k_0$, so we have, from $\log{ p \choose k_0 }
\le k_0 \log p $ and from Theorem 2.6.7 and Lemma 2.6.15 in
\citet{VW96}, that
\[
H\bigl(u, \mathcal F_s, L^2(Q)\bigr) \lesssim
k_0 \log(AU/u)+ k_0 \log p,\qquad 0<u<UA
\]
for some universal constant $A$ and envelope bound $U$ of $\mathcal
F_s$. Using (\ref{env}), if $\theta, \theta'$ are bounded in $\ell^1$
by $M$, we can take $U$ a large enough fixed constant depending on $M,
b$ only, and if $k_0$ is constant, we can take $U=\max(k_1\sqrt
{2^s},1)$ since $\|\theta-\theta'\|_1 \le\sqrt{k_1} \|\theta-\theta'\|
_2$. A standard moment bound for empirical processes under a uniform
entropy condition [e.g., Proposition 3 in \citet{GN09a}] then
gives, using~(\ref{varpeel}),
\begin{equation}
\label{mombd} E\|P_n-P\|_{\mathcal F_s} \lesssim\sqrt{
\frac{2^s k_0}{n} \log p} + \frac{U k_0 \log p}{n},
\end{equation}
which is, under the maintained hypotheses, of smaller order than $2^s
\delta$ precisely for those $s$ such that $R_t^2 \simeq(k_0/n) \log p
\lesssim2^s$. The last sum of probabilities can thus be bounded, for
$D_1$ large enough and $c_0$ some positive constant, by
\[
\sum_{s \in\mathbb Z\dvtx c'R_t^2 \le2^s \le C} P_\theta \bigl(n
\|P_n-P\| _{\mathcal F_s}- nE\|P_n-P\|_{\mathcal F_s}>
c_0n2^s \delta \bigr),
\]
to which we can apply Talagrand's inequality [\citet{T96}] [as at
the end of the proof of Proposition 1 in \citet{BN12}], to obtain
the bound
\[
\sum_{s \in\mathbb Z\dvtx c'R_t^2 \le2^s \le C} \exp \biggl\{-\delta^2
\frac{c_0^2n^2(2^s)^2}{n2^{s+1} + n UE \|P_n-P\|_{\mathcal F_s} +
Uc_0n2^s\delta} \biggr\}.
\]
Using (\ref{mombd}), this gives the desired bound $D_2e^{-D_3t^2\delta
^2 k_0 \log p}$ when the envelope $U$ is constant, and the bound
$B(t,p,n)=D_4 e^{-D_5t \delta(n \log p)^{1/2}/k_1}$ when the envelope
is $U= \max(k_1\sqrt{2^s},1)$ (with $k_0$ constant), completing the proof.
\end{pf}
\subsubsection{Tail inequalities for sparse estimators}\label{tail}
Recall that \mbox{$S_{\vartheta}:= \{ j\dvtx \vartheta_j \neq0 \} $}. Let
$k_{\vartheta}:= | S_{\vartheta} | $. For $\lambda>0$, take the
estimator
\begin{equation}
\label{spest} \tilde\theta:= \arg\min_{\vartheta} \bigl\{ \| Y - X
\vartheta\|_2^2 / n + \lambda^2
k_{\vartheta} \bigr\}.
\end{equation}
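The estimator (\ref{spest}) minimizes an $\ell^0$-penalized least squares criterion over all supports; its computation is exponential in $p$, but the theory below does not require efficient computability. The following sketch implements it by exhaustive search on a tiny simulated problem (all data and constants are illustrative; the penalty level $\lambda^2 = 16\log p/n$ is a hypothetical choice, not the $C_3$ of Lemma~\ref{AssumptionDalemma}).

```python
import numpy as np
from itertools import combinations

def l0_penalized_ls(Y, X, lam2):
    """Minimize ||Y - X v||_2^2 / n + lam2 * |support(v)| by exhaustive search.

    Only feasible for tiny p: there are 2^p candidate supports.
    """
    n, p = X.shape
    best_val, best_theta = (Y @ Y) / n, np.zeros(p)    # start from the empty support
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            S = list(S)
            coef, *_ = np.linalg.lstsq(X[:, S], Y, rcond=None)
            resid = Y - X[:, S] @ coef
            val = (resid @ resid) / n + lam2 * k
            if val < best_val:
                best_val = val
                best_theta = np.zeros(p)
                best_theta[S] = coef
    return best_theta

rng = np.random.default_rng(1)
n, p = 200, 8
theta = np.zeros(p); theta[[1, 4]] = [2.0, -1.5]       # 2-sparse truth
X = rng.standard_normal((n, p))
Y = X @ theta + rng.standard_normal(n)
theta_tilde = l0_penalized_ls(Y, X, lam2=16 * np.log(p) / n)
print("selected support:", np.flatnonzero(theta_tilde))
```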
\begin{lemma} \label{AssumptionDalemma} Let $\varepsilon\sim\mathcal
{N} (0, I)$ be independent of $X$. Take $\lambda^2 = C_3 \log p / n $,
where $C_3$ is an appropriate universal constant. Let $t \ge1$ be
arbitrary and $R_t:= \sqrt{t/n}$. Then for some universal constants
$C_4$ and $C_5$,
\[
\sup_{\theta\in B_0 (k_0) } P_{\theta} \bigl( \bigl\| X ( \tilde\theta-
\theta) \bigr\|_n^2 + \lambda^2 k_{\tilde\theta} >
2 \lambda^2 k_0 + R_t^2 \vert X
\bigr) \le C_4 \exp \biggl[ - { n R_t^2 \over C_5 } \biggr].
\]
\end{lemma}
\begin{pf}
The result follows from an oracle inequality for
least squares estimators with general penalties as given in
\citet{vdG2001}. For completeness, we present a full proof. Define
\[
\tau^2 ( \vartheta; \theta):= \bigl\| X ( \vartheta- \theta)
\bigr\|_n^2 + \lambda^2 k_{\vartheta}\quad\mbox{and}\quad\mathcal{G}_R ( \theta):= \bigl\{ \vartheta\dvtx
\tau^2 ( \vartheta; \theta) \le R^2 \bigr\}.
\]
If $\tau^2 ( \tilde\theta; \theta) \le2 \lambda^2 k_{\theta} $, we
are done.
So suppose $\tau^2 ( \tilde\theta; \theta) > 2 \lambda^2 k_{\theta} $.
We then have
$(2/n) \varepsilon^T X( \tilde\theta- \theta) \ge\tau^2 ( \tilde
\theta, \theta) - \lambda^2 k_{\theta} \ge\tau^2 ( \tilde\theta,
\theta) / 2$.
Now again apply the peeling device:
\begin{eqnarray*}
&& P \biggl( \sup_{\tau(\vartheta; \theta) > R_t } { \varepsilon^T X(
\vartheta- \theta)/n \over\tau^2 ( \vartheta, \theta) } \ge
{1 \over4} \bigg| X \biggr)
\\
&&\qquad \le\sum_{s=1}^{\infty} P \biggl( \sup
_{\vartheta\in\mathcal{G}_{2^s
R_t} (\theta) } { \varepsilon^T X( \vartheta- \theta)/n \over\tau^2
( \vartheta, \theta) } \ge {1 \over16 }
2^{2s} R_t^2 \bigg| X \biggr).
\end{eqnarray*}
But if $\vartheta\in\mathcal{G}_R (\theta)$, we know that $\|X(
\vartheta- \theta) \|_n \le R$ and that $k_{\vartheta} \le R^2 /
\lambda^2 $. Hence, as in the proof of Lemma~\ref{AssumptionBlemma},
we know that
\[
P \biggl( \sup_{\vartheta\in\mathcal{G}_R (\theta) } \varepsilon^T X( \vartheta-
\theta)/n \ge2 CR \sqrt{2R^2 \log p \over n\lambda^2 } \bigg| X \biggr) \le\exp \biggl[-
{ C^2 R^2 \log p \over
\lambda^2 } \biggr].
\]
As $\lambda= 32 C \sqrt{2 \log p / n } $, we get
\[
P \biggl( \sup_{\vartheta\in\mathcal{G}_R (\theta) } \varepsilon^T X( \vartheta-
\theta)/n \ge{ R^2 \over16} \bigg| X \biggr) \le\exp \biggl[-
{n R^2 \over2 \times(32)^2
} \biggr].
\]
We therefore have
\[
P \biggl( \sup_{\tau(\vartheta; \theta) > R_t } { \varepsilon^T X(
\vartheta- \theta) /n \over\tau^2 ( \vartheta, \theta) } \ge
{1 \over4} \bigg| X \biggr) \le\sum_{s=1}^{\infty}
\exp \biggl[- {n 2^{2s} R_t^2 \over2 \times(32)^2 } \biggr] \le C_4 \exp \biggl[ -
{ n R_t^2 \over C_5 } \biggr]
\]
for some universal constants $C_4$ and $C_5$.
\end{pf}
\begin{corollary} \label{spadest}
Assume Condition~\ref{subgauss} and let $\varepsilon\sim\mathcal{N} (0,
I)$ be independent of~$X$. Let $\tilde\theta$ be as in (\ref{spest})
with $\lambda^2 = (C_3 \log p) / n $, where $C_3$ is as in Lemma
\ref{AssumptionDalemma}, and let $k_0 = o(n/\log p)$. Then for some
universal constants $C_6, C_7, C_8, c$, every $C\ge C_6$ and every $n$
large enough,
\[
\sup_{\theta\in B_0 (k_0) } P_{\theta} \biggl( \| \tilde\theta- \theta
\|^2 > C \frac{k_0 \log p }{n} \biggr) \le C_7 \exp \biggl[
- {k_0 \log p \over C_8} \biggr].
\]
\end{corollary}
\begin{pf}
By Lemma~\ref{AssumptionDalemma} with $t$ equal to a suitable constant
times $k_0 \log p$ (so that $R_t^2 \lesssim\lambda^2 k_0$), we see first that $k_{\tilde\theta}
\le3k_0$ on the event on which the exponential inequality holds.
Then from Corollary~\ref{AssumptionAcorollary} with $k=3k_0$, on an
event of sufficiently large probability, $\|\tilde\theta- \theta\| _2^2
\le C(\Lambda_{\min}) \|X(\tilde\theta-\theta)\|_{n}^2$ for $n$ large
enough, so that the result follows from applying Lemma
\ref{AssumptionDalemma} again [this time to $\|X(\tilde\theta-
\theta)\| _n^2$] and from combining the bounds.
\end{pf}
\subsubsection{\texorpdfstring{Proof of Theorem \protect\ref{n4} under Condition \protect\ref{subgauss}}
{Proof of Theorem 1 under Condition 2}}
For $p, n$ fixed, the random vectors $(Y_i, X_{i1}, \ldots,
X_{ip})_{i=1}^n$ are i.i.d., and if we split the $n$ points into two
subsamples, each of size of order $n$, then we have two independent
replicates $Y^{(s)}=X^{(s)}\theta+\varepsilon^{(s)}, \hat\Sigma^{(s)}=
(X^{(s)}){}^TX^{(s)}/n, s=1,2$, of the model. In abuse of notation,
denote throughout this proof by $\tilde\theta\equiv\tilde\theta ^{(1)}$
the estimator from (\ref{spest}) based on the subsample $s=1$, with
$\lambda$ chosen as in Lemma~\ref{AssumptionDalemma}, and by $(Y,X,
\varepsilon)\equiv(Y^{(2)}, X^{(2)}, \varepsilon^{(2)})$ the variables
from the second subsample. Define
\begin{eqnarray*}
\hat R_n &=& \frac{1}{n} (Y-X\tilde\theta){}^T(Y-X
\tilde \theta) - 1
\\
& =& (\theta- \tilde\theta){}^T\hat\Sigma^{(2)} (\theta-\tilde
\theta) + \frac{2}{n} \varepsilon^TX(\theta-\tilde\theta) +
\frac{1}{n} \varepsilon^T \varepsilon-1.
\end{eqnarray*}
By independence, and conditional on $(Y^{(1)}, X^{(1)})$, we have
$E^{(2)}_\theta(\varepsilon^TX(\theta-\tilde\theta))^2 = n(\tilde
\theta-\theta){}^T\Sigma(\tilde\theta-\theta)$ and so, using Markov's
inequality,
\begin{equation}
\label{linearterm} \frac{2}{n} \varepsilon^TX(\theta-\tilde
\theta) = O_P \biggl(\sqrt{\frac
{(\tilde\theta-\theta){}^T\Sigma(\tilde\theta-\theta)}{ n}} \biggr).
\end{equation}
By Lemma~\ref{AssumptionDalemma}, we have $\| X^{(1)} ( \tilde\theta -
\theta) \|^2_n= O_P ((k \log p) / n ) $ and $k_{\tilde\theta} = O(k_1)$
and, hence, by Lemma~\ref{AssumptionAlemma}, also $(\tilde\theta-
\theta){}^T \Sigma( \tilde \theta- \theta) =O_P ((k \log p)/n )=o(1)$.
Thus, the bound in (\ref{linearterm}) is $o_P(1/\sqrt n)$ uniformly in
$B_0(k_1)$, and this will be used in the following estimate. Let
$u_\alpha$ be suitable quantile constants to be chosen below. Take as
confidence set
\[
C_n = \biggl\{\theta\in\mathbb R^p\dvtx \|\theta-\tilde
\theta\|^2 \le2 \Lambda_{\min}^{-2} \biggl( \hat
R_n + \frac{u_\alpha}{\sqrt n} \biggr) \biggr\}.
\]
Uniformly in $\theta\in B_0(k_1)$ with $k_1 = o (n/\log p)$, we have
again by Lemma~\ref{AssumptionDalemma} that $\tilde\theta\in
B_0(2k_1)$ on events of probability approaching one, so that, using
Corollary~\ref{AssumptionAcorollary} on these events,
\begin{eqnarray*}
P_\theta(\theta\notin C_n) &=& P_\theta \biggl(\|
\theta-\tilde\theta\| ^2 > 2 \Lambda_{\min}^{-2}
\biggl(\hat R_n + \frac{u_\alpha}{\sqrt
n} \biggr) \biggr)
\\
&\le& P_\theta \biggl((\theta-\tilde\theta){}^T\hat
\Sigma^{(2)}(\theta -\tilde\theta) > \hat R_n +
\frac{u_\alpha}{\sqrt n} \biggr) +o(1)
\\
&=& P_\theta \biggl(-\frac{1}{n}\varepsilon^T
\varepsilon+1 > \frac
{u_\alpha}{\sqrt n} + \frac{2}{n} \varepsilon^T
X(\theta-\tilde\theta) \biggr) + o(1)
\\
&=& P_\theta \Biggl(\frac{-1}{\sqrt n} \sum
_{i=1}^n \bigl(\varepsilon_i^2-1
\bigr) > \bigl(1+o(1)\bigr)u_\alpha \Biggr) +o(1) \le\alpha+ o(1)
\end{eqnarray*}
for a fixed constant $u_\alpha$, chosen via the central limit theorem as
a $(1-\alpha)$-quantile of the limiting $N(0,2)$ distribution of
$n^{-1/2}\sum_{i=1}^n(1-\varepsilon_i^2)$. Moreover, from the previous arguments
and Corollary~\ref{spadest}, we see that, for $\theta\in B_0(k)$, the
diameter $\hat R_n = O_P(\|\tilde\theta-\theta\|^2+n^{-1/2})$ is of
order $O_P(\frac{k \log p}{n} + n^{-1/2})$.
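The sample-splitting construction above can be sketched numerically as follows (illustration only, with identity design covariance so that $\Lambda_{\min}=1$). For brevity the pilot estimator on the first subsample is plain hard thresholding of marginal correlations, a hypothetical stand-in for the $\ell^0$-penalized estimator (\ref{spest}) used in the proof; the threshold $0.3$ and the quantile constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 20
theta = np.zeros(p); theta[0] = 1.0                    # a sparse truth (illustrative)
X = rng.standard_normal((2 * n, p))
Y = X @ theta + rng.standard_normal(2 * n)
X1, Y1, X2, Y2 = X[:n], Y[:n], X[n:], Y[n:]

# Subsample 1: pilot estimate (hard-thresholded marginal correlations).
corr = X1.T @ Y1 / n
theta_tilde = np.where(np.abs(corr) > 0.3, corr, 0.0)

# Subsample 2: residual statistic R_hat = ||Y - X theta_tilde||^2 / n - 1.
resid = Y2 - X2 @ theta_tilde
R_hat = (resid @ resid) / n - 1.0

# Confidence ball C_n = {v : ||v - theta_tilde||^2 <= 2 Lambda^{-2}(R_hat + u_alpha/sqrt(n))}.
u_alpha = 2.33 * np.sqrt(2.0)          # ~0.99-quantile scale of the N(0, 2) limit
radius2 = 2.0 * (R_hat + u_alpha / np.sqrt(n))
covered = float(np.sum((theta - theta_tilde) ** 2)) <= radius2
print(f"R_hat = {R_hat:.3f}, squared radius = {radius2:.3f}, theta covered: {covered}")
```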
\section*{Acknowledgements} We would like to thank the Editor, Associate
Editor, two referees and Sasha Tsybakov for helpful remarks on this article.
Richard Nickl is grateful to the Cafes Florianihof and Griensteidl
in Vienna where parts of this research were carried out.